\begin{document}
\title [Power numerical radius inequalities]
{{ Power numerical radius inequalities from an extension of Buzano's inequality }}
\author[P. Bhunia]{Pintu Bhunia}
\address{ {Department of Mathematics, Indian Institute of Science, Bengaluru 560012, Karnataka, India}}
\email{[email protected]; [email protected]}
\thanks{Dr. Pintu Bhunia would like to thank SERB, Govt. of India for the financial support in the form of National Post Doctoral Fellowship (N-PDF, File No. PDF/2022/000325) under the mentorship of Professor Apoorva Khare}
\thanks{}
\subjclass[2020]{47A12, 47A30, 15A60}
\keywords {Numerical radius, Operator norm, Bounded linear operator, Inequality}
\date{\today}
\maketitle
\begin{abstract}
Several numerical radius inequalities are obtained by developing an extension of Buzano's inequality. It is shown that if $T$ is a bounded linear operator on a complex Hilbert space, then
\begin{eqnarray*}
w^n(T) &\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^k \right\| \left\|T \right\|^{n-k}, \end{eqnarray*}
for every positive integer $n\geq 2.$ This is a non-trivial improvement of the classical inequality $w(T)\leq \|T\|.$ The above inequality gives an estimate for the numerical radius of nilpotent operators: if $T^n=0$ for some positive integer $n\geq 2$, then \begin{eqnarray*}
w(T) &\leq& \left(\sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^k \right\| \left\|T \right\|^{n-k}\right)^{1/n}
\leq \left( 1- \frac{1}{2^{n-1}}\right)^{1/n} \|T\|. \end{eqnarray*}
Also, we deduce a reverse inequality for the numerical radius power inequality $w(T^n)\leq w^n(T)$. We show that
if $\|T\|\leq 1$, then
\begin{eqnarray*}
w^n(T) &\leq& \frac{1}{2^{n-1}} w(T^n)+ 1- \frac{1}{2^{n-1}},
\end{eqnarray*}
for every positive integer $n\geq 2.$ This inequality is sharp. \end{abstract}
\section{\textbf{Introduction}}
\noindent Let $\mathcal{B}(\mathcal{H})$ denote the $C^*$-algebra of all bounded linear operators on a complex Hilbert space $\mathcal{H}$ with usual inner product $\langle.,. \rangle$ and the corresponding norm $\|\cdot\|.$
Let $T\in \mathcal{B}(\mathcal{H})$ and let $|T|=(T^*T)^{1/2}$, where $T^*$ denotes the adjoint of $T.$ The numerical radius and the usual operator norm of $T$ are denoted by $w(T)$ and $\|T\|,$ respectively. The numerical radius of $T$ is defined as $$ w(T)=\sup \left\{|\langle Tx,x\rangle| : x\in \mathcal{H}, \|x\|=1 \right\}.$$ It is well known that the numerical radius defines a norm on $\mathcal{B}(\mathcal{H})$ and is equivalent to the usual operator norm. Moreover, \text{for every $T\in \mathcal{B}(\mathcal{H})$,} the following inequalities hold: \begin{eqnarray}\label{eqv}
\frac12 \|T\| \leq w(T) \leq \|T\|. \end{eqnarray}
These inequalities are sharp: $w(T)=\frac12 \|T\|$ if $T^2=0$, and $w(T)=\|T\|$ if $T$ is normal. Like the usual operator norm, the numerical radius satisfies the power inequality, i.e., \begin{eqnarray}\label{power}
w(T^n) \leq w^n(T) \end{eqnarray}
\text{for every positive integer $n.$}
Another basic property of the numerical radius is weak unitary invariance, i.e., $w(T)=w(U^*TU)$ for every unitary operator $U\in \mathcal{B}(\mathcal{H}).$ For further properties of the numerical radius, we refer to \cite{book,book2}.
\noindent The numerical radius has various applications in many branches of science, in particular to perturbation, convergence, approximation and iterative problems, as well as to the recently developed theory of quantum information. Due to the importance of the numerical radius, many mathematicians have studied numerical radius inequalities that improve the inequalities \eqref{eqv}. Various inner product inequalities play an important role in the study of numerical radius inequalities. The Cauchy-Schwarz inequality is one of the most useful among them; it states that for every $x,y \in \mathcal{H}$, \begin{eqnarray}\label{cauchy}
|\langle x,y\rangle| &\leq& \|x\| \|y\|. \end{eqnarray}
A generalization of the Cauchy-Schwarz inequality is Buzano's inequality \cite{Buzano}, which states that for $x,y,e \in \mathcal{H}$ with $\|e\|=1,$ \begin{eqnarray}\label{buzinq}
|\langle x,e\rangle \langle e,y\rangle| &\leq& \frac{ |\langle x,y\rangle| +\|x\| \|y\|}{2}. \end{eqnarray} Another generalization of the Cauchy-Schwarz inequality is the mixed Schwarz inequality \cite[pp. 75--76]{Halmos}, which states that for every $x,y\in \mathcal{H}$ and $T\in \mathcal{B}(\mathcal{H}),$ \begin{eqnarray}\label{mixed}
|\langle Tx,y\rangle|^2 &\leq& \langle |T|x,x\rangle \langle |T^*|y,y\rangle. \end{eqnarray} Using the above inner product inequalities, various mathematicians have developed numerical radius inequalities which improve the inequalities \eqref{eqv}, see \cite{Bhunia_ASM_2022, Bhunia_LAMA_2022,Bhunia_RIM_2021,Bhunia_BSM_2021,Bhunia_ADM_2021,D08,Kittaneh_2003}. Other techniques have also led to nice numerical radius inequalities, see \cite{Abu_RMJM_2015,Bag_MIA_2020,Bhunia_LAA_2021,Bhunia_LAA_2019,Kittaneh_LAMA_2023, Kittaneh_STD_2005, Yamazaki}. Haagerup and de la Harpe \cite{Haagerup} developed a nice estimate for the numerical radius of nilpotent operators: if $T^n=0$ for some positive integer $n\geq 2$, then \begin{eqnarray}\label{haag}
w(T) &\leq& \cos \left(\frac{\pi}{n+1} \right) \|T\|. \end{eqnarray}
\noindent
In this paper, we obtain a generalization of Buzano's inequality \eqref{buzinq} and, using this generalization, we develop new numerical radius inequalities which improve the existing ones. From the numerical radius inequalities obtained here, we deduce several results: an estimate for nilpotent operators in the spirit of \eqref{haag}, and a reverse inequality for the numerical radius power inequality \eqref{power}.
\section{\textbf{Numerical radius inequalities}} We begin our study by proving the following lemma, which is a generalization of Buzano's inequality \eqref{buzinq}.
\begin{lemma}\label{buz-extension}
Let $x_1,x_2,\ldots,x_n,e \in \mathcal{H}$, where $\|e\|=1$. Then
\begin{eqnarray*}
\left| \mathop{\Pi}\limits_{k=1}^n \langle x_k,e\rangle \right| &\leq& \frac{ \left| \langle x_1,x_2\rangle \mathop{\Pi}\limits_{k=3}^n \langle x_k,e\rangle\right| + \mathop{\Pi}\limits_{k=1}^n \|x_k\|}{2}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
Following the inequality \eqref{buzinq}, we have
$$ |\langle x_1, e\rangle \langle x_2,e\rangle| \leq \frac{|\langle x_1, x_2\rangle |+ \|x_1\|\|x_2\|}{2}.$$
By replacing $x_2$ by $ \mathop{\Pi}\limits_{k=3}^n \langle x_k,e\rangle x_2$
and using $\left| \mathop{\Pi}\limits_{k=3}^n \langle x_k,e\rangle\right| \leq \mathop{\Pi}\limits_{k=3}^n \| x_k \|$ (which follows from the Cauchy-Schwarz inequality \eqref{cauchy} and $\|e\|=1$), we obtain the desired inequality. \end{proof}
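In particular, for $n=3$ (the case used repeatedly below) the lemma states that for all $x_1,x_2,x_3,e\in \mathcal{H}$ with $\|e\|=1$,
\begin{eqnarray*}
\left|\langle x_1,e\rangle \langle x_2,e\rangle \langle x_3,e\rangle\right| &\leq& \frac{\left|\langle x_1,x_2\rangle \langle x_3,e\rangle\right|+ \|x_1\| \|x_2\| \|x_3\|}{2},
\end{eqnarray*}
while for $n=2$ it reduces to Buzano's inequality \eqref{buzinq}.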
Now using Lemma \ref{buz-extension} (for $n=3$), we prove the following numerical radius inequality.
\begin{theorem}\label{th1}
Let $T\in \mathcal{B}(\mathcal{H})$. Then
\begin{eqnarray*}
w(T) &\leq& \sqrt[3]{\frac14 w(T^3) + \frac14\left( \|T^2\|+ \|T^*T+TT^*\|\right) \|T\|}.
\end{eqnarray*} \end{theorem} \begin{proof}
Take $x\in \mathcal{H}$ with $\|x\|=1.$ From Lemma \ref{buz-extension} (for $n=3$), we have
\begin{eqnarray*}
|\langle Tx,x\rangle |^3 &=& |\langle Tx,x\rangle\langle T^*x,x\rangle\langle T^*x,x\rangle|\\
&\leq& \frac{|\langle Tx,T^*x\rangle \langle T^*x,x\rangle|+ \|Tx\| \|T^*x\|^2}{2}\\
&\leq& \frac{|\langle T^2x,x\rangle \langle T^*x,x\rangle|}{2}+ \frac{(\|Tx\|^2+ \|T^*x\|^2)\|T^*x\|}{4} \quad (\text{by the AM-GM inequality}).
\end{eqnarray*} Also, from Buzano's inequality \eqref{buzinq}, we have \begin{eqnarray*}
|\langle T^2x,x\rangle \langle T^*x,x\rangle| &\leq& \frac{|\langle T^2x,T^*x\rangle|+ \|T^2x\| \|T^*x\| }{2}\\
&=& \frac{|\langle T^3x,x\rangle|+ \|T^2x\| \|T^*x\| }{2}. \end{eqnarray*} Therefore,
\begin{eqnarray*}
|\langle Tx,x\rangle |^3
&\leq& \frac{|\langle T^3x,x\rangle|+ \|T^2x\| \|T^*x\|}{4} + \frac{(\|Tx\|^2+ \|T^*x\|^2)\|T^*x\|}{4}\\
&\leq& \frac14 w(T^3)+ \frac14 \|T^2\| \|T\|+\frac14 \|T^*T+TT^* \| \|T \|.
\end{eqnarray*}
Therefore, taking supremum over $\|x\|=1,$ we get the desired inequality. \end{proof}
The inequality in Theorem \ref{th1} is an improvement of the second inequality in \eqref{eqv}, since $w(T^3)\leq \|T^3\|\leq \|T\|^3$, $\|T^2\|\leq \|T\|^2$ and $\|T^*T+TT^*\|\leq 2\|T\|^2.$ To show that the improvement can be non-trivial, consider the matrix $T=\begin{bmatrix}
0&2&0\\
0&0&1\\
0&0&0
\end{bmatrix}.$ Then $w(T^3)=0$ and so
$$ \sqrt[3]{ \frac14 w(T^3) + \frac14\left( \|T^2\|+ \|T^*T+TT^*\|\right) \|T\| } < \|T\|.$$
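For the record, the quantities involved are easily computed for this matrix: $\|T\|=2$, $\|T^2\|=2$, $\|T^*T+TT^*\|=\|\operatorname{diag}(4,5,1)\|=5$ and $T^3=0$, so the bound of Theorem \ref{th1} equals
$$ \sqrt[3]{\frac14\left(2+5\right)\cdot 2}=\sqrt[3]{\frac72}\approx 1.52<2=\|T\|.$$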
Next, using Buzano's inequality \eqref{buzinq}, we obtain the following numerical radius inequality.
\begin{theorem}\label{th2}
Let $T\in \mathcal{B}(\mathcal{H}).$ Then
\begin{eqnarray*}
w(T) &\leq& \sqrt[3]{\frac12 w(TT^*T)+\frac14 \|T^*T+TT^*\| \|T\|}.
\end{eqnarray*} \end{theorem} \begin{proof}
Take $x\in \mathcal{H}$ with $\|x\|=1.$ By the Cauchy-Schwarz inequality \eqref{cauchy}, we have
\begin{eqnarray*}
|\langle Tx,x\rangle |^3 &\leq& \|Tx\|^2|\langle T^*x,x\rangle|
= |\langle |T|^2x,x\rangle \langle T^*x,x\rangle|.
\end{eqnarray*} From Buzano's inequality \eqref{buzinq}, we have \begin{eqnarray*}
|\langle Tx,x\rangle |^3
&\leq& \frac{|\langle |T|^2x,T^*x\rangle|+ \||T|^2x\| \|T^*x\|}{2}\\
&\leq& \frac{|\langle T|T|^2x,x\rangle |+ \||T|^2x\| \|T^*x\|}{2}\\
&\leq& \frac{|\langle T|T|^2x,x\rangle |+ \|T\| \|Tx\| \|T^*x\|}{2}\\
&\leq& \frac{|\langle T|T|^2x,x\rangle |}{2}+\frac{ (\|Tx\|^2+ \|T^*x\|^2)\|T\|}{4}\\
&\leq& \frac12 w(T|T|^2)+ \frac14 \|T^*T+TT^*\| \|T\|. \end{eqnarray*}
Therefore, taking supremum over $\|x\|=1,$ we get the desired inequality. \end{proof}
Clearly, for every $T\in \mathcal{B}(\mathcal{H})$, $$\sqrt[3]{\frac12 w(TT^*T)+\frac14 \|T^*T+TT^*\| \|T\|}\leq \sqrt[3]{\frac12 w(TT^*T)+\frac12 \|T\|^3} \leq \|T\|.$$
Also, using a technique similar to that in Theorem \ref{th2}, we can prove the following numerical radius inequality.
\begin{eqnarray}\label{eqn5}
w(T)&\leq& \sqrt[3]{\frac12 w(T^*T^2)+ \frac12 \|T\|^3}. \end{eqnarray} Also, replacing $T$ by $T^*$ in \eqref{eqn5}, we get \begin{eqnarray}\label{eqn6}
w(T)&\leq& \sqrt[3]{\frac12 w(T^2T^*)+ \frac12 \|T\|^3}. \end{eqnarray}
Consider a matrix $T=\begin{bmatrix}
0&2&0\\
0&0&1\\
0&0&0 \end{bmatrix}$. Then we see that $$ w(T^2 T^*)=1< w(T^*T^2)=2< w(TT^*T)=\sqrt{65}/2 .$$
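For completeness, a direct computation with this $T$ gives
$$TT^*T=\begin{bmatrix}
0&8&0\\
0&0&1\\
0&0&0 \end{bmatrix}, \quad T^*T^2=\begin{bmatrix}
0&0&0\\
0&0&4\\
0&0&0 \end{bmatrix}, \quad T^2T^*=\begin{bmatrix}
0&2&0\\
0&0&0\\
0&0&0 \end{bmatrix},$$
whose numerical radii are $\sqrt{65}/2$, $2$ and $1$, respectively.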
Therefore, combining Theorem \ref{th2} and the inequalities \eqref{eqn5} and \eqref{eqn6}, we obtain the following corollary.
\begin{cor}\label{cor2}
Let $T\in \mathcal{B}(\mathcal{H})$. Then \begin{eqnarray*}
w(T) &\leq& \sqrt[3]{\frac12 \min \Big(w(TT^*T), w(T^2T^*), w(T^*T^2) \Big) + \frac12 \|T\|^3}. \end{eqnarray*} \end{cor}
Since
$w(TT^*T)\leq \|T\|^3, \ w(T^2T^*)\leq \|T^2\| \|T\|\leq \|T\|^3,\ w(T^*T^2)\leq \|T^2\| \|T\|\leq \|T\|^3,$
the inequality in Corollary \ref{cor2} is an improvement of the second inequality in \eqref{eqv}.
Now, Theorem \ref{th2} together with the inequalities \eqref{eqn5} and \eqref{eqn6} implies the following result.
\begin{cor}\label{coor2}
Let $T\in \mathcal{B}(\mathcal{H})$. If $w(T)=\|T\|,$ then
\begin{eqnarray*}
w(TT^*T)= w(T^2T^*)= w(T^*T^2)= \|T\|^3.
\end{eqnarray*} \end{cor}
Next, by using Lemma \ref{buz-extension} (for $n=3$), Buzano's inequality \eqref{buzinq} and the mixed Schwarz inequality \eqref{mixed}, we obtain the following result.
\begin{theorem}\label{th3}
Let $T\in \mathcal{B}(\mathcal{H}).$ Then
\begin{eqnarray*}
w(T) &\leq& \sqrt[3]{\frac14 w(|T|T|T^*|)+ \frac14 \Big ( \|T^2\|+ \|T^*T+TT^*\| \Big)\|T\|}.
\end{eqnarray*} \end{theorem}
\begin{proof}
Let $x\in \mathcal{H}$ with $\|x\|=1.$ From the mixed Schwarz inequality \eqref{mixed}, we have
\begin{eqnarray*}
|\langle Tx,x\rangle|^3 &\leq & \langle |T^*|x,x\rangle |\langle T^*x,x\rangle| \langle |T|x,x\rangle.
\end{eqnarray*} Using Lemma \ref{buz-extension} (for $n=3$), we have \begin{eqnarray*}
|\langle Tx,x\rangle|^3 &\leq & \frac{|\langle |T^*|x,T^*x\rangle \langle |T|x,x\rangle | + \||T^*|x\| \|T^*x\| \||T|x\| } {2}\\
&= & \frac{|\langle T |T^*|x,x\rangle | \langle |T|x,x\rangle + \||T^*|x\| \|T^*x\| \||T|x\| } {2}. \end{eqnarray*}
By Buzano's inequality \eqref{buzinq}, we have \begin{eqnarray*}
|\langle T |T^*|x,x\rangle | \langle |T|x,x\rangle
&\leq& \frac{ | \langle T |T^*|x,|T|x\rangle |+ \|T |T^*|x\| \||T|x\| }{2}\\
&=& \frac{| \langle |T| T |T^*|x, x\rangle |+ \|T |T^*|x\| \||T|x\| }{2}.\\
\end{eqnarray*} Also, by the AM-GM inequality, we have \begin{eqnarray*}
\||T^*|x\| \|T^*x\| \||T|x\| &\leq& \frac12(\||T^*|x\|^2 + \||T|x\|^2) \|T^*x\|\\
&=& \frac12 \langle (|T|^2+|T^*|^2)x,x\rangle \|T^*x\|. \end{eqnarray*} Therefore, \begin{eqnarray*}
|\langle Tx,x\rangle|^3 &\leq & \frac{ |\langle |T| T |T^*|x, x\rangle |+ \|T |T^*|x\| \||T|x\| }{4} + \frac14 \langle (|T|^2+|T^*|^2)x,x\rangle \|T^*x\|\\
&\leq& \frac14 w(|T|T|T^*|)+ \frac14 \Big ( \|T |T^*|\|+ \|T^*T+TT^*\| \Big)\|T\|. \end{eqnarray*}
From the polar decomposition $T^*=U|T^*|$, it is easy to verify that $\|T |T^*|\|=\|T^2\|.$ So, \begin{eqnarray*}
|\langle Tx,x\rangle|^3
&\leq& \frac14 w(|T|T|T^*|)+ \frac14 \Big ( \|T^2 \|+ \|T^*T+TT^*\| \Big)\|T\|. \end{eqnarray*}
Therefore, taking supremum over $\|x\|=1,$ we get the desired result. \end{proof}
Now, combining Theorem \ref{th3} and Theorem \ref{th1}, we obtain the following corollary.
\begin{cor}\label{cor3}
Let $T\in \mathcal{B}(\mathcal{H})$. Then
\begin{eqnarray*}
w(T) \leq \sqrt[3]{\frac14 \min \Big( w(T^3), w(|T|T|T^*|) \Big)+ \frac14 \Big ( \|T^2\|+ \|T^*T+TT^*\| \Big)\|T\|}.
\end{eqnarray*} \end{cor}
Using a technique similar to that in Theorem \ref{th3}, we can also prove the following inequality. \begin{eqnarray}\label{eqn7}
w(T) &\leq& \sqrt[3]{\frac14 w(|T^*|T|T|)+ \frac38 \|T^*T+TT^*\| \|T\|}. \end{eqnarray}
Clearly, the inequalities in Corollary \ref{cor3} and \eqref{eqn7} are stronger than the second inequality in \eqref{eqv}.
Moreover, when $w(T)=\|T\|,$ then
$$ w(|T|T|T^*|)= w(|T^*|T|T|) =w(T^3)=\|T\|^3.$$
The next theorem reads as follows:
\begin{theorem}\label{th4}
Let $T\in \mathcal{B}(\mathcal{H}).$ Then
\begin{eqnarray*}
w(T) &\leq& \sqrt[4]{ \Big( \frac12 w(T|T|) +\frac14 \|T^*T+TT^*\| \Big) \Big ( \frac12 w(T^*|T^*|)+ \frac14\|T^*T+TT^*\| \Big)}.
\end{eqnarray*} \end{theorem} \begin{proof}
Let $x\in \mathcal{H}$ with $\|x\|=1.$ From the mixed Schwarz inequality \eqref{mixed}, we have
$$ |\langle Tx,x\rangle|^2 \leq \langle |T|x,x\rangle \langle |T^*|x,x\rangle.$$
Therefore, $$ |\langle Tx,x\rangle|^4 \leq \langle |T|x,x\rangle \langle |T^*|x,x\rangle |\langle Tx,x\rangle \langle T^*x,x\rangle|.$$
From Buzano's inequality \eqref{buzinq}, we have
\begin{eqnarray*}
|\langle |T|x,x \rangle \langle T^*x,x\rangle |
&\leq& \frac{|\langle |T|x, T^*x \rangle|+ \||T|x\| \|T^*x\|}{2}\\
&\leq& \frac12 |\langle T|T|x, x \rangle|+ \frac14 (\||T|x\|^2+ \|T^*x\|^2)\\
&\leq& \frac12 w(T|T|)+ \frac14 \|T^*T+TT^*\|.
\end{eqnarray*} Similarly, \begin{eqnarray*}
|\langle |T^*|x,x \rangle \langle Tx,x\rangle |
&\leq& \frac{|\langle |T^*|x, Tx \rangle|+ \||T^*|x\| \|Tx\|}{2}\\
&\leq& \frac12 |\langle T^*|T^*|x, x \rangle|+ \frac14 (\||T^*|x\|^2+ \|Tx\|^2)\\
&\leq& \frac12 w(T^*|T^*|)+ \frac14 \|T^*T+TT^*\|. \end{eqnarray*}
Therefore, $$|\langle Tx,x\rangle|^4 \leq \left(\frac12 w(T|T|)+ \frac14 \|T^*T+TT^*\|\right) \left( \frac12 w(T^*|T^*|)+ \frac14 \|T^*T+TT^*\|\right).$$
Taking supremum over $\|x\|=1,$ we get
$$w^4(T) \leq \left(\frac12 w(T|T|)+ \frac14 \|T^*T+TT^*\|\right) \left( \frac12 w(T^*|T^*|)+ \frac14 \|T^*T+TT^*\|\right),$$ as desired. \end{proof}
Again, using a technique similar to that in Theorem \ref{th4}, we can prove the following result.
\begin{theorem}\label{th5}
Let $T\in \mathcal{B}(\mathcal{H}).$ Then
\begin{eqnarray*}
w(T) &\leq& \sqrt[4]{ \Big( \frac12 w(T|T^*|) +\frac12 \|T\|^2 \Big) \Big ( \frac12 w(T^*|T|)+ \frac12\|T\|^2 \Big)}.
\end{eqnarray*} \end{theorem}
Clearly, the inequalities in Theorem \ref{th4} and Theorem \ref{th5} are improvements of the second inequality in \eqref{eqv}. The inequalities imply that when $w(T)=\|T\|$, then
$$w(T|T|)=w(T^*|T^*|) = w(T|T^*|)= w(T^*|T|)=\|T\|^2.$$
Now we consider the following example to compare the inequalities in Theorem \ref{th4} and Theorem \ref{th5}.
\begin{example}
Consider a matrix $T=\begin{bmatrix}
0&1&0\\
0&0&1\\
0&0&0
\end{bmatrix},$ then Theorem \ref{th4} gives $w(T)\leq \sqrt{ \frac{1}{2\sqrt{2}}+ \frac12 }$, whereas Theorem \ref{th5} gives $w(T)\leq \sqrt{ \frac{1}{4}+ \frac12}.$
Again, consider $T=\begin{bmatrix}
0&1&0\\
0&0&2\\
0&0&0 \end{bmatrix},$ then Theorem \ref{th4} gives $w(T)\leq \sqrt{ \frac{\sqrt{17}+5}{4} }$, whereas Theorem \ref{th5} gives $w(T)\leq \sqrt{ \frac{5+5}{4} }.$ Therefore, we would like to note that the inequalities obtained in Theorem \ref{th4} and Theorem \ref{th5} are not comparable, in general. \end{example}
\section{\textbf{Numerical radius inequalities involving general powers}}
We develop a numerical radius inequality involving the general powers $w^n(T)$ and $w(T^n)$ for every positive integer $n\geq 2$, from which we derive results related to nilpotent operators and a reverse power inequality for the numerical radius. First we prove the following theorem.
\begin{theorem}\label{th7}
If $T\in \mathcal{B}(\mathcal{H}),$ then
\begin{eqnarray*}
|\langle Tx, x\rangle|^n &\leq& \frac{1}{2^{n-1}} \left|\langle T^nx, x\rangle \right|+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^kx \right\| \left\|T^*x \right\|^{n-k},
\end{eqnarray*}
for all $x\in \mathcal{H}$ with $\|x\|=1$ and for every positive integer $n\geq 2.$ \end{theorem} \begin{proof}
We have
\begin{eqnarray*}
&& |\langle Tx,x \rangle|^n \\
&=&|\langle Tx,x \rangle \langle T^*x,x \rangle \langle T^*x,x \rangle^{n-2} |\\
&\leq& \frac{\left|\langle Tx,T^*x \rangle \langle T^*x,x \rangle^{n-2} \right|+ \|Tx\| \|T^*x\|^{n-1} }{2} \,\, (\text{by Lemma \ref{buz-extension}})\\
&=& \frac{\left|\langle T^2x,x \rangle \langle T^*x,x \rangle \langle T^*x,x \rangle^{n-3} \right|+ \|Tx\| \|T^*x\|^{n-1} }{2}\\
&\leq & \frac{ \frac{\left|\langle T^2x,T^*x \rangle \langle T^*x,x \rangle^{n-3} \right|+ \|T^2x\| \|T^*x\|^{n-2} }{2}+ \|Tx\| \|T^*x\|^{n-1} }{2} \,\, (\text{by Lemma \ref{buz-extension}})\\
&= & \frac{\left|\langle T^3x,x \rangle \langle T^*x,x \rangle \langle T^*x,x \rangle^{n-4} \right|+ \|T^2x\| \|T^*x\|^{n-2} }{2^2}+\frac{ \|Tx\| \|T^*x\|^{n-1} }{2} \\
&\leq & \frac{\frac{\left|\langle T^3x,T^*x \rangle \langle T^*x,x \rangle^{n-4}\right|+ \|T^3x\| \|T^*x\|^{n-3} }{2} + \|T^2x\| \|T^*x\|^{n-2} }{2^2}
+\frac{ \|Tx\| \|T^*x\|^{n-1} }{2} \\
&& \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, (\text{by Lemma \ref{buz-extension}})\\
&= & \frac{\left|\langle T^4x,x \rangle \langle T^*x,x \rangle \langle T^*x,x \rangle^{n-5}\right|+ \|T^3x\| \|T^*x\|^{n-3} }{2^3} + \frac{ \|T^2x\| \|T^*x\|^{n-2} }{2^2} \\
&& \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, +\frac{ \|Tx\| \|T^*x\|^{n-1} }{2}.
\end{eqnarray*}
Repeating this approach $(n-1)$ times (i.e., using Lemma \ref{buz-extension}), we obtain
\begin{eqnarray*}
|\langle Tx, x\rangle|^n &\leq& \frac{1}{2^{n-1}} \left|\langle T^nx, x\rangle \right|+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^kx \right\| \left\|T^*x \right\|^{n-k},
\end{eqnarray*}
as desired. \end{proof}
The following generalized numerical radius inequality is a simple consequence of Theorem \ref{th7}.
\begin{cor}\label{cor5}
If $T\in \mathcal{B}(\mathcal{H}),$ then
\begin{eqnarray}\label{pp}
w^n(T) &\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^k \right\| \left\|T \right\|^{n-k},
\end{eqnarray} for every positive integer $n\geq 2.$ \end{cor}
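For instance, for $n=3$ the inequality \eqref{pp} reads
$$ w^3(T) \leq \frac14 w(T^3)+ \frac12 \|T\|^{3}+ \frac14 \|T^2\| \|T\|,$$
and since $\|T^*T+TT^*\|\leq 2\|T\|^2$, the bound in Theorem \ref{th1} is at least as strong as this case.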
\begin{remark} (i) For every $T\in \mathcal{B}(\mathcal{H})$ and for every positive integer $n\geq 2$,
\begin{eqnarray*}
w^n(T) &\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^k \right\| \left\|T \right\|^{n-k} \\
&\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T \right\|^{n} \\
&\leq& \frac{1}{2^{n-1}} \|T^n\|+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T \right\|^{n} \,\, \text{(by \eqref{eqv})}\\
&\leq& \frac{1}{2^{n-1}} \|T\|^n+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T \right\|^{n}
= \|T\|^n.
\end{eqnarray*}
Therefore, the inequality \eqref{pp} is an improvement of the second inequality in \eqref{eqv}.
\noindent (ii) From the above inequalities, it follows that if $w(T)=\|T\|$, then
$$ w^n(T)=w(T^n)=\|T^n\|=\|T\|^n,$$
for every positive integer $n\geq 2.$ And so it is easy to see that when $w(T)=\|T\|$, then
$$w(T)= \lim\limits_{n\to \infty} \|T^n\|^{1/n}= r(T),$$ where $r(T)$ denotes the spectral radius of $T.$ The second equality holds for every operator $T\in \mathcal{B}(\mathcal{H})$ and is known as the Gelfand formula for the spectral radius.
\noindent (iii) Taking $n=2$ in Corollary \ref{cor5}, we get
$$ w^2(T) \leq \frac12 w(T^2)+ \frac12\|T\|^2,$$ which was proved by Dragomir \cite{D08}.
\noindent (iv) Taking $n=2$ in Theorem \ref{th7}, we deduce that
$$ w^2(T) \leq \frac12 w(T^2)+ \frac14 \|T^*T+TT^*\|,$$ which was proved by Abu-Omar and Kittaneh \cite{Abu_RMJM_2015}. \end{remark}
From Corollary \ref{cor5}, we obtain an estimate for nilpotent operators.
\begin{cor}\label{nilpotent}
Let $T\in \mathcal{B}(\mathcal{H}).$
If $T^n=0$ for some positive integer $n\geq 2$, then \begin{eqnarray*}
w(T) &\leq& \left(\sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^k \right\| \left\|T \right\|^{n-k}\right)^{1/n}
\leq \left( 1- \frac{1}{2^{n-1}}\right)^{1/n} \|T\|. \end{eqnarray*} \end{cor}
Consider a matrix $T=\begin{bmatrix}
0&1&2\\
0&0&3\\
0&0&0 \end{bmatrix}.$ Then we see that Corollary \ref{nilpotent} gives $w(T)\leq \alpha \approx 3.0021$ and Theorem \ref{th1} gives $w(T) \leq \beta \approx 2.5546,$ whereas the inequality \eqref{haag} gives $w(T)\leq \gamma \approx 2.5811.$ Therefore, we conclude that for nilpotent operators the inequality \eqref{haag} (due to Haagerup and de la Harpe) is not always better than the inequalities discussed here, and vice versa.
Finally, by using Corollary \ref{cor5} we obtain a reverse power inequality for the numerical radius.
\begin{cor}
Let $T\in \mathcal{B}(\mathcal{H})$. If $\|T\|\leq 1$, then
\begin{eqnarray*}
w^n(T) &\leq& \frac{1}{2^{n-1}} w(T^n)+ 1- \frac{1}{2^{n-1}},
\end{eqnarray*} for every positive integer $n\geq 2.$ This inequality is sharp. \end{cor} \begin{proof}
From Corollary \ref{cor5}, we have
\begin{eqnarray*}
w^n(T) &\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^k \right\| \left\|T \right\|^{n-k}\\
&\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T \right\|^{n}\\
&\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \\
&=& \frac{1}{2^{n-1}} w(T^n)+ 1- \frac{1}{2^{n-1}}.
\end{eqnarray*}
If $T^*T=TT^*$ and $\|T\|=1,$ then $w(T)=\|T\|=1$ and $w(T^n)=\|T^n\|=\|T\|^n=1$, so that \begin{eqnarray*}
w^n(T) &=& \frac{1}{2^{n-1}} w(T^n)+ 1- \frac{1}{2^{n-1}}=1. \end{eqnarray*} Hence the inequality is sharp. \end{proof}
\noindent {\bf{Data availability statements.}}\\ Data sharing not applicable to this article as no datasets were generated or analysed during the current study.\\
\noindent {\bf{Declarations.}}\\ \noindent {\bf{Conflict of Interest.}} The author declares that there is no conflict of interest.
\end{document}
\begin{document}
\title{Attractors of Sequences of Function Systems \\ and their relation to Non-Stationary Subdivision} \author[David Levin]{David Levin} \author[Nira Dyn]{Nira Dyn} \address{D. Levin, N. Dyn, School of Mathematical Sciences, Tel Aviv University, Israel} \author[P. V. Viswanathan]{Puthan Veedu Viswanathan} \address{P. V. Viswanathan, Department of Mathematics, Indian Institute of Technology, Delhi, India}
\begin{abstract} Iterated Function Systems (IFSs) have been at the heart of fractal geometry almost from its origin, and several generalizations for the notion of IFS have been suggested. Subdivision schemes are widely used in computer graphics and attempts have been made to link fractals generated by IFSs to limits generated by subdivision schemes. With an eye towards establishing connection between non-stationary subdivision schemes and fractals, this paper introduces the notion of ``trajectories of maps defined by function systems" which may be considered as a new generalization of the traditional IFS. The significance and the convergence properties of `forward' and `backward' trajectories are studied. Unlike the ordinary fractals which are self-similar at different scales, the attractors of these trajectories may have different structures at different scales.
\end{abstract} \maketitle
\section{\bf Introduction}\label{sect1}
The concept of Iterated Function system (IFS) was introduced by Hutchinson \cite{H} and popularized by Barnsley \cite{B1}. IFSs form a standard framework for describing self-referential sets such as fractals and provide a potential new method of researching the shape and texture of images. Due to its importance in understanding images, several extensions to the classical IFS such as Recurrent IFS, partitioned IFS and Super IFS are discussed in the literature \cite{B2,BEH,F}. Fractal functions whose graphs are attractors of suitably chosen IFS provide a new method of interpolation and approximation \cite{B1,PRM,MAN,PV1}.
\par
Subdivision schemes are efficient algorithmic methods for generating curves and surfaces from discrete sets of control points. A subdivision scheme generates values associated with the vertices of a sequence of nested meshes, by repeated application of a set of local refinement rules. These subdivision rules, usually linear, iteratively transform the vertices of a given mesh to vertices of a refined mesh. In recent years, the subject of subdivision has gained more popularity because of many new applications such as computer graphics. The reader may turn to \cite{CDM,DL,MP,PBP} for an introduction and survey of the mathematics of subdivision schemes and their applications.
\par
Being two different topics that had been developing independently and in parallel, the connections between subdivision and theory of IFS were sought after. Later it has been observed that there is a close connection between curves and surfaces generated by subdivision algorithms and self-similar fractals generated by IFSs \cite{SLG}. However, this relationship is established for stationary subdivision schemes. The relation between non-stationary subdivision and IFS remains obscure and unexplored.
\par In this paper we aim to establish the interconnection between the theory of IFS and non-stationary subdivision schemes. To this end, we introduce and study what we call ``trajectories of a sequence of transformations''. Trajectories generated by a sequence of function system maps may provide new attractor sets, generalizing fractal sets, and help to link the theory of IFS with non-stationary subdivision schemes.
\eject \section{\bf Preliminaries} For the nonspecialist, we recall here the concepts, notation and basic results concerning the traditional
IFS and provide a brief outline of subdivision. For a detailed exposition the reader may consult \cite{B1,H} and \cite{CDM,DL} respectively.
\subsection{Basics of iterated function systems}
Let $(X,d)$ be a complete metric space. For a function $f: X \to X$, we define the Lipschitz constant associated with $f$ by $$\text{Lip}(f) = \sup_{x,y \in X, x \neq y} \frac{d\big(f(x),f(y)\big)}{d(x,y)}.$$ A function $f$ is said to be a Lipschitz function if $\text{Lip}(f) < + \infty$ and a contraction if $\text{Lip}(f) < 1$. Let $\mathbb{H}(X)$ be the collection of all nonvoid compact subsets of $X$. Then $\mathbb{H}(X)$ is a metric space when endowed with the Hausdorff metric $$ h (B,C) = \max \big\{d(B,C), d(C,B)\big\},$$ where $d(B,C)= \sup_{b \in B} d(b,C)= \sup_{ b \in B} \inf_{c \in C} d(b,c)$. It is well known that the metric space $\big(\mathbb{H}(X),h\big)$ is complete \cite{B2}. \begin{definition}\label{defIFS}
An iterated function system, IFS for short, consists of a metric space $(X,d)$ and a finite family of continuous maps $f_i: X \to X$, $i \in \{1,2,\dots,n\}$. We denote such an IFS by $\mathcal{F}=\{X; f_i: i=1,2,\dots, n\}$. \end{definition} With the IFS $\mathcal{F}$ as above, one can associate a set-valued map referred to as Barnsley-Hutchinson operator. With a slight abuse of notation, we use the same symbol $\mathcal{F}$ for the IFS, the set of functions in the IFS, and for the Barnsley-Hutchinson operator defined below. Consider the function $\mathcal{F}: \mathbb{H}(X)\to \mathbb{H}(X)$ \begin{equation}\label{FX}
\mathcal{F}(B) := \cup_{f \in \mathcal{F}} f(B),\ \ B\in \mathbb{H}(X), \end{equation} where $f(B):= \big\{f(b): b \in B \big\}$.
The contraction constant of $\mathcal{F}$ is \cite{B2}: \begin{equation}\label{CLF} L_{\mathcal{F}}=\max_{i=1,2,\dots, n} \text{Lip}(f_i). \end{equation} If the $f_i$ are contraction maps, the IFS is contractive. Therefore, by the Banach contraction principle we have \begin{theorem}\label{theoremIFS} Let $(X,d)$ be a complete metric space and $\mathcal{F}=\{X; f_i: i=1,2,\dots, n\}$ be an IFS with contraction constant $L_{\mathcal{F}}<1$. Then there exists a unique set $A_{\mathcal{F}}$, such that $\mathcal{F}(A_{\mathcal{F}}) = A_{\mathcal{F}}$. Furthermore, for every $B_0 \in \mathbb{H}(X)$ the sequence $B_{k+1} = \mathcal{F} (B_k)$ converges to $A_{\mathcal{F}}$ in $\big(\mathbb{H}(X),h\big)$. Also \cite{B2}, $$h(B_0,A_{\mathcal{F}}) \leq \frac{1}{1- L_{\mathcal{F}}}~ h(B_0, B_1).$$ \end{theorem} \begin{remark}\label{remarkIFS}
\begin{enumerate} \item The set $A _{\mathcal{F}}$ appearing in the previous theorem is called the attractor of the IFS. The construction of $A _{\mathcal{F}}$ through iterations of the map $\mathcal{F}$ suggests the name iterated function system for $\mathcal{F}=\{X; f_i: i=1,2,\dots, n\}$. \item The result of Theorem \ref{theoremIFS} holds even if $\mathcal{F}$ is not a contraction map, but an $\ell$-term composition of $\mathcal{F}$, namely, $\mathcal{F}\circ \mathcal{F}\circ\cdots\circ \mathcal{F}$, is a contraction map. The $\ell$-term composition is a contraction if all the compositions of the form \begin{equation}\label{lterm} f_{i_1}\circ f_{i_2}\circ \cdots \circ f_{i_\ell},\ \ \ i_j\in\{1,2,\dots,n\}, \end{equation} are contractions. \end{enumerate} \end{remark}
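For example (a classical illustration), the IFS $\mathcal{F}=\{\mathbb{R}; f_1,f_2\}$ with $$f_1(x)=\frac{x}{3}, \qquad f_2(x)=\frac{x}{3}+\frac{2}{3},$$ has contraction constant $L_{\mathcal{F}}=\frac13$, and its attractor $A_{\mathcal{F}}$ is the middle-third Cantor set; starting from $B_0=[0,1]$, the iterates $B_{k+1}=\mathcal{F}(B_k)$ are the familiar finite-stage Cantor sets.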
\subsection{Basics of subdivision schemes}\label{BOS}
A subdivision scheme is defined by a collection of real maps called refinement rules relative to a set of meshes of isolated points $$N_0 \subseteq N_1 \subseteq \dots \subseteq \mathbb{R}^s.$$ Each refinement rule maps real vector values defined on $N_k$ to real vector values defined on a refined net $N_{k+1}$. Here we consider only scalar binary subdivision schemes, with $N_k=2^{-k}\mathbb{Z}^s$. Given a set of control points $p^{0}=\{p_j^0\in \mathbb{R}^m,\ \ j \in \mathbb{Z}^s\}$ at level $0$, a stationary binary subdivision scheme recursively defines new sets of points $p^k= \{p_j^k: j \in \mathbb{Z}^s\}$ at level $k \ge 1$, by the refinement rule \begin{equation}\label{sa} p_i^{k+1} = \sum_{j \in \mathbb{Z}^s} a_{i-2j} p_j^k,\ \ k\ge 0, \end{equation} or in short form, $$p^{k+1}=S_a p^k,\ \ k\ge 0.$$ The set of real coefficients $a= \{a_j: j \in \mathbb{Z}^s\}$ that determines the refinement rule is called the mask of the scheme. We assume that the support of the mask, $\sigma(a)= \{j \in \mathbb{Z}^s:a_j \neq 0\}$, is finite. $S_a$ is a bi-infinite two-slanted matrix with the entries $(S_a)_{i,j}=a_{i-2j}$.\\ A non-stationary binary subdivision scheme is defined formally as $$p^{k+1}=S_{a^{[k]}} p^k,\ \ k\ge 0,$$ where the refinement rule at refinement level $k$ is of the form \begin{equation}\label{sak} p_i^{k+1} = \sum_{j \in \mathbb{Z}^s} a_{i-2j}^{[k]} p_j^k,\ \ i\in \mathbb{Z}^s. \end{equation} In a non-stationary scheme, the mask $a^{[k]}:= \{a_j^{[k]}: j \in \mathbb{Z}^s\}$ depends on the refinement level.
In univariate schemes ($s=1$), there are two different rules in (\ref{sak}), depending on the parity of $i$.
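For example (a classical instance, one version of Chaikin's corner-cutting scheme with $s=1$), the stationary mask $a_{-1}=a_{2}=\frac14$, $a_{0}=a_{1}=\frac34$ gives, via (\ref{sa}), the two rules $$ p^{k+1}_{2i}=\frac34 p^k_i+\frac14 p^k_{i-1}, \qquad p^{k+1}_{2i+1}=\frac34 p^k_i+\frac14 p^k_{i+1},$$ whose limit is the quadratic B-spline curve determined by $p^0$.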
In this paper we refer to two definitions of convergent subdivision. The first is the classical one in subdivision theory \cite{DL}: \begin{definition}\label{C0conv}{\bf $C^0$-convergent subdivision}\\ A subdivision scheme is termed $C^0$-convergent if for any initial data $p^0$ there exists a continuous function $f:\mathbb{R}^s \to \mathbb{R}^m$, such that \begin{equation}\label{pktof}
\lim_{k\to\infty}\sup_{i\in \mathbb{Z}^s}|p_i^{k}-f(2^{-k}i)|=0, \end{equation} with $f\ne 0$ for at least one choice of initial data. \end{definition}
\begin{remark}\label{remarkC0}
\begin{enumerate} \item The limit curve of a $C^0$-convergent subdivision is denoted by $p^\infty=S_a^\infty p^0$, and the function $f$ in Definition \ref{C0conv} specifies a parametrization of the limit curve.
\end{enumerate} \end{remark} The analysis of subdivision schemes aims at studying the smoothness properties of the limit function $f$. For further reading see \cite{DL}.
We introduce here a weaker type of convergence using a set distance approach, influenced by IFS convergence: \begin{definition}\label{hconv}{\bf $h$-convergent subdivision}\\ A subdivision scheme is termed $h$-convergent if for any initial data $p^0$ there exists a set $p^\infty\subset \mathbb{R}^m$, such that \begin{equation}\label{setlimit} \lim_{k\to\infty}h(p^k,p^\infty)=0, \end{equation} where $h$ is the Hausdorff metric induced by the Euclidean metric on $\mathbb{R}^m$. The set $p^\infty$ is termed the $h$-limit of the subdivision scheme. \end{definition}
It is clear that any $C^0$-convergent subdivision is also $h$-convergent.
In both subjects, IFS and subdivision, one is interested in the limits of iterative processes. A connection between IFS and stationary subdivision is established in \cite{SLG}. In order to extend this connection to the case of non-stationary subdivision we investigate below the convergence properties of sequences of transformations in a metric space.
\section{\bf Sequences of transformations and Trajectories} This section is intended to introduce trajectories induced by a sequence of transformations and establish some elementary properties. \par Let $(X,d)$ be a complete metric space. Consider a sequence of continuous transformations $\{T_i\}_{i \in \mathbb{N}}$, $T_i: X \to X$. \begin{definition}{\bf Forward and backward procedures:}
For the sequence of maps $\{T_i\}_{i \in \mathbb{N}}$ we define forward and backward procedures \begin{enumerate} \item $ \Phi_k=T_k \circ T_{k-1} \circ \dots \circ T_1,$ \item $ \Psi_k=T_1 \circ T_2 \circ \dots \circ T_k.$ \end{enumerate} \end{definition}
\begin{definition}{\bf Forward and backward trajectories:}
Induced by the forward and the backward procedures, we define the corresponding forward and backward trajectories in $X$, starting from $x\in X$, $\{\Phi_k(x)\}$ and $\{\Psi_k(x)\}$, by
\begin{equation}\label{PhiPsi} \begin{aligned} \Phi_k(x)=T_k \circ T_{k-1} \circ \dots \circ T_1(x)=T_k\circ\Phi_{k-1}(x),\ \ k \in \mathbb{N},\\ \Psi_k(x)=T_1 \circ T_2 \circ \dots \circ T_k(x)=\Psi_{k-1}\circ T_k(x),\ \ k \in \mathbb{N}. \end{aligned} \end{equation} \end{definition}
In the present section we study the convergence of both types of trajectories. Later on we demonstrate the application of both types to sequences of function systems and to subdivision.
To state our next proposition, let us first introduce the following definition. \begin{definition} Two sequences $\{x_i\}_{i \in \mathbb{N}}$ and $\{y_i\}_{i \in \mathbb{N}}$ in a metric space $(X,d)$ are said to be asymptotically similar if $d(x_i,y_i) \to 0$ as $i \to \infty$. We denote this relation by \begin{equation} \{x_i\}\sim \{y_i\}. \end{equation} \end{definition} \begin{proposition}\label{Equivalence}{\bf Asymptotic similarity of trajectories}\\ Let $\{T_i\}_{i \in \mathbb{N}}$ be a sequence of transformations on $X$, where each $T_i$ is a Lipschitz map with Lipschitz constant $s_i$. If $\lim_{ k \to \infty} \prod_{i=1}^k s_i =0$, then
for any $x,y\in X$, \begin{equation} \begin{aligned} \{\Phi_k(x)\}\sim \{\Phi_k(y)\},\\ \{\Psi_k(x)\}\sim \{\Psi_k(y)\}. \end{aligned} \end{equation}
\end{proposition}
Note that the condition $\lim_{k\to\infty} \prod_{i=1}^k s_i =0$ does not imply $\limsup_{k \to \infty} s_k <1$. \begin{proof} The proof is similar for the forward and the backward trajectories. Let $x,y\in X$ and consider the trajectories $\{\Psi_k(x)\}$ and $\{\Psi_k(y)\}$. Using the fact that $T_i$ is a Lipschitz map with Lipschitz constant $s_i$, we get \begin{equation}\label{prodsi} \begin{aligned} d\big(\Psi_k(x),\Psi_k(y)\big)\ \le\ & s_1 d\big(T_2 \circ T_3 \circ \dots \circ T_k(x),T_2 \circ T_3 \circ \dots \circ T_k(y)\big) \\ & \le s_1s_2 d\big(T_3 \circ T_4 \circ \dots \circ T_k(x),T_3 \circ T_4 \circ \dots \circ T_k(y)\big) \le \dots \\ & \le \big(\prod_{i=1}^k s_i\big) d(x,y), \end{aligned} \end{equation} from which the result follows. \end{proof}
\begin{remark}\label{remarkT} The condition $\lim_{ k \to \infty} \prod_{i=1}^k s_i =0$ stated in Proposition \ref{Equivalence} does not guarantee convergence of the trajectories $\{\Phi_k(x)\}$. \end{remark}
If $T_i=T$ $\forall i\in \mathbb{N}$, and $T$ is a Lipschitz map with Lipschitz constant $\mu<1$, then both types of trajectories reduce to the fixed-point iteration trajectories $\{T^k(x)\}$, where $T^k$ is the $k$-fold autocomposition of $T$; by the Banach contraction principle, these converge to a unique limit irrespective of the starting point $x$. The question now arises regarding the convergence of general trajectories, i.e., which conditions guarantee the convergence of the forward and the backward trajectories. Having in mind the applications to fractal generation and to subdivision, we would like to know which trajectories yield new types of fractals or new types of limit functions. Let us start with the forward trajectories $\{\Phi_k(x)\}$.
\begin{definition}\label{def3}{\bf Invariant set of $\{T_i\}$.}\\ We call $C\subseteq X$ an invariant set of a sequence of transformations $\{T_i\}_{i \in \mathbb{N}}$ if \begin{equation}\label{C} \forall~ x\in C,\ \ T_i(x)\in C,\ \ \forall~ i\in \mathbb{N}. \end{equation} \end{definition}
\begin{lemma}\label{lemma3} Consider a sequence of transformations $\{T_i\}_{i \in \mathbb{N}}$. If there exists $q$ in $X$ such that for every $x\in X$ \begin{equation}\label{C2} d(T_i(x),q)\le \mu d(x,q)+M,\ \ 0\le\mu <1,\ \ M\in \mathbb{R}_+, \end{equation} then the ball of radius $\frac{M}{1-\mu}$ centered at $q$, $B\big(q,\frac{M}{1-\mu}\big)$, is an invariant set of $\{T_i\}_{i \in \mathbb{N}}$. \end{lemma} \begin{proof} For $x\in B\big(q,\frac{M}{1-\mu}\big)$ \begin{equation}\label{C3} d(T_i(x),q)\le \mu d(x,q)+M \le \mu \frac{M}{1-\mu}+M = \frac{M}{1-\mu}. \end{equation} \end{proof}
\begin{remark}\label{remarkC} Under the conditions of Lemma \ref{lemma3}, any ball $B(q,R)$ with $R>\frac{M}{1-\mu}$ is also an invariant set of $\{T_i\}_{i \in \mathbb{N}}$. This follows since $M$ in (\ref{C2}) can be replaced by any $M^*>M$. \end{remark}
\begin{example}\label{Ex1} Consider a sequence of affine transformations on ${\mathbb R}^m$ of the form \begin{equation}\label{Ex1eq} T_i(x)=A_ix+b_i,\ \ i\in \mathbb{N}, \end{equation}
where $\{A_i\}$ are $m\times m$ matrices with $\|A_i\|_2\le \mu<1$, and $\|b_i\|_2\le M$. Then the conditions of Lemma \ref{lemma3} are satisfied with $q=0$, and thus $C=B\big(0, \frac{M}{1-\mu}\big)$ is an invariant set of $\{T_i\}_{i \in \mathbb{N}}$. \end{example}
\begin{proposition}\label{forwardconvergence}{\bf Convergence of forward trajectories}\\ Let $\{T_i\}_{i \in \mathbb{N}}$ be a sequence of transformations on $X$, with a compact invariant set $C$, and assume $\{T_i\}_{i \in \mathbb{N}}$ converges uniformly on $C$ to a Lipschitz map $T$ with Lipschitz constant $\mu<1$. Then for any $x\in C$ the trajectory $\{\Phi_i(x)\}_{i \in \mathbb{N}}$ converges to the fixed-point $p$ of $T$, namely, \begin{equation} \lim_{k\to\infty}d(\Phi_k(x),p)=0. \end{equation} \end{proposition} \begin{proof} Denoting $\epsilon_i=\sup_{x\in C}d(T_i(x),T(x))$, $i\in \mathbb{N}$, it follows that \begin{equation}\label{epsto0} \lim_{i\to\infty}\epsilon_i=0. \end{equation} Since $T$ is a Lipschitz map with Lipschitz constant $\mu<1$, the fixed-point iterations $\{T^k(x)\}$ converge to a unique fixed-point $p\in X$ for any starting point $x$. It also follows that $C$ is an invariant set of $T$. Starting with $x\in C$, we have that $\{\Phi_k(x)\}\subseteq C$. Using the triangle inequality in $\{X,d\}$ and the Lipschitz property of $T$, we have \begin{equation} \begin{aligned} d(\Phi_{k+m}(x),T^m\Phi_k(x))= d(T_{k+m}\circ T_{k+m-1}\circ ...\circ T_{k+1}\circ \Phi_k(x),T^m\Phi_k(x))\le \\ d(T_{k+m}\circ T_{k+m-1}\circ ...\circ T_{k+1}\circ \Phi_k(x),T\circ T_{k+m-1}\circ ...\circ T_{k+1}\circ \Phi_k(x))+ \\ d(T\circ T_{k+m-1}\circ ...\circ T_{k+1}\circ \Phi_k(x),T^2\circ T_{k+m-2}\circ ...\circ T_{k+1}\circ \Phi_k(x))+ \\ ... \\ +d(T^{m-1}\circ T_{k+1}\circ \Phi_k(x),T^m\Phi_k(x))\le \\ \epsilon_{k+m} + \mu \epsilon_{k+m-1}+ \mu^2\epsilon_{k+m-2}+...+ \mu^{m-1}\epsilon_{k+1}\le \\ \max_{1\le i\le m}\{\epsilon_{k+i}\}\times {\frac {1} {1-\mu}}. \end{aligned} \end{equation} Now we use the relation \begin{equation} d(\Phi_{k+m}(x),p)\le d(\Phi_{k+m}(x),T^m\Phi_k(x))+d(T^m\Phi_k(x),p). \end{equation} The result follows by observing that for $k$ large enough $\max_{1\le i\le m}\{\epsilon_{k+i}\}$ can be made as small as needed (by (\ref{epsto0})), and for that $k$, for a large enough $m$, $d(T^m\Phi_k(x),p)$ is as small as needed. \end{proof}
In Section \ref{IFS} we consider trajectories of transformations $\{T_i\}$ defined by function systems, and we look for the attractors of such trajectories. We refer to such systems as non-stationary function systems, and we apply them to generate new fractals. Proposition \ref{forwardconvergence} implies that in the case of forward trajectories, if $T_i\to T$ as $i\to \infty$, the limit of the forward trajectories is the attractor of the IFS corresponding to the limit function system, and hence not new. Let us now examine the backward trajectories $\{\Psi_k(x)\}$, and establish conditions for their convergence.
\begin{proposition}\label{BTp2}{\bf Convergence of backward trajectories} Let $\{T_i\}_{i \in \mathbb{N}}$ be a sequence of transformations on $X$, with a compact invariant set $C$, and assume each $T_i$ is a Lipschitz map with Lipschitz constant $s_i$. If $\sum_{k=1}^\infty \prod_{i=1}^k s_i <\infty$, then the backward trajectories $\{\Psi_k(x)\}$, with $ \Psi_k=T_1 \circ T_2 \circ \dots \circ T_k,\ \ k \in \mathbb{N},$ converge for any starting point $x\in C$ to a unique limit in $C$. \end{proposition} \begin{proof}
By (\ref{PhiPsi}) and the relation in (\ref{prodsi}) \begin{equation*} \begin{split} d\big(\Psi_{k+1}(x),\Psi_k(x) \big) = &~ d \big( \Psi_k (T_{k+1}(x)), \Psi_k(x) \big) \\ \le &~ \big(\prod_{i=1}^k s_i \big)d\big(T_{k+1}(x),x \big).\\ \end{split} \end{equation*} For $m,k \in \mathbb{N}$, $m>k$, we obtain \begin{equation}\label{e1} \begin{split} d\big(\Psi_m(x),\Psi_k(x)\big) \le &~ d\big(\Psi_m(x),\Psi_{m-1}(x)\big) + \dots + d\big(\Psi_{k+2}(x),\Psi_{k+1}(x) \big)+d\big(\Psi_{k+1}(x),\Psi_k(x) \big)\\ \le &~ \big(\prod_{i=1}^{m-1}s_i\big) d\big(T_m(x),x\big)+\dots + \big(\prod_{i=1}^{k+1}s_i\big) d\big(T_{k+2}(x),x \big)+ \big(\prod_{i=1}^{k}s_i\big) d\big(T_{k+1}(x), x \big). \end{split} \end{equation} For $i\in\mathbb{N}$, $T_i(x)\in C$ $\forall x\in C$, which implies that $d(T_i(x),x)\le M$ $\forall x\in C$, where $M$ is the diameter of $C$. Since $\sum_{k=1}^\infty \prod_{i=1}^k s_i < \infty$, Eq. (\ref{e1}) asserts that $d\big(\Psi_m(x),\Psi_k(x)\big) \to 0$ as $ k \to \infty$. That is, $\{\Psi_k(x) \}_{k \in \mathbb{N}}\subseteq C$ is a Cauchy sequence, and due to the completeness of $\{X,d\}$, it is convergent $\forall x\in C$. The uniqueness of the limit is derived by the equivalence of all trajectories as proved in Proposition \ref{Equivalence}. \end{proof}
\begin{remark}\label{remark2} In view of (\ref{PhiPsi}), the result of Proposition \ref{BTp2} holds under the milder assumption that $C$ is an invariant set of $\{T_i\}_{i\ge I}$, for some $I\in\mathbb{N}$. \end{remark}
\begin{remark}\label{remark3} {\bf Differences between forward and backward trajectories} \begin{enumerate} \item Note that if $T_i\to T$ and $T$ has Lipschitz constant $\mu<1$, then $$\sum_{k=1}^\infty \prod_{i=1}^k s_i < \infty ,$$ and both the forward and the backward trajectories converge. \item The condition $\lim_{k\to\infty}\prod_{i=1}^k s_i=0$ is sufficient for the asymptotic similarity result of both forward and backward trajectories. Under the stronger condition $\sum_{k=1}^\infty \prod_{i=1}^k s_i < \infty$ and the existence of a compact invariant set, we get convergence for the backward trajectories.
\item In many cases, the backward trajectories converge, while the forward trajectories do not converge. To demonstrate this, let the metric space be $\mathbb{R}$ with $d(x,y)=|x-y|$, and consider the simple sequence of contractive transformations $T_{2i-1}(x)=x/2$, $T_{2i}(x)=x/2+c$, $i\ge 1$. The backward trajectories converge to the fixed point of $S_1=T_1\circ T_2$, which is $2c/3$. The forward trajectories have two accumulation points, which are the fixed point of $S_1$, i.e., $2c/3$, and the fixed point of $S_2=T_2\circ T_1$, which is $4c/3$. \end{enumerate} \end{remark}
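To spell out the computation in item (3): $\Psi_2(x)=T_1(T_2(x))=\frac{x}{4}+\frac{c}{2}=S_1(x)$, hence $\Psi_{2k}(x)=S_1^k(x)\to \frac{2c}{3}$ and $\Psi_{2k+1}(x)=S_1^k\big(\frac{x}{2}\big)\to \frac{2c}{3}$; whereas $\Phi_2(x)=T_2(T_1(x))=\frac{x}{4}+c=S_2(x)$, hence $\Phi_{2k}(x)=S_2^k(x)\to \frac{4c}{3}$ while $\Phi_{2k+1}(x)=T_1\big(S_2^k(x)\big)\to \frac{2c}{3}$.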
\section{\bf Trajectories of Sequences of Function Systems}\label{IFS}
Generalizing the classical IFS we consider a sequence of function systems, SFS in short, and its trajectories.
Let $(X,d)$ be a complete metric space. Consider an SFS $\{\mathcal{F}_i\}_{i\in \mathbb{N}}$ defined by $$ \mathcal{F}_i = \big\{X; f_{1,i}, f_{2,i}, \dots, f_{n_i,i} \big \},$$ where $f_{r,i}: X \to X$ are continuous maps. The associated set-valued maps are given by $$\mathcal{F}_i: \mathbb{H}(X) \to \mathbb{H}(X); \quad \mathcal{F}_i(A)= \cup_{r=1}^{n_i} f_{r,i}(A).$$ Denoting $s_{r,i}=\text{Lip}(f_{r,i})$, for $r=1,2,\dots,n_i$, we recall that as in (\ref{CLF}), the contraction factor of $\mathcal{F}_i$ in $(\mathbb{H}(X),h)$ is $L_{\mathcal{F}_i}=\max_{r=1,2,\dots, n_i} s_{r,i}\equiv s_i$. The traditional IFS theory deals with the attractor, namely, the set which is the `fixed-point' of a map $\mathcal{F}$. In this section we consider the trajectories of the SFS maps $\{\mathcal{F}_i\}_{i\in \mathbb{N}}$, which we refer to as forward and backward SFS trajectories \begin{equation} \Phi_k(A)=\mathcal{F}_k \circ \mathcal{F}_{k-1} \circ \dots \circ \mathcal{F}_1(A),\ \ \ \Psi_k(A)=\mathcal{F}_1 \circ \mathcal{F}_{2} \circ \dots \circ \mathcal{F}_k(A),\ \ k \in \mathbb{N}, \end{equation} respectively.
As presented in Section \ref{sect1}, $\mathbb{H}(X)$, endowed with the Hausdorff metric $h$, is a complete metric space if $(X,d)$ is complete.
The first observation is a corollary of Proposition \ref{Equivalence}:
\begin{corollary}\label{IFSequiv}{\bf Asymptotic similarity of SFS trajectories}\\ Consider an SFS defined by $\mathcal{F}_i=\big\{X; f_{1,i}, f_{2,i}, \dots, f_{n_i,i} \big \}$, $i \in \mathbb{N}$, where $f_{r,i}: X \to X$ are Lipschitz maps. Further assume that the corresponding contraction factors $\{L_{\mathcal{F}_i}\}$ for the set-valued maps $\{\mathcal{F}_i\}$ on $(\mathbb{H}(X),h)$ satisfy $\lim_{ k \to \infty} \prod_{i=1}^k L_{\mathcal{F}_i} =0$. Then all the forward trajectories of $\{\mathcal{F}_i\}$ are asymptotically similar, and all the backward trajectories of $\{\mathcal{F}_i\}$ are asymptotically similar.
\end{corollary}
The next result is a corollary of Proposition \ref{forwardconvergence}: \begin{corollary}{\bf Convergence of forward SFS trajectories}\label{SFSforward}\\ Let $\{\mathcal{F}_i\}_{i\in \mathbb{N}}$ be as in Corollary \ref{IFSequiv}, with equal number of maps, $n_i=n$, and let $\mathcal{F}=\{X; f_r: r=1,2,\dots, n\}$. Assume that there exists $ C\subseteq X$, a compact invariant set of $\{f_{r,i}\}$ and that for each $r=1,2, \dots,n$, the sequence $\{f_{r,i}\}_{i\in \mathbb{N}}$ converges uniformly to $f_r$ on $C$ as $ i \to \infty$. Also assume that $\mathcal{F}$ has a contraction factor $L_{\mathcal{F}}<1$ . Then the forward trajectories $\{\Phi_k(A)\}$ converge for any initial set $A\subseteq C$ to the unique attractor of $\mathcal{F}$ . \end{corollary}
\begin{remark}\label{remark4} The forward trajectories of the SFS in Corollary \ref{SFSforward} converge to the fractal set (attractor) associated with $\mathcal{F}$ (see \cite{B1}). This observation implies that forward trajectories of a converging SFS do not produce any new entities.
\end{remark}
Backward trajectories of an SFS may seem less natural. However, since they converge under mild conditions, even if the SFS $\{\mathcal{F}_i\}_{i \in \mathbb{N}}$ does not converge to a contractive function system, their limits, or attractors, may constitute new entities, different from the known fractals, which are self-similar.
\begin{corollary}\label{backSFS}{\bf Convergence of backward SFS trajectories}
Let $\{\mathcal{F}_i\}_{i\in \mathbb{N}}$ and $\{L_{\mathcal{F}_i}\}$ be as in Corollary \ref{IFSequiv}. Assume there exists $ C\subseteq X$, a compact invariant set of $\{f_{r,i}\}$, $r=1,...,n_i$, $i\in \mathbb{N}$, and assume that $\sum_{k=1}^\infty \prod_{i=1}^k L_{\mathcal{F}_i}<\infty$. Then the backward trajectories $\{\Psi_k(A)\}$ converge, for any initial set $A\subseteq C$, to a unique set (attractor) $P\subseteq C$.
\end{corollary}
\section{\bf Hidden fractals}
The fractal defined as the attractor of a single $\mathcal{F}=\{X; f_r: r=1,2,\dots, n\}$ has the property of self-similarity, i.e., its local shape is unchanged under certain contraction maps. The entities defined as the attractors of backward trajectories are more flexible. With a proper choice of $\{\mathcal{F}_i\}_{i\in \mathbb{N}}$ one can design different local behaviour under different contraction maps.
Such a design relies on the observation that in a set defined by a sequence of contraction maps \begin{equation} \mathcal{G}_k(B)=\mathcal{F}_{1}\circ \mathcal{F}_{2}\circ \mathcal{F}_{3}\circ \cdots \circ \mathcal{F}_k(B), \end{equation} the first maps $\mathcal{F}_{1},\mathcal{F}_{2},\mathcal{F}_{3},\dots$ determine the global shape of the set, while the details of the local shape are determined by the last maps $\mathcal{F}_{k},\mathcal{F}_{k-1},\mathcal{F}_{k-2},\dots$. To understand this, note, e.g., that the set $\mathcal{F}_k(B)$ undergoes a sequence of $k-1$ contraction maps. Therefore, its shape is not noticeable at larger scales. The arrangement of the set $\mathcal{G}_k(B)$ is finally fixed by the maps $\{f_{1,1},f_{2,1},\dots,f_{n_1,1}\}$ of $\mathcal{F}_{1}$. In general, if we scale by the contraction factor of $\Psi_k=\mathcal{F}_1\circ\mathcal{F}_2\circ \dots\circ\mathcal{F}_k$, we shall see the behavior of the attractor of the backward trajectories of $\{\mathcal{F}_i\}_{i>k}$.
\begin{example} As an example we consider an alternating sequence of maps $\{\mathcal{F}_i\}_{i\in \mathbb{N}}$, where for $10(j-1)<i\le 10j-5$, $\mathcal{F}_i$ is the function system generating cubic polynomial splines, and for $10j-5<i\le 10j$ it is the function system generating the Koch fractal. Both function systems are, of course, contractive. The forward trajectories do not converge (see Remark \ref{remark3}(3)), while any backward trajectory converges rapidly. In Figure \ref{cubicKoch}, the left image shows the global behavior of the limit, which is that of a cubic spline, and the right image shows the local behavior near $x=0$, which is like the Koch fractal. At higher resolution we have smooth behavior again, and so on. Note that the scaling factor between the two images in Figure \ref{cubicKoch} is approximately $(1/2)^5$, which is the contraction factor of the first five mappings in $\{\mathcal{F}_i\}_{i\in \mathbb{N}}$.
\begin{figure}
\caption{The cubic-Koch attractor: ``smooth'' at one scale and ``fractal'' at another.}
\label{cubicKoch}
\end{figure}
\end{example}
\section{\bf IFS related to convergent stationary subdivision}\label{6}
In this section we present IFSs related to stationary subdivision schemes. The results in Subsections \ref{51} and \ref{52} are taken from \cite{SLG}. As in \cite{SLG}, the discussion is restricted to the case $s=1$, i.e., curves in $\mathbb{R}^m$.
\subsection{\bf $C^0$-convergent subdivision}\label{51}
The connection between a $C^{0}$-convergent stationary subdivision for curves and IFS is presented in \cite{SLG}. In subdivision processes for curves ($s=1$) one starts with an initial control polygon $p^0$, and the limit curve depends upon $p^0\subset \mathbb{R}^m$. The attractor of an IFS, on the other hand, does not depend upon the initial set. This dichotomy is resolved in \cite{SLG} by defining an IFS related to the subdivision operator $S$ which depends upon $p^0$. The resulting IFS then converges to the relevant subdivision limit from any initial starting set. To understand the extension to non-stationary subdivision, let us first elaborate on the construction suggested in \cite{SLG} for the case of stationary subdivision for curves.
As presented in Section \ref{BOS}, a stationary binary subdivision scheme for curves in the plane ($s=1,\ m=2$) is defined by two refinement rules that take a set of control points at level $k$, $p^k$, to a refined set at level $k+1$, $p^{k+1}$. For an infinite sequence $p^k$ this operation can be written in matrix form as \begin{equation}\label{S} p^{k+1}=Sp^k, \end{equation} where $S\equiv S_a$ is a two-slanted infinite matrix with rows representing the two refinement rules, namely
$S_{i,j}=a_{i-2j}$, and $p^k$ is a matrix with $m$ columns and an infinite number of rows. Given a finite set of control points, $\{p^0_j\in\mathbb{R}^m\}_{j=1}^n$ at level $0$, we are interested in computing the limit curve defined by these points. For a non-empty limit curve, $n$ should be larger than the support size $|\sigma(a)|$. We consider the sub-matrix of $S$ which operates on these points, and we cut from it two square $n\times n$ sub-matrices, $S_1$ and $S_2$, which define all the $n_1$ resulting control points at level $1$. Note that $S_1$ defines the transformation to the first $n$ points at level $1$, and $S_2$ defines the transformation to the last $n$ points at level $1$. Of course there can be an overlap between these two vectors of points, namely $n_1< 2n$. Some examples of these sub-matrices are given in \cite{SLG}. We provide below the explicit forms of $S_1$ and $S_2$:
We distinguish two types of masks, an even mask, with $2\ell$ elements, $a_{-\ell+1},...,a_\ell$, and an odd mask with $2\ell+1$ elements, $a_{-\ell},...,a_\ell$. For both cases we assume $n>\ell+1$. For both the even and the odd masks \begin{equation}\label{S1} S_1=\{a_{i-2j}\}_{i=\ell+1,\ j=1}^{\ \ \ \ell+n,\ \ \ n}. \end{equation} $S_2$ is different for odd and even masks. For an even mask \begin{equation}\label{S2E} S_2=\{a_{i-2j}\}_{i=n-\ell+2,\ j=1}^{\ \ \ 2n-\ell+1,\ \ n}, \end{equation} and for an odd mask \begin{equation}\label{S2O} S_2=\{a_{i-2j}\}_{i=n-\ell+3,\ j=1}^{\ \ \ 2n-\ell+2,\ \ n}. \end{equation}
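To illustrate these formulas, take, for instance, the even mask of Chaikin's scheme, $a_{-1}=a_2=\frac14$, $a_0=a_1=\frac34$ (so $\ell=2$), and $n=4$ control points; then (\ref{S1}) and (\ref{S2E}) give $$ S_1=\left(\begin{array}{cccc} \frac34&\frac14&0&0\\ \frac14&\frac34&0&0\\ 0&\frac34&\frac14&0\\ 0&\frac14&\frac34&0 \end{array}\right),\qquad S_2=\left(\begin{array}{cccc} \frac14&\frac34&0&0\\ 0&\frac34&\frac14&0\\ 0&\frac14&\frac34&0\\ 0&0&\frac34&\frac14 \end{array}\right).$$ Both matrices are stochastic (their rows sum to $1$), so each has the right eigenvector $(1,1,1,1)^t$ with eigenvalue $1$, a fact used below.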
Repeated applications of $S_1$ and $S_2$ define all the control points at all levels. Therefore, \begin{equation}\label{pinfty} \bigcup_{i_1,i_2,\dots,i_k\in \{1,2\}}S_{i_k}\cdots S_{i_2}S_{i_1}p^0 \to p^\infty,\ \ \ \text{as}\ \ k \to \infty, \end{equation} where $p^\infty$ is the set of points on the curve defined by the subdivision process starting with $p^0$.
\begin{remark}\label{Union}{\bf Union of vectors of points}\\ $p^0$ is a vector of $n$ points in $\mathbb{R}^m$, and thus each $S_{i_k}\cdots S_{i_2}S_{i_1}p^0$ is a vector of $n$ points in $\mathbb{R}^m$, which we regard as a set of $n$ points in $\mathbb{R}^m$. By $\bigcup S_{i_k}\cdots S_{i_2}S_{i_1}p^0$ we mean the set in $\mathbb{R}^m$ which is the union of all these sets. \end{remark}
\begin{remark}\label{PTP}{\bf Parameterizing the points in $p^\infty$}\\ To order the points of the set $p^\infty$ we introduce the following parametrization. An infinite sequence $\eta=\{i_k\}_{k=1}^\infty$, $i_k\in\{1,2\}$ defines a vector of $n$ points in $\mathbb{R}^m$ \begin{equation} \lim_{k\to \infty}S_{i_k}\cdots S_{i_2}S_{i_1}p^0=(q_1,...,q_n)^t,\ \ q_i\in \mathbb{R}^m. \end{equation} In case of a $C^{0}$-convergent subdivision, the differences between adjacent points tend to zero \cite{DGL}. Therefore, all these $n$ points are the same point, \begin{equation}\label{samepoint} \lim_{k\to \infty}S_{i_k}\cdots S_{i_2}S_{i_1}p^0=(q_\eta,...,q_\eta)^t,\ \ q_\eta\in \mathbb{R}^m. \end{equation} We attach this point $q_\eta$ to the parameter value $x_\eta=\sum_{k=1}^\infty (i_k-1)2^{-k}\in [0,1]$. \end{remark}
\subsection{\bf IFS related to stationary subdivision}\label{52}
Here the metric space is $\{\mathbb{R}^n,d\}$ with $d(x,y)=\|x-y\|_2$, where $\|\cdot\|_2$ is the Euclidean norm. The observation (\ref{pinfty}) leads in \cite{SLG} to the definition of an IFS with two maps on $X=\mathbb{R}^n$ (row vectors) \begin{equation}\label{f1f2} f_r(A)=AP^{-1}S_r P,\ \ \ r=1,2, \end{equation} where $P$ is an $n\times n$ matrix defined as follows:
\begin{enumerate} \item The first $m$ columns of $P$ are the $n$ given control points $p^0$, which are points in $\mathbb{R}^m$. \item The last column is a column of $1$'s. \item The rest of the columns are defined so that $P$ is non-singular. We assume here that the control points $p^0$ do not all lie in an $(m-1)$-dimensional hyperplane, so that the first $m$ columns of $P$ are linearly independent, and that the column of $1$'s is independent of the first $m$ columns; a small numerical sketch of this construction is given below. \end{enumerate}
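The sketch below is illustrative only: the filler columns are taken to be standard basis vectors (one possible choice), and non-singularity is simply asserted rather than guaranteed.
\begin{verbatim}
import numpy as np

def build_P(p0):
    """Matrix P: control points, filler columns, and a last column of ones."""
    n, m = p0.shape
    P = np.zeros((n, n))
    P[:, :m] = p0                                  # first m columns: the control points
    P[:n - m - 1, m:n - 1] = np.eye(n - m - 1)     # filler columns (one possible choice)
    P[:, -1] = 1.0                                 # last column of ones
    assert abs(np.linalg.det(P)) > 1e-12, "control points too degenerate"
    return P

def make_maps(S1, S2, P):
    """The two IFS maps f_r(A) = A P^{-1} S_r P, acting on row vectors."""
    Pinv = np.linalg.inv(P)
    f1 = lambda A: A @ Pinv @ S1 @ P
    f2 = lambda A: A @ Pinv @ S2 @ P
    return f1, f2
\end{verbatim}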
This special choice of $P$, together with the special definition of $f_1,f_2$ in (\ref{f1f2}), yields the following essential observations: \begin{itemize} \item Since $S_1$ and $S_2$ have the eigenvalue $1$, with right eigenvector $(1,1,...,1)^t$ which is also the last column of $P$, we have \begin{equation}\label{PSP} P^{-1}S_rP =\left(
\begin{array}{c|c} G_r & 0 \\ \hline v&1 \end{array} \right) , \ \ r=1,2, \end{equation} where $G_r$ are $(n-1)\times(n-1)$ matrices. Denoting by $Q^{n-1}$ the $n-1$ dimensional hyperplane (flat) of vectors of the form $(x_1,...,x_{n-1},1)$, it follows from (\ref{PSP}) that $f_r\ :\ Q^{n-1}\to Q^{n-1}$, $r=1,2$.
\item By applying the IFS iterations to the set $A=P$, using equation (\ref{FX}), we identify the candidate attractor as \begin{equation}\label{Pinfty} P^\infty=\lim_{k\to\infty}\bigcup_{i_1,i_2,...,i_k\in \{1,2\}}S_{i_k}\cdots S_{i_2}S_{i_1}P. \end{equation} Similarly to Remark \ref{Union}, the rows of $P^\infty$ constitute a set of points in $\mathbb{R}^n$. By the structure of $P$, and in view of (\ref{pinfty}), we observe that $p^\infty$ is the set of points in $\mathbb{R}^m$ defined by the first $m$ components of the points (in $\mathbb{R}^n$) of $P^\infty$. \end{itemize}
The above observations lead to the main result in \cite{SLG}, stated in the theorem below. The original proof of this theorem in \cite{SLG} has a flaw. We provide here a proof which also serves us later in the discussion of non-stationary subdivision.
\begin{theorem}\label{ThSLG} Let $S_a$ be a $C^{0}$-convergent subdivision, and let $p^0$ be a vector of $n$ initial control points. Define the IFS $\mathcal{F}=\{X; f_1,f_2\}$ on $Q^{n-1}$, with $f_1,f_2$ defined in (\ref{f1f2}) and $S_1,S_2$ defined in (\ref{S1})-(\ref{S2O}). Then the IFS converges to a unique attractor in $Q^{n-1}$, and the first $m$ components of the points of this attractor constitute the limit curve $p^\infty=S_a^\infty p^0$. \end{theorem}
\begin{proof} Since all the eigenvalues of $S_1$ and $S_2$ which differ from $1$ are smaller than $1$ in absolute value, it follows that $\rho(G_r)<1$, $r=1,2$, where $\rho(G)$ denotes the spectral radius of $G$. This does not directly imply that the maps $f_1,f_2$ are contractive on $Q^{n-1}$. Following Remark \ref{remarkIFS}(2), to prove convergence of the IFS $\mathcal{F}$, we show that some $\ell$-term composition of $\mathcal{F}$ is a contraction map. We notice that such an $\ell$-term composition of $\mathcal{F}$ is itself an IFS, with $2^\ell$ functions of the form \begin{equation}\label{ftau} f_\eta(A)=AP^{-1}S_{i_\ell}\cdots S_{i_2}S_{i_1}P,\ \ \ \eta\in I_\ell, \end{equation} where $I_\ell=\big\{\eta=\{i_j\}_{j=1}^\ell :\ i_j\in\{1,2\}\big\}$. $S_a$ is $C^{0}$-convergent, thus by Definition \ref{C0conv} it is also uniformly convergent. It follows from (\ref{samepoint}) that for any $\epsilon>0$, there exists $\ell=\ell(\epsilon)$ such that for any $\eta\in I_\ell$ \begin{equation}\label{stauP} S_{i_\ell}\cdots S_{i_2}S_{i_1}P=Q_\eta+E_\eta, \end{equation}
where $Q_\eta$ is an $n\times n$ matrix of constant columns, and $\|E_\eta\|_\infty<\epsilon$. The last column of $Q_\eta$ is $(1,1,...,1)^t$, and the last column of $E_\eta$ is the zero column. Recalling that the last column of $P$ is the constant vector of $1$'s, and since $P^{-1}P=I_{n\times n}$, it follows that \begin{equation}\label{Pm1stauP} P^{-1}S_{i_\ell}...S_{i_2}S_{i_1}P= \begin{pmatrix} 0&0&...&0&0\\ 0&0&...&0&0\\ . & . & . & . & .\\ . & . & . & . & .\\
0&0&...&0&0\\
q_{\eta,1}&q_{\eta,2}&...&q_{\eta,n-1}&1\\ \end{pmatrix} +P^{-1}E_\eta= \left(
\begin{array}{c|c} G_\eta & 0 \\ \hline q_{\eta}&1 \end{array} \right), \end{equation}
where $q_{\eta,j}(1,1,...,1)^t$ is the $j$-th column of $Q_\eta$. It follows that $\|G_\eta\|_2\le \epsilon\|P^{-1}\|_2$. Next we show that for $\epsilon$ small enough, $f_\eta$ is contractive with respect to the Euclidean norm in $Q^{n-1}$. Indeed, for $x,y\in \mathbb{R}^{n-1}$, $(x,1),(y,1)\in Q^{n-1}$, and, by the block structure in (\ref{Pm1stauP}), \begin{equation}\label{fetaxy}
d(f_\eta((x,1)),f_\eta((y,1)))=\|f_\eta((x,1)-(y,1))\|_2=\|f_\eta((x-y,0))\|_2=\|(x-y)G_\eta\|_2\le \|G_\eta\|_2\,\|x-y\|_2. \end{equation}
Choosing $\epsilon$ such that $\epsilon \|P^{-1}\|_2<1$, it follows that for all $ \eta\in I_{\ell(\epsilon)}$, the map $f_\eta$ is contractive on $Q^{n-1}$, and the IFS defined by $\mathcal{F}$ is convergent. \end{proof}
\begin{remark} Theorem \ref{ThSLG} reveals the fractal nature of curves generated by subdivision. However, the self-similarity property of these curves is not achieved in $\mathbb{R}^m$. The self-similarity property belongs to the attractor $P^\infty$, as a set in $Q^{n-1}$; $p^\infty$ is the projection onto $\mathbb{R}^m$ of this self-similar entity in $Q^{n-1}$. \end{remark}
\subsection{\bf A basis for convergent stationary subdivision}
As presented above, and earlier in \cite{SLG}, the definition of an IFS for a $C^0$-convergent stationary subdivision involves the specific given control points $p^0$. We observe that it is enough to consider one basic IFS, and its attractor can serve as a basis for generating the limit of the subdivision process for any given $n$ control points $p^0$. Instead of the matrix $P$, we may define any other non-singular $n\times n$ matrix with a last column of $1$'s. We choose the matrix \begin{equation}\label{H} H= \begin{pmatrix} 1& 0& 0 & 0&...&1\\ 0 &1& 0& 0&... &1 \\ 0 &0& 1& 0&... &1 \\
\cdot&\cdot&\cdot&\cdot&\cdot&1\\
\cdot&\cdot&\cdot&\cdot&\cdot&1\\
0 &0& 0&... &1&1 \\ 0 &0& 0&... &0&1 \\ \end{pmatrix}, \end{equation} and define the IFS with \begin{equation}\label{f1f2H} f_r(A)=AH^{-1}S_r H,\ \ \ r=1,2. \end{equation} As shown above, the attractor of this IFS is obtained as the limit of unions of $n\times n$ matrices \begin{equation}\label{Hinfty} \mathcal{H}^\infty=\lim_{k\to\infty}\bigcup_{i_1,i_2,...,i_k\in \{1,2\}}S_{i_k}\cdots S_{i_2}S_{i_1}H. \end{equation} In view of Remark \ref{Union}, $\mathcal{H}^\infty\subset Q^{n-1}$.
For any given control points $p^0$ we can simply calculate $p^\infty$ as the set \begin{equation}\label{pinftyH} p^\infty=\mathcal{H}^\infty H^{-1}p^0. \end{equation}
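Combining (\ref{H})--(\ref{pinftyH}), one basic attractor thus serves any control polygon with $n$ points. A short sketch (reusing the helper \texttt{subdivision\_points} from the earlier sketch; a finite depth $k$ only approximates the attractor):
\begin{verbatim}
import numpy as np

def build_H(n):
    H = np.eye(n)
    H[:, -1] = 1.0              # identity with the last column replaced by ones
    return H

def limit_from_basis(S1, S2, p0, k):
    n = p0.shape[0]
    H = build_H(n)
    H_inf = subdivision_points(S1, S2, H, k)    # approximates the basic attractor
    return H_inf @ np.linalg.inv(H) @ p0        # p^infty for the given control points
\end{verbatim}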
\iffalse \subsection{\bf A class of $h$-convergent subdivision}
The IFS machinery enables us to identify a class of $h$-convergent subdivision schemes (Definition \ref{hconv}). Using the above idea on a basis for stationary subdivision we can present an almost inverse theorem to Theorem \ref{ThSLG}:
\begin{theorem}\label{thH} Let $S_a$ be a subdivision scheme, and assume there exists a non-singular matrix $H$ such that the IFS defined by (\ref{f1f2H}) is contractive in some $Q\subseteq \mathbb{R}^n$. Then, $S_a$ is $h$-convergent. \end{theorem} \begin{proof} The proof follows directly from (\ref{Hinfty}),(\ref{pinftyH}). \end{proof}
The next result presents a specific wide class of such $h$-convergent subdivision schemes. The proof involves an extension of the IFS construction in \cite{SLG}.
\begin{theorem}\label{thsvv} Let $S_a$ be a subdivision scheme such that $S_1$ and $S_2$ have a common invariant subspace $V$ in $\mathbb{R}^n$, $dim(V)=\ell$, such that \begin{equation}\label{svv} S_iv=v,\ \ \ \forall v\in V,\ \ \ \ i=1,2, \end{equation}
and each has $n-\ell$ eigenvalues satisfying $|\lambda|<1$. Then $S_a$ is $h$-convergent. \end{theorem} \begin{proof} The proof requires an appropriate definition of the matrix $H$ appearing in the construction of the IFS in (\ref{f1f2H}). W.l.o.g. we assume $n>m+\ell$, and let $v_1,...,v_\ell$ be a basis of $V$. Define $H$ to be an $n\times n$ non-singular matrix with $v_1,...,v_\ell$ as its last $\ell$ columns.
Since $S_1$ and $S_2$ have eigenvalue $1$, with eigenvectors $v_1,...,v_\ell$ which are also the last columns of $H$, then \begin{equation}\label{HSH2} H^{-1}S_rH=
\left({\bf B}_{n\times n-\ell} \, \middle| \, \frac{{\bf 0}_{(n-\ell)\times\ell}}{I_{\ell\times\ell}} \right) \end{equation} Denoting by $Q^{n-\ell}$ the $n-\ell$ dimensional affine subspace of vectors of the form $(x_1,...,x_{n-\ell},1,...,1)$, it follows from (\ref{HSH2}) that $f_r:Q^{n-\ell}\to Q^{n-\ell}$, $r=1,2$, where $f_1$, $f_2$ are defined by (\ref{f1f2H}). Furthermore, since all the other eigenvalues of $S_1$ and $S_2$ are smaller than $1$, the maps $f_1,f_2$ are contractive on $Q^{n-\ell}$. Using Theorem \ref{thH} we conclude that $S_a$ is $h$-convergent. \end{proof}
\begin{remark} The above theorem covers the case where $S_1$ and $S_2$ have eigenvalue $1$, with eigenvector $(1,1,...,1)^t$, and all other eigenvalues of $S_1$ and $S_2$ are smaller than $1$, and yet $S_a$ is not $C^0$-convergent. This happens if $\rho(S_1,S_2)\ge 1$, where $\rho(S_1,S_2)$ denotes the joint spectral radius of $S_1$ and $S_2$ (see e.g. \cite{DL}).
Returning to Remark \ref{PTP}, if $S_a$ is $h$-convergent and not $C^0$-convergent, one can not guarantee the assignment of a unique point to a given parameter $x_\eta\in[0,1]$. Hence, the set $p^\infty$ may not be parameterizable, or is not representing a $C^0$ curve. \end{remark} \fi
\section{\bf SFS trajectories associated with non-stationary subdivision}
This research was motivated by the idea to adapt the framework of the previous section to non-stationary subdivision processes. In binary non-stationary subdivision, as shown in (\ref{sak}), the refinement rules may depend upon the refinement level, and can be written in matrix form as \begin{equation}\label{Sk} p^{k+1}=S^{[k]}p^k, \end{equation} where each $S^{[k]}\equiv S_{a^{[k]}}$ is a ``two-slanted'' matrix. As demonstrated in \cite{DL1}, non-stationary subdivision processes can generate interesting limits which cannot be generated by stationary schemes, e.g., exponential splines. Interpolatory non-stationary subdivision schemes can generate new types of orthogonal wavelets, as shown in \cite{DKLR}.
In the following we discuss the possible relation between non-stationary subdivision processes and SFS processes. A necessary condition for the convergence (to a continuous limit) of a stationary subdivision scheme is the {\bf constants reproduction property}, namely, \begin{equation}\label{Seeqe}
Se=e, \ \ \ \ e=(...,1,1,1,1,1,...)^t . \end{equation} As explained in Section \ref{6}, this condition is used in \cite{SLG} in order to show that the maps defined in (\ref{f1f2}) are contractive on $Q^{n-1}$. This condition is not necessarily satisfied by converging non-stationary subdivision schemes. It is also not a necessary condition for the construction of SFS related to non-stationary subdivision.
\subsection{\bf Constructing SFS mappings for non-stationary subdivision}
In the following we assume that the supports of the masks $a^{[k]}$, $|\sigma(a^{[k]})|$, are of the same size, which is at most the number of initial control points. As in the stationary case, for a given set of control points, $\{p^0_j\}_{j=1}^n$, we define for each $k$ the two square $n\times n$ sub-matrices of each $S^{[k]}$, $S^{[k]}_1$ and $S^{[k]}_2$, in the same way as for a stationary scheme, by equations (\ref{S1}), (\ref{S2E}), (\ref{S2O}). The points generated by the subdivision process are obtained by applying $S^{[1]}_1$ and $S^{[1]}_2$ to the initial control points vector $p^0$, and then applying $S^{[2]}_1$ and $S^{[2]}_2$ to the two resulting vectors, and so on. The set of points generated at level $k$ of the subdivision process is given by \begin{equation}\label{nspk} p^k=\bigcup_{i_1,i_2,...,i_k\in \{1,2\}}S^{[k]}_{i_k}\cdots S^{[2]}_{i_2}S^{[1]}_{i_1}p^0\ . \end{equation} If the subdivision is $C^0$-convergent or $h$-convergent, then \begin{equation}\label{pktopinfty} p^k\to p^\infty\ \ \ \text{as}\ \ k\to\infty, \end{equation} in the sense of Definitions \ref{C0conv}, \ref{hconv} respectively. Here $p^\infty$ is the set of points defined by the non-stationary subdivision process starting with $p^0$.
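An analogous sketch for the level-$k$ points (\ref{nspk}) of a non-stationary scheme; the hypothetical helper \texttt{masks(k)} is assumed to return the pair $(S^{[k]}_1,S^{[k]}_2)$ as numpy arrays.
\begin{verbatim}
import itertools
import numpy as np

def nonstationary_points(masks, p0, k):
    pts = []
    for word in itertools.product((0, 1), repeat=k):
        q = p0.copy()
        for level, i in enumerate(word, start=1):
            q = masks(level)[i] @ q      # apply S^{[1]} first, then S^{[2]}, ...
        pts.append(q)
    return np.vstack(pts)
\end{verbatim}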
Now we define the SFS $\{\mathcal{F}_k\}$, where $\mathcal{F}_k=\big\{X; f_{1,k}, f_{2,k} \big \}$, with the level dependent maps \begin{equation}\label{f1f2k} f_{r,k}(A)=AP^{-1}S^{[k]}_r P,\ \ \ r=1,2, \end{equation} where $P$ is the $n\times n$ matrix defined as in the stationary case.
\begin{remark}\label{QorR} If the non-stationary scheme satisfies the constant reproduction property at every subdivision level, then all the mappings in the SFS map $Q^{n-1}$ into itself (by (\ref{PSP})). If not, then the mappings are considered as maps on $\mathbb{R}^n$. \end{remark}
Let us now follow a forward trajectory and a backward trajectory of $\Sigma\equiv\{\mathcal{F}_k\}$, starting from $A\subset \mathbb{R}^n$: $$\mathcal{F}_k(A)=f_{1,k}(A)\cup f_{2,k}(A)=AP^{-1}S^{[k]}_1P\cup AP^{-1}S^{[k]}_2P,$$ and $$\mathcal{F}_{j}(\mathcal{F}_k(A))=f_{1,j}(AP^{-1}S^{[k]}_1P\cup AP^{-1}S^{[k]}_2P)\cup f_{2,j}(AP^{-1}S^{[k]}_1P\cup AP^{-1}S^{[k]}_2P).$$ We note that $$f_{r,j}(AP^{-1}S^{[k]}_iP)=AP^{-1}S^{[k]}_iPP^{-1}S^{[j]}_rP=AP^{-1}S^{[k]}_iS^{[j]}_rP.$$ Therefore, $$\mathcal{F}_{j}(\mathcal{F}_k(A))=\bigcup_{r,i\in\{1,2\}}AP^{-1}S^{[k]}_iS^{[j]}_rP.$$ In the same way it follows that at the $k$th step of a forward trajectory of $\Sigma$ we generate the set \begin{equation}\label{FASSP} \mathcal{F}_{k}\circ\mathcal{F}_{k-1}\circ ...\circ\mathcal{F}_{2}\circ\mathcal{F}_1(A)=\bigcup_{i_1,i_2,...,i_k\in \{1,2\}}AP^{-1}S^{[1]}_{i_1}\cdots S^{[k-1]}_{i_{k-1}}S^{[k]}_{i_k}P. \end{equation} Similarly, the set generated at the $k$th step of a backward trajectory is \begin{equation}\label{ASSP} \mathcal{F}_{1}\circ\mathcal{F}_2\circ ...\circ\mathcal{F}_{k-1}\circ\mathcal{F}_k(A)=\bigcup_{i_1,i_2,...,i_k\in \{1,2\}}AP^{-1}S^{[k]}_{i_k}\cdots S^{[2]}_{i_2}S^{[1]}_{i_1}P. \end{equation} For the special backward trajectory with $A=P$ we obtain \begin{equation}\label{PSSP} \mathcal{F}_{1}\circ\mathcal{F}_2\circ ...\circ\mathcal{F}_{k-1}\circ\mathcal{F}_k(P)=\bigcup_{i_1,i_2,...,i_k\in \{1,2\}}S^{[k]}_{i_k}\cdots S^{[2]}_{i_2}S^{[1]}_{i_1}P. \end{equation}
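A finite backward trajectory (\ref{ASSP}) can be sketched under the same assumptions as before (the helper \texttt{masks(k)} is hypothetical, and $P$ is the matrix used in (\ref{f1f2k})).
\begin{verbatim}
import itertools
import numpy as np

def backward_trajectory(masks, P, A, k):
    Pinv = np.linalg.inv(P)
    out = []
    for word in itertools.product((0, 1), repeat=k):
        M = np.eye(P.shape[0])
        for level, i in enumerate(word, start=1):
            M = masks(level)[i] @ M       # builds S^{[k]}_{i_k} ... S^{[1]}_{i_1}
        out.append(A @ Pinv @ M @ P)
    return np.vstack(out)
\end{verbatim}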
If the non-stationary subdivision scheme is either $C^0$-convergent or $h$-convergent, then, in view of (\ref{nspk}), it follows that the first $m$ components in this special trajectory converge to the limit $p^\infty$ of $\{S_{a^{[k]}}\}$, starting with $p^0$. The challenging question is finding for which classes of non-stationary schemes {\bf all} the backward trajectories converge to the same limit. As we show later, and as explained in Remark \ref{remark4}, forward trajectories of $\Sigma$ are less interesting.
\subsection{\bf Attractors of forward and backward SFS trajectories for non-stationary subdivision}
We consider forward and backward SFS trajectories for several cases of non-stationary subdivision schemes:
\begin{itemize}
\item[Case (i)] A $C^{0}$-convergent non-stationary scheme $\{S_{a^{[k]}}\}$.
\item[Case (ii)] A non-stationary scheme $\{S_{a^{[k]}}\}$ satisfying the constants reproduction property,
with masks of the same support, converging to a mask $a$ of a $C^{0}$-convergent subdivision, i.e., $\sigma(a^{[k]})=\sigma(a)$, and
\begin{equation}\label{aktoa} \lim_{k\to \infty}a^{[k]}_j=a_j,\ \ \ j\in\sigma(a). \end{equation}
\item[Case (iii)] A non-stationary scheme $\{S_{a^{[k]}}\}$ with masks $\{a^{[k]}\}$ satisfying the constants reproduction property, and corresponding $\{\mathcal{F}_k\}$ satisfying
$\sum_{\ell=1}^\infty \prod_{k=1}^\ell L_{\mathcal{F}_k}<\infty$. \end{itemize}
In Case (i) we do not assume that the non-stationary subdivision scheme reproduces constants, nor do we assume that the masks $\{a^{[k]}\}$ converge to a limit mask. Therefore, the associated SFS maps do not necessarily map $Q^{n-1}$ to itself. We do assume that the non-stationary scheme is $C^0$-convergent.
\begin{theorem}\label{nsprop2} Let $\{S_{a^{[k]}}\}$ be a non-stationary $C^0$-convergent subdivision scheme, and let $\Sigma=\{\mathcal{F}_k\}_{k=1}^\infty$ be the SFS defined in (\ref{f1f2k}). Then the backward trajectories of $\Sigma$ starting with $A\subset Q^{n-1}$ converge to a unique attractor. The first $m$ components of the points of this attractor constitute the limit curve (in $\mathbb{R}^m$) of the non-stationary scheme defined in (\ref{nspk})-(\ref{pktopinfty}). \end{theorem} \begin{proof} Here we consider the SFS as mappings from $\mathbb{R}^n$ to itself. Since $\{S_{a^{[k]}}\}$ converges, it immediately follows from (\ref{PSSP}) that the backward trajectory of $\Sigma$ initialized with $A=P$ converges. We would like to show that all the backward trajectories of $\Sigma$ initialized with an arbitrary set of points $A\subset Q^{n-1}$ converge to the same limit. We recall that the first $m$ columns of $P$ are the control points $p^0$. Starting the backward trajectory of $\Sigma$ with $A=P$, it follows, as discussed in Remark \ref{PTP}, that an infinite sequence $\eta=\{i_k\}_{k=1}^\infty$, $i_k\in\{1,2\}$, defines a vector of $n$ equal points in $\mathbb{R}^m$ \begin{equation} q=\lim_{k\to \infty}S^{[k]}_{i_k}\cdots S^{[2]}_{i_2}S^{[1]}_{i_1}p^0=(q_\eta,...,q_\eta)^t,\ \ q_\eta=(q_{\eta,1},...,q_{\eta,m}), \end{equation} attached to a parameter value $x_\eta=\sum_{k=1}^\infty (i_k-1)2^{-k}$. Starting the backward trajectory with a general set $A$ in $Q^{n-1}$, and following the same sequence $\eta$, it follows from (\ref{ASSP}) that the limit is the matrix $AP^{-1}q$. We recall that the last column of $P$ is a constant vector of $1$'s. Since each column of $q$ is a constant vector of length $n$, and since $P^{-1}P=I_{n\times n}$, it follows that \begin{equation}\label{pm1q} P^{-1}q= \begin{pmatrix} 0&0&...&0\\ 0&0&...&0\\
& . & & . \\
& . & & . \\
0&0&...&0\\
q_{\eta,1}&q_{\eta,2}&...&q_{\eta,m}\\ \end{pmatrix}. \end{equation} For any row vector of the form $r=(r_1,r_2,...,r_{n-1},1)\in Q^{n-1}$, it follows from (\ref{pm1q}) that $rP^{-1}q=q_\eta$. If $A$ represents a set of $N$ points in $Q^{n-1}$, i.e., the $n$th element in each row of $A$ is $1$, it follows that $AP^{-1}q$ represents $N$ copies of the same point $q_\eta$. That is, for any sequence of indices $\eta$, the limit of the corresponding trajectory is the same for any initial $A\subset Q^{n-1}$, and it is the limit point of the non-stationary subdivision attached to the parameter value $x_\eta$. Comparing the trajectories displayed in (\ref{ASSP}) and (\ref{PSSP}), it follows that \begin{equation}\label{limeqlim} \lim_{k\to\infty}\mathcal{F}_{1}\circ\mathcal{F}_2\circ ...\circ\mathcal{F}_{k-1}\circ\mathcal{F}_k(A)=AP^{-1} \lim_{k\to\infty}\mathcal{F}_{1}\circ\mathcal{F}_2\circ ...\circ\mathcal{F}_{k-1}\circ\mathcal{F}_k(P). \end{equation} Interchanging the order of $\lim_{k\to\infty}$ and $\bigcup_{i_1,i_2,...,i_k\in \{1,2\}}$ we conclude that both trajectories converge to the same limit for any $A\subset Q^{n-1}$. \end{proof}
In Case (ii) we consider a non-stationary scheme $\{S_{a^{[k]}}\}$ with masks converging to a mask $a$, \begin{equation}\label{aktoa2} \lim_{k\to \infty}a^{[k]}_j=a_j,\ \ \ j\in\sigma(a), \end{equation} with $S_a$ a convergent stationary scheme. Thus \begin{equation}\label{fikktofi} \lim_{k\to \infty} f_{r,k}=f_r,\ \ \ r=1,2. \end{equation}
Following Corollaries \ref{SFSforward} and \ref{backSFS}, we are now ready to discuss the convergence of forward and backward trajectories of $\Sigma\equiv\{\mathcal{F}_k\}$.
\begin{corollary}\label{nscor1}{\bf Forward trajectories of $\{\mathcal{F}_k\}$:} Let $\{S_{a^{[k]}}\}$ have the constant reproducing property, with masks $\{a^{[k]}\}$ of the same support size converging to the mask of a $C^{0}$-convergent subdivision scheme $S_a$. Then the forward trajectories of the SFS $\{\mathcal{F}_k\}$ defined above converge to the attractor $P^\infty$ of the IFS related to $S_a$. \end{corollary} \begin{proof} Let $\mathcal{F}$ be the IFS related to $S_a$, and let $\{\mathcal{F}_k\}$ be the SFS related to the non-stationary scheme $\{S_{a^{[k]}}\}$. Following the proof of Theorem \ref{ThSLG}, there exists an $\ell$ such that the $\ell$-term composition of $\mathcal{F}$, namely, $\mathcal{G}=\mathcal{F}\circ\mathcal{F}\circ ... \circ\mathcal{F}$, is a contraction map. Let \begin{equation} \mathcal{G}_k=\mathcal{F}_{k\ell}\circ\mathcal{F}_{k\ell-1}\circ ... \circ\mathcal{F}_{(k-1)\ell+1},\ \ k\ge 1. \end{equation} Thus, $\mathcal{G}_k\to \mathcal{G}$ as $k\to\infty$, and $\exists K$ such that the maps $\{\mathcal{G}_k\}_{k\ge K}$ are contractive. In order to apply Corollary \ref{SFSforward} we need to show the existence of an invariant set $C$ for the maps $\{\mathcal{G}_k\}$. Applying Example \ref{Ex1} we derive the existence of an invariant set $C_K$ for the maps $\{\mathcal{G}_k\}_{k\ge K}$. $C_K$ is a ball of radius $r$ in $Q^{n-1}$, centered at $q=(0,0,...,0,1)^t$. By Remark \ref{remarkC}, any ball of radius $R>r$, centered at $q$, is also an invariant set of $\{\mathcal{G}_k\}_{k\ge K}$.
Using this observation in Corollary \ref{SFSforward} implies that all forward trajectories of $\{\mathcal{G}_k\}_{k\ge K}$ converge from any set in $Q^{n-1}$ to the attractor of $\mathcal{G}$.
In particular, for any set $A\subset Q^{n-1}$, we can start the forward trajectory of $\{\mathcal{G}_k\}_{k\ge K}$ with the set \begin{equation} \mathcal{G}_{K-1}\circ\mathcal{G}_{K-2}\circ ...\circ\mathcal{G}_{2}\circ\mathcal{G}_1(A), \end{equation} and conclude that all forward trajectories of $\{\mathcal{G}_k\}_{k\ge 1}$ converge from any initial set in $Q^{n-1}$ to the attractor of the IFS related to $S_a$. \end{proof}
\begin{remark}
\begin{enumerate} \item It is important to note that in case the non-stationary scheme does not reproduce constants, the result in Corollary \ref{nscor1} does not necessarily hold. To see this it is enough to consider the simple case where $S_i^{[k]}=S_i$, $i=1,2$, for $k\ge 2$, and only $S_1^{[1]}$ and $S_2^{[1]}$ are different, and the corresponding $S_{a^{[1]}}$ does not reproduce constants. Then, in view of the expression (\ref{FASSP}), the forward trajectory with $A=P$ converges to $S_1^{[1]}P^\infty\cup S_2^{[1]}P^\infty\ne P^\infty$, where $P^\infty$ is the attractor corresponding to the stationary subdivision with $S_1$ and $S_2$. \item The important conclusion from the above corollary is that forward trajectories of an SFS related to a non-stationary subdivision with masks converging to the mask of a $C^0$-convergent subdivision do not produce any new attractors.
On the other hand, the backward trajectories related to such non-stationary subdivision schemes do generate new interesting curves. See e.g. \cite{DL}.
\item Under the conditions of Corollary \ref{nscor1}, it is proved in \cite{CDMM} that the non-stationary subdivision $\{S_{a^{[k]}}\}$ is $C^0$-convergent. Therefore, by Theorem \ref{nsprop2} the backward trajectories of $\Sigma$ starting with $A\subset Q^{n-1}$ converge to a unique attractor. This result follows from Corollary \ref{backSFS} as well. \end{enumerate} \end{remark}
In Case (iii), the masks of the subdivision schemes $\{S_{a^{[k]}}\}$ do not have to converge to the mask of a $C^0$-convergent subdivision scheme. We still assume here that the non-stationary scheme reproduces constants, i.e., $(1,1,...,1)^t$ is an eigenvector of $S_1^{[k]}$ and $S_2^{[k]}$ with eigenvalue $1$, for $k\ge 1$. Let us denote by $\mu(S_{a^{[k]}})$ the maximal absolute value of the eigenvalues of $S_1^{[k]}$ and $S_2^{[k]}$ which differ from $1$.
\begin{corollary}\label{nscor5} Consider a constant reproducing non-stationary scheme $\{S_{a^{[k]}}\}$ and let $\{\mathcal{F}_k\}_{k=1}^\infty$ be the SFS defined by (\ref{f1f2k}). If $\ \sum_{\ell=1}^\infty \prod_{k=1}^\ell L_{\mathcal{F}_k}<\infty$ then: \begin{enumerate} \item All the backward trajectories of $\{\mathcal{F}_k\}$ converge to a unique attractor in $Q^{n-1}$. \item The first $m$ components of this attractor constitute the $h$-limit (in $\mathbb{R}^m$) of the scheme applied to the initial control polygon $p^0$. \end{enumerate} \end{corollary}
The proof follows directly from Corollary \ref{backSFS}.
\subsection{Numerical Examples}
\begin{example}(Case (i) and Case (ii)) For our first example we consider a non-stationary subdivision which produces exponential splines. It is convenient to view the mask coefficients $\{a_i\}$ of a subdivision scheme as the coefficients of a Laurent polynomial $$a(z)=\sum_ia_iz^i.$$ The subdivision mask for generating cubic polynomial splines is $$a(z)={(1+z)^4\over 8}={1\over 8}+{1\over 2}z+{3\over 4}z^2+{1\over 2}z^3+{1\over 8}z^4.$$ Following \cite{SLG}, the corresponding matrices $P$, $S_1$ and $S_2$, for $n=5$, are
$$ P= \begin{pmatrix} x_1&y_1&1&0&1\\ x_2&y_2&0&1&1\\ x_3&y_3&0&0&1\\ x_4&y_4&0&0&1\\ x_5&y_5&0&0&1\\ \end{pmatrix}, \ \ \ S_1= \begin{pmatrix} {1\over 2}&{1\over 2}&0&0&0\\ {1\over 8}&{3\over 4}&{1\over 8}&0&0\\ 0&{1\over 2}&{1\over 2}&0&0\\ 0&{1\over 8}&{3\over 4}&{1\over 8}&0\\ 0&0&{1\over 2}&{1\over 2}&0\\ \end{pmatrix}, \ \ \ S_2= \begin{pmatrix} 0&{1\over 2}&{1\over 2}&0&0\\ 0&{1\over 8}&{3\over 4}&{1\over 8}&0\\ 0&0&{1\over 2}&{1\over 2}&0\\ 0&0&{1\over 8}&{3\over 4}&{1\over 8}\\ 0&0&0&{1\over 2}&{1\over 2}\\ \end{pmatrix}. $$
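The spectral facts used in the proof of Theorem \ref{ThSLG} can be verified numerically for these matrices; the following short check (illustrative, using numpy) confirms that $S_1$ and $S_2$ fix the constant vector and that all their remaining eigenvalues have absolute value smaller than $1$.
\begin{verbatim}
import numpy as np

S1 = np.array([[1/2, 1/2, 0, 0, 0], [1/8, 3/4, 1/8, 0, 0], [0, 1/2, 1/2, 0, 0],
               [0, 1/8, 3/4, 1/8, 0], [0, 0, 1/2, 1/2, 0]])
S2 = np.array([[0, 1/2, 1/2, 0, 0], [0, 1/8, 3/4, 1/8, 0], [0, 0, 1/2, 1/2, 0],
               [0, 0, 1/8, 3/4, 1/8], [0, 0, 0, 1/2, 1/2]])
for S in (S1, S2):
    assert np.allclose(S @ np.ones(5), np.ones(5))   # eigenvalue 1, eigenvector of ones
    others = sorted(abs(np.linalg.eigvals(S)))[:-1]  # drop the leading eigenvalue 1
    assert max(others) < 1                           # the rest lie inside the unit disc
\end{verbatim}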
A related non-stationary subdivision is defined by the sequence of mask polynomials \begin{equation}\label{ak} a^{[k]}(z)={b_k(1+z)(1+c_kz)^3}, \ \ \ \text{with} \ \ c_k=\exp({\lambda 2^{-k-1}}),\ b_k=1/(1+c_k)^3. \end{equation} The non-stationary subdivision $\{S_{a^{[k]}}\}$ generates exponential splines with integer knots, piecewise spanned by $\{1,e^{\lambda x},xe^{\lambda x},x^2e^{\lambda x}\}.$ The matrices $S^{[k]}_1,S^{[k]}_2$ are
$$ S^{[k]}_1=b_k \begin{pmatrix} 3c_k^2+c_k^3&1+3c_k&0&0&0\\ c_k^3&3(c_k+c_k^2)&1&0&0\\ 0&3c_k^2+c_k^3&1+3c_k&0&0\\ 0&c_k^3&3(c_k+c_k^2)&1&0\\ 0&0&3c_k^2+c_k^3&1+3c_k&0\\ \end{pmatrix}, $$ $$ S^{[k]}_2=b_k \begin{pmatrix} 0&3c_k^2+c_k^3&1+3c_k&0&0\\ 0&c_k^3&3(c_k+c_k^2)&1&0\\ 0&0&3c_k^2+c_k^3&1+3c_k&0\\ 0&0&c_k^3&3(c_k+c_k^2)&1\\ 0&0&0&3c_k^2+c_k^3&1+3c_k\\ \end{pmatrix}. $$
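The level-dependent mask coefficients in (\ref{ak}) can be generated by expanding the Laurent polynomial; a short sketch (the helper name is hypothetical, not from the text):
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as pl

def exp_spline_mask(k, lam):
    c = np.exp(lam * 2.0 ** (-k - 1))
    b = 1.0 / (1.0 + c) ** 3
    return b * pl.polymul([1.0, 1.0], pl.polypow([1.0, c], 3))   # coefficients of z^0..z^4

# as k grows, c_k -> 1 and the mask tends to the cubic B-spline mask (1+z)^4/8
assert np.allclose(exp_spline_mask(30, 3.0), [1/8, 1/2, 3/4, 1/2, 1/8], atol=1e-6)
\end{verbatim}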
We observe that $\lim_{k\to \infty} c_k=1$, and thus $\lim_{k\to\infty}a^{[k]}= a$. The conditions for both Corollary \ref{nscor1} and Theorem \ref{nsprop2} are satisfied, and both forward and backward trajectories of $\{\mathcal{F}_k\}$ converge. The attractors of both forward and backward trajectories, for $\lambda=3$, are presented in Figure \ref{nsfig}. The symmetric set in Figure \ref{nsfig} is the attractor of the forward trajectory, which is a segment of the cubic polynomial B-spline, and the non-symmetric set is the attractor of the backward trajectory, which is part of the exponential B-spline.
\begin{figure}
\caption{Left: Forward trajectory limit - cubic spline\\
Right: Backward trajectory limit - exponential spline.}
\label{nsfig}
\end{figure} \end{example}
\begin{example}(Case (iii)). As we have learnt from Corollary \ref{backSFS}, backward SFS trajectories may converge under quite mild conditions. In particular, an SFS derived from a non-stationary subdivision process may converge even if it is not asymptotically equivalent to a converging stationary process. Let us consider the random non-stationary 4-point interpolatory subdivision process defined by the Laurent polynomials \begin{equation}\label{randomns} a^{[k]}(z)=-w_k(z^{-3}+z^3)+(0.5+w_k)(z^{-1}+z)+1, \end{equation} where $\{w_k\}_{k=1}^\infty$ are randomly chosen in an interval $I$. For the constant sequence $w_k=w$, this is the Laurent polynomial representing the stationary $4$-point scheme presented in \cite{DGL}. This random 4-point subdivision has been considered in \cite{Levin}, and it is shown there that the scheme is $C^1$-convergent for $w_k\in [\epsilon, 1/8-\epsilon]$. Here we study the convergence for a larger interval $I$. We define the SFS $\mathcal{F}_k=\big\{\mathbb{R}^n; f_{1,k}, f_{2,k} \big \}$ where $f_{1,k}, f_{2,k}$ are defined by (\ref{f1f2k}) with the corresponding matrices $S^{[k]}_1,S^{[k]}_2$ $$ S^{[k]}_1= \begin{pmatrix} 0&1&0&0&0&0\\ -w_k&0.5+w_k&0.5+w_k&-w_k&0&0\\ 0&0&1&0&0&0\\ 0&-w_k&0.5+w_k&0.5+w_k&-w_k&0\\ 0&0&0&1&0&0\\ 0&0&-w_k&0.5+w_k&0.5+w_k&-w_k\\ \end{pmatrix}, $$ $$ S^{[k]}_2= \begin{pmatrix} -w_k&0.5+w_k&0.5+w_k&-w_k&0&0\\ 0&0&1&0&0&0\\ 0&-w_k&0.5+w_k&0.5+w_k&-w_k&0\\ 0&0&0&1&0&0\\ 0&0&-w_k&0.5+w_k&0.5+w_k&-w_k\\ 0&0&0&0&1&0\\ \end{pmatrix}, $$ and $$ P= \begin{pmatrix} 0&2&1&0&0&1\\ 1&1&0&1&0&1\\ 2&1&0&0&1&1\\ 3&2&0&0&0&1\\ 2&4&0&0&0&1\\ 1&4&0&0&0&1\\ \end{pmatrix}. $$ \end{example}
Considering Corollary \ref{backSFS} about the convergence of backward SFS trajectories, we need the existence of a compact invariant set of $\{f_{r,i}\}$, and that $\sum_{k=1}^\infty \prod_{i=1}^k L_{\mathcal{F}_i}<\infty$. By numerical simulations we observe that for this example $\sum_{k=1}^\infty \prod_{i=1}^k L_{\mathcal{F}_i}<\infty$ is satisfied if $\{w_k\}$ are chosen according to a uniform random distribution in $I=[-b,b]$, with $0<b<0.86$. We further conclude that for $w_k\in I$ there exists $m$ such that for any $k\in \mathbb{N}$, $\prod_{i=k}^{k+m-1} L_{\mathcal{F}_i}<\mu<1$. Using Example \ref{Ex1} we can verify that there exists a compact invariant set of the linear maps $\{\mathcal{A}_i\}$, where $$\mathcal{A}_i=\mathcal{F}_i\circ \mathcal{F}_{i+1}\circ \cdots\circ \mathcal{F}_{i+m-1}.$$ By Corollary \ref{backSFS}, this guarantees the convergence of the backward trajectories of $\{\mathcal{A}_i\}$ to a unique attractor, and this implies the convergence of the backward trajectories of $\{\mathcal{F}_i\}$. Figures \ref{randw1}, \ref{randw2}, \ref{randw3} depict the convergence of the backward trajectories $\{\Psi_k(A)\}$ of $\{\mathcal{F}_i\}$ for $w_k\in [-0.2,0.2]$, $w_k\in [-0.4,0.4]$, $w_k\in [-0.8,0.8]$, respectively, and for $k=10,12,14$.
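Such a simulation can be sketched as follows. We stress the assumptions made here: $L_{\mathcal{F}_k}$ is read as the larger of the Lipschitz constants of $f_{1,k},f_{2,k}$ on $Q^{n-1}$, estimated by the spectral norm of the corresponding $G$-block of $P^{-1}S^{[k]}_rP$ (cf.\ the block structure in (\ref{PSP})); this is an interpretation for illustration, not a definition taken from the text.
\begin{verbatim}
import numpy as np

def S_pair(w):                      # the 6x6 matrices S^{[k]}_1, S^{[k]}_2 above
    r = lambda s: [0.0] * s + [-w, 0.5 + w, 0.5 + w, -w] + [0.0] * (2 - s)
    e = lambda s: [0.0] * s + [1.0] + [0.0] * (5 - s)
    return (np.array([e(1), r(0), e(2), r(1), e(3), r(2)]),
            np.array([r(0), e(2), r(1), e(3), r(2), e(4)]))

def lipschitz(P, S):                # spectral norm of the G-block of P^{-1} S P
    M = np.linalg.inv(P) @ S @ P
    return np.linalg.norm(M[:-1, :-1], 2)

def tail_sum(P, b, levels=200, seed=0):
    rng = np.random.default_rng(seed)
    total, prod = 0.0, 1.0
    for _ in range(levels):
        S1, S2 = S_pair(rng.uniform(-b, b))
        prod *= max(lipschitz(P, S1), lipschitz(P, S2))
        total += prod
    return total                    # truncation of sum_l prod_{k<=l} L_{F_k}

P = np.array([[0, 2, 1, 0, 0, 1], [1, 1, 0, 1, 0, 1], [2, 1, 0, 0, 1, 1],
              [3, 2, 0, 0, 0, 1], [2, 4, 0, 0, 0, 1], [1, 4, 0, 0, 0, 1]], float)
print(tail_sum(P, b=0.4))
\end{verbatim}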
\begin{figure}
\caption{$w_k\in [-0.2,0.2]$; \ \ Backward trajectories:\ \ $\Psi_k(A)$, $k=10,12,14$.}
\label{randw1}
\end{figure}
\begin{figure}
\caption{$w_k\in [-0.4,0.4]$; \ \ Backward trajectories:\ \ $\Psi_k(A)$, $k=10,12,14$.}
\label{randw2}
\end{figure}
\begin{figure}
\caption{$w_k\in [-0.8,0.8]$; \ \ Backward trajectories:\ \ $\Psi_k(A)$, $k=10,12,14$.}
\label{randw3}
\end{figure}
\end{document} |
\begin{document}
\title{Generic rigidity of reflection frameworks} \author{Justin Malestein\thanks{Temple University, \url{[email protected]}} \and Louis Theran\thanks{Institut für Mathematik, Diskrete Geometrie, Freie Universität Berlin, \url{[email protected]}}} \date{} \maketitle \begin{abstract} \begin{normalsize}
We give a combinatorial characterization of generic minimally rigid reflection frameworks. The main new idea is to study a pair of direction networks on the same graph such that one admits faithful realizations and the other has only collapsed realizations. In terms of infinitesimal rigidity, realizations of the former produce a framework and the latter certifies that this framework is infinitesimally rigid. \end{normalsize} \end{abstract}
\section{Introduction} \seclab{intro}
A \emph{reflection framework} is a planar structure made of \emph{fixed-length bars} connected by \emph{universal joints} with full rotational freedom. Additionally, the bars and joints are symmetric with respect to a reflection through a fixed axis. The allowed motions preserve the \emph{length} and \emph{connectivity} of the bars and \emph{symmetry} with respect to some reflection. This model is very similar to that of \emph{cone frameworks} that we introduced in \cite{MT11}; the difference is that the symmetry group $\mathbb{Z}/2\mathbb{Z}$ acts on the plane by reflection instead of rotation through angle $\pi$.
When all the allowed motions are Euclidean isometries, a reflection framework is \emph{rigid} and otherwise it is \emph{flexible}. In this paper, we give a \emph{combinatorial} characterization of minimally rigid, generic reflection frameworks.
\subsection{The algebraic setup and combinatorial model} Formally a reflection framework is given by a triple $(\tilde{G},\varphi,\tilde{\bm{\ell}})$, where $\tilde{G}$ is a finite graph, $\varphi$ is a $\mathbb{Z}/2\mathbb{Z}$-action on $\tilde{G}$ that is free on the vertices and edges, and $\tilde{\bm{\ell}} = (\ell_{ij})_{ij\in E(\tilde{G})}$ is a vector of non-negative \emph{edge lengths} assigned to the edges of $\tilde{G}$. A \emph{realization} $\tilde{G}(\vec p,\Phi)$ is an assignment of points $\vec p = (\vec p_i)_{i\in V(\tilde{G})}$ and a representation of $\mathbb{Z}/2\mathbb{Z}$ by a reflection $\Phi\in \operatorname{Euc}(2)$ such that: \begin{eqnarray}\eqlab{lengths-1}
||\vec p_j - \vec p_i||^2 = \ell_{ij}^2 & \qquad \text{for all edges $ij\in E(\tilde{G})$} \\ \eqlab{lengths-2} \vec p_{\varphi(\gamma)\cdot i} = \Phi(\gamma)\cdot\vec p_i & \qquad \text{for all $\gamma\in \mathbb{Z}/2\mathbb{Z}$ and $i\in V(\tilde{G})$} \end{eqnarray} The set of all realizations is defined to be the \emph{realization space} $\mathcal{R}(\tilde{G},\varphi,\bm{\ell})$ and its quotient by the Euclidean isometries $\mathcal{C}(\tilde{G},\varphi,\bm{\ell}) = \mathcal{R}(\tilde{G},\varphi,\bm{\ell})/\operatorname{Euc}(2)$ to be the configuration space. A realization is \emph{rigid} if it is isolated in the configuration space and otherwise \emph{flexible}.
As the combinatorial model for reflection frameworks it will be more convenient to use colored graphs. A \emph{colored graph} $(G,\bm{\gamma})$ is a finite, directed \footnote{For the group $\mathbb{Z}/2\mathbb{Z}$, the orientation of the edges does not play a role, but we give the standard definition for consistency.} graph $G$, with an assignment $\bm{\gamma} = (\gamma_{ij})_{ij\in E(G)}$ of an element of a group $\Gamma$ to each edge. In this paper $\Gamma$ is always $\mathbb{Z}/2\mathbb{Z}$. There is a standard dictionary \cite[Section 9]{MT11} associating $(\tilde{G},\varphi)$ with a colored graph $(G,\bm{\gamma})$: $G$ is the quotient of $\tilde{G}$ by $\Gamma$, and the colors encode the covering map via a natural map $\rho : \pi_1(G,b) \to \Gamma$. In this setting, the choice of base vertex does not matter, and indeed, we may define $\rho : \HH_1(G, \mathbb{Z})\to \mathbb{Z}/2\mathbb{Z}$ and obtain the same theory.
\subsection{Main Theorem} We can now state the main result of this paper. \begin{theorem}[\reflectionlaman]\theolab{reflection-laman} A generic reflection framework is minimally rigid if and only if its associated colored graph is reflection-Laman. \end{theorem} The \emph{reflection-Laman graphs} appearing in the statement are defined in \secref{matroid}. Genericity has its standard meaning from algebraic geometry: the set of non-generic reflection frameworks is a measure-zero algebraic set, and a small \emph{geometric} perturbation of a non-generic reflection framework yields a generic one.
\subsection{Infinitesimal rigidity and direction networks} As in all known proofs of ``Maxwell-Laman-type'' theorems such as \theoref{reflection-laman}, we give a combinatorial characterization of a linearization of the problem known as \emph{infinitesimal rigidity}. To do this, we use a \emph{direction network} method (cf. \cite{W88,ST10,MT10,MT11}). A \emph{reflection direction network} $(\tilde{G},\varphi,\vec d)$ is a symmetric graph, along with an assignment of a \emph{direction} $\vec d_{ij}$ to each edge. The \emph{realization space} of a direction network is the set of solutions $\tilde{G}(\vec p)$ to the system of equations: \begin{eqnarray} \eqlab{dn-realization1} \iprod{\vec p_j - \vec p_i}{\vec d^{\perp}_{ij}} = 0 & \qquad \text{for all edges $ij\in E(\tilde{G})$} \\ \eqlab{dn-realization2} \vec p_{\varphi(\gamma)\cdot i} = \Phi(\gamma)\cdot\vec p_i & \qquad \text{for all $\gamma\in \mathbb{Z}/2\mathbb{Z}$ and $i\in V(\tilde{G})$} \end{eqnarray} where the $\mathbb{Z}/2\mathbb{Z}$-action $\Phi$ on the plane is by reflection through the $y$-axis. A reflection direction network is determined by assigning a direction to each edge of the colored quotient graph $(G,\bm{\gamma})$ of $(\tilde{G},\varphi)$ (cf. \cite[Lemma 17.2]{MT11}). Since all the direction networks in this paper are reflection direction networks, we will refer to them simply as ``direction networks'' to keep the terminology manageable. A realization of a direction network is \emph{faithful} if none of the edges of its graph have coincident endpoints and \emph{collapsed} if all the endpoints are coincident.
A basic fact in the theory of finite planar frameworks \cite{W88,ST10,DMR07} is that, if a direction network has faithful realizations, the dimension of the realization space is equal to that of the space of infinitesimal motions of a generic framework with the same underlying graph. In \cite{MT10,MT11}, we adapted this idea to the symmetric case when all the symmetries act by rotations and translations.
As discussed in \cite[Section 1.8]{MT11}, this so-called ``parallel redrawing trick'' \footnote{This terminology comes from the engineering community, in which the basic idea has been folklore for quite some time.} described above does \emph{not} apply verbatim to reflection frameworks. Thus, we rely on the somewhat technical (cf. \cite[Theorem B]{MT10}, \cite[Theorem 2]{MT11}) \theoref{direction-network}, which we state after giving an important definition.
Let $(\tilde{G},\varphi,\vec d)$ be a direction network and define $(\tilde{G},\varphi,\vec d^{\perp})$ to be the direction network with $(\vec d^\perp)_{ij} = (\vec d_{ij})^\perp$. These two direction networks form a \emph{special pair} if: \begin{itemize} \item $(\tilde{G},\varphi,\vec d)$ has a faithful realization. \item $(\tilde{G},\varphi,\vec d^\perp)$ has only collapsed realizations. \end{itemize} \begin{theorem}[\linkeddirectionnetworks]\theolab{direction-network} Let $(G,\bm{\gamma})$ be a colored graph with $n$ vertices, $2n-1$ edges, and lift $(\tilde{G},\varphi)$. Then there are directions $\vec d$ such that the direction networks $(\tilde{G},\varphi,\vec d)$ and $(\tilde{G},\varphi,\vec d^\perp)$ are a special pair if and only if $(G,\bm{\gamma})$ is reflection-Laman. \end{theorem} Briefly, we will use \theoref{direction-network} as follows: the faithful realization of $(\tilde{G},\varphi,\vec d)$ gives a symmetric immersion of the graph $\tilde{G}$ that can be interpreted as a framework, and the fact that $(\tilde{G},\varphi,\vec d^\perp)$ has only collapsed realizations will imply that the only symmetric infinitesimal motions of this framework correspond to translation parallel to the reflection axis.
\subsection{Notations and terminology} In this paper, all graphs $G=(V,E)$ may be multi-graphs. Typically, the number of vertices, edges, and connected components are denoted by $n$, $m$, and $c$, respectively. The notation for a colored graph is $(G,\bm{\gamma})$, and a symmetric graph with a free $\mathbb{Z}/2\mathbb{Z}$-action is denoted by $(\tilde{G},\varphi)$. If $(\tilde{G},\varphi)$ is the lift of $(G,\bm{\gamma})$, we denote the fiber over a vertex $i\in V(G)$ by $\tilde{i}_\gamma$, with $\gamma\in \mathbb{Z}/2\mathbb{Z}$, and the fiber over a directed edge $ij$ with color $\gamma_{ij}$ by $\tilde{i}_\gamma \tilde{j}_{\gamma+\gamma_{ij}}$.
We also use \emph{$(k,\ell)$-sparse graphs} \cite{LS08} and their generalizations. For a graph $G$, a \emph{$(k,\ell)$-basis} is a maximal $(k,\ell)$-sparse subgraph; a \emph{$(k,\ell)$-circuit} is an edge-wise minimal subgraph that is not $(k,\ell)$-sparse; and a \emph{$(k,\ell)$-component} is a maximal subgraph that has a spanning $(k,\ell)$-graph.
Points in $\mathbb{R}^2$ are denoted by $\vec p_i = (x_i,y_i)$, indexed sets of points by $\vec p = (\vec p_i)$, and direction vectors by $\vec d$ and $\vec v$. Realizations of a reflection direction network $(\tilde{G},\varphi,\vec d)$ are written as $\tilde{G}(\vec p)$, as are realizations of abstract reflection frameworks. Context will always make clear the type of realization under consideration.
\section{Reflection-Laman graphs} \seclab{matroid}
In this short section we introduce the combinatorial families of sparse colored graphs we use.
\subsection{The map $\rho$} Let $(G,\bm{\gamma})$ be a $\mathbb{Z}/2\mathbb{Z}$-colored graph. Since all the colored graphs in this paper have $\mathbb{Z}/2\mathbb{Z}$ colors, from now on we make this assumption and write simply ``colored graph''. We recall two key definitions from \cite{MT11}.
The map $\rho : \HH_1(G, \mathbb{Z})\to \mathbb{Z}/2\mathbb{Z}$ is defined on cycles by adding up the colors on the edges. (The directions of the edges don't matter for $\mathbb{Z}/2\mathbb{Z}$ colors. Similarly, neither does the traversal order.) As the notation suggests, $\rho$ extends to a homomorphism from $\HH_1(G, \mathbb{Z})$ to $\mathbb{Z}/2\mathbb{Z}$, and it is well-defined even if $G$ is not connected.
\subsection{Reflection-Laman graphs} Let $(G,\bm{\gamma})$ be a colored graph with $n$ vertices and $m$ edges. We define $(G,\bm{\gamma})$ to be a \emph{reflection-Laman graph} if: the number of edges $m=2n-1$, and for all subgraphs $G'$, spanning $n'$ vertices, $m'$ edges, $c'$ connected components with non-trivial $\rho$-image and $c'_0$ connected components with trivial $\rho$-image \begin{equation}\eqlab{cone-laman} m'\le 2n' - c' - 3c'_0 \end{equation} This definition is equivalent to that of \emph{cone-Laman graphs} in \cite[Section 15.4]{MT11}. The underlying graph $G$ of a reflection-Laman graph is a $(2,1)$-graph.
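For small examples, the counts \eqref{cone-laman} can be checked by brute force. The following sketch (illustrative only, and exponential in the number of edges) represents a colored graph as a list of triples $(u,v,\gamma_{uv})$ and decides non-triviality of the $\rho$-image of a component by propagating parities along its edges.
\begin{verbatim}
from itertools import combinations

def component_counts(edges, verts):
    """Return (c', c'_0): components with non-trivial / trivial rho-image."""
    adj = {v: [] for v in verts}
    for u, v, c in edges:
        adj[u].append((v, c))
        adj[v].append((u, c))
    seen, nontrivial, trivial = {}, 0, 0
    for s in verts:
        if s in seen:
            continue
        seen[s] = 0
        stack, odd = [s], False
        while stack:
            u = stack.pop()
            for v, c in adj[u]:
                if v not in seen:
                    seen[v] = (seen[u] + c) % 2
                    stack.append(v)
                elif (seen[u] + c) % 2 != seen[v]:
                    odd = True               # a cycle with odd colour sum
        nontrivial += odd
        trivial += not odd
    return nontrivial, trivial

def is_reflection_laman(edges, n):
    if len(edges) != 2 * n - 1:
        return False
    for k in range(1, len(edges) + 1):
        for sub in combinations(edges, k):
            verts = {u for u, _, _ in sub} | {v for _, v, _ in sub}
            cp, c0 = component_counts(sub, verts)
            if len(sub) > 2 * len(verts) - cp - 3 * c0:
                return False
    return True

# a single vertex with a self-loop coloured 1 satisfies the counts
assert is_reflection_laman([(0, 0, 1)], n=1)
\end{verbatim}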
\subsection{Ross graphs and circuits} Another family we need is that of \emph{Ross graphs} (see \cite{BHMT11} for an explanation of the terminology). These are colored graphs with $n$ vertices, $m = 2n - 2$ edges, satisfying the sparsity counts \begin{equation}\eqlab{ross} m'\le 2n' - 2c' - 3c'_0 \end{equation} using the same notations as in \eqref{cone-laman}. In particular, Ross graphs $(G,\bm{\gamma})$ have as their underlying graph, a $(2,2)$-graph $G$, and are thus connected \cite{LS08}.
A \emph{Ross-circuit} \footnote{The matroid of Ross graphs has more circuits, but these are the ones we are interested in here. See \secref{reflection-22}.} is a colored graph that becomes a Ross graph after removing \emph{any} edge. The underlying graph $G$ of a Ross-circuit $(G,\bm{\gamma})$ is a $(2,2)$-circuit, and these are also known to be connected \cite{LS08}, so, in particular, a Ross-circuit has $c'_0=0$, and thus satisfies \eqref{cone-laman} on the whole graph. Since \eqref{cone-laman} is always at least \eqref{ross}, we see that every Ross-circuit is reflection-Laman.
Because reflection-Laman graphs are $(2,1)$-graphs and subgraphs that are $(2,2)$-sparse are, in addition, Ross-sparse, we get the following structural result. \begin{prop}[\xyzzy][{\cite[Proposition 5.1]{MT12},\cite[Lemma 11]{BHMT11}}]\proplab{ross-circuit-decomp} Let $(G,\bm{\gamma})$ be a reflection-Laman graph. Then each $(2,2)$-component of $G$ contains at most one Ross-circuit, and in particular, the Ross-circuits in $(G,\bm{\gamma})$ are vertex disjoint. \end{prop}
\subsection{Reflection-$(2,2)$ graphs}\seclab{reflection-22} The next family of graphs we work with is new. A colored graph $(G,\bm{\gamma})$ is defined to be a \emph{reflection-$(2,2)$} graph, if it has $n$ vertices, $m=2n-1$ edges, and satisfies the sparsity counts \begin{equation} \eqlab{ref22a} m' \le 2n' - c' - 2c'_0 \end{equation} using the same notations as in \eqref{cone-laman}.
The relationship between Ross graphs and reflection-$(2,2)$ graphs we will need is: \begin{prop} \proplab{ross-adding} Let $(G,\bm{\gamma})$ be a Ross-graph. Then for either \begin{itemize} \item an edge $ij$ with any color where $i \neq j$ \item or a self-loop $\ell$ at any vertex $i$ colored by $1$ \end{itemize} the graph $(G+ij,\bm{\gamma})$ or $(G+\ell,\bm{\gamma})$ is reflection-$(2,2)$. \end{prop} \begin{proof} Adding $ij$ with any color to a Ross $(G,\bm{\gamma})$ creates either a Ross-circuit, for which $c'_0=0$ or a Laman-circuit with trivial $\rho$-image. Both of these types of graph meet this count, and so the whole of $(G+ij,\bm{\gamma})$ does as well. \end{proof} It is easy to see that every reflection-Laman graph is a reflection-$(2,2)$ graph. The converse is not true. \begin{prop}\proplab{reflection-laman-vs-reflection-22} A colored graph $(G,\bm{\gamma})$ is a reflection-Laman graph if and only if it is a reflection-$(2,2)$ graph and no subgraph with trivial $\rho$-image is a $(2,2)$-block.
$\qed$ \end{prop} Let $(G,\bm{\gamma})$ be a reflection-Laman graph, and let $G_1,G_2,\ldots,G_t$ be the Ross-circuits in $(G,\bm{\gamma})$. Define the \emph{reduced graph} $(G^*,\bm{\gamma})$ of $(G,\bm{\gamma})$ to be the colored graph obtained by contracting each $G_i$, which is not already a single vertex with a self-loop (this is necessarily colored $1$), into a new vertex $v_i$, removing any self-loops created in the process, and then adding a new self-loop with color $1$ to each of the $v_i$. By \propref{ross-circuit-decomp} the reduced graph is well-defined. \begin{prop}\proplab{reduced-graph} Let $(G,\bm{\gamma})$ be a reflection-Laman graph. Then its reduced graph is a reflection-$(2,2)$ graph. \end{prop} \begin{proof} Let $(G,\bm{\gamma})$ be a reflection-Laman graph with $t$ Ross-circuits with vertex sets $V_1,\ldots,V_t$. By \propref{ross-circuit-decomp}, the $V_i$ are all disjoint. Now select a Ross-basis $(G',\bm{\gamma})$ of $(G,\bm{\gamma})$. The graph $G'$ is also a $(2,2)$-basis of $G$, with $2n-1 - t$ edges, and each of the $V_i$ spans a $(2,2)$-block in $G'$. The $(k,\ell)$-sparse graph Structure Theorem \cite[Theorem 5]{LS08} implies that contracting each of the $V_i$ into a new vertex $v_i$ and discarding any self-loops created, yields a $(2,2)$-sparse graph $G^+$ on $n^+$ vertices and $2n^+ - 1 - t$ edges. It is then easy to check that adding a self-loop colored $1$ at each of the $v_i$ produces a colored graph satisfying the reflection-$(2,2)$ counts \eqref{ref22a} with exactly $2n^+ -1$ edges. Since this is the reduced graph, we are done. \end{proof}
\subsection{Decomposition characterizations} A \emph{map-graph} is a graph with exactly one cycle per connected component. A \emph{reflection-$(1,1)$} graph is defined to be a colored graph $(G,\bm{\gamma})$ where $G$, taken as an undirected graph, is a map-graph and the $\rho$-image of each connected component is non-trivial. \begin{lemma}\lemlab{reflection-22-decomp} Let $(G,\bm{\gamma})$ be a colored graph. Then $(G,\bm{\gamma})$ is a reflection-$(2,2)$ graph if and only if it is the union of a spanning tree and a reflection-$(1,1)$ graph. \end{lemma} \begin{proof} By \cite[Lemma 15.1]{MT11}, reflection-$(1,1)$ graphs are equivalent to graphs satisfying \begin{equation} \eqlab{ref11a} m' \le n' - c'_0 \end{equation} for every subgraph $G'$. Thus, \eqref{ref22a} is \begin{equation}\eqlab{ref22redux} m' \le (n' - c'_0) + (n' - c' - c'_0) \end{equation} The second term in \eqref{ref22redux} is well-known to be the rank function of the graphic matroid, and the Lemma follows from the Edmonds-Rota construction \cite{ER66} and the Matroid Union Theorem. \end{proof} In the next section, it will be convenient to use this slight refinement of \lemref{reflection-22-decomp}. \begin{prop}\proplab{reflection-22-nice-decomp} Let $(G,\bm{\gamma})$ be a reflection-$(2,2)$ graph. Then there is a coloring $\bm{\gamma}'$ of the edges of $G$ such that: \begin{itemize} \item The $\rho$-image of every subgraph in $(G,\bm{\gamma}')$ is the same as in $(G,\bm{\gamma})$. \item There is a decomposition of $(G,\bm{\gamma}')$ as in \lemref{reflection-22-decomp} in which the spanning tree has all edges colored by the identity. \end{itemize} \end{prop} \begin{proof} It is shown in \cite[Lemma 2.2]{MT10} that $\rho$ is determined by its image on a homology basis of $G$. Thus, we may start with an arbitrary decomposition of $(G,\bm{\gamma})$ into a spanning tree $T$ and a reflection-$(1,1)$ graph $X$, as provided by \lemref{reflection-22-decomp}, and define $\bm{\gamma}'$ by coloring the edges of $T$ with the identity and the edges of $X$ with the $\rho$-image of their fundamental cycle in $T$ in $(G,\bm{\gamma})$. \end{proof} \propref{reflection-22-nice-decomp} has the following re-interpretation in terms of the symmetric lift $(\tilde{G},\varphi)$: \begin{prop}\proplab{reflection-laman-decomp-lift} Let $(G,\bm{\gamma})$ be a reflection-$(2,2)$ graph. Then for a decomposition, as provided by \propref{reflection-22-nice-decomp}, into a spanning tree $T$ and a reflection-$(1,1)$ graph $X$: \begin{itemize} \item Every edge $ij\in T$ lifts to the two edges $i_0j_0$ and $i_1j_1$. (In other words, the vertex representatives in the lift all lie in a single connected component of the lift of $T$.) \item Each connected component of $X$ lifts to a connected graph. \end{itemize} \end{prop}
\section{Special pairs of reflection direction networks} \seclab{direction-network}
We recall, from the introduction, that for reflection direction networks, $\mathbb{Z}/2\mathbb{Z}$ acts on the plane by reflection through the $y$-axis, and in the rest of this section $\Phi(\gamma)$ refers to this action.
\subsection{The colored realization system} The system of equations \eqref{dn-realization1}--\eqref{dn-realization2} defining the realization space of a reflection direction network $(\tilde{G},\varphi,\vec d)$ is linear, and as such has a well-defined dimension. Let $(G,\bm{\gamma})$ be the colored quotient graph of $(\tilde{G},\varphi)$.
To be realizable at all, the directions on the edges in the fiber over $ij\in E(G)$ need to be reflections of each other. Thus, we see that the realization system is canonically identified with the solutions to the system: \begin{eqnarray}\eqlab{colored-system} \iprod{\Phi(\gamma_{ij})\cdot\vec p_j - \vec p_i}{\vec d_{ij}} = 0 & \qquad \text{for all edges $ij\in E(G)$} \end{eqnarray} From now on, we will implicitly switch between the two formalisms when it is convenient.
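For concreteness, the system \eqref{colored-system} can be assembled as an $m\times 2n$ matrix in the coordinates of $\vec p$; a small sketch (the helper is hypothetical; $\Phi(1)$ acts as the reflection through the $y$-axis):
\begin{verbatim}
import numpy as np

REFLECT = np.diag([-1.0, 1.0])       # reflection through the y-axis

def realization_matrix(n, edges, directions):
    """edges: list of (i, j, gamma_ij); directions: list of 2-vectors d_ij."""
    M = np.zeros((len(edges), 2 * n))
    for row, ((i, j, g), d) in enumerate(zip(edges, directions)):
        Phi = REFLECT if g else np.eye(2)
        d = np.asarray(d, float)
        M[row, 2 * j:2 * j + 2] += Phi @ d    # coefficient of p_j (Phi is symmetric)
        M[row, 2 * i:2 * i + 2] -= d          # coefficient of p_i
    return M

# dimension of the solution space = 2n - rank; generic directions maximise the rank:
# rank = np.linalg.matrix_rank(realization_matrix(n, edges, directions))
\end{verbatim}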
\subsection{Genericity} Let $(G,\bm{\gamma})$ be a colored graph with $m$ edges. A statement about direction networks $(\tilde{G},\varphi,\vec d)$ is \emph{generic} if it holds on the complement of a proper algebraic subset of the possible direction assignments, which is canonically identified with $\mathbb{R}^{2m}$. Some facts about generic statements that we use frequently are: \begin{itemize} \item Almost all direction assignments are generic. \item If a set of directions is generic, then so are all sufficiently small perturbations of it. \item If two properties are generic, then their intersection is as well. \item The maximum rank of \eqref{colored-system} is a generic property. \end{itemize}
\subsection{Direction networks on Ross graphs} We first characterize the colored graphs for which generic direction networks have strongly faithful realizations. A realization is \emph{strongly faithful} if no two vertices lie on top of each other. This is a stronger condition than simply being faithful which only requires that edges not be collapsed. \begin{prop}\proplab{ross-realizations} A generic direction network $(\tilde{G},\varphi,\vec d)$ has a unique, up to translation and scaling, strongly faithful realization if and only if its associated colored graph is a Ross graph. \end{prop} To prove \propref{ross-realizations} we expand upon the method from \cite[Section 20.2]{MT11}, and use the following proposition. \begin{prop}\proplab{reflection-22-collapse} Let $(G,\bm{\gamma})$ be a reflection-$(2,2)$ graph. Then a generic direction network on the symmetric lift $(\tilde{G},\varphi)$ of $(G,\bm{\gamma})$ has only collapsed realizations. \end{prop} Since the proof of \propref{reflection-22-collapse} requires a detailed construction, we first show how it implies \propref{ross-realizations}. \subsection{Proof that \propref{reflection-22-collapse} implies \propref{ross-realizations}} Let $(G,\bm{\gamma})$ be a Ross graph, and assign directions $\vec d$ to the edges of $G$ such that, for any extension $(G+ij,\bm{\gamma})$ of $(G,\bm{\gamma})$ to a reflection-$(2,2)$ graph as in \propref{ross-adding}, $\vec d$ can be extended to a set of directions that is generic in the sense of \propref{reflection-22-collapse}. This is possible because there are a finite number of such extensions.
For this choice of $\vec d$, the realization space of the direction network $(\tilde{G},\varphi,\vec d)$ is $2$-dimensional. Since solutions to \eqref{colored-system} may be scaled or translated in the vertical direction, all solutions to $(\tilde{G},\varphi,\vec d)$ are related by scaling and translation. It then follows that a pair of vertices in the fibers over $i$ and $j$ are either distinct from each other in all non-zero solutions to \eqref{colored-system} or always coincide. In the latter case, adding the edge $ij$ with any direction does not change the dimension of the solution space, no matter what direction we assign to it. It then follows that the solution spaces of generic direction networks on $(\tilde{G},\varphi,\vec d)$ and $(\widetilde{G+ij},\varphi,\vec d)$ have the same dimension, which is a contradiction by \propref{reflection-22-collapse}.
$\qed$
\subsection{Proof of \propref{reflection-22-collapse}} It is sufficient to construct a specific set of directions with this property. The rest of the proof gives such a construction and verifies that all the solutions are collapsed. Let $(G,\bm{\gamma})$ be a reflection-$(2,2)$ graph.
\paragraph{Combinatorial decomposition} We apply \propref{reflection-22-nice-decomp} to decompose $(G,\bm{\gamma})$ into a spanning tree $T$ with all colors the identity and a reflection-$(1,1)$ graph $X$. For now, we further assume that $X$ is connected.
\paragraph{Assigning directions} Let $\vec v$ be a direction vector that is not horizontal or vertical. For each edge $ij\in T$, set $\vec d_{ij} = \vec v$. Assign all the edges of $X$ the vertical direction. Denote by $\vec d$ this assignment of directions. \begin{figure}
\caption{Schematic of the proof of \propref{reflection-22-collapse}: the $y$-axis is shown as a dashed line. The directions on the edges of the lift of the tree $T$ force all the vertices to be on one of the two lines meeting at the $y$-axis, and the directions on the reflection-$(1,1)$ graph $X$ force all the vertices to be on the $y$-axis.}
\label{fig:ref-22-collapse}
\end{figure} \paragraph{All realizations are collapsed} We now show that the only realizations of $(\tilde{G},\varphi,\vec d)$ have all vertices on top of each other. By \propref{reflection-laman-decomp-lift} $T$ lifts to two copies of itself, in $\tilde{G}$. It then follows from the connectivity of $T$ and the construction of $\vec d$ that, in any realization, there is a line $L$ with direction $\vec v$ such that every vertex of $\tilde{G}$ must lie on $L$ or its reflection. Since the vertical direction is preserved by reflection, the connectivity of the lift of $X$, again from \propref{reflection-laman-decomp-lift}, implies that every vertex of $\tilde{G}$ lies on a single vertical line, which must be the $y$-axis by reflective symmetry.
Thus, in any realization of $(\tilde{G},\varphi,\vec d)$ all the vertices lie at the intersection of $L$, the reflection of $L$ through the $y$-axis and the $y$-axis itself. This is a single point, as desired. \figref{ref-22-collapse} shows a schematic of this argument.
\paragraph{$X$ does not need to be connected} Finally, we can remove the assumption that $X$ was connected by repeating the argument for each connected component of $X$ separately.
$\qed$
\subsection{Special pairs for Ross-circuits} The full \theoref{direction-network} will reduce to the case of a Ross-circuit.
\begin{prop}\proplab{ross-circuit-pairs} Let $(G,\bm{\gamma})$ be a Ross-circuit with lift $(\tilde{G},\varphi)$. Then there is an edge $i'j'$ such that, for a generic direction network $(\tilde{G'},\varphi,\vec d')$ with colored graph $(G-i'j',\bm{\gamma})$: \begin{itemize} \item The solution space of $(\tilde{G'},\varphi,\vec d')$ induces a well-defined direction $\vec d_{i'j'}$ between $i'$ and $j'$, yielding an assignment of directions $\vec d$ to the edges of $G$. \item The direction networks $(\tilde{G},\varphi,\vec d)$ and $(\tilde{G},\varphi,(\vec d)^\perp)$ are a special pair. \end{itemize} \end{prop} Before giving the proof, we describe the idea. We are after sets of directions that lead to faithful realizations of Ross-circuits. By \propref{reflection-22-collapse}, these directions must be non-generic. A natural way to obtain such a set of directions is to discard an edge $ij$ from the colored quotient graph, apply \propref{ross-realizations} to obtain a generic set of directions $\vec d'$ with a strongly faithful realization $\tilde{G}'(\vec p)$, and then simply set the directions on the edges in the fiber over $ij$ to be the difference vectors between the points.
\propref{ross-realizations} tells us that this procedure induces a well-defined direction for the edge $ij$, allowing us to extend $\vec d'$ to $\vec d$ in a controlled way. However, it does \emph{not} tell us that the rank of $(\tilde{G},\varphi,\vec d)$ will rise when the directions are turned by angle $\pi/2$, and this seems hard to show directly. Instead, we construct a set of directions $\vec d$ so that $(\tilde{G},\varphi,\vec d)$ is rank deficient and has faithful realizations, and $(\tilde{G},\varphi,\vec d^\perp)$ is generic. Then we make a perturbation argument to show the existence of a special pair.
The construction we use is, essentially, the one used in the proof of \propref{reflection-22-collapse} but turned through angle $\pi/2$. The key geometric insight is that horizontal edge directions are preserved by the reflection, so the ``gadget'' of a line and its reflection crossing on the $y$-axis, as in \figref{ref-22-collapse}, degenerates to just a single line.
\subsection{Proof of \propref{ross-circuit-pairs}} Let $(G,\bm{\gamma})$ be a Ross-circuit; recall that this implies that $(G,\bm{\gamma})$ is a reflection-Laman graph.
\paragraph{Combinatorial decomposition} We decompose $(G,\bm{\gamma})$ into a spanning tree $T$ and a reflection-$(1,1)$ graph $X$ as in \propref{reflection-laman-decomp-lift}. In particular, we again have all edges in $T$ colored by the identity. For now, we \emph{assume that $X$ is connected}, and we fix $i'j'$ to be an edge that is on the cycle in $X$ with $\gamma_{i'j'}\neq 0$; such an edge must exist by the hypothesis that $X$ is reflection-$(1,1)$. Let $G' = G\setminus i'j'$. Furthermore, let $T_0$ and $T_1$ be the two connected components of the lift of $T$. For a vertex $i \in G$, we denote the lift in $T_0$ by $i_0$ and the lift in $T_1$ by $i_1$. We similarly denote the lifts of $i'$ and $j'$ by $i_0', i_1'$ and $j_0', j_1'$.
\paragraph{Assigning directions} The assignment of directions is as follows: to the edges of $T$, we assign a direction $\vec v$ that is neither vertical nor horizontal. To the edges of $X$ we assign the horizontal direction. Define the resulting direction network to be $(\tilde{G},\varphi,\vec d)$, and the direction network induced on the lift of $G'$ to be $(\tilde{G'},\varphi,\vec d)$.
\paragraph{The realization space of $(\tilde{G},\varphi,\vec d)$} \figref{ross-circuit-special-pair} contains a schematic picture of the arguments that follow. \begin{lemma}\lemlab{RC-proof-1} The realization space of $(\tilde{G},\varphi,\vec d)$ is $2$-dimensional and parameterized by exactly one representative in the fiber over the vertex $i'$ selected above. \end{lemma} \begin{proof} In a manner similar to the proof of \propref{reflection-22-collapse}, the directions on the edges of $T$ force every vertex to lie either on a line $L$ in the direction $\vec v$ or its reflection. Since the lift of $X$ is connected, we further conclude that all the vertices lie on a single horizontal line. Thus, all the points $\vec p_{j_0}$ are at the intersection of the same horizontal line and $L$ or its reflection. These determine the locations of the $\vec p_{j_1}$, so the realization space is parameterized by the location of $\vec p_{i'_0}$. \end{proof} Inspecting the argument more closely, we find that: \begin{lemma} In any realization $\tilde{G}(\vec p)$ of $(\tilde{G},\varphi,\vec d)$, all the $\vec p_{j_0}$ are equal and all the $\vec p_{j_1}$ are equal. \end{lemma} \begin{proof} Because the colors on the edges of $T$ are all zero, it lifts to two copies of itself, one of which spans the vertex set $\{\tilde{j_0} : j\in V(G)\}$ and one which spans $\{\tilde{j_1} : j\in V(G)\}$. It follows that in a realization, we have all the $\vec p_{j_0}$ on $L$ and the $\vec p_{j_1}$ on the reflection of $L$. Combined with the single horizontal line from the proof of \lemref{RC-proof-1}, this forces all the $\vec p_{j_0}$ to coincide at one intersection point and all the $\vec p_{j_1}$ to coincide at the other. \end{proof} In particular, because the color $\gamma_{i'j'}$ on the edge $i'j'$ is $1$, we obtain the following. \begin{lemma}\lemlab{RC-proof-5} The realization space of $(\tilde{G},\varphi,\vec d)$ contains points where the fiber over the edge $i'j'$ is not collapsed. \end{lemma} \begin{figure}
\caption{Schematic of the proof of \propref{ross-circuit-pairs}: the $y$-axis is shown as a dashed line. The directions on the edges of the lift of the tree $T$ force all the vertices to be on one of the two lines meeting at the $y$-axis. The horizontal directions on the connected reflection-$(1,1)$ graph $X$ force the point $\vec p_{j_0}$ to be at the intersection marked by the black dot and $\vec p_{j_1}$ to be at the intersection marked by the gray one.}
\label{fig:ross-circuit-special-pair}
\end{figure} \paragraph{The realization space of $(\tilde{G}',\varphi,\vec d)$} The conclusion of \lemref{RC-proof-1} implies that the realization system for $(\tilde{G},\varphi,\vec d)$ is rank deficient by one. Next we show that removing the edge $i'j'$ results in a direction network that has full rank on the colored graph $(G',\bm{\gamma})$. \begin{lemma}\lemlab{RC-proof-2} The realization space of $(\tilde{G},\varphi,\vec d)$ is canonically identified with that of $(\tilde{G}',\varphi,\vec d)$. \end{lemma} \begin{proof} In the proof of \lemref{RC-proof-1}, the fact that $X$ lifts to a connected subgraph of $\tilde{G}$ was not essential. Because a horizontal line is preserved by the reflection, realizations will take on the same structure provided that $X$ lifts to a subgraph with two connected components. Removing $i'j'$ from $X$ leaves a graph $X'$ with this property since $X'$ is a tree.
It follows that the equation corresponding to the edge $i'j'$ in \eqref{colored-system} was dependent. \end{proof}
\paragraph{The realization space of $(\tilde{G},\varphi,\vec d^\perp)$} Next, we consider what happens when we turn all the directions by $\pi/2$. \begin{lemma}\lemlab{RC-proof-3} The realization space of $(\tilde{G},\varphi,\vec d^\perp)$ has only collapsed solutions. \end{lemma} \begin{proof} This is exactly the construction used to prove \propref{reflection-22-collapse}. \end{proof}
\paragraph{Perturbing $(\tilde{G},\varphi,\vec d)$} To summarize what we have shown so far: \begin{itemize} \item[(a)] $(\tilde{G},\varphi,\vec d)$ has a $2$-dimensional realization space parameterized by $\vec p_{i'_0}$ and identified with that of a full-rank direction network on the Ross graph $(G',\bm{\gamma})$. \item[(b)] There are points $\tilde{G}(\vec p)$ in this realization space where $\vec p_{i'_0}\neq \vec p_{j'_1}$. \item[(c)] $(\tilde{G},\varphi,\vec d^\perp)$ has a $1$-dimensional realization space containing only collapsed solutions. \end{itemize} What we have not shown is that the realization space of $(\tilde{G},\varphi,\vec d)$ has \emph{faithful} realizations, since the ones we constructed all have many coincident vertices. \propref{ross-realizations} will imply the rest of the Proposition, provided that the above properties hold for any small perturbation of $\vec d$, since some small perturbation of \emph{any} assignment of directions to the edges of $(G',\bm{\gamma})$ has only faithful realizations. \begin{lemma}\lemlab{RC-proof-4} Let $\vec{ \hat d'}$ be a perturbation of the directions $\vec d'$ on the edges of $G'$. If $\vec{ \hat d'}$ is sufficiently close to $\vec d'$, then there are realizations of the direction network $(\tilde{G}',\varphi,\vec{ \hat d'})$ such that $\vec p_{i'_0}\neq \vec p_{j'_1}$. \end{lemma} \begin{proof} The realization space is parameterized by $\vec p_{i'_0}$, and so $\vec p_{j'_1}$ varies continuously with the directions on the edges and $\vec p_{i'_0}$. Since there are realizations of $(\tilde{G}', \varphi, \vec d)$ with $\vec p_{i'_0} \neq \vec p_{j'_1}$, the Lemma follows. \end{proof} \lemref{RC-proof-4} implies that any sufficiently small perturbation of the directions assigned to the edges of $G'$ gives a direction network that induces a well-defined direction on the edge $i'j'$, which is itself a small perturbation of $\vec d_{i'j'}$. Since the ranks of $(\tilde{G'},\varphi,\vec d')$ and $(\tilde{G},\varphi,\vec d^\perp)$ are stable under small perturbations, this implies that we can perturb $\vec d'$ to a $\vec{\hat d'}$ that is generic in the sense of \propref{ross-realizations}, while preserving faithful realizability of $(\tilde{G},\varphi,\hat{\vec d})$ and full rank of the realization system for $(\tilde{G},\varphi,\hat{\vec d}^\perp)$. The Proposition is proved in the case when $X$ is connected.
\paragraph{$X$ need not be connected} The proof is then complete once we remove the additional assumption that $X$ was connected. Let $X$ have connected components $X_1, X_2,\ldots,X_c$. For each of the $X_i$, we can identify an edge $(i'j')_k$ with the same properties as $i'j'$ above.
Assign directions to the tree $T$ as above. For $X_1$, we assign directions exactly as above. For each of the $X_k$ with $k\ge 2$, we assign the edges of $X_k\setminus (i'j')_k$ the horizontal direction and $(i'j')_k$ a direction that is a small perturbation of horizontal.
With this assignment $\vec d$, we see that in any realization of $(\tilde{G},\varphi,\vec d)$, each of the $X_k$ for $k\ge 2$ is realized as completely collapsed to a single point at the intersection of the line $L$ and the $y$-axis. Moreover, in the direction network on $\vec d^\perp$, the directions on these $X_k$ are a small perturbation of the ones used on $X$ in the proof of \propref{reflection-22-collapse}. From this it follows that any realization of $(\tilde{G},\varphi,\vec d^\perp)$ is completely collapsed, and hence the realization system has full rank.
We now see that this new set of directions has properties (a), (b), and (c) above required for the perturbation argument. Since that argument makes no reference to the decomposition, it applies verbatim to the case where $X$ is disconnected.
$\qed$
\subsection{Proof of \theoref{direction-network}} The easier direction to check is necessity. \paragraph{The Maxwell direction} If $(G,\bm{\gamma})$ is not reflection-Laman, then it contains either a Laman-circuit with trivial $\rho$-image, or a violation of $(2,1)$-sparsity. If there is a Laman-circuit with trivial $\rho$-image, the Parallel Redrawing Theorem \cite[Theorem 4.1.4]{W96} in the form \cite[Theorem 3]{ST10} implies that this subgraph has faithful realizations in $(\tilde{G},\varphi,\vec d)$ only if its equations in \eqref{colored-system} are rank-deficient; since the rank is the same for $\vec d$ and $\vec d^\perp$ on such a subgraph, the realization system of $(\tilde{G},\varphi,\vec d^\perp)$ is then rank-deficient as well. A violation of $(2,1)$-sparsity implies that the realization system \eqref{colored-system} of $(\tilde{G},\varphi,\vec d^\perp)$ has a dependency, since the realization space is always at least $1$-dimensional.
\paragraph{The Laman direction} Now let $(G,\bm{\gamma})$ be a reflection-Laman graph and let $(G',\bm{\gamma})$ be a Ross-basis of $(G,\bm{\gamma})$. For any edge $ij \notin G'$, adding it to $G'$ induces a Ross-circuit which contains some edge $i'j'$ having the property specified in \propref{ross-circuit-pairs}. Note that $G' - ij +i'j'$ is again a Ross-basis. We therefore can assume (after edge-swapping in this manner) for all $ij \notin G'$ that $ij$ has the property from \propref{ross-circuit-pairs} in the Ross-circuit it induces.
We assign directions $\vec d'$ to the edges of $G'$ such that: \begin{itemize} \item The directions on each of the intersections of the Ross-circuits with $G'$ are generic in the sense of \propref{ross-circuit-pairs}. \item The directions on the edges of $G'$ that remain in the reduced graph $(G^*,\bm{\gamma})$ are perpendicular to an assignment of directions on $G^*$ that is generic in the sense of \propref{reflection-22-collapse}. \item The directions on the edges of $G'$ are generic in the sense of \propref{ross-realizations}. \end{itemize} This is possible because the set of disallowed directions is the union of a finite number of proper algebraic subsets in the space of direction assignments. Extend to directions $\vec d$ on $G$ by assigning directions to the remaining edges as specified by \propref{ross-circuit-pairs}. By construction, we know that: \begin{lemma}\lemlab{laman-1} The direction network $(\tilde{G},\varphi,\vec d)$ has faithful realizations. \end{lemma} \begin{proof} The realization space is identified with that of $(\tilde{G'},\varphi,\vec d')$, and $\vec d'$ is chosen so that \propref{ross-realizations} applies. \end{proof} \begin{lemma}\lemlab{laman-2} In any realization of $(\tilde{G},\varphi,\vec d^{\perp})$, the Ross-circuits are realized with all their vertices coincident and on the $y$-axis. \end{lemma} \begin{proof} This follows from how we chose $\vec d$ and \propref{ross-circuit-pairs}. \end{proof} As a consequence of \lemref{laman-2}, and the fact that we picked $\vec d$ so that $\vec d^\perp$ extends to a generic assignment of directions $(\vec d^*)^\perp$ on the reduced graph $(G^*,\bm{\gamma})$, we have: \begin{lemma} The realization space of $(\tilde{G},\varphi,\vec d^\perp)$ is identified with that of $(\tilde{G^*},\varphi,(\vec d^*)^\perp)$, which, furthermore, contains only collapsed solutions. \end{lemma} Observe that a direction network for a single self-loop (colored $1$) with a generic direction only has solutions where the vertices in its fiber are collapsed and lie on the $y$-axis. Consequently, replacing a Ross-circuit with a single vertex and a self-loop yields isomorphic realization spaces. Since the reduced graph is reflection-$(2,2)$ by \propref{reduced-graph} and the directions assigned to its edges were chosen generically for \propref{reflection-22-collapse}, it follows that $(\tilde{G},\varphi,\vec d^\perp)$ has only collapsed solutions. Thus, we have exhibited a special pair, completing the proof.
$\qed$
\paragraph{Remark} It can be seen that the realization space of a direction network as supplied by \theoref{direction-network} has at least one degree of freedom for each edge that is not in a Ross-basis. Thus, the statement cannot be improved to, e.g., a unique realization up to translation and scale.
\section{Infinitesimal rigidity of reflection frameworks} \seclab{reflection-laman-proof}
Let $(\tilde{G},\varphi,\bm{\ell})$ be a reflection framework and let $(G,\bm{\gamma})$ be the quotient graph. The configuration space, which is the set of solutions to the quadratic system \eqref{lengths-1}--\eqref{lengths-2}, is canonically identified with the solutions to: \begin{eqnarray}\eqlab{colored-lengths}
||\Phi(\gamma_{ij})\cdot \vec p_j - \vec p_i||^2 = \ell^2_{ij} & \qquad \text{for all edges $ij\in E(G)$} \end{eqnarray} where $\Phi$ acts on the plane by reflection through the $y$-axis. (That ``pinning down'' $\Phi$ does not affect the theory is straightforward from the definition of the configuration space: it simply removes rotation and translation in the $x$-direction from the set of trivial motions.)
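Concretely, writing the two colors as $0$ and $1$ as elsewhere in this section, the only two values taken by $\Phi$ are
\[
\Phi(0)=\left(\begin{array}{rr} 1 & 0\\ 0 & 1\end{array}\right),
\qquad
\Phi(1)=\left(\begin{array}{rr} -1 & 0\\ 0 & 1\end{array}\right),
\]
so that $\Phi(\gamma_{ij})$ negates the $x$-coordinate on edges carrying the non-trivial color and acts as the identity otherwise.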
\subsection{Infinitesimal rigidity} Computing the formal differential of \eqref{colored-lengths}, we obtain the system \begin{eqnarray}\eqlab{colored-inf} \iprod{\Phi(\gamma_{ij})\cdot \vec p_j - \vec p_i}{\Phi(\gamma_{ij})\cdot \vec v_j - \vec v_i} = 0 & \qquad \text{for all edges $ij\in E(G)$} \end{eqnarray} where the unknowns are the \emph{velocity vectors} $\vec v_i$. A standard kind of result (cf. \cite{AR78}) is the following. \begin{prop}\proplab{ar-direction} Let $\tilde{G}(\vec p,\Phi)$ be a realization of an abstract framework $(\tilde{G},\varphi,\bm{\ell})$. If the corank of the system \eqref{colored-inf} is one, then $\tilde{G}(\vec p)$ is rigid. \end{prop} Thus, we define a realization to be \emph{infinitesimally rigid} if the system \eqref{colored-inf} has maximal rank, and \emph{minimally infinitesimally rigid} if it is infinitesimally rigid but ceases to be so after removing any edge from the colored quotient graph.
Infinitesimal rigidity is defined by a polynomial condition in the coordinates of the points $\vec p_i$, so it is a generic property associated with the colored graph $(G,\bm{\gamma})$.
\subsection{Relation to direction networks} Here is the core of the direction network method for reflection frameworks: we can understand the rank of \eqref{colored-inf} in terms of a direction network. \begin{prop}\proplab{rigidity-vs-directions} Let $\tilde{G}(\vec p,\Phi)$ be a realization of a reflection framework with $\Phi$ acting by reflection through the $y$-axis. Define the direction $\vec d_{ij}$ to be $\Phi(\gamma_{ij})\cdot \vec p_j - \vec p_i$. Then the rank of \eqref{colored-inf} is equal to that of \eqref{colored-system} for the direction network $(G,\bm{\gamma},\vec d^{\perp})$. \end{prop} \begin{proof} Exchange the roles of $\vec v_i$ and $\vec p_i$ in \eqref{colored-inf}. \end{proof}
\subsection{Proof of \theoref{reflection-laman}} The more difficult ``Laman direction'' of the Main Theorem follows immediately from \theoref{direction-network} and \propref{rigidity-vs-directions}: given a reflection-Laman graph, \theoref{direction-network} produces a realization with no coincident endpoints and a certificate that \eqref{colored-inf} has corank one.
$\qed$
\subsection{Remarks} The statement of \propref{rigidity-vs-directions} is \emph{exactly the same} as the analogous statement for orientation-preserving cases of this theory. What is different is that, for reflection frameworks, the rank of $(G,\bm{\gamma},\vec d^{\perp})$ is \emph{not} the same as that of $(G,\bm{\gamma},\vec d)$. By \propref{reflection-22-collapse}, the sets of directions arising as difference vectors of point sets are \emph{always non-generic} on reflection-Laman graphs, so we are forced to introduce the notion of a special pair as in \secref{direction-network}.
\end{document} |
\begin{document}
\title{Controlled deflection of cold atomic clouds and of Bose-Einstein condensates}
\author{N. Gaaloul}
\affiliation{Laboratoire de Spectroscopie Atomique, Mol\'{e}culaire et Applications, D\'{e}partement de Physique, Facult\'{e} des Sciences de Tunis, Universit\'{e} Tunis El Manar, 2092 Tunis, Tunisia.}
\affiliation{Laboratoire de Photophysique Mol\'{e}culaire du CNRS, Universit\'{e} Paris-Sud, B\^{a}timent 210, 91405 Orsay Cedex, France.}
\author{A. Jaouadi}
\affiliation{Laboratoire de Spectroscopie Atomique, Mol\'{e}culaire et Applications, D\'{e}partement de Physique, Facult\'{e} des Sciences de Tunis, Universit\'{e} Tunis El Manar, 2092 Tunis, Tunisia.}
\affiliation{Laboratoire de Photophysique Mol\'{e}culaire du CNRS, Universit\'{e} Paris-Sud, B\^{a}timent 210, 91405 Orsay Cedex, France.}
\author{L. Pruvost}
\affiliation{Laboratoire Aim\'{e} Cotton du CNRS, Universit\'{e} Paris-Sud, B\^{a}timent 505, 91405 Orsay Cedex, France.}
\author{M. Telmini}
\affiliation{Laboratoire de Spectroscopie Atomique, Mol\'{e}culaire et Applications, D\'{e}partement de Physique, Facult\'{e} des Sciences de Tunis, Universit\'{e} Tunis El Manar, 2092 Tunis, Tunisia.}
\author{E. Charron}
\affiliation{Laboratoire de Photophysique Mol\'{e}culaire du CNRS, Universit\'{e} Paris-Sud, B\^{a}timent 210, 91405 Orsay Cedex, France.}
\abstract{We present a detailed, realistic proposal and analysis of the implementation of a cold atom deflector using time-dependent far off-resonance optical guides. An analytical model and numerical simulations are used to illustrate its characteristics when applied to both non-degenerate atomic ensembles and to Bose-Einstein condensates. Using for all relevant parameters values that are achieved with present technology, we show that it is possible to deflect almost entirely an ensemble of $^{87}$Rb atoms falling in the gravity field. We discuss the limits of this proposal, and illustrate its robustness against non-adiabatic transitions.}
\pacs{37.10.Gh}
\maketitle
\section{Introduction} \label{sec:Intro}
Optical and magnetic fields are extremely efficient tools used for the controlled manipulation of large ensembles of cold atoms\,\cite{Adams_1994,Balykin_1995}. In the past fifteen years, cold matter waves have shown great possibilities in the context of linear atom optics, when phase-space densities are sufficiently low that the effect of collisions can be neglected. Dipole and radiation-pressure forces have for instance allowed the achievement of various optical manipulations such as atomic focusing, diffraction or interference\,\cite{Berman_1997,Meystre_2001}.
Many efforts have been recently devoted to the experimental implementation of atomic beam splitters with magnetic\,\cite{Muller_2000,Cassettari_2000,Muller_2001,Hommelhoff_2005} or optical\,\cite{Houde_2000,Hansel_2001,Dumke_2002} potentials. These different experimental investigations were accompanied by various theoretical studies\,\cite{Stickney_2003,Kreutzmann_2004,Bortolotti_2004,Gaaloul_2006,Zhang_2006}. These devices are obviously of clear interest for atom interferometry experiments. After the advent of Bose-Einstein condensation (BEC) in 1995\,\cite{Anderson_1995,Davis_1995}, different setups were designed in order to split and recombine a BEC\,\cite{Shin_2004,Wang_2005,Schumm_2005}. In this case, the experimental implementation is even more difficult since inter-atomic interactions due to high atomic densities in the wave-guides can sometimes not only induce the fragmentation of the BEC\,\cite{Stickney_2002,Gaaloul_2007}, but also affect the overall coherence of the system\,\cite{Chen_2003}.
In a recent paper we have derived a semi-classical mo\-del for the description of the splitting dynamics of a cold atomic cloud in such a device\,\cite{Gaaloul_2006}. This setup involves two crossing far off-resonant dipole guides [see Fig.\,\ref{fig:f1}(a)], and we have shown that a simple variation of the laser beam intensities allows one to control the splitting ratio in the two guides. In the present paper, we first show that if the vertical guide is switched off when the atomic cloud reaches the crossing point, this device becomes an efficient coherent atom deflector. We then extend this study to the quantum degenerate regime, in order to demonstrate the efficiency of this deflection setup with Bose-Einstein condensates.
\begin{figure}
\caption{(a) Schematic representation of the proposed optical deflector for cold atoms. The right inset is a magnification of the crossing region. The vertical position of the crossing point is $z=-h$, and the total transverse width of the oblique guide is equal to $2\ell_1$. (b) Timing of the magneto-optical trap (MOT) and of the vertical (V) and oblique (O) guides used in this setup. $t_c=\sqrt{2h/g}$ corresponds to the time at which the Rb atoms reach the crossing height $z=-h$.}
\label{fig:f1}
\end{figure}
As illustrated in Fig.\,\ref{fig:f1}(a), we use a setup involving two crossing far off-resonant dipole guides similar to the one of Ref.\,\cite{Houde_2000}. A large ensemble of $^{87}$Rb atoms is initially trapped and cooled around the position $z=0$ in Fig.\,\ref{fig:f1}(a). This trap is switched off at time $t=0$, while a vertical far off-resonant laser beam, crossing the cloud close to its center, is switched on. A significant portion of the atoms, falling due to gravity, is captured and guided in this vertical wave-guide\,\cite{Houde_2000}. When the center of the guided cloud reaches a given height $z=-h$, at time $t=t_c$, the vertical laser beam is switched off while a second oblique guide is switched on. This timing sequence is illustrated schematically in Fig.\,\ref{fig:f1}(b). The durations of the switching-on and -off procedures are assumed to be much shorter than the typical time scale of the fall dynamics. In spite of the high velocities achieved in this vertical fall, we will show that this setup allows for the implementation of an efficient deflector since the atoms can be diverted from their initial trajectory with no significant loss. This scheme is applied both to a thermal cloud of atoms and to an atomic condensate, after rescaling the whole problem to account for the difference in size between condensates and cold atomic clouds.
The outline of the paper is as follows: in Sec.\,\ref{sec:Pot} we discuss the properties of $^{87}$Rb atoms that are relevant for our analysis. We also give the values of typical laser parameters that realize this atom deflector. We describe briefly our semi-classical numerical model in Sec.\,\ref{sec:Model-cold}. In Sec.\,\ref{sec:Results-cold} we give the results of our numerical investigations on the performance of this setup with cold atomic clouds ($T\sim 10\,\mu$K). We show that a high efficiency ($\geqslant 90\%$) can be achieved with large deflection angles. We also discuss the adiabaticity of the deflection process. We then present in Sec.\,\ref{sec:Bec} a full quantum model designed to treat the dynamics of a BEC falling in the gravity field in the presence of these time-dependent guiding potentials. We then present the results of the numerical simulations with BECs, demonstrating the efficiency of the proposed setup in the quantum degenerate domain. Our conclusions are finally summarized in the last section.
\section{Guiding potentials} \label{sec:Pot}
During the guiding process and in the case of a large detuning, the atoms are subjected to a dipole force induced by the dipole potential \begin{equation} \label{eq:pot} {\cal U}({\textbf{\em r}}) = \frac{\hbar\Gamma}{2}\,\frac{I({\textbf{\em r}})/I_s}{4\delta/\Gamma}\,, \end{equation} where $\delta=\omega_L-\omega_0$ denotes the detuning between the laser frequency $\omega_L$ and the atomic transition frequency $\omega_0$. $I_s$ is the saturation intensity, and $\Gamma$ the natural linewidth of the atomic transition\,\cite{Phillips_1992,Grimm_2000}.
The atomic dynamics is supposed to take place in the $(x,z)$ plane defined by the two guides (see Fig.\,\ref{fig:f1}(a)) thanks to a strong confinement applied in the $y$-direction. The transverse intensity distribution of the TEM$_{00}$ vertical laser beam of power $P_0$ is approximated by the Gaus\-sian-like form \begin{equation} \label{eq:int} \begin{array}{lccl}
\textrm{if }|x| \leqslant \ell_0\textrm{ : } & I_0(x) & = & \displaystyle\frac{2P_0}{\pi w_0^2}\,\sin^2\left(\frac{\pi}{2}\,\frac{x-\ell_0}{\ell_0}\right)\,,\\
\textrm{if }|x| > \ell_0\textrm{ : } & I_0(x) & = & 0\,, \end{array} \end{equation} where the size $\ell_0$ of the vertical guide is simply related to the laser waist $w_0$ by the relation \begin{equation} \label{eq:size} \ell_0 = w_0 \sqrt{2\ln2} \sim 1.18\,w_0\,. \end{equation}
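The numerical factor in Eq.\,(\ref{eq:size}) has a simple interpretation, which we note here for completeness: for a Gaussian intensity profile $I(x)\propto\exp(-2x^2/w_0^2)$ one has
\[
I\!\left(\pm\frac{w_0}{2}\sqrt{2\ln 2}\right)=\frac{I(0)}{2}\,,
\]
so that $\ell_0$ is the full width at half maximum of the real beam, and the sine-squared well of Eq.\,(\ref{eq:U0}) below has the same width at half depth as the true Gaussian potential.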
This sine-squared shape, which is often used in time-dependent calculations\,\cite{Giusti-Suzor_1995}, is very close to the ideal Gaussian intensity distribution, except for the absence of the extended wings of the true Gaussian shape, which lengthen the calculations without noticeable contribution to the physical processes. With this sine-squared convention, the guiding region ($|x| \leqslant \ell_0$) is also well defined. The trapping potentials associated with the vertical and oblique laser guides are thus expressed as \begin{subequations} \label{eq:U0U1} \begin{eqnarray} \label{eq:U0}
{\cal U}_0(x) & = & -U_{0} \sin^{2}\left(\frac{\pi}{2}\,\frac{x -\ell_0}{\ell_0}\right)\textrm{~~for }|x| \leqslant \ell_0\\ \label{eq:U1}
{\cal U}_1(x,z) & = & -U_{1} \sin^{2}\left(\frac{\pi}{2}\,\frac{x'-\ell_1}{\ell_1}\right)\textrm{ for }|x'| \leqslant \ell_1 \end{eqnarray} \end{subequations} where $x'$ denotes the rotated coordinate $x' = x\cos\gamma+(z+h)\sin\gamma$.
Typical laser powers $P_0 \sim 5-30$\,W for a Nd:YAG laser operating at 1064\,nm with laser waists of about $100-300\,\mu$m yield potential depths of about $5-250$\,$\mu$K. With these laser parameters, the $^{87}$Rb transition to consider is the D$_1$\,: 5$^2$S$_{1/2}\rightarrow$\,5$^2$P$_{1/2}$, with a decay rate $\Gamma/2\pi \simeq 5.75$\,MHz, a saturation intensity $I_s \simeq 4.5$\,mW/cm$^2$ and a detuning $\delta/2\pi \simeq -95.4$\,THz. With these conditions, the Rayleigh range $z_R = \pi w_0^2 / \lambda$ is about 3\,cm, thus allowing us to neglect the divergence of the beam on a length up to about 1\,cm.
\section{Semi-classical model for cold atoms} \label{sec:Model-cold}
The guided atomic dynamics can be followed by solving numerically the time-dependent Schr\"{o}dinger equation for the atomic translational coordinates, taking into account the effect of the gravity field, and choosing realistic values for all laser parameters. We adopt a semi-classical approach where the $z$ coordinate is described classically, following \begin{subequations} \label{eq:z(t)} \begin{eqnarray} t \leqslant t_c :\;\; z_{cl}(t) & = & -gt^2/2\\ t > t_c :\;\; z_{cl}(t) & = & -g\big[t_c+(t-t_c)\cos\gamma\big]^2/2\,, \end{eqnarray} \end{subequations} where $t_c=(2h/g)^{1/2}$ is the time at which the atoms reach the crossing point (position \mbox{$z=-h$}). These equations of motion are obtained under the assumption of energy conservation for a classical particle which is perfectly deflected, and which therefore follows the paths blazed initially by the vertical beam and later on by the oblique guide. The other dimension $x$ is treated at the quantum level. This semi-classical approach was compared to the experimental study\,\cite{Houde_2000} in Ref.\,\cite{Gaaloul_2006}. In this approach, the two-dimensional guiding potentials\,(\ref{eq:U0U1}) can be replaced by the one-dimensional time-dependent potential \begin{subequations} \label{eq:U(x,t)} \begin{eqnarray} t \leqslant t_c :\;\; {\cal U}(x,t) & = & {\cal U}_0(x)\\ t > t_c :\;\; {\cal U}(x,t) & = & {\cal U}_1(x,z_{cl}(t))\,, \end{eqnarray} \end{subequations} and the quantum dynamics is now governed by the one-dimensional time-dependent Hamiltonian \begin{equation} \label{eq:H(x,t)} \hat{\mathcal{H}}(x,t) = -\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}} + {\cal U}(x,t)\,, \end{equation} where $m$ denotes the $^{87}$Rb atomic mass. The time-de\-pen\-dent Schr\"{o}dinger equation \begin{equation} \label{eq:TDSE} i\hbar \frac{\partial }{\partial t}\varphi (x,t)=\hat{\mathcal{H}}(x,t)\;\varphi (x,t)\,, \end{equation} is then solved using the numerical split operator technique of the short-time propagator\,\cite{Feit_1983}, assuming that the atom is initially ($t=0$) in a well-defined eigenstate $v$, of energy $E_v$, of the potential\,(\ref{eq:U0}) created by the vertical laser beam. In addition, it was shown in Ref.\,\cite{Gaaloul_2006} that the deflection probability obtained for the initial classical conditions $z(0)=0$ and $\dot{z}(0)=0$ is very close to the probability averaged over the entire atomic cloud. We therefore use these initial classical conditions in the present study.
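For reference, the short-time split-operator scheme\,\cite{Feit_1983} advances the wave function through the standard symmetric (Strang) factorization, where the exact ordering of the kinetic and potential factors is a matter of convention:
\[
\varphi(x,t+\delta t) \simeq e^{-\frac{i}{\hbar}\hat{T}\frac{\delta t}{2}}\; e^{-\frac{i}{\hbar}\,{\cal U}(x,t)\,\delta t}\; e^{-\frac{i}{\hbar}\hat{T}\frac{\delta t}{2}}\;\varphi(x,t) + {\cal O}(\delta t^3), \qquad \hat{T} = -\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}},
\]
the kinetic factors being applied in momentum space through fast Fourier transforms.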
At the end of the propagation, the final wave function $\varphi (x,t_{f})$ is analyzed spatially, in order to extract the deflection efficiency $\eta_D$. An averaging procedure over the set of all possible initial states finally allows us to calculate the total deflection probability $\langle\eta_D\rangle$ of the entire atomic cloud (see Sec.\,\ref{sec:multiv} hereafter for details).
\section{Numerical Results for cold atoms} \label{sec:Results-cold}
\subsection{Case of a single initial state}
In this study, the position $h$ of the crossing point between the two guides is the main parameter which controls the efficiency of the deflector. Indeed, for large values of $h$ the atoms reach the crossing point with a large kinetic energy $E_c=mgh$, and they will not be deflected if this energy far exceeds the binding energy in the oblique guide.
In order to predict precisely the largest value of the height $h$ allowing for atomic deflection, one should compare the kinetic energy gained by the atoms along the direction $x'$ transverse to the oblique guide at the position $z=-h-\ell_1/\sin\gamma$ [see Fig.\,\ref{fig:f1}(a)] with the binding energy $U_1-E_v$. The energy $E_v$ denotes here the energy of the initial vibrational state $v$. One can indeed expect that the deflection will fail if \begin{equation} \label{eq:critere} m g \left( h + \frac{\ell_1}{\sin\gamma} \right) \sin^2\gamma > U_1-E_v\,. \end{equation} In this expression, the $\sin^2\gamma$ factor originates from the fact that the oblique guide makes an angle $\gamma$ with the fall direction $z$, so that only the fraction $\sin\gamma$ of the fall velocity is directed along the transverse coordinate $x'$. The validity of this simple prediction is illustrated in Fig.\,\ref{fig:f2}, which represents the deflection probability $\eta_D$ as a function of $h$ [Fig.\,\ref{fig:f2}(a)] and of $w_1=\ell_1/\sqrt{2\ln2}$ [Fig.\,\ref{fig:f2}(b)], all other parameters being fixed. These probabilities are calculated numerically for the initial state $v=0$ and for $v=2094$, whose energy is about halfway in the optical potential $(E_v \simeq -U_0/2)$. In both graphs, the frontiers defined by the inequality\,(\ref{eq:critere}) are indicated by vertical dashed arrows. By comparison with the ``exact'' value obtained from the solution of the time-dependent Schr\"odinger equation\,(\ref{eq:TDSE}), one can notice that these frontiers correspond to a deflection probability of 50\%. This energy criterion, which simply compares the atomic kinetic energy with the binding energy in the oblique guide, can thus be used safely to predict the efficiency of this setup.
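To give an order of magnitude (the numbers below are only a consistency check of inequality\,(\ref{eq:critere}), with the transverse binding energy taken as $U_1-E_v\simeq U_1$ for $v=0$): for $^{87}$Rb one has $mg \simeq 0.10\,\mu$K$/\mu$m in temperature units, so that with $\gamma=10\,$deg, $w_1=100\,\mu$m ($\ell_1/\sin\gamma\simeq 0.68\,$mm) and $U_1=30\,\mu$K, the 50\% frontier lies at
\[
h_{50\%}\;\simeq\;\frac{U_1}{mg\,\sin^2\gamma}-\frac{\ell_1}{\sin\gamma}
\;\simeq\;\frac{30\,\mu\mathrm{K}}{0.10\,\mu\mathrm{K}/\mu\mathrm{m}\,\times\,0.030}-0.68\,\mathrm{mm}
\;\simeq\; 9\,\mathrm{mm},
\]
consistent with the positions of the dashed arrows in Fig.\,\ref{fig:f2}.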
\begin{figure}
\caption{Deflection probability $\eta_D$ as a function (a) of the falling distance $h$ [see Fig.\,\ref{fig:f1}] and (b) of the waist $w_1$ of the oblique laser beam. The deflection angle is equal to $\gamma = 10\,$deg. These results are for a single initial state\,: $v=0$ (solid line with red circles) or $v=2094$ (solid line with green squares). In both graphs, the dashed blue arrows mark the positions at which a deflection efficiency of 50\% is expected according to inequality\,(\ref{eq:critere}). The laser parameters have been chosen such that $U_0=U_1=30\,\mu$K, and $w_0=100\,\mu$m. This corresponds to $P/\delta \simeq 3.1 \times 10^{-4}\,$W/GHz. In graph (a) the oblique laser waist is $w_1=100\,\mu$m and in graph (b) the height $h$ is equal to 9.03\,mm for $v=0$ and to 4.18\,mm for $v=2094$.}
\label{fig:f2}
\end{figure}
One can also notice in Fig.\,\ref{fig:f2}(a) the different variations of $\eta_D$ with $h$ for $v=0$ and for $v=2094$. The different behavior of these two vibrational levels comes from the fact that $v=0$ is associated with a well-localized atomic wavefunction, deeply bound in an almost harmonic potential, while $v=2094$ is entirely delocalized over a large spatial range $|x|\leqslant\ell_0/2$, since its energy is about halfway in the potential. As a consequence, $v=0$ fully satisfies the conditions imposed by the Ehrenfest theorem\,\cite{Ehrenfest_1927} and its evolution can be described classically, while $v=2094$ shows quantum behavior. For $v=0$, as soon as the inequality\,(\ref{eq:critere}) is satisfied, the deflection probability falls to zero, in agreement with the usual dynamics of a classical particle. On the other hand, the stationary state $v=2094$ can be seen as a coherent superposition of incoming and outgoing wave packets characterized by a rather broad kinetic energy distribution of width $\Delta E_c \sim U_0/2$. The packet moving in the $+x$ direction will be easily captured by the oblique guide, while the packet moving in the opposite direction easily avoids this wave guide. These two different dynamics are not much affected by the exact value of the falling height $h$, and this explains the very slow variation of $\eta_D$ with $h$ in Fig.\,\ref{fig:f2}(a) for $v=2094$.
The variation of $\eta_D$ with $w_1$ [see Fig.\,\ref{fig:f2}(b)] is also opposite for $v=0$ and $v=2094$. The case $v=0$ can again be interpreted classically\,: when $w_1$ increases, the atoms can fall over a larger distance $d=(h+\ell_1/\sin\gamma)$, thus gaining a larger kinetic energy. This explains the decrease of $\eta_D$ with $w_1$ for $v=0$. This variation is just reversed in the case of $v=2094$. Here, the initial wave function is characterized by a large typical size $\Delta x \sim \ell_0$. An efficient deflector can thus only be obtained if the size of the oblique wave guide remains of the order of, or larger than, this typical size $\ell_0$. Consequently, for $v=2094$, when $w_1$ decreases below $w_0$, the deflection probability decreases, as seen in Fig.\,\ref{fig:f2}(b).
In addition, it is worth noting that, due to the $\sin^2\gamma$ factor in the inequality\,(\ref{eq:critere}), it is possible to induce an efficient deflection of atoms with relatively large kinetic energies using modest laser powers, as long as the angle $\gamma$ remains small. For instance, in the case $v=0$ shown in Fig.\,\ref{fig:f2}(a) for $\gamma = 10\,$deg, an almost perfect deflection is obtained for $h=8.5$\,mm, even though the total kinetic energy of the atom reaches then about $E_c \sim 900\,\mu$K, {\it i.e.} 30 times the depth of the oblique wave guide. A larger deflection angle could be achieved easily and with a very high efficiency by simply adding a succession of several deflection setups, each one inducing a small deflection of about 10\,degrees.
An important issue for the preservation of the coherence properties of an atomic cloud is the adiabaticity of the process. Previous theoretical studies have shown that similar beam splitter setups are able to conserve the coherence of the system even for a thermal distribution of atoms with an average energy far exceeding the level spacing of the transverse confinement\,\cite{Kreutzmann_2004}. This behavior results from the fact that non-adiabatic transitions are induced by the time derivative operator $d/dt$, which does not couple states of opposite parities, thus preventing nearest-neighbor transitions\,\cite{Hansel_2001}. Similarly, transitions to other states of the same parity as the initial state are also not favored since they involve larger energy differences\,\cite{Zhang_2006}.
As shown in Fig.\,\ref{fig:f3}, this robustness to non-adiabatic transitions is also present in our deflection scheme. This figure represents the probability distributions $\left|\varphi (x,t_{f})\right|^2$ calculated 7\,mm below the crossing point $z=-h$ for the initial state $v=0$, with $h=2$\,mm [Fig.\,\ref{fig:f3}(a)], $h=7$\,mm [Fig.\,\ref{fig:f3}(b)], and $h=9$\,mm [Fig.\,\ref{fig:f3}(c)]. The vibrational distributions obtained in the oblique guide after deflection are also shown in the small insets of Fig.\,\ref{fig:f3}(a) and\,\ref{fig:f3}(b). Even though the kinetic energy of the atoms exceeds the average vibrational spacing in the trap, the initial state $v=0$ is preserved at 99.1\% for $h=2$\,mm, and at 50.3\% for $h=7$\,mm. Indeed, in the first case, only $v=2$ is slightly populated, while the first five even vibrational levels are populated in the second case. It is only when the falling height $h$ approaches the limit given by the inequality\,(\ref{eq:critere}) that the population of the initial state $v=0$ is almost entirely redistributed to higher excited states, as seen in the wave function shown in Fig.\,\ref{fig:f3}(c).
\begin{figure}
\caption{Atomic probability distributions $\left|\varphi (x,t_{f})\right|^2$ as a function of the transverse coordinate $x$ at the end of the propagation, for (a) $h=2$\,mm, (b) $h=7$\,mm and (c) $h=9$\,mm. The laser parameters are identical to those of Fig.\,\ref{fig:f2}, with $v=0$ and $w_1=100\,\mu$m. Note that, for the sake of clarity, the horizontal axis has been broken in panel\,(c). The small insets in panels (a) and (b) represent the vibrational distributions in the oblique guide at the end of the propagation.}
\label{fig:f3}
\end{figure}
\subsection{Case of an initial vibrational distribution} \label{sec:multiv}
Realistically, an atomic cloud of typical size $\sigma_0$ and temperature $T_0$ can be described as a statistical mixture of trapped vibrational states, represented by the density matrix \begin{equation} \label{eq:thermal}
\rho(\sigma_0,T_0) = \sum_{v} c_v(\sigma_0,T_0) \; | v \rangle \langle v |\,, \end{equation}
where the coefficients $c_v(\sigma_0,T_0)$ are involved functions of the cloud parameters $\sigma_0$ and $T_0$ and of the wave guide parameters $U_0$ and $w_0$ (see, for instance, Eq.\,(16) in Ref.\,\cite{Gaaloul_2006}). The calculation of the total deflection probability of the entire cloud therefore requires incoherently averaging the deflection probabilities of all possible initial vibrational levels $v$, taking into account the weight functions $c_v(\sigma_0,T_0)$. It is also worth noting that typical initial vibrational distributions $P(v)=|c_v(\sigma_0,T_0)|^2$ are relatively flat when $k_B T_0 \sim U_0$, except for the lowest energy levels, which are usually more populated\,\cite{Gaaloul_2006}. In the calculation, we include all populated vibrational states.
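Explicitly, denoting by $\eta_D(v)$ the deflection probability computed for a single initial level $v$ as in Sec.\,\ref{sec:Results-cold}, the total deflection probability introduced above is the weighted average
\[
\left<\eta_D\right> \;=\; \sum_{v} P(v)\,\eta_D(v), \qquad P(v)=\left|c_v(\sigma_0,T_0)\right|^2.
\]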
\begin{figure}
\caption{Deflection probability $\eta_D$ as a function of the initial vibrational level $v$. (a) The deflection angle is equal to $\gamma=10\,$deg, and the solid line with red circles stands for $U_1=30\,\mu$K while the solid line with green squares is for $U_1=25\,\mu$K. (b) The depth of the oblique wave guide is equal to $U_1=30\,\mu$K, and the solid line with red circles stands for $\gamma=10\,$deg while the solid line with green squares is for $\gamma=13\,$deg. The falling height is $h=4$\,mm and the oblique laser waist is equal to $w_1=100\,\mu$m. All other parameters are identical to those of Fig.\,\ref{fig:f2}.}
\label{fig:f4}
\end{figure}
Fig.\,\ref{fig:f4} shows the variation of the deflection efficiency with the initial vibrational level $v$ for a series of different laser parameters. The transverse trapping potential associated with the vertical wave guide supports about 5000 vibrational states when $U_0=30\,\mu$K and $w_0=100\,\mu$m. One can notice a general tendency towards lower deflection probabilities as $v$ increases, in perfect agreement with the variation expected from the energy criterion\,(\ref{eq:critere}). In addition, one can notice that increasing $U_1$ [Fig.\,\ref{fig:f4}(a)] or decreasing $\gamma$ [Fig.\,\ref{fig:f4}(b)] increases the deflection probability of any initial state. In Fig.\,\ref{fig:f4}, the vertical dashed arrows indicate the limits defined by the inequality\,(\ref{eq:critere}), which are again in good agreement with the numerical values. One can also notice that the highest levels $v \simeq 5000$ are not deflected. This is due to the fact that atoms trapped in these levels, whose energies are very close to the threshold, are easily lost during the deflection process.
\begin{figure}
\caption{Total deflection probability $\left<\eta_D\right>$ of an atomic cloud of $^{87}$Rb of size $\sigma_0=0.15$\,mm at temperature $T_0=10\,\mu$K. The laser parameters have been chosen such that $U_0=30\,\mu$K, $w_0=200\,\mu$m, $w_1=158\,\mu$m, and $h=4$\,mm (a) or $h=1$\,mm (b).}
\label{fig:f5}
\end{figure}
Fig.\,\ref{fig:f5} represents the averaged deflection probability $\langle \eta_D \rangle$ as a function of the deflection angle $\gamma$ and of the potential depth $U_1$ of the oblique laser guide, for a thermal input state of size $\sigma_0=0.15\,$mm and temperature $T_0=10\,\mu$K, with $h=4\,$mm [Fig.\,\ref{fig:f5}(a)] and $h=1\,$mm [Fig.\,\ref{fig:f5}(b)]. Realistic values have been chosen for all laser parameters, close to those used in the experimental study\,\cite{Houde_2000}, and the coefficients $c_v(\sigma_0,T_0)$ of Eq.\,(\ref{eq:thermal}) were calculated following Ref.\,\cite{Gaaloul_2006}. One can notice a rapid decrease of $\langle \eta_D \rangle$ when $U_1$ decreases and when $\gamma$ increases. However, an almost complete deflection (93.8\%) is still observed in the case $h=1\,$mm with $\gamma=25\,$deg and $U_1=120\,\mu$K, even though the total kinetic energy of the atoms then reaches about $E_c \sim 100\,\mu$K at the crossing point, all trapped states being significantly populated initially. For $\gamma=10\,$deg, the deflection efficiency reaches 99.8\%. We have also verified that decreasing the temperature of the initial cloud increases significantly the deflection efficiency since it suppresses the population of the highest trapped states, for which the deflection process is less efficient [see Fig.\,\ref{fig:f4}]. It is also worth noting that since the deflection process is less efficient for the highest trapped levels, it could also be used to selectively separate the lowest energy levels of the trap. Since it behaves very well for the lowest trapped states, we expect that this setup will prove useful with Bose-Einstein condensates. We therefore derive in the next section a quantum model aimed at the description of the dynamics of a Bose gas in such a deflection setup.
\section{Deflection of Bose-Einstein condensates} \label{sec:Bec}
\subsection{Theoretical model} \label{sec:Theo-BEC}
From the theoretical point of view, in the case of a low density the dynamics of the macroscopic wave function $\Psi(\mathbi{r},t)$ of a Bose-Einstein condensate can be accurately described by the mean-field Gross-Pitaevskii equation\,\cite{Gross_1961,Pitaevskii_1961,Gross_1963}. In three dimensions and in the presence of both a time-dependent external potential $V(\mathbi{r},t)$ and the gravity field this equation reads \begin{equation} \label{eq:3DGPE}
i\hbar\frac{\partial\Psi}{\partial t} = \Big[ -\frac{\hbar^2}{2m} \nabla^2_{\!r} + V(\mathbi{r},t) + mgz + NU_0\left|\Psi\right|^2 \Big]\Psi\,, \end{equation}
where $U_0=4\pi\hbar^2a_0/m$ is the interaction strength (coupling constant) and $a_0$ the $s$-wave scattering length. $N$ denotes the condensate number and $N U_0 \left|\Psi(\mathbi{r},t)\right|^2$ is the mean field interaction energy. The three-dimensional coordinate is denoted by \mbox{$\mathbi{r} \equiv (x,y,z)$}.
In the absence of a trapping potential in the $z$-di\-rec\-tion, the condensate will not only expand but also fall around the average classical height $z_{cl}(t)=-gt^2/2$. Since the de Broglie wavelength of the BEC is no longer negligible compared to the characteristic distances of the problem, a quantum treatment of this direction is necessary, unlike in the thermal-atom case discussed in the first part of the paper. The simulation of the fall dynamics is thus greatly facilitated when done in the moving frame \mbox{$\mathbi{R} \equiv (X,Y,Z)$}, where \begin{equation} \label{eq:frame} \mathbi{R}=\mathbi{r}+\frac{1}{2}\,gt^2\,\mathbi{u}_z\,, \end{equation} using the unitary transformation \begin{equation} \label{eq:tranf} \Xi(\mathbi{R},t) = \exp\left[\,i\;\frac{mgt}{\hbar}\left(z+\frac{gt^2}{6}\right)\right] \Psi(\mathbi{r},t)\,. \end{equation} Indeed, applying this transformation yields a simplified Gross-Pitaevskii equation \begin{equation} \label{eq:3DGPEs}
i\hbar\frac{\partial\Xi}{\partial t} = \Big[ -\frac{\hbar^2}{2m} \nabla^2_{\!R} + V(\mathbi{R},t) + NU_0\left|\Xi\right|^2 \Big] \Xi\,, \end{equation} where the gravitational term $mgz$ has vanished.
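The cancellation can be checked directly. Writing $\Psi=\exp[-iS/\hbar]\,\Xi$ with $S(z,t)=mgt\,(z+gt^{2}/6)$ the phase of Eq.\,(\ref{eq:tranf}), one has
\[
\frac{\partial S}{\partial t}=mg\left(z+\frac{gt^{2}}{2}\right)=mgZ,
\qquad
\frac{\partial S}{\partial z}=mgt,
\qquad
\frac{1}{2m}\left(\frac{\partial S}{\partial z}\right)^{2}=\frac{mg^{2}t^{2}}{2}\,,
\]
so that the term $mgZ$ generated by the time derivative of the phase compensates the sum of the gravitational term $mgz$ and of the kinetic contribution $mg^{2}t^{2}/2$ produced by the phase, while the first-order cross term $i\hbar\,gt\,\partial_Z\Xi$ generated by the Laplacian acting on the phase is matched by the convective derivative term $i\hbar\,(\partial Z/\partial t)\,\partial_Z\Xi=i\hbar\,gt\,\partial_Z\Xi$ arising from the frame change $Z=z+gt^{2}/2$.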
Following the variational approach of Ref.\,\cite{Salasnich_2002}, we now assume (as in Sec. \ref{sec:Model-cold} and \ref{sec:Results-cold}) a strong harmonic confinement in the perpendicular $Y$-direction, with \begin{equation} \label{eq:poty} V(\mathbi{R},t) = \frac{1}{2} m \omega_\perp^2 Y^2 + V_\parallel(X,Z,t)\,, \end{equation} where $V_\parallel(X,Z,t)$ denotes the optical guiding potential \begin{subequations} \label{eq:potpar} \begin{eqnarray} t \leqslant t_c :\;\; V_\parallel(X,Z,t) & = & {\cal U}_0(X)\\ t > t_c :\;\; V_\parallel(X,Z,t) & = & {\cal U}_1(X,Z-gt^2/2)\,. \end{eqnarray} \end{subequations} The confinement along the perpendicular direction $Y$ is supposed to be much stronger than along the parallel directions $X$ and $Z$, thus yielding the conditions \begin{equation} \label{eq:confine} \omega_\perp \gg \left[\frac{4U_{0}}{mw_{0}^2}\right]^{\frac{1}{2}} \quad\textrm{and}\quad \omega_\perp \gg \left[\frac{4U_{1}}{mw_{1}^2}\right]^{\frac{1}{2}}\,. \end{equation} The condensate dynamics is now followed using the appropriate ansatz\,\cite{Salasnich_2002,Jackson_1998} \begin{equation} \label{eq:trialwf}
\Xi(\mathbi{R},t) = \Phi(X,Z,t)\;f\big(Y|\Omega\big)\,, \end{equation} where \begin{equation} \label{eq:trialwfG}
f\big(Y|\Omega\big) = \frac{e^{-\frac{1}{2}\frac{Y^2}{\Omega^2}}}{\pi^{\frac{1}{4}}\Omega^{\frac{1}{2}}}\,. \end{equation} This choice amounts to assuming a Gaussian shape of the wave function in the $Y$-direction, characterized by a time-dependent width $\Omega(X,Z,t)$. This width varies slowly along the parallel directions, thus implying \begin{equation} \nabla^2_{\!R} f \simeq \partial^2 f/\partial Y^2\,. \end{equation} It has been shown that this choice is well justified not only in the limit of weak interatomic couplings but also with large condensate numbers\,\cite{Perez-Garcia_1996,Perez-Garcia_1997,Parola_1998}.
An effective two-dimensional non-linear wave equation is then derived using the quantum least action principle\,\cite{Schiff_1968,Salasnich_2002} for $\Phi(X,Z,t)$ \begin{eqnarray} \label{eq:2DGPE} i\hbar\frac{\partial\Phi}{\partial t} & = & \Big[ -\frac{\hbar^2}{2m} \nabla^2_{\parallel} + V_\parallel +
\frac{N\,U_0}{\sqrt{2\pi}}\,\frac{\left|\Phi\right|^2}{\Omega}\nonumber\\
& & \qquad + \frac{1}{4}\left(\frac{\hbar^2}{m\Omega^2} + m\omega_\perp^2\Omega^2\right) \Big] \Phi\,. \end{eqnarray} This equation describes the condensate dynamics in the $(X,Z)$ plane, with an accuracy which goes beyond the usual two-dimensional Gross-Pitaevskii equation. It takes into account the influence of the dynamics along the perpendicular direction on the evolution of $\Phi(X,Z,t)$ with the introduction of the width parameter $\Omega(X,Z,t)$. Note the difference by a factor $1/2$ in the last two terms of Eq.(\ref{eq:2DGPE}) when compared to Eq.(25) of Ref.\,\cite{Salasnich_2002}, due to a misprint in Ref.\,\cite{Salasnich_2002}. The least action variational principle also yields the following quartic equation governing the evolution of this parameter \begin{equation} \label{eq:Omega}
\left(\frac{1}{2}m\omega_\perp^2-\frac{2V_\parallel}{w^2(t)}\right)\Omega^4-\frac{N\,U_0}{2\sqrt{2\pi}}\left|\Phi\right|^2\,\Omega-\frac{\hbar^2}{2m}=0\,, \end{equation} where $w(t)=w_0$ for $t \leqslant t_c$ and $w(t)=w_1$ for $t > t_c$. This last equation was obtained assuming $\Omega(X,Z,t) \ll w(t)$ for all $X$, $Z$ and $t$, in agreement with the strong confinement in $Y$. Compared to Eq.(26) of Ref.\,\cite{Salasnich_2002}, the additional term $2V_\parallel/w^2(t)$ is a small correction due to the $Y$-dependence of the TEM$_{00}$ laser intensity profile.
The time-dependent wave equation\,(\ref{eq:2DGPE}) is solved numerically using the splitting technique of the short-time propagator\,\cite{Feit_1983}, while the quartic equation\,(\ref{eq:Omega}) for $\Omega$ is solved at each time step and for each coordinate grid point $(X,Z)$ using an efficient numerical algorithm\,\cite{Hacke_1941}. The initial wave function is obtained using the imaginary time relaxation technique\,\cite{Kosloff_cpl}, and the three-dimensional condensate wave function $\Psi(\mathbi{r},t)$ is reconstructed at the end of the propagation by inverting the transformations\,(\ref{eq:frame}) and\,(\ref{eq:tranf}).
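To make the procedure concrete, the short sketch below (in Python; it is only an illustration, not the code used in this work, and it relies on \texttt{numpy.roots} rather than on the quartic algorithm of Ref.\,\cite{Hacke_1941}) selects the physical root of Eq.\,(\ref{eq:Omega}) at a single grid point; all variable and function names are ours.
\begin{verbatim}
import numpy as np

def width_parameter(V_par, dens, w, m, omega_perp, N, U0,
                    hbar=1.054571817e-34):
    """Illustrative solver for the quartic equation at one grid point.

    V_par : local guiding potential V_parallel(X, Z, t)          [J]
    dens  : local condensate density |Phi(X, Z, t)|**2           [1/m**2]
    w     : current laser waist (w0 for t <= t_c, w1 afterwards) [m]
    U0    : interaction constant 4*pi*hbar**2*a0/m               [J m**3]
    Returns the physical (real, positive) root Omega             [m]
    """
    a = 0.5 * m * omega_perp**2 - 2.0 * V_par / w**2   # Omega**4 coefficient
    b = -N * U0 * dens / (2.0 * np.sqrt(2.0 * np.pi))  # Omega**1 coefficient
    c = -hbar**2 / (2.0 * m)                           # constant term
    roots = np.roots([a, 0.0, 0.0, b, c])
    # With a > 0, b <= 0 and c < 0 the quartic has exactly one real positive
    # root; it reduces to sqrt(hbar/(m*omega_perp)) when dens and V_par -> 0.
    phys = [r.real for r in roots
            if abs(r.imag) < 1e-6 * abs(r) and r.real > 0]
    return min(phys)
\end{verbatim}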
\begin{figure}
\caption{(a) and (c)\,: Atomic density $|\Phi(x,z,t)|^2$ in arbitrary units as a function of $x$ and $z$ at times $t=0$ (upper graph) and $t=5.2\,$ms (lower graph). (b) and (d)\,: Width parameter $\Omega(x,z,t)$ in $\mu$m as a function of $x$ and $z$ at times $t=0$ (upper graph) and $t=5.2\,$ms (lower graph). In each sub-plot, the color scale is defined on the right hand side of the graph. The guiding potentials are defined by $U_0=2.2\,\mu$K [$P/\delta \simeq 2.3 \times 10^{-5}\,$W/GHz], $U_1=8.8\,\mu$K [$P/\delta \simeq 9.2 \times 10^{-5}\,$W/GHz], and $w_0=w_1=300\,\mu$m. The deflection angle is $\gamma=50\,$deg and the condensate number is $N=5 \times 10^4$. In the $y-$direction, the trapping frequency $\omega_y$ is assumed to be 10 times larger than $\omega_x$. The crossing height is $z=-h=-10\,\mu$m and the time step for the split operator numerical propagation is $\delta t=1\,\mu$s.}
\label{fig:f6}
\end{figure}
\subsection{Numerical Results} \label{sec:Res-BEC}
A typical BEC dynamics is illustrated in Fig.\,\ref{fig:f6}, which shows a surface plot of the atomic density $|\Phi(x,z,t)|^2$ and of the width parameter $\Omega(x,z,t)$ at time $t=0$ [upper part, labels (a) and (b)] and during the deflection process, at time $t=5.2\,$ms [lower part, labels (c) and (d)]. In this numerical example, the condensate number is $N=5 \times 10^4$. The crossing height $z=-h$ is reached at time $t_c=1.43\,$ms, and the deflection angle is fixed at the value $\gamma=50\,$deg. In the lower graphs the transparent oblique line indicates the direction of deflection corresponding to this angle.
\begin{figure}
\caption{Condensate deflection efficiency $\eta_D$ as a function of (a) the crossing height $h$, (b) the deflection angle $\gamma$ and (c) the ratio of potential depths $U_1/U_0$, equivalent to the ratio of laser powers $P_1/P_0$. In these three graphs, the potential depth $U_0$ is fixed at 2.2\,$\mu$K, and $w_0=w_1=300\,\mu$m. The condensate number is $N=5 \times 10^4$. In (a) $U_1=8.8\,\mu$K and $\gamma=60\,$deg. In (b) $U_1=8.8\,\mu$K and $h=10\,\mu$m. In (c) $h=10\,\mu$m and $\gamma=50\,$deg (blue line with circles) and $\gamma=70\,$deg (red line with squares). The dashed blue arrow in the upper graph marks the position at which a deflection efficiency of 50\,\% is expected according to Eq.(\ref{eq:critere}).}
\label{fig:f7}
\end{figure}
One can first notice in Fig.\,\ref{fig:f6}(b) and (d) that even for a relatively small condensate number, a Gaussian ansatz for $f(Y|\Omega)$ can only be used if the width $\Omega$ is a free parameter which can take different values at different positions $x$ and $z$. This result is in agreement with the numerical studies of Salasnich \emph{et al.}\,\cite{Salasnich_2002}.
The inset (a) of Fig.\,\ref{fig:f6} shows the symmetric ground state wave function of the initial condensate prepared in a parabolic trapping potential of equal frequencies $\omega_x=\omega_z$. At time $t=5.2\,$ms [insets (c) and (d) of Fig.\,\ref{fig:f6}], the condensate wave function has expanded during the fall dynamics, and has been efficiently deflected along the direction $\gamma=50\,$deg by the oblique laser guide.
In our numerical simulations, we propagate the condensate wave function well after the crossing point between the vertical and oblique laser beams has been rea\-ched, and we obtain the deflection efficiency $\eta_D$ by calculating the condensate number in the oblique trapping potential at the end of the propagation. Fig.\,\ref{fig:f7} shows the variation of the condensate deflection efficiency with the crossing height $h$ [inset (a)], the deflection angle $\gamma$ [inset (b)] and the ratio of laser powers $P_1/P_0=U_1/U_0$ [inset (c)]. The other parameters are given in the figure caption.
The simple energy criterion\,(\ref{eq:critere}) is still shown to be relatively accurate with Bose-Einstein condensates: in Fig.\,\ref{fig:f7}(a), a deflection efficiency of 50\,\% is expected for a falling height $h = 114\,\mu$m according to Eq.(\ref{eq:critere}), while the numerical simulation yields a deflection probability of 55\,\%. The variation of the deflection efficiency $\eta_D$ with $h$, $\gamma$ and $U_1/U_0$ is also found to be very similar to the one obtained with cold atomic clouds. One can finally notice that large deflection angles can be reached when $U_1 > U_0$, with almost no atom loss for instance when $\gamma = 50\,$deg and $U_1 = 4 U_0$. This high deflection efficiency can be explained by the small size of the BEC and the narrow velocity spread compared to cold atoms.
\section{Conclusion} \label{sec:Conclusion}
In summary, we have presented a detailed analysis of the implementation of an optical deflector for cold atomic clouds and for Bose-Einstein condensates. Our analysis is quite close to the experimental conditions, and is clearly within the reach of current technology. We have shown how to create a high-performance deflector using two crossing laser beams which are switched on and off in a synchronized way. We have found that a 10\,$\mu$K cloud of rubidium atoms can be deflected by 25\,degrees with an efficiency of about 94\%, and by 10\,degrees with an efficiency exceeding 99\%. A succession of such deflecting setups at this small angle could also be implemented in order to achieve larger deflection angles with high fidelities. We have shown that this device is robust against non-adiabatic transitions, an undesirable effect which could have led to heating processes. A high degree of control can therefore be achieved with such quantum systems, opening some possibilities for a range of applications. We have also derived an original approach treating the dynamics of a Bose-Einstein condensate in the gravity field using the quantum least action principle in a moving frame. This model was used to demonstrate the high efficiency of this deflection setup with quantum degenerate gases since deflection angles of up to about 50\,degrees can be implemented with no significant atom loss.
\end{document}
\begin{document}
\title{Isomorphy classes of $k$-involutions of $G_2$}
\begin{abstract} Isomorphy classes of $k$-involutions have been studied for their correspondence with symmetric $k$-varieties, also called generalized symmetric spaces. A symmetric $k$-variety of a $k$-group $G$ is defined as $G_k/H_k$ where $\theta:G \to G$ is an automorphism of order $2$ that is defined over $k$ and $G_k$ and $H_k$ are the $k$-rational points of $G$ and $H=G^{\theta}$, the fixed point group of $\theta$, respectively. This is a continuation of papers written by A.G. Helminck and collaborators \cite{He00}, \cite{DHW06}, \cite{DHW-}, \cite{HW02} expanding on his combinatorial classification over certain fields. Results have been achieved for groups of type $A$, $B$ and $D$. Here we begin a series of papers doing the same for algebraic groups of exceptional type. \end{abstract}
\section{Introduction}
The problem of identifying all isomorphy classes of symmetric $k$-varieties is described by Helminck in \cite{He94}. There he notes that isomorphy classes of symmetric $k$-varieties of algebraic groups and isomorphy classes of their $k$-involutions are in bijection. In the following we provide a classification of isomorphy classes of $k$-involutions for the split type of $G_2$ over certain fields. \
The main result of this paper is an explicit classification of $k$-involutions of the split form of $G_2$ where $k=\mathbb{R},\mathbb{C},\mathbb{Q}, \mathbb{Q}_p,$ and $\mathbb{F}_q$, where $q>2$. We do this by finding explicit elements of $\Aut(G_2)$, where $G_2=\Aut(C)$ and $C$ is always the split octonion algebra over a given field of characteristic not $2$. \
The results of this paper rely most heavily on the work of Jacobson \cite{Ja58} on composition algebras, Lam's presentation of quadratic forms \cite{La05}, and the work of Helminck et al. on symmetric spaces and $k$-involutions of algebraic groups.
A \emph{$k$-involution} is an automorphism of order exactly $2$ that is defined over the field $k$. The isomorphy classes of these $k$-involutions are in bijection with the quotient spaces $G_k/H_k$, where $G_k$ and $H_k$ are the $k$-rational points of the groups $G$ and $H=G^{\theta}=\{g\in G \ | \ \theta(g)=g \}$ respectively; these quotient spaces will be called \emph{symmetric $k$-varieties} or \emph{generalized symmetric spaces}. \
The group of characters and root space associated with a torus $T$ are denoted by $X^*(T)$ and $\Phi(T)$ respectively. We will also denote by
\[ A_{\theta}^- = \{ a \in A \ | \ \theta(a) = a^{-1} \}^{\circ}, \] and by
\[ I_k(A_{\theta}^-) = \{ a \in A_{\theta}^- \ | \ \theta \circ \Inn(a) \text{ is a $k$-involution} \}. \]
In the following we introduce a characterization of $k$-involutions of an algebraic group of a specific type given by Helminck. The full classification can be completed with the classification of the following three types of invariants, \cite{He00},
\begin{enumerate}[(1)] \item classification of admissible involutions of $(X^*(T),X^*(A), \Phi(T), \Phi(A))$, where $T$ is a maximal torus in $G$ and $A$ is a maximal $k$-split torus contained in $T$; \item classification of the $G_k$-isomorphy classes of $k$-involutions of the $k$-anisotropic kernel of $G$; \item classification of the $G_k$-isomorphy classes of $k$-inner elements $a\in I_k(A_{\theta}^-)$. \end{enumerate}
For this paper we do not consider $(2)$ since our algebraic groups will be $k$-split. We mostly focus on (3) and refer to (1) when appropriate, though Helminck has provided us with a full classification of these \cite{He00}. \
The main result is an explicit description of the $k$-inner elements up to isomorphy, which completes the classification of $k$-involutions for the split group of type $G_2$. Each admissible involution of $(X^*(T), X^*(A), \Phi(T), \Phi(A) )$ can be lifted to a $k$-involution $\theta$ of the algebraic group. This lifting is not unique. The involutions that induce the same involution as $\theta$ when restricted to $(X^*(T), X^*(A), \Phi(T), \Phi(A) )$ are of the form $\theta \circ \Inn(a)$ where $a \in A_{\theta}^-$. The set of such elements $a$ for which $\theta \circ \Inn(a)$ is a $k$-involution forms the set of \emph{$k$-inner elements} associated with the involution $\theta$ and is denoted $I_k(A_{\theta}^-)$. \
Yokota wrote about $k$-involutions, $\theta$, and fixed point groups, $G^{\theta}$, for algebraic groups of type $G_2$ for $k=\mathbb{R}, \mathbb{C}$. In fact, the elements of $G_2=\Aut(C)$ we call $\mathcal{I}_{t_{(\pm 1, \pm 1)}}$ correspond to the $\gamma$ maps in \cite{Yo90}, which are conjugations with respect to complexification at different levels within the octonion algebra taken over $\mathbb{R}$ or $\mathbb{C}$ \cite{Yo90}. We found these using different methods than in \cite{Yo90}, and show the correspondence. \
In section 3.1 we find $k$-involutions that come from conjugation by elements in a maximal $k$-split torus and using a result of Jacobson show they are isomorphic in Proposition \ref{tconj}. This will take care of all cases where $\Aut(C)= G_2$ is taken over a field whose structure permits a split octonion algebra and only split quaternion algebras, namely $k=\mathbb{C}$ and $\mathbb{F}_p$ when $p>2$. \
Over other fields there is the possibility of division quaternion algebras, and this fact, together with Proposition \ref{jake}, gives us another isomorphy class of $k$-involutions when we take $k=\mathbb{R}, \mathbb{Q}$ and $\mathbb{Q}_p$. In section 3.2 we find the $k$-involution $\theta$ for which our maximal $k$-split torus is a maximal $(\theta,k)$-split torus and in Lemma \ref{nonsplitq} we find a representative for the only other possible isomorphy class of $k$-involutions over these fields. \
In section 3.3 we summarize the main results of the paper in Theorem \ref{maintheorem} and give the full classification of $k$-involutions when $k=\mathbb{C}, \mathbb{R}, \mathbb{Q}_p$ with $p \geq 2$, and $\mathbb{F}_q$ with $q>2$. We finish by giving descriptions of the fixed point groups of the isomorphy classes of $k$-involutions.
\subsection{Preliminaries and recollections}
Most of our notation is borrowed from \cite{Sp98} for algebraic groups, \cite{He00} for $k$-involutions and generalized symmetric spaces, \cite{La05} for quadratic forms and \cite{SV00} for octonion and quaternion algebras. \
The letter $G$ is reserved for an arbitrary reductive algebraic group unless it is $G_2$, which is specifically the automorphism group of an octonion algebra. When we refer to a maximal torus we use $T$ and any subtorus is denoted by another capital letter, usually $A$. Lowercase Greek letters are field elements and other lowercase letters denote vectors. We use $Z(G)$ to denote the center of $G$, $Z_G(T)$ to denote the centralizer of $T$ in $G$, and $N_G(T)$ to denote the normalizer of $T$ in $G$. \
By $\Aut(G)$ we mean the automorphism group of $G$, and by $\Aut(C)$ we mean the linear automorphisms of the composition algebra $C$. The group of inner automorphisms is denoted $\Inn(G)$ and the elements of $\Inn(G)$ are denoted by $\mathcal{I}_g$ where $g\in G$ and $\mathcal{I}_g(x) = gxg^{-1}$. \
We define a \emph{$\theta$-split torus}, $A$, of an involution, $\theta$, as a torus $A \subset G$ such that $\theta(a)=a^{-1}$ for all $a\in A$. We call a torus \emph{$(\theta, k)$-split} if it is both $\theta$-split and $k$-split. \
Let $A$ be a $\theta$-stable maximal $k$-split torus such that $A_{\theta}^-$ is a maximal $(\theta,k)$-split torus. By \cite{HW02} there exists a maximal $k$-torus $T \supset A$ such that $T_{\theta}^- \supset A_{\theta}^-$ is a maximal $\theta$-split torus. The involution $\theta$ induces an involution $\theta \in \Aut(X^*(T), X^*(A), \Phi(T), \Phi(A))$. It was shown by Helminck \cite{He00} that such an involution is unique up to isomorphy. For $T\supset A$ a maximal $k$-torus, an
\[ \theta \in \Aut(X^*(T), X^*(A), \Phi(T), \Phi(A)) \] is \emph{admissible} if there exists an involution $\tilde{\theta} \in \Aut(G,T,A)$ such that $\tilde{\theta}|_T = \theta$, $A_{\theta}^-$ is a maximal $(\theta, k)$-split torus, and $T_{\theta}^-$ is a maximal $\tilde{\theta}$-split torus of $G$. \
This will give us the set of $k$-involutions on $G$ that extend from involutions on the group of characters, $X^*(T)$.\
As for the $k$-inner elements they are defined as follows; if $\theta$ is a $k$-involution and $A_{\theta}^-$ is a maximal $\theta$-split torus then the elements of the set,
\[ I_k(A_{\theta}^-) = \left\{ a \in A_{\theta}^- \ \big| \ \left(\theta \circ \mathcal{I}_a \right)^2 = \id, \ \left(\theta \circ \mathcal{I}_a\right)(G_k)=G_k \right\}, \] are called \emph{$k$-inner elements} of $\theta$. Some compositions $\theta \circ \mathcal{I}_a$ will not be isomorphic in the group $\Aut(G)$ for different $a\in I_k(A_{\theta}^-)$, though they will project down to the same involution of the group of characters of a maximal torus fixing the characters associated with a maximal $k$-split subtorus for all $a\in I_k(A_{\theta}^-)$. \
\subsection{Split octonion algebra}
Throughout this paper we use $N$ to be a quadratic form of a composition algebra and $\langle \ , \ \rangle$ to be the bilinear form associated with $N$. The capital letters $C$ and $D$ denote composition algebras and composition subalgebras respectively. The composition algebras we will refer to always have an identity, $e$. There is an anti-automorphism on a composition algebra that resembles complex conjugation denoted by, $\bar{ \ }$, which will have specific but analogous definitions depending on the dimension of the algebra. \
If $\{ e, a, b, ab\}$ is a basis for $D$, a quaternion algebra such that $a^2 = \alpha$ and $b^2 = \beta$, then we denote its quadratic form by $\left( \frac{\alpha,\beta}{k} \right) \cong \langle 1, -\alpha, -\beta, \alpha\beta \rangle$. We will often refer to a quaternion algebra over a field $k$ by the $2$-Pfister form notation of its quadratic form. Since a composition algebra is completely determined by its quadratic form and its center, $ke$, there is no risk of ambiguity. By $k$ we are referring to an arbitrary field and by using the blackboard bold $\mathbb{F}$ we refer to a specific field. We consider only fields that do not have characteristic $2$. \
We always consider the octonion algebra as a doubling of the split quaternions thought of as $M_2(k)$, the $2 \times 2$ matrices over our given field. The octonion algebras we consider are an ordered pair of these with an extended multiplication that will be described in the next section.
For the following results we refer the reader to \cite{SV00} for the proofs. In these upcoming results $V$ is a vector space over a field $k$, $\ch(k)\neq 2$, equipped with a quadratic form $N:V \to k$ and the associated bilinear form $\langle \ , \ \rangle:V\times V \to k$, such that \begin{enumerate} \item $N(\alpha v) = \alpha^2 N(v)$, \item $\langle v,w \rangle = N(v+w) - N(v) - N(w)$, \end{enumerate} where $v,w \in V$ and $\alpha \in k$.
A \emph{composition algebra}, $C$, will be a vector space with identity, $e$, and $N$ and $\langle \ , \ \rangle$ as above such that $N(xy)=N(x)N(y)$. Note that $ke$ is a composition algebra in a trivial way.
\newtheorem{doubling}[subsubsection]{Proposition, \cite{SV00}} \begin{doubling} Let $C$ be a composition algebra and let $D\subset C$ be a finite dimensional subalgebra with $D \neq C$. Then we can choose $a \in D^{\perp}$ with $N(a) \neq 0$, and $D \oplus Da$ is a composition algebra. The product, quadratic form, and conjugation are given by \begin{enumerate} \item $(x+ya)(u+va) = (xu + \alpha \bar{v}y) + (vx + y\bar{u})a$, \text{ for } $x,y,u,v \in D$, $\alpha \in k^*$ \item $N(x+ya) = N(x) - \alpha N(y)$ \item $\overline{x+ya} = \bar{x} - ya$. \end{enumerate} The dimension of $D\oplus Da$ is twice the dimension of $D$ and $\alpha = -N(a)$. \end{doubling}
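As a simple illustration (our own example, an immediate consequence of the formulas above), take the first doubling step $D=ke$: the formulas reduce to \[ (x+ya)(u+va) = (xu + \alpha vy) + (vx + yu)a, \qquad N(x+ya) = x^2 - \alpha y^2, \qquad x,y,u,v \in k, \] so in particular $a^2=\alpha$, and $D\oplus Da = ke\oplus ka \cong k[t]/(t^2-\alpha)$. This is a quadratic field extension of $k$ when $\alpha$ is not a square in $k$, and the split algebra $k\times k$ when $\alpha$ is a nonzero square.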
We often use this decomposition above in the results that follow, and a theorem of Adolf Hurwitz gives us the possible dimensions of such algebras.
\newtheorem{hurwitz}[subsubsection]{Theorem, (A. Hurwitz), \cite{SV00}} \begin{hurwitz} \label{hurwitz} Every composition algebra can be obtained from iterations of the doubling process starting from $ke$. The possible dimensions of a composition algebra are $1,2,4$, or $8$. A composition algebra of dimension $1$ or $2$ is commutative and associative, a composition algebra of dimension $4$ is associative and not commutative, a composition algebra of dimension $8$ is neither commutative nor associative. \end{hurwitz}
\newtheorem{doublesplit}[subsubsection]{Corollary, \cite{SV00}} \begin{doublesplit} Any doubling of a split composition algebra is again a split composition algebra. \end{doublesplit}
There are $2$ general types of composition algebras. If there are no zero divisors we call the composition algebra a \emph{division algebra}, and otherwise we call it a \emph{split algebra}. It follows from the definition that a composition algebra is determined completely by its norm, and we have the following theorem.
\newtheorem{splitcomp}[subsubsection]{Theorem, \cite{SV00}} \begin{splitcomp} \label{splitcomp} In dimensions $2,4,$ and $8$ there is exactly one split composition algebra, over a given field $k$, up to isomorphism. \end{splitcomp}
We can take the split quaternion algebra $D$ over a field $k$ to be $M_2(k)$, the $2\times 2$ matrices over $k$. Multiplication in $D$ is ordinary matrix multiplication, the quadratic form is given by \[ N\left( \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix} \right) = \text{det} \left( \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix} \right) = x_{11} x_{22} - x_{12}x_{21}, \] and the \emph{bar involution} is given by \[ \overline{ \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix} } = \begin{bmatrix} x_{22} & -x_{12} \\ -x_{21} & x_{11} \end{bmatrix}. \] Elements of our split octonion algebra have the form \[ (x,y) =\left( \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix}, \begin{bmatrix} y_{11} & y_{12} \\ y_{21} & y_{22} \end{bmatrix} \right). \] Since all split octonion algebras over a given field are isomorphic, we can take $\alpha =1$ in our composition algebra doubling process. The multiplication, quadratic form, and octonion conjugation are given by the following: \begin{align} (x,y)(u,v) &= (xu + \bar{v}y, vx + y \bar{u}), \\ N\big( (x,y) \big) &= \text{det}(x) - \text{det}(y), \\ \overline{(x,y)} &= (\bar{x},-y), \end{align} with $x,y,u,v \in M_2(k)=D$. The basis of the underlying vector space is taken to be $\left\{ (E_{ij}, 0) , ( 0, E_{ij}) \right\}_{i,j=1,2}$, where the $E_{ij}$ are the standard basis elements for $2 \times 2$ matrices, and so our identity element in $C$ is $e = (E_{11} + E_{22},0)$.
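As a quick consistency check of these formulas (our own illustration, not taken from \cite{SV00}), note that the multiplication rule immediately exhibits zero divisors, confirming that $C$ is split: \[ (E_{11},0)\,(E_{22},0) = \big(E_{11}E_{22} + \bar{0}\cdot 0,\; 0\cdot E_{11} + 0\cdot\overline{E_{22}}\,\big) = (0,0), \] while both factors are nonzero; this agrees with the fact that $N\big((E_{11},0)\big)=\text{det}(E_{11})=0$, so the norm is isotropic.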
\section{Automorphisms of $G_2$}
\subsection{Some results on $G_2$}
It is well known that the automorphism group of a $k$-split octonion algebra $C$ over a field $k$ is a $k$-split linear algebraic group of type $G_2$ over $k$. We can compute a split maximal torus for $\Aut(C)$, where $C= \big( M_2(k), M_2(k) \big)$ as above. Here we collect some known results, and again we refer to \cite{SV00}.
\newtheorem{maximaltorus}[subsubsection]{Theorem} \begin{maximaltorus} The following statements concerning $G= \Aut(C)$ are true; \begin{enumerate} \item There is a maximal $k$-split torus $T \subset G$ of the form
\[ T = \left\{ \diag(1,\beta \gamma, \beta^{-1} \gamma^{-1}, 1, \gamma^{-1}, \beta, \beta^{-1}, \gamma ) \ \big| \ \beta, \gamma \in k^* \right\}. \] \item The center of $\Aut(C)$ contains only the identity. \item For any composition algebra $C$ over $k$, the only nontrivial subspaces of $C$ left invariant by $\Aut(C)$ are $ke$ and $e^{\perp}$. \item All automorphisms of $\Aut(C)$ are inner automorphisms. \end{enumerate} \end{maximaltorus}
\subsection{$k$-involutions of $G$}
\newtheorem{kinv}[subsubsection]{Remark} \begin{kinv} If $\theta \in \Inn(G)$ and $\theta = \mathcal{I}_t$ is a $k$-involution then $t^2 \in Z(G)$. \end{kinv}
Since groups of type $G_2$ have a trivial center, the problem of classifying $k$-involutions for $\Aut(C)$, where $C$ is a split octonion algebra, is the same as classifying the conjugacy classes of elements of order $2$ in $\Aut(C)$ that preserve the $k$-structure of $\Aut(C)$. \
\newtheorem{torusinv}[subsubsection]{Remark} \begin{torusinv} \label{torusinv} The involutions that are of the form $\mathcal{I}_t$ where
\[ t_{(\beta, \gamma)} \in T = \left\{ \diag(1,\beta \gamma, \beta^{-1} \gamma^{-1}, 1, \gamma^{-1}, \beta, \beta^{-1}, \gamma ) \ \big| \beta, \gamma \in k^* \right\} \] have $(\beta, \gamma) = (1,-1),(-1,1)$ or $(-1,-1)$. \end{torusinv} \begin{proof} We set $\diag(1,\beta \gamma, \beta^{-1} \gamma^{-1}, 1, \gamma^{-1}, \beta, \beta^{-1}, \gamma )^2 = \id$ and exclude $(\beta, \gamma)=(1,1)$ which corresponds to the identity map. \end{proof}
Using the above statement and the following result of Jacobson we can show that all $k$-involutions given by conjugation by elements coming from the maximal $k$-split torus $T$ are isomorphic.
\newtheorem{jake}[subsubsection]{Proposition, \cite{Ja58}} \begin{jake} \label{jake} Let $C$ be an octonion algebra over $k$. Then the conjugacy classes of quadratic elements $t\in G=\Aut(C)$, that is, elements such that $t^2=\id$, are in bijection with the isomorphism classes of quaternion subalgebras of $C$. \end{jake}
In particular if $t \in \Aut(C)$ has order $2$, then it leaves some quaternion subalgebra $D$ elementwise fixed giving us the eigenspace corresponding to the eigenvalue $1$. Then $D^{\perp}$ is the eigenspace corresponding to the eigenvalue $-1$. If $gtg^{-1}=s$ for some $g\in G$, then $s$ has order $2$ and $g(D)=D'$, $D'$ a quaternion subalgebra elementwise fixed by $s$, and $D\cong D'$.
\newtheorem{qsubalg}[subsubsection]{Corollary} \begin{qsubalg} Let $C$ be an octonion algebra over $k$ and $D$ and $D'$ quaternion subalgebras of $C$. If $s,t \in G = \Aut(C)$ are elements of order $2$ and $s,t$ fix $D,D'$ elementwise respectively, then $s \cong t$ if and only if $D \cong D'$ over $k$. \end{qsubalg}
\newtheorem{conjint2}[subsubsection]{Corollary} \begin{conjint2} \label{conjint2} For $s,t \in \Aut(C)$, $\mathcal{I}_t \cong \mathcal{I}_s$ if and only if $s$ and $t$ leave isomorphic quaternion subalgebras invariant. \end{conjint2}
We can begin looking for elements of order $2$ in the $k$-split maximal torus we have computed, which in our case will be $t \in T$, $t^2 = \id$. Solving the following equation \[ \diag(1,\beta \gamma, \beta^{-1} \gamma^{-1}, 1, \gamma^{-1}, \beta, \beta^{-1}, \gamma )^2 = \id, \] we obtain the elements \[ t_{(\beta,\gamma)} \text{ where } (\beta ,\gamma) = (\pm 1,\pm 1), \text{ and } (\beta,\gamma) \neq (1,1). \]
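For completeness, here is the short computation behind this (an expansion of the one-line argument above): squaring the diagonal matrix entrywise gives \[ \diag(1,\beta \gamma, \beta^{-1} \gamma^{-1}, 1, \gamma^{-1}, \beta, \beta^{-1}, \gamma )^2 = \diag\big(1,(\beta\gamma)^2, (\beta\gamma)^{-2}, 1, \gamma^{-2}, \beta^2, \beta^{-2}, \gamma^2 \big), \] so $t_{(\beta,\gamma)}^2=\id$ exactly when $\beta^2=\gamma^2=1$, that is, $\beta,\gamma\in\{\pm1\}$; discarding $(\beta,\gamma)=(1,1)$, which gives the identity automorphism, leaves the three elements listed above.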
\newtheorem{normalconjlemma}[subsubsection]{Lemma} \begin{normalconjlemma} \label{normalconjlemma} $\mathcal{I}_g \mathcal{I}_{\varepsilon}\mathcal{I}_g^{-1} = \mathcal{I}_{g\varepsilon g^{-1}}$ \end{normalconjlemma} \begin{proof} We can apply the left hand side to an element $y\in G$, \begin{align*} \mathcal{I}_g\mathcal{I}_{\varepsilon}\mathcal{I}_g^{-1} (y) &= g \big( \varepsilon (g^{-1} y g) \varepsilon^{-1} \big) g^{-1} \\ &= (g \varepsilon g^{-1}) y (g \varepsilon g^{-1})^{-1} \\ &= \mathcal{I}_{(g \varepsilon g^{-1})}(y). \end{align*} \end{proof}
\newtheorem{normalconj}[subsubsection]{Proposition} \begin{normalconj} If $\varepsilon_1^2=\varepsilon_2^2 = \id$ and $\varepsilon_1, \varepsilon_2 \in T$ a maximal torus of $G$ when $Z(G)=\{ \id \}$, then $\mathcal{I}_{\varepsilon_1} \cong \mathcal{I}_{\varepsilon_2}$ if and only if $\varepsilon_1 = n \varepsilon_2 n^{-1}$ for some $n \in N_G(T)$. \end{normalconj} \begin{proof} If $n\varepsilon_2 n^{-1} = \varepsilon_1$ for $n\in N_G(T)$ then $\mathcal{I}_{\varepsilon_1} \cong \mathcal{I}_{\varepsilon_2}$ via the isomorphism $\Inn \left( \mathcal{I}_n \right)$.
Now suppose $\mathcal{I}_{\varepsilon_1} \cong \mathcal{I}_{\varepsilon_2}$, so there exists a $g\in G$ such that $\mathcal{I}_{\varepsilon_1}=\mathcal{I}_g \big(\mathcal{I}_{\varepsilon_2} \big) \mathcal{I}_g^{-1}$. Then by Lemma \ref{normalconjlemma}, \[ (g \varepsilon_2 g^{-1})^{-1}\varepsilon_1 y = y (g \varepsilon_2 g^{-1})^{-1}\varepsilon_1, \] for all $y\in G$, and so $(g \varepsilon_2 g^{-1})^{-1}\varepsilon_1 \in Z(G) = \{\id \}$. Thus $(g \varepsilon_2 g^{-1})^{-1}\varepsilon_1 = \id$, so $\varepsilon_1 = g \varepsilon_2 g^{-1}$. Now notice that $S = g T g^{-1}$ is a maximal torus containing $\varepsilon_1$. The group $Z_G(\varepsilon_1)$ contains $S$ and $T$, so there exists an $x\in Z_G(\varepsilon_1)$ such that $x S x^{-1} = T$. We know that $S = gTg^{-1}$, so \[ xg T g^{-1}x^{-1} = xg T (xg)^{-1} = T, \] so that $xg \in N_G(T)$. We notice that \begin{align*} \mathcal{I}_{xg}\mathcal{I}_{\varepsilon_2} \mathcal{I}_{xg}^{-1} &= \mathcal{I}_{ xg \varepsilon_2 (xg)^{-1} } \\ &= \mathcal{I}_{ xg \varepsilon_2 g^{-1}x^{-1} } \\ &= \mathcal{I}_{x\varepsilon_1 x^{-1}} \\ &= \mathcal{I}_{\varepsilon_1}, \end{align*} and indeed $(xg)\varepsilon_2(xg)^{-1} = x\varepsilon_1 x^{-1} = \varepsilon_1$ with $xg \in N_G(T)$, as required.
\end{proof}
Using the previous proposition it is possible to find elements $n,m \in N_G(T)$ such that \[ t_{(-1,-1)} = n \left(t_{(-1,1)}\right) n^{-1} = m \left(t_{(1,-1)}\right) m^{-1}. \]
It is also possible to show, and perhaps more illustrative, that they leave isomorphic quaternion subalgebras invariant, and thus by Corollary \ref{conjint2} provide us with isomorphic $k$-involutions.
\newtheorem{tconj}[subsubsection]{Proposition} \begin{tconj} \label{tconj} $t_{(-1,-1)} \cong t_{(-1,1)} \cong t_{(1,-1)}.$ \end{tconj} \begin{proof}
Let $G=\Aut(C)\supset T = \left\{ \diag(1,\beta \gamma, \beta^{-1} \gamma^{-1}, 1, \gamma^{-1}, \beta, \beta^{-1}, \gamma ) \ \big| \ \beta, \gamma \in k^* \right\}$, then $G$ is an algebraic group over the field $k$, $\ch(k)\neq 2$. The automorphism $t_{(-1,-1)}$ leaves the split quaternion subalgebra $(M_2(k),0)$ elementwise fixed. \
The element of order $2$, \[ t_{(1,-1)} = \diag(1,-1,-1,1,-1,1,1,-1), \] leaves the quaternion subalgebra, \[ k \underbrace{ \left( \begin{bmatrix} 1 & \\
& 1 \\ \end{bmatrix}, 0 \right) }_{e} \bigoplus k \underbrace{ \left( 0, \begin{bmatrix}
& 1 \\ -1 & \\ \end{bmatrix} \right) }_{a} \bigoplus k \underbrace{ \left( 0, \begin{bmatrix}
& 1 \\
1 & \\ \end{bmatrix} \right) }_{b} \bigoplus k \underbrace{
\left( \begin{bmatrix} 1 & \\
& -1 \\ \end{bmatrix}, 0 \right) }_{ab} , \] elementwise fixed. Notice that $(b-a)(e+ab)=(0,0)$, and so the quaternion subalgebra is split.
And \[ t_{(-1,1)} = \diag(1,-1,-1,1,1,-1,-1,1), \] leaves the quaternion algebra \[ k \underbrace{ \left( \begin{bmatrix} 1 & \\
& 1 \\ \end{bmatrix}, 0 \right) }_{e} \bigoplus k \underbrace{ \left( \begin{bmatrix} 1 & \\
& -1 \\ \end{bmatrix}, 0 \right) }_{a} \bigoplus k \underbrace{ \left( 0, \begin{bmatrix} 1 & \\
& 1 \\ \end{bmatrix} \right) }_{b} \bigoplus k \underbrace{ \left( 0, \begin{bmatrix} 1 & \\
& -1 \\ \end{bmatrix} \right) }_{ab}, \] elementwise fixed. Notice that $(ab + b)(e + a) = (0,0)$, and so the quaternion subalgebra is split. Since over a given field $k$ all split quaternion algebras are isomorphic, we have that $t_{(-1,-1)} \cong t_{(-1,1)} \cong t_{(1,-1)}$. \end{proof}
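To make one of these zero-divisor checks explicit (our own expansion, using the multiplication rule $(x,y)(u,v)=(xu+\bar{v}y,\,vx+y\bar{u})$): for the basis fixed by $t_{(1,-1)}$ we have \[ b-a = \left( 0, \begin{bmatrix} 0 & 0 \\ 2 & 0 \end{bmatrix} \right), \qquad e+ab = \left( \begin{bmatrix} 2 & 0 \\ 0 & 0 \end{bmatrix}, 0 \right), \] and therefore \[ (b-a)(e+ab) = \left( 0 , \begin{bmatrix} 0 & 0 \\ 2 & 0 \end{bmatrix} \overline{ \begin{bmatrix} 2 & 0 \\ 0 & 0 \end{bmatrix} } \right) = \left( 0 , \begin{bmatrix} 0 & 0 \\ 2 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 2 \end{bmatrix} \right) = (0,0). \] The check for the subalgebra fixed by $t_{(-1,1)}$ is entirely analogous.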
\newtheorem{cortconj}[subsubsection]{Corollary} \begin{cortconj} \label{cortconj} $\mathcal{I}_{t_{(-1,-1)}} \cong \mathcal{I}_{t_{(-1,1)}} \cong \mathcal{I}_{t_{(1,-1)}}.$ \end{cortconj}
So from now on we refer to a representative of the isomorphy class containing $\mathcal{I}_{t_{(-1,-1)}}$, $\mathcal{I}_{t_{(-1,1)}}$, and $\mathcal{I}_{t_{(1,-1)}}$ as $\mathcal{I}_t$, when there is no ambiguity.
\newtheorem{splitinv}[subsubsection]{Lemma} \begin{splitinv} There is only one isomorphy class of $k$-involutions when $k$ is a finite field of order greater than $2$, the complex numbers, a $p$-adic field with $p>2$, or a complete, totally imaginary algebraic number field. \end{splitinv} \begin{proof} In these cases only split quaternion algebras exist, \cite{O'M63}, \cite{SV00}. \end{proof}
In \cite{Yo90} Yokota talks about the maps $\gamma, \gamma_C$, and $\gamma_H$ and shows that they are isomorphic, and that they are also isomorphic to any composition of maps between them. In his paper he defines a conjugation coming from complexification. In particular we can look at $\gamma_H$, which is the complexification conjugation on the quaternion level of an octonion algebra over $\mathbb{R}$. If we take $u + vc \in H \oplus Hc$ where $u,v\in H$ and $c\in H^{\perp}$ his map is
\[ \gamma_H(u+vc) = u-vc, \] which in our presentation of the octonion algebra would look like,
\[ \gamma_H \left(
\begin{bmatrix}
u_{11} & u_{12} \\
u_{21} & u_{22}
\end{bmatrix},
\begin{bmatrix}
v_{11} & v_{12} \\
v_{21} & v_{22}
\end{bmatrix} \right)
= \left(
\begin{bmatrix}
u_{11} & -u_{12} \\
-u_{21} & u_{22}
\end{bmatrix},
\begin{bmatrix}
v_{11} & -v_{12} \\
-v_{21} & v_{22}
\end{bmatrix} \right),
\] and corresponds to our map $\mathcal{I}_{t_{(-1,1)}}$.
\subsection{Maximal $\theta$-split torus}
Rather than trying to find a maximal $\theta$-split torus, where $\theta \cong \mathcal{I}_t$, and then computing its maximal $k$-split subtorus, we find a $k$-involution $\theta$ that splits our already maximal $k$-split torus of the form
\[ T = \left\{ \diag(1,\beta \gamma, \beta^{-1} \gamma^{-1}, 1, \gamma^{-1}, \beta, \beta^{-1}, \gamma ) \ \big| \ \beta, \gamma \in k^* \right\}. \] It is straightforward to check that \[ s= \begin{bmatrix} & & & 1 \\ & & 1 & \\ & 1 & & \\ 1 & & & \end{bmatrix} \bigoplus \begin{bmatrix} & & & 1 \\ & & 1 & \\ & 1 & & \\ 1 & & & \end{bmatrix} , \] is an element of $\Aut(C)$, where $C$ is the split octonion algebra described above over a field $k$, $\ch(k) \neq 2$. It is immediate that $T$ is an $\mathcal{I}_s$-split torus.
\newtheorem{maxsplit}[subsubsection]{Proposition} \begin{maxsplit} $T$ is a maximal $(\mathcal{I}_s,k)$-split torus. \end{maxsplit} \begin{proof} Notice first that if $t \in T$ then $\mathcal{I}_s(t)=t^{-1}$, and next that $T$ is $k$-split and is a maximal torus. \end{proof}
\newtheorem{scongt}[subsubsection]{Proposition} \begin{scongt} \label{scongt} $\mathcal{I}_s \cong \mathcal{I}_t$ \end{scongt} \begin{proof} The element $s$ is an automorphism of order $2$ of $C$, our split octonion algebra described above, that leaves the following quaternion algebra fixed elementwise, \[ k \underbrace{ \left( \begin{bmatrix} 1 & \\
& 1 \\ \end{bmatrix}, 0 \right) }_{e} \bigoplus k \underbrace{ \left( \begin{bmatrix}
& 1 \\ 1 & \\ \end{bmatrix}, 0 \right) }_{a} \bigoplus k \underbrace{ \left( 0, \begin{bmatrix} 1 & \\
& 1 \\ \end{bmatrix} \right) }_{b} \bigoplus k \underbrace{ \left( 0, \begin{bmatrix}
& 1 \\ 1 & \\ \end{bmatrix} \right) }_{ab}. \] Notice that $(b+ab)(e+a+b+ab)=0$, and so the quaternion subalgebra is split. \end{proof}
\subsection{Another isomorphy class of $k$-involutions over certain fields}
We have seen that our maximal torus satisfies $T=T_{\mathcal{I}_s}^-$, and so we can look among the elements of $T$ for $k$-inner elements of $\mathcal{I}_s$ that will give us new conjugacy classes over fields for which quaternion division algebras can exist. The fields we are interested in include the real numbers, the $2$-adic numbers, and the rationals.
\newtheorem{nonsplitq}[subsubsection]{Lemma} \begin{nonsplitq} \label{nonsplitq} For $C$ a split octonion algebra over a field $k=\mathbb{R}, \mathbb{Q}_2, \mathbb{Q}$, \[ s\cdot t_{(1,-1)} \in \Aut(C), \]
leaves a quaternion division subalgebra elementwise fixed. \end{nonsplitq} \begin{proof} The element $s\cdot t_{(1,-1)} \in \Aut(C)$ leaves the following quaternion subalgebra elementwise fixed, \[ k \underbrace{ \left( \begin{bmatrix} 1 & \\
& 1 \\ \end{bmatrix}, 0 \right) }_{e} \bigoplus k \underbrace{ \left( 0, \begin{bmatrix} 1 & \\
& -1 \\ \end{bmatrix} \right) }_{a} \bigoplus k \underbrace{ \left( 0, \begin{bmatrix}
& 1 \\ 1 & \\ \end{bmatrix} \right) }_{b} \bigoplus k \underbrace{ \left( \begin{bmatrix}
& 1 \\ -1 & \\ \end{bmatrix}, 0 \right) }_{ab}. \] All basis elements are such that $x \bar{x} = 1$, and so have a norm isomorphic to the 2-Pfister form $\left( \frac{-1,-1}{k} \right)$, where $k=\mathbb{R}, \mathbb{Q}_2, \mathbb{Q}$, which corresponds to a quaternion division algebra over each respective field. Moreover, over $k=\mathbb{R}$ or $\mathbb{Q}_2$ there is only one quaternion division algebra up to isomorphism. \end{proof}
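To spell out why the restricted norm is $\left( \frac{-1,-1}{k} \right)$ (our own expansion of the computation above): by the multiplication rule, the basis element $a=\left( 0, \begin{bmatrix} 1 & \\ & -1 \end{bmatrix} \right)$ satisfies \[ a^2 = \left( \overline{ \begin{bmatrix} 1 & \\ & -1 \end{bmatrix} } \begin{bmatrix} 1 & \\ & -1 \end{bmatrix}, 0 \right) = \left( \begin{bmatrix} -1 & \\ & 1 \end{bmatrix} \begin{bmatrix} 1 & \\ & -1 \end{bmatrix}, 0 \right) = -e, \] and similarly $b^2=-e$. Hence $\alpha=\beta=-1$ in the $2$-Pfister form notation, and the norm of the subalgebra is $\langle 1,1,1,1 \rangle = \left( \frac{-1,-1}{k} \right)$, which is anisotropic over $\mathbb{R}$, $\mathbb{Q}_2$ and $\mathbb{Q}$.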
\newtheorem{nonsplitqQp}[subsubsection]{Lemma} \begin{nonsplitqQp} \label{nonsplitqQp} For $C$ a split octonion algebra over the field $k=\mathbb{Q}_p$ with $p>2$, the element $s \cdot t_{(-N_p,-pN_p^{-1})}$ leaves a division quaternion subalgebra elementwise fixed. \end{nonsplitqQp} \begin{proof} The element $s \cdot t_{(-N_p,-pN_p^{-1})}$ leaves the following quaternion subalgebra elementwise fixed, \[ k \underbrace{ \left( \begin{bmatrix} 1 & \\
& 1 \\ \end{bmatrix}, 0 \right) }_{e} \bigoplus k \underbrace{ \left( 0, \begin{bmatrix}
& -N_p \\
1 & \\ \end{bmatrix} \right) }_{a} \bigoplus k \underbrace{ \left( \begin{bmatrix}
& p \\ 1 & \\ \end{bmatrix},0 \right) }_{b} \bigoplus k \underbrace{ \left(0, \begin{bmatrix} N_p & \\
& -p \\ \end{bmatrix}
\right) }_{ab}, \] where $N_p$ denotes a non-square unit, so that $\mathbb{Q}_p^*/ (\mathbb{Q}_p^*)^2 = \{ 1, p, N_p, pN_p\}$. This algebra is isomorphic to $\left( \frac{p,N_p}{\mathbb{Q}_p} \right)$, which is a representative of the unique isomorphy class of quaternion division algebras for a given $p$. \end{proof}
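As a check of this identification (our own computation from the multiplication rule): the basis elements above satisfy \[ a^2 = \left( \overline{ \begin{bmatrix} & -N_p \\ 1 & \end{bmatrix} } \begin{bmatrix} & -N_p \\ 1 & \end{bmatrix}, 0 \right) = \left( \begin{bmatrix} & N_p \\ -1 & \end{bmatrix} \begin{bmatrix} & -N_p \\ 1 & \end{bmatrix}, 0 \right) = N_p\, e, \qquad b^2 = \left( \begin{bmatrix} & p \\ 1 & \end{bmatrix}^2 , 0 \right) = p\, e, \] so the fixed subalgebra is generated by elements $a,b$ with $a^2=N_p$ and $b^2=p$, and hence has norm form $\left( \frac{N_p,p}{\mathbb{Q}_p} \right) = \left( \frac{p,N_p}{\mathbb{Q}_p} \right)$, as claimed.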
\newtheorem{maintheorem}[subsubsection]{Theorem} \begin{maintheorem} \label{maintheorem} Let $\theta = \mathcal{I}_s$ and $G=\Aut(C)$, where $C$ is a split octonion algebra over a field $k$. Then \begin{enumerate} \item when $k=\mathbb{R}, \mathbb{Q}, \mathbb{Q}_2$, the involutions $\theta$ and $\theta \circ \mathcal{I}_{ t_{(1,-1)} }$ are representatives of $2$ isomorphy classes of $k$-involutions of $G$. In the cases $k=\mathbb{R}$ and $\mathbb{Q}_2$ these are the only isomorphy classes, but this is not true for $k=\mathbb{Q}$; \item when $k=\mathbb{Q}_p$ and $p>2$, we have two isomorphy classes of $k$-involutions; \item when $k=\mathbb{C}$ or $\mathbb{F}_p$ with $p>2$, there is only one isomorphy class of $k$-involutions of $G$. \end{enumerate} \end{maintheorem} \begin{proof}
For part (1) we only need to notice that over the fields $\mathbb{R}$, $\mathbb{Q}$ and $\mathbb{Q}_2$ the involutions $\theta$ and $\theta \circ \mathcal{I}_{t_{(1,-1)}}$ leave nonisomorphic quaternion subalgebras elementwise fixed, and so by Proposition \ref{jake}, Corollary \ref{cortconj}, and Proposition \ref{scongt} they are not isomorphic. For $k=\mathbb{R}$ or $\mathbb{Q}_2$, by Theorem \ref{splitcomp} and Lemma \ref{nonsplitq} these are the only two quaternion subalgebras up to isomorphism, so there are no other possible isomorphy classes.
Part (2) follows from Theorem \ref{splitcomp} and Lemma \ref{nonsplitqQp}.
For part (3), Theorem \ref{splitcomp}, Proposition \ref{tconj}, and Corollary \ref{cortconj} give the result. \end{proof}
\subsection{Fixed point groups}
In order to compute the fixed point groups of each $k$-involution we first look at how such elements of $\Aut(C)$ act on $C$.
\newtheorem{fixedlemma}[subsubsection]{Lemma} \begin{fixedlemma} \label{fixedlemma}
Let $t\in \Aut(C)=G$ be such that $t^2 = \id$ and let $D \subset C$ be the quaternion algebra elementwise fixed by $t$. Then $f \in G^t := G^{\mathcal{I}_t} = \{g \in G \ | \ \mathcal{I}_t(g)=g \}$ if and only if $f$ leaves $D$ invariant. \end{fixedlemma} \begin{proof} ($\Rightarrow$): Let $D \subset C$ be fixed elementwise by $t$ and let $f\in G^t$, so that $\mathcal{I}_t(f)=f$. Now let $c\in C$ be any element of the octonion algebra containing $D$; we can write $C = D \oplus D^{\perp}$ and $c=a+b$, where $a \in D$ and $b \in D^{\perp}$. Since $t$ is a $k$-involution it has only $\pm 1$ as eigenvalues, and $t(a+b) = a - b$. Furthermore,
\begin{align*}
\mathcal{I}_t(f)(c) &= tft^{-1}(c) \\
&=tft(c) \\
&=tft(a+b) \\
&=tft(a)+tft(b) \\
&=tf(a)+f(b),
\end{align*} and since $\mathcal{I}_t(f)(c) = f(c)=f(a+b)$ for all $c\in C$,
\[ f(a)+f(b) = tf(a) + f(b). \] From this we can conclude that $tf(a) = f(a)$ so $f(a) \in D$. \
($\Leftarrow$): If we assume, conversely, that $D \subset C$ is the subalgebra fixed elementwise by $t$ and $f(D)=D$ then $f(D^{\perp}) = D^{\perp}$, and
\begin{align*}
tft^{-1}(c) &= tft(a+b) \\
&= tft(a)+tft(b) \\
&= tf(a) - tf(b) \\
&= f(a) + f(b) \\
&= f(c),
\end{align*} for all $c\in C$ and we have the result. \end{proof}
Every involution in $\Aut(C)$ leaves some quaternion subalgebra $D$ elementwise fixed. Now we need only see which automorphisms of $C$ leave $D$ invariant. In every case we consider, if we have a fixed quaternion algebra $D \subset C$, then $C= D\oplus D^{\perp}$ with respect to $N$. The automorphisms of $C$ that leave $D$ invariant, denoted $\Aut(C,D)$, are of the form
\[ s(x+ya) = s(x) +s(y)s(a), \] where $s \in \Aut(C,D)$; $x,y \in D$ and $a\in D^{\perp}$ such that $N(a) \neq 0$. Since $s$ leaves $D$ invariant we can further see that
\[ s(y) \in D, \text{ and } s(a) \in D^{\perp} \text{ such that } N(s(a)) \neq 0, \] and we can write
\[ s(x+ya) = s_{dp}(x+ya) = dxd^{-1} + (pdyd^{-1}) a, \] where $d, p \in D$ and $N(d)\neq 0$, $N(p)=1$. For more details see \cite{SV00}.
First let us consider the case where $D$ is a split quaternion algebra. In this case $D \cong M_2(k)$ and $d\in \GL _2(k)$ and $p \in \SL_2(k)$.
\newtheorem{fixsplit}[subsubsection]{Proposition} \begin{fixsplit} When $t \in \Aut(C)$ is an involution and leaves $D \subset C$, a split quaternion subalgebra, elementwise fixed, then
\[ G^t := G^{\mathcal{I}_t} \cong \PGL_2(k) \times \SL_2(k). \] \end{fixsplit}
\begin{proof}
Consider the map
\[ \psi: \GL _2(k) \times \SL_2(k) \to \Aut(C,D) \text{, where } (d,p) \mapsto s_{dp}, \]
which is surjective. The kernel is given by $\ker(\psi) = \{ (\alpha \cdot e , e ) \ | \ \alpha \in k^*\}$, and so $\Aut(C,D) \cong \big(\GL_2(k) \times \SL_2(k)\big)/\ker(\psi) \cong \PGL_2(k) \times \SL_2(k)$.
\end{proof}
In the case where the involution $t$ leaves a quaternion division algebra $D$ elementwise fixed, we have the same initial set-up, i.e., for $s\in \Aut(C,D)$ we have
\[ s(x+ya) = s_{dp}(x+ya) = d x d^{-1} + (pdyd^{-1}) a, \] where $N(d) \neq 0$ and $N(p)=1$; only now $D \not\cong M_2(k)$, but is instead isomorphic to Hamilton's quaternions over $k$. In this case $N(d) \neq 0$ only tells us that $d\neq 0$ in $D$.
\newtheorem{fixdiv}[subsubsection]{Proposition} \begin{fixdiv} When $t \in \Aut(C)$ is an involution and leaves $D \subset C$, a quaternion division algebra, elementwise fixed, then $G^t \cong \SO(D_0,N) \times \Sp(1)$ where $D_0 = ke^{\perp}$. \end{fixdiv}
\begin{proof}
The condition $N(p) = p \bar{p} = 1$ tells us that $p \in \Sp(1)$, the group of $1 \times 1$ symplectic matrices over $D$. Consider the surjective homomorphism
\[\psi: D^* \times \Sp(1) \to \Aut(C,D) \text{ where } (d,p) \mapsto s_{dp},\]
where $D^*=D-\{0\}$ is the group consisting of the elements of $D$ having inverses. Its kernel is $\ker(\psi) = \{ (\alpha \cdot e,e) \ | \ \alpha \in k^* \}$, so we have
\[ D^*/Z(D^*) \times \Sp(1) \cong \Inn(D^*) \times \Sp(1). \]
Jacobson tells us \cite{Ja58} that all automorphisms of $D^*$ are inner and leave the identity fixed. Also included in \cite{Ja58} is that $\Inn(D^*) \cong \SO(D_0,N)$, where $D_0 = e^{\perp} \subset D$ is the three dimensional subspace and $\SO(D_0,N)$ is the group of rotations of $D_0$ with respect to $N|_{D_0}$.
\end{proof}
\section{Concluding remarks}
\subsection{$k=\mathbb{Q}$}
Isomorphy classes of quaternion algebras over $k$ are given by equivalence classes of $2$-Pfister forms $\left(\frac{\alpha,\beta}{k}\right)$ over $k$ corresponding to the quadratic form,
\[ N(x) = x_0^2 + (-\alpha)x_1^2 + (-\beta)x_2^2 + (\alpha\beta)x_3^2, \] while octonion algebras depend on $3$-Pfister forms, $\left( \frac{\alpha,\beta,\gamma}{k} \right)$ with quadratic form,
\[ N(x) = x_0^2 + (-\alpha)x_1^2 +(-\beta)x_2^2 + (\alpha\beta)x_3^2 + (-\gamma)x_4^2 + (\alpha\gamma)x_5^2 + (\beta\gamma)x_6^2 + (-\alpha\beta\gamma)x_7^2. \]
It is not difficult to show that for a prime $p$, such that $p \equiv 3 \mod 4$,
\[ \left( \frac{ -1, p }{\mathbb{Q}} \right) \] is a division algebra. Further, for $p$ and $q$ distinct primes both equivalent to $3 \mod 4$
\[ \left( \frac{ -1, p }{\mathbb{Q}} \right) \not\cong \left( \frac{ -1, q }{\mathbb{Q}} \right), \] see \cite{Pi82} Exercise 1.7.4.
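A sketch of the standard argument, added here for the reader's convenience: the norm form of $\left( \frac{ -1, p }{\mathbb{Q}} \right)$ is $\langle 1,1,-p,-p \rangle$, so the algebra is split exactly when \[ x_0^2 + x_1^2 = p\,(x_2^2 + x_3^2) \] has a nontrivial solution, which we may take to be in coprime integers. Reducing modulo $p$ gives $x_0^2+x_1^2 \equiv 0 \mod p$; since $-1$ is not a square modulo a prime $p \equiv 3 \mod 4$, this forces $p \mid x_0$ and $p \mid x_1$, and then $p \mid x_2^2+x_3^2$ forces $p \mid x_2$ and $p \mid x_3$, contradicting coprimality. Hence the norm form is anisotropic and the algebra is a division algebra.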
\newtheorem{rationalex}[subsubsection]{Example} \begin{rationalex} Let $C = \left(M_2(\mathbb{Q}),M_2(\mathbb{Q}) \right)$. We can find an involution of $\Aut(C)$ leaving $D \cong \left( \frac{ -1, p }{\mathbb{Q}} \right)$ elementwise fixed by first constructing a basis for $D$. If we pick
\[ a = \left( 0,
\begin{bmatrix}
& 1 \\
1 &
\end{bmatrix}
\right)
\text{ and }
b = \left(
\begin{bmatrix}
& p \\
1 &
\end{bmatrix},
0 \right), \] then $a^2 = -1$ and $b^2 = p$. Then the other basis elements of $D$ are $e= (\id,0)$ and
\[ ab = \left( 0,
\begin{bmatrix}
-1 & \\
& -p
\end{bmatrix}
\right). \] Let $p \equiv 3 \mod 4$, then $D$ is a division algebra and
\[
s_p = \begin{bmatrix}
& & & 1 \\
& & p & \\
& p^{-1} & & \\
1 & & &
\end{bmatrix}
\bigoplus
\begin{bmatrix}
& & & p^{-1} \\
& & 1 & \\
& 1 & & \\
p & & &
\end{bmatrix} \in \Aut(C), \] leaves $D$ elementwise fixed. Notice that $s_p = s \cdot t_{(p,1)}$. There is a $\mathbb{Q}$-involution for each prime $p \equiv 3 \mod 4$, of which there are infinitely many. \end{rationalex}
Over $\mathbb{Q}$ there are only $2$ octonion algebras, the split algebra $\left( M_2(\mathbb{Q}), M_2(\mathbb{Q}) \right)$ and a division algebra. When we consider quaternion algebras over $\mathbb{Q}$ we get one split quaternion algebra, and we get exactly one division algebra for each nonempty set consisting of an even number of real or finite places of $\mathbb{Q}$, \cite{O'M63} and \cite{SV00}. \
\subsection{$k$ is an algebraic number field}
When the quaternion or octonion algebra is taken over an algebraic number field, two quadratic forms are equivalent if and only if they are equivalent over all local fields $k_{\nu}$, where $\nu$ varies over all places of $k$; this is a result of Hasse's Theorem \cite{Se73}. Hasse's Theorem also tells us that the number of possible anisotropic quadratic forms corresponding to nonisomorphic octonion algebras is bounded above by $2^r$, where $r$ is the number of real places of $k$ \cite{SV00}. \
\end{document}
\begin{document}
\title[ On globally symmetric Finsler spaces]{On globally symmetric Finsler spaces}
\author[R. Chavosh Khatamy , R. Esmaili\\]{R. Chavosh Khatamy $^{*}$, R. Esmaili\\}
\address{Department of Mathematics, Faculty of Sciences, Islamic Azad University, Tabriz Branch} \email{\tt [email protected], r\[email protected] }
\address{Department of Mathematics, Faculty of Sciences, Payame noor University, Ahar Branch} \email{\tt [email protected]}
\subjclass[2000]{53C60, 53C35}
\keywords{Finsler Space, Locally symmetric Finsler space, Globally Symmetric Finsler space, Berwald space. \\ \indent $^{*}$ The first author was supported by the funds of the Islamic Azad University- Tabriz Branch, (IAUT) }
\begin{abstract} This paper considers symmetry of Finsler spaces. We give some conditions characterizing globally symmetric Finsler spaces. Then we prove that such a space can be written as a coset space of a Lie group with an invariant Finsler metric. Finally, we prove that such a space must be Berwaldian. \end{abstract} \maketitle
\section{Introduction} The study of Finsler spaces is important in physics and biology (\cite{5}). In particular, there are several important books about such spaces (see \cite{1}, \cite{8}). For example, D. Bao, C. Robles and Z. Shen recently used Randers metrics in Finsler geometry on Riemannian manifolds (\cite{9} and \cite{8}, page 214). We must also point out that there has been only little study of the symmetry of such spaces (\cite{3}, \cite{12}). E. Cartan showed that symmetry plays a very important role in Riemannian geometry (\cite{5} and \cite{12}, page 203).
\begin {definition}\label{def1.0} A Finsler space is locally symmetric if, for any $p\in M$, the geodesic reflection $s_{p}$ is a local isometry of the Finsler metric. \end {definition} \begin {definition}\label{def1.1} A reversible Finsler space $(M,F)$ is called globally Symmetric if for any $x\in M$ there exists an involutive isometry $\sigma_{x}$ $($that is, $\sigma_{x}^{2}=I$ but $\sigma_{x}\neq I)$ of $(M,F)$ such that $x$ is an isolated fixed point of $\sigma_{x}$. \end {definition} \begin {definition}\label{def1.2} Let $G$ be a Lie group and $K$ a closed subgroup of $G$. Then the coset space $G/K$ is called symmetric if there exists an involutive automorphism $\sigma$ of $G$ such that $$G^{0}_{\sigma}\subset K\subset G_{\sigma},$$ where $G_{\sigma}$ is the subgroup consisting of the fixed points of $\sigma$ in $G$ and $G^{0}_{\sigma}$ denotes the identity component of $G_{\sigma}$. \end {definition} \begin {theorem}\label{the1.3} Let $G/K$ be a symmetric coset space. Then any $G$-invariant reversible Finsler metric $F$ on $G/K$ $($if one exists$)$ makes $(G/K, F)$ a globally symmetric Finsler space $($\cite{8}, page 8$)$. \end {theorem} \begin {theorem}\label{the1.4} Let $(M,F)$ be a globally Symmetric Finsler space. For $p\in M$, denote the involutive isometry of $(M,F)$ at $p$ by $s_{p}$. Then we have \\
~~~~~~ $($a$)$ ~~ For any $p\in M$, $(ds_{p})_{p}=-I$. In particular, $F$ must
be reversible.
~~~~~~ $($b$)$ ~~ $(M,F)$ is forward and backward complete;
~~~~~~ $($c$)$ ~~ $(M,F)$ is homogeneous. That is, the group of isometries of $(M,F)$, $I(M,F)$, acts transitively on $M$.
~~~~~~ $($d$)$ ~~ Let $\widetilde{M}$ be the universal covering space of $M$
and $\pi$ be the projection mapping. Then
$(\widetilde{M},\pi^{*}(F))$ is a globally Symmetric Finsler
space, where $\pi^{*}(F)$ is defined by \begin{eqnarray*} \pi^{*}(F)(q)=F((d\pi)_{\widetilde{p}}(q)), ~~ q\in T_{\widetilde{p}} (\widetilde{M}), \end{eqnarray*} (see \cite{8} for a proof). \end {theorem} \begin {corollary}\label{cor1.5} Let $(M,F)$ be a globally Symmetric Finsler space. Then for any $p\in M$, $s_{p}$ is a local geodesic symmetry at $p$. The symmetry $s_{p}$ is unique. $($See the proof of Theorem 1.2 and \cite{1}$)$ \end {corollary} \section{A theorem on globally Symmetric Finsler spaces} \begin {theorem}\label{the2.1} Let $(M,F)$ be a globally Symmetric Finsler space. Then there exists a Riemannian symmetric pair $(G,K)$ such that $M$ is diffeomorphic to $G/K$ and $F$ is invariant under $G$. \end {theorem} \begin {proof} The group $I(M,F)$ of isometries of $(M,F)$ acts transitively on $M$ ($($c$)$ of Theorem 1.5). We proved that $I(M,F)$ is a Lie transformation group of $M$ (\cite{12} and \cite{7}, page 78) and that, for any $p\in M$, the isotropy subgroup $I_{p}(M,F)$ is a compact subgroup of $I(M,F)$ (\cite{4}). Since $M$ is connected (\cite{7}, \cite{10}), we take $G=I(M,F)$; the subgroup $K$ of $G$ which fixes $p$ is a compact subgroup of $G$. Furthermore, $M$ is diffeomorphic to $G/K$ under the mapping $gK\to g.p$, $g\in G$ (\cite{7} Theorem 2.5, \cite{10}). \\ As in the Riemannian case on page 209 of \cite{7}, we define a mapping $s$ of $G$ into $G$ by $s(g)=s_{p}gs_{p}$, where $s_{p}$ denotes the (unique) involutive isometry of $(M,F)$ with $p$ as an isolated fixed point. Then it is easily seen that $s$ is an involutive automorphism of $G$ and the group $K$ lies between the closed subgroup $K_{s}$ of fixed points of $s$ and the identity component of $K_{s}$ (see the definition of a symmetric coset space,
\cite{11}). Furthermore, the group $K$ contains no normal subgroup of $G$ other than $\{e\}$. That is, $(G,K)$ is a symmetric pair, and $(G,K)$ is a Riemannian symmetric pair because $K$ is compact. \end {proof} The following results will be useful in the proof of the main theorem of this paper. \begin {proposition} Let $(M,\bar{F})$ be a Finsler space, $p\in M$ and $H_{p}$ be the holonomy group of $\bar{F}$ at $p$. If $F_{p}$ is an $H_{p}$-invariant Minkowski norm on $T_{p}(M)$, then $F_{p}$ can be extended to a Finsler metric $F$ on $M$ by parallel translations of $\bar{F}$ such that $F$ is affinely equivalent to $\bar{F}$ $($\cite{5}, proposition 4.2.2$)$. \end {proposition} \begin {proposition} A Finsler metric $F$ on a manifold $M$ is a Berwald metric if and only if it is affinely equivalent to a Riemannian metric $g$. In this case, $F$ and $g$ have the same holonomy group at any point $p\in M$ $($see proposition 4.3.3 of \cite{5}$)$. \end {proposition} Now we come to the main result. \begin {theorem} Let $(M,F)$ be a globally symmetric Finsler space. Then $(M,F)$ is a Berwald space. Furthermore, the connection of $F$ coincides with the Levi-Civita connection of a Riemannian metric $g$ such that $(M,g)$ is a Riemannian globally symmetric space. \end {theorem} \begin {proof} We first prove that $F$ is Berwaldian. By Theorem 2.1, there exists a Riemannian symmetric pair $(G,K)$ such that $M$ is diffeomorphic to $G/K$ and $F$ is invariant under $G$. Fix a $G$-invariant Riemannian metric $g$ on $G/K$. Without loss of generality, we can assume that $(G,K)$ is effective (see \cite{11} page 213). Since being a Berwald space is a local property, we can assume further that $G/K$ is simply connected. Then we have a decomposition (page 244 of \cite{11}): \begin{eqnarray*} G/K=E\times G_{1}/K_{1}\times G_{2}/K_{2}\times ...\times G_{n}/K_{n}, \end{eqnarray*} where $E$ is a Euclidean space and the $G_{i}/K_{i}$ are simply connected irreducible Riemannian globally symmetric spaces, $i=1,2,...,n$. Now we determine the holonomy group of $g$ at the origin of $G/K$. According to the de Rham decomposition theorem (\cite{2}), it is equal to the product of the holonomy groups of $E$ and the $G_{i}/K_{i}$ at the origin. Now $E$ has trivial holonomy group. For $G_{i}/K_{i}$, by the holonomy theorem of Ambrose and Singer (\cite{12}, page 231; it shows, for any connection, how the curvature form generates the holonomy group), we know that the Lie algebra $\eta_{i}$ of the holonomy group $H_{i}$ is spanned by the linear mappings of the form $\{\widetilde{\tau}^{-1}R_{0}(X,Y)\widetilde{\tau}\}$, where $\tau$ denotes any piecewise smooth curve starting from the origin, $\widetilde{\tau}$ denotes parallel displacement (with respect to the restricted Riemannian metric) along $\tau$, $\widetilde{\tau}^{-1}$ is the inverse of $\widetilde{\tau}$, $R_{0}$ is the curvature tensor at the origin of $G_{i}/K_{i}$ with respect to the restricted Riemannian metric, and $X,Y\in T_{0}(G_{i}/K_{i})$. Since $G_{i}/K_{i}$ is a globally symmetric space, the curvature tensor is invariant under parallel displacements (page 201 of \cite{10},\cite{11}). So \begin{eqnarray*}
\eta_{i}=span\{R_{0}(X,Y)|X,Y\in T_{0}(G_{i}/K_{i})\}, \end{eqnarray*} (see page 243 of \cite{7}, \cite{11}).\\ On the other hand, since $G_{i}$ is a semisimple group, we know that the Lie algebra of $K^{*}_{i}=Ad(K_{i})\simeq K_{i}$ is also equal to the span of the $R_{0}(X,Y)$ (\cite{11}). The groups $H_{i}$, $K^{*}_{i}$ are connected (because $G_{i}/K_{i}$ is simply connected) (\cite{10} and \cite{11}). Hence we have $H_{i}=K^{*}_{i}$. Consequently the holonomy group $H_{0}$ of $G/K$ at the origin is \begin{eqnarray*} K^{*}_{1}\times K^{*}_{2}\times...\times K^{*}_{n}. \end{eqnarray*} Now $F$ defines a Minkowski norm $F_{0}$ on $T_{0}(G/K)$ which is invariant under $H_{0}$ (\cite{2}). By proposition 2.2, we can construct a Finsler metric $\bar{F}$ on $G/K$ by parallel translations of $g$. By proposition 2.3, $\bar{F}$ is Berwaldian. Now for any point $p_{0}=aK\in G/K$, there exists a geodesic of the Riemannian manifold $(G/K , g)$, say $\gamma(t)$, such that $\gamma(0)=0, \gamma(1)=p_{0}$. Suppose the initial vector of $\gamma$ is $X_{0}$ and take $X\in p$ such that $d\pi(X)=X_{0}$. Then it is known that $\gamma(t)=\exp tX.p_{0}$ and that $d\tau(\exp tX)$ is the parallel translate of $(G/K, g)$ along $\gamma$ (\cite{11} and \cite{7}, page 208). Since $F$ is $G$-invariant, it is invariant under this parallel translate. This means that $F$ and $\bar{F}$ coincide on $T_{p_{0}}(G/K)$. Consequently they coincide everywhere. Thus $F$ is a Berwald metric. \\ For the next assertion, we use a result of Szab\'o (\cite{2}, page 278) which asserts that for any Berwald metric on $M$ there exists a Riemannian metric with the same connection. We have proved that $(M,F)$ is a Berwald space. Therefore there exists a Riemannian metric $g_{1}$ on $M$ with the same connection as $F$. In \cite{11}, we showed that the connection of a globally symmetric Berwald space is affine symmetric. So $(M,g_{1})$ is a Riemannian globally symmetric space (\cite{7}, \cite{11}). \end {proof}
From the proof of Theorem 2.4, we have the following corollary. \begin {corollary} Let $(G/K, F)$ be a globally symmetric Finsler space and $g=\ell+p$ be the corresponding decomposition of the Lie algebras. Let $\pi$ be the natural mapping of $G$ onto $G/K$. Then $(d\pi)_{e}$ maps $p$ isomorphically onto the tangent space of $G/K$ at $p_{0}=eK$. If $X\in p$, then the geodesic emanating from $p_{0}$ with initial tangent vector $(d\pi)_{e}X$ is given by \begin{eqnarray*} \gamma_{d\pi.X}(t)=\exp tX.p_{0}. \end{eqnarray*} Furthermore, if $Y\in T_{p_{0}}(G/K)$, then $(d\exp tX)_{p_{0}}(Y)$ is the parallel translate of $Y$ along the geodesic (see \cite{11}, \cite{7} proof of theorem 3.3). \end {corollary} \begin{example} Let $G_{1}\big/ K_{1}$, $G_{2}\big/ K_{2}$ be two symmetric coset spaces with $K_{1},K_{2}$ compact (in this case, they are Riemannian symmetric spaces) and let $g_{1},g_{2}$ be invariant Riemannian metrics on $G_{1}\big/ K_{1}$, $G_{2}\big/ K_{2}$, respectively. Let $M=G_{1}\big/ K_{1}\times G_{2}\big/ K_{2}$, let $O_{1}, O_{2}$ be the origins of $G_{1}\big/ K_{1}, G_{2}\big/ K_{2}$, respectively, and denote $O=(O_{1}, O_{2})$ (the origin of $M$). Now for $y=y_{1}+y_{2}\in T_{O}(M)=T_{O_{1}}(G_{1}\big/ K_{1})+T_{O_{2}}(G_{2}\big/ K_{2})$, we define \begin{eqnarray*} F(y)=\sqrt{g_{1}(y_{1},y_{1})+g_{2}(y_{2},y_{2})+\sqrt[s]{g_{1}(y_{1},y_{1})^{s}+g_{2}(y_{2},y_{2})^{s}}}, \end{eqnarray*} where $s$ is any integer $\geq2$. Then $F(y)$ is a Minkowski norm on $T_{O}(M)$ which is invariant under $K_{1}\times K_{2}$ (\cite{4}). Hence it defines a $G$-invariant Finsler metric on $M$ (\cite{6}, Corollary 1.2, of page 8246). By Theorem 2.1, $(M,F)$ is a globally symmetric Finsler space. By Theorem 2.4 and (\cite{2}, page 266), $F$ is non-Riemannian. \end{example}
\end{document}
\begin{document}
\title[Poloids]{Poloids from the Points of View of Partial Transformations and Category Theory}
\author{Dan Jonsson}
\address{Dan Jonsson, Department of Sociology, University of Gothenburg, Sweden.}
\email{[email protected]} \begin{abstract} Monoids and groupoids are examples of poloids. On the one hand, poloids can be regarded as one-sorted categories; on the other hand, poloids can be represented by partial magmas of partial transformations. In this article, poloids are considered from these two points of view. \end{abstract}
\maketitle
\section{Introduction}
While category theory is, in a sense, a mathematical theory of mathematics, there does also exist a mathematical (algebraic) theory of (small) categories. The phrase ``categories are just monoidoids'' summarizes this theory in a somewhat cryptic manner. One part of this article is concerned with clarifying this statement, systematically developing definitions of category-related algebraic concepts such as semigroupoids, poloids and groupoids, and deriving results that we recognize from category theory. While no new results are presented, the underlying notion that (small) categories are ``just webs of monoids'' \textendash{} or partial magmas generalizing monoids, semigroups, groups etc. \textendash{} may deserve more systematic attention than it has received.
The other part of the article deals with the link between abstract algebraic structures such as poloids and concrete systems of partial transformations on some set. We obtain systems of partial transformations that satisfy the axioms of poloids as abstract algebraic structures by successively adding constraints on partial magmas of partial transformations; it is also shown that every poloid is isomorphic to such a system of partial transformations. This procedure provides an intuitive interpretation of the poloid axioms, helping to motivate the axioms and making it easier to discover important concepts and results. As is known, this approach has shown its usefulness in the study of semigroups, for example, in the work of Wagner \cite{key-9}.
At a late stage in the preparation of the manuscript, I became aware of related work on constellations \cite{key-4,key-5,key-7}. Constellations turned out to generalize poloids in a way that I had not considered, yet had several points of contact with my concepts and results. I have added an Appendix where these matters are discussed.
\section{Poloids and transformation magmas}
\subsection{(Pre)functions, (pre)transformations and magmas} \begin{defn} \label{def1a} A \emph{(partial) prefunction}, $\mathsf{f}:X\nRightarrow Y$ is a set $\mathfrak{X}\subseteq X$ and a rule $\overline{\mathsf{f}}$ that assigns exactly one $\overline{\mathsf{f}}\!\left(x\right)\in Y$ to each $x\in\mathcal{\mathfrak{X}}$; to simplify the notation we may write $\overline{\mathsf{f}}\!\left(x\right)$ as $\mathsf{f}\!\left(x\right)$. We call $\mathfrak{X}$ the \emph{domain} of $\mathsf{f}$, denoted $\mathrm{dom}\!\left(\mathsf{f}\right)$. The \emph{image} of $\mathsf{f}$, denoted $\mathrm{im}\!\left(\mathsf{f}\right)$, is the set $\mathsf{f}\!\left(\mathrm{dom}\!\left(\mathsf{f}\right)\right)=\left\{ \mathsf{f}\!\left(x\right)\mid x\in\mathcal{\mathfrak{X}}\right\} $. \end{defn}
\begin{defn} \label{def1}A \emph{(partial) function} $f:X\nrightarrow Y$ is a prefunction $\mathsf{f}:X\nRightarrow Y$ and a set $\mathfrak{Y}$ such that $\mathrm{im}\!\left(\mathsf{f}\right)\subseteq\mathfrak{Y}\subseteq Y$. The \emph{domain} of $f$, denoted $\mathrm{dom}\!\left(f\right)$, is the domain of $\mathsf{f}$, and $\mathfrak{Y}$ is called the \emph{codomain} of $f$, denoted $\mathrm{cod}\!\left(f\right)$. The \emph{image} of $f$, denoted $\mathrm{im}\!\left(f\right)$, is defined to be the image of $\mathsf{f}$. \end{defn}
Although this terminology will not be used below, $X$ may be called the \emph{total domain} for $\mathsf{f}:X\nRightarrow Y$ or $f:X\nrightarrow Y$, and $Y$ may be called the \emph{total codomain} for $\mathsf{f}:X\nRightarrow Y$ or $f:X\nrightarrow Y$.
A \emph{total} \emph{prefunction} $\mathsf{f}:X\Rightarrow Y$ is a prefunction such that $\mathrm{dom}\!\left(\mathsf{f}\right)=X$. A \emph{non-empty} \emph{prefunction} $\mathsf{f}$ is a prefunction such that $\mathrm{dom}\!\left(\mathsf{f}\right)\neq\emptyset$. The \emph{restriction} of $\mathsf{f}:X\nRightarrow Y$ to $X'\subset X$
is the prefunction $\mathfrak{\mathsf{f}}\raise-.2ex\hbox{\ensuremath{|}}_{X'}:X'\nRightarrow Y$
such that $\mathrm{dom}\!\left(\mathsf{\mathsf{f}}\raise-.2ex\hbox{\ensuremath{|}}_{X'}\right)=\mathrm{dom}\!\left(\mathsf{f}\right)\cap X'$
and $\mathsf{f}\raise-.2ex\hbox{\ensuremath{|}}_{X'}\!\left(x\right)=\mathsf{f}\!\left(x\right)$
for all $x\in\mathrm{dom}\!\left(\mathsf{f}\raise-.2ex\hbox{\ensuremath{|}}_{X'}\right)$. A\emph{ pretransformation} on $X$ is a prefunction $\mathsf{f}:X\nRightarrow X$; a \emph{total} pretransformation on $X$ is a total prefunction $\mathsf{f}:X\Rightarrow X$. An \emph{identity pretransformation} $\mathsf{Id}_{S}$ is a pretransformation on $X\supseteq S$ such that $\mathrm{dom}\!\left(\mathsf{Id}_{S}\right)=S$ and $\mathsf{Id}_{S}\!\left(x\right)=x$ for all $x\in\mathrm{dom}\!\left(\mathsf{Id}_{S}\right)$.
Similarly, a \emph{total} \emph{function} $f:X\rightarrow Y$ is a function such that $\mathrm{dom}\!\left(f\right)=X$ and $\mathrm{cod}\!\left(f\right)=Y$. A \emph{non-empty function} $f$ is a function such that $\mathrm{dom}\!\left(f\right)~\neq~\emptyset$. The \emph{restriction} of $f:X\nrightarrow Y$ to $X'\subset X$ is the function $f\raise-.2ex\hbox{\ensuremath{|}}_{X'}:X'\nrightarrow Y$
such that $\mathrm{dom}\!\left(f\raise-.2ex\hbox{\ensuremath{|}}_{X'}\right)=\mathrm{dom}\!\left(f\right)\cap X'$,
$\mathrm{cod}\!\left(f\raise-.2ex\hbox{\ensuremath{|}}_{X'}\right)=\mathrm{cod}\!\left(f\right)$
and $f\raise-.2ex\hbox{\ensuremath{|}}_{X'}\!\left(x\right)=f\!\left(x\right)$
for all $x\in\mathrm{dom}\!\left(f\raise-.2ex\hbox{\ensuremath{|}}_{X'}\right)$. A\emph{ transformation} on $X$ is a function $f:X\nrightarrow X$; a \emph{total} transformation on $X$ is a total function $f:X\rightarrow X$. An \emph{identity transformation} $I\!d_{S}$ is a transformation on $X\supseteq S$ such that $\mathrm{dom}\!\left(I\!d_{S}\right)=\mathrm{cod}\!\left(I\!d_{S}\right)=S$ and $I\!d_{S}\!\left(x\right)=x$ for all $x\in\mathrm{dom}\!\left(I\!d_{S}\right)$.
Given a pretransformation $\mathsf{f}$ on $X$, $\mathsf{f}\!\left(x\right)$ denotes some $x\in X$ if and only if $x\in\mathrm{dom}\!\left(\mathsf{f}\right)$; $\mathsf{f}\!\left(\mathsf{f}\!\left(x\right)\right)$ denotes some $x\in X$ if and only if $x,\mathsf{\mathsf{f}}\!\left(x\right)\in\mathrm{dom}\!\left(\mathsf{f}\right)$; etc. We describe such situations by saying that $\mathsf{f}\!\left(x\right)$, $\mathsf{f}\!\left(\mathsf{f}\!\left(x\right)\right)$, etc. are \emph{defined}. Similarly, given a transformation $f$ on $X$, $f\!\left(x\right)$, $f\!\left(f\!\left(x\right)\right)$, etc. are said to be defined if the corresponding pretransformations $\mathsf{f}\!\left(x\right)$, $\mathsf{f}\!\left(\mathsf{f}\!\left(x\right)\right)$, etc. are defined\emph{.} \begin{defn} \label{def2}A \emph{(partial) binary operation} on a set $X$ is a non-empty prefunction \[ \uppi:X\times X\nRightarrow X,\qquad\left(x,y\right)\mapsto xy. \] \end{defn}
A \emph{total} binary operation on $X$ is a total prefunction $\uppi:X\times X\Rightarrow X$. A \emph{(partial) magma} $P$ is a non-empty set $\left|P\right|$
equipped with a binary operation on $\left|P\right|$; a \emph{total}
magma $P$ is a non-empty set $\left|P\right|$ equipped with a total binary operation on $\left|P\right|$. A \emph{submagma} $P'$ of a magma $P$ is a set $\left|P'\right|\subseteq\left|P\right|$ such that if $x,y\in\left|P'\right|$ then $xy\in\left|P'\right|$, with the restriction of $\uppi$ to $\left|P'\right|\times\left|P'\right|$
as a binary operation. (By an abuse of notation, $P$ will also denote the set $\left|P\right|$ henceforth.)
The notion of being defined for expressions involving a pretransformation can be extended in a natural way to expressions involving a binary operation. We say that $xy$ is defined if and only if $\left(x,y\right)\in\mathrm{dom}\!\left(\uppi\right)$; that $\left(xy\right)\!z$ is defined if and only if $\left(x,y\right),\left(xy,z\right)\in\mathrm{dom}\!\left(\uppi\right)$; that $z\!\left(xy\right)$ is defined if and only if $\left(x,y\right),\left(z,xy\right)\in\mathrm{dom}\!\left(\uppi\right)$; and so on. Thus, if $\left(xy\right)\!z$ or $z\!\left(xy\right)$ is defined then $xy$ is defined. \begin{rem} {\small{}To avoid tedious repetition of the word ``partial'', we speak about (pre)functions and magmas as opposed to total (pre)functions and total magmas rather than partial (pre)functions and partial magmas as opposed to (pre)functions and magmas. Note that a binary operation $\uppi:P\times P\nRightarrow P$ can always be regarded as a total binary operation $\uppi^{0}:P^{0}\times P^{0}\Rightarrow P^{0}$, where $P^{0}=P\cup\left\{ 0\right\} $ and $0x=x0=0$ for each $x\in P$, considering $xy$ to be defined if and only if $xy\neq0$. If we let $P^{0}$ represent $P$ in this way, it becomes a theorem that if $\left(xy\right)\!z$ or $z\!\left(xy\right)$ is defined then $xy$ is defined.}{\small \par} \end{rem}
\subsection{Semigroupoids, poloids and groupoids }
We say that $x$ \emph{precedes} $y$, denoted $x\prec y$, if and only if $xy$ is defined, or $x\!\left(yz\right)$ or $\left(zx\right)\!y$ is defined for some $z$, and we write $x\prec y\prec z$ if and only if $x\prec y$ and $y\prec z$, meaning that $xy$ and $yz$ are defined or $x\!\left(yz\right)$ is defined or $\left(xy\right)\!z$ is defined. \begin{defn} \label{def3}A \emph{semigroupoid} is a magma $P$ such that for any \linebreak{}
$x\prec y\prec z\in P$, $\left(xy\right)\!z$ and $x\!\left(yz\right)$ are defined and $\left(xy\right)\!z=x\!\left(yz\right)$. \end{defn} A\emph{ unit} in a magma $P$ is any $e\in P$ such that $ex=x$ for all $x$ such that $ex$ is defined and $xe=x$ for all $x$ such that $xe$ is defined. \begin{defn} \label{def4}A \emph{poloid} is a semigroupoid $P$ such that for any $x\in P$ there are units $\epsilon_{\!x},\varepsilon_{\!x}\in P$ such that $\epsilon_{x}x$ and $x\varepsilon_{\!x}$ are defined. \end{defn} For any $x\in P$, we have $\epsilon_{x}x=x=x\varepsilon_{x}$ since $\epsilon_{x}$ and $\varepsilon_{x}$ are units; we may call $\epsilon_{x}$ an \emph{effective left unit} for $x$ and $\varepsilon_{x}$ an \emph{effective right unit} for $x$. \begin{defn} \label{def5}A \emph{groupoid} is a poloid $P$ such that for every $x\in P$ there is a unique $x^{-1}\in P$ such that $xx^{-1}$ and $x^{-1}x$ are defined and units. \end{defn} \begin{rem} {\small{}Recall that groups, monoids and semigroups are total magmas with additional properties. Each kind of total magma can be generalized to a (partial) magma with similar properties, sometimes named by adding the ending ``-oid'', as in group/groupoid and semigroup/semigroupoid, so that the process of generalizing to a not necessarily total magma has become known as ``oidification''. (See the table below.) However, the terminology is not consistent \textendash{} for example, a monoid is not a (partial) magma. I prefer ``poloid'' to the rather clumsy and confusing term ``monoidoid'', which suggests some kind of ``double oidification''. An important concept should have a short name, and the idea behind the current terminology is that a monoid has a single unit, whereas a poloid may have more than one unit.
{} }{\small \par}
{\small{}} \begin{tabular}{ll} \hline {\small{}total magma (magma, groupoid)} & {\small{}magma (partial magma, halfgroupoid)}\tabularnewline {\small{}semigroup} & {\small{}semigroupoid}\tabularnewline {\small{}monoid} & {\small{}poloid (monoidoid)}\tabularnewline {\small{}group} & {\small{}groupoid}\tabularnewline \hline \end{tabular}{\small \par}
{} \end{rem} {\small{}It should be kept in mind that semigroups, monoids and groups can be generalized to other (partial) magmas than semigroupoids, poloids and groupoids, respectively. For example, if we do not require that if $x\prec y\prec z$ then $x\!\left(yz\right)$ and $\left(xy\right)\!z$ are defined and equal but only that if $x\!\left(yz\right)$ or $\left(xy\right)\!z$ is defined then $x\!\left(yz\right)$ and $\left(xy\right)\!z$ are defined and equal, we obtain a semigroup generalized to a certain (partial) magma but this is not a semigroupoid as defined here. The specific definitions given in this section are suggested by category theory.}{\small \par}
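The following standard illustration, not drawn from the sources discussed above, may help to fix ideas. Let $P$ be the set of all real matrices of all finite sizes, with $xy$ defined if and only if the number of columns of $x$ equals the number of rows of $y$, in which case $xy$ is the usual matrix product. Whenever $x\prec y\prec z$, both $\left(xy\right)\!z$ and $x\!\left(yz\right)$ are defined and equal, so $P$ is a semigroupoid in the sense of Definition \ref{def3}. The units are precisely the identity matrices, and an $m\times n$ matrix $x$ has $\epsilon_{x}=I_{m}$ and $\varepsilon_{x}=I_{n}$, so $P$ is a poloid in the sense of Definition \ref{def4}; it is not a monoid, having more than one unit, and not a groupoid, since, for example, a non-square zero matrix has no inverse. This matrix poloid will be referred to in some further illustrations below.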
\subsection{(Pre)transformation magmas}
Recall that the \emph{full transformation monoid} $\overline{\mathcal{F}}{}_{\!X}$ on a non-empty set $X$ is the set $\overline{\mathcal{F}}{}_{\!X}$ of all total functions $f:X\rightarrow X$, equipped with the total binary operation \[ \circ:\overline{\mathcal{F}}{}_{\!X}\times\overline{\mathcal{F}}{}_{\!X}\Rightarrow\overline{\mathcal{F}}{}_{\!X},\qquad\left(f,g\right)\mapsto f\circ g, \] where $\left(f\circ g\right)\!\left(x\right)=f\!\left(g\!\left(x\right)\right)$ for all $x\in X$. More generally, a \emph{transformation semigroup} $\mathcal{F}{}_{\!X}$ is a set of total functions $f:X\rightarrow X$ with $\circ$ as binary operation and such that $f,g\in\mathcal{F}{}_{\!X}$ implies $\mathsf{f}\circ\mathsf{g}\in\mathcal{F}{}_{\!X}$, and a \emph{transformation monoid} $\mathcal{M}_{\!X}$ is a transformation semigroup such that $I\!d{}_{\!X}\in\mathcal{M}{}_{\!X}$. \begin{example} \label{exa1}Set $X=\left\{ 1,2\right\} $, let $e:X\rightarrow X$ be defined by $e\!\left(1\right)=e\!\left(2\right)=1$ and let $M_{X}$ be the magma with $\left\{ e\right\} $ as underlying set and function composition $\circ$ as binary operation. Then $M_{X}$ is a (trivial) \emph{monoid of transformations}, but it is not a \emph{transformation monoid}. \end{example} When we generalize from total functions $X\rightarrow X$ to functions $X\nrightarrow X$ or prefunctions $X\nRightarrow X$, $\mathcal{F}{}_{\!X}$ is generalized from a transformation semigroup to a transformation magma $\mathscr{F}_{\!X}$ or a pretransformation magma $\mathscr{R}_{\!X}$. \begin{defn} \label{def6a}Let $X$ be a non-empty set. A\emph{ pretransformation magma} $\mathscr{R}_{\!X}$ on $X$ is a set $\mathscr{R}_{X}$ of non-empty pretransformations $\mathsf{f}:X\nRightarrow X$, equipped with the binary operation \[ \circ:\mathscr{R}_{\!X}\times\mathscr{R}_{\!X}\nRightarrow\mathscr{R}_{\!X},\qquad\left(\mathsf{f},\mathsf{g}\right)\mapsto\mathsf{f}\circ\mathsf{g}, \] where $\mathrm{dom}\!\left(\circ\right)=\left\{ \left(\mathsf{f},\mathsf{g}\right)\mid\mathrm{dom}\!\left(\mathsf{f}\right)\supseteq\mathsf{im}\!\left(\mathsf{g}\right)\right\} $ and $\mathsf{f}\circ\mathsf{g}$ if defined is given by $\mathsf{\mathrm{dom}}\!\left(\mathsf{f}\circ\mathsf{g}\right)=\mathrm{dom}\!\left(\mathsf{g}\right)$ and $\left(\mathsf{f}\circ\mathsf{g}\right)\!\left(x\right)=\mathsf{f}\!\left(\mathsf{g}\!\left(x\right)\right)$ for all $x\!\in\!\mathrm{dom}\!\left(\mathsf{f}\circ\mathsf{g}\right)$. \end{defn} The \emph{full} pretransformation magma on $X$, denoted $\overline{\mathscr{R}}_{\!X}$, is the pretransformation magma whose underlying set is the set of all non-empty pretransformations of the form $\mathsf{f}:X\nRightarrow X$. \begin{defn} \label{def6}Let $X$ be a non-empty set. A\emph{ transformation magma} $\mathscr{F}_{\!X}$ on $X$ is a set $\mathscr{F}_{X}$ of non-empty transformations $f:X\nrightarrow X$, equipped with the binary operation \[ \circ:\mathscr{F}_{\!X}\times\mathscr{F}_{\!X}\nRightarrow\mathscr{F}_{\!X},\qquad\left(f,g\right)\mapsto f\circ g, \] where $\mathrm{dom}\!\left(\circ\right)=\left\{ \left(f,g\right)\mid\mathrm{dom}\!\left(f\right)\supseteq\mathsf{im}\!\left(g\right)\right\} $ and $f\circ g$ if defined is given by $\mathsf{\mathrm{dom}}\!\left(f\circ g\right)=\mathrm{dom}\!\left(g\right)$, $\mathrm{cod}\!\left(f\circ g\right)=\mathrm{cod}\!\left(f\right)$ and $\left(f\circ g\right)\!\left(x\right)=f\!\left(g\!\left(x\right)\right)$ for all $x\!\in\!\mathrm{dom}\!\left(f\circ g\right)$. 
\end{defn} The \emph{full} transformation magma on $X$, denoted $\mathscr{\overline{F}}_{\!X}$, is the transformation magma whose underlying set is the set of all non-empty transformations $f:X\nrightarrow X$.
A (pre)transformation magma is clearly a magma as described in Definition \ref{def2}.
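To see the composition rule of Definitions \ref{def6a} and \ref{def6} at work (a small illustration only), let $X=\left\{ 1,2,3\right\} $, let $\mathsf{g}$ be the total pretransformation on $X$ with $\mathsf{g}\!\left(x\right)=1$ for all $x\in X$, and let $\mathsf{f}=\mathsf{Id}_{\left\{ 1,2\right\} }$. Then $\mathrm{dom}\!\left(\mathsf{f}\right)=\left\{ 1,2\right\} \supseteq\left\{ 1\right\} =\mathrm{im}\!\left(\mathsf{g}\right)$, so $\mathsf{f}\circ\mathsf{g}$ is defined, with $\mathrm{dom}\!\left(\mathsf{f}\circ\mathsf{g}\right)=X$ and $\left(\mathsf{f}\circ\mathsf{g}\right)\!\left(x\right)=1$; since also $\mathrm{dom}\!\left(\mathsf{g}\right)=X\supseteq\mathrm{im}\!\left(\mathsf{f}\right)$, the composition $\mathsf{g}\circ\mathsf{f}$ is defined as well, but with the smaller domain $\mathrm{dom}\!\left(\mathsf{g}\circ\mathsf{f}\right)=\left\{ 1,2\right\} $.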
The plan in this section, derived from the view that categories are ``webs of monoids'', is to construct transformation magmas that relate to poloids in the same way that transformation monoids relate to monoids. As a monoid is an \emph{associative} magma with a \emph{unit}, we look for appropriate generalizations of these two notions. \begin{fact} \label{f1}Let $\mathsf{f},\mathsf{g},\mathsf{h}$ be elements of a pretransformation magma.\emph{ }If $\left(\mathsf{f}\circ\mathsf{g}\right)\circ\mathsf{h}$ and $\mathsf{f}\circ\left(\mathsf{g}\circ\mathsf{h}\right)$ are defined then $\left(\mathsf{f}\circ\mathsf{g}\right)\circ\mathsf{h}=\mathsf{f}\circ\left(\mathsf{g}\circ\mathsf{h}\right)$. \end{fact} \begin{proof} We have \begin{gather*} \mathrm{dom}\!\left(\left(\mathsf{f}\circ\mathsf{g}\right)\circ\mathsf{h}\right)=\mathrm{dom}\!\left(\mathsf{h}\right)=\mathrm{dom}\!\left(\mathsf{g}\circ\mathsf{h}\right)=\mathrm{dom}\!\left(\mathsf{f}\circ\left(\mathsf{g}\circ\mathsf{h}\right)\right), \end{gather*} and \[ \left(\left(\mathsf{f}\circ\mathsf{g}\right)\circ\mathsf{h}\right)\!\left(x\right)=\left(\mathsf{f}\circ\mathsf{g}\right)\left(\mathsf{h}\!\left(x\right)\right)=\mathsf{f}\!\left(\mathsf{g}\!\left(\mathsf{h}\!\left(x\right)\right)\right)=\mathsf{f}\!\left(\left(\mathsf{g}\circ\mathsf{h}\right)\!\left(x\right)\right)=\left(\mathsf{f}\circ\left(\mathsf{g}\circ\mathsf{h}\right)\right)\!\left(x\right) \] for all $x\in\mathrm{dom}\!\left(\left(\mathsf{f}\circ\mathsf{g}\right)\circ\mathsf{h}\right)=\mathrm{dom}\!\left(\mathsf{f}\circ\left(\mathsf{g}\circ\mathsf{h}\right)\right)$. \end{proof} \begin{lem} \label{lem1}Let $\mathsf{f,g}$ be elements of a pretransformation magma. If $\mathsf{f}\circ\mathsf{g}$ is defined then $\mathsf{im}\!\left(\mathsf{f}\right)\supseteq\mathsf{im}\!\left(\mathsf{f}\circ\mathsf{g}\right)$. \end{lem} \begin{proof} Since $\mathrm{dom}\!\left(\mathsf{f}\right)\supseteq\mathsf{im}\!\left(\mathsf{g}\right)$ by definition, we have $\mathsf{im}\!\left(\mathsf{f}\right)=\mathsf{f}\!\left(\mathrm{dom}\!\left(\mathsf{f}\right)\right)\supseteq\mathsf{f}\!\left(\mathsf{im}\!\left(\mathsf{g}\right)\right)=\mathsf{f}\!\left(\mathsf{g}\!\left(\mathrm{dom}\!\left(\mathsf{g}\right)\right)\right)=\left(\mathsf{f}\circ\mathsf{g}\right)\left(\mathrm{dom}\!\left(\mathsf{f}\circ\mathsf{g}\right)\right)=\mathsf{im}\!\left(\mathsf{f}\circ\mathsf{g}\right)$. \end{proof} \begin{fact} \label{f2}Let $\mathsf{f},\mathsf{g},\mathsf{h}$ be elements of a pretransformation magma. If $\mathsf{f}\circ\mathsf{g}$ and $\mathsf{g}\circ\mathsf{h}$ are defined then $\left(\mathsf{f}\circ\mathsf{g}\right)\circ\mathsf{h}$ and $\mathsf{f}\circ\left(\mathsf{g}\circ\mathsf{h}\right)$ are defined. \end{fact} \begin{proof} We have $\mathrm{dom}\!\left(\mathsf{f}\circ\mathsf{g}\right)=\mathrm{dom}\!\left(\mathsf{g}\right)\supseteq\mathsf{im}\!\left(\mathsf{h}\right)$, so $\left(\mathsf{f}\circ\mathsf{g}\right)\circ\mathsf{h}$ is defined. Also, $\mathrm{dom}\!\left(\mathsf{f}\right)\supseteq\mathsf{im}\!\left(\mathsf{g}\right)$ and by Lemma \ref{lem1} $\mathsf{im}\!\left(\mathsf{g}\right)\supseteq\mathsf{im}\!\left(\mathsf{g}\circ\mathsf{h}\right)$, so $\mathsf{f}\circ\left(\mathsf{g}\circ\mathsf{h}\right)$ is defined. \end{proof} \begin{fact} \label{f3}Let $\mathsf{f},\mathsf{g},\mathsf{h}$ be elements of a pretransformation magma{\small{}.} If $\left(\mathsf{f}\circ\mathsf{g}\right)\circ\mathsf{h}$ is defined then $\mathsf{f}\circ\left(\mathsf{g}\circ\mathsf{h}\right)$ is defined. 
\end{fact} \begin{proof} If $\left(\mathsf{f}\circ\mathsf{g}\right)\circ\mathsf{h}$ is defined so that $\mathsf{f}\circ\mathsf{g}$ is defined then $\mathrm{dom}\!\left(\mathsf{g}\right)=\mathrm{dom}\!\left(\mathsf{f}\circ\mathsf{g}\right)\supseteq\mathsf{im}\!\left(\mathsf{h}\right)$. Thus, $\mathsf{g}\circ\mathsf{h}$ is defined so Fact \ref{f2} implies that $\mathsf{f}\circ\left(\mathsf{g}\circ\mathsf{h}\right)$ is defined. \end{proof} The implication in the opposite direction does not hold. \begin{example} \label{exa2}Let $\mathsf{f,g,h}$ be pretransformations on $\left\{ 1,2\right\} $; specifically, $\mathsf{f}=\mathsf{h}=\mathsf{Id}_{\left\{ 1\right\} }$ and $\mathsf{g}=\mathsf{Id}_{\left\{ 1,2\right\} }$. Then, $\mathrm{dom}\!\left(\mathsf{g}\right)\supseteq\mathrm{im}\!\left(\mathsf{h}\right)$ and $\mathrm{im}\!\left(\mathsf{g}\circ\mathsf{h}\right)=\left\{ 1\right\} $, so $\mathrm{dom}\!\left(\mathsf{f}\right)\supseteq\mathrm{im}\!\left(\mathsf{\mathsf{g}\circ\mathsf{h}}\right)$. Hence, $\mathsf{f}\circ\left(\mathsf{g}\circ\mathsf{h}\right)$ is defined, but we do not have $\mathrm{dom}\!\left(\mathsf{f}\right)\supseteq\mathrm{im}\!\left(\mathsf{g}\right)$, so $\mathsf{f}\circ\mathsf{g}$ is not defined and hence $\left(\mathsf{f}\circ\mathsf{g}\right)\circ\mathsf{h}$ is not defined. \end{example} So, somewhat surprisingly, pretransformation magmas do not have a two-sided notion of associativeness. We need the notion of a transformation magma and an additional assumption to derive the complement of Fact \ref{f3}. \begin{defn} \label{def7}A\emph{ transformation semigroupoid} $\mathscr{S}_{\!X}$ on $X$ is a transformation magma $\mathscr{F}_{\!X}$ such that if $\mathrm{dom}\!\left(f\right)\supseteq\mathsf{im}\!\left(g\right)$ for some $f,g\in\mathscr{F}_{\!X}$ then $\mathrm{dom}\!\left(f\right)=\mathsf{\mathrm{cod}}\!\left(g\right)$. \end{defn} Of course, if $\mathrm{dom}\!\left(f\right)=\mathrm{cod}\!\left(g\right)$ then $\mathrm{dom}\!\left(f\right)\supseteq\mathsf{im}\!\left(g\right)$. Thus, in a transformation semigroupoid $f\circ g$ is defined if and only if $\mathrm{dom}\!\left(f\right)=\mathsf{\mathrm{cod}}\!\left(g\right)$.
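A small transformation semigroupoid (given here only for illustration) can be obtained on $X=\left\{ 1,2\right\} $ as follows. Let $u$ be the transformation with $\mathrm{dom}\!\left(u\right)=\left\{ 1\right\} $, $\mathrm{cod}\!\left(u\right)=\left\{ 2\right\} $ and $u\!\left(1\right)=2$, let $v$ be the transformation with $\mathrm{dom}\!\left(v\right)=\left\{ 2\right\} $, $\mathrm{cod}\!\left(v\right)=\left\{ 1\right\} $ and $v\!\left(2\right)=1$, and consider $\mathscr{S}_{\!X}=\left\{ I\!d_{\left\{ 1\right\} },I\!d_{\left\{ 2\right\} },u,v\right\} $. One checks that whenever $\mathrm{dom}\!\left(f\right)\supseteq\mathrm{im}\!\left(g\right)$ for $f,g\in\mathscr{S}_{\!X}$ we have $\mathrm{dom}\!\left(f\right)=\mathrm{cod}\!\left(g\right)$, and that $\mathscr{S}_{\!X}$ is closed under $\circ$ (for instance $u\circ v=I\!d_{\left\{ 2\right\} }$ and $v\circ u=I\!d_{\left\{ 1\right\} }$), so $\mathscr{S}_{\!X}$ is a transformation semigroupoid. By contrast, the transformations of Example \ref{exa2} cannot be assigned codomains making them a transformation semigroupoid together.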
If $\left(f\circ g\right)\circ h$ and $f\circ\left(g\circ h\right)$ are defined then $\mathrm{cod}\!\left(\left(f\circ g\right)\circ h\right)=\mathrm{cod}\!\left(f\circ g\right)=\mathrm{cod}\!\left(f\right)=\mathrm{cod}\!\left(f\circ\left(g\circ h\right)\right)$, so Fact 1 holds for transformation magmas as well. It is also clear that the proofs of Facts 2 and 3 apply to transformation magmas as well. Thus, we can use Facts 1\textendash 3 also when dealing with transformation magmas. On the other hand, Example 2 applies to transformation magmas as well, but not to transformation semigroupoids. \begin{fact} \label{f4}Let $f,g,h$ be elements of a transformation semigroupoid. If $f\circ\left(g\circ h\right)$ is defined then $\left(f\circ g\right)\circ h$ is defined. \end{fact} \begin{proof} If $f\circ\left(g\circ h\right)$ is defined then $\mathrm{dom}\!\left(f\right)=\mathrm{cod}\!\left(g\circ h\right)=\mathsf{\mathrm{cod}}\!\left(g\right)$. Thus, $f\circ g$ is defined, and as $g\circ h$ is defined as well Fact \ref{f2} implies that {\small{}$\left(f\circ g\right)\circ h$ }is defined. \end{proof} \begin{thm} \label{the1}A transformation semigroupoid is a semigroupoid. \end{thm} \begin{proof} By Facts 2, 3 and 4, if $f\prec g\prec h$ then $\left(f\circ g\right)\circ h$ and $f\circ\left(g\circ h\right)$ are defined, and by Fact 1 this implies that $\left(f\circ g\right)\circ h=f\circ\left(g\circ h\right)$. \end{proof} Poloids are semigroupoids with effective left and right units. Such units can be added to transformation semigroupoids in a quite natural way. \begin{defn} \label{def8}A\emph{ transformation poloid} $\mathscr{P}_{\!X}$ is a\emph{ }transformation semigroupoid $\mathscr{S}_{\!X}$ such that if $f\in\mathscr{S}_{\!X}$ then $I\!d_{\mathrm{dom}\left(f\right)},I\!d_{\mathrm{cod}\left(f\right)}\in\mathscr{S}_{\!X}$. \end{defn} \begin{fact} \label{f5}Let $\mathscr{P}_{X}$ be a transformation poloid. For any $f\in\mathscr{P}_{X}$, $I\!d_{\mathrm{dom}\left(f\right)}$ and $I\!d_{\mathrm{cod}\left(f\right)}$ are units. \end{fact} \begin{proof} If $f,g\in\mathscr{P}_{X}$ and $I\!d_{\mathrm{dom}\left(f\right)}\circ g$ is defined then \begin{gather*} \mathrm{dom}\!\left(I\!d_{\mathrm{dom}\left(f\right)}\circ g\right)=\mathrm{dom}\!\left(g\right),\\ \mathrm{cod}\!\left(I\!d_{\mathrm{dom}\left(f\right)}\circ g\right)=\mathrm{cod}\!\left(I\!d_{\mathrm{dom}\left(f\right)}\right)=\mathrm{dom}\!\left(I\!d_{\mathrm{dom}\left(f\right)}\right)=\mathrm{cod}\!\left(g\right), \end{gather*}
and $I\!d{}_{\mathrm{dom}\left(f\right)}\!\left(g\!\left(x\right)\right)=g\!\left(x\right)$ for all $x\in\mathrm{dom}\!\left(g\right)$. Hence, $I\!d_{\mathrm{dom}\left(f\right)}\circ g=g$.
Also, if $f,h\in\mathscr{P}_{X}$ and $h\circ I\!d_{\mathrm{dom}\left(f\right)}$ is defined then \begin{gather*} \mathrm{dom}\!\left(h\right)=\mathrm{cod}\!\left(I\!d_{\mathrm{dom}\left(f\right)}\right)=\mathrm{dom}\!\left(I\!d_{\mathrm{dom}\left(f\right)}\right)=\mathrm{dom}\!\left(h\circ I\!d_{\mathrm{dom}\left(f\right)}\right),\\ \mathrm{cod}\!\left(h\right)=\mathrm{cod}\!\left(h\circ I\!d_{\mathrm{dom}\left(f\right)}\right), \end{gather*}
and $h\!\left(I\!d{}_{\mathrm{dom}\left(f\right)}\!\left(x\right)\right)=h\!\left(x\right)$ for all $x\in\mathrm{dom}\!\left(I\!d_{\mathrm{dom}\left(f\right)}\right)=\mathrm{dom}\!\left(h\right)$, so $h\circ I\!d_{\mathrm{dom}\left(f\right)}=h$.
We have thus shown that $I\!d{}_{\mathrm{dom}\left(f\right)}$ is a unit.
It is shown similarly that if $I\!d{}_{\mathrm{cod}\left(f\right)}\circ g$ is defined then $I\!d{}_{\mathrm{cod}\left(f\right)}\circ g=g$, and if $h\circ I\!d_{\mathrm{cod}\left(f\right)}$ is defined then $h\circ I\!d_{\mathrm{cod}\left(f\right)}=h$, so $I\!d_{\mathrm{cod}\left(f\right)}$ is a unit as well. \end{proof} \begin{fact} \label{f6}Let $\mathscr{P}_{X}$ be a transformation poloid. For any $f\in\mathscr{P}_{X}$, $f\circ I\!d_{\mathrm{dom}\left(f\right)}$ and $I\!d_{\mathrm{cod}\left(f\right)}\circ f$ are defined. \end{fact} \begin{proof} We have $\mathrm{dom}\!\left(f\right)=\mathrm{dom}\!\left(I\!d_{\mathrm{dom}\left(f\right)}\right)=\mathrm{cod}\!\left(I\!d_{\mathrm{dom}\left(f\right)}\right)$ and $\mathrm{dom}\!\left(I\!d_{\mathrm{cod}\left(f\right)}\right)=\mathrm{cod}\!\left(f\right)$. \end{proof} \begin{thm} \label{the2}A transformation poloid is a poloid. \end{thm} \begin{proof} Immediate from Facts 5 and 6. \end{proof} \begin{rem} {\small{}We have considered two requirements for $\mathsf{f}\circ$$\mathsf{g}$ or $f\circ g$ being defined, namely that $\mathrm{dom}\!\left(\mathsf{f}\right)\supseteq\mathsf{im}\!\left(\mathsf{g}\right)$ or that $\mathrm{dom}\!\left(f\right)=\mathsf{\mathrm{cod}}\!\left(g\right)$. Other definitions are common in the literature. Instead of requiring that $\mathrm{dom}\!\left(\mathsf{f}\right)\supseteq\mathsf{im}\!\left(\mathsf{g}\right)$, it is often required that $\mathrm{dom}\!\left(\mathsf{f}\right)\cap\mathsf{im}\!\left(\mathsf{g}\right)\neq\emptyset$, and instead of requiring that $\mathrm{dom}\!\left(f\right)=\mathsf{\mathrm{cod}}\!\left(g\right)$, it is sometimes required that $\mathrm{dom}\!\left(f\right)=\mathsf{\mathrm{im}}\!\left(g\right)$. Of these alternative definitions, the first one tends to be too weak for present purposes, while the second one tends to be too restrictive.}{\small \par}
{\small{}For example, if we stipulate that $\mathsf{f}\circ\mathsf{g}$ is defined if and only if $\mathrm{dom}\!\left(\mathsf{f}\right)\cap\mathsf{im}\!\left(\mathsf{g}\right)\neq\emptyset$ then $\mathsf{f}\circ\mathsf{g}$ and $\mathsf{g}\circ\mathsf{h}$ being defined does not imply that $\left(\mathsf{f}\circ\mathsf{g}\right)\circ\mathsf{h}$ and $\mathsf{f}\circ\left(\mathsf{g}\circ\mathsf{h}\right)$ are defined, contrary to Fact \ref{f2}.}{\small \par}
{\small{}Also, if we stipulate that $f\circ g$ is defined if and only if $\mathrm{dom}\!\left(f\right)=\mathsf{\mathrm{im}}\!\left(g\right)$ and let $f,g$ be total transformations on $X$, then $f\circ g$ is defined only if $g$ is surjective so that $\mathrm{im}\!\left(g\right)=X$. Thus, $\overline{\mathcal{F}}{}_{\!X}=\left\{ f\mid f:X\rightarrow X\right\} $ is not a monoid under this function composition. As monoids are poloids, this anomaly suggests that the condition $\mathrm{dom}\!\left(f\right)=\mathsf{\mathrm{im}}\!\left(g\right)$ for $f\circ g$ to be defined is not appropriate in the context of poloids.}{\small \par}
{\small{}On the other hand, stipulating that $f\circ g$ is defined if and only if $\mathrm{dom}\!\left(f\right)\supseteq\mathsf{im}\!\left(g\right)$ does not give a fully associative binary operation (Example 2). This is a fatal flaw for many purposes, including representing poloids as magmas of transformations.}{\small \par}
{\small{}We note that the exact formalization of the notion of ``partial function'' is important. A ``partial function'' $\mathfrak{f}$ is often defined as being equipped only with a domain and an image (range), and then there are only three reasonable ways of composing ``partial transformations'': $\mathfrak{f}\circ\mathfrak{g}$ is defined if and only if $\mathrm{dom}\!\left(\mathfrak{f}\right)\cap\mathsf{im}\!\left(\mathsf{\mathfrak{g}}\right)\neq\emptyset$ or $\mathrm{dom}\!\left(\mathfrak{f}\right)\supseteq\mathsf{im}\!\left(\mathfrak{g}\right)$ or $\mathrm{dom}\!\left(\mathfrak{f}\right)=\mathsf{\mathrm{im}}\!\left(\mathsf{\mathfrak{g}}\right)$. But according to Definition \ref{def1}, ``partial functions'' have codomains of their own, so we can stipulate that $\mathfrak{f}\circ\mathfrak{g}$ is defined if and only if $\mathrm{dom}\!\left(\mathfrak{f}\right)=\mathsf{\mathrm{cod}}\!\left(\mathfrak{g}\right)$, and this turns out to be just what we need when specializing magmas of ``partial transformations'' to semigroupoids and poloids.}{\small \par} \end{rem}
\section{Poloids and categories}
\subsection{Elementary properties of abstract poloids}
Recall that a poloid $P$ is a magma satisfying the following conditions: \begin{lyxlist}{1} \item [{(P1).}] For any $x\prec y\prec z\in P$, $\left(xy\right)\!z$ and $x\!\left(yz\right)$ are defined and $\left(xy\right)\!z=x\!\left(yz\right)$. \item [{(P2).}] \noindent For any $x\!\in\!P$ there are units $\epsilon_{\!x},\varepsilon_{\!x}\!\in\!P$ such that $\epsilon_{\!x}x$ and $x\varepsilon_{\!x}$ are defined. \end{lyxlist} Let us derive some elementary properties of poloids as abstract algebraic structures.
{} \begin{prop} \label{pro1}Let $P$ be a poloid and $e\in P$ a unit. Then $ee$ is defined and $ee=e$. \end{prop} \begin{proof} Let $\epsilon_{e}\in P$ be an effective left unit for the unit $e$. Then, $\epsilon_{e}e$ is defined and $e=\epsilon_{e}e=\epsilon_{e}$, implying the assertion. \end{proof} By Proposition \ref{pro1}, every unit is an effective left and right unit for itself. \begin{prop} \label{pro2}Let $P$ be a poloid. If $\epsilon_{x}$ and $\epsilon_{x}'$ are effective left units for $x\in P$ then $\epsilon_{x}=\epsilon_{x}'$, and if $\varepsilon{}_{x}$ and $\varepsilon_{x}'$ are effective right units for $x\in P$ then $\varepsilon_{x}=\varepsilon_{x}'$. \end{prop} \begin{proof} By assumption, $\epsilon_{x}x$ and and $\epsilon_{x}'x$ are defined and equal to $x$, so $\epsilon_{x}\!\left(\epsilon_{x}'x\right)$ is defined. Thus, $\left(\epsilon_{x}\epsilon_{x}'\right)\!x$ is defined, so $\epsilon_{x}\epsilon_{x}'$ is defined. As $\epsilon_{x}$ and $\epsilon_{x}'$ are units, this implies $\epsilon_{x}=\epsilon_{x}\epsilon_{x}'=\epsilon_{x}'$. The uniqueness of the effective right unit for $x$ is proved in the same way. \end{proof} Note that if $xy$ is defined then $\left(\epsilon_{x}x\right)\!y$ is defined so $\epsilon_{x}\!\left(xy\right)$ is defined and $\epsilon_{x}\!\left(xy\right)=\left(\epsilon_{x}x\right)\!y=xy=\epsilon_{xy}\!\left(xy\right)$, so by Proposition \ref{pro2} we have $\epsilon_{x}=\epsilon_{xy}$. A similar argument shows that $\varepsilon_{y}=\varepsilon_{xy}$.
Also note that in a groupoid, where $xx^{-1}$ and $x^{-1}x$ are defined and units, we have $x\left(x^{-1}x\right)=x$, $x^{-1}\left(xx^{-1}\right)=x^{-1}$, $\left(xx^{-1}\right)x=x$, and $\left(x^{-1}x\right)x^{-1}=x^{-1}$, where the four left-hand sides are defined. Thus, by Proposition \ref{pro2} we have $xx^{-1}=\epsilon_{x}=\varepsilon_{x^{-1}}$ and $x^{-1}x=\varepsilon_{x}=\epsilon_{x^{-1}}$. \begin{prop} \label{pro3}Every poloid $P$ can be equipped with surjective functions \begin{gather*} s:P\rightarrow E,\qquad x\mapsto\epsilon_{x},\\ t:P\rightarrow E,\qquad x\mapsto\varepsilon_{x}, \end{gather*}
where $E$ is the set of all units in $P$ and $s\!\left(e\right)=t\!\left(e\right)=e$ for all $e\in E$. \end{prop} \begin{proof} Immediate from (P2), Proposition \ref{pro1} and Proposition \ref{pro2}. \end{proof} \begin{prop} \label{pro4}Let $P$ be a poloid. For any $x,y\in P$, $xy$ is defined if and only if $\varepsilon_{x}=\epsilon_{y}$. \end{prop} \begin{proof} If $xy$ is defined then $\left(x\varepsilon_{x}\right)\!y$ is defined, so $\varepsilon_{x}y$ is defined and as $\varepsilon_{x}$ is a unit we have $\varepsilon_{x}y=y=\epsilon_{y}y$, so $\varepsilon_{x}=\epsilon_{y}$ by Proposition \ref{pro2}. Conversely, if $\varepsilon_{x}=\epsilon_{y}$ then $\varepsilon_{x}y$ is defined, and as $x\varepsilon_{x}$ is defined, $\left(x\varepsilon_{x}\right)\!y=xy$ is defined. \end{proof} A \emph{total poloid} is a poloid $P$ whose binary operation $\uppi$ is a total function. \begin{prop} A total poloid has only one unit. \end{prop} \begin{proof} For any pair $e,e'\in P$ of units, $ee'$ is defined so $e=ee'=e'$. \end{proof}
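In the matrix poloid described earlier (another aside), Proposition \ref{pro4} records the familiar fact that the product of an $m\times n$ matrix $x$ and a $p\times q$ matrix $y$ is defined precisely when $\varepsilon_{x}=I_{n}$ coincides with $\epsilon_{y}=I_{p}$, that is, when $n=p$; the functions $s$ and $t$ of Proposition \ref{pro3} send $x$ to $I_{m}$ and $I_{n}$, respectively.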
\begin{prop} \label{pro5}A poloid with only one unit is a monoid. \end{prop} \begin{proof} Let $P$ be a poloid. By assumption, there is a unique unit $e\in P$ such that $e=\varepsilon_{x}=\epsilon_{y}$ for any $x,y\in P$. Therefore, it follows from Proposition \ref{pro4} that $xy$ and $yz$ are defined for any $x,y,z\in P$. Hence, $\left(xy\right)\!z$ and $x\!\left(yz\right)$ are defined and equal for any $x,y,z\in P$. Also, $x=\epsilon_{x}x=x\varepsilon_{x}$ for any $x\in P$ implies $x=ex=xe$ for any $x\in P$. \end{proof} A poloid can thus be regarded as a generalized monoid, and also as a generalized groupoid; in fact, poloids generalize groups via monoids and via groupoids. \begin{prop} \label{pro6}A groupoid with only one unit is a group. \end{prop} \begin{proof} A monoid with inverses is a group. \end{proof}
\subsection{Subpoloids, poloid homomorphisms and poloid actions}
Recall that a submonoid of a monoid $M$ is a monoid $M'$ such that $M'$ is a submagma of $M$ and the unit in $M'$ is the unit in $M$. Subpoloids can be defined similarly. \begin{defn} \label{def9}A \emph{subpoloid} of a poloid $P$ is a poloid $P'$ such that $P'$ is a submagma of $P$ and every unit in $P'$ is a unit in $P$. \end{defn} Homomorphisms and actions of poloids similarly generalize homomorphisms and actions of monoids. \begin{defn} \label{def10}Let $P$ and $Q$ be poloids. A \emph{poloid homomorphism} from $P$ to $Q$ is a total function $\phi:P\rightarrow Q$ such that \begin{lyxlist}{1} \item [{(1)}] if $x,y\in P$ and $xy$ is defined then $\phi\!\left(x\right)\!\phi\!\left(y\right)$ is defined and $\phi\!\left(xy\right)=\phi\!\left(x\right)\!\phi\!\left(y\right)$; \item [{(2)}] if $e$ is a unit in $P$ then $\phi\!\left(e\right)$ is a unit in $Q$. \end{lyxlist} A \emph{poloid isomorphism} is a poloid homomorphism $\phi$ such that the inverse function $\phi^{-1}$ exists and is a poloid homomorphism. \end{defn} Note that $\phi\!\left(x\right)=\phi\!\left(\epsilon_{x}x\right)=\phi\!\left(\epsilon_{x}\right)\!\phi\!\left(x\right)$ by (1) and $\phi\!\left(\epsilon_{x}\right)$ is a unit by (2) in Definition \ref{def10}, so by Proposition \ref{pro2} we have $\phi\!\left(\epsilon_{x}\right)=\epsilon_{\phi\left(x\right)}$. Dually, $\phi\!\left(\varepsilon_{x}\right)=\varepsilon_{\phi\left(x\right)}$.
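As an illustration of Definition \ref{def10} (not taken from the main development), let $P$ be the matrix poloid described earlier and let $N$ be the poloid whose elements are the pairs $\left(m,n\right)$ of positive integers, with $\left(m,n\right)\!\left(p,q\right)$ defined if and only if $n=p$, in which case $\left(m,n\right)\!\left(n,q\right)=\left(m,q\right)$; one checks that $N$ is a poloid whose units are the pairs $\left(n,n\right)$. The total function $\phi$ sending an $m\times n$ matrix to $\left(m,n\right)$ satisfies (1) and (2) in Definition \ref{def10} and is therefore a poloid homomorphism, but it is not a poloid isomorphism, since it is not injective.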
Let $P$ be a poloid, let $Q$ be a magma and assume that there exists a total function $\phi:P\rightarrow Q$ satisfying (1) and (2) in Definition \ref{def10} and also such that (1') if $\phi\!\left(x\right)\!\phi\!\left(y\right)$ is defined then $xy$ is defined. It is easy to verify that then $\phi\!\left(P\right)$ is a magma, (P1) is satisfied in $\phi\!\left(P\right)$, and if $x'=\phi\!\left(x\right)\in\phi\!\left(P\right)$ then $\phi\!\left(\epsilon_{x}\right)$ ($\phi\!\left(\varepsilon_{x}\right))$ is an effective left (right) unit for $x'$, so $\phi\!\left(P\right)$ is a poloid. \begin{defn} \label{def11}A \emph{poloid action} of a poloid $P$ on a set $X$ is a total function \[ \alpha:P\rightarrow\alpha\!\left(P\right)\subseteq\overline{\mathscr{F}}_{\!X} \] which is a poloid homomorphism such that if $e\in P$ is a unit then $\alpha\!\left(e\right)\in\alpha\!\left(P\right)$ is an identity transformation $I\!d_{\mathrm{dom}\left(\alpha\left(e\right)\right)}$ on $X$.
A \emph{prefunction poloid action} of a poloid $P$ on $X$ is similarly a total function \[ \upalpha:P\rightarrow\upalpha\!\left(P\right)\subseteq\overline{\mathscr{R}}_{\!X} \] which is a poloid homomorphism such that if $e\in P$ is a unit then $\upalpha\!\left(e\right)\in\upalpha\!\left(P\right)$ is an identity pretransformation $\mathsf{Id}{}_{\mathrm{dom}\left(\upalpha\left(e\right)\right)}$ on $X$. \end{defn} A poloid action $\alpha$ thus assigns to each $x\in P$ a non-empty transformation \[ \alpha\!\left(x\right):X\nrightarrow X,\qquad t\mapsto\alpha\!\left(x\right)\!\left(t\right) \] such that if $xy$ is defined then $\alpha\!\left(x\right)\circ\alpha\!\left(y\right)$ is defined and $\alpha\!\left(xy\right)=\alpha\!\left(x\right)\circ\alpha\!\left(y\right)$, and for each unit $e$ in $P$ its image $\alpha\!\left(e\right)$ is a unit in $\alpha\!\left(P\right)$ such that $\alpha\!\left(e\right)\!\left(t\right)=t$ for each $t\in\mathrm{dom}\!\left(\alpha\!\left(e\right)\right)$. \begin{rem} {\small{}The definition of a poloid homomorphism given here implies the usual definition of a monoid homomorphism. The definition of a monoid action obtained from Definition \ref{def11} is also the usual one. Specifically, a monoid action $\alpha$ of $M$ on a set $X$ is a function \[ \alpha:M\rightarrow\alpha\!\left(M\right)\subseteq\overline{\mathcal{F}}_{\!X} \] such that $\alpha\!\left(xy\right)\!\left(t\right)=\alpha\!\left(x\right)\circ\alpha\!\left(y\right)\!\left(t\right)$ and $\alpha\!\left(e\right)\!\left(t\right)=t$ for all $x,y\in M$ and all $t\in X$. Denoting $\alpha\!\left(x\right)\!\left(t\right)$ by $x\cdot t$, this is rendered as $\left(xy\right)\cdot t=x\cdot\left(y\cdot t\right)$ and $e\cdot t=t$. Note that $\alpha\!\left(e\right)=I\!d_{X}$ is a unit in $\overline{\mathcal{F}}_{\!X}$ and thus in $\alpha\!\left(M\right)$.}{\small \par} \end{rem}
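For a concrete poloid action (offered only as an illustration), let $P$ again be the matrix poloid and let $X=\bigcup_{n\geq1}\mathbb{R}^{n}$ be the set of all real column vectors of finite length. For an $m\times n$ matrix $x$, let $\alpha\!\left(x\right):X\nrightarrow X$ be the transformation with $\mathrm{dom}\!\left(\alpha\!\left(x\right)\right)=\mathbb{R}^{n}$, $\mathrm{cod}\!\left(\alpha\!\left(x\right)\right)=\mathbb{R}^{m}$ and $\alpha\!\left(x\right)\!\left(v\right)=xv$. If $xy$ is defined then $\alpha\!\left(x\right)\circ\alpha\!\left(y\right)$ is defined and equals $\alpha\!\left(xy\right)$, and $\alpha\!\left(I_{n}\right)=I\!d_{\mathbb{R}^{n}}$, so $\alpha$ is a poloid action of $P$ on $X$ in the sense of Definition \ref{def11}.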
\subsection{Poloids as transformation poloids}
Recall that every transformation poloid is, indeed, a poloid. Up to isomorphism, there are, in fact, no other poloids. \begin{lem} \label{lem2}For any poloid $P$, there is a prefunction poloid action \[ \upmu:P\rightarrow\upmu\!\left(P\right)\subseteq\mathscr{\overline{R}}\!_{P},\qquad x\mapsto\upmu\!\left(x\right) \]
of $P$ on $P$ such that $\upmu$ is a poloid isomorphism. \end{lem} \begin{proof} Set $\upmu\!\left(x\right)=\left(\overline{\upmu\!\left(x\right)},\mathrm{dom}\!\left(\upmu\!\left(x\right)\right)\right)$, where $\overline{\upmu\!\left(x\right)}\!\left(t\right)=xt$ for all $t\in\mathrm{dom}\!\left(\upmu\!\left(x\right)\right)$ and $\mathrm{dom}\!\left(\upmu\!\left(x\right)\right)=\left\{ t\!\mid\!xt\;\mathrm{defined}\right\} $. Then $\upmu\!\left(x\right)$ is a prefunction $P\nRightarrow P$, and $\upmu\!\left(x\right)$ is non-empty for each $x\in P$ since $x\varepsilon_{x}$ is defined for each $x\in P$.
Furthermore, $\overline{\upmu\!\left(x\right)}\!\left(\varepsilon{}_{x}\right)=x\varepsilon{}_{x}=x$ for any $x\in P$, and also $\overline{\upmu\!\left(y\right)}\!\left(\varepsilon{}_{x}\right)=y\varepsilon{}_{x}=y$ for any $y\in P$ such that $y\varepsilon{}_{x}$ is defined, since $\varepsilon{}_{x}$ is a unit. Hence, if $x\neq y$ and $y\varepsilon{}_{x}$ is defined then $\overline{\upmu\!\left(x\right)}\!\left(\varepsilon{}_{x}\right)\neq\overline{\upmu\!\left(y\right)}\!\left(\varepsilon{}_{x}\right)$, so $\overline{\upmu\!\left(x\right)}\neq\overline{\upmu\!\left(y\right)}$; if $x\neq y$ and $y\varepsilon{}_{x}$ is not defined then $\mathrm{dom}\!\left(\upmu\!\left(x\right)\right)\neq\mathrm{dom}\!\left(\upmu\!\left(y\right)\right)$, since $x\varepsilon{}_{x}$ is defined but $y\varepsilon{}_{x}$ is not. Thus, $\upmu$ is a bijection.
For any fixed $x,y\in P$ such that $xy$ is defined, $\left(xy\right)\!t$ is defined if and only if $t\in P$ is such that $yt$ is defined. Thus, $\mathrm{im}\!\left(\upmu\!\left(y\right)\right)=\left\{ yt\mid yt\:\mathrm{defined}\right\} =\left\{ yt\mid x\!\left(yt\right)\;\mathrm{defined}\right\} \subseteq\left\{ t\mid xt\;\mathrm{defined}\right\} =$ $\mathrm{dom}\!\left(\upmu\!\left(x\right)\right)$ and $\left\{ t\mid\left(xy\right)\!t\:\mathrm{defined}\right\} =\left\{ t\mid yt\:\mathrm{defined}\right\} $, so if $xy$ is defined then $\upmu\!\left(x\right)\circ\upmu\!\left(y\right)$ is defined and $\mathrm{dom}\!\left(\upmu\!\left(xy\right)\right)=\mathrm{dom}\!\left(\upmu\!\left(y\right)\right)=\mathrm{dom}\!\left(\upmu\!\left(x\right)\circ\upmu\!\left(y\right)\right)$. Also, if $xy$ is defined and $t\in\mathrm{dom}\!\left(\upmu\!\left(xy\right)\right)=\mathrm{dom}\!\left(\upmu\!\left(y\right)\right)$, meaning that $yt$ is defined, then $\left(xy\right)\!t$ and $x\!\left(yt\right)$ are defined and equal, and as $\left(xy\right)\!t=\overline{\upmu\!\left(xy\right)}\!\left(t\right)$ for all $t\in\mathrm{dom}\!\left(\upmu\!\left(xy\right)\right)$ and $x\!\left(yt\right)=\overline{\upmu\!\left(x\right)}\circ\overline{\upmu\!\left(y\right)}\left(t\right)$ for all $t\in\mathrm{dom}\!\left(\upmu\!\left(x\right)\circ\upmu\!\left(y\right)\right)$, this implies that if $xy$ is defined then $\upmu\!\left(xy\right)=\upmu\!\left(x\right)\circ\upmu\!\left(y\right)$.
Conversely, if $\upmu\!\left(x\right)\circ\upmu\!\left(y\right)$ is defined so that $\left\{ t\mid yt\;\mathrm{defined}\right\} =\mathrm{dom}\!\left(\upmu\!\left(y\right)\right)=\mathrm{dom}\!\left(\upmu\!\left(x\right)\circ\upmu\!\left(y\right)\right)=\left\{ t\mid x\!\left(yt\right)\;\mathrm{defined}\right\} $, then $yt$ defined implies $x\!\left(yt\right)$ defined for any fixed $x,y\in P$. But this implication does not hold if $xy$ is not defined; then, $x\!\left(y\varepsilon_{y}\right)$ is not defined although $y\varepsilon_{y}$ is defined. Hence, if $\upmu\!\left(x\right)\circ\upmu\!\left(y\right)$ is defined then $xy$ must be defined. Therefore, $\upmu\!\left(x\right)\circ\upmu\!\left(y\right)=\upmu\!\left(xy\right)\in\upmu\!\left(P\right)$, so $\upmu\!\left(P\right)$ is a magma with $\circ$ as binary operation.
Let $e\in P$ be a unit. If $\upmu\!\left(e\right)\circ\upmu\!\left(x\right)$ is defined then $ex$ is defined so $\upmu\!\left(x\right)=\upmu\!\left(ex\right)=\upmu\!\left(e\right)\circ\upmu\!\left(x\right)$, and if $\upmu\!\left(x\right)\circ\upmu\!\left(e\right)$ is defined then $xe$ is defined so $\upmu\!\left(x\right)=\upmu\!\left(xe\right)=\upmu\!\left(x\right)\circ\upmu\!\left(e\right)$. Thus, $\upmu\!\left(e\right)\in\upmu\!\left(P\right)$ is a unit.
Conversely, if $f'=\upmu\!\left(f\right)\in\upmu\!\left(P\right)$ is a unit and $fx$ is defined then $\upmu\!\left(f\right)\circ\upmu\!\left(x\right)$ is defined and $\upmu\!\left(fx\right)=\upmu\!\left(f\right)\circ\upmu\!\left(x\right)=\upmu\!\left(x\right)$, so $fx=x$ since $\upmu$ is injective. Similarly, if $\upmu\!\left(f\right)\in\upmu\!\left(P\right)$ is a unit and $xf$ is defined then $xf=x$. Hence, $f\in P$ is a unit.
Thus, we have shown that $\upmu$ satisfies the conditions labeled (1), (1') and (2) in Section 3.2, so $\upmu\!\left(P\right)$ is a poloid. Also, (1) and (2) in Definition \ref{def10} are satisfied by both $\upmu$ and $\upmu^{-1}$, so $\upmu:P\rightarrow\upmu\!\left(P\right)$ is a poloid isomorphism.
The observation that if $e\in P$ is a unit then $\upmu\!\left(e\right)\!\left(t\right)=et=t$ for all $t\in\mathrm{dom}\!\left(\upmu\!\left(e\right)\right)$, so that $\upmu\!\left(e\right)$ is an identity pretransformation $\mathsf{Id}{}_{\mathrm{dom}\left(\upmu\left(e\right)\right)}$, completes the proof. \end{proof} \begin{lem} \label{lem3}For any poloid $P$ and function $\upmu$ defined as in Lemma \ref{lem2}, there is a total function $\tau:\upmu\!\left(P\right)\rightarrow\tau\!\left(\upmu\!\left(P\right)\right)\subseteq\mathscr{\overline{F}}\!_{P}$ such that \begin{enumerate} \item $\tau$ is bijective; \item $\upmu\!\left(x\right)\circ\upmu\!\left(y\right)$ is defined if and only if $\tau\!\left(\upmu\!\left(x\right)\right)\circ\tau\!\left(\upmu\!\left(y\right)\right)$ is defined; \item if $\upmu\!\left(x\right)\circ\upmu\!\left(y\right)$ is defined then $\tau\!\left(\upmu\!\left(x\right)\circ\upmu\!\left(y\right)\right)=\tau\!\left(\upmu\!\left(x\right)\right)\circ\tau\!\left(\upmu\!\left(y\right)\right)$; \item if $e\in P$ is a unit then $\tau\!\left(\upmu\!\left(e\right)\right)\in\mathscr{\overline{F}}\!_{P}$ is a unit and identity transformation $I\!d{}_{\mathrm{dom}\left(\tau\left(\upmu\left(e\right)\right)\right)}$. \end{enumerate} \end{lem} \begin{proof} For any prefunction $\upmu\!\left(x\right)=\left(\overline{\upmu\!\left(x\right)},\mathrm{dom}\!\left(\upmu\!\left(x\right)\right)\right):P\nRightarrow P$, $x\in P$, the tuple \[ \left/\upmu\!\left(x\right)\right/=\left(\overline{\upmu\!\left(x\right)},\mathrm{dom}\!\left(\upmu\!\left(x\right)\right),\mathrm{dom}\!\left(\upmu\!\left(\epsilon_{x}\right)\right)\right) \] is a function $P\nrightarrow P$ for which $\overline{\left/\upmu\!\left(x\right)\right/}=\overline{\upmu\!\left(x\right)}$, $\mathrm{dom}\!\left(\left/\upmu\!\left(x\right)\right/\right)=\mathrm{dom}\!\left(\upmu\!\left(x\right)\right)$ and $\mathrm{cod}\!\left(\left/\upmu\!\left(x\right)\right/\right)=\mathrm{dom}\!\left(\upmu\!\left(\epsilon_{x}\right)\right)$. In fact, $\epsilon_{x}x$ is defined, so $\upmu\!\left(\epsilon_{x}\right)\circ\upmu\!\left(x\right)$ is defined, so $\mathrm{cod}\!\left(\left/\upmu\!\left(x\right)\right/\right)\!=\mathrm{dom}\!\left(\upmu\!\left(\epsilon_{x}\right)\right)\supseteq\mathrm{im}\!\left(\upmu\!\left(x\right)\right)=\overline{\upmu\!\left(x\right)}\!\left(\mathrm{dom}\!\left(\upmu\!\left(x\right)\right)\right)=\overline{\left/\upmu\!\left(x\right)\right/}\!\left(\mathrm{dom}\!\left(\left/\upmu\!\left(x\right)\right/\right)\right)$\linebreak{} $=\mathrm{im}\!\left(\left/\upmu\!\left(x\right)\right/\right)$, as required. Thus, there is a total function \[ \tau:\mathscr{\overline{R}}\!_{P}\supseteq\upmu\!\left(P\right)\rightarrow\tau\!\left(\upmu\!\left(P\right)\right)\subseteq\mathscr{\overline{F}}\!_{P},\qquad\upmu\!\left(x\right)\mapsto\left/\upmu\!\left(x\right)\right/. \]
It remains to prove (1) \textendash{} (4). (1) and (2) are obvious. Also, $\mathrm{dom}\!\left(\left/\upmu\!\left(x\right)\circ\upmu\!\left(y\right)\right/\right)=\mathrm{dom}\!\left(\upmu\!\left(x\right)\circ\upmu\!\left(y\right)\right)=\mathrm{dom}\!\left(\upmu\!\left(y\right)\right)=\mathrm{dom}\!\left(\left/\upmu\!\left(y\right)\right/\right)=\mathrm{dom}\!\left(\left/\upmu\!\left(x\right)\right/\circ\left/\upmu\!\left(y\right)\right/\right)$ and\linebreak{}
$\mathrm{cod}\!\left(\left/\upmu\!\left(x\right)\circ\upmu\!\left(y\right)\right/\right)=\mathrm{cod}\!\left(\left/\upmu\!\left(xy\right)\right/\right)=\mathrm{dom}\!\left(\upmu\!\left(\epsilon_{xy}\right)\right)=\mathrm{dom}\!\left(\upmu\!\left(\epsilon_{x}\right)\right)=\mathrm{cod}\!\left(\left/\upmu\!\left(x\right)\right/\right)=\mathrm{cod}\!\left(\left/\upmu\!\left(x\right)\right/\circ\left/\upmu\!\left(y\right)\right/\right),$ so $\left/\upmu\!\left(x\right)\circ\upmu\!\left(y\right)\right/=\left/\upmu\!\left(x\right)\right/\circ\left/\upmu\!\left(y\right)\right/$. Concerning (4), $\upmu\!\left(e\right)$ is a unit and identity pretransformation $\mathsf{Id}{}_{\mathrm{dom}\left(\upmu\left(e\right)\right)}$ in $\mathscr{\overline{R}}\!_{P}$, so it suffices to note that $\mathrm{cod}\!\left(\left/\upmu\!\left(e\right)\right/\right)=\mathrm{dom}\!\left(\upmu\!\left(\epsilon_{e}\right)\right)=\mathrm{dom}\!\left(\upmu\!\left(e\right)\right)=\mathrm{dom}\!\left(\left/\upmu\!\left(e\right)\right/\right)$. \end{proof} \begin{thm} \label{lem2-1}For any poloid $P$, there is a poloid action \[ \alpha:P\rightarrow\alpha\!\left(P\right)\subseteq\mathscr{\overline{F}}\!_{P},\qquad x\mapsto\alpha\!\left(x\right) \]
of $P$ on $P$ such that $\alpha$ is a poloid isomorphism and $\alpha\!\left(P\right)$ equipped with $\circ$ is a transformation poloid. \end{thm} \begin{proof} First set $\alpha=\tau\circ\upmu$ and use Lemmas \ref{lem2} and \ref{lem3} to prove the first part of the theorem. It remains to show that $\alpha\!\left(P\right)$ is a transformation poloid. Recall that $\upmu$ and $\tau$ are injective so that $\alpha$ is injective, and note that $\mathrm{dom}\!\left(\alpha\!\left(x\right)\right)=\mathrm{dom}\!\left(\alpha\!\left(x\varepsilon_{x}\right)\right)=\mathrm{dom}\!\left(\alpha\!\left(x\right)\circ\alpha\!\left(\varepsilon_{x}\right)\right)=\mathrm{dom}\!\left(\alpha\!\left(\varepsilon_{x}\right)\right)$ and that identity transformations, such as $\alpha\!\left(\varepsilon_{x}\right)$ and $\alpha\!\left(\epsilon_{y}\right)$, are determined by their domains. Hence, we have \begin{align*}
& \mathrm{dom}\!\left(\alpha\!\left(x\right)\right)=\mathrm{cod}\!\left(\alpha\!\left(y\right)\right)\\ \Longleftrightarrow\quad & \mathrm{dom}\!\left(\alpha\!\left(\varepsilon_{x}\right)\right)=\mathrm{dom}\!\left(\alpha\!\left(\epsilon_{y}\right)\right)\\ \Longleftrightarrow\quad & \alpha\!\left(\varepsilon_{x}\right)=\alpha\!\left(\epsilon_{y}\right)\\ \Longleftrightarrow\quad & \varepsilon_{x}=\epsilon_{y}\\ \Longleftrightarrow\quad & xy\;\mathrm{defined}\\ \Longleftrightarrow\quad & \alpha\!\left(x\right)\circ\alpha\!\left(y\right)\;\mathrm{defined}. \end{align*} Thus the poloid of transformations $\alpha\!\left(P\right)$ is a transformation semigroupoid by \linebreak{} Definition \ref{def7}. Also, if $\alpha\!\left(x\right)\in\alpha\!\left(P\right)$ then $\alpha\!\left(\epsilon_{x}\right),\alpha\!\left(\varepsilon_{x}\right)\in\alpha\!\left(P\right)$, $\alpha\!\left(\epsilon_{x}\right)=I\!d_{\mathrm{dom}\left(\alpha\left(\epsilon_{x}\right)\right)}=I\!d{}_{\mathrm{cod}\left(\alpha\left(x\right)\right)}$ and $\alpha\!\left(\varepsilon{}_{x}\right)=I\!d{}_{\mathrm{dom}\left(\alpha\left(\varepsilon{}_{x}\right)\right)}=I\!d{}_{\mathrm{dom}\left(\alpha\left(x\right)\right)}$, so the transformation semigroupoid $\alpha\!\left(P\right)$ is a transformation poloid by Definition \ref{def8}. \end{proof} \begin{cor} \label{the3-1}Any poloid is isomorphic to a transformation poloid. \end{cor} This is a 'Cayley theorem' for poloids; it generalizes similar isomorphism theorems for groupoids, monoids and groups. Note, though, that $\alpha\!\left(P\right)$ is not only a \emph{poloid of transformations} isomorphic to $P$, but actually a \emph{transformation poloid} isomorphic to $P$, so Corollary \ref{the3-1} is stronger than a straight-forward generalization of the 'Cayley theorem' as usually stated.
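To see the construction in a very small case (an illustration only), let $P=\left\{ e,f,x\right\} $, where $e$ and $f$ are units, the defined products are $ee=e$, $ff=f$, $ex=x$ and $xf=x$ (one checks that $P$ is a poloid), and hence $\epsilon_{x}=e$ and $\varepsilon_{x}=f$. Then $\upmu\!\left(e\right)=\mathsf{Id}_{\left\{ e,x\right\} }$, $\upmu\!\left(f\right)=\mathsf{Id}_{\left\{ f\right\} }$, and $\upmu\!\left(x\right)$ is the prefunction with domain $\left\{ f\right\} $ sending $f$ to $x$. Applying $\tau$ attaches the codomains $\mathrm{cod}\!\left(\alpha\!\left(x\right)\right)=\mathrm{cod}\!\left(\alpha\!\left(e\right)\right)=\mathrm{dom}\!\left(\upmu\!\left(e\right)\right)=\left\{ e,x\right\} $ and $\mathrm{cod}\!\left(\alpha\!\left(f\right)\right)=\left\{ f\right\} $, and $\left\{ \alpha\!\left(e\right),\alpha\!\left(f\right),\alpha\!\left(x\right)\right\} $ is then a transformation poloid on $P$ isomorphic to $P$, in accordance with Corollary \ref{the3-1}.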
\subsection{Categories as poloids}
It is no secret that a poloid is the same as a small arrows-only category. In various guises, (P1), (P2) and Propositions \ref{pro1} \textendash{} \ref{pro4} appear as axioms or theorems in category theory. The two-axiom system proposed here is related to the set of ``Gruppoid'' axioms given by Brandt \cite{key-1}, and essentially equivalent to axiom systems used by Freyd \cite{key-2}, Hastings \cite{key-6}, and others. By Proposition 3, one can define functions $s:x\mapsto\epsilon_{x}$ and $t:x\mapsto\varepsilon_{x}$; axiom systems using these two functions but equivalent to the one given here, as used by Freyd and Scedrov \cite{key-3}, currently often serve to define arrows-only categories.
Concepts from category theory can be translated into the language of poloids and vice versa. For example, an initial object in a category corresponds to some unit $\epsilon\in P$ such that for every unit $e\in P$ there is a unique $x\in P$ such that $\epsilon x$ and $xe$ are defined (hence, $\epsilon x=x=xe$). More significantly, in the language of category theory a subpoloid is a subcategory, and a poloid homomorphism is a functor.
Looking at categories as ``webs of monoids'' does lead to some shift of emphasis and perspective, however. In particular, whereas the notion of a category acting on a set is not emphasized in texts on category theory, the corresponding notion of a poloid action is central when regarding categories as poloids. For example, recall that by letting a group act on itself we obtain Cayley's theorem for groups. Similarly, by letting a poloid act on itself we have obtained a Cayley theorem for poloids \cite{key-7}, corresponding to Yoneda's lemma for categories. Poloid actions are also a tool that can be used to define ordinary (small) two-sorted categories in terms of poloids \textendash{} we let a poloid $P$ act on a set $O$ in a special way, and then interpret the elements of $P$ as morphisms and the elements of $O$ acted on by $P$ as objects.
Applying an algebraic perspective on category theory may thus lead to more than merely a reformulation of category theory, especially as the algebraic structures related to categories are also linked to specific magmas of transformations.
{}
\appendix
\section{Constellations}
A\emph{ constellation} \cite{key-4,key-7} is defined in \cite{key-5} as follows: \begin{quote} A {[}\emph{left}{]}\emph{ constellation} is a structure $P$ of signature $(\cdot,D)$ consisting of a class $P$ with a partial binary operation and unary operation $D$ {[}...{]} that maps onto the set of \emph{projections} $E\subseteq P$, so that $E=\left\{ D(x)\mid x\in P\right\} $, and such that for all $e\in E$, $ee$ exists and equals $e$, and for which, for all $x,y,z\in P$:
\begin{lyxlist}{00.00.0000} \item [{(C1)}] if $x\cdot(y\cdot z)$ exists then so does $(x\cdot y)\cdot z$, and then the two are equal; \item [{(C2)}] $x\cdot(y\cdot z)$ exists if and only if $x\cdot y$ and $y\cdot z$ exist; \item [{(C3)}] for each $x\in P$, $D(x)$ is the unique left identity of $x$ in $E$ (i.e. it satisfies $D(x)\cdot x=x$); \item [{(C4)}] for $a\in P$ and $g\in E$, if $a\cdot g$ exists then it equals $a$. \end{lyxlist} \end{quote} It turns out that constellations generalize poloids. Recall that by Definition \ref{def3} a semigroupoid is a partial magma such that if (a) $x\!\left(yz\right)$ is defined or (b) $\left(xy\right)\!z$ is defined or (c) $xy$ and $yz$ are defined then $x\!\left(yz\right)$ and $\left(xy\right)\!z$ are defined and $x\!\left(yz\right)=\left(xy\right)\!z$. Removing (a), we obtain the following definition. \begin{defn} \label{def12}A \emph{right-directed semigroupoid} is a magma $P$ such that, for any $x,y,z\in P$, if $\left(xy\right)\!z$ is defined or $xy$ and $yz$ are defined then $\left(xy\right)\!z$ and $x\!\left(yz\right)$ are defined and $x\!\left(yz\right)=\left(xy\right)\!z$. \end{defn} The condition in this definition corresponds to conditions (C1) and (C2) in \cite{key-5} except for some non-substantial differences. First, we are defining here the left-right dual of the notion defined by (C1) and (C2). This amounts to a difference in notation only, deriving from the fact that functions are composed from left to right in \cite{key-5} while they are composed from right to left here. Second, it is not necessary to postulate that if $\left(xy\right)\!z$ is defined then $xy$ and $yz$ are defined, in accordance with (C2), because, by Definition \ref{def12}, if $\left(xy\right)\!z$ is defined then $x(yz)$ is defined, so $xy$ and $yz$ are defined. Finally, in \cite{key-5} $P$ is assumed to be a class rather than a set; this difference has to do with set-theoretic considerations that need not concern us here.
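For a concrete instance of the asymmetry (an observation added for illustration), the full pretransformation magma on $\left\{ 1,2\right\} $ is a right-directed semigroupoid by Facts \ref{f1} \textendash{} \ref{f3} (cf. Theorem \ref{the3} below), but it is not a semigroupoid: in Example \ref{exa2} we have $\mathsf{f}\prec\mathsf{g}\prec\mathsf{h}$, since $\mathsf{f}\circ\left(\mathsf{g}\circ\mathsf{h}\right)$ and $\mathsf{g}\circ\mathsf{h}$ are defined, yet $\left(\mathsf{f}\circ\mathsf{g}\right)\circ\mathsf{h}$ is not defined.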
We shall need some generalizations of the unit concept. First, a \emph{left unit} in $P$ is an element $\epsilon$ of $P$ such that $\epsilon x=x$ for all $x\in P$ such that $\epsilon x$ is defined, while a \emph{right unit} in $P$ is an element $\varepsilon$ of $P$ such that $x\varepsilon=x$ for all $x\in P$ such that $x\varepsilon$ is defined. Also, a \emph{local left unit} $\lambda_{x}$ for $x\in P$ is an element of $P$ such that $\lambda_{x}x$ is defined and $\lambda_{x}x=x$, while a \emph{local right unit} $\rho_{x}$ for $x\in P$ is an element of $P$ such that $x\rho_{x}$ is defined and $x\rho_{x}=x$. \begin{defn} \label{def13}A \emph{right poloid} is a right-directed semigroupoid $P$ such that for any $x\in P$ there is a unique left unit $\varphi_{x}\in P$ such that $\varphi_{x}$ is a local right unit for $x$. \end{defn} \begin{prop} \label{pro7}Let $P$ be a right poloid. If $\epsilon\in P$ is a left unit then $\epsilon\epsilon$ is defined and $\epsilon\epsilon=\epsilon$. \end{prop} \begin{proof} Let $\varphi_{\epsilon}\in P$ be a local right unit for the left unit $\epsilon$. Then $\epsilon\varphi_{\epsilon}$ is defined and $\varphi_{\epsilon}=\epsilon\varphi_{\epsilon}=\epsilon$, and this implies the assertion. \end{proof} Thus, the left unit $\epsilon$ is the unique local right unit $\varphi_{\epsilon}$ for itself.
Disregarding (C1) and (C2), which were incorporated in Definition \ref{def12}, the requirements stated in the definition cited above can be summed up as follows: \begin{lyxlist}{00.00.0000} \item [{(C)}] For each $x\in P$, there is exactly one $D\!\left(x\right)\in E=\left\{ D\!\left(x\right)\mid x\in P\right\} $ such that $D\!\left(x\right)\cdot x$ is defined and $D\!\left(x\right)\cdot x=x$, and every $e\in E$ is a right unit in $P$ and such that $e\cdot e$ is defined and equal to $e$. \end{lyxlist} Using (C), it can be proved as in Proposition \ref{pro7} that if $f\in P$ is a right unit then $f\cdot f$ is defined and $f\cdot f=f$, so $f$ is the unique local left unit $D\!\left(f\right)$ for itself. Thus, $E$ equals the set of right units in $P$, since conversely every $e\in E$ is a right unit in $P$ by (C). As all right units are idempotent, this means that the requirement that all elements of $E$ are idempotent is redundant, so (C) can be simplified to: \begin{lyxlist}{00.00.0000} \item [{(C{*})}] For each $x\in P$, there is exactly one right unit $D\!\left(x\right)\in P$ such that $D\!\left(x\right)\cdot x$ is defined and $D\!\left(x\right)\cdot x=x$. \end{lyxlist} In our terminology, this means, of course, that for any $x\in P$ there is a unique left unit $\varphi_{x}$ in $P$ such that $\varphi_{x}$ is a local right unit for $x$. We conclude that a (small) constellation is just a right poloid; note that $D$ is just the function $x\mapsto\varphi_{x}$. Proposition \ref{pro7} generalizes Proposition \ref{pro1}, and there are also natural generalizations of Propositions \ref{pro2} \textendash{} \ref{pro4} to right poloids.
It should be pointed out that in \cite{key-5} an alternative definition of constellations is also given; this definition is essentially the same as Definition \ref{def13} here (see Proposition 2.9 in \cite{key-5}). So while the definition of constellations cited above reflects the historical development of that notion, it has been shown here and in \cite{key-5} that a more direct approach can also be used.
{}
Let us also look at the transformation systems corresponding to constellations.
{} \begin{thm} \label{the3}A pretransformation magma is a right-directed semigroupoid. \end{thm} \begin{proof} Use Facts \ref{f1} \textendash{} \ref{f3} in Section 2.3. \end{proof} A \emph{domain} pretransformation magma is a pretransformation magma $\mathscr{R}_{\!X}$ such that if $\mathsf{f}\!\in\!\mathscr{R}_{\!X}$ then $\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}\!\in\!\mathscr{R}_{\!X}$. Corresponding to Theorem \ref{the2} in Section 2.3, we have the following result. \begin{thm} \label{the4}A domain pretransformation magma is a right poloid. \end{thm} \begin{proof} In view of Theorem \ref{the3}, it suffices to show that for any $\mathsf{f}\in\mathscr{R}_{\!X}$ there is a unique left unit $\upvarphi_{\mathsf{f}}\in\mathscr{R}_{\!X}$ such that $\mathsf{f}\circ\upvarphi_{\mathsf{f}}$ is defined and equal to $\mathsf{f}$, namely $\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}$.
If $\mathsf{f},\mathsf{g}\in\mathscr{R}_{\!X}$ and $\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}\circ\mathsf{g}$ is defined so that $\mathrm{dom}\!\left(\mathsf{Id}_{\mathrm{dom}\left(\mathfrak{\mathsf{f}}\right)}\right)\supseteq\mathrm{im}\!\left(\mathsf{g}\right)$ then \begin{gather*} \mathrm{dom}\!\left(\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}\circ\mathsf{g}\right)=\mathrm{dom}\!\left(\mathfrak{\mathsf{g}}\right),\\ \mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}\circ\mathsf{g}\left(x\right)=\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}\!\left(\mathsf{g}\!\left(x\right)\right)=\mathsf{g}\!\left(x\right) \end{gather*} for all $x\!\in\!\mathrm{dom}\!\left(\mathsf{g}\right)$, meaning that $\mathfrak{\mathsf{Id}}_{\mathrm{dom}\left(\mathsf{f}\right)}\!\circ\mathsf{g}=\mathsf{g}$. Thus, $\mathfrak{\mathsf{Id}}_{\mathrm{dom}\left(\mathsf{f}\right)}$ is a left unit in $\mathscr{R}_{\!X}$.
Also, $\mathrm{dom}\!\left(\mathsf{f}\right)=\mathrm{dom}\!\left(\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}\right)=\mathrm{im}\!\left(\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}\right)$, so $\mathsf{f}\circ\mathsf{Id}_{\mathrm{dom}\left(\mathfrak{\mathsf{f}}\right)}$ is defined, and \begin{gather*} \mathrm{dom}\!\left(\mathfrak{\mathsf{f}}\circ\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}\right)=\mathrm{dom}\!\left(\mathsf{Id}_{\mathrm{dom}\left(\mathfrak{\mathsf{f}}\right)}\right)=\mathrm{dom}\!\left(\mathfrak{\mathsf{f}}\right),\\ \mathfrak{\mathsf{f}}\circ\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}\left(x\right)=\mathsf{f}\!\left(\mathsf{Id}{}_{\mathrm{dom}\left(\mathsf{f}\right)}\!\left(x\right)\right)=\mathfrak{\mathsf{f}}\!\left(x\right) \end{gather*} for all $x\in\mathrm{dom}\!\left(\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}\right)=\mathrm{dom}\!\left(\mathfrak{\mathsf{f}}\right)$. Thus, $\mathsf{f}\circ\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}$ is defined and equal to $\mathsf{f}$, so $\mathsf{Id}{}_{\mathrm{dom}\left(\mathsf{f}\right)}\in\mathscr{R}_{\!X}$ is a left unit $\upvarphi_{\mathsf{f}}$ such that $\mathsf{f}\circ\upvarphi_{\mathsf{f}}$ is defined and equal to $\mathsf{f}$.
It remains to show that $\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}$ is the only such $\upvarphi_{\mathsf{f}}$. Let $\upepsilon\in\mathscr{R}_{\!X}$ be a left unit. Then $\mathfrak{\mathsf{Id}}_{\mathrm{dom}\left(\upepsilon\right)}\in\mathscr{R}_{\!X}$ and as $\mathrm{dom}\!\left(\upepsilon\right)=\mathrm{dom}\!\left(\mathfrak{\mathsf{Id}}_{\mathrm{dom}\left(\upepsilon\right)}\right)=\mathrm{im}\!\left(\mathfrak{\mathsf{Id}}_{\mathrm{dom}\left(\upepsilon\right)}\right)$, so that $\upepsilon\circ\mathfrak{\mathsf{Id}}_{\mathrm{dom}\left(\upepsilon\right)}$ is defined, we have $\upepsilon\circ\mathfrak{\mathsf{Id}}_{\mathrm{dom}\left(\upepsilon\right)}=\mathfrak{\mathsf{Id}}_{\mathrm{dom}\left(\upepsilon\right)}$. On the other hand, \begin{gather*} \mathrm{dom}\left(\upepsilon\circ\mathsf{Id}_{\mathrm{dom}\left(\upepsilon\right)}\right)=\mathrm{dom}\left(\mathsf{Id}_{\mathrm{dom}\left(\upepsilon\right)}\right)=\mathrm{dom}\!\left(\upepsilon\right),\\ \upepsilon\circ\mathsf{Id}_{\mathrm{dom}\left(\upepsilon\right)}\left(x\right)=\upepsilon\!\left(\mathsf{Id}_{\mathrm{dom}\left(\upepsilon\right)}\!\left(x\right)\right)=\upepsilon\!\left(x\right) \end{gather*} for all $x\in\mathrm{dom}\!\left(\mathsf{Id}_{\mathrm{dom}\left(\upepsilon\right)}\right)=\mathrm{dom}\!\left(\upepsilon\right)$, so $\upepsilon\circ\mathsf{Id}_{\mathrm{dom}\left(\upepsilon\right)}=\upepsilon$. Thus $\upepsilon=\mathsf{Id}_{\mathrm{dom}\left(\upepsilon\right)}$, so $\upvarphi_{\mathsf{f}}=\mathsf{Id}_{\mathrm{dom}\left(\upvarphi_{\mathsf{f}}\right)}=\mathsf{Id}_{\mathrm{dom}\left(\mathfrak{\mathsf{f}}\circ\upvarphi_{\mathsf{f}}\right)}=\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}$. \end{proof} With Theorem \ref{the2} and Corollary \ref{the3-1} in mind, one might expect, given Theorem \ref{the4}, that conversely every right poloid is isomorphic to some domain pretransformation magma (regarded as a right poloid). Indeed, any poloid can be embedded in a pretransformation magma by Lemma \ref{lem2}, and it can be shown that $\upmu\!\left(\varepsilon_{x}\right)=\mathsf{Id}_{\mathrm{dom}\left(\upmu\left(x\right)\right)}$, so any poloid can actually be embedded in a domain pretransformation magma. Also, the proof of Lemma \ref{lem2} uses almost only properties of poloids that they share with right poloids. There is one crucial exception, though: both $\varepsilon_{x}$ and $\varphi_{x}$ are local right units, but in addition $\varepsilon_{x}$ is a unit while $\varphi_{x}$ is just a left unit. The fact that $\varepsilon_{x}$ is a unit is used to prove that $x\mapsto\upmu\!\left(x\right)$ is injective, and this is not true for all right poloids. \begin{example} \label{ex3} The magma defined by the Cayley table below is a right poloid with $x=\varphi_{x}$ and $y=\varphi_{y}$, but $\upmu\!\left(x\right)=\upmu\!\left(y\right)$.
\[ \begin{array}{ccc}
& x & y\\ x & x & y\\ y & x & y \end{array} \] \end{example} This suggests that we look for an additional condition on right poloids to ensure that $x\mapsto\upmu\!\left(x\right)$ is injective. On finding such a condition, we can prove a weakened converse of Theorem \ref{the4} by an argument similar to the proof of Lemma \ref{lem2}.
Adapting a definition in \cite{key-5}, we say that a right poloid is \emph{normal} if $\varphi_{x}=\varphi_{y}$ whenever $\varphi_{x}\varphi_{y}$ and $\varphi_{y}\varphi_{x}$ are both defined. (The poloid in Example \ref{ex3} is not normal.) This notion is the key to the following three results: \begin{thm} \label{the6}A domain pretransformation magma is a normal right poloid. \end{thm} \begin{proof} In a domain pretransformation magma, $\upvarphi_{\mathsf{f}}=\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}$. Thus, $\upvarphi_{\mathsf{f}}\circ\upvarphi_{\mathsf{g}}$ is defined if and only if $\mathrm{dom}\!\left(\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}\right)\supseteq\mathrm{im}\!\left(\mathsf{Id}_{\mathrm{dom}\left(\mathsf{g}\right)}\right)=\mathrm{dom}\!\left(\mathsf{Id}_{\mathrm{dom}\left(\mathsf{g}\right)}\right)$, or equivalently $\mathrm{dom}\!\left(\mathsf{f}\right)\supseteq\mathrm{dom}\!\left(\mathsf{g}\right)$, so if $\upvarphi_{\mathsf{f}}\circ\upvarphi_{\mathsf{g}}$ and $\upvarphi_{\mathsf{g}}\circ\upvarphi_{\mathsf{f}}$ are defined then $\mathrm{dom}\!\left(\mathsf{f}\right)=\mathrm{dom}\!\left(\mathsf{g}\right)$, so $\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}=\mathsf{Id}_{\mathrm{dom}\left(\mathsf{g}\right)}$ or equivalently $\upvarphi_{\mathsf{f}}=\upvarphi_{\mathsf{g}}$. \end{proof} \begin{lem} \label{lem4} In a normal right poloid, the correspondence $x\mapsto\upmu\!\left(x\right)$ is injective. \end{lem} \begin{proof} Assume that $x\neq y$. If $\mathrm{dom}\!\left(\upmu\!\left(x\right)\right)\neq\mathrm{dom}\!\left(\upmu\!\left(y\right)\right)$ then $\upmu\!\left(x\right)\neq\upmu\!\left(y\right)$ as required. Otherwise, $\mathrm{dom}\!\left(\upmu\!\left(x\right)\right)=\mathrm{dom}\!\left(\upmu\!\left(y\right)\right)$, and as $x\varphi_{x}$ and $y\varphi_{y}$ are defined we have $\varphi_{x},\varphi_{y}\in\mathrm{dom}\!\left(\upmu\!\left(x\right)\right)=\mathrm{dom}\!\left(\upmu\!\left(y\right)\right)$. Thus, $x\varphi_{y}$ is defined, so $\left(x\varphi_{x}\right)\!\varphi_{y}$ is defined, so $x\!\left(\varphi_{x}\varphi_{y}\right)$ is defined, so $\varphi_{x}\varphi_{y}$ is defined. Similarly, $y\varphi_{x}$ is defined, so $\varphi_{y}\varphi_{x}$ is defined. Therefore, $\varphi_{x}=\varphi_{y}$, so $\upmu\!\left(x\right)\!\left(\varphi_{x}\right)=x$ and $\upmu\!\left(y\right)\!\left(\varphi_{x}\right)=\upmu\!\left(y\right)\!\left(\varphi_{y}\right)=y$, so again $\upmu\!\left(x\right)\neq\upmu\!\left(y\right)$. \end{proof} Using Lemma \ref{lem4} and proceeding as in the proof of Lemma \ref{lem2}, keeping in mind that $\varphi_{\upmu\left(x\right)}=\upmu\!\left(\varepsilon_{x}\right)=\mathsf{Id}_{\mathrm{dom}\left(\upmu\left(x\right)\right)}$, we obtain the following result: \begin{thm} \label{the7}A normal right poloid can be embedded in a domain pretrans\-formation magma. \end{thm} Theorems \ref{the6} and \ref{the7} correspond to Proposition 2.23 in \cite{key-5}.
Let us look at another way of narrowing down the notion of a right poloid so that any right poloid considered can be embedded in a domain pretransformation magma. Consider the relation $\leq$ on a right poloid $P$ given by $x\leq y$ if and only if $y\varphi_{x}$ is defined and $x=y\varphi_{x}$. The relation $\leq$ is obviously reflexive, and if $x\leq y$ and $y\leq z$ then (a) $y\varphi_{x}=\left(z\varphi_{y}\right)\!\varphi_{x}$ is defined so that $z\!\left(\varphi_{y}\varphi_{x}\right)=z\varphi_{x}$ is defined and (b) $x=y\varphi_{x}=\left(z\varphi_{y}\right)\!\varphi_{x}=z\!\left(\varphi_{y}\varphi_{x}\right)=z\varphi_{x}$, so $\leq$ is transitive as well. Hence, $\leq$ is a preorder, called the \emph{natural preorder} on $P$, so $\leq$ is a partial order if and only if it is antisymmetric. \emph{A} right poloid such that $\epsilon\leq\epsilon'$ and $\epsilon'\leq\epsilon$ implies $\epsilon=\epsilon'$ for any left units $\epsilon,\epsilon'\in P$ is said to be \emph{unit-posetal}.
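For orientation, in a domain pretransformation magma the natural preorder is just the restriction order: as $\upvarphi_{\mathsf{f}}=\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}$, the product $\mathsf{g}\circ\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}$ is defined if and only if $\mathrm{dom}\!\left(\mathsf{g}\right)\supseteq\mathrm{dom}\!\left(\mathsf{f}\right)$, and in that case it is the restriction of $\mathsf{g}$ to $\mathrm{dom}\!\left(\mathsf{f}\right)$; hence $\mathsf{f}\leq\mathsf{g}$ if and only if $\mathrm{dom}\!\left(\mathsf{f}\right)\subseteq\mathrm{dom}\!\left(\mathsf{g}\right)$ and $\mathsf{f}=\mathsf{g}\circ\mathsf{Id}_{\mathrm{dom}\left(\mathsf{f}\right)}$, that is, if and only if $\mathsf{f}$ is a restriction of $\mathsf{g}$.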
Recall that for any left unit $\epsilon\in P$ we have $\varphi_{\epsilon}=\epsilon$, so $\varphi_{\varphi_{x}}=\varphi_{x}$. Thus, $\varphi_{x}\leq\varphi_{y}$ if and only if $\varphi_{y}\varphi_{x}$ is defined and $\varphi_{x}=\varphi_{y}\varphi_{x}$, so as $\varphi_{y}$ is a left unit we have $\varphi_{x}\leq\varphi_{y}$ if and only if $\varphi_{y}\varphi_{x}$ is defined. Hence, we obtain the following results. \begin{thm} \label{the8}A right poloid is unit-posetal if and only if it is normal. \end{thm}
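Indeed, for left units $\epsilon,\epsilon'\in P$ we have $\varphi_{\epsilon}=\epsilon$, and $\epsilon'\epsilon=\epsilon$ whenever the product is defined, so $\epsilon\leq\epsilon'$ holds if and only if $\epsilon'\epsilon$ is defined; antisymmetry of $\leq$ on the left units thus says precisely that if $\epsilon'\epsilon$ and $\epsilon\epsilon'$ are both defined then $\epsilon=\epsilon'$, and since every $\varphi_{x}$ is a left unit and every left unit $\epsilon$ equals $\varphi_{\epsilon}$, this is exactly normality.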
\begin{thm} \label{the9}A domain pretransformation magma is a unit-posetal right poloid. \end{thm}
\begin{thm} \label{the10}A unit-posetal right poloid can be embedded in a domain pretrans\-formation magma. \end{thm} If we specialize the concept of a unit-posetal right poloid by adding more requirements, the analogue of Theorem \ref{the9} need of course not hold. In particular, the partial order on the left units is not necessarily a semilattice. \begin{example} Set $X=\left\{ 1,2,3\right\} $ and $\mathscr{R}_{\!X}=\left\{ \mathsf{Id}{}_{\left\{ 1,2\right\} },\mathsf{Id}{}_{\left\{ 2,3\right\} }\right\} $ with $\mathsf{f}\circ\mathsf{g}$ defined as usual when $\mathrm{dom}\left(\mathsf{f}\right)\supseteq\mathrm{im}\left(\mathsf{g}\right)$. Then $\mathscr{R}_{\!X}$ is a domain pretransformation magma where $\mathsf{Id}{}_{\left\{ 1,2\right\} }\leq\mathsf{Id}{}_{\left\{ 1,2\right\} }$ and $\mathsf{Id}{}_{\left\{ 2,3\right\} }\leq\mathsf{Id}{}_{\left\{ 2,3\right\} }$, but this partial order is not a semilattice. \end{example} More broadly, let $\boldsymbol{A}$ denote a class of abstract algebraic structures corresponding to a class $\boldsymbol{C}$ of concrete magmas of correspondences (functions, prefunctions etc.) in the sense that any $\boldsymbol{c}$ in $\boldsymbol{C}$ belongs to $\boldsymbol{A}$ when certain operations in $\boldsymbol{C}$ are interpreted as the operations in $\boldsymbol{A}$. Note that this does not imply that any $\boldsymbol{a}$ in $\boldsymbol{A}$ can be embedded in some $\boldsymbol{c}$ in $\boldsymbol{C}$. In particular, if $\boldsymbol{A}$ is a class of generalized groups, with axioms merely defining a generalized group operation and (optional) generalized identities and inverses, then the fact that the axioms defining $\boldsymbol{A}$ are satisfied for any concrete magma $\boldsymbol{c}$ in $\boldsymbol{C}$ does not provide a strong reason to expect that any $\boldsymbol{a}$ satisfying these axioms can be embedded in some $\boldsymbol{c}$ in $\boldsymbol{C}$. As we have just seen, the relation between right poloids and domain pretransformation magmas is asymmetrical in this respect. (One-sided) restriction semigroups \cite{key-4} provide another example of this phenomenon. \begin{example} Let $\boldsymbol{A}$ be the class of semigroups such that for each $\boldsymbol{a}$ in $\boldsymbol{A}$ and each $x\in\boldsymbol{a}$ there is a unique local right unit for $x$ in $\boldsymbol{a}$. Let $\boldsymbol{C}$ be the class of semigroups of functional relations on a given set where the binary operation is composition of relations and such that for each $\boldsymbol{c}$ in $\boldsymbol{C}$ and each $\mathtt{f}\in\boldsymbol{c}$ the functional relation $\mathtt{Id}{}_{\mathrm{dom}\left(\mathtt{f}\right)}$ belongs to $\boldsymbol{c}$. Then any $\boldsymbol{c}$ in $\boldsymbol{C}$ belongs to $\boldsymbol{A}$, with $\mathtt{Id}{}_{\mathrm{dom}\left(\mathtt{f}\right)}$ the local right unit for $\mathtt{f}$, but it is not the case that any $\boldsymbol{a}$ in $\boldsymbol{A}$ can be embedded in some $\boldsymbol{c}$ in $\boldsymbol{C}$. To ensure embeddability, $\boldsymbol{A}$ needs to be narrowed down by additional conditions, subject to the restriction that $\boldsymbol{A}$ remains wide enough to accommodate all $\boldsymbol{c}$ in $\boldsymbol{C}$. 
\end{example} We have seen that any transformation poloid is a poloid and that those poloids which can be embedded in a transformation poloid are simply all poloids, and a similar elementary symmetry exists for inverse semigroups, but such cases are perhaps best regarded as ideal rather than normal, reflecting the fact that poloids and inverse semigroups are particularly natural algebraic structures.
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Hyperbolicity in the corona and join of graphs}
\author[a]{Walter Carballosa\corref{x}} \address[a]{Consejo Nacional de Ciencia y Tecnolog\'ia (CONACYT) $\&$ Universidad Aut\'onoma de Zacatecas, Paseo la Bufa, int. Calzada Solidaridad, 98060 Zacatecas, ZAC, M\'exico} \ead{[email protected]} \cortext[x]{Corresponding author.}
\author[b]{Jos\'e M. Rodr{\'\i}guez} \address[b]{Department of Mathematics, Universidad Carlos III de Madrid, Av. de la Universidad 30, 28911 Legan\'es, Madrid, Spain.} \ead{[email protected]}
\author[d]{Jos\'e M. Sigarreta} \address[d]{Facultad de Matem\'aticas, Universidad Aut\'onoma de Guerrero, Carlos E. Adame 5, Col. La Garita, Acapulco, Guerrero, Mexico} \ead{[email protected]}
\begin{abstract} If $X$ is a geodesic metric space and $x_1,x_2,x_3\in X$, a {\it geodesic triangle} $T=\{x_1,x_2,x_3\}$ is the union of the three geodesics $[x_1x_2]$, $[x_2x_3]$ and $[x_3x_1]$ in $X$. The space $X$ is $\delta$-\emph{hyperbolic} $($in the Gromov sense$)$ if any side of $T$ is contained in a $\delta$-neighborhood of the union of the two other sides, for every geodesic triangle $T$ in $X$. If $X$ is hyperbolic, we denote by $\delta(X)$ the sharp hyperbolicity constant of $X$, i.e. $\delta(X)=\inf\{\delta\ge 0: \, X \, \text{ is $\delta$-hyperbolic}\,\}\,.$ Some previous works characterize the hyperbolic product graphs (for the Cartesian product, strong product and lexicographic product) in terms of properties of the factor graphs. In this paper we characterize the hyperbolic product graphs for the graph join $G_1\uplus G_2$ and the corona $G_1\diamond G_2$: $G_1\uplus G_2$ is always hyperbolic, and $G_1\diamond G_2$ is hyperbolic if and only if $G_1$ is hyperbolic. Furthermore, we obtain simple formulae for the hyperbolicity constant of the graph join $G_1\uplus G_2$ and the corona $G_1\diamond G_2$. \end{abstract}
\begin{keyword}
Graph join \sep Corona graph \sep Gromov hyperbolicity \sep Infinite graph
\MSC[2010] 05C69 \sep 05A20 \sep 05C50. \end{keyword}
\end{frontmatter}
\section{Introduction}
Hyperbolic spaces play an important role in geometric group theory and in the geometry of negatively curved spaces (see \cite{ABCD,GH,G1}). The concept of Gromov hyperbolicity grasps the essence of negatively curved spaces like the classical hyperbolic space, Riemannian manifolds of negative sectional curvature bounded away from $0$, and of discrete spaces like trees and the Cayley graphs of many finitely generated groups. It is remarkable that a simple concept leads to such a rich general theory (see \cite{ABCD,GH,G1}).
The first works on Gromov hyperbolic spaces deal with finitely generated groups (see \cite{G1}). Initially, Gromov spaces were applied to the study of automatic groups in the science of computation (see, \emph{e.g.}, \cite{O}); indeed, hyperbolic groups are strongly geodesically automatic, \emph{i.e.}, there is an automatic structure on the group \cite{Cha}.
The concept of hyperbolicity appears also in discrete mathematics, algorithms and networking. For example, it has been shown empirically in \cite{ShTa} that the internet topology embeds with better accuracy into a hyperbolic space than into a Euclidean space of comparable dimension; the same holds for many complex networks, see \cite{KPKVB}. A few algorithmic problems in hyperbolic spaces and hyperbolic graphs have been considered in recent papers (see \cite{ChEs,Epp,GaLy,Kra}). Another important application of these spaces is the study of the spread of viruses through the internet (see \cite{K21,K22}). Furthermore, hyperbolic spaces are useful in secure transmission of information on the network (see \cite{K27,K21,K22,NS}).
The study of Gromov hyperbolic graphs is a subject of increasing interest; see, \emph{e.g.}, \cite{BRS,BRSV2,BRST,BPK,BHB1,CDR,CPRS,CRS,CRSV,CDEHV,K50,K27,K21,K22,K23,K24,K56,KPKVB,MRSV,MRSV2,NS,PeRSV,PRST,PRSV,PT,R,RSVV,S,S2,T,WZ} and the references therein.
We say that a curve $\gamma:[a,b]\rightarrow X$ in a metric space $X$ is a
\emph{geodesic} if we have $L(\gamma|_{[t,s]})=d(\gamma(t),\gamma(s))=|t-s|$ for every $s,t\in [a,b]$ (then $\gamma$ is equipped with an arc-length parametrization). The metric space $X$ is said \emph{geodesic} if for every couple of points in $X$ there exists a geodesic joining them; we denote by $[xy]$ any geodesic joining $x$ and $y$; this notation is ambiguous, since in general we do not have uniqueness of geodesics, but it is very convenient. Consequently, any geodesic metric space is connected. If the metric space $X$ is a graph, then the edge joining the vertices $u$ and $v$ will be denoted by $[u,v]$.
In this paper we only consider graphs with every edge of length $1$. In order to consider a graph $G$ as a geodesic metric space, identify (by an isometry) any edge $[u,v]\in E(G)$ with the interval $[0,1]$ in the real line; then the edge $[u,v]$ (considered as a graph with just one edge) is isometric to the interval $[0,1]$. Thus, the points in $G$ are the vertices and, also, the points in the interior of any edge of $G$. In this way, any connected graph $G$ has a natural distance defined on its points, induced by taking shortest paths in $G$, and we can see $G$ as a metric graph. If $x,y$ are in different connected components of $G$, we define $d_G(x,y)=\infty$. Throughout this paper, $G=(V,E)$ denotes a simple graph (not necessarily connected) such that every edge has length $1$ and $V\neq \emptyset$. These properties guarantee that any connected graph is a geodesic metric space. Note that excluding multiple edges and loops is not an important loss of generality, since \cite[Theorems 8 and 10]{BRSV2} reduce the problem of computing the hyperbolicity constant of graphs with multiple edges and/or loops to the study of simple graphs. For a nonempty set $X\subseteq V$, and a vertex $v\in V$, $N_X(v)$ denotes the set of neighbors $v$ has in $X$:
$N_X(v):=\{u\in X: [u,v]\in E\},$ and the degree of $v$ in $X$ will be denoted by $\deg_{X}(v)=|N_{X}(v)|$. We denote the degree of a vertex $v\in V$ in $G$ by $\deg(v)\le\infty$, and the maximum degree of $G$ by $\Delta_{G}:=\sup_{v\in V}\deg(v)$.
Consider a polygon $J=\{J_1,J_2,\dots,J_n\}$ with sides $J_j\subseteq X$ in a geodesic metric space $X$. We say that $J$ is $\delta$-{\it thin} if for every $x\in J_i$ we have that $d(x,\cup_{j\neq i}J_{j})\le \delta$. Let us denote by $\delta(J)$ the sharp thin constant of $J$, \emph{i.e.}, $\delta(J):=\inf\{\delta\ge 0: \, J \, \text{ is $\delta$-thin}\,\}\,. $ If $x_1,x_2,x_3$ are three points in $X$, a {\it geodesic triangle} $T=\{x_1,x_2,x_3\}$ is the union of the three geodesics $[x_1x_2]$, $[x_2x_3]$ and $[x_3x_1]$ in $X$. We say that $X$ is $\delta$-\emph{hyperbolic} if every geodesic triangle in $X$ is $\delta$-thin, and we denote by $\delta(X)$ the sharp hyperbolicity constant of $X$, \emph{i.e.}, $\delta(X):=\sup\{\delta(T): \, T \, \text{ is a geodesic triangle in }\,X\,\}.$ We say that $X$ is \emph{hyperbolic} if $X$ is $\delta$-hyperbolic for some $\delta \ge 0$; then $X$ is hyperbolic if and only if $ \delta(X)<\infty.$ If $X$ has connected components $\{X_i\}_{i\in I}$, then we define $\delta(X):=\sup_{i\in I} \delta(X_i)$, and we say that $X$ is hyperbolic if $\delta(X)<\infty$.
In the classical references on this subject (see, \emph{e.g.}, \cite{BHB,GH}) appear several different definitions of Gromov hyperbolicity, which are equivalent in the sense that if $X$ is $\delta$-hyperbolic with respect to one definition, then it is $\delta'$-hyperbolic with respect to another definition (for some $\delta'$ related to $\delta$). The definition that we have chosen has a deep geometric meaning (see, \emph{e.g.}, \cite{GH}).
Trivially, any bounded metric space $X$ is $((\diam X)/2)$-hyperbolic. A normed linear space is hyperbolic if and only if it has dimension one. A geodesic space is $0$-hyperbolic if and only if it is a metric tree. If a complete Riemannian manifold is simply connected and its sectional curvatures satisfy $K\leq c$ for some negative constant $c$, then it is hyperbolic. See the classical references \cite{ABCD,GH} in order to find further results.
We want to remark that the main examples of hyperbolic graphs are the trees. In fact, the hyperbolicity constant of a geodesic metric space can be viewed as a measure of how ``tree-like'' the space is, since those spaces $X$ with $\delta(X) = 0$ are precisely the metric trees. This is an interesting subject since, in many applications, one finds that the borderline between tractable and intractable cases may be the tree-like degree of the structure to be dealt with (see, \emph{e.g.}, \cite{CYY}).
Given a Cayley graph (of a presentation with solvable word problem) there is an algorithm which allows to decide if it is hyperbolic. However, for a general graph or a general geodesic metric space deciding whether or not a space is hyperbolic is usually very difficult. Therefore, it is interesting to study the hyperbolicity of particular classes of graphs. The papers \cite{BRST,BHB1,CCCR,CDR,CRSV,MRSV2,PeRSV,PRSV,R,Si} study the hyperbolicity of, respectively, complement of graphs, chordal graphs, strong product graphs, lexicographic product graphs, line graphs, Cartesian product graphs, cubic graphs, tessellation graphs, short graphs and median graphs. In \cite{CCCR,CDR,MRSV2} the authors characterize the hyperbolic product graphs (for strong product, lexicographic product and Cartesian product) in terms of properties of the factor graphs. In this paper we characterize the hyperbolic product graphs for graph join $G_1\uplus G_2$ and the corona $G_1\diamond G_2$: $G_1\uplus G_2$ is always hyperbolic, and $G_1\diamond G_2$ is hyperbolic if and only if $G_1$ is hyperbolic (see Corollaries \ref{cor:SP} and \ref{cor:sup}). Furthermore, we obtain simple formulae for the hyperbolicity constant of the graph join $G_1\uplus G_2$ and the corona $G_1\diamond G_2$ (see Theorems \ref{th:hypJoin} and \ref{th:corona}). In particular, Theorem \ref{th:corona} states that $\delta(G_1\diamond G_2)=\max\{\delta(G_1),\delta(G_2\uplus E_1)\}$, where $E_1$ is a graph with just one vertex. We want to remark that it is not usual at all to obtain explicit formulae for the hyperbolicity constant of large classes of graphs.
\section{Distance in graph join}
In order to estimate the hyperbolicity constant of the graph join $G_1\uplus G_2$ of $G_1$ and $G_2$, we will need an explicit formula for the distance between two arbitrary points. We will use the definition given by Harary in \cite{H}.
\begin{definition}\label{def:join} Let $G_1=(V(G_1),E(G_1))$ and $G_2=(V(G_2),E(G_2))$ be two graphs with $V(G_1)\cap V(G_2)=\varnothing$. The \emph{graph join} $G_1\uplus G_2$ of $G_1$ and $G_2$ has $V(G_1\uplus G_2)=V(G_1) \cup V(G_2)$ and two different vertices $u$ and $v$ of $G_1\uplus G_2$ are adjacent if $u\in V(G_1)$ and $v\in V(G_2)$, or $[u,v]\in E(G_1)$ or $[u,v]\in E(G_2)$. \end{definition}
From the definition, it follows that the graph join of two graphs is commutative. Figure \ref{fig:join} shows the graph join of two graphs.
\begin{figure}
\caption{Graph join of two graphs $C_3 \uplus P_3$.}
\label{fig:join}
\end{figure}
\begin{remark}\label{r:K_nm} For every graphs $G_1,G_2$ we have that $G_1\uplus G_2$ is a connected graph with a subgraph isomorphic to a complete bipartite graph with $V(G_1)$ and $V(G_2)$ as its parts. \end{remark}
Note that, from a geometric viewpoint, the graph join $G_1\uplus G_2$ is obtained as a union of the graphs $G_1$, $G_2$ and the complete bipartite graph $K(G_1,G_2)$ linking the vertices of $V(G_1)$ and $V(G_2)$.
The following result allows us to compute the distance between any two points in $G_1\uplus G_2$. Furthermore, this result provides information about the geodesics in the graph join.
\begin{proposition}\label{prop:JoinDist} For every graphs $G_1, G_2$ we have:
\begin{itemize} \item[(a)] If $x,y \in G_i$ ($i\in\{1,2\}$), then
\[d_{G_1\uplus G_2}(x,y) = \min\left\{ d_{G_i}(x,y) , d_{G_i}\big(x,V(G_i)\big)+2+d_{G_i}\big(V(G_i),y\big)\right\}.\] \item[(b)] If $x \in G_i$ and $y \in G_j$ with $i\neq j$, then
\[d_{G_1\uplus G_2}(x,y) = d_{G_i}\big(x,V(G_i)\big)+1+d_{G_j}\big(V(G_j),y\big).\] \item[(c)] If $x \in G_i$ and $y \in K(G_1,G_2)$, then
\[d_{G_1\uplus G_2}(x,y) = \min\left\{ d_{G_i}(x,Y_i)+d_{G_1\uplus G_2}(Y_i,y) , d_{G_i}\big(x,V(G_i)\big)+1+d_{G_1\uplus G_2}(Y_j,y)\right\},\]
where $y\in [Y_1,Y_2]$ with $Y_i\in V(G_i)$ and $Y_j\in V(G_j)$. \item[(d)] If $x,y \in K(G_1,G_2)$, then
\[d_{G_1\uplus G_2}(x,y) = \min\{ d_{K(G_1,G_2)}(x,y), M\},\]
where $x\in [X_1,X_2]$, $y\in [Y_1,Y_2]$ with $X_1,Y_1\in V(G_1)$ and $X_2,Y_2\in V(G_2)$, and $M=\min_{i\in\{1,2\}}\{d_{G_1\uplus G_2}(x,X_i)+d_{G_i}(X_i,Y_i)+d_{G_1\uplus G_2}(Y_i,y)\}$ \end{itemize} \end{proposition}
\begin{proof} We will prove each item separately. In item (a), with $j\neq i$, we consider the two shortest possible paths from $x$ to $y$: one contained in $G_i$ and one intersecting $G_j$ (and the latter intersects $G_j$ in just a single vertex). In item (b), since any path in $G_1\uplus G_2$ joining $x$ and $y$ contains at least one edge in $K(G_1,G_2)$, we have a geodesic when the path contains an edge joining a closest vertex to $x$ in $V(G_i)$ and a closest vertex to $y$ in $V(G_j)$. In item (c) we consider the two shortest possible paths from $x$ to $y$ containing either $Y_1$ or $Y_2$. Finally, in item (d) we may consider the three shortest possible paths from $x$ to $y$: one contained in $K(G_1,G_2)$, one containing at least one edge in $E(G_1)$, and one containing at least one edge in $E(G_2)$. \end{proof}
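To illustrate item (a), take $G_1=P_6$, the path graph with six vertices, let $x,y$ be the midpoints of its two end edges and let $G_2$ be any graph. Then $d_{G_1}(x,y)=4$, while $d_{G_1}\big(x,V(G_1)\big)=d_{G_1}\big(V(G_1),y\big)=1/2$, so the shortcut through a vertex of $G_2$ gives $d_{G_1\uplus G_2}(x,y)=\min\{4,\,1/2+2+1/2\}=3$.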
We say that a subgraph $\Gamma$ of $G$ is \emph{isometric} if $d_{\Gamma}(x,y)=d_{G}(x,y)$ for every $x,y\in \Gamma$. Proposition \ref{prop:JoinDist} gives the following result.
\begin{proposition}\label{prop:IsomJoin}
Let $G_1,G_2$ be two graphs and let $\Gamma_1,\Gamma_2$ be isometric subgraphs of $G_1$ and $G_2$, respectively. Then, $\Gamma_1\uplus\Gamma_2$ is an isometric subgraph of $G_1\uplus G_2$. \end{proposition}
The following result allows us to compute the diameter of the set of vertices in a graph join.
\begin{proposition}\label{prop:vert} For every graphs $G_1,G_2$ we have $1\le\diam V(G_1\uplus G_2)\le 2$. Furthermore, $\diam V(G_1\uplus G_2)=1$ if and only if $G_1$ and $G_2$ are complete graphs. \end{proposition}
\begin{proof} Since $V(G_1),V(G_2)\neq\emptyset$, $\diam V(G_1\uplus G_2)\ge 1$. Besides, if $u,v\in V(G_1\uplus G_2)$, we have $d_{G_1\uplus G_2}(u,v)\le d_{K(G_1,G_2)}(u,v)\le 2$.
In order to finish the proof note that on the one hand, if $G_1$ and $G_2$ are complete graphs, then $G_1\uplus G_2$ is a complete graph with at least $2$ vertices and $\diam V(G_1\uplus G_2)=1$. On the other hand, if $\diam V(G_1\uplus G_2)=1$, then for every two vertices $u,v \in V(G_1)$ we have $[u,v]\in E(G_1)$; by symmetry, we have the same result for every $u,v \in V(G_2)$. \end{proof}
Since $\diam V(G) \le \diam G \le \diam V(G) + 1$ for every graph $G$, the previous proposition has the following consequence.
\begin{corollary}\label{c:diam} For every graphs $G_1,G_2$ we have $1\le\diam G_1\uplus G_2\le 3$. \end{corollary}
Proposition \ref{prop:JoinDist} and Corollary \ref{c:diam} give the following results. Given a graph $G$, we say that $x\in G$ is a midpoint (of an edge) if $d_{G}(x,V(G))=1/2$.
\begin{corollary}\label{cor:midpoint}
Let $G_1,G_2$ be two graphs. If $d_{G_1\uplus G_2}(x,y) = 3$, then $x,y$ are two midpoints in $G_i$ with $d_{G_i}(x,y)\ge3$ for some $i\in \{1,2\}$. \end{corollary}
\begin{corollary}\label{r:diam3}
Let $G_1,G_2$ be two graphs. Then, $\diam G_1\uplus G_2 = 3$ if and only if there are two midpoints $x,y$ in $G_i$ with $d_{G_i}(x,y)\ge3$ for some $i\in \{1,2\}$. \end{corollary}
\section{Hyperbolicity constant of the graph join of two graphs}
In this section we obtain some bounds for the hyperbolicity constant of the graph join of two graphs. These bounds allow us to prove that the joins of graphs are always hyperbolic with a small hyperbolicity constant. The next well-known result will be useful.
\begin{theorem}\cite[Theorem 8]{RSVV}\label{t:diameter1} In any graph $G$ the inequality $\delta(G)\le \diam G / 2$ holds and it is sharp. \end{theorem}
We have the following consequence of Corollary \ref{c:diam} and Theorem \ref{t:diameter1}.
\begin{corollary}\label{cor:SP} For every graphs $G_1,G_2$, the graph join $G_1\uplus G_2$ is hyperbolic with $\delta(G_1\uplus G_2)\leq 3/2$, and the inequality is sharp. \end{corollary}
Theorem \ref{th:hyp3/2} characterizes the graph join of two graphs for which the equality in the previous corollary is attained.
The following result in \cite[Lemma 5]{RSVV} will be useful.
\begin{lemma}\label{l:subgraph} If $\Gamma$ is an isometric subgraph of $G$, then $\delta(\Gamma) \le \delta(G)$. \end{lemma}
\begin{theorem}\label{th:HypIsomJoin} For every graphs $G_1,G_2$, we have $$\delta(G_1\uplus G_2)=\max\{ \delta(\Gamma_1\uplus \Gamma_2) : \Gamma_i \text{ is an isometric subgraph of } G_i \text{ for } i=1,2 \}.$$ \end{theorem}
\begin{proof} By Proposition \ref{prop:IsomJoin} and Lemma \ref{l:subgraph} we have $\delta(G_1\uplus G_2)\ge \delta(\Gamma_1\uplus \Gamma_2)$ for any isometric subgraph $\Gamma_i$ of $G_i$ for $i=1,2$. Besides, since any graph is an isometric subgraph of itself we obtain the equality by taking $\Gamma_1=G_1$ and $\Gamma_2=G_2$. \end{proof}
Denote by $J(G)$ the set of vertices and midpoints of edges in $G$. As usual, by \emph{cycle} we mean a simple closed curve, i.e., a path with pairwise different vertices, except for the last one, which is equal to the first vertex.
First, we collect some previous results of \cite{BRS} which will be useful.
\begin{theorem}\cite[Theorem 2.6]{BRS} \label{t:multk/4} For every hyperbolic graph $G$, $\delta(G)$ is a multiple of $1/4$. \end{theorem}
\begin{theorem}\cite[Theorem 2.7]{BRS} \label{t:TrianVMp} For any hyperbolic graph $G$, there exists a geodesic triangle $T = \{x, y, z\}$ that is a cycle with $x, y, z \in J(G)$ and $\delta(T) = \delta(G)$. \end{theorem}
The following result characterizes the hyperbolic graphs with a small hyperbolicity constant, see \cite[Theorem 11]{MRSV}.
Let us define the \emph{circumference} $c(G)$ of a graph $G$ which is not a tree as the supremum of the lengths of its cycles; if $G$ is a tree we define $c(G)=0$.
\begin{theorem}\label{th:delt<1} Let $G$ be any graph. \begin{itemize}
\item[(a)] {$\delta(G) = 0$ if and only if $G$ is a tree.}
\item[(b)] {There is no graph $G$ with $\delta(G) = 1/4$ or $\delta(G) = 1/2$.}
\item[(c)] {$\delta(G) = 3/4$ if and only if $\ c(G)=3$.} \end{itemize} \end{theorem}
We have the following consequence for the hyperbolicity constant of the joins of graphs.
\begin{proposition}\label{r:discretJoin}
For every graphs $G_1,G_2$ the graph join $G_1\uplus G_2$ is hyperbolic with hyperbolicity constant $\delta(G_1\uplus G_2)$ in $\{0, 3/4, 1, 5/4, 3/2\}$. \end{proposition}
If $G_1$ and $G_2$ are \emph{isomorphic}, then we write $G_1 \simeq G_2$. It is clear that if $G_1\simeq G_2$, then $\delta(G_1)=\delta(G_2)$.
The $n$-vertex edgeless graph ($n\ge1$) or \emph{empty graph} is a graph without edges and with $n$ vertices, and it is commonly denoted as $E_n$.
The following result allows us to characterize the joins of graphs with hyperbolicity constant less than one in terms of their factor graphs. Recall that $\Delta_G$ denotes the maximum degree of the vertices in $G$.
\begin{theorem}\label{th:deltJoin<1}
Let $G_1,G_2$ be two graphs.
\begin{itemize}
\item[(1)] {$\delta(G_1\uplus G_2)=0$ if and only if $G_1$ and $G_2$ are empty graphs and one of them is isomorphic to $E_1$.}
\item[(2)] {$\delta(G_1\uplus G_2)=3/4$ if and only if $G_1\simeq E_1$ and $\Delta_{G_2}=1$, or $G_2\simeq E_1$ and $\Delta_{G_1}=1$.}
\end{itemize} \end{theorem}
\begin{proof}$ $ \begin{itemize}
\item[(1)] {By Theorem \ref{th:delt<1} it suffices to characterize the joins of graphs which are trees. If $G_1$ and $G_2$ are empty graphs and one of them is isomorphic to $E_1$, then it is clear that $G_1\uplus G_2$ is a tree. Assume now that $G_1\uplus G_2$ is a tree. If $G_1$ and $G_2$ have at least two vertices then $G_1\uplus G_2$ has a cycle with length four. Thus, $G_1$ or $G_2$ is isomorphic to $E_1$. Without loss of generality we can assume that $G_1\simeq E_1$. Note that if $G_2$ has at least one edge then $G_1\uplus G_2$ has a cycle with length three. Then, $G_2\simeq E_n$ for some $n\in \mathbb{N}$.}
\item[(2)] {By Theorem \ref{th:delt<1} it suffices to characterize the joins of graphs with circumference three. If $G_1\simeq E_1$ and $\Delta_{G_2}=1$, or $G_2\simeq E_1$ and $\Delta_{G_1}=1$, then it is clear that $c(G_1\uplus G_2)=3$. Assume now that $c(G_1\uplus G_2)=3$. If $G_1,G_2$ both have at least two vertices then $G_1\uplus G_2$ contains a cycle with length four and so $c(G_1\uplus G_2)\ge4$. Therefore, $G_1$ or $G_2$ is isomorphic to $E_1$. Without loss of generality we can assume that $G_1\simeq E_1$. Note that if $\Delta_{G_2}\ge2$ then there is an isomorphic subgraph to $E_1\uplus P_3$ in $G_1\uplus G_2$; thus, $G_1\uplus G_2$ contains a cycle with length four. So, we have $\Delta_{G_2}\le1$. Besides, since $G_2$ is a non-empty graph by (1), we have $\Delta_{G_2}\ge1$.} \end{itemize} \end{proof}
The following result will be useful, see \cite[Theorem 11]{RSVV}. The graph join of a cycle $C_{n-1}$ and a single vertex $E_1$ is referred to as a \emph{wheel} with $n$ vertices and denoted by $W_n$. Notice that the complete bipartite graph $K_{n,m}$ is isomorphic to the graph join of two empty graphs $E_n,E_m$, i.e., $K_{n,m}\simeq E_n\uplus E_m$.
\begin{example}\label{examples} The following graphs have these hyperbolicity constants: \begin{itemize}
\item The wheel graph with $n$ vertices $W_n$ verifies $\delta(W_4)=\delta(W_5)=1$, $\delta(W_n)=3/2$ for every $7\le n\le 10$, and $\delta(W_n)=5/4$ for $n=6$ and for every $n\ge 11$.
\item The complete bipartite graphs verify $\delta(K_{1,n}) = 0$ for every $n\ge1$, $\delta(K_{m,n}) = 1$ for every $m,n \ge2$. \end{itemize} \end{example}
Theorem \ref{th:deltJoin<1} and Example \ref{examples} show that the family of graphs $E_1\uplus G$, as $G$ ranges over all graphs, is a representative collection of joins of graphs, since their hyperbolicity constants take all possible values.
The following results characterize the graphs with hyperbolicity constant one and greater than one, respectively. If $G_0$ is a subgraph of $G$ and $w\in V(G_0)$, we denote by $\deg_{G_0}(w)$ the degree of $w$ in the subgraph induced by $V(G_0)$.
\begin{theorem}\cite[Theorem 3.10]{BRS2}\label{th:delt=1} Let $G$ be any graph. Then $\delta(G) = 1$ if and only if the following conditions hold: \begin{itemize}
\item[(1)] {There exists a cycle isomorphic to $C_4$.}
\item[(2)] {For every cycle $\sigma$ with $L(\sigma) \ge 5$ and for every vertex $w \in \sigma$, we have $\deg_\sigma(w) \ge3$.} \end{itemize} \end{theorem}
\begin{theorem}\cite[Theorem 3.2]{BRS2}\label{th:delt>=5/4} Let $G$ be any graph. Then $\delta(G) \ge 5/4$ if and only if there exist a cycle $\sigma$ in $G$ with length $L(\sigma) \ge 5$ and a vertex $w \in V(\sigma)$ such that $\deg_\sigma(w) = 2$. \end{theorem}
Theorem \ref{th:delt>=5/4} has the following consequence for joins of graphs.
\begin{lemma}\label{l:Fact_Delt>1} Let $G_1,G_2$ be two graphs. If $\delta(G_1)>1$, then $\delta(G_1\uplus G_2)>1$. \end{lemma}
\begin{proof}
By Theorem \ref{th:delt>=5/4}, there exist a cycle $\sigma$ in $G_1\uplus G_2$ (contained in $G_1$) with length $L(\sigma) \ge 5$ and a vertex $w \in\sigma$ such that $\deg_\sigma(w) = 2$. Thus, Theorem \ref{th:delt>=5/4} gives $\delta(G_1\uplus G_2)>1$. \end{proof}
Note that the converse of Lemma \ref{l:Fact_Delt>1} does not hold, since $\delta(E_1)=\delta(P_4)=0$ and we can check that $\delta(E_1\uplus P_4)=5/4$.
\begin{corollary}\label{c:FactDelt>1} Let $G_1,G_2$ be two graphs. Then $$\delta(G_1\uplus G_2)\ge \min\big\{5/4,\max\{\delta(G_1),\delta(G_2)\}\big\}.$$ \end{corollary}
\begin{proof} By symmetry, it suffices to show $\delta(G_1\uplus G_2)\ge \min\{5/4,\delta(G_1)\}$. If $\delta(G_1)>1$, then the inequality holds by Lemma \ref{l:Fact_Delt>1}. If $\delta(G_1)=1$, then there exists a cycle isomorphic to $C_4$ in $G_1\subset G_1\uplus G_2$; hence, $\delta(G_1\uplus G_2)\ge1$. If $\delta(G_1)=3/4$, then there exists a cycle isomorphic to $C_3$ in $G_1\subset G_1\uplus G_2$; hence, $\delta(G_1\uplus G_2)\ge3/4$. The inequality is direct if $\delta(G_1)=0$. \end{proof}
The following results allow us to characterize the joins of graphs with hyperbolicity constant one in terms of $G_1$ and $G_2$.
\begin{lemma}\label{l:EmptyJoin}
Let $G$ be any graph. Then, $\delta(E_1\uplus G)\le1$ if and only if every path $\eta$ joining two vertices of $G$ with $L(\eta) = 3$ satisfies $\deg_\eta(w)\ge2$ for every vertex $w \in V(\eta)$. \end{lemma}
Note that if every path $\eta$ joining two vertices of $G$ with $L(\eta) = 3$ satisfies $\deg_\eta(w)$ $\ge2$ for every vertex $w \in V(\eta)$, then the same result holds for $L(\eta)\ge3$ instead of $L(\eta)=3$.
\begin{proof} Let $v$ be the vertex in $E_1$.
Assume first that $\delta(E_1\uplus G)\le1$. Seeking for a contradiction, assume that there is a path $\eta$ joining two vertices of $G$ with $L(\eta) = 3$ and one vertex $w' \in V(\eta)$ with $\deg_\eta(w')=1$. Consider now the cycle $\sigma$ obtained by joining the endpoints of $\eta$ with $v$. Note that $w'\in \sigma$ and $\deg_\sigma(w')=2$; therefore, Theorem \ref{th:delt>=5/4} gives $\delta(E_1\uplus G)>1$, which is a contradiction.
Assume now that every path $\eta$ joining two vertices of $G$ with $L(\eta) = 3$ satisfies $\deg_\eta(w)\ge2$ for every vertex $w \in V(\eta)$. Note that if $G$ does not have paths isomorphic to $P_4$ then there is no cycle in $E_1\uplus G$ with length greater than $4$ and so, $\delta(E_1\uplus G)\le1$. We are going to prove now that for every cycle $\sigma$ in $E_1\uplus G$ with $L(\sigma) \ge 5$ we have $\deg_{\sigma}(w)\ge3$ for every vertex $w \in V(\sigma)$. Let $\sigma$ be any cycle in $E_1\uplus G$ with $L(\sigma) \ge 5$. If $v\in \sigma$, then $\sigma\cap G$ is a subgraph of $G$ isomorphic to $P_{n}$ for $n=L(\sigma)-1$, and $\deg_\sigma(v)=n\ge4$. Since $L(\sigma\cap G)\ge3$, $\deg_{\sigma\cap G}(w)\ge2$ for every $w\in V(\sigma\cap G)$ by hypothesis, and we conclude $\deg_\sigma(w)\ge3$ for every $w\in V(\sigma)\setminus\{v\}$. If $v\notin \sigma$, let $w$ be any vertex in $\sigma$ and let $P(w)$ be a path with length $3$ contained in $\sigma$ and such that $w$ is an endpoint of $P(w)$. By hypothesis $\deg_{P(w)}(w)\ge2$; since $w$ has a neighbor $w'\in V(\sigma\setminus P(w))$, $\deg_{\sigma}(w)\ge3$ for any $w\in V(\sigma)$. Then, Theorem \ref{th:delt>=5/4} gives the result. \end{proof}
Note that if a graph $G$ verifies $\diam G\le2$ then every path $\eta$ joining two vertices of $G$ with $L(\eta) = 3$ satisfies $\deg_\eta(w)\ge2$ for every vertex $w \in V(\eta)$. The converse does not hold, since in the disjoint union $C_3\cup C_3$ of two cycles $C_3$ any path with length $3$ is a cycle and $\diam C_3\cup C_3 = \infty$. However, these two conditions are equivalent if $G$ is connected.
If $G$ is a graph with connected components $\{G_j\}$, we define $$ \diam^* G :=\sup_{j} \, \diam G_j. $$ Note that $\diam^* G = \diam G$ if $G$ is connected; otherwise, $\diam G=\infty$. Also, $\diam^* G$ $>1$ is equivalent to $\Delta_{G}\ge2$. We also have the following result:
\begin{lemma}\label{lemaX} Let $G$ be any graph. Then $\diam^* G\le2$ if and only if every path $\eta$ joining two vertices of $G$ with $L(\eta)=3$ satisfies $\deg_\eta(w)\ge2$ for every $w\in V(\eta)$. \end{lemma}
\begin{lemma}\label{l:Fact2Vert} Let $G_1$ and $G_2$ be two graphs with at least two vertices. Then, $\delta(G_1\uplus G_2)=1$ if and only if $\diam G_i\le2$ or $G_i$ is an empty graph for $i=1,2$. \end{lemma}
\begin{proof} Assume that $\delta(G_1\uplus G_2)=1$. Seeking for a contradiction, assume that $\diam G_1$ $\ge5/2$ and $G_1$ is a non-empty graph or $\diam G_2 \ge 5/2$ and $G_2$ is a non-empty graph. By symmetry, without loss of generality we can assume that $\diam G_1\ge5/2$ and $G_1$ is a non-empty graph; hence, there are a vertex $v\in V(G_1)$ and a midpoint $p\in [w_1,w_2]$ with $d_{G_1}(v,p)\ge5/2$. Consider a cycle $\sigma$ in $G_1\uplus G_2$ containing the vertex $v$, the edge $[w_1,w_2]$ and two vertices of $G_2$, with $L(\sigma)=5$. We have $\deg_{\sigma}(v)=2$. Thus, Theorem \ref{th:delt>=5/4} gives $\delta(G_1\uplus G_2)>1$. This contradicts our assumption, and so, we obtain $\diam G_1\le2$.
Assume now that $\diam G_i\le2$ or $G_i$ is an empty graph for $i=1,2$. Since $G_1$ and $G_2$ have at least two vertices, there exists a cycle isomorphic to $C_4$ in $G_1\uplus G_2$.
First of all, if $G_1$ and $G_2$ are empty graphs then Example \ref{examples} gives $\delta(G_1\uplus G_2)=1$.
Without loss of generality we can assume that $G_1$ is a non-empty graph; then $G_1$ satisfies $\diam G_1\le2$.
Assume that $G_2$ is an empty graph. Let $\sigma$ be any cycle in $G_1\uplus G_2$ with $L(\sigma)\ge5$. Since $\sigma$ contains at least three vertices in $G_1$, we have $\deg_{\sigma}(v)=|V(G_1)\cap\sigma|\ge3$ for every $v\in V(G_2)\cap\sigma$. Besides, if $|V(G_2)\cap\sigma|\ge3$ then $\deg_{\sigma}(w)\ge|V(G_2)\cap\sigma|\ge3$ for every $w\in V(G_1)\cap\sigma$. If $|V(G_2)\cap\sigma|=1$, then $\eta:=\sigma\cap G_1$ is a path in $G_1$ with $L(\eta)\ge3$, and so, $\deg_\eta(w)\ge2$ and $\deg_\sigma(w)\ge3$ for every $w \in V(\eta)$. If $|V(G_2)\cap\sigma|=2$, then $\sigma\cap G_1$ is the union of two paths and $|V(G_1)\cap\sigma|\ge3$; since $\diam G_1\le2$, we have $\deg_{G_1\cap\sigma}(w)\ge1$ for every $w\in V(G_1)\cap\sigma$ (otherwise there are a vertex $w\in V(G_1)\cap\sigma$ and a midpoint $p\in G_1\cap \sigma$ with $d_{G_1}(w,p)>2$). Then, we have $\deg_{\sigma}(v)\ge3$ for every $v\in V(\sigma)$ and so, we obtain $\delta(G_1\uplus G_2)=1$ by Theorem \ref{th:delt=1}.
Finally, assume that $\diam G_2\le2$. By Theorem \ref{t:TrianVMp} it suffices to consider geodesic triangles $T=\{x,y,z\}$ in $G_1\uplus G_2$ that are cycles with $x,y,z\in J(G_1\uplus G_2)$. So, since $\diam G_1,\diam G_2\le2$, Proposition \ref{prop:JoinDist} gives that $L([xy]),L([yz]),L([zx]) \le 2$; thus, for every $\alpha\in[xy]$, $d_{G_1\uplus G_2}(\alpha,[yz]\cup[zx])\le d_{G_1\uplus G_2}(\alpha,\{x,y\})\le L([xy])/2$. Hence, $\delta(T)\le \max\{L([xy]),L([yz]),L([zx])\}/2\le1$ and so, $\delta(G_1\uplus G_2)\le1$. Since $G_1$ and $G_2$ have at least two vertices, by Theorem \ref{th:delt<1} we have $\delta(G_1\uplus G_2)\ge1$ and we conclude $\delta(G_1\uplus G_2)=1$. \end{proof}
The following result characterizes the joins of graphs with hyperbolicity constant one.
\begin{theorem}\label{th:hypJoin1} Let $G_1,G_2$ be any two graphs. Then the following statements hold: \begin{itemize}
\item {Assume that $G_1\simeq E_1$. Then $\delta(G_1\uplus G_2)=1$ if and only if $1<\diam^* G_2\le2$.}
\item {Assume that $G_1$ and $G_2$ have at least two vertices. Then $\delta(G_1\uplus G_2)=1$ if and only if $\diam G_i\le2$ or $G_i$ is an empty graph for $i=1,2$.} \end{itemize} \end{theorem}
\begin{proof} We have the first statement by Theorem \ref{th:deltJoin<1} and Lemmas \ref{l:EmptyJoin} and \ref{lemaX}.
The second statement is just Lemma \ref{l:Fact2Vert}. \end{proof}
In order to compute the hyperbolicity constant of any graph join we are going to characterize the joins of graphs with hyperbolicity constant $3/2$.
\begin{lemma}\label{l:hyp3/2Fact} Let $G_1,G_2$ be any two graphs. If $\delta(G_1\uplus G_2)=3/2$, then each geodesic triangle $T=\{x,y,z\}$ in $G_1\uplus G_2$ that is a cycle with $x,y,z \in J(G_1\uplus G_2)$ and $\delta(T)=3/2$ is contained in either $G_1$ or $G_2$. \end{lemma}
\begin{proof} Seeking a contradiction, assume that there is a geodesic triangle $T=\{x,y,z\}$ in $G_1\uplus G_2$ that is a cycle with $x,y,z \in J(G_1\uplus G_2)$ and $\delta(T)=3/2$ which contains vertices in both factors $G_1,G_2$. Without loss of generality we can assume that there is $p\in[xy]$ with $d_{G_1\uplus G_2}(p, [yz]\cup[zx]) = 3/2$, and so, $L([xy])\ge3$. Hence, $d_{G_1\uplus G_2}(x,y)=3$ by Corollary \ref{c:diam}, and by Corollary \ref{cor:midpoint} we have that $x,y$ are midpoints either in $G_1$ or in $G_2$, and so, $p$ is a vertex in $G_1\uplus G_2$. Without loss of generality we can assume that $x,y\in G_1$. Let $V_x$ be the closest vertex to $x$ in $[xz]\cup[zy]$. If $p\in V(G_2)$ then $d_{G_1\uplus G_2}(p,[yz]\cup[zx]) \le d_{G_1\uplus G_2}(p,V_x) =1$. This contradicts our assumption. If $p\in V(G_1)$ then since $T$ contains vertices in both factors, we have $d_{G_1\uplus G_2}(p,[yz]\cup[zx]) \le d_{G_1\uplus G_2}\big(p,V(G_2)\cap([yz]\cup[zx])\big) =1$. This also contradicts our assumption, and so, we have the result. \end{proof}
\begin{corollary}\label{c:3/2} Let $G_1,G_2$ be any two graphs. If $\delta(G_1\uplus G_2)=3/2$, then $$\max\{\delta(G_1),\delta(G_2)\}\ge3/2.$$ \end{corollary}
The following families of graphs allow us to characterize the joins of graphs with hyperbolicity constant $3/2$. Denote by $C_n$ the cycle graph with $n\ge3$ vertices and by $V(C_n):=\{v_1^{(n)},\ldots,v_n^{(n)}\}$ the set of their vertices such that $[v_n^{(n)},v_1^{(n)}]\in E(C_n)$ and $[v_i^{(n)},v_{i+1}^{(n)}]\in E(C_n)$ for $1\le i\le n-1$. Let $\mathcal{C}_6^{(1)}$ be the set of graphs obtained from $C_6$ by adding a (proper or not) subset of the set of edges $\{[v_2^{(6)},v_6^{(6)}]$, $[v_4^{(6)},v_6^{(6)}]\}$. Let us define the set of graphs $$\mathcal{F}_6:=\{ G \ \text{\small containing, as induced subgraph, a graph isomorphic to some element of } \mathcal{C}_6^{(1)}\}.$$ Let $\mathcal{C}_7^{(1)}$ be the set of graphs obtained from $C_7$ by adding a (proper or not) subset of the set of edges $\{[v_2^{(7)},v_6^{(7)}]$, $[v_2^{(7)},v_7^{(7)}]$, $[v_4^{(7)},v_6^{(7)}]$, $[v_4^{(7)},v_7^{(7)}]\}$. Define $$\mathcal{F}_7:=\{ G \ \text{\small containing, as induced subgraph, a graph isomorphic to some element of } \mathcal{C}_7^{(1)}\}.$$ Let $\mathcal{C}_8^{(1)}$ be the set of graphs obtained from $C_8$ by adding a (proper or not) subset of the set $\{[v_2^{(8)},v_6^{(8)}]$, $[v_2^{(8)},v_8^{(8)}]$, $[v_4^{(8)},v_6^{(8)}]$, $[v_4^{(8)},v_8^{(8)}]\}$. Also, let $\mathcal{C}_8^{(2)}$ be the set of graphs obtained from $C_8$ by adding a (proper or not) subset of $\{[v_2^{(8)},v_8^{(8)}]$, $[v_4^{(8)},v_6^{(8)}]$, $[v_4^{(8)},v_7^{(8)}]$, $[v_4^{(8)},v_8^{(8)}]\}$. Define $$\mathcal{F}_8:=\{ G \ \text{\small containing, as induced subgraph, a graph isomorphic to some element of } \mathcal{C}_8^{(1)}\cup \mathcal{C}_8^{(2)}\}.$$ Let $\mathcal{C}_9^{(1)}$ be the set of graphs obtained from $C_9$ by adding a (proper or not) subset of the set of edges $\{[v_2^{(9)},v_6^{(9)}]$, $[v_2^{(9)},v_9^{(9)}]$, $[v_4^{(9)},v_6^{(9)}]$, $[v_4^{(9)},v_9^{(9)}]\}$. Define $$\mathcal{F}_9:=\{ G \ \text{\small containing, as induced subgraph, a graph isomorphic to some element of } \mathcal{C}_9^{(1)}\}.$$ Finally, we define the set $\mathcal{F}$ by $$\mathcal{F}:=\mathcal{F}_6\cup\mathcal{F}_7\cup\mathcal{F}_8\cup\mathcal{F}_9.$$ Note that $\mathcal{F}_6$, $\mathcal{F}_7$, $\mathcal{F}_8$ and $\mathcal{F}_9$ are not disjoint sets of graphs.
The following theorem characterizes the joins of graphs $G_1$ and $G_2$ with $\delta(G_1\uplus G_2)=3/2$. For any non-empty set $S\subset V(G)$, the induced subgraph of $S$ will be denoted by $\langle S\rangle$.
\begin{theorem}\label{th:hyp3/2} Let $G_1,G_2$ be any two graphs. Then, $\delta(G_1\uplus G_2)=3/2$ if and only if $G_1\in \mathcal{F}$ or $G_2\in\mathcal{F}$. \end{theorem}
\begin{proof} Assume first that $\delta(G_1\uplus G_2)=3/2$. By Theorem \ref{t:TrianVMp} there is a geodesic triangle $T=\{x,y,z\}$ in $G_1\uplus G_2$ that is a cycle with $x,y,z \in J(G_1\uplus G_2)$ and $\delta(T)=3/2$. By Lemma \ref{l:hyp3/2Fact}, $T$ is contained either in $G_1$ or in $G_2$. Without loss of generality we can assume that $T$ is contained in $G_1$. We can also assume that there is $p\in[xy]$ with $d_{G_1\uplus G_2}(p, [yz]\cup[zx]) = 3/2$, and by Corollary \ref{c:diam}, $L([xy])=3$. Hence, by Corollary \ref{cor:midpoint} we have that $x,y$ are midpoints in $G_1$, and so, $p\in V(G_1)$. Since $L([yz])\le3$, $L([zx])\le3$ and $L([yz])+L([zx])\ge L([xy])$, we have $6\le L(T)\le9$.
Assume that $L(T)=6$. Denote by $\{v_1,\ldots,v_6\}$ the vertices in $T$ such that $T=\bigcup_{i=1}^{6}[v_i,v_{i+1}]$ with $v_7:=v_1$. Without loss of generality we can assume that $x\in[v_1,v_2]$, $y\in[v_4,v_5]$ and $p=v_3$. Since $d_{G_1\uplus G_2}(x,y)=3$, we have that $\langle\{v_1,\ldots,v_6\}\rangle$ contains neither $[v_1,v_4]$, $[v_1,v_5]$, $[v_2,v_4]$ nor $[v_2,v_5]$; besides, since $d_{G_1\uplus G_2}(p,[yz]\cup[zx])>1$ we have that $\langle\{v_1,\ldots,v_6\}\rangle$ contains neither $[v_3,v_1]$, $[v_3,v_5]$ nor $[v_3,v_6]$.
Note that $[v_2,v_6]$, $[v_4,v_6]$ may be contained in $\langle\{v_1,\ldots,v_6\}\rangle$. Therefore, $G_1\in \mathcal{F}_6$.
Assume that $L(T)=7$ and $G_1\notin \mathcal{F}_6$. Denote by $\{v_1,\ldots,v_7\}$ the vertices in $T$ such that $T=\bigcup_{i=1}^{7}[v_i,v_{i+1}]$ with $v_8:=v_1$. Without loss of generality we can assume that $x\in[v_1,v_2]$, $y\in[v_4,v_5]$ and $p=v_3$. Since $d_{G_1\uplus G_2}(x,y)=3$, we have that $\langle\{v_1,\ldots,v_7\}\rangle$ contains neither $[v_1,v_4]$, $[v_1,v_5]$, $[v_2,v_4]$ nor $[v_2,v_5]$; besides, since $d_{G_1\uplus G_2}(p,[yz]\cup[zx])>1$ we have that $\langle\{v_1,\ldots,v_7\}\rangle$ contains neither $[v_3,v_1]$, $[v_3,v_5]$, $[v_3,v_6]$ nor $[v_3,v_7]$. Since $G_1\notin \mathcal{F}_6$, $[v_1,v_6]$ and $[v_5,v_7]$ are not contained in $\langle\{v_1,\ldots,v_7\}\rangle$.
Note that $[v_2,v_6]$, $[v_2,v_7]$, $[v_4,v_6]$, $[v_4,v_7]$ may be contained in $\langle\{v_1,\ldots,v_7\}\rangle$. Hence, $G_1\in \mathcal{F}_7$.
Assume that $L(T)=8$ and $G_1\notin \mathcal{F}_6\cup\mathcal{F}_7$. Denote by $\{v_1,\ldots,v_8\}$ the vertices in $T$ such that $T=\bigcup_{i=1}^{8}[v_i,v_{i+1}]$ with $v_9:=v_1$. Without loss of generality we can assume that $x\in[v_1,v_2]$, $y\in[v_4,v_5]$ and $p=v_3$. Since $d_{G_1\uplus G_2}(x,y)=3$, we have that $\langle\{v_1,\ldots,v_8\}\rangle$ contains neither $[v_1,v_4]$, $[v_1,v_5]$, $[v_2,v_4]$ nor $[v_2,v_5]$; besides, since $d_{G_1\uplus G_2}(p,[yz]\cup[zx])>1$ we have that $\langle\{v_1,\ldots,v_8\}\rangle$ contains neither $[v_3,v_1]$, $[v_3,v_5]$, $[v_3,v_6]$, $[v_3,v_7]$ nor $[v_3,v_8]$. Since $G_1\notin \mathcal{F}_6\cup\mathcal{F}_7$, $[v_1,v_6]$, $[v_1,v_7]$, $[v_5,v_7]$, $[v_5,v_8]$ and $[v_6,v_8]$ are not contained in $\langle\{v_1,\ldots,v_8\}\rangle$. Since $T$ is a geodesic triangle we have that $z\in\{v_{6,7},v_7,v_{7,8}\}$ with $v_{6,7}$ and $v_{7,8}$ the midpoints of $[v_6,v_7]$ and $[v_7,v_8]$, respectively. If $z=v_7$ then $\langle\{v_1,\ldots,v_8\}\rangle$ contains neither $[v_2,v_7]$ nor $[v_4,v_7]$. Note that $[v_2,v_6]$, $[v_2,v_8]$, $[v_4,v_6]$, $[v_4,v_8]$ may be contained in $\langle\{v_1,\ldots,v_8\}\rangle$. If $z=v_{6,7}$ then $\langle\{v_1,\ldots,v_8\}\rangle$ contains neither $[v_2,v_6]$ nor $[v_2,v_7]$. Note that $[v_2,v_8]$, $[v_4,v_6]$, $[v_4,v_7]$, $[v_4,v_8]$ may be contained in $\langle\{v_1,\ldots,v_8\}\rangle$. By symmetry, we obtain an equivalent result for $z=v_{7,8}$. Therefore, $G_1\in \mathcal{F}_8$.
Assume that $L(T)=9$ and $G_1\notin \mathcal{F}_6\cup\mathcal{F}_7\cup\mathcal{F}_8$. Denote by $\{v_1,\ldots,v_9\}$ the vertices in $T$ such that $T=\bigcup_{i=1}^{9}[v_i,v_{i+1}]$ with $v_{10}:=v_1$. Without loss of generality we can assume that $x\in[v_1,v_2]$, $y\in[v_4,v_5]$ and $p=v_3$. Since $d_{G_1\uplus G_2}(x,y)=3$, we have that $\langle\{v_1,\ldots,v_9\}\rangle$ contains neither $[v_1,v_4]$, $[v_1,v_5]$, $[v_2,v_4]$ nor $[v_2,v_5]$; besides, since $d_{G_1\uplus G_2}(p,[yz]\cup[zx])>1$ we have that $\langle\{v_1,\ldots,v_9\}\rangle$ contains neither $[v_3,v_1]$, $[v_3,v_5]$, $[v_3,v_6]$, $[v_3,v_7]$, $[v_3,v_8]$ nor $[v_3,v_9]$. Since $T$ is a geodesic triangle we have that $z$ is the midpoint of $[v_7,v_8]$. Since $d_{G_1\uplus G_2}(y,z)=d_{G_1\uplus G_2}(z,x)=3$, we have that $\langle\{v_1,\ldots,v_9\}\rangle$ contains neither $[v_1,v_7]$, $[v_1,v_8]$, $[v_2,v_7]$, $[v_2,v_8]$, $[v_4,v_7]$, $[v_4,v_8]$, $[v_5,v_7]$ nor $[v_5,v_8]$. Since $G_1\notin \mathcal{F}_6\cup\mathcal{F}_7\cup\mathcal{F}_8$, $[v_1,v_6]$, $[v_5,v_9]$, $[v_6,v_8]$, $[v_6,v_9]$ and $[v_7,v_9]$ are not contained in $\langle\{v_1,\ldots,v_9\}\rangle$. Note that $[v_2,v_6]$, $[v_2,v_9]$, $[v_4,v_6]$, $[v_4,v_9]$ may be contained in $\langle\{v_1,\ldots,v_9\}\rangle$. Hence, $G_1\in \mathcal{F}_9$.
Finally, one can check that if $G_1\in \mathcal{F}$ or $G_2\in \mathcal{F}$, then $\delta(G_1\uplus G_2)=3/2$, by following the previous arguments. \end{proof}
These results allow us to compute, in a simple way, the hyperbolicity constant of every graph join:
\begin{theorem}\label{th:hypJoin} Let $G_1,G_2$ be any two graphs. Then,
\[ \delta(G_1\uplus G_2)=\left\{ \begin{array}{ll} 0,&\text{if } G_i \simeq E_1 \text{ and } G_j \simeq E_n \text{ for } i\neq j \text{ and } n\in \mathbb{N},\\ 3/4,&\text{if } G_i\simeq E_1 \text{ and } \Delta_{G_j}=1 \text{ for } i\neq j,\\ 1,&\text{if } G_i\simeq E_1 \text{ and } 1< \diam^* G_j\le2 \text{ for } i\neq j;\text{ or}\\ & |V(G_1)|,|V(G_2)|\ge2 \text{ and, for } i=1,2,\\ & \diam G_i\le2 \text{ or } G_i \text{ is an empty graph};\\ 3/2,&\text{if } G_1\in \mathcal{F} \text{ or } G_2\in\mathcal{F},\\ 5/4,&\text{otherwise}. \end{array} \right. \] \end{theorem}
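For instance, for the graph join $C_3\uplus P_3$ of Figure \ref{fig:join}, both factors have at least two vertices and satisfy $\diam C_3=3/2\le2$ and $\diam P_3=2\le2$, so Theorem \ref{th:hypJoin} gives $\delta(C_3\uplus P_3)=1$.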
\begin{corollary}\label{c:emptyUplus} Let $G$ be any graph. Then, \[ \delta(E_1\uplus G)=\left\{ \begin{array}{ll} 0,\quad &\mbox{if } \diam^* G=0,\\ 3/4,\quad &\mbox{if } \diam^* G=1,\\ 1,\quad &\mbox{if } 1< \diam^* G\le2,\\ 5/4,\quad &\mbox{if } \diam^* G>2 \text{ and } G\notin \mathcal{F},\\ 3/2,\quad &\mbox{if } G\in \mathcal{F}. \end{array} \right. \] \end{corollary}
\section{Hyperbolicity of corona of two graphs}\label{Sect4} In this section we study the hyperbolicity of the corona of two graphs, defined by Frucht and Harary in 1970, see \cite{FH}.
\begin{definition}\label{def:crown} Let $G_1$ and $G_2$ be two graphs with $V(G_1)\cap V(G_2)=\emptyset$. The \emph{corona} of $G_1$ and $G_2$, denoted by $G_1\diamond G_2$, is defined as the graph obtained by taking one copy of $G_1$ and a copy of $G_2$ for each vertex $v\in V(G_1)$, and then joining each vertex $v\in V(G_1)$ to every vertex in the $v$-th copy of $G_2$. \end{definition}
From the definition, it clearly follows that the corona product of two graphs is a non-commutative and non-associative operation. Figure \ref{fig:crown} shows the corona of two graphs.
\begin{figure}
\caption{Corona of two graphs $C_4 \diamond C_3$.}
\label{fig:crown}
\end{figure}
Many authors deal only with the corona of finite graphs; however, our results hold for finite or infinite graphs.
If $G$ is a connected graph, we say that $v\in V(G)$ is a \emph{connection vertex} if $G \setminus \{v\}$ is not connected.
Given a connected graph $G$, a family of subgraphs $\{G_n\}_{n\in \Lambda}$ of $G$ is a \emph{T-decomposi\-tion} of $G$ if $\cup_n G_n=G$ and $G_n\cap G_m$ is either a connection vertex or the empty set for each $n\neq m$.
We will need the following result (see \cite[Theorem 5]{BRSV2}), which allows us to obtain global information about the hyperbolicity of a graph from local information.
\begin{theorem} \label{t:treedec} Let $G$ be any connected graph and let $\{G_n\}_n$ be any T-decomposition of $G$. Then $\delta(G)=\sup_n \delta(G_n)$. \end{theorem}
We remark that the corona $G_1\diamond G_2$ of two graphs is connected if and only if $G_1$ is connected.
The following result characterizes the hyperbolicity of the corona of two graphs and provides the precise value of its hyperbolicity constant.
\begin{theorem}\label{th:corona} Let $G_1,G_2$ be any two graphs. Then $\delta(G_1\diamond G_2)=\max\{\delta(G_1),\delta(E_1\uplus G_2)\}$. \end{theorem}
\begin{proof} Assume first that $G_1$ is connected. The formula follows from Theorem \ref{t:treedec}, since $\{G_1\}\cup \big\{\{v\}\uplus G_2\big\}_{v\in V(G_1)}$ is a T-decomposition of $G_1\diamond G_2$. Finally, note that if $G_1$ is a non-connected graph, then we can apply the previous argument to each connected component. \end{proof}
Note that Corollary \ref{c:emptyUplus} provides the precise value of $\delta(E_1\uplus G_2)$.
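For instance, for the corona $C_4\diamond C_3$ of Figure \ref{fig:crown}, Corollary \ref{c:emptyUplus} gives $\delta(E_1\uplus C_3)=1$, since $1<\diam^* C_3=3/2\le2$; combining this with the well-known value $\delta(C_4)=1$ (the hyperbolicity constant of the cycle graph $C_n$ is $n/4$), Theorem \ref{th:corona} yields $\delta(C_4\diamond C_3)=\max\{1,1\}=1$.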
\begin{corollary}\label{cor:sup} Let $G_1,G_2$ be any two graphs. Then $G_1\diamond G_2$ is hyperbolic if and only if $G_1$ is hyperbolic. \end{corollary}
\begin{proof} By Theorem \ref{th:corona} we have $\delta(G_1\diamond G_2)=\max\{\delta(G_1),\delta(E_1\uplus G_2)\}$. Then, by Corollary \ref{cor:SP} we have $\delta(G_1) \le \delta(G_1\diamond G_2) \le \max\{\delta(G_1),3/2\}$. \end{proof}
\end{document} |
\begin{document}
\begin{abstract} We revisit Vasy's method \cite{vasy1},\cite{vasy2} for showing meromorphy of the resolvent for (even) asymptotically hyperbolic manifolds. It provides an effective definition of resonances in that setting by identifying them with poles of inverses of a family of Fredholm differential operators. In the Euclidean case the method of complex scaling made this available since the 70's but in the hyperbolic case an effective definition was not known till \cite{vasy1},\cite{vasy2}. Here we present a simplified version which relies only on standard pseudodifferential techniques and estimates for hyperbolic operators. As a byproduct we obtain more natural invertibility properties of the Fredholm family.
\end{abstract}
\maketitle
\section{Introduction}
We present a version of the method introduced by Andr\'as Vasy \cite{vasy1},\cite{vasy2} to prove meromorphic continuations of resolvents of Laplacians on even asymptotically hyperbolic spaces -- see \eqref{eq:gash}. That meromorphy was first established for any asymptotically hyperbolic metric by Mazzeo--Melrose \cite{mm}. Other early contributions were made by Agmon \cite{Ag}, Fay \cite{Fa}, Guillop\'e--Zworski \cite{GuZw}, Lax--Phillips \cite{LaPh}, Mandouvalos \cite{Man}, Patterson \cite{Pa} and Perry \cite{Pe}. Guillarmou \cite{g} showed that the evenness condition was needed for a global meromorphic continuation and clarified the construction given in \cite{mm}.
Vasy's method is dramatically different from earlier approaches and is related to the study of stationary wave equations for Kerr--de Sitter black holes -- see \cite{vasy1} and \cite[\S 5.7]{res}. Its advantage lies in relating the resolvent to the inverse of a family of {\em Fredholm differential operators}. Hence, microlocal methods can be used to prove results which have not been available before, for instance existence of resonance free strips for non-trapping metrics \cite{vasy2}. Another application is the work of Datchev--Dyatlov \cite{DaDy} on the fractal upper bounds on the number of resonances for (even) asymptotically hyperbolic manifolds and in particular for convex co-compact quotients of $ {\mathbb H}^n $. Previously only the case of convex co-compact Schottky quotients was known \cite{glz} and that was established using transfer operators and zeta function methods. In the context of black holes the construction has been used to obtain a quantitative version of Hawking radiation \cite{Dr}, exponential decay of waves in the Kerr--de Sitter case \cite{Dy1}, the description of quasi-normal modes for perturbations of Kerr--de Sitter black holes \cite{Dy2} and rigorous definition of quasi-normal modes for Kerr--Anti de Sitter black holes \cite{ga}. The construction of the Fredholm family also plays a role in the study of linear and non-linear scattering problems -- see \cite{BaVWu}, \cite{HiV1}, \cite{HiV2} and references given there.
A related approach to meromorphic continuation, motivated by the study of Anti-de Sitter black holes, was independently developed by Warnick \cite{Wa}. It is based on physical space techniques for hyperbolic equations and it also provides meromorphic continuation of resolvents for even asymptotically hyperbolic metrics \cite[\S 7.5]{Wa}.
We should point out that for a large class of asymptotically Euclidean manifolds an effective characterization of resonances has been known since the introduction of the method of complex scaling by Aguilar--Combes, Balslev--Combes and Simon in the 1970s -- see \cite[\S 4.5]{res} for an elementary introduction and references and \cite{WuZw} for a class of asymptotically Euclidean manifolds to which the method applies.
In this note we present a direct proof of meromorphic continuation based on standard pseudodifferential techniques and estimates for hyperbolic equations which can be found, for instance, in \cite[\S 18.1]{ho3} and \cite[\S 23.2]{ho3} respectively. In particular, we prove Melrose's {\em radial estimates} \cite{mel} which are crucial for establishing the Fredholm property. A semiclassical version of the approach presented here can be found in \cite[Chapter 5]{res} -- it is needed for the high energy results \cite{DaDy}, \cite{vasy2} mentioned above.
\renewcommand\thefootnote{\dag}
We now define {even asymptotically hyperbolic manifolds}. Suppose that $ \overline M$ is a compact manifold with boundary $ \partial M \neq \emptyset$ of dimension $ n +1 $. We denote by $ M $ the interior of $ \overline M $. The Riemannian manifold $ ( M , g ) $ is even asymptotically hyperbolic if there exist functions $ y' \in {\bar{\mathcal C}^\infty} ( M ; \partial M )$ and
$ y_1 \in {{\bar{\mathcal C}^\infty}} ( M ; (0, 2 ) ) $\footnote{We cannot write a paper about Vasy's method without some footnotes: we follow the notation of \cite[Appendix B]{ho3} where $ {\bar{\mathcal C}^\infty} ( M )$ denotes functions which are smoothly extendable across $ \partial M $ and $ {\dot{\mathcal C}^\infty} ( \overline M ) $ functions which are extendable to smooth functions {\em supported} in $ \overline M $ -- see \S \ref{Fredamp}.},
$ y_1|_{\partial M } = 0 $, $ dy_1|_{\partial M } \neq 0 $, such that \begin{equation} \label{eq:coords} \overline M \supset y_1^{-1} ( [0 , 1 ] ) \ni m \mapsto ( y_1( m), y'( m ) ) \in [ 0 , 1 ] \times \partial M \end{equation} is a diffeomorphism, and near $ \partial M $ the metric has the form, \begin{equation}
\label{eq:gash} g|_{ y_1 \leq 1} = \frac{ dy_1^2 + h ( y_1^2 ) }{ y_1^2 } , \end{equation} where
$ [ 0, 1 ] \ni t \mapsto h ( t ) $, is a smooth family of Riemannian metrics on $ \partial M $.
For the discussion of invariance of this definition and of its geometric meaning we refer to \cite[\S 2]{g}.
Let $ - \Delta_g \geq 0 $ be the Laplace--Beltrami operator for the metric $ g $. Since the spectrum is contained in $ [ 0 , \infty ) $ the operator $- \Delta_g -\zeta ( n -\zeta ) $ is invertible on $ H^2 ( M , d \vol_g ) $ for $ \Re\zeta > n $. Hence we can define \begin{equation} \label{eq:Res4Deltag} R (\zeta) := ( - \Delta_g -\zeta ( n - \zeta ) )^{-1} : L^2 ( M, d\!\vol_g ) \to H^2 ( M, d \!\vol_g ) , \ \ \Re\zeta > n . \end{equation} We note that elliptic regularity shows that $ R ( \zeta ) : {\dot{\mathcal C}^\infty} ( M ) \to {{\mathcal C}^\infty} ( M )$, $ \Re\zeta > n $. We also remark that as a byproduct of the construction we will show the well known fact that $ R ( \zeta ) : L^2 \to H^2 $ is meromorphic for $ \Re\zeta > n/2 $: the poles correspond to $ L^2 $ eigenvalues of $ - \Delta_g $ and hence lie in $ ( n/2, n ) $.
We will prove the result of Mazzeo--Melrose \cite{mm} and Guillarmou \cite{g}: \begin{thm} \label{t:1} Suppose that $ ( M , g) $ is an even asymptotically hyperbolic manifold and that $ R ( \zeta ) $ is defined by \eqref{eq:Res4Deltag}. Then \[ R (\zeta ) : {\dot{\mathcal C}^\infty} ( M ) \to {{\mathcal C}^\infty} ( M ) , \] continues meromorphically from $ \Re\zeta > n $ to $ {\mathbb C}$ with poles of finite rank.
\end{thm}
The key point however is the fact that $ R (\zeta) $ can be related to $ P( i ( \zeta - n/2) )^{-1}$ where \[ \zeta \longmapsto P ( i ( \zeta - n/2) ) \]
is a family of Fredholm differential operators -- see \S \ref{ffdo} and Theorem \ref{t:2}. That family will be shown to be invertible for $ \Re\zeta > n $ which proves the meromorphy of $ P ( i ( \zeta - n/2) )^{-1} $ -- see Theorem \ref{t:3}. We remark that for $ \Re \zeta > \frac n 2 $, $ R ( \zeta )$ is meromorphic as an operator $ L^2 ( M ) \to L^2 ( M ) $ with poles corresponding to eigenvalues of $ - \Delta_g $.
The paper is organized as follows. In \S \ref{ffdo} we define the family $ P ( \lambda ) $ and the spaces on which it has the Fredholm property. That section contains the main results of the paper: Theorems \ref{t:2} and \ref{t:3}. In \S \ref{Fredamp} we recall the notation from the theory of pseudodifferential operators and provide detailed references. We also recall estimates for hyperbolic operators needed here. In \S \ref{radest} we prove Melrose's propagation estimates at radial points and in \S \ref{s:t1} we use them to show the Fredholm property. \S\ref{asym} gives some precise estimates valid for $ \Im \lambda \gg 1 $. Finally, in \S \ref{merc} we present invertibility of $ P ( \lambda ) $ for $ \Im \lambda \gg 1 $ and that proves the meromorphic continuation. Except for references to \cite[18.1]{ho3} and \cite[23.2]{ho3} and some references to standard approximation arguments \cite[Appendix E]{res} (with material readily available in many other places) the paper is self-contained.
\smallsection{Acknowledgements} I would like to thank Semyon Dyatlov and Andr\'as Vasy for helpful comments on the first version of this note. I am particularly grateful to Peter Hintz for many suggestions and for his help with the proof of Proposition \ref{p:hintz}. Partial support by the National Science Foundation under the grant DMS-1500852 is also gratefully acknowledged.
\section{The Fredholm family of differential operators} \label{ffdo}
Let $ y' \in \partial M $ denote the variable on $ \partial M $. Then \eqref{eq:gash} implies that near $ \partial M $, the Laplacian has the form \begin{gather} \label{eq:Deltagg} \begin{gathered} - \Delta_g = ( y_1 D_{y_1} )^2 + i ( n + y_1^2 \gamma ( y_1^2, y') ) y_1 D_{y_1} - y_1^2 \Delta_{h (y_1^2) } , \\ \gamma ( t, y') := - \partial_t \bar h ( t ) / \bar h ( t ) , \ \ \bar h ( t ) := \det h ( t ) . \end{gathered} \end{gather} Here $ \Delta_{h (y_1^2 ) } $ is the Laplacian for the family of metrics on $ \partial M $ depending smoothly on $ y_1^2 $ and $ \gamma \in {{\mathcal C}^\infty} ( [0,1]\times \partial M ) $. (The logarithmic derivative defining $ \gamma $ is independent of the density on $ \partial M$ needed to define the determinant $ \bar h $.)
In \S \ref{asym} we will show that the unique $ L^2 $ solutions to \[ ( - \Delta_g -\zeta ( n -\zeta ) ) u =f \in {\dot{\mathcal C}^\infty} ( M) , \ \ \Re\zeta > n , \ \ \] satisfy \[ u \in y_1^{\zeta } {\bar{\mathcal C}^\infty} ( M ) \ \ \ \text{
and $ \ \ \ \ y_1^{ -\zeta }u |_{ y_1 < 1 } = F ( y_1^2 , y') , \ \ F \in {\bar{\mathcal C}^\infty} ( [0,1 ]\times \partial M ) $.} \] Eventually we will show that the meromorphic continuation of the resolvent provides solutions of this form for all $\zeta \in {\mathbb C} $ that are not poles of the resolvent.
This suggests two things:
\begin{itemize}
\item To reduce the investigation to the study of smooth solutions we should conjugate $ - \Delta_g -\zeta ( n- \zeta) $ by the weight $ y_1^\zeta $.
\item The desired smoothness properties should be stronger in the sense that the functions should be smooth in $ (y_1^2 , y' ) $.
\end{itemize}
Motivated by this we calculate, \begin{equation} \label{eq:firstconj} y_1^{ -\zeta} ( - \Delta_g - \zeta ( n - \zeta) ) y_1^{\zeta } = x_1 P ( \lambda ) , \ \ x_1 = y_1^2 , \ \ x' = y', \ \lambda = i (\zeta - {\textstyle{\frac n 2 }}) , \end{equation} where, near $ \partial M $, \begin{equation} \label{eq:Plag} P ( \lambda ) = 4 ( x_1 D_{x_1}^2 - ( \lambda + i ) D_{x_1} ) - \Delta_h + i \gamma ( x ) \left( 2 x_1 D_{x_1} - \lambda - i {\textstyle\frac{ n-1} 2 } \right) . \end{equation} The switch to $ \lambda $ is motivated by the fact that numerology is slightly lighter on the $ \zeta$-side for $ - \Delta_g $ and on the $\lambda$-side for $ P ( \lambda ) $.
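We also note two elementary consequences of the substitution $ \lambda = i ( \zeta - \frac n 2 ) $ which are used implicitly below: \[ \zeta ( n - \zeta ) = ( {\textstyle \frac n 2} )^2 - ( \zeta - {\textstyle \frac n 2} )^2 = \lambda^2 + ( {\textstyle \frac n 2} )^2 , \ \ \ \ \Im \lambda = \Re \zeta - {\textstyle \frac n 2} , \] so that, for instance, the half-plane $ \Re \zeta > n $ in \eqref{eq:Res4Deltag} corresponds to $ \Im \lambda > \frac n 2 $.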
\renewcommand\thefootnote{\ddag}
To define the operator $ P ( \lambda ) $ geometrically we introduce a new manifold using coordinates \eqref{eq:coords} and $ x_1 = y_1^2 $ for $ y_1 > 0 $: \begin{equation} \label{eq:coordX} X = [ -1 , 1 ]_{x_1} \times \partial M \sqcup \left( M \setminus y_1^{-1} (( 0, 1 ) ) \right). \end{equation} We note that $ X_1 := X \cap \{ x_1 > 0 \} $ is diffeomorphic to $ M $ but $ \overline X_1 $ and $ \overline M $ have different $ {{\mathcal C}^\infty} $-structures\footnote{This construction appeared already in \cite[\S 2]{GuZw} and $ P ( \lambda ) = Q ( n/4 - i \lambda/2) $ where $ Q ( \zeta ) $ was defined in \cite[(2.6),(3.12)]{GuZw}. However the significance of $ Q ( \zeta ) $ did not become clear until \cite{vasy1}.}.
We can extend $ x_1 \mapsto h ( x_1 ) $ to a family of smooth non-degenerate metrics on $ \partial M $ depending smoothly on $ x_1 \in [-1,1]$. Using \eqref{eq:Deltagg} this provides a natural extension of the function $ \gamma $ appearing in \eqref{eq:firstconj}.
The Laplacian $ -\Delta_g $ is a self-adjoint operator on $ L^2 ( M , d \vol_g ) $, where near $ \partial M $ and in the notation of \eqref{eq:Deltagg}, \[ d \vol_g = y_1^{-n-1} \bar h ( y_1^2, y') dy_1 dy' , \] where $d y' $ is a density on $ \partial M $ used to define the determinant $ \bar h = \det h $. The conjugation \eqref{eq:firstconj} shows that for $ \lambda \in {\mathbb R} $ ($ \zeta \in \frac n2 + i {\mathbb R} $) $ x_1 P ( \lambda ) $ is formally self-adjoint with respect to $ x_1^{-1} \bar h ( x) dx_1 dx' $ and consequently $ P ( \lambda ) $ is formally self-adjoint for \begin{equation} \label{eq:dmug} d \mu_g = \bar h ( x) dx . \end{equation} This will be the measure used for defining $ L^2 ( X) $ in what follows. In particular we see that the formal adjoint with respect to $ d \mu_g $ satisfies \begin{equation} \label{eq:formaladj} P ( \lambda )^* = P ( \bar \lambda ) . \end{equation}
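We sketch the computation behind this self-adjointness statement; it uses only \eqref{eq:firstconj} and the formula for $ d \vol_g $ above. For $ \lambda \in {\mathbb R} $ we have $ \zeta = \frac n 2 - i \lambda $, so that $ \zeta ( n - \zeta ) \in {\mathbb R} $, $ n - \zeta = \bar \zeta $ and $ x_1^{-1} \bar h \, dx_1 dx' = 2\, y_1^{ n } \, d \vol_g $. Hence, for $ u , v \in {{\mathcal C}^\infty_{\rm{c}}} ( M ) $ supported near $ \partial M $, \[ \int_M ( x_1 P ( \lambda ) u ) \, \bar v \; x_1^{-1} \bar h \, dx = 2 \int_M ( ( - \Delta_g - \zeta ( n - \zeta ) ) ( y_1^{\zeta} u ) ) \, \overline{ y_1^{\zeta} v } \; d \vol_g , \] where we used $ y_1^{ - \zeta } y_1^{ n } = y_1^{ \bar \zeta } $. Since $ - \Delta_g - \zeta ( n - \zeta ) $ is symmetric on $ L^2 ( M , d \vol_g ) $ (the constant $ \zeta ( n - \zeta ) $ being real), the right hand side is unchanged, up to complex conjugation, when $ u $ and $ v $ are interchanged. This is the claimed formal self-adjointness of $ x_1 P ( \lambda ) $ and hence of $ P ( \lambda ) $ with respect to \eqref{eq:dmug}.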
We can now define spaces on which $ P ( \lambda ) $ is a Fredholm operator. For that we denote by $ \bar H^s ( X^\circ ) $ the space of restrictions to $ X^\circ $ of elements of $ H^s $ on an extension of $ X$ across the boundary -- see \cite[\S B.2]{ho3} and \S \ref{hypest} -- and put \begin{equation} \label{eq:hypXY} \mathscr Y_s := \bar H^s ( X^\circ ) , \ \ \mathscr X_s := \{ u \in \mathscr Y_{s+1} : P ( 0 ) u \in \mathscr Y_s \}. \end{equation} Since the dependence on $ \lambda $ in $ P ( \lambda ) $ occurs only in lower order terms we can replace $ P ( 0 ) $ by $ P ( \lambda ) $ in the definition of $ \mathscr X_s $.
\noindent {\bf Motivation:} Since for $ x_1 < 0 $ the operator $ P ( \lambda ) $ is hyperbolic with respect to the surfaces $ x_1 = a $, $ a < 0 $, the following elementary example motivates the definition \eqref{eq:hypXY}. Consider $ P = D_{x_1}^2 - D_{x_2}^2 $ on $ [ -1, 0 ] \times {\mathbb S}^1 $ and define \[ Y_s := \{ u \in \bar H^s ( [-1, \infty ) \times {\mathbb S}^1 ) : \supp u \subset [ -1, 0 ] \times {\mathbb S}^1 \}, \ \ X_s := \{ u \in Y_{s+1} : P u \in Y_s \}. \] Then standard hyperbolic estimates -- see for instance \cite[Theorem 23.2.4]{ho3} -- show that for any $ s \in {\mathbb R} $, the operator $ P : X_s \to Y_s $ is invertible. Roughly, the support condition gives $ 0 $ initial values at $ x_1 = 0 $ and hence $ P u = f $ can be uniquely solved for $ x_1 < 0 $.
We can now state the main theorems of this note: \begin{thm} \label{t:2} Let $ \mathscr X_s , \mathscr Y_s $ be defined in \eqref{eq:hypXY}. Then for $ \Im \lambda > - s - \frac12 $ the operator \[ P ( \lambda ) : \mathscr X_s \to \mathscr Y_s , \] has the Fredholm property, that is \[ \dim \{ u \in \mathscr X_s : P ( \lambda ) u = 0 \} < \infty , \ \ \dim \mathscr Y_s / P ( \lambda ) \mathscr X_s < \infty , \] and $ P ( \lambda ) \mathscr X_s $ is closed. \end{thm}
The next theorem provides invertibility of $ P ( \lambda ) $ for $ \Im \lambda > 0 $ and that shows the meromorphy of $ P ( \lambda )^{-1} $ -- see \cite[Theorem C.4]{res}. We will use that in Proposition \ref{p:hintz} to show the well known fact that in addition to Theorem \ref{t:1} $ R ( \frac n 2 - i \lambda ) $ is meromorphic on $ L^2 ( M , d \! \vol_g ) $ for $ \Im \lambda > 0 $.
\begin{thm} \label{t:3} For $ \Im \lambda > 0 $, $ \lambda^2 + (\frac n 2 )^2 \notin \Spec ( - \Delta_g ) $ and $ s > -\Im \lambda - \frac12 $, \[ P ( \lambda ) : \mathscr X_s \to \mathscr Y_s \] is invertible. Hence, for $ s \in {\mathbb R} $ and $ \Im \lambda > - s - \frac12 $, $ \lambda \mapsto P ( \lambda )^{-1} : \mathscr Y_s \to \mathscr X_s , $ is a meromorphic family of operators with poles of finite rank. \end{thm}
For interesting applications it is crucial to consider the semiclassical case, that is, uniform analysis as $ \Re \lambda \to \infty $ -- see \cite[Chapter 5]{res} --
but to indicate the basic mechanism behind the meromorphic continuation we only present the Fredholm property and invertibility in the upper half-plane.
\section{Preliminaries} \label{Fredamp}
Here we review the notation and basic facts needed in the proofs of Theorems \ref{t:2} and \ref{t:3}.
\subsection{Pseudodifferential operators} \label{pseudo}
We use the notation of \cite[\S 18.1]{ho3} and for $ X $, an open $ {{\mathcal C}^\infty} $-manifold we denote by $ \Psi^m ( X ) $ the space of {\em properly supported} pseudodifferential operators of order $m $. (The operator $ A : {{\mathcal C}^\infty_{\rm{c}}} ( X ) \to \mathcal D' ( X) $ is properly supported if the projections from support of the Schwartz kernel of $ A $ in $ X \times X $ to each factor are proper maps, that is inverse images of compact sets are compact. The support of the Schwartz kernel of any differential operator is contained in the diagonal in $ X \times X $ and clearly has that property.)
For $ A \in \Psi^m ( X) $ we denote by $ \sigma ( A ) \in S^{m} ( T^*X \setminus 0 ) / S^{m-1} ( T^*X \setminus 0 ) $ the symbol of $ A $, sometimes writing $ \sigma ( A ) = a \in S^m ( T^* X \setminus 0 ) $ with an understanding that $ a $ is a {\em representative} from the equivalence class in the quotient.
We will use the following basic properties of the symbol map: if $ A \in \Psi^m ( X ) $ and $ B \in \Psi^k ( X ) $ then \begin{gather*} \sigma ( A B ) = \sigma ( A )\sigma ( B) \in S^{m+k}/ S^{m+k-1} , \\ \sigma ( [ A, B ] ) = H_{ \sigma ( A ) } \sigma ( B ) \in S^{m+k -1} / S^{m+k-2} , \end{gather*} where for $ a \in S^m $, $ H_a $ is the {\em Hamiton vector field} of $ a $.
For any operator $ P \in \Psi^m ( X) $ we can define $ \WF ( P ) \subset T^* X \setminus 0 $ (the smallest subset outside of which $ P $ has order $ - \infty $ -- see \cite[(18.1.34)]{ho3}). We also define $ \Char ( P) $, the smallest closed conic set outside of which $ P $ is {\em elliptic} -- see \cite[Definition 18.1.25]{ho3}. A typical application of the symbolic calculus and of this notation is the following statement \cite[Theorem 18.1.24$'$]{ho3}: if $ P \in \Psi^m ( X ) $ and $ V $ is an open conic set such that $ V \cap \Char ( P) = \emptyset $ then there exists $ Q \in \Psi^{-m} ( X ) $ such that \begin{equation} \label{eq:WFPQ} \WF ( I - P Q ) \cap V = \WF ( I - Q P ) \cap V = \emptyset . \end{equation} This means that $ Q $ is a {\em microlocal} inverse of $ P $ in $ V $.
We also recall that the operators in $ A \in \Psi^m ( X ) $ have mapping properties \[ A : H^s_{\rm{loc} } ( X ) \to H^{s-m}_{\rm{loc}} ( X ) , \ \
A : H^s_{\rm{comp} } ( X ) \to H^{s-m}_{\rm{comp}} ( X ) , \ \
s \in {\mathbb R} .\] Combined with \eqref{eq:WFPQ} we obtain the following {\em elliptic} estimate: if $ A, B \in \Psi^0 ( X ) $ have {\em compactly supported} Schwartz kernels,
$ P \in \Psi^m ( X )$ and \[ \WF ( A ) \cap ( \Char ( B ) \cup \Char ( P ) ) = \emptyset , \] then for any $N $ there exists $ C $ such that \begin{equation} \label{eq:elle4P}
\| A u \|_{ H^{s+m} } \leq C \| B P u \|_{H^s} + C \| u \|_{ H^{-N}}. \end{equation}
\subsection{Hyperbolic estimates} \label{hypest}
If $ X $ is a smooth compact manifold with boundary we follow \cite[\S B.2]{ho3} and define Sobolev spaces of extendible distributions, $ \bar H^s ( X^\circ ) $ and of supported distributions $ \dot H^s ( X ) $. Here $ X = X^\circ \sqcup \partial X $ and $ X^\circ $ is the interior of $ X $. These are modeled on the case of $ X = \overline {\mathbb R}^{n}_{+} $, $ {\mathbb R}_+^n := \{ x \in {\mathbb R}^n : x_1 > 0 \}$ in which case \begin{gather*}
\bar H^s( {\mathbb R}_+^n ) = \{ u : \exists \, U \in H^s ( {\mathbb R}^n ) , \ u= U|_{ x_1 > 0 }\} , \\ \dot H^s ( \overline {\mathbb R}^n_+ ) := \{ u \in H^s ( {\mathbb R}^n ) : \supp u \subset \overline {\mathbb R}^n_+ \}. \end{gather*}
The key fact is that the $ L^2 $ pairing (defined using a smooth density on $ X $) \[ {\dot{\mathcal C}^\infty} ( X ) \times {\bar{\mathcal C}^\infty} ( X^\circ ) \ni ( u , v ) \mapsto \int_X u ( x ) \bar v ( x ) dx , \] extends by density to $ ( u , v ) \in \dot H^{-s} ( X ) \times \bar H^{s} ( X^\circ ) $ and provides the identification of dual spaces, \begin{equation} \label{eq:duality} ( \bar H^s ( X^\circ ) )^* \simeq \dot H^{-s} ( X ) , \ \ s \in {\mathbb R} . \end{equation}
Suppose that $ P = D_t^2 + P_1 ( t, x, D_x ) D_t + P_0 ( t, x , D_x ) $, $ x \in N $, where $ N $ is a compact manifold and $ P_j \in {{\mathcal C}^\infty} ( {\mathbb R}_t ; \Psi^{2-j} ( N ) ) $ is strictly hyperbolic with respect to the level surfaces $ t = \rm{const} $ -- see \cite[\S 23.2]{ho3}. For any $ T > 0 $ and $ s \in {\mathbb R} $, we define \[ \widetilde H^s ( [0 , T ) \times N) =
\left\{ u : u = U|_{ [ 0 , T ) \times N } , \ \ U \in
H^s ( {\mathbb R} \times N ), \ \supp U \subset [ 0 , \infty ) \times N \right\}, \] with the norm defined as the infimum of the $ H^s $ norms over all such $ U \in H^s $ with $ u = U|_{ [ 0 , T) \times N } $. (These spaces combine the $ \dot H^s $ condition at $ t = 0 $ with the $ \bar H^s $ condition at $ t = T $.)
Then \begin{equation} \label{eq:hypeP} \forall \, f \in \widetilde H^s ( [ 0 , T ) \times N ) \, \ \exists \, ! \, u \in \widetilde H^{s+1} ( [0 , T ) \times N ) , \ \ P u = f , \end{equation} and \begin{equation} \label{eq:hypePest}
\| u \|_{ \widetilde H^{s+1} ( [ 0 , T ) \times N ) }
\leq C \| f \|_{\widetilde H^s ( [ 0 , T ) \times N ) }, \end{equation} see \cite[Theorem 23.2.4]{ho3}.
If we define \[ Y_s := \widetilde H^s ( [ 0 , T ) \times N ) , \ \ X_s := \{ u \in Y_{s+1} : P u \in Y_s \} \] then $ P : X_s \to Y_s $ is {\em invertible}. In our application we will need the following estimate which follows from the invertibility of $ P $: if $ u \in \bar H^s ( ( 0 , T ) \times N ) $ then for any $ \delta > 0 $, \begin{equation} \label{eq:hypest1}
\| u \|_{ \bar H^{s+1} ( ( 0 , T )\times N ) } \leq C
\| P u \|_{ \bar H^s ( ( 0 , T ) \times N ) } + C \| u \|_{ \bar H^{s+1} ( ( 0 , \delta ) \times N ) } . \end{equation}
The operator $ P ( \lambda ) $ defined in \eqref{eq:Plag} is of the form $ x_1 ( D_{x_1}^2 - P_1 ( x ) D_{x_1} + P_0 ( x , D_{x'} ) ) $ where $ P_1 \in {{\mathcal C}^\infty} $ and $ P_0 $ is elliptic for $ -1 \leq x_1 < - \epsilon < 0 $, for any fixed $ \epsilon$. That means that for $ t = 1 + x_1 $ and $ T = 1 - \epsilon $ or $ t = - \epsilon - x_1 $, $ T = 1 - \epsilon $, the operator (up to the non-zero smooth factor $ x_1 $) is of the form to which estimates \eqref{eq:hypePest} and \eqref{eq:hypest1} apply.
We will also need an estimate valid all the way to $ x_1 = 0 $:
\begin{lem} \label{l:ahe} Suppose that $ u \in {\dot{\mathcal C}^\infty} ( X \cap \{ x_1 \leq 0 \} ) $ and $ P ( \lambda ) u = 0 $. Then $ u \equiv 0 $. \end{lem} As pointed out by Andr\'as Vasy this follows from general properties of the de Sitter wave equation \cite[Proposition 5.3]{vasy3} but we provide a simple direct proof.
\begin{proof}
We note that if $ u|_{ x_1 \geq - \epsilon } = 0 $ for some $ \epsilon > 0 $ then $ u \equiv 0 $ by \eqref{eq:hypePest}. That follows from energy estimates. We want to make that argument quantitative. We will work in $ [ -1, -\epsilon ] \times \partial M $ and define $ d : {{\mathcal C}^\infty} ( \partial M ) \to {{\mathcal C}^\infty} ( \partial M ; T^* \partial M ) $ to be the differential. We denote by $ d^* $ its Hodge adjoint with respect to the ($ x_1 $-dependent) metrics $ h $, $d^*_h : {{\mathcal C}^\infty} ( \partial M ; T^* \partial M ) \to {{\mathcal C}^\infty} ( \partial M ) $. Then \[ P ( \lambda ) = 4 x_1 D_{x_1}^2 + d^*_h d - 4 ( \lambda + i ) D_{x_1} - i \gamma ( x ) ( 2x_1 D_{x_1} - \lambda - i {\textstyle{\frac{n-1}2}} ) . \] Since for $ f \in {{\mathcal C}^\infty} ( \partial M ) $ and any fixed $ x_1$, $ h = h ( x_1 ) $, \[ \begin{split} \int_{\partial M } d_{h}^* ( v d u ) \bar f \, d \! \vol_h & = \int_{\partial M } \langle v d u , d f \rangle_h \, d \! \vol_h = \int_{\partial M } \left( \langle du , d ( \bar v f ) \rangle_h - \langle d u , d \bar v \rangle_h \bar f\, \right) d\! \vol_h \\ & = \int_{\partial M } \left( v d^*_h d u - \langle d u , d \bar v \rangle \right) \bar f\, d \!\vol_h , \end{split} \] we conclude that
$ d_h^* ( v du ) = v d^*_h d u - \langle d u , d \bar v \rangle_h $. From this we derive the following form of the energy identity valid for $ x_1 < 0 $: \[ \begin{split} &
\partial_{x_1} \left( |x_1|^{-N} ( -x_1 |\partial_{x_1} u |^2 +
| d u |_h^2 + | u |^2 ) \right) +
|x_1|^{-N} d_h^* \left( \Re ( \bar u_{x_1} d u ) \right) = \\
& \ \ 2 \Re |x_1|^{-N} \bar u_{x_1} P ( \lambda ) u
- N |x_1|^{-N-1} \left( - x_{1} | u_{x_1}|^2 + | du |_h^2 + |u|^2 \right) +
|x_1|^{-N} R ( \lambda , u ) , \end{split} \] where $ R ( \lambda, u ) $ is a quadratic form in $ u $ and $ du $, independent of $ N $. We now fix $ \delta > 0 $ and apply Stokes theorem in $ [ - \delta, -\epsilon ] \times \partial M $. For $ N $ large enough (depending on $ \lambda $) that gives \[ \begin{split}
\int_{\partial M } ( | u_{x_1}|^2 + | d u |_h^2 )|_{x_1 = -\delta } \, d \vol_h & \leq C \epsilon^{-N} \int_{\partial M }
( | u_{x_1}|^2 + | d u |_h^2 )|_{x_1 = -\epsilon } \, d \vol_h \\ & \leq C_K \epsilon^{-N+K} , \end{split}\] for any $ K $, as $ \epsilon \to 0 + $ (since $ u $ vanishes to infinite order at $ x_1 = 0 $). Choosing $K > N $ we see that the left hand side vanishes for every small $ \delta > 0 $. Hence $ \partial_{x_1} u $ and $ d u $ vanish near $ x_1 = 0 $, so $ u $ is constant there; since $ u $ vanishes to infinite order at $ x_1 = 0 $ the constant is $ 0 $, and then $ u \equiv 0 $ by the remark at the beginning of the proof. \end{proof}
\section{Propagation of singularities at radial points} \label{radest}
To obtain meromorphic continuation of the resolvent \eqref{eq:Res4Deltag} we need propagation estimates at {\em radial} points. These estimates were developed by Melrose \cite{mel} in the context of scattering theory on asymptotically Euclidean spaces and are crucial in the Vasy approach \cite{vasy1}. A semiclassical version valid for very general sinks and sources was given in Dyatlov--Zworski \cite{dz} (see also \cite[Appendix E]{res}).
To explain these estimates we first review the now standard results on propagation of singularities due to H\"ormander \cite{ho-pc}. Thus let $ P \in \Psi^m ( X ) $, with a real valued symbol $p := \sigma ( P ) $. Suppose that in an open conic subset $ U \subset T^*X \setminus 0 $, $ \pi ( U ) \Subset X $ ($ \pi : T^*X \to X $), \begin{equation} \label{eq:condp} p ( x , \xi ) = 0 , \ ( x, \xi ) \in U \ \Longrightarrow \ \text{ $ H_p $ and $ \xi \partial_\xi $ are linearly independent at $ ( x , \xi ) $.} \end{equation} Here $ H_p $ is the Hamilton vector field of $ p $ and $ \xi \partial_\xi $ is the {\em radial} vector field. The latter is invariantly defined as the generator of the $ {\mathbb R}_+ $ action on $ T^* X \setminus 0 $ (multiplication of one-forms by positive scalars).
The basic propagation estimate is given as follows: suppose that $ A, B , B_1 \in \Psi^0 ( X ) $ and $ \WF ( A ) \cup \WF ( B ) \subset U $, $ \WF ( I - B_1 ) \cap U = \emptyset$.
We also assume that $ \WF ( A ) $ is {\em forward controlled} by $ \complement \Char ( B ) $ in the following sense: for any $ ( x, \xi ) \in \WF(A) $ there exists $ T > 0$ such that \begin{equation} \label{eq:control4P} \exp ( - T H_p ) ( x, \xi ) \notin \Char ( B ), \ \ \exp ( [-T , 0 ] H_p ) ( x, \xi) \subset U . \end{equation}
The forward control can be replaced by backward control, that is we can demand existence of $ T < 0 $. That is allowed since the symbol is real.
The crucial estimate is then given by \begin{equation}
\label{eq:DH} \| A u \|_{ H^{s+m-1} } \leq C \| B_1 P u \|_{H^{s}}
+ C \| B u \|_{ H^{s+m-1} } + C \| u \|_{ H^{-N} } , \end{equation} where $ N $ is arbitrary and $ C $ is a constant depending on $ N $. A direct proof can be found in \cite{ho-pc}. The estimate is valid with $ u \in \mathcal D' ( X ) $ for which the right hand side is finite -- see \cite[Exercise E.28]{res}.
We will consider a situation in which the condition \eqref{eq:condp} is violated. We will work on the manifold $ X $ given by \eqref{eq:coordX}, near $ x_1 = 0 $. In the notation of \eqref{eq:condp} we assume that, near $ x_1 = 0 $, \begin{gather} \label{eq:modred} \begin{gathered} P \in \Diff^2 ( X ) , \ \ p = \sigma ( P ) = x_1 \xi_1^2 + q ( x, \xi' ) , \ \ q ( x_1 , x' , \xi') :=
| \xi' |_{ h ( x_1, x') }^2, \end{gathered} \end{gather} $ ( x', \xi' ) \in T^* \partial M $, $( x, \xi ) \in T^* X \setminus 0 $. The Hamilton vector field is given by \begin{equation} \label{eq:Hpmod} H_p = \xi_1 ( 2 x_1 \partial_{x_1} - \xi_1 \partial_{\xi_1} ) + \partial_{x_1} q ( x, \xi' ) \partial_{ \xi_1 } + H_{q( x_1) } , \end{equation} where $ H_{ q ( x_1 ) } $ is the Hamilton vector field of $ ( x' , \xi' ) \mapsto q ( x_1, x' ,\xi') $ on $ T^* \partial M $.
We see that the condition \eqref{eq:condp} is violated at \begin{gather} \label{eq:radial1} \begin{gathered} \Gamma = \{ ( 0 , x' , \xi_1 , 0 ) : x' \in \partial M , \xi_1 \in {\mathbb R} \setminus 0 \} \subset T^* X \setminus 0 , \\ \Gamma = N^*Y \setminus 0 , \ \ Y := \{ x_1 = 0 \} . \end{gathered} \end{gather}
In fact, $ H_p |_{ N^* Y } = -\xi_1 ( \xi \partial_\xi |_{ N^* Y } ) $. Nevertheless Propositions \ref{p:rad1} and \ref{p:rad2} below provide propagation estimates valid in spaces with restricted regularity.
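To verify the identity $ H_p |_{ N^* Y } = - \xi_1 ( \xi \partial_\xi |_{ N^* Y } ) $ stated above, note that on $ \Gamma $ we have $ x_1 = 0 $ and $ \xi' = 0 $; since $ q $ vanishes quadratically in $ \xi' $, both $ \partial_{x_1} q $ and $ H_{ q ( x_1 ) } $ vanish at $ \xi' = 0 $, so \eqref{eq:Hpmod} gives $ H_p |_{ \Gamma } = - \xi_1^2 \partial_{\xi_1} $, while the radial vector field at points of $ \Gamma $ is $ \xi_1 \partial_{\xi_1} $.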
We note that $ \Gamma = p^{-1} ( 0 ) \cap \pi^{-1} ( Y ) $ and that near $ \pi^{-1} ( Y ) $, $ \Sigma := p^{-1} ( 0 ) $ has two {\em disjoint} connected components: \begin{gather} \label{eq:Sigmapm} \begin{gathered} \Sigma = \Sigma_+ \sqcup \Sigma_- , \ \ \ \ \Gamma_\pm := \Sigma_\pm \cap \Gamma , \\ \Sigma_\pm \cap
\{ |x_1| < 1 \} := \{ (- q ( x, \xi')/\rho^2 , x', \rho , \xi' ) : \pm \rho > 0 , \
|x_1 | < 1 \} . \end{gathered} \end{gather} The set $ \Gamma_+ $ is a source and $ \Gamma_- $ is a sink for the flow projected to the sphere at infinity -- see Fig.~\ref{f:radial}.
\begin{figure}
\caption{An illustration of the behaviour of the Hamilton flows for radial sources and for radial sinks and of the localization of operators in the estimates \eqref{eq:sourcest} and \eqref{eq:sinkest} respectively.
The horizontal line on the top denotes the boundary, $\partial\overline T^*X$, of the {\em fiber--compactified} cotangent bundle $ \overline T^*X$. The shaded half-discs then correspond to conic neighbourhoods in $ T^* X $. In the simplest example $ X = ( - 1 , 1 ) \times {\mathbb R}/{\mathbb Z} $ and $ p = x_1 \xi_1^2 + \xi_2^2 $, we have $ H_p = \xi_1 ( 2 x_1 \partial_{x_1} - \xi_1 \partial_{\xi_1} ) + 2 \xi_2 \partial_{x_2} $, $ x_2 \in {\mathbb R}/{\mathbb Z} $. Near $ \partial \overline \Gamma_\pm $ an explicit (projective)
compactification is given by $ r = 1/|\xi_1|$ (so that $ \partial \overline T^* X = \{ r = 0 \} $), $ \theta =
\xi_2/|\xi_1| $, with $ x $ (the base variable) unchanged. In these variables, near $ \partial \overline \Gamma_\pm $ (the boundaries of the compactifications of $ \Gamma_\pm$) we check that $ r \partial_r = - \xi_1 \partial_{\xi_1 } - \xi_2 \partial_{\xi_2}$ and $ \theta \partial_\theta = \xi' \partial_{\xi'} $. Hence near $ \Gamma_\pm $, $ H_p = \pm r ( \theta \partial_\theta + r \partial_r + 2 x_1 \partial_{x_1 } + 2 \theta \partial_{x_2} )$ and (after rescaling) we see a source and a sink.
\label{f:radial}
\end{figure}
We now write $ P $ as follows: \begin{equation} \label{eq:PQR} P = P_0 + i Q , \ \ P_0 = P_0^* , \ \ Q = Q^* , \end{equation} where the formal $ L^2$-adjoints are taken with respect to the density $ dx_1 d\vol_h $.
We can now formulate the following propagation result at the source. We should stress that changing $ P $ to $ - P $ changes a source into a sink and the relevant thing is the sign of $ \sigma ( Q ) \in S^1/S^0 $ which then changes -- see \eqref{eq:s01} below.
We first state a {\em radial source estimate}: \begin{prop} \label{p:rad1} In the notation of \eqref{eq:Sigmapm} and \eqref{eq:PQR} put \begin{equation}
\label{eq:s01} s_+= \sup_{\Gamma_+} |\xi_1|^{-1} \sigma ( Q ) - \textstyle{ \frac12} , \end{equation} and take $ s > s_+ $. For any $ B_1 \in \Psi^0 ( X ) $ satisfying $ \WF ( I - B_1 ) \cap \Gamma_+ = \emptyset $ there exists $ A \in \Psi^0 ( X ) $ with $ \Char ( A ) \cap \Gamma_+ = \emptyset $ such that for $ u \in {{\mathcal C}^\infty_{\rm{c}}} ( X ) $ \begin{equation}
\label{eq:sourcest} \| A u \|_{ H^{s+1} } \leq C \| B_1 P u \|_{ H^s } +
C\| u \|_{H^{-N}} , \end{equation} for any $ N $.
\end{prop}
\noindent {\bf Remarks.} 1. The supremum in \eqref{eq:s01} should be understood as being taken at $ \xi$-infinity, that is, as
$ s_+ = \sup_{ x' \in \partial M } \lim_{\xi_1 \to \infty } |\xi_1|^{-1} \sigma ( Q ) ( 0, x', \xi_1 , 0 ) - \frac12 $.
\noindent 2. An approximation argument -- see \cite[Lemma E.42]{res} -- shows that \eqref{eq:sourcest} is valid for $ u \in H^{-N} $, $ \supp u \cap \partial X = \emptyset $, such that $ B_1 u \in H^{s+1} $, $ B_1 P u \in H^{s} $.
\noindent 3. Using a regularization argument -- see for instance \cite[\S 3.5]{ho-pc} or \cite[Exercises E.28, E.33]{res} -- \eqref{eq:sourcest} holds for all $ u \in \mathcal D' ( X )$, $ \supp u \subset K $ where $ K $ is a fixed compact subset of $ X^\circ $, such that $ B_1 u \in H^{r} $ for some $ r > s_+ +1 $. In particular, when combined with the hyperbolic estimate \eqref{eq:hypest1}, that gives \begin{equation} \label{eq:PtoC} P u \in {\bar{\mathcal C}^\infty} ( X) , \ \ u \in \bar H^r ( X ) , \ \ r > s_+ + 1 \ \Longrightarrow \ u \in {\bar{\mathcal C}^\infty} ( X ) . \end{equation} In fact, the smoothness near $ x_1 = 0 $ is obtained from the estimate \eqref{eq:sourcest} and elliptic estimates applied to $ \chi u $, $ \chi \in {{\mathcal C}^\infty_{\rm{c}}} ( X ) $ and then the hyperbolic estimates show smoothness for $ x_1 < - \epsilon $.
\noindent 4. To see that the threshold \eqref{eq:s01} is essentially optimal for \eqref{eq:PtoC} we consider $ X = ( - 1, 1 ) \times {\mathbb R}/{\mathbb Z} $ and $ P = x_1 D_{x_1}^2 - i ( \rho + 1 ) D_{x_1} - D_{x_2}^2 $, $ x_2 \in {\mathbb R}/{\mathbb Z} $, $ \rho \in {\mathbb R} $. In this case $ s_+ = -\rho - \frac12 $. Put $ u ( x ) := \chi ( x_1 ) (x_1)_{+}^{-\rho} $, $ \rho \notin - {\mathbb N} $, and note that \[ ( x_1 D_{x_1}^2 - i ( \rho + 1 ) D_{x_1 }) (x_1)_+^{-\rho} = 0 . \] Hence $ P u \in {{\mathcal C}^\infty_{\rm{c}}} ( X ) $ and $ u \in H^{-\rho + \frac12 - } \setminus H^{ - \rho + \frac12} $.
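The identity displayed above can be checked by a direct computation: since $ D_{x_1} = \frac 1 i \partial_{x_1} $, for $ x_1 > 0 $ we have \[ x_1 D_{x_1}^2 \, x_1^{-\rho} = - \rho ( \rho + 1 ) x_1^{ -\rho - 1 } , \ \ \ \ - i ( \rho + 1 ) D_{x_1} \, x_1^{-\rho} = \rho ( \rho + 1 ) x_1^{ -\rho - 1 } , \] and the two terms cancel.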
The {\em radial sink estimate} requires a control condition similar to that in \eqref{eq:control4P}. There is also a change in the regularity condition.
\begin{prop} \label{p:rad2} In the notation of \eqref{eq:Sigmapm} and \eqref{eq:PQR} put \begin{equation}
\label{eq:s02} s_- = \sup_{ \Gamma_- } |\xi_1|^{-1} \sigma ( Q ) - \textstyle {\frac12} , \end{equation} and take $ s > s_- $. For any $ B_1 \in \Psi^0 ( X ) $ satisfying $ \WF ( I - B_1 ) \cap \Gamma_- = \emptyset $ there exist $ A , B \in \Psi^0 ( X ) $ such that \[ \Char ( A ) \cap \Gamma_- = \emptyset , \ \ \WF ( B ) \cap \Gamma_- = \emptyset \] and for $ u \in {{\mathcal C}^\infty_{\rm{c}}} ( X ) $, \begin{equation} \label{eq:sinkest}
\| A u \|_{ H^{-s} } \leq C \| B_1 P u \|_{ H^{-s-1} } +
C \| B u \|_{H^{-s} } + C \| u \|_{H^{-N}} , \end{equation} for any $ N $. \end{prop}
\noindent {\bf Remark.} A regularization method -- see \cite[Exercise 34]{res} -- shows that \eqref{eq:sinkest} is valid for $ u \in \mathcal D' ( X^\circ )$, $ \supp u \subset K $ where $ K \Subset X^\circ $ is a fixed set, and for which the right hand side of \eqref{eq:sinkest} is finite.
\begin{proof}[Proof of Proposition \ref{p:rad1}] The basic idea is to produce an operator $ F_s \in \Psi^{s+\frac12} ( X) $, elliptic on $ \WF ( A ) $ such that for $ s > s_+ $ and $ u \in {{\mathcal C}^\infty_{\rm{c}}} ( X ) $, we have \begin{equation}
\label{eq:Fsu} \| F_s u \|^2_{H^\frac12} \leq C \| B_1 P u\|_{H^{s} } \| F_s u \|_{ H^{\frac12}} + C \| B_1 u \|_{ H^{s+ \frac12} }^2 + C \| u\|_{ H^{-N} }^2 . \end{equation} This is achieved by writing, in the notation of \eqref{eq:PQR}, \begin{equation} \label{eq:negcom} \begin{split} & \Im \langle P u , F_s^* F_s u \rangle = \langle {\textstyle \frac i 2 } [ P_0 , F_s^* F_s ] u , u \rangle +
\Re \langle Q u , F_s^* F_s u \rangle , \end{split} \end{equation} and using the first term on the right hand side to control the left hand side of \eqref{eq:Fsu}. We note here that since $ \WF ( F_s ) \cap \WF ( I - B_1 ) = \emptyset $, in any expression involving $ F_s $ we can replace $ u $ and $ P u $ by $ B_1 u $ and $ B_1 P u $ respectively, introducing errors
$ \mathcal O ( \| u \|_{ H^{-N} } ) $ for any $ N $. Hence from now on we will consider estimates with $ u $ only.
To construct a suitable $ F_s $ we take $ \psi_1 \in {{\mathcal C}^\infty_{\rm{c}}} ( ( - 2\delta, 2 \delta ) ; [ 0 , 1] ) $,
$ \psi_1 ( t ) = 1$, for $ |t| < \delta $, $ t \psi_1'( t) \leq 0 $, and $ \psi_2 \in {{\mathcal C}^\infty} ( {\mathbb R} ) $, $ \psi_2 ( t ) = 0 $ for $ t \leq 1 $, $ \psi_2 ( t ) = 1$, $ t \geq 2 $, and propose \begin{gather*} F_s := \psi_1 ( x_1 ) \psi_1 ( -\Delta_h /{D_{x_1}^2 } ) \psi_2 ( D_{x_1} ) D_{x_1}^{s+\frac12} \in \Psi^{s+\frac12} ( X ) , \\ \sigma ( F_s ) =: f_s ( x, \xi ) = \psi_1 ( x_1 ) \psi_1 ( q(x, \xi' ) / \xi_1^2 ) \psi_2 ( \xi_1 ) \xi_1^{s+ \frac12} . \end{gather*} We note that because of the cut-off $ \psi_2 $, $ D_{x_1}^{s+\frac12} $ and $ - \Delta_h/D_{x_1}^2 $ are well defined.
For $ |\xi| $ large enough (which implies that $ \xi_1 > |\xi|/C $ on the support of $ f_s $ if $ \delta $ is small enough) we use \eqref{eq:Hpmod} to obtain \begin{equation} \label{eq:fHfs} \begin{split} H_p f_s ( x, \xi ) & = \xi_1^{s +\frac32} \left( 2 x_1 \psi_1'( x_1 ) \psi_1 ( q ( x, \xi') /\xi_1^2 ) + 2 \psi_1 ( x_1 ) ( q ( x, \xi') / \xi_1^2 ) \psi_1 '(q (x,\xi') /\xi_1^2 ) \right. \\ & \ \ \ \ \ \ \left. - (s + {\textstyle \frac12}) \psi_1 ( x_1 ) \psi_1 ( q ( x, \xi') /\xi_1^2 ) \right) \psi_2 ( \xi_1 )
\leq - (s + {\textstyle \frac12} ) \xi_1 f_s . \end{split} \end{equation} In particular, \begin{equation} \label{eq:fsign} f_s H_p f_s + (s + {\textstyle \frac12} ) \xi_1 f_s^2 \leq 0 , \ \ \ \
| \xi | > C_0 . \end{equation}
The inequality \eqref{eq:fsign} is important since $ \sigma ( \frac i 2 [ P_0 , F_s^* F_s ] ) = f_s H_p f_s $. Hence returning to \eqref{eq:negcom}, using \eqref{eq:fsign}, the sharp G{\aa}rding inequality \cite[Theorem 18.1.14]{ho3} and the fact that $ F_s^* [ Q , F_s ] \in \Psi^{ 2 s + 1 } ( X ) $, we see that \[ \begin{split} \Im \langle P u , F_s^* F_s u \rangle & = \langle {\textstyle \frac i 2 } [ P_0 , F_s^* F_s ] u , u \rangle + \langle Q F_s u , F_s u \rangle + \langle F_s^* [ Q , F_s ] u , u \rangle \\ & \leq \langle {\textstyle \frac i 2 } [ P_0 , F_s^* F_s ] u , u \rangle +
\langle Q F_s u , F_s u \rangle + C \| u \|_{ H^{s + \frac12} }^2 \\ & \leq \langle ( - (s + {\textstyle \frac12} ) D_{x_1} + Q ) F_s u , F_s u \rangle
+ C \| u \|_{ H^{s+\frac12}}^2 . \end{split} \] Since $ D_{x_1 } $ is elliptic (and positive) on $ \WF ( F_s ) $ we can use \eqref{eq:WFPQ} to see that if $ s > s_+ $ (where $ s_+ $ is given in \eqref{eq:s01}) then
\[ \begin{split} \| F_s u \|_{ H^{\frac12} }^2 & \leq
- \Im \langle P u, F_s^* F_s u \rangle + C \| u \|_{ H^{s+\frac12} }^2
\leq \| P u \|_{H^s} \| F_s^* F_s u \|_{H^{-s}} + C \| u \|_{ H^{s+\frac12 }}^2 \\
& \leq 2 \| P u \|_{H^s }^2 + \textstyle {\frac12} \| F_s u \|_{ H^{\frac12}}^2 +
C \| u \|_{ H^{s + \frac{1}{2} } } ^2 . \end{split} \] Recalling the remark made after \eqref{eq:negcom} this gives \eqref{eq:Fsu}. Choosing $ A $ so that $ F_s \in \Psi^{ s + \frac12} $ is elliptic on $ \WF ( A ) $ we obtain \begin{equation}
\label{eq:sourcest1} \| A u \|_{ H^{s+1} } \leq C \| B_1 P u \|_{ H^s } + C \| B_1 u \|_{ H^{s+\frac12} }
+ C\| u \|_{H^{-N}} .\end{equation} It remains to eliminate the second term on the right hand side. We note that $ \WF ( B_1 ) \cap \Char ( A ) $ is forward controlled by $ \complement \Char ( A ) $ in the sense of \eqref{eq:control4P}. Since \eqref{eq:condp} is satisfied on $ \WF ( B_1 ) \cap \Char ( A ) $ we apply \eqref{eq:DH} to obtain \begin{equation} \label{eq:sourcest2} \begin{split}
\| B_1 u\|_{ H^{s +\frac12 } } & \leq C
\| B_2 P u \|_{ H^{s- \frac12 } } +
C \| A u \|_{ H^{ s+ \frac12 } } + C \| u \|_{H^{-N}} \\
& \leq C \| B_2 P u \|_{H^{s} } +
\textstyle \frac12 \| A u \|_{ H^{s+1 }} + C ' \| u \|_{H^{-N}} ,
\ \ s + \frac12 > - N , \end{split} \end{equation} where $ B_2 $ has the same properties as $ B_1 $ but a larger microsupport. (Here we used an interpolation estimate for Sobolev spaces based on $ t^{s + \frac12} \leq \gamma t^{s+1} + \gamma^{-2N-2s -1} t^{-N} $, $ t \geq 0 $ -- that follows from the elementary inequality $ \tau^{ s + \frac12} \leq \tau^{s+1 } + \tau^{-N} $, $ \tau \geq 0 $, by the rescaling $ \tau = \gamma^2 t $.)
Combining \eqref{eq:sourcest1} and \eqref{eq:sourcest2} gives \eqref{eq:sourcest} with $ B_1 $ replaced by $ B_2 $. Relabeling the operators concludes the proof. \end{proof}
\begin{proof}[Proof of Proposition \ref{p:rad2}] The proof of \eqref{eq:sinkest} is similar to the proof of Proposition \ref{p:rad1}. We now use $ G_s \in \Psi^{ - s -\frac12} ( X ) $ given by the same formula: \begin{gather*} G_s := \psi_1 ( x_1 ) \psi_1 ( -\Delta_h /{D_{x_1}^2 } ) \psi_2 ( D_{x_1} ) D_{x_1}^{-s-\frac12} \in \Psi^{-s-\frac12} ( X ) , \\ \sigma ( G_s ) =: g_s ( x, \xi ) = \psi_1 ( x_1 ) \psi_1 ( q(x, \xi' ) / \xi_1^2 ) \psi_2 ( \xi_1 ) \xi_1^{-s - \frac12} .\end{gather*} However now, \[ \begin{split} g_s H_p g_s ( x , \xi ) & = \xi_1^{-s + \frac12 } g_s ( x, \xi ) \left( 2 x_1 \psi_1'( x_1 ) \psi_1 ( q ( x, \xi') /\xi_1^2 ) + 2 \psi_1 ( x_1 ) ( q ( x, \xi') / \xi_1^2 ) \psi_1 '(q (x,\xi') /\xi_1^2 ) \right. \\ & \ \ \ \ \ \ \left. - (s + {\textstyle \frac12}) \psi_1 ( x_1 ) \psi_1 ( q ( x, \xi') /\xi_1^2 ) \right) \psi_2 ( \xi_1 ) \\
& \leq - (s + {\textstyle \frac12} ) |\xi_1| g_s^2 + C_0 |\xi_1|^{-2s} b ( x, \xi )^2 , \end{split} \]
where $ b = \sigma ( B ) $ is chosen to control the terms involving
$ t \psi_1' ( t ) $
(which now have the ``wrong" sign compared to \eqref{eq:fHfs}).
The proof now proceeds in the
same way as the proof of \eqref{eq:sourcest} but we have to carry over the
$ \| B u \|_{ H^{-s}} $ terms.
\end{proof}
\section{Proof of Theorem \ref{t:2}} \label{s:t1}
We first show that $ \ker_{ \mathscr X_s } P ( \lambda ) $ is finite dimensional when $ \Im \lambda > - s - \frac12 $. Using standard arguments this follows from the definition \eqref{eq:hypXY} and the estimate \eqref{eq:estker} below. To formulate it suppose that
\[ \chi \in {{\mathcal C}^\infty_{\rm{c}}} ( X ) , \ \ \chi|_{ x_1 < - 2\delta} \equiv 0 , \
\ \chi |_{ x_1 > - \delta } \equiv 1 , \] where $ \delta > 0$ is a fixed (small) constant. Then for $ u \in \mathscr X_s $ and $ s > - \Im \lambda - \frac12 $, \begin{equation}
\label{eq:estker} \| u \|_{ \bar H^{s+1} ( X^\circ ) } \leq
C \| P (\lambda ) u \|_{ \bar H^s ( X^\circ ) } + \| \chi u \|_{ H^{-N} ( X) } . \end{equation}
\begin{proof}[Proof of \eqref{eq:estker}] If $ \chi_+ \in {{\mathcal C}^\infty_{\rm{c}}} $, $ \supp \chi_+ \subset \{ x_1 > 0 \} $ then elliptic estimates show that
\[ \| \chi_+ u \|_{ H^{s+1 } } \leq \| \chi_+ u \|_{ H^{s+2} } \leq
C \| P u \|_{ H^s } + C \| \chi u \|_{ H^{-N} } . \] Near $ x_1 = 0 $ we use the estimates \eqref{eq:sourcest} (valid for $ u \in \mathscr X_s $ -- see Remark 2 after Proposition \ref{p:rad1}) which give, for $ \chi_0 \in {{\mathcal C}^\infty_{\rm{c}}} $,
$ \supp \chi_0 \subset \{ |x_1| < \delta /2 \} $ \begin{equation}
\label{eq:Pchi0} \| \chi_0 u \|_{H^{s +1} ( X )} \leq C \| P ( \lambda ) u \|_{ \bar H^s ( X) } +
C \| \chi u \|_{ H^{-N} ( X ) } . \end{equation}
To prove \eqref{eq:Pchi0} we microlocalize to neighbourhoods of $ \{ \pm \xi_1 > |\xi|/C \}$ and use \eqref{eq:sourcest} for $ P ( \lambda ) $ and $ - P ( \lambda ) $ respectively -- from \eqref{eq:Plag} we see that $ s_+ = - \Im \lambda - \frac12 $ for $ P= P ( \lambda ) $ and $ s_- = - \Im \lambda - \frac12 $ for $P = - P ( \lambda ) $ (a rescaling
by a factor of $ 4 $ is needed by comparing \eqref{eq:Plag} with \eqref{eq:modred}). Elsewhere the operator is elliptic in $ |x_1|< \delta $.
Finally if $ \chi_- $ is supported in $ \{ x_1 < -\delta/2 \} $ then the hyperbolic estimate \eqref{eq:hypest1} shows that
\[ \| \chi_- u \|_{ \bar H^{s+1} ( X ) } \leq
C \| P ( \lambda ) u \|_{ \bar H^s ( X) } + C \| \chi_0 u \|_{ H^{s+1} ( X) }. \] Putting these estimates together gives \eqref{eq:estker}. \end{proof}
To show that the range of $ P $ on $ \mathscr X_s $ is of finite codimension and is closed we need the following
\begin{lem} \label{l:kerP} The cokernel of $ P ( \lambda ) $ in $ \dot H^{-s} ( X ) \simeq \mathscr Y_s ^* $ (see \eqref{eq:duality}) \[ \coker_{ \mathscr X_s } P ( \lambda ) := \{ v \in \dot H^{-s} ( X ) : \forall \, u \in \mathscr X_s, \ \langle P ( \lambda ) u , v \rangle = 0 \} , \] is equal to the kernel of $ P ( \bar \lambda ) $ on $ \dot H^{-s} ( X )$:
$\coker_{ \mathscr X_s } P ( \lambda ) = \ker_{ \dot H^{-s} ( X ) } P ( \bar \lambda)$.
\end{lem} \begin{proof} In view of \eqref{eq:formaladj} we have, for $ u \in {\bar{\mathcal C}^\infty} ( X^\circ ) $ and $ v \in \dot H^{-s} ( X ) $, \[ \langle P ( \lambda ) u , v \rangle = \langle u , P ( \bar \lambda ) v \rangle .\] Since $ {\bar{\mathcal C}^\infty} ( X^\circ ) $ is dense in $ \mathscr X_s $ (see for instance \cite[Lemma E.42]{res}) it follows that $ \langle P ( \lambda ) u , v \rangle = 0 $ for all $ u \in \mathscr X_s $ if and only if $ P ( \bar \lambda ) v = 0 $. \end{proof}
Hence to show that $ \coker_{ \mathscr X_s} P ( \lambda ) $ is finite dimensional it suffices to prove that the kernel of $ P ( \bar \lambda ) $ in $ \dot H^{-s} ( X ) $ is finite dimensional. We claim an estimate from which this follows: \begin{equation}
\label{eq:estcok} u \in \ker_{ \dot H^{-s} ( X ) } P ( \bar \lambda ) \ \Longrightarrow \ \| u \|_{ \dot H^{-s} ( X) } \leq C \| \chi u \|_{ H^{-N} ( X ) } , \ \ s > - \Im \lambda - \textstyle{\frac12} , \end{equation} where $ \chi $ is the same as in \eqref{eq:estker}.
\begin{proof}[Proof of \eqref{eq:estcok}] The hyperbolic estimate \eqref{eq:hypePest} shows that if $ P (\bar \lambda ) u = 0 $ for $ u \in \dot H^{-s} ( X ) $ (with any $ \lambda \in {\mathbb C} $ or $ s \in {\mathbb R}$)
then $ u |_{ x_1 < 0 } \equiv 0 $. We can now apply \eqref{eq:sinkest} with $ P = P ( \bar \lambda ) $ near $ \Gamma_- $ and $ P = - P ( \bar \lambda ) $ near $ \Gamma_+ $. We again see that the threshold condition is the same at both places: we require that $ s > - \Im \lambda - \frac12 $. Since $ u $ vanishes in $ x_1 < 0 $, $ \WF ( B u ) \cap \Char P ( \bar \lambda ) = \emptyset $
and hence (using \eqref{eq:WFPQ}) $ \| B u \|_{ \dot H^{-s} ( X ) } \leq C \| \chi u \|_{ H^{-N}} $. Hence \eqref{eq:sinkest} and elliptic estimates give \eqref{eq:estcok}. \end{proof}
\section{Asymptotic expansions} \label{asym}
To prove Theorem \ref{t:3} we need a regularity result for $ L^2 $ solutions of \begin{equation} \label{eq:L2R} ( - \Delta_g - \lambda^2 - ({\textstyle{ \frac n 2 }})^2) u = f \in {{\mathcal C}^\infty_{\rm{c}}} ( M ) , \ \ \Im \lambda > {\textstyle{\frac n 2 }} . \end{equation} To formulate it we recall the definition of $ X $ given in \eqref{eq:coordX} and of $ X_1 := X \cap \{ x_1 > 0 \}$. We also define $ j: M \to X_1 $ to be the natural identification, given by $ j ( y_1 , y' ) = (y_1^2 , y') $ near the boundary. Then we have
\begin{prop} \label{p:propu} For $ \Im \lambda \gg 1 $ and $ \lambda \notin i {\mathbb N} $, the unique $ L^2$-solution $ u $ to \eqref{eq:L2R} satisfies \begin{equation} \label{eq:propu} u = y_1^{-i \lambda + \frac n 2} j^* U , \ \ U \in {\bar{\mathcal C}^\infty} ( X_1 ) . \end{equation} In other words, near the boundary, $ u ( y ) = y_1^{ - i \lambda + \frac n2} U ( y_1^2, y') $ where $ U $ is smoothly extendible. \end{prop}
\noindent {\bf Remark.} Once Theorem \ref{t:3} is established then the relation between $ P ( \lambda )^{-1} $ and the meromorphically continued resolvent $ R ( \frac n 2 - i \lambda ) $ shows that $ y_1^{ - s } R ( s ) : {\dot{\mathcal C}^\infty} ( M ) \to j^*{\bar{\mathcal C}^\infty} ( X_1 ) $ is meromorphic away from $ s \in {\mathbb N} $ -- see \S \ref{merc}. That means that away from exceptional points \eqref{eq:propu} remains valid for $ u = R ( \frac n 2 - i \lambda ) $.
To give a direct proof of Proposition \ref{p:propu} we need a few lemmas. For that we define Sobolev spaces $ H^k_g ( M , d \vol_g ) $ associated to the Laplacian $ - \Delta_g $: \begin{equation} \label{eq:Hk}
H^k_g ( M ) :=
\{ u : y_1^{|\alpha|} D_{y}^\alpha u \in
L^2 ( M , d \! \vol_g ), \ |\alpha| \leq k \}, \ \ k \in {\mathbb N} . \end{equation} (An invariant formulation can be obtained by taking vector fields vanishing at $ \partial M $ -- see \cite{mm}.) Let us also put \begin{equation} \label{eq:Qofla} Q ( \lambda^2) :=
- \Delta_g - \lambda^2 - ({\textstyle{ \frac n 2 }})^2 . \end{equation}
\begin{lem} \label{l:0} With $ H_g^k ( M ) $ defined by \eqref{eq:Hk} and $ Q( \lambda^2 ) $ by \eqref{eq:Qofla} we have for any $ k \geq 0 $, \begin{equation} \label{eq:0} Q( \lambda^2)^{-1} : H_g^k ( M ) \to H_g^{k+2} ( M ) , \ \ \Im \lambda > \textstyle{\frac n 2 } . \end{equation} \end{lem} \begin{proof} Using the notation from the proof of \eqref{eq:Deltagg} and
Lemma \ref{l:ahe} we write \[ Q ( \lambda^2 ) = ( y_1 D_{y_1} )^2 + y_1^2 d_h^* d + i ( n + y_1^2 \gamma ( y_1^2, y') ) y_1 D_{y_1} - \lambda^2 - ({\textstyle{\frac n 2}})^2 , \] so that for $ u \in {{\mathcal C}^\infty_{\rm{c}}} ( M ) $ supported near $ \partial M $, and with the inner products in $ L^2_g = L^2 ( M , d \vol_g ) $, \[ \Re \langle Q ( \lambda^2 ) u , u \rangle_{L^2_g} =
\int_M ( | y_1 D_{y_1} u |^2 + y_1^2 | d u|_h ^2 )\, d \vol_g + \mathcal O ( \| u \|_{ H^1_g } \| u \|_{ L^2_g } ) . \] Hence,
$ \| u \|_{ H^1_g } \leq C \| Q ( \lambda^2 ) u \|_{ L^2_g } + C \| u \|_{L^2_g } $.
Using this and expanding $ \langle Q ( \lambda^2 ) u , Q ( \lambda^2 ) u \rangle_{L^2_g} $ we see that
\[ \| u \|_{ H^2_g } \leq C \| Q ( \lambda^2 ) u \|_{L^2_g } + C \| u \|_{L^2_g } , \ \ u \in {{\mathcal C}^\infty_{\rm{c}}} ( M ) . \] Since $ {{\mathcal C}^\infty_{\rm{c}}} ( M ) $ is dense in $ H^2_g ( M ) $ it follows that for $ \Im \lambda > \frac n 2 $, $ Q ( \lambda^2)^{-1} : L^2_g \to H_g^2 $. Commuting $ y_1 V $, where $ V \in {\bar{\mathcal C}^\infty} ( M ; TM ) $, with $ Q ( \lambda^2 )$ gives the general estimate,
\[ \| u \|_{ H^{k+2}_g } \leq C \| Q ( \lambda^2 ) u \|_{H^k_g } + C \| u \|_{L^2_g } , \ \ u \in {{\mathcal C}^\infty_{\rm{c}}} ( M) , \] and that gives \eqref{eq:0}. \end{proof}
\begin{lem} \label{l:01} For any $ \alpha > 0 $ there exists $ c ( \alpha ) > 0 $ such that for $ \Im \lambda > c ( \alpha ) $, \begin{equation} \label{eq:01}
y_1^{\alpha } Q ( \lambda^2 )^{-1} y_1^{-\alpha} : L^2_g ( M ) \to H^2_g ( M ) . \end{equation} \end{lem} \begin{proof} We expand the conjugated operator as follows: \begin{equation} \label{eq:QlaK} \begin{split} y_1^{\alpha} Q ( \lambda^2 ) y_1^{-\alpha} & = Q ( \lambda^2 + \alpha^2 ) + \alpha ( 2 i y_1 D_{y_1 } - n - y_1^2 \gamma( y_1^2 , y') ) \\ & = \left( I + K ( \lambda, \alpha ) \right) Q ( \lambda^2 + \alpha^2) ,\\ K ( \lambda, \alpha ) & := \alpha ( 2 i y_1 D_{y_1 } - n - y_1^2 \gamma ( y_1^2 , y') ) Q ( \lambda^2 + \alpha^2 )^{-1} . \end{split} \end{equation} The inverse $ Q ( \lambda^2 + \alpha^2 )^{-1} $ exists due to the following bound provided by the spectral theorem (since $ \Spec ( - \Delta_g ) \subset [ 0 , \infty ) $) and \eqref{eq:0} (with $ k =0 $): \begin{equation} \label{eq:Qlambdaest}
\| Q ( \mu^2 )^{-1} \|_{ L^2_g \to H^k_g }
\leq \frac{( 1 + C |\mu|)^{k/2}} {d(\mu^2 , [- ({\textstyle \frac{n}2})^2 , \infty ) ) } , \ \ k = 0 , 2. \end{equation} It follows that for $ \Im \lambda > c ( \alpha ) $, $ I + K ( \lambda, \alpha ) $ in \eqref{eq:QlaK} is invertible on $ L^2_g $. Hence we can invert $ y_1^\alpha Q ( \lambda^2 ) y_1^{-\alpha} $ with the mapping property given in \eqref{eq:01}. \end{proof}
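We record the elementary conjugation identities behind the first equality in \eqref{eq:QlaK}: $ y_1^{\alpha} ( y_1 D_{y_1} ) y_1^{-\alpha} = y_1 D_{y_1} + i \alpha $, hence \[ y_1^{\alpha} ( y_1 D_{y_1} )^2 y_1^{-\alpha} = ( y_1 D_{y_1} )^2 + 2 i \alpha \, y_1 D_{y_1} - \alpha^2 , \ \ \ y_1^{\alpha} \, i ( n + y_1^2 \gamma ) y_1 D_{y_1} \, y_1^{-\alpha} = i ( n + y_1^2 \gamma ) y_1 D_{y_1} - \alpha ( n + y_1^2 \gamma ) , \] and combining these with \eqref{eq:Deltagg} gives the first line of \eqref{eq:QlaK}.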
\begin{proof}[Proof of Proposition \ref{p:propu}] The first step of the proof is a strengthening of Lemma \ref{l:0} for solutions of \eqref{eq:L2R}. We claim that if $ u $ solves \eqref{eq:L2R} and $ u \in L^2_g $ then, near the boundary $ \partial M $, \begin{equation} \label{eq:Qlambda} V_1 \cdots V_N u \in L^2_g , \ \
V_j \in {\bar{\mathcal C}^\infty} ( M , T M ) , \ \ V_j y_1 |_{ y_1 = 0 } = 0 , \end{equation} for any $N$. The condition on $ V_j $ means that $ V_j $ are tangent to the boundary $ \partial M $ (for more on spaces defined by such conditions see \cite[\S 18.3]{ho3}).
To obtain \eqref{eq:Qlambda} we see that if $ V $ is a vector field tangent to $ \partial M $ then \[ Q ( \lambda^2 ) V u = F := V f + [ Q ( \lambda^2 ) , V ] u = Vf + y_1^2 Q_2 u + y_1 Q_1 u , \] where $ Q_j $ are differential operators of order $ j$. Lemma \ref{l:0} shows that $ F \in L^2_g $. From Lemma \ref{l:0} we also know that $ y_1 V u \in L^2_g $. Hence, \[ y_1 V u - y_1 Q ( \lambda^2 )^{-1} F \in L^2_g , \ \ Q ( \lambda^2 ) y_1^{-1} ( y_1 V u - y_1 Q ( \lambda^2 )^{-1} F ) = 0 . \] But for $ \Im \lambda > c_0 $, Lemma \ref{l:01} shows that \begin{equation} \label{eq:injQla}
Q ( \lambda^2 ) y_1^{-1} v = 0 , \ \ v \in L^2 (M , d\! \vol_g ) \ \Longrightarrow \ v = 0 . \end{equation} Hence $ V u = Q( \lambda^2 )^{-1} F \in L^2_g $. This argument can be iterated showing \eqref{eq:Qlambda}.
We now consider $ P ( \lambda ) $ as an operator on $ X_1 $, formally selfadjoint with respect to $ d \mu = dx_1 d \vol_h $. Since we are on open manifolds the two $ {{\mathcal C}^\infty} $ structures agree and we can consider $ P ( \lambda ) $ as operator on $ {{\mathcal C}^\infty} ( M ) $. Since \[ Q ( \lambda^2) = y_1^{-i \lambda + \frac n 2} y_1^2 P ( \lambda ) y_1^{ i \lambda - \frac n 2 } = x_1^{ -\frac{i\lambda} 2 + \frac n 4 } x_1 P ( \lambda ) x_1^{ \frac{ i \lambda }2 - \frac n4} , \] we can define \begin{equation} \label{eq:Tla} T ( \lambda ) := x_1^{ \frac{i \lambda }2 - \frac n 4 } Q ( \lambda^2 )^{-1}
x_1^{ -\frac{i\lambda}2 + \frac{n}4 + 1 } , \ \ \Im \lambda >
{\textstyle \frac n 2}, \end{equation} which satisfies \begin{gather} \label{eq:Tla1} \begin{gathered} P ( \lambda ) T ( \lambda ) f = f , \ \ f \in {{\mathcal C}^\infty_{\rm{c}}} ( X_1 ) , \\ T ( \lambda ) : x_1^{ - \frac{\rho}2 - \frac12 }
L^2 \to
x_1^{ - \frac{\rho}2 + \frac12 } L^2 ,
\ \ \rho := \Im \lambda > {\textstyle \frac n 2} . \end{gathered} \end{gather} Here we used the fact that $ 2dy_1/y_1 = dx_1/x_1 $ and that \[ L^2 ( y^{-n-1}_1 dy_1 d \! \vol_h ) =
L^2 \left( x_1^{-\frac n 2 - 1} {dx_1 d \! \vol_h }\right) = x_1^{\frac n 4 + \frac12 } L^2 , \ \ L^2 := L^2 ( dx_1 d \!\vol_h ) . \] Proposition \ref{p:propu} is equivalent to the following mapping property of $ T ( \lambda ) $: \begin{equation} \label{eq:P0las} T ( \lambda ) : {{\mathcal C}^\infty_{\rm{c}}} ( X_1 ) \longrightarrow {\bar{\mathcal C}^\infty} ( X_1 ) , \ \ \ \Im \lambda \geq c_0 , \ \ \lambda \notin i {\mathbb N} . \end{equation} To prove \eqref{eq:P0las} we will use a classical tool for obtaining asymptotic expansions, the {\em Mellin transform}. Thus let $ u = T ( \lambda ) f $, $ f \in {{\mathcal C}^\infty_{\rm{c}}} (X_1) $. By replacing $ u $ by $ \chi ( x_1 ) u $, $ \chi \in {{\mathcal C}^\infty_{\rm{c}}} ( (-1,1) ;[0,1])$, $ \chi = 1 $ near $ 0 $, we can assume that \[ u \in {{\mathcal C}^\infty} ( ( 0,1 ) \times \partial M ) \cap x_1^{-\frac{\rho}2 + \frac12 } L^2 , \ \ P(\lambda ) u = f_1 \in {{\mathcal C}^\infty_{\rm{c}}} ( ( 0 , 1 ) \times \partial M ) , \ \ \rho > {\textstyle \frac n 2} , \] where smoothness for $ x_1 > 0 $ follows from Lemma \ref{l:0}. In addition \eqref{eq:Qlambda} shows that \begin{equation} \label{eq:Qlambdax} V_1 \cdots V_N u \in x_1^{-\frac \rho 2 + \frac12 } L^2 ( dx_1 d\! \vol_h ) , \ \
V_j \in {\bar{\mathcal C}^\infty} ( X_1 , T X_1 ) , \ \ V_j x_1 |_{ x_1 = 0 } = 0 . \end{equation} In particular, for any $ k $ \begin{equation} \label{eq:x1N} x_1^N u \in C^k ( [0,1]\times \partial M ) \end{equation} if $ N $ is large enough.
We define the Mellin transform (for functions with support in $ [0,1)$) as \[ M u ( s, x' ) := \int_0^1 u ( x) x_1^{ s } \frac {dx_1}{x_1} . \] This is well defined for $ \Re s > \rho /2 $:
\[ \begin{split} \| M u ( s , x') \|_{ L^2 ( d \! \vol_h ) }^2
& = \int_{\partial M } \left| \int_0^1 x_1^{ s +\frac{ i \lambda}2 - \frac 1 2} ( x_1^{ - \frac{ i \lambda} 2 - \frac 1 2 } u ( x_1, x' ) ) dx_1
\right|^2 d \! \vol_h \\ & \leq \left( \int_0^1 t^{ - \rho + 2 \Re s - 1 } dt \right)
\| x_1^{ \frac \rho 2 - \frac 1 2 } u \|_{ L^2 }^2
= ( 2\! \Re \!s \!- \!\rho)^{-1} \| x_1^{ \frac \rho 2 - \frac 1 2 } u \|_{ L^2 }^2 . \end{split} \] In view of \eqref{eq:Qlambda} $ s \longmapsto M u ( s , x' ) $ is a holomorphic family of {\em smooth} functions in $ \Re s > \rho /2 $. We claim now that $ M u ( s, x' ) $ continues meromorphically to all of $ {\mathbb C} $. In fact, from \eqref{eq:Plag} we see that for $ f_2 := \frac14 f_1 $, \[ M (x_1 f_2) ( s , x') = M ( {\textstyle{\frac 14}} x_1 P ( \lambda ) u ) ( s , x' ) = - s ( s + i \lambda ) M u ( s, x' ) + M ( Q_2 u) ( s+1, x' ) , \] where $ Q_2 $ is a second order differential operator built out of vector fields tangent to the boundary of $ X_1 $. In view of \eqref{eq:Qlambdax} $ Q_2 u \in x_1^{- \frac \rho 2 + \frac 12} L^2 $. Also, $ s \mapsto M ( x_1 f_2 ) ( s, x' ) $ is entire as $ f_1 $ vanishes near $ x_1 = 0 $. Hence, \[ \begin{split} M u ( s , x' ) = &
\frac{ M ( Q_2^k u ) ( s + k + 1 , x' ) } { s ( s + i \lambda ) \cdots ( s + k ) ( s + k + i \lambda ) }
- \sum_{ j=0}^k \frac{ M Q_1^{j}( x_1 f_2 ) ( s + j , x' ) }{ s ( s + i \lambda ) \cdots ( s + j ) ( s + j + i \lambda ) } , \end{split} \] and that provides a meromorphic continuation with possible poles at $ - k $ and $ - i \lambda - k $, $ k \in {\mathbb N} $.
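The formula for $ M ( x_1 f_2 ) $ above rests on the standard identities for the Mellin transform which we record for the reader's convenience: for $ u $ as above (supported in $ [0,1) $ in $ x_1 $) and $ \Re s $ large, integration by parts gives \[ M ( x_1 \partial_{x_1} u ) ( s , x' ) = - s \, M u ( s , x' ) , \ \ \ \ M ( x_1 v ) ( s , x' ) = M v ( s + 1 , x' ) . \] Consequently $ M ( x_1^2 D_{x_1}^2 u ) = - s ( s + 1 ) M u $ and $ M ( x_1 D_{x_1} u ) = i s \, M u $, which produce the factor $ - s ( s + i \lambda ) $ when applied to the normal part of $ \frac14 x_1 P ( \lambda ) $ in \eqref{eq:Plag}.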
The Mellin transform inversion formula, a contour deformation and the residue theorem (applied to simple poles thanks to our assumption that $ i \lambda \notin {\mathbb Z} $) then give \[ u ( x ) \simeq x_1^{i\lambda } ( b_0 (x' ) + x_1 b_1 ( x' ) + \cdots ) + a_0 (x' ) + x_1 a_1 ( x' ) + \cdots , \ \ a_j , b_j \in {{\mathcal C}^\infty} ( \partial M ), \] where the regularity of remainders comes from \eqref{eq:x1N}. (The basic point is that $$ M ( x_1^a \chi ( x_1 ) ) ( s ) = ( s + a )^{-1} F ( s ) , \ \
F ( s ) = - \int x_1^{ a + s }\chi' ( x_1 ) dx_1 , $$ so that
$ F ( s ) $ is an entire function with $ F ( -a ) = 1 $.)
Since $P ( \lambda ) u ( x ) = 0 $ for $ 0 < x_1 < \epsilon $ the equation shows that $ b_k $ is determined by $ b_0, \cdots, b_{k-1} $. We claim that $ b_k \equiv 0 $ for all $ k $: if $ b_0 \neq 0 $ then
\[ |x_1^{ \frac{ \rho} 2 - \frac12} u | = x_1^{- \frac \rho 2 - \frac 1 2} \left( | b_0 ( x') | + o ( 1 ) \right) \notin L^2 ( dx_1 d\! \vol_h ) , \] contradicting \eqref{eq:Qlambdax}. It follows that $ u \in {\bar{\mathcal C}^\infty} (X_1 ) $, proving \eqref{eq:P0las} and completing the proof of Proposition \ref{p:propu}. \end{proof}
\section{Meromorphic continuation} \label{merc}
To prove Theorem \ref{t:3} we recall that $ ( - \Delta_g - \lambda^2 - (\frac n2 )^2 )^{-1} $ is a holomorphic family of operators on $ L^2_g $ for $ \lambda^2 + ( \frac n 2 )^2 \notin \Spec ( - \Delta_g ) $ and in particular for $ \Im \lambda > \frac n 2 $.
\begin{proof}[Proof of Theorem \ref{t:3}] We first show that for $ \Im \lambda > 0 $, $ \lambda^2 + ({\textstyle \frac n 2})^2 \notin \Spec(-\Delta_g)$, \begin{equation} \label{eq:Pla0} P ( \lambda ) u = 0 , \ \ u \in \mathscr X_s , \ \ s > - \Im \lambda - \textstyle{\frac12} \ \Longrightarrow \ u \equiv 0 . \end{equation} In fact, from \eqref{eq:PtoC} we see that $ u \in {\bar{\mathcal C}^\infty} ( X ) $. Then
putting $ v ( y ) := y_1^{ - i \lambda + \frac n 2} j^* (u|_{X_1} ) $, $ j: M \to X_1 $, \eqref{eq:Plag} shows that $ ( - \Delta_g - \lambda^2 - (\frac n 2 )^2 ) v = 0 $. For $ \Im \lambda> 0 $ we have $ v \in L^2_g $ and hence from our assumptions, $ v \equiv 0 $.
Hence $ u |_{X_1 } \equiv 0 $ and, since $ u \in {\bar{\mathcal C}^\infty} ( X ) $, $ u \in {\dot{\mathcal C}^\infty} ( X \cap \{ x_1 \leq 0 \} ) $. Lemma \ref{l:ahe} then shows that $ u \equiv 0 $, proving \eqref{eq:Pla0}.
In view of Lemma \ref{l:kerP} we now need to show that $ P ( \lambda )^* w = 0 $, $ w \in \dot H^{-s} ( X) $, implies that $ w \equiv 0 $. It is enough to do this for $ \lambda \notin i {\mathbb N} $ with $ \Im \lambda \gg 1 $ since invertibility at one point shows that the index of $ P ( \lambda ) $ is $ 0 $. Then \eqref{eq:Pla0} shows invertibility for all $ \Im \lambda > 0 $ with $ \lambda^2 + (\frac n 2)^2 \notin \Spec ( - \Delta_g ) $.
Hence suppose that $ P ( \lambda )^* w = 0 $, $ w \in \dot H^{-s} ( X ) $. Estimate \eqref{eq:hypePest} then shows that $ \supp w \subset \overline X_1 $. (For $ -1 < x_1 < 0 $ we solve a hyperbolic equation with zero initial data and zero right hand side.) We now show that $ \supp w \cap X_1 \neq \emptyset $ (that is, there is some support in $ x_1 > 0$; in fact by unique continuation results for second order elliptic operators, see for instance \cite[\S 17.2]{ho3}, this shows that $ \supp w = \overline X_1 $). In other words we need to show that we cannot have $ \supp w \subset \{x_1 = 0 \} $. Since $ \WF ( w ) \subset N^* \partial X_1 $ we can restrict $ w $ to fixed values of $ x' \in \partial M $ and the restriction is then a linear combination of $ \delta^{(k)} ( x_1 ) $. But \[ P ( \bar \lambda ) ( \delta^{(k)} ( x_1 ) ) = ( k + 1 - \bar \lambda / i ) \delta^{(k+1) } (x_1) - i \gamma ( x ) ( 2 i ( k+1) - \bar \lambda - i {\textstyle\frac{n-1} 2 } ) \delta^{(k)} ( x_1 ) , \] and that does not vanish for $ \Im \lambda > 0 $.
Mapping property \eqref{eq:P0las} and the definition of $P ( \lambda ) $ show that for any $ f \in {{\mathcal C}^\infty_{\rm{c}}} ( X_1 ) $ (that is $ f $ supported in $ x_1 > 0 $) there exists $ u \in \bar C^\infty ( X_1 ) $ such that $ P ( \lambda ) u = f $ in $ X_1 $. Then (with $L^2 $ inner products meant as distributional pairings), \[ \langle f , w \rangle = \langle P ( \lambda ) u , w \rangle = \langle u , P ( \lambda ) ^* w \rangle = 0 . \] Since $ w \in \dot {\mathcal D} ( X_1 ) $ and $ u \in \bar C^\infty ( X_1 ) $ the pairing is justified. In view of support properties of $ w $, we can find $ f $ such that the left hand side does not vanish. This gives a contradiction. \end{proof}
\noindent {\bf Remark.} Different proofs of the existence of $ \lambda $ with $ P( \lambda ) $ invertible can be obtained using semiclassical versions of the propagation estimates of \S \ref{radest}. That is done for $ \Im \lambda_0 \gg \langle \Re \lambda_0\rangle $ in \cite{vasy2} and for $ \Im \lambda_0 \gg 1 $ in \cite[\S 5.5.3]{res}.
Theorem \ref{t:3} guarantees existence of the inverse at many values of $ \lambda$. Then standard Fredholm analytic theory (see for instance \cite[Theorem C.5]{res}) gives \begin{equation} \label{eq:Plainv} P ( \lambda)^{-1} : \mathscr Y_s \to \mathscr X_s \ \text{ is a meromorphic family of operators in $ \Im \lambda > - s -\frac12 $.} \end{equation}
\begin{proof}[Proof of Theorem \ref{t:1}] We define \[ V ( \lambda ) : {{\mathcal C}^\infty_{\rm{c}}} ( M ) \to {{\mathcal C}^\infty_{\rm{c}}} ( X ) , \ \ \ f ( y ) \longmapsto V ( \lambda ) f ( x ) := \left\{ \begin{array}{ll} x_1^{ \frac {i \lambda} 2 - \frac n 4 - 1} (j^{-1})^* f & x_1 > 0 , \\ \ \ \ \ 0, & x_1 \leq 0 , \end{array} \right. \] \[ U ( \lambda ) : {\bar{\mathcal C}^\infty} ( X ) \to {{\mathcal C}^\infty} ( M ) , \ \ \
u ( x ) \longmapsto y_1^{ - i \lambda + \frac n 2} j^* ( u|_{X_1} ) , \] where $ j : M \to X_1 $ is the map defined by $ j ( y) = ( y_1^2, y') $ near $ \partial M $. Then, for $ \Im \lambda > \frac n2 $, \eqref{eq:firstconj} and \eqref{eq:Plag} show that \begin{equation} \label{eq:Rla} R ( \textstyle \frac{ n } 2 - i \lambda ) = U ( \lambda ) P ( \lambda )^{-1} V ( \lambda ) . \end{equation} Since $ P ( \lambda )^{-1} : {\bar{\mathcal C}^\infty} ( X ) \to {\bar{\mathcal C}^\infty} ( X) $ is a meromorphic family of operators in $ {\mathbb C} $, Theorem \ref{t:1} follows. \end{proof}
\noindent {\bf Remarks.} 1. The structure of the residue of $ P ( \lambda )^{-1} $ is easiest to describe when the pole at $ \lambda_0 $ is simple and has rank one. In that case, \begin{gather*} P ( \lambda )^{-1} = \frac{ u \otimes v }{ \lambda - \lambda_0 } + Q ( \lambda, \lambda_0 ) , \ \ u \in {\bar{\mathcal C}^\infty} ( X ) , \ \ v \in \!\!\!\!\!\bigcap_{ s > - \Im \lambda_0 - \frac12} \dot H^{-s} (\overline X_1 ) \, \\ P ( \lambda_0 ) u = 0 , \ \ P ( \bar \lambda_0 ) v = 0 , \end{gather*} where $ Q ( \lambda, \lambda_0 ) $ is holomorphic near $ \lambda_0 $. We note that $ u \in {{\mathcal C}^\infty} ( X) $ because of \eqref{eq:PtoC}. The regularity of $ v \in \dot H^{-s}$, $ s > - \Im \lambda_0 - \frac12 $, just misses the threshold for smoothness -- in particular there is no contradiction with Theorem \ref{t:3}!
\noindent 2. The relation \eqref{eq:Rla} between $ R ( \frac n 2 - i \lambda ) $ and $ P ( \lambda ) $ shows that, unless the elements of the kernel of $ P ( \bar \lambda ) $ are supported on $ \partial X_1 = \{ x_1 = 0 \} $, the multiplicities of the poles of $ R ( \frac n 2 - i \lambda ) $ and of $ P ( \lambda )^{-1} $ agree.
For completeness we conclude with the proof of the following standard fact: \begin{prop} \label{p:hintz} If $ R ( \zeta ) := ( - \Delta_g - \zeta ( n - \zeta ) )^{-1} $ for $ \Re \zeta > n $ then \begin{equation} \label{eq:hintz0} R ( \zeta ) : L^2 ( M , d \! \vol_g ) \to L^2 ( M , d \! \vol_g ) , \end{equation} is meromorphic for $ \Re \zeta > \frac n 2 $ with simple poles where $ \zeta ( n - \zeta ) \in \Spec ( - \Delta_g ) $. \end{prop} \begin{proof} The spectral theorem implies that $ R ( \zeta ) $ is holomorphic on $ L^2_g $ in $ \{ \Re \zeta > \frac n 2 \} \setminus [ \frac n 2 , n ] $. In the $ \lambda $-plane that corresponds to $ \{ \Im \lambda > 0 \} \setminus i [ 0 , \frac n 2 ] $.
From \eqref{eq:Tla} and \eqref{eq:Tla1} we see that boundedness of $ R ( \frac n 2 - i \lambda ) $ on $ L^2_g ( M ) $ is equivalent to \begin{equation} \label{eq:hintz1} P ( \lambda)^{-1} : x_1^{ -\frac \rho 2 - \frac12} L^2 ( X_1 ) \to x_1^{ - \frac \rho 2 + \frac 12} L^2 ( X_1 ) , \ \ \rho := \Im \lambda . \end{equation} We will first prove \eqref{eq:hintz1} for $ 0 < \rho \leq 1$. From Theorem \ref{t:3} we know that except at a discrete set of poles, $ P ( \lambda )^{-1} : \bar H^s ( X_1 ) \to \bar H^{s+1} ( X_1 ) $, $ s > - \rho - \frac12$. We claim that for $ - 1 \leq s < - \frac12 $ \begin{equation} \label{eq:Sob} x_1^{s } L^2 ( X_1 ) \hookrightarrow \bar H^s ( X_1 ) , \ \ \bar H^{s+1} ( X_1 ) \hookrightarrow x_1^{s+1} L^2 ( X_1 ) . \end{equation} By duality the first inclusion follows from the inclusion \begin{equation} \label{eq:hintz} \dot H^{r} ( X_1 ) \hookrightarrow x_1^{r} L^2 , \ \ 0 \leq r \leq 1 . \end{equation} Because of interpolation we only need to prove this for $ r = 1$, in which case it follows from Hardy's inequality,
$\int_0^{\infty } | x_1^{-1} u ( x_1 ) |^2 dx_1 \leq 4 \int_0^\infty
| \partial_{x_1} u ( x_1 ) |^2 dx_1 $. The second inclusion follows from \eqref{eq:hintz} and the fact that $ \bar H^r ( X_1 ) = \dot H^r ( X_1) $ for $ 0 \leq r < \frac12 $ -- see \cite[Chapter 4, (5.16)]{Ta}. We can now take $ s = - \frac \rho 2 - \frac 12 $ in \eqref{eq:Sob}, which for $ 0 < \rho \leq 1 $ is in the allowed range. That proves \eqref{eq:hintz1} for $ 0 < \Im \lambda \leq 1$, except at the poles, and consequently establishes \eqref{eq:hintz0} for $ \frac n 2 < \Re \zeta \leq \frac n 2 + 1 $. We can choose a polynomial $ p ( \zeta ) $ such that $ p ( \zeta ) R ( \zeta ) : {{\mathcal C}^\infty_{\rm{c}}} ( M ) \to {{\mathcal C}^\infty} ( M ) $ is holomorphic near $ [ \frac n2 , n ] $. The maximum principle applied to $ \langle p ( \zeta ) R ( \zeta ) f, g \rangle$, $ f, g \in {{\mathcal C}^\infty_{\rm{c}}} ( M ) $, now proves that $ p ( \zeta ) R ( \zeta )$ is bounded on $ L^2_g ( M ) $ near $ [ \frac n 2 , n ] $, concluding the proof. \end{proof}
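As a quick sanity check of the Hardy inequality used in the last proof (an illustration only, not part of the argument; NumPy/SciPy are assumed), one can compare the two sides numerically for a test function vanishing at the origin.
\begin{verbatim}
# Hardy's inequality: int_0^inf |u(x)/x|^2 dx <= 4 int_0^inf |u'(x)|^2 dx,
# checked for the test function u(x) = x exp(-x), which vanishes at x = 0.
import numpy as np
from scipy.integrate import quad

u  = lambda x: x * np.exp(-x)
du = lambda x: (1 - x) * np.exp(-x)

lhs = quad(lambda x: (u(x) / x)**2 if x > 0 else 0.0, 0, np.inf)[0]
rhs = 4 * quad(lambda x: du(x)**2, 0, np.inf)[0]
print(lhs, rhs)    # 0.5 <= 1.0
\end{verbatim}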
\def\arXiv#1{\href{http://arxiv.org/abs/#1}{arXiv:#1}}
\end{document}
\begin{document}
\title{Upper bounds for number of removed edges in the Erased Configuration Model} \author{Pim van der Hoorn\inst{1} \and Nelly Litvak\inst{1}} \institute{
University of Twente, Department of Electrical Engineering, Mathematics and Computer Science \\
\email{[email protected]} }
\maketitle
\begin{abstract}
Models for generating simple graphs are important in the study of real-world complex networks. A
well established example of such a model is the erased configuration model, where each node
receives a number of half-edges that are connected to half-edges of other nodes at random, and
then self-loops are removed and multiple edges are merged to make the graph simple.
Although asymptotic results for many properties of this model, such as the limiting degree
distribution, are known, the exact speed of convergence in terms of the graph size remains
an open question. We provide a first answer by analyzing the size dependence of the average
number of removed edges in the erased configuration model. By combining known upper bounds with a
Tauberian Theorem we obtain upper bounds for the number of removed edges, in terms of the size of
the graph. Remarkably, when the degree distribution follows a power-law, we observe three scaling
regimes, depending on the power law exponent. Our results provide a strong theoretical basis for
evaluating finite-size effects in networks. \end{abstract}
\section{Introduction}
The use of complex networks to model large systems has proven to be a powerful tool in recent years. Mathematical and empirical analysis of structural properties of such networks, such as graph distances, centralities, and degree-degree correlations, has received vast attention in the recent literature. A common approach for understanding these properties of real-world networks is to compare them to those of other networks which have the same basic characteristics as the network under consideration, for instance the distribution of the degrees. Such test networks are usually created using random graph models. An important property of real-world networks is that they are usually simple, i.e., there is at most one edge between any two nodes and there are no self-loops. Hence, random graph models that produce simple graphs are of primary interest from the application point of view.
One well-established model for generating a graph with a given degree distribution is the configuration model~\cite{Bollobas1980,Molloy1995,Newman2001}, which has been studied extensively in the literature~\cite{Britton2006,VanDerHofstad2007,Hoorn2014,Hoorn2015}. In this model, each node first receives a certain number of half-edges, or stubs, and then the stubs are connected to each other at random. Obviously, multiple edges and self-loops may appear during the random wiring process. It is well-known that when the degree distribution has finite variance, the graph will be simple with positive probability, so a simple graph can be obtained by repeatedly applying the model until the resulting graph is simple. However, when the variance of the degrees is infinite the resulting graph will, with high probability, not be simple. To remedy this, one can remove self-loops and merge the multiple edges to make the graph simple. This version is known as the erased configuration model. Although removal of edges impacts the degree distribution, it has been shown that asymptotically the degree distribution is unchanged. For a thorough systematic treatment of these results we refer the reader to~\cite{VanDerHofstad2007}.
An important feature of the configuration model is that, conditioned on the graph being simple, it samples a graph uniformly from among all simple graphs with the specified degree distribution. This, in combination with the neutral wiring in the configuration model, makes it a crucial model for studying the effects of degree distributions on the structural properties of the networks, such as, for instance, graph distances~\cite{Esker2005,Hofstad2005,Hofstad2005a,Molloy1998} and epidemic spread~\cite{Andersson1998,Ferreira2012,Lee2013}.
We note that there are several different methods for generating simple graphs, sampled uniformly from the set of all simple graphs with a given degree sequence. A large class of such models uses Markov-Chain Monte Carlo methods for sampling graphs uniformly from among all graphs with a given set of constraints, such as the degree sequence. These algorithms use so-called edge swap or switching steps~\cite{Artzy-Randrup2005,Maslov2002,Tabourier2011}: each time a pair of edges is sampled and swapped, if this is allowed. The main problem with this method is the limited theoretical results on the mixing times; in~\cite{Cooper2007} mixing times are analyzed, but only for regular graphs. Other methods are, for instance, the sequential algorithms proposed in \cite{Blitzstein2011sequential,DelGenio2010}, which have complexity $O(E N^2)$ and $O(E N)$, respectively, where $N$ is the size of the graph and $E$ denotes the number of edges. The erased configuration model, however, is well studied, with strong theoretical results, and is easy to implement.
In a recent study \cite{Schlauch2015}, the authors compare several methods, including the above-mentioned Markov-Chain Monte Carlo methods, for creating test graphs for the analysis of structural properties of networks. The authors found that the number of removed edges did not impact the degree sequence in any significant way. However, several other measures on the graph, for instance the average neighbor degree, did seem to be altered by the removal of self-loops and double edges. This emphasizes the fact that asymptotic results alone are not sufficient. The analysis of networks requires a more detailed understanding of finite-size effects in simple random graphs. In particular, it is important to obtain a more precise characterization of the dependence of the number of erased edges on the graph size, and of their impact on other characteristics of the graph.
In our recent work~\cite{Hoorn2015} we analyzed the average number of removed edges in order to evaluate the degree-degree correlations in the directed version of the erased configuration model. We used insights obtained from several limit theorems to derive the scaling in terms of the graph size. Here we make these results rigorous by proving three upper bounds for the average number of removed edges in the undirected erased configuration model with a regularly varying degree distribution. We start in Section 2 with the formal description of the model. Our main result is stated in Section 3 and the proofs are provided in Section 4.
\section{Erased Configuration Model}
The Erased Configuration Model (ECM) is an alteration of the Configuration Model (CM), which is a random graph model for generating graphs of size $n$ with either a prescribed degree sequence or a prescribed degree distribution. Given a degree sequence ${\bf D}_n$ such that $\sum_{i = 1}^n D_i$ is even, the degree of each node is represented by stubs and the graph is constructed by randomly pairing stubs to form edges. This creates a graph with the given degree sequence.
In another version of the model, degrees are sampled independently from a given distribution, an additional stub is added to the last node if the sum of degrees is odd, and the stubs are connected as in the case with given degrees. The empirical degree distribution of the resulting graph will then converge to the distribution from which the degrees were sampled as the graph size goes to infinity, see for instance~\cite{VanDerHofstad2007}.
When the degree distribution has finite variance, the probability of creating a simple graph with the CM is bounded away from zero. Hence, by repeating the model, one will obtain a simple graph after a finite number of attempts. This construction is called the Repeated Configuration Model (RCM). It has been shown that the RCM samples graphs uniformly from among all simple graphs with the given degree distribution, see Proposition 7.13 in \cite{VanDerHofstad2007}.
When the degrees have infinite variance the probability of generating a simple graph with the CM converges to zero as the graph size increases. In this case the ECM can be used, where after all stubs are paired, multiple edges are merged and self-loops are removed. This model is computationally far less expensive than the RCM since the pairing only needs to be done once while in the other case the number of attempts increases as the variance of the degree distribution grows. The trade-off is that the ECM removes edges, altering the degree sequence and hence the empirical degree distribution. Nevertheless it was proven, see \cite{VanDerHofstad2007}, that the empirical degree distribution for the ECM still converges to the original one as $n \to \infty$.
For our analysis we shall consider graphs of size $n$ generated by the ECM, where the degrees are sampled at random from a regularly varying distribution. We recall that $X$ is said to have a regularly varying distribution with finite mean if \begin{equation}
\Prob{X > k} = L(k) k^{-\gamma} \quad \text{with } \gamma > 1, \label{eq:degree_distribution} \end{equation} where $L$ is a slowly varying function, i.e. $\lim_{x \to \infty} L(tx)/L(x) = 1$ for all $t > 0$. The parameter $\gamma$ is called the exponent of the distribution.
For $n \in \N$ we consider the degree sequence ${\bf D}_n$ as a sequence of i.i.d. samples from distribution~\eqref{eq:degree_distribution}, let $\mu = \Exp{D}$ and denote by $L_n = \sum_{i = 1}^n D_i$ the sum of the degrees. Formally we need $L_n$ to be even in order to have a graphical sequence, in which case $L_n/2$ is the number of edges. This can be achieved by increasing the degree of the last node $D_n$ by one if the sum is odd. This alteration adds a term uniformly bounded by one which does not influence the analysis. Therefore we can omit this and treat the degree sequence ${\bf D}_n$ as an i.i.d. sequence.
For the analysis we denote by $E_{ij}$ the number of edges between two nodes $i$ and $j$, $1 \le i,j \le n$, created by the CM and by $E_{ij}^c$ the number of edges between these two nodes that were removed by the ECM. Furthermore, we let $\mathbb{P}_n$ and $\mathbb{E}_n$ be, respectively, the probability and expectation conditioned on the degree sequence ${\bf D}_n$.
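To make the construction concrete, the following Python sketch (an illustration of the model only, not of the proofs; NumPy is assumed, and the Pareto-type degrees are one particular instance of \eqref{eq:degree_distribution}) samples the degrees, performs the uniform pairing of stubs, erases self-loops and surplus copies of multiple edges, and reports the empirical fraction of removed edges, normalized here by the number of edges $L_n/2$ rather than by $L_n$.
\begin{verbatim}
# Sketch of the erased configuration model (ECM).
import numpy as np

def sample_degrees(n, gamma, rng):
    # Degrees with a regularly varying tail P(D > k) ~ k^(-gamma), gamma > 1.
    d = np.floor(rng.pareto(gamma, size=n)).astype(int) + 1
    if d.sum() % 2:                              # make L_n even
        d[-1] += 1
    return d

def erased_fraction(d, rng):
    stubs = np.repeat(np.arange(len(d)), d)      # D_i stubs for node i
    rng.shuffle(stubs)                           # uniform random pairing
    pairs = np.sort(stubs.reshape(-1, 2), axis=1)
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]    # remove self-loops
    simple = np.unique(pairs, axis=0)            # merge multiple edges
    total = d.sum() // 2                         # L_n / 2 edges before erasure
    return (total - len(simple)) / total

rng = np.random.default_rng(0)
for gamma in (1.2, 1.7, 2.5):
    for n in (10**4, 10**5):
        frac = erased_fraction(sample_degrees(n, gamma, rng), rng)
        print(gamma, n, round(frac, 4))
\end{verbatim}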
\section{Main result}
The main result of this paper is concerned with the scaling of the average number of erased edges in the ECM. It was proven in~\cite{Hoorn2014} that \begin{equation}\label{eq:erased_edges_convergence}
\frac{1}{L_n} \sum_{i,j} \Expn{E_{ij}^c} \plim 0 \quad \text{as } n \to \infty, \end{equation} where $\plim$ denotes convergence in probability. This result states that the average number of removed edges converges to zero as the graph size grows, which is in agreement with the convergence in probability of the empirical degree distribution to the original one. However, until now there have not been any results on the speed of convergence in~\eqref{eq:erased_edges_convergence}. In this section we will state our result, which establishes upper bounds on the scaling of the average number of erased edges.
To make our statement rigorous we first need to define what we mean by scaling for a random variable.
\begin{definition}\label{def:probabilistic_bigO}
Let $(X_n)_{n \in \N}$ be a sequence of random variables and let
$\rho \in \R$. Then we define
\[
X_n = \bigOp{n^\rho} \iff \text{ for all } \varepsilon > 0 \quad
n^{-\rho - \varepsilon} X_n \plim 0, \quad \text{as } n \to \infty.
\] \end{definition}
We are now ready to state the main result on the scaling of the average number of erased edges in the ECM.
\begin{theorem}\label{thm:main_result}
Let $G_n$ be a graph generated by the ECM with degree distribution
\eqref{eq:degree_distribution}, let $L_n$ be the sum of the degrees and denote by $E_{ij}^c$ the
number of removed edges from $i$ to $j$. Then
\begin{equation}
\frac{1}{L_n} \sum_{i,j = 1}^n \Expn{E_{ij}^c} =
\begin{cases}
\bigOp{n^{\frac{1}{\gamma} - 1}} &\mbox{ if } 1 < \gamma \le \frac{3}{2}, \\
\bigOp{n^{\frac{4}{\gamma} - 3}} &\mbox{ if } \frac{3}{2} < \gamma \le 2,\\
\bigOp{n^{-1}} &\mbox{ if } \gamma > 2.
\end{cases}
\label{eq:upper_bounds_erased_edges}
\end{equation} \end{theorem}
The proof of Theorem~\ref{thm:main_result} is given in the next section. The strategy of the proof is to establish two upper bounds for $\sum_{i,j = 1}^n \Expn{E_{ij}^c}/L_n$ for the case $1 < \gamma \le 2$, each of which scales as one of the first two terms from \eqref{eq:upper_bounds_erased_edges}. Then it remains to observe that both bounds hold simultaneously, so the smaller one prevails: $n^{1/\gamma - 1} \le n^{4/\gamma - 3}$ when $1 < \gamma \le 3/2$, while $n^{4/\gamma - 3} \le n^{1/\gamma - 1}$ when $3/2 < \gamma \le 2$. In addition, we prove the $n^{-1}$ scaling for $\gamma > 2$.
Theorem~\ref{thm:main_result} gives several insights into the behavior of the ECM. First, consider the case when the degrees have finite variance ($\gamma > 2$). Equation \eqref{eq:upper_bounds_erased_edges} tells us that in that case the ECM erases only $\bigOp{1}$ edges, that is, the number of erased edges grows slower than any positive power of $n$. For large $n$, this alters the degree sequence in a negligible way. We then gain the advantage that we need to perform the random wiring only once. In contrast, the RCM requires multiple attempts before a simple graph is produced. This becomes a problem especially as $\gamma$ approaches $2$.
An even more interesting phenomenon established by Theorem~\ref{thm:main_result} is the remarkable change in the scaling at $\gamma = 3/2$. Figure~\ref{fig:erased_edges_scaling} shows the exponent in the scaling term in \eqref{eq:upper_bounds_erased_edges} as a function of $\gamma$. \begin{figure}
\caption{The scaling exponent of the average number of erased edges, as a function of $\gamma$.}
\label{fig:erased_edges_scaling}
\end{figure} Notice that for small values of $\gamma$, the fraction of erased edges decreases quite slowly with $n$. For example, when $\gamma=1.1$ and $n=10^6$ then $n^{1/\gamma} \approx 284803$, so the bound on the fraction of removed edges is $n^{1/\gamma - 1} \approx 0.28$. Hence, a significant fraction of edges may be removed, and we can expect notable finite-size effects even in large networks. However, when $\gamma\ge 1.5$ the finite-size effects are already very small and decrease more rapidly with $\gamma$.
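The three regimes can be read off directly from \eqref{eq:upper_bounds_erased_edges}; the short Python sketch below (an illustration only) prints the scaling exponent and the value of the bound $n^{\rho}$ at $n=10^6$, reproducing the $\approx 0.28$ fraction discussed above for $\gamma = 1.1$.
\begin{verbatim}
# Scaling exponent rho(gamma) from Theorem 1 and the bound n^rho at n = 10^6.
def rho(gamma):
    if gamma <= 1.5:
        return 1 / gamma - 1
    if gamma <= 2:
        return 4 / gamma - 3
    return -1.0

n = 10**6
for gamma in (1.1, 1.3, 1.5, 1.7, 1.9, 2.5):
    print(gamma, round(rho(gamma), 3), round(n**rho(gamma), 4))
\end{verbatim}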
It will be seen from the proofs in the next section that the upper bounds for $\gamma>3/2$ in Theorem~\ref{thm:main_result} follow readily from the literature. Our main contribution is in the upper bound for $1 < \gamma<3/2$, which corresponds to many real-world networks. The proof uses a Central Limit Theorem and a Tauberian Theorem for regularly varying random variables. Note that when $1 < \gamma < 3/2$ the upper bound $n^{4/\gamma-3}$ is not at all tight and even increasing in $n$ for $\gamma<4/3$.
\section{Upper bounds for erased edges}\label{sec:upper_bounds}
Throughout this section we will use the Central Limit Theorem for regularly varying random variables, also called the Stable Law CLT; see \cite{Whitt2002}, Theorem 4.5.1. We summarize it below, letting $\dlim$ denote convergence in distribution, in the setting of non-negative regularly varying random variables.
\begin{theorem}[Stable Law CLT~\cite{Whitt2002}]\label{thm:stable_clt}
Let $\{X_i: i \ge 1\}$ be an i.i.d. sequence of non-negative random variables with
distribution~\eqref{eq:degree_distribution} and $0 < \gamma < 2$. Then there exists a slowly
varying function $L_0$, different from $L$, such that
\[
\frac{\sum_{i = 1}^n X_i - m_n}{L_0(n)n^{\frac{1}{\gamma}}}
\dlim S_{\gamma},
\]
where $S_{\gamma}$ is a stable random variable and
\begin{equation*}
m_n = \begin{cases}
0 &\mbox{if } 0 < \gamma < 1 \\
L_0(n) n^{2}\Exp{\sin\left(\frac{X}{L_0(n) n}\right)}
&\mbox{if } \gamma = 1 \\
n\Exp{X} &\mbox{if } 1 < \gamma < 2.
\end{cases}
\end{equation*} \end{theorem}
From Theorem~\ref{thm:stable_clt} we can infer several scaling results using the following observation: By Slutsky's Theorem it follows that \[
\frac{\sum_{i = 1}^n X_i - m_n}{L_0(n) n^{\frac{1}{\gamma}}} \dlim S_{\gamma} \quad
\text{as } n \to \infty \] implies that for any $\varepsilon > 0$, \[
n^{-\varepsilon}\frac{\sum_{i = 1}^n X_i - m_n}{L_0(n) n^{\frac{1}{\gamma}}} \plim 0
\quad \text{as } n \to \infty. \]
Hence $\left|\sum_{i = 1}^n X_i - m_n\right| = \bigOp{L_0(n) n^{1/\gamma}}$ and therefore, by
Potter's Theorem, it follows that $\left|\sum_{i = 1}^n X_i - m_n\right| = \bigOp{n^{1/\gamma}}$. Finally, we remark that if $D$ has distribution~\eqref{eq:degree_distribution} with $1 < \gamma \le 2$, then $D^2$ has distribution~\eqref{eq:degree_distribution} with exponent $1/2 < \gamma/2 \le 1$. Summarizing, we have the following.
\begin{corollary}\label{cor:clt_edges} Let $G_n$ be a graph generated by the ECM with degree distribution \eqref{eq:degree_distribution} and $1 < \gamma \le 2$, then \begin{equation*}
L_n = \bigOp{n}, \quad
\left|\sum_{i = 1}^n D_i - \mu n\right| = \bigOp{n^{\frac{1}{\gamma}}} \, \text{ and } \,
\sum_{i = 1}^n D_i^2 = \bigOp{n^{\frac{2}{\gamma}}} \end{equation*} \end{corollary} The third equation also holds for $\gamma = 2$ since \begin{align*}
\sum_{i = 1}^n D_i^2 &= \left(\sum_{i = 1}^n D_i^2 - L_0(n) n^2 \Exp{\sin\left(
\frac{D}{n L_0(n)}\right)}\right) + L_0(n) n^2 \Exp{\sin\left( \frac{D}{n L_0(n)}\right)} \\
&\le \left(\sum_{i = 1}^n D_i^2 - n^2 L_0(n) \Exp{\sin\left(
\frac{D}{n L_0(n)}\right)}\right) + n \mu \\
&= \bigOp{L_0(n) n} + n \mu = \bigOp{n}. \end{align*}
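As a quick numerical illustration of Corollary~\ref{cor:clt_edges} (not used in the proofs; NumPy is assumed and the degrees are Pareto-type, a particular instance of \eqref{eq:degree_distribution}), one can inspect the normalized sums directly: the first ratio stabilizes near $\mu$, while the second stays bounded in probability but keeps fluctuating, as expected from the stable limit.
\begin{verbatim}
# Empirical illustration of L_n = O_P(n) and sum_i D_i^2 = O_P(n^{2/gamma}).
import numpy as np

rng = np.random.default_rng(1)
gamma = 1.5
for n in (10**3, 10**4, 10**5, 10**6):
    d = np.floor(rng.pareto(gamma, size=n)).astype(int) + 1
    print(n, d.sum() / n, (d.astype(float)**2).sum() / n**(2 / gamma))
\end{verbatim}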
\subsection{The upper bounds $\bigOp{n^{\frac{4}{\gamma} - 3}}$ and $\bigOp{n^{-1}}$}
For the proof of the upper bounds we will use the following proposition.
\begin{proposition}[Proposition 7.10~\cite{VanDerHofstad2007}]\label{prop:first_upper_bound}
Let $G_n$ be a graph generated by the CM and denote by $S_n$ and $M_n$,
respectively, the number of self-loops and multiple edges. Then
\begin{equation*}
\Expn{S_n} \le \sum_{i = 1}^n \frac{D_i^2}{L_n} \quad \text{and} \quad
\Expn{M_n} \le 2\left(\sum_{i = 1}^n \frac{D_i^2}{L_n}\right)^2.
\end{equation*} \end{proposition}
\begin{lemma}\label{lem:first_upper_bound}
Let $G_n$ be a graph generated by the ECM with degree distribution
\eqref{eq:degree_distribution}, then
\begin{equation}
\frac{1}{L_n} \sum_{i,j = 1}^n \Expn{E_{ij}^c} =
\begin{cases}
\bigOp{n^{\frac{4}{\gamma} - 3}} &\mbox{ if } 1 < \gamma \le 2,\\
\bigOp{n^{-1}} &\mbox{ if } \gamma > 2.
\end{cases}
\label{eq:erased_edges_bound_1}
\end{equation} \end{lemma}
\begin{proof} We start by observing that \[
\sum_{i,j = 1}^n E^c_{ij} = S_n + M_n, \] and hence it follows from Proposition~\ref{prop:first_upper_bound} that \[
\sum_{i,j = 1}^n \Expn{E^c_{ij}} \le \sum_{i = 1}^n \frac{D_i^2}{L_n}
+ 2\left(\sum_{i = 1}^n \frac{D_i^2}{L_n}\right)^2. \]
First suppose that $1 < \gamma \le 2$. Then, by Corollary~\ref{cor:clt_edges} and the continuous mapping theorem it follows that \[
\frac{1}{L_n} \sum_{i,j = 1}^n \Expn{E_{ij}^c} \le \frac{1}{L_n^2} \sum_{i = 1}^n D_i^2
+ 2\frac{1}{L_n^3} \left(\sum_{i = 1}^n D_i^2\right)^2 = \bigOp{n^{\frac{4}{\gamma} - 3}}. \]
Now suppose that $\gamma > 2$. Then $D_i^2$ has finite mean, say $\nu$, and therefore, by Theorem \ref{thm:stable_clt}, \[
\frac{1}{L_n^2} \sum_{i = 1}^n D_i^2 \le \frac{1}{L_n^2}\left|\sum_{i = 1}^n D_i^2 - n\nu\right|
+ \frac{n\nu}{L_n^2} = \bigOp{n^{\frac{2}{\gamma} - 2} + n^{-1}} = \bigOp{n^{-1}}, \] where the last step follows since $2/\gamma - 2 < -1$ when $\gamma > 2$. Since this is the main term the result follows. \qed \end{proof}
Lemma~\ref{lem:first_upper_bound} provides the last two upper bounds from Theorem \ref{thm:main_result}. However, as we mentioned before, the bound $\bigOp{n^{4/\gamma - 3}}$ is not tight over the whole range $1< \gamma\le 2$ since for $1 < \gamma < 4/3$ we have $4/\gamma - 3 > 0$, and hence the upper bound diverges as $n \to \infty$, which is in disagreement with~\eqref{eq:erased_edges_convergence}. Therefore, there must exist another upper bound on the average number of erased edges, which goes to zero as $n \to \infty$ for all $\gamma > 1$. This new bound does not follow readily from the literature. Below we establish such an upper bound and explain the essential new ingredients needed for its proof.
\subsection{The upper bound $\bigOp{n^{\frac{1}{\gamma} - 1}}$}
We first observe that the number of erased edges between nodes $i$ and $j$ equals the total number of edges between the nodes minus one, if there is more than one edge. This gives, \begin{align*}
\frac{1}{L_n} \sum_{i,j = 1}^n \Expn{E^c_{ij}}
&= \frac{1}{L_n} \sum_{i,j = 1}^n \Expn{E_{ij} - \mathbbm{1}_{\{E_{ij} > 0\}}} \\
&= \frac{1}{L_n} \sum_{i,j = 1}^n \Expn{E_{ij}} - \frac{1}{L_n} \sum_{i,j = 1}^n
\Expn{1 - \mathbbm{1}_{\{E_{ij} = 0\}}} \\
&= 1 - \frac{n^2}{L_n} + \frac{1}{L_n} \sum_{i,j = 1}^n \Probn{E_{ij} = 0}.
\numberthis \label{eq:removed_edges_alternative} \end{align*}
We can get an upper bound for $\Probn{E_{ij} = 0}$ from the analysis performed in \cite{Hofstad2005}, Section 4. Since the probability of no edges between $i$ and $j$ equals the probability that none of the $D_i$ stubs connects to one of the $D_j$ stubs, it follows from equation (4.9) in~\cite{Hofstad2005} that \begin{equation}
\Probn{E_{ij} = 0} \le \prod_{k = 0}^{D_i - 1}\left(1 - \frac{D_j}{L_n - 2D_i - 1}\right)
+ \frac{D_i^2 D_j}{(L_n - 2D_i)^2}.
\label{eq:no_edge_bound} \end{equation} The product term in~\eqref{eq:no_edge_bound} can be upper bounded by $\exp\{-D_iD_j/L_n\}$. For the second term we use that \begin{align*}
\frac{1}{L_n}\sum_{i,j = 1}^n \frac{D_i^2 D_j}{(L_n - 2D_i)^2}
&= \frac{1}{L_n^2}\sum_{i = 1}^n D_i^2\left(\frac{1}{1-\frac{2D_i}{L_n}}\right)^2
\left(\frac{1}{L_n}\sum_{j = 1}^n D_j\right) \\
&\le \frac{1}{L_n^2} \sum_{i = 1}^n D_i^2 = \bigOp{n^{\frac{2}{\gamma} - 2}}. \end{align*} Putting everything together we obtain \begin{equation}
\frac{1}{L_n}\sum_{i,j = 1}^n\Probn{E_{ij} = 0} \le \frac{1}{L_n}\sum_{i,j = 1}^n
\exp\left\{-\frac{D_i D_j}{L_n}\right\} + \bigOp{n^{\frac{2}{\gamma} - 2}}.
\label{eq:no_edge_exponential_bound} \end{equation}
We will use~\eqref{eq:no_edge_exponential_bound} to upper bound \eqref{eq:removed_edges_alternative}. In order to obtain the desired result we will employ a Tauberian Theorem for regularly varying random variables, which we summarize first. We write $a\sim b$ to denote that $a/b$ goes to one in a corresponding limit.
\pagebreak
\begin{theorem}[Tauberian Theorem, \cite{Bingham1974}]\label{thm:tauberian_theorem}
Let $X$ be a non-negative random variable with finite mean. Then, for $1 < \gamma < 2$, the
following are equivalent,
\begin{enumerate}
\item[i)] $\displaystyle \Prob{X > t} \sim L(t) t^{-\gamma} \quad \text{as } t \to \infty$,
\item[ii)] $\displaystyle \frac{\Exp{X}}{t} - 1 + \Exp{\exp\left\{-\frac{X}{t}\right\}} \sim
c_\gamma\, L(t)\, t^{-\gamma} \quad \text{as } t \to \infty$, for some constant $c_\gamma > 0$ depending only on $\gamma$.
\end{enumerate} \end{theorem}
We will first explain the idea behind the proof of the $\bigOp{n^{1/\gamma - 1}}$ bound. If we insert~\eqref{eq:no_edge_exponential_bound} into~\eqref{eq:removed_edges_alternative} we get \begin{equation}
\frac{1}{L_n}\sum_{i,j = 1}^n \Expn{E_{ij}^c} \le 1 - \frac{n^2}{L_n}
+ \frac{1}{L_n}\sum_{i,j = 1}^n\exp\left\{-\frac{D_i D_j}{L_n}\right\}
+ \bigOp{n^{\frac{2}{\gamma} - 2}}.
\label{eq:erased_edges_exponential_bound} \end{equation} The terms on the right side can be rewritten to obtain an expression that resembles an empirical version of the left hand side of part ii) from Theorem~\ref{thm:tauberian_theorem}, with $t = L_n$ and $X = D_1 D_2$. Thus, the scaling of the average number of erased edges will be determined by the scaling that follows from the Tauberian Theorem and the Stable Law CLT.
\begin{proposition}\label{prop:second_upper_bound}
Let $G_n$ be a graph generated by the ECM with degree distribution
\eqref{eq:degree_distribution} and $1 < \gamma < 2$. Then
\begin{equation}
\frac{1}{L_n} \sum_{i,j = 1}^n \Expn{E_{ij}^c} = \bigOp{n^{\frac{1}{\gamma} - 1}}.
\label{eq:erased_edges_bound_2}
\end{equation} \end{proposition}
\begin{proof}
We start with equation~\eqref{eq:erased_edges_exponential_bound}. Since the correction term here
is of lower order, by extracting a factor $n^2/L_n$ from the other terms and using that $L_n =
\sum_{i = 1}^n D_i$, it suffices to show that
\begin{equation}
\frac{n^2}{L_n}\left(\frac{1}{n^2}\sum_{i,j = 1}^n \frac{D_i D_j}{L_n} - 1
+ \frac{1}{n^2} \sum_{i,j = 1}^n \exp\left\{-\frac{D_i D_j}{L_n}\right\}\right)
= \bigOp{n^{\frac{1}{\gamma} - 1}}.
\label{eq:removed_edges_main_term}
\end{equation}
We first consider the term inside the brackets in the left hand side of
\eqref{eq:removed_edges_main_term}.
\begin{align}
&\left|\frac{1}{n^2} \sum_{i,j = 1}^n \frac{D_i D_j}{L_n} - 1 + \frac{1}
{n^2} \sum_{i,j = 1}^n \exp\left\{-\frac{D_i D_j}{L_n}\right\}\right| \notag \\
&\le \frac{1}{n^2}\left|\frac{1}{L_n} - \frac{1}{\mu n}\right|\sum_{i,j = 1}^n D_i D_j
\label{eq:degree_estimation_linear}\\
&\hspace{10pt}+ \frac{1}{n^2} \sum_{i,j = 1}^n\left|\exp\left\{-\frac{D_i D_j}{L_n}
\right\} - \exp\left\{-\frac{D_i D_j}{\mu n}\right\}\right|
\label{eq:degree_estimation_exponential}\\
&\hspace{10pt}+ \left|\frac{1}{n^2} \sum_{i,j = 1}^n \left(\frac{D_i D_j}{\mu n} - 1 +
\exp\left\{-\frac{D_i D_j}{\mu n}\right\}\right)\right| \label{eq:tauberian_estimation}
\end{align}
Since
\[
\frac{1}{n^2} \sum_{i,j = 1}^n\left|\exp\left\{-\frac{D_i D_j}{L_n}
\right\} - \exp\left\{-\frac{D_i D_j}{\mu n}\right\}\right|
\le \frac{1}{n^2}\left|\frac{1}{L_n} - \frac{1}{\mu n}\right| \sum_{i,j = 1}^n D_i D_j,
\]
it follows from Corollary~\ref{cor:clt_edges} that both~\eqref{eq:degree_estimation_linear} and
\eqref{eq:degree_estimation_exponential} are $\bigOp{n^{\frac{1}{\gamma} - 2}}$. Next, observe
that the function $e^{-x} - 1 + x$ is non-negative, which implies, by Markov's inequality, that
\eqref{eq:tauberian_estimation} scales as its expectation
\begin{equation}
\frac{\Exp{D_1 D_2}}{\mu n} - 1 + \Exp{\exp\left\{-\frac{D_1 D_2}{\mu n}\right\}}.
\label{eq:tauberian_term}
\end{equation}
where $D_1$ and $D_2$ are two independent random variables with distribution
\eqref{eq:degree_distribution} and $1 < \gamma < 2$, so that the product $D_1 D_2$ again has
distribution~\eqref{eq:degree_distribution} with the same exponent, see for instance the
Corollary after Theorem 3 in~\cite{Embrechts1980}. Now we use Theorem~\ref{thm:tauberian_theorem}
to find that~\eqref{eq:tauberian_term}, and hence~\eqref{eq:tauberian_estimation} are
$\bigOp{n^{-\gamma}}$. Finally, the term outside of the brackets in
\eqref{eq:removed_edges_main_term} is $\bigOp{n}$ and since $1 - \gamma < \frac{1}{\gamma} - 1$
for all $\gamma > 1$, the result follows.
\qed \end{proof}
\section{Discussion}
The configuration model is one of the most important random graph models developed so far for constructing test graphs, used in the study of structural properties of, and processes on, real-world networks. The model is of course most true to reality when it produces a simple graph. Because this will happen with vanishing probability for most networks, since these have infinite degree variance, the ECM can be seen as the primary model for a neutrally wired simple graph with scale-free degrees. The fact that the fraction of erased edges is vanishing suffices for obtaining asymptotic structural properties and asymptotic behavior of network processes in the ECM. However, real-world networks are finite, albeit very large. Therefore, it is important to understand and quantify how the properties and processes in a finite network are affected by the fact that the graph is simple.
This paper presents the first step in this direction by providing probabilistic upper bounds for the number of erased edges in the undirected ECM. This second-order analysis shows that the average number of erased edges in the ECM decays at least as fast as $n^{-1}$ when the variance of the degrees is finite. Since the ECM is computationally less expensive than the RCM and other sequential algorithms, this is a strong argument for using the ECM as a standard model for generating test graphs with a given degree distribution, especially since, in contrast to Markov-Chain Monte Carlo methods using edge swap mechanics, it is theoretically well analyzed. We also uncover a new transition in the scaling of the average number of erased edges for regularly varying degree distributions with only finite mean, in terms of the exponent of the degree distribution.
Based on the empirical results found by us in~\cite{Hoorn2015}, we conjecture that the bounds we obtained are tight, up to some slowly varying functions. Therefore, as a next step one could try to prove Central Limit Theorems for the number of erased edges, using the bounds from Theorem \ref{thm:main_result} as the correct scaling factors. These tools would make it possible to perform statistical analysis of properties on networks, using the ECM as a model for generating test graphs.
\end{document}
\begin{document}
\title[On period polynomials of degree $2^m$]{On period polynomials of degree $\bm{2^m}$\\ for finite fields}
\author{Ioulia N. Baoulina}
\address{Department of Mathematics, Moscow State Pedagogical University, Krasnoprudnaya str. 14, Moscow 107140, Russia} \email{[email protected]}
\date{}
\maketitle
\begin{abstract} We obtain explicit factorizations of reduced period polynomials of degree $2^m$, $m\ge 4$, for finite fields of characteristic $p\equiv 3\text{\;or\;}5\pmod{8}$. This extends the results of G.~Myerson, who considered the cases $m=1$ and $m=2$, and S.~Gurak, who studied the case $m=3$. \end{abstract}
\keywords{{\it Keywords}: Period polynomial; cyclotomic period; $f$-nomial period; reduced period polynomial; Gauss sum; Jacobi sum; factorization.}
\subjclass{2010 Mathematics Subject Classification: 11L05, 11T22, 11T24}
\thispagestyle{empty}
\section{Introduction} Let $\mathbb F_q$ be a finite field of characteristic~$p$ with $q=p^s$ elements, $\mathbb F_q^*=\mathbb F_q^{}\setminus\{0\}$, and let $\gamma$ be a fixed generator of the cyclic group $\mathbb F_q^*$ . By ${\mathop{\rm Tr}\nolimits}:\mathbb F_q\rightarrow\mathbb F_p$ we denote the trace mapping, that is, ${\mathop{\rm Tr}\nolimits}(x)=x+x^p+x^{p^2}+\dots+x^{p^{s-1}}$ for $x\in\mathbb F_q$. Let $e$ and $f$ be positive integers such that $q=ef+1$. Denote by $\mathcal{H}$ the subgroup of $e$-th powers in $\mathbb F_q^*$. For any positive integer $n$, write $\zeta_n=\exp(2\pi i/n)$.
The cyclotomic (or $f$-nomial) periods of order $e$ for $\mathbb F_q$ with respect to $\gamma$ are defined by $$ \eta_k=\sum_{x\in\gamma^k\mathcal{H}}\zeta_p^{{\mathop{\rm Tr}\nolimits}(x)}=\sum_{h=0}^{f-1}\zeta_p^{{\mathop{\rm Tr}\nolimits}(\gamma^{eh+k})},\quad k=0,1,\dots,e-1. $$ The period polynomial of degree $e$ for $\mathbb F_q$ is the polynomial $$ P_e(X)=\prod_{k=0}^{e-1}(X-\eta_k). $$ The reduced cyclotomic (or reduced $f$-nomial) periods of order $e$ for $\mathbb F_q$ with respect to $\gamma$ are defined by $$ \eta_k^*=\sum_{x\in\mathbb F_q}\zeta_p^{{\mathop{\rm Tr}\nolimits}(\gamma^k x^e)}=1+e\eta_k,\quad k=0,1,\dots,e-1, $$ and the reduced period polynomial of degree $e$ for $\mathbb F_q$ is $$ P_e^*(X)=\prod_{k=0}^{e-1}(X-\eta_k^*). $$
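These definitions are straightforward to evaluate numerically. The following Python sketch (an illustration only, restricted to $s=1$, so that $q=p$ and ${\mathop{\rm Tr}\nolimits}(x)=x$; NumPy is assumed) computes the periods and both polynomials for $p=13$, $e=4$, $f=3$, taking the primitive root $\gamma=2$ as generator; the rounded coefficients come out as integers.
\begin{verbatim}
# Cyclotomic periods and (reduced) period polynomials for q = p = 13, e = 4.
import cmath
import numpy as np

p, e, g = 13, 4, 2        # g plays the role of gamma, a primitive root mod p
f = (p - 1) // e
zeta_p = cmath.exp(2j * cmath.pi / p)

eta = [sum(zeta_p ** pow(g, e * h + k, p) for h in range(f)) for k in range(e)]
eta_star = [1 + e * x for x in eta]

P_e      = np.poly(eta)             # coefficients of prod_k (X - eta_k)
P_e_star = np.poly(eta_star)        # coefficients of prod_k (X - eta_k^*)
print(np.round(P_e.real).astype(int))
print(np.round(P_e_star.real).astype(int))
\end{verbatim}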
The polynomials $P_e(X)$ and $P_e^*(X)$ have integer coefficients and are independent of the choice of generator~$\gamma$. They are irreducible over the rationals when $s=1,$ but not necessarily irreducible when $s>1$. More precisely, $P_e(X)$ and $P_e^*(X)$ split over the rationals into $\delta=\gcd(e,(q-1)/(p-1))$ factors of degree~$e/\delta$ (not necessarily distinct), and each of these factors is irreducible or a power of an irreducible polynomial. Furthermore, the polynomials $P_e(X)$ and $P_e^*(X)$ are irreducible over the rationals if and only if $\gcd(e,(q-1)/(p-1))=1$. For proofs of these facts, see~\cite{M}.
In the case $s=1$, the period polynomials were determined explicitly by Gauss for ${e\in\{2, 3, 4\}}$ and by many others for certain small values of~$e$. In the general case, Myerson~\cite{M} derived the explicit formulas for $P_e(X)$ and $P_e^*(X)$ when $e\in\{2,3,4\}$, and also found their factorizations into irreducible polynomials over the rationals. Gurak~\cite{G3} obtained similar results for $e\in\{6,8,12,24\}$; see also \cite{G2} for the case $s=2$, $e\in\{6,8,12\}$. Note that if $-1$ is a power of $p$ modulo $e$, then the period polynomials can also be easily obtained. Indeed, if $e>2$ and $e\mid(p^{\ell}+1)$, with $\ell$ chosen minimal, then $2\ell\mid s$, and \cite[Proposition~20]{M} yields $$ P_e^*(X)=(X+(-1)^{s/2\ell}(e-1)q^{1/2})(X-(-1)^{s/2\ell}q^{1/2})^{e-1}. $$ Baumert and Mykkeltveit~\cite{BM} found the values of cyclotomic periods in the case when $e>3$ is a prime, $e\equiv 3\pmod{4}$ and $p$ generates the quadratic residues modulo~$e$; see also \cite[Proposition~21]{M}.
It is seen immediately from the definitions that $P_e(X)=e^{-e}P_e^*(eX+1)$, and so it suffices to factorize only $P_e^*(X)$.
The aim of this paper is to obtain the explicit factorizations of the reduced period polynomials of degree $2^m$ with $m\ge 4$ in the case that $p\equiv 3\text{\;or\;}5\pmod{8}$. Notice that in this case $\mathop{\rm ord}_2(q-1)=\mathop{\rm ord}_2(p^s-1)=\mathop{\rm ord}_2 s+2$. Hence, for $p\equiv 3\pmod{8}$, $$ \gcd(2^m,(q-1)/(p-1))=\begin{cases} 2^m&\text{if $2^{m-1}\mid s$,}\\ 2^{m-1}&\text{if $2^{m-2}\parallel s$.} \end{cases} $$ Appealing to \cite[Theorem~4]{M}, we conclude that in the case when $2^{m-1}\mid s$, $P_{2^m}^*(X)$ splits over the rationals into linear factors. If $2^{m-2}\parallel s$, then $P_{2^m}^*(X)$ splits into irreducible polynomials of degrees at most 2. Similarly, for $p\equiv 5\pmod{8}$, $$ \gcd(2^m,(q-1)/(p-1))=\begin{cases} 2^m&\text{if $2^m\mid s$,}\\ 2^{m-1}&\text{if $2^{m-1}\parallel s$,}\\ 2^{m-2}&\text{if $2^{m-2}\parallel s$.} \end{cases} $$ Using \cite[Theorem~4]{M} again, we see that $P_{2^m}^*(X)$ splits over the rationals into linear factors if $2^m\mid s$, splits into linear and quadratic irreducible factors if $2^{m-1}\parallel s$, and splits into linear, quadratic and biquadratic irreducible factors if $2^{m-2}\parallel s$. Our main results are Theorems~\ref{t1} and \ref{t2}, which give the explicit factorizations of $P_{2^m}^*(X)$ in the cases $p\equiv 3\pmod{8}$ and $p\equiv 5\pmod{8}$, respectively. All the evaluations in Sections~\ref{s3} and \ref{s4} are expressed in terms of parameters occurring in quadratic partitions of some powers of~$p$.
\section{Preliminary Lemmas} \label{s2}
In the remainder of the paper, we assume that $p$ is an odd prime. Let $\psi$ be a nontrivial character on $\mathbb F_q$. We extend $\psi$ to all of $\mathbb F_q$ by setting $\psi(0)=0$. The Gauss sum $G(\psi)$ over $\mathbb F_q$ is defined by $$ G(\psi)=\sum_{x\in\mathbb F_q}\psi(x)\zeta_p^{{\mathop{\rm Tr}\nolimits}(x)}. $$ Gauss sums occur in the Fourier expansion of a reduced cyclotomic period. \begin{lemma} \label{l1} Let $\psi$ be a character of order $e>1$ on $\mathbb F_q$ such that $\psi(\gamma)=\zeta_e$. Then for $k=0,1,\dots, e-1$, $$ \eta_k^*=\sum_{j=1}^{e-1} G(\psi^j)\zeta_e^{-jk}. $$ \end{lemma}
\begin{proof} It follows from \cite[Theorem~1.1.3 and Equation~(1.1.4)]{BEW}. \end{proof}
In the next three lemmas, we record some properties of Gauss sums which will be used throughout this paper. By $\rho$ we denote the quadratic character on $\mathbb F_q$ ($\rho(x)=+1, -1, 0$ according as $x$ is a square, a non-square or zero in $\mathbb F_q$). \begin{lemma} \label{l2} Let $\psi$ be a nontrivial character on $\mathbb F_q$ with $\psi\ne\rho$. Then \begin{itemize} \item[\textup{(a)}] $G(\psi)G(\bar\psi)=\psi(-1)q$; \item[\textup{(b)}] $G(\psi)=G(\psi^p)$; \item[\textup{(c)}] $G(\psi)G(\psi\rho)=\bar\psi(4)G(\psi^2)G(\rho)$. \end{itemize} \end{lemma}
\begin{proof} See \cite[Theorems~1.1.4(a, d) and 11.3.5]{BEW} or \cite[Theorem~5.12(iv, v) and Corollary~5.29]{LN}. \end{proof}
\begin{lemma} \label{l3} We have $$ G(\rho)= \begin{cases} (-1)^{s-1}q^{1/2}&\text{if\,\, $p\equiv 1\pmod{4}$,}\\ (-1)^{s-1}i^s q^{1/2}&\text{if\,\, $p\equiv 3\pmod 4$.} \end{cases} $$ \end{lemma}
\begin{proof} See \cite[Theorem~11.5.4]{BEW} or \cite[Theorem~5.15]{LN}. \end{proof}
\begin{lemma} \label{l4} Let $p\equiv 3\pmod{8}$, $2\mid s$ and $\psi$ be a biquadratic character on $\mathbb F_q$. Then $G(\psi)=-q^{1/2}$. \end{lemma}
\begin{proof} It is a special case of \cite[Theorem~11.6.3]{BEW}. \end{proof}
Let $\psi$ be a nontrivial character on $\mathbb F_q$. The Jacobi sum $J(\psi)$ over $\mathbb F_q$ is defined by $$ J(\psi)=\sum_{x\in\mathbb F_q}\psi(x)\psi(1-x). $$ The following lemma gives a relationship between Gauss sums and Jacobi sums.
\begin{lemma} \label{l5} Let $\psi$ be a nontrivial character on $\mathbb F_q$ with $\psi\ne\rho$. Then $$ G(\psi)^2=G(\psi^2)J(\psi). $$ \end{lemma}
\begin{proof} See \cite[Theorem~2.1.3(a)]{BEW} or \cite[Theorem~5.21]{LN}. \end{proof}
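For a small prime this relation can also be verified numerically. The Python sketch below (an illustration only, with $q=p=13$, $s=1$, and the quartic character $\psi$ determined by $\psi(\gamma)=i$ for the primitive root $\gamma=2$) checks $G(\psi)^2=G(\psi^2)J(\psi)$ in floating point.
\begin{verbatim}
# Numerical check of G(psi)^2 = G(psi^2) J(psi) over F_13, psi quartic.
import cmath

p, g = 13, 2                                   # g is a primitive root mod p
dlog = {pow(g, k, p): k for k in range(p - 1)} # discrete logarithms base g
zeta_p = cmath.exp(2j * cmath.pi / p)

def psi(x, order):                             # character of the given order, psi(0) = 0
    return 0 if x % p == 0 else cmath.exp(2j * cmath.pi * dlog[x % p] / order)

def gauss(order):
    return sum(psi(x, order) * zeta_p ** x for x in range(1, p))

def jacobi(order):
    return sum(psi(x, order) * psi(1 - x, order) for x in range(p))

print(gauss(4) ** 2)
print(gauss(2) * jacobi(4))                    # should agree up to rounding
\end{verbatim}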
Let $\psi$ be a character on $\mathbb F_q$. The lift $\psi'$ of the character $\psi$ from $\mathbb F_{q^{\vphantom{r}}}$ to the extension field $\mathbb F_{q^r}$ is given by $$ \psi'(x)=\psi({\mathop{\rm N}}_{\mathbb F_{q^r}/\mathbb F_{q^{\vphantom{r}}}}(x)), \qquad x\in\mathbb F_{q^r}, $$ where ${\mathop{\rm N}}_{\mathbb F_{q^r}/\mathbb F_{q^{\vphantom{r}}}}(x)=x\cdot x^q\cdot x^{q^2}\cdots x^{q^{r-1}}=x^{(q^r-1)/(q-1)}$ is the norm of $x$ from $\mathbb F_{q^r}$ to $\mathbb F_{q^{\vphantom{r}}}$.
\begin{lemma} \label{l6} Let $\psi$ be a character on $\mathbb F_{q^{\vphantom{r}}}$ and let $\psi'$ denote the lift of $\psi$ from $\mathbb F_{q^{\vphantom{r}}}$ to $\mathbb F_{q^r}$. Then \begin{itemize} \item[\textup{(a)}] $\psi'$ is a character on $\mathbb F_{q^r}$; \item[\textup{(b)}] a character $\lambda$ on $\mathbb F_{q^r}$ equals the lift $\psi'$ of some character $\psi$ on $\mathbb F_q$ if and only if the order of $\lambda$ divides $q-1$; \item[\textup{(c)}] $\psi'$ and $\psi$ have the same order. \end{itemize} \end{lemma}
\begin{proof} See \cite[Theorem~11.4.4(a, c, e)]{BEW}. \end{proof}
The following lemma, which is due to Davenport and Hasse, connects a Gauss sum and its lift. \begin{lemma} \label{l7} Let $\psi$ be a nontrivial character on $\mathbb F_q$ and let $\psi'$ denote the lift of $\psi$ from $\mathbb F_{q^{}}$ to $\mathbb F_{q^r}$. Then $$ G(\psi')=(-1)^{r-1}G(\psi)^r. $$ \end{lemma}
\begin{proof} See \cite[Theorem~11.5.2]{BEW} or \cite[Theorem~5.14]{LN}. \end{proof}
Now we turn to the case $p\equiv 3\text{\;or\;}5\pmod{8}$. We recall a few facts which were established in our earlier paper~\cite{B2} in more general settings.
\begin{lemma} \label{l8} Let $p\equiv 3\text{\;or\;}5\pmod{8}$ and $\psi$ be a character of order~$2^r$ on $\mathbb F_q$, where $$ r\ge\begin{cases} 4&\text{if $p\equiv 3\pmod{8}$,}\\ 3&\text{if $p\equiv 5\pmod{8}$.} \end{cases} $$ Then $G(\psi)=G(\psi\rho)$. \end{lemma}
\begin{proof} See \cite[Lemma 2.13]{B2}. \end{proof}
\begin{lemma} \label{l9} Let $p\equiv 3\text{\;or\;}5\pmod{8}$, $r\ge 3$, and $\psi$ be a character of order~$2^r$ on $\mathbb F_q$. Then $$ \psi(4)= \begin{cases} 1 & \text{if $p\equiv 3\pmod{8}$,}\\ (-1)^{s/2^{r-2}}&\text{if $p\equiv 5\pmod{8}$.} \end{cases} $$ \end{lemma}
\begin{proof} See \cite[Lemma 2.16]{B2}. \end{proof}
\begin{lemma} \label{l10} Let $p\equiv 3\text{\;or\;}5\pmod{8}$, $n\ge 1$ and $r\ge 3$ be integers, $r\ge n$. Then $$ \sum_{v=0}^{2^{r-2}-1}\zeta_{2^n}^{p^v}=\begin{cases} -2^{r-2}&\text{if $n=1$,}\\ 2^{r-2}i&\text{if $n=2$ and $p\equiv 5\pmod{8}$,}\\ 2^{r-3}i\sqrt{2}&\text{if $n=3$ and $p\equiv 3\pmod{8}$,}\\ 0&\text{otherwise.} \end{cases} $$ \end{lemma}
\begin{proof} It is an immediate consequence of \cite[Lemma 2.2]{B2}. \end{proof}
The next lemma relates Gauss sums over $\mathbb F_q$ to Jacobi sums over a subfield of $\mathbb F_q$.
\begin{lemma} \label{l11} Let $p\equiv 3\text{\;or\;}5\pmod{8}$, and $\psi$ be a character of order $2^r$ on $\mathbb F_q$, where $$ r\ge n=\begin{cases} 3&\text{if $p\equiv 3\pmod{8}$,}\\ 2&\text{if $p\equiv 5\pmod{8}$,} \end{cases} $$ Assume that $2^{r-1}\mid s$. Then $\psi^{2^{r-n}}$ is equal to the lift of some character $\chi$ of order~$2^n$ on $\mathbb F_{p^{s/2^{r-n+1}}}$. Moreover, $$ G(\psi)=q^{(2^{r-n+1}-1)/2^{r-n+2}}J(\chi)\cdot\begin{cases} 1&\text{if $p\equiv 3\pmod{8}$,}\\ (-1)^{s(r-1)/2^{r-1}}&\text{if $p\equiv 5\pmod{8}$.} \end{cases} $$ \end{lemma}
\begin{proof} We prove the assertion of the lemma by induction on $r$, for $r\ge n$. Let $2^{n-1}\mid s$ and $\psi$ be a character of order $2^n$ on $\mathbb F_q$. As $2^n\mid(p^{s/2}-1)$, Lemma~\ref{l6} shows that $\psi$ is equal to the lift of some character $\chi$ of order $2^n$ on $\mathbb F_{p^{s/2}}$, that is, $\chi'=\psi$. Lemmas~\ref{l5} and \ref{l7} yield $G(\psi)=G(\chi')=-G(\chi)^2=-G(\chi^2)J(\chi)$. Note that $\chi^2$ has order~$2^{n-1}$. Thus, by Lemmas~\ref{l3} and \ref{l4}, $$ G(\chi^2)=\begin{cases} -q^{1/4}&\text{if $p\equiv 3\pmod{8}$,}\\ (-1)^{(s/2)-1}q^{1/4}&\text{if $p\equiv 5\pmod{8}$,} \end{cases} $$ and so $$ G(\psi)=q^{1/4}J(\chi)\cdot\begin{cases} 1&\text{if $p\equiv 3\pmod{8}$,}\\ (-1)^{s/2}&\text{if $p\equiv 5\pmod{8}$.} \end{cases} $$ This completes the proof for the case $r=n$.
Suppose now that $r\ge n+1$, and assume that the result is true when $r$ is replaced by $r-1$. Let $2^{r-1}\mid s$ and $\psi$ be a character of order $2^r$ on $\mathbb F_q$. Then $2^{r-2}\mid\frac s2$, and so $2^r\mid(p^{s/2}-1)$. By Lemma~\ref{l6}, $\psi$ is equal to the lift of some character $\phi$ of order $2^r$ on $\mathbb F_{p^{s/2}}$, that is $\phi'=\psi$. Applying Lemmas \ref{l2}(c), \ref{l3}, \ref{l7}, \ref{l8} and using the fact that $2^n\mid s$, we deduce \begin{equation} \label{eq1} G(\psi)=-G(\phi)^2=-G(\phi)G(\phi\rho_0)=-\bar\phi(4)G(\phi^2)G(\rho_0)=\bar\phi(4)q^{1/4}G(\phi^2), \end{equation} where $\rho_0$ denotes the quadratic character on $\mathbb F_{p^{s/2}}$. Note that $\phi^2$ has order $2^{r-1}$ and $2^{r-2}\mid\frac s2$. Hence, by inductive hypothesis, $(\phi^2)^{2^{r-1-n}}=\phi^{2^{r-n}}$ is equal to the lift of some character $\chi$ of order $2^n$ on $\mathbb F_{p^{(s/2)/2^{r-n}}}=\mathbb F_{p^{s/2^{r-n+1}}}$ and $$ G(\phi^2)=(p^{s/2})^{(2^{r-n}-1)/2^{r-n+1}}J(\chi)\cdot\begin{cases} 1&\text{if $p\equiv 3\pmod{8}$,}\\ (-1)^{(s/2)(r-2)/2^{r-2}}&\text{if $p\equiv 5\pmod{8}$,} \end{cases} $$ that is, $$ G(\phi^2)=q^{(2^{r-n}-1)/2^{r-n+2}}J(\chi)\cdot\begin{cases} 1&\text{if $p\equiv 3\pmod{8}$,}\\ (-1)^{s(r-2)/2^{r-1}}&\text{if $p\equiv 5\pmod{8}$.} \end{cases} $$ Substituting this expression for $G(\phi^2)$ into \eqref{eq1} and using Lemma~\ref{l9}, we obtain $$ G(\psi)=q^{(2^{r-n+1}-1)/2^{r-n+2}}J(\chi)\cdot\begin{cases} 1&\text{if $p\equiv 3\pmod{8}$,}\\ (-1)^{s(r-1)/2^{r-1}}&\text{if $p\equiv 5\pmod{8}$.} \end{cases} $$ It remains to show that $\psi^{2^{r-n}}$ is equal to the lift of $\chi$. Indeed, for any $x\in\mathbb F_q$ we have \begin{align*} \chi({\mathop{\rm N}}_{\mathbb F_q/\mathbb F_{p^{s/2^{r-n+1}}}}(x))&=\chi(x^{(p^s-1)/(p^{s/2^{r-n+1}}-1)})\\ &=\chi((x^{(p^s-1)/(p^{s/2}-1)})^{(p^{s/2}-1)/(p^{s/2^{r-n+1}}-1)})\\ &=\chi({\mathop{\rm N}}_{\mathbb F_{p^{s/2}}/\mathbb F_{p^{s/2^{r-n+1}}}}(x^{(p^s-1)/(p^{s/2}-1)}))=\phi^{2^{r-n}}(x^{(p^s-1)/(p^{s/2}-1)})\\ &=\left(\phi({\mathop{\rm N}}_{\mathbb F_{p^s}/\mathbb F_{p^{s/2}}}(x))\right)^{2^{r-n}}=\psi^{2^{r-n}}(x). \end{align*} Therefore $\chi'=\psi^{2^{r-n}}$, and the result now follows by the principle of mathematical induction. \end{proof}
For an arbitrary integer $k$, it is convenient to set $\eta_k^*=\eta_{\ell}^*$, where $k\equiv {\ell}\pmod{e}$, $0\le\ell\le e-1$.
\begin{lemma} \label{l12} For any integer $k$, $\eta_{kp}^*=\eta_k^*$. \end{lemma}
\begin{proof} It is a straightforward consequence of \cite[Proposition~1]{G1}. \end{proof}
From now on we shall assume that $p\equiv 3\text{\;or\;}5\pmod{8}$, $e=2^m$ with $m\ge 3$, and $\lambda$ is a character of order $2^m$ on $\mathbb F_q$ such that $\lambda(\gamma)=\zeta_{2^m}$. We observe that $2^{m-2}\mid s$.
\begin{lemma} \label{l13} We have $$ P_{2^m}^*(X)=(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)\prod_{t=0}^{m-2}(X-\eta_{2^t}^*)^{2^{m-t-2}}(X-\eta_{-2^t}^*)^{2^{m-t-2}}. $$ \end{lemma}
\begin{proof} Write \begin{align*} P_{2^m}^*(X)&=(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)\prod_{t=0}^{m-2}\,\prod_{\substack{k=1\\ 2^t\parallel k}}^{2^m-1}(X-\eta_k^*)\\ &=(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)\prod_{t=0}^{m-2}\,\prod_{\substack{k_0=1\\ 2\nmid k_0}}^{2^{m-t}-1}(X-\eta_{2^tk_0}^*). \end{align*} Since $p\equiv 3\text{\;or\;}5\pmod{8}$, \,$\pm p^0,\pm p^1,\dots, \pm p^{2^{m-t-2}-1}$ is a reduced residue system modulo $2^{m-t}$ for each $0\le t\le m-2$. Thus $$ P_{2^m}^*(X)=(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)\prod_{t=0}^{m-2}\,\prod_{j=0}^{2^{m-t-2}-1}(X-\eta_{2^t p^j}^*)(X-\eta_{-2^t p^j}^*). $$ The result now follows from Lemma~\ref{l12}. \end{proof}
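The fact used in this proof, that $\pm p^0,\pm p^1,\dots,\pm p^{2^{r-2}-1}$ is a reduced residue system modulo $2^r$ whenever $p\equiv 3\text{\;or\;}5\pmod{8}$ and $r\ge 3$, is also easy to confirm computationally; the following Python sketch (an illustration only) checks it for a few small primes and moduli.
\begin{verbatim}
# Check that {+-p^j mod 2^r : 0 <= j < 2^(r-2)} covers all odd residues mod 2^r.
for p in (3, 5, 11, 13, 19, 29):               # primes congruent to 3 or 5 mod 8
    for r in range(3, 9):
        m = 2 ** r
        reps = {pow(p, j, m) for j in range(2 ** (r - 2))}
        reps |= {(-x) % m for x in reps}
        assert reps == set(range(1, m, 2)), (p, r)
print("reduced residue system confirmed for the listed cases")
\end{verbatim}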
\begin{lemma} \label{l14} We have \begin{align*} \eta_0^*=\,&G(\rho)+\sum_{r=2}^m 2^{r-2}\left(G(\lambda^{2^{m-r}})+G(\bar\lambda^{2^{m-r}})\right),\\ \eta_{2^{m-1}}^*=\,&G(\rho)+\sum_{r=2}^{m-1} 2^{r-2}\left(G(\lambda^{2^{m-r}})+G(\bar\lambda^{2^{m-r}})\right)-2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right), \end{align*} and, for $0\le t\le m-2$, \begin{align*} \eta_{\pm 2^t}^*=\,&\sum_{r=2}^t 2^{r-2}\left(G(\lambda^{2^{m-r}})+G(\bar\lambda^{2^{m-r}})\right),\\ &+\begin{cases} -G(\rho)&\text{if $t=0$,}\\ G(\rho)-2^{t-1}\left(G(\lambda^{2^{m-t-1}})+G(\bar\lambda^{2^{m-t-1}})\right)&\text{if $t>0$,} \end{cases}\\ &\mp\begin{cases} 0&\text{if $p\equiv 3\pmod{8}$,}\\ 2^ti\,\left(G(\lambda^{2^{m-t-2}})-G(\bar\lambda^{2^{m-t-2}})\right)&\text{if $p\equiv 5\pmod{8}$,} \end{cases}\\ &\mp\begin{cases} 2^ti\sqrt{2}\,\left(G(\lambda^{2^{m-t-3}})-G(\bar\lambda^{2^{m-t-3}})\right)&\text{if $p\equiv 3\pmod{8}$ and $t\le m-3$,}\\ 0&\text{otherwise.} \end{cases} \end{align*} \end{lemma}
\begin{proof} From Lemma~\ref{l1} we deduce that $$ \eta_k^*=\sum_{j=1}^{2^m-1}G(\lambda^j)\zeta_{2^m}^{-jk}=\sum_{r=1}^m \sum_{\substack{j=1\\ 2^{m-r}\parallel j}}^{2^m-1}G(\lambda^j)\zeta_{2^m}^{-jk} =\sum_{r=1}^m \sum_{\substack{j_0=1\\ 2\nmid j_0}}^{2^r-1}G(\lambda^{2^{m-r}j_0})\zeta_{2^r}^{-j_0k}. $$ Since $\lambda^{2^{m-r}}$ has order $2^r$ and, for $r\ge 2$, $\pm p^0,\pm p^1,\dots,\pm p^{2^{r-2}-1}$ is a reduced residue system modulo $2^r$, we conclude that $$ \eta_k^*=(-1)^k G(\rho)+\sum_{r=2}^m\, \sum_{u\in\{\pm 1\}}\sum_{v=0}^{2^{r-2}-1}G(\lambda^{2^{m-r}up^v})\zeta_{2^r}^{-kup^v}, $$ or, in view of Lemma~\ref{l2}(b), \begin{equation} \label{eq2} \eta_k^*=(-1)^k G(\rho)+\sum_{r=2}^m\left[G(\lambda^{2^{m-r}})\sum_{v=0}^{2^{r-2}-1}\zeta_{2^r}^{-kp^v}+G(\bar\lambda^{2^{m-r}})\sum_{v=0}^{2^{r-2}-1}\zeta_{2^r}^{kp^v}\right]. \end{equation} The expressions for $\eta_0^*$ and $\eta_{2^{m-1}}^*$ follow immediately from \eqref{eq2}. Next we assume that $0\le t\le m-2$. If $r>t+3$, then, by Lemma~\ref{l10}, $$ \sum_{v=0}^{2^{r-2}-1}\zeta_{2^r}^{2^t p^v}=\sum_{v=0}^{2^{r-2}-1}\zeta_{2^r}^{-2^t p^v}=0, $$ and so \eqref{eq2} yields \begin{align*} \eta_{2^t}^*=\,&\sum_{r=2}^t 2^{r-2}\left(G(\lambda^{2^{m-r}})+G(\bar\lambda^{2^{m-r}})\right),\\ &+\begin{cases} -G(\rho)&\text{if $t=0$,}\\ G(\rho)-2^{t-1}\left(G(\lambda^{2^{m-t-1}})+G(\bar\lambda^{2^{m-t-1}})\right)&\text{if $t>0$,} \end{cases}\\ &+G(\lambda^{2^{m-t-2}})\sum_{v=0}^{2^t-1}i^{-p^v}+G(\bar\lambda^{2^{m-t-2}})\sum_{v=0}^{2^t-1}i^{p^v}\\ &+\begin{cases} G(\lambda^{2^{m-t-3}})\sum_{v=0}^{2^{t+1}-1}\zeta_8^{-p^v}+G(\bar\lambda^{2^{m-t-3}})\sum_{v=0}^{2^{t+1}-1}\zeta_8^{p^v}&\text{if $t\le m-3$,}\\ 0&\text{if $t=m-2$.} \end{cases} \end{align*} The asserted result now follows from Lemmas~\ref{l4} and \ref{l10}. The expression for $\eta_{-2^t}^*$ can be obtained in a similar manner. \end{proof}
\section{The Case $p\equiv 3\pmod{8}$} \label{s3}
In this section, $p\equiv 3\pmod{8}$. As before, $2^m\mid(q-1)$ and $\lambda$ is a character of order~$2^m$ on $\mathbb F_q$ with $\lambda(\gamma)=\zeta_{2^m}$.
For $3\le r\le m$, define the integers $A_r$ and $B_r$ by \begin{gather} p^{s/2^{r-2}}=A_r^2+2B_r^2,\qquad A_r\equiv -1\pmod{4},\qquad p\nmid A_r,\label{eq3}\\ 2B_r\equiv A_r(\gamma^{(q-1)/8}+\gamma^{3(q-1)/8})\pmod{p}.\label{eq4} \end{gather} It is well known that for each fixed $r$, the conditions \eqref{eq3} and \eqref{eq4} determine $A_r$ and $B_r$ uniquely.
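Finding $A_r$ and $|B_r|$ is a finite search. The brute-force Python sketch below (an illustration only; the helper function is hypothetical and not part of the paper) imposes the normalization $A\equiv -1\pmod 4$ and $p\nmid A$ from \eqref{eq3}; the sign of $B_r$ is fixed only by the additional congruence \eqref{eq4}, so the sketch returns $|B_r|$.
\begin{verbatim}
# Brute-force search for p^t = A^2 + 2B^2 with A = -1 (mod 4), p not dividing A.
from math import isqrt

def quadratic_partition(p, t):
    N = p ** t
    for A in range(-isqrt(N), isqrt(N) + 1):
        rem = N - A * A
        if rem >= 0 and rem % 2 == 0:
            B = isqrt(rem // 2)
            if 2 * B * B == rem and A % 4 == 3 and A % p != 0:
                return A, B                     # A and |B|
    return None

print(quadratic_partition(3, 2))    # 9  = (-1)^2 + 2*2^2
print(quadratic_partition(11, 1))   # 11 =   3^2  + 2*1^2
print(quadratic_partition(19, 1))   # 19 = (-1)^2 + 2*3^2
\end{verbatim}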
\begin{lemma} \label{l15} Let $r$ be an integer with $2^{r-1}\mid s$ and $3\le r\le m$. Then $$ G(\lambda^{2^{m-r}})+G(\bar\lambda^{2^{m-r}})=2A_r q^{(2^{r-2}-1)/2^{r-1}} $$ and $$ G(\lambda^{2^{m-r}})-G(\bar\lambda^{2^{m-r}})=2B_r q^{(2^{r-2}-1)/2^{r-1}}i\sqrt{2}. $$ \end{lemma}
\begin{proof} We observe that $\lambda^{2^{m-r}}$ has order $2^r$. By Lemma~\ref{l11}, $(\lambda^{2^{m-r}})^{2^{r-3}}=\lambda^{2^{m-3}}$ is equal to the lift of some octic character $\chi$ on $\mathbb F_{p^{s/2^{r-2}}}$ and $$ G(\lambda^{2^{m-r}})\pm G(\bar\lambda^{2^{m-r}})=q^{(2^{r-2}-1)/2^{r-1}}(J(\chi)\pm J(\bar\chi)). $$ Note that $\gamma^{(q-1)/(p^{s/2^{r-2}}-1)}$ is a generator of the cyclic group $\mathbb F_{p^{s/2^{r-2}}}^*$ and, by the definition of the lift, $\chi(\gamma^{(q-1)/(p^{s/2^{r-2}}-1)})=\chi({\mathop{\rm N}}_{\mathbb F_q/\mathbb F_{p^{s/2^{r-2}}}}(\gamma))=\lambda^{2^{m-3}}(\gamma)=\zeta_8$. By \cite[Lemma~17]{B1}, $J(\chi)=A_r+B_ri\sqrt{2}$, and the result follows. \end{proof}
We are now in a position to prove the main result of this section.
\begin{theorem} \label{t1} Let $p\equiv 3\pmod{8}$ and $m\ge 4$. Then $P_{2^m}^*(X)$ has a unique decomposition into irreducible polynomials over the rationals as follows: \begin{itemize} \item[\rm (a)] if $2^{m-1}\mid s$, then \begin{align*} P_{2^m}^*(X)=\,& (X-q^{\frac 12}+4B_3 q^{\frac 14})^{2^{m-2}} (X-q^{\frac 12}-4B_3 q^{\frac 14})^{2^{m-2}}\\ &\times (X-q^{\frac 12}+8B_4 q^{\frac 38})^{2^{m-3}} (X-q^{\frac 12}-8B_4 q^{\frac 38})^{2^{m-3}}\\ &\times \Bigl(X+3q^{\frac 12}-\sum_{r=3}^{m-2} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^{m-2}A_{m-1} q^{\frac{2^{m-3}-1}{2^{m-2}}}\Bigr)^2\\ &\times \Bigl(X+3q^{\frac 12}-\sum_{r=3}^{m-1} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^{m-1}A_m q^{\frac{2^{m-2}-1}{2^{m-1}}}\Bigr)\\ &\times \Bigl(X+3q^{\frac 12}-\sum_{r=3}^m 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}\Bigr)\prod_{t=2}^{m-3}Q_t(X)^{2^{m-t-2}}; \end{align*} \item[\rm (b)] if $2^{m-2}\parallel s$ and $m\ge 5$, then \begin{align*} P_{2^m}^*(X)=\,& (X-q^{\frac 12}+4B_3 q^{\frac 14})^{2^{m-2}} (X-q^{\frac 12}-4B_3 q^{\frac 14})^{2^{m-2}}\\ &\times (X-q^{\frac 12}+8B_4 q^{\frac 38})^{2^{m-3}} (X-q^{\frac 12}-8B_4 q^{\frac 38})^{2^{m-3}}\\ &\times \Bigl(X+3q^{\frac 12}-\sum_{r=3}^{m-2} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^{m-2}A_{m-1} q^{\frac{2^{m-3}-1}{2^{m-2}}}\Bigr)^2\\ &\times \left(\Bigl(X+3q^{\frac 12}-\sum_{r=3}^{m-1} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}\Bigr)^2+2^{2(m-1)}A_m^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}\right)\\ &\times \left(\Bigl(X+3q^{\frac 12}-\sum_{r=3}^{m-3} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^{m-3}A_{m-2} q^{\frac{2^{m-4}-1}{2^{m-3}}}\Bigr)^2\right.\\ &\hskip42pt+\Biggl.2^{2(m-1)}B_m^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}\Biggr)^2\, \prod_{t=2}^{m-4}Q_t(X)^{2^{m-t-2}}; \end{align*} \item[\rm (c)] if $4\parallel s$, then \begin{align*} P_{16}^*(X)=\,&(X+3q^{\frac 12}+4A_3 q^{\frac 14})^2 (X-q^{\frac 12}+4B_3 q^{\frac 14})^4 (X-q^{\frac 12}-4B_3 q^{\frac 14})^4\\ &\times\left((X+3q^{\frac 12}-4A_3 q^{\frac 14})^2+64A_4^2 q^{\frac 34}\right)\left((X-q^{\frac 12})^2+64B_4^2 q^{\frac 34}\right)^2. \end{align*} \end{itemize}
The integers $A_r$ and $|B_r|$ are uniquely determined by~\eqref{eq3}, and \begin{align*} Q_t(X)=\,&\Bigl(X+3q^{\frac 12}-\sum_{r=3}^t 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^t A_{t+1} q^{\frac{2^{t-1}-1}{2^t}}+2^{t+2}B_{t+3}q^{\frac{2^{t+1}-1}{2^{t+2}}}\Bigr)\\ &\times\Bigl(X+3q^{\frac 12}-\sum_{r=3}^t 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^t A_{t+1} q^{\frac{2^{t-1}-1}{2^t}}-2^{t+2}B_{t+3}q^{\frac{2^{t+1}-1}{2^{t+2}}}\Bigr). \end{align*} \end{theorem}
\begin{proof} Since $4\mid s$, Lemmas~\ref{l3} and \ref{l4} yield $G(\rho)=G(\lambda^{2^{m-2}})=G(\bar\lambda^{2^{m-2}})=-q^{1/2}$. Appealing to Lemmas \ref{l14} and \ref{l15}, we deduce that \begin{align} \eta_0^*&=-3q^{\frac 12}+\sum\limits_{r=3}^{m-1} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),\label{eq5}\\ \eta_{2^{m-1}}^*&=-3q^{\frac 12}+\sum\limits_{r=3}^{m-1} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}-2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),\label{eq6}\\ \eta_{\pm 2^{m-2}}^*&=-3q^{\frac 12}+\sum\limits_{r=3}^{m-2} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}-2^{m-2}A_{m-1}q^{\frac{2^{m-3}-1}{2^{m-2}}},\label{eq7}\\ \eta_{\pm 2^{m-3}}^*&=\begin{cases} q^{\frac 12}\mp 2i\sqrt{2}\left(G(\lambda)-G(\bar\lambda)\right)&\text{if $m=4$,}\\ -3q^{\frac 12}+\sum\limits_{r=3}^{m-3} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}&\\ -2^{m-3}A_{m-2}q^{\frac{2^{m-4}-1}{2^{m-3}}}\mp 2^{m-3}i\sqrt{2}\left(G(\lambda)-G(\bar\lambda)\right)&\text{if $m\ge 5$,} \end{cases}\label{eq8}\\ \eta_{\pm 1}^*&= q^{\frac 12}\pm 4B_3 q^{\frac 14}.\label{eq9} \end{align} Moreover, if $m\ge 5$, then \begin{equation} \label{eq10} \eta_{\pm 2}^*=q^{\frac 12}\pm 8B_4 q^{\frac 38} \end{equation} and, for $2\le t\le m-4$, \begin{equation} \eta_{\pm 2^t}^*=-3q^{\frac 12}+\sum_{r=3}^t 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}-2^t A_{t+1} q^{\frac{2^{t-1}-1}{2^t}}\pm 2^{t+2}B_{t+3}q^{\frac{2^{t+1}-1}{2^{t+2}}}.\label{eq11} \end{equation}
Assume that $2^{m-1}\mid s$. Combining \eqref{eq5}~--~\eqref{eq11} with Lemma~\ref{l15}, we obtain the values of the cyclotomic periods, which are all integers. Part~(a) now follows from Lemma~\ref{l13}.
Next assume that $2^{m-2}\parallel s$. We have $2^m\parallel(q-1)$, and so $\lambda(-1)=-1$. Hence, by Lemma~\ref{l2}(a), $$ \left(G(\lambda)\pm G(\bar\lambda)\right)^2=G(\lambda)^2+G(\bar\lambda)^2\pm 2\lambda(-1)q=G(\lambda)^2+G(\bar\lambda)^2\mp 2q. $$ Lemmas~\ref{l2}(c), \ref{l3}, \ref{l8}, \ref{l9} and \ref{l15} yield \begin{align*} G(\lambda)^2+G(\bar\lambda)^2&=G(\lambda)G(\lambda\rho)+G(\bar\lambda)G(\bar\lambda\rho)=\bar\lambda(4)G(\lambda^2)G(\rho)+\lambda(4)G(\bar\lambda^2)G(\rho)\\ &=-q^{1/2}(G(\lambda^2)+G(\bar\lambda^2))=-2A_{m-1}q^{(2^{m-2}-1)/2^{m-2}}, \end{align*} and thus \begin{equation} \label{eq12} \left(G(\lambda)\pm G(\bar\lambda)\right)^2=-2q^{(2^{m-2}-1)/2^{m-2}}(A_{m-1}\pm p^{s/2^{m-2}}). \end{equation}
Note that $$ A_{m-1}^2+2B_{m-1}^2=p^{s/2^{m-3}}=(p^{s/2^{m-2}})^2=(A_m^2+2B_m^2)^2 =(A_m^2-2B_m^2)^2+2\cdot(2A_mB_m)^2. $$ Hence $A_{m-1}=\pm(A_m^2-2B_m^2)$. Since $p^{s/2^{m-2}}=A_m^2+2B_m^2\equiv 3\pmod{8}$, $B_m$ is odd, and so $A_{m-1}=A_m^2-2B_m^2$. Substituting the expressions for $p^{s/2^{m-2}}$ and $A_{m-1}$ into \eqref{eq12}, we find that \begin{align*} \left(G(\lambda)+G(\bar\lambda)\right)^2&=-4A_m^2q^{(2^{m-2}-1)/2^{m-2}},\\ \left(G(\lambda)-G(\bar\lambda)\right)^2&=8B_m^2q^{(2^{m-2}-1)/2^{m-2}}. \end{align*} The last two equalities together with \eqref{eq5}, \eqref{eq6} and \eqref{eq9} imply \begin{align} (X-\eta_0^*)(X-\eta_{2^{m-1}}^*)=\,& \Bigl(X+3q^{\frac 12}-\sum\limits_{r=3}^{m-1} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}\Bigr)^2\notag\\ &+2^{2(m-1)}A_m^2q^{\frac{2^{m-2}-1}{2^{m-2}}},\label{eq13}\\ (X-\eta_{2^{m-3}}^*)(X-\eta_{-2^{m-3}}^*)=\,&\Bigl(X+3q^{\frac 12}-\sum\limits_{r=3}^{m-3} 2^{r-1}A_r q^{\frac{2^{r-2}-1}{2^{r-1}}}+2^{m-3}A_{m-2}q^{\frac{2^{m-4}-1}{2^{m-3}}}\Bigr)^2\notag\\ &+2^{2(m-1)}B_m^2q^{\frac{2^{m-2}-1}{2^{m-2}}}\qquad\text{if $m\ge 5$,}\label{eq14}\\ (X-\eta_{2^{m-3}}^*)(X-\eta_{-2^{m-3}}^*)=\,&(X-q^{\frac 12})^2+64B_4^2 q^{\frac 34}\qquad\text{if $m=4$.}\label{eq15} \end{align} Clearly, the quadratic polynomials on the right sides of \eqref{eq13}~--~\eqref{eq15} are irreducible over the rationals.
Putting \eqref{eq7}, \eqref{eq9}~--~\eqref{eq11}, \eqref{eq13}~--~\eqref{eq15} together and appealing to Lemma~\ref{l13}, we deduce parts~(b) and (c). This completes the proof. \end{proof}
\begin{remark} \label{r} {\rm The result of Gurak~\cite[Proposition~3.3(iii)]{G3} can be reformulated in terms of $A_3$ and $B_3$. Namely, $P_8^*(X)$ has the following factorization into irreducible polynomials over the rationals: \begin{align*} P_8^*(X)=\,& (X-q^{1/2})^2 (X-q^{1/2}+4B_3 q^{1/4})^2 (X-q^{1/2}-4B_3 q^{1/4})^2 &\\ &\times (X+3q^{1/2}+4A_3 q^{1/4})(X+3q^{1/2}-4A_3 q^{1/4})&\text{if $4\mid s$,}\\ P_8^*(X)=\,&(X-3q^{1/2})^2 &\\ &\times\left((X+q^{1/2})^2+16A_3^2 q^{1/2}\right)\left((X+q^{1/2})^2+16B_3^2 q^{1/2}\right)^2 &\text{if $2\parallel s$.} \end{align*} We see that Theorem~\ref{t1} is not valid for $m=3$.} \end{remark}
\section{The Case $p\equiv 5\pmod{8}$} \label{s4}
In this section, $p\equiv 5\pmod{8}$. As in the previous sections, $2^m\mid(q-1)$ and $\lambda$ denotes a character of order~$2^m$ on $\mathbb F_q$ such that $\lambda(\gamma)=\zeta_{2^m}$.
For $2\le r\le m-1$, define the integers $C_r$ and $D_r$ by \begin{gather} p^{s/2^{r-1}}=C_r^2+D_r^2,\qquad C_r\equiv 1\pmod{4},\qquad p\nmid C_r,\label{eq16}\\ D_r \gamma^{(q-1)/4}\equiv C_r\pmod{p}.\label{eq17} \end{gather} If $2^{m-1}\mid s$, we extend this notation to $r=m$. It is well known that for each fixed $r$, the conditions~\eqref{eq16} and \eqref{eq17} determine $C_r$ and $D_r$ uniquely.
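Again as a small illustration of \eqref{eq16} (not used in the sequel): if $p=5$ and $s=4$ (so that one may take $m=4$), then for $r=2$ we have $p^{s/2^{r-1}}=5^2=25=(-3)^2+4^2$, so $C_2=-3$ and $|D_2|=4$ (the representation $25=5^2+0^2$ is excluded by the condition $p\nmid C_r$), while for $r=3$ we have $p^{s/2^{r-1}}=5=1^2+2^2$, so $C_3=1$ and $|D_3|=2$; the sign of $D_r$ is then fixed by \eqref{eq17} and depends on the chosen generator $\gamma$.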
\begin{lemma} \label{l16} Let $r$ be an integer with $2^{r-1}\mid s$ and $2\le r\le m$. Then $$ G(\lambda^{2^{m-r}})+G(\bar\lambda^{2^{m-r}})=\begin{cases} -2C_r q^{(2^{r-1}-1)/2^r}&\text{if $2^r\mid s$,}\\ (-1)^r\cdot 2C_r q^{(2^{r-1}-1)/2^r}&\text{if $2^{r-1}\parallel s$,} \end{cases} $$ and $$ G(\lambda^{2^{m-r}})-G(\bar\lambda^{2^{m-r}})=\begin{cases} 2D_r q^{(2^{r-1}-1)/2^r}i&\text{if $2^r\mid s$,}\\ (-1)^{r-1}\cdot 2D_r q^{(2^{r-1}-1)/2^r}i&\text{if $2^{r-1}\parallel s$.} \end{cases} $$ \end{lemma}
\begin{proof} The proof proceeds exactly as for Lemma~\ref{l15}, except that at the end, \cite[Proposition~3]{KR} is invoked instead of \cite[Lemma~17]{B1}. \end{proof}
We are now ready to establish our second main result.
\begin{theorem} \label{t2} Let $p\equiv 5\pmod{8}$ and $m\ge 4$. Then $P_{2^m}^*(X)$ has a unique decomposition into irreducible polynomials over the rationals as follows: \begin{itemize} \item[\rm (a)] if $2^m\mid s$, then \begin{align*} P_{2^m}^*(X)=\,& (X-q^{\frac 12}+2D_2 q^{\frac 14})^{2^{m-2}} (X-q^{\frac 12}-2D_2 q^{\frac 14})^{2^{m-2}}\\ &\times\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-1}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^{m-1}C_m q^{\frac{2^{m-1}-1}{2^m}}\Bigr)\\ &\times\Bigl(X+q^{\frac 12}+\sum_{r=2}^m 2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)\prod_{t=1}^{m-2}R_t(X)^{2^{m-t-2}}; \end{align*} \item[\rm (b)] if $2^{m-1}\parallel s$, then \begin{align*} P_{2^m}^*(X)=\,& (X-q^{\frac 12}+2D_2 q^{\frac 14})^{2^{m-2}} (X-q^{\frac 12}-2D_2 q^{\frac 14})^{2^{m-2}}\\ &\times\left(\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-1}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigl)^2 -2^{2(m-1)}C_m^2 q^{\frac{2^{m-1}-1}{2^{m-1}}}\right)\\ &\times\Biggl(\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\Bigr)^2\Biggr.\\ &\hskip20pt -\Biggl.2^{2(m-1)}D_m^2 q^{\frac{2^{m-1}-1}{2^{m-1}}}\Biggr)\prod_{t=1}^{m-3}R_t(X)^{2^{m-t-2}}; \end{align*} \item[\rm (c)] if $2^{m-2}\parallel s$, then \begin{align*} P_{2^m}^*(X)=\,& (X-q^{\frac 12}+2D_2 q^{\frac 14})^{2^{m-2}} (X-q^{\frac 12}-2D_2 q^{\frac 14})^{2^{m-2}}\\ &\times\biggl(\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-3}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^{m-3}C_{m-2}q^{\frac{2^{m-3}-1}{2^{m-2}}}\Bigr)^2\biggr.\\ &\hskip26pt-\biggl.2^{2(m-2)}D_{m-1}^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}\biggr)^2 \end{align*} \begin{align*} &\times\Biggl(\biggl(\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)^2+2^{2(m-2)}C_{m-1}^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}+2^{2m-3}q\biggr)^2\Biggr.\\ &\hskip26pt -2^{2(m-1)}C_{m-1}^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}\Biggl.\Bigl(X+(2^{m-2}+1)q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)^2\Biggr)\\ &\times\prod_{t=1}^{m-4}R_t(X)^{2^{m-t-2}}. \end{align*} \end{itemize}
The integers $C_r$ and $|D_r|$ are uniquely determined by~\eqref{eq16}, and \begin{align*} R_t(X)=\,&\Bigl(X+q^{\frac 12}+\sum_{r=2}^t 2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^t C_{t+1} q^{\frac{2^t-1}{2^{t+1}}}+2^{t+1} D_{t+2} q^{\frac{2^{t+1}-1}{2^{t+2}}}\Bigr)\\ &\times\Bigl(X+q^{\frac 12}+\sum_{r=2}^t 2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^t C_{t+1} q^{\frac{2^t-1}{2^{t+1}}}-2^{t+1} D_{t+2} q^{\frac{2^{t+1}-1}{2^{t+2}}}\Bigr). \end{align*} \end{theorem}
\begin{proof} As $s$ is even, Lemma~\ref{l3} yields $G(\rho)=-q^{1/2}$. Then, applying Lemmas~\ref{l14} and \ref{l16}, we obtain \begin{align} \eta_0^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+2^{m-3}\left(G(\lambda^2)+G(\bar\lambda^2)\right)\notag\\ &+2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),\label{eq18}\\ \eta_{2^{m-1}}^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+2^{m-3}\left(G(\lambda^2)+G(\bar\lambda^2)\right)\notag\\ &-2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),\label{eq19}\\ \eta_{\pm 2^{m-2}}^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^{m-3}\left(G(\lambda^2)+G(\bar\lambda^2)\right)\notag\\ &\mp 2^{m-2}i\left(G(\lambda)-G(\bar\lambda)\right),\label{eq20}\\ \eta_{\pm 2^{m-3}}^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-3}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+2^{m-3}C_{m-2}q^{\frac{2^{m-3}-1}{2^{m-2}}}\notag\\ &\mp 2^{m-3}i\left(G(\lambda^2)-G(\bar\lambda^2)\right),\label{eq21}\\ \eta_{\pm 1}^*=\,&q^{\frac 12}\pm 2D_2 q^{\frac 14},\label{eq22} \end{align} and, for $1\le t\le m-4$, \begin{equation} \label{eq23} \eta_{\pm 2^t}^*=-q^{\frac 12}-\sum_{r=2}^t 2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+2^t C_{t+1} q^{\frac{2^t-1}{2^{t+1}}}\pm 2^{t+1} D_{t+2} q^{\frac{2^{t+1}-1}{2^{t+2}}}. \end{equation}
First suppose that $2^m\mid s$. By combining \eqref{eq18}~--~\eqref{eq23} with Lemma~\ref{l16}, we find the values of the cyclotomic periods, which are all integers. Now part~(a) follows from Lemma~\ref{l13}.
Next suppose that $2^{m-1}\parallel s$. Using \eqref{eq18}~--~\eqref{eq23} and Lemma~\ref{l16} again, we find the values of the cyclotomic periods. We observe that $\eta_0^*$ and $\eta_{2^{m-1}}^*$ as well as $\eta_{2^{m-2}}^*$ and $\eta_{-2^{m-2}}^*$ are algebraic conjugates of degree~2 over the rationals, and the remaining cyclotomic periods are integers. Therefore the polynomials $$ (X-\eta_0^*)(X-\eta_{2^{m-1}}^*)=\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-1}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)^2-2^{2(m-1)}C_m^2 q^{\frac{2^{m-1}-1}{2^{m-1}}} $$ and \begin{align*} (X-\eta_{2^{m-2}}^*)(X-\eta_{-2^{m-2}}^*)=\,&\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\Bigr)^2\\ &-2^{2(m-1)}D_m^2 q^{\frac{2^{m-1}-1}{2^{m-1}}} \end{align*} are irreducible over the rationals. Part~(b) now follows in view of Lemma~\ref{l13}.
Finally, suppose that $2^{m-2}\parallel s$. Making use of \eqref{eq18}~--~\eqref{eq20} and Lemma~\ref{l16}, we obtain \begin{align} \eta_0^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-(-1)^m\cdot 2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\notag\\ &+2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),\label{eq24}\\ \eta_{2^{m-1}}^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-(-1)^m\cdot 2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\notag\\ &-2^{m-2}\left(G(\lambda)+G(\bar\lambda)\right),\label{eq25}\\ \eta_{\pm 2^{m-2}}^*=\,&-q^{\frac 12}-\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+(-1)^m\cdot 2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\notag\\ &\mp 2^{m-2}i\left(G(\lambda)-G(\bar\lambda)\right).\label{eq26} \end{align} By employing the same type of argument as in the proof of Theorem~\ref{t1}, we see that $$ \left(G(\lambda)\pm G(\bar\lambda)\right)^2=\mp\, 2q^{(2^{m-1}-1)/2^{m-1}}\left(q^{1/2^{m-1}}\pm (-1)^m\cdot C_{m-1}\right). $$ Combining this with \eqref{eq24}~--~\eqref{eq26}, we conclude that \begin{align*} (X-\eta_0^*)(X-&\eta_{2^{m-1}}^*)\\ =\,&\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+(-1)^m\cdot 2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\Bigr)^2\\ &+2^{2m-3}q^{\frac{2^{m-1}-1}{2^{m-1}}}\left(q^{\frac 1{2^{m-1}}}+(-1)^m\cdot C_{m-1}\right) \end{align*} and \begin{align*} (X-\eta_{2^{m-2}}^*)(X-&\eta_{-2^{m-2}}^*)\\ =\,&\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-(-1)^m\cdot 2^{m-2}C_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}\Bigr)^2\\ &+2^{2m-3}q^{\frac{2^{m-1}-1}{2^{m-1}}}\left(q^{\frac 1{2^{m-1}}}-(-1)^m\cdot C_{m-1}\right). \end{align*}
Since $q^{1/2^{m-2}}=p^{s/2^{m-2}}=C_{m-1}^2+D_{m-1}^2$, we have $q^{1/2^{m-1}}>|C_{m-1}|$. This means that the polynomials $(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)$ and $(X-\eta_{2^{m-2}}^*)(X-\eta_{-2^{m-2}}^*)$ are irreducible over the reals. Furthermore, since $2^{m-2}\parallel s$, the polynomials $(X-\eta_0^*)(X-\eta_{2^{m-1}}^*)$ and $(X-\eta_{2^{m-2}}^*)(X-\eta_{-2^{m-2}}^*)$ belong to $\mathbb R[X]\setminus\mathbb Q[X]$. Since $\mathbb R[X]$ is a unique factorization domain, it follows that the polynomial \begin{align*} (X-&\eta_0^*)(X-\eta_{2^{m-1}}^*)(X-\eta_{2^{m-2}}^*)(X-\eta_{-2^{m-2}}^*)\\ =\,&\biggl(\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)^2+2^{2(m-2)}C_{m-1}^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}+2^{2m-3}q\biggr)^2\\ &-2^{2(m-1)}C_{m-1}^2 q^{\frac{2^{m-2}-1}{2^{m-2}}}\Bigl(X+(2^{m-2}+1)q^{\frac 12}+\sum_{r=2}^{m-2}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}\Bigr)^2 \end{align*} is irreducible over the rationals. Further, by Lemma~\ref{l16} and \eqref{eq21}, $$ \eta_{\pm 2^{m-3}}^*=-q^{\frac 12}-\sum_{r=2}^{m-3}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}+ 2^{m-3}C_{m-2}q^{\frac{2^{m-3}-1}{2^{m-2}}}\pm (-1)^m\cdot 2^{m-2}D_{m-1}q^{\frac{2^{m-2}-1}{2^{m-1}}}, $$ and so $\eta_{2^{m-3}}^*$ and $\eta_{-2^{m-3}}^*$ are algebraic conjugates of degree~2 over the rationals. Hence, the polynomial \begin{align*} (X-\eta_{2^{m-3}}^*)(X-\eta_{-2^{m-3}}^*)=\,&\Bigl(X+q^{\frac 12}+\sum_{r=2}^{m-3}2^{r-1}C_r q^{\frac{2^{r-1}-1}{2^r}}-2^{m-3}C_{m-2}q^{\frac{2^{m-3}-1}{2^{m-2}}}\Bigr)^2\\ &-2^{2(m-2)}D_{m-1}^2 q^{\frac{2^{m-2}-1}{2^{m-2}}} \end{align*} is irreducible over the rationals. The remaining cyclotomic periods $\eta_{\pm 2^t}^*$, $0\le t\le m-4$, are integers, in view of \eqref{eq22} and \eqref{eq23}. Now Part~(c) follows by appealing to Lemma~\ref{l13}. This concludes the proof. \end{proof}
\begin{remark} {\rm Myerson has shown \cite[Theorem~17]{M} that $P_4^*(X)$ is irreducible if $2\nmid s$, \begin{align*} P_4^*(X)=\,& (X+q^{1/2}+2C_2 q^{1/4})(X+q^{1/2}-2C_2 q^{1/4})&\\ &\times(X-q^{1/2}+2D_2 q^{1/4}) (X-q^{1/2}-2D_2 q^{1/4})&\text{if $4\mid s$,}\\ \intertext{and, with a slight modification,} P_4^*(X)=\,&\left((X+q^{1/2})^2-4C_2^2q^{1/2}\right)\left((X-q^{1/2})^2-4D_2^2q^{1/2}\right)&\text{if $2\parallel s$,} \end{align*} where in the latter case the quadratic polynomials are irreducible over the rationals. Furthermore, the result of Gurak~\cite[Proposition~3.3(ii)]{G3} can be reformulated in terms of $C_2$, $D_2$, $C_3$ and $D_3$. Namely, $P_8^*(X)$ has the following factorization into irreducible polynomials over the rationals: \begin{align*} P_8^*(X)=\,& (X-q^{1/2}+2D_2 q^{1/4})^2 (X-q^{1/2}-2D_2 q^{1/4})^2\\ &\times(X+q^{1/2}+2C_2 q^{1/4}+4C_3 q^{3/8})(X+q^{1/2}+2C_2 q^{1/4}-4C_3 q^{3/8})&\\ &\times(X+q^{1/2}-2C_2 q^{1/4}+4D_3 q^{3/8})(X+q^{1/2}-2C_2 q^{1/4}-4D_3 q^{3/8})&\!\text{if $8\mid s$,}\\ P_8^*(X)=\,& (X-q^{1/2}+2D_2 q^{1/4})^2 (X-q^{1/2}-2D_2 q^{1/4})^2&\\ &\times\left((X+q^{1/2}+2C_2 q^{1/4})^2-16C_3^2 q^{3/4}\right)&\\ &\times\left((X+q^{1/2}-2C_2 q^{1/4})^2-16D_3^2 q^{3/4}\right)&\!\text{if $4\parallel s$,}\\ P_8^*(X)=\,&\left((X-q^{1/2})^2-4D_2^2 q^{1/2}\right)^2&\\ &\times\left(\left((X+q^{1/2})^2+4C_2^2 q^{1/2}+8q\right)^2-16C_2^2 q^{1/2}(X+3q^{1/2})^2\right)&\!\text{if $2\parallel s$.} \end{align*} Thus part~(a) of Theorem~\ref{t2} remains valid for $m=2$ and $m=3$. Moreover, for $m=3$, part~(b) of Theorem~\ref{t2} is still valid (cf. Remark~\ref{r}).} \end{remark}
\end{document} |
\begin{document}
\begin{abstract} We study the density of parabolic elements in a finitely generated relatively hyperbolic group $G$ with respect to a word metric. We prove this density to be zero (apart from degenerate cases) and the limit defining the density to converge exponentially fast; this has recently been proven independently by W.\ Yang in \cite{yanggen}. As a corollary, we obtain the analogous result for the set of commuting pairs of elements in $G^2$, showing that the degree of commutativity of $G$ is equal to zero. \end{abstract} \maketitle
\section{Introduction} \label{s:intro}
A group $G$ is hyperbolic relative to a collection of subgroups $\{ H_\omega \}_{\omega \in \Omega}$ if, loosely speaking, it is hyperbolic except for the part that is inside the set $\mathcal{P}$ consisting of elements in the subgroups $H_\omega$ and their conjugates. It is therefore natural to ask whether elements of $G$ taken ``at random'' can be expected to lie outside $\mathcal{P}$ and therefore to ``behave as in a hyperbolic group''. We prove that this is the case if $G$ is finitely generated and the sequence of measures that makes sense of the words ``at random'' comes from a word metric on $G$.
More precisely, let $G$ be a finitely generated group and let $X$ be a finite generating set for $G$. Denote by $|\cdot|_X: G \to \mathbb{Z}_{\geq 0}$ the \emph{word metric} on $G$ with respect to $X$. For any $n \in \mathbb{Z}_{\geq 0}$, define the sets $$S_{G,X}(n) := \{ g \in G \mid |g|_X = n \},$$ the \emph{sphere} of radius $n$ in the Cayley graph $\Gamma(G,X)$, and $$B_{G,X}(n) := \{ g \in G \mid |g|_X \leq n \} = \bigcup_{i=0}^n S_{G,X}(i),$$ the \emph{ball} of radius $n$ in $\Gamma(G,X)$. The following definition can be used to characterise ``small'' subsets of $G$. The term ``negligible'' to describe small subsets of $G^r$ (for a finitely generated infinite group $G$) was coined in \cite{kapovich}, although the definition given therein is not equivalent to Definition \ref{d:negl} here; in the case $r = 1$, the following definition is used implicitly in \cite{burillo}.
\begin{defn} \label{d:negl}
Let $r \geq 1$, and let $\mathcal{S} \subseteq G^r$ be a subset. For $n \geq 0$, let $$\delta_X(\mathcal{S},n) := \frac{|\mathcal{S} \cap B_{G,X}(n)^r|}{|B_{G,X}(n)|^r}$$ be the fraction of elements in $B_{G,X}(n)^r$ that belong to $\mathcal{S}$. The set $\mathcal{S}$ is said to be \emph{negligible} in $G$ with respect to $X$ if $\delta_X(\mathcal{S},n) \to 0$ as $n \to \infty$. Moreover, $\mathcal{S}$ is said to be \emph{exponentially negligible} in $G$ with respect to $X$ if in addition there exists a constant $\rho > 1$ such that $\delta_X(\mathcal{S},n) \leq \rho^{-n}$ for all sufficiently large $n$. \end{defn}
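To illustrate Definition \ref{d:negl} in a simple example: if $G=\mathbb{Z}^2$ with the standard generating set $X=\{a,b\}$ and $\mathcal{S}=\langle a\rangle$, then $|B_{G,X}(n)|=2n^2+2n+1$ and $|\mathcal{S}\cap B_{G,X}(n)|=2n+1$, so $\delta_X(\mathcal{S},n)\to 0$ like $\frac{1}{n}$; thus $\mathcal{S}$ is negligible, but not exponentially negligible, in $G$ with respect to $X$.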
There are various definitions of relatively hyperbolic groups, due to M.~Gromov \cite{gromov}, B.~Farb \cite{farb}, C.~Dru\c{t}u \& M.~Sapir \cite{drutu}, D.~V.~Osin \cite{osin06def}, D.~Groves \& J.~S.~Manning \cite{groves}, and B.~H.~Bowditch \cite{bowditch}. In this paper we use the definition by Osin; for a precise statement, see Section \ref{s:prelim}. Our main result is as follows:
\begin{thm} \label{t:main} Let $G$ be a finitely generated group that is not virtually cyclic, and let $X$ be a finite generating set. Suppose that $G$ is hyperbolic with respect to a collection of \emph{proper} subgroups $\{ H_\omega \}_{\omega \in \Omega}$. Let $$\mathcal{P} := \bigcup_{\substack{\omega \in \Omega \\ g \in G}} H_\omega^g$$ be the set of \emph{parabolic} elements of $G$. Then $\mathcal{P}$ is exponentially negligible in $G$ with respect to $X$. \end{thm}
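For instance, Theorem \ref{t:main} applies when $G=A\ast B$ is a free product of two non-trivial finitely generated groups which are not both of order two (so that $G$ is not virtually cyclic): such a group is hyperbolic relative to $\{A,B\}$, and hence the set of elements of $G$ that are conjugate into $A$ or into $B$ is exponentially negligible with respect to any finite generating set.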
\begin{rmk} During the process of writing up this paper, the author has discovered a more general result by W.~Yang in \cite{yanggen}. In particular, Theorem 1.7 therein is the same as Theorem \ref{t:main} above, and it is closely related to a genericity result that works in a more general setting \cite[Theorem A]{yanggen}. Thus most of this paper merely gives an alternative proof to a recent but already-known result. \end{rmk}
As an immediate consequence of the Theorem we obtain: \begin{cor} \label{c:loxo} Let $G$ and $X$ be as in Theorem \ref{t:main}. Let $\mathcal{Q} \subseteq G$ be the set of finite order elements. Then $\mathcal{P} \cup \mathcal{Q}$ is exponentially negligible in $G$ with respect to $X$. \end{cor}
The next result computes the degree of commutativity of relatively hyperbolic groups. The \emph{degree of commutativity} of a finitely generated group $G$ with respect to a finite generating set $X$ was defined by \cite{amv} as
$$\operatorname{dc}_X(G) := \limsup_{n \to \infty} \frac{|\{ (x,y) \in B_{G,X}(n)^2 \mid xy=yx \}|}{|B_{G,X}(n)|^2}.$$ It has been conjectured \cite[Conjecture 1.6]{amv} that $\operatorname{dc}_X(G) = 0$ whenever $G$ is not virtually abelian (independently of $X$). The next corollary confirms the conclusion of the conjecture in the case when $G$ is a non-elementary relatively hyperbolic group, thereby generalising the result for hyperbolic groups in \cite[Theorem 1.7]{amv}.
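Note that if $G$ is a finite group, then $B_{G,X}(n)=G$ for all sufficiently large $n$, so $\operatorname{dc}_X(G)$ is just the classical commuting probability $\frac{|\{ (x,y) \in G^2 \mid xy=yx \}|}{|G|^2}=\frac{k(G)}{|G|}$, where $k(G)$ denotes the number of conjugacy classes of $G$.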
\begin{cor} \label{c:dc} Let $G$ and $X$ be as in Theorem \ref{t:main}. Then the set of pairs of commuting elements, $\{ (x,y) \in G^2 \mid xy=yx \}$, is exponentially negligible in $G$ with respect to $X$. \end{cor}
The structure of the paper is as follows. Section \ref{s:prelim} defines relatively hyperbolic groups and recalls some of the main results on the geometry of their Cayley graphs. Section \ref{s:geod} derives some further results relating geodesics and quasi-geodesics of the ``usual'' (i.e.\ locally finite) and ``coned-off'' (cf.\ the terminology in \cite{farb}) Cayley graphs. We give a proof of Theorem \ref{t:main} in Section~\ref{s:pftmain}, and proofs of Corollaries \ref{c:loxo} and \ref{c:dc} in Section \ref{s:cor}.
\begin{ntn}
For a group $G$ and a generating subset $Z \subseteq G$, we will write $Z^\ast$ for the set of all words over $Z \cup Z^{-1}$, and we will identify these words in the obvious way with paths starting at $1 \in G$ in the Cayley graph $\Gamma(G,Z)$ (we do not require $Z$ to be finite and so $\Gamma(G,Z)$ to be locally finite). Moreover, we will identify $G$ with the vertices of $\Gamma(G,Z)$. Given a path $P$ in $\Gamma(G,Z)$, we also say it is \emph{labelled} by a word $Q \in Z^\ast$ if $P = gQ$ (viewed as paths) for some $g \in G$, and we let $P_-$ (resp.\ $P_+$) be the starting (resp.\ ending) vertex of $P$. For a word $P \in Z^\ast$, we will write $\ell_Z(P)$ for the length of the path $P$ in $\Gamma(G,Z)$ (= number of letters in $P \in Z^\ast$), and we will write $\overline{P} = \overline{P}_G$ for the corresponding element of $G$. For an element $g \in G$, we will write $|g|_Z$ for the word length of $g$ with respect to $Z$; in particular, if $p$, $q$ are vertices in $\Gamma(G,Z)$, then the distance between them will be $|p^{-1}q|_Z$. \end{ntn}
\section{Preliminaries} \label{s:prelim}
We use Osin's definition of relative hyperbolicity, given in \cite{osin06def}. For this, let $G$ be a group, $\{ H_\omega \}_{\omega \in \Omega}$ a collection of subgroups of $G$, and $X \subseteq G$ a subset. Define the group $$F := \left(\ast_{\omega \in \Omega} H_\omega\right) \ast F(X),$$ and let $\varphi: F \to G$ be the canonical homomorphism. If there exists a \emph{finite} subset $X \subseteq G$ as above such that $\varphi$ is surjective and $\ker(\varphi)$ is the normal closure of a \emph{finite} set $\mathcal{R} \subseteq F$, then $G$ is said to be \emph{finitely presented relative to} $\{ H_\omega \}_{\omega \in \Omega}$.
Moreover, if we define $$\mathcal{H} := \bigcup_{\omega \in \Omega} \left( H_\omega \setminus \{1\} \right),$$ then any word $P \in \left( X \cup \mathcal{H} \right)^\ast$ such that the image $\overline{P} = \overline{P}_F$ of $P$ in $F$ is in the kernel of $\varphi$ satisfies an equality $$\overline{P} = \prod_{i=1}^n f_i^{-1} r_i^{\varepsilon_i} f_i$$ for some $f_i \in F$, $r_i \in \mathcal{R}$ and $\varepsilon_i \in \{ \pm 1 \}$; let $\operatorname{Area}_{rel}(P)$ be the minimal value of $n$ such that $\overline{P}$ can be written as above. If there exists a constant $C \geq 0$ such that $$\operatorname{Area}_{rel}(P) \leq C \ell_{X \cup \mathcal{H}}(P)$$ for every $P \in \left( X \cup \mathcal{H} \right)^\ast$ such that $\overline{P}_G = 1$, then $G$ is said to \emph{satisfy a relative linear isoperimetric inequality} (with respect to $X$ and $\{ H_\omega \}_{\omega \in \Omega}$).
\begin{defn} \label{d:rh} The group $G$ is said to be \emph{hyperbolic relative to} $\{ H_\omega \}_{\omega \in \Omega}$ if it is finitely presented relative to $\{ H_\omega \}_{\omega \in \Omega}$ and satisfies a relative linear isoperimetric inequality. We call the $H_\omega$ the \emph{peripheral subgroups} of $G$, and say $G$ is \emph{relatively hyperbolic} if it is hyperbolic relative to some collection of peripheral subgroups. \end{defn}
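For instance, any free product $G = H_1 \ast H_2$ of two groups is hyperbolic relative to $\{ H_1, H_2 \}$: in the definitions above one may take $X = \varnothing$ and $\mathcal{R} = \varnothing$, since then $\varphi$ is an isomorphism and every word $P \in \mathcal{H}^\ast$ with $\overline{P}_G = 1$ satisfies $\operatorname{Area}_{rel}(P) = 0$.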
For the remainder of the paper we fix a group $G$ and a collection of \emph{proper} subgroups $\{ H_\omega \}_{\omega \in \Omega}$, such that $G$ is hyperbolic relative to $\{ H_\omega \}_{\omega \in \Omega}$ (given some finite subset $X \subseteq G$, which is also fixed for now). We will usually assume that, moreover, $G$ is finitely generated and $X$ is a (finite) generating set.
It is worth noting that in this case Definition \ref{d:rh} is independent of the choice of the finite generating set $X$. Indeed, given two finite generating sets $X$ and $Y$, suppose that $G$ is finitely presented relative to $\{ H_\omega \}_{\omega \in \Omega}$, with $X$ playing the role of the finite subset in the definition above. Then the same holds with $Y$ in place of $X$, since the canonical homomorphism $$\widetilde\varphi: \widetilde{F} := \left(\ast_{\omega \in \Omega} H_\omega\right) \ast F(Y) \to G$$ is surjective and $$\ker(\widetilde\varphi) = \langle\!\langle \{ \psi_Y(r) \mid r \in \mathcal{R} \} \cup \{ y^{-1}\psi_Y(\psi_X(y)) \mid y \in Y \} \rangle\!\rangle^{\widetilde{F}},$$ where $\mathcal{R}$ is as above, and $\psi_X(P)$ (resp.\ $\psi_Y(P)$) is a word over $X \cup \mathcal{H}$ (resp.\ $Y \cup \mathcal{H}$) obtained by replacing every letter of $Y$ (resp.\ $X$) in $P \in (Y \cup \mathcal{H})^\ast$ (resp.\ $P \in (X \cup \mathcal{H})^\ast$) by a word over $X$ (resp.\ $Y$) representing the same element in $G$. Moreover, \cite[Theorem 2.34]{osin06def} says that $G$ satisfies a relative linear isoperimetric inequality with respect to $X$ if and only if it satisfies one with respect to $Y$.
The definition below summarises common terms used to describe paths in ${\Gamma(G,X \cup \mathcal{H})}$. The endpoints of paths in Cayley graphs that we consider will always be vertices. \begin{defn} Let $P$ be a path in the Cayley graph $\Gamma(G,X \cup \mathcal{H})$. We say that \begin{enumerate}[(i)] \item a subpath $Q$ of $P$ is an \emph{$H_\omega$-subpath} if it is labelled by a word from $(H_\omega)^\ast$, with a convention that a single vertex is an $H_\omega$-subpath for any $\omega \in \Omega$; \item a subpath $Q$ of $P$ is an \emph{($H_\omega$-)component} if it is a maximal $H_\omega$-subpath, and a maximal $H_\omega$-subpath $Q$ is called a \emph{trivial ($H_\omega$-)component} if it is a single vertex; \item a vertex $p$ of $P$ is \emph{non-phase} if it is an interior vertex of some component of $P$, and $p$ is \emph{phase} otherwise; \item two $H_\omega$-components $Q_1$, $Q_2$ of paths $P_1$, $P_2$, respectively, in the graph $\Gamma(G,X \cup \mathcal{H})$ are \emph{connected} if there is an edge from $(Q_1)_-$ to $(Q_2)_-$ labelled by an element of $H_\omega$; \item an $H_\omega$-component $Q$ of $P$ is \emph{isolated} if it is not connected to any other $H_\omega$-component of $P$; \item the path $P$ \emph{does not vertex backtrack} (resp.\ \emph{does not backtrack}) if all its components (resp.\ all its non-trivial components) are isolated, and \emph{vertex backtracks} (resp.\ \emph{backtracks}) otherwise. \end{enumerate} \end{defn} It is clear that if $P$ is a geodesic in $\Gamma(G,X \cup \mathcal{H})$, then $P$ does not vertex backtrack and all its vertices are phase.
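For instance, if $P$ is a path in $\Gamma(G,X \cup \mathcal{H})$ labelled by a word $xh_1h_2y$ with $x,y \in X$ and $h_1,h_2 \in H_\omega \setminus \{1\}$, then the subpath of $P$ labelled by $h_1h_2$ is an $H_\omega$-component of $P$, and its interior vertex is the only non-phase vertex of $P$.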
We are interested in the collection $\mathcal{P}$ of parabolic elements of $G$. \begin{defn} An element $g \in G$ is \emph{parabolic} if it is conjugate to some element of $H_\omega$ for some $\omega \in \Omega$, and $g$ is \emph{hyperbolic} otherwise. We denote by $$\mathcal{P} := \bigcup_{\substack{\omega \in \Omega \\ g \in G}} H_\omega^g$$ the set of parabolic elements of $G$. \end{defn}
We now recall some of the results about relatively hyperbolic groups which will be used in this paper. The first of them is a stronger version of the statement that the graph $\Gamma(G,X \cup \mathcal{H})$ is Gromov-hyperbolic.
\begin{thm}[\cite{osin06def}, Theorem 3.26] \label{t:hyp}
There exists a constant $\nu \in \mathbb{Z}_{\geq 1}$ with the following property. Let $\Delta \subseteq \Gamma(G,X \cup \mathcal{H})$ be a geodesic triangle with edges $P$, $Q$ and $R$, and let $p \in P$ be a vertex. Then there exists a vertex $q \in Q \cup R$ such that $$|p^{-1}q|_X \leq \nu.$$ \end{thm}
The second result introduces what is known as the \emph{bounded coset penetration (BCP) property}. For this, recall the following definition.
\begin{defn}
Let $(K,d)$ be a geodesic metric space, $\lambda \geq 1$, and $c \geq 0$. A path $\alpha: [0,\ell_\alpha] \to K$ parametrised by arc length is a \emph{$(\lambda,c)$-quasi-geodesic} in $K$ if $$|t_1-t_2| \leq \lambda d(\alpha(t_1),\alpha(t_2))+c$$ for all $t_1,t_2 \in [0,\ell_\alpha]$. \end{defn}
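For instance, every geodesic (parametrised by arc length) is a $(1,0)$-quasi-geodesic, and every path of length at most $c$ is a $(1,c)$-quasi-geodesic.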
\begin{thm}[\cite{osin06def}, Theorem 3.23] \label{t:bcp} For any given $\lambda \geq 1$ and $c \geq 0$, there exists a constant $\varepsilon = \varepsilon(\lambda,c) \geq 0$ with the following property. Let $P$ and $Q$ be $(\lambda,c)$-quasi-geodesic paths in $\Gamma(G,X \cup \mathcal{H})$ that do not backtrack such that $P_- = Q_-$ and $P_+ = Q_+$. Then \begin{enumerate}[(i)]
\item \label{i:t-bcp-near} If $p \in P$ is a phase vertex, then there exists a phase vertex $q \in Q$ such that $|p^{-1}q|_X \leq \varepsilon$.
\item \label{i:t-bcp-conn} If $R$ is a non-trivial $H_\omega$-component of $P$ and $|R_-^{-1}R_+|_X > \varepsilon$, then there exists a non-trivial $H_\omega$-component of $Q$ that is connected to $R$.
\item \label{i:t-bcp-ste} If $R \subseteq P$ and $S \subseteq Q$ are connected non-trivial $H_\omega$-components, then $$\max \{ |R_-^{-1}S_-|_X, |R_+^{-1}S_+|_X \} \leq \varepsilon.$$ \end{enumerate} \end{thm}
The following (perhaps less standard) result allows us to enlarge an arbitrary finite generating set $Y$ of $G$ to a ``nicer'' set $X$ such that geodesics in $\Gamma(G,X)$ can be related to quasi-geodesics in $\Gamma(G,X \cup \mathcal{H})$. For this we need to construct derived paths, defined by Antol\'in \& Ciobanu in \cite[Construction 4.1]{ac} (to avoid unnecessary complications, we will only define this for geodesics).
\begin{defn} Let $X$ be a finite generating set for $G$, and let $P$ be a geodesic path in $\Gamma(G,X)$. We can (uniquely) express $P$ as a concatenation $$P = A_0U_1A_1\cdots U_nA_n,$$ where the $U_i$ are labelled by non-trivial words in $(H_{\omega_i})^\ast$ for some $\omega_i \in \Omega$, and no $U_i$ is a proper subpath of a subpath $Q$ of $P$ such that $(U_i)_- = Q_-$ and $Q$ is labelled by a word in some $(H_\omega)^\ast$. \begin{enumerate}[(i)] \item The \emph{derived path} $\widehat{P}$ of $P$ is a path in $\Gamma(G,X \cup \mathcal{H})$ given by $$\widehat{P} := A_0h_1A_1 \cdots h_nA_n,$$ where $h_i$ is an edge labelled by an element of $H_{\omega_i}$ such that $\overline{h_i} = \overline{U_i}$ in $G$. \item We call a vertex $p \in P$ a \emph{phase vertex} of $P$ if it is not an interior vertex of any of the $U_i$ (and so ``survives'' in $\widehat{P}$). \end{enumerate} \end{defn}
\begin{thm}[\cite{ac}, Lemma 5.3] \label{t:gsl} Let $Y$ be an arbitrary finite generating set for $G$. Then there exist $\lambda \geq 1$, $c \geq 0$ and a finite subset $\mathcal{H}'$ of $\mathcal{H}$ such that for every finite subset $X$ of $G$ satisfying $$Y \cup \mathcal{H}' \subseteq X \subseteq Y \cup \mathcal{H}$$ and for any geodesic path $P$ in $\Gamma(G,X)$, the derived path $\widehat{P}$ in $\Gamma(G,X \cup \mathcal{H})$ is a $(\lambda,c)$-quasi-geodesic that does not vertex backtrack. \end{thm}
A generating set $X$ satisfying the conclusion of Theorem \ref{t:gsl} will be called a \emph{well-behaved} generating set.
Finally, we have the following finiteness results.
\begin{thm}[\cite{osin06def}, Corollary 2.48] \label{t:finmanypar}
If $G$ is finitely generated, then we have $|\Omega| < \infty$. \end{thm}
\begin{thm}[\cite{osin06def}, Proposition 2.36] \label{t:almaln} If $H_\omega \cap H_{\widetilde\omega}^g$ is infinite for some $\omega,\widetilde\omega \in \Omega$ and $g \in G$, then $\widetilde\omega = \omega$ and $g \in H_\omega$. \end{thm}
\section{Geodesics in Cayley graphs} \label{s:geod}
Combining Theorems \ref{t:hyp}, \ref{t:bcp}\eqref{i:t-bcp-near} and \ref{t:gsl} it is easy to see that we have the following result. In particular, we may take $\tilde\delta := \nu + 2\varepsilon(\lambda,c)$, where $\nu$ and $(\lambda,c)$ are given by Theorems \ref{t:hyp} and \ref{t:gsl}, respectively, and $\varepsilon(\lambda,c)$ is given by Theorem \ref{t:bcp}.
\begin{cor} \label{c:hyp}
Let $X$ be a well-behaved generating set of $G$. There exists a constant $\tilde\delta \geq 0$ with the following property. Consider a geodesic triangle in $\Gamma(G,X)$ formed by edges $P$, $Q$ and $R$, and let $p$ be a phase vertex of $P$. Then there exists a phase vertex $q$ of either $Q$ or $R$ such that $$|p^{-1}q|_X \leq \tilde\delta. \eqno\qed$$ \end{cor}
This implies that for a path $P$ in $\Gamma(G,X)$ that is ``not too long'', phase vertices of the geodesic with endpoints $P_-$, $P_+$ are ``not too far'' from $P$.
\begin{prop} \label{p:logclose}
Let $X$ be a well-behaved generating set of $G$, and let $P$ be a path in $\Gamma(G,X)$. Let $Q$ be a geodesic in $\Gamma(G,X)$ with endpoints $Q_- =P_-$, $Q_+=P_+$, and let $q$ be a phase vertex of $Q$. Then there exists a vertex $p$ of $P$ such that $$|p^{-1}q|_X \leq \tilde\delta \left\lceil \log_2(\ell_X(P)) \right\rceil$$ for a universal constant $\tilde\delta \geq 0$. \end{prop}
\begin{proof} Let $\tilde\delta \geq 0$ be the constant given by Corollary \ref{c:hyp}, and let $$s := \left\lceil \log_2(\ell_X(P)) \right\rceil.$$ The proof resembles the standard argument showing that geodesics in a hyperbolic metric space diverge exponentially; see e.g.\ \cite[Lemma 7.1.A]{gromov}.
We start with the geodesic $Q$ and, for $b$ a binary string of length $\leq s$, define geodesics $Q_b$ and subpaths $P_b$ of $P$ with the same endpoints as follows: for the empty string we set $Q_\varnothing := Q$ and $P_\varnothing := P$. Suppose that $Q_b$ and $P_b$ have been defined for a binary string $b$ of length $< s$. Let $m_b$ be a vertex on $P_b$ such that $$\left| \ell_X(P_{b0})-\ell_X(P_{b1}) \right| \leq 1,$$ where $P_{b0}$ (resp.\ $P_{b1}$) is the subpath of $P_b$ with endpoints $(Q_b)_-$ and $m_b$ (resp.\ $m_b$ and $(Q_b)_+$). Then we define $Q_{b0}$ (resp.\ $Q_{b1}$) to be a geodesic with endpoints $(Q_b)_-$ and $m_b$ (resp.\ $m_b$ and $(Q_b)_+$). Note that if $b$ has length $s$, then $\ell_X(P_b) \leq 1$, so we may take $Q_b := P_b$; in particular, $Q_b$ is a subpath of $P$.
Now let $q = q_0$ be a phase vertex of $Q$, and construct phase vertices $q_i$ of $Q_{b(i)}$, for $1 \leq i \leq s$ and $b(i)$ a binary string of length $i$, as follows. Suppose $q_j$ and $b(j)$ have been chosen for $0 \leq j \leq i$, for some $i < s$. Consider the geodesic triangle formed by edges $Q_{b(i)}$, $Q_{b(i)0}$ and $Q_{b(i)1}$. Then by Corollary \ref{c:hyp}, for some $c \in \{ 0,1 \}$ there exists a phase vertex $q_{i+1}$ of $Q_{b(i+1)}$, where $b(i+1) = b(i)c$, such that $|q_i^{-1} q_{i+1}|_X \leq \tilde\delta$.
Finally, $Q_{b(s)}$ is a subpath of $P$, so in particular $q_s \in P$, and we get $$|q^{-1}q_s|_X \leq \sum_{i=0}^{s-1} |q_i^{-1}q_{i+1}|_X \leq \tilde\delta s,$$ as required. \end{proof}
In particular, as a corollary we obtain the following result.
\begin{cor} \label{c:logclose}
Let $Y$ be any finite generating set for $G$. Then there exist constants $\tilde\lambda \geq 0$ and $\tilde{c} \geq 0$ such that the following holds. Let $P$ be a geodesic path in $\Gamma(G,Y)$, and let $Q$ be a geodesic path in $\Gamma(G,Y \cup \mathcal{H})$ such that $Q_- = \widehat{P}_-$ and $Q_+ = \widehat{P}_+$. Then for any vertex $q$ of $Q$, there exists a vertex $p$ of $P$ such that $$|p^{-1}q|_Y \leq \tilde\lambda \log_2(\ell_Y(P)) + \tilde{c}.$$ \end{cor}
\begin{rmk}
In fact, the conclusion of Corollary \ref{c:logclose} can be strengthened by further requiring a constant bound on $|p^{-1}q|_Y$ that is independent of $P$, i.e.\ we can further assume that $\tilde\lambda = 0$, and (moreover) it is enough to require $P$ to be a quasi-geodesic. This is shown in \cite[Lemma 8.8]{hruska}. However, for the purposes of proving Theorem \ref{t:main}, the conclusion of Corollary \ref{c:logclose} is enough. \end{rmk}
\begin{proof}[Proof of Corollary \ref{c:logclose}]
Let $X$ be the well-behaved finite generating set containing $Y$ given by Theorem \ref{t:gsl}, and note that $X \cup \mathcal{H} = Y \cup \mathcal{H}$. Let $\lambda_X$ be the constant of the bilipschitz equivalence of $Y$ and $X$, i.e.\ a constant such that $|g|_Y \leq \lambda_X |g|_X$ for any $g \in G$ (we may take $\lambda_X := \max \{ |x|_Y \mid x \in X \}$). Given the generating set $X$, let $(\lambda,c)$ and $\tilde\delta$ be given by Theorem \ref{t:gsl} and Corollary \ref{c:hyp}, respectively, and let $\varepsilon(\lambda,c)$ be given by Theorem \ref{t:bcp}.
Now let $R$ be a geodesic path in $\Gamma(G,X)$ with $R_- = i(P)_-$ and $R_+ = i(P)_+$, where $i: \Gamma(G,Y) \hookrightarrow \Gamma(G,X)$ is the canonical inclusion. Let $q \in Q$ be a vertex; note that, since $Q$ is a geodesic, $q$ is necessarily a phase vertex. By Theorem \ref{t:gsl}, $\widehat{R}$ is a $(\lambda,c)$-quasi-geodesic, and so by Theorem \ref{t:bcp}\eqref{i:t-bcp-near}, there exists a vertex $r$ of $\widehat{R}$ (viewed also as a phase vertex $r$ of $R$) such that $|r^{-1}q|_X \leq \varepsilon(\lambda,c)$. Now let $p \in P$ (technically, $i(p) \in i(P)$) be the vertex given by applying Proposition \ref{p:logclose} to the path $i(P)$, the geodesic $R$ and the phase vertex $r$ of $R$. Thus we have \begin{align*}
\frac{1}{\lambda_X} |p^{-1}q|_Y &\leq |p^{-1}q|_X \leq |p^{-1}r|_X + |r^{-1}q|_X \leq \tilde\delta \left\lceil \log_2(\ell_X(P)) \right\rceil + \varepsilon(\lambda,c) \\ &\leq \tilde\delta \left( \log_2(\ell_Y(P))+1 \right) + \varepsilon(\lambda,c), \end{align*} so setting $\tilde\lambda := \lambda_X \tilde\delta$ and $\tilde{c} := \lambda_X \left( \tilde\delta + \varepsilon(\lambda,c) \right)$ gives the result. \end{proof}
In particular, note that with $P$, $p$ and $q$ as in Corollary \ref{c:logclose}, the number $|p^{-1}q|_Y$ has a sublinear upper bound in terms of $\ell_Y(P)$. We will fix the constants $\tilde\lambda$ and $\tilde{c}$ given by Corollary \ref{c:logclose} for the remainder of this section. The rest of the section is devoted to the proof of the following theorem.
\begin{thm} \label{t:parbound1}
Suppose $G$ is not virtually cyclic, and let $X$ be a finite generating set for $G$. Then we have $$|\mathcal{P} \cap B_{G,X}(n)| \leq D \sum_{\omega \in \Omega} \sum_{i=0}^{\left\lfloor \frac{n+f(n)}{2} \right\rfloor} |S_{G,X}(i)| |H_\omega \cap B_{G,X}(n+f(n)-2i)|$$ for some $D \geq 0$ and some function $f: \mathbb{Z}_{\geq 0} \to \mathbb{Z}_{\geq 0}$ such that $\frac{f(n)}{n} \to 0$ as $n \to \infty$. \end{thm}
Let $\hat{h} \in \mathcal{P}$ be an arbitrary parabolic element; by increasing the constant $D$ if necessary we may assume that $\hat{h} \neq 1$. Thus $\hat{h} = ghg^{-1}$ for some $g \in G$ and $h \in H_\omega$ for some $\omega \in \Omega$; choose $(g,h)$ in such a way that $|g|_{X \cup \mathcal{H}}$ is minimal. Consider the conjugacy diagram $R_1 Q R_2^{-1} P^{-1}$, where the paths $P$, $Q$, $R_1$ and $R_2$ are geodesics in $\Gamma(G, X \cup \mathcal{H})$ such that $\overline{P}_G = \hat{h}$, $\overline{Q}_G = h$ and $\overline{(R_1)}_G = \overline{(R_2)}_G = g$; note that $Q$ is a single edge. Let $P_0$ be a geodesic in $\Gamma(G,X)$ such that $(\widehat{P_0})_- = P_-$ and $(\widehat{P_0})_+ = P_+$. See Figure \ref{f:mapto3}, left.
\begin{figure}
\caption{The conjugacy diagram $R_1 Q R_2^{-1} P^{-1}$ (left) and the map $i$ to the tripod (right).}
\label{f:mapto3}
\end{figure}
Now we temporarily relax the assumption that endpoints of paths in Cayley graphs are always assumed to be vertices, and view all graphs as geodesic metric spaces. Let $m$ be the midpoint of the edge $Q$ and consider the isometry $i$ from the diagram $R_1 Q R_2^{-1} P^{-1}$ to a tripod $T$, such that $P_-$, $P_+$ and $m$ are mapped to the ``leaves'' of $T$. Let $c$ be the ``branching vertex'' of $T$, and let $c_1$ (resp.\ $c_2$, $c_3$) be the (unique) point in $i^{-1}(c) \cap R_2$ (resp.\ $i^{-1}(c) \cap R_1$, $i^{-1}(c) \cap P$). For each $j \in \mathbb{Z}/3\mathbb{Z}$, let $U_j$ be a geodesic in $\Gamma(G,X \cup \mathcal{H})$ with $(U_j)_- = c_{j-1}$ and $(U_j)_+ = c_{j+1}$. Finally, for $j \in \mathbb{Z}/2\mathbb{Z}$, let $R_{j1}$ (resp.\ $R_{j2}$) be the subpath of $R_j$ with endpoints $(R_{j1})_- = (R_j)_-$ and $(R_{j1})_+ = c_{j+1}$ (resp.\ $(R_{j2})_- = c_{j+1}$ and $(R_{j2})_+ = (R_j)_+$). See Figure \ref{f:mapto3}.
We call a geodesic $n$-gon in a graph \emph{nice} if its vertices (as an $n$-gon) are also vertices of the graph. Now by Theorem \ref{t:hyp}, nice geodesic triangles in $\Gamma(G, X \cup \mathcal{H})$ are $\nu$-slim (meaning any edge of the triangle is in the $\nu$-neighbourhood of the union of other two edges). Since any general geodesic triangle in a graph is a nice geodesic $n$-gon for $n \leq 6$, it can be shown (by drawing diagonals) that any geodesic triangle in $\Gamma(G, X \cup \mathcal{H})$ is $(3\nu)$-slim. In particular, if we apply \cite[Proposition 2.1]{msri} to the conjugacy diagram $R_1 Q R_2^{-1} P^{-1}$ (viewed as a geodesic triangle with vertices $P_-$, $P_+$ and $m$), the following is true:
\begin{lem} \label{l:diams} \begin{enumerate}[(i)] \item \label{i:ld1} The diameter of $i^{-1}(t)$ is $\leq 18\nu$ for any $t \in T$. \item \label{i:ld2} The diameter of $i^{-1}(c)$ is $\leq 12 \nu$, i.e.\ $\ell_{X \cup \mathcal{H}}(U_j) \leq 12 \nu$ for $j \in \{1,2,3\}$. \qed \end{enumerate} \end{lem}
We now divide the argument in two parts, depending on whether or not $Q$ is connected to a non-trivial component of $U_3$.
\begin{lem} \label{l:hlong}
There exists a universal constant $f_0 = f_0(G,X,\{H_\omega\}_{\omega \in \Omega}) \geq 0$ such that if $Q$ is connected to a component of $U_3$ for some triple $(\hat{h},h,g)$ as above, then $$|\hat{h}|_X \geq 2|g|_X + |h|_X - 4\tilde\lambda\log_2(|\hat{h}|_X)-f_0.$$
\begin{proof}
Since $R_{12}$, $R_{22}$ and $U_3$ are geodesics, and $Q$ is connected to a component of $U_3$, it follows by Lemma \ref{l:diams} \eqref{i:ld2} that $$(\ell_{X \cup \mathcal{H}}(R_{12})-1) + (\ell_{X \cup \mathcal{H}}(R_{22})-1) \leq \ell_{X \cup \mathcal{H}}(U_3) \leq 12\nu$$ and so $$\ell_{X \cup \mathcal{H}}(R_{12}) = \ell_{X \cup \mathcal{H}}(R_{22}) \leq 6\nu+1.$$ By Lemma \ref{l:diams}, it follows that given any vertex $r$ on $R_{12}$ (resp.\ $R_{22}$), there exists a vertex $p$ on $P$ such that we have $|p^{-1}r|_{X \cup \mathcal{H}} \leq 18\nu+1$ and $|P_-^{-1}p|_{X \cup \mathcal{H}} \leq |P_-^{-1}r|_{X \cup \mathcal{H}}$ (resp.\ $|P_+^{-1}p|_{X \cup \mathcal{H}} \leq |P_+^{-1}r|_{X \cup \mathcal{H}}$). It is easy to check that in this case $R_1QR_2^{-1}$ is a $(1,36\nu+3)$-quasi-geodesic.
Note also that the path $R_1QR_2^{-1}$ does not backtrack: indeed, if $Q$ was connected to a non-trivial component of either $R_1$ or $R_2$ then it would be connected to both of them, and if some non-trivial components of $R_1$ and $R_2$ were connected then, since $R_1$ and $R_2$ are geodesics both labelled by $g$, this would contradict the minimality of $|g|_{X \cup \mathcal{H}}$.
Now consider the (phase) vertices $Q_-$ and $Q_+$ of $R_1QR_2^{-1}$. Applying Theorem \ref{t:bcp} \eqref{i:t-bcp-near} to $R_1QR_2^{-1}$ and $P$ and Corollary \ref{c:logclose} to $P_0$ and $P$, it follows that there are vertices $p_-$ and $p_+$ on $P_0$ such that $$|Q_-^{-1}p_-|_X, |Q_+^{-1}p_+|_X \leq \tilde\lambda\log_2(|\hat{h}|_X)+\tilde{c}+\varepsilon(1,36\nu+3).$$ It follows that there exist elements $$z_-,z_+ \in B_{G,X}(\tilde\lambda\log_2(|\hat{h}|_X)+\tilde{c}+\varepsilon(1,36\nu+3))$$ such that $$g = p_1 z_- = p_3^{-1}z_+ \qquad \text{and} \qquad h = z_-^{-1}p_2z_+$$ where $p_1,p_2,p_3 \in G$ are such that $$\hat{h} = p_1p_2p_3 \qquad \text{and} \qquad |\hat{h}|_X = |p_1|_X + |p_2|_X + |p_3|_X.$$ Thus setting $f_0 := 4(\tilde{c}+\varepsilon(1,36\nu+3))$ gives the result. \end{proof}
Now consider the paths $R_{12}Q$ and $U_3R_{22}$ with endpoints $(U_3)_-$ and $Q_+$. Since $Q$ is a single edge, it follows from Lemma \ref{l:diams} \eqref{i:ld2} that both of these paths are $(1,24\nu)$-quasi-geodesics. As follows from the proof of Lemma \ref{l:hlong}, $R_{12}Q$ does not backtrack; $U_3R_{22}$ might backtrack, but we can ``shorten this path along any backtracks'' to find a $(1,24\nu)$-quasi-geodesic path $\widetilde{U}$ with $\widetilde{U}_- = (U_3)_-$ and $\widetilde{U}_+ = Q_+$ such that all the vertices of $\widetilde{U}$ are on $U_3R_{22}$. Applying Theorem \ref{t:bcp} \eqref{i:t-bcp-conn} to $R_{12}Q$ and $\widetilde{U}$ then says that if $|h|_X > \varepsilon(1,24\nu)$ then $Q$ is connected to a non-trivial component of $\widetilde{U}$. As the path $QR_{22}^{-1}$ does not backtrack, in this case $Q$ cannot be connected to a component of $R_{22}$ (apart from $(R_{22})_+ = Q_+$ if it is a trivial component of $R_{22}$), and so $Q$ must be connected to a component of $U_3$.
Therefore Lemma \ref{l:hlong} applies whenever $|h|_X > \varepsilon(1,24\nu)$, thus for all but finitely many elements $h$. By setting $D := D_0 \left|B_{G,X}(\varepsilon(1,24\nu))\right|+1$ (for some $D_0 \geq 0$) and picking $f: \mathbb{Z}_{\geq 0} \to \mathbb{Z}_{\geq 0}$ to be the pointwise maximum of finitely many sublinear functions (so still sublinear), Theorem \ref{t:parbound1} follows from the following result (since $1 \in H_\omega$):
\begin{lem} \label{l:hshort}
Let $h_0 \in H_\omega$ for some $\omega \in \Omega$, and let $\mathcal{P}(h_0)$ be the set of elements $\hat{h}$ in the conjugacy class $h_0^G$ such that if $g$ and $h$ are as above, then $h = h_0$ and $Q$ is not connected to a component of $U_3$. Then $$|\mathcal{P}(h_0) \cap B_{G,X}(n)| \leq D_0 \left| B_{G,X}\left( \left\lfloor \frac{n+f_0}{2} \right\rfloor \right) \right|$$ for some constants $D_0,f_0 \geq 0$. \end{lem}
\begin{proof}
Consider the closed path $R_{12}QR_{22}^{-1}U_3^{-1}$. Since the path $R_{12}QR_{22}^{-1}$ does not backtrack and by assumption $Q$ is not connected to a component of $U_3$, it follows that given any two distinct non-trivial components of $R_{12}QR_{22}^{-1}U_3^{-1}$ that are connected to each other, one of them must be on $U_3$ and the other one on either $R_{12}$ or $R_{22}$. But since $U_3$ is an arbitrary geodesic, we may without loss of generality assume that (loosely speaking) $U_3$ follows $R_{12}$ (and $U_3^{-1}$ follows $R_{22}$) until the last of their non-trivial connected components. Thus we have a closed path $\mathcal{C} := \widetilde{R_{12}}Q\widetilde{R_{22}}^{-1}\widetilde{U_3}^{-1}$ which does not backtrack and all of its vertices are phase except for, possibly, the endpoints of $\widetilde{U_3}$. See Figure \ref{f:cycle}. Note that $\ell_{X \cup \mathcal{H}}(\widetilde{U_3}) \geq 1$ since (by minimality of $|g|_{X \cup \mathcal{H}}$) $R_1 \cap R_2 = \varnothing$.
\begin{figure} \caption{The closed path $\mathcal{C} = \widetilde{R_{12}}\,Q\,\widetilde{R_{22}}^{-1}\,\widetilde{U_3}^{-1}$.} \label{f:cycle}
\end{figure}
Suppose without loss of generality that one of the following three cases holds: \begin{enumerate}[(i)] \item $\ell_{X \cup \mathcal{H}}(\widetilde{R_{12}}) > \ell_{X \cup \mathcal{H}}(\widetilde{R_{22}})$, or \item $\ell_{X \cup \mathcal{H}}(\widetilde{R_{12}}) = \ell_{X \cup \mathcal{H}}(\widetilde{R_{22}})$ and $(\widetilde{U_3})_-$ is a phase vertex of the closed path $\mathcal{C}$, or \item $\ell_{X \cup \mathcal{H}}(\widetilde{R_{12}}) = \ell_{X \cup \mathcal{H}}(\widetilde{R_{22}})$ and both $(\widetilde{U_3})_-$, $(\widetilde{U_3})_+$ are non-phase vertices of $\mathcal{C}$.
\end{enumerate} Let $r_+$ be $(\widetilde{U_3})_+$ if $(\widetilde{U_3})_+$ is a phase vertex of $\mathcal{C}$, and let $r_+$ be the vertex on $\widetilde{R_{22}}$ that is adjacent to $(\widetilde{U_3})_+ = (\widetilde{R_{22}})_-$ otherwise. It follows that both $r_+$ and $r_- := \hat{h}^{-1}r_+$ (the latter one being a vertex of $\widetilde{R_{12}}$) are phase vertices of $\mathcal{C}$. Moreover, there is a subpath of $\mathcal{C}$ with endpoints $r_-$ and $r_+$ that is a union of at most $$\ell_{X \cup \mathcal{H}}(\widetilde{U_3}) + \left| \ell_{X \cup \mathcal{H}}(\widetilde{R_{12}'}) - \ell_{X \cup \mathcal{H}}(\widetilde{R_{22}'}) \right| + 1 \leq \ell_{X \cup \mathcal{H}}(U_3)+1 \leq 12\nu+1$$ components. All of these components have length $1$ (apart from, possibly, one or two components which have length $2$), hence $|r_-^{-1}r_+|_{X \cup \mathcal{H}} \leq 12\nu+3$.
Now consider the two subpaths of $\mathcal{C}$ joining $r_-$ and $Q_+$. By the above, it follows that they are $(1,24\nu+6)$-quasi-geodesics that do not backtrack and that no non-trivial component of one of them is connected to a non-trivial component of the other. By Theorem \ref{t:bcp} \eqref{i:t-bcp-conn}, if $S$ is a component of $\mathcal{C}$, then $|S_-^{-1}S_+|_X \leq \varepsilon(1,24\nu+6)$. Thus, by the previous paragraph, it follows that $$|r_-^{-1}r_+|_X \leq (12\nu+1) \varepsilon(1,24\nu+6).$$ In particular, since $\hat{h} = r_-r_0r_-^{-1}$ where $r_0 = r_-^{-1}r_+$, by setting $$D_0 := \left|B_{G,X}((12\nu+1) \varepsilon(1,24\nu+6))\right|$$ and fixing $r_0 \in G$ it is enough to show that $$|\mathcal{P}(h_0,r_0) \cap B_{G,X}(n)| \leq \left| B_{G,X}\left( \left\lfloor \frac{n+f_0}{2} \right\rfloor \right) \right|$$ for some constant $f_0 \geq 0$, where $\mathcal{P}(h_0,r_0)$ is the set of $\hat{h} \in h_0^G$ with $h = h_0$ and $r_0$ as above.
Note that every vertex $v$ on $\widetilde{U_3}$ satisfies $|r_-^{-1}v|_X \leq (12\nu+1) \varepsilon(1,24\nu+6)$. Since $\widetilde{U_3}$ is an arbitrary geodesic (after fixing its endpoints), we may assume (similarly to the case above) that $\widetilde{U_3}$ follows $\widetilde{R_{12}'}^{-1} R_{11}^{-1}$ (and $\widetilde{U_3}^{-1}$ follows $\widetilde{R_{22}'}^{-1} R_{21}^{-1}$) until the last of their non-trivial connected components. Thus we obtain a path $\widetilde{R_1} \widetilde{U_3'} \widetilde{R_2}^{-1}$ (where $\widetilde{R_1}$, $\widetilde{U_3'}$, $\widetilde{R_2}$ are subpaths of $R_1$, $\widetilde{U_3}$, $R_2$, respectively) that does not backtrack and all its vertices are phase, except for possibly endpoints of $\widetilde{U_3}'$. Since all vertices of this path are on the $(1,36\nu)$-quasi-geodesic $R_{11}U_3R_{21}^{-1}$, it follows that $\widetilde{R_1} \widetilde{U_3'} \widetilde{R_2}^{-1}$ is also a $(1,36\nu)$-quasi-geodesic.
Now if either $\ell_{X \cup \mathcal{H}}(\widetilde{U_3'}) > 1$ or $(\widetilde{U_3'})_-$ is a phase vertex of $\widetilde{R_1} \widetilde{U_3'} \widetilde{R_2}^{-1}$, then $\widetilde{R_1} \widetilde{U_3'} \widetilde{R_2}^{-1}$ contains a phase vertex that is on $\widetilde{U_3}$. Otherwise, consider the path $\widetilde{R_1'} \widetilde{U_3''} \widetilde{R_2}^{-1}$ obtained by replacing the non-geodesic subpath of length $2$ with interior point $(\widetilde{U_3'})_-$ by an edge $\widetilde{U_3''}$. Since $R_1$ and $R_2$ have no connected components, we have that either $(\widetilde{U_3''})_+ = (\widetilde{U_3'})_+$ is a phase vertex of $\widetilde{R_1'} \widetilde{U_3''} \widetilde{R_2}^{-1}$, or the edge $\widetilde{U_3''}$ is labelled by an element of $H_\omega \cap H_{\widetilde\omega}$ for some distinct $\omega,\widetilde\omega \in \Omega$.
Thus in either case there exists a vertex $v$ of $\widetilde{U_3}$ and some $w \in G$ such that $vw$ is a phase vertex of a $(1,36\nu)$-quasi-geodesic with endpoints $P_-$ and $P_+$ that does not backtrack, and such that $$|w|_X \leq E_0 := \max \{ |k|_X \mid k \in H_\omega \cap H_{\widetilde\omega}, \omega,\widetilde\omega \in \Omega, \omega \neq \widetilde\omega \},$$ where the right hand side is defined and finite by Theorem \ref{t:almaln}. Therefore, by Theorem \ref{t:bcp} \eqref{i:t-bcp-near} and Corollary \ref{c:logclose} it follows that this vertex $v$ satisfies $|v^{-1}u|_X \leq f_1$ for some vertex $u$ of $\widehat{P_0}$, where $$f_1 := (12\nu+1)\varepsilon(1,24\nu+6) + E_0 + \varepsilon(1,36\nu) + \tilde\lambda\log_2(|\hat{h}|_X)+\tilde{c}.$$ It follows that there exists some $z \in B_{G,X}(f_1)$ such that $$r_- = p_1z = p_2^{-1}zr_0$$ where $p_1,p_2 \in G$ are such that $$\hat{h} = p_1p_2 \qquad \text{and} \qquad |\hat{h}|_X = |p_1|_X + |p_2|_X.$$ Thus setting $f_0 := 2f_1+(12\nu+1)\varepsilon(1,24\nu+6)$ implies that $|r_-|_X \leq \frac{|\hat{h}|_X+f_0}{2}$, which gives the result. \end{proof}
\section{Exponential negligibility of $\mathcal{P}$} \label{s:pftmain}
This section is dedicated to a proof of Theorem \ref{t:main}. We need the following definition.
\begin{defn}
For a group $K$ with a finite generating set $Z$, the \emph{(exponential) growth rate} of $K$ with respect to $Z$ is the limit $$\mu(K,Z) := \lim_{n \to \infty} \sqrt[n]{|B_{K,Z}(n)|}.$$ By submultiplicativity of ball sizes in $\Gamma(K,Z)$ and a well-known result called Fekete's Lemma, it follows that this limit always exists and is equal to $\inf \{ \sqrt[n]{|B_{K,Z}(n)|} \mid n \in \mathbb{Z}_{\geq 1} \}$. \end{defn}
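For instance, if $K=F_2$ is the free group of rank~$2$ and $Z$ is a free basis, then $|B_{K,Z}(n)|=2\cdot 3^n-1$ and hence $\mu(K,Z)=3$; at the other extreme, $\mu(K,Z)=1$ whenever $K$ has subexponential growth with respect to $Z$.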
In order to prove Theorem \ref{t:main}, we use growth tightness of relatively hyperbolic groups:
\begin{thm}[\cite{yang}, Corollary 1.7] \label{t:grtight} Suppose $G$ is not virtually cyclic. Let $X$ be a finite generating set for $G$, and let $N \trianglelefteq G$ be an infinite normal subgroup. Let $\overline{X}$ be the image of $X$ under the quotient map $G \to G/N$, so that $\overline{X}$ is a finite generating set of $G/N$. Then $\mu(G/N,\overline{X}) < \mu(G,X)$. \end{thm}
We also use Dehn filling in relatively hyperbolic groups, namely the following result.
\begin{thm}[\cite{osin07}, Theorem 1.1 (1)] \label{t:dfill} Let $G$ be hyperbolic relative to a collection of subgroups $\{ H_\omega \}_{\omega \in \widetilde\Omega}$. Then there exists a finite subset $\mathcal{F}$ of $G \setminus \{1\}$ with the following property. Let $\{ N_\omega \}_{\omega \in \widetilde\Omega}$ be a collection of normal subgroups $N_\omega \trianglelefteq H_\omega$ such that $N_\omega \cap \mathcal{F} = \varnothing$ for each $\omega \in \widetilde\Omega$, and define a normal subgroup $N := \left\langle\!\left\langle \bigcup_{\omega \in \widetilde\Omega} N_\omega \right\rangle\!\right\rangle^G \trianglelefteq G$. Then for each $\omega \in \widetilde\Omega$, the natural map $H_\omega / N_\omega \to G / N$ is injective. \end{thm}
We fix a finite generating set $X$ of $G$ for the remainder of the section. The main ingredient in the proof of Theorem \ref{t:main} (apart from Theorem \ref{t:parbound1}) is the following Lemma.
\begin{lem} \label{l:expneg} For any $\omega \in \Omega$, the subgroup $H_\omega \leq G$ is exponentially negligible in $G$ (with respect to $X$). \end{lem}
\begin{proof} The idea is to use results in \cite{osin06elem} and Theorem \ref{t:dfill} to find a quotient of $G$ whose growth could be compared to the growth of $H_\omega$, and then use Theorem \ref{t:grtight}.
Suppose first that there exists a normal subgroup $N \trianglelefteq G$ such that the number $M := |H_\omega \cap N|$ is finite. Then the quotient $G/N$ is generated by the set $\overline{X}$ of images of elements of $X$ under the quotient map, and clearly $$|gN|_{\overline{X}} \leq |g|_X$$ for any $g \in G$. Also, for fixed elements $g \in G$ and $t_0 \in H_\omega \cap gN$, we have $$|H_\omega \cap gN| = |\{ t_0^{-1}t \mid t \in H_\omega \cap gN \}| \leq |H_\omega \cap N| = M.$$ In particular, it follows that \begin{align*}
|H_\omega \cap B_{G,X}(n)| &\leq \sum_{\substack{gN \in G/N \\ gN \cap B_{G,X}(n) \neq \varnothing}} |H_\omega \cap gN| \\ &\leq M | \{ gN \in G/N \mid gN \cap B_{G,X}(n) \neq \varnothing \} | \\ &\leq M |B_{G/N,\overline{X}}(n)|. \end{align*}
Thus, by Theorem \ref{t:grtight} it follows that as long as $N$ is infinite we have $$\limsup_{n \to \infty} \sqrt[n]{\frac{|H_\omega \cap B_{G,X}(n)|}{|B_{G,X}(n)|}} < 1,$$ which implies that $H_\omega$ is exponentially negligible in $G$. Thus the problem reduces to showing that there exists an infinite normal subgroup $N \trianglelefteq G$ such that $H_\omega \cap N$ is finite.
To construct such a subgroup, we use Dehn filling in relatively hyperbolic groups. Let $g \in G$ be a hyperbolic element (i.e.\ an element of $G \setminus \mathcal{P}$) such that the order of $g$ is infinite: such an element exists by \cite[Corollary 4.5]{osin06elem}. Consider the subgroup $$E_G(g) := \{ h \in G \mid h^{-1} g^n h = g^{\pm n} \text{ for some } n \geq 1 \}.$$ Clearly $g \in E_G(g)$, and by \cite[Lemma 4.1]{osin06elem}, the index of $\langle g \rangle \cong \mathbb{Z}$ in $E_G(g)$ is finite. Also, by \cite[Corollary 1.7]{osin06elem}, $G$ is hyperbolic relative to the collection $\{ H_\omega \}_{\omega \in \Omega} \cup \{ E_G(g) \}$. Let $\widetilde\Omega := \Omega \sqcup \{0\}$ and let $H_0 := E_G(g)$.
Now let $\mathcal{F} \subseteq G \setminus \{1\}$ be the finite subset given by Theorem \ref{t:dfill} applied to $G$ and the collection of subgroups $\{ H_\omega \}_{\omega \in \widetilde\Omega}$. Since $\mathcal{F}$ is finite, we have $\langle g^m \rangle \cap \mathcal{F} = \varnothing$ for $m \in \mathbb{Z}_{\geq 1}$ large enough. Let $N_\omega := \{1\}$ for $\omega \in \Omega = \widetilde\Omega \setminus \{0\}$, and let $N_0 := \bigcap_{h \in E_G(g)} \langle h^{-1}g^mh \rangle \trianglelefteq E_G(g)$ be the \emph{normal core} of $\langle g^m \rangle$ in $E_G(g) = H_0$, i.e.\ the kernel of the action of $E_G(g)$ on the set of left cosets of $\langle g^m \rangle$ in $E_G(g)$. As the index of $\langle g^m \rangle$ in $E_G(g)$ is finite, so is the index of $N_0$ in $E_G(g)$, so in particular, as $E_G(g)$ is infinite, so is $N_0$. Therefore, applying Theorem \ref{t:dfill} yields an infinite normal subgroup $N = \langle\!\langle N_0 \rangle\!\rangle^G$ of $G$ such that, for each $\omega \in \Omega$, the group $H_\omega \cap N$ is trivial, hence finite. \end{proof}
\begin{proof}[Proof of Theorem \ref{t:main}]
The proof follows from Theorem \ref{t:parbound1} and Lemma \ref{l:expneg}. In particular, by Lemma \ref{l:expneg} it follows that there exists a constant $\rho > 1$ such that $\frac{|H_\omega \cap B_{G,X}(n)|}{|B_{G,X}(n)|} \leq \rho^{-n}$ for all sufficiently large $n$. Note that we have $\mu := \mu(G,X) > 1$: indeed, $G$ is infinite (as it is not virtually cyclic), so if $\mu = 1$ then applying Theorem \ref{t:grtight} to the infinite normal subgroup $N = G$ would give $\mu(G/N,\overline{X}) < 1$, which is impossible since growth rates are always at least $1$.
Now choose a constant $\varepsilon > 0$ such that $\varepsilon < \mu (\min\{\mu,\rho\}-1).$ Then there exists a constant $n_0 \in \mathbb{Z}_{\geq 0}$ such that $$\frac{|H_\omega \cap B_{G,X}(n)|}{|B_{G,X}(n)|} \leq \rho^{-n} \qquad \text{and} \qquad |S_{G,X}(n)| \leq |B_{G,X}(n)| \leq (\mu+\varepsilon)^n$$ for all $n \geq n_0$; note also that $|B_{G,X}(n)| \geq \mu^n$ for all $n \in \mathbb{Z}_{\geq 0}$.
Now let $n \geq 3n_0$. By Theorem \ref{t:finmanypar}, it follows that it is enough to find an exponential (with base $<1$) upper bound on the number $$\sum_{i=0}^{\left\lfloor \frac{n+f(n)}{2} \right\rfloor} \frac{|S_{G,X}(i)| |H_\omega \cap B_{G,X}(n+f(n)-2i)|}{|B_{G,X}(n)|}$$ for any fixed $\omega \in \Omega$. We do this in three parts.
For $i < n_0$, we have \begin{align*}
\sum_{i=0}^{n_0-1} &\frac{|S_{G,X}(i)| |H_\omega \cap B_{G,X}(n+f(n)-2i)|}{|B_{G,X}(n)|} \\ &\leq \sum_{i=0}^{n_0-1} |S_{G,X}(i)| \left(\frac{\mu+\varepsilon}{\rho}\right)^{n+f(n)-2i} \mu^{-n} \\ &\leq |B_{G,X}(n_0-1)| \left(\frac{\mu+\varepsilon}{\rho}\right)^{f(n)} \left(\frac{\mu+\varepsilon}{\rho\mu}\right)^n \end{align*} and $\frac{\mu+\varepsilon}{\rho\mu} < 1$ by the choice of $\varepsilon$, so since $\frac{f(n)}{n} \to 0$ as $n \to \infty$ we get exponential convergence to zero, as required.
For $i \geq n_0$ and $n+f(n)-2i \geq n_0$, we have \begin{align*}
\sum_{i=n_0}^{\left\lfloor \frac{n+f(n)-n_0}{2} \right\rfloor} &\frac{|S_{G,X}(i)| |H_\omega \cap B_{G,X}(n+f(n)-2i)|}{|B_{G,X}(n)|} \\ &\leq \sum_{i=n_0}^{\left\lfloor \frac{n+f(n)-n_0}{2} \right\rfloor} (\mu+\varepsilon)^i \left(\frac{\mu+\varepsilon}{\rho}\right)^{n+f(n)-2i} \mu^{-n} \\ &= \sum_{i=n_0}^{\left\lfloor \frac{n+f(n)-n_0}{2} \right\rfloor} \left(\frac{\mu+\varepsilon}{\rho\mu}\right)^{n} \left( \frac{\rho^2}{\mu+\varepsilon} \right)^i \left(\frac{\mu+\varepsilon}{\rho}\right)^{f(n)}. \end{align*} Now if $\frac{\rho^2}{\mu+\varepsilon} \leq 1$, the result follows immediately as above. If instead $\frac{\rho^2}{\mu+\varepsilon} > 1$, we can bound the above expression by \begin{align*} \left(\frac{\mu+\varepsilon}{\rho\mu}\right)^{n} \frac{\left( \frac{\rho}{\sqrt{\mu+\varepsilon}} \right)^{n+f(n)+2}}{\frac{\rho^2}{\mu+\varepsilon}-1} \left(\frac{\mu+\varepsilon}{\rho}\right)^{f(n)} = \frac{(\sqrt{\mu+\varepsilon})^{f(n)}}{1-\frac{\mu+\varepsilon}{\rho^2}} \left(\frac{\sqrt{\mu+\varepsilon}}{\mu}\right)^n \end{align*} and since $\frac{\mu+\varepsilon}{\mu^2} < 1$ by the choice of $\varepsilon$, we get exponential convergence as before.
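For completeness, we note that the last bound uses only the geometric series: writing $r := \frac{\rho^2}{\mu+\varepsilon} > 1$ and $M := \left\lfloor \frac{n+f(n)-n_0}{2} \right\rfloor$ (shorthand introduced only for this remark), we have $$\sum_{i=n_0}^{M} r^i \leq \frac{r^{M+1}}{r-1} \leq \frac{1}{r-1}\left( \frac{\rho}{\sqrt{\mu+\varepsilon}} \right)^{n+f(n)+2},$$ since $2(M+1) \leq n+f(n)+2$; multiplying by $\left(\frac{\mu+\varepsilon}{\rho\mu}\right)^{n} \left(\frac{\mu+\varepsilon}{\rho}\right)^{f(n)}$ then gives the expression displayed above.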
Finally, for $n+f(n)-2i < n_0$, we have \begin{align*}
\sum_{i=\left\lfloor \frac{n+f(n)-n_0}{2} \right\rfloor+1}^{\left\lfloor \frac{n+f(n)}{2} \right\rfloor} &\frac{|S_{G,X}(i)| |H_\omega \cap B_{G,X}(n+f(n)-2i)|}{|B_{G,X}(n)|} \\ &\leq \sum_{i=\left\lfloor \frac{n+f(n)-n_0}{2} \right\rfloor+1}^{\left\lfloor \frac{n+f(n)}{2} \right\rfloor} (\mu+\varepsilon)^i |H_\omega \cap B_{G,X}(n+f(n)-2i)| \mu^{-n} \\ &\leq \frac{(\sqrt{\mu+\varepsilon})^{n+f(n)+2}}{\mu+\varepsilon-1} |H_\omega \cap B_{G,X}(n_0-1)| \mu^{-n} \\ &= \frac{|H_\omega \cap B_{G,X}(n_0-1)| (\sqrt{\mu+\varepsilon})^{f(n)}}{1-\frac{1}{\mu+\varepsilon}} \left(\frac{\sqrt{\mu+\varepsilon}}{\mu}\right)^n \end{align*} and so we again get exponential convergence. It follows that $\mathcal{P}$ is exponentially negligible. \end{proof}
\section{Degree of commutativity} \label{s:cor}
This section is dedicated to the proofs of Corollaries \ref{c:loxo} and \ref{c:dc}.
Given Theorem \ref{t:main}, the proof of Corollary \ref{c:loxo} is easy. Indeed, it is easy to check (either directly from Definition \ref{d:rh}, or by using characterisation of hyperbolically embedded subgroups given by \cite[Theorem 1.5]{osin06elem}) that if a group $G$ is hyperbolic relative to $\{ H_\omega \}_{\omega \in \Omega}$ and $F \leq G$ is any finite subgroup, then $G$ is hyperbolic relative to $\{ H_\omega \}_{\omega \in \Omega} \cup \{F\}$. But there are only finitely many conjugacy classes of finite-order hyperbolic elements in $G$ (i.e.\ elements of $G \setminus \mathcal{P}$) \cite[Theorem 4.2]{osin06def}, hence there exists a finite collection $\{ F_1,\ldots,F_m \}$ of finite cyclic subgroups of $G$ such that any hyperbolic element of finite order is conjugate to an element of one of the $F_j$. Thus $G$ is hyperbolic relative to $\{ H_\omega \}_{\omega \in \Omega} \cup \{ F_1,\ldots,F_m \}$, and with this structure of a relatively hyperbolic group every hyperbolic element of $G$ has infinite order. Corollary \ref{c:loxo} then follows directly from Theorem \ref{t:main}.
Given the previous paragraph, we may without loss of generality assume that all hyperbolic elements in $G$ have infinite order. The proof of Corollary \ref{c:dc} then follows closely the proof of \cite[Theorem 1.7]{amv}, stating an analogous result for ordinary hyperbolic groups. The following Lemma has been stated as Lemma 3.1 in \cite{amv} in a slightly different form, but the proof remains the same. For a group $G$ and an element $g \in G$, let $C_G(g)$ denote the centraliser of $g$ in $G$.
\begin{lem} \label{l:amv} Let $G$ be a group generated by a finite subset $X$, and let $\mathcal{N} \subseteq G$ be a subset such that \begin{enumerate}[(i)] \item \label{i:l-amv-negl} $\mathcal{N}$ is exponentially negligible in $G$ with respect to $X$, and
\item \label{i:l-amv-cent} there exist constants $\rho > 1$ and $n_0 \geq 0$ such that $$|C_G(g) \cap B_{G,X}(n)| \leq \rho^{-n} |B_{G,X}(n)|$$ for all $g \in G \setminus \mathcal{N}$ and $n \geq n_0$. \end{enumerate} Then $\{ (x,y) \in G^2 \mid xy=yx \}$ is exponentially negligible in $G$ with respect to $X$. \end{lem}
Thus, taking $\mathcal{N} = \mathcal{P}$, in view of Corollary \ref{c:loxo} it is enough to verify condition \eqref{i:l-amv-cent} of Lemma \ref{l:amv}.
\begin{proof}[Proof of Corollary \ref{c:dc}] It is known \cite[Lemma 4.1]{osin06elem} that given any hyperbolic element $g \in G$, the centraliser $C_G(g)$ contains $\langle g \rangle$ as a finite-index subgroup; in particular, $C_G(g)$ is a $2$-ended subgroup of $G$. In this case, a classical result \cite[Lemma 4.1]{wall} shows that $C_G(g)$ fits into an exact sequence $$1 \to F \to C_G(g) \to Q \to 1$$ where $F$ is finite and $Q$ is either $\mathbb{Z}$ or $C_2 \ast C_2$. Thus $C_G(g)$ has a subgroup $K$ of index at most $2$ such that $K/F \cong \mathbb{Z}$ for a finite normal subgroup $F \trianglelefteq K$.
Note that $K$ contains $g^2 \in G \setminus \mathcal{P}$ and so $K$ is a $2$-ended subgroup not contained in $\mathcal{P}$. Since $F \leq K$ is a finite subgroup, \cite[Lemma 9.4]{louder} shows that there is a universal constant $m_0$ (independent of $g$) such that $|F| \leq m_0$. Since the sequence $$1 \to F \to K \to \mathbb{Z} \to 1$$ splits, this means that $C_G(g)$ contains a normal subgroup $\langle k_g \rangle \cong \mathbb{Z}$ of index $\leq 2m_0$. It is clear that $k_g \notin \mathcal{P}$: otherwise $C_G(g) \cap H_\omega^{g_0}$ is infinite for some $\omega \in \Omega$ and $g_0 \in G$, which cannot happen since $$C_G(g) \cap H_\omega^{g_0} \leq H_\omega^{g_0g} \cap H_\omega^{g_0} = \left( H_\omega^{g_0gg_0^{-1}} \cap H_\omega \right)^{g_0}$$ and $H_\omega^{g_0gg_0^{-1}} \cap H_\omega$ is finite by Theorem \ref{t:almaln} since $g \notin \mathcal{P}$.
Now consider the \emph{translation length function} in $G$, defined as $$\tau(g) := \limsup_{n \to \infty} \frac{|g^n|_{X \cup \mathcal{H}}}{n} = \inf \left\{ \frac{|g^n|_{X \cup \mathcal{H}}}{n} \;\middle|\; n \in \mathbb{Z}_{\geq 1} \right\}$$ for $g \in G \setminus \mathcal{P}$, where the second equality follows from Fekete's Lemma since the sequence $(|g^n|_{X \cup \mathcal{H}})_{n=0}^\infty$ is subadditive. It is known \cite[Theorem 4.25]{osin06def} that there exists a universal constant $\zeta > 0$ such that $\tau(g) \geq \zeta$ for all $g \in G \setminus \mathcal{P}$.
Finally, pick an element $g \in G \setminus \mathcal{P}$. Then $\tau(k_g) \geq \zeta$ for $k_g \in G \setminus \mathcal{P}$ as above, and so if $k_g^m \in B_{G,X}(n) \subseteq B_{G,X \cup \mathcal{H}}(n)$, then $|m| \leq n/\zeta$. It follows that $$|\langle k_g \rangle \cap B_{G,X}(n)| \leq \frac{2n}{\zeta}+1.$$ Moreover, if $g_0 \langle k_g \rangle \cap B_{G,X}(n) \neq \varnothing$ for some $g_0 \in G$ then we may assume that $g_0 \in B_{G,X}(n)$ and so $g_0 \langle k_g \rangle \cap B_{G,X}(n) \subseteq g_0 (\langle k_g \rangle \cap B_{G,X}(2n))$. Thus $$|g_0\langle k_g \rangle \cap B_{G,X}(n)| \leq \frac{4n}{\zeta}+1.$$ Since the index of $\langle k_g \rangle$ in $C_G(g)$ is $\leq 2m_0$, this implies that $$|C_G(g) \cap B_{G,X}(n)| \leq 2m_0 \left( \frac{4n}{\zeta}+1 \right),$$ which gives a linear (and so subexponential) bound independent of $g$. Since by assumption $G$ is not virtually cyclic, we have $\mu := \mu(G,X) > 1$, and hence $|B_{G,X}(n)| \geq \mu^n$ for all $n$. Consequently, for any fixed $1 < \rho < \mu$ the linear bound above is at most $(\mu/\rho)^n \leq \rho^{-n} |B_{G,X}(n)|$ for all sufficiently large $n$, independently of $g$; this is exactly condition \eqref{i:l-amv-cent} of Lemma \ref{l:amv}, and so the proof is complete. \end{proof}
\end{document} |
\begin{document}
\title[]{Convection-Diffusion Equations with Random Initial Conditions} \author{Mi\l{}osz Krupski} \address{Uniwersytet Wroc\l{}awski, Instytut Matematyczny\\ pl. Grunwaldzki 2/4, 50-384 Wroc\l{}aw} \email{[email protected]}
\begin{abstract} We consider an evolution equation generalising the viscous Burgers equation, supplemented by an initial condition which is a homogeneous random field. We develop a non-linear framework enabling us to establish the existence and regularity of solutions, as well as to describe their long-time behaviour.
\end{abstract} \maketitle
\begin{section}{Introduction}
The main objective of this paper is to investigate solutions to a non-local
analogue of the viscous Burgers equation with random initial conditions
\begin{equation}\label{intro}
\left\{
\begin{aligned}
&\dt u + \fLap u = \grad f(u)\quad&\text{on $\T\times\X$}, \\
&u(0) \= u_0\quad&\text{on $\X$}.
\end{aligned}
\right.
\end{equation}
Here $\fLap$ denotes the (now) standard fractional Laplace operator
with $s\in(\frac{1}{2},1]$
and the initial condition $u_0$ is a real, homogeneous, isotropic random field (as defined in Section~\ref{preliminaries})
whose finite-order moments are all bounded.
By $\grad$ we denote the directional derivative. The function $f$ is a smooth function of polynomial growth.
Such equations have been studied thoroughly in the deterministic case. In papers~\cite{MR1708995,MR1881259,MR1849690} the authors consider
initial conditions which are integrable functions and describe certain properties of solutions that resonate
with some of the results presented here. However, in the context of random initial data new
methods had to be developed. Important results were obtained for bounded (deterministic) initial conditions~\cite{MR2019032}
and we rely on them where possible (see Lemmas~\ref{classical-solutions}, \ref{classical-solutions-equiv}).
Interesting developments were recently described in \cite{2017arXiv170302908I}.
In general, partial differential equations with random initial data (homogeneous, stationary, isotropic, etc.) have been
examined before, notably to describe certain physical phenomena, such
as the Large Scale Structure of the Universe~\cite{MR1305783, MR1732301, MannJr.2001}.
More specifically, there are numerous results available on the local Burgers equation (i.e. equation~\eqref{intro} with $s=1$ and $f(u)=u^2$) or very similar equations with random initial data
\cite{MR1859007,MR1939652,MR1978661,MR1687092,MR1642664}. Equations of this type, however, have a curious property, exploited by the Hopf-Cole transformation, allowing them to be treated essentially as linear problems. Such a reduction is not known to be possible
either in the non-local setting (i.e. with the fractional Laplacian) or for a more general
function $f$,
which forces us to conduct a more involved non-linear analysis of the problem.
A significant part of this analysis consists of a~priori estimates in the context of random initial data. Early results
were already obtained by Rosenblatt~\cite{MR0264252} (in the local setting).
Taking them as an inspiration we obtained new and much more general estimates.
In fact, in Theorem~\ref{rosenblatt-lp} we prove that for a solution $u(t)$
to problem~\eqref{intro} we have $E|u(t)|^p \leq E|u_0|^p$ for every $t\geq 0$.
One difficulty when working with random fields is the question of regularity of their individual realisations.
As it turns out, it is impossible to directly apply the classical theory ``pathwise'', treating $x\mapsto u_0(x,\omega)$
as an initial condition of the non-random problem for every $\omega\in\Omega$ separately (cf. Proposition~\ref{iso-infinity} and Remark~\ref{l-infinity}).
On the other hand, restricting the problem to admit only homogeneous random fields has a technical
advantage.
By calculating the expected value $E$ we ``eliminate'' not only the variable $\omega$,
as is normally the case, but also $x$ (because the field has an identical distribution
at every point in space, the result is constant; see Remark~\ref{independent}).
However simple, this property is essential to obtain e.g. the analogue of the Stroock-Varopoulos
inequality (see \cite{MR1218884,MR1409835}) in Lemma~\ref{cut-off-reg}.
Another important observation is expressed by Lemma~\ref{derivative-zero} and we invite the readers to turn their attention to it.
The remainder of the paper is structured as follows. After introducing the notation and basic concepts in Section~\ref{preliminaries}, in Section~\ref{linear-equation} we review the
results on the linear equation, which is the starting point for the rest of the theory. In Section~\ref{sec-lipschitz} we construct solutions in the case when the function $f$ is assumed to be Lipschitz. In Section~\ref{apriori} we
are able to establish the a~priori estimates.
Finally, in Section~\ref{general} we study the problem for functions $f$ of polynomial growth.
The main result on the existence of solutions to problem~\eqref{intro} is expressed
in Theorem~\ref{main}.
This paper is a continuation of previous work~\cite{MR3628179} which dealt with linear problems.
\end{section}
\begin{section}{Isometry-invariant random fields}\label{preliminaries}
\begin{subsection}{Basic notation}
We denote the Borel sigma-algebra on $\X$ by $\Bor(\X)$ and the Lebesgue measure by $dx$.
We use the Fourier transform defined as
\begin{equation*}
\FT{f}(\xi) = \int_\X e^{i\xi\cdot x}f(x)\,dx.
\end{equation*}
Given a measure space $(X,\Theta,\mu)$, by $L^p(X,\Theta,\mu)$ we denote the space of all $\Theta$-measurable real functions such that the norm defined in the case $1\leq p < \infty$ as
\begin{equation*}
\|f(x)\|_{L^p(X,\Theta,\mu)}^p = \int_X |f|^p d\mu = \int_X |f(x)|^p \mu(dx),
\end{equation*}
or in the case of $p=\infty$ as the value
\begin{equation*}
\|f(x)\|_{L^\infty(X,\Theta,\mu)} = \esssup_{x\in X}|f| = \inf\big\{y\geq 0: \mu\{x\in X:|f(x)|>y\} = 0\big\},
\end{equation*}
is finite. Usually we shorten the notation and write $L^p(X,\mu)$.
Let $(X,\|\cdot\|_X)$ be a subset of a normed space. For every fixed $K\geq0$ we may define the Bielecki norm
\begin{equation}\label{bielecki}
\K{u}{X} = \sup_{t\geq0}e^{-tK}\|u(t)\|_X
\end{equation}
and the space
\begin{equation*}
\mathcal{B}_{K,X} = \big\{u\in C(\T,X) : \K{u}{X} < \infty\big\}.
\end{equation*}
Let us fix a probability space $(\Omega,\Sigma,P)$ and denote $\Lp = L^p(\Omega,\Sigma,P)$.
The particular case of $p=2$ constitutes a
Hilbert space with the standard inner product $EXY$ defined for all $X,Y\in \Ld$.
We write
\begin{equation*}
(X_1,\ldots,X_k) \= (Y_1,\ldots,Y_k),\quad\text{where $X_i,Y_i\in \Lp$ and $k\in\N$},
\end{equation*}
if both random vectors have the same probability distributions.
\end{subsection} \begin{subsection}{Random fields}
Consider an $\Lp$-continuous and $\Lp$-bounded random field $u$ on $\X$, i.e. $u\in C_b(\X,\Lp)$.
We endow this space with the norm
\begin{equation*}
\|u\|_p = \sup_{x\in\X}\|u(x)\|_{\Lp}.
\end{equation*}
For $u,v\in C_b(\X,\Lp)$ we say that $u\=v$ if
\begin{equation*}
\big(u(x_1),\ldots,u(x_n)\big) \= \big(v(x_1),\ldots,v(x_n)\big)
\end{equation*}
as random vectors for every finite collection of points $x_1,\ldots,x_n\in \X$.
\begin{definition}
Let $1\leq p \leq \infty$. Consider the group $\Phi$ of isometries on $\X$ and a random field $u\in C_b(\X,\Lp)$.
For a given function $\phi\in\Phi$ let $(\phi u)(x) = u(\phi(x))$.
We define the space $\Iso_p$ of \emph{isometry-invariant random fields} with finite $p$-th moment as
\begin{equation*}
\Iso_p = \{u\in C_b(\X,\Lp): u \= \phi u \text{ for every $\phi\in\Phi$}\}.
\end{equation*}
\end{definition}
\begin{remark}
Notice that the space $\Iso_p$ inherits the norm (and hence topology) of the space $C_b(\X,\Lp)$
and it forms a closed subspace therein. Indeed, fix an isometry $\phi\in\Phi$ and a sequence $u_n\in\Iso_p$
such that $\lim_{n\to\infty}\|u_n-u\|_p = 0$.
Then $u_n\=\phi u_n$ and because
\begin{equation*}
\|\phi u_n-\phi u\|_p= \sup_{x\in\X}\|u_n(\phi(x))-u(\phi(x))\|_\Lp= \|u_n-u\|_p
\end{equation*} we have $u \= \phi u$, hence $u\in\Iso_p$.
This also entails completeness.
\end{remark}
\begin{remark}\label{independent}
Notice that the property $u\= \phi u$, applied in particular to translations, implies that neither $\|u(x)\|_{\Lp}$ nor $Eu(x)$ depends on the variable $x$.
\end{remark}
\begin{remark}
The space $\Iso_p$ is not linear. For a simple example, let $X,Y$ be two independent random variables with identical distribution $\mathcal{N}(0,1)$.
Define random fields $u,v\in C_b(\R,\Ld)$ as $u(x) = X$ and $v(x) = \sin(x)X + \cos(x)Y$.
Notice that $u,v\in\Iso_2$. However, by calculating
\begin{equation*}
E(u(x)+v(x))^2 = 2 \sin(x)+2
\end{equation*}
we discover that $u+v\notin\Iso_2$ (cf. Remark~\ref{independent}).
\end{remark}
In the particular case of $p=2$, for every random field $u\in\Iso_2$ we have $Eu(x)u(y) = B(|x-y|)$ for a certain positive definite function $B\in C(\R,\R)$. Moreover we have the following representation
\begin{equation}\label{spectral-representation}
u(x) = Eu(x) + \int_\X e^{ix\cdot\xi}Z(d\xi),
\end{equation}
where $Z$ is an orthogonal random measure (see~\cite[Chapter 1]{MR1009786} for specific results as well as \cite{MR0173945, MR0247660}, or \cite{MR2840012} for general theory of random measures).
There also exists a finite measure $\sigma$ such that
\begin{equation*}
E(u(x))^2-(Eu(x))^2 = E\Big(\int_\X e^{ix\cdot\xi}Z(d\xi)\Big)^2 = \int_\X \sigma(d\xi) = \sigma(\X).
\end{equation*}
In this case we denote $\sigma = \Spec(u)$.
One may also observe that $\mathcal{F}(\sigma)= B$.
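To illustrate these notions on a concrete example (in dimension one, as in the remark above), consider the field $v(x) = \sin(x)X + \cos(x)Y$ with $X,Y$ independent and $\mathcal{N}(0,1)$-distributed. Then $Ev(x)=0$ and
\begin{equation*}
Ev(x)v(y) = \sin(x)\sin(y) + \cos(x)\cos(y) = \cos(x-y),
\end{equation*}
so that $B(r) = \cos(r)$ and $\Spec(v) = \tfrac{1}{2}(\delta_{-1}+\delta_{1})$, in agreement with the identity $\mathcal{F}(\sigma) = B$.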
As in the general case~\eqref{bielecki}, for every $K\geq0$
we define the Bielecki norm and the corresponding space
\begin{align*}
&\K{u}{p} = \sup_{t\geq0}e^{-tK}\|u(t)\|_p,
&\!\mathcal{B}_{K,p} = \big\{u\in C(\T,\Iso_p) : \K{u}{p} < \infty\big\}.
\end{align*}
We use these to introduce the spaces of \emph{jointly isometry-invariant random fields}.
\begin{definition}\label{jiso-def}
Given an isometry $\phi\in\Phi$ and $u\in C(\T,\Iso_p)$ let us set $(\phi u)(t,x) = u(t,\phi(x))$.
For $K\geq0$ and $p\geq1$ we define
\begin{equation*}
\JIso_{K,p} = \big\{u\in \mathcal{B}_{K,p} : u \= \phi u\ \text{for every $\phi\in\Phi$}\big\}.
\end{equation*}
\end{definition}
Similarly to the case of $\Iso_p$, the space $\JIso_{K,p}$ is not linear, but it is complete in the norm we defined. In order to compensate for the lack of linearity we consider the following relation.
\begin{definition}
Let $u,v\in \JIso_{K,p}$ or $u,v\in\Iso_p$.
We say that $u\sim v$ if for every isometry $\phi\in\Phi$ we have $(u,v) \= (\phi u,\phi v)$.
\end{definition}
This immediately implies that if $u,v\in\JIso_{K,p}$ are such that $u\sim v$,
then $\alpha u + \beta v\in\JIso_{K,p}$ for every $\alpha,\beta\in\R$.
The same observation holds in the space $\Iso_p$. Keep in mind that this
relation is not transitive.
\begin{remark}
Let $u,v\in\JIso_{K,p}$ or $u,v\in\Iso_p$ and $u\sim v$, and take two measurable functions $f$ and $g$.
Notice that because $(u,v)\= (\phi u, \phi v)$, we also have (see~\cite[Theorem 25.7]{MR1324786})
\begin{equation*}
(f(u,v),g(u,v))\= (f(\phi u,\phi v),g(\phi u,\phi v))
\end{equation*}
and therefore $f(u,v)\sim g(u,v)$.
By similar arguments we have $f(u)\sim f(v)$ and $f(u)\sim g(u)$.
\end{remark}
\begin{remark}
Let $u\in\JIso_{K,p}\cap C^1(\T,C_b(\X,\Lp))$ and $t,h\geq0$. Notice that $u(t)\sim u(t+h)$, hence $u(t+h)-u(t)\in\Iso_p$.
We may consider
\begin{equation}\label{derivative}
\dt u (t) = \lim_{h\to 0} \frac{u(t+h)-u(t)}{h},
\end{equation}
where the limit is taken in the sense of the norm $\|\,\cdot\,\|_p$, and we have
$\phi (\dt u) \= \dt u$ for every isometry $\phi\in\Phi$.
\end{remark}
\begin{definition}
Let $u\in C_b(\X,\Lp)$. For a given $z\in\X$ we define the (normalised) directional derivative $\grad u$ as the random field such that
\begin{equation}\label{derivative-def}
\lim_{h\to0}\Big\|\frac{u(x+hz)-u(x)}{h|z|}-\grad u(x)\Big\|_p = 0,
\end{equation}
whenever the limit exists.
We say that $u\in C^1_b(\X,\Lp)$ if for every $z\in\X$ we have $\nabla_z u\in C_b(\X,\Lp)$.
\end{definition}
\begin{remark}\label{spectral-derivative} Suppose that $u\in\Iso_2$ has the representation $u(x)=Eu(x)+\int_\X e^{ix\cdot\xi}Z(d\xi)$ and $\grad u$ exists. It follows directly from identity~\eqref{derivative-def} that
we have \begin{equation*}
\grad u(x) = \int_\X i\tfrac{z}{|z|}\cdot\xi\,e^{ix\cdot\xi}\,Z(d\xi). \end{equation*} \end{remark}
\begin{lemma}\label{derivative-zero} Let $u,v\in\Iso_2$ and $u\sim v$. If $u\in C^1_b(\X,\Ld)$ then for every $z\in\X$ we have \begin{equation*} E\grad u(x)v(x) = 0. \end{equation*} \end{lemma} \begin{proof} Since we assume $u\sim v$, for every $x,y\in\X$ we have $Eu(y)v(0) = Eu(-y)v(0)$ and $Eu(x+y)v(x)=Eu(y)v(0)$. Therefore
\begin{multline*}
E\grad u(x)v(x)
= \lim_{h\to0} E \frac{u(x+hz)-u(x)}{h|z|}v(x)
= \lim_{h\to0} E \frac{u(hz)-u(0)}{h|z|}v(0) \\
= \lim_{h\to0} E \frac{u(-hz)-u(0)}{h|z|}v(0) = E\nabla_{-z}
u(x)v(x).
\end{multline*}
On the other hand we have $\grad u =-\nabla_{-z}u$, hence $E\grad u(x)v(x)=0$. \end{proof}
\end{subsection}
\begin{subsection}{Spectral moments}
We introduce Sobolev-type spaces of isotropic random fields.
\begin{definition}
For every $\alpha\geq0$ we define the space
\begin{equation*}
\Iso^\alpha_2 =
\Big\{ u_0 \in \Iso_2 : \int_\X|\xi|^{2\alpha}d\sigma(\xi) < \infty,\ \text{where $\sigma=\Spec(u_0)$}\Big\}
\end{equation*}
supplemented with the norm $\|u_0\|_{\alpha,2}^2 = \int_\X \big(1+|\xi|^{2\alpha}\big)d\sigma(\xi)$.
\end{definition}
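For instance, the one-dimensional field $v(x) = \sin(x)X + \cos(x)Y$ discussed in the previous subsection has spectral measure $\frac{1}{2}(\delta_{-1}+\delta_{1})$, hence
\begin{equation*}
\|v\|_{\alpha,2}^2 = \int_{\R}\big(1+|\xi|^{2\alpha}\big)\,d\sigma(\xi) = 2
\quad\text{for every } \alpha \geq 0,
\end{equation*}
so $v \in \Iso_2^\alpha$ for all $\alpha \geq 0$.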
\begin{proposition}\label{iso-c1}We have the following embeddings
\begin{enumerate}
\item $\Iso_2^1\subset C^1_b(\X,\Ld)$
\item $\Iso^\alpha_2\subset\Iso^\beta_2$ for every $\alpha\geq\beta$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\sigma$ be the spectral measure and $Z$ be the orthogonal random measure corresponding to an arbitrary
$u_0\in\Iso_2$.
Suppose $u_0\in\Iso^1_2$ and take an arbitrary $z\in\X$.
Then it follows from the Cauchy-Schwarz inequality and Remark~\ref{spectral-derivative} that
\begin{multline*}
E(\grad u_0(x))^2 = E\,\Big(\int_\X i\tfrac{z}{|z|}\cdot\xi\,e^{ix\cdot\xi}\,Z(d\xi)\Big)^2 \\
= \int_\X \big|i\tfrac{z}{|z|}\cdot\xi\big|^2 d\sigma(\xi)
\leq \int_\X|\xi|^{2}d\sigma(\xi) \leq\|u_0\|_{1,2}^2.
\end{multline*} The continuity follows from a similar calculation and the Lebesgue dominated convergence theorem.
This concludes the proof of the first inclusion.
To prove the second claim it suffices to notice that the measure $\sigma$ is finite and for $\alpha \geq \beta$ the function $\psi(r) = r^{\beta/\alpha}$, $r\geq0$, is concave. Therefore by the Jensen inequality
we have
\begin{equation*}
\frac{1}{\sigma(\X)}\int_\X\psi(|\xi|^{2\alpha})d\sigma(\xi)\leq\psi\Big(\frac{1}{\sigma(\X)}\int_\X|\xi|^{2\alpha}d\sigma(\xi)\Big),
\end{equation*}
hence $\|u_0\|_{\beta,2}\leq C\|u_0\|_{\alpha,2}$ for some constant $C>0$.
\end{proof}
\begin{lemma}\label{derivative-estimate}
Let $u\in\Iso_2$. We have $u\in\Iso^1_2$ if and only if there exist $c>0$ and $\epsilon>0$ such that
\begin{equation}\label{derivative-estimate-eq}
E\Big|\frac{u(x+hz)-u(x)}{h|z|}\Big|^2 \leq c
\end{equation}
for every $h\in(0,\epsilon)$ and every $z\in\X$.
\end{lemma}
\begin{proof}Let $\sigma=\Spec(u)$. Since we assume $u\in\Iso_2$, we have
\begin{multline}\label{spectral-something}
E\Big|\frac{u(x+hz)-u(x)}{h|z|}\Big|^2 = E\Big|\int_\X\frac{e^{i(x+hz)\cdot\xi}-e^{ix\cdot\xi}}{h|z|}Z(d\xi)\Big|^2 \\
= \int_\X\Big|\frac{e^{i(x+hz)\cdot\xi}-e^{ix\cdot\xi}}{h|z|}\Big|^2d\sigma(\xi).
\end{multline}
Suppose that \eqref{derivative-estimate-eq} holds for some $c>0$, every $h\in(0,\epsilon)$ and every $z\in\X$. The Fatou lemma gives us
\begin{equation*}
\int_\X |\xi|^2 d\sigma(\xi) \leq \liminf_{h\to0}\int_\X\Big|\frac{e^{i(x+hz)\cdot\xi}-e^{ix\cdot\xi}}{h|z|}\Big|^2d\sigma(\xi),
\end{equation*}
which implies
\begin{equation*}
\int_\X |\xi|^2 d\sigma(\xi) \leq c,
\end{equation*}
hence $u\in\Iso^1_2$.
Now suppose that $u\in\Iso^1_2$. By definition we have $\int_\X|\xi|^{2}d\sigma(\xi) = \|u\|_{1,2}^2- \|u\|_{2}^2$.
Because of a simple inequality $|e^{ix\cdot y}-1|\leq|x\cdot y|\leq|x||y|$ for every $x,y\in\X$, we have
\begin{equation*}
\Big|\frac{e^{i(x+hz)\cdot\xi}-e^{ix\cdot\xi}}{h|z|}\Big|^2\leq|\xi|^2
\end{equation*}
and therefore
\begin{equation*}
\int_\X\Big|\frac{e^{i(x+hz)\cdot\xi}-e^{ix\cdot\xi}}{h|z|}\Big|^2d\sigma(\xi)
\leq \|u\|_{1,2}^2- \|u\|_{2}^2.
\end{equation*}
Combining this with \eqref{spectral-something} we get the estimate \eqref{derivative-estimate-eq}.
\end{proof}
\begin{corollary}\label{lipschitz}
If $u\in\Iso^1_2$ and $f:\R\to\R$ is Lipschitz then $f(u)\in\Iso^1_2$.
\end{corollary}
\begin{proof}
Because $f$ is Lipschitz we have
\begin{equation*}
\Big\|\frac{f(u(x+hz))-f(u(x))}{h|z|}\Big\|_2 \leq L \Big\|\frac{u(x+hz)-u(x)}{h|z|}\Big\|_2.
\end{equation*}
We conclude by applying Lemma~\ref{derivative-estimate} twice.
\end{proof}
\end{subsection}
\end{section}
\begin{section}{Linear equation}\label{linear-equation}
In this section we discuss properties of solutions to the linear Cauchy problem
\begin{equation}\label{heat}
\left\{
\begin{aligned}
&\dt u + \fLap u = 0\quad&\text{on $\T\times\X$}, \\
&u(0) \= u_0\quad&\text{on $\X$}.
\end{aligned}
\right.
\end{equation}
Before we study the solutions themselves we need to describe in detail some of the objects we work with.
\begin{definition}\label{flap}
Let $u\in C_c^\infty(\X,\R)$ and $s\in(0,1]$. We define the fractional Laplace operator
\begin{equation*}
\fLap u(x) = \mathcal{F}\{|\xi|^{2s}\mathcal{F}^{-1}u(\xi)\}.
\end{equation*}
\end{definition}
\begin{definition}\label{iso-fractional-laplacian}
For $u_0\in\Iso^{2s}_2$, such that $u_0=Eu_0(x)+ \int_\X e^{ix\cdot\xi}Z(d\xi)$, we define
\begin{equation*}
\fLap u_0(x) = \int_\X |\xi|^{2s}e^{ix\cdot\xi}Z(d\xi).
\end{equation*}
\end{definition}
\begin{remark}\label{flap-prop}
Notice that we have
\begin{equation*}
E\fLap u_0(x) = 0
\end{equation*}
and $\Spec(\fLap u_0)=|\xi|^{4s}\sigma(d\xi)$.
\end{remark}
By $\{P_t\}_{t\geq0}$ we denote the usual semigroup of linear operators
generated by $-\fLap$ (we suppress the dependence on the parameter $s$, which we assume to be fixed).
We define the action of the operators $P_t$ on the space of isometry-invariant random fields
in the following fashion.
\begin{definition}\label{semigroup-definition}
For $u_0\in\Iso_2$, such that $u_0(x)=Eu_0(x)+\int_\X e^{ix\cdot\xi}Z(d\xi)$, we define
\begin{equation*}
P_tu_0(x) = Eu_0(x)+\int_\X e^{ix\cdot\xi-t|\xi|^{2s}}Z(d\xi).
\end{equation*}
\end{definition}
\begin{remark}
Notice that
\begin{equation*}
\|P_tu_0\|_2^2 = (Eu_0(x))^2+\int_\X e^{-2t|\xi|^{2s}}\,\sigma(d\xi) \leq \|u_0\|_2^2,
\end{equation*}
which shows that $P_t$ is a contractive operator on $\Iso_2$. This expression also shows that
$\Spec(P_tu_0)=e^{-2t|\xi|^{2s}}\sigma(d\xi)$. By comparing Definition~\ref{semigroup-definition} with representation~\eqref{spectral-representation} we see that the semigroup property is preserved as well
\begin{multline*}
P_r\big(P_tu_0(x)\big) = EP_tu_0(x)+\int_\X e^{ix\cdot\xi-r|\xi|^{2s}}\big(e^{-t|\xi|^{2s}}Z(d\xi)\big)\\
= Eu_0(x)+\int_\X e^{ix\cdot\xi-(r+t)|\xi|^{2s}}Z(d\xi) = P_{r+t}u_0(x).
\end{multline*}
\end{remark}
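\begin{remark}
As a simple illustration, consider again (in dimension one) the field $v(x) = \sin(x)X + \cos(x)Y$ with spectral measure $\frac{1}{2}(\delta_{-1}+\delta_{1})$. Since both spectral frequencies satisfy $|\xi| = 1$, Definition~\ref{semigroup-definition} gives
\begin{equation*}
P_t v(x) = e^{-t}\big(\sin(x)X + \cos(x)Y\big) = e^{-t}v(x),
\end{equation*}
regardless of the value of $s$; in particular $\|P_t v\|_2 = e^{-t}\|v\|_2$, consistently with the contraction property noted above.
\end{remark}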
\begin{proposition}\label{semigroup-limit}
If $u_0\in\Iso^s_2$ then
\begin{equation*}
\lim_{h\to0} E\frac{u_0-P_hu_0}{h}u_0 = E\big((-\Delta)^{s/2}u_0\big)^2.
\end{equation*}
\end{proposition}
\begin{proof}
Let $\sigma$ be the spectral measure and $Z$ be the orthogonal random measure corresponding to $u_0$. We have
\begin{multline*}
E\frac{u_0-P_hu_0}{h}u_0
=\frac{1}{h}E\int_\X\big(e^{ix\cdot\xi}-e^{ix\cdot\xi-h|\xi|^{2s}}\big)Z(d\xi)\int_\X e^{ix\cdot\xi}Z(d\xi)\\
= \frac{1}{h}\int_\X \big(1-e^{-h|\xi|^{2s}}\big)\sigma(d\xi).
\end{multline*}
Because we assume $u_0\in\Iso^s_2$ we may use the Lebesgue dominated convergence theorem and pass to the limit to obtain
\begin{equation*}
\lim_{h\to0}\frac{1}{h}\int_\X \big(1-e^{-h|\xi|^{2s}}\big)\sigma(d\xi) = \int_\X |\xi|^{2s}\sigma(d\xi).
\end{equation*}
On the other hand,
\begin{equation*}
\int_\X |\xi|^{2s}\sigma(d\xi)
= E\Big(\int_\X |\xi|^s e^{ix\cdot\xi}Z(d\xi)\Big)^2 = E\big((-\Delta)^{s/2}u_0\big)^2
\end{equation*}
(cf. Remark~\ref{flap-prop}).
\end{proof}
In addition to such ``spectral'' framework
we may employ a direct approach as well.
To this end we introduce the kernel $p_t$ of the operator $P_t$ defined by the formula
\begin{equation*}
p_t(x) = (2\pi)^{-d}\int_\X e^{ix\cdot\xi-t|\xi|^{2s}}\,d\xi.
\end{equation*}
It is well-known that for $s\in(0,1]$ and $t>0$ the function $p_t$ is positive, radially symmetric and the function $t\mapsto p_t(y)$ is continuous. For every $t>0$ we also have $\int_\X p_t(x)\,dx=1$.
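Let us also recall two classical special cases (not needed later, but useful to keep in mind): for $s=1$ the kernel $p_t$ is the Gauss--Weierstrass kernel, and for $s=\frac{1}{2}$ it is the Poisson kernel,
\begin{equation*}
p_t(x) = (4\pi t)^{-d/2}e^{-\frac{|x|^2}{4t}} \quad (s=1),
\qquad
p_t(x) = \frac{\Gamma\big(\frac{d+1}{2}\big)}{\pi^{\frac{d+1}{2}}}\,\frac{t}{\big(t^2+|x|^2\big)^{\frac{d+1}{2}}} \quad (s=\tfrac{1}{2}),
\end{equation*}
while for the remaining values of $s$ no elementary closed formula is available.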
The following lemma provides a connection between the two approaches.
\begin{lemma}\label{kernel-linfty}
Let $u_0\in\Iso_p$ for some $2\leq p\leq\infty$. Then for every $t>0$ we have
\begin{equation*}
P_tu_0(x) = \int_\X p_t(y)u_0(x-y)\,dy,
\end{equation*}
where the integral is understood in the Bochner sense on functions in the space $C_b(\X,\Lp)$. Moreover, $\|P_tu_0\|_p\leq \|u_0\|_p$ for all $t\geq0$.
\end{lemma}
\begin{proof}
Let $u_0(x) = Eu_0(x)+\int_\X e^{ix\cdot\xi}\,Z(d\xi)$. Following the general definition we have
\begin{multline*}
P_tu_0(x) = Eu_0(x)+\int_\X e^{ix\cdot\xi-t|\xi|^{2s}}Z(d\xi) \\
= Eu_0(x)+\int_\X\int_\X p_t(y) e^{-iy\xi}\,dy\, e^{ix\cdot\xi}Z(d\xi).
\end{multline*}
By the Fubini theorem and the fact that $\int_\X p_t(y)\,dy = 1$ we then obtain
\begin{multline*}
P_tu_0(x) = Eu_0(x)+\int_\X p_t(y) \int_\X e^{-iy\xi}e^{ix\cdot\xi}Z(d\xi)\,dy \\
= \int_\X p_t(y)\Bigg(Eu_0(x)+\int_\X e^{i(x-y)\cdot\xi}Z(d\xi)\Bigg)\,dy
= \int_\X p_t(y)u_0(x-y)\,dy.
\end{multline*}
The last integral is convergent in the Bochner sense because we have
\begin{equation}\label{norm-estimate}
\int_\X \|p_t(y)u_0(x-y)\|_{\Lp}\,dy = \|u_0\|_p\int_\X p_t(y)\,dy = \|u_0\|_p,
\end{equation}
which also confirms that $\|P_tu_0\|_p\leq \|u_0\|_p$.
\end{proof}
\begin{proposition}\label{iso-infinity}
If $u_0\in\Iso_\infty$ then $P_t u_0(x,\omega) = \int_\X p_t(y)u_0(x-y,\omega)\,dy$ for every $t>0$ and almost every $\omega\in\Omega$.
\end{proposition}
\begin{proof}
Notice that the operator $T_\omega:C(\X,L^\infty(\Omega))\to L^\infty(\X,\Bor(\X),dx)$ defined as
$T_\omega u = u(\cdot\,,\omega)$ is bounded. By the Hille theorem we then have
\begin{multline*}
P_t u_0(x,\omega) = T_\omega\Big(\int_\X p_t(y)u_0(x-y)\,dy\Big) \\=
\int_\X p_t(y)T_\omega \big(u_0(x-y)\big)\,dy = \int_\X p_t(y)u_0(x-y,\omega)\,dy.
\end{multline*}
\end{proof}
\begin{remark}\label{l-infinity}
Notice that without additional assumptions an individual realisation $u_0(x,\omega)$
of the random field $u_0$ may fail to be integrable, so that for a given $\omega\in\Omega$ the Lebesgue integral
$\int_\X p_t(y)u_0(x-y,\omega)\,dy$ need not exist. This is the main reason
why we cannot consider solutions for \emph{every single realisation} of the initial data separately
and then \emph{average them out} to get our results.
\end{remark}
\begin{lemma}\label{jiso}
For every $2\leq p \leq \infty$ and every $K\geq0$ if $u_0\in\Iso_p$ then $P_tu_0\in\JIso_{K,p}$.
\end{lemma}
\begin{proof}
By Lemma~\ref{kernel-linfty} we have
\begin{equation*}
P_tu_0(x) = \int_\X p_t(y)u_0(x-y)\,dy.
\end{equation*}
Consider a sequence $t_n\to t$. By the continuity of the function $t\mapsto p_t(y)$ and the Lebesgue dominated convergence theorem we obtain
\begin{multline*}
\lim_{n\to\infty}\|P_tu_0(x) - P_{t_n}u_0(x)\|_p \leq
\lim_{n\to\infty}\int_\X\big\|(p_t(y)-p_{t_n}(y))u_0(x-y)\big\|_{\Lp}\,dy\\
=\|u_0\|_p\lim_{n\to\infty}\int_\X\big|p_t(y)-p_{t_n}(y)\big|\,dy =0.
\end{multline*}
For every $K\geq0$ and $2\leq p\leq\infty$ identity \eqref{norm-estimate} gives us the estimate
\begin{equation*}
\sup_{t\geq0}e^{-tK}\|P_tu_0(x)\|_p \leq \sup_{t\geq0}e^{-tK}\int_\X \|p_t(y)u_0(x-y)\|_p\,dy \leq \|u_0\|_p,
\end{equation*}
which shows that $P_tu_0\in\mathcal{B}_{K,p}$.
Let $\phi\in\Phi$ be an isometry and $Z$ be the orthogonal random measure corresponding to $u_0$. Then
because $u_0\in\Iso_2$ we have
\begin{multline*}
\phi(P_tu_0)(x) = \phi\Big(\int_\X e^{ix\cdot\xi-t|\xi|^{2s}}\,Z(d\xi)\Big) \\=
\int_\X e^{i\phi(x)\cdot\xi-t|\xi|^{2s}}\,Z(d\xi) = P_t u_0(\phi(x)) \= P_t u_0(x),
\end{multline*}
which confirms that $P_tu_0\in\JIso_{K,p}$ (see Definition \ref{jiso-def}).
\end{proof}
Let us now show a regularising effect of the linear semigroup.
\begin{lemma}\label{linear-regularity}
If $u_0\in\Iso_2$ then $P_tu_0\in C((0,\infty),\Iso^\alpha_2)$ for every $\alpha\geq 0$. Moreover
there exists a constant $c_{s,\alpha}$ such that for every $t>0$
\begin{equation*}
\|P_tu_0\|_{\alpha,2}\leq (1+c_{s,\alpha}t^{-\alpha/2s})\|u_0\|_{2}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $\sigma = \Spec(u_0)$. Keep in mind it is a finite measure and $\sigma(\X) = \|u_0\|_2^2$.
We then have $\Spec(P_tu_0) = e^{-2t|\xi|^{2s}}\sigma(d\xi)$ and
\begin{equation*}
\|P_tu_0\|_{\alpha,2}^2 = \int_\X (1+|\xi|^{2\alpha})\, e^{-2t|\xi|^{2s}} \sigma(d\xi)
\leq \big(1+\sup_{\xi\in\X}|\xi|^{2\alpha} e^{-2t|\xi|^{2s}}\big)\sigma(\X).
\end{equation*}
Notice that for every $t>0$ and $\alpha\geq0$ we have
\begin{equation}\label{cs}
\sup_{\xi\in\X}|\xi|^{2\alpha} e^{-2t|\xi|^{2s}} = (2t)^{-\alpha/s}\sup_{\xi\in\X}|\xi|^{2\alpha} e^{-|\xi|^{2s}} = c_{s,\alpha}^2\,t^{-\alpha/s},
\end{equation}
which shows that
\begin{equation*}
\|P_tu_0\|_{\alpha,2}
\leq \|u_0\|_{2}\sqrt{1+c_{s,\alpha}^2\,t^{-\alpha/s}}
\leq \|u_0\|_{2}\big(1+c_{s,\alpha}\,t^{-\alpha/{2s}}\big).
\end{equation*}
Finally, by the Lebesgue dominated convergence theorem, for every $t>0$ we obtain
\begin{equation*}
\lim_{\tau\to t}\|P_tu_0- P_\tau u_0\|_{\alpha,2}^2
= \lim_{\tau\to t} \int_\X(1+ |\xi|^{2\alpha}) (e^{-2t|\xi|^{2s}}-e^{-2\tau|\xi|^{2s}})\,d\sigma(\xi) = 0,
\end{equation*}
which confirms the continuity.
\end{proof}
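\begin{remark}
Let us note in passing that the constant in \eqref{cs} can be computed explicitly: for $\alpha>0$ the function $r \mapsto r^{2\alpha}e^{-2tr^{2s}}$ attains its maximum at $r^{2s} = \frac{\alpha}{2ts}$, which gives
\begin{equation*}
c_{s,\alpha}^2 = \Big(\frac{\alpha}{2se}\Big)^{\alpha/s},
\end{equation*}
while for $\alpha=0$ one may simply take $c_{s,0}=1$.
\end{remark}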
\begin{lemma}\label{linear-regularity1}
Let $\alpha\geq0$. If $u_0\in\Iso_2^\alpha$
then $\grad P_tu_0\in C((0,\infty),\Iso^\alpha_2)$.
Moreover
there exists a constant $c_{s}$ (independent of $\alpha$) such that for every $t>0$
\begin{equation*}
\|\grad P_tu_0\|_{\alpha,2}\leq c_{s}t^{-1/2s}\|u_0\|_{\alpha,2}.
\end{equation*}
\end{lemma}
\begin{proof}
The proof is almost identical to that of Lemma~\ref{linear-regularity}.
Let $\sigma = \Spec(u_0)$.
Then, because of the Cauchy-Schwarz inequality and identity~\eqref{cs}, we have
\begin{multline*}
\|\grad P_tu_0\|_{\alpha,2}^2 = \int_\X (1+|\xi|^{2\alpha})\big|\tfrac{z}{|z|}\cdot\xi\big|^2\, e^{-2t|\xi|^{2s}} \sigma(d\xi) \\
\leq \sup_{\xi\in\X}|\xi|^{2} e^{-2t|\xi|^{2s}}\int_\X(1+|\xi|^{2\alpha})\sigma(d\xi)= c_{s}^2\,t^{-1/s}\|u_0\|_{\alpha,2}^2.
\end{multline*}
The continuity follows in a similar fashion.
\end{proof}
\begin{lemma}\label{semigroup-estimate}
There exists a constant ${c}_s$ (independent of $p$) such that
if $u_0\in\Iso_p$ then
\begin{equation*}
\|\grad P_t u_0\|_p \leq {c}_s t^{-1/2s}\|u_0\|_p
\end{equation*}
for every $t>0$.
\end{lemma}
\begin{proof}
Notice that
\begin{equation*}
\grad P_t u_0 = \int_\X\grad p_t(y)u_0(x-y)\,dy.
\end{equation*}
It is well-known that (see~\cite{MR2373320} or \cite{MR3211862})
\begin{equation*}
p_t(y) = t^{-d/2s} p_1(t^{-1/2s}y)
\end{equation*} for every $t>0$ and
\begin{equation*}
|\grad p_1(y)| \leq C(1+|y|)^{-(2s+d+1)}.
\end{equation*}
This allows us to estimate
\begin{multline*}
\|\grad P_t u_0\|_p
\leq \int_\X \|\grad p_t(y)u_0(x-y)\|_p\,dy
= \|u_0\|_p\int_\X |\grad p_t(y)|\,dy \\
\leq t^{-(d+1)/2s}\|u_0\|_p \,C\int_\X (1+|t^{-1/2s}y|)^{-(2s+d+1)}\,dy\\
=t^{-1/2s}\|u_0\|_p \,C\int_\X (1+|y|)^{-(2s+d+1)}\,dy.
\end{multline*}
The last integral is convergent and does not depend on $p$.
\end{proof}
The following theorem justifies calling $P_tu_0$ a solution to problem~\eqref{heat}.
\begin{theorem}\label{semigroup-c1}
If $u_0\in\Iso_2$ and $u(t) =P_tu_0$ then $u\in C^1((0,\infty),\Iso_2)$,
$\dt u\in\JIso_{K,2}$ for every $K\geq 0$ and
\begin{equation*}
\dt u+ \fLap u = 0\quad\text{for every $t>0$}.
\end{equation*}
\end{theorem}
\begin{proof}
It follows from Lemma~\ref{linear-regularity} that $\fLap u(t)\in\Iso_2$ for every $t>0$.
Let $\sigma=\Spec(u_0)$. Then $\Spec(u(t)) = e^{-2t|\xi|^{2s}}\sigma(d\xi)$ and due to identity~\eqref{derivative} and Definition~\ref{iso-fractional-laplacian} it suffices to consider
\begin{equation*}
E\Big|\frac{u(t+h)-u(t)}{h}+\fLap u(t)\Big|^2
= \int_\X\Big(\frac{e^{-h|\xi|^{2s}}-1}{h}+|\xi|^{2s}\Big)^2 e^{-2t|\xi|^{2s}}d\sigma(\xi).
\end{equation*}
Since $\sigma$ is a finite measure we may pass to the limit with $h\to0$ on both sides and use the
Lebesgue dominated convergence theorem to obtain
\begin{equation*}
\dt u(t) = -\fLap u(t)
\end{equation*}
for every $t>0$. Moreover, we have (see estimate~\eqref{cs})
\begin{equation*}
\|\dt u(t)\|_2^2 = \int_\X|\xi|^{4s}e^{-2t|\xi|^{2s}}\sigma(d\xi)\leq c_s t^{-2}\|u_0\|_2^2,
\end{equation*}
therefore $\dt u \in \JIso_{K,2}$ for every $K\geq 0$.
\end{proof}
\begin{remark}
The analysis of the linear problem is presented in more detail in \cite{MR3628179}.
However, here we use a different definition of solutions, which is better suited to the nonlinear case discussed in the sequel.
\end{remark}
\end{section}
\begin{section}{Equation with Lipschitz nonlinearity}\label{sec-lipschitz}
Let us consider the following initial value problem
\begin{equation}\label{burgers}
\left\{
\begin{aligned}
&\dt u + \fLap u = \grad f(u)\quad&\text{on $\T\times\X$}, \\
&u(0) \= u_0\quad&\text{on $\X$}.
\end{aligned}
\right.
\end{equation}
Here we assume $s\in(\frac{1}{2},1]$, $u_0\in\Iso_p$ and that the function $f:\R\to\R$ is Lipschitz, i.e. there exists a constant $L>0$ such that $|f(x)-f(y)| \leq L |x-y|$ for every $x,y\in\R$,
and moreover $f(0)=0$.
In what follows, these assumptions are implicitly included whenever we refer to problem~\eqref{burgers}.
\begin{subsection}{Existence of solutions}
\begin{definition}\label{definition}
Given $u\in\JIso_{K,2}$ for some $K\geq 0$, we define the following nonlinear operator
\begin{equation*}
F(u)(t) = P_{t}u(0) + \int_0^{t} \grad P_{t-\tau} f(u(\tau))\,d\tau.
\end{equation*}
Let $u_0\in\Iso_p$. For $K\geq0$ we say that $u\in\JIso_{K,p}$ is a solution to problem~\eqref{burgers} if $F(u)=u$ and $u_0\=u(0)$.
\end{definition}
\begin{lemma}\label{Fimage}
For every $K>0$ and $2\leq p\leq\infty$ if $u\in\JIso_{K,p}$ then $F(u)\in\JIso_{K,p}$.
\end{lemma}
\begin{proof}
First we estimate the norm to check if $F(u)\in \mathcal{B}_{K,p}$. We have
\begin{equation*}
\K{F(u)}{p}
= \sup_{t\geq0}e^{-tK}\Big\|P_{t}u(0) + \int_0^{t} \grad P_{t-\tau} f(u(\tau))\,d\tau\Big\|_p.
\end{equation*}
It follows from Lemma~\ref{semigroup-estimate} that
\begin{multline}\label{norm-est1}
\Big\|\int_0^{t} \grad P_{t-\tau} f(u(\tau))\,d\tau\Big\|_p
\leq c_sL\int_0^{t} (t-\tau)^{-1/2s}\|u(\tau)\|_p\,d\tau\\
\leq c_sL\,\Big(\sup_{0<\tau<t}e^{-\tau K}\|u(\tau)\|_p\Big) \int_0^{t} (t-\tau)^{-1/2s} e^{\tau K}\,d\tau.
\end{multline}
Using the $\Gamma$ function we estimate
\begin{multline}\label{gamma}
\int_0^{t} (t-\tau)^{-1/2s} e^{-K(t-\tau)}d\tau < K^{-1+\frac{1}{2s}}\int_0^{\infty} z^{-1/2s}e^{-z}\,dz \\= K^{-1+\frac{1}{2s}}\Gamma(1-\tfrac{1}{2s}),
\end{multline}
which gives us
\begin{align*}
\K{F(u)}{p} \leq \K{P_tu(0)}{p} + c_sL K^{-1+\frac{1}{2s}}\Gamma(1-\tfrac{1}{2s}) \K{u}{p}.
\end{align*}
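For clarity, let us record the substitution behind estimate \eqref{gamma}; this is where the assumption $s>\frac{1}{2}$ enters, making $z^{-1/2s}$ integrable at the origin:
\begin{equation*}
e^{-tK}\int_0^{t} (t-\tau)^{-1/2s} e^{\tau K}\,d\tau
= \int_0^{t} (t-\tau)^{-1/2s} e^{-K(t-\tau)}\,d\tau
= K^{-1+\frac{1}{2s}}\int_0^{Kt} z^{-1/2s} e^{-z}\,dz,
\end{equation*}
where $z = K(t-\tau)$, and the last integral is bounded by $\Gamma(1-\tfrac{1}{2s})$.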
Then for every isometry $\phi\in\Phi$ we observe
\begin{multline*}
\phi F(u) = \phi\Big(P_t u(0) + \int_0^t\grad P_{t-\tau}f(u(\tau))\,d\tau\Big) \\
=\phi\big(P_t u(0)\big) + \phi\Big(\int_0^t\grad P_{t-\tau}f(u(\tau))\,d\tau\Big) \\
= P_t (\phi u(0)) + \int_0^t\grad P_{t-\tau}f(\phi u(\tau))\,d\tau = F(\phi u).
\end{multline*}
Finally, since $\phi u \= u$, we have $F(\phi(u)) \= F(u)$ and therefore $\phi F(u) \= F(u)$.
\end{proof}
\begin{lemma}\label{contraction}
If $u,v\in\JIso_{K,p}$ and $u\sim v$ then
\begin{equation*}
\K{F(u)-F(v)}{p} \leq \|u(0)-v(0)\|_p + c_sLK^{-1+\frac{1}{2s}}\Gamma(1-\tfrac{1}{2s})\K{u-v}{p}.
\end{equation*}
\end{lemma}
\begin{proof}
First we notice that $f(u)\sim f(v)$ and therefore
\begin{equation*}
\grad P_t f(u) - \grad P_tf(v) = \grad P_t (f(u)-f(v)).
\end{equation*}
This gives us the following inequality
\begin{multline*}
\|F(u)(t)-F(v)(t)\|_p
\\\leq \|P_t(u(0)-v(0))\|_p +\int_0^{t}\big\|\grad P_{t-\tau} (f(u(\tau))-f(v(\tau)))\big\|_p\,d\tau.
\end{multline*}
Similarly to estimate \eqref{norm-est1} we have
\begin{multline*}
\int_0^{t}\big\|\grad P_{t-\tau} (f(u(\tau))-f(v(\tau)))\big\|_p\,d\tau\\
\leq c_sL\,\Big(\sup_{0\leq\tau\leq t}e^{-\tau K}\|u(\tau)-v(\tau)\|_p\Big)\int_0^{t} (t-\tau)^{-1/2s}e^{\tau K}\,d\tau.
\end{multline*}
We combine it with estimate \eqref{gamma} and Lemma~\ref{kernel-linfty} to obtain
\begin{multline*}
\K{F(u)-F(v)}{p} = \sup_{t\geq0}e^{-tK}\|F(u)(t)-F(v)(t)\|_p \\
\leq \|u(0)-v(0)\|_p + c_sLK^{-1+\frac{1}{2s}}\Gamma(1-\tfrac{1}{2s})\K{u-v}{p}.\qedhere
\end{multline*}
\end{proof}
\begin{theorem}\label{existence}
Let $\frac{1}{2}<s\leq 1$ and $2\leq p\leq\infty$. There exists a constant $K_0$ such that
for every $u_0\in\Iso_p$ and every $K\geq K_0$ the sequence
\begin{equation}\label{picard}
\left\{
\begin{aligned}
&u_1 = P_t u_0, \\
&u_{n+1}=F(u_{n}) = F^{n}(u_1)
\end{aligned}
\right.
\end{equation}
converges in $\JIso_{K,p}$ to a solution of problem~\eqref{burgers}.
\end{theorem}
\begin{proof}
It follows from Lemma~\ref{jiso} that $u_1\in\JIso_{K,p}$ for every $K>0$,
and then from Lemma~\ref{Fimage} that $\{u_n\}\subset\JIso_{K,p}$.
Let us notice that $u_n\sim u_m$ and $u_n(0)=u_m(0)=u_0$.
Thus by Lemma~\ref{contraction} we have
\begin{equation*}
\K{F(u_n)-F(u_m)}{p}\leq c_sLK^{-1+\frac{1}{2s}}\Gamma(1-\tfrac{1}{2s})\K{u_n-u_m}{p}.
\end{equation*}
We now choose $K_0$, depending on $s$ and $L$, such that $c_sLK_0^{-1+\frac{1}{2s}}\Gamma(1-\frac{1}{2s})<1$.
It then follows, by the standard contraction argument behind the Banach fixed point theorem, that $\{u_n\}$ is a Cauchy sequence in the space
$\JIso_{K,p}$ for every $K\geq K_0$ (we use the assumption $s>\frac{1}{2}$)
and converges to some $u\in\JIso_{K,p}$ which is a fixed point of $F$.
\end{proof} \begin{definition}
We refer to the solution constructed in Theorem~\ref{existence} as the Picard solution. \end{definition}
\begin{corollary}\label{uniqueness}
If $u$ is the Picard solution to~\eqref{burgers} then for every $h>0$ and every $t>0$
we have
\begin{equation*}
u(t+h) = P_hu(t)+\int_t^{t+h}\grad P_{t+h-\tau}f(u(\tau))\,d\tau.
\end{equation*}
\end{corollary}
\begin{proof}
Let $u_n$ be the sequence of Picard iterations as defined in \eqref{picard}.
Since $u_{n} = F(u_{n-1})$ for $n\geq 2$ we have
\begin{align*}
P_hu_n(t) &= P_hP_tu_0 + P_h\int_0^t\grad P_{t-\tau}f(u_{n-1}(\tau))\,d\tau\\
&= P_{t+h}u_0 + \int_0^t\grad P_{t+h-\tau}f(u_{n-1}(\tau))\,d\tau
\end{align*}
and
\begin{equation*}
u_n(t+h) = P_{t+h}u_0 + \int_0^{t+h}\grad P_{t+h-\tau}f(u_{n-1}(\tau))\,d\tau.
\end{equation*}
Hence
\begin{equation*}
u_n(t+h) - P_hu_n(t) = \int_t^{t+h}\grad P_{t+h-\tau}f(u_{n-1}(\tau))\,d\tau
\end{equation*}
and finally, because $\lim_{n\to\infty} \|u_n -u\|_2 =0$, we get
\begin{equation*}
u(t+h) - P_hu(t) = \int_t^{t+h}\grad P_{t+h-\tau}f(u(\tau))\,d\tau.
\end{equation*}
\end{proof}
\begin{remark}\label{uniqueness-remark}
In this paper we do not consider the question of uniqueness of solutions and
indeed, Definition~\ref{definition} may be too weak to guarantee it.
In~\cite{MR3628179} it is shown that the semigroup solution $P_tu_0$ is
in fact the unique solution to the linear problem under additional
continuity-in-time assumptions. In the remainder we only work with
the Picard solutions to problem~\eqref{burgers}, which are well-defined.
\end{remark}
\end{subsection} \end{section} \begin{section}{Regularity of solutions}\label{apriori} \begin{subsection}{Moment estimates}
In the first part of this section we reproduce the second moment estimates presented in~\cite{MR0264252}.
We are only able to do this while assuming higher regularity of the initial condition, namely $u_0\in\Iso_2^1$.
Despite this limitation, the result is interesting because of an elegant identity described in Remark~\ref{covariance}. In the sequel we obtain weaker (but sufficient) estimates for all moments in a more general
setting.
\begin{lemma}\label{infinitesimal-linearisation}
Suppose $u$ is the Picard solution to~\eqref{burgers}.
Let $g:\R\to\R$ be a measurable function such that $g(u(t))\in\Iso_2$ for some $t\geq0$.
Then for every $h>0$ we have
\begin{equation*}
Eu(t+h)g(u(t)) = EP_hu(t)g(u(t)) + hR(h),
\end{equation*}
where $R:\T\to\R$ is a function such that $\lim_{h\to0}R(h)=0$.
\end{lemma}
\begin{proof}
From Corollary~\ref{uniqueness} we have
\begin{equation*}
u(t+h) = P_hu(t) + \int_t^{t+h}\grad P_{t+h-\tau}f(u(\tau))\,d\tau.
\end{equation*}
We define $R(h) = h^{-1}E\int_t^{t+h}\grad P_{t+h-\tau}f(u(\tau))\,d\tau\, g(u(t))$.
By the Lebesgue differentiation theorem and Lemma~\ref{derivative-zero} we obtain
\begin{equation*}
\lim_{h\to0}E\frac{1}{h}\int_t^{t+h}\grad P_{t+h-\tau}f(u(\tau))\,d\tau\, g(u(t)) = E \grad f(u(t)) g(u(t)) = 0.
\end{equation*}
\end{proof}
\begin{corollary}\label{c1/2}
Suppose $u_0\in\Iso_2$ and $u$ is the Picard solution to~\eqref{burgers}. Then
\begin{equation*}
\lim_{h\to0}E\frac{\big(u(t+h)-u(t)\big)^2}{h}=0.
\end{equation*}
\end{corollary}
\begin{proof} Notice that if $a=b+c$ then $a^2=b^2 + c(b+a)$.
It follows from Corollary~\ref{uniqueness} that
\begin{equation*}
u(t+h)^2 = \big(P_hu(t)\big)^2 + \Big(\int_t^{t+h}\grad P_{t+h-\tau}f(u(\tau))\,d\tau\Big)\big(P_hu(t)+u(t+h)\big).
\end{equation*}
Then by Lemma~\ref{infinitesimal-linearisation} we get
\begin{align*}
&E\big(u(t+h)-u(t)\big)^2
= Eu(t+h)^2-2Eu(t+h)u(t)+Eu(t)^2\\
&= E\big(P_hu(t)\big)^2 + E\Big(\int_t^{t+h}\grad P_{t+h-\tau}f(u(\tau))\,d\tau\Big)\big(P_hu(t)+u(t+h)\big)
\\&-2EP_hu(t)u(t)+Eu(t)^2+hR(h),
\end{align*}
where $\lim_{h\to0} R(h)=0$.
This, combined with the Lebesgue differentiation theorem, implies
\begin{equation*}
\lim_{h\to0}E\frac{\big(u(t+h)-u(t)\big)^2}{h}=\lim_{h\to0}E\frac{\big(P_hu(t)-u(t)\big)^2}{h}= 0.
\end{equation*}
\end{proof}
\begin{lemma}\label{regularity}
If $u$ is the Picard solution to problem~\eqref{burgers} with $u_0\in\Iso^1_2$
then $u\in C([0,\infty),\Iso^1_2)$.
\end{lemma}
\begin{proof}
Let $u_n$ be the sequence of Picard iterations as defined in \eqref{picard}
and let $K_0$ be such as in Theorem~\ref{existence}, i.e.
\begin{equation}\label{k0}
c_sLK_0^{-1+\frac{1}{2s}}\Gamma(1-\tfrac{1}{2s}) < 1,
\end{equation}
where $c_s$ is a known constant.
Let us proceed by induction.
We have
\begin{equation*}
\|u_1(t)\|_{1,2}^2 = \int_\X(1+|\xi|^{2})e^{-2t|\xi|^{2s}}\sigma(d\xi)
\leq \|u_0\|_{1,2}^2,
\end{equation*}
hence $u_1\in C([0,\infty),\Iso^1_2)$.
Suppose $u_n\in C([0,\infty),\Iso^1_2)$ and
$\sup_{t\geq0}e^{-tK_0}\|u_n(t)\|_{1,2}\leq n\|u_0\|_{1,2}$.
By Lem\-ma~\ref{linear-regularity1} and Corollary~\ref{lipschitz} for every $t>0$ we have
\begin{equation*}
\|\grad P_{t} f(u_n(\tau))\|_{1,2}
\leq {c}_s t^{-1/2s}\|f(u_n(\tau))\|_{1,2} \leq {c}_sL t^{-1/2s}\|u_n(\tau)\|_{1,2}.
\end{equation*}
Therefore
\begin{multline*}
\|u_{n+1}(t)\|_{1,2} \leq \|u_1(t)\|_{1,2} + \int_0^{t} \|\grad P_{t-\tau} f(u_n(\tau))\|_{1,2}\,d\tau \\
\leq \|u_1(t)\|_{1,2} + {c}_sL\int_0^{t} (t-\tau)^{-1/2s} \|u_n(\tau)\|_{1,2}\,d\tau.
\end{multline*}
Using estimates \eqref{norm-est1}, \eqref{gamma}, \eqref{k0} and the induction hypothesis, we thus obtain
\begin{multline*}
\sup_{t\geq0}e^{-tK_0}\|u_{n+1}(t)\|_{1,2}\\
\leq \|u_1(t)\|_{1,2} + n{c}_sLK_0^{-1+\frac{1}{2s}}\Gamma(1-\tfrac{1}{2s})\|u_0\|_{1,2}\leq (n+1)\|u_0\|_{1,2}.
\end{multline*}
It follows that $u_{n+1}\in C([0,\infty),\Iso^1_2)$.
Moreover, we obtain
\begin{multline*}
\sup_{t\geq0}e^{-tK_0}\|u_{n}(t)-u_m(t)\|_{1,2} \\\leq {c}_sLK_0^{-1+\frac{1}{2s}}\Gamma(1-\tfrac{1}{2s})\sup_{t\geq0}e^{-tK_0}\|u_{n-1}(t)-u_{m-1}(t)\|_{1,2}
\end{multline*}
and because of~\eqref{k0} it follows that $\{u_n\}$ is a Cauchy sequence in $C([0,T],\Iso^1_2)$ for every $T>0$ and $u\in C([0,\infty),\Iso^1_2)$.
\end{proof}
\begin{theorem}\label{rosenblatt}
If $u$ is the Picard solution to~\eqref{burgers} with $u_0\in\Iso^1_2$ then
\begin{equation*}
\dt Eu(t)^2 = -2E\big((-\Delta)^{s/2}u(t)\big)^2.
\end{equation*}
\end{theorem}
\begin{proof}
From Lemma~\ref{infinitesimal-linearisation} we have
\begin{equation}\label{nonlinear-semigroup-limit}
E\frac{u(t+h)-u(t)}{h}u(t) = E\frac{P_hu(t)-u(t)}{h}u(t) + R(h),
\end{equation}
where $\lim_{h\to0}R(h)=0$.
Notice that by Lemma~\ref{regularity} and Proposition~\ref{iso-c1} we have $u(t)\in\Iso^1_2\subset\Iso^s_2$ for $s\leq1$.
Therefore on the right-hand side of equality \eqref{nonlinear-semigroup-limit} by Proposition~\ref{semigroup-limit} we get
\begin{equation*}
\lim_{h\to0}\Big(E\frac{P_hu(t)-u(t)}{h}u(t) + R(h)\Big) = -E\big((-\Delta)^{s/2}u(t)\big)^2.
\end{equation*}
This ensures the existence of the limit on the left-hand side
and by Corollary~\ref{c1/2} we obtain
\begin{align*}
\lim_{h\to 0} &E\frac{\big(u(t+h)-u(t)\big)u(t)}{h}
= \frac{1}{2} \lim_{h\to 0} E\frac{\big(u(t+h)-u(t)\big)\big(u(t)+u(t+h)\big)}{h}\\
&= \frac{1}{2} \lim_{h\to 0} \frac{E\big(u(t+h)^2-u(t)^2\big)}{h}
= \frac{1}{2}\dt Eu(t)^2.
\end{align*}
\end{proof}
\begin{remark}\label{covariance}
For the solution $u$ as in Theorem~\ref{rosenblatt} we define the covariance functional $B(t,y) = Eu(t,x)u(t,x+y)$. Let us notice that
$\dt B(t,0) = \dt Eu(t,x)^2$ and according to Definition~\ref{flap} we have
\begin{multline*}
(-\Delta^s_y) B(t,y)|_{y=0}
= \int_\X |\zeta|^{2s}\int_\X e^{-iz\cdot\zeta}Eu(t,x)u(t,x+z)\,dz\,d\zeta\\
= \int_\X |\zeta|^{2s}\int_\X e^{-iz\cdot\zeta}\int_\X e^{iz\cdot \xi}\,d\sigma_t(\xi)\,dz\,d\zeta
= \int_\X |\zeta|^{2s}\sigma_t(d\zeta)\\
= E\big((-\Delta)^{s/2}u(t)\big)^2.
\end{multline*}
Therefore we may write the result of Theorem~\ref{rosenblatt} as
\begin{equation*}
\dt B(t,0) = -2(-\Delta^s_y) B(t,y)|_{y=0}.
\end{equation*}
Note that this property of the functional $B(t,y)$ holds \emph{only} at $y=0$.
\end{remark}
\end{subsection}
\begin{subsection}{Higher moments estimates}
In order to prove estimates analogous to those obtained (for the second moment) in Theorem~\ref{rosenblatt} for moments $p>2$, we need to
fall back to the classical theory applied pathwise because of issues with regularity. As indicated in Remark~\ref{l-infinity}, this is
only possible in the space $\Iso_\infty$. Accordingly, we will work with cut-off initial data
first and then reach the general case by approximation.
\begin{lemma}\label{classical-solutions}
If $u_0\in\Iso_\infty$ and $f\in C^\infty(\R)$ then
for every $\omega\in\Omega$ there exists a unique function
$u^\omega(t,x)\in C^\infty((0,\infty)\times\X)\cap L^\infty(\T\times\X,dx)$ which is a (classical) solution to
\begin{equation}\label{classical-equation}
\left\{
\begin{aligned}
&\dt u + \fLap u = \grad f(u)\quad&\text{on $\T\times\X$}, \\
&u^\omega(0) = u_0(x,\omega)\quad&\text{a.e. on $\X$}.
\end{aligned}
\right.
\end{equation}
\end{lemma}
\begin{proof}
We know from Proposition~\ref{iso-infinity} that $u_0(x,\omega)\in L^\infty(\X,dx)$ for every
$\omega\in\Omega$. Because we assume $f\in C^\infty(\R)$, it follows from~\cite[Theorem 1.1]{MR2019032} that there exists a unique solution $u^\omega$ to problem~\eqref{classical-equation}, such that
$u^\omega\in C^\infty((0,\infty)\times\X)\cap L^\infty(\T\times\X,dx)$.
\end{proof}
\begin{lemma}\label{classical-solutions-equiv}
Suppose $u$ is the Picard solution to~\eqref{burgers} with $u_0\in\Iso_\infty$, where $f\in C^\infty(\R)$ is Lipschitz.
Let $u^\omega$ be defined as in Lemma~\ref{classical-solutions} for every $\omega\in\Omega$.
Then $u^\omega(t,x) = u(t,x,\omega)$ for almost all $\omega\in\Omega$.
\end{lemma}
\begin{proof}
Recall we denote by
\begin{equation*}
\mathcal{B}_{K,L^\infty(\X,dx)} = \Big\{u\in C([0,\infty),L^\infty(\X,dx)):\sup_{t\geq0} e^{-tK}\|u(t)\|_{L^\infty(\X,dx)}<\infty\Big\}
\end{equation*}
the Banach space of continuous functions endowed with an appropriate Bielecki norm.
Let us proceed by induction. It follows from Proposition~\ref{iso-infinity} that
\begin{equation*}
u_1^\omega(t,x) = u_1(t,x,\omega) = P_t u_0(x,\omega)
\end{equation*}
for almost every $\omega\in\Omega$, as well as that for every $\omega\in\Omega$
and every $K>0$ we have $u_1^\omega\in \mathcal{B}_{K,L^\infty(dx)}$.
Let $K_0$ be such as in Theorem~\ref{existence}, the sequence $u_n$ be defined as in~\eqref{picard} and $u_n^\omega$ be analogous sequences of (non-random) Picard iterations starting with $u_1^\omega$ for every $\omega\in\Omega$.
Suppose we have $u_n(t,x,\omega) = u_n^\omega(t,x)$ for almost every $\omega\in\Omega$ and $u_n^\omega(t,x)\in \mathcal{B}_{K,L^\infty(dx)}$ for every $\omega\in\Omega$
and every $K\geq K_0$.
Then because of the Hille theorem (as in the proof of Proposition~\ref{iso-infinity}) we get
\begin{equation*}
\Big(\int_0^t \grad P_{t-\tau}f(u_n(\tau,x))\,d\tau\Big) (\omega)
= \int_0^t \grad P_{t-\tau}f(u_n^\omega(\tau,x))\,d\tau.
\end{equation*}
It follows that $u_{n+1}(t,x,\omega) = u_{n+1}^\omega(t,x)$ and moreover
\begin{multline*}
\|u_{n+1}^\omega(t)\|_{L^\infty(dx)}
\leq \|u^\omega_1(0)\|_{L^\infty(dx)}
+ \int_0^t \|\grad P_{t-\tau}f(u_n^\omega(\tau))\|_{L^\infty(dx)}\,d\tau
\\\leq \|u^\omega_1(0)\|_{L^\infty(dx)}
+ c_sL\int_0^{t} (t-\tau)^{-1/2s}\|u_n^\omega(\tau)\|_{L^\infty(dx)}\,d\tau.
\end{multline*}
This gives us an estimate
\begin{multline*}
\K{u_{n+1}^\omega}{L^\infty(\X,dx)}\\\leq
\K{u_{1}^\omega}{L^\infty(\X,dx)} +
c_sLK^{-1+\frac{1}{2s}}\Gamma(1-\tfrac{1}{2s})\K{u_{n}^\omega}{L^\infty(\X,dx)},
\end{multline*}
which, because $K\geq K_0$, ensures that $\{u_{n}^\omega\}\subset\mathcal{B}_{K_0,L^\infty(dx)}$.
Consequently
$u^\omega(t,x) = u(t,x,\omega)$, for almost every $\omega\in\Omega$, since both solutions are defined as limits of
Picard iterations.
\end{proof}
\begin{proposition}\label{liskevich-semenov}
Let $w\in\Iso_\infty$, let $\theta:\X\to\R$ be a measurable function such that $|\theta|=1$ and let $a,b>0$ satisfy $a+b=2$. Then
\begin{equation*}
E|w|^2-EP_h\theta |w|^a\, \theta |w|^b \geq ab\big(E|w|^2-EP_h|w|\,|w|\big).
\end{equation*}
\end{proposition}
\begin{proof}
Let us notice that, because $\theta(x)|w(x)|^a\theta(x)|w(x)|^b = |w(x)|^2$ and
$p_h(x-y)=p_h(y-x)$, we have
\begin{multline*}
E|w|^2 - EP_h\theta |w|^a\theta |w|^b
= E|w|^2 - \int_\X p_h(x-y)\theta(y)|w(y)|^a\,\theta(x)|w(x)|^b\,dy\\
=\frac{1}{2}\int_\X\!\!\!\! p_h(x-y)E\big(\theta(y)|w(y)|^a\!-\theta(x)|w(x)|^a\big)\!
\big(\theta(y)|w(y)|^b\!-\theta(x)|w(x)|^b\big)dy.
\end{multline*}
Similarly,
\begin{multline*}
EP_h|w||w|
= \int_\X p_h(x-y)E|w(y)|\,|w(x)|\,dy\\
= E|w|^2
- \frac{1}{2}\int_\X p_h(x-y)E\big(|w(y)|-|w(x)|\big)^2\,dy.
\end{multline*}
We may now use the following inequality (see \cite[Lemma II.5.5]{MR1218884})
\begin{equation*}
(\theta_1x^a-\theta_2y^a)(\theta_1x^b-\theta_2y^b)\geq ab(x-y)^2,\ \text{for $x,y\geq0$, when $a+b=2$ and $|\theta_1|=|\theta_2|=1$}
\end{equation*}
to obtain the desired result.
\end{proof}
\begin{lemma}\label{cut-off-reg}
If $u$ is the Picard solution to~\eqref{burgers} with $u_0\in\Iso_\infty$ and $f\in C^\infty(\R)$ then
\begin{equation*}
\dt E|u|^p \leq -4\frac{p-1}{p}E\big|(-\Delta)^{s/2}|u|^{p/2}\big|^2
\end{equation*}
for every $p\geq2$.
\end{lemma}
\begin{proof}
From Lemma~\ref{infinitesimal-linearisation} we have
\begin{multline*}
E\frac{u(t+h)-u(t)}{h}|u|^{p-1}(t)\sgn u(t) \\= E\frac{P_hu(t)-u(t)}{h}|u|^{p-1}(t)\sgn u(t) + R(h),
\end{multline*}
where $\lim_{h\to0}R(h) = 0$.
Let $w = |u|^{\frac{p-2}{2}}u$. Then
\begin{equation*}
P_hu\big(|u|^{p-1}\sgn u\big) = P_h \big(|w|^{2/p}\sgn w\big)\big(|w|^\frac{2(p-1)}{p}\sgn w\big)
\end{equation*}
and it follows from Proposition~\ref{liskevich-semenov} that
\begin{equation*}
E(P_hu(t)-u(t))|u|^{p-1}(t)\sgn u(t) \leq 4\frac{p-1}{p^2}E(P_h |w| -|w|)|w|.
\end{equation*}
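Here Proposition~\ref{liskevich-semenov} is applied with $\theta=\sgn w$, $a=\tfrac{2}{p}$ and $b=\tfrac{2(p-1)}{p}$; indeed, since $|w|=|u|^{p/2}$ and $\sgn w=\sgn u$,
\begin{equation*}
a+b=2,\qquad ab=\frac{4(p-1)}{p^2},\qquad \theta|w|^a=u,\qquad \theta|w|^b=|u|^{p-1}\sgn u,
\end{equation*}
and $u\,|u|^{p-1}\sgn u=|u|^p=|w|^2$.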
Because $u(t)\in\Iso^1_2\cap\Iso_\infty$ for $t>0$, it follows from Corollary~\ref{lipschitz} that $w\in\Iso^1_2\subset\Iso^s_2$ and we have
\begin{equation*}
\lim_{h\to0}E\frac{(P_h |w|-|w|)|w|}{h} = -E\big|(-\Delta)^{s/2}|w|\big|^2.
\end{equation*}
On the other hand, it follows from Lemmas~\ref{classical-solutions} and~\ref{classical-solutions-equiv}
that $u(t,x,\omega)=u^\omega(t,x)$ for almost every $\omega\in\Omega$
and $u^\omega\in C^\infty((0,\infty)\times\X)$, therefore
\begin{equation*}
\lim_{h\to0}E\frac{u(t+h)-u(t)}{h}|u|^{p-1}(t)\sgn u(t) = \frac{1}{p}\dt E|u|^p,
\end{equation*}
which yields the result.
\end{proof}
\begin{theorem}\label{rosenblatt-lp}
If $u$ is the Picard solution to~\eqref{burgers} with $u_0\in\Iso_p$ and $f\in C^\infty(\R)$ then
\begin{equation*}
E|u(t)|^p \leq E|u_0|^p
\end{equation*}
for every $2\leq p\leq\infty$ and every $t\geq0$.
\end{theorem}
\begin{proof}
We consider the sequence $u_0^n = h_n(u_0)$, where $h_n$ are cut-off functions $h_n(x) = \min\{|x|,n\}\sgn(x)$ and the sequence $u_n$ of solutions to problems
\begin{equation*}
\left\{
\begin{aligned}
&\dt u + \fLap u = \grad f(u)\quad&\text{on $\T\times\X$}, \\
&u(0) \= u_0^n\quad&\text{on $\X$}.
\end{aligned}
\right.
\end{equation*}
as constructed in Theorem~\ref{existence}. Then $u_0^n\in\Iso_\infty$ and it follows from Lemma~\ref{cut-off-reg}
that
\begin{equation*}
E|u_n(t)|^p \leq E|u_0^n|^p
\end{equation*}
for every $n\geq1$, $t\geq0$ and $p\geq2$.
By Lemma~\ref{contraction} we know that
\begin{equation*}
\lim_{n\to\infty}\|u_n-u\|_p = 0
\end{equation*}
and the result follows.
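For finite $p$, the limit passage is explicit: since $|h_n(x)|\leq|x|$, we have $E|u_0^n|^p\leq E|u_0|^p$, and convergence in $\|\cdot\|_p$ gives
\begin{equation*}
E|u(t)|^p=\lim_{n\to\infty}E|u_n(t)|^p\leq \limsup_{n\to\infty}E|u_0^n|^p\leq E|u_0|^p.
\end{equation*}
The case $p=\infty$, understood as a bound on the essential supremum, then follows by taking $p$-th roots and letting $p\to\infty$.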
\end{proof}
\end{subsection} \end{section} \begin{section}{Nonlinearity with Polynomial Growth}\label{general}
We consider the following initial value problem
\begin{equation}\label{polyburgers}
\left\{
\begin{aligned}
&\dt u + \fLap u = \grad f(u)\quad&\text{on $\T\times\X$}, \\
&u(0) \= u_0\quad&\text{on $\X$}.
\end{aligned}
\right.
\end{equation}
Here we assume $s\in(\frac{1}{2},1]$, $u_0\in\bigcap_{p\geq2}\Iso_p$ and $f\in C^\infty(\R)$, $f(0)=0$ and $|f(u)-f(v)|\leq C|u-v|\big(|u|^q+|v|^q\big)$
for some constants $C>0$ and $q\geq 1$.
Whenever we refer to problem~\eqref{polyburgers}, we implicitly include these assumptions.
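A model example, recorded here only to illustrate the assumptions, is the Burgers-type nonlinearity $f(u)=\tfrac{u^2}{2}$, for which $q=1$ and
\begin{equation*}
|f(u)-f(v)|=\tfrac12|u-v|\,|u+v|\leq\tfrac12|u-v|\big(|u|+|v|\big).
\end{equation*}
More generally, $f(u)=u^{m}$ with an integer $m\geq2$ satisfies the growth condition with $q=m-1$ and $C=m$.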
\begin{subsection}{Existence of solutions}
\begin{definition}\label{polydefinition}
Given $u\in\bigcap_{p\geq2}\JIso_{K,p}$ for some $K\geq 0$, we define the following nonlinear operator
\begin{equation*}
F(u)(t) = P_{t}u(0) + \int_0^{t} \grad P_{t-\tau} f(u(\tau))\,d\tau.
\end{equation*}
Let $u_0\in\bigcap_{p\geq2}\Iso_p$. For $K\geq0$ we say that $u\in\bigcap_{p\geq2}\JIso_{K,p}$ is a solution to problem~\eqref{polyburgers} if $F(u)=u$ and $u_0\=u(0)$.
\end{definition}
In order to prove the existence of solutions we need to consider a sequence of approximations
defined in the following way. Take $u_0^n = h_n(u_0)$, where $h_n$ are cut-off functions $h_n(x) = \min\{|x|,n\}\sgn(x)$ and define the sequence of Picard solutions $u^n$ to problems
\begin{equation}\label{cutoff}
\left\{
\begin{aligned}
&\dt u^n + \fLap u^n = \grad f(h_n(u^n))\quad&\text{on $\T\times\X$}, \\
&u^n(0) \= u_0^n\quad&\text{on $\X$}.
\end{aligned}
\right.
\end{equation}
The function $f(h_n(x))$ is Lipschitz and thus, because of Theorem~\ref{existence}, such a sequence exists.
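Indeed, since $f\in C^\infty(\R)$ and $h_n$ is $1$-Lipschitz with values in $[-n,n]$, the composition is Lipschitz:
\begin{equation*}
|f(h_n(x))-f(h_n(y))|\leq\Big(\sup_{|r|\leq n}|f'(r)|\Big)|h_n(x)-h_n(y)|\leq\Big(\sup_{|r|\leq n}|f'(r)|\Big)|x-y|,
\end{equation*}
and $f(h_n(0))=f(0)=0$.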
We call it the sequence of \emph{approximative solutions} throughout this section.
\begin{remark}
For every $n$, the solution $u^n$ of problem~\eqref{cutoff} is by definition a fixed point of the operator
\begin{equation}\label{noperator}
F_n(u)(t) = P_{t}u(0) + \int_0^{t} \grad P_{t-\tau} f(h_n(u(\tau)))\,d\tau.
\end{equation}
\end{remark}
In the following proposition we show that the approximative solutions $u^n$ are
not only solutions to problems~\eqref{cutoff} but also solutions
to problem~\eqref{polyburgers}, with the cut-off initial conditions $u_0^n$, respectively.
In other words, it turns out we do not need to cut off the function $f$, once we have constructed
solutions $u^n$ as limits of Picard iterations linked to problems~\eqref{cutoff}.
\begin{proposition}
Let $u^n$ be the sequence of approximative solutions.
Then $u^n$ are fixed points of the operator $F$.
\end{proposition}
\begin{proof}
It follows from Theorem~\ref{rosenblatt-lp} that $\|u^n(t)\|_\infty \leq \|u_0^n\|_\infty\leq n$ for every $n\geq1$ and $t\geq0$. Therefore $f(h_n(u^n(t))) = f(u^n(t))$.
The Picard solution $u^n$ is a fixed point of the operator $F_n$ given by \eqref{noperator}
and hence
$u^n = F_n(u^n)=F(u^n)$.
\end{proof}
\begin{lemma}\label{l1}
The sequence of approximative solutions is convergent in $\JIso_{K,p}$ for every $p\geq2$ and every $K>0$.
\end{lemma}
\begin{proof}
It follows from Lemma~\ref{classical-solutions-equiv} that $u^n(\cdot,\cdot,\omega)$ are also classical solutions for almost
every $\omega\in\Omega$, therefore we may write
\begin{equation*}
\dt(u^n-u^m) = -\fLap (u^n-u^m) + \grad(f(u^n)-f(u^m)).
\end{equation*}
Let $g(x)=|x|^{p-1}\sgn(x)$. Because
\begin{equation*}
f(u^n)-f(u^m) = \bar{f}(u^n,u^m)\sim \bar{g}(u^n,u^m) = g(u^n-u^m),
\end{equation*}
it follows from Lemma~\ref{derivative-zero} that
\begin{equation*}
E \grad (f(u^n)-f(u^m))|u^n-u^m|^{p-1}\sgn(u^n-u^m) = 0,
\end{equation*}
hence
\begin{equation*}
E\dt(u^n-u^m) g(u^n-u^m) = -E\fLap (u^n-u^m)g(u^n-u^m).
\end{equation*}
Notice that $x\mapsto\frac{|x|^p}{p}$ is a convex function with derivative $g$, therefore
\begin{equation*}
\fLap \frac{|u^n(\omega)-u^m(\omega)|^p}{p}\leq\fLap (u^n(\omega)-u^m(\omega))\,g(u^n(\omega)-u^m(\omega))
\end{equation*}
for almost every $\omega\in\Omega$ (see~\cite[Theorem 1.1]{MR3642734}),
and because of the regularity of classical solutions
(see Lemma~\ref{classical-solutions}) and Remark~\ref{flap-prop}, we have
\begin{equation*}
\frac1p\dt E|u^n-u^m|^p = -E\fLap (u^n-u^m)g(u^n-u^m) \leq -\frac1pE\fLap |u^n-u^m|^p=0.
\end{equation*}
It follows that
\begin{equation*}
E|u^n(t)-u^m(t)|^p \leq E|u_0^n-u_0^m|^p
\end{equation*}
for every $t\geq 0$. However,
\begin{equation*}
\lim_{n,m\to\infty} E|u_0^n-u_0^m|^p = 0,
\end{equation*}
thus
\begin{equation*}
\lim_{n,m\to\infty} \sup_{t\geq0} E|u^n(t)-u^m(t)|^p = 0,
\end{equation*}
which shows that $u^n$ is a Cauchy sequence, and therefore converges, in $\JIso_{K,p}$.
\end{proof}
\begin{theorem}\label{main}
There exists
a solution $u$ to problem~\eqref{polyburgers} such that $E|u(t)|^p\leq E|u_0|^p$ for every $t\geq0$ and $p\geq2$.
\end{theorem}
\begin{proof}
Let $u$ be the limit of the approximative solutions $u^n$ given by Lemma~\ref{l1}.
The estimates
\begin{equation}\label{yet-another-estimate}
E|u(t)|^p\leq E|u_0|^p
\end{equation}
follow from the analogous properties of the
approximative solutions given in Theorem~\ref{rosenblatt-lp}.
Consider
\begin{equation*}
F(u)(t) = P_{t}u(0) + \int_0^{t} \grad P_{t-\tau} f(u(\tau))\,d\tau.
\end{equation*}
By Lemma~\ref{semigroup-estimate} we have
\begin{multline*}
\|u^n(t)-F(u)(t)\|_2 \\\leq \|P_{t}(u^n_0-u_0)\|_2 + c_s\int_0^{t} (t-\tau)^{-\frac{1}{2s}} \|f(u^n(\tau))-f(u(\tau))\|_2\,d\tau.
\end{multline*}
Since we assume $|f(x)-f(y)|\leq C|x-y|\big(|x|^q+|y|^q\big)$, we get
\begin{multline*}
\|u^n(t)-F(u)(t)\|_2 - \|P_{t}(u^n_0-u_0)\|_2 \\\leq
c_sC\int_0^{t} (t-\tau)^{-\frac{1}{2s}} \|u^n(\tau)-u(\tau)\|_2\big(\|u^n(\tau)\|_2^q+\|u(\tau)\|_2^q\big)\,d\tau.
\end{multline*}
By Theorem~\ref{rosenblatt-lp} and estimate~\eqref{yet-another-estimate} we get
\begin{multline*}
\int_0^{t} (t-\tau)^{-\frac{1}{2s}} \|u^n(\tau)-u(\tau)\|_2\big(\|u^n(\tau)\|^q_2+\|u(\tau)\|^q_2\big)\,d\tau\\
\leq 2\|u_0\|_2^q\int_0^{t} (t-\tau)^{-\frac{1}{2s}} \|u^n(\tau)-u(\tau)\|_2\,d\tau.
\end{multline*}
For every $K>0$ (cf. estimates~\eqref{norm-est1},~\eqref{gamma}) we have
\begin{multline*}
\sup_{t\geq0} e^{-tK}\int_0^{t} (t-\tau)^{-\frac{1}{2s}} \|u^n(\tau)-u(\tau)\|_2\,d\tau\\\leq
\Big(\sup_{0<\tau<t}e^{-\tau K}\|u^n(\tau)-u(\tau)\|_2\Big) \int_0^{t} (t-\tau)^{-1/2s} e^{-(t-\tau) K}\,d\tau
\\\leq K^{-1+\frac{1}{2s}}\Gamma(1-\tfrac{1}{2s})\K{u^n-u}{2}.
\end{multline*}
Finally we obtain
\begin{equation*}
\K{u^n-F(u)}{2}\leq \|u^n_0-u_0\|_2+2c_sC\|u_0\|_2^qK^{-1+\frac{1}{2s}}\Gamma(1-\tfrac{1}{2s})\K{u^n-u}{2}.
\end{equation*}
Since $\|u^n_0-u_0\|_2\to0$ and, by Lemma~\ref{l1}, $\K{u^n-u}{2}\to0$, it follows that
\begin{equation*}
\lim_{n\to\infty}\K{u^n-F(u)}{2}=0.
\end{equation*}
This means that $u=F(u)$ (by Lemma~\ref{l1} again), hence $u$ is a solution.
\end{proof}
\begin{remark}
As in Remark~\ref{uniqueness-remark}, the solution we have constructed in this section
may not be unique in the context of Definition~\ref{polydefinition}.
\end{remark}
\end{subsection}
\end{document}
\begin{document}
\title[ Saturated Sets for Nonuniformly Hyperbolic Systems ]
{ Saturated Sets for Nonuniformly Hyperbolic Systems}
\author[Liang, Liao, Sun, Tian] {Chao Liang$^{*}$, Gang Liao$^{\dag}$, Wenxiang Sun$^{\dag}$, Xueting Tian$^{\ddag}$}
\thanks{$^{*}$ Applied Mathematical Department, The Central University of Finance and Economics, Beijing 100081, China; Liang is supported by NNSFC(\# 10901167)}\email{[email protected]}
\thanks{$^{\dag}$ School of Mathematical Sciences, Peking University, Beijing 100871, China; Sun is supported by NNSFC(\# 10231020) and Doctoral Education Foundation of China}\email{[email protected]} \email{[email protected]}
\thanks{$^{\ddag}$ Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China; Tian is supported by CAPES} \email{[email protected]}
\date{July, 2011}
\maketitle
\begin{abstract} In this paper we prove that for an ergodic hyperbolic measure $\omega$ of a $C^{1+\alpha}$ diffeomorphism $f$ on a Riemannian manifold $M$, there is an $\omega$-full measure set $\widetilde{\Lambda}$ such that for every invariant probability $\mu\in \mathcal{M}_{inv}(\widetilde{\Lambda},f)$, the metric entropy of $\mu$ is equal to the topological entropy of the saturated set $G_{\mu}$ consisting of the generic points of $\mu$: $$h_\mu(f)=h_{\operatorname{top}}(f,G_{\mu}).$$ Moreover, for every nonempty, compact and connected subset $K$ of $\mathcal{M}_{inv}(\widetilde{\Lambda},f)$ with the same hyperbolic rate, we compute the topological entropy of the saturated set $G_K$ of $K$ by the following equality: $$\inf\{h_\mu(f)\mid \mu\in K\}=h_{\operatorname{top}}(f,G_K).$$
In particular these results can be applied (i) to the nonuniformly hyperbolic diffeomorphisms described by Katok, (ii) to the robustly transitive partially hyperbolic diffeomorphisms described by Ma{\~{n}}{\'{e}}, (iii) to the robustly transitive non-partially hyperbolic diffeomorphisms described by Bonatti-Viana. In all these cases $\mathcal{M}_{inv}(\widetilde{\Lambda},f)$ contains an open subset of $\mathcal{M}_{erg}(M,f)$. \end{abstract}
\tableofcontents
\section{Introduction}
Let $(M, d)$ be a compact metric space and $f : M\rightarrow M$ be a continuous map. Given an invariant subset $\Gamma\subset M$, denote by $\mathcal{M}(\Gamma)$ the set consisting of all Borel probability measures, by $\mathcal{M}_{inv}(\Gamma, f)$ the subset consisting of $f$-invariant probability measures and, by $\mathcal{M}_{erg}(\Gamma, f)$ the subset consisting of $f$-invariant ergodic probability measures. Clearly, if $\Gamma$ is compact then $\mathcal{M}(\Gamma)$ and $\mathcal{M}_{inv}(\Gamma, f)$ are both compact spaces in the weak$^*$-topology of measures. Given $x\in M$, define the $n$-ordered empirical measure
of $x$ by $$\mathcal{E}_n(x)=\frac1n\sum_{i=0}^{n-1}\delta_{f^i(x)},$$ where $\delta_y$ is the Dirac mass at $y\in M$. A subset $W\subset M$ is called saturated if, whenever $x\in W$ and the sequence $\{\mathcal{E}_n(y)\}$ has the same set of limit points as $\{\mathcal{E}_n(x)\}$, then $y\in W$. The limit point set $V(x)$ of $\{\mathcal{E}_n(x)\}$ is always a compact connected subset of $ \mathcal{M}_{inv}(M, f)$. Given $\mu\in \mathcal{M}_{inv}(M, f)$, the saturated set $G_\mu$ of $\mu$ consists of those generic points $x$ satisfying $V(x)=\{\mu\}$. More generally, for a compact connected subset $K\subset \mathcal{M}_{inv}(M, f)$, we denote by $G_K$ the saturated set consisting of points $x$ with $V(x)=K$. By the Birkhoff Ergodic Theorem, $\mu(G_{\mu})=1$ when $\mu$ is ergodic.
However, this is a rather special case. For non-ergodic $\mu$, by the Ergodic Decomposition Theorem, $G_\mu$ has measure $0$ and thus is ``thin'' in view of measure. In addition, when $f$ is uniformly hyperbolic (\cite{Sigmund}) or nonuniformly hyperbolic (\cite{LST}), $G_\mu$ is of first category, hence ``thin'' in view of topology. Precisely, one can obtain this first-category fact as follows. Denote by $C^0(M)$ the set of continuous real-valued functions on $M$ provided with the sup norm. For nonuniformly hyperbolic systems $(f,\mu)$, there is $x\in M$ such that $$\overline{\operatorname{orb}(x,f)}\supset \operatorname{supp}(\mu)\quad \mbox{and}\quad \mathcal{E}_n(x)\quad \mbox{does not converge},$$ where the support of a measure $\nu$, denoted by $\operatorname{supp}(\nu)$, is the minimal closed set with full $\nu$-measure, see \cite{Sigmund,LST}. We can take $0<a_1<a_2$ and $\varphi\in C^0(M)$ such that $$\liminf_{n\rightarrow +\infty} \frac1n\sum_{i=0}^{n-1}\varphi(f^i(x))<a_1<a_2<\limsup_{n\rightarrow +\infty}\frac1n\sum_{i=0}^{n-1}\varphi(f^i(x)).$$ Let $$R=\cap_N \cup_{n\geq N}\big{\{}x\mid \frac1n\sum_{i=0}^{n-1}\varphi(f^i(x))<a_1\big{\}}\cap \cap_N \cup_{n\geq N}\big{\{}x\mid \frac1n\sum_{i=0}^{n-1}\varphi(f^i(x))>a_2\big{\}}.$$ Then $$R\cap \overline{\operatorname{orb}(x,f)} \subset (\overline{\operatorname{orb}(x,f)}\setminus G_{\mu})\,\,\mbox{ and} \,\, R\cap \overline{\operatorname{orb}(x,f)}\,\,\mbox{ is a}\,\, G_{\delta}\,\,\mbox{ subset of}\,\, \overline{\operatorname{orb}(x,f)}.$$ Combining this with the fact that the orbit of $x$ lies in $R$ and is dense in $\overline{\operatorname{orb}(x,f)}$, we can see that $\overline{\operatorname{orb}(x,f)}\setminus G_{\mu}$ is a residual subset of $\overline{\operatorname{orb}(x,f)}$. Hence, $G_{\mu}$ is of first category in the subspace $\overline{\operatorname{orb}(x,f)}$.
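To see that $R$ is a $G_\delta$ set, note that for each fixed $n$ the sets appearing above are preimages of open half-lines under continuous maps, e.g.
$$\Big\{x\;\Big|\;\frac1n\sum_{i=0}^{n-1}\varphi(f^i(x))<a_1\Big\}=\Big(\frac1n\sum_{i=0}^{n-1}\varphi\circ f^i\Big)^{-1}\big((-\infty,a_1)\big),$$
hence open; thus $R$ is a countable intersection of open sets, and so is its trace on $\overline{\operatorname{orb}(x,f)}$.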
For a conservative system $(f,M,\operatorname{Leb})$ preserving the normalized volume measure $\operatorname{Leb}$, if $f$ is ergodic, then by the ergodic theorem, $$\mathcal{E}_n(x)\rightarrow \operatorname{Leb},\quad \mbox{as}\quad n\rightarrow +\infty,$$ for $\operatorname{Leb}$-a.e. $x\in M$. In the general dissipative case where, a priori, there are no distinguished invariant probability measures, it is much more subtle what one should mean by describing the behavior of almost all orbits in the physically observable sense. In this context, an invariant measure $\mu$ is called a physical measure (or Sinai-Ruelle-Bowen measure) if the saturated set $G_{\mu}$ is of positive Lebesgue measure. SRB measures are used to measure the ``thickness'' of saturated sets in view of $\operatorname{Leb}$-measure.
Motivated by the definition of saturated sets, it is reasonable to expect that $G_{\mu}$ should carry all the information of
$\mu$. If $\mu$ is ergodic, Bowen \cite{Bowen4} realized this expectation by proving that $$h_{\operatorname{top}}(f,G_{\mu})=h_\mu(f).$$ When $f$ is mixing and uniformly hyperbolic (which implies the uniform specification property), applying \cite{PS} it also holds that $$h_{\operatorname{top}}(f,G_{\mu})=h_\mu(f).$$ This implies that $G_\mu$ is ``thick'' in view of topological entropy. Indeed, the information of an invariant measure can be well approximated by nearby measures \cite{Katok2, Wang, Liang,Liao-Sun-Tian}. For nonuniformly hyperbolic systems, Liang, Sun and Tian proved in \cite{LST} that $G_{\mu}\neq \emptyset$. Our goal in the present paper is to show the ``thickness'' of $G_{\mu}$ in view of entropy.
Now we start to introduce our results precisely. Let $M$ be a compact connected boundary-less Riemannian $d$-dimensional manifold and $f : M \rightarrow M$ a $C^{1+\alpha}$ diffeomorphism.
We use $Df_x$ to denote the tangent map of $f$ at $x\in M$. We say that $x\in M$ is a regular point of $f$ if there exist numbers $\lambda_1(x)>\lambda_2(x)>\cdots>\lambda_{\phi(x)}(x)$ and a decomposition of the tangent space $$T_xM=E_1(x)\oplus\cdots\oplus E_{\phi(x)}(x)$$ such that $$\underset{n\rightarrow
\infty}{\lim}\frac{1}{n}\log\|(D_xf^n)u\|=\lambda_j(x)$$ for every $0\neq u\in E_j(x)$ and every $1\leq j\leq \phi(x)$. The numbers $\lambda_i(x)$ and the spaces $E_i(x)$ are called the Lyapunov exponents and the eigenspaces of $f$ at the regular point $x$, respectively. The Oseledets theorem \cite{Oseledec} states that the regular points of a diffeomorphism $f: M\rightarrow M$ form a Borel set of total measure. For a regular point $x\in M$ we define $$\lambda^+(x)=\max\{0,\,\min\{\lambda_i(x)\mid \lambda_i(x)>0,\,1\leq i\leq \phi(x)\}\} $$ and $$\lambda^-(x)=\max\{0,\,\min\{-\lambda_i(x)\mid \lambda_i(x)<0,\,1\leq i\leq \phi(x)\}\}. $$ We adopt the convention $\min \emptyset=\max \emptyset =0$. Taking an ergodic invariant measure $\mu$, by ergodicity we obtain, for $\mu$-almost all $x\in M$, uniform exponents $\lambda_i(x)=\lambda_i(\mu)$ for $1\leq i\leq \phi(\mu)$. In this case we denote $\lambda^+(\mu)=\lambda^+(x)$ and $\lambda^-(\mu)=\lambda^-(x)$. We say an ergodic measure $\mu$ is hyperbolic if $\lambda^+(\mu)$ and $\lambda^-(\mu)$ are both non-zero.
\begin{Def}\label{Def6} Given $\beta_1,\beta_2\gg\epsilon >0$, and for all $k\in \mathbb{Z}^+$, the hyperbolic block $\Lambda_k=\Lambda_k(\beta_1,\beta_2;\,\epsilon)$ consists of all points $x\in M$ for which there is a splitting $T_xM=E_x^s\oplus E_x^u$ with the invariance property $Df^t(E_x^s)=E_{f^tx}^s$ and $Df^t(E_x^u)=E_{f^tx}^u$, and satisfying:\\
$(a)~
\|Df^n|E_{f^tx}^s\|\leq e^{\epsilon k}e^{-(\beta_1-\epsilon)
n}e^{\epsilon|t|}, \forall t\in\mathbb{Z}, n\geq1;$\\
$(b)~
\|Df^{-n}|E_{f^tx}^u\|\leq e^{\epsilon k}e^{-(\beta_2-\epsilon)
n}e^{\epsilon|t|}, \forall t\in\mathbb{Z}, n\geq1;$ and\\
$(c)~
\tan(Angle(E_{f^tx}^s,E_{f^tx}^u))\geq e^{-\epsilon k}e^{-\epsilon|t|}, \forall t\in\mathbb{Z}.$ \end{Def}
\begin{Def}\label{Def7} $\Lambda(\beta_1,\beta_2;\epsilon)=\overset{+\infty}{\underset{k=1}{\cup}} \Lambda_k(\beta_1,\beta_2;\epsilon)$ is a Pesin set. \end{Def}
It is verified that $\Lambda(\beta_1,\beta_2;\epsilon)$ is an $f$-invariant set but usually not compact. Although the Pesin set is defined in a topological way, it is closely related to invariant measures. Actually, given an ergodic hyperbolic measure $\omega$ for $f$, if $\lambda^-(\omega)\geq \beta_1$ and $\lambda^+(\omega)\geq\beta_2$ then $\omega\in \mathcal{M}_{inv}(\Lambda(\beta_1,\beta_2;\epsilon), f)$. From now on we fix such a measure $\omega$ and denote by $\omega\mid_{\Lambda_l}$ the conditional measure of $\omega$ on $\Lambda_l$. Set $\widetilde{\Lambda}_l=\operatorname{supp}(\omega\mid_{\Lambda_l})$ and $\widetilde{\Lambda}=\cup_{l\geq1}\widetilde{\Lambda}_l$. Clearly, $f^{\pm1}(\widetilde{\Lambda}_l)\subset \widetilde{\Lambda}_{l+1}$, and the sub-bundles $E^s_x$, $E^u_x$ depend continuously on $x\in \widetilde{\Lambda}_l$. Moreover, $\widetilde{\Lambda}$ is also $f$-invariant with $\omega$-full measure \footnote{Here $\widetilde{\Lambda}$ is obtained by taking the support of $\omega\mid_{\Lambda_l}$ for each hyperbolic block $\Lambda_l$, so even an ergodic measure with Lyapunov exponents away from $[-\beta_1,\beta_2]$ need not give positive measure to $\widetilde{\Lambda}$. We will give more discussion on $\widetilde{\Lambda}$ in Section 6}.
\begin{Thm}\label{main theorem of measure} For every $\mu\in \mathcal{M}_{inv}(\widetilde{\Lambda}, f)$, we have $$h_{\mu}(f)=h_{\operatorname{top}}(f,G_{\mu}).$$ \end{Thm}
Let $\{\eta_l\}_{l=1}^{\infty}$ be a decreasing sequence which approaches zero. As in \cite{Newhouse} we say a probability measure $\mu\in \mathcal{M}_{inv}(M, f)$ has {\it hyperbolic rate} $\{\eta_l\}$ with respect to the Pesin set $\widetilde{\Lambda}=\cup_{l\geq 1}\widetilde{\Lambda}_l$ if $\mu(\widetilde{\Lambda}_l)\geq 1-\eta_l$ for all $l\geq1$. \begin{Thm}\label{main theorem of set}Let $\eta=\{\eta_l\}$ be a sequence decreasing to zero and $\mathcal{M}(\widetilde{\Lambda},\eta)\subset\mathcal{M}_{inv}(M,f) $ be the set of measures with hyperbolic rate $\eta$. Given any nonempty compact connected set $K\subset \mathcal{M}(\widetilde{\Lambda},\eta)$, we have $$\inf\{h_{\mu}(f)\mid \mu\in K\}=h_{\operatorname{top}}(f,G_K).$$
\end{Thm}
\section{Dynamics of non uniformly hyperbolic systems}\setlength{\parindent}{2em}
We start with some notions and results of Pesin theory \cite{Barr-Pesin,Katok1,Pollicott}.
\subsection{Lyapunov metric} Assume $\Lambda(\beta_1,\beta_2;\,\epsilon)=\cup_{k\geq1}\Lambda_k(\beta_1,\beta_2;\,\epsilon)$ is a nonempty Pesin set. Let $\beta_1'=\beta_1-2\epsilon$, $\beta_2'=\beta_2-2\epsilon$. Since $\epsilon\ll \beta_1,\beta_2$, we have $\beta_1'>0, \beta_2'>0$.
For $x\in \Lambda(\beta_1,\beta_2;\,\epsilon)$, we define
$$\|v_s\|_s=\sum_{n=1}^{+\infty}e^{\beta_1'n}\|D_xf^n(v_s)\|, ~~~\forall~v_s\in E^s_x ,$$
$$\|v_u\|_u=\sum_{n=1}^{+\infty}e^{\beta_2'n}\|D_xf^{-n}(v_u)\|, ~~~\forall~v_u\in E^u_x ,$$
$$\|v\|'=\mbox{max}(\|v_s\|_s,\,\|v_u\|_u)~~~\mbox{where}~v=v_s+v_u.$$
We call the norm $\|\cdot\|'$ a Lyapunov metric. This metric is in general not equivalent to the Riemannian metric. With respect to the Lyapunov metric, $f : \Lambda\rightarrow\Lambda$ is uniformly hyperbolic. The following estimates are known:\\
$(i) \|Df\mid_{E^s_x}\|'\leq e^{-\beta_1'},~~~ \|Df^{-1}\mid_{E^u_x}\|'\leq
e^{-\beta_2'}$;\\
$(ii) \frac{1}{\sqrt{2}}\|v\|_x\leq \|v\|_x'\leq \frac{2}{1-e^{-\epsilon}}e^{\epsilon k}\|v\|_x,~\forall~v\in T_x M,~x\in \Lambda_k.$
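For the reader's convenience we indicate the computation behind the first estimate in (i): for $v_s\in E^s_x$,
$$\|D_xf(v_s)\|_s=\sum_{n=1}^{+\infty}e^{\beta_1'n}\|D_{f(x)}f^n\big(D_xf(v_s)\big)\|
=e^{-\beta_1'}\sum_{m=2}^{+\infty}e^{\beta_1'm}\|D_xf^{m}(v_s)\|
\leq e^{-\beta_1'}\|v_s\|_s,$$
and the estimate for $\|Df^{-1}\mid_{E^u_x}\|'$ is obtained in the same way.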
\begin{Def}\label{Lyapunov coordinates} In the local coordinate chart, a coordinate change $C_{\varepsilon}: M\rightarrow GL(d,\mathbb{R})$ is called a Lyapunov change of coordinates if for each regular point $x\in M$ and $u,v\in T_xM$, it satisfies $$\langle u,v\rangle_x=\langle C_{\varepsilon}u,C_{\varepsilon}v\rangle_x'.$$
\end{Def} Any Lyapunov change of coordinates $C_{\epsilon}$ sends the orthogonal decomposition $\mathbb{R}^{\dim E^s}\oplus \mathbb{R}^{\dim E^u} $ to the decomposition $E^s_x\oplus E^u_x$ of $T_xM$. Additionally, denote $A_{\epsilon}(x)=C_{\epsilon}(f(x))^{-1}Df_xC_{\epsilon}(x)$. Then $$A_{\epsilon}(x)=\begin{pmatrix}A_{\epsilon}^s(x)&0\\0&A_{\epsilon}^u(x)\\\end{pmatrix},$$
$$\|A_{\epsilon}^s(x)\|\leq e^{-\beta_1'},\quad \|A_{\epsilon}^u(x)^{-1}\|\leq e^{-\beta_2'}.$$ \subsection{Lyapunov neighborhood} Fix a point $x\in \Lambda(\beta_1,\beta_2;\,\epsilon)$. By taking charts about $x, f(x)$ we can assume without loss of generality that $x\in\mathbb{R}^d, f(x)\in \mathbb{R}^d$. For a sufficiently small neighborhood $U$ of $x$, we can trivialize the tangent bundle over $U$ by identifying $T_UM\equiv U\times \mathbb{R}^d$. For any point $y\in U$ and tangent vector $v\in T_yM$ we can then use the identification $T_UM=U\times \mathbb{R}^d$ to translate the vector $v$ to a corresponding vector $\bar{v}\in T_xM$. We then define
$\|v\|''_y=\|\bar{v}\|'_x$, where $\|\cdot\|'$ indicates the Lyapunov metric. This defines a new norm $\|\cdot\|''$ (which agrees with $\|\cdot\|'$ on the fiber $T_xM$). Similarly, we can define
$\|\cdot\|''_z$ on $T_zM$ (for any $z$ in a sufficiently small neighborhood of $fx$ or $f^{-1}x$). We write $\bar{v}$ as $v$ whenever there is no confusion. We can define a new splitting $T_yM = {E^s_y}'\oplus {E^u_y}', y\in U$ by translating the splitting $T_xM = E^s_x\oplus E^u_x$ (and similarly for $T_zM = {E^s_z}'\oplus {E^u_z}'$).
There exist $\beta_1''=\beta_1-3\epsilon>0$, $\beta_2''=\beta_2-3\epsilon>0$ and $\epsilon_0> 0 $ such that if we set $\epsilon_k =\epsilon_0e^{-\epsilon k }$ then, for any $y\in B(x,\epsilon_k)$, the $\epsilon_k$-neighborhood of $x\in\Lambda_k$, we have a splitting $T_yM = {E^s_y}'\oplus {E^u_y}'$ with hyperbolic behavior: \\
$(i)~ \|D_yf(v)\|''_{fy}\leq e^{-\beta_1''}\|v\|''$ for every $v\in {E^s_y}'$;\\
$(ii)~ \|D_yf^{-1}(w)\|''_{f^{-1}y}\leq e^{-\beta_2''}\|w\|''$ for every $w\in {E^u_y}'$.\\ The constant $\epsilon_0$ here and afterwards depends on various global properties of $f$, e.g., the H\"{o}lder constants, the size of the local trivialization, see p.73 in \cite{Pollicott}. \begin{Def}\label{Def5} We define the Lyapunov neighborhood $\Pi=\Pi(x,a\epsilon_k)$ of $x\in \Lambda_k$ (with size $a\epsilon_k$, $0<a<1$) to be the neighborhood of $x$ in $M$ which is the exponential projection onto $M$ of the tangent rectangle $(-a\epsilon_k,\,a\epsilon_k)E^s_x\oplus (-a\epsilon_k,\,a\epsilon_k)E^u_x$. \end{Def} In the Lyapunov neighborhoods, $Df$ is uniformly hyperbolic in the Lyapunov metric. More precisely, one can extend the definition of $C_{\epsilon}(x)$ to the Lyapunov neighborhood $\Pi(x,a\epsilon_k)$ such that for any $y\in \Pi(x,a\epsilon_k)$, $$A_{\epsilon}(y):=C_{\epsilon}(f(y))^{-1}Df_yC_{\epsilon}(y)= \begin{pmatrix}A_{\epsilon}^s(y)&0\\0&A_{\epsilon}^u(y)\\\end{pmatrix},$$
$$\|A_{\epsilon}^s(y)\|\leq e^{-\beta_1''},\quad \|A_{\epsilon}^u(y)^{-1}\|\leq e^{-\beta_2''}.$$ Let $\Psi_x=\exp_x\circ C_{\epsilon}(x)$. Given $x\in \Lambda_k$, we say that the set $H^u\subset \Pi(x,a\epsilon_k)$ is an admissible $(u,\gamma_0,k)$-manifold near $x$ if $H^u=\Psi_x(\mbox{graph }\psi)$ for some $\gamma_0$-Lipschitz function $\psi: (-a\epsilon_k,\,a\epsilon_k)E^u_x\rightarrow
(-a\epsilon_k,\,a\epsilon_k)E^s_x$ with $\|\psi\|\leq a\epsilon_k/4$. Similarly, we can also define $(s,\gamma_0,k)$-manifold near $x$. Through each point $y\in \Pi(x,a\epsilon_k)$ we can take $(u,\gamma_0,k)$-admissible manifold
$H^u(y)\subset \Pi(x,a\epsilon_k)$ and $(s,\gamma_0,k)$-admissible manifold $H^s(y)\subset \Pi(x,a\epsilon_k)$. Fixing $\gamma_0$ small enough, we can assume that \\$(i)~ \|D_zf(v)\|''_{fz}\leq e^{-\beta_1''+\epsilon}\|v\|''$ for every $v\in T_zH^s(y), z\in H^s(y)$;\\
$(ii)~ \|D_zf^{-1}(w)\|''_{f^{-1}z}\leq e^{-\beta_2''+\epsilon}\|w\|''$ for every $w\in T_zH^u(y), z\in H^u(y)$.\\
For any regular point $x\in \Lambda$, define $k(x)=\min\{i\in \mathbb{Z}\mid x\in \Lambda_i\}$. Using the local hyperbolicity above, we can see that each connected component of $f(H^u(y))\cap \Pi(fx,a\epsilon_{k(fx)})$ is an admissible $(u,\gamma_0,k(fx))$-manifold; each connected component of $f^{-1}(H^s(y))\cap \Pi(f^{-1}x,a\epsilon_{k(f^{-1}x)})$ is an admissible $(s,\gamma_0,k(f^{-1}x))$-manifold.
\subsection{ Weak shadowing lemma}
In this section, we state a weak shadowing property for $C^{1+\alpha}$ non-uniformly hyperbolic systems, which is needed in our proofs.
Let $(\delta_k)_{k=1}^{\infty}$ be a sequence of positive real numbers. Let $(x_n)_{n=-\infty}^{\infty}$ be a sequence of points in $\Lambda=\Lambda(\beta_1,\beta_2,\epsilon)$ for which there exists a sequence $(s_n)_{n=-\infty}^{+\infty}$ of positive integers satisfying: \begin{eqnarray*}&(a)& x_n\in
\Lambda_{s_n},\,\,\forall n\in \mathbb{Z };\\[2mm]& (b)& | s_n-s_{n-1} |\leq 1, \forall \,n\in \mathbb{Z};\\[2mm] &(c)& d(f(x_n), x_{n+1})\leq \delta_{s_n},\,\,\,\forall\, n\in \mathbb{Z},\end{eqnarray*} then we call $(x_n)_{n=-\infty}^{+\infty}$ a $(\delta_k)_{k=1}^{\infty}$ pseudo-orbit. Given $c>0$,
a point $x\in M$ is a $c$-shadowing point for the
$(\delta_k)_{k=1}^{\infty}$ pseudo-orbit if $d(f^n(x), x_{n})\leq c\epsilon_{s_n}$,\,\,$\forall\,n\in \mathbb{Z}$, where $\epsilon_k=\epsilon_0e^{-\epsilon k}$ are given by the definition of Lyapunov neighborhoods.
\begin{Thm} (Weak shadowing lemma \cite{Hirayama,Katok1,Pollicott})\label{specification} Let $f: M\rightarrow M$ be a $C^{1+\alpha}$ diffeomorphism, with a non-empty Pesin set $\Lambda=\Lambda(\beta_1,\beta_2;\epsilon)$ and fixed parameters, $\beta_1,\beta_2 \gg \epsilon > 0$. For $c > 0$ there exists a sequence $(\delta_k)_{k=1}^{\infty}$ such that for any $(\delta_k)_{k=1}^{\infty}$ pseudo-orbit there exists a unique $c$-shadowing point. \end{Thm}
\section{Entropy for non-compact spaces} In our setting the saturated sets are often non-compact. In \cite{Bowen4} Bowen gave the definition of topological entropy for non-compact spaces. We state the definition in a slightly different way; the two definitions are in fact equivalent. Let $E\subset M$ and let $\mathcal{C}_n(E,\varepsilon)$ be the set of all finite or countable covers of $E$ by sets of the form $B_m(x,\varepsilon)$ with $m\geq n$. Denote $$\mathcal{Y}(E; t,n,\varepsilon)=\inf\{ \sum_{B_m(x,\varepsilon)\in A}\,\,e^{-tm}\mid \,\,A\in \mathcal{C}_n(E,\varepsilon)\},$$ $$\mathcal{Y}(E; t,\varepsilon)=\lim_{n\rightarrow \infty}\mathcal{Y}(E; t,n,\varepsilon).$$ Define $$h_{\operatorname{top}}(E;\varepsilon)=\inf \{t\mid\,\mathcal{Y}(E; t,\varepsilon)=0\}=\sup \{t\mid\,\mathcal{Y}(E; t,\varepsilon)=\infty\}$$
and the topological entropy of $E$ is
$$h_{\operatorname{top}}(E,f)=\lim_{\varepsilon\rightarrow 0} h_{\operatorname{top}}(E;\varepsilon).$$
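As a simple illustration of the definition (not needed in the sequel), if $E=\{x\}$ consists of a single point then $h_{\operatorname{top}}(E,f)=0$: the one-element cover $\{B_n(x,\varepsilon)\}$ belongs to $\mathcal{C}_n(E,\varepsilon)$, so for every $t>0$
$$\mathcal{Y}(E;t,n,\varepsilon)\leq e^{-tn}\rightarrow 0\quad\text{as } n\rightarrow\infty,$$
hence $\mathcal{Y}(E;t,\varepsilon)=0$ for all $t>0$ and all $\varepsilon>0$.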
The following formulas from \cite{PS} (Theorem 4.1(3)) are subcases of Bowen's variational principle and hold in the general topological setting. \begin{Prop}\label{generical lemmas} Let $K\subset \mathcal{M}_{inv}(M,f)$ be non-empty, compact
and connected. Then
$$h_{\operatorname{top}}(f,G_K)\leq \inf\{h_\mu(f)\mid \mu\in K\}.$$
In particular, taking $K=\{\mu\}$ one has
$$h_{\operatorname{top}}(f,G_{\mu})\leq h_\mu(f).$$ \end{Prop}By the above proposition, to prove Theorem \ref{main theorem of measure} and Theorem \ref{main theorem of set}, it suffices to show the following theorems.
\begin{Thm}\label{main theorem of measure1} For every $\mu\in \mathcal{M}_{inv}(\widetilde{\Lambda}, f)$, we have $$h_{\operatorname{top}}(f,G_{\mu})\geq h_\mu(f).$$ \end{Thm}
\begin{Thm}\label{main theorem of set1}Let $\eta=\{\eta_n\}$ be a sequence decreasing to zero and $\mathcal{M}(\widetilde{\Lambda},\eta)\subset\mathcal{M}_{inv}(\widetilde{\Lambda},f) $ be the set of measures with hyperbolic rate $\eta$. Given any nonempty compact connected set $K\subset \mathcal{M}(\widetilde{\Lambda},\eta)$, we have $$h_{\operatorname{top}}(f,G_K)\geq\inf\{h_{\mu}(f)\mid \mu\in K\}.$$
\end{Thm} \begin{Rem}Let $\mu\in \mathcal{M}_{inv}(M,f)$ and $K\subset \mathcal{M}_{inv}(M,f)$ be a nonempty compact connected set. In \cite{PS}, C. E. Pfister and W. G. Sullivan proved that\\ (1) with the almost product property (for the detailed definition, see \cite{PS}), it holds that $$h_{\operatorname{top}}(f,G_{\mu})= h_\mu(f);$$ (2) with the almost product property plus uniform separation (for the detailed definition, see \cite{PS}), it holds that $$h_{\operatorname{top}}(f,G_K)=\inf\{h_{\mu}(f)\mid \mu\in K\}. $$ However, for nonuniformly hyperbolic systems, the shadowing and separation are inherited from the weak hyperbolicity of the Lyapunov neighborhoods, which varies with the index $k$ of the Pesin blocks $\Lambda_k$; hence in general the almost product property and uniform separation both fail. \end{Rem}
\section{Proofs of Theorem \ref{main theorem of measure} and Theorem \ref{main theorem of measure1}}
In this section, we will verify Theorem \ref{main theorem of measure1} and thus complete the proof of Theorem \ref{main theorem of measure} by Proposition \ref{generical lemmas}.
For each ergodic measure $\nu$, we use Katok's definition of metric entropy (see \cite{Katok2}). For $x,y\in M$ and $n\in \mathbb{N}$, let $$d^n(x,y)=\max_{0\leq i\leq n-1}d(f^i(x),\,f^i(y)).$$ For $\varepsilon,\,\delta>0$, let $N_n(\varepsilon,\,\delta)$ be the minimal number of $\varepsilon$-Bowen balls $B_n(x,\,\varepsilon)$ in the $d^n$-metric, which cover a set of $\nu$-measure at least $1-\delta$. We define $$h_{\nu}^{Kat}(f,\varepsilon\mid \delta)=\limsup_{n\rightarrow\infty}\frac{\log N_n(\varepsilon,\,\delta)}{n}.$$ It follows by Theorem 1.1 of \cite{Katok2} that $$h_{\nu}(f)=\lim_{\varepsilon\rightarrow0}h_{\nu}^{Kat}(f,\varepsilon\mid \delta).$$ Recall that $\mathcal{M}_{erg}(M,f)$ denotes the set of all ergodic $f$-invariant measures supported on $M$. Assume $\mu=\int_{\mathcal{M}_{erg}(M,f)}\nu\,d\tau(\nu)$ is the ergodic decomposition of $\mu$; then by the Jacobs Theorem $$h_{\mu}(f)=\int_{\mathcal{M}_{erg}(M,f)}h_{\nu}(f)d\tau(\nu).$$ Define $$h_{\mu}^{Kat}(f,\varepsilon\mid \delta)\triangleq \int_{\mathcal{M}_{erg}(M,f)}h_{\nu}^{Kat}(f,\varepsilon\mid \delta)d\tau(\nu).$$ By the Monotone Convergence Theorem, we have $$h_{\mu}(f)=\int_{\mathcal{M}_{erg}(M,f)}\lim_{\varepsilon\rightarrow0}h_{\nu}^{Kat}(f,\varepsilon\mid \delta)d\tau(\nu) =\lim_{\varepsilon\rightarrow0}h_{\mu}^{Kat}(f,\varepsilon\mid \delta).$$
\noindent{\bf Proof of Theorem \ref{main theorem of measure1}}\,\,\,Assume $\{\varphi_i\}_{i=1}^{\infty}$ is a dense subset of $C^0(M)$ which generates the weak$^*$ topology, that is,
$$D(\mu,\,\nu)=\sum_{i=1}^{\infty}\frac{|\int \varphi_id\mu-\int \varphi_id\nu|}{2^{i+1}\|\varphi_i\|}$$ for $\mu,\nu\in\mathcal{M}(M)$. It is easy to check the affine property of $D$, i.e., for any $\mu,\,m_1,\,m_2\in\mathcal{M}(M)$ and $0\leq\theta\leq1$, \begin{eqnarray*} D(\mu,\,\theta m_1+(1-\theta)m_2 )\leq \theta D(\mu, m_1)+(1-\theta)D(\mu, m_2).\end{eqnarray*} In addition, $D(\mu,\nu)\leq 1$ for any $\mu,\nu\in \mathcal{M}(M)$. For any integer $k\geq1$ and $\varphi_1,\cdots,\varphi_k$, there exists $b_k>0$ such that \begin{eqnarray}\label{points approximation}
|\varphi_j(x)-\varphi_j(y)|<\frac{1}{k}\|\varphi_j\|\,\,\,\mbox{for any}\,\,d(x,y)<b_k,\,\,1\leq j\leq k.\end{eqnarray} Now fix $\varepsilon,\delta>0$.
\begin{Lem}\label{rational approximation}For any integer $k\geq1$ and invariant measure $\mu$, we can take a finite convex combination of ergodic probability measures with rational coefficients, $$\mu_k=\sum_{j=1}^{p_k}a_{k,j}\,m_{k,j}$$ such that \begin{eqnarray}\label{measures approximation} D(\mu,\mu_k)<\frac{1}{k},\,\,\,m_{k,j}(\widetilde{\Lambda})=1\,\,\,\mbox{
and}\,\,\, |h_{\mu}^{Kat}(f,\varepsilon\mid \delta)-h_{\mu_k}^{Kat}(f,\varepsilon\mid
\delta)|<\frac1k.\end{eqnarray} \end{Lem} \begin{proof} From the ergodic decomposition, we get $$ \int_{\widetilde{\Lambda}} \varphi_i d\mu=\int_{\mathcal{M}_{erg}(\widetilde{\Lambda}, f)} \int_{\widetilde{\Lambda}} \varphi_i dm \,d\tau(m),\quad 1\leq i\leq k. $$ Using the definition of the Lebesgue integral, we proceed as follows. First, we denote
$$A_{+}:= \max_{1\leq i \leq k}\sup_{m\in\mathcal{M}_{erg}(\widetilde{\Lambda},f)}\int_{\widetilde{\Lambda}} \varphi_i \,d m+1, \,\,\,A_{-}:= \min_{1\leq i \leq k}\inf_{m\in\mathcal{M}_{erg}(\widetilde{\Lambda},f)}\int_{\widetilde{\Lambda}} \varphi_i \,d m -1$$ $$ F_{+}:=\sup_{m\in\mathcal{M}_{erg}(\widetilde{\Lambda},f)}{h_{m}^{Kat}(f,\varepsilon\mid \delta)}+1,\,\,\, F_{-}:=\inf_{m\in\mathcal{M}_{erg}(\widetilde{\Lambda},f)}{h_{m}^{Kat}(f,\varepsilon\mid \delta)}-1.$$ It is easy to see that: $$-\infty <A_{-}< A_{+}<+\infty,\,\,\,-\infty <F_{-}< F_{+}<+\infty.$$ For any integer $n>0$, let $$y_{0}=A_{-},\,\, y_{j}-y_{j-1}=\frac{A_{+}-A_{-}}{n},\,\,y_{n}=A_{+},$$ $$F_{0}=F_{-},\,\, F_{j}-F_{j-1}=\frac{F_{+}-F_{-}}{n},\,\,F_{n}=F_{+}.$$ We can take $E_{i,j}$, $\mathcal{F}_{s}$ to be measurable partitions of $\mathcal{M}_{erg}(\widetilde{\Lambda},f)\,\,$ as follows: $$E_{i,j}=\{m\in \mathcal{M}_{erg}(\widetilde{\Lambda},f) \mid \quad y_{j}\leq \int_{\widetilde{\Lambda}} \varphi_i dm< y_{j+1}\}, $$ $$\mathcal{F}_{s}=\{m\in \mathcal{M}_{erg}(\widetilde{\Lambda},f)\mid \quad F_{s}\leq h_{m}^{Kat}(f,\varepsilon\mid \delta) < F_{s+1}\}.$$ Noticing the fact that $\bigcup_{j}E_{i,j}=\mathcal{M}_{erg}(\widetilde{\Lambda},f)$ and $\bigcup_{s}\mathcal{F}_{s}=\mathcal{M}_{erg}(\widetilde{\Lambda},f)$, we can choose a new partition $\,\xi\,$ defined as: $$\xi=\bigwedge_{i,j}E_{i,j}\bigwedge_{s}\mathcal{F}_{s},$$ where $\varsigma\bigwedge \zeta$ is given by $\{A\cap B\mid \,A\in \varsigma,\,\,B\in \zeta\}$. For convenience, denote $\xi=\{\xi_{k,1},\xi_{k,2},\cdots,\xi_{k,p_{k}}\}$. To finish the proof of Lemma \ref{rational approximation}, we can let $n$ be large enough such that any combination $$\mu_k=\sum_{j=1}^{p_k}a_{k,j}\,m_{k,j}$$ where $m_{k,j}\in \xi_{k,j}$, rational numbers $a_{k,j}>0$ with
$|a_{k,j}-\tau(\xi_{k,j})|<\frac{1}{2k}$, satisfies: \begin{eqnarray*}\label{measures approximation} D(\mu,\mu_k)<\frac{1}{k},\,\,\,m_{k,j}(\widetilde{\Lambda})=1\,\,\,\mbox{
and}\,\,\, |h_{\mu}^{Kat}(f,\varepsilon\mid
\delta)-h_{\mu_k}^{Kat}(f,\varepsilon\mid \delta)|<\frac1k. \end{eqnarray*} \end{proof} For each $k$, we can find $l_k$ such that $m_{k,j}(\widetilde{\Lambda}_{l_k})>1-\delta$ for all $1\leq j\leq p_k$. Recall that $\epsilon_{l_k}$ is the scale of the Lyapunov neighborhoods associated with the Pesin block $\Lambda_{l_k}$. For any $x\in \Lambda_{l_k}$, $Df$ exhibits uniform hyperbolicity in $B(x,\epsilon_{l_k})$. For $c=\frac{\varepsilon}{8\epsilon_0}$, by Theorem \ref{specification} there is a sequence of numbers $(\delta_k)_{k=1}^\infty$. Let $\xi_k$ be a finite partition of $M$ with $\mbox{diam}\,\xi_k <\min\{\frac{b_k(1-e^{-\epsilon})}{4\sqrt{2}e^{(k+1)\epsilon}},\epsilon_{l_k},\delta_{l_k}\}$ and $\xi_k>\{\widetilde{\Lambda}_{l_k},M\setminus\widetilde{\Lambda}_{l_k}\}$. Given $t\in \mathbb{N}$, consider the set \begin{eqnarray*} \Lambda^t(m_{k,j})&=&\big{\{}x\in \widetilde{\Lambda}_{l_k}\mid f^q(x)\in \xi_k(x)~ \mbox{for some} ~ q\in[t,[(1+{\frac1{k}})t]]\\[2mm]
&&\mbox{and}~
D(\mathcal{E}_n(x),m_{k,j})<\frac1k\,\,\mbox{ for all}\,\,n\geq
t\big{\}},
\end{eqnarray*} where $\xi_k(x)$ denotes the element in the partition $\xi_k$ which contains the point $x$. Before going on the proof, we give the following claim.
{\bf Claim} $$m_{k,j}(\Lambda^t(m_{k,j}))\rightarrow m_{k,j}(\widetilde{\Lambda}_{l_k})\,\,\,\mbox{as}\,\,t\rightarrow +\infty.$$ \begin{proof}\,\,By the ergodicity of $m_{k,j}$ and the Birkhoff Ergodic Theorem, we know that for $m_{k,j}$-a.e. $x\in \widetilde{\Lambda}_{l_k}$, it holds that $$\lim_{n\to\infty}\mathcal{E}_{n}(x)=m_{k,j}.$$ So we only need to prove that the set $$ \Lambda^t_1(m_{k,j})=\big{\{}x\in \widetilde{\Lambda}_{l_k}\mid f^q(x)\in \xi_k(x)~ \mbox{for some} ~ q\in[t,[(1+{\frac1{k}})t]]\big{\}}$$ satisfies the property
$$m_{k,j}(\Lambda^t_1(m_{k,j}))\rightarrow m_{k,j}(\widetilde{\Lambda}_{l_k})\,\,\,\mbox{as}\,\,t\rightarrow +\infty.$$
We next need the following quantitative Poincar\'{e} Recurrence Theorem (see Lemma 3.12 in \cite{Bochi} for more detail).
\begin{Lem}\label{Reccurence Thm} Let $f$ be a $C^1$ diffeomorphism preserving an invariant measure $\mu$ supported on $M$. Let $\Gamma\subset M$ be a measurable set with $\mu(\Gamma)> 0$ and let $$\Omega =\cup_{n\in\mathbb{Z}}f^n(\Gamma).$$ Take $\gamma> 0$. Then there exists a measurable function $N_0 : \Omega\to \mathbb{N}$ such that for a.e. $x\in\Omega$, every $n\geq N_0(x)$ and every $t\in [0, 1]$ there is some $l\in\{0, 1, \ldots,n\}$ such that $f^l(x)\in\Gamma$
and $|(l/n)-t| < \gamma$. \end{Lem}
\begin{Rem} A slight modification (more precisely, replacing the interval $(n(t-\gamma),n(t+\gamma))$ by $[nt,n(t+\gamma))$) allows one to require that $0\leq (l/n)-t<\gamma$ in the above lemma. Hence we have $l\in [nt,n(t+\gamma)]$. \end{Rem}
Take an element $\xi^l_k$ of the partition $\xi_k$. Let $\Gamma=\xi^l_k$, $\gamma=\frac1k$. Applying Lemma \ref{Reccurence Thm} and its remark, we deduce that for a.e. $x\in\xi^l_k$ there exists a measurable function $N_0$ such that for every $t\geq N_0(x)$ there is some $q$ with $f^q(x)\in\xi^l_k=\xi_k(x)$ and $q\in [t,t(1+\frac1k)]$. That is to say, $t\geq N_0(x)$ implies $x\in\Lambda^t_1(m_{k,j})$. And this property holds for a.e. $x\in\xi^l_k$. Hence it is true for $m_{k,j}$-a.e. $x\in\widetilde{\Lambda}_{l_k}$. This completes the proof of the claim.
\end{proof}
Now we continue our proof of Theorem \ref{main theorem of measure1}. By the above claim, we can take $t_k$ such that $$m_{k,j}(\Lambda^{t}(m_{k,j}))>1-\delta$$ for all $t\geq t_k$ and
$1\leq j\leq p_k$.
Let $E_t(k,j)\subset \Lambda^{t}(m_{k,j})$ be a $(t,\varepsilon)$-separated set of maximal cardinality. Then $\Lambda^{t}(m_{k,j})\subset \cup_{x\in E_t(k,j)}B_{t}(x,\varepsilon) $, and by the definition of Katok's entropy there exist infinitely many $t$ satisfying $$\sharp\,E_t(k,j)\geq e^{t(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid \delta)-\frac1k)}.$$ For each $q\in [t,[(1+{\frac1{k}})t]]$, let $$V_q=\{x\in E_t(k,j) \mid f^q(x)\in \xi_k(x)\}$$ and let $n=n(k,j)$ be the value of $q$ which maximizes $\sharp\,V_q$. Obviously, \begin{eqnarray}\label{n t} t\geq \frac{n}{1+\frac1k}\geq n(1-\frac{1}{k}).\end{eqnarray} Since $e^{\frac{t}{k}}>\frac{t}{k}$, we deduce that $$\sharp\, V_n\geq \frac{\sharp\,E_t(k,j)}{\frac{t}{k}}\geq e^{t(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid \delta)-\frac3k)}.$$ Consider the element $A_n(m_{k,j})\in \xi_k$ for which $\sharp\,(V_n\cap A_n(m_{k,j})) $ is maximal. It follows that $$\sharp\,(V_n\cap A_n(m_{k,j}))\geq \frac{1}{\sharp\,\xi_k}\sharp\,V_n\geq \frac{1}{\sharp\,\xi_k}e^{t(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid \delta)-\frac3k)}.$$ Thus taking $t$ large enough so that $e^{\frac{t}{k}}>\sharp\,\xi_k$, we have by inequality (\ref{n t}) that \begin{eqnarray}\label{count}\sharp\,(V_n\cap A_n(m_{k,j}))\geq e^{t(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid \delta)-\frac4k)}\geq e^{n(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid \delta)-\frac4k)}.\end{eqnarray}
Notice that $A_{n(k,j)}(m_{k,j})$ is contained in an open subset $U(k,j)$ of some Lyapunov neighborhood with $\operatorname{diam}(U(k,j))<2\operatorname{diam}(\xi_k)$. By the ergodicity of $\omega$, for any two measures $m_{k_1,j_1}, m_{k_2,j_2}$ we can find $y=y(m_{k_1,j_1},m_{k_2,j_2})\in U(k_1,j_1)\cap\widetilde{\Lambda}_{l_{k_1}}$ satisfying that for some $s=s(m_{k_1,j_1},m_{k_2,j_2})$ one has $$f^{s}(y)\in U(k_2,j_2)\cap\widetilde{\Lambda}_{l_{k_2}}.$$ Letting $C_{k,j}=\frac{a_{k,j}}{n(k,j)}$, we can choose an integer $N_k$ large enough so that $N_kC_{k,j}$ are integers and $$N_k\geq k\sum_{1\leq r_1,r_2\leq k+1, 1\leq j_i\leq p_{r_i},i=1,2}s(m_{r_1,j_1},m_{r_2,j_2}).$$
Arbitrarily take $x(k,j)\in A_n(m_{k,j})\cap V_{n(k,j)}$. Denote sequences \begin{eqnarray*} X_k&=&\sum_{j=1}^{p_k-1}s(m_{k,j},m_{k,j+1})+s(m_{k,p_k},m_{k,1})\\[2mm] Y_k&=&\sum_{j=1}^{p_k}N_kn(k,j)C_{k,j}+X_k=N_k+X_k.\end{eqnarray*} So, \begin{eqnarray}\label{small bridge}\frac{N_k}{Y_k}\geq \frac{1}{1+\frac1k}\geq 1-\frac1k.\end{eqnarray}
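Indeed, \eqref{small bridge} holds because the choice of $N_k$ above gives $N_k\geq k\,X_k$ (every term of $X_k$ appears in the sum bounding $N_k$ from below), so that
$$\frac{N_k}{Y_k}=\frac{N_k}{N_k+X_k}\geq\frac{1}{1+\frac1k}.$$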
We further choose a strictly increasing sequence $\{T_k\}$ with $T_k\in \mathbb{N}$, \begin{eqnarray}\label{circle1}Y_{k+1}&\leq& \frac{1}{k+1} \sum _{r=1}^{k}Y_rT_r,\\[2mm] \label{circle2}\sum_{r=1}^{k}(Y_rT_r+s(m_{r,1},m_{r+1,1}))&\leq& \frac{1}{k+1}Y_{k+1}T_{k+1}.\end{eqnarray}
In order to obtain shadowing points $z$ with our desired property $\mathcal{E}_n(z)\rightarrow \mu$ as $n\rightarrow +\infty$, we first construct pseudo-orbits with satisfactory property in the measure theoretic sense.
For simplicity of the statement, for $x\in M$ define segments of orbits \begin{eqnarray*}L_{k,j}(x)&\triangleq&(x,f(x),\cdots, f^{n(k,j)-1}(x)),\,\,\,1\leq j\leq p_k,\\[2mm] \widehat{L}_{k_1,j_1;k_2,j_2}(x)&\triangleq&(x,f(x)\cdots, f^{s(m_{k_1,j_1},m_{k_2,j_2})-1}(x)),\,\,1\leq j_i\leq p_{k_i},i=1,2.\end{eqnarray*} \begin{figure}
\caption{\,Quasi-orbits }
\end{figure}
Consider now the pseudo-orbit \begin{eqnarray}
\label{quasi orbits}\quad \quad O&=&O(x(1,1;1,1),\cdots,x(1,1; 1, N_1C_{1,1}),\cdots,x(1,p_1; 1, 1),\cdots,x(1,p_1; 1, N_1C_{1,p_1});\nonumber\\[2mm]
&&\cdots;\nonumber \\[2mm]
&&x(1,1;T_1,1),\cdots,x(1,1; T_1, N_1C_{1,1}),\cdots,x(1,p_1; T_1, 1),\cdots,x(1,p_1; T_1, N_1C_{1,p_1});\nonumber\\[2mm]
&&\vdots\nonumber \\[2mm]
&&x(k,1; 1, 1),\cdots,x(k,1; 1, N_kC_{k,1}),\cdots,x(k,p_k; 1, 1),\cdots,x(k,p_k; 1, N_kC_{k,p_k});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(k,1;T_k,1),\cdots,x(k,1; T_k, N_kC_{k,1}),\cdots,x(k,p_k; T_k, 1),\cdots,x(k,p_k; T_k, N_kC_{k,p_k});\nonumber\\[2mm]
&&\vdots\nonumber \\[2mm]
&&\cdots\,\,)\nonumber\end{eqnarray}
with the precise form as follows \begin{eqnarray*} \label{precise quasi orbits} &\big{\{}&\,\,[\,L_{1,1}(x(1,1; 1,1)),\cdots,L_{1,1}(x(1,1; 1, N_1C_{1,1})),\widehat{L}_{1,1,1,2}(y(m_{1,1},m_{1,2}));\\[2mm] &&L_{1,2}(x(1,2; 1,1)),\cdots,L_{1,2}(x(1,2;1, N_1C_{1,2})),\widehat{L}_{1,2,1,3}(y(m_{1,1},m_{1,2})); \cdots\nonumber\\[2mm] &&L_{1,p_1}(x(1,p_1;1, 1)),\cdots,L_{1,p_1}(x(1,p_1;1, N_1C_{1,p_1})),\widehat{L}_{1,p_1,1,1}(y(m_{1,p_1},m_{1,1}))\,\,;\nonumber \\[2mm] &&\cdots\nonumber\\[2mm] &&\,L_{1,1}(x(1,1; T_1, 1)),\cdots,L_{1,1}(x(1,1; T_1, N_1C_{1,1})),\widehat{L}_{1,1,1,2}(y(m_{1,1},m_{1,2}));\nonumber\\[2mm] &&L_{1,2}(x(1,2;T_1, 1)),\cdots,L_{1,2}(x(1,2; T_1, N_1C_{1,2})),\widehat{L}_{1,2,1,3}(y(m_{1,1},m_{1,2})); \cdots\nonumber\\[2mm] &&L_{1,p_1}(x(1,p_1;T_1, 1)),\cdots,L_{1,p_1}(x(1,p_1;T_1, N_1C_{1,p_1})),\widehat{L}_{1,p_1,1,1}(y(m_{1,p_1},m_{1,1}))\,]\,;\nonumber \\[2mm] &&\widehat{L}(y(m_{1,1},m_{2,1}));\nonumber\\[2mm] &&\vdots\nonumber\\[2mm] &&\,[\,L_{k,1}(x(k,1; 1,1)),\cdots,L_{k,1}(x(k,1;1, N_kC_{k,1})),\widehat{L}_{k,1,k,2}(y(m_{k,1},m_{k,2}));\nonumber\\[2mm] &&L_{k,2}(x(k,2; 1, 1)),\cdots,L_{k,2}(x(k,2;1, N_kC_{k,2})),\widehat{L}_{k,2,k,3}(y(m_{k,1},m_{k,2})); \cdots\nonumber \\[2mm] &&L_{k,p_k}(x(k,p_k; 1, 1)),\cdots,L_{k,p_k}(x(k,p_k;1, N_kC_{k,p_k})),\widehat{L}_{k,p_k,k,1}(y(m_{k,p_k},m_{k,1}))\,\,;\nonumber\\[2mm] &&\widehat{L}(y(m_{k,1},m_{k+1,1}));\nonumber\\[2mm] &&\cdots\,\,\,\nonumber\\[2mm] &&L_{k,1}(x(k,1; T_k,1)),\cdots,L_{k,1}(x(k,1;T_k, N_kC_{k,1})),\widehat{L}_{k,1,k,2}(y(m_{k,1},m_{k,2}));\nonumber\\[2mm] &&L_{k,2}(x(k,2; T_k, 1)),\cdots,L_{k,2}(x(k,2;T_k, N_kC_{k,2})),\widehat{L}_{k,2,k,3}(y(m_{k,1},m_{k,2})); \cdots\nonumber \\[2mm] &&L_{k,p_k}(x(k,p_k; T_k, 1)),\cdots,L_{k,p_k}(x(k,p_k;T_k, N_kC_{k,p_k})),\widehat{L}_{k,p_k,k,1}(y(m_{k,p_k},m_{k,1}))\,]\,;\nonumber\\[2mm] &&\widehat{L}(y(m_{k,1},m_{k+1,1}));\nonumber\\[2mm] &&\vdots\,\,\,\nonumber\\[2mm] &&\cdots
\big{\}},\end{eqnarray*} where $x(k,j;i,t)\in V_{n(k,j)}\cap A_{n(k,j)}(m_{k,j})$.
For $k\geq 1$, $1\leq i\leq T_k$, $1\leq j\leq p_k$, $t\geq 1$, let $M_1=0$, \begin{eqnarray*}M_k&=&M_{k,1}=\sum_{r=1}^{k-1}(T_rY_r+s(m_{r,1},m_{r+1,1})),\\[2mm] M_{k,i}&=&M_{k,i,1}=M_k+(i-1)Y_k, \\[2mm] M_{k,i,j}&=& M_{k,i,j,1}=M_{k,i} +\sum_{q=1}^{j-1}(N_k\,n(k,q)C_{k,q}+s(m_{k,q},m_{k,q+1})),\\[2mm] M_{k,i,j,t}&=&M_{k,i,j}+(t-1)n(k,j).\end{eqnarray*}
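We also record a consequence of \eqref{circle1} that will be used at the end of the proof: within the $k$-th block, consecutive times satisfy, for $k\geq2$ and $1\leq i<T_k$,
$$\frac{M_{k,i+1}}{M_{k,i}}=1+\frac{Y_k}{M_{k,i}}\leq 1+\frac{Y_k}{\sum_{r=1}^{k-1}T_rY_r}\leq 1+\frac1k;$$
this is the main step behind the fact that $\frac{M_{k,i+1}}{M_{k,i}}\rightarrow1$ as $k\rightarrow\infty$, the remaining (block-boundary) cases being handled similarly.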
By Theorem \ref{specification}, there exists a shadowing point $z$ of $O$ such that $$d(f^{M_{k,i,j,t}+q}(z),f^q(x(k,j;i, t)))<c\epsilon_0e^{-\epsilon l_k}<\frac{\varepsilon}{4\epsilon_0}\epsilon_0e^{-\epsilon l_k} \leq \frac{\varepsilon}{4},$$ for $0\leq q\leq n(k,j)-1,$ $1\leq i \leq T_k$, $1\leq t\leq N_kC_{k,j}$, $1\leq j\leq p_k$. To be precise, $z$ can be considered as a map with variables $x(k,j;i,t)$: \begin{eqnarray}
\label{quasi orbits}\quad \quad z&=&z(x(1,1;1,1),\cdots,x(1,1; 1, N_1C_{1,1}),\cdots,x(1,p_1; 1, 1),\cdots,x(1,p_1; 1, N_1C_{1,p_1});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(1,1;T_1,1),\cdots,x(1,1; T_1, N_1C_{1,1}),\cdots,x(1,p_1; T_1, 1),\cdots,x(1,p_1; T_1, N_1C_{1,p_1});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(k,1; 1, 1),\cdots,x(k,1; 1, N_kC_{k,1}),\cdots,x(k,p_k; 1, 1),\cdots,x(k,p_k; 1, N_kC_{k,p_k});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(k,1;T_k,1),\cdots,x(k,1; T_k, N_kC_{k,1}),\cdots,x(k,p_k; T_k, 1),\cdots,x(k,p_k; T_k, N_kC_{k,p_k});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&\cdots\,\,)\nonumber\end{eqnarray} We denote by $\mathcal{J}$ the set of all shadowing points $z$ obtained in above procedure.
\begin{Lem}\label{converge} $\overline{\mathcal{J}}\subset G_{\mu}$. \end{Lem} \begin{proof} First we prove that for any $z\in \mathcal{J}$, $$\lim_{k\rightarrow+\infty}\mathcal{E}_{M_{k}}(z)=\mu.$$ We begin by estimating $d(f^{M_{k,i,j,t}+q}(z),f^q(x(k,j;i,t)))$ for $0\leq q\leq n(k,j)-1.$ Recall that in the procedure of finding the shadowing point $z$, all the constructions are done in the Lyapunov neighborhoods $\Pi(x(k,j;i,t),a\epsilon_k)$. Moreover, notice that we have required $\mbox{diam}\,\xi_k<\frac{b_k(1-e^{-\epsilon})}{4\sqrt{2}e^{(k+1)\epsilon}}$, which implies that for every two adjacent orbit segments $x(k,j; i_1, t_1)$ and $x(k,j; i_2, t_2)$, the end point of the preceding orbit segment and the starting point of the following segment are
$\frac{b_k(1-e^{-\epsilon})}{4\sqrt{2}e^{(k+1)\epsilon}}$ close to each other. Let $y$ be the unique intersection point of admissible manifolds $H^s(z)$ and $H^u(x)$. In what follows, define $d''$ to be the distance induced by $\|\cdot\|''$ in the local Lyapunov neighborhoods. By the hyperbolicity of $Df$ in the Lyapunov coordinates \footnote{This hyperbolic property is crucial in the estimation of distance along adjacent segments, so the weak shadowing lemma \ref{specification} (which is actually stated in topological way) does not suffice to conclude Theorem \ref{main theorem of measure} and the following Theorem \ref{main theorem of set}.}, we obtain \begin{eqnarray*}&&d(f^{M_{k,i,j,t}+q}(z),f^q(x(k,j; i, t)))\\[2mm]
&\leq& d(f^{M_{k,i,j,t}+q}(z),f^q(y))+d(f^q(y),f^q(x(k,j; i, t)))\\[2mm] &\leq&\sqrt{2}d''(f^{M_{k,i,j,t}+q}(z),f^q(y))+\sqrt{2}d''(f^q(y),f^q(x(k,j; i, t)))\\[2mm] &\leq&\sqrt{2}e^{-(\beta_1''-\epsilon)q}d''(f^{M_{k,i,j,t}}(z),y)+\sqrt{2}e^{-(\beta_2''-\epsilon)(n(k,j)-q)}d''(f^{n(k,j)}(y),f^{n(k,j)}(x(k,j; i, t)))\\[2mm] &\leq &\sqrt{2} \max\{e^{-(\beta_1''-\epsilon)q},e^{-(\beta_2''-\epsilon)(n(k,j)-q)}\}(d''(f^{M_{k,i,j,t}}(z),y)\\[2mm]&&+d''(f^{n(k,j)}(y),f^{n(k,j)}(x(k,j; i, t))))\\[2mm] &\leq&\frac{2\sqrt{2}e^{\epsilon (k+1)}}{1-e^{-\epsilon}}(d(f^{M_{k,i,j,t}}(z),y)+d(f^{n(k,j)}(y),f^{n(k,j)}(x(k,j; i, t))))\\[2mm] &\leq& \frac{2\sqrt{2}e^{\epsilon (k+1)}}{1-e^{-\epsilon}} 2\operatorname{diam} (\xi_k)\\[2mm] &<&b_k \end{eqnarray*} for $0\leq q\leq n(k,j)-1$. Now we can deduce that
$$|\varphi_p(f^{M_{k,i,j,t}+q}(z)-\varphi_p(f^q(x(k,j; i,t)))|<\frac{1}{k}\|\varphi_p\|,\,\,\,\,1\leq p\leq k,$$ which implies that \begin{eqnarray}\label{approximation2}D(\mathcal{E}_{n(k,j)}(f^{M_{k,i,j,t}}(z)),\mathcal{E}_{n(k,j)}(x(k,j; i, t)))<\frac1k+\frac{1}{2^{k-1}}<\frac2k, \end{eqnarray} for sufficiently large $k$. By the triangle inequality, we have \begin{eqnarray*} D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\mu)&\leq& D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\mu_k)+\frac1k\\[2mm] &\leq&D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\frac{1}{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)))\\[2mm] &&+D(\frac{1}{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)),\,\mu_k)+\frac1k \end{eqnarray*} Note that for any $\varphi\in C^0(M)$, it holds \begin{eqnarray*}
& &\|\int\varphi d\mathcal{E}_{Y_k}(f^{M_{k,i}}(z))-\int\varphi d\frac{1}{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)))\|\\[2mm]
&=&\|\frac1{Y_k}\sum_{q=1}^{Y_k-1}\varphi(f^{M_{k,i}+q}(z))-\frac1{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}\sum_{q=1}^{n(k,j)-1}\varphi(f^{M_{k,i,j}+q}(z))\|\\[2mm]
&\leq&\|\frac1{Y_k}\sum_{j=1}^{p_k}N_kC_{k,j}\sum_{q=1}^{n(k,j)-1}\varphi(f^{M_{k,i,j}+q}(z))-\frac1{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}\sum_{q=1}^{n(k,j)-1}\varphi(f^{M_{k,i,j}+q}(z))\|\\[2mm]
&&+\|\frac1{Y_k}(\sum_{j=1}^{p_k-1}\sum_{q=1}^{s(m_{k,j},\,m_{k,j+1})-1}\varphi(f^{M_{k,i,j}-s(m_{k,j},\,m_{k,j+1})+q}(z))\\[2mm]
&&+\sum_{q=1}^{s(m_{k,p_k},\,m_{k,1})-1}\varphi(f^{M_{k,i,j}-s(m_{k,p_k},\,m_{k,1})+q}(z)))\|\\[2mm]
&\leq&[|(\frac{1}{Y_k}-\frac{1}{Y_k-X_k})(Y_k-X_k)|+\frac{X_k}{Y_k}]\|\varphi\|. \end{eqnarray*} Then by the definition of $D$, the above inequality implies that \begin{eqnarray*} &&D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\frac{1}{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)))\\&\leq&
|(\frac{1}{Y_k}-\frac{1}{Y_k-X_k})(Y_k-X_k)|+\frac{X_k}{Y_k}. \end{eqnarray*} Thus, by the affine property of $D$, together with the property $a_{k,j}=n(k,j)C_{k,j}$ and $N_k=Y_k-X_k$, we have \begin{eqnarray*} D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\mu)&\leq& D(\frac{1}{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)),
\sum_{j=1}^{p_k}a_{k,j}m_{k,j})\\[2mm]&&+|(\frac{1}{Y_k}-\frac{1}{Y_k-X_k})(Y_k-X_k)|+\frac{X_k}{Y_k}+\frac{1}{k}\\[2mm] &\leq&\frac{N_k}{Y_k-X_k}\sum_{j=1}^{p_k}a_{k,j}D(\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z),m_{k,j})+\frac{2X_k}{Y_k}+\frac{1}{k}\\[2mm] &=&\sum_{j=1}^{p_k}a_{k,j}D(\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z),m_{k,j})+\frac{2X_k}{Y_k}+\frac{1}{k}. \end{eqnarray*} Noting that \begin{eqnarray*} &&\sum_{j=1}^{p_k}a_{k,j}D(\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z),m_{k,j})\\[2mm] &\leq&\sum_{j=1}^{p_k}a_{k,j}D(\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)),\mathcal{E}_{n(k,j)}(x(k,j))) +\sum_{j=1}^{p_k}a_{k,j}D(\mathcal{E}_{n(k,j)}(x(k,j)),\,m_{k,j}) \end{eqnarray*} and by the definition of $\Lambda^t(m_{k,j})$ which all $x(k,j)$ belong to and by (\ref{approximation2}), we can further deduce that \begin{eqnarray*} D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\mu)&\leq&\sum_{j=1}^{p_k}a_{k,j}D(\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z),\mathcal{E}_{n(k,j)}(x(k,j))) +\frac1k+\frac{2X_k}{Y_k}+\frac{1}{k}\\[2mm] &\leq&\frac{2}{k}+\frac1k+\frac{2X_k}{Y_k}+\frac{1}{k}\\[2mm] &\leq&\frac6k\,\,\,\,\,\,(\mbox{by}\,(\ref{small bridge})). \end{eqnarray*} Hence, by affine property and inequalities (\ref{circle1}) and (\ref{circle2}) and $D(\cdot,\cdot)\leq1$, we obtain that \begin{eqnarray*} D(\mathcal{E}_{M_{k+1}}(z),\,\mu)&\leq& \frac{\sum_{r=1}^{k-1}(T_rY_r+s(m_{r,1},m_{r+1,1}))+s(m_{k,1},m_{k+1,1})}{T_kY_k+\sum_{r=1}^{k-1}(T_rY_r+s(m_{r,1},m_{r+1,1}))+s(m_{k,1},m_{k+1,1})} \\[2mm] &&+\frac{T_kY_k}{T_kY_k+\sum_{r=1}^{k-1}T_rY_r+s(m_{r,1},m_{r+1,1})}D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\mu)\\[2mm] &\leq& \frac{\sum_{r=1}^{k-1}(T_rY_r+s(m_{r,1},m_{r+1,1}))+s(m_{k,1},m_{k+1,1})}{T_kY_k+\sum_{r=1}^{k-1}(T_rY_r+s(m_{r,1},m_{r+1,1}))+s(m_{k,1},m_{k+1,1})} \\[2mm] &&+\frac{T_kY_k}{T_kY_k+\sum_{r=1}^{k-1}T_rY_r+s(m_{r,1},m_{r+1,1})}\frac6k\\[2mm] &\leq&\frac8k. \end{eqnarray*} Thus, $$\lim_{k\rightarrow+\infty}\mathcal{E}_{M_k}(z)=\mu.$$ For $M_{k,i}\leq n\leq M_{k,i+1}$ (here we appoint $M_{k,p_k+1}=M_{k+1,1}$), it follows that \begin{eqnarray*} D(\mathcal{E}_{n}(z),\,\mu)&\leq& \frac{M_{k}}{n}D(\mathcal{E}_{M_{k}}(z),\mu)+\frac{1}{n} \sum_{p=1}^{i-1}D(\mathcal{E}_{Y_k}(f^{M_{k,p-1}}(z)),\mu)\,\,\,\,(\mbox{by affine property})\\[2mm] &&+\frac{n-M_{k,i}}{n}D(\mathcal{E}_{n-M_{k,i}}(f^{M_{k,i}}(z)),\mu) \\[2mm] &\leq& \frac{M_{k}}{n}\frac8k+\frac{(i-1)Y_k}{n}\frac6k+\frac{Y_k+s(m_{k,1},m_{k+1,1})}{n}\\[2mm] &\leq&\frac{15}k\,\,\,\,\,\,(\mbox{by}\,(\ref{circle1}) \,\mbox{and}\,(\ref{circle2})). \end{eqnarray*} Let $n\rightarrow +\infty$, then $k\rightarrow +\infty$ and $\mathcal{E}_{n}(z)\rightarrow \mu$. That is $\mathcal{J}\in G_{\mu}$. For any $z'\in \overline{\mathcal{J}}$, we take $z_t\in \mathcal{J} $ with $\lim_n z_t=z'$. Observing that for $M_{k,i}\leq n\leq M_{k,i+1}$, $D(\mathcal{E}_{n}(z_t),\,\mu)\leq 15/k$ by continuity it also holds that $D(\mathcal{E}_{n}(z'),\,\mu)\leq 15/k$. This completes the proof of the Lemma \ref{converge}. \end{proof}
To finish the proof of Theorem \ref{main theorem of measure1}, we need to compute the entropy of $\overline{\mathcal{J}}\subset G_{\mu}$. Notice that the choices of the position labeled by $x(k,j; i, t)$ in (\ref{quasi orbits}) has at least $$e^{n(k,j)(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid \delta)-\frac{4}{k})}$$ by (\ref{count}). Moreover, fixing the position indexed $k,j,t$, for distinct $x(k,j; i, t), x'(k,j; i, t)\in V_{n(k,j)}\cap A_{n(k,j)}(m_{k,j}) $, the corresponding shadowing points $z,z'$ satisfying \begin{eqnarray*}&&d(f^{M_{k,i,j,t}+q}(z),f^{M_{k,i,j,t}+q}(z'))\\&\geq& d(f^{q}(x(k,j; i, t)), f^{q}(x'(k,j; i, t)))-d(f^{M_{k,i,j,t}+q}(z), f^{q}(x(k,j; i, t)))\\&&-d(f^{M_{k,i,j,t}+q}(z'), f^{q}(x'(k,j;i, t)))\\ &\geq& d(f^{q}(x(k,j; i, t)), f^{q}(x'(k,j,t)))-\frac\varepsilon2.\end{eqnarray*} Since $x(k,j,t), x'(k,j; i, t)$ are $(n(k,j),\varepsilon)$-separated, so $f^{M_{k,i,j,t}}(z)$, $f^{M_{k,i,j,t}}(z')$ are $(n(k,j),\frac\varepsilon2)$-separated. Denote sets concerning the choice of quasi-orbits in $M_{ki}$ \begin{eqnarray*} H_{ki}=\{&&(x(k,j;i,1),\cdots,x(k,j;i,N_kC_{k,j}),\cdots,x(k,p_k;i,1),\cdots,x(1,p_k;i, N_kC_{k,p_k})\\[2mm]
&&\mid \,\,x(k,j;i,t)\in V_{n(k,j)}\cap A_{n(k,j)}(m_{k,j})\}. \end{eqnarray*}
Then \begin{eqnarray*} \sharp H_{ki}\geq e^{\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid \delta)-\frac{4}{k})}. \end{eqnarray*} Hence, \begin{eqnarray} \label{large katok entropy}\frac{1}{Y_k}\log \,\sharp H_{ki}&\geq& \frac{Y_k-X_k}{Y_k}\sum_{j=1}^{p_k}a_{k,j}(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid\delta)-\frac{4}{k})\\[2mm] &\geq&(1-\frac{1}{k})\sum_{j=1}^{p_k}a_{k,j}(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid\delta)-\frac{4}{k})\nonumber\\[2mm] &=&(1-\frac1k)^2h_{\mu_{k}}^{Kat}(f,\varepsilon\mid\delta)-\frac4k(1-\frac{1}{k})^2\nonumber\\[2mm] &\geq&(1-\frac1k)^2(h_{\mu}^{Kat}(f,\varepsilon\mid\delta)-\frac1k)-\frac4k(1-\frac{1}{k})^2.\nonumber \end{eqnarray} Since $\overline{\mathcal{J}}$ is compact we can take only finite covers $\mathcal{C}(\overline{\mathcal{J}},\varepsilon/2)$ of $\overline{\mathcal{J}}$ in the calculation of topological entropy $h_{\operatorname{top}}(\overline{\mathcal{J}},\frac\varepsilon2)$. Let $r<h_{\mu}^{Kat}(f,\varepsilon\mid\delta)$. For each $\mathcal{A}\in \mathcal{C}(\overline{\mathcal{J}},\varepsilon/2)$ we define a new cover $\mathcal{A}'$ in which for $M_{k,i}\leq m\leq M_{k,i+1}$, $B_m(z,\varepsilon/2)$ is replaced by $B_{M_{k,i}}(z,\varepsilon/2)$, where we suppose $M_{k,0}=M_{k-1, p_{k-1}}$, $M_{k,p_k+1}=M_{k+1,1}$. Therefore, \begin{eqnarray*} \mathcal{Y}(\overline{\mathcal{J}};r,n,\varepsilon/2 )=\inf_{\mathcal{A}\in \mathcal{C}(\overline{\mathcal{J}},\varepsilon/2)}\sum_{B_{m}(z,\varepsilon/2)\in \mathcal{A}} e^{-rm}\geq \inf_{\mathcal{A}\in \mathcal{C}(\overline{\mathcal{J}},\varepsilon/2)}\sum_{B_{M_{k,i}}(z,\varepsilon/2)\in \mathcal{A}'} e^{-rM_{k,i+1}}. \end{eqnarray*} Denote $$b=b(\mathcal{A}')=\max\{M_{k,i}\mid\,\,B_{m}(z,\frac\varepsilon2)\in \mathcal{A}'\quad \mbox{and}\quad M_{k,i}\leq m<M_{k,i+1}\}.$$ Noticing that $\mathcal{A}'$ is a cover of $\mathcal{J}$ each point of $\mathcal{J}$ belongs to some $B_{M_{k,i}}(x, \frac\varepsilon2)$ with $M_{k,i}\leq b$. Moreover, if $z,z'\in \mathcal{J}$ with some position $x(k,j;i,t)\neq x'(k,j;i,t)$ then $z,z'$ can't stay in the same $B_{M_{k,i}}(x, \frac\varepsilon2)$. Define \begin{eqnarray*} W_{k,i}=\{B_{M_{k,i}}(z, \frac\varepsilon2)\in \mathcal{A}' \}. \end{eqnarray*} It follows that \begin{eqnarray*} \sum_{M_{k,i}\leq b} \,\,\,\sharp W_{k,i} \,\,\Pi_{M_{k,i}< M_{k',i'}\leq b } \,\,\,\sharp H_{k',i'} \,\,\geq\,\, \Pi_{M_{k',i'}\leq b }\,\, \sharp H_{k',i'}\,\,. \end{eqnarray*} So, \begin{eqnarray*} \sum_{M_{k,i}\leq b} \,\,\,\sharp W_{k,i}\,\, (\Pi_{ M_{k',i'}\leq M_{k,i} } \,\,\,\sharp H_{k',i'})^{-1} \,\,\geq\,\, 1. 
\end{eqnarray*} From (\ref{large katok entropy}) it is easily seen that $$\limsup_{k\rightarrow \infty}\frac{\Pi_{ M_{k',i'}\leq M_{k,i} } \,\,\,\sharp H_{k',i'}}{\exp(h_{\mu}^{Kat}(f,\varepsilon\mid\delta)M_{k,i})}\geq 1.$$ Since $r<h_{\mu}^{Kat}(f,\varepsilon\mid\delta)$ and $\lim_{k\rightarrow \infty}\frac{M_{k,i+1}}{M_{k,i}}=1$, we can take $k$ large enough so that $$\frac{M_{k,i+1}}{M_{k,i}}\leq \frac{h_{\mu}^{Kat}(f,\varepsilon\mid\delta)}{r}.$$ Thus there is some constant $c_0>0$ for large $k$ \begin{eqnarray*} \sum_{B_{M_{k,i}}(z,\varepsilon/2)\in \mathcal{A}'} e^{-rM_{k,i+1}}&=& \sum_{M_{k,i}\leq b} \,\,\,\sharp W_{k,i}\,\,\,e^{-rM_{k,i+1}}\\[2mm]&\geq& \sum_{M_{k,i}\leq b} \,\,\,\sharp W_{k,i}\,\, \exp(-h_{\mu}^{Kat}(f,\varepsilon\mid\delta)M_{k,i}) \,\,\\[2mm]&\geq& c_0\sum_{M_{k,i}\leq b} \,\,\,\sharp W_{k,i}\,\,(\Pi_{ M_{k',i'}\leq M_{k,i} } \,\,\,\sharp H_{k',i'})^{-1}\,\, \\[2mm]&\geq&c_0, \end{eqnarray*} which together with the arbitrariness of $r$ gives rise to the required inequality $$h_{\operatorname{top}}(\overline{\mathcal{J}},\frac\varepsilon2)\geq h_{\mu}^{Kat}(f,\varepsilon\mid\delta).$$ Finally, the arbitrariness of $\varepsilon$ yields: $$h_{\operatorname{top}}(f,G_{\mu})\geq h_{\mu}(f).$$
$\Box$
\section{Proofs of Theorem \ref{main theorem of set} and Theorem \ref{main theorem of set1}} We start this section by recalling the notion of entropy introduced by Newhouse \cite{Newhouse}. Given $\mu\in \mathcal{M}_{inv}(M,f)$, let $F\subset M$ be a measurable set. Define \\ \begin{eqnarray*}&(1)&H(n,\rho\mid x,F,\varepsilon)=\log \max\{\sharp E\mid E \,\mbox{is a}\,(d^n,\rho)-\mbox{separated set in }\,F\cap B_{n}(x,\varepsilon) \};\\[2mm] &(2)&H(n,\rho\mid F,\varepsilon)=\sup_{x\in F}H(n,\rho\mid x, F,\varepsilon); \\[2mm] &(3)&h(\rho\mid F,\varepsilon)=\limsup_{n\rightarrow+\infty}\frac1nH(n,\rho\mid F,\varepsilon);\\[2mm] &(4)&h( F,\varepsilon)=\lim_{\rho\rightarrow0}h(\rho\mid F,\varepsilon);\\[2mm] &(5)&h^{New}_{\operatorname{loc}}( \mu,\varepsilon)=\liminf_{\sigma\rightarrow1}\{h( F,\varepsilon)\mid \mu(F)>\sigma\};\\[2mm] &(6)&h^{New}(\mu,\varepsilon)=h_{\mu}(f)-h^{New}_{\operatorname{loc}}( \mu,\varepsilon)\end{eqnarray*}
Let $\{\theta_k\}_{k=1}^{\infty}$ be a decreasing sequence approaching zero. One can verify that $(h^{New}(\mu,\theta_k)\mid_{\mu\in \mathcal{M}_{inv}(M,f)})_{k=1}^{\infty}$ is in fact an increasing sequence of functions defined on $\mathcal{M}_{inv}(M,f)$. Furthermore, $$\lim_{\theta_k\rightarrow 0}h^{New}(\mu,\theta_k)=h_{\mu}(f)\,\,\,\mbox{ for any}\,\,\,\mu\in \mathcal{M}_{inv}(M,f).$$
Let $\mathcal{H}=(h_k)$ and $\mathcal{H}'=(h_k')$ be two increasing sequences of functions on a compact domain $\mathcal{D}$. We say $\mathcal{H}'$ {\it uniformly dominates} $\mathcal{H}$, denoted by $\mathcal{H}'\geq \mathcal{H}$, if for every index $k$ and every $\gamma>0$ there exists an index $k'$ such that $$h'_{k'}\geq h_k-\gamma.$$ We say that $\mathcal{H}$ and $\mathcal{H}'$ are {\it uniformly equivalent} if both $\mathcal{H}\geq \mathcal{H}'$ and $\mathcal{H}'\geq \mathcal{H}$. Obviously, uniform equivalence is an equivalence relation.
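As a simple illustration of these notions (recorded here only for orientation), any subsequence of an increasing sequence is uniformly equivalent to the sequence itself: if $\mathcal{H}'=(h_{k_m})_{m=1}^{\infty}$ is a subsequence of $\mathcal{H}=(h_k)$, then for every index $k$ and every $\gamma>0$ one may pick $m$ with $k_m\geq k$, so that $h_{k_m}\geq h_k\geq h_k-\gamma$; conversely, every $h_{k_m}$ already occurs in $\mathcal{H}$.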
Next we give some elements from the theory of entropy structures as developed by Boyle-Downarowicz \cite{BD}. An increasing sequence $\alpha_1\leq \alpha_2\leq\cdots$ of partitions of $M$ is called {\it essential} (for $f$ ) if \begin{eqnarray*}&&(1) \operatorname{diam}(\alpha_k)\rightarrow0 \,\,\mbox{as}\,\, k\rightarrow +\infty,\\[2mm] &&(2)\mu(\partial\alpha_k)= 0 \,\,\mbox{for every }\,\,\mu\in \mathcal{M}_{inv}(M, f).\end{eqnarray*} Here $\partial\alpha_k$ denotes the union of the boundaries of elements in the partition $\alpha_k$. Note that essential sequences of partitions may not exist (e.g., for the identity map on the unit interval). However, for any finite entropy system $(f, M)$ it follows from the work of Lindenstrauss and Weiss \cite{Lindenstrauss}\cite{Lin-Weiss} that the product $f\times R$ with $R$ an irrational rotation has essential sequences of partitions. Noting that the rotation doesn't contribute entropy for every invariant measure, we can always assume $(f,M)$ has an essential sequence. By an {\it entropy structure} of a finite topological entropy dynamical system $(f,M)$ we mean an increasing sequence $\mathcal{H}=(h_k)$ of functions defined on $\mathcal{M}_{inv}(M,f)$ which is uniformly equivalent to $(h_{\mu}(f,\alpha_k)\mid_{\mu\in \mathcal{M}_{inv}(M,f)})$. Combining with Katok's definition of entropy, we consider an increasing sequence of functions on $\mathcal{M}_{inv}(M,f)$ given by $(h_{\mu}^{Kat}(f,\epsilon_k\mid\delta)\mid_{\mu\in \mathcal{M}_{inv}(f)})$. \begin{Thm}\label{entropy structure} Both $(h_{\mu}^{Kat}(f,\theta_k\mid\delta)\mid_{\mu\in \mathcal{M}_{inv}(M,f)})$ and $(h^{New}(\mu,\theta_k)\mid_{\mu\in \mathcal{M}_{inv}(M,f)})$ are entropy structures hence they are uniformly equivalent. \end{Thm} \begin{proof}This theorem is a part of Theorem 7.0.1 in \cite{Downarowicz}.\end{proof} \begin{Rem}The entropy structure in fact reflects the uniform convergence of entropy. It is well known that there are various notions of entropy.
However, not all of them form an entropy structure; for example, the classical definition of entropy via partitions does not (see Theorem 8.0.1 in \cite{Downarowicz}).\end{Rem}
Let $\eta=\{\eta_n\}_{n=1}^{\infty}$ be a sequence decreasing to zero, and recall that $\mathcal{M}(\widetilde{\Lambda},\eta)$ denotes the subset of $\mathcal{M}_{inv}(M,f)$ associated with the hyperbolic rate $\eta$.
For $\delta, \varepsilon>0$ and any $\Upsilon\subset \mathcal{M}(M)$, define $$h_{\Upsilon,\operatorname{loc}}^{Kat}(f,\varepsilon\mid \delta)=\max_{\mu\in \Upsilon}\{h_{\mu}(f)-h_{\mu}^{Kat}(f,\varepsilon\mid \delta)\}.$$
\begin{Lem}\label{asymp entropy expansive} $\lim_{\theta_k\rightarrow 0}h_{\mathcal{M}(\widetilde{\Lambda},\eta),\operatorname{loc}}^{Kat}(f,\theta_k\mid \delta)=0$. \end{Lem}
\begin{proof} First we need a proposition contained in Page 226 of \cite{Newhouse}, which reads as $$\lim_{\varepsilon\rightarrow0}\sup_{\mu\in \mathcal{M}(\widetilde{\Lambda},\eta)}h_{\operatorname{loc}}^{New}(\mu,\varepsilon)=0.$$ By Theorem \ref{entropy structure}, $$ (h^{New}(\mu,\theta_k)\mid_{\mu\in \mathcal{M}_{inv}(M, f)})\leq(h_{\mu}^{Kat}(f,\theta_k\mid\delta)\mid_{\mu\in \mathcal{M}_{inv}(M, f)}) .$$ So, for any $k\in \mathbb{N}$, there exists $k'>k$ such that $$h_{\mu}^{Kat}(f,\theta_{k'}\mid\delta) \geq h^{New}(\mu,\theta_k)-\frac1k,$$ for all $\mu\in \mathcal{M}(\widetilde{\Lambda},\eta)$. It follows that \begin{eqnarray*} h_{\mu}(f)-h_{\mu}^{Kat}(f,\theta_{k'}\mid \delta)&\leq& h_{\mu}(f)-(h^{New}(\mu, \theta_k)-\frac1k)\\[2mm]&=&h_{\operatorname{loc}}^{New}(\mu,\theta_k)+\frac1k, \end{eqnarray*} for all $\mu\in \mathcal{M}(\widetilde{\Lambda},\eta)$. Taking supremum on $\mathcal{M}(\widetilde{\Lambda},\eta)$ and letting $k\rightarrow +\infty$, we conclude that $$\lim_{\theta_{k'}\rightarrow 0}h_{\mathcal{M}(\widetilde{\Lambda},\eta),\operatorname{loc}}^{Kat}(f,\theta_{k'}\mid \delta)=0.$$ \end{proof}
\begin{Rem}\label{lose control of local entropy} In \cite{Newhouse}, Lemma \ref{asymp entropy expansive} was used to prove upper semi-continuity of the metric entropy on $\mathcal{M}(\widetilde{\Lambda},\eta)$. However, upper semi-continuity fails in general, even when the underlying system is nonuniformly hyperbolic. For example, in \cite{Downarowicz-Newhouse}, T. Downarowicz and S. E. Newhouse constructed surface diffeomorphisms whose local entropy at any pre-assigned scale is bounded below by a positive constant. More precisely, they constructed a compact subset $E$ of $\mathcal{M}_{inv}( \Lambda, f )$ containing a periodic measure, and a positive real number $\rho_0$, such that for each $\mu\in E$ and each $k > 0$, $$\limsup_{\nu\in E, \nu\rightarrow \mu} \big(h_{\nu}( f )- h_k(\nu)\big) > \rho_0,$$ which implies that the symbolic extension entropy is infinite, that the metric entropy is not upper semi-continuous, and thus that the uniform separation property of \cite{PS} fails.
\end{Rem}
Now we begin to prove Theorem \ref{main theorem of set1}, which, by Proposition \ref{generical lemmas}, also completes the proof of Theorem \ref{main theorem of set}. Throughout this section, for simplicity, we adopt the symbols used in the proof of Theorem \ref{main theorem of measure}. Unless otherwise stated, the quantitative relations among these symbols retain the same meaning.
{\bf Proof of Theorem \ref{main theorem of set1}}$ $
For any nonempty closed connected set $K\subset \mathcal{M}(\widetilde{\Lambda},\eta)$, there exists a sequence of closed balls $U_n$ in $\mathcal{M}_{inv}(M,f)$ of radius $\zeta_n$ in the metric $D$ (compatible with the weak$^*$ topology) such that the following hold: \begin{eqnarray*} &(i)&U_n\cap U_{n+1}\cap K \neq \emptyset;\\[2mm] &(ii)& \cap_{N\geq 1}\cup_{n\geq N}U_n = K;\\[2mm] &(iii)& \lim_{n\rightarrow +\infty}\zeta_n=0.\end{eqnarray*} By (i), we take $\nu_k\in U_k\cap K$. Given $\gamma>0$, using Lemma \ref{asymp entropy expansive}, we can find an $\varepsilon>0$ such that $$h_{\mathcal{M}(\widetilde{\Lambda},\eta),\operatorname{loc}}^{Kat}(f,\varepsilon\mid \delta)<\gamma.$$ For each $\nu_k$, we can then choose a finite convex combination of ergodic probability measures with rational coefficients, $$\mu_k=\sum_{j=1}^{p_k}a_{k,j}\,m_{k,j},$$ satisfying the following properties: $$D(\nu_k,\mu_k)<\frac{1}{k},\,\,\,m_{k,j}(\Lambda)=1\,\,\,\mbox{
and}\,\,\, |h_{\nu_k}^{Kat}(f,\varepsilon\mid
\delta)-h_{\mu_k}^{Kat}(f,\varepsilon\mid \delta)|<\frac1k.$$ For each $k$, we can find $l_k$ such that $m_{k,j}(\Lambda_{l_k})>1-\delta$ for all $1\leq j\leq p_k$. For $c=\frac{\varepsilon}{8\epsilon_0}$, by Theorem \ref{specification} there is a sequence of numbers $(\delta_k)_{k=1}^\infty$. Let $\xi_k$ be a finite partition of $M$ with $\mbox{diam}\,\xi_k <\min\{\frac{b_k(1-e^{-\epsilon})}{4\sqrt{2}e^{(k+1)\epsilon}},\epsilon_{l_k},\delta_{l_k}\}$ and $\xi_k>\{\widetilde{\Lambda}_{l_k},M\setminus \widetilde{\Lambda}_{l_k}\}$.
For each $m_{k,j}$, following the proof of Theorem \ref{main theorem of measure1}, we can obtain an integer $n(k,j)$ and an $(n(k,j),\varepsilon)$-separated set $W_{n(k,j)}$ contained in an open subset $U(k,j)$ of some Lyapunov neighborhood with $\operatorname{diam}(U(k,j))<2\operatorname{diam}(\xi_k)$, satisfying $$\sharp\,W_{n(k,j)}\geq e^{n(k,j)(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid \delta)-\frac4k)}.$$ Then, likewise, for any $k_1,k_2,j_1,j_2$ one can find $y=y(m_{k_1,j_1},m_{k_2,j_2})\in U(k_1,j_1)$ such that, for some $s=s(m_{k_1,j_1},m_{k_2,j_2})\in\mathbb{N}$, $$f^{s}(y)\in U(k_2,j_2).$$ In the same manner, we consider the following pseudo-orbit \begin{eqnarray}
\label{quasi orbits}\quad \quad O&=&O(x(1,1;1,1),\cdots,x(1,1; 1, N_1C_{1,1}),\cdots,x(1,p_1; 1, 1),\cdots,x(1,p_1; 1, N_1C_{1,p_1});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(1,1;T_1,1),\cdots,x(1,1; T_1, N_1C_{1,1}),\cdots,x(1,p_1; T_1, 1),\cdots,x(1,p_1; T_1, N_1C_{1,p_1});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(k,1; 1, 1),\cdots,x(k,1; 1, N_kC_{k,1}),\cdots,x(k,p_k; 1, 1),\cdots,x(k,p_k; 1, N_kC_{k,p_k});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(k,1;T_k,1),\cdots,x(k,1; T_k, N_kC_{k,1}),\cdots,x(k,p_k; T_k, 1),\cdots,x(k,p_k; T_k, N_kC_{k,p_k});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&\cdots\,\,)\nonumber\end{eqnarray} with the precise type as (\ref{precise quasi orbits}), where $x(k,j; i, t)\in W_{n(k,j)}$. Then Theorem \ref{specification} applies to give rise to a shadowing point $z$ of $O$ such that $$d(f^{M_{k,i,j,t}+q}(z),f^q(x(k,j;i, t)))<c\epsilon_0e^{-\epsilon l_k}<\frac{\varepsilon}{4\epsilon_0}\epsilon_0e^{-\epsilon l_k}\leq \frac{\varepsilon}{4},$$ for $0\leq q\leq n(k,j)-1,$ $1\leq i \leq T_k$, $1\leq t\leq N_kC_{k,j}$, $1\leq j\leq p_k$. By the construction of $N_k$ and $Y_k$, it is verified that $$D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\nu_k)\leq\frac6k.$$ For sufficiently large $M_{k,i}\leq n\leq M_{k,i+1}$, by affine property, we have that \begin{eqnarray*} D(\mathcal{E}_{n}(z),\,\nu_k)&\leq& \frac{M_{k-2}}{n}D(\mathcal{E}_{M_{k-2}}(z),\nu_k)+\frac{Y_{k-1}}{n}\sum_{r=1}^{T_{k-1}}D(\mathcal{E}_{Y_{k-1}}(f^{M_{k-1,r-1}}(z)),\nu_k)\\[2mm] &&+\frac{s(m_{k-1,1},m_{k,1})}{n}D(\mathcal{E}_{s(m_{k-1,1},m_{k,1})}(f^{M_{k-1,T_{k-1}}}(z)),\nu_k)\\[2mm] &&+\frac{Y_k}{n}\sum_{r=1}^{i-1}D(\mathcal{E}_{Y_k}(f^{M_{k,r-1}}(z)),\nu_k)\\[2mm] &&+\frac{n-M_{k,i}}{n}D(\mathcal{E}_{n-M_{k,i}}(f^{M_{k,i}}(z)),\nu_k). \\[2mm] \end{eqnarray*} Noting that $$D(\mathcal{E}_{Y_{k-1}}(f^{M_{k-1,i-1}}(z)),\nu_k)\leq D(\mathcal{E}_{Y_{k-1}}(f^{M_{k-1,i-1}}(z)),\nu_{k-1})+D(\nu_{k-1},\nu_{k})$$ and using the fact that $D(\nu_k,\nu_{k-1})\leq2\zeta_k+2\zeta_{k-1}$ and inequalities (\ref{circle1}) and (\ref{circle2}), one can deduce that \begin{eqnarray*} D(\mathcal{E}_{n}(z),\,\nu_k)&\leq& \frac1k+(\frac{6}{k-1}+2\zeta_k+2\zeta_{k-1})+\frac1k+\frac6k+\frac1k. \end{eqnarray*}
Letting $n\rightarrow +\infty$, we get $V(z)\subset K$. On the other hand, since $$\cap_{N\geq 1}\cup_{n\geq N}U_n = K,$$ the empirical measures $\mathcal{E}_{n}(z)$ enter any neighborhood of each $\nu\in K$ infinitely often, which implies the converse inclusion $K\subset V(z)$. Consequently, $V(z)=K$.
Next we show the inequality concerning entropy. Fixing $k,j,i,t$, the shadowing points corresponding to distinct $x(k,j;i,t)$ are $(n(k,j),\frac{\varepsilon}{2})$-separated. Let \begin{eqnarray*} H_{ki}=\{&&(x(k,1;i,1),\cdots,x(k,1;i,N_kC_{k,1}),\cdots,x(k,p_k;i,1),\cdots,x(k,p_k;i, N_kC_{k,p_k}))\\[2mm]
&&\mid \,\,x(k,j;i,t)\in V_{n(k,j)}\cap A_{n(k,j)}(m_{k,j})\}. \end{eqnarray*}
Then \begin{eqnarray*} \sharp H_{ki}\geq e^{\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid \delta)-\frac{4}{k})}. \end{eqnarray*} So, \begin{eqnarray*} \frac{1}{Y_k}\log \,\sharp H_{ki}&\geq& \frac{Y_k-X_k}{Y_k}\sum_{j=1}^{p_k}a_{k,j}(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid\delta)-\frac{4}{k})\\[2mm] &\geq&(1-\frac{1}{k})\sum_{j=1}^{p_k}a_{k,j}(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid\delta)-\frac{4}{k})\\[2mm] &=&(1-\frac1k)^2h_{\mu_{k}}^{Kat}(f,\varepsilon\mid\delta)-\frac4k(1-\frac{1}{k})^2\\[2mm] &\geq&(1-\frac1k)^2(h_{\nu_k}^{Kat}(f,\varepsilon\mid\delta)-\frac1k)-\frac4k(1-\frac{1}{k})^2\\[2mm] &\geq&(1-\frac1k)^2(h_{\nu_k}(f)-\gamma-\frac1k)-\frac4k(1-\frac{1}{k})^2. \end{eqnarray*} Subsequently, by arguments analogous to those in Section 4, we obtain that $$h_{\operatorname{top}}(f,G_{K})\geq \inf\{h_{\mu}(f)\mid \mu\in K\}-\gamma.$$ The arbitrariness of $\gamma$ yields the desired inequality: $$h_{\operatorname{top}}(f,G_{K})\geq \inf\{h_{\mu}(f)\mid \mu\in K\}.$$
$\Box$
\section{On the Structure of Pesin set $\widetilde{\Lambda}$}
The construction of $\widetilde{\Lambda}$ involves many techniques that yield fruitful properties of the Pesin set, but at the same time make it difficult to check which measures are supported on $\widetilde{\Lambda}$. Sometimes $\mathcal{M}_{inv}(\widetilde{\Lambda}, f)$ contains only the measure $\omega$ itself, for instance when $\omega$ is atomic. In what follows, we will show that for several classes of diffeomorphisms derived from Anosov systems, $\mathcal{M}_{inv}(\widetilde{\Lambda},f)$ contains many members.
\subsection{Symbolic dynamics of Anosov diffeomorphisms}\quad Let $f_0$ be an Anosov diffeomorphism on a Riemannian manifold $M$. For $x\in M$, $\varepsilon_0>0$, we have the stable manifold $W^s_{\varepsilon_0}(x)$ and the unstable manifold $W^u_{\varepsilon_0}(x)$ defined by \begin{eqnarray*}&&W^s_{\varepsilon_0}(x)=\{y\in M\mid d(f_0^n(x),f_0^n(y))\leq \varepsilon_0,\quad \mbox{for all}\quad n\geq 0\},\\[2mm] &&W^u_{\varepsilon_0}(x)=\{y\in M\mid d(f_0^{-n}(x),f_0^{-n}(y))\leq \varepsilon_0,\quad \mbox{for all}\quad n\geq 0\}.\end{eqnarray*}
Fixing small $\varepsilon_0>0$, there exists $\delta_0>0$ such that $W^s_{\varepsilon_0}(x)\cap W^u_{\varepsilon_0}(y)$ contains a single point $[x,y]$ whenever $d(x,y)<\delta_0$. Furthermore, the function $$[\cdot, \cdot]: \{(x,y)\in M\times M\mid d(x,y)<\delta_0 \}\rightarrow M$$ is continuous. A rectangle $R$ is a subset of $M$ with small diameter such that $[x,y]\in R$ whenever $x,y\in R$. For $x\in R$ let $$W^s(x,R)=W^s_{\varepsilon_0}(x)\cap R\quad \mbox{and}\quad W^u(x,R)=W^u_{\varepsilon_0}(x)\cap R.$$ For the Anosov diffeomorphism $f_0$ one can obtain the following structure, known as a Markov partition $\mathcal{R}=\{R_1,R_2,\cdots,R_l\}$ of $M$, with properties: \begin{enumerate} \item[(1)] $\operatorname{int} R_i\cap \operatorname{int} R_j=\emptyset$ for $i\neq j$;
\item[(2)] $f_0 W^u(x,R_i)\supset W^u(f_0x,R_j)$ and \\ $f_0 W^s(x,R_i)\subset W^s(f_0x,R_j)$ when $x\in \operatorname{int} R_i$, $f_0x\in \operatorname{int} R_j$.
\end{enumerate} Using the Markov Partition $\mathcal{R}$ we can define the transition matrix $B=B(\mathcal{R})$ by
$$B_{i,j}=\begin{cases}1\quad \mbox{if}\quad \operatorname{int} R_i\cap f_0^{-1} ( \operatorname{int} R_j)\neq \emptyset;\\ 0\quad \mbox{otherwise}. \end{cases}$$ The subshift $(\Sigma_B,\sigma)$ associated with $B$ is given by $$\Sigma_B=\{\underline{q}\in \Sigma_l\mid \,B_{q_iq_{i+1}}=1\quad \forall i\in \mathbb{Z}\}.$$ For each $\underline{q}\in \Sigma_B$, by the hyperbolic property, the set $\cap_{i\in \mathbb{Z}}f_0^{-i}R_{q_i}$ consists of a single point, denoted by $\pi_0( \underline{q} )$. We denote $$\Sigma_B(i)=\{\underline{q}\in \Sigma_B\mid q_0=i\}.$$ The following properties hold for the map $\pi_0$ (see Sinai \cite{Sinai} and Bowen \cite{Bowen1, Bowen2}). \begin{Prop}\label{property of Markov}$ $\\ \begin{enumerate} \item[(1)] The map $\pi_0: \Sigma_B\rightarrow M$ is a continuous surjection satisfying $\pi_0\circ \sigma=f_0\circ \pi_0;$\\ \item[(2)] $\pi_0(\Sigma_B(i))=R_i$,\quad $1\leq i\leq l$;\\ \item[(3)] $h_{\operatorname{top}}(\sigma, \Sigma_B)=h_{\operatorname{top}}(f_0,M)$. \end{enumerate}
\end{Prop} Since $B$ is a $(0,1)$-matrix, by the Perron--Frobenius theorem the maximal eigenvalue $\lambda$ of $B$ is positive and simple. The eigenvalue $\lambda$ has a row eigenvector $u=(u_1,\cdots,u_l)$, $u_i>0$, and a column eigenvector $v=(v_1,\cdots, v_l)^{T}$, $v_i>0$. We assume $\sum_{i=1}^lu_iv_i=1$ and denote $(p_1,\cdots,p_l)=(u_1v_1,\cdots,u_lv_l)$. Define a new matrix $$\mathcal{P}=(p_{ij})_{l\times l},\quad\quad\mbox{where}\quad p_{ij}=\frac{B_{ij}\,v_j}{\lambda\,v_i}.$$ Then $\mathcal{P}$ defines a Markov measure $\mu_0$ on $\Sigma_B$ satisfying $$\mu_0([a_0a_1\cdots a_i])=p_{a_0}p_{a_0a_1}\cdots p_{a_{i-1}a_i}. $$ The measure $\mu_0$ is $\sigma$-invariant, and Gurevich \cite{Gu1,Gu2} proved that $\mu_0$ is the unique maximal measure of $(\Sigma_B,\sigma)$, that is, $$h_{\operatorname{top}}(\sigma, \Sigma_B)=h_{\mu_0}(\sigma,\Sigma_B)=\log\lambda.$$ In addition, Bowen \cite{Bowen1} proved that $\pi_{0*}(\mu_0)$ is the unique maximal measure of $f_0$ and that $\pi_{0*}(\mu_0)(\partial \mathcal{R})=0$, where $\partial \mathcal{R}$ consists of all boundaries of the $R_i$, $1\leq i\leq l$.
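For the reader's convenience we record a routine verification (not taken from the references cited above) that $\mathcal{P}$ is a stochastic matrix and that $(p_1,\cdots,p_l)$ is a stationary probability vector for it, so that $\mu_0$ is indeed a well-defined $\sigma$-invariant Markov measure: $$\sum_{j=1}^{l}p_{ij}=\sum_{j=1}^{l}\frac{B_{ij}v_j}{\lambda v_i}=\frac{(Bv)_i}{\lambda v_i}=1, \qquad \sum_{i=1}^{l}p_ip_{ij}=\sum_{i=1}^{l}u_iv_i\,\frac{B_{ij}v_j}{\lambda v_i}=\frac{(uB)_j\,v_j}{\lambda}=u_jv_j=p_j.$$ For instance, for the golden mean shift with $B=\begin{pmatrix}1&1\\1&0\end{pmatrix}$ one may take $u=(\lambda,1)$ and $v=(\lambda,1)^{T}$ with $\lambda=\frac{1+\sqrt5}{2}$, which after normalization gives $p_1=\frac{5+\sqrt5}{10}$ and $p_2=\frac{5-\sqrt5}{10}$.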
Denote $\mu_1= \pi_{0*}(\mu_0)$. Then $\mu_1(\pi_0\Sigma_B(i))=p_i$ for $1\leq i\leq l$. For $0<\gamma<1$, $N\in \mathbb{N}$ define
\begin{eqnarray*}\Gamma_N(i,\gamma)=\{x\in M\mid&&\sharp \{n\leq j\leq n+k-1\mid f_0^j(x)\in R_i\}\leq N+k(p_i+\gamma)+|n|\gamma,\\[2mm]&&\sharp \{n\leq j\leq n+k-1\mid f_0^{-j}(x)\in R_i\}\leq N+k(p_i+\gamma)+|n|\gamma\\[2mm]&&\quad \forall \,k\geq 1,\,\,\,\forall\, n\in \mathbb{Z}\}.\end{eqnarray*} Then $f_0^{\pm}(\Gamma_N(i,\gamma))\subset \Gamma_{N+1}(i,\gamma)$. Let $\Gamma(i,\gamma)=\cup_{N\geq 1}\Gamma_N(i,\gamma)$. \begin{Lem}\label{full measure} For any $m\in \mathcal{M}_{inv}(M,f_0)$, if \,$m(R_i)<p_i+\gamma/2$ then $m(\Gamma(i,\gamma))=1$. \end{Lem} \begin{proof} Since $m(R_i)<p_i+\gamma/2$, for $m$-almost all $x$ one can find $N(x)>0$ such that \begin{eqnarray*}&&n(m(R_i)-\frac\gamma2)\leq\sharp \{0\leq j\leq n-1\mid f_0^j(x)\in R_i\}\leq n(m(R_i)+\frac\gamma2),\quad \forall \,n\geq N(x);\\[2mm] &&n(m(R_i)-\frac\gamma2)\leq\sharp \{0\leq j\leq n-1\mid f_0^{-j}(x)\in R_i\}\leq n(m(R_i)+\frac\gamma2),\quad \forall \,n\geq N(x). \end{eqnarray*} Take $N_0(x)$ to be the smallest number such that for every $n\geq 1$, \begin{eqnarray*}&&-N_0(x)+n(m(R_i)-\frac\gamma2)\leq\sharp \{0\leq j\leq n-1\mid f_0^j(x)\in R_i\}\leq N_0(x)+n(m(R_i)+\frac\gamma2);\\[2mm] &&-N_0(x)+n(m(R_i)-\frac\gamma2)\leq\sharp \{0\leq j\leq n-1\mid f_0^{-j}(x)\in R_i\}\leq N_0(x)+n(m(R_i)+\frac\gamma2). \end{eqnarray*} Then for any $k\geq 1$, \begin{eqnarray*} &&\sharp \{n\leq j\leq n+k-1\mid f_0^j(x)\in R_i\}\\[2mm] &=& \sharp \{0\leq j\leq n+k-1\mid f_0^j(x)\in R_i\}-\sharp \{0\leq j\leq n-1\mid f_0^j(x)\in R_i\}\\[2mm] &\leq& N_0(x)+(n+k)(m(R_i)+\frac\gamma2)-(-N_0(x)+n(m(R_i)-\frac\gamma2))\\[2mm] &=& 2N_0(x)+k(m(R_i)+\frac\gamma2)+n\gamma. \end{eqnarray*} In this manner we can also show $$\sharp \{n\leq j\leq n+k-1\mid f_0^{-j}(x)\in R_i\}\leq 2N_0(x)+k(m(R_i)+\frac\gamma2)+n\gamma.$$ Thus, $x\in \Gamma_{2N_0(x)}(i,\gamma)$. \end{proof}
By Lemma \ref{full measure}, $\mu_1(\Gamma(i,\gamma))=1$. We further define $$\widetilde{\Gamma}_N(i,\gamma)=\operatorname{supp}(\mu_1\mid \Gamma_N(i,\gamma))\quad \mbox{ and}\quad \widetilde{\Gamma}(i,\gamma)=\cup_{N\geq 1}\widetilde{\Gamma}_N(i,\gamma).$$ It holds that $\widetilde{\Gamma}(i,\gamma)$ is $f$-invariant and $\mu_1(\widetilde{\Gamma}(i,\gamma))=1$.
\begin{Prop}\label{frequence} There is a neighborhood $U$ of $\mu_1$ in $\mathcal{M}_{inv}(M,f_0)$ such that for any ergodic measure $m\in U$ we have $m \in \mathcal{M}_{inv}(\widetilde{\Gamma}(i,\gamma),f_0)$.
\end{Prop} \begin{proof} Observing that $\mu_1(\partial R_i)=0$, for $\gamma>0$ there exists a neighborhood $U$ of $\mu_1$ in $\mathcal{M}_{inv}(M,f_0)$ such that for any $m\in U$ one has $$m( R_i)<p_i+\frac\gamma2.$$ {\bf Claim:} \,\,\,\,We can find an ergodic measure $m_0\in \mathcal{M}_{inv}(\Sigma_B, \sigma)$ satisfying $\pi_{0*}m_0=m$.
\noindent{\it Proof of Claim.} Denote the basin of $m$ by \begin{eqnarray*}Q_{m}(M,f_0)=\Big{\{}x\in M\mid \lim_{n\to+\infty}\frac 1n \sum_{j=0}^{n-1}\varphi(f_0^jx) &=&\lim_{n\to-\infty}\frac 1n \sum_{j=0}^{n-1}\varphi(f_0^jx)\\[2mm]&=&\int_M\varphi dm,\quad\forall \varphi\in C^0(M)\Big{\}}.\end{eqnarray*} Take and fix a point $x\in Q_{m}(M,\, f_0)$ and choose $q\in \Sigma_B$ with $\pi_0(q)=x$. Define a sequence of measures $\nu_n$ on $\Sigma_B$ by $$\int\,\psi d \nu_n:=\frac 1n \Sigma_{i=0}^{n-1}\psi(\sigma^i(q)),\,\,\,\, \,\,\forall \psi\in C^0(\Sigma_B).$$ By taking a subsequence when necessary we can assume that $\nu_n\to \nu_0.$ It is standard to verify that $\nu_0$ is a $\sigma$-invariant measure and that $\nu_0$ covers $m$, i.e., $\pi_{0*}(\nu_0)=m.$ Set $$Q(\sigma):= \cup_{\nu \in \mathcal{M}_{erg}(\Sigma_B, \sigma)} Q_\nu(\Sigma_B, \sigma).$$ Then $ Q(\sigma)$ is a $\sigma$-invariant subset of total measure in $\Sigma_B.$ We have \begin{align*} &m( Q_{m}(M,f_0) \cap \pi_{0} Q(\sigma))\\ \ge & \nu_0(\pi_{0}^{-1}Q_{m}(M,f_0)\cap Q(\sigma))\\ =&1. \end{align*} Then the set \begin{align*} \mathcal{A}_0:=\Big{\{}\nu\in \mathcal{M}_{erg}(\Sigma_B, \sigma)\,\mid\,\,&\exists \,\, q\in Q(\sigma), \pi_0(q)\in Q_{m}(M, f_0), s.\,t. \\ &\lim_{n\to+\infty}\frac 1n\Sigma_{i=0}^{n-1} \psi(\sigma^i(q)) =\lim_{n\to-\infty}\frac 1n\Sigma_{i=0}^{n-1} \psi(\sigma^i(q))\\ &=\int _{\Sigma_B} \psi\,d\nu\,\,\,\,\,\,\quad\quad\quad \forall \psi\in C^0(\Sigma_B)\,\Big{\}} \end{align*} is non-empty. It is clear that every $\nu\in \mathcal{A}_0$ covers $m$, i.e., $\pi_{0*}(\nu)=m.$
$\Box$
We continue the proof of Proposition \ref{frequence}. Since $\pi_{0*}(m_0)=m$ so $m_0(\pi_{0}^{-1}(R_i))=m(R_i)< p_i+\gamma/2$ which together with $\Sigma_B(i)\subset \pi_0^{-1}(R_i)$ implies that $$m_0(\Sigma_B(i))< p_i+\frac\gamma2.$$ In particular, $\mu_0(\Sigma_B(i))< p_i+\frac\gamma2.$ For $0<\gamma<1$, $N\in \mathbb{N}$ define \begin{eqnarray*}\Upsilon_N(i,\gamma)=\{\underline{q}\in \Sigma_B\mid &&\sharp
\{n\leq j\leq n+k-1\mid q_j=i\}\leq N+k(p_i+\gamma)+|n|\gamma,\\[2mm] && \sharp
\{n\leq j\leq n+k-1\mid q_{-j}=i\}\leq N+k(p_i+\gamma)+|n|\gamma\\[2mm] &&\quad \forall\,k\geq1\,\,\,\forall \,n\in \mathbb{Z}\}.\end{eqnarray*} Let $\Upsilon(i,\gamma)=\cup_{N\geq 1}\Upsilon_N(i,\gamma)$. Then $\mu_0(\Upsilon(i,\gamma))=1$. Further define $$\widetilde{\Upsilon}_N(i,\gamma)=\operatorname{supp}(\mu_0\mid \Upsilon_N(i,\gamma))\quad \mbox{ and}\quad \widetilde{\Upsilon}(i,\gamma)=\cup_{N\geq 1}\widetilde{\Upsilon}_N(i,\gamma).$$ It also holds that $\widetilde{\Upsilon}(i,\gamma)$ is $\sigma$-invariant and $\mu_0(\widetilde{\Upsilon}(i,\gamma))=1$.
\begin{Lem}\label{measures on support}Given $m_0\in \mathcal{M}_{erg}(\Sigma_B,\sigma)$, if $m_0(\Sigma_B(i))<p_i+\gamma/2$ then $m_0\in \mathcal{M}_{inv}(\widetilde{\Upsilon}(i,\gamma),\sigma)$.
\end{Lem}
\noindent{\it Proof of Lemma.}\,\,\,\, Since $m_0(\Sigma_B(i))<p_i+\gamma/2$ we obtain $m_0(\cup_{N\in \mathbb{N}}\,\Upsilon_{N}(i,\gamma))=1$. We can take $N_0$ so large that $m_0(\Upsilon_{N_0}(i,\gamma))>0$ and $\mu_0(\widetilde{\Upsilon}_{N_0}(i,\gamma))>0$. Define $$\Upsilon(i,j)=\{\underline{q}\in \widetilde{\Upsilon}_{N_0}(i,\gamma)\mid q_0=j\}.$$ Then there exists $j\in [ 1,l ]$ such that $\mu_0(\Upsilon(i,j))>0$.
Noting that $(\Sigma_B,\sigma)$ is mixing, there is $L_0>0$ such that for each pair $j_1,j_2$ one can choose an admissible word $L(j_1,j_2)=(q_1 \cdots q_L)$ satisfying $q_1=j_1$, $q_L=j_2$ and $2\leq \sharp L(j_1,j_2)\leq L_0$.
Arbitrarily taking $\underline{q}\in \Upsilon_{N_0}(i,\gamma)$,
$\underline{z}\in \Upsilon(i,j)$, $n\in \mathbb{N}$, define $$\underline{y}(\underline{q},\underline{z},n) =(\cdots z_{-3}z_{-2}z_{-1}L(z_0,q_{-n})q_{-n+1}\cdots q_{-1}; \stackrel{0}{q_0} q_1\cdots q_{n-1}L(q_n,z_0)z_1z_2z_3 \cdots ).$$ Denote $N_1=2L_0+2N_0+1$. For any $\theta>0$ we can take large $n$ satisfying $n>N_1$ and $d(\underline{y}(\underline{q},\underline{z},n),\underline{q})<\theta$. Define a new subset of $\Sigma_B$: $$Y(\underline{q},n)=\{\underline{y}(\underline{q},\underline{z},n)\in \Sigma_B \mid \underline{z}\in \Upsilon(i,j)\}.$$ Consider the forward and backward constituents of $\Upsilon(i,j)$ defined as follows: \begin{eqnarray*} \Upsilon^{+}(i,j)&=&\{\underline{w}\in \Sigma_B\mid\,w_k=z_k,\,\,k\geq 0,\quad \mbox{for some }\,\,\underline{z}\in \Upsilon(i,j)\}\\[2mm] \Upsilon^{-}(i,j)&=&\{\underline{w}\in \Sigma_B\mid\,w_k=z_k,\,\,k\leq 0,\quad \mbox{for some }\,\,\underline{z}\in \Upsilon(i,j)\}. \end{eqnarray*} Clearly $\Upsilon^{+}(i,j)\supset \Upsilon(i,j)$, $\Upsilon^{-}(i,j)\supset \Upsilon(i,j)$. Then by the Markov property of $\mu_0$ it holds that $$\mu_0(Y(\underline{q},n))\geq \mu_0(\Upsilon^{-}(i,j))p_{jq_{-n}}p_{q_{-n}q_{-n+1}}\cdots p_{q_{n-1}q_{n}}p_{q_{n}j}\,\mu_0(\Upsilon^{+}(i,j))>0.$$ Moreover, for any $\underline{y}\in Y(\underline{q},n)$ and $k\geq1,\,\,s\in \mathbb{Z}$ we have
Case 1: $-n-\sharp L\leq s\leq n+\sharp L$, $s+k-1\leq n+\sharp L$ it follows that \begin{eqnarray*}\,\sharp \{s\leq t\leq s+k-1\mid y_t=i\}&\leq&\,2L_0+\sharp \{s\leq t\leq s+k-1\mid q_t=i\} \\[2mm]&\leq&
2L_0+N_0+k(p_i+\gamma)+|s|\gamma.\end{eqnarray*}
Case 2: $-n-L\leq s\leq n+L$, $s+k-1> n+L$ it follows that
\begin{eqnarray*}\,\sharp \{s\leq t\leq s+k-1\mid y_t=i\}&\leq&\,L_0+N_0+(n+L-s)(p_i+\gamma)+|s|\gamma+N_0+\\[2mm] &&+(s+k-1-n-L)(p_i+\gamma) \\[2mm]&\leq&
L_0+2N_0+k(p_i+\gamma)+|s|\gamma.\end{eqnarray*}
Case 3: $s>n+L$ it follows that
\begin{eqnarray*}\,\sharp \{s\leq t\leq s+k-1\mid y_t=i\}&\leq & N_0+k(p_i+\gamma)+|s|\gamma.\end{eqnarray*}
Case 4: $s<-n-L$ it follows that
\begin{eqnarray*}\,\sharp \{s\leq t\leq s+k-1\mid y_t=i\}&\leq&\,2L_0+2N_0+k(p_i+\gamma)+|s|\gamma.\end{eqnarray*} The situation of $\sharp \{s\leq t\leq s+k-1\mid y_{-t}=i\}$ is similar. Altogether, since $N_1=2L_0+2N_0+1$, $Y(\underline{q},n)\subset \Upsilon_{N_1}(i,\gamma) $. The arbitrariness of $\theta$ gives rise to that $$\underline{q}\in \operatorname{supp}(\mu_0\mid \Upsilon_{N_1}(i,\gamma)).$$ That is, $\Upsilon_{N_0}(i,\gamma)\subset \widetilde{\Upsilon}_{N_1}(i,\gamma) $. Since $m_0(\Upsilon_{N_0}(i,\gamma))>0$ so $m_0(\widetilde{\Upsilon}_{N_1}(i,\gamma))>0$ which by the ergodicity of $m_0$ implies $m_0(\widetilde{\Upsilon}(i,\gamma))=1$.
$\Box$
Noting that $\pi_0(\widetilde{\Upsilon}_{N}(i,\gamma))\subset \widetilde{\Gamma}_{N}(i,\gamma)$, by Lemma \ref{measures on support} we obtain $$m(\widetilde{\Gamma}(i,\gamma))=m_0(\pi_0^{-1}(\widetilde{\Gamma}(i,\gamma)))\geq m_0(\widetilde{\Upsilon}(i,\gamma))=1, $$ which completes the proof of Proposition \ref{frequence}.
\end{proof}
\subsection{Nonuniformly hyperbolic systems} We shall describe the structure of $\widetilde{\Lambda}$ for an example due to Katok \cite{Katok} (see also \cite{Ba-Pesin1, Barr-Pesin}) of a diffeomorphism on the 2-torus $\mathbb{T}^2$ with nonzero Lyapunov exponents which is not an Anosov map. Let $f_0$ be a hyperbolic linear automorphism given by the matrix
$$A=\begin{pmatrix} 2&1\\1&1 \end{pmatrix}. $$ Let $\mathcal{R}=\{R_1,R_2,\cdots,R_l\}$ be the Markov partition of $f_0$ and $B=B(\mathcal{R})$ be the associated transition matrix. The map $f_0$ has a unique maximal measure $\mu_1$. Without loss of generality, passing to an iterate of $f_0$ if necessary, we may assume there is a fixed point $O\in \operatorname{int} R_1$. Consider the disk $D_r$ centered at $O$ of radius $r$. Let $(s_1, s_2)$ be the coordinates in $D_r$ obtained from the eigendirections of $A$. The map $A$ is the time-1 map of the local flow in $D_r$ generated by the system of ordinary differential equations $$\frac{ds_1}{dt} = s_1 \log\lambda ,\quad \frac{ds_2}{dt} = -s_2 \log\lambda.$$ We obtain the desired map by slowing down $A$ near the origin.
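For concreteness (an elementary computation, added here for orientation), the characteristic polynomial of $A$ is $t^2-3t+1$, so its eigenvalues are $$\lambda=\frac{3+\sqrt5}{2}\approx 2.618>1>\lambda^{-1}=\frac{3-\sqrt5}{2}\approx 0.382>0,$$ which confirms that $f_0$ is a hyperbolic automorphism, and $\log\lambda$ is exactly the expansion rate appearing in the differential equations above.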
Fix small $r_1 > r_0$ and consider the time-1 map $g$ generated by the following system of ordinary differential equations in $D_{r_1}$:
$$ \frac{ds_1}{dt} = s_1\psi(s_1^2 + s_2^2) \log\lambda ,\quad \frac{ds_2}{dt} =-s_2\psi(s_1^2 + s_2^2) \log\lambda $$ where $\psi$ is a real-valued function on $[0, 1]$ satisfying: \begin{enumerate} \item[(1)] $\psi$ is a $C^{\infty}$ function except at the point $u=0$; \\ \item[(2)] $ \psi(0) = 0$ and $\psi(u) = 1$ for $u\geq r_0$ where $0 < r_0 < 1;$\\ \item[(3)] $ \psi(u) > 0$ for every $0<u<r_0$;\\ \item[(4)] $\int_0^1\frac{du}{\psi(u)}<\infty$. \end{enumerate} The map $f$, given as $f(x) = g(x)$ if $x\in D_{r_1}$ and $f(x) = A(x)$ otherwise, defines a homeomorphism of the torus, which is a $C^{\infty}$ diffeomorphism everywhere except at the origin $O$. To ensure the differentiability of the map $f$, the function $\psi$ must satisfy some extra conditions; namely, near $O$ the integral $\int_0^1du/\psi$ must converge ``very slowly''. We refer to \cite{Katok} for the smoothness. Here $f$ is contained in the $C^0$ closure of Anosov diffeomorphisms and, moreover, there is a homeomorphism $\pi: \mathbb{T}^2 \rightarrow \mathbb{T}^2$ such that $\pi\circ f_0=f\circ \pi$ and $\pi(O)=O$. By the construction, there is a continuous splitting of the tangent bundle $T\mathbb{T}^2=E^1\oplus E^2$ such that
for any neighborhood $V$ of $O$, there exists $\lambda_V>1$ such that \begin{enumerate}
\item[(1)]$\|Df_x \mid_{ E^1(x)}\|\geq \lambda_{V}$, \,\,\, $\|Df_x \mid_{ E^2(x)}\|\leq \lambda_{V}^{-1}$,\,\,\, $x\in \mathbb{T}^2\setminus V$ ;\\
\item[(2)] $\|Df_x \mid_{ E^1(x)}\|\geq 1$, \,\,\, $\|Df_x \mid_{ E^2(x)}\|\leq 1$,\,\,\, $x\in V$. \end{enumerate} Let $H_i=\pi(R_i)$, $\nu_0=\pi_{*}\mu_1$ and $p_i=\nu_0(H_i)$. Then $H_i$ is a closed subset of $\mathbb{T}^2$ with nonempty interior. Let \begin{eqnarray*}p_0&=&\frac12\min\{1-p_i\,\mid\,\, 1\leq i\leq l\},\\[2mm] \beta&=&(1-p_1-p_0-\gamma)\log\lambda_{V}. \end{eqnarray*}
\begin{Thm} \label{Katok}There exists a neighborhood $U$ of $\nu_0$ in $\mathcal{M}_{inv}(\mathbb{T}^2,f)$ such that for any ergodic $\nu\in U$ it holds that $\nu\in \mathcal{M}_{inv}(\widetilde{\Lambda}(\beta,\beta, \epsilon),f)$ for any $0< \epsilon\ll \beta$.
\end{Thm}
\begin{proof}Take a small neighborhood $V\subset H_1$ of $O$. Denote
\begin{eqnarray*}\Phi_N(i,\gamma)=\{x\in \mathbb{T}^2\mid&&\sharp \{n\leq j\leq n+k-1\mid f^j(x)\in H_i\}\leq N+k(p_i+\gamma)+|n|\gamma,\\[2mm]&&\sharp \{n\leq j\leq n+k-1\mid f^{-j}(x)\in H_i\}\leq N+k(p_i+\gamma)+|n|\gamma\\[2mm]&&\quad \forall \,k\geq 1,\,\,\,\forall\, n\in \mathbb{Z}\}.\end{eqnarray*} Define \begin{eqnarray*}\widetilde{\Phi}_N(1,\gamma)&=&\operatorname{supp}(\nu_0\mid \Phi_N(1,\gamma)). \end{eqnarray*} Then for some large $N$ we have $\nu_0(\widetilde{\Phi}_N(1,\gamma))>0$. Noting that $\mu_1(\partial R_1)=0$, by Proposition \ref{frequence} and the conjugacy $\pi$ there exists a neighborhood $U$ of $\nu_0$ in $\mathcal{M}_{inv}(\mathbb{T}^2,f)$ such that for any ergodic $\nu\in U$, $$\nu(\widetilde{\Phi}_N(1,\gamma))>0.$$
For any $x\in \Phi_N(1,\gamma)$ and $k\geq 1$, $n\in \mathbb{Z}$ we have
Case 1: $k(p_1+\gamma+p_0)\leq N+k(p_1+\gamma)+|n|\gamma$, then
$$k\leq \frac{N+|n|\gamma}{p_0}.$$ So, \begin{eqnarray*}
\|Df^{-k} \mid_{ E^1(f^{n}x)}\|&\leq& e^{-k\beta}\exp(\frac{\beta}{p_0}(N+|n|\gamma)),\\[2mm]
\|Df^k_x
\mid_{ E^2(f^nx)}\|&\leq&
e^{-k\beta}\exp(\frac{\beta}{p_0}(N+|n|\gamma)). \end{eqnarray*}
Case 2: $k(p_1+\gamma+p_0)> N+k(p_1+\gamma)+|n|\gamma$, then \begin{eqnarray*}
\|Df^{-k} \mid_{ E^1(f^{n}x)}\|&\leq& \lambda_V^{-(1-p_1-p_0-\gamma)k}=e^{-\beta k},\\[2mm] \|Df^k_x
\mid_{ E^2(f^nx)}\|&\leq& \lambda_V^{-(1-p_1-p_0-\gamma)k}=e^{-\beta k}. \end{eqnarray*} Let $N_2=[\frac{\beta N}{\gamma p_0}]+1$. Then $$\Phi_N(1,\gamma)\subset \Lambda_{N_2}(\beta,\beta, \gamma) \quad \mbox{and}\quad \widetilde{\Phi}_N(1,\gamma)\subset \widetilde{\Lambda}_{N_2}(\beta,\beta, \gamma).$$
Therefore, $$\nu(\widetilde{\Lambda}_{N_2}(\beta,\beta, \gamma))>0,$$ which, by the ergodicity of $\nu$ and the invariance of $\widetilde{\Lambda}(\beta,\beta, \gamma)$, yields $\nu(\widetilde{\Lambda}(\beta,\beta, \gamma))=1$. This completes the proof of Theorem \ref{Katok}.
\end{proof}
\subsection{Robustly transitive partially hyperbolic systems} In \cite{Mane} R.~Ma{\~{n}}{\'{e}} constructed a class of robustly transitive diffeomorphisms which are not hyperbolic. Firstly we recall the description of Ma{\~{n}}{\'{e}}'s example. Let $\mathbb{T}^n$, $n\geq 3$, be the $n$-dimensional torus and $f_0 : \mathbb{T}^n \rightarrow \mathbb{T}^n$ be a (linear) Anosov diffeomorphism. Assume that the tangent bundle of $\mathbb{T}^n$ admits the $Df_0$-invariant splitting $T\mathbb{T}^n = E^{ss} \oplus E^u\oplus E^{uu}$, with $\dim E^u = 1$ and $$\lambda_s :=
\|Df_0\mid_{E^{ss}}\|,\quad \lambda_u:= \|Df_0\mid_{ E^u}\|,\quad
\lambda_{uu}:= \|Df_0\mid_{E^{uu}}\|$$ satisfying the relation $$\lambda_s < 1 < \lambda_u < \lambda_{uu}.$$ The following lemma is proved in \cite{Sambarino-Vasquez}. \begin{Lem}\label{shadowing onto diffeo} Let $f_0 : \mathbb{T}^n\rightarrow \mathbb{T}^n$ be a linear Anosov map. Then there exists $C> 0$ such that for any small $r$ and any $f : \mathbb{T}^n\rightarrow \mathbb{T}^n$ with $\operatorname{dist}_{C^0}(f, f_0) < r$ there exists $\pi : \mathbb{T}^n\rightarrow \mathbb{T}^n$ continuous and onto, $\operatorname{dist}_{C^0}(\pi, \operatorname{id}) < Cr$, and $$f_0\circ \pi = \pi\circ f.$$ \end{Lem} Let $\mathcal{R}=\{R_1,R_2,\cdots,R_l\}$ be the Markov partition of $f_0$ and $B=B(\mathcal{R})$ be the associated transition matrix. Let $\mu_1$ be the maximal measure of $(\mathbb{T}^n,f_0)$ and $p_i=\mu_1(R_i)$ for $1\leq i\leq l$. Suppose there is a fixed point $O\in \operatorname{int} R_1$. Take small $r$ such that the ball $B(O,Cr)\subset R_1$ and $d(B(O,Cr), \partial R_1)>Cr$. Then deform the Anosov diffeomorphism $f_0$ inside $B(O, r)$ passing through a flip bifurcation along the central unstable foliation $\mathcal{F}^u(O)$; we then obtain three fixed points, two of them with stability index equal to $\dim E^{ss}$ and the other one with stability index equal to $\dim E^{ss} + 1$. Moreover take positive numbers $\delta, \gamma \ll \min\{-\log\lambda_s, \log\lambda_u\}$. Let $f$ satisfy the following $C^1$ open conditions:
\begin{enumerate}\item[(1)] $ \|Df\mid_{E^{ss}}\|< e^{\delta}\lambda_s,\quad
\|Df\mid_{E^{uu}}\|>e^{-\delta}\lambda_{uu}$;\\
\item[(2)] $ e^{-\delta}\lambda_u<\|Df\mid_{ E^u(x)}\|< e^{\delta}\lambda_u$ ,\quad for $x\in \mathbb{T}^n\setminus B(O,r)$;\\
\item[(3)] $ e^{-\delta}<\|Df\mid_{ E^u(x)}\|< e^{\delta}\lambda_u$ ,\quad for $x\in B(O,r)$. \end{enumerate}
As shown in \cite{Sambarino-Vasquez}, the resulting $f$ has a unique maximal measure $\nu_0$, and $\pi_{*}\nu_0=\mu_1$. Let $H_i=\pi^{-1}(R_i)$, so that $\nu_0(H_i)=\mu_1(R_i)=p_i$, and let \begin{eqnarray*}p_0&=&\frac12\min\{1-p_i\,\mid\,\, 1\leq i\leq l\},\\[2mm] \beta&=&(1-p_1-p_0-\gamma)\min\{-\log\lambda_{s}-\delta,\,\,\log\lambda_{u}-\delta\}. \end{eqnarray*} We can see that $E^{uu}$ is uniformly contracted under $Df^{-1}$ at an exponential rate of at least $\beta$. \begin{Thm} \label{Mane}There exist $0<\epsilon\ll1<\beta$ and a neighborhood $U$ of $\nu_0$ in $\mathcal{M}_{inv}(\mathbb{T}^n,f)$ such that for any ergodic $\nu\in U$ it holds that $\nu\in \mathcal{M}_{inv}(\widetilde{\Lambda}(\beta,\beta, \epsilon), f)$. \end{Thm} \begin{proof} By Proposition \ref{frequence} we can take a neighborhood $U_1$ of $\mu_1$ in $\mathcal{M}_{inv}(\mathbb{T}^n,f_0)$ such that every ergodic $\mu\in U_1$ is supported on $ \widetilde{\Gamma}(i,\gamma)$, where $\widetilde{\Gamma}(i,\gamma)$ is given by Proposition \ref{frequence}. Since
$\pi$ is continuous, there is a neighborhood $U$ of $\nu_0$ in $\mathcal{M}_{inv}(\mathbb{T}^n,f)$ such
that $\pi_{*}U\subset U_1$. For $N\in \mathbb{N}, \gamma>0$, define \begin{eqnarray*}
T_N(i,\gamma)=\{x\in \mathbb{T}^n\mid&&\sharp \{n\leq j\leq n+k-1\mid f^j(x)\in B(O,r)\}\leq N+k(p_i+\gamma)+|n|\gamma,\\[2mm]&&\sharp \{n\leq j\leq n+k-1\mid f^{-j}(x)\in B(O,r)\}\leq N+k(p_i+\gamma)+|n|\gamma\\[2mm]&&\quad \forall \,k\geq 1,\,\,\,\forall\, n\in \mathbb{Z}\}. \end{eqnarray*} For large $N$ we have $\nu_0(T_N(i,\gamma))>0$; let $$\widetilde{T}_N(i,\gamma)=\operatorname{supp}(\nu_0\mid T_N(i,\gamma)).$$
For any $z\in T_N(i,\gamma)$, $n\in \mathbb{Z}$, $k\geq 1$ we have
Case 1: $k(p_1+\gamma+p_0)\leq N+k(p_1+\gamma)+|n|\gamma$, then
$$k\leq \frac{N+|n|\gamma}{p_0}.$$
So, \begin{eqnarray*} \|Df^{-k} \mid_{ E^{u}(x)\oplus E^{uu}(f^{n}x)}\|&\leq&
e^{-k\beta}\exp(\frac{\beta}{p_0}(N+|n|\gamma)). \end{eqnarray*}
Case 2: $k(p_1+\gamma+p_0)> N+k(p_1+\gamma)+|n|\gamma$, then \begin{eqnarray*}
\|Df^{-k} \mid_{E^{u}(x)\oplus E^{uu}(f^{n}x)}\|\leq (\lambda_ue^{\delta})^{-(1-p_1-p_0-\gamma)k} e^{\delta k(p_1+\gamma+p_0)}\leq e^{-\beta k}e^{\delta k(p_1+\gamma+p_0)}. \end{eqnarray*} Let $N_2=[\frac{\beta N}{\gamma p_0}]+1$, $\epsilon= \max\{\delta (p_1+\gamma+p_0),\,\gamma\}$. Then $$T_N(1,\gamma)\subset \Lambda_{N_2}(\beta,\beta, \epsilon) \quad \mbox{and}\quad \widetilde{T}_N(1,\gamma)\subset \widetilde{\Lambda}_{N_2}(\beta,\beta, \epsilon).$$
For any $x\in \Gamma_N(1,\gamma)$, $z\in \pi^{-1}(x)$, it holds that $$d(f^i(z),f^i_0(x))=d(f^i(z),\, \pi (f^i(z)))<Cr,$$ which implies that if $f^i_0(x)\notin R_1$ then $f^i(z)\notin B(O,r)$, because $d(B(O,r),\partial R_1)>Cr$. Thus $$\pi^{-1}(\Gamma_N(1,\gamma))\subset T_N(1,\gamma)\subset \Lambda_{N_2}(\beta , \beta, \epsilon)$$ which yields that $$\pi^{-1}(\widetilde{\Gamma}_N(1,\gamma))\subset \widetilde{\Lambda}_{N_2}(\beta, \beta, \epsilon).$$ For any ergodic $\nu\in U$, $\pi_{*}\nu\in U_1$. So $\pi_{*}\nu(\widetilde{\Gamma}_{N}(1,\gamma))>0$. We obtain $$\nu(\widetilde{\Lambda}_{N_2}(\beta, \beta, \epsilon)) \geq \nu(\pi^{-1}(\widetilde{\Gamma}_N(1,\gamma)))=\pi_{*}\nu(\widetilde{\Gamma}_{N}(1,\gamma))>0.$$ The ergodicity of $\nu$ yields $\nu(\widetilde{\Lambda}(\beta, \beta, \epsilon))=1$.
\end{proof}
\subsection{Robustly transitive systems which are not partially hyperbolic}
In this subsection we will apply the structure of $\widetilde{\Lambda}$ to a class of diffeomorphisms introduced by Bonatti and Viana. For our requirements we need to impose some additional assumptions on their constants. The class $\mathcal{V}\subset\mathrm{Diff}(\mathbb{T}^n)$ under consideration consists of diffeomorphisms which are also deformations of an Anosov diffeomorphism. To define $\mathcal{V}$, let $f_0$ be a linear Anosov diffeomorphism of the $n$-dimensional torus $\mathbb{T}^n$. Let $\mathcal{R}=\{R_1,R_2,\cdots,R_l\}$ be the Markov partition of $f_0$ and $B=B(\mathcal{R})$ be the associated transition matrix. Let $\mu_1$ be the maximal measure of $(\mathbb{T}^n,f_0)$ and $p_i=\mu_1(R_i)$ for $1\leq i\leq l$. Suppose there is a fixed point $O\in \operatorname{int} R_1$. Take small $r$ such that the ball $B(O,Cr)\subset R_1$ and $d(B(O,Cr), \partial R_1)>Cr$, where $C$ is given by Lemma \ref{shadowing onto diffeo}.
Denote by $T\mathbb{T}^n = E_0^s\oplus E_0^u$ the hyperbolic splitting for $f_0$ and let
$$\lambda_s :=
\|Df_0\mid_{E^{s}_0}\|,\quad \lambda_u:= \|Df_0\mid_{ E^u_0}\|.$$ We suppose that $f_0$ has at least one fixed point outside $B(O,r)$. Fix positive numbers $\delta, \gamma \ll \lambda :=\min\{\lambda_s^{-1}, \lambda_u\}$. Let \begin{eqnarray*}p_0&=&\frac12\min\{1-p_i\,\mid\,\, 1\leq i\leq l\},\\[2mm] \beta&=&(1-p_1-p_0-\gamma)\log\lambda-\delta. \end{eqnarray*} By definition $f\in \mathcal{V}$ if it satisfies the following $C^1$ open conditions:
(1) There exist small continuous cone fields $C^{cu}$ and $C^{cs}$ invariant for $Df$ and $Df^{-1}$ containing respectively $E_0^u$ and $E_0^s$.
(2) $f$ is $C^1$ close to $f_0$ in the complement of $B(O,r)$, so that for $x\in \mathbb{T}^n\setminus B(O,r)$:
$$\|(Df|T_xD^{cu})\|>e^{-\delta}\lambda\,\,\,\mbox{and}\,\,\,\|Df|T_xD^{cs}\|<e^{\delta}\lambda^{-1}.$$
(3) For $x\in B(O,r)$:
$$\|Df|T_xD^{cu}\|>e^{-\delta}\,\,\, \mbox{and}\,\,\, \|Df|T_xD^{cs}\|<e^{\delta},$$ where $D^{cu}$ and $D^{cs}$ are disks tangent to $C^{cu}$ and $C^{cs}$, respectively.
The cone invariance immediately yields a dominated splitting $T\mathbb{T}^n=E\oplus F$ with $E\subset C^{cs}$ and $F\subset C^{cu}$.
By Lemma \ref{shadowing onto diffeo}, there exists $\pi : \mathbb{T}^n\rightarrow \mathbb{T}^n$ continuous and onto, $\operatorname{dist}_{C^0}(\pi, \operatorname{id}) < Cr$, such that $$f_0\circ \pi = \pi\circ f.$$ In \cite{Buzzi-Fisher}, Buzzi and Fisher proved that the resulting $f$ has a unique maximal measure $\nu_0$, and that $\pi_{*}\nu_0=\mu_1$. The following theorem shows that $\nu_0$ is compatible with the structure of the Pesin set $\widetilde{\Lambda}$. \begin{Thm} \label{Bonatti-Viana}There exist $0<\epsilon\ll1<\beta$ and a neighborhood $U$ of $\nu_0$ in $\mathcal{M}_{inv}(\mathbb{T}^n,f)$ such that for any ergodic $\nu\in U$ it holds that $\nu\in \mathcal{M}_{inv}(\widetilde{\Lambda}(\beta,\beta, \epsilon),f)$. \end{Thm} \begin{proof} The arguments are analogous to those of Theorem \ref{Mane}. Choose a neighborhood $U_1$ of $\mu_1$ in $\mathcal{M}_{inv}(\mathbb{T}^n,f_0)$ such that every ergodic $\mu\in U_1$ is supported on $ \widetilde{\Gamma}(i,\gamma)$, where $\widetilde{\Gamma}(i,\gamma)$ is defined as in Proposition \ref{frequence}. The continuity of $\pi$ gives rise to a neighborhood $U$ of $\nu_0$ in $\mathcal{M}_{inv}(\mathbb{T}^n,f)$ such
that $\pi_{*}U\subset U_1$. For $N\in \mathbb{N}, \gamma>0$, define \begin{eqnarray*}
T_N(i,\gamma)=\{x\in \mathbb{T}^n\mid&&\sharp \{n\leq j\leq n+k-1\mid f^j(x)\in B(O,r)\}\leq N+k(p_i+\gamma)+|n|\gamma,\\[2mm]&&\sharp \{n\leq j\leq n+k-1\mid f^{-j}(x)\in B(O,r)\}\leq N+k(p_i+\gamma)+|n|\gamma\\[2mm]&&\quad \forall \,k\geq 1,\,\,\,\forall\, n\in \mathbb{Z}\}. \end{eqnarray*} For large $N$ we have $\nu_0(T_N(i,\gamma))>0$; let $$\widetilde{T}_N(i,\gamma)=\operatorname{supp}(\nu_0\mid T_N(i,\gamma)).$$ Let $N_2=[\frac{\beta N}{\gamma p_0}]+1$, $\epsilon= \max\{\delta (p_1+\gamma+p_0),\,\gamma\}$. Then $$T_N(1,\gamma)\subset \Lambda_{N_2}(\beta,\beta, \epsilon) \quad \mbox{and}\quad \widetilde{T}_N(1,\gamma)\subset \widetilde{\Lambda}_{N_2}(\beta,\beta, \epsilon).$$
For any $x\in \Gamma_N(1,\gamma)$, $z\in \pi^{-1}(x)$, it holds that $$d(f^i(z),f^i_0(x))=d(f^i(z),\, \pi (f^i(z)))<Cr,$$ which implies that if $f^i_0(x)\notin R_1$ then $f^i(z)\notin B(O,r)$, because $d(B(O,r),\partial R_1)>Cr$. Thus $$\pi^{-1}(\Gamma_N(1,\gamma))\subset T_N(1,\gamma)\subset \Lambda_{N_2}(\beta, \beta, \epsilon)$$ which yields that $$\pi^{-1}(\widetilde{\Gamma}_N(1,\gamma))\subset \widetilde{\Lambda}_{N_2}(\beta, \beta, \epsilon).$$ For any ergodic $\nu\in U$, $\pi_{*}\nu\in U_1$. So $\pi_{*}\nu(\widetilde{\Gamma}_{N}(1,\gamma))>0$. We obtain $$\nu(\widetilde{\Lambda}_{N_2}(\beta, \beta, \epsilon)) \geq \nu(\pi^{-1}(\widetilde{\Gamma}_{N}(1,\gamma)))=\pi_{*}\nu(\widetilde{\Gamma}_{N}(1,\gamma))>0.$$ Once more, the ergodicity of $\nu$ yields $\nu(\widetilde{\Lambda}(\beta, \beta, \epsilon))=1$.
\end{proof}
\noindent{\it Acknowledgement. } The authors are very grateful to the members of the dynamical systems seminar at Peking University; the manuscript was improved thanks to their many suggestions.
\end{document}
\begin{document}
\unitlength1mm \prvastrana=1 \poslednastrana=6
\headings{1}{6} \title{CONDITIONAL QUANTUM STATE ENGINEERING\\ AT BEAM SPLITTER ARRAYS} \author{J. Clausen \footnote{\email{[email protected]}}, M. Dakna,
L. Kn\"oll, D.-G. Welsch} { Friedrich-Schiller-Universit\"at Jena,\\ Theoretisch-Physikalisches Institut,\\ Max-Wien-Platz 1, D-07743 Jena, Germany}
\datumy{30 April 1999}{10 May 1999} \abstract{ The generation of arbitrary single-mode quantum states from the vacuum by alternate coherent displacement and photon adding as well as the measurement of the overlap of a signal with an arbitrarily chosen quantum state are studied. With regard to implementations, the transformation of the quantum state of a traveling optical field at an array of beam splitters is considered, using conditional measurement. Allowing for arbitrary quantum states of both the input reference modes and the output reference modes on which the measurements are performed, the setup is described within the concept of two-port non-unitary transformation, and the overall non-unitary transformation operator is derived. It is shown to be a product of operators, where each operator is assigned to one of the beam splitters and can be expressed in terms of an $s$-ordered operator product, with $s$ being determined by the beam splitter transmittance or reflectance. As an example we discuss the generation of and overlap measurement with Schr\"odinger-cat-like states. }
\section{Introduction}
If two traveling (pulse-shaped) modes of the radiation field are mixed at a beam splitter, then the two outgoing modes are in an entangled state in general. Therefore, the reduced state of one of them depends on the result of a measurement performed on the other. By mixing a signal pulse with a reference pulse prepared in a known state and discriminating from all pulses leaving the signal output port those corresponding to a particular measurement result in the other output port, quantum state engineering can be realized \cite{DaknaJacobi,pap2,pap3}. On the other hand, the overlap of a signal with a chosen quantum state may be obtained by mixing the signal mode with a reference mode prepared in a quantum state that is specific to the overlap and performing a measurement on the outgoing field \cite{Barnett}. Hence, novel possibilities of direct quantum state generation and measurement are offered, provided that the designed reference states can be prepared and the required measurements can be realized.
In what follows we present a scheme for the generation of arbitrary quantum states of traveling fields, in which coherent and 1-photon Fock states are fed into an array of beam splitters and zero-photon measurements are performed. We then show how the scheme can be modified in order to measure the overlap of an unknown quantum state of a signal mode with an arbitrarily chosen quantum state. Whereas in the former case a source for 1-photon Fock states should be available, in the latter case only 1-photon Fock state detection is required.
In Section 2 the underlying formalism is outlined and the basic formulas are given. The problem of the generation of arbitrary quantum states is considered in Section 3, and Section 4 is devoted to the problem of overlap measurements. In order to give an example, we consider in Section 5 the generation of and measurement of overlap with Schr\"odinger-cat-like states. A summary and some concluding remarks are given in Section 6.
\section{Conditional quantum state transformation}
Let us consider the state transformation at a beam splitter array. As outlined in Fig.1, \begin{figure}\end{figure} the incoming signal prepared in a state $\hat{\varrho}_{{\rm in}}$ passes an array of $N$ beam splitters \mbox{${\rm B}_1,$ $\!\ldots,$ $\!{\rm B}_N$} at which it is mixed with reference input modes in states
\mbox{$|\Psi_{{\rm in}_1}\rangle,$ $\!\ldots,$ $\!|\Psi_{{\rm in}_N}\rangle$}. When the measuring devices ${\rm D}_1,$ $\!\ldots,$ $\!{\rm D}_N$ detect the
reference output modes in states \mbox{$|\Psi_{{\rm out}_1}\rangle,$
$\!\ldots,$ $\!|\Psi_{{\rm out}_N}\rangle$}, then the conditional signal output state reads \vspace*{-1.4ex} \begin{equation}
\hat{\varrho}_{{\rm out}}=\frac{1}{p}\,\hat{Y}
\hat{\varrho}_{{\rm in}}
\hat{Y}^\dagger, \vspace*{-1.2ex} \end{equation} where \vspace*{-1.4ex} \begin{equation}
p={\rm Tr}\!\left(\hat{Y}\hat{\varrho}_{{\rm in}}\hat{Y}^\dagger\right) \vspace*{-1.2ex} \end{equation} is the probability of generating $\hat{\varrho}_{{\rm out}}$.
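As a small consistency remark (not spelled out in the original text), the normalization by $p$ guarantees that the conditional output is again a density operator: whenever $p>0$ one has ${\rm Tr}\,\hat{\varrho}_{{\rm out}}={\rm Tr}\big(\hat{Y}\hat{\varrho}_{{\rm in}}\hat{Y}^\dagger\big)/p=1$.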
The non-unitary transformation operator \vspace*{-1.4ex} \begin{equation}
\hat{Y}=\hat{Y}_N\cdots\hat{Y}_2\hat{Y}_1 \vspace*{-1.2ex} \end{equation} is the product of the individual conditional operators \vspace*{-1.4ex} \begin{equation}
\hat{Y}_k=\langle\Psi_{{\rm out}_k}|\hat{U}_k|
\Psi_{{\rm in}_k}\rangle
\,,
\label{Yk} \vspace*{-1.2ex} \end{equation} where \vspace*{-1.4ex} \begin{equation}
\hat{U}_k = e^{{\rm i}(\varphi_T + \varphi_R)\hat{L}_z}\,
e^{2{\rm i}\vartheta\hat{L}_y}\, e^{{\rm i}(\varphi_T
-\varphi_R)\hat{L}_z} = T^{\hat{n}}\,e^{-R^*\hat{a}^\dagger_k
\hat{a}}\,e^{R\hat{a}^\dagger\hat{a}_k}T^{-\hat{n}_k} \vspace*{-1.2ex} \end{equation} is the unitary transformation operator of the beam splitter ${\rm B}_k$ in Fig.~1, with $\hat{L}_y$ $\!=$ $\!{\rm i}(\hat{a}_k^\dagger\hat{a}$ $\!-$ $\!\hat{a}^\dagger\hat{a}_k)/2$ and $\hat{L}_z$ $\!=$ $\!(\hat{n}$ $\!-$ $\!\hat{n}_k)/2$ \cite{Campos,Daknaadded}. Here, $T$ $\!=$ $\!\cos\vartheta e^{{\rm i}\varphi_T}$ and $R$ $\!=$ $\!\sin\vartheta e^{{\rm i}\varphi_R}$ are the transmittance and reflectance, respectively. The operators $\hat{Y}_k$ can be represented as $s$-ordered operator products \cite{pap3} \vspace*{-1.4ex} \begin{equation}
\hat{Y}_k = \left\{\hat{F}(R\hat{a}^\dagger)\,
\hat{G}^\dagger\!\left(-\frac{R}{T^*}\hat{a}^\dagger\right)
\right\}_sT^{\hat{n}},
\label{order}
\vspace*{-1.2ex} \end{equation} where the operators $\hat{F}$ and $\hat{G}$, respectively, generate $|\Psi_{{\rm in}_k}\rangle$ and
$|\Psi_{{\rm out}_k}\rangle$ from the vacuum, \vspace*{-1.8ex} \begin{eqnarray}
|\Psi_{{\rm in}_k}\rangle=\hat{F}(\hat{a}_k^\dagger)|0\rangle_k\,,
\quad
|\Psi_{{\rm out}_k}\rangle=\hat{G}(\hat{a}_k^\dagger)|0\rangle_k\,, \end{eqnarray} \\[-3.25ex] and the ordering parameter $s$ is determined by the absolute value of the beam splitter reflectance as \vspace*{-1.4ex} \begin{equation}
s=\frac{2}{|R|^2}-1 . \vspace*{-1.2ex} \end{equation} Note that the ordering procedure in (\ref{order}) can be omitted if
$|\Psi_{{\rm in}_k}\rangle$ or $|\Psi_{{\rm out}_k}\rangle$ is a coherent state \cite{pap3}, since for \vspace*{-1.8ex} \begin{eqnarray}
|\Psi_{{\rm in}_k}\rangle=\hat{D}_k(\alpha)\hat{F}(\hat{a}_k^\dagger)
|0\rangle_k\,,
\quad
|\Psi_{{\rm out}_k}\rangle=\hat{D}_k(\beta)\hat{G}(\hat{a}_k^\dagger)
|0\rangle_k\, \end{eqnarray} \\[-3.25ex] we have \vspace*{-1.4ex} \begin{equation}
\hat{Y}_k = \hat{D}\!\left(\frac{\alpha-T\beta}{R^*}\right)\,
\hat{Y}_k\!\left(^{\alpha=0}_{\beta=0}\right)\,
\hat{D}\!\left(\frac{\beta-T^*\alpha}{R^*}\right).
\label{noorder} \vspace*{-1.2ex} \end{equation}
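As a quick consistency check (added here for illustration), consider the simplest special case $|\Psi_{{\rm in}_k}\rangle$ $\!=$ $\!|\Psi_{{\rm out}_k}\rangle$ $\!=$ $\!|0\rangle_k$, i.e., $\hat{F}$ $\!=$ $\!\hat{G}$ $\!=$ $\!\hat{1}$ and $\alpha$ $\!=$ $\!\beta$ $\!=$ $\!0$. Since $\hat{a}_k|0\rangle_k$ $\!=$ $\!0$, the expression for $\hat{U}_k$ given above yields, via (\ref{Yk}), \vspace*{-1.4ex} \begin{equation}
\hat{Y}_k = {}_k\langle 0|\hat{U}_k|0\rangle_k
= T^{\hat{n}}\,{}_k\langle 0|\,e^{-R^*\hat{a}^\dagger_k\hat{a}}\,
e^{R\hat{a}^\dagger\hat{a}_k}\,|0\rangle_k = T^{\hat{n}}, \vspace*{-1.2ex} \end{equation} i.e., the signal is simply attenuated, in agreement with (\ref{order}) for $\hat{F}$ $\!=$ $\!\hat{G}$ $\!=$ $\!\hat{1}$.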
\section{Generation of truncated quantum states $|\Psi\rangle$}
Each quantum state $|\Psi\rangle$ that is composed of a finite
number of Fock states $|n\rangle$ can be written as \vspace*{-1.4ex} \begin{equation}
|\Psi\rangle=\sum_{n=0}^N\,\psi_n|n\rangle
=\frac{\psi_N}{\sqrt{N!}}\prod_{k=1}^N
(\hat{a}^\dagger-\beta_k^*)|0\rangle,
\label{state} \vspace*{-1.2ex} \end{equation} where $\beta_1,$ $\!\ldots,$ $\!\beta_N$ denote the $N$ solutions of the
equation $\langle\Psi|\beta\rangle\equiv\langle\Psi|\hat{D}(\beta)|0\rangle=0$.
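For completeness, here is the short verification of (\ref{state}) (a standard argument, spelled out here for the reader's convenience): writing $|\Psi\rangle$ $\!=$ $\!p(\hat{a}^\dagger)|0\rangle$ with the polynomial $p(x)$ $\!=$ $\!\sum_{n=0}^N\psi_n x^n/\sqrt{n!}$, one has $\langle\Psi|\beta\rangle$ $\!=$ $\!e^{-|\beta|^2/2}\,\overline{p(\beta^*)}$, so the zeros $\beta_k$ of $\langle\Psi|\beta\rangle$ are exactly the complex conjugates of the roots of $p$; factorizing $p(x)$ $\!=$ $\!(\psi_N/\sqrt{N!})\prod_{k=1}^N(x-\beta_k^*)$ and applying it to $\hat{a}^\dagger$ reproduces (\ref{state}).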
This reveals that $|\Psi\rangle$ can be generated from the vacuum state by alternate displacement and photon adding, as outlined in Fig.~2 \cite{pap2}. \begin{figure}\end{figure} A coherent displacement $\hat{D}(\alpha)$ can be realized by mixing the mode
with a reference mode in a strong coherent state $|\alpha/R^\prime\rangle$ at a highly transmitting beam splitter \mbox{($T^\prime$ $\!\rightarrow$ $\!1$)}
\cite{Paris}, and photon adding is achieved by mixing the mode with a reference mode in a Fock state $|1\rangle$ and measuring zero photons in the output detection channel (detectors ${\rm D}_k$ in Fig.~2) \cite{Daknaadded}.
We assume that all the beam splitters used for photon adding have the same transmittance $T$ and reflectance $R$. The non-unitary transformation operator then reads as \vspace*{-1.4ex} \begin{equation}
\hat{Y}=\hat{D}(\alpha_{N+1})\hat{Y}_N\hat{D}(\alpha_N)\cdots
\hat{Y}_1\hat{D}(\alpha_1)\,, \vspace*{-1.2ex} \end{equation} where \vspace*{-1.4ex} \begin{equation}
\hat{Y}_k=R\hat{a}^\dagger T^{\hat{n}}\,,
\label{Ykgen} \vspace*{-1.2ex} \end{equation} and the complex parameters $\alpha_1$,\ldots, $\alpha_{N+1}$ are determined from the equation \vspace*{-1.4ex} \begin{equation}
\frac{\hat{Y}|0\rangle\langle0|\hat{Y}^\dagger}{\|\hat{Y}|0\rangle\|^2}=
|\Psi\rangle\langle\Psi| \vspace*{-1.2ex} \end{equation} as $\alpha_k$ $\!=$ $\!T^{*N+1-k}(\beta_{k-1}$ $\!-$ $\!\beta_k)$ for $k$ $\!=$ $\!2,\ldots,N+1$ ($\beta_{N+1}$ $\!=$ $\!0$), and $\alpha_1$ $\!=$ $\!-\sum_{l=1}^NT^{-l} \alpha_{l+1}$. The probability of generating the desired state
$\hat{\varrho}_{{\rm out}}=|\Psi\rangle\langle\Psi|$, i.e. the probability that all $N$ detectors register zero photons, is given by \vspace*{-1.4ex} \begin{equation}
\|\hat{Y}|0\rangle\|^2=\frac{N!}{|\psi_N|^2}
\frac{|R|^{2N}}{|T|^{N(1-N)}}\, \exp\!\left[-|R|^2\sum_{k=1}^N
\bigg |\frac{\sum_{l=1}^k|T|^{2l}(\beta_{N+2-l}\!-\!\beta_{N+1-l})}
{T^{k+2}}\bigg|^2\right]
\label{prob} \vspace*{-1.2ex} \end{equation} and decreases rapidly with increasing $N$. Nevertheless, for small $N$ this scheme offers a way to generate specific traveling quantum states, given the possibility of preparing 1-photon Fock states.
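A minimal numerical sketch of this recipe (our illustration; the function name and conventions are ours) determines the $\beta_k$ as the roots of the polynomial $\sum_n\psi_n^*\beta^n/\sqrt{n!}$ and then evaluates the displacements $\alpha_k$ from the formulas above:
\begin{verbatim}
import numpy as np
from math import factorial

def displacement_parameters(psi, T):
    # Illustrative sketch: psi = [psi_0, ..., psi_N] are the Fock amplitudes of
    # the target state, T is the beam-splitter transmittance.
    # Returns the coherent displacements alpha_1, ..., alpha_{N+1}.
    N = len(psi) - 1
    # beta_k are the zeros of <Psi|D(beta)|0>, i.e. the roots of
    # sum_n conj(psi_n) beta^n / sqrt(n!)
    coeffs = [np.conj(psi[n]) / np.sqrt(factorial(n)) for n in range(N, -1, -1)]
    beta = list(np.roots(coeffs)) + [0.0]     # beta_{N+1} = 0; root labelling is a choice
    alpha = np.zeros(N + 1, dtype=complex)    # alpha[k-1] stores alpha_k
    for k in range(2, N + 2):
        alpha[k - 1] = np.conj(T) ** (N + 1 - k) * (beta[k - 2] - beta[k - 1])
    alpha[0] = -sum(T ** (-l) * alpha[l] for l in range(1, N + 1))
    return alpha

# example: target state (|0> + |1> + |2>)/sqrt(3)
print(displacement_parameters(np.array([1.0, 1.0, 1.0]) / np.sqrt(3), T=0.9))
\end{verbatim}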
\section{Measuring arbitrary overlaps}
One fundamental task in quantum mechanics is to find the probability that
a specific quantum state $|\Psi\rangle$ is contained in a state $\hat{\varrho}_{{\rm in}}$ of a given system. With regard to traveling waves,
this overlap $\langle\Psi|\hat{\varrho}_{{\rm in}}|\Psi\rangle$
can be measured for a given state $|\Psi\rangle$ of the type (\ref{state}) as outlined in Fig.~3 \cite{pap4}.
The signal mode in state $\hat{\varrho}_{{\rm in}}$ is mixed with reference modes in coherent states $|\alpha_1\rangle,$ $\!\ldots,$
$\!|\alpha_N\rangle$ at an array of $N$ beam splitters. Photodetectors ${\rm D}_1,$ $\!\ldots,$ $\!{\rm D}_{N+1}$ perform photon number measurements at the output modes. The joint probability $p(1,1;2,1;\ldots;N,1;N+1,0)$ that each of the detectors \mbox{${\rm D}_1,$ $\!\ldots,$ $\!{\rm D}_{N}$} registers one photon and ${\rm D}_{N+1}$ none is then given by \vspace*{-1.4ex} \begin{equation}
p(1,1;2,1;\ldots;\;N,1;\;N+1,0)=
\langle0|\hat{Y}\hat{\varrho}_{{\rm in}}\hat{Y}^\dagger|0\rangle
\label{joint} \vspace*{-1.2ex} \end{equation} with \vspace*{-1.4ex} \begin{equation}
\hat{Y}=\hat{Y}_N\cdots\hat{Y}_2\hat{Y}_1\,, \vspace*{-1.2ex} \end{equation} where for identical beam splitters \vspace*{-1.4ex} \begin{equation}
\hat{Y}_k=-R^*\hat{D}\left(\frac{\alpha_k}{R^*}\right)T^{\hat{n}}
\hat{a}\hat{D}\left(-\frac{T^*}{R^*}\alpha_k\right),
\label{Ykpro} \vspace*{-1.2ex} \end{equation} see (\ref{Yk}) as well as (\ref{order}) and (\ref{noorder}). This expression reveals that the signal is manipulated by alternate displacement and photon subtraction. If we now choose the arguments $\alpha_k$ ($k$ $\!=$ $\!1,\ldots,N$) of the coherent states as
$\alpha_k$ $\!=$ $\!(R^*/T^{*k})\sum_{l=1}^k|T|^{2l-1}(\beta_l$ $\!-$ $\!\beta_{l-1})$, with \mbox{$\beta_0$ $\!=$ $\!0$}, so that for a chosen
$|\Psi\rangle$ the relation \vspace*{-1.4ex} \begin{equation}
\frac{\hat{Y}^\dagger|0\rangle\langle0|\hat{Y}}
{\|\hat{Y}^\dagger|0\rangle\|^2}=|\Psi\rangle\langle\Psi| \vspace*{-1.2ex} \end{equation} is valid, then from (\ref{joint}) it is seen that the sought overlap can be obtained from the joint probability as \vspace*{-1.4ex} \begin{equation}
\langle\Psi|\hat{\varrho}_{{\rm in}}|\Psi\rangle=
\frac{p(1,1;2,1;\ldots;N,1;N+1,0)}
{\|\hat{Y}^\dagger|0\rangle\|^2}\,.
\vspace*{-1.2ex} \end{equation} The denominator $\|\hat{Y}^\dagger|0\rangle\|^2$ is given by (\ref{prob}), with $(\beta_{N+2-l}$ $\!-$ $\!\beta_{N+1-l})$ being replaced with $(\beta_l$ $\!-$ $\!\beta_{l-1})$. It may be regarded as the ``fidelity'' of the measurement, since it is a measure of the maximal occurrence of the detection coincidences. Obviously, it is equal to $p(1,1;2,1;\ldots;N,1;N+1,0)$ in the particular case when signal and measured state coincide,
$\hat{\varrho}_{{\rm in}}$ $\!=$ $\!|\Psi\rangle\langle\Psi|$.
\section{Schr\"odinger-cat-like states}
The schemes in Figs.~2 and 3 can be simplified if some of the $\beta_k$ in (\ref{state}) are equal: \vspace*{-1.4ex} \begin{equation}
|\Psi\rangle=\frac{\psi_N}{\sqrt{N!}}\prod_{l=1}^M
(\hat{a}^\dagger-\beta_l^*)^{d_l}|0\rangle \vspace*{-1.2ex} \end{equation} with $M$ $\!<$ $\!N$. In this case it is possible to add $d_k$ photons at once in Fig.~2 by using $M$ detectors and combining with
Fock states $|d_k\rangle$ or to subtract $d_k$ photons at each beam splitter in Fig.~3 by using $M$ $\!+$ $\!1$ detectors and measuring the relative frequency of the event $(1,d_1;\ldots;M,d_M;M+1,0)$. All calculations are analogous if $R\hat{a}^\dagger$ in (\ref{Ykgen}) is replaced with $(R\hat{a}^\dagger)^{d_k}/\sqrt{d_k!}$ and $-R^*\hat{a}$ in (\ref{Ykpro}) is replaced with $(-R^*\hat{a})^{d_k}/\sqrt{d_k!}$.
As an example let us consider the states \vspace*{-1.4ex} \begin{equation}
|\Psi_n^{\alpha,\beta}\rangle=\frac{1}{\sqrt{{\cal N}}}\,\hat{D}(\gamma_3)
\hat{a}^{\dagger n}\hat{D}(\gamma_2)\hat{a}^{\dagger n}\hat{D}(\gamma_1)
|0\rangle, \vspace*{-1.2ex} \end{equation} where $\gamma_1$ $\!=$ $\!{\rm i}(\beta$ $\!-$ $\!\alpha)/2$, $\gamma_2$ $\!=$ $\!{\rm i}(\alpha-\beta)$, $\gamma_3$ $\!=$ $\![(1-{\rm i})\alpha$ $\!+$ $\!(1+{\rm i})\beta]/2$, and the normalization factor is ${\cal N}$ $\!=$ $\!(4^nn!/\sqrt{\pi})$
$\!\mbox{$\Gamma(n+1/2)$}$ $\!_1F_2[-n,1/2-n,1,|\alpha-\beta|^4/64]$. From the above considerations it is clear that these states can be generated using 2 detectors, and the overlap of a signal state with such states can be
measured using 3 detectors. The states $|\Psi_n^{\alpha,\beta}\rangle$ reveal
the interesting property that for increasing $n=|\alpha-\beta|^2/4$ they approach superpositions of coherent states \cite{pap4},
$|\Psi_\infty^{\alpha,\beta}\rangle\langle\Psi_\infty^{\alpha,\beta}|=
|\Psi^{\alpha,\beta}\rangle\langle\Psi^{\alpha,\beta}|$ with \vspace*{-1.4ex} \begin{equation}
|\Psi^{\alpha,\beta}\rangle=\frac{1}{\sqrt{2}}\left(
|\alpha\rangle+|\beta\rangle \right) \vspace*{-1.2ex} \end{equation} (note that
$|\langle\Psi_{n=3}^{\alpha,\beta}|\Psi^{\alpha,\beta}\rangle|^2$ $\!>$ $\!0.95$). This offers the possibility of generating Schr\"odinger-cat-like states, provided that two $n$-photon Fock states are available. On the other hand, measuring overlaps with Schr\"odinger-cat-like states allows one, in principle, to reconstruct the signal state \cite{Freyberger}.
It is worth noting that choosing a squeezed coherent signal state offers the possibility of measuring the field strength statistics of a Schr\"odinger cat without the need to generate the cat state. Note that squeezed coherent states approach field strength states for sufficiently strong squeezing.
Finally, it should be pointed out that the probability of generation
of the states $|\Psi_n^{\alpha,\beta}\rangle$ and the fidelity of measurement of the overlap with them are equal. For large $n$, they decrease exponentially with increasing $n$: \vspace*{-1.4ex} \begin{equation}
p=\frac{|2R^2T|^{2n}}{n\pi}\,\exp\!\left[n\left(1\!-\!
\left|\frac{R}{T}\right|^2(1\!+\!|T|^{-2}(1-2|T|^2)^2)\right)\right].
\label{p} \vspace*{-1.2ex} \end{equation}
\section{Conclusion}
We have discussed conditional quantum state engineering at beam splitter arrays. We have presented a scheme for the generation of arbitrary quantum states which requires coherent states, 1-photon Fock states and zero-photon detection.
Further, we have given a scheme for the measurement of the overlap of an unknown signal state with an arbitrary quantum state which requires coherent states and $0$- and $1$-photon detections.
For the two schemes we have calculated the probabilities of state generation and overlap measurement. Finally, we have shown how the schemes can be simplified under special conditions. As an example we have considered the generation of (and overlap measurement with) states which approach superpositions of two arbitrary coherent states.
\noindent {\bf Acknowledgements}\\[1ex] This work was supported by the Deutsche Forschungsgemeinschaft.
\end{document}
\begin{document}
\author{A. A. Rangelov} \affiliation{Fachbereich Physik der Universit\"{a}t, Erwin-Schr\"{o}dinger-Str., 67653 Kaiserslautern, Germany} \affiliation{Permanent address: Department of Physics, Sofia University, James Bourchier 5 blvd., 1164 Sofia, Bulgaria}
\author{N. V. Vitanov} \affiliation{Department of Physics, Sofia University, James Bourchier 5 blvd., 1164 Sofia, Bulgaria} \affiliation{Institute of Solid State Physics, Bulgarian Academy of Sciences, Tsarigradsko chauss\'{e}e 72, 1784 Sofia, Bulgaria}
\author{B. W. Shore} \affiliation{Fachbereich Physik der Universit\"{a}t, Erwin-Schr\"{o}dinger-Str., 67653 Kaiserslautern, Germany} \altaffiliation{Permanent address: 618 Escondido Cir., Livermore, CA}
\title{Population trapping in three-state quantum loops revealed by Householder reflections}
\date{\today}
\begin{abstract} Population trapping occurs when a particular quantum-state superposition is immune to action by a specific interaction, such as the well-known dark state in a three-state lambda system. We here show that in a three-state loop linkage, a Hilbert-space Householder reflection breaks the loop and presents the linkage as a single chain. With certain conditions on the interaction parameters, this chain can break into a simple two-state system and an additional spectator state. Alternatively, a two-photon resonance condition in this Householder-basis chain can be enforced, which heralds the existence of another spectator state. These spectator states generalize the usual dark state to include contributions from all three bare basis states and disclose hidden population trapping effects, and hence hidden constants of motion. Insofar as a spectator state simplifies the overall dynamics, its existence facilitates the derivation of analytic solutions and the design of recipes for quantum state engineering in the loop system. Moreover, it is shown that a suitable sequence of Householder transformations can cast an arbitrary $N$-dimensional hermitian Hamiltonian into a tridiagonal form. The implication is that a general $N$-state system, with arbitrary linkage patterns where each state connects to any other state, can be reduced
to an equivalent chainwise-connected system, with nearest-neighbor interactions only,
with ensuing possibilities for discovering hidden multidimensional spectator states and constants of motion. \special{color cmyk 0 0 0 1.} \end{abstract}
\pacs{32.80.Xx, 33.80.Be, 32.80.Rm, 33.80.Rv} \maketitle
\section{Introduction}
Descriptions of optical excitation of few-state quantum systems traditionally make use of the rotating-wave approximation (RWA),
in which the Hilbert-space unit vectors (the bare quantum states) rotate with angular velocities that are fixed at various laser carrier frequencies, and the Hamiltonian,
with the neglect of rapidly varying terms, becomes a matrix of slowly varying Rabi frequencies and detunings \cite{All75,Shore}. For three states the usual electric-dipole selection rules of optical transitions produce a simple chain of inter-state linkages, depicted as either a ladder, a lambda or a vee.
For some time it has been known that, either by means of a rotation of the arbitrary quantization axis for defining magnetic sublevels,
or by a more general reorganization of the Hilbert-space basis states (a Morris-Shore transformation \cite{Morris83}),
such patterns can be presented as a set of independent two-state excitations (bright states) together with states that are unaffected by the specific radiation (dark or spectator states).
The presence of a third interaction, linking the two states that terminate the three-state chain, turns the linkage pattern into a loop, see Fig. \ref{fig-loop3}. Such interaction would violate the usual selection rules for electric-dipole radiation (which connects only states of opposite parity),
but is possible for a variety of other interactions, such as occur with two-photon optical transitions or microwave transitions between hyperfine states. To avoid the presence of rapidly varying exponential phases in the RWA Hamiltonian, such a link should occur with carrier frequency suitably chosen.
\begin{figure}
\caption{(Color online) RWA linkage pattern for a loop, showing linkages: states 1 and 2 by Rabi frequency $\Omega_P$, states 2 and 3 by $\Omega_S$ and states 1 and 3 by $\Omega_C$. The energy levels are shown with an ordering appropriate to state $\state{1}$ as ground state and state $\state{2}$ as excited state, but the symmetry of the loop linkage allows initial population in any state.}
\label{fig-loop3}
\end{figure}
Within the RWA there is no longer a distinction of the original bare-state energies; all that matters are the detunings, i.e. the differences between a Bohr frequency and the associated laser-field carrier frequency. Nonetheless, it is traditional, when depicting the linkage pattern of laser-induced interactions, to place representations of the states in a vertical direction
ranked according to the original bare-state energies, and to consider excitation in which the initial population resides entirely in a ground state. Such a display convention is particularly useful in emphasizing the difference between low-energy stable or metastable states, unable to radiatively decay, and excited states, from which spontaneous emission is possible -- visible as fluorescence.
The three-state loop is the simplest example of discrete quantum states that can exhibit nontrivial probability-amplitude interferences, and hence it has attracted continuing interest \cite{Buckle86,Krinitzky86,Carroll88,Kosachiov92,Maichen95,Maichen96,Unanyan97,Fleischhauer99,Shapiro,Liu,Metz06}.
With the loop pattern it is not immediately obvious that any simple restructuring of the Hilbert-space coordinates will produce a single unit vector that is immune to the effects of any interaction -- a spectator state. For example, the loop system does not satisfy an essential condition for the MS transformation \cite{Morris83}, namely that the quantum states be classified in two sets, with transitions only between the sets, not within them.
A special case of the three-state loop was considered by Carroll and Hioe \cite{Carroll88}. They presented analytical solutions for the probability amplitudes when three resonant laser pulses of different shapes were present and two of the couplings were real, while the third was purely imaginary. For this special case, the underlying SU(2) symmetry allows the three-state loop to be reduced to an effective two-state system.
Another fully resonant three-state loop was examined by Unanyan \textit{et al.} \cite{Unanyan97}. In that work a pulsed quasistatic magnetic field supplemented the two optical pulses of a lambda linkage used for stimulated Raman adiabatic passage (STIRAP) \cite{STIRAP}. This additional field allowed a reduction of diabatic loss, thereby relaxing the usual adiabatic constraints on achieving complete population transfer.
The three-state loop was examined also by Fleischhauer \textit{et al.} \cite{Fleischhauer99}. They showed that when each link was resonant, the dark state of STIRAP \cite{STIRAP} could be modified to a higher-order trapping state,
becoming an approximate constant of motion even for small pulse areas. This state adiabatically rotates, in Hilbert space, from the initial to the target state. This adiabatic motion leads to efficient population transfer, though at the expense of placing some population into the decaying atomic state.
Recently a three-state loop was shown to occur in physical processes where the free-space symmetry is broken, as it is in chiral systems \cite{Shapiro,Liu}. Such quantum systems occur in left- and right-handed chiral molecules \cite{Shapiro}, or in ``artificial atoms''. Loop linkages amongst discrete quantum states can also occur in superconducting quantum circuits \cite{Liu},
and in modeling entangled atoms coupled through an optical cavity \cite{Metz06}.
We here consider loops that have less stringent constraints on the frequencies, although some do exist. We shall show in the following that it is possible, under appropriate conditions (including three-photon resonance), to break the loop into a chain.
A further transformation of the basis states can then convert the linkage pattern into a pair of coupled states and a spectator state. Alternatively, if the population resides initially in the middle state of the chain, the system has the dynamics of the vee linkage.
The required initial transformation, converting the loop into a simple chain, is taken to be a Householder reflection (HR) of the Hamiltonian matrix \cite{Householder}. Such matrix manipulations, commonplace in works dealing with linear algebra \cite{Householder2}, have recently been applied to quantum-state manipulations \cite{Kyo06,Iva06,Kyo07,Iva07}.
When acting upon an \textit{arbitrary} square matrix a suitable sequence of HRs produces an upper-diagonal (or lower-diagonal) matrix. When acting upon a \textit{unitary} matrix, such a sequence produces a diagonal matrix, with phase factors on the diagonal. This property has been used for decomposition, and therefore synthesis, of arbitrary preselected propagators in multistate systems \cite{Kyo06,Iva06,Kyo07,Iva07}. We show here that, when utilised for a change of basis in Hilbert space, a suitable HR (or a sequence of HRs) can cast a (hermitian) Hamiltonian into a tridiagonal form. This tridiagonalization implies the replacement of a general linkage pattern (for example, each state interacting with any other state)
with an effective chainwise-connected system where only nearest-neighbour interactions are present. We here apply this tridiagonalization to the simplest nontrivial multistate system -- a three-state loop system -- and demonstrate its potential applications,
with examples ranging from effective chain breaking and novel spectator states to hidden two-photon resonances.
\section{The loop RWA Hamiltonian}
We consider three fields, labelled pump ($P$), Stokes ($S$), and control ($C$), \begin{equation} \mathbf{E}_k(t) = \hat{\mathbf{e}}_k \, \mathcal{E}_k(t) \, \cos(\omega_k t + \phi_k), \qquad (k = P, S, C). \end{equation} The three carrier frequencies $\omega_k$ can be chosen arbitrarily, so long as they fulfill the three-photon resonance condition (Fig. \ref{fig-loop3}) \begin{equation} \omega_C - \omega_P + \omega_S = 0. \label{eqn-3phot} \end{equation} This constraint is necessary for the application of the rotating-wave approximation (RWA) \cite{All75,Shore}. However, at the outset we impose no constraints on the single-photon detunings, \begin{equation} \hbar \Delta_P \equiv E_2-E_1 -\hbar \omega_P,\qquad \hbar \Delta_S\equiv E_2-E_3-\hbar \omega_S. \end{equation}
We introduce probability amplitudes $C_n(t)$ in the usual rotating Hilbert-space coordinates $\state{n}(t)$, \begin{equation} \Psi(t) = \exp(- i \zeta_0 t ) \left[ C_1(t) \state{1} + C_2(t) \state{2}(t) + C_3(t) \state{3} (t) \right], \end{equation} where the rotations originate with field carrier frequencies, $\state{2}(t) \equiv \exp(-i \omega_P t) \state{2}$ and $\state{3}(t) \equiv \exp(-i\omega_C t ) \state{3}$. From the time-dependent Schr\"{o}dinger equation we obtain three coupled equations \begin{equation}
\frac{d}{dt} \mathbf{C}(t) = -i \mathsf{W}(t) \mathbf{C}(t), \end{equation} where $\mathbf{C}(t)\equiv [C_1(t),C_2(t),C_3(t)]^{T}$ is a three-component column vector of probability amplitudes and $\hbar \mathsf{W}(t)$ is the slowly varying RWA Hamiltonian. We take the overall phase factor $\zeta_{0}$ to nullify the first diagonal element of the RWA Hamiltonian matrix; it then has the structure \begin{equation} \label{H-loop} \mathsf{W}(t)=\frac{_1}{^{2}}\left[ \begin{array}{ccc} 0 & \Omega_P(t)e^{i\phi_P} & \Omega_C(t)
\\ \Omega_P(t)e^{-i\phi_P} & 2\Delta_2 & \Omega_S(t)e^{i\phi_S} \\
\Omega_C(t) & \Omega_S(t)e^{-i\phi_S} & 2\Delta_3 \end{array} \right] . \end{equation} where the interactions are parameterized by slowly varying real-valued Rabi frequencies $\Omega_k(t)$, for $k = P,S,C$. For simplicity and without loss of generality the $C$ field is assumed real ($\phi_C=0$); then $\phi_P$ and $\phi_S$ represent the phase differences between the $P$ and $S$ fields, respectively, and the $C$ field. The cumulative detunings are \begin{equation}
\Delta_2=\Delta_P,\quad \Delta_3 = \Delta_P-\Delta_S. \end{equation}
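Note that, in the rotating frame defined above, the $S$-field coupling between states $\state{2}$ and $\state{3}$ carries a residual phase oscillating at the frequency $\omega_P-\omega_C-\omega_S$; the three-photon resonance condition \eqref{eqn-3phot} removes this time dependence and is what allows all phases in the RWA Hamiltonian \eqref{H-loop} to be static.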
\section{The Householder reflection}
We seek a unitary transformation of the Hilbert-space basis states that will first replace the loop with a three-state chain. As we will show, the desired result can be produced by a Householder reflection acting upon the RWA Hamiltonian \cite{Iva06}.
An $N$-dimensional Householder reflection is defined as the operator \begin{equation}
\mathsf{R} = \mathsf{I} - 2\left\vert v\right\rangle \left\langle v\right\vert , \end{equation} where $\mathsf{I}$ is the identity operator and $\left\vert v\right\rangle $ is an $N$-dimensional normalized complex column vector. The Householder operator $ \mathsf{R} $ is Hermitian and unitary, $ \mathsf{R} = \mathsf{R} ^{-1}= \mathsf{R} ^{\dagger }$, hence $ \mathsf{R} $ is involutory, $ \mathsf{R} ^{2}=\mathsf{I}$. The transformation is a reflection, so $\det \mathsf{R} =-1$. If the vector $\left\vert v\right\rangle $ is real, the Householder reflection has a simple geometric interpretation:
it is a reflection with respect to an $(N-1)$-dimensional plane normal to the vector $\left\vert v\right\rangle $.
A sequence of $N-1$ Householder reflections, acting upon an arbitrary $N$-dimensional matrix, produces an upper or lower triangular matrix. This behavior makes the Householder reflection a powerful tool for many applications in classical data analysis, e.g., in solving systems of linear algebraic equations, finding eigenvalues of high-dimensional matrices, least-squares optimization, QR decomposition, etc. \cite{Householder2}. For us, the reflection serves to transform the Hamiltonian from a full matrix to one that lacks one interaction -- it breaks the loop into a chain.
The three-state system offers three basis vectors with which to define a Householder reflection. Because of the symmetry of the loop system it is only necessary to consider one of these; the effect of others can be examined by a permutation of state labels. We shall take state $\state{1}$ as a fixed coordinate, within the plane of the reflection, and introduce an alteration of the Hilbert subspace spanned by the remaining unit vectors $\state{2}(t)$ and $\state{3}(t)$. Figure \ref{fig-Househ} illustrates the connection of the reflection with the basis states, and the possible choices of initial state.
\begin{figure}
\caption{(Color online) The Householder reflection leaves state $\state{1}$ unchanged. Initial population might be (a) in this state, or (b) in one of the altered states. Relative energies of the original bare states are not relevant, only the couplings. }
\label{fig-Househ}
\end{figure}
With this choice the Householder vector reads \begin{equation}\label{Householder vector}
|v\rangle = \left[ 0,\sin (\theta/2) e^{-i\phi_{P}}, \cos (\theta/2) \right] ^{T}, \end{equation} and the matrix representation of the Householder reflection is \begin{equation} \mathsf{R} \t =\left[\begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \theta \t & e^{-i\phi_P}\sin \theta \t \\ 0 & e^{i\phi_P}\sin \theta \t & -\cos \theta \t \end{array}\right]. \end{equation} The angle $\theta$, defined by the equation \begin{equation} \label{theta} \tan \theta \t \equiv \frac{\Omega_C \t }{\Omega_P \t }, \end{equation} is twice the angle from the mirror normal to state $\state{2}$ (i.e. the twist of the mirror about the $\state{1}$ axis). Hereafter we omit explicit mention of time dependences; all Rabi frequencies are to be considered slowly varying in time, as is the Householder reflection $ \mathsf{R} $ and the angle $\theta$.
The connection between the probability amplitudes $ \widetilde{\mathbf{C}} \t $ in the Householder basis and the amplitudes $\mathbf{C} \t $ in the original (bare) basis is \begin{equation}
\widetilde{\mathbf{C}} \t = \mathsf{R} \t \mathbf{C} \t . \end{equation} The transformed equation of motion reads \begin{equation}
\frac{d}{dt} \widetilde{\mathbf{C}} \t = -i \widetilde{\mathsf{W}} \t \widetilde{\mathbf{C}} \t, \end{equation} where the Householder Hamiltonian is $\widetilde{\mathsf{W}} \t = \mathsf{R} \t \mathsf{W} \t \mathsf{R} \t -i \mathsf{R} \t \dot{ \mathsf{R} } \t$, with an overdot denoting a time derivative. Explicitly, \begin{equation}\label{HH}
\widetilde{\mathsf{W}} \t =\tfrac{1}{2}\left[\begin{array}{ccc} 0 & \widetilde{\Omega}_P \t & \ 0 \\ \widetilde{\Omega}_P^* \t & 2\widetilde{\Delta}_2 \t & \widetilde{\Omega}_S \t -2ie^{-i\phi_{p}}\dot{\theta } \t \\ \ 0 & \quad \widetilde{\Omega}_S^* \t + 2ie^{i\phi_{p}}\dot{\theta } \t \quad & 2\widetilde{\Delta}_3 \t \end{array}\right] , \end{equation} with effective detunings \begin{subequations}\label{HDelta}\begin{eqnarray} \widetilde{\Delta}_2 \t &=& \frac{\Delta_3 \Omega_C \t ^2 + \Delta_2 \Omega_P \t ^2 +\Omega_P \t \Omega_C \t \Omega_S \t \cos \left( \phi_P+\phi_S\right) }{\Omega \t ^2},\label{HDelta2}
\\ \widetilde{\Delta}_3 \t &=& \frac{\Delta_2 \Omega_C \t ^2 + \Delta_3 \Omega_P \t ^2 -\Omega_P \t \Omega_C \t \Omega_S \t \cos \left( \phi_P+\phi_S\right) }{\Omega \t ^2},\label{HDelta3} \end{eqnarray}\end{subequations} and effective couplings \begin{subequations}\label{HOmega}\begin{eqnarray} \widetilde{\Omega}_P \t &=&e^{i\phi_{p}}\Omega \t ,\label{HOmegaP}
\\ \widetilde{\Omega}_S \t &=&\frac 1{\Omega \t ^2} \Big[ 2e^{-i\phi_P} \left( \Delta_2 -\Delta_3 \right) \Omega_P \t \Omega_C \t \notag\\ && + \left.\left(e^{-2i(\phi_P+\phi_S) }\Omega_C \t ^2 -\Omega_P \t ^2 \right) e^{i\phi_S} \Omega_S \t \right],\label{HOmegaS} \end{eqnarray}\end{subequations} with $\Omega \t \equiv \sqrt{\Omega_P \t ^2+ \Omega_C \t ^2}$. All of these elements acquire time dependence from the pulses, though that is not shown explicitly.
The Hamiltonian in the Householder basis is that of a simple chain, $\state{1}\leftrightarrow \tstate{2}\leftrightarrow \tstate{3}$. By design the Householder reflection places the original two interactions of state $\state{1}$ into a single effective interaction with a new superposition state $\tstate{2}$. This state, in turn, has an interaction with the other terminal state of the chain $\tstate{3}$, also a superposition state. The new \emph{Householder states} $\widetilde{\psi}_{n} \t$ are superpositions of the original basis states $\state{n} \t$, \begin{subequations}\label{Householder states} \begin{eqnarray} \tstate{1} &=&\state{1}, \label{Householder state 1}\\ \tstate{2} \t &=&\cos \theta \t \, \state{2} \t + e^{-i\phi_P}\sin \theta \t \, \state{3} \t , \label{Householder state 2}\\ \tstate{3} \t &=&e^{i\phi_P}\sin \theta \t \, \state{2} \t - \cos\theta \t \, \state{3} \t \label{Householder state 3}. \end{eqnarray} \end{subequations} When the initial population resides entirely in state $\state{1}$, this chain is equivalent to a lambda or ladder system. When the initial population occurs in state $\tstate{2}$ it is a generalization of the vee linkage. The chain Hamiltonian \eqref{HH} in the Householder representation is conceptually simpler than the original loop Hamiltonian \eqref{H-loop}
for it allows only for nearest-neighbour interactions; the resulting chain linkage is easier to understand and treat analytically by a variety of exact or approximate approaches. The inherent interference in the loop system is now imprinted onto the Householder transformation and is absent in the Householder chain.
Moreover, this transformation allows one to use the considerable literature available on chainwise-connected three-state systems.
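For a quick numerical illustration (ours; the parameter values are arbitrary and the pulses are taken constant, so that $\dot\theta=0$ and the term $-i\mathsf{R}\dot{\mathsf{R}}$ drops out), one can verify directly that the reflection removes the direct $\state{1}\leftrightarrow\state{3}$ coupling and produces the effective coupling $|\widetilde{\Omega}_P|=\Omega$:
\begin{verbatim}
import numpy as np

# Numerical check with arbitrary constant parameters: R W R has no (1,3) element.
OP, OS, OC = 1.3, 0.7, 2.1          # Omega_P, Omega_S, Omega_C
phiP, phiS = 0.4, -1.1              # phi_P, phi_S  (phi_C = 0 by convention)
D2, D3 = 0.5, -0.2                  # Delta_2, Delta_3

W = 0.5 * np.array([
    [0.0,                     OP * np.exp(1j * phiP),  OC                    ],
    [OP * np.exp(-1j * phiP), 2 * D2,                  OS * np.exp(1j * phiS)],
    [OC,                      OS * np.exp(-1j * phiS), 2 * D3                ]])

theta = np.arctan2(OC, OP)
R = np.array([
    [1.0, 0.0,                                0.0                               ],
    [0.0, np.cos(theta),                      np.exp(-1j * phiP) * np.sin(theta)],
    [0.0, np.exp(1j * phiP) * np.sin(theta), -np.cos(theta)                     ]])

Wt = R @ W @ R
print(abs(Wt[0, 2]))                          # ~1e-16: no direct 1<->3 coupling left
print(abs(Wt[0, 1]), 0.5 * np.hypot(OP, OC))  # |tilde Omega_P|/2 equals Omega/2
\end{verbatim}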
\section{Special cases}
In the remainder of this paper we consider special cases of the Householder Hamiltonian,
obtained when we constrain the various pulse parameters, which lead to simplification of the resulting Hamiltonian matrix. Two simplifications are particularly interesting:
(i) breaking the three-state Householder chain $\state{1}\leftrightarrow \tstate{2}\leftrightarrow \tstate{3}$ into a two-state system and a spectator state,
and (ii) two-photon resonance in the Householder basis. We shall identify conditions, and deduce implications, for these important special cases.
\subsection{Effective two-state system and spectator state}
Under appropriate conditions the three-state Householder chain $\state{1}\leftrightarrow \tstate{2}\leftrightarrow \tstate{3}$ breaks into two coupled states and a spectator state. This occurs whenever one of the Householder linkages vanishes. The vanishing of $\widetilde{\Omega}_P$ requires that both $\Omega_P$ and $\Omega_C$ vanish, which is trivial and uninteresting. We hence assume the null linkage to be the coupling between states $\tstate{2}$ and $\tstate{3}$, \begin{equation}\label{H23=0}
\widetilde{\Omega}_S \t + 2i e^{-i\phi_p}\dot{\theta} \t = 0. \end{equation} Under this condition state $\tstate{3}$, Eq.~\eqref{Householder state 3}, decouples from the other two states and becomes a \textit{spectator state}:
its population is trapped, within a subspace of the full Hilbert space. The population distribution between states $\state{2}$ and $\state{3}$ may change, but in a manner that conserves the population of the spectator state \eqref{Householder state 3}.
\subsubsection{Conditions for chain breaking}
One possible solution to Eq.~\eqref{H23=0} reads \begin{subequations}\label{breaking conditions}\begin{eqnarray} \Delta_2 &=&\Delta_3, \label{beaking condition 1}\\ \phi_P &=& -\phi_S-\frac{\pi }{2}, \label{breaking condition 2}\\ \Omega_S \t &=&-2\dot{\theta } \t. \label{breaking condition 3} \end{eqnarray}\end{subequations} The latter condition imposes a strict constraint on the pulse shape. Given $\Omega_P$ and $\Omega_C$, which determine $\dot\theta$ through Eq.~\eqref{theta}, condition \eqref{breaking condition 3} determines both the shape and the magnitude of $\Omega_S$.
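Indeed, with $\Delta_2=\Delta_3$ and $\phi_P+\phi_S=-\pi/2$, Eq.~\eqref{HOmegaS} reduces to $\widetilde{\Omega}_S=-e^{i\phi_S}\Omega_S$ while $2ie^{-i\phi_P}\dot\theta=-2e^{i\phi_S}\dot\theta$, so that the chain-breaking condition \eqref{H23=0} collapses to $\Omega_S=-2\dot\theta$.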
Another possible solution to Eq.~\eqref{H23=0} emerges when the $P$ and $C$ pulses have the same time dependence, say $f(t)$,
$\Omega_P (t) =\Omega_{P0} f(t)$, $\Omega_C (t) = \Omega_{C0} f(t)$,
while the $S$ pulse could differ, $\Omega_S (t) =\Omega_{S0} g(t)$. Then $\dot{\theta} = 0$ and condition \eqref{H23=0} becomes $\widetilde{\Omega}_S \t =0$. This condition can be satisfied in several ways, cf. Eq.~\eqref{HOmegaS}. A simple realization for that condition occurs with the choice \begin{subequations}\label{breaking conditions b}\begin{eqnarray} \Delta_2 &=&\Delta_3, \label{breaking condition 1b}\\ \phi_P&=&-\phi_S, \label{breaking condition 2b}\\ \Omega_C \t &=&\Omega_P \t . \label{breaking condition 3b} \end{eqnarray}\end{subequations} Then $\theta=\pi/4$ and the spectator state reads \begin{equation}\label{dark state in case of no adiabatic coupling} \tstate{3} \t = \tfrac{1}{\sqrt{2}} \left( e^{-i\phi_S}\state{2} - \state{3} \right). \end{equation}
\subsubsection{Analytical three-state solutions}
The dynamics of the two coupled Householder states $\state{1}$ and $\tstate{2}$, coupled by the interaction $\widetilde{\Omega}_P \t $,
offer other interesting possibilities. For the two-state system $\state{1}\leftrightarrow\tstate{2}$, analytic solutions may be possible; these are known for many examples of pulse and detuning time dependences. Hence, by writing down the propagator in the Householder basis for a known two-state analytical solution,
and by using the transformation back to the original basis by the Householder reflection $ \mathsf{R} (t)$, one can write down a number of analytic solutions for the three-state loop system. These would generalize the similar analytical solutions for a $\Lambda$ system \cite{Vitanov98}.
\subsubsection{Population initially in state $\state{1}$}
If only state $\state{1}$ is initially populated then the dynamics remains confined within the effective two-state system $\state{1}\leftrightarrow\tstate{2}$. In this two-state system we can enforce complete population return to state $\state{1}$,
complete population inversion to state $\tstate{2}$ or create a superposition of states $\state{1}$ and $\tstate{2}$.
\textit{Complete population transfer} from state $\state{1}$ to the Householder state $\tstate{2}$ can be produced by a resonant $\pi$-pulse \cite{Shore},
by level-crossing adiabatic passage \cite{ARPC},
or by a variety of novel more sophisticated techniques \cite{SCRAP,RIBAP,SAP,deltafn}. Viewed in the original basis, the system ends up in a superposition of $\state{2}$ and $\state{3}$, \begin{equation} \cos \theta \t \, \state{2} \t +e^{-i\phi_P}\sin \theta \t \state{3} \t, \end{equation} with the angle $\theta$ given by (\ref{theta});
thus the superposition is fully controlled by the ratio of $\Omega_C \t $ and $\Omega_P \t $ and has a relative phase $\phi_P $.
A predetermined superposition of states $\state{1}$ and $\tstate{2}$ can be created by resonant fractional-$\pi$ pulses,
or by modifications of adiabatic-passage techniques, for example, half-SCRAP \cite{Half-SCRAP} and two-state STIRAP \cite{2s-STIRAP}. Such techniques allow, for instance, the creation of an arbitrary predetermined maximally coherent superposition of the three states $\state{1}$, $\state{2}$, and $\state{3}$. For example, one can create a maximally coherent superposition
using fractional-$\pi $ pulses that obey conditions (\ref{breaking conditions b})
and which are resonant in the Householder basis ($\Delta_2(t) =\Delta_3(t) =-\Omega_S(t)/2 $). Such an example is demonstrated in Fig.~\ref{superposition1}.
\begin{figure}
\caption{(Color online) Creation of an equal superposition of states $\state{1}$, $\state{2}$ and $\state{3}$
for Gaussian pulses: $\Omega_P(t) = \Omega_{P0}\ e^{-t^2/T^2}$, $\Omega_C(t) = \Omega_{C0}\ e^{-t^2/T^2}$,
$\Delta_2(t)=\Delta_3(t)=-\Omega_S(t)/2$, $\Omega_S(t)=\Omega_{S0}\ e^{-(t-\tau)^2}$,
with the following parameters $\Omega_{P0}=\Omega_{C0}=0.76/T$, $\Omega _{S0}=1/T$, $\tau=0.5T$.}
\label{superposition1}
\end{figure}
\subsubsection{Population initially in state $\state{2}$}
Let us assume now that it is state $\state{2}$ that is populated initially.
(The symmetric case of state $\state{3}$ initially populated is just a matter of relabelling the states.) If the $C$ pulse precedes the $P$ pulse, then we are in the dark state (\ref{dark state}) and the situation is similar to STIRAP, so complete population transfer to state $\state{3}$ occurs. The resonant case of this process was discussed and explained earlier \cite{Unanyan97,Fleischhauer99}. If we are in state $\state{2}$ and we apply the pulses in the intuitive order (the $P$ pulse precedes the $C$ pulse), then we are in the bright state and
depending on the pulses we can have complete population transfer to state $\state{1}$, or end up in a superposition of states $\state{1}$, $\state{2}$ and $\state{3}$. For example, as illustrated in Fig. \ref{superposition2}, one can use fractional-$\pi$ pulses to create an equal superposition of states $\state{1}$, $\state{2}$ and $\state{3}$.
\begin{figure}
\caption{(Color online) Creation of an equal superposition of states $\state{1}$, $\state{2}$ and $\state{3}$
for Gaussian pulses: $\Omega_P(t) = \Omega_{0}\ e^{-(t+\tau)^2/T^2} + \Omega_{0} \cos\alpha\ e^{-(t-\tau)^2/T^2}$,
$\Omega_C(t) = \Omega_{0} \sin\alpha\ e^{-(t-\tau)^2/T^2}$,
$\Delta_2(t)=\Delta_3(t)=0$, $\Omega_S(t)=-2\dot\theta$,
with the following parameters $\alpha=\pi/4$, $\Omega_0=0.567/T$, $\tau=0.5T$.}
\label{superposition2}
\end{figure}
\subsection{Effective two-photon resonance}
We assume now that the $P$ and $C$ pulses have the same time dependence and consider a resonance condition between states $\state{1}$ and $\tstate{3}$, \begin{equation}\label{resonance condition} \widetilde{\Delta}_3 \t =0. \end{equation} The resulting Householder Hamiltonian is exactly that of the lambda linkage on two-photon resonance used for STIRAP \cite{STIRAP}. The traditional dark state of the STIRAP process appears here as \begin{eqnarray} \Phi_{D} \t &=&\cos \varphi \t \,\state{1} - \sin \varphi \t \,\tstate{3} \t \notag \\ &=&\cos \varphi \t \,\state{1} -e^{i\phi_P}\sin \varphi \t \,\sin\theta \,\state{2} \t + \sin \varphi \t \,\cos \theta \state{3} \t ,
\label{dark state} \end{eqnarray} where $\tan \varphi \t = \widetilde{\Omega}_P \t / \widetilde{\Omega}_S \t$. The state $\Phi_{D} \t $ is a spectator (or population-trapping) state because it is not affected by the specified radiation, but it has components of all three of the original basis states. One can use this new kind of spectator state, with the traditional STIRAP pulse sequence of $\widetilde{\Omega}_S \t $ preceding $\widetilde{\Omega}_P \t$,
to move the initial population from state $\state{1}$ to a superposition of state $\tstate{2} \t$ and state $\tstate{3} \t$. The superposition is controlled by the ratio of $\Omega_C \t $ and $\Omega_P \t $ and has the phase $\phi_P$.
Condition \eqref{resonance condition} can always be satisfied for an appropriate choice of the (time-dependent) detuning $\Delta_2(t)$ [or $\Delta_3(t)$]. However, the specific time dependence, although possible in principle, might be complicated and difficult to produce experimentally. Condition \eqref{resonance condition} can be satisfied with constant detunings when the $P$ and $C$ pulses share the same time dependence: $\Omega_P (t) =\Omega_P f(t)$ and $\Omega_C (t) = \Omega_C f(t)$. Then the mixing angle $\theta$ is constant [see Eq.~\eqref{theta}] and $\dot\theta=0$. Two options provide the needed pulses:
(i) The conditions \begin{subequations}\label{2PR conditions}\begin{eqnarray} &&\phi_P+\phi_S = \pi /2, \label{phase condition}\\ && \Delta_3 = -\Delta_2\frac{\Omega_C^2}{\Omega_P^2}\label{detuning condition} \end{eqnarray}\end{subequations} hold. Then the $S$ field can be arbitrary and both detunings $\Delta_2$ and $\Delta_3$ can be constant.
(ii) The $S$ field is constant. Then condition \eqref{resonance condition} can be fulfilled for constant detunings that obey the relation \begin{equation}\label{detuning condition 2} \Delta_3 = -\Delta_2\frac{\Omega_C^2}{\Omega_P^2} + \frac{\Omega_C}{\Omega_P}\Omega_S \cos(\phi_P+\phi_S). \end{equation}
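Both cases follow directly from Eq.~\eqref{HDelta3}: when $\cos(\phi_P+\phi_S)=0$ the cross term vanishes and $\widetilde{\Delta}_3=0$ reduces to Eq.~\eqref{detuning condition}, while for general phases solving $\widetilde{\Delta}_3=0$ for $\Delta_3$ gives Eq.~\eqref{detuning condition 2}.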
Therefore, the usual two-photon resonance condition, necessary for the emergence of a spectator (dark) state in the original basis,
is replaced by a condition for the two-photon detuning $\Delta_3$:
either (i) Eq.~\eqref{detuning condition}, for arbitrary $S$ field but with the phase relation \eqref{phase condition},
or (ii) Eq.~\eqref{detuning condition 2}, for constant $S$ field.
If we now start initially in state $\state{1}$ and apply the $S$ pulse before the $P$ pulse, then the following superposition is formed \begin{equation} -ie^{-i\phi_S} \,\sin \theta \,\state{2} \t + \cos \theta \state{3} \t. \end{equation} The superposition characteristics are fixed by the $S$-field phase and the angle $\theta$ defined by Eq.~\eqref{theta}. Figure \ref{superposition3} illustrates how, starting in state $\state{1}$ and applying the $S$ pulse before the $P$ pulse we obtain an equal superposition of $\state{2}$ and $\state{3}$.
\begin{figure}
\caption{(Color online) Creation of an equal superposition of states $\state{2}$ and $\state{3}$, with the following couplings and detunings:
$\Omega_P(t) = \Omega_0 \cos\theta\ e^{-(t-\tau)^2/T^2}$,
$\Omega_C(t) = \Omega_0 \sin\theta\ e^{-(t-\tau)^2/T^2}$,
$\Omega_S(t) = \Omega_0\ e^{-(t+\tau)^2/T^2} $, $\Delta_2(t) = \Delta_3(t) = 0$,
where the parameters are $\theta = \pi /4$, $\Omega_0=30/T$, $\tau=0.5T$.}
\label{superposition3}
\end{figure}
\section{Reduction of arbitrary $N$-dimensional quantum systems to chains}
The Householder transformation introduced here for a three-state loop system is readily extended to a general $N$-state quantum system with arbitrary linkages,
even in the most general case when each state connects to any other state. A suitable sequence of at most $N-2$ Householder transformations can cast the Hamiltonian, which is a hermitian matrix, into a \textit{tridiagonal} form. In this sequence the Householder vector for the $n$th reflection is chosen as \begin{equation}
|v_{n}\rangle =\frac{|x_{n}\rangle -\left\vert x_{n}\right\vert
|e_{n+1}\rangle }{\left\vert |x_{n}\rangle -\left\vert x_{n}\right\vert |e_{n+1}\rangle \right\vert },
\end{equation} where $|e_{n+1}\rangle $ is a unit vector that defines the $(n+1)$-st axis, i.e. its components are zero except for a unity at the $(n+1)$-st place. Here $|x_{n}\rangle $ is the $n$th column vector of the transformed Hamiltonian after the $n$th step.
The tridiagonalization of the Hamiltonian implies that in the Householder basis, each of the new basis states is connected only to its nearest neighbor states, thus forming a \textit{chainwise} linkage pattern. The chain is conceptually simpler, and analytically easier, to treat, with a variety of exact and approximate approaches available in the literature.
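A compact numerical sketch of this reduction (ours, purely for illustration; for hermitian input a library routine such as scipy.linalg.hessenberg performs an equivalent reduction) reads:
\begin{verbatim}
import numpy as np

def householder_tridiagonalize(H):
    # Reduce a hermitian matrix H to tridiagonal form by successive Householder
    # reflections (illustrative sketch of the procedure described above).
    H = np.array(H, dtype=complex)
    N = H.shape[0]
    for n in range(N - 2):
        x = H[n + 1:, n].copy()
        e1 = np.zeros(len(x), dtype=complex)
        e1[0] = 1.0
        # reflect x onto the (n+1)-th axis; this phase choice avoids cancellation
        alpha = -np.exp(1j * np.angle(x[0])) * np.linalg.norm(x)
        v = x - alpha * e1
        if np.linalg.norm(v) < 1e-14:
            continue
        v = (v / np.linalg.norm(v)).reshape(-1, 1)
        # apply R = I - 2 v v^dagger to the trailing rows and columns
        H[n + 1:, :] -= 2 * v @ (v.conj().T @ H[n + 1:, :])
        H[:, n + 1:] -= 2 * (H[:, n + 1:] @ v) @ v.conj().T
    return H

# example: a random 4-state Hamiltonian with every state coupled to every other
A = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
print(np.round(np.abs(householder_tridiagonalize((A + A.conj().T) / 2)), 3))
\end{verbatim}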
\special{color cmyk 0 0 0 1.}
\section{Conclusions and outlook}
A two-parameter Householder reflection can break the loop-linkage pattern of a three-state system, providing instead a simple chain. For the three-state system the result can appear either as a lambda system (with initial population at one end of the chain) or as a vee system (with initial population in the middle state of the chain). In either case the system can be transformed further into a pair of coupled states and a spectator state, within which population remains trapped. This is a new kind of spectator state involving all three basis states; it contrasts with the conventional dark states that have no excited-state component.
These results hold intrinsic interest because the three-state loop is the simplest discrete-state quantum system in which nontrivial interference occurs. The present solutions may therefore offer opportunities for manipulating the quantum states of such systems.
Our objective in this paper has been to introduce this important novel transformation, and with it to show that a loop system is equivalent to a chain system. The presented examples of the uses of the Householder transformation in a three-state loop system, being by no means exhaustive,
have demonstrated a number of potential applications based on analytical approaches, ranging from hidden chain breaking and spectator states to hidden two-photon resonances and analytic solutions. These allow one to establish generic features of the interaction dynamics and engineer interactions that can produce various superposition states at will.
The results in this work for three-state systems are readily extended to $N$-state systems. In such systems, more general sequences of Householder reflections can replace arbitrarily complicated linkages with simple chain linkages. Hence an $N$-state system wherein each state is coupled to any other state can be reduced to an equivalent chain system with nearest-neighbor interactions only. Then one can apply various available analytical approaches to the Householder chain to reveal interesting novel features of the multistate dynamics,
including Hilbert space factorization, hidden spectator states and ensuing dynamical invariants.
\acknowledgments
This work has been supported by the EU ToK project CAMEL (Grant No. MTKD-CT-2004-014427), the EU RTN project EMALI (Grant No. MRTN-CT-2006-035369),
and the Bulgarian National Science Fund Grants No. WU-205/06 and No. WU-2517/07.
\end{document}
\begin{document}
\title[Regularization for SBE]{Regularization by noise and \\ stochastic Burgers equations} \author{M.~Gubinelli} \address[M.~Gubinelli]{CEREMADE UMR 7534 -- Universit\'e Paris--Dauphine} \email[M.~Gubinelli]{[email protected]} \author{M.~Jara} \address[M.~Jara]{IMPA\\ Estrada Dona Castorina 110\\ CEP 22460-320\\ Rio de Janeiro\\ Brazil} \email[M.~Jara]{[email protected]}
\begin{abstract} We study a generalized 1d periodic SPDE of Burgers type: $$ \partial_t u =- A^\theta u + \partial_x u^2 + A^{\theta/2} \xi $$ where $\theta > 1/2$, $-A$ is the 1d Laplacian, $\xi$ is a space-time white noise and the initial condition $u_0$ is taken to be (space) white noise. We introduce a notion of weak solution for this equation in the stationary setting. For these solutions we point out how the noise provides a regularizing effect, allowing us to prove existence and suitable estimates when $\theta>1/2$. When $\theta>5/4$ we obtain pathwise uniqueness. We discuss the use of the same method to study different approximations of the same equation and for a model of stationary 2d stochastic Navier-Stokes evolution. \end{abstract} \keywords{Kardar--Parisi--Zhang equation, SPDEs, noise regularization} \subjclass[2000]{00X00}
\maketitle
The stochastic Burgers equation (SBE) on the one dimensional torus $\mathbb{T}=(-\pi,\pi]$ is the SPDE \begin{equation} \label{eq:burgers} \mathrm{d} u_t = \frac12 \partial_\xi^2 u_t(\xi) \mathrm{d} t + \frac12 \partial_\xi (u_t(\xi))^2 \mathrm{d} t + \partial_\xi \mathrm{d} W_t \end{equation} where $W_t$ is a cylindrical white noise on the Hilbert space $H={L^2_0(\TT)}$ of square integrable, mean zero real function on $\mathbb{T}$ and it has the form $ W_t(\xi) = \sum_{k\in\mathbb{Z}_0} e_k(\xi) \beta^k_t $ with $\mathbb{Z}_0 = \mathbb{Z}\backslash \{0\}$ and $e_k(\xi)=e^{i k\xi}/\sqrt{2\pi}$ and $\{\beta_t^k\}_{t\ge 0, k\in\mathbb{Z}_0}$ is a family of complex Brownian motions such that $(\beta^k_t)^*=\beta^{-k}_t$ and with covariance $\mathbb{E}[\beta_t^k \beta_t^q ]=\mathbb{I}_{q+k=0}$. Formally the solution $u$ of eq.~\eqref{eq:burgers} is the derivative of the solution of the Kardar--Parisi--Zhang equation \begin{equation} \label{eq:kpz} \mathrm{d} h_t = \frac12 \partial_\xi^2 h_t(\xi) \mathrm{d} t + \frac12 (\partial_\xi h_t(\xi))^2 \mathrm{d} t + \mathrm{d} W_t \end{equation} which is believed to capture the macroscopic behavior of a large class of surface growth phenomena~\cite{KPZ}.
The main difficulty with eq.~\eqref{eq:burgers} is given by the rough nonlinearity which is incompatible with the distributional nature of the typical trajectories of the process. Note in fact that, at least formally, eq.~\eqref{eq:burgers} preserves the white noise on $H$ and that the square in the non-linearity is almost surely $+\infty$ on the white noise. Additive renormalizations in the form of Wick products are not enough to cure this singularity~\cite{DDT}.
In~\cite{BG} Bertini and Giacomin studying the scaling limits for the fluctuations of an interacting particles system show that a particular regularization of~\eqref{eq:burgers} converges in law to a limiting process $u^{\textrm{hc}}_t(\xi)=\partial_\xi \log Z_t(\xi)$ (which is referred to as the Hopf-Cole solution) where $Z$ is the solution of the stochastic heat equation with multiplicative space--time white noise \begin{equation} \label{eq:she} \mathrm{d} Z_t = \frac12 \partial_\xi^2 Z_t(\xi) \mathrm{d} t + Z_t(\xi) \mathrm{d} W_t(\xi) . \end{equation}
The Hopf--Cole solution is believed to be the correct physical solution for~\eqref{eq:burgers}; however, until recently a rigorous notion of solution to eq.~\eqref{eq:burgers} was lacking, so the issue of uniqueness remained open.
Jara and Gon\c{c}alves~\cite{JG} introduced a notion of \emph{energy solution} for eq.~\eqref{eq:burgers} and showed that the macroscopic current fluctuations of a large class of weakly non-reversible particle systems on $\mathbb{Z}$ obey the Burgers equation in this sense. Moreover their results show that the Hopf-Cole solution is also an energy solution of eq.~\eqref{eq:burgers}.
More recently Hairer~\cite{Hairer} obtained a complete existence and uniqueness result for KPZ. In this remarkable paper the theory of controlled rough paths is used to give meaning to the nonlinearity and a careful analysis of the series expansion of the candidate solutions allows one to give a consistent meaning to the equation and to obtain a uniqueness result. In particular Hairer's solution coincides with the Cole-Hopf ansatz.
In this paper we take a different approach to the problem. We want to point out the regularizing effect of the linear stochastic part of the equation on the non-linear part. This is linked to some similar remarks of Assing~\cite{assing1,assing2} and to the approach of Jara and Gon\c{c}alves~\cite{JG}. Our point of view is motivated also by similar analysis in the PDE and SPDE context where the noise or a dispersive term provides enough regularization to treat some non-linear term: there are examples involving the stochastic transport equation~\cite{FGP}, the periodic Korteweg-de~Vries equation~\cites{kdv,babin-kdv} and the fast rotating Navier-Stokes equation~\cite{babin-ns}. In particular in the paper~\cite{kdv} it is shown how, in the context of the periodic Korteweg-de~Vries equation, an appropriate notion of controlled solution can make sense of the non-linear term in a space of distributions. This point of view has also links with the approach via controlled paths to the theory of rough paths~\cite{controlling}.
With our approach we are not able to obtain uniqueness for the SBE above and we resort to studying the more general equation (SBE$_\theta$): \begin{equation} \label{eq:burgers-theta} \mathrm{d} u_t = - A^\theta u_t \mathrm{d} t + F(u_t) \mathrm{d} t + A^{\theta/2} \mathrm{d} W_t \end{equation} where $F(u_t)(\xi)=\partial_\xi (u_t(\xi))^2$, $-A$ is the Laplacian with periodic b.c., where $\theta\ge 0$ and where the initial condition is taken to be white noise. In the case $\theta=1$ we essentially recover the stationary case of the SBE above (modulo a mismatch in the noise term which does not affect its law).
For any $\theta \ge 0$ we introduce a class $\mathcal{R}_\theta$ of distributional processes ``controlled'' by the noise, in the sense that these processes have a \emph{small time} behaviour similar to that of the stationary Ornstein-Uhlenbeck process $X$ which solves the linear part of the dynamics: \begin{equation} \label{eq:ou-theta} \mathrm{d} X_t = - A^\theta X_t \mathrm{d} t + A^{\theta/2} \mathrm{d} W_t, \end{equation} where $X_0$ is white noise.
When $\theta > 1/2$ we are able to show that the \emph{time integral} of the non-linear term appearing in SBE$_\theta$ is well defined, namely that for all $v\in \mathcal{R}_\theta$ \begin{equation} \label{eq:drift-process} A^v_t = \int_0 ^t F(v_s) \mathrm{d} s \end{equation} is a well defined process with continuous paths in a space of distributions on $\mathbb{T}$ of specific regularity. Note that this process is not necessarily of finite variation with respect to the time parameter even when tested with smooth test functions.
The existence of the drift process~\eqref{eq:drift-process} allows us to formulate naturally the SBE$_\theta$ equation in the space $\mathcal{R}_\theta$ of controlled processes and gives a notion of solution quite similar to that of energy solution introduced by Jara and Gon\c{c}alves~\cite{JG}. Existence of (probabilistically) weak solutions will be established for any $\theta > 1/2$, that is, well below the KPZ regime. The precise notion of solution will be described below. We are also able to easily show pathwise uniqueness when $\theta > 5/4$, but the case $\theta=1$ seems still (way) out of range for this technique. In particular the question of pathwise uniqueness is tightly linked with that of existence of strong solutions and the key estimates which will allow us to handle the drift~\eqref{eq:drift-process} are not strong enough to give a control on the difference of two solutions (with the same noise) or on the sequence of Galerkin approximations.
Similar regularization phenomena for stochastic transport equations are studied in~\cite{FGP} and in~\cite{DF} for infinite dimensional SDEs. This is also linked to the fundamental paper of Kipnis and Varadhan~\cite{KV} on CLT for additive functionals and to the Lyons-Zheng representation for diffusions with singular drifts~\cites{MR1988703, MR2065168}.
\textbf{Plan.} In Sec.~\ref{sec:controlled} we define the class of controlled paths and we recall some results of the stochastic calculus via regularization which are needed to handle the It\^o formula for the controlled processes. Sec.~\ref{sec:ito-trick} is devoted to introducing our main tool, which is a moment estimate of an additive functional of a stationary Dirichlet process in terms of the quadratic variation of suitable forward and backward martingales. In Sec.~\ref{sec:estimates} we use this estimate to provide uniform bounds for the drift of any stationary solution. These bounds are used in Sec.~\ref{sec:existence} to prove tightness of the approximations when $\theta > 1/2$ and to show existence of controlled solutions of the stochastic Burgers equation via Galerkin approximations. Finally in Sec.~\ref{sec:uniq} we prove our pathwise uniqueness result in the case $\theta > 5/4$. In Sec.~\ref{sec:alternative} we discuss related results for the model introduced in~\cite{DDT}.
\textbf{Notations.} We write $X \lesssim_{a,b,\dots} Y$ if there exists a positive constant $C$ depending only on $a,b,\dots$ such that $X \le C Y$. We write $X \sim_{a,b,\dots} Y$ iff $X\lesssim_{a,b,\dots} Y \lesssim_{a,b,\dots} X$.
We let $\mathcal{S}$ be the space of smooth test functions on $\mathbb{T}$, $\mathcal{S}'$ the space of distributions and $\langle \cdot,\cdot\rangle$ the corresponding duality.
On the Hilbert space $H={L^2_0(\TT)}$ the family $\{e_k\}_{k\in\mathbb{Z}_0}$ is a complete orthonormal basis. On $H$ we consider the space of smooth cylinder functions $\mathcal{C}yl$ which depends only on finitely many coordinates on the basis $\{e_k\}_{k\in \mathbb{Z}_0}$ and for $\varphi \in\mathcal{C}yl$ we consider the gradient $D \varphi : H\to H$ defined as $D \varphi(x) = \sum_{k\in\mathbb{Z}_0} D_k \varphi(x) e_k$ where $D_k = \partial_{x_k}$ and $x_k = \langle e_k,x \rangle$ are the coordinates of $x$.
For any $\alpha\in \mathbb{R}$ define the space $\mathcal{F} L^{p,\alpha}$ of functions on the torus for which $$
|x|_{\mathcal{F} L^{p,\alpha}} = \big[\sum_{k\in\mathbb{Z}_0} (|k|^\alpha |x_k|)^p\big]^{1/p}<+\infty \, \text{ if $p<\infty$ and
}\,
|x|_{\mathcal{F} L^{\infty,\alpha}} = \sup_{k\in\mathbb{Z}_0} |k|^\alpha |x_k| <+\infty . $$
We will use the notation $H^\alpha = \mathcal{F} L^{2,\alpha}$ for the usual Sobolev spaces of periodic functions on $\mathbb{T}$. We let $A=-\partial_\xi^2$ and $B=\partial_\xi$ as unbounded operators acting on $H$ with domains respectively $H^2$ and $H^{1}$. Note that $\{e_k\}_{k\in\mathbb{Z}_0}$ is a basis of eigenvectors of $A$ for which we denote $\{\lambda_k = |k|^2 \}_{k\in\mathbb{Z}_0}$ the associated eigenvalues. The operator $A^\theta$ will then be defined on $H^{\theta}$ by $A^\theta e_k = |k|^{2\theta}e_k$ with domain $H^{2\theta}$. The linear operator $\Pi_N: H \to H$ is the projection on the subspace generated by $\{e_k\}_{k\in\mathbb{Z}_0, |k|\le N}$.
Denote $\mathcal{C}_T V = C([0,T],V)$ the space of continuous functions from $[0,T]$ to the Banach space $V$ endowed with the supremum norm and with $\mathcal{C}^\gamma_T V = C^\gamma([0,T],V)$ the subspace of $\gamma$-H\"older continuous functions in $\mathcal{C}_T V$ with the $\gamma$-H\"older norm.
\section{Controlled processes} \label{sec:controlled}
We introduce a space of stationary processes which ``looks like" an Ornstein-Uhlenbeck process. The invariant law at fixed time of these processes will be given by the canonical Gaussian cylindrical measure $\mu$ on $H$ which we consider as a Gaussian measure on $H^{\alpha}$ for any $\alpha<-1/2$. This measure is fully characterized by the equation $$ \int e^{i \langle \psi,x \rangle}\mu(\mathrm{d} x) = e^{-\langle \psi,\psi\rangle/2}, \qquad \forall\psi\in H ; $$ or alternatively by the integration by parts formula $$ \int D_k \varphi(x) \mu(\mathrm{d} x) = \int x_{-k} \varphi(x) \mu(\mathrm{d} x),\qquad \forall k\in\mathbb{Z}_0, \varphi \in\mathcal{C}yl . $$
\begin{definition}[Controlled process] \label{def:controlled} For any $\theta\ge 0$ let $\mathcal{R}_\theta$ be the space of stationary stochastic processes $(u_t)_{0 \leq t \leq T}$ with continuous paths in $\mathcal{S}'$ such that \begin{itemize} \item[i)] the law of $u_t$ is the white noise $\mu$ for all $t\in[0,T]$; \item[ii)] there exists a process $\mathcal{A} \in C([0,T],\mathcal{S}')$ of zero quadratic variation such that $\mathcal{A}_0 = 0$ and satisfying the equation \begin{equation} \label{eq:controlled-decomposition} u_t(\varphi) = u_0(\varphi) + \int_0^t u_s(-A^\theta \varphi) \mathrm{d} s+\mathcal{A}_t(\varphi) + M_t(\varphi) \end{equation}
for any test function $\varphi \in \mathcal S$, where $M_t(\varphi)$ is a martingale with respect to the filtration generated by $u$ with quadratic variation $[M(\varphi)]_t = 2t\|A^{\theta/2} \varphi\|_{L^2_0(\mathbb{T})}^2$; \item[iii)] the reversed processes $\hat u_t = u_{T-t}$, $\hat \mathcal{A}_t = -\mathcal{A}_{T-t}$ satisfy the same equation with respect to their own filtration (the backward filtration of $u$). \end{itemize} \end{definition}
For controlled processes we will prove that if $\theta>1/2$ the Burgers drift is well defined by approximating it and passing to the limit. Let $\rho:\mathbb{R}\to\mathbb{R}$ be a positive smooth test function with unit integral and $\rho^\varepsilon(\xi)=\rho(\xi/\varepsilon)/\varepsilon$ for all $\varepsilon>0$. For simplicity in the proofs we require that the function $\rho$ has a Fourier transform $\hat\rho$ supported in some ball and such that $\hat\rho = 1$ in a smaller ball. This is a technical condition which is easy to remove, but we refrain from doing so here in order not to obscure the main line of the argument.
\begin{lemma} \label{lemma:burgers-drift} If $u\in\mathcal{R}_\theta$ and if $\theta >1/2$ then almost surely $$ \lim_{\varepsilon\to 0} \int_0^t F(\rho^\varepsilon* u_s) \mathrm{d} s $$ exists in the space $C([0,T],\mathcal{F} L^{\infty,\zeta})$ for some $\zeta<0$. We \emph{denote} by $\int_0^t F( u_s) \mathrm{d} s$ the resulting process with values in $C([0,T],\mathcal{F} L^{\infty,\zeta})$. \end{lemma} \begin{proof} The proof is postponed to Sect.~\ref{sec:estimates}. \end{proof}
It will turn out that for this process we have good control of its space and time regularity, together with some exponential moment estimates. It is then relatively natural to \emph{define} solutions of eq.~\eqref{eq:burgers-theta} by the following self-consistency condition.
\begin{definition}[Controlled solution] Let $\theta>1/2$, then a process $u\in\mathcal{R}_\theta$ is a \emph{controlled solution} of SBE$_\theta$ if almost surely \begin{equation} \label{eq:self-consistent}
\mathcal{A}_t(\varphi) = \langle \varphi, \int_0^t F(u_s) \mathrm{d} s \rangle \end{equation} for any test function $\varphi \in \mathcal S$ and any $t\in[0,T]$. \end{definition}
Note that these controlled solutions are a generalization of the notion of probabilistically weak solutions of SBE$_\theta$. The key point is that the drift term is not given explicitly as a function of the solution itself but characterized by the self-consistency relation~\eqref{eq:self-consistent}. In this sense controlled solutions are to be understood as a couple $(u,\mathcal{A})$ of processes satisfying compatibility relations.
An analogy which may be familiar to the reader is that of a diffusion on a bounded domain with reflection at the boundary, where the solution is described by a pair of processes $(X,L)$ representing the position of the diffusing particle and its local time at the boundary~\cite{RY}.
Note also that there is no requirement on $\mathcal{A}$ to be adapted to $u$. Our analysis below cannot exclude the possibility that $\mathcal{A}$ contains some further randomness and that the solutions are strictly weak, that is, not adapted to the filtration generated by the martingale term and the initial condition.
\section{The It\^o trick} \label{sec:ito-trick}
In order to prove the regularization properties of controlled processes we will need some stochastic calculus and in particular an It\^o formula and some estimates for martingales. Let us recall some basic elements here. In this section $u$ will always be a controlled process in $\mathcal{R}_\theta$. For any test function $\varphi\in\mathcal{S}$ the processes $(u_t(\varphi))_{t}$ and $(\hat u_t(\varphi))_{t}$ are Dirichlet processes: sums of a martingale and a zero quadratic variation process. Note that we do not want to assume controlled processes to be semimartingales (even when tested with smooth functions). This is compatible with the regularity of our solutions and there is no indication that solutions of SBE$_\theta$ even with $\theta=1$ are distributional semimartingales. A suitable notion of stochastic calculus which is valid for a large class of processes and in particular for Dirichlet processes is the stochastic calculus via regularization developed by Russo and Vallois~\cite{RV}. In this approach the It\^o formula can be extended to Dirichlet processes. In particular if $(X^i)_{i=1,\dots,k}$ is an $\mathbb{R}^k$ valued Dirichlet process and $g$ is a $C^2(\mathbb{R}^k;\mathbb{R})$ function then $$ g(X_t) = g(X_0) + \sum_{i=1}^k\int_0^t \partial_i g(X_s) \mathrm{d}^- X^i_s + \frac12 \sum_{i,j=1}^k \int_0^t \partial^2_{i,j} g(X_s) \mathrm{d}^- [X^i,X^j]_s $$ where $\mathrm{d}^-$ denotes the forward integral and $[X,X]$ the quadratic covariation of the vector process $X$. Decomposing $X=M+N$ as the sum of a martingale $M$ and a zero quadratic variation process $N$ we have $[X,X]=[M,M]$ and $$ g(X_t) = g(X_0) + \sum_{i=1}^k \int_0^t \partial_i g(X_s) \mathrm{d}^- M^i_s + \sum_{i=1}^k \int_0^t \partial_i g(X_s) \mathrm{d}^- N^i_s $$ $$ + \sum_{i,j=1}^k\frac12 \int_0^t \partial^2_{i,j} g(X_s) \mathrm{d}^- [M^i,M^j]_s $$ where now $\mathrm{d}^- M$ coincides with the usual It\^o integral and $[M,M]$ is the usual quadratic variation of the martingale $M$. The integral $\int_0^t \partial_i g(X_s) \mathrm{d}^- N^i_s$ is well-defined due to the fact that all the other terms in this formula are well defined. The case in which the function $g$ depends explicitly on time can be handled by the above formula by considering time as an additional (0-th) component of the process $X$ and using the fact that $[X^i,X^0]=0$ for all $i=1,\dots,k$. In the computations which follow we will only need to apply the It\^o formula to smooth functions.
Let us denote by $L_0$ the generator of the Ornstein-Uhlenbeck process associated to the operator $A^\theta$: \begin{equation} \label{eq:generator-ou}
L_0 \varphi(x) = \sum_{k \in \mathbb Z_0} |k|^{2\theta} \big(- x_k D_k \varphi(x) + \tfrac{1}{2} D_{-k}D_k \varphi(x)\big). \end{equation}
Consider now a smooth cylinder function $h:[0,T]\times \Pi_N H\to \mathbb{R}$. The It\^o formula for the finite quadratic variation process $(u^N_t = \Pi_N u_t)_t$ gives $$ h(t,u^N_t)=h(0,u^N_0)+\int_0^t (\partial_s + L^N_0) h(s,u^N_s) \mathrm{d} s +\int_0^t D h(s,u^N_s) \mathrm{d}\Pi_N \mathcal{A}_s + M^+_t $$ where $$
L^N_0 h(s,x) = \sum_{k\in\mathbb{Z}_0 : |k|\le N} |k|^{2\theta} \big( - x_{k} D_k h(s,x) + \tfrac{1}{2} D_{-k} D_k h(s,x)\big) $$ is the restriction of the operator $L_0$ to $\Pi_N H$, and where the martingale part, denoted $M^+$, has quadratic variation given by $ [ M^+ ]_t = \int_0^t \mathcal{E}^\theta_N(h(s,\cdot))(u^N_s) \mathrm{d} s $, where $$
\mathcal{E}_N^\theta(\varphi)(x) = \frac12 \sum_{k\in \mathbb{Z}_0: |k|\le N}|k|^{2\theta} |D_k \varphi(x)|^2 . $$ Similarly the It\^o formula on the backward process reads $$ h(T-t,u^N_{T-t})=h(T,u^N_T)+ \int_0^{t} (-\partial_s + L^N_0) h(T-s,u^N_{T-s}) \mathrm{d} s $$ $$ - \int_0^{t} D h(T-s,u^N_{T-s}) \mathrm{d} \Pi_N \mathcal{A}_{T-s} + M^-_t $$ with $ [ M^- ]_t = \int_0^t \mathcal{E}^\theta_N(h(T-s,\cdot))(u^N_{T-s}) \mathrm{d} s $ so we have the key equality \begin{equation} \label{eq:key-representation} \int_0^t 2 L_0^N h(s,u^N_{s})\mathrm{d} s= -M^+_t + M^-_{T-t}-M^-_T. \end{equation} This allows us to represent the time integral of $L_0^N h$ as a sum of martingales, over which we have better control. On this martingale representation we can use the Burkholder--Davis--Gundy inequalities to prove the following bound. \begin{lemma}[It\^o trick] \label{lemma:ito-trick} Let $h : [0,T]\times \Pi_N H \to \mathbb{R}$ be a cylinder function. Then for any $p \geq 1$, \begin{equation} \label{eq:ito-trick}
\left\|\sup_{t\in[0,T]}\left|\int_0^t L_0 h(s,\Pi_N u_s) \mathrm{d} s\right|\right\|_{L^p(\mathbb{P}_\mu)} \lesssim_p
T^{1/2} \sup_{s\in[0,T]}\left\| \mathcal{E}^\theta(h(s,\cdot)) \right\|^{1/2}_{L^{p/2}(\mu)} \end{equation} where $
\mathcal{E}^\theta(\varphi)(x) = \frac12 \sum_{k\in \mathbb{Z}_0}|k|^{2\theta} |D_k \varphi(x)|^2 $. In the particular case $h(s,x)= e^{a(T- s)}\tilde h(x)$ for some $a\in\mathbb{R}$ we have the improved estimate \begin{equation} \label{eq:ito-trick-conv} \begin{split}
\left\|\int_0^T e^{a(T-s)} L_0 \tilde h(\Pi_N u_s) \mathrm{d} s\right\|_{L^p(\mathbb{P}_\mu)} \lesssim_p
\left(\frac{e^{2aT}-1}{2a}\right)^{1/2}
\left\| \mathcal{E}^\theta(\tilde h) \right\|^{1/2}_{L^{p/2}(\mu)} .
\end{split} \end{equation} \end{lemma} \begin{proof} $$
\left\|\sup_{t\in[0,T]}\left|\int_0^t 2 L_0^N h(s,u_s) \mathrm{d} s\right|\right\|_{L^p(\mathbb{P}_\mu)} \le
\left\|\sup_{t\in[0,T]}|M^+_t| \right\|_{L^p(\mathbb{P}_\mu)}+
2 \left\|\sup_{t\in[0,T]}|M^-_t| \right\|_{L^p(\mathbb{P}_\mu)} $$ $$
\lesssim_p \left\| \langle M^+\rangle_T \right\|_{L^{p/2}(\mathbb{P}_\mu)}^{1/2}+\left\| \langle M^-\rangle_T \right\|_{L^{p/2}(\mathbb{P}_\mu)}^{1/2}
\lesssim_p \left\|\int_0^T \mathcal{E}^\theta(h(s,\cdot))(u_s) \mathrm{d} s \right\|_{L^{p/2}(\mathbb{P}_\mu)}^{1/2} $$ $$
\lesssim_p \left(\int_0^T\left\| \mathcal{E}^\theta(h(s,\cdot))(u_s) \right\|_{L^{p/2}(\mathbb{P}_\mu)} \mathrm{d} s\right)^{1/2}
\lesssim_p T^{1/2} \sup_{s\in[0,T]} \left\| \mathcal{E}^\theta(h(s,\cdot)) \right\|_{L^{p/2}(\mu)}^{1/2} . $$ For the convolution we bound as follows \begin{equation*}
\begin{split}
\left\|\int_0^T e^{a(T-s)} 2 L_0^N \tilde h(u_s) \mathrm{d} s\right\|_{L^p(\mathbb{P}_\mu)} & \lesssim_p \left(\int_0^T e^{2a(T-s)}\mathrm{d} s\right)^{1/2}
\left\| \mathcal{E}^\theta(\tilde h)(u_0) \right\|^{1/2}_{L^{p/2}(\mathbb{P}_\mu)}
\\ & \lesssim_p
\left(\frac{e^{2aT}-1}{2a}\right)^{1/2}
\left\| \mathcal{E}^\theta(\tilde h) \right\|^{1/2}_{L^{p/2}(\mu)}
\end{split} \end{equation*} \end{proof} The bound~\eqref{eq:ito-trick} in the present form (with the use of the backward martingale to remove the drift part) has been inspired by~\cite{CLO}*{Lemma 4.4}.
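To illustrate the mechanism behind the It\^o trick (this is only a numerical sanity check, not part of the argument), consider the toy case of a stationary one dimensional Ornstein--Uhlenbeck process $\mathrm{d} X_t = -X_t \,\mathrm{d} t + \sqrt{2}\,\mathrm{d} W_t$ with generator $L_0 = -x\partial_x + \partial_x^2$, and take $h(x)=x$, so that $L_0 h(x) = -x$. The martingale representation predicts that $\int_0^T L_0 h(X_s)\mathrm{d} s$ fluctuates at order $T^{1/2}$ rather than the naive order $T$; the Python sketch below (all parameters hypothetical, explicit Euler--Maruyama scheme) compares the sampled variance with the exact stationary value $2(T-1+e^{-T})$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dt, n_paths = 1e-2, 2000

for T in (1.0, 10.0, 100.0):
    n = int(T / dt)
    X = rng.standard_normal(n_paths)          # stationary start, X_0 ~ N(0,1)
    I = np.zeros(n_paths)
    for _ in range(n):
        I += X * dt                           # accumulate int_0^T X_s ds
        X += -X * dt + np.sqrt(2 * dt) * rng.standard_normal(n_paths)
    # exact stationary variance: Var(I_T) = 2(T - 1 + exp(-T)) ~ 2T for large T
    print(T, I.var(), 2 * (T - 1 + np.exp(-T)))
\end{verbatim}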
\begin{lemma}[Exponential integrability] Let $h : [0,T]\times \Pi_N H \to \mathbb{R}$ be a cylinder function. Then \begin{equation} \label{eq:exp-ito-trick} \mathbb{E} \sup_{t\in[0,T]}e^{2 \int_0^t L_0^N h(s,\Pi_N u_s) \mathrm{d} s} \lesssim \mathbb{E} e^{8 \int_0^T \mathcal{E}^\theta(h(s,\cdot))(u_s) \mathrm{d} s } \end{equation} \end{lemma} \begin{proof} Let, as above, $M^\pm$ be the (Brownian) martingales in the representation of the integral $\int_0^t L_0^N h(s,\Pi_N u_s) \mathrm{d} s$. By the Cauchy--Schwarz inequality $$ \mathbb{E} \sup_{t\in[0,T]}e^{2 \int_0^t L_0^N h(s,\Pi_N u_s) \mathrm{d} s} \le \left[\mathbb{E} \sup_{t\in[0,T]}e^{2 M^+_t}\right]^{1/2} \left[\mathbb{E} \sup_{t\in[0,T]}e^{2 (M^-_T-M^-_{T-t})}\right]^{1/2}. $$ By Novikov's criterion $ e^{4 M^+_t - 8 \langle M^+\rangle_t } $ is a martingale for $t\in[0,T]$ if $\mathbb{E} e^{8 \langle M^+\rangle_T} < \infty$. In this case $$ \mathbb{E} \sup_{t\in[0,T]}e^{2 M^+_t} \le \mathbb{E} \sup_{t\in[0,T]}(e^{2 M^+_t- 4 \langle M^+\rangle_t}\sup_{t\in[0,T]} e^{ 4 \langle M^+\rangle_t }) $$ $$ \le\left[\mathbb{E} \sup_{t\in[0,T]} e^{4 M^+_t- 8 \langle M^+\rangle_t}\right]^{1/2} \left[\mathbb{E} e^{8 \langle M^+\rangle_T }\right]^{1/2} $$ and by Doob's inequality we get that the previous expression is bounded, up to a multiplicative constant, by $$ \left[\mathbb{E} e^{4 M^+_T- 8 \langle M^+\rangle_T}\right]^{1/2} \left[\mathbb{E} e^{8 \langle M^+\rangle_T }\right]^{1/2} \le \left[\mathbb{E} e^{8 \langle M^+\rangle_T }\right]^{1/2}. $$ Reasoning similarly for $M^-$ we obtain that $$ \mathbb{E} \sup_{t\in[0,T]}e^{2 \int_0^t L_0^N h(s,\Pi_N u_s) \mathrm{d} s} \lesssim \mathbb{E} e^{8 \langle M^+\rangle_T } = \mathbb{E} e^{8 \int_0^T \mathcal{E}^\theta(h(s,\cdot))(u_s) \mathrm{d} s }. $$ \end{proof}
\section{Estimates on the Burgers drift} \label{sec:estimates}
In this section we provide the key estimates on the Burgers drift via the quadratic variations of the forward and backward martingales in its decomposition. Let $F(x)(\xi) = B (x(\xi))^2$ and $F_N(x) = F(\Pi_N x)$. Define $$ H_N(x) = -\int_0^\infty F_N(e^{-A^\theta t}x)\mathrm{d} t $$ and consider $L_0 H_N(x)$ as acting on each Fourier coordinate of $H_N(x)$. Remark that the second order part of $L_0$ does not appear in the computation of $L_0 H_N$ since $$ D_k D_{-k} F(\Pi_N e^{-A^\theta t} x)=0 $$ for each $k\in\mathbb{Z}_0$. Indeed $$ D_{-k} D_k F(\Pi_N e^{-A^\theta t} x) = B [D_{-k} D_k (\Pi_N e^{-A^\theta t} x)^2]=2 B D_{-k} [(\Pi_N e^{-A^\theta t} x) (\Pi_N e^{-A^\theta t} e_k)] $$ $$ =2 [B(\Pi_N e^{-A^\theta t} e_{-k}) (\Pi_N e^{-A^\theta t} e_k)+(\Pi_N e^{-A^\theta t} e_{-k}) B(\Pi_N e^{-A^\theta t} e_k)] = 0 $$ Then it is easy to check that $$ L_0 H_N(\Pi_N x) = \langle A^\theta x, D H_N(\Pi_N x)\rangle = -2 \int_0^\infty B [(e^{-A^\theta t}\Pi_N x)(A^\theta e^{-A^\theta t} \Pi_N x) ]\mathrm{d} t $$ $$ = -\int_0^\infty \frac{\mathrm{d} }{\mathrm{d} t}B [(e^{-A^\theta t}\Pi_N x)^2 ]\mathrm{d} t = B (\Pi_N x)^2=F(\Pi_N x) $$ since $\lim_{t\to \infty} B [(e^{-A^\theta t}\Pi_N x)^2 ] = 0$. Denote by $(x_k)_{k\in\mathbb{Z}_0}$ and $(H_N(x)_k)_{k\in\mathbb{Z}_0}$ the coordinates of $x=\sum_{k\in\mathbb{Z}_0} x_k e_k$ and $H_N(x)=\sum_{k\in\mathbb{Z}_0} H_N(x)_k e_k$ in the canonical basis $(e_k)_{k\in\mathbb{Z}_0}$. Then a direct computation gives an explicit formula for $H_N(x)$: $$ (H_{N}(x))_k =
2 ik \sum_{k_1,k_2 : k=k_1+k_2} \frac{\mathbb{I}_{|k|,|k_1|,|k_2|\le N}}{|k_1|^{2\theta}+|k_2|^{2\theta}} x_{k_1} x_{k_2}. $$ Let us denote with $(H_{N}(x))_k^{\pm}$ respectively the real and imaginary parts of this quantity: $(H_{N}(x))_k^{\pm}= ((H_{N}(x))_k\pm (H_{N}(x))_{-k})/(2 i^{\pm})$ where $i^+=1$ and $i^-=i$. Now $$ (H_{N}(x))^\pm_k =
i^{\mp}k \sum_{k_1,k_2 : k=k_1+k_2} \frac{\mathbb{I}_{|k|,|k_1|,|k_2|\le N}}{|k_1|^{2\theta}+|k_2|^{2\theta}} (x_{k_1} x_{k_2}\mp x_{-k_1} x_{-k_2}) $$ and recall that $
\mathcal{E}^\theta((H_N)^\pm_k)(x) = \sum_{q\in\mathbb{Z}_0} |q|^{2\theta} |D_q H^\pm_{N,k}(x)|^2 $. \begin{lemma} \label{lemma:energy-estimates} For $\lambda >0$ small enough we have \begin{equation} \label{eq:first-energy-estimate}
\sup_{k\in\mathbb{Z}_0} \mathbb{E} \exp\left[\lambda |k|^{2\theta-3} \mathcal{E}^\theta((H_N)^\pm_k)(u_0)\right] \lesssim 1 \end{equation} and \begin{equation} \label{eq:second-energy-estimate}
\sup_{1\le M \le N}\sup_{k\in\mathbb{Z}_0} \mathbb{E} \exp\left[\lambda |k|^{-2} M^{2\theta-1} \mathcal{E}^\theta((H_N-H_M)^\pm_k)(u_0)\right] \lesssim 1. \end{equation} \end{lemma} \begin{proof} We start by computing $\mathcal{E}((H_N)^\pm_k)$: noting that $$
D_q (H_{N})^\pm_k(x)= i^\mp k \left[ \frac{\mathbb{I}_{|k|,|q|,|k-q|\le N}}{|q|^{2\theta}+|k-q|^{2\theta}} x_{k-q}\mp \frac{\mathbb{I}_{|k|,|q|,|k+q|\le N}}{|q|^{2\theta}+|k+q|^{2\theta}} x_{-k-q}\right] $$ we have \begin{equation*}
\begin{split}
\mathcal{E}^\theta((H_N)^\pm_k)(x) & = \sum_{q\in\mathbb{Z}_0} |k|^2 |q|^{2\theta}\left[
2 \frac{\mathbb{I}_{|k|,|q|,|k-q|\le N}}{(|q|^{2\theta}+|k-q|^{2\theta})^2} |x_{k-q}|^2
\right . \\
& \left .
\qquad \qquad \mp \frac{\mathbb{I}_{|k|,|q|,|k-q|\le N}}{|q|^{2\theta}+|k-q|^{2\theta}} \frac{\mathbb{I}_{|k|,|q|,|k+q|\le N}}{|q|^{2\theta}+|k+q|^{2\theta}} (x_{k-q} x_{k+q}+x_{-k+q} x_{-k-q})
\right] \end{split} \end{equation*} which gives the bound $$
\mathcal{E}^\theta((H_N)^\pm_k)(x)\lesssim |k|^2 \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}} \frac{|k_1|^{2\theta}\mathbb{I}_{|k|,|k_1|,|k_2|\le N}}{(|k_1|^{2\theta}+|k_2|^{2\theta})^2} |x_{k_2}|^2 $$ $$
\lesssim |k|^2 \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}} \frac{\mathbb{I}_{|k|,|k_1|,|k_2|\le N}}{|k_1|^{2\theta}+|k_2|^{2\theta}} |x_{k_2}|^2 = \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}}c(k,k_1,k_2) |x_{k_2}|^2 = h_N(x) $$
where $c(k,k_1,k_2) = |k|^2/(|k_1|^{2\theta}+|k_2|^{2\theta})$. Let $$
I_N(k) = \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}} c(k,k_1,k_2) $$ and note that the sum in $I_N(k)$ can be bounded by the corresponding integral, which after the change of variables $q\mapsto |k| q$ gives (uniformly in $N$) $$
I_N(k) \lesssim |k|^{2} \int_{\mathbb{R}} \frac{\mathrm{d} q}{|q|^{2\theta}+|k-q|^{2\theta}}
= |k|^{3-2\theta} \int_{\mathbb{R}} \frac{\mathrm{d} q}{|q|^{2\theta}+|1-q|^{2\theta}} \lesssim |k|^{3-2\theta} $$ since the last integral is finite for $\theta > 1/2$. Then $$
\mathbb{E} e^{\lambda |k|^{2\theta-3}\mathcal{E}^\theta((H_N)^\pm_k)(u_0)} \le \mathbb{E} e^{\lambda C|k|^{2\theta-3} h_N(u_0)} $$ $$
\le \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}} c(k,k_1,k_2) \mathbb{E} \frac{e^{\lambda C |k|^{2\theta-3} I_N(k) |(u_0)_{k_2}|^2}}{I_N(k)}
\le \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}} c(k,k_1,k_2) \mathbb{E} \frac{e^{\lambda C'|(u_0)_{k_2}|^2}}{I_N(k)} $$
where we used the previous bound to say that $C|k|^{2\theta-3} I_N(k) \le C'$ uniformly in $k$. Recall that $(u_0)_k$ has a Gaussian distribution of mean zero and unit variance. Therefore for $\lambda$ small enough $\mathbb{E} e^{\lambda C'|(u_0)_{k_2}|^2}\lesssim 1$ uniformly in $k_2$, so that $$
\mathbb{E} e^{\lambda |k|^{2\theta-3}\mathcal{E}^\theta((H_N)^\pm_k)(u_0)} \lesssim 1. $$ This establishes the claimed exponential bound for $\mathcal{E}^\theta((H_N(x))_k^\pm)$. Similarly we have $$
\mathcal{E}^\theta((H_N-H_M)^\pm_k)(x) \lesssim \sum_{k_1,k_2 :k_1+k_2=k} (\mathbb{I}_{|k|,|k_1|,|k_2|\le N}-\mathbb{I}_{|k|,|k_1|,|k_2|\le M})^2 c(k,k_1,k_2) | x_{k_2}|^2 .
$$ Let $$
I_{N,M}(k) =\sum_{k_1,k_2 :k_1+k_2=k} (\mathbb{I}_{|k|,|k_1|,|k_2|\le N}-\mathbb{I}_{|k|,|k_1|,|k_2|\le M})^2 c(k,k_1,k_2) $$ and note that, for $N\ge M$, $$
(\mathbb{I}_{|k|,|k_1|,|k_2|\le N}-\mathbb{I}_{|k|,|k_1|,|k_2|\le M}) \lesssim \mathbb{I}_{|k|,|k_1|,|k_2|\le N}(\mathbb{I}_{|k|> M}+\mathbb{I}_{|k_1|> M}+\mathbb{I}_{|k_2|> M}). $$ Then, by estimating the sums with the corresponding integrals and after easy simplifications we remain with the following bound $$
I_{N,M}(k)\lesssim |k|^{2} \mathbb{I}_{|k|> M} \int_{\mathbb{R}} \frac{\mathrm{d} q}{|q|^{2\theta}+|k-q|^{2\theta}}
+ |k|^{2} \int_{\mathbb{R}} \frac{\mathbb{I}_{|q|> M}\mathrm{d} q}{|q|^{2\theta}+|k-q|^{2\theta}} $$ The first integral in the r.h.s.~is easily handled by $$
|k|^{2} \mathbb{I}_{|k|> M} \int_{\mathbb{R}} \frac{\mathrm{d} q}{|q|^{2\theta}+|k-q|^{2\theta}} \lesssim |k|^{3-2\theta} \mathbb{I}_{|k|> M} \lesssim |k|^2 M^{1-2\theta} $$ since $\theta > 1/2$. For the second we have the analogous bound $$
|k|^{2} \int_{\mathbb{R}} \frac{\mathbb{I}_{|q|> M}\mathrm{d} q}{|q|^{2\theta}+|k-q|^{2\theta}} \lesssim
|k|^{2} \int_{\mathbb{R}} \frac{\mathbb{I}_{|q|> M}\mathrm{d} q}{|q|^{2\theta}} \lesssim |k|^2 M^{1-2\theta} $$ which concludes the proof. \end{proof}
Using Lemma~\ref{lemma:ito-trick} and the estimates contained in Lemma~\ref{lemma:energy-estimates} we are led to the next set of more refined estimates for the drift and its small scale contributions.
\begin{lemma} \label{lemma:main-bounds} Let $ G^M_t = \int_0^t F_{M}(u_s) \mathrm{d} s $.
For any $M\le N$ we have \begin{equation} \label{eq:basic-est-1}
\|\sup_{t\in[0,T]} \left|(G^M_t)_k\right| \|_{L^p(\mathbb{P}_\mu)}\lesssim_p |k| M T , \end{equation} \begin{equation} \label{eq:basic-est-2}
\|\sup_{t\in[0,T]} \left|(G^M_t)_k\right| \|_{L^p(\mathbb{P}_\mu)}\lesssim_p |k|^{3/2-\theta} T^{1/2} , \end{equation} \begin{equation} \label{eq:basic-est-3}
\|\sup_{t\in[0,T]}\left|(G^M_t)_k-(G^N_t)_k\right| \|_{L^p(\mathbb{P}_\mu)}\lesssim_p |k| T^{1/2} M^{1/2-\theta} , \end{equation} \begin{equation} \label{eq:basic-est-4}
\sup_{M\ge 0}\|\sup_{t\in[0,T]}\left|(G^M_t)_k\right| \|_{L^p(\mathbb{P}_\mu)}\lesssim_p |k| T^{2\theta/(1+2\theta)} . \end{equation} \end{lemma} \begin{proof} The Gaussian measure $\mu$ satisfies the hypercontractivity estimate (see for example~\cite{janson_gaussian_1997}): for any complex-valued finite order polynomial $P(x)\in\mathcal{C}yl$ we have \begin{equation} \label{eq:hypercontractivity}
\left\| P(x) \right\|_{L^p(\mu)}\lesssim_p \left\| P(x) \right\|_{L^2(\mu)}. \end{equation} Then we have $(F_M(x))_k = ik \sum_{k_1+k_2=k} \mathbb{I}_{|k_1|,|k_2|\le M}\, x_{k_1} x_{k_2}$ and for all $k\neq 0$ $$
\int |(F_M(x))_k|^2 \mu(\mathrm{d} x) = |k|^2 \sum_{k_1+k_2=k} \sum_{k'_1+k'_2=k} \mathbb{I}_{|k_1|,|k_2|,|k'_1|,|k'_2|\le M}\int x_{k_1} x_{k_2} x_{k'_1}^* x_{k'_2}^* \mu(\mathrm{d} x) $$ $$
=4 |k|^2 M^2 $$ This allows us to obtain the bound~\eqref{eq:basic-est-1}. Indeed $$
\|\sup_{t\in[0,T]}\left| (G^M_t)_k \right| \|_{L^p(\mathbb{P}_\mu)} \lesssim
\int_0^T \left\| (F_{M}(u_s))_k \right\|_{L^p(\mathbb{P}_\mu)}\mathrm{d} s $$ $$\lesssim
T \left\| (F_{M}(\cdot))_k \right\|_{L^p(\mu)}
\lesssim_p T \left\| (F_{M}(\cdot))_k \right\|_{L^2(\mu)} \lesssim_p
|k| M T. $$ For the bound~\eqref{eq:basic-est-2} we use the fact that $L_0 H_M = F_M$ and Lemma~\ref{lemma:ito-trick} to get \begin{equation*}
\|\sup_{t\in[0,T]} \left|(G^M_t)_k\right| \|_{L^p(\mathbb{P}_\mu)}\lesssim_p T^{1/2} \sup_{t\in[0,T]}
\| \mathcal{E}^\theta((H_M(\cdot))_k) \|_{L^{p/2}(\mu)}^{1/2} \lesssim
|k|^{3/2-\theta} T^{1/2} \end{equation*} where we used the first energy estimate~\eqref{eq:first-energy-estimate}
of Lemma~\ref{lemma:energy-estimates} and the fact that $
\|Q\|_{L^p(\mu)}^p \lesssim_p \int [e^{|Q(x)^+|}+e^{|Q(x)^-|}] \mu(\mathrm{d} x) $ where again $Q^\pm$ are the real and imaginary parts of $Q$. The bound~\eqref{eq:basic-est-3} is obtained in the same way using the second energy estimate~\eqref{eq:second-energy-estimate}. Finally the last bound~\eqref{eq:basic-est-4} is obtained from the previous two by taking $0\le N\le M$ and decomposing $F_M(x) = F_N(x)+(F_M(x)-F_N(x))$: $$
\|\sup_{t\in[0,T]} \left|(G^M_t)_k\right| \|_{L^p(\mathbb{P}_\mu)} \le \|\sup_{t\in[0,T]} \left|(G^N_t)_k\right| \|_{L^p(\mathbb{P}_\mu)}+\|\sup_{t\in[0,T]} \left|(G^M_t)_k-(G^N_t)_k\right| \|_{L^p(\mathbb{P}_\mu)} $$ $$
\lesssim_p |k| ( N T+ N^{1/2-\theta} T^{1/2}) $$ and performing the optimal choice $N \sim T^{-1/(1+2\theta)}$. \end{proof}
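For the reader's convenience we spell out the balancing behind this choice of $N$: the two terms agree when
$$ N T = N^{1/2-\theta} T^{1/2} \quad\Longleftrightarrow\quad N^{\theta+1/2} = T^{-1/2} \quad\Longleftrightarrow\quad N = T^{-1/(1+2\theta)}, $$
and for this value of $N$ both $NT$ and $N^{1/2-\theta}T^{1/2}$ are equal to $T^{2\theta/(1+2\theta)}$, which gives~\eqref{eq:basic-est-4}.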
Analogous estimates go through also for the functions obtained via convolution with the $e^{-A^\theta t}$ semi-group. \begin{lemma} \label{lemma:main-bounds-conv} Let $$ \tilde G^M_t = \int_0^t e^{-A^\theta (t-s)} F_{M}(u_s) \mathrm{d} s $$ then for any $M\le N$ we have \begin{equation} \label{eq:basic-est-1-conv}
\|(\tilde G^M_t)_k\|_{L^p(\mathbb{P}_\mu)}\lesssim_p |k| M \left(\frac{1-e^{-2|k|^{2\theta} t}}{2|k|^{2\theta}}\right) \end{equation} \begin{equation} \label{eq:basic-est-2-conv}
\|(\tilde G^M_t)_k\|_{L^p(\mathbb{P}_\mu)}\lesssim_p |k|^{3/2-\theta} \left(\frac{1-e^{-2|k|^{2\theta} t}}{2|k|^{2\theta}}\right)^{1/2} \end{equation} \begin{equation} \label{eq:basic-est-3-conv}
\|(\tilde G^M_t)_k-(\tilde G^N_t)_k\|_{L^p(\mathbb{P}_\mu)}\lesssim_p |k| M^{1/2-\theta} \left(\frac{1-e^{-2|k|^{2\theta} t}}{2|k|^{2\theta}}\right)^{1/2} \end{equation} \end{lemma} \begin{proof} The proof follows the same lines as Lemma~\ref{lemma:main-bounds}, using eq.~\eqref{eq:ito-trick-conv} instead of eq.~\eqref{eq:ito-trick}. \end{proof}
\begin{corollary} For all sufficiently small $\varepsilon > 0$ \begin{equation} \label{eq:basic-est-4-conv}
\sup_{N\ge 0}\|(\tilde G^N_t)_k - (\tilde G^N_s)_k\|_{L^p(\mathbb{P}_\mu)} \lesssim_{p} |k|^{3/2-2\theta+2\varepsilon \theta} (t-s)^\varepsilon \end{equation} \end{corollary} \begin{proof} To control the time regularity of the drift convolution we consider $0\le s \le t$ and decompose $$
\|(\tilde G^N_t)_k - (\tilde G^N_s)_k\|_{L^p(\mathbb{P}_\mu)} $$ $$
\le \| \int_s^t (e^{-A^\theta(t-r)} F_{N}(u_r))_k \mathrm{d} r\|_{L^p(\mathbb{P}_\mu)} + (1-e^{-|k|^{2\theta}(t-s)})\|(\tilde G^N_s)_k\|_{L^p(\mathbb{P}_\mu)} $$ $$
\lesssim |k|^{3/2-\theta} (t-s)^{1/2} +|k|^{3/2-2\theta}(1-e^{-|k|^{2\theta}(t-s)})
\lesssim |k|^{3/2-\theta} (t-s)^{1/2} $$ Moreover a direct consequence of eq.~\eqref{eq:basic-est-2-conv} is $$
\sup_{t\in[0,T]} \|(\tilde G^N_t)_k \|_{L^p(\mathbb{P}_\mu)}\lesssim_p |k|^{3/2-2\theta} $$ which gives us a uniform estimate of the form $$
\|(\tilde G^N_t)_k - (\tilde G^N_s)_k\|_{L^p(\mathbb{P}_\mu)} \le \|(\tilde G^N_t)_k\|_{L^p(\mathbb{P}_\mu)}+\|(\tilde G^N_s)_k\|_{L^p(\mathbb{P}_\mu)} \lesssim_{p} |k|^{3/2-2\theta} $$ By interpolation we get the claimed bound. \end{proof}
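Explicitly, the interpolation in the previous proof combines the two bounds via $\min(a,b)\le a^{2\varepsilon}b^{1-2\varepsilon}$, valid for $0\le 2\varepsilon\le 1$:
$$ \|(\tilde G^N_t)_k - (\tilde G^N_s)_k\|_{L^p(\mathbb{P}_\mu)} \lesssim_p \big(|k|^{3/2-\theta}(t-s)^{1/2}\big)^{2\varepsilon} \big(|k|^{3/2-2\theta}\big)^{1-2\varepsilon} = |k|^{3/2-2\theta+2\varepsilon\theta} (t-s)^{\varepsilon} . $$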
\begin{remark} All these $L^p$ estimates can be replaced with equivalent exponential estimates. For example it is not difficult to prove that for small $\lambda$ we have $$
\sup_{t\in[0,T]} \sup_{k\in\mathbb{Z}_0} \mathbb{E} \exp\left(\lambda |k|^{2\theta-3/2} (\tilde G^N_t)^\pm_k \right) \lesssim 1 $$ where $(\cdot)^\pm$ denote, as before, the real and imaginary parts, respectively. \end{remark}
At this point we are in a position to prove Lemma~\ref{lemma:burgers-drift} on the existence of the Burgers drift for controlled processes.
\begin{proof}(of Lemma~\ref{lemma:burgers-drift}) Let $\mathcal{B}^\varepsilon_t = \int_0^t F(\rho^\varepsilon* u_s) \mathrm{d} s$. We start by noting that since $\hat \rho$ has a bounded support we have $\rho^\varepsilon * (\Pi_N u_s) = \rho^\varepsilon * u_s$ for all $N \ge C/\varepsilon$, for some constant $C$ and $\varepsilon$ small enough. Moreover all the computations we made for $F_N$ remain true for the functions $F_{\varepsilon,N}(x) = F(\rho^\varepsilon * \Pi_N x)$, so we have estimates analogous to those in Lemma~\ref{lemma:main-bounds} for $G^{\varepsilon,M}_t = \int_0^t F(\rho^\varepsilon* \Pi_M u_s) \mathrm{d} s$. Taking $\varepsilon>\varepsilon'>0$ and $N\ge C/\varepsilon$, $M\ge C/\varepsilon'$ and $M\ge N$ we have $$
\left\|\sup_{t\in[0,T]}\left|(\mathcal{B}^\varepsilon_t)_k-(\mathcal{B}^{\varepsilon'}_t)_k\right| \right\|_{L^p(\mathbb{P}_\mu)} =
\left\|\sup_{t\in[0,T]}\left|(G^{\varepsilon,N}_t)_k-(G^{\varepsilon',M}_t)_k\right|\right \|_{L^p(\mathbb{P}_\mu)} $$ $$
\lesssim_p |k| T^{1/2} M^{1/2-\theta} \lesssim_p |k| T^{1/2} (\varepsilon')^{\theta-1/2} $$ uniformly in $\varepsilon,\varepsilon',N,M$. This easily implies that the sequence of processes $(\mathcal{B}^\varepsilon)_{\varepsilon}$ converges almost surely to a limit in $C(\mathbb{R}_+,\mathcal{F} L^{\infty,-1-\delta})$ for any small $\delta>0$ if $\theta>1/2$. By similar arguments it can be shown that the limit does not depend on the function $\rho$. \end{proof}
\section{Existence of controlled solutions} \label{sec:existence}
Fix $\alpha < -1/2$ and consider the SDE on $H^\alpha$ given by \begin{equation} \label{eq:burgers-reg} \mathrm{d} u^N_t = - A^\theta u^N_t \mathrm{d} t + F_N(u^N_t)\mathrm{d} t + A^{\theta/2} \mathrm{d} W_t, \end{equation} where $F_N : H\to H$ is defined by $F_N(x) = \frac12 \Pi_N B (\Pi_N x)^2$. A global solution of this equation starting from any $u_0^N\in H^\alpha$ can be constructed as follows. Let $(Z_t)_{t\ge 0}$ be the unique OU process on $H^\alpha$ which satisfies the SDE \begin{equation} \label{eq:ou} \mathrm{d} Z_t = - A^\theta Z_t \mathrm{d} t + A^{\theta/2} \mathrm{d} W_t \end{equation} with initial condition $Z_0 = u^N_0$. Let $(v^N_t)_{t\ge 0}$ be the unique solution, taking values in the finite dimensional vector space $\Pi_N H$, of the following SDE $$ \mathrm{d} v^N_t = - A^\theta v^N_t \mathrm{d} t + F_N(v^N_t)\mathrm{d} t + A^{\theta/2}\mathrm{d} \Pi_N W_t, $$ with initial condition $v^N_0 = \Pi_N u^N_0$. Note that this SDE has global solutions despite the quadratic non-linearity. Indeed the vector field $F_N$ preserves the $H$ norm: $$ \langle v^N_t,F_N(v^N_t)\rangle = \frac12\langle v^N_t,B (v^N_t)^2\rangle = \frac13 \int_\mathbb{T} \partial_\xi\big((v^N_t(\xi))^3\big)\,\mathrm{d} \xi = 0 $$ and by the It\^o formula we have $$
\mathrm{d} \|v^N_t\|_H^2 = 2 \langle v^N_t,- A^\theta v^N_t \mathrm{d} t + F_N(v^N_t)\mathrm{d} t + A^{\theta/2}\mathrm{d} \Pi_N W_t \rangle + C_N \mathrm{d} t $$ $$
= -2 \|A^{\theta/2} v^N_t\|^2_H \mathrm{d} t + 2 \langle v^N_t, A^{\theta/2}\mathrm{d} \Pi_N W_t \rangle + C_N \mathrm{d} t $$
where $C_N \, \mathrm{d} t = \mathrm{d} [A^{\theta/2}\Pi_N W]_t$, that is $C_N = \sum_{0<|k|\le N} |k|^{2\theta}$. From this equation we easily obtain that for any initial condition $v^N_0$ the process $(\|v^N_t\|_H)_{t\in[0,T]}$ is almost surely finite for any $T \ge 0$, which implies that the unique solution $(v^N_t)_{t \ge 0}$ can be extended to arbitrary intervals of time. Setting $u^N_t = v^N_t + (1-\Pi_N)Z_t$ we obtain a global solution of eq.~\eqref{eq:burgers-reg}. Moreover the diffusion $(u^N_t)_{t\ge 0}$ has generator $$
L_N \varphi(x) = L_0\varphi(x)+ \sum_{k\in\mathbb{Z}_0, |k|\le N} (F_N(x))_k D_k \varphi(x) $$ where $L_0$ is the generator of the Ornstein--Uhlenbeck process defined in eq.~\eqref{eq:generator-ou}, which satisfies the integration by parts formula $ \mu [\varphi (-L_0) \varphi] = \mu[ \mathcal{E}^\theta(\varphi)] $ for $\varphi\in\mathcal{C}yl$. This diffusion preserves the Gaussian measure $\mu$. Indeed if we take $u_0^N$ distributed according to the white noise $\mu$ we have that $((1-\Pi_N)Z_t)_{t \ge 0}$ is independent of $(v^N_t)_{t\ge 0}$. Moreover $Z_t$ has law $\mu$ for any $t\ge 0$ and an easy argument for the finite dimensional diffusion $(v^N_t)_{t\ge 0}$ shows that for any $t\ge0$ the random variable $v^N_t$ is distributed according to $\mu^N = (\Pi_N)_* \mu$: the push-forward of the measure $\mu$ under the projection $\Pi_N$.
We will use the fact that $u^N$ satisfies the mild equation~\cite{DZ} \begin{equation} \label{eq:burgers-reg-mild} u^N_t = e^{-A^\theta t} u^N_0 + \int_0^t e^{-A^\theta (t-s)} F_N(u^N_s) \mathrm{d} s + A^{\theta/2} \int_0^t e^{-A^\theta (t-s)} \mathrm{d} W_s \end{equation}
where the stochastic convolution in the r.h.s is given by $$
A^{\theta/2} \int_0^t e^{-A^\theta (t-s)} \mathrm{d} W_s = \sum_{k\in\mathbb{Z}_0} |k|^{\theta} e_k \int_0^t e^{-|k|^{2\theta} (t-s)} \mathrm{d} \beta^k_s . $$
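For concreteness we include a minimal numerical sketch (in Python, not part of the analysis) of the Galerkin approximation~\eqref{eq:burgers-reg} in Fourier variables with an explicit Euler--Maruyama step. All parameters are hypothetical; only the positive modes are tracked since $u_{-k}=\bar u_k$; we follow the normalization $F_N(x)=\frac12\Pi_N B(\Pi_N x)^2$ used above (the constant is inessential for the estimates); and the discrete scheme preserves the invariant white noise law only approximately.

\begin{verbatim}
import numpy as np

# Hypothetical discretization parameters (not taken from the paper).
N, theta, dt, nsteps = 32, 1.0, 1e-4, 1000
k = np.arange(1, N + 1)                      # positive modes; u_{-k} = conj(u_k)
lam = k.astype(float) ** (2 * theta)         # eigenvalues of A^theta

rng = np.random.default_rng(0)
u = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def F_N(u):
    """Fourier modes 1..N of (1/2) Pi_N B (Pi_N x)^2, with B = d/dxi."""
    full = np.zeros(2 * N + 1, dtype=complex)             # modes -N..N
    full[:N] = np.conj(u[::-1])
    full[N + 1:] = u
    sq = np.convolve(full, full)[2 * N + 1: 3 * N + 1]    # (x^2)_k for k = 1..N
    return 0.5j * k * sq

for _ in range(nsteps):
    dW = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.sqrt(dt / 2)
    u = u + (-lam * u + F_N(u)) * dt + k ** theta * dW    # noise A^{theta/2} dW
\end{verbatim}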
\begin{lemma} Let $$ \mathcal{A}_t^{N}=\int_0^t F_{N}(u^{N}_s) \mathrm{d} s ,\qquad \tilde \mathcal{A}_t^{N}=\int_0^t e^{-A^\theta(t-s)} F_{N}(u^{N}_s) \mathrm{d} s $$ and set $\sigma=(3/2-2\theta)_+$. The family of laws of the processes $\{(u^N,\mathcal{A}^N,\tilde \mathcal{A}^N,W)\}_N$ is tight in the space of continuous functions with values in $\mathcal{X}=\mathcal{F} L^{\infty,\sigma-\varepsilon}\times \mathcal{F} L^{\infty,3/2-\theta-\varepsilon}\times \mathcal{F} L^{\infty,3/2-2\theta-\varepsilon}\times \mathcal{F} L^{\infty,-\varepsilon}$ for all small $\varepsilon > 0$. \end{lemma}
\begin{proof} The estimate~\eqref{eq:basic-est-4-conv} in the previous section readily gives that for any small $\varepsilon >0$ and sufficiently large $p$ $$
\mathbb{E}_\mu\left[\sum_{k\in\mathbb{Z}_0} |k|^{-(3/2-2\theta+3\theta\varepsilon) p} \left(|(\tilde \mathcal{A}_t^{N}-\tilde \mathcal{A}_s^{N})_k| \right)^p\right]
\lesssim_{p,\varepsilon} \sum_{k\in\mathbb{Z}_0} |k|^{-\theta\varepsilon p} |t-s|^{p \varepsilon} \lesssim |t-s|^{p \varepsilon} $$ This estimate shows that the family of processes $\{ \tilde \mathcal{A}^{N} \}_{N}$ is tight in $C([0,T],\mathcal{F} L^{\infty,\alpha})$ for $\alpha=3/2-2\theta+3\theta\varepsilon$ and sufficiently small $\varepsilon>0$. An analogous argument using the estimate~\eqref{eq:basic-est-2}
shows that the family of processes $\{ \mathcal{A}^{N} \}_{N}$ is tight in $C^\gamma([0,T],\mathcal{F} L^{\infty,\beta})$ for any $\gamma<1/2$ and $\beta < 3/2-\theta$. It is not difficult to show that the stochastic convolution $\int_0^t e^{-A^\theta(t-s)} A^{\theta/2} \mathrm{d} W_s$ belongs to $C([0,T],\mathcal{F} L^{\infty,1-\theta-\varepsilon})$ for all small $\varepsilon>0$. Taking into account the mild equation~\eqref{eq:burgers-reg-mild} we find that the processes $\{(u^{N}_t)_{t\in[0,T]}\}_{N}$ are tight in $C([0,T],\mathcal{F} L^{\infty,\sigma-\varepsilon})$. \end{proof}
We are now ready to prove our main theorem on existence of (probabilistically weak) controlled solutions to the generalized stochastic Burgers equation.
\begin{theorem} There exists a probability space and a quadruple of processes $(u,\mathcal{A},\tilde\mathcal{A},W)$ with continuous trajectories in $\mathcal{X}$ such that $W$ is a cylindrical Brownian motion in $H$, $u$ is a controlled process and they satisfy \begin{equation} \label{eq:limit-1} u_t = u_0 + \mathcal{A}_t - \int_0^t A^\theta u_s \mathrm{d} s + A^{\theta/2} W_t = e^{-A^\theta t} u_0 + \tilde \mathcal{A}_t + \int_0^t e^{-A^\theta(t-s)} A^{\theta/2} \mathrm{d} W_s \end{equation} where, as space distributions, \begin{equation} \label{eq:limit-4} \mathcal{A}_t = \lim_{M \to \infty}\int_0^t F_{M}(u_s) \mathrm{d} s \quad \text{ and } \quad \tilde \mathcal{A}_t = \int_0^t e^{-A^\theta(t-s)} \mathrm{d} \mathcal{A}_s , \end{equation} the last integral being defined as a Young integral. \end{theorem}
\begin{proof} Let us first prove~\eqref{eq:limit-4}. By tightness of the laws of $\{(u^N,\mathcal{A}^N,\tilde\mathcal{A}^N , W)\}_N$ in $C(\mathbb{R}_+;\mathcal{X})$ we can extract a subsequence which converges weakly (in the probabilistic sense) to a limit point in $C(\mathbb{R}_+;\mathcal{X})$. By the Skorokhod embedding theorem, up to a change of the probability space, we can assume that this subsequence, which we call $\{N_n\}_{n\ge 1}$, converges almost surely to a limit $u = \lim_n u^{N_n} \in C(\mathbb{R}_+;\mathcal{X})$. Then $$ \int_0^t F_{M}(u_s) \mathrm{d} s = \int_0^t (F_{M}(u_s) - F_{M}(u^{N_n}_s)) \mathrm{d} s $$ $$\qquad
+ \int_0^t (F_{M}(u^{N_n}_s) - F_{N_n}(u^{N_n}_s)) \mathrm{d} s + \int_0^t F_{N_n}(u^{N_n}_s) \mathrm{d} s . $$ But now, in $C(\mathbb{R}_+,\mathcal{F} L^{\infty,3/2-\theta-\varepsilon})$ we have the almost sure limit $$ \lim_n \int_0^\cdot F_{N_n}(u^{N_n}_s) \mathrm{d} s =\lim_n \mathcal{A}^{N_n}_\cdot = \mathcal{A}_\cdot $$ and, again almost surely in $C(\mathbb{R}_+,\mathcal{F} L^{\infty,3/2-\theta-\varepsilon})$, we have also $$ \lim_n \int_0^\cdot (F_{M}(u_s) - F_{M}(u^{N_n}_s)) \mathrm{d} s = 0 , $$ since the functional $F_M$ depends only on a finite number of components of $u$ and $u^{N_n}$ and we have the convergence of $u^{N_n}$ to $u$ in $C(\mathbb{R}_+;\mathcal{F} L^{\infty,\sigma-\varepsilon})$ and thus distributionally uniformly in time. Moreover, for all $k\in\mathbb{Z}_0$, $$
\lim_M \sup_{N_n : M<N_n} \left\|\sup_{t\in[0,T]} \left| \int_0^t (F_{M}(u^{N_n}_s) - F_{N_n}(u^{N_n}_s))_k \mathrm{d} s\right| \right\|_{L^p(\mathbb{P}_\mu)}= 0. $$ By the a priori estimates, $\mathcal{A}^{N_n}$ converges to $\mathcal{A}$ in $C^\gamma(\mathcal{F} L^{\infty,3/2-\theta-\varepsilon})$ for all $\gamma <1/2$ and $\varepsilon > 0$ so that we can use Young integration to define $\int_0^t e^{-A^\theta(t-s)} \mathrm{d} \mathcal{A}^{N_n}_s$ as a space distribution and to obtain its distributional convergence (for example for each of its Fourier components) to $\int_0^t e^{-A^\theta(t-s)} \mathrm{d} \mathcal{A}_s$. At this point eq.~\eqref{eq:limit-1} is a simple consequence. The backward processes $\hat u^{N_n}_{t}=u^{N_n}_{T-t}$ and $\hat \mathcal{A}^{N_n}_t = -\mathcal{A}^{N_n}_{T-t}$ converge to $\hat u_{t}=u_{T-t}$ and $\hat \mathcal{A}_t = -\mathcal{A}_{T-t}$ respectively, and moreover note that $\mathcal{A}$ as a distributional process has trajectories which are H\"older continuous for any exponent smaller than $2\theta/(1+2\theta)>1/2$ as a consequence of the estimate~\eqref{eq:basic-est-4}, and this directly implies that $\mathcal{A}$ has zero quadratic variation. So $u$ is a controlled process in the sense of our definition. \end{proof}
\section{Uniqueness for $\theta>5/4$} \label{sec:uniq}
In this section we prove a simple pathwise uniqueness result for controlled solutions which is valid when $\theta > 5/4$. Note that to each controlled solution $u$ is naturally associated a cylindrical Brownian motion $W$ on $H$ given by the martingale part of the controlled decomposition~\eqref{eq:controlled-decomposition}. Pathwise uniqueness is then understood in the following sense.
\begin{definition} SBE$_\theta$ has pathwise uniqueness if, given two controlled processes $u,\tilde u\in\mathcal{R}_\theta$ on the same probability space which generate the same Brownian motion $W$ and such that $\tilde u_0 = u_0$ almost surely, there exists a negligible set $\mathcal{N}$ such that for all $\varphi\in\mathcal{S}$ and $t\ge 0$ we have $\{u_t(\varphi) \neq \tilde u_t(\varphi)\} \subseteq \mathcal{N}$. \end{definition}
\begin{theorem} \label{th:uniqueness} The generalized stochastic Burgers equation has pathwise uniqueness when $\theta > 5/4$. \end{theorem} \begin{proof}
Let $u$ be a controlled solution to the equation and let $u^N$ be the Galerkin approximations defined above with respect to the cylindrical Brownian motion $W$ obtained from the martingale part of the decomposition of $u$ as a controlled process. We will prove that $u^N \to u$ almost surely in $C(\mathbb{R}_+;\mathcal{F} L^{\infty,2\theta-3/2-2\varepsilon})$ for any small $\varepsilon >0$. Since Galerkin approximations have unique strong solutions we have $\tilde u^N = u^N$ almost surely and in the limit $\tilde u = u$ in $C(\mathbb{R}_+;\mathcal{F} L^{\infty,2\theta-3/2-2\varepsilon})$ almost surely. This will imply the claim by taking as negligible set in the definition of pathwise uniqueness the set $\mathcal{N}=\{\sup_{t\ge 0}\|u_t-\tilde u_t\|_{\mathcal{F} L^{\infty,2\theta-3/2-2\varepsilon}}>0\}$. Let us proceed to prove that $u^N \to u$. By bilinearity, $$
F_N \left( u_s \right) - F_N \left( u^N_s \right) = F_N ( \Pi_N u_s+u^N_s,\Delta^N_s)
satisfies the equation $$ \Delta^N_t = \Pi_N \int_0^t e^{- A^{\theta} ( t - s )}
F_N ( u_s+u^N_s,\Delta^N_s) \mathrm{d} s +\varphi^N_t $$ where $$ \varphi^N_t = \int_0^t e^{- A^{\theta} \left( t - s \right)} \left( F \left( u_s \right)
- F_N \left( u_s \right) \right) \mathrm{d} s . $$ Note that
\[ \| \sup_{t \in [ 0, T ]} | ( \varphi^N_t )_k | \|_{L^p (
\mathbb{P}_{\mu} )} \lesssim_p \max(| k |^{1 - 2 \theta} N^{1
/ 2 - \theta},| k |^{3/2 - 2 \theta}) \] which by interpolation gives
\[ \| \sup_{t \in [ 0, T ]} | ( \varphi^N_t )_k | \|_{L^p (
\mathbb{P}_{\mu} )}\lesssim_p | k |^{3/2 - 2 \theta+\varepsilon} N^{-\varepsilon} \] for any small $\varepsilon >0$. Now let $$
\Phi_N = \sup_{k\in\mathbb{Z}_0} \sup_{t \in [ 0, T ]} |k|^{2\theta-3/2-2\varepsilon} | ( \varphi^N_t )_k | $$ then $$
\mathbb{E} \sum_{N>1} N \Phi_N^p \le\sum_{N>1} N \sum_{k\in\mathbb{Z}_0} |k|^{p(2\theta-3/2-2\varepsilon)}\, \mathbb{E} \sup_{t \in [ 0, T ]} | ( \varphi^N_t )_k |^p $$ $$
\lesssim_p \sum_{N>1} N^{1-\varepsilon p} \sum_{k\in\mathbb{Z}_0} |k|^{-p\varepsilon}<+\infty $$ for $p$ large enough, which implies that almost surely $ \Phi_N \lesssim_{p,\omega} N^{-1/p} $. For the other term we have $$
\sup_{t\in[0,T]}\left|\left( \int_0^t e^{- A^{\theta} \left( t - s \right)} F_N \left(
\Pi_N u + u^N, \Delta_N \right) \mathrm{d} s \right)_k\right|
\lesssim A_N |k|^{3/2-2\theta+2\varepsilon} Q_T $$ where
$
A_N = \sup_{t \in \left[ 0, T \right]} \sup_k \left| k \right|^{2 \theta -
3/2 - 2\varepsilon} \left| \left( \Delta^N_t \right)_k \right| $ and $$
Q_T = \sup_{t\in[0,T]} |k|^{2\theta-1/2-2\varepsilon} \int_0^t e^{- |k|^{2\theta} \left( t - s \right)} \sum_{q\in \mathbb{Z}_0} |(
\Pi_N u_s + u^N_s)_q| |k-q|^{3/2-2\theta+2\varepsilon} \mathrm{d} s
$$ This gives \[ A_N \leqslant Q_T A_N + \Phi_N.
\] Since $3/2-2 \theta <-1 $ (that is $\theta > 5/4$), we have the estimate:
$$
Q_T \lesssim \sup_{t\in[0,T]} |k|^{2\theta-1/2-2\varepsilon} \left[\int_0^t e^{- p' |k|^{2\theta} \left( t - s \right)} \mathrm{d} s \right]^{1/p'} \left[ \int_0^T \sum_{q\in \mathbb{Z}_0}\frac{ |(
\Pi_N u_s + u^N_s)_q|^{p}}{ |k-q|^{-3/2+2\theta-2\varepsilon}} \mathrm{d} s \right]^{1/p}
$$ valid for some $p> 1$ (with $1/p'+1/p=1$). Then
$$ Q_T \lesssim |k|^{2\theta-1/2-2\varepsilon-2\theta/p'} \left[ \int_0^T \sum_{q\in \mathbb{Z}_0}\frac{ |(
\Pi_N u_s + u^N_s)_q|^{p}}{ |k-q|^{-3/2+2\theta-2\varepsilon}} \mathrm{d} s \right]^{1/p} $$ and taking $p$ large enough such that $2\theta-1/2-2\varepsilon-2\theta/p'\le 0$ we obtain $$
Q_T \lesssim_p \left[ \int_0^T \sum_{q\in \mathbb{Z}_0}\frac{ |(
\Pi_N u_s + u^N_s)_q|^{p}}{ |k-q|^{-3/2+2\theta-2\varepsilon}} \mathrm{d} s \right]^{1/p} $$ By the stationarity of the processes $u$ and $u^N$ and the fact that their marginal laws are the white noise we have $$
\mathbb{E}[ Q_T^p] \lesssim_p \int_0^T \sum_{q\in \mathbb{Z}_0}\frac{\mathbb{E} |(
\Pi_N u_s + u^N_s)_q|^{p}}{ |k-q|^{-3/2+2\theta-2\varepsilon}} \mathrm{d} s = T \sum_{q\in \mathbb{Z}_0}\frac{1}{ |k-q|^{-3/2+2\theta-2\varepsilon}} \lesssim_p T $$ Then by a simple Borel-Cantelli argument, almost surely $Q_{1/n} \lesssim_{p,\omega} n^{-1+1/p}$. Putting together the estimates for $\Phi_N$ and that for $Q_{1/n}$ we see that there exists a (random) $T$ such that $C Q_T\le 1/2$ almost surely and that for this $T$: $ A_N \leqslant 2 \Phi_N $, which given the estimate on $\Phi_N$ implies that $A_N \to 0$ as $N\to\infty$ almost surely and that the solution of the equation is unique and is the (almost-sure) limit of the Galerkin approximations. \end{proof}
\section{Alternative equations} \label{sec:alternative} The technique of the present paper extends straightforwardly to some other modifications of the stochastic Burgers equation.
\subsection{Regularization of the convective term}
Consider for example the equation \begin{equation} \label{eq:burgers-daprato} \mathrm{d} u_t = - A u_t \mathrm{d} t + A^{-\sigma}F(A^{-\sigma} u_t) \mathrm{d} t + B \mathrm{d} W_t \end{equation} which is the equation considered by Da~Prato, Debussche and Tubaro in~\cite{DDT}. Letting $F_\sigma(x) = A^{-\sigma}F(A^{-\sigma} x)$, denoting by $H_\sigma$ the corresponding solution of the Poisson equation and following the same strategy as above we obtain the analogous bound $$
\mathcal{E}((H_{\sigma,N})^\pm_k)(x)\lesssim \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}}c_\sigma(k,k_1,k_2) |x_{k_2}|^2 $$
where $c_\sigma(k,k_1,k_2) = |k|^{2-4\sigma}/[|k_1|^{4\sigma}|k_2|^{4\sigma}(|k_1|^{2}+|k_2|^{2})]$. This quantity can then be bounded in terms of the sum $$
I_{\sigma,N}(k) = \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}}c_\sigma(k,k_1,k_2) \lesssim |k|^{1-12\sigma} , $$ from which we can re-obtain bounds similar to those exploited above. For example $$
\left\|\int_0^t (e^{-A(t-s)}F_{\sigma,M}(u_s))_k \mathrm{d} s \right\|_{L^p(\mathbb{P}_\mu)}\lesssim_p |k|^{-1/2-6\sigma} . $$ In particular we have existence of weak controlled solutions when $8\sigma+2>1$, that is $\sigma>-1/8$, and pathwise uniqueness when $-1/2-6\sigma<-1$, that is $\sigma> 1/12$. This is an improvement over the result in~\cite{DDT}, which has uniqueness for $\sigma>1/8$.
\subsection{The Sasamoto--Spohn discrete model} Another application of the above techniques is to the analysis of the discrete approximation to the stochastic Burgers equation proposed by Sasamoto and Spohn in~\cite{Spohn}. Their model is the following: \begin{equation} \label{eq:sasamoto-spohn} \begin{split} \mathrm{d} u_j & = (2N+1) (u_j^2+u_j u_{j+1}-u_{j-1}u_j-u^2_{j-1})\mathrm{d} t \\
& \qquad + (2N+1)^2(u_{j+1}-2 u_j+u_{j-1})\mathrm{d} t + (2N+1)^{3/2} (\mathrm{d} B_j - \mathrm{d} B_{j-1}) \end{split} \end{equation} for $j=1,\dots,2N+1$ with periodic boundary conditions $u_0=u_{2N+1}$, and where the processes $(B_j)_{j=1,\dots,2N+1}$ are a family of independent standard Brownian motions with $B_0=B_{2N+1}$. This model has to be thought of as the discretization of the dynamics of the periodic velocity field $u(x)$ with $x\in(-\pi,\pi]$ sampled on a grid of mesh size $1/(2N+1)$, that is $u_j = u(\xi^N_j)$ with $\xi^N_j = -\pi+2\pi(j/(2N+1))$. This also fixes the scaling factors for the different contributions to the dynamics if we want this equation, at least formally, to converge to a limit described by an SBE. Passing to Fourier variables $\hat u(k) = (2N+1)^{-1}\sum_{j=0}^{2N} e^{i \xi^N_j k} u_j$ for $k\in \mathbb{Z}^N$ with $\mathbb{Z}^N = \mathbb{Z}\cap [-N,N]$, and imposing that $\hat u(0)=0$, that is, considering the evolution only with zero mean velocity, we get the system of SDEs: $$ \mathrm{d} \hat u_t(k) = F^\flat_N(\hat u_t)_k \mathrm{d} t
- |g_N(k)|^2 \hat u_t(k) \mathrm{d} t + (2N+1)^{1/2} g_N(k) \mathrm{d} \hat B_t(k) $$ for $k\in \mathbb{Z}_0^N=\mathbb{Z}_0\cap [-N,N]$, where $g_N(k)=(2N+1)(1-e^{i k/(2N+1)})$, $$ F^\flat_N(u_t)_k = \sum_{k_1,k_2\in\mathbb{Z}^N_0 : k_1+k_2=k} \hat u_t(k_1)\hat u_t(k_2)[ g_N(k)-g_N(k)^*+g_N(k_1)-g_N(k_2)^*] $$ and $(\hat B_\cdot(k))_{k\in\mathbb{Z}_0^N}$ is a family of centred complex Brownian motions such that $\hat B(k)^* = \hat B(-k)$ and with covariance $\mathbb{E} \hat B_t(k) \hat B_t(-l) = \mathbb{I}_{k=l} t (2N+1)^{-1}$. If we then let $\beta(k) = (2N+1)^{1/2} \hat B(k)$ we obtain a family of complex BMs of covariance $\mathbb{E} \beta_t(k) \beta_t(-l) = t \mathbb{I}_{k=l} $. The generator $L^\flat_N$ of this stochastic dynamics is given by $$ L^{\flat}_N \varphi( x) = \sum_{k\in\mathbb{Z}^N_0} F^\flat_N(x)_k D_k \varphi( x)+L^{g_N}_N \varphi (x) $$ with $$
L^{g_N}_N \varphi( x) = \sum_{k\in\mathbb{Z}^N_0} |g_N(k)|^2 (-x_k D_{k}+ D_{-k} D_k) \varphi( x) $$ the generator of the OU process corresponding to the linear part associated with the multiplier $g_N$. It is easy to check that the complete dynamics preserves the (discrete) white noise measure, indeed $$ \sum_{k\in\mathbb{Z}_0^N} x_{-k} F^\flat_N(x)_k = \sum_{\substack{k,k_1,k_2\in\mathbb{Z}_0^N\\k+k_1+k_2=0}} x_{k} x_{k_1} x_{k_2}[ g_N(k)^*-g_N(k)+g_N(k_1)-g_N(k_2)^*] =0 $$ since the symmetrization of the r.h.s. with respect to the permutations of the variables $k,k_1,k_2$ yields zero. Then, defining suitable controlled processes with respect to the linear part of this equation, we can prove our a priori estimates on additive functionals, which are now controlled by the quantity $$
\mathcal{E}^{g_N}((H_{g_N,N})^\pm_k)(x)\lesssim \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}}c_{g_N}(k,k_1,k_2) |x_{k_2}|^2 $$
with $c_{g_N}(k,k_1,k_2) = |g_N(k)|^2/[(|g_N(k_1)|^{2}+|g_N(k_2)|^{2})]$. Moreover noting that $$
|g_N(k)|^2 = 2 (2N+1)^2(1-\cos(2\pi k/(2N+1))) \sim |k|^2 $$ uniformly in $N$, it is possible to estimate this energy in the same way we did before in the case $\theta=1$ and obtain that the family of stationary solutions of equation~\eqref{eq:sasamoto-spohn} is tight in $C([0,T],\mathcal{F} L^{\infty,-\varepsilon})$ for all $\varepsilon >0$. Moreover, using the fact that $g_N(k) \to ik$ as $N\to \infty$ uniformly for bounded $k$ and that $$
\pi_M F^\flat_N(\pi_M x)_k = \sum_{k_1,k_2\in\mathbb{Z}^N_0 : k_1+k_2=k} \mathbb{I}_{|k|,|k_1|,|k_2|\le M} x_{k_1} x_{k_2}[ g_N(k)-g_N(k)^*+g_N(k_1)-g_N(k_2)^*] $$ $$
\to 3 i k \sum_{k_1,k_2\in\mathbb{Z}_0 : k_1+k_2=k} \mathbb{I}_{|k|,|k_1|,|k_2|\le M} x_{k_1} x_{k_2} = 3 F_M(x)_k $$ it is easy to check that any accumulation point is a controlled solution of the stochastic Burgers equation~\eqref{eq:burgers-theta}.
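A direct simulation of the chain~\eqref{eq:sasamoto-spohn} is straightforward; the following Python sketch (hypothetical parameters, explicit Euler scheme, which requires $\mathrm{d} t \ll (2N+1)^{-2}$ for stability) is meant only to illustrate the structure of the discretization, not as a faithful sampler of the stationary state.

\begin{verbatim}
import numpy as np

# Hypothetical parameters for a direct simulation (not from the paper).
N, dt, nsteps = 64, 1e-6, 1000
M = 2 * N + 1                                   # number of sites, periodic
rng = np.random.default_rng(0)
u = rng.standard_normal(M)                      # arbitrary (non-stationary) initial data

for _ in range(nsteps):
    up = np.roll(u, -1)                         # u_{j+1}
    um = np.roll(u, 1)                          # u_{j-1}
    drift = M * (u**2 + u * up - um * u - um**2) + M**2 * (up - 2 * u + um)
    dB = rng.standard_normal(M) * np.sqrt(dt)   # independent BM increments dB_j
    u = u + drift * dt + M**1.5 * (dB - np.roll(dB, 1))
\end{verbatim}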
\section{2d stochastic Navier-Stokes equation} \label{sec:ns}
We consider the problem of stationary solutions to the 2d stochastic Navier-Stokes equation considered in~\cite{albeverio-cruzeiro} (see also~\cite{albeverio-ferrario}). We would like to deal with invariant measures obtained by formally taking the kinetic energy of the fluid and considering the associated Gibbs measure. However this measure is quite singular and we need a bit of hyperviscosity in the equation to make our estimates work.
\subsection{The setting} Fix $\sigma>0$ and consider the following stochastic differential equation \begin{equation} \label{eq:ns}
\mathrm{d} (u_{t})_k = - |k|^{2+2\sigma} (u_{t})_k \mathrm{d} t + B_k(u_t) \mathrm{d} t + |k|^{\sigma} \mathrm{d} \beta^k_t \end{equation} where $(\beta^k)_{k\in \ZZ^2\backslash\{0\}}$ is a family of complex BMs for which $(\beta^k)^* = \beta^{-k}$ and $\mathbb{E}[\beta^k_t \beta^q_t] = t\, \mathbb{I}_{q+k=0}$, $u$ is a stochastic process with continuous trajectories in the space of distributions on the two dimensional torus $\mathbb{T}^2$, $$
B_k(x) = \sum_{k_1+k_2=k} b(k,k_1,k_2) x_{k_1} x_{k_2} $$ where $x: \ZZ^2\backslash\{0\} \to \mathbb{C}$ is such that $x_{-k} = x_k^*$ and $$ b(k,k_1,k_2) = \frac{(k^\bot \cdot k_1)(k \cdot k_2)}{k^2} $$ with $(\xi,\eta)^\bot = (\eta,-\xi) \in \mathbb{R}^2$. Apart from the two-dimensional setting and the different covariance structure of the linear part this problem has the same structure as the one dimensional stochastic Burgers equation we considered before. Note that to make sense of it (and in order to construct controlled solutions) we can consider the Galerkin approximations constructed as follows. Fix $N$ and solve the finite dimensional problem \begin{equation} \label{eq:ns-N}
\mathrm{d} (u^N_{t})_k = - |k|^{2+2\sigma} (u^N_{t})_k \mathrm{d} t + B^N_k(u^N_t) \mathrm{d} t + |k|^{\sigma} \mathrm{d} \beta^k_t \end{equation}
for $k \in \mathbb{Z}^2_N = \{k \in \mathbb{Z}^2 : |k| \le N\}$, where \begin{equation} \label{eq:drift-N}
B^N_k(x) =\mathbb{I}_{|k|\le N} \sum_{\substack{k_1+k_2=k\\|k_1|\le N, |k_2|\le N}} b(k,k_1,k_2) x_{k_1} x_{k_2}
\end{equation} The generator of the process $u^N$ is given by $ L^N\varphi(x) = L^0\varphi(x) + \sum_{k\in \ZZ^2\backslash\{0\}} B^N_k(x) D_k \varphi(x) $ where $$
L^0 \varphi(x) = \frac12 \sum_{k\in \ZZ^2\backslash\{0\}} |k|^{2\sigma}( D_{-k}D_k\varphi(x) -|k|^2 x_k D_k \varphi(x)) $$ is the generator of a suitable OU flow. Note moreover that the kinetic energy of $u$ given by $
E(x) = \sum_k |k|^{2} |x_k|^2 $ is invariant under the flow generated by $B^N$. Moreover $ D_{k} B^N_k(x) = 0 $ since $x_k$ does not enter the expression of $B^N_k(x)$, so the vector fields $B^N$ also leave the measure $ \prod_{k \in \ZZ^2_N\backslash\{0\}} dx_k $ invariant. Then the (complex) Gaussian measure $$
\gamma(dx) = \prod_{k \in \ZZ^2\backslash\{0\}} Z_k e^{-|k|^2 |x_k|^2} dx_k $$ is invariant under the flow generated by $B^N$. (This measure should be understood as restricted to the set $\{x \in \mathbb{C}^{\ZZ^2\backslash\{0\}} : x_{-k} = \overline{x_k} \}$). The measure $\gamma$ is also invariant for the $u^N$ diffusion since it is invariant for $B^N$ and for the OU process generated by $L^0$. Introduce the standard Sobolev norms $
\|x\|_\sigma^2 = \sum_{k \in \ZZ^2\backslash\{0\}} |k|^{2\sigma} |x_k|^2 $
and denote by $H^\sigma$ the space of elements $x$ with $\|x \|_{\sigma}<\infty$. The measure $\gamma$ is the Gaussian measure associated with $H^1$ and is supported on any $H^\sigma$ with $\sigma<0$: $$
\int \|x\|_\sigma^2 \gamma(dx) = \sum_{k \in \ZZ^2\backslash\{0\}} |k|^{2\sigma-2} < \infty $$ so $(\gamma, H^1, \cap_{\varepsilon < 0}H^\varepsilon)$ is an abstract Wiener space in the sense of Gross. Note that the vector field $B_k(x)$ is not defined on the support of $\gamma$. To make sense of controlled solutions to this equation we need to control $$
\mathcal{E}((H_{N})^\pm_k)(x)\lesssim \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}}c_{\text{ns}}(k,k_1,k_2) |x_{k_2}|^2 $$
with $c_{\text{ns}}(k,k_1,k_2) = |k_1|^{2\sigma} |k_1|^2|k_2|^2/(|k_1|^{2+2\sigma}+|k_2|^{2+2\sigma})^2$ and note that the stationary expectation of this term can be estimated by $$
I_N(k) = \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}}c_{\text{ns}}(k,k_1,k_2) |k_2|^{-2} \lesssim \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}}\frac{|k_1|^{2+2\sigma}}{ (|k_1|^{2+2\sigma}+|k_2|^{2+2\sigma})^2}\lesssim $$ $$
\lesssim \sum_{\substack{k_1,k_2 :k_1+k_2=k\\|k|,|k_1|,|k_2|\le N}}\frac{1}{ |k_1|^{2+2\sigma}+|k_2|^{2+2\sigma}}\lesssim |k|^{-2\sigma} $$ for any $\sigma > 0$. This estimate allows us to apply our machinery and obtain stationary controlled solutions to this equation.
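For the reader's convenience, the last step is the two dimensional analogue of the scaling argument used for $I_N(k)$ in Sect.~\ref{sec:estimates}: comparing the sum with the corresponding integral and substituting $q\mapsto |k| q$ one gets
$$ \sum_{k_1+k_2=k}\frac{1}{|k_1|^{2+2\sigma}+|k_2|^{2+2\sigma}} \lesssim \int_{\mathbb{R}^2}\frac{\mathrm{d} q}{|q|^{2+2\sigma}+|k-q|^{2+2\sigma}} = |k|^{-2\sigma}\int_{\mathbb{R}^2}\frac{\mathrm{d} q}{|q|^{2+2\sigma}+|k/|k|-q|^{2+2\sigma}} \lesssim |k|^{-2\sigma}, $$
where the last integral is finite, and independent of the direction $k/|k|$, since $2+2\sigma>2$.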
\begin{bibdiv} \begin{biblist}
\bib{albeverio-cruzeiro}{article}{
title = {Global flows with invariant {(Gibbs)} measures for Euler and {Navier-Stokes} two dimensional fluids},
volume = {129},
issn = {0010-3616},
url = {http://www.springerlink.com/content/u1406006h7x32575/abstract/},
doi = {10.1007/BF02097100},
number = {3},
journal = {Communications in Mathematical Physics},
author = {Albeverio, S.},
author = {Cruzeiro, {A.-B.}},
year = {1990},
keywords = {Physics and Astronomy},
pages = {431--444}, }
\bib{albeverio-ferrario}{incollection}{
series = {Lecture Notes in Mathematics},
title = {Some Methods of Infinite Dimensional Analysis in Hydrodynamics: An Introduction},
volume = {1942},
isbn = {978-3-540-78492-0},
shorttitle = {Some Methods of Infinite Dimensional Analysis in Hydrodynamics},
url = {http://www.springerlink.com/content/2n14280t40q7v34q/abstract/},
booktitle = {{SPDE} in Hydrodynamic: Recent Progress and Prospects},
publisher = {Springer Berlin / Heidelberg},
author = {Albeverio, S.},
author = {Ferrario, B.},
year = {2008},
keywords = {Mathematics and Statistics},
pages = {1--50}, } \bib{assing1}{article} {
title = {A Pregenerator for Burgers Equation Forced by Conservative Noise},
volume = {225},
issn = {0010-3616},
url = {http://www.springerlink.com/content/t48f9yxafdddjnwx/abstract/},
doi = {10.1007/s002200100606},
number = {3},
journal = {Communications in Mathematical Physics},
author = {Assing, S.},
year = {2002},
keywords = {Physics and Astronomy},
pages = {611--632}, }
\bib{assing2}{article} {
title = {A rigorous equation for the {Cole-Hopf} solution of the conservative {KPZ} dynamics},
url = {http://arxiv.org/abs/1109.2886},
journal = {{arXiv:1109.2886}},
author = {Assing, S.},
month = {sep},
year = {2011}, }
\bib{babin-kdv}{article}{
title = {On the regularization mechanism for the periodic Korteweg-de Vries equation},
volume = {64},
issn = {0010-3640},
url = {http://dx.doi.org/10.1002/cpa.20356},
doi = {10.1002/cpa.20356},
number = {5},
journal = {Communications on Pure and Applied Mathematics},
author = {Babin, A. V.},
author = {Ilyin, A. A.},
author = {Titi, E. S.},
year = {2011},
pages = {591--648} }
\bib{babin-ns}{article}{
title = {Regularity and integrability of {\$3\$D} Euler and {Navier-Stokes} equations for rotating fluids},
volume = {15},
issn = {0921-7134},
number = {2},
journal = {Asymptotic Analysis},
author = {Babin, A.},
author = {Mahalov, A.},
author = {Nicolaenko, B.},
year = {1997},
pages = {103--150} }
\bib{BG}{article}{
author={Bertini, L.},
author={Giacomin, G.},
title={Stochastic Burgers and KPZ equations from particle systems},
journal={Comm. Math. Phys.},
volume={183},
date={1997},
number={3},
pages={571--607},
issn={0010-3616},
review={\MR{1462228 (99e:60212)}},
doi={10.1007/s002200050044}, }
\bib{CLO}{article}{
author={Chang, C.-C.},
author={Landim, C.},
author={Olla, S.},
title={Equilibrium fluctuations of asymmetric simple exclusion processes
in dimension $d\geq 3$},
journal={Probab. Theory Related Fields},
volume={119},
date={2001},
number={3},
pages={381--409},
issn={0178-8051},
review={\MR{1821140 (2002e:60157)}},
doi={10.1007/PL00008764}, }
\bib{DDT}{article}{
author={Da Prato, G.},
author={Debussche, A.},
author={Tubaro, L.},
title={A modified Kardar-Parisi-Zhang model},
journal={Electron. Comm. Probab.},
volume={12},
date={2007},
\end{biblist} \end{bibdiv}
\end{document}
\begin{document}
\begin{abstract} We prove the existence of generalized solution for incompressible and viscous non-Newtonian two-phase fluid flow for spatial dimension 2 and 3. The phase boundary moves along with the fluid flow plus its mean curvature while exerting surface tension force to the fluid. An approximation scheme combining the Galerkin method and the phase field method is adopted. \end{abstract}
\maketitle
\makeatletter \@addtoreset{equation}{section} \renewcommand{\theequation}{\thesection.\@arabic\c@equation} \makeatother
\section{Introduction} \quad In this paper we prove existence results for a problem on incompressible viscous two-phase fluid flow in the torus $\Omega={\mathbb T}^d=({\mathbb R}/{\mathbb Z})^d$, $d=2,\,3$. A freely moving $(d-1)$-dimensional phase boundary $\Gamma(t)$ separates the domain $\Omega$ into two domains $\Omega^+(t)$ and $\Omega^-(t)$, $t\geq 0$. The fluid flow is described by means of the velocity field $u:\Omega\times [0,\infty)\rightarrow {\mathbb R}^d$ and the pressure $\Pi:\Omega\times [0,\infty)\rightarrow \mathbb R$. We assume the stress tensor of the fluids is of the form $T^{\pm}(u,\Pi)=\tau^{\pm}(e(u))-\Pi\, I$ on $\Omega^{\pm}(t)$, respectively. Here $e(u)$ is the symmetric part of the velocity gradient $\nabla u$, i.e., $e(u)=(\nabla u+\nabla u^T)/2$, and $I$ is the $d\times d$ identity matrix. Let $\mathbb{S}(d)$ be the set of $d\times d$ symmetric matrices. We assume that the functions $\tau^{\pm}:\mathbb{S}(d)\rightarrow\mathbb{S}(d)$ are locally Lipschitz and satisfy, for some $\nu_0>0$ and $p>\frac{d+2}{2}$ and for all $s,\,\hat{s}\in \mathbb{S}(d)$, \begin{equation}
\nu_0 |s|^p \leq \tau^{\pm}(s):s\leq \nu_0^{-1}(1+|s|^p),\label{taucond1} \end{equation} \begin{equation}
|\tau^{\pm}(s)|\leq \nu_0^{-1}(1+|s|^{p-1}),\label{taucond2} \end{equation} \begin{equation} (\tau^{\pm}(s)-\tau^{\pm}(\hat{s})):(s-\hat{s})\geq 0.\label{taucond3} \end{equation}
Here we define $A:B={\rm tr}\,(AB)$ for $d\times d$ matrices $A,\, B$. A typical example is $\tau^{\pm}(s)=(a^{\pm}+b^{\pm}|s|^2)^{\frac{p-2}{2}}s$ with $a^{\pm}>0$ and $b^{\pm}>0$.
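For this example the structure conditions can be checked directly; we sketch the computation, using that $p\geq 2$ (which holds here since $p>\frac{d+2}{2}$ and $d\geq 2$). Since $a^{\pm}>0$ and $b^{\pm}>0$,
\begin{equation*}
\tau^{\pm}(s):s=(a^{\pm}+b^{\pm}|s|^2)^{\frac{p-2}{2}}|s|^2\geq (b^{\pm})^{\frac{p-2}{2}}|s|^{p},
\qquad
|\tau^{\pm}(s)|=(a^{\pm}+b^{\pm}|s|^2)^{\frac{p-2}{2}}|s|\leq c(a^{\pm},b^{\pm},p)\,(1+|s|^{p-1}),
\end{equation*}
which give the lower bound in \eqref{taucond1} and the bound \eqref{taucond2} after adjusting $\nu_0$; the upper bound in \eqref{taucond1} follows in the same way. The monotonicity \eqref{taucond3} holds since $\tau^{\pm}$ is the gradient of the convex function $s\mapsto \frac{1}{b^{\pm}p}(a^{\pm}+b^{\pm}|s|^2)^{\frac{p}{2}}$.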
We assume that the velocity field $u(x,t)$ satisfies the following non-Newtonian fluid flow equation: \begin{eqnarray} \frac{\partial u}{\partial t}+u\cdot\nabla u ={\rm div}\,(T^+(u,\Pi)),\hspace{.5cm}{\rm div}\, u=0 &\quad & {\rm on} \ \Omega^+(t), \ t> 0,\label{main1}\\ \frac{\partial u}{\partial t}+u\cdot\nabla u ={\rm div}\,(T^-(u,\Pi)),\hspace{.5cm}{\rm div}\, u=0 &\quad & {\rm on} \ \Omega^-(t), \ t> 0,\label{main2}\\ u^+= u^-,\hspace{.5cm}n\cdot (T^+(u,\Pi)-T^-(u,\Pi))= \kappa_1 H &\quad & {\rm on} \ \Gamma(t), \ t> 0.\qquad \qquad \label{main3} \end{eqnarray} The superscript $\pm$ in \eqref{main3} indicates the limiting values approaching $\Gamma(t)$ from $\Omega^{\pm}(t)$, respectively; $n$ is the unit outer normal vector of $\partial\Omega^+(t)$, $H$ is the mean curvature vector of $\Gamma(t)$ and $\kappa_1>0$ is a constant. The condition \eqref{main3} represents the force balance with an isotropic surface tension effect of the phase boundary. The boundary $\Gamma(t)$ is assumed to move with the velocity given by \begin{equation} V_{\Gamma}=(u\cdot n)n+\kappa_2 H \hspace{.5cm}{\rm on} \quad\Gamma(t),\quad t> 0, \label{velocity} \end{equation} where $\kappa_2>0$ is a constant. This differs from the conventional kinematic condition ($\kappa_2=0$) and is motivated by the phase boundary motion with hydrodynamic interaction. The reader is referred to \cite{Liu} and the references therein for the relevant physical background. By setting $\varphi=1$ on $\Omega^+(t)$, $\varphi=-1$ on $\Omega^-(t)$ and \begin{equation*} \tau(\varphi,e(u))=\frac{1+\varphi}{2}\tau^+(e(u))+\frac{1-\varphi}{2} \tau^-(e(u)) \end{equation*} on $\Omega^+(t)\cup\Omega^-(t)$, the equations \eqref{main1}-\eqref{main3} are expressed in the distributional sense as \begin{equation} \begin{split} \frac{\partial u}{\partial t}+u\cdot\nabla u &={\rm div}\,\tau(\varphi,e(u)) -\nabla \Pi +\kappa_1 H\mathcal{H}^{d-1}\lfloor_{\Gamma(t)} \hspace{.5cm} {\rm on} \ \Omega\times (0,\infty), \label{nsdist}\\ {\rm div}\, u&=0 \hspace{.5cm} {\rm on} \ \Omega\times (0,\infty), \end{split} \end{equation} where $\mathcal{H}^{d-1}$ is the $(d-1)$-dimensional Hausdorff measure. The expression \eqref{nsdist} makes it evident that the phase boundary exerts surface tension force on the fluid wherever $H\neq 0$ on $\Gamma(t)$. Note that if $\Gamma(t)$ is the boundary of a convex domain, the sign of $H$ is taken so that the presence of surface tension tends to accelerate the fluid flow inwards in general. We remark that sufficiently smooth solutions of \eqref{main1}-\eqref{velocity} satisfy the following energy equality: \begin{equation}
\frac{d}{dt}\left\{\frac{1}{2}\int_{\Omega}|u|^2\,dx+\kappa_1{\mathcal H}^{d-1}(\Gamma(t))\right\}=-\int_{\Omega}\tau(\varphi,e(u)):e(u)\,dx
-\kappa_1\kappa_2\int_{\Gamma(t)}|H|^2\,d{\mathcal H}^{d-1}. \label{energyeq} \end{equation} This follows from the first variation formula for the surface measure \begin{equation} \frac{d}{dt}{\mathcal H}^{d-1}(\Gamma(t))=-\int_{\Gamma(t)} V_{\Gamma}\cdot H\, d{\mathcal H}^{d-1} \label{firstvar} \end{equation} and by the equations \eqref{main1}-\eqref{velocity}.
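In more detail, the computation runs as follows (a formal sketch, assuming solutions smooth enough to justify the integrations by parts on $\Omega^{\pm}(t)$). Testing \eqref{main1}-\eqref{main2} with $u$, using ${\rm div}\,u=0$ (so that the convection and pressure contributions vanish) and the jump condition \eqref{main3}, one obtains
\begin{equation*}
\frac{d}{dt}\,\frac12\int_{\Omega}|u|^2\,dx=-\int_{\Omega}\tau(\varphi,e(u)):e(u)\,dx+\kappa_1\int_{\Gamma(t)}H\cdot u\,d{\mathcal H}^{d-1},
\end{equation*}
while \eqref{firstvar} with $V_{\Gamma}$ as in \eqref{velocity} gives
\begin{equation*}
\kappa_1\frac{d}{dt}{\mathcal H}^{d-1}(\Gamma(t))=-\kappa_1\int_{\Gamma(t)}(u\cdot n)(H\cdot n)\,d{\mathcal H}^{d-1}-\kappa_1\kappa_2\int_{\Gamma(t)}|H|^2\,d{\mathcal H}^{d-1}.
\end{equation*}
Since $H$ is normal to $\Gamma(t)$, $H\cdot u=(H\cdot n)(u\cdot n)$ on $\Gamma(t)$, and adding the two identities yields \eqref{energyeq}.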
The aim of the present paper is to prove the time-global existence of a weak solution for \eqref{main1}-\eqref{velocity} (see Theorem \ref{maintheorem} for the precise statement). We construct approximate solutions via the Galerkin method and the phase field method. Note that it is not even clear for our problem if the phase boundary may stay as a codimension 1 object, since an a priori irregular flow field may tear apart or crumble the phase boundary immediately, with a possibility of developing singularities and fine-scale complexities. Even if we set the initial datum to be sufficiently regular, the eventual occurrence of singularities of the phase boundary or the flow field may not be avoided in general. To accommodate the presence of singularities of the phase boundary, we use the notion of varifolds from geometric measure theory. In establishing \eqref{velocity} we adopt the formulation due to Brakke \cite{Brakke}, where he proved the existence of varifolds moving by mean curvature. We have the extra transport effect $(u\cdot n)n$, which is not very regular in the present problem. Typically we would only have $u\in L^p_{loc}([0,\infty);W^{1,p}(\Omega)^d)$. This poses a serious difficulty in modifying Brakke's original construction in \cite{Brakke}, which is already intricate and involved. Instead we take advantage of the recent progress in the understanding of the Allen-Cahn equation with a transport term to approximate the motion law \eqref{velocity}, \[\frac{\partial\varphi}{\partial t}+u\cdot\nabla\varphi=\kappa_2\left(\Delta\varphi-\frac{W'(\varphi)}{\varepsilon^2}\right).\hspace{1cm}{\rm (ACT)} \] Here $W$ is an equal-depth double-well potential and we set $W(\varphi)=(1-\varphi^2)^2/2$. When $\varepsilon\rightarrow 0$, we have proved in \cite{LST1} that the interface moves according to the velocity \eqref{velocity} in the sense of Brakke under suitable regularity assumptions on $u$. To be more precise, we use a regularized version of (ACT) as we present later for the result of \cite{LST1} to be applicable. The result of \cite{LST1} was built upon those of many earlier works, most relevant being \cite{Ilmanen1,Ilmanen2} which analyzed (ACT) with $u=0$, and also \cite{Hutchinson,Tonegawa,Sato,Roeger}.
Since the literature on two-phase flow is immense and continues to grow rapidly, we mention results which are closely related or whose aims point to some time-global existence with general initial data. In the case without surface tension $(\kappa_1=\kappa_2=0)$, Solonnikov \cite{Solonnikov1} proved the time-local existence of classical solutions. The time-local existence of weak solutions was proved by Solonnikov \cite{Solonnikov2}, Beale \cite{Beale1}, Abels \cite{Abels1}, and others. The time-global existence of weak solutions was proved by Beale \cite{Beale2} in the case of small initial data. Nouri-Poupaud \cite{Nouri} considered the case of multi-phase fluids. Giga-Takahashi \cite{GigaTakahashi} considered the problem within the framework of the level set method. When $\kappa_1>0$, $\kappa_2=0$, Plotnikov \cite{Plotnikov} proved the time-global existence of varifold solutions for $d=2$, $p>2$, and Abels \cite{Abels2} proved the time-global existence of measure-valued solutions for $d=2, 3$, $p>\frac{2d}{d+2}$. When $\kappa_1>0$, $\kappa_2>0$, Maekawa \cite{Maekawa} proved the time-local existence of classical solutions with $p=2$ (Navier-Stokes and Stokes) in all dimensions. Abels-R\"{o}ger \cite{Abels-Roeger} considered a coupled problem of Navier-Stokes and Mullins-Sekerka (instead of motion by mean curvature in the present paper) and proved the existence of weak solutions. As for related phase field approximations of the sharp interface model, which we adopt in this paper, Liu and Walkington \cite{Liu} considered the case of fluids containing visco-hyperelastic particles. Perhaps the most closely related work to the present paper is that of Mugnai and R\"{o}ger \cite{Mugnai}, which studied the identical problem with $p=2$ (linear viscosity case) and $d=2,3$. There they introduced the notion of $L^2$ velocity and showed that \eqref{velocity} is satisfied in a weak sense different from that of Brakke for the limiting interface. Kim-Consiglieri-Rodrigues \cite{Kim} dealt with a coupling of Cahn-Hilliard and Navier-Stokes equations to describe the flow of a non-Newtonian two-phase fluid with phase transitions. Soner \cite{Soner} dealt with a coupling of Allen-Cahn and heat equations to approximate the Mullins-Sekerka problem with kinetic undercooling. Soner's work is closely related in that he showed a surface energy density bound, which is also essential in the present problem.
The organization of this paper is as follows. In Section 2, we summarize the basic notations and main results. In Section 3 we construct a sequence of approximating solutions for the two-phase flow problem. Section 4 describes the result of \cite{LST1}, which establishes the upper density ratio bound for the surface energy and which proves \eqref{velocity}. In the last section, Section 5, we combine the results from Sections 3 and 4 and obtain the desired weak solution for the two-phase flow problem.
\section{Preliminaries and Main results}
\quad For $d\times d$ matrices $A,B$ we denote $A:B={\rm tr}\,(AB)$ and $|A|:=\sqrt{A:A}$. For $a \in \mathbb R^d$, we denote by $a\otimes a$ the $d\times d$ matrix with the $i$-th row and $j$-th column entry equal to $a_i a_j$.
\subsection{Function spaces} \quad Set $\Omega={\mathbb T}^d$ throughout this paper. We set function spaces for $p>\frac{d+2}{2}$ as follows: \begin{equation*} \begin{split} &{\mathcal V}=\left\{v \in C^{\infty}(\Omega)^d\,;\,{\rm div}\,v=0\right\},\\ &{\rm for} \ s\in {\mathbb Z}^+ \cup\{0\}, \ W^{s,p}(\Omega)=\{v \ : \ \nabla ^j v\in L^p(\Omega) \ {\rm for } \ 0\leq j\leq s\},\\ &V^{s,p}= {\rm closure \ of} \ {\mathcal V} \ {\rm in \ the} \ W^{s,p}(\Omega)^d{\rm \mathchar`-norm.} \end{split} \end{equation*}
We denote the dual space of $V^{s,p}$ by $(V^{s,p})^*$. The $L^2$ inner product is denoted by $(\cdot,\cdot)$. Let $\chi_A$ be the characteristic function of $A$, and let $|\nabla\chi_A|$ be the total variation measure of the distributional derivative $\nabla \chi_A$.
\subsection{Varifold notations} \quad We recall some notions from geometric measure theory and refer to \cite{Allard,Brakke,Simon} for more details. A {\it general $k$-varifold} in $\mathbb R^d$ is a Radon measure on $\mathbb R^d\times G(d,k)$, where $G(d,k)$ is the space of $k$-dimensional subspaces in $\mathbb R^d$. We denote the set of all general $k$-varifolds by ${\bf V}_k(\mathbb R^d)$. When $S$ is a $k$-dimensional subspace, we also use $S$ to denote the orthogonal projection matrix corresponding to $\mathbb R^d\rightarrow S$. The first variation of $V$ can be written as \begin{equation*} \delta V(g)=\int_{\mathbb R^d\times G(d,k)}\nabla g(x):S\,dV(x,S)
=-\int_{\mathbb R^d}g(x)\cdot H(x)\,d\|V\|(x) \quad {\rm if }\, \|\delta V\|\ll \|V\|. \end{equation*}
Here $V \in {\bf V}_k(\mathbb R^d)$, $\|V\|$ is the mass measure of $V$, $g \in C_c^1(\mathbb R^d)^d$, $H=H_V$ is the generalized mean curvature vector if it exists and $\|\delta V\|\ll \|V\|$
denotes that $\|\delta V\|$ is absolutely continuous with respect to $\|V\|$.
We call a Radon measure $\mu$ {\it $k$-integral} if $\mu$ is represented as $\mu=\theta{\mathcal H}^k\lfloor_X$, where $X$ is a countably $k$-rectifiable, ${\mathcal H}^k$-measurable set, and $\theta \in L^1_{\rm loc}({\mathcal H}^k\lfloor_X)$ is positive and integer-valued ${\mathcal H}^k$ a.e. on $X$. ${\mathcal H}^k\lfloor_X$ denotes the restriction of ${\mathcal H}^k$ to the set $X$. We denote the set of $k$-integral Radon measures by ${\mathcal{IM}}_k$. We say that a $k$-integral varifold is of {\it unit density} if $\theta=1$ ${\mathcal H}^k$ a.e. on $X$. To each such $k$-integral measure $\mu$ there corresponds a unique $k$-varifold $V$ defined by \[\int_{\mathbb R^d\times G(d,k)}\phi(x,S)\,dV(x,S)=\int_{\mathbb R^d}\phi(x,T_x\mu)\,d\mu(x)\quad {\rm for} \ \phi\in C_c(\mathbb R^d\times G(d,k)),\]
where $T_x\mu$ is the approximate tangent $k$-plane. Note that $\mu=\|V\|$. We make such identification in the following. For this reason we define $H_{\mu}$ as $H_V$ (or simply $H$) if the latter exists. When $X$ is a $C^2$ submanifold without boundary and $\theta$ is constant on $X$, $H$ corresponds to the usual mean curvature vector for $X$. In the following we suitably adopt the above notions on $\Omega={\mathbb T}^d$ such as ${\bf V}_k(\Omega)$, which present no essential difficulties.
\subsection{Weak formulation of free boundary motion} For a sufficiently smooth surface $\Gamma(t)$ moving by the velocity \eqref{velocity}, the following holds for any $\phi\in C^2(\Omega;\mathbb R^+)$ due to the first variation formula \eqref{firstvar}: \begin{equation} \frac{d}{dt}\int_{\Gamma(t)}\phi\, d{\mathcal H}^{d-1}\leq \int_{\Gamma(t)}(-\phi H+\nabla\phi)\cdot\{\kappa_2 H+(u\cdot n)n\}\, d{\mathcal H}^{d-1}. \label{weakvelo} \end{equation} One can check that having this inequality for any $\phi\in C^2(\Omega;\mathbb R^+)$ implies \eqref{velocity}; thus \eqref{weakvelo} is equivalent to \eqref{velocity}. Such use of non-negative test functions to characterize the motion law is due to Brakke \cite{Brakke}, where he developed the theory of varifolds moving by mean curvature. Here we suitably modify Brakke's approach to incorporate the transport term $u$. To do this we recall \begin{thm}{\bf (Meyers-Ziemer inequality)} For any Radon measure $\mu$ on $\mathbb R^d$ with \begin{equation*}D=\sup_{r>0,\, x \in {\mathbb R}^d}\frac{\mu(B_r(x))}{\omega_{d-1}r^{d-1}}<\infty, \end{equation*} we have \begin{equation}
\int_{\mathbb R^d}|\phi|\,d\mu\leq c D\int_{\mathbb R^d}|\nabla \phi|\,dx \label{MZ1} \end{equation} for $\phi \in C_c^1(\mathbb R^d)$. Here $c$ depends only on $d$. \label{MZ} \end{thm} See \cite{Meyers} and \cite[p.266]{Ziemer}. By localizing \eqref{MZ1} to $\Omega={\mathbb T}^d$ we obtain (with $r$ in the definition of $D$ above replaced by $0<r<1/2$) \begin{equation}
\int_{\Omega}|\phi|^2\, d\mu\leq c D \left(\|\phi\|_{L^2(\Omega)}^2+\|\nabla
\phi\|_{L^2(\Omega)}^2\right) \label{MZ2} \end{equation} where the constant $c$ may be different due to the localization but depends only on $d$.
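A minimal sketch of this localization, with $\{\chi_j\}$ denoting a finite smooth partition of unity on $\Omega$ subordinate to coordinate balls of radius less than $\frac12$ (our notation): applying \eqref{MZ1} to $\chi_j\phi^2$ on each coordinate patch and summing over $j$,
\begin{equation*}
\int_{\Omega}|\phi|^2\,d\mu\leq cD\sum_j\int_{\Omega}|\nabla(\chi_j\phi^2)|\,dx
\leq cD\int_{\Omega}\big(c'|\phi|^2+2|\phi||\nabla\phi|\big)\,dx
\leq cD\left(\|\phi\|_{L^2(\Omega)}^2+\|\nabla\phi\|_{L^2(\Omega)}^2\right),
\end{equation*}
where $c'$ bounds $\sum_j|\nabla\chi_j|$, the last step uses Young's inequality, and one also checks that the density ratio of the localized measure over all radii is controlled by the ratio over $0<r<\frac12$.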
The inequality \eqref{MZ2} allows us to define $\int_{\Omega}|\phi|^2\, d\mu$ for $\phi\in W^{1,2}(\Omega)$ by the standard density argument when $D<\infty$.
We define for any Radon measure $\mu$, $u\in L^2(\Omega)^d$ and $\phi\in C^1(\Omega;\mathbb R^+)$ \begin{equation} {\mathcal B}(\mu,\, u,\, \phi)=\int_{\Omega} (-\phi H+\nabla\phi)\cdot\{\kappa_2 H+(u\cdot n)n\}\, d\mu \label{rhs} \end{equation} if $\mu\in {\mathcal{IM}}_{d-1}(\Omega)$ with generalized mean curvature $H\in L^2(\mu)^d$ and with \begin{equation}\sup_{\frac12>r>0,\, x \in \Omega} \frac{\mu(B_r(x))}{\omega_{d-1}r^{d-1}}<\infty \label{den} \end{equation} and $u\in W^{1,2}(\Omega)^d$. Due to the definition of ${\mathcal {IM}}_{d-1}(\Omega)$, the unit normal vector $n$ is uniquely defined $\mu$ a.e. on $\Omega$ modulo $\pm$ sign. Since $u$ appears in \eqref{rhs} only through $(u\cdot n)n$, the choice of sign does not affect the definition. The right-hand side of \eqref{rhs} gives a well-defined finite value due to the stated conditions and \eqref{MZ2}. If any one of the conditions is not satisfied, we define ${\mathcal B}(\mu,\, u,\, \phi)=-\infty$.
Next we note \begin{prop} For any $0<T<\infty$ and $p>\frac{d+2}{2}$, $$\left\{u\in L^{p}([0,T];V^{1,p})\,;\,\frac{\partial u}{\partial t}\in L^{\frac{p}{p-1}}([0,T]; (V^{1,p})^*)\right\}\hookrightarrow C([0,T];\, V^{0,2}).$$ \label{embed} \end{prop} The Sobolev embedding gives $V^{1,p} \hookrightarrow V^{0,2}$ for such $p$ and we may apply \cite[p. 35, Lemma 2.45]{Malek} to obtain the above embedding. Indeed, we only need $p>\frac{2d}{d+2}$ for Proposition \ref{embed} to hold, and we have $\frac{d+2}{2}>\frac{2d}{d+2}$. Thus for this class of $u$ we may define $u(\cdot, t)\in V^{0,2}$ for all $t\in [0,T]$ instead of for a.e. $t$, and we tacitly assume that we redefine $u$ in this way for all $t$. For $\{\mu_t\}_{t\in [0,\infty)}$, $u\in L^p_{loc}([0,\infty);V^{1,p})$ with $\frac{\partial u}{\partial t}\in L^{\frac{p}{p-1}}_{loc} ([0,\infty); (V^{1,p})^*)$ for $p>\frac{d+2}{2}$ and $\phi\in C^1(\Omega;\mathbb R^+)$, we define ${\mathcal B}(\mu_t,\, u(\cdot,t),\, \phi)$ as in \eqref{rhs} for all $t\geq 0$.
\subsection{The main results} Our main results are the following. \begin{thm} Let $d=2$ or $3$ and $p>\frac{d+2}{2}$. Let $\Omega={\mathbb T}^d$. Assume that locally Lipschitz functions $\tau^{\pm}:\mathbb{S}(d)\rightarrow\mathbb{S}(d)$ satisfy \eqref{taucond1}-\eqref{taucond3}. For any initial data $u_0\in V^{0,2}$ and $\Omega^+(0)\subset\Omega$ having $C^1$ boundary $\partial\Omega^+(0)$, there exist \begin{enumerate} \item[(a)] $u \in L^{\infty}([0,\infty);V^{0,2})\cap L^p_{loc}([0,\infty);V^{1,p})$ with $\frac{\partial u}{\partial t}\in L^{\frac{p}{p-1}}_{loc}([0,\infty);(V^{1,p})^*)$, \item[(b)] a family of Radon measures $\{\mu_t\}_{t\in [0,\infty)}$ with $\mu_t\in {\mathcal{IM}}_{d-1}$ for a.e. $t\in [0,\infty)$ and \item[(c)] $\varphi \in BV_{loc}(\Omega\times [0,\infty)) \cap L^{\infty}([0,\infty);BV(\Omega)) \cap C^{\frac{1}{2}}_{loc}([0,\infty);L^1(\Omega))$ \end{enumerate} such that the following properties hold: \begin{enumerate} \item[(i)] The triplet $(u(\cdot,t),\, \varphi(\cdot,t),\,\mu_t)_{t\in [0,\infty)}$ is a weak solution of \eqref{nsdist}. More precisely, for any $T>0$ we have \begin{equation} \int_0^T \int_{\Omega}-u\cdot \frac{\partial v}{\partial t}+(u\cdot\nabla u)\cdot v+\tau(\varphi,e(u)):e(v)\,dxdt =\int_{\Omega}u_0\cdot v(0)\,dx+\int_0^T\int_{\Omega}\kappa_1 H\cdot v \, d\mu_t dt \label{maintheorem1} \end{equation} for any $v \in C^{\infty}([0,T];{\mathcal V})$ such that $v(T)=0$. Here $H\in L^2([0,\infty);L^2(\mu_t)^d)$ is the generalized mean curvature vector corresponding to $\mu_t$. \item[(ii)] The triplet $(u(\cdot,t),\, \varphi(\cdot,t),\,\mu_t)_{t\in [0,\infty)}$ satisfies the energy inequality \begin{equation} \begin{split}
\frac12\int_{\Omega}|u(\cdot,T)|^2\,dx+\kappa_1\mu_T(\Omega)&+\int_0^T\int_{\Omega}
\tau(\varphi,e(u)):e(u)\, dxdt+\kappa_1\kappa_2\int_0^T\int_{\Omega}|H|^2\, d\mu_t dt\\
&\leq \frac12\int_{\Omega}|u_0|^2\,dx+\kappa_1{\mathcal H}^{d-1}(\partial \Omega^+(0)) =: E_0\end{split} \label{eneineq} \end{equation} for all $T<\infty$. \item[(iii)] For all $0\leq t_1<t_2< \infty$ and $\phi\in C^2(\Omega;\mathbb R^+)$ we have \begin{equation} \mu_{t_2}(\phi)-\mu_{t_1}(\phi)\leq \int_{t_1}^{t_2}{\mathcal B}(\mu_t,\, u(\cdot,t),\, \phi)\, dt. \label{maintheorem3} \end{equation} Moreover, ${\mathcal B}(\mu_t,\, u(\cdot,t),\, \phi)\in L^{1}_{loc}([0,\infty))$. \item[(iv)] We set $D_0=\sup_{0<r<1/2,\, x\in \Omega}\frac{{\mathcal H}^{d-1} (\partial\Omega^+(0)\cap B_r(x))}{\omega_{d-1}r^{d-1}}$. For any $0<T<\infty$, there exists a constant $D=D(E_0,D_0,T,p,\nu_0,\kappa_1,\kappa_2)$ such that $$\sup_{0<r<1/2,\,x\in \Omega}\frac{\mu_t(B_r(x))}{\omega_{d-1}r^{d-1}} \leq D$$ for all $t\in [0,T]$. \item[(v)] The function $\varphi$ satisfies the following properties.\\
\ (1) $\varphi=\pm 1$ {\rm a.e. on} $\Omega$ for all $t\in [0,\infty)$.\\
\ (2) $\varphi(x,0)=\chi_{\Omega^+(0)}-\chi_{\Omega\setminus\Omega^+(0)}$ {\rm a.e. on} $\Omega$.\\
\ (3) ${\rm spt}|\nabla\chi_{\{\varphi(\cdot,t)=1\}}|
\subset{\rm spt}\mu_t$ for all $t\in [0,\infty)$. \item[(vi)] There exists \[T_1=T_1(E_0,D_0,p,\nu_0,\kappa_1,\kappa_2)>0\]
such that $\mu_t$ is of unit density for a.e. $t\in [0,T_1]$. In addition $|\nabla\chi_{\{\varphi(\cdot,t)=1\}}|=\mu_t$ for a.e. $t\in [0,T_1]$. \end{enumerate} \label{maintheorem} \end{thm}
\begin{rem} Somewhat different from $u=0$ case we do not expect that \begin{equation} \limsup_{\Delta t\rightarrow 0}\frac{\mu_{t+\Delta t}(\phi) -\mu_t(\phi)}{\Delta t}\leq {\mathcal B}(\mu_t,\, u(\cdot,t),\phi) \label{ve1} \end{equation} holds for all $t\geq 0$ and $\phi \in C^2(\Omega; \mathbb R^+)$ in general. While we know that the right-hand side is $<\infty$ (by definition) for all $t$, we do not know in general if the left-hand side is $<\infty$. One may even expect that at a time when
$\int_{\Omega}|\nabla u(\cdot,t)|^p\,dx=\infty$, it may be $\infty$. Thus we may need to define \eqref{velocity} in the integral form \eqref{maintheorem3} for the definition of Brakke's flow. Note that in case $u=0$, one can show that the left-hand side of \eqref{ve1} is $<\infty$ for all $t\geq 0$ (see \cite{Brakke}). \end{rem}
\begin{rem} The difficulty of multiplicities has often been encountered in measure-theoretic settings like ours. Varifold solutions constructed by Brakke \cite{Brakke} have the same properties in this regard. On the other hand, (vi) says that there is no `folding' for some initial time interval $[0,T_1]$ at least. \end{rem} \begin{rem} In the following we set $\kappa_1=\kappa_2=1$ for notational simplicity, while all the arguments can be modified for any positive $\kappa_1$ and $\kappa_2$ with no essential differences. On the other hand, their being positive plays an essential role, and most of the estimates and claims deteriorate as $\kappa_1,\, \kappa_2\rightarrow 0$ and fail in the limit. How severely they fail in the limit may be of independent interest, which we do not pursue in the present paper. Note that the $\kappa_2=0$ limit should correspond precisely to the setting of Plotnikov \cite{Plotnikov} for $d=2$. \end{rem}
We use the following theorem. See \cite[p.196]{Malek} and the reference therein.
\begin{thm}{\bf(Korn's inequality)} Let $1<p<\infty$. Then there exists a constant $c_K=c(p,d)$ such that
\[\|v\|_{W^{1,p}(\Omega)}^p\leq c_K (\|e(v)\|_{L^p(\Omega)}^p+\|v\|^p_{L^1(\Omega)})\] holds for all $v \in W^{1,p}(\Omega)^d$. \label{Korn} \end{thm}
\section{Existence of approximate solution} \quad In this section we construct a sequence of approximate solutions of \eqref{main1}-\eqref{velocity} by the Galerkin method and the phase field method. The proof is a suitable modification of \cite{LinLiu} to the non-Newtonian setting, although we also need to incorporate a suitable smoothing of the interaction terms.
First we prepare a few definitions. We fix a sequence $\{\varepsilon_i\}_{i=1}^{\infty}$ with $\lim_{i\rightarrow\infty} \varepsilon_i=0$ and fix a radially symmetric non-negative function $\zeta\in C^{\infty}_c(\mathbb R^d)$ with ${\rm spt}\, \zeta\subset B_1(0)$ and $\int\zeta\, dx=1$. For a fixed $0<\gamma<\frac12$ we define \begin{equation} \zeta^{\varepsilon_i}(x)=\frac{1}{\varepsilon_i^{\gamma}}\zeta\left(\frac{x} {\varepsilon_i^{\gamma/d}}\right). \label{zeta} \end{equation} We defined $\zeta^{\varepsilon_i}$ so that $\int \zeta^{\varepsilon_i}\, dx=1$,
$|\zeta^{\varepsilon_i}|\leq c(d)\varepsilon_i^{-\gamma}$ and $|\nabla\zeta^{\varepsilon_i}| \leq c(d)\varepsilon_i^{-\gamma-\gamma/d}$.
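Indeed, a one-line check of the normalization: the change of variables $y=x/\varepsilon_i^{\gamma/d}$ gives
\begin{equation*}
\int_{\mathbb R^d}\zeta^{\varepsilon_i}(x)\,dx=\varepsilon_i^{-\gamma}\int_{\mathbb R^d}\zeta\big(x/\varepsilon_i^{\gamma/d}\big)\,dx
=\varepsilon_i^{-\gamma}\,\varepsilon_i^{\gamma}\int_{\mathbb R^d}\zeta(y)\,dy=1,
\end{equation*}
and the two sup bounds follow from the boundedness of $\zeta$ and $\nabla\zeta$ together with the chain rule.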
For a given initial data $\Omega^+(0)\subset\Omega$ with $C^1$ boundary $\partial \Omega^+(0)$, we can approximate $\Omega^+(0)$ in $C^1$ topology by a sequence of domains $\Omega^{i+}(0)$ with $C^3$ boundaries. Let $d^{i}(x)$ be the signed distance function to $\partial \Omega^{i+}(0)$ so that $d^{i}(x)>0$ on $\Omega^{i+}(0)$ and $d^{i}(x)<0$ on $\Omega^{i-}(0)$. Choose $b^{i}>0$ so that $d^{i}$ is $C^3$ function on the $b^{i}$-neighborhood of $\partial\Omega^{i+}(0)$. Now we associate $\{\varepsilon_i\}_{i=1}^{\infty}$ with $\Omega^{i+}(0)$ by re-labeling the index if necessary so that $\lim_{i\rightarrow\infty}\varepsilon_i/b^i=0$ and
$\lim_{i\rightarrow\infty}\varepsilon_i^{j-1}|\nabla^j d^i|=0$ for $j=2,\, 3$ on the $b^{i}$-neighborhood of $\partial\Omega^{i+}(0)$. Let $h\in C^{\infty}(\mathbb R)$ be a function such that $h$ is monotone increasing, $h(s)=s$ for $0\leq s\leq 1/4$ and $h(s)= 1/2$ for $1/2<s$, and define $h(-s)=-h(s)$ for $s<0$. Then define \begin{equation} \varphi_0^{\varepsilon_i}(x)=\tanh(b^i h(d^i(x)/b^i)/\varepsilon_i). \label{tanh} \end{equation}
Note that we have $\varphi_0^{\varepsilon_i}\in C^3(\Omega)$ and that $\varepsilon_i^j|\nabla^j\varphi_0^{\varepsilon_i}|$ for $j=1,\, 2,\, 3$ are bounded uniformly in $i$. The well-known properties of the phase field approximation show that \begin{equation} \lim_{i\rightarrow
\infty}\|\varphi_0^{\varepsilon_i}-(\chi_{\Omega^+(0)}-\chi_{\Omega^-(0)})\|_{L^1(\Omega)}=0,\hspace{.5cm}
\frac{1}{\sigma}\left(\frac{\varepsilon_i|\nabla\varphi_0^{\varepsilon_i}|^2}{2} +\frac{W(\varphi_0^{\varepsilon_i})}{\varepsilon_i}\right)\, dx\rightarrow {\mathcal H}^{d-1}\lfloor_{ \partial\Omega^+(0)} \label{tanhprop} \end{equation} as Radon measures. Here $\sigma=\int_{-1}^{+1}\sqrt{2W(s)}\, ds$.
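For the present choice $W(s)=(1-s^2)^2/2$ this normalization is explicit (a short computation): $\sqrt{2W(s)}=1-s^2$, hence
\begin{equation*}
\sigma=\int_{-1}^{1}(1-s^2)\,ds=\frac{4}{3},
\end{equation*}
and for the one-dimensional profile $q(r)=\tanh(r/\varepsilon_i)$ one has $\varepsilon_i|q'|^2/2=W(q)/\varepsilon_i=(1-q^2)^2/(2\varepsilon_i)$, so that
\begin{equation*}
\int_{\mathbb R}\left(\frac{\varepsilon_i|q'|^2}{2}+\frac{W(q)}{\varepsilon_i}\right)dr=\int_{\mathbb R}\sqrt{2W(q)}\,q'\,dr=\int_{-1}^{1}\sqrt{2W(s)}\,ds=\sigma;
\end{equation*}
this is the reason for the normalization by $\frac{1}{\sigma}$ in \eqref{tanhprop}.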
For $V^{s,2}$ with $s>\frac{d}{2}+1$ let $\{\omega_j\}_{j=1}^{\infty}$ be a basis for $V^{s,2}$ which is orthonormal in $V^{0,2}$. The choice of $s$ is made so that the Sobolev embedding theorem implies $W^{s-1,2}(\Omega)\hookrightarrow L^{\infty}(\Omega)$, and thus $\nabla \omega_j \in L^{\infty}(\Omega)^{d^2}$.
Let $P_i:V^{0,2}\rightarrow V^{0,2}_i={\rm span}\,\{\omega_1,\omega_2,\cdots,\omega_i\}$ be the orthogonal projection. We then project the problem \eqref{main1}-\eqref{velocity} to $V^{0,2}_i$ by utilizing the orthogonality in $V^{0,2}$. Note that just as in \cite{LinLiu}, we approximate the mean curvature term in \eqref{nsdist} by the appropriate phase field approximation. We consider the following problem: \begin{eqnarray} \hspace{.3cm} \frac{\partial u^{\varepsilon_i}}{\partial t}=P_i\left({\rm div}\,\tau(\varphi^{\varepsilon_i}, e(u^{\varepsilon_i}))- u^{\varepsilon_i}\cdot\nabla u^{\varepsilon_i}-\frac{\varepsilon_i}{\sigma}{\rm div}\,((\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i})\right) & & {\rm on} \ \Omega\times[0,\infty),\label{appeq1}\\ u^{\varepsilon_i}(\cdot,t)\in V^{0,2}_i \qquad \qquad \qquad \qquad \qquad & & {\rm for} \ t\geq 0,\label{appeq2}\\ \frac{\partial\varphi^{\varepsilon_i}}{\partial t}+(u^{\varepsilon_i}*\zeta^{\varepsilon_i})\cdot\nabla\varphi^{\varepsilon_i}=\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2} \qquad \qquad \quad & & {\rm on} \ \Omega\times [0,\infty),\label{appeq3}\\ u^{\varepsilon_i}(x,0)=P_i u_0(x),\quad \varphi^{\varepsilon_i}(x,0)=\varphi_0^{\varepsilon_i}(x) \qquad \qquad \qquad & & {\rm on} \ \Omega.\label{appeq4} \end{eqnarray} Here $*$ is the usual convolution. We prove the following theorem. \begin{thm} For any $i\in {\mathbb N}$, $u_0\in V^{0,2}$ and $\varphi^{\varepsilon_i}_0$, there exists a weak solution $(u^{\varepsilon_i},\varphi^{\varepsilon_i})$ of \eqref{appeq1}-\eqref{appeq4} such that $u^{\varepsilon_i} \in L^{\infty}([0,\infty);V^{0,2})\cap L^p_{loc}([0,\infty);V^{1,p})$,
$|\varphi^{\varepsilon_i}|\leq 1$, $\varphi^{\varepsilon_i} \in L^{\infty}([0,\infty);C^3(\Omega))$ and $\frac{\partial\varphi^{\varepsilon_i}}{\partial t}\in L^{\infty}([0,\infty);C^1(\Omega))$. \label{globalexistence} \end{thm}
We write the above system in terms of $u^{\varepsilon_i}=\sum_{k=1}^{i}c^{\varepsilon_i}_k(t)\omega_k(x)$ first. Since \begin{gather*} \left(\frac{d}{dt}u^{\varepsilon_i},\,\omega_j\right)=\bigg(\frac{d}{dt}\sum_{k=1}^i c^{\varepsilon_i}_k(t)\,\omega_k,\,\omega_j\bigg)=\frac{d}{dt}c^{\varepsilon_i}_j(t),\\ (u^{\varepsilon_i}\cdot\nabla u^{\varepsilon_i},\,\omega_j)=\sum_{k,l=1}^i c_k^{\varepsilon_i}(t)c_l^{\varepsilon_i}(t)(\omega_k\cdot\nabla\omega_l,\,\omega_j),\\ \varepsilon_i({\rm div}\,((\nabla\varphi^{\varepsilon_i} \otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i}),\,\omega_j) = \,-\varepsilon_i \int_{\Omega} (\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i}: \nabla \omega_j \,dx,\\ \left({\rm div}\,\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})),\,\omega_j\right) = -\int_{\Omega}\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})):e(\omega_j)\,dx \end{gather*} for $j=1,\cdots,i$, \eqref{appeq1} is equivalent to \begin{equation} \begin{split} \frac{d}{dt}c_j^{\varepsilon_i}(t)=& \,-\int_{\Omega}\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})):e(\omega_j)\,dx -\sum_{k,l=1}^i c_k^{\varepsilon_i}(t)c_l^{\varepsilon_i}(t)(\omega_k\cdot\nabla\omega_l,\,\omega_j) \\ & +\frac{\varepsilon_i}{\sigma}\int_{\Omega}(\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i}: \nabla \omega_j \,dx= \,A^{\varepsilon_i}_j(t)+B_{klj} c^{\varepsilon_i}_k(t)c^{\varepsilon_i}_l(t)+D^{\varepsilon_i}_j(t).\label{appeq1-2} \end{split} \end{equation} Moreover, the initial condition of $c_j^{\varepsilon_i}$ is \[c^{\varepsilon_i}_j(0)=(u_0,\,\omega_j)\quad {\rm for} \ j=1,2,\dots,i.\] We also set
\[E_0={\mathcal H}^{d-1}(\partial\Omega^+(0))+\frac12 \int_{\Omega}|u_0|^2\, dx\] and note that \begin{equation}
\frac{1}{\sigma}\int_{\Omega}\left(\frac{\varepsilon_i|\nabla\varphi_0^{\varepsilon_i}|^2}{2} +\frac{W(\varphi^{\varepsilon_i}_0)}{\varepsilon_i}\right)\,dx+\frac12\sum_{j=1}^i(c^{\varepsilon_i}_j(0))^2\leq E_0 +o(1) \label{eqeq} \end{equation} by \eqref{tanhprop} and since $P_i$ is an orthogonal projection with respect to the $L^2$ inner product.
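Indeed, for the second term this is a one-line check using the $V^{0,2}$-orthonormality of $\{\omega_j\}$ and Bessel's inequality,
\begin{equation*}
\frac12\sum_{j=1}^i(c_j^{\varepsilon_i}(0))^2=\frac12\sum_{j=1}^i(u_0,\omega_j)^2=\frac12\|P_iu_0\|_{L^2(\Omega)}^2\leq\frac12\|u_0\|_{L^2(\Omega)}^2,
\end{equation*}
while the first term converges to ${\mathcal H}^{d-1}(\partial\Omega^+(0))$ by \eqref{tanhprop}.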
We use the following lemma to prove Theorem \ref{globalexistence}. \begin{lemma} There exists a constant $T_0=T_0(E_0,i,\nu_0,p)>0$ such that \eqref{appeq1}-\eqref{appeq4} with \eqref{eqeq} has a weak solution $(u^{\varepsilon_i},\varphi^{\varepsilon_i})$ in $\Omega\times[0,T_0]$
such that $u^{\varepsilon_i} \in L^{\infty}([0,T_0];V^{0,2})\cap L^p([0,T_0];V^{1,p})$, $|\varphi^{\varepsilon_i}|\leq 1$, $\varphi^{\varepsilon_i} \in L^{\infty}([0,T_0];C^3(\Omega))$ and $\frac{\partial\varphi^{\varepsilon_i}}{\partial t} \in L^{\infty}([0,T_0];C^1(\Omega))$. \label{localexistence} \end{lemma} {\it Proof.} Assume that we are given a function $u(x,t)=\sum_{j=1}^i c_j^{\varepsilon_i}(t)\omega_j(x)\in C^{1/2}([0,T];V^{s,2})$ with \begin{equation} c^{\varepsilon_i}_j(0)=(u_0,\,\omega_j),\hspace{.5cm}
\max_{t\in[0,T]}\left(\frac12\sum_{j=1}^i|c^{\varepsilon_i}_j(t)|^2\right)^{1/2}+ \sup_{0\leq t_1<t_2\leq T}\sum_{j=1}^i
\frac{|c_j^{\varepsilon_i}(t_1)-c_j^{\varepsilon_i}(t_2)|}{|t_1-t_2|^{1/2}}\leq \sqrt{2E_0}.\label{leraycond} \end{equation} We let $\varphi (x,t)$ be the solution of the following parabolic equation: \begin{equation} \begin{split} \frac{\partial\varphi}{\partial t}+(u*\zeta^{\varepsilon_i})\cdot\nabla\varphi=\Delta\varphi-\frac{W'(\varphi)}{\varepsilon_i^2},\\ \varphi(x,0)=\varphi^{\varepsilon_i}_0(x). \end{split}\label{acapprox} \end{equation}
The existence of such $\varphi$ with $|\varphi|\leq 1$ is guaranteed by the standard theory of parabolic equations (\cite{Ladyzhenskaya}). By \eqref{acapprox} and the Cauchy-Schwarz inequality, we can estimate \begin{equation*}
\frac{d}{dt}\int_{\Omega}\left(\frac{\varepsilon_i|\nabla\varphi|^2}{2}+\frac{W(\varphi)}{\varepsilon_i}\right)\,dx \leq -\frac{\varepsilon_i}{2}\int_{\Omega} \left(\Delta\varphi-\frac{W'(\varphi)}{\varepsilon_i^2}\right)^2\,dx+\frac{\varepsilon_i}{2}\int_{\Omega} \left\{(u*\zeta^{\varepsilon_i})\cdot\nabla\varphi\right\}^2\,dx. \end{equation*} Since for any $t \in [0,T]$ \begin{equation*}
\|u*\zeta^{\varepsilon_i}\|^2_{L^{\infty}(\Omega)} \leq \varepsilon_i^{-2\gamma}\|u\|^2_{L^{\infty}(\Omega)}
\leq i\varepsilon_i^{-2\gamma}\max_{1\leq j \leq i}\|\omega_j(x)\|^2_{L^{\infty}(\Omega)}
\sum_{j=1}^i|c^{\varepsilon_i}_j(t)|^2 \leq c(i)E_0, \end{equation*} \begin{equation*}
\frac{d}{dt}\int_{\Omega}\left(\frac{\varepsilon_i|\nabla\varphi|^2}{2}+\frac{W(\varphi)}{\varepsilon_i}\right)\,dx
\leq c(i) E_0\int_{\Omega}\frac{\varepsilon_i|\nabla\varphi|^2}{2}\,dx. \end{equation*} This gives \begin{equation} \sup_{0\leq t \leq T}
\frac{1}{\sigma}\int_{\Omega}\left(\frac{\varepsilon_i|\nabla\varphi|^2}{2}+\frac{W(\varphi)}{\varepsilon_i}\right)\,dx \leq e^{c(i) E_0 T}E_0.\label{energyest} \end{equation} Hence as long as $T\leq 1$, \begin{equation}
|D_j^{\varepsilon_i}(t)| \leq c \|\nabla\omega_j\|_{L^{\infty}(\Omega)}\frac{1}{\sigma}\int_{\Omega}\int_{\Omega}\varepsilon_i|\nabla\varphi(y)|^2\zeta^{\varepsilon_i}(x-y)\,dydx \leq c(i)e^{c(i) E_0}E_0\label{Dest} \end{equation} by $\nabla\omega_j \in L^{\infty}(\Omega)^{d^2}$ and \eqref{energyest}.
Next we substitute the above solution $\varphi$ into the place of $\varphi^{\varepsilon_i}$, and solve \eqref{appeq1-2} with the initial condition $c^{\varepsilon_i}_j(0)=(u_0,\,\omega_j)$. Since $\tau$ is locally Lipschitz with respect to $e(u)$, there is at least some short time $T_1$ such that \eqref{appeq1-2} has a unique solution $\tilde{c}^{\varepsilon_i}_j(t)$ on $[0,T_1]$ with the initial condition
$\tilde{c}_j^{\varepsilon_i}(0)=(u_0,\,\omega_j)$ for $1\leq j\leq i$. We show that the solution exists up to $T_0=T_0(i,E_0,p,\nu_0)$ satisfying \eqref{leraycond}. Let $\tilde{c}(t)=\frac12\sum_{j=1}^i|\tilde{c}^{\varepsilon_i}_j(t)|^2$. Then, \begin{equation*} \frac{d}{dt}\tilde{c}(t)= A^{\varepsilon_i}_j\tilde{c}^{\varepsilon_i}_j+B_{klj}\tilde{c}^{\varepsilon_i}_k\tilde{c}^{\varepsilon_i}_l\tilde{c}^{\varepsilon_i}_j+D_j^{\varepsilon_i}\tilde{c}^{\varepsilon_i}_j. \end{equation*} By \eqref{taucond1} $A_j^{\varepsilon_i}\tilde{c}^{\varepsilon_i}_j\leq 0$ hence \begin{equation*} \frac{d}{dt}\tilde{c}(t) \leq c(i,E_0)(\tilde{c}^{3/2}+\tilde{c}^{1/2}). \end{equation*} Therefore, \begin{equation} \arctan\sqrt{\tilde{c}(t)} \leq \arctan\sqrt{E_0}+2c(i,E_0) t.\label{arc} \end{equation}
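The last step is elementary; we record it (a sketch, with all constants absorbed into $c(i,E_0)$). Writing $y(t)=\sqrt{\tilde{c}(t)}$, the differential inequality $\frac{d}{dt}\tilde{c}\leq c(\tilde{c}^{3/2}+\tilde{c}^{1/2})$ gives $y'\leq \frac{c}{2}(y^2+1)$ wherever $y>0$, hence
\begin{equation*}
\frac{d}{dt}\arctan y(t)=\frac{y'(t)}{1+y(t)^2}\leq \frac{c}{2},
\end{equation*}
and integrating from $0$ to $t$ together with $\tilde{c}(0)\leq E_0$ yields \eqref{arc}.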
We can also estimate $|dc_j^{\varepsilon_i}/dt|$, in terms of $E_0,i,p,\nu_0$ only, due to \eqref{appeq1-2}, \eqref{Dest}, \eqref{arc} and \eqref{taucond2}. Thus, by choosing $T_0$ small depending only on $E_0,i,p,\nu_0$ we obtain the existence of a solution for $t\in[0,T_0]$ satisfying \eqref{leraycond}. We then prove the existence of a weak solution on $\Omega\times [0,T_0]$ by using the Leray-Schauder fixed point theorem (see \cite{Ladyzhenskaya}). We define \[\tilde{u}(x,t)=\sum_{j=1}^i\tilde{c}^{\varepsilon_i}_j(t)\omega_j(x)\] and we define a map $\mathcal{L}:u\mapsto \tilde{u}$ as in the above procedure. Let \begin{equation*} \begin{split}V(T_0):=&\left\{u(x,t)
=\sum_{j=1}^i c_j(t)\omega_j(x)\,;\,\,\max_{t\in[0,T_0]}\left(\frac12\sum_{j=1}^i|c_j(t)|^2\right)^{1/2}\right.\\ &\left.+ \sup_{0\leq t_1<t_2\leq T_0}\sum_{j=1}^i
\frac{|c_j(t_1)-c_j(t_2)|}{|t_1-t_2|^{1/2}}\leq \sqrt{2E_0},\,c_j(0)=(u_0,\,\omega_j),\,c_j\in C^{1/2}([0,T_0]) \right\}. \end{split} \end{equation*} Then $V(T_0)$ is a closed, convex subset of $C^{1/2}([0,T_0];V^{0,2}_i)$ equipped with the norm
\[\|u\|_{V(T_0)}=\max_{t\in[0,T_0]}\left(\frac12\sum_{j=1}^i|c_j(t)|^2\right)^{1/2}+ \sup_{0\leq t_1<t_2\leq T_0}\sum_{j=1}^i
\frac{|c_j(t_1)-c_j(t_2)|}{|t_1-t_2|^{1/2}}\] and by the above argument $\mathcal{L}:V(T_0)\rightarrow V(T_0)$. Moreover, by the Ascoli-Arzel\`a compactness theorem $\mathcal{L}$ is a compact operator. Therefore, by using the Leray-Schauder fixed point theorem, $\mathcal{L}$ has a fixed point $u^{\varepsilon_i}\in V(T_0)$. We denote by $\varphi^{\varepsilon_i}$ the solution of \eqref{appeq3} and \eqref{appeq4}. Then $(u^{\varepsilon_i}, \varphi^{\varepsilon_i})$ is a weak solution of \eqref{appeq1}-\eqref{appeq4} in $\Omega\times [0,T_0]$. Note that we have the required regularities for $\varphi^{\varepsilon_i}$ due to the regularity of $u^{\varepsilon_i}*\zeta^{\varepsilon_i}$ in $x$ and by the standard parabolic regularity theory. $
{\Box}$
\begin{thm} Let $(u^{\varepsilon_i},\varphi^{\varepsilon_i})$ be the weak solution of \eqref{appeq1}-\eqref{appeq4} with \eqref{eqeq} in $\Omega\times[0,T]$. Then the following energy estimate holds: \begin{equation} \begin{split}
\int_{\Omega}\frac{1}{\sigma}&\left(\frac{\varepsilon_i|\nabla\varphi^{\varepsilon_i}(\cdot,T)|^2}{2}+\frac{W(\varphi^{\varepsilon_i}(\cdot,T))}{\varepsilon_i}\right)+\frac{|u^{\varepsilon_i}(\cdot,T)|^2}{2}\,dx\\
&+\int_0^{T}\int_{\Omega}\frac{\varepsilon_i}{\sigma}\left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}\right)^2+\nu_0|e(u^{\varepsilon_i})|^p\,dxdt \leq E_0+o(1). \label{localenergy1} \end{split} \end{equation} Moreover for any $0\leq T_1<T_2<\infty$ \begin{equation}
\int_{T_1}^{T_2}\|u^{\varepsilon_i}(\cdot,t)\|_{W^{1,p}(\Omega)}^p\, dt\leq c_K \{\nu_0^{-1}E_0+(T_2-T_1)E_0^{\frac{p}{2}}\}+o(1). \label{localenergysup} \end{equation} \label{localenergy} \end{thm}
{\it Proof.} Since $(u^{\varepsilon_i},\varphi^{\varepsilon_i})$ is the weak solution of \eqref{appeq1}-\eqref{appeq4}, we derive \begin{equation} \begin{split}
& \frac{d}{dt}\int_{\Omega}\frac{1}{\sigma}\left(\frac{\varepsilon_i|\nabla\varphi^{\varepsilon_i}|^2}{2}+\frac{W(\varphi^{\varepsilon_i})}{\varepsilon_i}\right)+\frac{|u^{\varepsilon_i}|^2}{2}\,dx\\ & =\int_{\Omega}-\frac{\varepsilon_i}{\sigma}\frac{\partial \varphi^{\varepsilon_i}}{\partial t}\left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}\right)+\frac{\partial u^{\varepsilon_i}}{\partial t}\cdot u^{\varepsilon_i}\,dx\\ & =\int_{\Omega}-\frac{\varepsilon_i}{\sigma}\left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}-(u^{\varepsilon_i}*\zeta^{\varepsilon_i})\cdot\nabla\varphi^{\varepsilon_i}\right) \left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}\right)\,dx\\ & +\int_{\Omega}\left\{{\rm div}\,\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i}))-u^{\varepsilon_i}\cdot\nabla u^{\varepsilon_i} -\frac{\varepsilon_i}{\sigma}{\rm div}\,((\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i})\right\}\cdot u^{\varepsilon_i}\,dx=I_1+I_2. \end{split}\label{localenergy1cal} \end{equation} Since ${\rm div}\, (u^{\varepsilon_i}*\zeta^{\varepsilon_i})=({\rm div}\, u^{\varepsilon_i})*\zeta^{\varepsilon_i}=0$, \begin{equation*} \sigma I_1 = -\int_{\Omega}\varepsilon_i\left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}\right)^2\,dx+\varepsilon_i\int_{\Omega}(u^{\varepsilon_i}*\zeta^{\varepsilon_i})\cdot\nabla\varphi^{\varepsilon_i}\Delta\varphi^{\varepsilon_i}\,dx. \end{equation*} For $I_2$, with \eqref{taucond1} \begin{equation*} \int_{\Omega}{\rm div}\,\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i}))\cdot u^{\varepsilon_i}\,dx =-\int_{\Omega}\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})):e(u^{\varepsilon_i})\,dx
\leq -\nu_0\int_{\Omega}|e(u^{\varepsilon_i})|^p\,dx. \end{equation*} Moreover, the second term of $I_2$ vanishes by ${\rm div}\,u^{\varepsilon_i}=0$ and \begin{equation*} \begin{split} & -\int_{\Omega}\varepsilon_i {\rm div}\,((\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i})\cdot u^{\varepsilon_i}\,dx
= -\int_{\Omega}\varepsilon_i \left(\nabla \frac{|\nabla\varphi^{\varepsilon_i}|^2}{2}+ \nabla \varphi^{\varepsilon_i}\Delta\varphi^{\varepsilon_i}\right)*\zeta^{\varepsilon_i} \cdot u^{\varepsilon_i}\,dx\\ & = -\varepsilon_i\int_{\Omega}(u^{\varepsilon_i}*\zeta^{\varepsilon_i})\cdot\nabla\varphi^{\varepsilon_i}\Delta\varphi^{\varepsilon_i}\,dx. \end{split} \end{equation*} Hence \eqref{localenergy1cal} becomes \begin{equation*}
\frac{d}{dt}\int_{\Omega}\frac{1}{\sigma}\left(\frac{\varepsilon_i|\nabla\varphi^{\varepsilon_i}|^2}{2}+\frac{W(\varphi^{\varepsilon_i})}{\varepsilon_i}\right)+\frac{|u^{\varepsilon_i}|^2}{2}\,dx \leq -\int_{\Omega}\frac{\varepsilon_i}{\sigma}\left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}\right)^2
+\nu_0 |e(u^{\varepsilon_i})|^p\, dx. \end{equation*} Integrating with respect to $t$ over $t\in[0,T]$ and by \eqref{eqeq}, we obtain \eqref{localenergy1}. The proof of \eqref{localenergysup} follows from \eqref{localenergy1} and Theorem \ref{Korn}. $
{\Box}$\\
{\it Proof of Theorem \ref{globalexistence}.} For each fixed $i$ we have short-time existence on $[0,T_0]$, where $T_0$ depends only on $i,E_0,p,\nu_0$ at $t=0$. By Theorem \ref{localenergy} the energy at $t=T_0$ is again bounded by $E_0+o(1)$. By repeatedly using Lemma \ref{localexistence}, Theorem \ref{globalexistence} follows. $
{\Box}$\\
\section{Proof of main theorem}
\quad In this section we first prove that $\{\varphi^{\varepsilon_i}\}_{i=1}^{\infty}$ in Section 3 and the associated surface energy measures $\{\mu_t^{\varepsilon_i}\}_{ i=1}^{\infty}$ converge subsequentially to $\varphi$ and $\mu_t$ which satisfy the properties described in Theorem \ref{maintheorem}. Most of the technical and essential ingredients have been proved in \cite{LST1} and we only need to check the conditions to apply the results. We then prove that the limit velocity field satisfies the weak non-Newtonian flow equation, concluding the proof of Theorem \ref{maintheorem}.
First we recall the upper density ratio bound of the surface energy.
\begin{thm} (\cite[Theorem 3.1]{LST1}) Suppose $d\geq 2$, $\Omega={\mathbb T}^d$, $p>\frac{d+2}{2}$, $\frac12>\gamma\geq 0$, $1\geq \varepsilon>0$ and $\varphi$ satisfies \begin{eqnarray} \frac{\partial \varphi}{\partial t}+u\cdot\nabla\varphi=\Delta\varphi-\frac{W'(\varphi)}{\varepsilon^2} \qquad \qquad \quad & & {\rm on} \ \Omega\times [0,T],\label{allen1}\\ \varphi(x,0)=\varphi_0(x) \qquad \qquad \qquad & & {\rm on} \ \Omega,\label{allen2} \end{eqnarray} where $\nabla^i u,\, \nabla^j \varphi, \nabla^k \varphi_t\in C(\Omega\times[0,T])$ for $0\leq i,\, k\leq 1$ and $0\leq j\leq 3$. Let $\mu_t$ be the Radon measure on $\Omega$ defined by \begin{equation}
\int_{\Omega}\phi(x)\, d\mu_t(x)=\frac{1}{\sigma}\int_{\Omega}\phi(x)\left(\frac{\varepsilon|\nabla\varphi(x,t)|^2}{2}+\frac{W(\varphi(x,t))}{\varepsilon}\right)\, dx \label{dmu} \end{equation} for $\phi\in C(\Omega)$, where $\sigma=\int_{-1}^1 \sqrt{2 W(s)}\, ds$. We assume also that \begin{gather}
\sup_{\Omega}|\varphi_0|\leq 1\mbox{ and }\sup_{\Omega}\varepsilon^i|\nabla^i\varphi_0|\leq c_{1}\mbox{ for $1\leq i\leq 3$},\label{inibound}\\
\sup_{\Omega}\left(\frac{\varepsilon|\nabla\varphi_0|^2}{2}-\frac{W(\varphi_0)}{\varepsilon}\right)\leq \varepsilon^{-\gamma},\label{disbd}\\
\sup_{\Omega\times[0,T]}\left\{\varepsilon^{\gamma}|u|,\,
\varepsilon^{1+\gamma}|\nabla u|\right\}\leq c_{2}, \label{uinfbound}\\
\int_0^T\|u(\cdot,t)\|^p_{W^{1,p}(\Omega)}\, dt\leq c_3.\label{ubound} \end{gather} Define for $t\in [0,T]$ \begin{equation} D(t)=\max\left\{\sup_{x\in\Omega,\, 0<r\leq \frac12}\frac{1}{\omega_{d-1}r^{d-1}} \mu_t(B_r(x)), 1\right\},\hspace{1.cm}D(0)\leq D_0. \label{dtdef} \end{equation} Then there exist $\epsilon_1>0$ which depends only on $d$, $p$, $W$, $c_1$, $c_2$, $c_3$, $D_0$, $\gamma$ and $T$, and $c_4$ which depends only on $c_3$, $d$, $p$, $D_0$ and $T$ such that for all $0<\varepsilon\leq \epsilon_1$, \begin{equation} \sup_{0\leq t\leq T}D(t)\leq c_4. \label{fin1} \end{equation} \label{mainmono} \end{thm}
Using this we prove \begin{prop} For $\{\varphi^{\varepsilon_i}\}_{i=1}^{\infty}$ in Theorem \ref{globalexistence}, define $\mu_t^{\varepsilon_i}$ as in \eqref{dmu} replacing $\varphi$ by $\varphi^{\varepsilon_i}$, and define $D^{\varepsilon_i}(t)$ as in \eqref{dtdef} replacing $\mu_t$ by $\mu_t^{\varepsilon_i}$. Given $0<T<\infty$, there exists $c_5$ which depends only on $E_0,\, \nu_0, \, \gamma,\, D_0,\, T,\, d,\, p$ and $W$ such that \begin{equation} \sup_{0\leq t\leq T}D^{\varepsilon_i}(t)\leq c_5 \label{key} \end{equation} for all sufficiently large $i$. \label{du} \end{prop} {\bf Proof}. We only need to check the conditions of Theorem \ref{mainmono} for $\varphi^{\varepsilon_i}$ and $\mu_t^{\varepsilon_i}$. Note that $u$ in \eqref{allen1} is replaced by $u^{\varepsilon_i}*\zeta^{\varepsilon_i}$. We have $d\geq 2$, $\Omega={\mathbb T}^d$, $p>\frac{d+2}{2}$, $\frac12>\gamma\geq 0$, $1\geq\varepsilon>0$ and \eqref{allen1} and \eqref{allen2}. The regularity of functions is guaranteed in Theorem \ref{globalexistence}. With an appropriate choice of $c_1$, \eqref{inibound} is satisfied for all sufficiently large $i$ due to the choice of $\varepsilon_i$ in \eqref{tanh}. The sup bound \eqref{disbd} is satisfied even with $0$ on the right-hand side instead of $\varepsilon_i^{-\gamma}$. The bound \eqref{uinfbound} for $u^{\varepsilon_i}* \zeta^{\varepsilon_i}$ is satisfied due to \eqref{zeta} and \eqref{localenergy1}, and \eqref{ubound} is satisfied due to \eqref{localenergysup}. Thus we have all the conditions, and Theorem \ref{mainmono} proves the claim. $
{\Box}$
We next prove \begin{prop} For $\{u^{\varepsilon_i}*\zeta^{\varepsilon_i}\}_{i=1}^{\infty}$ in Theorem \ref{globalexistence}, there exist a subsequence (denoted by the same index) and a limit $u\in L^{\infty}([0,\infty);V^{0,2})\cap L^p_{loc}([0,\infty); V^{1,p})$
such that for any $0<T<\infty$ \begin{equation} u^{\varepsilon_i}*\zeta^{\varepsilon_i}\rightharpoonup u\mbox{ weakly in }L^p([0,T]; W^{1,p}(\Omega)^d), \hspace{1.cm}u^{\varepsilon_i}*\zeta^{\varepsilon_i}\rightarrow u\mbox{ strongly in }L^2([0,T];L^2(\Omega)^d). \label{weak} \end{equation} \end{prop} {\bf Proof}.
Let $\psi \in V^{s,2}$ with $||\psi||_{V^{s,2}}\leq 1$. With \eqref{appeq1}, \eqref{appeq2} and integration by parts, we have \begin{equation*} \begin{split} \left(\frac{\partial u^{\varepsilon_i}}{\partial t},\psi\right)&= \left(\frac{\partial u^{\varepsilon_i}}{\partial t}, P_i\psi\right) = \left(-u^{\varepsilon_i}\cdot \nabla u^{\varepsilon_i}+{\rm div}\,\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})) -\frac{\varepsilon_i}{\sigma}{\rm div} (\nabla\varphi^{\varepsilon_i}\otimes \nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i}, P_i\psi\right) \\ &=\left(u^{\varepsilon_i}\otimes u^{\varepsilon_i}-\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})) +\frac{\varepsilon_i}{\sigma} (\nabla\varphi^{\varepsilon_i}\otimes \nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i},\nabla P_i\psi\right). \end{split} \end{equation*} Here we remark that
\[\|\nabla P_i\psi\|_{L^{\infty}(\Omega)}\leq c(d) \|P_i\psi\|_{W^{s,2}(\Omega)}\leq c(d)\|\psi\|_{W^{s,2}(\Omega)}=c(d)\|\psi\|_{V^{s,2}}\leq c(d)\] by $s> \frac{d+2}{2}$ and properties of $P_i$ (see \cite{Lions} or \cite[p.290]{Malek}). Thus by \eqref{taucond2} and \eqref{localenergy1}, we obtain \begin{equation*}
\left(\frac{\partial u^{\varepsilon_i}}{\partial t},\psi\right)\leq c(d,p,\nu_0)\left(1+E_0+\|u^{\varepsilon_i}
\|_{W^{1,p}(\Omega)}^{p-1}\right). \end{equation*} Again using \eqref{localenergy1} and integrating in time we obtain \begin{equation}
\int_0^T\left|\left|\frac{\partial u^{\varepsilon_i}}{\partial t}\right|\right|_{(V^{s,2})^*}^{\frac{p}{p-1}}\,dt\leq c(d,p,E_0,\nu_0,T). \label{utes} \end{equation} Now we use Aubin-Lions compactness Theorem \cite[p.57]{Lions} with $B_0=V^{s,2}$, $B=V^{0,2}\subset L^2(\Omega)^d$, $B_1=(V^{s,2})^*$, $p_0=p$ and $p_1=\frac{p}{p-1}$. Then there exists a subsequence still denoted by $\{u^{\varepsilon_i}\}_{i=1}^{\infty}$ such that \begin{equation*} u^{\varepsilon_i} \rightarrow u \quad {\rm in} \ L^p([0,T];L^2(\Omega)^d). \label{u-converge1} \end{equation*} Since we have uniform $L^{\infty}([0,T];L^2(\Omega)^d)$ bound for $u^{\varepsilon_i}$, the strong convergence also holds in $L^2([0,T];L^2(\Omega)^d)$. Note that we also have proper norm bounds to extract weakly convergent subsequences due to \eqref{localenergy1}. For each $T_n$ which diverges to $\infty$ as $n\rightarrow\infty$, we choose a subsequence and by choosing a diagonal subsequence, we obtain the convergent subsequence with \eqref{weak} with $u^{\varepsilon_i}$ instead of $u^{\varepsilon_i}*\zeta^{\varepsilon_i}$. It is not difficult to show at this point that the same convergence results hold for $u^{\varepsilon_i}* \zeta^{\varepsilon_i}$ as in \eqref{weak}. $
\Box$
{\bf Proof of main theorem}. At this point, the rest of the proof concerning the existence of the limit Radon measure $\mu_t$ and the limit $\varphi=\lim_{i\rightarrow \infty}\varphi^{\varepsilon_i}$ and their respective properties described in Theorem \ref{maintheorem} can be proved by an almost line-by-line identical argument to that in \cite[Sections 4,\, 5]{LST1}. The only difference is that the energy $E_0$ in \cite{LST1} depends also on $T$, while in this paper $E_0$ depends only on the initial data due to \eqref{localenergy1}. This allows us to have time-global estimates such as $u\in L^{\infty}([0,\infty);V^{0,2})$ and $\varphi\in L^{\infty}([0,\infty);BV(\Omega))$. The argument in \cite{LST1} then completes the existence proof of Theorem \ref{maintheorem} (b), (c) along with (iii)-(vi). We still need to prove (a), (i) and (ii).
Due to \eqref{utes}, \eqref{taucond2} and \eqref{localenergysup} we may extract a further subsequence so that \begin{equation} \frac{\partial u^{\varepsilon_i}}{\partial t} \rightharpoonup \frac{\partial u}{\partial t}\mbox{ weakly in } L^{\frac{p}{p-1}}([0,T];(V^{s,2})^*),\hspace{.5cm} \tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i}))\rightharpoonup \hat{\tau} \mbox{ weakly in }L^{\frac{p}{p-1}}([0,T];L^{\frac{p}{p-1}}(\Omega)^{d^2}). \label{meascon1} \end{equation} For $\omega_j\in V^{s,2}$ ($j=1,\cdots$) and $h \in C^{\infty}_c((0,T))$ we have \begin{equation*} \int_{\Omega}{\rm div} ((\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i})\cdot h\omega_j\, dx=\int_{\Omega}\left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}\right) \nabla\varphi^{\varepsilon_i}\cdot h\omega_j*\zeta^{\varepsilon_i}\, dx \end{equation*} by integration by parts and ${\rm div}\,\omega_j=0$. Thus the argument in \cite[p.212]{Lions} and a similar convergence argument as in \cite{LST1} show \begin{equation} \int_0^T\left\{\left(\frac{\partial u}{\partial t},h\omega_j\right)+\int_{\Omega} (u\cdot\nabla u)\cdot h\omega_j+h\hat{\tau}:e(\omega_j) \, dx\right\}dt=\int_0^T\int_{\Omega}H\cdot h\omega_j\, d\mu_t dt. \label{meascon2} \end{equation} Again, by a similar argument using the density ratio bound and Theorem \ref{MZ}, one shows by a density argument and \eqref{meascon2} that $\frac{\partial u}{\partial t}\in L^{\frac{p}{p-1}}([0,T];(V^{1,p})^*)$ and \begin{equation} \int_0^T \left\{\left(\frac{\partial u}{\partial t}, v\right)+\int_{\Omega}(u\cdot\nabla u)\cdot v+\hat{\tau}:e(v) \, dx\right\}dt=\int_0^T\int_{\Omega}H\cdot v\, d\mu_t dt \label{meascon3} \end{equation} for all $v\in L^p([0,T];V^{1,p})$. We next prove \begin{equation} \int_0^T\int_{\Omega}\hat{\tau}:e(v)\, dxdt =\int_0^T\int_{\Omega}\tau(\varphi,e(u)):e(v)\, dxdt \label{last} \end{equation} for all $v\in C^{\infty}_c((0,T);{\mathcal{V}})$. As in \cite[p.213 (5.43)]{Lions}, we may deduce that \begin{equation}
\frac12 \|u(t_1)\|^2_{L^2(\Omega)}+\int_0^{t_1}\int_{\Omega}\hat{\tau} :e(u)\, dxdt\geq \int_0^{t_1}\int_{\Omega}H\cdot u\, d\mu_t dt
+\frac12\|u(0)\|^2_{L^2(\Omega)} \label{last1} \end{equation} for a.e. $t_1\in [0,T]$. We set for any $v\in V^{1,p}$ \begin{equation} A_i^{t_1}=\int_0^{t_1}\int_{\Omega} (\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i}))-\tau(\varphi^{\varepsilon_i},e(v))):(e(u^{\varepsilon_i})-e(v))\, dxdt+
\frac12 \|u^{\varepsilon_i}(t_1)\|^2_{L^2(\Omega)}. \label{last2} \end{equation} The monotonicity property \eqref{taucond3} of $\tau^{\pm}$ (and hence of $\tau(\varphi^{\varepsilon_i},\cdot)$, since $|\varphi^{\varepsilon_i}|\leq 1$) shows that the first term of \eqref{last2} is non-negative. We may further assume that $u^{\varepsilon_i}(t_1)$ converges weakly to $u(t_1)$ in $L^2(\Omega)^d$, and thus we have \begin{equation}
\liminf_{i\rightarrow\infty}A_i^{t_1}\geq\frac12 \|u(t_1)\|^2_{L^2(\Omega)}. \label{last3} \end{equation} By \eqref{appeq1} we have \begin{equation*} \begin{split}
A_i^{t_1}=&\frac12\|u^{\varepsilon_i}(0)\|_{L^2(\Omega)}^2-\frac{\varepsilon_i}{\sigma} \int_0^{t_1}\int_{\Omega}{\rm div}((\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i}) *\zeta^{\varepsilon_i})\cdot u^{\varepsilon_i}\\ &-\int_0^{t_1}\int_{\Omega}\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})):e(v)+ \tau(\varphi^{\varepsilon_i},e(v)):(e(u^{\varepsilon_i})-e(v))\, dxdt \end{split} \end{equation*} which converges to \begin{equation}
A^{t_1}=\frac12\|u(0)\|_{L^2(\Omega)}^2+\int_0^{t_1}\int_{\Omega} H\cdot u\, d\mu_t dt-\int_0^{t_1}\int_{\Omega}\hat{\tau}:e(v) +\tau(\varphi,e(v)):(e(u)-e(v))\, dxdt. \label{last4} \end{equation} Here we used that $\varphi^{\varepsilon_i}$ converges to $\varphi$ a.e. on $\Omega\times [0,T]$. By \eqref{last1}, \eqref{last3} and \eqref{last4}, we deduce that \begin{equation*} \int_0^{t_1}\int_{\Omega}(\hat{\tau}-\tau(\varphi,e(v))): (e(u)-e(v))\, dxdt\geq 0. \end{equation*} By choosing $v=u+\epsilon\tilde{v}$, dividing by $\epsilon$, letting $\epsilon\rightarrow 0$ and then replacing $\tilde{v}$ by $-\tilde{v}$, we prove \eqref{last}. Finally, \eqref{eneineq} follows from \eqref{last}, the strong $L^1(\Omega\times[0,T])$ convergence of $\varphi^{\varepsilon_i}$, the lower semicontinuity of the mean curvature square term (see \cite{LST1}) and the energy estimate appearing in Theorem \ref{localenergy}. This concludes the proof of Theorem \ref{maintheorem}. $
{\Box}$
\end{document}
\begin{document}
\title{A trace finite element method for PDEs on evolving surfaces \thanks{Partially supported by NSF through the Division of Mathematical Sciences grant 1522252.}} \author{Maxim A. Olshanskii\thanks{Department of Mathematics, University of Houston, Houston, Texas 77204-3008 ([email protected])} \and Xianmin Xu\thanks{LSEC, Institute of Computational Mathematics and Scientific/Engineering Computing,
NCMIS, AMSS, Chinese Academy of Sciences, Beijing 100190, China ([email protected]).} } \date{}
\maketitle \begin{abstract} In this paper, we propose an approach for solving PDEs on evolving surfaces using a combination of the trace finite element method and a fast marching method. The numerical approach is based on the Eulerian description of the surface problem and employs a time-independent background mesh that is not fitted to the surface. The surface and its evolution may be given implicitly, for example, by the level set method. Extension of the PDE off the surface is \textit{not} required. The method introduced in this paper naturally allows a surface to undergo topological changes and experience local geometric singularities. In the simplest setting, the numerical method is second order accurate in space and time. Higher order variants are feasible, but not studied in this paper. We show results of several numerical experiments, which demonstrate the convergence properties of the method and its ability to handle the case of the surface with topological changes. \end{abstract}
\section{Introduction}\label{sec:introduction} Partial differential equations on evolving surfaces arise in a number of mathematical models in the natural sciences and engineering. Well-known examples include the diffusion and transport of surfactants along interfaces in multiphase fluids \cite{GReusken2011,Milliken,Stone}, diffusion-induced grain boundary motion \cite{GrainBnd1,GrainBnd2} and lipid interactions in moving cell membranes \cite{ElliotStinner,Novaketal}. Thus, there has recently been significant interest in developing and analyzing numerical methods for PDEs on time-dependent surfaces. While finite difference, finite volume and finite element methods have all been considered in the literature for the numerical solution of PDEs on manifolds, in this work we focus on finite element methods.
The choice of a numerical approach for solving a PDE on an evolving surface $\Gamma(t)$ largely depends on whether a Lagrangian or an Eulerian framework is used to set up the problem and describe the surface evolution. In \cite{Dziuk07,DziukElliot2013a,elliott2015error} Elliott and co-workers developed and analyzed a finite element method (FEM) for computing transport and diffusion on a surface which is based on a Lagrangian tracking of the surface evolution. Some recent developments of finite element methods for surface PDEs based on the Lagrangian description can be found, e.g., in \cite{barrett2015stable,bretschneider2016solving,elliott2015evolving,macdonald2016computational,sokolov2015afc}. If a surface undergoes strong deformations or topological changes, or if it is defined implicitly, e.g., as the zero level of a level set function, then numerical methods based on the Lagrangian approach have certain disadvantages. Methods using an Eulerian approach were developed in \cite{AS03,bertalmio2001variational,xu2006level,XuZh}, based on an extension of the surface PDE into a bulk domain that contains the surface. Although in the original papers finite differences were used, the approach is also suitable for finite element methods, see, e.g., \cite{burger2009finite}. A related technique is the closest point method of \cite{petras2016pdes}, where the closest point representation of the surface and differential operators is used in an ambient space to allow a standard Cartesian finite difference discretization method.
In the present paper, we develop yet another finite element method for solving a PDE on a time-dependent surface $\Gamma(t)$. The surface is embedded in a bulk computational domain. We assume a sharp representation of the surface rather than a diffusive interface approach typical for the phase-field models of interfacial problems. The level set method \cite{SethianBook} is suitable for the purposes of this paper and will be used here to recover the evolution of the surface. We are interested in a surface FEM known in the literature as the trace or cut FEM. The trace finite element method uses the restrictions (traces) of a function from the background time-independent finite element space on the reconstructed discrete surface. This does not involve any mesh fitting towards the surface or an extension of the PDE.
The trace FEM was originally introduced for elliptic PDEs on stationary surfaces in \cite{OlshReusken08}. Subsequently, the analysis and several extensions of the method were developed in a series of publications. This includes higher order, stabilized, discontinuous Galerkin and adaptive variants of the method as well as applications to the surface--bulk coupled transport--diffusion problem, two-phase fluids with soluble surfactants and coupled bulk-membrane elasticity problems; see, e.g., \cite{Alg1,burman2016cutb,burman2016full,cenanovic2015cut,chernyshenko2015adaptive,DemlowOlsh,grande2016higher, gross2015trace,lehrenfeld2016high,OlsR2009,ORXimanum,reusken2015analysis}. There have been several successful attempts to extend the method to time-dependent surfaces. In \cite{deckelnick2014unfitted} the trace FEM was combined with the narrow-band unfitted FEM from \cite{deckelnick2009h} to devise an unfitted finite element method for parabolic equations on evolving surfaces. The resulting method preserves mass in the case of an advection-diffusion conservation law. A method based on a characteristic-Galerkin formulation combined with the trace FEM in space was proposed in \cite{hansbo2015characteristic}. Thanks to the semi-Lagrangian treatment of the material derivative {\color{black}(numerical integration back in time along characteristics)} this variant of the method does not require stabilization for dominating advection. The first order convergence of the characteristic--trace FEM was demonstrated by a rigorous \textit{a priori} error analysis and in numerical experiments. Another direction was taken in \cite{olshanskii2014eulerian}, where a space--time weak formulation of the surface problem was introduced. Based on this weak formulation, space--time variants of the trace FEM for PDEs on evolving surfaces were proposed in that paper and in \cite{grande2014eulerian}. The method from \cite{olshanskii2014eulerian} employs discontinuous piecewise linear in time -- continuous piecewise linear in space finite elements. In \cite{olshanskii2014error} the first order convergence in space and time of the method in an energy norm and second order convergence in a weaker norm were proved. In \cite{grande2014eulerian}, the author experimented with both continuous and discontinuous in time piecewise linear finite elements.
In the space--time trace FEM, the trial and test finite element spaces consist of traces of standard volumetric elements on a space--time manifold resulting from the evolution of a surface. The implementation requires the numerical integration over the tetrahedral reconstruction of the 3D manifold embedded in the $\mathbb{R}^4$ ambient space. An efficient algorithm for such numerical reconstruction was suggested in \cite{grande2014eulerian} and implemented in the DROPS finite element package \cite{DROPS}. In \cite{hansbo2016cut} a stabilized version of the space--time trace FEM for coupled bulk-surface problems was implemented using Gauss-Lobatto quadrature rules in time. In this implementation, one does not reconstruct the 3D space--time manifold but instead needs the 2D surface approximations in the quadrature nodes. The numerical experience with space--time trace FEM based on quadrature rules in time is mixed. The authors of \cite{hansbo2016cut} reported a second order convergence of the method for a number of 2D tests (in this case a 1D PDE is posed on an evolving curve), while in \cite{grande2014eulerian} one finds an example of a 2D smoothly deforming surface when the space--time method based on the trapezoidal quadrature rule fails to deliver convergent results. The error analysis of such simplified versions is an open question.
Although the space--time framework is natural for the development of unfitted FEMs for PDEs on evolving surfaces, the implementation of such methods is not straightforward, especially if a higher order method is desired. In this paper, we propose a variant of the trace FEM for time-dependent surfaces that uses simple finite difference approximations of \textit{time} derivatives. It avoids any reconstruction of the space--time manifold, and it also avoids finding surface approximations at quadrature nodes. Instead, the method requires an arbitrary, but smooth (in a sense clarified later), extension of the numerical solution off the surface to a narrow strip around the surface. We stress that in the present method one does not extend either problem data or differential operators to a surface neighborhood as in the methods based on PDE extension. At a given time node $t_n$, the degrees of freedom in the narrow strip (except those belonging to tetrahedra cut by the surface $\Gamma(t_n)$) do not contribute to algebraic systems, but are only used to store the solution values from several previous time steps. In numerical examples, we use the BDF2 scheme for time integration and so the narrow band degrees of freedom store the finite element solution for $t=t_{n-1}$ and $t=t_{n-2}$. To find a suitable extension, we apply a variant of the Fast Marching Method (FMM), see, e.g., \cite{GReusken2011,sethian1996fast}. At each time step, the trace FEM for a PDE on a steady surface $\Gamma(t_n)$ and the FMM are used in a modular way, which makes the implementation straightforward in standard or legacy finite element software. For P1 background finite elements and the BDF2 time stepping scheme, {\color{black} numerical experiments show that } the method is second order accurate (assuming $\Delta t\simeq h$) and has no stability restrictions on the time step. We remark that the numerical methodology naturally extends to surface--bulk coupled problems with propagating interfaces. However, in this paper we concentrate on the case when surface processes are decoupled from processes in the bulk.
The remainder of the paper is organized as follows. In section \ref{s_setup} we present the PDE model on an evolving surface and review some properties of the model. Section~\ref{s_FEM} introduces our variant of the trace FEM, which avoids space--time elements. Here we discuss implementation details. Section~\ref{s_num} collects the results for a series of numerical experiments. The experiments aim to assess the accuracy of the method as well as the ability to solve PDEs along a surface undergoing topological changes. For the latter purpose we consider the example of the diffusion of a surfactant on the surface of two colliding droplets.
\section{Mathematical formulation}\label{s_setup}
Consider a surface $\Gamma(t)$ passively advected by a smooth velocity field $\mathbf w=\mathbf w(\mathbf x,t)$, i.e. the normal velocity of $\Gamma(t)$ is given by $\mathbf w \cdot \mathbf n$, with $\mathbf n$ the unit normal on $\Gamma(t)$. We assume that for all $t \in [0,T] $, $\Gamma(t)$ is a smooth hypersurface that is closed ($\partial \Gamma =\emptyset$), connected, oriented, and contained in a fixed domain $\Omega \subset \Bbb{R}^d$, $d=2,3$. In the remainder we consider $d=3$, but all results have analogs for the case $d=2$.
As an example of the surface PDE, consider the transport--diffusion equation modelling the conservation of a scalar quantity $u$ with a diffusive flux on $\Gamma(t)$ (cf. \cite{James04}): \begin{equation} \dot{u} + ({\mathop{\rm div}}_\Gamma\mathbf w)u -{ \nu}\Delta_{\Gamma} u=0\quad\text{on}~~\Gamma(t), ~~t\in (0,T], \label{transport} \end{equation}
with initial condition $u(\mathbf x,0)=u_0(\mathbf x)$ for $\mathbf x \in \Gamma_0:=\Gamma(0)$.
Here $\dot{u}$ denotes the advective material derivative, ${\mathop{\rm div}}_\Gamma:=\operatorname{tr}\left( (I-\mathbf n\mathbf n^T)\nabla\right)$ is the surface divergence, $\Delta_\Gamma$ is the Laplace--Beltrami operator, and $\nu>0$ is the constant diffusion coefficient. The well-posedness of suitable weak formulations of \eqref{transport} has been proved in {\color{black}\cite{Dziuk07,olshanskii2014eulerian,alphonse2014abstract}.}
The equation \eqref{transport} can be written in several equivalent forms, see \cite{DEreview}. In particular, for any smooth extension of $u$ from the space--time manifold \[ \mathcal{G}: = \bigcup\limits_{t \in (0,T)} \Gamma(t) \times \{t\},\quad \mathcal{G}\subset \Bbb{R}^{4}, \]
to a neighborhood of $\mathcal{G}$, one can expand the material derivative $\dot{u}= \frac{\partial u}{\partial t} + \mathbf w \cdot \nabla u$.
Note that, on $\mathcal{G}$, the resulting identity is independent of the choice of the smooth extension of $u$ off the surface.
Assume further that the surface is defined implicitly as the zero level of the smooth level set function $\phi$ on $\Omega\times(0,T)$: \[ \Gamma(t)=\{\mathbf x\in\mathbb{R}^3\,:\,\phi(t,\mathbf x)=0\}, \]
such that $|\nabla\phi|\ge c_0>0$ in $\mathcal{O}(\mathcal{G})$, a neighborhood of $\mathcal{G}$. One can consider an extension $u^e$ in $\mathcal{O}(\mathcal{G})$ such that $u^e=u$ on $\mathcal{G}$ and $\nabla u^e\cdot\nabla \phi =0$ in $\mathcal{O}(\mathcal{G})$. Note that $u^e$ is smooth once $\phi$ and $u$ are both smooth. Below we use the same notation $u$ for the solution of the surface PDE \eqref{transport} and its extension. We have the equivalent formulation, \begin{equation} \left\{\begin{split}
\frac{\partial u}{\partial t} + \mathbf w \cdot \nabla u + ({\mathop{\rm div}}_\Gamma\mathbf w)u -{ \nu}\Delta_{\Gamma} u&=0\qquad\text{on}~~\Gamma(t), \\
\nabla u\cdot\nabla \phi& =0 \qquad\text{in}~\mathcal{O}(\Gamma(t)),
\end{split}
\right.~~t\in (0,T]. \label{transport_new} \end{equation} If $\phi$ is the signed distance function, the second equation in \eqref{transport_new} defines the normal extension of $u$, i.e. the solution $u$ stays constant in normal directions to $\Gamma(t)$. {\color{black}Otherwise, $\nabla u\cdot\nabla \phi=0$ defines an extension, which is not necessarily the normal extension. In fact, any extension is suitable for our purposes, if $u$ is a smooth function in a neighborhood of $\mathcal{G}$. We shall make an exception in section~\ref{s_tFEMs}, where the error analysis is reviewed and we need the normal extension to formulate certain estimates.}
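As a simple illustration (our example, not part of the original formulation): if $\Gamma$ is the unit sphere centered at the origin and $\phi(\mathbf x)=|\mathbf x|-1$ is its signed distance function, then the normal extension of a surface function $u$ is
\[
u^e(\mathbf x)=u\Big(\frac{\mathbf x}{|\mathbf x|}\Big),\qquad \mathbf x\in\mathcal{O}(\Gamma),
\]
which is constant along radial directions, so that indeed $\nabla u^e\cdot\nabla\phi=\nabla u^e\cdot\frac{\mathbf x}{|\mathbf x|}=0$.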
In the next section, we devise the trace FEM based on the formulation \eqref{transport_new}.
\section{The finite element method}\label{s_FEM} We first collect some preliminaries and recall the trace FEM from \cite{OlshReusken08} for the elliptic equations on stationary surfaces and some of its properties. Further, in section~\ref{s_tFEMe} we apply this method on each time step of a numerical algorithm for the transient problem \eqref{transport_new}.
\subsection{Background mesh and induced surface triangulations}\label{s_surf} Consider a tetrahedral subdivision $\mathcal{T}_h$ of the bulk computational domain $\Omega$. We assume that the triangulation $\mathcal{T}_h$ is regular (no hanging nodes). For each tetrahedron $S\in \mathcal{T}_h$, let $h_S$ denote its diameter and define the global parameter of the triangulation by $h = \max_{S} h_S$. We assume that $\mathcal{T}_h$ is shape regular, i.e. there exists $\kappa>0$ such that for every $S\in \mathcal{T}_h$ the radius $\rho_S$ of its inscribed sphere satisfies \begin{equation}\label{shaperegularity} \rho_S>h_S/\kappa. \end{equation}
For each time $t\in[0,T]$, denote by $\Gamma_h(t)$ a polygonal approximation of $\Gamma(t)$. We assume that $\Gamma_h(t)$ is a $C^{0,1}$ surface without boundary and that $\Gamma_h(t)$ can be partitioned into planar triangular segments: \begin{equation} \label{defgammah}
\Gamma_h(t)=\bigcup\limits_{T\in\mathcal{F}_h(t)} T, \end{equation} where $\mathcal{F}_h(t)$ is the set of all triangular segments $T$. We assume that for any $T\in\mathcal{F}_h(t)$ there is only \textit{one} tetrahedron $S_T\in\mathcal{T}_h$ such that $T\subset S_T$ (if $T$ lies on a face shared by two tetrahedra, any of these two tetrahedra can be chosen as $S_T$).
For the level set description of $\Gamma(t)$, the polygonal surface $\Gamma_h(t)$ is defined by the finite element level set function as follows. Consider a continuous function $\phi_h(t,\mathbf x)$ such that for any $t\in[0,T]$ the function $\phi_h$ is piecewise linear with respect to the triangulation $\mathcal{T}_h$. Its zero level set defines $\Gamma_h(t)$, \begin{equation}\label{Gammah} \Gamma_h(t):=\{\mathbf x\in\Omega\,:\, \phi_h(t,\mathbf x)=0 \}. \end{equation} We assume that $\Gamma_h(t)$ is an approximation to $\Gamma(t)$. This is a reasonable assumption if $\phi_h$ is either an interpolant to the known $\phi$ or one finds $\phi_h$ as the solution to a discrete level set equation. In the latter case, one may have no direct knowledge of $\Gamma(t)$. Other interface capturing techniques such as the volume of fluid method can also be used, subject to a postprocessing step to recover $\Gamma_h$.
\begin{figure}
\caption{Left: Cut of the background and induced surface meshes for $\Gamma_h(0)$ from Experiment~4 in Section~\ref{s_num}. Right: The zoom-in of the surface mesh.}
\label{fig:surfaceMesh}
\end{figure}
The intersection of $\Gamma_h(t)$ defined in \eqref{Gammah} with any tetrahedron in $\mathcal T_h$ is either a triangle or a quadrilateral. {\color{black} If the intersection is a quadrilateral, we divide it into two triangles.} This construction of $\Gamma_h(t)$ satisfies the assumptions made above. The bulk triangulation $\mathcal T_h$ consisting of tetrahedra and the induced surface triangulation are illustrated in Figure~\ref{fig:surfaceMesh}. There are no restrictions on how $\Gamma_h(t)$ cuts through the background mesh, and so for any fixed time instance $t$ the resulting triangulation $\mathcal{F}_h(t)$ is \emph{not} necessarily regular. The elements from $\mathcal{F}_h(t)$ may have very small internal angles and the size of neighboring triangles can vary strongly, cf.~Figure~\ref{fig:surfaceMesh} (right). Thus, {\color{black} $\Gamma_h(t)$ is not a regular triangulation of $\Gamma(t)$.}
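The local reconstruction of $\Gamma_h$ inside one tetrahedron can be made concrete as follows. The Python sketch below (our illustration, not taken from the DROPS implementation) intersects the zero level of a P1 level set function with a single tetrahedron, returning one triangle or, for a quadrilateral cut, two triangles; degenerate cases (e.g.\ the surface passing exactly through a vertex) are ignored for brevity.
\begin{verbatim}
import numpy as np
from itertools import combinations

def cut_tetrahedron(verts, phi, tol=1e-12):
    # verts: (4, 3) vertex coordinates, phi: (4,) level set values.
    # Returns a list of triangles (each a (3, 3) array); empty if no cut.
    pts = []
    for i, j in combinations(range(4), 2):
        if phi[i] * phi[j] < -tol:              # sign change on edge (i, j)
            s = phi[i] / (phi[i] - phi[j])      # zero of the linear interpolant
            pts.append((1 - s) * verts[i] + s * verts[j])
    if len(pts) < 3:
        return []
    pts = np.array(pts)
    if len(pts) == 3:
        return [pts]
    # four intersection points form a planar quadrilateral: order them by
    # angle around their centroid and split into two triangles
    c = pts.mean(axis=0)
    normal = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    e1 = (pts[0] - c) / np.linalg.norm(pts[0] - c)
    e2 = np.cross(normal, e1)
    e2 /= np.linalg.norm(e2)
    ang = np.arctan2((pts - c) @ e2, (pts - c) @ e1)
    q = pts[np.argsort(ang)]
    return [q[[0, 1, 2]], q[[0, 2, 3]]]
\end{verbatim}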
Two interesting properties of the induced surface triangulations are known in the literature~\cite{DemlowOlsh,olshanskii2012surface}: (i)~If the background triangulation $\mathcal T_h$ satisfies the minimal angle condition~\eqref{shaperegularity}, then the surface triangulation satisfies a \textit{maximum} angle condition; (ii)~Any element from $\mathcal{F}_h(t)$ shares at least one vertex with a full-size shape-regular triangle from $\mathcal{F}_h(t)$. The trace finite element method does not exploit these properties directly, but they are still useful if one is interested in understanding the performance of the method.
We note that the surface triangulations $\mathcal{F}_h(t)$ will be used only to perform numerical integration in the finite element method below, while approximation properties of the method, as we shall see, depend on the volumetric tetrahedral mesh.
\subsection{The trace FEM: steady surface}\label{s_tFEMs} To review the idea of the trace FEM, assume for a moment the stationary transport--diffusion problem on a steady closed smooth surface $\Gamma$, \begin{equation} \alpha u+\mathbf w \cdot \nabla u + ({\mathop{\rm div}}_\Gamma\mathbf w)u -{ \nu}\Delta_{\Gamma} u=f\quad\text{on}~~\Gamma. \label{transport_st} \end{equation} Here we assume $\alpha>0$ and $\mathbf w\cdot\mathbf n=0$. Integration by parts over $\Gamma$ gives the weak formulation of \eqref{transport_st}: Find $u\in H^1(\Gamma)$ such that \begin{equation} \int_{\Gamma}\left(\alpha \, uv+\nu\nabla_\Gamma u\cdot\nabla_\Gamma v\, - (\mathbf w\cdot\nabla v) u\,\right)\, \mathrm{d}\mathbf{s} =\int_{\Gamma}f v\, \mathrm{d}\mathbf{s} \label{weak} \end{equation} for all $v\in H^1(\Gamma)$. In the trace FEM, one substitutes $\Gamma$ with $\Gamma_h$ in \eqref{weak} {\color{black}($\Gamma_h$ is constructed as in section~\ref{s_surf})}, and instead of $H^1(\Gamma)$ considers the space of traces on $\Gamma_h$ of all functions from the background ambient finite element space. This can be formally defined as follows.
Consider the volumetric finite element space of all piecewise linear continuous functions with respect to the bulk triangulation $\mathcal{T}_h$: \begin{equation}
V_h:=\{v_h\in C(\Omega)\ |\ v_h|_{S}\in P_1~~ \forall\ S \in\mathcal{T}_h\}.
\label{e:2.6} \end{equation} $V_h$ induces the following space on $\Gamma_h$: \begin{equation*}
V_h^{\Gamma}:=\{\psi_h\in C(\Gamma_h)\ |\ \exists ~ v_h\in V_h\ \text{such that }\ \psi_h=v_h~\text{on}~{\Gamma_h}\}.
\end{equation*}
Given the surface finite element space $V_h^{\Gamma}$, the finite element discretization of \eqref{transport_st} reads: Find $u_h\in V_h^{\Gamma}$ such that \begin{equation} \int_{\Gamma_h}\left(\alpha \, u_hv_h+\nu\nabla_{\Gamma_h} u_h\cdot\nabla_{\Gamma_h} v_h\, - (\mathbf w\cdot\nabla v_h) u_h \right)\, \mathrm{d}\mathbf{s}_h =\int_{\Gamma_h}f_h v_h\, \mathrm{d}\mathbf{s}_h \label{FEM} \end{equation} for all $v_h\in V_h^{\Gamma}$. Here $f_h$ is an approximation of the problem source term on $\Gamma_h$.
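To illustrate how traces of the background P1 basis enter \eqref{FEM}, the following Python sketch (our illustration; the convection term and the right-hand side are omitted) assembles the local matrix of the $\alpha$-mass and tangential-diffusion terms for one surface triangle $T\subset S_T$, using the constant gradients of the four barycentric basis functions of $S_T$ and the tangential projector $I-\mathbf n_h\mathbf n_h^T$.
\begin{verbatim}
import numpy as np

def p1_gradients(tet):
    # Constant gradients of the four P1 barycentric basis functions on a
    # tetrahedron given as a (4, 3) array of vertex coordinates.
    A = np.hstack([np.ones((4, 1)), tet])      # rows: (1, x_i, y_i, z_i)
    G = np.linalg.inv(A)
    return G[1:, :].T                          # row i = gradient of basis i

def local_surface_matrix(tet, tri, alpha, nu):
    # Local matrix  int_T (alpha*phi_i*phi_j
    #              + nu * grad_Gh phi_i . grad_Gh phi_j) ds
    # for a planar surface triangle `tri` ((3, 3) array) inside `tet`.
    grads = p1_gradients(tet)                  # (4, 3)
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    area = 0.5 * np.linalg.norm(n)
    n /= np.linalg.norm(n)
    P = np.eye(3) - np.outer(n, n)             # tangential projector
    tgrads = grads @ P                         # surface gradients of the basis
    K = nu * area * tgrads @ tgrads.T
    # edge-midpoint quadrature (3 points, weight area/3): exact for the
    # quadratic integrand phi_i*phi_j on the planar triangle
    A = np.hstack([np.ones((4, 1)), tet])
    M = np.zeros((4, 4))
    for xq in 0.5 * (tri + np.roll(tri, -1, axis=0)):
        lam = np.linalg.solve(A.T, np.hstack([1.0, xq]))
        M += (area / 3.0) * np.outer(lam, lam)
    return alpha * M + K
\end{verbatim}
The global matrix is obtained by accumulating these local contributions into the rows and columns associated with the four vertices of $S_T$, for every $T\in\mathcal F_h$.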
{\color{black}From here and up to the end of this section} $f^e$ denotes a \emph{normal} extension of a quantity $f$ from $\Gamma$. For a smooth closed surface, $f^e$ is well defined in a neighborhood $\mathcal{O}(\Gamma)$. Assume that $\Gamma_h$ approximates $\Gamma$ in the following sense: It holds $\Gamma_h\subset\mathcal{O}(\Gamma)$ and \begin{equation}
\|\mathbf x-\mathbf p(\mathbf x)\|_{L^\infty(\Gamma_h)}+h\|\mathbf n^e-\mathbf n_h\|_{L^\infty(\Gamma_h)}\le c\,h^2, \label{G_Gh} \end{equation} where $\mathbf n_h$ is an external normal vector on $\Gamma_h$ and $\mathbf p(\mathbf x)\in\Gamma$ is the closest surface point for $\mathbf x$. Given \eqref{G_Gh}, the trace FEM is second order accurate in the $L^2$ surface norm and first order accurate in $H^1$ surface norm~\cite{OlshReusken08,ORXimanum}: For solutions of \eqref{transport_st} and \eqref{FEM}, it holds \[
\|u^e-u_h\|_{L^2(\Gamma_h)}+h\|\nabla_{\Gamma_h}(u^e-u_h)\|_{L^2(\Gamma_h)} \le c\,h^2, \] with a constant $c$ dependent only on the shape regularity of $\mathcal T_h$ and \textit{independent of how the surface $\Gamma_h$ cuts through the background mesh}. This robustness property is extremely useful for extending the method to time-dependent surfaces. It allows one to keep the same background mesh while the surface evolves through the bulk domain, avoiding unnecessary mesh fitting and mesh reconstruction.
Before we consider the time-dependent case, a few important properties of the method should be mentioned. Firstly, the authors of \cite{deckelnick2014unfitted} noted that one can use the full gradient instead of the tangential gradient in the diffusion term in \eqref{FEM}. This leads to the following FEM formulation:
Find $u_h\in V_h$ such that \begin{equation} \int_{\Gamma_h}\left(\alpha \, u_hv_h+\nu\nabla u_h\cdot\nabla v_h\, - (\mathbf w\cdot\nabla v_h) u_h \right)\, \mathrm{d}\mathbf{s}_h =\int_{\Gamma_h}f_h v_h\, \mathrm{d}\mathbf{s}_h \label{FEMh} \end{equation} for all $v_h\in V_h$. The rationale behind the modification is clear from the following observation. For the normal extension $u^e$ of the solution $u$ we have $\nabla_{\Gamma} u= \nabla u^e$ and so $u^e$ satisfies the integral equality \eqref{weak} with surface gradients (in the diffusion term) replaced by full gradients and for an arbitrary smooth function $v$ on $\Omega$. Therefore, by solving \eqref{FEMh} we recover $u_h$, which approximates the PDE solution $u$ on the triangulated surface $\Gamma_h$ \textit{and} its normal extension $u^e$ in the strip consisting of all tetrahedra cut by the surface $\Gamma_h$.
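To spell out the observation above: for any smooth extension one can decompose the full gradient on $\Gamma$ as
\[
\nabla u^e=\nabla_\Gamma u+\Big(\frac{\partial u^e}{\partial\mathbf n}\Big)\mathbf n ,
\]
and for the normal extension the normal derivative vanishes, so $\nabla u^e=\nabla_\Gamma u$ on $\Gamma$; hence replacing surface gradients by full gradients in the diffusion term does not change the equation satisfied by $u^e$.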
The formulation \eqref{FEMh} uses the bulk finite element space $V_h$ instead of the surface finite element space $V_h^\Gamma$ in \eqref{FEM}. However, the practical implementation of both methods uses the same set of bulk finite element nodal basis functions $\phi_i\in V_h$ such that $\mbox{supp}(\phi_i)\cap\Gamma_h\neq\emptyset$. Hence, the active degrees of freedom in both methods are the same. The stiffness matrices are, however, different. For the case of the Laplace-Beltrami problem and a regular quasi-uniform tetrahedral grid, the studies in \cite{deckelnick2014unfitted,reusken2015analysis} show that the conditioning of the (diagonally scaled) stiffness matrix of the method \eqref{FEMh} improves over the conditioning of the matrix for \eqref{FEM}, at the expense of a slight deterioration of the accuracy. In the remainder of this paper we shall use the full gradient version of the trace FEM.
From the formulations \eqref{FEMh} or \eqref{FEM} we see that only those degrees of freedom of the background finite element space $V_h$ are active (enter the system of algebraic equations) that are tailored to the tetrahedra cut by $\Gamma_h$. This provides us with a method of optimal computational complexity, which is not always the case for the methods based on an extension of the surface PDE to the bulk domain. Due to the possible small cuts of bulk tetrahedra (cf. Figure~\ref{fig:surfaceMesh}), the resulting stiffness matrices can be poorly conditioned. The simple diagonal re-scaling of the matrices significantly improves the conditioning and eliminates outliers in the spectrum, see \cite{OlsR2009,reusken2015analysis}. Therefore, Krylov subspace iterative methods applied to the re-scaled matrices are very efficient for solving the algebraic systems. Since the resulting matrices are sparse and resemble discretizations of 2D PDEs, using an optimized direct solver is also a suitable option.
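The diagonal re-scaling mentioned above is elementary to implement; a minimal sketch with SciPy sparse matrices (our illustration, assuming a positive diagonal) reads:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def diagonally_rescale(A):
    # Symmetric diagonal rescaling D^{-1/2} A D^{-1/2}; small cut
    # elements produce rows/columns of very different magnitude,
    # which this simple scaling largely removes.
    dinv = 1.0 / np.sqrt(A.diagonal())
    D = sp.diags(dinv)
    return D @ A @ D
\end{verbatim}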
\subsection{The trace FEM: evolving surface}\label{s_tFEMe} For the evolving surface case, we extend the approach in such a way that the trace FEM \eqref{FEMh} is applied on each time step for the recovered surface $\Gamma_h^{n}\approx\Gamma(t_n)$. Here and further, $\{t_n\}$, with $0=t_0<\dots<t_n<\dots<t_N=T$, is the temporal mesh, and $u^{n}$ approximates $u(t_{n})$. As before, $V_h$ is a \textit{time independent} bulk finite element space with respect to the given background triangulation $\mathcal T_h$.
Assume that a smooth extension $u^e(\mathbf x,t)$ is available in $\mathcal{O}(\mathcal{G})$, and \begin{equation}\label{Cond1} \Gamma(t_n)\subset \left\{\mathbf x\in\Omega\,:\,(\mathbf x,t_{n-1})\in\mathcal{O}(\mathcal{G})\right\}. \end{equation} In this case, one may discretize \eqref{transport_new} in time using, for example, the implicit Euler method: \begin{equation} \frac{u^{n}-u^e(t_{n-1})}{\Delta t} + \mathbf w^{n} \cdot \nabla u^{n} + ({\mathop{\rm div}}_\Gamma\mathbf w^{n})u^{n} -{ \nu}\Delta_{\Gamma} u^{n}=0\quad\text{on}~\Gamma(t_n), \label{transportFD} \end{equation} with $\Delta t=t_n-t_{n-1}$. Now we apply the trace FEM to solve \eqref{transportFD} numerically. The trace FEM is a natural choice here, since $\Gamma(t_n)$ is not fitted by the mesh. We look for $u^{n}_h\in V_h$ solving \begin{equation}\int_{\Gamma_h^{n}}\left(\frac{1}{\Delta t}u^{n}_hv_h - (\mathbf w^{n}_h \cdot \nabla v_h) u^{n}_h \right)\,\mathrm{d}\mathbf{s}_h +{ \nu}\int_{\Gamma_h^{n}}\nabla u^{n}_h\cdot\nabla v_h\,\mathrm{d}\mathbf{s}_h =\int_{\Gamma_h^{n}}\frac{1}{\Delta t}u^{e,n-1}_hv_h\,\mathrm{d}\mathbf{s}_h \label{transportFDFE} \end{equation} for all $v_h\in V_h$. Here $u^{e,n-1}_h$ is a suitable extension of $u^{n-1}_h$ from $\Gamma_h^{n-1}$ to the surface neighborhood, $\mathcal{O}(\Gamma_h^{n-1})$. Condition \eqref{Cond1} leads to the condition \begin{equation}\label{Cond2} \Gamma^{n}_h\subset \mathcal{O}(\Gamma^{n-1}_h). \end{equation} Note that \eqref{Cond2} is not a Courant condition on $\Delta t$, but rather a condition on the width of the strip surrounding the surface, where the extension of the finite element solution is performed.
{\color{black}Over one time step, a material point on the surface can travel a distance not exceeding $\|\mathbf{w}\|_{L^\infty}\Delta t$. Therefore, it is safe to extend the solution to all tetrahedra intersecting the strip of the width $2\|\mathbf{w}\|_{L^\infty}\Delta t$ surrounding the surface. Hence, we consider all tetrahedra having at least one vertex closer than $\|\mathbf{w}\|_{L^\infty}\Delta t$ to the surface: Define \begin{equation}\label{strip}
\mathcal{\widetilde{S}}(\Gamma_h^{n}):=\left\{ S\in \mathcal{T}_h~:~\exists~x\in \mathcal{N}(S),~~\text{s.t.}~~\hbox{dist}(\mathbf x,\Gamma_h^n)<L\|\mathbf{w}\|_{L^\infty}\Delta t\right\},\quad L=1, \end{equation} where $\mathcal{N}(S)$ is the set of all nodes for $S\in \mathcal{T}_h$. The criterion in \eqref{strip} can be refined by exploiting the local information about $\mathbf w$ or about $\mathbf n\cdot\mathbf w$.}
Once the numerical extension procedure $u^{k}_h\to u^{e,k}_h$ is specified, the identity \eqref{transportFDFE} defines the fully discrete numerical method.
In general, to find a suitable extension, one can consider a numerical solver for hyperbolic systems and apply it to the second equation in \eqref{transport_new}. For example, one can use a finite element method to solve the problem \[ \frac{\partial u^e}{\partial t'}+ \nabla u^e\cdot\nabla \phi(t^k) =0,\quad\text{such that}~~u^e=u^{k}_h~~\text{on}~\Gamma_h^k, \] with the auxiliary time $t'$, and let $u^{k,e}_h:=\lim\limits_{t'\to\infty}u^{e}(t')$. Another technique to compute extensions (also used for the re-initialization of the signed distance function in the level-set method) is the Fast Marching Method \cite{sethian1996fast}. We find the FMM technique convenient and fast for building suitable extensions in narrow bands of tetrahedra containing $\Gamma_h$. We give the details of the FMM in the next section.
We need one further piece of notation. Denote by $\mathcal{S}(\Gamma_h^{k})$ the strip of all tetrahedra cut by $\Gamma_h^{k}$: \[ \mathcal{S}(\Gamma_h^{k})=\bigcup_{S\in\mathcal{T}_{\Gamma}^k} \overline{S},\quad\text{with}~~ \mathcal{T}_{\Gamma}^k := \{S\in \mathcal{T}_h: S \cap \Gamma^k_h \neq \emptyset \}. \] We want to exploit the fact that the trace finite element method provides us with the normal extension in $\mathcal{S}(\Gamma_h^{k})$ `for free', since the solution $u^{k}_h$ of \eqref{transportFDFE} (with $n=k$) approximately satisfies $\frac{\partial u_h^k}{\partial \mathbf n}=0$ in $\mathcal{S}(\Gamma_h^{k})$, by the property of the full gradient FEM formulation.
For given $u^{e,n-1}_h$ and $\phi_h(t_n)$, one time step of the algorithm now reads: \begin{enumerate} \item Solve \eqref{transportFDFE} for $u^{n}_h\in V_h$;\\[-3ex] \item Apply the FMM to find $u^{e,n}_h$ in $\mathcal{\widetilde{S}}(\Gamma_h^{n})\setminus \mathcal{S}(\Gamma_h^{n})$ such that $
u^{e,n}_h=u^{n}_h~\text{on}~\partial \mathcal{S}(\Gamma_h^{n}). $ \end{enumerate}
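Schematically, one time step of the implicit Euler variant can be organized as in the Python-style pseudocode below; all function names (\texttt{reconstruct\_surface}, \texttt{assemble\_trace\_fem}, \texttt{solve}, \texttt{fmm\_extend}) are hypothetical placeholders for the building blocks described above, not routines of an actual package.
\begin{verbatim}
def advance_one_step(mesh, phi_h, u_ext_prev, w, w_max, nu, dt, L=1):
    # 1. recover Gamma_h^n and the cut tetrahedra S(Gamma_h^n) from the
    #    P1 level set function at t_n
    surface_tris, cut_tets = reconstruct_surface(mesh, phi_h)  # placeholder

    # 2. assemble and solve the trace FEM system (full gradient version)
    #    on Gamma_h^n, with u_ext_prev = u_h^{e,n-1} in the right-hand side
    A, b = assemble_trace_fem(mesh, surface_tris, cut_tets,
                              u_ext_prev, w, nu, dt)           # placeholder
    u_new = solve(A, b)                                        # placeholder

    # 3. extend u_new by the FMM to all tetrahedra having a vertex closer
    #    than L*|w|_inf*dt to Gamma_h^n (the strip \tilde S(Gamma_h^n))
    u_ext_new = fmm_extend(mesh, surface_tris, cut_tets,
                           u_new, width=L * w_max * dt)        # placeholder
    return u_new, u_ext_new
\end{verbatim}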
If the motion of the surface is coupled to the solution of the surface PDE (the examples include two-phase flows with surfactant or some models of tumor growth~\cite{GReusken2011,chaplain2001spatio}), then a method to find an evolution of $\phi_h$ has to be added, while finding $u^{e,n}_h$ can be combined with a re-initialization of $\phi_h$ in the FMM.
A particular advantage of the present variant of the trace FEM for evolving domains is that the accuracy order in time can be easily increased using standard finite differences. In numerical experiments we use
the BDF2 scheme: The first term in \eqref{transportFD} is replaced by
\[ \frac{3u^{n}-4u^e(t_{n-1})+u^e(t_{n-2})}{2\Delta t} \] and we set $L=2$ in \eqref{strip}; the corresponding modifications in \eqref{transportFDFE} are obvious. Furthermore, one may increase the accuracy order in space by using higher order background finite elements and a higher order surface reconstruction, see \cite{grande2016higher,lehrenfeld2016high} for practical higher order variants of trace FEM on stationary surfaces. In the framework of this paper, the use of these higher order methods is decoupled from the numerical integration in time.
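For the reader's convenience, the resulting BDF2 analogue of \eqref{transportFDFE} (spelling out the modification just described) is: find $u^n_h\in V_h$ such that
\[
\int_{\Gamma_h^{n}}\Big(\frac{3}{2\Delta t}\,u^{n}_hv_h-(\mathbf w^{n}_h\cdot\nabla v_h)\,u^{n}_h\Big)\,\mathrm{d}\mathbf{s}_h
+\nu\int_{\Gamma_h^{n}}\nabla u^{n}_h\cdot\nabla v_h\,\mathrm{d}\mathbf{s}_h
=\int_{\Gamma_h^{n}}\frac{4u^{e,n-1}_h-u^{e,n-2}_h}{2\Delta t}\,v_h\,\mathrm{d}\mathbf{s}_h
\]
for all $v_h\in V_h$.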
\subsection{Extension by FMM}\label{s_FMM} The Fast Marching Method is a well-known technique to compute an approximate distance function to an interface embedded in a computational domain. Here we build on the variant of the FMM from section~7.4.1 of \cite{GReusken2011} to compute finite element function extensions in a strip of tetrahedra. We need some further notation. For a vertex $\mathbf x$ of the background triangulation $\mathcal T_h$, $\mathcal{S}(\mathbf x)$ denotes the union of all tetrahedra sharing $\mathbf x$. We fix $t_n$ and let $\Gamma_h=\Gamma_h^{n}$, $\mathcal{S}(\Gamma_h)=\mathcal{S}(\Gamma_h^{n})$. Note that we do not necessarily have \textit{a priori} information about $\mathcal{\widetilde{S}}(\Gamma_h)$, since the distance function may not be available. Finding the narrow band for the extension is a part of the FMM below. We need the set of vertices from tetrahedra cut by the surface: \[
\mathcal{N}_{\Gamma}=\{\mathbf x\in\mathbb{R}^3\,:\, \mathbf x\in \mathcal{N}(S)~\hbox{ for some } S\in \mathcal{S}(\Gamma_h) \}.
\]
Assume $u_h=u^{n}_h\in V_h$ solves \eqref{transportFDFE} and we are interested in computing $u_h^e$ in $\mathcal{\widetilde{S}}(\Gamma_h)$. The FMM is based on a greedy grid traversal technique and consists of two phases.
{\it Initialization phase.} In the tetrahedra cut by $\Gamma_h$ the full-gradient trace FEM provides us with the normal extension. Hence, we set $$ u_h^{e}(\mathbf x)=u_h(\mathbf x)\quad\text{for}~\mathbf x\in\mathcal{N}_{\Gamma}. $$ For the next phase of the FMM, we also need a distance function $d(\mathbf x)$ for all $\mathbf x\in\mathcal{N}_{\Gamma}$. For any $S_T\in \mathcal{S}(\Gamma_h)$, we know that $T=S_T\cap\Gamma_h$ is a triangle or quadrilateral with vertices $\{\mathbf y_j\}$, $j=1,\dots,J$, where $J=3$ or $J=4$. Denote by $\mathbb{P}_T$ the plane containing $T$ and by $P_h \mathbf x$ the projection of $\mathbf x$ on $\mathbb{P}_T$. Then, for each $\mathbf x\in \mathcal{N}(S_T)$, we define \begin{equation} d_T(\mathbf x):=\left\{ \begin{array}{ll}
|\mathbf x-P_h \mathbf x|& \hbox{if } P_h \mathbf x\in T,\\
\min\limits_{1\leq j\leq J}|\mathbf x-\mathbf y_j|&\hbox{otherwise}. \end{array} \right. \end{equation} After we loop over all $S\in \mathcal{S}(\Gamma_h)$, the value $d(\mathbf x)$ in each $\mathbf x\in\mathcal{N}_{\Gamma}$ is given by \begin{equation} d(\mathbf x)=\min_{S_T\in \mathcal{S}(\mathbf x)}d_T(\mathbf x). \end{equation}
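For a triangular cut ($J=3$), the local distance $d_T$ can be computed, for instance, as in the following Python sketch (our illustration; the quadrilateral case is handled analogously):
\begin{verbatim}
import numpy as np

def point_in_triangle(p, a, b, c, tol=1e-12):
    # Barycentric test: is the (coplanar) point p inside triangle abc?
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return v >= -tol and w >= -tol and (1.0 - v - w) >= -tol

def local_distance(x, tri):
    # d_T(x): distance to the plane of the cut triangle `tri` if the
    # projection falls inside it, otherwise the distance to the nearest
    # triangle vertex (cf. the definition of d_T above).
    a, b, c = tri
    n = np.cross(b - a, c - a)
    n /= np.linalg.norm(n)
    p = x - np.dot(x - a, n) * n      # projection P_h x onto the plane of T
    if point_in_triangle(p, a, b, c):
        return np.linalg.norm(x - p)
    return min(np.linalg.norm(x - y) for y in (a, b, c))
\end{verbatim}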
{\it Extension phase.} During this phase, we determine both $d(\mathbf x)$ and $u_h^e(\mathbf x)$ for $\mathbf x\in\mathcal{N}\setminus\mathcal{N}_\Gamma$. To this end, the set $\mathcal{N}$ of all vertices from $\mathcal T_h$ is divided into three subsets. The finished set $\mathcal{N}_{f}$ contains all vertices where $d$ and $u_h^e$ have already been defined. We initialize $\mathcal{N}_{f}=\mathcal{N}_{\Gamma}$. Initially the active set $\mathcal{N}_{a}$ contains all the vertices that have a neighbour in $\mathcal{N}_f$: \begin{align*} &\mathcal{N}_a=\{\mathbf x\in\mathcal{N}\setminus\mathcal{N}_f~:~ \mathcal{N}(\mathcal{S}(\mathbf x))\cap \mathcal{N}_f\neq \emptyset\},\\ &\mathcal{N}_u=\mathcal{N}\setminus(\mathcal{N}_f\cup\mathcal{N}_a). \end{align*} The active set is updated during the FMM and the method stops once $\mathcal{N}_a$ is empty.
For all $\mathbf x\in\mathcal{N}_a$, the FMM iteratively computes auxiliary distance function and extension function values $\tilde{d}(\mathbf x)$ and $\tilde{u}^e_h(\mathbf x)$, which become final values ${d}(\mathbf x)$ and ${u}^e_h(\mathbf x)$ once $\mathbf x$ leaves $\mathcal{N}_a$ and joins $\mathcal{N}_f$. The procedure is as follows. For $\mathbf x\in\mathcal{N}_a$ we consider all $S\in \mathcal{S}(\mathbf x)$ such that $\mathcal{N}(S)\cap\mathcal{N}_f\neq\emptyset$. If $\mathcal{N}(S)\cap\mathcal{N}_f$ contains only one vertex $\mathbf y$, we set \[
\tilde d_S(\mathbf x)=d(\mathbf y)+|\mathbf x-\mathbf y|, \quad \tilde u_{h,S}(\mathbf x)=u_h^e(\mathbf y). \] If $\mathcal{N}(S)\cap\mathcal{N}_f$ contains two or three vertices $\{\mathbf y_j\}$, $1\leq j\leq J$, $J=2$ or $3$, then we compute \begin{align*} &\tilde d_S(\mathbf x)=\left\{ \begin{array}{ll}
d(P_h \mathbf x)+|\mathbf x-P_h \mathbf x|,& \hbox{if } P_h \mathbf x\in S,\\
d(\mathbf y_{min})+|\mathbf x-\mathbf y_{min}|,&\hbox{otherwise}, \end{array} \right.\\ &\tilde u_{h,S}(\mathbf x)=\left\{ \begin{array}{ll} u_h^e(P_h \mathbf x),& \hbox{if } P_h \mathbf x\in S,\\ u_h^e(\mathbf y_{min}),&\hbox{otherwise}, \end{array} \right. \end{align*}
where $\mathbf y_{min}=\mathrm{argmin}_{1\leq j\leq J}(d(\mathbf y_j)+|\mathbf x-\mathbf y_j|)$, and $P_h \mathbf x$ is the orthogonal projection of $\mathbf x$ on the line passing through $\{\mathbf y_j\}$ (if $J=2$) or the plane containing $\{\mathbf y_j\}$ (if $J=3$). The value of $d(P_h \mathbf x)$ is computed as the linear interpolation of the known values $d(\mathbf y_j)$. Now we set \[ \begin{aligned} \tilde{d}(\mathbf x)&=\tilde d_{S_{min}}(\mathbf x),\\ \tilde u_{h}(\mathbf x)&=\tilde u_{h,S_{min}}(\mathbf x), \end{aligned} \quad\text{for}~~ S_{min}=\mathrm{argmin}\{\tilde d_S(\mathbf x)~:~S\in \mathcal{S}(\mathbf x)~\text{and}~\mathcal{N}(S)\cap\mathcal{N}_f\neq\emptyset\}. \]
Now we determine the vertex $\mathbf x_{min}\in \mathcal{N}_a$ such that $ \tilde d(\mathbf x_{min})=\min_{\mathbf x\in\mathcal{N}_a}\tilde d(\mathbf x) $ and set \[ {d}(\mathbf x_{min})=\tilde d(\mathbf x_{min}),\quad u^e_{h}(\mathbf x_{min})=\tilde u_{h}(\mathbf x_{min}). \] The vertex $\mathbf x_{min}$ is now moved from the active set $\mathcal{N}_a$ to the finished set $\mathcal{N}_f$. Based on the value of ${d}(\mathbf x_{min})$, one checks whether any tetrahedron from $\mathcal{S}(\mathbf x_{min})$ may belong to the strip $\mathcal{\widetilde{S}}(\Gamma_h)$. If such $S\in\mathcal{\widetilde{S}}(\Gamma_h)$ exists, then $\mathcal{N}_a$ is updated with the vertices from $\mathcal{N}_u$ connected to $\mathbf x_{\min}$. Otherwise, no new vertices are added to $\mathcal{N}_a$. In our implementation we use the simple criterion: If it holds
d(\mathbf x_{min})>h+L|\mathbf{w}|_{\infty}\Delta t, \] then we do not update $\mathcal{N}_a$ with new vertices from $\mathcal{N}_u$.
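The extension phase can be organized compactly around a priority queue. The Python sketch below (our illustration) conveys the structure of the greedy loop; for brevity it uses only the single-finished-neighbour update $d(\mathbf y)+|\mathbf x-\mathbf y|$, whereas the projection-based updates above are analogous, and it assumes integer vertex indices.
\begin{verbatim}
import heapq

def fmm_extend(dist, uext, neighbors, strip_width):
    # dist, uext : dicts over vertex indices, pre-filled on N_Gamma
    # neighbors  : dict vertex -> list of (neighbor vertex, edge length)
    # strip_width: e.g. h + L*|w|_inf*dt; beyond it the active set is
    #              not enlarged any further
    finished = set(dist)                    # N_f := N_Gamma
    heap = []                               # tentative values for N_a
    for y in finished:
        for x, edge in neighbors[y]:
            if x not in finished:
                heapq.heappush(heap, (dist[y] + edge, x, uext[y]))
    while heap:
        d, x, u = heapq.heappop(heap)
        if x in finished:
            continue                        # outdated heap entry
        dist[x], uext[x] = d, u             # finalize smallest tentative value
        finished.add(x)
        if d > strip_width:
            continue                        # stop growing the strip here
        for z, edge in neighbors[x]:
            if z not in finished:
                heapq.heappush(heap, (d + edge, z, u))
    return dist, uext
\end{verbatim}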
\section{Numerical examples}\label{s_num} This section collects the results of several numerical experiments for a number of problems posed on evolving surfaces. The results demonstrate the accuracy of the trace FEM, its stability with respect to the variation of discretization parameters, and the ability to handle the case when the transport--diffusion PDE is solved on a surface undergoing topological changes.
All implementations are done in the finite element package DROPS \cite{DROPS}. The background finite element space $V_h$ consists of piecewise linear continuous finite elements. The BDF2 scheme is applied to approximate the time derivative. At each time step, we assemble the stiffness matrix and the right-hand side by numerical integration over the discrete surfaces $\Gamma_h^n$. A Gaussian quadrature of degree five is used for the numerical integration on each $K\in\mathcal F_h$. The same method is used to evaluate the finite element error. All linear algebraic systems are solved using the GMRES iterative method with the Gauss--Seidel preconditioner to a relative tolerance of $10^{-6}$.
The first series of experiments verifies the formal accuracy order of the method for the examples with known analytical solutions.
\noindent{\bf Experiment 1.} We consider the transport--diffusion equation \eqref{transport} on the unit sphere $\Gamma(t)$ moving with the constant velocity $\mathbf{w}=(0.2,0,0)$. The initial data is given by \[
\Gamma(0):=\{\mathbf x\in\mathbb{R}^3~:~|\mathbf x|=1\},\quad u|_{t=0}=1+x_1+x_2+x_3. \] One easily checks that the exact solution is given by $u(\mathbf x,t)=1+(x_1+x_2+x_3-0.2t )\exp(-2t)$. In this and the next two experiments, we set $T=1$.
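A quick check of this statement (assuming the diffusion coefficient $\nu=1$, which is consistent with the decay rate of the stated solution): on the unit sphere centered at $c(t)=(0.2t,0,0)$ the coordinate functions satisfy $\Delta_\Gamma(x_i-c_i(t))=-2\,(x_i-c_i(t))$, so with $\varphi:=x_1+x_2+x_3-0.2t$,
\[
\dot u=\partial_t u+\mathbf w\cdot\nabla u=(-0.2-2\varphi)e^{-2t}+0.2\,e^{-2t}=-2\varphi e^{-2t},
\qquad
\Delta_\Gamma u=-2\varphi e^{-2t},
\]
while ${\mathop{\rm div}}_\Gamma\mathbf w=0$ for the constant velocity; hence $\dot u+({\mathop{\rm div}}_\Gamma\mathbf w)u-\nu\Delta_\Gamma u=0$.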
The computational domain is $\Omega=[-2,2]^3$. We divide $\Omega$ into tetrahedra as follows: First we apply a uniform tessellation of $\Omega$ into cubes with side length $h$; then the Kuhn subdivision of each cube into 6 tetrahedra is applied. This results in the shape regular background triangulation $\mathcal T_h$. The finite element level set function $\phi_h(\mathbf x,t)$ is the nodal Lagrangian P1 interpolant for the signed distance function of $\Gamma(t)$, and \[ \Gamma_h^n=\{\mathbf x\in\mathbb{R}^3~:~\phi_h(\mathbf x,t_n)=0\}. \] The temporal grid is uniform, $t_n=n\Delta t$. We note that in all experiments we apply the Fast Marching Method to find both the distances to $\Gamma_h^n$ and the extension $u^e$, so we never exploit explicit knowledge of the distance function for $\Gamma(t)$.
\begin{figure}
\caption{The cut of the background mesh and a part of the surface mesh for $t_n=1$.
Colors illustrate the solution and its extension.
}
\label{fig:Exampl1}
\end{figure}
Figure~\ref{fig:Exampl1} shows the cut of the background mesh and the surface mesh colored by the computed solution at time $t=1$. We are interested in the $L^2(H^1)$ and $L^2(L^2)$ surface norms for the error. We compute them using the trapezoidal quadrature rule in time,
\begin{align*} err_{L^2(H^1)} &=
\Big\{\frac{\Delta t}{2}\|\nabla_{\Gamma_h}(u^e-\pi_h u)\|_{L^2(\Gamma_h^0)}^2 +\sum_{i=1}^{N-1}\Delta t\|\nabla_{\Gamma_h}(u^e-u_h)\|_{L^2(\Gamma_h^i)}^2
\\ &\quad +\frac{\Delta t}{2}\|\nabla_{\Gamma_h}(u^e-u_h)\|_{L^2(\Gamma_h^N)}^2\Big\}^{1/2},\\ err_{L^2(L^2)} &=
\Big\{\frac{\Delta t}{2}\|(u^e-\pi_h u)\|_{L^2(\Gamma_h^0)}^2 +\sum_{i=1}^{N-1}\Delta t\|(u^e-u_h)\|_{L^2(\Gamma_h^i)}^2
\\ &\quad +\frac{\Delta t}{2}\|(u^e-u_h)\|_{L^2(\Gamma_h^N)}^2\Big\}^{1/2}. \end{align*}
Tables~\ref{tab:convergencerateH1} and \ref{tab:convergencerateL2} present the error norms for the Experiment~1 with various time steps $\Delta t$ and mesh sizes $h$. If one refines both $\Delta t$ and $h$, the first order of convergence in the surface $L^2(H^1)$-norm and the second order in the surface $L^2(L^2)$-norm are clearly seen. {\color{black} For the case of large $\Delta t$ and small $h$ the FMM extension strip $\widetilde{S}(\Gamma_h)\setminus{S}(\Gamma_h)$ becomes wider in terms of characteristic mesh size $h$, and the accuracy of the method diminishes. This numerical phenomenon can be noted in the top rows of Tables~\ref{tab:convergencerateH1} and \ref{tab:convergencerateL2}, where the error increases as the mesh size becomes smaller. We expect that the situation improves if one applies more accurate extension methods in $\widetilde{S}(\Gamma_h)\setminus{S}(\Gamma_h)$. One candidate would be the normal derivative volume stabilization method from~\cite{grande2016analysis} extended to all tetrahedra in $\widetilde{S}(\Gamma_h)$.}
\begin{table}[h]\small \caption{\small The $L^2(H^1)$-norm of the error in Experiment~1.}\label{tab:convergencerateH1}
\begin{center}
\begin{tabular}{l|llll}
& $h=1/2$ &$h=1/4$ & $h=1/8$ & $h=1/16$ \\ \hline $\Delta t=1/8$&\bf 0.96365&0.835346 &1.221340 &2.586520 \\ $\Delta t=1/16$ &0.963654&\bf 0.74794& 0.423799&0.653380\\ $\Delta t=1/32$ &0.954179&0.759253&\bf 0.37954 &0.225399\\ $\Delta t=1/64$&0.953155& 0.766650&0.381567&\bf 0.19143\\ \hline \end{tabular} \end{center} \end{table}
\begin{table}[h]\small \caption {\small The $L^2(L^2)$-norm of the error in Experiment~1.}\label{tab:convergencerateL2}
\begin{center}
\begin{tabular}{l|llll}
& $h=1/2$ &$h=1/4$ & $h=1/8$ & $h=1/16$ \\ \hline $\Delta t=1/8$&\bf 0.39351&0.192592 &0.319912 &0.691862 \\
$\Delta t=1/16$ &0.435067&\bf 0.16268& 0.057801&0.107322\\
$\Delta t=1/32$ &0.445765&0.172543&\bf 0.04013 &0.018707\\
$\Delta t=1/64$&0.448433& 0.175145&0.041875&\bf 0.01040\\ \hline \end{tabular} \end{center} \end{table}
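The convergence claim can be checked directly from the boldface diagonal entries of Tables~\ref{tab:convergencerateH1} and \ref{tab:convergencerateL2}, which correspond to simultaneous halving of $\Delta t$ and $h$; a small script (our illustration, using the values as printed in the tables):
\begin{verbatim}
import math

# boldface diagonal errors (Delta t and h halved together)
err_H1 = [0.96365, 0.74794, 0.37954, 0.19143]
err_L2 = [0.39351, 0.16268, 0.04013, 0.01040]

for name, err in [("L2(H1)", err_H1), ("L2(L2)", err_L2)]:
    orders = [math.log2(err[i] / err[i + 1]) for i in range(len(err) - 1)]
    print(name, ["%.2f" % p for p in orders])
# the observed orders approach 1 in the L2(H1) norm and 2 in the L2(L2) norm
\end{verbatim}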
\begin{table}[h]\small {\color{black}\caption{\small Averaged CPU times per each time step of the method in Experiment~1}\label{tab:computeTime}
\begin{center}
\begin{tabular}{c|cclll}
& Active d.o.f. & Extra d.o.f. & $T_{assemb}$ & $T_{solve}$ & $T_{ext}$ \\ \hline $h_0=1/2,~\Delta t_0=1/8$ &31 &8 &0.0038&0.0004&0.0012 \\ $h=h_0/2,~\Delta t=\Delta t_0/2$& 104 &24 &0.0160&0.0021&0.0041 \\ $h=h_0/4,~\Delta t=\Delta t_0/4$ &452&63 &0.0708&0.0087&0.0195\\ $h=h_0/8,~\Delta t=\Delta t_0/8$&1880&170 &0.3814&0.0374&0.0906\\ \hline \end{tabular} \end{center} } \end{table} {\color{black} Table~\ref{tab:computeTime} shows the breakdown of the computational costs of the method into the averaged CPU times for assembling stiffness matrices, solving the resulting linear algebraic systems, and performing the extension to $\widetilde{S}(\Gamma_h)\setminus{S}(\Gamma_h)$ by the FMM. Since the surface evolves, all the statistics slightly vary in time, and so the table shows averaged numbers per one time step. ``Active d.o.f.'' is the dimension of the linear algebraic system, i.e. the number of bulk finite element nodal values tailored to tetrahedra from $S(\Gamma_h)$. ``Extra d.o.f.'' is the number of mesh nodes in $\widetilde{S}(\Gamma_h)\setminus\overline{{S}(\Gamma_h)}$; these are the nodes where the extension is computed by the FMM. The averaged CPU times demonstrate optimal or close to optimal scaling with respect to the number of degrees of freedom. As is common for a finite element method, the most time consuming part is the assembling of the stiffness matrices. The costs of the FMM are modest compared to the assembling time, and $T_{solve}$ indicates that using a preconditioned Krylov subspace method is an efficient approach for solving the linear algebraic systems (no extra stabilizing terms were added to the FE formulation for improving its algebraic properties). }
\noindent{\bf Experiment~2.} The setup of this experiment is similar to the previous one. The transport velocity is given by $\mathbf{w}=(-2\pi x_2, 2\pi x_1,0)$. Initially, the sphere is set off the center of the domain: The initial data is given by \[
\Gamma(0):=\{\mathbf x\in\mathbb{R}^3~:~|\mathbf x-\mathbf x_0|=1\},\quad u|_{t=0}=1+(x_1-0.5)+x_2+x_3, \] with $\mathbf x_0=(0.5,0,0)$. Now $\mathbf w$ revolves the sphere around the center of the domain without changing its shape. One checks that the exact solution to \eqref{transport} is given by \[ u(\mathbf x,t)=(x_1(\cos(2\pi t)-\sin(2\pi t))+x_2(\cos(2\pi t)+\sin(2\pi t))+x_3+0.5 )\exp(-2t). \]
{ \begin{table}\small \caption{\small The $L^2(H^1)$-norm of the error in Experiment~2.}\label{tab:ErrorH1_Ex2}
\begin{center}
\begin{tabular}{l|llll}
& $h=1/2$ &$h=1/4$ & $h=1/8$ & $h=1/16$ \\ \hline {\color{black}$\Delta t=1/32$}& 0.978459 &1.931081&3.740840&4.048480\\ $\Delta t=1/64$&\bf 0.90425& 0.690963& 0.813820& 1.066030 \\
$\Delta t=1/128$ &0.901234 &\bf 0.64014 &0.348516 & 0.300654\\
$\Delta t=1/256$ &0.901443& 0.640055& \bf 0.32352& 0.171101\\
$\Delta t=1/512$&0.901631& 0.641018& 0.323199& \bf 0.16286\\ \hline \end{tabular} \end{center} \end{table} \begin{table}\small \caption{\small The $L^2(L^2)$-norm of the error in Experiment~2.}\label{tab:ErrorL2_Ex2}
\begin{center}
\begin{tabular}{l|llll}
& $h=1/2$ &$h=1/4$ & $h=1/8$ & $h=1/16$ \\ \hline {\color{black}$\Delta t=1/32$}& 0.294122 &0.548567&0.975485&0.958447\\ $\Delta t=1/64$&\bf 0.27244& 0.120061& 0.152520& 0.175920 \\
$\Delta t=1/128$ &0.279106&\bf 0.10451& 0.034962& 0.037085\\
$\Delta t=1/256$ & 0.279744 &0.105975& \bf 0.02699& 0.010692\\
$\Delta t=1/512$&0.279811& 0.106116& 0.026444&\bf 0.00736\\ \hline \end{tabular} \end{center} \end{table} }
Tables~\ref{tab:ErrorH1_Ex2} and \ref{tab:ErrorL2_Ex2} show the error norms for the Experiment~2 with various time steps $\Delta t$ and mesh sizes $h$. If one refines both $\Delta t$ and $h$, the first order of convergence in the surface $L^2(H^1)$-norm and the second order in the surface $L^2(L^2)$-norm are again observed.
{\color{black}Note that the transport velocity $\|\mathbf{w}\|_{\infty}\approx 9.42$ in this experiment scales differently compared to Experiment~1. Therefore, we consider smaller $\Delta t$ to obtain meaningful results.}
\noindent{\bf Experiment~3.} In this experiment, we consider a shrinking sphere and solve \eqref{transport} with a source term on the right-hand side. The bulk velocity field is given by $ \mathbf{w}={ -{\frac12 e^{-t/2}}}\mathbf{n}, $ where $\mathbf{n}$ is the unit outward normal on $\Gamma(t)$. $\Gamma(0)$ is the unit sphere. One computes ${\mathop{\rm div}}_\Gamma \mathbf w = -1$. The prescribed analytical solution $u(\mathbf x,t)=(1+x_1x_2x_3) e^{t}$ solves \eqref{transport} with the right-hand side $ f(\mathbf x,t)=(-1.5 e^{t}+{12}e^{2t})x_1x_2x_3. $
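To see why ${\mathop{\rm div}}_\Gamma\mathbf w=-1$ here (a short check, not in the original text): the radius satisfies $R'(t)=-\tfrac12 e^{-t/2}$ with $R(0)=1$, so $R(t)=e^{-t/2}$, and for a sphere of radius $R$ one has ${\mathop{\rm div}}_\Gamma\mathbf n=2/R$; since the normal speed $V=-\tfrac12 e^{-t/2}$ is constant along the surface,
\[
{\mathop{\rm div}}_\Gamma\mathbf w={\mathop{\rm div}}_\Gamma(V\mathbf n)=V\,{\mathop{\rm div}}_\Gamma\mathbf n
=-\tfrac12 e^{-t/2}\cdot\frac{2}{e^{-t/2}}=-1 .
\]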
{ \begin{table}\small \caption{\small The $L^2(H^1)$-norm of the error in Experiment~3.}\label{tab:ErrorH1_Ex3}
\begin{center}
\begin{tabular}{l|llll}
& $h=1/4$ &$h=1/8$ & $h=1/16$ & $h=1/32$ \\ \hline $\Delta t=1/16$&\bf 0.48893 &0.311146 & 0.170104 &0.088521 \\
$\Delta t=1/32$ &0.481896& \bf 0.30859 & 0.168635& 0.087013\\
$\Delta t=1/64$ &0.478675& 0.307416 &\bf 0.16801& 0.086513\\
$\Delta t=1/128$&0.477226& 0.306872& 0.167747&\bf 0.08634\\ \hline \end{tabular} \end{center} \end{table}
\begin{table}\small \caption{\small The $L^2(L^2)$-norm of the error in Experiment~3.}\label{tab:ErrorL2_Ex3}
\begin{center}
\begin{tabular}{l|llll}
& $h=1/4$ & $h=1/8$ & $h=1/16$ & $h=1/32$ \\ \hline $\Delta t=1/16$&\bf 0.12237& 0.0468812& 0.0224705& 0.0178009 \\
$\Delta t=1/32$ &0.116763& \bf 0.040745& 0.0130912& 0.0060213 \\
$\Delta t=1/64$ & 0.115589& 0.0396511&\bf 0.011517& 0.0035199\\
$\Delta t=1/128$&0.115336& 0.0394023& 0.0112094& \bf 0.003038\\ \hline \end{tabular} \end{center} \end{table} }
Tables~\ref{tab:ErrorH1_Ex3} and \ref{tab:ErrorL2_Ex3} show the error norms for various time steps $\Delta t$ and mesh sizes $h$. If one refines both $\Delta t$ and $h$, the first order of convergence in the surface $L^2(H^1)$-norm and the second order in the surface $L^2(L^2)$-norm is observed for the example of the shrinking sphere.
\noindent{\bf Experiment~4.} In this example, we consider a surface transport--diffusion problem as in \eqref{transport} on a more complex moving manifold. The initial manifold and concentration are given ({as in \cite{Dziuk88}}) by $ \Gamma(0)=\{ \, \mathbf x\in \mathbb{R}^3~:~ (x_1 -x_3^2)^2+x_2^2+x_3^2=1\, \}, \quad u_0(\mathbf x)=1+x_1 x_2 x_3. $ The velocity field that transports the surface is $$\mathbf{w}(\mathbf x,t)=\big(0.1x_1 \cos(t),0.2x_2 \sin(t),0.2x_3\cos(t)\big)^T.$$
\begin{figure}
\caption{Snapshots of the surface, surface mesh and the computed solution from Experiment~4.
}
\label{fig:Exampl4}
\end{figure}
\begin{figure}
\caption{Total mass evolution for the finite element solution in Experiment 4.}
\label{fig:mass}
\end{figure}
{\color{black}We run the computation until $T=6$}. In this example, the total mass $ M(t)=\int_{\Gamma(t)}u(\cdot,t) \, \mathrm{d}\mathbf{s} $ is conserved and equal to
$M(0)=|\Gamma(0)|\approx 13.6083.$
We check how well the discrete quantity $ M_h(t)=\int_{\Gamma_h(t)}u_h(\cdot, t) \, \mathrm{d}\mathbf{s} $ is conserved. In Figure~\ref{fig:mass} (left) we plot $M_h(t)$ for different mesh sizes $h$ and a fixed time step $\Delta t=0.01$. The error in the total mass at $t=T$ is equal to 1.8142, 0.4375, 0.1006 and 0.0239 (for mesh sizes as in Figure~\ref{fig:mass} (left)). In Figure~\ref{fig:mass} (right) we plot $M_h(t)$ for different time steps $\Delta t$ and a fixed mesh size $h=1/16$. The error in the total mass at $t=T$ is equal to 0.3996, 0.0785 and 0.0038 (for time steps as in Figure~\ref{fig:mass} (right)). The error in the mass conservation is consistent with the expected second order accuracy in time and space.
If one is interested in the exact mass conservation on the discrete level, then one may enforce $M_h(t_n)=M_h(0)$ as a side constraint in the finite element formulation \eqref{transportFDFE} with the help of a scalar Lagrange multiplier; see \cite{hansbo2016cut}. Here we used the error reduction in total mass as an \textit{indicator} of the convergence order of the method in the case when the exact solution is not available.
\noindent{\bf Experiment~5.} In this test problem from~\cite{GOReccomas}, one solves the transport--diffusion equation \eqref{transport} on an evolving surface $\Gamma(t)$ which undergoes a change of topology and experiences a local singularity. The computational domain is $\Omega=(-3,3)\times(-2,2)^2$, $t \in [0,1]$. The evolving surface is the zero level of the level set function $\phi$ defined as: \[
\phi(\mathbf x,t) = 1 - \frac{1}{\| \mathbf x -c_+(t)\|^3} - \frac{1}{\| \mathbf x -c_-(t)\|^3}, \]
with $c_\pm(t)= \pm\frac32(t - 1, 0 , 0)^T$, $t \in [0,1]$. For $t=0$ and $\mathbf x \in B(c_+(0);1)$ one has $\| \mathbf x -c_+(0)\|^{-3} =1$ and $\|\mathbf x -c_-(0)\|^{-3} \ll 1$. For $t=0$ and $ \mathbf x \in B(c_-(0);1)$ one has $\| \mathbf x -c_+(0)\|^{-3} \ll 1$ and $\| \mathbf x -c_-(0)\|^{-3} =1$. Hence, the initial configuration $\Gamma(0)$ is close to two balls of radius $1$, centered at $\pm(1.5, 0, 0)^T$. For $t=1$ the surface $\Gamma(1)$ is the ball around $0$ with radius $2^{1/3}$. For $t >0 $ the two spheres approach each other until time $\tilde t= 1-\tfrac23 2^{1/3}\approx 0.160$, when they touch at the origin. For $t \in (\tilde t,1]$ the surface $\Gamma(t)$ is simply connected and deforms into the sphere $\Gamma(1)$.
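For completeness, the collision time follows directly from the level set expression (a short check, not in the original text): by symmetry the two components first meet at the origin, where $|\,\mathbf 0-c_\pm(t)|=\tfrac32(1-t)$, and $\phi(\mathbf 0,t)=0$ gives
\[
1-\frac{2}{\big(\tfrac32(1-t)\big)^{3}}=0
\quad\Longleftrightarrow\quad
\tfrac32(1-\tilde t)=2^{1/3}
\quad\Longleftrightarrow\quad
\tilde t=1-\tfrac23\,2^{1/3}\approx 0.160 ;
\]
similarly, at $t=1$ both centers coincide with the origin, $\phi(\mathbf x,1)=1-2|\mathbf x|^{-3}$, and $\Gamma(1)$ is the sphere of radius $2^{1/3}$.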
\begin{figure}
\caption{
Snapshots of discrete solution in Experiment~5 with $h=1/16$, $\Delta t=1/128$.}
\label{fig:solutions}
\end{figure}
In the vicinity of $\Gamma(t)$, the gradient $\nabla\phi$ and the time derivative $\partial_t\phi$ are well-defined and given by simple algebraic expressions. The normal velocity field, which transports $\Gamma (t)$, can be computed (cf. \cite{GOReccomas}) to be \[
\mathbf w= -\frac{\partial_t\phi}{\lvert\nabla\phi\rvert^2}\nabla\phi. \] The initial value of $u$ is given by \[
u_0(\mathbf x)=\left\{\begin{array}{ll} 3-x_1 & \hbox{for }x_1\geq 0;\\ 0&\hbox{otherwise.}
\end{array}
\right. \]
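The velocity formula above is the standard level set identity (included here for completeness): differentiating $\phi(\mathbf x(t),t)=0$ along the trajectory of a surface point gives
\[
\partial_t\phi+\dot{\mathbf x}\cdot\nabla\phi=0,\qquad
\mathbf n=\frac{\nabla\phi}{|\nabla\phi|},\qquad
\dot{\mathbf x}\cdot\mathbf n=-\frac{\partial_t\phi}{|\nabla\phi|},
\]
so the purely normal velocity transporting $\Gamma(t)$ is $\mathbf w=(\dot{\mathbf x}\cdot\mathbf n)\,\mathbf n=-\partial_t\phi\,\nabla\phi/|\nabla\phi|^2$.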
\begin{figure}
\caption{The computed solution and surface meshes close to the time of collision in Experiment~5.}
\label{fig:surfaceMeshMerge}
\end{figure}
\begin{figure}
\caption{Total mass evolution for the finite element solution in Experiment~5.}
\label{fig:mass_merge}
\end{figure}
In Figure~\ref{fig:solutions} we show a few snapshots of the surface and the computed surface solution for the background tetrahedral mesh with $h=1/16$ and $\Delta t =1/128$. The surfaces $\Gamma_h^n$ close to the time of collision are illustrated in Figure~\ref{fig:surfaceMeshMerge}. The suggested variant of the trace FEM handles the geometrical singularity without any difficulty. {\color{black} It is clear that the computed extension $u^e$ in this experiment is not smooth in a neighborhood of the singularity, and so formal analysis of the consistency of the method is not directly applicable to this case. However, the closest point extension is well defined and numerical results suggest that this is sufficient for the method to be stable.} Similarly to the previous example, we compute the total discrete mass $M_h(t)$ on $\Gamma_h^n$. This can be used as a measure of accuracy. The evolution of $M_h(t)$ for varying $h$ and $\Delta t$ is shown in Figure~\ref{fig:mass_merge}. The convergence of this quantity is evident. Finally, we note that in this experiment we observed stable numerical behaviour of the method for all combinations of the mesh size and time step we tested, including $\Delta t = \frac18$, $h=\frac1{16}$ and $\Delta t = \frac1{128}$, $h=\frac1{4}$.
\section{Conclusions} We studied a new fully Eulerian unfitted finite element method for solving PDEs posed on evolving surfaces. The method combines three computational techniques in a modular way: a finite difference approximation in time, a finite element method on a stationary surface, and an extension of finite element functions from a surface to a neighborhood of the surface. All of these three computational techniques have been intensively studied in the literature, and so well established methods can be used. In this paper, we used the trace piecewise linear finite element method -- higher order variants of this method are also available in the literature~{\color{black} \cite{grande2016analysis}} -- for the spatial discretization and a variant of the fast marching method to construct a suitable extension. We observed stable and second order accurate numerical results in several experiments, including one with two colliding spheres. The rigorous error analysis of the method is a subject of further research.
{}
\end{document}
\begin{document}
\title{A uniform boundedness principle in pluripotential theory}
\author[Kosi\'nski]{\L ukasz Kosi\'nski} \address{Institute of Mathematics, Jagiellonian University, \L ojasiewicza 6, 30-348 Krak\'ow, Poland.} \email{[email protected]} \thanks{\L K supported by the Ideas Plus grant 0001/ID3/2014/63 of the Polish Ministry of Science and Higher Education. }
\author[Martel]{\'Etienne Martel} \address{D\'epartement d'informatique et de g\'enie logiciel, Universit\'e Laval, Qu\'ebec City (Qu\'ebec), Canada G1V 0A6} \email{[email protected]} \thanks{EM supported by an NSERC undergraduate student research award}
\author[Ransford]{Thomas Ransford} \address{D\'epartement de math\'ematiques et de statistique, Universit\'e Laval, Qu\'ebec City (Qu\'ebec), Canada G1V 0A6.} \email{[email protected]} \thanks{TR supported by grants from NSERC and the Canada research chairs program}
\date{7 August 2017}
\begin{abstract} For families of continuous plurisubharmonic functions we show that, in a local sense, separately bounded above implies bounded above. \end{abstract}
\maketitle
\section{The uniform boundedness principle}\label{S:ubp}
Let $\Omega$ be an open subset of ${\mathbb C}^N$. A function $u:\Omega\to[-\infty,\infty)$ is called \emph{plurisubharmonic} if: \begin{enumerate} \item $u$ is upper semicontinuous, and
\item $u|_{\Omega\cap L}$ is subharmonic, for each complex line $L$. \end{enumerate} For background information on plurisubharmonic functions, we refer to the book of Klimek \cite{Kl91}.
It is apparently an open problem whether in fact (2) implies (1) if $N\ge2$. In attacking this problem, we have repeatedly run up against an obstruction in the form of a uniform boundedness principle for plurisubharmonic functions. This principle, which we think is of interest in its own right, is the main subject of this note. Here is the formal statement.
\begin{theorem}[Uniform boundedness principle]\label{T:ubp} Let $D\subset{\mathbb C}^N$ and $G\subset{\mathbb C}^M$ be domains, where $N,M\ge1$, and let ${\cal U}$ be a family of continuous plurisubharmonic functions on $D\times G$. Suppose that: \begin{enumerate}[label=\normalfont(\roman*)] \item ${\cal U}$ is locally uniformly bounded above on $D\times\{w\}$, for each $w\in G$; \item ${\cal U}$ is locally uniformly bounded above on $\{z\}\times G$, for each $z\in D$. \end{enumerate} Then: \begin{enumerate}[resume, label=\normalfont(\roman*)] \item ${\cal U}$ is locally uniformly bounded above on $D\times G$. \end{enumerate} \end{theorem}
In other words, if there is an upper bound for ${\cal U}$ on each compact subset of $D\times G$ of the form $K\times\{w\}$ or $\{z\}\times L$, then there is an upper bound for ${\cal U}$ on every compact subset of $D\times G$. The point is that we have no \textit{a priori} quantitative information about these upper bounds, merely that they exist. In this respect, the result resembles the classical Banach--Steinhaus theorem from functional analysis.
The proof of Theorem~\ref{T:ubp} is based on two well-known but non-trivial results from several complex variables: the equivalence (under appropriate assumptions) of plurisubharmonic hulls and polynomial hulls, and Hartogs' theorem on separately holomorphic functions. The details of the proof are presented in \S\ref{S:pfubp}.
The Banach--Steinhaus theorem is usually stated as saying that a family of bounded linear operators on a Banach space $X$ that is pointwise-bounded on $X$ is automatically norm-bounded. There is a stronger version of the result in which one assumes merely that the operators are pointwise-bounded on a non-meagre subset $Y$ of $X$, but with the same conclusion. This sharper form leads to new applications (for example, a nice one in the theory of Fourier series can be found in \cite[Theorem~5.12]{Ru87}). Theorem~\ref{T:ubp} too possesses a sharper form, in which one of the conditions (i),(ii) is merely assumed to hold on a non-pluripolar set. This improved version of the theorem is the subject of \S\ref{S:ubpgen}.
We conclude the paper in \S\ref{S:appl} by considering applications of these results, and we also discuss the connection with the upper semicontinuity problem mentioned at the beginning of the section.
\section{Proof of the uniform boundedness principle}\label{S:pfubp}
We shall need two auxiliary results. The first one concerns hulls. Given a compact subset $K$ of ${\mathbb C}^N$, its \emph{polynomial hull} is defined by \[
\widehat{K}:=\{z\in{\mathbb C}^N:|p(z)|\le\sup_K|p|\text{~for every polynomial~$p$ on ${\mathbb C}^N$}\}. \] Further, given an open subset $\Omega$ of ${\mathbb C}^N$ containing $K$, the \emph{plurisubharmonic hull} of $K$ with respect to $\Omega$ is defined by \[ \widehat{K}_{\psh(\Omega)}:=\{z\in\Omega:u(z)\le\sup_Ku\text{~for every plurisubharmonic $u$ on $\Omega$}\}. \]
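As a simple illustration of these notions (a standard one-variable example, recorded here only for orientation), take $N=1$ and let $K$ be the unit circle $\{z\in{\mathbb C}:|z|=1\}$. By the maximum modulus principle, $|p(z_0)|\le\sup_K|p|$ for every polynomial $p$ and every $z_0$ in the closed unit disk, while the polynomial $p(z)=z$ shows that no point with $|z_0|>1$ lies in the hull. Hence
\[
\widehat{K}=\{z\in{\mathbb C}:|z|\le1\},
\]
so passing to the polynomial hull may fill in `holes' of $K$; this is why the condition $\widehat{K}\subset\Omega$ appearing below is a genuine restriction on $\Omega$.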
Since $|p|$ is plurisubharmonic on $\Omega$ for every polynomial $p$, it is evident that $\widehat{K}_{\psh(\Omega)}\subset \widehat{K}$. In the other direction, we have the following result.
\begin{lemma}[\protect{\cite[Corollary~5.3.5]{Kl91}}]\label{L:hulls} Let $K$ be a compact subset of ${\mathbb C}^N$ and let $\Omega$ be an open subset of ${\mathbb C}^N$ such that $\widehat{K}\subset\Omega$. Then $\widehat{K}_{\psh(\Omega)}=\widehat{K}$. \end{lemma}
The second result that we shall need is Hartogs' theorem \cite{Ha06} that separately holomorphic functions are holomorphic.
\begin{lemma}\label{L:Hartogs} Let $D\subset{\mathbb C}^N$ and $G\subset{\mathbb C}^M$ be domains, and let $f:D\times G\to{\mathbb C}$ be a function such that: \begin{itemize} \item $z\mapsto f(z,w)$ is holomorphic on $D$, for each $w\in G$; \item $w\mapsto f(z,w)$ is holomorphic on $G$, for each $z\in D$. \end{itemize} Then $f$ is holomorphic on $D\times G$. \end{lemma}
\begin{proof}[Proof of Theorem~\ref{T:ubp}] Suppose that the result is false. Then there exist sequences $(a_n)$ in $D\times G$ and $(u_n)$ in ${\cal U}$ such that $u_n(a_n)>n$ for all $n$ and $a_n\to a\in D\times G$. Let $P$ be a compact polydisk with centre $a$ such that $P\subset D\times G$. For each $n$, set \[ P_n:=\{\zeta\in P:u_n(\zeta)\le n\}. \] Then $P_n$ is compact, because the functions in ${\cal U}$ are assumed continuous. Further, since $P$ is convex, we have $\widehat{P_n}\subset P\subset D\times G$. By Lemma~\ref{L:hulls}, we have $\widehat{P_n}=\widehat{(P_n)}_{\psh(D\times G)}$. As $a_n$ clearly lies outside this plurisubharmonic hull, it follows that $a_n$ also lies outside the polynomial hull of $P_n$. Thus there exists a polynomial $q_n$
such that $\sup_{P_n}|q_n|<1$ and $|q_n(a_n)|>1$. Let $r_n$ be a polynomial vanishing at $a_1,\dots,a_{n-1}$ but not at $a_n$, and set $p_n:=q_n^mr_n$, where $m$ is chosen large enough so that \[
\sup_{P_n}|p_n|<2^{-n} \quad\text{and}\quad
|p_n(a_n)|>n+\sum_{k=1}^{n-1}|p_k(a_n)|. \]
Let us write $P=Q\times R$, where $Q,R$ are compact polydisks such that $Q\subset D$ and $R\subset G$. Then, for each $w\in R$, the family ${\cal U}$ is uniformly bounded above on $Q\times\{w\}$, so eventually $u_n\le n$ on $Q\times\{w\}$. For these $n$, we then have $Q\times\{w\}\subset P_n$ and hence $|p_n|\le 2^{-n}$ on $Q\times\{w\}$. Thus the series \[ f(z,w):=\sum_{n\ge1}p_n(z,w) \] converges uniformly on $Q\times\{w\}$. Likewise, it converges uniformly on $\{z\}\times R$, for each $z\in D$. We deduce that: \begin{itemize} \item $z\mapsto f(z,w)$ is holomorphic on $\inter(Q)$, for each $w\in \inter(R)$; \item $w\mapsto f(z,w)$ is holomorphic on $\inter(R)$, for each $z\in \inter(Q)$. \end{itemize} By Lemma~\ref{L:Hartogs}, $f$ is holomorphic on $\inter(P)$. On the other hand, for each $n$, our construction gives \[
|f(a_n)|
\ge|p_n(a_n)|-\sum_{k=1}^{n-1}|p_k(a_n)|-\sum_{k=n+1}^\infty| p_k(a_n)|>n. \] Since $a_n\to a$, it follows that $f$ is discontinuous at $a$, the central point of $P$. We thus have arrived at a contradiction, and the proof is complete. \end{proof}
One might wonder if Theorem~\ref{T:ubp} remains true if we drop one of the assumptions (i) or (ii). Here is a simple example to show that it does not. For each $n\ge1$, set \[
K_n:=\{z\in{\mathbb C}:|z|\le n,~1/n\le \arg(z)\le 2\pi\}, \]
and let $(z_n)$ be a sequence such that $z_n\in{\mathbb C}\setminus K_n$ for all $n$ and $z_n\to0$. By Runge's theorem, for each $n$ there exists a polynomial $p_n$ such that $\sup_{K_n}|p_n|\le 1$ and $|p_n(z_n)|>n$. The sequence $|p_n|$ is then pointwise bounded on ${\mathbb C}$, but not uniformly bounded in any neighborhood of $0$.
Thus, if we define $u_n(z,w):=|p_n(z)|$, then we obtain a sequence of continuous plurisubharmonic functions on ${\mathbb C}\times{\mathbb C}$ satisfying (ii) but not (iii).
Although we cannot drop (i) or (ii) altogether, it \emph{is} possible to weaken one of the conditions (i) or (ii) to hold merely on a set that is `not too small', and still obtain the conclusion (iii). This is the subject of the next section.
\section{A stronger form of the uniform boundedness principle}\label{S:ubpgen}
A subset $E$ of ${\mathbb C}^N$ is called \emph{pluripolar} if there exists a plurisubharmonic function $u$ on ${\mathbb C}^N$ such that $u=-\infty$ on $E$ but $u\not\equiv-\infty$ on ${\mathbb C}^N$. Pluripolar sets have Lebesgue measure zero, and a countable union of pluripolar sets is again pluripolar. For further background on pluripolar sets, we again refer to Klimek's book \cite{Kl91}.
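A standard example: the complex hyperplane $E:=\{z\in{\mathbb C}^N:z_1=0\}$ is pluripolar, since the function
\[
u(z):=\log|z_1|
\]
is plurisubharmonic on ${\mathbb C}^N$, equals $-\infty$ exactly on $E$, and is certainly not identically $-\infty$. In particular, pluripolar sets may be uncountable and unbounded, even though they are negligible for Lebesgue measure.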
In this section we establish the following generalization of Theorem~\ref{T:ubp}, in which we weaken one of the assumptions (i),(ii) to hold merely on a non-pluripolar set.
\begin{theorem}\label{T:ubpgen} Let $D\subset{\mathbb C}^N$ and $G\subset{\mathbb C}^M$ be domains, where $N,M\ge1$, and let ${\cal U}$ be a family of continuous plurisubharmonic functions on $D\times G$. Suppose that: \begin{enumerate}[label=\normalfont(\roman*)] \item ${\cal U}$ is locally uniformly bounded above on $D\times\{w\}$, for each $w\in G$; \item ${\cal U}$ is locally uniformly bounded above on $\{z\}\times G$, for each $z\in F$, \end{enumerate} where $F$ is a non-pluripolar subset of $D$. Then: \begin{enumerate}[resume, label=\normalfont(\roman*)] \item ${\cal U}$ is locally uniformly bounded above on $D\times G$. \end{enumerate} \end{theorem}
For the proof, we need the following generalization of Hartogs' theorem, due to Terada \cite{Te67,Te72}.
\begin{lemma}\label{L:Terada} Let $D\subset{\mathbb C}^N$ and $G\subset{\mathbb C}^M$ be domains, and let $f:D\times G\to{\mathbb C}$ be a function such that: \begin{itemize} \item $z\mapsto f(z,w)$ is holomorphic on $D$, for each $w\in G$; \item $w\mapsto f(z,w)$ is holomorphic on $G$, for each $z\in F$, \end{itemize} where $F$ is a non-pluripolar subset of $D$. Then $f$ is holomorphic on $D\times G$. \end{lemma}
\begin{proof}[Proof of Theorem~\ref{T:ubpgen}] We define two subsets $A,B$ of $D$ as follows. First, $z\in A$ if $w\mapsto\sup_{u\in{\cal U}}u(z,w)$ is locally bounded above on $G$. Second, $z\in B$ if there exists a neighborhood $V$ of $z$ in $D$ such that $(z,w)\mapsto\sup_{u\in{\cal U}}u(z,w)$ is locally bounded above on $V\times G$. Clearly $B$ is open in $D$ and $B\subset A$. Also $F\subset A$, so $A$ is non-pluripolar.
Let $z_0\in D\setminus B$. Then there exists $w_0\in G$ such that ${\cal U}$ is not uniformly bounded above on any neighborhood of $(z_0,w_0)$. The same argument as in the proof of Theorem~\ref{T:ubp} leads to the existence of a compact polydisk $P=Q\times R$ around $(z_0,w_0)$ and a function $f:Q\times R\to{\mathbb C}$ such that: \begin{itemize} \item $z\mapsto f(z,w)$ is holomorphic on $\inter(Q)$, for each $w\in\inter(R)$, \item $w\mapsto f(z,w)$ is holomorphic on $\inter(R)$, for each $z\in \inter(Q)\cap A$, \end{itemize} and at the same time $f$ is unbounded in each neighborhood of $(z_0,w_0)$. By Lemma~\ref{L:Terada}, this is possible only if $\inter(Q)\cap A$ is pluripolar.
To summarize what we have proved: if $z\in D$ and every neighborhood of $z$ meets $A$ in a non-pluripolar set, then $z\in B$.
We now conclude the proof with a connectedness argument. As $A$ is non-pluripolar, and a countable union of pluripolar sets is pluripolar, there exists $z_1\in D$ such that every neighborhood of $z_1$ meets $A$ in a non-pluripolar set, and consequently $z_1\in B$. Thus $B\ne\emptyset$. We have already remarked that $B$ is open in $D$. Finally, if $z\in D\setminus B$, then there is an open neighborhood $W$ of $z$ that meets $A$ in a pluripolar set, hence $B\cap W$ is both pluripolar and open, and consequently empty. This shows that $D\setminus B$ is open in $D$. As $D$ is connected, we conclude that $B=D$, which proves the theorem. \end{proof}
We end the section with some remarks concerning the sharpness of Theorem~\ref{T:ubpgen}.
Firstly, we cannot weaken both conditions (i) and (ii) simultaneously. Indeed, let ${\mathbb D}$ be the unit disk, and define a sequence $u_n:{\mathbb D}\times{\mathbb D}\to{\mathbb R}$ by \[
u_n(z,w):=n(|z+w|-3/2). \] Then \begin{itemize}
\item $z\mapsto\sup_n u_n(z,w)$ is bounded above on ${\mathbb D}$ for all $|w|\le1/2$,
\item $w\mapsto\sup_n u_n(z,w)$ is bounded above on ${\mathbb D}$ for all $|z|\le1/2$, \end{itemize} but the sequence $u_n(z,w)$ is not even pointwise bounded above at the point $(z,w):=(\frac{4}{5},\frac{4}{5})$.
Secondly, the condition in Theorem~\ref{T:ubpgen} that $F$ be non-pluripolar is sharp, at least for $F_\sigma$-sets. Indeed, let $F$ be an $F_\sigma$-pluripolar subset of $D$. Then there exists a plurisubharmonic function $v$ on ${\mathbb C}^N$ such that $v=-\infty$ on $F$ and $v(z_0)>-\infty$ for some $z_0\in D$. By convolving $v$ with suitable smoothing functions, we can construct a sequence $(v_n)$ of continuous plurisubharmonic functions on ${\mathbb C}^N$ decreasing to $v$ and such that the sets $\{v_n\le -n\}$ cover $F$. Let $(p_n)$ be a sequence of polynomials in one variable that is pointwise bounded in ${\mathbb C}$ but not uniformly bounded on any neighborhood of $0$ (such a sequence was constructed at the end of \S\ref{S:pfubp}). Choose positive integers $N_n$ such that
$\sup_{|w|\le n}|p_n(w)|\le N_n$, and define $u_n:D\times{\mathbb C}\to{\mathbb R}$ by \[
u_n(z,w):=v_{N_n}(z)+|p_n(w)|. \] Then \begin{itemize} \item $z\mapsto\sup_n u_n(z,w)$ is locally bounded above on $D$ for all $w\in{\mathbb C}$, \item $w\mapsto\sup_n u_n(z,w)$ is locally bounded above on ${\mathbb C}$ for all $z\in F$, \end{itemize} but $\sup_nu_n(z,w)$ is not bounded above on any neighborhood of $(z_0,0)$.
\section{Applications of the uniform boundedness principle}\label{S:appl}
Our first application is to null sequences of plurisubharmonic functions.
\begin{theorem}\label{T:null} Let $D\subset{\mathbb C}^N$ and $G\subset{\mathbb C}^M$ be domains, and let $(u_n)$ be a sequence of positive continuous plurisubharmonic functions on $D\times G$. Suppose that: \begin{itemize} \item $u_n(\cdot,w)\to0$ locally uniformly on $D$ as $n\to\infty$, for each $w\in G$, \item $u_n(z,\cdot)\to0$ locally uniformly on $G$ as $n\to\infty$, for each $z\in F$, \end{itemize} where $F\subset D$ is non-pluripolar. Then $u_n\to0$ locally uniformly on $D\times G$. \end{theorem}
\begin{proof} Let $a\in D\times G$. Choose $r>0$ such that $\overline{B}(a,2r)\subset D\times G$. Writing $m$ for Lebesgue measure on ${\mathbb C}^N\times {\mathbb C}^M$, we have \begin{align*} \sup_{\zeta\in\overline{B}(a,r)}u_n(\zeta) &\le \sup_{\zeta\in\overline{B}(a,r)}\frac{1}{m({B}(\zeta,r))}\int_{{B}(\zeta,r)}u_n\,dm\\ &\le \frac{1}{m({B}(0,r))}\int_{{B}(a,2r)}u_n\,dm. \end{align*} Clearly $u_n\to0$ pointwise on $B(a,2r)$. Also, by Theorem~\ref{T:ubpgen}, the sequence $(u_n)$ is uniformly bounded on $B(a,2r)$. By the dominated convergence theorem, it follows that $\int_{B(a,2r)}u_n\,dm\to0$ as $n\to\infty$. Hence $\sup_{\zeta\in\overline{B}(a,r)}u_n(\zeta)\to0$ as $n\to\infty$. \end{proof}
Our second application relates to the problem mentioned at the beginning of \S\ref{S:ubp}. Recall that $u:\Omega\to[-\infty,\infty)$ is plurisubharmonic if \begin{enumerate} \item $u$ is upper semicontinuous, and
\item $u|_{\Omega\cap L}$ is subharmonic, for each complex line $L$, \end{enumerate} and the problem is to determine whether in fact (2) implies (1). Here are some known partial results: \begin{itemize} \item[-] Lelong \cite{Le45} showed that (2) implies (1) if, in addition, $u$ is locally bounded above. \item[-] Arsove \cite{Ar66} generalized Lelong's result by showing that, if $u$ is separately subharmonic and locally bounded above, then $u$ is upper semicontinuous. (Separately subharmonic means that (2) holds just with lines $L$ parallel to the coordinate axes.) Further results along these lines were obtained in \cite{AG93,KT96,Ri14}. \item[-] Wiegerinck \cite{Wi88} gave an example of a separately subharmonic function that is not upper semicontinuous. Thus Arsove's result no longer holds without the assumption that $u$ be locally bounded above. \end{itemize}
In seeking an example to show that (2) does not imply (1), it is natural to try to emulate Wiegerinck's example, which was constructed as follows.
Let $K_n, z_n$ and $p_n$ be defined as in the example at the end of \S\ref{S:pfubp}. For each $n$ define $v_n(z):=\max\{|p_n(z)|-1,\,0\}$. Then $v_n$ is a subharmonic function, $v_n=0$ on $K_n$ and $v_n(z_n)>n-1$. Set \[ u(z,w):=\sum_kv_k(z)v_k(w). \] If $w\in{\mathbb C}$, then $w\in K_n$ for all large enough $n$, so $v_n(w)=0$. Thus, for each fixed $w\in{\mathbb C}$, the function $z\mapsto u(z,w)$ is a finite sum of subharmonic functions, hence subharmonic. Evidently, the same is true with roles of $z$ and $w$ reversed. Thus $u$ is separately subharmonic. On the other hand, for each $n$ we have \[ u(z_n,z_n)\ge v_n(z_n)v_n(z_n)>(n-1)^2, \] so $u$ is not bounded above on any neighborhood of $(0,0)$.
This example does not answer the question of whether (2) implies (1) because the summands $v_k(z)v_k(w)$ are not plurisubharmonic as functions of $(z,w)\in{\mathbb C}^2$. It is tempting to try to modify the construction by replacing $v_k(z)v_k(w)$ by a sequence of positive plurisubharmonic functions $v_k(z,w)$ such that the partial sums $\sum_{k=1}^nv_k$ are locally bounded above on each complex line, but not on any open neighborhood of $(0,0)$. However, Theorem~\ref{T:ubp} demonstrates immediately that this endeavor is doomed to failure, at least if we restrict ourselves to continuous plurisubharmonic functions.
This raises the following question, which, up till now, we have been unable to answer.
\begin{question}\label{Q:cts} Does Theorem~\ref{T:ubp} remain true without the assumption that the functions in ${\cal U}$ be continuous? \end{question}
This is of interest because of the following result.
\begin{theorem}
Assume that the answer to Question~\ref{Q:cts} is positive. Let $\Omega$ be an open subset of ${\mathbb C}^N$ and let $u:\Omega\to[-\infty,\infty)$ be a function such that $u|_{\Omega\cap L}$ is subharmonic for each complex line $L$. Define \[ s(z):=\sup\{v(z):v \text{~plurisubharmonic on~}\Omega,~v\le u\}. \] Then $s$ is plurisubharmonic on $\Omega$. \end{theorem}
\begin{proof} Let ${\cal U}$ be the family of plurisubharmonic functions $v$ on $\Omega$ such that $v\le u$. If the answer to Question~\ref{Q:cts} is positive, then ${\cal U}$ is locally uniformly bounded above on $\Omega$. Hence, by \cite[Theorem~2.9.14]{Kl91}, the upper semicontinuous regularization $s^*$ of $s$ is plurisubharmonic on $\Omega$, and, by \cite[Proposition~2.6.2]{Kl91}, $s^*=s$ Lebesgue-almost everywhere on $\Omega$. Fix $z\in\Omega$. Then there exists a complex line $L$ passing through $z$ such that $s^*=s$ almost everywhere on $\Omega\cap L$. Let $\mu_r$ be normalized Lebesgue measure on $B(z,r)\cap L$. Then \[ s^*(z)\le\int_{B(z,r)\cap L} s^*\,d\mu_r=\int_{B(z,r)\cap L} s\,d\mu_r\le\int_{B(z,r)\cap L} u\,d\mu_r. \] (Note that $u$ is Borel-measurable by \cite[Lemma~1]{Ar66}.)
Since $u|_{\Omega\cap L}$ is upper semicontinuous, we can let $r\to0^+$ to deduce that $s^*(z)\le u(z)$. Thus $s^*$ is itself a member of ${\cal U}$, so $s^*\le s$, and thus finally $s=s^*$ is plurisubharmonic on $\Omega$. \end{proof}
Of course, $s=u$ if and only if $u$ is itself plurisubharmonic. Maybe this could provide a way of attacking the problem of showing that $u$ is plurisubharmonic?
\end{document}
\begin{document}
\title{On the Additive Constant of the $k$-server Work Function Algorithm}
\author{ Yuval Emek\thanks{Tel Aviv University, Tel Aviv, 69978 Israel. E-mail: \texttt{[email protected]}. This work was partially done during this author's visit at LIAFA, CNRS and University Paris Diderot, supported by Action COST 295 DYNAMO.} \and Pierre Fraigniaud\thanks{CNRS and University Paris Diderot, France. Email: \texttt{[email protected]}. Additional support from the ANR project ALADDIN, by the INRIA project GANG, and by COST Action 295 DYNAMO.} \and Amos Korman\thanks{CNRS and University Paris Diderot, France. Email: \texttt{[email protected]}. Additional support from the ANR project ALADDIN, by the INRIA project GANG, and by COST Action 295 DYNAMO.} \and Adi Ros\'{e}n\thanks{CNRS and University of Paris 11, France. Email: \texttt{[email protected]}. Research partially supported by ANR projects AlgoQP and ALADDIN.} }
\date{}
\maketitle
\begin{abstract} We consider the Work Function Algorithm for the $k$-server problem \cite{CL91,KP95}. We show that if the Work Function Algorithm is $c$-competitive, then it is also {\em strictly} $(2c)$-competitive. As a consequence of \cite{KP95} this also shows that the Work Function Algorithm is strictly $(4k-2)$-competitive. \end{abstract}
\section{Introduction}
A (deterministic) online algorithm \Algorithm{} is said to be \emph{$c$-competitive} if for all finite request sequences $\rho$, it holds that $\Algorithm(\rho) \leq c\cdot OPT(\rho) +\beta$, where $\Algorithm(\rho)$ and $OPT(\rho)$ are the costs incurred by \Algorithm{} and the optimal algorithm, respectively, on $\rho$, and $\beta$ is a constant independent of $\rho$. When this condition holds for $\beta=0$, then \Algorithm{} is said to be \emph{strictly $c$-competitive}.
The $k$-server problem is one of the most extensively studied online problems (cf. \cite{BE}). To date, the best known competitive ratio for the $k$-server problem on general metric spaces is $2k-1$ \cite{KP95}, which is achieved by the Work Function Algorithm \cite{CL91}. A lower bound of $k$ for any metric space with at least \( k + 1 \) nodes is also known \cite{MMS90}. The question whether online algorithms are strictly competitive, and in particular if there is a {\em strictly} competitive $k$-server algorithm, is of interest for two reasons. First, as a purely theoretical question. Second, at times one attempts to build a competitive online algorithm by repeatedly applying another online algorithm as a subroutine. In that case, if the online algorithm applied as a subroutine is not strictly competitive, the resulting online algorithm may not be competitive at all due to the growth of the additive constant with the length of the request sequence.
In this paper we show that there exists a strictly competitive $k$-server algorithm for general metric spaces. In fact, we show that if the Work Function Algorithm is $c$-competitive, then it is also strictly $(2c)$-competitive. As a consequence of \cite{KP95}, we thus also show that the Work Function Algorithm is strictly $(4k-2)$-competitive.
\section{Preliminaries}
Let \( \Metric = (V, \Distance) \) be a metric space. We consider instances of the \(k\)-server problem on \(\Metric\), and when clear from the context, omit the mention of the metric space. At any given time, each server resides in some node \( v \in V \). A subset \( X \subseteq V \), \( |X| = k \), where the servers reside is called a \emph{configuration}. The \emph{distance} between two configurations \(X\) and \(Y\), denoted by \(\ConfigurationDistance(X, Y)\), is defined as the weight of a minimum weight matching between \(X\) and \(Y\). In every \emph{round}, a new \emph{request} \( r \in V \) is presented and should be \emph{served} by ensuring that a server resides on the request \(r\). The servers can move from node to node, and the movement of a server from node \(x\) to node \(y\) incurs a \emph{cost} of \(\Distance(x, y)\).
Fix some initial configuration \(A_0\) and some finite request sequence \(\rho\). The \emph{work function} \(\WorkFunction_{\rho}(X)\) of the configuration \(X\) with respect to \(\rho\) is the optimal cost of serving \(\rho\) starting in \(A_0\) and ending up in configuration \(X\). The collection of work function values \( \WorkFunction_{\rho}(\cdot) = \{ (X,
\WorkFunction_{\rho}(X)) \mid X \subseteq V, |X| = k \} \) is referred to as the \emph{work vector} of \(\rho\) (and initial configuration \(A_0\)).
A move of some server from node \(x\) to node \(y\) in round \(t\) is called \emph{forced} if a request was presented at \(y\) in round \(t\). (An empty move, in case that \( x = y \), is also considered to be forced.) An algorithm for the \(k\)-server problem is said to be \emph{lazy} if it only makes forced moves. Given some configuration \(X\), an offline algorithm for the \(k\)-server problem is said to be \emph{\(X\)-lazy} if in every round other than the last round, it only makes forced moves, while in the last round, it makes a forced move and it is also allowed to move servers to nodes in \(X\) from nodes not in \(X\). Since unforced moves can always be postponed, it follows that \(\WorkFunction_{\rho}(X)\) can be realized by an \(X\)-lazy (offline) algorithm for every choice of configuration \(X\).
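For orientation, we recall how work function values can be maintained; this standard dynamic-programming recurrence is recorded here only for the reader's convenience and is not used explicitly below. Writing \( X - x + r \) for the configuration \( (X \setminus \{x\}) \cup \{r\} \), one has, for every configuration \(X\),
\[
\WorkFunction_{\rho r}(X) = \min_{x \in X} \left\{ \WorkFunction_{\rho}(X - x + r) + \Distance(x, r) \right\},
\]
since some server must reside on \(r\) at the moment it is requested and unforced moves can always be postponed; when \( r \in X \), the choice \( x = r \) shows that the value is simply \( \WorkFunction_{\rho}(X) \). The Work Function Algorithm itself, when its current configuration is \(A\) and the request \(r\) arrives after the sequence \(\rho\), serves \(r\) by moving a server from a node \( x \in A \) that minimizes \( \WorkFunction_{\rho r}(A - x + r) + \Distance(x, r) \); see \cite{CL91,KP95} for the precise formulation.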
Given an initial configuration \(A_0\) and a request sequence \(\rho\), we denote the total cost paid by an online algorithm \Algorithm{} for serving \(\rho\) (in an online fashion) when it starts in \(A_0\) by \(\Algorithm(A_0, \rho)\). The optimal cost for serving \(\rho\) starting in \(A_0\) is denoted by \( \Optimal(A_0, \rho) = \min_{X}\{ \WorkFunction_{\rho}(X) \} \). The optimal cost for serving \(\rho\) starting in \(A_0\) and ending in configuration \(X\) is denoted by \( \Optimal(A_0, \rho, X) = \WorkFunction_{\rho}(X) \). (This seemingly redundant notation is found useful hereafter.)
\Comment{ Note that the number of servers used by the algorithm (either online or optimal) is implicitly cast in the above notation through the cardinality of the initial configuration \(A_0\). (This will be important when the number of servers is not explicitly stated.) }
Consider some metric space \(\Metric\). In the context of the \(k\)-server problem, an algorithm \Algorithm{} is said to be \emph{\(c\)-competitive} if for any initial configuration \(A_0\), and any finite request sequence \(\rho\), \( \Algorithm(A_0, \rho) \leq c \cdot \Optimal(A_0, \rho) + \beta \), where \(\beta\) may depend on the initial configuration \(A_0\), but not on the request sequence \(\rho\). \Algorithm{} is said to be \emph{strictly \(c\)-competitive} if it is \(c\)-competitive with additive constant \( \beta = 0 \), that is, if for any initial configuration \(A_0\) and any finite request sequence \(\rho\), \( \Algorithm(A_0, \rho) \leq c \cdot \Optimal(A_0, \rho) \). As common in other works, we assume that the online algorithm and the optimal algorithm have the same initial configuration.
\section{Strictly competitive analysis}
We prove the following theorem.
\begin{AvoidOverfullParagraph} \begin{theorem} \label{theorem:OmitAdditiveTermWFA} If the Work Function Algorithm is \(c\)-competitive, then it is also strictly \( (2 c) \)-competitive. \end{theorem} \end{AvoidOverfullParagraph}
In fact, we shall prove Theorem~\ref{theorem:OmitAdditiveTermWFA} for a (somewhat) larger class of \(k\)-server online algorithms, referred to as \emph{robust} algorithms (this class will be defined soon). We say that an online algorithm for the \(k\)-server problem is \emph{request-sequence-oblivious}, if for every initial configuration \(A_0\), request sequence \(\rho\), current configuration \(X\), and request \(r\), the action of the algorithm on \(r\) after it served \(\rho\) (starting in \(A_0\)) is fully determined by \(X\), \(r\), and the work vector \(\WorkFunction_{\rho}(\cdot)\). In other words, a request-sequence-oblivious online algorithm can replace the explicit knowledge of \(A_0\) and \(\rho\) with the knowledge of \(\WorkFunction_{\rho}(\cdot)\). An online algorithm is said to be \emph{robust} if it is lazy, request-sequence-oblivious, and its behavior does not change if one adds to all entries of the work vector any given value \(d\). We prove that if a robust algorithm is \(c\)-competitive, then it is also strictly \((2 c)\)-competitive. Theorem~\ref{theorem:OmitAdditiveTermWFA} follows as the work function algorithm is robust.
In what follows, we consider a robust online algorithm \Algorithm{} and a lazy optimal (offline) algorithm \Optimal{} for the \(k\)-server problem. (In some cases, \Optimal{} will be assumed to be \(X\)-lazy for some configuration \(X\). This will be explicitly stated.) We also consider some underlying metric \( \Metric = (V, \Distance) \) that we do not explicitly specify. Suppose that \Algorithm{} is \(\alpha\)-competitive and given the initial configuration \(A_0\), let \( \beta = \beta(A_0) \) be the additive constant in the performance guarantee.
Subsequently, we fix some arbitrary initial configuration \(A_0\) and request sequence \(\rho\). We have to prove that \( \Algorithm(A_0, \rho) \leq 2 \alpha \Optimal(A_0, \rho) \). A key ingredient in our proof is a designated request sequence \(\sigma\) referred to as the \emph{anchor} of \(A_0\) and \(\rho\). Let \( \ell = \min \{ \Distance(x, y) \mid x, y \in A_0, x \neq y \} \). Given that \( A_0 = \{ x_1, \dots, x_k \} \), the anchor is defined to be \[ \sigma = (x_1 \cdots x_k)^m \text{, where } m = \left\lceil \max\left\{ \frac{2 k \Optimal(A_0, \rho)}{\ell} + k^2, \frac{2 \alpha \Optimal(A_0, \rho) + \beta(A_0)}{\ell} \right\} \right\rceil + 1 ~ . \] That is, the anchor consists of \(m\) \emph{cycles} of requests presented at the nodes of \(A_0\) in a round-robin fashion.
Informally, we shall append \(\sigma\) to \(\rho\) in order to ensure that both \Algorithm{} and \Optimal{} return to the initial configuration \(A_0\). This will allow us to analyze request sequences of the form \( (\rho \sigma)^q \) as \(q\) disjoint executions on the request sequence \( \rho \sigma \), thus preventing any possibility to ``hide'' an additive constant in the performance guarantee of \(\Algorithm(A_0, \rho)\). Before we can analyze this phenomenon, we have to establish some preliminary properties.
\begin{AvoidOverfullParagraph} \begin{proposition} \label{proposition:RetracingCost} For every initial configuration \(A_0\) and request sequence \(\rho\), we have \( \Optimal(A_0, \rho, A_0) \leq 2 \cdot \Optimal(A_0, \rho) \). \end{proposition} \end{AvoidOverfullParagraph} \begin{proof} Consider an execution \(\eta\) that (i) starts in configuration \(A_0\); (ii) serves \(\rho\) optimally; and (iii) moves (optimally) to configuration \(A_0\) at the end of round
\(|\rho|\). The cost of step (iii) cannot exceed that of step (ii) as we can always retrace the moves \(\eta\) did in step (ii) back to the initial configuration \(A_0\). The assertion follows since \(\eta\) is a candidate to realize \(\Optimal(A_0, \rho, A_0)\). \end{proof}
Since no moves are needed in order to serve the anchor \(\sigma\) from configuration \(A_0\), it follows that \begin{equation} \Optimal(A_0, \rho) \leq \Optimal(A_0, \rho \sigma) \leq 2 \cdot \Optimal(A_0, \rho) ~ . \label{equation:TwiceTheOptimal} \end{equation} Proposition~\ref{proposition:RetracingCost} is also employed to establish the following lemma.
\begin{AvoidOverfullParagraph} \begin{lemma} \label{lemma:OfflineVisitingInitialConfiguration} Given some configuration \(X\), consider an \(X\)-lazy execution
\(\eta\) that realizes \(\Optimal(A_0, \rho \sigma, X)\). Then \(\eta\) must be in configuration \(A_0\) at the end of round \(t\) for some \( |\rho| \leq t < |\rho \sigma| \). \end{lemma} \end{AvoidOverfullParagraph} \begin{proof}
Assume by way of contradiction that \(\eta\)'s configuration at the end of round \(t\) differs from \(A_0\) for every \( |\rho| \leq t < |\rho \sigma| \). The cost \( \Optimal(A_0, \rho \sigma, X) \) paid by \(\eta\) is at most \( 2 \cdot \Optimal(A_0, \rho) + \ConfigurationDistance(A_0, X) \) as Proposition~\ref{proposition:RetracingCost} guarantees that this is the total cost paid by an execution that (i) realizes \(\Optimal(A_0, \rho, A_0)\);
(ii) stays in configuration \(A_0\) until (including) round \( |\rho \sigma| \); and (iii) moves (optimally) to configuration \(X\).
Let \(Y\) be the configuration of \(\eta\) at the end of round \(|\rho|\). We can rewrite the total cost paid by \(\eta\) as \( \Optimal(A_0, \rho \sigma, X) = \Optimal(A_0, \rho, Y) + \Optimal(Y, \sigma, X) \). Clearly, the former term \(\Optimal(A_0, \rho, Y)\) is not smaller than
\(\ConfigurationDistance(A_0, Y)\) which lower bounds the cost paid by any execution that starts in configuration \(A_0\) and ends in configuration \(Y\). We will soon prove (under the assumption that \(\eta\)'s configuration at the end of round \(t\) differs from \(A_0\) for every \( |\rho| \leq t < |\rho
\sigma| \)) that the latter term \(\Optimal(Y, \sigma, X)\) is (strictly) greater than \( 2 \cdot\Optimal(A_0, \rho) + \ConfigurationDistance(Y, X) \). Therefore \( \ConfigurationDistance(A_0, Y) + 2 \cdot \Optimal(A_0, \rho) + \ConfigurationDistance(Y, X) < \Optimal(A_0, \rho, Y) + \Optimal(Y, \sigma, X) = \Optimal(A_0, \rho \sigma, X) \). The inequality \( \Optimal(A_0, \rho \sigma, X) \leq 2 \cdot \Optimal(A_0, \rho) + \ConfigurationDistance(A_0, X) \) then implies that \( \ConfigurationDistance(A_0, X) > \ConfigurationDistance(A_0, Y) + \ConfigurationDistance(Y, X) \), in contradiction to the triangle inequality.
It remains to prove that \( \Optimal(Y, \sigma, X) > 2 \cdot \Optimal(A_0, \rho) + \ConfigurationDistance(Y, X) \). For that purpose, we consider the suffix \(\phi\) of \(\eta\) which corresponds to the execution on the subsequence \(\sigma\) (\(\phi\) is an \(X\)-lazy execution that realizes \(\Optimal(Y, \sigma, X)\)). Clearly, \(\phi\) must shift from configuration \(Y\) to configuration \(X\), paying cost of at least \(\ConfigurationDistance(Y, X)\). Moreover, since \(\phi\) is \(X\)-lazy, and by the assumption that \(\phi\) does not reside in configuration \(A_0\), it follows that in each of the \(m\) cycles of the round-robin,
at least one server must move between two different nodes in \(A_0\). (To see this, recall that each server's move of the lazy execution ends up in a node of \(A_0\). On the other hand, all \(k\) servers never reside in configuration \(A_0\).) Thus \(\phi\) pays a cost of at least \(\ell\) per cycle, and \( m \ell \) altogether. A portion of this \( m \ell \) cost can be charged on the shift from configuration \(Y\) to configuration \(X\), but we show that the remaining cost is strictly greater than \( 2 \cdot \Optimal(A_0, \rho) \), thus deriving the desired inequality \( \Optimal(Y, \sigma, X) > 2 \cdot \Optimal(A_0, \rho) + \ConfigurationDistance(Y, X) \).
The \(k\) servers make at least \(m\) moves between two different nodes in \(A_0\) when \(\phi\) serves the subsequence \(\sigma\), hence there exists some server \(s\) that makes at least \( m / k \) such moves as part of \(\phi\). The total cost paid by all other servers in \(\phi\) is bounded from below by their contribution to \(\ConfigurationDistance(Y, X)\). As there are \(k\) nodes in \(A_0\), at most \(k\) out of the \( m / k \) moves made by \(s\) arrive at a new node, i.e., a node which was not previously reached by \(s\) in \(\phi\). Therefore at least \( m / k - k \) moves of \(s\) cannot be charged on its shift from \(Y\) to \(X\). It follows that the cost paid by \(s\) in \(\phi\) is at least \( (m / k - k) \ell \) plus the contribution of \(s\) to \(\ConfigurationDistance(Y, X)\). The assertion now follows by the definition of \(m\), since \( (m / k - k) \ell > 2 \cdot \Optimal(A_0, \rho) \). \end{proof}
Since the optimal algorithm \Optimal{} is assumed to be lazy, Lemma~\ref{lemma:OfflineVisitingInitialConfiguration} implies the following corollary.
\begin{corollary} \label{corollary:OfflineEndingInitialConfiguration} If the optimal algorithm \Optimal{} serves a request sequence of the form \(
\rho \sigma \tau \) (for any choice of suffix \(\tau\)) starting from the initial configuration \(A_0\), then at the end of round \( |\rho \sigma| \) it must be in configuration \(A_0\). \end{corollary}
Consider an arbitrary configuration \(X\). We want to prove that \( \WorkFunction_{\rho \sigma}(X) \geq \WorkFunction_{\rho \sigma}(A_0) + \ConfigurationDistance(A_0, X) \). To this end, assume by way of contradiction that \( \WorkFunction_{\rho \sigma}(X) < \WorkFunction_{\rho \sigma}(A_0) + \ConfigurationDistance(A_0, X) \). Fix \( \WorkFunction_0 = \WorkFunction_{\rho \sigma}(A_0) \). Lemma~\ref{lemma:OfflineVisitingInitialConfiguration} guarantees that an \(X\)-lazy execution \(\eta\) that realizes \( \WorkFunction_{\rho \sigma}(X) = \Optimal(A_0, \rho \sigma, X) \) must be in configuration \(A_0\)
at the end of some round \( |\rho| \leq t < |\rho \sigma| \). Let \(\WorkFunction_t\) be the cost paid by \(\eta\) up to the end of round \(t\). The cost paid by \(\eta\) in order to move from \(A_0\) to \(X\) is at least \(\ConfigurationDistance(A_0, X)\), hence \( \WorkFunction_{\rho \sigma}(X) \geq \WorkFunction_t + \ConfigurationDistance(A_0, X) \). Therefore \(\WorkFunction_t < \WorkFunction_0\), which yields a contradiction: the execution that follows \(\eta\) up to the end of round \(t\) and then stays in \(A_0\) until it completes serving \(\sigma\) pays no further cost, so it serves \( \rho \sigma \) ending in \(A_0\) at cost \(\WorkFunction_t < \WorkFunction_0\), contradicting the definition of \(\WorkFunction_0\). As \( \WorkFunction_{\rho \sigma}(X) \leq \WorkFunction_{\rho \sigma}(A_0) + \ConfigurationDistance(A_0, X) \), we can establish the following corollary.
\begin{corollary} \label{corollary:WorkFunctionEnding} For every configuration \(X\), we have \( \WorkFunction_{\rho \sigma}(X) = \WorkFunction_{\rho \sigma}(A_0) + \ConfigurationDistance(A_0, X) \). \end{corollary}
Recall that we have fixed the initial configuration \(A_0\) and the request sequence \(\rho\) and that \(\sigma\) is their anchor. We now turn to analyze the request sequence \( \chi = (\rho \sigma)^{q} \), where \(q\) is a sufficiently large integer that will be determined soon. Corollary~\ref{corollary:OfflineEndingInitialConfiguration} guarantees that \Optimal{} is in the initial configuration \(A_0\) at the end of round \(
|\rho \sigma| \). By induction on \(i\), it follows that \Optimal{} is in \(A_0\) at the end of round \( i \cdot |\rho \sigma| \) for every \( 1 \leq i \leq q \). Therefore the total cost paid by \Optimal{} on \(\chi\) is merely \begin{equation} \Optimal(A_0, \chi) = q \cdot \Optimal(A_0, \rho \sigma) ~ . \label{equation:OfflineRepetition} \end{equation}
Suppose by way of contradiction that the online algorithm
\Algorithm{}, when invoked on the request sequence \( \rho \sigma \) from initial configuration \(A_0\), does not end up in \(A_0\). Since \Algorithm{} is lazy, we conclude that \Algorithm{} is not in configuration \(A_0\) at the end of round \(t\) for any \( |\rho| \leq t <
|\rho \sigma| \). Therefore in each cycle of the round-robin, \Algorithm{} moves at least once between two different nodes in \(A_0\), paying cost of at least \(\ell\). By the definition of \(m\) (the number of cycles), this sums up to \( \Algorithm(A_0, \rho \sigma) \geq m \ell > 2 \alpha \Optimal(A_0, \rho) + \beta(A_0) \). By inequality~(\ref{equation:TwiceTheOptimal}), we conclude that \( \Algorithm(A_0, \rho \sigma) > \alpha \Optimal(A_0, \rho \sigma) + \beta(A_0) \), in contradiction to the performance guarantee of \Algorithm{}. It follows that \Algorithm{} returns to the initial configuration \(A_0\) after serving the request sequence \( \rho \sigma \).
Consider some two request sequences \(\tau\) and \(\tau'\). We say that the work vector \(\WorkFunction_{\tau}(\cdot)\) is \emph{\(d\)-equivalent} to the work vector \(\WorkFunction_{\tau'}(\cdot)\), where \(d\) is some real, if \( \WorkFunction_{\tau}(X) -
\WorkFunction_{\tau'}(X) = d \) for every \( X \subseteq V \), \( |X| = k \). It is easy to verify that if \(\WorkFunction_{\tau}(\cdot)\) is \(d\)-equivalent to \(\WorkFunction_{\tau'}(\cdot)\), then \( \WorkFunction_{\tau r}(\cdot) \) is \(d\)-equivalent to \( \WorkFunction_{\tau' r}(\cdot) \) for any choice of request \( r \in V \). Corollary~\ref{corollary:WorkFunctionEnding} guarantees that the work vector \(\WorkFunction_{\rho \sigma}(\cdot)\) is \(d\)-equivalent to the work vector \(\WorkFunction_{\omega}(\cdot)\) for some real \(d\), where \(\omega\) stands for the empty request sequence. (In fact, \(d\) is exactly \( \WorkFunction_{\rho \sigma}(A_0) \).) By induction on \(j\), we show that for every prefix \(\pi\) of \( \rho \sigma
\) and for every \( 1 \leq i < q \) such that \( |(\rho \sigma)^i \pi| = j \), the work vector \( \WorkFunction_{(\rho \sigma)^i \pi}(\cdot) \) is \(d\)-equivalent to the work vector \(\WorkFunction_{\pi}(\cdot)\) for some real \(d\). Therefore the behavior of the robust online algorithm \Algorithm{} on \(\chi\) is merely a repetition (\(q\) times) of its behavior on \( \rho \sigma \) and \begin{equation} \Algorithm(A_0, \chi) = q \cdot \Algorithm(A_0, \rho \sigma) ~ . \label{equation:OnlineRepetition} \end{equation}
We are now ready to establish the following inequality: \begin{align*} \Algorithm(A_0, \rho) & \leq ~ \Algorithm(A_0, \rho \sigma) \\ & = ~ \frac{\Algorithm(A_0, \chi)}{q} \quad \text{by equation~(\ref{equation:OnlineRepetition})} \\ & \leq ~ \frac{\alpha \Optimal(A_0, \chi) + \beta(A_0)}{q} \quad \text{by the performance guarantee of \Algorithm{}} \\ & = ~ \frac{\alpha q \Optimal(A_0, \rho \sigma) + \beta(A_0)}{q} \quad \text{by equation~(\ref{equation:OfflineRepetition})} \\ & \leq ~ \frac{2 \alpha q \Optimal(A_0, \rho) + \beta(A_0)}{q} \quad \text{by inequality~(\ref{equation:TwiceTheOptimal})} \\ & = ~ 2 \alpha \Optimal(A_0, \rho) + \frac{\beta(A_0)}{q} ~ . \end{align*} For any real \( \epsilon > 0 \), we can fix \( q = \lceil \beta(A_0) / \epsilon \rceil + 1 \) and conclude that \( \Algorithm(A_0, \rho) < 2 \alpha \Optimal(A_0, \rho) + \epsilon \). Theorem~\ref{theorem:OmitAdditiveTermWFA} follows.
As the Work Function Algorithm is known to be \( (2 k - 1) \)-competitive \cite{KP95}, we also get the following corollary.
\begin{corollary}\label{corollary:strictly} The Work Function Algorithm is strictly \((4k-2)\)-competitive. \end{corollary}
\paragraph{Acknowledgments} We thank Elias Koutsoupias for useful discussions.
\end{document}
\begin{document}
\title{Quantum Cloning Machines and the Applications}
\author{Heng Fan} \email{[email protected]} \affiliation{Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China} \affiliation{Collaborative Innovation Center of Quantum Matter, Beijing 100190, China}
\author{Yi-Nan Wang}
\author{Li Jing}
\affiliation{School of Physics, Peking University, Beijing 100871, China}
\author{Jie-Dong Yue} \affiliation{Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China}
\author{Han-Duo Shi}
\author{Yong-Liang Zhang}
\author{Liang-Zhu Mu}
\affiliation{School of Physics, Peking University, Beijing 100871, China}
\date{\today}
\begin{abstract} The no-cloning theorem, which states that an unknown quantum state cannot be cloned perfectly, is fundamental for quantum mechanics and for quantum information science. However, we can try to clone a quantum state approximately with the optimal fidelity, or instead, we can try to clone it perfectly with the largest possible probability. Thus various quantum cloning machines have been designed for different quantum information protocols. Specifically, quantum cloning machines can be designed to analyze the security of quantum key distribution protocols such as the BB84 protocol, the six-state protocol, the B92 protocol and their generalizations. Some well-known quantum cloning machines include the universal quantum cloning machine, the phase-covariant cloning machine, the asymmetric quantum cloning machine and the probabilistic quantum cloning machine. In the past years, much progress has been made in studying quantum cloning machines and their applications and implementations, both theoretically and experimentally. In this review, we give a complete description of those important developments in quantum cloning and some related topics. On the other hand, this review is self-contained, and in particular, we try to present some detailed formulations so that further study can be based directly on those results. \end{abstract}
\pacs{03.67.Ac, 03.65.Aa, 03.67.Dd, 03.65.Ta}
\maketitle
\tableofcontents
\section{Introduction} In the past years, the study of quantum computation and quantum information has attracted much attention from various research communities. Quantum information processing (QIP) is based on principles of quantum mechanics \cite{Nielsen2000}. It promises algorithms which may surpass their classical counterparts. One of those algorithms is Shor's algorithm \cite{Shor94}, which can factorize large numbers exponentially faster than the existing classical algorithms do \cite{Ekert-JozsaRMP96}. In this sense, the RSA public key cryptosystem \cite{RSA78}, widely used in modern financial systems and networks, might be attacked easily if a quantum computer exists, since the security of the RSA system is based on the assumption that it is extremely difficult to factorize a large number. On the other hand, QIP provides unconditionally secure quantum cryptography based on a principle of quantum mechanics, the no-cloning theorem \cite{Wootters1982}, which states that an unknown quantum state cannot be cloned perfectly.
For comparison, in classical information science we use a bit, which is either ``0'' or ``1'', to carry information, while in quantum information a bit of quantum information, named a ``qubit'', is encoded in a quantum state which may be a superposition of the states $|0\rangle $ and $|1\rangle $. For example, a general qubit takes the form $\alpha |0\rangle +\beta |1\rangle $, where the parameters $\alpha $ and $\beta $ are complex numbers according to quantum mechanics and are normalized as
$|\alpha |^2+|\beta |^2=1$. So a qubit collapses to either $|0\rangle $ or $|1\rangle $ with some probability if a measurement is performed. Classical information can be copied perfectly: we can copy a file in a computer without any restriction in principle. On the contrary, QIP is based on the principles of quantum mechanics, which are linear, and thus an arbitrary quantum state cannot be cloned perfectly because of the no-cloning theorem. We generally use the terminology ``clone'' instead of ``copy'' in view of the no-cloning theorem of \cite{Wootters1982}. No-cloning, however, is not the end of the story.
Although a perfect quantum clone is prohibited, it is still possible to copy a quantum state approximately or probabilistically. Various quantum protocols in QIP may use quantum cloning as a tool for different goals. Thus various quantum cloning machines have been created both theoretically and experimentally. The study of quantum cloning is of fundamental interest in QIP. Additionally, quantum cloning machines can also be applied directly in various quantum key distribution (QKD) protocols. The first quantum key distribution protocol, proposed by Bennett and Brassard in 1984 (BB84), uses four different qubit states, the BB84 states \cite{Bennett1984}, to encode classical information in transmission. Correspondingly, the phase-covariant quantum cloning machine, which can copy optimally all qubits located on the equator of the Bloch sphere, is proved to be optimal for cloning states like the BB84 states. The BB84 protocol can be extended to the six-state protocol; the corresponding cloning machine is the universal quantum cloning machine, which can copy arbitrary qubits optimally. Similarly, the probabilistic quantum cloning machine corresponds to the B92 QKD protocol \cite{PhysRevLett.68.3121}. Quantum cloning is also related to some fundamentals of quantum information science; for example, the no-cloning theorem is closely related to the no-signaling theorem, which states that superluminal communication is forbidden. We can also use quantum cloning machines for estimating a quantum state or the phase information of a quantum state. So the study of quantum cloning is of interest both for fundamental reasons and for practical applications.
Previous well-accepted reviews of quantum cloning can be found in \cite{RevModPhys.77.1225} and in \cite{Cerf2006455}. Quantum cloning, like other topics of quantum information, has developed very fast in the past years, so an up-to-date review is necessary. In the present review, we plan to give a full description of results about quantum cloning and some closely related topics. This review is self-contained and some fundamental knowledge is also introduced. In particular, a main characteristic of this review is that it contains a large number of detailed formulations for the main review topics. It is thus easy for beginners to follow those calculations for further study of quantum cloning.
The review is organized as follows: In the next part of this section, we will present in detail some fundamental concepts of quantum computation and quantum information, including the form of a qubit represented on the Bloch sphere, the definition of an entangled state, and some principles of quantum mechanics used in the review. Then we will present in detail the developments of quantum cloning. Here let us introduce briefly some results contained in this review. In Section II, we will review several proofs of the no-cloning theorem from different points of view, including a simple presentation, no-cloning for mixed states, the relationship between the no-cloning and no-signaling theorems for quantum states, and no-cloning from information-theoretic viewpoints. In Section III, we will review the universal quantum cloning machine. We will present the universal quantum cloning machine for qubits and qudits, including both the symmetric and asymmetric cases. We will then present a unified quantum cloning machine which can be easily reduced to several universal cloning machines. We will also show some schemes for cloning of mixed states. We will show that the universal quantum cloning machine, which by definition can copy an arbitrary input state, is necessary for the six-state input used in QKD. Further, the universal cloning machine is necessary for a four-state input, which is the minimal input set. In Section V, the phase-covariant quantum cloning machines will be presented. One important application of this cloning machine is to study the well-known BB84 quantum cryptography. The phase-covariant quantum cloning machines include the cases of qubits and of higher dimensions. In particular, a unified phase-covariant quantum cloning machine will be presented which can be adjusted for an arbitrary subset of the mutually unbiased bases. We will also show that the minimal input for phase-covariant quantum cloning is a set of three states with equal phase distances on the equator of the Bloch sphere. Phase-covariant quantum cloning is actually state-dependent; we will thus also present some other cases of state-dependent quantum cloning.
In Section IX, we present some detailed results on sequential quantum cloning. We expect that further exploration will follow from those results.
In order to give a full view of all developments in quantum cloning and some closely related topics, we try to review some references briefly in one or two sentences. Those parts are generally named `other developments and related topics'. Our aim is to cover these developments in quantum cloning as fully as possible, but we understand that some important references might still be missing from this review.
\subsection{Quantum information, qubit and quantum entanglement}
Consider a quantum system with two states $|0\rangle $ and $|1\rangle $. They are orthogonal, \begin{eqnarray}
\langle 0|1\rangle =0. \end{eqnarray}
Those two states can be energy levels of an atom, photon polarizations, electron spins, two intrinsic degrees of freedom of a Bose-Einstein condensate, or any physical system with quantum properties. In this review, we also use the other standard notations $|0\rangle =|\uparrow \rangle ,|1\rangle =|\downarrow\rangle $ and switch between them without further mention. Simply, those two states can be represented as two vectors in linear algebra, \begin{eqnarray}
|0\rangle =\left( \begin{array}{c}1\\ 0 \end{array}\right) ,~~~~ |1\rangle =\left( \begin{array}{c}0\\ 1 \end{array}\right). \end{eqnarray} Corresponding to bit in classical information science, a qubit in quantum information science is a superposition of two orthogonal states, \begin{eqnarray}
|\psi \rangle =\alpha |0\rangle +\beta |1\rangle , \end{eqnarray} where a normalization equation should be satisfied, \begin{eqnarray}
|\alpha |^2+|\beta |^2=1. \end{eqnarray}
Here both $\alpha $ and $\beta $ are complex parameters which include amplitude and phase information, $\alpha =|\alpha |e^{i\phi _{\alpha }}$ and
$\beta =|\beta |e^{i\phi _{\beta }}$. So a qubit $|\psi \rangle $ is defined on a two-dimensional Hilbert space $\mathbb{C}^2$. In quantum mechanics, a global phase cannot be detected and thus can be omitted; only the relative phase between $\beta $ and $\alpha $ is important, which is $\phi =\phi _{\beta }-\phi _{\alpha }$. Now we can find that a qubit can be represented in another form, \begin{eqnarray}
|\psi \rangle =\cos \frac {\theta }{2}|0\rangle +\sin \frac {\theta }{2}e^{i\phi }|1\rangle , \end{eqnarray} where $\theta \in [0,\pi ], \phi \in [0, 2\pi )$. It corresponds to a point on the Bloch sphere, see FIG. \ref{IBlochsphere}.
\begin{figure}
\caption{(color online). A qubit in Bloch sphere, $|\psi \rangle
=\cos \frac {\theta }{2}|0\rangle +\sin \frac {\theta }{2}e^{i\phi }|1\rangle $, it contains amplitude parameter $\theta $ and phase parameter $\phi $. }
\label{IBlochsphere}
\end{figure}
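For example, as one can read off from the parametrization above, the state $\frac {1}{\sqrt {2}}(|0\rangle +|1\rangle )$ corresponds to $\theta =\pi /2$ and $\phi =0$, a point on the equator of the Bloch sphere, while $|0\rangle $ and $|1\rangle $ correspond to the north pole ($\theta =0$) and the south pole ($\theta =\pi $), respectively. The equatorial states are exactly those with equal amplitudes on $|0\rangle $ and $|1\rangle $,
\begin{eqnarray}
|\psi \rangle =\frac {1}{\sqrt {2}}\left(|0\rangle +e^{i\phi }|1\rangle \right),
\end{eqnarray}
and they will reappear later in the context of phase-covariant cloning.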
Two qubits in separable form can be written as, \begin{eqnarray}
|\psi \rangle |\phi \rangle &=&(\alpha |0\rangle +\beta |1\rangle )(\gamma |0\rangle +\delta |1\rangle ) \nonumber \\
&=&\alpha \gamma |00\rangle +\alpha \delta |01\rangle +\beta \gamma |10\rangle +\beta \delta |11\rangle . \end{eqnarray} If those two qubits are identical, one finds, \begin{eqnarray}
|\psi \rangle ^{\otimes 2}&\equiv &(\alpha |0\rangle +\beta |1\rangle )(\alpha |0\rangle +\beta |1\rangle ) \nonumber \\
&=& \alpha ^2|00\rangle +\sqrt {2}\alpha \beta \frac {1}{\sqrt {2}}(|01\rangle +|10\rangle )
+\beta ^2|11\rangle . \end{eqnarray}
For convenience, we write the second term as a normalized symmetric state $\frac {1}{\sqrt {2}}(|01\rangle +|10\rangle )$ which will be used later.
For two-qubit states, besides those separable states, we also have entangled states, for example, \begin{eqnarray}
|\Phi ^+\rangle \equiv \frac {1}{\sqrt {2}}(|00\rangle +|11\rangle ). \label{IBell1} \end{eqnarray}
This state cannot be written in a product form like $|\psi \rangle |\phi \rangle $, so it is ``entangled''. It is actually a maximally entangled state. In quantum information science, quantum entanglement is a valuable resource which is widely used in various tasks and protocols. Complementary to the entangled state $|\Phi ^+\rangle $, there are three other orthogonal maximally entangled states, and together these four states constitute a complete basis for $\mathbb{C}^2 \otimes \mathbb{C}^2$. Those four states are the Bell states; we list them all as follows, \begin{eqnarray}
|\Phi ^+\rangle &\equiv & \frac {1}{\sqrt {2}}(|00\rangle +|11\rangle ), \label{IBell0} \\
|\Phi ^-\rangle &\equiv & \frac {1}{\sqrt {2}}(|00\rangle -|11\rangle ), \label{IBell2} \\
|\Psi ^+\rangle &\equiv & \frac {1}{\sqrt {2}}(|01\rangle +|10\rangle ), \label{IBell3} \\
|\Psi ^-\rangle &\equiv & \frac {1}{\sqrt {2}}(|01\rangle -|10\rangle ). \label{IBell4} \end{eqnarray} Those four Bell states can be transformed to each other by local unitary transformations.
Consider three Pauli matrices defined as, \begin{eqnarray} \sigma _x=\left( \begin{array}{cc}0&1\\ 1&0\end{array}\right) , ~~~\sigma _y=\left( \begin{array}{cc}0&-i\\ i&0\end{array}\right) , ~~~\sigma _z=\left( \begin{array}{cc}1&0\\ 0&-1\end{array}\right). \end{eqnarray} Since $\sigma _y=i\sigma _x\sigma _z $, where the imaginary unit ``$i$'' is only a global phase, we sometimes use $\sigma _x\sigma _z$
instead of $\sigma _y$. Bear in mind that we have $|0\rangle =\left( \begin{array}{c}1\\0 \end{array}\right) $, and $\langle 0|=(1,0)$, so in linear algebra, we have the representation, \begin{eqnarray}
|0\rangle \langle 0|=\left( \begin{array}{c}1\\0\end{array} \right)(1,0)=\left( \begin{array}{cc} 1&0\\0&0\end{array}\right). \end{eqnarray} Now three Pauli matrices have an operator representation, \begin{eqnarray}
\sigma _x&=&|0\rangle \langle 1|+|1\rangle \langle 0|,\\
\sigma _y&=&-i|0\rangle \langle 1|+i|1\rangle \langle 0|,\\
\sigma _z&=&|0\rangle \langle 0|-|1\rangle \langle 1|. \end{eqnarray} In this review, we will not distinguish the matrix representation and the operator representation. Acting Pauli matrices $\sigma _x$ and $\sigma _z$ on a qubit, we find, \begin{eqnarray}
\sigma _x|0\rangle =|1\rangle ,~~\sigma _x|1\rangle =|0\rangle , \\
\sigma _z|0\rangle =|0\rangle ,~~\sigma _z|1\rangle =-|1\rangle , \end{eqnarray} which are the bit-flip and phase-flip actions, respectively, while $\sigma _y$ causes both a bit flip and a phase flip for a qubit. In this review, for convenience, we sometimes use the notations $X\equiv \sigma _x, Z\equiv \sigma _z$ to represent the corresponding Pauli matrices. Those Pauli matrices can also be defined in higher-dimensional systems, where the same notations may be used if no confusion arises.
For four Bell states, their relationship by local transformations can be as follows, \begin{eqnarray}
|\Phi ^-\rangle &= &(I\otimes \sigma _z) |\Phi ^+\rangle ,\\
|\Psi ^+\rangle &= & (I\otimes \sigma _x) |\Phi ^+\rangle ,\\
|\Psi ^-\rangle &= & (I\otimes \sigma _x\sigma _z) |\Phi ^+\rangle , \end{eqnarray} where $I$ is the identity in $\mathbb{C}^2$, the Pauli matrices are acting on the second qubit.
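As an explicit check of one of these relations, acting with $I\otimes \sigma _x$ on $|\Phi ^+\rangle $ flips the second qubit in each term,
\begin{eqnarray}
(I\otimes \sigma _x)|\Phi ^+\rangle =\frac {1}{\sqrt {2}}(|0\rangle \sigma _x|0\rangle +|1\rangle \sigma _x|1\rangle )
=\frac {1}{\sqrt {2}}(|01\rangle +|10\rangle )=|\Psi ^+\rangle ,
\end{eqnarray}
and the other two relations follow in the same way from the actions of $\sigma _z$ and $\sigma _x\sigma _z$ given above.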
Here we have already used the tensor product. Consider two operators, $O_1=\left( \begin{array}{cc}A_1&B_1\\ C_1&D_1\end{array}\right) $ and $O_2=\left( \begin{array}{cc}A_2&B_2\\ C_2&D_2\end{array}\right) $; the tensor product $O_1\otimes O_2$ is defined and calculated as follows, \begin{eqnarray} O_1\otimes O_2&=&\left( \begin{array}{cc}A_1O_2&B_1O_2\\ C_1O_2&D_1O_2\end{array}\right) \nonumber \\ &=&\left( \begin{array}{cccc}A_1A_2&A_1B_2&B_1A_2&B_1B_2\\ A_1C_2&A_1D_2&B_1C_2&B_1D_2\\ C_1A_2&C_1B_2&D_1A_2&D_1B_2\\ C_1C_2&C_1D_2&D_1C_2&D_1D_2\\\end{array}\right). \end{eqnarray} When we apply a tensor product $O_1\otimes O_2$ to a two-qubit quantum state, the operator $O_1$ acts on the first qubit and the operator $O_2$ acts on the second qubit.
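As a concrete instance of this rule,
\begin{eqnarray}
\sigma _z\otimes \sigma _x=\left( \begin{array}{cc}1\cdot \sigma _x&0\cdot \sigma _x\\ 0\cdot \sigma _x&-1\cdot \sigma _x\end{array}\right)
=\left( \begin{array}{cccc}0&1&0&0\\ 1&0&0&0\\ 0&0&0&-1\\ 0&0&-1&0\end{array}\right),
\end{eqnarray}
which acts on a two-qubit state by flipping the phase of the first qubit and the bit value of the second qubit.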
We have already extended one qubit to a two-qubit state; multipartite qubit states can be obtained similarly. We may also extend a qubit from two dimensions to a higher-dimensional system, generally called a ``qutrit'' for dimension three and a ``qudit'' for dimension $d$ in the general case, with the Hilbert space extended from $\mathbb{C}^2$ to $\mathbb{C}^d$. In general, a qudit is also a superposed state, \begin{eqnarray}
|\psi \rangle =\sum _{j=0}^{d-1}x_j|j\rangle , \end{eqnarray} where $x_j, j=0,1,...,d-1$, are complex amplitudes satisfying the normalization condition $\sum _j|x_j|^2=1$. Quantum entanglement also exists in higher-dimensional and multipartite systems.
A qubit $|\psi \rangle $ can be represented by its density matrix, \begin{eqnarray}
|\psi \rangle \langle \psi |&=&(\alpha |0\rangle +\beta |1\rangle )(\alpha ^*\langle 0|+\beta ^*\langle 1|) \nonumber \\
&=&\left( \begin{array}{cc}|\alpha |^2&\alpha \beta ^* \\
\alpha ^*\beta &|\beta |^2\end{array}\right) . \end{eqnarray} However, a general qubit need not be a superposed state, which is a pure state; it may also be a mixed state, i.e., a probabilistic mixture of pure states. Such a state can only be represented by a density operator $\rho $, \begin{eqnarray}
\rho =\sum _jp_j|\psi _j\rangle \langle \psi _j|, \end{eqnarray} where $\{p_j\}$ is a probability distribution with $\sum _jp_j=1$.
A density matrix is positive semi-definite, and its trace equals 1, \begin{eqnarray} \rho \ge 0, ~~~{\rm Tr}\rho =1. \end{eqnarray} The density matrices of a pure state and a mixed state can easily be distinguished by the following conditions, \begin{eqnarray} {\rm Tr}\rho ^2&=&1, ~~~{\rm pure ~state}; \\ {\rm Tr}\rho ^2&<&1, ~~~{\rm mixed ~state.} \end{eqnarray}
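This purity criterion is easy to illustrate numerically; the sketch below uses two assumed example states.
\begin{verbatim}
# Illustrative purity check with assumed example states:
# Tr(rho^2) = 1 for a pure state and < 1 for a mixed state.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus)           # |+><+|
rho_mixed = np.diag([0.75, 0.25])         # a mixed state

for rho in (rho_pure, rho_mixed):
    assert np.isclose(np.trace(rho), 1.0)
    print(np.trace(rho @ rho))            # 1.0, then 0.625
\end{verbatim}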
For a multipartite state, the state of one part is the reduced density matrix obtained by tracing out the other parts. For example, for the two-qubit maximally entangled state
$|\Phi ^+_{AB}\rangle $ shared between parts $A$ and $B$, each qubit is in a mixed state, \begin{eqnarray}
\rho _A={\rm Tr}_B|\Phi ^+_{AB}\rangle \langle \Phi ^+_{AB}|=\frac {1}{2}I. \end{eqnarray} This is actually a completely mixed state; here $I$ is the identity operator. The completely mixed state can be written as an equal-probability mixture of any pure state and its orthogonal state, \begin{eqnarray}
\rho _A=\frac {1}{2}I=\frac {1}{2}|\psi \rangle \langle \psi |+\frac {1}{2}|\psi ^{\perp}\rangle \langle \psi ^{\perp}|, \end{eqnarray}
where if $|\psi \rangle =\alpha |0\rangle +\beta |1\rangle $, its orthogonal state can take the form, \begin{eqnarray}
|\psi ^{\perp}\rangle =\beta ^*|0\rangle -\alpha ^*|1\rangle , \end{eqnarray} where $*$ means the complex conjugation.
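The partial trace in this example can be checked with the following illustrative sketch.
\begin{verbatim}
# Illustrative partial trace: tracing out qubit B of |Phi^+> leaves rho_A = I/2.
import numpy as np

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00>+|11>)/sqrt(2)
rho_AB = np.outer(phi_plus, phi_plus)

# reshape to indices (a, b, a', b') and sum over b = b'
rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A)                                              # [[0.5, 0], [0, 0.5]]
\end{verbatim}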
\subsection{Quantum gates}
In QIP, all operations should satisfy the laws of quantum mechanics, the commonly used ones being unitary transformations and quantum measurements. As in classical computation, any quantum computation can be effectively implemented by a few fundamental gates. The single qubit rotation gate and the controlled-NOT (CNOT) gate constitute a complete set of fundamental gates for universal quantum computation \cite{elementarygates}. The single qubit rotation gate is just a unitary transformation on a qubit, $\hat {R}(\vartheta ,\phi )$, defined as \begin{eqnarray}
\hat {R}(\vartheta ,\phi )|0\rangle &=&\cos \vartheta |0\rangle
+e^{i\phi }\sin \vartheta |1\rangle ,\nonumber \\
\hat {R}(\vartheta ,\phi )|1\rangle &=&
-e^{-i\phi }\sin \vartheta |0\rangle +\cos \vartheta |1\rangle , \label{singlegate} \end{eqnarray}
where the phase parameter $\phi $ should also be controllable. The CNOT gate is defined as a unitary transformation on a two-qubit system, where one qubit is the control qubit and the other is the target qubit. For a CNOT gate, when the control qubit is $|0\rangle $, the target qubit does not change; when the control qubit is $|1\rangle $, the target qubit is flipped. Explicitly it is defined as,
\begin{eqnarray}
&&CNOT: |0\rangle |0\rangle \rightarrow |0\rangle |0\rangle ; \nonumber \\
&&CNOT: |0\rangle |1\rangle \rightarrow |0\rangle |1\rangle ; \nonumber \\
&&CNOT: |1\rangle |0\rangle \rightarrow |1\rangle |1\rangle ; \nonumber \\
&&CNOT: |1\rangle |1\rangle \rightarrow |1\rangle |0\rangle , \label{CNOT} \end{eqnarray} where the first qubit is the control qubit and the second qubit is the target qubit. In matrix representation, the CNOT gate takes the form, \begin{eqnarray} CNOT=\left( \begin{array}{cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{array}\right) . \end{eqnarray} Depending on the physical system, different universal sets of quantum gates can be used to realize universal quantum computation.
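The following minimal sketch checks that this matrix indeed implements the truth table (\ref{CNOT}); the basis ordering $|00\rangle ,|01\rangle ,|10\rangle ,|11\rangle $ is assumed.
\begin{verbatim}
# Minimal check of the CNOT truth table in the basis order |00>,|01>,|10>,|11>.
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

for control in (0, 1):
    for target in (0, 1):
        idx = 2 * control + target                  # input |control, target>
        out_idx = 2 * control + (target ^ control)  # flip target iff control = 1
        assert np.allclose(CNOT @ np.eye(4)[idx], np.eye(4)[out_idx])
\end{verbatim}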
\section{No-cloning theorem}
\subsection{A simple proof of no-cloning theorem}
For classical information, the possibility of cloning it is an essential feature: in classical systems, cloning, in other words copying, poses no problem. Information stored in computers can easily be copied many times as backup; the accurate semiconservative replication of DNA steadily passes genetic information between generations. For quantum systems, however, this is not the case. As proved by Wootters and Zurek \cite{Wootters1982}, deterministic cloning of an arbitrary pure state is not possible. After this seminal work, much interest has been shown in extending and generalizing the original no-cloning theorem \cite{PhysRevLett.76.2818,ISI:000253764500008,ISI:000266500900237,ISI:000277243800004,ISI:000280467400011}, which gives new insight into the boundary between the classical and the quantum. On the other hand, no-signaling, guaranteed by Einstein's theory of relativity, is also delicately preserved by no-cloning. This chapter focuses on these topics, aiming to give a thorough description of the no-cloning theorem.
As is known, a single measurement on a quantum system reveals only partial information about it; moreover, as a result of the measurement, the system collapses to an eigenstate of the measurement operator and all other information about the original state is lost. Suppose there exists a cloning machine, described by a quantum operation $U$, which duplicates an arbitrary pure state \begin{equation}
U(|\varphi\rangle\otimes|R\rangle\otimes|M\rangle) = |\varphi\rangle\otimes|\varphi\rangle\otimes|M(\varphi)\rangle \end{equation}
here $|\varphi\rangle$ denotes an arbitrary pure state, $|R\rangle$ an initial blank state of the cloning machine, $|M\rangle$ the initial state of the auxiliary system (ancilla), and $|M(\varphi)\rangle$ is the ancillary state after the operation, which may depend on $|\varphi\rangle$. With such a machine, one could obtain any number of copies of the original quantum state and thereby determine its complete information. However, is it really possible to build such a machine? The no-cloning theorem says no.
$\it{THEOREM}:$ No quantum operation exists which can perfectly and deterministically duplicate an arbitrary pure state.
The proof can be given in two ways.
(1). Using the linearity of quantum mechanics. This proof was first proposed by Wootters and Zurek \cite{Wootters1982}, and also by Dieks \cite{dieks-no-cloning}. Suppose there exists a perfect cloning machine that can copy an arbitrary quantum state, that is, for any state $|\varphi\rangle$,
$$ |\varphi\rangle|\Sigma\rangle|M\rangle \rightarrow |\varphi\rangle|\varphi\rangle|M(\varphi)\rangle $$
where $|\Sigma\rangle$ is a blank state, and $|M\rangle$ is the state of the auxiliary system (ancilla). Thus for the states $|0\rangle$ and $|1\rangle$, we have
$$ |0\rangle|\Sigma\rangle|M\rangle \rightarrow |0\rangle|0\rangle|M(0)\rangle ,$$
$$ |1\rangle|\Sigma\rangle|M\rangle \rightarrow |1\rangle|1\rangle|M(1)\rangle. $$
In this way, by linearity, for the state $|\psi\rangle = \alpha|0\rangle+\beta|1\rangle$, $$ (\alpha|0\rangle+\beta|1\rangle)|\Sigma\rangle|M\rangle \rightarrow
\alpha|00\rangle|M(0)\rangle+\beta|11\rangle|M(1)\rangle. $$
On the other hand, $|\psi\rangle$ itself is a pure state, so applying the cloning machine to it directly gives
$$ |\psi\rangle|\Sigma\rangle|M\rangle \rightarrow (\alpha^2|00\rangle+\alpha\beta|01\rangle+\alpha\beta|10\rangle+\beta^2|11\rangle)
|M(\psi)\rangle .$$ Obviously, the right hand sides of the two equations cannot be equal for general $\alpha$ and $\beta$; as a result, the premise that such a perfect cloning machine exists is false, which concludes the proof. The linearity of quantum mechanics is also used to show that superluminal signaling is not possible \cite{dieks-no-cloning}.
(2). Using the properties of unitary operations. This proof was first proposed by Yuen in \cite{yuen-no-cloning}, see also Sec. 9-4 of Peres's textbook \cite{Peres95}. Consider the cloning process as a unitary operator $U$ acting on the input state together with a blank state $|\Sigma\rangle$. For any two states $|\varphi\rangle$ and $|\psi\rangle$, since a unitary operation preserves inner products, we have
$$ \langle\psi|\varphi\rangle = \langle\psi|\langle\Sigma|U^{\dagger}U|\varphi\rangle|\Sigma\rangle
= \langle\psi|\varphi\rangle\langle\psi|\varphi\rangle
= \langle\psi|\varphi\rangle^2. $$
So $\langle\psi|\varphi\rangle$ is either 0 or 1. If the value is 0, the two states being copied are orthogonal, while if it is 1, the two states are the same; hence two distinct non-orthogonal states cannot both be perfectly cloned.
\subsection{No-broadcasting theorem}
Following the no-cloning theorem for pure states, the impossibility of cloning a mixed state was later proved by Barnum \emph{et al.} \cite{PhysRevLett.76.2818}. In fact, not only cloning but also broadcasting, whose meaning will be presented later in this section, is prohibited by quantum mechanics. Correlations, as a fundamental theme of science, are also studied in quantum systems. An elegant no-local-broadcasting theorem for correlations in a multipartite state was proposed by Piani \emph{et al.} \cite{ISI:000253764500008}. With these two no-broadcasting theorems, it is natural to ask what the relationship between them is. Recently Luo \emph{et al.} established the no-unilocal-broadcasting theorem for quantum correlations, which proves to be the bridge between Barnum's and Piani's theorems and allows us to establish the equivalence between them. The three theorems together give a unified picture of no-broadcasting in quantum systems.
We shall first elaborate on the original no-broadcasting theorem for non-commuting states proposed by Barnum \emph{et al.} \cite{PhysRevLett.76.2818}. Suppose a composite quantum system AB has two parts A and B, where A is prepared in one of the states $\{\sigma_i\}$ and B is prepared in the blank state $\tau$. If there exists a quantum operation $\mathcal{E}$ acting on system AB, that is, $\sigma_k\otimes\tau\rightarrow\mathcal{E}(\sigma_k\otimes\tau)=\rho_k^{out}$, such that the output state satisfies \begin{equation*} {\rm Tr}_a\rho_k^{out}=\sigma_k \ ~~~{\rm and} \ {\rm Tr}_b\rho_k^{out}=\sigma_k,\ \forall k, \end{equation*} we say that $\mathcal{E}$ broadcasts the set of states $\{\sigma_i\}$. Barnum's theorem is as follows \cite{PhysRevLett.76.2818}.
\textit{Theorem 1.} A set of states $\{\sigma_i\}$ is broadcastable if and only if the states commute with each other.
Several kinds of proof for Theorem 1 have been found \cite{ISI:000251674300007,PhysRevLett.76.2818,PhysRevLett.100.210502,ISI:000079331000009}; one of them, using properties of the relative entropy \cite{PhysRevLett.100.210502}, is provided as follows. We shall only prove Theorem 1 in the case where the set $\{\sigma_i\}$ has just two states, $\sigma_1$ and $\sigma_2$, from which more general cases follow easily.
\textit{Proof for Theorem 1}
1) ``if'' part: since $\sigma_1$ and $\sigma_2$ commute, they can be expressed in the same orthonormal basis $\{|i\rangle\}$: \begin{equation*}
\sigma_k = \sum_i \lambda_{k,i}|i\rangle\langle i|,\ k=1,2. \end{equation*}
Because $\{|i\rangle\}$ is an orthonormal set, it can be cloned by an operator $\mathcal{E}$, so we get \begin{equation*}
\rho_k^{out}=\mathcal{E}(\sigma_k\otimes\tau) = \sum_i \lambda_{k,i} |ii\rangle\langle ii|,\ k=1,2, \end{equation*} thus \begin{equation*} {\rm Tr}_a \rho_k^{out}=\sigma_k,\ ~~{\rm Tr}_b \rho_k^{out}=\sigma_k,\ k=1,2. \end{equation*} So we see $\sigma_1$ and $\sigma_2$ are broadcasted by $\mathcal{E}$.
2) ``only if'' part: first we shall introduce the concept of relative entropy. The relative entropy S of $\rho_1$ with respect to $\rho_2$ is defined as\cite{Umegaki1962} \begin{equation*}
S(\rho_1|\rho_2)={\rm Tr}[\rho_1(\ln \rho_1-\ln \rho_2)]. \end{equation*}
When the support of $\rho_1$ is contained in the support of $\rho_2$, i.e., $\ker(\rho_2)\subseteq \ker(\rho_1)$, $S$ is finite; otherwise $S=\infty$ \cite{RevModPhys.50.221}. We first consider the finite case $\ker(\rho_2)\subseteq \ker(\rho_1)$, so that $S<\infty$. Denoting $\rho _1^{in}=\sigma_1\otimes\tau$ and $\rho _2^{in}=\sigma_2\otimes\tau$, we get
\begin{align*}
S(\rho_1^{in}|\rho_2^{in})&={\rm Tr}[(\sigma_1\otimes\tau)(\log \sigma_1\otimes I+I\otimes\log \tau-\log \sigma_2\otimes I-I\otimes\log\tau)]\\ &={\rm Tr}_a[\sigma_1(\log \sigma_1-\log \sigma_2)]\,{\rm Tr}_b\tau\\ &={\rm Tr}_a[\sigma_1(\log \sigma_1-\log \sigma_2)]\\
&=S(\sigma_1|\sigma_2). \end{align*} Quantum cloning process $\mathcal{E}$ corresponds to a unitary operator $U$ on input state and the ancillary state $\Sigma $ such that, $\rho _k^{out}=U(\rho _k^{in}\otimes \Sigma )U^{\dagger }$. Now, we have \begin{equation*}
S(\sigma_1|\sigma_2)=S(\rho_1^{in}|\rho_2^{in})= S(\rho_1^{out}|\rho_2^{out}). \end{equation*}
In general, we remark that for any quantum operation such as the cloning process $\mathcal{E}$, $S(\mathcal{E}(\rho_1)|\mathcal{E}(\rho_2))\le S(\rho_1|\rho_2)$, see for example \cite{vedralRMP}. This is closely related to the monotonicity of the relative entropy \cite{Lindblad1875}, \begin{equation*}
S(\rho_1^{ab}|\rho_2^{ab})\geq S(\rho_1^b|\rho_2^b), \end{equation*} where $\rho_1^b$ denotes the reduced density matrix of the composite system $\rho_1^{ab}$, and the equality holds if and only if the following condition is satisfied: \begin{equation*} \log \rho_1^{ab}-\log \rho_2^{ab}=I^a\otimes(\log \rho_1^b-\log \rho_2^b). \end{equation*} So we have \begin{equation}
S(\rho_1^{out}|\rho_2^{out})\geq S(\rho_1^{k,out}|\rho_2^{k,out}), \end{equation} for $k=a,b$, where $\rho_i^{k,out}$ denotes ${\rm Tr}_{b(a)}\rho_i^{out}$. The equality holds if and only if \begin{align*} \log \rho_1^{out}-\log \rho_2^{out}&=(\log \rho_1^{a,out}-\log \rho_2^{a,out})\otimes I^b \\ &= I^a\otimes(\log \rho_1^{b,out}-\log \rho_2^{b,out}). \end{align*} Under the broadcasting condition, we get \begin{align*} \log \rho_1^{out}-\log \rho_2^{out}&=(\log \sigma_1-\log \sigma_2)\otimes I^b \\ &= I^a\otimes(\log \sigma_1-\log \sigma_2). \end{align*} But the above equation holds only when $\sigma_1$ and $\sigma_2$ are diagonal or can be diagonalized in the same basis, which means they commute.
For the case $S(\sigma_1|\sigma_2)=\infty $, consider the mixed state $\sigma_{mix}=\lambda\sigma_1+(1-\lambda)\sigma_2$, where $0<\lambda<1$. If $\sigma_1$ and $\sigma_2$ can be broadcast, then so can $\sigma_1$ and $\sigma_{mix}$, due to the linearity of the operation. But $\ker(\sigma_{mix})\subseteq \ker(\sigma_1)$, so $S(\sigma_1|\sigma_{mix})<\infty$ and the argument above applies; thus $\sigma_1$ and $\sigma_{mix}$ commute, and hence $\sigma_1$ and $\sigma_2$ commute. This finishes the proof of Theorem 1.
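Two ingredients of the above proof, the identity $S(\sigma_1\otimes\tau|\sigma_2\otimes\tau)=S(\sigma_1|\sigma_2)$ and the monotonicity of the relative entropy under the partial trace, can be illustrated numerically as in the following rough sketch; the example states are ours.
\begin{verbatim}
# Rough numerical illustration (example diagonal states) of two facts used
# in the proof above.
import numpy as np
from scipy.linalg import logm

def rel_entropy(r1, r2):
    return np.real(np.trace(r1 @ (logm(r1) - logm(r2))))

def trace_out_first(rho):                  # partial trace over the first qubit
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)

sigma1, sigma2 = np.diag([0.7, 0.3]), np.diag([0.4, 0.6])
tau = np.diag([0.5, 0.5])

assert np.isclose(rel_entropy(np.kron(sigma1, tau), np.kron(sigma2, tau)),
                  rel_entropy(sigma1, sigma2))

rho_ab   = np.kron(np.diag([0.7, 0.3]), np.diag([0.6, 0.4]))
sigma_ab = np.kron(np.diag([0.2, 0.8]), np.diag([0.5, 0.5]))
assert rel_entropy(rho_ab, sigma_ab) >= rel_entropy(trace_out_first(rho_ab),
                                                    trace_out_first(sigma_ab))
\end{verbatim}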
We would like to comment that, under weaker assumptions, broadcasting of part of the information of a quantum state is possible; this is the quantum state information broadcasting presented recently in \cite{Horodecki-information-broadcasting}.
\subsection{No-broadcasting for correlations} Quantum entanglement distinguishes the quantum world from the classical world. Recently, it has also been realized that quantum correlations, which may go beyond quantum entanglement, are also important for QIP. Here we first classify states by their correlations \cite{ISI:000253764500008}. A bipartite state $\rho^{ab}$ shared by two parties A and B is called separable if it can be decomposed as $$ \rho^{ab} = \Sigma_j p_j \rho_j^a \otimes \rho_j^b, $$ where $\{p_j\}$ denotes a probability distribution, and $\{\rho_j^a\}$ and $\{\rho_j^b\}$ denote states of parties a and b. Otherwise, $\rho^{ab}$ is called entangled.
If $\rho^{ab}$ can be further decomposed as
$$ \rho^{ab} = \Sigma_i p_i |i\rangle\langle i| \otimes \rho_i^b, $$
with $\{p_i\}$ denoting a probability distribution, $\{|i\rangle\}$ an orthonormal set of party a and $\{\rho_i^b\}$ states of party b, we say it is classical-quantum.
If each $\rho_i^b$ can also be diagonalized in an orthonormal set $\{|j\rangle\}$, so that
$$ \rho^{ab} = \Sigma_{ij} p_{ij} |i\rangle\langle i| \otimes |j\rangle\langle j|, $$ where $\{p_{ij}\}$ represents a probability distribution for two variables, we say it is classical (or classical-classical).
As we know the correlation in $\rho^{ab}$ can be quantified by mutual information \begin{equation*} I(\rho^{ab}) = S(\rho^a) + S(\rho^b) - S(\rho^{ab}), \end{equation*} where S denotes the von Neumann entropy, that is, $S(\rho) = -{\rm Tr}(\rho \ln \rho)$.
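As a small numerical illustration (with assumed example states), the mutual information of the maximally entangled state $|\Phi^+\rangle$ is $2\ln 2$, while that of an uncorrelated product state vanishes; a short sketch follows.
\begin{verbatim}
# Illustration: mutual information of a maximally entangled state vs a
# product state (assumed examples, two qubits).
import numpy as np

def vn_entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def ptrace(rho, keep):                     # two-qubit partial trace
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 'a' else np.trace(r, axis1=0, axis2=2)

def mutual_info(rho):
    return vn_entropy(ptrace(rho, 'a')) + vn_entropy(ptrace(rho, 'b')) - vn_entropy(rho)

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(mutual_info(np.outer(phi_plus, phi_plus)))          # 2 ln 2 ~ 1.386
print(mutual_info(np.kron(np.eye(2) / 2, np.eye(2) / 2))) # 0
\end{verbatim}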
We say the correlation in $\rho^{ab}$ is locally broadcast if there exist two quantum operations $\mathcal{E}^a:\mathcal{S}(H^a) \rightarrow \mathcal{S}(H^{a_1}\otimes H^{a_2})$ and $\mathcal{E}^b:\mathcal{S}(H^b) \rightarrow \mathcal{S}(H^{b_1}\otimes H^{b_2})$, where $\mathcal{S}(H)$ denotes the set of quantum states on the Hilbert space $H$, such that $\rho^{ab}\rightarrow (\mathcal{E}^a\otimes\mathcal{E}^b)\rho^{ab}=\rho^{a_1a_2b_1b_2}$, and the amount of correlations in the two reduced states $\rho^{a_1b_1} ={\rm Tr}_{a_2b_2}\rho^{a_1a_2b_1b_2}$ and $\rho^{a_2b_2} = {\rm Tr}_{a_1b_1}\rho^{a_1a_2b_1b_2}$ is identical to that of $\rho^{ab}$, that is $$I(\rho^{a_1b_1}) = I(\rho^{a_2b_2}) = I(\rho^{ab}).$$ Similarly, suppose a quantum operation $\mathcal{E}^a$ is performed on party a alone, giving $(\mathcal{E}^a\otimes I^b)\rho^{ab}=\rho^{a_1a_2b}$; we say the correlation in $\rho^{ab}$ is locally broadcast by party a if \begin{equation*} I(\rho^{a_1b})=I(\rho^{a_2b})=I(\rho^{ab}), \end{equation*} where $\rho^{a_1b}={\rm Tr}_{a_2}\rho^{a_1a_2b}$ and $\rho^{a_2b}={\rm Tr}_{a_1}\rho^{a_1a_2b}$.
With the above definitions, we can now state the no-broadcasting theorems for correlations proposed by Piani \emph{et al.} and Luo \emph{et al.}
\textit{Theorem 2:} The correlation in a bipartite state can be locally broadcast if and only if the state is classical.
\textit{Theorem 3:} The correlation in the bipartite state $\rho^{ab}$ can be locally broadcast by party a if and only if $\rho^{ab}$ is classical-quantum.
\subsection{A unified no-cloning theorem from information theoretical point of view}
Now we shall build equivalence among the theorems according to the method proposed by Luo \emph{et al.}\cite{ISI:000280467400011,ISI:000277243800004}, that is,
Theorem 1 $\Leftrightarrow$ Theorem 2 $\Leftrightarrow$ Theorem 3.
First we shall establish a lemma.
\textit{Lemma 1:} Any bipartite state can be decomposed as $$\rho^{ab} = \sum_k X_k^a \otimes X_k^b,$$ where each $X_k^a$ is non-negative and $\{X_k^b\}$ forms a linearly independent set.
\textit{Proof:} Let $\{Y_j^a\}$ be a linearly independent set for party a, $\{Z_k^b\}$ a linearly independent set for party b. Then any bipartite state $\rho^{ab}$ can be decomposed in the basis $\{Y_j^a\otimes Z_k^b\}$, that is \begin{equation*} \rho^{ab} = \sum_{jk} \lambda_{jk} Y_j^a \otimes Z_k^b. \end{equation*} Obviously we can let $Z_k^a = \sum_j \lambda_{jk} Y_j^a$ and obtain \begin{equation}\label{BipartiteDecomposition1} \rho^{ab} = \sum_k Z_k^a \otimes Z_k^b. \end{equation}
Notice that $Z_k^a$ need not be non-negative, so we have not arrived at Lemma 1 yet. Starting from (\ref{BipartiteDecomposition1}), we take a fixed $|y\rangle\in H^b$ such that
$$c_1 = \langle y|Z_1^b|y\rangle \neq 0,$$
let $c_k = \langle y|Z_k^b|y \rangle$, we can write $\rho^{ab}$ as \begin{align*} \rho^{ab} &= \sum_k Z_k^a \otimes Z_k^b\\ &= \sum_k Z_k^a\otimes Z_k^b+\sum_{k\neq1}\frac{c_k}{c_1}Z_k^a\otimes Z_1^b - \sum_{k\neq 1}Z_k^a\otimes\frac{c_k}{c_1}Z_1^b\\ &= (Z_1^a+\sum_{k\neq 1}\frac{c_k}{c_1}Z_k^a)\otimes Z_1^b +\sum_{k\neq 1}Z_k^a\otimes(Z_k^b-\frac{c_k}{c_1}Z_1^b)\\ &= X_1^a\otimes Z_1^b+ \sum_{k\neq 1}Z_k^a \otimes \widetilde{Z_k^b}, \end{align*} where $X_1^a = Z_1^a+\sum_{k\neq 1}\frac{c_k}{c_1}Z_k^a$ and $\widetilde{Z_k^b}=Z_k^b-\frac{c_k}{c_1}Z_1^b$
Because for any k,
$$ \langle y|\widetilde{Z_k^b}|y\rangle = \langle y|(Z_k^b-\frac{c_k}{c_1}Z_1^b)|y\rangle = c_k-\frac{c_k}{c_1}c_1 = 0, $$
together with the non-negativity of the density operator $\rho^{ab}$, we have for any $|x\rangle \in H^a$ \begin{align*}
\langle x\otimes y|\rho^{ab}|x\otimes y\rangle
&= \langle x|X_1^a|x\rangle\langle y|Z_1^b|y\rangle + \sum_{k\neq 1}\langle x|Z_k^a|x\rangle\langle y|\widetilde{Z_k^b}|y\rangle\\
&= c_1\langle x|X_1^a|x\rangle\\ &\geq 0. \end{align*} Since $c_1\neq 0$, we see $X_1^a$ or $-X_1^a$ is non-negative depending on the sign of $c_1$. Without loss of generality, we can always assume $X_1^a$ to be non-negative, because the negative sign can be absorbed by $Z_1^b$. Further we see the set $\{Z_1^b,\widetilde{Z_k^b}\}$ still forms a linearly independent set.
Now keep all the $Z_i^a$ $(i\geq 2)$ and $Z_1^b$ unchanged, replace $Z_1^a$ with $X_1^a$ and $Z_j^b$ $(j\geq 2)$ with $\widetilde{Z_j^b}$, find a $|\widetilde{y}\rangle\in H^b$ such that $\langle\widetilde{y}|\widetilde{Z_2^b}|\widetilde{y}\rangle \neq 0$, and continue the above process to obtain a non-negative $X_2^a$. Finally we can replace all the $Z_i^a$'s with $X_i^a$'s and thus the proof is completed.
Next we prove Theorem 1 $\Rightarrow$ Theorem 3.
\textit{Proof:} (``if'' part) Since $\rho^{ab}$ is a classical-quantum state, it can be rewritten as \begin{equation*}
\rho^{ab}=\sum_i p_i|i\rangle\langle i|\otimes\rho_i^b, \end{equation*}
where $\{|i\rangle\}$ is an orthonormal set, which we can further assume to be an orthonormal basis, appending some terms with $p_i=0$ if necessary. For any state $\sigma \in S(H^a)$, construct a quantum map $\mathcal{E}^a:S(H^a)\rightarrow S(H^{a_1}\otimes H^{a_2})$ such that \begin{equation}\label{TrivialBroadcasting} \mathcal{E}^a(\sigma) = \sum_{i}E_i\sigma E_i^{\dagger}, \end{equation}
where $E_i=|ii\rangle\langle i|$. Performing $\mathcal{E}^a$ on party a locally broadcasts $\rho^a$, and of course the correlation in $\rho^{ab}$ is then locally broadcast by party a as well.
(``only if'' part) Suppose the correlation in $\rho^{ab}$ is locally broadcast by party a through the operator $\mathcal{E}^a:\mathcal{S}(H^a)\rightarrow\mathcal{S}(H^{a_1}\otimes H^{a_2})$, then $$\rho^{a_1a_2b}=\mathcal{E}^a\otimes\mathcal{I}^b(\rho^{ab}),$$ where $\mathcal{I}^b$ is the identity operator on $H^b$. We have $$I(\rho^{a_1b})=I(\rho^{a_2b})=I(\rho^{ab}).$$ Denote the operator $\mathcal{T}_{a_2}^{a_1a_2}:\mathcal{S}(H^{a_1}\otimes H^{a_2})\rightarrow \mathcal{S}(H^{a_1})$ as the partial tracing operator by tracing out $a_2$, thus $$\rho^{a_1b}=(\mathcal{T}_{a_2}^{a_1a_2}\otimes\mathcal{I}^b)(\rho^{a_1a_2b})= \mathcal{T}_{a_2}^{a_1a_2}\mathcal{E}^a\otimes\mathcal{I}^b(\rho^{ab}).$$ According to the condition $$I(\rho^{a_1b})=I(\rho^{ab}),$$
and notice that $I(\rho^{ab})=S(\rho^{ab}|\rho^a\otimes\rho^b)$, where S is the relative entropy, $\rho^a$ and $\rho^b$ stand for reduced states, we have \begin{equation}\label{RelativeEntropyEq}
S(\mathcal{T}_{a_2}^{a_1a_2}\mathcal{E}^a\otimes\mathcal{I}^b(\rho^{ab})| \mathcal{T}_{a_2}^{a_1a_2}\mathcal{E}^a\otimes\mathcal{I}^b(\rho^a\otimes\rho^b))\\
=S(\rho^{ab}|\rho^a\otimes\rho^b). \end{equation} Now we shall introduce a theorem stating that\cite{Lindblad1875} \begin{equation}\label{Monotonicity}
S(\rho|\sigma)\geq S(\mathcal{E}(\rho)|\mathcal{E}(\sigma)) \end{equation} for any quantum state $\rho$ and $\sigma$, and any quantum map $\mathcal{E}:\mathcal{S}(H)\rightarrow\mathcal{S}(K)$.
The equality holds if and only if there exists an operator $\mathcal{F}:\mathcal{S}(K)\rightarrow\mathcal{S}(H)$ such that $$\mathcal{F}\mathcal{E}(\rho)=\rho,\ \mathcal{F}\mathcal{E}(\sigma)=\sigma.$$ An explicit form of $\mathcal{F}$ is, see \cite{HaydenCMP}, \begin{equation}\label{MonoEqualExplicitForm} \mathcal{F}(\tau)=\sigma^{1/2}\mathcal{E}^{\dagger}((\mathcal{E}(\sigma))^{-1/2} \tau(\mathcal{E}(\sigma))^{-1/2})\sigma^{1/2},\ \tau\in\mathcal{S}(K). \end{equation} Apply (\ref{Monotonicity}) to (\ref{RelativeEntropyEq}), we know there exists an operator $\mathcal{F}^{a_1b}$ such that
$$\mathcal{F}^{a_1b}(\mathcal{T}_{a_2}^{a_1a_2}\mathcal{E}^a\otimes\mathcal{I}^b)(\rho^{ab})=\rho^{ab},$$ $$\mathcal{F}^{a_1b}(\mathcal{T}_{a_2}^{a_1a_2}\mathcal{E}^a\otimes\mathcal{I}^b)(\rho^a\otimes\rho^b)=\rho^a\otimes\rho^b.$$ Considering the explicit form of $\mathcal{F}^{a_1b}$ from (\ref{MonoEqualExplicitForm}) and the product structure of $\rho^a\otimes\rho^b$, we can express $\mathcal{F}^{a_1b}$ as $\mathcal{F}^{a_1}\otimes\mathcal{I}^b$, hence $$(\mathcal{F}^{a_1}\otimes \mathcal{I}^b)(\mathcal{T}_{a_2}^{a_1a_2}\mathcal{E}^a\otimes\mathcal{I}^b)(\rho^{ab})=\rho^{ab}.$$ Using Lemma 1, we obtain \begin{equation*} \rho^{ab}=\sum_i X_i^a\otimes X_i^b, \end{equation*} where each $X_i^a$ is non-negative and $\{X_i^b\}$ constitutes a linearly independent set; thus $$\sum_i\mathcal{F}^{a_1}\mathcal{T}_{a_2}^{a_1a_2}\mathcal{E}^a(X_i^a)\otimes X_i^b=\sum_i X_i^a\otimes X_i^b.$$ Since $\{X_i^b\}$ is a linearly independent set, we have $$\mathcal{F}^{a_1}\mathcal{T}_{a_2}^{a_1a_2}\mathcal{E}^a(X_k^a)=X_k^a,\ \forall k.$$
So $\{X_i^a\}$ is broadcastable; by Theorem 1, the $X_i^a$'s commute with each other, and hence can be diagonalized in the same basis $\{|i\rangle\}$. Now we obtain \begin{equation*}
\rho^{ab} = \sum_i \lambda_i |i\rangle\langle i|\otimes Y_i^b, \end{equation*} it can be easily proved that $Y_i^b$ is non-negative, and hence $\rho^{ab}$ is indeed a classical-quantum state.
Now we prove Theorem 3 $\Rightarrow$ Theorem 2.
\textit{Proof:} We shall prove only the non-trivial part. Suppose the correlation in $\rho^{ab}$ can be locally broadcast by two operators respectively performed on party a and b: \begin{align*} \mathcal{E}^a:\mathcal{S}(H^a)\rightarrow\mathcal{S}(H^{a_1}\otimes H^{a_2}), \\ \mathcal{E}^b:\mathcal{S}(H^b)\rightarrow\mathcal{S}(H^{b_1}\otimes H^{b_2}), \end{align*} then we obtain \begin{align*} \rho^{a_1a_2b_1b_2}&=(\mathcal{E}^a\otimes\mathcal{E}^b)\rho^{ab}\\ &=(\mathcal{I}^{a_1a_2}\otimes\mathcal{E}^b)(\mathcal{E}^a\otimes\mathcal{I}^b)\rho^{ab}. \end{align*} So we have decomposed the operation $\mathcal{E}^a\otimes\mathcal{E}^b$ into two steps, each of which only deals with a single party. Through step one, we obtain \begin{equation*}
S(\rho^{ab}|(\rho^a\otimes\rho^b))\geq S(\mathcal{E}^a(\rho^{ab})|\mathcal{E}^a(\rho^a\otimes\rho^b)), \end{equation*} that is, $I(\rho^{ab})\geq I(\rho^{a_1a_2b})$; since $I(\rho^{a_1a_2b})\geq I({\rm Tr}_{a_2}\rho^{a_1a_2b})=I(\rho^{a_1b})$, we have $$I(\rho^{a_1b})\leq I(\rho^{a_1a_2b})\leq I(\rho^{ab}).$$ Similarly, through step two, we obtain $$I(\rho^{a_1b_1})\leq I(\rho^{a_1b_1b_2})\leq I(\rho^{a_1b}).$$ With the condition $I(\rho^{a_1b_1})=I(\rho^{ab})$, we have $I(\rho^{a_1b})=I(\rho^{ab})$, which shows that the correlation in $\rho^{ab}$ is locally broadcast by party a; from Theorem 3, we know $\rho^{ab}$ is a classical-quantum state. Exchanging a and b in the above discussion, it is obvious that $\rho^{ab}$ is also a quantum-classical state. So $\rho^{ab}$ is a classical state.
Next we prove Theorem 2 $\Rightarrow$ Theorem 1.
\textit{Proof:} Again we shall only prove the non-trivial part. Suppose there exists a quantum operation $\mathcal{E}^b$ which can broadcast a set of states $\{\rho_i^b\}$. We can find an orthonormal set $\{|i\rangle\}$ and construct a composite system \begin{equation*}
\rho^{ab}=\sum_i p_i |i\rangle\langle i|\otimes\rho_i^b, \end{equation*} where $\{p_i\}$ is a probability distribution. Party a can easily be broadcast by the operator $\mathcal{E}^a$ from (\ref{TrivialBroadcasting}); together with $\mathcal{E}^b$, $\rho^{ab}$ can be locally broadcast, and so can the correlation. Thus from Theorem 2, $\rho^{ab}$ is a classical state, and hence the states $\rho_i^b$ commute with each other.
From the above discussions, we have created a chain of equivalence among the three theorems: Theorem 1 $\Leftrightarrow$ Theorem 2 $\Leftrightarrow$ Theorem 3. This has provided us with a unified picture of the no-broadcasting theorem in quantum systems from the information theoretical point of view.
\subsection{No-cloning and no-signaling}
According to Einstein's relativity theory, superluminal signaling cannot be physically realized. Yet, due to the non-local property of quantum entanglement, superluminal signaling would be possible if a perfect cloning machine could be made. The scheme has been well known since Herbert \cite{Herbert1982} first proposed his ``FLASH'' in 1982. The idea is as follows: suppose Alice and Bob, at an arbitrary distance, share a pair of entangled qubits in the state $|\psi\rangle = (1/\sqrt2)(|01\rangle-|10\rangle)$. Alice can measure her qubit in either $\sigma_x$ or $\sigma_z$. If the measurement is $\sigma_z$, Alice's qubit will collapse to the state $|0\rangle$ or $|1\rangle$, each with probability 50\%. Respectively, this prepares Bob's qubit in the state $|1\rangle$ or $|0\rangle$. Without knowing the result of Alice's measurement, the density matrix of Bob's qubit is $\frac{1}{2}|0\rangle{\langle}0|+\frac{1}{2}|1\rangle{\langle}1| = \frac{1}{2}I$. On the other hand, if Alice's measurement is $\sigma_x$, Alice's qubit will collapse to the state $|\varphi_{x+}\rangle$ or $|\varphi_{x-}\rangle$, where $|\varphi_{x+}\rangle=1/\sqrt{2}(|0\rangle+|1\rangle)$ and $|\varphi_{x-}\rangle=1/\sqrt{2}(|0\rangle-|1\rangle)$ are eigenvectors of $\sigma_x$. Thus Bob's qubit is prepared in the state $|\varphi_{x-}\rangle$ or $|\varphi_{x+}\rangle$, respectively; in this case the density matrix of Bob's qubit is still $\frac{1}{2}|\varphi_{x+}\rangle\langle\varphi_{x+}|+\frac{1}{2}|\varphi_{x-}\rangle\langle\varphi_{x-}| = \frac{1}{2}I$. Obviously, Bob gets no information about which measurement is made by Alice. However, if perfect cloning were allowed, the scenario would change. Bob could use the cloning machine to make arbitrarily many copies of his qubit, in which way he would be able to determine the exact state of his qubit, that is, whether it is an eigenstate of $\sigma_z$ or $\sigma_x$. With this information, Bob would know which measurement Alice has taken. Fortunately, since the no-cloning theorem holds, the above superluminal signaling scheme cannot be realized, which leaves the theory of relativity and quantum mechanics in coexistence.
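The fact that Bob's unconditional density matrix is independent of Alice's choice of measurement can be checked with the following illustrative sketch (singlet state and projective measurements assumed).
\begin{verbatim}
# Sketch: Bob's averaged post-measurement state is I/2 for either of Alice's
# measurement bases (singlet state assumed).
import numpy as np

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # (|01>-|10>)/sqrt(2)
rho = np.outer(singlet, singlet)

def bob_state(alice_basis):
    rho_b = np.zeros((2, 2))
    for v in alice_basis:
        P = np.kron(np.outer(v, v), np.eye(2))            # project Alice's qubit
        post = P @ rho @ P                                # unnormalized branch
        rho_b += np.trace(post.reshape(2, 2, 2, 2), axis1=0, axis2=2)
    return rho_b                                          # sum of branches

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
print(bob_state(z_basis))                                 # I/2
print(bob_state(x_basis))                                 # I/2
\end{verbatim}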
Up to now, many cloning schemes have been found, so one may naturally ask whether it is possible, by using imperfect cloning, to extract information about which measuring basis Alice has used. According to the properties of quantum transformations, the answer is no. To see this, we may first consider a simple scheme, that is, Bob can use the universal quantum cloning machine (UQCM) proposed by Bu\v{z}ek and Hillery \cite{PhysRevA.54.1844} to process his qubit. The UQCM transformation reads,
$$ |0\rangle|Q\rangle \rightarrow \sqrt{\frac{2}{3}}|00\rangle|\uparrow\rangle+\sqrt{\frac{1}{3}}|+\rangle|\downarrow\rangle, $$
$$ |1\rangle|Q\rangle \rightarrow \sqrt{\frac{2}{3}}|11\rangle|\downarrow\rangle+\sqrt{\frac{1}{3}}|+\rangle|\uparrow\rangle, $$
where $|Q\rangle$ is the original state of the copying-machine, $|+\rangle$ and $|-\rangle$ are two orthogonal states of the output, $|+\rangle = \frac{1}{\sqrt2}(|01\rangle+|10\rangle)$, $|-\rangle = \frac{1}{\sqrt2}(|01\rangle-|10\rangle)$, and $|\uparrow \rangle ,|\downarrow \rangle $ are the ancillary states which are orthogonal to each other. If Alice chooses $\sigma_z$, the density matrix of Bob's qubit after the process is \begin{align*}
\rho_b &= \frac{1}{2}(\frac{2}{3}|00\rangle\langle00|+\frac{1}{3}|+\rangle\langle+|)
+\frac{1}{2}(\frac{2}{3}|11\rangle\langle11|+\frac{1}{3}|+\rangle\langle+|) \\ &=
\frac{1}{3}(|00\rangle\langle00|+|11\rangle\langle11|+|+\rangle\langle+|). \end{align*} If Alice chooses $\sigma_x$, it can be easily verified that the density matrix does not change, thus no information can be gained by Bob. In fact, Bruss \emph{et al.} pointed out in \cite{PhysRevA.62.062302} that the density matrix of Bob's qubit will not change no matter what operation is applied to it, as long as the operation is linear and trace-preserving. Suppose the original density matrix shared between Alice and Bob is $\rho^{ab}$, Alice performs a measurement $\mathcal{A}_m$ on her qubit, and Bob applies a transformation $\mathcal{B}$ on his; then the shared density matrix becomes $\mathcal{A}_m\otimes\mathcal{B}(\rho^{ab})$, where $m$ specifies which measurement Alice has taken. In Bob's view, using the linear and trace-preserving property of $\mathcal{A}_m$, the density matrix of his qubit is \begin{align*} tr_a(\mathcal{A}_m\otimes\mathcal{B}(\rho^{ab})) &= \mathcal{B}tr_a(\mathcal{A}_m{\otimes}\mathcal{I}(\rho^{ab})) \\ &= \mathcal{B}tr_a(\rho^{ab}). \end{align*} Note that $tr$ and $Tr$ both denote the trace in this review. Here we see that the density matrix of Bob's qubit has nothing to do with Alice's measurement $\mathcal{A}_m$, therefore no information is transferred to Bob. Note that to reach the above conclusion, we have only used the linear and trace-preserving property of $\mathcal{A}_m$. Since any quantum operation is linear and completely positive, no-signaling should always hold, thus providing a method to determine the fidelity limit of a cloning machine.
The situation becomes more complicated when general no-signaling correlations are considered. It is found, however, that no-signaling correlations can be more non-local than those allowed by quantum mechanics. It thus seems that, besides no-signaling, some extra principle, like local orthogonality \cite{local-orthogonality}, should be satisfied in order that the no-signaling non-locality be realizable by quantum mechanics \cite{popescu-non-locality}.
Gisin studied the case of the $1\rightarrow 2$ qubit UQCM in \cite{Gisin19981}. We continue with the scheme in which Alice and Bob share a pair of entangled qubits. Now Alice performs a measurement of $\sigma_x$ or $\sigma_z$, and thus Bob's qubit is prepared in the corresponding state. Let there be a UQCM and suppose the input density matrix is $|\varphi\rangle\langle \varphi|=\frac{1}{2}(I+\bm{m}\cdot\bm{\sigma})$, with $\bm{m}$ being the Bloch vector of $|\varphi\rangle$; then after cloning the reduced states of parties a and b should read $\rho^a=\rho^b=\frac{1}{2}(I+\eta\bm{m}\cdot\bm{\sigma})$, yielding the fidelity $F=(1+\eta)/2$. According to the form of $\rho^a$ and $\rho^b$, the composite output state of the cloning machine should be \begin{equation*} \rho^{out}=\frac{1}{4}(I_4+\eta(\bm{m}\cdot\bm{\sigma}\otimes I+I\otimes\bm{m}\cdot\bm{\sigma})+\sum_{i,j=x,y,z}t_{ij}\sigma_i\otimes\sigma_j). \end{equation*} The universality of the UQCM requires \begin{equation}\label{NosignalCondition1} \rho_{out}(U\bm{m})=U\otimes U\rho_{out}(\bm{m})U^{\dagger}\otimes U^{\dagger}. \end{equation} The no-signaling condition requires \begin{equation}\label{NosignalCondition2} \frac{1}{2}\rho^{out}(+x)+\frac{1}{2}\rho^{out}(-x)=\frac{1}{2}\rho^{out}(+z)+\frac{1}{2}\rho^{out}(-z), \end{equation} where $\rho^{out}(+z)$ represents the output state of the UQCM under the condition that Alice has measured $\sigma_z$ and obtained the result $+$.
Also we should notice that $\rho^{out}$ must be positive. Putting the positivity condition together with (\ref{NosignalCondition1}) and (\ref{NosignalCondition2}), we get $\eta\leq\frac{2}{3}$ ($F\leq\frac{5}{6}$). Although we have found an upper bound on $F$, the question remains whether it can be reached. But we know it can, since a practical UQCM scheme with $F=\frac{5}{6}$ has been proposed \cite{PhysRevA.54.1844}.
Navez \emph{et al.} derived the upper bound on the fidelity of the $d$-dimensional $1\rightarrow2$ UQCM using the no-signaling condition \cite{PhysRevA.68.032313}, and the bound has also been proved to be tight. Simon \emph{et al.} have shown how the no-signaling condition together with the static properties of quantum mechanics can lead to properties of quantum dynamics \cite{PhysRevLett.87.170405}. By static properties we mean: 1) the states of quantum systems are described as vectors in Hilbert space; 2) the usual observables are represented by projections in Hilbert space and the measurement probabilities are described by the usual trace rule. These two properties together with the no-signaling condition imply that any quantum map must be completely positive and linear, which is what we already have in mind. This may help to understand why the bound derived from the no-signaling condition is always tight. An experimental test of the no-signaling theorem has also been performed in an optical system \cite{experimentalnosignaling}. From the no-signaling condition, the monogamy relation of the violation of Bell inequalities can be derived, and this can be used to obtain the optimal fidelity for asymmetric cloning \cite{bell-non-signaling}. Some general properties of the no-signaling theorem are presented in \cite{g-nonsignaling}. The relationship between optimal cloning and no-signaling is presented in \cite{Ghosh199917}. No-signaling is shown to be related to optimal state estimation \cite{Xiangbin-Hwang}. Also, no-signaling is equivalent to the optimality condition in minimum-error quantum state discrimination \cite{BaeHwang}; more results on those topics can be found in \cite{Baepreprint0} for the qubit case and \cite{Baepreprint1} for the general case. The optimal cloning of arbitrary fidelity by using no-signaling is studied in \cite{GedikPreprint}.
\subsection{No-cloning for unitary operators}
No-cloning is a fundamental theorem in quantum information science and quantum mechanics, and it can be manifested in various versions. By a simple calculation, using the definition of the CNOT gate, we find the following relations, \begin{eqnarray} CNOT(\sigma _x\otimes I)CNOT&=&\sigma _x\otimes \sigma _x, \nonumber \\ CNOT(\sigma _z\otimes I)CNOT&=&\sigma _z\otimes I, \nonumber \\ CNOT(I\otimes \sigma _x)CNOT&=&I\otimes \sigma _x, \nonumber \\ CNOT(I\otimes \sigma _z)CNOT&=&\sigma _z\otimes \sigma _z. \label{opnoclone} \end{eqnarray} These relations imply that the bit flip operation is copied forwards (from the first qubit to the second qubit), while the phase flip operation is copied backwards. But we cannot copy the bit flip and the phase flip operations simultaneously. Those properties are important for methods of quantum error correction and fault-tolerant quantum computation \cite{gottesman98}. This is a kind of no-cloning theorem for unitary operators. The quantum cloning of unitary operators is investigated in \cite{cloningunitary}.
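These relations are easy to verify numerically, as in the following illustrative sketch.
\begin{verbatim}
# Illustrative check of Eq. (opnoclone): the bit flip is copied forwards and
# the phase flip backwards under conjugation by CNOT.
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

assert np.allclose(CNOT @ np.kron(sx, I2) @ CNOT, np.kron(sx, sx))
assert np.allclose(CNOT @ np.kron(sz, I2) @ CNOT, np.kron(sz, I2))
assert np.allclose(CNOT @ np.kron(I2, sx) @ CNOT, np.kron(I2, sx))
assert np.allclose(CNOT @ np.kron(I2, sz) @ CNOT, np.kron(sz, sz))
\end{verbatim}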
\subsection{Other developments and related topics}
As a basic and fundamental theorem of quantum information and quantum mechanics, no-cloning is related with many topics and has various applications. In the following, we try to list some of those developments and closely related topics.
\begin{itemize} \item We may wonder what the classical counterpart of the quantum no-cloning theorem is. Some results are available. Different from the quantum case, classical broadcasting is possible with arbitrarily high resolution \cite{broadcastingclassic}. The difference between quantum copying and classical copying is studied in \cite{Shen2011}, see also \cite{Fenyes2012}. Classical no-cloning is also discussed in \cite{classical-no-cloning}.
\item Nonclassical correlations such as entanglement are also fundamental phenomena in the quantum world, and they can be related to the no-cloning theorem. The no-cloning theorem for entangled states is shown in \cite{prl81.4264}. The no-cloning theorem prohibits perfect copying of nonorthogonal states; as to orthogonal ones, if we are allowed to use arbitrary unitary transformations, cloning can be done deterministically. However, in connection with entanglement cloning, it is shown that even orthogonal states in composite systems cannot be cloned \cite{Morprl1998}; related results are also available in \cite{prl77.3264,prl77.3265}. It is also shown that the no-cloning theorem is, in principle, equivalent to the no-increasing of entanglement \cite{ISI:000075192800004}. By studying quantum correlations beyond quantum entanglement, the equivalence between locally broadcastable and broadcastable is investigated in \cite{ISI:000291254100001}; see also a review about the quantumness of correlations \cite{RMP-discord}. The combination of no-cloning, no-broadcasting and the monogamy of entanglement can be found in \cite{ISI:000241723100033}.
\item The no-cloning theorem can be described in other settings and can be applied to other cases. Some of those results are the following. The no-signaling principle and state distinguishability are studied in \cite{Baepreprint2}. Linear assignment maps for correlated system-environment states are studied in \cite{ISI:000274001500049}, where a connection between the violation of positivity of these linear assignments and the no-broadcasting theorem is found. The transformations which preserve the commutativity of quantum states are studied in \cite{ISI:000268481100011}. In relation to quantum cloning, quantum channels are studied in \cite{Bradler2011}. The no-cloning theorem means that two copies cannot be obtained out of a single copy, and by studying the information content, measured by the Holevo quantity, of one copy and of two copies, a condition for broadcasting of states can be obtained \cite{ISI:000237090400009}. No-cloning can also be related to bounds on the quantum capacity \cite{ISI:000244532300030}. No-cloning studied via the wave-packet collapse of quantum measurement is presented by Luo in \cite{Luo2010}. It is also pointed out that the no-cloning of non-orthogonal states does not necessarily mean that the inner product of quantum states should be preserved \cite{ISI:000231391900002}. We remark that the no-cloning theorem was also pioneered in \cite{dieks-no-cloning,yuen-no-cloning}, which interested readers may consult; see also \cite{Wootters2009}.
\item Since the first version of the no-cloning theorem, either by drawing inspiration from it or by generalizing it, several similar ``no-'' theorems have come up, which we list below. 1) No-deleting: being the reverse process of quantum cloning, it is pointed out that it is also impossible to delete an unknown quantum state \cite{no-deleting}. 2) No-imprinting: see \cite{no-imprinting}; related results can also be found in \cite{ISI:000177872600044}. 3) No-stretching, which is a geometrical interpretation of the no-cloning theorem \cite{DAriano2009}. 4) No-splitting, which states that quantum information cannot be split into complementary parts \cite{ISI:000235859600008}. The impossibility of reversing or complementing an unknown quantum state is a generalization of the no-cloning theorem \cite{ISI:000232866900008}.
\item Finally, let us remark on some applications of the no-cloning theorem. We know that the no-cloning theorem plays a key role in quantum cryptography, which is close to practical industrialization. Quantum key distribution (QKD), such as the BB84 protocol \cite{Bennett1984}, provides unconditional security for secret key sharing. The security of quantum key distribution is based on the no-cloning theorem: if one could copy the transferred state perfectly, one could make arbitrarily many copies and thereby determine its exact form. For the quantum cryptography protocol E91 \cite{PhysRevLett.67.661}, the security is based on the violation of the Bell inequality \cite{Bell1964}. The unified picture of the no-broadcasting theorem ties those theorems together; this result is also shown in \cite{Acin2004}. The study of quantum cryptography, on the other hand, suggests that the ultimate physical limits of privacy might be achievable under very weak assumptions \cite{Ekert-Renner-Privacy}. One recent development is that probabilistic super-replication of quantum information, a different version of quantum cloning with a limited aim, is possible \cite{qreplication-yao}. Remarkably, these phenomena can be applied to achieve the ultimate limit of precision in metrology provided by quantum mechanics, which is the Heisenberg limit of quantum metrology.
We would like to emphasize that the quantum cloning can be applied in quantum computation \cite{GalvaoHardy}.
\end{itemize}
\section{Universal quantum cloning machines}
As we have shown in the last section, there are various no-cloning theorems implied by the laws of quantum mechanics. They imply that one cannot clone an arbitrary qubit perfectly. On the other hand, approximate quantum cloning is not prohibited. So it is possible to obtain several copies that approximate the original state, with fidelity $F<1$. Hence a question naturally arises: can we achieve the same fidelity for any state on the Bloch sphere, in the qubit case, or more generally, for any state in a $d$-dimensional Hilbert space? And what is the best fidelity we can get?
A cloning machine that achieves equal fidelity for every state is called a universal quantum cloning machine (UQCM). Cloning is equivalent to distributing information to different receivers, and it is natural to require the same performance for every input state, since we do not have any specific information about the input state in advance. According to the no-cloning theorem, it is expected that the original input state will be destroyed and become one of the output copies. In the simplest case, one qubit is cloned into two copies. Those two copies can be identical to each other, i.e., symmetric, and of course they differ from the original input state. On the other hand, the two copies can also be different: both are similar to the original input state but with different similarities, in which case we call them asymmetric. In this sense, there are symmetric and asymmetric UQCMs.
\subsection{Symmetric UQCM for qubit}
Consider quantum cloning from 1 qubit to 2 qubits; a trivial scheme can be constructed simply as follows:
(1), Measure the input state $|\vec{a}\rangle$ in a random basis $\{|\pm\vec{b}\rangle\}$. Here the vectors are on the Bloch sphere $S^2$. The probability of obtaining the result $|\pm\vec{b}\rangle$ is $p_\pm=\left(1\pm\vec{a}\cdot\vec{b}\right)/2$.
(2), Then duplicate the state $|\pm\vec{b}\rangle$ according to the measurement result. The fidelity is $F_+=|\langle\vec{a}|+\vec{b}\rangle|^2$ and $F_-=|\langle\vec{a}|-\vec{b}\rangle|^2$, respectively.
Averaging over the random measurement direction $\vec{b}$ on the Bloch sphere (with normalized measure), the fidelity is \begin{equation} F_{trivial}=\int_{S^2}\left(p_+ F_++p_- F_-\right)\mathrm{d}\vec{b}=\frac{1}{2}+\frac {1}{2}\int_{S^2}\left(\vec{a}\cdot\vec{b}\right)^2 \mathrm{d}\vec{b}=\frac{2}{3}.\label{Ftrivial} \end{equation}
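This average is easy to reproduce by a Monte Carlo estimate, as in the following sketch (uniform sampling of the measurement direction on the Bloch sphere is assumed).
\begin{verbatim}
# Monte Carlo sketch of the trivial measure-and-prepare fidelity (Ftrivial);
# the estimate tends to 2/3.
import numpy as np

rng = np.random.default_rng(0)
b = rng.normal(size=(200000, 3))
b /= np.linalg.norm(b, axis=1, keepdims=True)     # random measurement directions
a = np.array([0.0, 0.0, 1.0])                     # Bloch vector of the input state

cos = b @ a
print(np.mean(0.5 + 0.5 * cos**2))                # ~ 2/3
\end{verbatim}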
The problem is: can we design a better cloning machine? Bu\v{z}ek and Hillery \cite{PhysRevA.54.1844} proposed an optimal UQCM, namely, a unitary transformation on a larger Hilbert space:
\begin{align}
U|0\rangle_1|0\rangle_2|0\rangle_R&=\sqrt{\frac{2}{3}}|0\rangle_1|0\rangle_2|0\rangle_R+\sqrt{\frac{1}{6}}(|0\rangle_1|1\rangle_2+|1\rangle_1|0\rangle_2)|1\rangle_R,\\
U|1\rangle_1|0\rangle_2|0\rangle_R&=\sqrt{\frac{2}{3}}|1\rangle_1|1\rangle_2|1\rangle_R+\sqrt{\frac{1}{6}}(|0\rangle_1|1\rangle_2+|1\rangle_1|0\rangle_2)|0\rangle_R. \end{align} On the l.h.s of the equations, the first qubit 1 is the input state, the second is a blank state and
the third, with subindex $R$, is the ancillary state of the quantum cloning machine itself. By the unitary transformation, as demanded by quantum mechanics, we find the output state on the r.h.s. of the equations. The original qubit is destroyed and becomes one of the output copies in party 1, while the blank state is turned into the other copy in party 2; the ancillary state $R$ may or may not be changed and is traced out at the output. Obviously the two output states are identical, so this is a symmetric quantum cloning machine.
For an arbitrary normalized pure input state $|\psi\rangle=a|0\rangle+b|1\rangle$, since quantum mechanics is linear, applying $U$ to the state simply by following the above cloning transformation yields the copies. After tracing out the ancillary state, the output density matrix takes the form: \begin{equation}
\rho_{out}=\frac{2}{3}|\psi\rangle\langle\psi|\otimes|\psi\rangle\langle\psi|+\frac{1}{6}(|\psi\rangle|\psi^\bot\rangle+|\psi^\bot\rangle|\psi\rangle)
(\langle\psi|\langle\psi^\bot|+\langle\psi^\bot|\langle\psi|).\label{1to2output} \end{equation}
Here $|\psi^\bot\rangle=b^*|0\rangle-a^*|1\rangle$ is orthogonal to $|\psi\rangle$. We can further trace out one of the two states to get the single copy density matrix \begin{equation}
\rho_1=\rho_2=\frac{2}{3}|\psi\rangle\langle\psi|+\frac{1}{6}I. \end{equation}
Note this density matrix is of the form $\eta|\psi\rangle\langle\psi|+\frac{1-\eta}{d}I$ (here $\eta =2/3$ and $d=2$), with $\eta$ called the ``shrinking factor''. This form is a linear combination of the original density matrix $|\psi\rangle\langle\psi|$ and the identity $I$, which corresponds to the completely mixed state and acts like white noise.
In fact, in the original papers, the efficiency of the cloning machine is described by the Hilbert-Schmidt norm $d_{HS}^2={\rm Tr}[(\rho_{in}-\rho_{out})^\dagger(\rho_{in}-\rho_{out})]$, which also quantifies the distance between two quantum states. The fidelity is a generally accepted figure of merit of quantum cloning \cite{ISI:000165200800033}. We will generally use the fidelity as the measure of the quality of the copies in this review.
We can obtain the single copy fidelity \begin{equation}
F_1=\langle\psi|\rho_1|\psi\rangle=\frac{5}{6}, \end{equation}
and the two copies fidelity (global fidelity), \begin{equation}
F_2=\,^{\otimes2}\langle\psi|\rho_{out}|\psi\rangle^{\otimes2}=\frac{2}{3}. \end{equation}
The single copy fidelity provides a measure of similarity between the state $\rho _1$ and the original input state. If it is one, the two states are the same, while if it is zero, the two states are orthogonal. One point to be noticed is that the fidelity between the completely mixed state and $|\psi \rangle $ is $1/2$. We know that the completely mixed state contains no information about the input state, so fidelity $1/2$ corresponds to a copy carrying no information about the input. Similarly, the global fidelity quantifies the similarity between the two-qubit output state and the ideal cloning case. If it is one, we have two perfect copies. We remark that the single copy fidelity does not depend on the input state, so the quality of the copies is state-independent. In this sense, the corresponding cloning machine is ``universal''. One may find that the above cloning machine achieves a higher fidelity than the trivial one, and it is proved to be optimal \cite{PhysRevLett.79.2153,Bruss1998,Gisin19981}.
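As a quick numerical cross-check of these fidelities, the following sketch applies the cloning transformation by linearity to an arbitrarily chosen input state (the amplitudes below are ours) and reproduces $F_1=5/6$ and $F_2=2/3$.
\begin{verbatim}
# Sketch: apply the 1 -> 2 qubit cloning transformation by linearity and
# check the single-copy and two-copy fidelities.
import numpy as np

def ket(bits):                                   # |q1 q2 R> as a length-8 vector
    v = np.zeros(8)
    v[int(bits, 2)] = 1.0
    return v

U0 = np.sqrt(2/3) * ket('000') + np.sqrt(1/6) * (ket('011') + ket('101'))
U1 = np.sqrt(2/3) * ket('111') + np.sqrt(1/6) * (ket('010') + ket('100'))

a, b = 0.6, 0.8j                                 # arbitrary normalized amplitudes
psi = np.array([a, b])
out = a * U0 + b * U1                            # linearity of the cloning unitary

rho = np.outer(out, out.conj()).reshape(2, 2, 2, 2, 2, 2)
rho12 = np.trace(rho, axis1=2, axis2=5)          # trace out the ancilla R
rho1 = np.trace(rho12, axis1=1, axis2=3)         # one copy

F1 = np.real(psi.conj() @ rho1 @ psi)
F2 = np.real(np.kron(psi, psi).conj() @ rho12.reshape(4, 4) @ np.kron(psi, psi))
print(F1, F2)                                    # 0.8333..., 0.6666...
\end{verbatim}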
Gisin and Massar \cite{PhysRevLett.79.2153} then generalized the cloning machine to the $N\rightarrow M$ case, that is, $M$ copies are created from $N$ identical qubits. Their cloning machine is the transformation: \begin{equation}
|N\psi\rangle|R\rangle \rightarrow\sum_{j=0}^{M-N}\alpha_j|(M-j)\psi,j\psi^\bot\rangle|R_j\rangle \end{equation} where \begin{equation} \alpha_j=\sqrt{\frac{N+1}{M+1}}\sqrt{\frac{(M-N)!(M-j)!}{(M-N-j)!M!}}
\end{equation} and $|(M-j)\psi,j\psi^\bot\rangle$ denotes the normalized symmetric state with $M-j$ states $|\psi\rangle$ and $j$ states $|\psi^\bot\rangle$. Then the single copy fidelity is \begin{equation} F=\frac{M(N+1)+N}{M(N+2)}.\label{FNtoM} \end{equation}
In \cite{PhysRevLett.79.2153} the optimality of this cloning machine is proved for the cases $N=1,2,\dots,7$. The complete proof of the optimality was given in \cite{PhysRevLett.81.2598}, where the connection between optimal quantum cloning and optimal state estimation is established. The upper bound on the fidelity of the $N$ to $M$ UQCM is found to be exactly equal to (\ref{FNtoM}), hence the Gisin-Massar UQCM is optimal.
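For later reference, the qubit fidelity (\ref{FNtoM}) can be tabulated with a small helper like the following sketch.
\begin{verbatim}
# A small helper evaluating the N -> M qubit fidelity (FNtoM).
from fractions import Fraction

def F_qubit(N, M):
    return Fraction(M * (N + 1) + N, M * (N + 2))

print(F_qubit(1, 2))             # 5/6, the 1 -> 2 value above
print(F_qubit(2, 3))             # 11/12
print(float(F_qubit(1, 10**6)))  # approaches 2/3, the measurement limit, as M grows
\end{verbatim}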
\subsection{Symmetric UQCM for qudit}
For further generalization, we may seek cloning machines for $d$-level systems. Bu\v{z}ek and Hillery proposed a 1 to 2 $d$-dimensional UQCM \cite{ISI:000084208300020,PhysRevLett.81.5003}: for a basis state $|i\rangle$, the transformation is \begin{equation}
|i\rangle|0\rangle|R\rangle\rightarrow\frac{2}{\sqrt{2(d+1)}}|i\rangle|i\rangle|R_i\rangle+\frac{1}{\sqrt{2(d+1)}}\sum_{i\neq j}(|i\rangle|j\rangle+
|j\rangle|i\rangle)|R_j\rangle. \end{equation}
Here $\{|R_i\rangle\}$ is a set of orthonormal ancillary states. The resulting single copy fidelity is $F=(d+3)/(2d+2)$.
Later, a general $N$ to $M$ UQCM was constructed in a concise way by Werner \cite{PhysRevA.58.1827}, see also \cite{zanardi} for related results. For $N$ identical pure input states $|\psi\rangle$, the output density matrix is: \begin{equation}
\rho_{out}=\frac{d[N]}{d[M]}s_M\left( (|\psi\rangle\langle\psi|)^{\otimes N}\otimes I^{\otimes(M-N)}\right)s_M \label{Werner} \end{equation} where $d[N]=\tbinom{d+N-1}{N}$, (we also use notation $d[N]=C_{d+N-1}^N$), and $s_M$ is the projection onto the symmetric subspace of $\mathcal{H}^{\otimes M}$. As an example, \begin{equation}
s_2=|00\rangle\langle00|+|11\rangle\langle11|+\frac{1}{2}(|01\rangle+|10\rangle)(\langle01|+\langle10|). \end{equation} If we insert this expression into formula (\ref{Werner}), we can get exactly the expression of output density matrix (\ref{1to2output}). So this UQCM can recover the $N=1$, $M=2$, $d=2$ one.
For N to M case, the single copy fidelity is shown to be \begin{align}
F_1&=\frac{d[N]}{d[M]}{\rm Tr}\left[\left(|\psi\rangle\langle\psi|\otimes I^{\otimes(M-1)}\right)s_M\left( (|\psi\rangle\langle\psi|)^{\otimes N}\otimes I^{\otimes(M-N)}\right)s_M\right]\nonumber\\ &=\frac{N(M+d)+M-N}{M(N+d)}.\label{UQCM1F} \end{align} In \cite{PhysRevA.58.1827}, this single copy fidelity is proved to be optimal under the restriction that the operation is a mapping into the symmetric Hilbert space. Generally, there might exist a cloning machine performing better without this constraint. Keyl and Werner studied the more general case and proved this cloning machine is indeed the unique optimal UQCM \cite{ISI:000081019000005}. As a special case, if we let $N=1$, $M=2$, the fidelity apparently reduces to the Bu\v{z}ek and Hillery 1 to 2 d-dimensional UQCM: $F=(d+3)/(2d+2)$. And if we take the $M\rightarrow\infty$ limit, the fidelity turns out to be $F=(N+1)/(N+d)$, this agrees with the state estimation result by Massar and Popescu \cite{Massar1995}.
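The construction (\ref{Werner}) is straightforward to implement numerically for small systems; the following sketch (with an assumed random input state) reproduces the single copy fidelity (\ref{UQCM1F}) for $d=3$, $N=1$, $M=2$.
\begin{verbatim}
# Numerical sketch of Werner's construction for d = 3, N = 1, M = 2, checked
# against the single-copy fidelity formula (UQCM1F).
import numpy as np
from itertools import permutations
from math import comb, factorial

def sym_projector(d, M):
    """Projector onto the symmetric subspace of (C^d)^{otimes M}."""
    dim = d ** M
    s = np.zeros((dim, dim))
    for perm in permutations(range(M)):
        for idx in range(dim):
            digits = np.unravel_index(idx, (d,) * M)
            new = tuple(digits[perm[k]] for k in range(M))
            s[np.ravel_multi_index(new, (d,) * M), idx] += 1.0
    return s / factorial(M)

d, N, M = 3, 1, 2
rng = np.random.default_rng(1)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

sM = sym_projector(d, M)
inp = np.kron(np.outer(psi, psi.conj()), np.eye(d ** (M - N)))
rho_out = comb(d + N - 1, N) / comb(d + M - 1, M) * (sM @ inp @ sM)

rho1 = np.trace(rho_out.reshape(d, d, d, d), axis1=1, axis2=3)   # one copy
F1 = np.real(psi.conj() @ rho1 @ psi)
print(F1, (N * (M + d) + M - N) / (M * (N + d)))                 # both 0.75
\end{verbatim}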
We are also interested in the $M$ copies fidelity (global fidelity), which can be found as follows \cite{PhysRevA.58.1827}, \begin{align}
F_M&=\frac{d[N]}{d[M]}tr\left[\left(|\psi\rangle\langle\psi|\right)^{\otimes M}s_M\left( (|\psi\rangle\langle\psi|)^{\otimes N}\otimes I^{\otimes(M-N)}\right)s_M\right]\nonumber\\&=\frac{d[N]}{d[M]}tr\left((|\psi\rangle\langle\psi|)^{\otimes M}\right)\nonumber\\&=\frac{d[N]}{d[M]}=\frac{M!(N+d-1)!}{N!(M+d-1)!}. \label{UQCMMF} \end{align}
The fidelity (\ref{UQCM1F}) quantifies the similarity between a single copy from the output state and one input state, while the global fidelity (\ref{UQCMMF}) quantifies the similarity between the whole $M$ copies of the output and $M$ ideal copies of the input state $|\psi \rangle $. More generally, we may choose arbitrary $L$ copies from the output state and quantify how close this state is to $L$ ideal copies of the input state $|\psi \rangle $. Recently, Wang \emph{et al.} \cite{PhysRevA.84.034302} proposed a more general definition, the ``$L$ copies fidelity'': $F_L=^{\otimes L}\langle\psi|\rho_{out,L}|\psi\rangle^{\otimes L}$, where $\rho_{out,L}$ is the $L$-copy reduced density matrix of the output. The expression is calculated as, \begin{eqnarray} F_L=\frac {(d+N-1)!(M-N)!(M-L)!}{(d+M-1)!M!N!}\times\sum _{m_1}\frac{(M-m_1+d-2)!(m_1!)^2}{(m_1-L)!(m_1-N)!(d-2)!(M-m_1)!}. \end{eqnarray} For $L=1$ and $L=M$, the expression reduces to the results presented above (\ref{UQCM1F},\ref{UQCMMF}). For the special case $N=1$, the fidelity can be simplified by evaluating the summation explicitly; it reads, \begin{eqnarray} F_L(N=1)=\frac {L!d![L(d+M)+M-L] }{(d+L)!M}. \end{eqnarray}
Fan \emph{et al.} \cite{PhysRevA.64.064301} proposed another version of the UQCM, written in a more explicit form: let $\bm{n}=(n_1,\dots,n_d)$ denote a $d$-component vector, and let $|\bm{n}\rangle=|n_1,\dots,n_d\rangle$ be the completely symmetric and normalized state with $n_i$ qudits in the state $|i\rangle$. These states form an orthonormal basis of the symmetric Hilbert space $\mathcal{H}^{\otimes M}_+$. Then for an arbitrary input state $|\psi\rangle=\sum^d_{i=1}x_i|i\rangle$, the $N$-fold direct product $|\psi\rangle^{\otimes N}$ can be expanded as: \begin{equation}
|\psi\rangle^{\otimes N}=\sum_{\bm{n}}^N\sqrt{\frac{N!}{n_1!\dots n_d!}}x_1^{n_1}\dots x_d^{n_d}|\bm{n}\rangle. \label{expansion} \end{equation} The cloning transformation takes the form, \begin{equation}
|\bm{n}\rangle|R\rangle\rightarrow\sum_{\bm{j}}^{M-N}\alpha_{\bm{n}\bm{j}}|\bm{n}+\bm{j}\rangle|R_{\bm{j}}\rangle\label{FanClone} \end{equation}
The notation $\sum_{\bm{n}}^N$ means summation over all possible vectors $\bm{n}$ with $n_1+\dots +n_d=N$ and the $|R_{\bm{j}}\rangle$ is a set of orthogonal normalized ancillary states, as usual. The coefficients $\alpha_{\bm{n}\bm{j}}$ are: \begin{equation} \alpha_{\bm{n}\bm{j}}=\sqrt{\frac{(M-N)!(N+d-1)!}{(M+d-1)!}}\sqrt{\prod_{k=1}^d\frac{(n_k+j_k)!}{n_k!j_k!}}. \label{Fancoefficients} \end{equation}
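To make the coefficients (\ref{Fancoefficients}) concrete, consider the simplest case $d=2$, $N=1$, $M=2$: for the input basis state $|\bm{n}\rangle=|1,0\rangle$ the transformation (\ref{FanClone}) becomes
\begin{equation}
|1,0\rangle|R\rangle\rightarrow\sqrt{\frac{2}{3}}\,|2,0\rangle|R_{(1,0)}\rangle+\sqrt{\frac{1}{3}}\,|1,1\rangle|R_{(0,1)}\rangle,
\end{equation}
since $\alpha_{(1,0)(1,0)}=\sqrt{1!\cdot2!/3!}\sqrt{2}=\sqrt{2/3}$ and $\alpha_{(1,0)(0,1)}=\sqrt{1!\cdot2!/3!}=\sqrt{1/3}$, which are just the familiar coefficients of the $1\rightarrow 2$ qubit UQCM.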
This UQCM achieves the same fidelities as the UQCM given by Werner \cite{PhysRevA.58.1827}, and is therefore optimal. Later Wang \emph{et al.} \cite{PhysRevA.84.034302} proved that these two cloning machines are indeed equivalent by showing that the output states are the same. First, divide the symmetric state $|\vec {m}\rangle $ of $M$ qudits into two parts with $N$ qudits and $M-N$ qudits, respectively, \begin{eqnarray}
|\vec {m}\rangle =\frac {1}{\sqrt {C_M^N}}\sum _{\vec {k}}^{M-N}\prod _j\sqrt {\frac {m_j!}{(m_j-k_j)!k_j!}}|\vec {m}-\vec {k}\rangle
|\vec {k}\rangle . \label{splitting} \end{eqnarray} The symmetry operator $s_M$ can be reformulated accordingly. After calculation, the output density matrix in (\ref{Werner}) is shown to be:
\begin{eqnarray}
\rho_{out}=\frac{N!(M-N)!(N+d-1)!}{(M+d-1)!}\sum _{\vec {m},\vec {m}'}^M|\vec {m}\rangle \langle
\vec {m}'| \times \left( \sum _{\vec {k}}^{M-N}\prod _j\frac {x_j^{m_j-k_j}x^{*(m_j'-k_j)}\sqrt {m_j!m_j'!}} {(m_j-k_j)!(m'_j-k_j)!k_j!}\right). \end{eqnarray}
For the cloning machine (\ref{FanClone}), we can get the output density matrix after tracing out the ancillary state: \begin{eqnarray} {\rho '}^{out}=\frac{N!(M-N)!(N+d-1)!}{(M+d-1)!}\eta ^2\sum _{\vec {n},\vec {n}'}^N\sum _{\vec {k}}^{M-N}
|\vec {n}+\vec {k}\rangle \langle \vec {n}'+\vec {k}| \times \left( \prod _j\frac {x_j^{n_j}x_j^{*(n'_j)}\sqrt {(n_j+k_j)!(n_j'+k_j)!}}{n_j!n_j'!k_j!}\right). \end{eqnarray} These two expressions are apparently equivalent.
In \cite{PhysRevA.84.034302}, a unified form of the symmetric UQCM is presented; up to an unimportant overall normalization factor, the transformation is \begin{equation}
|\psi\rangle^{\otimes N}|\Phi^+\rangle^{\otimes (M-N)}\rightarrow \left(s_M\otimes I^{\otimes (M-N)}\otimes I^{\otimes N}\right)|\psi\rangle^{\otimes N}|\Phi^+\rangle^{\otimes (M-N)}.\label{UnifiedUQCM} \end{equation}
This cloning machine is realized by a superposition of states in which some of the input states
are permuted into one part of the maximally entangled states.
Since $s_M=s_M(I^{\otimes N}\otimes s_{M-N})$, and the action of $s_{M-N}$ on the $M-N$ maximally entangled states is $(s_{M-N}\otimes I^{\otimes(M-N)})|\Phi^+\rangle^{\otimes(M-N)}$, the cloning transformation may be rewritten as: \begin{eqnarray}
&&\left( s_{M}\otimes I^{\otimes (M-N)}\right) |\vec {n}\rangle |\Phi ^+\rangle ^{\otimes (M-N)} \nonumber \\ &&=\left( s_{M}\otimes I^{\otimes (M-N)}\right)
|\vec {n}\rangle \sum _{\vec {k}}^{M-N}|\vec {k}\rangle |\vec {k}\rangle \nonumber \\
&&=\sum _{\vec {k}}^{M-N}\sqrt {\prod _j\frac {(n_j+k_j)!}{n_j!k_j!}}|\vec {n}+\vec {k}\rangle
|\vec {k}\rangle. \end{eqnarray} In fact this coincides with the UQCM (\ref{FanClone}). Here the complicated coefficients (\ref{Fancoefficients}) proposed for the optimal cloning machine are obtained naturally. Also it can easily be seen that the transformation (\ref{UnifiedUQCM}) is equivalent to the construction (\ref{Werner}) if the ancillary states are traced out. So the UQCM can be constructed simply: we symmetrize the $N$ input pure states together with halves of some maximally entangled states, while the other halves of the maximally entangled states serve as ancillary states. This dramatically simplifies the theoretical construction of the UQCM and makes its physical implementation easier. If maximally entangled states are available, the UQCM amounts to symmetrizing the input pure states with one side of the maximally entangled states. Indeed, some experiments follow this scheme \cite{667603820020426}, which we will review in the section on physical realizations.
\iffalse \begin{figure}
\caption{Schematic $N\rightarrow M$ qudit UQCM. For initial states, there are $N$ identical pure states $|\phi \rangle $ and $M-N$ maximally entangled states. One part of these entangled states are processed by a symmetric operator $s_M$ together with the $N$ input states, see Ref.\cite{PhysRevA.84.034302}.}
\label{UQCMpic}
\end{figure} \fi
\subsection{Asymmetric quantum cloning}
In the previous subsections we considered symmetric cloning machines, which provide identical output copies. However, we may naturally try to distribute information unequally among the copies. The 1 to 2 optimal asymmetric qubit cloner was found by Niu and Griffiths \cite{PhysRevA.58.4377}, Cerf \cite{PhysRevLett.84.4497} and Bu\v{z}ek, Hillery and Bednik \cite{ISI:000074966200008}. Their formalisms are slightly different, but they lead to the same relation between A's fidelity $F_A$ and B's fidelity $F_B$: \begin{equation} \sqrt{(1-F_A)(1-F_B)}\geq F_A+F_B-\frac{3}{2}\label{AsyFrelation} \end{equation} So a tradeoff relation exists for the two fidelities: if one fidelity is large, the other must be correspondingly small. This will be discussed further in the following.
The transformation can be written in the following form according to Bu\v{z}ek \emph{et al.}\cite{ISI:000074966200008}: \begin{equation}
|\psi\rangle_A(a|\Phi^+\rangle_{BR}+b|0\rangle_B(|0\rangle_R+|1\rangle_R)/\sqrt{2})\rightarrow a|\psi\rangle_A|\Phi^+\rangle_{BR}+b|\psi\rangle_B|\Phi^+\rangle_{AR}\label{BuzekAsy} \end{equation}
Here $R$ is an ancillary state and $|\Phi^+\rangle=(|00\rangle+|11\rangle)/\sqrt{2}$ is a Bell state. The normalization condition of the input state requires $a^2+ab+b^2=1$, which is the equation of an ellipse. The reduced density matrices of A and B are $\rho_{A,B}=F_{A,B}|\psi\rangle\langle\psi|+(1-F_{A,B})|\psi^\bot\rangle\langle\psi^\bot|$, where \begin{equation} F_A=1-b^2/2, F_B=1-a^2/2\label{AsyF}, \end{equation} which are just the fidelities of $A$ and $B$, respectively. It is easy to check that (\ref{AsyF}) satisfies the inequality (\ref{AsyFrelation}). As special cases, if $b=0$, then $F_A=1$, $F_B=1/2$, so all the information goes to $A$ while the output of $B$ is independent of the input. If $a^2=b^2=1/3$, it reduces to the symmetric UQCM case, with fidelity $F_A=F_B=5/6$.
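In fact the machine saturates the tradeoff relation (\ref{AsyFrelation}): using the normalization $a^2+ab+b^2=1$ (with $a,b\geq 0$), one finds
\begin{equation}
\sqrt{(1-F_A)(1-F_B)}=\sqrt{\frac{b^2}{2}\cdot\frac{a^2}{2}}=\frac{ab}{2}=\frac{1-a^2-b^2}{2}=F_A+F_B-\frac{3}{2},
\end{equation}
so the inequality holds with equality, as expected for an optimal asymmetric cloner.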
For completeness, here we would like to present a slightly different form for the asymmetric quantum cloning which is named by Cerf as a Pauli channel\cite{PhysRevLett.84.4497}. We start from qubit case. An arbitrary quantum pure state takes the form, \begin{eqnarray}
|\psi \rangle =x_0|0\rangle +x_1|1\rangle , ~~
\sum _j|x_j|^2=1. \end{eqnarray} A maximally entangled state is written as \begin{eqnarray}
|\Psi ^+\rangle =
\frac {1}{\sqrt{2}}(|00\rangle +|11\rangle ). \end{eqnarray} We can write the complete quantum state of three particles as \begin{eqnarray}
|\psi \rangle _A|\Psi ^+\rangle _{BC} &=&\frac {1}{2}[
|\Psi ^+\rangle _{AB}|\psi \rangle _C \nonumber \\ &&+(I\otimes X)
|\Psi ^+\rangle _{AB}X|\psi \rangle _C \nonumber \\ &&+(I\otimes Z)
|\Psi ^+\rangle _{AB}Z|\psi \rangle _C \nonumber \\ &&+(I\otimes XZ)
|\Psi ^+\rangle _{AB}XZ|\psi \rangle _C ], \label{2level} \end{eqnarray} where $I$ is the identity, $X,Z$ are two Pauli matrices and $XZ$ is another Pauli matrix up to an overall factor $i$.
Denote the unitary transformation $U_{m,n}=X^mZ^n$, where $m,n=0,1$, and the relation (\ref{2level}) can be rewritten as \begin{eqnarray}
|\psi \rangle _A|\Psi ^+\rangle _{BC} &=&\frac {1}{2}\sum _{m,n} (I\otimes U_{m,-n}\otimes U_{m,n})
|\Psi ^+\rangle _{AB}|\psi \rangle _C . \label{2levela} \end{eqnarray} Here we remark that $Z^{-1}=Z$ for a 2-level system. We write it in this form since this relation can be generalized directly to the general $d$-dimensional system.
Now, suppose we do unitary transformation in the following form \begin{eqnarray} &&\sum _{\alpha ,\beta } a_{\alpha ,\beta}(U_{\alpha , \beta }
\otimes U_{\alpha ,-\beta }\otimes I) |\psi \rangle _A|\Psi ^+\rangle _{BC} \nonumber \\ &=&\frac {1}{2} \sum _{\alpha ,\beta ,m,n} (U_{\alpha , \beta
}\otimes U_{\alpha , -\beta }U_{m,-n}\otimes U_{m,n})|\Psi ^+\rangle _{AB}|\psi \rangle _C \nonumber \\ &=&\sum _{m,n} b_{m,n}(I\otimes U_{m,-n}\otimes U_{m,n})
|\Psi ^+\rangle _{AB}|\psi \rangle _C , \label{unitary} \end{eqnarray}
where we defined \begin{eqnarray} b_{m,n}=\frac {1}{2}\sum _{\alpha ,\beta } (-1)^{\alpha n-\beta m}a_{\alpha ,\beta}. \label{abrelation} \end{eqnarray} The amplitudes should be normalized, $\sum _{\alpha , \beta }|a_{\alpha ,\beta }|^2= \sum _{m,n}|b_{m,n}|^2=1$. This is actually the asymmetric quantum cloning machine introduced by Cerf \cite{PhysRevLett.84.4497}. We find that the quantum states of $A$ and $C$ now take the form \begin{eqnarray}
\rho _A&=&\sum _{\alpha ,\beta }|a_{\alpha ,\beta }|^2U_{\alpha ,\beta }
|\psi \rangle \langle \psi |U_{\alpha , \beta }^{\dagger } , \\
\rho _C&=&\sum _{m,n}|b_{m,n}|^2U_{m,n}
|\psi \rangle \langle \psi |U_{m,n}^{\dagger }. \end{eqnarray} The quantum state of $A$ is related to the quantum state of $C$ through the relationship between $a_{\alpha ,\beta }$ and $b_{m,n}$.
The quantum state $\rho _A$ is the original quantum state after the quantum cloning. The quantum state $\rho _C$ is the copy.
Now, let us see a special case, \begin{eqnarray} b_{0,0}=1, ~~~b_{0,1}=b_{1,0}=b_{1,1}=0. \end{eqnarray} Correspondingly, we can choose \begin{eqnarray} a_{0,0}=a_{0,1}=a_{1,0}=a_{1,1}=\frac {1}{2}. \end{eqnarray} So, we know the quantum states of $A$ and $C$ have the form \begin{eqnarray}
\rho _A=\frac {1}{2}I, ~~~\rho _C=|\psi \rangle \langle \psi |. \end{eqnarray}
As a quantum cloning machine, this means that the original quantum state in $A$, $|\psi \rangle $, is completely destroyed, while a perfect copy appears in $C$.
These results can be generalized directly to $d$-dimensional systems.
The asymmetric cloning machine was generalized to the $d$-dimensional case by Braunstein, Bu\v{z}ek and Hillery \cite{Braunstein2001}. The setup is almost the same, with $|\Phi^+\rangle$ now defined in higher dimensions, $|\Phi ^+\rangle =\frac{1}{\sqrt{d}}\sum_{j=1}^d|jj\rangle$, and hence the normalization relation is $a^2+b^2+2ab/d=1$. The output reduced density matrices are written in the form with a shrinking factor: \begin{equation}
\rho_A=(1-b^2)|\psi\rangle\langle\psi|+b^2\frac{I}{d}, \rho_B=(1-a^2)|\psi\rangle\langle\psi|+a^2\frac{I}{d}. \end{equation} Hence the fidelities are: \begin{equation} F_A=1-b^2\frac{d-1}{d}, F_B=1-a^2\frac{d-1}{d}. \end{equation} If $a^2=b^2=d/(2d+2)$, it reduces to the symmetric 1 to 2 $d$-dimensional UQCM case, with fidelity $(d+3)/(2d+2)$. A trade-off relation between $F_A$ and $F_B$ can be found as follows \cite{ISI:000278182800016}: \begin{equation} \frac{(\sqrt{(d+1)F_A-1}+\sqrt{(d+1)F_B-1})^2}{2(d+1)}+\frac{(\sqrt{(d+1)F_A-1}-\sqrt{(d+1)F_B-1})^2}{2(d-1)}\leq 1 \end{equation} Optimality is satisfied when the inequality is saturated. They also give a similar inequality for the $1\rightarrow 1+1+1$ case.
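As a consistency check, in the symmetric case $a^2=b^2=d/(2d+2)$ the normalization is satisfied,
\begin{equation}
a^2+b^2+\frac{2ab}{d}=\frac{2d}{2d+2}+\frac{2}{2d+2}=1,
\end{equation}
and the fidelity becomes $F_A=F_B=1-\frac{d}{2d+2}\cdot\frac{d-1}{d}=\frac{d+3}{2d+2}$, recovering the symmetric UQCM value quoted above.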
Cerf obtained the same result in a different way; here we present the $d$-dimensional case \cite{383952220000215,PhysRevLett.88.127902}. This result can be reformulated for other cases, such as the state-dependent case presented in later sections. The transformation is: \begin{equation}
|\psi\rangle_A\rightarrow\sum_{m,n=0}^{d-1}a_{m,n}U_{m,n}|\psi\rangle_A|B_{m,-n}\rangle_{BR} \end{equation} Here $U_{m,n}$ is ``generalized Pauli matrix'': \begin{equation}
U_{m,n}=\sum_{k=0}^{d-1}e^{\frac{2\pi kni}{d}}|k+m\rangle\langle k| \end{equation}
and $|B_{m,n}\rangle$ is one of the generalized Bell basis: \begin{equation}
|B_{m,n}\rangle=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{\frac{2\pi kni}{d}}|k\rangle|k+m\rangle. \end{equation} The resultant reduced density matrix \begin{equation}
\rho_A=\sum_{m,n=0}^{d-1}|a_{m,n}|^2U_{m,n}|\psi\rangle\langle\psi|U_{m,n}^\dagger. \end{equation}
Hence the fidelity $F_A=\sum_{n=0}^{d-1}|a_{0,n}|^2$. For $B$, we replace $a_{m,n}$ by its Fourier transform $b_{m,n}=\frac{1}{d}\sum_{m',n'=0}^{d-1}e^{2\pi(nm'-mn')i/d}a_{m',n'}$.
To clone all states equally well, the matrix $a$ can be written in the following form: \begin{eqnarray} a=\left(\begin{array}{cccc}v&x&\cdots&x\\x&y&\cdots&y\\ \vdots&\vdots&\ddots&\vdots\\x&y&\cdots&y\end{array}\right) \end{eqnarray} with the normalization relation $v^2+2(d-1)x^2+(d-1)^2 y^2=1$. In this form, $F_A=v^2+(d-1)x^2$, and the expression for $b_{m,n}$ is obtained by replacing $x$ by $x'=[v+(d-2)x+(1-d)y]/d$, $y$ by $y'=(v-2x+y)/d$, and $v$ by $v'=[v+2(d-1)x+(d-1)^2 y]/d$. Optimal cloning requires $y=0$, and if we let $a=v-x,b=dx$, these coincide with the parameters $a$ and $b$ in the first formalism. When $v=v'=\sqrt{2/(d+1)}, x=x'=\sqrt{1/(2d+2)}$, it reduces to the symmetric case. These results generalize the qutrit cloning presented by Durt and Gisin \cite{688607420020710}.
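As a quick check, in the symmetric case $y=0$, $v=\sqrt{2/(d+1)}$ and $x=\sqrt{1/(2d+2)}$, the normalization $v^2+2(d-1)x^2=\frac{2}{d+1}+\frac{d-1}{d+1}=1$ holds and the fidelity becomes
\begin{equation}
F_A=v^2+(d-1)x^2=\frac{2}{d+1}+\frac{d-1}{2d+2}=\frac{d+3}{2d+2},
\end{equation}
again the symmetric UQCM fidelity.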
The optimality of these cloning machines was also proved by Iblisdir \emph{et al.} \cite{PhysRevA.72.042328}, Fiur\'{a}\v{s}ek, Filip and Cerf \cite{ISI:000234463800006}, and Iblisdir, Ac\'{i}n and Gisin \cite{Iblisdir2005}. They also generalize the 1 to 2 asymmetric cloning machine to more general cases. Here we use $N\rightarrow M_1+\dots+M_p$ to denote such a problem: construct an asymmetric cloning machine yielding fidelity $F_1$ for $M_1$ copies, $F_2$ for $M_2$ copies, \dots, $F_p$ for $M_p$ copies.
The $1\rightarrow1+1+1$ $d$-dimensional cloning machine was constructed as follows: \begin{align}
|\psi\rangle\rightarrow&\sqrt{\frac{d}{2d+2}}[\alpha|\psi\rangle_A(|\Phi^+\rangle_{BR}|\Phi^+\rangle_{CS}+|\Phi^+\rangle_{BS}|\Phi^+\rangle_{CR})+
\beta|\psi\rangle_B(|\Phi^+\rangle_{AR}|\Phi^+\rangle_{CS}+|\Phi^+\rangle_{AS}|\Phi^+\rangle_{CR})+\nonumber\\&
\gamma|\psi\rangle_C(|\Phi^+\rangle_{AR}|\Phi^+\rangle_{BS}+|\Phi^+\rangle_{AS}|\Phi^+\rangle_{BR})]\label{1to3} \end{align}
where $A,B,C$ are the three output states and $R,S$ are ancillary states. $|\Phi^+\rangle =1/\sqrt{d}\sum_{k=0}^{d-1}|kk\rangle$ as usual. For normalization, $\alpha,\beta,\gamma$ obey $\alpha^2+\beta^2+\gamma^2+\frac{2}{d}(\alpha\beta+\beta\gamma+\alpha\gamma)=1$. The final single-copy fidelities for $A,B,C$ are: \begin{align} F_A&=1-\frac{d-1}{d}\left(\beta^2+\gamma^2+\frac{2\beta\gamma}{d+1}\right)\nonumber\\ F_B&=1-\frac{d-1}{d}\left(\alpha^2+\gamma^2+\frac{2\alpha\gamma}{d+1}\right)\nonumber\\ F_C&=1-\frac{d-1}{d}\left(\alpha^2+\beta^2+\frac{2\alpha\beta}{d+1}\right).\label{1to3F} \end{align}
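As a consistency check, in the symmetric case $\alpha=\beta=\gamma$ the normalization gives $\alpha^2=d/[3(d+2)]$, and each fidelity in (\ref{1to3F}) becomes
\begin{equation}
F=1-\frac{d-1}{d}\left(2\alpha^2+\frac{2\alpha^2}{d+1}\right)=1-\frac{2(d-1)}{3(d+1)}=\frac{d+5}{3(d+1)},
\end{equation}
which agrees with the general formula (\ref{UQCM1F}) evaluated at $N=1$, $M=3$.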
In \cite{PhysRevA.72.042328}, the $1\rightarrow 1+n$ cloning machine was found. The Hilbert space $\mathcal{H}^{\otimes(n+1)}$ is decomposed into two symmetric subspaces, $\mathcal{H}^+_{n+1}\oplus\mathcal{H}^+_{n-1}$. Let $s_{n+1}$ and $s_{n-1}$ denote the corresponding projection operators; the transformation can be written as: \begin{equation} T:\rho\rightarrow(\alpha^* s_{n+1}+\beta^* s_{n-1})(\rho\otimes I^{\otimes n})(\alpha s_{n+1}+\beta s_{n-1}). \end{equation} It is a generalization of the construction of the symmetric UQCM (\ref{Werner}). The resulting fidelity is $F_A=1-\frac{2}{3}y^2$ for the '1' side, and $F_B=\frac{1}{2}+\frac{1}{3n}(y^2+\sqrt{n(n+2)}xy)$. Here $x$ and $y$ satisfy $x^2+y^2=1$. A more general case, $N\rightarrow M_A+M_B$, is studied with a similar method in \cite{Iblisdir2005}.
In studying asymmetric quantum cloning, the region of possible output fidelities for one to three cloning is studied in \cite{ISI:000278182800016}, the one to many case in \cite{Cwiklinski2012}, and the general case in \cite{KayDagPreprint}.
\subsection{A unified UQCM}
Recently, Wang \emph{et al.} proposed a unified way to construct the general asymmetric UQCM \cite{PhysRevA.84.034302}. The essence is to replace the symmetric operator $s_M$ in construction (\ref{UnifiedUQCM}) by a linear combination of the identity $I$ and permutation operators. Take the 1 to 2 qubit cloning case as the simplest example: $s_2=\frac{1}{2}(I^{\otimes2}+\mathcal{P})$, where $\mathcal{P}=|00\rangle\langle00|+|11\rangle\langle11|+|01\rangle\langle10|+|10\rangle\langle01|$ is the permutation (swap) operator ($\mathcal{P}|jl\rangle=|lj\rangle$). If $s_2$ is replaced by $\alpha I^{\otimes2}+\beta\mathcal{P}$, the output density matrix exactly coincides with the output density matrix of construction (\ref{BuzekAsy}), $|\psi\rangle_A\rightarrow a|\psi\rangle_A|\Phi^+\rangle_{BR}+b|\psi\rangle_B|\Phi^+\rangle_{AR}$, with $\alpha=\frac{\sqrt{3}}{2}a,\beta=\frac{\sqrt{3}}{2}b$.
In order to introduce this method, here we present two examples to show explicitly that it can be applied straightforwardly in various situations.
For the $1\rightarrow 3$ asymmetric qubit cloning case, we replace the symmetry operator $s_3$ by \begin{equation} \alpha I+\beta\mathcal{P}_{12}+\gamma\mathcal{P}_{13}+\delta\mathcal{P}_{23}+\mu\mathcal{P}_{123}+\nu\mathcal{P}_{132}.\label{3AsyOp} \end{equation} Note that $\mathcal{P}_{mn}$ is the operator that swaps the $m$-th and $n$-th qubits, and $\mathcal{P}_{123}$ is a cyclic operator that moves the first qubit to the second place, the second qubit to the third place, and the third qubit to the first place. $\mathcal{P}_{132}$ is its inverse transformation. In fact these six components in (\ref{3AsyOp}) are just the elements of the permutation group $S_3$. The symmetry operator $s_3=\frac{1}{6}(I+\mathcal{P}_{12}+\mathcal{P}_{13}+\mathcal{P}_{23}+\mathcal{P}_{123}+\mathcal{P}_{132})$ is retrieved when $\alpha=\beta=\gamma=\delta=\mu=\nu=\frac{1}{6}$. The $1\rightarrow 3$ asymmetric qubit cloning can be obtained by replacing $s_3$ by (\ref{3AsyOp}) and inserting it into the cloning transformation (\ref{UnifiedUQCM}). Here we would like to remark that the number of essential permutations for the specific $1\rightarrow 3$ case is actually three. There are only three independent parameters, corresponding to the cases where the input state is in the first, second, and third position, respectively. This will be shown explicitly later. Now, if we trace out the ancillary states, it is equivalent to modifying (\ref{Werner}). The final density operator is: \begin{align}
\rho&=\frac{1}{2}(\alpha I+\beta\mathcal{P}_{12}+\gamma\mathcal{P}_{13}+\delta\mathcal{P}_{23}+\mu\mathcal{P}_{123}+\nu\mathcal{P}_{132})(|\psi\rangle\langle\psi|\otimes I\otimes I)(\alpha I+\beta\mathcal{P}_{12}+\gamma\mathcal{P}_{13}+\delta\mathcal{P}_{23}+\mu\mathcal{P}_{123}+\nu\mathcal{P}_{132})\nonumber\\
&=\frac{1}{2}[\alpha|\psi\rangle\langle\psi|\otimes I\otimes I+\beta(|0\psi\rangle\langle\psi0|+|1\psi\rangle\langle\psi1|)\otimes I_3+\gamma(|0\psi\rangle\langle\psi0|+|1\psi\rangle\langle\psi1|)_{13}\otimes I_2\nonumber\\&+\delta|\psi\rangle\langle\psi|_1\otimes(|00\rangle\langle00|+|11\rangle\langle11|+|01\rangle\langle10|+|10\rangle\langle01|)+
\mu(|0\psi0\rangle\langle\psi00|+|1\psi0\rangle\langle\psi01|+|0\psi1\rangle\langle\psi10|+|1\psi1\rangle\langle\psi11|)\nonumber\\
&+\nu(|00\psi\rangle\langle\psi00|+|01\psi\rangle\langle\psi01|+|10\psi\rangle\langle\psi10|+|11\psi\rangle\langle\psi11|)](\alpha I+\beta\mathcal{P}_{12}+\gamma\mathcal{P}_{13}+\delta\mathcal{P}_{23}+\mu\mathcal{P}_{123}+\nu\mathcal{P}_{132})\nonumber\\
&=\frac{1}{2}[(\alpha^2+\delta^2)|\psi\rangle\langle\psi|\otimes I\otimes I+(\beta^2+\mu^2)I\otimes|\psi\rangle\langle\psi|\otimes I+(\gamma^2+\nu^2) I\otimes I\otimes|\psi\rangle\langle\psi|\nonumber\\
&+(\alpha\beta+\mu\delta)(|\psi0\rangle\langle0\psi|+|\psi1\rangle\langle1\psi|+|0\psi\rangle\langle\psi0|+|1\psi\rangle\langle\psi1|)_{12}\otimes I_3\nonumber\\&+(\alpha\gamma+\nu\delta)(|\psi0\rangle\langle0\psi|+|\psi1\rangle\langle1\psi|+|0\psi\rangle\langle\psi0|+|1\psi\rangle\langle\psi1|)_{13}\otimes I_2\nonumber\\&+(\beta\gamma+\mu\nu)(|00\psi\rangle\langle0\psi0|+|01\psi\rangle\langle1\psi0|+|10\psi\rangle\langle0\psi1|+|11\psi\rangle\langle1\psi1|+trans.)
\nonumber\\&+(\delta\beta+\alpha\mu)(|0\psi0\rangle\langle\psi00|+|0\psi1\rangle\langle\psi10|+|1\psi0\rangle\langle\psi01|+|1\psi1\rangle\langle\psi11|+trans.)\nonumber\\
&+\delta\gamma(|0\psi0\rangle\langle00\psi|+|0\psi1\rangle\langle10\psi|+|1\psi0\rangle\langle01\psi|+|1\psi1\rangle\langle11\psi|+trans.)\nonumber\\&
+(\beta\nu+\gamma\mu)I_1\otimes(|\psi0\rangle\langle0\psi|+|\psi1\rangle\langle1\psi|+|0\psi\rangle\langle\psi0|+|1\psi\rangle\langle\psi1|)_{23}\nonumber\\&
+\alpha\nu(|00\psi\rangle\langle\psi00|+|01\psi\rangle\langle\psi01|+|10\psi\rangle\langle\psi10|+|11\psi\rangle\langle\psi11|+trans.)\nonumber\\&
+2\alpha\delta|\psi\rangle\langle\psi|_1\otimes(|00\rangle\langle00|+|01\rangle\langle10|+|10\rangle\langle01|+|11\rangle\langle11|)_{23}\nonumber\\&
+2\beta\mu|\psi\rangle\langle\psi|_2\otimes(|00\rangle\langle00|+|01\rangle\langle10|+|10\rangle\langle01|+|11\rangle\langle11|)_{13}\nonumber\\&
+2\gamma\nu|\psi\rangle\langle\psi|_3\otimes(|00\rangle\langle00|+|01\rangle\langle10|+|10\rangle\langle01|+|11\rangle\langle11|)_{12}\label{U1to3DM} \end{align}
Here \emph{trans.} denotes the transposition of the previous terms; for example, the term $|00\psi \rangle \langle 0\psi 0|$ is followed by its transposition $|0\psi 0\rangle \langle 00\psi |$. Tracing out the second and third qubits, we obtain the single-qubit reduced density matrix, \begin{align}
\rho_1&=[2(\alpha^2+\delta^2+\alpha\delta)+\alpha(2\beta+2\gamma+\mu+\nu)+\delta(2\mu+2\nu+\beta+\gamma)+\mu\nu+\beta\gamma]|\psi\rangle\langle\psi|\nonumber\\ &+(\beta^2+\mu^2+\gamma^2+\nu^2+\beta\mu+\beta\nu+\gamma\mu+\gamma\nu)I \end{align} Hence a normalization relation is easily obtained: \begin{align} &2(\alpha^2+\delta^2+\alpha\delta)+\alpha(2\beta+2\gamma+\mu+\nu)+\delta(2\mu+2\nu+\beta+\gamma)+\mu\nu+\beta\gamma+\nonumber\\ &2(\beta^2+\mu^2+\gamma^2+\nu^2+\beta\mu+\beta\nu+\gamma\mu+\gamma\nu)=1 \end{align}
Similarly we can find the reduced density matrices of the second and third copies; their fidelities are: \begin{align} F_1&=1-\frac{1}{2}[(\beta+\mu)^2+(\beta+\nu)^2+(\gamma+\mu)^2+(\gamma+\nu)^2]\nonumber\\ F_2&=1-\frac{1}{2}[(\alpha+\delta)^2+(\alpha+\nu)^2+(\gamma+\delta)^2+(\gamma+\nu)^2]\nonumber\\ F_3&=1-\frac{1}{2}[(\alpha+\delta)^2+(\alpha+\mu)^2+(\beta+\delta)^2+(\beta+\mu)^2]\label{U1to3F} \end{align} It reduces to the symmetric cloning case when $\alpha=\beta=\gamma=\delta=\mu=\nu=\frac{1}{6}$, and the fidelity is $7/9$, which exactly coincides with the UQCM fidelity formula (\ref{UQCM1F}). To see its relation with the previous $1\rightarrow 3$ asymmetric UQCM (\ref{1to3}), we can explicitly compute the density matrix in (\ref{1to3}): \begin{align}
\rho_{out}&=\frac{1}{12}\{4(\alpha'|\psi00\rangle+\beta'|0\psi0\rangle+\gamma'|00\psi\rangle)(\alpha'\langle\psi00|+\beta'\langle0\psi0|+\gamma'\langle00\psi|)
\nonumber\\&+2[\alpha'(|\psi01\rangle+|\psi10\rangle)+\beta'(|0\psi1\rangle+|1\psi0\rangle)+\gamma'(|01\psi\rangle+|10\psi\rangle)]\nonumber\\&
[\alpha'(\langle\psi01|+\langle\psi10|)+\beta'(\langle0\psi1|+\langle1\psi0|)+\gamma'(\langle01\psi|+\langle10\psi|)]\nonumber\\&+
4(\alpha'|\psi11\rangle+\beta'|1\psi1\rangle+\gamma'|11\psi\rangle)(\alpha'\langle\psi11|+\beta'\langle1\psi1|+\gamma'\langle11\psi|)\} \end{align} For clarity we replace the coefficients $\alpha, \beta, \gamma$ in (\ref{1to3}) with $\alpha', \beta', \gamma'$. Comparing this expression with (\ref{U1to3DM}), we find that they are equivalent if the following equations are satisfied: \begin{align} &\frac{\alpha'^2}{3}=(\alpha+\delta)^2, \frac{\beta'^2}{3}=(\beta+\mu)^2, \frac{\gamma'^2}{3}=(\gamma+\nu)^2\nonumber\\ &\alpha'^2=12\alpha\delta,\beta'^2=12\beta\mu,\gamma'^2=12\gamma\nu. \end{align} This implies $\alpha=\delta=\alpha'/(2\sqrt{3}), \beta=\mu=\beta'/(2\sqrt{3}), \gamma=\nu=\gamma'/(2\sqrt{3})$. In this case the fidelity expressions (\ref{U1to3F}) exactly coincide with the previous results (\ref{1to3F}). The cloning machine here has six parameters, which indicates that it is a general form of the asymmetric UQCM, and we do not need to consider the specific input positions.
We can study the $2\rightarrow3$ case similarly. The resultant density matrix is: \begin{align}
\rho&=\frac{1}{2}(\alpha I+\beta\mathcal{P}_{12}+\gamma\mathcal{P}_{13}+\delta\mathcal{P}_{23}+\mu\mathcal{P}_{123}+\nu\mathcal{P}_{132})(|\psi\psi\rangle\langle\psi\psi|\otimes I)(\alpha I+\beta\mathcal{P}_{12}+\gamma\mathcal{P}_{13}+\delta\mathcal{P}_{23}+\mu\mathcal{P}_{123}+\nu\mathcal{P}_{132})\nonumber\\
&=\frac{1}{2}[(\alpha+\beta)|\psi\psi\rangle\langle\psi\psi|\otimes I_3+(\gamma+\mu)(|0\psi\psi\rangle\langle\psi\psi0|+|1\psi\psi\rangle\langle\psi\psi1|)\nonumber\\
&+(\delta+\nu)(|\psi0\psi\rangle\langle\psi\psi0|+|\psi1\psi\rangle\langle\psi\psi1|)](\alpha I+\beta\mathcal{P}_{12}+\gamma\mathcal{P}_{13}+\delta\mathcal{P}_{23}+\mu\mathcal{P}_{123}+\nu\mathcal{P}_{132})\nonumber\\
&=\frac{1}{2}[(\alpha+\beta)^2|\psi\psi\rangle\langle\psi\psi|\otimes I_3+(\gamma+\mu)^2 I_1\otimes |\psi\psi\rangle\langle\psi\psi|+
(\delta+\nu)^2|\psi\rangle\langle\psi|\otimes I_2\otimes|\psi\rangle\langle\psi|\nonumber\\
&+(\alpha+\beta)(\gamma+\mu)(|0\psi\psi\rangle\langle\psi\psi0|+|1\psi\psi\rangle\langle\psi\psi1|+trans.)\nonumber\\
&+(\alpha+\beta)(\delta+\nu)(|\psi0\psi\rangle\langle\psi\psi0|+|\psi1\psi\rangle\langle\psi\psi1|+trans.)\nonumber\\
&+(\gamma+\mu)(\delta+\nu)(|\psi0\psi\rangle\langle0\psi\psi|+|\psi1\psi\rangle\langle1\psi\psi|+trans.)\label{U2to3DM} \end{align}
We can see that there are only three independent parameters in the final expression: $\alpha+\beta$, $\gamma+\mu$, $\delta+\nu$, so we denote them by $A$, $B$ and $C$ respectively. We trace out the other two states to obtain the following one copy reduced density matrices: \begin{align}
\rho_1&=(A^2+C^2+AB+AC+BC)|\psi\rangle\langle\psi|+\frac{B^2}{2}I\nonumber\\
\rho_2&=(A^2+B^2+AB+AC+BC)|\psi\rangle\langle\psi|+\frac{C^2}{2}I\nonumber\\
\rho_3&=(B^2+C^2+AB+AC+BC)|\psi\rangle\langle\psi|+\frac{A^2}{2}I \end{align}
The coefficients satisfy a normalization relation: $A^2+B^2+C^2+AB+BC+CA=1$. From the one-copy reduced density matrices we simply read out the fidelities: \begin{equation} F_1=1-\frac{B^2}{2},F_2=1-\frac{C^2}{2},F_3=1-\frac{A^2}{2} \end{equation} For the symmetric cloning case, we let $A=B=C=1/\sqrt{6}$; then the fidelity is $11/12$, which exactly coincides with the UQCM fidelity formula (\ref{UQCM1F}).
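As a brief check, $A=B=C=1/\sqrt{6}$ indeed satisfies the normalization, $3\cdot\frac{1}{6}+3\cdot\frac{1}{6}=1$, and the fidelities become
\begin{equation}
F_1=F_2=F_3=1-\frac{1}{12}=\frac{11}{12},
\end{equation}
in agreement with (\ref{UQCM1F}) evaluated at $N=2$, $M=3$, $d=2$: $[2(3+2)+1]/[3(2+2)]=11/12$.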
\subsection{Singlet monogamy and optimal cloning} In quantum information science, entanglement is a resource for various QIP applications. On the other hand, entanglement cannot be shared freely among multiple parties. For example, for a multipartite quantum state, one party cannot be maximally entangled with two other parties simultaneously. This means that entanglement is monogamous. There are some monogamy inequalities for entanglement sharing \cite{ckw2000,Osborne2006,Ouyc2007,Ouyc2008}.
In this review, we consider singlet monogamy in the context of quantum cloning. Since the singlet is a natural maximally entangled state, we use the name singlet monogamy to describe the restrictions on entanglement sharing.
Quantitatively, the amount of entanglement between $A$ and $B$ can be defined as \begin{equation}
p_{A,B}=\max_{U,V}\langle \Phi^+|U\otimes V\rho_{A,B}U^\dagger\otimes V^\dagger|\Phi^+\rangle \end{equation}
where $|\Phi^+\rangle =\sum_{i=0}^{d-1}|ii\rangle/\sqrt{d}$ is the $d$-dimensional maximally entangled state, as is standard in this review. This quantity describes, under local unitary operations, the fidelity between the state $\rho _{A,B}$ and the maximally entangled state. In \cite{Kay2009}, Kay, Kaszlikowski and Ramanathan discovered the relation between singlet monogamy and the $1\rightarrow N$ optimal asymmetric UQCM. In their approach, a setup proposed by Fiur\'{a}\v{s}ek \cite{PhysRevA.64.062310} is used: $\Lambda_{out}(\psi_{in})$ is a reduced density matrix, and the efficiency of this cloning machine $F$ is measured by averaging ${\rm Tr}[\sqrt{\sqrt{\rho_{in}}\Lambda_{out}(\psi_{in})\sqrt{\rho_{in}}}]^2$. Note this is an ``average'' definition of fidelity. In \cite{PhysRevA.64.062310} it is proved that $F\leq d\lambda$, where $\lambda$ is the maximal eigenvalue of the matrix \begin{equation}
R=\int\mathrm{d}\psi_{in}[|\psi_{in}\rangle\langle\psi_{in}|^T \otimes\Lambda_{out}(\psi_{in})].\label{monogamyR} \end{equation}
For the specific $1\rightarrow N$ asymmetric cloning case, $\Lambda_{out}(\psi_{in})$ is defined to be \begin{equation}
\Lambda_{out}(\psi_{in})=\sum_{i=1}^N \alpha_i I_1\otimes\dots\otimes I_{i-1}\otimes|\psi_{in}\rangle\langle\psi_{in}|_i\otimes I_{i+1}\otimes\dots\otimes I_N. \label{singletout} \end{equation}
Here $\alpha_i$ is a set of parameters to describe the required asymmetry of output states, which satisfies $\sum_{i=1}^N\alpha_i=1$. Rewriting $|\psi_{in}\rangle$ as $U|0\rangle$, where $U$ is a unitary operator in $d$-dimensional Hilbert space, then from (\ref{monogamyR}) we find \begin{equation}
R=\int \mathrm{d} U\sum_{i=1}^N\alpha_i U^T\otimes U|00\rangle\langle00|_{0,i}U^*\otimes U^\dagger , \end{equation}
where the subscript $0$ denotes the port of the state $|\psi_{in}\rangle\langle\psi_{in}|^T$ appearing in expression (\ref{monogamyR}), which is now expressed as $U^T|0\rangle $. This equation is obtained by substituting Eq.~(\ref{singletout}) into Eq.~(\ref{monogamyR}), so the subscript $i$ corresponds to the port of the state $|\psi_{in}\rangle\langle\psi_{in}|$ appearing in Eq.~(\ref{singletout}). The transposition denoted by $T$ is due to the identity ${\rm Tr}_1(|\psi _{in}\rangle \langle \psi _{in}|_1|\Phi ^+\rangle \langle \Phi ^+|_{0,1})=\frac {1}{d}|\psi _{in}\rangle \langle \psi _{in}|_0^T$. After calculation it turns out to be \begin{equation}
R=\frac{1}{d(d+1)}\sum_{i=1}^N\alpha_i(I+d|\Phi^+\rangle\langle\Phi^+|)_{0,i}. \end{equation} To find out the eigenvalue of this matrix, an ansatz is proposed: \begin{equation}
|\Psi\rangle=\sum_{i=1}^N\beta_i|\Phi^+\rangle|\Phi\rangle_{1\dots(i-1)(i+1)\dots N}. \end{equation}
The $\beta_i$ are parameters satisfying the normalization condition $(\sum_{i=1}^N\beta_i)^2+(d-1)\sum_{i=1}^N\beta_i^2=d$, and $|\Phi\rangle$ denotes a normalized superposition of all permutations of $|\Phi^+\rangle^{\otimes(N-1)/2}$ for odd $N$ and of $|\Phi^+\rangle^{\otimes(N-2)/2}|0\rangle$ for even $N$. It satisfies \begin{equation}
(|\Phi^+\rangle\langle\Phi^+|_{0,j}\otimes I)|\Phi\rangle_{0,i}|\Phi\rangle_{1\dots(i-1)(i+1)\dots N}=\gamma_{i,j}|\Phi^+\rangle_{0,j}|\Phi\rangle_{1\dots(j-1)(j+1)\dots N}. \end{equation}
Here $\gamma_{i,j}=[1+\delta_{ij}(d-1)]/d$ and hence we know $|\Psi\rangle$ is an eigenvector of $R$ if $\alpha_i d\sum_{j=1}^N\gamma_{i,j}\beta_j=[d(d+1)-\lambda]\beta_i$ for every $i$. Combining this constraint with the expression for the singlet monogamy of $|\Psi\rangle$, $p_{0,i}=(\sum_{j=1}^N\gamma_{i,j}\beta_j)^2$, as well as the normalization condition, we get the singlet monogamy relation for the $1\rightarrow N$ asymmetric cloning machine: \begin{equation} \sum_{i=1}^N p_{0,i}\leq\frac{d-1}{d}+\frac{1}{N+d-1}(\sum_{i=1}^N\sqrt{p_{0,i}})^2. \end{equation}
It is straightforward to find the one-copy fidelity $F_i=(p_{0,i}d+1)/(d+1)$. For the symmetric UQCM case, one lets all $p_{0,i}$ equal $(N+d-1)/(dN)$, and then the previous result $F=(2N+d-1)/[N(d+1)]$ is recovered. In \cite{Kay2009} it is also shown that the previous $1\rightarrow 1+1+1$ and $1\rightarrow1+N$ asymmetric cloning cases are consistent with this approach.
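As a consistency check, the symmetric point $p_{0,i}=(N+d-1)/(dN)$ saturates the monogamy relation:
\begin{equation}
\sum_{i=1}^N p_{0,i}=\frac{N+d-1}{d}=\frac{d-1}{d}+\frac{N}{d}=\frac{d-1}{d}+\frac{1}{N+d-1}\left(N\sqrt{\frac{N+d-1}{dN}}\right)^2,
\end{equation}
and the corresponding fidelity $F_i=(p_{0,i}d+1)/(d+1)=(2N+d-1)/[N(d+1)]$ agrees with (\ref{UQCM1F}) for $1\rightarrow N$ cloning.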
The $1\rightarrow 4$ asymmetric cloning can be similarly studied \cite{Renxj2011}. The normalization condition in this specific case turns out to be: \begin{equation} \beta _{1}^{2}+\beta _{2}^{2}+\beta _{3}^{2}+\beta _{4}^{2}+\frac{2}{d} (\beta _{1}\beta _{2}+\beta _{1}\beta _{3}+\beta _{1}\beta _{4}+\beta _{2}\beta _{3}+\beta _{2}\beta _{4}+\beta _{3}\beta _{4})=1 \end{equation} and the fidelities of the four copies are: \begin{eqnarray} F_{1} &=&1-\frac{d-1}{d}[\beta _{2}^{2}+\beta _{3}^{2}+\beta _{4}^{2}+\frac{ 2(\beta _{2}\beta _{3}+\beta _{2}\beta _{4}+\beta _{3}\beta _{4})}{d+1}], \\ F_{2} &=&1-\frac{d-1}{d}[\beta _{1}^{2}+\beta _{3}^{2}+\beta _{4}^{2}+\frac{ 2(\beta _{1}\beta _{3}+\beta _{1}\beta _{4}+\beta _{3}\beta _{4})}{d+1}], \\ F_{3} &=&1-\frac{d-1}{d}[\beta _{1}^{2}+\beta _{2}^{2}+\beta _{4}^{2}+\frac{ 2(\beta _{1}\beta _{2}+\beta _{1}\beta _{4}+\beta _{2}\beta _{4})}{d+1}], \\ F_{4} &=&1-\frac{d-1}{d}[\beta _{1}^{2}+\beta _{2}^{2}+\beta _{3}^{2}+\frac{ 2(\beta _{1}\beta _{2}+\beta _{1}\beta _{3}+\beta _{2}\beta _{3})}{d+1}]. \end{eqnarray} With a general asymmetric quantum cloning machine available, we can expect that the corresponding relationship between quantum cloning and entanglement monogamy can be constructed.
\subsection{Mixed-state quantum cloning}
In the previous discussions of quantum cloning, the input state is assumed to be pure. What if the input state is a mixed state $\rho$? Since sometimes we only look at the resulting local one-copy reduced density matrix, this procedure is named ``broadcasting'' \cite{PhysRevLett.76.2818,ISI:000231017700008}, as we already presented in this review. In \cite{PhysRevLett.76.2818}, Barnum \emph{et al.} proved that the $1\rightarrow 2$ no-cloning theorem can be extended to the mixed-state case, that is, for two non-commuting input density matrices, the cloning machine cannot copy both perfectly, as we have already shown in previous sections. Then D'Ariano, Macchiavello and Perinotti studied the extended $N\rightarrow M$ case and constructed the optimal UQCM \cite{ISI:000231017700008}. They found the non-trivial result that the no-broadcasting theorem cannot be generalized to the case of more than one input copy. Specifically, for the UQCM, it is even possible to purify the input states when $N\geq4$; this phenomenon is called ``superbroadcasting''. Note that here UQCM does not mean that a constant fidelity is reached for every mixed state, since the existence of such a cloning machine ($1\rightarrow M$) was ruled out by Chen and Chen \cite{ISI:000247624300060}. For mixed-state cloning, it seems reasonable to use the shrinking factor as the measure of merit for the quantum cloning machine. This is for cloning of mixed states in the symmetric subspace \cite{Fan-mixed}. The property of being ``universal'' for a mixed-state cloning machine is in the sense that the shrinking factor of the single output is independent of the input mixed state.
In this subsection, we review some explicit results on mixed-state cloning studied in \cite{ISI:000248425500008,ISI:000249154900043,Yang2007}. The UQCM for pure states (\ref{Werner},\ref{FanClone}) can apparently be applied to a single mixed input state. But if we input the direct product of two identical $\rho$, direct application of (\ref{Werner}) cannot give the optimal output. This can be easily seen by considering the $2\rightarrow2$ case. The optimal transformation is simply to leave the input unchanged, but if we apply the symmetrization projection, since $\rho\otimes\rho$ contains an asymmetric part, this part is deleted and the final state changes. So we need to find other ways to achieve the maximal fidelity.
We suppose the input state is identical copies of $\rho=z_0|0\rangle\langle0|+z_1|0\rangle\langle1|+z_2|1\rangle\langle0|+z_3|1\rangle\langle1|$, and we use the notation $|m,n\rangle$ to denote the totally symmetric state with $m$ $|0\rangle $s and $n$ $|1\rangle $s. Additionally we introduce $|\widetilde{m,n}\rangle$, which is constructed by multiplying each component in $|m,n\rangle$ by a different power of $\omega=\exp[2\pi im!n!/(m+n)!]$, with the powers ranging from 0 to $(m+n)!/(m!n!)-1$. For example, $|\widetilde{2,1}\rangle=(|001\rangle+\omega |010\rangle+\omega ^2|100\rangle)/\sqrt{3}$, with $\omega =\exp[2\pi i/3]$. Obviously $|m,n\rangle$ is orthogonal to $|\widetilde{m,n}\rangle$.
Then the $2\rightarrow3$ transformation is written as: \begin{align}
&|2,0\rangle|R\rangle\rightarrow\sqrt{\frac{3}{4}}|3,0\rangle|R_0\rangle+\sqrt{\frac{1}{4}}|2,1\rangle|R_1\rangle\nonumber\\
&|1,1\rangle|R\rangle\rightarrow\sqrt{\frac{1}{2}}|2,1\rangle|R_0\rangle+\sqrt{\frac{1}{2}}|1,2\rangle|R_1\rangle\nonumber\\
&|0,2\rangle|R\rangle\rightarrow\sqrt{\frac{1}{4}}|1,2\rangle|R_0\rangle+\sqrt{\frac{3}{4}}|0,3\rangle|R_1\rangle\nonumber\\
&|\widetilde{1,1}\rangle|R\rangle\rightarrow\sqrt{\frac{1}{2}}|\widetilde{2,1}\rangle|R_0\rangle+\sqrt{\frac{1}{2}}|\widetilde{1,2}\rangle|R_1\rangle. \end{align} It can be verified that the output single-copy reduced density matrix is $\frac{5}{6}\rho+\frac{I}{12}$. The shrinking factor $5/6$ coincides with the maximal shrinking factor of the $2\rightarrow3$ UQCM in the pure-state case. The more general $2\rightarrow M$ mixed-state cloner is constructed in a similar way: \begin{align}
&|2,0\rangle|R\rangle\rightarrow\sum_{k=0}^{M-2}\alpha_{0k}|M-k,k\rangle|R_k\rangle\nonumber\\
&|1,1\rangle|R\rangle\rightarrow\sum_{k=0}^{M-2}\alpha_{1k}|M-k-1,k+1\rangle|R_k\rangle\nonumber\\
&|0,2\rangle|R\rangle\rightarrow\sum_{k=0}^{M-2}\alpha_{2k}|M-k-2,k+2\rangle|R_k\rangle\nonumber\\
&|\widetilde{1,1}\rangle|R\rangle\rightarrow\sum_{k=0}^{M-2}\alpha_{1k}|\widetilde{M-k-1,k+1}\rangle|R_k\rangle \end{align} where \begin{equation} \alpha _{jk}= \sqrt{\frac {6(M-2)!(M-j-k)!(j+k)!}{(2-j)!(M+1)! (M-2-k)!j!k!}}. \end{equation} By calculation, it can also be shown that the shrinking factor reproduces the previous result (\ref{UQCM1F}) for the pure-state case; hence it is optimal.
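As a consistency check, setting $M=3$ in $\alpha_{jk}$ recovers the $2\rightarrow3$ transformation listed above; for instance,
\begin{equation}
\alpha _{00}= \sqrt{\frac {6\cdot1!\cdot3!\cdot0!}{2!\cdot4!\cdot1!\cdot0!\cdot0!}}=\sqrt{\frac{3}{4}},\quad
\alpha _{01}= \sqrt{\frac {6\cdot1!\cdot2!\cdot1!}{2!\cdot4!\cdot0!\cdot0!\cdot1!}}=\sqrt{\frac{1}{4}},\quad
\alpha _{10}= \sqrt{\frac {6\cdot1!\cdot2!\cdot1!}{1!\cdot4!\cdot1!\cdot1!\cdot0!}}=\sqrt{\frac{1}{2}},
\end{equation}
in agreement with the coefficients of the $|2,0\rangle$ and $|1,1\rangle$ transformations above.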
In \cite{ISI:000249154900043} this construction is generalized to the $N\rightarrow M$ case: \begin{equation}
|N-m,m\rangle|R\rangle\rightarrow\sum_{k=0}^{M-N}\beta_{mk}|M-m-k,m+k\rangle|R_{M-N-k,k}\rangle, \end{equation} the coefficients are: \begin{equation} \beta _{mk} =\sqrt{\frac{\left( M-N\right) !\left( N+1\right) !}{\left( M+1\right) !}}\sqrt{\frac{\left( M-m-k\right) !}{\left( N-m\right) !\left( M-N-k\right) !}} \cdot \sqrt{\frac{\left( m+k\right) !}{m!k!}}. \end{equation}
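As a brief consistency check, setting $N=2$ in $\beta_{mk}$ reproduces the coefficients $\alpha_{jk}$ of the $2\rightarrow M$ cloner above (with $m$ playing the role of $j$):
\begin{equation}
\beta _{mk}\Big|_{N=2}=\sqrt{\frac{(M-2)!\,3!}{(M+1)!}}\sqrt{\frac{(M-m-k)!}{(2-m)!(M-2-k)!}}\sqrt{\frac{(m+k)!}{m!k!}}=\alpha _{mk}.
\end{equation}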
\subsection{Universal NOT gate}
Similar to the quantum cloning problem, one can ask if there is some transformation $U$ that converts an arbitrary state $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$ to its orthogonal state $|\psi^\bot\rangle=\beta^*|0\rangle-\alpha^*|1\rangle$. For two states $|\psi_1\rangle=\alpha|0\rangle+\beta|1\rangle$ and $|\psi_2\rangle=\gamma|0\rangle+\delta|1\rangle$, we have $\langle\psi_2|\psi_1\rangle=\gamma^*\alpha+\delta^*\beta=\left(\langle U\psi_2|U\psi_1\rangle\right)^*$, hence $U$ is an anti-unitary operator. Such a map is not completely positive and hence cannot be applied to a subsystem, as argued by Bu\v{z}ek, Hillery and Werner in \cite{PhysRevA.60.R2626}.
The question is then whether we can realize a universal NOT gate approximately. A general $N\rightarrow M$ universal NOT gate is constructed by using the universal cloning machine. The final single-copy output density matrix is $\rho_{out,1}=\frac{N}{N+2}|\psi^\bot\rangle\langle\psi^\bot|+\frac{1}{N+2}I$, regardless of $M$. In fact, this is exactly the reduced density matrix of the ancilla in the UQCM. This is an interesting result since it shows that the ancilla plays the role of an ``anti-clone''. The optimality of this universal NOT gate is also proved \cite{Buzek2000}. The universal NOT gate, or anti-cloning, is the same as the universal spin-flip machine \cite{PhysRevLett.83.432}. Relatedly, it is found that a pair of antiparallel spins can contain more information than a pair of parallel spins. The universal NOT gate is studied for continuous-variable systems in \cite{cerf-cv-not}. The experimental implementation of the universal NOT gate in an optical system is reported in \cite{martininature}. The universal controlled-NOT gate is studied in \cite{Siomau2010a}.
\subsection{Minimal input set, six-state cryptography and other results}
\begin{figure}
\caption{The six states used in quantum key distribution; the optimal cloning machine that clones these six states is a UQCM.}
\label{figure-6-state}
\end{figure}
Bruss showed that the $1\rightarrow2$ optimal cloning of the following six states with equal fidelity for each state is equivalent to the qubit UQCM\cite{PhysRevLett.81.3018}, \begin{eqnarray}
&&\{ |0\rangle,|1\rangle\} ;\nonumber \\
&&\{ \frac {1}{\sqrt{2}}(|0\rangle+|1\rangle),\frac {1}{\sqrt{2}}(|0\rangle-|1\rangle)\} ; \nonumber \\
&&\{ \frac {1}{\sqrt{2}}(|0\rangle+i|1\rangle),\frac {1}{\sqrt{2}}(|0\rangle-i|1\rangle)\} . \label{6states} \end{eqnarray} These six states can be represented on the Bloch sphere as in FIG.~\ref{figure-6-state}.
These six states are exactly the three bases used in the six-state QKD protocol, and indeed the UQCM can be regarded as the optimal way to attack the quantum channel in this protocol \cite{Bechmann-Pasquinucci1999}. This is an interesting phenomenon: it means that the optimal cloning machine for those six states can actually clone arbitrary qubits optimally. On the other hand, it also means that we cannot clone these six states better than a UQCM does.
A reverse question might be interesting: how many states are enough to define a UQCM? More explicitly, what is the minimal number of states in the input set such that the optimal cloning machine that achieves equal fidelity for them is equivalent to the UQCM? Jing \emph{et al.} \cite{Jing2013} solved this problem in the $1\rightarrow2$ cloning case. The minimal set turns out to be four states on the vertices of a tetrahedron: \begin{eqnarray}
|\psi _1\rangle &=&|0\rangle , \nonumber \\
|\psi _2\rangle &=&\cos(\theta/2)|0\rangle+\sin(\theta/2)|1\rangle, \nonumber \\
|\psi _3\rangle &=&\cos(\theta/2)|0\rangle+\sin(\theta/2)e^{2i\pi/3}|1\rangle, \nonumber \\
|\psi _4\rangle &=&\cos(\theta/2)|0\rangle+\sin(\theta/2)e^{4i\pi/3}|1\rangle , \end{eqnarray} where $\theta$ satisfies $\cos(\theta/2)=\frac{\sqrt{3}}{3}$, see FIG.\ref{4states}.
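One may check that these four states are indeed uniformly spread: with $\cos^2(\theta/2)=1/3$ and $\sin^2(\theta/2)=2/3$, every pairwise overlap satisfies
\begin{equation}
|\langle\psi _i|\psi _j\rangle|^2=\frac{1}{3}, \quad i\neq j,
\end{equation}
for instance $|\langle\psi _2|\psi _3\rangle|^2=|\frac{1}{3}+\frac{2}{3}e^{2i\pi/3}|^2=\frac{1}{3}$, which reflects the tetrahedral geometry shown in FIG.~\ref{4states}.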
There is a similar phenomenon for states on the equator of the Bloch sphere, which will be demonstrated in the following section.
\begin{figure}
\caption{The four states on the vertices of a tetrahedron, which determine a UQCM in the $1\rightarrow2$ cloning case.}
\label{4states}
\end{figure}
\subsection{Other developments and related topics}
The quantum cloning machine was originally proposed for arbitrary input states, and thus in general it has the universal property \cite{PhysRevLett.81.5003,PhysRevA.54.1844}. For the qubit case, by mapping several identical pure states into the output cloned states assisted by ancillary states, the general universal cloning machine is realized \cite{PhysRevLett.79.2153}. The optimality of the fidelity was later proved by considering that the case of infinitely many copies should be equivalent to quantum state estimation \cite{PhysRevLett.81.2598}. Along this line, the one to many universal cloning machine for the higher-dimensional case is also studied in \cite{ShaomingFei2000}. By using the symmetric projection on identical pure states and a tensor product of identities, which are completely mixed states instead of the intuitively assumed blank states, Werner proposed the optimal universal cloning \cite{PhysRevA.58.1827}. The fidelity between all copies of the output density operator and the ideal copies is used as the figure of merit for this cloning machine. The optimal fidelity between a single copy and a single input state was later obtained \cite{ISI:000081019000005}. The cloning of higher-dimensional states is also studied independently in \cite{zanardi}. Equivalently, but differently by an explicit transformation method, the general many to many universal quantum cloning of higher-dimensional states is proposed in \cite{PhysRevA.64.064301}. The combination of this universal cloning machine with the one by the projection method \cite{PhysRevA.58.1827} is proposed, resulting in a unified universal cloning machine \cite{PhysRevA.84.034302}. Also, the fidelities ranging from the single-copy case to multiple copies are all obtained. Let us remark that the well-accepted theory of fidelity for mixed states can be found in \cite{Josza1994}.
The topic of universal quantum cloning is well studied. Next, we list the closely related developments in two directions. The first direction is, in general, about conceptual extensions of universal quantum cloning. The second direction is about the realization of quantum cloning by various schemes and with various noises.
Let us first list the topics which can be related to universal quantum cloning; some of these topics may lead to potential applications.
\begin{itemize}
\item State estimation. State estimation corresponds to one-to-infinity quantum cloning, and it roughly describes how to find the exact form of an unknown state. Asymptotically, the quantum cloning machine corresponds to state estimation \cite{prl80.1571,PhysRevLett.81.2598,prl97.030402,Bruss1999}. State estimation can be understood like asymmetric quantum cloning which keeps one copy untouched while the remaining, infinitely many, copies are used to estimate the form of the input state. It can be expected that the precision of the estimation and the fidelity of the remaining copy obey a tradeoff relation. This relation means that the more information is gained from the estimation, the larger the disturbance induced on the remaining copy. This phenomenon is demonstrated by a tradeoff relation between the information gain and the disturbance on the estimated state \cite{prl86.1366}, also in \cite{ISI:000237147700049} and \cite{ISI:000254475300022}.
\item Measurement. The optimal minimal measurements of mixed states are studied in \cite{ISI:000081247000022}, which sets the limits on optimal cloning of mixed states. Two incompatible observables cannot be measured simultaneously for a quantum system; cloning schemes have been studied to accomplish this task optimally \cite{measure-incompatible}. The trade-off relations between the measurement accuracies of two or three non-commuting observables of a qubit system are studied in \cite{ISI:000252862000045}; this leads to the no-cloning inequality. The application of this method in quantum communication and the separability of quantum and classical information is studied in \cite{Ricci2005}.
\item Quantum key distributions. The security of QKD can be analyzed by using a quantum cloning machine to attack the protocol. This attack can be considered as a simple quantum attack used by the eavesdropper, usually named Eve. She can use a quantum cloning machine to keep a copy of the state and send another copy to the legitimate receiver. The strength of the attack can be adjusted by using the asymmetric case of quantum cloning. After the announcement of the bases used by the sender and the receiver in the QKD protocol, Eve can decode her copy, which in general is not perfect, to find the secret key. In the extreme case, Eve may have a perfect copy, but the state on the legitimate receiver's side will be random and thus this attack can be easily detected. The universal quantum cloning machine is directly used to analyze the security of the six-state QKD protocol \cite{PhysRevLett.81.3018}. It is shown that if we want to clone those six states equally well, we cannot do better than the universal quantum cloning machine. Interestingly, it is apparent that those six states are only a subset of all possible states. We can actually go a step further: one finds that the universal quantum cloning machine can be determined completely by only four input states located on the vertices of a tetrahedron inside the Bloch sphere \cite{Jing2013}. The general $d$-dimensional QKD using $d+1$ mutually unbiased bases is investigated by the universal cloning machine in \cite{PhysRevLett.88.127902}; see also a unified method in \cite{PhysRevA.85.012334}. One open problem is what the minimal input set is that can determine a universal cloning machine in the higher-dimensional case. On the other hand, the figure of merit of a quantum cloning machine is the fidelity between input and output states. When applying the quantum cloning machine to QKD, the mutual information between the sender and the receiver is compared with that between the sender and the eavesdropper. The relationships between quantum cloning, eavesdropping of QKD and the Bell inequalities are presented in \cite{GisinHuttner}.
\item Cloning other than identical pure states: Universal cloning in general deals with the quantum cloning of identical pure states. The aim can be extended to other practical and interesting cases. The problem of learning an unknown unitary transformation from a finite number of examples is related to, but different from, cloning; it is studied in \cite{learning}. The cloning of a quantum measurement is studied in \cite{clonemeasure}. Repeatable quantum channels with quantum memory are studied in \cite{Rybar2008}; this topic is akin to quantum cloning of quantum channels. It is interestingly observed that a pair of qubits with anti-parallel spins may encode more quantum information \cite{PhysRevLett.83.432}; collective and local measurements of them are studied in \cite{massar-antispin}, and the cloning of such states is studied in \cite{cloning-orthogonalqubits}. The upper bounds of global fidelity for mixed-state universal cloning and state-dependent cloning are obtained in \cite{ISI:000180804600031} and in \cite{ISI:000185716700021}. In relativistic quantum information, a trade-off relation is studied for universal cloning of qudits \cite{Jochym-OConnor2011}. The high-fidelity copies from an asymmetric cloning machine are studied in \cite{Siomau2010}. The reverse of quantum cloning is also studied in the photon stimulated emission scheme \cite{revsecloning} and in continuous-variable systems \cite{filipreverse}. Several cases of qubit quantum cloning combinations are investigated in \cite{Wu2012}.
\item Some applications of quantum cloning. Superbroadcasting, which combines broadcasting with simultaneous purification of the local output states, is investigated in \cite{ISI:000243894100042}. The information transfer and the information in a practical cloning machine are presented in \cite{ISI:000087567900023} and \cite{ISI:000089688700023}. The information flux in many-body systems and in the quantum cloning machine is studied in \cite{DiFranco2007}. Measurements on various subsystems of the cloning machine are studied in \cite{bruss-clone-measurement}. Cloning is also related to optimizing completely positive maps using semidefinite programming \cite{pra65.030302r}. Numerical calculations are performed to study the relationships between fidelities of cloning machines and entanglement \cite{Durt2011}. Relatedly, the optimal realization of the transposition maps is studied in \cite{Buscemi2003374}. The UQCM is also adopted to investigate the entanglement and the quantum coherence of the output field in high-gain quantum injected parametric amplification \cite{Caminati2006}. The application of cloning machines to improving detectors is presented in \cite{ISI:000085168800007}. On the other hand, there are also some no-go theorems. The non-existence of a universal quantum machine to examine the precision of unknown quantum states, which is related to the UQCM, is investigated in \cite{ISI:000298380700008}.
\end{itemize}
Secondly, we present the results about the realization of universal quantum cloning.
\begin{itemize}
\item The optimal quantum cloning model is proposed under ideal conditions, but in order to perform practical cloning, we have to consider effects such as noise. The introduction of interference in the UQCM is investigated; it is shown that this interference does not diminish the optimal fidelity $5/6$ for 1 to 2 qubit symmetric cloning \cite{ISI:000260574100041}. The effect of an ancillary state that is not ideally initialized on the optimal UQCM is studied in \cite{Zhang2012}. The influence of temperature on quantum cloning is analyzed in \cite{pla373.821}. The comparison of fidelities of quantum cloning in theory and under experimental conditions is investigated in \cite{ISI:000221424900003}. The possibility of improving the fidelity of the UQCM in the photon stimulated emission scheme is studied in \cite{ISI:000170297300031}.
\item The ultimate aim of the QCM is to be physically implemented, and many proposals have been put forward. Spin networks can realize the UQCM \cite{cloning-spinnetwork}. The realization of the UQCM is also proposed in optical systems \cite{PhysRevLett.92.047902,filip1,filip2}. The Hamiltonian realization of the UQCM via adiabatic evolution is proposed, where the maximal eigenvalue of the Hamiltonian matrix is the fidelity \cite{ISI:000273639000005}. Proposals to implement cloning machines in separate cavities are in \cite{Fang2011}, and an implementation with superconducting quantum-interference device qubits in a cavity is presented in \cite{ISI:000254541100254}. The scheme for implementing a UQCM in cavity QED with atoms is studied in \cite{ISI:000223715500003}; implementations by the ion trap technique are proposed in \cite{ISI:000230374500016}, by cavity-assisted atomic collisions in \cite{Zou2003}, and via cavity-assisted interaction in \cite{Fang2012}. The scheme of quantum cloning of an atomic state into two photonic states is presented in \cite{Song2008}. Broadband photon cloning and the entanglement creation of atoms in a waveguide are studied in \cite{ValenteLi}. As we can see, photonic systems can play an important role in implementing QCMs; a recent review of photonic quantum information processing can be found in \cite{REM-DeMartini,REM-Pan}.
\item Some experiments have successfully realized universal quantum cloning. The universal cloning by entangled parametric amplification is studied in \cite{DeMartini2000}. The asymmetric UQCM is realized experimentally by partial teleportation \cite{Zhaozhi05}. The asymmetric quantum cloning machine is realized experimentally with polarization states of single photons \cite{Cernoch2009a}. Experimental quantum cloning can also be realized by using the photon's orbital angular momentum \cite{nagali}. If both polarization and orbital angular momentum degrees of freedom of photons are used, a four-dimensional quantum state can be encoded. The experimental cloning of a four-dimensional state by this scheme is demonstrated in \cite{nagaliexperiment}. The general UQCM realized by projective operators and stochastic maps is investigated both theoretically and experimentally in \cite{stochastic}. By photon polarization in an optical system, the universal quantum cloning and the universal NOT gate are implemented experimentally \cite{Sciarrino200434}.
\end{itemize}
\section{Probabilistic quantum cloning}
Concerning the B92 protocol, which involves only two non-orthogonal states \cite{PhysRevLett.68.3121}, we can try to clone the states with the largest possible probability. That is, by measuring a detector, we can make sure that the involved state is cloned perfectly, or else we know that the cloning process has failed. The aim of this quantum cloning is to achieve the optimal probability.
\subsection{Probabilistic quantum cloning machine}
While the previously mentioned quantum cloning machines always succeed, at the same time the copies cannot be perfect. Duan and Guo \cite{Duan1998261,PhysRevLett.80.4999,ISI:000080184300011} proposed a different quantum cloning machine: the copying task succeeds only with some probability, but when it is successful, we always obtain perfect copies. This kind of quantum cloning machine is called a probabilistic quantum cloning machine.
This quantum cloning machine is useful, in particular, in studying the B92 quantum key distribution protocol \cite{PhysRevLett.68.3121}. In this QKD protocol, only two non-orthogonal states are used for key distribution, so the attack is simply to use a specialized quantum cloning machine to clone those two non-orthogonal states. In fact, this is the simplest case of the probabilistic quantum cloning machine, which is used to copy two linearly independent states $S=\{ |\Psi _0\rangle , |\Psi _1\rangle \}$ \cite{Duan1998261}. The cloning transformation can be proposed as: \begin{eqnarray}
U(|\Psi _0\rangle |\Sigma \rangle |m_p\rangle )= \sqrt {\eta _0}|\Psi _0\rangle |\Psi _0\rangle |m_0\rangle +\sqrt {1-\eta _0}|\Phi ^0_{ABP}\rangle , \nonumber \\
U(|\Psi _1\rangle |\Sigma \rangle |m_p\rangle )= \sqrt {\eta _1}|\Psi _1\rangle |\Psi _1\rangle |m_1\rangle +\sqrt {1-\eta _1}|\Phi ^1_{ABP}\rangle , \end{eqnarray}
where $|m_p\rangle ,|m_0\rangle ,|m_1\rangle $ are ancillary states. The measurements are performed on these states. The states $|\Phi ^0_{ABP}\rangle$ and $|\Phi ^1_{ABP}\rangle$
are chosen so that the reduced state of $P$ is orthogonal to $|m_0\rangle$ and $|m_1\rangle$. When the measurement result is $|m_0\rangle $ or $|m_1\rangle $, we know the states $S=\{ |\Psi _0\rangle , |\Psi _1\rangle \}$ are copied perfectly. Otherwise, the cloning task fails. The probabilities of success are $\eta _0$ and $\eta _1$ for states $|\Psi _0\rangle $
and $|\Psi _1\rangle $, respectively. If we let $\eta _0=\eta _1=\eta $, we know that \begin{eqnarray}
\eta \le \frac {1}{1+|\langle \Psi _0|\Psi _1\rangle |}. \end{eqnarray}
This is also a statement of the no-cloning theorem: only orthogonal states can be cloned perfectly. The optimal probabilistic quantum cloning is achieved by taking $\eta =1/(1+|\langle \Psi _0|\Psi _1\rangle |)$. It is also closely related to the problem of how to distinguish non-orthogonal quantum states.
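To make the bound concrete, the following is a minimal numerical sketch (not part of the original derivation; the two states and the angle below are an illustrative, hypothetical choice) that evaluates the optimal success probability for a pair of non-orthogonal qubit states.
\begin{verbatim}
# Minimal sketch: optimal success probability of the Duan-Guo probabilistic
# cloner for two non-orthogonal qubit states (hypothetical example states).
import numpy as np

theta = 0.3  # assumed angle parametrizing the two states
psi0 = np.array([np.cos(theta), np.sin(theta)])
psi1 = np.array([np.sin(theta), np.cos(theta)])

overlap = abs(np.vdot(psi0, psi1))
eta_opt = 1.0 / (1.0 + overlap)
print(f"|<Psi0|Psi1>| = {overlap:.4f},  optimal eta = {eta_opt:.4f}")
# overlap -> 0 gives eta -> 1 (orthogonal states are cloned perfectly),
# overlap -> 1 gives eta -> 1/2.
\end{verbatim}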
The more complicated case is to copy a set of linearly independent states $S=\{ |\Psi _0\rangle , |\Psi _1\rangle ,...,|\Psi _n\rangle \} $. The form of the probabilistic cloning machine is: \begin{equation}
U(|\Psi_i\rangle|\Sigma\rangle|P_0\rangle)=\sqrt{\gamma_i}|\Psi_i\rangle|\Psi_i\rangle|P_0\rangle+\sum_{j=1}^n c_{ij}|\Phi^j_{AB}\rangle|P_j\rangle.\label{DuanGuo} \end{equation}
$P_0,\dots ,P_n$ is a set of orthonormal ancilla states. Hence if the measurement result of the ancilla turns out to be $P_0$, we know that the state $|\Psi_i\rangle$ has been perfectly cloned, with probability $\gamma_i$. Taking inner products between different $i$ and $j$ in (\ref{DuanGuo}), one obtains the matrix equation \begin{equation} X^{(1)}-\sqrt{\Gamma}X^{(2)}\sqrt{\Gamma}=CC^\dagger\label{DuanGuoCons} \end{equation}
where $X^{(k)}_{ij}=\langle \Psi _i|\Psi _j\rangle^k$, $\Gamma=\Gamma^\dagger={\rm diag}\{\gamma_1\dots\gamma_n\}$, $C_{ij}=c_{ij}$. If the input states $|\Psi_i\rangle$ are not linearly independent, $X^{(1)}$ is not positive definite, and for a generic positive definite matrix $\Gamma$ the left-hand side of (\ref{DuanGuoCons}) is not positive semidefinite; the equation then cannot hold, since the matrix $CC^\dagger$ on the right-hand side is always positive semidefinite. Hence such a probabilistic cloning machine exists only for linearly independent states (this result is also confirmed by Hardy and Song using the no-signaling argument \cite{Hardy1999331}). They then found that the existence is equivalent to the positive semidefiniteness of $X^{(1)}-\Gamma$. This result is called the Duan-Guo bound for distinguishing linearly independent quantum states \cite{PhysRevLett.80.4999}. The $1\rightarrow M$ cloning machine is also easy to formulate, simply by adding the number of copies on the right-hand side of (\ref{DuanGuo}).
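As a rough numerical illustration of this existence condition (a sketch under the simplifying assumptions of equal success probabilities $\gamma_i=\eta$ and randomly chosen linearly independent states), one can check the positive semidefiniteness of $X^{(1)}-\Gamma$ directly:
\begin{verbatim}
# Minimal sketch: check the Duan-Guo condition X^(1) - Gamma >= 0 for
# randomly chosen linearly independent states and uniform gamma_i = eta.
import numpy as np

rng = np.random.default_rng(0)
states = [v / np.linalg.norm(v) for v in rng.normal(size=(3, 3))]
X1 = np.array([[np.vdot(a, b) for b in states] for a in states])  # Gram matrix

for eta in (0.1, 0.3, 0.6):
    min_eig = np.linalg.eigvalsh(X1 - eta * np.eye(3)).min()
    print(f"eta = {eta}: min eigenvalue of X^(1) - Gamma = {min_eig:+.4f}")
# A small enough eta keeps X^(1) - Gamma positive semidefinite; for linearly
# dependent inputs X^(1) is singular, so no eta > 0 can satisfy the condition.
\end{verbatim}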
Later, Zhang \emph{et al.} \cite{ISI:000087567900029} constructed a network of universal quantum logic gates realizing this cloning machine.
Later, Azuma \emph{et al.} \cite{PhysRevA.72.032335} studied the case with supplementary information, that is, the $|\Sigma\rangle$ on the left-hand side of (\ref{DuanGuo}) is state dependent. Li and Qiu \cite{ISI:000244768000011} explored the case with two ancilla systems, but it is shown that the performance cannot be improved.
\subsection{A novel quantum cloning machine}
For the probabilistic cloning machine, Pati \cite{PhysRevLett.83.2849} explored the possibility that the output state contains all possible numbers of copies of the original state. That is, for a set of input states $|\Psi_1\rangle\dots|\Psi_n\rangle$, does there exist a transformation $U$ of the following form: \begin{equation}
U(|\Psi_i\rangle|\Sigma\rangle|P_0\rangle)=\sum_{j=1}^M\sqrt{p^{(i)}_j}|\Psi_i\rangle^{\otimes(j+1)}|0\rangle^{\otimes(M-j)}
|P_j\rangle+\sum_{k=M+1}^{N'}\sqrt{f^{(i)}_k}|\Phi^k_{AB}\rangle|P_k\rangle.\label{Pati} \end{equation}
$|P_1\rangle,\dots ,|P_{N'}\rangle$ is a set of orthonormal ancilla states, as usual. In fact, this can be regarded as a superposition of the $1\rightarrow 2,\cdots ,1\rightarrow (M+1)$ cloning machines. From the unitarity constraint, we have \begin{equation}
\langle \Psi_i|\Psi_j\rangle=\sum_{k=1}^{M}\sqrt{p^{(i)}_k}\langle \Psi_i|\Psi_j\rangle^{k+1}\sqrt{p^{(j)}_k}+\sum_{l=M+1}^{N'}\sqrt{f^{(i)}_l f^{(j)}_l}. \end{equation} This equation can be rewritten as an equation of matrices \begin{equation} X^{(1)}=\sum_{k=1}^M P_k X^{(k+1)} P_k+\sum_{l=M+1}^{N'}F^{(l)}. \end{equation}
Here $P_k=P^\dagger_k={\rm diag}\{p^{(1)}_k,\dots,p^{(n)}_k\}$, $X^{(k)}_{ij}=\langle \Psi _i|\Psi _j\rangle^k$ as usual, and $F^{(l)}_{ij}=\sqrt{f^{(i)}_l f^{(j)}_l}$. From this relation, they proved that if the states are linearly independent, then the equation can be satisfied with positive definite $P_k$ and $F^{(l)}$. It is also simple to see that the transformation does not exist if the set of input states contains a state that is a superposition of the others, say $|\Psi\rangle=\sum_j c_j|\Psi_j\rangle$,
since linearity would require $U(|\Psi\rangle|\Sigma\rangle|P_0\rangle)=\sum_j c_jU(|\Psi_j\rangle|\Sigma\rangle|P_0\rangle)$,
and from the right-hand side of (\ref{Pati}) one sees that this is inconsistent.
Under this framework, the cloning machine of Duan and Guo can be viewed as a special case of $M=1$, see \cite{Duan1998261}.
Later, Qiu \cite{ISI:000175743800069} proposed a combination of Pati's probabilistic cloning machine and the approximate cloning machine in the usual sense, which provides a more general framework. The case with supplementary information, that is, with the $|\Sigma\rangle$ on the input side being state dependent, is also explored; it is found that the probability of success may increase \cite{ISI:000238380400032}.
\subsection{Probabilistic quantum anti-cloning and NOT gate}
Similar to the approximate universal NOT gate in the UQCM section of this review, we can also construct a probabilistic quantum anti-cloning and NOT gate in the framework of probabilistic cloning. The aim of probabilistic quantum anti-cloning and the NOT gate is to keep the input state unchanged and at the same time create an additional anti-cloned state corresponding to the NOT gate. This task can only be fulfilled probabilistically. A $1\rightarrow 2$ probabilistic quantum anti-cloning and NOT gate is proposed by Hardy and Song in \cite{Hardy1999331}: \begin{equation}
U(|\Psi_i\rangle|\Sigma\rangle|P_0\rangle)=\sqrt{f}|\Psi_i\rangle|\Psi^\bot_i\rangle|P_0\rangle+\sum_{j=1}^n c_{ij}|\Phi^j_{AB}\rangle|P_j\rangle.\label{ProbNOT} \end{equation}
By measuring the probe states $|P_0\rangle $ and $|P_j\rangle $, which are orthonormal, we know whether the realization of the anti-cloning and NOT gate is successful or not. On the other hand, since we already know that the perfect universal NOT gate is impossible \cite{PhysRevA.60.R2626}, the probabilistic realization of only the NOT gate, $|\Psi _i\rangle \rightarrow |\Psi^\bot_i\rangle $, is itself an interesting question.
The input states are $|\Psi_1\rangle,\dots,|\Psi_n\rangle$, as usual. Taking the inner product for different $i,j$, we get \begin{equation} X^{(1)}=fX'+CC^\dagger \end{equation}
where $X'_{ij}=\langle\Psi_i|\Psi_j\rangle\langle\Psi^\bot_i|\Psi^\bot_j\rangle$ and the other notation is the same as above. If the input states are linearly independent, then the Gram matrix on the left-hand side of the above equation is positive definite. Hence for a sufficiently small $f$, we can guarantee that $CC^\dagger$ is also positive semidefinite, so such a cloning machine always exists. As a simple example, we consider the case $n=2$, $c_{11}=c_{21}=\sqrt{1-f}$, $c_{12}=c_{22}=0$; then (\ref{ProbNOT}) can be written as: \begin{align}
|\Psi_1\rangle&\rightarrow \sqrt{f}|\Psi_1\rangle|\Psi^\bot_1\rangle|P_0\rangle+\sqrt{1-f}|\Phi_{AB}\rangle|P_1\rangle\nonumber\\
|\Psi_2\rangle&\rightarrow \sqrt{f}|\Psi_2\rangle|\Psi^\bot_2\rangle|P_0\rangle+\sqrt{1-f}|\Phi_{AB}\rangle|P_1\rangle. \end{align}
In this case, we have a constraint on $f$: \begin{equation}
f\leq\frac{1-|\langle\Psi_1|\Psi_2\rangle|}{1-|\langle\Psi_1|\Psi_2\rangle||\langle\Psi^\bot_1|\Psi^\bot_2\rangle|}=\frac{1}{1+|\langle\Psi_1|\Psi_2\rangle|} \end{equation}
which is identical to the Duan and Guo case \cite{Duan1998261}. Li \emph{et al.} \cite{Li2007a} extended the above case to the one where the output state contains all of $|\Psi\rangle|\Psi^{\bot}\rangle,|\Psi\rangle|\Psi^{\bot}\rangle^{\otimes2},
\dots,|\Psi\rangle|\Psi^{\bot}\rangle^{\otimes M}$.
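As a small numerical aside (a sketch with a hypothetical pair of qubit states, not taken from the references), one can verify that for qubits $|\langle\Psi^\bot_1|\Psi^\bot_2\rangle|=|\langle\Psi_1|\Psi_2\rangle|$, so the bound on $f$ above indeed reduces to the Duan-Guo value:
\begin{verbatim}
# Minimal sketch: for qubits, |<Psi1_perp|Psi2_perp>| = |<Psi1|Psi2>|, hence
# (1 - S)/(1 - S*S_perp) = 1/(1 + S).  Example states are hypothetical.
import numpy as np

theta = 0.4
psi1 = np.array([np.cos(theta), np.sin(theta)])
psi2 = np.array([np.cos(theta), -np.sin(theta)])
perp = lambda v: np.array([-np.conj(v[1]), np.conj(v[0])])  # orthogonal qubit

S = abs(np.vdot(psi1, psi2))
S_perp = abs(np.vdot(perp(psi1), perp(psi2)))
f_bound = (1 - S) / (1 - S * S_perp)
print(S, S_perp, f_bound, 1 / (1 + S))   # S == S_perp and f_bound == 1/(1+S)
\end{verbatim}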
\subsection{Other developments and related topics}
The probabilistic quantum cloning \cite{Duan1998261,PhysRevLett.80.4999} was initiated to study attacks on the QKD protocol proposed by Bennett (B92), which can exploit any two nonorthogonal states for key distribution \cite{PhysRevLett.68.3121}. Different from approximate quantum cloning, probabilistic cloning aims to produce perfect copies by sacrificing the success probability: in case of failure, the state owned by the eavesdropper is useless, but in case of success, the eavesdropper has perfect copies. The aim of probabilistic quantum cloning is equivalent to discriminating probabilistically between different quantum states, such as the two nonorthogonal states in B92. We next list three directions of research in probabilistic quantum cloning.
\begin{itemize}
\item The probabilistic quantum cloning is equivalent to quantum state discrimination. Along this line, the relation between cloning machines and state discrimination can be found, for example, in \cite{ISI:000230887300043,0305-4470-31-50-007,Zhang2010b}. The minimum-error (ambiguous) discrimination of mixed states is studied in \cite{ISI:000252862000060}. The optimal unambiguous discrimination of two density matrices is studied in \cite{ISI:000231564200074}. The optimal observables for minimum-error state discrimination are studied in \cite{ISI:000282433200038}. Distinguishing two single-mode Gaussian states by homodyne detection is studied in \cite{ISI:000228632100051}. It is also shown, according to the Wigner-Araki-Yanase theorem, that repeatability and distinguishability cannot be achieved simultaneously \cite{ISI:000240238300141}. In order to distribute a quantum state to two coupled subsystems, the strength of the interaction should be above a threshold \cite{ISI:000243166700173}. The unambiguous discrimination of two squeezed states using probabilistic cloning is investigated in \cite{Mishra2012}.
\item
Probabilistic quantum cloning of various states and by different methods has been presented. Fiur\'{a}\v{s}ek \cite{PhysRevA.70.032308} used the technique described in the ``Singlet Monogamy'' subsection of the UQCM part to analyze the optimality of probabilistic cloning machines. The probabilistic cloning of qubits with real parameters is studied in \cite{Zhang2010a}. The assisted probabilistic quantum cloning is proposed by Pati in \cite{ISI:000085336900037}. The broadcasting of mixed states using a probabilistic cloning machine is shown in \cite{ISI:000264951100014}. The optimal probabilistic ancilla-free (and hence economic) phase-covariant qudit telecloning machine is presented in \cite{Wang2009a}. The probabilistic cloning of three symmetric states \cite{Jimenez2010} and of equidistant states \cite{Jimenez2010a} are also studied.
\item The implementation of probabilistic quantum cloning, both theoretical and experimental, is also an important subject. A scheme to implement probabilistic cloning of qubits via twin photons is proposed in \cite{twin-photons}. A scheme based on GHZ states is proposed in \cite{ZhangLiWangGuo}.
Experimentally, the accuracy of quantum state estimation is studied in \cite{ISI:000185192100032}; this accuracy is also compared with the asymptotic lower bound obtained theoretically from the Cram\'{e}r-Rao inequality. Probabilistic quantum cloning realized experimentally in an NMR system is reported in \cite{duprobabilistic}. By generalizing probabilistic cloning to state amplification, the experimental heralded amplification of photon polarization states and entanglement distillation are reported in \cite{XiangGuoYong1} and \cite{XiangGuoYong2}.
\end{itemize}
\section{Phase-covariant and state-dependent quantum cloning}
In the last section, we studied quantum cloning machines which are universal, that is, the input states are arbitrary or we know nothing about the input state. Practically, it is possible that we already know partial information about the input state. The question is whether this partial information helps us obtain a better fidelity in quantum cloning. In this section, we will show that, depending on the specified input states, we can design quantum cloning machines which perform better for those restricted input states than the universal cloning machines do.
On the other hand, one of the most important applications of quantum cloning is to analyze the security of quantum key distribution protocols. In a security analysis, the quantum states are transferred through a quantum channel. We suppose that this quantum channel is controlled by the eavesdropper, generally named Eve. Eve can perform any operation allowed by quantum mechanics. One direct attack is the ``receive-measure-resend'' attack, where ``measure'' can be taken to be a quantum operation. However, quantum mechanics states that non-orthogonal quantum states cannot be distinguished perfectly, so the measurement results will in general not be perfect and the resent state is not the original one. This induces inevitable errors which can be detected by public discussion between the legitimate sender and receiver, Alice and Bob, in QKD.
Eve can choose her attack scheme freely. Quantum cloning machines provide one quantum scheme of eavesdropping attack: we simply assume that Eve has an appropriate quantum cloning machine. By quantum cloning, Eve can keep one copy of the transferred state and send the other copy to the legitimate receiver, Bob. Now Eve and Bob both have copies of the sent state. From this process, we can find how much information is obtained by Eve and, on the other hand, how many errors are induced by this attack. This provides a security analysis of QKD. In this eavesdropping, Eve intends to obtain secretly some information about the communication between Alice and Bob and wishes to minimize the possibility of being detected, so the optimal quantum cloning machine is required. Based on different QKD protocols, various cloning machines are designed specifically. The universal quantum cloning machines studied in the previous sections are optimal in the universal setting, but they may not be optimal for the quantum states involved in a particular QKD protocol. So state-dependent quantum cloning machines are necessary. In this section, we will present examples of state-dependent cloning.
\subsection{Quantum key distribution protocols} In this subsection, we review some quantum key distribution protocols and show how an eavesdropper attacks them. Each protocol may lead to a special kind of state-dependent cloning machine. The initial protocols are based simply on 2-dimensional systems and were later generalized to higher dimensions. Next, we present in detail the well-known BB84 protocol \cite{Bennett1984} and briefly its generalizations. An earlier review of QKD is \cite{QKDreview}.
\begin{enumerate} \item The BB84 protocol \cite{Bennett1984} uses two sets of orthogonal 2-level states, and the intersection angle on the Bloch sphere between the two sets is $90^{\circ}$. They can be written as follows, see FIG.\ref{figure-6-state}, \begin{eqnarray}
&&|0\rangle,|1\rangle,\nonumber\\
&&\frac{1}{\sqrt2}(|0\rangle+|1\rangle), \frac{1}{\sqrt2}(|0\rangle-|1\rangle). \end{eqnarray}
Note that by applying a unitary transformation, the characteristics of these states remain unchanged. Therefore, we may also use the following four states in the BB84 protocol, which again form two sets of orthogonal 2-level states with a $90^{\circ}$ intersection angle; see all those states in FIG.\ref{figure-6-state}.
\begin{eqnarray}
\frac{1}{\sqrt2}(|0\rangle\pm|1\rangle),\nonumber\\
\frac{1}{\sqrt2}(|0\rangle\pm\mathrm{i}|1\rangle). \end{eqnarray}
In the BB84 protocol, Alice sends one of these four qubits to Bob through a quantum channel which is controlled by Eve. After receiving the qubit, Bob measures it in one of the two bases chosen at random. After Bob has finished his measurement, Alice announces the basis of each qubit. If their bases coincide, Bob's measurement result is surely correct, and Alice and Bob share a common secret key bit. If the sending basis and the measurement basis are different, they simply discard this bit of information. They may also use some qubits as checking qubits to find the error rate introduced by the quantum channel; they can assume that all errors are caused by Eve's attack.
The eavesdropper, Eve, will capture the qubits in the quantum channel and clone them. She keeps one copy and sends the other on to Bob through the quantum channel. As soon as Alice broadcasts the bases, Eve measures her own qubits accordingly to extract the information sent between Alice and Bob.
The BB84 protocol is proved to be unconditionally secure, and the security is based on the principles of quantum mechanics. Security proofs of the BB84 protocol are given by several groups, for example Mayers \cite{Mayers}, Lo and Chau \cite{LoChau99}, and Shor and Preskill \cite{Shor2000}. We remark that Ekert proposed a QKD strategy based on the non-locality of quantum mechanics \cite{PhysRevLett.67.661} which is essentially equivalent to the BB84 protocol.
\item B92 \cite{PhysRevLett.68.3121} is a protocol which uses any two non-orthogonal states. Tamaki \emph{et al.} \cite{TKI03} provide the security proof of this protocol. In this review, we suppose those two states take the form, \begin{equation}
\frac{1}{\sqrt2}(|0\rangle+\mathrm{e}^{\mathrm{i}\phi_k}|1\rangle),k=1,2. \end{equation}
\item The BB84 protocol can be generalized to the 6-state protocol \cite{PhysRevLett.81.3018}. The six states involved in this protocol are the BB84 states plus two more states, as shown in Eq.(\ref{6states}). Interestingly, the optimal cloning of those six states is the universal quantum cloning machine, as already shown in previous sections.
\item For the higher-dimensional case, the QKD protocols can use $2$ bases or $d+1$ bases in a $d$-level system, as studied by Cerf \emph{et al.} \cite{PhysRevLett.88.127902}.
\item In $d$ dimensions, there are altogether $d+1$ mutually unbiased bases (MUB), provided $d$ is prime. Any $g+1$ bases among those MUBs, $g=1,2,...,d$, can actually be used for QKD \cite{PhysRevA.85.012334}. Here, we briefly give the definition of the MUBs,
$\{|i\rangle\}$ and $\{|\tilde{i}^{(k)}\rangle\}$ $(k=0,1, ...,d-1)$; they are expressed as: \begin{equation}
|\tilde{i}^{(k)}\rangle=\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}\omega^{i(d-j)-ks_j}|j\rangle, \end{equation}
with $s_j=j+...+(d-1)$ and $\omega=e^{i\frac{2\pi}{d}}$. These states satisfy the condition $|\langle\tilde{i}^{(k)}|\tilde{l}^{(j)}\rangle|=\delta_{kj}\delta _{il} +\frac {1}{\sqrt {d}}(1-\delta _{kj})$; states in different sets of bases are mutually unbiased. A numerical check of this construction is sketched just after this list.
\item Based on the characteristics of MUBs, we can design a retrodiction protocol using the method of the mean king game. This special protocol, different from BB84 or other QKD protocols, shows that Bob has a $100\%$ successful measurement scheme, in comparison with the $1/(g+1)$ success probability in protocols such as BB84. Here we remark that quantum memory is not available to Bob. We will present a detailed analysis of this retrodiction protocol. \end{enumerate}
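Before proceeding, we give the numerical sketch announced above (a minimal illustration, assuming the prime dimension $d=3$): it builds the $d$ bases $\{|\tilde{i}^{(k)}\rangle\}$ from the formula given in the list, adds the computational basis, and verifies the mutual-unbiasedness condition.
\begin{verbatim}
# Minimal sketch: construct the d+1 MUBs for prime d (here d = 3) and check
# |<i~^(k)|l~^(j)>| = delta_kj delta_il + (1 - delta_kj)/sqrt(d).
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
s = [sum(range(j, d)) for j in range(d)]          # s_j = j + ... + (d-1)

def mub_state(i, k):
    return np.array([omega ** (i * (d - j) - k * s[j])
                     for j in range(d)]) / np.sqrt(d)

bases = [[mub_state(i, k) for i in range(d)] for k in range(d)]
bases.append([np.eye(d)[i] for i in range(d)])     # computational basis {|i>}

for k, Bk in enumerate(bases):
    for j, Bj in enumerate(bases):
        overlaps = np.array([[abs(np.vdot(a, b)) for b in Bj] for a in Bk])
        expected = np.eye(d) if k == j else np.full((d, d), 1 / np.sqrt(d))
        assert np.allclose(overlaps, expected)
print("all", len(bases), "bases are pairwise mutually unbiased")
\end{verbatim}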
\subsection{General state-dependent quantum cloning} For the above QKD protocols, the universal quantum cloning machine certainly works, but it is not necessarily optimal. Thus, if we need a higher quality of the output from the cloning machine, a state-dependent cloning machine is needed. In fact, each protocol corresponds to a special kind of state-dependent cloning machine based on the given ensemble of states.
Let us first consider a general case based on two equatorial states. Obviously, this is equivalent to the B92 protocol \cite{PhysRevLett.68.3121}. To be non-trivial and to satisfy the B92 protocol, these states are nonorthogonal. The cloning machine is designed to clone only these two states, optimally and equally well, without considering other states on the Bloch sphere. This problem is studied in \cite{Bruss1998}.
The quantum cloning machine takes a completely unknown 2-level state $|\psi\rangle$ and makes two output qubits. Each output state is described by a reduced density matrix with the following form, \begin{equation}
\rho=\eta|\psi\rangle\langle\psi|+(1-\eta)\frac{\mathbb{I}}{2}. \end{equation}
Here, $\eta$ describes the shrinking of the initial Bloch vector $\vec{s}$ corresponding to the density matrix $|\psi\rangle\langle\psi|$. In other words, the output state is \begin{equation} \rho=\frac{\mathbb{I}+\eta\vec{s}\cdot\vec{\sigma}}{2}\nonumber , \end{equation} with the input state being \begin{equation}
|\psi\rangle\langle\psi|=\frac{\mathbb{I}+\vec{s}\cdot\vec{\sigma}}{2}. \end{equation}
We assume that any quantum cloning machine satisfies the following reasonable conditions, according to the requirements of all QKD protocols. First, $\rho_1=\rho_2$, which is called the symmetry condition. Second, $F=Tr(\rho_{\psi}\rho_1)=const.$, which is called the isotropy condition, meaning that the fidelity between each output and the input does not depend on the specific form of the input. The stronger condition $\vec{s}_1=\eta_{\psi}\vec{s}_{\psi}$ is required by orientation invariance of the Bloch vector. It is obvious that when the last condition is satisfied, the second is satisfied as well.
Next let us investigate the explicit form of the quantum cloning machine. Bru{\ss} \emph {et al.} \cite{Bruss1998} make a general ansatz for the unitary transformation $U$ performed by the cloning machine. It reads, \begin{eqnarray}
U|0\rangle|0\rangle|X\rangle=a|00\rangle|A\rangle+b_1|01\rangle|B_1\rangle+b_2|10\rangle|B_2\rangle+c|11\rangle|C\rangle,\\
U|1\rangle|0\rangle|X\rangle=\tilde{a}|11\rangle|\tilde{A}\rangle+\tilde{b_1}|10\rangle|\tilde{B_1}\rangle+\tilde{b_2}|01\rangle|\tilde{B_2}\rangle+\tilde{c}|00\rangle|\tilde{C}\rangle. \end{eqnarray}
where $|X\rangle$ is an input ancilla and $|A\rangle,|B_i\rangle,...$ denote output ancilla states. The ancilla states may have any dimension but are required to be normalized. There are several constraints on these parameters. Thanks to the unitarity of the cloning transformation, the complex parameters $a,b_i,c,...$ satisfy the normalization conditions: \begin{eqnarray}
|a|^2+|b_1|^2+|b_2|^2+|c|^2=1,\nonumber\\
|\tilde{a}|^2+|\tilde{b_1}|^2+|\tilde{b_2}|^2+|\tilde{c}|^2=1, \end{eqnarray} and the orthogonality condition: \begin{equation}
a^*\tilde{c}\langle A|\tilde{C}\rangle+b_2^*\tilde{b_1}\langle B_2|\tilde{B_1}\rangle+b_1^*\tilde{b_2}\langle B_1|\tilde{B_2}\rangle+c^*\tilde{a}\langle C|\tilde{A}\rangle=0. \end{equation} Assuming that the cloning machine works in the symmetric subspace, more relations are derived \begin{eqnarray}
|b_1|=|b_2|,&&|\tilde{b_1}|=|\tilde{b_2}|,\nonumber\\
|\langle B_1|\tilde{B_2}\rangle|=|\langle B_2|\tilde{B_1}\rangle|,&&|\langle B_1|\tilde{B_1}\rangle|=|\langle B_2|\tilde{B_2}\rangle|, \end{eqnarray} and \begin{eqnarray}
a b_1^*\langle B_1|A\rangle+c^*b_2\langle C|B_2\rangle=a b_2^*\langle B_2|A\rangle+c^*b_1\langle C|B_1\rangle,\nonumber\\
\tilde{b_1^*}a\langle \tilde{B_1}|A\rangle+\tilde{a^*}b_1\langle\tilde{A}|B_1\rangle=\tilde{b_2}a\langle\tilde{B_2}|A\rangle+\tilde{a}^*b_2\langle \tilde{A}|B_2\rangle,\nonumber\\
b_1^*\tilde{c}\langle B_1|\tilde{C}\rangle+c^*\tilde{b_1}\langle C|B_1\rangle=b_2^*\tilde{c}\langle B_2|\tilde{C}\rangle+c^*\tilde{b_2}\langle C|\tilde{B_2}\rangle. \end{eqnarray} Moreover, requiring that the shrinking factor be the same along each direction of the Bloch sphere, one has, \begin{equation} \frac{s_{1_x}}{s_{\psi_x}}=\frac{s_{1_y}}{s_{\psi_y}}=\frac{s_{1_z}}{s_{\psi_z}}=\eta_{\psi}. \end{equation} Applying this to the transformation, we may derive further constraints: \begin{eqnarray}
&&|a|^2-|c|^2=|\tilde{a}|^2-|\tilde{c}|^2\nonumber\\
&&|a|^2-|c|^2=Re[\tilde{b_1}^*a\langle\tilde{B_1}|A\rangle+\tilde{a}^*b_1\langle\tilde{A}|B_1\rangle],\nonumber\\
&&Im[\tilde{b_1}^*a\langle\tilde{B_1}|A\rangle+\tilde{a}^*b_1\langle\tilde{A}|B_1\rangle]=0,\nonumber\\
&&b_1^*\tilde{c}\langle B_1|\tilde{C}\rangle +c^*\tilde{b_1}\langle C|\tilde{B_1}\rangle=0,\nonumber\\
&&b_2^*a\langle B_2|A\rangle+c^*b_1\langle C|B_1\rangle=0,\nonumber\\
&&\tilde{b_2}^*\tilde{a}\langle \tilde{B_2}|\tilde{A}\rangle+\tilde{c}^*\tilde{b_1}\langle \tilde{C}|\tilde{B_1}\rangle=0,\nonumber\\
&&\tilde{c}^*a\langle\tilde{C}|A\rangle-\tilde{a}^*c\langle\tilde{A}|C\rangle=0,\nonumber \\ {\rm and~symmetrically,~}&& 1\leftrightarrow 2. \end{eqnarray} Here, the notation $1\leftrightarrow2$ indicates the above constraints with the indices 1 and 2 interchanged, according to the symmetry condition.
Our task is to maximize the shrinking factor $\eta$ with its explicit form taken as, \begin{equation}
\eta=|a|^2-|c|^2. \end{equation} The fidelity which is defined as \begin{equation}
F=Tr(\rho_1|\psi\rangle\langle\psi|)=\frac{1}{2}(1+\vec{s_1}\cdot\vec{s_{\psi}}) \end{equation} is related to the shrinking factor by \begin{equation} F=\frac{1}{2}(1+\eta). \end{equation} Note that this relationship between fidelity and shrinking factor holds only for pure states. The study of mixed states has already been presented in the previous sections. The above discussion is independent of the specific QKD protocol.
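The relation between the shrinking factor and the fidelity is easy to check numerically. The following minimal sketch (with a randomly chosen pure qubit, purely for illustration) confirms $F=(1+\eta)/2$ for a few values of $\eta$, including the UQCM value $\eta=2/3$.
\begin{verbatim}
# Minimal sketch: for rho_1 = eta |psi><psi| + (1-eta) I/2 one has
# F = Tr(rho_1 |psi><psi|) = (1 + eta)/2.
import numpy as np

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
proj = np.outer(psi, psi.conj())

for eta in (0.0, 2.0 / 3.0, 1.0):       # 2/3 is the UQCM shrinking factor
    rho1 = eta * proj + (1 - eta) * np.eye(2) / 2
    F = np.real(np.vdot(psi, rho1 @ psi))
    print(f"eta = {eta:.4f} -> F = {F:.4f} (expected {(1 + eta) / 2:.4f})")
\end{verbatim}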
\subsection{Quantum cloning of two non-orthogonal states} Next, we consider the situation of the B92 protocol, in which only two qubits are required to be cloned. We first prove that an ancilla is necessary in our cloning machine. Without an ancilla, the constraints can be written as:
$|a|^2-|c|^2=|\tilde{a}|^2-|\tilde{c}|^2$,
$|a|^2-|c|^2=Re[\tilde{b_1}^*a+\tilde{a}^*b_1]$, $b_2^*a+c^*b_1=0$, and $\tilde{b_2}^*\tilde{a}+\tilde{c}^*\tilde{b_1}=0$. Adding the symmetry ansatz, we have
$|b_1|=|b_2|=|b|$
and $|\tilde{b_1}|=|\tilde{b_2}|=|\tilde{b}|$.
From these constraints we would have four possible cases:
(a) $|a|=|c|$ and $|\tilde{a}|=|\tilde{c}|$;
(b) $|a|=|c|$ and $|\tilde{b}|=0$;
(c) $|b|=0$ and $|\tilde{a}|=|\tilde{c}|$; and (d) $|b|=0$ and $|\tilde{b}|=0$. In each case we have $\eta=0$, which is useless. Consequently, it is impossible to construct a symmetric quantum cloning machine without an ancilla.
In the following, we explicitly give the form of the quantum cloning machine and its fidelity in this case. Assume two pure states in a two-dimensional Hilbert space with the expressions: \begin{eqnarray}
|a\rangle &=&\cos\theta|0\rangle+\sin\theta|1\rangle,\\
|b\rangle &=& \sin\theta|0\rangle+\cos\theta|1\rangle, \end{eqnarray}
where $\theta$ varies from $0$ to $\pi/4$. Define $S=\langle a|b\rangle=\sin2\theta$. The fidelity depends only on $S$, because any two such states can be brought to the above form by a unitary transformation without affecting the fidelity.
Since there are too many constraints for a direct algebraic calculation, we utilize the symmetry of the B92 protocol to simplify the calculation. Performing a unitary operator $U$ on the input states, we define the final states $|\alpha\rangle$ and $|\beta \rangle$ as \begin{eqnarray}
|\alpha\rangle= U|a\rangle|0\rangle,
|\beta\rangle=U|b\rangle|0\rangle. \end{eqnarray} Since $U$ is a unitary transformation, we can derive \begin{equation}
\langle\alpha|\beta\rangle=\langle a|b\rangle=\sin2\theta=S. \end{equation}
We use the global fidelity $F_g$ to evaluate the quantum cloning, which is defined as \begin{equation}
F_g=\frac{1}{2}(|\langle\alpha|aa\rangle|^2+|\langle\beta|bb\rangle|^2) \end{equation}
Certainly, the optimal cloning machine requires that both $|\alpha\rangle$ and $|\beta\rangle$ lie in the space spanned by the vectors $|aa\rangle$ and $|bb\rangle$. Without complicated calculations, we obtain the maximal global fidelity as \begin{equation} F_g=\frac{1}{4}(\sqrt{1+\sin^22\theta}\sqrt{1+\sin2\theta}+\cos2\theta\sqrt{1-\sin2\theta})^2. \end{equation}
Additionally, we are also interested in the local fidelity of each output qubit with respect to the input one, which is defined as \begin{equation}
F_l=Tr[\rho_\alpha|a\rangle\langle a|]. \end{equation} The explicit result is \begin{equation} F_{l,1}=\frac{1}{2}[1+\frac{1-S^2}{\sqrt{1+S^2}}+\frac{S^2(1+S)}{1+S^2}]. \end{equation} We may notice that it is larger than $5/6$. That is to say, for this protocol the state-dependent cloning machine works better than the UQCM, as expected. It is also noticed that the Bloch vector not only shrinks but also undergoes a rotation by a state-dependent angle $\vartheta$: \begin{equation}
\vartheta=\arccos[\frac{1}{|\vec{s}|}\frac{\cos2\theta}{\sqrt{1+\sin^22\theta}}]-2\theta. \end{equation} This is because one of the constraints presented previously has been relaxed.
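As a quick numerical sanity check of the two fidelity formulas above (a minimal sketch that simply evaluates them on a grid of overlaps, not an independent derivation), one can confirm that $F_g=1$ at $S=0$ and $S=1$ and that $F_{l,1}$ stays well above the universal value $5/6$:
\begin{verbatim}
# Minimal sketch: evaluate the global fidelity F_g and the local fidelity
# F_{l,1} quoted above on a grid of theta (i.e. S = sin 2 theta).
import numpy as np

theta = np.linspace(0, np.pi / 4, 1001)
S = np.sin(2 * theta)

F_g = 0.25 * (np.sqrt(1 + S**2) * np.sqrt(1 + S)
              + np.cos(2 * theta) * np.sqrt(1 - S))**2
F_l1 = 0.5 * (1 + (1 - S**2) / np.sqrt(1 + S**2) + S**2 * (1 + S) / (1 + S**2))

print("F_g at S=0 and S=1:", F_g[0], F_g[-1])        # both equal 1
print("min of F_l1:", F_l1.min(), "> 5/6 =", 5 / 6)  # beats the UQCM value
\end{verbatim}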
We should emphasize that this result is derived under the requirement of maximum global fidelity rather than maximum local fidelity. When we only need a state-dependent cloning machine that is better locally, the result is different.
In that case the fidelity is given by: \begin{eqnarray} F_{l,3}=\frac{1}{2}+\frac{\sqrt2}{32S}(1+S)(3-3S+\sqrt{1-2S+9S^2})\times\sqrt{-1+2S+3S^2+(1-S)\sqrt{1-2S+9S^2}}. \end{eqnarray} Moreover, one can check that the minimum value $F_{l,3}\approx0.987$ is attained at $S=1/2$, and that when $S=0$ and $S=1$ one finds $F=1$, as expected.
In addition, we should note that different concerns in the eavesdropping lead to different results. In the B92 protocol, direct cloning is not the most advisable action for Eve if she wishes to remain most surreptitious. In fact, Eve's main purpose is not to clone the quantum information embodied in the two nonorthogonal quantum states, but rather to optimize the trade-off between obtaining the most classical information and making the least disturbance to the original qubit \cite{PhysRevA.53.2038}. We may call this optimal eavesdropping, which is different from optimal cloning. In \cite{Bruss1998}, the fidelity for optimal eavesdropping is expressed as \begin{equation} F_{l,2}=\frac{1}{2}+\frac{\sqrt2}{4}\sqrt{(1-2S^2+2S^3+S^4)+(1-S^2)\sqrt{(1+S)(1-S+3S^2-S^3)}}. \end{equation} Note that, for all $S$, $F_{l,2}\geq F_{l,3}$.
Here we give a short summary: the general state-dependent cloning machine works better than the UQCM when applied to a restricted set of states. We gave the special case of two nonorthogonal pure states. Obviously, if we know the ensemble of states used in a QKD protocol, a state-dependent cloning machine can be designed accordingly. Besides QKD protocols, the various quantum cloning machines are themselves of fundamental interest. As an extension of the B92 protocol, Koashi and Imoto considered quantum cryptography with two mixed states \cite{koashi-crypto-mixed}.
\subsection{Phase-covariant quantum cloning: economic quantum cloning for equatorial qubits}
In this subsection, we discuss the quantum cloning machine for the BB84 states, which was first studied in \cite{Bruss2000}. For convenience, we will also refer to the four states $\{ (|0\rangle \pm |1\rangle )/\sqrt {2}$,
$(|0\rangle \pm i|1\rangle )/\sqrt {2}\} $ as the BB84 states. In fact, the cloning machine for the BB84 states turns out to clone all equatorial states optimally. It has a higher fidelity than the UQCM. Moreover, this kind of quantum cloning machine can work without the help of ancilla states; it is thus an economic quantum cloning machine.
\begin{figure}
\caption{Equatorial qubits are the qubits located on the equator of the Bloch sphere. The optimal cloning machine for those states is the phase-covariant quantum cloning machine. This machine is also optimal for cloning BB84-like states.}
\label{figure-equatorial}
\end{figure}
It is interesting to find that any quantum cloning machine that clones the BB84 states equally well will also clone equatorial states with the same fidelity. We know that the equatorial qubits are located on the equator of the Bloch sphere and take the form $|\psi (\phi )\rangle =(|0\rangle +e^{i\phi }|1\rangle )/\sqrt {2}$, see FIG.\ref{figure-equatorial}. Since each output qubit can be represented as a mixture of the input state and the completely mixed state, and the corresponding fidelity does not depend on the phase $\phi $, this kind of cloning machine is ``phase covariant''. It is generally named the phase-covariant quantum cloning machine.
Consider a completely positive map $T$ that clones optimally the four BB84 states. Performing $T$ on those states leads to the approximate result: \begin{eqnarray}
T[|\pm x\rangle\langle\pm x|]=\eta |\pm x\rangle\langle\pm x|+(1-\eta)\frac{\mathbb{I}}{2},\\
T[|\pm y\rangle\langle\pm y|]=\eta |\pm y\rangle\langle\pm y|+(1-\eta)\frac{\mathbb{I}}{2}. \end{eqnarray}
On the other hand, equatorial states could be written as \begin{equation}
|\psi(\phi)\rangle\langle\psi(\phi)|=\frac{1}{2}(\mathbb{I}+\cos\phi \sigma_x+\sin\phi \sigma_y). \label{xyequator} \end{equation} These are the qubits on the $x$-$y$ equator; similarly we have qubits on the $x$-$z$ equator, such as the BB84 states, and on the $y$-$z$ equator. Performing the linear operation $T$ on it and using $T(\mathbb{I})=\mathbb{I}$, we derive \begin{equation}
T[|\psi(\phi)\rangle\langle\psi(\phi)|]=\eta|\psi(\phi)\rangle\langle\psi(\phi)|+(1-\eta)\frac{\mathbb{I}}{2} \label{scalar0} \end{equation} The shrinking factor $\eta$ remains unchanged. Therefore, we can conclude that the optimal cloning machine for the BB84 states is equivalent to the phase-covariant cloning machine.
Next, we relax the above constraint that the single output qubit takes the scalar form (\ref{scalar0}); we only require that the fidelity does not depend on the phase parameter $\phi $ in quantum cloning. We first consider the economic case, which is accomplished without an ancilla. The phase-covariant quantum cloning machine is presented in the following form, as proposed by Niu and Griffiths \cite{PhysRevA.60.2764}, \begin{eqnarray}
&&|0\rangle|0\rangle\rightarrow|0\rangle|0\rangle,\nonumber \\
&&|1\rangle|0\rangle\rightarrow\cos\eta|1\rangle|0\rangle+\sin\eta|0\rangle|1\rangle, \label{economic} \end{eqnarray} where $\eta\in[0,\pi/2]$ quantifies the asymmetry between the two output states; when $\eta=\pi/4$ the two output states are equivalent, corresponding to the symmetric case.
For any equatorial state $|\psi(\phi)\rangle=\frac{1}{\sqrt2}(|0\rangle+e^{i\phi}|1\rangle)$ which is the input state, we have \begin{equation}
|\psi({\phi})\rangle|0\rangle\rightarrow\frac{1}{\sqrt2}(|00\rangle+\cos\eta e^{i\phi}|10\rangle+\sin\eta e^{i\phi}|01\rangle). \end{equation} So we can easily obtain the reduced density matrix of each output qubit, \begin{eqnarray}
\rho_A=Tr_B(|\psi({\phi})\rangle\langle\psi({\phi})|)\nonumber\\
\rho_B=Tr_A(|\psi({\phi})\rangle\langle\psi({\phi})|). \end{eqnarray}
Then, for any equatorial state $|\psi(\phi)\rangle$, we have the fidelity defined as $F=\langle\psi|\rho|\psi\rangle$: \begin{eqnarray} F_A=\frac{1}{2}(1+\cos\eta),\\ F_B=\frac{1}{2}(1+\sin\eta). \end{eqnarray} Obviously, the fidelities are independent of $\phi$, as expected. In particular, for the symmetric case $\eta=\pi/4$, the fidelity is \begin{eqnarray} F=1/2+1/\sqrt8\approx0.85355>\frac{5}{6}\approx 0.833333. \label{phaseoptimal} \end{eqnarray} In other words, the phase-covariant cloning machine performs better than the UQCM in cloning equatorial states. The phase-covariant quantum cloning machine can also be realized with ancillary states in a different form. Related results on phase cloning can be found in \cite{PhysRevA.56.1173,Bruss2000,Acin2004a,PhysRevA.69.062316}. The experimental implementation of this scheme is reported in optics and nuclear magnetic resonance systems \cite{Cernoch2006,PhysRevLett.94.040505}.
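The fidelities quoted above are easy to reproduce numerically. The following minimal sketch (an illustration, with arbitrarily chosen values of $\phi$ and $\eta$) applies the economic transformation (\ref{economic}) to an equatorial qubit and recovers $F_A=(1+\cos\eta)/2$ and $F_B=(1+\sin\eta)/2$.
\begin{verbatim}
# Minimal sketch: apply the economic phase-covariant cloner to an equatorial
# qubit and compute the single-qubit fidelities of both outputs.
import numpy as np

def cloner_output(phi, eta):
    """Two-qubit output state, basis ordered |00>, |01>, |10>, |11>."""
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1 / np.sqrt(2)                                # from |0>|0> -> |00>
    psi[2] = np.cos(eta) * np.exp(1j * phi) / np.sqrt(2)   # |1>|0> -> cos eta |10>
    psi[1] = np.sin(eta) * np.exp(1j * phi) / np.sqrt(2)   #          + sin eta |01>
    return psi

phi, eta = 0.7, np.pi / 4
out = np.outer(cloner_output(phi, eta), cloner_output(phi, eta).conj())
out = out.reshape(2, 2, 2, 2)                   # indices (a, b, a', b')
rho_A = np.trace(out, axis1=1, axis2=3)         # trace over qubit B
rho_B = np.trace(out, axis1=0, axis2=2)         # trace over qubit A
psi_in = np.array([1, np.exp(1j * phi)]) / np.sqrt(2)

print(np.real(np.vdot(psi_in, rho_A @ psi_in)), (1 + np.cos(eta)) / 2)
print(np.real(np.vdot(psi_in, rho_B @ psi_in)), (1 + np.sin(eta)) / 2)
# For eta = pi/4 both fidelities equal 1/2 + 1/sqrt(8) ~ 0.8536.
\end{verbatim}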
\subsection{One to many phase-covariant quantum cloning machine for equatorial qubits}
For quantum cloning, we are always interested in the case where multiple copies are created from fewer identical input states. The simplest extension of $1\rightarrow 2$ cloning is one-to-many quantum cloning, i.e., $1\rightarrow M$ phase-covariant quantum cloning. Based on cloning transformations similar to the UQCM \cite{PhysRevLett.79.2153}, for arbitrary equatorial qubits,
$|\Psi \rangle =(|\uparrow \rangle +e^{i\phi }|\downarrow \rangle )/\sqrt{2}$, it is assumed that the cloning transformations take the following form \cite{PhysRevA.65.012304}, \begin{eqnarray}
U_{1,M}|\uparrow \rangle \otimes R&=&
\sum _{j=0}^{M-1}\alpha _j|(M-j)\uparrow , j\downarrow \rangle \otimes R_j, \nonumber \\
U_{1,M}|\downarrow \rangle \otimes R&=& \sum _{j=0}^{M-1}\alpha _{M-1-j}
|(M-1-j)\uparrow , (j+1)\downarrow \rangle \otimes R_j, \label{1m} \end{eqnarray}
where $R$ denotes the initial state of the copy machine and the $M-1$ blank copies, $R_j$ are orthogonal normalized states of the ancilla, and $|(M-j)\psi, j\psi _{\perp}\rangle $ denotes the symmetric and normalized state with $M-j$ qubits in state $\psi $ and $j$ qubits in state $\psi _{\perp}$. We already know the result of the universal case: for arbitrary input states, the choice $\alpha _j=\sqrt{\frac {2(M-j)}{M(M+1)}}$ gives the optimal $1\rightarrow M$ universal quantum cloning \cite{PhysRevLett.79.2153}.
Next we consider the case where the input states are restricted to equatorial qubits. The phase-covariant transformations are assumed to satisfy two properties: orientation invariance of the Bloch vector, and output states lying in the symmetric subspace, which naturally ensures identical copies. Unitarity and normalization are ensured by $\sum _{j=0}^{M-1}\alpha _j^2=1$. We now look for the optimal phase-covariant cloning machine. The fidelity is found to take the form, \begin{eqnarray} F=\frac {1}{2}[1+\eta (1,M)], \end{eqnarray} where \begin{eqnarray} \eta (1,M)=\sum _{j=0}^{M-1}\alpha _j\alpha _{M-1-j} \frac {C_{M-1}^j}{\sqrt{C_M^jC_M^{j+1}}}. \end{eqnarray} From this result, it is straightforward to examine the two special cases $M=2,3$. For $M=2$, we have $\alpha _0^2+\alpha _1^2=1$ and $\eta (1,M)=\sqrt{2}\alpha _0\alpha _1$. For $\alpha _0=\alpha _1=1/\sqrt{2}$, the fidelity achieves its maximum. For $M=3$, we have $\alpha _0^2+\alpha _1^2+\alpha _2^2=1$, and \begin{eqnarray} \eta (1,3)=\frac {2}{3}\alpha _1^2 +\frac {2}{\sqrt{3}}\alpha _0\alpha _2. \end{eqnarray} For $\alpha _0=\alpha _2=0, \alpha _1=1$, we have $\eta (1,3)=\frac {2}{3}$, which is the optimal value, and it reproduces the quantum triplicator for $x$-$y$ equatorial qubits presented below, \begin{eqnarray}
|\uparrow \rangle \rightarrow \frac {1}{\sqrt {3}}(|\uparrow \uparrow \downarrow\rangle +|\uparrow \downarrow \uparrow \rangle
+|\downarrow \uparrow \uparrow \rangle ),\nonumber \\
|\downarrow \rangle \rightarrow \frac {1}{\sqrt {3}}(|\downarrow \downarrow \uparrow\rangle +|\downarrow \uparrow \downarrow \rangle
+|\uparrow \downarrow \downarrow \rangle ).\nonumber \\ \end{eqnarray} Note that the fidelity of this quantum triplicator is $5/6$ which is the same as the $1\rightarrow 2$ UQCM.
We next review the result for the 1 to $M$ phase-covariant quantum cloning transformations. When $M$ is even, we take $\alpha _j=\sqrt{2}/2$ for $j=M/2-1, M/2$ and $\alpha _j=0$ otherwise. When $M$ is odd, we take $\alpha _j=1$ for $j=(M-1)/2 $ and $\alpha _j=0$ otherwise. The corresponding fidelities are $F=\frac {1}{2}+\frac {\sqrt{M(M+2)}}{4M}$ for even $M$ and $F=\frac {1}{2}+\frac {(M+1)}{4M}$ for odd $M$. The explicit cloning transformations have already been presented in (\ref{1m}).
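These fidelities can be checked by directly inserting the above coefficients into $\eta(1,M)$; the following is a minimal numerical sketch of that check (nothing beyond the formulas already quoted is assumed).
\begin{verbatim}
# Minimal sketch: plug the quoted optimal alpha_j into
# eta(1,M) = sum_j alpha_j alpha_{M-1-j} C(M-1,j)/sqrt(C(M,j) C(M,j+1))
# and compare F = (1 + eta)/2 with the closed-form fidelities.
from math import comb, sqrt

def eta_1M(alpha, M):
    return sum(alpha[j] * alpha[M - 1 - j] * comb(M - 1, j)
               / sqrt(comb(M, j) * comb(M, j + 1)) for j in range(M))

for M in range(2, 9):
    alpha = [0.0] * M
    if M % 2 == 0:
        alpha[M // 2 - 1] = alpha[M // 2] = 1 / sqrt(2)
        F_formula = 0.5 + sqrt(M * (M + 2)) / (4 * M)
    else:
        alpha[(M - 1) // 2] = 1.0
        F_formula = 0.5 + (M + 1) / (4 * M)
    F = 0.5 * (1 + eta_1M(alpha, M))
    print(M, round(F, 6), round(F_formula, 6))   # the two columns agree
\end{verbatim}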
The above fidelities for the cases $M=2, 3$ are easily seen to be optimal. We next show that for general $M$ the fidelities shown above are maximal as well \cite{PhysRevA.65.012304}. As just reviewed, the method introduced in \cite{PhysRevLett.79.2153} for the UQCM can also be applied to this phase-covariant case. Here, we try to present a more general formula by considering the $N$ to $M$ cloning transformation. This formula incorporates the coefficients of the unitary transformation into the un-normalized ancillary states. We expect that this formula can be used to study the general optimal $N\rightarrow M$ phase-covariant quantum cloning, which is still an open question. We will then reduce the general formula to the simple case $N=1$ to reach our conclusion.
By expansion, the $N$ identical input states for equatorial qubits can be written as, \begin{eqnarray}
|\Psi \rangle ^{\otimes N}=\sum _{j=0}^Ne^{ij\phi }\sqrt{C_N^j}
|(N-j)\uparrow ,j\downarrow \rangle . \label{phaseinput} \end{eqnarray} The most general $N$ to $M$ quantum cloning machine for equatorial qubits is expressed as \begin{eqnarray}
|(N-j)\uparrow ,j\downarrow \rangle \otimes R \rightarrow
\sum _{k=0}^M|(M-k)\uparrow ,k\downarrow \rangle \otimes |R_{jk}\rangle , \label{phasecloning} \end{eqnarray}
where $R$ still denotes the $M-N$ blank copies and the initial state of the cloning machine, and $|R_{jk}\rangle $ are unnormalized final states of the ancilla. By using the unitarity condition, we know that the ancillary states should satisfy the following condition, \begin{eqnarray}
\sum _{k=0}^M\langle R_{j'k}|R_{jk}\rangle =\delta _{jj'}. \end{eqnarray} Substituting the input state (\ref{phaseinput}) into the cloning transformation (\ref{phasecloning}), we obtain the whole output state including the ancilla, \begin{eqnarray}
|\Psi \rangle ^{\otimes N}\rightarrow \sum _{j=0}^Ne^{ij\phi }\sqrt{C_N^j}\sum _{k=0}^M|(M-k)\uparrow ,k\downarrow \rangle \otimes |R_{jk}\rangle . \end{eqnarray} By tracing out the ancillary state, the output state of $M$-qubit takes the form, \begin{eqnarray}
\rho ^{out}=\sum _{j',k',j,k}e^{i(j-j')\phi }\sqrt {C_N^jC_N^{j'}}\langle R_{j'k'}|R_{jk}\rangle
|(M-k)\uparrow ,k\downarrow \rangle \langle (M-k')\uparrow ,k'\downarrow |. \end{eqnarray} The one-qubit reduced density operators are all the same, which is ensured by the symmetric-subspace representation. The fidelity between the input and one output qubit can then be calculated as \begin{eqnarray}
F=\langle \Psi |\rho ^{out}_{red.}|\Psi \rangle
=\sum _{j',k',j,k}\langle R_{j'k'}|R_{jk} \rangle A_{j'k'jk}, \end{eqnarray} where $\rho ^{out}_{red.}$ is the density operator of each output qubit, obtained by taking the partial trace over $M-1$ output qubits with only one qubit left. We impose the condition that the output density operator has the property of Bloch vector invariance, and we next consider the simple case $N=1$, \begin{eqnarray} A_{j'k'jk}&=&\frac {1}{4}\{ \delta _{j'j}\delta _{k'k} +(1-\delta _{j'j})[\delta _{k',(k+1)}\frac {\sqrt{(M-k)(k+1)}}{M} \nonumber \\ && +\delta _{k,(k'+1)}\frac {\sqrt{(M-k')(k'+1)}}{M}]\}, \label{amatrix} \end{eqnarray} where $j,j'=0, 1$ for the case $N=1$. The optimal fidelity of this cloning machine for equatorial qubits corresponds to the maximal eigenvalue $\lambda _{max}$ of the matrix $A$ through $F=2\lambda _{max}$ \cite{PhysRevLett.79.2153}. The matrix $A$ (\ref{amatrix}) is a block diagonal matrix with blocks $B$ given by, \begin{eqnarray} B=\frac {1}{4}\left( \begin{array}{cc} 1 & \frac {\sqrt {(M-k)(k+1)}}{M}\\ \frac {\sqrt {(M-k)(k+1)}}{M} & 1\end{array}\right) . \end{eqnarray} Thus we can now confirm that the optimal fidelity of the 1 to $M$ cloning machine for equatorial qubits takes the form, \begin{eqnarray} F=2\lambda _{max}=\left\{ \begin{array}{l} \frac {1}{2}+\frac {\sqrt{M(M+2)}}{4M}, ~~~~{\rm M~is~even},\\ \frac {1}{2}+\frac {(M+1)}{4M}, ~~~~~~~~~{\rm M~is~odd}.\end{array} \right. \end{eqnarray} Explicitly, the corresponding $1\rightarrow M$ optimal phase-covariant quantum cloning can be written as follows (a numerical check of these fidelities is sketched after the transformations below): \begin{enumerate} \item M is even, suppose $M=2L$, we have \begin{eqnarray}
|\uparrow \rangle &\rightarrow &\frac {\sqrt {2}}{2}|(L+1)\uparrow ,(L-1)\downarrow \rangle \otimes R_0
+\frac {\sqrt {2}}{2}|L\uparrow ,L\downarrow \rangle \otimes R_1, \nonumber \\
|\downarrow \rangle &\rightarrow &\frac {\sqrt {2}}{2}|L\uparrow ,L\downarrow \rangle \otimes R_0
+\frac {\sqrt {2}}{2}|(L-1)\uparrow ,(L+1)\downarrow \rangle \otimes R_1. \label{withancilla} \end{eqnarray} \item M is odd, suppose $M=2L+1$, we have \begin{eqnarray}
|\uparrow \rangle &\rightarrow &|(L+1)\uparrow ,L\downarrow \rangle , \nonumber \\
|\downarrow \rangle &\rightarrow &|L\uparrow ,(L+1)\downarrow \rangle . \label{moddcase} \end{eqnarray} \end{enumerate} Note that the transformations (\ref{withancilla}) involve ancillary states $R_0, R_1$. The simplest economic case without these ancillary states has been presented in (\ref{economic}). The general economic case equivalent to Eq.(\ref{withancilla}) can be written as, \begin{eqnarray}
|\uparrow \rangle &\rightarrow &|(L+1)\uparrow ,(L-1)\downarrow \rangle \nonumber \\
|\downarrow \rangle &\rightarrow &|L\uparrow ,L\downarrow \rangle . \label{generaleconomic} \end{eqnarray}
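For completeness, the eigenvalue route to the same fidelities can also be checked numerically; the following minimal sketch (which assumes nothing beyond the block $B$ written above) maximizes the eigenvalue of $B$ over $k$ and compares $F=2\lambda _{max}$ with the even/odd formulas quoted earlier.
\begin{verbatim}
# Minimal sketch: F = 2 lambda_max of the block B, maximized over k,
# reproduces the even/odd 1 -> M phase-covariant fidelities.
import numpy as np

for M in range(2, 9):
    lam_max = max(
        np.linalg.eigvalsh(0.25 * np.array(
            [[1.0, np.sqrt((M - k) * (k + 1)) / M],
             [np.sqrt((M - k) * (k + 1)) / M, 1.0]])).max()
        for k in range(M))
    F = 2 * lam_max
    F_formula = (0.5 + np.sqrt(M * (M + 2)) / (4 * M) if M % 2 == 0
                 else 0.5 + (M + 1) / (4 * M))
    print(M, round(F, 6), round(float(F_formula), 6))   # the columns agree
\end{verbatim}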
The optimal phase-covariant quantum cloning for the general $N\rightarrow M$ case still seems elusive; some related results and the phase cloning of qutrits can be found in \cite{PhysRevA.67.042306}. The one to three phase-covariant quantum cloning is realized in an optics system \cite{1to3phasecloning}.
\subsection{Phase quantum cloning: comparison between economic and non-economic}
Phase quantum cloning with input
$|\psi \rangle =\frac {1}{\sqrt{2}}(|0\rangle +e^{i\phi }
|1\rangle )$ can be realized by both economic and non-economic transformations with exactly the same optimal fidelity. Since qubits implemented in quantum devices are precious, one would prefer the economic phase cloning.
On the other hand, there exist some subtle differences between those two cases which are not generally noticed. For convenience, let us present those transformations explicitly. From the general results in Eq.(\ref{withancilla}), the optimal phase-covariant cloning transformation takes the form, \begin{eqnarray}
|0\rangle \rightarrow
\frac {1}{\sqrt{2}}|00\rangle |0\rangle _a
+{\frac 12}\left( |01\rangle +
|10\rangle \right) |1\rangle _a, \nonumber \\
|1\rangle \rightarrow
\frac {1}{\sqrt{2}}|11\rangle |1\rangle _a
+{\frac 12}\left( |01\rangle +
|10\rangle \right) |0\rangle _a, \label{non-economic} \end{eqnarray} where the subindex $a$ denotes the ancillary state. With the help of Eq.(\ref{generaleconomic}), the economic phase-covariant cloning takes the following form, which is also presented in Eq.(\ref{economic}); here we choose the asymmetry parameter $\eta =\pi /4$,
\begin{eqnarray}
&&|0\rangle\rightarrow|0\rangle|0\rangle,\nonumber \\
&&|1\rangle\rightarrow \frac{1}{\sqrt {2}}(|1\rangle|0\rangle+|0\rangle|1\rangle). \label{economic0} \end{eqnarray}
We already know that the fidelities of both the economic and non-economic cases are the same and optimal, see Eq.(\ref{phaseoptimal}), \begin{eqnarray} F_{optimal}=\frac {1}{2}+\sqrt{\frac {1}{8}}. \end{eqnarray} The single-qubit reduced density matrix of the output from (\ref{non-economic}) can be calculated as, \begin{eqnarray}
\rho _{red.}=\frac {1}{\sqrt{2}}|\psi \rangle \langle \psi |+ \left( \frac {1}{2}-\sqrt{\frac {1}{8}}\right) \mathbb{I} =\left( \begin{array}{ll}\frac {1}{2}& \frac {1}{\sqrt {8}}e^{-i\phi }\\ \frac {1}{\sqrt {8}}e^{i\phi }&\frac {1}{2} \end{array}\right) \label{scalar} \end{eqnarray} It takes the scalar form, i.e., the single output can be written as a mixture of input qubit and a completely mixed state $\mathbb{I}/2$.
In comparison, the single-qubit reduced density matrix of the output in the economic case is, \begin{eqnarray} \rho ^{eco.}_{red.} =\left( \begin{array}{ll}\frac {3}{4}& \frac {1}{\sqrt {8}}e^{-i\phi }\\ \frac {1}{\sqrt {8}}e^{i\phi }&\frac {1}{4} \end{array}\right). \end{eqnarray} This does not have the scalar form; it also means that the relation $T(\mathbb{I})=\mathbb{I}$ is not satisfied.
In the eavesdropping of the well-known BB84 QKD, all four states $|0\rangle ,|1\rangle , 1/\sqrt{2}(|0\rangle
+|1\rangle ), 1/\sqrt{2}(|0\rangle -|1\rangle )$ can be described by
$|\Psi \rangle =\cos \theta |0\rangle +\sin \theta |1\rangle $. So, instead of the UQCM, we should at least use the cloning machine for equatorial qubits in the eavesdropping. Actually, in an individual attack, we cannot do better than the cloning machine for equatorial qubits \cite{Bruss2000}. The cloning machines presented in equations (\ref{economic0},\ref{non-economic}) can be used in analyzing the eavesdropping of the other two mutually unbiased bases $1/\sqrt{2}(|0\rangle -|1\rangle ), 1/\sqrt{2}(|0\rangle +|1\rangle ), 1/\sqrt{2}(|0\rangle +i|1\rangle
), 1/\sqrt{2}(|0\rangle -i|1\rangle )$, which are of the form
$|\psi \rangle =(|0\rangle +e^{i\phi }|1\rangle )/\sqrt {2}$.
\subsection{Phase-covariant quantum cloning for qudits}
The phase quantum cloning can be applied to higher-dimensional systems. For the qutrit case, the optimal fidelity was obtained by D'Ariano \emph {et al.} \cite{PhysRevA.64.042308} and Cerf \emph{et al.} \cite{688607420020710}: \begin{eqnarray} F=\frac {5+\sqrt{17}}{12}, ~~~{\rm for}~d=3. \label{level3} \end{eqnarray} In this review, we consider the general case in $d$ dimensions \cite{PhysRevA.67.022317}.
The input state is restricted to have the same amplitude for each basis state but arbitrary phases, \begin{eqnarray}
|\psi \rangle =\frac {1}{\sqrt{d}}\sum _{j=0}^{d-1} e^{i\phi _j}|j\rangle , \label{dphase-input} \end{eqnarray} where the phases $\phi _j\in [0,2\pi ), j=0, \cdots, d-1$. An overall phase is not important, so we can assume $\phi _0=0$. For comparing the input and the single-qudit output, we write the density operator of the input as
$\rho ^{(in)}={\frac {1}{d}}\sum _{jk}e^{i(\phi _j-\phi _k)}|j\rangle \langle k|$. Our aim is to find the optimal quantum cloning transformations so that each output qudit is close to this input density operator.
Considering the symmetries, we can propose the following simple transformations, \begin{eqnarray}
U|j\rangle |Q\rangle =\alpha |jj\rangle |R_j\rangle +\frac {\beta
}{\sqrt{2(d-1)}}\sum _{l\not =j}^{d-1} (|jl\rangle +|lj\rangle
)|R_l\rangle , \label{dphaseclone} \end{eqnarray}
where $\alpha ,\beta $ are real numbers with $\alpha ^2+\beta ^2=1$. Actually, letting $\alpha , \beta $ be complex numbers does not improve the fidelity. The $|R_j\rangle $ are orthonormal ancillary states.
Substituting the input state (\ref{dphase-input}) into the cloning transformation and tracing out the ancillary states, the output state takes the form \begin{eqnarray}
\rho ^{(out)}&=&\frac {\alpha ^2}{d}\sum _j|jj\rangle \langle jj|
+\frac {\alpha \beta }{d\sqrt {2(d-1)}}\sum _{j\not =l} e^{i(\phi _j-\phi _l)}\left[ |jj\rangle (\langle jl| \right. \nonumber \\
&&\left. +\langle lj|) +(|jl\rangle +|lj\rangle )\langle ll|\right] \nonumber \\
&+&\frac {\beta ^2}{2d(d-1)}\sum _{jj'}\sum _{l\not =j,j'} e^{i(\phi _j-\phi _{j'})}(|jl\rangle +|lj\rangle ) (\langle lj'|+\langle j'l|). \end{eqnarray} Then, we can obtain the single qudit reduced density matrix of output \begin{eqnarray}
\rho ^{(out)}_{red.}&=&{\frac {1}{d}}\sum _{j}|j\rangle \langle j| \nonumber \\ &&+\left( \frac {\alpha \beta }{d}\sqrt{\frac {2}{d-1}} +\frac
{\beta ^2(d-2)}{2d(d-1)}\right) \sum _{j\not =k} e^{i(\phi _j-\phi _k)}|j\rangle \langle k|. \end{eqnarray} The fidelity can be calculated as \begin{eqnarray} F={\frac {1}{d}}+\alpha \beta \frac {\sqrt{2(d-1)}}{d} +\beta ^2\frac {d-2}{2d}. \end{eqnarray} Now, we need to optimize the fidelity under the restriction $\alpha ^2+\beta ^2=1$. We find that the optimal fidelity of the 1 to 2 phase-covariant quantum cloning machine can be written as \begin{eqnarray} F_{optimal}=\frac {1}{d}+\frac {1}{4d}(d-2+\sqrt{d^2+4d-4}). \label{dfidelity} \end{eqnarray} The optimal fidelity is achieved when $\alpha ,\beta $ take the following values, \begin{eqnarray} \alpha =\left( {\frac {1}{2}}-\frac {d-2}{2\sqrt{d^2+4d-4}}\right) ^{\frac {1}{2}},\nonumber \\ \beta =\left( {\frac {1}{2}}+\frac {d-2}{2\sqrt{d^2+4d-4}}\right) ^{\frac {1}{2}}. \label{ab} \end{eqnarray} For $d=2,3$, these results reduce to the previously known results (\ref{phaseoptimal},\ref{level3}), respectively. As expected, this optimal fidelity of the phase-covariant quantum cloning machine is higher than the corresponding optimal fidelity of the UQCM, \begin{eqnarray} F_{optimal}> F_{universal}=\frac{d+3}{2(d+1)}. \end{eqnarray} These are the optimal phase-covariant quantum cloning machine for qudits (\ref{dphaseclone}, \ref{ab}) and the optimal fidelity (\ref{dfidelity}).
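A minimal numerical sketch (evaluating only the expressions already given above) confirms that the closed-form optimum agrees with a brute-force maximization over $\alpha^2+\beta^2=1$, reduces to the $d=2,3$ values, and exceeds the universal fidelity:
\begin{verbatim}
# Minimal sketch: optimal 1 -> 2 phase-covariant fidelity in dimension d
# versus a brute-force scan over beta, and versus the universal fidelity.
from math import sqrt
import numpy as np

def F_phase(d):
    return 1 / d + (d - 2 + sqrt(d * d + 4 * d - 4)) / (4 * d)

def F_universal(d):
    return (d + 3) / (2 * (d + 1))

for d in (2, 3, 5, 10):
    beta = np.linspace(0, 1, 200001)
    alpha = np.sqrt(1 - beta**2)
    F_scan = (1 / d + alpha * beta * np.sqrt(2 * (d - 1)) / d
              + beta**2 * (d - 2) / (2 * d)).max()
    print(d, round(F_phase(d), 6), round(float(F_scan), 6),
          round(F_universal(d), 6))
# d = 2 gives 1/2 + 1/sqrt(8) and d = 3 gives (5 + sqrt(17))/12, as quoted.
\end{verbatim}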
\subsection{Symmetry condition and minimal sets in determining quantum cloning machines}
In this subsection, we will mainly discuss how the symmetry condition determines the form of the quantum cloning machine. We will also consider the minimal input sets that determine those quantum cloning machines.
As we have shown, the number of BB84 states is four, and the optimal cloning of those four states actually clones arbitrary equatorial qubits optimally. This means that the BB84 states suffice to determine the phase-covariant quantum cloning machine. A natural question is whether they form the minimal input set necessary for phase-covariant cloning. It turns out that the set of BB84 states is not the minimal input set: the minimal set which determines the phase cloning machine possesses the
highest symmetry on the Bloch sphere and contains only three states. Here, we give a brief proof.
Consider the three input states $\frac{1}{\sqrt2}(|0\rangle+e^{i\phi}|1\rangle)$ with $\phi=0,2\pi/3,4\pi/3$, which form a finite set. We suppose that the quantum cloning machine works in the symmetric subspace and is economic. The most general form can be written as, \begin{eqnarray}
|0\rangle\rightarrow a|00\rangle+b|01\rangle+c|10\rangle+d|11\rangle ,\nonumber \\
|1\rangle\rightarrow e|00\rangle+f|01\rangle+g|10\rangle+h|11\rangle , \end{eqnarray}
where $a$ to $h$ are complex numbers which satisfy the constraints $|a|^2+|b|^2+|c|^2+|d|^2=1$, $|e|^2+|f|^2+|g|^2+|h|^2=1$, and $ae^*+bf^*+cg^*+dh^*=0$ due to the normalization and orthogonality conditions. Because the machine works in the symmetric subspace, we have $b=c$ and $f=g$. It is easily calculated that the fidelity for an arbitrary input equatorial state is, \begin{equation} F_A(\phi)=\frac{1}{2}+\frac{1}{2}Re[ac^*e^{i\phi}+ag^*+ec^*e^{2i\phi}+eg^*e^{i\phi}+bd^*e^{i\phi}+fh^*e^{i\phi}+fd^*e^{2i\phi}+bh^*]. \end{equation} Simplifying this expression using the constraints above, we find \begin{equation} F_A(\phi)=\lambda_1\cos(2\phi+\psi_1)+\lambda_2\cos(\phi+\psi_2)+\lambda_3, \end{equation}
where $\lambda_i, i=1,2,3$, are real numbers. Explicit expressions for these parameters are: $\lambda_1=\frac{1}{2}|ec^*+fd^*|$, $\psi_1=\arg(ec^*+fd^*)$, $\lambda_2=\frac{1}{2}|ac^*+eg^*+bd^*+fh^*|$, $\psi_2=\arg(ac^*+eg^*+bd^*+fh^*)$, and $\lambda_3=\frac{1}{2}+\frac{1}{2}Re(ag^*+bh^*)$.
Additionally, we require the cloning fidelities for those three states to be the same: $F(0)=F(2\pi/3)=F(4\pi/3)$. This yields two more constraints: $\lambda_1\sin\psi_1=\lambda_2\sin\psi_2$ and $\lambda_1\cos\psi_1+\lambda_2\cos\psi_2=0$. With the help of some algebraic inequalities, one finds that $F$ reaches its maximum value if and only if $\lambda_1=\lambda_2=0$. We thus arrive at a simple form of the fidelity for the three input states, \begin{equation} F_A=\lambda_3. \end{equation} Remarkably, this result shows that $F_A$ is independent of the phase parameter $\phi$ for any $\phi$; the cloning machine becomes the standard phase-covariant quantum cloning machine. Note that the three states constituting the minimal input set have also been studied from the viewpoint of designing quantum measurement techniques for optimal quantum information detection \cite{PeresWootters}.
We have just shown that the phase-covariant quantum cloning machine can be determined completely
by a minimal input set with only three symmetric states. Here we would like to remark on two points: (i) We know that the phase cloning may take two different forms, with or without the ancillary states. The minimal input set was studied above for the economic case; we point out that the cloning fidelity cannot be increased with the help of the ancillary states, so the conclusion that this minimal input set determines the phase-covariant quantum cloning holds for both the economic and non-economic cases. (ii) We have just reviewed the 1 to 2 cloning. If we wish to clone these three states equally well to $M$ copies, the $1\rightarrow M$ phase-covariant quantum cloning machine is the optimal one, so this minimal input set also determines completely the $1\rightarrow M$ phase-covariant cloning machine. Both conclusions can be obtained by investigations similar to the one just reviewed.
We may note that the minimal input set contains three states which have a geometric symmetry in the two-dimensional Hilbert space. It is not obvious what kind of symmetry the minimal input set should possess for phase-covariant quantum cloning in higher-dimensional systems. This is an open question and might be explored further. A similar question arises for the UQCM: we already know that the minimal input set for the UQCM in two dimensions consists of four states forming a tetrahedron on the Bloch sphere, see Fig.(\ref{4states}), but it is not clear what the minimal input set of the UQCM is in higher-dimensional systems.
We know that the fidelity of the $1\rightarrow \infty $ phase quantum cloning corresponds to quantum phase estimation \cite{prl80.1571}. The result that the quantum phase cloning of states with arbitrary phase is equivalent to the cloning of a finite ensemble containing only three special states may shed light on the quantum state estimation of some fixed ensembles. Besides the case where the input states are restricted to the equator of the Bloch sphere, there are other cases where the input states may be located on a belt of the Bloch sphere \cite{yuzw}, or follow other distributions. It will be interesting to study the minimal input sets for those quantum cloning machines.
\subsection{Quantum cloning machines of arbitrary set of MUBs}
Here we discuss the higher-dimensional quantum cloning of mutually unbiased bases (MUBs). It is known that a Hilbert space of dimension $d$ contains $d+1$ sets of MUBs, provided $d$ is prime. In this review, when MUBs are used, we restrict our attention to the case where $d$ is prime. We can design QKD protocols by using arbitrary sets of MUBs. For example, in the 2-dimensional system, we have two well-accepted QKD protocols: the six-state protocol uses 3 sets of MUBs and the BB84 protocol uses 2 sets. In higher dimensions, we can also propose corresponding cloning machines for those sets of MUBs.
Let us first present some characteristics of MUBs. In a system of dimension $d$, there are $d+1$ MUBs \cite{roychowdhuryMUB}, namely $\{|i\rangle\}$ and $\{|\tilde{i}^{(k)}\rangle\}$ $(k=0,1, ...,d-1)$, where the latter are expressed as, \begin{equation}
|\tilde{i}^{(k)}\rangle=\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}\omega^{i(d-j)-ks_j}|j\rangle, \label{mubphase} \end{equation}
with $s_j=j+...+(d-1)$ and $\omega=e^{i\frac{2\pi}{d}}$. Any two states in the same set are orthogonal, $\langle\tilde{i}^{(k)}|\tilde{l}^{(k)}\rangle=\delta_{il}$, and any two states in different sets satisfy $|\langle\tilde{i}^{(k)}|\tilde{l}^{(j)}\rangle|=\frac{1}{\sqrt{d}}$, $k\not =j$, that is, their overlaps are all the same. Define the generalized Pauli matrices $\sigma_x$ and $\sigma_z$ as,
$\sigma_x|j\rangle=|j+1\rangle$ and $\sigma_z|j\rangle=\omega^j|j\rangle$. Note that, as usual, we omit modulo $d$ in equations. Then there are $d^2-1$ independent Pauli matrices $U_{mn}=(\sigma _x)^m(\sigma _z)^n$ and
$U_{mn}|j\rangle=\omega^{jn}|j+m\rangle$. Those MUBs are eigenvectors of operators $\sigma _z, \sigma _x(\sigma _z)^k$, $k=0,1,...,d-1$, \begin{eqnarray}
\sigma _x(\sigma _z)^k|\tilde{i}^{(k)}\rangle =\omega ^i|\tilde{i}^{(k)}\rangle . \end{eqnarray} The result of MUBs can also be found in \cite{Wootters1989}.
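These properties are easy to check numerically. The following short sketch (our own illustration, not taken from the cited references; the prime dimension $d=5$ and the use of numpy are arbitrary choices) builds the generalized Pauli matrices, collects the $d+1$ eigenbases, and verifies that all pairwise overlaps between vectors of different bases equal $1/\sqrt{d}$.
\begin{verbatim}
import numpy as np

d = 5                                   # any prime dimension
w = np.exp(2j * np.pi / d)

# generalized Pauli matrices: sx|j> = |j+1 mod d>, sz|j> = w^j |j>
sx = np.roll(np.eye(d), 1, axis=0)
sz = np.diag(w ** np.arange(d))

# the d+1 bases: computational basis plus eigenbases of sx sz^k
bases = [np.eye(d)]
for k in range(d):
    _, vecs = np.linalg.eig(sx @ np.linalg.matrix_power(sz, k))
    bases.append(vecs)                  # columns are basis vectors

# mutual unbiasedness: |<a|b>| = 1/sqrt(d) for vectors of different bases
for p in range(d + 1):
    for q in range(p + 1, d + 1):
        overlaps = np.abs(bases[p].conj().T @ bases[q])
        assert np.allclose(overlaps, 1 / np.sqrt(d))
print("all", d + 1, "bases are mutually unbiased")
\end{verbatim}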
A straightforward generalization of the BB84 states in $d$ dimensions is to use two sets of bases from those $d+1$ MUBs, and the generalization of the six-state protocol is to use all $d+1$ mutually unbiased bases \cite{PhysRevLett.88.127902}. Suppose the two MUBs are $\{|k\rangle\},k=0,1,2,...,d-1$, and its dual under a Fourier transformation, \begin{equation}
|\bar{l}\rangle=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{2\pi i(kl/d)}|k\rangle, \end{equation}
where $l=0,1,2,...,d-1$. We follow the standard QKD scenario: Alice initially sends the state $|\psi\rangle$, Eve uses her quantum cloning machine to copy the state, and the disturbed state is then still sent to Bob. Eve keeps a non-perfect copy of the sent state together with the ancillary state of her quantum cloning machine. The whole system is written as, \begin{equation}
|\psi\rangle_{A}\rightarrow\sum_{m,n=0}^{d-1}a_{m,n}U_{m,n}|\psi\rangle_B|B_{m,-n}\rangle_{E,E'}, \label{whole48} \end{equation}
where A,B,E, and E' represent Alice's qudit, Bob's clone, Eve's clone, and the cloning machine. Obviously, the parameters $a_{m,n}$ satisfy $\sum_{m,n=0}^{d-1}|a_{m,n}|^2=1$. As we already know, $|B_{m,-n}\rangle_{EE'}$ stands for the $d$-dimensional Bell states, which are the maximally entangled states of two qudits with explicit form: \begin{equation}
|B_{m,n}\rangle_{EE'}=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{2\pi i(kn/d)}|k\rangle_E|k+m\rangle_{E'}, \end{equation} where $m,n=0,1,...,d-1$. Note that the operators $U_{m,n}$ can be expressed as, \begin{equation}
U_{m,n}=\sum_{k=0}^{d-1}e^{2\pi i(kn/d)}|k+m\rangle\langle k|. \end{equation}
They actually form a group of qudit error operations, where $m$ represents the shift errors and $n$ is related to the phase errors. Tracing out Eve's joint states, Bob's clone will be a mixed state; it is the same as the state $|\psi \rangle $ passing through a quantum channel which causes decoherence, \begin{equation}
\rho_B=\sum_{m,n=0}^{d-1}|a_{m,n}|^2U_{m,n}|\psi\rangle\langle\psi|U_{m,n}^{\dagger}. \label{Bobdensity} \end{equation}
Therefore, when Alice sends states $|k\rangle$, Bob's fidelity is \begin{equation}
F=\langle k|\rho_B|k\rangle=\sum_{n=0}^{d-1}|a_{0,n}|^2. \end{equation}
Also, when Alice sends states $|\bar{l}\rangle$, Bob's fidelity is \begin{equation}
\bar{F}=\langle\bar{l}|\rho_B|\bar{l}\rangle=\sum_{m=0}^{d-1}|a_{m,0}|^2. \end{equation} Considering the requirement that the cloner works equally well for these states, we must choose the amplitude matrix of the following form, \begin{equation} (a_{m,n})= \begin{pmatrix} v & x & \cdots & x \\ x & y & \cdots & y \\ \vdots & \vdots & \ddots & \vdots \\ x & y & \cdots & y \end{pmatrix} \label{ampmatrix} \end{equation} where $x$, $y$ and $v$ are real numbers satisfying $v^2+2(d-1)x^2+(d-1)^2y^2=1$. In this way, we find Bob's fidelity is \begin{equation} F=v^2+(d-1)x^2. \end{equation}
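To make this structure concrete, the following minimal numerical sketch (our own illustration; the dimension, the values of $v$ and $x$, and the sent basis state are arbitrary) builds the error operators $U_{m,n}$, forms Bob's mixed state (\ref{Bobdensity}) for an input $|k\rangle$, and confirms that $F=v^2+(d-1)x^2$.
\begin{verbatim}
import numpy as np

d = 4
w = np.exp(2j * np.pi / d)

def U(m, n):
    # U_{m,n} = sum_k e^{2 pi i k n / d} |k+m><k|
    op = np.zeros((d, d), complex)
    for k in range(d):
        op[(k + m) % d, k] = w ** (k * n)
    return op

# amplitude matrix of Eq. (ampmatrix): one free pair (v, x); y fixed by normalization
v, x = 0.8, 0.1
y = np.sqrt((1 - v**2 - 2*(d - 1)*x**2) / (d - 1)**2)
a = np.full((d, d), y)
a[0, 0] = v
a[0, 1:] = x
a[1:, 0] = x

# Bob's state when Alice sends |k>, Eq. (Bobdensity)
k = 2
psi = np.zeros(d); psi[k] = 1.0
rho_B = sum(abs(a[m, n])**2 * np.outer(U(m, n) @ psi, (U(m, n) @ psi).conj())
            for m in range(d) for n in range(d))
print(np.real(psi @ rho_B @ psi), v**2 + (d - 1)*x**2)   # the two numbers agree
\end{verbatim}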
Next let us consider the state on Eve's side. Eve performs the unitary transformation on both the transmitted state $|\psi \rangle $ and a maximally entangled state, \begin{equation} U=\sum_{m,n=0}^{d-1}a_{mn}(U_{mn}\otimes U_{m,-n}\otimes\mathbb{I}), \end{equation} as shown in Eq.(\ref{whole48}). We can find that this transformation can be rewritten in a different form as follows, \begin{eqnarray}
U|\psi\rangle_{A}|\Phi _{0,0}\rangle _{E,E'}&=&
\sum_{m,n=0}^{d-1}a_{m,n}U_{m,n}|\psi\rangle_B|B_{m,-n}\rangle_{E,E'} \nonumber \\
&=&\sum _{mn}b_{m,n}|\Phi _{-m,n}\rangle _{BE'}\otimes U_{m,n}|\psi \rangle _E. \label{wholeAB} \end{eqnarray} So coefficients $a_{m,n}$ are related with another set of coefficients $b_{m,n}$ as follows, \begin{equation} b_{m,n}=\frac{1}{d}\sum_{m',n'=0}^{d-1}e^{2\pi i(nm'-mn')/d}a_{m',n'}. \label{fourier} \end{equation} By using coefficients $b_{m,n}$ as shown in Eq.(\ref{wholeAB}), the density operator of Eve takes a simple form as \begin{equation}
\rho_E=\sum_{m,n=0}^{d-1}|b_{m,n}|^2U_{m,n}|\psi\rangle\langle\psi|U_{m,n}^{\dagger}. \end{equation} This density operator is similar to Bob's density operator in (\ref{Bobdensity}) except that the coefficients $b_{m,n}$ are used. Further, we find that the fidelity for Eve can be expressed as, \begin{equation} F_E=v'^2+(d-1)x'^2 \end{equation} where $v'$, $x'$ and $y'$ correspond to the parameters of the coefficients $b_{m,n}$, with a structure similar to the matrix in (\ref{ampmatrix}), and are related to $a_{m,n}$ by the Fourier transformation (\ref{fourier}). Explicitly, those parameters can be written as, \begin{eqnarray} x'&=&[v+(d-2)x+(1-d)y]/d,\nonumber\\ y'&=&(v-2x+y)/d,\nonumber\\ v'&=&[v+2(d-1)x+(d-1)^2y]/d. \end{eqnarray}
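The Fourier relation (\ref{fourier}) and the explicit expressions for $x'$, $y'$, $v'$ can be verified directly; a brute-force numerical sketch (again our own illustration with arbitrary parameter values) is the following.
\begin{verbatim}
import numpy as np

d = 3
v, x = 0.7, 0.2
y = np.sqrt((1 - v**2 - 2*(d - 1)*x**2) / (d - 1)**2)
a = np.full((d, d), y)
a[0, 0] = v
a[0, 1:] = x
a[1:, 0] = x

# b_{mn} = (1/d) sum_{m',n'} e^{2 pi i (n m' - m n')/d} a_{m'n'}, Eq. (fourier)
b = np.zeros((d, d), complex)
for m in range(d):
    for n in range(d):
        for mp in range(d):
            for nq in range(d):
                b[m, n] += np.exp(2j*np.pi*(n*mp - m*nq)/d) * a[mp, nq] / d

vp = (v + 2*(d - 1)*x + (d - 1)**2 * y) / d
xp = (v + (d - 2)*x + (1 - d)*y) / d
yp = (v - 2*x + y) / d
print(np.allclose(b[0, 0], vp), np.allclose(b[0, 1], xp), np.allclose(b[1, 1], yp))
\end{verbatim}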
Now, both fidelities of Eve and Bob are represented by the same parameters $v,x,y$. Our purpose is to maximize Eve's fidelity $F_E$ under a given value of Bob's fidelity $F$. The trade-off relation can then be found as, \begin{equation} F_E=\frac{F}{d}+\frac{(d-1)(1-F)}{d}+\frac{2}{d}\sqrt{(d-1)F(1-F)}. \end{equation}
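As a quick consistency check of this trade-off (our own illustration), setting $F_E=F$ and $d=2$ gives $F=\frac{1}{2}+\sqrt{F(1-F)}$, whose solution is $F=\frac{1}{2}+\frac{1}{2\sqrt{2}}\approx 0.854$, the familiar symmetric fidelity for cloning the two mutually unbiased qubit bases of BB84.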
Next, we consider another protocol using all available $d+1$ bases. Similarly, by requiring the same fidelity for all used bases, since they are applied randomly, we find that the amplitude matrix presented in Eq.(\ref{ampmatrix}) must satisfy $x=y$. Hence, Bob's fidelity is \begin{equation} F=v^2+(d-1)x^2=1-d(d-1)x^2, \end{equation} and Eve's fidelity is, \begin{equation} F_E=v'^2+(d-1)x'^2=1-d(d-1)x'^2, \end{equation} where $v'$ and $x'$ are expressed as \begin{eqnarray} x'&=&(v-x)/d\nonumber\\ v'&=&[v+(d^2-1)x]/d. \end{eqnarray} These relations induce the trade-off between the two fidelities of Bob and Eve.
For the higher-dimensional case, we may have more choices for QKD. Besides using only two bases or all $d+1$ bases, we may choose any number of sets of mutually unbiased bases. Then corresponding cloning machines are necessary for analyzing the security. Those general QKD protocols were studied recently in \cite{PhysRevA.85.012334}. By using the same arguments about the symmetry, we can find, \begin{eqnarray}
\rho_B&=&\sum_{m,n=1}^{d-1}|a_{mn}|^2|i+m\rangle\langle i+m|,\\
\tilde{\rho}_B^{(k)}&=&\sum_{m,n=0}^{d-1}|a_{mn}|^2(U_{mn}|\tilde{i}^{(k)}\rangle_B)(_B\langle\tilde{i}^{(k)} U_{mn}^{\dagger}),\\
\rho_E&=&\sum_{m,n=1}^{d-1}|b_{mn}|^2|i+m\rangle\langle i+m|,\\
\tilde{\rho}_E^{(k)}&=&\sum_{m,n=0}^{d-1}|b_{mn}|^2(U_{mn}|\tilde{i}^{(k)}\rangle_E)(_E\langle\tilde{i}^{(k)} U_{mn}^{\dagger}), \end{eqnarray} where $k=0,1,...,g-1$. Therefore, one may easily derive the fidelities, \begin{eqnarray}
F_B&=&\sum_n|a_{0n}|^2,\\
\tilde{F}_B^{(k)}&=&\sum_m|a_{m,km}|^2,\\
F_E&=&\frac{1}{d}\sum_m|\sum_na_{mn}|^2,\\
\tilde{F}_E^{(k)}&=&\frac{1}{d}\sum_n|\sum_ma_{m,n+km}|^2, \end{eqnarray} where $k=0,1,...,g-1$. Assuming that Eve's attack is balanced, that is, she induces an equal probability of error for any one of the $g+1$ MUBs, we have, \begin{equation} F_B=\tilde{F}_B^{(0)}=...=\tilde{F}_B^{(g-1)}. \end{equation} These constraints can determine the optimal cloner. Eve could maximize all these $g+1$ fidelities simultaneously and let them be equal. This can be realized by ``vectorization'' of the matrix elements of $(a_{mn})$. Define, \begin{eqnarray} &&\vec{\alpha}_i=(a_{1,1i},...,a_{d-1,(d-1)i}),~~~(i=0,1,...,g-1),\\ &&\vec{A}=(A_1,...,A_{d-1}),\\ &&A_i=\sum_{j\neq0,i,...,(g-1)i}^{d-1}a_{ij},~~~(i=1,2,...,d-1), \end{eqnarray} and the rest of the elements are restricted by the following equations: \begin{eqnarray}
\sum_{j=1}^{d-1}|a_{0j}|^2&=&F_B-|a_{00}|^2,\\
||\vec{\alpha}_i||^2&=&F_B-|a_{00}|^2,~~~ (i=0,1,...,g-1). \end{eqnarray} Finally, Eve's fidelity can be expressed as \begin{equation}
F_E=\frac{1}{d}(|\sum_{j=0}^{d-1}a_{0j}|^2+||\sum_{i=0}^{g-1}\vec{\alpha}_i+\vec{A}||^2). \end{equation} By maximizing Eve's fidelity, the above result can be further simplified by some algebraic considerations, \begin{eqnarray} a_{mn}= \begin{cases} v,~m=n=0,\\ x,~m=0,n\neq0~{\rm or}~m\neq0,n=km,\\ y,{\rm ~~otherwise}, \end{cases}\label{gmatrix} \end{eqnarray} where $k=0,...,g-1$, $v$ is a real number to be determined, and $x=\sqrt{\frac{F_B-v^2}{d-1}}$, $y=\sqrt{\frac{1+gv^2-(g+1)F_B}{(d-1)(d-g)}}$. Now we reach our conclusion that the fidelity of Eve is, \begin{eqnarray} F_E=\frac{1}{d}\{[v+(d-1)x]^2+(d-1)[gx+(d-g)y]^2\}.\label{gfidelity} \end{eqnarray} The only undetermined variable is $v$; we can vary it so that Eve's fidelity $F_E$ reaches its maximum for a fixed fidelity of Bob. The fidelities of Bob and Eve are presented in FIG.\ref{Figure-xiong} for some special cases \cite{PhysRevA.85.012334}.
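As an illustration of this one-parameter optimization (our own sketch; the values of $d$, $g$ and $F_B$ are arbitrary), one may simply scan the admissible range of $v$:
\begin{verbatim}
import numpy as np

d, g = 5, 2            # dimension; g+1 mutually unbiased bases are used
F_B = 0.80             # fixed fidelity of Bob

def F_E(v):
    x = np.sqrt(max(F_B - v**2, 0.0) / (d - 1))
    y = np.sqrt(max(1 + g*v**2 - (g + 1)*F_B, 0.0) / ((d - 1)*(d - g)))
    return ((v + (d - 1)*x)**2 + (d - 1)*(g*x + (d - g)*y)**2) / d

# admissible range of v follows from x^2 >= 0 and y^2 >= 0
v_lo = np.sqrt(max(((g + 1)*F_B - 1) / g, 0.0))
v_hi = np.sqrt(F_B)
vs = np.linspace(v_lo, v_hi, 2001)
print("max F_E over v:", max(F_E(v) for v in vs))
\end{verbatim}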
\begin{figure}
\caption{The fidelities of Bob and Eve for dimension $d=5$. The number of mutually unbiased bases runs from 2 to 6. These results are presented in \cite{PhysRevA.85.012334}.}
\label{Figure-xiong}
\end{figure}
From this conclusion, we may easily find the results for $g=1$ and $g=d$, which lead to the results in \cite{PhysRevLett.88.127902}. We are also interested in the condition $F_B=F_E$, which corresponds to symmetric cloning; the remaining variable $v$ is fixed in this case and actually takes a rather complicated form. We finally have, \begin{eqnarray} F=\frac{2}{d}\frac{d-g}{(g+3)-\sqrt{(g+3)^2-8\frac{(d-g)(g+1)}{d}}} \label{gsymmfidelity}. \end{eqnarray}
As we know, the optimal cloner for $d+1$ MUBs is actually equivalent to the universal quantum cloning machine. It is interesting to know which of the above quantum cloning machines is equivalent to the phase-covariant quantum cloning machine. Stimulated by the fact that the $d$ MUBs presented in Eq.(\ref{mubphase}) only contain phase parameters while the amplitude parameters are fixed, we may suppose that the $d$-dimensional phase-covariant quantum cloning should be equivalent to the cloning of $d$ MUBs. Indeed, letting $g=d-1$, the fidelity (\ref{gsymmfidelity}) coincides with the phase-covariant fidelity in (\ref{dfidelity}). To further check that those two cloning machines are the same, we also need to consider the asymmetric case.
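For instance (our own worked check), taking $d=3$ and $g=d-1=2$ in (\ref{gsymmfidelity}) gives $F=\frac{2}{3}\,\frac{1}{5-\sqrt{17}}=\frac{5+\sqrt{17}}{12}$, which equals the optimal phase-covariant cloning fidelity $\frac{1}{4d}(d+2+\sqrt{d^2+4d-4})$ evaluated at $d=3$.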
Let us consider the equatorial qudit as,
$|\psi \rangle =\frac{1}{\sqrt d}\sum_{j=0}^{d-1}e^{i\phi_j}|j\rangle $, where $\phi_j$ are phase parameters. One can assume that the asymmetric cloning transformation is given as, \begin{equation}
|i\rangle\rightarrow\alpha|ii\rangle|i\rangle+\frac{\beta}{\sqrt{d-1}}\sum_{j\neq i}(\cos\theta|ij\rangle+\sin\theta|ji\rangle)|j\rangle, \end{equation} where $\theta$ is the asymmetry parameter. Therefore one can derive two fidelities, for Bob and Eve respectively, \begin{eqnarray} F_1=\frac{1}{d}+\frac{2\alpha\beta}{d}\sqrt{d-1}\cos\theta+\frac{\beta^2(d-2)}{d}\cos^2\theta,\\ F_2=\frac{1}{d}+\frac{2\alpha\beta}{d}\sqrt{d-1}\sin\theta+\frac{\beta^2(d-2)}{d}\sin^2\theta, \end{eqnarray} where we still have $\alpha^2+\beta^2=1$. Here we would like to emphasize that the exact values of $\alpha ,\beta $ should depend on the parameter $\theta $. For the symmetric case, $\theta =\pi /4$, their values can be found in Eqs.(\ref{ab}). When $\theta $ changes, the values of $\alpha ,\beta $ also change. Numerical evidence shows that those fidelities are the same as the fidelities in Eq.(\ref{gfidelity}).
We know that MUBs may determine some specified quantum cloning machines, which in general are better than the universal cloning machine that admits arbitrary input states. On the other hand, we may wonder whether those MUBs are the minimal input sets which can be copied optimally by the corresponding cloning machines. In the qubit case, we know that the number of states in the minimal input set can be reduced \cite{Jing2013}. It is not clear what the minimal input sets are for the optimal cloning machines of those MUBs in general $d$ dimensions.
\subsection{Quantum cloning in mean king problem as a quantum key distribution protocol}
Quantum key distribution (QKD) protocols allow two parties, conventionally called Alice (the sender) and Bob (the receiver), to generate shared secret keys for secure communication. In the BB84 protocol \cite{Bennett1984}, states are sent by exploiting two mutually unbiased bases of a qubit. Ekert proposed a QKD protocol based on the Bell theorem using entangled pairs in 1991 (E91) \cite{PhysRevLett.67.661}. As we already know, the BB84 protocol can also be generalized to a six-state protocol \cite{PhysRevLett.81.3018}. It is also possible to propose a QKD protocol by combining the BB84 and E91 protocols. This protocol is based on the so-called mean king problem \cite{meanking}, since its description usually reads like a tale \cite{tale}.
The protocol of the mean king problem can be considered as two steps: the first step is the same as the E91 protocol except without classical announcement of measurement bases, and the second step is like the BB84 protocol. In this protocol, Bob needs to retrodict the outcome of a projective measurement by Alice without knowing the bases she used. For qubits first \cite{meanking} and higher dimensions later \cite{LatinSquare,OrthogonalArray}, it is shown that Bob has a 100\% winning strategy. So it is realized that this quantum retrodiction protocol might be applied as a QKD in quantum cryptography \cite{Bub,Werner,Yoshida2010}.
In this protocol, Alice may exploit bases in a ``meaner'' way by utilizing biased (nondegenerate) bases \cite{Biased}. The security of the QKD protocol is analyzed by considering a full coherent attack on both quantum channels \cite{Werner}. To be explicit, Eve controls completely the preparation of entangled pairs, which are used by Bob before sending
one part of them to Alice, as well as the feedback channel which is used for transmitting
back the quantum state after a measurement by Alice.
To specify the attack, in the former scenario, Eve initially prepares a maximally entangled
state $|\Phi^+ \rangle_{BB'}$ which will be shared by Alice and Bob. But she adversarially prepares another identical entangled pair $|\Phi ^+\rangle _{EE\rq{}}$ and partially swaps her qubit with the provided entangled pair. Consequently, Alice and Bob are both partially entangled with Eve; in contrast, they would be maximally entangled with each other if Eve were absent. So the whole system of Alice, Bob and Eve possesses a superposition of two pairs of maximally entangled states. For the second channel, Eve is confined to only perform a cloning-based individual attack on the particle Alice sends to Bob after her projective measurement \cite{PhysRevLett.81.3018,PhysRevLett.88.127902,Cerf1998,PhysRevA.85.012334}. The attack on this retrodiction protocol at both steps, the entangled-pair preparation and the quantum state transmission, can be understood from a general viewpoint by the unified quantum cloning machine \cite{PhysRevA.84.034302}. A QKD protocol is secure when the mutual information between Alice and Bob is larger than that between Alice and Eve; only under this condition can Alice and Bob use classical error correction and privacy amplification methods \cite{QKDreview,PhysRevLett.88.127902} to guarantee a secure communication. Alternatively, we may also compare the fidelities of Eve and Bob to see which one is closer to the ideal case.
Here let us review the comparison between different protocols in FIG.\ref{d2}, which includes four cases: the standard QKD with BB84 states and with six states, and their correspondences in the retrodiction protocol. Interestingly, it is clear that the retrodiction QKD protocol presented here is more secure than the BB84 protocol and the six-state protocol, i.e., with fixed disturbance ($F_{Bob}$ fixed), Eve's probability to figure out the correct result is lower. And using 3 bases ($g=2$) is even more secure than 2 bases ($g=1$).
\begin{figure}
\caption{(color online) $F_{Eve}$ vs $F_{Bob}$ curve for $d=2$. In our scenario, the mean king retrodiction QKD has higher security than both BB84 and six-state protocols.}
\label{d2}
\end{figure}
The mean king retrodiction has the advantage in efficiency of generating a raw key in \emph{every single run}, no matter how many mutually unbiased bases are utilized. For comparison, in standard QKD exploiting $g+1$ mutually unbiased bases,
a raw key bit is obtained only once in $g+1$ runs on average for Alice and Bob.
\subsection{Other developments and related topics}
No-cloning is a fundamental principle of quantum mechanics. On the other hand, the quantum cloning machine is concerned with cloning quantum states, approximately or probabilistically, in both cases without violating the principles of quantum mechanics, but with the highest quality measured by different figures of merit. If we know nothing about the input quantum states, the cloning machine should be universal, as we have reviewed in the last section. In case we know partial information about the input states, we can use state-dependent cloning which performs, at least, as well as the universal cloning machine. It is naturally expected that we can do better in most cases. The phase-covariant cloning machine belongs to the class of state-dependent cloning; however, we usually list it separately. The term phase-covariant means that the density matrix of a single output copy has the same form of phase as the input state density matrix; to be more explicit, the difference of those two density matrices is an admixture of the identity with some probability. This property ensures that the fidelity of this cloning machine is independent of input states which differ only in phase parameters.
The phase-covariant quantum cloning was initiated by Bru\ss ~\emph{et al.} when considering the 1 to 2 cloning of an equatorial qubit, which is a qubit located on the equator of the Bloch sphere \cite{Bruss2000}. It is also shown that the cloning of equatorial qubits is optimal if the input is restricted to only four states corresponding to the BB84 states. The minimal input set which can determine completely this optimal quantum cloning machine includes only three states located symmetrically on the equator of the Bloch sphere \cite{Jing2013}. The more general 1 to 3 phase cloning case \cite{PhysRevA.64.042308} and the 1 to many case are also studied \cite{PhysRevA.65.012304}. For the higher-dimensional case, the three-dimensional phase-covariant quantum cloning is studied in \cite{PhysRevA.64.042308,688607420020710}, and the general $d$-dimensional phase cloning is presented in \cite{PhysRevA.67.022317}. Various kinds of phase-covariant and state-dependent cloning are proposed. Next, we list some of those developments below.
\begin{itemize}
\item The input states for cloning are limited to some conditions, which in general can be described by some symmetries. The cloning of states in higher dimensions but with only real parameters is studied in \cite{PhysRevA.68.032313}. The asymmetric qudit phase-covariant quantum cloning is studied in \cite{ISI:000228015900003}. The quantum cloning of a set of states which is invariant under the Weyl-Heisenberg group is studied by the extremal cloning machine \cite{extremalcloning}. The quantum cloning of states with fixed amplitudes but arbitrary phase is studied in \cite{Karimipour-phase}, which is suboptimal, while the experimental scheme uses the optimal one \cite{PhysRevLett.94.040505}. The cloning of states in a belt of the Bloch sphere is studied in \cite{yuzw}. The case of a distribution with mirror-like symmetry, i.e., with known modulus of the expectation of the Pauli $\sigma _z$ matrix, is studied in \cite{mirror}, and the case of an arbitrary axisymmetric distribution on the Bloch sphere is studied in \cite{axi-bloch}, see also \cite{Bartkiewicz2012}. The cloning of a pair of orthogonally polarized photons is studied in \cite{Fiurasek2008}. The optimal broadcasting of mixed equatorial qubits is studied in \cite{Yu2009}. A hybrid quantum cloning machine combining universal and state-dependent cases together is presented in \cite{Adhikari2007}.
\item The estimation of states or phases for finite sets of quantum states. We know that the UQCM and the phase-covariant cloning machine are for states with some parameters, amplitude or phase, which can be assumed to be continuous. On the other hand, we already know that we cannot do better even when the number of input states is finite, for example for some sets of MUBs. We remark that the quantum cloning of sets of MUBs should be related to state estimation. The results of arbitrary state estimation and phase estimation are available, corresponding to $g=d, d-1$; however, the general $g+1$ MUBs estimations have not yet been studied. The phase estimation of qubits is studied in \cite{prl80.1571}, and the case of qubits in mixed states is presented in \cite{mixedstate-phaseestimation}. The phase estimation of multiple phases is studied in \cite{multiphase-esti}. The criterion for estimation and the quality of state-dependent cloning is analyzed in \cite{ISI:000179502200033}.
\item State-dependent cloning related with QKD protocols. We should note that the security of QKD is generally defined by various criteria \cite{QKDreview}; in this review, we consider the attack by the scheme of quantum cloning. Quantum copying of two states is studied in \cite{hillery-buzek}. The QKD in three dimensions is studied in \cite{prl88.127901}. The four-dimensional case is studied in \cite{4dim2000,PhysRevA.68.042323}. The optimal eavesdropping of BB84 states is studied in \cite{fuchs97}, and the higher-dimensional case and some related results are presented by some other groups \cite{Karimipour,Acin2003,Nikolopoulos2005,Nikolopoulos2006,Bae2007,prl95.080501,BourennaneKarlsson}. The extension of BB84 states for qubits is also studied as the spherical code \cite{Renes2004}. The comparison between the photon-number-splitting attack and the quantum cloning attack of BB84 states is studied in \cite{pns-clone}. The extension of phase-covariant cloning to multipartite quantum key distribution is studied in \cite{scarani2001}. The cloning network of generalized BB84 states constituted by two pairs of orthogonal states is presented in \cite{ISI:000221546500001}.
\item Concepts related with phase-covariant and state-dependent cloning. The state-dependent cloning machine and the relation with completely positive trace-preserving maps is studied in \cite{ISI:000187004700046}. Relation of state-dependent cloning with quantum tracking is studied in \cite{ISI:000258180300064}. The assisted phase cloning of qudit by remote state preparation is presented in \cite{ISI:000263307800011}. The network of state-dependent quantum cloning is studied in \cite{chefles}, see also \cite{Zhou2011}. The relations between teleportation and dissipative channels with the universal and phase-covariant cloning machine are analyzed in \cite{Ozdemir2007}. The phenomena of superbroadcasting is also studied for phase-covariant case \cite{ISI:000241723100032}. The no-cloning theorem for a single POVM is presented in \cite{ISI:000282921000007}. It is found that while equatorial qubit contains only one arbitrary parameter, the phase information cannot be compressed \cite{1751-8121-45-2-025304}.
Quantum cloning is generally not concerned with relativity. However, with the requirement of relativistic covariance, the state-dependent cloning of photons and the BB84 states is studied in \cite{relative-cloning}. It is shown by phase-covariant quantum cloning that the cloned quantum states are not macroscopic in the spirit of Schr\"odinger's cat \cite{macro-clone}.
\item Implementation, theoretically and experimentally. Various proposals of implementation have been put forward. The economic realization of phase-covariant devices in arbitrary dimension, with phase cloning as a special case, is studied in \cite{Buscemi2007}. The scheme of a one to three economic phase-covariant quantum cloning machine implemented by a linear optics system is proposed in \cite{Zou2005-M}. The one to many symmetric economic phase cloning is proposed in \cite{Zhang2007}, see also \cite{Zhang2009}. The scheme to realize economic one to many phase cloning for qubits and qutrits is proposed in \cite{Zou2006}. The realization of phase-covariant and real qubit states quantum cloning is presented in \cite{Fang2010}. The phase cloning in spin networks is proposed in \cite{ChiaraFazio}. The proposal of an optical implementation of phase cloning of qubits is presented in \cite{fiurasek2003}, and the cloning of real states is studied in \cite{Hu2010}. The one to many phase-covariant quantum cloning is also analyzed by the general angular momentum formalism \cite{Sciarrino2007}. Quantum circuits for both entanglement manipulation and asymmetric phase-covariant cloning are studied in \cite{Levente2010}.
Experimentally, the asymmetric phase cloning is realized in optical systems \cite{bartphase}, and in \cite{Soubusta2008}. The ancilla-free phase-covariant cloning through Hong-Ou-Mandel interference is realized experimentally by Khan and Howell \cite{Khan-Howell}. The one to three economic quantum cloning of equatorial qubits encoded by polarization states of photons and the universal cloning are realized experimentally in \cite{XuLiChen}. Realizations in NMR systems can be found in \cite{PhysRevLett.94.040505,chendujfphase}. In optical parametric amplification of a single photon in the high-gain regime, an experiment is performed to distribute the photon polarization state to a large number of particles, which corresponds to the phase-covariant quantum cloning \cite{Nagali2007}. The phase-covariant quantum cloning is also implemented in the nitrogen-vacancy center of diamond by using three energy levels \cite{Pan2011}, and in nanodiamond with full coherent control of phases \cite{Pan2013}; this will be reviewed in detail later. The experimental implementation of eavesdropping of BB84 states and trine states by optimal cloning is studied in \cite{Bartkiewicz-experiment12}.
\end{itemize}
\section{Local cloning of entangled states, entanglement in quantum cloning}
Quantum cloning is generally about finding the quantum operations that realize the optimal cloning. The only restriction is that the operations should satisfy quantum mechanics. We next study the local cloning of entangled states; in this case, the operations are additionally restricted to be local. In principle, there is also a no-cloning theorem for entangled states \cite{prl81.4264}.
In addition, since the crucial role of quantum entanglement in quantum information, we will also study the entanglement properties in quantum cloning machines.
\subsection{Local cloning of Bell states}
Quantum entanglement plays a key role in quantum computation and quantum information. It is the precious resource in quantum information processing. Also, entanglement is a unique property of quantum systems which has no classical counterpart. In this sense, quantum entanglement has already become a common concept and has many applications in various quantum systems. The study of entanglement is generally under the condition of local (quantum) operations and classical communication (LOCC). This is due to the consideration that entanglement does not increase under LOCC.
The local cloning of entangled states is an interesting topic \cite{OH06,buzek-lcoal-cloning,GKRlocal-cloning,njp6.164}. First let us raise the problem: suppose two spatially separated parties, Alice (A) and Bob (B), share some entangled states; by LOCC, they want to copy the shared entangled states. As an example, let us study the following problem \cite{GKRlocal-cloning}. The four Bell states are defined as usual as follows, \begin{eqnarray}
|\Phi ^+\rangle &=&\frac {1}{\sqrt {2}}(|00\rangle +|11\rangle ), \nonumber \\
|\Phi ^-\rangle &=&\frac {1}{\sqrt {2}}(|00\rangle -|11\rangle )
=(I\otimes Z)|\Phi ^+\rangle , \nonumber \\
|\Psi ^+\rangle &=&\frac {1}{\sqrt {2}}(|01\rangle +|10\rangle )
=(I\otimes X)|\Phi ^+\rangle ,\nonumber \\
|\Psi ^-\rangle &=&\frac {1}{\sqrt {2}}(|01\rangle -|10\rangle )
=(I\otimes XZ)|\Phi ^+\rangle . \label{Bell} \end{eqnarray} Alice and Bob share one Bell state from a known subset, say
$\{ |\Psi ^+\rangle , |\Phi ^+\rangle \}$, and they want to copy this state by LOCC. Several issues should be considered before studying this problem: (1) The entanglement between $A$
and $B$ does not increase under LOCC. So, to copy this state locally, we generally assume that some known entangled states, for example $|\Phi ^+\rangle $, are shared between $A$ and $B$ and can be used as ancillas. (2) The entanglement resource used by local copying should be minimal. Otherwise, we can use the teleportation scheme \cite{PhysRevLett.70.1895}: let Bob (Alice) obtain the full state $\{ |\Psi ^+\rangle , |\Phi ^+\rangle \}$, he knows the state exactly by measurement, and copies of the entangled state between $A$ and $B$ can then be obtained easily by local unitary operations, which are shown explicitly above in (\ref{Bell}). Actually, Alice and Bob can discriminate any two Bell states by LOCC \cite{walgateshort,walgate,prl87.277902}.
In these conditions, the problem can be explicitly stated as: Alice and Bob share either of two maximally entangled states
$\{ |\Psi ^+\rangle , |\Phi ^+\rangle \}$, but they do not know which one it is. Additionally, they share known maximally entangled states of the form $|\Phi ^+\rangle $ as the resource, which are used as ancillary states. The question is: can they obtain the state
$|\Psi ^+\rangle ^{\otimes 2}$ or
$|\Phi ^+\rangle ^{\otimes 2}$ by LOCC? The answer is `yes': both Alice and Bob apply a CNOT gate with the unknown qubit as the control qubit and the ancilla as the target qubit; in this way they achieve their aim. We name this method the CNOT scheme. The key point here is that Alice and Bob do not need to know which state they share; they can finally obtain two copies of this state, and only one known state $|\Phi ^+\rangle $ (resource) is consumed.
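A compact way to see this (our own numerical sketch; we fix the qubit ordering so that Alice holds qubits 0 and 2 while Bob holds qubits 1 and 3) is to apply the two local CNOT gates to both candidate states and check that two copies appear:
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
P0 = np.diag([1., 0.]); P1 = np.diag([0., 1.])
X  = np.array([[0., 1.], [1., 0.]])

def cnot(n, c, t):
    # CNOT on n qubits with control c and target t (0-indexed)
    ops0 = [P0 if q == c else I2 for q in range(n)]
    ops1 = [P1 if q == c else (X if q == t else I2) for q in range(n)]
    return reduce(np.kron, ops0) + reduce(np.kron, ops1)

phi_plus = np.array([1., 0., 0., 1.]) / np.sqrt(2)   # |00>+|11>
psi_plus = np.array([0., 1., 1., 0.]) / np.sqrt(2)   # |01>+|10>

# qubits 0,1: unknown shared pair; qubits 2,3: ancilla |Phi+>
U = cnot(4, 0, 2) @ cnot(4, 1, 3)
for name, state in [("Phi+", phi_plus), ("Psi+", psi_plus)]:
    out = U @ np.kron(state, phi_plus)
    print(name, "copied:", np.allclose(out, np.kron(state, state)))
\end{verbatim}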
Let us next analyze the advantages of this scheme by comparing with the teleportation scheme. By using the available resource $|\Phi ^+\rangle $ for teleportation, the unknown state, which is either $|\Psi ^+\rangle $ or $|\Phi ^+\rangle $, can be teleported to either Alice's or Bob's side; here we suppose Alice receives this unknown state. Alice can find the exact form of this state by using a Bell measurement. Now, according to the obtained information, Alice and Bob use two additional entangled resources, and can share two copies of the previously unknown maximally entangled state. In this process, three maximally entangled states are consumed, where one is for teleportation and the other two are used for sharing between Alice and Bob. Since three entanglement resources are used, this scheme is not as efficient as the CNOT scheme. If we use the local discrimination scheme, i.e., by local measurement in the $\{ |0\rangle , |1\rangle \}$ basis on the unknown entangled state, then with the assistance of classical communication, we know the exact form of the shared state \cite{walgateshort,walgate,prl87.277902}. Since resources of maximally entangled states $|\Phi ^+\rangle $ are available, by local unitary transformations, we can change two resource entangled states to the detected known form. We still achieve the aim that two copies of an entangled state are shared between Alice and Bob. In this scheme, two known Bell states (resource) are consumed, which is not as efficient as the CNOT scheme. On the other hand, in order to obtain two copies of
$|\Psi ^+\rangle $ or $|\Phi ^+\rangle $, at least one entangled state (resource) should be used. We already know that the CNOT scheme uses only one entangled state (resource); it is thus optimal.
\subsection{Local cloning and local discrimination} If a set of quantum states can be perfectly discriminated, they can be copied perfectly, since we can discriminate them first and then prepare many copies of these states by using the available entanglement resource. For example, two orthogonal states can be copied perfectly. We know that two Bell states can be locally discriminated; as shown in the last subsection, they can be locally cloned perfectly if an {\it a priori} Bell state resource is available. Is it generally true that local discrimination implies that local cloning is possible? In \cite{OH06}, it is stated that, in general, local copying is more difficult than local discrimination.
However, local cloning and local discrimination are closely related \cite{OH06}. The following result was obtained in \cite{OH06}:
For a $d$-dimensional system, with $d$ prime, a set of maximally entangled states $\{ |\Psi _j\rangle \} _{j=0}^{N-1} $ is defined as \begin{eqnarray}
|\Psi _j\rangle =(U_j\otimes I)|\Phi ^+\rangle , \end{eqnarray} and \begin{eqnarray}
U_j=\sum _{k=0}^{d-1}\omega ^{jk}|k\rangle \langle k|, \end{eqnarray}
then the set $\{ |\Psi _j\rangle \} _{j=0}^{N-1} $ can be locally copied.
Here let us point out that the states of this set can be locally discriminated perfectly according to the criteria proposed in \cite{Fan04}. The scheme is as follows. It is known that $U_j=\sigma _z^j$, where the generalized Pauli matrix satisfies $\sigma _z|k\rangle =\omega ^{k}|k\rangle $. We define a class of generalized Hadamard transformations as, up to an unimportant factor, \begin{eqnarray} (H_{\alpha })_{jk}=\omega ^{-jk}\omega ^{-\alpha s_k}, \end{eqnarray} where $s_k=k+...+(d-1)$. We remark that those transformations correspond to the $d+1$ mutually unbiased bases. By applying those Hadamard transformations, the generalized Pauli matrices transform as, \begin{eqnarray} H_{\alpha }\sigma _x^m\sigma _z^nH_{\alpha }^{\dagger }= \sigma _x^{m\alpha +n}\sigma _z^{-m}. \end{eqnarray} Now we know that the $\sigma _z$ matrix can be transformed into the $\sigma _x$ matrix,
$U_j\rightarrow \sigma _x^j$. Since $(\sigma _x^j\otimes I)\sum |kk\rangle =\sum |k+j,k\rangle $, those states can be distinguished by local measurements; as the above transformations correspond to local unitary operations, we conclude that the states in the set $\{ |\Psi _j\rangle \} _{j=0}^{N-1} $ can be distinguished by LOCC.
We next see how those states can be cloned locally. Define the generalized CNOT gate as, \begin{eqnarray}
CNOT: |a\rangle |b\rangle \rightarrow |a\rangle |b+ a\rangle , \end{eqnarray}
where addition is modulo $d$. We suppose an ancilla state $|\Phi ^+\rangle $ is shared between Alice and Bob. Let both Alice and Bob perform the generalized CNOT gate; we obtain the perfect copies
$|\Psi _j\rangle ^{\otimes 2}$. This result can be derived as follows. According to the definition of the CNOT gate, we know that \begin{eqnarray}
CNOT^{\dagger }: |a\rangle |b\rangle \rightarrow |a\rangle |b-a\rangle . \end{eqnarray} It is straightforward to check that we have the following properties \begin{eqnarray}
|\Phi ^+\rangle _{12}|\Phi ^+\rangle _{34}&=& CNOT_{13}^{\dagger }\otimes CNOT^{\dagger }_{24}
|\Phi ^+\rangle _{12}|\Phi ^+\rangle _{34} \nonumber \\ &=&CNOT_{13}\otimes CNOT_{24}
|\Phi ^+\rangle _{12}|\Phi ^+\rangle _{34}. \end{eqnarray} Then we can find \begin{eqnarray} CNOT_{13}\otimes CNOT_{24}
|\Psi _j\rangle _{12}|\Phi ^+\rangle _{34} &=&CNOT_{13}\otimes CNOT_{24}(U_j\otimes I)_{13}
|\Phi ^+\rangle _{12}|\Phi ^+\rangle _{34} \nonumber \\ &=&CNOT_{13}(U_j\otimes I)_{13}CNOT_{13}^{\dagger }
|\Phi ^+\rangle _{12}|\Phi ^+\rangle _{34}. \end{eqnarray} And we know the following result: \begin{eqnarray} CNOT(U_j\otimes I)CNOT^{\dagger }=U_j\otimes U_j. \end{eqnarray}
The operator $U_j$ is thus copied. By this method, the set of maximally entangled states $\{ |\Psi _j\rangle \} _{j=0}^{N-1} $ is locally copied. This interesting phenomenon means that some unitary operators can be cloned perfectly in the above framework.
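The same statement can be checked at the level of states. In the short sketch below (our own explicit convention for the generalized CNOT-type gate, which may differ from the labeling used in \cite{OH06}), each party applies to its two particles the gate $|k\rangle|m\rangle\rightarrow|k-m\rangle|m\rangle$, with its half of the ancilla $|\Phi^+\rangle$ acting as control; one verifies numerically that every $|\Psi_j\rangle=(\sigma_z^j\otimes I)|\Phi^+\rangle$ is copied exactly.
\begin{verbatim}
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)

def psi_j(j):
    # |Psi_j> = (sigma_z^j x I)|Phi+> = (1/sqrt d) sum_k w^{jk} |k>|k>
    v = np.zeros(d * d, complex)
    for k in range(d):
        v[k * d + k] = w ** (j * k) / np.sqrt(d)
    return v

phi_plus = psi_j(0)

# local gate on (half of unknown pair, half of ancilla): |k>|m> -> |k-m mod d>|m>
G = np.zeros((d, d, d, d))            # [out_k, out_m, in_k, in_m]
for k in range(d):
    for m in range(d):
        G[(k - m) % d, m, k, m] = 1.0

for j in range(d):
    T = np.kron(psi_j(j), phi_plus).reshape(d, d, d, d)   # particles 1,2,3,4
    # Alice applies G to particles (1,3), Bob applies G to particles (2,4)
    out = np.einsum('ackm,beln,klmn->abce', G, G, T).reshape(-1)
    print(j, np.allclose(out, np.kron(psi_j(j), psi_j(j))))
\end{verbatim}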
In the 2-dimensional system, we have previously presented relations (\ref{opnoclone}) for the CNOT gate \cite{gottesman98}, $(\sigma _x\otimes I)\rightarrow \sigma _x\otimes \sigma _x, (\sigma _z\otimes I)\rightarrow \sigma _z\otimes I, (I\otimes \sigma _x)\rightarrow I\otimes \sigma _x, (I\otimes \sigma _z)\rightarrow \sigma _z\otimes \sigma _z$. Those results imply that the bit flip errors are copied forwards while the phase errors are copied backwards. But we cannot copy simultaneously the bit flip errors and the phase flip errors. This is a kind of no-cloning theorem.
Some other results about the cloning of entanglement are listed in the following. The local cloning of product states without the shared entanglement ancilla is studied in \cite{yingms05}. Distinguishing states locally is also studied in \cite{walgate,walgateshort,chenyx1,chenyx2}. The entangled states studied are generally pure states; however, it is shown that maximally entangled states can also be mixed, constituted by very special structures. A subset of those mixed maximally entangled states has similar properties as those of pure maximally entangled states, and can be locally distinguished perfectly \cite{Lizongguo}. The local cloning of other cases is also studied, including the three-qubit case \cite{PhysRevA.74.032323.2006}, the continuous-variable case \cite{pra77.042301}, and orthogonal entangled states and catalytic copying \cite{njp6.164}. The local cloning of partially entangled pure states in higher dimensions is studied in \cite{ISI:000269858900005}. Some results of local cloning with entanglement resources are presented in \cite{EisertJacobs,CheflesGilson,CollinsLinden}.
Various schemes of quantum cloning of entanglement are studied in \cite{c-entagle0,c-entangle}. Quantum cloning of continuous-variable entangled states is studied in \cite{Weedbrook2008}. The cloning of entangled photons to large scales, which might be seen by the human eye, is analyzed in \cite{Sekatski2010}. The scheme of cloning an unknown entangled state and its orthogonal-complement state with some assistance is studied in \cite{ISI:000267510900005} and also in \cite{ISI:000227361300007} and \cite{ISI:000242357000016}; the case of an arbitrary unknown two-qubit entangled state is studied in \cite{ISI:000269486400010}. The partial quantum cloning of a bipartite state, i.e., where only part of the two-particle state is cloned, and the cloning of mixed states are studied in \cite{ISI:000278194800002}.
Coherent states cloning and local cloning are presented in \cite{Dong2008}. Disentanglement preserves the local properties of an entangled state but erases the entanglement between the subsystems; it is closely related to quantum cloning and broadcasting \cite{ISI:000084148000024}. The cloning machine used for
approximate disentanglement is presented in \cite{ISI:000221638800002}.
The two-qubit disentanglement and inseparability correlation are presented in \cite{ISI:000085836300022}.
\subsection{Entanglement of quantum cloning}
It is also of interest to know the entanglement structure of states in the quantum cloning machines. Potentially, those properties can be used to distinguish quantum from classical, since entanglement is considered to be a unique property of the quantum world.
There has been much progress in the theory of entanglement, see \cite{RMP-Horodecki} for a nice review. For example, the Peres-Horodecki criterion \cite{Peres96,Horodecki19961} is a simple way to detect entangled states. Since the output states of the quantum cloning machines are generally available, we can use various techniques to study the entanglement properties of the individual copies, the whole output state of the cloning machine, the copies together with the ancillary states, etc.
In \cite{PhysRevA.65.012304}, it is shown that for the $1\rightarrow 2$ cloning machines, the two copies of the UQCM are entangled, while the two copies for the phase-covariant cloning machine are separable. Further, we can use some measures of entanglement to quantify the entanglement. The entanglement structure or separability of the asymmetric phase-covariant quantum cloning is studied in \cite{Rezakhani2005278}. The bipartite and tripartite entanglement of the output state of cloning are studied in \cite{Bruss2003}.
\section{Telecloning}
Quantum telecloning, as its name suggests, combines teleportation and quantum cloning so that quantum states are distributed to several spatially separated parties. In the well-known teleportation scheme in \cite{PhysRevLett.70.1895}, quantum information of an unknown $d$-level system is completely transmitted from a sender Alice to a remote receiver Bob by using the resource of a maximally entangled state. It is natural to consider ``one-to-many'' and ``many-to-many'' communication via quantum channels. This is the generalized teleportation scheme discussed in \cite{PhysRevA.61.032311,PhysRevA.67.012323}. Of course, it is impossible to transmit quantum information with perfect fidelities for many copies, because the no-cloning theorem \cite{Wootters1982} claims that an unknown quantum state cannot be cloned perfectly. However, as we have already shown, we can try to clone those quantum states approximately or probabilistically, which is allowed by quantum mechanics.
As we have already presented, there are various quantum cloning machines which create optimal copies. The aim of telecloning is to create optimal copies, the same as that of the cloning machines; in addition, the optimal copies need to be created remotely by teleportation. Those remote copies themselves may be spatially separated from each other. One may imagine that we can first use quantum cloning machines to create optimal copies locally, and then send those copies to their destination points. The aim of telecloning can indeed be realized in this way. In this respect, the importance of telecloning is like that of teleportation. Instead of teleportation, we can surely use flying qubits for state transportation. However, teleportation on the one hand provides an alternative method. On the other hand, in case the quantum channel is noisy, the flying qubits may experience inevitable decoherence which will induce errors. The teleportation scheme can avoid this disadvantage by using the maximally entangled state resource. Even when the entanglement resource is not perfect, the non-maximally entangled states can be purified locally to create maximally entangled states. Now we are ready to study telecloning, which combines the quantum cloning and the quantum teleportation. Still the resource of entanglement is necessary; however, its exact form depends on the specially designed scheme.
Murao \emph{et al.} studied the optimal telecloning of 1 qubit to $M$ qubits by using a maximally entangled state \cite{PhysRevA.59.156}. Telecloning which transmits an unknown $d$-level state to $M$ spatially separated receivers is studied in \cite{PhysRevA.61.032311}. And the telecloning of $N$ qubits to $M$ qubits, $M>N$, that requires a positive operator-valued measure (POVM) was proposed in \cite{Dur2000}. These telecloning schemes are also called reversible telecloning because there is no loss of quantum information. The $1\rightarrow 2$ telecloning which uses nonmaximal entanglement (it is named irreversible telecloning, in comparison) is studied in \cite{Bruss1998}, and the generalized case, $1\rightarrow M$ irreversible telecloning, is given in \cite{PhysRevA.63.020303}. Quantum information can be encoded by states of continuous variables (CV) \cite{RevModPhys.77.513}. The teleportation of CV is presented in \cite{prl84.3482}. The optimal 1 to $M$ telecloning of CV coherent states using a $(M+1)$-partite entangled state as a multiuser quantum channel is shown in \cite{PhysRevLett.87.247901}. This optimal telecloning could be achieved by exploiting nonmaximal entanglement between the sender and receivers. So this protocol was regarded as a CV irreversible telecloning. A scheme of CV reversible $N\rightarrow M$ telecloning, $M>N$, which distributes information without loss, is presented in \cite{PhysRevA.73.042315}. In this scheme, besides $M$ clones, additional $M-N$ anti-clones are obtained at the same time by using $2M$-partite entanglement, generalizing the scheme presented in Ref.\cite{PhysRevA.59.156}.
\subsection{Teleportation}
Let us review the original teleportation protocol and its generalization, i.e., the many-to-many scheme for transmitting quantum information. The teleportation scheme is proposed in \cite{PhysRevLett.70.1895}. Alice wants to send an unknown state of a d-level particle to a spatially separated observer Bob with the help of quantum channel and classical communication. Alice's initial unknown state is, \begin{equation}
|\psi\rangle=\sum_{k=0}^{d-1} \alpha_k |k\rangle_A, \end{equation}
where $\sum_{k=0}^{d-1}|\alpha_k|^2=1$ and $\{|k\rangle\}$ is a complete orthogonal basis. In order to achieve the teleportation, Alice and Bob are assumed to share a prior maximally entangled state, $|\xi \rangle =|\Phi ^+\rangle $, \begin{align}
|\xi\rangle=\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}|j\rangle_P|j\rangle_B. \end{align}
The total system is, $|\Psi\rangle=|\psi\rangle\otimes|\xi\rangle$, which can be rewritten as, \begin{align}
|\Psi\rangle=|\psi\rangle _A\otimes|\xi\rangle _{PB}=
\frac{1}{d}\sum_{m,n=0}^{d-1}|\Phi_{mn}\rangle_{AP}\sum_{k=0}^{d-1}\exp\Big(-i\frac{2\pi nk}{d}\Big)\alpha_k|k+m\rangle_B, \end{align} where $k+m$ is taken modulo $d$. As standard, the generalized Bell basis states are, \begin{equation}
|\Phi_{mn}\rangle=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1} \exp\Big(i\frac{2\pi nk}{d}\Big)|k\rangle|k+m\rangle . \end{equation} Alice performs a joint Bell-type measurement on the input and port particles, sends the measurement result $m,n$ to receiver Bob via classical communication. The unitary transformation which brings Bob's particle to the original state of Alice's is \begin{equation}
U_{mn}=\sum_{j=0}^{d-1}exp\Big(i\frac{2\pi jn}{d}\Big)|j\rangle\langle j+m|. \end{equation} As we already know, they are the generalized Pauli matrices in $d$ dimension.
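The decomposition above can be verified numerically: projecting the pair $(A,P)$ onto each $|\Phi_{mn}\rangle$ and applying the corresponding correction $U_{mn}$ on Bob's side recovers the input state for every one of the $d^2$ outcomes. A minimal sketch (our own, for $d=3$ and a random input state) is the following.
\begin{verbatim}
import numpy as np

d = 3
rng = np.random.default_rng(1)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

def bell(m, n):
    # |Phi_{mn}> = (1/sqrt d) sum_k e^{2 pi i nk/d} |k>_A |k+m>_P
    v = np.zeros(d * d, complex)
    for k in range(d):
        v[k * d + (k + m) % d] = np.exp(2j*np.pi*n*k/d) / np.sqrt(d)
    return v

def U(m, n):
    # correction U_{mn} = sum_j e^{2 pi i jn/d} |j><j+m|
    op = np.zeros((d, d), complex)
    for j in range(d):
        op[j, (j + m) % d] = np.exp(2j*np.pi*j*n/d)
    return op

total = np.kron(psi, bell(0, 0)).reshape(d * d, d)   # systems (A,P) x B
for m in range(d):
    for n in range(d):
        bob = bell(m, n).conj() @ total              # project (A,P) onto |Phi_mn>
        fixed = U(m, n) @ bob
        fixed /= np.linalg.norm(fixed)
        assert np.allclose(abs(np.vdot(psi, fixed)), 1.0)
print("input state recovered for all d^2 measurement outcomes")
\end{verbatim}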
Next, we will review the generalized symmetric teleportation scheme of $N$ senders and $M$ $(M>N)$ receivers proposed in \cite{PhysRevA.67.012323}. Assume the senders $X_1, X_2, \cdots,X_N$ share an unknown entangled state of the fixed form $|\psi\rangle_X=\sum_{k=0}^{d-1}\alpha_k|\psi_k\rangle_{X_1}|\psi_k\rangle_{X_2}\cdots|\psi_k\rangle_{X_N}$, where $\{|\psi_k\rangle\}$ is an orthonormal basis of the $d$-dimensional space. This state is a generalization of the GHZ state. The quantum entangled state which is the resource, consisting of $N$ ``port'' particles $P_k(k=1,\cdots,N)$ and $M$ receivers $C_k(k=1,\cdots,M)$, takes a special form which is a $(N+M)$-partite state, \begin{equation}
|\xi\rangle=\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}|\pi_j\rangle_{P_1}|\pi_j\rangle_{P_2}\cdots|\pi_j\rangle_{P_N}
|\phi_j\rangle_{C_1C_2\cdots C_M}, \end{equation}
where $\{|\pi_j\rangle\}$ denotes a d-dimensional orthonormal basis. The complete state of the system is, \begin{align}
|\psi\rangle|\xi\rangle &=\frac{1}{\sqrt{d}}\sum_{k,j=0}^{d-1}\alpha _k |\psi_k\rangle_{X_1}|\pi_j\rangle_{P_1}|\psi_k\rangle_{X_2}|\pi_j\rangle_{P_2}\cdots|\psi_k\rangle_{X_N}|\pi_j\rangle_{P_N}
|\phi_j\rangle_{C_1C_2\cdots C_M}\nonumber\\
&=\frac{1}{d^{(N+1)/2}}\sum^{d-1}_{m,n_1,n_2,\cdots,n_N}|\Phi_{m,n_1}\rangle|\Phi_{m,n_2}\rangle\cdots|\Phi_{m,n_N}\rangle\otimes
\sum_k^{d-1}exp\Big[-i\frac{2\pi k}{d}(n_1+n_2+\cdots+n_N)\Big] \alpha_k|\phi_{k+m}\rangle \end{align} The following steps are involved in this protocol: \begin{enumerate}
\item The senders perform a joint Bell-type measurement on particles $X_j$ and $P_j$ and get the outcomes, $|\Phi_{m,n_1}\rangle,|\Phi_{m,n_2}\rangle,\cdots,|\Phi_{m,n_N}\rangle $. Here the generalized
Bell states take the form $|\Phi_{m,n}\rangle =\frac {1}{\sqrt {d}}\sum _{k=0}^{d-1}exp\left( \frac {2\pi ikn}{d}\right)
|\psi _k\rangle |\pi _{k+m}\rangle $, where modulo $d$ is assumed.
\item The outcomes are sent to the receivers by using classical communication,
\item Then, the receivers perform a local recovery unitary operator (LRUO) that satisfies $U_{m;n_1,n_2,\cdots,n_N}
|\phi_{k+m}\rangle=exp\Big[i\frac{2\pi k}{d}(n_1+n_2+\cdots+n_N)\Big]|\phi_{k}\rangle$ . \end{enumerate}
Several remarks are in order: (i) If local operations are allowed, the state $|\psi\rangle_X$ can be transformed locally to just one qudit, $\sum_{k=0}^{d-1}\alpha_k|\psi_k \rangle $. (ii) The $M$ receivers are located in spatially separated places; otherwise, if they are at the same port, local quantum operations can conversely change the qudit
$\sum_{k=0}^{d-1}\alpha_k|\psi_k \rangle $ to a generalized GHZ-like state shared by $M$ parties. (iii) The scheme presented above combines quantum information distribution and teleportation together.
\subsection{Symmetric $1\rightarrow M$ telecloning}
In this subsection, we study the $1\rightarrow M$ generalized telecloning of qudit which is studied in \cite{PhysRevA.61.032311}. In that scenario, the quantum information of d-level particle is transmitted optimally from one sender $X$ to $M$ receivers $C_1,C_2,\cdots,C_M$. One ``port'' and $(M-1)$ ancillary particles were involved. The resource, including the port particle and $(2M-1)$ output states ($M$ receivers and $(M-1)$ ancillas), is the maximally entangled state, \begin{align}
|\xi\rangle &=\frac{1}{\sqrt{d[M]}}\sum_{k=0}^{d[M]-1}|\xi_k^M\rangle_{PA}|\xi_k^M\rangle_C \nonumber\\
&=\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}|j\rangle_P\otimes\Big(\frac{\sqrt{d}}{\sqrt{d[M]}}\sum_{k=0}^{d[M]-1}
\sideset{_P}{}{\mathop{\langle}} j|\xi_k^M\rangle_{PA}|\xi_k^M\rangle_C\Big)\nonumber\\
&=\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}|j\rangle_P\otimes|\phi_j\rangle, \end{align} where $d[M]=C^{M}_{M+d-1}$, and we denote the normalized symmetric state as,
$|\xi_k^M\rangle=\frac{1}{\sqrt{\mathcal{N}(\xi_k^M)}}|\mathcal{P}(a_0,a_1,\cdots,a_{M-1})\rangle $, ($\mathcal{P}$ denotes the sum of all possible permutation of the elements $\{a_0,a_1,\cdots,a_{M-1}\}$ for $a_j\in\{0,1,\cdots, d-1\}$ and $a_{j+1}>a_j$), $\{|\phi_j\rangle=\frac{\sqrt{d}}{\sqrt{d[M]}}\sum_{k=0}^{d[M]-1}
\sideset{_P}{}{\mathop{\langle}} j|\xi_k^M\rangle_{PA} |\xi_k^M\rangle_C \}$ is a basis of the output state. The LRUO that satisfies $U_{mn}|\phi_{j+m}\rangle=e^{i\frac{2\pi nj}{d}}|\phi_j\rangle$ for the output state $\{|\phi_j\rangle\}$ is \begin{equation} U_{mn}=\underbrace{U_{mn}^A\otimes \cdots \otimes U_{mn}^A}_{M-1}\otimes\underbrace{ U_{mn}^C\otimes\cdots \otimes U_{mn}^C}_{M}, \end{equation} where \begin{align}
U_{mn}^A=\sum_{j=0}^{d-1}e^{-i\frac{2\pi jn}{d}}|j\rangle\langle j+m|, U_{mn}^C=\sum_{j=0}^{d-1}e^{i\frac{2\pi jn}{d}}|j\rangle\langle j+m| \end{align}
The initial state $|\psi\rangle_X=\sum_{j=0}^{d-1}\alpha_j |j\rangle $ of the sender X is ``encoded'' into the distributed output state $|\phi\rangle_X=\sum_{j=0}^{d-1}\alpha_j |\phi_j\rangle $ held by the $(M-1)$ ancillas and $M$ receivers. \begin{align}
|\xi_k^M\rangle=\frac{1}{\sqrt{\mathcal{N}(\xi_k^M)}}|\mathcal{P}(a_0,a_1,\cdots,a_{M-1})\rangle
=\frac{1}{\sqrt{\mathcal{N}(\xi_k^M)}}\sum_{a_j} \sqrt{\mathcal{N}(\xi_k'^{M-1})}|a_j\rangle|\xi_{k'}^{M-1}\rangle, \end{align} where $k'=f_M(a_0,\cdots,a_{j-1},a_j,\cdots, a_{M-1})$. There is a relationship between index k and k': $k=g(a_j,k')$, then the total system takes the form, \begin{eqnarray}
|\phi_j\rangle=\frac{\sqrt{d}}{\sqrt{d[M]}}\sum_{k'=0}^{d[M-1]-1}R^{k'}_j|\xi_{k'}^{M-1}\rangle_A\otimes|\xi _{g(j,k')}^M\rangle_C, \end{eqnarray} where we use the notation, $R^{k'}_j=\frac{\sqrt{\mathcal{N}(\xi_{k'}^{M-1})}}{\sqrt{\mathcal{N}(\xi_{g(j,k')}^M)}}$. By tracing out the ancillary states $A$, we obtain the output state of $M$ qudits, \begin{align}
\rho_C=&tr_A(|\phi\rangle\langle\phi|)=\sum_{l=0}^{d[M-1]-1} \sideset{_A}{}{\mathop{\langle}}\xi_l^{M-1}|\phi\rangle\langle\phi|\xi_l^{M-1}\rangle_A\nonumber\\
&=\frac{d}{d[M]}\sum_{j,j'=0}^{d-1}\sum_{k'=0}^{d[M-1]-1}\alpha_j\alpha_{j'}^\ast R_j^{k'}R_{j'}^{k'}|\xi_{g(j,k')}^M\rangle_C\langle\xi_{g(j',k')}^M|\nonumber\\
&=\frac{d[1]}{d[M]}\Big(\sum_{k=0}^{d[M]-1}|\xi_k^M\rangle\langle\xi_k^M|\Big)\Big(|\psi\rangle\langle\psi|\otimes
\mathbb{I}^{\otimes(M-1)}\Big)\Big(\sum_{k'=0}^{d[M]-1}|\xi_{k'}^M\rangle\langle\xi_{k'}^M|\Big)\nonumber\\
&=\frac{d[1]}{d[M]}s_M(|\psi\rangle\langle\psi|\otimes
\mathbb{I}^{\otimes(M-1)})s_M=\hat{T}(|\psi\rangle\langle\psi|) \end{align}
This reduced density matrix of the receivers is consistent with the density matrix for $1\rightarrow M$ $d$-level optimal clones
\cite{PhysRevA.58.1827,PhysRevA.84.034302}. Let us emphasize that the output of $M$ qudits is consistent
with optimal cloning; moreover, the qudits are spatially separated in different places. The $1\rightarrow M$ telecloning
is also related to the programming protocol which is studied in \cite{Ishizaka-Hiroshima}.
\subsection{Economical phase-covariant telecloning}
Quantum cloning machines have economic and non-economic cases. Similarly, we also have the economic telecloning \cite{Wang2009,Wang2009a}. We know that phase-covariant cloning has been studied in \cite{Bruss2000,PhysRevA.65.012304,PhysRevA.67.042306,PhysRevA.67.022317}. The $1\rightarrow M$ optimal economical phase-covariant cloning for qubits was proposed in \cite{PhysRevA.76.034303}, and the $1\rightarrow 2$ economic map (non-optimal) for qudits was also studied \cite{PhysRevA.72.052322}. For the special value $M=k d+N$, the optimal $N\rightarrow M$ economical cloning for qudits has been introduced \cite{PhysRevA.71.042327}. A protocol for the $1\rightarrow M$ economical phase-covariant telecloning of qubits has been demonstrated in \cite{Wang2009}, and the $1\rightarrow 2$ economical phase cloning of qudits has been derived in \cite{Wang2009a}.
We next consider the $1\rightarrow M $ economical phase cloning of qubits; the input state is $|\psi\rangle_X=\cos\frac{\theta}{2}|0\rangle_X+e^{i\phi}\sin\frac{\theta}{2}|1\rangle_X$: \begin{align} \begin{cases}
U |0\rangle_1|R_{2\cdots M}\rangle &=|\phi_0\rangle=|00\cdots0\rangle_M\nonumber\\
U |1\rangle_1|R_{2\cdots M}\rangle &=|\phi_1\rangle=\frac{1}{\sqrt{M}}\sum_{j=1}^{M}|0\cdots 1_j \cdots 0\rangle_M \end{cases} \end{align}
We get the output state which is $|\psi\rangle^{out}_M=\cos\frac{\theta}{2}|\phi_0\rangle_X+e^{i\phi}\sin\frac{\theta}{2}|\phi_1\rangle_X$, and fidelity $ F=\sideset{_X}{}{\mathop{\langle}}\psi|tr(|\psi\rangle_{out}\langle\psi|)|\psi\rangle_X=\frac{1}{M}\sin^4\frac{\theta}{2}
+\cos^4\frac{\theta}{2}+\sin^2\frac{\theta}{2}\cos^2\frac{\theta}{2}\Big(\frac{2}{\sqrt{M}}+\frac{M-1}{M}\Big)$. Second, the telecloning scheme is that the sender X prepares the quantum information channel $|\xi\rangle_{PC}=\frac{1}{\sqrt{2}}\big(|0\rangle_P|\phi_0\rangle_C+|1\rangle_P|\phi_1\rangle_C\big)$. The total state can be expressed as \begin{align}
|\Psi\rangle_{XPC}=|\psi\rangle_X|\xi\rangle_{PC}=\frac{1}{2}
\Big[|\Phi^0\rangle_{XP}\otimes(\cos\frac{\theta}{2}|\phi_0\rangle+e^{i\phi}\sin\frac{\theta}{2}|\phi_1\rangle)
+|\Phi^1\rangle_{XP}\otimes(\cos\frac{\theta}{2}|\phi_0\rangle-e^{i\phi}\sin\frac{\theta}{2}|\phi_1\rangle)\nonumber\\
+|\Phi^2\rangle_{XP}\otimes(e^{i\phi}\sin\frac{\theta}{2}|\phi_0\rangle+\cos\frac{\theta}{2}|\phi_1\rangle)
+|\Phi^3\rangle_{XP}\otimes(e^{i\phi}\sin\frac{\theta}{2}|\phi_0\rangle-\cos\frac{\theta}{2}|\phi_1\rangle)\Big], \end{align}
where $ \{|\Phi^0\rangle=|\Phi^+\rangle,|\Phi^1\rangle=|\Phi^-\rangle,|\Phi^2\rangle=|\Psi^+\rangle,|\Phi^3\rangle=|\Psi^-\rangle\}$ are the Bell basis. The next steps have been reviewed above in the generalized telecloning.
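The single-copy fidelity quoted above for the economical $1\rightarrow M$ map can be reproduced by reducing the $M$-qubit output to one qubit; the following short sketch (our own, with arbitrary values of $M$, $\theta$ and $\phi$) performs this check.
\begin{verbatim}
import numpy as np

M, theta, phi = 4, 0.9, 0.3
c, s = np.cos(theta/2), np.sin(theta/2)

# output of the economical map: c|0...0> + e^{i phi} s (1/sqrt M) sum_j |0..1_j..0>
out = np.zeros(2**M, complex)
out[0] = c
for j in range(M):
    out[2**j] = np.exp(1j*phi) * s / np.sqrt(M)

# reduced state of one copy, obtained by tracing out the other M-1 qubits
rho = np.outer(out, out.conj()).reshape(2, 2**(M-1), 2, 2**(M-1))
rho1 = np.einsum('ikjk->ij', rho)

psi = np.array([c, np.exp(1j*phi)*s])
F_num = np.real(psi.conj() @ rho1 @ psi)
F_formula = s**4/M + c**4 + s**2*c**2*(2/np.sqrt(M) + (M-1)/M)
print(np.isclose(F_num, F_formula))
\end{verbatim}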
For the input state $|\psi\rangle_X=\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1} e^{i\theta_j}|j\rangle_X$, the $1\rightarrow 2$ economical phase-covariant cloning machine was demonstrated in \cite{PhysRevA.72.052322}. It takes the form, \begin{align}\begin{cases}
U |0\rangle_X|R\rangle=|\phi_0\rangle=|00\rangle \\
U |j\rangle_X|R\rangle=|\phi_j\rangle=\frac{1}{\sqrt{2}}(|j0\rangle+|0j\rangle), \quad(j\neq0) \end{cases} \end{align}
The fidelity is $F_{econ}=\sideset{_X}{}{\mathop{\langle}}\psi|tr(|\psi\rangle_{out}\langle\psi|)|\psi\rangle_X=\frac {1}{2d^2}[(d-1)^2+(1+2\sqrt{2})(d-1)+2]$. However, the optimal fidelity of $1\rightarrow2$ phase-covariant cloning (with an ancilla) presented in \cite{PhysRevA.67.022317} is
$F_{opt}=\frac{1}{4d}(d+2+\sqrt{d^2+4d-4})$. When $d=2$, $F_{econ}=F_{opt}$, while for $d>2$, $F_{econ}<F_{opt}$. It is nevertheless possible to achieve the optimal fidelity $F_{opt}$ probabilistically \cite{Wang2009a}. In this scheme, the entangled state used is $|\xi\rangle_{PC}=\sum_{j=0}^{d-1}x_j|j\rangle_P|\phi_j\rangle_C,$ where the coefficients $x_j$, which are assumed to be real, satisfy the normalization condition $\sum_{j=0}^{d-1}x_j^2=1$. The quantum state of the whole system is \begin{align}
|\Psi\rangle_{XPC}=|\psi\rangle_X\otimes|\xi\rangle_{PC}
=\frac{1}{d}\sum_{m,n=0}^{d-1}|\Phi_{mn}\rangle_{XP}\sum_{j=0}^{d-1}\exp\big(i\frac{2\pi nj}{d}\big)x_{j+m}
e^{i\theta_j}|\phi_{j+m}\rangle_C \end{align}
Only when the outcome of the Bell-type joint measurement is $\{m=0,n\}$ (which occurs with probability $1/d$) can the receivers obtain the clones $|\psi\rangle_{out}=\sum_{j=0}^{d-1}x_j e^{i\theta_j}|\phi_j\rangle $ by using the LRUO $U= U_{0n}\otimes\mathbb{I}$. The fidelity of these clones is $F^t_{econ}=\frac{1}{d}\Big(1+\sqrt{2}x_0\sum_{j=1}^{d-1} x_j+\sum_{i=1}^{d-2}\sum_{j=i+1}^{d-1}x_i x_j\Big)$. We set $\{x_j\}$ as \begin{eqnarray} &&x_0=X(d)=\sqrt{\frac{4(d-1)}{D(D+d-2)}}, \nonumber \\ &&x_j=Y(d)=\sqrt{\frac{d^2+(d-2)D}{D(D+d-2)(d-1)}},~~(j\neq0), \end{eqnarray}
where $D=\sqrt{d^2+4d-4}$. It is not difficult to verify that $F^t_{econ}=F_{opt}$ for any $d$. Actually, the output state of this telecloning scheme is equivalent to the $\rho^C_{opt}$ of the optimal phase-covariant cloning after tracing out the ancilla \cite{PhysRevA.67.022317}. For $d>2$, the von Neumann entropy $S(|\xi\rangle\langle\xi|)=-X^2(d)\log_2 X^2(d)-(d-1)Y^2(d)\log_2Y^2(d)<\log_2 d$, which implies that $|\xi\rangle$ is only partially entangled. Thus, the entanglement suitable for realizing the optimal $1\rightarrow 2$ cloning of qudits with probability $1/d$ is a special configuration of nonmaximally entangled states rather than a maximally entangled state.
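The claim $F^t_{econ}=F_{opt}$ can also be checked numerically; the following minimal sketch (our illustration, using the coefficients $X(d)$ and $Y(d)$ given above) evaluates both fidelities for several dimensions.
\begin{verbatim}
import numpy as np

def F_opt(d):
    return (d + 2 + np.sqrt(d**2 + 4*d - 4)) / (4 * d)

def F_t_econ(d):
    D = np.sqrt(d**2 + 4*d - 4)
    x0 = np.sqrt(4 * (d - 1) / (D * (D + d - 2)))
    y = np.sqrt((d**2 + (d - 2) * D) / (D * (D + d - 2) * (d - 1)))
    # F = (1/d)[1 + sqrt(2) x0 sum_{j>=1} x_j + sum_{1<=i<j<=d-1} x_i x_j]
    cross = (d - 1) * (d - 2) / 2 * y**2
    return (1 + np.sqrt(2) * x0 * (d - 1) * y + cross) / d

for d in (2, 3, 5, 10):
    print(d, F_t_econ(d), F_opt(d))   # the two columns coincide
\end{verbatim}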
\subsection{Asymmetric telecloning}
Quantum telecloning described in the previous section distributes the information of the unknown input state evenly to the distant receivers. However, it may be desirable to transmit information to several different receivers with different fidelities. For example, if the sender Alice trusts Bob more than Claire, she may want Bob's fidelity to be larger. Such schemes are called asymmetric telecloning. The $1\rightarrow 2$ optimal asymmetric quantum cloning of qubits was introduced in \cite{Cerf1998, ISI:000074966200008,PhysRevA.58.4377, PhysRevLett.84.4497}. The $1\rightarrow 2$ asymmetric cloning machine was generalized to the $d$-dimensional case in \cite{383952220000215, Braunstein2001}, and more recently in \cite{PhysRevA.84.034302}.
Here, we briefly review the $1\rightarrow 2$ asymmetric telecloning for qubits \cite{PhysRevA.61.032311} as an example. The entangled resource state is
$|\xi\rangle=\frac{1}{\sqrt{2}}(|0\rangle_P|\phi_0\rangle +|1\rangle_P|\phi_1\rangle),$ where \begin{align}\begin{cases}
|\phi_0\rangle &=\frac{1}{\sqrt{1+p^2+q^2}}(|0\rangle|0\rangle_B|0\rangle_C+p|1\rangle|0\rangle_B|1\rangle_C+q|1\rangle|1\rangle_B|0\rangle_C) \\
|\phi_1\rangle &=\frac{1}{\sqrt{1+p^2+q^2}}(|1\rangle|1\rangle_B|1\rangle_C+p|0\rangle|1\rangle_B|0\rangle_C+q|0\rangle|0\rangle_B|1\rangle_C) \quad(p+q=1) \end{cases}\end{align} The LRUOs satisfy the conditions,
$\sigma_z\otimes\sigma_z\otimes\sigma_z|\phi_{0(1)}\rangle =(-)|\phi_{0(1)}\rangle,
\sigma_x\otimes\sigma_x\otimes\sigma_x|\phi_{0(1)}\rangle =|\phi_{1(0)}\rangle$. For the input state $|\psi\rangle _{X}=\alpha_0|0\rangle+\alpha_1|1\rangle$, the final output state is $|\psi\rangle _{out}=\alpha_0|\phi_0\rangle+\alpha_1|\phi_1\rangle$. The fidelities of Bob and Claire, which satisfy the trade-off relation $\sqrt{(1-F_B)(1-F_C)}= F_B+F_C-\frac{3}{2}$, are respectively \begin{equation} F_B=\frac{1+p^2}{1+p^2+q^2}, \quad F_C=\frac{1+q^2}{1+p^2+q^2}. \end{equation} Next, we show the results of $1\rightarrow 2$ asymmetric telecloning of qudits. The asymmetric cloning machine is \begin{align}
U|j\rangle_{C_1}|00\rangle_{C_2 A}=|\phi_j\rangle
=\sum_{m,n=0}^{d-1}\beta_{m,n}(V_{m,n}|j\rangle_{C_1})\otimes|\Phi_{m,-n}\rangle_{C_2 A}\\
=\sum_{m,r=0}^{d-1}b_{m,r}|j+m\rangle_{C_1}|j+r\rangle_{C_2}|j+m+r\rangle_{A}, \end{align}
where $V_{m,n}=\sum_{j=0}^{d-1}e^{i2\pi jn/d}|j+m\rangle \langle j|$ are the generalized Pauli matrices and $b_{m,r}=\frac {1}{\sqrt {d}}\sum_{n=0}^{d-1}e^{-i2\pi nr/d}\beta_{m,n}$. We have the identity \begin{equation}
\sum_{m,n=0}^{d-1}\beta_{m,n}|\Phi_{m,n}\rangle_{R C_1}|\Phi_{m,-n}\rangle_{C_2 A}
=\sum_{m,n=0}^{d-1}\gamma_{m,n}|\Phi_{m,n}\rangle_{R C_2}|\Phi_{m,-n}\rangle_{C_1 A} \end{equation}
where $\sum_{m,n=0}^{d-1}|\beta_{m,n}|^2=1$, $\gamma_{m,n}=\frac{1}{d}\sum_{x,y=0}^{d-1}e^{i2\pi (nx-my)/d}\beta_{x,y}$. Then we project this equation on $|j\rangle_R$ and get $|\phi_j\rangle=\sum_{m,n=0}^{d-1}\gamma_{m,n}(V_{m,n}|j\rangle_{C_2})\otimes|\Phi_{m,-n}\rangle_{C_1 A}$. The input state $|\psi\rangle_X=\sum_{j=0}^{d-1}\alpha_j |j\rangle$ is copied into the output state $|\psi\rangle_{out}=\sum_{j=0}^{d-1}\alpha_j |\phi_j\rangle_{C_1 C_2 A}$. The individual clones are described by the reduced density matrices \begin{align}
\rho_{C_1}=tr_{C_2 A}(|\psi\rangle_{out}\langle\psi|)=
\sum_{m,n=0}^{d-1}|\beta_{m,n}|^2 V_{m,n}|\psi\rangle_X\langle\psi|V^\dag_{m,n},\\
\rho_{C_2}=tr_{C_1 A}(|\psi\rangle_{out}\langle\psi|)=
\sum_{m,n=0}^{d-1}|\gamma_{m,n}|^2 V_{m,n}|\psi\rangle_X\langle\psi|V^\dag_{m,n}. \end{align} In order to generate clones characterized by optimal fidelities that are independent of the input state, the following conditions should be satisfied \cite{383952220000215,688607420020710}, \begin{eqnarray} b_{0,0}=\frac{1}{\sqrt{d}}[\nu+(d-1)\mu],&&b_{m,0}=\sqrt{d}\mu, \nonumber \\ b_{0,r}=\frac{1}{\sqrt{d}}(\nu-\mu),&&b_{m,r}=0, \quad (m\neq0,r\neq0) \end{eqnarray} And we get the fidelities of the two clones \begin{align} F_{C_1}=\frac{1+(d-1)p^2}{1+(d-1)(p^2+q^2)},\quad F_{C_2}=\frac{1+(d-1)q^2}{1+(d-1)(p^2+q^2)}, \end{align} where $p=\frac{\nu-\mu}{\nu+(d-1)\mu}, q=1-p$. When $p=q=1/2$, the fidelities agree with $F=\frac{N(d-1+M)+M}{M(d+N)}$ (for $N=1,M=2$) obtained by Werner \cite{PhysRevA.58.1827}. The telecloning scheme requires the quantum entanglement shared by the port, the ancilla, and the receivers $C_1,C_2$, given as $|\xi\rangle=\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}|j\rangle_P|\phi_j\rangle_{C_1 C_2 A}$. After the sender performs a Bell-type joint measurement on the input and port particles and obtains the result $m,n$, the ancilla and the receivers $C_1,C_2$ apply the LRUO $U_{m,n}^{local}=\sum_{j_1,j_2,j_3}e^{i2\pi n(j_1+j_2-j_3)/d}|j_1\rangle\langle j_1+m|_{C_1}
\otimes |j_2\rangle\langle j_2+m|_{C_2}\otimes |j_3\rangle\langle j_3+m|_{A}$, obtaining the output state $|\psi\rangle_{out}=\sum_{j=0}^{d-1}\alpha_j |\phi_j\rangle_{C_1 C_2 A}$.
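As a simple consistency check (an illustration we add here), the fidelities of the two clones can be evaluated numerically; at the symmetric point $p=q=1/2$ they reduce to the universal value $(d+3)/[2(d+1)]$, and for $d=2$ they satisfy the qubit trade-off relation quoted earlier.
\begin{verbatim}
import numpy as np

def asym_fidelities(d, p):
    q = 1 - p
    denom = 1 + (d - 1) * (p**2 + q**2)
    return (1 + (d - 1) * p**2) / denom, (1 + (d - 1) * q**2) / denom

for d in (2, 3, 5):
    FB, FC = asym_fidelities(d, 0.5)
    print(d, FB, (d + 3) / (2 * (d + 1)))   # symmetric case = Werner value

# qubit (d = 2) trade-off: sqrt((1-F_B)(1-F_C)) = F_B + F_C - 3/2
for p in (0.2, 0.5, 0.7):
    FB, FC = asym_fidelities(2, p)
    print(np.isclose(np.sqrt((1 - FB) * (1 - FC)), FB + FC - 1.5))
\end{verbatim}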
\subsection{General telecloning}
In the general case \cite{Zhangnew}, the sender holds $N$ identical input states $|\varphi \rangle ^{\otimes N}=(\sum _j\alpha_j|j\rangle )^{\otimes N}$ at the same location $X$, so the input state is $|\psi\rangle_X=|\varphi \rangle ^{\otimes N}$. One may note that this state belongs to the symmetric subspace, \begin{eqnarray}
|\psi\rangle_X=\sum_{\overrightarrow{n}}^{N}(\sqrt{N!}\prod_j \frac{\alpha_j^{n_j}}{\sqrt{n_j !}})|\overrightarrow{n}\rangle , \label{teleinitial} \end{eqnarray}
where the $|\overrightarrow{n}\rangle $ form a basis of the symmetric subspace $\mathcal{H}^{\otimes N}_+$ in the standard representation, each element $n_i$ of the vector $\overrightarrow{n}$ being the number of qudits in state $|i\rangle $, as shown explicitly in (\ref{expansion}). For convenience, we introduce the notation $y_{\overrightarrow{n}}\equiv (\sqrt{N!}\prod_j \frac{\alpha_j^{n_j}}{\sqrt{n_j !}})$, so that the initial state in Eq.(\ref{teleinitial}) can be rewritten as \begin{eqnarray}
|\psi\rangle_X=\sum_{\overrightarrow{n}}^{N} y_{\overrightarrow{n}}|\overrightarrow{n}\rangle . \end{eqnarray} The sender would like to distribute these states to $M$ spatially separated receivers, $M\ge N$. Following the teleportation procedure, the sender performs a joint measurement on the input particles $X_1,X_2,\cdots, X_N$ and the port particles $P_1,P_2,\cdots,P_N$,
which act as ancillary states, and then announces the outcome to the $M-N$ ancillas and the $M$ receivers via classical communication. Next, the ancillas and receivers obtain the optimal clones after applying the specific Local Recovery Unitary Operator (LRUO). In order to achieve this aim, instead of using a joint Bell-type measurement, the sender performs a more general positive operator-valued measure (POVM), defined by the states $|\chi(\overrightarrow{x})\rangle $, on the system, \begin{eqnarray}
|\chi(\overrightarrow{x})\rangle=[\mathbb{I}^{\otimes N}_X\otimes U(\overrightarrow{x})^{\otimes N}_P]
\frac{1}{\sqrt{d[N]}}\sum_{\overrightarrow{n}}^{N}|\overrightarrow{n}\rangle_{X}|\overrightarrow{n}\rangle_{P}. \end{eqnarray}
We remark that the state $|\chi(\overrightarrow{x})\rangle $ in the projection corresponds to a bipartite maximally entangled state $\frac{1}{\sqrt{d[N]}}\sum_{\overrightarrow{n}}^{N}|\overrightarrow{n}\rangle_{X}|\overrightarrow{n}\rangle_{P}$ with a tensor product of $N$ identical unitary operators acting on one party. That this defines a POVM can be confirmed by the following property, \begin{eqnarray}
\int d\overrightarrow{x}F_{\overrightarrow{x}}=\int d\overrightarrow{x}\lambda(\overrightarrow{x})|
\chi(\overrightarrow{x})\rangle\langle \chi(\overrightarrow{x})| =S^N_X\otimes S^N_P,
\label{POVM} \end{eqnarray} where $S^N\otimes S^N$ is the identity in the space $\mathcal{H}^{\otimes N}_+\otimes\mathcal{H}^{\otimes N}_+$, $ U(\overrightarrow{x})$ is an element of the Lie group $SU(d)$, and the vector $\overrightarrow{x}$ consists of the $(d^2-1)$ parameters that determine the unitary operator. Next we show that the latter equation can be satisfied. According to Weyl reciprocity \cite{ma2007group}, the unitary transformation $U^{\otimes N}$ and a permutation $P_\alpha$ can be exchanged. If $\mathcal{Y}^{[\lambda]}_{\mu}$ is a standard Young operator corresponding to a standard Young tableau with $N$ boxes, the subspace $\mathcal{Y}^{[\lambda]}_{\mu}\mathcal{H}^{\otimes N}$ is invariant under the transformation $U^{\otimes N}$. Considering that the symmetric projection $S^N$ is equal to the standard Young operator $\frac{1}{N!}\mathcal{Y}^{[N]}$, we have \begin{eqnarray} U(\overrightarrow{x})^{\otimes N} S^N&=&S^N U(\overrightarrow{x})^{\otimes N},\\
U(\overrightarrow{x})^{\otimes N}|\overrightarrow{n_1}\rangle &=&
\sum_{\overrightarrow{n_2}}D_{\overrightarrow{n_2},\overrightarrow{n_1}}(\overrightarrow{x})|\overrightarrow{n_2}\rangle, \end{eqnarray} where $D(\overrightarrow{x})$ is a representation of the Lie group $SU(d)$. A group-theoretic theorem states that when $\mathcal{Y}^{[\lambda]}_\mu$ is a standard Young operator, the action of $U(\overrightarrow{x})^{\otimes N}$ on the invariant subspace $\mathcal{Y}^{[\lambda]}_\mu\mathcal{H}^{\otimes N}$ induces an irreducible representation of $SU(d)$, see \cite{ma2007group}. Thus $D(\overrightarrow{x})$ is an irreducible representation of $SU(d)$. Then, according to Schur's lemmas and the orthogonality relations \cite{ma2007group}, we obtain \begin{equation}\label{schur} \frac{1}{d[N]}\int d\overrightarrow{x}\lambda(\overrightarrow{x})D_{\overrightarrow{n_1},\overrightarrow{n_2}}(\overrightarrow{x}) D^*_{\overrightarrow{n_3},\overrightarrow{n_4}}(\overrightarrow{x})=\delta_{\overrightarrow{n_1},\overrightarrow{n_3}} \delta_{\overrightarrow{n_2},\overrightarrow{n_4}} \end{equation} This formula ensures that the integral of the projectors $F_{\overrightarrow{x}}$ equals the identity operator in the space $\mathcal{H}^{\otimes N}_+\otimes\mathcal{H}^{\otimes N}_+$, as required for a POVM. In the special case $d=2$, because the analytical expression of the unitary matrix $U(\overrightarrow{x})$ and of its irreducible representation $D(\overrightarrow{x})$ is known, an appropriate finite POVM can be constructed, and the integral reduces to a summation. The importance of constructing a finite POVM is that its explicit form is necessary for experimental implementation.
The total system can be expressed as \begin{align}
|\psi\rangle_X |\xi\rangle_{PAC}
&=\frac{1}{d[N]} \sum_{\overrightarrow{x}}\lambda(\overrightarrow{x})|\chi(\overrightarrow{x})\rangle_{XP} [U^\dag(\overrightarrow{x})^{\otimes (M-N)}_A\otimes U^T(\overrightarrow{x})^{\otimes M}_C]^\dag \sqrt{\frac{d[N]}{d[M]}}\Big(\sum_{\overrightarrow{n}}^{N}y_{\overrightarrow{n}}
\sideset{_P}{}{\mathop{\langle}}\overrightarrow{n}|\Big)
\Big(\sum_{\overrightarrow{m}}^{M}|\overrightarrow{m}\rangle_{PA}|\overrightarrow{m}\rangle_{C}\Big)\nonumber\\
&=\frac{1}{d[N]} \sum_{\overrightarrow{x}}\lambda(\overrightarrow{x})|\chi(\overrightarrow{x})\rangle_{XP} [U^{local}(\overrightarrow{x})]^\dag |\psi_c\rangle_{AC} \end{align}
The LRUO is $U^{local}(\overrightarrow{x})=U^\dag(\overrightarrow{x})^{\otimes (M-N)}_A\otimes U^T(\overrightarrow{x})^{\otimes M}_C$. As expected, the sender distributes the universal cloning state $|\psi_c\rangle_{AC}$ to the $M$ spatially separated receivers, assisted by the $(M-N)$ ancillas. The telecloning scheme is represented in FIG.~\ref{figure-yinan-network}.
\begin{figure}
\caption{Scheme of telecloning. The sender who may possess several ports share a maximally entangled state with several receivers who are spatially separated, and possibly assisted with ancillary states. The sender (Cloud) performs a POVM and announces the measurement result, the receiver can recover their states locally, see \cite{Zhangnew}.}
\label{figure-yinan-network}
\end{figure}
The asymmetric quantum telecloning for multiqubit states with various figures of merit is investigated by Chen and Chen \cite{ISI:000251336100002}. The reverse process of telecloning is remote state concentration. Roughly speaking, the final state of the information concentration is the initial state of the telecloning. It is shown that in the concentration process, a bound entangled state can be used as a resource \cite{muraovedral}. The standard entangled state possesses a similar capability in quantum information concentration \cite{Zhangnew}. This remote quantum information concentration is also studied in \cite{Wang2011}.
Telecloning is a combination of teleportation and quantum cloning; its reverse process is remote quantum information concentration. Quantum information distribution and concentration are expected to be fundamental functions of quantum networks. These functions can potentially be used for clock synchronization with a quantum advantage \cite{zhangnew2}. We can expect that network quantum computation will be an important subject for further exploration. Some experimental and implementation schemes of telecloning are reported in the following. The experimental realization of telecloning is performed by a partial teleportation scheme \cite{Zhaozhi05}. A proposal for cloning at a distance is given in \cite{filip2}. The entanglement resource of up to six qubits of Dicke states has been created experimentally \cite{dickestate-experiment}. The experimental implementation of telecloning of optical coherent states is demonstrated in \cite{KoikeTakahashi}. The experimental telecloning of phase-conjugate inputs is presented in \cite{ZhangJing08PRA}. Telecloning of entanglement is presented in \cite{ISI:000232228300059}, and the telecloning of the W state is studied in \cite{Yan2009}. A scheme to implement economical phase-covariant quantum telecloning in separate cavities is proposed in \cite{Fang2012a}. The implementation of economical phase-covariant telecloning of a bipartite entangled state is studied in \cite{Meng2009}. Continuous variable telecloning with bright entangled beams is studied in \cite{Olivares2008}. Controlled telecloning and teleflipping of one pure qubit is studied in \cite{Zhan2009}.
\section{Quantum cloning for continuous variable systems}
This section is devoted to quantum cloning machines for continuous variable (CV) systems. Available reviews of this topic can be found in \cite{RevModPhys.77.1225,CerfGrangier}. The photonic state of an optical system is usually described by CV. Most schemes and protocols of quantum computation and quantum information can be realized and demonstrated with CV photonic states, which may possess unique advantages over other systems. Reviews of CV quantum information can be found in \cite{RevModPhys.77.513,Wangphysreport,RMP-gaussian}; see \cite{Majian} for spin squeezing. A simple example of a continuous system is the position and momentum of a particle, or the two quadratures of a quantized electromagnetic field. Instead of universal cloning, we only study the case of $N\rightarrow M$ Gaussian cloning of coherent states, whose precise definition will be given below. We shall first obtain the fidelity bound for $N\rightarrow M$ Gaussian cloning \cite{PhysRevA.62.040301}, and then give an explicit implementation using a linear amplifier and beam splitters \cite{PhysRevLett.86.4938}. Note that the same procedure also applies when the input states are squeezed states, provided small changes are made to the device parameters \cite{PhysRevA.62.040301,PhysRevLett.86.4938}.
\subsection{Optimal bounds for Gaussian cloners of coherent states}
We deal with a quantum system described in terms of two canonically conjugate operators $\hat{x}$ and $\hat{p}$, each of which has a continuous spectrum. Since $\hat{x}$ and $\hat{p}$ are conjugate, they cannot both be copied perfectly, so we look for a cloning machine which performs approximate cloning and achieves an ``optimal'' result. In contrast to universal cloning, we here focus on cloning transformations which take only coherent states as input. That is, the input states of the cloning machine form a set $\mathcal{S}$, which can be parametrized as \begin{equation}
\mathcal{S}=\left\{ |\alpha\rangle:\alpha=\frac{1}{\sqrt{2}}(x+ip),x,p\in\bm{R}\right\}, \end{equation}
where $\langle \alpha|\hat{x}|\alpha\rangle = x$ and $\langle \alpha|\hat{p}|\alpha\rangle = p$. Moreover, we shall only consider $N\rightarrow M$ symmetric Gaussian cloners (SGCs), which can be defined as linear completely positive maps $C_{N,M}:\mathcal{H}^{\otimes N}\rightarrow\mathcal{H}^{\otimes M}$, where $\mathcal{H}$ stands for an infinite-dimensional Hilbert space. After the transformation we get $\rho_M=C_{N,M}(|\alpha\rangle\langle\alpha|^{\otimes N})$. For the cloner to be Gaussian, the reduced state of a single clone needs to satisfy: \begin{align} \rho_1 &= Tr_{M-1}(\rho_M) \nonumber\\
&= \frac{1}{\pi\sigma_{N,M}^2}\int d^2 \beta e^{-|\beta|^2/\sigma_{N,M}^2}D(\beta)|\alpha \rangle\langle\alpha |D^{\dagger}(\beta), \end{align}
where the integral is performed over all values of $\beta=(x+ip)/\sqrt{2}$ in the complex plane (we have set $\hbar=1$), $D(\beta)=\exp(\beta\hat{a}^{\dagger}-\beta^{*}\hat{a})$ is the displacement operator which shifts a state by $x$ in position and $p$ in momentum, and $\hat{a}$ and $\hat{a}^{\dagger}$ denote the annihilation and creation operators, respectively. As a result, after cloning, an extra noise $\sigma_x^2=\sigma_p^2=\sigma_{N,M}^2 $ on the conjugate variables $x$ and $p$ is added to each copy. It is readily checked that the cloning fidelity $f_{N,M}=\langle\alpha|\rho_1|\alpha\rangle$ is the same for any coherent input state $|\alpha\rangle$, provided $\sigma_{N,M}$ remains invariant, which means our cloner is symmetric. Through a simple computation one finds \begin{equation}
f_{N,M}=\langle\alpha|\rho_1|\alpha\rangle=\frac{1}{1+\sigma_{N,M}^2}. \end{equation} Now we claim that the lower bound of $\sigma_{N,M}^2$ is \begin{equation}\label{lowerboundfornoise} \bar{\sigma}_{N,M}^2=\frac{M-N}{MN}, \end{equation} which implies that the optimal fidelity for the $N\rightarrow M$ cloning machine is \begin{equation} f_{N,M}=\frac{1}{1+\bar{\sigma}_{N,M}^2}=\frac{MN}{MN+M-N}. \end{equation} Next we will prove (\ref{lowerboundfornoise}). As a first step, we state a lemma.
\textit{Lemma 1}. Cascading an $N\rightarrow M$ cloner with an $M\rightarrow L$ cloner cannot be better than the optimal $N\rightarrow L$ cloner. In our case, two cascaded $N\rightarrow M$ and $M\rightarrow L$ SGCs result in a single $N\rightarrow L$ SGC whose variance is simply the sum of the variances of the two cascaded SGCs. Hence we have \begin{equation}\label{noisecascading} \bar{\sigma}_{N,L}^2\le \sigma_{N,M}^2+\sigma_{M,L}^2, \end{equation} where $\bar{\sigma}_{N,L}^2$ stands for the lower variance bound of the $N\rightarrow L$ cloner.
The proof of Lemma 1 can be found in \cite{PhysRevA.62.040301}. We will use Lemma 1 to reach (\ref{lowerboundfornoise}). From (\ref{noisecascading}), setting $L\rightarrow \infty$ we get \begin{equation} \bar{\sigma}_{N,\infty}^2\le \sigma_{N,M}^2+\bar{\sigma}_{M,\infty}^2. \end{equation} Then we can use quantum estimation theory to analyze $\bar{\sigma}_{N,\infty}^2$, which is the variance of an optimal joint measurement of $\hat{x}$ and $\hat{p}$ on $N$ replicas of a system. We have \cite{Holevo1982}, \begin{equation}\label{inequalestimation} g_x\sigma_x^2(1)+g_p\sigma_p^2(1)\ge g_x\Delta\hat{x}^2 +g_p\Delta\hat{p}^2+\sqrt{g_xg_p}, \end{equation} for all values of the constants $g_x,g_p>0$, where $\sigma_x^2(1)$ and $\sigma_p^2(1)$ denote the variances of the measured values of $\hat{x}$ and $\hat{p}$, while $\Delta \hat{x}^2$ and $\Delta \hat{p}^2$ denote the intrinsic variances of the observables $\hat{x}$ and $\hat{p}$, respectively. For each value of $g_x$ and $g_p$, there is a specific positive-operator-valued measure (POVM) which achieves the bound. Also, as in classical statistics, we have \cite{Holevo1982}, \begin{equation} \sigma_x^2(N)=\frac{\sigma_x^2(1)}{N}, \sigma_p^2(N)=\frac{\sigma_p^2(1)}{N}, \end{equation} where $\sigma_x^2(N)$ or $\sigma_p^2(N)$ is the measured variance of $\hat{x}$ or $\hat{p}$ if we perform the measurement on $N$ independent and identical systems. In the context of coherent states, $\Delta \hat{x}^2=\Delta \hat{p}^2=1/2$; if we further require $\sigma_x^2(N)=\sigma_p^2(N)$, the tight bound of (\ref{inequalestimation}) is reached for $g_x=g_p$. Then (\ref{inequalestimation}) yields \begin{equation}\label{varianceboundinf} \bar{\sigma}_{N,\infty}^2=1/N. \end{equation} Combining (\ref{varianceboundinf}) with the cascading inequality above gives $\sigma_{N,M}^2\ge \bar{\sigma}_{N,\infty}^2-\bar{\sigma}_{M,\infty}^2=1/N-1/M=(M-N)/(MN)$, which completes the proof.
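For orientation, a few explicit values of this bound are
\begin{equation*}
f_{1,2}=\frac{2}{3},\qquad f_{N,N}=1,\qquad f_{N,\infty}=\frac{N}{N+1},
\end{equation*}
so that, in particular, $f_{1,\infty}=1/2$, which is the fidelity of the optimal measure-and-prepare (classical) strategy on a single coherent state, consistent with the estimation-theory argument above.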
\subsection{Implementation of optimal Gaussian QCM with a linear amplifier and beam splitters}
In this section, we give the explicit transformation for the optimal Gaussian $N\rightarrow M$ cloning of coherent states, and show that the transformation can be implemented with devices common in quantum optics experiments: a phase-insensitive linear amplifier and a network of beam splitters \cite{PhysRevLett.86.4938}. This proves that the optimal fidelity bounds derived in the previous section can actually be achieved. Note that other implementations are possible as well; for example, a scheme using a circuit of CNOT gates has been proposed as an implementation of the $1\rightarrow 2$ Gaussian cloning \cite{PhysRevLett.85.1754}.
Assume the state to be cloned is $|\alpha\rangle$; we denote the initial input state of the cloning machine as $|\Psi\rangle = |\alpha\rangle^{\otimes N}\otimes |0\rangle^{\otimes M-N}\otimes|0\rangle_z$, where, in addition to the $N$ input modes to be cloned, we have $M-N$ blank modes and an ancillary mode $z$. The blank modes and the ancilla are initially prepared in the vacuum state $|0\rangle$. Let $\{ x_k, p_k\}$ denote the pair of quadrature operators associated with each mode $k$ involved in the cloning transformation, where $k=0,...,M-1$ (for simplicity, we sometimes omit the hats on operators when the context is unambiguous). As usual, by cloning we mean a quantum operation $U:\mathcal{H}^{\otimes (M+1)} \rightarrow \mathcal{H}^{\otimes (M+1)} $ performed on the initial state $|\Psi\rangle$ of the $M$ modes and the ancilla, and the output state becomes $|\Psi^{''}\rangle = U|\Psi\rangle$.
For simplicity of analysis and calculation which shall be shown below, we work in the Heisenberg picture, then U can be described by a canonical transformation acting on the operators $\{x_k,p_k\}$: \begin{equation} x_k^{''}=U^{\dagger}x_k U,\ \ p_k^{''}=U^{\dagger}p_k U, \end{equation}
while the state $|\Psi\rangle$ is left invariant. We will now impose several requirements for the transformation U which establish some expected properties of the state after cloning: \begin{enumerate}
\item The expected values of $x$ and $p$ for the $M$ output modes are:
\begin{equation}
\langle x_k^{''}\rangle = \langle\alpha|x_0|\alpha\rangle,\ \
\langle p_k^{''}\rangle = \langle\alpha|p_0|\alpha\rangle,
\end{equation}
which means the state of the clones is centered on the original coherent state.
\item Note that for a coherent state we have $\sigma_x^2 = \sigma_p^2=\Delta x_{vac}^2=\frac{1}{2}$; moreover, for a rotation in phase space giving the operator $v=cx+dp$ (where $c$ and $d$ are complex numbers satisfying $|c|^2+|d|^2=1$), the error variance is the same:
\begin{equation}
\sigma_v^2=\sigma_x^2=\sigma_p^2=\Delta x_{vac}^2=\frac{1}{2}.
\end{equation}
We then require that the invariance property under rotation is preserved by the transformation $U$, which yields
\begin{equation}
\sigma_{v_k^{''}}^2 = \sigma_{x_k^{''}}^2 = \sigma_{p_k^{''}}^2 = (1+\frac{2}{N}-\frac{2}{M})\Delta x_{vac}^2,
\end{equation}
where $v_k^{''}=cx_k^{''}+dp_k^{''}$.
\item $U$ is unitary, which in the Heisenberg picture is equivalent to demanding that the commutation relations are preserved by the transformation: \begin{equation} [x_j^{''},x_k^{''}]=[p_j^{''},p_k^{''}]=0,\ \ [x_j^{''},p_k^{''}]=i\delta_{jk}, \end{equation} for $j,k=0,...,M-1$ and for the ancilla. \end{enumerate} Based on the above requirements, we now give the explicit implementation of the cloning machine.
\subsection{Optimal $1\rightarrow 2$ Gaussian QCM} We first consider the simple case of duplication ($N=1, M=2$). An explicit transformation can be found: \begin{align}\label{1to2transformation} x_0^{''}=x_0+\frac{x_1}{\sqrt{2}}+\frac{x_z}{\sqrt{2}},\ \ p_0^{''}=p_0+\frac{p_1}{\sqrt{2}}-\frac{p_z}{\sqrt{2}},\nonumber\\ x_1^{''}=x_0-\frac{x_1}{\sqrt{2}}+\frac{x_z}{\sqrt{2}},\ \ p_1^{''}=p_0-\frac{p_1}{\sqrt{2}}-\frac{p_z}{\sqrt{2}},\nonumber\\ x_z^{'}=x_0+\sqrt{2}x_z, \ \ p_z^{'}=-p_0+\sqrt{2}p_z, \end{align} for which one can check that all the three requirements are satisfied.
Next we show how to implement the above duplicator in practice. First, interpret (\ref{1to2transformation}) as a sequence of two canonical transformations: \begin{align} a_0^{'}=\sqrt{2}a_0+a_z^{\dagger},\ \ a_z^{'}=a_0^{\dagger}+\sqrt{2}a_z \nonumber\\ a_0^{''}=\frac{1}{\sqrt{2}}(a_0^{'}+a_1), \ \ a_1^{''}=\frac{1}{\sqrt{2}}(a_0^{'}-a_1), \end{align} where $a_k=(x_k+ip_k)/\sqrt{2}$ and $a_k^{\dagger}=(x_k-ip_k)/\sqrt{2}$ denote the annihilation and creation operators for mode $k$. This immediately suggests a practical scheme realizing the desired transformation in two steps: step 1 is a phase-insensitive amplifier with gain $G=2$, while step 2 is a phase-free 50:50 beam splitter (see Fig.~\ref{gaussianQCM1to2pic}). To see that the cloner is optimal, we note from \cite{PhysRevD.26.1817} that, for an amplifier of gain $G$, each quadrature's excess noise variance is bounded by \begin{equation}\label{variancegainrelation} \sigma_{LA}^2\ge (G-1)/2. \end{equation} Since we have chosen $G=2$, the saturated bound gives $\sigma_{LA}^2=1/2$, which proves the optimality of the cloning transformation.
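The requirements can also be verified numerically. The following minimal numpy sketch (our illustration, not part of the original reference) writes the transformation (\ref{1to2transformation}) as a $6\times6$ matrix acting on $(x_0,p_0,x_1,p_1,x_z,p_z)$, checks that it preserves the commutation relations (requirement 3), and evaluates the output quadrature variance for a coherent-state input, reproducing $\sigma^2_{1,2}=1/2$ and hence $f_{1,2}=2/3$.
\begin{verbatim}
import numpy as np

r = 1 / np.sqrt(2)
# rows: x0'', p0'', x1'', p1'', xz', pz'; columns: x0, p0, x1, p1, xz, pz
S = np.array([[1, 0,  r, 0,  r, 0],
              [0, 1,  0, r,  0, -r],
              [1, 0, -r, 0,  r, 0],
              [0, 1,  0, -r, 0, -r],
              [1, 0,  0, 0,  np.sqrt(2), 0],
              [0, -1, 0, 0,  0, np.sqrt(2)]])

# commutation relations are preserved iff S is symplectic
omega = np.kron(np.eye(3), np.array([[0, 1], [-1, 0]]))
print(np.allclose(S @ omega @ S.T, omega))        # True

# coherent input + two vacua: every input quadrature has variance 1/2
V_out = S @ (0.5 * np.eye(6)) @ S.T
print(V_out[0, 0], V_out[2, 2])   # 1.0 = (1 + 2/N - 2/M)/2 with N=1, M=2
\end{verbatim}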
\begin{figure}
\caption{Implementation of the optimal Gaussian $1\rightarrow 2$ QCM for light modes. LA stands for linear amplifier and BS represents a balanced beam splitter, see \cite{PhysRevLett.86.4938}.}
\label{gaussianQCM1to2pic}
\end{figure}
\subsection{Optimal Gaussian $N\rightarrow M$ QCM}
Now we continue with the case of $N\rightarrow M$ Gaussian cloning; this time we shall again use a linear amplifier to achieve the transformation. Due to the relation between extra variance and gain in (\ref{variancegainrelation}), we need to make $G$ as low as possible in order to reach the optimal limit of $\sigma_{N,M}^2$. The cloning procedure is as follows: (i) concentrate the $N$ input modes into one single mode, which is then amplified; (ii) distribute the concentrated mode symmetrically among the $M$ output modes. An easy way to realize both processes is through the discrete Fourier transform (DFT), with which we can write out the detailed steps of the cloning procedure.
Step 1: concentration of the $N$ input modes by a DFT: \begin{equation} a_k^{'}=\frac{1}{\sqrt{N}}\sum_{l=0}^{N-1}exp(ikl2\pi/N)a_l, \label{DFT} \end{equation}
where $k=0,...,N-1$. After the concentration, the energy of the $N$ input modes is put together on one single mode, which we shall rename as $a_0$, while every other mode becomes a vacuum state. To see this more clearly, we note that the energy of one mode is $E_k = \hbar\omega(\langle {a}_k^{\dagger}{a}_k\rangle + \frac{1}{2}), k = 0, ..., N-1.$ Since the input state is $|\alpha\rangle^{\otimes N}$, $\langle {a}_k^{\dagger}{a}_l\rangle = |\alpha|^2$ for all $k$ and $l$, and the total energy is $E = \sum_{k=0}^{N-1} E_k = N\hbar\omega(|\alpha|^2+\frac{1}{2})$. On the other hand, after the DFT process, all modes except one become vacuum states. From Eq.(\ref{DFT}), for any $k\neq 0$, we have \begin{eqnarray} \langle {a}_k^{'\dagger} {a}_k^{'} \rangle &=& \frac{1}{N} \sum_{l = 0}^{N-1}\sum_{m=0}^{N-1} exp(i\frac{k2\pi}{N}(m-l))\langle {a}_l^{\dagger} {a}_m\rangle \nonumber \\
&=& |\alpha|^2 \frac{1}{N} \sum_{l = 0}^{N-1}\sum_{m=0}^{N-1} exp(i\frac{k2\pi}{N}(m-l)) \nonumber \\ &=& 0. \end{eqnarray} So the new mode $k \neq 0$ is a vacuum state. On the other hand for $k = 0$, we have \begin{eqnarray}
\langle {a}_0^{'\dagger} {a}_0^{'} \rangle = \frac{1}{N} \sum_{l = 0}^{N-1}\sum_{m=0}^{N-1} \langle {a}_l^{\dagger} {a}_m\rangle = N|\alpha|^2. \end{eqnarray} Now we see energy is concentrated in the new mode $0$.
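At the level of mean amplitudes this concentration is easy to visualize; the short numpy sketch below (our illustration) applies the DFT of Eq.(\ref{DFT}) to $N$ equal coherent amplitudes and shows that the entire amplitude, and hence the energy $N|\alpha|^2$, ends up in mode $0$.
\begin{verbatim}
import numpy as np

N, alpha = 4, 0.7 + 0.3j
F = np.array([[np.exp(1j * 2 * np.pi * k * l / N) / np.sqrt(N)
               for l in range(N)] for k in range(N)])

amps_out = F @ np.full(N, alpha)      # N identical coherent amplitudes in
print(np.round(amps_out, 12))         # only mode 0 nonzero: sqrt(N) * alpha
print(abs(amps_out[0])**2, N * abs(alpha)**2)   # concentrated energy
\end{verbatim}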
Step 2: take the mode $a_0$ together with the ancilla as the input of a linear amplifier of gain $G=M/N$, which results, \begin{align} a_0^{'}=\sqrt{\frac{M}{N}}a_0+\sqrt{\frac{M}{N}-1}a_z^{\dagger}, \nonumber \\ a_z^{'}=\sqrt{\frac{M}{N}-1}a_0^{\dagger}+\sqrt{\frac{M}{N}}a_z. \end{align}
Step 3: distribute the energy symmetrically onto the $M$ outputs by performing a DFT on $a_0^{'}$ and the $M-1$ vacuum modes produced in step 1: \begin{equation} a_k^{''}=\frac{1}{\sqrt{M}}\sum_{l=0}^{M-1}exp(ikl2\pi/M)a_l^{'}. \end{equation} It is readily checked that the procedure meets our three requirements. Moreover, if we choose $\sigma_{LA}^2=[M/N-1]/2$, the optimality is confirmed.
\begin{figure}
\caption{Implementation of the optimal Gaussian $N\rightarrow M$ QCM for light modes. LA represents linear amplifier, DFT stands for discrete Fourier transform, see \cite{PhysRevLett.86.4938}.}
\label{gaussianQCMntompic}
\end{figure}
Like the case of $1\rightarrow 2$ cloning, we shall also use a network of beam splitters to construct the required DFT. It is shown that any discrete unitary operator can be experimentally realized by a sequence of beam splitters and phase shifters \cite{Reck1994}. An explicit construction is given in \cite{PhysRevLett.86.4938} as shown in FIG. \ref{gaussianQCMntompic}.
\subsection{Other developments and related topics}
Similarities and differences exist between cloning in discrete and CV spaces. As in the discrete case, if we know partial information about the input state in a CV system, the fidelity can be improved \cite{pra73.045801}. Without an analog in the discrete case, quantum cloning with phase-conjugate input modes is studied in \cite{prl87.247903}.
One can expect that many proposals in the discrete case can be extended to the CV case. Next, we list results in the areas of QKD, CV quantum cloning, and implementation schemes.
\begin{itemize} \item In relation to QKD, the application of CV cloning machines to key distribution is studied in \cite{cerf-cvqkd-squeezed}. By some figures of merit, the optimal cloning of coherent states in a non-Gaussian setting may be better than in a Gaussian setting \cite{prl95.070501}. This raises the question of whether the security of QKD could be challenged under more general conditions. However, CV cryptography \cite{prl88.057902} is shown to remain secure under non-Gaussian attacks \cite{prl92.047905}. Asymmetric CV cloning used for the security analysis of cryptography is also discussed in \cite{CerfIblisdirAssche}. A review of CV cloning and QKD can be found in \cite{CerfGrangier}.
\item The CV universal NOT gate is studied in \cite{cerf-cv-not}. The optimal cloning of mixed Gaussian states is studied in \cite{ISI:000241067100032}. The superbroadcasting of CV mixed states is studied in \cite{ISI:000238292400001}. The quantum cloning limits for finite distributions of coherent states are studied in \cite{finite-coherent}, and also in \cite{ISI:000189304700015}. A proposal to test quantum limits of a Gaussian-distributed set of coherent state related with cloning is presented in \cite{Namiki2011}. The cloning of CV entangled state is studied in \cite{Weedbrook2008}.
\item The implementation of CV quantum cloning via various schemes is proposed in \cite{PhysRevLett.86.4938,prl86.914,prl86.4942}. Multicopy Gaussian states are studied in \cite{Fiurasek2007}. The Gaussian cloning of coherent light states into an atomic quantum memory is presented in \cite{FiurasekCerfPolzik}.
Experimentally, the implementation of Gaussian cloning of coherent states with a fidelity of about $65\%$ using only linear optics is shown in \cite{prl94.240503}; the results are further analyzed in \cite{Olivares2006}. The experimental realization of CV cloning with phase-conjugate inputs is shown in \cite{experiementalcvcloning} and also in \cite{ZhangJing07PRA}. The experimental realization of both CV teleportation and cloning is reported in \cite{ZhangJing05PRL}.
\end{itemize}
As one of the most basic protocols of quantum information processing, the CV teleportation is studied in \cite{ISI:000085234900012}. The criteria of CV cloning and teleportation are studied in \cite{GrosshansGrangier}. We remark again that the reviews of CV quantum information can be found in \cite{RevModPhys.77.513,Wangphysreport,RMP-gaussian}.
\section{Sequential universal quantum cloning}
In past years, theoretical research on quantum cloning machines has progressed greatly. At the same time, various cloning schemes have been realized experimentally, using polarized photons \cite{667603820020426,PhysRevA.68.042306,PhysRevLett.92.047901,PhysRevLett.92.047902} or nuclear spins in NMR \cite{PhysRevLett.88.187901,PhysRevLett.94.040505}. However, these experiments are restricted to $1 \rightarrow 2$ or $1 \rightarrow 3$ cloning machines, leaving the general case of $N \rightarrow M$ cloning unsolved. The difficulty in realizing $N \rightarrow M$ cloning mainly arises in preparing multipartite entangled states, since it is very difficult to perform a global unitary operation on large-dimensional systems to create multipartite entangled states. On the other hand, using the technique of sequential cloning, one may be able to divide the big global unitary operation into small ones, each of which involves only a small quantum system, which makes it possible to obtain the desired entangled state. Several quantum cloning procedures for multipartite cloning were proposed, but they are not sequential \cite{PhysRevLett.84.2993,PhysRevA.67.022317}. In 2007, based on the work of Vidal \cite{PhysRevLett.91.147902}, Delgado \emph{et al.} proposed a scheme for a sequential $1 \rightarrow M$ cloning machine \cite{Delgado2007}. Since the procedure is sequential, it significantly reduces the difficulty of its realization. Later, a scheme for the more general $N \rightarrow M$ sequential cloning was presented \cite{Dang2008}. The case of $N \rightarrow M$ sequential cloning of qudits was also proposed briefly, although the details were not presented. The essential idea of the sequential method is to express the desired state in the form of a matrix product state (MPS); according to the results in \cite{PhysRevLett.95.110503}, any MPS can be sequentially generated. On the other hand, it has also been pointed out that sequential unitary decompositions are not always successful for genuine entangling operations \cite{lamatasequential}. In this section, we present in detail how the procedures of $1 \rightarrow M$ and $N \rightarrow M$ sequential cloning work.
\subsection{$1 \rightarrow M$ sequential UQCM}
According to the method of Delgado \emph{et al.} \cite{Delgado2007}, we first need an ancilla system of dimension $D$. Let $\mathcal{H}_\mathcal{A}$ denote the $D$-dimensional Hilbert space of the ancilla system, and $\mathcal{H}_\mathcal{B}$ the 2-dimensional Hilbert space of one qubit. In every step of the sequential cloning, we apply an operator $V$ on the product space of the ancilla and a single qubit. Here we suppose that each qubit is initially in the state $|0\rangle$, which will not appear in the following equations. $V$ can then be represented by an isometric transformation $V: \mathcal{H}_\mathcal{A} \rightarrow \mathcal{H}_\mathcal{A}\otimes\mathcal{H}_\mathcal{B}$, in which $V = \sum_{i,\alpha,\beta}V_{\alpha,\beta}^i|\alpha,i\rangle\langle\beta|$. Let $V^i=\sum_{\alpha,\beta}V_{\alpha,\beta}^i|\alpha\rangle\langle\beta|$; then $V^i$ is a $D{\times}D$ matrix and satisfies the isometry condition $\sum_iV^{i\dagger}V^i=I$. Let the initial state of the ancilla be $|\phi_I\rangle\in\mathcal{H}_\mathcal{A}$. We let the ancilla interact with the qubits one at a time, sequentially, and the ancilla state is not reset between the operations. So when $n$ operations have been done, the final output state of the ancilla and all the qubits takes the form $|\Psi\rangle=V^{[n]}...V^{[2]}V^{[1]}|\phi_I\rangle$, where the indices in square brackets denote the steps of the sequential generation. Now we need to decouple the ancilla from the entangled qubits, after which the $n$-qubit state is left: \begin{equation}\label{eqexpand1}
|\psi\rangle = \sum_{i_1,i_2,...,i_n}\langle\phi_F|V^{[n]i_n}...V^{[1]i_1}|\phi_I\rangle|i_1...i_n\rangle, \end{equation}
where $|\phi_F\rangle$ represents the final state of the ancilla. For the 2-dimensional system $\{|0\rangle,|1\rangle\}$, the cloning transformation of the optimal $1 \rightarrow M$ cloning machine is \cite{PhysRevLett.79.2153}: \begin{eqnarray}
|0\rangle\otimes|R\rangle\rightarrow|\Psi_M^{(0)}\rangle=\sum_{j=0}^{M-1}\beta_j|(M-j)0,j1\rangle\otimes|(M-j-1)1,j0\rangle_R,\\
|1\rangle\otimes|R\rangle\rightarrow|\Psi_M^{(1)}\rangle=\sum_{j=0}^{M-1}\beta_{M-j-1}|(M-j-1)0,(j+1)1\rangle\otimes|(M-j-1)1,j0\rangle_R, \end{eqnarray}
in which $\beta_j=\sqrt{2(M-j)/[M(M+1)]}$, $|(M-j-1)1,j0\rangle_R$ is the final state of the cloning machine, and $|(M-j)0,j1\rangle$ denotes the normalized completely symmetric $M$-qubit state with $(M-j)$ qubits in $|0\rangle$ and $j$ qubits in $|1\rangle$. In order to clone a general state $|\phi\rangle$, it is necessary to know how to sequentially generate the states $|\Psi_M^{(0)}\rangle$ and $|\Psi_M^{(1)}\rangle$, and hence we need to express these two states in MPS form: \begin{eqnarray}
|\Psi_M^{(0)}\rangle=\sum_{i_1,...i_n}\langle\phi_F^{(0)}|V_0^{[n]i_n}...V_0^{[1]i_1}|0\rangle_D|i_1...i_n\rangle,\\
|\Psi_M^{(1)}\rangle=\sum_{i_1,...i_n}\langle\phi_F^{(1)}|V_1^{[n]i_n}...V_1^{[1]i_1}|0\rangle_D|i_1...i_n\rangle. \end{eqnarray}
Now we aim to obtain the explicit expression of the matrices $V_0^{[k]i_k}$ and $V_1^{[k]i_k}$. A way to do so is by using the Schmidt decomposition (SD) \cite{PhysRevLett.91.147902}, see the textbook \cite{Nielsen2000}. Consider an arbitrary state $|\Psi\rangle$ in the Hilbert space $\mathcal{H}_2^{\otimes{n}}$; the SD of $|\Psi\rangle$ according to the bipartition $A:B$ is \begin{equation}
|\Psi\rangle = \sum_{\alpha} \lambda_{\alpha}|\Phi_{\alpha}^{[A]}\rangle|\Phi_{\alpha}^{[B]}\rangle, \end{equation}
where $|\Phi_{\alpha}^{[A]}\rangle$ ($|\Phi_{\alpha}^{[B]}\rangle$) is an eigenvector of the reduced density matrix $\rho^{[A]}$ ($\rho^{[B]}$) with eigenvalue $|\lambda_{\alpha}|^2\geq 0$, and the Schmidt coefficient $\lambda_{\alpha}$ satisfies $\langle \Phi_{\alpha}^{[A]}|\Psi\rangle = \lambda_{\alpha}|\Phi_{\alpha}^{[B]}\rangle$.
With the help of SD, we proceed the following protocol: \begin{enumerate}
\item Compute the SD of $|\Psi\rangle$ according to the bipartite $1:n-1$ splitting of the $n$-qubit system, which is \begin{align}\label{eq1}
|\Psi\rangle &= \sum_{\alpha_1}\lambda_{\alpha_1}^{[1]}|\Phi_{\alpha_1}^{[1]}\rangle|\Phi_{\alpha_1}^{[2...n]}\rangle\\
&= \sum_{i_1,\alpha_1}\Gamma_{\alpha_1}^{[1]i_1}\lambda_{\alpha_1}^{[1]}|i_1\rangle|\Phi_{\alpha_1}^{[2...n]}\rangle, \end{align}
where in the second line we have expressed the Schmidt vector $|\Phi_{\alpha_1}^{[1]}\rangle$ in the computational basis $\{|0\rangle,|1\rangle\}$: $|\Phi_{\alpha_1}^{[1]}\rangle = \sum_{i_1}\Gamma_{\alpha_1}^{[1]i_1}|i_1\rangle.$
\item Expand $|\Phi_{\alpha_1}^{[2...n]}\rangle$ in local basis for qubit 2, \begin{equation}\label{eq2}
|\Phi_{\alpha_1}^{[2...n]}\rangle = \sum_{i_2}|i_2\rangle|\tau_{\alpha_1i_2}^{[3...n]}\rangle. \end{equation}
\item Express $|\tau_{\alpha_1i_2}^{[3...n]}\rangle$ by at most $\chi$ Schmidt vectors $|\Phi_{\alpha_2}^{[3...n]}\rangle$(the eigenvectors of $\rho^{[3...n]}$), where $\alpha_2$ ranges from 1 to $\chi$ and $\chi = \max_{A}\chi_A,$ here $\chi_A$ denotes the rank of the reduced density matrix $\rho_A$ for a particular partition $A:B$ of the $n$-qubit state:
\begin{equation}\label{eq3}
|\tau_{\alpha_1i_2}^{[3...n]}\rangle = \sum_{\alpha_2}\Gamma_{\alpha_1\alpha_2}^{[2]i_2}\lambda_{\alpha_2}^{[2]}|\Phi_{\alpha_2}^{[3...n]}\rangle,
\end{equation}
where the $\lambda_{\alpha_2}^{[2]}$'s are the corresponding Schmidt coefficients. \item Substituting (\ref{eq2}) and (\ref{eq3}) into (\ref{eq1}), we get \begin{equation}
|\Psi\rangle = \sum_{i_1,\alpha_1,i_2,\alpha_2}\Gamma_{\alpha_1}^{[1]i_1}\lambda_{\alpha_1}^{[1]}
\Gamma_{\alpha_1\alpha_2}^{[2]i_2}\lambda_{\alpha_2}^{[2]}|i_1i_2\rangle|\Phi_{\alpha_2}^{[3...n]}\rangle. \end{equation}
It is now easy to see that, repeating steps 2--4, we get the expansion of $|\Psi\rangle$ in the computational basis: \begin{equation}
|\Psi\rangle = \sum_{i_1}...\sum_{i_n}c_{i_1...i_n}|i_1\rangle...|i_n\rangle, \end{equation} where the coefficients $c_{i_1...i_n}$ are \begin{equation}\label{eqexpand2} c_{i_1...i_n} = \sum_{\alpha_1,...,\alpha_{n-1}}\Gamma_{\alpha_1}^{[1]i_1}\lambda_{\alpha_1}^{[1]} \Gamma_{\alpha_1\alpha_2}^{[2]i_2}\lambda_{\alpha_2}^{[2]}...\Gamma_{\alpha_{n-1}}^{[n]i_n}. \end{equation} \end{enumerate}
Through comparing equations (\ref{eqexpand1}) and (\ref{eqexpand2}), we are able to construct $V_0^{[k]i_k}$ and $V_1^{[k]i_k}$ explicitly. The detailed work is omitted here, since in the next section, on the more general $N\rightarrow M$ sequential cloning case, each step of obtaining the matrices is provided.
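To make the SD-based construction concrete, the following minimal numpy sketch (our own illustration, with conventions that differ slightly from the explicit matrices of \cite{Delgado2007}) decomposes an arbitrary $n$-qubit state into site matrices by successive singular value decompositions and checks that their product reproduces the original amplitudes, in the spirit of Eqs.(\ref{eqexpand1}) and (\ref{eqexpand2}).
\begin{verbatim}
import numpy as np

def mps_from_state(psi, n):
    """Left-canonical MPS tensors A[site][alpha, i, beta] via successive SVDs."""
    tensors, chi = [], 1
    m = psi.reshape(1, -1)
    for site in range(n - 1):
        m = m.reshape(chi * 2, -1)
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        tensors.append(u.reshape(chi, 2, -1))
        chi = s.size
        m = np.diag(s) @ vh
    tensors.append(m.reshape(chi, 2, 1))
    return tensors

def amplitude(tensors, bits):
    m = np.eye(1)
    for A, i in zip(tensors, bits):
        m = m @ A[:, i, :]
    return m[0, 0]

n = 4
psi = np.random.randn(2**n) + 1j * np.random.randn(2**n)
psi /= np.linalg.norm(psi)
A = mps_from_state(psi, n)
bits = (1, 0, 1, 1)
print(np.allclose(amplitude(A, bits), psi[int("".join(map(str, bits)), 2)]))
\end{verbatim}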
When the input state of the cloning machine is an arbitrary state $|\psi\rangle = x_0|0\rangle+x_1|1\rangle$ (satisfying the normalization condition $|x_0|^2+|x_1|^2=1$), then, by the linearity of quantum mechanics, the state after the cloning transformation is $x_0|\Psi_M^{(0)}\rangle+x_1|\Psi_M^{(1)}\rangle$, which can also be sequentially generated. First, view the arbitrary state $|\psi\rangle$ and the ancilla's initial state $|0\rangle_D$ as a unified state: $|\phi_I\rangle = |\psi\rangle\otimes|0\rangle_D$. Then let qubit $k$ ($k=1,2,...,n$) interact sequentially with the ancilla according to the $2D$-dimensional isometric operators $V^{[k]i_k} = |0\rangle\langle0|\otimes{V_0^{[k]i_k}}+|1\rangle\langle1|\otimes{V_1^{[k]i_k}}$. After all qubits have interacted with the ancilla, perform a generalized Hadamard transformation on the ancilla \begin{eqnarray}
|0\rangle|\phi_F^{(0)}\rangle\rightarrow\frac{1}{\sqrt2}[|0\rangle|\phi_F^{(0)}\rangle+|1\rangle|\phi_F^{(1)}\rangle]\\
|1\rangle|\phi_F^{(1)}\rangle\rightarrow\frac{1}{\sqrt2}[|0\rangle|\phi_F^{(0)}\rangle-|1\rangle|\phi_F^{(1)}\rangle] \end{eqnarray}
Now measure the ancilla in the basis $\{|0\rangle|\phi_F^{(0)}\rangle,|1\rangle|\phi_F^{(1)}\rangle\}$; either result occurs with probability $1/2$. When the result is $|0\rangle|\phi_F^{(0)}\rangle$, we directly obtain the desired state $x_0|\Psi_M^{(0)}\rangle+x_1|\Psi_M^{(1)}\rangle$; if instead the result is $|1\rangle|\phi_F^{(1)}\rangle$, we need to perform a $\pi$-phase gate on each qubit, after which the desired state is obtained.
To realize the above $1 \rightarrow M$ cloning scheme, an ancilla system of dimension $2M$ is needed, whereas if we were to use a global unitary operation to accomplish the cloning, the dimension of the unitary operation would increase exponentially with $M$. So we see that sequential cloning is much easier to realize experimentally.
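For example (a rough comparison), with $M=5$ the sequential scheme needs only a $2M=10$-dimensional ancilla, whereas a single global unitary would have to act jointly on the $M$ clones and the $M-1$ machine qubits, i.e.\ on a $2^{2M-1}=512$-dimensional space.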
\subsection{$N \rightarrow M$ optimal sequential UQCM}
In this section, we discuss the more general case of the $N \rightarrow M$ optimal sequential UQCM. An arbitrary qubit is written as $|\Psi\rangle = x_0|0\rangle+x_1|1\rangle$ (with $|x_0|^2+|x_1|^2=1$); then $N$ identical copies of $|\Psi\rangle$ can be expressed as \begin{equation}
|\Psi\rangle^{\otimes{N}} = \sum_{m=0}^{N}x_0^{N-m}x_1^m\sqrt{C_N^m}|(N-m)0,m1\rangle, \end{equation}
where $C_N^m=\frac{N!}{m!(N-m)!}$, and $|(N-m)0,m1\rangle$ denotes the normalized completely symmetric $N$-qubit state with $(N-m)$ qubits in state $|0\rangle$ and $m$ qubits in state $|1\rangle$.
It is well known that the optimal UQCM transformation for completely symmetric states\cite{PhysRevLett.79.2153} is \begin{eqnarray}
|(N-m)0,m1\rangle\otimes|R\rangle \rightarrow |\Psi_M^{(m)}\rangle
= \sum_{j=0}^{M-N}\beta_{mj}|(M-m-j)0,(m+j)1\rangle\otimes|R_j\rangle, \end{eqnarray}
where $\beta_{mj}=\sqrt{C_{M-m-j}^{M-N-j}C_{m+j}^j/C_{M+1}^{N+1}}$, $|R_j\rangle$ denotes the final state of the cloning machine, and for different $j$ the $|R_j\rangle$'s are orthogonal to each other. Here we can choose $|R_j\rangle=|(M-N-j)1,j0\rangle_R$. Since we have the cloning transformation of any state of the form $|(N-m)0,m1\rangle$, by the linearity of quantum mechanics the transformation for $N$ identical arbitrary states $|\Psi\rangle$ is \begin{equation}
|\Psi\rangle^{\otimes{N}}\otimes|R\rangle \rightarrow |\Psi_M\rangle=\sum_{m=0}^{N}x_0^{N-m}x_1^m\sqrt{C_N^m}|\Psi_M^{(m)}\rangle, \end{equation}
here $|\Psi_M\rangle$ is the final state of all the qubits and the cloning machine that we hope to obtain. As in the $1 \rightarrow M$ sequential UQCM case, we first need to show how $|\Psi_M^{(m)}\rangle$ can be sequentially generated. Hence it is necessary to know the MPS form of $|\Psi_M^{(m)}\rangle$: \begin{equation}\label{NMeq1}
|\Psi_M^{(m)}\rangle = \sum_{i_1...i_{2M-N}}\langle\phi_F|V^{[2M-N]i_{2M-N}}...V^{[1]i_1}|\phi_I\rangle|i_1...i_{2M-N}\rangle, \end{equation} where $V^{[n]i_n}$ ($1\leq n\leq 2M-N$) is a $D\times D$ matrix satisfying the isometry condition $\sum_{i_n} (V^{[n]i_n})^{\dagger}V^{[n]i_n}=I$. Now we follow the idea of the SD and elaborate in detail on how to obtain the explicit form of $V^{[n]i_n}$.
\begin{enumerate} \item Case $n = 1$.
Compute the SD of $|\Psi_M^{(m)}\rangle$ according to partition 1:2...(2M-N): \begin{eqnarray}\label{SD1} \left\vert \Psi _{M}^{(m)}\right\rangle &=&\overset{M-N}{\underset{j=0}{\sum }} \beta _{mj}\left\vert \left( M-m-j\right) 0,\left( m+j\right) 1\right\rangle
\otimes |R_j\rangle \nonumber\\
&=& \sum_{\alpha_1}\lambda_{\alpha_1}^{[1]}|\phi_{\alpha_1}^{[1]}\rangle\otimes
|\phi_{\alpha_1}^{[2...(2M-N)]}\rangle\nonumber\\
&=& \sum_{\alpha_1,i_1}\Gamma_{\alpha_1}^{[1]i_1}\lambda_{\alpha_1}^{[1]}|i_1\rangle\otimes
|\phi_{\alpha_1}^{[2...(2M-N)]}\rangle\nonumber\\ &=&\left\vert 0\right\rangle \otimes \lambda ^{\left[ 1\right]} _{1}\left\vert \phi _{1}^{\left[ 2...\left( 2M-N\right) \right] }\right\rangle +\left\vert 1\right\rangle \otimes \lambda ^{\left[ 1\right]} _{2}\left\vert \phi _{2}^{ \left[ 2...\left( 2M-N\right) \right] }\right\rangle , \end{eqnarray} where through comparing the first and last lines, we get \begin{equation*} \left\vert \phi _{1}^{\left[ 2...\left( 2M-N\right) \right] }\right\rangle = \overset{M-m-1}{\underset{k=-m}{\sum }}\beta _{mk}\sqrt{\frac{C_{M-1}^{m+k}}{ C_{M}^{m+k}}}\left\vert \left( M-m-k-1\right) 0,\left( m+k\right)
1\right\rangle \otimes |R_k\rangle /\lambda ^{\left[ 1\right]} _{1}, \end{equation*} \begin{equation*} \left\vert \phi _{2}^{\left[ 2...\left( 2M-N\right) \right] }\right\rangle = \overset{M-m-1}{\underset{k=-m}{\sum }}\beta _{m\left( k+1\right) }\sqrt{ \frac{C_{M-1}^{m+k}}{C_{M}^{m+k+1}}}\left\vert \left( M-m-k-1\right)
0,\left( m+k\right) 1\right\rangle \otimes |R_{k+1}\rangle /\lambda ^{\left[ 1\right]} _{2}. \end{equation*} Compare the last two lines, we also have \begin{equation*} \Gamma _{\alpha _{1}}^{\left[ 1\right]0}=\delta _{\alpha _{1},1},\text{ } \Gamma _{\alpha _{1}}^{\left[ 1\right]1}=\delta _{\alpha _{1},2},\text{ } \alpha _{1}=1,2. \end{equation*}
Now use the condition of normalization, Schmidt coefficients could be calculated, \begin{equation*} \lambda ^{\left[ 1\right]} _{1}=\sqrt{\overset{M-m-1}{\underset{k=-m}{\sum }} \beta _{mk}^{2}\frac{C_{M-1}^{m+k}}{C_{M}^{m+k}}},\text{ }\lambda ^{\left[ 1 \right]} _{2}=\sqrt{\overset{M-m-1}{\underset{k=-m}{\sum }}\beta _{m\left( k+1\right) }^{2}\frac{C_{M-1}^{m+k}}{C_{M}^{m+k+1}}}. \end{equation*}
Then we have \begin{equation*} V _{\alpha _{1}}^{\left[ 1\right]i_{1}}=\Gamma _{\alpha _{1}}^{\left[ 1\right]i_{1}}\lambda ^{\left[ 1\right]} _{\alpha _{1}}. \end{equation*} The explicit form of $V^{[1]i_1}$ is given in the appendix, as are the other $V^{[n]i_n}$.
Next, we will not present the detailed calculations for other cases since the method is almost the same, but only list the results.
\item For $1<n\le M-1$:
We calculate the SD of $|\Psi_M^{(m)}\rangle$ according to partitions. The results are: \begin{equation*} \lambda ^{\left[ n\right]} _{j+1}=\sqrt{C_{n}^{j}\overset{M-m-n}{\underset{k=-m} {\sum }}\beta _{m\left( j+k\right) }^{2}\frac{C_{M-n}^{m+k}}{C_{M}^{m+j+k}}}, ~~~\lambda ^{\left[ n-1\right]} _{j+1}=\sqrt{C_{n-1}^{j}\overset{M-m-n+1}{\underset {k=-m}{\sum }}\beta _{m\left( j+k\right) }^{2}\frac{C_{M-n+1}^{m+k}}{ C_{M}^{m+j+k}}}. \end{equation*} \begin{equation*} \Gamma _{\left( j+1\right) \alpha _{n}}^{ \left[ n\right]0}=\delta _{\left( j+1\right) \alpha _{n}}\frac{\sqrt{C_{n-1}^{j}}}{\lambda ^{\left[ n-1\right]} _{j+1}\sqrt{C_{n}^{j}}}, ~~~\Gamma _{\left( j+1\right) \alpha _{n}}^{\left[ n\right]1}=\delta _{\left( j+2\right) \alpha _{n}}\frac{\sqrt{C_{n-1}^{j}}}{\lambda ^{\left[ n-1\right]} _{j+1}\sqrt{C_{n}^{j+1}}}. \end{equation*}
And for this case, the summarized form is $V _{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]i_{n}}=\Gamma _{\alpha _{n-1}\alpha _{n}}^{\left[ n\right]i_{n}}\lambda ^{\left[ n\right]} _{\alpha _{n}}$.
\item Case $n=M$: We have, \begin{equation*} \lambda ^{\left[ M\right]} _{j+1}=\beta _{m\left( j-m\right) }, ~~~\lambda ^{\left[ M-1\right]} _{j+1}=\sqrt{C_{M-1}^{j}\overset{-m+1}{\underset{ k=-m}{\sum }}\beta _{m\left( j+k\right) }^{2}\frac{C_{1}^{m+k}}{C_{M}^{m+j+k} }}. \end{equation*} And also, \begin{equation*} \Gamma _{\left( j+1\right) \alpha _{M}}^{\left[ M\right]0}=\delta _{\alpha _{M}\left( j+1\right) }\frac{\sqrt{C_{M-1}^{j}}}{\lambda ^{\left[ M-1\right]} _{j+1}\sqrt{C_{M}^{j}}}, \end{equation*} \begin{equation*} \Gamma _{\left( j+1\right) \alpha _{M}}^{\left[ M\right]1}=\delta _{\alpha _{M}\left( j+2\right) }\frac{\sqrt{C_{M-1}^{j}}}{\lambda ^{\left[ M-1\right]} _{j+1}\sqrt{C_{M}^{j+1}}}. \end{equation*} Similarly, for this case $V_{\alpha_M\alpha_{M-1}}^{[M]i_M}=\Gamma_{\alpha_{M-1}\alpha_M}^{[M]i_M} \lambda_{\alpha_M}^{[M]}$.
\item Case $M+l$ $\left( 1\le l\le M-N\right) $: We have, \begin{equation*} \lambda ^{\left[ M+l\right]} _{j+1}=\sqrt{C_{M-N-l}^{j-m}\overset{l}{\underset{ k=0}{\sum }}\beta _{m\left( j+k-m\right) }^{2}\frac{C_{l}^{k}}{ C_{M-N}^{j+k-m}}}. \end{equation*} \begin{equation*} \lambda ^{\left[ M+l-1\right]} _{j+1}=\sqrt{C_{M-N-l+1}^{j-m}\overset{l-1}{ \underset{k=0}{\sum }}\beta _{m\left( j+k-m\right) }^{2}\frac{C_{l-1}^{k}}{ C_{M-N}^{j+k-m}}}, \end{equation*} \begin{equation*} \Gamma _{\left( j+1\right) \alpha _{M+l}}^{\left[ M+l\right]0}=\delta _{\alpha _{M+l}j}\sqrt{\frac{C_{M-N-l}^{j-m-1}}{C_{M-N-l+1}^{j-m}}}/\lambda ^{\left[ M+l\right]} _{\alpha _{M+l}}, \end{equation*} \begin{equation*} \Gamma _{\left( j+1\right) \alpha _{M+l}}^{\left[ M+l\right]1}=\delta _{\alpha _{M+l}\left( j+1\right) }\sqrt{\frac{C_{M-N-l}^{j-m}}{ C_{M-N-l+1}^{j-m}}}/\lambda ^{\left[ M+l\right]} _{\alpha _{M+l}}. \end{equation*} Then we have $V _{\alpha _{M+l}\alpha _{M+l-1}}^{\left[ M+l\right]i_{M+l}}=\Gamma _{\alpha _{M+l-1}\alpha _{M+l}}^{\left[ M+l\right]i_{M+l}}\lambda ^{\left[ M+l\right]} _{\alpha _{M+l}}$.
\end{enumerate}
Up to now, we have calculated the explicit form of every $V^{[k]i_k}$; since $V^{[k]i_k}$ depends on $m$, we denote it by $V_{(m)}^{[k]i_k}$ hereafter.
Through computation, we can obtain the smallest dimension needed for the isometric operator $V_{(m)}^{[k]i_k}$,
\begin{equation}
D = \left\{
\begin{array}{ll}
M-\frac{N}{2}+1 & \text{if $N$ is even;}\\[2pt]
M-\frac{N-1}{2}+1 & \text{if $N$ is odd.}
\end{array}
\right.
\end{equation}
So we see that $D$ increases only linearly with $M$, which significantly eases the difficulty of sequential cloning.
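For instance, the $2\rightarrow 4$ sequential cloner needs an ancilla of dimension only $D=M-N/2+1=4$, whereas a global unitary acting on all $2M-N=6$ output qubits would act on a $2^{6}=64$-dimensional space.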
Based on the above computation, we know that the state $|\Psi_M^{(m)}\rangle$ can be expressed in MPS form, so the $N$-qubit pure state $|(N-m)0,m1\rangle$ can be sequentially transformed into $|\Psi_M^{(m)}\rangle$. Now, in order to sequentially clone the $N$-qubit state $|\Psi\rangle^{\otimes N}$ to $M$ qubits, the scheme is as follows \cite{Dang2008}.
1). Encode the $N$-qubit state $|\Psi\rangle^{\otimes N}$ in the ancilla, so that the initial state of the united ancilla is \begin{equation}
|\phi_I^{'}\rangle = \sum_{m=0}^N x_0^{N-m} x_1^m \sqrt{C_N^m} |(N-m)0,m1\rangle\otimes|0\rangle_D. \end{equation}
2). Build the operators \begin{equation}
V^{[k]i_k}=\sum_{m=0}^N (\sqrt{C_N^m})^{\frac{1}{2M-N}}(|0\rangle\langle0|)^{\otimes N-m}(|1\rangle\langle1|)^{\otimes m} \otimes V_{(m)}^{[n]i_n}. \end{equation}
3). Let all the qubits interact sequentially with the united ancilla according to the operator $V^{[k]i_k}$, we get the final state of the whole system \begin{align*}
|\Psi_{out}\rangle &= \sum_{i_1...i_{2M-N}} V^{[2M-N]i_{2M-N}}...V^{[1]i_1}|\phi_I^{'}\rangle\otimes|i_1...i_{2M-N}\rangle\\
&= \sum_{m=0}^N x_0^{N-m} x_1^m \sqrt{C_N^m} |0\rangle^{\otimes N-m}|1\rangle^{\otimes m}\otimes|\varphi_F^{(m)}\rangle\otimes|\Psi_M^{(m)}\rangle, \end{align*}
where $|\varphi_F^{(m)}\rangle$ is the final state of the ancilla when the input state is $|(N-m)0,m1\rangle$.
4). Perform a generalized Hadamard gate on the ancilla (a quantum Fourier transform) \begin{equation}
|0\rangle^{\otimes N-m}|1\rangle^{\otimes m}\otimes |\varphi_F^{(m)}\rangle \rightarrow \frac{1}{\sqrt{N+1}}\sum_{m'=0}^N e^{\frac{i2\pi mm'}{N+1}}
|0\rangle^{\otimes N-m'}|1\rangle^{\otimes m'}\otimes |\varphi_F^{(m')}\rangle, \end{equation} after which the final state becomes \begin{equation}
|\Psi_{out}^{'}\rangle = \frac{1}{\sqrt{N+1}}\sum_{m'=0}^N |0\rangle^{\otimes N-m'} |1\rangle^{\otimes m'} \otimes |\varphi_F^{(m')}\rangle \otimes |\Psi_M^{'}\rangle, \end{equation} where \begin{equation}
|\Psi_M^{'}\rangle = \sum_{m=0}^N e^{\frac{i2\pi mm'}{N+1}} x_0^{N-m} x_1^m \sqrt{C_N^m} |\Psi_M^{(m)}\rangle. \end{equation}
5). Measure the whole ancilla in the basis $\{ |0\rangle^{\otimes N-m'}|1\rangle^{\otimes m'}\otimes|\varphi_F^{(m')}\rangle\}_{m'=0}^N$. When the result is $m'=0$, the desired state $|\Psi_M\rangle = \sum_{m=0}^N x_0^{N-m} x_1^m \sqrt{C_N^m} |\Psi_M^{(m)}\rangle$ is directly obtained. If the measured $m'\neq0$, then we need to apply a local phase gate $U_S$ to every qubit. Through computation, a proper phase gate is \begin{equation}
U_S = |0\rangle\langle0|+e^{i\theta}|1\rangle\langle1|, \end{equation} where $\theta=-\frac{2\pi m'}{N+1}$. With the effect of the phase gate, the output state becomes \begin{equation}
|\Psi_M^{''}\rangle = e^{i(M-N)\theta}|\Psi_M\rangle. \end{equation} Since $e^{i(M-N)\theta}$ is only a global phase and has no physical effect, the output state is the desired one. It can now be seen that we have realized the sequential $N \rightarrow M$ UQCM. When $N=1$, all the results coincide with the $1 \rightarrow M$ case in the last section.
Recently, sequential cloning under realistic experimental conditions has been investigated in \cite{Saberi2012}.
\subsection{Sequential UQCM in $d$ dimensions} We now proceed to the more general case where the qubit is extended to a qudit. In a $d$-dimensional space, an arbitrary pure quantum state can be expressed as \begin{equation}
|\Psi\rangle=\sum_{i=0}^{d-1}x_i|i\rangle, \quad \sum_{i=0}^{d-1}|x_i|^2=1. \end{equation} Then $N$ identical such qudits can be expanded in the symmetric subspace as \cite{PhysRevA.58.1827} \begin{equation}
|\Psi\rangle^{\otimes N} = \sum_{\bm{m}=0}^N \sqrt{\frac{N!}{m_1!...m_d!}}x_0^{m_1}...x_{d-1}^{m_d}|\bm{m}\rangle, \end{equation}
where $|\bm{m}\rangle$ denotes the symmetric state, whose form is \begin{equation}
|\bm{m}\rangle = |m_1 0,m_2 1, ..., m_d (d-1)\rangle, \end{equation}
which means the symmetric state $|\bm{m}\rangle$ has $m_i$ qudits in the computational basis state $|i-1\rangle$ ($i=1,...,d$), and the occupation numbers satisfy $\sum_{i=1}^d m_i=N$.
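As a quick sanity check of this expansion (an illustrative sketch, not part of the original discussion), the following Python code sums the squared coefficients $\frac{N!}{m_1!\cdots m_d!}|x_0|^{2m_1}\cdots|x_{d-1}|^{2m_d}$ over all occupation vectors with $\sum_i m_i=N$ and confirms that they add up to one, as required by the multinomial theorem.
\begin{verbatim}
import math
import numpy as np

def compositions(N, d):
    # all occupation vectors (m_1, ..., m_d) of nonnegative integers summing to N
    if d == 1:
        yield (N,)
        return
    for first in range(N + 1):
        for rest in compositions(N - first, d - 1):
            yield (first,) + rest

d, N = 3, 4
x = np.random.randn(d) + 1j * np.random.randn(d)
x /= np.linalg.norm(x)              # normalised amplitudes x_0, ..., x_{d-1}

total = sum(math.factorial(N) / np.prod([math.factorial(m) for m in ms])
            * np.prod([abs(x[i]) ** (2 * ms[i]) for i in range(d)])
            for ms in compositions(N, d))
print(total)                        # ~1.0 by the multinomial theorem
\end{verbatim}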
Taking the symmetric state $|\bm{m}\rangle$ as the input state of the optimal $d$-level UQCM according to the scheme of Fan \emph{et al.} \cite{PhysRevA.64.064301}, the corresponding $M$-qudit output state will be \begin{equation}
|\Psi_M^{(\bm{m})}\rangle = \sum_{\bm{j}=0}^{M-N} \beta_{\bm{m}\bm{j}}|\bm{m}+\bm{j}\rangle\otimes|R_{\bm{j}}\rangle, \end{equation}
where the vector $\bm{j}=(j_1,j_2,...j_d)$ satisfies $\sum_{i=1}^d j_i = M-N$, $|R_{\bm{j}}\rangle=|\bm{j}\rangle_R$ denotes the state of the cloning machine, and $\beta_{\bm{m}\bm{j}}=\sqrt{\prod_{i=1}^d C_{m_i+j_i}^{m_i}/C_{M+d-1}^{M-N}}$. The following steps are similar to the 2-level case presented previously. We still need to find the MPS form of the state $|\Psi_M^{(\bm{m})}\rangle$, and the method is through SD as well. Express $|\Psi_M^{(\bm{m})}\rangle$ in the computational basis \begin{equation}
|\Psi_M^{(\bm{m})}\rangle = \sum_{i_1...i_{2M-N}}\langle \varphi_F^{(\bm{m})}|V^{[2M-N]i_{2M-N}}...V^{[1]i_1}|0\rangle_D\otimes|i_1...i_{2M-N}\rangle. \end{equation} Through computation, $V^{[k]i_k}$ can be obtained \cite{Dang2008}. The detailed process is a direct extension of the 2-level case and is omitted here; see the Appendix for details. Besides, we also know that the necessary dimension of the ancilla is $D_d = C_{M-\lfloor\frac{N+1}{2}\rfloor+d-1}^{M-\lfloor\frac{N+1}{2}\rfloor}$, where the symbol $\lfloor X \rfloor$ denotes the floor function. So we may observe that for $d\geq2$, $D_d$ is far smaller than $d_M$; this simplification shows the advantage of sequential cloning of qudits.
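For concreteness, the following Python sketch (illustrative only) evaluates $D_d$ from the binomial expression above; for fixed $N$ and $d$ it grows only polynomially in $M$ (with degree $d-1$), in contrast to the exponential growth of the full $M$-qudit Hilbert space dimension, which is printed here merely as a benchmark.
\begin{verbatim}
from math import comb

def ancilla_dim_qudit(N, M, d):
    # D_d = C(M - floor((N+1)/2) + d - 1, M - floor((N+1)/2))
    t = M - (N + 1) // 2
    return comb(t + d - 1, t)

N, d = 2, 3
for M in range(N, N + 6):
    # ancilla dimension versus full M-qudit space dimension d**M
    print(M, ancilla_dim_qudit(N, M, d), d ** M)
\end{verbatim}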
\section{Implementation of quantum cloning machines in physical systems}
In general, cloning machines can be realized by quantum circuits composed of single qubit rotation gates and CNOT gates, just like other quantum computations. This is guaranteed by the universality of quantum computation \cite{elementarygates}: any unitary operation can be realized by a complete set of universal gates.
\subsection{A unified quantum cloning circuit}
It is interesting that the UQCM and the phase-covariant QCM can be realized by a unified quantum cloning circuit by adjusting the angles in the single qubit rotation gates, as shown in FIG.~\ref{quantumcircuit}, first presented by Bu\v{z}ek \emph{et~al.} \cite{BBHBnetwork}. Let us consider the single qubit rotation gate (\ref{singlegate}) with a fixed phase parameter, which can be omitted; it can be written in matrix form as, \begin{eqnarray} \hat {R}(\vartheta )=\left( \begin{array}{cc} \cos \vartheta &\sin \vartheta\\ -\sin \vartheta &\cos \vartheta \end{array}\right) . \end{eqnarray} The form of the CNOT gate is given in (\ref{CNOT}). Here we use subindices in a CNOT gate, $CNOT_{jk}$, to specify that the control qubit is $j$ and the target qubit is $k$. Following the copying scheme in \cite{BBHBnetwork}, the cloning process is divided into two unitary transformations, \begin{eqnarray}
|\Psi \rangle _{a_1}^{(in)}|0\rangle _{a_2}|0\rangle _{a_3}
\rightarrow |\Psi \rangle _{a_1}^{(in)}|\Psi \rangle _{a_2a_3}^{(prep)}
\rightarrow |\Psi \rangle _{a_1a_2a_3}^{(out)}. \end{eqnarray} The preparation state is constructed as follows, \begin{eqnarray}
|\Psi \rangle _{a_2a_3}^{(prep)}=\hat {R}_2(\vartheta _3) CNOT_{32}\hat {R}_3(\vartheta _2)CNOT_{23}
\hat {R}_2(\vartheta _1)|0\rangle _{a_2}|0\rangle _{a_3}. \end{eqnarray} The second step is as, \begin{eqnarray}
|\Psi \rangle _{a_1a_2a_3}^{(out)}=CNOT_{a_3a_1}CNOT_{a_2a_1}
CNOT_{a_1a_3}CNOT_{a_1a_2}|\psi \rangle _{a_1}^{(in)}
|\psi \rangle _{a_2a_3}^{(prep)}. \label{copy} \end{eqnarray} We may find that two copies are in $a_2, a_3$ qubits. For UQCM, the angles in the single qubit rotations are chosen as, \begin{eqnarray} \vartheta _1=\vartheta _3=\frac {\pi }{8}, ~~\vartheta _2=-\arcsin \left( \frac {1}{2}-\frac {\sqrt{2}}{3}\right) ^{1/2}. \end{eqnarray}
This scheme is flexible and can be adjusted for phase-covariant quantum cloning. We only need to choose different angles for the single qubit rotations, and those angles are shown to be as follows \cite{PhysRevA.65.012304}, \begin{eqnarray} \vartheta _1=\vartheta _3=\arcsin \left( \frac {1}{2} -\frac {1}{2\sqrt{3}}\right)^{\frac {1}{2}}, ~~\vartheta _2=-\arcsin \left( \frac {1}{2}-\frac {\sqrt{3}}{4}\right) ^{\frac {1}{2}}. \end{eqnarray} To be explicit, we may find that the preparation state takes the form, \begin{eqnarray}
|\Psi \rangle _{a_2a_3}^{(prep)}
=\frac {1}{\sqrt{2}}|00\rangle _{a_2a_3}
+\frac {1}{2}(|01\rangle _{a_2a_3}+|10\rangle _{a_2a_3}). \end{eqnarray} The second step for phase-covariant quantum cloning is the same as that of the UQCM. So this cloning circuit is general and can be applied to both universal cloning and phase-covariant cloning.
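As a consistency check of this circuit, the following Python (NumPy) sketch builds the preparation state and the copying network, applies them to random input qubits, and prints the fidelities of the two copies obtained after tracing out the remaining qubits. The rotation-gate sign convention used here, $R(\vartheta)|0\rangle=\cos\vartheta|0\rangle+\sin\vartheta|1\rangle$, and the qubit ordering are assumptions chosen so that the quoted UQCM angles reproduce the preparation state $\sqrt{2/3}\,|00\rangle+\sqrt{1/6}\,(|01\rangle+|10\rangle)$; with these conventions both copy fidelities reproduce the optimal value $5/6$.
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
X  = np.array([[0., 1.], [1., 0.]])
P0 = np.diag([1., 0.])      # |0><0|
P1 = np.diag([0., 1.])      # |1><1|

def R(theta):
    # single-qubit rotation, convention R(theta)|0> = cos|0> + sin|1>
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def embed(op, pos, n):
    # single-qubit operator at position pos (0-based, leftmost factor first)
    mats = [op if k == pos else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def cnot(ctrl, targ, n):
    return embed(P0, ctrl, n) + embed(P1, ctrl, n) @ embed(X, targ, n)

# preparation state on (a2, a3), gates applied right-to-left as in the text
t1 = t3 = np.pi / 8
t2 = -np.arcsin(np.sqrt(0.5 - np.sqrt(2) / 3))
prep = embed(R(t3), 0, 2) @ cnot(1, 0, 2) @ embed(R(t2), 1, 2) \
       @ cnot(0, 1, 2) @ embed(R(t1), 0, 2) @ np.array([1., 0., 0., 0.])

# copying network on (a1, a2, a3) = positions (0, 1, 2)
U = cnot(2, 0, 3) @ cnot(1, 0, 3) @ cnot(0, 2, 3) @ cnot(0, 1, 3)

rng = np.random.default_rng(1)
for _ in range(3):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)                    # random pure input on a1
    out = U @ np.kron(psi, prep)
    rho = np.outer(out, out.conj()).reshape([2] * 6)
    rho_a2 = np.einsum('abcadc->bd', rho)         # trace out a1 and a3
    rho_a3 = np.einsum('abcabd->cd', rho)         # trace out a1 and a2
    print(np.real(psi.conj() @ rho_a2 @ psi),
          np.real(psi.conj() @ rho_a3 @ psi))     # both ~ 5/6
\end{verbatim}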
\begin{figure}
\caption{Quantum circuit implementing quantum cloning machines. This quantum circuit can realize both universal cloning machine and phase-covariant quantum cloning machine by adjusting parameters in the single qubit rotation gates. This circuit is the same as the one in \cite{BBHBnetwork}.}
\label{quantumcircuit}
\end{figure}
\subsection{A simple scheme of realization of UQCM and Valence-Bond Solid state}
We already know that the universal cloning machine can be realized by a symmetric projection. This fact can let us find a simple scheme for the implementation of the universal cloning machine. The simplest universal cloning machine can be obtained by a symmetric projection on the input qubit and one part of a maximally entangled state. This symmetric projection can be naturally realized by bosonic operators in the Fock space representation. Suppose the input state is $a_H^{\dagger }$, the available maximally entangled state is $a_H^{\dagger }a_H^{\dagger }+ a_V^{\dagger }a_V^{\dagger }$, here $H,V$ can be horizontal and vertical polarizations of a photon, or any other degrees of freedom of the bosonic operator. We also suppose that those operators are acting on the vacuum state, then we have, \begin{eqnarray} a_H^{\dagger }\left( a_H^{\dagger }a_H^{\dagger }+ a_V^{\dagger }a_V^{\dagger }\right) =\left[ \sqrt {\frac {2}{3}}\frac {(a_H^{\dagger })^2}{\sqrt {2!}}a_H^{\dagger } +\sqrt {\frac {1}{3}}(a_H^{\dagger }a_V^{\dagger })a_V^{\dagger }\right] \sqrt {3}. \label{fockrep} \end{eqnarray} Now we consider that the last bosonic operators are acting as ancillary qubits, in Fock space representation, $\frac {(a_H^{\dagger })^2}{\sqrt {2}}$ corresponds to two photons in horizontal polarization, while
$a_H^{\dagger }a_V^{\dagger }$ is a symmetric state with one horizontal photon and one vertical photon. Up to an overall normalization factor $\sqrt {3}$, the above formula then takes the following form, with initial state $|H\rangle $, \begin{eqnarray}
|H\rangle \rightarrow \sqrt {\frac {2}{3}} |2H\rangle |H\rangle _a+\sqrt {\frac {1}{3}} |H,V\rangle \rangle |V\rangle _a. \label{photonclone1} \end{eqnarray} Similarly for a vertical photon, we have \begin{eqnarray}
|V\rangle \rightarrow \sqrt {\frac {2}{3}} |2V\rangle |V\rangle _a+\sqrt {\frac {1}{3}} |H,V\rangle \rangle |H\rangle _a. \label{photonclone2} \end{eqnarray} It is now clear that those two transformations constitute exactly a UQCM. This fact was noticed by Simon \emph{et al.} Actually, it also provides a natural realization of the UQCM by photon stimulated emission. Based on experiments on the preparation of maximally entangled states, the UQCM can be realized by the above scheme, as we will present later.
Concerning the no-cloning theorem, a frequently misunderstood point is that a ``laser'' itself seems to provide a perfect cloning machine: one photon can apparently be cloned perfectly into many identical photons. This seems to contradict the no-cloning theorem. The point is that we can indeed clone a photon into many copies. However, the no-cloning theorem states that if we clone horizontal and vertical photons perfectly, we cannot clone perfectly a superposition of horizontal and vertical polarizations. Thus a ``laser'' does not conflict with the no-cloning theorem.
When a maximally entangled state is available, it seems that a UQCM can be realized. In condensed matter physics, the Valence-Bond Solid state is constructed by a series of singlet states, see for example \cite{VBSPRL}, \begin{eqnarray}
|VBS\rangle =\prod _{i=0}^N\left( a_i^{\dagger }b_{i+1}^{\dagger }-a_{i+1}^{\dagger }b_{i}^{\dagger }\right)|{\rm vacuum}\rangle , \end{eqnarray} where sites 0 and $N+1$ are the two ends. We remark that the sites in the bulk are restricted to the symmetric subspace, again as a consequence of the Fock space representation, as shown schematically in the following:
\begin{center} \unitlength=1mm \begin{picture}(80,5)(0,3)
\put(0,0){\makebox(3,3){0}} \put(9,0){\makebox(3,3){1}}
\put(19,0){\makebox(3,3){2}}
\put(27,0){\makebox(3,3){...}} \put(30,0){\makebox(3,3){...}} \put(37,0){\makebox(3,3){...}} \put(40,0){\makebox(3,3){...}} \put(47,0){\makebox(3,3){...}} \put(50,0){\makebox(3,3){...}} \put(59,0){\makebox(3,3){N}}
\put(67,0){\makebox(3,3){N+1}} \end{picture}
\begin{picture}(80,5)(0,0) \put(0,0){\line(1,0){7}} \put(10,0){\line(1,0){7}} \put(20,0){\line(1,0){7}} \put(30,0){\line(1,0){7}} \put(40,0){\line(1,0){7}} \put(50,0){\line(1,0){7}} \put(60,0){\line(1,0){7}} \put(0,0){\circle*{1}} \put(7,0){\circle*{1}} \put(10,0){\circle*{1}} \put(17,0){\circle*{1}} \put(20,0){\circle*{1}} \put(27,0){\circle*{1}} \put(30,0){\circle*{1}} \put(37,0){\circle*{1}} \put(40,0){\circle*{1}} \put(47,0){\circle*{1}} \put(50,0){\circle*{1}} \put(57,0){\circle*{1}} \put(60,0){\circle*{1}} \put(67,0){\circle*{1}} \put(8.5,0){\circle{5}} \put(18.5,0){\circle{5}} \put(28.5,0){\circle{5}} \put(38.5,0){\circle{5}} \put(48.5,0){\circle{5}} \put(58.5,0){\circle{5}} \end{picture}
\end{center}
By the same consideration as presented above, since one singlet state is a maximally entangled state, the UQCM can be realized if the input state is put in site $0$, $(\alpha a_0^{\dagger }+\beta b_0^{\dagger })\left( a_0^{\dagger }b_{1}^{\dagger }-a_{1}^{\dagger }b_{0}^{\dagger }\right)$. Further, one may notice that we need not restrict ourselves to a single singlet state involving only two sites; a whole one-dimensional Valence-Bond Solid state can be treated as a maximally entangled state, so that a UQCM can be realized as follows: \begin{eqnarray}
(\alpha a_0^{\dagger }+\beta b_0^{\dagger })|VBS\rangle . \end{eqnarray} The input state $\alpha a_0^{\dagger }+\beta b_0^{\dagger }$ acts like an open boundary operator. One feature of this universal cloning machine is that the ancillary states are at one end of this 1D state, while the two copies are located at the other end. This system is like the Majorana fermion quantum wire proposed by Kitaev \cite{Kitaev}, where the encoded qubit is topologically protected. It is also pointed out that the cloning machine can be realized by networks of spin chains \cite{cloning-spinnetwork}.
\subsection{UQCM realized by photon stimulated emission and the experiment}
With the results in the last section, one may realize that a source of maximally entangled states may provide a mechanism for quantum cloning. The corresponding fidelity is optimal. It seems that photon stimulated emission possesses such a property and can give a realization of the UQCM. This was first proposed in \cite{PhysRevLett.84.2993,Kempe00} and realized experimentally \cite{667603820020426,prl89.107901}. In this scheme, certain types of three-level atoms can be used to optimally clone a qubit that is encoded as an arbitrary superposition of excitations in the photonic modes corresponding to the atomic transitions. Next, we shall first review briefly the qubit case, followed by the general $d$-dimensional result.
For the qubit case \cite{PhysRevLett.84.2993,Kempe00}, we consider an inverted medium that consists of an ensemble of
$\Lambda $ atoms with three energy levels. These three levels correspond to two degenerate ground states $|g_1\rangle $ and $|g_2\rangle $ and an excited level
$|e\rangle $. The ground states are coupled to the excited state by two modes of the electromagnetic field $a_1$ and $a_2$, respectively. The Hamiltonian of this system takes the form, \begin{eqnarray}
H=\gamma \left( a_1\sum _{k=1}^N|e^k\rangle \langle g_1^k|
+a_2\sum _{k=1}^N|e^k\rangle \langle g_2^k| \right) + H.c. \label{Hami} \end{eqnarray}
We then introduce the operators $b_rc^{\dagger }\equiv \sum _{k=1}^N|e^k\rangle \langle g_r^k|, ~~~r=1,2$, where $c^{\dagger} $ is a creation operator of the ``e-type'' excitation and $b_r$ is an annihilation operator of the $g_r$ ground state. Now the Hamiltonian (\ref{Hami}) becomes, \begin{eqnarray} {\cal {H}}=\gamma (a_1b_1+a_2b_2)c^{\dagger }+H.c. \label{Hami-osc} \end{eqnarray} Now we find that a source of maximally entangled states is available. The input state can be considered to be of the form
$(\alpha a_1^{\dagger }+\beta a_2^{\dagger })|0,0\rangle =\alpha
|1,0\rangle +\beta |0,1\rangle $. The number of copies in this cloning system is restricted by the number of atoms in excited states
$\otimes _{k=1}^N|e^k\rangle $ which are represented as $(c^{\dagger })^N/\sqrt {N!}$. We may consider that initially there are $i+j$ qubits in $a_1^{\dagger }$ and $a_2^{\dagger }$ which corresponds to a completely symmetric state with $i,j$ states in two different levels of qubits, \begin{eqnarray}
|\Psi _{in},(i,j)\rangle &=& \frac {(a_1^{\dagger })^{i}(a_2^{\dagger })^j(c^{\dagger })^N}
{\sqrt {i!j!N!}}|0\rangle \nonumber \\
&=&|i_{a_1},j_{a_2}\rangle |0_{b_1},0_{b_2}\rangle |N_c\rangle \nonumber \\
&\equiv &|i,j\rangle _a|0,0\rangle _b|N\rangle _c. \label{g-input} \end{eqnarray}
With the Hamiltonian (\ref{Hami-osc}), the time evolution of the state starting from the initial state (\ref{g-input}) becomes as follows, \begin{eqnarray}
&&|\Psi (t),(i,j)\rangle =e^{-iHt}|\Psi _{in},(i,j)\rangle \nonumber \\
&=&\sum _p(-iHt)^p/p!|\Psi _{in},(i,j)\rangle
=\sum _{l=0}^Nf_l(t)|F_l,(i,j)\rangle , \label{time} \end{eqnarray}
where $|\Psi _{in},(i,j)\rangle =|F_0,(i,j)\rangle$ and $l$
is the number of additional photons emitted, corresponding to additional copies; thus there are altogether $i+j+l$ copies in the output, which is expressed as $|F_l,(i,j)\rangle $. In this process, the states with different numbers of copies are superposed. The probability of finding
$l$ additional copies is $|f_l(t)|^2$.
To show that this process exactly realizes the optimal UQCM, the corresponding cloning transformation with $l$ additional copies can be calculated as, \begin{eqnarray}
|i,j\rangle _a|0,0\rangle _b|N\rangle _c\rightarrow |F_l,(i,j)\rangle = \sum _{k=0}^l\sqrt {\frac {l!(i+j+1)!}{(i+j+l+1)!}} \sqrt{ \frac {(i+l-k)!(j+k)!}{i!j!k!(l-k)!}} \nonumber \\
|i+l-k,j+k\rangle _a|l-k,k\rangle _b|N-l\rangle _c. \label{g-output} \end{eqnarray} This is indeed the UQCM. In addition, this provides an alternative method to find the optimal cloning transformations.
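As a small check of the transformation (\ref{g-output}) (an illustrative sketch, not part of the original derivation), the following Python code evaluates its amplitudes for the simplest case $i=1$, $j=0$, $l=1$, reproducing the $1\to2$ amplitudes $\sqrt{2/3}$ and $\sqrt{1/3}$ of Eqs.~(\ref{photonclone1}) and (\ref{photonclone2}), and verifies that the amplitudes are normalised for a few other choices of $(i,j,l)$.
\begin{verbatim}
from math import factorial, sqrt

def amplitude(i, j, l, k):
    # coefficient of |i+l-k, j+k>_a |l-k, k>_b in the output (g-output)
    return sqrt(factorial(l) * factorial(i + j + 1)
                / factorial(i + j + l + 1)) \
         * sqrt(factorial(i + l - k) * factorial(j + k)
                / (factorial(i) * factorial(j)
                   * factorial(k) * factorial(l - k)))

# 1 -> 2 cloning of a horizontal photon: i = 1, j = 0, l = 1
print([amplitude(1, 0, 1, k) for k in range(2)])   # [sqrt(2/3), sqrt(1/3)]

# normalisation check for a few (i, j, l)
for (i, j, l) in [(1, 0, 1), (2, 1, 3), (0, 2, 2)]:
    print(sum(amplitude(i, j, l, k) ** 2 for k in range(l + 1)))  # ~1.0
\end{verbatim}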
Experimentally, Lamas-Linares \emph{et al.} successfully performed universal quantum cloning by using the Hamiltonian (\ref{Hami-osc}) shown above \cite{667603820020426}. The operators $a^{\dagger },b^{\dagger }$ are creation operators of photons in the spatial modes $a,b$ marked in FIG.~\ref{clone-experiment}, corresponding to two different directions of emission after passing the non-linear crystal (BBO 2mm). Photons of mode $a^{\dagger }$ with subscripts $1,2$ refer to vertical and horizontal polarization, respectively. In the state analyzer part of the experimental setup, vertical and horizontal photons can be analyzed by a polarizing beam splitter in front of the photon detectors $D2$ and $D3$.
In the experiment, a laser produces light pulses of 120 fs duration (fs pulse in FIG.~\ref{clone-experiment}). By a beam splitter, a tiny part of each pulse is split off and attenuated below the single-photon level, probabilistically providing the input photon. The polarization of the input photon can be adjusted in the state preparation part, corresponding to an arbitrary input pure qubit. The major part of the pulse is frequency doubled (shown as $2\omega $ in FIG.~\ref{clone-experiment})
and used to pump the non-linear crystal (BBO 2mm) where photon pairs entangled in polarization are created, as described by the Hamiltonian (\ref{Hami-osc}). The input photon and the created $a$ photons with vertical and horizontal polarizations are adjusted so that they overlap perfectly and are indistinguishable for quantum cloning. The photon of mode $b$ serves as a trigger indicating whether or not the polarization-entangled state has been created by parametric down-conversion. For the time evolution $e^{-i{\cal {H}}t}$ with small values of $\gamma t$ and initial state $a^{\dagger }_1|0\rangle $, the first order term corresponds to the three photon state
$(a_1^{\dagger }b_1^{\dagger }+a_2^{\dagger }b_2^{\dagger })a^{\dagger }_1|0\rangle $. Here we remark that the entangled state in vertical and horizontal polarization usually takes the form $a_V^{\dagger }b_H^{\dagger }-a_H^{\dagger }b_V^{\dagger }$, which is equivalent to what we write here. The output state has the form $(a^{\dagger }_1)^2$ for $b_1^{\dagger }$ and $a^{\dagger }_1a^{\dagger }_2$ for $b_2^{\dagger }$, corresponding to the two terms in the universal quantum cloning transformation. By this method the information of the input photon polarization, a qubit, is cloned by universal cloning onto the down-converted photon. This process of universal cloning is exactly what we have reviewed as the symmetric projection of identical input states and one half of a maximally entangled state. We may expect that, if maximally entangled states are available, universal quantum cloning is possible whenever the symmetric projection can be realized.
\begin{figure}
\caption{Schematic of experimental universal quantum cloning, see Ref.\cite{667603820020426}.}
\label{clone-experiment}
\end{figure}
\subsection{Higher dimension UQCM realized by photon stimulated emission}
The higher dimensional case can be studied similarly \cite{ISI:000177872600134}. Now the atoms have one excited state $|e\rangle $ and $d$ ($d\ge 2$) ground states $|g_n\rangle , n=1, 2,\cdots, d$, each coupled to a different photon mode $a_n$ corresponding to a mode of the qudit. The Hamiltonian can also be written in a generalized form, \begin{eqnarray} {\cal {H}}_d=\gamma (a_1b_1+\cdots +a_db_d)c^{\dagger }+H.c. \label{d-Hami} \end{eqnarray} The initial states, which are symmetric states, are, \begin{eqnarray}
|\Psi _{in},\vec {j}\rangle = \prod _{i=1}^d\frac {(a_i^{\dagger })^{j_i}}{\sqrt{j_i!}}
\frac {(c^{\dagger })^N}{\sqrt{N!}}|0\rangle
\equiv |\vec{j}\rangle _a|\vec{0}\rangle _b|N\rangle _c, \label{d-input} \end{eqnarray}
where $\vec {j}=(j_1,j_2,\cdots,j_d)$. One can find that the time evolution of the states for qudits takes the same form as that for qubits (\ref{time}). So the probability of obtaining $l$ additional copies is $|f_l(t)|^2$. We use the notation $|F_0,\vec{j}\rangle \equiv |\Psi _{in},\vec{j}\rangle $, $\sum _{i}j_i=M$, and the output of cloning with $l$ additional copies can be obtained as, \begin{eqnarray}
|\vec{j}\rangle _a|\vec{0}\rangle _b|N\rangle _c\rightarrow |F_l,\vec{j}\rangle &=& \sum _{k_i}^l\sqrt{ \frac {(M+d-1)!l!}{(M+l+d-1)!}} \prod _{i=1}^d\sqrt{ \frac {(k_i+j_i)!}{k_i!j_i!}} \nonumber \\
&&|\vec{j}+\vec{k}\rangle _a|\vec{k}\rangle _b|N-l\rangle _c, \label{d-output} \end{eqnarray} where summation $\sum _{k_i}^l$ runs for all variables with constraint, $\sum _i^dk_i=l$. We thus realize the optimal UQCM for qudits.
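In the same spirit as before, a short Python sketch (illustrative only) can be used to verify that the squared amplitudes in (\ref{d-output}), summed over all $\vec{k}$ with $\sum_i k_i=l$, add up to one for each symmetric input, as required for an isometric cloning transformation.
\begin{verbatim}
from math import factorial, comb, prod

def compositions(total, parts):
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def weight(j, k):
    # squared amplitude in (d-output) for input occupations j
    # and emitted occupations k
    M, l, d = sum(j), sum(k), len(j)
    pref = factorial(M + d - 1) * factorial(l) / factorial(M + l + d - 1)
    return pref * prod(comb(j[i] + k[i], k[i]) for i in range(d))

j = (2, 1, 0)        # an example symmetric input with sum(j) = 3, d = 3
for l in range(4):   # number of additional copies
    print(l, sum(weight(j, k) for k in compositions(l, len(j))))  # ~1.0
\end{verbatim}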
Explicitly, the action of Hamiltonian on the symmetric states takes the following form, \begin{eqnarray}
&&{\cal {H}}_d|F_l,\vec{j}\rangle
=\gamma (\sqrt{(l+1)(N-l)(M+l+d)}|F_{l+1},\vec{j}\rangle \nonumber \\
&&+\sqrt{l(N-l+1)(M+l+d-1)}|F_{l-1},\vec{j}\rangle ), \nonumber \\ &&~~~~1\le l<N, \nonumber \\
&&{\cal {H}}_d|F_0,\vec{j}\rangle =\gamma \sqrt{N(M+d)}|F_1,\vec{j}\rangle , \nonumber \\
&&{\cal {H}}_d|F_N,\vec{j}\rangle =\gamma \sqrt{N(M+N+d-1)}|F_{N-1},\vec{j} \rangle . \label{struct1} \end{eqnarray}
To end this subsection, we present the familiar results of the UQCM, now in this photonic system. An arbitrary qudit takes the form,
$|\Psi \rangle =\sum _{i=1}^dx_ia_i^{\dagger }|\vec{0}\rangle $, with $\sum _{i=1}^d|x_i|^2=1$. By expansion, it corresponds to state, \begin{eqnarray}
|\Psi \rangle ^{\otimes M}&=&
(\sum _{i=1}^dx_ia_i^{\dagger })^{\otimes M}|\vec{0}\rangle \nonumber \\ &=&M!\sum _{j_i}^M \prod _{i=1}^d \frac {x_i^{j_i}} {\sqrt{j_i!}} \frac {(a_i^{\dagger })^{j_i}}
{\sqrt {j_i!}}|\vec{0}\rangle . \end{eqnarray} With the help of cloning transformation (\ref{d-output}), the output of cloning is, \begin{eqnarray}
|\Psi \rangle ^{\otimes M}\rightarrow |\Psi \rangle ^{out} &=&M!\sum _{j_i}^M \sum _{k_i}^l\sqrt{ \frac {(M+d-1)!l!}{(L+d-1)!}} \nonumber \\ &&\times \prod _{i=1}^d \frac {x_i^{j_i}} {j_i!} \sqrt{ \frac {(k_i+j_i)!}{k_i!}}
|\vec{j}+\vec{k}\rangle _a|\vec{k}\rangle _b, \label{out} \end{eqnarray} As we already know, this is the UQCM of qudits.
Quantum cloning itself is reversible since it is realized by unitary transformation. This does not necessarily mean that the cloning realized by photon stimulated emission can be inverted. However, it is proposed that this inverting process can succeed \cite{revsecloning}.
\subsection{Experimental implementation of phase-covariant quantum cloning by nitrogen-vacancy defect center in diamond}
The economic phase-covariant quantum cloning involves only three states of a two-qubit system. Experimentally, we can encode those three states by three energy levels in a specified physical system. The experimental implementation of phase-covariant quantum cloning by this scheme has been realized in a solid state system \cite{Pan2011}, namely the nitrogen-vacancy (NV) defect center in diamond. The structure of the NV center in diamond is that a carbon atom is replaced by a nitrogen atom, and additionally a vacancy is located in a nearby lattice site. The NV center is negatively charged and provides the three states of an electronic spin one. These are the state with zero magnetic moment ($m_s=0$), separated by a 2.87-GHz zero-field splitting from the two magnetic sub-levels $m_s=\pm 1$, which are split by an external magnetic field. The experimental samples of diamond can be bulk or nanodiamond. The electronic spin in the NV center of diamond can be individually addressed by using confocal microscopy so that we can control it exactly; ensembles of NV centers can also be well controlled. The NV center can be initialized to the $m_s=0$ state by continuous 532 nm laser excitation.
The superposed states of $m_s=0$ with $m_s=\pm 1$ are prepared by resonant microwaves, with the pulse duration determined by the corresponding Rabi oscillations. The microwave radiation is sent out by a copper wire of 20 $\mu$m diameter placed at a distance of 20 $\mu$m from the NV center. The Rabi oscillations corresponding to different microwave frequencies show that the prepared states are quantum mechanical superpositions. The resonant frequencies of the controlling microwaves are determined by the electron-spin-resonance (ESR) spectrum of the NV center, obtained by continuously sweeping the frequency. The readout of the electronic state is by Rabi oscillation: the measured value depends on the intensity of the fluorescence, which corresponds to the amplitude of the state $m_s=0$ in the superposition. The intensity of the fluorescence is measured by a single photon counting module connected to a multifunction data acquisition device. The main advantage of the NV center in diamond is its long coherence time, which is long enough for electronic spin manipulation for various tasks in quantum information processing.
One key point in precisely controlling the electron spin state is its interaction with the environmental spin bath, mainly constituted by nearby nuclear spins. When the electron spin is in the state with zero magnetic moment, $m_s=0$, it does not interact with the nuclear spins. If the electron spin is in either of the $m_s=\pm 1$ states, it is under the influence of the nearby nuclear spins. We may, on the one hand, use this coherent coupling for quantum information purposes, such as generating entangled states or realizing a quantum memory. On the other hand, it causes decoherence of the quantum state of the electron spin in the NV center. The interaction between the electron spin and a nearby nuclear spin in the NV center is clearly seen in the hyperfine structure of the ESR spectrum.
The general spin Hamiltonian of the NV center consisting of an electron spin, $\textbf{S}$, coupled with nearby nuclear spins, $\textbf{I}_k$, is given as, \begin{eqnarray} H_{spin}=H_{zf}+H_{eZeeman}+H_{hf}+H_q+H_{nZeeman}, \end{eqnarray} where the terms in spin Hamiltonian describe: the electron spin zero field splitting, $H_{ZF}=\textbf{S}\bar {D}\textbf{S}$, the electron Zeeman interaction, $H_{eZeeman}=\beta _e\vec{B}_0\bar {g}_e\textbf{S}$, hyperfine interactions between the electron spin and nuclear spins $H_{hf}=\sum _k\textbf{S}\bar {A}_k\textbf{I}_k$, the quadrupole interactions for nuclei with $I>1/2$, $H_q=\sum _{I_k>1}\textbf{I}_k\bar {P}_k\textbf{I}_k $, and the nuclear Zeeman interactions, $H_{nZeeman}=-\beta _n\sum _kg_{n,k}\vec {B}_0\textbf{I}_k$, also $g_e$ and $g_n$ are the $g$ factors for the electron and nuclei respectively, $\beta _e,\beta _g$ are Bohr magnetons for electron and nucleus, $\bar {A}$ and $\bar {P}$ are coupling tensors of hyperfine and quadrupole, and $\vec {B}_0$ is the applied magnetic field.
In the implementation of economic phase-covariant quantum cloning, we use four equatorial qubits, equivalent to the BB84 states. However, according to the result on the minimal input set, it is also possible to check just three equatorial qubits \cite{Jing2013}. Since only three orthogonal states are involved in the economic phase-covariant cloning, the scheme is to use the three physical states $m_s=0, m_s=\pm 1$ to represent the logical states of the qubits. In the experimental scheme, the encoding is: $|10\rangle \rightarrow m_s=0$, while
$|00\rangle $ and $|01\rangle $ correspond to $m_s=\pm 1$, respectively.
The implementation of phase cloning proceeds in two steps. The first step is the initial state preparation, which includes the input state preparation and the cloning machine initialization. The experimental realization of this step is to prepare the logical qubits $\frac {1}{\sqrt {2}}(|0\rangle +e^{i\phi }|1\rangle )|0\rangle $, which amounts to physically preparing a superposed state of the two involved levels. It is realized by initializing the NV center and applying a $\pi /2$ microwave pulse. The second step of the phase cloning is to realize the quantum cloning transformation. According to the optimal transformation $|00\rangle \rightarrow |00\rangle ,|10\rangle \rightarrow
\frac {1}{\sqrt {2}}(|10\rangle +|01\rangle )$, we can realize it by applying another $\pi /2$ microwave pulse. So now the phase quantum cloning is realized experimentally. To read out the result, we can use the combination of two Rabi oscillations to find the exact value of the output state. Experimentally, it is shown that the results are very close to the theoretical expectations. On average, the experimental fidelity is about $85.2\% $, which is very close to the theoretical optimal bound $1/2+\sqrt {1/8}\approx 85.4\% $ and is clearly better than that of the universal quantum cloning \cite{Pan2011}.
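The quoted optimal value can be checked directly from the two-step transformation just described. The short Python sketch below (an illustration, not the experimental procedure) applies the map $|00\rangle\to|00\rangle$, $|10\rangle\to\frac{1}{\sqrt2}(|10\rangle+|01\rangle)$ to an equatorial input $\frac{1}{\sqrt2}(|0\rangle+e^{i\phi}|1\rangle)\otimes|0\rangle$ and evaluates the single-copy fidelity, which equals $1/2+\sqrt{1/8}$ independently of $\phi$.
\begin{verbatim}
import numpy as np

def clone_equatorial(phi):
    # economic phase-covariant cloning of (|0>+e^{i phi}|1>)/sqrt(2)
    a = 1 / np.sqrt(2)                 # amplitude of |0>
    b = np.exp(1j * phi) / np.sqrt(2)  # amplitude of |1>
    out = np.zeros(4, dtype=complex)   # basis |00>, |01>, |10>, |11>
    out[0] = a                         # |00> -> |00>
    out[1] = b / np.sqrt(2)            # |10> -> (|10>+|01>)/sqrt(2)
    out[2] = b / np.sqrt(2)
    rho = np.outer(out, out.conj()).reshape(2, 2, 2, 2)
    rho1 = np.einsum('ijkj->ik', rho)  # reduced state of the first qubit
    psi = np.array([a, b])
    return np.real(psi.conj() @ rho1 @ psi)

print([round(clone_equatorial(phi), 6)
       for phi in np.linspace(0, 2 * np.pi, 5)])
print(0.5 + np.sqrt(1 / 8))            # theoretical optimum ~0.853553
\end{verbatim}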
To cover all values of the phase parameter, active control of the phase of the input state should be performed in the experiment. This can be realized experimentally by using two independent microwave sources. Such an experiment was performed recently using nanodiamond \cite{Pan2013}. The advantage of using nanodiamond instead of a bulk sample is its better integration property. Additionally, due to the sub-wavelength size of the nanodiamond, the fluorescence collection efficiency increases dramatically, which provides a high quality signal. The experimental results are presented in FIG.~\ref{figure-panphase}. We find that the advantage of the phase-covariant cloning machine over the universal cloning machine can also be demonstrated in this experiment, using state tomography for readout.
\begin{figure}
\caption{Experimental results of phase-covariant quantum cloning in the NV center of diamond at room temperature. The output states in experiment are read out by state tomography. The fidelities for different input states are presented and are shown to be better than the bound $5/6$ of the universal cloning machine. The average fidelity is very close to the theoretical expectation. This figure is presented in \cite{Pan2013}.}
\label{figure-panphase}
\end{figure}
\subsection{Experimental developments}
Quantum cloning processes have been realized experimentally by various schemes, as we have already reviewed in the previous sections. Here let us mention some further experiments. By the quantum circuit method shown in FIG.~\ref{quantumcircuit}, universal cloning has been realized in nuclear magnetic resonance (NMR) \cite{PhysRevLett.88.187901}. Phase-covariant cloning has also been realized in an NMR system, with input states ranging from the equator to the pole and possessing an arbitrary phase parameter \cite{PhysRevLett.94.040505}. The UQCM has been realized experimentally with a single photon using different degrees of freedom \cite{pra64.012315}, and by stimulated emission with an optical fiber amplifier \cite{prl89.107901}. Also in an optical system, the UQCM and the NOT gate have been realized \cite{prl92.067901}. Closely related to optical cloning, an experimental noiseless amplifier for quantum light states has been performed \cite{naturephoton2011}. A UQCM realization in cavity QED has been proposed in \cite{qed-clone}. An experimental realization of various cloning machines in one setup was performed recently in an optical system \cite{Lemr2012}.
Quantum cloning machines can be used for metrology. It has been proposed theoretically, and shown experimentally with an all-fiber experiment at telecommunications wavelengths, that the optimal cloning machine can be used as a radiometer to measure the amount of radiated power \cite{clone-radiometry}. An electro-optic quantum memory for light in atoms has been demonstrated experimentally and compared with the no-cloning limit \cite{HetetLongdell}.
\section{Concluding Remarks}
Research on quantum cloning and QIP is continuously developing. While preparing this review, new results have emerged which may not all be summarized here. However, we have tried to include some new results in the revision process. When this review was first posted on arXiv, many colleagues informed us of related literature that had been missed in the previous versions. We would like to thank them for those responses. At the same time, their feedback made us realize that such a review is indeed necessary, and encouraged us to finish this review and its revision. The anonymous referee also provided many valuable suggestions for improving our presentation.
\emph{Acknowlegements:} This work was supported by ``973'' program (2010CB922904), NSFC (11175248), NFFTBS (J1030310), and Level B Primary Leading Project (through USTC) of Chinese Academy of Sciences. H.F. would like to thank useful discussions and communications concerning about this topic with Luigi Amico, V. Buz\v{e}k, Jun-Peng Cao, Kai Chen, Zeng-Bing Chen, I. Cirac, G. M. D'Ariano Jiang-Feng Du, Lu-Ming Duan, Shao-Ming Fei, J. Fiurasek, N. Gisin, Guang-Can Guo, A. Hayashi, Yun-Feng Huang, W. Y. Hwang, H. Imai, Vladimir Korepin, Le-Man Kuang, L. C. Kwek, Shu-Shen Li, Hai-Qing Lin, Nai-Le Liu, Seth Lloyd, Gui-Lu Long, Li Lu, Shun-Long Luo, C. Macchieveallo, K. Matsumoto, Xin-Yu Pan, Jian-Wei Pan, Martin Plenio, Xi-Jun Ren, Vwani Roychowdhury, Chang-Pu Sun, Vlatko Vedral, Miki Wadati, Xiang-Bin Wang, Zi-Dan Wang, R. Werner, Ru-Quan Wang, Sheng-Jun Wu, Zheng-Jun Xi, Tao Xiang, Zhao-Xi Xiong, Eli Yablonovitch, Si-Xia Yu, P. Zanardi, Jing Zhang, Zheng-Wei Zhou, Xu-Bo Zou. He also would like to thank continuous support from En-Ge Wang, Yu-Peng Wang, Xin-Cheng Xie, Qi-Kun Xue and Lu Yu. We thank Jun Feng and Xi Chen for their careful reading this review and for numerous suggestions. We thank Ling-An Wu for helping us to fix the language problems. We thank Shuai Cui and Yu-Ran Zhang for drawing some pictures.
\section{Appendix} For the sequential quantum cloning machines, some detailed results are presented here. We provide the explicit form of the matrices $V^{[n]i_n}$ for different kinds of sequential UQCM.
\subsection{$N\rightarrow M$ sequential UQCM of qubits.} \begin{enumerate} \item When $n=1$: The upper left corner of $V^{[1]i_1}$ is \begin{equation*} V ^{\left[ 1\right]0}=\left( \begin{array}{cc} \lambda _{\left[ 1\right]1} & 0 \\ 0 & \lambda _{\left[ 1\right]2} \end{array} \right) ,\text{ }V ^{\left[ 1\right]1}=\left( \begin{array}{cc} 0 & \lambda _{\left[ 1\right]1} \\ \lambda _{\left[ 1\right]2} & 0 \end{array} \right) , \end{equation*} while for $\alpha _{1}\ge 3$, set $V _{xy}^{\left[ 1\right]i_{1}}=\frac{1}{\sqrt{2}}\delta _{xy} $.
\item Case $1<n\le M-N+m$.
For $1\le \alpha _{n},\alpha _{n-1}\le n$, \begin{eqnarray} V_{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]0}=\delta _{\alpha _{n}\alpha _{n-1}}\sqrt{ \frac{\overset{M-m-n}{\underset{k=-m}{\sum }}\beta _{m\left( \alpha _{n-1}-1+k\right) }^{2}\frac{C_{M-n}^{m+k}}{C_{M}^{m+\alpha _{n-1}-1+k}}}{ \overset{M-m-n+1}{\underset{k=-m}{\sum }}\beta _{m\left( \alpha _{n-1}-1+k\right) }^{2}\frac{C_{M-n+1}^{m+k}}{C_{M}^{m+\alpha _{n-1}-1+k}}}}, \end{eqnarray} otherwise $V _{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]0}=\delta _{\alpha _{n}\alpha _{n-1}}\frac{1}{\sqrt{2}}$.
For $2\le \alpha _{n}\le \left( n+1\right) $, $1\le \alpha _{n-1}\le n$, \begin{eqnarray} V _{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]1}=\delta _{\alpha _{n}\left( \alpha _{n-1}+1\right) }\sqrt{\frac{\overset{M-m-n}{ \underset{k=-m}{\sum }}\beta _{m\left( \alpha _{n-1}+k\right) }^{2}\frac{ C_{M-n}^{m+k}}{C_{M}^{m+\alpha _{n-1}+k}}}{\overset{M-m-n+1}{\underset{k=-m}{ \sum }}\beta _{m\left( \alpha _{n-1}-1+k\right) }^{2}\frac{C_{M-n+1}^{m+k}}{ C_{M}^{m+\alpha _{n-1}-1+k}}}}, \end{eqnarray}
for $\alpha _{n}=1, \alpha _{n-1}=n+1$, $V
_{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]1}=\frac{1}{\sqrt{2}}$, otherwise $V _{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]1}=\delta _{\alpha _{n}\alpha _{n-1}}\frac{1}{\sqrt{2}}$.
\item Case $M-N+m<n\le M-m$.
For $1\le \alpha _{n},\alpha _{n-1}\le \left( M-N+m+1\right) $, \begin{eqnarray} V _{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]0}=\delta _{\alpha _{n}\alpha _{n-1}}\sqrt{\frac{\overset{M-m-n}{\underset{k=-m}{\sum } }\beta _{m\left( \alpha _{n-1}-1+k\right) }^{2}\frac{C_{M-n}^{m+k}}{ C_{M}^{m+\alpha _{n-1}-1+k}}}{\overset{M-m-n+1}{\underset{k=-m}{\sum }}\beta _{m\left( \alpha _{n-1}-1+k\right) }^{2}\frac{C_{M-n+1}^{m+k}}{ C_{M}^{m+\alpha _{n-1}-1+k}}}}. \end{eqnarray}
For $2\le \alpha _{n}\le \left( M-N+m+1\right) $, $1\le \alpha _{n-1}\le \left( M-N+m\right) $, \begin{eqnarray} V _{\alpha _{n}\alpha _{n-1}}^{\left[ n\right] 1}=\delta _{\alpha _{n}\left( \alpha _{n-1}+1\right) } \sqrt{\frac{\overset{M-m-n}{\underset{k=-m}{\sum }}\beta _{m\left( \alpha _{n-1}+k\right) }^{2}\frac{C_{M-n}^{m+k}}{C_{M}^{m+\alpha _{n-1}+k}}}{ \overset{M-m-n+1}{\underset{k=-m}{\sum }}\beta _{m\left( \alpha _{n-1}-1+k\right) }^{2}\frac{C_{M-n+1}^{m+k}}{C_{M}^{m+\alpha _{n-1}-1+k}}}}, \end{eqnarray} otherwise, $V _{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]1}=0$.
\item Case $M-m<n\le M-1$.
(1). For $\left( m+n+1-M\right) \le \alpha _{n},\alpha _{n-1}\le \left( M-N+m+1\right) $, \begin{eqnarray} V _{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]0}=\delta _{\alpha _{n}\alpha _{n-1}}\sqrt{\frac{\overset {M-m-n}{\underset{k=-m}{\sum }}\beta _{m\left( \alpha _{n-1}-1+k\right) }^{2} \frac{C_{M-n}^{m+k}}{C_{M}^{m+\alpha _{n-1}-1+k}}}{\overset{M-m-n+1}{ \underset{k=-m}{\sum }}\beta _{m\left( \alpha _{n-1}-1+k\right) }^{2}\frac{ C_{M-n+1}^{m+k}}{C_{M}^{m+\alpha _{n-1}-1+k}}}}. \end{eqnarray}
For $\alpha _{n}=m+n-M$, $1\le \alpha _{n-1}\le \left( M-N+m+1\right) $, $V _{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]0}=0$. Otherwise, $V _{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]0}=\delta _{\alpha _{n}\alpha _{n-1}}\frac{1}{\sqrt{2}}$.
(2). For $\left( m+n+1-M\right) \le \alpha _{n}\le \left( M-N+m+1\right) $, $\left( m+n-M\right) \le \alpha _{n-1}\le \left( M-N+m\right) $, \begin{eqnarray} V_{\alpha _{n}\alpha _{n-1}}^{\left[ n\right] 1}=\delta _{\alpha _{n}\left( \alpha _{n-1}+1\right) }\sqrt{\frac{ \overset{M-m-n}{\underset{k=-m}{\sum }}\beta _{m\left( \alpha _{n-1}+k\right) }^{2}\frac{C_{M-n}^{m+k}}{C_{M}^{m+\alpha _{n-1}+k}}}{ \overset{M-m-n+1}{\underset{k=-m}{\sum }}\beta _{m\left( \alpha _{n-1}-1+k\right) }^{2}\frac{C_{M-n+1}^{m+k}}{C_{M}^{m+\alpha _{n-1}-1+k}}}}. \end{eqnarray}
For $\alpha _{n}=m+n-M$, $1\le \alpha _{n-1}\le \left( M-N+m+1\right) $, $V _{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]1}=0$. Otherwise, $V _{\alpha _{n}\alpha _{n-1}}^{\left[ n\right]1}=\delta _{\alpha _{n}\alpha _{n-1}}\frac{1}{\sqrt{2}}$.
\item Case $n=M$.
(1). For $\left( m+1\right) \le \alpha _{M},\alpha _{M-1}\le \left( M-N+m+1\right) $, \begin{eqnarray} V _{\alpha _{M}\alpha _{M-1}}^{\left[ M\right]0}=\delta _{\alpha _{M}\alpha _{M-1}}\sqrt{\frac{\beta _{m\left( \alpha _{M-1}-1-m\right) }^{2}/C_{M}^{\alpha _{M-1}-1}}{\beta _{m\left( \alpha _{M-1}-1-m\right) }^{2}/C_{M}^{\alpha _{M-1}-1}+\beta _{m\left( \alpha _{M-1}-m\right) }^{2}/C_{M}^{\alpha _{M-1}}}}. \end{eqnarray} For $\alpha _{M}=m$, $1\le \alpha _{M-1}\le \left( M-N+m+1\right) $, $V _{\alpha _{M}\alpha _{M-1}}^{\left[ M\right]0}=0$. Otherwise, $V _{\alpha _{M}\alpha _{M-1}}^{\left[ M\right]0}=\delta _{\alpha _{M}\alpha _{M-1}}\frac{1}{\sqrt{2}}$.
(2). For $\left( m+1\right) \le \alpha _{M}\le \left( M-N+m+1\right) $, $m\le \alpha _{M-1}\le \left( M-N+m\right) $, \begin{eqnarray} V _{\alpha _{M}\alpha _{M-1}}^{\left[ M\right]1}=\delta _{\alpha _{M}\left( \alpha _{M-1}+1\right) }\sqrt{\frac{\beta _{m\left( \alpha _{M-1}-m\right) }^{2}/C_{M}^{\alpha _{M-1}}}{\beta _{m\left( \alpha _{M-1}-1-m\right) }^{2}/C_{M}^{\alpha _{M-1}-1}+\beta _{m\left( \alpha _{M-1}-m\right) }^{2}/C_{M}^{\alpha _{M-1}}}}. \end{eqnarray}
For $\alpha _{M}=m$, $1\le \alpha _{M-1}\le \left( M-N+m+1\right) $, $V _{\alpha _{M}\alpha _{M-1}}^{\left[ M\right]1}=0$. Otherwise, $V _{\alpha _{M}\alpha _{M-1}}^{\left[ M\right]1}=\delta _{\alpha _{M}\alpha _{M-1}}\frac{1}{\sqrt{2}}$.
\item Case $n=M+l$.
(1). For $\left( m+1\right) \le \alpha _{M+l}\le \left( M-N+m-l+1\right) $, $\left( m+2\right) \le \alpha _{M+l-1}\le \left( M-N+m-l+2\right) $, \begin{eqnarray} V_{\alpha _{M+l}\alpha _{M+l-1}}^{\left[ M+l\right]0}=\delta _{\alpha _{M+l}\left( \alpha _{M+l-1}-1\right) }\sqrt{\frac{\alpha _{M+l-1}-m-1}{M-N-l+1}}. \end{eqnarray}
For $\alpha _{M+l}=\left( M-N+m-l+2\right) $, $1\le \alpha _{M+l-1}\le \left( M-N+m+1\right) $, $V _{\alpha _{M+l}\alpha _{M+l-1}}^{\left[ M+l\right] 0}=0$. Otherwise $V _{\alpha _{M+l}\alpha _{M+l-1}}^{\left[ M+l\right]0}=\delta _{\alpha _{M+l}\alpha _{M+l-1}}\frac{1}{\sqrt{2}}$.
(2). For $\left( m+1\right) \le \alpha _{M+l},\alpha _{M+l-1}\le \left( M-N+m-l+1\right) $, \begin{eqnarray} V _{\alpha _{M+l}\alpha _{M+l-1}}^{\left[ M+l\right]1}=\delta _{\alpha _{M+l}\alpha _{M+l-1}}\sqrt{\frac{ M-N-l-\alpha _{M+l-1}+m+2}{M-N-l+1}}. \end{eqnarray} For $\alpha _{M+l}=\left( M-N+m-l+2\right) $, $1\le \alpha _{M+l-1}\le \left( M-N+m+1\right) $, $V _{\alpha _{M+l}\alpha _{M+l-1}}^{\left[ M+l\right]1}=0$. Otherwise $V _{\alpha _{M+l}\alpha _{M+l-1}}^{\left[ M+l\right]1}=\delta _{\alpha _{M+l}\alpha _{M+l-1}}\frac{1}{\sqrt{2}}$.
We have now obtained all the operators $V _{(m)}^{\left[ n \right]i_{n}}$ for $0\le m\le N-m$, i.e., for $m\le N/2$. When $N-m<m\le N $, one finds that $V _{(m)}^{\left[ n\right]i_{n}}=V _{(N-m)}^{\left[ n\right] \overline{i_{n}}}$, where $\overline{i_{n}}=i_{n}+1\ \left( \text{mod}\ 2\right) $.
\end{enumerate}
\subsection{$N\rightarrow M$ sequential UQCM of qudits}
\begin{enumerate} \item Case $n=1$. \begin{equation*} \lambda_{\alpha_1}^{[1]}=\sqrt{\sum_{\bm{k}=-\bm{j}'}^{M-N-\bm{j}'} \beta_{\bm{m}(\bm{j}'+\bm{k})}^2\frac{m_{i_1+1}+j_{i_1+1}^{'}+k_{i_1+1}}{ M}}, \end{equation*} where $\beta_{\bm{m}\bm{j}}=\sqrt{\prod_{i=1}^d C_{m_i+j_i}^{m_i}/C_{M+d-1}^{M-N}}$. \begin{equation*} \Gamma_{\alpha_1}^{[1]i_1}=\delta_{\alpha_1\bm{j}'}. \end{equation*} \begin{equation*} V_{\alpha_1}^{[1]i_1}=\Gamma_{\alpha_1}^{[1]i_1}\lambda_{\alpha_1}^{[1]}. \end{equation*}
\item Case $1< n\le M-1$.
$$ \lambda_{\bm{j}'}^{[n]}=\sqrt{\sum_{\bm{k}=-\bm{j}'}^{M-N-\bm{j}'} \beta_{\bm{m}(\bm{j}'+\bm{k})}^2\prod_{i=1}^d C_{j_i^{'}+k_i+m_i}^{j_i^{'}}/C_M^n}.$$
\begin{equation*} \lambda_{\bm{j}"}^{[n-1]}= \sqrt{\sum_{\bm{k}'=-\bm{j}"}^{M-N-\bm{j}"}\beta_{\bm{m}(\bm{j}" +\bm{k}')}^2\prod_{i=1}^d C_{j_i^{"}+k_i^{'}+m_i}^{j_i^{"}}/C_{M}^{n-1}}. \end{equation*}
$$ \Gamma_{\bm{j}"\alpha_n}^{[n]i_n}=\delta_{\alpha_n(\bm{j}"+\hat{e}_{i_n+1})} \frac{1}{\lambda_{\bm{j}"}^{[n-1]}}\sqrt{\frac{\bm{j}_{i_n+1}^{"}+1}{n}}. $$ $$ V_{\alpha_n\bm{j}"}^{[n]i_n}=\delta_{\alpha_n(\bm{j}"+\hat{e}_{i_n+1})} \sqrt{\frac{\bm{j}_{i_n+1}^{"}+1}{n}}\frac{\lambda_{\bm{j}"+\hat{e}_{i_n+1}}^{[n]}}{ \lambda_{\bm{j}"}^{[n-1]}}.$$
\item Case $n=M$.
$$ \lambda_{\bm{j}"}^{[M]}=\beta_{\bm{m}(\bm{j}'-\bm{m})}.$$
$$ \lambda_{\bm{j}"}^{[M-1]}=\sqrt{\sum_{i_M=0}^{d-1}\beta_{\bm{m}(\bm{j}"-\bm{m} +\hat{e}_{i_M+1})}^2\frac{\bm{j}_{i_M+1}^{"}+1}{M}}.$$
$$ \Gamma_{\bm{j}"\alpha_M}^{[M]i_M}=\delta_{\alpha_M(\bm{j}"+\hat{e}_{i_M+1})} \frac{1}{\lambda_{\bm{j}"}^{[M-1]}}\sqrt{\frac{\bm{j}_{i_M+1}^{"}}{M}}.$$
$$ V_{\alpha_M\bm{j}"}^{[M]i_M}=\delta_{\alpha_M(\bm{j}"+\hat{e}_{i_M+1})} \frac{\lambda_{\alpha_M}^{[M]}}{\lambda_{\bm{j}"}^{[M-1]}}\sqrt{\frac{\bm{j}_{i_M+1}^{"}}{M}}.$$
\item Case $n=M+l$.
$$ \lambda_{\bm{j}'}^{[M+l]}=\sqrt{\sum_{\bm{k}=\bm{m}-\bm{j}'}^{M-N+\bm{m}-\bm{j}'} \beta_{\bm{m}(\bm{j}'-\bm{m}+\bm{k})}^2\prod_{i=1}^d C_{j_i^{'}-m_i+k_i}^{k_i}/C_{M-N}^l} .$$
$$ \lambda_{\bm{j}"}^{[M+l-1]}=\sqrt{\sum_{\bm{k}=\bm{m}-\bm{j}"}^{M-N+\bm{m}-\bm{j}"} \beta_{\bm{m}(\bm{j}"-\bm{m}+\bm{k})}^2\prod_{i=1}^d C_{j_i^{"}-m_i+k_i^{'}}^{k_i^{'}}/C_{M-N}^{l-1}} .$$
$$ \Gamma_{\bm{j}"\alpha_n}^{[M+l]i_n}=\delta_{\alpha_n(\bm{j}"-\hat{e}_{i_n+1})} \frac{1}{\lambda_{\alpha_n}^{[M+l]}}\sqrt{\frac{j_{i_n+1}^{"}-m_{i_n+1}}{M-N-l+1}} .$$
$$ V_{\alpha_n\bm{j}"}^{[n]i_n}=\delta_{\alpha_n(\bm{j}"-\hat{e}_{i_n+1})} \sqrt{\frac{j_{i_n+1}^{"}-m_{i_n+1}}{M-N-l+1}}. $$ \end{enumerate}
With the above information, one can build the explicit form of $V^{[n]i_n}$ according to the 2-dimensional case, and the extension is direct.
\end{document} |
\begin{document}
\title{Notes on multiplicativity of maximal output purity for completely positive qubit maps} \author{Koenraad M.R.~Audenaert} \email{[email protected]} \affiliation{Institute for Mathematical Sciences, Imperial College London, 53 Prince's Gate, London SW7 2PG, UK} \affiliation{Dept.\ of Mathematics, Royal Holloway, University of London, Egham, Surrey TW20 0EX, UK} \date{\today}
\begin{abstract} A problem in quantum information theory that has received considerable attention in recent years is the question of multiplicativity of the so-called maximal output purity (MOP) of a quantum channel. This quantity is defined as the maximum value of the purity one can get at the output of a channel by varying over all physical input states, when purity is measured by the Schatten $q$-norm, and is denoted by $\nu_q$. The multiplicativity problem is the question whether two channels used in parallel have a combined $\nu_q$ that is the product of the $\nu_q$ of the two channels. A positive answer would imply a number of other additivity results in QIT.
Very recently, P.\ Hayden has found counterexamples for every value of $q>1$. Nevertheless, these counterexamples require that the dimension of the channels grows unboundedly as $q$ tends to 1, and therefore they do not rule out multiplicativity for $q$ in intervals $[1,q_0)$ with $q_0$ depending on the channel dimension. I argue that this would be enough to prove additivity of entanglement of formation and of the classical capacity of quantum channels.
More importantly, no counterexamples have as yet been found in the important special case where one of the channels is a qubit-channel, i.e.\ its input states are 2-dimensional. In this paper I focus attention to this qubit case and I rephrase the multiplicativity conjecture in the language of block matrices and prove the conjecture in a number of special cases. \end{abstract}
\maketitle
\section{Introduction\label{sec:intro}} Additivity problems are amongst the most important, and notorious, open problems of quantum information theory. Basically, the question is whether or not certain information theoretic properties of composite quantum systems consisting of independent parts decompose as simple sums over these parts.
One of the more important instances of this question concerns the classical information carrying capacity of quantum channels. Is the total capacity of two quantum channels taken in parallel equal to the sum of the capacities of the separate channels? Roughly speaking, the classical capacity of a quantum channel quantifies the maximal achievable rate of error-free communication of classical information through the channel, when the classical information is encoded onto quantum states that are subsequently transmitted through the quantum channel and then decoded into classical information again \cite{holevo99}. By judiciously choosing encoding and decoding, the transmission errors incurred when passing through the quantum channel can be corrected. Theoretically speaking, for every channel, error correcting block codes can be devised so that the remaining probability of error vanishes asymptotically, when block size goes to infinity. The rate of a code is the number of classical bits of information carried, on average, by one quantum bit (qubit). The capacity of the channel is then the maximal rate of an error correcting code that asymptotically corrects all errors for that particular channel.
A basic result of classical information theory is that the capacity of two classical channels in parallel is just the sum of the two capacities. The additivity question for the classical capacity of a quantum channel asks whether this is still true for quantum channels with encoding/decoding based on quantum error correcting codes. If not, this means that the rate of transmission through the two channels can be increased by encoding the two streams of classical bits into one stream of \textit{entangled} quantum states, rather than into two independent streams of quantum states.
Other additivity questions in QIT concern the \textit{entanglement of formation} of bipartite quantum states, which is an important measure of entanglement, and the \textit{minimal output entropy} of a quantum channel. A surprising result of quantum information theory is that all these additivity questions are in fact equivalent \cite{ka03,shor}, despite the seemingly different contexts in which they are formulated. In this paper, I concentrate on what looks like the simplest instance of the additivity questions, namely the additivity of the minimal output entropy of a quantum channel.
When a pure state is sent through a quantum channel, i.e.\ when a quantum operation acts on a pure quantum state, the resulting state will in general be no longer pure but will be mixed. By expressing the purity of the resulting state in terms of a mathematical measure of purity, one can ask for the largest possible value of purity an output state can have when one can choose the input state freely. One such measure of purity is the von Neumann entropy $S(\rho):=-\mathop{\rm Tr}\nolimits[\rho\log\rho]$. As this is an inverted measure of purity (0 for pure states, positive for mixed states), this has to be minimised, yielding the \textit{minimal output entropy} of a quantum channel. Its precise definition is $$ \nu_S(\Phi) := \min_\psi S(\Phi(\ket{\psi}\bra{\psi})), $$ where $\Phi(\cdot)$ denotes the action of the quantum channel on a state. As is well-known, a quantum channel is mathematically defined as a trace-preserving completely positive map.
A quantity that is closely related to the minimal output entropy is the \textit{maximal output $q$-purity} (MOP). Here, purity is measured by the Schatten $q$-norm $$
||\rho||_q := \mathop{\rm Tr}\nolimits[\rho^q]^{1/q}, $$ a non-commutative generalisation of the familiar $\ell_q$ vector norm. This yields for the MOP: $$
\nu_q(\Phi) := \max_\psi ||\Phi(\ket{\psi}\bra{\psi})||_q. $$ The entropy is related to the Schatten norms via the limit $$ -x\log x = \lim_{q\to 1} \frac{1-x^q}{q-1}. $$ In \cite{ahw}, it was proven that the minimal output entropy is additive if the maximal output $q$-norm is multiplicative for all values of $q$ ``close to 1''; more precisely, if for a pair of given channels $\Phi$ and $\Omega$ there exists a number $q_0>1$ such that for all $1\le q< q_0$, $$ \nu_q(\Phi\otimes\Omega) = \nu_q(\Phi)\,\,\nu_q(\Omega) $$ holds.
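As a concrete numerical illustration (not part of the original analysis), the following Python sketch estimates $\nu_q$ for the qubit depolarising channel $\Phi_p(\rho)=p\rho+(1-p)\mathrm{\openone}/2$ by sampling random pure input states; since the output spectrum $\{(1\pm p)/2\}$ is the same for every pure input, the sampled maximum matches the closed-form value.
\begin{verbatim}
import numpy as np

def schatten_q(rho, q):
    ev = np.clip(np.linalg.eigvalsh(rho), 0, None)  # rho is PSD
    return np.sum(ev ** q) ** (1 / q)

def depolarising(rho, p):
    return p * rho + (1 - p) * np.eye(2) / 2

p, q = 0.6, 1.5
rng = np.random.default_rng(0)
best = 0.0
for _ in range(2000):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    best = max(best,
               schatten_q(depolarising(np.outer(v, v.conj()), p), q))

exact = (((1 + p) / 2) ** q + ((1 - p) / 2) ** q) ** (1 / q)
print(best, exact)     # the two values agree
\end{verbatim}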
Most of the recent efforts on additivity has been directed towards this multiplicativity problem, because of its simple formulation (and because of the wealth of available techniques for dealing with Schatten norms). Indeed, when comparing the multiplicativity problem to the other additivity problems, this one almost looks too simple. However, closer investigation of the equivalence theorems reveal that the complexity is hidden in the dimension of the channel. More precisely, Theorem 2 from \cite{ka03} states that ``if there exists a real number $q_0>1$ such that $\nu_q(\Lambda)$ is multiplicative for all $1\le q\le q_0$ and for any CP map $\Lambda$, then the entanglement of formation is strongly superadditive'', while according to the main result in \cite{msw}, strong superadditivity of the entanglement of formation implies additivity of the classical capacity of quantum channels. These two theorems do not mention dimensionality of the states/CP maps/channels involved, because they are stated, in a global manner, in terms of the set of \textit{all} states/CP maps/channels. However, on closer inspection of the proofs one finds that something more specific has actually been proven, in terms of the sets of all CP maps/channels with specified input and output dimension (note, however, that Shor's equivalence theorems do not offer this possibility): multiplicativity of $\nu_q$, with $q\downarrow 1$, for all CP maps of dimension \begin{eqnarray*} \Lambda_1:&& {\cal H}_{1,in}\mapsto{\cal H}_{1,out} \\ \Lambda_2:&& {\cal H}_{2,in}\mapsto{\cal H}_{2,out}, \end{eqnarray*} implies additivity of classical capacity for all channels of dimension \begin{eqnarray*} \Phi_I: && {\cal H}_{I,in}\mapsto{\cal H}_{I,out} \\ \Phi_{II}:&& {\cal H}_{II,in}\mapsto{\cal H}_{II,out}, \end{eqnarray*} with \begin{eqnarray*} {\cal H}_{1,in} &=& {{\cal H}}_{I,out}^{\otimes 2} \otimes {\cal H}_{I,in} \\ {\cal H}_{1,out} &=& {\cal H}_{I,out} \\ {\cal H}_{2,in} &=& {{\cal H}}_{II,out}^{\otimes 2} \otimes {\cal H}_{II,in} \\ {\cal H}_{2,out} &=& {\cal H}_{II,out}. \end{eqnarray*} As an important example, to prove additivity of the classical capacity of a pair of channels, where one channel is a qubit channel ($2\mapsto 2$), one needs to prove multiplicativity of MOP for all pairs of CP maps, where the first one is of dimension $8\mapsto 2$. Hence, indeed, as regards the multiplicativity of MOP, the complexity of the classical capacity has been hidden in the increased input dimension of the channels that have to be considered.
Originally, the hope was that $q_0$ in the statement of the multiplicativity question could be taken to be infinity. Soon after the appearance of \cite{ahw}, however, a counterexample was found for $q>4.78$, involving two identical channels of dimension $3\mapsto 3$ \cite{hw}. Very recently, the existence of channels was discovered (in a non-constructive way) that violate multiplicativity for $q$ arbitrarily close to 1 \cite{hayden,winter}. Note, however, that this does not (yet) disprove additivity of minimal output entropy, because the dimension of the channels involved increases as the minimal value of $q$ for which they violate multiplicativity approaches 1. For fixed dimension, there is always room for multiplicativity for $q$ closer to 1, and by the dimension argument mentioned above this is all one needs. Thus, the claim mentioned in the title of \cite{hayden} that ``the maximal $p$-norm multiplicativity conjecture is false'' is not entirely correct.
In any case, these counterexamples show that even if multiplicativity holds, proving it in some neighbourhood of 1 will be very hard; most, if not all, known results on Schatten $q$-norms hold over intervals for $q$ like $[1,\infty)$ or $[1,2]$ and not on such intervals as $[1,q_0]$ with $q_0$ dimension dependent.
On the other hand, the more interesting channels are the lower-dimensional ones, esp.\ the qubit channels, and by the above-mentioned dimension argument, one can restrict attention to multiplicativity for channels with equally low output dimension. For qubit channels, no counterexamples have yet been found. In fact, multiplicativity of MOP when one of the channels is a $2\mapsto 2$ channel has been proven for $q=2$ and $q\ge 4$ \cite{king_p4}, and for all $q\ge1$ when one of the channels is a unital $2\mapsto 2$ channel \cite{king_unital}. Other positive results include multiplicativity for all $q\ge1$ and for all dimensions \cite{king} when one of the channels is entanglement breaking (EB) \cite{holevo99}, i.e.\ is of the form $\Phi(\rho) = \sum_k A_k \mathop{\rm Tr}\nolimits[B_k\rho]$, for $A_k,B_k\ge0$ (that is, the Choi matrix of $\Phi$ corresponds to a separable state).
In this paper I study the multiplicativity problem for the important case when one of the channels has input dimension 2, and reduce the problem to a number of simpler forms, some of which do not hold in general but can be proven in specific instances. While the results I obtain here do not boil down to new multiplicativity results, I do explore new mathematical methods, and the hope is that this will provide new inspiration to tackle the additivity problem.
\section{Notations\label{sec:not}} In this paper, I call a qubit map any linear map from ${\mathbb{C}}^2$ to ${\mathbb{C}}^d$, $d\ge 2$. This is more general than the customary definition, by which $d=2$. The reason for this deviation is that the Theorems and Conjectures extend naturally to these generalised qubit maps.
I will employ overloaded notation where the symbol $\Phi$ either refers to the map or to the Choi matrix of that map. For example, in expressions like $\Phi(\rho)$, $\Phi$ refers to the map; when used ``stand-alone'', as in $||\Phi||_q$, it refers to the Choi matrix.
I denote the blocks of the Choi matrix of $\Phi$ by $\Phi_{ij}:=\Phi(e^{ij})$.
Unitarily invariant (UI) matrix norms are denoted $|||.|||$ and are norms that have the property $|||UAV|||=|||A|||$ for any unitary $U$ and $V$. For such norms the equality $|||AA^*|||=|||A^*A|||$ holds. This follows from the inequality $|||AB|||\le|||BA|||$ which holds for all $A$ and $B$ such that $AB$ is normal (\cite{bhatia}, Proposition IX.1.1). When $B=A^*$, both $AB$ and $BA$ are normal, hence equality must then hold.
\section{A Conjecture for Qubit CP Maps\label{sec:conj}} Let $\Phi$ be a CP qubit map from ${\mathbb{C}}^2$ to ${\mathbb{C}}^{d_1}$, and let $\rho$ be a $2\times d_2$ state, block partitioned as $$ \rho = \twomat{B}{C}{C^*}{D}. $$
In \cite{kingconj}, C.\ King conjectured, and proved in specific instances, that for $q\ge 1$ \begin{equation}\label{eq:chris0} \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)} \le \nu_q(\Phi)\,\, (\beta+\delta), \end{equation} where $\beta=\schatten{q}{B}$ and $\delta=\schatten{q}{D}$. He also noted that this Conjecture would imply multiplicativity of MOP when one of the channels is a qubit channel. While this Conjecture is already a major simplification of the multiplicativity problem (it involves only one channel), it is still non-trivial due to the fact that a maximisation occurs in the RHS (in the factor $\nu_q$). It would clearly be very helpful if the remaining maximisation could be removed in one way or another. An initially rather promising idea was that the following inequality would imply (\ref{eq:chris0}) \cite{kingpriv}: \begin{equation}\label{eq:case3} \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)} \le \max_\theta \schatten{q}{\Phi(\twomat{\beta}{\exp(i\theta)\sqrt{\beta\delta}}{\exp(-i\theta)\sqrt{\beta\delta}}{\delta})}. \end{equation} That is, the maximisation over all pure qubit input states is replaced by a maximisation over a single angle $\theta$.
To see how this implies multiplicativity, note first that the matrix $$\frac{1}{\beta+\delta}\twomat{\beta}{\exp(i\theta)\sqrt{\beta\delta}}{\exp(-i\theta)\sqrt{\beta\delta}}{\delta}$$ represents a state (in fact, a pure one), so that the RHS of (\ref{eq:case3}) is bounded above by $(\beta+\delta)\nu_q(\Phi)$, thereby implying (\ref{eq:chris0}). Now put $\rho=(\mathrm{\openone}\otimes\Omega)(\tau)$, with $\tau$ a $2\times d$ state; then the LHS of (\ref{eq:case3}) is $\schatten{q}{(\Phi\otimes\Omega)(\tau)}$. The block structure of $\rho$ is then given by $B=\Omega(\tau^{11})$, $D=\Omega(\tau^{22})$, yielding the inequality $\beta+\delta\le \nu_q(\Omega)(\mathop{\rm Tr}\nolimits(\tau^{11})+\mathop{\rm Tr}\nolimits(\tau^{22}))$. Since $\mathop{\rm Tr}\nolimits(\tau^{11})+\mathop{\rm Tr}\nolimits(\tau^{22})=\mathop{\rm Tr}\nolimits(\tau)=1$, the RHS of (\ref{eq:chris0}) is indeed bounded above by $\nu_q(\Omega)\nu_q(\Phi)$, implying multiplicativity of the MOP.
Note that, when the off-diagonal block $\Phi_{12}$ (and thus $\Phi_{21}$) is Hermitian, the optimal value of $\exp(i\theta)$ in the RHS of (\ref{eq:case3}) is $\pm1$. Indeed, \begin{eqnarray*} \Phi(\twomat{\beta}{\exp(i\theta)\sqrt{\beta\delta}}{\exp(-i\theta)\sqrt{\beta\delta}}{\delta}) &=& \beta \Phi_{11}+\delta \Phi_{22} + 2\cos\theta\,\sqrt{\beta\delta}\,\Phi_{12}. \end{eqnarray*} This is linear in $\cos\theta$, hence the RHS of (\ref{eq:case3}) is the maximisation of a convex function (the $q$-norm of the matrix) in $\cos\theta$. As $\cos\theta$ runs over a convex set (the interval $[-1,1]$), the maximum is attained at an extreme point, hence at $\exp(i\theta)=\pm1$.
Unfortunately, numerical experiments revealed that (\ref{eq:case3}) does not hold in general; I will present such a counterexample in the next Section. Nevertheless, it is the purpose of this paper to study the statement and introduce a number of techniques to prove it in a variety of special cases. I start, in the next Section, with the idea of `taking square roots' of CP maps and states.
\section{Taking `Square Roots'\label{sec:sqrt}} Positivity of $\rho$ and complete positivity of the map $\Phi$ allow us to `take their square roots' and obtain a `square-rooted' version of inequality (\ref{eq:case3}), in the following sense. Since $\rho$ is PSD, it can be written as $\rho = X^* X$, where $X$ is a $1\times 2$ block matrix of size
$R\times d_{in}$ ($R$ being the rank of $\rho$). Denoting $X=(X_1|X_2)$, we have $B=X_1^*X_1$, $D=X_2^*X_2$, and $C=X_1^*X_2$.
Similarly, $\Phi$ is CP, thus its Choi-matrix can be written as $\Phi=G^* G$, where $G$ is a $1\times 2$ block matrix with blocks of size $K\times d_{out}$
($K$ is the number of Kraus elements, $d_{out}$ is the dimension of the output Hilbert space): $G=(G_1|G_2)$.
The LHS of (\ref{eq:chris0}) and (\ref{eq:case3}) is equal to the square of the $2q$-norm of the `square root' of $(\Phi\otimes\mathrm{\openone})(\rho)$: $$ (\Phi\otimes\mathrm{\openone})(\rho) = (G_1\otimes X_1+G_2\otimes X_2)^* (G_1\otimes X_1+G_2\otimes X_2), $$ so $$ \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)} = \schatten{2q}{G_1\otimes X_1+G_2\otimes X_2}^2. $$
Likewise, the `square-root' of the RHS of (\ref{eq:chris0}) is $$
\max_\psi ||\sum_i \psi_i G_i||_{2q}^2\,\,(||X_1||_{2q}^2+||X_2||_{2q}^2) $$ so that (\ref{eq:chris0}) is equivalent to \begin{equation}\label{eq:chris0sqrt} \schatten{2q}{G_1\otimes X_1+G_2\otimes X_2} \le
\max_\psi \frac{||\sum_i \psi_i G_i||_{2q}}{\sqrt{|\psi_1|^2+|\psi_2|^2}}\,\,\sqrt{||X_1||_{2q}^2+||X_2||_{2q}^2}. \end{equation} This says that $$
\frac{||G_1\otimes X_1+G_2\otimes X_2||_{2q}}{\sqrt{||X_1||_{2q}^2+||X_2||_{2q}^2}} $$ attains its maximum over all $X_i$ when $X_2=\alpha X_1$, for certain (complex) values of the scalar $\alpha$.
The square-root of the RHS of (\ref{eq:case3}) is $$ \Phi(\twomat{\beta}{\exp(i\theta)\sqrt{\beta\delta}}{\exp(-i\theta)\sqrt{\beta\delta}}{\delta}) = \left(G_1 \sqrt{\beta} +G_2 \sqrt{\delta} \exp(i\theta)\right)^*
\left(G_1 \sqrt{\beta} +G_2 \sqrt{\delta} \exp(i\theta)\right), $$ which can be written as $$
\schatten{2q}{G_1 ||X_1||_{2q}+G_2 ||X_2||_{2q} e^{i\theta}}^2, $$
where I used $\beta = ||X_1^* X_1||_q = ||X_1||_{2q}^2$. In this way, (\ref{eq:case3}) is equivalent to \begin{equation} \schatten{2q}{G_1\otimes X_1+G_2\otimes X_2}
\le \max_\theta \schatten{2q}{G_1 ||X_1||_{2q} + e^{i\theta}G_2 ||X_2||_{2q}}. \label{eq:case3c} \end{equation}
I now present the promised counterexample to inequality (\ref{eq:case3}), in its square-rooted form (\ref{eq:case3c}). Consider the diagonal matrices $G_1=X_1=\mathop{\rm Diag}\nolimits(1,b), G_2=X_2=\mathop{\rm Diag}\nolimits(b,-1)$, with $0\le b\le 1$; then the inequality is violated when $2<2q< p_0$, where $p_0$ is a root of the equation $((1+b)^p+(1-b)^p)(1+b^p) = 2(1+b^2)^p$ in $p$. Fortunately, this counterexample does not violate multiplicativity since it corresponds to block-diagonal $\rho$ and $\Phi$; thus $\Phi$ is EB and $\rho$ is separable, whence multiplicativity holds.
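This violation is easy to reproduce numerically. The following minimal sketch (assuming NumPy; the values $b=0.5$ and $p=2q=2.5$ are merely illustrative choices inside the violation window) compares the two sides of (\ref{eq:case3c}) and confirms that the left-hand side is the larger one:
\begin{verbatim}
import numpy as np

def schatten(A, p):
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s**p)**(1.0/p)

b, p = 0.5, 2.5                       # p plays the role of 2q
G1 = X1 = np.diag([1.0, b])
G2 = X2 = np.diag([b, -1.0])

lhs = schatten(np.kron(G1, X1) + np.kron(G2, X2), p)
thetas = np.linspace(0.0, 2*np.pi, 2001)
rhs = max(schatten(schatten(X1, p)*G1 + np.exp(1j*t)*schatten(X2, p)*G2, p)
          for t in thetas)
print(lhs, rhs, lhs > rhs)            # prints True: (eq:case3c) is violated
\end{verbatim}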
\section{Rank One Case} In this Section, I describe a technique called the method of conjugation, and use it to obtain results for the cases where either the CP map or the state has rank 1.
The method of conjugation amounts to transforming existing relations into new ones by replacing the `components' of the expressions by their Hermitian conjugates, and exploiting in one way or another the fact that for any UI norm $|||AA^*||| = |||A^*A|||$. What exactly is meant by `components' depends very much on the situation; I will describe a number of applications to illustrate the method. This method is not new; it appears, for example, in \cite{bhatia94}.
Suppose we have a $d\times2$ bipartite state $\rho$ in block-matrix form, and we decompose it as $$ \rho = \twovec{X^*}{Y^*} \twovect{X}{Y}, $$ then a possible way of conjugating $\rho$ is to conjugate its components $X$ and $Y$. This gives rise to a new matrix, of different dimensions, which I denote by $\tilde{\rho}$, and which is given by $$ \tilde{\rho} = \twovec{X}{Y} \twovect{X^*}{Y^*}. $$ I want to stress here that the tilde is just a label and not a functional operation, quite simply because that operation is not uniquely defined; infinitely many $X$ and $Y$ exist for one and the same $\rho$, each giving rise to different $\tilde{\rho}$.
Exactly the same can be done for a $2\mapsto d$ CP map $\Phi$. Let us decompose its Choi matrix as $$ \Phi = \twovec{G^*}{H^*} \twovect{G}{H}, $$ then conjugation yields the new map $$ \tilde{\Phi} = \twovec{G}{H} \twovect{G^*}{H^*}. $$ If $\Phi$ is a map from ${\mathbb{C}}^2$ to ${\mathbb{C}}^d$ of rank $R$ (that is, it can be represented by a minimal number $R$ of Kraus elements) then one can find blocks $G$ and $H$ of size $R\times d$, so that $\tilde{\Phi}$ is a map from ${\mathbb{C}}^2$ to ${\mathbb{C}}^R$ of rank at most $d$.
The relation linking conjugated state and map to their originals is: for any UI norm \begin{equation}\label{eq:conjug}
|||(\tilde{\Phi}\otimes\mathrm{\openone})(\tilde{\rho})||| = |||(\Phi\otimes\mathrm{\openone})(\rho)|||.
\end{equation} This is proven by writing the expressions out in terms of the blocks and exploiting $|||AA^*|||=|||A^*A|||$. Indeed, $(\Phi\otimes\mathrm{\openone})(\rho) = (G\otimes X+H\otimes Y)^*(G\otimes X+H\otimes Y)$, and $(\tilde{\Phi}\otimes\mathrm{\openone})(\tilde{\rho}) = (G\otimes X+H\otimes Y)(G\otimes X+H\otimes Y)^*$.
A simple consequence of (\ref{eq:conjug}) is that $\nu_q(\tilde{\Phi}) = \nu_q(\Phi)$. One just applies (\ref{eq:conjug}) for qubit states $\rho$ (the `blocks' of $\rho$ are scalars) and notes that $\tilde{\rho}$ is the complex conjugate of $\rho$, whence the maximisation over all $\rho$ coincides with the maximisation over all $\tilde{\rho}$.
The concept of a \textit{complementary channel} introduced in \cite{devetak03,holevo05} is essentially a specific instance of such a conjugated map. Let a channel $\Phi$ on a space ${\cal H}$ be represented in Stinespring form by $$ \Phi(\rho) = \mathop{\rm Tr}\nolimits_{aux}(U(\rho\otimes\omega)U^*), $$ where $\omega$ is a fixed ancilla state on the space ${\cal H}_{aux}$, and $U$ is a unitary on ${\cal H}\otimes{\cal H}_{aux}$. The, or rather `a' complementary channel is then defined as a channel with Stinespring form $$ \Phi'(\rho) = \mathop{\rm Tr}\nolimits_{{\cal H}}(U(\rho\otimes\omega)U^*). $$ Again, for a given $\Phi$, $\Phi'$ is not unique as it depends on the choice of $\omega$ and $U$ \cite{holevo05}.
\begin{proposition} The complementary channel $\Phi'$ defined above is a conjugated map of the complex conjugation of $\Phi$. \end{proposition} \textit{Proof.} Let the ancilla state $\omega$ be the pure state $\ket{0}\bra{0}$. If $\Phi$ has Kraus representation $\Phi(\rho)=\sum_k A_k \rho A_k^*$ (where the $A_k$ are defined by
$\langle j|A_k|m\rangle = \langle j,k|U|m,0\rangle$), then the complementary channel $\Phi'$ satisfies the relation
$\langle k|\Phi'(\rho)|j\rangle = \mathop{\rm Tr}\nolimits[A_k \rho A_j^*]$.
The Choi matrix of $\Phi$ can thus be decomposed as $\Phi = \twovec{G_1^*}{\vdots} \twovect{G_1}{\cdots}$, with $G_m^*\ket{k} = A_k\ket{m}$. Likewise, the Choi matrix of $\Phi'$ can be decomposed as $\Phi' = \twovec{{G'}_1^*}{\vdots} \twovect{{G'}_1}{\cdots}$. By taking $\rho=\ket{m}\bra{l}$, we find $$
\langle k|\Phi'(\ket{m}\bra{l})|j\rangle = \mathop{\rm Tr}\nolimits[A_k \ket{m}\bra{l} A_j^*] = \langle l|A_j^* A_k|m\rangle, $$ while on the other hand $$
\langle k|\Phi'(\ket{m}\bra{l})|j\rangle = \langle k|{G'}_m^*\,\,G'_l|j\rangle = \overline{\bra{j}{G'}_l^*\,\,{G'}_m\ket{k}}. $$ We can, therefore, make the identification $\overline{{G'}_m}\ket{k} = A_k\ket{m} = G_m^*\ket{k}$, so that, indeed, $G'_m = \overline{G}_m^* = G_m^T$.
$\square$\par\vskip24pt
Using this method of conjugation, we can prove three special cases of inequality (\ref{eq:case3}).
The first special case is when $\Phi$ is the identity map. In that case the RHS of (\ref{eq:case3}) reduces to $\beta+\delta$, and we get: \begin{theorem} For $\rho\ge0$ partitioned as below, and $q\ge 1$, \begin{equation}
\schatten{q}{\rho}=\schatten{q}{\twomat{B}{C}{C^*}{D}} \le ||B||_q + ||D||_q. \end{equation} \end{theorem} This is well-known, and rather easy to prove. In fact, it holds not only for the Schatten norms, but for any UI norm, and not only for $2\times2$ partitionings, but for any symmetric partitioning.
\textit{Proof.} The general structure of the proof is: conjugate, apply the triangle inequality, then conjugate again.
By positivity of $\rho$, we can write $$ \rho = \twovec{X^*}{Y^*} \twovect{X}{Y}, $$ where $X$ and $Y$ are general $d\times 2d$ matrices. Then after conjugating the two factors (not the blocks, but the whole matrix), we can exploit the triangle inequality to find \begin{eqnarray*} \schatten{q}{\rho} &=& \schatten{q}{\twovec{X^*}{Y^*}\twovect{X}{Y}} \\ &=& \schatten{q}{\twovect{X}{Y}\twovec{X^*}{Y^*}} \\
&=& ||XX^*+YY^*||_q \\
&\le& ||XX^*||_q + ||YY^*||_q \\
&=& ||X^*X||_q + ||Y^*Y||_q \\
&=& ||B||_q+||D||_q. \end{eqnarray*}
$\square$\par\vskip24pt
An elaboration of the previous argument yields the case of single-element CP maps $\Phi$, that is maps of the form $\Phi(\rho)=A\rho A^*$. \begin{theorem} Inequality (\ref{eq:case3}) holds for $q\ge1$, for $\Phi$ a single-element CP map, and for any state $\rho$. \end{theorem} \textit{Proof.} In this case $A$ has 2 columns, say $A_1$ and $A_2$, and the Choi matrix of $\Phi$ is given by the rank-1 matrix $$ \Phi = \twovec{A_1}{A_2} \twovect{A_1^*}{A_2^*}. $$ Let again $$ \rho = \twovec{X^*}{Y^*} \twovect{X}{Y}, $$ then, by the conjugation identity (\ref{eq:conjug}), \begin{eqnarray*} \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)} \quad = \quad \schatten{q}{(\tilde{\Phi}\otimes\mathrm{\openone})(\tilde{\rho})} &=& \schatten{q}{A_1^*A_1\, XX^* + A_2^*A_2\, YY^* + A_2^*A_1\, Y^*X + A_1^*A_2\, X^*Y}, \end{eqnarray*} where I exploited the fact that $A_1$ and $A_2$ are vectors, so that the quantities $A_j^*A_k$ are scalars. We can do the same thing for the RHS, and get the \textit{scalar} quantity \begin{equation}\label{eq:ii} \schatten{q}{\Phi(\twomat{\beta}{\exp(i\theta)\sqrt{\beta\delta}}{\exp(-i\theta)\sqrt{\beta\delta}}{\delta})}
= \left| A_1^*A_1\, \beta + A_2^*A_2\, \delta
+ A_2^*A_1\,\exp(i\theta)\sqrt{\beta\delta} + A_1^*A_2\,\exp(-i\theta)\sqrt{\beta\delta} \right| \end{equation} Comparison of LHS and RHS in this form invites the idea of using the triangle inequality again. \begin{eqnarray*} \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)}
&\le& A_1^*A_1 \schatten{q}{XX^*} + A_2^*A_2 \schatten{q}{YY^*} + |A_2^*A_1| \schatten{q}{Y^*X}
+ |A_1^*A_2| \schatten{q}{X^*Y}. \end{eqnarray*} Now we know that $\schatten{q}{XX^*} = \beta$ and $\schatten{q}{YY^*} = \delta$. Furthermore, by the Cauchy-Schwarz inequality for UI norms, $\schatten{q}{Y^*X}\le \schatten{q}{X^*X}^{1/2}\schatten{q}{Y^*Y}^{1/2}=\sqrt{\beta\delta}$. Thus \begin{eqnarray*} \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)}
&\le& A_1^*A_1 \beta + A_2^*A_2 \delta + |A_2^*A_1| \sqrt{\beta\delta}
+ |A_1^*A_2| \sqrt{\beta\delta}.
\end{eqnarray*} By taking $\theta$ such that $|A_2^*A_1| = \exp(i\theta) A_2^*A_1$, the last expression coincides with the RHS of (\ref{eq:ii}).
$\square$\par\vskip24pt
As a third and final special case, we can in a similar fashion prove (\ref{eq:case3}) for any pure input state $\rho$. \begin{theorem} Inequality (\ref{eq:case3}) holds for $q\ge1$, for $\Phi$ a CP map, and for pure states $\rho$. \end{theorem} \textit{Proof.} Let $$ \Phi = \twovec{X^*}{Y^*} \twovect{X}{Y}, $$ and let $\rho=\ket{\psi}\bra{\psi}$, with $$ \psi = \twovec{\psi_1}{\psi_2}. $$ Conjugation of $\rho$, via conjugation of its components $\psi_1$ and $\psi_2$, then gives the qubit state $$ \tilde{\rho} = \twomat{\inpr{\psi_1}{\psi_1}}{\inpr{\psi_1}{\psi_2}}{\inpr{\psi_2}{\psi_1}}{\inpr{\psi_2}{\psi_2}}. $$ Thus $$ \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)} = \schatten{q}{\inpr{\psi_1}{\psi_1}XX^* + \inpr{\psi_1}{\psi_2}XY^* + \inpr{\psi_2}{\psi_1}YX^* + \inpr{\psi_2}{\psi_2}YY^*}. $$
Since we're dealing with a pure state, $\beta=\schatten{q}{\psi_1 \psi_1^*}=\mathop{\rm Tr}\nolimits(\psi_1 \psi_1^*)=\inpr{\psi_1}{\psi_1}$, and similarly, $\delta=\inpr{\psi_2}{\psi_2}$. Also, for some $\theta$, $\inpr{\psi_1}{\psi_2} = e^{i\theta} |\inpr{\psi_1}{\psi_2}| = s e^{i\theta} \sqrt{\beta\delta}$, with $0\le s\le1$ (by the Cauchy-Schwarz inequality). Then $$ \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)} = \schatten{q}{\beta X{X}^* + s e^{i\theta}\sqrt{\beta\delta}X{Y}^* + s e^{-i\theta}\sqrt{\beta\delta}Y{X}^* + \delta Y{Y}^*}. $$ Now notice $s e^{i\theta} = p e^{i\theta}+(1-p)(-e^{i\theta})$ for $p=(1+s)/2$. Thus \begin{eqnarray*} \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)} &\le& p\schatten{q}{\beta X{X}^* + e^{i\theta}\sqrt{\beta\delta}X{Y}^* + e^{-i\theta}\sqrt{\beta\delta}Y{X}^* + \delta Y{Y}^*} \\ && +(1-p)\schatten{q}{\beta X{X}^* - e^{i\theta}\sqrt{\beta\delta}X{Y}^* - e^{-i\theta}\sqrt{\beta\delta}Y{X}^* + \delta Y{Y}^*} \\ &\le& \max_\theta \schatten{q}{\beta X{X}^* + e^{i\theta}\sqrt{\beta\delta}X{Y}^* + e^{-i\theta}\sqrt{\beta\delta}Y{X}^* + \delta Y{Y}^*} \\ &=& \max_\theta \schatten{q}{\tilde{\Phi}(\twomat{\beta}{e^{i\theta}\sqrt{\beta\delta}}{e^{-i\theta}\sqrt{\beta\delta}}{\delta})} \\ &=& \max_\theta \schatten{q}{\Phi(\twomat{\beta}{e^{-i\theta}\sqrt{\beta\delta}}{e^{i\theta}\sqrt{\beta\delta}}{\delta})}. \end{eqnarray*}
$\square$\par\vskip24pt
\section{Positive Off-Diagonal Blocks} In the case where the off-diagonal block $\Phi_{12}$ is PSD, a very general Theorem can be proven for linear maps with general input and output dimensions.
First we need an Araki-Lieb-Thirring (A-L-T) type inequality for general operators, proven in \cite{kaijiss}: \begin{proposition}\label{prop:LTG} For general operators $F$ and $H$, and for $q\ge 1$, \begin{equation}
\mathop{\rm Tr}\nolimits|FHF^*|^q \le \mathop{\rm Tr}\nolimits\left((F^*F)^q \frac{|H|^q+|H^*|^q}{2}\right). \end{equation} \end{proposition}
The following Proposition has appeared before as Lemma 2 in \cite{king}, in somewhat different form, for the case where all matrices involved are PSD. In that form, the proof relied on the A-L-T inequality. Having now the stronger inequality from Proposition \ref{prop:LTG} at our disposal, we can lift the original Proposition to the following more general setting: \begin{proposition}\label{prop:EB1} For $A_k\ge 0$ and general $B_k$, and any $q\ge 1$, \begin{equation}\label{eq:Apos0} \schatten{q}{\sum_k A_k\otimes B_k} \le \schatten{q}{\sum_k A_k} \, \max_j \schatten{q}{B_j}. \end{equation} \end{proposition} \textit{Proof.} Proceeding as in the proof of Lemma 2 in \cite{king}, I introduce the following notations (which are possible because the $A_k$ are PSD): \begin{eqnarray*} F &=& (\sqrt{A_1} \otimes \mathrm{\openone} \ldots \sqrt{A_K} \otimes \mathrm{\openone}) \\ G &=& (\sqrt{A_1} \ldots \sqrt{A_K}) \\ H &=& \bigoplus_k \mathrm{\openone}\otimes B_k. \end{eqnarray*} I denote by $X_{kk}$ the $k$-th diagonal block of a matrix in the same partitioning as $H$. For example, $H_{kk}=\mathrm{\openone}\otimes B_k$.
Using these notations, $\sum_k A_k\otimes B_k$ can be written as $FHF^*$. By Proposition \ref{prop:LTG}, \begin{eqnarray*}
\schatten{q}{\sum_k A_k\otimes B_k}^q &=& \mathop{\rm Tr}\nolimits[|FHF^*|^q] \\
&\le& \mathop{\rm Tr}\nolimits[(F^* F)^q \, (|H|^q+|H^*|^q)]/2 \\
&=& \sum_k \mathop{\rm Tr}\nolimits[[(F^* F)^q]_{kk} \, (\mathrm{\openone} \otimes (|B_k|^q+|B_k^*|^q))]/2 \\
&=& \sum_k \mathop{\rm Tr}\nolimits[[(G^* G)^q]_{kk}] \, \mathop{\rm Tr}\nolimits[|B_k|^q] \\
&\le& \max_j \mathop{\rm Tr}\nolimits[|B_j|^q] \, \sum_k \mathop{\rm Tr}\nolimits[[(G^* G)^q]_{kk}] \\
&=& \max_j \mathop{\rm Tr}\nolimits[|B_j|^q] \, \mathop{\rm Tr}\nolimits[(G^* G)^q]. \end{eqnarray*} Then noting $$ \mathop{\rm Tr}\nolimits[(G^* G)^q] = \mathop{\rm Tr}\nolimits[(G G^*)^q] = \mathop{\rm Tr}\nolimits[(\sum_k A_k)^q], $$ and taking $q$-th roots yields the Proposition.
$\square$\par\vskip24pt
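As a purely illustrative sanity check of Proposition \ref{prop:EB1} (the dimensions, the value of $q$ and the random test matrices below are arbitrary choices, assuming NumPy), one may verify the inequality numerically:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def schatten(A, q):
    return np.sum(np.linalg.svd(A, compute_uv=False)**q)**(1.0/q)

def rand_psd(n):
    M = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
    return M @ M.conj().T             # a random PSD matrix

q, K, n, m = 1.7, 3, 3, 4
A = [rand_psd(n) for _ in range(K)]   # the A_k are PSD
B = [rng.normal(size=(m, m)) + 1j*rng.normal(size=(m, m)) for _ in range(K)]

lhs = schatten(sum(np.kron(a, b) for a, b in zip(A, B)), q)
rhs = schatten(sum(A), q) * max(schatten(b, q) for b in B)
print(lhs <= rhs + 1e-9)              # Proposition EB1 predicts True
\end{verbatim}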
\begin{corollary}\label{cor:EB1} For $A_k\ge 0$ and general $B_k$, and any $q\ge 1$, \begin{equation}\label{eq:Apos} \schatten{q}{\sum_k A_k\otimes B_k} \le \schatten{q}{\sum_k A_k \schatten{q}{B_k}}. \end{equation} \end{corollary} \textit{Proof.}
Define $A'_k = ||B_k||_q A_k$ and $B'_k = B_k/||B_k||_q$. Then $||B'_k||_q=1$ and, by (\ref{eq:Apos0}), \begin{eqnarray*} \schatten{q}{\sum_k A_k\otimes B_k} &=& \schatten{q}{\sum_k A'_k\otimes B'_k} \le \max_j \schatten{q}{B'_j} \, \schatten{q}{\sum_k A'_k}
= \schatten{q}{\sum_k ||B_k||_q A_k}. \end{eqnarray*}
$\square$\par\vskip24pt
In fact, it is easy to see that the Corollary is equivalent to Proposition \ref{prop:EB1}. Just note that, by positivity of the $A_k$, $\sum_k ||B_k||_q A_k \le \max_j ||B_j||_q \sum_k A_k$. The same inequality then holds for the $q$-norm.
Using the above machinery, we can now prove: \begin{theorem}\label{th:case2} For linear maps $\Phi$ where \textit{all} the blocks $\Phi_{ij}:=\Phi(e^{ij})$ are positive, and for general block-partitioned operators $X=[X_{ij}]$: \begin{equation}\label{eq:case2} \schatten{q}{(\Phi\otimes\mathrm{\openone})(X)} \le \schatten{q}{\Phi([\schatten{q}{X_{ij}}])}. \end{equation} \end{theorem} \textit{Proof.} By assumption, all blocks $\Phi_{ij}$ are positive. Corollary \ref{cor:EB1} therefore yields $$ \schatten{q}{(\Phi\otimes\mathrm{\openone})(X)} = \schatten{q}{\sum_{i,j} \Phi_{ij}\otimes X_{ij}}
\le \schatten{q}{\phantom{\big|}\sum_{i,j} \schatten{q}{X_{ij}}\,\Phi_{ij}} = \schatten{q}{\Phi([\schatten{q}{X_{ij}}])}. $$
$\square$\par\vskip24pt
Proposition \ref{prop:EB1} can also be applied in the `square-root' case. \begin{corollary} Let $\Phi$ be a CP map of the form $$ \Phi=\twovec{G_1^*}{G_2^*}\twovect{G_1}{G_2}, $$ where $G_1$ and $G_2$ are PSD up to scalar phase factors $e^{i\theta_i}$. Then (\ref{eq:case3}) holds for any state $\rho$ and for $1/2\le q$. \end{corollary} Note that this case includes values of $q$ where the ``Schatten $q$-norm'' is not a norm at all.
\textit{Proof.} Let $G_i=e^{i\theta_i} H_i$, with $H_i$ PSD. Straightforward application of Corollary \ref{cor:EB1} to the LHS of (\ref{eq:case3c}) yields, for $2q\ge 1$, \begin{eqnarray*} \schatten{2q}{G_1\otimes X_1+G_2\otimes X_2} &=& \schatten{2q}{H_1\otimes e^{i\theta_1} X_1+H_2\otimes e^{i\theta_2}X_2} \\ &\le& \schatten{2q}{\schatten{2q}{e^{i\theta_1} X_1} H_1+\schatten{2q}{e^{i\theta_2} X_2} H_2} \\ &=& \schatten{2q}{\schatten{2q}{X_1}G_1+e^{i(\theta_1-\theta_2)}\schatten{2q}{X_2}G_2} \\ &\le& \max_\theta\schatten{2q}{\schatten{2q}{X_1} G_1+e^{i\theta}\schatten{2q}{X_2}G_2}, \end{eqnarray*} which yields (\ref{eq:case3c}), and hence (\ref{eq:case3}), in this case.
$\square$\par\vskip24pt
Proposition \ref{prop:EB1} has some further consequences. \begin{corollary}\label{cor:sep} For $\Phi$ a CP map, and $\rho$ a separable state, \begin{equation}
||(\Phi\otimes\mathrm{\openone})(\rho)||_q \le \nu_q(\Phi)\,\, ||\mathop{\rm Tr}\nolimits_1\rho||_q. \end{equation} \end{corollary} \textit{Proof.} Since $\rho$ is separable, it can be written in the form $\rho=\sum_k \sigma_k\otimes B_k$, where all $\sigma_k$ are normalised states, and all $B_k$ are positive (not necessarily normalised). As a consequence, $\sum_k B_k = \mathop{\rm Tr}\nolimits_1 \rho$. By Proposition \ref{prop:EB1}, we get \begin{eqnarray*} \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)} &=& \schatten{q}{\sum_k \Phi(\sigma_k)\otimes B_k} \\ &\le& \max_k \schatten{q}{\Phi(\sigma_k)} \,\, \schatten{q}{\sum_k B_k} \\ &\le& \nu_q(\Phi)\,\, \schatten{q}{\sum_k B_k} \\ &=& \nu_q(\Phi)\,\, \schatten{q}{\mathop{\rm Tr}\nolimits_1\rho}. \end{eqnarray*}
$\square$\par\vskip24pt
\begin{theorem}[King] The MOP is multiplicative for any $q$ when at least one of the CP maps involved is EB. \end{theorem} \textit{Proof.} Let $\Omega$ be an EB CP map, and $\Phi$ any other CP map. Let $\rho=(\mathrm{\openone}\otimes\Omega)(\tau)$, with $\tau$ a state. Because $\Omega$ is EB, $\rho$ is (proportional to) a separable state. By Corollary \ref{cor:sep}, and the fact that $\mathop{\rm Tr}\nolimits_1\tau$ is a state, we get \begin{eqnarray*} \schatten{q}{(\Phi\otimes\Omega)(\tau)} &=& \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)} \\ &\le& \nu_q(\Phi)\,\, \schatten{q}{\mathop{\rm Tr}\nolimits_1\rho} \\ &=& \nu_q(\Phi)\,\, \schatten{q}{\Omega(\mathop{\rm Tr}\nolimits_1\tau)} \\ &\le& \nu_q(\Phi)\,\nu_q(\Omega). \end{eqnarray*}
$\square$\par\vskip24pt
\section{Block-Hankel and Block-Toeplitz Matrices} Gurvits has proven in \cite{gurvits} that a state whose density matrix is block-Hankel is separable. For a published reference, see Ando \cite{ando04}, who uses the term `super-positivity' for separability. For $2\times d$ states, that also follows from the semidefinite programming test in \cite{woerdeman} (using the $n=0$ case).
Gurvits has also proven that states with block-Toeplitz density matrices are separable. This follows from a representation by Ando (\cite{ando04}, just after Theorem 4.9) which says that such matrices can be decomposed in terms of a PSD matrix-valued measure $dP(\cdot)$ on the interval $[0,2\pi)$. For the $2\times d$ case this reads $$ \rho=\twomat{B}{C}{C^*}{B} = \int_0^{2\pi} \twomat{1}{e^{i\theta}}{e^{-i\theta}}{1} \otimes dP(\theta). $$ One clearly sees that every factor of the tensor product is positive; $\rho$ is therefore a separable state. Actually, from the proofs of Lemma 4.8 and Theorem 4.9 in \cite{ando04} one can see that an integral is not needed, and, instead, we can use a finite sum $$ \twomat{B}{C}{C^*}{B} = \sum_{k=1}^d \twomat{1}{e^{i\theta_k}}{e^{-i\theta_k}}{1} \otimes P_k. $$
Using these representations, we can prove one more special instance of (\ref{eq:case3}). \begin{theorem}\label{th:gen1} For $\rho=\twomat{B}{C}{C^*}{B}\ge0$, and for $\Phi$ any linear map, (\ref{eq:case3}) holds for all $q\ge1$. \end{theorem} \textit{Proof.} According to Ando's representation mentioned above, $\rho$ can be written in the form $$ \twomat{B}{C}{C^*}{B} = \sum_{k=1}^d \twomat{1}{e^{i\theta_k}}{e^{-i\theta_k}}{1} \otimes P_k, $$ with $P_k\ge0$. Applying the map $\Phi\otimes\mathrm{\openone}$ gives $$ (\Phi\otimes\mathrm{\openone})(\rho) = \sum_k \Phi(\twomat{1}{e^{i\theta_k}}{e^{-i\theta_k}}{1}) \otimes P_k. $$ Because $\Phi$ need not be CP, the first tensor factor is no longer positive. However, the $P_k$ still are, allowing us to employ Proposition \ref{prop:EB1}, which gives us $$ \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)} \le \max_{\theta} \schatten{q}{\Phi(\twomat{1}{e^{i\theta}}{e^{-i\theta}}{1})} \,\, \schatten{q}{\sum_k P_k}. $$
Noticing that the second factor is just $||B||_q=\beta=\delta$ yields (\ref{eq:case3}).
$\square$\par\vskip24pt
When $B$ is different from $D$, restrictions in general have to be imposed on $\Phi$, the previous Theorem being an exception. For $C=C^*$, the RHS of (\ref{eq:case3}) only depends on $X:=\Phi_{11}+\Phi_{22}$ and $Y:=\Phi_{12}+\Phi_{21}$, so that we are led to maximise the LHS over all maps $\Phi$, keeping $X$ and $Y$ fixed. Now the LHS is $$ \schatten{q}{(\Phi\otimes\mathrm{\openone})(\rho)} = \schatten{q}{\Phi_{11} \otimes B + \Phi_{22}\otimes D + Y\otimes C} = \schatten{q}{X \otimes \frac{B+D}{2} + \Delta\otimes \frac{B-D}{2} + Y\otimes C}, $$ where $\Delta:=\Phi_{11} - \Phi_{22}$. So if $\Phi$ is unconstrained, and $B-D\neq0$, then the LHS could be made arbitrarily large by letting $\Delta$ become arbitrarily large. If, however, $\Phi$ is CP, say, then that can no longer happen. Indeed, by positivity of $\Phi_{11}$ and $\Phi_{22}$, $X\pm\Delta\ge0$, whence $-X\le \Delta \le X$.
\acknowledgments
This work was supported by The Leverhulme Trust (grant F/07 058/U), and is part of the QIP-IRC (www.qipirc.org) supported by EPSRC (GR/S82176/0). I thank Chris King for many stimulating conversations and his suggestion to study (\ref{eq:case3}).
Dedicated with love to young Ewout Audenaert, whose constant calls for attention kept me awake during the course of this work.
\end{document} |
\begin{document}
\title{\bf Rate of Convergence and Large Deviation for the Infinite Color P\'olya Urn Schemes} \begin{abstract} In this work we consider the \emph{infinite color urn model} associated with a bounded increment random walk on $\mbox{${\mathbb Z}$}^d$. This model was first introduced in \cite{BaTh2013}. We prove that the rate of convergence of the expected configuration of the urn at time $n$, with appropriate centering and scaling, is of the order ${\mathcal O}\left(\frac{1}{\sqrt{\log n}}\right)$. Moreover we derive bounds similar to the classical Berry-Esseen bound. Further we show that for the expected configuration a \emph{large deviation principle (LDP)} holds with a good rate function and speed $\log n$.
\noindent {\bf Keywords:} \emph{Berry-Esseen bound, infinite color urn, large deviation principle, rate of convergence, urn models}.
\noindent {\bf AMS 2010 Subject Classification:} \emph{Primary: 60F05, 60F10; Secondary: 60G50}.
\end{abstract}
\section{Introduction} \label{Sec:Intro} The P\'olya urn scheme is one of the most well-studied stochastic processes and has applications in many different fields. Since its introduction by P\'olya \cite{Polya30}, a vast number of variants and generalizations \cite{Fri49, Free65, BagPal85, Pe90, Gouet, Svante1, FlDuPu06, Pe07} have been studied in the literature. In general one considers the model with finitely many colors, in which case it can be described simply by \begin{quote} Start with an urn containing finitely many balls of different colors. At any time $n\geq 1$, a ball is selected uniformly at random from the urn, and its color is noted. The selected ball is then returned to the urn along with a set of balls of various colors which may depend on the color of the selected ball. \end{quote} In \cite{BlackMac73} Blackwell and MacQueen introduced a version of the model with possibly infinitely many colors but with a very simple replacement mechanism. Recently the authors of this work introduced in \cite{BaTh2013} a new generalization of the classical model with countably infinitely many colors, where the replacement mechanism corresponds to a random walk in $d$ dimensions. This generalization is essentially different from the classical P\'olya urn scheme, as well as from the model introduced in \cite{BlackMac73}, where the replacement mechanism is diagonal. The generalization in \cite{BaTh2013} considers replacement mechanisms with non-zero off-diagonal entries, and it demonstrates a novel connection between two classical models, namely the P\'olya urn scheme and random walks on the $d$-dimensional lattice $\mbox{${\mathbb Z}$}^d$. In the current work we exploit this connection to derive the \emph{rate of convergence} and the \emph{large deviation principle} for the $\left(n+1\right)^{\mbox{th}}$ selected color in the infinite color generalization of the P\'olya urn scheme. In the following subsection we describe the specific model which we study.
\subsection{Infinite Color Urn Model Associated with Random Walks} \label{SubSec:Model} Let $\left(X_j\right)_{j \geq 1}$ be i.i.d. random vectors taking values in $\mbox{${\mathbb Z}$}^d$ with probability mass function $p\left({\mathbf u}\right) := {\mathbf P}\left(X_1 = {\mathbf u}\right), {\mathbf u} \in \mbox{${\mathbb Z}$}^d$. We assume that the distribution of $X_1$ is bounded, that is, there exists a non-empty finite subset $B \subseteq \mbox{${\mathbb Z}$}^d$ such that $p\left({\mathbf u}\right) = 0$ for all ${\mathbf u} \not\in B$. Throughout this paper we take the convention of writing all vectors as row vectors. Thus for a vector ${\mathbf x} \in \mbox{${\mathbb R}$}^d$ we will write ${\mathbf x}^T$ to denote it as a column vector. The notation $\langle \cdot , \cdot \rangle$ will denote
the usual Euclidean inner product on $\mbox{${\mathbb R}$}^d$ and $\| \cdot \|$ the Euclidean norm. We will always write \begin{equation} \begin{array}{rcl} \boldsymbol \mu & := & {\mathbf E}\left[X_1\right] \\ \varSigma & := & {\mathbf E}\left[ X_1^T X_1 \right] \\ e\left(\boldsymbol \lambda\right) & := & {\mathbf E}\left[e^{\langle \boldsymbol \lambda , X_1 \rangle}\right], \, \boldsymbol \lambda \in \mbox{${\mathbb R}$}^d. \\ \end{array} \label{Equ:Basic-Notations} \end{equation} When the dimension $d=1$ we will denote the mean and the second moment simply by $\mu$ and $\sigma^2$ respectively.
Let $S_n := X_0 + X_1 + \cdots + X_n, n \geq 0$, be the random walk on $\mbox{${\mathbb Z}$}^d$ starting at $X_0$ and with i.i.d. increments $\left(X_j\right)_{j \geq 1}$ which are independent of $X_0$. Needless to say, $\left(S_n\right)_{n \geq 0}$ is a Markov chain with state space $\mbox{${\mathbb Z}$}^d$, initial distribution given by the distribution of $X_0$, and transition matrix $R := \left(\left( p\left({\mathbf v} - {\mathbf u}\right) \right)\right)_{{\mathbf u}, {\mathbf v} \in {\mathbb Z}^d}$.
In \cite{BaTh2013} the following infinite color generalization of the P\'olya urn scheme was introduced, where the colors are indexed by $\mbox{${\mathbb Z}$}^d$. Let $U_n := \left(U_{n,{\mathbf v}}\right)_{{\mathbf v} \in {\mathbb Z}^d} \in [0, \infty)^{{\mathbb Z}^d}$ denote the configuration of the urn at time $n$; the next color is selected proportionally to this configuration, that is, \[ \small{ {\mathbf P}\left( \left(n+1\right)^{\mbox{th}} \mbox{\ selected ball has color\ } {\mathbf v} \,\Big\vert\, U_n, U_{n-1}, \cdots, U_0 \right) \propto U_{n,{\mathbf v}}, \, {\mathbf v} \in \mbox{${\mathbb Z}$}^d. } \] Starting with $U_0$, which is a probability distribution, we define $\left(U_n\right)_{n \geq 0}$ recursively as follows \begin{equation} \label{recurssion} U_{n+1}=U_{n} + C_{n+1} R \end{equation} where $C_{n+1} = \left(C_{n+1,{\mathbf v}}\right)_{{\mathbf v} \in {\mathbb Z}^d}$ is such that $C_{n+1,V}=1$ and $C_{n+1,{\mathbf u}} = 0$ if ${\mathbf u} \neq V$, where $V$ is a random color chosen from the configuration $U_n$ as above. In other words \[ U_{n+1}=U_n + R_V \] where $R_V$ is the $V^{\text{th}}$ row of the replacement matrix $R$. Following \cite{BaTh2013} we define the process $\left(U_n\right)_{n \geq 0}$ as the \emph{infinite color urn model} with initial configuration $U_0$ and replacement matrix $R$. We will also refer to it as the \emph{infinite color urn model associated with the random walk $\left(S_n\right)_{n \geq 0}$ on $\mbox{${\mathbb Z}$}^d$}. Throughout this paper we will assume that $U_0 = \left(U_{0,{\mathbf v}}\right)_{{\mathbf v} \in {\mathbb Z}^d}$ is such that $U_{0,{\mathbf v}} = 0$ for all but finitely many ${\mathbf v} \in \mbox{${\mathbb Z}$}^d$.
It is worth noting that $\displaystyle{\sum_{{\mathbf u} \in {\mathbb Z}^d} U_{n,{\mathbf u}} = n + 1}$ for all $n \geq 0$. So if $Z_n$ denotes the $\left(n+1\right)^{\mbox{th}}$ selected color then \begin{equation} {\mathbf P}\left(Z_n = {\mathbf v} \,\Big\vert\, U_n, U_{n-1}, \cdots, U_0 \right) = \frac{U_{n,{\mathbf v}}}{n+1} \Rightarrow {\mathbf P}\left(Z_n = {\mathbf v} \right) = \frac{{\mathbf E}\left[U_{n,{\mathbf v}}\right]}{n+1}. \end{equation} In other words the expected configuration of the urn at time $n$ is given by the distribution of $Z_n$.
\subsection{Outline of the Main Contribution of the Paper} \label{SubSec:Outline} In \cite{BaTh2013} the authors studied the asymptotic distribution of $Z_n$; in particular, it was proved (see Theorem 2.1 of \cite{BaTh2013}) that as $n \rightarrow \infty$, \begin{equation} \label{CLEC} \frac{Z_{n}-\boldsymbol \mu \log n}{\sqrt{\log n}}\stackrel{d}{\longrightarrow}N_{d}\left(\mathbf 0, \varSigma \right). \end{equation} In Section \ref{Sec:BE} we find the rate of convergence for the above asymptotics and show that a classical Berry-Esseen type bound holds in every dimension $d \geq 1$; the bound is of the order ${\mathcal O}\left(\frac{1}{\sqrt{\log n}}\right)$.
It is easy to see that \eqref{CLEC} implies \begin{equation} \frac{Z_{n}}{\log n} \cgd \boldsymbol \mu \text{\ as\ } n \to \infty \Rightarrow \frac{Z_{n}}{\log n} \cgp \boldsymbol \mu \text{\ as\ } n \to \infty. \label{Equ:Zn-Prob-Conv} \end{equation} It is then natural to ask whether the sequence of measures $\left( {\mathbf P}\left(\frac{Z_n}{\log n} \in \cdot \right)\right)_{n \geq 2}$ satisfies a \emph{large deviation principle (LDP)}. In Section \ref{Sec:LDP} we show that this sequence of measures satisfies an LDP with a good rate function and speed $\log n$. We also give an explicit representation of the rate function in terms of the rate function of a marked Poisson process with intensity one, the markings being given by the i.i.d. increments $\left(X_j\right)_{j \geq 1}$.
\subsection{Fundamental Representation} \label{SunSec:Representation} We end the introduction with the following very important observation made in \cite{BaTh2013} (see Theorem 3.1 in \cite{BaTh2013}) \begin{equation} Z_n \ed Z_0 + \sum_{j=1}^n I_j X_j\, \label{Equ:Representation} \end{equation} where $\left(X_j\right)_{j \geq 1}$ are as above and $\left(I_j\right)_{j \geq 1}$ are independent Bernoulli variables such that $I_j \sim \mbox{Bernoulli}\left(\frac{1}{j+1}\right)$ and are independent of $\left(X_j\right)_{j \geq 1}$. $Z_0 \sim U_0$ and is independent of $\left(\left(X_j\right)_{j \geq 1}, \left(I_j\right)_{j \geq 1}\right)$.
Note that using this representation the asymptotic normality \eqref{CLEC} follows immediately as an application of the Lindeberg Central Limit Theorem \cite{Bill86}. We use this representation to derive the Berry-Esseen type bounds and also the LDP.
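The representation \eqref{Equ:Representation} also makes the model straightforward to simulate. The following minimal sketch (assuming NumPy; the choice of simple symmetric increments on $\mbox{${\mathbb Z}$}$ and all parameter values are purely illustrative) samples $Z_n$ with $U_0=\delta_0$ and compares the empirical mean and variance with the predictions $\mu h_n = 0$ and $\sigma^2 \log n$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_Z(n):
    # Z_n = sum_j I_j X_j with U_0 = delta_0
    j = np.arange(1, n + 1)
    I = rng.random(n) < 1.0/(j + 1)    # I_j ~ Bernoulli(1/(j+1))
    X = rng.choice([-1, 1], size=n)    # i.i.d. simple symmetric increments
    return np.sum(I*X)

n, reps = 10**4, 2000
Z = np.array([sample_Z(n) for _ in range(reps)])
print(Z.mean(), Z.var(), np.log(n))    # empirical variance is close to log n
\end{verbatim}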
\section{Berry-Esseen Bounds for the Expected Configuration} \label{Sec:BE} In this section we show that the rate of convergence in (\ref{CLEC}) is of the order ${\mathcal O}\left( \frac{1}{\sqrt{\log n}} \right)$. In fact we show that a Berry-Esseen type bound holds for the color of the $\left(n+1\right)^{\mbox{th}}$ selected ball.
\subsection{Berry-Esseen Bound for $d=1$} \label{SubSec:BE-1} We first consider the case when the associated random walk is one dimensional and the set of colors is indexed by the set of integers $\mbox{${\mathbb Z}$}$. \begin{theorem} \label{Thm:BE-1} Suppose $U_0 = \delta_0$; then \begin{equation} \sup_{x \in {\mathbb R}} \left\vert {\mathbf P}\left( \frac{Z_{n}-\mu h_n}{\sqrt{n \rho_2}} \leq x\right) - \Phi\left(x\right) \right\vert \leq 2.75 \times \frac{\sqrt{n} \rho_3}{\rho_2^{3/2}} = {\mathcal O} \left( \frac{1}{\sqrt{\log n}} \right), \label{Equ:BE-1} \end{equation} where $\displaystyle{h_n := \sum_{j=1}^n \frac{1}{j+1}}$, $\Phi$ is the standard normal distribution function and \begin{equation} \rho_2 := \frac{1}{n} \left( \sigma^2 h_n -
\mu^2 \sum_{j=1}^n \frac{1}{\left(j+1\right)^2} \right) \label{Equ:Def-rho2} \end{equation} and \begin{equation} \rho_3 := \frac{1}{n} \left(
\sum_{j=1}^n \frac{1}{j+1} {\mathbf E}\left[ \left\vert X_1 - \frac{\mu}{j+1} \right\vert^3 \right]
+ \left\vert \mu \right\vert^3 \sum_{j=1}^n \frac{j}{\left(j+1\right)^4} \right). \label{Equ:Def-rho3} \end{equation} \end{theorem}
\begin{proof} We first note that when $U_0 = \delta_0$ then \eqref{Equ:Representation} can be written as \begin{equation} Z_n \ed \sum_{j=1}^n I_j X_j\, \label{Equ:Representation-0} \end{equation} where $\left(X_j\right)_{j \geq 1}$ are i.i.d. increments of the random walk $\left(S_n\right)_{n \geq 0}$, $\left(I_j\right)_{j \geq 1}$ are independent Bernoulli variables such that $I_j \sim \mbox{Bernoulli}\left(\frac{1}{j+1}\right)$ and are independent of $\left(X_j\right)_{j \geq 1}$.
Now observe that \[ n \rho_2 = \sum_{j=1}^n {\mathbf E}\left[ \left(I_j X_j - {\mathbf E}\left[I_j X_j\right] \right)^2\right] \mbox{\ and\ } n \rho_3 = \sum_{j=1}^n {\mathbf E}\left[ \left\vert I_j X_j - {\mathbf E}\left[I_j X_j\right] \right\vert^3\right]. \] Thus from the \emph{Berry-Esseen Theorem} for independent but non-identically distributed summands (see Theorem 12.4 of \cite{BhRa76}) we get \begin{equation} \sup_{x \in {\mathbb R}} \left\vert {\mathbf P}\left( \frac{\sum_{j=1}^n I_j X_j -\mu h_n}{\sqrt{n \rho_2}} \leq x\right) - \Phi\left(x\right) \right\vert \leq 2.75 \times \frac{\sqrt{n} \rho_3}{\rho_2^{3/2}}. \label{Equ:BE-Intermediate} \end{equation} Equations \eqref{Equ:Representation-0} and \eqref{Equ:BE-Intermediate} now imply the inequality in \eqref{Equ:BE-1}.
Finally, to prove the last part of \eqref{Equ:BE-1}, we note that by definition $n \rho_2 \sim C_1 \log n$ and $n \rho_3 \sim C_2 \log n$, where $0 < C_1, C_2 < \infty$ are constants. Thus \[ \frac{\sqrt{n} \rho_3}{\rho_2^{3/2}} = {\mathcal O}\left(\frac{1}{\sqrt{\log n}}\right). \] This completes the proof of the theorem. \end{proof}
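\begin{rem} Although not needed in the sequel, the constants in the above proof can be identified explicitly when the increment distribution is non-degenerate. Since $\sum_{j \geq 1} \frac{1}{\left(j+1\right)^2} < \infty$ and ${\mathbf E}\left[ \left\vert X_1 - \frac{\mu}{j+1} \right\vert^3 \right] \longrightarrow {\mathbf E}\left[ \left\vert X_1 \right\vert^3 \right]$ as $j \to \infty$, the definitions \eqref{Equ:Def-rho2} and \eqref{Equ:Def-rho3} give \[ n \rho_2 \sim \sigma^2 \log n \quad \mbox{and} \quad n \rho_3 \sim {\mathbf E}\left[ \left\vert X_1 \right\vert^3 \right] \log n, \] so that one may take $C_1 = \sigma^2$ and $C_2 = {\mathbf E}\left[ \left\vert X_1 \right\vert^3 \right]$. \end{rem}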
The following result follows easily from the above theorem by observing that $h_n \sim \log n$ and $n \rho_2 \sim C_1 \log n$. \begin{theorem} \label{Thm:BE-1-General} Suppose $U_{0,k} = 0$ for all but finitely many $k \in \mbox{${\mathbb Z}$}$; then there exists a constant $C > 0$ such that \begin{equation} \sup_{x \in {\mathbb R}} \left\vert {\mathbf P}\left( \frac{Z_{n}-\mu \log n}{\sigma \sqrt{\log n}} \leq x \right) - \Phi\left(x\right) \right\vert \leq C \times \frac{\sqrt{n} \rho_3}{\rho_2^{3/2}} = {\mathcal O} \left( \frac{1}{\sqrt{\log n}} \right), \label{Equ:BE-1-General} \end{equation} where $\Phi$ is the standard normal distribution function and $\rho_2$ and $\rho_3$ are as defined in \eqref{Equ:Def-rho2} and \eqref{Equ:Def-rho3} respectively. \end{theorem} It is worth noting that, unlike in Theorem \ref{Thm:BE-1}, the constant $C$ appearing in \eqref{Equ:BE-1-General} is not universal; it may depend on the increment distribution as well as on $U_0$.
\subsection{Berry-Esseen Bound for $d \geq 2$} \label{SubSec:BE-d} We now consider the case when the associated random walk is $d \geq 2$ dimensional and the colors are indexed by $\mbox{${\mathbb Z}$}^d$. Before we present our main result we introduce a few notations.
{\bf Notations:} For a vector ${\mathbf x} \in \mbox{${\mathbb R}$}^d$ we will write the coordinates as $\left(x^{(1)}, x^{(2)}, \cdots, x^{(d)}\right)$. For example the coordinates of $\boldsymbol \mu$ will be written as $\left(\mu^{(1)}, \mu^{(2)}, \cdots, \mu^{(d)}\right)$. For a matrix $A = \left(\left(a_{ij}\right)\right)_{1\leq i,j\leq d }$ we denote by $A\left(i,j\right)$ the $(d-1) \times (d-1)$ sub-matrix of $A$, obtained by deleting the $i^{\text{th}}$ row and $j^{\text{th}}$ column. Let \begin{equation} \rho_2^{(d)} := \frac{1}{n} \sum_{j=1}^{n} \frac{1}{(j+1)}\frac{\det\left(\varSigma -\frac{1}{j+1}M\right)}
{\det\left(\varSigma(1,1)-\frac{1}{j+1}M(1,1)\right)}, \label{Equ:Def-rho2-d} \end{equation} where $M := \left(\left(\mu^{(i)} \mu^{(j)}\right)\right)_{1 \leq i, j \leq d}$ and \begin{equation} \rho_3^{(d)} := \frac{1}{n d} \sum_{j=1}^n \sum_{i=1}^{d}\gamma^{3}_{n}\left(i\right)
\beta_{j}\left(i\right), \label{Equ:Def-rho3-d} \end{equation} where \[ \gamma^{2}_{n}(i) := \max_{1\leq j \leq n} \frac{\det\left(\varSigma(i,i)-\frac{1}{(j+1)}M(i,i)\right)}
{\det\left(\varSigma(1,1)-\frac{1}{j+1}M(1,1)\right)} \] and \[ \beta_{j}(i) = \frac{1}{j+1} {\mathbf E}\left[ \left\vert X_1^{(i)} - \frac{\mu^{(i)}}{j+1} \right\vert^3 \right] + \frac{j}{\left(j+1\right)^4} \left\vert \mu^{(i)} \right\vert^3. \] For any two vectors ${\mathbf x}$ and ${\mathbf y} \in \mbox{${\mathbb R}$}^d$ we will write ${\mathbf x} \leq {\mathbf y}$, if the inequality holds coordinate wise. Finally for a positive definite matrix $B$, we write $B^{1/2}$ for the unique positive definite square root of it.
\begin{theorem} \label{Thm:BE-d} Suppose $U_0 = \delta_{\mathbf 0}$; then there exists a constant $C\left(d\right) > 0$, depending only on the dimension $d$, such that \begin{equation} \sup_{{\mathbf x} \in {\mathbb R}^d} \left\vert {\mathbf P}\left( \left(Z_{n}-\boldsymbol \mu h_n\right) \varSigma_n^{-1/2} \leq {\mathbf x} \right) - \Phi_d\left({\mathbf x}\right) \right\vert \leq C\left(d\right) \frac{\sqrt{n} \rho_3^{(d)}}{\left(\rho_2^{(d)}\right)^{3/2}} = {\mathcal O} \left( \frac{1}{\sqrt{\log n}} \right), \label{Equ:BE-d} \end{equation} where $\varSigma_n := \sum_{j=1}^n \frac{1}{j+1} \left(\varSigma - \frac{1}{j+1} M\right)$ and $\Phi_d$ is the distribution function of a standard $d$-dimensional normal random vector. \end{theorem}
\begin{proof} As in the one dimensional case, we start by observing that when $U_0 = \delta_{\mathbf 0}$, \eqref{Equ:Representation} can be written as \begin{equation} Z_n \ed \sum_{j=1}^n I_j X_j\, \label{Equ:Representation-0-d} \end{equation} where $\left(X_j\right)_{j \geq 1}$ are the i.i.d. increments of the random walk $\left(S_n\right)_{n \geq 0}$ and $\left(I_j\right)_{j \geq 1}$ are independent Bernoulli variables such that $I_j \sim \mbox{Bernoulli}\left(\frac{1}{j+1}\right)$, independent of $\left(X_j\right)_{j \geq 1}$.
Now the proof of the inequality in \eqref{Equ:BE-d} follows from equation (D) of \cite{Bergstrom49}, which gives a $d$-dimensional version of the classical Berry-Esseen inequality for independent but non-identically distributed summands, which in our case are the random variables $\left(I_j X_j\right)_{j \geq 1}$. It is enough to notice that \[ \beta_{j}(i) = {\mathbf E}\left[ \left\vert I_j X_j^{(i)} - {\mathbf E}\left[I_j X_j^{(i)}\right] \right\vert^3 \right], \] and \[ \varSigma_n = \sum_{j=1}^n {\mathbf E}\left[ \left( I_j X_j - {\mathbf E}\left[I_j X_j\right]\right)^T \left(I_j X_j - {\mathbf E}\left[I_j X_j\right] \right) \right]. \]
Finally, to prove the last part of \eqref{Equ:BE-d}, just as in the one dimensional case we note that by definition $n \rho_2^{(d)} \sim C_1' \log n$ and $n \rho_3^{(d)} \sim C_2' \log n$, where $0 < C_1', C_2' < \infty$ are constants. Thus \[ \frac{\sqrt{n} \rho_3^{(d)}}{\left(\rho_2^{(d)}\right)^{3/2}} = {\mathcal O}\left(\frac{1}{\sqrt{\log n}}\right). \] This completes the proof of the theorem. \end{proof}
\begin{rem} If we define $\varSigma\left(1,1\right) = 1$ and $M\left(1,1\right)=0$ when $d=1$, then Theorem \ref{Thm:BE-1} follows from the above theorem, except that in Theorem \ref{Thm:BE-1} the constant is explicit. \end{rem}
Just as in the one dimensional case, the following result follows easily from the above theorem by observing that $h_n \sim \log n$. \begin{theorem} \label{Thm:BE-d-General} Suppose $U_0 = \left(U_{0,{\mathbf v}}\right)_{{\mathbf v} \in {\mathbb Z}^d}$ is such that $U_{0,{\mathbf v}} = 0$ for all but finitely many ${\mathbf v} \in \mbox{${\mathbb Z}$}^d$; then there exists a constant $C > 0$, which may depend on the increment distribution, such that \begin{equation} \sup_{{\mathbf x} \in {\mathbb R}^d} \left\vert {\mathbf P}\left( \left(\frac{Z_{n}-\boldsymbol \mu \log n}{\sqrt{\log n}} \right) \varSigma^{-1/2} \leq {\mathbf x} \right) - \Phi_d\left({\mathbf x}\right) \right\vert \leq C \times \frac{\sqrt{n} \rho_3^{(d)}}{\left(\rho_2^{(d)}\right)^{3/2}} = {\mathcal O} \left( \frac{1}{\sqrt{\log n}} \right), \label{Equ:BE-d-General} \end{equation} where $\Phi_d$ is the distribution function of a standard $d$-dimensional normal random vector. \end{theorem}
\section{Large Deviations for the Expected Configuration} \label{Sec:LDP} In this section we discuss the asymptotic behavior of the tail probabilities of $\frac{Z_{n}}{\log n}$. The following standard notations are used in the rest of the paper. For any subset $A \subseteq \mbox{${\mathbb R}$}^d$ we write $A^{\circ}$ to denote the \emph{interior} of $A$ and $\bar{A}$ to denote the \emph{closure} of $A$ under the usual Euclidean metric. \begin{theorem} \label{Thm:LDP} The sequence of measures $\left({\mathbf P}\left(\frac{Z_n}{\log n} \in \cdot \right)\right)_{n \geq 2}$ satisfies an LDP with rate function $I\left(\cdot\right)$ and speed $\log n$, that is, for every Borel set $A \subseteq \mbox{${\mathbb R}$}^d$, \begin{equation} \small{ - \inf_{{\mathbf x} \in A^{\circ}} I\left({\mathbf x}\right) \leq \mathop{\underline{\lim}}\limits_{n \to \infty} \frac{\log {\mathbf P}\left(\frac{Z_{n}}{\log n}\in A\right)}{\log n} \leq \mathop{\overline{\lim}}\limits_{n \to \infty} \frac{\log {\mathbf P}\left(\frac{Z_{n}}{\log n}\in A\right)}{\log n} \leq - \inf_{{\mathbf x} \in \bar{A}} I\left({\mathbf x}\right) } \label{Equ:LDP} \end{equation} \normalsize where $I(\cdot)$ is the Fenchel-Legendre dual of $e\left(\cdot\right)-1$, that is, for ${\mathbf x} \in \mathbb{R}^{d}$, \begin{equation} I({\mathbf x})=\displaystyle\sup_{\boldsymbol \lambda \in \mathbb{R}^{d}}\{\langle {\mathbf x}, \boldsymbol \lambda \rangle -e(\boldsymbol \lambda) + 1 \}. \label{Equ:Def-I} \end{equation} Moreover $I(\cdot)$ is convex and a \emph{good rate function}. \end{theorem}
\begin{proof} We start with the representation \eqref{Equ:Representation} \[ Z_n \ed Z_0 + \sum_{j=1}^n I_j X_j \] where as earlier $\left(X_j\right)_{j \geq 1}$ are the i.i.d. increments of the random walk $\left(S_n\right)_{n \geq 0}$ on $\mbox{${\mathbb Z}$}^d$ and $\left(I_j\right)_{j \geq 1}$ are independent Bernoulli variables such that $I_j \sim \mbox{Bernoulli}\left(\frac{1}{j+1}\right)$, independent of $\left(X_j\right)_{j \geq 1}$; moreover $Z_0 \sim U_0$ and is independent of $\left(\left(X_j\right)_{j \geq 1}, \left(I_j\right)_{j \geq 1}\right)$. Now, without loss of generality, we may assume that $Z_0 = \mathbf 0$ with probability one, that is, $U_0 = \delta_{\mathbf 0}$; indeed, since $U_0$ has finite support, $Z_0$ is bounded, so it changes $\frac{Z_n}{\log n}$ only by a term that converges to $\mathbf 0$ uniformly and hence does not affect the LDP.
Consider the following scaled \emph{logarithmic moment generating function} of $Z_n$, \begin{equation} \Lambda_{n}\left(\boldsymbol \lambda\right) := \frac{1}{\log n}\log \mathbb{E}\left[e^{\langle \boldsymbol \lambda, Z_{n}\rangle}\right]. \label{Equ:log-MGF} \end{equation} From \eqref{Equ:Representation} it follows that \[ \mathbb{E}\left[e^{\langle \boldsymbol \lambda, Z_{n}\rangle}\right]=\frac{1}{n+1}\Pi_{n}\left(e\left(\boldsymbol \lambda\right)\right) \] where $\Pi_{n}\left(z\right)=\prod_{j=1}^{n}\left(1+\frac{z}{j}\right)$, $z \in \mathbb{C}$. Using Gauss's formula (see page 178 of \cite{Con78}) we have \begin{equation} \lim_{n\to \infty} \frac{\Pi_{n}(z)}{n^{z}}\Gamma(z+1)=1 \label{Equ:Gauss} \end{equation} and the convergence happens uniformly on compact subsets of $\mathbb{C}\setminus\{-1,-2,\ldots\}$. Therefore we get \begin{equation} \Lambda_{n}\left(\boldsymbol \lambda\right) \longrightarrow e\left(\boldsymbol \lambda\right) - 1 < \infty \,\,\, \forall \,\, \boldsymbol \lambda \in \mbox{${\mathbb R}$}^d. \end{equation} Thus the LDP as stated in \eqref{Equ:LDP} follows from the G\"artner-Ellis Theorem (see Remark (a) on page 45 of \cite{DeZe1993} or page 66 of \cite{ThesisArijit2010}).
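Indeed, writing $z = e\left(\boldsymbol \lambda\right) > 0$, formula \eqref{Equ:Gauss} gives $\log \Pi_{n}\left(z\right) = z \log n - \log \Gamma\left(z+1\right) + o\left(1\right)$, and therefore \[ \Lambda_{n}\left(\boldsymbol \lambda\right) = \frac{1}{\log n}\left( \log \Pi_{n}\left(e\left(\boldsymbol \lambda\right)\right) - \log \left(n+1\right) \right) = e\left(\boldsymbol \lambda\right) - 1 + {\mathcal O}\left(\frac{1}{\log n}\right), \] which is the convergence used above.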
We next note that $I(\cdot)$ is a convex function because it is the Fenchel-Legendre dual of $e\left(\boldsymbol \lambda\right)-1$ which is finite for all $\boldsymbol \lambda \in \mathbb{R}^{d}$.
Finally, we will show that $I\left(\cdot\right)$ is a good rate function, that is, the level sets $A\left(\alpha \right)=\{{\mathbf x}\colon I({\mathbf x})\leq \alpha\}$ are compact for all $\alpha > 0$. Since $I$ is a rate function, it is by definition lower semicontinuous, so the level sets are closed. It is therefore enough to prove that $A(\alpha)$ is bounded for all $\alpha > 0$.
Observe that for all ${\mathbf x} \in \mathbb{R}^{d}$, \[
I({\mathbf x}) \geq \sup_{\| \boldsymbol \lambda \| = 1}\left\{\langle {\mathbf x},\boldsymbol \lambda \rangle-e(\boldsymbol \lambda)+1\right\}. \] Now the function $\boldsymbol \lambda \mapsto e\left(\boldsymbol \lambda\right)$
is continuous and $\left\{\boldsymbol \lambda \colon \| \boldsymbol \lambda \| = 1 \right\}$
is a compact set. So $\exists$ $\boldsymbol \lambda_{0} \in \left\{\boldsymbol \lambda \colon \| \boldsymbol \lambda \| = 1\right\}$ such that $\sup_{\vert\boldsymbol \lambda\vert=1} e\left(\boldsymbol \lambda\right) = e\left(\boldsymbol \lambda_{0}\right)$.
Therefore for $\| {\mathbf x} \| \neq 0$ choosing $\boldsymbol \lambda=\frac{{\mathbf x}}{\| {\mathbf x} \|}$, we have
$ I({\mathbf x}) \geq \| {\mathbf x} \| -e\left(\boldsymbol \lambda_{0}\right)+1$. So if ${\mathbf x}\in A(\alpha)$ then \[
\| {\mathbf x} \| \leq \left(\alpha +e \left(\boldsymbol \lambda _{0}\right)-1\right). \] This proves that the level sets are bounded, which completes the proof. \end{proof}
Our next result is an easy consequence of \eqref{Equ:Def-I} and can be used to compute an explicit formula for the rate function $I$ in many examples, in dimension one or higher. \begin{theorem} \label{Thm:I-Formula} The rate function $I$ is the same as the rate function for the large deviations of the empirical means of i.i.d. random vectors distributed as the following random vector \begin{equation} W = \sum_{i=1}^N X_i, \label{Equ:I-Representation} \end{equation} where $N \sim \mbox{Poisson}\left(1\right)$ and is independent of $\left(X_j\right)_{j \geq 1}$, which are the i.i.d. increments of the associated random walk. \end{theorem}
\begin{proof} We first observe that $\log {\mathbf E}\left[e^{\langle \boldsymbol \lambda, W \rangle}\right] = e\left(\boldsymbol \lambda\right) - 1$. The rest then follows from \eqref{Equ:Def-I} and Cram\'er's Theorem (see Theorem 2.2.30 of \cite{DeZe1993}). \end{proof}
\begin{rem} Using Theorem \ref{Thm:I-Formula} we can conclude that the tail of the asymptotic distribution of $Z_n$ can be approximated by the tail of a marked Poisson process with intensity one where the markings are given by the i.i.d. increments of the associated random walk. \end{rem}
For $d=1$, one can get more information about the rate function $I$; in particular, the following result follows from Theorem \ref{Thm:I-Formula} and Lemma 2.2.5 of \cite{DeZe1993}. \begin{prop} \label{Cor:LDP1} Suppose $d=1$; then $I(x)$ is non-decreasing for $x \geq \mu$ and non-increasing for $x \leq \mu$. Moreover \begin{equation} I(x) = \begin{cases}
\displaystyle{\sup _{\lambda \geq 0}\{x\lambda -e(\lambda)+1\}} & \mbox{if\ \ } x \geq \mu \\
\displaystyle{\sup _{\lambda \leq 0}\{x\lambda -e(\lambda)+1\}} & \mbox{if\ \ } x \leq \mu.
\end{cases} \label{incr} \end{equation} In particular, $I(\mu)=\inf_{x \in \mathbb{R}} I(x)$. \end{prop}
The following is an immediate corollary of the above result and Theorem \ref{Thm:LDP}. \begin{cor}\label{Th: CramerLDP} Let $d=1$; then for any $\epsilon > 0$ \begin{equation} \label{Eq: lim} \lim_{n \to \infty} \frac{1}{\log n} \log {\mathbf P} \left(\frac{Z_{n}}{\log n}\geq \mu + \epsilon \right) = - I\left( \mu + \epsilon \right) \end{equation} and \begin{equation} \lim_{n \to \infty} \frac{1}{\log n} \log {\mathbf P} \left(\frac{Z_{n}}{\log n} \leq \mu - \epsilon \right) = -I\left( \mu - \epsilon \right). \end{equation} \end{cor}
We end the section with explicit computations of the rate functions for two examples of infinite color urn models associated with random walks on the one dimensional integer lattice. \begin{Example} Our first example is the case when the random walk is trivial, moving deterministically one step to the right at each time; in other words $X_1 = 1$ with probability one. In this case $\mu = 1$ and $\sigma^2=1$. Also the moment generating function of $X_1$ is given by $e\left(\lambda\right) = e^{\lambda}$, $\lambda \in \mbox{${\mathbb R}$}$. By Theorem \ref{Thm:I-Formula} the rate function for the associated infinite color urn model is the same as the rate function for a Poisson random variable with mean $1$, that is \begin{equation} I(x)= \begin{cases} + \infty & \text{\ if\ } x < 0 \\ 1 & \text{\ if\ } x = 0 \\ x \log x - x + 1 & \text{\ if\ } x > 0 \end{cases} \end{equation} Thus for this example one can prove a \emph{Poisson approximation} for $Z_n$. \end{Example}
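To see how this formula follows from \eqref{Equ:Def-I}, note that here $I(x)=\sup_{\lambda \in {\mathbb R}}\left\{x\lambda - e^{\lambda} + 1\right\}$. For $x>0$ the supremum is attained at $\lambda = \log x$, giving $I(x) = x\log x - x + 1$; for $x=0$ the supremum is approached as $\lambda \to -\infty$ and equals $1$; and for $x<0$ letting $\lambda \to -\infty$ shows that $I(x)=+\infty$.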
\begin{Example} Our next example is the case when the random walk is the \emph{simple symmetric random walk} on the one dimensional integer lattice. In this case $\mu = 0$, $\sigma^2 = 1$ and the moment generating function of $X_1$ is $e\left(\lambda\right) = \cosh \lambda$, $\lambda \in \mbox{${\mathbb R}$}$. The rate function for the associated infinite color urn model turns out to be \begin{equation} I(x)=x \sinh^{-1} x - \sqrt{1+x^2} + 1. \end{equation} \end{Example}
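Indeed, by \eqref{Equ:Def-I} we have $I(x)=\sup_{\lambda \in {\mathbb R}}\left\{x\lambda - \cosh\lambda + 1\right\}$; the supremum is attained where $\sinh\lambda = x$, that is at $\lambda = \sinh^{-1}x$, and since $\cosh\left(\sinh^{-1}x\right)=\sqrt{1+x^2}$ this yields the stated formula.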
\end{document} |
\begin{document}
\title{Quenched local central limit theorem for random walks in a time-dependent balanced random environment}
\author{Jean-Dominique Deuschel
\thanks{Electronic address: \texttt{[email protected]}}} \affil{Institut f\"ur Mathematik\\ Technische Universit\"at Berlin}
\author{Xiaoqin Guo
\thanks{Electronic address: \texttt{[email protected]}}} \affil{Department of Mathematics\\University of Wisconsin-Madison} \maketitle
\begin{abstract} We prove a quenched local central limit theorem for continuous-time random walks in $\mathbb Z^d, d\ge 2$, in a uniformly-elliptic time-dependent balanced random environment which is ergodic under space-time shifts. We also obtain Gaussian upper and lower bounds and (positive and negative) moment estimates for the quenched transition probabilities, as well as asymptotics of the discrete Green function. \end{abstract}
\section{Introduction}~\label{sec:intro} In this article we consider a random walk in a balanced uniformly-elliptic time-dependent random environment on $\Z^d, d\ge 2$.
For $x,y\in\Z^d$, we write $x\sim y$ if $|x-y|_2=1$. Denote by $\mc P$ the set ({\it of nearest-neighbor transition rates on $\Z^d$}) \[
\mc P:=\left\{v: \Z^d\times\Z^d\to[0,\infty)\bigg|v(x,y)=0 \text{ if }x\nsim y\right\}. \] Equip $\mc P$ with the product topology and the corresponding Borel $\sigma$-field. We denote by $\Omega\subset \mc P^{\R}$ the set of all measurable functions $\omega: t\mapsto \omega_t$ from $\R$ to $\mc P$
and call every $\omega\in\Omega$ a time-dependent {\it environment}. For $\omega\in\Omega$, we define the parabolic difference operator \begin{align*} \mc L_\omega u(x,t)
&=\sum_{y:y\sim x}\omega_t(x,y)(u(y,t)-u(x,t))+\partial_t u(x,t) \end{align*} for every bounded function $u:\Z^d\times\R\to\R$ which is differentiable in $t$. Let $(\hat X_t)_{t\ge 0}=(X_t,T_t)_{t\ge 0}$ denote the continuous-time Markov chain on $\Z^d\times\R$ with generator $\mc L_\omega$. Note that almost surely, $T_t=T_0+t$. We say that $(X_t)_{t\ge 0}$ is a {\it continuous-time random walk in the environment }$\omega$ and denote by $P_\omega^{x,t}$ its law (called the {\it quenched law}) with initial state $(x,t)\in\Z^d\times\R$.
We equip $\Omega\subset\mc P^\R$ with the induced product topology and let $\mb P$ be a probability measure on the Borel $\sigma$-field $\mc B(\Omega)$ of $\Omega$. An environment $\omega\in\Omega$ is said to be {\it balanced} if \[ \sum_{y}\omega_t(x,y)(y-x)=0, \quad \text{ for all } t\in\R, x\in\Z^d \] and {\it uniformly elliptic} if there is a constant $\kappa\in(0,1)$ such that \[ \kappa< \omega_t(x,y)<\tfrac{1}{\kappa} \quad \text{ for all } t\in\R, x,y\in\Z^d \text{ with } x\sim y. \] Let $\Omega_\kappa\subset\Omega$ \label{page:defomg} denote the set of balanced and uniformly elliptic environments with ellipticity constant $\kappa\in(0,1)$. The measure $\mb P$ is said to be balanced and uniformly elliptic if $\mb P\left( \omega\in\Omega_\kappa \right)=1$ for some $\kappa\in(0,1)$.
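Since the rates are nearest-neighbor, the balance condition can be read off coordinatewise; we record this elementary reformulation only for orientation. Writing $e_1,\ldots,e_d$ for the standard unit vectors, \[ \sum_{y}\omega_t(x,y)(y-x)=\sum_{i=1}^{d}\bigl(\omega_t(x,x+e_i)-\omega_t(x,x-e_i)\bigr)e_i, \] so that $\omega$ is balanced if and only if $\omega_t(x,x+e_i)=\omega_t(x,x-e_i)$ for all $i$, all $x\in\Z^d$ and all $t\in\R$; in words, the walk has zero instantaneous drift in every coordinate.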
For each $(x,t)\in\Z^d\times\R$ we define the space-time shift $\theta_{x,t}\omega:\Omega\to\Omega$ by \[ (\theta_{x,t}\omega)_s(y,z):=\omega_{s+t}(y+x,z+x). \] We assume that the law $\mb P$ of the environment is translation-invariant and {\it ergodic} under the space-time shifts $\{\theta_{x,t}:x\in\Z^d, t\ge 0\}$. That is, $\mb P(A)\in\{0,1\}$ for any $A\in\mc B(\Omega)$ such that $\mb P(A\Delta \theta_{\hat x}^{-1}A)=0$ for all $\hat x\in\Z^d\times[0,\infty)$.
Given $\omega$, the environmental process \begin{equation}\label{eq:def-omegabar} \bar\omega_t:=\theta_{\hat X_t}\omega, \qquad t\ge 0, \end{equation} with initial state $\bar\omega_0=\omega$ is a Markov process on $\Omega$. With a slight abuse of notation, we use $P_\omega^{0,0}$ to denote the quenched law of $(\bar\omega_t)_{t\ge 0}$.
\noindent{\bf Assumptions:} {\it Throughout this paper, we assume that $\mb P$ is balanced, ergodic, and uniformly elliptic with ellipticity constant $\kappa>0$.}
We recall the quenched central limit theorem (QCLT) in \cite{DGR15}.
\begin{theorem}\cite[Theorem 1.2]{DGR15}\label{thm:recall} Under the above assumptions of $\mb P$, \begin{enumerate}[(a)\,] \item there exists a unique invariant measure $\mb Q$ for the process $(\bar\omega_t)_{t\ge 0}$ such that $\mb Q\ll\mb P$ and $(\bar\omega_t)_{t\ge 0}$ is an ergodic flow under $\mb Q\times P_\omega^{0,0}$. \item\label{item:qclt}(QCLT)
For $\mb P$-almost all $\omega$, $P_\omega^{0,0}$-almost surely, $(X_{n^2t}/n)_{t\ge 0}$ converges weakly, as $n\to\infty$, to a Brownian motion with deterministic non-degenerate covariance matrix $\Sigma={\rm diag}\{2E_{\mb Q}[\omega_0(0,e_i)], i=1,\ldots,d\}$. \end{enumerate}
\end{theorem} In the special case where the environment is time-independent, i.e., $\mb P(\omega_t=\omega_s \text{ for all } t,s\in\R)=1$, we say that the environment is {\it static}. \begin{remark} For balanced random walks in a static, uniformly-elliptic, ergodic random environment on $\Z^d$, the QCLT was first proved by Lawler \cite{Lawl82}; it is a discrete version of the result of Papanicolaou and Varadhan \cite{PV82}. It was later generalized to static random environments with weaker ellipticity assumptions in \cite{GZ, BD14}.
\end{remark}
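\begin{remark} As a quick sanity check of the normalization in Theorem~\ref{thm:recall}\eqref{item:qclt}, consider the homogeneous environment $\omega_t(x,y)\equiv\tfrac{1}{2d}$ for $y\sim x$, which is balanced and uniformly elliptic for any $\kappa<\tfrac{1}{2d}$. Then $(X_t)_{t\ge0}$ is the rate-one simple random walk on $\Z^d$, the environmental process is constant, $\mb Q=\mb P$, and \[ \Sigma={\rm diag}\{2E_{\mb Q}[\omega_0(0,e_i)],\ i=1,\ldots,d\}=\tfrac{1}{d}\,I, \] where $I$ is the $d\times d$ identity matrix; this is indeed the diffusion matrix of the rate-one simple random walk. \end{remark}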
\begin{remark}\label{rm3}
Write $\norm{f}_{L^p(\mb P)}:=(E_{\mb P}[|f|^p])^{1/p}$ for $p\in\R\setminus\{0\}$. Let \[ \rho(\omega):=\mathrm{d}\mb Q/\mathrm{d}\mb P. \] It is proved in \cite{DGR15} that $\rho>0$, $\mb P$-almost surely. At the end of the proof of \cite[Theorem 1.2]{DGR15}, it is shown that $E_\mb Q[g]\le C\norm{g}_{L^{d+1}(\mb P)}$ for any bounded continuous function $g$, which implies \begin{equation}\label{rho-moment} E_{\mb P}[\rho^{(d+1)/d}]<\infty. \end{equation} Moreover, one of our main results (see Theorem~\ref{thm:ap-property}\eqref{item:neg-moment} below) shows that there exists $q=q(\kappa,d)$ such that the $L^{-q}(\mathbb P)$ moment of $\rho$ is also bounded.
For $(x,t)\in\Z^d\times\R$, set \[ \rho_\omega(x,t):=\rho(\theta_{x,t}\omega). \] Since $\Omega$ is equipped with a product $\sigma$-field, for any fixed $\omega\in\Omega$, the map $\R\to\Omega$ defined by $t\mapsto \theta_{0,t}\omega$ is measurable. Hence for almost-all $\omega$, the function $\rho_\omega(x,t)$ is measurable in $t$. Moreover, $\rho_\omega$ possesses the following properties.
For $\mb P$-almost all $\omega$, \begin{enumerate}[(i)] \item $\rho_\omega(x,t)\delta_x\mathrm{d} t$ is an invariant measure for the process $\hat X_t$ under $P_\omega$; \item $\rho_\omega(x,t)>0$ is the unique density (with respect to $\delta_x\mathrm{d} t$) for an invariant measure of $\hat X$ that satisfies $E_\mb P[\rho_\omega(0,0)]=1$; \item $\rho_\omega$ has a version which is absolutely continuous with respect to $t$ with \begin{equation}\label{rho-invariance} \partial_t\rho_\omega(x,t)=\sum_{y} \rho_\omega(y,t)\omega_t(y,x) \end{equation} for almost every $t$, where $\omega_t(x,x):=-\sum_{y:y\sim x}\omega_t(x,y)$. \end{enumerate}
The proof of these properties can be found in \cite[Appendix]{arXiv}.
\end{remark}
As a main result of our paper, we will present the following {\it local limit theorem} (LLT), which is a finer characterization of the local behavior of the random walk than the QCLT. Let \[ \hat 0:=(0,0)\in\Z^d\times\R. \] For $\hat x=(x,t),\hat y=(y,s)\in \Z^d\times\R$, $t\le s$, define \begin{equation}\label{eq:def-hk} p^\omega(\hat x, \hat y):=P_\omega^{x,t}(X_{s-t}=y), \quad q^{\omega}(\hat x,\hat y)=\dfrac{p^\omega(\hat x,\hat y)}{\rho_\omega(\hat y)}. \end{equation}
\begin{theorem} [LLT]\label{thm:llt} For $\mb P$-almost all $\omega$ and any $T>0$, \[ \lim_{n\to\infty}\sup_{x\in\R^d,t>T} \Abs{ n^dq^\omega(\hat 0;\floor{nx},n^2t) -p_t^\Sigma(0,x) }=0. \] Here $p_t^\Sigma(0,x)=[(2\pi t)^{d}\det\Sigma]^{-1/2}\exp(-x^T\Sigma^{-1} x/2t)$ is the transition kernel of the Brownian motion with covariance matrix $\Sigma$ and starting point $0$, and $\floor{x}:=(\floor{x_1},\ldots, \floor{x_d})\in\Z^d$ for $x\in\R^d$. \end{theorem}
The proof of the LLT follows from Theorem~\ref{thm:recall} and a localization of the {\it heat kernel} $q^\omega(\hat 0,\cdot)$, an argument already implemented in \cite{Barlow-Hambly09} and \cite{ADS18} in the context of random conductance models. For this purpose, the regularity of $\hat x\mapsto q^\omega(\hat 0,\hat x)$ is essential. We use an analytical tool from classical PDE theory: the parabolic Harnack inequality (PHI) which yields not only H\"older continuity (cf. Corollary~\ref{cor:hoelder} below) of $q^\omega(\hat 0,\cdot)$ but also very sharp heat kernel estimates. Note that for fixed $\hat x=(x,t)$, the function $u(\hat y)=q^\omega(\hat y,\hat x)$ satisfies $\mc L_\omega u=0$ in $\Z^d\times(-\infty,t)$. However in our non-reversible model, we need to prove, instead of PHI for $\mc L_\omega$, the PHI (Theorem~\ref{thm-ah}) for the {\it adjoint operator} $\mc L^*_\omega$ (defined below), since the heat kernel
$v_\omega(\hat x):=q^\omega(\hat 0,\hat x)$
solves \begin{equation}\label{e27} \mc L_\omega^*v(\hat x):=\sum_{y:y\sim x}\omega_t^*(x,y)(v(y,t)-v(\hat x))-\partial_t v(\hat x)=0 \end{equation} for $\hat x=(x,t)\in\Z^d\times(0,\infty)$, where \[ \omega_t^*(x,y):=\frac{\rho_\omega(y,t)\omega_t(y,x)}{\rho_\omega(x,t)} \quad\mbox{ for }x\sim y\in\Z^d. \] Note that $\omega^*$ is not necessarily a balanced environment anymore.
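For the reader's convenience, let us record the short formal computation behind \eqref{e27} (we argue formally and ignore differentiability issues). Write $p(x,t):=p^\omega(\hat 0;x,t)$, $\rho(x,t):=\rho_\omega(x,t)$ and $q:=p/\rho$, and recall the convention $\omega_t(x,x)=-\sum_{y\sim x}\omega_t(x,y)$ from Remark~\ref{rm3}. The Kolmogorov forward equation reads $\partial_t p(x,t)=\sum_{y}p(y,t)\omega_t(y,x)$, which has the same form as \eqref{rho-invariance}. Hence \[ \partial_t q(x,t) =\frac{\partial_t p(x,t)-q(x,t)\,\partial_t\rho(x,t)}{\rho(x,t)} =\frac{1}{\rho(x,t)}\sum_{y}\bigl(p(y,t)-q(x,t)\rho(y,t)\bigr)\omega_t(y,x) =\sum_{y\sim x}\omega_t^*(x,y)\bigl(q(y,t)-q(x,t)\bigr), \] since the term $y=x$ vanishes and, for $y\sim x$, $p(y,t)-q(x,t)\rho(y,t)=\rho(y,t)\bigl(q(y,t)-q(x,t)\bigr)$; this is precisely \eqref{e27} for $v_\omega$.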
For $r>0$, $x\in\R^d$, we let \[
B_r(x)=\{y\in\Z^d: |x-y|_2<r\}, \quad B_r=B_r(0) \] and define for $\hat x=(x,t)\in\R^d\times\R$ the {\it parabolic balls} \begin{equation}\label{def:parab-ball} Q_r(\hat x)=B_r(x)\times[t,t+r^2), \qquad Q_r=Q_r(\hat 0). \end{equation} Throughout this paper, unless otherwise specified,
$C, c$ denote generic positive constants that depend only on $(d,\kappa)$, and which may differ from line to line. If $cB\le A\le CB$, we write \[ A\asymp B. \]
Our second main result is \begin{theorem}[PHI for $\mc L^*_\omega$]\label{thm-ah} For $\mb P$-almost all $\omega$, any non-negative solution $v$ of the adjoint equation $\mc L_\omega^* v=0$ in $B_{2R}\times(0, 4R^2] $ satisfies \[ \sup_{B_R\times(R^2,2R^2)}v\le C\inf_{B_R\times (3R^2,4R^2]}v. \] \end{theorem}
In PDE, the Harnack inequality for the adjoint of non-divergence form elliptic differential operators was first proved by Bauman \cite{Baum84}, and was generalized to the parabolic setting by Escauriaza \cite{Esc00}. Our proof of Theorem~\ref{thm-ah} follows the main idea of \cite{Esc00}.
\begin{remark} For time discrete random walks in a static environment, Theorem~\ref{thm-ah} was obtained by Mustapha \cite{Mustapha06}. His argument follows basically \cite{Esc00}, and uses the PHI \cite[Theorem~4.4]{KT98} of Kuo and Trudinger in the time discrete situation.
Moreover, in the static case, the volume-doubling property of the invariant distribution, which is the essential part of the proof of Theorem~\ref{thm-ah}, is much simpler, see \cite{FS84}. In our dynamical setting, a {\it parabolic} volume-doubling property (Theorem~\ref{thm:vd}) is required. To this end, we adapt ideas of Safonov-Yuan \cite{SY} and results in the references therein \cite{FSY, Baum84,Garo} into our discrete space setting.
\end{remark}
The main challenge in proving Theorem~\ref{thm-ah} is that $\mc L^*_\omega$ is neither balanced nor uniformly elliptic, and so the PHI for $\mc L_\omega$ (Theorem~\ref{Harnack}) is not immediately applicable. This is the main difference from the random conductance model with symmetric jump rates, where \[ \omega_t(x,y)=\omega_t(y,x)=\omega^*_t(x,y), \] and thus the PHI for $\mc L_\omega$ coincides with the PHI for $\mc L_\omega^*$. See \cite{Andres14,Delm99,DD05,ACDS,HK16}.
Let us explain the main idea for the proof of Theorem~\ref{thm-ah}. An important observation is that solutions $v$ of $\mc L_\omega^*$ can be expressed in terms of hitting probabilities of the {\it time-reversed} process, cf. Lemma~\ref{lem8} below. Thus to compare values of the adjoint solution, one only needs to estimate hitting probabilities of the {\it original} process that {\it starts from} the boundary. To this end, we will use a ``boundary Harnack inequality" (Theorem~\ref{thm-bh}) which compares $\mc L_\omega$-harmonic functions near the boundary. We will also need a volume-doubling inequality for the invariant measure (Theorem~\ref{thm:vd}) to control the change of probabilities due to time-reversal.
Recall the heat kernel $q^\omega$ in \eqref{eq:def-hk}. For any $A\subset\R^d$ and $s\in\R$, let \[\rho_\omega(A,s)=\sum_{x\in A\cap\Z^d}\rho_\omega(x,s).\]
We write the $\ell^2$-norm of $x\in\R^d$ as $|x|=|x|_2$. For $r\ge 0,t>0$, define \begin{equation}\label{eq:def-function-h} \mathfrak{h}(r,t)=\tfrac{r^2}{t\vee r}+r\log(\tfrac{r}{t}\vee 1). \end{equation} Note that $\mathfrak{h}(c_1r,c_2 t)\asymp \mathfrak{h}(r,t)$ for constants $c_1,c_2>0$.
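Note also the two regimes of $\mathfrak{h}$: \[ \mathfrak{h}(r,t)= \begin{cases} r^2/t, & r\le t,\\ r\bigl(1+\log(r/t)\bigr), & r>t, \end{cases} \] i.e.\ the usual Gaussian exponent when $r\le t$, and a Poisson-type correction when $r>t$, reflecting the fact that a nearest-neighbor walk with bounded jump rates can travel a distance $r\gg t$ only through an atypically large number of jumps.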
Our third main results are the following heat kernel estimates (HKE). \begin{theorem}[HKE]\label{thm:hke} For $\mb P$-almost every $\omega$ and all $\hat x=(x,t)\in\Z^d\times(0,\infty)$, \begin{equation}\label{eq:quenched-hke} \frac{c}{\rho_\omega(B_{\sqrt t}(y),s)}
e^{-C\frac{|x|^2}{t}} \le q^\omega(\hat 0,\hat x) \le \frac{C}{\rho_\omega(B_{\sqrt t}(y),s)}
e^{-c\mathfrak{h}(|x|,t)} \end{equation}
for all $s\in[0,t]$ and $y$ with $|y|\le |x|+c\sqrt t$.
Moreover, recalling the definition of $L^p(\mb P)$ in Remark~\ref{rm3}, there exists $p=p(d,\kappa)>0$ such that \begin{align} &\norm{P_\omega^{0,0}(X_t=x)}_{L^{(d+1)/d}(\mb P)}
\le \frac{C}{(t+1)^{d/2}}e^{-c\mathfrak{h}(|x|,t)}\label{eq:mm-hke1}\\ \text{and }\quad &\norm{P_\omega^{0,0}(X_t=x)}_{L^{-p}(\mb P)} \ge
\frac{c}{(t+1)^{d/2}}e^{-C\frac{|x|^2}{t}}\label{eq:mm-hke2} \end{align} for all $(x,t)\in\Z^d\times(0,\infty)$. As a consequence, setting $G^\omega(0,x)=\int_0^\infty P_\omega^{0,0}(X_t=x)\mathrm{d} t$, we have for $d\ge 3$ and $x\in\Z^d$, \begin{equation}\label{eq:mm-green} \norm{G^\omega(0,x)}_{L^{(d+1)/d}(\mb P)}\asymp \norm{G^\omega(0,x)}_{L^{-p}(\mb P)}\asymp
(|x|+1)^{2-d}. \end{equation} \end{theorem}
Note that for a general ergodic environment, the density $\rho_\omega$ does not have deterministic (positive) upper and lower bounds, and thus one cannot expect deterministic quenched Gaussian bounds for $p^\omega(\hat 0,\hat x)$. However, our Theorem~\ref{thm:hke} shows that it has $L^{(d+1)/d}(\mb P)$ and $L^{-p}(\mb P)$ moment bounds.
Furthermore, we can characterize asymptotics of the Green function of the RWRE.
Recall the notations $\Sigma$ (in Theorem~\ref{thm:recall} \eqref{item:qclt}), $p^\Sigma_t$, $\floor{x}$ (in Theorem~\ref{thm:llt}), and $\mathfrak{h}$ in \eqref{eq:def-function-h}. \begin{corollary}\label{cor:q-estimates}
The following statements are true for $\mb P$-almost every $\omega$. \begin{enumerate}[(i)] \item\label{cor:q-hke} There exists $t_0(\omega)>0$ such that for any $\hat x=(x,t)\in\Z^d\times(t_0,\infty)$, \[
\frac{c}{t^{d/2}}e^{-\frac{C|x|^2}{t}} \le q^\omega(\hat 0,\hat x) \le
\frac{C}{t^{d/2}}e^{-c\mathfrak{h}(|x|,t)}. \] As a consequence, the RWRE is recurrent when $d=2$ and transient when $d\ge 3$. \item\label{cor:green1} When $d=2$, for all $x\in \R^d\setminus\left\{0\right\}$, \[\lim_{n\to\infty}\frac{1}{\log n}\int_0^\infty \left[q^\omega(\hat 0;0,t)-q^\omega(\hat 0;\floor{nx},t)\right]\mathrm{d} t =\frac{1}{\pi\sqrt{\det\Sigma}}. \] \item\label{cor:green2} When $d\ge 3$, for all $x\in\R^d\setminus\left\{0\right\}$, \[ \lim_{n\to\infty}n^{d-2}\int_0^\infty q^\omega(\hat 0;\floor{nx},t)\mathrm{d} t =\int_0^\infty p^\Sigma_t(0,x)\mathrm{d} t. \] \end{enumerate} \end{corollary}
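\begin{remark} For orientation, the limit in Corollary~\ref{cor:q-estimates}\eqref{cor:green2} can be evaluated in closed form: for $d\ge 3$ and $x\neq 0$, the change of variables $u=x^T\Sigma^{-1}x/(2t)$ gives \[ \int_0^\infty p^\Sigma_t(0,x)\mathrm{d} t =\frac{\Gamma\bigl(\tfrac d2-1\bigr)}{2\pi^{d/2}\sqrt{\det\Sigma}}\,\bigl(x^T\Sigma^{-1}x\bigr)^{1-d/2}, \] which is comparable to $|x|^{2-d}$, consistent with the decay in \eqref{eq:mm-green}. \end{remark}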
Results similar to Corollary~\ref{cor:q-estimates}\eqref{cor:green1}\eqref{cor:green2} have also been obtained recently for the random conductance model \cite{ADS18}.
The organization of this paper is as follows. In Section~\ref{sec:vd}, we prove a parabolic volume-doubling property and an $A_p$ inequality for $\rho_\omega$, and obtain a new proof of the PHI for $\mc L_\omega$. In Section~\ref{sec:near-bdry}, we establish estimates of $\mc L_\omega$-harmonic functions near the boundary, showing both the interior elliptic-type and boundary PHI's. We prove the PHI for the adjoint operator (Theorem~\ref{thm-ah}) in Section~\ref{sec:proof-of-ah}. Finally, with the adjoint PHI, we prove Theorems~\ref{thm:llt}, \ref{thm:hke}, and Corollary~\ref{cor:q-estimates} in Section~\ref{sec:proof-llt-hke}. Section~\ref{sec:auxiliary-prob} contains probability estimates that are used in previous sections. Some estimates and standard arguments can be found in the Appendix of the longer arXiv version \cite{arXiv} of this paper.
\section{A local volume-doubling property and its consequences}\label{sec:vd}
The purpose of this section is to obtain a parabolic volume-doubling property (Theorem~\ref{thm:vd}) and a negative moment estimate (Theorem~\ref{thm:ap-property}) for the density $\rho_\omega$. The proof relies crucially on a volume-doubling property for the hitting probabilities restricted in a finite ball (Theorem~\ref{thm:prob_doubling}), which is an improved version of \cite[Theorem 1.1]{SY} by Safonov and Yuan in the PDE setting.
As a by-product, we obtain a new proof of the classical PHI of Krylov and Safonov \cite{KS80} in the lattice (Theorem~\ref{Harnack}). Our proof, which is of interest on its own, can be viewed as the parabolic version of Fabes and Stroock's \cite{FS84} proof in the elliptic PDE setting. \subsection{Volume-doubling properties} For a finite subgraph $D\subset\Z^d$, let \[\label{page:bdry} \partial D=\{y\in\Z^d\setminus D: y\sim x \mbox{ for some }x\in D\}, \quad \bar D:=D\cup\partial D. \]
For $\ms D\subset \Z^d\times\R$, define the {\it parabolic boundary} of $\ms D$ as \[ \partial^\p\ms D:= \{ (x,t)\notin\ms D: \big(B_{1+\epsilon}(x)\times(t-\epsilon,t]\big)\cap\ms D\neq\emptyset \mbox{ for all }\epsilon>0 \}. \] In the special case $\ms D=D\times[0,T)$ for some finite $D\subset\Z^d$, it is easily seen that $\partial^\p\ms D=(\partial D\times[0,T])\cup(\bar D\times\{T\})$. See Figure~\ref{fig:p-bdry}.
By the optional stopping theorem, for any $(x,t)\in\ms D\subset\Z^d\times\R$ and any bounded integrable function $u$ on $\ms D\cup\partial^\p\ms D$,
\begin{equation}\label{representation} u(x,t)=-E_\omega^{x,t}\left[\int_0^\tau\mc L_\omega u(\hat X_r)\mathrm{d} r\right]+ E^{x,t}_\omega[u(\hat X_\tau)], \end{equation} where $\tau=\inf\{r\ge 0: (X_r,T_r)\notin \ms D\}$.
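Two special cases of \eqref{representation} will be used repeatedly: if $\mc L_\omega u=0$ in $\ms D$, then $u(x,t)=E_\omega^{x,t}[u(\hat X_\tau)]$; and if $\mc L_\omega u=-1$ in $\ms D$ with $u=0$ on $\partial^\p\ms D$, then \[ u(x,t)=E_\omega^{x,t}[\tau], \] the expected exit time of $\hat X$ from $\ms D$.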
\begin{figure}
\caption{The parabolic boundary of $D\times[0,T)$.}
\label{fig:p-bdry}
\end{figure}
\begin{theorem}\label{thm:vd} $\mb P$-almost surely, for every $r\ge 1/2$, \[
\sup_{t:|t|\le r^2}\rho_\omega(B_{2r},t)\le C\rho_\omega(B_r,0). \] \end{theorem} In the PDE setting, this type of estimate was first established by Fabes and Stroock \cite{FS84} for adjoint solutions of non-divergence form elliptic operators, and then generalized by Escauriaza \cite{Esc00} to the parabolic case.
To obtain Theorem~\ref{thm:vd}, a crucial estimate is a volume-doubling property (Theorem~\ref{thm:prob_doubling}) for the hitting measure of the random walk, which we will prove by adapting some ideas of Safonov and Yuan \cite[Theorem 1.1]{SY} in the PDE setting. Note that our proof of Theorem~\ref{thm:prob_doubling} relies on a probabilistic estimate (Lemma~\ref{lem1}) rather than the Harnack inequality (Theorem~\ref{Harnack}).
For any $A\subset\Z^d$, $s\in\R$, define the stopping time \begin{equation} \label{def:st-a-s} \Delta(A,s)=\inf\{t\ge 0:\hat X_t\notin A\times(-\infty,s)\}. \end{equation}
\begin{theorem}\label{thm:prob_doubling} Assume $\omega\in\Omega_\kappa$. There exists $k_0=k_0(d,\kappa)$ such that for any $k\ge k_0$, $m\ge 2$, $r,s>0$ and any $y\in B_{k\sqrt s}$, we have \[ P_\omega^{y,0}(X_{\Delta(B_{mk\sqrt s},s)}\in B_{2r})\le C_k P_\omega^{y,0}(X_{\Delta(B_{mk\sqrt s}, s)}\in B_r). \]
Here $C_k$ depends only on $(k,d,\kappa)$. In particular, for any $k\ge 1$, $|y|\le k\sqrt s$, \[ P_\omega^{y,0}(X_s\in B_{2r})\le C_k P_\omega^{y,0}(X_s\in B_r). \] \end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:prob_doubling}:] Since $B_1=\{0\}$, we only consider $r\ge 1/2$. Fix $s$, $r$. For $\rho\ge 0$, $k\ge k_0$, define $L_{k,\rho}=B_{k\rho}\times\{s-\rho^2\}$ and \[
D_{k,\rho}=\bigcup_{R\le\rho}L_{k,R}=\{(x,t)\in\Z^d\times(-\infty,s]: |x|/k\le \sqrt{s-t}\le\rho\}. \] \begin{figure}
\caption{The shaded region is $D_{k,\rho}$.}
\label{fig:dkr}
\end{figure} Note that $k_0>6$ is a large constant to be determined. For any $R\le\rho$, by Lemma~\ref{lem1} below, there exists $\alpha_k>0$ depending on $(k,\kappa,d)$ such that \begin{align*} \min_{(x,t)\in L_{k,R}}P^{x,t}_\omega(X_{\Delta(B_{mk\rho},s)}\in B_r)
&\ge (\tfrac{r}{2kR}\wedge\tfrac{1 }{2})^{\alpha_k}. \end{align*} Let $\beta_k>1$ be a large constant to be determined later. Then, letting \[ v_\rho(\hat x)=(8k)^{\alpha_k}(\beta_k+1)P_\omega^{\hat x}(X_{\Delta(B_{mk\rho},s)}\in B_r)-P_\omega^{\hat x}(X_{\Delta(B_{mk\rho},s)}\in B_{2r}), \] we get $\inf_{D_{k,R}}v_\rho\ge (\beta_k+1)(\tfrac{4r}{R}\wedge2k)^{\alpha_k}-1$ for $0<R\le \rho$. In particular, $\inf_{D_{k,(4r)\wedge\rho}}v_\rho\ge \beta_k$ for $\rho\ge 0$. Set \[ R_\rho=\sup\{R\in[0,m\rho]:\inf_{D_{k,R}}v_\rho\ge 0\}. \] Clearly, $R_\rho\ge (4r)\wedge\rho$. We will prove that \begin{equation}\label{eq:R-rho} R_\rho\ge \rho \text{ for all }\rho>0. \end{equation}
Suppose that \eqref{eq:R-rho} fails; then $R_\rho<\rho$ for some $\rho>4r$. We will show that this leads to a contradiction. First, for such $\rho>4r$, we claim that there exists a constant $\gamma=\gamma(d,\kappa)>0$ such that \begin{equation} \label{eq:lowb_vrho} \min_{L_{1,R}}v_\rho\ge \beta_k(\frac{r}{R})^\gamma \quad\text{ for all }R\in[2r,R_\rho). \end{equation} By Lemma~\ref{lem1}, $g(R, R/2):= \min_{x\in B_R}P_\omega^{x,s-R^2}(X_{\Delta(B_{2R},s-(R/2)^2)}\in B_{R/2})\ge C$. Further, by the Markov property and the fact that $R_\rho<\rho$, for $R\in[2r,R_\rho)$, \begin{align*} &\min_{x\in B_R}P_\omega^{x,s-R^2}(X_{\Delta(D_{k,m\rho},s-(R/2^n)^2)}\in B_{R/2^n})\\ &\ge g(R,R/2)\cdots g(R/2^{n-1},R/2^n)\ge C^n. \end{align*} Since $v_\rho(\hat X_t)$ is a martingale in $B_{mk\rho}\times(-\infty,s)$ and $v_\rho\ge 0$ in $D_{k,R_\rho}$, choosing $n$ such that $R/2^n\le r<R/2^{n-1}$, the above inequality yields \begin{align*} v_\rho(x,s-R^2) &\ge P_\omega^{x,s-R^2}(X_{\Delta(D_{k,m\rho},s-(R/2^n)^2)}\in B_{R/2^n})\inf_{D_{k,r}}v_\rho\\ &\ge C^n\beta_k\ge \beta_k(\frac{r}{R})^\gamma \end{align*} for $R\in[2r,R_\rho)$ and $x\in B_R$. Display \eqref{eq:lowb_vrho} is proved.
Next, we will show that for $R\in[2r,R_\rho)$, \begin{equation} \label{eq:upperb-frho} f_\rho(R):=\sup_{\partial B_{kR}\times[s-R^2,s)}v_\rho^-\le (\frac{r}{R})^{-c/\log q_k}, \end{equation} where $v_\rho^-=\max\{0,-v_\rho\}$, $q_k=1-\tfrac{c_1}{k}$ and $c_1>0$ is a constant to be determined. Noting that $v_\rho^-=0$ in $D_{k,R_\rho}\cup (B_{2r}^c\times\{s\})$ and that $v_\rho^-(\hat X_t)$ is a sub-martingale, we know that $f_\rho(R)$ is a decreasing function for $R\in(\frac{2r}{k},R_\rho)$.
Further, for any $(x,t)\in \partial B_{kR}\times[s-R^2,s)$ with $q_k R\in(\frac{2r}{k},R_\rho)$, by the optional stopping theorem, in view of Theorem~\ref{thm:fluctuation} below, \begin{align*} v_\rho^-(x,t) &\le P_\omega^{x,t}(X_a\in\partial B_{kq_kR}\text{ for some }a\in[0,R^2])f_\rho(q_kR)\\ &\le
P_\omega^{x,t}\left(\sup_{0\le a\le R^2}|X_a-x|\ge (1-q_k)kR \right)f_\rho(q_kR)\\ &\stackrel{Theorem~\ref{thm:fluctuation}}{\le} C(e^{-cc_1^2}+e^{-c_1 R})f_\rho(q_kR) \le f_\rho(q_kR)/2, \end{align*} if $c_1$ is chosen to be big enough. So $f_\rho(R)\le f_\rho(q_kR)/2$. Let $n\ge 0$ be the integer such that $q_k^{n+1}R<r\le q_k^n R$. We conclude that for $R\in[2r,R_\rho)$, \[ f_\rho(R)\le 2^{-n}f_\rho(q_k^nR)\le \left( \frac{r}{R} \right)^{-c/\log q_k}f_\rho(r). \] Inequality \eqref{eq:upperb-frho} then follows from the fact that $v_\rho^-\le 1$.
Finally, if $R_\rho<\rho$ for some $\rho>4r$,
let $\tau=\inf\{t\ge0:\hat X_t\notin (B_{mk\rho}\times(-\infty,s))\setminus(\bar B_{kR/2}\times[s-(R/2)^2,s))\}$. Since $v_\rho=0$ on $\partial^\p (B_{mk\rho}\times(-\infty,s))\setminus(B_{2r}\times\{s\})$, by the optional stopping theorem, for $R\in[R_\rho,2R_\rho)$ and $x\in B_{kR}$, \begin{align*} &v_\rho(x,s-R^2)\\
&=E_\omega^{x,s-R^2}[v_\rho(\hat X_\tau)\mathbbm{1}_{\hat X_\tau\in B_{kR/2}\times\{s-(R/2)^2\} \text{ or }\hat X_\tau\in\partial B_{kR/2}\times[s-(R/2)^2,s)}]\\
&\ge
P_\omega^{x,s-R^2}(X_{\Delta(B_{2k\rho},s-(R/2)^2)}\in B_{R/2})\min_{L_{1,R/2}}v_\rho-f_\rho(R/2)\\
&\stackrel{Lemma~\ref{lem1},\eqref{eq:lowb_vrho},\eqref{eq:upperb-frho}}{\ge}
A_k\beta_k(\frac{2r}{R})^\gamma-(\frac{2r}{R})^{-c/\log q_k},
\end{align*} where $A_k$ depends on $(k,\kappa,d)$. Taking $k_0>c_1$ to be big enough such that $-c/\log q_k>\gamma$ for $k\ge k_0$ and choosing $\beta_k>A_k^{-1}$, the above inequality then implies $\inf_{D_{k,2R_\rho}}v_\rho\ge 0$, which contradicts our definition of $R_\rho$. Display \eqref{eq:R-rho} is proved, and therefore, $\min_{x\in B_{k\sqrt s}}v_{\sqrt s}(x,0)\ge 0$. The theorem follows. \end{proof}
\begin{corollary} \label{cor:space-time-vd} Let $\omega\in\Omega_\kappa$ and $k_0$ as in Theorem~\ref{thm:prob_doubling}. For any $r>0, k\ge k_0, m\ge 2, s>0$ and $y\in B_{k\sqrt s}$, we have \[
\sup_{t\ge 0: |t-s|\le r^2}P_\omega^{y,0}(X_{\Delta(B_{mk\sqrt s},t)}\in B_{2r})\le C_k P_\omega^{y,0}(X_{\Delta(B_{mk\sqrt s},s)}\in B_r), \]
where $C_k$ depends on $(k,\kappa,d)$. In particular, for any $k\ge 1, |y|\le k\sqrt s$, \[
\sup_{t\ge 0: |t-s|\le r^2}P_\omega^{y,0}(X_t\in B_{2r})\le C_k P_\omega^{y,0}(X_s\in B_r). \] \end{corollary} \begin{proof} It suffices to consider $r<\sqrt s$, because otherwise, by Lemma~\ref{lem1}, the right side is bigger than a constant. When $t\in[0\vee(s-r^2),s]$, \begin{align*} &\min_{y\in B_{k\sqrt s}}P_\omega^{y,0}(X_{\Delta(B_{mk\sqrt s},s)}\in B_{4r})\\ &\ge \min_{y\in B_{k\sqrt s}}P_\omega^{y,0}(X_{\Delta(B_{mk\sqrt s},t)}\in B_{2r})\min_{x\in B_{2r}}P_\omega^{x,t}(X_{s-t}\in B_{2r}(x))\\ &\stackrel{Lemma~\ref{lem1}}{\ge} C \min_{y\in B_{k\sqrt s}}P_\omega^{y,0}(X_{\Delta(B_{mk\sqrt s},t)}\in B_{2r}). \end{align*} By Theorem~\ref{thm:prob_doubling}, we can replace $4r$ in the above inequality by $r$.
When $t\in[s,s+r^2]$, for any $y\in B_{k\sqrt s}$, \begin{align*} &P_\omega^{y,0}(X_{\Delta(B_{mk\sqrt s},t)}\in B_{2r})\\
&\le\sum_{n=0}^\infty\sum_{x:|x|\in[2^nr,2^{n+1}r)} P_\omega^{y,0}(X_{\Delta(B_{mk\sqrt s},s)}=x) P_\omega^{x,s}(X_{t-s}\in B_{2r}) \\ &\stackrel{Corollary~\ref{cor:prob-upperb}}{\le } C\sum_{n=0}^\infty P_\omega^{y,0}(X_{\Delta(B_{mk\sqrt s},s)}\in B_{2^n r})(e^{-c2^nr}+ e^{-c 4^n}). \end{align*} Observing that (cf. Theorem~\ref{thm:prob_doubling}) \[ P_\omega^{y,0}(X_{\Delta(B_{mk\sqrt s},s)}\in B_{2^n r}) \le C^n P_\omega^{y,0}(X_{\Delta(B_{mk\sqrt s},s)}\in B_{r}), \] our proof is complete. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:vd}:] Let $k_0\ge 2$ be as in Theorem~\ref{thm:prob_doubling}. Recall $\bar\omega_t, Q_r, \Delta$ in \eqref{eq:def-omegabar}, \eqref{def:parab-ball}, \eqref{def:st-a-s}. For fixed $\xi\in\Omega_\kappa$, define a probability measure $\Q_R=\Q_R^\xi$ on $\{\theta_{\hat x}\xi: \hat x\in Q_R\}$ such that for any bounded measurable $f\in\R^{\Omega}$, \[ E_{\Q_R}[f]=\frac{1}{C_R}E_\xi^{0,-R^2} [\int_0^{\Delta(B_{2k_0 R},R^2)}f(\bar\xi_s)\mathbbm{1}_{\hat X_s\in Q_R}\mathrm{d} s], \] where $C_R$ is a renormalization constant such that $\Q_R$ is a probability.
First, we claim that $C_R\asymp R^2$. Clearly, $C_R\le 2R^2$. On the other hand, \begin{align*} C_R &= E_\xi^{0,-R^2}[\int_0^{\Delta(B_{2k_0 R},R^2)}\mathbbm{1}_{\hat X_s\in Q_R}\mathrm{d} s]\\ &\ge P_\xi^{0,-R^2}(X_{\Delta(B_R,0)}\in B_{R/2})\min_{x\in B_{R/2}}E_\xi^{x,0}[\Delta(B_R,R^2)]\\ &\stackrel{Lemma~\ref{lem1}}{\ge} C\min_{x\in B_{R/2}}E_\xi^{x,0}[\Delta(B_R,R^2)]. \end{align*}
Since $|X_t-X_0|^2-\tfrac d\kappa t$ is a supermartingale, denoting $\tau=\Delta(B_R,R^2)$, we have
$0\ge E_\xi^{x,0}[|X_\tau-x|^2-\tfrac d\kappa \tau]$. Hence for any $x\in B_{R/2}$, \[
E_\xi^{x,0}[\tau]\ge cE_\xi^{x,0}[|X_\tau-x|^2] \ge CR^2 P_\xi^{x,0}(\tau<R^2), \] which implies $E_\xi^{x,0}[\tau]\ge cR^2$. Thus $C_R\ge CR^2$ and so $C_R\asymp R^2$.
Next, since $\Omega$ is pre-compact, by Prohorov's theorem, there is a subsequence of $\Q_R$ that converges weakly, as $R\to\infty$, to a probability measure $\tilde\Q$ on $\Omega$. We will show that $\tilde \Q$ is an invariant measure of the process $(\bar\omega_t)$. Indeed, let $p_R=p_{R,\xi}$ denote the kernel $p_R(\hat x;y,s):=P_\xi^{\hat x}(\hat X_{\Delta(B_{2k_0 R}, s)}=(y,s))$.
Then, letting $\ms Lf(\omega)=\sum_{e}\omega_0(0,e)[f(\theta_{e,0}\omega)-f(\omega)]+\partial_t f(\theta_{0,t}\omega)|_{t=0}$ denote the generator of the process $(\bar\omega_t)$, and $\hat y:=(y,s)$, we have \begin{equation}\label{eq: QR} E_{\Q_R}[\ms L f(\omega)] =C_R^{-1}\sum_{y\in B_R}\int_0^{R^2}p_R(0,-R^2;\hat y)\ms Lf(\theta_{\hat y}\xi)\mathrm{d} s \end{equation} for $f\in\textnormal{dom}(\ms L)$, where $\textnormal{dom}(\ms L)$ denotes the domain of the generator $\ms L$. Note that similar to $\rho_\omega$, the function $v(\hat x)=p_R(0,-R^2;\hat x)$ satisfies the equality \eqref{rho-invariance}: $\ms L^Tv(\hat x)=0$ for $\hat x\in B_{2R}\times(-R^2,R^2)$, where $\ms L^T v(\hat x)=\sum_y v(y,t)\omega_t(y,x)-\partial_t v(x,t)$. Hence, using integration by parts, \begin{align}\label{eq:qr-difference} &\Abs{\sum_{y\in B_R}\int_0^{R^2}p_R(0,-R^2;\hat y)\ms Lf(\theta_{\hat y}\xi)\mathrm{d} s} \nonumber\\&\le C\norm{f}_\infty\int_0^{R^2}\sum_{y\in \bar B_R\setminus\mathring{B}_R}p_R(0,-R^2;\hat y)\mathrm{d} s+2\norm{f}_\infty \end{align} for all $f\in\textnormal{dom}(\ms L)$, where $\overset{\circ}{B}_R=\{x\in B_R: x\not\sim\partial B_R\}$. Observe that \[ u(\hat x)=\int_0^{R^2}\sum_{y\in \bar B_R\setminus\mathring{B}_R}p_R(\hat x;\hat y)\mathrm{d} s=E^{\hat x}_\xi[\int_0^{\Delta(B_{2k_0 R},R^2)}\mathbbm{1}_{\hat X_t\in \bar B_R\setminus\mathring{B}_R\times(0,R^2)}]\mathrm{d} t \]
satisfies $\mc L_\xi u(\hat x)=-\mathbbm{1}_{\hat x\in \bar B_R\setminus\mathring{B}_R\times[0,R^2)}$ for $\hat x\in\ms D:= B_{2k_0R}\times[-R^2,R^2)$ and $u|_{\partial^\p\ms D}=0$. By the parabolic maximum principle \cite[Theorem A.3.1]{arXiv}, we get $u(0,-R^2)\le CR^{(2d+1)/(d+1)}$. Hence, by \eqref{eq: QR}, \eqref{eq:qr-difference}, and $C_R\asymp R^2$, \[ \lim_{R\to\infty}E_{\Q_R}[\ms Lf]=0 \qquad \forall \text{ bounded function }f\in\textnormal{dom}(\ms L), \] and so $E_{\tilde\Q}[\ms L f]=0$, which implies that $\tilde\Q$ is an invariant measure of $(\bar\omega_t)$.
Furthermore, we will show that $\tilde\Q\ll\mb P$. Notice that the function \[ w(\hat x):=E_\xi^{\hat x} [\int_0^{\Delta(B_{2k_0 R},R^2)}f(\bar\xi_s)\mathbbm{1}_{\hat X_s\in Q_R}\mathrm{d} s] \]
satisfies $\mc L_\xi w(\hat x)=-f(\theta_{\hat x}\xi)\mathbbm{1}_{\hat x\in Q_R}$ in $\ms D$ and $w|_{\partial^\p\ms D}=0$. By \cite[Theorem A.3.1]{arXiv}, for any bounded measurable $f\in\R^\Omega$, \[ E_{\Q_R}[f]\le CR^{-2}w(0,-R^2) \le
C\left[\int_0^{R^2}\sum_{x\in B_R}|f(\theta_{x,t}\xi)|^{d+1}\mathrm{d} t\bigg/(R^{2+d}) \right]^{1/(d+1)}, \] which, by the multi-dimensional ergodic theorem, yields $E_{\tilde\Q}[f]\le C\norm{f}_{L^{d+1}(\mb P)}$ as we take $R\to\infty$. So $\tilde \Q\ll\mb P$. By Theorem~\ref{thm:recall}, $\tilde \Q=\Q$.
Finally, since $\Q_R\Rightarrow\Q$, for any bounded measurable $f\in\R^\Omega$, \begin{align}\label{eq:formula-rho-br} &E_{\mb P}[\rho_\omega(B_r,t)f]=\sum_{x\in B_r}E_{\Q}[f(\theta_{x,-t}\omega)]\\ &=\lim_{R\to\infty}\sum_{x\in B_r,y\in B_R}\int_0^{R^2} P_\xi^{0,-R^2}(X_{\Delta(B_{2k_0 R},s)}=y)f(\theta_{x+y,s-t}\xi)\mathrm{d} s/C_R.\nonumber \end{align}
Hence, for any measurable function $f\ge 0$, $|t|\le r^2$, and $\mb P$-a.a. $\xi$, \begin{align*} &E_{\mb P}[\rho_\omega(B_r,0)f]\\ &\ge \lim_{R\to\infty}\sum_{z\in B_{R-r}}\int_0^{R^2} P_\xi^{0,-R^2}(X_{\Delta(B_{2k_0 R},s)}\in B_r(z))f(\theta_{z,s}\xi)\mathrm{d} s/C_R\\ &\stackrel{Corollary~\ref{cor:space-time-vd}}{\ge} C\lim_{R\to\infty}\sum_{z\in B_{R-r}}\int_0^{R^2} P_\xi^{0,-R^2}(X_{\Delta(B_{2k_0 R},s+t)}\in B_{2r}(z))f(\theta_{z,s}\xi)\mathrm{d} s/C_R\\ &\ge C \lim_{R\to\infty}\sum_{x\in B_{2r},y\in B_{R-3r}}\int_{r^2}^{R^2-r^2} P_\xi^{0,-R^2}(X_{\Delta(B_{2k_0 R},s)}=y)f(\theta_{x+y,s-t}\xi)\mathrm{d} s/C_R\\ &\stackrel{\eqref{eq:formula-rho-br}}{=}CE_{\mb P}[\rho_\omega(B_{2r},t)f]. \end{align*} Since $f$ is arbitrary, the theorem follows.
\end{proof}
\begin{remark} By Theorem~\ref{thm:vd}, for any $r\ge 1$, \begin{equation}\label{eq:rho-asymp} \frac{c}{r^2}\int_0^{r^2} \rho_\omega(B_r, s)\mathrm{d} s \le \rho_\omega(B_r,0) \le \frac{C}{r^2}\int_0^{r^2} \rho_\omega(B_r, s)\mathrm{d} s. \end{equation} Hence, by the multi-dimensional ergodic theorem, for $\mb P$-almost every $\omega$, \begin{equation}\label{eq:rho-ergodic}
c\le \varliminf_{r\to\infty}\frac{1}{|B_r|}\rho_\omega(B_r,0) \le
\varlimsup_{r\to\infty}\frac{1}{|B_r|}\rho_\omega(B_r,0) \le C. \end{equation} \end{remark}
\subsection{$A_p$ property and a new proof of the PHI for $\mc L_\omega$}\label{sec:ap&harnack}
We endow $\Z^d$ with the discrete topology and counting measure, and equip $\Z^d\times\R$ with the corresponding product topology and measure (where $\R$ has the usual topology and measure). For $\ms D\subset\Z^d\times\R$, let $|\ms D|$ be its measure, and denote the integral of a function $f$ over $\ms D$ by $\int_{\ms D}f$. For instance, \begin{equation}\label{eq:def-integral} \int_{B_R\times[0,T]}f=\sum_{x\in B_R}\int_0^T f(x,t)\mathrm{d} t, \end{equation}
and $|\ms D|=\int_{\ms D}1$. For $p>0$, we define a norm \begin{equation} \label{eq:def-norm}
\norm{f}_{\ms D, p}:=(\int_{\ms D}|f|^p/|\ms D|)^{1/p}. \end{equation}
A function $v\in\R^{\Z^d\times\R}$ is called an {\it adjoint solution} of $\mc L_\omega$ in $\ms D=B_R\times[T_1,T_2)$ if $\int_{\ms D}v\mc L_\omega\phi=0$ for any test function $\phi(x,t)\in \R^{\Z^d\times\R}$ that is supported on $B_{R}\times(T_1,T_2)$ and smooth in $t$.
For any function $w$ defined on $E\subset\Z^d\times\R$, we write $w(E):=\int_E w$.
\begin{lemma}\label{lem:pre-RH} Recall $\norm{\cdot}_{\ms D,p}$ in \eqref{eq:def-norm} and the parabolic balls $Q_r$ in \eqref{def:parab-ball}. Let $\omega\in\Omega_\kappa$. For any non-negative adjoint solution $v$ of $\mc L_\omega$ in $Q_{2r}$, $r>10$, \[ \norm{v}_{Q_r,(d+1)/d}\le C\norm{v}_{Q_{3r/2},1}. \] \end{lemma} \begin{proof} Denote the continuous balls of radius $r$ by \begin{equation}\label{eq:def-continuousball}
\mc O_r=\{x\in\R^d:|x|_2<r\} \quad\text{ and }\quad \mc O_r(y)=y+\mc O_r, \quad y\in\R^d. \end{equation}
Let $\phi_0\ge 0$ be a smooth (with respect to $t$) function supported on $\mc O_{3/2}\times[0,9/4)$ with ${\phi_0}|_{\mc O_1\times[0,1)}=1$ and set $\phi(x,t)=\phi_0(x/r,t/r^2)$. Let $f$ be any non-negative smooth function supported on $Q_{r}$ with $\norm{f}_{Q_r,d+1}=1$ and let $u\in[0,\infty)^{\Z^d\times\R}$ be supported on $Q_{9r/5}$ with $\mc L_\omega u=-f$ in $Q_{9r/5}$. Since \begin{align*} 0&=\int v\mc L_\omega(\phi u)=\int v\phi\mc L_\omega u+\int v u\mc L_\omega\phi+\sum_{x,y}\int_\R v(x,t)\omega_t(x,y)\nabla_{x,y}u\nabla_{x,y}\phi\mathrm{d} t, \end{align*} where $\nabla_{x,y}u(\cdot,t):=u(x,t)-u(y,t)$ and (cf. \eqref{eq:def-integral}) $\int=\int_{\Z^d\times\R}$, we get \[ \int v\phi f=\int v u\mc L_\omega\phi+\sum_{x,y}\int_\R v(x,t)\omega_t(x,y)\nabla_{x,y}u\nabla_{x,y}\phi\mathrm{d} t=:\rom{1}+\rom{2}. \]
By the maximum principle (\cite[Theorem~A.3.1]{arXiv}), $u\le Cr^2\norm{f}_{Q_r,d+1}\le Cr^2$. Thus, using $|\mc L_\omega\phi|\le C/r^2$, we get $|\rom{1}|\le Cv(Q_{3r/2})$. Further, noting that \begin{align*}
|\sum_{x,y}\int_\R v(x,t)\omega_t(x,y)(\nabla_{x,y}u)^2\mathrm{d} t|
&=|\int v\mc L_\omega (u^2)-2\int vu\mc L_\omega u|\\
&=2|\int vu\mc L_\omega u| \le Cr^2\int vf, \end{align*} we have \begin{align*}
|\rom{2}|&\le \left(\sum_{x,y}\int_\R v(x,t)\omega_t(x,y)(\nabla_{x,y}\phi)^2\mathrm{d} t\right)^{1/2} \left(\sum_{x,y}\int_\R v(x,t)\omega_t(x,y)(\nabla_{x,y}u)^2\mathrm{d} t\right)^{1/2}\\ &\le Cv(Q_{3r/2})^{1/2}(\int vf)^{1/2}. \end{align*} Hence we obtain $\int v f\le \int v\phi f\le Cv(Q_{3r/2})+Cv(Q_{3r/2})^{1/2}(\int vf)^{1/2}$ and so $v(Q_{3r/2})\ge c\int vf$. The lemma follows by taking supremum over all $f$ with $\norm{f}_{Q_r,d+1}=1$. \end{proof}
For $\hat x=(x_1,\ldots,x_d, t)$, define {\it parabolic cubes} with side-length $r>0$ as \begin{equation}\label{eq:def-parakub} K_r(\hat x)=(\prod_{i=1}^d[x_i-r,x_i+r)\cap\Z^d)\times[t,t+r^2), \quad K_r=K_r(\hat 0). \end{equation} We say that a function $w\in\R^{\Z^d\times\R}$ satisfies the {\it reverse H\"older inequality} $RH_q(\ms D)$, $1<q<\infty$, if for any parabolic subcube $K$ of $\ms D$, \[\tag{$RH_q$} \norm{w}_{K,q}\le C\norm{w}_{K,1}. \] We say that $w$ belongs to the {\it $A_p(\ms D)$ class} (with $A_p$ bound $A$), $1<p<\infty$, if there exists $A<\infty$ such that, for any parabolic subcube $K$ of $\ms D$, \[\tag{$A_p$} \norm{w}_{K, 1}\norm{1/w}_{K, 1/(p-1)}\le A \] Recall the stopping time $\Delta$ in \eqref{def:st-a-s}. For $R>0$, $\hat y\in B_{2R}\times\R$, let \begin{equation}\label{eq:def-gr} g_R(\hat y;x,t)=P_\omega^{\hat y}(X_{\Delta(B_{2R},t)}=x). \end{equation}
\begin{corollary}\label{cor:reverse-holder} Let $\omega\in\Omega_\kappa, R>0$. Recall $k_0$ in Theorem~\ref{thm:prob_doubling}. \begin{enumerate}[(i)] \item $\rho_\omega$ satisfies $RH_{(d+1)/d}(\Z^d\times\R)$. \item For any $y\in B_R$, $v_y(\hat x)=g_R(y,0;\hat x)$ satisfies \[RH_{(d+1)/d}(B_{R/2}\times[R^2/(2k_0^2),R^2/k_0^2]).\] \end{enumerate} \end{corollary} \begin{proof} Note that $\rho_\omega$, $v_y$ are adjoint solutions with volume-doubling properties Theorem~\ref{thm:vd} and Corollary~\ref{cor:space-time-vd}. The corollary follows from Lemma~\ref{lem:pre-RH}. \end{proof}
\begin{theorem} \label{thm:ap-property} Let $\omega, R, k_0, v_y$ be the same as in Corollary~\ref{cor:reverse-holder}. There exist $p=p(d,\kappa)>1, A=A(d,\kappa)$ such that, for $\mb P$-a.e. $\omega$, \begin{enumerate}[(a)]
\item\label{item:neg-moment} $\rho_\omega\in A_p(\Z^d\times\R)$ with $A_p$ bound $A$. As a consequence,
\[E_{\mb P}[\rho_\omega^{-1/(p-1)}]<\infty;\]
\item For any $y\in B_R$, $v_y$ belongs to $A_p(B_{R/2}\times[R^2/(2k_0^2), R^2/k_0^2])$ with $A_p$ bound $A$. As a consequence, for any $E\subset K$ where $K$ is a parabolic subcube of $B_{R/2}\times[R^2/(2k_0^2), R^2/k_0^2]$,
\begin{equation}\label{eq:A-infty}
\frac{g_R(y,0;E)}{g_R(y,0;K)}\ge C\left(\frac{|E|}{|K|}\right)^c.
\end{equation}
\end{enumerate} \end{theorem}
\begin{proof}
See \cite[Section~A.6]{arXiv}. \end{proof}
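\begin{remark} For the reader's convenience, we indicate the standard deduction of \eqref{eq:A-infty} from the $A_p$ bound, stated for a general weight $w\ge 0$ with $A_p$ bound $A$. For a parabolic subcube $K$ and $E\subset K$, H\"older's inequality with exponents $p$ and $p/(p-1)$ gives \[ \frac{|E|}{|K|} =\frac{1}{|K|}\int_K\mathbbm{1}_E\,w^{1/p}w^{-1/p} \le\Bigl(\frac{w(E)}{|K|}\Bigr)^{1/p}\norm{1/w}_{K,1/(p-1)}^{1/p} \le\Bigl(\frac{w(E)}{|K|}\cdot\frac{A|K|}{w(K)}\Bigr)^{1/p}, \] where the last step uses $\norm{1/w}_{K,1/(p-1)}\le A/\norm{w}_{K,1}=A|K|/w(K)$. Rearranging yields $w(E)/w(K)\ge A^{-1}\bigl(|E|/|K|\bigr)^{p}$, which gives \eqref{eq:A-infty} (with $c=p$) when applied to $w=v_y$. \end{remark}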
\begin{remark} The fact that $(RH)$ implies $(A_p)$ is a classical result in harmonic analysis. See, e.g., \cite[pp.~246--249]{CF74}, \cite[pp.~213--214]{Stein}. In the elliptic non-divergence form PDE setting, the $A_p$ inequality for adjoint solutions was proved by Bauman \cite{Baum84}, and an estimate of the form \eqref{eq:A-infty} was used by Fabes and Stroock \cite{FS84} to obtain a short proof of the elliptic Harnack inequality. \end{remark} In what follows, using \eqref{eq:A-infty}, we will obtain a new proof of the parabolic Harnack inequality (Theorem~\ref{Harnack}). Our proof follows the ideas of \cite{FS84}. Note that in our parabolic setting, the local volume-doubling property (Corollary~\ref{cor:space-time-vd}) plays a crucial role in the proof of \eqref{eq:A-infty}.
\begin{theorem}[PHI for $\mc L_\omega$]\label{Harnack} Assume $\omega\in\Omega_\kappa$ and $\theta>0$. Let $u$ be a non-negative function that satisfies $\mc L_\omega u= 0$ in $B_R\times(0, \theta R^2)$. Then, for $0<\theta_1<\theta_2<\theta_3<\theta$, there exists a constant $C=C(\kappa,d,\theta_1,\theta_2,\theta_3,\theta)$ such that \[ \sup_{B_{R/2}\times(\theta_2 R^2,\theta_3 R^2)}u\le C\inf_{B_{R/2}\times[0, \theta_1R^2)}u.\tag{PHI} \] \end{theorem} We remark that in discrete time setting, (PHI) is obtained by Kuo and Trudinger for the so-called {\it implicit form} operators, see \cite[(1.16)]{KT98}.
\begin{proof} [Proof of Theorem~\ref{Harnack}]
Let $\ell_0=1/k_0^2$ and $\ms D=\{x:|x|_\infty<R/\sqrt d\}\times[\ell_0R^2/2,\ell_0 R^2]$. We only prove a weaker version $\sup_{\ms D}u\le C\min_{x\in B_{R/\sqrt{4d}}}u(x,0)$.
The theorem then follows by iteration. Indeed, assume $\min_{x\in B_{R/\sqrt{4d}}}u(x,0)=u(y,0)=1$ for $y\in B_{R/\sqrt{4d}}$. Let $E_\lambda=\{\hat x \in\ms D:u(\hat x)\ge \lambda\}$. By Lemma~\ref{lem1}, $g_R(y,0;\ms D)>CR^2$. Moreover, for $s\in [\ell_0R^2/2,\ell_0R^2]$, $1=u(y,0)\ge \lambda g_R(y,0;E_\lambda\cap\{(x,t):t=s\})$, and so
\[
1\ge C \lambda g_R(y,0;E_\lambda)/R^2\stackrel{Theorem~\ref{thm:ap-property}(b)}{\ge} C\lambda(|E_\lambda|/|\ms D|)^c.
\]
Hence $|E_\lambda|/|\ms D|\le C\lambda^{-\gamma}$ for some $\gamma>0$. Therefore, for $0<p<\gamma/2$, \begin{align*}
\norm{u}_{\ms D,p}\le \left[1+p\int_1^\infty \lambda^{p-1}|E_\lambda|/|\ms D|\mathrm{d} \lambda\right]^{1/p}<C'=C'\min_{x\in B_{R/\sqrt{4d}}}u(x,0)<\infty. \end{align*} This inequality, together with \cite[Theorem~A.4.1]{arXiv}, completes our proof. \end{proof}
\section{Estimates of solutions near the boundary}\label{sec:near-bdry} The purpose of this section is to establish estimates of $\mc L_\omega$-harmonic functions near the parabolic boundary. For $x\in\Z^d, A\subset\Z^d$, let \[
\dist(x, A):=\min_{y\in A}|x-y|_1. \] \subsection{An elliptic-type Harnack inequality} \begin{theorem}[Interior elliptic-type Harnack inequality]\label{thm-eh} Assume $\omega\in\Omega_\kappa$. Let $R\ge 2$ and let $u\ge 0$ satisfy \[ \left\{ \begin{array}{rl} &\mc L_\omega u=0 \quad\mbox{ in }Q_R \\ & u=0 \quad\mbox{ on }\partial B_R\times[0,R^2). \end{array} \right. \] Then for $0<\delta\le \tfrac{1}{4}$, letting $Q^\delta_R:=B_{(1-\delta) R}\times[0, (1-\delta^2)R^2)$, there exists a constant $C=C(d,\kappa, \delta)$ such that \[ \sup_{Q^\delta_R}u\le C\inf_{Q_R^\delta}u. \] \end{theorem}
\begin{figure}
\caption{The values of $u$ are comparable inside the region $Q_R^\delta$. }
\label{fig:etharnack}
\end{figure}
To prove Theorem~\ref{thm-eh}, we need a so-called Carleson-type estimate. For parabolic differential operators in non-divergence form, this kind of estimate was first proved by Garofalo \cite{Garo} (see also \cite[Theorem 3.3]{FSY}).
\begin{theorem}\label{thm-Carl} Assume $\omega\in\Omega_\kappa$, $R>2r>0$. Then for any function $u\ge 0$ that satisfies \[ \left\{ \begin{array}{lr} \mc L_\omega u=0 &\text{ in } (B_R\setminus\bar B_{R-2r})\times[0,3r^2)\\ u=0 &\text{ on }\partial B_R\times[r^2,3r^2)\qquad \end{array}, \right. \] with the convention $\sup\emptyset=-\infty$, we have \begin{equation}\label{e25} \sup_{(B_R\setminus B_{R-r})\times[r^2,2r^2)}u\le C\min_{y\in \partial B_{R-r}}u(y,0). \end{equation}
\end{theorem}
\begin{proof} Set $\ms D=(B_R\setminus\bar B_{R-2r})\times[r^2,3r^2)$. For $\hat x=(x,t)\in\ms D$, let $d_1(\hat x)=\sup\{\rho\ge 0: B_\rho(x)\subset B_R\setminus\bar B_{R-2r}\}\ge 1$.
First, we show that there exists $\gamma=\gamma(d,\kappa)$ such that \begin{equation}\label{e51} \sup_{\hat x\in\ms D}\left( d_1(\hat x)/r \right)^\gamma u(\hat x)\le C\min_{y\in\partial B_{R-r}}u(y,0). \end{equation} Indeed, for any $\hat x=(x,t)\in\ms D$, we can find a sequence of $n\le C\log(r/d_1(\hat x))$ balls with increasing radii $r_k:=c2^k d_1(\hat x)$: \[ B_{r_1}(x_1)\subset B_{r_2}(x_2)\subset\cdots\subset B_{r_n}(x_n) \subset B_R\setminus \bar B_{R-2r} \] such that $x_1=x$, $\dist(x_n,\partial B_{R-r})\le r/2$, and $t-r_n^2\ge r^2/2$. By Theorem~\ref{Harnack}, \begin{align*} u(x,t) &\le Cu(x_1,t-r_1^2)\le\cdots \\ &\le C^n u(x_n,t-r_n^2)\le C(\frac{r}{d_1(\hat x)})^c\min_{y\in \partial B_{R-r}}u(y,0), \end{align*} where in the last inequality we applied Theorem~\ref{Harnack} to a chain of parabolic balls with spatial centers at $\partial B_{R-r}$ and radius $cr$. Display \eqref{e51} is proved.
Next, with $\gamma$ as in \eqref{e51}, letting $d_0(\hat x)=\sup\{\rho\ge 0: Q_\rho(\hat x)\subset(\Z^d\setminus \bar B_{R-2r})\times[r^2,3r^2)\}$ , we claim that \begin{equation}\label{e49} \sup_{\hat x\in\ms D}d_0(\hat x)^\gamma u(\hat x)\le \epsilon^{-\gamma}\sup_{\hat y\in\ms D}d_1(\hat y)^\gamma u(\hat y), \end{equation} where $\epsilon=\epsilon(d,\kappa)\in(0,1/5)$ is to be determined. It suffices to show that $\sup_{\ms D}d_0^\gamma u$ is achieved at $\hat x\in\ms D$ with $\epsilon d_0(\hat x)\le d_1(\hat x)$. Indeed, if $\epsilon d_0(\hat x)>d_1(\hat x)$, then $B_{2d_1(\hat x)}\setminus B_R\neq\emptyset$, and for any $\hat y=(y,s)\in Q_{2d_1(\hat x)}(\hat x)\cap\ms D$, \[
d_0(\hat x)\le d_0(\hat y)+|x-y|+|t-s|^{1/2}\le d_0(\hat y)+4d_1(\hat x)\le d_0(\hat y)+4\epsilon d_0(\hat x) \] and so $d_0(\hat x)\le (1-4\epsilon)^{-1} d_0(\hat y)$. Moreover, by Corollary~\ref{cor:prob1}, \begin{align*} d_0(\hat x)^\gamma u(\hat x)&\le [1-P_\omega^{\hat x}(X_\cdot \text{ exits $B_{2d_1(\hat x)}(x)\cap B_R$ from $\partial B_R$ before time }d_1^2(\hat x))]\\ &\qquad\times d_0(\hat x)^\gamma\sup_{(B_{2d_1(\hat x)}(x)\cap B_R)\times[r^2,3r^2)}u\\ &\le (1-c_0)(1-4\epsilon)^{-\gamma}\sup_{\ms D}d_0^\gamma u \end{align*} for a constant $c_0\in(0,1)$. Thus, when $\epsilon d_0(\hat x)> d_1(\hat x)$, choosing $\epsilon>0$ so that $(1-c_0)(1-4\epsilon)^{-\gamma}<1-\tfrac{c_0}{2}$, we get $d_0(\hat x)^\gamma u(\hat x)
<(1-\tfrac{c_0}{2})\sup_{\ms D} d_0^\gamma u$. Display \eqref{e49} is proved. Inequality \eqref{e25} follows from \eqref{e51} and \eqref{e49}. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm-eh}:] Since $u=0$ on $\partial B_R\times[0,R^2)$, \begin{align*} \sup_{Q_R^\delta}u &\le \sup_{B_R\times\{R^2-\tfrac{1}{4}(\delta R)^2\}}u\\ &\le C\sup_{B_{(1-\delta)R}\times\{R^2-\tfrac{1}{2}(\delta R)^2\}}u\\ &\stackrel{Theorem~\ref{Harnack}}{\le} C(d,\kappa,\delta)\inf_{Q_R^\delta }u, \end{align*} where we used Theorem~\ref{thm-Carl} and Theorem~\ref{Harnack} in the second inequality. \end{proof}
\subsection{A boundary Harnack inequality} For positive harmonic functions with zero values on the spatial boundary, the following boundary Harnack inequality compares values near the spatial boundary and values inside, with time coordinates appropriately shifted. \begin{theorem}[Boundary PHI]\label{thm-bh}
Let $R>0$. Suppose $u$ is a nonnegative solution to $\mc L_\omega u=0$ in $(B_{4R}\setminus B_{2R})\times(-2R^2,3R^2)$, and $u|_{\partial B_{4R}\times\R}=0$. Then for any $\hat x=(x,t)\in (B_{4R}\setminus\bar B_{3R})\times(-R^2,R^2)$, we have \[ C\frac{\dist(x,\partial B_{4R})}{R}\max_{y\in\partial B_{3R}}u(y, t+R^2) \le u(\hat x) \le C\frac{\dist(x,\partial B_{4R})}{R}\min_{y\in\partial B_{3R}}u(y,t-R^2). \] \end{theorem}
Theorem~\ref{thm-bh} is a lattice version of \cite[(3.9)]{Garo}. In what follows we offer a probabilistic proof.
\begin{proof}[Proof of Theorem~\ref{thm-bh}:] Our proof uses the fact that $u(\hat X_t)$ is a martingale before exiting the region $\ms D:=(B_{4R}\setminus B_{2R})\times(-2R^2,3R^2)$.
For the lower bound,
let $\tau_{3,4}:=\inf\{s>0: X_s\notin B_{4R}\setminus \bar B_{3R}\}$. By the optional stopping theorem, $u(\hat x)=E_\omega^{\hat x}[u(\hat X_{\tau_{3,4}\wedge 0.5R^2})]$, and so \begin{align*} u(\hat x) &\ge P_\omega^{x,t}(\tau_{3,4}< R^2/2,X_{\tau_{3,4}}\in \partial B_{3 R}) \inf_{\partial B_{3 R}\times[t,t+0.5R^2]} u\\ &\ge C\frac{\dist(x,\partial B_{4R})}{R}\max_{y\in\partial B_{3R}}u(y,t+R^2) \end{align*} where in the last inequality we used Lemma~\ref{lem6} and applied Theorem~\ref{Harnack} (to a chain of parabolic balls). The lower bound is obtained.
To obtain the upper bound, note that for $\hat x\in (B_{4R}\setminus \bar B_{3R})\times(-R^2,R^2)$, \begin{align*} u(\hat x)
&\le \bigg[\max_{z\in B_{4R}\setminus\bar B_{3R}}u(z,t+\tfrac{R^2}{2})+\max_{\partial B_{3R}\times(t,t+\tfrac{R^2}{2}]}u\bigg] P_\omega^{x,t}(X_{\tau_{3,4}\wedge 0.5R^2}\notin \partial B_{4R})\\ &\stackrel{\eqref{e25}}{\le} C\bigg[\max_{z\in B_{3.5R}\setminus\bar B_{3R}}u(z,t-\tfrac{R^2}{2})+\max_{\partial B_{3R}\times(t,t+\tfrac{R^2}{2}]}u\bigg] P_\omega^{x,t}(X_{\tau_{3,4}\wedge 0.5R^2}\notin \partial B_{4R})\\ &\le C\min_{z\in\partial B_{3R}}u(z,t-R^2)\dist(x,\partial B_{4R})/R, \end{align*} where in the last inequality we applied Lemma~\ref{lem7} and used an iteration of the Harnack inequality (Theorem~\ref{Harnack}). \end{proof}
\section{Proof of PHI for the adjoint operator (Theorem~\ref{thm-ah})}\label{sec:proof-of-ah}
We define $\hat Y_t=(Y_t,S_t)$ to be the continuous-time Markov chain on $\Z^d\times\R$ with generator $\mc L_\omega^*$. The process $\hat Y_t$ can be interpreted as the time-reversal of $\hat X_t$. Denote by $P_{\omega^*}^{y,s}$ the quenched law of $\hat Y_\cdot$ starting from $\hat Y_0=(y,s)$ and by $E_{\omega^*}^{y,s}$ the corresponding expectation. Note that $S_t=S_0-t$.
For $R>0, \hat x=(x,t), \hat y=(y,s)\in B_R\times\R$ with $s>t$, set \[ \begin{array}{rl} & p_R^\omega(\hat x;\hat y)=P_\omega^{x,t}(X_{s-t}=y,s-t<\tau_R(\hat X)),\\ &p_R^{*\omega}(\hat y;\hat x)=P_{\omega^*}^{y,s}(Y_{s-t}=x, s-t<\tau_R(\hat Y)), \end{array} \] where \begin{equation}\label{e22} \tau_R(\hat X):=\inf\{t\ge 0: X_t\notin B_R\} \end{equation} and $\tau_R(\hat Y)$ is defined similarly. Note that \[ p_R^{*\omega}(\hat y;\hat x)=\frac{\rho_\omega(\hat x)}{\rho_\omega(\hat y)}p_R^\omega(\hat x;\hat y). \] \begin{lemma}\label{lem8} For any $\hat y=(y,s)\in B_R\times(0,\infty)$ and any non-negative solution $v$ of $\mc L^*_\omega v=0$ in $B_R\times(0,s]$, \begin{align*} \MoveEqLeft v(\hat y) = \sum_{x\in\partial B_R,z\in B_R,x\sim z}\int_0^s \frac{\rho_\omega(x,t)}{\rho_\omega(\hat y)}\omega_t(z,x)p^{\omega}_R(z,t;\hat y) v(x,t) \mathrm{d} t\\ &+ \sum_{x\in B_R}\frac{\rho_\omega(x,0)}{\rho_\omega(\hat y)}p_R^\omega(x,0;\hat y)v(x,0). \end{align*}
\end{lemma} \begin{proof} Write the two summations in the lemma as $\rom{1}$ and $\rom{2}$. Clearly, $\rom{2}=E_{\omega^*}^{\hat y}[v(\hat Y_s)1_{\tau_R>s}]$. Since $(v(\hat Y_t))_{t\ge 0}$ is a martingale, we have
\begin{align*}
v(y,s)
=E_{\omega^*}^{\hat y}[v(\hat Y_{\tau_R})1_{\tau_R\le s}]
+E_{\omega^*}^{\hat y}[v(\hat Y_s)1_{\tau_R>s}].
\end{align*}
So it remains to show $\rom{1}=E_{\omega^*}^{y,s}[v(\hat Y_{\tau_R})1_{\tau_R\le s}]$.
We claim that for $x\in\partial B_R$,
\begin{equation}\label{e30}
P_{\omega^*}^{\hat y}(Y_{\tau_R}=x,\tau_R\in\mathrm{d} t) =
\sum_{z\in B_R,z\sim x}\frac{\rho_\omega(x,s-t)}{\rho_\omega(\hat y)}\omega_{s-t}(z,x)p_R^\omega(z,s-t;\hat y)\mathrm{d} t.
\end{equation}
Indeed, for $h>0$ small enough, $x\in\partial B_R$ and almost every $t\in(0,s)$,
\begin{align*} &P_{\omega^*}^{y,s}(Y_{\tau_R}=x,\tau_R\in (t-h,t+h))\\ &=\sum_{z\in B_R:z\sim x}P_{\omega^*}^{\hat y}(Y_{t-h}=z,\tau_R>t-h)P_{\omega^*}^{z,s-t+h}(Y_{2h}=x)+o(h)\\ &=\sum_{z\in B_R:z\sim x}p_R^{\omega^*}(\hat y;z,s-t) \int_{-h}^h\omega^*_{s-t+r}(z,x)\mathrm{d} r+o(h).
\end{align*} Dividing both sides by $2h$ and taking $h\to 0$, display \eqref{e30} follows by Lebesgue's differentiation theorem. Applying \eqref{e30} to \[ E_{\omega^*}^{y,s}[v(\hat Y_{\tau_R})1_{\tau_R\le s}] = \sum_{x\in\partial B_R}\int_0^s v(x,s-t)P_{\omega^*}^{\hat y}(Y_{\tau_R}=x,\tau_R\in\mathrm{d} t), \]
we obtain $\rom{1}=E_{\omega^*}^{y,s}[v(\hat Y_{\tau_R})1_{\tau_R\le s}]$ with a change of variable. \end{proof}
For fixed $\hat y:=(y,s)\in B_{R}\times\R$, set $u(\hat x):=p^\omega_{2R}(\hat x;\hat y)$. Then $\mc L_\omega u=0$ in $\big(B_{2R}\times(-\infty,s)\big)\cup \big((B_{2R}\setminus B_R)\times\R\big)$ and $u(x,t)=0$ when $x\in\partial B_{2R}$ or $t>s$. By Theorem~\ref{thm-bh} and Theorem~\ref{thm-eh},
for any $(x,t)\in B_{2R}\times(s-4R^2,s-\tfrac{R^2}{2})$, \begin{align}\label{e34} u(x,t) \asymp u(o,s-R^2)\dist(x,\partial B_{2R})/R, \end{align} and, for any $(x,t)\in (B_{2R}\setminus B_{3R/2})\times(s-4R^2,s)$, \begin{align}\label{e35} u(x,t)
&\le Cu(o,s-R^2)\dist(x,\partial B_{2R})/R. \end{align}
\begin{lemma}\label{lem9} Let $v\ge 0$ satisfy $\mc L_\omega^* v=0$ in $B_{2R}\times(0,4R^2]$. Then for any $\bar Y=(\bar y,\bar s)\in B_R\times(3R^2,4R^2]$ and $\lbar Y=(\lbar y,\lbar s)\in B_R\times(R^2,2R^2)$, we have \[ \frac{v(\bar Y)}{v(\lbar Y)} \ge C\dfrac{\int_{0}^{R^2}\rho_\omega(\partial B_{2R},t)\mathrm{d} t +\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_{2R})} {\int_{0}^{4R^2}\rho_\omega(\partial B_{2R},t)\mathrm{d} t +\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_{2R})}. \] \end{lemma}
\begin{proof} Write $\hat x:=(x,t)$ and set $\bar u(\hat x):=p^\omega_{2R}(\hat x;\bar Y)$, $\lbar u(\hat x):=p^\omega_{2R}(\hat x;\lbar Y)$. By Lemma~\ref{lem8} and \eqref{e34}, \begin{align}\label{e32} v(\bar Y) &\ge C\sum_{x\in\partial B_{2R},z\in B_{2R},x\sim z}\int_0^{\lbar s} \frac{\rho_\omega(\hat x)}{\rho_\omega(\bar Y)}\bar u(z,t)v(\hat x)\mathrm{d} t \nonumber\\ &\qquad+ C\sum_{x\in B_{2R}}\frac{\rho_\omega(x,0)}{\rho_\omega(\bar Y)}\bar u(0,\bar s-R^2)\frac{\dist(x,\partial B_{2R})}{R}v(x,0)\nonumber\\ &\ge C\frac{\bar u(0,\bar s-R^2)}{R\rho_\omega(\bar Y)}\bigg[\sum_{x\in\partial B_{2R}}\int_0^{\lbar s}\rho_\omega(\hat x)v(\hat x)\mathrm{d} t\nonumber\\ &\qquad+\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_{2R})v(x,0)\bigg]. \end{align} Similarly, by Lemma~\ref{lem8} and \eqref{e35}, we have \begin{align}\label{e33} v(\lbar Y) &\le C\frac{\lbar u(0,\lbar s-R^2)}{R\rho_\omega(\lbar Y)}\bigg[\sum_{x\in\partial B_{2R}}\int_0^{\lbar s}\rho_\omega(\hat x)v(\hat x)\mathrm{d} t\nonumber\\ &\qquad+\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_{2R})v(x,0)\bigg]. \end{align} Combining \eqref{e32} and \eqref{e33}, we get \begin{equation}\label{eq:v-low-up} \frac{v(\bar Y)}{v(\lbar Y)} \ge C\frac{\bar u(o,\bar s-R^2)/\rho_\omega(\bar Y)}{\lbar u(o, \lbar s-R^2)/\rho_\omega(\lbar Y)}. \end{equation} Next, taking $v\equiv 1$, by Lemma~\ref{lem8} and \eqref{e35}, \begin{align*} 1 &= \sum_{x\in\partial B_{2R},z\in B_{2R},z\sim x}\int_{0}^{\bar s} \frac{\rho_\omega(\hat x)}{\rho_\omega(\bar Y)} \omega_t(z,x)\bar u(z,t)\mathrm{d} t + \sum_{x\in B_{2R}}\frac{\rho_\omega(x,0)}{\rho_\omega(\bar Y)}\bar u(x,0)\\ &\le C\frac{\bar u(o,\bar s-R^2)}{R\rho_\omega(\bar Y)} \big[ \sum_{x\in\partial B_{2R}}\int_{0}^{\bar s}\rho_\omega(\hat x)\mathrm{d} t +\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_{2R}) \big]. \end{align*} Similarly, by Lemma~\ref{lem8} and \eqref{e34}, \begin{align*} 1
&\ge C\frac{\lbar u(o,\lbar s-R^2)}{R\rho_\omega(\lbar Y)} \big[ \sum_{x\in\partial B_{2R}}\int_{0}^{\lbar s/2}\rho_\omega(\hat x)\mathrm{d} t +\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_R) \big]. \end{align*} These inequalities, together with \eqref{eq:v-low-up}, yield the lemma. \end{proof}
\begin{remark} It is clear that for static environments, the adjoint Harnack inequality (Theorem~\ref{thm-ah}) follows immediately from Lemma~\ref{lem9}. However, in time-dependent case, we need the parabolic volume-doubling property of $\rho_\omega$. \end{remark}
\begin{proof}[Proof of Theorem~\ref{thm-ah}] First, we will show that for all $R>0$, \begin{align}\label{e41} &\int_0^s\rho_\omega(\partial B_R,t)\mathrm{d} t+\sum_{x\in B_R}\rho_\omega(x,0)\dist(x,\partial B_R)\\ &\asymp \frac{1}{R}\int_0^s\rho_\omega(B_R,t)\mathrm{d} t+\sum_{x\in B_R}\rho_\omega(x,s)\dist(x,\partial B_R).\nonumber \end{align} Recall $\tau_R$ at \eqref{e22} and set $g(x,t)=E_\omega^{x,t}[\tau_R(\hat X)]$. Note that $g(x,\cdot)=0$ for $x\notin B_R$ and $\mc L_\omega g(x,t)=-1$ if $x\in B_R$. By \eqref{rho-invariance}, for any $s>0$, \begin{align*} &0=\sum_{x\in\Z^d}\int_0^s g(x,t)[\sum_y\rho_\omega(y,t)\omega_t(y,x)-\partial_t\rho_\omega(x,t)]\mathrm{d} t\\
&=\sum_{x\in \partial B_R,y\in B_R}\int_0^s\rho(x,t)\omega_t(x,y)g(y,t)\mathrm{d} t+\sum_{x\in B_R}g(x,0)\rho(x,0)\\ &\qquad- \sum_{x\in B_R}\int_0^s\rho(x,t)\mathrm{d} t-\sum_{x\in B_R}g(x,s)\rho(x,s). \end{align*}
Moreover, since $|X_t|^2-\frac{d}{\kappa}t$ and $|X_t|^2-\kappa t$ are super- and sub- martingales, \[ g(x,t)\asymp
E_\omega^{x,t}[|X_{\tau_R}|^2-|x|^2] \asymp R\dist(x,\partial B_R) \quad \forall (x,t)\in B_R\times\R \] by the optional-stopping theorem. Display \eqref{e41} then follows.
Combining \eqref{e41} and Lemma~\ref{lem9}, we obtain \[ \frac{v(\bar Y)}{v(\lbar Y)} \ge C\frac{\int_0^{R^2}\rho(B_{2R},t)\mathrm{d} t+R\sum_{x\in B_{2R}}\rho(x,R^2)\dist(x,\partial B_{2R})} {\int_0^{4R^2}\rho(B_{2R},t)\mathrm{d} t+R\sum_{x\in B_{2R}}\rho(x,4R^2)\dist(x,\partial B_{2R})}. \] Finally, Theorem~\ref{thm-ah} follows by Theorem~\ref{thm:vd} and the above inequality. \end{proof}
\section{Proof of Theorem~\ref{thm:hke}, Corollary~\ref{cor:q-estimates} and Theorem~\ref{thm:llt}} \label{sec:proof-llt-hke}
\subsection{Proof of Theorem~\ref{thm:hke}} \begin{proof} First, using Theorem~\ref{thm-ah} and standard arguments, we will prove \eqref{eq:quenched-hke}. Recall that $v(\hat x):=q^\omega(\hat 0,\hat x)$ satisfies $\mc L_\omega^* v=0$ in $\Z^d\times(0,\infty)$. By Theorem~\ref{thm-ah}, for $\hat x=(x,t)\in \Z^d\times(0,\infty)$, we have $v(\hat x)\le C\min_{y\in B_{\sqrt t}(x)}v(y,3t)$
and so \begin{align*} v(\hat x) &\le \frac{C}{\rho(B_{\sqrt t}(x), 3t)}\sum_{y\in B_{\sqrt t}(x)}\rho(y,3t)v(y,3t)\\ &= \frac{C}{\rho(B_{\sqrt t}(x), 3t)}P_\omega^{0,0}(X_{3t}\in B_{\sqrt t}(x)) \le
C\exp[-c\mathfrak{h}(|x|,t)], \end{align*} where Corollary~\ref{cor:prob-upperb} is used in the last inequality.
Moreover, for any $s\in[0,t]$, $|y|\le|x|+c\sqrt t$, by Theorem~\ref{thm:vd} and iteration, \[
\rho(B_{\sqrt t}(x),3t)\ge C \rho(B_{\sqrt {t\vee1}}(x),s)\ge C\left(\tfrac{|x|}{\sqrt{t\vee1}}+1\right)^{-c} \rho(B_{\sqrt{t\vee1}}(y),s). \]
Since $\tfrac{|x|}{\sqrt{t\vee1}}+1\le C_\epsilon e^{\epsilon\mathfrak{h}(|x|,t)}$ for any $\epsilon>0$, the upper bound in \eqref{eq:quenched-hke} follows.
To obtain the lower bound in \eqref{eq:quenched-hke}, by similar argument as above and Theorem~\ref{thm-ah}, $v(\hat x)\ge C\max_{y\in B_{\sqrt t/2}(x)}v(y,t/4)$ for $\hat x\in\Z^d\times(0,\infty)$, and so \begin{equation}\label{e52} v(\hat x) \ge \frac{C}{\rho(B_{\sqrt t/2}(x), t/4)} P_\omega^{0,0}(X_{t/4}\in B_{\sqrt t/2}(x)). \end{equation} We claim that for any $(y,s)\in \Z^d\times(0,\infty)$, \begin{equation}\label{hitting-upperb}
P_\omega^{y,0}(X_s\in B_{\sqrt s})\ge Ce^{-c|y|^2/s}. \end{equation}
Indeed, the case $|y|/\sqrt s\le 3$ follows from Lemma~\ref{lem1}. When $|y|/\sqrt s>3$, let \[
n=\floor{2|y|^2/s}. \]
Set $u(x,t):=p^\omega(x,t;B_{\sqrt s},s)$. Then $u$ is an $\mc L_\omega$-harmonic function on $\Z^d\times(-\infty,s)$. Taking a sequence of points $(y_i)_{i=0}^n$ such that $y_0=y, y_n=0$ and $|y_i-y_{i+1}|\le |y|/n$, for $i=0,\ldots,n-1$, \begin{align*}
&\min_{x\in B_{|y|/\sqrt{n}}(y_i)}u(x,\tfrac{i|y|^2}{n^2})\\ &\ge
\min_{z\in B_{|y|/\sqrt{n}}(y_i)}p^\omega(z,\tfrac{i|y|^2}{n^2};B_{|y|/\sqrt{n}}(y_{i+1}),\tfrac{(i+1)|y|^2}{n^2})\min_{x\in B_{|y|/\sqrt{n}}(y_{i+1})}u(x,\tfrac{(i+1)|y|^2}{n^2})\\ &\stackrel{Lemma~\ref{lem1}}{\ge}
C\min_{x\in B_{|y|/\sqrt{n}}(y_{i+1})}u(x,\tfrac{(i+1)|y|^2}{n^2}). \end{align*}
Iteration then yields $u(y,0)\ge C^{n-1}\min_{x\in B_{|y|/\sqrt{n}}}u(x,\tfrac{|y|^2}{n})\stackrel{Lemma~\ref{lem1}}{\ge} C^n$. Inequality \eqref{hitting-upperb} is proved. Then, by \eqref{e52}, \[ v(x,t)\ge
\frac{C}{\rho(B_{\sqrt t/2}(x), t/4)}e^{-c|x|^2/t}. \]
Moreover, by Theorem~\ref{thm:vd}, we have for any $s\in[0,t], |y|\le |x|$, \[ \rho(B_{\sqrt t/2}(x), t/4) \le C\rho(B_{\sqrt t/2}(x),s)
\le C(\tfrac{|x|}{\sqrt t}+1)^c\rho(B_{\sqrt t}(y),s). \] The lower bound in \eqref{eq:quenched-hke} is proved.
Next, we will prove the moment bounds \eqref{eq:mm-hke1} and \eqref{eq:mm-hke2}, which, by \eqref{eq:quenched-hke} and \eqref{eq:rho-asymp}, are equivalent to showing that, for $r:=\sqrt t>0$, \[
\Norm{\frac{\rho(\hat 0)}{\rho(Q_{r})}}_{L^{(d+1)/d}(\mb P)} \le C r^2(r\vee1)^{-d} \quad\text{ and }\quad \Norm{\frac{\rho(\hat 0)}{\rho(Q_{r})}}_{L^{-p}(\mb P)} \ge Cr^2(r\vee1)^{-d}, \] where $Q_r$ is as defined in \eqref{def:parab-ball}. Indeed, using the translation-invariance of $\mb P$ and the volume-doubling property of $\rho$, for $q:=(d+1)/d$, \begin{align*} \norm{\rho(\hat 0)/\rho(Q_r)}_{L^{q}(\mb P)}^q \le
C\frac{1}{|Q_r|}\int_{\hat x\in Q_r}E_{\mb P}\left[ \frac{\rho(\hat x)^q}{\rho(Q_r)^q} \right] \le
C/|Q_r|^q, \end{align*} where we used the Reverse H\"older inequality (Corollary~\ref{cor:reverse-holder}(i)) in the last inequality.
Recalling \eqref{eq:def-integral}, inequality \eqref{eq:mm-hke1} then follows from the fact that $|Q_r|=r^2\sum_{x\in B_r}1\asymp r^2(r\vee 1)^{-d}$.
To obtain \eqref{eq:mm-hke2}, note that by the translation-invariance of $\mb P$ and the volume-doubling property of $\rho$, taking $\epsilon\in(0,1/(p-1))$, \begin{align*} \norm{\rho(Q_r)/\rho(\hat 0)}_{L^\epsilon(\mb P)}^\epsilon
\le
\frac{C}{|Q_r|}E_{\mb P}\left[\int_{\hat x\in Q_r}\frac{\rho(Q_{r})^\epsilon}{\rho(\hat x)^\epsilon}\right]
\le C|Q_r|^\epsilon, \end{align*} where we used the $A_p$ inequality (Theorem~\ref{thm:ap-property}\eqref{item:neg-moment}) of $\rho$ in the last inequality. Therefore $\norm{\rho(\hat 0)/\rho(Q_r)}_{L^{-\epsilon}(\mb P)}\ge Cr^2(r\vee 1)^{-d}$ and \eqref{eq:mm-hke2} is proved.
Display \eqref{eq:mm-green} follows from \eqref{eq:mm-hke1}, \eqref{eq:mm-hke2}, and Minkowski's integral inequality. \end{proof}
\subsection{Proof of Theorem~\ref{thm:llt}}\label{subsec:pf-thm-llt} As a standard consequence of the PHI for $\mc L^*_\omega$, we first state the following H\"older estimate. (See a proof in \cite[Section~A.2]{arXiv}.) \begin{corollary}\label{cor:hoelder} There exists $\gamma=\gamma(d,\kappa)\in(0,1]$ such that for $\mb P$-almost all $\omega$, any non-negative solution $u$ of $\mc L_\omega^* u=0$ in $B_R(x_0)\times(t_0-R^2,t_0]$, $R>0$, satisfies \[
|u(\hat x)-u(\hat y)|\le C \left( \frac{r}{R}
\right)^\gamma \sup_{ B_R(x_0)\times(t_0-R^2,t_0]}u \] for all $\hat x,\hat y\in B_r(x_0)\times(t_0-r^2,t_0]$ and $r\in(0,R)$. \end{corollary}
Recall $q^\omega(\hat y,\hat x)$ in \eqref{eq:def-hk}. For any $\hat x=(x,t)\in\R^d\times\R$, set \[ v(\hat x):=q^\omega(\hat 0;\floor{x},t), \] where $\floor{x}$ is as in Theorem~\ref{thm:llt}. Note that $\mc L_\omega^* v=0$ in $\Z^d\times(0,\infty)$. By Corollary~\ref{cor:hoelder} and Theorem~\ref{thm:hke}, for any $\hat y=(y,s)\in B_{\sqrt t}(x)\times(\tfrac t2, t)$, \begin{align} \label{eq:holder-v}
|v(\hat x)-v(\hat y)|
&\le C\left(\frac{|x-y|+\sqrt{t-s}}{\sqrt t}\right)^\gamma \sup_{B_{\sqrt t}(x)\times(\tfrac t2, t]}v\nonumber\\
&\le C\left(\frac{|x-y|+\sqrt{t-s}}{\sqrt t}\right)^\gamma t^{-d/2} \end{align} when $t>t_0(\omega)$ is big enough. Here in the last inequality we used Corollary~\ref{cor:q-estimates}(i) which is an immediate consequence of Theorem~\ref{thm:hke} and \eqref{eq:rho-ergodic}.
Recall $\mc O_r$ in \eqref{eq:def-continuousball}. For $\hat x=(x,t)\in\R^d\times\R$, write \[ \hat x^{n}:=(\floor{nx},n^2t). \] To prove Theorem~\ref{thm:llt}, it suffices to show that for any $K>T$, \begin{align} \label{eq:uniform-conv}
\lim_{n\to\infty}\sup_{\hat x\in\mc O_K\times[T,K]}|n^dv(\hat x^n)-p_t^\Sigma(0,x)|=0. \end{align} Indeed, for any $\epsilon>0$, there exists $K=K(T,\epsilon,d,\kappa)>0$ such that, writing $\ms D:=(\R^d\times[T,\infty))\setminus(\mc O_K\times[T,K])$, we have \begin{align*} \varlimsup_{n\to\infty}\sup_{\ms D} \left[n^dv(\hat x^n)+p_t^\Sigma(0,x)\right] &\stackrel{\eqref{eq:quenched-hke},\eqref{eq:rho-ergodic}}{\le} C\sup_{\ms D}
t^{-d/2}e^{-c|x|^2/t}
\le \epsilon. \end{align*}
Hence
Theorem~\ref{thm:llt} follows provided that \eqref{eq:uniform-conv} is proved.
\begin{proof}[Proof of Theorem~\ref{thm:llt}] As discussed above, it suffices to prove \eqref{eq:uniform-conv}; moreover, we may restrict to the case $T<K<2T$. For any $\epsilon>0$, \begin{align}\label{eq:split} &\Abs{n^d v(\hat x^n)-p_t^\Sigma(0,x)} \le A(\hat x,\epsilon)+B_n(\hat x,\epsilon)+C_n(\hat x,\epsilon), \end{align}
where $A(\hat x,\epsilon)=\Abs{\frac{\int_t^{t+\epsilon^2}p_s^\Sigma(0,\mc O_\epsilon(x))\mathrm{d} s}{\epsilon^2|\mc O_\epsilon|}-p_t^\Sigma(0,\mc O_\epsilon(x))}$,
\[B_n(\hat x,\epsilon)=\Abs{\int_t^{t+\epsilon^2}\frac{P_\omega^{\hat 0}(X_{n^2s}\in n\mc O_\epsilon(x))-p_s^\Sigma(0,\mc O_\epsilon(x))}{\epsilon^2|\mc O_\epsilon|}\mathrm{d} s},\]
\[C_n(\hat x,\epsilon)=\Abs{n^d v(\hat x^n)-\frac{\int_t^{t+\epsilon^2}P_\omega^{\hat 0}(X_{n^2s}\in n\mc O_\epsilon(x))\mathrm{d} s}{\epsilon^2|\mc O_\epsilon|}}. \]
First, we will show that \begin{align} \label{eq:uniform-ergo}
\varlimsup_{n\to\infty}\sup_{\hat x\in\mc O_K\times[T,K]}C_n(\hat x,\epsilon)=O(\epsilon^\gamma). \end{align} To this end, note that there exists $N=N(T,\omega,d,\kappa)$ such that for $n\ge N$, \begin{align*} C_n(\hat x,\epsilon)&\le
n^dv(\hat x^n)\Abs{1-\frac{\int_t^{t+\epsilon^2}\rho(n\mc O_\epsilon(x),n^2s)\mathrm{d} s}{\epsilon^2|n\mc O_\epsilon|}} \\&
+\sum_{y\in n\mc O_\epsilon(x)}\int_t^{t+\epsilon^2}|v(y,n^2s)-v(\hat x^n)|\rho(y,n^2s)\mathrm{d} s/(\epsilon^2|\mc O_\epsilon|)\\ &\le
CT^{-d/2}\Abs{1-\frac{\int_t^{t+\epsilon^2}\rho(n\mc O_\epsilon(x),n^2s)\mathrm{d} s}{\epsilon^2|n\mc O_\epsilon|}}\\&\qquad
+CT^{-(\gamma+d)/2}\epsilon^{\gamma}\int_t^{t+\epsilon^2}\rho(n\mc O_\epsilon(x),n^2s)\mathrm{d} s/(\epsilon^2|n\mc O_\epsilon|), \end{align*} where in the second inequality we used Corollary~\ref{cor:q-estimates}(i) and \eqref{eq:holder-v}. Further, by an ergodic theorem of Krengel and Pyke \cite[Theorem 1]{KP87} and \eqref{rho-moment}, \begin{equation}\label{eq:uniform-rho} \lim_{n\to \infty}\sup_{\hat x\in \mc O_K\times[T,K]}
\Abs{1-\frac{\int_t^{t+\epsilon^2}\rho(n\mc O_\epsilon(x),n^2s)\mathrm{d} s}{\epsilon^2|n\mc O_\epsilon|}}=0. \end{equation} Display \eqref{eq:uniform-ergo} follows.
Next, for $\hat x=(x,t)$, by writing $B_n(\hat x,\epsilon)$ as \[
\Abs{\frac{\int_t^{t+\epsilon^2}P_\omega^{\hat 0}(X_{n^2s}\in n\mc O_\epsilon(x))\mathrm{d} s}{\epsilon^2|\mc O_\epsilon|}-\frac{\int_t^{t+\epsilon^2}p_s^\Sigma(0,\mc O_\epsilon(x))\mathrm{d} s}{\epsilon^2|\mc O_\epsilon|}}=:|B_n^1(\hat x,\epsilon)-B^2(\hat x,\epsilon)|, \] we will show that \begin{equation}\label{eq:equicontin} \varlimsup_{n\to\infty}\sup_{\hat x\in \mc O_K\times[T,K]}B_n(\hat x,\epsilon)=O(\epsilon^\gamma). \end{equation}
We claim that $B_n(\hat x,\epsilon)$ is approximately equicontinuous (with order $\epsilon^\gamma$). That is, there exist $N,\delta$ depending on $(\epsilon,\omega, d,\kappa,T,K)$ such that, whenever $n\ge N$ and $\hat x_1=(x_1,t_1), \hat x_2=(x_2,t_2)\in\mc O_K\times[T,K]$ satisfy $|\hat x_1-\hat x_2|_1:=|x_1-x_2|+|t_1-t_2|<\delta$, we have \[
|B_n(\hat x_1,\epsilon)-B_n(\hat x_2,\epsilon)|<C\epsilon^\gamma. \] It suffices to show that $B_n^1(\hat x,\epsilon)$ is approximately equicontinuous. Indeed, by \eqref{eq:uniform-ergo} and \eqref{eq:holder-v}, when $n\ge N$ is large and $\hat x_1,\hat x_2\in \mc O_K\times[T,K]$, \begin{align*}
|B^1_n(\hat x_1,\epsilon)-B^1_n(\hat x_2,\epsilon)|
&\le C_n(\hat x_1,\epsilon)+C_n(\hat x_2,\epsilon)+ n^d|v(\hat x_1^n)-v(\hat x_2^n)|\\
&\le C\epsilon^\gamma+C(|x_1-x_2|+\sqrt{|t_1-t_2|})^\gamma. \end{align*}
The approximate equicontinuity of $B_n^1(\hat x,\epsilon)$ follows. To prove \eqref{eq:equicontin}, we choose a finite sequence $\{\hat x_i\}_{i=1}^M$ such that $\min_{1\le i\le M}|\hat x-\hat x_i|_1<\delta$ for all $\hat x\in \mc O_K\times[T,K]$. Since $\lim_{n\to\infty}\max_{1\le i\le M}B_n(\hat x_i,\epsilon)=0$ by the QCLT (Theorem~\ref{thm:recall}), display \eqref{eq:equicontin} follows by the approximate equicontinuity.
Clearly, $\lim_{\epsilon\to 0}\sup_{\hat x\in\mc O_K\times[T,K]}A(\hat x,\epsilon)=0$. This, together with \eqref{eq:uniform-ergo} and \eqref{eq:equicontin}, yields the uniform convergence of \eqref{eq:split} by sending first $n\to\infty$ and then $\epsilon\to 0$. Our proof of Theorem~\ref{thm:llt} is complete. \end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:q-estimates}:]
\eqref{cor:q-hke} follows from Theorem~\ref{thm:hke} and \eqref{eq:rho-ergodic}. \eqref{cor:green1} and \eqref{cor:green2} are consequences of Theorems~\ref{thm:llt} and \ref{thm:hke}. Their proofs, which are similar to \cite[Theorem 1.14]{ACDS} and
\cite[Theorem 1.4]{ADS18}, can be found in \cite[A.6]{arXiv}. \end{proof}
\section{Auxiliary probability estimates}\label{sec:auxiliary-prob} This section contains probability estimates that are useful in the rest of the paper. It does not rely on results in the previous sections, and can be read independently. Recall the definition of the function $\mathfrak{h}(r,t)$ in \eqref{eq:def-function-h}. \begin{theorem}\label{thm:fluctuation} Assume $\omega\in\Omega_\kappa$. Then for $t>0$, $r>0$, \[
P_\omega^{0,0}(\sup_{0\le s\le t}|X_s|\ge r)\le C\exp\left(-c\mathfrak{h}(r,t)\right). \] \end{theorem} \begin{proof} Let $x(i)$, $i=1,\ldots,d$, denote the $i$-th coordinate of $x\in\R^d$. It suffices to show that for $i=1,\ldots,d$, \[
P_\omega^{0,0}(\sup_{0\le s\le t}|X_s(i)|>r) \le C\exp\left(-c\mathfrak{h}(r,t)\right). \] We will prove the statement for $i=1$. Let $\tilde N_t:=\#\{0\le s\le t: X_s(1)\neq X_{s^-}(1)\}$ be the number of jumps in the $e_1$ direction before time $t$. Let $(S_n)$ be the discrete-time simple random walk on $\Z$; then $X_t(1)\stackrel{d}{=}S_{\tilde N_t}$.
Note that $\tilde N_t$ is stochastically dominated by a Poisson process $N_t$ with rate $c_0:=2d/\kappa$, and so $P_\omega^{\hat 0}(\sup_{0\le s\le t}|X_s(1)|>r)\le P(\max_{0\le m\le N_t}|S_m|>r)$. Hence, \begin{align*}
P_\omega^{\hat 0}(\sup_{0\le s\le t}|X_s(1)|>r)
&\le P(N(t)\ge 2c_0(t\vee r))+P(\max_{0\le m\le 2c_0(t\vee r)}|S_m|>r)\\ &\le e^{-c(t\vee r)}+Ce^{-cr^2/(t\vee r)}\le Ce^{-cr^2/(t\vee r)}, \end{align*} where we used Hoeffding's inequality in the second inequality. On the other hand, since the random walk takes values in the discrete set $\Z$, we have, for any $\theta>0$, \begin{align*}
P_\omega^{\hat 0}(\sup_{0\le s\le t}|X_s(1)|>r) &\le P(N(t)>r)\\ &\le E[\exp(\theta N(t)-\theta r)] =\exp[ c_0t(e^\theta-1)-\theta r]. \end{align*} When $r\ge 9c_0^2 t$, taking $\theta=\log(\tfrac{r}{c_0t})$, we get an upper bound $\exp[-\tfrac r2\log(\tfrac{r}{t})]$. Hence, letting $f(r,t)=\tfrac{r^2}{t\vee r}\mathbbm{1}_{r<9c_0^2t}+r\log(\tfrac{r}{t})\mathbbm{1}_{r\ge 9c_0^2t}$, we obtain \[
P_\omega^{\hat 0}(\sup_{0\le s\le t}|X_s(1)|>r) \le C\exp(-cf(r,t)). \] Since $f(r,t)\asymp \tfrac{r^2}{t\vee r}+r\log(\tfrac{r}{t})\mathbbm{1}_{r\ge 9c_0^2t}\asymp \mathfrak{h}(r,t)$, our proof is complete. \end{proof}
\begin{corollary}\label{cor:prob-upperb} Assume $\omega\in\Omega_\kappa$ and $\theta_2>\theta_1>0$. There exist $C,c$ depending on $(d,\kappa,\theta_1,\theta_2)$ such that for $\theta\in(\theta_1,\theta_2), (x,t)\in\Z^d\times(0,\infty)$, \begin{equation*} P_\omega^{0,0}(X_{t}\in B_{\sqrt{\theta t}}(x)) \le
C\exp[-c\mathfrak{h}(|x|,t)].
\end{equation*} \end{corollary} \begin{proof} Since $\mathfrak{h}(0,t)=0$, we only need to consider the case $x\neq 0$.
If $\theta t\le 1$, then $P_\omega^{\hat 0}(X_{t}\in B_{\sqrt{\theta t}}(x))=P_\omega^{\hat 0}(X_t=x)\le P_\omega^{\hat 0}(\sup_{0\le s\le t}|X_s|\ge |x|)\le C\exp[-c\mathfrak{h}(|x|,t)]$ by Theorem~\ref{thm:fluctuation}.
If $\theta t>1$ and $1\le |x|\le 2\sqrt{\theta t}$, then $|x|\le |x|^2\le 4\theta t$ and so $\mathfrak{h}(|x|,t)\asymp \mathfrak{h}(|x|,4\theta t)\asymp\tfrac{|x|^2}{t}$. In particular, $\mathfrak{h}(|x|,t)\le C\tfrac{|x|^2}{t}\le c$. Hence, trivially, $P_\omega^{\hat 0}(X_{t}\in B_{\sqrt{\theta t}}(x))\le 1\le C\exp(-c\mathfrak{h}(|x|,t))$.
It remains to consider $|x|>2\sqrt{\theta t}>2$. In this case, by Theorem~\ref{thm:fluctuation},
$P_\omega^{\hat 0}(X_{t}\in B_{\sqrt{\theta t}}(x))\le P_\omega^{\hat 0}(\sup_{0\le s\le t}|X_s|\ge |x|/2)
\le C\exp[-c\mathfrak{h}(|x|,t)]$.
\end{proof}
\begin{lemma}\label{lem1} Let $0<\theta_1<\theta_2$, $R>0$ and $\omega\in\Omega_\kappa$. Recall the definition of the stopping time $\Delta$ in \eqref{def:st-a-s}. There exists a constant $\alpha=\alpha(\kappa,d,\theta_1,\theta_2)\ge 1$ such that for any $s\in(\theta_1 R^2,\theta_2R^2)$ and $\sigma>0$ \[ \min_{x\in B_R}P_\omega^{x,0}(X_{\Delta(B_{2R},s)}\in B_{\sigma R})\ge (\frac{\sigma\wedge 1}{2})^\alpha. \] \end{lemma}
\begin{proof} It suffices to consider the case $\sigma\in(0, 1)$ and $R\ge K_1$, where $K_1=K_1(\theta_1,\theta_2,\kappa,d)$ is a large constant to be determined. Indeed, if $R<K_1$, then by uniform ellipticity, for any $x\in B_R$, \[ P_\omega^{x,0}(X_{\Delta(B_{2R},s)}\in B_{\sigma R}) \ge P_\omega^{x,0}(X_s=0, \Delta(B_{2R},s)=s) > C(\kappa,d,\theta_1,\theta_2). \] Further, for $R\ge K_1$, it suffices to consider the case $\sigma R\ge \sqrt{K_1}$. Indeed, assume the lemma is proved for $R\ge K_1$ and $\sigma R\ge \sqrt{K_1}$. Then, when $\sigma R<\sqrt{K_1}$ and $x\in B_R$, by uniform ellipticity, \begin{align*} &P_\omega^{x,0}(X_{\Delta(B_{2R},s)}\in B_{\sigma R})\\ &\ge P_\omega^{x,0}(X_{\Delta(B_{2R},s-K_1)}\in B_{\sqrt K_1})\min_{y\in B_{\sqrt K_1}}P_\omega^{y,s-K_1}(X_{K_1}=0, \Delta(B_{2R},s)=K_1)\\ &\ge (\frac{\sqrt{K_1}}{2R})^\alpha C(K_1,\kappa,d)\ge C(\frac{\sigma}{2})^\alpha. \end{align*} Hence in what follows we only consider the case $R\ge K_1$ and $\sigma R\ge \sqrt{K_1}$.
For $(x,t)\in\R^d\times\R$, set \[ \psi_0(t)=1-\dfrac{1-(\sigma/2)^2}{s}t, \quad
\tilde\psi_1(x,t)=\psi_0-\frac{|x|^2}{4R^2},\quad \psi_1=\tilde\psi_1\vee 0, \] and, for some large constant $q\ge 2$ to be chosen, \[ \psi(x,t):=\psi_1^2\psi_0^{-q}, \quad w(x,t)=(\sigma/2)^{2q-4}\psi(x,t). \]
\begin{figure}
\caption{The set $U$. The solid line is $\partial^\p U$.}
\end{figure}
Let $U:=\{\hat x\in B_{2R}\times[0,s): \psi_1(\hat x)>0\}$. We will show that for $\hat x\in U$, \[ w(\hat x)\le v(\hat x):=P_\omega^{\hat x}(X_{\Delta(B_{2R},s)}\in B_{\sigma R}). \]
Recall the parabolic boundary defined on page~\pageref{page:bdry}. We first show that $w$ satisfies \begin{equation}\label{e7} \begin{cases}
w|_{\partial^\p U}\le \mathbbm{1}_{x\in B_{\sigma R}, s=t}\\ \min_{x\in B_R} w(x,0)\ge \frac{1}{2}(\sigma/2)^{2q-4},\\ \mc L_\omega w\ge 0 \quad\mbox{ in $U$, for }q\mbox{ large}. \end{cases} \end{equation} The first two properties in \eqref{e7} are obvious. For the third property, note that \begin{align*} \partial_t\psi=R^{-2}\psi_0^{-q}[\dfrac{1-(\sigma/2)^2}{s/R^2}(q\tfrac{\psi_1}{\psi_0}-2)\psi_1] \quad\text{ in }U. \end{align*} For any unit vector $e\in\Z^d$, let \begin{equation}\label{eq:2nd-difference} \nabla_e^2u(x,t):=u(x+e,t)+u(x-e,t)-2u(x,t). \end{equation}
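For instance, for the quadratic function $\tilde\psi_1$ above one checks directly that
\[
\nabla_e^2\tilde\psi_1(x,t)=-\frac{|x+e|^2+|x-e|^2-2|x|^2}{4R^2}=-\frac{1}{2R^2}
\]
for every unit vector $e$, the discrete analogue of $\partial_{ee}\big(-\tfrac{|x|^2}{4R^2}\big)=-\tfrac{1}{2R^2}$.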
When $\hat x\in U_1:=\{(z,s)\in U:(y,s)\in U \text{ for all }y\sim z\}$, then $\nabla_e^2 [\psi_1^2(\hat x)]=\nabla_e^2[\tilde\psi_1^2(\hat x)]$. When $\hat x=(x,t)\in U\setminus U_1$, then for some $|e|=1$, either $(x+e,t)$ or $(x-e,t)$ is not in $U$. Say, $(x+e,t)\notin U$, then $x\cdot e\ge 1$ and $\exists \delta\in(0,1)$ such that $\tilde\psi_1(x+\delta e,t)=0$. In both cases, there exists $\delta\in(0,1]$ such that \begin{align*} &\nabla^2_e[\psi_1(x,t)^2]\\ &=\tilde\psi_1^2(x+\delta e,t)+\tilde\psi_1^2(x-e,t)-2\psi_1^2(\hat x)\\ &=-\frac{[1+\delta^2+2x\cdot e(\delta -1)]\psi_1}{2R^2}+\frac{1+\delta^2+4(\delta^2+1)(x\cdot e)^2+4(\delta^3-1)x\cdot e}{16 R^4}\\ &\ge -\frac{\psi_1}{R^2}+\frac{(x\cdot e)^2}{4R^4}-\frac{\psi_0^{1/2}}{2R^3}, \end{align*}
where in the last inequality we used the fact $1\le x\cdot e\le |x|\le 2R\psi_0^{1/2}$. Thus, letting $\xi:=\psi_1/\psi_0\in[0,1]$, we have for $\hat x=(x,t)\in U$, \begin{align*} R^2\psi_0^{q-1}\mc L_\omega\psi(\hat x) &=R^2\left(\sum_{i=1}^d\omega_t(x,x+e_i)\nabla^2_{e_i}[\psi_1^2]/\psi_0+\psi_0^{q-1}\partial_t\psi\right)\\
&\ge \frac{c|x|^2}{R^2\psi_0}-C\xi -\frac{C}{R\psi_0^{1/2}} +\dfrac{1-(\sigma/2)^2}{s/R^2}(q\xi-2)\xi\\ &\ge Cq\xi^2-c_1\xi+c_2-c_3/K_1^{1/2}, \end{align*}
where in the last inequality we used $|x|^2/(4R^2\psi_0)=1-\xi$ and $\psi_0^{1/2}\ge \sigma/2\ge K_1^{1/2}/(2R)$. Taking $q$ and $K_1$ large enough, we have $\mc L_\omega\psi\ge 0$ in $U$. The third property in \eqref{e7} is proved.
Finally, by \eqref{e7}, $v(\hat X_t)-w(\hat X_t)$ is a super-martingale for
$t\le T_U:=\inf\{t'\ge 0: \hat X_{t'}\notin U\}$ and $(v-w)|_{\partial^\p U}\ge 0$. Hence, the optional-stopping theorem yields \[ v(\hat x)-w(\hat x)\ge E^{\hat x}_\omega[v(\hat X_{T_U})-w(\hat X_{T_U})]\ge 0 \quad \text{ for }\hat x\in U. \] In particular, $ \min_{x\in B_{R}}v(x,0)\ge \min_{x\in B_{R}}w(x,0) \ge (\sigma/2)^{2q-4}/2. $ \end{proof}
\begin{corollary}\label{cor:prob1} Assume that $\omega\in\Omega_\kappa$, $R/2>r>1/2$, $\theta>0$. There exists $c=c(d,\kappa,\theta)\in(0,1)$ such that for any $y\in\partial B_R$ with $B_r(y)\cap B_R\neq\emptyset$, \[ \min_{x\in B_r(y)\cap B_R}P_\omega^{x,0}\left(X_\cdot \mbox{ exits $B_{2r}(y)\cap B_R$ from $\partial B_R$ before time } \theta r^2\right)>c. \] \end{corollary}
\begin{proof}
By uniform ellipticity, it suffices to prove the corollary for all $r\ge 10$.
Let $z=\tfrac{ry}{5|y|}+y\in\R^d$. Note that $B_{r}(y)\subset B_{5r/4}(z)\subset B_{3r/{2}}(z)\subset B_{2r}(y)$ and $B_{r/5}(z)\subset\Z^d\setminus B_R$. Recall $\Delta$ in \eqref{def:st-a-s}. Then, by Lemma~\ref{lem1}, \begin{align*}
&\min_{x\in B_{r}(y)\cap B_R}P_\omega^{x,0}(X_{\Delta(B_{2r}(y)\cap B_R,\theta r^2)}\in \partial B_R)\\
&\ge
\min_{x\in B_{5r/4}(z)}P_\omega^{x,0}(X_{\Delta(B_{3r/2}(z),\theta r^2)}\in B_{r/5}(z)) \ge c(\theta,d,\kappa).
\end{align*} The corollary is proved. \end{proof}
\begin{lemma}\label{lem6} Assume $\omega\in\Omega_\kappa$, $\beta\in(0,1)$. Let $\tau_{\beta,1}=\tau_{\beta,1}(R)=\inf\{t\ge 0: X_t\notin B_R\setminus\bar B_{\beta R}\}$. Then if $y\in B_R\setminus\bar B_{\beta R}\neq\emptyset$ and $\theta>0$, we have \[ P^{y,0}_\omega(X_{\tau_{\beta,1}}\in\partial B_{\beta R},\tau_{\beta,1}\le \theta R^2) \ge C\dfrac{\dist(y,\partial B_R)}{R}, \] where $C=C(\kappa,d,\beta,\theta)$. \end{lemma}
\begin{proof} It suffices to prove the lemma for $R>\alpha^2$, where $\alpha=\alpha(\kappa,d,\beta,\theta)$ is a large constant to be determined. We only need to consider $y$ with
$\dist(y,\partial (B_R\setminus\bar B_{\beta R}))\ge2$ in which case $R-|y|\asymp \dist(y,\partial B_R)$.
For $\hat x=(x,t)$, let $g(\hat x)=\exp(-\tfrac{\alpha}{R^2}|x|^2-\frac{\alpha t}{\theta R^2})$. Using the inequalities $e^a+e^{-a}\ge 2+a^2$ and $e^a\ge 1+a$, we get for $x\in B_R\setminus \bar B_{\beta R}, t\in\R$, \begin{align*} \mc L_\omega g(\hat x) &=g(\hat x)\left(\sum_{i=1}^d\omega_t(x,x+e_i)[e^{-\tfrac{\alpha}{R^2}(1+2x_i)}+e^{-\tfrac{\alpha}{R^2}(1-2x_i)}-2]-\frac{\alpha}{\theta R^2}\right)\\ &\ge g(\hat x)\left(\sum_{i=1}^d\omega_t(x,x+e_i)[e^{-\alpha/R^2}(2+4\alpha^2x_i^2/R^4)-2]-\frac{\alpha}{\theta R^2}\right)\\ &\ge
g(\hat x)(-C\frac{\alpha}{R^2}+c\frac{\alpha^2|x|^2}{R^4}-\frac{\alpha}{\theta R^2}) \\ &\ge \frac{\alpha}{R^2}g(\hat x) (c\alpha\beta^2-C)>0 \end{align*} if $\alpha$ is chosen to be large enough. Hence $g(\hat X_t)$ is a submartingale for $t\le \tau_{\beta,1}$.
Recall the definition of the stopping time $\Delta$ in \eqref{def:st-a-s}. Let \[ v(\hat x):=\frac{g(\hat x)-e^{-\alpha}}{e^{-\alpha\beta^2}-e^{-\alpha}} \quad \mbox{ and } u(\hat x):=P_\omega^{\hat x}(X_{\Delta(B_R\setminus\bar B_{\beta R}, \theta R^2)}\in \partial B_{\beta R}). \] Set $\ms D=(B_R\setminus\bar B_{\beta R})\times[0,\theta R^2)$.
Since $(u-v)|_{\partial^\p\ms D}\ge 0$ and $u(\hat X_t)$ is a martingale in $\ms D$,
by the optional-stopping theorem we conclude that $u\ge v$ in $\ms D$. In particular, $u(x,0)\ge v(x,0)\ge
C(R^2-|x|^2)/R^2$ for $x\in B_R\setminus\bar B_{\beta R}$. \end{proof}
\begin{lemma}\label{lem7} Let $\beta\in (0,1)$, and let $\tau_{\beta,1}$ be as in Lemma~\ref{lem6}. For $\theta>0$, there exists a constant $C=C(\beta,\kappa,d,\theta)$ such that, if $x\in B_R\setminus \bar B_{\beta R}\neq\emptyset$, \[ P^{x,0}_\omega(X_{(\theta R^2)\wedge\tau_{\beta,1}}\notin\partial B_R)\le C\dist(x,\partial B_R)/R. \] \end{lemma} \begin{proof}
Set $\ms D:=(B_R\setminus \bar B_{\beta R})\times[0,\theta R^2)$. It suffices to consider the case $R>k^2$, where $k=k(\beta,\kappa,d,\theta)>\log 2/\log(2-\beta^2)$ is a large constant to be determined. Let $h(x,t)=2-|x|^2/(R+1)^2+t/(\theta R^2)$.
Recall the notation $\nabla^2$ in \eqref{eq:2nd-difference}. For $\hat x=(x,t)\in \ms D$, note that $1\le h(\hat x)\le 3$ and $|\nabla_{e_i}^2(h^{-k})(\hat x)-\partial_{ii}(h^{-k})(\hat x)|\le Ck^3R^{-3}h^{-k}(x,t)$. Hence for any $\hat x=(x,t)\in \ms D$, when $k$ is sufficiently large, \begin{align*} &\mc L_\omega (h^{-k})(\hat x)\\ &\ge \sum_{i=1}^d \omega_t(x,x+e_i)\partial_{ii}(h^{-k})-Ck^3R^{-3}h^{-k}+\partial_t (h^{-k})\\ &\ge c\sum_{i=1}^d[k^2\tfrac{x_i^2}{(R+1)^4}h^{-k-2}+\tfrac{k}{(R+1)^2}h^{-k-1}]-Ck^3R^{-3}h^{-k}-\tfrac{k}{\theta R^2}h^{-k-1}\\ &\ge ckh^{-k}R^{-2}[k-C-Ck^2R^{-1}]>0, \end{align*} which implies that $h(\hat X_t)^{-k}$ is a submartingale inside the region $\ms D$.
Next, recalling the stopping time $\Delta$ in \eqref{def:st-a-s}, set \[ u(\hat x)=P_\omega^{\hat x}(X_{\Delta(B_R\setminus\bar B_{\beta R}, \theta R^2)}\notin\partial B_R). \] Then $u(\hat X_t)+(2-\beta^2)^kh(\hat X_t)^{-k}$ is a submartingale in $\ms D$. Since \[ \left\{ \begin{array}{rl}
&h^{-k}|_{x\in \partial B_R}\le (2-1+0)^{-k}=1\\
&h^{-k}|_{x\in\partial B_{\beta R}}\le (2-\beta^2)^{-k}\\
& h^{-k}|_{t=\theta R^2}\le (2-1+1)^{-k}\le (2-\beta^2)^{-k} \end{array}, \right. \] by the optional stopping theorem, we have for $x\in B_R\setminus \bar B_{\beta R}$, \begin{align*} u(x,0)+(2-\beta^2)^kh(x,0)^{-k}\le \sup_{\partial^\p\ms D} [u+(2-\beta^2)^kh^{-k}]\le (2-\beta^2)^k. \end{align*} Therefore, for any $x\in B_R\setminus \bar B_{\beta R}$, \begin{align*} u(x,0)&\le (2-\beta^2)^k(1-h(x,0)^{-k})\\
&\le C(h(x,0)-1)= C[1-|x|^2/(R+1)^2]\\ &\le C\dist(x,\partial B_R)/R. \end{align*} Our proof of Lemma~\ref{lem7} is complete. \end{proof}
\newtheorem{atheorem}{Theorem} \numberwithin{atheorem}{subsection} \newtheorem{alemma}[atheorem]{Lemma}
\appendix \section{Appendix} \subsection{Properties (i)-(iii) in Remark~\ref{rm3}} \begin{proof} (i)Since $\mb Q$ is an invariant measure for $(\bar\omega_t)$, we have for any bounded measurable function $f$ on $\Omega$, $y\in\Z^d$, $\hat x=(x,t)$, and $s<t$, \begin{align}\label{inv-meas} 0&=E_\mb Q E_\omega^{0,0}[f(\theta_{-\hat x}(\bar\omega_{t-s}))-f(\theta_{-\hat x}\omega)]\nonumber\\ &= E_\mb P\left[ \rho(\omega)\sum_{y\in\Z^d}p^\omega(0,0;x-y,t-s)[f(\theta_{-y,-s}\omega)-f(\theta_{-\hat x}\omega)] \right]\nonumber\\ &= E_\mb P\left[ f(\omega)[\sum_{y\in\Z^d}\rho(\theta_{\hat y}\omega)p^\omega(\hat y,\hat x)-\rho(\theta_{\hat x}\omega)] \right], \end{align} where $\hat y=(y,s)$ and we used the translation-invariance of $\mb P$ in the last equality. Moreover, by Fubini's theorem, for any bounded compactly-supported continuous function $\phi:\R\to\R$, \[ E_{\mb P}\left[f(\omega) \int_{-\infty}^t\phi(s)[\sum_{y\in\Z^d}\rho(\theta_{\hat y}\omega)p^\omega(\hat y,\hat x)-\rho(\theta_{\hat x}\omega)]\mathrm{d} s \right] =0 \] Thus we have that $\mb P$-almost surely, for any such test function $\phi$ on $\R$, \begin{equation*} \int_{-\infty}^t\phi(s)[\sum_{y\in\Z^d}\rho_\omega(\hat y)p^\omega(\hat y,\hat x)-\rho_\omega(\hat x)]\mathrm{d} s =0, \end{equation*} which (together with the translation-invariance of $\mb P$) implies that $\mb P$-almost surely, $\rho_\omega(x,t)\delta_x\mathrm{d} t$ is an invariant measure for the process $(\hat X_t)_{t\ge 0}$.
(ii) We have $\rho_\omega>0$ since the measures $\mb Q$ and $\mb P$ are equivalent. The uniqueness follows from the uniqueness of $\mb Q$ in \cite[Theorem 1.2]{DGR15}.
(iii) By \eqref{inv-meas} and Fubini's theorem, we also have that $\mb P$-almost surely, for any test function $\phi(t)$ as in (i) and any $h>0$, $x\in\Z^d$, \[ \int_{-\infty}^\infty \phi(t)\Big[\sum_{y\in\Z^d}\rho_\omega(y,t)(p^\omega(y,t;x,t+h)-\delta_x(y))-(\rho_\omega(x,t+h)-\rho_\omega(x,t))\Big]\mathrm{d} t =0. \] Dividing both sides by $h$ and letting $h\to 0$, we obtain \eqref{rho-invariance} with $\partial_t\rho_\omega$ replaced by the weak derivative. Note that the weak differentiability of $\rho_\omega$ in $t$ implies that it has an absolutely continuous (in $t$) version.
Since $\rho_\omega$ is only used as a density, we may always assume that $\mb P$-almost surely, $\rho_\omega(x,\cdot)$ is continuous and almost-everywhere differentiable in $t$. \end{proof}
\subsection{Proof of Corollary~\ref{cor:hoelder}} \begin{proof} Assume $(x_0,t_0)=(0,0)$ and fix $R>0$. Let $R_k=2^{-k}R$ and $Q^k=B_{R_k}\times(-R_k^2,0]$. Note that $Q^{k+1}\subset Q^k$. For any bounded subset $E\subset\Z^d\times\R$, denote $\osc_E u:=\sup_E u-\inf_E u$. Set \[ v_k:=(u-\inf_{Q^k}u)/\osc_{Q^k}u. \] Notice that $\inf_{Q^k}v_k=0, \sup_{Q^k}v_k=1$ and \[ \osc_{Q^{k+1}}u=\osc_{Q^{k+1}}v_k \cdot\osc_{Q^k}u. \] We claim that $\osc_{Q^{k+1}}v_k\le 1-\delta$ for some $\delta=\delta(d,\kappa)\in(0,1)$. Indeed, replacing $v_k$ by $1-v_k$ if necessary, we can assume $\sup_{B_{R_{k+1}}\times(-\tfrac34 R_k^2,-\tfrac12 R_k^2)}v_k\ge 1/2$. By the PHI for $\mc L^*_\omega$ (Theorem~\ref{thm-ah}), \[ \inf_{Q^{k+1}}v_k\ge c\sup_{B_{R_{k+1}}\times(-\tfrac34 R_k^2,-\tfrac12 R_k^2)}v_k\ge \tfrac c2=:\delta\in(0,1) \] and so $\osc_{Q^{k+1}}v_k\le \sup_{Q^k}v_k-\inf_{Q^{k+1}}v_k\le 1-\delta$. The claim is proved. So \[ \osc_{Q^{k+1}}u\le (1-\delta)\osc_{Q^k}u. \] If $r>R/2$, the corollary is trivial. If $r\le R/2$, we iterate the above inequality $k=\floor{\log_2(R/r)}$ times (so that $Q^{k+1}\subset B_r\times(-r^2,0]\subset Q^k$) to obtain \[ \osc_{B_r\times(-r^2,0]}u\le \osc_{Q^{k}}u\le (1-\delta)^k\osc_{Q^0}u\le (1-\delta)^{-1}(r/R)^\gamma \osc_{Q^0}u, \] where $\gamma=-\log_2(1-\delta)$. Our proof is complete. \end{proof}
\subsection{Parabolic maximum principle} In what follows we will prove a maximum principle for parabolic difference operators under the discrete space and continuous time setting. For any $\ms D\subset B_R\times(0,T)$, $\hat x:=(x,t)\in\ms D$ and $u: \ms D\cup\partial^\p\ms D\to\R$, define \[ I_u(\hat x) :=\{ p\in\R^d: u(x,t)-u(y,s)\ge p\cdot(x- y)\mbox{ for all $(y,s)\in\ms D\cup\partial^\p\ms D$ with $s> t$}\}, \] \[ \Gamma=\Gamma(u,\ms D):=\{(x,t)\in\ms D: I_u(x,t)\neq\emptyset\}, \] \begin{align}\label{eq:181222} \Gamma^+
=\Gamma^+(u,\ms D)=\{\hat x\in\Gamma:R|p|<u(\hat x)-p\cdot x \mbox{ for some }p\in I_u(\hat x)\}. \end{align} \begin{atheorem}[Maximum principle]\label{thm:max-p} Let $\omega\in\Omega_\kappa$. Recall $\int_{\ms D}$ in \eqref{eq:def-integral}. Assume that $\ms D\subset B_R\times(0,T)$ is an open subset of $\Z^d\times\R$ for some $R,T>0$. Let $f$ be a measurable function on $\ms D$. For any function $u:\ms D\cup\partial^\p\ms D\to\R$ that solves $\mc L_a u\ge -f$ in $\ms D$, we have \[ \sup_{\ms D} u\le \sup_{\partial^\p\ms D}u+ CR^{d/(d+1)}
(\int_{\Gamma^+}|f|^{d+1})^{1/(d+1)} \] \end{atheorem} \begin{proof} Without loss of generality, assume $f\ge 0$, $\sup_{\partial^\p\ms D}u=0$, and \[\sup_{\ms D}u:=M>0.\]
Let \[ \Lambda=\{
(\xi,h)\in\R^{d+1}: R|\xi|<h<M/2 \}. \] For $(x,t)\in\ms D$, define a set \[ \chi(x,t)=\{(p,u(x,t)-x\cdot p): p\in I_u(x,t)\}\subset \R^{d+1}. \]
First, we claim that \begin{equation}\label{e3} \Lambda\subset \chi(\Gamma^+):=\bigcup_{(x,t)\in\Gamma^+}\chi(x,t). \end{equation} This will be proved by showing that for any $(\xi,h)\in\Lambda$, we have $(\xi,h)\in\chi(x_1,t_1)$ for some $(x_1,t_1)\in\Gamma^+$. Indeed, fix $(\xi,h)\in\Lambda$ and define \[ \phi(x,t):=u(x,t)-\xi\cdot x-h. \]
Since $\sup_{\ms D}\phi\ge M-|\xi|R-h>0$, there exists $(x_0,t_0)\in\ms D$ with $\phi(x_0,t_0)>0$. Now for any $x\in\Z^d$, set (with the convention $\sup\emptyset=-\infty$) \[ N_x=\sup\{t:(x,t)\in\ms D \text{ and } \phi(x,t)\ge 0\}, \] and let $(x_1,t_1)$ be such that \[ t_1=N_{x_1}=\max_{x\in B_R}N_x\ge N_{x_0}\ge t_0. \]
By the continuity of $\phi$, we get $\phi(x_1,t_1)\ge 0$ and $(x_1,t_1)\in \ms D\cup\partial^\p\ms D$. Since $\phi|_{\partial^\p\ms D}< 0$, we have $(x_1,t_1)\in\ms D$. Moreover, since $\ms D$ is an open set, we can conclude that $\phi(x_1,t_1)=0$ and $\phi(x_1,s)<0$ for all $s>t_1$ with $(x_1,s)\in\ms D\cup\partial^\p\ms D$. Hence $\xi\in I_u(x_1,t_1)$ and $u(x_1,t_1)-\xi\cdot x_1=h>R|\xi|$, which implies that $(\xi, h)\in\chi(x_1,t_1)$ and $(x_1,t_1)\in\Gamma^+$. Display \eqref{e3} is proved.
Next, setting \[ \chi(\Gamma^+, x):=\bigcup_{s:(x,s)\in\Gamma^+}\chi(x,s), \] we will show that \begin{equation}\label{e4} {\rm Vol}_{d+1}(\chi(\Gamma^+, x)) \le C\int_0^T (f(x,t)/\varepsilon)^{d+1}1_{(x,t)\in\Gamma^+}\mathrm{d} t, \end{equation} where ${\rm Vol}_{d+1}$ is the volume in $\R^{d+1}$. To this end, let $ \tilde\chi(x,t) =I_u(x,t)\times\{ u(x,t)\}\subset\R^{d+1}$. Noting that, for fixed $x$, the map $(y,s)\mapsto(y,s+y\cdot x)$ is volume preserving, we then have \begin{align}\label{e5} {\rm Vol}_{d+1}(\chi(\Gamma^+, x)) &= {\rm Vol}_{d+1}(\tilde\chi(\Gamma^+, x))\nonumber\\ &= \int_0^T (-\partial_t u) {\rm Vol}_d(I_u(x,t))1_{(x,t)\in\Gamma^+}\mathrm{d} t. \end{align} For any fixed $p\in I(x,t)$, $(x,t)\in\Gamma^+$, set \[ w(y,s)=u(y,s)-p\cdot y. \] Then $I_w(x,t)=I_u(x,t)+p$. Since $w(x,t)-w(x\pm e_i,t)\ge \mp q_i$ for any $q\in I_w(x,t)$, $i=1,\ldots d$, we have \[ {\rm Vol}_d(I_u(x,t))={\rm Vol}_d(I_w(x,t)) \le \prod_{i=1}^d[2u(x,t)-u(x+e_i,t)-u(x-e_i,t)]. \] This inequality, together with \eqref{e5}, yields \begin{align*} &{\rm Vol}_{d+1}(\chi(\Gamma^+, x))\\
&\le -C\int_0^T \partial_t u\prod_{i=1}^d a_t(x,x+e_i)[2u(x,t)-u(x+e_i,t)-u(x-e_i,t)]1_{(x,t)\in\Gamma^+}\mathrm{d} t\\
&\le C\int_0^T [-\mc L_a u(x,t)]^{d+1}1_{(x,t)\in\Gamma^+}\mathrm{d} t. \end{align*} Display \eqref{e4} is proved. Finally, by \eqref{e3}, \eqref{e4} and ${\rm Vol}_{d+1}(\Lambda)=CM^{d+1}/R^d$, we conclude that $ M^{d+1}/R^d\le C\int_{\Gamma^+}f^{d+1}$. The theorem follows immediately. \end{proof} \subsection{Mean value inequality}\label{subset:mvi} \begin{atheorem}\label{thm:mvi} Let $\theta>0$ and $a\in\Omega_\kappa$. Recall $\norm{\cdot}_{\ms D,p}$ in \eqref{eq:def-norm}. For any $\theta_1\in(0,\theta)$, $\rho\in (0,1)$ and $p>0$, there exists $C=C(\kappa,d,p,\theta,\theta_1,\rho)$ such that for any function $u$ that solves $\mc L_a u\ge 0$ in $\ms D=B_R\times[0,\theta R^2)$, we have \[ \sup_{B_{\rho R}\times[0,\theta_1R^2)}u\le C\norm{u^+}_{\ms D,p}. \] \end{atheorem} \begin{proof} Since $\norm{u^+}_{\ms D,p}$ is increasing in $p>0$, it suffices to consider $p\in(0,1)$. Let $\beta\ge 2$ be a constant to be determined, and set \[
\eta(x):=\big(1-\frac{|x|^2}{R^2}\big)^\beta 1_{x\in B_R},\quad {
\zeta(t):=\big(1-\dfrac{t}{\theta R^2}\big)^\beta 1_{0\le t<\theta R^2},} \] and set $v=\eta u^+$, $\bar v=v\zeta$. Define an elliptic operator $L_a^E$ to be \[ L_a^E f(x,t)=\sum_{y:y\sim x}a_t(x,y)(f(y,t)-f(x,t)), \]
so that $\mc L_a=L_a^E+\partial_t$. Note that $\bar v|_{\partial^\p \ms D}=0$ and $\bar v(\hat x)>0$ for $\hat x\in\Gamma^+(\bar v, \ms D)$. (Recall the definition of $\Gamma^+$ above Theorem~\ref{thm:max-p}.) By the same argument as in \cite[displays (27),(28) and (29)]{GZ}, we have that on $\Gamma^+(\bar v, \ms D)$, $u^+=u$ and \[ L_a^E v \ge \eta L_a^E u-C_\beta\eta^{1-2/\beta}R^{-2}u^+. \] Hence, for $X=(x,t)\in \Gamma^+(\bar v, \ms D)$, \begin{align*} \mc L_a \bar v &=\zeta L_a^E v+\partial_t(\zeta\eta u)\\ &\ge \zeta\eta L_a^E u-C_\beta\zeta\eta^{1-2/\beta}R^{-2}u^++\eta u\partial_t \zeta+\zeta\eta \partial_t u\\ &= \zeta\eta\mc L_a u -C_\beta\zeta\eta^{1-2/\beta}R^{-2}u^++\eta u\partial_t \zeta\\ &\ge -C_\beta\zeta\eta^{1-2/\beta}R^{-2}u^++\eta u\partial_t \zeta. \end{align*} Noting that in $\ms D$, $\partial_t\zeta\ge -C\beta R^{-2}\zeta^{1-1/\beta}/\theta$ and $\zeta,\eta\in[0,1]$, we have \[ \mc L_a\bar v \ge -C(\eta\zeta)^{1-2/\beta}R^{-2}u^+ \qquad\mbox{in }\Gamma^+(\bar v, \ms D). \] Applying Theorem~\ref{thm:max-p} to $\bar v$ and taking $\beta=2(d+1)/p$, \begin{align*} \sup_{\ms D}\bar v &\le C\norm{(\eta\zeta)^{1-2/\beta}u^+/\varepsilon}_{\ms D,d+1}\\ &\le C(\sup_{\ms D}\bar v)^{1-p/(d+1)}\norm{(u^+)^{p/(d+1)}/\varepsilon}_{\ms D,d+1}. \end{align*} Since $\sup_{B_{\rho R}\times[0,\theta_1R^2)}u\le C\sup_{\ms D}\bar v$, the theorem follows. \end{proof}
\subsection{Reverse H\"older implies $A_p$} Recall $\abs{\ms D}, \int_{\ms D}, \norm{\cdot}_{\ms D,p}$, and the parabolic cubes in \eqref{eq:def-integral}, \eqref{eq:def-norm} and \eqref{eq:def-parakub}. \begin{alemma} Let $K^0\subset\Z^d\times\R$ be a parabolic cube with side-length $r>0$. If a function $w>0$ on $K^0$ satisfies $RH_q(K^0)$, $q>1$, then \begin{enumerate}[(i)] \item $w\in A_p(K^0)$ for some $1<p<\infty$;
\item $\frac{w(E)}{w(K)}\ge C(\frac{|E|}{|K|})^c$ for all $E\subset K$ where $K\neq\emptyset$ is a subcube of $K^0$. \end{enumerate} \end{alemma} \begin{proof}
First, we claim that there exist constants $\gamma,\delta\in (0,1)$ such that $w(E)>\gamma w(K)$ implies $|E|>\delta|K|$ for all $E\subset K$ where $K\neq\emptyset$ is a subcube of $K^0$. Indeed, this is a simple consequence of H\"older's inequality: \begin{align*}
\frac{1}{|K|}w(E)=\frac{1}{|K|}\int_{K} w\mathbbm{1}_{E}
\le (\frac{|E|}{|K|})^{1/q'}\norm{w}_{K,q}\stackrel{(RH_q)}{\le} C\left(\frac{|E|}{|K|}\right)^{1/q'}\frac{w(K)}{|K|}, \end{align*} where $q'=q/(q-1)$ denotes the conjugate of $q$.
Assume $K^0=K_r$. Let $\mc M_k(K_r), k>1$ be the family of nonempty subcubes of $K_r$ of the form \[ (\prod_{i=1}^d[\frac{m_i}{2^k}r,\frac{1+m_i}{2^k}r)\cap\Z^d)\times[\frac{n}{4^k}r^2,\frac{1+n}{4^k}r^2) \] where $m_1,\ldots,m_d$ and $n$ are integers. Elements in $\mc M_k(K_r)$ are called $k$-level dyadic subcubes of $K_r$. Note that every $k$-level cube $K$ is contained in a unique $(k-1)$-level ``parent" denoted by $K^{-1}$.
Since the class $A_p$ is invariant under constant multiplication, we may assume that $w(K^0)/|K^0|=1$.
Let $f:=w^{-1}\mathbbm{1}_{K^0}$ and define a maximal function \[
M_f(x):=\sup_{K\ni x}\frac{1}{w(K)}\int_K |f|w, \] where the supremum is taken over all dyadic subcubes $K$ of $K^0$.
Consider the level sets
\[
E_k=\{x\in K^0: M_f(x)>2^{Nk}\}, \quad k=0,1,2,\ldots
\]
where $N$ is a large constant to be determined. Notice that, by the normalization $w(K^0)/|K^0|=1$, the set $E_0$ consists of dyadic subcubes strictly smaller than $K^0$. Since $w$ is volume-doubling, there exists a constant $c_0>0$ such that for any maximal dyadic subcube $K$ of $E_{k-1}$,
\[
\int_K fw\le \int_{K^{-1}}fw\le 2^{N(k-1)}w(K^{-1})\le 2^{N(k-1)+c_0}w(K).
\]
Moreover, for the same $K$, we have $2^{Nk}w(E_k\cap K)\le \int_{K}fw$ and so, by the inequality above, $w(E_k\cap K)\le 2^{c_0-N}w(K)$. We now take $N$ large enough that $w(E_k\cap K)\le (1-\gamma) w(K)$; by the claim above applied to the complement $K\setminus E_k$, this implies $|E_k\cap K|\le (1-\delta)|K|$. Summing over all such $K$'s, we have $|E_k|\le (1-\delta) |E_{k-1}|$, $k\ge 1$.
Thus \[
|E_k|\le(1-\delta)^k|E_0|\le (1-\delta)^k|K^0|, \quad k=0,1,\ldots \] and so, for $p>1$ chosen so that $p'=p/(p-1)$ is sufficiently close to $1$, \begin{align*} \int_{K^0} f^{p'-1} &=\int_{K^0\cap\{x:M_f\le 1\}}f^{p'-1}+\sum_{k=0}^\infty \int_{E_k\setminus E_{k+1}}f^{p'-1}\\
&\le |K^0|+\sum_{k=0}^\infty 2^{(p'-1)N(k+1)}(1-\delta)^k|K^0|\\
&\le C|K^0|. \end{align*} (i) is proved. (ii) then follows from H\"older's inequality \begin{align*} \frac{1}{w(K)}\int_E w^{-1}w\le \left(\frac{1}{w(K)}\int_K w^{-p'}w\right)^{1/p'}\left(\frac{1}{w(K)}\int_K\mathbbm{1}_E w\right)^{1/p} \end{align*} and the $A_p$ inequality. \end{proof}
\subsection{Proof of Corollary~\ref{cor:q-estimates}\eqref{cor:green1}\eqref{cor:green2}} \begin{proof}
\eqref{cor:green1} For any $\hat x=(x,t)\in\R^d\times[0,\infty)$ and $\omega\in\Omega_\kappa$, set \[ v(\hat x)=q^\omega(\hat 0; \floor{x},t) \quad\mbox{ and }\quad a^\omega(x):=\int_0^\infty(v(0,t)-v(x,t))\mathrm{d} t. \] When $d=2$, it suffices to consider $x\in\mb B_1\setminus\{0\}$. We fix a small number $\epsilon\in(0,1)$ and split the integral $a^\omega(nx)$ into three parts: \begin{align*} a^\omega(nx)=\int_0^{n^\epsilon}+\int_{n^\epsilon}^{n^2} +\int_{n^2}^\infty=:\rom{1}+\rom{2}+\rom{3}, \end{align*} where it is understood that the integrand is $(v(0,t)-v(nx,t))\mathrm{d} t$.
First, we will show that $\mb P$-almost surely, \begin{equation}\label{eq:green-1}
\varlimsup_{n\to\infty}|\rom{1}|/\log n\le \epsilon. \end{equation} By Theorem~\ref{thm:hke}, for any $t\in(0,n^\epsilon)$, $x\in\Z^2\setminus\left\{0\right\}$ and all $n$ large enough, $v(nx,t)
\le Ce^{-cn|x|}/\rho_\omega(B_{\sqrt t},0). $ Thus \[ \int_0^{n^\epsilon}v(nx,t)\mathrm{d} t \le
\frac{n^\epsilon}{\rho_\omega(\hat 0)} e^{-cn|x|}. \] By \eqref{cor:q-hke}, there exists $t_0(\omega)>0$ such that for $n$ big enough with $n^\epsilon>t_0$, \begin{align*} \int_0^{n^\epsilon}v(0,t)\mathrm{d} t \le \frac{Ct_0}{\rho_\omega(\hat 0)}+\int_{t_0}^{n^\epsilon}\frac{C}{t}\mathrm{d} t \le \frac{Ct_0}{\rho_\omega(\hat 0)}+C\epsilon\log n. \end{align*} Display \eqref{eq:green-1} follows immediately.
In the second step, we will show that (note that $2p^\Sigma_1(0,0)=1/\pi\sqrt{\det\Sigma}$) \begin{equation}\label{eq:green-2}
\limsup_{n\to\infty}|\rom{2}-2p_1^\Sigma(0,0)\log n|/\log n\le C\epsilon, \quad \mb P\mbox{-a.s.} \end{equation} Indeed, by Theorem~\ref{thm:llt}, there exists $C(\omega,\epsilon)>0$ such that
$|tv(0,t)-p_1^\Sigma(0,0)|\le\epsilon$ whenever $t\ge C(\omega,\epsilon)$. Now, taking $n$ large enough such that $n^{\epsilon}>C(\omega,\epsilon)$, \begin{align}\label{eq:green-21} &\Abs{\int_{n^\epsilon}^{n^2}v(0,t)\mathrm{d} t-(2-\epsilon)p_1^\Sigma(0,0)\log n}\nonumber\\ &\le \int_{n^\epsilon}^{n^2}\Abs{\frac{tv(0,t)-p^\Sigma_1(0,0)}{t}}\mathrm{d} t\nonumber\\ &\le \epsilon\int_{n^\epsilon}^{n^2}\frac{\mathrm{d} t}{t}<2\epsilon\log n. \end{align} On the other hand, for $t\ge n^\epsilon>t_0(\omega)$, by \eqref{cor:q-hke},
$v(nx,t)\le \frac{C}{t}(e^{-cn|x|}+e^{-cn^2|x|^2/t})$. Thus \begin{equation}\label{eq:green-22} \int_{n^\epsilon}^{n^2}v(nx,t)\mathrm{d} t\le
\int_{n^\epsilon}^{n^{2-\epsilon}}\frac{C}{t}e^{-cn^\epsilon|x|^2}\mathrm{d} t +\int_{n^{2-\epsilon}}^{n^2}\frac{C}{t}\mathrm{d} t \le C\epsilon\log n. \end{equation} Displays \eqref{eq:green-21} and \eqref{eq:green-22} imply \eqref{eq:green-2}.
Finally, we will prove that for $\mb P$-almost every $\omega$, \begin{equation}\label{eq:green-3}
\limsup_{n\to\infty}|\rom{3}|/\log n=0. \end{equation}
Since $|x|<1$, by \eqref{eq:holder-v}, for any $t\ge n^2\ge t_0(\omega)$, \[ \Abs{v(0,t)-v(nx,t)}\le C\left( \frac{n}{\sqrt t} \right)^\gamma t^{-1}. \] Therefore, $\mb P$-almost surely, when $n^2>t_0(\omega)$, \begin{align*} \Abs{\int_{n^2}^\infty (v(0,t)-v(nx,t))\mathrm{d} t} &\le Cn^\gamma\int_{n^2}^\infty \frac{1}{t^{\gamma/2+1}}\mathrm{d} t\le C. \end{align*} Display \eqref{eq:green-3} follows. Combining \eqref{eq:green-1}, \eqref{eq:green-2} and \eqref{eq:green-3}, we have for $d=2$, \[ \varlimsup_{n\to\infty}\Abs{\frac{a^\omega(nx)}{\log n}-2p_1^\Sigma(0,0)}\le C\epsilon. \] Noting that $\epsilon>0$ is arbitrary, we obtain Corollary~\ref{cor:q-estimates}\eqref{cor:green1}.
\eqref{cor:green2} We fix a small constant $\epsilon\in(0,1)$. Note that \[ n^{d-2}\int_0^\infty q^\omega(\hat 0;\floor{nx},t)\mathrm{d} t =\int_0^\infty n^d v(nx,n^2s)\mathrm{d} s. \] For any fixed $x\in\R^d$, write \begin{align*} \int_0^\infty n^d v(nx,n^2s)\mathrm{d} s =\int_0^{n^{-\epsilon}}+\int_{n^{-\epsilon}}^\epsilon+\int_\epsilon^{1/\sqrt \epsilon}+\int_{1/\sqrt\epsilon}^\infty =:\rom{1}+\rom{2}+\rom{3}+\rom{4}. \end{align*} First, by Theorem~\ref{thm:hke}, for $s\in(0,n^{-\epsilon})$,
we have $v(nx,n^2s)\le Ce^{-cn^\epsilon|x|^2}/\rho_\omega(\hat 0)$, hence \begin{equation}\label{eq:green2-1}
\varlimsup_{n\to\infty}\rom{1}\le C\lim_{n\to\infty}n^{d-\epsilon}e^{-cn^\epsilon|x|^2}/\rho_\omega(\hat 0)=0. \end{equation} Second, by \eqref{cor:q-hke}, when $n$ is large enough, we have for all $t\ge n^{2-\epsilon}$ that
$v(nx,t)\le Ct^{-d/2}e^{-cn^2|x|^2/t}$. Hence \begin{equation}\label{eq:green2-2} \varlimsup_{n\to\infty}\rom{2}\le
\varlimsup_{n\to\infty} Cn^d\int_{n^{-\epsilon}}^\epsilon (n^2s)^{-d/2}e^{-c|x|^2/s}\mathrm{d} s\le C\epsilon. \end{equation}
Moreover, by Theorem~\ref{thm:llt}, there exists $N(\omega,\epsilon)$ such that for $n\ge N(\omega,\epsilon)$, we have $\sup_{|s|\ge \epsilon}|v(nx,n^2s)-p_s^\Sigma(0,x)|\le\epsilon$. Hence \begin{equation}\label{eq:green2-3} \varlimsup_{n\to\infty}\Abs{\rom{3}-\int_{\epsilon}^{1/\sqrt \epsilon}p_s^\Sigma(0,x)\mathrm{d} s}\le \sqrt\epsilon. \end{equation} Further, by \eqref{cor:q-hke}, for $d\ge 3$, \begin{equation}\label{eq:green2-4} \varlimsup_{n\to\infty}\rom{4} \le C\int_{1/\sqrt\epsilon}^\infty \frac{n^d}{(n^2s)^{d/2}}\mathrm{d} s=C\epsilon^{(d-2)/4}. \end{equation} Finally, combining \eqref{eq:green2-1},\eqref{eq:green2-2}, \eqref{eq:green2-3} and \eqref{eq:green2-4}, we get \[ \varlimsup_{n\to\infty} \Abs{\int_0^\infty n^d v(nx,n^2s)\mathrm{d} s-\int_{\epsilon}^{1/\sqrt \epsilon}p_s^\Sigma(0,x)\mathrm{d} s} \le C\epsilon^{1/4}. \] Letting $\epsilon\to 0$, \eqref{cor:green2} is proved. \end{proof}
\end{document} |
\begin{document}
\title[ORTHOGONALITY PRESERVING] {ON VOLTERRA AND ORTHOGONALITY PRESERVING QUADRATIC STOCHASTIC OPERATORS}
\author{Farrukh Mukhamedov} \address{Farrukh Mukhamedov\\
Department of Computational \& Theoretical Sciences\\ Faculty of Science, International Islamic University Malaysia\\ P.O. Box, 141, 25710, Kuantan\\ Pahang, Malaysia} \email{{\tt [email protected]} {\tt farrukh\[email protected]}}
\author{Muhammad Hafizuddin Bin Mohd Taha} \address{Muhammad Hafizuddin Bin Mohd Taha\\
Department of Computational \& Theoretical Sciences\\ Faculty of Science, International Islamic University Malaysia\\ P.O. Box, 141, 25710, Kuantan\\ Pahang, Malaysia}
\subjclass{Primary 37E99; Secondary 37N25, 39B82, 47H60, 92D25} \keywords{Quadratic stochastic operator, Volterra operator, orthogonal preserving.}
\begin{abstract}
A quadratic stochastic operator (QSO for short) is usually used to describe the time evolution of differing species in biology. Some quadratic stochastic operators have been studied by Lotka and Volterra. In the present paper, we first give a simple characterization of Volterra QSO in terms of absolute continuity of discrete measures. Moreover, we provide its generalization in the continuous setting. Further, we introduce a notion of orthogonal preserving QSO, and describe such operators defined on the two-dimensional simplex. It turns out that orthogonal preserving QSOs are permutations of Volterra QSOs. The associativity of genetic algebras generated by orthogonal preserving QSOs is studied too. \end{abstract}
\maketitle
\section{Introduction}
The history of quadratic stochastic operators (QSO) can be traced back to Bernstein's work \cite{B}, where such operators appeared in problems of population genetics (see also \cite{Ly2}). Operators of this kind, which describe the time evolution of various species in biology, are represented by the so-called Lotka-Volterra (LV) systems \cite{L,V1,V2}.
A quadratic stochastic operator is usually used to describe the time evolution of species in biology, which arises as follows. Consider a population consisting of $m$ species (or traits) $1,2,\cdots,m$. We denote the set of all species (traits) by $I=\{1,2,\cdots,m\}$. Let $x^{(0)}=\left(x_1^{(0)},\cdots,x_m^{(0)}\right)$ be a probability distribution of the species at the initial state, and let $P_{ij,k}$ be the probability that individuals in the $i^{th}$ and $j^{th}$ species (traits) interbreed to produce an individual from the $k^{th}$ species (trait). Then the probability distribution $x^{(1)}=\left(x_{1}^{(1)},\cdots,x_{m}^{(1)}\right)$ of the species (traits) in the first generation can be found by the law of total probability, i.e., \begin{equation*} x_k^{(1)}=\sum_{i,j=1}^m P_{ij,k} x_i^{(0)} x_j^{(0)}, \quad k=\overline{1,m}. \end{equation*} This means that the association $x^{(0)} \to x^{(1)}$ defines a mapping $V$ called \textit{the evolution operator}. The population evolves by starting from an arbitrary state $x^{(0)},$ then passing to the state $x^{(1)}=V(x^{(0)})$ (the first generation), then to the state $x^{(2)}=V(x^{(1)})=V(V(x^{(0)}))=V^{(2)}\left(x^{(0)}\right)$ (the second generation), and so on. Therefore, the evolution of the states of the population system is described by the following discrete dynamical system: $$x^{(0)}, \quad x^{(1)}=V\left(x^{(0)}\right), \quad x^{(2)}=V^{(2)}\left(x^{(0)}\right), \quad x^{(3)}=V^{(3)}\left(x^{(0)}\right), \cdots$$
In other words, a QSO describes the distribution of the next generation given the distribution of the current generation. Fascinating applications of QSO to population genetics were given in \cite{Ly2}. Furthermore, quadratic stochastic operators serve as an important tool for the study of dynamical properties and modeling in various fields such as biology \cite{HHJ,HS,May,MO,NSE}, physics \cite{PL,T}, economics and mathematics \cite{G,Ly2,T,U,V}.
In \cite{11}, a self-contained exposition of the recent achievements and open problems in the theory of QSO was given. The main problem in nonlinear operator theory is to study the behavior of nonlinear operators. This problem has not been fully resolved even in the class of QSO (the QSO being the simplest nonlinear operator). The difficulty of the problem depends on the given cubic matrix $(P_{ijk})_{i,j,k=1}^m$. The asymptotic behavior of a QSO even on a small-dimensional simplex is complicated \cite{MSQ,V,Z}.
In the present paper, we first give a simple characterization of Volterra QSO (see \cite{G}) in terms of absolute continuity of discrete measures (see Section 3). Further, in Section 4 we introduce a notion of orthogonal preserving QSO, and describe such operators defined on the two-dimensional simplex. It turns out that orthogonal preserving QSOs are permutations of Volterra QSOs. In Section 5, we study the associativity of genetic algebras generated by orthogonal preserving QSOs.
\section{Preliminaries}
An evolutionary operator of a free population is a (quadratic) mapping of the simplex \begin{equation}\label{1.2}
S^{m-1}=\{\mathbf{x}=(x_1,\ldots,x_m)\in \mathbb{R}^m|x_i\geq0, \ \ \sum_{i=1}^mx_i=1\} \end{equation} into itself of the form \begin{equation}\label{1.3} V:x_k^{\prime}=\sum_{i,j=1}^mP_{ij,k}x_ix_j, \ \ k=1,2,\ldots,m, \end{equation} where $P_{ij,k}$ are coefficients of heredity and \begin{equation}\label{1.4} P_{ij,k}\geq0, \ \ P_{ij,k}=P_{ji,k}, \ \ \sum_{k=1}^{m}P_{ij,k}=1, \ \ i,j,k=1,2,\ldots,m. \end{equation} Such a mapping is called a \textit{quadratic stochastic operator (QSO)}.
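For instance, a simple illustration with $m=2$: take the heredity coefficients $P_{11,2}=P_{22,1}=1$ and $P_{12,1}=P_{12,2}=\tfrac12$ (the remaining coefficients being determined by \eqref{1.4}). Then \eqref{1.3} reads
\begin{equation*}
x_1^{\prime}=2\cdot\tfrac12\, x_1x_2+x_2^2=x_2(x_1+x_2)=x_2, \qquad x_2^{\prime}=x_1^2+2\cdot\tfrac12\, x_1x_2=x_1,
\end{equation*}
so this QSO is simply the exchange of the two coordinates of the simplex $S^1$.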
Note that every element $\mathbf{x}\in S^{m-1}$ is a probability distribution on $E=\{1,\ldots,m\}$. The population evolves starting from an arbitrary initial state $\mathbf{x}\in S^{m-1}$ (probability distribution on $E$) to the state $\mathbf{x}^{\prime}=V(\mathbf{x})$ in the next generation, then to the state $\mathbf{x}^{\prime\prime}=V^2(\mathbf{x})=V(V(\mathbf{x}))$, and so on.
For a given $\mathbf{x}^{(0)}\in S^{m-1}$, the trajectory $$\{\mathbf{x}^{(n)}\}, \ \ n=0,1,2,\ldots$$ of $\mathbf{x}^{(0)}$ under the action of the QSO \eqref{1.3} is defined by $$\mathbf{x}^{(n+1)}=V(\mathbf{x}^{(n)}), \ \ n=0,1,2,\ldots$$
A QSO $V$ defined by \eqref{1.3} is called \textit{Volterra operator} \cite{G} if one has \begin{equation}\label{1.5} P_{ij,k}=0 \ \ \mbox{if} \ \ k\not\in \{i,j\}, \ \ \forall i,j,k\in E. \end{equation}
Note that the biological meaning of condition \eqref{1.5} is obvious: the offspring repeats the genotype of one of its parents (see \cite{G,11}).
\begin{defn} Let $\mathbf{x}=(x_1,\ldots,x_n)$ and $\mathbf{y}=(y_1,\ldots,y_n)$. We say that $\mathbf{x}$ is \textit{absolutely continuous} with respect to $\mathbf{y}$ ($\mathbf{x}\prec\mathbf{y}$) if $y_k=0\Rightarrow x_k=0$ for every $k$. If both $\mathbf{x}\prec\mathbf{y}$ and $\mathbf{y}\prec\mathbf{x}$ hold, then $\mathbf{x}$ and $\mathbf{y}$ are called \textit{equivalent} ($\mathbf{x}\sim\mathbf{y}$). \end{defn}
\begin{defn}
Let $I=\{1,2,\ldots,n\}$ and $Supp(\mathbf{x})=\{i\in I\,|\,x_i\neq0\}$. Then $\mathbf{x}$ is \textit{singular} or \textit{orthogonal} to $\mathbf{y}$ ($\mathbf{x}\bot\mathbf{y}$) if $Supp(\mathbf{x})\cap Supp(\mathbf{y})=\emptyset$. \end{defn}
Note that $\mathbf{x}\bot\mathbf{y}$ implies $\mathbf{x}\cdot\mathbf{y}=0$ whenever $\mathbf{x},\mathbf{y}\in S^{n-1}$. Here $\mathbf{x}\cdot\mathbf{y}$ stands for the usual scalar product in $\mathbb{R}^n$.
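To illustrate these notions on $S^2$: for $\mathbf{x}=(\tfrac12,\tfrac12,0)$, $\mathbf{y}=(\tfrac13,\tfrac13,\tfrac13)$ and $\mathbf{z}=(0,0,1)$ one has
\begin{equation*}
\mathbf{x}\prec\mathbf{y}, \qquad \mathbf{y}\not\prec\mathbf{x}, \qquad \mathbf{x}\bot\mathbf{z},
\end{equation*}
since $Supp(\mathbf{x})=\{1,2\}$, $Supp(\mathbf{y})=\{1,2,3\}$ and $Supp(\mathbf{z})=\{3\}$.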
\section{On Volterra QSO}
In this section we are going to give a characterization of Volterra quadratic stochastic operators in terms of the relations introduced above. Note that the dynamics of Volterra QSO was investigated in \cite{G}. Certain other properties of such operators have been studied in \cite{MS}. Some generalizations of Volterra QSO were studied in \cite{MS1,RN,RZ}.
Recall that the vertices of the simplex $S^{m-1}$ are described by the elements $e_k=(\delta_{1k},\delta_{2k},\dots,\delta_{mk})$, where $\delta_{ik}$ is the Kronecker's delta.
\begin{thm} Let $V:S^{n-1}\rightarrow S^{n-1}$ be a QSO. Then the following conditions are equivalent:
\begin{itemize} \item[(i)] $V$ is a Volterra QSO;
\item[(ii)] one has $V(\mathbf{x})\prec \mathbf{x}$ for all $\mathbf{x}\in S^{n-1}$. \end{itemize} \end{thm}
\begin{proof} (i)$\Rightarrow$ (ii). It is known \cite{G} that any Volterra QSO can be represented as follows: \begin{equation}\label{1v} (V(x))_k=x_k\left(1+\sum\limits_{i=1}^{m}a_{ki}x_i\right), \ k=\overline{1,m}, \end{equation}
where $a_{ki}=-a_{ik}$, $|a_{ki}|\le 1$.
From this representation we immediately get $V(\mathbf{x})\prec \mathbf{x}$ for all $\mathbf{x}\in S^{n-1}$.
(ii)$\Rightarrow$ (i). Let $\mathbf{x}=e_k$, $(k\in\{1,\dots,n\})$. Then due to $V(\mathbf{x})\prec \mathbf{x}$ from \eqref{1.3} one finds \begin{equation}\label{3v}
P_{kk,k}=1 \qquad P_{kk,i}=0, \ i\neq k. \end{equation}
Now assume that $\mathbf{x}=\lambda e_i+(1-\lambda)e_j$, where $\lambda\in(0,1)$. Let $k\notin \{i,j\}$, then from \eqref{1.3} one finds that \begin{equation}\label{2v} V(\mathbf{x})_k=P_{ii,k}\lambda^2+2\lambda(1-\lambda)P_{ij,k}+P_{jj,k}(1-\lambda)^2 \end{equation} Taking into account \eqref{3v} and the relation $V(\mathbf{x})\prec \mathbf{x}$ with \eqref{2v} one gets $P_{ij,k}=0$. This completes the proof. \end{proof}
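For instance, in the case $m=2$ the representation \eqref{1v} with a single skew-symmetric parameter $a_{12}=a\in[-1,1]$ gives
\begin{equation*}
(V(\mathbf{x}))_1=x_1(1+ax_2), \qquad (V(\mathbf{x}))_2=x_2(1-ax_1),
\end{equation*}
which corresponds to the heredity coefficients $P_{11,1}=P_{22,2}=1$, $P_{12,1}=\tfrac{1+a}{2}$, $P_{12,2}=\tfrac{1-a}{2}$; clearly $x_k=0$ implies $(V(\mathbf{x}))_k=0$, i.e., $V(\mathbf{x})\prec\mathbf{x}$.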
The theorem just proved characterizes Volterra QSO in terms of absolute continuity of distributions. Therefore, this theorem allows us to define such operators in an abstract setting. Let us demonstrate it.
Let $(E,{\mathcal F})$ be a measurable space and let $S(E,{\mathcal F})$ be the set of all probability measures on $(E,{\mathcal F})$.
Recall that a mapping $V :S(E,{\mathcal F})\to S(E,{\mathcal F})$ is called a \textit{quadratic stochastic operator (QSO)} if, for an arbitrary measure $\lambda\in S(E,{\mathcal F})$ the measure $\lambda'= V(\lambda)$ is defined as follows \begin{eqnarray}\label{VQ} \lambda'(A)=\int_E\int_E P(x,y,A)d\lambda(x)d\lambda(y), \ \ A\in{\mathcal F}, \end{eqnarray} where $P(x,y,A)$ satisfies the following conditions: \begin{enumerate} \item[(i)] $P(x,y,\cdot)\in S(E,{\mathcal F})$ for any fixed $x,y\in E$;
\item[(ii)] For any fixed $A\in{\mathcal F}$ the function $P(x,y,A)$ is measurable of two variables $x$ and $y$ on $(E\times E,{\mathcal F}\otimes{\mathcal F})$;
\item[(iii)] the function $P(x,y,A)$ is symmetric, i.e. $P(x,y,A)=P(y,x,A)$ for any $x,y\in E$ and $A\in{\mathcal F}$. \end{enumerate}
Note that when $E$ is finite, i.e. $E=\{1,\dots,m\}$, then a QSO on $S(E,{\mathcal F})= S^{m-1}$ is defined as in \eqref{1.3} with $P_{ij,k}= P(i, j, k)$.
Certain construction of QSO in general setting was studied in \cite{GR}.
We recall that a measure $\mu\in S(E,{\mathcal F})$ is \textit{absolutely continuous} w.r.t. a measure $\nu\in S(E,{\mathcal F})$ if $\nu(A)=0$ implies $\mu(A)=0$; this is denoted by $\mu\prec\nu$. Put $$ \textrm{null}(\mu)=\bigcup\limits_{\mu(A)=0} A. $$ Then the support of the measure $\mu$ is defined by $supp(\mu)=E\setminus\textrm{null}(\mu)$. Two measures $\mu,\nu\in S(E,{\mathcal F})$ are called \textit{singular} if $supp(\mu)\cap supp(\nu)=\emptyset$; this is denoted by $\mu\perp\nu$.
\begin{defn} A QSO given by \eqref{VQ} is called \textit{Volterra} if $V\lambda\prec\lambda$ for all $\lambda\in S(E,{\mathcal F})$. \end{defn}
\begin{thm} Let $V$ be given by \eqref{VQ}. Then $V$ is Volterra QSO if and only if $P(x,y,A)=0$ for all $x,y\notin A$. \end{thm}
\begin{proof} First we assume that $V$ is a Volterra QSO. Take any $x,y\in E$ and consider the measure $\nu=\frac{1}{2}(\delta_x+\delta_y)$, where $\delta_x$ is a delta-measure, i.e. $\delta_x(A)=\chi_A(x)$. Then from \eqref{VQ} one finds that $$ V(\nu)(A)=\frac{1}{4}\big(P(x,x,A)+P(y,y,A)+2P(x,y,A)\big). $$ From $V\nu\prec\nu$ and $\nu(A)=0$ (if $x,y\notin A$) we infer that $V(\nu)(A)=0$, which yields that $P(x,x,A)=P(y,y,A)=P(x,y,A)=0$ if $x,y\notin A$.
Conversely, suppose that $P(x,y,A)=0$ for all $x,y\notin A$. Assume that for $\mu\in S(E,{\mathcal F})$ one has $\mu(B)=0$ for some $B\in{\mathcal F}$. Let us show that $V(\mu)(B)=0$. Indeed, from \eqref{VQ} and the assumptions one gets \begin{eqnarray*} V(\mu)(B)&=&\int_E\int_E P(x,y,B)d\mu(x)d\mu(y)\\[2mm] &=&\int_{E\setminus B}\int_{E\setminus B} P(x,y,B)d\mu(x)d\mu(y)+ \int_{E\setminus B}\int_{B} P(x,y,B)d\mu(x)d\mu(y)\\[2mm] &&+\int_{B}\int_{E\setminus B} P(x,y,B)d\mu(x)d\mu(y)+\int_{B}\int_{B} P(x,y,B)d\mu(x)d\mu(y)\\[2mm] &=&\int_{E\setminus B}\int_{E\setminus B} P(x,y,B)d\mu(x)d\mu(y)=0. \end{eqnarray*} This completes the proof. \end{proof}
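As a simple illustration, one natural family of such kernels (a continuous analogue of \eqref{1v}) is obtained by taking any measurable function $\theta:E\times E\to[0,1]$ with $\theta(x,y)+\theta(y,x)=1$ and setting
\begin{equation*}
P(x,y,\cdot)=\theta(x,y)\,\delta_x+\big(1-\theta(x,y)\big)\,\delta_y .
\end{equation*}
Then $P(x,y,\cdot)\in S(E,{\mathcal F})$, the kernel is symmetric, and $P(x,y,A)=0$ whenever $x,y\notin A$; hence, by the theorem above, the corresponding operator \eqref{VQ} is a Volterra QSO.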
\section{Orthogonal Preserving(OP) QSO in 2D Simplex}
We recall that two vectors $\mathbf{x}$ and $\mathbf{y}$ belonging to $S^{n-1}$ are called \textit{singular} or \textit{orthogonal} if $\mathbf{x}\cdot\mathbf{y}=0$.
A mapping $V:S^{n-1}\to S^{n-1}$ is called \textit{Orthogonal Preserving (O.P.)} if one has $V(\mathbf{x})\perp V(\mathbf{y})$ whenever $\mathbf{x}\perp\mathbf{y}$.
In this section we are going to describe orthogonal preserving QSOs defined on the 2D simplex.
Let us assume that $V:S^2\to S^2$ is an orthogonal preserving QSO.
This means that \begin{equation*} V(1,0,0)\perp V(0,1,0) \perp V(0,0,1) \end{equation*} Now from the definition of QSO, we immediately get \begin{equation*} (P_{11,1},P_{11,2},P_{11,3})\perp (P_{22,1},P_{22,2},P_{22,3})\perp (P_{33,1},P_{33,2},P_{33,3}) \end{equation*}
Any three pairwise orthogonal vectors in the simplex $S^2$ must have pairwise disjoint supports, hence they are the three vertices $$(1,0,0),\ \ (0,1,0), \ \ (0,0,1).$$ We conclude that the vectors $$(P_{11,1},P_{11,2},P_{11,3}), \ \ (P_{22,1},P_{22,2},P_{22,3}), \ \ (P_{33,1},P_{33,2},P_{33,3})$$ must form a permutation of these orthogonal vectors. Therefore we have 6 possibilities, and we consider each of them one by one.
Let us first assume that \begin{equation*} \begin{matrix} P_{11,1}=0 & P_{11,2}=0 & P_{11,3}=1 \\ P_{22,1}=0 & P_{22,2}=1 & P_{22,3}=0 \\ P_{33,1}=1 & P_{33,2}=0 & P_{33,3}=0 \end{matrix} \end{equation*} Now our aim is to find conditions for the other coefficients of the given QSO. Let us consider the following vectors \begin{equation*} \mathbf{x}=\left(\frac{1}{2},\frac{1}{2},0 \right) \qquad \mathbf{y}=(0,0,1) \end{equation*} which are clearly orthogonal. One can see that \begin{align*} V(\mathbf{x})&=1/4(2P_{12,1},2P_{12,2}+1,1+2P_{12,3}) \\ V(\mathbf{y})&=(1,0,0) \end{align*} Therefore, the orthogonal preservability of $V$ yields $P_{12,1}=0$. From $\sum^3_{i=1}P_{12,i}=1$ one gets \begin{equation*} P_{12,2}+P_{12,3}=1 \end{equation*} Now consider \begin{equation*} \mathbf{x}=\left(0,\frac{1}{2},\frac{1}{2} \right) \qquad \mathbf{y}=(1,0,0) \end{equation*} Then we have \begin{align*} V(\mathbf{x})&=1/4(2P_{23,1}+1,1+2P_{23,2},2P_{23,3}), \\ V(\mathbf{y})&=(0,0,1) \end{align*} Again the orthogonal preservability of $V$ implies $P_{23,3}=0$ and hence we get \begin{equation*} P_{23,1}+P_{23,2}=1 \end{equation*}
Now consider \begin{equation*} \mathbf{x}=\left(\frac{1}{2},0,\frac{1}{2} \right) \qquad \mathbf{y}=(0,1,0) \end{equation*} Then one has \begin{align*} V(\mathbf{x})&=1/4(1+2P_{13,1},2P_{13,2},1+2P_{13,3}), \\ V(\mathbf{y})&=(0,1,0) \end{align*} Hence, we conclude that $P_{13,2}=0$ and get \begin{equation*} P_{13,1}+P_{13,3}=1 \end{equation*} Taking into account the obtained equations, we denote \begin{equation*} P_{12,2}=\alpha \qquad P_{23,1}=\beta \qquad P_{13,1}=\gamma \end{equation*} Correspondingly one gets \begin{equation*} P_{12,3}=1-\alpha \qquad P_{23,2}=1-\beta \qquad P_{13,3}=1-\gamma \end{equation*} Therefore $V$ has the following form \begin{equation*} V^{(1)}_{\alpha,\beta,\gamma}: \left\{ \begin{array}{l} x'=z^2+2\gamma xz+2\beta yz\\ y'=y^2+2\alpha xy+2(1-\beta)yz\\ z'=x^2+2(1-\alpha) xy+2(1-\gamma)xz\\ \end{array} \right. \end{equation*}
Similarly, considering other possibilities we obtain the following operators:
\begin{equation*} V^{(2)}_{\alpha,\beta,\gamma} : \left\{ \begin{array}{l} x'=x^2+2\alpha xy+2\gamma xz\\ y'=y^2+2(1-\alpha) xy+2\beta yz\\ z'=z^2+2(1-\gamma) xz+2(1-\beta)yz\\ \end{array} \right. \end{equation*}
\begin{equation*} V^{(3)}_{\alpha,\beta,\gamma}: \left\{ \begin{array}{l} x'=x^2+2\alpha xy+2\gamma xz\\ y'=z^2+2(1-\gamma) xz+2\beta yz\\ z'=y^2+2(1-\alpha) xy+2(1-\beta)yz\\ \end{array} \right. \end{equation*}
\begin{equation*} V^{(4)}_{\alpha,\beta,\gamma}: \left\{ \begin{array}{l} x'=y^2+2\alpha xy+2\beta yz\\ y'=z^2+2\gamma xz+2(1-\beta)yz\\ z'=x^2+2(1-\alpha) xy+2(1-\gamma)xz\\ \end{array} \right. \end{equation*}
\begin{equation*} V^{(5)}_{\alpha,\beta,\gamma}: \left\{ \begin{array}{l} x'=y^2+2\alpha xy+2\beta yz\\ y'=x^2+2(1-\alpha) xy+2\gamma xz\\ z'=z^2+2(1-\gamma) xz+2(1-\beta)yz\\ \end{array} \right. \end{equation*}
\begin{equation*} V^{(6)}_{\alpha,\beta,\gamma}:\left\{ \begin{array}{l} x'=z^2+2\gamma xz+2\beta yz\\ y'=x^2+2\alpha xy+2(1-\gamma)xz\\ z'=y^2+2(1-\alpha) xy+2(1-\beta)yz\\ \end{array} \right. \end{equation*}
So, if $V$ is an OP QSO, then it must be one of the operators given above. Now we are going to show that the obtained operators are indeed orthogonal preserving.
\begin{thm}\label{OP} Let $V$ be an orthogonal preserving QSO. Then $V$ has one of the following forms: \begin{equation}\label{list} V^{(1)}_{\alpha,\beta,\gamma}, \ V^{(2)}_{\alpha,\beta,\gamma}, \ V^{(3)}_{\alpha,\beta,\gamma}, \ V^{(4)}_{\alpha,\beta,\gamma}, \ V^{(5)}_{\alpha,\beta,\gamma}, V^{(6)}_{\alpha,\beta,\gamma}. \end{equation} \end{thm}
\begin{proof} According to the calculations carried out above, we have the six listed operators. Now we show that these operators are indeed OP. Without loss of generality, we may consider the operator $V^{(1)}_{\alpha,\beta,\gamma}$.
Assume that $\mathbf{x}\perp \mathbf{y}$. Then there are following possibilities: \begin{equation*} \mathbf{x}\perp \mathbf{y}\Longleftrightarrow \left\{ \begin{array}{l} \quad \mathbf{x}=(x,y,0) \quad \mathbf{y}=(0,0,1), \\ \quad \mathbf{x}=(x,0,z) \quad \mathbf{y}=(0,1,0), \\ \quad \mathbf{x}=(0,y,z) \quad \mathbf{y}=(1,0,0). \\ \end{array} \right. \end{equation*}
Let $\mathbf{x}=(x,y,0)$ and $\mathbf{y}=(0,0,1)$. Then one gets \begin{equation*} V^{(1)}_{\alpha,\beta,\gamma}(\mathbf{x})=(0,y^2+2\alpha xy,x^2+2(1-\alpha)xy),\qquad V^{(1)}_{\alpha,\beta,\gamma}(\mathbf{y})=(1,0,0). \end{equation*} It is clear that these are orthogonal. By the same argument, for the other two cases, we can establish the orthogonality of $V^{(1)}_{\alpha,\beta,\gamma}(\mathbf{x})$ and $V^{(1)}_{\alpha,\beta,\gamma}(\mathbf{y})$. This completes the proof. \end{proof}
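For concreteness, in the second case $\mathbf{x}=(x,0,z)$ and $\mathbf{y}=(0,1,0)$ one computes \begin{equation*} V^{(1)}_{\alpha,\beta,\gamma}(\mathbf{x})=\big(z^2+2\gamma xz,\,0,\,x^2+2(1-\gamma)xz\big),\qquad V^{(1)}_{\alpha,\beta,\gamma}(\mathbf{y})=(0,1,0), \end{equation*} so the images are again orthogonal.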
\begin{rem}\label{permut} We note that the operators given in \eqref{list} are permutations of Volterra QSO. In \cite{GE} it was proved that permutations of Volterra operators are automorphisms of the simplex. \end{rem}
\begin{rem} It is well-known that linear stochastic operators are orthogonal preserving if and only if they are permutations of the simplex. We point out that if $\alpha=\beta=\gamma=1/2$, then the operators \eqref{list} reduce to such kind of permutations. \end{rem}
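Indeed, for $V^{(1)}_{\alpha,\beta,\gamma}$ with $\alpha=\beta=\gamma=\frac12$, using $x+y+z=1$ on the simplex one gets \begin{equation*} x'=z^2+xz+yz=z(x+y+z)=z,\qquad y'=y^2+xy+yz=y,\qquad z'=x^2+xy+xz=x, \end{equation*} i.e. $V^{(1)}_{1/2,1/2,1/2}(x,y,z)=(z,y,x)$, which is indeed a permutation of the simplex.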
To investigate the dynamics of the obtained operators, it is convenient to do so by means of conjugacy.
Let us recall that two QSO $V^{(1)}$ and $V^{(2)}$ are said to be conjugate if there exists a permutation $T_{\pi}:(x,y,z)\rightarrow(\pi(x),\pi(y),\pi(z))$ such that $T_{\pi}^{-1}V^{(1)}T_{\pi}=V^{(2)}$; we denote this by $V^{(1)}\sim^\pi V^{(2)}$.
In our case, we need to consider only permutations of $(x,y,z)$ given by: \begin{equation*} \pi= \begin{bmatrix} x & y & z\\ y & z & x \end{bmatrix} \qquad \pi_1= \begin{bmatrix} x & y & z \\ x & z & y \end{bmatrix} \end{equation*}
Note that the other permutations can be derived from these two.
\begin{thm}\label{OP-K} Orthogonal Preserving QSO can be divided into three non-conjugate classes \begin{align*} K_1&=\{V^{(1)}_{\alpha,\beta,\gamma},V^{(5)}_{\alpha,\beta,\gamma},V^{(3)}_{\alpha,\beta,\gamma}\} \\ K_2&=\{V^{(4)}_{\alpha,\beta,\gamma},V^{(6)}_{\alpha,\beta,\gamma}\} \\ K_3&=\{V^{(2)}_{\alpha,\beta,\gamma}\} \end{align*} \end{thm}
\begin{proof} Let us consider $V^{(1)}_{\alpha,\beta,\gamma}$. Then one has \begin{align*} &T_{\pi}^{-1}V^{(1)}_{\alpha,\beta,\gamma}T_{\pi}(x,y,z)=T_{\pi}^{-1}V^{(1)}_{\alpha,\beta,\gamma}(y,z,x) \\
&=(y^2+2(1-\alpha)yz+2(1-\gamma)yx,x^2+2\gamma yx+2\beta zx,z^2 +2\alpha yz+2(1-\beta)zx) \\
&=V^{(5)}_{1-\gamma,1-\alpha,\beta} \end{align*} This means that $V^{(1)}_{\alpha,\beta,\gamma}\sim^\pi V^{(5)}_{1-\gamma,1-\alpha,\beta}$.
Similarly, we have $T_{\pi}^{-1}V^{(5)}_{\alpha,\beta,\gamma}T_{\pi}(x,y,z)=V^{(3)}_{1-\gamma,\alpha,1-\beta}$. Hence, $V^{(5)}_{\alpha,\beta,\gamma}\sim^\pi V^{(3)}_{1-\gamma,\alpha,1-\beta}$. By the same argument one finds $T_{\pi}^{-1}V^{(3)}_{\alpha,\beta,\gamma}T_{\pi}(x,y,z)=V^{(1)}_{\gamma,1-\alpha,1-\beta}$, which means $V^{(3)}_{\alpha,\beta,\gamma}\sim^\pi V^{(1)}_{\gamma,1-\alpha,1-\beta}$.
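For instance, the last relation can be checked explicitly: \begin{align*} T_{\pi}^{-1}V^{(3)}_{\alpha,\beta,\gamma}T_{\pi}(x,y,z)&=T_{\pi}^{-1}V^{(3)}_{\alpha,\beta,\gamma}(y,z,x) \\ &=\big(z^2+2(1-\beta)xz+2(1-\alpha)yz,\ y^2+2\gamma xy+2\alpha yz,\ x^2+2(1-\gamma)xy+2\beta xz\big), \end{align*} which is precisely $V^{(1)}_{\gamma,1-\alpha,1-\beta}$.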
This implies that the operators $V^{(1)}_{\alpha,\beta,\gamma},V^{(5)}_{\alpha,\beta,\gamma},V^{(3)}_{\alpha,\beta,\gamma}$ are conjugate and we put them into one class denoted by $K_1$.
One can obtain that $V^{(2)}_{\alpha,\beta,\gamma}\sim^\pi V^{(2)}_{1-\gamma,\alpha,1-\beta}$, $V^{(4)}_{\alpha,\beta,\gamma}\sim^\pi V^{(4)}_{1-\gamma,1-\alpha,\beta}$ and $V^{(6)}_{\alpha,\beta,\gamma}\sim^\pi V^{(6)}_{\gamma,1-\alpha,1-\beta}$. Therefore we need to consider the other permutation $\pi_1$.
Consequently, one finds $V^{(2)}_{\alpha,\beta,\gamma}\sim^{\pi_1} V^{(2)}_{\gamma,1-\beta,\alpha}$, $V^{(4)}_{\alpha,\beta,\gamma}\sim^{\pi_1} V^{(6)}_{1-\gamma,\beta,\alpha}$.
Thus by $K_2$ we denote the class containing $V^{(4)}_{\alpha,\beta,\gamma}$ and $V^{(6)}_{\alpha,\beta,\gamma}$, and by $K_3$ the class containing only $V^{(2)}_{\alpha,\beta,\gamma}$. This completes the proof. \end{proof}
\begin{rem} One can see that the operator $V^{(2)}_{\alpha,\beta,\gamma}$ is a Volterra QSO, and its dynamics was investigated in \cite{G}. From the results of \cite{V,Z} one can conclude that even the dynamics of Volterra QSO is very complicated. We note that if $\alpha,\beta,\gamma\in\{0,1\}$, then the dynamics of operators taken from the classes $K_1$, $K_2$ was investigated in \cite{MJ,MSJ}. In \cite{GE} certain general properties of the dynamics of permuted Volterra QSO were studied. \end{rem}
\begin{rem} We can also define orthogonality preservation in a general setting. Namely, we call a QSO given by \eqref{VQ} \textit{orthogonal preserving} if $V(\mu)\perp V(\nu)$ whenever $\mu\perp\nu$, where $\mu,\nu\in S(E,{\mathcal F})$. Taking into account Remark \ref{permut}, we can formulate the following
\begin{conj} Let $V$ be a QSO given by \eqref{VQ}. Then $V$ is orthogonal preserving if and only if there is a measurable automorphism $\alpha:E\to E$ (i.e. $\alpha^{-1}({{\mathcal F}})\subset{\mathcal F}$) and a Volterra QSO $V_0$ such that $V\mu=V_0(\mu\circ\alpha^{-1})$. \end{conj}
\end{rem}
\section{Associativity of Orthogonality Preserving QSO}
In this section we state basic definitions and properties of genetic algebras.
Let $V$ be a QSO and let $\mathbf{x},\mathbf{y}\in \mathbb{R}^n$ be arbitrary vectors. We introduce a multiplication rule on $\mathbb{R}^n $ by \begin{equation*} \mathbf{x}\circ \mathbf{y}=\frac{1}{4}\big(V(\mathbf{x}+\mathbf{y})-V(\mathbf{x}-\mathbf{y})\big) \end{equation*}
This multiplication can be written as follows: \begin{equation}\label{4.1} (\mathbf{x}\circ \mathbf{y})_k=\sum^n_{i,j=1}P_{ij,k}x_iy_j \end{equation} where $\mathbf{x}=(x_1,\ldots ,x_n),\mathbf{y}=(y_1,\ldots ,y_n)\in \mathbb{R}^n $.
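Indeed, writing $V(\mathbf{x})_k=\sum^n_{i,j=1}P_{ij,k}x_ix_j$ and using the standard symmetry assumption $P_{ij,k}=P_{ji,k}$ on the heredity coefficients, one has \begin{equation*} \big(V(\mathbf{x}+\mathbf{y})-V(\mathbf{x}-\mathbf{y})\big)_k=\sum^n_{i,j=1}P_{ij,k}\big[(x_i+y_i)(x_j+y_j)-(x_i-y_i)(x_j-y_j)\big]=4\sum^n_{i,j=1}P_{ij,k}x_iy_j, \end{equation*} which gives \eqref{4.1}. In particular, for the standard basis vectors one gets $(\mathbf{e}_i\circ\mathbf{e}_j)_k=P_{ij,k}$.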
The pair $(\mathbb{R}^n,\circ)$ is called a \textit{genetic algebra}. We note that this algebra is commutative, that is, $\mathbf{x}\circ\mathbf{y}=\mathbf{y}\circ\mathbf{x}$. Certain algebraic properties of such algebras were investigated in \cite{W-B,Ly2}. In general, a genetic algebra need not be associative. Therefore, we introduce the following
\begin{defn} A QSO $V$ is called \textit{associative} if the corresponding multiplication given by \eqref{4.1} is associative, i.e., \begin{equation}\label{4.2} (\mathbf{x}\circ\mathbf{y})\circ \mathbf{z}=\mathbf{x}\circ (\mathbf{y}\circ\mathbf{z}) \end{equation} holds for all $\mathbf{x},\mathbf{y},\mathbf{z}\in\mathbb{R}^n$. \end{defn}
In this section we are going to find the associative orthogonal preserving QSO. According to the previous section, we have only three classes of OP QSO. Now we are interested in whether these operators are associative. Note that the associativity of some classes of QSO has been investigated in \cite{G2008}.
\begin{thm} The QSO $V^{(2)}_{\alpha,\beta,\gamma}$ is associative if and only if one of the following conditions is satisfied: \begin{align*} (1)\quad \alpha=0,\quad\beta=0,\quad \gamma=0 \\ (2)\quad \alpha=1,\quad\beta=0,\quad \gamma=0 \\ (3)\quad \alpha=0,\quad\beta=1,\quad \gamma=1 \\ (4)\quad \alpha=1,\quad\beta=1,\quad \gamma=1 \\ (5)\quad \alpha=1,\quad\beta=1,\quad \gamma=0 \\ (6)\quad \alpha=0,\quad\beta=1,\quad \gamma=0 \\ \end{align*} \end{thm}
\begin{proof}
To show the associativity we will check the equality \eqref{4.2}, which can be rewritten as follows: \begin{equation}\label{4.3} \sum^3_{i,j=1}P_{ij,u}x_{i}\left(\sum^3_{m,k=1}P_{mk,j}y_{m}z_{k}\right)=\sum^3_{i,j=1}P_{ij,u}\left(\sum^3_{m,k=1}P_{mk,i}x_{m}y_{k}\right)z_j \qquad u=1,2,3 \end{equation} where we have used the following equalities \begin{align*} (\mathbf{x}\circ (\mathbf{y}\circ \mathbf{z}))_u&=\sum^3_{i,j=1}P_{ij,u}x_{i}\left(\sum^3_{m,k=1}P_{mk,j}y_{m}z_{k}\right) \\ ((\mathbf{x}\circ \mathbf{y})\circ \mathbf{z})_u&=\sum^3_{i,j=1}P_{ij,u}\left(\sum^3_{m,k=1}P_{mk,i}x_{m}y_{k}\right)z_j \qquad u=1,2,3 \end{align*} For $V^{(2)}_{\alpha,\beta,\gamma}$ the equality \eqref{4.3} can be written as follows:
\begin{align*} x_1(y_1z_1+\alpha y_1z_2+\gamma y_1z_3+\alpha y_2z_1+\gamma y_3z_1) \\
+\alpha x_1((1-\alpha)y_1z_2+(1-\alpha)y_2z_1+y_2z_2+\beta y_2z_3+\beta y_3z_2) \\
+\gamma x_1((1-\gamma)y_1z_3+(1-\beta)y_2z_3+(1-\gamma)y_3z_1+(1-\beta)y_3z_2+y_3z_3) \\
+\alpha x_2(y_1z_1+\alpha y_1z_2+\gamma y_1z_3+\alpha y_2z_1+\gamma y_3z_1) \\
+\gamma x_3(y_1z_1+\alpha y_1z_2+\gamma y_1z_3+\alpha y_2z_1+\gamma y_3z_1) \\
=z_1(x_1y_1+\alpha x_1y_2+\gamma x_1y_3+\alpha x_2y_1+\gamma x_3y_1) \\
+\alpha z_2(x_1y_1+\alpha x_1y_2+\gamma x_1y_3+\alpha x_2y_1+\gamma x_3y_1) \\
+\gamma z_3(x_1y_1+\alpha x_1y_2+\gamma x_1y_3+\alpha x_2y_1+\gamma x_3y_1) \\
+\alpha z_1((1-\alpha)x_1y_2+(1-\alpha)x_2y_1+x_2y_2+\beta x_2y_3+\beta x_3y_2) \\
+\gamma z_1((1-\gamma)x_1y_3+(1-\beta)x_2y_3+(1-\gamma)x_3y_1+(1-\beta)x_3y_2+x_3y_3); \end{align*} \begin{align*} (1-\alpha)x_1((1-\alpha)y_1z_2+(1-\alpha)y_2z_1+y_2z_2+\beta y_2z_3+\beta y_3z_2) \\
+(1-\alpha)x_2(y_1z_1+\alpha y_1z_2+\gamma y_1z_3+\alpha y_2z_1+\gamma y_3z_1) \\
+x_2((1-\alpha)y_1z_2+(1-\alpha)y_2z_1+y_2z_2+\beta y_2z_3+\beta y_3z_2) \\
+\beta x_2((1-\gamma)y_1z_3+(1-\beta)y_2z_3+(1-\gamma)y_3z_1+(1-\beta)y_3z_2+y_3z_3) \\
+\beta x_3((1-\alpha)y_1z_2+(1-\alpha)y_2z_1+y_2z_2+\beta y_2z_3+\beta y_3z_2) \\
=(1-\alpha)z_2(x_1y_1+\alpha x_1y_2+\gamma x_1y_3+\alpha x_2y_1+\gamma x_3y_1) \\
+(1-\alpha)z_1((1-\alpha)x_1y_2+(1-\alpha)x_2y_1+x_2y_2+\beta x_2y_3+\beta x_3y_2) \\
+z_2((1-\alpha)x_1y_2+(1-\alpha)x_2y_1+x_2y_2+\beta x_2y_3+\beta x_3y_2) \\
+\beta z_3((1-\alpha)x_1y_2+(1-\alpha)x_2y_1+x_2y_2+\beta x_2y_3+\beta x_3y_2) \\
+\beta z_2((1-\gamma)x_1y_3+(1-\beta)x_2y_3+(1-\gamma)x_3y_1+(1-\beta)x_3y_2+x_3y_3); \end{align*} \begin{align*} (1-\gamma)x_1((1-\gamma)y_1z_3+(1-\beta)y_2z_3+(1-\gamma)y_3z_1+(1-\beta)y_3z_2+y_3z_3) \\
+(1-\beta)x_2((1-\gamma)y_1z_3+(1-\beta)y_2z_3+(1-\gamma)y_3z_1+(1-\beta)y_3z_2+y_3z_3) \\
+(1-\gamma)x_3(y_1z_1+\alpha y_1z_2+\gamma y_1z_3+\alpha y_2z_1+\gamma y_3z_1) \\
+(1-\beta)x_3((1-\alpha)y_1z_2+(1-\alpha)y_2z_1+y_2z_2+\beta y_2z_3+\beta y_3z_2) \\
+x_3((1-\gamma)y_1z_3+(1-\beta)y_2z_3+(1-\gamma)y_3z_1+(1-\beta)y_3z_2+y_3z_3) \\
=(1-\gamma)z_3(x_1y_1+\alpha x_1y_2+\gamma x_1y_3+\alpha x_2y_1+\gamma x_3y_1) \\
+(1-\beta)z_3((1-\alpha)x_1y_2+(1-\alpha)x_2y_1+x_2y_2+\beta x_2y_3+\beta x_3y_2) \\
+(1-\gamma)z_1((1-\gamma)x_1y_3+(1-\beta)x_2y_3+(1-\gamma)x_3y_1+(1-\beta)x_3y_2+x_3y_3) \\
+(1-\beta)z_2((1-\gamma)x_1y_3+(1-\beta)x_2y_3+(1-\gamma)x_3y_1+(1-\beta)x_3y_2+x_3y_3) \\
+z_3((1-\gamma)x_1y_3+(1-\beta)x_2y_3+(1-\gamma)x_3y_1+(1-\beta)x_3y_2+x_3y_3). \end{align*} Now equating the corresponding terms and simplifying the obtained expressions one gets: \begin{equation*} \begin{matrix} \beta(1-\beta)=0 & \alpha(1-\gamma)=(\alpha-\gamma)(1-\beta) & \alpha(1-\alpha)=0 \\ \alpha(\gamma-\beta)=0 & \gamma(1-\gamma)=0 & \gamma(1-\beta)=0\\ (\beta-\gamma)(1-\alpha)=\beta(1-\gamma) \end{matrix} \end{equation*} Solving these equations we get the desired equalities, which completes the proof. \end{proof}
By the same argument one can prove the following
\begin{thm} \label{Theorem:box} The operators $V^{(1)}_{\alpha,\beta,\gamma}$ and $V^{(4)}_{\alpha,\beta,\gamma}$ are not associative for any values of $\alpha,\beta,\gamma$. \end{thm}
\end{document} |
\begin{document}
\title{Representing a cubic graph as the intersection graph of axis-parallel boxes in three dimensions} \author{Abhijin Adiga\inst{1}, L. Sunil Chandran\inst{1}} \institute{Department of Computer Science and Automation, Indian Institute of Science, Bangalore--560012, India. \\email: \{abhijin,sunil\}@csa.iisc.ernet.in} \date{} \maketitle
\begin{abstract} We show that every graph of maximum degree $3$ can be represented as the intersection graph of axis parallel boxes in three dimensions, that is, every vertex can be mapped to an axis parallel box such that two boxes intersect if and only if their corresponding vertices are adjacent. In fact, we construct a representation in which any two intersecting boxes just touch at their boundaries. \confExt{Further, this construction can be realized in linear time.}
\keywordname{ cubic graphs, intersection graphs, axis parallel boxes, boxicity} \end{abstract} \section{Introduction} We will be considering only simple, undirected and finite graphs. Let ${\cal F}=\{S_1, S_2, \ldots, S_n\}$ be a family of sets. An intersection graph associated with ${\cal F}$ has ${\cal F}$ as the vertex set and two vertices $S_i$ and $S_j$ are adjacent if and only if $i\neq j$ and $S_i\cap S_j\neq \emptyset$. It is interesting to study intersection graphs of sets with some restriction, for example, sets which correspond to geometric objects such as intervals, spheres, boxes, axis-parallel lines, etc. Many important graph classes arise out of such restrictions: interval graphs, circular arc graphs, unit-disk graphs and grid-intersection graphs, to name a few. In this paper, we are concerned with intersection graphs of $3$-dimensional boxes. A {\it $3$-dimensional axis parallel box} ($3$-box in short) is a Cartesian product of $3$ closed intervals on the real line. A graph is said to have a {\it $3$-box representation} if it can be represented as the intersection graph of $3$-boxes.
In the literature there are several results on representing a planar graph as the intersection graph of various geometric objects. Among these, the most noted result is the circle packing theorem (also known as the Koebe-Andreev-Thurston theorem) from which it follows that planar graphs are exactly the intersection graphs of closed disks in the plane such that the intersections happen only at the boundaries. In \cite{intervalRepPlanarGraphsThomassen}, Thomassen gave a similar representation for planar graphs with $3$-boxes. He showed that every planar graph has a {\it strict $3$-box representation}, that is, intersections occur only in the boundaries of the boxes and two boxes which intersect have precisely a $2$-box (a rectangle) in common. Very recently, Felsner and Francis \cite{contactRepPlanarGraphsFelsnerFrancis} strengthened this result by showing that there exists a strict $3$-box representation for a planar graph such that each box is an isothetic cube. In \cite{phdThesisScheinerman,intervalRepPlanarGraphsThomassen}, it was shown that every planar graph has a strict representation using at most two rectangles per vertex. Scheinerman and West \cite{intervalNumberPlanarScheinermanWest} showed that every planar graph is an intersection graph of intervals such that each vertex is represented by at most three intervals on the real line.
We consider the question of whether a graph of maximum degree $3$ has a $3$-box representation. We note that there exist graphs with maximum degree greater than $3$ which do not have a $3$-box representation. For example, it is easy to show that $K_8$ minus a perfect matching does not have a $3$-box representation \cite{recentProgressesInCombRoberts}. Considering the effort that has gone into discovering geometric representation theorems for planar graphs, it is surprising that no such results are known up to now in the case of cubic graphs. It may be because of the fact that intuitively cubic graphs are farther away from ``geometry'' compared to planar graphs. In this paper we present the first such theorem (as far as we know) for cubic graphs: \begin{theorem}\label{thm:mainTheorem} Every graph of maximum degree $3$ has a $3$-box representation with the restriction that two boxes can intersect only at their boundaries. \end{theorem}
\subsection{$k$-box representations and boxicity} The concept of $3$-box representation can be extended to higher dimensions. A $k$-box is a Cartesian product of closed intervals $[a_1,b_1]\times [a_2,b_2]\times\cdots\times [a_k,b_k]$. A graph $G$ has a {\it $k$-box representation} if it is the intersection graph of a family of $k$-boxes in the $k$-dimensional Euclidean space. The \emph{boxicity} of $G$ denoted by $\mbox{\textnormal{box}}(G)$, is the minimum integer $k$ such that $G$ has a $k$-box representation. Clearly, Theorem \ref{thm:mainTheorem} can be rephrased as: Every graph with maximum degree $3$ has boxicity at most $3$. The best known upper bound for the boxicity of cubic graphs is $10$; it follows from the bound $\mbox{\textnormal{box}}(G)\le 2\floor{\frac{\Delta^2}{2}}+2$ by Esperet \cite{boxicityGraphsBoundedDegreeEsperet}, where $\Delta$ is the maximum degree of the graph. In \cite{boxicityMaxDegreeSunilNaveenFrancis}, it was conjectured that boxicity of a graph is $O(\Delta)$. However, this was disproved in \cite{boxicityPosetDimensionAbhijinSunilDiptendu} by showing the existence of graphs with boxicity $\Omega(\Delta\log\Delta)$. Theorem \ref{thm:mainTheorem} implies that the conjecture is true for $\Delta=3$.
Our result also implies that any problem which is hard for cubic graphs is also hard for graphs with a $3$-box representation. We list a few of such problems: crossing number, minimum vertex cover, Hamiltonian cycle, maximum independent set, minimum dominating set and maximum cut.
We give a brief literature survey on boxicity. It was introduced by Roberts in 1969 \cite{recentProgressesInCombRoberts}. Cozzens \cite{phdThesisCozzens} showed that computing the boxicity of a graph is NP-hard. This was later strengthened by Yannakakis \cite{complexityPartialOrderDimnYannakakis} and finally by Kratochv\`{\i}l \cite{specialPlanarSatisfiabilityProbNPKratochvil} who showed that determining whether boxicity of a graph is at most two itself is NP-complete. Adiga, Bhowmick and Chandran \cite{boxicityPosetDimensionAbhijinSunilDiptendu} showed that it is hard to approximate the boxicity of even a bipartite graph within $\sqrt{n}$ factor, where $n$ is the order of the graph. In \cite{computingBoxicityCozzensRoberts}, Cozzens and Roberts studied the boxicity of split graphs. Chandran and Sivadasan \cite{boxicityTreewidthSunilNaveen} showed that $\mbox{\textnormal{box}}(G)\le\hspace{0.5mm}\mbox{\textnormal{tree-width}\hspace{.5mm}}(G)+2$. Chandran, Francis and Sivadasan~\cite{boxicityMaxDegreeSunilNaveenFrancis} proved that $\mbox{\textnormal{box}}(G)\le2\chi(G^2)$, where $\chi$ is the chromatic number and $G^2$ is the square of the graph.
Boxicity is a direct generalization of the concept of \emph{interval graphs}. A graph is an interval graph if and only if it can be expressed as the intersection of a family of intervals on the real line. Since a $1$-box is an interval, it follows that interval graphs are precisely the class of graphs with boxicity at most $1$ \footnote{The only graph with boxicity $0$ is the complete graph.}. Now we present an alternate characterization of $k$-box representation in terms of interval graphs. This is used more frequently than its geometric definition. \begin{lemma}\label{lem:intbox} A graph $G$ has a $k$-box representation if and only if there exist $k$ interval graphs $I_1,I_2,\ldots,I_k$ such that $V(I_i)=V(G)$, $i=1,2,\ldots,k$ and $E(I_1)\cap E(I_2)\cap\cdots\cap E(I_k)=E(G)$. \end{lemma} Our proof of Theorem \ref{thm:mainTheorem} uses Lemma \ref{lem:intbox}; we construct $3$ interval graphs such that the given cubic graph is the intersection of these interval graphs.
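As a simple illustration of Lemma \ref{lem:intbox}, consider the $4$-cycle $C_4$ on vertices $\{a,b,c,d\}$ with edges $ab$, $bc$, $cd$, $da$. Taking $I_1$ to be the interval graph given by $a\mapsto[0,1]$, $c\mapsto[2,3]$, $b,d\mapsto[0,3]$, and $I_2$ the one given by $b\mapsto[0,1]$, $d\mapsto[2,3]$, $a,c\mapsto[0,3]$, we get $E(I_1)\cap E(I_2)=E(C_4)$; since $C_4$ itself is not an interval graph, its boxicity is exactly $2$.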
We observe that there exist graphs with maximum degree $3$ (and hence cubic graphs) with boxicity strictly greater than $2$. For example, let $G$ be a non-planar cubic graph and $G_s$ be the graph obtained by subdividing each edge once. Then, $\mbox{\textnormal{box}}(G_s)>2$. It is an easy exercise to prove this. One way is to show that if $G_s$ does have a $2$-box representation, then a planar embedding for $G$ can be derived from this box representation, contrary to the initial assumption that $G$ is a non-planar graph. This means that these graphs do not have a $2$-box representation, that is, they cannot be expressed as the intersection graphs of rectangles on the plane. The rest of the paper is devoted to the proof of Theorem \ref{thm:mainTheorem}.
\subsection{Notation}
Let $G$ be a graph. The notation $(x,y)\in E(G)$ ($(x,y)\notin E(G)$) means that $x$ is (not) adjacent to $y$ in $G$. For $U\subseteq V(G)$, $G[U]$ denotes the graph induced by $U$ in $G$. The \emph{open neighborhood} of $U$, denoted by $N(U,G)$ is the set $\left\{x\in V(G)\setminus U|\exists y\in U \mbox{ such that }(x,y)\in E(G)\right\}$. The length of a path is the number of edges in the path. We consider an isolated vertex as a path of length $0$. Suppose $G$ and $H$ are graphs defined on the same vertex set. $G\cap H$ denotes the graph with $V(G\cap H)=V(G)$ ($=V(H)$) and $E(G\cap H)=E(G)\cap E(H)$.
Consider a non-empty set $X$ and let $\Pi$ be an ordering of the elements of $X$. $\comp{\Pi}$ denotes the reverse of $\Pi$, that is, for any $x,y\in X$, $\comp{\Pi}(x)<\comp{\Pi}(y)$ if and only if $\Pi(x)>\Pi(y)$. Let $A$ and $B$ be disjoint subsets of $X$. The notation $\Pi(A)<\Pi(B)$ implies the following: $\forall a\in A,\ b\in B$, $\Pi(a)<\Pi(b)$.
\begin{lemma}\label{lem:cubicEnough} If every cubic graph has a $3$-box representation, then, every graph of maximum degree $3$ also has a $3$-box representation. The statement holds even when the intersections are restricted to the boundaries of the boxes. \end{lemma} \journ{ \begin{proof} Let $H$ be a non-cubic graph with maximum degree $3$. We will show that there exists a cubic graph $H'$ such that $H$ is an induced subgraph of $H'$. Here is one way of constructing $H'$ from $H$. Let
$D=3|V(H)|-\sum_{v\in V(H)}d(v)$, where $d(v)$ is the degree of $v$ in $H$. Let $C_D$ be a $D$-length cycle such that $V(C_D)\cap V(H)=\varnothing$. We construct $H'$ as follows: Let $V(H')=V(H)\cup V(C_D)$. $H'$ contains all the edges contained in $H$ and $C_D$, and in addition, each vertex $v\in V(H)$ is made adjacent to $3-d(v)$ unique vertices from $V(C_D)$. Clearly, $H'$ is cubic and by Theorem \ref{thm:mainTheorem}, has a $3$-box representation. Since $H$ is an induced subgraph of $H'$, any box representation of $H'$ can be converted to a box representation of $H$ by simply retaining only the boxes of vertices belonging to $H$. \qed \end{proof} } In view of Lemma \ref{lem:cubicEnough}, we note that it is enough to prove that a cubic graph has a $3$-box representation. Therefore, in our proof of Theorem \ref{thm:mainTheorem}, we will assume that the graph is cubic.
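To illustrate the construction in the proof of Lemma \ref{lem:cubicEnough}: if $H$ is a single edge $uv$, then $D=3\cdot 2-2=4$; taking a cycle $C_4=c_1c_2c_3c_4$ disjoint from $H$ and joining $u$ to $c_1,c_2$ and $v$ to $c_3,c_4$ yields a cubic graph $H'$ that contains $H$ as an induced subgraph.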
\section{Structural prerequisites}\label{sec:structPre} \subsection{Special cycles and paths} \begin{definition}{\bf Special cycle:}\label{def:specialCycle} An induced cycle $C$ is a special cycle if for all $x\in C$,
$C\setminus\{x\}$ is not a subgraph of an induced cycle or path of size $\ge|C|+1$. \end{definition} \begin{definition}{\bf Special path:}\label{def:specialPath} An induced path $P$ is a special path if \begin{enumerate} \item it is maximal in the sense that it is not a subgraph of an induced cycle or a longer induced path, and
\item for any end point of $P$, say $x$, $P\setminus\{x\}$ is not a subgraph of an induced cycle of size $\ge|P|$ or an induced path of length
$\ge|P|+1$. \end{enumerate} \end{definition}
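For instance, in the complete graph $K_4$ (which is cubic) every triangle is a special cycle: it is an induced cycle, and for any vertex $x$ of the triangle the remaining two vertices cannot lie in an induced cycle or path on $4$ or more vertices, since any four vertices of $K_4$ induce $K_4$ itself, which is neither a cycle nor a path.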
\begin{observation}\label{obs:nonSpecial} Any connected graph with at least $3$ vertices contains a special cycle or path. \end{observation} \journ{ This is easy to see. Among all sets of vertices which induce cycles or paths in the graph, consider the largest sets. If one of these sets induces a cycle, then, clearly this is a special cycle since there is no larger induced path or cycle in the graph. If none of them induces a cycle, then, each of these sets induces a special path since there is no induced longer path or an induced cycle of the same size in the graph. } \subsection{Partitioning the vertex set of a cubic graph} Let $G$ be a cubic graph and let $V=V(G)$. We partition $V$ in two stages. In Algorithm \ref{alg:primPart}, we obtain the primary partition: $V=\mathcal{S}\uplus\mathcal{N}_1\uplus\mathcal{A}_1$. This is followed by a finer partitioning in Algorithm \ref{alg:finePart}: $\mathcal{N}_1=\mathcal{R}\uplus\mathcal{N}$ and $\mathcal{A}_1=\mathcal{B}\uplus\mathcal{A}$.
\begin{algorithm}[h] \caption{\label{alg:primPart}}
\SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \Input{Cubic graph $G$} \Output{$\mathcal{S},\mathcal{N}_1,\mathcal{A}_1$ such that $V=\mathcal{S}\uplus\mathcal{N}_1\uplus\mathcal{A}_1$.} Let $V'=V$ and $\mathcal{S}=\varnothing$\; \While{there is a connected component in $G[V']$ with at least $3$ vertices}{\nllabel{lin:specialContra}
Let $T\subseteq V'$ be a set which induces a special cycle or path; \tcp{which exists by Observation \ref{obs:nonSpecial}}
$\mathcal{S}\longleftarrow \mathcal{S}\cup T$\;
$V'\longleftarrow V'\setminus\{T\cup N(T,G[V'])\}$\nllabel{lin:removeNeighbors}\; } $\mathcal{A}_1=V'$\; $\mathcal{N}_1=N(\mathcal{S},G)$; \end{algorithm}
\begin{observation}\label{obs:primPart} We have some easy observations from Algorithm \ref{alg:primPart}: \begin{enumerate} \item $\mathcal{S}$ induces a collection of cycles and paths in $G$.\label{primPart:inducedCyclesPaths} \item Every vertex in $\mathcal{S}$ has at least one neighbor in $\mathcal{N}_1$.\label{primPart:SN1} Therefore, every vertex in $\mathcal{N}_1$ is adjacent to at most two vertices in $\mathcal{A}_1$. \item For any $u\in\mathcal{S}$ and $v\in\mathcal{A}_1$, $u$ and $v$ are not adjacent. \label{primPart:ADisjointS} \item $\mathcal{A}_1$ induces a collection of isolated vertices and edges in $G$. \journ{This observation follows from the fact that $G[\mathcal{A}_1]$ does not contain any special cycle or path and therefore, from Observation \ref{obs:nonSpecial}, does not contain any component with three or more vertices.} \label{primPart:isolatedA} \end{enumerate} \end{observation}
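As a small example, on the cubic graph $K_4$ Algorithm \ref{alg:primPart} may extract a triangle $T$ as a special cycle; then $T\cup N(T,G[V'])$ is the whole vertex set, the loop terminates, and we end up with $\mathcal{S}=T$, $\mathcal{N}_1$ equal to the single remaining vertex, and $\mathcal{A}_1=\varnothing$, in accordance with Observation \ref{obs:primPart}.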
\begin{algorithm}[h] \caption{\label{alg:finePart}}
\SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \Input{Cubic graph $G$, $\mathcal{N}_1$, $\mathcal{A}_1$} \Output{$\mathcal{N},\mathcal{A},\mathcal{R},\mathcal{B}$ such that $\mathcal{N}_1=\mathcal{R}\uplus\mathcal{N}$ and $\mathcal{A}_1=\mathcal{B}\uplus\mathcal{A}$.} Let $\mathcal{R}=\mathcal{B}=\varnothing$\; \ForEach{$v\in\mathcal{N}_1$}{
\tcp{Recall that $v$ is adjacent to at least one vertex in $\mathcal{S}$ and therefore to at most $2$ vertices in $\mathcal{A}_1$.}
\If {$v$ is adjacent to two vertices in $\mathcal{A}_1\setminus\mathcal{B}$,}{\nllabel{lin:N2A}
Let $X_1(v)=N(v,G[\mathcal{A}_1\setminus\mathcal{B}])$; \tcp{the two neighbors of $v$
in $\mathcal{A}_1\setminus\mathcal{B}$}\nllabel{lin:X1}
Let $X_2(v)=N(X_1(v),G[\mathcal{A}_1\setminus\mathcal{B}])$; \tcp{neighbors of
neighbors of $v$ in $\mathcal{A}_1\setminus\mathcal{B}$}\nllabel{lin:X2}
$\mathcal{R}\longleftarrow\mathcal{R}\cup\{v\}$\;\nllabel{lin:vR}
$\mathcal{B}\longleftarrow\mathcal{B}\cup(X_1(v)\cup X_2(v))$\;\nllabel{lin:BX1X2}
} } $\mathcal{N}=\mathcal{N}_1\setminus\mathcal{R}$\; $\mathcal{A}=\mathcal{A}_1\setminus\mathcal{B}$; \end{algorithm}
\begin{observation}\label{obs:finePart} Some observations from Algorithm \ref{alg:finePart}. \begin{enumerate} \item Every vertex in $\mathcal{N}$ is adjacent to at most one vertex in $\mathcal{A}$.\label{finePart:N1A} \item Every vertex in $\mathcal{R}$ is adjacent to one vertex in $\mathcal{S}$ and two vertices in $\mathcal{B}$. This immediately implies that (a) $\mathcal{R}$ is an independent set and (b) for any $u\in\mathcal{R}$ and $v\in\mathcal{N}\cup\mathcal{A}$, $u$ and $v$ are not adjacent.\label{finePart:RDisjointNA} \item Since $\mathcal{B}\subseteq\mathcal{A}_1$, for any $u\in\mathcal{B}$ and $v\in\mathcal{S}$, $u$ and $v$ are not adjacent, by Observation \ref{obs:primPart}.\ref{primPart:ADisjointS}.\label{finePart:BDisjointS} \item For any $u\in\mathcal{B}$ and $v\in\mathcal{A}$, $u$ and $v$ are not adjacent. \journ{The proof is as follows. Since $u\in\mathcal{B}$, it follows that there exists a $w\in\mathcal{R}$ such that in Algorithm \ref{alg:finePart}, $u\in X_1(w)\cup X_2(w)$. From Observation \ref{obs:primPart}.\ref{primPart:isolatedA}, $u$ is adjacent to at most one vertex in $\mathcal{A}_1$. If it does have a neighbor in $\mathcal{A}_1$, it must belong to $X_1(w)\cup X_2(w)$ which is a subset of $\mathcal{B}$. Since $\mathcal{A}=\mathcal{A}_1\setminus\mathcal{B}$, $u$ is not adjacent to any vertex in $\mathcal{A}$.} \label{finePart:BDisjointA} \end{enumerate} \end{observation} \journ{ \begin{observation}\label{obs:X1X2} We have some observations regarding $X_1(\cdot)$ and $X_2(\cdot)$ which are defined in Algorithm \ref{alg:finePart}. Let $v\in\mathcal{R}$. \begin{enumerate}
\item $|X_1(v)|=2$. \item Since $X_1(v)\cup X_2(v)\subseteq\mathcal{A}_1$, from Observation \ref{obs:primPart}.\ref{primPart:isolatedA} it follows that every vertex in $X_1(v)\cup X_2(v)$ has at most one neighbor in $\mathcal{A}_1$ and this neighbor is in $X_1(v)\cup X_2(v)$.\label{X1X2:closed} \item If the two vertices in $X_1(v)$ are adjacent, then, $X_2(v)$ is empty. \item If $X_2(v)$ is not empty, then, again from Observation \ref{obs:primPart}.\ref{primPart:isolatedA}, every vertex in $X_2(v)$
is adjacent to exactly one vertex in $X_1(v)$ and $|X_2(v)|\le2$. \end{enumerate} \end{observation} } \subsubsection{Partitioning $\mathcal{S}$:}\label{sec:partS} We partition $\mathcal{S}$ into $\mathcal{C}$, the set of vertices which induce special cycles and $\mathcal{P}$, the set of vertices which induce special paths. $\mathcal{P}$ is further partitioned into $\mathcal{P}_e$, the set of end points and $\mathcal{P}_i$, the set of interior points of all the paths. \begin{definition}{\bf Second end points}\label{def:secondEndPoint} of a path of length at least $2$ are the interior vertices of the path which are adjacent to at least one of its end points. \end{definition} The set of second end points of the paths in $\mathcal{P}$ is denoted by $\mathcal{P}_{2e}$ and $\mathcal{P}_{2i}=\mathcal{P}_i\setminus\mathcal{P}_{2e}$.
\begin{lemma}\label{lem:NCPi} Every vertex in $\mathcal{N}_1$ is adjacent to at least one vertex in $\mathcal{C}\cup\mathcal{P}_i$. \end{lemma} \journ{ \begin{proof} Let $v\in\mathcal{N}_1$. Since $\mathcal{N}_1=N(\mathcal{S},G)$, $v$ is adjacent to at least one vertex in $\mathcal{S}$. If $v$ is not adjacent to any vertex in $\mathcal{C}\cup\mathcal{P}_i$, it implies that all its neighbors in $\mathcal{S}$ belong to $\mathcal{P}_e$. Let $P$ be the first path in $\mathcal{P}$ to be extracted in Algorithm \ref{alg:primPart} with $v$ as its neighbor. Since $v$ is not adjacent to any vertex in $\mathcal{P}_i$, it follows that it is adjacent to at least one end point of $P$ and none of its interior vertices. Since $P$ is the first component of $\mathcal{S}$ to be extracted with $v$ as the neighbor, it means that $v$ is still present in $V'$ when $P$ is chosen as the special path. If $v$ is adjacent to both the end points of $P$, then $P\cup\{v\}$ induces a cycle in $G[V']$ and if $v$ is adjacent to only one end point, then $P\cup\{v\}$ induces a path in $G[V']$ at that stage. In either case we have a contradiction to the fact that $P$ is a special path of $G[V']$. \qed \end{proof} }
\subsection{The graph induced by $\mathcal{S}\cup\mathcal{R}\cup\mathcal{B}$}\label{sec:SRB} \begin{lemma}\label{lem:componentsY} For each $u\in\mathcal{R}$, let $\Gamma(u)=\{u\}\cup X_1(u)\cup X_2(u)$, where $X_1(\cdot)$ and $X_2(\cdot)$ are as defined in Algorithm~\ref{alg:finePart}. Then, \begin{enumerate} \item $\mathcal{R}\cup\mathcal{B}=\displaystyle\biguplus_{u\in\mathcal{R}}\Gamma(u)$, \item $\Gamma(u)$ is a component in the graph induced by $\mathcal{R}\cup\mathcal{B}$, and \item $\Gamma(u)$ is isomorphic to one of the graphs illustrated in Figure~\ref{fig:componentsY}. \end{enumerate} \end{lemma} \journ{ \begin{proof} From Algorithm \ref{alg:finePart}, it is clear that $\mathcal{R}\cup\mathcal{B}=\bigcup_{u\in\mathcal{R}}\Gamma(u)$. Therefore, to prove the first statement we need to only show that for two distinct vertices $u,v\in\mathcal{R}$, $\Gamma(u)\cap\Gamma(v)=\varnothing$. Let us assume that $u$ was added to $\mathcal{R}$ before $v$ in the algorithm. Since any $x\in X_1(u)\cup X_2(u)$ is present in $\mathcal{B}$ when $v$ is being added to $\mathcal{R}$, it implies that $x\notin X_1(v)\cup X_2(v)$. Hence proved.
Note that $\Gamma(u)$ is connected. We will show that no vertex in $\Gamma(u)$ is adjacent to any vertex in $\Gamma(v)$, for any $v$ which was added to $\mathcal{R}$ after $u$ in Algorithm \ref{alg:finePart}. Clearly, this will imply that $\Gamma(u)$ is a component in the graph induced by $\mathcal{R}\cup\mathcal{B}$. First, let us consider $u$. Since $u$ is adjacent to one vertex in $\mathcal{S}$, it has only two neighbors in $\mathcal{A}_1$ and these two neighbors are in $X_1(u)$. This implies that it is not adjacent to any vertex in $\Gamma(v)$ since $\Gamma(u)\cap\Gamma(v)=\varnothing$.
Consider any vertex $x\in X_1(u)\cup X_2(u)$. From Observation \ref{obs:X1X2}.\ref{X1X2:closed} and the fact that $\Gamma(u)\cap\Gamma(v)=\varnothing$, we can infer that $x$ is not adjacent to any vertex in $X_1(v)\cup X_2(v)$. Now, suppose $x$ is adjacent to $v$. Since $x\in\mathcal{B}$ when $v$ is being added to $\mathcal{R}$, we infer that $v$ is adjacent to at most one vertex in $\mathcal{A}_1\setminus\mathcal{B}$ at that stage. This implies that $v$ does not satisfy the condition in Line \ref{lin:N2A} in the algorithm, a contradiction since $v$ belongs to $\mathcal{R}$. Hence proved.
From Observation \ref{obs:X1X2}, it is easy to infer that each component $\Gamma(u)$ is isomorphic to one of the five graphs shown in Figure \ref{fig:componentsY}. Let $X_1(u)=\{x_1,x_1'\}$. If $x_1$ or $x_1'$ has a neighbor in $X_2(u)$, then, it will be denoted by $x_2$ or $x_2'$ respectively. We have the following five graphs: In each graph, $u$ is adjacent to $x_1$ and $x_1'$. \begin{enumerate}[(a)] \item $x_1$ is adjacent to $x_1'$, and therefore, $X_2(u)=\varnothing$. \item $x_1$ is not adjacent to $x_1'$ and $X_2(u)=\varnothing$. \item $x_1$ is not adjacent to $x_1'$ and $X_2(u)=\{x_2\}$. \item $x_1$ is not adjacent to $x_1'$ and $X_2(u)=\{x_2'\}$. \item $x_1$ is not adjacent to $x_1'$ and $X_2(u)=\{x_2,x_2'\}$. \qed \end{enumerate} \end{proof} }
\begin{figure}
\caption{In the graph induced by $\mathcal{R}\cup\mathcal{B}$, each component $\Gamma(u)$, $u\in\mathcal{R}$ is isomorphic to one of the graphs illustrated in the figure. Here, $u$ has exactly two neighbors in $\mathcal{B}$, $x_1$ and $x_1'$. These neighbors if not adjacent can each have at most one neighbor in $\mathcal{B}$ which are denoted by $x_2$ and $x_2'$ respectively. }
\label{fig:componentsY}
\end{figure}
\journ{ From Lemma \ref{lem:componentsY} and Observations \ref{obs:finePart}.\ref{finePart:BDisjointS} and \ref{obs:finePart}.\ref{finePart:BDisjointA}, it follows that every vertex in $\mathcal{B}$ is adjacent to either one or two vertices in $\mathcal{R}\cup\mathcal{B}$ and to no vertex in $\mathcal{S}\cup\mathcal{A}$. This implies that every vertex in $\mathcal{B}$ is adjacent to either one or two vertices in $\mathcal{N}$. Based on this, we partition $\mathcal{B}$ into two parts: } \begin{definition}{\bf $\mathcal{B}_1$ and $\mathcal{B}_2$:}\label{def:B1B2} $\mathcal{B}_1$ is the set of vertices of $\mathcal{B}$ which have one neighbor in $\mathcal{N}$ and $\mathcal{B}_2$ is the set of vertices of $\mathcal{B}$ which have two neighbors in $\mathcal{N}$. \end{definition} \journ{ Recall that from Observation \ref{obs:finePart}.\ref{finePart:RDisjointNA}, each vertex of $\mathcal{R}$ has a unique neighbor in $\mathcal{S}$. In fact, we can infer more: } \begin{lemma}\label{lem:RP2i} Let $u\in\mathcal{R}$. The unique vertex of $\mathcal{S}$ to which $u$ is adjacent belongs to $\mathcal{P}_{2i}$. \end{lemma} \journ{ \begin{proof} Let $x$ be the unique neighbor of $u$ in $\mathcal{S}$. Let $a$ and $b$ be the remaining neighbors of $u$. From Observation \ref{obs:finePart}.\ref{finePart:RDisjointNA}, $a,b\in\mathcal{B}$. We need to show that $x\in\mathcal{P}_{2i}$. We will prove by contradiction. Let $T$ be the special cycle or path in $\mathcal{S}$ which contains $x$. Since $T$ is the only component in $\mathcal{S}$ with $u$ as a neighbor, it implies that in Algorithm \ref{alg:primPart}, $u$ is in $V'$ when $T$ is being chosen as the special path or cycle. Since $a,b\in\mathcal{B}$, it implies that they belong to $\mathcal{A}_1$ and therefore, they too are present in $V'$ when $T$ is being chosen. Moreover, $a$ and $b$ are not adjacent to any vertex in $T$ (Observation \ref{obs:finePart}.\ref{finePart:BDisjointS}). Now, we have the following cases to consider: \begin{description} \item[$x\in\mathcal{C}$:] This implies that $T$ is a special cycle. Let $x'\in T$ be a vertex adjacent to $x$. Clearly, $(T\setminus\{x'\})\cup\{u,a\}$
induces a path of length $|T|+1$ in $G[V']$ contradicting the fact that $T$ is a special cycle. \item[$x\in\mathcal{P}_e$:] This implies that $T$ is a special path. Since $u$
is not adjacent to any other vertex in the path, $T\cup\{u\}$ induces a path of length $|T|+1$ in $G[V']$, contradicting its maximality and thus $T$ cannot be a special path. \item[$x\in\mathcal{P}_{2e}$:] Again, this implies that $T$ is a special path. Let $x_e$ be an end point of $T$ to which $x$ is adjacent. Clearly, $(T\setminus\{x_e\})\cup\{u,a\}$ induces a path of length
$|T|+1$ in $G[V']$, contradicting the fact that $T$ is a special path. \end{description} Therefore, $x\in\mathcal{P}_{2i}$. \qed \end{proof} } \begin{observation}\label{obs:RP2i} We have the following observations due to Lemma \ref{lem:RP2i}. \begin{enumerate} \item Each vertex in $\mathcal{C}\cup\mathcal{P}_{2e}$ is adjacent to exactly one vertex in $\mathcal{N}$. \journ{The proof is as follows: Note that each vertex in $\mathcal{C}\cup\mathcal{P}_{2e}$ either belongs to a special cycle or is an interior vertex of a special path in $G[\mathcal{C}\cup\mathcal{P}]$. Therefore, it has only one neighbor in $V\setminus(\mathcal{C}\cup\mathcal{P})$ and by Observation \ref{obs:primPart}.\ref{primPart:SN1}, it must belong to $\mathcal{N}_1$. By Lemma \ref{lem:RP2i}, it does not belong to $\mathcal{R}$. Hence, it belongs to $\mathcal{N}$.} \label{RP2i:CP2eDisjointR} \item Every vertex in $\mathcal{P}_e$ is adjacent to exactly two vertices in $\mathcal{N}$. \label{RP2i:Pe2N} \item Every vertex in $\mathcal{P}_{2i}$ is adjacent to exactly one vertex in $\mathcal{R}\cup\mathcal{N}=\mathcal{N}_1$. \label{RP2i:P2iRN} \end{enumerate} \end{observation}
\subsection{The graph induced by $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$}\label{sec:B2PeN} \begin{lemma}\label{lem:B2PeInd} $\mathcal{B}_2\cup\mathcal{P}_e$ is an independent set in $G$. \end{lemma} \journ{ \begin{proof} Let $x,y\in\mathcal{P}_e$. If $x$ and $y$ are the end points of two different paths in $\mathcal{P}$, clearly, they are not adjacent. Since each path is special, it has at least $3$ vertices which implies that if $x$ and $y$ are end points of the same path, then they are not adjacent. Hence, $\mathcal{P}_e$ induces an independent set in $G$. Let $x,y\in\mathcal{B}_2$. By definition, they have two neighbors each in $\mathcal{N}$. Therefore, if $x$ and $y$ are adjacent, they induce a component in $\mathcal{R}\cup\mathcal{B}$, which contradicts Statement $3$ of Lemma \ref{lem:componentsY}. Noting that $\mathcal{P}_e\subseteq\mathcal{S}$ and $\mathcal{B}_2\subseteq\mathcal{B}$, from Observation \ref{obs:finePart}.\ref{finePart:BDisjointS}, it follows that no vertex in $\mathcal{B}_2$ is adjacent to any vertex in $\mathcal{P}_e$. Hence proved. \qed \end{proof} } \begin{observation}\label{obs:B2PeN} \journ{Consider a vertex $v\in\mathcal{B}_2\cup\mathcal{P}_e$. By the definition of $\mathcal{B}_2$ and Observation \ref{obs:RP2i}.\ref{RP2i:Pe2N}, it follows that $v$ is adjacent to exactly two vertices in $\mathcal{N}$. By Lemma \ref{lem:B2PeInd}, $v$ is not adjacent to any vertex in $\mathcal{B}_2\cup\mathcal{P}_e$. Therefore, in the graph induced by $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$, its degree is $2$. Lemma~\ref{lem:NCPi} implies that every vertex in $\mathcal{N}$ is adjacent to at most two vertices in $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$. From these two observations, we can infer the following about the graph induced by $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$.} \conf{We have the following observations about the graph induced by $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$.} \begin{enumerate} \item Its maximum degree is $2$ and thus is a collection of paths and cycles.\label{B2PeN:B2PeNDelta2} \item All the end points of paths (which also includes isolated vertices) belong to $\mathcal{N}$.\label{B2PeN:NEndIsolated} \item A vertex in $\mathcal{N}$ is adjacent to a vertex in $\mathcal{A}$ only if it is an end point of a path. \journ{The proof is as follows: Let $v\in\mathcal{N}$. It has at least one neighbor in $S$ (Observation \ref{obs:primPart}.\ref{primPart:SN1}). If it has a neighbor in $\mathcal{A}$, then it can have at most one neighbor in $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$. Hence, proved.}\label{B2PeN:NEndIsolatedA} \end{enumerate} \end{observation}
\begin{definition}{\bf\boldmath $\mathcal{N}_e$ and $\mathcal{N}_{int}$:} \label{def:NeNint} $\mathcal{N}$ is partitioned into $\mathcal{N}_e$, the set of end points of paths (which includes isolated vertices) and $\mathcal{N}_{int}$, the set of interior points of cycles and paths in $G[\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}]$. \end{definition} \journ{ In view of Observation \ref{obs:B2PeN}.\ref{B2PeN:NEndIsolatedA}, a vertex in $\mathcal{N}$ is adjacent to a vertex in $\mathcal{A}$ only if it belongs to $\mathcal{N}_e$. } \begin{definition}{\bf Type 1 and Type 2 cycles:}\label{def:type12} Recall that by Observation \ref{obs:B2PeN}.\ref{B2PeN:B2PeNDelta2}, $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$ induces a collection of cycles and paths. We classify the cycles in the following manner: \begin{description} \item[Type 1:] Cycles whose vertices alternate between $\mathcal{N}$ and $\mathcal{B}_2\cup\mathcal{P}_e$. \item[Type 2:] Cycles which are not Type 1. From Lemma \ref{lem:B2PeInd}, it is easy to infer that such a cycle has at least one pair of adjacent vertices which belong to $\mathcal{N}$. \end{description} \end{definition}
\journ{ \begin{lemma}\label{lem:NAdjB2PeDisjointCP2e} If a vertex $v\in\mathcal{N}$ is adjacent to $2$ vertices in $\mathcal{B}_2\cup\mathcal{P}_e$, then its remaining neighbor belongs to $\mathcal{P}_{2i}$. In other words, $v$ has no neighbor in $\mathcal{C}\cup\mathcal{P}_{2e}$. \end{lemma} \begin{proof} Let $v\in\mathcal{N}$ be such that it is adjacent to two vertices in $\mathcal{B}_2\cup\mathcal{P}_e$. Let $x_1$ and $x_2$ be these neighbors. From Lemma \ref{lem:NCPi}, it follows that $v$ is adjacent to one vertex in $\mathcal{C}\cup\mathcal{P}_i$. Let this vertex be $y$. We need to show that $y\in\mathcal{P}_{2i}$.
Suppose $T$ is the first special path or cycle chosen in Algorithm \ref{alg:primPart} with $v$ as a neighbor. We will first show that $v$, $x_1$ and $x_2$ are present in $V'$ when $T$ is being chosen. Since $T$ is the first component of $\mathcal{S}$ with $v$ as its neighbor, it implies that $v\in V'$ at that time. If any of $x_1$ and $x_2$ belongs to $\mathcal{B}_2$, then, it is in $V'$ because $\mathcal{B}_2\subseteq\mathcal{A}_1\subseteq V'$ at any time in the algorithm. If any of $x_1$ and $x_2$, say $x$ is in $\mathcal{P}_e$, then again, since $T$ is the first component of $\mathcal{S}$ with $v$ as the neighbor, it follows that $x$ is an end point of either $T$ or a special path chosen after $T$. In either case, $x$ belongs to $V'$ when $T$ is being chosen. The following observation is crucial for the proof:
\emph{Consider any $t\in\{y,x_1,x_2\}$. If $t\notin T$, it implies that it is not adjacent to any vertex in $T$, since otherwise it would be present in $\mathcal{N}_1$ and this is not possible since by assumption $y\in\mathcal{S}$ and $x_1,x_2\in\mathcal{B}_2\cup\mathcal{P}_e$. }
We will now show that if $y\in\mathcal{C}\cup\mathcal{P}_{2e}$, it contradicts the assumption that $T$ is a special cycle or path.
Let us suppose that $T$ is a special cycle. Since $x_1,x_2\notin\mathcal{C}$, it follows that $x_1,x_2\notin T$ and therefore, $y\in T$. Let $y'\in T$ be adjacent to $y$. Since $x_1\notin T$, it is not adjacent to any vertex of $T$ and therefore, $(T\setminus\{y'\})\cup\{v,x_1\}$ induces a path with
$|T|+1$ vertices in $G[V']$ contradicting the fact that $T$ is a special cycle. Therefore, from now on we will assume that $T$ is a special path.
If $y\notin T$, it implies that at least one of $\{x_1,x_2\}$ belongs to $T$. If only one of them, say $x_1\in T$, then, since $x_1$ has to be an end point of $T$, $T\cup\{v\}$ induces a longer path in $G[V']$ and if both $x_1,x_2\in T$, then, $T\cup\{v\}$ induces a cycle. In either case, we have a contradiction to the fact that $T$ is a special path.
Suppose $y\in T$. Since we have assumed that $y\in\mathcal{C}\cup\mathcal{P}_{2e}$, this means $y$ is a second end point of $T$. Let $p_e'$ be an end point of $T$ adjacent to $y$ and let $p_e$ be the other end point. If $p_e\in\{x_1,x_2\}$, then, $(T\setminus\{p_e'\})\cup\{v\}$ induces a cycle contradicting the fact that $T$ is a special path. If $p_e\notin\{x_1,x_2\}$, it implies that at most one vertex from $\{x_1,x_2\}$ can be an end point of $T$. Without loss of generality, we assume that $x_1$ is not an end point of $T$. This implies that
$x_1\notin T$ and therefore, is not adjacent to any vertex in $T$. Hence, $(T\setminus\{p_e'\})\cup\{v,x_1\}$ induces a path with $|T|+1$ vertices, again contradicting the fact that $T$ is a special path. Therefore, $y\in\mathcal{P}_{2i}$. \qed \end{proof} }
\journ{ \begin{observation}\label{obs:B2PeNIsnMinus1}
We can assume that $|\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}|\le n-2$. This is because, since $G$ is cubic, it has a cycle and therefore, we can always extract a special cycle with at least $3$ vertices or a special path with at least
$4$ vertices from $V$. Thus, we can ensure that $|\mathcal{S}\setminus\mathcal{P}_e|\ge 2$. This implies that, $|\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}|\le n-2$. \end{observation} }
\subsection{A summary} In this section, we partitioned the vertex set $V$ as follows: \journ{$V=\mathcal{S}\uplus\mathcal{N}_1\uplus\mathcal{A}_1$, where $\mathcal{S}=\mathcal{C}\uplus\mathcal{P}$, $\mathcal{N}_1=\mathcal{R}\uplus\mathcal{N}$ and $\mathcal{A}_1=\mathcal{B}\uplus\mathcal{A}$. Further, $\mathcal{P}=\mathcal{P}_e\uplus\mathcal{P}_{2e}\uplus\mathcal{P}_{2i}$ and $\mathcal{B}=\mathcal{B}_1\uplus\mathcal{B}_2$. Therefore,} $V=\mathcal{C}\uplus\mathcal{P}_e\uplus\mathcal{P}_{2e} \uplus\mathcal{P}_{2i}\uplus\mathcal{N} \uplus\mathcal{R}\uplus\mathcal{B}_1\uplus\mathcal{B}_2 \uplus\mathcal{A}$. This partitioning of $V$ is illustrated in Figure \ref{fig:cubicPartTree}. Some of the observations and lemmas which we developed will be frequently referred to in the sections to come. For the convenience of the reader, we have tabulated them as follows. \journ{In Table \ref{tab:nonAdj}, we have listed the pairs of sets $X,Y\subseteq V$ which satisfy the property that in $G$ there is no edge between $X$ and $Y$. The relevant observation or lemma is listed in the third column.} In Table \ref{tab:uniqNeighbor}, we list pairs of sets $X,Y\subseteq V$ such that in $G$, every vertex of $X$ has at most one neighbor in $Y$. The corresponding observation or lemma can be found in the third column. In the fourth column, we give information on whether the vertex in $X$ has at most one neighbor or exactly one neighbor in $Y$. \begin{figure}\label{fig:cubicPartTree}
\end{figure} \journ{ \begin{table}[tb] \begin{center}
\begin{tabular}{|r|c|c|l|} \hline 1&$\mathcal{A}$ & $\mathcal{C}\cup\mathcal{P}$ & using Observation \ref{obs:primPart}.\ref{primPart:ADisjointS} and the fact that $\mathcal{A}\subseteq\mathcal{A}_1$\\\hline 2&$\mathcal{A}\cup\mathcal{N}$ & $\mathcal{R}$ & Observation \ref{obs:finePart}.\ref{finePart:RDisjointNA}(b)\\\hline 3&$\mathcal{A}$ & $\mathcal{B}$ & Observation \ref{obs:finePart}.\ref{finePart:BDisjointA}\\\hline 4&$\mathcal{C}$ & $\mathcal{P}$ & follows directly from the definitions of $\mathcal{C}$ and $\mathcal{P}$.\\\hline 5&$\mathcal{C}\cup\mathcal{P}_e\cup\mathcal{P}_{2e}$ & $\mathcal{R}$ & Lemma \ref{lem:RP2i}\\\hline 6&$\mathcal{C}\cup\mathcal{P}$ & $\mathcal{B}$ & Observation \ref{obs:finePart}.\ref{finePart:BDisjointS}\\\hline 7&$\mathcal{P}_e$ & $\mathcal{P}_e$ & Lemma \ref{lem:B2PeInd}\\\hline 8&$\mathcal{P}_e$ & $\mathcal{P}_{2i}$ & follows directly from the definitions of $\mathcal{P}_e$ and $\mathcal{P}_{2i}$.\\\hline 9&$\mathcal{N}_{int}$ & $\mathcal{A}$ & Observation \ref{obs:B2PeN}.\ref{B2PeN:NEndIsolatedA} and Definition \ref{def:NeNint}\\\hline 10&$\mathcal{R}$ & $\mathcal{R}$ & Observation \ref{obs:finePart}.\ref{finePart:RDisjointNA}(a)\\\hline \end{tabular} \caption{{\bf Non-adjacency table:} In every row, there is no edge between the set in the 1st column and the set in the 2nd column in $G$.\label{tab:nonAdj}} \end{center} \end{table} } \begin{table}[tb] \begin{center}
\begin{tabular}{|r|c|c|l|l|} \hline 1&$\mathcal{N}_e$ & $\mathcal{A}$ & Observation \ref{obs:B2PeN}.\ref{B2PeN:NEndIsolatedA} and Definition \ref{def:NeNint}& at most one neighbor\\\hline 2&$\mathcal{B}_1$ & $\mathcal{N}$ & Definition \ref{def:B1B2} & exactly one neighbor \\\hline 3&$\mathcal{C}\cup\mathcal{P}_{2e}$ & $\mathcal{N}$ & Observation~\ref{obs:RP2i}.\ref{RP2i:CP2eDisjointR} & exactly one neighbor \\\hline 4&$\mathcal{P}_{2i}$ & $\mathcal{R}\cup\mathcal{N}$ & Observation \ref{obs:RP2i}.\ref{RP2i:P2iRN} & exactly one neighbor\\\hline 5&$\mathcal{R}$ & $\mathcal{P}_{2i}$ & Lemma \ref{lem:RP2i} & exactly one neighbor\\\hline \end{tabular} \caption{{\bf Unique neighbor table:} In every row, each vertex belonging to the set in the 1st column has either (a) at most one neighbor OR (b) exactly one neighbor in the set given in the 2nd column.\label{tab:uniqNeighbor}} \end{center} \end{table}
\section{Construction of a $3$-box representation of $G$}\label{sec:3box}
In order to give a $3$-box representation of $G$, we define three interval graphs $I_1$, $I_2$ and $I_3$\journ{ and verify that $E(G)=E(I_1)\cap E(I_2)\cap E(I_3)$}. \conf{The verification of $E(G)=E(I_1)\cap E(I_2)\cap E(I_3)$ and the condition that two intersecting boxes touch only at their boundaries is in the appendix.} Let $n=|V|$, the number of vertices in $G$. For any $v\in V$, let $f(v,I_j)$, $j=1,2,3$ denote the closed interval assigned to $v$ in the interval representation of $I_j$. Further, let $l(v,I_j)$ and $r(v,I_j)$ denote the left and right end points of $f(v,I_j)$ respectively. In each interval graph, the interval assigned to a vertex is based on the set it belongs to in the partition of $V$ (illustrated in Figure \ref{fig:cubicPartTree}). \journ{We will also show that for every pair of adjacent vertices $x$ and $y$, in at least one interval graph $I_j$, $j\in\{1,2,3\}$, either $l(x,I_j)=r(y,I_j)$ or $l(y,I_j)=r(x,I_j)$. This is sufficient to prove that their corresponding boxes intersect only at their boundaries.}
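In other words, the box assigned to a vertex $v$ in the final representation is simply the product $f(v,I_1)\times f(v,I_2)\times f(v,I_3)$; two such boxes intersect precisely when the corresponding intervals intersect in each of $I_1$, $I_2$ and $I_3$, which is the content of Lemma \ref{lem:intbox}.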
\subsection{Construction of $I_1$}\label{sec:I1} \subsubsection{Vertices of $\mathcal{A}$} \label{sec:AI1} We recall from Observation \ref{obs:primPart}.\ref{primPart:isolatedA} that since $\mathcal{A}\subseteq\mathcal{A}_1$, it induces a collection of isolated vertices and edges in $G$. \begin{definition}\label{def:PiA} Let $\Pi_A$ be an ordering of $\mathcal{A}$ which satisfies the condition that the two end points of every (isolated) edge are consecutively ordered. \end{definition} \journ{ The intervals assigned to the vertices of $\mathcal{A}$ are as follows: } \paragraph{\bf An isolated vertex} $u$ is given a point interval as follows: \begin{equation}\label{eqn:AIsolatedI1} f(u,I_1)=[2n+\Pi_A(u),2n+\Pi_A(u)]. \end{equation} \paragraph{\bf End points of an isolated edge} $(u,v)$: Without loss of generality, let $\Pi_A(v)=\Pi_A(u)+1$. We assign the intervals to $u$ and $v$ as follows: \begin{eqnarray} f(u,I_1)&=&[2n+\Pi_A(u),2n+\Pi_A(u)+0.5],\label{eqn:AEdgexI1} \\ f(v,I_1)&=&[2n+\Pi_A(v)-0.5,2n+\Pi_A(v)].\label{eqn:AEdgeyI1} \end{eqnarray} \journ{ \begin{observation}\label{obs:AATouch} Let $x,y\in\mathcal{A}$ such that $\Pi_A(x)<\Pi_A(y)$. If they are adjacent in $G$, then, $r(x,I_1)=l(y,I_1)$. This follows from (\ref{eqn:AEdgexI1}) and (\ref{eqn:AEdgeyI1}). \end{observation}
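For instance, if $(u,v)$ is an isolated edge of $G[\mathcal{A}]$ with $\Pi_A(u)=1$ and $\Pi_A(v)=2$, then (\ref{eqn:AEdgexI1}) and (\ref{eqn:AEdgeyI1}) yield $f(u,I_1)=[2n+1,2n+1.5]$ and $f(v,I_1)=[2n+1.5,2n+2]$, which intersect only at the point $2n+1.5$.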
\begin{lemma}\label{lem:AGI1} The graph induced by $\mathcal{A}$ in $I_1$ and $G$ are identical, that is, $I_1[\mathcal{A}]=G[\mathcal{A}]$. \end{lemma} \begin{proof} From the interval assignments (\ref{eqn:AIsolatedI1}), (\ref{eqn:AEdgexI1}) and (\ref{eqn:AEdgeyI1}), we observe the following: for any $x\in\mathcal{A}$, (a) the point $2n+\Pi_A(x)$ is an end point of $f(x,I_1)$ and (b) either $f(x,I_1)$ is a point interval or its length is $0.5$.
Suppose $x,y\in\mathcal{A}$ such that $\Pi_A(x)<\Pi_A(y)$. From the above observations it follows that $x$ and $y$ are adjacent in $I_1$ if and only if $\Pi_A(y)=\Pi_A(x)+1$ and $f(x,I_1)=[2n+\Pi_A(x),2n+\Pi_A(x)+0.5]$ and $f(y,I_1)=[2n+\Pi_A(y)-0.5,2n+\Pi_A(y)]$. This implies that $x$ was assigned an interval by (\ref{eqn:AEdgexI1}) and $y$, by (\ref{eqn:AEdgeyI1}). This happens only if $x$ and $y$ are the end points of an isolated edge in $G[\mathcal{A}]$. Hence proved. \qed \end{proof} } \subsubsection{Vertices of ${\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}}$}\label{sec:B2PeNI1} We define an ordering on this set. \begin{definition}\label{def:Pi1} Let $\Pi_1$ be an ordering of $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$ which satisfies the following properties. Let $S$ be a component induced by $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$. We recall from Section \ref{sec:B2PeN} that $S$ is either a path or a cycle. \begin{enumerate} \item Let $S$ be a path with at least two vertices. Then, for one of the natural orderings of the vertices of $S$, say $p_1p_2\ldots p_t$, we have $\Pi_1(p_i)=\Pi_1(p_{i-1})+1$, $2\le i\le t$. \item Suppose $S$ is a cycle. We recall that $S$ is either a Type 1 or a Type 2 cycle (Definition \ref{def:type12}). Consider one of the natural orderings of the vertices of $S$, say $c_1c_2\ldots c_tc_1$ such that if $S$ is a Type 1 cycle, then $c_1\in\mathcal{N}$ and if it is a Type 2 cycle, then $c_1,c_t\in\mathcal{N}$. Then, we have, $\Pi_1(c_i)=\Pi_1(c_{i-1})+1$, $2\le i\le t$. \item Let $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}= \Lambda_{\textrm{Type 1}}\uplus\Lambda_{\textrm{Type 2}}\uplus\Lambda_{\textrm{Paths}}$ where, $\Lambda_{\textrm{Type 1}}$, $\Lambda_{\textrm{Type 2}}$ and $\Lambda_{\textrm{Paths}}$ are the sets of vertices belonging to Type 1 cycles, Type 2 cycles and paths respectively. Then, we have $\Pi_1({\Lambda_{\textrm{Type 1}}})<\Pi_1({\Lambda_{\textrm{Type 2}}})<\Pi_1({\Lambda_{\textrm{Paths}}})$. \end{enumerate} \end{definition} \journ{ It is easy to verify that such an ordering exists. We also infer that if $S_1$ and $S_2$ are two different components of $G[\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}]$, then, either $\Pi_1(S_1)<\Pi_1(S_2)$ or $\Pi_1(S_2)<\Pi_1(S_1)$. \begin{observation}\label{obs:Pi1nMinus1}
$\forall z\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$, $\Pi_1(z)<n-1$. This follows from the fact that $|\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}|<n-1$ (Observation \ref{obs:B2PeNIsnMinus1}). \end{observation} } The interval assignments for the vertices of $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$ are as follows: \paragraph{\bf For a vertex in a Type 1 cycle:} Let $S=c_1c_2\ldots c_tc_1$ be a Type 1 cycle such that $\Pi_1(c_{i+1})=\Pi_1(c_i)+1$, $1\le i<t$. \begin{eqnarray} &&f(c_1,I_1)=\left[\Pi_1(c_1),\Pi_1(c_t)\right];\label{eqn:B2PeNType1c1I1}\\ &&f(c_i,I_1)=\left[\Pi_1(c_i),\Pi_1(c_i)+1\right], 1<i<t;\label{eqn:B2PeNType1ciI1}\\ &&f(c_t,I_1)=\left[\Pi_1(c_t),\Pi_1(c_t)+0.5\right].\label{eqn:B2PeNType1ctI1} \end{eqnarray} \journ{ \begin{observation}\label{obs:Type1I1} Suppose $S=c_1c_2\ldots c_tc_1$ is a Type 1 cycle such that $\Pi_1(c_{i+1})=\Pi_1(c_i)+1$, $1\le i<t$. \begin{enumerate} \item For $1<i<t$, $r(c_i,I_1)=l(c_{i+1},I_1)$ and therefore, $I_1[S\setminus c_1]=G[S\setminus c_1]$. This is because, from (\ref{eqn:B2PeNType1ciI1}) and (\ref{eqn:B2PeNType1ctI1}), $r(c_i,I_1)=\Pi_1(c_i)+1=\Pi_1(c_{i+1})=l(c_{i+1},I_1)$.\label{Type1I1:SMinusc1Touch} \item $I_1[S]$ is a supergraph of $G[S]$. The proof is as follows: $c_1$ is adjacent to all the other vertices of $S$. This follows from (\ref{eqn:B2PeNType1c1I1}): $l(c_i,I_1)=\Pi_1(c_i)\in f(c_1,I_1)$. From Point 1, $c_i$ is adjacent to $c_{i+1}$, $2\le i<t$. Hence, proved. \label{Type1I1:Type1SuperI1} \item Let $x\in S_x$ and $y\in S_y$, where $S_x$ and $S_y$ induce different Type 1 cycles. Then, $(x,y)\notin E(I_1)$. The proof is as follows: Without loss of generality, let $\Pi_1(S_x)<\Pi_1(S_y)$. From (\ref{eqn:B2PeNType1c1I1}--\ref{eqn:B2PeNType1ctI1}), it follows that $\displaystyle r(x,I_1)\le \max_{a\in S_x}\Pi_1(a)+0.5<\min_{b\in S_y}\Pi_1(b)\le l(y,I_1)$. Therefore, $(x,y)\notin E(I_1)$. \label{Type1I1:nonAdjType1} \end{enumerate} \end{observation} } \paragraph{\bf For a vertex in a Type 2 cycle:} Let $S=c_1c_2\ldots c_tc_1$ be a Type 2 cycle such that $\Pi_1(c_{i+1})=\Pi_1(c_i)+1$, $1\le i<t$. We recall from the definition of $\Pi_1$ that $c_1,c_t\in\mathcal{N}$. They are assigned intervals as follows: \begin{eqnarray} f(c_1,I_1)&=&\left[n+\Pi_1(c_1),n+\Pi_1(c_t)\right], \label{eqn:B2PeNType2c1I1} \\ f(c_t,I_1)&=&\left[n+\Pi_1(c_t),n+\Pi_1(c_t)+0.5\right]. \label{eqn:B2PeNType2ctI1} \end{eqnarray} The remaining vertices are assigned intervals as follows. For $1<i<t$, \begin{eqnarray} &&\textrm{if $c_i\in\mathcal{N}$, then, }f(c_i,I_1)= \left[n+\Pi_1(c_i),n+\Pi_1(c_i)+1\right],\label{eqn:B2PeNType2othersNI1}\\ &&\textrm{if $c_i\in\mathcal{B}_2\cup\mathcal{P}_e$, then, }f(c_i,I_1)= \left[n,n+\Pi_1(c_i)+1\right].\label{eqn:B2PeNType2othersB2PeI1} \end{eqnarray} \journ{ \begin{observation}\label{obs:Type2I1} Let $S=c_1c_2\ldots c_tc_1$ be a Type 2 cycle such that $\Pi_1(c_{i+1})=\Pi_1(c_i)+1$, $1\le i<t$. \begin{enumerate} \item $I_1[S]$ is a supergraph of $G[S]$. The proof is as follows: From (\ref{eqn:B2PeNType2c1I1}--\ref{eqn:B2PeNType2othersB2PeI1}), $\forall c\in S$, $n+\Pi_1(c)\in f(c,I_1)$. From (\ref{eqn:B2PeNType2c1I1}), we note that $\forall c\in S$, $n+\Pi_1(c)\in f(c_1,I_1)$ and therefore, $c_1$ is adjacent to all the other vertices of $S$. From (\ref{eqn:B2PeNType2othersNI1}) and (\ref{eqn:B2PeNType2othersB2PeI1}), for $1<i<t$, $r(c_{i},I_1)=n+\Pi_1(c_{i})+1=n+\Pi_1(c_{i+1})\in f(c_{i+1},I_1)$. 
Therefore, $c_i$ is adjacent to $c_{i+1}$, $1<i<t$.\label{Type2I1:Type2SuperI1} \item Let $x,y\in S$ be two adjacent vertices in $G$ such that neither $x$ nor $y$ is $c_1$. If $\Pi_1(x)>\Pi_1(y)$ and $x\in\mathcal{N}$, then, $l(x,I_1)=r(y,I_1)$. This follows by noting that $\Pi_1(x)=\Pi_1(y)+1$ and subsequently applying it in (\ref{eqn:B2PeNType2ctI1}--\ref{eqn:B2PeNType2othersB2PeI1}).\label{Type2I1:touch} \end{enumerate} \end{observation} } \paragraph{\bf For a vertex in a path:} Let $S=p_1p_2\ldots p_t$ be a path such that $\Pi_1(p_{i+1})=\Pi_1(p_i)+1$, $1\le i<t$. By Definition \ref{def:NeNint}, $p_1,p_t\in\mathcal{N}_e$. From Table \ref{tab:uniqNeighbor} (row 1), they can be adjacent to at most one vertex in $\mathcal{A}$. Taking this into consideration, the interval assignments are as follows: Let $p\in\{p_1,p_t\}\subseteq\mathcal{N}_e$: \begin{eqnarray} &&\textrm{if $p$ is not adjacent to any vertex in $\mathcal{A}$, then, } f(p,I_1)=\left[n+\Pi_1(p),2n\right], \label{eqn:B2PeNp1ptnoneI1}\\ &&\textrm{if $p$ is adjacent to a vertex $a$ in $\mathcal{A}$, then, } f(p,I_1)=\left[n+\Pi_1(p),l(a,I_1)\right].\label{eqn:B2PeNp1ptxI1} \end{eqnarray} \journ{ Note that $f(a,I_1)$ is already defined in (\ref{eqn:AIsolatedI1}--\ref{eqn:AEdgeyI1}). Moreover, $l(a,I_1)> 2n\ge n+\Pi_1(p)$. Therefore, $f(p,I_1)$ is well-defined in (\ref{eqn:B2PeNp1ptxI1}).} If $p$ is an interior point of the path, its interval assignment is as follows: \begin{eqnarray} &&\textrm{if $p\in\mathcal{N}_{int}$, then, }f(p,I_1)=\left[n+\Pi_1(p),n+\Pi_1(p)+1\right],\label{eqn:B2PeNpInteriorNI1}\\ &&\textrm{if $p\in\mathcal{B}_2\cup\mathcal{P}_e$, then, }f(p,I_1)=\left[n,n+\Pi_1(p)+1\right].\label{eqn:B2PeNpInteriorB2PeI1} \end{eqnarray} \journ{ \begin{observation}\label{obs:PathI1} Let $S=p_1p_2\ldots p_t$ be a path such that $\Pi_1(p_{i+1})=\Pi_1(p_i)+1$, $1\le i<t$. \begin{enumerate} \item $I_1[S]$ is a supergraph of $G[S]$. The proof is as follows: For all $p\in S$, $n+\Pi_1(p),n+\Pi_1(p)+1\in f(p,I_1)$. This is easy to infer from (\ref{eqn:B2PeNp1ptnoneI1}--\ref{eqn:B2PeNpInteriorB2PeI1}) and the fact that $\Pi_1(p)<n-1$ (Observation \ref{obs:Pi1nMinus1}). This implies that for $1\le i<t$, $p_i$ is adjacent to $p_{i+1}$. Hence, proved.\label{PathI1:PathSuperI1} \item Let $x,y\in S$ be two adjacent vertices in $G$. If $\Pi_1(x)>\Pi_1(y)$, $x\in\mathcal{N}$ and $y\in\mathcal{B}_2\cup\mathcal{P}_e$, then, $l(x,I_1)=r(y,I_1)$. This follows by noting that $\Pi_1(x)=\Pi_1(y)+1$ and subsequently applying it in (\ref{eqn:B2PeNp1ptnoneI1}--\ref{eqn:B2PeNpInteriorB2PeI1}).\label{PathI1:touch} \end{enumerate} \end{observation}
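As a concrete (hypothetical) illustration of the path assignments, let $S=p_1p_2p_3p_4$ be a path of $G[\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}]$ with $p_1,p_4\in\mathcal{N}_e$ not adjacent to any vertex of $\mathcal{A}$, $p_2\in\mathcal{B}_2$, $p_3\in\mathcal{N}_{int}$ and $\Pi_1(p_i)=i+4$ (so $n>9$ by Observation \ref{obs:Pi1nMinus1}). Then (\ref{eqn:B2PeNp1ptnoneI1})--(\ref{eqn:B2PeNpInteriorB2PeI1}) give $f(p_1,I_1)=[n+5,2n]$, $f(p_2,I_1)=[n,n+7]$, $f(p_3,I_1)=[n+7,n+8]$ and $f(p_4,I_1)=[n+8,2n]$. All three edges of $S$ are realized, but the pairs $(p_1,p_3)$ and $(p_1,p_4)$, which are not edges of $G$, also intersect; this is why $I_1$ only induces a supergraph here, and such extra pairs are destroyed in $I_3$ (see Observation \ref{obs:NnonAdjB2PeNI1I3}).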
\begin{observation}\label{obs:BunchI1} We have some observations regarding the intervals assigned to vertices of $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$. We repeatedly make use of Observation \ref{obs:Pi1nMinus1}. \begin{enumerate} \item If $z$ belongs to a Type 1 cycle, then, (a) $l(z,I_1)=\Pi_1(z)$ and (b) $1\le l(z,I_1)<r(z,I_1)<n$. The proof is as follows: Let $z$ belong to the Type 1 cycle $S_z=c_1c_2\ldots c_tc_1$ such that $\Pi_1(c_{i+1})=\Pi_1(c_i)+1$, $1\le i<t$. From (\ref{eqn:B2PeNType1c1I1}--\ref{eqn:B2PeNType1ctI1}), it immediately follows that $l(z,I_1)=\Pi_1(z)$, $l(z,I_1)<r(z,I_1)$ and $r(z,I_1)\le\Pi_1(c_t)+0.5<(n-1)+0.5<n$. \label{BunchI1:Type1}
\item If $z\in\mathcal{N}$ belongs to a Type 2 cycle or a path, then, $l(z,I_1)=n+\Pi_1(z)$ and therefore, $n<l(z,I_1)<2n-1$. This follows from (\ref{eqn:B2PeNType2c1I1}--\ref{eqn:B2PeNType2othersNI1}) for a Type 2 cycle and (\ref{eqn:B2PeNp1ptnoneI1}--\ref{eqn:B2PeNpInteriorNI1}) for a path.\label{BunchI1:NType2Path}
\item If $z\in\mathcal{B}_2\cup\mathcal{P}_e$ belongs to a Type 2 cycle or a path, then, $l(z,I_1)=n$ and $r(z,I_1)=n+\Pi_1(z)+1<2n$. This follows from (\ref{eqn:B2PeNType2othersB2PeI1}) for a Type 2 cycle and (\ref{eqn:B2PeNpInteriorB2PeI1}) for a path.\label{BunchI1:B2PeType2Path}
\item If $x,y\in\mathcal{N}$ such that $\Pi_1(x)<\Pi_1(y)$, then, $l(x,I_1)+1\le l(y,I_1)$. The proof is as follows: If $x$ and $y$ belong to Type 1 cycles, then by Point 1 in this observation, $l(x,I_1)+1=\Pi_1(x)+1\le\Pi_1(y)=l(y,I_1)$. If $y$ belongs to a Type 1 cycle, then by Definition \ref{def:Pi1} (Point 3), $x$ also belongs to a Type 1 cycle. Therefore, it is not possible that $y$ is in a Type 1 cycle and $x$ is not. If $x$ is in a Type 1 cycle and $y$ is not, then, $l(x,I_1)+1=\Pi_1(x)+1<n+\Pi_1(y)=l(y,I_1)$. Finally, if both belong to a Type 2 cycle or a path, then, $l(x,I_1)+1=n+\Pi_1(x)+1\le n+\Pi_1(y)=l(y,I_1)$ (from Point \ref{BunchI1:NType2Path} in this observation).\label{BunchI1:xLessy}
\item If $x$ belongs to a Type 1 cycle and $y$ belongs to either a Type 2 cycle or a path, then, $x$ and $y$ are not adjacent in $I_1$. The proof is as follows: From Point \ref{BunchI1:Type1} in this observation, $r(x,I_1)<n$ and from Points \ref{BunchI1:NType2Path} and \ref{BunchI1:B2PeType2Path}, $l(y,I_1)\ge n$.\label{BunchI1:Type1SepType2PathI1}
\item If $x\in\mathcal{N}$ is adjacent to $a\in\mathcal{A}$, then, $r(x,I_1)=l(a,I_1)$. The proof is as follows: By Table \ref{tab:nonAdj} (row 9), $x\in\mathcal{N}_e$ and by Table \ref{tab:uniqNeighbor} (row 1), $a$ should be the only neighbor of $x$ in $\mathcal{A}$. The interval assignment for $x$ is given in (\ref{eqn:B2PeNp1ptxI1}) where, $r(x,I_1)=l(a,I_1)$.\label{BunchI1:ANTouch} \end{enumerate} \end{observation}
\begin{lemma}\label{lem:AB2PeNSuperI1} $I_1[\mathcal{A}\cup\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}]$ is a supergraph of $G[\mathcal{A}\cup\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}]$. \end{lemma} \begin{proof} Let $x,y\in\mathcal{A}\cup\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$ be two adjacent vertices in $G$. If $x,y\in\mathcal{A}$, then, by Lemma \ref{lem:AGI1} they are adjacent in $I_1$. Let $x,y\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$. They have to belong to the same component in $G[\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}]$, which is either a Type 1 cycle, Type 2 cycle or a path. By Observations \ref{obs:Type1I1}.\ref{Type1I1:Type1SuperI1}, \ref{obs:Type2I1}.\ref{Type2I1:Type2SuperI1} and \ref{obs:PathI1}.\ref{PathI1:PathSuperI1}, $x$ and $y$ are adjacent in $I_1$.
Now it remains to be shown that if $x\in\mathcal{A}$ and $y\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$, then they are adjacent in $I_1$. Noting that $\mathcal{B}_2\subseteq\mathcal{B}$ and $\mathcal{P}_e\subseteq\mathcal{P}$, $y\notin\mathcal{B}_2\cup\mathcal{P}_e$ (see Table \ref{tab:nonAdj}, rows 1 and 3). Therefore, $y\in\mathcal{N}$. By Observation \ref{obs:BunchI1}.\ref{BunchI1:ANTouch}, $r(y,I_1)=l(x,I_1)$. Therefore, $x$ is adjacent to $y$ in $I_1$. Hence proved. \qed \end{proof} } \subsubsection{Vertices of $\mathcal{B}_1\cup\mathcal{P}_{2e}$} Let $v\in\mathcal{B}_1\cup\mathcal{P}_{2e}$. From Table \ref{tab:uniqNeighbor} (rows 2 and 3), we note that $v$ has a unique neighbor in $\mathcal{N}$, say $v'$. $f(v',I_1)$ is already defined in Section \ref{sec:B2PeNI1}. \begin{eqnarray} &&\textrm{if $v\in\mathcal{B}_1$, then, } f(v,I_1)=\left[0,l(v',I_1)\right]\label{eqn:B1I1},\\ &&\textrm{if $v\in\mathcal{P}_{2e}$, then, } f(v,I_1)=\left[-1,l(v',I_1)\right]\label{eqn:P2eI1}. \end{eqnarray} \journ{ \begin{lemma}\label{lem:B1P2eI1} For any $x\in\mathcal{B}_1\cup\mathcal{P}_{2e}$, $r(x,I_1)>n$ and therefore, $[0,n]\subset f(x,I_1)$. \end{lemma} \begin{proof} From Table \ref{tab:uniqNeighbor} (rows 2 and 3), $x$ has a unique neighbor in $\mathcal{N}$, say $x'$. From (\ref{eqn:B1I1}) and (\ref{eqn:P2eI1}), $r(x,I_1)=l(x',I_1)$. We will now show that $x'$ does not belong to a Type~1 cycle in the graph induced by $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$. Suppose $x\in\mathcal{B}_1$. Since $\mathcal{N}\subseteq\mathcal{N}_1$, by Lemma~\ref{lem:NCPi}, $x'$ is adjacent to at least one vertex in $\mathcal{C}\cup\mathcal{P}_i$. Since $x\in\mathcal{B}_1$, $x'$ cannot be adjacent to two vertices in $\mathcal{B}_2\cup\mathcal{P}_e$ and hence cannot belong to a Type~1 cycle. Now suppose $x\in\mathcal{P}_{2e}$. If $x'$ belongs to a Type 1 cycle, then it has two neighbors in $\mathcal{B}_2\cup\mathcal{P}_e$. By Lemma~\ref{lem:NAdjB2PeDisjointCP2e}, the remaining neighbor of $x'$, that is $x$, does not belong to $\mathcal{P}_{2e}$, which is a contradiction.
Thus, $x'$ belongs to either a Type~2 cycle or a path in $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$. From Observation \ref{obs:BunchI1}.\ref{BunchI1:NType2Path}, $l(x',I_1)>n$ and therefore, $r(x,I_1)>n$. From the interval assignments for $x$ in (\ref{eqn:B1I1}) and (\ref{eqn:P2eI1}), it immediately follows that $[0,n]\subset f(x,I_1)$.\qed \end{proof} } \subsubsection{Vertices of ${\mathcal{R}}$}\label{sec:RI1} \conf{$ \forall v\in\mathcal{R}, f(v,I_1)=\left[-1,n\right]$.} \journ{\begin{equation}\label{eqn:RI1} \forall v\in\mathcal{R}, f(v,I_1)=\left[-1,n\right]. \end{equation}} \journ{ Consider the set of vertices which have been assigned intervals until now: $\mathcal{A}\cup(\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N})\cup(\mathcal{B}_1\cup\mathcal{P}_{2e})\cup\mathcal{R}= V\setminus(\mathcal{P}_{2i}\cup\mathcal{C})$. \begin{lemma}\label{lem:VminusCP2iSuperI1} $I_1[V\setminus(\mathcal{P}_{2i}\cup\mathcal{C})]$ is a supergraph of $G[V\setminus(\mathcal{P}_{2i}\cup\mathcal{C})]$. \end{lemma} \begin{proof} Let $x,y\in V\setminus(\mathcal{P}_{2i}\cup\mathcal{C})$ be two adjacent vertices in $G$. If $x,y\in\mathcal{A}\cup\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$, then by Lemma \ref{lem:AB2PeNSuperI1}, $x$ is adjacent to $y$ in $I_1$. Let $x\in\mathcal{B}_1\cup\mathcal{P}_{2e}\cup\mathcal{R}$. If $x\in\mathcal{B}_1\cup\mathcal{P}_{2e}$, then by Lemma \ref{lem:B1P2eI1}, and if $x\in\mathcal{R}$, then by (\ref{eqn:RI1}), we have $[0,n]\subset f(x,I_1)$.
If $y\in\mathcal{B}_1\cup\mathcal{P}_{2e}$, again by Lemma \ref{lem:B1P2eI1}, $[0,n]\subset f(y,I_1)$ and if $y\in\mathcal{R}$, then by (\ref{eqn:RI1}), $[0,n]\subset f(y,I_1)$ and therefore, $x$ is adjacent to $y$. If $y\in\mathcal{A}$, then, since $\mathcal{B}_1\subseteq\mathcal{B}$ and $\mathcal{P}_{2e}\subseteq\mathcal{P}$, by Table \ref{tab:nonAdj} (rows 1--3) $x$ cannot be adjacent to $y$ in $G$. Suppose $y\in\mathcal{B}_2\cup\mathcal{P}_e$. If $y$ belongs to a Type 1 cycle, then by Observation \ref{obs:BunchI1}.\ref{BunchI1:Type1}(b), $f(y,I_1)\subset[1,n]$. Otherwise, from Observation \ref{obs:BunchI1}.\ref{BunchI1:B2PeType2Path}, $l(y,I_1)=n$. In either case, $x$ is adjacent to $y$ in $I_1$. Finally, suppose $y\in\mathcal{N}$. By Table \ref{tab:nonAdj} (row 2), $x\notin\mathcal{R}$, which implies that $x\in\mathcal{B}_1\cup\mathcal{P}_{2e}$. From Table \ref{tab:uniqNeighbor} (rows 2 and 3), $y$ is the unique neighbor of $x$ in $\mathcal{N}$. By (\ref{eqn:B1I1}) and (\ref{eqn:P2eI1}), $r(x,I_1)=l(y,I_1)$ and therefore, $x$ and $y$ are adjacent in $I_1$. Hence proved. \qed \end{proof} } \subsubsection{Vertices of ${\mathcal{P}_{2i}}$} Suppose $v\in\mathcal{P}_{2i}$. Let $v'$ be its unique neighbor in $\mathcal{R}\cup\mathcal{N}$ (see Table \ref{tab:uniqNeighbor} row 4). Note that $f(v',I_1)$ is already defined Sections \ref{sec:B2PeNI1} and \ref{sec:RI1}. \begin{eqnarray} &&\textrm{if $v'\in\mathcal{R}$, then, } f(v,I_1)=\left[-1,-1\right], \label{eqn:P2iRI1}\\ &&\textrm{if $v'\in\mathcal{N}$, then, } f(v,I_1)=\left[-1,l(v',I_1)\right].\label{eqn:P2iNI1} \end{eqnarray} \journ{ \begin{lemma}\label{lem:VminusCSuperI1} $I_1[V\setminus\mathcal{C}]$ is a supergraph of $G[V\setminus\mathcal{C}]$. \end{lemma} \begin{proof} Let $x,y\in V\setminus\mathcal{C}$ be two adjacent vertices in $G$. By Lemma \ref{lem:VminusCP2iSuperI1}, if $x,y\in V\setminus(\mathcal{C}\cup\mathcal{P}_{2i})$, then, they are adjacent in $I_1$. Suppose $x\in\mathcal{P}_{2i}$. By definition, $x$ is an interior vertex of a special path in $\mathcal{P}$ and therefore, it is adjacent to two vertices from the path. Moreover, it is not adjacent to any end point of this path since otherwise it would be present in $\mathcal{P}_{2e}$. Therefore, two of the neighbors of $x$ are in $\mathcal{P}_{2i}\cup\mathcal{P}_{2e}$. By Table \ref{tab:uniqNeighbor} (row 4), the third neighbor of $x$ has to be in $\mathcal{R}\cup\mathcal{N}$. From this we infer that $y\in\mathcal{P}_{2i}\cup\mathcal{P}_{2e}\cup\mathcal{R}\cup\mathcal{N}$. Note that $l(x,I_1)=-1$ by (\ref{eqn:P2iRI1}) and (\ref{eqn:P2iNI1}). If $y\in\mathcal{P}_{2i}\cup\mathcal{P}_{2e}\cup\mathcal{R}$, then by the interval assignments (\ref{eqn:P2eI1}--\ref{eqn:P2iNI1}), it follows that $l(y,I_1)=-1$. If $y\in\mathcal{N}$, then, by (\ref{eqn:P2iNI1}), $r(x,I_1)=l(y,I_1)$. In either case, $x$ is adjacent to $y$ in $I_1$. Hence proved. \qed \end{proof}
\begin{observation}\label{obs:P2iRTouch} If $x\in\mathcal{P}_{2i}$ and $y\in\mathcal{R}$ are adjacent in $G$, then, $r(x,I_1)=l(y,I_1)$. From Table \ref{tab:uniqNeighbor} (row 4), $y$ is the only neighbor of $x$ in $\mathcal{R}\cup\mathcal{N}$. The interval assignment for $x$ is given by (\ref{eqn:P2iRI1}) and for $y$ by (\ref{eqn:RI1}), from which it follows that $r(x,I_1)=l(y,I_1)=-1$. \end{observation} } \subsubsection{Vertices of ${\mathcal{C}}$} We recall that $\mathcal{C}$ induces a collection of cycles in $G$. \begin{definition}{\bf\boldmath Notation $\eta(\cdot)$ and special vertex:}\label{def:specialVertex} We recall from Table \ref{tab:uniqNeighbor} (row 3) that every vertex $x\in\mathcal{C}$ has a unique neighbor in $\mathcal{N}$. We denote this neighbor by $\eta(x)$. For a cycle $C$ in $\mathcal{C}$, we define a vertex $c\in C$ to be the special vertex of $C$ if $l(\eta(c),I_1)=\displaystyle\min_{c'\in C}l(\eta(c'),I_1)$. Note that $\eta(c)$ is already assigned an interval in Section \ref{sec:B2PeNI1}. \end{definition} Suppose $C$ is a cycle in $\mathcal{C}$. Let $C=c_1c_2\ldots c_tc_1$ be a natural ordering of the vertices of $C$ such that $c_1$ is the special vertex of $C$. The interval assignments are as follows: \begin{equation}\label{eqn:CI1} \begin{array}{ll} f(c_1,I_1)=\left[l(\eta(c_1),I_1),l(\eta(c_1),I_1)\right],\\ f(c_i,I_1)=\left\{
\begin{array}{ll}
\left[l(\eta(c_1),I_1),l(\eta(c_i),I_1)+0.5\right], & i=2,t,\\
\left[l(\eta(c_1),I_1)+0.5,l(\eta(c_i),I_1)+0.5\right], & \textrm{otherwise}.\\
\end{array}\right. \end{array} \end{equation} \journ{ Since $l(\eta(c_1),I_1)<l(\eta(c_i),I_1)$ for $i\ne1$, we observe that the intervals are well-defined.
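For instance, consider a hypothetical cycle $C=c_1c_2c_3c_4c_1$ of $\mathcal{C}$ with $l(\eta(c_1),I_1)=3$, $l(\eta(c_2),I_1)=7$, $l(\eta(c_3),I_1)=5$ and $l(\eta(c_4),I_1)=9$, so that $c_1$ is the special vertex. Then (\ref{eqn:CI1}) gives $f(c_1,I_1)=[3,3]$, $f(c_2,I_1)=[3,7.5]$, $f(c_3,I_1)=[3.5,5.5]$ and $f(c_4,I_1)=[3,9.5]$: among the vertices of $C$, the point interval of $c_1$ meets only $c_2$ and $c_4$, the remaining three intervals pairwise intersect (all contain the point $3.5$), and each $f(c_i,I_1)$ contains $l(\eta(c_i),I_1)$, as recorded in the next observation.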
\begin{observation}\label{obs:CI1} Let $C=c_1c_2\ldots c_tc_1$ be a cycle in $\mathcal{C}$ with $c_1$ being the special vertex. \begin{enumerate} \item Among the vertices of $C$, $c_1$ is adjacent to only $c_2$ and $c_t$ in $I_1$. Further, $r(c_1,I_1)=l(c_2,I_1)=l(c_t,I_1)$. This is easy to infer from (\ref{eqn:CI1}).\label{CI1:special} \item $C\setminus\{c_1\}$ is a clique in $I_1$. Since $l(\eta(c_1),I_1)\le l(\eta(c),I_1)$ for every $c\in C$, from (\ref{eqn:CI1}), it follows that $\forall c\in C\setminus\{c_1\}$, $l(\eta(c_1),I_1)+0.5\in f(c,I_1)$.\label{CI1:nonSpecial} \item\label{CI1:cEtacI1} For every vertex $c\in C$, $l(\eta(c),I_1)\in f(c,I_1)$ and therefore, $c$ is adjacent to $\eta(c)$ in $I_1$. The proof is as follows: For $c\in\{c_1,c_2,c_t\}$ this is immediate from (\ref{eqn:CI1}), since $l(\eta(c_1),I_1)\le l(\eta(c),I_1)$. For the remaining vertices, $l(\eta(c),I_1)>l(\eta(c_1),I_1)$ and both are integers (by Observations \ref{obs:BunchI1}.\ref{BunchI1:Type1} and \ref{obs:BunchI1}.\ref{BunchI1:NType2Path}, $l(z,I_1)$ is either $\Pi_1(z)$ or $n+\Pi_1(z)$ for every $z\in\mathcal{N}$), so $l(\eta(c),I_1)\ge l(\eta(c_1),I_1)+1>l(\eta(c_1),I_1)+0.5$ and hence $l(\eta(c),I_1)\in f(c,I_1)$. \end{enumerate} \end{observation}
\noindent Now we will show the following: \begin{lemma} \label{lem:superI1} $I_1$ is a supergraph of $G$. \end{lemma} \begin{proof} Let $x,y\in V$ be two adjacent vertices in $G$. If $x,y\in V\setminus\mathcal{C}$, then, by Lemma \ref{lem:VminusCSuperI1}, $x$ and $y$ are adjacent in $I_1$. If $x,y\in\mathcal{C}$, then, they belong to the same component (which is a special cycle) in $G[\mathcal{C}]$. From Observation \ref{obs:CI1} (Points \ref{CI1:special} and \ref{CI1:nonSpecial}), we can infer that they are adjacent in $I_1$. The only case remaining is the one in which one vertex is in $\mathcal{C}$ and the other in $V\setminus\mathcal{C}$.
Suppose $x\in\mathcal{C}$ and $y\in V\setminus\mathcal{C}$. Since $x$ belongs to a special cycle in $\mathcal{C}$, it has two neighbors in the cycle. The remaining neighbor is $y$. By Table \ref{tab:uniqNeighbor} (row 3), $y\in\mathcal{N}$. Moreover, by the notation introduced in Definition \ref{def:specialVertex}, $y=\eta(x)$. From Observation \ref{obs:CI1}.\ref{CI1:cEtacI1}, it follows that $y$ is adjacent to $x$ in $I_1$. Hence proved.\qed \end{proof}
Recall that we need to show that $E(G)=E(I_1)\cap E(I_2)\cap E(I_3)$. For this, we will have to prove that every missing edge in $G$ is missing in at least one of the three interval graphs. Note that there is no edge between $\mathcal{A}$ and $V\setminus(\mathcal{A}\cup\mathcal{N})$ (Table \ref{tab:nonAdj} rows 1--3). Now we will show that all these missing edges of $G$ are also missing in $I_1$. \begin{lemma}\label{lem:AANnonAdj} Let $x\in\mathcal{A}$ and $y\in V\setminus(\mathcal{A}\cup\mathcal{N})$. Then, $(x,y)\notin E(I_1)$. \end{lemma} \begin{proof} From (\ref{eqn:AIsolatedI1}--\ref{eqn:AEdgeyI1}), $l(x,I_1)>2n$. Now we will show that $r(y,I_1)\le2n$, from which it immediately follows that $(x,y)\notin E(I_1)$. If $y\in\mathcal{B}_2\cup\mathcal{P}_e$, from Observations \ref{obs:BunchI1}.\ref{BunchI1:Type1} and \ref{obs:BunchI1}.\ref{BunchI1:B2PeType2Path}, $r(y,I_1)\le2n-1$. If $y\in\mathcal{B}_1\cup\mathcal{P}_{2e}$, then by (\ref{eqn:B1I1}) and (\ref{eqn:P2eI1}), $r(y,I_1)=l(z,I_1)$ for some $z\in\mathcal{N}$. If $y\in\mathcal{P}_{2i}$, then by (\ref{eqn:P2iRI1}) and (\ref{eqn:P2iNI1}), either $r(y,I_1)=-1$ or $r(y,I_1)=l(z,I_1)$ for some $z\in\mathcal{N}$. If $y\in\mathcal{C}$, then from (\ref{eqn:CI1}), $r(y,I_1)\le l(z,I_1)+0.5$ for some $z\in\mathcal{N}$. If $y\in\mathcal{R}$, then by (\ref{eqn:RI1}), $r(y,I_1)=n$. From Observations \ref{obs:BunchI1}.\ref{BunchI1:Type1} and \ref{obs:BunchI1}.\ref{BunchI1:NType2Path}, $\forall z\in\mathcal{N}$, $l(z,I_1)<2n-1$ and therefore, it follows that in each case $r(y,I_1)<2n$. Thus, $(x,y)\notin E(I_1)$. Hence proved. \qed \end{proof} } \subsection{Construction of $I_2$}\label{sec:I2} \subsubsection{Vertices of ${\mathcal{A}}$} We recall the interval assignment for $\mathcal{A}$ in $I_1$ (see Section \ref{sec:AI1}). Let $\overline{\Pi_A}$ be the reverse of $\Pi_A$. The interval assignments for vertices of $\mathcal{A}$ in $I_2$ are as follows: \paragraph{\bf An isolated vertex} $u$ is given a point interval as follows: \begin{equation}\label{eqn:AIsolatedI2} f(u,I_2)=[n+\overline{\Pi_A}(u),n+\overline{\Pi_A}(u)]. \end{equation} \paragraph{\bf End points of an isolated edge} $(u,v)$: Without loss of generality, let $\Pi_A(v)=\Pi_A(u)+1$. This implies that $\overline{\Pi_A}(v)=\overline{\Pi_A}(u)-1$. We assign the intervals to $u$ and $v$ as follows: \begin{equation}\label{eqn:AEdgeI2} \begin{array}{ll} f(u,I_2)=[n+\overline{\Pi_A}(u)-0.5,n+\overline{\Pi_A}(u)],\\ f(v,I_2)=[n+\overline{\Pi_A}(v),n+\overline{\Pi_A}(v)+0.5]. \end{array} \end{equation} \journ{ Note that the two intervals intersect at $n+\overline{\Pi_A}(u)-0.5=n+\overline{\Pi_A}(v)+0.5$.
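Continuing the hypothetical example from Section \ref{sec:AI1} (an isolated vertex $w$ and an isolated edge $(u,v)$ with $\Pi_A(w)=1$, $\Pi_A(u)=2$, $\Pi_A(v)=3$) and reading $\overline{\Pi_A}(x)$ as $|\mathcal{A}|+1-\Pi_A(x)$, we get $\overline{\Pi_A}(w)=3$, $\overline{\Pi_A}(u)=2$ and $\overline{\Pi_A}(v)=1$, so that (\ref{eqn:AIsolatedI2}) and (\ref{eqn:AEdgeI2}) give $f(w,I_2)=[n+3,n+3]$, $f(u,I_2)=[n+1.5,n+2]$ and $f(v,I_2)=[n+1,n+1.5]$. Again $u$ and $v$ meet at exactly one point and $w$ is isolated, but the relative order of the intervals along the line is reversed with respect to $I_1$.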
\begin{observation}\label{obs:AGI2} The graph induced by $\mathcal{A}$ in $I_2$ and $G$ are identical, that is, $I_2[\mathcal{A}]=G[\mathcal{A}]$. The proof is similar to that of Lemma \ref{lem:AGI1}. \end{observation} } \subsubsection{Vertices of ${\mathcal{N}}$} Let $v\in\mathcal{N}$. From Table \ref{tab:uniqNeighbor} (row 1), $v$ is adjacent to at most one vertex in $\mathcal{A}$. \begin{eqnarray} &&\textrm{if $v$ is not adjacent to any vertex in $\mathcal{A}$, then, } f(v,I_2)=[0,n], \label{eqn:NnoneI2} \\ &&\textrm{if $v$ is adjacent to vertex $a$ in $\mathcal{A}$, then, } f(v,I_2)=\left[0,l(a,I_2)\right].\label{eqn:NAI2} \end{eqnarray} \journ{ Note that $l(a,I_2)$ is already defined in (\ref{eqn:AIsolatedI2}) and (\ref{eqn:AEdgeI2}) and satisfies, $l(a,I_2)>n$. Hence, we have the following observation. \begin{observation}\label{obs:NCliqueI2} In $I_2$, $\forall x\in\mathcal{N}$, $[0,n]\subseteq f(x,I_2)$. \end{observation}
\begin{lemma}\label{lem:ANSuperI2} $I_2[\mathcal{A}\cup\mathcal{N}]$ is a supergraph of $G[\mathcal{A}\cup\mathcal{N}]$. \end{lemma} \begin{proof} Let $x,y\in\mathcal{A}\cup\mathcal{N}$ be two adjacent vertices in $G$. If $x,y\in\mathcal{A}$, from Observation~\ref{obs:AGI2}, $(x,y)\in E(I_2)$. If $x,y\in\mathcal{N}$, from Observation \ref{obs:NCliqueI2}, $(x,y)\in E(I_2)$. If $x\in\mathcal{N}$ and $y\in\mathcal{A}$, then by Table \ref{tab:nonAdj} (row 9) and Table \ref{tab:uniqNeighbor} (row 1), $x\in\mathcal{N}_e$ and $y$ is its only neighbor in $\mathcal{A}$. From (\ref{eqn:NAI2}), $r(x,I_2)=l(y,I_2)$ and therefore, $(x,y)\in E(I_2)$.\qed \end{proof}
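The reversed orderings of $\mathcal{A}$ in $I_1$ and $I_2$ are what eliminate the non-edges between $\mathcal{N}$ and $\mathcal{A}$. On the hypothetical vertices $w,u,v\in\mathcal{A}$ used earlier ($\Pi_A(w)=1$, $\Pi_A(u)=2$, $\Pi_A(v)=3$), suppose some $x\in\mathcal{N}_e$ has $u$ as its only neighbor in $\mathcal{A}$. Then, by Observation \ref{obs:BunchI1}.\ref{BunchI1:ANTouch} and (\ref{eqn:NAI2}), $r(x,I_1)=l(u,I_1)=2n+2$ and $r(x,I_2)=l(u,I_2)=n+1.5$. The non-neighbor $v$ satisfies $l(v,I_1)=2n+2.5>r(x,I_1)$, so $(x,v)\notin E(I_1)$, whereas the non-neighbor $w$, whose interval in $I_1$ does intersect that of $x$, satisfies $l(w,I_2)=n+3>r(x,I_2)$, so $(x,w)\notin E(I_2)$. Lemma \ref{lem:ANnonAdj} below makes this precise.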
\begin{lemma}\label{lem:ANnonAdj} If $x\in\mathcal{N}$ and $y\in\mathcal{A}$ such that $(x,y)\notin E(G)$, then, $(x,y)\notin E(I_1\cap I_2)$. \end{lemma} \begin{proof} Suppose $x$ is not adjacent to any vertex in $\mathcal{A}$. In $I_2$, by (\ref{eqn:NnoneI2}), $r(x,I_2)=n$ and by (\ref{eqn:AIsolatedI2}) and (\ref{eqn:AEdgeI2}), $l(y,I_2)>n$ and therefore, $(x,y)\notin E(I_2)$. Let us assume that $x$ is adjacent to some vertex in $\mathcal{A}$, say $a$. From Table \ref{tab:nonAdj} (row 9), $x\in\mathcal{N}_e$ and by Table \ref{tab:uniqNeighbor} (row 1), $a$ is the only neighbor of $x$ in $\mathcal{A}$. From the interval assignment in (\ref{eqn:B2PeNp1ptxI1}), $r(x,I_1)=l(a,I_1)$ and from (\ref{eqn:NAI2}), $r(x,I_2)=l(a,I_2)$. If $\Pi_A(a)<\Pi_A(y)$, then, $l(a,I_1)<l(y,I_1)$ (this is easy to infer from (\ref{eqn:AIsolatedI1}--\ref{eqn:AEdgeyI1})) and therefore, $(x,y)\notin E(I_1)$. Otherwise, since $\Pi_A(a)>\Pi_A(y)$, it implies that $\overline{\Pi_A}(a)<\overline{\Pi_A}(y)$ which in turn implies that $l(a,I_2)<l(y,I_2)$ (see interval assignments in (\ref{eqn:AIsolatedI2}) and (\ref{eqn:AEdgeI2})). Therefore, $(x,y)\notin E(I_2)$. \qed \end{proof} } \subsubsection{Vertices of ${\mathcal{C}\cup\mathcal{P}}$} We recall that $\mathcal{C}\cup\mathcal{P}$ induces a collection of special cycles and special paths in $G$.
\begin{definition}\label{def:Pi2} $\Pi_2$ is an ordering of $\mathcal{C}\cup\mathcal{P}$ such that the following properties are satisfied: \begin{enumerate} \item Let $P$ be a path in $\mathcal{P}$. For a natural ordering of $P$, say $p_1p_2\ldots p_t$, we have $\Pi_2(p_{i})= \Pi_2(p_{i-1})+1$, $2\le i\le t$. \item Suppose $C$ is a cycle in $\mathcal{C}$. For a natural ordering of $C$, say $c_1c_2\ldots c_tc_1$, where $c_1$ is the special vertex (recall Definition \ref{def:specialVertex}), we have $\Pi_2(c_{i})= \Pi_2(c_{i-1})+1$, $2\le i\le t$. \end{enumerate} \end{definition} \journ{ It is easy to see that such an ordering $\Pi_2$ exists. Also note that if $S_1$ and $S_2$ are two different components of $G[\mathcal{C}\cup\mathcal{P}]$, then, either $\Pi_2(S_1)<\Pi_2(S_2)$ or $\Pi_2(S_2)<\Pi_2(S_1)$. The interval assignments are as follows: } \paragraph{\bf For the vertices of a path:} Suppose $P=p_1p_2\ldots p_t$ such that $\Pi_2(p_{i+1})= \Pi_2(p_i)+1$, $1\le i<t$. \begin{equation}\label{eqn:PI2} \begin{array}{l} f(p_i,I_2)=\left[\Pi_2(p_i),\Pi_2(p_i)+1\right], 1\le i<t,\\ f(p_t,I_2)=\left[\Pi_2(p_t),\Pi_2(p_t)+0.5\right]. \end{array} \end{equation} \journ{ \begin{observation}\label{obs:PSuperI2} Let $P=p_1p_2\ldots p_t$ be a special path from $\mathcal{P}$ such that $\Pi_2(p_{i+1})= \Pi_2(p_{i})+1$, $1\le i<t$. Then, for $1<i\le t$, $l(p_i,I_2)=r(p_{i-1},I_2)$ and hence $I_2[P]=G[P]$. \end{observation} } \paragraph{\bf For the vertices of a cycle:} Suppose $C=c_1c_2\ldots c_tc_1$ such that $\Pi_2(c_{i+1})= \Pi_2(c_{i})+1$, $1\le i<t$. \begin{equation} \begin{array}{l}\label{eqn:CI2} f(c_1,I_2)= \left[\Pi_2(c_1),\Pi_2(c_t)\right],\\ f(c_i,I_2)= \left[\Pi_2(c_i),\Pi_2(c_i)+1\right],1<i<t,\\ f(c_t,I_2)= \left[\Pi_2(c_t),\Pi_2(c_t)+0.5\right]. \end{array} \end{equation} \journ{ \begin{observation}\label{obs:CI2} Suppose $C=c_1c_2\ldots c_tc_1$ is a special cycle from $\mathcal{C}$ such that $\Pi_2(c_{i+1})=\Pi_2(c_i)+1$, $1\le i<t$. \begin{enumerate} \item For $2\le i<t$, $l(c_{i+1},I_2)=r(c_i,I_2)$ and therefore, $I_2[C\setminus c_1]=G[C\setminus c_1]$.\label{CI2:touch} \item $I_2[C]$ is a supergraph of $G[C]$. The proof is as follows: $c_1$ is adjacent to all the other vertices of $C$ in $I_2$ (since $l(c_i,I_2)=\Pi_2(c_i)\in f(c_1,I_2)$ for every $i$) and from Point 1 in this observation, $I_2[C\setminus c_1]=G[C\setminus c_1]$.\label{CI2:CSuperI2} \end{enumerate} \end{observation}
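As a small hypothetical example, let $C=c_1c_2c_3c_4c_1$ be a special cycle with $\Pi_2(c_1)=2$. Then (\ref{eqn:CI2}) assigns $f(c_1,I_2)=[2,5]$, $f(c_2,I_2)=[3,4]$, $f(c_3,I_2)=[4,5]$ and $f(c_4,I_2)=[5,5.5]$. All four edges of $C$ are realized and the non-edge $(c_2,c_4)$ is absent; the only extra pair is $(c_1,c_3)$, which is exactly the kind of pair that is absent in $I_1$ by Observation \ref{obs:CI1}.\ref{CI1:special}. This interplay is what Lemma \ref{lem:CPI1I2} below exploits.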
\begin{observation}\label{obs:CP0nI2} For every $x\in\mathcal{C}\cup\mathcal{P}$ and $y\in\mathcal{N}$, $f(x,I_2)\subset f(y,I_2)$. The proof is as follows: $G$ is a cubic graph whereas $G[\mathcal{C}\cup\mathcal{P}]$ has maximum degree $2$ and therefore, $\mathcal{C}\cup\mathcal{P}\ne V$. This implies that $\forall z\in\mathcal{C}\cup\mathcal{P}$, $\Pi_2(z)<n$. Taking this into consideration, from interval assignments (\ref{eqn:PI2}) and (\ref{eqn:CI2}) we can infer that $0<l(x,I_2)<r(x,I_2)<n$ and therefore, $f(x,I_2)\subset[0,n]$. From Observation \ref{obs:NCliqueI2}, $[0,n]\subseteq f(y,I_2)$. Hence proved. \end{observation}
\begin{lemma}\label{lem:CPnonAdjI2} If $x,y\in\mathcal{C}\cup\mathcal{P}$ belong to different components in $G[\mathcal{C}\cup\mathcal{P}]$, then, $(x,y)\notin E(I_2)$. \end{lemma} \begin{proof} Let $x\in S_x$ and $y\in S_y$, where $S_x$ and $S_y$ are two different components of $G[\mathcal{C}\cup\mathcal{P}]$. Without loss of generality we will assume that $\Pi_2(S_x)<\Pi_2(S_y)$. Irrespective of whether $S_x$ (or $S_y$) induces a path or a cycle in $G[\mathcal{C}\cup\mathcal{P}]$, from (\ref{eqn:PI2}) and (\ref{eqn:CI2}), it follows that $\displaystyle r(x,I_2)\le\max_{a\in S_x}\Pi_2(a)+0.5<\min_{b\in S_y}\Pi_2(b)\le l(y,I_2)$. Therefore, $(x,y)\notin E(I_2)$. \qed \end{proof}
\begin{lemma}\label{lem:CPI1I2} The graph induced by $\mathcal{C}\cup\mathcal{P}$ in $G$ and $I_1\cap I_2$ are identical, that is, $G[\mathcal{C}\cup\mathcal{P}]=(I_1\cap I_2)[\mathcal{C}\cup\mathcal{P}]$. \end{lemma} \begin{proof} Let $x,y\in\mathcal{C}\cup\mathcal{P}$. First we will show that if $(x,y)\in E(G)$, then $(x,y)\in E(I_1\cap I_2)$. Clearly, from Lemma \ref{lem:superI1}, $(x,y)\in E(I_1)$. Since $x$ and $y$ are adjacent in $G$, they belong to the same path or cycle in $G[\mathcal{C}\cup\mathcal{P}]$, say $S$. From Observations \ref{obs:PSuperI2} and \ref{obs:CI2}.\ref{CI2:CSuperI2}, it follows that $I_2[S]$ is a supergraph of $G[S]$. Therefore, $(x,y)\in E(I_2)$. Hence, $(x,y)\in E(I_1\cap I_2)$.
Now we will show that if $(x,y)\notin E(G)$, then, either $(x,y)\notin E(I_1)$ or $(x,y)\notin E(I_2)$. There are two cases to consider: (1) $x$ and $y$ belong to different components in $G[\mathcal{C}\cup\mathcal{P}]$ and (2) they belong to the same component. If it is Case (1), then by Lemma \ref{lem:CPnonAdjI2}, $(x,y)\notin E(I_2)$. If it is Case (2), then, let $x,y\in S$, where $S$ is a component of $G[\mathcal{C}\cup\mathcal{P}]$. If $S$ is a special path, then by Observation \ref{obs:PSuperI2}, $I_2[S]=G[S]$ and therefore, $(x,y)\notin E(I_2)$. Suppose $S$ is a special cycle. Let $S=c_1c_2\ldots c_tc_1$, where $c_1$ is the special vertex. If neither $x$ nor $y$ is $c_1$, then, they are not adjacent in $I_2$. This is because, by Observation \ref{obs:CI2}.\ref{CI2:touch}, $I_2[C\setminus\{c_1\}]=G[C\setminus\{c_1\}]$. Suppose $x=c_1$, then clearly, $y\ne c_2,c_t$. By Observation \ref{obs:CI1}.\ref{CI1:special}, in $I_1$, $c_1$ is adjacent to only $c_2$ and $c_t$. Thus, $x$ and $y$ are not adjacent in $I_1$. Hence, $(x,y)\notin E(I_1\cap I_2)$. \qed \end{proof}
\begin{lemma}\label{lem:ANCPSuperI2} $I_2[\mathcal{A}\cup\mathcal{N}\cup\mathcal{C}\cup\mathcal{P}]$ is a supergraph of $G[\mathcal{A}\cup\mathcal{N}\cup\mathcal{C}\cup\mathcal{P}]$. \end{lemma} \begin{proof} Let $x,y\in\mathcal{A}\cup\mathcal{N}\cup\mathcal{C}\cup\mathcal{P}$ be two adjacent vertices in $G$. If $x,y\in\mathcal{A}\cup\mathcal{N}$, then by Lemma \ref{lem:ANSuperI2}, $(x,y)\in E(I_2)$ and if $x,y\in\mathcal{C}\cup\mathcal{P}$, then, from Lemma \ref{lem:CPI1I2} we can infer that $(x,y)\in E(I_2)$. Therefore, we will assume that $x\in\mathcal{A}\cup\mathcal{N}$ and $y\in\mathcal{C}\cup\mathcal{P}$. By Table \ref{tab:nonAdj} (row 1), $x\notin\mathcal{A}$ and hence, $x\in\mathcal{N}$. From Observation \ref{obs:CP0nI2}, $f(y,I_2)\subset f(x,I_2)$. Therefore, $(x,y)\in E(I_2)$. \qed \end{proof} } \subsubsection{Vertices of ${\mathcal{R}\cup\mathcal{B}}$} \label{sec:RBI2} From Lemma \ref{lem:componentsY}, we recall that each component of $G[\mathcal{R}\cup\mathcal{B}]$ is isomorphic to one of the graphs shown in Figure \ref{fig:componentsY}. Further, each component contains exactly one vertex from $\mathcal{R}$ and is uniquely identified by it; by the notation introduced in Lemma \ref{lem:componentsY}, for every $u\in\mathcal{R}$, $\Gamma(u)$ denotes the component containing $u$ in $G[\mathcal{R}\cup\mathcal{B}]$. \begin{definition}{\bf\boldmath Notation $\beta(\cdot)$:}\label{def:baseVertex} In the graph induced by $\mathcal{R}\cup\mathcal{B}$, consider each component $\Gamma(u)$, $u\in\mathcal{R}$. From Table \ref{tab:uniqNeighbor} (row 5), $u$ is adjacent to a unique vertex in $\mathcal{P}_{2i}$. We denote this vertex by $\beta(u)$. \end{definition}
\paragraph{\bf Interval assignments for vertices of $\mathcal{R}\cup\mathcal{B}$:} Let us consider a component of $G[\mathcal{R}\cup\mathcal{B}]$, say $\Gamma(u)$, $u\in\mathcal{R}$. From (\ref{eqn:PI2}), we note that $\beta(u)$ is assigned a unit interval in $I_2$. The interval assignments for the vertices of $\Gamma(u)$ are illustrated in Figure \ref{fig:componentsYInterval}.
\journ{ \begin{remark}\label{rem:RBI2} Let $u\in\mathcal{R}$. Every vertex of $\Gamma(u)$ is assigned a strict sub-interval of $f(\beta(u),I_2)$ and none of these intervals contains any end point of $f(\beta(u),I_2)$. \end{remark} }
\begin{figure}
\caption{The interval assignments for each component $\Gamma(u)$, $u\in\mathcal{R}$ induced by $\mathcal{R}\cup\mathcal{B}$ in the interval graph $I_2$. \journ{The dotted vertical lines are used to indicate that the concerned intervals intersect exactly at their end points, that is, in (a) $r(x_1,I_2)=l(x_1',I_2)$, in (c) $r(x_2,I_2)=l(x_1,I_2)$, in (d) $l(x_2',I_2)=r(x_1',I_2)$ and in (e) $r(x_2,I_2)=l(x_1,I_2)$ and $l(x_2',I_2)=r(x_1',I_2)$}. }
\label{fig:componentsYInterval}
\end{figure} \journ{ \begin{lemma}\label{lem:RBI2} Let $z\in\mathcal{R}$ and $\Gamma(z)$ be a component of $G[\mathcal{R}\cup\mathcal{B}]$. \begin{enumerate} \item Every vertex in $\Gamma(z)$ is adjacent to only one vertex in $\mathcal{C}\cup\mathcal{P}$ and that is $\beta(z)$. \item The graph induced by $\mathcal{R}\cup\mathcal{B}$ in $I_2$ and in $G$ are identical, that is, $I_2[\mathcal{R}\cup\mathcal{B}]=G[\mathcal{R}\cup\mathcal{B}]$. \end{enumerate} \end{lemma} \begin{proof} Consider the vertex $\beta(z)$. Let $f_o(\beta(z),I_2)$ denote the open interval $\left(l(\beta(z),I_2),r(\beta(z),I_2)\right)$. From Definition \ref{def:baseVertex}, we recall that $\beta(z)\in\mathcal{P}_{2i}$. We first prove the following: \begin{clm}\label{clm:ppDashI2} Let $p\in\mathcal{C}\cup\mathcal{P}$ such that $p\ne\beta(z)$. Then, $f_o(\beta(z),I_2)\cap f(p,I_2)=\varnothing$. \end{clm} \begin{proof} If $\beta(z)$ and $p$ belong to different components in $G[\mathcal{C}\cup\mathcal{P}]$, then by Lemma \ref{lem:CPnonAdjI2}, $(\beta(z),p)\notin E(I_2)$ and therefore, their intervals do not intersect. Suppose $\beta(z)$ and $p$ are in the same component. Since $\beta(z)\in\mathcal{P}_{2i}$, $\beta(z)$ and $p$ belong to a special path. By (\ref{eqn:PI2}), it follows that $f(\beta(z),I_2)$ and $f(p,I_2)$ can intersect only at $l(\beta(z),I_2)$ or $r(\beta(z),I_2)$. Hence proved. \bqed \end{proof} By Remark \ref{rem:RBI2}, for every $x\in\Gamma(z)$, $f(x,I_2)\subseteq f_o(\beta(z),I_2)$. This implies that $x$ is adjacent to $\beta(z)$ in $I_2$ and by Claim \ref{clm:ppDashI2}, $x$ is not adjacent to any other vertex from $\mathcal{C}\cup\mathcal{P}$. Thus, we have proved the first statement.
Suppose $x\in\Gamma(z)$ and $y\in\Gamma(z')$, where $z\ne z'$. We first note that $\beta(z)\ne\beta(z')$. This is because, since $\beta(z)\in\mathcal{P}_{2i}$, in $G[\mathcal{C}\cup\mathcal{P}]$, it is the interior vertex of a special path and therefore, two of its neighbors belong to $\mathcal{P}$. Since its remaining neighbor is $z$, it cannot be adjacent to $z'$. By Remark \ref{rem:RBI2}, $f(x,I_2)\subseteq f_o(\beta(z),I_2)$ and $f(y,I_2)\subseteq f_o(\beta(z'),I_2)$. From Claim \ref{clm:ppDashI2}, $f_o(\beta(z),I_2)\cap f_o(\beta(z'),I_2)=\varnothing$ and therefore, $x$ is not adjacent to $y$. From Figure \ref{fig:componentsYInterval}, it is easy to see that $I_2[\Gamma(z)]=G[\Gamma(z)]$ $\forall z\in\mathcal{R}$. Therefore, $I_2[\mathcal{R}\cup\mathcal{B}]=G[\mathcal{R}\cup\mathcal{B}]$. \bqed \end{proof}
\begin{observation}\label{obs:RBCP} Here are some immediate consequences of Lemma \ref{lem:RBI2}. \begin{enumerate} \item If $x\in\mathcal{R}$ and $y\in\mathcal{P}_{2i}$ such that $(x,y)\notin E(G)$, then, $(x,y)\notin E(I_2)$. This follows from noting that $y\ne\beta(x)$ and subsequently applying Lemma \ref{lem:RBI2} (Statement 1).\label{RBCP:RP2iI2} \item For any $x\in\mathcal{R}\cup\mathcal{B}$ and $y\in\mathcal{C}\cup\mathcal{P}_e\cup\mathcal{P}_{2e}$, $(x,y)\notin E(I_2)$. The proof is as follows: Let $x\in\Gamma(z)$, $z\in\mathcal{R}$. From Lemma \ref{lem:RBI2} (Statement 1), $x$ is not adjacent to any vertex in $\mathcal{C}\cup\mathcal{P}$ other than $\beta(z)$ in $I_2$. From Definition \ref{def:baseVertex}, $\beta(z)\in\mathcal{P}_{2i}$ and therefore, $y\ne\beta(z)$.\label{RBCP:RBCPeP2eI2} \end{enumerate} \end{observation}
\begin{lemma} \label{lem:superI2} $I_2$ is a supergraph of $G$. \end{lemma} \begin{proof} Let $x$ and $y$ be two adjacent vertices in $G$. If $x,y\in \mathcal{A}\cup\mathcal{N}\cup\mathcal{C}\cup\mathcal{P}$, then, by Lemma \ref{lem:ANCPSuperI2}, $(x,y)\in E(I_2)$. If $x,y\in\mathcal{R}\cup\mathcal{B}$, then, by Lemma \ref{lem:RBI2} (Statement 2), $(x,y)\in E(I_2)$. The only remaining case is when $x\in\mathcal{A}\cup\mathcal{N}\cup\mathcal{C}\cup\mathcal{P}$ and $y\in\mathcal{R}\cup\mathcal{B}$. Let $\Gamma(z)$, $z\in\mathcal{R}$ be the component in $G[\mathcal{R}\cup\mathcal{B}]$ containing $y$. By Table \ref{tab:nonAdj} (rows 2, 3, 5 and 6), $x\notin\mathcal{A}\cup\mathcal{C}$ and therefore, $x\in\mathcal{P}\cup\mathcal{N}$.
Suppose $x\in\mathcal{P}$. By Table \ref{tab:nonAdj} (row 6), $y\notin\mathcal{B}$. Since by assumption $y\in\mathcal{R}\cup\mathcal{B}$, it implies that $y\in\mathcal{R}$, which in turn implies that $z=y$ as $z$ is the only vertex from $\mathcal{R}$ in $\Gamma(z)$. From Table \ref{tab:nonAdj} (row 5), we can infer that $x\in\mathcal{P}_{2i}$. Now by Definition \ref{def:baseVertex}, $x=\beta(y)$. By Lemma \ref{lem:RBI2} (Statement 1), $y$ is adjacent to $x$ in $I_2$. Finally, if $x\in\mathcal{N}$, then by Observation \ref{obs:CP0nI2}, $f(x,I_2)\supset f(\beta(z),I_2)$, since $\beta(z)\in\mathcal{P}_{2i}\subset\mathcal{P}$. By Remark \ref{rem:RBI2}, $f(y,I_2)\subset f(\beta(z),I_2)$. Therefore, $f(y,I_2)\subset f(x,I_2)$ and $x$ is adjacent to $y$ in $I_2$. \qed \end{proof} } \subsection{Construction of $I_3$}\label{sec:I3} \subsubsection{Vertices of ${\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}}$}\label{sec:B2PeNI3} We recall the notations and interval assignments developed for this set in $I_1$, in particular the definition of $\Pi_1$ (Definition \ref{def:Pi1}).
\paragraph{\bf For a vertex in a Type 1 cycle:} Let $S=c_1c_2\ldots c_tc_1$ be a Type 1 cycle such that $\Pi_1(c_{i+1})=\Pi_1(c_i)+1$, $1\le i<t$. We recall that $c_1\in\mathcal{N}$ and $c_t\in\mathcal{B}_2\cup\mathcal{P}_e$. The interval assignments are as follows: For $1\le i<t$, \begin{eqnarray} &&\textrm{if $c_i\in\mathcal{N}$, then, }f(c_i,I_3)=\left[\Pi_1(c_i),\Pi_1(c_i)+1\right],\label{eqn:B2PeNType1ciNI3}\\ &&\textrm{if $c_i\in\mathcal{B}_2$, then, }f(c_i,I_3)=\left[\Pi_1(c_i),n\right],\label{eqn:B2PeNType1ciB2I3}\\ &&\textrm{if $c_i\in\mathcal{P}_e$, then, }f(c_i,I_3)=\left[\Pi_1(c_i),n+1\right].\label{eqn:B2PeNType1ciPeI3} \end{eqnarray} The interval assignment for $c_t$ is as follows: \begin{eqnarray} &&\textrm{if $c_t\in\mathcal{B}_2$, then, }f(c_t,I_3)=\left[\Pi_1(c_1)+1,n\right],\label{eqn:B2PeNType1ctB2I3}\\ &&\textrm{if $c_t\in\mathcal{P}_e$, then, }f(c_t,I_3)=\left[\Pi_1(c_1)+1,n+1\right],\label{eqn:B2PeNType1ctPeI3} \end{eqnarray} \journ{ \begin{observation}\label{obs:Type1I3} Consider a Type 1 cycle $S=c_1c_2\ldots c_tc_1$, such that $\Pi_1(c_{i+1})=\Pi_1(c_i)+1$, $1\le i<t$. \begin{enumerate} \item $I_3[S]$ is a supergraph of $G[S]$. The proof is as follows: From Observation \ref{obs:Pi1nMinus1}, $\Pi_1(c_i)<n-1$. Using this in (\ref{eqn:B2PeNType1ciNI3}--\ref{eqn:B2PeNType1ciPeI3}), we note that for $1\le i<t$, $l(c_i,I_3)=\Pi_1(c_i)$ and $\Pi_1(c_i)+1\in f(c_i,I_3)$, and therefore, $c_i$ is adjacent to $c_{i+1}$ in $I_3$. Next, from (\ref{eqn:B2PeNType1ctB2I3}) and (\ref{eqn:B2PeNType1ctPeI3}), it is easy to infer that $f(c_t,I_3)$ contains $\Pi_1(c_i)+1$, $1\le i\le t$. Therefore, $c_t$ is adjacent to all the other vertices of the cycle. Hence proved.\label{Type1I3:Type1SuperI3} \item $r(c_1,I_3)=l(c_2,I_3)=l(c_t,I_3)$.\label{Type1I3:touch} \end{enumerate} \end{observation}
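For illustration, consider a hypothetical Type 1 cycle $S=c_1c_2c_3c_4c_1$ with $c_1\in\mathcal{N}$, $c_2,c_3\in\mathcal{B}_2$, $c_4\in\mathcal{P}_e$ and $\Pi_1(c_i)=i$. Then (\ref{eqn:B2PeNType1ciNI3})--(\ref{eqn:B2PeNType1ctPeI3}) give $f(c_1,I_3)=[1,2]$, $f(c_2,I_3)=[2,n]$, $f(c_3,I_3)=[3,n]$ and $f(c_4,I_3)=[2,n+1]$. All four edges of $S$ are realized; the non-edge $(c_1,c_3)$, which is present in $I_1$ (there $c_1$ meets every vertex of $S$, Observation \ref{obs:Type1I1}.\ref{Type1I1:Type1SuperI1}), is absent here, while the non-edge $(c_2,c_4)$, present here, is already absent in $I_1$, where $f(c_2,I_1)=[2,3]$ and $f(c_4,I_1)=[4,4.5]$ by (\ref{eqn:B2PeNType1ciI1}) and (\ref{eqn:B2PeNType1ctI1}).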
\begin{lemma}\label{lem:Type1NI1I3} Let $x\in\mathcal{N}$ and $y\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$ be such that $x\in S_x$ and $y\in S_y$ where both $S_x$ and $S_y$ induce Type 1 cycles. If $(x,y)\notin E(G)$, then $(x,y)\notin E(I_1\cap I_3)$. \end{lemma} \begin{proof} If $S_x\ne S_y$, by Observation \ref{obs:Type1I1}.\ref{Type1I1:nonAdjType1}, $(x,y)\notin E(I_1)$. Suppose $S_x=S_y=S$. Let $S=c_1c_2\ldots c_tc_1$ such that $\Pi_1(c_{i+1})=\Pi_1(c_{i})+1$, $1\le i<t$. If neither $x$ nor $y$ is $c_1$, then by Observation \ref{obs:Type1I1}.\ref{Type1I1:SMinusc1Touch}, they are not adjacent in $I_1$ since $I_1[S\setminus \{c_1\}]=G[S\setminus\{c_1\}]$. If $x=c_1$, then, $y=c_j$ for some $j\ne 2,t$. In $I_3$, noting that $c_1\in\mathcal{N}$, from (\ref{eqn:B2PeNType1ciNI3}), $r(x,I_3)=r(c_1,I_3)=\Pi_1(c_1)+1$. From (\ref{eqn:B2PeNType1ciNI3}--\ref{eqn:B2PeNType1ciPeI3}), $l(y,I_3)=\Pi_1(y)=\Pi_1(c_j)>\Pi_1(c_1)+1$. Hence, $(x,y)\notin E(I_3)$. The remaining case, $y=c_1$, is identical with the roles of $x$ and $y$ interchanged (note that in this case $x,y\in\mathcal{N}$).\qed \end{proof} } \paragraph{\bf For a vertex in a Type 2 cycle:} Let $S=c_1c_2\ldots c_tc_1$ be a Type 2 cycle such that $\Pi_1(c_{i+1})=\Pi_1(c_i)+1$, $1\le i<t$. We recall from Definition \ref{def:Pi1} (Point 2) that $c_1,c_t\in\mathcal{N}$. They are assigned intervals as follows: \begin{eqnarray} &&f(c_1,I_3)=\left[\Pi_1(c_1),\Pi_1(c_1)+1\right],\label{eqn:B2PeNType2c1I3}\\ &&f(c_t,I_3)=\left[\Pi_1(c_1)+1,\Pi_1(c_t)+0.5\right],\label{eqn:B2PeNType2ctI3} \end{eqnarray} For $c_i$, $1<i<t$, \begin{eqnarray} &&\textrm{if $c_i\in\mathcal{N}$, then, } f(c_i,I_3)=\left[\Pi_1(c_i),\Pi_1(c_i)+1\right],\label{eqn:B2PeNType2NI3}\\ &&\textrm{if $c_i\in\mathcal{B}_2$, then, } f(c_i,I_3)=\left[\Pi_1(c_i),n\right],\label{eqn:B2PeNType2B2I3}\\ &&\textrm{if $c_i\in\mathcal{P}_e$, then, } f(c_i,I_3)=\left[\Pi_1(c_i),n+1\right].\label{eqn:B2PeNType2PeI3} \end{eqnarray} \journ{ \begin{observation}\label{obs:Type2I3} Let $S=c_1c_2\ldots c_tc_1$ be a Type 2 cycle such that $\Pi_1(c_{i+1})=\Pi_1(c_i)+1$, $1\le i<t$. \begin{enumerate} \item $I_3[S]$ is a supergraph of $G[S]$. The proof is as follows: Recalling from Observation \ref{obs:Pi1nMinus1} that $\Pi_1(z)<n-1$, $\forall z\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$, from (\ref{eqn:B2PeNType2c1I3}) and (\ref{eqn:B2PeNType2NI3}--\ref{eqn:B2PeNType2PeI3}), for $1\le i<t$, $\Pi_1(c_{i}),\Pi_1(c_{i})+1\in f(c_{i},I_3)$. Therefore, $c_i$ is adjacent to $c_{i+1}$, $1\le i<t$. From (\ref{eqn:B2PeNType2ctI3}), for $1\le i<t$, $\Pi_1(c_i)+1\in f(c_t,I_3)$. Therefore, $c_t$ is adjacent to all the other vertices of $S$. Hence proved.\label{Type2I3:Type2SuperI3} \item $r(c_1,I_3)=l(c_2,I_3)=l(c_t,I_3)$.\label{Type2I3:c1Touch} \item Let $x,y\in S$ be two adjacent vertices in $G$ such that neither $x$ nor $y$ is $c_t$. If $\Pi_1(x)<\Pi_1(y)$ and $x\in\mathcal{N}$, then, $r(x,I_3)=l(y,I_3)$. This follows by noting that $\Pi_1(y)=\Pi_1(x)+1$ and subsequently applying it in (\ref{eqn:B2PeNType2c1I3}) and (\ref{eqn:B2PeNType2NI3}--\ref{eqn:B2PeNType2PeI3}).\label{Type2I3:touch} \end{enumerate} \end{observation} } \paragraph{\bf For a vertex in a path:} Let $S=p_1p_2\ldots p_t$ be a path such that $\Pi_1(p_{i+1})=\Pi_1(p_i)+1$, $1\le i<t$. We recall from Observation \ref{obs:B2PeN}.\ref{B2PeN:NEndIsolated} and Definition \ref{def:NeNint} that $p_1,p_t\in\mathcal{N}_e$.
The interval assignment for $p_t$ is as follows: \begin{eqnarray} f(p_t,I_3)=\left[\Pi_1(p_t),\Pi_1(p_t)+0.5\right].\label{eqn:B2PeNPathptI3} \end{eqnarray} The interval assignment for $p_i$, $i<t$, is as follows: \begin{eqnarray} &&\textrm{if $p_i\in\mathcal{N}$, then, }f(p_i,I_3)=\left[\Pi_1(p_i),\Pi_1(p_i)+1\right], \label{eqn:B2PeNPathpiNI3}\\ &&\textrm{if $p_i\in\mathcal{B}_2$, then, }f(p_i,I_3)=\left[\Pi_1(p_i),n\right],\label{eqn:B2PeNPathpiB2I3}\\ &&\textrm{if $p_i\in\mathcal{P}_e$, then, }f(p_i,I_3)=\left[\Pi_1(p_i),n+1\right].\label{eqn:B2PeNPathpiPeI3} \end{eqnarray} \journ{ \begin{observation}\label{obs:PathI3} Let $S=p_1p_2\ldots p_t$ be a path such that $\Pi_1(p_{i+1})=\Pi_1(p_i)+1$, $1\le i<t$. \begin{enumerate} \item $I_3[S]$ is a supergraph of $G[S]$. The proof is as follows: Since by Observation \ref{obs:Pi1nMinus1}, $\Pi_1(p_i)<n-1$, we can infer from the interval assignments in (\ref{eqn:B2PeNPathpiNI3}--\ref{eqn:B2PeNPathpiPeI3}) that, for $i<t$, the points $\Pi_1(p_i)$ and $\Pi_1(p_i)+1$ belong to $f(p_i,I_3)$. From (\ref{eqn:B2PeNPathptI3}), the point $\Pi_1(p_t)$ belongs to $f(p_t,I_3)$. This implies that for $1\le i<t$, $p_i$ is adjacent to $p_{i+1}$. Hence, proved.\label{PathI3:PathSuperI3}
\item Let $x,y\in S$ be two adjacent vertices in $G$. If $\Pi_1(x)<\Pi_1(y)$, $x\in\mathcal{N}$ and $y\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$, then, $r(x,I_3)=l(y,I_3)$. This follows by noting that $\Pi_1(y)=\Pi_1(x)+1$ and subsequently applying it in (\ref{eqn:B2PeNPathptI3}--\ref{eqn:B2PeNPathpiPeI3}).\label{PathI3:touch} \end{enumerate} \end{observation}
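Revisiting the hypothetical path $S=p_1p_2p_3p_4$ used to illustrate the $I_1$ assignments in Section \ref{sec:B2PeNI1} ($p_1,p_4\in\mathcal{N}_e$, $p_2\in\mathcal{B}_2$, $p_3\in\mathcal{N}_{int}$, $\Pi_1(p_i)=i+4$), the assignments (\ref{eqn:B2PeNPathptI3})--(\ref{eqn:B2PeNPathpiPeI3}) give $f(p_1,I_3)=[5,6]$, $f(p_2,I_3)=[6,n]$, $f(p_3,I_3)=[7,8]$ and $f(p_4,I_3)=[8,8.5]$. The extra pairs $(p_1,p_3)$ and $(p_1,p_4)$ of $I_1$ are absent here, whereas $(p_2,p_4)$, present here, was absent in $I_1$; hence $(I_1\cap I_3)[S]=G[S]$ for this path, in line with Observation \ref{obs:NnonAdjB2PeNI1I3} below.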
\begin{observation}\label{obs:BunchI3} Here are some observations regarding the intervals assigned to vertices of $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$. We recall from Observation \ref{obs:Pi1nMinus1} that for any $z\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$, $\Pi_1(z)<n-1$. \begin{enumerate} \item If $z\in\mathcal{N}$, $\Pi_1(z)+0.5\le r(z,I_3)\le \Pi_1(z)+1<n$. This follows from the interval assignments in (\ref{eqn:B2PeNType1ciNI3}), (\ref{eqn:B2PeNType2c1I3}--\ref{eqn:B2PeNType2NI3}) and (\ref{eqn:B2PeNPathptI3}--\ref{eqn:B2PeNPathpiNI3}). \label{BunchI3:NLess2n} \item If $z\in\mathcal{B}_2\cup\mathcal{P}_e$ belongs to a Type 2 cycle or a path, then, $l(z,I_3)=\Pi_1(z)<n-1$. This follows from (\ref{eqn:B2PeNType2B2I3}--\ref{eqn:B2PeNType2PeI3}) for Type 2 cycles and (\ref{eqn:B2PeNPathpiB2I3}--\ref{eqn:B2PeNPathpiPeI3}) for paths respectively.\label{BunchI3:B2PeLess2n} \item If $z\in\mathcal{B}_2$, from (\ref{eqn:B2PeNType1ciB2I3}), (\ref{eqn:B2PeNType1ctB2I3}), (\ref{eqn:B2PeNType2B2I3}) and (\ref{eqn:B2PeNPathpiB2I3}), $r(z,I_3)=n$.\label{BunchI3:B2I3} \item If $z\in\mathcal{P}_e$, from (\ref{eqn:B2PeNType1ciPeI3}), (\ref{eqn:B2PeNType1ctPeI3}), (\ref{eqn:B2PeNType2PeI3}) and (\ref{eqn:B2PeNPathpiPeI3}), $r(z,I_3)=n+1$.\label{BunchI3:PeI3} \end{enumerate} \end{observation}
\begin{observation}\label{obs:B2PeNSuperI3} $I_3[\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}]$ is a supergraph of $G[\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}]$. The proof is as follows: Let $S$ be a component of $G[\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}]$. It is either a Type 1 cycle, Type 2 cycle or path. From Observations \ref{obs:Type1I3}.\ref{Type1I3:Type1SuperI3}, \ref{obs:Type2I3}.\ref{Type2I3:Type2SuperI3} and \ref{obs:PathI3}.\ref{PathI3:PathSuperI3}, it follows that $I_3[S]$ is a supergraph of $G[S]$. \end{observation}
\begin{lemma}\label{lem:Type2PathNI1I3} Let $x\in\mathcal{N}$ and $y\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$ be such that $x\in S_x$ and $y\in S_y$ where $S_x$ and $S_y$ induce a Type 2 cycle or a path. If $(x,y)\notin E(G)$, then $(x,y)\notin E(I_1\cap I_3)$. \end{lemma} \begin{proof} We will consider the following two cases separately: (1) $y\in\mathcal{N}$ and (2) $y\in\mathcal{B}_2\cup\mathcal{P}_e$. \paragraph{\boldmath $y\in\mathcal{N}$:} If $S_x\ne S_y$, then, $(x,y)\notin E(I_3)$. The proof is as follows: Without loss of generality let $\Pi_1(S_x)<\Pi_1(S_y)$. From the interval assignments for a vertex of $\mathcal{N}$ in a Type 2 cycle (see (\ref{eqn:B2PeNType2c1I3}--\ref{eqn:B2PeNType2NI3})) and a path (see (\ref{eqn:B2PeNPathptI3}--\ref{eqn:B2PeNPathpiNI3})), $\displaystyle r(x,I_3)\le \max_{a\in S_x}\Pi_1(a)+0.5<\min_{b\in S_y}\Pi_1(b)\le l(y,I_3)$ and therefore, $(x,y)\notin E(I_3)$.
Now we consider the case $S_x=S_y=S$. Let $S=s_1s_2\ldots s_t$, such that $\Pi_1(s_{i+1})=\Pi_1(s_i)+1$, for $1\le i<t$. Since $x$ and $y$ are not adjacent in $S$, by the definition of $\Pi_1$
(Definition \ref{def:Pi1}, Points 1 and 2 for paths and Type 2 cycles respectively), $|\Pi_1(x)-\Pi_1(y)|>1$. Suppose $S$ is a Type 2 cycle. If neither $x$ nor $y$ is $s_t$, then, by (\ref{eqn:B2PeNType2c1I3}) and (\ref{eqn:B2PeNType2NI3}), $f(x,I_3)=[\Pi_1(x),\Pi_1(x)+1]$ and
$f(y,I_3)=[\Pi_1(y),\Pi_1(y)+1]$. Since $|\Pi_1(x)-\Pi_1(y)|>1$, $f(x,I_3)\cap f(y,I_3)=\varnothing$. If $x=s_t$, then, in $I_1$, from (\ref{eqn:B2PeNType2ctI1}), $l(x,I_1)=n+\Pi_1(s_t)$ while, since $y\ne s_{t-1},s_1$, from (\ref{eqn:B2PeNType2othersNI1}), $r(y,I_1)=n+\Pi_1(y)+1<n+\Pi_1(s_t)=l(x,I_1)$. Therefore, $(x,y)\notin E(I_1)$. The case $y=s_t$ is symmetric, since here both $x$ and $y$ are in $\mathcal{N}$. If $S$ is a path, then, assuming without loss of generality that $\Pi_1(x)<\Pi_1(y)$, from (\ref{eqn:B2PeNPathptI3}) and (\ref{eqn:B2PeNPathpiNI3}), $r(x,I_3)=\Pi_1(x)+1$ and
$l(y,I_3)=\Pi_1(y)$. Since $|\Pi_1(x)-\Pi_1(y)|>1$, $r(x,I_3)<l(y,I_3)$. Therefore, $(x,y)\notin E(I_3)$.
\paragraph{\boldmath $y\in\mathcal{B}_2\cup\mathcal{P}_e$:} First we will show the following: \begin{clm}\label{clm:xyg1}
$|\Pi_1(x)-\Pi_1(y)|>1$. \end{clm} \begin{proof} Suppose $S_x=S_y$, that is, both $x$ and $y$ belong to the same component. Since $x$ is not adjacent to $y$, from the definition of $\Pi_1$
(Definition \ref{def:Pi1}), $|\Pi_1(x)-\Pi_1(y)|>1$. Suppose $S_x\ne S_y$. Let $S_y=s_1s_2\ldots s_t$ such that $\Pi_1(s_{i+1})=\Pi_1(s_i)+1$,
$1\le i<t$. If $|\Pi_1(x)-\Pi_1(y)|=1$, then, $y$ must be either $s_1$ or $s_t$. But, $s_1,s_t\in\mathcal{N}$ since $S_y$ is either a Type 2 cycle or a path (by Definition \ref{def:Pi1}). This contradicts the assumption that $y\in\mathcal{B}_2\cup\mathcal{P}_e$.\bqed \end{proof}
Suppose $\Pi_1(x)<\Pi_1(y)$. From Observation \ref{obs:BunchI3}.\ref{BunchI3:NLess2n}, $r(x,I_3)\le \Pi_1(x)+1$ and from Observation \ref{obs:BunchI3}.\ref{BunchI3:B2PeLess2n}, $l(y,I_3)=\Pi_1(y)$. We have from Claim \ref{clm:xyg1}, $\Pi_1(x)+1<\Pi_1(y)$ and hence $(x,y)\notin E(I_3)$. Suppose $\Pi_1(x)>\Pi_1(y)$. From Observation \ref{obs:BunchI1}.\ref{BunchI1:B2PeType2Path}, $r(y,I_1)=n+\Pi_1(y)+1$ and from Observation \ref{obs:BunchI1}.\ref{BunchI1:NType2Path}, $\displaystyle l(x,I_1)=n+\Pi_1(x)$. From Claim \ref{clm:xyg1}, $\Pi_1(y)+1<\Pi_1(x)$ and hence $(x,y)\notin E(I_1)$. \qed \end{proof}
\begin{observation}\label{obs:NnonAdjB2PeNI1I3} If $x\in\mathcal{N}$ and $y\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$ such that $(x,y)\notin E(G)$, then, $(x,y)\notin E(I_1\cap I_3)$. The proof is as follows: If one of $x$ and $y$ belongs to a Type 1 cycle and the other belongs to a Type 2 cycle or path, from Observation \ref{obs:BunchI1}.\ref{BunchI1:Type1SepType2PathI1}, $(x,y)\notin E(I_1)$. If both $x$ and $y$ belong to Type 1 cycles, then, by Lemma \ref{lem:Type1NI1I3}, $(x,y)\notin E(I_1\cap I_3)$. If both $x$ and $y$ belong to Type 2 cycles and paths, then, by Lemma \ref{lem:Type2PathNI1I3}, $(x,y)\notin E(I_1\cap I_3)$. \end{observation} } \subsubsection{Vertices of $\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2e}$} Let $v\in\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2e}$. By Table \ref{tab:uniqNeighbor} (rows 2 and 3), it follows that $v$ is adjacent to exactly one vertex in $\mathcal{N}$; let $v'$ be this vertex. \begin{eqnarray} &&\textrm{If $v\in\mathcal{C}\cup\mathcal{P}_{2e}$, then, } f(v,I_3)=\left[r(v',I_3),n+1\right],\label{eqn:CP2eI3}\\ &&\textrm{If $v\in\mathcal{B}_1$, then, } f(v,I_3)=\left[r(v',I_3),n\right],\label{eqn:B1I3} \end{eqnarray} Note that $v'$ is already assigned an interval in Section \ref{sec:B2PeNI3}.
\subsubsection{Vertices of ${\mathcal{P}_{2i}}$} Let $v\in\mathcal{P}_{2i}$. By Table \ref{tab:uniqNeighbor} (row 4), $v$ has a unique neighbor in $\mathcal{R}\cup\mathcal{N}$; let $v'$ be this vertex. \begin{eqnarray} &&\textrm{if $v'\in\mathcal{R}$, then, } f(v,I_3)=\left[n+1,n+1\right], \label{eqn:P2iRI3}\\ &&\textrm{if $v'\in\mathcal{N}$, then, } f(v,I_3)=\left[r(v',I_3),n+1\right].\label{eqn:P2iNI3} \end{eqnarray} \journ{ \begin{lemma}\label{lem:BP2iI3} Let $x\in\mathcal{P}_{2i}$ and $y\in\mathcal{B}$. $(x,y)\notin E(I_2\cap I_3)$. \end{lemma} \begin{proof} Recall the notation $\beta(\cdot)$ introduced in Definition \ref{def:baseVertex}. Let $y\in\Gamma(z)$ for some $z\in\mathcal{R}$. If $x\ne\beta(z)$, then by Lemma \ref{lem:RBI2} (Point 1), $(x,y)\notin E(I_2)$. If $x=\beta(z)$, then, it implies that $x$ is adjacent to $z$ and by Table \ref{tab:uniqNeighbor} (row 4), $z$ is its only neighbor in $\mathcal{R}\cup\mathcal{N}$. Since $z\in\mathcal{R}$, the interval assigned to $x$ is given in (\ref{eqn:P2iRI3}), from which $l(x,I_3)=n+1$. If $y\in\mathcal{B}_1$, then by (\ref{eqn:B1I3}), $r(y,I_3)=n$. If $y\in\mathcal{B}_2$, from Observation \ref{obs:BunchI3}.\ref{BunchI3:B2I3}, $r(y,I_3)=n$. Therefore, $(x,y)\notin E(I_3)$. \qed \end{proof} } \subsubsection{Vertices of ${\mathcal{R}}$} \conf{$\forall v\in\mathcal{R}, f(v,I_3)=[n,n+1]$.} \journ{ Every vertex is assigned the following interval: \begin{equation}\label{eqn:RI3} \forall v\in\mathcal{R}, f(v,I_3)=[n,n+1]. \end{equation} } \journ{ \begin{lemma}\label{lem:B1CP2eRP2iSuperI3} $I_3[\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{P}_{2i}\cup\mathcal{R}]$ is a supergraph of $G[\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{P}_{2i}\cup\mathcal{R}]$. \end{lemma} \begin{proof} First we prove the following: \begin{clm}\label{clm:nnp1} For any $x\in\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{R}$, $[n,n+1]\subseteq f(x,I_3)$. \end{clm} \begin{proof} For any $z\in\mathcal{N}$, from Observation \ref{obs:BunchI3}.\ref{BunchI3:NLess2n}, $r(z,I_3)\le \Pi_1(z)+1<n$. From (\ref{eqn:CP2eI3}), for every $x\in\mathcal{C}\cup\mathcal{P}_{2e}$, $l(x,I_3)=r(z,I_3)$ for some $z\in\mathcal{N}$. Since $r(x,I_3)=n+1$, it implies that $[n,n+1]\subset f(x,I_3)$. If $x\in\mathcal{R}$, by (\ref{eqn:RI3}), $[n,n+1]=f(x,I_3)$. Hence proved. \bqed \end{proof}
To prove the lemma, we need to show that if $(x,y)\in E(G)$, then, $(x,y)\in E(I_3)$, where $x,y\in\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{P}_{2i}\cup\mathcal{R}$. For any $x\in\mathcal{P}_{2i}$, from (\ref{eqn:P2iRI3}--\ref{eqn:P2iNI3}), $r(x,I_3)=n+1$ and using Claim \ref{clm:nnp1}, we infer that $I_3[\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{P}_{2i}\cup\mathcal{R}]$ is a clique. For any $x\in\mathcal{B}_1$, from (\ref{eqn:B1I3}), $r(x,I_3)=n$ and again by using Claim \ref{clm:nnp1}, we observe that $I_3[\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{B}_1\cup\mathcal{R}]$ is a clique. Therefore, the only case we have to consider is $x\in\mathcal{P}_{2i}$ and $y\in\mathcal{B}_1$. However, from Table \ref{tab:nonAdj} (row 6), this case is not possible. Hence proved. \qed \end{proof}
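To see how these intervals fit together, consider hypothetical vertices $c\in\mathcal{C}$ and $b\in\mathcal{B}_1$ whose unique neighbors in $\mathcal{N}$ have right end points $6$ and $4$ in $I_3$ respectively, a vertex $u\in\mathcal{R}$, and a vertex $p\in\mathcal{P}_{2i}$ whose unique neighbor in $\mathcal{R}\cup\mathcal{N}$ lies in $\mathcal{R}$. Then $f(c,I_3)=[6,n+1]$, $f(b,I_3)=[4,n]$, $f(u,I_3)=[n,n+1]$ and $f(p,I_3)=[n+1,n+1]$: the intervals of $c$, $u$ and $p$ share the point $n+1$, those of $c$, $b$ and $u$ share the point $n$, but $b$ and $p$ do not intersect, consistent with the two cliques identified in the proof above and with Lemma \ref{lem:BP2iI3}.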
\begin{lemma}\label{lem:VminusASuperI3} $I_3[V\setminus\mathcal{A}]$ is a supergraph of $G[V\setminus\mathcal{A}]$. \end{lemma} \begin{proof} Let $x,y\in V\setminus\mathcal{A}$ be two adjacent vertices in $G$. Note that $V\setminus\mathcal{A}=(\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N})\cup (\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{P}_{2i}\cup\mathcal{R})$. If $x,y\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$, by Observation \ref{obs:B2PeNSuperI3}, $x$ and $y$ are adjacent in $I_3$. If $x,y\in\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{P}_{2i}\cup\mathcal{R}$, then by Lemma \ref{lem:B1CP2eRP2iSuperI3}, $x$ and $y$ are adjacent in $I_3$. The remaining case is $x\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$ and $y\in\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{P}_{2i}\cup\mathcal{R}$.
Let $x\in\mathcal{B}_2$. From Table \ref{tab:nonAdj} (row 6) we note that $y\notin\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{P}_{2i}$. Therefore, $y\in\mathcal{B}_1\cup\mathcal{R}$. If $y\in\mathcal{B}_1$, by (\ref{eqn:B1I3}), $r(y,I_3)=n$ and if $y\in\mathcal{R}$, by (\ref{eqn:RI3}), $l(y,I_3)=n$. By Observation \ref{obs:BunchI3}.\ref{BunchI3:B2I3}, $r(x,I_3)=n$ and therefore, $x$ is adjacent to $y$ in $I_3$.
Let $x\in\mathcal{P}_e$. From Table \ref{tab:nonAdj} (rows 6, 4, 8 and 5 respectively) $y\notin\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2i}\cup\mathcal{R}$. Therefore, $y\in\mathcal{P}_{2e}$. By (\ref{eqn:CP2eI3}), $r(y,I_3)=n+1$ and by Observation \ref{obs:BunchI3}.\ref{BunchI3:PeI3}, $r(x,I_3)=n+1$. Hence, $x$ is adjacent to $y$ in $I_3$.
Let $x\in\mathcal{N}$. From Table \ref{tab:nonAdj} (row 2), $y\notin\mathcal{R}$. This implies that $y\in\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{P}_{2i}$. From Table \ref{tab:uniqNeighbor} (rows 2, 3 and 4), $x$ is the unique neighbor of $y$ in $\mathcal{N}$. From (\ref{eqn:CP2eI3}), (\ref{eqn:B1I3}) and (\ref{eqn:P2iNI3}), $l(y,I_3)=r(x,I_3)$. Hence, $x$ is adjacent to $y$ in $I_3$. \qed \end{proof}
\begin{lemma}\label{lem:Nresolved} If $x\in\mathcal{N}$ and $y\in V\setminus\mathcal{A}$ such that $(x,y)\notin E(G)$, then, $(x,y)\notin E(I_1\cap I_3)$. \end{lemma} \begin{proof} If $y\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$, then by Observation \ref{obs:NnonAdjB2PeNI1I3}, $(x,y)\notin E(I_1\cap I_3)$. If $y\in\mathcal{R}$, then by (\ref{eqn:RI3}), $l(y,I_3)=n$. If $y\in\mathcal{P}_{2i}$ and is not adjacent to any vertex in $\mathcal{N}$, then by (\ref{eqn:P2iRI3}), $l(y,I_3)=n+1$. By Observation \ref{obs:BunchI3}.\ref{BunchI3:NLess2n}, $r(x,I_3)<n$ and therefore, in both the cases, $(x,y)\notin E(I_3)$. Now we consider the remaining cases (1) $y\in\mathcal{P}_{2i}$ such that $y$ has a neighbor in $\mathcal{N}$ and (2) $y\in\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2e}$. From Table \ref{tab:uniqNeighbor} (rows 4, 2 and 3 respectively), in each case, $y$ has exactly one neighbor in $\mathcal{N}$ and let this vertex be $z$. Since $x\ne z$, either $\Pi_1(x)<\Pi_1(z)$ or $\Pi_1(z)<\Pi_1(x)$.
Suppose $\Pi_1(x)<\Pi_1(z)$. In $I_3$, from the interval assignments for vertices of $\mathcal{C}\cup\mathcal{P}_{2e}$, $\mathcal{B}_1$ and $\mathcal{P}_{2i}$ in (\ref{eqn:CP2eI3}), (\ref{eqn:B1I3}) and (\ref{eqn:P2iNI3}) respectively, $l(y,I_3)=r(z,I_3)$. From Observation \ref{obs:BunchI3}.\ref{BunchI3:NLess2n}, $r(x,I_3)\le \Pi_1(x)+1<\Pi_1(z)+0.5\le r(z,I_3)$. Hence, $(x,y)\notin E(I_3)$.
Suppose $\Pi_1(x)>\Pi_1(z)$. In $I_1$, from the interval assignments for vertices of $\mathcal{B}_1$, $\mathcal{P}_{2e}$ and $\mathcal{P}_{2i}$ in (\ref{eqn:B1I1}), (\ref{eqn:P2eI1}) and (\ref{eqn:P2iNI1}) respectively, $r(y,I_1)=l(z,I_1)$. For $\mathcal{C}$ in (\ref{eqn:CI1}), $r(y,I_1)\le l(z,I_1)+0.5$. From Observation \ref{obs:BunchI1}.\ref{BunchI1:xLessy}, $l(x,I_1)\ge l(z,I_1)+1$ and therefore, $(x,y)\notin E(I_1)$. Hence proved.\qed \end{proof}
\begin{lemma}\label{lem:NTouch} Let $x\in\mathcal{N}$ and $y\in V\setminus\mathcal{A}$ such that $(x,y)\in E(G)$. Then, for some $I\in\{I_1,I_3\}$, either $l(x,I)=r(y,I)$ or $l(y,I)=r(x,I)$. \end{lemma} \begin{proof} Note that $V\setminus\mathcal{A}=(\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N})\cup (\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{P}_{2i}\cup\mathcal{R})$. Suppose $y\in\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$. Since $x$ is adjacent to $y$, they belong to the same component in $G[\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}]$, which is either a Type 1 cycle, Type 2 cycle or a path. Let this component be $S=c_1c_2\ldots c_t$, where $\Pi_1(c_{i+1})=\Pi_1(c_i)+1$, $1\le i<t$. Suppose $S$ is a Type 1 cycle. If neither $x$ nor $y$ is $c_1$, then by Observation \ref{obs:Type1I1}.\ref{Type1I1:SMinusc1Touch}, this condition is satisfied in $I_1$. If $x$ or $y$ is $c_1$, then by Observation \ref{obs:Type1I3}.\ref{Type1I3:touch}, this is satisfied in $I_3$. Suppose $S$ is a Type 2 cycle. If $x$ or $y$ is $c_1$, say $x=c_1$, then by Observation \ref{obs:Type2I3}.\ref{Type2I3:c1Touch}, $r(x,I_3)=l(y,I_3)$. Now we will assume that neither $x$ nor $y$ is $c_1$. Suppose $\Pi_1(x)>\Pi_1(y)$. By Observation \ref{obs:Type2I1}.\ref{Type2I1:touch}, $l(x,I_1)=r(y,I_1)$. Moreover, since $c_t\in\mathcal{N}$, this holds for the case $x=c_t$ and $y=c_{t-1}$ too. Therefore, we can assume that $x,y\notin\{c_1,c_t\}$. If $\Pi_1(x)<\Pi_1(y)$, then, by Observation \ref{obs:Type2I3}.\ref{Type2I3:touch}, $l(y,I_3)=r(x,I_3)$. Finally, suppose $S$ is a path. If $y\in\mathcal{N}$, then, without loss of generality we can assume that $\Pi_1(x)<\Pi_1(y)$ and therefore, by Observation \ref{obs:PathI3}.\ref{PathI3:touch}, the condition is satisfied in $I_3$. If $y\in\mathcal{B}_2\cup\mathcal{P}_e$ and $\Pi_1(x)<\Pi_1(y)$, then, again by Observation \ref{obs:PathI3}.\ref{PathI3:touch}, $r(x,I_3)=l(y,I_3)$, and if $\Pi_1(x)>\Pi_1(y)$, by Observation \ref{obs:PathI1}.\ref{PathI1:touch}, $r(y,I_1)=l(x,I_1)$.
Suppose $y\in\mathcal{B}_1\cup\mathcal{C}\cup\mathcal{P}_{2e}\cup\mathcal{P}_{2i}$. By Table \ref{tab:uniqNeighbor} (rows 2, 3 and 4), $x$ is the unique neighbor of $y$ in $\mathcal{N}$. From (\ref{eqn:CP2eI3}), (\ref{eqn:B1I3}) and (\ref{eqn:P2iNI3}), it follows that $l(y,I_3)=r(x,I_3)$. By Table \ref{tab:nonAdj} (row 2), $x$ is not adjacent to any vertex in $\mathcal{R}$. Thus, we have covered all possible cases for $y\in V\setminus\mathcal{A}$. Hence, proved.\qed \end{proof}
\begin{observation}\label{obs:RBTouch} If $x\in\mathcal{R}$ and $y\in\mathcal{B}$ are adjacent in $G$, then, $l(x,I_3)=r(y,I_3)=n$. This follows from the fact that $\mathcal{B}=\mathcal{B}_1\cup\mathcal{B}_2$, (\ref{eqn:B1I3}), (\ref{eqn:RI3}) and Observation \ref{obs:BunchI3}.\ref{BunchI3:B2I3}. \end{observation} } \subsubsection{Vertices of $\mathcal{A}$} \conf{$\forall v\in\mathcal{A}, f(v,I_3)=[1,n+1]$.} \journ{ Every vertex is assigned the following interval: \begin{equation}\label{eqn:AI3} \forall v\in\mathcal{A}, f(v,I_3)=[1,n+1]. \end{equation} } \journ{ \begin{lemma}\label{lem:superI3} $I_3$ is a supergraph of $G$. \end{lemma} \begin{proof} By Lemma \ref{lem:VminusASuperI3}, $I_3[V\setminus\mathcal{A}]$ is a supergraph of $G[V\setminus\mathcal{A}]$. From the interval assignments in $I_3$ for vertices in $V\setminus\mathcal{A}$, it is easy to infer that if $x\in V\setminus\mathcal{A}$, then, $f(x,I_3)\subset [1,n+1]$. Since by (\ref{eqn:AI3}), for any $y\in\mathcal{A}$, $f(y,I_3)=[1,n+1]$, it follows that $y$ is adjacent to $x$. Clearly, $I_3[\mathcal{A}]$ is a clique. Therefore, $I_3$ is a supergraph of $G$. \qed \end{proof}
\subsection{Proof of $E(G)=E(I_1)\cap E(I_2)\cap E(I_3)$}\label{sec:verify} We will prove Theorem \ref{thm:mainTheorem} by showing that $E(G)=E(I_1)\cap E(I_2)\cap E(I_3)$. We have already proved that $I_1$, $I_2$ and $I_3$ are supergraphs of $G$ (in Lemmas \ref{lem:superI1}, \ref{lem:superI2} and \ref{lem:superI3}, respectively). In this section, we will show the following: If two vertices $s$ and $t$ are not adjacent in $G$, then there exists at least one interval graph $I\in\{I_1,I_2,I_3\}$ such that $(s,t)\notin E(I)$. Recall the partitioning of $V$ illustrated in Figure \ref{fig:cubicPartTree}: $V=\mathcal{A}\cup\mathcal{N}\cup\mathcal{C}\cup\mathcal{R}\cup\mathcal{B}\cup\mathcal{P}$. Now we will consider one by one all possible cases and in each case show that $s$ and $t$ are not adjacent in at least one of the interval graphs.
\paragraph{\boldmath $s\in\mathcal{A}$, $t\in V$:} If $t\in\mathcal{A}$, then by Lemma \ref{lem:AGI1}, $(s,t)\notin E(I_1)$. If $t\in\mathcal{N}$, then by Lemma \ref{lem:ANnonAdj}, $(s,t)\notin E(I_1\cap I_2)$ and if $t\in V\setminus(\mathcal{A}\cup\mathcal{N})$, then by Lemma \ref{lem:AANnonAdj} $(s,t)\notin E(I_1)$.
\paragraph{\boldmath $s\in\mathcal{N}$, $t\in V\setminus\mathcal{A}$:} By Lemma \ref{lem:Nresolved}, $(s,t)\notin E(I_1\cap I_3)$.
\paragraph{\boldmath $s\in\mathcal{C}$, $t\in V\setminus(\mathcal{A}\cup\mathcal{N})=\mathcal{C}\cup\mathcal{P}\cup\mathcal{R}\cup\mathcal{B}$:} If $t\in \mathcal{C}\cup\mathcal{P}$, by Lemma \ref{lem:CPI1I2}, $(s,t)\notin E(I_1\cap I_2)$. If $t\in\mathcal{R}\cup\mathcal{B}$, by Observation \ref{obs:RBCP}.\ref{RBCP:RBCPeP2eI2}, $(s,t)\notin E(I_2)$.
\paragraph{\boldmath $s\in\mathcal{P}$, $t\in V\setminus(\mathcal{A}\cup\mathcal{N}\cup\mathcal{C})=\mathcal{P}\cup\mathcal{R}\cup\mathcal{B}$:} If $t\in\mathcal{P}$, by Lemma \ref{lem:CPI1I2}, $s$ and $t$ are not adjacent in either $I_1$ or $I_2$. Let $t\in\mathcal{R}\cup\mathcal{B}$. If $s\in\mathcal{P}_e\cup\mathcal{P}_{2e}$, then by Observation \ref{obs:RBCP}.\ref{RBCP:RBCPeP2eI2}, $(s,t)\notin E(I_2)$. Finally, let $s\in\mathcal{P}_{2i}$. If $t\in\mathcal{R}$, then by Observation \ref{obs:RBCP}.\ref{RBCP:RP2iI2} $(s,t)\notin E(I_2)$ and if $t\in\mathcal{B}$, by Lemma \ref{lem:BP2iI3}, $(s,t)\notin E(I_2\cap I_3)$.
\paragraph{\boldmath $s,t\in\mathcal{R}\cup\mathcal{B}$:} By Lemma \ref{lem:RBI2} (Statement 2), $(s,t)\notin E(I_2)$.
\subsection{Proof of Theorem \ref{thm:mainTheorem}}\label{sec:boundary} We have proved that $G$ has a $3$-box representation. Now we will show that in this $3$-box representation, any two intersecting boxes intersect only at their boundaries and hence complete the proof of Theorem \ref{thm:mainTheorem}. For this to happen, the following condition needs to be satisfied: \begin{condition}\label{con:touch} Let $s$ and $t$ be adjacent in $G$. For some $I\in\{I_1,I_2,I_3\}$, either $l(s,I)=r(t,I)$ or $l(t,I)=r(s,I)$. \end{condition} As in the previous section, we will consider one by one all the possible cases:
\paragraph{\boldmath $s\in\mathcal{A}$, $t\in V$:} If $t\in\mathcal{A}$, then by Observation \ref{obs:AATouch} and if $t\in\mathcal{N}$, by Observation \ref{obs:BunchI1}.\ref{BunchI1:ANTouch}, Condition \ref{con:touch} is satisfied. By Table \ref{tab:nonAdj} (rows 1--3), $s$ is not adjacent to any vertex in $V\setminus(\mathcal{A}\cup\mathcal{N})$.
\paragraph{\boldmath $s\in\mathcal{N}$, $t\in V\setminus\mathcal{A}$:} By Lemma \ref{lem:NTouch}, Condition \ref{con:touch} is satisfied.
\paragraph{\boldmath $s\in\mathcal{C}$, $t\in V\setminus(\mathcal{A}\cup\mathcal{N})=\mathcal{C}\cup\mathcal{P}\cup\mathcal{R}\cup\mathcal{B}$:} By Table \ref{tab:nonAdj} (rows 4--6), $t\notin\mathcal{P}\cup\mathcal{R}\cup\mathcal{B}$. Therefore, the only case to be considered is $t\in\mathcal{C}$. Let $s,t\in C$, where $C\subseteq\mathcal{C}$ is a special cycle. Let $C=c_1c_2\ldots c_k$ where $c_1$ is the special vertex. If $s=c_1$ or $t=c_1$, then by Observation \ref{obs:CI1}.\ref{CI1:special}, Condition \ref{con:touch} is satisfied. If neither $s$ nor $t$ is $c_1$, then by Observation \ref{obs:CI2}.\ref{CI2:touch}, Condition \ref{con:touch} is satisfied in $I_2$.
\paragraph{\boldmath $s\in\mathcal{P}$, $t\in V\setminus(\mathcal{A}\cup\mathcal{N}\cup\mathcal{C})=\mathcal{P}\cup\mathcal{R}\cup\mathcal{B}$:} If $t\in\mathcal{P}$, then by Observation \ref{obs:PSuperI2}, Condition \ref{con:touch} is met in $I_2$. If $t\in\mathcal{R}$, then from Table \ref{tab:nonAdj} (row 5) we infer that $s\in\mathcal{P}_{2i}$. By Observation \ref{obs:P2iRTouch}, Condition \ref{con:touch} is met in $I_1$. By Table \ref{tab:nonAdj} (row 6), $t\notin\mathcal{B}$.
\paragraph{\boldmath $s\in\mathcal{R}$, $t\in V\setminus(\mathcal{A}\cup\mathcal{N}\cup\mathcal{C}\cup\mathcal{P})=\mathcal{R}\cup\mathcal{B}$:} By Table \ref{tab:nonAdj} (row 10) $t\notin\mathcal{R}$. By Observation \ref{obs:RBTouch}, Condition \ref{con:touch} is satisfied in $I_3$.
\paragraph{\boldmath $s,t\in\mathcal{B}$:} Condition \ref{con:touch} is satisfied in $I_2$ (See Figure \ref{fig:componentsYInterval}).
Thus, we have completed the proof for cubic graphs. Now, applying Lemma \ref{lem:cubicEnough}, it follows that any graph with maximum degree $3$ has a $3$-box representation. Thus, we have proved Theorem \ref{thm:mainTheorem}. } \confExt{ \section{Algorithmic aspects} Now we briefly explain how our construction of the $3$-box representation can be realized in $O(n)$ time, where $n$ is the number of vertices in the graph. Firstly, we note that the process can be split into three stages: (1) partitioning the vertex set as illustrated in Figure \ref{fig:cubicPartTree}, (2) ordering the vertices of $\mathcal{A}$, $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$ and $\mathcal{C}\cup\mathcal{P}$ according to Definitions \ref{def:PiA}, \ref{def:Pi1} and \ref{def:Pi2}, and finally, (3) assigning intervals to all the vertices. In Stage (1), the first step is to extract special cycles and special paths as described in Algorithm \ref{alg:primPart}. This is the only non-trivial part of the construction and we analyze its complexity in the appendix. We will show that a special cycle or path can be extracted from a graph with maximum degree $3$ in time linear in the number of its vertices. This will imply that Algorithm \ref{alg:primPart} can be implemented in $O(n)$ time. Algorithm \ref{alg:finePart} takes $O(n)$ time since in every iteration we need only check whether a vertex in $\mathcal{N}_1$ has two neighbors in $\mathcal{A}_1$ in that iteration and accordingly move or retain the vertex and its neighbors. It is easy to see that the finer partitioning of $\mathcal{P}$, $\mathcal{N}$ and $\mathcal{B}$ can be accomplished in linear time. Stage (2) involves ordering the vertices of the sets $\mathcal{A}$, $\mathcal{B}_2\cup\mathcal{P}_e\cup\mathcal{N}$ and $\mathcal{C}\cup\mathcal{P}$ component-wise. Since each of these sets induces a graph of maximum degree $2$, they can be ordered in linear time. Stage (3) only involves the assignment of intervals to the vertices and can be achieved in linear time. }
\section{Conclusion} We showed that every graph of maximum degree $3$ has a $3$-box representation and therefore, its boxicity is at most $3$. One interesting question is whether we can characterize cubic graphs which have a $2$-box representation. Answering this will also determine if the boxicity of a cubic graph can be computed in polynomial time. One could also try to extend the proof techniques used in this paper to graphs with maximum degree $4$ and $5$ in order to improve the bounds on their boxicity.
\appendix \section{Appendix} Let $G$ be a graph with maximum degree $3$.
\subsection{Extending a non-special induced cycle or path}\label{sec:extend} Suppose $C$ is a non-special induced cycle in $G$. Then, by Definition \ref{def:specialCycle}, it follows that there exists a vertex $x\in C$, such that $C\setminus\{x\}$ belongs to a cycle or path of length
$|C|+1$. We call such a vertex a {\bf removable vertex}. Extending a non-special induced cycle $C$ corresponds to removing a removable vertex $x$ and adding two new vertices $a$ and $b$ such that $(C\setminus\{x\})\cup\{a,b\}$ is an induced cycle or a path. There are only two possible ways in which a non-special induced cycle can be extended and these are illustrated in Figure \ref{fig:cycle2Path}. The vertices which are added to or removed from $C$ in an extension operation are called {\bf participating vertices}. In the figure, $x$, $a$ and $b$ are the participating vertices.
\begin{figure}
\caption{The possible ways in which a non-special induced cycle can be extended. The vertices marked black are the participating vertices. }
\label{fig:cycle2Path}
\end{figure}
\begin{lemma}\label{lem:cycleExt} Suppose $C$ is a non-special induced cycle and $x\in C$. It takes constant time to verify whether $x$ is a removable vertex or not. If $x$ is a removable vertex, then, the extension of $C$ by removing $x$ can again be achieved in constant time. \end{lemma} \begin{proof} Consider the possible ways in which $C$ can be extended as shown in Figure \ref{fig:cycle2Path}. To verify if $x$ is a removable vertex, we need to only check if the vertices $a$ and $b$ exist. Similarly, given that $x$ is a removable vertex, we need to only find $a$ and $b$ to extend $C$ by removing $x$. Recalling that $\Delta(G)\le3$, it is easy to see that this can be done in constant time.\qed \end{proof}
Let $P$ be a non-special induced path in $G$. Then, by Definition \ref{def:specialPath}, either: \begin{enumerate} \item it is not maximal, in the sense that it is part of an induced cycle or a longer induced path, or
\item for some end point of $P$, say $x$, $P\setminus\{x\}$ belongs to an induced cycle of size $\ge|P|$ or an induced path of length $\ge|P|+1$. We call $x$ a {\bf removable end point} of $P$. \end{enumerate} Extending a non-special path $P$ corresponds to the following operations: \begin{enumerate} \item If $P$ is not maximal, then, we add a new vertex $y$ to $P$ such that $P\cup\{y\}$ is an induced path or cycle. Clearly, $y$ must be a neighbor of an end point of $P$ such that it is not adjacent to any interior vertex of $P$. \item If $P$ is maximal, then, we remove a removable end point $x$ and either \begin{enumerate}[(a)] \item add a single new vertex $a$ such that $(P\setminus\{x\})\cup\{a\}$ is an induced cycle (note that in this case $a$ has to be adjacent to the neighbor of $x$ in $P$) OR \item add two new vertices $a$ and $b$ such that $(P\setminus\{x\})\cup\{a,b\}$ is an induced cycle or a path (in this case, $a$ and $b$ are adjacent and $a$ is adjacent to the neighbor of $x$ in $P$). \end{enumerate} This is illustrated in Figure \ref{fig:path2Cycle}. \end{enumerate} As in the case of extending a cycle, the vertices which are added to or removed from $P$ are called {\bf participating vertices}. In case 1, $y$ is the participating vertex. In case 2(a), $x$ and $a$ are participating vertices and in case 2(b), $x$, $a$ and $b$ are participating vertices.
\begin{figure}
\caption{The ways in which a non-special induced path can be extended. The vertices marked black are the participating vertices. }
\label{fig:path2Cycle}
\end{figure}
\begin{lemma}\label{lem:pathExt} If $P$ is an induced path, then, in constant time, it can be verified whether it is a special path or not. If not, then, in constant time it can be extended. \end{lemma} \begin{proof} First we need to check if $P$ is maximal or not, that is, whether it is part of a larger induced cycle or a longer induced path. This can be done in constant time. If $P$ is maximal, then, we need to check if there is a removable end point and then extend $P$ by removing it. For this, as shown in Figure \ref{fig:path2Cycle}, we need to check if the vertices $x$, $a$ and $b$ exist. Since $\Delta(G)\le3$, it is easy to see that this can be done in constant time. It is also trivial to verify that the extension can be achieved in constant time. Hence proved. \qed \end{proof}
\subsection{An algorithm to find a special cycle or path} We now give an iterative algorithm to obtain a special cycle or path. The outline of the algorithm is as follows: Let $S$ be the set which holds the vertices of the special cycle or path at the termination of the algorithm. We start with $S$ containing an arbitrary vertex. In each iteration, we extend it as described in Section \ref{sec:extend}. The algorithm terminates when $S$ induces a special cycle or path. In Algorithm \ref{alg:special}, we present an outline of this procedure.
\begin{algorithm}[h] \caption{Extracting a special cycle or path from a graph with maximum degree $3$.\label{alg:special}}
\SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \SetKw{Sf}{specialFlag} \Input{Graph $G$ with maximum degree $3$} \Output{Set $S\subseteq V(G)$ which induces a special cycle or path in $G$} Let $S=\{x\}$ where $x$ is an arbitrary vertex\; Let $R=\varnothing$, the set of potential removable vertices of $S$\; Let \Sf $=0$; \tcp{If set to $1$, it implies that $S$ is a special cycle or path} \While{\Sf$=0$}{
\uIf{$S$ induces a cycle}{
\uIf{$R=\varnothing$}{
\Sf$=1$; \tcp{No removable vertices and therefore, $S$
is a special cycle (see Observation \ref{obs:pot})}
}
\Else{
Choose any vertex $x$ from $R$\;\nllabel{lin:xR}
Extend $S$ by removing $x$ as described in Section \ref{sec:extend}\;\nllabel{lin:extendCycle}
}
}
\ElseIf{$S$ induces a path}{
\uIf{$S$ is a special path}{
\Sf$=1$\;
}
\Else{
Extend $S$ as described in Section \ref{sec:extend}\;\nllabel{lin:extendPath}
}
}
Update $R$ as described in Section \ref{sec:update}\;\nllabel{lin:updateR} } \end{algorithm}
\subsubsection{Potential removable vertices}\label{sec:update} From Lemma \ref{lem:pathExt}, we note that in constant time we can recognize a non-special induced path and extend it. However, in view of Lemma \ref{lem:cycleExt}, to recognize and extend a non-special induced cycle in constant time, we first need a strategy to find a removable vertex. For efficiently finding removable vertices, we maintain a list of potential removable vertices which is updated in each iteration.
\begin{definition}{\bf Potential removable vertices:}\label{def:potRemove} A vertex $x\in S$ is a potential removable vertex if it has two neighbors in $S$, say $x_{-1}$ and $x_{+1}$ and satisfies at least one of the following conditions: There are two vertices $a$ and $b$ such that \begin{enumerate} \item $a$ is adjacent to only $x_{-1}$ (or $x_{+1}$) in $S\setminus\{x\}$ and $b$ is adjacent to $a$ and not any vertex in $S\setminus\{x\}$. This corresponds to case (a) in Figure \ref{fig:cycle2Path}; OR \item $a$ is adjacent to $x_{-1}$ in $S\setminus\{x\}$ and $b$ is adjacent to only $x_{+1}$ in $S\setminus\{x\}$. This corresponds to case (b) in Figure \ref{fig:cycle2Path}. \end{enumerate} \end{definition}
From the definition of removable vertices at the beginning of Section \ref{sec:extend}, we infer the following: \begin{observation}\label{obs:pot} If $S$ induces a cycle, then, all potential removable vertices in $S$ will correspond to removable vertices. If there are no potential removable vertices, then, it implies that $S$ is a special cycle. \end{observation}
\begin{lemma}\label{lem:checkRemovable} In constant time, we can check if a particular vertex in $S$ is a potential removable vertex. \end{lemma} The proof is similar to that of Lemma \ref{lem:cycleExt}.
In the algorithm, we maintain a set of all potential removable vertices, which we denote as $R$. From Observation \ref{obs:pot}, it follows that if $S$ induces a cycle, we can decide whether it is special or not by just checking if $R$ is empty or not. Therefore, given an induced cycle and the corresponding $R$, we can recognize in constant time whether it is a special cycle or not. Now, we show that after each extension of $S$, $R$ can be updated in constant time. \begin{lemma}\label{lem:updateR} In each iteration of Algorithm \ref{alg:special}, the set of potential removable vertices, $R$, can be updated in constant time. \end{lemma} \begin{proof} Let us consider a vertex $x\in R$ before extension of $S$. Let $X$ denote the set containing $x$ and the associated vertices $x_{-1}$, $x_{+1}$, $a$ and $b$ (see Definition \ref{def:potRemove}). Since $S$ is extended, there are participating vertices. We observe that $x$ will remain a potential removable vertex if no vertex in $X$ and no neighbor of $X$ is a participating vertex. This implies that if a vertex is at a distance $5$ or more from every participating vertex, then its status as a potential removable vertex clearly remains unchanged. Therefore, we need to only check the vertices at a distance $4$ or less from each participating vertex. The number of such vertices is a constant since $\Delta(G)\le3$, and by Lemma \ref{lem:checkRemovable}, verifying each such vertex takes only constant time. Hence proved. \qed \end{proof}
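For concreteness, the update step just described can be sketched in a few lines of Python. This is only a minimal illustration, assuming an adjacency-list dictionary \texttt{adj}, Python sets for $S$ and $R$, and a predicate \texttt{is\_removable} implementing the constant-time test of Lemma \ref{lem:checkRemovable}, which is passed in as a parameter; since $\Delta(G)\le3$, the truncated breadth-first search around each participating vertex touches only a constant number of vertices.
\begin{verbatim}
from collections import deque

def update_R(R, S, adj, participating, is_removable):
    # Only vertices within distance 4 of a participating vertex can change
    # their status as potential removable vertices, so re-test just those.
    affected = set()
    for p in participating:
        dist = {p: 0}
        queue = deque([p])
        while queue:                      # BFS truncated at distance 4
            u = queue.popleft()
            if dist[u] == 4:
                continue
            for v in adj.get(u, ()):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        affected |= set(dist)
    for v in affected:                    # re-run the constant-time test
        if v in S and is_removable(v, S, adj):
            R.add(v)
        else:
            R.discard(v)
\end{verbatim}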
\begin{lemma}\label{lem:2in1} If the special cycle or path extracted by Algorithm \ref{alg:special} is of size $l$, then, the total number of iterations required is at most $2l+2$. \end{lemma} \begin{proof} We will prove the lemma by showing that in Algorithm \ref{alg:special}, for every two iterations (excluding the last two) the size of $S$ increases by at least $1$. If $S$ induces a cycle at the beginning of the $i$th iteration, then from Section \ref{sec:extend}, it follows that
$S$ is extended to a cycle or path of size $|S|+1$ at the end of the iteration. If $S$ induces a path at the beginning of the $i$th iteration, either $S$ is extended to a cycle or path of size $|S|+1$
or to a cycle of size $|S|$ at the end of the iteration. In the latter case, assuming $S$ is not a special cycle in the $(i+1)$th iteration, it is extended to a cycle or path of size $|S|+1$ at the end of the $(i+1)$th iteration. Hence, proved. \qed \end{proof}
From Lemmas \ref{lem:cycleExt}--\ref{lem:updateR}, it follows that Lines \ref{lin:extendCycle}, \ref{lin:extendPath} and \ref{lin:updateR} in Algorithm \ref{alg:special} require constant time. The sets $S$ and $R$ may be implemented as doubly linked lists and with each vertex we can associate membership flags and pointers to its place in each of the linked lists. Given this setup, each iteration requires constant time. From Lemma \ref{lem:2in1}, the number of iterations is bounded by $2l+2$. Therefore, the algorithm takes $O(l)$ time to terminate. Since the total number of vertices in the set of special cycles and paths will be bounded above by $n$, the overall running time of Algorithm \ref{alg:primPart} is $O(n)$.
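As an illustration of this bookkeeping, the following minimal Python sketch (the class name is ours) stores a doubly linked list of vertices together with a dictionary of nodes, which doubles as the membership flag; insertion, deletion and membership queries on $S$ and $R$ therefore all take constant time, as required above.
\begin{verbatim}
class VertexList:
    class _Node:
        __slots__ = ("v", "prev", "next")
        def __init__(self, v):
            self.v, self.prev, self.next = v, None, None

    def __init__(self):
        self.head = None
        self.nodes = {}            # vertex -> node; acts as a membership flag

    def __contains__(self, v):     # O(1) membership test
        return v in self.nodes

    def add(self, v):              # O(1) insertion at the head
        if v in self.nodes:
            return
        node = self._Node(v)
        node.next = self.head
        if self.head is not None:
            self.head.prev = node
        self.head = node
        self.nodes[v] = node

    def discard(self, v):          # O(1) deletion via the stored pointer
        node = self.nodes.pop(v, None)
        if node is None:
            return
        if node.prev is not None:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next is not None:
            node.next.prev = node.prev
\end{verbatim}
In practice a hash-based set would serve equally well; the sketch merely mirrors the linked-list description given above.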
\end{document} |
\begin{document}
\title{Entanglement as the symmetric portion of correlated coherence} \author{Kok Chuan Tan} \email{[email protected]} \author{Hyunseok Jeong} \email{[email protected]} \affiliation{Center for Macroscopic Quantum Control \& Institute of Applied Physics, Department of Physics and Astronomy, Seoul National University, Seoul, 151-742, Korea} \date{\today}
\begin{abstract} We show that the symmetric portion of correlated coherence is always a valid quantifier of entanglement, and that this property is independent of the particular choice of coherence measure. This leads to an infinitely large class of coherence-based entanglement monotones, which are always computable for pure states if the coherence measure is computable. It is already known that every entanglement measure can be constructed as a coherence measure. The results presented here show that the converse is also true. The constructions that are presented can also be extended to include more general notions of nonclassical correlations, leading to quantifiers that are related to quantum discord. \end{abstract}
\maketitle
\section{Introduction}
An important pillar in the field of quantum information is the study of the quantumness of correlations, the most well-known of which is the notion of entangled quantum states~\cite{Einstein1935}. Entanglement is now the basis of many of the most useful and powerful quantum protocols, such as quantum cryptography~\cite{Ekert1991}, quantum teleportation~\cite{Bennett1991} and superdense coding. In the past several decades, generalized notions of quantum correlations that include but supersede entanglement have also been considered, most prominently in the form of quantum discord~\cite{Ollivier2001, Henderson2001}. There is mounting evidence that such notions of quantum correlations can also lead to nonclassical effects in multipartite scenarios~\cite{Datta2008, Chuan2012, Dakic2012, Chuan2013}, even when entanglement is not available.
In a separate development, the past several years have also seen a growing amount of interest in the recently formalized resource theory of coherence~\cite{Aberg2006,Baumgratz2014, Levi2014}. Such theories are primarily interested in identifying the quantumness of some given quantum state, and are not limited to a multipartite setting as in the case of entanglement or discord. Nonetheless, there is considerable interest in the study of correlations from the point of view of coherence~\cite{Streltsov2015, Tan2016, Ma2016, Tan2018}. In this picture, one may view quantum correlations as a single aspect of the more general notion of nonclassicality, which in this article we will assume to imply coherence. Beyond the study of quantum correlations, the resource theory of coherence has been applied to an ever-increasing number of physical scenarios, ranging from macroscopicity~\cite{Yadin2016, Kwon2017}, to quantum algorithms~\cite{Hillery2016,Matera2016}, to interferometry~\cite{YTWang2017}, to nonclassical light~\cite{Tan2017, Zhang2015, Xu2016}. A recent overview of these developments is provided in~\cite{Streltsov2017}. Especially relevant are the results in~\cite{Streltsov2015}. There, it was shown that coherence can be faithfully converted into entanglement, and that each entanglement measure corresponds to a coherence measure in the sense of \cite{Baumgratz2014}.
In this article, we report a series of constructions which allow notions of nonclassical correlations to be quantified using coherence measures. The arguments are general in the sense that they do not depend on the particular coherence measure used, nor on the particular flavour of coherence resource theory being employed, so long as the measures satisfy some minimal set of properties that any reasonable coherence measure should satisfy. This suggests that notions of entanglement and discord are intrinsically tied to any reasonable resource theory of coherence. In essence, our results establish that the converse of the relationship proposed in~\cite{Streltsov2015} is also true, so that for every coherence measure, there corresponds an entanglement measure. In fact, we go beyond this by demonstrating that this correspondence exists not only for the coherence resource theory proposed in \cite{Baumgratz2014}, but for any such resource theory, since our framework does not depend on the choice of free operations required by a particular resource theory \cite{Streltsov2017}. In addition, as natural consequences of our operation-independent approach, we also see that discord-like quantifiers of nonclassical correlations are naturally embedded in any such resource theories of coherence.
This operation-free approach contrasts with other approaches considered in~\cite{Chitambar2016, Streltsov2016}, where coherence and entanglement are bridged by forming a hybrid resource theory based on some combination of free operations from both theories. Such a hybridization approach often requires additional constraints, such as requiring that operations be local as well as incoherent, which may bring about extra complications. For instance, it is sometimes difficult to physically justify the set of operations being considered in the hybridized theory, and one may also have to deal with the accounting of not one, but two, species of resource states (i.e. one must simultaneously keep track of the available maximally coherent qubits as well as the maximally entangled qubits).
\section{Preliminaries}
We review some elementary concepts concerning coherence measures. Coherence is a basis-dependent property of a quantum state. For a given fixed basis $\mathcal{B} = \{ \ket{i} \}$, the set of incoherent states $\cal I$ is the set of quantum states with diagonal density matrices with respect to this basis, and is considered to be the set of classical states. Correspondingly, states that have nonzero off-diagonal elements form the set of coherent states, which are nonclassical.
The notion of nonclassicality from the point of view of coherence is an unambiguous aspect of all coherence theories, but different flavors of coherence resource theories sometimes consider different sets of non-coherence-producing operations in order to justify different coherence measures (see~\cite{Streltsov2017} for a summary). For our purposes, we will not require any specific properties of such non-coherence-producing operations. The following is a set of axioms that such resource theories of coherence generally obey: Let $\mathcal{C}$ be a measure of coherence belonging to some coherence resource theory; then $\mathcal{C}(\rho)$ must satisfy (C1) $\mathcal{C}(\rho) \geq 0$ for any quantum state $\rho$, with equality if and only if $\rho \in \cal I$; (C2) monotonicity, i.e. $\mathcal{C}(\rho) \geq \mathcal{C}(\Phi(\rho))$ for any non-coherence-producing map $\Phi$; and (C3) convexity, i.e. $\lambda \mathcal{C}(\rho) + (1-\lambda) \mathcal{C}(\sigma) \geq \mathcal{C}(\lambda \rho + (1-\lambda) \sigma)$ for any density matrices $\rho$ and $\sigma$ with $0\leq \lambda \leq 1$.
The following quantity was considered by Tan {\it et al.}~\cite{Tan2016} while studying the relationship between coherence and quantum correlations: $$\mathcal{C}(A:B \mid \rho_{AB}) \coloneqq \mathcal{C}(\rho_{AB}) - \mathcal{C}(\rho_{A}) - \mathcal{C}(\rho_{B}).$$
This quantity was referred to as correlated coherence, and the coherence measure $\mathcal{C}$ in \cite{Tan2016} was chosen to be the $l_1$-norm of coherence. There, it was noted that since it is always possible to choose local bases for the subsystems $A$ and $B$ in which $\mathcal{C}(\rho_{A})$ and $\mathcal{C}(\rho_{B})$ vanish, the coherence in the system is apparently no longer stored locally, and must exist amongst the correlations between subsystems $A$ and $B$.
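As a simple illustration, consider the maximally entangled two-qubit state $\ket{\Phi^+}_{AB}=\frac{1}{\sqrt{2}}(\ket{00}+\ket{11})$ and take $\mathcal{C}$ to be the $l_1$-norm of coherence, i.e. the sum of the absolute values of the off-diagonal entries of the density matrix in the chosen basis. With respect to the local computational bases, the reduced states $\rho_A$ and $\rho_B$ are maximally mixed and hence diagonal, so that $\mathcal{C}(\rho_A)=\mathcal{C}(\rho_B)=0$, while
$$\mathcal{C}(A:B \mid \ket{\Phi^+}_{AB}\bra{\Phi^+}) = \mathcal{C}(\ket{\Phi^+}_{AB}\bra{\Phi^+}) = 2\times\tfrac{1}{2} = 1.$$
All of the coherence of the state is thus carried by the correlations between $A$ and $B$ rather than by the subsystems individually.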
Based on this observation, it was demonstrated in \cite{Tan2016} that if one were to minimize the correlated coherence with respect to all such possible local bases, i.e. all local bases $\mathcal{B}_A$ and $\mathcal{B}_B$ satisfying $\mathcal{C}(\rho_{A})=\mathcal{C}(\rho_{B})=0$, then the minimization over all such bases may be related to quantum correlations such as discord and entanglement. Formally, they considered the quantity:
\begin{definition}[Correlated coherence] $$\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB}) \coloneqq \min_{\mathcal{B}_{A:B}}\mathcal{C}(A:B \mid \rho_{AB}),$$ where the minimization is performed over the set of local bases $\mathcal{B}_{A:B} \coloneqq \{ (\mathcal{B}_A, \mathcal{B}_B) \mid \mathcal{C}(\rho_{A})=\mathcal{C}(\rho_{B})=0 \}.$ \end{definition}
The quantity is invariant under local unitary operations, since it is clear that for any state $\rho_{AB}$ and local basis $\mathcal{B}_{A:B} = \{\ket{i}_A\ket{j}_B \}$, the correlated coherence of the state $U_A \rho_{AB} U^\dag_A$ with respect to the basis $\mathcal{B}'_{A:B} = \{U_A\ket{i}_A\ket{j}_B \}$ is identical. Subsequently, entropic versions of correlated coherence were also studied in \cite{Wang2017} and more recently in \cite{Kraft2018, Ma2018}, where operational scenarios were considered.
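For a fixed choice of local product basis, the correlated coherence is also straightforward to evaluate numerically. The following is a minimal sketch, assuming the $l_1$-norm of coherence and a density matrix supplied as a NumPy array in the computational product basis; the function names are ours. Applied to the maximally entangled two-qubit state it returns $1$, in agreement with the example above.
\begin{verbatim}
import numpy as np

def l1_coherence(rho):
    # l1-norm of coherence: sum of absolute values of off-diagonal entries
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

def correlated_coherence_l1(rho_AB, dA, dB):
    # C(A:B|rho) = C(rho_AB) - C(rho_A) - C(rho_B) in the fixed product basis
    rho = np.asarray(rho_AB, dtype=complex).reshape(dA, dB, dA, dB)
    rho_A = np.trace(rho, axis1=1, axis2=3)   # partial trace over B
    rho_B = np.trace(rho, axis1=0, axis2=2)   # partial trace over A
    return l1_coherence(rho_AB) - l1_coherence(rho_A) - l1_coherence(rho_B)

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi.conj())
print(correlated_coherence_l1(rho, 2, 2))     # prints 1.0 (up to rounding)
\end{verbatim}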
In the next section, we prove that using $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB})$ as our basic building block, every coherence measure can be used to construct a valid entanglement quantifier, which establishes that entanglement may be interpreted as the symmetric portion of correlated coherence.
\section{Quantifying entanglement with correlated coherence}
We begin with some necessary definitions:
\begin{definition}[Symmetric extensions] A symmetric extension of a bipartite state $\rho_{A_1B_1}$ is an extension $\rho_{A_1\ldots A_n B_1\ldots B_n}$ satisfying $\mathrm{Tr}_{A_2 \ldots A_n B_2 \ldots B_n}(\rho_{A_1\ldots A_n B_1\ldots B_n}) = \rho_{A_1B_1}$ that is, up to local unitaries, invariant under the swap operation $\Phi_{\mathrm{SWAP}}^{A_i \leftrightarrow B_i}$ between any subsystems $A_i$ of Alice and $B_i$ of Bob, i.e. there exists some unitary $U_{A_1 \ldots A_n}$ such that $$\Phi_{\mathrm{SWAP}}^{A_i \leftrightarrow B_i}(U_{A_1 \ldots A_n}\rho_{A_1 \ldots A_n B_1\ldots B_n}U^\dag_{A_1 \ldots A_n}) = U_{A_1 \ldots A_n}\rho_{A_1 \ldots A_n B_1\ldots B_n}U^\dag_{A_1 \ldots A_n}$$ for every $i$.
\end{definition}
A symmetric extension is therefore an extension of the quantum state that, up to a local unitary on Alice's side (or Bob's side), lies within the symmetric subspace. Subsequently, for notational simplicity, we will use unprimed letters $A,B$ for the system of interest, and primed letters $A', B'$ for the ancillas in the extension. Let us now consider the correlated coherence of such extensions.
\begin{definition}[Symmetric correlated coherence] The symmetric correlated coherence, for any given coherence measure $\mathcal{C}$, is defined to be the following quantity: $$E_{\mathcal{C}}(\rho_{AB}) = \min_{A'B'}\mathcal{C}_{\mathrm{min}}(AA':BB' \mid \rho_{AA'BB'})$$ where the minimization is performed over all possible symmetric extensions of $\rho_{AB}$. Note that the ancillas $A'$ and $B'$ may, in general, be composite systems. \end{definition}
The above definition quantifies the minimum correlated coherence that exists within a symmetric subspace of an extended Hilbert space, up to some local unitary on Alice's side or Bob's side. For this reason, we interpret this quantity as the portion of the correlated coherence that is symmetric.
For the rest of this article, we will prove several elementary properties of the above correlation measure, which will finally establish it as a valid entanglement monotone.
First, we will demonstrate that $E_{\mathcal{C}}(\rho_{AB})$ is a convex function of states:
\begin{proposition} $E_{\mathcal{C}}(\rho_{AB})$ is a convex function of the state, i.e. $$\sum_i p_i E_{\mathcal{C}}(\rho^i_{AB}) \geq E_{\mathcal{C}}(\sum_i p_i\rho^i_{AB})$$ where $p_i$ defines some probability distribution such that $\sum_i p_i = 1$ and the $\rho^i_{AB}$ are normalized quantum states. \end{proposition}
\begin{proof} Let $\rho^{i*}_{AA'BB'}$ be the optimal extension such that $E_{\mathcal{C}}(\rho^i_{AB})= \mathcal{C}_{\mathrm{min}}(AA':BB' \mid \rho^{i*}_{AA'BB'})$. We have the following chain of inequalities:
\begin{align} \sum_i p_i E_{\mathcal{C}}(\rho^i_{AB})& = \sum_i p_i \mathcal{C}_{\mathrm{min}}(AA':BB' \mid \rho^{i*}_{AA'BB'}) \\ &= \sum_i p_i \mathcal{C}_{\mathrm{min}}(AA'A'':BB'B'' \mid \rho^{i*}_{AA'BB'}\otimes \ket{i,i}_{A''B''}\bra{i,i}) \\ &\geq \mathcal{C}_{\mathrm{min}}(AA'A'':BB'B'' \mid \sum_i p_i \rho^{i*}_{AA'BB'}\otimes \ket{i,i}_{A''B''}\bra{i,i}) \\ &\geq E_\mathcal{C}( \sum_i p_i \rho^i_{AB} ) \end{align}
The inequality in Line 3 holds because there is at least one admissible local basis for the mixed state in Line 3 whose correlated coherence is upper bounded by the expression in Line 2. To see this, suppose for every $i$ and $\rho^{i*}_{AA'BB'}$, the optimal basis for evaluating $\mathcal{C}_{\mathrm{min}}(AA':BB' \mid \rho^{i*}_{AA'BB'})$ is $\{ \ket{\alpha_{i,j}}_{AA'} \ket{\beta_{i,k}}_{BB'} \}$. Then it is clear that the optimal local basis for $\rho^{i*}_{AA'BB'}\otimes \ket{i,i}_{A''B''}\bra{i,i}$ must be $\{ \ket{\alpha_{i,j}}_{AA'} \ket{i}_{A''}\ket{\beta_{i,k}}_{BB'} \ket{i}_{B''} \}$, since this is essentially just a relabelling of the basis. Since the coherence measure $\mathcal{C}$ is convex, the classical mixture of quantum states cannot increase the amount of coherence with respect to the basis $\{ \ket{\alpha_{i,j}}_{AA'} \ket{i}_{A''}\ket{\beta_{i,k}}_{BB'} \ket{i}_{B''} \}$. Finally, one can verify that the local coherences with respect to this basis are always zero, so this is just one particular local basis that satisfies the necessary constraints. In sum, this implies \begin{align*}\sum_i p_i &\mathcal{C}_{\mathrm{min}}(AA'A'':BB'B'' \mid \rho^{i*}_{AA'BB'}\otimes \ket{i,i}_{A''B''}\bra{i,i}) \\ &\geq \mathcal{C}_{\mathrm{min}}(AA'A'':BB'B'' \mid \sum_i p_i \rho^{i*}_{AA'BB'}\otimes \ket{i,i}_{A''B''}\bra{i,i}),\end{align*} which was the required inequality.
The inequality in Line 4 comes from the observation that $\sum_i p_i \rho^{i*}_{AA'BB'}\otimes \ket{i,i}_{A''B''}\bra{i,i}$ is a particular symmetric extension of $\sum_i p_i \rho^{i}_{AB}$. The resulting chain of inequalities is precisely the convexity condition which we needed to prove. \end{proof}
In the next proposition, we demonstrate the connection between $E_{\mathcal{C}}(\rho_{AB})$ and nonseparability, which defines entanglement.
\begin{proposition} [Faithfulness]
$E_{\mathcal{C}}(\rho_{AB}) = 0$ iff $\rho_{AB}$ is separable, and strictly positive otherwise. \end{proposition}
\begin{proof} First of all, we note that all coherence measures are nonnegative over valid quantum states, and as such, since $E_{\mathcal{C}}(\rho_{AB})$ is defined as a form of coherence over some quantum state, $E_{\mathcal{C}}(\rho_{AB})$ must be nonnegative.
Suppose some bipartite state $\rho_{AB}$ is separable. By definition, this necessarily implies that there exists some decomposition for which $\rho_{AB} = \sum_i p_i \ket{a_i}_A\bra{a_i} \otimes \ket{b_i}_B\bra{b_i}$. This always permits an extension of the form $\rho_{AA'BB'} = \sum_i p_i \ket{a_i}_A\bra{a_i} \otimes \ket{i}_{A'}\bra{i} \otimes \ket{b_i}_B\bra{b_i} \otimes \ket{i}_{B'}\bra{i}$ for some orthonormal set $\{ \ket{i} \}$. It can then be directly verified that $\mathcal{C}_{\mathrm{min}}(AA':BB' \mid \rho_{AA'BB'}) = 0$ so we must have $E_{\mathcal{C}}(\rho_{AB})=0$ for every separable state.
We now prove the converse. Suppose $E_{\mathcal{C}}(\rho_{AB})=0$. Then there must exist some extension for which $\mathcal{C}_{\mathrm{min}}(AA':BB' \mid \rho_{AA'BB'}) = 0$. This implies that there must exist a local basis on $AA'$ and one on $BB'$ with respect to which the coherence is zero, so $\rho_{AA'BB'}$ must be diagonal in this product basis, i.e. $\rho_{AA'BB'} = \sum_{i,j} q_{ij} \ket{\alpha_i}_{AA'}\bra{\alpha_i} \otimes \ket{\beta_j}_{BB'}\bra{\beta_j}$. Tracing out the subsystems $A'$ and $B'$ then leads to a decomposition of the form $\rho_{AB} = \sum_{i,j} q_{ij}\, \sigma^i_A \otimes \tau^j_B$, where $\sigma^i_A$ and $\tau^j_B$ are the reduced states of $\ket{\alpha_i}_{AA'}$ and $\ket{\beta_j}_{BB'}$, so $\rho_{AB}$ must be a separable state.
We then observe that since $E_{\mathcal{C}}(\rho_{AB})$ must be nonnegative, and it is zero iff $\rho_{AB}$ is separable, then it must be strictly positive for every entangled state. This completes the proof. \end{proof}
Finally, we show that $E_{\mathcal{C}}(\rho_{AB})$ never increases under LOCC-type operations.
\begin{proposition} [Monotonicity] \label{thm::monotonicity} For any LOCC protocol represented by a quantum map $\Phi_{\mathrm{LOCC}}$, we have $$E_{\mathcal{C}}( \rho_{AB}) \geq E_{\mathcal{C}}[\Phi_{\mathrm{LOCC}}(\rho_{AB})].$$ \end{proposition}
\begin{proof} Any LOCC operation can always be decomposed into some local quantum operation, a communication of classical information stored in a classical register, and finally, another local operation that is dependent on the classical information received.
Let us suppose that Bob, representing the subsystem $B$, is the one who will communicate classical information to Alice, representing subsystem $A$. His local operation can always be represented by adding ancillary subsystems $B'B''$ in some initial pure state $\ket{0}_{B'}\bra{0} \otimes \ket{0}_{B''}\bra{0}$, followed by a unitary operation on all of the subsystems on his side. Without any loss of generality, we will assume $B''$ will contain all the classical information (i.e. it is a classical register) after the unitary is performed, and $B'$ is traced out. Bob will then communicate this classical information to Alice, who will then perform some quantum operation depending on the information she received.
Based on the above, we have the following chain of inequalities:
\begin{align} E_{\mathcal{C}}(\rho_{AB})&= E_{\mathcal{C}}( \rho_{AB} \otimes \ket{0}_{B'}\bra{0} \otimes \ket{0}_{B''}\bra{0}) \\ &= E_{\mathcal{C}}( U_{BB'B''}\rho_{AB} \otimes \ket{0}_{B'}\bra{0} \otimes \ket{0}_{B''}\bra{0}U_{BB'B''}^\dag) \\ &\geq E_\mathcal{C}(\sum_i K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}) \end{align} where the last line makes use of the observation that a symmetric extension of the argument in Line 6 is also a symmetric extension of the argument of Line 7.
From the above, we see that a local POVM performed on Bob's side cannot increase $E_\mathcal{C}$. The next part of the protocol requires Bob to communicate the classical information in the register $B''$ over to Alice. We need to demonstrate that this can be done for free, without increasing $E_\mathcal{C}$.
To see this, let $\sigma_{AA'A''BB'B''}^*$ be the optimal symmetric extension of $\sum_i K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}$. We then have $E_\mathcal{C}(\sum_i K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}) = \mathcal{C}_{\mathrm{min}}(AA'A'':BB'B'' \mid \sigma_{AA'A''BB'B''}^*)$. Recall that the register $B''$ stores the classical information of Bob's POVM outcomes. By definition, $\sigma_{AA'A''BB'B''}^*$ must be a symmetric extension, so there exists a local unitary that Alice can perform such that $U_{AA'A''}\sigma_{AA'A''BB'B''}^* U_{AA'A''}^\dag = \Phi_{\mathrm{SWAP}}^{AA'A'' \leftrightarrow BB'B''}(U_{AA'A''}\sigma_{AA'A''BB'B''}^* U_{AA'A''}^\dag)$. Since local unitaries do not affect the measure $E_\mathcal{C}$, we will assume that $\sigma_{AA'A''BB'B''}^*$ is itself already symmetric.
Suppose we add registers, denoted $M_{A}$ and $M_{B}$, initialized in the states $\ket{0}_{M_A}$ and $\ket{0}_{M_B}$, and locally copy the classical information on registers $A''$ and $B''$ via CNOT operations $U_{\mathrm{CNOT}}^{M_AA''}$ and $U_{\mathrm{CNOT}}^{M_BB''}$. This results in the state $$\mathcal{U}_{\mathrm{CNOT}}^{M_AA''} \circ \mathcal{U}_{\mathrm{CNOT}}^{M_BB''}(\ket{0}_{M_A}\bra{0} \otimes \sigma_{AA'A''BB'B''}^* \otimes \ket{0}_{M_B}\bra{0}),$$ where $\mathcal{U}_{\mathrm{CNOT}}^{AB}(\rho_{AB}) \coloneqq U_{\mathrm{CNOT}}^{AB} \rho_{AB} U_{\mathrm{CNOT}}^{AB \dag}$. Note that as identical unitary operations are performed on Alice's and Bob's side, the above state is symmetric since $\sigma_{AA'A''BB'B''}^*$ is symmetric. Due to symmetry, we must have the following chain of equalities:
\begin{align} &\mathcal{U}_{\mathrm{CNOT}}^{M_AA''} \circ \mathcal{U}_{\mathrm{CNOT}}^{M_BB''}(\ket{0}_{M_A}\bra{0} \otimes \sigma_{AA'A''BB'B''}^* \otimes \ket{0}_{M_B}\bra{0}) \\& \;= \Phi_{\mathrm{SWAP}}^{A''\leftrightarrow B''} \circ \mathcal{U}_{\mathrm{CNOT}}^{M_AA''} \circ \mathcal{U}_{\mathrm{CNOT}}^{M_BB''}[\ket{0}_{M_A}\bra{0} \otimes \Phi_{\mathrm{SWAP}}^{A''\leftrightarrow B''} (\sigma_{AA'A''BB'B''}^*) \otimes \ket{0}_{M_B}\bra{0}] \\ & \;= \mathcal{U}_{\mathrm{CNOT}}^{M_AB''} \circ \mathcal{U}_{\mathrm{CNOT}}^{M_BA''}(\ket{0}_{M_A}\bra{0} \otimes \sigma_{AA'A''BB'B''}^* \otimes \ket{0}_{M_B}\bra{0})\end{align} where Equation~10 uses the fact that $\Phi_{\mathrm{SWAP}}^{A\leftrightarrow B}(\rho_{AB}) = U_{\mathrm{SWAP}}^{A\leftrightarrow B} \rho_{AB} U_{\mathrm{SWAP}}^{A\leftrightarrow B\; \dag}$, $ U_{\mathrm{SWAP}}^{A\leftrightarrow B} = U_{\mathrm{SWAP}}^{A\leftrightarrow B\; \dag}$ and $U_{\mathrm{SWAP}}^{B\leftrightarrow C} U_{\mathrm{CNOT}}^{AB} U_{\mathrm{SWAP}}^{B\leftrightarrow C \; \dag} = U_{\mathrm{CNOT}}^{AC}$. One can verify that Equation~10 is a symmetric extension of $\sum_i \ket{i}_{M_A}\bra{i}\otimes K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}$, which is just the state if Bob communicates the classical information on the register $B''$ to Alice. As such, we determine that the copying of classical information to Alice cannot increase the measure, so we have $$E_\mathcal{C}(\sum_i K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}) \geq E_\mathcal{C}(\sum_i \ket{i}_{M_A}\bra{i}\otimes K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}).$$ This is already sufficient for us to prove that $E_\mathcal{C}$ cannot increase under classical communication.
Continuing from where we left off:
\begin{align} E_{\mathcal{C}}(\rho_{AB})& \geq E_\mathcal{C}(\sum_i K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}) \\ &\geq E_\mathcal{C}(\sum_i \ket{i}_{A''}\bra{i} \otimes K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}) \\ &= E_\mathcal{C}(\ket{0}_{A'}\bra{0} \otimes \sum_i \ket{i}_{A''}\bra{i} \otimes K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i}) \\ &= E_\mathcal{C}(U_{AA'A''}\ket{0}_{A'}\bra{0} \otimes \sum_i \ket{i}_{A''}\bra{i} \otimes K_B^i \rho^{*}_{AB}K_B^{i\dag} \otimes \ket{i}_{B''}\bra{i} U_{AA'A''}^\dag) \\
&\geq E_\mathcal{C}( \sum_{i,j} \ket{i}_{A''}\bra{i} \otimes K_A^{i,j}K_B^i \rho^{*}_{AB}K_B^{i\dag} K_A^{i,j\dag} \otimes \ket{i}_{B''}\bra{i} ) \end{align} where in Lines 11 and 12, we used the fact that local operations and classical communication cannot increase $E_\mathcal{C}$ (in Line 12, Alice's copy register $M_A$ has been relabelled as $A''$), and in Line 15, the inequality holds because every symmetric extension of the argument in Line 14 is also a symmetric extension of the argument in Line 15. The final line says that when Alice performs an operation conditioned on the classical communication by Bob, the measure also does not increase.
From the above arguments, we see that any local POVM performed by Bob, followed by a communication of the classical measurement outcomes to Alice, and concluded by another local quantum operation by Alice conditioned on the classical communication, cannot increase $E_\mathcal{C}$. Since any LOCC protocol is a series of such procedures between Alice and Bob, possibly with their roles reversed, this implies that $E_\mathcal{C}$ is monotonically non-increasing under LOCC operations.
\end{proof}
In sum, the Propositions directly imply the following theorem, which is the key result of this article: \begin{theorem} $E_\mathcal{C}$ is a valid entanglement monotone for every choice of coherence measure $\mathcal{C}$. \end{theorem}
We observe that if we were to choose the coherence measure to be the relative entropy of coherence, which is defined as $\mathcal{C}(\rho_{AB}) = \mathcal{S}[\Delta(\rho_{AB})]- \mathcal{S}(\rho_{AB})$, where $\Delta(\rho_{AB})$ is the completely dephased state~\cite{Baumgratz2014}, then for pure states the measure exactly coincides with the well-known entropy of entanglement. This is because pure quantum states only have trivial extensions, and every pure state admits a Schmidt decomposition $\ket{\psi}_{AB} = \sum_i\sqrt{\lambda_i}\ket{i,i}_{AB}$, where we observe that the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$ satisfy the condition $\mathcal{C}(\rho_A)=\mathcal{C}(\rho_B)=0$. We can then verify that, w.r.t. this basis, $\mathcal{S}[\Delta(\rho_{AB})]= \mathcal{S}(\sum_i{\lambda_i \ket{i,i}_{AB}\bra{i,i}}) = \mathcal{S}[\mathrm{Tr}_B(\ket{\psi}_{AB}\bra{\psi})]$, which is just the expression for the entropy of entanglement. It still remains to be proven that the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$ achieve the minimization required in the definition of correlated coherence. In fact, this is always true and is a generic property of all coherence measures, which we show in the following theorem.
\begin{theorem}[$E_{\mathcal{C}}$ for pure states] \label{thm::purestates} For any continuous coherence measure $\mathcal{C}$ and pure state $\ket{\psi}_{AB}$ with Schmidt decomposition $\ket{\psi}_{AB} = \sum_i\sqrt{\lambda_i}\ket{i,i}_{AB}$, $E_\mathcal{C}(\ket{\psi}_{AB}) = \mathcal{C}(\ket{\psi}_{AB})$ where the coherence is measured w.r.t. the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$ specified by the Schmidt decomposition. \end{theorem}
\begin{proof} Consider the Schmidt decomposition $\ket{\psi}_{AB} = \sum_i\sqrt{\lambda_i}\ket{i,i}_{AB}$ for any pure state. We do not need to consider extensions since every pure state only has trivial extensions. Suppose the coefficients are nondegenerate, in the sense that $\lambda_i \neq \lambda_j$ if $i\neq j$. If we were to perform a partial trace, we see that $\rho_A = \mathrm{Tr}_B(\ket{\psi}_{AB}\bra{\psi})= \sum_i \lambda_i \ket{i}_A\bra{i}$. As the coefficients are nondegenerate, this implies that $\{\ket{i}_A\}$ (up to phase factors on the basis elements) is the unique local basis satisfying $\mathcal{C}(\rho_A)=0$. Identical arguments also apply for subsystem $B$. As such, the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$ necessarily achieve the minimum for the correlated coherence, i.e. $\mathcal{C}_{\mathrm{min}}(A:B \mid \ket{\psi}_{AB})$ and $E_\mathcal{C}(\ket{\psi}_{AB})$ are just the coherence $\mathcal{C}(\ket{\psi}_{AB})$ w.r.t. the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$.
The above demonstrates that the local bases defined by the Schmidt decomposition achieve the necessary minimization when the coefficients are nondegenerate. We now extend the arguments to the more general case. Consider now a general Schmidt decomposition $\ket{\psi}_{AB} = \sum_i\sqrt{\lambda_i}\ket{i,i}_{AB}$. In this case, even if the coefficients are degenerate, the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$ nonetheless satisfy the constraints $\mathcal{C}(\rho_A)=\mathcal{C}(\rho_B)=0$, so $\mathcal{C}(\ket{\psi}_{AB})$ w.r.t. this basis is at least an upper bound to $E_{\mathcal{C}}(\ket{\psi}_{AB})$.
Consider again the partial trace $\rho_A = \sum_i \lambda_i \ket{i}_A\bra{i}$. Without any loss of generality, we will assume that the $\lambda_i$ are in decreasing order, so that if $j\geq i$ then $\lambda_j \leq \lambda_i$. Suppose one of the coefficients is $m$-degenerate, so that for some $k$, $\lambda_k > \lambda_{k+1}= \ldots= \lambda_{k+m} > \lambda_{k+m+1}$. Note the strict inequality on both ends. We now consider a slightly perturbed state $\ket{\psi(\epsilon)}_{AB} = \sum_i\sqrt{\lambda_i(\epsilon)}\ket{i,i}_{AB}$ where $\lambda_i(\epsilon) = \lambda_i-\epsilon \lfloor i- k - \frac{m}{2} \rfloor$ whenever $k<i<k+m+1$ and $\lambda_i(\epsilon) = \lambda_i$ otherwise. The corresponding partial trace is denoted $\rho_A(\epsilon) = \sum_i \lambda_i(\epsilon) \ket{i}_A\bra{i}$. For sufficiently small $\epsilon >0$, we can verify that the majorization condition $\rho_A \prec \rho_A(\epsilon)$ is satisfied, which, due to Nielsen's theorem \cite{Nielsen2010}, implies that there exists some LOCC operation $\Phi_{\mathrm{LOCC}}$ that performs the transformation $\ket{\psi}_{AB} \rightarrow\ket{\psi(\epsilon)}_{AB}$. From Theorem~\ref{thm::monotonicity}, we know that this implies $E_\mathcal{C}(\ket{\psi}_{AB}) \geq E_\mathcal{C}\left(\ket{\psi(\epsilon)}_{AB} \right)$ since the quantity cannot increase under LOCC operations. At the same time, for sufficiently small $\epsilon>0$, the coefficients $\lambda_i(\epsilon)$ are non-degenerate, so $E_\mathcal{C}(\ket{\psi(\epsilon)}_{AB}) = \mathcal{C}(\ket{\psi(\epsilon)}_{AB})$ w.r.t. the local bases $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$. This implies that $\mathcal{C}(\ket{\psi(\epsilon)}_{AB}) \leq E_\mathcal{C}(\ket{\psi}_{AB}) \leq \mathcal{C}(\ket{\psi}_{AB})$. In the limit $\epsilon \rightarrow 0$, $\mathcal{C}(\ket{\psi(\epsilon)}_{AB})\rightarrow \mathcal{C}(\ket{\psi}_{AB})$, so by the squeeze theorem we must have that $E_\mathcal{C}(\ket{\psi}_{AB}) = \mathcal{C}(\ket{\psi}_{AB})$, where the implied basis is given by $\{\ket{i}_A \}$ and $\{ \ket{i}_B \}$. We have considered the case where only one coefficient has $m$-degeneracy, but the same arguments can be repeated as necessary for every coefficient that has degeneracy, which is sufficient to prove the general case. \end{proof}
Theorem~\ref{thm::purestates} reveals that for every coherence measure and every pure bipartite state, there is always a basis in which the coherence exactly quantifies the entanglement. In a more practical sense, it also shows that for every coherence measure that is computable, there corresponds a computable entanglement measure for pure states. Previously, we have already seen that the relative entropy of coherence, which has a closed-form expression, corresponds to the entropy of entanglement for pure states. We can similarly choose the $l_1$-norm of coherence, for which we get the simple closed-form formula $E_{\mathcal{C}}(\ket{\psi}_{AB}) = \sum_{i \neq j } \sqrt{\lambda_i\lambda_j}$ where $\ket{\psi}_{AB} = \sum_i\sqrt{\lambda_i}\ket{i,i}_{AB}$. Theorem~\ref{thm::purestates} states that this expression is also a valid entanglement monotone for pure states. In general, there exist infinitely many computable coherence measures. We also note that once one has an entanglement monotone for pure states, it is possible to generalize it to mixed states via a convex roof construction~\cite{Horodecki2001}, which provides yet another avenue for generating new entanglement measures from coherence measures.
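As a concrete instance of this computability, the closed-form expression above can be evaluated directly from the Schmidt coefficients, which are obtained from a singular value decomposition of the reshaped state vector. The sketch below is ours and is meant only as an illustration; for two-qubit pure states the returned value equals $2\sqrt{\lambda_1\lambda_2}$, which coincides with the concurrence.
\begin{verbatim}
import numpy as np

def l1_entanglement_pure(psi, dA, dB):
    # Singular values of the dA x dB amplitude matrix are the Schmidt
    # coefficients sqrt(lambda_i)
    M = np.asarray(psi, dtype=complex).reshape(dA, dB)
    lam = np.linalg.svd(M, compute_uv=False) ** 2
    root = np.sqrt(lam)
    # E = sum_{i != j} sqrt(lambda_i lambda_j)
    return float(np.outer(root, root).sum() - lam.sum())

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
print(l1_entanglement_pure(bell, 2, 2))       # approximately 1.0
\end{verbatim}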
\section{Asymmetric quantifiers of quantum correlations}
In the previous section, the symmetric portion of the correlated coherence was considered, in which case it was found to directly address the entangled part of quantum correlations. We now show that simply dropping the requirement of symmetry naturally leads to discord-like measures of correlations.
For quantum discord, the states that have zero discord, and are thus ``classical'', are the classical-quantum states, which can be written in the form $\rho_{AB}=\sum_i p_i \ket{i}_A\bra{i}\otimes \rho_B^i$. One may readily characterize this set of classical-quantum states by considering extensions without the requirement that the extension be symmetric. Let us consider the following:
\begin{definition} [Asymmetric discord of coherence] The asymmetric discord of coherence, for any given coherence measure $\mathcal{C}$, is defined to be the following quantity: $$D_{\mathcal{C}}(\rho_{AB}) = \min_{B'}\mathcal{C}_{\mathrm{min}}(A:BB' \mid \rho_{ABB'})$$ where the minimization is performed over all possible extensions satisfying $\mathrm{Tr}_{B'}(\rho_{ABB'}) = \rho_{AB}$. \end{definition}
We can then observe that this always defines a discord-like quantifier for every coherence measure $\mathcal{C}$.
\begin{theorem} $D_{\mathcal{C}}(\rho_{AB}) = 0$ iff $\rho_{AB}$ is classical-quantum, i.e. the state can be written as $\rho_{AB}= \sum_i p_i \ket{i}_A\bra{i}\otimes \rho_B^i$ where $\{\ket{i}_A\}$ is some orthonormal set. It is strictly positive otherwise. \end{theorem}
\begin{proof} First, suppose $\rho_{AB}= \sum_i p_i \ket{i}_A\bra{i}\otimes \rho_B^i$. Writing each $\rho_B^i$ in terms of its pure state decomposition, we have $$\rho_{AB}= \sum_i p_i \ket{i}_A\bra{i}\otimes \sum_j q_{ij}\ket{\beta_{ij}}_B\bra{\beta_{ij}}.$$ This state always permits an extension on Bob's side of the form $$\rho_{ABB'}= \sum_i p_i \ket{i}_A\bra{i}\otimes \sum_j q_{ij}\ket{\beta_{ij}}_B\bra{\beta_{ij}}\otimes \ket{i,j}_{B'}\bra{i,j}$$ for which $\mathcal{C}_{\mathrm{min}}(A:BB' \mid \rho_{ABB'})=0$ and so $D_{\mathcal{C}}(\rho_{AB})=0$.
Conversely, if $D_{\mathcal{C}}(\rho_{AB})=0$, then there must exist an extension that is diagonal in some local product basis, i.e. we can write $\rho_{ABB'} = \sum_{i,j} p_{ij} \ket{i}_A\bra{i} \otimes \ket{\beta_j}_{BB'}\bra{\beta_j}$, which is a classical-quantum state and remains classical-quantum even if we trace out the subsystem $B'$. This proves the converse statement, so we must have $D_{\mathcal{C}}(\rho_{AB}) = 0$ iff $\rho_{AB}$ is classical-quantum.
Since $D_{\mathcal{C}}(\rho_{AB})$ is defined as a minimum of a coherence measure and so is nonnegative, and $D_{\mathcal{C}}(\rho_{AB}) = 0$ iff $\rho_{AB}$ is classical-quantum, we must have $D_{\mathcal{C}}(\rho_{AB})> 0$ for any state that is not classical-quantum. This completes the proof.
\end{proof}
The most general notion of nonclassical correlations is one where the set of classical states is the set of classical-classical states, or completely classical states. These are quantum states that can always be written in the form $\rho_{AB}=\sum_{i,j} p_{ij} \ket{i}_A\bra{i}\otimes \ket{j}_B\bra{j}$. This case can be directly addressed via the correlated coherence itself, without consideration of any extensions of the state; it is the natural end point of the successive relaxations of the constraints previously imposed in $E_{\mathcal{C}}$ and $D_\mathcal{C}$.
\begin{theorem} $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB}) = 0$ iff $\rho_{AB}$ is classical-classical, i.e. the state can be written as $\rho_{AB}=\sum_{i,j} p_{ij} \ket{i}_A\bra{i}\otimes \ket{j}_B\bra{j}$ where $\{\ket{i}_A\}$ and $\{\ket{j}_B\}$ are some orthonormal sets. It is strictly positive otherwise. \end{theorem}
\begin{proof} First, suppose $\rho_{AB}=\sum_{i,j} p_{ij} \ket{i}_A\bra{i}\otimes \ket{j}_B\bra{j}$. It is then immediately clear, by considering the basis $\{ \ket{i}_A \ket{j}_B \}$, that $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB}) = 0$.
Conversely, if $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB}) = 0$ then this implies that we can write $\rho_{AB} = \sum_{i,j} p_{ij} \ket{i}_A\bra{i} \otimes \ket{j}_B\bra{j}$ since there must be some local basis $\{ \ket{i}_A \}$ and $\{ \ket{j}_B \}$ for which $\rho_{AB}$ is diagonal. This proves the converse statement so we must have $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB}) = 0$ iff $\rho_{AB}$ is classical-classical.
Since $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB})$ is a coherence measure and so is nonnegative, and $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB}) = 0$ iff $\rho_{AB}$ is classical-classical, we must have $\mathcal{C}_{\mathrm{min}}(A:B \mid \rho_{AB})> 0$ for any state that is not classical-classical. This completes the proof.
\end{proof}
We also observe that for pure bipartite states, $\mathcal{C}_{\mathrm{min}}(A:B \mid \ket{\psi}_{AB}) = D_{\mathcal{C}}(\ket{\psi}_{AB})=E_{\mathcal{C}}(\ket{\psi}_{AB})$, i.e. the discord-like quantifiers coincide with entanglement on pure states, which is a known property of measures of quantum discord. \section{Conclusion}
In the preceding sections, we presented a construction that yields a valid quantifier of entanglement. The construction is also generalizable to include larger classes of quantum correlations, leading to discord-like quantifiers of nonclassicality. The arguments are independent not only of the type of coherence measure used, but also of the kind of non-coherence-producing operation that is being considered. Such entanglement measures must therefore necessarily exist for any convex coherence quantifier that shares a common notion of classicality. This leads to the conclusion that such constructions, and thus notions of entanglement and discord, must exist in every reasonable resource theory of coherence.
In~\cite{Streltsov2015}, it was demonstrated that for every entanglement measure, there corresponds a coherence measure. This was achieved by considering the entanglement of the state after performing some preprocessing in the form of an incoherent operation. In a sense, this article asks the converse question: Does every coherence measure correspond to some entanglement measure? The results discussed in this article prove this in the affirmative. Therefore, if one were interested in keeping count, the number of possible entanglement measures must be exactly equal to the number of coherence measures.
The fact that entanglement can always be defined as the symmetric portion of correlated coherence also further illuminates the role played by the incoherent operation in~\cite{Streltsov2015}, despite it not being a crucial element for the construction of entanglement measures. Recall that incoherent operations are operations that do not produce coherence. This does not, however, preclude the moving of coherence from one portion of the Hilbert space to another. Since coherence can always be faithfully converted into entanglement, we see that the incoherent operation, in such a context, performs the role of converting any local coherences into the symmetric portion of the correlated coherence, at least when one restricts attention to the resource theory of coherence considered in~\cite{Baumgratz2014,Streltsov2015}.
We hope that the discussion presented here will inspire further research into the interplay between coherence and quantum correlations.
\acknowledgements This work was supported by the National Research Foundation of Korea (NRF) through a grant funded by the Korea government (MSIP) (Grant No. 2010-0018295) and by the Korea Institute of Science and Technology Institutional Program (Project No. 2E27800-18-P043). K.C. Tan was supported by Korea Research Fellowship Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (Grant No. 2016H1D3A1938100). We would also like to thank H. Kwon for helpful discussions.
\end{document} |
\begin{document}
\title{ Extension of Irreducibility results on Generalised Laguerre Polynomials $L_n^{(-1-n-s)}(x)$} \author[Nair]{Saranya G. Nair} \address{Department of Mathematics\\ BITS Pilani, K K Birla Goa Campus, Goa- 403726} \email{[email protected]} \author[Shorey]{T. N. Shorey} \address{National Institute of Advanced Studies, IISc Campus\\ Bangalore, 560012} \email{[email protected]} \thanks{2010 Mathematics Subject Classification: Primary 11A41, 11B25, 11N05, 11N13, 11C08, 11Z05.\\ Keywords: Irreducibility, Laguerre Polynomials, Primes, Newton Polygons.}
\begin{abstract} We consider the irreducibility of Generalised Laguerre Polynomials
for negative integral values given by $L_n^{(-1-n-s)}(x)=\displaystyle\sum_{j=0}^{n}\binom{n-j+s}{n-j}\frac{x^j}{j!}.$ For different values of $s,$ this family gives polynomials which are of great interest. It was proved earlier that for $0 \leq s \leq 60,$ these polynomials are irreducible over $\mathbb{Q}.$ In this paper we improve this result up to $s \leq 88.$ \end{abstract}
\maketitle \pagenumbering{arabic}
\pagestyle{myheadings} \markright{Extension of Irreducibility results on Generalised Laguerre Polynomials $L_n^{(-1-n-s)}(x)$} \markleft{ Nair and Shorey} \section{{\bf Introduction}}
For a positive integer $n$ and real number $\alpha,$ the Generalised Laguerre Polynomial (GLP) is defined as \begin{align}\label{lag} L_n^{(\alpha)}(x)=\displaystyle\sum_{j=0}^{n}\frac{(n+\alpha)(n-1+\alpha)\cdots (j+1+\alpha)}{j!(n-j)!}(-x)^j. \end{align} These polynomials were discovered around 1880 and they have been extensively studied in various branches of mathematics and mathematical physics. The algebraic properties of GLP were first studied by Schur \cite{Sch1},\cite{Sch2} where he established the irreducibility of $L_n^{(\alpha)}(x)$ for $\alpha \in \{0,1,-n-1\},$ gave a formula for the discriminant $\Delta_n^{(\alpha)}$ of $ \mathcal{L}_n^{(\alpha)}(x)=n!L_n^{(\alpha)}(x)$ by \begin{align*}
\Delta_n^{(\alpha)}=\displaystyle\prod_{j=1}^{n}j^j(\alpha+j)^{j-1} \end{align*} and calculated their associated Galois groups. For an account of results obtained on GLP, we refer to \cite{Haj},\cite{NaSh}.
Let $f(x) \in \mathbb{Q}[x]$ with deg $f=n$. By irreducibility of a polynomial, we shall always mean its irreducibility over $\mathbb{Q}.$ We observe that if a polynomial of degree $n$ has a factor of degree $k <n,$ then it has a factor of degree $n-k.$ {\it Therefore given a polynomial of degree $n$, we always consider factors of degree $k$ where $1 \leq k \leq \frac{n}{2}$}. If the argument $\alpha$ of \eqref{lag} is a negative integer, we see that the constant term of $L_n^{(\alpha)}(x)$ vanishes if and only if $n \geq |\alpha|=-\alpha$ and then $L_n^{(\alpha)}(x)$ is reducible. Therefore we assume that $\alpha \leq -n-1.$ We write $\alpha=-n-s-1$ where $s$ is a non-negative integer. We have \begin{align}
L_n^{(-n-s-1)}(x)=\displaystyle\sum_{j=0}^{n}(-1)^n\frac{(n+s-j)!}{(n-j)! s!}\frac{x^j}{j!}. \end{align} Borrowing the notation from \cite{SinSho}, we consider the following polynomial \begin{align*}
g(x):=g(x,n,s)=(-1)^nL_n^{(-n-s-1)}(x)=\displaystyle\sum_{j=0}^{n}\binom{n+s-j}{n-j}\frac{x^j}{j!}=\displaystyle\sum_{j=0}^{n}b_j\frac{x^j}{j!} \end{align*} where $b_j=\binom{n+s-j}{n-j}$ for $0 \leq j \leq n.$ Thus $b_n=1,b_0=\binom{n+s}{s}=\frac{(n+1)\cdots (n+s)}{s!}$. We observe that $g(x)$ is irreducible if and only if $L_n^{(-n-s-1)}(x)$ is irreducible. The aim of this paper is to discuss the irreducibility of $g(x)$. We consider the more general polynomial \begin{align*}
G(x):=G(x,n,s)=\displaystyle\sum_{j=0}^{n}a_jb_j\frac{x^j}{j!} \end{align*}
such that $a_j \in \mathbb{Z}$ for $0 \leq j \leq n$ with $|a_0|=|a_n|=1.$ If $a_j=1$ for $0 \leq j \leq n,$ we have $G(x)=g(x).$ We write \begin{align*}
g_1(x):=n! g(x) = n! \displaystyle\sum_{j=0}^{n}\binom{n+s-j}{n-j}\frac{x^j}{j!} \end{align*} and \begin{align*}
G_1(x)=n! G(x) \end{align*}so that $g_1$ and $G_1$ are monic polynomials with integer coefficients of degree $n.$ The irreducibility of $g_1(x)$ and $G_1(x)$ implies the irreducibility of $g(x)$ and $G(x)$ respectively. We begin with the following result by Sinha and Shorey \cite{SinSho} on $G(x)$.
\begin{lemma}\label{Lemma 1} Let $s \leq 92.$ Then $G(x)=G(x,n,s)$ has no factor of degree $k \geq 2$ except when $(n,k,s) \in \{ (4,2,7),(4,2,23),(9,2,19),(9,2,47),(16,2,14),(16,2,34),(16,2,89),\\(9,3,47), (16,3,19),(10,5,4)\}. $ \end{lemma}
We re-state Lemma \ref{Lemma 1} as follows:\\
Let $s \leq 92.$ Assume that $G(x)$ has a factor of degree $k \geq 2$. Then $(n,k,s) \in \{ (4,2,7),(4,2,23),(9,2,19),(9,2,47),(16,2,14),(16,2,34),(16,2,89),(9,3,47),\\ (16,3,19),(10,5,4)\}.$ We check that $g(x)$ is irreducible when $n \in \{ 4,9,10,16\}.$ Therefore by Lemma \ref{Lemma 1} with $G(x)=g(x)$ we have \begin{lemma}\label{Lemma2}
Let $n \geq 3$ and $s \leq 92$. Then $g_1(x)$ is either irreducible or a linear factor times an irreducible polynomial. \end{lemma} The irreducibility of $g_1(x)$ was proved by Schur \cite{Sch1} for $s=0,$ by Hajir \cite{Haj06} for $s=1$, by Sell \cite{Sell} for $s=2$ and by Hajir \cite{Haj} for $3 \leq s \leq 8.$ We shall prove \begin{theorem}\label{thm1}
$g_1(x)$ is irreducible for $9 \leq s \leq 88.$ \end{theorem} Nair and Shorey \cite{NaSh15b} and Jindal, Laishram and Sarma \cite{jin} already proved the irreducibility of $g_1(x)$ in the ranges $9 \leq s \leq 22$ and $23 \leq s \leq 60$, respectively, but our proof of Theorem \ref{thm1} is new. The proofs of \cite{NaSh15b} and \cite{jin} depend on the method of Hajir in \cite{Haj}, whereas our proof depends on Lemma \ref{Lemma2}, which is a direct consequence of Lemma \ref{Lemma 1}. We could not cover the cases $89 \leq s \leq 92$ due to computational limitations.
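The finitely many individual pairs $(n,s)$ that arise in the argument (for instance those collected in the set $T$ of Section 3) are verified with a computer algebra system. As an illustration only, the following is a minimal sketch of such a check in Python with SymPy (the verification in this paper uses Mathematica; the helper names below are ours): it builds $g_1(x)=n!\displaystyle\sum_{j=0}^{n}\binom{n+s-j}{n-j}\frac{x^j}{j!}$ and tests irreducibility over $\mathbb{Q}$ by factoring.
\begin{verbatim}
from sympy import symbols, binomial, factorial, factor_list, Poly, QQ

x = symbols('x')

def g1(n, s):
    """Monic integer polynomial g_1(x) = n! * sum_j C(n+s-j, n-j) * x**j / j!."""
    expr = sum(binomial(n + s - j, n - j) * x**j / factorial(j) for j in range(n + 1))
    return Poly(factorial(n) * expr, x, domain=QQ)

def is_irreducible(p):
    """True iff p has a single non-repeated irreducible factor over Q."""
    _, factors = factor_list(p.as_expr(), x)
    return len(factors) == 1 and factors[0][1] == 1

# Illustrative check for one pair (n, s); the actual verification runs over
# all pairs produced by the sieve described in Section 3.
print(is_irreducible(g1(10, 9)))
\end{verbatim}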
At this point, we pause for a digression concerning notation. In order to dispel possible confusion, it is worth pointing out that the notation $L_n^{<s>}(x)$ (resp., $\mathcal{L}_n^{<s>}(x)$) was used by the authors in \cite{Haj}, \cite{jin} and \cite{NaSh} to denote the polynomial $g(x, n, s)$ (resp., $g_1(x, n, s)$).
\section{Preliminaries}
From now onwards we shall assume that $s \geq 9.$ For a real number $\alpha,$ we write $\left[ \alpha\right]$ to be the largest integer not exceeding $\alpha.$
Let $f(x) =\displaystyle\sum_{j=0}^{m}d_jx^j \in \mbox{$\mathbb Z$}[x]$ with $d_0d_m \neq 0$ and let $p$ be a prime. For an integer $x,$ let $\nu(x)=\nu_p(x)$ be the highest power of $p$ dividing $x$ and we write $\nu(0)=\infty.$ Let S be the following set of points in the extended plane $$S =\{(0,\nu(d_m)),(1,\nu(d_{m-1})),(2,\nu(d_{m-2})),\ldots,(m,\nu(d_0))\}.$$ Consider the lower edges along the convex hull of these points. The left most endpoint is $(0,\nu(d_m))$ and the right most endpoint is $(m,\nu(d_0))$. The endpoints of each edge belong to S and the slopes of the edges increase strictly from left to right. The polygonal path formed by these edges is called the Newton polygon of $ f(x)$ with respect to the prime $p$ and we denote it by $NP_p(f)$. The endpoints of the edges of $NP_p(f)$ are called the vertices of $ NP_p(f)$. We begin with a very useful result, due to Filaseta \cite{Fil}, giving a criterion on the factorisation of a polynomial in terms of the maximum slope of the edges of its Newton polygon. \begin{lemma}\label{newton1}
Let $l, k,m$ be integers with $m \geq 2k > 2l \geq 0$. Suppose $h(x) =\displaystyle\sum_{j=0}^{m}b_jx^j \in
\mbox{$\mathbb Z$}[x] $ and $p$ be a prime such that $ p \nmid b_m$ and $p\mid b_j $ for $0 \leq j \leq m-l-1$ and the
right most edge of $ NP_p(h)$ has slope $ <\frac{1}{k}$. Then for any integers $a_0, a_1,\ldots, a_m$
with $p\nmid a_0a_m$, the polynomial $f(x) =\displaystyle\sum_{j=0}^{m} a_jb_jx^j$ cannot have a factor with degree in $[l + 1, k]$. \end{lemma} The next result is Lemma 4.2 of \cite{ShTi10} with $a=0.$ \begin{lemma}\label{ShTi}
Let $a_0,a_1,\cdots,a_n$ denote arbitrary integers and $$h(x)=\displaystyle\sum_{j=0}^{n}a_j\frac{x^j}{j!}.$$ Assume that $h(x)$ has a factor of degree $k \geq 1.$ Suppose that there
exists a prime $p > k $ such that $p$ divides $n(n-1) \cdots (n-k+1).$ Then $p$ divides $a_0a_n.$ \end{lemma} For a positive integer $l$ and a prime $p,$ let $\nu_p(l)$ be the maximal power of $p$ dividing $l.$
\begin{lemma}\label{order1}
Let $p$ be a prime. For any integer $l\geq 1$, write $l$ in base $p$ as $l=l_tp^t+l_{t-1}p^{t-1}+\dots+l_1p+l_0$ where $0\leq l_i\leq p-1$ for $0\leq i \leq t$ and $l_t >0$. Then
\begin{align*}
\nu_p(l!)=\frac{l-\sigma_p(l)}{p-1}
\end{align*}
where $\sigma_p(l)=l_t+l_{t-1}+\dots+l_1+l_0$. \end{lemma} This is due to Legendre. For a proof, see \cite[Ch.17, p 263]{Hasse}. As a consequence we have \begin{align}\label{eq2}
\nu_p\left(\binom{m}{t}\right)= \frac{\sigma_p(t)+\sigma_p(m-t)-\sigma_p(m)}{p-1}. \end{align}
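The following minimal Python sketch (ours, purely for illustration; it plays no role in the proofs) makes the two computational ingredients of this section concrete: the construction of $NP_p(f)$ as the lower convex hull of the points $(i,\nu_p(d_{m-i}))$, whose right most slope is the quantity entering Lemma \ref{newton1}, and Legendre's formula together with the resulting expression for $\nu_p\big(\binom{m}{t}\big)$.
\begin{verbatim}
from math import comb, factorial

def nu(p, a):
    """p-adic valuation of a nonzero integer a."""
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

def sigma(p, l):
    """Sum of the base-p digits of l."""
    s = 0
    while l:
        s += l % p
        l //= p
    return s

def newton_polygon_slopes(coeffs, p):
    """Slopes (left to right) of NP_p(f) for f = sum_j coeffs[j]*x**j;
    point i corresponds to the coefficient of x**(m-i), as in the text."""
    m = len(coeffs) - 1
    pts = [(i, nu(p, coeffs[m - i])) for i in range(m + 1) if coeffs[m - i] != 0]
    hull = []                              # lower convex hull of the points
    for q in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (q[1] - y1) <= (q[0] - x1) * (y2 - y1):
                hull.pop()                 # middle point lies on or above the segment
            else:
                break
        hull.append(q)
    return [(hull[i + 1][1] - hull[i][1]) / (hull[i + 1][0] - hull[i][0])
            for i in range(len(hull) - 1)]

# f(x) = (x+2)^3 = x^3 + 6x^2 + 12x + 8 with p = 2: a single edge of slope 1
print(newton_polygon_slopes([8, 12, 6, 1], 2))

# Legendre's formula and the binomial identity above, checked directly
p, l, m, t = 3, 100, 40, 17
assert nu(p, factorial(l)) == (l - sigma(p, l)) // (p - 1)
assert nu(p, comb(m, t)) == (sigma(p, t) + sigma(p, m - t) - sigma(p, m)) // (p - 1)
\end{verbatim}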
\begin{lemma}\label{Lemma 6}
Assume that $g_1(x)$ is a linear factor times an irreducible polynomial. Let $p$ be a prime dividing $n$ and $ s < p^2.$ Then \begin{align*}
d+ \left[\frac{s}{p}\right] \geq p
\end{align*}
where $d \equiv \frac{n}{p} \pmod{p}$ for $ 1 \leq d <p.$
\end{lemma} The assertion of Lemma \ref{Lemma 6} was proved in \cite[Corollary 3.2]{jin} under the assumption of $p$ dividing $n_1$ where
\begin{align}\label{defn of n}
n=n_0\cdot n_1 \ {\rm with} \ \gcd(n_0,n_1)=1 \end{align} and \begin{align}\label{n_1}
n_1=\displaystyle \prod_{p| \gcd (n,\binom{n+s}{s})}p^{\text{ord}_p(n)}. \end{align} Therefore $n_0$ is the largest divisor of $n$ which is coprime to $\binom{n+s}{s}.$ Thus the assumption $p$ dividing $n_1$ in \cite[Corollary 3.2]{jin} is replaced by $p$ dividing $n$ in Lemma \ref{Lemma 6} when $g_1(x)$ is a linear factor times an irreducible polynomial.
\begin{proof}
We apply Lemma \ref{ShTi} with $h(x)=g_1(x)$ and $k=1$ to conclude that \begin{align} \label{eq1}
p|\frac{(n+1)\cdots (n+s)}{s!}.
\end{align} If $\nu_p(n)>s,$ then $\nu_p(n+j)=\nu_p(j)$ for $1 \leq j \leq s$, so that $\nu_p\left( \frac{(n+1)\cdots (n+s)}{s!}\right) = \nu_p\left(\frac{s!}{s!}\right) = 0,$ contradicting \eqref{eq1}. Therefore $\nu_p(n) \leq s < p^2.$ The same argument applies whenever $\nu_p(n) \geq 2$, since $s<p^2$ gives $\nu_p(j) \leq 1 < \nu_p(n)$ for $1 \leq j \leq s$; hence $ 1 \leq \nu_p(n) <2.$ We write \begin{align*}
n=pD \ \text{where} \ \gcd(D,p)=1 \end{align*} and $s=ps_1+s_0$ where $1 \leq s_1 <p, 0 \leq s_0 <p$. Then $n+s=p(D+s_1)+s_0$ which implies that $\sigma_p(n+s)=\sigma_p(D+s_1)+s_0$. Now we argue as in \cite[Lemma 3.1]{jin} to derive from \eqref{eq2} and \eqref{eq1} that \begin{align*} 1 \leq \nu_p\left(\binom{n+s}{s}\right)= &\frac{\sigma_p(n)+\sigma_p(s)-\sigma_p(n+s)}{p-1}\\ =& \frac{\sigma_p(D)+s_1+s_0-\sigma_p(D+s_1)-s_0}{p-1}\\ =&\frac{\sigma_p(D)+s_1-\sigma_p(D+s_1)}{p-1}\\ =& \nu_p\left(\binom{D+s_1}{s_1}\right)\\ =& \nu_p \left(\frac{(D+1)\cdots (D+s_1)}{s_1!}\right)\\ =& \nu_p((D+1) \cdots (D+s_1))\\ =& \nu_p(D+j) \ \text{for \ exactly one} \ j \ \text{with} \ 1 \leq j \leq s_1 \end{align*} since $s_1 <p.$ Hence $D+\left[ \frac{s}{p}\right]=D+s_1 \geq D+j \equiv 0 $ (mod $p$). Since $\gcd(D,p)=1$, we observe from \eqref{eq2} that $D=\frac{n}{p} \equiv d$ (mod $p$) where $ 1 \leq d <p$ by looking at the $p$-adic representation of $n.$ Hence $ d+ \left[ \frac{s}{p}\right] \geq p$ where $d \equiv \frac{n}{p}$ (mod $p$) for $1 \leq d <p.$ \end{proof} Next, we prove \begin{lemma}\label{Lemma 7}
Assume that $g_1(x)$ is a linear factor times an irreducible polynomial. Then for $n \leq 127$ and $s \leq 103,$ $g_1(x)$ is irreducible.
\end{lemma}
This result is proved in \cite[Lemma 2.10]{NaSh15b} without the assumption that $g_1(x)$ is a linear factor times an irreducible polynomial. But Lemma \ref{Lemma 7} suffices for our purpose. We give here a proof of this particular case, as it involves considerably fewer computations.
\begin{proof}
Let $n \leq 127$ and $s \leq 103.$ Since $g_1(x)$ is a linear factor times an irreducible polynomial, we see that $n_0=1.$ Assume that $n \geq s+2.$ Then $\max( \frac{n+s}{2},n-1)=n-1$ and we derive from \cite[Lemma 4.1]{Haj} that $g_1(x)$ is irreducible if $n$ is prime. Now we check that $g_1(x)$ is irreducible for all pairs $(n,s)$ with $n$ composite and $n \geq s+2.$ Next we assume that $n <s+2.$ Then $\max( \frac{n+s}{2},n-1)=\frac{n+s}{2}$. We determine all pairs $(n,s)$ such that $n <s+2$ and there exists a prime $p$ satisfying $\frac{n+s}{2} < p \leq n$. Then $g_1(x)$ is irreducible for all these pairs $(n,s)$ by \cite[Lemma 4.1]{Haj}. For the remaining pairs $(n,s)$ with $n <s+2,$ we check that $g_1(x)$ is irreducible.
\end{proof}
We close this section by stating the following result which is Lemma 3.6 from \cite{jin} with $i=1$.
\begin{lemma} \label{Lemma factor1}
Let $p|n(s+1)$ and $\nu_p\left(\binom{n+s}{s}\right)=u.$ Then $g_1(x)$ cannot have a factor of degree $1$ if any of the following conditions holds:
\begin{itemize}
\item[(i)] $u=0$
\item[(ii)] $u >0$, $ p>2$ and $
\max \left\{\frac{u+1}{p}, \frac{\nu_p(n+s-z_0)-\nu_p(n)}{z_0+1}\right\} < 1$,
where $z_0 \equiv n+s$ (mod $p$) with $1 \leq z_0 <p.$
\end{itemize}
\end{lemma}
\section{Proof of Theorem \ref{thm1}}
For $c> 1$ and $s \geq c^2$, we consider the following set given by
\begin{align*}
H_{s,c}=\left\{n \in \mathbb{N}: n >127, \ p^{\nu_p(n)} \leq s \ \text{for every prime } p \mid n, \ \text{and} \ d+\left[\frac{s}{p}\right] \geq p \ \text{whenever} \ p \mid n \ \text{and} \ p > sc^{-1} \right\}
\end{align*} where $1 \leq d <p$ and $d \equiv \frac{n}{p}$ (mod $p$). Since $p \geq sc^{-1} \geq \sqrt{s},$ we derive from Lemmas \ref{Lemma 6} and \ref{Lemma 7} that it suffices to prove the irreducibility of $g_1(x)=g_1(x,n,s)$ with $n \in H_{s,c}.$ We partition $H_{s,c}$ as $H_{s,c,1}$ and $H_{s,c,2}$ given by \begin{align*}
H_{s,c,1}=\{ n \in H_{s,c} \ \text{such that} \ P(n) \leq [sc^{-1}]\} \end{align*} and \begin{align*}
H_{s,c,2}=\{ n \in H_{s,c} \ \text{such that} \ P(n) > [sc^{-1}]\} \end{align*} where $P(n)$ denotes the greatest prime factor of $n$.
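To make the sieve concrete, here is a minimal Python sketch (ours; it is not the code used for the computations reported below) of the membership test defining $H_{s,c}$. In practice the full sets are generated by multiplying together admissible prime powers $p^{a} \leq s$ rather than by scanning all integers, since the members of $H_{s,c}$ can be large.
\begin{verbatim}
from sympy import factorint

def in_H(n, s, c):
    """Membership test for H_{s,c}; assumes c > 1 and s >= c*c, as in the text."""
    if n <= 127:
        return False
    for p, a in factorint(n).items():
        if p**a > s:                 # every prime power dividing n must be <= s
            return False
        if p > s / c:                # here a = 1 automatically, since p*p > s
            d = (n // p) % p         # 1 <= d < p with d congruent to n/p mod p
            if d + s // p < p:
                return False
    return True

# Illustrative only: the first few members of H_{80,7.7} below a small cutoff
print([n for n in range(128, 2000) if in_H(n, 80, 7.7)][:10])
\end{verbatim}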
Let $9 \leq s \leq 88$. By taking $ c \in \{3,3.42,5.5,7.7\}$ we compute $H_{s,c,1}$ and $H_{s,c,2}$, and hence $H_{s,c}$, for $ 9 \leq s \leq 88.$ We give some details regarding the computations of $H_{s,c}$ below. For example, for $s=80$ and $ c=7.7$ the cardinality of $H_{s,c}$ is $|H_{s,c}|=1538$, and for $s=85$ and $ c=7.7$ it is $|H_{s,c}|=2466.$ The following table gives the value of $c$ which we have chosen for each $s$ to compute the set $H_{s,c}$. \begin{center}
\begin{tabular}{|c|c|}
\hline
$s$ & $c$ \\
\hline
$9 \leq s \leq 11$ & $3$ \\
\hline $12 \leq s \leq 35$ & $3.42$ \\ \hline $36 \leq s \leq 60$ & $5.5$ \\ \hline
$61 \leq s \leq 88$ & $7.7$\\
\hline \end{tabular} \end{center} For each $n \in H_{s,c}$, we apply Lemma \ref{Lemma factor1} to derive that $g_1(x)=g_1(x,n,s)$ is irreducible except for $(n,s) \in T$ where \begin{align*}
T=&\{(272,17),(144,21),(144,23),(144,25),(144,26),(312,26),(600,26),(216,29),(216,31),\\& (720,31),(240,35), (1440,35),(288,40),(288,41),(216,42),(216,44),(216,47),(288,47),\\& (288,48),(216,49),(144,51),(288,51),(144,53),(216,53),(288,53),(4320,55),(216,59),\\&(216, 63), (288, 63),(432, 63), (672, 63),(180,71),(192,71),(216,71),(216,79),(576,79),\\&(144, 80), (192, 80),(216, 80), (320, 80), (432, 80), (576,
80), (720, 80), (4320, 80)\}. \end{align*} We use the IrreduciblePolynomialQ command in Mathematica to check that $g_1(x)$ is irreducible for all these values of $(n,s) \in T.$ \begin{remark}
It is not necessary to compute $H_{s,c}$ for all values of $s$. For a fixed $c$, if $[sc^{-1}]=[(s+1)c^{-1}]$ and $P(s+1) \leq [sc^{-1}]$, then $H_{s,c}=H_{s+1,c}$. The assertion follows from the definitions of $H_{s,c,1}$ and $H_{s,c,2}$. Therefore $H_{s,c}=H_{s+1,c}$ for $s \in \{19,27,29,34\}$ with $c=3.42$, $s \in \{39,41,47,49,53,55,59\}$ with $c=5.5$ and $s \in \{62,69,71,74,79,83,87\}$ with $c=7.7$.
\end{remark}
\begin{remark}
We can take $c=5.5$ or $c=7.7$ according as $s \geq 36$ or $s > 60$, respectively. But we cannot take $c$ larger than $3.42\frac{s}{s-1}$ without sharpening \cite[Corollary 6]{NaSh15b}. Consequently we get sets $H_{s,c}$ of smaller size when $s \geq 36$, and this reduces the computations. \end{remark}
\end{document} |
\begin{document}
\title[Semilinear nonlocal elliptic equations]{\large Semilinear nonlocal elliptic equations \\ with absorption term} \author[P.-T. Huynh]{Phuoc-Truong Huynh} \address{P.-T. Huynh, Department of Mathematics, Alpen-Adria-Universit\"{a}t Klagenfurt, Klagenfurt, Austria.} \email{[email protected]}
\author[P.-T. Nguyen]{Phuoc-Tai Nguyen} \address{P.-T. Nguyen, Department of Mathematics and Statistics, Masaryk University, Brno, Czech Republic.} \email{[email protected]}
\begin{abstract} In this paper, we study the semilinear elliptic equation (E) $\mathbb{L} u + g(u) = \mu$ in a $C^2$ bounded domain $\Omega \subset \mathbb{R}^N$ with homogeneous Dirichlet boundary condition $u=0$ on $\partial \Omega$ or in $\mathbb{R}^N \setminus \Omega$ if applicable, where $\mathbb{L}$ is a nonlocal operator posed in $\Omega$, $g$ is a nondecreasing continuous function and $\mu$ is a Radon measure on $\Omega$. Our approach relies on a fine analysis of the Green operator $\mathbb{G}^{\Omega}$, which is formally known as the inverse of $\mathbb{L}$. Under mild assumptions on $\mathbb{G}^{\Omega}$, we establish the compactness of $\mathbb{G}^{\Omega}$ and variants of Kato's inequality expressed in terms of $\mathbb{G}^{\Omega}$, which are important tools in proving the existence and uniqueness of weak-dual solutions to (E). Finally, we discuss solutions with an isolated boundary singularity, which can be attained via an approximation procedure. The contribution of the paper consists of (i) developing novel unifying techniques which allow to treat various types of nonlocal operators and (ii) obtaining sharp results in weighted spaces, which cover and extend several related results in the literature.
\noindent\textit{Key words: nonlocal elliptic equations, compactness, Kato inequality, critical exponent, weak-dual solutions, measure data, Green kernel.}
\noindent\textit{2020 Mathematics Subject Classification: 35J61, 35B33, 35B65, 35R06, 35R11, 35D30, 35J08.}
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction} In the present paper, we are interested in semilinear elliptic equations with absorption term of the form \begin{equation}\label{eq:equation_introduction} \mathbb{L} u + g(u) = \mu \text{ in }\Omega, \end{equation} with homogeneous Dirichlet boundary condition \begin{equation} \label{bdry-cond} u = 0 \text{ on }\partial \Omega \text{ or in }\mathbb{R}^N \backslash \Omega \text{ if applicable.} \end{equation} Here $\Omega \subset \mathbb{R}^N$ $(N \ge 3)$ is a $C^2$ bounded domain, $\mu$ is a Radon measure on $\Omega$, $g: \mathbb{R} \to \mathbb{R}$ is a continuous function and $\mathbb{L}$ is a nonlocal operator posed in $\Omega$. This type of nonlinear nonlocal equation has gained a lot of attention from the partial differential equations research community because of its applications in many fields such as probability, financial mathematics, elasticity and biology. The operator $\mathbb{L}$ considered in this work belongs to a class of nonlocal operators satisfying a list of assumptions given in Subsection \ref{sec:pre_assumption}, which can be verified in the case of well-known fractional Laplacians, as will be discussed in Subsection \ref{sec:examples}.
\noindent \textbf{Overview of the literature.} The study of linear and semilinear equations with irregular data has flourished over the last decades with a large number of achievements. Special attention has been paid to a family of equations consisting of competing effects: a diffusion expressed by a linear second-order differential operator, an absorption produced by a nonlinearity term and a source given by a function or a measure. A well-studied equation, which arises in the Thomas--Fermi theory, belonging to this family is \begin{equation} \label{eq:1} - \Delta u + g(u) = \mu \quad \text{in } \Omega, \end{equation} where $g:\mathbb{R} \to \mathbb{R}$ is a nondecreasing, continuous \textit{absorption} with $g(0) = 0$. An abundant theory for equation \eqref{eq:1} and its corresponding boundary value problem \begin{equation}\label{eq:Laplace}
\left\{ \begin{aligned}
-\Delta u + g(u)&= \mu &&\text{ in }\Omega, \\
u &= 0 &&\text{ on }\partial \Omega,
\end{aligned} \right. \end{equation} has been developed in function settings and then extended to measure frameworks (see e.g. \cite{BreStr_1973,Bre_1980,BenBre_2003,MarVer_2014,Pon_2016}). The involvement of measures enables one to bring to light and explain striking aspects of nonlinear phenomena. More precisely, it is well known that problem \eqref{eq:Laplace} has a unique solution for any $\mu = f \in L^1(\Omega,\delta)$, where $\delta(x)=\hbox{dist}(x,\partial \Omega)$ (see e.g. \cite[Proposition 2.1.2]{MarVer_2014}). However, in contrast to the $L^1$ case, the solvability for problem \eqref{eq:Laplace} does not hold for every Radon measure $\mu \in \mathcal{M}(\Omega,\delta)$ in general, and can be achieved only under suitable assumptions on the growth of $g$.
The existence result for \eqref{eq:Laplace} can be proved by using an approximation method based on the following pivotal ingredients. The first one is Sobolev estimates on the Green operator $\mathbb{G}$ associated to the Laplacian in (weak) Lebesgue spaces. This, combined with a subcritical integrability condition on $g$, implies the equi-integrability of approximating solutions. The second ingredient is the compactness of $\mathbb{G}$: derived from Sobolev estimates of $\mathbb{G}$ (see \cite[Theorem 1.2.2 (iii)]{MarVer_2014}) and compact Sobolev embeddings, it asserts that the Green operator $\mathbb{G}: \mathcal{M}(\Omega) \to L^q(\Omega)$ is compact for any $1 \le q<\frac{N}{N-1}$.
The uniqueness of \eqref{eq:Laplace} is a direct consequence of Kato's inequality (see \cite[Proposition 1.5.9]{MarVer_2014}) which states that for any $f \in L^1(\Omega,\delta)$, $\mu \in \mathcal{M}(\Omega,\delta)$, if $u \in L^1(\Omega)$ has zero boundary value (in the trace sense) and satisfies $$ -\int_{\Omega} u \Delta \zeta dx \leq \int_{\Omega} f \zeta dx + \int_{\Omega} \zeta d\mu, \quad \forall 0 \leq \zeta \in C_0^2(\overline \Omega), $$
then $$ -\int_{\Omega} u^+ \Delta \zeta dx \leq \int_{\Omega} f \sgn^+(u) \zeta dx + \int_{\Omega} \zeta d\mu^+, \quad \forall 0 \leq \zeta \in C_0^2(\overline \Omega), $$ where $u^+=\max\{u,0\}$.
Significant extensions of the foregoing results to nonlocal frameworks are available in the literature. It is impossible to list all of the obtained results; however, in order to have an overview of the recent progress in this subject, we mention some relevant works. Chen and V\'eron \cite{CheVer_2014} studied the problem \begin{equation} \label{prob:frac} \left\{ \begin{aligned}
(-\Delta)^s u + g(u) &= \mu &&\text{ in }\Omega,\\
u &= 0 &&\text{ in } \mathbb{R}^N \backslash \Omega,
\end{aligned} \right. \end{equation} where $(-\Delta)^s$, $s \in (0,1)$, is the well-known fractional Laplacian defined by
$$ (-\Delta)^s u(x):=c_{N,s} \, \mathrm{P.V.} \int_{\mathbb{R}^N} \frac{u(x)-u(y)}{|x-y|^{N+2s}}dy,\quad x \in \Omega.$$ Here $c_{N,s} > 0$ is a normalized constant and the abbreviation P.V. stands for ``in the principal value sense", see \eqref{Lepsilon}. More precisely, they showed that if $g$ satisfies the subcritical integrability condition $$ \int_{1}^\infty [g(t)-g(-t)]t^{-1-\frac{N+s}{N-s}}dt < +\infty $$ then for any $\mu \in \mathcal{M}(\Omega,\delta^s)$, problem \eqref{prob:frac} admits a unique weak solution. Their proof employs the compactness of the Green operator $\mathbb{G}^{\Omega}$, which is strongly based on fractional Sobolev embeddings, and a Kato-type inequality for the fractional Laplacian, which ensures that if $u$ is a weak solution of \begin{equation} \left\{ \begin{aligned}
(-\Delta)^s u &= f \in L^1(\Omega,\delta^s) &&\text{ in }\Omega,\\
u &= 0 &&\text{ in } \mathbb{R}^N \backslash \Omega,
\end{aligned} \right. \end{equation} then $$ \int_{\Omega} u^+ (-\Delta)^s \zeta dx \leq \int_{\Omega} \zeta f \sgn^+(u) dx $$ for every $\zeta$ in a suitable space of test functions $\mathbb{X}(\Omega)$ (see \cite{CheVer_2014} for details). Afterwards, a version of Kato's inequality for the spectral fractional Laplacian was obtained by Abatangelo and Dupaigne \cite{AbaDup_2017}, which in turn serves to investigate related semilinear elliptic equations. It can be seen that the methods used in \cite{CheVer_2014,AbaDup_2017} rely essentially on the properties of particular operators under consideration.
A great deal of effort has been devoted to developing unifying tools which allow one to deal with elliptic equations involving a large class of nonlocal operators characterized by estimates on the associated Green function. This research direction has been greatly pushed forward by numerous publications, among them Bonforte, Figalli and V\'azquez \cite{BonFigVaz_2018} on the theory of semilinear elliptic equations with sublinear nonlinearities in the context of weak-dual solutions, Abatangelo, G\'omez-Castro and V\'azquez \cite{Aba_2019} on the boundary behavior of solutions, and Chan, G\'omez-Castro and V\'azquez \cite{GomVaz_2019} on the study of eigenvalue problems and various concepts of solutions. The notion of weak-dual solution introduced by Bonforte and V\'azquez in \cite{BonVaz_2015} is convenient for studying elliptic and parabolic equations involving the foregoing class of operators since it involves only the associated Green function and does not require specifying the meaning of the operators acting on test functions. In parallel, significant contributions to the research topic based on probabilistic methods were given by Kim, Song and Vondra\v cek in \cite{KimSonVon_2020, KimSonVon_2020-1,KimSonVon_2019}. Lately, a work originating in an attempt to study semilinear equations with source terms was carried out by the authors of the present paper. It was shown in \cite{TruongTai_2020} that there exists a finite threshold value in the sense that either multiplicity, or uniqueness, or nonexistence holds according to whether the norm of the given data is smaller than, equal to, or bigger than the threshold value.
However, the understanding of equations involving an absorption nonlinearity and an operator determined by its associated Green function is still rather limited. The interplay between the properties of the operator and the nature of the absorption term may disclose new types of difficulties in the analysis and requires a different approach.
\noindent \textbf{Aim of the paper.} The aim of the paper is twofold. We will develop important tools including the compactness of the Green operator $\mathbb{G}^{\Omega}$ associated to $\mathbb{L}$ in the optimal weighted measure space and variants of Kato's inequality. The compactness of $\mathbb{G}^{\Omega}$ is based on the Riesz--Fr\'echet--Kolmogorov theorem and sharp estimates on $\mathbb{G}^{\Omega}$, which distinguishes our technique from those in \cite{MarVer_2014,CheVer_2014,AbaDup_2017}. Variants of Kato's inequality are obtained by exploiting the expression of $\mathbb{L}$ and the fine properties of variational solutions to non-homogeneous linear equations. The application of these tools in turn allows us to achieve the subsequent central aim, which is the derivation of the existence and uniqueness for equation \eqref{eq:equation_introduction}--\eqref{bdry-cond}.
\noindent \textbf{Organization of the paper.} The rest of the paper is organized as follows. In Section \ref{sec:preliminaries}, we give the definition of function and measure spaces, present the main assumptions on the class of operators, and provide some examples. In Section \ref{sec:result}, we state the central results of this paper, including the compactness of the Green operator $\mathbb{G}^{\Omega}$ associated to $\mathbb{L}$, Kato's inequality and the solvability of problem \eqref{eq:equation_introduction}--\eqref{bdry-cond}. In Section \ref{sec:lineartheory}, we perform the proof of the compactness of $\mathbb{G}^{\Omega}$. Section \ref{sec:katoinequality} is devoted to the derivation of Kato's inequality and its variants. In light of the above results, we will establish the existence and uniqueness of solutions for semilinear equations with absorption terms in Section \ref{sec:semilinear}. Finally, a discussion concerning the boundary measure problems will be provided in Section \ref{sec:boundarysolution}.
\section{Preliminaries, basic assumptions and definitions}\label{sec:preliminaries}
\subsection{Notations}Throughout this paper, we assume that $\Omega \subset \mathbb{R}^N$ ($N > 2s)$ is a $C^2$ bounded domain and let $\delta(x)$ be the distance from $x \in \Omega$ to $\mathbb{R}^N \backslash \Omega$. We denote by $c, C, c_1, c_2, \ldots$ positive constants that may vary from one appearance to another and depend only on the data. The notation $C = C(a,b,\ldots)$ indicates the dependence of constant $C$ on $a,b,\ldots$ For two functions $f$ and $g$, we write $f \lesssim g$ ($f \gtrsim g$) if there exists a constant $C > 0$ such that $f \le C g$ ($f \ge C g$). We write $f \sim g$ if $f \lesssim g$ and $g \lesssim f$. We denote $a\vee b := \max\{ a,b\}$ and $a\wedge b := \min \{ a, b\}$. For a function $f$, the positive part and the negative part of $f$ are $f^+= f \vee0 $ and $f^-=(-f) \vee0 $ respectively.
\subsection{Function spaces}\label{sec:LPspace} We introduce function spaces used in this paper. For $1 \leq q <\infty$ and $\alpha \in \mathbb{R}$, we consider the weighted Lebesgue space $$
L^q(\Omega,\delta^{\alpha}) := \left\{ u \in L^1_{\loc}(\Omega): \norm{u}_{L^q(\Omega,\delta^{\alpha})}:= \left( \int_{\Omega}|u|^q \delta^{\alpha}dx \right)^{\frac{1}{q}}< +\infty \right\} $$ and $$\delta^{\alpha} L^{\infty}(\Omega) := \{ \delta^{\alpha}u: u \in L^\infty(\Omega)\}. $$ The weighted Marcinkiewicz space, or weak-Lebesgue space, $M^q(\Omega,\delta^{\alpha})$, $1 \le q < \infty$, is defined by $$
M^q(\Omega,\delta^{\alpha}):= \left\{ u \in L^1_{\loc}(\Omega): \sup_{\lambda > 0} \lambda^q \int_{\Omega} \mathbf{1}_{\{ x \in \Omega: |u(x)| > \lambda \} } \delta^{\alpha} dx < +\infty \right\}, $$ where $\mathbf{1}_{E}$ denotes the indicator function of a set $E \subset \mathbb{R}^N$. Put $$
\vertiii{u}_{M^q(\Omega,\delta^\alpha)}:= \left( \sup_{\lambda > 0} \lambda^q \int_{\Omega} \mathbf{1}_{\{ x \in \Omega: |u(x)| > \lambda \} } \delta^{\alpha} dx \right)^{\frac{1}{q} }. $$ We note that $\vertiii{\cdot}_{M^q(\Omega,\delta^\alpha)}$ is not a norm, but for $q >1$, it is equivalent to the norm $$
\norm{u}_{M^q(\Omega,\delta^\alpha)}:= \sup \left\{ \frac{\int_{A} |u|^q \delta^\alpha dx }{ (\int_{A} \delta^\alpha dx)^{1-\frac{1}{q}} }: A \subset \Omega, A \text{ measurable }, 0< \int_{A} \delta^\alpha dx < + \infty \right\}. $$ In fact, there hold (see \cite[page 300-311]{CiCo}) \begin{equation} \label{Marcin-equi} \vertiii{u}_{M^q(\Omega,\delta^\alpha)} \leq \norm{u}_{M^q(\Omega,\delta^\alpha)} \leq \frac{q}{q-1}\vertiii{u}_{M^q(\Omega,\delta^\alpha)}, \quad \forall u \in M^q(\Omega,\delta^\alpha). \end{equation}
It is well-known that the following embeddings hold $$L^q(\Omega,\delta^{\alpha}) \subset M^q(\Omega,\delta^{\alpha}) \subset L^{r}(\Omega,\delta^{\alpha}),\quad \forall r \in [1,q),$$ where all the inclusions are strict and continuous, see for instance \cite[Section 2.2]{BidViv_2000} and \cite[Section 1.1]{Gra_2009}. We recover the usual $L^q$ spaces and weak-$L^q$ spaces if $\alpha = 0$ and simply write $L^q(\Omega)$ and $M^q(\Omega)$ respectively.
Next, by $\mathcal{M}(\Omega,\delta^{\alpha})$ we mean the space of Radon measures on $\Omega$ such that $\norm{\mu}_{\mathcal{M}(\Omega,\delta^{\alpha})} := \int_{\Omega} \delta^{\alpha} d|\mu| < +\infty$. In the case $\alpha = 0$, we use notation $\mathcal{M}(\Omega)$ for the space of bounded Radon measures on $\Omega$. More details on the given spaces could be found in \cite{Gra_2009,MarVer_2014}.
For $s \in (0,1) $, the fractional Sobolev space $H^s(\Omega)$ is defined by
$$H^s(\Omega):= \left\{ u \in L^2(\Omega): [u]_{H^s(\Omega)}:= \int_{\Omega}\int_{\Omega} \dfrac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}dx dy < +\infty \right\},$$ which is a Hilbert space equipped with the inner product
$$\inner{u,v}_{H^s(\Omega)}:= \int_{\Omega} uv dx + \int_{\Omega}\int_{\Omega} \dfrac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}dxdy, \quad u, v \in H^s(\Omega).$$ The space $H^s_0(\Omega)$ is defined as the closure of $C^{\infty}_c(\Omega)$ with respect to the norm $\norm{\cdot}_{H^s(\Omega)} = \sqrt{\langle \cdot, \cdot \rangle_{H^s(\Omega)}}$. Next, let $H^{s}_{00}(\Omega)$ be defined by \begin{equation} \label{H00} H^s_{00}(\Omega) := \left\{ u \in H^s(\Omega): \dfrac{u}{\delta^s} \in L^2(\Omega) \right\}. \end{equation} It can be seen that $H^s_{00}(\Omega)$ equipped with the norm \begin{equation} \label{H00-norm}
\norm{u}^2_{H^s_{00}(\Omega)} := \int_{\Omega} \left(1+\dfrac{1}{\delta^{2s}}\right)|u|^2 dx + \int_{\Omega} \int_{\Omega} \dfrac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}dx dy \end{equation} is a Banach space. Roughly speaking, $H^s_{00}(\Omega)$ is the space of functions in $H^s(\Omega)$ satisfying the Hardy inequality. By \cite[Subsection 8.10]{Bha_2012}, there holds $$ H^s_{00}(\Omega) = \va{
&H^s(\Omega) = H^s_0(\Omega) &\text{ if }0 < s < \frac{1}{2}, \\[4pt]
&H^{\frac{1}{2}}_{00}(\Omega) \subsetneq H^{\frac{1}{2}}_{0}(\Omega) &\text{ if }s = \frac{1}{2},\\[4pt]
&H^s_0(\Omega) &\text{ if } \frac{1}{2} < s < 1.} $$ In fact, $H^{s}_{00}(\Omega)$ is the space of functions in $H^s(\mathbb{R}^N)$ supported in $\Omega$, or equivalently, the trivial extension of functions in $H^s_{00}(\Omega)$ belongs to $H^s(\mathbb{R}^N)$ (see \cite[Lemma 1.3.2.6]{Gris_2011}). Furthermore, $C^{\infty}_c(\Omega)$ is a dense subset of $H^s_{00}(\Omega)$. When $s = \frac{1}{2}$, $H^{\frac{1}{2}}_{00}(\Omega)$ is called the Lions--Magenes space (see \cite[Theorem 7.1]{LioMag_1972}). The strict inclusion $H^{\frac{1}{2}}_{00}(\Omega) \subsetneq H^{\frac{1}{2}}_0(\Omega)$ holds since $1 \in H^{\frac{1}{2}}_0(\Omega)$ but $1 \notin H^{\frac{1}{2}}_{00}(\Omega)$. Alternatively, fractional Sobolev spaces can be viewed as interpolation spaces due to \cite[Chapter 1]{LioMag_1972}. For more details on fractional Sobolev spaces, the reader is referred to \cite{Bha_2012,BonSirVaz_2015,Gris_2011, LioMag_1972, NezPalVal_2012} and the list of references therein.
\subsection{Main assumptions}\label{sec:pre_assumption} In this subsection, we provide a list of assumptions on the class of admissible operators $\mathbb{L}$, some of which are adopted from, and labeled in a consistent manner with, the label system in \cite{TruongTai_2020}, whereas the others are additionally imposed and are indicated by the superscript *. Related conditions were introduced in \cite{BonFigVaz_2018,Aba_2019,ChaGomVaz_2019_2020}.
\textbf{Assumptions on $\mathbb{L}$.} We make the following assumptions on the operator $\mathbb{L}$.
\begin{enumerate}[label= (L\arabic{enumi}*),ref=L\arabic{enumi}*] \item \label{eq_L1} The operator $\mathbb{L}: C^{\infty}_c(\Omega) \to L^2(\Omega)$ is defined in terms of functions $J$ and $B$ via the formula \begin{equation} \label{LJB} \mathbb{L} u(x) := \mathrm{P.V.} \int_{\Omega}J(x,y)[u(x)-u(y)]dy + B(x)u(x), \quad u \in C^{\infty}_c(\Omega), x\in \Omega. \end{equation} Here, the so-called jump kernel $J$ is nonnegative, symmetric, finite on $(\Omega \times \Omega) \setminus D_{\Omega}$, where $D_{\Omega}:= \left\{ (x,x): x \in \Omega \right\}$, and satisfies
$$\int_{\Omega} \int_{\Omega} |x-y|^2 J(x,y) dx dy < +\infty,$$ and $B$ is nonnegative, locally bounded in $\Omega$. \end{enumerate} The abbreviation P.V. in \eqref{LJB} stands for ``in the principal value sense", namely \begin{equation} \label{pv} \mathbb{L} u(x) = \lim_{\varepsilon \to 0^+} \mathbb{L}_{\varepsilon} u(x), \end{equation} where \begin{equation} \label{Lepsilon}
\mathbb{L}_{\varepsilon} u (x):= \int_{\Omega} J(x,y)(u(x)-u(y))\chi_{\varepsilon}(|x-y|) dy + B(x)u(x) \end{equation} with $$ \chi_{\varepsilon}(t):= \va{ &0 \quad &\text{if } 0 \leq t \le \varepsilon, \\ &1 \quad &\text{if } t > \varepsilon.} $$
\begin{enumerate}[label= (L\arabic{enumi}*),ref=L\arabic{enumi}*, resume]
\item \label{eq_L2bis} For every $u \in C^{\infty}_c(\Omega)$, there exists a function $\varphi_u \in L^1_{\loc}(\Omega)$ and $\varepsilon_0 > 0$ such that $|\mathbb{L}_{\varepsilon} u| \le \varphi_u$ a.e. in $\Omega$, for every $\varepsilon \in (0,\varepsilon_0]$. \end{enumerate}
Under assumptions \eqref{eq_L1} and \eqref{eq_L2bis}, \begin{equation}\label{eq:integrationbypart1} \begin{aligned} \int_{\Omega} v \mathbb{L} u dx &= \dfrac{1}{2} \int_{\Omega}\int_{\Omega} J(x,y)(u(x)-u(y))(v(x)-v(y)) dx dy + \int_{\Omega} B(x) u(x) v(x) dx \\ & = \int_{\Omega} u \mathbb{L} v dx \end{aligned} \end{equation} holds for every $u,v \in C^{\infty}_c(\Omega)$, see Proposition \ref{prop:integrationbypart}. Hence, $\mathbb{L}$ is symmetric and nonnegative on $C^{\infty}_c(\Omega)$. From this, we consider the bilinear form \begin{equation} \label{bilinear} \mathcal{B}(u,v):= \dfrac{1}{2} \int_{\Omega}\int_{\Omega} J(x,y)(u(x)-u(y))(v(x)-v(y))dx dy + \int_{\Omega} B(x)u(x)v(x) dx \end{equation} on the domain \begin{equation} \label{D(B)} \mathcal{D}(\mathcal{B}):= \left\{ u \in L^2(\Omega): \mathcal{B}(u,u) < +\infty \right\}. \end{equation} It can be seen that $\mathcal{D}(\mathcal{B})$ is nonempty since $C^{\infty}_c(\Omega) \subset \mathcal{D}(\mathcal{B})$ by \eqref{eq:integrationbypart1} and assumption \eqref{eq_L1}. In particular, $\mathcal{D}(\mathcal{B})$ is dense in $L^2(\Omega)$ with respect to the $L^2$-norm. Furthermore, $\mathcal{D}(\mathcal{B})$ is a complete space with respect to the norm induced by the inner product \begin{equation}\label{eq:innerproduct} \inner{ u,v}_{\mathcal{D}(\mathcal{B})} := \inner{ u,v}_{L^2(\Omega)} + \mathcal{B}(u,v),\quad u,v \in \mathcal{D}(\mathcal{B}), \end{equation} see Proposition \ref{prop:closedbilinear}. On the other hand, since $\mathbb{L}$ is symmetric and nonnegative on $C^{\infty}_c(\Omega)$, it admits the Friedrich extension $\widetilde{\mathbb{L}}$, which is nonnegative and self-adjoint.
For the reader's convenience, we recall the construction of the extension. Let $\mathbb{H}(\Omega)$ be the closure of $C^{\infty}_c(\Omega)$ in $\mathcal{D}(\mathcal{B})$ under the norm induced by \eqref{eq:innerproduct}. Since $\mathcal{D}(\mathcal{B})$ is complete, $\mathbb{H}(\Omega)$ is also complete. By \cite[Theorem 4.4.2]{Dav_1995}, the form $\mathcal{B}|_{\mathbb{H}(\Omega)}$ is induced by a non-negative self-adjoint operator $\widetilde{\mathbb{L}}$. Furthermore, $\mathbb{H}(\Omega) = \mathcal{D}(\widetilde{\mathbb{L}}^{\frac{1}{2}})$ (see \cite[page 104]{Dav_1980}) and $$ \mathcal{B}(u,v) = \inner{ \widetilde{\mathbb{L}}^\frac{1}{2} u, \widetilde{\mathbb{L}}^\frac{1}{2} v}_{L^2(\Omega)},\quad \forall u,v \in \mathbb{H}(\Omega). $$ In particular, for $u,v \in C^{\infty}_c(\Omega) \subset \mathbb{H}(\Omega)$, one has $$ \inner{ \widetilde{\mathbb{L}}^\frac{1}{2} u, \widetilde{\mathbb{L}}^\frac{1}{2} v}_{L^2(\Omega)} = \mathcal{B}(u,v) = \inner{\mathbb{L} u, v}_{L^2(\Omega)}.$$ By density, the above equalities also hold for all $u \in C^{\infty}_c(\Omega)$ and $v \in \mathbb{H}(\Omega)$. Finally, by \cite[Lemma 4.4.1]{Dav_1995}, $C^{\infty}_c(\Omega) \subset \mathcal{D}(\widetilde{\mathbb{L}})$ and $\widetilde{\mathbb{L}}u = \mathbb{L} u$ for all $u \in C^{\infty}_c(\Omega)$. This proves that $\widetilde{\mathbb{L}}$ is an extension of $\mathbb{L}$. \textit{Hereafter, without any confusion we denote by $\mathbb{L}$ the extension $\widetilde{\mathbb{L}}$.}
\begin{enumerate}[resume,label=(L\arabic{enumi}),ref=L\arabic{enumi}] \setcounter{enumi}{1}
\item \label{eq_L2} There exists a constant $\Lambda > 0$ such that
\begin{equation}\label{eq:lowerbound}
\norm{u}_{L^2(\Omega)}^2 \le \Lambda\inner{\mathbb{L} u, u}_{L^2(\Omega)},\quad \forall u \in C^{\infty}_c(\Omega).
\end{equation} \end{enumerate} Assumption \eqref{eq_L2} is the classical coercivity assumption. Under this assumption and the fact that $C^{\infty}_c(\Omega)$ is dense in $\mathbb{H}(\Omega)$, we have \begin{equation}\label{eq:lowerbound2} \norm{u}_{L^2(\Omega)}^2 \le \Lambda \mathcal{B}(u,u),\quad \forall u \in \mathbb{H}(\Omega). \end{equation}
By \eqref{eq:lowerbound2}, it can be seen that $\norm{u}_{\mathcal{D}(\mathcal{B})}$ and $\norm{u}_{\mathbb{H}(\Omega)}:= \sqrt{\mathcal{B}(u,u)}$ are equivalent norms on $\mathbb{H}(\Omega)$. The norm $\| \cdot\|_{\mathbb{H}(\Omega)}$ is associated to the inner product \begin{equation} \label{innerH} \inner{u,v}_{\mathbb{H}(\Omega)} := \mathcal{B}(u,v), \quad u,v \in \mathbb{H}(\Omega). \end{equation}
To further develop our theory, we require that \begin{enumerate}[resume,label=(L\arabic{enumi}),ref=L\arabic{enumi}] \setcounter{enumi}{2} \item \label{eq_L3} $\mathbb{H}(\Omega) = H^s_{00}(\Omega)$. \end{enumerate}
\textbf{Assumptions on the inverse of $\mathbb{L}$.} \begin{enumerate}[label=(G\arabic{enumi}),ref=G\arabic{enumi}]
\item \label{eq_G1} There exists an operator $\mathbb{G}^{\Omega}$, which is called Green operator, such that for every $f \in C^{\infty}_c(\Omega)$, one has \begin{equation}\label{eq:homolinear} \mathbb{L} [\mathbb{G}^{\Omega} [f]] = f \text{ in } L^2(\Omega). \end{equation} In other words, $\mathbb{G}^{\Omega}$ is a right inverse operator of $\mathbb{L}$ and for every $f \in C^{\infty}_c(\Omega)$, one has $\mathbb{G}^{\Omega}[f] \in \dom(\mathbb{L}) \subset \mathbb{H}(\Omega)$.
\item \label{eq_G2} The operator $\mathbb{G}^{\Omega}$ admits a Green function $G^{\Omega}$, namely $$ \mathbb{G}^{\Omega}[f](x) = \int_{\Omega} G^{\Omega}(x,y)f(y)dy,\quad x \in \Omega, $$ where $G^{\Omega}: (\Omega \times \Omega) \setminus D_{\Omega} \to (0,\infty)$ is Borel measurable, symmetric and satisfies \begin{equation} \label{G-est}
G^{\Omega} (x,y) \sim \dfrac{1}{|x-y|^{N-2s}} \left( \dfrac{\delta(x)}{|x-y|} \wedge 1 \right)^{\gamma} \left( \dfrac{\delta(y)}{|x-y|} \wedge 1 \right)^{\gamma}, \quad x,y \in \Omega, x \neq y, \end{equation} with $s,\gamma \in (0,1]$ and $N>2s$. \end{enumerate} The assumptions on the existence of the Green function \eqref{eq_G1} and two-sided estimates \eqref{eq_G2} allow us to prove quantitative estimates for the Green operator, as shown in \cite{BonFigVaz_2018,Aba_2019, TruongTai_2020}.
The next two assumptions focus on the continuity of the Green function. \begin{enumerate}[label=(G\arabic{enumi}*),ref=G\arabic{enumi}*, resume] \item \label{eq_G3bis} $G^{\Omega}$ is jointly continuous in $(\Omega \times \Omega) \backslash D_{\Omega}$. \item \label{eq_G4} For every $x_0 \in \Omega$ and $z_0 \in \partial \Omega$, the limit \begin{equation}\label{eq:martinkernel_def} M^{\Omega}(x_0,z_0):= \lim_{(x,y) \to (x_0,z_0) } \dfrac{G^{\Omega}(x,y)}{\delta(y)^{\gamma}} \end{equation} exists and $M^{\Omega} : \Omega \times \partial \Omega \to \mathbb{R}^+$ is a continuous function. \end{enumerate}
Similar assumptions have been considered in \cite[(K4) and (K5)]{Aba_2019}. Assumptions \eqref{eq_G3bis} and \eqref{eq_G4} describe the behavior of $G^{\Omega}$ in $\Omega$ and up to the boundary $\partial \Omega$ respectively, which in turn play an important role in the derivation of the compactness properties of $\mathbb{G}^{\Omega}$ in different weighted function or measure spaces. Here the function $M^{\Omega}$ defined in \eqref{eq:martinkernel_def} is closely related to the Martin kernel defined in \cite{SonVon_2003} and is also called the Martin kernel by some authors (see e.g. \cite{Aba_2019}). In addition, the two-sided estimate on $G^{\Omega}$ in \eqref{eq_G2} implies
\begin{equation} \label{M-est} M^\Omega(x,z) \sim \dfrac{ \delta(x)^\gamma}{|x-z|^{N-2s+2\gamma}}, \quad x \in \Omega, z \in \partial \Omega. \end{equation}
\subsection{Some examples}\label{sec:examples}Typical nonlocal operators satisfying \eqref{eq_L1}--\eqref{eq_L3} and \eqref{eq_G1}--\eqref{eq_G4} to be kept in mind are the restricted fractional Laplacian, the spectral fractional Laplacian and the censored fractional Laplacian. We remark that, for these examples, assumptions \eqref{eq_L2}, \eqref{eq_L3}, \eqref{eq_G1} and \eqref{eq_G2} are satisfied, as pointed out in \cite{TruongTai_2020}, while assumption \eqref{eq_L2bis} is fulfilled in light of \cite[Proposition 3.4]{KimSonVon_2020}. Therefore, it is enough to verify only \eqref{eq_L1}, \eqref{eq_G3bis} and \eqref{eq_G4}.
\textbf{The restricted fractional Laplacian (RFL).} A well-known example of nonlocal operators satisfying the assumptions in Subsection \ref{sec:pre_assumption} is the restricted fractional Laplacian defined, for $s \in (0,1)$, by
$$(-\Delta)^s_{\RFL} u(x): = c_{N,s}\, \mathrm{P.V.} \int_{\mathbb{R}^N} \frac{u(x)-u(y)}{|x-y|^{N+2s}}dy,\quad x \in \Omega, $$ restricted to functions that are zero outside $\Omega$. This operator has been intensively studied in the literature, see for instance \cite{Aba_2015,RosSer_2014,SerVal_2012,SerVal_2014} (see also \cite{CheSon_1998} for a probabilistic approach) and references therein. For $u \in C^{\infty}_c(\Omega)$, one can write \begin{align*} (-\Delta)^s_{\RFL} u(x)
=c_{N,s}\mathrm{P.V.} \int_{\Omega} \frac{u(x)-u(y)}{|x-y|^{N+2s}}dy + c_{N,s}\int_{\mathbb{R}^N \backslash \Omega} \dfrac{u(x)}{|x-y|^{N+2s}}dy. \end{align*} Hence, in this case, \eqref{eq_L1} holds with
$$J(x,y) = \dfrac{c_{N,s}}{|x-y|^{N+2s}} \quad \text{and} \quad B(x) = c_{N,s}\int_{\mathbb{R}^N \backslash \Omega} \dfrac{1}{|x-y|^{N+2s}}dy,\quad x,y \in \Omega, \, x\neq y. $$ In addition, \eqref{eq_G3bis} is valid by \cite[page 467]{CheSon_1998}, and \eqref{eq_G4} is satisfied due to \cite[Lemma 6.5]{CheSon_1998}.
\textbf{The spectral fractional Laplacian (SFL).} The spectral fractional Laplacian has been studied in, for instance, \cite{SonVon_2003,AbaDup_2017,DhiMaaZri_2011,CafSti_2016,BraColPabSan_2013}. It is defined for $s \in (0,1)$ and $u \in C^{\infty}_c(\Omega)$ by $$ (-\Delta)^s_{\SFL} u(x):= \mathrm{P.V.} \int_{\Omega} [u(x)-u(y)]J_s(x,y)dy + B_s(x) u(x),\quad x \in \Omega.$$ Here $$J_s(x,y) := \dfrac{s}{\Gamma(1-s)} \int_0^{\infty} K_{\Omega} (t,x,y) \dfrac{dt}{t^{1+s}}, \quad x,y \in \Omega,$$ and $$B_s(x) := \dfrac{s}{\Gamma(1-s)} \int_0^{\infty} \left(1- \int_{\Omega} K_{\Omega} (t,x,y) dy\right) \dfrac{dt}{t^{1+s}},\quad x \in \Omega,$$ where $K_{\Omega}$ is the heat kernel of the Laplacian $-\Delta$ in $\Omega$. Thus, in this case, \eqref{eq_L1} is fulfilled with $$J(x,y) = J_s(x,y) \quad \text{and} \quad B(x) = B_s(x),\quad x,y \in \Omega, \, x \neq y.$$ We see that \eqref{eq_G3bis} follows by \cite[Proposition 2.2]{SonVon_2003} and \eqref{eq_G4} is given in \cite[page 416]{GlovPop_2004}.
\textbf{The censored fractional Laplacian (CFL).} The censored fractional Laplacian is defined for $ s > \frac{1}{2}$ by
$$ (-\Delta)^s_{\CFL} u(x) := a_{N,s}\, \mathrm{P.V.} \int_{\Omega} \dfrac{u(x)-u(y)}{|x-y|^{N+2s}}dy, \quad x \in \Omega.$$ This operator has been studied in \cite{BogBurChe_2003,Che_2018,Fal_2020}. In this case, \eqref{eq_L1} holds for
$$J(x,y) = \dfrac{a_{N,s}}{|x-y|^{N+2s}} \text{ and }B(x) = 0, \quad x,y \in \Omega, \, x \neq y.$$ We see that \eqref{eq_G3bis} follows from \cite[Theorem 1.1]{ChenKim_2002}, while \eqref{eq_G4} can be deduced by using \cite[Theorem 1.1 and (4.1)]{ChenKim_2002}. The detailed proof is left to the interested reader.
\section{Main results and comparison with relevant results in the literature}\label{sec:result} This section is devoted to the description of our main results. \subsection{Compactness of the Green operator} Let $G^{\Omega}$ be the Green function introduced in \eqref{eq_G1} and \eqref{eq_G2}. For $f \in L^1(\Omega,\delta^{\gamma})$ and $\mu \in \mathcal{M}(\Omega,\delta^{\gamma})$, we define \begin{align} \label{eq:green_l1} \mathbb{G}^{\Omega}[f](x)&:= \int_{\Omega} G^{\Omega}(x,y)f(y) dy, \quad x \in \Omega,\\ \label{eq:green_measure}\mathbb{G}^{\Omega}[\mu](x)&:= \int_{\Omega} G^{\Omega}(x,y) d\mu(y), \quad x \in \Omega. \end{align} It was shown in \cite[Lemma 4.2]{TruongTai_2020} that $\mathbb{G}^{\Omega}[\mu](x)$ is finite for a.e. $x \in \Omega$ if and only if $\mu \in \mathcal{M}(\Omega,\delta^\gamma)$.
Various results on the continuity of $\mathbb{G}^\Omega$ between function and measure spaces, as well as a priori estimates, were obtained in \cite{Aba_2019,chan2020singular,TruongTai_2020}. When $\mathbb{L} = (-\Delta)_{\mathrm{RFL}}^s$, it was proved in \cite[Proposition 2.6]{CheVer_2014} that $\mathbb{G}^\Omega: L^1(\Omega,\delta^s) \to L^q(\Omega)$ is compact for any $q \in [1,\frac{N}{N-s})$. When $\mathbb{L}$ is a more general nonlocal operator, it was shown in \cite[Proposition 5.1]{BonFigVaz_2018} that $\mathbb{G}^\Omega: L^2(\Omega) \to L^2(\Omega)$ is compact under assumption \eqref{eq_G2}. Our first main result provides compactness properties of $\mathbb{G}^{\Omega}$ in weighted function and measure spaces. To begin, for $\beta \geq 0$ and $\alpha \geq \beta-2s$, we set \begin{equation} \label{p_alphagamma} p^*_{\beta, \alpha}:=\frac{N+\alpha}{N+\beta-2s}. \end{equation} In the case $\alpha=\beta=\gamma$, we simply write $p^*$ instead of $p_{\gamma,\gamma}^*$, namely \begin{equation} \label{p*} p^*=\frac{N+\gamma}{N+\gamma-2s}. \end{equation}
\begin{theorem}[Compactness of $\mathbb{G}^{\Omega}$]\label{theo:compactness} Assume that \eqref{eq_G1}--\eqref{eq_G3bis} hold and let $s,\gamma$ be as in \eqref{G-est}.
$(i)$ Let $\gamma' \in [0,\gamma)$ and $\alpha$ satisfy $$(-\gamma' - 1)\vee (\gamma' - 2s)\vee \left(- \frac{\gamma' N}{N - 2s + \gamma'}\right) < \alpha < \frac{\gamma' N}{N-2s}.$$ Then the map $\mathbb{G}^{\Omega}: \mathcal{M}(\Omega, \delta^{\gamma'}) \to L^q(\Omega, \delta^{\alpha})$ is compact for every $q < p^*_{\gamma', \alpha}=\frac{N+\alpha}{N+\gamma'-2s}$.
$(ii)$ In addition, if \eqref{eq_G4} holds, then statement $(i)$ holds for $\gamma' = \gamma$.
\end{theorem}
Let us give some remarks on the above result.
(a) Theorem \ref{theo:compactness} covers and extends several results in the literature. Indeed, the fact that the Green operator $\mathbb{G}^{\Omega}$ of the classical (minus) Laplacian $-\Delta$ is compact from $\mathcal{M}(\Omega,\delta)$ to $L^q(\Omega)$ with $q < \frac{N}{N-1}$ (see \cite{MonPon_2008}) can be retrieved from statement (ii) with $\gamma=s=1$ and $\alpha=0$. Moreover, the compactness from $L^1(\Omega,\delta^s)$ to $L^q(\Omega)$, $q < \frac{N}{N-s}$, of the Green operator associated to the RFL (see \cite[Proposition 2.6]{CheVer_2014}) is a special case of statement (ii) with $\gamma = s \in (0,1)$ and $\alpha=0$. In addition, statement (ii) with $s \in (0,1), \gamma=\alpha=1$ and with $s \in (\frac{1}{2},1), \gamma=2s-1, \alpha = 0$ reduce to a compactness result in the case of SFL and in the case of CFL respectively, which are new to the best of our knowledge.
(b) Unlike the approach employed in case of the classical Laplacian \cite{MarVer_2014} or the RFL \cite[Proposition 2.6]{CheVer_2014} which relies essentially on the particular operator's inherent properties and available compact Sobolev embeddings, our unifying method, inspired by \cite[Proposition 5.1]{BonFigVaz_2018}, is based on the Riesz--Fr\'echet--Kolmogorov theorem and sharp estimates on the Green kernel, which allows us to treat different types of operators. In the proof of Theorem \ref{theo:compactness}, we make use of the Riesz--Fr\'echet--Kolmogorov theorem and a subtle regularized version of the Green function. Under conditions \eqref{eq_G1}--\eqref{eq_G3bis}, the method allows to deal with space $\mathcal{M}(\Omega,\delta^{\gamma'})$ for any $\gamma' \in [0,\gamma)$. Condition \eqref{eq_G4} is additionally imposed in statement (ii) to enable us to include also the space $\mathcal{M}(\Omega,\delta^{\gamma})$, which, by virtue of \eqref{G-est}, seems to be the largest measure space for the compactness of $\mathbb{G}^{\Omega}$ to hold.
(c) The value $p^*$ given by \eqref{p*} is the critical exponent for the existence of a solution to semilinear equations with a source power term (see \cite[Theorem 3.3]{TruongTai_2020}). We will show below that this exponent also plays an important role in the solvability of the boundary value problems \eqref{eq:equation_introduction}--\eqref{bdry-cond}.
\subsection{Kato's inequality} Our main result in this subsection is stated as follows. \begin{theorem}[Kato's inequality] \label{thm:kato} Assume that \eqref{eq_L1}--\eqref{eq_L3} and \eqref{eq_G1}--\eqref{eq_G2} hold.
$(i)$ Let $f \in L^1(\Omega,\delta^{\gamma})$ and put $u = \mathbb{G}^{\Omega} [f]$. Then
\begin{equation}\label{eq:kato_abs}
\int_{\Omega} |u|\xi dx \le \int_{\Omega} \sgn(u) f \mathbb{G}^{\Omega}[\xi]dx,
\end{equation}
and
\begin{equation}\label{eq:kato_main}
\int_{\Omega} u^+\xi dx \le \int_{\Omega} \sgn^+(u) f \mathbb{G}^{\Omega}[\xi]dx,
\end{equation}
for all $\xi \in \delta^{\gamma}L^{\infty}(\Omega)$ such that $\mathbb{G}^{\Omega}[\xi] \ge 0$ a.e. in $\Omega$.
$(ii)$ Assume in addition that \eqref{eq_G3bis} and \eqref{eq_G4} hold. Let $f \in L^1(\Omega,\delta^{\gamma})$, $\mu \in \mathcal{M}(\Omega,\delta^\gamma)$ and put $u = \mathbb{G}^{\Omega} [f] + \mathbb{G}^{\Omega}[\mu]$. Then \begin{equation}\label{eq:kato_main2}
\int_{\Omega} u^+\xi dx \le \int_{\Omega} \sgn^+(u) \mathbb{G}^{\Omega}[\xi]fdx + \int_{\Omega} \mathbb{G}^{\Omega}[\xi] d\mu^+ , \end{equation} and \begin{equation}\label{eq:kato_abs2}
\int_{\Omega} |u|\xi dx \le \int_{\Omega} \sgn(u) \mathbb{G}^{\Omega}[\xi]fdx + \int_{\Omega} \mathbb{G}^{\Omega}[\xi] d|\mu| , \end{equation} for all $\xi \in \delta^{\gamma}L^{\infty}(\Omega)$ such that $\mathbb{G}^{\Omega}[\xi] \ge 0$ a.e. in $\Omega$. \end{theorem}
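Let us illustrate a typical use of Theorem \ref{thm:kato}. If, in addition, $\mathbb{G}^{\Omega}[\delta^{\gamma}] \sim \delta^{\gamma}$ in $\Omega$ (see \cite[Example 3.6]{Aba_2019}), then $\xi = \delta^{\gamma}$ is an admissible test function in \eqref{eq:kato_abs}, since in particular $\mathbb{G}^{\Omega}[\delta^{\gamma}] \ge 0$, and \eqref{eq:kato_abs} yields the weighted $L^1$ estimate
\begin{equation*}
\int_{\Omega} |u|\,\delta^{\gamma}\, dx \le \int_{\Omega} \sgn(u)\, f\, \mathbb{G}^{\Omega}[\delta^{\gamma}]\, dx \lesssim \int_{\Omega} |f|\,\delta^{\gamma}\, dx.
\end{equation*}
This type of a priori bound is exploited, for instance, in the proof of Corollary \ref{cor:L1data} in Section \ref{sec:semilinear}.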
Various versions of Kato's inequality have been established for the classical Laplacian \cite{MarVer_2014}, for the RFL \cite[Proposition 2.4]{CheVer_2014} and for the SFL \cite[Lemma 31]{AbaDup_2017}. Our proof of Theorem \ref{thm:kato} relies on formula \eqref{LJB} and the properties of the space $\mathbb{H}(\Omega)$, which allows us to deal with different types of nonlocal operators such as RFL, SFL and CFL. When measures are involved, \eqref{eq:kato_main2} and \eqref{eq:kato_abs2} are proved by an approximation argument in which the stability result Corollary \ref{cor:stability} is employed under additional conditions \eqref{eq_G3bis} and \eqref{eq_G4}.
As far as we know, the variants of Kato's inequality available in the literature are proved in the context of a specific notion of solution to linear equations. In contrast, Theorem \ref{thm:kato} is stated for functions admitting a Green representation, without prematurely appealing to any notion of solution. Thereby, unnecessary restrictions on the applicability of Theorem \ref{thm:kato} are avoided. For instance, Kato's inequality \eqref{eq:kato_main} can be applied to show the uniqueness of weak solutions of semilinear equations involving the RFL in \cite{CheVer_2014}, since these solutions admit a Green representation.
\subsection{Semilinear equations}\label{subssec:semilinear} The above results are important tools in the study of the following problem \begin{equation}\label{eq:semilinear_absorption} \left\{ \begin{aligned} \mathbb{L} u + g(u) &= \mu &&\text{ in }\Omega, \\ u &= 0 &&\text{ on }\partial \Omega, \\ u &=0 &&\text{ in }\mathbb{R}^N \backslash \overline{\Omega} \text{ (if applicable)}, \end{aligned} \right. \end{equation} where $\mu \in \mathcal{M}(\Omega,\delta^{\gamma})$ and $g: \mathbb{R} \to \mathbb{R}$ is a nondecreasing continuous function with $g(0)=0$. In order to study \eqref{eq:semilinear_absorption}, we adapt the notion of weak-dual solutions which was first introduced by Bonforte and V\'azquez \cite{BonVaz_2015} and then was used in several works (see e.g. \cite{BonFigVaz_2018,Aba_2019,ChaGomVaz_2019_2020,chan2020singular}).
\begin{definition}\label{def:semilinear_weaksolution} Assume that $\mu \in \mathcal{M}(\Omega,\delta^{\gamma})$. A function $u$ is called a \textit{weak-dual solution} of \eqref{eq:semilinear_absorption} if $u \in L^1(\Omega,\delta^{\gamma})$, $g(u) \in L^1(\Omega,\delta^{\gamma})$ and \begin{equation} \label{weakdualsol}\int_{\Omega} u\xi dx + \int_{\Omega} g(u)\mathbb{G}^{\Omega}[\xi] dx = \int_{\Omega} \mathbb{G}^{\Omega}[\xi] d\mu,\quad \forall \xi \in \delta^{\gamma}L^{\infty}(\Omega). \end{equation} \end{definition} Here all the integrals in \eqref{weakdualsol} are well-defined under assumptions \eqref{eq_G1}--\eqref{eq_G3bis}. The notion of weak-dual solutions has interesting features. On the one hand, as one can see from formulation \eqref{weakdualsol}, an advantage of this notion in comparison with other concepts of solutions is that it involves only the Green function and does not require one to specify the meaning of $\mathbb{L} \xi$ for test functions $\xi$. Moreover, it is equivalent to other notions of solutions given in \cite[Theorem 2.1]{ChaGomVaz_2019_2020} for which the space of test functions $\delta^{\gamma}L^{\infty}(\Omega)$ is replaced by $L^{\infty}_c(\Omega)$ or $C^{\infty}_c(\Omega)$. On the other hand, since the operator $\mathbb{L}$ does not appear in \eqref{weakdualsol}, its properties cannot be directly exploited from this formulation, which therefore requires an analysis based mainly on the Green function.
In light of Theorem \ref{theo:compactness}, Theorem \ref{thm:kato} and the ideas in \cite{Pon_2016}, we are able to obtain the solvability for \eqref{eq:semilinear_absorption}.
\begin{theorem}\label{theo:subsupersolution} Assume that \eqref{eq_L1}--\eqref{eq_L3} and \eqref{eq_G1}--\eqref{eq_G4} hold and $\mu \in \mathcal{M}(\Omega,\delta^{\gamma})$. Let $g: \mathbb{R} \to \mathbb{R}$ be a nondecreasing continuous function with $g(0)=0$. Assume in addition that \begin{equation}\label{eq:goodmeasure} g(-\mathbb{G}^{\Omega}[\mu^-]), g(\mathbb{G}^{\Omega}[\mu^+]) \in L^1(\Omega,\delta^{\gamma}). \end{equation} Then problem \eqref{eq:semilinear_absorption} admits a unique weak-dual solution $u$. This solution satisfies \begin{equation} \label{Gmu+-}-\mathbb{G}^{\Omega}[\mu^-] \le u \le \mathbb{G}^{\Omega}[\mu^+] \quad \text{a.e. in } \Omega. \end{equation} Moreover the map $\mu \mapsto u$ is nondecreasing. \end{theorem}
When the data belong to $L^1(\Omega,\delta^\gamma)$, condition \eqref{eq:goodmeasure} can be relaxed as pointed out in the following result. \begin{corollary} \label{cor:L1data} Assume that \eqref{eq_L1}--\eqref{eq_L3} and \eqref{eq_G1}--\eqref{eq_G4} hold. For any $f \in L^1(\Omega,\delta^{\gamma})$, problem \eqref{eq:semilinear_absorption} with $\mu=f$ admits a unique weak-dual solution. Furthermore, the map $f \mapsto u$ is increasing. \end{corollary} As a consequence of Theorem \ref{theo:subsupersolution}, existence and uniqueness remain valid when $g$ satisfies the so-called \textit{subcritical integral condition} \begin{equation} \label{sub-int} \int_1^{\infty} [g(t) - g(-t)]t^{-1-p^*}dt < +\infty, \end{equation} where $p^*$ is given in \eqref{p*}. \begin{theorem} \label{measuredata-sub} Assume that \eqref{eq_L1}--\eqref{eq_L3} and \eqref{eq_G1}--\eqref{eq_G4} hold. Let $g: \mathbb{R} \to \mathbb{R}$ be a nondecreasing continuous function satisfying \eqref{sub-int} and $g(0)=0$. Then for any $\mu \in \mathcal{M}(\Omega,\delta^{\gamma})$, problem \eqref{eq:semilinear_absorption} admits a unique weak-dual solution $u$. Furthermore, the solution satisfies \eqref{Gmu+-} and the map $\mu \mapsto u$ is nondecreasing. \end{theorem}
In particular, when $g$ is a power function, namely $g(t)=|t|^{p-1}t$ with $p>1$, condition \eqref{sub-int} holds if and only if $1<p<p^*$ (a direct verification is given after this paragraph). Let $z \in \partial \Omega$ and let $\{z_n\}_{n \in \mathbb{N}} \subset \Omega$ be a sequence converging to $z$. When $1<p<p^*$, for any $n \in \mathbb{N}$, there exists a unique weak-dual solution $u_n$ to \eqref{eq:semilinear_absorption} with $\mu=\delta^{-\gamma}\delta_{z_n}$, where $\delta_{z_n}$ denotes the Dirac measure concentrated at $z_n$. We show below that, as $n \to \infty$, the corresponding solution $u_n$ converges to a function $u$ that is singular at $z$ and has the same blow-up rate at $z$ as $M^\Omega(\cdot,z)$, where $M^\Omega(\cdot,z)$ is given in \eqref{eq:martinkernel_def}.
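For completeness, the equivalence just stated follows from an elementary computation which uses only \eqref{sub-int} and the explicit form of $g$: for $g(t)=|t|^{p-1}t$ one has $g(t)-g(-t)=2t^{p}$ for $t \ge 1$, whence
\begin{equation*}
\int_1^{\infty} \left[g(t)-g(-t)\right]t^{-1-p^*}\,dt = 2\int_1^{\infty} t^{\,p-1-p^*}\,dt,
\end{equation*}
which is finite if and only if $p-1-p^* < -1$, that is, if and only if $p<p^*$.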
\begin{theorem} \label{bdry-iso} Assume that \eqref{eq_L1}--\eqref{eq_L3} and \eqref{eq_G1}--\eqref{eq_G4} hold, $1<p<p^*$ and $z \in \partial \Omega$. Then there exists a unique function $u \in L^1(\Omega,\delta^\gamma)$ such that
\begin{equation} \label{eq:boundary} u + \mathbb{G}^{\Omega}[u^p] = M^\Omega(\cdot,z) \quad \text{a.e. in } \Omega.
\end{equation}
Moreover, there holds
\begin{equation} \label{asymp} \lim_{\Omega \ni x \to z}\frac{u(x)}{M^{\Omega}(x,z)}=1.
\end{equation} \end{theorem} Roughly speaking, the function $u$ in Theorem \ref{bdry-iso} could be understood as a `solution' to the boundary problem \begin{equation} \label{eq:bdry} \left\{ \begin{aligned}
\mathbb{L} u + u^p &= 0&&\text{ in }\Omega, \\
u &= \delta_z &&\text{ on }\partial \Omega, \\
u &=0 &&\text{ in }\mathbb{R}^N \backslash \overline{\Omega} \text{ (if applicable)}. \end{aligned} \right. \end{equation}
Although the problem of boundary singularities of solutions to \eqref{eq:bdry} is interesting, it is beyond the scope of this paper and will not be pursued here.
\section{Compactness of the Green operator}\label{sec:lineartheory} We start this section with a result on the continuity of $\mathbb{G}^{\Omega}$. \begin{proposition}\label{prop:marcinestimate} Assume that \eqref{eq_G1}--\eqref{eq_G3bis} hold. Let $\gamma' \in [0,\gamma]$ and $\alpha$ satisfy \begin{equation}\label{eq:condition_alpha_gamma} \left(-\gamma' - 1\right) \vee \left(\gamma' - 2s\right) \vee \left(- \frac{\gamma' N}{N - 2s + \gamma'}\right) < \alpha < \frac{\gamma' N}{N-2s}. \end{equation} Then the map \begin{equation}\label{eq:Marcin} \mathbb{G}^{\Omega}: \mathcal{M}(\Omega,\delta^{\gamma'}) \to M^{p_{\gamma', \alpha}^*}(\Omega,\delta^{\alpha}) \end{equation} is continuous (recall that $p_{\gamma',\alpha}^*=\frac{N+\alpha}{N+\gamma'-2s}$). In particular, for any $q \in [1,p^*_{\gamma',\alpha})$ the map \begin{equation}\label{eq:Marcin2} \mathbb{G}^{\Omega}: \mathcal{M}(\Omega,\delta^{\gamma'}) \to L^q(\Omega,\delta^{\alpha}) \end{equation} is continuous. \end{proposition} \begin{proof} The desired result can be obtained by proceeding as in the proof of \cite[Proposition 4.6]{TruongTai_2020}, replacing $\gamma$ by $\gamma'$. Therefore we omit the proof. \end{proof}
In order to prove Theorem \ref{theo:compactness}, we need the following technical result. Firstly, notice from \eqref{G-est} that (see the proof of \cite[Lemma 4.1]{TruongTai_2020}), for any $(x,y) \in (\Omega \times \Omega) \setminus D_{\Omega}$, \begin{equation}\label{eq:Green0}
G^{\Omega}(x,y) \lesssim \dfrac{1}{|x-y|^{N-2s}}\wedge \dfrac{\delta(y)^{\gamma}}{\delta(x)^{\gamma}}\dfrac{1}{|x-y|^{N-2s}} \wedge \dfrac{\delta(y)^{\gamma}}{|x-y|^{N-2s+\gamma}}\wedge\dfrac{\delta(y)^{\gamma}\delta(x)^{\gamma}}{|x-y|^{N-2s+2\gamma}}.
\end{equation}
\begin{lemma}\label{lem:technical} Assume that \eqref{eq_G1}--\eqref{eq_G3bis} hold. For $\varepsilon > 0$ and $\beta > N-2s+2\gamma$, we define $K^{\varepsilon}_{\beta} : \mathbb{R}^+\to \mathbb{R}^+$ as \begin{equation} \label{Keb} K^{\varepsilon}_{\beta} (t) :=1 \wedge \left(t^{\beta}\varepsilon^{-\beta}\right), \quad t \in \mathbb{R}^+. \end{equation} Put \begin{equation} \label{eq:Gepsilon} G^{\varepsilon}_{\beta}(x,y) := \left\{ \begin{aligned}
&G^{\Omega}(x,y)K^{\varepsilon}_{\beta}(|x-y|) \quad &&\text{if } (x,y) \in (\Omega \times \Omega) \backslash D_{\Omega},\\ &0 \quad &&\text{if } (x,y) \in D_{\Omega}, \end{aligned}\right. \end{equation} where $G^{\Omega}$ is the Green function of $\mathbb{L}$ given in \eqref{eq_G1}.
(i) For every $\gamma' < \gamma$, the map $$(x,y) \mapsto \dfrac{G^{\varepsilon}_{\beta}(x,y)}{\delta(y)^{\gamma'}}$$ is uniformly continuous in $\Omega \times \Omega$.
(ii) If, in addition, \eqref{eq_G4} holds, then the map $$(x,y) \mapsto \dfrac{G^{\varepsilon}_{\beta}(x,y)}{\delta(y)^{\gamma}}$$ is uniformly continuous in $\Omega \times \Omega$. \end{lemma}
\begin{proof} (i) Assume that \eqref{eq_G1}--\eqref{eq_G3bis} hold. We first show that $G^{\varepsilon}_{\beta}$ is continuous in $\Omega \times \Omega$. Indeed, by \eqref{eq_G3bis} $G^{\Omega}$ is continuous in $(\Omega \times \Omega)\backslash D_{\Omega}$. In addition, the map $(x,y) \mapsto K^{\varepsilon}_{\beta}(|x-y|)$ is continuous since $K^{\varepsilon}_{\beta}$ is continuous on $\mathbb{R}^+$. Thus, $G^{\varepsilon}_{\beta}$ is continuous in $(\Omega \times \Omega) \setminus D_{\Omega}$. We prove that $G^{\varepsilon}_{\beta}$ is continuous on the diagonal $D_{\Omega}$. For $x,y \in \Omega$ such that $0 < |x-y| < \varepsilon$, by \eqref{eq:Green0} one has \begin{equation}\label{eq:limitGr} \begin{aligned}
G^{\varepsilon}_{\beta}(x,y) = G^{\Omega}(x,y) K^{\varepsilon}_{\beta}(|x-y|) \lesssim \dfrac{1}{|x-y|^{N-2s}}\cdot \dfrac{|x-y|^{\beta}}{\varepsilon^{\beta}} \leq \dfrac{|x-y|^{\beta - (N-2s)}}{\varepsilon^{\beta}}. \end{aligned} \end{equation}
For $(x_0,x_0) \in D_\Omega$, combining \eqref{eq:limitGr} and the fact that $\beta > N-2s$, we have $$\lim_{(x,y) \to (x_0,x_0)}G^{\varepsilon}_{\beta}(x,y) = 0 = G^{\varepsilon}_{\beta}(x_0,x_0).$$ Hence, $G^{\varepsilon}_{\beta}$ is continuous at $(x_0,x_0) \in D_\Omega$. This implies $G^{\varepsilon}_{\beta}$ is continuous on $\Omega \times \Omega$.
We prove the uniform continuity of the map $(x,y) \mapsto G^{\varepsilon}_{\beta}(x,y)/\delta(y)^{\gamma'}$ for $\gamma' < \gamma$. Let $\Psi_{\gamma'}$ be the extension of $G_{\beta}^{\varepsilon}/{\delta^{\gamma'}}$ on $\overline{\Omega} \times \overline{\Omega}$ defined by $$ \Psi_{\gamma'}(x,y): = \left\{ \begin{aligned} &\dfrac{G^{\varepsilon}_{\beta}(x,y)}{\delta(y)^{\gamma'}} \quad &&\text{if } (x,y) \in \Omega \times \Omega, \\ &0 \quad &&\text{elsewhere}. \end{aligned} \right. $$
By the continuity of $G^{\varepsilon}_{\beta}$ in $\Omega \times \Omega$, $\Psi_{\gamma'}$ is continuous in $\Omega \times \Omega$. We prove that $\Psi_{\gamma'}$ is continuous on the boundary. By \eqref{eq:Green0} and \eqref{Keb}, for $(x,y) \in \Omega \times \Omega$, we derive \begin{equation}\label{eq:Green_bound1} \begin{aligned} \dfrac{G^{\varepsilon}_{\beta}(x,y)}{\delta(y)^{\gamma'}}
= \dfrac{G^{\Omega}(x,y)}{\delta(y)^{\gamma'}}K^{\varepsilon}_{\beta}(|x-y|) \lesssim \dfrac{\delta(x)^\gamma \delta(y)^{\gamma - \gamma'}}{|x-y|^{N+2\gamma-2s}}\cdot \left(1 \wedge \dfrac{|x-y|^{\beta}}{\varepsilon^{\beta}}\right) \lesssim \dfrac{\delta(x)^\gamma\delta(y)^{\gamma - \gamma'}}{\varepsilon^{N+ 2\gamma - 2s}}. \end{aligned} \end{equation} Here we have used the fact that $\beta > N + 2\gamma - 2s$ and the elementary inequality \begin{equation} \label{1tab} (1 \wedge t^a) \le t^b, \quad t \ge 0, \ 0 \le b \leq a, \end{equation} which follows by distinguishing the cases $t \ge 1$ and $0 \le t < 1$. Therefore, since $\gamma>\gamma'$, we conclude that for any $(x_0,y_0) \in \partial \Omega \times \overline{\Omega}$ or $(x_0,y_0) \in \overline{\Omega} \times \partial \Omega$, $$ \lim_{\substack{(x,y) \to (x_0,y_0)\\ (x,y) \in \Omega \times \Omega} } \Psi_{\gamma'}(x,y) = \lim_{\substack{(x,y) \to (x_0,y_0)\\ (x,y) \in \Omega \times \Omega}} \dfrac{G^{\varepsilon}_{\beta}(x,y)}{\delta(y)^{\gamma'}} = 0 = \Psi_{\gamma'}(x_0,y_0). $$ Thus, $\Psi_{\gamma'}$ is continuous, hence uniformly continuous, on the compact set $\overline \Omega \times \overline \Omega$. It follows that $G^{\varepsilon}_{\beta}/\delta^{\gamma'}$, being the restriction of $\Psi_{\gamma'}$ to $\Omega \times \Omega$, is uniformly continuous on $\Omega \times \Omega$.
(ii) Assume, in addition, that \eqref{eq_G4} holds. We find that the map \begin{equation} \label{eq:bd-a} (x,y) \mapsto \dfrac{G^{\varepsilon}_{\beta}(x,y)}{\delta(y)^{\gamma}} \quad \text{is continuous on } \Omega \times \Omega. \end{equation} Also by using the same argument leading to \eqref{eq:Green_bound1}, we obtain $$ \dfrac{G^{\varepsilon}_{\beta}(x,y)}{\delta(y)^{\gamma}} \lesssim \dfrac{\delta(x)^\gamma}{\varepsilon^{N+ 2\gamma - 2s}}, \quad \forall (x,y) \in \Omega \times \Omega. $$ Therefore, for any $x_0 \in \partial \Omega$ and $y_0 \in \overline \Omega$, \begin{equation} \label{eq:bdryx} \lim_{(x,y) \to (x_0,y_0)}\dfrac{G^{\varepsilon}_{\beta}(x,y)}{\delta(y)^{\gamma}} = 0. \end{equation} Next, by assumption \eqref{eq_G4}, for $x_0 \in \Omega$ and $z_0 \in \partial \Omega$, we have \begin{equation}\label{eq:boundary_limit_2}
\lim_{(x,y) \to (x_0,z_0)} \dfrac{G^{\varepsilon}_{\beta}(x,y)}{\delta(y)^{\gamma}} = \lim_{(x,y) \to (x_0,z_0)} \dfrac{G^{\Omega}(x,y)}{\delta(y)^{\gamma}} K^{\varepsilon}_{\beta}(|x-y|) = M^{\Omega}(x_0,z_0)K^{\varepsilon}_{\beta}(|x_0-z_0|). \end{equation} Note that the function on the right hand side of \eqref{eq:boundary_limit_2} is continuous in $\Omega \times \partial \Omega$. Furthermore, by \eqref{M-est}, \eqref{Keb} (with $\beta > N - 2s + 2\gamma$) and \eqref{1tab}, we obtain
\begin{align*} M^{\Omega}(x_0,z_0)K^{\varepsilon}_{\beta}(|x_0-z_0|) \lesssim \frac{\delta(x_0)^\gamma}{|x_0-z_0|^{N+2\gamma-2s}} \left( 1 \wedge \frac{|x_0-z_0|^\beta}{\varepsilon^{\beta}} \right) \lesssim \frac{\delta(x_0)^\gamma}{\varepsilon^{N+2\gamma-2s}}, \end{align*} which implies
\begin{equation} \label{bdry-bdry} \lim_{x_0 \to \partial \Omega} M^{\Omega}(x_0,z_0)K^{\varepsilon}_{\beta}(|x_0-z_0|) = 0. \end{equation} Let $\Psi_{\gamma}$ be the extension of $G^{\varepsilon}_{\beta}/\delta^{\gamma}$ on $\overline \Omega \times \overline \Omega$ defined by $$ \Psi_{\gamma}(x,y): = \left\{ \begin{aligned} &\dfrac{G^{\varepsilon}_{\beta}(x,y)}{\delta(y)^{\gamma}} \quad &&\text{if } (x,y) \in \Omega \times \Omega, \\
&M^{\Omega}(x,y)K_{\beta}^{\varepsilon}(|x-y|) &&\text{if } (x,y) \in \Omega \times \partial \Omega,\\ &0 &&\text{elsewhere.} \end{aligned} \right. $$ We infer from \eqref{eq:bd-a}--\eqref{bdry-bdry} that $\Psi_{\gamma}$ is continuous on $\overline \Omega \times \overline \Omega$. Therefore, $\Psi_{\gamma}$ is uniformly continuous on $\overline \Omega \times \overline \Omega$. Consequently, $G^{\varepsilon}_{\beta}/\delta^{\gamma}$ is uniformly continuous in $\Omega \times \Omega$. The proof is complete. \end{proof}
We are ready to provide the proof of Theorem \ref{theo:compactness} (i) and (ii).
\begin{proof}[{\sc Proof of Theorem \ref{theo:compactness}}] \text{}
(i) \textbf{Step 1.} Let $\varepsilon > 0$ and let $G^{\varepsilon}_{\beta}$, $K^{\varepsilon}_{\beta}$ be as defined in Lemma \ref{lem:technical}. For simplicity, we write $G^{\varepsilon}$ for $G^{\varepsilon}_{\beta}$, $K^\varepsilon$ for $K_\beta^\varepsilon$ and define $H^{\varepsilon}:= G^{\Omega} - G^{\varepsilon}$. For every $0 \le \gamma' < \gamma$, by \eqref{eq:Green0}, we have \begin{equation}\label{eq_hcompactL1}
0 \le H^{\varepsilon}(x,y) = G^{\Omega}(x,y)(1- K^{\varepsilon}(|x-y|)) \lesssim \dfrac{\delta(y)^{\gamma'}}{\delta(x)^{\gamma'}} b(|x-y|), \quad x,y \in \Omega, \end{equation} where $$ b(t) := \dfrac{1}{t^{N-2s}} \left(1 - \dfrac{t^\beta}{\varepsilon^{\beta}}\right)\mathbf{1}_{\{ 0 \le t \le \varepsilon \} }, \quad t \geq 0. $$ Let $K \subset \subset \Omega$ be fixed and $\mu \in \mathcal{M}(\Omega,\delta^{\gamma'})$.
For $h \in \mathbb{R}^N$ such that $|h| < \frac{1}{2}\hbox{dist}(K,\partial \Omega)$, one has \begin{equation}\label{eq_maintheo2} \begin{aligned} \norm{ \mathbb{G}^{\Omega}[\mu](\cdot + h) - \mathbb{G}^{\Omega}[\mu](\cdot) }_{L^1(K)} &= \norm{ \int_{\Omega} \left[G^{\Omega}(\cdot+h,y) - G^{\Omega}(\cdot,y)\right]d\mu(y) }_{L^1(K)} \\ &\le \norm{ \int_{\Omega} H^{\varepsilon} (\cdot,y)d\mu(y) }_{L^1(K)} + \norm{ \int_{\Omega} H^{\varepsilon} (\cdot+h,y)d\mu(y) }_{L^1(K)} \\ &+ \norm{ \int_{\Omega} \left[G^{\varepsilon} (\cdot+h,y) - G^{\varepsilon}(\cdot,y) \right]d\mu(y)}_{L^1(K)} \\ &=: J_1 + J_2 + J_3. \end{aligned} \end{equation}
We first estimate $J_1$. Using the inequality $\delta(x) \geq \hbox{dist}(K,\partial \Omega)$ for all $x \in K$, one has $$\begin{aligned}
J_1 = \int_{K} \int_{\Omega} H^{\varepsilon} (x,y) d\mu(y) \, dx &\lesssim \int_{K}\int_{\Omega} \dfrac{\delta(y)^{\gamma'}}{\delta(x)^{\gamma'}} b(|x-y|) d\mu(y) \, dx \\
& \lesssim \int_{\Omega}\int_{K} \dfrac{\delta(y)^{\gamma'}}{\delta(x)^{\gamma'}} b(|x-y|) dx \, d\mu(y)\\
& \leq \dfrac{1}{\hbox{dist}(K,\partial \Omega)^{\gamma'}} \int_{\Omega}\delta(y)^{\gamma'} \left(\int_{K} b(|x-y|) dx\right)\, d\mu(y). \end{aligned} $$ Since
$$\int_{K} b(|x-y|) dx = \int_{ \{|x- y| \le \varepsilon\} } b(|x-y|)dx \lesssim \int_0^{\varepsilon} b(t)t^{N-1} dt \lesssim \int_0^{\varepsilon} t^{2s-1} dt \lesssim \varepsilon^{2s},$$ we obtain \begin{equation}\label{eq_I12} \begin{aligned} J_1 \lesssim \dfrac{1}{\hbox{dist}(K,\partial \Omega)^{\gamma'}}\varepsilon^{2s} \int_{\Omega} \delta(y)^{\gamma'} d\mu(y) = \dfrac{1}{\hbox{dist}(K,\partial \Omega)^{\gamma'}}\varepsilon^{2s} \norm{\mu}_{\mathcal{M}(\Omega,\delta^{\gamma'})}. \end{aligned} \end{equation}
Next we estimate $J_2$. We have \begin{align*} J_2 = \int_{K} \int_{\Omega} H^{\varepsilon} (x+h,y) d\mu(y) dx
&\lesssim \int_{K}\int_{\Omega} \dfrac{\delta(y)^{\gamma'}}{\delta(x+h)^{\gamma'}} b(|x+h-y|) d\mu(y) dx \\
&\lesssim \int_{\Omega} \delta(y)^{\gamma'} \left( \int_K \dfrac{b(|x+h-y|)}{\delta(x+h)^{\gamma'}} dx \right) d\mu(y). \end{align*}
Notice that for $|h| < \frac{1}{2}\hbox{dist}(K,\partial \Omega)$,
$$\delta(x+h)^{\gamma'} \geq \delta(x)^{\gamma'} - |h|^{\gamma'} \geq C_{\gamma'} \hbox{dist}(K,\partial \Omega)^{\gamma'},\quad \forall x \in K,$$ which implies \begin{align*}
\int_{K} \dfrac{b(|x+h-y| )}{\delta(x+h)^{\gamma'}} dx
& \lesssim \frac{1}{C_{\gamma'}\hbox{dist}(K,\partial \Omega)^{\gamma'}} \int_{K} b(|x+h-y|) dx \\
& \lesssim \frac{1}{C_{\gamma'}\hbox{dist}(K,\partial \Omega)^{\gamma'}} \int_{\{|x+h-y| \le \varepsilon \}} b(|x+h-y|) dx\\ & \lesssim \frac{1}{C_{\gamma'}\hbox{dist}(K,\partial \Omega)^{\gamma'}} \int_0^{\varepsilon} b(t)t^{N-1} dt \lesssim \frac{\varepsilon^{2s}}{C_{\gamma'}\hbox{dist}(K,\partial \Omega)^{\gamma'}}. \end{align*} From this we derive that \begin{equation}\label{eq_I22} \begin{aligned} J_2 \lesssim \frac{1}{C_{\gamma'}\hbox{dist}(K,\partial \Omega)^{\gamma'}}\varepsilon^{2s} \int_{\Omega} \delta(y)^{\gamma'} d\mu(y) = \frac{1}{C_{\gamma'}\hbox{dist}(K,\partial \Omega)^{\gamma'}}\varepsilon^{2s} \norm{\mu}_{\mathcal{M}(\Omega,\delta^{\gamma'})}. \end{aligned} \end{equation}
Finally, for $J_3$ we write \begin{equation}\label{eq_I23} \begin{aligned}
J_3 &= \int_{K} \left|\int_{\Omega} \left[G^{\varepsilon} (x+h,y) - G^{\varepsilon}(x,y) \right]d\mu(y)\right|dx\\
& \le \int_{\Omega} \int_{K} |G^{\varepsilon}(x+h,y) - G^{\varepsilon}(x,y)| dx d\mu(y)\\
&\le \int_{\Omega} \delta(y)^{\gamma'} \int_{K} \left|\dfrac{G^{\varepsilon}(x+h,y) - G^{\varepsilon}(x,y)}{\delta(y)^{\gamma'}}\right| dx d\mu(y)\\
&\le |K| \norm{ \dfrac{G^{\varepsilon}(\cdot + h, \cdot)}{\delta(\cdot)^{\gamma'}} - \dfrac{G^{\varepsilon}(\cdot, \cdot)}{\delta(\cdot)^{\gamma'}}}_{L^{\infty}(\Omega \times \Omega)} \norm{\mu}_{\mathcal{M}(\Omega,\delta^{\gamma'})}. \end{aligned} \end{equation} Since the map $(x,y) \mapsto \frac{G^{\varepsilon}(x,y)}{\delta(y)^{\gamma'}}$ is uniformly continuous by Lemma \ref{lem:technical}(i), we deduce that \begin{equation}\label{eq_I23.2}
\lim_{|h| \to 0} \sup_{ \norm{\mu}_{\mathcal{M}(\Omega,\delta^{\gamma'})} \le 1} J_3 = 0. \end{equation} Combining \eqref{eq_maintheo2}--\eqref{eq_I23.2}, we have
$$\limsup_{|h| \to 0}\sup_{ \norm{\mu}_{\mathcal{M}(\Omega,\delta^{\gamma'})} \le 1} \norm{ \mathbb{G}^{\Omega}[\mu](\cdot+h) - \mathbb{G}^{\Omega}[\mu](\cdot)}_{L^1(K)} \le C(K)\varepsilon^{2s}.$$ Since $\varepsilon>0$ is arbitrary, by the Riesz--Fr\'echet--Kolmogorov theorem (see e.g. \cite[Proposition 1.2.23]{DraMil_2013}), combined with the boundedness of $\mathbb{G}^{\Omega}: \mathcal{M}(\Omega,\delta^{\gamma'}) \to L^1(K)$ ensured by Proposition \ref{prop:marcinestimate}, we conclude that $\mathbb{G}^{\Omega}: \mathcal{M}(\Omega,\delta^{\gamma'}) \to L^1(K)$ is compact for every compact set $K \subset \subset \Omega$.
\textbf{Step 2.} We prove that $\mathbb{G}^{\Omega}: \mathcal{M}(\Omega,\delta^{\gamma'}) \to L^q(\Omega,\delta^{\alpha})$ is compact. Consider a bounded sequence $\{ \mu_n\}_{n \in \mathbb{N}} \subset \mathcal{M}(\Omega,\delta^{\gamma'})$ and a $C^2$ exhaustion $\{ \Omega_k \}_{k \in \mathbb{N}}$ of $\Omega$, namely $\Omega_k \subset \subset \Omega_{k+1} \subset \subset \Omega$ for all $k \in \mathbb{N}$ and $\cup_{k=1}^{\infty} \Omega_k = \Omega$. Put $u_n:= \mathbb{G}^{\Omega}[\mu_n]$. By Step 1, for any $k \in \mathbb{N}$, the map $\mathbb{G}^{\Omega} : \mathcal{M}(\Omega,\delta^{\gamma'}) \to L^1(\Omega_k)$ is compact. Thus, there exists a subsequence, denoted by $\{u_{n,k}\}_{n \in \mathbb{N}}$, and a function $v_k$ defined in $\Omega_k$ such that $u_{n,k} \to v_k$ in $L^1(\Omega_k)$ and, up to a further subsequence, a.e. in $\Omega_k$ as $n \to \infty$. Using the standard diagonal argument, we derive that there exist a subsequence, still labeled by the same notation $\{u_n\}_{n \in \mathbb{N}}$, and a function $u$ such that $u_n \to u$ a.e. in $\Omega$ and $u = v_k$ in $\Omega_k$. On the other hand, by \eqref{eq:Marcin2}, for any $q \in [1, p^*_{\gamma',\alpha} )$, we have $$ \norm{u_n}_{L^q(\Omega,\delta^{\alpha})} \lesssim \norm{\mu_n}_{\mathcal{M}(\Omega,\delta^{\gamma'})} \lesssim 1. $$ Employing Vitali's convergence theorem (the required equi-integrability follows, via H\"older's inequality, from the above bound applied with an exponent $q_1 \in (q, p^*_{\gamma',\alpha})$ in place of $q$), we derive that, up to a subsequence, $u_n \to u$ in $L^q(\Omega,\delta^{\alpha})$ for any $q \in [1, p^*_{\gamma',\alpha})$. Hence, we conclude that the map $\mathbb{G}^{\Omega}: \mathcal{M}(\Omega,\delta^{\gamma'}) \to L^q(\Omega,\delta^{\alpha})$ is compact for any $q \in [1,p^*_{{\gamma'},\alpha})$.
(ii) Let $\mu \in \mathcal{M}(\Omega,\delta^{\gamma})$ and consider the decomposition as in \eqref{eq_maintheo2} for $\mathbb{G}^{\Omega}[\mu]$. Following the same argument as above, we have the same estimates for $J_1, J_2$ where $\gamma'$ is replaced by $\gamma$. Furthermore, the estimate for $J_3$ also holds under assumption \eqref{eq_G4} by Lemma \ref{lem:technical}(ii). Hence, the map $\mathbb{G}^{\Omega}: \mathcal{M}(\Omega,\delta^{\gamma}) \to L^1(K)$ is compact for any subset $K \subset \subset \Omega$. Finally, following Step 2 in the proof of (i), we conclude that the map $\mathbb{G}^{\Omega}: \mathcal{M}(\Omega,\delta^{\gamma}) \to L^q(\Omega,\delta^{\alpha})$ is compact for every $q \in [1, {p^*_{\gamma,\alpha})}$ (recall that $p^*_{\gamma,\alpha}=\frac{N+\alpha}{N+\gamma-2s}$). \end{proof}
We infer in particular from Theorem \ref{theo:compactness} that the map $\mathbb{G}^{\Omega}: \mathcal{M}(\Omega,\delta^{\gamma}) \to L^1(\Omega,\delta^{\gamma})$ is compact under the given assumptions. As a consequence of Theorem \ref{theo:compactness}(ii), we obtain the following convergence which can be interpreted as the stability of solutions to linear problems.
\begin{corollary} \label{cor:stability} Suppose that \eqref{eq_G1}--\eqref{eq_G4} hold. Assume $\{ \mu_n\}_{n \in \mathbb{N}} \subset \mathcal{M}(\Omega,\delta^{\gamma})$ converges weakly to $\mu$ in $\mathcal{M}(\Omega,\delta^{\gamma})$. Then $\mathbb{G}^{\Omega}[\mu_n] \to \mathbb{G}^{\Omega}[\mu]$ in $L^1(\Omega,\delta^{\gamma})$. \end{corollary}
\begin{proof} Let $\{ \nu_n\}_{n \in \mathbb{N}}$ be an arbitrary subsequence of $\{\mu_n\}_{n \in \mathbb{N}}$. Put $u_{\nu_n} := \mathbb{G}^{\Omega}[\nu_n], n \in \mathbb{N}$, and $u_\mu := \mathbb{G}^{\Omega}[\mu]$. By the compactness of the map $\mathbb{G}^{\Omega}: \mathcal{M}(\Omega,\delta^{\gamma}) \to L^1(\Omega,\delta^{\gamma})$ in Theorem \ref{theo:compactness} (ii), up to a subsequence, there exists a function $v \in L^1(\Omega,\delta^{\gamma})$ such that $u_{\nu_n} \to v$ in $L^1(\Omega,\delta^{\gamma})$. By the integration-by-parts formula \cite[Lemma 4.4]{TruongTai_2020}, we have $$\int_{\Omega} u_{\nu_n} \xi dx = \int_{\Omega} \mathbb{G}^{\Omega} [\xi] d\nu_n,\quad \forall \xi \in \delta^{\gamma} L^{\infty}(\Omega).$$ Letting $n \to \infty$ and noticing that $\{\nu_n\}_{n \in \mathbb{N}}$ converges weakly to $\mu$, we have $$\int_{\Omega} v \xi dx = \int_{\Omega} \mathbb{G}^{\Omega} [\xi] d\mu,\quad \forall \xi \in \delta^{\gamma} L^{\infty}(\Omega),$$ which implies $v = \mathbb{G}^{\Omega}[\mu] = u_\mu$. Hence, $u_{\nu_n} \to u_\mu$ in $L^1(\Omega,\delta^{\gamma})$. Since $\{ \nu_n\}_{n \in \mathbb{N}}$ is chosen arbitrarily, we conclude that the whole sequence $\{\mathbb{G}^{\Omega}[\mu_n]\}_{n \in \mathbb{N}}$ converges to $\mathbb{G}^{\Omega}[\mu]$ in $L^1(\Omega,\delta^{\gamma})$. \end{proof}
For the sake of completeness, we next prove that the Green operator is compact from $L^r(\Omega,\delta^{\gamma})$ into $C(\overline{\Omega})$ for $r > \frac{N+\gamma}{2s}$. \begin{proposition}Assume that \eqref{eq_G1}--\eqref{eq_G3bis} hold. Then the map $\mathbb{G}^{\Omega}: L^r(\Omega,\delta^{\gamma}) \to C(\overline{\Omega})$ is compact for any $r > \frac{N+\gamma}{2s}$. \end{proposition}
\begin{proof}
\noindent \textbf{Step 1.} We first claim that the map $\mathbb{G}^{\Omega}: L^r(\Omega,\delta^{\gamma}) \to C(\overline{\Omega})$ is continuous. Firstly, under assumptions \eqref{eq_G1}--\eqref{eq_G2}, the map $\mathbb{G}^{\Omega}: L^r(\Omega,\delta^{\gamma}) \to L^{\infty}(\Omega)$ is continuous by \cite[Proposition 4.11]{TruongTai_2020}. Hence, it is sufficient to show that $\mathbb{G}^{\Omega}[f]$ is continuous on $\overline{\Omega}$ for any $f \in L^r(\Omega,\delta^\gamma)$. Indeed, let $\{x_n\}_{n \in \mathbb{N}} \subset \Omega$ be a sequence converging to a point $\tilde x\in \overline \Omega$.
First we consider the case that $\tilde x \in \Omega$. By assumption \eqref{eq_G3bis}, it can be seen that \begin{equation}\label{eq:Greenpointwiselim} \lim_{n \to \infty}\frac{G^{\Omega}(x_n,y)}{\delta(y)^{\gamma}} = \frac{G^{\Omega}(\tilde x,y)}{\delta(y)^{\gamma}} \quad \text{ for a.e. } y \in \Omega. \end{equation}
We show that for every $q \in [1,p^*)$, the sequence $\left\{ G^{\Omega}(x_n ,\cdot)\delta(\cdot)^{-\gamma}\right\}_{n \in \mathbb{N}}$ is bounded in $L^q(\Omega,\delta^{\gamma})$. For $q \in [1,p^*)$, writing $G^{\Omega} = (G^{\Omega})^{\frac{1}{q}}(G^{\Omega})^{1-\frac{1}{q}}$ and combining the bounds $G^{\Omega}(x_n,y) \lesssim |x_n-y|^{-(N-2s)}$ and $G^{\Omega}(x_n,y) \lesssim \delta(y)^{\gamma}|x_n-y|^{-(N-2s+\gamma)}$ from \eqref{eq:Green0}, there holds
$$\dfrac{G^{\Omega}(x_n,y)}{\delta(y)^{\gamma(1-\frac{1}{q})}} \lesssim \dfrac{1}{|x_n-y|^{N-2s+\gamma(1-\frac{1}{q})}}, \quad x_n \neq y.$$
Therefore, taking into account that $q < p^* = \frac{N+\gamma}{N+\gamma-2s}$, we deduce
\begin{align*}
\int_{\Omega} \left( \dfrac{G^{\Omega}(x_n,y)}{\delta(y)^{\gamma}}\right)^q \delta(y)^\gamma dy = \int_{\Omega} \left(\dfrac{G^{\Omega}(x_n,y)}{\delta(y)^{\gamma(1-\frac{1}{q})}}\right)^q dy \lesssim \int_{\Omega} \dfrac{1}{|x_n-y|^{q(N+\gamma-2s)-\gamma}}dy \leq C(N,s,\gamma,q).
\end{align*}
Thus, $\left\{ G^{\Omega}(x_n ,\cdot)\delta(\cdot)^{-\gamma}\right\}_{n \in \mathbb{N}}$ is bounded in $L^q(\Omega,\delta^{\gamma})$ for every $q \in [1,p^*)$. Since $r'=\frac{r}{r-1}<p^*$ (which is equivalent to $r > \frac{N+\gamma}{2s}$), we conclude by Vitali's convergence theorem that $G^{\Omega}(x_n,\cdot)\delta(\cdot)^{-\gamma} \to G^{\Omega}(\tilde x,\cdot)\delta(\cdot)^{-\gamma}$ in $L^{r'}(\Omega,\delta^{\gamma})$, namely
\begin{equation} \label{eq:compactLinfty.2}
\lim_{ n \to \infty} \norm{G^{\Omega}(x_n,\cdot)\delta(\cdot)^{-\gamma} - G^{\Omega}(\tilde x,\cdot)\delta(\cdot)^{-\gamma}}_{L^{r'}(\Omega,\delta^{\gamma})} = 0.
\end{equation} Let $f \in L^r(\Omega,\delta^{\gamma})$. We have
\begin{equation}\label{eq:compactLinfty.1}
\begin{aligned}
\left|\mathbb{G}^{\Omega}[f](x_n) - \mathbb{G}^{\Omega}[f](\tilde x)\right| &= \left|\int_{\Omega} \left[G^{\Omega}(x_n,y)-G^{\Omega}(\tilde x,y)\right] f(y)dy\right| \\
&\le \left(\int_{\Omega} |f(y)|^r \delta(y)^{\gamma} dy\right)^{\frac{1}{r}}\left( \int_{\Omega} \dfrac{|G^{\Omega}(x_n,y)-G^{\Omega}(\tilde x ,y)|^{r'}}{\delta(y)^{\frac{\gamma r'}{r}}}dy\right)^{\frac{1}{r'}}\\
& = \norm{f}_{L^r(\Omega,\delta^{\gamma})}\norm{G^{\Omega}(x_n,\cdot)\delta(\cdot)^{-\gamma} - G^{\Omega}(\tilde x,\cdot)\delta(\cdot)^{-\gamma}}_{L^{r'}(\Omega,\delta^{\gamma})}.
\end{aligned}
\end{equation}
Using \eqref{eq:compactLinfty.2}, we derive that $\mathbb{G}^{\Omega}[f](x_n) \to \mathbb{G}^{\Omega}[f](\tilde x)$ as $n \to \infty$. Thus $\mathbb{G}^{\Omega}[f]$ is continuous in $\Omega$.
If $\tilde x \in \partial \Omega$ then by using \eqref{G-est}, we deduce that
$$ \lim_{n \to \infty}\frac{G^{\Omega}(x_n,y)}{\delta(y)^\gamma} = 0 \quad \text{for a.e. } y \in \Omega.
$$
By using a similar argument as above, we deduce that for any $f \in L^r(\Omega,\delta^\gamma)$, $\mathbb{G}^\Omega[f](x_n) \to 0$ as $n \to \infty$. Thus, by putting $\mathbb{G}^\Omega[f](x)=0$ for $x \in \partial \Omega$, we obtain that $\mathbb{G}^{\Omega}[f] \in C(\overline \Omega)$.
\textbf{Step 2.} Consider a bounded set $\mathcal{Q} \subset L^r(\Omega,\delta^{\gamma})$ and put $M_{\mathcal{Q}}:=\sup_{f \in \mathcal{Q}}\norm{f}_{L^r(\Omega,\delta^{\gamma})}<+\infty$. By Step 1 and \cite[Proposition 4.11]{TruongTai_2020}, one has $\mathbb{G}^{\Omega}(\mathcal{Q}) \subset C(\overline{\Omega})$ and $$\norm{\mathbb{G}^{\Omega}[f]}_{L^{\infty}(\Omega)} \lesssim \norm{f}_{L^r(\Omega,\delta^{\gamma})} \leq M_\mathcal{Q},\quad \forall f \in \mathcal{Q}.$$ Next let $\varepsilon>0$ and take arbitrary $\tilde x \in \overline \Omega$ and $f \in \mathcal{Q}$. By using the same argument leading to \eqref{eq:compactLinfty.1}, we deduce that there exists $\rho$ depending on $\tilde x, \varepsilon, M_{\mathcal{Q}},\Omega$ such that for any $x \in B(\tilde x,\rho)\cap \Omega$, there holds
$$ \left|\mathbb{G}^{\Omega}[f](x) - \mathbb{G}^{\Omega}[f](\tilde x)\right| < \varepsilon. $$ This means that the set $\{ \mathbb{G}^{\Omega}[f]: f\in \mathcal{Q} \}$ is equicontinuous. Invoking the Arzel\`a--Ascoli theorem (see for instance \cite[Theorem 1.2.13]{DraMil_2013}), we conclude that the set $\{ \mathbb{G}^{\Omega}[f]: f\in \mathcal{Q} \}$ is relatively compact in $C(\overline{\Omega})$. Thus $\mathbb{G}^{\Omega}: L^r(\Omega,\delta^{\gamma}) \to C(\overline{\Omega})$ is compact. The proof is complete. \end{proof}
\section{Kato's inequality} \label{sec:katoinequality}
In this section, we prove Kato's inequality for operators satisfying \eqref{eq_L1}--\eqref{eq_L3} and \eqref{eq_G1}--\eqref{eq_G4}. We begin by presenting some properties of the function spaces introduced in Section \ref{sec:preliminaries}, which will be used in the sequel.
\begin{proposition}\label{prop:closedbilinear} The space $\mathcal{D}(\mathcal{B})$ defined in \eqref{D(B)} is complete under the norm $$
\norm{u}_{\mathcal{D}(\mathcal{B})} := \left( \mathcal{B}(u,u) + \norm{u}_{L^2(\Omega)}^2 \right)^{\frac{1}{2}} , \quad u \in \mathcal{D}(\mathcal{B}). $$ \end{proposition}
\begin{proof}One can easily check that $\norm{\cdot}_{\mathcal{D}(\mathcal{B})}$ is a norm on $\mathcal{D}(\mathcal{B})$, therefore it is enough to prove the completeness of $\mathcal{D}(\mathcal{B})$. Indeed, let $\{u_n\}_{n \in \mathbb{N}} \subset \mathcal{D}(\mathcal{B})$ be a Cauchy sequence with respect to the norm $\| \cdot \|_{\mathcal{D}(\mathcal{B})}$. For any $\varepsilon>0$, there exists $n_0$ such that $$ \norm{u_m - u_n}_{L^2(\Omega)} \le \norm{u_m - u_n}_{\mathcal{D}(\mathcal{B})} < \varepsilon, \quad \forall m, n > n_0.$$ This implies $\{ u_n \}_{n \in \mathbb{N}}$ is a Cauchy sequence in $L^2(\Omega)$. Hence up to a subsequence, there exists a function $u \in L^2(\Omega)$ such that $u_n \to u$ in $L^2(\Omega)$ and a.e. in $\Omega$. Fix $m > n_0$, then for a.e. $x,y \in \Omega$,
\begin{align*}
[u_n(x) - u_m(x)] - [u_n(y) - u_m(y)] \to [u(x) - u_m(x)] - [u(y) - u_m(y)] \quad \text{as } n \to \infty.
\end{align*}
Therefore, by Fatou's Lemma and the convergence $u_n \to u$ in $L^2(\Omega)$, we obtain
\begin{align*}
\mathcal{B}(u - u_m, u - u_m) + \| u - u_m \|_{L^2(\Omega)}^2
\le \liminf_{n \to \infty} \left[\mathcal{B}(u_n - u_m, u_n - u_m) + \norm{u_n - u_m}^2_{L^2(\Omega)}\right] \le \varepsilon^2.
\end{align*}
This implies
$$\norm{u_m - u}_{\mathcal{D}(\mathcal{B})} \le \varepsilon,\quad \forall m > n_0,$$
which means $u_m \to u$ in $\mathcal{D}(\mathcal{B})$ with respect to the norm $\norm{\cdot}_{\mathcal{D}(\mathcal{B})}$. The proof is complete. \end{proof}
\begin{lemma}\label{lem:Hs00} Let $H_{00}^s(\Omega)$ be defined in \eqref{H00}.
$(i)$ Assume $u \in H_{00}^s(\Omega)$ and $h: \mathbb{R} \to \mathbb{R}$ is a Lipschitz function such that $h(0) = 0$. Then $h(u) \in H_{00}^s(\Omega)$.
$(ii)$ Assume $u,v \in H_{00}^s(\Omega) \cap L^{\infty}(\Omega)$. Then $uv \in H_{00}^s(\Omega)$. \end{lemma}
\begin{proof} (i) By the assumption on $h$, there exists a positive constant $l$ such that $|h(t_1) - h(t_2)| \leq l |t_1-t_2|$ and in particular $|h(t_1)| \leq l|t_1|$ for any $t_1,t_2 \in \mathbb{R}$. Therefore
\begin{align*}
&\int_{\Omega}\left(1+\dfrac{1}{\delta^{2s}}\right)|h(u)|^2 dx + \int_{\Omega}\int_{\Omega} \dfrac{|h(u(x))-h(u(y))|^2}{|x-y|^{N+2s}}dx dy \\
\leq\, \, & l^2\int_{\Omega}\left(1+\dfrac{1}{\delta^{2s}}\right)|u|^2 dx + l^2\int_{\Omega}\int_{\Omega} \dfrac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}dx dy \leq l^2\norm{u}^2_{H^s_{00}(\Omega)},
\end{align*}
which implies $h(u) \in H^s_{00}(\Omega)$.
(ii) Since $u,v \in H^s_{00}(\Omega) \cap L^{\infty}(\Omega)$, it follows that, for any $x,y \in \Omega$,
\begin{align*}
|u(x)v(x) - u(y)v(y)| \leq \norm{u}_{L^{\infty}(\Omega)} |v(x) - v(y)| + \norm{v}_{L^{\infty}(\Omega)} |u(x) - u(y)|.
\end{align*}
Therefore
\begin{align*}
&\int_{\Omega}\left(1+\dfrac{1}{\delta^{2s}}\right)|uv|^2 dx + \int_{\Omega}\int_{\Omega} \dfrac{|u(x)v(x)-u(y)v(y)|^2}{|x-y|^{N+2s}}dx dy \\
&\le \norm{u}_{L^{\infty}(\Omega)}^2 \int_{\Omega} \left(1+\dfrac{1}{\delta^{2s}}\right)|v|^2dx + 2\norm{u}_{L^{\infty}(\Omega)}^2 \int_{\Omega}\int_{\Omega} \dfrac{|v(x)-v(y)|^2}{|x-y|^{N+2s}}dx dy \\ &\quad +2\norm{v}_{L^{\infty}(\Omega)}^2 \int_{\Omega}\int_{\Omega} \dfrac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}dx dy \\
&\le 2\norm{u}_{L^{\infty}(\Omega)}^2 \norm{v}_{H^s_{00}(\Omega)}^2 + 2\norm{v}_{L^{\infty}(\Omega)}^2 \norm{u}_{H^s_{00}(\Omega)}^2 < +\infty.
\end{align*}
This implies $uv \in H_{00}^s(\Omega)$. We complete the proof. \end{proof}
\begin{proposition}\label{prop:integrationbypart} Assume that \eqref{eq_L1} and \eqref{eq_L2bis} hold. Then for every $u, v\in C^{\infty}_c(\Omega)$,
\begin{equation}\label{eq:integrationbyparts}
\int_{\Omega} v \mathbb{L} u dx = \dfrac{1}{2} \int_{\Omega}\int_{\Omega} J(x,y)(u(x)-u(y))(v(x)-v(y)) dx dy + \int_{\Omega} B(x) u(x) v(x) dx.
\end{equation} \end{proposition}
\begin{proof} Let $u,v \in C^{\infty}_c(\Omega)$. We first notice that the right-hand side of \eqref{eq:integrationbyparts}
is finite. Indeed, let $K \subset \subset \Omega$ be such that $\supp u, \supp v \subset \subset K \subset \subset \Omega$. Then by assumption \eqref{eq_L1}, we have
\begin{align*}
&\dfrac{1}{2}\int_{\Omega} \int_{\Omega}\left|J(x,y)(u(x)-u(y))(v(x)-v(y))\right|dx dy+ \int_{\Omega}\left|B(x)u(x)v(x)\right| dx\\
\le \,\, & \dfrac{1}{2} \norm{\nabla u}_{L^{\infty}(\Omega)}\norm{\nabla v}_{L^{\infty}(\Omega)} \int_{\Omega} \int_{\Omega} J(x,y) |x-y|^2dx dy + \norm{u}_{L^{\infty}(\Omega)}\norm{ v}_{L^{\infty}(\Omega)} \int_{K} B(x) dx < +\infty.
\end{align*}
Let $\varepsilon > 0$ be sufficiently small. By \eqref{Lepsilon}, we obtain
\begin{align*}
\int_{\Omega} v(x)\mathbb{L}_{\varepsilon} u(x) dx
&= \int_{\Omega} v(x)\left[\int_{\Omega} J(x,y)(u(x)-u(y))\chi_{\varepsilon}(|x-y|) dy + B(x)u(x)\right] dx.
\end{align*}
Using the symmetry property of $J(x,y)$ and by the change of variables, we get
\begin{align*}
&\int_{\Omega} v(x)\left[\int_{\Omega} J(x,y)(u(x)-u(y))\chi_{\varepsilon}(|x-y|) dy \right] dx \\
&\quad = \dfrac{1}{2} \int_{\Omega}\int_{\Omega}J(x,y) (u(x)-u(y))(v(x)-v(y)) \chi_{\varepsilon}(|x-y|) dx dy,
\end{align*}
which implies
\begin{align*}
\int_{\Omega} v \mathbb{L}_{\varepsilon} u dx &= \dfrac{1}{2} \int_{\Omega}\int_{\Omega}J(x,y) (u(x)-u(y))(v(x)-v(y)) \chi_{\varepsilon}(|x-y|) dx dy \\
&\quad + \int_{\Omega} B(x)u(x)v(x) dx.
\end{align*}
Notice that $|v \mathbb{L}_{\varepsilon} u| \le |v \varphi_u|$ and $v\varphi_u \in L^1(\Omega)$ by assumption \eqref{eq_L2bis}. Furthermore, we have
\begin{align*}
&|u(x)-u(y)||v(x)-v(y)| J(x,y)\chi_{\varepsilon}(|x-y|) \le |u(x)-u(y)||v(x)-v(y)| J(x,y),\\
& \int_{\Omega}\int_{\Omega}|u(x)-u(y)||v(x)-v(y)| J(x,y) dxdy < +\infty,
\end{align*}
and $\chi_{\varepsilon}(|x-y|) \to 1$ as $\varepsilon \to 0$. Hence, letting $\varepsilon \to 0^+$ and using the dominated convergence theorem, we conclude that
\begin{align*}
\int_{\Omega} v \mathbb{L} u dx
&= \lim_{\varepsilon \to 0^+} \int_{\Omega} v \mathbb{L}_{\varepsilon} u dx \\
&= \lim_{\varepsilon \to 0^+} \dfrac{1}{2} \int_{\Omega}\int_{\Omega} [u(x)-u(y)][v(x)-v(y)] J(x,y)\chi_{\varepsilon}(|x-y|) dx dy \\
&\hspace{1cm} + \int_{\Omega} B(x)u(x)v(x) dx \\
&= \dfrac{1}{2} \int_{\Omega}\int_{\Omega} [u(x)-u(y)][v(x)-v(y)] J(x,y) dx dy + \int_{\Omega} B(x)u(x)v(x) dx,
\end{align*}
which is the desired result. \end{proof}
The following lemma lies at the core of the proof of Kato's inequality.
\begin{lemma}\label{lem:kato} Assume that \eqref{eq_L1}--\eqref{eq_L3} and \eqref{eq_G1}--\eqref{eq_G2} hold. Let $f \in L^1(\Omega,\delta^{\gamma})$, $u = \mathbb{G}^{\Omega} [f]$ and $p \in C^{1,1}(\mathbb{R})$ be a convex function such that $p(0) = p'(0) = 0$ and $|p'| \le 1$. Then \begin{equation}\label{eq:kato1} \int_{\Omega} p(u) \xi dx \le \int_{\Omega} fp'(u) \mathbb{G}^{\Omega}[\xi] dx, \end{equation} for every $\xi \in \delta^{\gamma}L^{\infty}(\Omega)$ such that $\mathbb{G}^{\Omega}[\xi] \ge 0$ a.e. in $\Omega$. \end{lemma}
\begin{proof} The proof is divided into two steps. Recall that by \cite[Proposition 5.2]{TruongTai_2020}, under assumptions \eqref{eq_L1}--\eqref{eq_L2} and \eqref{eq_G1}--\eqref{eq_G2}, for $w \in L^2(\Omega)$ one has $\mathbb{G}^{\Omega}[w] \in \mathbb{H}(\Omega)$ and \begin{equation}\label{eq:variational}
\int_{\Omega} w\zeta dx = \inner{\mathbb{G}^{\Omega}[w],\zeta}_{\mathbb{H}(\Omega)},\quad \forall \zeta \in \mathbb{H}(\Omega). \end{equation}
\textbf{Step 1.} Consider the case $f \in C^{\infty}_c(\Omega)$. In this case, $u = \mathbb{G}^{\Omega}[f] \in \mathbb{H}(\Omega)$ and since $p'$ is Lipschitz with $|p'| \le 1$, we have $p'(u) \in \mathbb{H}(\Omega) \cap L^{\infty}(\Omega)$ by Lemma \ref{lem:Hs00}. On the other hand, $\mathbb{G}^{\Omega}[\xi] \in \mathbb{H}(\Omega) \cap L^{\infty}(\Omega)$ for every $\xi \in \delta^{\gamma}L^{\infty}(\Omega)$ (see \cite[Proposition 4.11 and Proposition 5.2]{TruongTai_2020}). This together with Lemma \ref{lem:Hs00} again implies $p'(u) \mathbb{G}^{\Omega}[\xi] \in \mathbb{H}(\Omega)$.
In \eqref{eq:variational}, choosing $\zeta=p'(u) \mathbb{G}^{\Omega}[\xi] \in \mathbb{H}(\Omega)$ and $w= f \in C^{\infty}_c(\Omega) \subset L^2(\Omega)$, we obtain \begin{equation}\label{eq:equa1} \begin{aligned} \int_{\Omega} fp'(u) \mathbb{G}^{\Omega}[\xi] dx &= \inner{ u, p'(u)\mathbb{G}^{\Omega}[\xi] }_{\mathbb{H}(\Omega)} \\ &= {\frac{1}{2}}\int_{\Omega} \int_{\Omega} V(x,y)J(x,y)dxdy + \int_{\Omega} B u p'(u) \mathbb{G}^{\Omega}[\xi] dx, \end{aligned} \end{equation} where $$V(x,y):= [ u(x) - u(y)]\left[p'(u(x))\mathbb{G}^{\Omega}[\xi](x) - p'(u(y))\mathbb{G}^{\Omega}[\xi](y)\right], \quad x,y \in \Omega.$$ The second equality in \eqref{eq:equa1} follows from \eqref{bilinear} and \eqref{innerH}.
Put \begin{align*}
&a_1: = \mathbb{G}^{\Omega}[\xi](x) , \quad b_1: = (u(x)- u(y))p'(u(x)) - (p(u(x)) - p(u(y))),\\
&a_2: = \mathbb{G}^{\Omega}[\xi](y), \quad b_2: = (u(x)-u(y))p'(u(y)) - (p(u(x)) - p(u(y))). \end{align*} Then we can write $V$ as $$ V(x,y) = a_1b_1 - a_2b_2 +\left(p(u(x)) - p(u(y))\right)\left(\mathbb{G}^{\Omega}[\xi](x) - \mathbb{G}^{\Omega}[\xi](y)\right), \quad x,y \in \Omega. $$ Since $p$ is convex, for every $x,y \in \Omega$, we have \begin{align*} p(u(x)) - p(u(y)) &\le p'(u(x)) (u(x) - u(y)), \\ p(u(x)) - p (u(y)) &\ge p'(u(y))(u(x) - u(y)), \end{align*} which implies $b_1 \ge 0 \ge b_2$. Noticing that $a_1, a_2 \ge 0$, we have $a_1b_1 \ge a_2b_2$, which yields \begin{equation}\label{eq:compareO1} \begin{aligned} V(x,y) \ge (p(u(x)) - p(u(y)))(\mathbb{G}^{\Omega}[\xi](x) - \mathbb{G}^{\Omega}[\xi](y)),\quad x,y \in \Omega. \end{aligned} \end{equation} Since $p$ is convex and $p(0) = 0$, it follows that $p(t) \le t p'(t), t \in \mathbb{R}$. Combining this with \eqref{eq:equa1} and \eqref{eq:compareO1}, we deduce \begin{equation}\label{eq:equa3} \begin{aligned} \int_{\Omega} fp'(u) \mathbb{G}^{\Omega}[\xi] dx &\geq {\frac{1}{2}}\int_{\Omega} \int_{\Omega} [p(u(x))-p(u(y))][\mathbb{G}^{\Omega}[\xi](x) - \mathbb{G}^{\Omega}[\xi](y)]J(x,y)dx dy \\ &\quad+ \int_{\Omega} Bp(u)\mathbb{G}^{\Omega}[\xi] dx \\ &= \inner{ p(u), \mathbb{G}^{\Omega} [\xi]}_{\mathbb{H}(\Omega)}. \end{aligned} \end{equation}
Here we have used the fact that $p(u) \in \mathbb{H}(\Omega)$, which follows from Lemma \ref{lem:Hs00} since $p$ is Lipschitz (recall that $|p'| \le 1$) and $p(0) = 0$. Finally, replacing $w$ by $\xi \in \delta^{\gamma}L^{\infty}(\Omega) \subset L^2(\Omega)$ and $\zeta$ by $p(u)\in \mathbb{H}(\Omega)$ in \eqref{eq:variational}, we deduce that \begin{equation}\label{eq:kato1bis} \int_{\Omega} fp'(u) \mathbb{G}^{\Omega}[\xi] dx \ge \inner{ p(u), \mathbb{G}^{\Omega} [\xi]}_{\mathbb{H}(\Omega)} = \int_{\Omega} p(u) \xi dx,\quad \forall \xi \in \delta^{\gamma}L^{\infty}(\Omega), \mathbb{G}^{\Omega}[\xi] \ge 0. \end{equation} Hence, \eqref{eq:kato1} follows for $f \in C^{\infty}_c(\Omega)$.
\textbf{Step 2.} Consider $f \in L^1(\Omega,\delta^{\gamma})$. Let $\{ f_n\}_{n \in \mathbb{N}} \subset C^{\infty}_c(\Omega)$ be a sequence converging to $f$ in $L^1(\Omega,\delta^{\gamma})$ and a.e. in $\Omega$. Set $u=\mathbb{G}^{\Omega}[f]$ and $u_n = \mathbb{G}^{\Omega}[f_n]$, $n \in \mathbb{N}$. Since $f_n \to f$ in $L^1(\Omega,\delta^{\gamma})$ and the map $\mathbb{G}^{\Omega}: L^1(\Omega,\delta^{\gamma}) \to L^1(\Omega,\delta^{\gamma})$ is continuous, up to a subsequence, we deduce that $u_n \to u \text{ in }L^1(\Omega,\delta^{\gamma}) \text{ and a.e. in }\Omega$.
By the assumption $p \in C^{1,1}(\mathbb{R})$ and $|p'| \leq 1$, we obtain $|p(u_n)-p(u)| \leq |u_n-u|$ in $\Omega$, which implies that $p(u_n) \to p(u)$ in $L^1(\Omega,\delta^\gamma)$ and a.e. in $\Omega$. In particular, \begin{equation} \label{puun} \lim_{n \to \infty}\int_{\Omega}p(u_n)\xi dx = \int_{\Omega}p(u)\xi dx, \quad \forall \xi \in \delta^\gamma L^\infty(\Omega). \end{equation}
Now, notice that $f_np'(u_n) \to fp'(u) \text{ a.e. in }\Omega$, $|f_n p'(u_n)| \le |f_n|$ and $f_n \to f$ in $L^1(\Omega,\delta^{\gamma})$. By using the generalized dominated convergence theorem, we obtain $f_np'(u_n) \to fp'(u)$ in $L^1(\Omega,\delta^\gamma)$. For any $\xi \in \delta^\gamma L^\infty(\Omega)$, we have $\mathbb{G}^{\Omega}[\xi] \in \delta^\gamma L^\infty(\Omega)$ due to \cite[Proposition 3.5]{ChaGomVaz_2019_2020}. Therefore \begin{equation} \label{fpuun} \lim_{n \to \infty}\int_{\Omega}f_np'(u_n) \mathbb{G}^{\Omega}[\xi] dx = \int_{\Omega}fp'(u) \mathbb{G}^{\Omega}[\xi] dx, \quad \forall \xi \in \delta^\gamma L^\infty(\Omega). \end{equation} Applying \eqref{eq:kato1bis} with $f$ replaced by $f_n \in C^{\infty}_c(\Omega)$, we obtain \begin{equation}\label{eq:kato2} \int_{\Omega} f_n p'(u_n) \mathbb{G}^{\Omega}[\xi] dx \ge \int_{\Omega} p(u_n) \xi dx,\quad \forall \xi \in \delta^{\gamma}L^{\infty}(\Omega), \mathbb{G}^{\Omega} [\xi]\ge 0. \end{equation} Combining \eqref{puun}--\eqref{eq:kato2}, we derive \begin{align*} \int_{\Omega} p(u) \xi dx = \lim_{n \to \infty} \int_{\Omega} p(u_n) \xi dx &\leq \lim_{n \to \infty} \int_{\Omega} f_n p'(u_n) \mathbb{G}^{\Omega} [\xi] dx \\ &= \int_{\Omega} f p'(u) \mathbb{G}^{\Omega}[\xi] dx,\quad \forall \xi \in \delta^{\gamma}L^{\infty}(\Omega), \mathbb{G}^{\Omega} [\xi]\ge 0. \end{align*} We complete the proof. \end{proof}
We turn to the proof of Theorem \ref{thm:kato}. \begin{proof}[{\sc Proof of Theorem \ref{thm:kato}}{\normalfont (i)}] Consider the sequence $\{ p_k \}_{k \in \mathbb{N}}$ given by \begin{equation}\label{eq:sequencepk}
p_k(t) := \left\{ \begin{aligned} &|t| - \frac{1}{2k} &&\text{ if }|t| \ge \frac{1}{k},\\[3pt]
&\frac{kt^2}{2} &&\text{ if } |t| < \frac{1}{k}. \end{aligned} \right. \end{equation}
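For the reader's convenience, we record the elementary properties of $p_k$ that are used below; they follow directly from \eqref{eq:sequencepk}:
\begin{equation*}
(p_k)'(t) = \left\{ \begin{aligned} &\sgn(t) &&\text{ if } |t| \ge \frac{1}{k},\\ &kt &&\text{ if } |t| < \frac{1}{k}, \end{aligned}\right.
\qquad\qquad 0 \le |t| - p_k(t) \le \frac{1}{2k} \quad \text{ for all } t \in \mathbb{R}.
\end{equation*}
In particular, $(p_k)'$ is nondecreasing, Lipschitz and bounded by $1$, and $p_k \to |\cdot|$ uniformly on $\mathbb{R}$ as $k \to \infty$.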
Then for every $k \in \mathbb{N}$, $p_k \in C^{1,1}(\mathbb{R})$ is convex, $p_k(0) = (p_k)'(0) = 0$ and $|(p_k)'| \le 1$. Hence, applying Lemma \ref{lem:kato} with $p = p_k$, one has \begin{equation} \label{pku}
\int_{\Omega} p_k(u) \xi dx \le \int_{\Omega} f (p_k)'(u) \mathbb{G}^{\Omega}[\xi] dx \le \int_{\Omega} |f| \mathbb{G}^{\Omega}[\xi] dx,\quad \forall \xi \in \delta^{\gamma}L^{\infty}(\Omega), \mathbb{G}^{\Omega} [\xi]\ge 0. \end{equation}
Notice that $p_k(t) \to |t|$ and $(p_k)'(t) \to \sgn (t)$ as $k \to \infty$. Hence, letting $k \to \infty$ in \eqref{pku} and using the dominated convergence theorem, we obtain \eqref{eq:kato_abs}.
Next, by the integration-by-parts formula (see \cite[Lemma 4.4]{TruongTai_2020}), we have $$\int_{\Omega} u \xi dx = \int_{\Omega} f \mathbb{G}^{\Omega}[\xi]dx, \quad \forall \xi \in \delta^{\gamma}L^{\infty}(\Omega).$$ This and \eqref{eq:kato_abs} imply \eqref{eq:kato_main}. The proof is complete. \end{proof}
We next prove a version of Kato's inequality involving measures.
\begin{proof}[{\sc Proof of Theorem \ref{thm:kato}}{\normalfont (ii)}] Consider sequences of nonnegative functions $\{ \mu_{i,n}\}_{n \in \mathbb{N}} \subset L^1(\Omega,\delta^{\gamma})$, $i=1,2$, such that $\{\mu_{1,n}\}_{n \in \mathbb{N}}$ and $\{\mu_{2,n}\}_{n \in \mathbb{N}}$ converge weakly to $\mu^+$ and $\mu^-$ in $\mathcal{M}(\Omega,\delta^\gamma)$ respectively. Put $\mu_n:=\mu_{1,n}- \mu_{2,n}$; then $\{\mu_n\}_{n \in \mathbb{N}} \subset L^1(\Omega,\delta^\gamma)$ and $\{\mu_n\}_{n \in \mathbb{N}}$ converges weakly to $\mu$ in $\mathcal{M}(\Omega,\delta^\gamma)$. Denote $u_n := \mathbb{G}^{\Omega}[f + \mu_n]$, $n \in \mathbb{N}$. By Corollary \ref{cor:stability}, up to a subsequence we have $u_n \to u$ in $L^1(\Omega,\delta^{\gamma})$ and a.e. in $\Omega$. Using Lemma \ref{lem:kato} and the estimate $|p'| \leq 1$, we obtain \begin{align*} \int_{\Omega} p(u_n)\xi dx &\le \int_{\Omega} p'(u_n) \mathbb{G}^{\Omega}[\xi]fdx + \int_{\Omega} p'(u_n)\mathbb{G}^{\Omega}[\xi] \mu_n dx\\ &\le \int_{\Omega} p'(u_n) \mathbb{G}^{\Omega}[\xi]fdx + \int_{\Omega} \mathbb{G}^{\Omega}[\xi] (\mu_{1,n} + \mu_{2,n}) dx, \end{align*} for every $\xi \in \delta^{\gamma}L^{\infty}(\Omega)$ such that $\mathbb{G}^{\Omega} [\xi] \ge 0$ a.e. in $\Omega$. Noticing that $p \in C^{1,1}(\mathbb{R})$, $u_n \to u \text{ in }L^1(\Omega,\delta^{\gamma})$ and a.e. in $\Omega$, $p(u_n) \to p(u)$ in $L^1(\Omega,\delta^\gamma)$ and $\{\mu_{1,n}+\mu_{2,n}\}_{n \in \mathbb{N}}$ converges weakly to $\mu^+ + \mu^- = |\mu|$, we can pass to the limit to get \begin{equation}\label{eq:corolkato1} \int_{\Omega} p(u)\xi dx \le \int_{\Omega} p'(u) \mathbb{G}^{\Omega}[\xi]fdx + \int_{\Omega} \mathbb{G}^{\Omega}[\xi] d|\mu|,\quad \forall \xi \in \delta^{\gamma}L^{\infty}(\Omega), \mathbb{G}^{\Omega}[\xi] \ge 0. \end{equation} Now consider the sequence $\{ p_k\}_{k \in \mathbb{N}}$ as in \eqref{eq:sequencepk}. In \eqref{eq:corolkato1}, replacing $p$ by $p_k$ and letting $k \to \infty$, we derive \eqref{eq:kato_abs2}.
Next, by the integration-by-parts formula (see \cite[Lemma 4.4]{TruongTai_2020}), we have $$\int_{\Omega} u \xi dx = \int_{\Omega} f \mathbb{G}^{\Omega}[\xi]dx + \int_{\Omega} \mathbb{G}^{\Omega}[\xi]d\mu, \quad \forall \xi \in \delta^{\gamma}L^{\infty}(\Omega).$$ This and \eqref{eq:kato_abs2} imply \eqref{eq:kato_main2}. The proof is complete. \end{proof}
\section{Semilinear elliptic equations with measure data}\label{sec:semilinear}
In this section, we study the existence of solutions to problem \eqref{eq:semilinear_absorption}. Recall that a function $u$ is a weak-dual solution of \eqref{eq:semilinear_absorption} if $u \in L^1(\Omega,\delta^{\gamma})$, $g(u) \in L^1(\Omega,\delta^{\gamma})$ and $$\int_{\Omega} u\xi dx + \int_{\Omega} g(u)\mathbb{G}^{\Omega}[\xi] dx = \int_{\Omega} \mathbb{G}^{\Omega}[\xi] d\mu,\quad \forall \xi \in \delta^{\gamma}L^{\infty}(\Omega).$$ Equivalently (by the integration-by-parts formula \cite[Lemma 4.4]{TruongTai_2020}), $u$ is a weak-dual solution to \eqref{eq:semilinear_absorption} if $$u + \mathbb{G}^{\Omega}[g(u)] = \mathbb{G}^{\Omega}[\mu] \text{ a.e. in }\Omega.$$
The idea of the proof of Theorem \ref{theo:subsupersolution} is based on the method of sub- and supersolutions introduced in \cite{MonPon_2008}. We begin with the following equi-integrability result. \begin{lemma} Assume that $v_1,v_2 \in L^1(\Omega,\delta^\gamma)$ are such that $v_1 \leq v_2$ and $g(v_1), g(v_2) \in L^1(\Omega,\delta^\gamma)$. Then the set $$\mathcal{F}:= \left\{ g(v) \in L^1(\Omega,\delta^{\gamma}) :v \in L^1(\Omega,\delta^{\gamma}) \text{ and }v_1 \le v \le v_2 \text{ a.e. in }\Omega\right\}$$ is equi-integrable. \end{lemma}
\begin{proof} The proof follows that of \cite[Proposition 2.1]{MonPon_2008}. Here we provide it for the sake of convenience. Assume that $\mathcal{F}$ is not equi-integrable in $L^1(\Omega,\delta^{\gamma})$. Then there exist $\varepsilon > 0$, a sequence $\{ u_n\}_{n \in \mathbb{N}}$ such that $v_1 \le u_n \le v_2$ a.e. in $\Omega$, and a sequence of measurable subsets $\{ E_n\}_{n \in \mathbb{N}}$ of $\Omega$ such that
$$|E_n| \to 0 \text{ as } n \to \infty \quad \text{ and } \quad \int_{E_n} |g(u_n)| \delta^{\gamma} dx \ge \varepsilon, \quad \forall n \in \mathbb{N}.$$ By \cite[Lemma 2.1]{MonPon_2008} with $w_n:= |g( u_n)|\delta^{\gamma}/\varepsilon$, we can choose a subsequence $\{u_{n_k}\}_{k \in \mathbb{N}}$ and a sequence of disjoint measurable sets $\{F_k\}_{k \in \mathbb{N}}$ such that
$$\int_{F_k} |g(u_{n_k})|\delta^{\gamma} dx \ge \dfrac{\varepsilon}{2},\quad \forall k \in \mathbb{N}.$$ Put $$ v(x):= \va{ &u_{n_k}(x) &\text{ if } x\in F_k \text{ for some }k \ge 1,\\ &v_1(x) &\text{ otherwise.}} $$ Then $v \in L^1(\Omega,\delta^{\gamma})$ and $v_1 \le v \le v_2$. In addition,
$$\int_{\Omega} |g(v)|\delta^{\gamma} dx \ge \sum_{k=1}^{\infty} \int_{F_k} |g(u_{n_k})|\delta^{\gamma} dx = +\infty,$$ which contradicts the fact that $g(v) \in L^1(\Omega,\delta^{\gamma})$ (recall that $g$ is nondecreasing, so $g(v_1) \le g(v) \le g(v_2)$ with $g(v_1), g(v_2) \in L^1(\Omega,\delta^{\gamma})$). \end{proof}
As a result, by following the idea in \cite[Theorem 2.1]{MonPon_2008}, we obtain \begin{proposition}\label{prop:continuousCara} Assume that $g(v) \in L^1(\Omega,\delta^{\gamma})$ for every $v\in L^1(\Omega,\delta^{\gamma})$. Let $\mathcal{A}$ be the operator defined by $\mathcal{A}v(x) = g(v(x))$ for $v \in L^1(\Omega,\delta^{\gamma})$ and $x \in \Omega$. Then $\mathcal{A}: L^1(\Omega,\delta^{\gamma}) \to L^1(\Omega,\delta^{\gamma})$ is continuous. \end{proposition}
\begin{proof}[\sc Proof of Theorem \ref{theo:subsupersolution}] For any measurable function $v$, set $$h(v(x)) := \va{ &g(\mathbb{G}^{\Omega} [\mu^+](x)) &\text{ if } v(x) > \mathbb{G}^{\Omega}[\mu^+](x),\\ &g(v(x)) &\text{ if } -\mathbb{G}^{\Omega}[\mu^-](x) \le v(x) \le \mathbb{G}^{\Omega}[\mu^+](x),\\ &g(-\mathbb{G}^{\Omega} [\mu^-](x)) &\text{ if } v(x) < -\mathbb{G}^{\Omega} [\mu^-](x).}$$ By assumption \eqref{eq:goodmeasure}, we have $h(v) \in L^1(\Omega,\delta^\gamma)$. Hence, the map $\mathcal{A}: L^1(\Omega,\delta^{\gamma}) \to L^1(\Omega,\delta^{\gamma})$ defined by $\mathcal{A}v(x) = h(v(x))$ is continuous by Proposition \ref{prop:continuousCara}. Furthermore, $g(-\mathbb{G}^{\Omega}[\mu^-]) \le h(v) \le g(\mathbb{G}^{\Omega}[\mu^+]),\forall v \in L^1(\Omega,\delta^{\gamma})$.
\textbf{Step 1.} Assume that $\mu \in \mathcal{M}(\Omega,\delta^{\gamma})$. We prove that there exists a function $u \in L^1(\Omega,\delta^{\gamma})$ such that $$u + \mathbb{G}^{\Omega}[h(u)] = \mathbb{G}^{\Omega}[\mu].$$ Indeed, consider the operator $\mathbb{T} : L^1(\Omega,\delta^{\gamma}) \to L^1(\Omega,\delta^{\gamma})$ defined by $$\mathbb{T} u := \mathbb{G}^{\Omega}\left[\mu - h( u)\right], \quad u \in L^1(\Omega,\delta^{\gamma}) $$ and the set $$\mathcal{C}:=\left\{ u \in L^1(\Omega,\delta^{\gamma}): \norm{u}_{L^1(\Omega,\delta^{\gamma})} \le M \right\},$$ where $$M:= C_1\left(\norm{\mu}_{\mathcal{M}(\Omega,\delta^{\gamma})} + \norm{g\left(\mathbb{G}^{\Omega}[\mu^+]\right)}_{L^1(\Omega,\delta^{\gamma})} + \norm{g\left(-\mathbb{G}^{\Omega}[\mu^-]\right)}_{L^1(\Omega,\delta^{\gamma})} \right),$$ with $C_1=C_1(\Omega,N,s)$ being the constant in the estimate $\norm{\mathbb{G}^{\Omega}[\mu]}_{L^1(\Omega,\delta^{\gamma})} \le C_1 \norm{\mu}_{\mathcal{M}(\Omega,\delta^{\gamma})}$.
Since $\mathbb{G}^{\Omega}: \mathcal{M}(\Omega,\delta^{\gamma}) \to L^1(\Omega,\delta^{\gamma})$ is compact by Theorem \ref{theo:compactness} and the map $\mathcal{A}: L^1(\Omega,\delta^{\gamma}) \to L^1(\Omega,\delta^{\gamma})$ is continuous, it follows that $\mathbb{T}$ is compact. Furthermore, $\mathcal{C}$ is closed, bounded, convex and $\mathbb{T}(\mathcal{C}) \subset \mathcal{C}$ since \begin{align*} \norm{\mathbb{T} u}_{L^1(\Omega,\delta^{\gamma})} = \norm{\mathbb{G}^{\Omega}\left[\mu - h( u)\right]}_{L^1(\Omega,\delta^{\gamma})} \leq C_1\left(\norm{\mu}_{\mathcal{M}(\Omega,\delta^{\gamma})} + \norm{h(u)}_{L^1(\Omega,\delta^{\gamma})}\right) \le M, \end{align*} for every $u \in \mathcal{C}$. By Schauder's fixed point theorem, we conclude that there exists a function $u \in L^1(\Omega,\delta^{\gamma})$ such that $$u + \mathbb{G}^{\Omega}[h(u)] = \mathbb{G}^{\Omega}[\mu].$$
\textbf{Step 2.} We prove that \begin{equation} \label{Gmupm}-\mathbb{G}^{\Omega}[\mu^-] \le u \le \mathbb{G}^{\Omega}[\mu^+] \quad \text{a.e. in } \Omega. \end{equation} Denote $v := u - \mathbb{G}^{\Omega}[\mu^+]$. We have $$v = \mathbb{G}^{\Omega}[-h(u)] + \mathbb{G}^{\Omega}[\mu - \mu^+].$$ By Theorem \ref{thm:kato} (ii), we have $$\int_{\Omega} v^+ \xi dx \leq -\int_{\left\{u \geq \mathbb{G}^{\Omega}[\mu^+]\right\}} h(u) \mathbb{G}^{\Omega}[\xi]dx = -\int_{\left\{u \geq \mathbb{G}^{\Omega}[\mu^+]\right\}} g(\mathbb{G}^{\Omega}[\mu^+]) \mathbb{G}^{\Omega}[\xi]dx \leq 0,$$ for every $\xi \in \delta^{\gamma}L^{\infty}(\Omega)$ such that $\mathbb{G}^{\Omega}[\xi] \ge 0$, which implies $v^+ = 0$. Hence, $u \le \mathbb{G}^{\Omega}[\mu^+]$. Similarly, one has $u \ge -\mathbb{G}^{\Omega}[\mu^-]$. Thus, $h(u) = g(u)$ and $u$ satisfies $$u + \mathbb{G}^{\Omega}[g(u)] = \mathbb{G}^{\Omega}[\mu],$$ i.e. $u$ is a solution of \eqref{eq:semilinear_absorption} and \eqref{Gmupm} holds.
\textbf{Step 3.} We prove that the solution is unique and the map $\mu \mapsto u$ is increasing. Assume that $\mu_1,\mu_2 \in \mathcal{M}(\Omega,\delta^{\gamma})$ are such that $\mu_1 \le \mu_2$, and let $u_1$ and $u_2$ be solutions to \eqref{eq:semilinear_absorption} with measures $\mu_1$ and $\mu_2$, respectively. Then we have $$u_1 - u_2 + \mathbb{G}^{\Omega}[g(u_1) - g(u_2)] = \mathbb{G}^{\Omega}[\mu_1 - \mu_2].$$ Applying Kato's inequality \eqref{eq:kato_main2} for $f = g(u_2) - g(u_1)$ and $\mu = \mu_1 - \mu_2$, we have $$\int_{\Omega}(u_1-u_2)^+ \xi dx + \int_{ \{ u_1 \ge u_2 \} } \left( g(u_1) - g(u_2)\right) \mathbb{G}^{\Omega}[\xi] dx \le \int_{\Omega} \mathbb{G}^{\Omega}[\xi] d(\mu_1 - \mu_2)^+ = 0$$ for any $\xi \in \delta^{\gamma}L^{\infty}(\Omega)$ such that $\mathbb{G}^{\Omega}[\xi] \ge 0.$ Since $g$ is increasing, we derive $\int_{\Omega}(u_1 - u_2)^+ \xi dx \le 0$ for every $\xi \in \delta^{\gamma}L^{\infty}(\Omega)$ such that $\mathbb{G}^{\Omega}[\xi]\ge 0$. This implies that $u_1 \le u_2$. Thus the map $\mu \mapsto u$ is increasing. The uniqueness for \eqref{eq:semilinear_absorption} follows straightforwardly. We complete the proof. \end{proof}
As a consequence of the above result, we obtain the existence of solutions for $L^1$ data.
\begin{proof}[{\sc Proof of Corollary \ref{cor:L1data}}] First we assume that $f \in L^{\infty}(\Omega)$. We have $\mathbb{G}^{\Omega}[|f|] \in L^{\infty}(\Omega)$ (see \cite[Theorem 2.1]{Aba_2019}), which implies $g(\mathbb{G}^{\Omega}[f^+]) \in L^{\infty}(\Omega)$ and $g(-\mathbb{G}^{\Omega}[f^-]) \in L^{\infty}(\Omega)$, in particular $g(-\mathbb{G}^{\Omega}[f^-])$ and $g(\mathbb{G}^{\Omega}[f^+])$ belong to $L^1(\Omega,\delta^{\gamma})$. By Theorem \ref{theo:subsupersolution}, there exists a unique weak-dual solution to \eqref{eq:semilinear_absorption}.
Now consider the case $f \in L^1(\Omega,\delta^{\gamma})$ and let $\{f_n\}_{n \in \mathbb{N}}$ be a sequence in $L^{\infty}(\Omega)$ that converges to $f$ in $L^1(\Omega,\delta^{\gamma})$. Denote by $u_n$ the unique solution of \eqref{eq:semilinear_absorption} with datum $f_n$. Since $$u_n - u_m + \mathbb{G}^{\Omega}[g(u_n) - g(u_m)] = \mathbb{G}^{\Omega}[f_n - f_m],\quad m,n \in \mathbb{N},$$ using Kato's inequality \eqref{eq:kato_abs} and the assumption that $g$ is nondecreasing, we obtain
$$\int_{\Omega} |u_n - u_m| \xi dx + \int_{\Omega}|g(u_n) - g(u_m)|\mathbb{G}^{\Omega}[\xi]dx \le \int_{\Omega}|f_n - f_m|\mathbb{G}^{\Omega}[\xi] dx,$$ for every $\xi \in \delta^{\gamma} L^{\infty}(\Omega)$ with $\mathbb{G}^{\Omega}[\xi]\ge 0$. Choosing $\xi = \delta^{\gamma}$ and noticing that $\mathbb{G}^{\Omega}[\delta^{\gamma}] \sim \delta^{\gamma}$ by \cite[Example 3.6]{Aba_2019}, one has \begin{align*}
\int_{\Omega} |u_n - u_m| \delta^{\gamma} dx + \int_{\Omega}|g(u_n) - g(u_m)| \delta^{\gamma} dx \lesssim \int_{\Omega} |f_n - f_m|\delta^{\gamma} dx. \end{align*} Since $\{f_n\}_{n \in \mathbb{N}}$ is a Cauchy sequence, we infer from the above estimate that $\{u_n\}_{ n \in \mathbb{N}}$ and $\{ g(u_n) \}_{n \in \mathbb{N}}$ are also Cauchy sequences. This implies that, up to a subsequence, $u_n \to u$ and $g(u_n) \to g(u)$ in $L^1(\Omega,\delta^{\gamma})$. Hence, letting $n \to \infty$ in the formula $u_n + \mathbb{G}^\Omega[g(u_n)]=\mathbb{G}^\Omega[f_n]$, we conclude that $u + \mathbb{G}^{\Omega}[g(u)] = \mathbb{G}^{\Omega}[f]$, which means that $u$ is a solution to \eqref{eq:semilinear_absorption} with $\mu=f$. Finally, the solution is unique by Kato's inequality. We complete the proof. \end{proof}
Next we give a sharp condition on $g$ under which \eqref{eq:goodmeasure} is satisfied. Recall that $p^*=\frac{N+\gamma}{N+\gamma-2s}$.
\begin{lemma} \label{lem:subint}
Let $g: \mathbb{R} \to \mathbb{R}$ be a nondecreasing continuous function with $g(0)=0$. Assume that $g$ satisfies the subcritical integrability condition \begin{equation} \label{sub-int1} \int_1^{\infty} [g(t) - g(-t)]t^{-1-p^*}dt < +\infty. \end{equation} Then for any $\mu \in \mathcal{M}(\Omega,\delta^\gamma)$,
$$ g(\mathbb{G}^\Omega[|\mu|]), g(-\mathbb{G}^\Omega[|\mu|]) \in L^1(\Omega,\delta^\gamma). $$ \end{lemma} \begin{proof} Let $\mu \in \mathcal{M}(\Omega,\delta^\gamma)$. It is sufficient to show that if $\mu \geq 0$ then $g(\mathbb{G}^\Omega[\mu]) \in L^1(\Omega,\delta^\gamma)$. For $\lambda>0$, set $$ A_\lambda:=\{ x \in \Omega: \mathbb{G}^\Omega[\mu](x) > \lambda \}, \quad a(\lambda) := \int_{A_\lambda}\delta^\gamma dx. $$ We write \begin{equation} \label{gG1-1} \begin{aligned} \int_{\Omega}g(\mathbb{G}^\Omega[\mu])\delta^\gamma dx &= \int_{\Omega \setminus A_1}g(\mathbb{G}^\Omega[\mu])\delta^\gamma dx + \int_{A_1}g(\mathbb{G}^\Omega[\mu])\delta^\gamma dx \\ &\leq g(1)\int_{\Omega}\delta^\gamma dx - \int_1^{\infty} g(t)da(t). \end{aligned} \end{equation} For $\lambda_n \ge 1$, we will use integration by parts together with the estimate \begin{equation} \label{atn} a(\lambda_n) \leq \lambda_n^{-p^*}\vertiii{ \mathbb{G}^\Omega[\mu]}_{M^{p^*}(\Omega,\delta^\gamma)} \leq \lambda_n^{-p^*}\norm{ \mathbb{G}^\Omega[\mu]}_{M^{p^*}(\Omega,\delta^\gamma)}, \end{equation} which is deduced from \eqref{eq:Marcin}, the definition of Marcinkiewicz spaces and inequality \eqref{Marcin-equi}. We deduce from \eqref{sub-int1} that there exists a sequence $\{\lambda_n\}_{n \in \mathbb{N}}$ such that \begin{equation} \label{tn} \lim_{\lambda_n \nearrow +\infty}\lambda_n^{-p^*}g(\lambda_n) = 0, \end{equation} which together with \eqref{atn} implies \begin{equation} \label{angn} \lim_{\lambda_n \nearrow +\infty}a(\lambda_n)g(\lambda_n) = 0. \end{equation} Hence, one can write \begin{align*} - \int_1^{\lambda_n} g(t)da(t) &= - g(\lambda_n)a(\lambda_n) + g(1)a(1) + \int_{1}^{\lambda_n} a(t)dg(t) \\ &\leq - g(\lambda_n)a(\lambda_n) + g(1)a(1) + \norm{ \mathbb{G}^\Omega[\mu]}_{M^{p^*}(\Omega,\delta^\gamma)}\int_{1}^{\lambda_n} t^{-p^*}dg(t) \\ &= \left(\norm{ \mathbb{G}^\Omega[\mu]}_{M^{p^*}(\Omega,\delta^\gamma)}\lambda_n^{-p^*}- a(\lambda_n)\right)g(\lambda_n) + \left(a(1)-\norm{ \mathbb{G}^\Omega[\mu]}_{M^{p^*}(\Omega,\delta^\gamma)}\right)g(1) \\ &\quad + p^*\norm{ \mathbb{G}^\Omega[\mu]}_{M^{p^*}(\Omega,\delta^\gamma)} \int_1^{\lambda_n}g(t)t^{-p^*-1}dt. \end{align*} Letting $\lambda_n \to +\infty$ and using \eqref{tn} and \eqref{angn}, together with the fact that $a(1) \leq \norm{ \mathbb{G}^\Omega[\mu]}_{M^{p^*}(\Omega,\delta^\gamma)}$ (by \eqref{atn} with $\lambda_n = 1$) and $g(1) \ge 0$, we arrive at $$ -\int_1^\infty g(t)da(t) \leq p^*\norm{ \mathbb{G}^\Omega[\mu]}_{M^{p^*}(\Omega,\delta^\gamma)} \int_1^{\infty}g(t)t^{-p^*-1}dt. $$ Plugging this into \eqref{gG1-1} and using \eqref{sub-int1}, we obtain $$ \int_{\Omega}g(\mathbb{G}^\Omega[\mu])\delta^\gamma dx \leq g(1)\int_{\Omega}\delta^\gamma dx + p^*\norm{ \mathbb{G}^\Omega[\mu]}_{M^{p^*}(\Omega,\delta^\gamma)} \int_1^{\infty}g(t)t^{-p^*-1}dt < +\infty, $$ which means $g(\mathbb{G}^\Omega[\mu]) \in L^1(\Omega,\delta^\gamma)$.
In order to prove that $g(-\mathbb{G}^\Omega[\mu]) \in L^1(\Omega,\delta^\gamma)$, we put $\tilde g(t) = -g(-t)$ and use the fact that $\tilde g(\mathbb{G}^\Omega[\mu]) \in L^1(\Omega,\delta^\gamma)$. We complete the proof. \end{proof}
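As a simple illustration of \eqref{sub-int1} (not needed in the sequel), consider the model power nonlinearity $g(t)=|t|^{p-1}t$ with $p>1$. Then $g(t)-g(-t)=2t^{p}$ for $t \ge 1$, so that
$$ \int_1^{\infty}[g(t)-g(-t)]t^{-1-p^*}dt = 2\int_1^{\infty} t^{p-1-p^*}dt < +\infty \quad \Longleftrightarrow \quad p < p^*, $$
which explains the terminology ``subcritical'': in the power case, Lemma \ref{lem:subint} applies exactly in the range $1<p<p^*$.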
\begin{proof}[{\sc Proof of Theorem \ref{measuredata-sub}}.] This theorem follows from Theorem \ref{theo:subsupersolution} and Lemma \ref{lem:subint}. \end{proof}
\section{Boundary singularities of solutions}\label{sec:boundarysolution} In this section, we assume that \eqref{eq_L1}--\eqref{eq_L3} and \eqref{eq_G1}--\eqref{eq_G4} hold. We will show that any isolated boundary singularity can be obtained as the limit of isolated interior singularities. Recall that the Martin kernel $M^{\Omega}: \Omega \times \partial \Omega \to \mathbb{R}$ is given by $$ M^{\Omega}(x,z) =\lim_{\Omega \ni y \to z}\frac{G^\Omega(x,y)}{\delta(y)^\gamma}, \quad x \in \Omega, z \in \partial \Omega. $$ Recall also that $p^*=\frac{N+\gamma}{N+\gamma-2s}$. \begin{proposition} Assume that $1 < p < p^*$. Then for any $z \in \partial \Omega$, \begin{equation} \label{GMp} \lim_{\Omega \ni x \to z}\frac{\mathbb{G}^\Omega[M^\Omega(\cdot,z)^p](x)}{M^\Omega(x,z)} = 0. \end{equation} \end{proposition}
\begin{proof} It can be seen from \eqref{M-est} that $$
\mathbb{G}^{\Omega}[M^{\Omega}(\cdot,z)^p](x) = \int_{\Omega} G^{\Omega}(x,y) M^{\Omega}(y,z)^p dy \sim \int_{\Omega} G^{\Omega}(x,y) \left( \dfrac{\delta(y)^{\gamma}}{|y-z|^{N-2s+2\gamma}} \right)^p dy. $$ This, together with \eqref{M-est} again, implies $$
\dfrac{\mathbb{G}^{\Omega}[M^{\Omega}(\cdot,z)^p](x)}{M^{\Omega}(x,z)} \sim \delta(x)^{-\gamma} |x-z|^{N-2s+2\gamma}\int_{\Omega} G^{\Omega}(x,y) \left( \dfrac{\delta(y)^{\gamma}}{|y-z|^{N-2s+2\gamma}} \right)^p dy. $$ Put $$
\Omega_1:=\Omega \cap B\left(x,\dfrac{|x-z|}{2}\right), \quad \Omega_2 :=\Omega \cap B\left(z,\dfrac{|x-z|}{2}\right), \quad \Omega_3:=\Omega \setminus (\Omega_1 \cup\Omega_2) $$ and
$$I_k := \delta(x)^{-\gamma} |x-z|^{N-2s+2\gamma}\int_{\Omega_k} G^{\Omega}(x,y) \left( \dfrac{\delta(y)^{\gamma}}{|y-z|^{N-2s+2\gamma}} \right)^pdy, \quad k = 1,2,3.$$
We first estimate $I_1$. For any $y \in \Omega_1$, we have $2|y-z| \ge |x-z|$ and obviously, $\delta(y) \le |y-z|$. Furthermore, since $p < p^*$, there exists $\alpha > 0$ such that $$(p-1)(N-2s + 2\gamma) < \alpha < \left( \gamma(p-1)+2s \right)\wedge \left( p(N-2s+2\gamma) \right).$$ Then we find \begin{align*}
I_1 &\lesssim \delta(x)^{-\gamma} |x-z|^{N-2s+2\gamma} \int_{\Omega_1} G^{\Omega}(x,y) \dfrac{\delta(y)^{\gamma p - \alpha}}{|y-z|^{p(N-2s+2\gamma)-\alpha}} dy\\
&\lesssim \delta(x)^{-\gamma} |x-z|^{(N-2s+2\gamma)(1-p)+\alpha} \int_{\Omega_1} G^{\Omega}(x,y) \delta(y)^{\gamma p -\alpha} dy \\
&\lesssim \delta(x)^{-\gamma} |x-z|^{(N-2s+2\gamma)(1-p)+\alpha} \mathbb{G}^{\Omega}[\delta^{\gamma p - \alpha}](x). \end{align*} We have $\gamma p - \alpha > \gamma -2s$ since $\alpha < \gamma(p-1)+2s$. Thus by \cite[Theorem 3.4]{Aba_2019}, $\mathbb{G}^{\Omega}[\delta^{\gamma p - \alpha}] \sim \delta^{\gamma}$. This implies
$$I_1 \lesssim \delta(x)^{-\gamma} |x-z|^{(N-2s+2\gamma)(1-p)+\alpha} \delta(x)^{\gamma} = |x-z|^{(N-2s+2\gamma)(1-p)+\alpha}.$$ Since $(N-2s+2\gamma)(1-p) +\alpha > 0$, it follows that \begin{equation} \label{limI1} \lim_{x \to z}I_1 = 0. \end{equation}
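For the reader's convenience, we point out that the range from which $\alpha$ is chosen above is nonempty. The essential comparison is the one with $\gamma(p-1)+2s$:
$$ (p-1)(N-2s+2\gamma) < \gamma(p-1)+2s \;\Longleftrightarrow\; (p-1)(N+\gamma-2s) < 2s \;\Longleftrightarrow\; p < \frac{N+\gamma}{N+\gamma-2s} = p^*, $$
while the comparison with $p(N-2s+2\gamma)$ is immediate as long as $N-2s+2\gamma>0$.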
We next estimate $I_2$. For $y \in \Omega_2$, we have $|x-z| \leq 2|x-y|$ and $\delta(y) \le |y-z|$. This, together with the fact that \begin{equation}\label{eq:Gestimate2}
G^{\Omega}(x,y) \lesssim \dfrac{\delta(x)^{\gamma}\delta(y)^{\gamma}}{|x-y|^{N-2s+2\gamma}}, \quad (x,y) \in (\Omega \times \Omega) \backslash D_{\Omega}, \end{equation} implies \begin{align*}
I_2 &\lesssim \delta(x)^{-\gamma} |x-z|^{N-2s+2\gamma} \int_{\Omega_2} \dfrac{\delta(x)^{\gamma}\delta(y)^{\gamma}}{|x-y|^{N-2s+2\gamma}}\cdot \left( \dfrac{\delta(y)^{\gamma}}{|y-z|^{N-2s+2\gamma}}\right)^p dy \\
&\lesssim \int_{\Omega_2} \dfrac{1}{|y-z|^{(N-2s+\gamma)p - \gamma}} dy\\
&\lesssim \int_0^{|x-z|} t^{N-1+\gamma - (N+\gamma-2s)p}dt \\
&= \frac{1}{N+\gamma - (N+\gamma-2s)p}|x-z|^{N+\gamma - (N+\gamma-2s)p}. \end{align*} Since $p<p^*$, it follows that \begin{equation} \label{limI2} \lim_{x \to z}I_2 = 0. \end{equation}
Finally, we estimate $I_3$. For every $y \in \Omega_3$, we have $|y-z| \leq 3|x-y|$ and $\delta(y) \le |y-z|$. Combining with \eqref{eq:Gestimate2} again, we obtain \begin{equation}\label{eq:I3_1} \begin{aligned} I_3
&\lesssim \delta(x)^{-\gamma} |x-z|^{N-2s+2\gamma} \int_{\Omega_3} \dfrac{\delta(x)^{\gamma}\delta(y)^{\gamma}}{|x-y|^{N-2s+2\gamma}}\cdot \left( \dfrac{\delta(y)^{\gamma}}{|y-z|^{N-2s+2\gamma}}\right)^p dy\\
&\lesssim |x-z|^{N-2s+2\gamma} \int_{\Omega_3}|y-z|^{-(N-2s+\gamma)(p+1)}dy \\
&\lesssim |x-z|^{N-2s+2\gamma} \int_{\frac{1}{2}|x-z|}^{3\mathrm{diam}(\Omega)}t^{N-1-(N-2s+\gamma)(p+1)}dt. \end{aligned} \end{equation} Notice that for $a > 0$ and $b \in \mathbb{R}$ $$\lim_{ w \to 0^+} w^a \int_w^1 t^{b} dt = 0 \text{ if } a+ b +1 > 0.$$ We note that $(N-2s+2\gamma) + \left[ N-1-(N-2s+\gamma)(p+1)\right] + 1 > 0$ since $p < p^*$. Therefore from \eqref{eq:I3_1}, we conclude that \begin{equation} \label{limI3} \lim_{x \to z}I_3 = 0. \end{equation} Thus combining \eqref{limI1}, \eqref{limI2} and \eqref{limI3}, we obtain \eqref{GMp}. \end{proof}
We are ready to prove Theorem \ref{bdry-iso}.
\begin{proof}[{\sc Proof of Theorem \ref{bdry-iso}}.]
Let $\{z_n\}_{n \in \mathbb{N}} \subset \Omega$ be a sequence converging to $z \in \partial \Omega$. Put $\mu_n = \delta^{-\gamma} \delta_{z_n}$; then $\mu_n \in \mathcal{M}(\Omega,\delta^\gamma)$ and $\| \mu_n \|_{\mathcal{M}(\Omega,\delta^\gamma)}=1$. Since $p \in (1,p^*)$, for every $n \in \mathbb{N}$, by Theorem \ref{measuredata-sub}, there exists a unique positive function $u_n \in L^1(\Omega,\delta^{\gamma})$ satisfying \begin{equation} \label{un-repr} u_n + \mathbb{G}^\Omega[u_n^p] = \mathbb{G}^\Omega[\mu_n]. \end{equation}
Since $\| \mu_n \|_{\mathcal{M}(\Omega,\delta^\gamma)}=1$ and $\mathbb{G}^\Omega: \mathcal{M}(\Omega,\delta^\gamma) \to L^1(\Omega,\delta^\gamma)$ is compact (see Theorem \ref{theo:compactness}), we know that up to a subsequence $\{ \mathbb{G}^\Omega[\mu_n]\}_{n \in \mathbb{N}}$ converges in $L^1(\Omega,\delta^\gamma)$ and a.e. in $\Omega$ to a function $v_1 \in L^1(\Omega,\delta^{\gamma})$. This, combined with assumption \eqref{eq_G4}, yields $$ v_1(x) = \lim_{n \to \infty}\mathbb{G}^\Omega[\mu_n](x) = \lim_{n \to \infty}\frac{G^\Omega(x,z_n)}{\delta(z_n)^\gamma} = M^\Omega(x,z) \quad \text{for a.e. } x \in \Omega. $$
Next, we infer from \eqref{un-repr} that $$ 0 \le u_n \le \mathbb{G}^{\Omega}[\mu_n],\quad \forall n \in \mathbb{N}.$$ Hence for all $q \in (1, p^*)$, there holds
\begin{equation} \label{unLq} \| u_n \|_{L^q(\Omega,\delta^\gamma)} \leq \| \mathbb{G}^{\Omega}[\mu_n] \|_{L^q(\Omega,\delta^\gamma)} \le C(N,\Omega,s,\gamma,q),\quad \forall n \in \mathbb{N}. \end{equation} Taking \eqref{unLq} with $q=p$ and taking into account the fact that $\mathbb{G}^\Omega: L^1(\Omega,\delta^\gamma) \to L^1(\Omega,\delta^\gamma)$ is compact, we see that there exists a function $v_2 \in L^1(\Omega,\delta^\gamma)$ such that, up to a subsequence, $\{ \mathbb{G}^\Omega[u_n^p]\}_{n \in \mathbb{N}}$ converges to $v_2$ in $L^1(\Omega,\delta^\gamma)$ and a.e. in $\Omega$. Put $u=M^\Omega(\cdot,z) - v_2$; then, from \eqref{un-repr}, we deduce that $u_n \to u$ in $L^1(\Omega,\delta^\gamma)$ and a.e. in $\Omega$.
In light of \eqref{unLq} and H\"older inequality, $\{u_n^p\}_{n \in \mathbb{N}}$ is equi-integrable in $L^1(\Omega,\delta^\gamma)$. Moreover, $u_n^p \to u^p$ a.e. in $\Omega$. Therefore, by Vitali's convergence theorem, $u_n^p \to u^p$ in $L^1(\Omega,\delta^\gamma)$. Consequently, by Corollary \ref{cor:stability}, $\mathbb{G}^{\Omega}[u_n^p] \to \mathbb{G}^{\Omega}[u^p]$ in $L^1(\Omega,\delta^\gamma)$ and a.e. in $\Omega$. Gathering the above facts and letting $n \to \infty$ in \eqref{un-repr} yields \eqref{eq:boundary}. In addition, the uniqueness follows from Kato's inequality \eqref{eq:kato_abs}.
Next we prove \eqref{asymp}. From \eqref{eq:boundary}, we obtain $$ M^\Omega(x,z) - \mathbb{G}^{\Omega}[M^\Omega(\cdot,z)^p](x) \leq u(x) \leq M^\Omega(x,z), $$ which implies $$ 1 - \frac{\mathbb{G}^{\Omega}[M^\Omega(\cdot,z)^p](x)}{M^\Omega(x,z)} \leq \frac{u(x)}{M^\Omega(x,z)} \leq 1.$$ This and \eqref{GMp} imply \eqref{asymp}. The proof is complete. \end{proof}
\noindent \textbf{Acknowledgments.} The work of P.-T. Huynh was supported by the Austrian Science Fund FWF under the grant DOC 78. He gratefully thanks Prof. Jan Slov\'{a}k for the kind hospitality during his stay at Masaryk University. P.-T. Nguyen was supported by the Czech Science Foundation, Project GA22-17403S. Part of the work was carried out during the visit of P.-T. Nguyen at the University of Klagenfurt. P.-T. Nguyen gratefully acknowledges the University of Klagenfurt for the hospitality.
\end{document}
\begin{document}
\title{A Linear-logical Reconstruction of Intuitionistic Modal Logic S4}
\begin{abstract} We propose a \emph{modal linear logic} to reformulate intuitionistic modal logic S4\,($\mathrm{IS4}$) in terms of linear logic, establishing an S4-version of Girard translation from $\mathrm{IS4}$ to it. While the Girard translation from intuitionistic logic to linear logic is well-known, its extension to modal logic is non-trivial since a naive combination of the S4 modality and the exponential modality causes an undesirable interaction between the two modalities. To solve the problem, we introduce an extension of intuitionistic multiplicative exponential linear logic with a modality combining the S4 modality and the exponential modality, and show that it admits a sound translation from $\mathrm{IS4}$. Through the Curry--Howard correspondence we further obtain a Geometry of Interaction Machine semantics of the modal $\lambda$-calculus by Pfenning and Davies for staged computation. \end{abstract}
\section{Introduction} Linear logic, discovered by Girard\,\cite{G:linear_logic}, is, as he wrote, not an alternative logic but should rather be regarded as an ``extension'' of usual logics. Whereas usual logics such as classical logic and intuitionistic logic admit the structural rules of weakening and contraction, linear logic does not allow these rules to be used freely, but reintroduces them in a controlled manner by using the exponential modality `$!$' (and its dual `$?$'). Usual logics are then reconstructed in terms of linear logic with the power of the exponential modalities, via the Girard translation.
In this paper, we aim to extend the framework of linear-logical reconstruction to the $(\Box, \supset)$-fragment of intuitionistic modal logic S4\,($\mathrm{IS4}$) by establishing what we call ``modal linear logic'' and an S4-version of Girard translation from $\mathrm{IS4}$ into it. However, the crux in giving a faithful translation is that a naive combination of the $\Box$-modality and the $!$-modality causes an undesirable interaction between the inference rules of the two modalities. To solve the problem, we define the modal linear logic as an extension of intuitionistic multiplicative exponential linear logic with a modality `$\bangbox$~'\,(pronounced ``bangbox'') that integrates `$\Box$' and `$!$', and show that it admits a faithful translation from $\mathrm{IS4}$.
As an application, we consider a computational interpretation of the modal linear logic. The typed $\lambda$-calculus that we will define corresponds to a natural deduction system for the modal linear logic through the Curry--Howard correspondence, and it can be seen as a reconstruction of the modal $\lambda$-calculus by Pfenning and Davies\,\cite{PD:judgmental_reconstruction, DP:modal_analysis} for so-called staged computation. Thanks to our linear-logical reconstruction, we can further obtain a Geometry of Interaction Machine\,(GoIM) for the modal $\lambda$-calculus.
The remainder of this paper is organized as follows. In Section~\ref{sec:background} we review some formalizations of linear logic and $\mathrm{IS4}$. In Section~\ref{sec:linear_logical_reconstruction} we explain a linear-logical reconstruction of $\mathrm{IS4}$. First, we discuss how a naive combination of linear logic and modal logic fails to yield a faithful translation. Then, we propose a modal linear logic with the $\bangbox$~-modality that admits a faithful translation from $\mathrm{IS4}$. In Section~\ref{sec:curry-howard} we give a computational interpretation of modal linear logic through a typed $\lambda$-calculus. In Section~\ref{sec:axiomatization} we provide an axiomatization of modal linear logic by a Hilbert-style deductive system. In Section~\ref{sec:goim} we obtain a GoIM of our typed $\lambda$-calculus as an application of our linear-logical reconstruction. In Sections~\ref{sec:related_work} and \ref{sec:conclusion} we discuss related work and conclude our work, respectively.
\section{Preliminaries} \label{sec:background} We recall several systems of linear logic and modal logic. In this paper, we consider the minimal setting to give an S4-version of Girard translation and its computational interpretation. Thus, every system we will use contains only an implication and a modality as operators.
\subsection{Intuitionistic MELL and its Girard translation}
\begin{rulefigure}{fig:imell}{Definition of $\mathrm{IMELL}$.}
\hspace{-1em}
\begin{tabular}{c|c}
{
\hspace{-1em}
\begin{minipage}{0.3\hsize}
\paragraph*{Syntactic category}
\begin{align*}
\hspace{-1em}\text{Formulae}~~A, B, C ::= p ~|~ \ottnt{A} \multimap \ottnt{B} ~|~ \ottsym{!} \ottnt{A}
\end{align*}
\end{minipage}
}
&
{
\hspace{-1em}
\begin{minipage}{0.5\hsize}
\paragraph*{Inference rule}
\begin{tabular}{c}
\begin{minipage}{0.18\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$ $}
\RightLabel{$ \mathrm{Ax} $}
\UIC{$\ottnt{A} \, \vdash \, \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.63\hsize}
\begin{prooftree}
\def2pt{2pt}
\def\hskip 0.05in{\hskip 0.5em}
\AXC{$\Gamma \, \vdash \, \ottnt{A}$}
\AXC{$\Gamma' \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \mathrm{Cut} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.18\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\ottsym{!} \Gamma \, \vdash \, \ottnt{A}$}
\RightLabel{$ !\mathrm{R} $}
\UIC{$\ottsym{!} \Gamma \, \vdash \, \ottsym{!} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\end{minipage}
}
\end{tabular}
\hspace{-0.75em}\rule{0.445\textwidth}{0.4pt}
\hspace{-1em}
\begin{tabular}{c}
{
\hspace{-1em}
\begin{minipage}{\hsize}
\begin{center}
\begin{tabular}{c}
\begin{minipage}{0.195\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \slimp\!\mathrm{R} $}
\UIC{$\Gamma \, \vdash \, \ottnt{A} \multimap \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.275\hsize}
\begin{prooftree}
\def\hskip 0.05in{\hskip 0.5em}
\def2pt{2pt}
\AXC{$\Gamma \, \vdash \, \ottnt{A}$}
\AXC{$\Gamma' \ottsym{,} \ottnt{B} \, \vdash \, \ottnt{C}$}
\RightLabel{$ \slimp\!\mathrm{L} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \ottsym{,} \ottnt{A} \multimap \ottnt{B} \, \vdash \, \ottnt{C}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.15\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{L} $}
\UIC{$\Gamma \ottsym{,} \ottsym{!} \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.16\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{W} $}
\UIC{$\Gamma \ottsym{,} \ottsym{!} \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.16\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \ottsym{,} \ottsym{!} \ottnt{A} \ottsym{,} \ottsym{!} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{C} $}
\UIC{$\Gamma \ottsym{,} \ottsym{!} \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\end{center}
\end{minipage}
}
\end{tabular}
\end{rulefigure}
\begin{rulefigure}{fig:girard_translation}{Definition of the Girard translation from intuitionistic logic.}
\begin{tabular}{c|c}
{
\begin{minipage}{0.60\hsize}
\underline{$ \fgirardtrans{ \ottnt{A} } $}
\begin{align*}
\hspace{1.5em} \fgirardtrans{ \ottmv{p} } \defeq p, \hspace{3.5em} \fgirardtrans{ \ottnt{A} \supset \ottnt{B} } \defeq ( \ottsym{!} \fgirardtrans{ \ottnt{A} } ) \multimap \fgirardtrans{ \ottnt{B} }
\end{align*}
\end{minipage}
}
&
{
\begin{minipage}{0.30\hsize}
\underline{$ \fgirardtrans{ \Gamma } $}
\begin{align*}
\hspace{1.5em} \fgirardtrans{ \Gamma } &\defeq \{ \fgirardtrans{ \ottnt{A} } ~|~ A \in \Gamma \}
\end{align*}
\end{minipage}
}
\end{tabular}
\end{rulefigure}
\noindent Figure~\ref{fig:imell} shows the standard definition of the ($!$, $\multimap$)-fragment of \emph{intuitionistic multiplicative exponential linear logic}, which we refer to as $\mathrm{IMELL}$. A formula is either a propositional variable, a linear implication, or an exponential modality. We let $p$ range over the set of propositional variables, and $A$, $B$, $C$ range over formulae. A \emph{context} $\Gamma$ is defined to be a multiset of formulae, and hence the exchange rule is assumed as a meta-level operation. A \emph{judgment} consists of a context and a formula, written as $\Gamma \, \vdash \, \ottnt{A}$. As a convention, we often write $\Gamma \, \vdash \, \ottnt{A}$ to mean that the judgment is derivable (and we assume similar conventions throughout this paper). The notation $\ottsym{!} \Gamma$ in the rule $!\mathrm{R}$ denotes the multiset $\{ !A ~|~ A \in \Gamma \}$.
Figure~\ref{fig:girard_translation} defines the Girard translation\footnote{This is known to be the \emph{call-by-name} Girard translation (cf.\,\cite{M+:cbn_cbv}) and we only follow this version in later discussions. However, we conjecture that our work also applies to other versions.} from the $\supset$-fragment of intuitionistic propositional logic $\mathrm{IL}$. For an $\mathrm{IL}$-formula $A$, $ \fgirardtrans{ \ottnt{A} } $ will be an $\mathrm{IMELL}$-formula; and $ \fgirardtrans{ \Gamma } $ a multiset of $\mathrm{IMELL}$-formulae. Then, we can show that the Girard translation from $\mathrm{IL}$ to $\mathrm{IMELL}$ is sound.
\begin{theorem}[Soundness of the translation]
\label{thm:girard_translation}
If $\Gamma \, \vdash \, \ottnt{A}$ in $\mathrm{IL}$, then $\ottsym{!} \fgirardtrans{ \Gamma } \, \vdash \, \fgirardtrans{ \ottnt{A} } $ in $\mathrm{IMELL}$. \end{theorem}
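To illustrate the translation, the derivable $\mathrm{IL}$-judgment $\ottmv{p} \supset \ottmv{q} \ottsym{,} \ottmv{p} \, \vdash \, \ottmv{q}$ is sent to $\ottsym{!} \ottsym{(} \ottsym{!} \ottmv{p} \multimap \ottmv{q} \ottsym{)} \ottsym{,} \ottsym{!} \ottmv{p} \, \vdash \, \ottmv{q}$, which is indeed derivable in $\mathrm{IMELL}$: apply $\slimp\!\mathrm{L}$ to the axioms $\ottsym{!} \ottmv{p} \, \vdash \, \ottsym{!} \ottmv{p}$ and $\ottmv{q} \, \vdash \, \ottmv{q}$, and then $!\mathrm{L}$ to the resulting judgment $\ottsym{!} \ottmv{p} \ottsym{,} \ottsym{!} \ottmv{p} \multimap \ottmv{q} \, \vdash \, \ottmv{q}$.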
\subsection{Intuitionistic S4}
We review a formalization of the $(\Box, \supset)$-fragment of intuitionistic propositional modal logic S4\,($\mathrm{IS4}$). In what follows, we use a sequent calculus for the logic, called $\mathrm{LJ}^{\Box}$. The calculus $\mathrm{LJ}^{\Box}$ used here is defined in a standard manner in the literature\,(e.g. it can be seen as the $\mathrm{IS4}$-fragment of \textbf{G1s} for classical modal logic S4 by Troelstra and Schwichtenberg\,\cite{TS:basic_proof_theory}).
Figure~\ref{fig:ljbox} shows the definition of $\mathrm{LJ}^{\Box}$. A formula is either a propositional variable, an intuitionistic implication, or a box modality. A \emph{context} and a \emph{judgment} are defined similarly to those in $\mathrm{IMELL}$.
The notation $ \Box \Gamma $ in the rule $\Box\mathrm{R}$ denotes the multiset $\{ \Box \ottnt{A} ~|~ A \in \Gamma \}$.
\begin{rulefigure}{fig:ljbox}{Definition of $\mathrm{LJ}^{\Box}$.}
\hspace{-1em}
\begin{tabular}{c|c}
{
\hspace{-1em}
\begin{minipage}{0.28\hsize}
\paragraph*{Syntactic category}
\begin{align*}
\hspace{-1em}\text{Formulae}~~A, B, C ::= p ~|~ \ottnt{A} \supset \ottnt{B} ~|~ \Box \ottnt{A}
\end{align*}
\end{minipage}
}
&
{
\hspace{-1em}
\begin{minipage}{0.50\hsize}
\paragraph*{Inference rule}
\begin{tabular}{c}
\begin{minipage}{0.165\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$ $}
\RightLabel{$ \mathrm{Ax} $}
\UIC{$\ottnt{A} \, \vdash \, \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.598\hsize}
\begin{prooftree}
\def2pt{2pt}
\def\hskip 0.05in{\hskip 0.5em}
\AXC{$\Gamma \, \vdash \, \ottnt{A}$}
\AXC{$\Gamma' \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \mathrm{Cut} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.17\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$ \Box \Gamma \, \vdash \, \ottnt{A}$}
\RightLabel{$ \Box\mathrm{R} $}
\UIC{$ \Box \Gamma \, \vdash \, \Box \ottnt{A} $}
\end{prooftree}
\end{minipage}
\end{tabular}
\end{minipage}
}
\end{tabular}
\hspace{-0.75em}\rule{0.445\textwidth}{0.4pt}
\hspace{-1em}
\begin{tabular}{c}
{
\hspace{-1em}
\begin{minipage}{\hsize}
\begin{center}
\begin{tabular}{c}
\begin{minipage}{0.18\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \sarrow\!\mathrm{R} $}
\UIC{$\Gamma \, \vdash \, \ottnt{A} \supset \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.30\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \, \vdash \, \ottnt{A}$}
\AXC{$\Gamma' \ottsym{,} \ottnt{B} \, \vdash \, \ottnt{C}$}
\RightLabel{$ \sarrow\!\mathrm{L} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \ottsym{,} \ottnt{A} \supset \ottnt{B} \, \vdash \, \ottnt{C}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.165\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \Box\mathrm{L} $}
\UIC{$\Gamma \ottsym{,} \Box \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.16\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \, \vdash \, \ottnt{B}$}
\RightLabel{$ \mathrm{W} $}
\UIC{$\Gamma \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.16\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \ottsym{,} \ottnt{A} \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \mathrm{C} $}
\UIC{$\Gamma \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\end{center}
\end{minipage}
}
\end{tabular}
\end{rulefigure}
\begin{remark}
It is worth noting that
the $!$-exponential in $\mathrm{IMELL}$ and the $\Box$-modality in $\mathrm{LJ}^{\Box}$ have similar structures.
To see this, imagine the rules $\Box\mathrm{R}$ and $\Box\mathrm{L}$ with the symbol `$\Box$' replaced by `$!$'. The results are exactly the same as $!\mathrm{R}$ and $!\mathrm{L}$.
In fact, the $!$-exponential satisfies the S4 axiomata in $\mathrm{IMELL}$, which is the reason we also call it a modality. \end{remark}
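For instance, the axiom \textit{4} for `$!$' is derived in $\mathrm{IMELL}$ as follows: starting from the axiom $\ottsym{!} \ottnt{A} \, \vdash \, \ottsym{!} \ottnt{A}$, the rule $!\mathrm{R}$ applies since the antecedent is of the form $\ottsym{!} \Gamma$ and yields $\ottsym{!} \ottnt{A} \, \vdash \, \ottsym{!} \ottsym{!} \ottnt{A}$, and $\slimp\!\mathrm{R}$ gives $\vdash \, \ottsym{!} \ottnt{A} \multimap \ottsym{!} \ottsym{!} \ottnt{A}$; the axioms \textit{T} and \textit{K} are derived similarly.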
\subsection{Typed $\lambda$-calculus of the intuitionistic S4} We review the modal $\lambda$-calculus developed by Pfenning and Davies\,\cite{PD:judgmental_reconstruction, DP:modal_analysis}, which we call $\lambda^{\Box}$. The system $\lambda^{\Box}$ is essentially the same calculus as $\lambda^{\rightarrow\Box}_{e}$ in \cite{DP:modal_analysis}, although some of the syntax is changed to fit our notation in this paper. $\lambda^{\Box}$ is known to correspond to a natural deduction system for $\mathrm{IS4}$, as is shown in \cite{PD:judgmental_reconstruction}.
\begin{rulefigure}{fig:lambdabox}{Definition of $\lambda^{\Box}$.}
\hspace{-1em}
\begin{tabular}{c|c}
{
\hspace{-1em}
\begin{minipage}{0.50\hsize}
\paragraph*{Syntactic category}
\begin{alignat*}{3}
&\hspace{-1em}\text{Types}~&A, B, C &::= p ~|~ \ottnt{A} \supset \ottnt{B} ~|~ \Box \ottnt{A} \\
&\hspace{-1em}\text{Terms}~&M, N, L &::= x ~|~ \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} ~|~ \ottnt{M} \, \ottnt{N} \\
&&&\,|~ \Box \ottnt{M} ~|~ \ottkw{let} \, \Box \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N}
\end{alignat*}
\end{minipage}
}
&
{
\hspace{-1em}
\begin{minipage}{0.45\hsize}
\paragraph*{Reduction rule}
\begin{alignat*}{3}
&\hspace{-1em}\betarule{\sarrow}\hspace{0.5em}& & \ottsym{(} \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} \ottsym{)} \, \ottnt{N} & & \leadsto \ottnt{M} [ \ottmv{x} := \ottnt{N} ] \\
&\hspace{-1em}\betarule{\Box}\hspace{0.5em}& &\ottkw{let} \, \Box \ottmv{x} \ottsym{=} \Box \ottnt{N} \, \ottkw{in} \, \ottnt{M}& & \leadsto \ottnt{M} [ \ottmv{x} := \ottnt{N} ]
\end{alignat*}
\end{minipage}
}
\end{tabular}
\rule{\textwidth}{0.4pt}
\hspace{-1em}
\begin{tabular}{c}
{
\hspace{-1em}
\begin{minipage}{0.9\hsize}
\paragraph*{Typing rule}
\begin{center}
\begin{tabular}{c}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$ $}
\RightLabel{$ \mathrm{Ax} $}
\UIC{$\Delta \ottsym{;} \Gamma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \, \vdash \, \ottmv{x} \ottsym{:} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$ $}
\RightLabel{$ \Box\mathrm{Ax} $}
\UIC{$\Delta \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Gamma \, \vdash \, \ottmv{x} \ottsym{:} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\begin{tabular}{c}
\begin{minipage}{0.42\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{B}$}
\RightLabel{$ \sarrow\!\mathrm{I} $}
\UIC{$\Delta \ottsym{;} \Gamma \, \vdash \, \ottsym{(} \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} \ottsym{)} \ottsym{:} \ottnt{A} \supset \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.52\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \Gamma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A} \supset \ottnt{B}$}
\AXC{$\Delta \ottsym{;} \Gamma \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{A}$}
\RightLabel{$ \sarrow\!\mathrm{E} $}
\BIC{$\Delta \ottsym{;} \Gamma \, \vdash \, \ottnt{M} \, \ottnt{N} \ottsym{:} \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\begin{tabular}{c}
\begin{minipage}{0.38\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \emptyset \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$}
\RightLabel{$ \Box\mathrm{I} $}
\UIC{$\Delta \ottsym{;} \Gamma \, \vdash \, \Box \ottnt{M} \ottsym{:} \Box \ottnt{A} $}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.55\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \Gamma \, \vdash \, \ottnt{M} \ottsym{:} \Box \ottnt{A} $}
\AXC{$\Delta \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Gamma \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{B}$}
\RightLabel{$ \Box\mathrm{E} $}
\BIC{$\Delta \ottsym{;} \Gamma \, \vdash \, \ottkw{let} \, \Box \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} \ottsym{:} \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\end{center}
\end{minipage}
}
\end{tabular}
\end{rulefigure}
Figure~\ref{fig:lambdabox} shows the definition of $\lambda^{\Box}$. The set of types corresponds to that of formulae of $\mathrm{IS4}$. We let $x$ range over the set of term variables, and $M, N, L$ range over the set of terms. The first three terms are as in the simply-typed $\lambda$-calculus. The terms $ \Box \ottnt{M} $ and $\ottkw{let} \, \Box \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N}$ are used to represent a constructor and a destructor for types $ \Box \ottnt{A} $, respectively. The variable $x$ in $\lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M}$ and $\ottkw{let} \, \Box \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N}$ is supposed to be \emph{bound} in the usual sense and the \emph{scope} of the binding is $M$ and $N$, respectively. The set of \emph{free} (i.e., unbound) variables in $M$ is denoted by $\fv{M}$. We write the \emph{capture-avoiding substitution} $ \ottnt{M} [ \ottmv{x} := \ottnt{N} ] $ to denote the result of replacing every free occurrence of $x$ in $M$ with $N$.
A \emph{(type) context} is defined to be the set of pairs of a term variable $x_i$ and a type $A_i$ such that all the variables are distinct, which is written as $x_1 : A_1, \cdots, x_n : A_n$ and is denoted by $\Gamma$, $\Delta$, $\Sigma$, etc. Then, a \emph{(type) judgment} is defined, in the so-called \emph{dual-context} style, to consist of two contexts, a term, and a type, written as $\Delta \ottsym{;} \Gamma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$.
The intuition behind the judgment $\Delta \ottsym{;} \Gamma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$ is that the context $\Delta$ is intended to implicitly represent assumptions for types of form $ \Box \ottnt{A} $, while the context $\Gamma$ is used to represent ordinary assumptions as in the simply-typed $\lambda$-calculus.
The typing rules are summarized as follows. $\mathrm{Ax}$, $\sarrow\!\mathrm{I}$, and $\sarrow\!\mathrm{E}$ are all standard, although they are defined in the dual-context style. $\Box\mathrm{Ax}$ is another variable rule, which can be seen as a formalization of the modal axiom \textit{T} (i.e., $\vdash \, \Box \ottnt{A} \supset \ottnt{A} $) from the logical viewpoint. $\Box\mathrm{I}$ is a rule for the constructor of $ \Box \ottnt{A} $, which corresponds to the necessitation rule for the $\Box$-modality. Similarly, $\Box\mathrm{E}$ is for the destructor of $ \Box \ottnt{A} $, which corresponds to the elimination rule.
The \emph{reduction} $ \leadsto $ is defined to be the least compatible relation on terms generated by $\betarule{\sarrow}$ and $\betarule{\Box}$. The \emph{multistep reduction} $ \leadsto^+ $ is defined to be the transitive closure of $ \leadsto $.
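For illustration, the closed term $\lambda \ottmv{x} \ottsym{:} \Box \ottnt{A} \ottsym{.} \ottkw{let} \, \Box \ottmv{y} \ottsym{=} \ottmv{x} \, \ottkw{in} \, \Box \Box \ottmv{y}$ has type $ \Box \ottnt{A} \supset \Box \Box \ottnt{A} $, the modal axiom \textit{4}: the body $ \Box \Box \ottmv{y} $ is typed by two applications of $\Box\mathrm{I}$ from the $\Box\mathrm{Ax}$ instance $\ottmv{y} \ottsym{:} \ottnt{A} \ottsym{;} \emptyset \, \vdash \, \ottmv{y} \ottsym{:} \ottnt{A}$, and $\Box\mathrm{E}$ and $\sarrow\!\mathrm{I}$ conclude. Moreover, $\ottsym{(} \lambda \ottmv{x} \ottsym{:} \Box \ottnt{A} \ottsym{.} \ottkw{let} \, \Box \ottmv{y} \ottsym{=} \ottmv{x} \, \ottkw{in} \, \Box \Box \ottmv{y} \ottsym{)} \, \ottsym{(} \Box \ottnt{M} \ottsym{)} \leadsto^+ \Box \Box \ottnt{M} $ by $\betarule{\sarrow}$ followed by $\betarule{\Box}$.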
\section{Linear-logical reconstruction} \label{sec:linear_logical_reconstruction}
\subsection{Naive attempt at the linear-logical reconstruction}
It is natural for a ``linear-logical reconstruction'' of $\mathrm{IS4}$ to define a system that has the properties of both linear logic and modal logic, so as to serve as a target system for an S4-version of Girard translation. However, a naive combination of linear logic and modal logic is not suitable for establishing a faithful translation.
Let us consider what happens if we adopt a naive system. The simplest way to define a target system for the S4-version of Girard translation is to make an extension of $\mathrm{IMELL}$ with the $\Box$-modality. Suppose that a deductive system $\mathrm{IMELL}^{\Box}$ is such a calculus, that is, the formulae of $\mathrm{IMELL}^{\Box}$ are defined by the following grammar:
\begin{align*}
A, B &::= p ~|~ \ottnt{A} \multimap \ottnt{B} ~|~ \ottsym{!} \ottnt{A} ~|~ \Box \ottnt{A}
\end{align*} with the inference rules being those of $\mathrm{IMELL}$, along with the rules $\Box\mathrm{R}$ and $\Box\mathrm{L}$ of $\mathrm{LJ}^{\Box}$.
As in the case of Girard translation from $\mathrm{IL}$ to $\mathrm{IMELL}$, we have to establish the following theorem for some translation $\fgirardtrans{-}$:
\begin{center}
If $\Gamma \, \vdash \, \ottnt{A}$ is derivable in $\mathrm{LJ}^{\Box}$, then so is $\ottsym{!} \fgirardtrans{ \Gamma } \, \vdash \, \fgirardtrans{ \ottnt{A} } $ in $\mathrm{IMELL}^{\Box}$. \end{center}
\noindent but, if we extend our previous translation $\fgirardtrans{-}$ from $\mathrm{IL}$ to $\mathrm{IMELL}$ with $ \fgirardtrans{ \Box \ottnt{A} } \defeq \Box \fgirardtrans{ \ottnt{A} } $, we get stuck in the case of $\Box\mathrm{R}$. This is because we need to establish the inference $\Box'$ in Figure~\ref{fig:bad_translation}, which means that we have to be able to obtain a derivation of form $\ottsym{!} \fgirardtrans{ \Box \Gamma } \, \vdash \, \Box \fgirardtrans{ \ottnt{A} } $ from that of $\ottsym{!} \fgirardtrans{ \Box \Gamma } \, \vdash \, \fgirardtrans{ \ottnt{A} } $ in $\mathrm{IMELL}^{\Box}$.
\begin{figure}
\caption{Translation for the case of $\Box\mathrm{R}$.}
\label{fig:bad_translation}
\caption{ Valid inference in $\mathrm{LJ}^{\Box}$. }
\label{fig:valid_inference}
\caption{ Invalid inference in $\mathrm{IMELL}^{\Box}$. }
\label{fig:invalid_inference}
\end{figure}
However, the inference $\Box'$ is invalid in $\mathrm{IMELL}^{\Box}$ in general, because there exists a counterexample. First, the inference shown in Figure~\ref{fig:valid_inference} is valid, and the judgment $ \Box \ottsym{(} \ottmv{p} \supset \ottmv{q} \ottsym{)} \ottsym{,} \Box \ottmv{p} \, \vdash \, \Box \ottmv{q} $ is indeed derivable in $\mathrm{LJ}^{\Box}$. However, the corresponding inference via $\fgirardtrans{-}$ is invalid as Figure~\ref{fig:invalid_inference} shows. In the figure, the judgments correspond to those in Figure~\ref{fig:valid_inference} via $\fgirardtrans{ - }$, but the inference $\Box\mathrm{R}$ in Figure~\ref{fig:invalid_inference} is invalid in $\mathrm{IMELL}^{\Box}$ due to the side-condition of $\Box\mathrm{R}$. Even worse, we can see that the judgment $\ottsym{!} \Box \ottsym{(} \ottsym{!} \ottmv{p} \multimap \ottmv{q} \ottsym{)} \ottsym{,} \ottsym{!} \Box \ottmv{p} \, \vdash \, \Box \ottmv{q} $ is itself underivable in $\mathrm{IMELL}^{\Box}$\footnote{Precisely speaking, this can be shown as a consequence of the cut-elimination theorem of $\mathrm{IMELL}^{\Box}$, and the theorem was shown in the authors' previous work\,\cite{FY:higher-arity}.}.
Moreover, one may think that other choices, namely extending the original translation $\fgirardtrans{-}$ from $\mathrm{IL}$ to $\mathrm{IMELL}$ with $ \fgirardtrans{ \Box \ottnt{A} } \defeq ~\ottsym{!} \Box \fgirardtrans{ \ottnt{A} } $ or with $ \fgirardtrans{ \Box \ottnt{A} } \defeq \Box \ottsym{!} \fgirardtrans{ \ottnt{A} } $, would yield a faithful translation. However, the judgment $ \Box \ottmv{p} \, \vdash \, \Box \Box \ottmv{p} $, which is derivable in $\mathrm{LJ}^{\Box}$ by applying $\Box\mathrm{R}$ to the axiom $ \Box \ottmv{p} \, \vdash \, \Box \ottmv{p} $, is a counterexample in either case.
All in all, the problem with the naive combination formulated as $\mathrm{IMELL}^{\Box}$ intuitively stems from an undesirable interaction between the right rules of the two modalities:
\begin{center} \begin{tabular}{c}
\begin{minipage}{0.30\hsize}
\begin{prooftree}
\AXC{$\ottsym{!} \Gamma \, \vdash \, \ottnt{A}$}
\RightLabel{$ !\mathrm{R} $}
\UIC{$\ottsym{!} \Gamma \, \vdash \, \ottsym{!} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.30\hsize}
\begin{prooftree}
\AXC{$ \Box \Gamma \, \vdash \, \ottnt{A}$}
\RightLabel{$ \Box\mathrm{R} $}
\UIC{$ \Box \Gamma \, \vdash \, \Box \ottnt{A} $}
\end{prooftree}
\end{minipage} \end{tabular} \end{center}
\noindent Each of these rules has a side-condition: the conclusion $!A$ in $!\mathrm{R}$ must be derived from the modalized context $\ottsym{!} \Gamma$, and similarly for $ \Box \ottnt{A} $ in $\Box\mathrm{R}$. This makes it hard to obtain a faithful S4-version of Girard translation for this naive extension.
\subsection{Modal linear logic}
We propose a \emph{modal linear logic} to give a faithful S4-version of Girard translation from $\mathrm{IS4}$.
First of all, the problem we have identified essentially comes from the fact that there is no relationship between `$!$' and `$\Box$', and hence the side-conditions of $!\mathrm{R}$ and $\Box\mathrm{R}$ do not hold when we intuitively expect them to hold. Thus, we introduce a modality `$\bangbox$~', combining `$!$' and `$\Box$', to solve this problem.
\begin{rulefigure}{fig:imellbangbox}{Definition of $\imell^{\smallbangbox}\,$.}
\hspace{-1em}
\begin{tabular}{c|c}
{
\hspace{-1em}
\begin{minipage}{0.5\hsize}
\paragraph*{Syntactic category}
\begin{align*}
\text{Formulae}~~A, B, C ::= p ~|~ \ottnt{A} \multimap \ottnt{B} ~|~ \ottsym{!} \ottnt{A} ~|~ \bangbox \ottnt{A}
\end{align*}
\end{minipage}
}
&
{
\hspace{-1em}
\begin{minipage}{0.5\hsize}
\paragraph*{Inference rule}
\begin{tabular}{c}
\begin{minipage}{0.30\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$ $}
\RightLabel{$ \mathrm{Ax} $}
\UIC{$\ottnt{A} \, \vdash \, \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.60\hsize}
\begin{prooftree}
\def2pt{2pt}
\def\hskip 0.05in{\hskip 0.5em}
\AXC{$\Gamma \, \vdash \, \ottnt{A}$}
\AXC{$\ottnt{A} \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\RightLabel{$ \mathrm{Cut} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\end{minipage}
}
\end{tabular}
\hspace{-0.75em}\rule{0.53\textwidth}{0.4pt}
\hspace{-1em}
\begin{tabular}{c}
{
\hspace{-1em}
\begin{minipage}{\hsize}
\begin{tabular}{c}
\begin{minipage}{0.195\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \slimp\!\mathrm{R} $}
\UIC{$\Gamma \, \vdash \, \ottnt{A} \multimap \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.27\hsize}
\begin{prooftree}
\def2pt{2pt}
\def\hskip 0.05in{\hskip 0.5em}
\AXC{$\Gamma \, \vdash \, \ottnt{A}$}
\AXC{$\Gamma' \ottsym{,} \ottnt{B} \, \vdash \, \ottnt{C}$}
\RightLabel{$ \slimp\!\mathrm{L} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \ottsym{,} \ottnt{A} \multimap \ottnt{B} \, \vdash \, \ottnt{C}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.18\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$ \bangbox \Delta \ottsym{,} \ottsym{!} \Gamma \, \vdash \, \ottnt{A}$}
\RightLabel{$ !\mathrm{R} $}
\UIC{$ \bangbox \Delta \ottsym{,} \ottsym{!} \Gamma \, \vdash \, \ottsym{!} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.16\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{L} $}
\UIC{$\Gamma \ottsym{,} \ottsym{!} \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.16\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$ \bangbox \Delta \, \vdash \, \ottnt{A}$}
\RightLabel{$ \bangbox\mathrm{R} $}
\UIC{$ \bangbox \Delta \, \vdash \, \bangbox \ottnt{A} $}
\end{prooftree}
\end{minipage}
\end{tabular}
\begin{tabular}{c}
\begin{minipage}{0.175\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \bangbox\mathrm{L} $}
\UIC{$\Gamma \ottsym{,} \bangbox \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.178\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{W} $}
\UIC{$\Gamma \ottsym{,} \ottsym{!} \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.20\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \ottsym{,} \ottsym{!} \ottnt{A} \ottsym{,} \ottsym{!} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{C} $}
\UIC{$\Gamma \ottsym{,} \ottsym{!} \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.185\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \, \vdash \, \ottnt{B}$}
\RightLabel{$ \bangbox\mathrm{W} $}
\UIC{$\Gamma \ottsym{,} \bangbox \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.20\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\Gamma \ottsym{,} \bangbox \ottnt{A} \ottsym{,} \bangbox \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \bangbox\mathrm{C} $}
\UIC{$\Gamma \ottsym{,} \bangbox \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\end{minipage}
}
\end{tabular}
\end{rulefigure}
Our modal linear logic, called $\imell^{\smallbangbox}\,$, is defined by the sequent calculus given in Figure~\ref{fig:imellbangbox}. As we mentioned, the formulae are defined as an extension of those of $\mathrm{IMELL}$ with the $\bangbox$~-modality. A key point is that the $!$-modality remains present alongside the $\bangbox~$-modality.
The $\bangbox~$-modality is defined so as to have the properties of both `$!$' and `$\Box$', but `$!$' still behaves as it does in $\mathrm{IMELL}$. Therefore, the intuitions behind all the inference rules except $!\mathrm{R}$ and $\bangbox\mathrm{R}$ should be clear. The rules $!\mathrm{R}$ and $\bangbox\mathrm{R}$ reflect the relative ``strength'' of the modalities `$!$' and `$\bangbox$~'. Indeed, `$!$' and `$\bangbox~$' satisfy the S4 axiomata and `$\bangbox~$' is stronger than `$!$'.
\begin{example}
\label{ex:axiomata}
The following hold:
\begin{enumerate}
\item $\vdash \, \ottsym{!} \ottnt{A} \multimap \ottnt{A}$ and $\vdash \, \bangbox \ottnt{A} \multimap \ottnt{A} $
\item $\vdash \, \ottsym{!} ( \ottnt{A} \multimap \ottnt{B} ) \multimap \ottsym{!} \ottnt{A} \multimap \ottsym{!} \ottnt{B}$ and $\vdash \, \bangbox ( \ottnt{A} \multimap \ottnt{B} ) \multimap \bangbox \ottnt{A} \multimap \bangbox \ottnt{B} $
\item $\vdash \, \ottsym{!} \ottnt{A} \multimap \ottsym{!} \ottsym{!} \ottnt{A}$ and $\vdash \, \bangbox \ottnt{A} \multimap \bangbox\,\,\bangbox \ottnt{A} $
\item $\vdash \, \bangbox \ottnt{A} \multimap \ottsym{!} \ottnt{A} $ but $\nvdash \, \ottsym{!} \ottnt{A} \multimap \bangbox \ottnt{A} $
\end{enumerate} \end{example}
\begin{remark}
In Example~\ref{ex:axiomata}, the first three represent the so-called S4 axiomata: \textit{T}, \textit{K}, and \textit{4}.
The last one represents the strength of the two modalities.
Actually, assuming the $!$-modality and the $\bangbox~$-modality to satisfy the S4 axiomata and the ``strength'' axiom $\vdash \, \bangbox \ottnt{A} \multimap \ottsym{!} \ottnt{A} $
is enough to characterize our modal linear logic (see Section~\ref{sec:axiomatization} for more details). \end{remark}
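As an illustration of the last item of Example~\ref{ex:axiomata}, from the axiom $\ottnt{A} \, \vdash \, \ottnt{A}$ the rule $\bangbox\mathrm{L}$ gives $ \bangbox \ottnt{A} \, \vdash \, \ottnt{A}$; since the antecedent consists of a single $\bangbox$~-formula, the side-condition of $!\mathrm{R}$ is met and we obtain $ \bangbox \ottnt{A} \, \vdash \, \ottsym{!} \ottnt{A}$, whence $\vdash \, \bangbox \ottnt{A} \multimap \ottsym{!} \ottnt{A} $ by $\slimp\!\mathrm{R}$. The converse direction fails, intuitively, because $\bangbox\mathrm{R}$, the only rule introducing `$\bangbox$~' on the right, requires every formula in the antecedent to be prefixed by `$\bangbox$~', a requirement that the antecedent $\ottsym{!} \ottnt{A}$ does not meet.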
The cut-elimination theorem for $\imell^{\smallbangbox}\,$ is shown similarly to the case of $\mathrm{IMELL}$, and hence $\imell^{\smallbangbox}\,$ is consistent. The addition of `$\bangbox~$' causes no problems in the proof.
\begin{definition}[Cut-degree and degree]
For an application of $\mathrm{Cut}$ in a proof, its \emph{cut-degree}
is defined to be the number of logical connectives in the cut-formula.
The \emph{degree} of a proof is defined to be the maximal cut-degree of the proof\,(and $0$ if there is no application of $\mathrm{Cut}$). \end{definition}
\begin{theorem}[Cut-elimination]
The rule $\mathrm{Cut}$ in $\imell^{\smallbangbox}\,$ is admissible, i.e.,
if $\Gamma \, \vdash \, \ottnt{A}$ is derivable, then there is a derivation of the same judgment without any applications of $\mathrm{Cut}$. \end{theorem}
\begin{proof}
We follow the proof for propositional linear logic by Lincoln et al.\,\cite{L+:decision_problems}.
To show the admissibility of $\mathrm{Cut}$, we consider the admissibility of the following cut rules:
\begin{tabular}{c}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$\Gamma \, \vdash \, \ottsym{!} \ottnt{A}$}
\AXC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{Cut} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$\Gamma \, \vdash \, \bangbox \ottnt{A} $}
\AXC{$\Gamma' \ottsym{,} ( \bangbox \ottnt{A} )^{n} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \bangbox\mathrm{Cut} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\noindent
where $ ( \ottnt{C} )^{n} $ denotes the multiset that has $n$ occurrences of $C$
and $n$ is assumed to be positive as a side-condition; and $\Gamma'$ in $!\mathrm{Cut}$ (resp. in $\bangbox\mathrm{Cut}$) is supposed to contain no formulae of form $!A$\,(resp. $ \bangbox \ottnt{A} $).
The \emph{cut-degrees} of $!\mathrm{Cut}$ and $\bangbox\mathrm{Cut}$ are defined similarly to that of $\mathrm{Cut}$.
Then, all the three rules ($\mathrm{Cut}$, $!\mathrm{Cut}$, $\bangbox\mathrm{Cut}$) are shown to be admissible by simultaneous induction
on the lexicographic complexity $\fbracket{\delta, h}$,
where $\delta$ is the degree of the assumed derivation and $h$ is its height.
See the appendix for details of the proof. \end{proof}
\begin{corollary}[Consistency]
$\imell^{\smallbangbox}\,$ is consistent, i.e., there exists an underivable judgment. \end{corollary}
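For a concrete instance, the judgment $\vdash \, \ottmv{p}$ for a propositional variable $p$ is underivable: by the cut-elimination theorem it would have a cut-free derivation, but no rule of $\imell^{\smallbangbox}\,$ other than $\mathrm{Cut}$ can conclude a judgment with an empty antecedent and an atomic succedent.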
\begin{rulefigure}{fig:modal_girard_translation}{Definition of the S4-version of Girard translation.}
\hspace{-1em}
\begin{tabular}{c|c}
{
\hspace{-1em}
\begin{minipage}{0.60\hsize}
\underline{$ \fgirardtrans{ \ottnt{A} } $}
\begin{align*}
\hspace{0.4em}
\fgirardtrans{ \ottmv{p} } \defeq \ottmv{p}, \hspace{0.4em}
\fgirardtrans{ \ottnt{A} \supset \ottnt{B} } \defeq ( \ottsym{!} \fgirardtrans{ \ottnt{A} } ) \multimap \fgirardtrans{ \ottnt{B} } , \hspace{0.4em}
\fgirardtrans{ \Box \ottnt{A} } \defeq \bangbox \fgirardtrans{ \ottnt{A} }
\end{align*}
\end{minipage}
}
&
{
\hspace{-1em}
\begin{minipage}{0.37\hsize}
\underline{$ \fgirardtrans{ \Gamma } $}
\begin{align*}
\fgirardtrans{ \Gamma } &\defeq \{ (x : \fgirardtrans{ \ottnt{A} } ) ~|~ (x : A) \in \Gamma \}
\end{align*}
\end{minipage}
}
\end{tabular}
\end{rulefigure}
Then, we can define an S4-version of Girard translation as in Figure~\ref{fig:modal_girard_translation}, and it can be justified by the following theorem, which is readily shown by induction on the derivation.
\begin{theorem}[Soundness]
\label{thm:modal_girard_translation}
If $ \Box \Delta \ottsym{,} \Gamma \, \vdash \, \ottnt{A}$ in $\mathrm{LJ}^{\Box}$,
then $ \bangbox \fgirardtrans{ \Delta } \ottsym{,} \ottsym{!} \fgirardtrans{ \Gamma } \, \vdash \, \fgirardtrans{ \ottnt{A} } $ in $\imell^{\smallbangbox}\,$. \end{theorem}
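To see Theorem~\ref{thm:modal_girard_translation} at work on the example that defeated the naive approach, the $\mathrm{LJ}^{\Box}$-judgment $ \Box \ottsym{(} \ottmv{p} \supset \ottmv{q} \ottsym{)} \ottsym{,} \Box \ottmv{p} \, \vdash \, \Box \ottmv{q} $ of Figure~\ref{fig:valid_inference} is sent to $ \bangbox \ottsym{(} \ottsym{!} \ottmv{p} \multimap \ottmv{q} \ottsym{)} \ottsym{,} \bangbox \ottmv{p} \, \vdash \, \bangbox \ottmv{q} $, which is derivable in $\imell^{\smallbangbox}\,$: from $ \bangbox \ottmv{p} \, \vdash \, \ottsym{!} \ottmv{p}$ (obtained by $\mathrm{Ax}$, $\bangbox\mathrm{L}$ and $!\mathrm{R}$, the latter being applicable since the antecedent is a $\bangbox$~-formula) and $\ottmv{q} \, \vdash \, \ottmv{q}$, the rules $\slimp\!\mathrm{L}$ and $\bangbox\mathrm{L}$ yield $ \bangbox \ottsym{(} \ottsym{!} \ottmv{p} \multimap \ottmv{q} \ottsym{)} \ottsym{,} \bangbox \ottmv{p} \, \vdash \, \ottmv{q}$, and $\bangbox\mathrm{R}$ applies because both formulae in the antecedent are prefixed by `$\bangbox$~'.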
\section{Curry--Howard correspondence} \label{sec:curry-howard}
In this section, we give a computational interpretation for our modal linear logic through the Curry--Howard correspondence and establish the corresponding S4-version of Girard translation for the modal linear logic in terms of typed $\lambda$-calculus.
\subsection{Typed $\lambda$-calculus for the intuitionistic modal linear logic} We introduce $\lambda^{\smallbangbox}\,$\,(pronounced ``lambda bangbox''), a typed $\lambda$-calculus corresponding to the modal linear logic under the Curry--Howard correspondence. The calculus $\lambda^{\smallbangbox}\,$ can be seen as an integration of $\lambda^{\Box}$ of Pfenning and Davies and the linear $\lambda$-calculus for dual intuitionistic linear logic of Barber\,\cite{B:dual_intuitionistic_linear_logic}. The rules of $\lambda^{\smallbangbox}\,$ are designed with the ``necessity'' of modal logic and the ``linearity'' of linear logic in mind, and are formally defined in Figure~\ref{fig:lambdabangbox}.
\begin{rulefigure}{fig:lambdabangbox}{Definition of $\lambda^{\smallbangbox}\,$.}
\hspace{-1em}
\begin{tabular}{c|c}
{
\hspace{-1em}
\begin{minipage}{0.48\hsize}
\paragraph*{Syntactic category}
\begin{alignat*}{3}
&\hspace{-1em}\text{Types}~&A, B, C &::= p ~|~ \ottnt{A} \multimap \ottnt{B} ~|~ \ottsym{!} \ottnt{A} ~|~ \bangbox \ottnt{A} \\
&\hspace{-1em}\text{Terms}~&M, N, L &::= x ~|~ \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} ~|~ \ottnt{M} \, \ottnt{N} ~|~ ! \ottnt{M} ~|~ \bangbox \ottnt{M} \\
&&&\,|~ \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N}~|~ \ottkw{let} \, \bangbox \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N}
\end{alignat*}
\end{minipage}
}
&
{
\hspace{-1em}
\begin{minipage}{0.50\hsize}
\paragraph*{Reduction rule}
\begin{alignat*}{3}
&\hspace{-1.2em}\betarule{\slimp}\hspace{0.5em}& & \ottsym{(} \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} \ottsym{)} \, \ottnt{N} & & \leadsto \ottnt{M} [ \ottmv{x} := \ottnt{N} ] \\
&\hspace{-1.2em}\betarule{!}\hspace{0.5em}& &\ottkw{let} \, ! \ottmv{x} \ottsym{=} ! \ottnt{N} \, \ottkw{in} \, \ottnt{M}& & \leadsto \ottnt{M} [ \ottmv{x} := \ottnt{N} ] \\
&\hspace{-1.2em}\betarule{\bangbox~}\hspace{0.5em}& &\ottkw{let} \, \bangbox \ottmv{x} \ottsym{=} \bangbox \ottnt{N} \, \ottkw{in} \, \ottnt{M}& & \leadsto \ottnt{M} [ \ottmv{x} := \ottnt{N} ]
\end{alignat*}
\end{minipage}
}
\end{tabular}
\rule{\textwidth}{0.4pt}
\hspace{-1em}
\begin{tabular}{c}
{
\hspace{-1.2em}
\begin{minipage}{0.98\hsize}
\paragraph*{Typing rule}
\begin{center}
\begin{tabular}{c}
\begin{minipage}{0.30\hsize}
\begin{prooftree}
\AXC{$ $}
\RightLabel{$ \mathrm{LinAx} $}
\UIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \ottmv{x} \ottsym{:} \ottnt{A} \, \vdash \, \ottmv{x} \ottsym{:} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.30\hsize}
\begin{prooftree}
\AXC{$ $}
\RightLabel{$ !\mathrm{Ax} $}
\UIC{$\Delta \ottsym{;} \Gamma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \emptyset \, \vdash \, \ottmv{x} \ottsym{:} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.32\hsize}
\begin{prooftree}
\AXC{$ $}
\RightLabel{$ \bangbox\mathrm{Ax} $}
\UIC{$\Delta \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Gamma \ottsym{;} \emptyset \, \vdash \, \ottmv{x} \ottsym{:} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\begin{tabular}{c}
\begin{minipage}{0.38\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{B}$}
\RightLabel{$ \slimp\!\mathrm{I} $}
\UIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} \ottsym{:} \ottnt{A} \multimap \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.57\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A} \multimap \ottnt{B}$}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma' \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{A}$}
\RightLabel{$ \slimp\!\mathrm{E} $}
\BIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \ottsym{,} \Sigma' \, \vdash \, \ottnt{M} \, \ottnt{N} \ottsym{:} \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\begin{tabular}{c}
\begin{minipage}{0.37\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{;} \emptyset \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$}
\RightLabel{$ !\mathrm{I} $}
\UIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \emptyset \, \vdash \, ! \ottnt{M} \ottsym{:} \ottsym{!} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.55\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottsym{!} \ottnt{A}$}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Sigma' \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{B}$}
\RightLabel{$ !\mathrm{E} $}
\BIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \ottsym{,} \Sigma' \, \vdash \, \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} \ottsym{:} \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\begin{tabular}{c}
\begin{minipage}{0.37\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \emptyset \ottsym{;} \emptyset \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$}
\RightLabel{$ \bangbox\mathrm{I} $}
\UIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \emptyset \, \vdash \, \bangbox \ottnt{M} \ottsym{:} \bangbox \ottnt{A} $}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.57\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \bangbox \ottnt{A} $}
\AXC{$\Delta \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Gamma \ottsym{;} \Sigma' \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{B}$}
\RightLabel{$ \bangbox\mathrm{E} $}
\BIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \ottsym{,} \Sigma' \, \vdash \, \ottkw{let} \, \bangbox \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} \ottsym{:} \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\end{center}
\end{minipage}
}
\end{tabular}
\end{rulefigure}
The structure of types is exactly the same as that of the formulae of $\imell^{\smallbangbox}\,$. Terms are defined as an extension of the simply-typed $\lambda$-calculus with the following: the terms $ ! \ottnt{M} $ and $\ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N}$, which are a constructor and a destructor for types $!A$, respectively; and the terms $ \bangbox \ottnt{M} $ and $\ottkw{let} \, \bangbox \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N}$, which play the same roles for types $ \bangbox \ottnt{A} $. Note that the variable $x$ in $\ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N}$ and in $\ottkw{let} \, \bangbox \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N}$ is \emph{bound}.
A \emph{(type) context} is defined in the same way as in $\lambda^{\Box}$, and a \emph{(type) judgment} consists of three contexts, a term, and a type, written as $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$. The three contexts of a judgment $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$ have the following intuitive meaning: (1) $\Delta$ implicitly represents a context for modalized types of form $ \bangbox \ottnt{A} $; (2) $\Gamma$ implicitly represents a context for modalized types of form $ \ottsym{!} \ottnt{A} $; (3) $\Sigma$ represents an ordinary context, but its elements must be used linearly.
The intuitive meanings of the typing rules are as follows. Each of the first three rules is a variable rule, depending on the kind of the context the variable comes from. In these rules, weakening of the antecedent is allowed for the $\Delta$-part and the $\Gamma$-part, but not for the $\Sigma$-part, since the latter must satisfy the linearity condition. The rules $\slimp\!\mathrm{I}$ and $\slimp\!\mathrm{E}$ are for the type $\multimap$, and again, $\slimp\!\mathrm{E}$ is designed to respect linearity. The remaining rules are for the types $\ottsym{!} \ottnt{A}$ and $ \bangbox \ottnt{A} $.
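As a quick illustration of how the three zones interact, the following judgment is derivable by $ \mathrm{LinAx} $, $ \bangbox\mathrm{Ax} $, $ \bangbox\mathrm{E} $, and $ \slimp\!\mathrm{I} $:
\[
\emptyset \ottsym{;} \emptyset \ottsym{;} \emptyset \, \vdash \, \lambda \ottmv{x} \ottsym{:} \bangbox \ottnt{A} \ottsym{.} \ottkw{let} \, \bangbox \ottmv{y} \ottsym{=} \ottmv{x} \, \ottkw{in} \, \ottmv{y} \ottsym{:} \bangbox \ottnt{A} \multimap \ottnt{A} ,
\]
where the linear assumption $\ottmv{x} \ottsym{:} \bangbox \ottnt{A}$ is consumed exactly once by $ \bangbox\mathrm{E} $, and the assumption $\ottmv{y} \ottsym{:} \ottnt{A}$ introduced by that rule lives in the $\Delta$-part.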
The \emph{reduction} $ \leadsto $ is defined to be the least compatible relation on terms generated by $\betarule{\slimp}$, $\betarule{!}$, and $\betarule{\bangbox~}$. The \emph{multistep reduction} $ \leadsto^+ $ is defined as in the case of $\lambda^{\Box}$.
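For instance, the term $\lambda \ottmv{x} \ottsym{:} \bangbox \ottnt{A} \ottsym{.} \ottkw{let} \, \bangbox \ottmv{y} \ottsym{=} \ottmv{x} \, \ottkw{in} \, \ottmv{y}$ above, applied to a term of the form $ \bangbox \ottnt{M} $, reduces in two steps, first by $\betarule{\slimp}$ and then by $\betarule{\bangbox~}$:
\[
\ottsym{(} \lambda \ottmv{x} \ottsym{:} \bangbox \ottnt{A} \ottsym{.} \ottkw{let} \, \bangbox \ottmv{y} \ottsym{=} \ottmv{x} \, \ottkw{in} \, \ottmv{y} \ottsym{)} \, \ottsym{(} \bangbox \ottnt{M} \ottsym{)}
\;\leadsto\; \ottkw{let} \, \bangbox \ottmv{y} \ottsym{=} \bangbox \ottnt{M} \, \ottkw{in} \, \ottmv{y}
\;\leadsto\; \ottnt{M} .
\]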
Then, we can show the subject reduction and the strong normalization of $\lambda^{\smallbangbox}\,$ as follows.
\begin{lemma}[Substitution]\
\label{lem:substitution}
\begin{enumerate}
\item If $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{B}$ and $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma' \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{A}$, then $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \ottsym{,} \Sigma' \, \vdash \, \ottnt{M} [ \ottmv{x} := \ottnt{N} ] \ottsym{:} \ottnt{B}$;
\item If $\Delta \ottsym{;} \Gamma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{B}$ and $\Delta \ottsym{;} \Gamma \ottsym{;} \emptyset \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{A}$, then $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} [ \ottmv{x} := \ottnt{N} ] \ottsym{:} \ottnt{B}$;
\item If $\Delta \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{B}$ and $\Delta \ottsym{;} \emptyset \ottsym{;} \emptyset \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{A}$, then $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} [ \ottmv{x} := \ottnt{N} ] \ottsym{:} \ottnt{B}$.
\end{enumerate} \end{lemma}
\begin{theorem}[Subject reduction]
If $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$ and $\ottnt{M} \leadsto \ottnt{N}$,
then $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{A}$. \end{theorem}
\begin{proof}
By induction on the derivation of $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$ together with Lemma~\ref{lem:substitution}. \end{proof}
\begin{theorem}[Strong normalization]
For every well-typed term $M$, there is no infinite reduction sequence starting from $M$. \end{theorem}
\begin{proof}
By embedding into $\lambda^{!, \slimp}$, a typed $\lambda$-calculus for the $(!, \multimap)$-fragment of dual intuitionistic linear logic,
which is shown to be strongly normalizing by Ohta and Hasegawa\,\cite{OH:terminating_linear_lambda}.
The details are in the appendix; the intuition is as follows.
First,
for every well-typed term $M$,
we define the term $ \fembed{ \ottnt{M} } $ by replacing the occurrences of $ \bangbox \ottnt{N} $ and $\ottkw{let} \, \bangbox \ottmv{x} \ottsym{=} \ottnt{N} \, \ottkw{in} \, \ottnt{L}$ in $M$
with $ ! \fembed{ \ottnt{N} } $ and $\ottkw{let} \, ! \ottmv{x} \ottsym{=} \fembed{ \ottnt{N} } \, \ottkw{in} \, \fembed{ \ottnt{L} } $, respectively.
Then, we can show that $ \fembed{ \ottnt{M} } $ is typable in $\lambda^{!, \slimp}$,
because the structure of `$\bangbox$~' collapses to that of `$!$',
and that the embedding $\fembed{-}$ preserves reductions. Therefore, $\lambda^{\smallbangbox}\,$ is strongly normalizing. \end{proof}
As we mentioned, $\lambda^{\smallbangbox}\,$ can indeed be viewed as a typed $\lambda$-calculus for the intuitionistic modal linear logic. A natural deduction system that corresponds to $\lambda^{\smallbangbox}\,$ is obtained as the ``logical part'' of the calculus, and we can show that this natural deduction system is equivalent to $\imell^{\smallbangbox}\,$.
\begin{definition}[Natural deduction]
A natural deduction for modal linear logic,
called $\mathrm{NJ}^{\smallbangbox}\,$, is defined to be one that is extracted from $\lambda^{\smallbangbox}\,$ by erasing term annotations. \end{definition}
\begin{fact}[Curry--Howard correspondence]
There is a one-to-one correspondence between $\mathrm{NJ}^{\smallbangbox}\,$ and $\lambda^{\smallbangbox}\,$,
which preserves provability/typability and proof-normalizability/reducibility. \end{fact}
\begin{lemma}[Judgmental reflection] \label{lem:reflection}
The following hold in $\mathrm{NJ}^{\smallbangbox}\,$.
\begin{enumerate}
\item $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \ottsym{,} \ottsym{!} \ottnt{A} \, \vdash \, \ottnt{B}$ if and only if $\Delta \ottsym{;} \Gamma \ottsym{,} \ottnt{A} \ottsym{;} \Sigma \, \vdash \, \ottnt{B}$;
\item $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \ottsym{,} \bangbox \ottnt{A} \, \vdash \, \ottnt{B}$ if and only if $\Delta \ottsym{,} \ottnt{A} \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{B}$.
\end{enumerate} \end{lemma}
\begin{theorem}[Equivalence]
\label{thm:equivalence}
$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{A}$ in $\mathrm{NJ}^{\smallbangbox}\,$ if and only if
$ \bangbox \Delta \ottsym{,} \ottsym{!} \Gamma \ottsym{,} \Sigma \, \vdash \, \ottnt{A}$ in $\imell^{\smallbangbox}\,$. \end{theorem}
\begin{proof}
By straightforward induction.
Lemma~\ref{lem:reflection} is used to show the if-part. \end{proof}
\subsection{Embedding from the modal $\lambda$-calculus by Pfenning and Davies}
We give a translation from Pfenning and Davies' $\lambda^{\Box}$ to our $\lambda^{\smallbangbox}\,$. We also show that the translation preserves the reductions of $\lambda^{\Box}$, and thus it can be seen as the S4-version of Girard translation on the level of proofs through the Curry--Howard correspondence.
To give the translation, we introduce two meta $\lambda$-terms in $\lambda^{\smallbangbox}\,$ to encode the function space $\supset$ of $\lambda^{\Box}$. The simulation of reduction of $ \ottsym{(} \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} \ottsym{)} \, \ottnt{N} $ in $\lambda^{\Box}$ can be shown readily.
\begin{definition}
Let $M$ and $N$ be terms such that
$\Delta \ottsym{;} \Gamma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{B}$ and $\Delta \ottsym{;} \Gamma \ottsym{;} \emptyset \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{A}$.
Then, $\overline{\lambda} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M}$ and $\ottnt{M} \overline{@} \ottnt{N}$ are defined as the terms $\lambda \ottmv{y} \ottsym{:} \ottsym{!} \ottnt{A} \ottsym{.} \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottmv{y} \, \ottkw{in} \, \ottnt{M}$ and $ \ottnt{M} \, \ottsym{(} ! \ottnt{N} \ottsym{)} $, respectively, where $y$ is chosen to be fresh, i.e., it is a variable satisfying $y \not\in (\fv{M} \cup \{ x \})$. \end{definition}
\begin{lemma}[Derivable full-function space]
The following rules are derivable in $\lambda^{\smallbangbox}\,$:
\begin{tabular}{c}
\begin{minipage}{0.37\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{B}$}
\doubleLine
\UIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottsym{(} \overline{\lambda} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} \ottsym{)} \ottsym{:} \ottsym{!} \ottnt{A} \multimap \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.53\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottsym{!} \ottnt{A} \multimap \ottnt{B}$}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{;} \emptyset \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{A}$}
\doubleLine
\BIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \overline{@} \ottnt{N} \ottsym{:} \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\noindent
Moreover, it holds that $\ottsym{(} \overline{\lambda} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} \ottsym{)} \overline{@} \ottnt{N} \leadsto^+ \ottnt{M} [ \ottmv{x} := \ottnt{N} ] $ in $\lambda^{\smallbangbox}\,$. \end{lemma}
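Concretely, unfolding the definitions of $\overline{\lambda}$ and $\overline{@}$, the claimed reduction is the two-step sequence (first $\betarule{\slimp}$, then $\betarule{!}$):
\[
\ottsym{(} \overline{\lambda} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} \ottsym{)} \overline{@} \ottnt{N}
\;=\; \ottsym{(} \lambda \ottmv{y} \ottsym{:} \ottsym{!} \ottnt{A} \ottsym{.} \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottmv{y} \, \ottkw{in} \, \ottnt{M} \ottsym{)} \, \ottsym{(} ! \ottnt{N} \ottsym{)}
\;\leadsto\; \ottkw{let} \, ! \ottmv{x} \ottsym{=} ! \ottnt{N} \, \ottkw{in} \, \ottnt{M}
\;\leadsto\; \ottnt{M} [ \ottmv{x} := \ottnt{N} ] .
\]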
\begin{rulefigure}{fig:translation}{Definitions of the S4-version of Girard translation in terms of the typed $\lambda$-calculus.}
\hspace{-1em}
\begin{tabular}{c|c}
{
\begin{minipage}{0.38\hsize}
\underline{$ \fgirardtrans{ \ottnt{A} } $}
\begin{align*}
\fgirardtrans{ \ottmv{p} } &\defeq p \\
\fgirardtrans{ \ottnt{A} \supset \ottnt{B} } &\defeq \ottsym{!} \fgirardtrans{ \ottnt{A} } \multimap \fgirardtrans{ \ottnt{B} } \\
\fgirardtrans{ \Box \ottnt{A} } &\defeq \bangbox \fgirardtrans{ \ottnt{A} }
\end{align*}
\rule{\textwidth}{0.4pt}
\underline{$ \fgirardtrans{ \Gamma } $}
\begin{align*}
\hspace{0.3em} \fgirardtrans{ \Gamma } &\defeq \{ (x : \fgirardtrans{ \ottnt{A} } ) ~|~ (x : A) \in \Gamma \}
\end{align*}
\end{minipage}
}
&
{
\begin{minipage}{0.60\hsize}
\underline{$ \ftrans{ \ottnt{M} } $}
\begin{align*}
\ftrans{ \ottmv{x} } &\defeq \ottmv{x}\\
\ftrans{ \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} } &\defeq \overline{\lambda} \ottmv{x} \ottsym{:} \fgirardtrans{ \ottnt{A} } \ottsym{.} \ftrans{ \ottnt{M} } \\
\ftrans{ \ottnt{M} \, \ottnt{N} } &\defeq \ftrans{ \ottnt{M} } \overline{@} \ftrans{ \ottnt{N} } \\
\ftrans{ \Box \ottnt{M} } &\defeq \bangbox \ftrans{ \ottnt{M} } \\
\ftrans{ \ottkw{let} \, \Box \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} } &\defeq \ottkw{let} \, \bangbox \ottmv{x} \ottsym{=} \ftrans{ \ottnt{M} } \, \ottkw{in} \, \ftrans{ \ottnt{N} }
\end{align*}
\end{minipage}
}
\end{tabular}
\end{rulefigure}
Together with the above meta $\lambda$-terms $\overline{\lambda} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M}$ and $\ottnt{M} \overline{@} \ottnt{N}$, we can define the translation from $\lambda^{\Box}$ into $\lambda^{\smallbangbox}\,$ and show that it preserves typability and reducibility.
\begin{definition}[Translation]
The \emph{translation} from $\lambda^{\Box}$ to $\lambda^{\smallbangbox}\,$
is defined to be the triple of
the type/context/term translations $ \fgirardtrans{ \ottnt{A} } $, $ \fgirardtrans{ \Gamma } $, and $ \ftrans{ \ottnt{M} } $ defined in Figure~\ref{fig:translation}. \end{definition}
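Before stating the embedding theorem, let us illustrate the translation on a small example: the type $\Box \ottnt{A} \supset \ottnt{A}$ and its standard proof term in $\lambda^{\Box}$ translate to
\begin{align*}
\fgirardtrans{ \Box \ottnt{A} \supset \ottnt{A} } &= \ottsym{!} \fgirardtrans{ \Box \ottnt{A} } \multimap \fgirardtrans{ \ottnt{A} } = \ottsym{!} \bangbox \fgirardtrans{ \ottnt{A} } \multimap \fgirardtrans{ \ottnt{A} } , \\
\ftrans{ \lambda \ottmv{x} \ottsym{:} \Box \ottnt{A} \ottsym{.} \ottkw{let} \, \Box \ottmv{y} \ottsym{=} \ottmv{x} \, \ottkw{in} \, \ottmv{y} } &= \overline{\lambda} \ottmv{x} \ottsym{:} \bangbox \fgirardtrans{ \ottnt{A} } \ottsym{.} \ottkw{let} \, \bangbox \ottmv{y} \ottsym{=} \ottmv{x} \, \ottkw{in} \, \ottmv{y} ,
\end{align*}
and the translated term indeed has type $\ottsym{!} \bangbox \fgirardtrans{ \ottnt{A} } \multimap \fgirardtrans{ \ottnt{A} }$ in $\lambda^{\smallbangbox}\,$ by the derivable rules above.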
\begin{theorem}[Embedding]
\label{thm:embedding}
$\lambda^{\Box}$ can be embedded into $\lambda^{\smallbangbox}\,$, i.e., the following hold:
\begin{enumerate}
\item If $\Delta \ottsym{;} \Gamma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$ in $\lambda^{\Box}$, then $ \fgirardtrans{ \Delta } \ottsym{;} \fgirardtrans{ \Gamma } \ottsym{;} \emptyset \, \vdash \, \ftrans{ \ottnt{M} } \ottsym{:} \fgirardtrans{ \ottnt{A} } $ in $\lambda^{\smallbangbox}\,$.
\item If $\ottnt{M} \leadsto \ottnt{M'}$ in $\lambda^{\Box}$, then $ \ftrans{ \ottnt{M} } \leadsto^+ \ftrans{ \ottnt{M'} } $ in $\lambda^{\smallbangbox}\,$.
\end{enumerate} \end{theorem}
\begin{proof}
By induction on the derivation of $\Delta \ottsym{;} \Gamma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$ and $\ottnt{M} \leadsto \ottnt{M'}$ in $\lambda^{\Box}$, respectively. \end{proof}
From the logical point of view, Theorem~\ref{thm:embedding}.1 can be seen as another S4-version of Girard translation (in the style of natural deduction) that corresponds to Theorem~\ref{thm:modal_girard_translation}; and Theorem~\ref{thm:embedding}.2 justifies that the S4-version of Girard translation is correct at the level of proofs, i.e., it preserves proof normalization as well as provability.
\section{Axiomatization of modal linear logic} \label{sec:axiomatization}
\begin{rulefigure}{fig:clbangbox}{Definition of $\mathrm{CL}^{\smallbangbox}\,$.}
\hspace{-1em}
\begin{tabular}{c}
{
\hspace{-1.2em}
\begin{minipage}{0.98\hsize}
\paragraph*{Syntactic category}
\begin{alignat*}{3}
&\text{Types}~~&A, B, C &::= p ~|~ \ottnt{A} \multimap \ottnt{B} ~|~ \ottsym{!} \ottnt{A} ~|~ \bangbox \ottnt{A} \\
&\text{Terms}~~&M, N, L &::= x ~|~ c ~|~ \ottnt{M} \, \ottnt{N} ~|~ ! \ottnt{M} ~|~ \bangbox \ottnt{M}
\end{alignat*}
\end{minipage}
}
\end{tabular}
\rule{\textwidth}{0.4pt}
\hspace{-1em}
\begin{tabular}{c}
{
\hspace{-1.2em}
\begin{minipage}{0.98\hsize}
\paragraph*{Typing rule}
\begin{center}
\begin{tabular}{c}
\begin{minipage}{0.30\hsize}
\begin{prooftree}
\AXC{($c$ is a combinator)}
\RightLabel{$ \mathrm{Ax} $}
\UIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \emptyset \vdash c : \ftypeof{c}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.60\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A} \multimap \ottnt{B}$}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma' \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{A}$}
\RightLabel{$ \mathrm{MP} $}
\BIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \ottsym{,} \Sigma' \, \vdash \, \ottnt{M} \, \ottnt{N} \ottsym{:} \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\begin{tabular}{c}
\begin{minipage}{0.30\hsize}
\begin{prooftree}
\AXC{$ $}
\RightLabel{$ \mathrm{LinAx} $}
\UIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \ottmv{x} \ottsym{:} \ottnt{A} \, \vdash \, \ottmv{x} \ottsym{:} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.30\hsize}
\begin{prooftree}
\AXC{$ $}
\RightLabel{$ !\mathrm{Ax} $}
\UIC{$\Delta \ottsym{;} \Gamma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \emptyset \, \vdash \, \ottmv{x} \ottsym{:} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.32\hsize}
\begin{prooftree}
\AXC{$ $}
\RightLabel{$ \bangbox\mathrm{Ax} $}
\UIC{$\Delta \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Gamma \ottsym{;} \emptyset \, \vdash \, \ottmv{x} \ottsym{:} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\begin{tabular}{c}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \Gamma \ottsym{;} \emptyset \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$}
\RightLabel{$ ! $}
\UIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \emptyset \, \vdash \, ! \ottnt{M} \ottsym{:} \ottsym{!} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$\Delta \ottsym{;} \emptyset \ottsym{;} \emptyset \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$}
\RightLabel{$ \bangbox $}
\UIC{$\Delta \ottsym{;} \Gamma \ottsym{;} \emptyset \, \vdash \, \bangbox \ottnt{M} \ottsym{:} \bangbox \ottnt{A} $}
\end{prooftree}
\end{minipage}
\end{tabular}
\end{center}
\end{minipage}
}
\end{tabular}
\rule{\textwidth}{0.4pt}
\hspace{-1em}
\begin{tabular}{c|c}
{
\hspace{-1em}
\begin{minipage}{0.55\hsize}
\paragraph*{Combinator}
\begin{itemize}
\item $\vdash \, \mathrm{I} \ottsym{:} \ottnt{A} \multimap \ottnt{A}$
\item $\vdash \, \mathrm{B} \ottsym{:} ( \ottnt{B} \multimap \ottnt{C} ) \multimap ( \ottnt{A} \multimap \ottnt{B} ) \multimap \ottnt{A} \multimap \ottnt{C}$
\item $\vdash \, \mathrm{C} \ottsym{:} ( \ottnt{A} \multimap \ottnt{B} \multimap \ottnt{C} ) \multimap \ottnt{B} \multimap \ottnt{A} \multimap \ottnt{C}$
\item $\vdash \, \mathrm{S}^{\delta} \ottsym{:} ( \delta \ottnt{A} \multimap \ottnt{B} \multimap \ottnt{C} ) \multimap ( \delta \ottnt{A} \multimap \ottnt{B} ) \multimap \delta \ottnt{A} \multimap \ottnt{C}$
\item $\vdash \, \mathrm{K}^{\delta} \ottsym{:} \ottnt{A} \multimap \delta \ottnt{B} \multimap \ottnt{A}$
\item $\vdash \, \mathrm{W}^{\delta} \ottsym{:} ( \delta \ottnt{A} \multimap \delta \ottnt{A} \multimap \ottnt{B} ) \multimap \delta \ottnt{A} \multimap \ottnt{B}$
\item $\vdash \, \mathrm{T}^{\delta} \ottsym{:} \delta \ottnt{A} \multimap \ottnt{A}$
\item $\vdash \, \mathrm{D}^{\delta} \ottsym{:} \delta ( \ottnt{A} \multimap \ottnt{B} ) \multimap \delta \ottnt{A} \multimap \delta \ottnt{B} $
\item $\vdash \, \mathrm{4}^{\delta} \ottsym{:} \delta \ottnt{A} \multimap \delta \delta \ottnt{A} $
\item $\vdash \, \mathrm{E} \ottsym{:} \bangbox \ottnt{A} \multimap \ottsym{!} \ottnt{A}$
\end{itemize}
where $\delta \in \{ !, \bangbox~ \}$
\end{minipage}
}
&
{
\hspace{-0.8em}
\begin{minipage}{0.40\hsize}
\paragraph*{Reduction}
\begin{alignat*}{2}
& \mathrm{I} \, \ottnt{M} & & \leadsto ~\ottnt{M}\\
& \mathrm{B} \, \ottnt{M} \, \ottnt{N} \, \ottnt{L} & & \leadsto ~ \ottnt{M} \, \ottsym{(} \ottnt{N} \, \ottnt{L} \ottsym{)} \\
& \mathrm{C} \, \ottnt{M} \, \ottnt{N} \, \ottnt{L} & & \leadsto ~ \ottnt{M} \, \ottnt{L} \, \ottnt{N} \\
& \mathrm{S}^{\delta} \, \ottnt{M} \, \ottnt{N} \, \ottsym{(} \delta \ottnt{L} \ottsym{)} & & \leadsto ~ \ottnt{M} \, \ottsym{(} \delta \ottnt{L} \ottsym{)} \, \ottsym{(} \ottnt{N} \, \ottsym{(} \delta \ottnt{L} \ottsym{)} \ottsym{)} \\
& \mathrm{K}^{\delta} \, \ottnt{M} \, \ottsym{(} \delta \ottnt{N} \ottsym{)} & & \leadsto ~\ottnt{M}\\
& \mathrm{W}^{\delta} \, \ottnt{M} \, \ottsym{(} \delta \ottnt{N} \ottsym{)} & & \leadsto ~ \ottnt{M} \, \ottsym{(} \delta \ottnt{N} \ottsym{)} \, \ottsym{(} \delta \ottnt{N} \ottsym{)} \\
& \mathrm{T}^{\delta} \, \ottsym{(} \delta \ottnt{M} \ottsym{)} & & \leadsto ~\ottnt{M}\\
& \mathrm{D}^{\delta} \, \ottsym{(} \delta \ottnt{M} \ottsym{)} \, \ottsym{(} \delta \ottnt{N} \ottsym{)} & & \leadsto ~ \delta \ottsym{(} \ottnt{M} \, \ottnt{N} \ottsym{)} \\
& \mathrm{4}^{\delta} \, \ottsym{(} \delta \ottnt{M} \ottsym{)} & & \leadsto ~ \delta \delta \ottnt{M} \\
& \mathrm{E} \, \bangbox \ottnt{M} & & \leadsto ~ ! \ottnt{M}
\end{alignat*}
\noindent
where $\delta \in \{ !, \bangbox~ \}$
\end{minipage}
}
\end{tabular}
\end{rulefigure}
\noindent We give an axiomatic characterization of the intuitionistic modal linear logic. To do so, we define a typed combinatory logic, called $\mathrm{CL}^{\smallbangbox}\,$, which can be seen as a Hilbert-style deductive system for modal linear logic through the Curry--Howard correspondence. In this section, we only aim to establish the equivalence between $\mathrm{NJ}^{\smallbangbox}\,$ and the Hilbert-style system, although $\mathrm{CL}^{\smallbangbox}\,$ also satisfies several desirable properties, e.g., subject reduction and strong normalizability.
The definition of $\mathrm{CL}^{\smallbangbox}\,$ is given in Figure~\ref{fig:clbangbox}. The set of types has the same structure as that of $\lambda^{\smallbangbox}\,$. A term is either a \emph{variable}, a \emph{combinator}, an application, a term necessitated by `$!$', or a term necessitated by `$\bangbox$~'. The notions of \emph{(type) context} and \emph{(type) judgment} are defined similarly to those of $\lambda^{\smallbangbox}\,$.
Every combinator $c$ is assigned the type listed in the figure, denoted by $\ftypeof{c}$. The typing rules are then described as follows: $\mathrm{Ax}$ and $\mathrm{MP}$ are the standard rules, which logically correspond to an axiom rule over the set of axiomata and to \emph{modus ponens}, respectively. The others are defined in the same way as in $\lambda^{\smallbangbox}\,$.
The \emph{reduction} $ \leadsto $ of combinators is defined to be the least compatible relation on terms generated by the reduction rules listed in the figure.
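As a small example of how the combinators compute, the composite $\mathrm{B} \, \mathrm{T}^{!} \, \mathrm{E}$ is a closed term of type $\bangbox \ottnt{A} \multimap \ottnt{A}$ (using $\mathrm{Ax}$ for each combinator and $\mathrm{MP}$ twice), and it behaves like $ \mathrm{T}^{\smallbangbox} $:
\[
\mathrm{B} \, \mathrm{T}^{!} \, \mathrm{E} \, \ottsym{(} \bangbox \ottnt{M} \ottsym{)}
\;\leadsto\; \mathrm{T}^{!} \, \ottsym{(} \mathrm{E} \, \ottsym{(} \bangbox \ottnt{M} \ottsym{)} \ottsym{)}
\;\leadsto\; \mathrm{T}^{!} \, \ottsym{(} ! \ottnt{M} \ottsym{)}
\;\leadsto\; \ottnt{M} ,
\]
just as $ \mathrm{T}^{\smallbangbox} \, \ottsym{(} \bangbox \ottnt{M} \ottsym{)} \leadsto \ottnt{M}$.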
\begin{remark} $\mathrm{CL}^{\smallbangbox}\,$ can be seen as an extension of the \emph{linear combinatory algebra} of Abramsky et al.\,\cite{AHS:goi_and_lca} with the $\bangbox~$-modality, or equivalently, as a linear-logical reconstruction of Pfenning's modally-typed combinatory logic\,\cite{P:modal_combinatory_logic}. The combinators $ \mathrm{T}^! $, $ \mathrm{D}^! $, $ \mathrm{4}^! $ represent the S4 axiomata for the $!$-modality, and similarly, $ \mathrm{T}^{\smallbangbox} $, $ \mathrm{D}^{\smallbangbox} $, $ \mathrm{4}^{\smallbangbox}\, $ represent those for the $\bangbox~$-modality. $ \mathrm{E} $ is the only combinator characterizing the relative strength of the two modalities. \end{remark}
As we defined $\mathrm{NJ}^{\smallbangbox}\,$ from $\lambda^{\smallbangbox}\,$, we can define the Hilbert-style deductive system (with open assumptions) for the intuitionistic modal linear logic via $\mathrm{CL}^{\smallbangbox}\,$.
\begin{definition}[Hilbert-style]
A Hilbert-style deductive system for modal linear logic, called $\mathrm{HJ}^{\smallbangbox}\,$,
is defined to be one that is extracted from $\mathrm{CL}^{\smallbangbox}\,$ by erasing term annotations. \end{definition}
\begin{fact}[Curry--Howard correspondence]
\label{fact:Hilbert_curry-howard}
There is a one-to-one correspondence between $\mathrm{HJ}^{\smallbangbox}\,$ and $\mathrm{CL}^{\smallbangbox}\,$,
which preserves provability/typability and proof-normalizability/reducibility. \end{fact}
The deduction theorem of $\mathrm{HJ}^{\smallbangbox}\,$ can be obtained as a consequence of the so-called \emph{bracket abstraction} of $\mathrm{CL}^{\smallbangbox}\,$ through Fact~\ref{fact:Hilbert_curry-howard}, which allows us to show the equivalence between $\mathrm{HJ}^{\smallbangbox}\,$ and $\mathrm{NJ}^{\smallbangbox}\,$. Therefore, the modal linear logic is indeed axiomatized by $\mathrm{HJ}^{\smallbangbox}\,$.
\begin{theorem}[Deduction theorem]\
\label{thm:deduction_theorem}
\begin{enumerate}
\item If $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{B}$, then $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \ottsym{:} ( \ottnt{A} \multimap \ottnt{B} ) $;
\item If $\Delta \ottsym{;} \Gamma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{B}$, then $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \ottsym{:} ( \ottsym{!} \ottnt{A} \multimap \ottnt{B} ) $;
\item If $\Delta \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{B}$, then $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottsym{(} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \ottsym{:} ( \bangbox \ottnt{A} \multimap \ottnt{B} ) $.
\end{enumerate}
where $\ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$, $\ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$, $\ottsym{(} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$
are bracket abstraction operations that take a variable $x$ and a $\mathrm{CL}^{\smallbangbox}\,$-term $M$
and return a $\mathrm{CL}^{\smallbangbox}\,$-term;
their definitions are given in the appendix. \end{theorem}
\begin{proof}
By induction on the derivation. The proof is just a type-checking of the result of the bracket abstraction operations. \end{proof}
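Although the precise clauses are deferred to the appendix, the flavour of the operation $\lambda_{*}$ can be conveyed by a standard linear (``$\mathrm{B}\mathrm{C}\mathrm{I}$-style'') bracket abstraction; the following is only an illustrative sketch and not necessarily the clauses used there:
\[
\lambda_{*} \ottmv{x} . \ottmv{x} \defeq \mathrm{I} ,
\qquad
\lambda_{*} \ottmv{x} . \ottsym{(} \ottnt{M} \, \ottnt{N} \ottsym{)} \defeq
\begin{cases}
\mathrm{C} \, \ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \, \ottnt{N} & \text{if $x$ occurs in $M$,}\\
\mathrm{B} \, \ottnt{M} \, \ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{N} \ottsym{)} & \text{if $x$ occurs in $N$,}
\end{cases}
\]
so that, e.g., $\ottsym{(} \lambda_{*} \ottmv{x} . \ottmv{x} \ottsym{)} \, \ottnt{L} = \mathrm{I} \, \ottnt{L} \leadsto \ottnt{L}$.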
\begin{theorem}[Equivalence]
$\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{A}$ in $\mathrm{HJ}^{\smallbangbox}\,$ if and only if $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{A}$ in $\mathrm{NJ}^{\smallbangbox}\,$. \end{theorem}
\begin{proof}
By straightforward induction.
We use Theorem~\ref{thm:deduction_theorem} and Fact~\ref{fact:Hilbert_curry-howard} to show the if-part. \end{proof}
\begin{corollary}
$\imell^{\smallbangbox}\,$, $\mathrm{NJ}^{\smallbangbox}\,$, and $\mathrm{HJ}^{\smallbangbox}\,$ are equivalent with respect to provability. \end{corollary}
\section{Geometry of Interaction Machine} \label{sec:goim}
In this section, we present a dynamic semantics, called \emph{context semantics}, for the modal linear logic in the style of the geometry of interaction machine\,\cite{M:goim, M:goim_popl}. As in the usual linear logic, we first define a notion of \emph{proof net} and then define the machine as a token-passing system over those proof nets. Thanks to the simplicity of our logic, the definitions are mostly straightforward extensions of those for classical $\mathrm{MELL}$\,($\mathrm{CMELL}$).
\subsection{Sequent calculus for classical modal linear logic}
We define a sequent calculus for classical modal linear logic, called $\mathrm{CMELL}^{\smallbangbox}\,$. The reason why we define it in the classical setting is that it eases the definition of proof nets in the later part of this section.
\begin{rulefigure}{fig:cmellbangbox}{Definition of $\mathrm{CMELL}^{\smallbangbox}\,$.}
\hspace{-1em}
\begin{tabular}{c|c}
{
\hspace{-1em}
\begin{minipage}{0.7\hsize}
\paragraph*{Syntactic category}
\begin{align*}
\text{Formulae}~~A, B, C &::= p ~|~ \ottmv{p} ^\bot
~|~ \ottnt{A} \otimes \ottnt{B} ~|~ \ottnt{A} \parr \ottnt{B}
~|~ \ottsym{!} \ottnt{A} ~|~ ? \ottnt{A}
~|~ \bangbox \ottnt{A} ~|~ \whynotdia \ottnt{A}
\end{align*}
\end{minipage}
}
&
{
\hspace{-1em}
\begin{minipage}{0.20\hsize}
\paragraph*{Inference rule}
\begin{prooftree}
\def2pt{2pt}
\AXC{$ $}
\RightLabel{$ \mathrm{Ax} $}
\UIC{$\vdash \, \ottnt{A} ^\bot \ottsym{,} \ottnt{A}$}
\end{prooftree}
\end{minipage}
}
\end{tabular}
\hspace{-0.75em}\rule{0.78\textwidth}{0.4pt}
\hspace{-1em}
\begin{tabular}{c}
{
\hspace{-1em}
\begin{minipage}{\hsize}
\begin{center}
\begin{tabular}{c}
\begin{minipage}{0.25\hsize}
\begin{prooftree}
\def2pt{2pt}
\def\hskip 0.05in{\hskip 0.5em}
\AXC{$\vdash \, \Gamma \ottsym{,} \ottnt{A}$}
\AXC{$\vdash \, \ottnt{A} ^\bot \ottsym{,} \Gamma'$}
\RightLabel{$ \mathrm{Cut} $}
\BIC{$\vdash \, \Gamma \ottsym{,} \Gamma'$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.22\hsize}
\begin{prooftree}
\def2pt{2pt}
\def\hskip 0.05in{\hskip 0.5em}
\AXC{$\vdash \, \Gamma \ottsym{,} \ottnt{A}$}
\AXC{$\vdash \, \Gamma' \ottsym{,} \ottnt{B}$}
\RightLabel{$ \stensor $}
\BIC{$\vdash \, \Gamma \ottsym{,} \Gamma' \ottsym{,} \ottnt{A} \otimes \ottnt{B} $}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.17\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\vdash \, \Gamma \ottsym{,} \ottnt{A} \ottsym{,} \ottnt{B}$}
\RightLabel{$ \spar $}
\UIC{$\vdash \, \Gamma \ottsym{,} \ottnt{A} \parr \ottnt{B} $}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.16\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\vdash \, \whynotdia \Delta \ottsym{,} ? \Gamma \ottsym{,} \ottnt{A}$}
\RightLabel{$ ! $}
\UIC{$\vdash \, \whynotdia \Delta \ottsym{,} ? \Gamma \ottsym{,} \ottsym{!} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.12\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\vdash \, \Gamma \ottsym{,} \ottnt{A}$}
\RightLabel{$ ? $}
\UIC{$\vdash \, \Gamma \ottsym{,} ? \ottnt{A}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\begin{tabular}{c}
\begin{minipage}{0.15\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\vdash \, \whynotdia \Delta \ottsym{,} \ottnt{A}$}
\RightLabel{$ \bangbox $}
\UIC{$\vdash \, \whynotdia \Delta \ottsym{,} \bangbox \ottnt{A} $}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.13\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\vdash \, \Gamma \ottsym{,} \ottnt{A}$}
\RightLabel{$ \whynotdia $}
\UIC{$\vdash \, \Gamma \ottsym{,} \whynotdia \ottnt{A} $}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.15\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\vdash \, \Gamma$}
\RightLabel{$ ?\mathrm{W} $}
\UIC{$\vdash \, \Gamma \ottsym{,} ? \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.17\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\vdash \, \Gamma \ottsym{,} ? \ottnt{A} \ottsym{,} ? \ottnt{A}$}
\RightLabel{$ ?\mathrm{C} $}
\UIC{$\vdash \, \Gamma \ottsym{,} ? \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.15\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\vdash \, \Gamma$}
\RightLabel{$ \whynotdia\mathrm{W} $}
\UIC{$\vdash \, \Gamma \ottsym{,} \whynotdia \ottnt{A} $}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.17\hsize}
\begin{prooftree}
\def2pt{2pt}
\AXC{$\vdash \, \Gamma \ottsym{,} \whynotdia \ottnt{A} \ottsym{,} \whynotdia \ottnt{A} $}
\RightLabel{$ \whynotdia\mathrm{C} $}
\UIC{$\vdash \, \Gamma \ottsym{,} \whynotdia \ottnt{A} $}
\end{prooftree}
\end{minipage}
\end{tabular}
\end{center}
\end{minipage}
}
\end{tabular}
\end{rulefigure}
Figure~\ref{fig:cmellbangbox} shows the definition of $\mathrm{CMELL}^{\smallbangbox}\,$. The set of formulae is defined as an extension of the $\mathrm{CMELL}$-formulae with the two modalities `$\bangbox$\,' and `$\whynotdia$\,'. The \emph{dual formula} of $A$, written $ \ottnt{A} ^\bot $, is defined by the standard duality of $\mathrm{CMELL}$ together with $ ( \bangbox \ottnt{A} ) ^\bot \defeq \whynotdia ( \ottnt{A} ^\bot ) $ and $ ( \whynotdia \ottnt{A} ) ^\bot \defeq \bangbox ( \ottnt{A} ^\bot ) $. Here, the $\whynotdia~$-modality is the dual of the $\bangbox~$-modality by definition, and it can be seen as an integration of the $?$-modality and the $\Diamond$-modality. The \emph{linear implication} $\ottnt{A} \multimap \ottnt{B}$ is defined as $ \ottnt{A} ^\bot \parr \ottnt{B} $ as usual. The inference rules are defined as a simple extension of $\imell^{\smallbangbox}\,$ to the classical setting in the style of ``one-sided'' sequents.
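For instance, under this duality the linear implication out of a $\bangbox~$-formula unfolds as
\[
\bangbox \ottnt{A} \multimap \ottnt{B} \;=\; ( \bangbox \ottnt{A} ) ^\bot \parr \ottnt{B} \;=\; \whynotdia ( \ottnt{A} ^\bot ) \parr \ottnt{B} ,
\]
so an assumption of the form $ \bangbox \ottnt{A} $ on the left of an $\imell^{\smallbangbox}\,$-sequent corresponds to a $\whynotdia~$-formula in a one-sided $\mathrm{CMELL}^{\smallbangbox}\,$-sequent.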
Then, the cut-elimination theorem for $\mathrm{CMELL}^{\smallbangbox}\,$ can be shown similarly to the case of $\imell^{\smallbangbox}\,$, and we can see that there exists a trivial embedding from $\imell^{\smallbangbox}\,$ to $\mathrm{CMELL}^{\smallbangbox}\,$.
\begin{theorem}[Cut-elimination]
The rule $\mathrm{Cut}$ in $\mathrm{CMELL}^{\smallbangbox}\,$ is admissible. \end{theorem}
\begin{theorem}[Embedding]
If $\Gamma \, \vdash \, \ottnt{A}$ in $\imell^{\smallbangbox}\,$, then $\vdash \, \Gamma ^\bot \ottsym{,} \ottnt{A}$ in $\mathrm{CMELL}^{\smallbangbox}\,$. \end{theorem}
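For instance, the $\imell^{\smallbangbox}\,$-derivable sequent $ \bangbox \ottnt{A} \, \vdash \, \ottsym{!} \ottnt{A}$ (cf.\ the combinator $\mathrm{E}$ in Section~\ref{sec:axiomatization}) is mapped to $\vdash \, \whynotdia ( \ottnt{A} ^\bot ) \ottsym{,} \ottsym{!} \ottnt{A}$, which is derived by $\mathrm{Ax}$, the $\whynotdia$ rule, and the $!$ rule; the side condition of the $!$ rule is satisfied because the remaining context consists of the single $\whynotdia$-formula $\whynotdia ( \ottnt{A} ^\bot )$.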
\subsection{Proof-nets formalization} First, we define \emph{proof structures} for $\mathrm{CMELL}^{\smallbangbox}\,$. \emph{Proof nets} are then defined to be those proof structures satisfying a condition called the \emph{correctness criterion}. Intuitively, a proof net corresponds to an (equivalence class of a) proof in $\mathrm{CMELL}^{\smallbangbox}\,$. \begin{definition}
A \emph{node} is one of the graph-theoretic nodes shown in Figure~\ref{fig:nodesAndBox}
equipped with $\mathrm{CMELL}^{\smallbangbox}\,$ types on the edges.
They are all directed from top to bottom: for example, the $\mathsf{\parr}$ node has two
incoming edges and one outgoing edge.
A $!$-node (resp.\ $\bangbox$~-node) has one outgoing edge typed by $! A$
(resp. $\bangbox A$)
and arbitrarily many (possibly zero) outgoing edges typed by $? A_i$ and $\whynotdia B_i$ (resp.\ $\whynotdia A_i$).
\begin{figure}
\caption{Nodes of proof net and box notation.}
\label{fig:nodesAndBox}
\end{figure}
A \emph{proof structure} is a finite directed graph that satisfies the following conditions:
\begin{itemize}
\item each edge is assigned a type that matches the types specified by the nodes (in Figure~\ref{fig:nodesAndBox}) it is connected to;
\item some edges may not be connected to any node (called \emph{dangling edges}).
Those dangling edges and also the types on those edges are called the
\emph{conclusions of the structure};
\item the graph is associated with a total map from all the $!$-nodes and
$\bangbox$~-nodes in it to proof structures called the \emph{contents} of the $!$/$\bangbox$~-nodes.
The map satisfies that the types of the conclusions of a $!$-node (resp.\ $\bangbox$~-node)
coincide with the conclusions of its content.
\end{itemize} \end{definition}
\begin{remark}
Formally, a $!$-node (resp.\ a $\bangbox$~-node) and its content are distinct
objects and they are not connected as a directed graph.
However, it is convenient to depict them as if the $!$-node (resp.\ $\bangbox$~-node)
were a ``box'' filled with its content, as shown at the bottom-right of Figure~\ref{fig:nodesAndBox}.
We also depict multiple edges by an edge with a diagonal line.
In what follows, we adopt this ``box'' notation and multiple edges notation without explicit note. \end{remark}
\begin{definition}\label{def:switchingPath}
Given a proof structure $S$, a \emph{switching path} is an undirected path on $S$
(meaning that the path is allowed to traverse an edge forward or backward)
satisfying that on each $\parr$ node, $?c$ node, and $\whynotdia c$ node,
the path uses at most one of the premises, and that the path uses any edge at most once. \end{definition}
\begin{definition}\label{def:modalMellCorrectCrit}
The \emph{correctness criterion} is the following condition:
given a proof structure $S$, the switching paths of $S$ and of all the contents of $!$-nodes and $\bangbox~$-nodes in $S$ are all acyclic and connected.
A proof structure satisfying the correctness criterion is called a \emph{proof net}. \end{definition}
As a counterpart of the cut-elimination process in $\mathrm{CMELL}^{\smallbangbox}\,$, the notion of \emph{reduction} is defined for proof structures (and hence for proof nets): this intuition is made precise by Lemma~\ref{lem:simulation_seqnet}, where $\fnettrans{-}$ is the translation from $\mathrm{CMELL}^{\smallbangbox}\,$ to proof nets, whose definition is omitted here since it is defined analogously to the translation from $\mathrm{CMELL}$ to $\mathrm{CMELL}$ proof nets. The lemmata below are naturally obtained by extending the case of $\mathrm{CMELL}$, since the $\bangbox~$-modality has mostly the same logical structure as the $!$-modality.
\begin{definition}
\emph{Reductions} of proof structures are local graph reductions defined by the set of rules
depicted in Figure~\ref{fig:reduction_rules}. \end{definition}
\begin{figure}
\caption{Reduction rules.}
\label{fig:reduction_rules}
\end{figure}
\begin{lemma} \label{lem:net_preservation}
Let $S \to S'$ be a reduction between proof structures.
If $S$ is a proof net (i.e., satisfies the correctness criterion),
so is $S'$. \end{lemma}
\begin{lemma} \label{lem:simulation_seqnet}
Let $\Pi$ be a proof of $\vdash \, \whynotdia \Delta ^\bot \ottsym{,} ? \Gamma ^\bot \ottsym{,} \Sigma ^\bot \ottsym{,} \ottnt{A}$ and
suppose that $\Pi$ reduces to another proof $\Pi'$.
Then there is a sequence of reductions $\fnettrans{\Pi} \to^* \fnettrans{\Pi'}$ between the proof nets. \end{lemma}
\subsection{Computational interpretation}
\begin{definition}
A \emph{context} is a triple $(\mathcal{M}, \mathcal{B}, \mathcal{N})$ where $\mathcal{M}, \mathcal{B}, \mathcal{N}$ are generated by the following grammar:
{\small
$
\mathcal{M} ::= \varepsilon ~|~ l.\mathcal{M} ~|~ r.\mathcal{M}
\quad \mathcal{B} ::= \varepsilon ~|~ L.\mathcal{B} ~|~ R.\mathcal{B} ~|~ \fbracket{\mathcal{B},\mathcal{B}} ~|~ \star
\quad \mathcal{N} ::= \varepsilon ~|~ L'.\,\mathcal{N} ~|~ R'.\,\mathcal{N} ~|~ \fbracket{\mathcal{N},\mathcal{N}} ~|~ \star
$
} \end{definition}
Intuitively, a context is an intermediate state while ``evaluating'' a proof net (and, via the translation into proof nets, a term of $\lambda^{\smallbangbox}\,$). The geometry of interaction machine calculates the semantic value of a net by traversing the net from one conclusion to another; to traverse the net in the ``right way'' (more precisely, in a way that is invariant under net reduction), the context accumulates information about the path that has already been traversed. \emph{How} the net is traversed is then defined by the notion of \emph{path} over a proof net, as defined below.
\begin{definition}
The \emph{extended dynamic algebra} $\Lambda^{\Box *}$ is a single-sorted $\Sigma$ algebra
that contains $0,1,p,q,r,r',s,s',t,t',d,d' \colon \Sigma$ as constants,
has an associative operator $\cdot\colon\Sigma\times\Sigma\to\Sigma$
and operators $(-)^*\colon\Sigma\to\Sigma$, ${!}\colon\Sigma\to\Sigma$, ${\bangbox{}}\colon\Sigma\to\Sigma$,
equipped with a formal sum $+$,
and satisfies the equations below.
Hereafter,
we write $xy$ for $x\cdot y$ where $x$ and $y$ are metavariables over $\Sigma$.
{\small
\begin{alignat*}{10}
\qquad0^* = !0 = 0 &\quad 1^* = !1 = 1 &\quad 0x = x0 = 0 &\quad 1x = x1 = x\\
\qquad!(x)^* = !(x^*) &\quad (xy)^* = y^*x^* &\quad (x^*)^* = x &\quad !(x)!(y) = !(xy) \\
\qquad\bangbox(x)\bangbox(y) = \bangbox(xy) &\quad p^*p = q^*q = 1 &\quad q^*p = p^*q = 0 &\quad r^*r = s^*s = 1\\
\qquad s^*r = r^*s = 0 &\quad d^*d= 1 &\quad t^*t = 1 &\quad p'^*p' = q'^*q' = 1\\
\qquad q'^*p' = p'^*q' = 0 &\quad r'^*r' = s'^*s' = 1 &\quad s'^*r' = r'^*s' = 0 &\quad d'^*d'= 1 \\
\qquad t'^*t' = 1 &\quad !(x)r = r!(x) &\quad !(x)s = s!(x) &\quad !(x)t = t!!(x)\\
\qquad !(x)d = dx &\quad \bangbox(x)r' = r'\bangbox(x) &\quad \bangbox(x)s' = s'\bangbox(x) &\quad \bangbox(x)t' = t'\bangbox~\bangbox(x)\\
\qquad\bangbox(x)d' = d'x &\quad x+y = y+x &\quad x+0 = x&\quad (x+y)z = xz+yz\\
\qquad z(x+y) = zx+zy &\quad (x+y)^* = x^*+y^* &\quad !(x+y) = !x+!y &\quad \bangbox{(x+y)} = \bangbox{x}+\bangbox{y}
\end{alignat*}
} \end{definition}
\begin{remark}
The equations in the definition above are mostly the same as the standard dynamic algebra
$\Lambda^*$~\cite{M:goim, M:goim_popl} except those equations concerning the symbols with $'$ and the operator $\bangbox{}$, and their structures are analogous to those for $!$ operator.
This again reflects the fact that the logical structure of rules for $\bangbox{}$ is analogous to that of $!$. \end{remark}
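As a small example of how these equations act when composing labels along a walk, an inverse pair of $d$'s cancels an intervening $!$-action, and similarly for the primed symbols:
\[
d^* \, !(w) \, d \;=\; d^* \, d \, w \;=\; w ,
\qquad
d'^{*} \, \bangbox(w) \, d' \;=\; d'^{*} \, d' \, w \;=\; w ,
\]
using $!(x) d = d x$, $\bangbox(x) d' = d' x$, $d^* d = 1$, and $d'^{*} d' = 1$.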
\begin{definition}
A \emph{label} is an element of $\Lambda^{\Box *}$ that is associated with
edges of proof structures as in Figure~\ref{fig:labels}.
Let $S$ be a proof structure and $T_S$ be the set of edge traversals in the structure.
$S$ is associated with a function $w\colon T_S \to \Lambda^{\Box *}$
defined by $w(e) = l$ (resp.\ $l^*$) if $e$ is a forward (resp.\ backward) traversal
of an edge whose label is $l$, and by
$w(e_1e_2) = w(e_1)w(e_2)$.
\begin{figure}
\caption{Labels on edges.}
\label{fig:labels}
\end{figure} \end{definition}
\begin{definition}
A \emph{walk} over a proof structure $S$ is an element of $\Lambda^{\Box *}$
that is obtained by concatenating labels along a graph-theoretic path over $S$
such that the graph-theoretic path does not traverse an edge forward (resp.\ backward)
immediately after traversing the same edge backward (resp.\ forward),
and does not traverse one premise of a $\otimes$, $\parr$, or $c$ node
immediately after another premise of the same node.
A \emph{path} is a walk that is not proved to be equal to $0$.
A path is called \emph{maximal} if it starts and finishes at a conclusion. \end{definition}
The intuition behind the notion of path is that a path is a ``correct way'' of traversing a proof net, in the sense that paths are preserved under reduction. All the other walks, i.e., those that are not paths, get broken, which is represented by the constant $0$ of $\Lambda^{\Box *}$. We then obtain a \emph{context semantics} from paths in the following way.
\begin{definition}
Given a monomial path $a$, its action $\fint{a} : \Sigma \rightharpoonup \Sigma$ on contexts is defined as follows.
We define $\fint{1}$ as the identity mapping on contexts,
and $\fint{0}$ is left undefined.
$\fint{f^*}$ is the inverse transformation, i.e., $\fint{f}^{-1}$.
The transformer of the composition of $a$ and $b$ is defined by
$\fint{ab}(m) \defeq \fint{a}(\fint{b}(m))$.
For the other labels, the interpretations are defined as follows, where the exponential morphisms $!$ and $\bangbox\,$ are defined by meta-level pattern matching:
\begin{alignat*}{4}
\fint{p}(\mathcal{M}, \mathcal{B}, \mathcal{N}) &\defeq (l . \mathcal{M}, \mathcal{B}, \mathcal{N})
&\quad \fint{q}(\mathcal{M}, \mathcal{B}, \mathcal{N}) &\defeq (r . \mathcal{M}, \mathcal{B}, \mathcal{N})\\
\fint{r}(\mathcal{M}, \mathcal{B}, \mathcal{N}) &\defeq (\mathcal{M}, L . \mathcal{B}, \mathcal{N})
&\quad \fint{s}(\mathcal{M}, \mathcal{B}, \mathcal{N}) &\defeq (\mathcal{M}, R . \mathcal{B}, \mathcal{N})\\
\fint{t}(\mathcal{M}, \fbracket{\mathcal{B}_1, \fbracket{\mathcal{B}_2, \mathcal{B}_3}}, \mathcal{N})
&\defeq (\mathcal{M}, \fbracket{\fbracket{\mathcal{B}_1, \mathcal{B}_2}, \mathcal{B}_3}, \mathcal{N})
&\quad \fint{d}(\mathcal{M}, \mathcal{B}, \mathcal{N}) &\defeq (\mathcal{M}, \star . \mathcal{B}, \mathcal{N})\\
\fint{r'}(\mathcal{M}, \mathcal{B}, \mathcal{N}) &\defeq (\mathcal{M}, \mathcal{B}, L' . \mathcal{N})
&\quad \fint{s'}(\mathcal{M}, \mathcal{B}, \mathcal{N}) &\defeq (\mathcal{M}, \mathcal{B}, R' . \mathcal{N})\\
\fint{t'}(\mathcal{M}, \mathcal{B}, \fbracket{\mathcal{N}_1, \fbracket{\mathcal{N}_2, \mathcal{N}_3}})
&\defeq (\mathcal{M}, \mathcal{B}, \fbracket{\fbracket{\mathcal{N}_1, \mathcal{N}_2}, \mathcal{N}_3})
&\quad \fint{d'}(\mathcal{M}, \mathcal{B}, \mathcal{N}) &\defeq (\mathcal{M}, \mathcal{B}, \star . \mathcal{N})\\
\end{alignat*}
\begin{align*}
\fint{!(f)}(\mathcal{M}, \fbracket{\mathcal{B}_1, \mathcal{B}_2}, \mathcal{N})
&\defeq \textbf{let}~(\mathcal{M}', \mathcal{B}_2', \mathcal{N}') = \fint{f}(\mathcal{M}, \mathcal{B}_2, \mathcal{N})~\textbf{in}~(\mathcal{M}', \fbracket{\mathcal{B}_1, \mathcal{B}_2'}, \mathcal{N}') \\
\fint{\bangbox(f)}(\mathcal{M}, \mathcal{B}, \fbracket{\mathcal{N}_1, \mathcal{N}_2})
&\defeq \textbf{let}~(\mathcal{M}', \mathcal{B}', \mathcal{N}_2') = \fint{f}(\mathcal{M}, \mathcal{B}, \mathcal{N}_2)~\textbf{in}~(\mathcal{M}', \mathcal{B}', \fbracket{\mathcal{N}_1, \mathcal{N}_2'})
\end{align*}
Given a path $a$, its action $\fint{a} : \Sigma \rightharpoonup \mathbb{M}(\Sigma)$ is
defined by the rules above (regarding the codomain as a multiset) and
$\fint{a+b}(m) = (\fint{a}(m)) \uplus (\fint{b}(m))$ where $\uplus$ is the multiset sum. \end{definition}
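As a consistency check between the algebra and the actions, the composite $q^* p$, which is equal to $0$ in $\Lambda^{\Box *}$, has no defined action: $\fint{q^* p}(\mathcal{M}, \mathcal{B}, \mathcal{N}) = \fint{q^*}(l . \mathcal{M}, \mathcal{B}, \mathcal{N})$, and $\fint{q^*} = \fint{q}^{-1}$ is defined only on contexts whose first component starts with $r$; this matches the fact that $\fint{0}$ is left undefined.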
\begin{remark}
In Mackie's work~\cite{M:goim}, the multiset in the codomain is not used since the main interest of his
work is on terms of a base type: in that setting any proof net corresponding to a term
has an execution formula that is monomial.
In general, this style of context semantics
is slightly degenerate compared to Girard's original version and its successors,
because the information about the ``current position'' is dropped from the definition of contexts. \end{remark}
\begin{definition}
\label{def:context_semantics} Let $S$ be a closed proof net and $\chi$ be the set of maximal paths between conclusions of $S$.
The \emph{execution formula} is defined by $\mathcal{EX}(S) = \sum_{\phi \in \chi}\phi$, where the RHS is the sum of all paths in $\chi$.
The \emph{context semantics} of $S$ is defined to be $\fint{\mathcal{EX}(S)}\colon\Sigma \rightharpoonup \mathbb{M}(\Sigma)$. \end{definition}
\begin{definition}
Let $M$ be a closed well-typed term in $\lambda^{\smallbangbox}\,$.
The \emph{context semantics} of $M$ is defined to be $\fint{ \flambdanettrans{M} }$, where
$\flambdanettrans{-}$ is a straightforward translation from $\lambda^{\smallbangbox}\,$-terms to proof nets, defined by
constructing proof nets from $\lambda^{\smallbangbox}\,$-derivations
as in Figure~\ref{fig:bangboxnettranslation}. \end{definition}
\begin{figure}
\caption{Translation from $\lambda^{\smallbangbox}\,$ to $\mathrm{CMELL}^{\smallbangbox}\,$ proof nets.}
\label{fig:bangboxnettranslation}
\end{figure}
\begin{lemma}
Let $S$ be a closed proof net and $S'$ be its normal form.
Then $\fint{S} = \fint{S'}$. \end{lemma}
The lemma is proved through two auxiliary lemmata below.
\begin{lemma}
Let $\phi$ be a path from a conclusion of a closed net $S$ ending at a node $a$.
Let $(\mathcal{M}', \mathcal{B}', \mathcal{N}') = \fint{\phi}(\mathcal{M}, \varepsilon, \varepsilon)$.
The height of $\mathcal{B}'$ (resp.\ $\mathcal{N}'$) matches the
number of exponential (resp.\ necessitation) boxes containing the node $a$. \end{lemma}
\begin{proof}
By inspecting the rules of the actions above:
the height of the stacks changes only at the doors of a box. \end{proof}
\begin{lemma}
Let $\phi$ be a path inside a box of a closed net $S$.
$\fint{\phi}(\mathcal{M}, \sigma . \mathcal{B}, \tau . \mathcal{N})$
is of the form $(\mathcal{M}', \sigma' . \mathcal{B}, \tau . \mathcal{N})$. \end{lemma}
\begin{proof}
Again, by inspecting the rules of the actions. \end{proof}
\begin{theorem}
If a closed term $M$ in $\lambda^{\smallbangbox}\,$ is typable and
$\ottnt{M} \leadsto \ottnt{M'}$,
then $\fint{\flambdanettrans{M}} = \fint{\flambdanettrans{M'}}$. \end{theorem}
\begin{remark} This notion of context semantics inherently captures the ``dynamics'' of computation, and indeed Mackie exploited\,\cite{M:goim, M:goim_popl} this character to implement a compiler, at the level of machine code, for PCF. In this paper we do not cover such a concrete compiler, but the definition of $\fint{-}$ can be seen as the ``context transformers'' of a virtual machine that is mathematically rigorous enough to model the computation of $\lambda^{\smallbangbox}\,$ (and hence of $\lambda^{\Box}$). \end{remark}
\section{Related work} \label{sec:related_work}
\subsection{Linear-logical reconstruction of modal logic} The work on translations from modal logic to linear logic goes back to Martini and Masini\,\cite{MM:modal_view}. They proposed a translation from classical S4 ($\mathrm{CS4}$) to full propositional linear logic by means of the Grisin--Ono translation. However, their work only discusses provability.
The most similar work to ours is a ``linear analysis'' of $\mathrm{CS4}$ by Schellinx\,\cite{S:linear_approach}, in which a Girard translation from $\mathrm{CS4}$ at the level of proofs is proposed. He uses a bi-colored linear logic called $\textbf{2-LL}$, a subsystem of the multicolored linear logic of Danos et al.\,\cite{DJS:structure_of_exponentials}, as the target calculus of the translation. It has two pairs of exponentials $\fbracket{\bangzero, \whynotzero}$ and $\fbracket{\bangone, \whynotone}$, called \emph{subexponentials} following the terminology of Nigam and Miller\,\cite{NM:subexponentials}, with the following rules:
\begin{tabular}{c}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$ \bangone \Gamma, \bangzero \Gamma' \vdash A, \whynotone \Delta, \whynotzero \Delta' $}
\RightLabel{ $\bangzero\mathrm{R}$ }
\UIC{$ \bangone \Gamma, \bangzero \Gamma' \vdash \bangzero A, \whynotone \Delta, \whynotzero \Delta' $}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$ \bangone \Gamma \vdash A, \whynotone \Delta $}
\RightLabel{ $\bangone\mathrm{R}$ }
\UIC{$ \bangone \Gamma \vdash \bangone A, \whynotone \Delta $}
\end{prooftree}
\end{minipage} \end{tabular}
\noindent Although they are defined in the classical setting, these rules have essentially the same structure as the rules $!\mathrm{R}$ and $\bangbox\mathrm{R}$ that we defined for $\imell^{\smallbangbox}\,$, respectively.
Regarding the difference between Schellinx's results and ours, his work is carried out purely in terms of proof theory: neither a typed $\lambda$-calculus nor a Geometry of Interaction interpretation is given. Even so, he already gave a reduction-preserving Girard translation for the sequent calculi of $\mathrm{CS4}$ and $\textbf{2-LL}$, and his \emph{linear decoration}\,(cf. \cite{S:linear_approach, DJS:structure_of_exponentials}) allows us to obtain the cut-elimination theorem for $\mathrm{CS4}$ as a corollary of that of $\textbf{2-LL}$. Thus, it should be interesting to investigate the relationship between his work and ours.
Furthermore, there also exist two uniform logical frameworks that can encode various logics including $\mathrm{IS4}$ and $\mathrm{CS4}$. One is the work by Nigam et al.\,\cite{N+:extended_framework}, which is based on Nigam and Miller's linear-logical framework with subexponentials and on the notion of focusing by Andreoli. The other is the \emph{adjoint logic} of Pruiksma et al.\,\cite{P+:adjoint_logic}, which is based on, again, subexponentials and on the so-called LNL model for intuitionistic linear logic by Benton. While our present work is still some distance from these two frameworks, it seems fruitful to carry our discussion into them in order to give linear-logical computational interpretations of various logics.
\subsection{Computation of modal logic and its relation to metaprogramming} Computational interpretations of modal logic have been considered not only for intuitionistic S4 but also for various other logics, including the modal logics $\mathrm{K}$, $\mathrm{T}$, $\mathrm{K4}$, and $\mathrm{GL}$, and a few constructive temporal logics\,(cf. the survey by Kavvos in \cite{K:dual-context_lics}). This area of modal logic is known to be connected to ``metaprogramming'' in the theory of programming languages and has been studied substantially. One such study is \emph{(multi-)staged computation}\,(cf.\,\cite{TS:metaml}), a programming paradigm that supports Lisp-like \emph{quasi-quote}, \emph{unquote}, and \emph{eval}. The work on $\lambda^{\Box}$ by Davies and Pfenning\,\cite{DP:modal_analysis} is in fact one of the logical investigations of this paradigm.
Furthermore, multi-stage programming is not a mere theory but has ``real'' implementations in the style of functional programming languages, such as MetaML\,\cite{TS:metaml} and MetaOCaml\,(cf. a survey in \cite{C+:inference_for_classifiers}). Some core calculi of these implementations are formalized as type systems\,(e.g.\,\cite{T:environment_classifiers, C+:inference_for_classifiers}) and investigated from the logical point of view (e.g.\,\cite{TI:logical_foundation}).
\section{Conclusion} \label{sec:conclusion} We have presented a linear-logical reconstruction of the intuitionistic modal logic S4, by establishing the modal linear logic with the $\bangbox~$-modality and the S4-version of Girard translation from $\mathrm{IS4}$. The translation from $\mathrm{IS4}$ to the modal linear logic is shown to be correct at the level of proofs, through the Curry--Howard correspondence.
While a proof-level Girard translation for modal logic was already proposed by Schellinx, our typed $\lambda$-calculus $\lambda^{\smallbangbox}\,$ and its Geometry of Interaction Machine (GoIM) are novel. Another significant aspect of our formalization is its simplicity: all we need to establish the linear-logical reconstruction of modal logic is the $\bangbox~$-modality, an integration of the $!$-modality and the $\Box$-modality, which brings the structure of modal logic into linear logic. Thanks to this simplicity, our $\lambda$-calculus and the GoIM can be obtained as simple extensions of existing work.
As a further direction, we plan to enrich our framework to cover other modal logics such as $\mathrm{K}$, $\mathrm{T}$, and $\mathrm{K4}$, following the work on contextual modal calculi by Kavvos\,\cite{K:dual-context_lics}. Moreover, reinvestigating the modal-logical foundation for multi-stage programming by Tsukada and Igarashi\,\cite{TI:logical_foundation} via our methods, and extending Mackie's GoIM for PCF\,\cite{M:goim_popl} to the modal-logical setting, seem interesting from the viewpoint of programming languages.
Lastly, we have also left a semantic study of modal linear logic with respect to validity for future work. At the present stage, we expect that a sound and complete characterization of modal linear logic can be given by an integration of the Kripke semantics of modal logic and the phase semantics of linear logic, but the details will be studied in a future paper.
\appendix \section{Appendix}
\subsection{Cut-elimination theorem for the intuitionistic modal linear logic}
In this section, we give a complete proof of the cut-elimination theorem of $\imell^{\smallbangbox}\,$.
\begin{theorem}[Cut-elimination]
The rule $\mathrm{Cut}$ in $\imell^{\smallbangbox}\,$ is admissible, i.e.,
if $\Gamma \, \vdash \, \ottnt{A}$ is derivable, then there is a derivation of the same judgment without any applications of $\mathrm{Cut}$. \end{theorem}
\begin{proof}
As we mentioned in the body,
we will show the following rules are admissible.
\begin{tabular}{c}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$\Gamma \, \vdash \, \ottsym{!} \ottnt{A}$}
\AXC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{Cut} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$\Gamma \, \vdash \, \bangbox \ottnt{A} $}
\AXC{$\Gamma' \ottsym{,} ( \bangbox \ottnt{A} )^{n} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \bangbox\mathrm{Cut} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\noindent
The admissibility of $\mathrm{Cut}$, $!\mathrm{Cut}$, and $\bangbox\mathrm{Cut}$ is shown by
simultaneous induction on the derivation of $\Gamma \, \vdash \, \ottnt{A}$
with the lexicographic complexity $\fbracket{\delta, h}$,
where $\delta$ is the degree of the assumed derivation and $h$ is its height.
Therefore, it is enough to show that for every application of a cut rule, one of the following holds: (1) it can be reduced to a cut with a smaller cut-degree; (2) it can be reduced to a cut with a smaller height; (3) it can be eliminated immediately.
In what follows, we will explain the admissibility of each cut rule separately although the actual proofs are done simultaneously.
\begin{itemize}
\item The admissibility of $\mathrm{Cut}$.
We show that every application of the rule $\mathrm{Cut}$ whose cut-degree is maximal is eliminable.
Thus, consider an application of $\mathrm{Cut}$ in the derivation:
\begin{prooftree}
\AXC{$ \Pi_0 $}
\noLine
\UIC{$\Gamma \, \vdash \, \ottnt{A}$}
\AXC{$ \Pi_1 $}
\noLine
\UIC{$\Gamma' \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \mathrm{Cut} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
such that its cut-degree is maximal and its height is minimal (compared to the other applications whose cut-degree is maximal).
The proof proceeds by case analysis on $\Pi_0$.
\begin{itemize}
\item $\Pi_0$ ends with $\mathrm{Cut}$.
In this case the derivation is as follows:
\begin{prooftree}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma_{{\mathrm{0}}} \, \vdash \, \ottnt{C}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma_{{\mathrm{1}}} \ottsym{,} \ottnt{C} \, \vdash \, \ottnt{A}$}
\RightLabel{$ \mathrm{Cut} $}
\BIC{$\Gamma_{{\mathrm{0}}} \ottsym{,} \Gamma_{{\mathrm{1}}} \, \vdash \, \ottnt{A}$}
\AXC{$ \Pi_1 $}
\noLine
\UIC{$\Gamma' \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \mathrm{Cut} $}
\BIC{$\Gamma_{{\mathrm{0}}} \ottsym{,} \Gamma_{{\mathrm{1}}} \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
Since the bottom application of $\mathrm{Cut}$ was chosen to have the maximal cut-degree and the minimal height,
the cut-degree of the upper application is less than that of the bottom one.
Therefore, the derivation can be translated to the following:
\begin{prooftree}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma_{{\mathrm{0}}} \, \vdash \, \ottnt{C}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma_{{\mathrm{1}}} \ottsym{,} \ottnt{C} \, \vdash \, \ottnt{A}$}
\AXC{$ \Pi_1 $}
\noLine
\UIC{$\Gamma' \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\doubleLine
\RightLabel{ I.H. }
\BIC{$\Gamma_{{\mathrm{1}}} \ottsym{,} \ottnt{C} \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\doubleLine
\RightLabel{ I.H. }
\BIC{$\Gamma_{{\mathrm{0}}} \ottsym{,} \Gamma_{{\mathrm{1}}} \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
\item $\Pi_0$ ends with $\slimp\!\mathrm{R}$.
In this case, the derivation is as follows:
\begin{prooftree}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma \ottsym{,} \ottnt{A_{{\mathrm{0}}}} \, \vdash \, \ottnt{A_{{\mathrm{1}}}}$}
\RightLabel{$ \slimp\!\mathrm{R} $}
\UIC{$\Gamma \, \vdash \, \ottnt{A_{{\mathrm{0}}}} \multimap \ottnt{A_{{\mathrm{1}}}}$}
\AXC{$ \Pi_1 $}
\noLine
\UIC{$\Gamma' \ottsym{,} \ottnt{A_{{\mathrm{0}}}} \multimap \ottnt{A_{{\mathrm{1}}}} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \mathrm{Cut} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
for some $A_0$ and $A_1$ such that $A \equiv \ottnt{A_{{\mathrm{0}}}} \multimap \ottnt{A_{{\mathrm{1}}}}$.
If the last step in $\Pi_1$ is $\mathrm{Ax}$, then the conclusion coincides with that of $\Pi_0$, so $\Pi_0$ itself is the required derivation.
If the last step in $\Pi_1$ is $\slimp\!\mathrm{L}$, the derivation is as follows:
\begin{prooftree}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma \ottsym{,} \ottnt{A_{{\mathrm{0}}}} \, \vdash \, \ottnt{A_{{\mathrm{1}}}}$}
\RightLabel{$ \slimp\!\mathrm{R} $}
\UIC{$\Gamma \, \vdash \, \ottnt{A_{{\mathrm{0}}}} \multimap \ottnt{A_{{\mathrm{1}}}}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma' \, \vdash \, \ottnt{A_{{\mathrm{0}}}}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma'' \ottsym{,} \ottnt{A_{{\mathrm{1}}}} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \slimp\!\mathrm{L} $}
\BIC{$\Gamma' \ottsym{,} \Gamma'' \ottsym{,} \ottnt{A_{{\mathrm{0}}}} \multimap \ottnt{A_{{\mathrm{1}}}} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \mathrm{Cut} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \ottsym{,} \Gamma'' \, \vdash \, \ottnt{B}$}
\end{prooftree}
which is translated to the following:
\begin{prooftree}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma' \, \vdash \, \ottnt{A_{{\mathrm{0}}}}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma \ottsym{,} \ottnt{A_{{\mathrm{0}}}} \, \vdash \, \ottnt{A_{{\mathrm{1}}}}$}
\doubleLine
\RightLabel{ I.H. }
\BIC{$\Gamma \ottsym{,} \Gamma' \, \vdash \, \ottnt{A_{{\mathrm{1}}}}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma'' \ottsym{,} \ottnt{A_{{\mathrm{1}}}} \, \vdash \, \ottnt{B}$}
\doubleLine
\RightLabel{ I.H. }
\BIC{$\Gamma \ottsym{,} \Gamma' \ottsym{,} \Gamma'' \, \vdash \, \ottnt{B}$}
\end{prooftree}
since the cut-degrees of $A_0$ and $A_1$ are less than that of $A$.
The other cases can be shown by simple commutative conversions.
\item $\Pi_0$ ends with $!\mathrm{R}$.
This case is dealt with as a special case of the case $!\mathrm{R}$ in $!\mathrm{Cut}$.
\item $\Pi_0$ ends with $\bangbox\mathrm{R}$.
This case is dealt with as a special case of the case $\bangbox\mathrm{R}$ in $\bangbox\mathrm{Cut}$.
\item $\Pi_0$ ends with the other rules. Easy.
\end{itemize}
\item The admissibility of $!\mathrm{Cut}$.
As in the case of $\mathrm{Cut}$, consider an application of $!\mathrm{Cut}$:
\begin{prooftree}
\AXC{$ \Pi_0 $}
\noLine
\UIC{$\Gamma \, \vdash \, \ottsym{!} \ottnt{A}$}
\AXC{$ \Pi_1 $}
\noLine
\UIC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{Cut} $}
\BIC{$\Gamma \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
such that its cut-degree is maximal and its height is minimal.
The proof proceeds by case analysis on $\Pi_0$.
\begin{itemize}
\item $\Pi_0$ ends with $\mathrm{Ax}$.
In this case the cut-elimination is done as follows:
\begin{tabular}{c}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$ $}
\UIC{$\ottsym{!} \ottnt{A} \, \vdash \, \ottsym{!} \ottnt{A}$}
\AXC{$ \Pi_1 $}
\noLine
\UIC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{Cut} $}
\BIC{$\ottsym{!} \ottnt{A} \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.09\hsize}
$\overset{\text{Cut elim.}}{\Longrightarrow}$
\end{minipage}
\begin{minipage}{0.30\hsize}
\begin{prooftree}
\AXC{$ \Pi_1 $}
\noLine
\UIC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n} \, \vdash \, \ottnt{B}$}
\doubleLine
\RightLabel{$ !\mathrm{C} $}
\UIC{$\Gamma' \ottsym{,} \ottsym{!} \ottnt{A} \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\item $\Pi_0$ ends with $!\mathrm{R}$.
In this case the derivation is as follows:
\begin{prooftree}
\AXC{$ \vdots $}
\noLine
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \, \vdash \, \ottnt{A}$}
\RightLabel{$ !\mathrm{R} $}
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \, \vdash \, \ottsym{!} \ottnt{A}$}
\AXC{$ \Pi_1 $}
\noLine
\UIC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{Cut} $}
\BIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
Due to the side-condition of $!\mathrm{R}$, we need a further case analysis on $\Pi_1$, as follows.
\begin{itemize}
\item $\Pi_1$ ends with $\mathrm{Cut}$.
In this case the derivation is as follows:
\begin{prooftree}
\AXC{$ \Pi_0 $}
\noLine
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \, \vdash \, \ottsym{!} \ottnt{A}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{k} \, \vdash \, \ottnt{C}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma'' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{l} \ottsym{,} \ottnt{C} \, \vdash \, \ottnt{B}$}
\RightLabel{$ \mathrm{Cut} $}
\BIC{$\Gamma' \ottsym{,} \Gamma'' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{Cut} $}
\BIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \ottsym{,} \Gamma' \ottsym{,} \Gamma'' \, \vdash \, \ottnt{B}$}
\end{prooftree}
where $n = k + l$.
We only deal with the case of $k > 0$ and $l > 0$, and the other cases are easy.
Then, the derivation can be translated to the following:
\begin{prooftree}
\def\hskip 0.05in{\hskip .1in}
\AXC{$ \Pi_0 $}
\noLine
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \, \vdash \, \ottsym{!} \ottnt{A}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{k} \, \vdash \, \ottnt{C}$}
\doubleLine
\RightLabel{ I.H. }
\BIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \ottsym{,} \Gamma' \, \vdash \, \ottnt{C}$}
\AXC{$ \Pi_0 $}
\noLine
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \, \vdash \, \ottsym{!} \ottnt{A}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma'' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{l} \ottsym{,} \ottnt{C} \, \vdash \, \ottnt{B}$}
\doubleLine
\RightLabel{ I.H. }
\BIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \ottsym{,} \Gamma'' \ottsym{,} \ottnt{C} \, \vdash \, \ottnt{B}$}
\doubleLine
\RightLabel{ I.H. }
\BIC{$ ( \bangbox \Gamma_{{\mathrm{0}}} )^2 \ottsym{,} ( \ottsym{!} \Gamma_{{\mathrm{1}}} )^2 \ottsym{,} \Gamma' \ottsym{,} \Gamma'' \, \vdash \, \ottnt{B}$}
\doubleLine
\RightLabel{$ !\mathrm{C}, \bangbox\mathrm{C} $}
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \ottsym{,} \Gamma' \ottsym{,} \Gamma'' \, \vdash \, \ottnt{B}$}
\end{prooftree}
since, by the choice of the application of $!\mathrm{Cut}$, the cut-degree of the final application of $\mathrm{Cut}$ (on $C$) is strictly less than that of the $!\mathrm{Cut}$ being eliminated.
\item $\Pi_1$ ends with $!\mathrm{L}$.
If the formula introduced by $!\mathrm{L}$ is not the cut-formula, then it is easy.
For the other case, the derivation is as follows:
\begin{prooftree}
\AXC{$ \vdots $}
\noLine
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \, \vdash \, \ottnt{A}$}
\RightLabel{$ !\mathrm{R} $}
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \, \vdash \, \ottsym{!} \ottnt{A}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n-1} \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{L} $}
\UIC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{Cut} $}
\BIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
which is translated to the following:
\begin{prooftree}
\AXC{$ \vdots $}
\noLine
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \, \vdash \, \ottnt{A}$}
\AXC{$ \Pi_0 $}
\noLine
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \, \vdash \, \ottsym{!} \ottnt{A}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n-1} \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\doubleLine
\RightLabel{ I.H. }
\BIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \ottsym{,} \Gamma' \ottsym{,} \ottnt{A} \, \vdash \, \ottnt{B}$}
\doubleLine
\RightLabel{ I.H. }
\BIC{$ ( \bangbox \Gamma_{{\mathrm{0}}} )^2 \ottsym{,} ( \ottsym{!} \Gamma_{{\mathrm{1}}} )^2 \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\doubleLine
\RightLabel{$ !\mathrm{C}, \bangbox\mathrm{C} $}
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
\item $\Pi_1$ ends with $!\mathrm{C}$.
If the formula introduced by $!\mathrm{C}$ is not the cut-formula, then it is easy.
For the other case, the cut-elimination is done as follows:
\begin{tabular}{c}
\begin{minipage}{0.38\hsize}
\begin{prooftree}
\def2pt{2pt}
\def\hskip 0.05in{\hskip 0.05in}
\AXC{$ \Pi_0 $}
\noLine
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \, \vdash \, \ottsym{!} \ottnt{A}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n+1} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{C} $}
\UIC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n} \, \vdash \, \ottnt{B}$}
\RightLabel{$ !\mathrm{Cut} $}
\BIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.08\hsize}
$\overset{\text{Cut elim.}}{\Longrightarrow}$
\end{minipage}
\begin{minipage}{0.38\hsize}
\begin{prooftree}
\def2pt{2pt}
\def\hskip 0.05in{\hskip 0.05in}
\AXC{$ \Pi_0 $}
\noLine
\UIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \, \vdash \, \ottsym{!} \ottnt{A}$}
\AXC{$ \vdots $}
\noLine
\UIC{$\Gamma' \ottsym{,} ( \ottsym{!} \ottnt{A} )^{n+1} \, \vdash \, \ottnt{B}$}
\RightLabel{ I.H. }
\doubleLine
\BIC{$ \bangbox \Gamma_{{\mathrm{0}}} \ottsym{,} \ottsym{!} \Gamma_{{\mathrm{1}}} \ottsym{,} \Gamma' \, \vdash \, \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
Note that the whole proof does not proceed by induction on $n$; hence the number of occurrences of $!A$ does not matter in this case.
\item $\Pi_1$ ends with the other rules. Easy.
\end{itemize}
\item $\Pi_0$ ends with the other rules. Easy.
\end{itemize}
\item The admissibility of $\bangbox\mathrm{Cut}$. Similar to the case of $!\mathrm{Cut}$. \qedhere
\end{itemize} \end{proof}
\subsection{Strong normalizability of the typed $\lambda$-calculus for modal linear logic}
We complete the proof of the strong normalization theorem for $\lambda^{\smallbangbox}\,$. As we mentioned, this is done by an embedding into a typed $\lambda$-calculus for the $(!, \multimap)$-fragment of dual intuitionistic linear logic, which was studied by Ohta and Hasegawa\,\cite{OH:terminating_linear_lambda} and shown to be strongly normalizing.
\begin{rulefigure}{fig:lambdabanglimp}{Definition of $\lambda^{!, \slimp}$ (some of the syntax is changed to fit the present paper's notation).}
\hspace{-1em}
\begin{tabular}{c}
{
\hspace{-1.2em}
\begin{minipage}{0.98\hsize}
\paragraph*{Syntactic category}
\begin{alignat*}{3}
&\text{Types}~~&A, B, C &::= p ~|~ \ottnt{A} \multimap \ottnt{B} ~|~ \ottsym{!} \ottnt{A}\\
&\text{Terms}~~&M, N, L &::= x ~|~ \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} ~|~ \ottnt{M} \, \ottnt{N} ~|~ ! \ottnt{M} ~|~ \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N}
\end{alignat*}
\end{minipage}
}
\end{tabular}
\hspace{-1em}
\begin{tabular}{c}
{
\hspace{-1.2em}
\begin{minipage}{0.98\hsize}
\paragraph*{Typing rule}
\begin{center}
\begin{tabular}{c}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$ $}
\RightLabel{$ \mathrm{LinAx} $}
\UIC{$\Gamma \ottsym{;} \ottmv{x} \ottsym{:} \ottnt{A} \, \vdash \, \ottmv{x} \ottsym{:} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.45\hsize}
\begin{prooftree}
\AXC{$ $}
\RightLabel{$ !\mathrm{Ax} $}
\UIC{$\Gamma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \emptyset \, \vdash \, \ottmv{x} \ottsym{:} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\begin{tabular}{c}
\begin{minipage}{0.38\hsize}
\begin{prooftree}
\AXC{$\Gamma \ottsym{;} \Sigma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{B}$}
\RightLabel{$ \slimp\!\mathrm{I} $}
\UIC{$\Gamma \ottsym{;} \Sigma \, \vdash \, \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} \ottsym{:} \ottnt{A} \multimap \ottnt{B}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.53\hsize}
\begin{prooftree}
\AXC{$\Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A} \multimap \ottnt{B}$}
\AXC{$\Gamma \ottsym{;} \Sigma' \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{A}$}
\RightLabel{$ \slimp\!\mathrm{E} $}
\BIC{$\Gamma \ottsym{;} \Sigma \ottsym{,} \Sigma' \, \vdash \, \ottnt{M} \, \ottnt{N} \ottsym{:} \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\begin{tabular}{c}
\begin{minipage}{0.37\hsize}
\begin{prooftree}
\AXC{$\Gamma \ottsym{;} \emptyset \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$}
\RightLabel{$ !\mathrm{I} $}
\UIC{$\Gamma \ottsym{;} \emptyset \, \vdash \, ! \ottnt{M} \ottsym{:} \ottsym{!} \ottnt{A}$}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.53\hsize}
\begin{prooftree}
\AXC{$\Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottsym{!} \ottnt{A}$}
\AXC{$\Gamma \ottsym{,} \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{;} \Sigma' \, \vdash \, \ottnt{N} \ottsym{:} \ottnt{B}$}
\RightLabel{$ !\mathrm{E} $}
\BIC{$\Gamma \ottsym{;} \Sigma \ottsym{,} \Sigma' \, \vdash \, \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} \ottsym{:} \ottnt{B}$}
\end{prooftree}
\end{minipage}
\end{tabular}
\end{center}
\end{minipage}
}
\end{tabular}
\hspace{-1em}
\begin{tabular}{c}
{
\hspace{-1.2em}
\begin{minipage}{0.98\hsize}
\paragraph*{Reduction rule}
\begin{alignat*}{3}
\ottsym{(} \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} \ottsym{)} \, \ottnt{N} & \leadsto \ottnt{M} [ \ottmv{x} := \ottnt{N} ] &&\\
\lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} \, \ottmv{x} & \leadsto \ottnt{M} &&\\
\ottkw{let} \, ! \ottmv{x} \ottsym{=} ! \ottnt{M} \, \ottkw{in} \, \ottnt{N} & \leadsto \ottnt{N} [ \ottmv{x} := \ottnt{M} ] &&\\
\ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, C [ ! \ottmv{x} ] & \leadsto C [ \ottnt{M} ] &&\\
\ottsym{(} \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} \ottsym{)} \, \ottnt{L} & \leadsto \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} \, \ottnt{L} &&\\
\ottkw{let} \, ! \ottmv{y} \ottsym{=} \ottsym{(} \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} \ottsym{)} \, \ottkw{in} \, \ottnt{L} & \leadsto \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottkw{let} \, ! \ottmv{y} \ottsym{=} \ottnt{N} \, \ottkw{in} \, \ottnt{L} &&\\
\lambda \ottmv{y} \ottsym{:} \ottnt{A} \ottsym{.} \ottsym{(} \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} \ottsym{)} & \leadsto \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \lambda \ottmv{y} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{N}
&\hspace{0.3em}\text{(if $y \not\in \fv{M}$)}&\\
\ottnt{L} \, \ottsym{(} \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} \ottsym{)} & \leadsto \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{L} \, \ottnt{N} &&\\
\lambda \ottmv{z} \ottsym{:} \ottnt{A} \ottsym{.} \ottsym{(} \ottkw{let} \, ! \ottmv{y} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{L} \, \ottkw{in} \, \ottnt{N} \ottsym{)} & \leadsto \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{L} \, \ottkw{in} \, \lambda \ottmv{z} \ottsym{:} \ottnt{A} \ottsym{.} \ottsym{(} \ottkw{let} \, ! \ottmv{y} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} \ottsym{)}
&\hspace{0.3em}\text{(if $y \not\in \fv{L}$)}&
\end{alignat*}
where $C[-]$ is a linear context defined by the following grammar:
\begin{align*}
C ::= [-] ~|~ \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} C ~|~ C \, \ottnt{M} ~|~ \ottnt{M} \, C ~|~ \ottkw{let} \, ! \ottmv{x} \ottsym{=} C \, \ottkw{in} \, \ottnt{M} ~|~ \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, C \\
\end{align*}
\end{minipage}
}
\end{tabular}
\end{rulefigure}
The calculus of Ohta and Hasegawa, named $\lambda^{!, \slimp}$ here, is given in Figure~\ref{fig:lambdabanglimp}. The syntax and the typing rules can be read in the same way as (the ($!$, $\multimap$)-fragment of) $\lambda^{\smallbangbox}\,$. There are considerably more reduction rules than in $\lambda^{\smallbangbox}\,$, because Ohta and Hasegawa also consider $\eta$-rules and commutative conversions. The difference between the sets of reduction rules causes no problem in proving the strong normalizability of $\lambda^{\smallbangbox}\,$.
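To illustrate how the two $\beta$-like rules (the first and the third reduction rules of Figure~\ref{fig:lambdabanglimp}) operate, the following small sketch implements them on a naive term representation. It is purely illustrative and not part of the formal development: the datatypes and function names are ours, and substitution assumes that the substituted term is closed, so variable capture is ignored.
\begin{verbatim}
# Illustrative sketch (ours): the two beta-like reductions of the calculus
# on a naive term representation; capture avoidance is omitted, so the
# argument being substituted is assumed to be closed.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:          # \x:A. M  (the type annotation is omitted)
    var: str
    body: object

@dataclass
class App:          # M N
    fun: object
    arg: object

@dataclass
class Bang:         # !M
    body: object

@dataclass
class LetBang:      # let !x = M in N
    var: str
    bound: object
    body: object

def subst(term, x, n):
    """M[x := N]; N is assumed to be closed."""
    if isinstance(term, Var):
        return n if term.name == x else term
    if isinstance(term, Lam):
        return term if term.var == x else Lam(term.var, subst(term.body, x, n))
    if isinstance(term, App):
        return App(subst(term.fun, x, n), subst(term.arg, x, n))
    if isinstance(term, Bang):
        return Bang(subst(term.body, x, n))
    if isinstance(term, LetBang):
        body = term.body if term.var == x else subst(term.body, x, n)
        return LetBang(term.var, subst(term.bound, x, n), body)
    return term

def step(term):
    """One top-level step: beta, or  let !x = !M in N  ~>  N[x := M]."""
    if isinstance(term, App) and isinstance(term.fun, Lam):
        return subst(term.fun.body, term.fun.var, term.arg)
    if isinstance(term, LetBang) and isinstance(term.bound, Bang):
        return subst(term.body, term.var, term.bound.body)
    return term

# Example:  let !x = !(\y. y) in (x z)   ~>   (\y. y) z   ~>   z
t = LetBang("x", Bang(Lam("y", Var("y"))), App(Var("x"), Var("z")))
assert step(step(t)) == Var("z")
\end{verbatim}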
\begin{rulefigure}{fig:embedding}{Definition of the embeddings $ \fembed{ \ottnt{A} } $, $ \fembed{ \Gamma } $, and $ \fembed{ \ottnt{M} } $.}
\begin{center}
\begin{tabular}{c|c}
{
\hspace{-1em}
\begin{minipage}{0.40\hsize}
\underline{$ \fembed{ \ottnt{A} } $}
\begin{align*}
\fembed{ \ottmv{p} } &\defeq \ottmv{p}\\
\fembed{ \ottnt{A} \multimap \ottnt{B} } &\defeq \fembed{ \ottnt{A} } \multimap \fembed{ \ottnt{B} } \\
\fembed{ \ottsym{!} \ottnt{A} } &\defeq \ottsym{!} \fembed{ \ottnt{A} } \\
\fembed{ \bangbox \ottnt{A} } &\defeq \ottsym{!} \fembed{ \ottnt{A} }
\end{align*}
\rule{\textwidth}{0.4pt}
\underline{$ \fembed{ \Gamma } $}
\begin{align*}
\fembed{ \Gamma } &\defeq \{ (x : \fembed{ \ottnt{A} } ) ~|~ (x : A) \in \Gamma \}
\end{align*}
\end{minipage}
}
&
{
\begin{minipage}{0.52\hsize}
\underline{$ \fembed{ \ottnt{M} } $}
\begin{align*}
\fembed{ \ottmv{x} } &\defeq \ottmv{x}\\
\fembed{ \lambda \ottmv{x} \ottsym{:} \ottnt{A} \ottsym{.} \ottnt{M} } &\defeq \lambda \ottmv{x} \ottsym{:} \fembed{ \ottnt{A} } \ottsym{.} \fembed{ \ottnt{M} } \\
\fembed{ \ottnt{M} \, \ottnt{N} } &\defeq \fembed{ \ottnt{M} } \, \fembed{ \ottnt{N} } \\
\fembed{ ! \ottnt{M} } &\defeq ! \fembed{ \ottnt{M} } \\
\fembed{ \ottkw{let} \, ! \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} } &\defeq \ottkw{let} \, ! \ottmv{x} \ottsym{=} \fembed{ \ottnt{M} } \, \ottkw{in} \, \fembed{ \ottnt{N} } \\
\fembed{ \bangbox \ottnt{M} } &\defeq ! \fembed{ \ottnt{M} } \\
\fembed{ \ottkw{let} \, \bangbox \ottmv{x} \ottsym{=} \ottnt{M} \, \ottkw{in} \, \ottnt{N} } &\defeq \ottkw{let} \, ! \ottmv{x} \ottsym{=} \fembed{ \ottnt{M} } \, \ottkw{in} \, \fembed{ \ottnt{N} }
\end{align*}
\end{minipage}
}
\end{tabular}
\end{center}
\end{rulefigure}
\begin{definition}[Embedding]
An \emph{embedding} from $\lambda^{\smallbangbox}\,$ to $\lambda^{!, \slimp}$
is defined to be the triple of the translations $ \fembed{ \ottnt{A} } $, $ \fembed{ \Gamma } $, and $ \fembed{ \ottnt{M} } $
given in Figure~\ref{fig:embedding}. \end{definition}
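For an operational view, the following sketch spells out the type part of the embedding as a recursive function on a small tagged representation of types; it is our own illustration and not part of the formal development (the term part is completely analogous). The essential point is that both $\ottsym{!} \ottnt{A}$ and $\bangbox \ottnt{A}$ are sent to $\ottsym{!}$ of the translated type, so the embedding forgets the modal distinction while typing and reduction are preserved (Lemma~\ref{lem:preservation} below).
\begin{verbatim}
# Illustrative sketch (ours): the type part of the embedding, on a tagged
# tuple representation of types.  Both !A and box-bang A are mapped to
# ! of the embedded A.
def embed_type(ty):
    tag = ty[0]
    if tag == "var":                      # base type p
        return ty
    if tag == "limp":                     # A -o B
        return ("limp", embed_type(ty[1]), embed_type(ty[2]))
    if tag == "bang":                     # !A
        return ("bang", embed_type(ty[1]))
    if tag == "bangbox":                  # box-bang A
        return ("bang", embed_type(ty[1]))
    raise ValueError(tag)

# Example: box-bang(p -o !q) is sent to !(p -o !q)
src = ("bangbox", ("limp", ("var", "p"), ("bang", ("var", "q"))))
assert embed_type(src) == ("bang", ("limp", ("var", "p"), ("bang", ("var", "q"))))
\end{verbatim}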
\begin{lemma}[Preservation of typing and reduction]\
\label{lem:preservation}
\begin{enumerate}
\item If $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$ in $\lambda^{\smallbangbox}\,$, then $ \fembed{ \Delta \ottsym{,} \Gamma } \ottsym{;} \fembed{ \Sigma } \, \vdash \, \fembed{ \ottnt{M} } \ottsym{:} \fembed{ \ottnt{A} } $ in $\lambda^{!, \slimp}$.
\item If $\ottnt{M} \leadsto \ottnt{N}$ in $\lambda^{\smallbangbox}\,$, then $ \fembed{ \ottnt{M} } \leadsto \fembed{ \ottnt{N} } $ in $\lambda^{!, \slimp}$.
\end{enumerate} \end{lemma}
\begin{proof}
By induction on $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$ and $\ottnt{M} \leadsto \ottnt{N}$, respectively. \end{proof}
\begin{theorem}[Strong normalization]
In $\lambda^{\smallbangbox}\,$,
for every well-typed term $M$, there are no infinite reduction sequences starting from $M$. \end{theorem}
\begin{proof}
Suppose that there exists an infinite reduction sequence starting from $M$ in $\lambda^{\smallbangbox}\,$.
Then, the term $ \fembed{ \ottnt{M} } $ is well-typed in $\lambda^{!, \slimp}$ and yields an infinite reduction sequence in $\lambda^{!, \slimp}$
by Lemma~\ref{lem:preservation}. However, this contradicts the strong normalizability of $\lambda^{!, \slimp}$. \end{proof}
\subsection{Bracket abstraction algorithm}
In this section, we give the definitions of the bracket abstraction operators.
\begin{rulefigure}{fig:bracket_abstraction}{Definitions of $\ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$, $\ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$, and $\ottsym{(} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$ for bracket abstraction.}
\hspace{-1em}
\begin{tabular}{c}
{
\begin{minipage}{0.97\hsize}
\underline{$ \lambda_{*} \ottmv{x} . \ottnt{M} $}
\begin{alignat*}{3}
\lambda_{*} \ottmv{x} . \ottmv{x} &\defeq& &~ \mathrm{I} &&\\
\lambda_{*} \ottmv{x} . \ottsym{(} \ottnt{M} \, \ottnt{N} \ottsym{)} &\defeq& &~ \mathrm{C} \, \ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \, \ottnt{N} &&~~~\text{if $x \in \mathrm{FV}(M)$ }\\
\lambda_{*} \ottmv{x} . \ottsym{(} \ottnt{M} \, \ottnt{N} \ottsym{)} &\defeq& &~ \mathrm{B} \, \ottnt{M} \, \ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{N} \ottsym{)} &&~~~\text{if $x \in \mathrm{FV}(N)$ }
\end{alignat*}
\end{minipage}
}
\end{tabular}
\rule{\textwidth}{0.4pt}
\hspace{-1em}
\begin{tabular}{c|c}
{
\begin{minipage}{0.45\hsize}
\underline{$ \lambda^{!}_{*} \ottmv{x} . \ottnt{M} $}
\begin{alignat*}{3}
\hspace{-1em} \lambda^{!}_{*} \ottmv{x} . \ottmv{x} &\defeq& &~ \mathrm{T}^! &&\\
\hspace{-1em} \lambda^{!}_{*} \ottmv{x} . \ottnt{M} &\defeq& &~ \mathrm{K}^! \, \ottnt{M} &&~~~\text{if $(a)$}\\%~~~\text{if $x \not\in \mathrm{FV}(M)$ }\\
\hspace{-1em} \lambda^{!}_{*} \ottmv{x} . \ottsym{(} \ottnt{M} \, \ottnt{N} \ottsym{)} &\defeq& &~ \mathrm{C} \, \ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \, \ottnt{N} &&~~~\text{if $(b)$}\\%~~~\text{if $x \in \mathrm{FV}(M)$ and $x \not\in \mathrm{FV}(N)$ }\\
\hspace{-1em} \lambda^{!}_{*} \ottmv{x} . \ottsym{(} \ottnt{M} \, \ottnt{N} \ottsym{)} &\defeq& &~ \mathrm{B} \, \ottnt{M} \, \ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottnt{N} \ottsym{)} &&~~~\text{if $(c)$}\\%~~~\text{if $x \not\in \mathrm{FV}(M)$ and $x \in \mathrm{FV}(N)$ }\\
\hspace{-1em} \lambda^{!}_{*} \ottmv{x} . \ottsym{(} \ottnt{M} \, \ottnt{N} \ottsym{)} &\defeq& &~ \mathrm{S}^! \, \ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \, \ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottnt{N} \ottsym{)} &&~~~\text{if $(d)$}\\%~~~\text{if $x \in \mathrm{FV}(M)$ and $x \in \mathrm{FV}(N)$ }\\
\hspace{-1em} \lambda^{!}_{*} \ottmv{x} . \ottsym{(} ! \ottnt{M} \ottsym{)} &\defeq& &~ \mathrm{B} \, \ottsym{(} \mathrm{D}^! \, ! \ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \ottsym{)} \, \mathrm{4}^! &&
\end{alignat*}
\end{minipage}
}
&
{
\begin{minipage}{0.45\hsize}
\underline{$ \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{M} $}
\begin{alignat*}{3}
\hspace{-1em} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottmv{x} &\defeq& &~ \mathrm{T}^{\smallbangbox} &&\\
\hspace{-1em} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{M} &\defeq& &~ \mathrm{K}^{\smallbangbox} \, \ottnt{M} &&\text{if $(a)$}\\%~~~\text{if $x \not\in \mathrm{FV}(M)$ }\\
\hspace{-1em} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottsym{(} \ottnt{M} \, \ottnt{N} \ottsym{)} &\defeq& &~ \mathrm{C} \, \ottsym{(} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \, \ottnt{N} &&\text{if $(b)$}\\%~~~\text{if $x \in \mathrm{FV}(M)$ and $x \not\in \mathrm{FV}(N)$ }\\
\hspace{-1em} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottsym{(} \ottnt{M} \, \ottnt{N} \ottsym{)} &\defeq& &~ \mathrm{B} \, \ottnt{M} \, \ottsym{(} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{N} \ottsym{)} &&\text{if $(c)$}\\%~~~\text{if $x \not\in \mathrm{FV}(M)$ and $x \in \mathrm{FV}(N)$ }\\
\hspace{-1em} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottsym{(} \ottnt{M} \, \ottnt{N} \ottsym{)} &\defeq& &~ \mathrm{S}^{\smallbangbox} \, \ottsym{(} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \, \ottsym{(} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{N} \ottsym{)} &&\text{if $(d)$}\\%~~~\text{if $x \in \mathrm{FV}(M)$ and $x \in \mathrm{FV}(N)$ }\\
\hspace{-1em} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottsym{(} ! \ottnt{M} \ottsym{)} &\defeq& &~ \mathrm{B} \, \ottsym{(} \mathrm{D}^! \, \ottsym{(} ! \ottsym{(} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \ottsym{)} \ottsym{)} \, \ottsym{(} \mathrm{B} \, \mathrm{E} \, \mathrm{4}^{\smallbangbox}\, \ottsym{)} &&\\
\hspace{-1em} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottsym{(} \bangbox \ottnt{M} \ottsym{)} &\defeq& &~ \mathrm{B} \, \ottsym{(} \mathrm{D}^{\smallbangbox} \, \ottsym{(} \bangbox \ottsym{(} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \ottsym{)} \ottsym{)} \, \mathrm{4}^{\smallbangbox}\, &&
\end{alignat*}
\end{minipage}
}
\end{tabular}
\rule{\textwidth}{0.4pt}
where $(a), (b), (c), (d)$ denote the conditions $(x \not\in \fv{M})$, $(x \in \fv{M} \text{ and } x \not\in \fv{N})$,
$(x \not\in \fv{M} \text{ and } x \in \fv{N})$, $(x \in \fv{M} \text{ and } x \in \fv{N}) $, respectively.
\end{rulefigure}
\begin{definition}[Bracket abstraction]
Let $M$ be a term of $\mathrm{CL}^{\smallbangbox}\,$ such that $\Delta \ottsym{;} \Gamma \ottsym{;} \Sigma \, \vdash \, \ottnt{M} \ottsym{:} \ottnt{A}$ and $x \in \fv{M}$ for some $\Delta, \Gamma, \Sigma, A$ and $x$.
Then, the \emph{bracket abstraction of $M$ with respect to $x$} is defined to be one of the following, depending on the kind of the variable $x$:
\begin{alignat*}{3}
\hspace{0.30\textwidth}&\ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)} &&\hspace{3em}\text{if $x \in \fdom{\Sigma}$\footnotemark;}\\
\hspace{0.30\textwidth}&\ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottnt{M} \ottsym{)} &&\hspace{3em}\text{if $x \in \fdom{\Gamma}$;}\\
\hspace{0.30\textwidth}&\ottsym{(} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{M} \ottsym{)} &&\hspace{3em}\text{if $x \in \fdom{\Delta}$,}
\end{alignat*}
where each one of $\ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$, $\ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$, and $\ottsym{(} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$
is the meta-level \emph{bracket abstraction} operation given in Figure~\ref{fig:bracket_abstraction},
which takes the pair of $x$ and $M$, and yields a $\mathrm{CL}^{\smallbangbox}\,$-term.
\footnotetext{$\fdom{\Gamma}$ is defined to be the set $\{ x ~|~ (x : A) \in \Gamma \}$ for all type contexts $\Gamma$.} \end{definition}
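As a concrete illustration of the clauses in Figure~\ref{fig:bracket_abstraction}, the following sketch implements the purely linear operation $\ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$ on a toy representation of terms (variables and the combinators $\mathrm{I}$, $\mathrm{B}$, $\mathrm{C}$ as strings, applications as pairs). It is our own illustration and not part of the formal development; by linearity, $x$ is assumed to occur exactly once in $M$, so exactly one clause applies at each step.
\begin{verbatim}
# Illustrative sketch (ours) of the linear bracket abstraction (lambda_* x. M).
# A term is a variable/combinator name (string) or an application (pair).
def fv(term):
    if isinstance(term, str):
        return {term} if term not in ("I", "B", "C") else set()
    return fv(term[0]) | fv(term[1])

def abstract_linear(x, term):
    """(lambda_* x. term), assuming x occurs exactly once in term."""
    if term == x:
        return "I"                                   # lambda_* x. x  =  I
    m, n = term                                      # term must be an application
    if x in fv(m):
        return (("C", abstract_linear(x, m)), n)     # C (lambda_* x. M) N
    else:
        return (("B", m), abstract_linear(x, n))     # B M (lambda_* x. N)

# Example: lambda_* x. (f (x g))  =  B f (C I g)
assert abstract_linear("x", ("f", ("x", "g"))) == (("B", "f"), (("C", "I"), "g"))
\end{verbatim}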
\begin{remark}
As in the case of the standard bracket abstraction algorithm,
the intuition behind the operations $\ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$, $\ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$, and $\ottsym{(} \lambda^{\smallbangbox}_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$
is that they are defined so as to mimic the $\lambda$-abstraction operation in the framework of combinatory logic.
For instance, the denotation of $\ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$ is a $\mathrm{CL}^{\smallbangbox}\,$-term that represents a function with the parameter $x$, that is, a term satisfying $ \ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \, \ottnt{N} \leadsto^+ \ottnt{M} [ \ottmv{x} := \ottnt{N} ] $ in $\mathrm{CL}^{\smallbangbox}\,$,
for all $\mathrm{CL}^{\smallbangbox}\,$-terms $N$. \end{remark}
\begin{remark}
There are no definitions for some cases of $\ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$ and $\ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottnt{M} \ottsym{)}$,
e.g., the case $\ottsym{(} \lambda_{*} \ottmv{x} . \ottsym{(} \ottnt{M} \, \ottnt{N} \ottsym{)} \ottsym{)}$ with $x \in \fv{M}$ and $x \in \fv{N}$,
and the case $\ottsym{(} \lambda^{!}_{*} \ottmv{x} . \ottsym{(} \bangbox \ottnt{M} \ottsym{)} \ottsym{)}$. This is because these cases are actually unnecessary, due to the linearity condition and the side condition of the $\bangbox$ rule.
Moreover, the well-definedness of the bracket abstraction operations can be shown by induction on $M$;
indeed, the proof of the deduction theorem can be seen as what justifies it.
The intended properties, such as $ \ottsym{(} \lambda_{*} \ottmv{x} . \ottnt{M} \ottsym{)} \, \ottnt{N} \leadsto^+ \ottnt{M} [ \ottmv{x} := \ottnt{N} ] $, can also be shown by an easy calculation. \end{remark}
\end{document}
\begin{document}
\def{\mathbb R}{{\mathbb R}} \def{\mathbb Z}{{\mathbb Z}} \def{\mathbb C}{{\mathbb C}} \newcommand{\rm trace}{\rm trace} \newcommand{{\mathbb{E}}}{{\mathbb{E}}} \newcommand{{\mathbb{P}}}{{\mathbb{P}}} \newcommand{{\cal E}}{{\cal E}} \newcommand{{\cal F}}{{\cal F}} \newtheorem{df}{Definition} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{pr}{Proposition} \newtheorem{co}{Corollary} \def\nu{\nu} \def\mbox{ sign }{\mbox{ sign }} \def\alpha{\alpha} \def{\mathbb N}{{\mathbb N}} \def{\cal A}{{\cal A}} \def{\cal L}{{\cal L}} \def{\cal X}{{\cal X}} \def{\cal F}{{\cal F}} \def\bar{c}{\bar{c}} \def\nu{\nu} \def\delta{\delta} \def\mbox{\rm dim}{\mbox{\rm dim}} \def\mbox{\rm Vol}{\mbox{\rm Vol}} \def\beta{\beta} \def\theta{\theta} \def\lambda{\lambda} \def\varepsilon{\varepsilon} \def{:}\;{{:}\;} \def\noindent {\bf Proof : \ }{\noindent {\bf Proof : \ }} \def\endpf{ \begin{flushright} $ \Box $ \\ \end{flushright}}
\title[Hyperplane inequality for measures]{A $\sqrt{n}$ estimate for measures of hyperplane sections of convex bodies}
\author{Alexander Koldobsky}
\address{Department of Mathematics\\ University of Missouri\\ Columbia, MO 65211}
\email{koldobskiya@@missouri.edu}
\begin{abstract} The hyperplane (or slicing) problem asks whether there exists an absolute constant $C$ so that for any origin-symmetric convex body $K$ in ${\mathbb R}^n$ $$
|K|^{\frac {n-1}n} \le C \max_{\xi \in S^{n-1}} |K\cap \xi^\bot|, $$ where $\xi^\bot$ is the central hyperplane in ${\mathbb R}^n$ perpendicular to $\xi,$ and
$|K|$ stands for volume of proper dimension. The problem is still open, with the best-to-date estimate $C\sim n^{1/4}$ established by Klartag, who slightly improved the previous estimate of Bourgain. It is much easier to get a weaker estimate with $C=\sqrt{n}.$
In this note we show that the $\sqrt{n}$ estimate holds for arbitrary measure in place of volume. Namely, if $L$ is an origin-symmetric convex body in ${\mathbb R}^n$ and $\mu$ is a measure with non-negative even continuous density on $L,$ then $$\mu(L)\ \le\ \sqrt{n} \frac n{n-1} c_n\max_{\xi \in S^{n-1}}
\mu(L\cap \xi^\bot)\ |L|^{1/n} \ ,$$
where $c_n= \left|B_2^n\right|^{\frac{n-1}n}/ \left|B_2^{n-1}\right| < 1,$ and $B_2^n$ is the unit Euclidean ball in ${\mathbb R}^n.$ We deduce this inequality from a stability result for intersection bodies.
\end{abstract} \maketitle
\section{Introduction} The hyperplane (or slicing) problem \cite{Bo1, Bo2, Ba, MP} asks whether there exists an absolute constant $C$ so that for any origin-symmetric convex body $K$ in ${\mathbb R}^n$ \begin{equation} \label{hyper}
|K|^{\frac {n-1}n} \le C \max_{\xi \in S^{n-1}} |K\cap \xi^\bot|, \end{equation} where $\xi^\bot$ is the central hyperplane in ${\mathbb R}^n$ perpendicular to $\xi,$ and
$|K|$ stands for volume of proper dimension. The problem is still open, with the best-to-date estimate $C\sim n^{1/4}$ established by Klartag \cite{Kl}, who slightly improved the previous estimate of Bourgain \cite{Bo3}. We refer the reader to [BGVV] for the history and current state of the problem.
In the case where $K$ is an intersection body (see definition and properties below), the inequality (\ref{hyper}) can be proved with the best possible constant (\cite[p. 374]{G2}): \begin{equation}\label{hyper-inter}
|K|^{\frac {n-1}n} \le \frac{\left|B_2^n\right|^{\frac{n-1}n}}{\left|B_2^{n-1}\right|}
\max_{\xi \in S^{n-1}} |K\cap \xi^\bot|, \end{equation}
with equality when $K=B_2^n$ is the unit Euclidean ball. Here $|B_2^n|= \pi^{n/2}/\Gamma(1+n/2)$ is the volume of $B_2^n.$ Throughout the paper, we denote the constant in (\ref{hyper-inter}) by
$$c_n= \frac{\left|B_2^n\right|^{\frac{n-1}n}}{\left|B_2^{n-1}\right|} .$$ Note that $c_n<1$ for every $n\in {\mathbb N};$ this is an easy consequence of the log-convexity of the $\Gamma$-function.
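The bound $c_n<1$ is also easy to confirm numerically from the formula $|B_2^n|= \pi^{n/2}/\Gamma(1+n/2)$. The following short script is purely illustrative and not part of the argument; it simply evaluates $c_n$ for a few values of $n$.
\begin{verbatim}
# Numerical sanity check of c_n = |B_2^n|^{(n-1)/n} / |B_2^{n-1}|,
# using |B_2^n| = pi^{n/2} / Gamma(1 + n/2).
from math import pi, gamma

def ball_volume(n):
    return pi ** (n / 2) / gamma(1 + n / 2)

def c(n):
    return ball_volume(n) ** ((n - 1) / n) / ball_volume(n - 1)

for n in (2, 3, 4, 10, 100):
    assert c(n) < 1
# c(2) ~ 0.886 and c(3) ~ 0.827; the values stay below 1 and approach
# roughly 0.61 for large n.
\end{verbatim}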
It was proved in \cite{K3} that inequality (\ref{hyper}) holds for intersection bodies with arbitrary measure in place of volume. Let $f$ be an even continuous non-negative function on ${\mathbb R}^n,$ and denote by $\mu$ the measure on ${\mathbb R}^n$ with density $f$. For every closed bounded set $B\subset {\mathbb R}^n$ define $$\mu(B)=\int\limits_B f(x)\ dx.$$ Suppose that $K$ is an intersection body in ${\mathbb R}^n.$ Then, as proved in \cite[Theorem 1]{K3} (see also a remark at the end of the paper \cite{K3}), \begin{equation} \label{arbmeas}
\mu(K) \le \frac n{n-1} c_n \max_{\xi \in S^{n-1}} \mu(K\cap \xi^\bot)\ |K|^{1/n}. \end{equation} The constant in the latter inequality is the best possible.
This note was motivated by a question of whether one can remove the assumption that $K$ is an intersection body and prove the inequality (\ref{arbmeas}) for all origin-symmetric convex bodies, perhaps at the expense of a greater constant in the right-hand side. One would like this extra constant to be independent of the body or measure. In this note we prove the following inequality.
\begin{theorem}\label{main} Let $L$ be an origin-symmetric convex body in ${\mathbb R}^n,$ and let $\mu$ be a measure with even continuous non-negative density on $L.$ Then \begin{equation} \label{sqrtn} \mu(L)\ \le\ \sqrt{n} \frac n{n-1} c_n\max_{\xi \in S^{n-1}}
\mu(L\cap \xi^\bot)\ |L|^{1/n}. \end{equation} \end{theorem}
In the case of volume, the estimate (\ref{hyper}) with $C=\sqrt{n}$ can be proved relatively easily (see \cite[p. 96]{MP} or \cite[Theorem 8.2.13]{G2}), and it is not optimal, as mentioned above. The author does not know whether the estimate (\ref{sqrtn}) is optimal for arbitrary measures.
\section{Proof of Theorem \ref{main}}
We need several definitions and facts. A closed bounded set $K$ in ${\mathbb R}^n$ is called a {\it star body} if every straight line passing through the origin crosses the boundary of $K$ at exactly two points different from the origin, the origin is an interior point of $K,$ and the {\it Minkowski functional} of $K$ defined by
$$\|x\|_K = \min\{a\ge 0:\ x\in aK\}$$ is a continuous function on ${\mathbb R}^n.$
The {\it radial function} of a star body $K$ is defined by
$$\rho_K(x) = \|x\|_K^{-1}, \qquad x\in {\mathbb R}^n.$$ If $x\in S^{n-1}$ then $\rho_K(x)$ is the radius of $K$ in the direction of $x.$
If $\mu$ is a measure on $K$ with even continuous density $f$, then \begin{equation} \label{polar-measure}
\mu(K) = \int_K f(x)\ dx = \int\limits_{S^{n-1}}\left(\int\limits_0^{\|\theta\|^{-1}_K} r^{n-1} f(r\theta)\ dr\right) d\theta. \end{equation} Putting $f=1$, one gets \begin{equation} \label{polar-volume}
|K| =\frac{1}{n} \int_{S^{n-1}} \rho_K^n(\theta) d\theta=
\frac{1}{n} \int_{S^{n-1}} \|\theta\|_K^{-n} d\theta. \end{equation}
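The polar formula (\ref{polar-volume}) is easy to check numerically on simple bodies. The following script is purely illustrative and not part of the argument: writing the right-hand side of (\ref{polar-volume}) as $|B_2^n|$ times the average of $\rho_K(\theta)^n$ over directions $\theta$ uniformly distributed on $S^{n-1}$, it estimates the volume of an ellipsoid in ${\mathbb R}^3$ by Monte Carlo sampling and compares it with the exact value; the semiaxes chosen below are arbitrary.
\begin{verbatim}
# Monte Carlo sanity check (ours) of the polar volume formula
# |K| = (1/n) * int_{S^{n-1}} rho_K(theta)^n dtheta
#     = |B_2^n| * (average of rho_K(theta)^n over uniform theta),
# applied to the ellipsoid K = {x : sum x_i^2 / a_i^2 <= 1} in R^3,
# whose exact volume is (4*pi/3) * a_1 * a_2 * a_3.
import random
from math import pi, gamma, sqrt

def unit_ball_volume(n):
    return pi ** (n / 2) / gamma(1 + n / 2)

def random_direction(n):
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    r = sqrt(sum(x * x for x in v))
    return [x / r for x in v]

def rho_ellipsoid(theta, axes):
    # radial function of the ellipsoid in direction theta
    return 1.0 / sqrt(sum((t / a) ** 2 for t, a in zip(theta, axes)))

axes = (1.0, 2.0, 0.5)
n, samples = 3, 200_000
mean = sum(rho_ellipsoid(random_direction(n), axes) ** n
           for _ in range(samples)) / samples
estimate = unit_ball_volume(n) * mean
exact = 4 * pi / 3 * axes[0] * axes[1] * axes[2]
# 'estimate' should agree with 'exact' up to Monte Carlo error.
\end{verbatim}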
The {\it spherical Radon transform} $R:C(S^{n-1})\mapsto C(S^{n-1})$ is a linear operator defined by $$Rf(\xi)=\int_{S^{n-1}\cap \xi^\bot} f(x)\ dx,\quad \xi\in S^{n-1}$$ for every function $f\in C(S^{n-1}).$
The polar formulas (\ref{polar-measure}) and (\ref{polar-volume}), applied to a hyperplane section of $K$, express volume of such a section in terms of the spherical Radon transform:
$$\mu(K\cap \xi^\bot) = \int_{K\cap \xi^\bot} f =
\int_{S^{n-1}\cap \xi^\bot} \left(\int_0^{\|\theta\|_K^{-1}} r^{n-2}f(r\theta)\ dr \right)d\theta$$ \begin{equation} \label{measure=spherradon}
=R\left(\int_0^{\|\cdot\|_K^{-1}} r^{n-2}f(r\ \cdot)\ dr \right)(\xi). \end{equation} and \begin{equation} \label{volume=spherradon}
|K\cap \xi^\bot| = \frac{1}{n-1} \int_{S^{n-1}\cap \xi^\bot} \|\theta\|_K^{-n+1}d\theta =
\frac{1}{n-1} R(\|\cdot\|_K^{-n+1})(\xi). \end{equation}
The spherical Radon transform is self-dual (see \cite[Lemma 1.3.3]{Gr}), namely, for any functions $f,g\in C(S^{n-1})$ \begin{equation} \label{selfdual} \int_{S^{n-1}} Rf(\xi)\ g(\xi)\ d\xi = \int_{S^{n-1}} f(\xi)\ Rg(\xi)\ d\xi. \end{equation} Using self-duality, one can extend the spherical Radon transform to measures. Let $\mu$ be a finite Borel measure on $S^{n-1}.$ We define the spherical Radon transform of $\mu$ as a functional $R\mu$ on the space $C(S^{n-1})$ acting by $$(R\mu,f)= (\mu, Rf)= \int_{S^{n-1}} Rf(x) d\mu(x).$$ By Riesz's characterization of continuous linear functionals on the space $C(S^{n-1})$, $R\mu$ is also a finite Borel measure on $S^{n-1}.$ If $\mu$ has continuous density $g,$ then by (\ref{selfdual}) the Radon transform of $\mu$ has density $Rg.$
The class of intersection bodies was introduced by Lutwak \cite{L}. Let $K, L$ be origin-symmetric star bodies in ${\mathbb R}^n.$ We say that $K$ is the intersection body of $L$ if the radius of $K$ in every direction is equal to the $(n-1)$-dimensional volume of the section of $L$ by the central hyperplane orthogonal to this direction, i.e. for every $\xi\in S^{n-1},$ \begin{equation} \label{intbodyofstar}
\rho_K(\xi)= \|\xi\|_K^{-1} = |L\cap \xi^\bot|. \end{equation} All bodies $K$ that appear as intersection bodies of different star bodies form {\it the class of intersection bodies of star bodies}.
Note that the right-hand side of (\ref{intbodyofstar}) can be written in terms of the spherical Radon transform using (\ref{volume=spherradon}):
$$\|\xi\|_K^{-1}= \frac{1}{n-1} \int_{S^{n-1}\cap \xi^\bot} \|\theta\|_L^{-n+1} d\theta=
\frac{1}{n-1} R(\|\cdot\|_L^{-n+1})(\xi).$$ This means that a star body $K$ is
the intersection body of a star body if and only if the function $\|\cdot\|_K^{-1}$ is the spherical Radon transform of a continuous positive function on $S^{n-1}.$ This allows one to introduce a more general class of bodies. A star body $K$ in ${\mathbb R}^n$ is called an {\it intersection body} if there exists a finite Borel measure \index{intersection body}
$\mu$ on the sphere $S^{n-1}$ so that $\|\cdot\|_K^{-1}= R\mu$ as functionals on $C(S^{n-1}),$ i.e. for every continuous function $f$ on $S^{n-1},$ \begin{equation} \label{defintbody}
\int_{S^{n-1}} \|x\|_K^{-1} f(x)\ dx = \int_{S^{n-1}} Rf(x)\ d\mu(x). \end{equation}
We refer the reader to the books \cite{G2, K2} for more information about intersection bodies and their applications. Let us just say that intersection bodies played a crucial role in the solution of the Busemann-Petty problem. The class of intersection bodies is rather rich. For example, every origin-symmetric convex body in ${\mathbb R}^3$ and ${\mathbb R}^4$ is an intersection body \cite{G1, Z}. The unit ball of any finite dimensional subspace of $L_p,\ 0<p\le 2$ is an intersection body, in particular every polar projection body is an intersection body \cite{K1}.
We deduce Theorem~\ref{main} from the following stability result for intersection bodies. \begin{theorem}\label{stab} Let $K$ be an intersection body in ${\mathbb R}^n,$ let $f$ be an even continuous function on $K$ with $f\ge 1$ everywhere on $K,$ and let $\varepsilon>0.$ If \begin{equation}\label{comp1}
\int_{K\cap \xi^\bot} f \ \le\ |K\cap \xi^\bot| +\varepsilon,\qquad \forall \xi\in S^{n-1}, \end{equation} then \begin{equation}\label{comp2}
\int_K f\ \le\ |K| + \frac {n}{n-1}\ c_n\ |K|^{1/n}\varepsilon. \end{equation} \end{theorem}
\noindent {\bf Proof : \ } First, we use the polar formulas (\ref{measure=spherradon}) and (\ref{volume=spherradon}) to write the condition (\ref{comp1}) in terms of the spherical Radon transform:
$$R\left(\int_0^{\|\cdot\|_K^{-1}} r^{n-2}f(r\ \cdot)\ dr \right)(\xi) \le \frac{1}{n-1} R(\|\cdot\|_K^{-n+1})(\xi) + \varepsilon.$$ Let $\mu$ be the measure on $S^{n-1}$ corresponding to $K$ by the definition of an intersection body (\ref{defintbody}). Integrating both sides of the latter inequality over $S^{n-1}$ with the measure $\mu$ and using (\ref{defintbody}), we get
$$\int_{S^{n-1}} \|\theta\|_K^{-1} \left(\int_0^{\|\theta\|_K^{-1}} r^{n-2}f(r\theta)\ dr \right)d\theta $$ \begin{equation} \label{eq11}
\le \frac{1}{n-1} \int_{S^{n-1}} \|\theta\|_K^{-n}\ d\theta + \varepsilon \int_{S^{n-1}} d\mu(\xi). \end{equation} Recall (\ref{polar-measure}), (\ref{polar-volume}) and the assumption that $f\ge 1.$ We write the integral in the left-hand side of (\ref{eq11}) as
$$\int_{S^{n-1}} \|\theta\|_K^{-1} \left(\int_0^{\|\theta\|_K^{-1}} r^{n-2}f(r\theta)\ dr \right)d\theta $$
$$= \int_{S^{n-1}} \left(\int_0^{\|\theta\|_K^{-1}} r^{n-1}f(r\theta)\ dr \right)d\theta$$
$$+ \int_{S^{n-1}} \left(\int_0^{\|\theta\|_K^{-1}} (\|\theta\|_K^{-1} - r) r^{n-2}f(r\theta)\ dr \right)d\theta$$
$$\ge \int_K f + \int_{S^{n-1}} \left(\int_0^{\|\theta\|_K^{-1}} (\|\theta\|_K^{-1} - r) r^{n-2}\ dr \right)d\theta$$ \begin{equation} \label{eq22}
=\int_K f + \frac 1{(n-1)n} \int_{S^{n-1}} \|\theta\|_K^{-n}\ d\theta = \int_K f + \frac1{n-1} |K|. \end{equation}
Let us estimate the second term on the right-hand side of (\ref{eq11}) by inserting the spherical Radon transform of the constant
function $1$ under the integral ($R1(\xi)=\left|S^{n-2}\right|$ for every $\xi \in S^{n-1}$),
using again the fact that $\|\cdot\|_K^{-1}=R\mu$, and then applying H\"older's inequality:
$$\varepsilon \int_{S^{n-1}} d\mu(\xi) = \frac{\varepsilon}{\left|S^{n-2}\right|} \int_{S^{n-1}} R1(\xi)\ d\mu(\xi)$$
$$=\frac{\varepsilon}{\left| S^{n-2} \right| } \int_{S^{n-1}} \|\theta\|_K^{-1}\ d\theta $$
$$ \le \frac{\varepsilon}{\left|S^{n-2}\right|} \left|S^{n-1}\right|^{\frac{n-1}n} \left(\int_{S^{n-1}} \|\theta\|_K^{-n}\ d\theta\right)^{\frac1n}$$ \begin{equation}\label{eq33}
= \frac{\varepsilon}{\left|S^{n-2}\right|} \left|S^{n-1}\right|^{\frac{n-1}n} n^{1/n}|K|^{1/n}= \frac n{n-1} c_n |K|^{1/n} \varepsilon. \end{equation}
In the last step we used $|S^{n-1}|=n|B_2^n|$ and $|S^{n-2}|=(n-1)|B_2^{n-1}|.$ Combining (\ref{eq11}), (\ref{eq22}), and (\ref{eq33}), we get
$$\int_K f + \frac 1{n-1} |K| \le \frac n{n-1} |K| + \frac n{n-1} c_n |K|^{1/n} \varepsilon. \qed$$ \bigbreak Now we prove our main result. \smallbreak \noindent {\bf Proof of Theorem \ref{main}: } Let $g$ be the density of the measure $\mu,$ so $g$ is an even non-negative continuous function on $L.$ By John's theorem \cite{J}, there exists an origin-symmetric ellipsoid $K$ such that $$\frac 1{\sqrt{n}} K \subset L \subset K.$$ The ellipsoid $K$ is an intersection body (see for example \cite[Corollary 8.1.7]{G2}). Let $f= \chi_K + g \chi_L,$ where $\chi_K,\ \chi_L$ are the indicator functions of $K$ and $L.$ Clearly, $f\ge 1$ everywhere on $K.$ Put
$$\varepsilon=\max_{\xi\in S^{n-1}} \left(\int_{K\cap \xi^\bot} f - |K\cap \xi^\bot| \right)= \max_{\xi\in S^{n-1}} \int_{L\cap \xi^\bot} g$$ and apply Theorem \ref{stab} to $f,K,\varepsilon$ (the function $f$ is not necessarily continuous on $K,$ but the result holds by a simple approximation argument). We get
$$\mu(L)= \int_L g = \int_K f -\ |K|$$
$$ \le \frac n{n-1} c_n |K|^{1/n}\max_{\xi\in S^{n-1}} \int_{L\cap \xi^\bot} g$$
$$ \le \sqrt{n}\ \frac n{n-1} c_n |L|^{1/n}\max_{\xi\in S^{n-1}} \mu(L\cap \xi^\bot),$$
because $\frac 1{\sqrt{n}} K \subset L$ implies $|K|^{1/n}\le \sqrt{n}\ |L|^{1/n}.$ \qed
\bigbreak {\bf Acknowledgement.} I wish to thank the US National Science Foundation for support through grant DMS-1265155.
\end{document}
\begin{document}
\title {A model in which the Separation principle holds for a given effective projective Sigma-class\thanks {This research was partially supported by Russian Foundation for Basic Research RFBR grant number 20-01-00670.} }
\author {Vladimir Kanovei\thanks{ IITP RAS,
\ {\tt [email protected]} . }
\and Vassily Lyubetsky\thanks{IITP RAS, \ {\tt [email protected]}.
} }
\date {\today}
\maketitle
\begin{abstract} In this paper, we prove the following: if $n\ge3$, then there is a generic extension of $\rL$ -- the constructible universe -- in which the Separation principle holds for both effective (lightface) classes
{$\is1n$}
and
{$\ip1n$ } of sets of integers. The result was announced long ago by Leo Harrington with a sketch of the proof for $n=3$; its full proof has never been presented. Our methods are based on a countable product of almost-disjoint forcing notions independent in the sense of Jensen--Solovay. \end{abstract}
\parf{Introduction} \las{int}
The separation problem was introduced in descriptive set theory by Luzin~\cite{lbook}.
In modern terms, the separation principle -- or simply
{\sep}, for a given projective (boldface) class $\fs1n$ or $\fp1n$ -- is the assertion that\vom\vim \bde \item[\bsep\
{\rm for}
$\fs1n$ {\rm or} $\fp1n$:] any pair of disjoint $\fs1n$, resp.,\ $\fp1n$ sets $X,Y$ of reals can be separated by a $\fd1n$ set.\vom\vim
is a question of whether \bsep\ holds for this or another projective class $\fs1n$ or $\fp1n$. Luzin and then Novikov \cite{nov1951} underlined the importance and difficulty of this problem. (See \cite{mDST,Kdst,kl49} for details and further references.)
Luzin \cite{lus:ea,lbook} and Novikov~\cite{nov1931} proved that \bsep\ holds for $\fs11$ but fails for the dual class $\fp11$. Somewhat later, it was established by Novikov~\cite{nov1935} that the picture changes at the next projective level: \bsep\ holds for $\fp12$ but fails for $\fs12$.
As for the higher levels of the projective hierarchy, all attempts made in classical descriptive set theory to solve the separation problem above the second level failed, until some additional set theoretic axioms were introduced---in particular, those by Novikov~\cite{nov1951} and Addison~\cite{add1,add2}. G\"odel's \rit{axiom of constructibility} $\rV=\rL$ implies that, for any $n\ge 3$, \bsep\ holds for $\fp1n$ but fails for $\fs1n$---much as at the second level.
In such a case, it is customary in modern set theory to look for models in which the separation problem is solved differently than under $\rV=\rL$ for at least some projective classes $\fs1n$ and $\fp1n$, $n\ge3$. This goal is split into two different tasks: \ben \Renu \itlb{tas1} Prove the \rit{independence} of the \dd\Pi side \bsep---that is, given $n\ge3$, find models in which \bsep\ \rit{fails} for the class $\fp1n$;\vim
\itlb{tas2} Prove the \rit{consistency} of the \dd\Sigma side \bsep---that is, given $n\ge3$, find models in which \bsep\ \rit{holds} for the class $\fs1n$.\vom\vim \een As for \rit{models}, we focus here only on \rit{generic extensions of the constructible universe $\rL$}. Other set theoretic models, e.g., those based on strong determinacy or large cardinal hypotheses,
are not considered in this paper. ({ We may only note in brackets that, by Addison and Moschovakis \cite{addmos}, and Martin~\cite{martAD}, the \rit{axiom of projective determinacy} $\PD$ implies that, for any $m\ge 1$, the separation problem is solved affirmatively for $\fs1{2m+1}$ and $\fp1{2m+2}$ and negatively for $\fp1{2m+1}$ and $\fs1{2m+2}$ --- similar to what happens at the first and second levels, corresponding to $m=0$ in this scheme. See also Steel~\cite{steel_detsep,steel_core}, and Hauser and Schindler \cite{hashi} for some other relevant results.}).
{Problems}
(I) and (II) have been well-known since the early years of forcing, e.g., see problem P3030, and especially P3029 (= (II) for $n=3$) in a survey \cite{mathiasS} by Mathias.
Two solutions for part (I) are known so far. Harrington's two-page handwritten note ([Addendum A1]~\cite{h74}) contains a sketch, without going into details, of a model defined by the almost-disjoint forcing technique of Jensen and Solovay \cite{jsad}, in which \sep\ indeed fails for both $\fs1n$ and $\fp1n$ for a given $n$. This research was cited in
Moschovakis~\cite{mDST} (Theorem 5B.3), and Mathias~\cite{mathiasS} (Remark P3110 on page 166), but has never been published or otherwise detailed in any way. Some other models, with the same property of failure of \sep\ for different projective classes, were recently defined and studied in \cite{kl28,kl49}.
As for (II), the problem as it stands is open so far, and no conclusive achievement, such as a model (a generic extension of $\rL$) in which \bsep\ holds for $\fs1n$ for some $n\ge3$, is known. Yet, the following modification turns out to be easier to work with. The \rit{effective} or \rit{lightface} \sep, for a given lightface class $\is1n$ or $\ip1n$ (we give \cite{mDST} as a reference on the lightface projective hierarchy), is the assertion that\vom\vim
\bde \item[\lsep\ {\rm for} $\is1n$ {\rm or} $\ip1n$:] any pair of disjoint $\is1n$, respectively,\ $\ip1n$ sets $X,Y$
can be separated by a $\id1n$ set---here, unlike the \bsep\ case, the sets $X,Y$ can be either sets of reals or sets of integers.\vom\vim \ede
Accordingly, the \rit{effective} or \rit{lightface} separation problem is a question of whether \lsep\ holds for this or another class of the form $\is1n$ or $\ip1n$, with specific versions for sets of reals and sets of integers.
Addison~\cite{add1,add2} demonstrated that, similar to the above, \lsep\ holds for $\is11$ and $\ip12$; fails for $\ip11$ and $\is12$; and under the axiom of constructibility $\rV=\rL$, it holds for $\ip1n$ and fails for $\is1n$ for all $n\ge3$---both in the ``real'' and the ``integer'' versions. (See also \cite{mDST}.)
In this context, Harrington announced in \cite{h74} that there is a model in which \lsep\ holds both for the class $\is13$ \rit{for sets of integers}, and for the class $\ip13$ \rit{for sets of integers}. A two-page handwritten sketch of a construction of such a model is given in ([Addendum A3]~\cite{h74}) without much elaboration of arguments.
{\ubf The goal of this paper} is to prove the next theorem, which generalizes the cited Harrington result and thereby is a definite advance in the direction of (II) in the context of \lsep\ for sets of integers. This is the main result of this paper.
\bte \lam{mt} Let\/ $\nn\ge2$. There is a generic extension of\/ $\rL$ in which\/ \ben \renu \jtlb{mt1} \lsep\ holds for\/ $\is1{\nn+1}$ sets of integers, so that any pair of disjoint\/ $\is1{\nn+1}$ sets\/ $X,Y\sq\om$ can be separated by a\/ $\id1{\nn+1}$ set$;$
\jtlb{mt2} \lsep\ also holds for\/ $\ip1{\nn+1}$ sets of integers, so that any pair of disjoint\/ $\ip1{\nn+1}$ sets\/ $X,Y\sq\om$ can be separated by a\/ $\id1{\nn+1}$ set. \een
\ete
Our proof of this theorem will follow a scheme that includes both some arguments outlined by Harrington in {\cite{h74}}, Addendum A3 (mainly related to the most elementary case $\nn=2$), and some arguments absent from {\cite{h74}}, in particular those related to the generalization to the case $\nn\ge3$. (We may note here that {\cite{h74}} is neither a beta-version of a paper nor a preprint of any sort, but rather handwritten notes for a talk, in which omissions of even major details can be expected.) All this will require both a fairly sophisticated construction of the model itself and a fairly complex derivation of its required properties, by methods that are still rather new in modern set theoretic research. Thus, we proceed by filling in all the necessary details left aside in {\cite{h74}}. We hope that a detailed acquaintance with the set theoretic methods first introduced by Harrington will be of benefit to the interested~reader.
To prove Theorem \ref{mt}, we make use of a generic extension of $\rL$ defined in our earlier paper \cite{kl57} (and before that in \cite{h74}---modulo some key details absent in \cite{h74}) in order to prove the consistency of the equality $\pws\om\cap\rL=\pws\om\cap\id1{\nn+1}$
for a given $\nn\ge2$. (The equality claims that the constructible reals are the same as $\id1{\nn+1}$ reals. Its consistency was a major problem posed by Harvey Friedman \cite{102}.) We present the construction of this generic model in all necessary detail.
This includes a version of almost-disjoint forcing considered in Section~\ref{prel}, the cone-homogeneity lemma in Section~\ref{fhom}, the systems and product forcing construction in Section~\ref{*s41}, and a Jensen--Solovay-style construction of the actual countable support forcing product $\dQ$ in Section~\ref{*71}. Theorem~\ref{*7euv} and Definition~\ref{*fixu} in Section~\ref{*71} present the construction of $\dQ$ in $\rL$ via the union of a \dd\omd long increasing sequence of systems $\xU_\xi$, $\xi<\omb$, which satisfies suitable completeness and definability requirements (that depend on the choice of the value of an integer $\nn$ as in Theorem~\ref{mt}), and also follows the Jensen--Solovay idea of Cohen-generic extensions at each step $\xi<\omb$ of the inductive construction of $\xU_\xi$.
Then, we consider corresponding \dd\dQ generic extensions $\rL[G]$ in Section~\ref{*Pex}, and their subextensions involved in the proof of Theorem~\ref{mt} in Section~\ref{175} (Theorem~\ref{33}). Two key lemmas are established in Section~\ref{2key}, and the proof of theorems \ref{33} and \ref{mt} is finalized in Section~\ref{175*} (the $\is1{\nn+1}$ case) and in Section~\ref{176} (the $\ip1{\nn+1}$ case).
The final section on conclusions and discussion completes the paper.
\parf{Almost-disjoint forcing} \las{prel}
Almost-disjoint forcing was invented by Jensen and Solovay~\cite{jsad}. Here, we make use of a \dd\omi version of this tool considered in ([Section 5]~\cite{jsad}). The version we utilize here exactly corresponds to the case $\bom=\omil$ developed in our earlier paper~\cite{kl57} (and to a lesser extent in \cite{kl58}). This will allow us to skip some proofs below. We fix the following notation: \bit \item\msur $\bom=\omil$, the first uncountable ordinal in $\rL$.
\vitem\msur $\Fun=({\bom}^{\bom})\cap\rL$ = all \dd\bom sequences of ordinals $<\bom$ in $\rL$.
\vitem\msur $\Seq=({\bom}^{<\bom}\bez\ans\La)\cap\rL$ is the set of all sequences $s\in\rL$ of ordinals $<\bom$, of length $0<\lh s<\bom$. \eit
By definition, the sets $\Fun$, $\Seq$ belong to $\rL$ and $\card{(\Seq)}=\bom=\omil$ whereas $\card{(\Fun)}=\om^\rL_2$ in $\rL$. Note that $\La$, the~empty sequence, does not belong to $\Seq$.
\bit \item A set $X\sq\Fun$ is \rit{dense} iff for any $s\in\Seq$ there is \index{set!dense} $f\in X$ such that $s\su f$.
\vitem If $S\sq\Seq\yt f\in \Fun$; then, let $\ssl{S}f= \ens{\xi<\bom}{f\res\xi\in S}$. \index{zzsf@$\ssl{S}f$}
\vitem If $\ssl{S}f$ \imar{ssl Sf} \index{zzsf@$\ssl S f$} is unbounded in $\bom$, then say that \index{cover} \rit{$S$ covers\/ $f$}; otherwise, \rit{$S$ does not cover\/ $f$}. \eit
The general goal of the almost-disjoint forcing is the following: given a set $u\sq\Fun$ in the ground universe $\rL$, find a generic set $S\sq\Seq$ such that the equivalence ``$f\in u\leqv S\:\text{ does not cover }f$''
holds for each $f\in \Fun$ in the ground universe.
\bdf
\lam{*3.2a} \imar{usl ps p pf p} $\Qa$ is the set of all pairs\/ $p=\usl{\ps p}{\pf p}\in\rL$
of {\em finite} \index{zzQ*@$\Qa$} \index{zzsp@$\ps p$} \index{zzfp@$\pf p$} sets\/ $\pf p\sq\Fun$, $\ps p\sq\Seq$. Note that\/ $\Qa\in\rL$. Elements of\/ $\Qa$ are called (forcing) \rit{conditions}. \index{condition}
If\/ $p\in\Qa$, then put\/ \imar{pfv p} $ \pfv p=\ens{f\res\xi}{f\in\pf p\land 1\le\xi<\bom} $; this is a tree in $\Seq$.
Let\/ $p,q\in\Qa$. Define\/ $q\leq p$ (that is, $q$ is {\em stronger} as a forcing condition) iff\/ \index{condition!order} \index{condition!stronger} $\ps p\sq \ps q\yd \pf p\sq \pf q$, and~the difference\/ $\ps q\bez \ps p$ does not intersect $\pfv p$, \ie, $\ps q\cap\pfv p=\ps p\cap \pfv p$. Clearly, we have\/ $q\leq p$ iff\/ $\ps p\sq \ps q\yd \pf p\sq \pf q$, and\/ $\ps q\cap {\pfv p}=\ps p\cap {\pfv p}$.
If $u\sq\Fun$, then put\/ $\zq u=\ens{p\in \Qa}{\pf p\sq u}$. \index{zzQu@$\zq u$} \imar{zq u} \edf
\ble [Lemma 1 in \cite{kl57}] \lam{*comp} Conditions\/ $p,q\in\Qa$ are compatible in\/ $\Qa$ iff\/ $(1)$ $\ps q\bez \ps p$ does not intersect\/ $\pfv p$, and\/ $(2)$ $\ps p\bez \ps q$ does not intersect\/ $\pfv q$.
\ele
Thus, any conditions\/ $p,q\in \zq u$ are compatible in\/ $\zq u$ iff\/ $p,q$ are compatible in\/ $\Qa$ iff the condition\/ $p\land q=\usl{\ps p\cup \ps q}{\pf p\cup \pf q}\in \zq u$ \index{zzpandq@$p\land q$} satisfies\/ $p\land q\leq p$ and\/ $p\land q\leq q$.
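Although the sequences in $\Seq$ and the functions in $\Fun$ are transfinite objects, the compatibility criterion of the lemma above involves only the finitely many sequences and functions occurring in the two conditions: it reduces to finitely many checks of the form ``$s$ is a restriction of $f$'' for $s$ in the first component of one condition and $f$ in the second component of the other. The following toy sketch is purely illustrative, with finite tuples of integers standing in for the transfinite sequences; all names in it are our own.
\begin{verbatim}
# Toy illustration (ours) of the compatibility criterion, with finite
# integer tuples in place of the sequences of Seq and (restrictions of)
# the functions of Fun.  A condition is a pair p = (S_p, F_p) of finite
# sets; p and q are compatible iff (S_q \ S_p) misses the tree of
# restrictions of F_p and (S_p \ S_q) misses that of F_q.
def restrictions(f):
    # all nonempty initial segments of the tuple f
    return {f[:i] for i in range(1, len(f) + 1)}

def tree_of(condition):
    s_part, f_part = condition
    return set().union(*(restrictions(f) for f in f_part)) if f_part else set()

def compatible(p, q):
    sp, sq = p[0], q[0]
    return not ((sq - sp) & tree_of(p)) and not ((sp - sq) & tree_of(q))

f = (0, 1, 2, 3)
p = (frozenset({(0,), (5, 5)}), frozenset({f}))      # (S_p, F_p)
q_good = (set(p[0]) | {(7,)}, p[1])    # adds a sequence off the tree of p
q_bad  = (set(p[0]) | {(0, 1)}, p[1])  # adds an initial segment of f
assert compatible(p, q_good) and not compatible(p, q_bad)
\end{verbatim}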
\parf{The almost-disjoint forcing notions are homogeneous} \las{fhom}
We are going to show that forcing notions of the form $\zq u$ are sufficiently homogeneous. This is not immediately clear here, unlike the case of many other homogeneity claims. Assume that conditions $p,q\in\Qa$ satisfy the next requirement: \busm {*sut*} {$\pf p=\pf q$ \ \ and \ \ $\ps p\cup \ps q\sq \pfv p=\pfv q$. }
\noi
Then, a transformation $\yh pq$ acting on conditions is defined as~follows.
If $p=q$, then define $\yh pq(r)=r$ for all $r\in\Qa$, the~identity.
Suppose that $p\ne q$. Then, $p,q$ are incompatible by \eqref{*sut*} and Lemma~\ref{*comp}. Define \mbox{$\dyh pq=\ens{r\in\Qa}{r\leq p\lor r\leq q}$}, \index{zzdpq@$\dyh pq$} \index{domain!dpq@$\dyh pq$} \imar{dyh pq} the \rit{domain} of $\yh pq$. Let $r\in\dyh pq$. We put $\yh pq(r)=r':=\pa{\ps {r'}}{\pf{r'}}$, where $\pf{r'}=\pf{r}$ and \imar{yh pq} \index{zzhpq@$\yh pq$} \busr {*sut*1} {\ps {r'}= \left\{ \bay{rcl} (\ps {r}\bez \ps p)\cup \ps q &\text{ in case }& r\leq p\,,\\[1ex]
(\ps {r}\bez \ps q)\cup \ps p &\text{ in case }& r\leq q\,.\\[1ex] \eay \right.}
In this case, the difference between $\ps {r}$ and $\ps {r'}$ is located within the set $X=\pfv p=\pfv q$, so~that $\ps r\cap X=\ps p$ and $\ps {r'}\cap X=\ps q$ whenever $r\leq p$, while $\ps r\cap X=\ps q$ and $\ps {r'}\cap X=\ps p$ whenever $r\leq q$. The next lemma is Lemma 6 in \cite{kl57}.
\ble \label{homL} \ben \renu \jtlb{homL1}\msur If\/ $u\sq\Fun$ is dense and\/ $p_0,q_0\in\zq u$, then there exist conditions\/ $p,q\in\zq u$ with\/ $p\leq p_0$, $q\leq q_0$, satisfying \eqref{*sut*}.
\itlb{homL2} Let\/ $p,q\in\Qa$ satisfy\/ \eqref{*sut*}.
If\/ $p=q$, then\/ $\yh pq$ is the identity transformation.
If\/ $p\ne q$, then\/ $\yh pq$ is an order automorphism of\/ $\dyh pq=\ens{r\in\Qa}{r\leq p\lor r\leq q}$, satisfying\/ $\yh pq(p)=q$ and\/ $\yh pq=(\yh pq)\obr=\yh qp$.
\itlb{homL3} If\/ $u\sq\Fun$ and\/ $p\yi q\in \zq u$ satisfy\/ \eqref{*sut*}, then\/ $\yh pq$ maps the set\/ $\zq u\cap \dyh pq$ onto\/ itself order-preserving. \een \ele
\bpf[sketch] \ref{homL1} By the density of $u$, there is a finite set $F\sq u$ satisfying $\pf {p_0}\cup\pf {q_0}\sq F$ and \mbox{$\ps {p_0}\cup \ps {q_0}\sq F^{\vee}= \ens{f\res\xi}{f\in F\land 1\le\xi<\bom}$}. Put $p=\pa{\ps {p_0}}{F}$ and $q=\pa{\ps {q_0}}{F}$. Claims \ref{homL2} and \ref{homL3} are routine. \epf
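To illustrate the transformation $\yh pq$ on a toy example (an illustration of ours, not taken from \cite{kl57}): pick $f\ne g$ in $\Fun$ with $f\res1\ne g\res1$, and let $p,q\in\Qa$ be given by $\pf p=\pf q=\ans{f,g}$, $\ps p=\ans{f\res1}$, $\ps q=\ans{g\res1}$; then \eqref{*sut*} holds and $p\ne q$. If $r\leq p$ is any condition with $\ps r=\ps p\cup\ans{s}$ for some $s\nin\pfv p$, then $\yh pq(r)=r'$, where $\pf{r'}=\pf r$ and $\ps{r'}=\ans{s,\,g\res1}$; in particular $r'\leq q$, since $\ps{r'}\bez\ps q=\ans s$ is disjoint from $\pfv q=\pfv p$.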
\bcor [in $\rL$] \lam{homC} If a set\/ $u\sq\Fun$ is dense, then\/ $\zq u$ is\/ {\em\ubf cone homogeneous} in the sense \mbox{of\/ {\rm\cite{dob_sdf}}}, \ie, if\/ $p_0,q_0\in\zq u$, then there exist conditions\/ $p,q\in\zq u$ with\/ $p\leq p_0$, $q\leq q_0$, such that the cones\/ $\zq u_{\leq p}=\ens{p'\in\zq u}{p'\leq p}$ and\/ $\zq u_{\leq q}$ are order-isomorphic.
\ecor
\parf{Systems and product almost-disjoint forcing} \las{*s41}
To prove Theorem~\ref{mt}, we make use of a forcing notion equal to the finite-support product of a collapse forcing $\dC$ and \dd{\ombl}many forcing notions of the form $\zq u$, $u\sq\Fun$.
{\ubf We work in $\rL$.} Define $\dC=({\pws\om\cap\rL})^{<\om}$, the set of all finite sequences of subsets of $\om$ in $\rL$; this is the ordinary forcing to collapse $\pws\om\cap\rL$ down to $\om$.
Let $\pwb=\ombl$ and $\pwo=\pwb\cup\ans{-1}$, the \rit{index set} of the mentioned product. \index{zzIskr@$\pwo$} \index{zzIskr-@$\pwb$} \imar{pwo,pwb} Let a \rit{system} be any map $U:\abs U\to\pws\Fun$ such that $\abs U\sq\pwb$, each set $U(\inu)$ ($\inu\in\abs U$) is dense in $\Fun$, and the \rit{components} $U(\inu)\sq\Fun\;\,(\inu\in\abs U)$ are pairwise disjoint. \index{system}
Given a system $U$ in $\rL$, we let $\jq U$ be the finite-support product of $\dC$ and the sets $\zq{U(\inu)}$, $\inu\in\abs U$. \index{zzQ@$\fQa$} \imar{fQa} That~is, $\jq U$ consists of all maps $p$ defined on a finite set $\dom p=\abp p\sq\abs U\cup\ans{-1}$ so that $p(\inu)\in\zq{U(\inu)}$
for all $\inu\in\abm p:=\abp p\bez\ans{-1}$, \imar{abm p} and if $-1\in\abp p$, then $\bij p:=p(-1)\in\dC$. \index{zzzzbp@$\bij p$} \index{zzzzpabs@$\abp p$} \index{zzzzpabs-@$\abm p$}
If $p\in \jq U$, then put \imar{qf p nu} $\qf p \inu=\pf {p(\inu)}$ and \index{zzspn@$\qs p\inu$} \index{zzfpn@$\qf p \inu$}
\imar{qs p nu} $\qs p\inu=\ps {p(\inu)}$ for all $\inu\in\abm p$, so that $p(\inu)=\usl{\qs p\inu}{\qf p \inu}$.
We order $\jq U$ component-wise: $p\leq q$ ($p$ is stronger as a forcing condition) iff $\abp q\sq\abp p$, $\bij q\sq \bij p$ in case $-1\in\abp q$, and~$p(\inu)\leq q(\inu)$ in $\zq{U(\inu)}$ for all $\inu\in\abm q$.
Note that $\jq U$ contains \rit{the empty condition} $\bo\in\jq U$ satisfying $\abp\bo=\pu$; \index{zzzzodot@$\bo$} \index{condition!empty@empty condition $\bo$} obviously, $\bo$ is the \dd\leq least (and weakest as a forcing condition) element of $\jq U$.
\ble [in $\rL$] \lam{*ccc2} If\/ $U$ is a system, then the forcing notion\/ $\jq U$ satisfies\/ \dd\ombl CC. \ele
\bpf We argue in $\rL$. Assume towards the contrary that $X\sq\jq U$ is an antichain of cardinality $\card X=\omb$. As $\card\dC=\omi$, we can assume that $\bij p=\bij q$ for all $p,q\in X$. Consider the set $S=\ens{\abm p}{p\in X}$; it consists of finite subsets of $\omb$.
{\it Case 1\/}: $\card S\le\omi$. Then, by a cardinality argument, there is a set $X'\sq X$ and some $a\in S$ such that $\abm p=a$ for all $p\in X'$ and still $\card {X'}=\omb$. Note that if $p\ne q$ belong to $X'$, then $\bij p=\bij q$ by the above; therefore, as $p,q$ are incompatible, we have $\qs p\inu\ne \qs q\inu$ for some $\inu\in a$. Thus, $P=\ens{\sis{\qs p\inu}{\inu\in a}}{p\in X'}$ still satisfies $\card P=\omb$. This is a contradiction since obviously the set $\ens{\sis{\qs p\inu}{\inu\in a}}{p\in \jq U\land \abm p=a}$ has cardinality $\omi$.
{\it Case 2\/}: $\card S=\omb$. Then, by the \dd\Delta system lemma (see \eg\ Lemma III.2.6 in Kunen~\cite{kun}) there is a set $S'\sq S$ and a finite set $\da\sq\omb$ (the root) such that $a\cap b=\da$ for all $a\ne b$ in $S'$, and still $\card {S'}=\omb$. For any $a\in S'$, pick a condition $p_a\in X$ with $\abm {p_a}=a$; then, $X''=\ens{p_a}{a\in S'}$ still satisfies $\card {X''}=\omb$. By construction, if $p\ne q$ belong to $X''$, then $\abm p\cap\abm q=\da$ and $p,q$ are incompatible; hence, the restricted conditions $p\res\da$, $q\res\da$ are incompatible as well. Thus, the set $Y=\ens{p\res\da}{p\in X''}$ still has cardinality $\card Y=\omb$ and is an antichain. On the other hand, $\abm q=\da$ for all $q\in Y$. Thus we have a contradiction as in Case 1. \epf
\parf{Jensen--Solovay construction} \las{*71}
Our plan is to define a system $\xU\in\rL$ such that any \dd{\jq \xU}generic extension of $\rL$ has a subextension that witnesses Theorem~\ref{mt}. Such a system will be defined in the form of a component-wise union of a \dd\ombl long increasing sequence of \rit{small} systems, where the smallness means that, in $\rL$, the system involves only \dd\omil many functions in $\Fun$.
{\ubf We work in $\rL$.}
\bit \item A system $U$ is \index{system!small} \rit{small}, if~both the set $\abs U$ and each set $U(\inu)\;\,(\inu\in\abs U)$ have cardinality $\le\omil$.
\vitem If $U,V$ are systems, $\abs U\sq\abs V$, and $U(\inu)\sq V(\inu)$ for all $\inu\in\abs U$,
then say that $V$ \rit{extends} $U$, in~symbol $U\pce V$. \imar{U pce V} \index{extends} \index{zzzzlecur@$\pce$}
\vitem If $\sis{U_\xi}{\xi<\la}$ is a \dd\pce increasing sequence of systems, then define a system $U=\lis_{\xi<\la}U_\xi$ \imar{lis} by $\abs U=\bigcup_{\xi<\la}\abs{U_\xi}$ and $U(\inu)=\bigcup_{\xi<\la,\:\inu\in \abs{U_\xi}}U_\xi(\inu)$ for all $\inu\in\abs U$. \index{zzzzbigveeUxi@$\lis_{\xi<\la}U_\xi$} \eit
We let $\zfcm$ be $\zfc$ minus the Power Set axiom, \index{zzzzzfc-@$\zfcm$} \index{theory!zfc-@$\zfcm$} with the schema of Collection instead of Replacement, with \AC\ in the form of the well-orderability principle, and~with the axiom: \lap{$\omi$ exists}. See~\cite{gitPWS} on versions of $\zfc$ sans the Power Set axiom in~detail.
Let $\zfcd$ be $\zfcm$ plus the axioms: $\rV=\rL$, and~the axiom \lap{every set $x$ satisfies $\card x\le\omi$}. \index{theory!zfc1@$\zfcd$}
Let $U,V$ be systems. Suppose that $M$ is any transitive model of $\zfcd$ containing $\bom$. \mbox{Define $\pcm U{U'}M$} iff $U\pce U'$ \imar{pcm U U' M} and the following holds: \index{zzUMV@$\pcm U{U'}M$} \ben \aenu \jtlb{pcm1} the set $\raz U{U'}= \bigcup_{{\inu\in\abs U}} (U'(\inu) \bez U(\inu))$
is \rit{multiply\/ $\Seq$-generic} over $M$, \index{multiply $\Seq$-generic} in the sense that every sequence $\ang{f_1,\dots,f_m}$ of pairwise different functions $f_\ell\in \raz U{U'}$
is generic over $M$ in the sense of $\Seq={\omi}^{<\omi}$ as the forcing notion in $\rL$, \ and
\itlb{pcm2} if $\inu\in\abs U$, then the set $U'(\inu)\bez U(\inu)$ is dense in $\Fun$, and therefore uncountable.\vom\vim \een
Note a corollary of \ref{pcm1}: $\raz U{U'}\cap M=\pu$.
\bit \item Let $\jsp$, \rit{Jensen--Solovay pairs}, \index{zzSJSP@$\cjsp$} be the set of all pairs $\pa MU$, where $M\mo\zfcd$ is a transitive model containing $\bom$ and $U\in M$ is a system. Then, the sets $\Seq\yd \jq U$ also belong to $M$.\vom\vim
\item Let $\sjsp$, \rit{small Jensen--Solovay pairs}, be the set of all pairs $\pa MU\in\jsp$ \index{zzJSP@$\jsp$} such that $U$ is a small system in the sense above and $\card M\le\omi$ (in $\rL$).\vom
\item $\pa MU\pce\pa{M'}{U'}$ ($\pa{M'}{U'}$ extends $\pa{M}{U}$) \, iff \, \index{zzzzlecur@$\pce$} $M\sq M'$ and $\pcm{U}{U'}{M}$;\vom
\hspace*{-5ex} $\pa MU\prec\pa{M'}{U'}$ (strict) \, iff \, \index{zzzzlcur@$\prec$} $\pa MU\pce\pa{M'}{U'}$ and $\kaz \inu\in\abs U \,(U(\inu)\sneq U'(\inu))$.\vom
\item A~\rit{Jensen--Solovay sequence} of length $\la\le\bomp=\omb$ is \index{Jensen--Solovay sequence} any strictly \dd\prec increasing \mbox{\dd\la sequence} $\sis{\pa{M_\xi}{U_\xi}}{\xi<\la}$ of pairs $\pa{M_\xi}{U_\xi}\in\cjsp$, satisfying $U_\eta=\lis_{\xi<\eta}U_\xi$ on limit steps.
Let $\lt\la$ be the set of all such sequences.\vom \index{zzJSla@$\lt\la$}
\item A pair $\pa MU\in\cjsp$ \rit{solves} a set \index{solves a set} $D\sq\cjsp$ iff either $\pa MU\in D$ or there is no pair $\pa{M'}{U'}\in D$ that extends $\pa MU$.\vom\vim
\item Let $\sol D$ be the set of all pairs $\pa MU\in\cjsp$, which \index{zzDsolv@$\sol D$} solve a given set $D\sq\cjsp$.\vom\vim
\item Let $n\ge3$. A sequence $\sis{\pa{M_\xi}{U_\xi}}{\xi<\omb}\in\lt\omb$ is \dd n\rit{complete} iff it \index{sequence!\dd ncomplete} intersects every set of the form $\sol D$, where $D\sq\cjsp$ is a $\hbs{n-2}(\hb)$ set.\vom \eit
If $\ka$ is a cardinal then $\mathrm{H}\ka$ is the collection of all sets $x$ whose transitive closure $\TC(x)$ has cardinality $\card{(\TC(x))}<\ka$. {Arguing in $\rL$, we have $\hb=\rL_{\omb}$, of course.}
{\sloppy Further, $\hbs{n-2}(\hb)$ means definability by a $\is{}{n-2}$ formula of the \dd\in language, in which any definability parameters in $\hb$ are allowed, while $\hbs{n-2}$ means the parameter-free definability. Similarly, $\hbd{n-1}(\ans{\bom})$ in the next theorem means that $\bom=\omil$ is allowed as a sole parameter. It is a simple exercise that sets $\ans\Seq$ and $\Seq$ are $\hbd1(\ans\bom)$ under $\rV=\rL$. To account for $\bom$ as a parameter, note that the set\/ $\omi$ is $\hbs1$; hence, the singleton $\ans\omi$ is $\hbd2$. }
Generally, we refer to, e.g., \cite{skml}, Part B, 5.4, or \cite{jechmill}, Chap.~13, on the L\'evy hierarchy of $\in$-formulas and definability classes $\is H n\yd \ip Hn\yd\id H n$ \index{definability classes! $\is H n\yd \ip Hn\yd\id H n$} \index{zzSHC@$\is H n,\,\ip Hn,\,\id H n$} for any transitive set $H$.
\bte
[Theorem 3 in \cite{kl57}] \lam{*7euv} It is true in\/ $\rL$ that if\/ $n\ge2$, then there is a sequence\/ $\sis{\pa{M_\xi}{U_\xi}}{\xi<\omb}\in\lt\omb$ of class\/ $\hbd{n-1}(\ans{\bom})$;
hence, $\hbd{n-1}$ in case\/ $n\ge3$, and in addition\/ \dd ncomplete in case\/ $n\ge3$,~such that\/ $\xi\in\abs{U_{\xi+1}}$ for all\/ $\xi<\omb$.
\ete
Similar theorems were established in \cite{kl34,kl36,kl38} for different purposes.
\bdf [in $\rL$] \lam{*fixu} {\ubf Fix a number\/ $\nn\ge2$ during the proof of Theorem~\ref{mt}.}
Let $\sis{\pa{\xM_\xi}{\xU_\xi}}{\xi<\omb}\in\lt\omb$ \imar{xM xU} be a Jensen--Solovay sequence as~in Thm~\ref{*7euv}, thus \index{zzMxi@$\xM_\xi$} \index{zzUxi@$\xU_\xi$} \ben \renu \jtlb{fixu1} the sequence is of class\/ $\hbd{\nn-1}$;
\itlb{fixu2} we have\/ $\xi\in\abs{\xU_{\xi+1}}$ for all\/ $\xi$;
\itlb{fixu3} if\/ $\nn\ge3$, then the sequence is\/ \dd\nn complete.\vom \een Put $\xU=\lis_{\xi<\omb}\xU_\xi$, \imar{xU} so $\xU(\inu)= \bigcup_{\xi<\omb,\inu\in\abs{\xU_\xi}}\xU_\xi(\inu)$ for all $\inu\in\pwb$. Thus, $\xU\in\rL$ is a system and $\abs\xU=\pwb$ since $\xi\in\abs{\xU_{\xi+1}}$ for all $\xi$.
We define $\dQ=\jq \xU$ (the {\em\ubf basic forcing notion}). \imar{dQ}
\index{zzUb@$\xU$} \index{zzPb@$\dQ$} \index{zzPxi@$\dQ_\xi$} Thus, $\dQ\in\rL$ is the finite-support product of the set $\dC$ and sets $\dQ(\inu)=\zq{\xU(\inu)}\yt \inu\in\pwb$. \edf
\ble [in $\rL$] \lam{*dl} The binary relation\/ $f\in\xU(\inu)$
is\/ $\hbd{\nn-1}(\ans{\bom})$. \ele
\bpf To get a $\is{}{}$ definition, make use of \ref{fixu1} of Definition~\ref{*fixu}. To get a $\ip{}{}$ definition, note that, in $\rL$, $f\in\xU(\inu)$ iff for any $\xi<\omb$, if $f\in \xM_\xi$ and $\nu<\xi$ then $f\in\xU_\xi(\nu)$. \epf
\parf{Basic generic extension} \las{*Pex}
We consider $\dQ_\nn:=\dQ=\jq\xU$ (see Definition~\ref{*fixu}) as a forcing notion in $\rL$. Accordingly, we will study \dd{\dQ}generic extensions $\rL[G]$ of the ground universe $\rL$. Define some elements of these extensions. Suppose that $G\sq\dQ$.
\index{zzzzabmG@$\abm G$}
Let $$ \bij G=\textstyle\bigcup_{p\in G} \bij p\,, \index{zzbG@$\bij G$} \quad \text{and}\quad \qs G \inu =\ps {G(\inu)}= \textstyle\bigcup_{p\in G}\qs p \inu $$
\index{zzGi@$G(\inu)$} \index{zzSGi@$\qs G \inu$} for any $\inu\in\abm G:=\bigcup_{p\in G}\abm p$, where $G(\inu)=\ens{p(\inu)}{p\in G\land \inu\in\abm p}\sq\zq{\xU(\inu)}$.
Thus, $\qs G \inu \sq\Seq$.
Therefore, any \dd{\dQ}generic set $G\sq\dQ$ splits into the family of sets $G(\inu)\yt \inu\in\pwb$,
and a separate
map $\bij G:\om\onto\pws\om\cap\rL$. It follows from Lemma~\ref{*ccc2} by standard arguments that \dd{\dQ}generic extensions of $\rL$ satisfy $\omi=\om_2^\rL$.
\ble [Lemma 9 in \cite{kl57}] \lam{*krg} Let\/
$G\sq \dQ$ be a set\/ \dd{\dQ}generic over\/ $\rL$. Then\/$,$\vom\vim \ben \renu \jtlb{krg5}\msur $\bij G$ is a\/ \dd\dC generic map from\/ $\om$ onto\/ $\pws\om\cap\rL\;;$\vim
\itlb{Kreg1} if\/ $\inu\in\pwb$, then the set\/ $G(\inu)=\ens{p(\inu)}{p\in G}\in\rL[G]$ is\/ \dd{\zq{\xU(\inu)}}generic over\/ $\rL$ --- hence, if\/ $f\in\Fun$, then\/ $f\in \xU(\inu)$ iff\/ $\qs G \inu \text{ does not cover }f\;.$ \qed \een \ele Now suppose that $c\sq\pwo$.
If $p\in\dQ$, then a {\em restriction\/} $p'=p\res c\in\dQ$ is defined by \index{zzp"1c@$p\res c$} $\abm{p'}=c\cap\abm p$ and $p'(\inu)=p(\inu)$ for all $\inu\in\abm{p'}$.
In particular, if $\inu\in\pwo$, then let $$ p\rn \inu=p\res{(\abp p\bez\ans \inu)} \qand p\rne\inu=p\res{\ans \inu}\;\; \text{(identified with $p(\nu)$).} $$ \index{zzp"1ni@$p\rn \inu$}
If $G\sq\dQ$, then let $G\res c=\ens{p\in G}{\abm p\sq c}$ (=$\ens{p\res c}{p\in G}$ in case $c\in\rL$).
\index{zzG"1c@$G\res c$} Put $G\rn \inu=\ens{p\in G}{\inu\nin\abp p} =G\res{(\pwo\bez\ans \inu)}$. \index{zzG"1ni@$G\rn \inu$}
Writing $p\res c$,
it is not assumed that $c\sq\abp p$.
The proof of Theorem~\ref{mt} makes use of a generic extension of the form $\rL[G\res c]$, where $G\sq\dQ$ is a set \dd\dQ generic over $\rL$ and~$c\sq\pwo\yt c\nin\rL$.
Define formulas $\dGa_\inu$ ($\inu\in\pwb$) as follows:
$$ \dGa_\inu(S)\,:=_{\rm def}\;S\sq\Seq\,\land\,
\kaz f\in \Fun\; \big(f\in \xU(\inu) \eqv {S}\text{ does not cover }f\big). $$
\ble [Lemma 22 in \cite{kl57}] \lam{gi} Suppose that a set\/ $G\sq\dQ$ is\/ \dd\dQ generic over\/ $\rL$ and\/ $\inu\in\pwb$, $c\in\rL[G]\yt \pu\ne c\sq\pwo$. Then, \/ $\omi^{\rL[G\res c]}=\om_2^\rL$ and \ben \renu \jtlb{gi1}\msur
$\dGa_\inu(\qs G \inu )$ holds$;$
\itlb{gi2}\msur
$\qs G \inu \nin\rL[G\rn\inu]$---generally, there are no sets\/ $S\sq\Seq$ in\/ $\rL[G\rn\inu]$ satisfying\/ $\dGa_\inu(S)\,;$
\itlb{gi0+} if\/ $-1\in c$, then\/ $\bij G\in\rL[G\res c]$, and if\/ $\inu\in c$, then\/ $\qs G \inu \in\rL[G\res c]$. \qed \een \ele
The next key theorem is Theorem 4 in \cite{kl57}. Note that if $\nn=2$, then the result is an easy corollary of the Shoenfield absoluteness theorem.
\bte [elementary equivalence theorem] \lam{*eet} Assume that in\/ $\rL$, $-1\in d\sq\pwo$, sets\/ $Z',Z\sq\pwb\bez d$ satisfy\/ $\card{(\pwb\bez Z)} \le \omi$ and\/ $\card {(\pwb\bez Z')} \le \omi$, the symmetric difference\/ $Z\sd Z'$ is at most countable and the complementary set\/ $\pwb\bez(d\cup Z\cup Z')$ is infinite.
Let\/ $G\sq\dQ$ be\/ \dd\dQ generic over\/ $\rL$, and\/ $x_0\in\rL[G\res d]$ be any real.
Then,\/ any closed\/ $\is1\nn$ formula\/ $\vpi$, with real parameters in\/ $\rL[x_0]$, is simultaneously true in the models\/ $\rL[x_0,G\res Z]$ and\/ $\rL[x_0,G\res Z']$. \qed
\ete
\parf{The model} \las{175}
Here, we introduce a submodel of the basic \dd\dQ generic extension $\rL[G]$ defined in Section~\ref{*Pex} that will lead to {the proof} of Theorem~\ref{mt}.
Recall that a number $\nn\ge2$ is fixed by Definition~\ref{*fixu}.
Under the assumptions and notation of Definition~\ref{*fixu}, consider a set $G\sq\dQ$, \dd\dQ generic over $\rL$. \mbox{Then, $\bij G=\bigcup G(-1)$} is a \index{zzbG@$\bij G$} \dd\dC generic map from $\om$ onto $\pws\om\cap\rL$ by Lemma~\ref{*krg} \ref{krg5}. We define \index{zzwG@$\wg G$} \index{zzWG@$\Wg G$} \imar{fn sli} \busr{W=} {\wg G= \ens{\om k+2^j}{k<\om\land j\in \bij G(k)} \cup \ens{\om k+3^{j+1}}{j,k<\om}\sq\om^2,}
and $\Wg G=\ans{-1}\cup \wg G$. We also define, for~any $m<\om$, $$ \index{zzwmG@$\wgb m G$} \index{zzwmG@$\wgm m G$} \wgb mG=\ens{\om k+\ell\in \wg G}{k\ge m}\,, \quad \wgm mG=\ens{\om k+\ell\in \wg G}{k<m}\,, $$ and accordingly, \index{zzWmG@$\Wgb m G$} \index{zzWmG@$\Wgm m G$} $\Wgb mG=\ans{-1}\cup \wgb mG$ and $\Wgm mG=\ans{-1}\cup \wgm mG$.
With these definitions, each $k$-th slice \busr {wk=} {\wgk k G= \ens{\om k+2^j}{j\in \bij G(k)} \cup \ens{\om k+3^{j+1}}{j<\om} \index{zzwkG@$\wgk k G$} }
of $\wg G$ is necessarily infinite and coinfinite, and~it codes the target set $\bij G(k)$ since \busr {bjk} {\bij G(k)=\ens{j<\om}{\om k+2^j\in \wgk k G} =\ens{j<\om}{\om k+2^j\in \Wg G}.}
Note that definition \eqref{W=} is \rit{monotone \poo\ $\bij G$}, \ie, if $\bij G(k)\sq \bij{G'}(k)$ for all $k$, then $\wg G\sq\wg {G'}$ and $\Wg G\sq\Wg {G'}$.
Anyway, $\wg G\sq\om^2$ (the ordinal product) is a set in the model $\rL[\bij G]=\rL[\Wg G]=\rL[\wg G]=\rL[\wgb mG]$ for each $m$, whereas $\wgm mG\in\rL$ for all $m$. Finally, let $ W=[\om^2,\omb)=\ens{\za}{\om^2\le\za<\omb}\,. \index{zzW@$W=[\om^2,\omb)$} $
Recall that if $c\sq\pwo$, then $G\res c=\ens{p\in G}{\abp p\sq c}$.
\bce \ubf To prove Theorem~\ref{mt}, we consider the model \ $\rL[G\res (\Wg G\cup W)]\,\sq\,\rL[G]$. \ece
\bte \lam{33} If\/ $G$ is a\/ \dd\dQ generic set over\/ $\rL$, then the class\/ $\rL[G\res (\Wg G\cup W)]$ suffices to prove Theorem \ref{mt}. That is, \lsep\ holds in\/ $\rL[G\res (\Wg G\cup W)]$ both for\/ $\is1{\nn+1}$ sets of integers and for\/ $\ip1{\nn+1}$ sets of integers.
\ete
The proof will include several lemmas.
For the next lemma, we let $\for_\dQ$ be the $\dQ$-forcing relation defined in $\rL$. If $p\in\dQ$ and $-1\in\abp p$, then let $p\rne{-1}:=p\res\ans{-1}$. This can be identified with just $p(-1)\in\dC$, of course, but formally $p\rne{-1}\in\dQ$. If $-1\nin\abp p$, then let $p\rne{-1}:=\bo$ (the empty condition). Let $\naG$ be the canonical \dd\dQ name for the generic set $G\sq\dQ$, $\check W$ be a name for the set $W=[\om^2,\ombl)\in\rL$, and $\bijn$ be a canonical \dd\dQ name for $\bij G$. \index{name!Gund@$\naG$} \index{zzzzGund@$\naG$}
\ble [reduction to the $\dC$-component] \lam{-1} Let\/ $p\in\dQ$ and let\/ $\Phi(\bijn)$ be a closed formula containing only\/ $\bijn$ and names for sets in\/ $\rL$ as parameters. Assume that\/ $$ p\for_\dQ ``\Phi\text{ is true in }\rL[\dgwgw]''\,. $$
Then,\/ $p\rne{-1}\for_\dQ ``\Phi\text{ is true in }\rL[\dgwgw]''$ as well. \ele
\bpf By the product forcing theorem, if $G\sq\dQ$ is \dd\dQ generic over $\rL$, then the model $\rL[\gwgw]$ is a \dd{\dQ'}generic extension of $\rL[\bij G]$, where $\dQ'= \prod_{\inu\in\Wg{G}\cup W}\zq{\xU(\inu)}$ is a forcing in $\rL[\bij G]$. However, it follows from Corollary~\ref{homC} that $\dQ'$ is a (finite-support) product of cone-homogeneous forcing notions. Therefore, $\dQ'$ itself is a cone homogeneous forcing, and we are finished (see e.g., Lemma 3 in \cite{dob_sdf} or Theorem IV.4.15 in \cite{kun}). \epf
\parf{Two key lemmas} \las{2key}
{The following two lemmas present two key properties of models of the form $\rL[G\res (\Wg G\cup W)]$ involved in the proof of Theorem~\ref{33}. The first lemma
shows that all constructible reals are $\id1{\nn+1}$ in such a model.}
\ble
\lam{7lem1} Let a set\/ $G\sq\dQ$ be\/ \dd\dQ generic over\/ $\rL$. Then, it holds in\/ $\rL[G\res (\Wg G\cup W)]$ that\/ $\wg G$ is $\is1{\nn+1}$ and each set\/ $x\in\rL\yt x\sq\om$ is\/ $\id1{\nn+1}\;.$ \ele
\bpf Consider an arbitrary ordinal $\inu=\om k+\ell$; $k,\ell<\om$. We claim that \busr {gx} {{\inu\in \wg G}\eqv {\sus S\,\dGa_{\inu}(S)} } holds in $\rL[G\res (\Wg G\cup W)]$. Indeed, assume that $\inu\in \wg G$. Then, $S=\qs G \inu \in \rL[G\res \Wg G]$, and we have $\dGa_{\inu}(S)$ in $\rL[G\res (\Wg G\cup W)]$ by Lemma~\ref{gi} \ref{gi0+},\ref{gi1}. Conversely, assume that $\inu\nin \wg G$.
Then,~we have $\Wg G\in\rL[\bij G]\sq \rL[G\res \Wg G]\sq\rL[G\rn\inu]$, but $\rL[G\rn\inu]$ contains no $S$ with $\dGa_{\inu}(S)$ by Lemma~\ref{gi}~\ref{gi2}.
\nek{\hk} {{\mathrm{H}\bbkappa}} \nek{\hi} {{\mathrm{H}\omi}}
Let $\bbkappa:=\omb^\rL$ in the remainder of the proof of the lemma. Then it follows that $\bbkappa=\omi^{\rL[G\res (\Wg G\cup W)]}=\omi^{\rL[G]}$, by {Lemma~\ref{gi}.}
Note that the right-hand side of \eqref{gx} defines a $\is{\hk}{\nn}(\ans{\oli,\Seq})$ relation in the model $\rL[G\res (\Wg G\cup W)]$ by Lemma~\ref{*dl}. (Indeed, we have
$(\hk)^\rL=(\hb)^\rL=\rL_{\omb^\rL}=\rL_{\bbkappa}$
in $\rL[G\res (\Wg G\cup W)]$, therefore $(\hk)^\rL$ is $\is\hk1$ in $\rL[G\res(\Wg G\cup W)]$.)
On the other hand, the~sets $\ans{\oli}$ and $\ans{\Seq}$ remain $\hbd2$ singletons in $\rL[G\res(\Wg G\cup W)]$; they can be eliminated since $\nn\ge2$. This yields $\wg G\in\is\hk\nn=\is\hi \nn$ in $\rL[G\res(\Wg G\cup W)]$. It follows that $\wg G\in\is1{\nn+1}$ in $\rL[G\res(\Wg G\cup W)]$ by Lemma 25.25 in \cite{jechmill}, as~required.
Now, let $x\in\rL\yt x\sq\om$. By genericity, there exists $k<\om$ such that $\bij G(k)=x$. Then, $x=\ens{j}{\om k+2^j\in \wg G}$ by \eqref{W=}; therefore, $x$ is $\is1{\nn+1}$ as well. However, $\om\bez x\in\is1{\nn+1}$ by the same argument. Thus, $x$ is $\id1{\nn+1}$ in $\rL[G\res(\Wg G\cup W)]$, as required. \epf
The proof of the next lemma
involves Theorem~\ref{*eet} above as a key reference. The lemma holds for $\nn=2$ by Shoenfield.
\ble
\lam{sn+7} Suppose that\/ $G\sq\dQ$ is\/ \dd\dQ generic over\/ $\rL$, $m<\om$, $c\sq \wgm m G$, $c\in\rL$.
Then, any closed\/ $\is1\nn$ formula\/ $\Phi$, with~reals in\/ $\rL[G\res (c\cup \Wgb mG\cup W)]$ as parameters, is simultaneously true in\/ $\rL[G\res {(c\cup\Wgb mG\cup W)}]$ and in\/ $\rL[G\res {(\Wg G\cup W)}]$.
It follows that if\/ $c'\sq c\sq \wgm m G$ in\/ $\rL$, then any closed\/ $\is1{\nn+1}$ formula\/ $\Psi$, with~parameters in\/ $\rL[G\res (c'\cup \Wgb mG\cup W)]$, true in\/ $\rL[G\res {(c'\cup\Wgb mG\cup W)}]$, is true in the model\/ $\rL[G\res {(c\cup\Wgb mG\cup W)}]$ as well. \ele
\bpf
There is an ordinal $\xi<\omb$ such that all parameters in $\Phi$ belong to $\rL[G\res Y]$, \mbox{where $Y=c\cup \Wgb mG\cup X$} and $X=[\om^2,\xi)=\ens{\ga}{\om^2\le\ga<\xi}$.
The set $Y$ belongs to $\rL[\bij G]$; in fact, $\rL[Y]=\rL[\bij G]$. Therefore, $G\res Y$ is equi-constructible with the pair $\ang{\bij G,\sis{\qs G \inu }{\nu\in Y}}$. Here, $\bij G$ is a map from $\om$ onto $\pws\om\cap\rL$. It follows that there is a real $x_0$ with $\rL[G\res Y]=\rL[x_0]$.
Then, all parameters of $\Phi$ belong to $\rL[x_0]$.
To prepare an application of Theorem~\ref{*eet} of Section~\ref{*Pex}, we put $$ \bay{rclcccc} Z' &=& [\xi,\omb)\;,\\[0.5ex]
Z &=& e\cup Z'\;,\quad\text{where}\quad e=\wgm m G\bez c\;,\\[0.5ex]
d &=& \ans{-1}\cup c\cup \ens{\om k+j}{k\ge m\land j<\om}\cup X\,.\vim \eay $$
It is easy to check that all requirements of Theorem~\ref{*eet} for these sets are fulfilled. Moreover, as $\Wgb mG\sq\ans{-1}\cup\ens{\om k+j}{k\ge m\land j<\om}$, we have $Y=c\cup \Wgb mG\cup X\sq d$;~hence, $x_0\in\rL[G\res d]$.
Therefore, we conclude by Theorem~\ref{*eet} that the formula $\Phi$ is simultaneously true in $\rL[x_0,G\res Z]$ and in $\rL[x_0,G\res Z']$. However, $$ \rL[x_0,G\res Z']=\rL[G\res (Y\cup Z')]= \rL[G\res {(c\cup \Wgb mG\cup W)}] $$ by construction, while~\mbox{$\rL[x_0,G\res Z]=\rL[G\res {(\Wg G\cup W)}]$}, and we are done. \epf
\parf{Finalization: \boldmath{$\is1{\nn+1}$} case} \las{175*}
Here, we finalize the proof of Theorems~\ref{33} and \ref{mt} \poo\ $\is1{\nn+1}$ sets of integers. We generally follow the line of arguments sketched by Harrington ([Addendum A3]~\cite{h74}) for the $\is13$ case, mutatis mutandis. We will fill in all details omitted in \cite{h74}.
Recall that a number $\nn\ge2$ is fixed by Definition \ref{*fixu}.
We assume that\vom \ben \fenu \jtlb{0xy} a set\/ $G\sq\dQ$ is \dd\dQ generic over $\rL$, sets $x,y\sq\om$ belong to $\rL[G\res (\Wg G\cup W)]$, and it holds in\/ $\rL[G\res (\Wg G\cup W)]$ that $x,y$ are disjoint\/ $\is1{\nn+1}$ sets.\vom \een
The goal is to prove that $x,y$ can be separated by a set $Z\in\rL$ and then argue that $Z$ is $\id1{\nn+1}$ by Lemma \ref{7lem1}.
Recall that $W=[\om^2,\ombl)=\ens{\xi}{\om^2\le\xi<\ombl}$. Suppose that\vom \ben \fenu \atc \jtlb{1stt}\msur $\vpi(\cdot)$ and $\psi(\cdot)$ are parameter-free $\is1{\nn+1}$ formulas that provide $\is1{\nn+1}$ definitions for the sets, respectively,\ $x,y$ of \ref{0xy}---that is, $$ x=\ens{\ell<\om}{\vpi(\ell)} \qand y=\ens{\ell<\om}{\psi(\ell)} $$ in $\rL[G\res(\Wg G\cup W)]$. The assumed implication $\kaz \ell\:(\vpi(\ell)\imp\neg\:\psi(\ell))$ (as $x\cap y=\pu$) is forced to be true in $\rL[\dgwgw]$ by a condition $p_0\in G$.\vom \een
Here, $\naG$ is the canonical \dd\dQ name for the generic set $G\sq\dQ$ while $\check W$ is a name for the set $W=[\om^2,\ombl)\in\rL$. \index{name!Gund@$\naG$} \index{zzzzGund@$\naG$}
We observe that $\kaz \ell\:(\vpi(\ell)\imp\neg\:\psi(\ell))$ is a parameter-free sentence. Therefore, it can \noo\ be assumed that $\abp{p_0}=\ans{-1}$, by Lemma~\ref{-1}. In this case, the condition $p_0\in\dQ$ can be identified with its only nontrivial component $s_0=p_0(-1)\in\dC$.
\ble [interpolation lemma] \lam{1122} Under the assumptions of\/ \ref{1stt}, if\/ $\ell,\ell'<\om$, conditions\/ $p,p'\in\dQ$ satisfy\/ $p\le p_0$ and\/ $p'\le p_0$, and we have \bce $p\for_\dQ ``\rL[\dgwgw]\mo\vpi(\ell)$'' \ \ and\/ \ \ $p'\for_\dQ ``\rL[\dgwgw]\mo\psi(\ell')$''. \ece Then,\/ $\ell\ne\ell'$. \ele
\bpf [sketched in {\cite[Addendum A3]{h74}} for $\nn=2$] First of all, by Lemma~\ref{-1}, we can \noo\ assume that $\abp{p}=\abp{p'}=\ans{-1}$; so, the components $s=p(-1)$ and $s'=p'(-1)$ satisfy $s_0\sq s$ and $s_0\sq s'$ in $\dC$.
We \noo\ assume that the tuples $s,s'$ have the same length $\lh s=\lh{s'}=m$. (Otherwise, extend the shorter one by a sufficient number of new terms equal to $\pu$.) Define another condition $t\in\dC$ such that $\dom t=m$ and $t(j)=s(j)\cup s'(j)$ for all $j<m$. Accordingly, define $q\in\dQ$ so that $\abp q=\ans{-1}$ and $q(-1)=t$. Although $q$ may well be incomparable with $p,p'$ in $\dQ$, we claim that \bul{ql} q \for_\dQ \text{\rm``$\rL[\dgwgw]\mo\vpi(\ell)\land \psi(\ell')$''}\,. \eus
To prove the \dd\vpi part of \eqref{ql}, let $H\sq\dQ$ be a set \dd\dQ generic over $\rL$, and $q\in H$. Then, $t\su\bij H$. We have to prove that $\vpi(\ell)$ holds in $\rL[\hwhw]$.
Define another generic set $K\sq\dQ$, slightly different from $H$, so that \ben \Aenu \jtlb{kh1} $K(\inu)=H(\inu)$ for all $\inu\in\pwb=\ombl$;\vim
\itlb{kh2} $s\su\bij K$; \ \ and\vim
\itlb{kh3}
if $m\le j<\om$, then $\bij K(j)=\bij H(j)$. \een
\noi
In other words, the only difference between $K$ and $H$ is that $\bij K\res m=s$ but $\bij H\res m=t$.
It follows that $p\in K$; hence, $\vpi(\ell)$ holds in $\rL[\kwkw]$ by the assumptions of the lemma. Now, we note that by definition, $$ \Wg{K}\cup W=\wgm m{K}\cup \Wgb m{K}\cup W, \quad \Wg{H}\cup W=\wgm m{H}\cup \Wgb m{H}\cup W, $$
Here, the sets $c_H=\wgm m{H}$ and $c_K=\wgm m{K}$ satisfy $c_K\sq c_H$ (since $\bij K(j)=s(j)\sq t(j)=\bij H(j)$ for all $j<m$). In addition, $\Wgb m{H}=\Wgb m{K}$ (since $\bij K(j)=\bij H(j)$ for all $j\ge m$). To conclude, \bul{kh} \Wg{K}\cup W=c_K\cup \Wgb m{H}\cup W, \quad \Wg{H}\cup W=c_H\cup \Wgb m{H}\cup W, \eus and $c_K\sq c_H=\wgm m{H}$.
On the other hand, it follows {from}
(A) that $K\res c=H\res c$ for any $c\sq\pwb$, whereas $\bij K$ and $\bij H$ are recursively reducible to each other by (B),(C). Therefore, $$ \rL[\kwkw] =\rL[H\res(\Wg{K}\cup W)] =\rL[H\res(c_K\cup \Wgb m{H}\cup W)] $$ by \eqref{kh}. However, $\vpi(\ell)$ holds in this model by the above. It follows by Lemma~\ref{sn+7} that $\vpi(\ell)$ holds in $\rL[\hwhw]=\rL[H\res(c_H\cup \Wgb m{H}\cup W)]$ as well. (Harrington circumvents \mbox{Lemma~\ref{sn+7} in \cite{h74}} by a reference to the Shoenfield absoluteness theorem.) We are finished.
After \eqref{ql} has been established, we recall that $q\le p_0$ in $\dQ$ by construction. It follows that $\ell\ne\ell'$ by the choice of $p_0$ (see \ref{1stt} above). \epf
\bpf[Theorems~\ref{33} and~\ref{mt}:~$\is1{\nn+1}$ case]
We work under the assumptions of \ref{0xy} and \ref{1stt} above. Consider the following sets in $\rL$: $$ \bay{rcl} Z_x &=&\ens{\ell<\om} {\sus p\in\dQ\,(p\le p_0\land p\for_\dQ \text{``$\rL[\dgwgw]\mo\vpi(\ell)$''})};\\[1ex]
Z_y &=&\ens{\ell'<\om} {\sus p'\in\dQ\,(p'\le p_0\land p'\for_\dQ \text{``$\rL[\dgwgw]\mo\psi(\ell')$''})}. \eay $$
Note that $Z_x\cap Z_y=\pu$ by Lemma~\ref{1122}. On the other hand, it is clear that $x\sq Z_x$ and $y\sq Z_y$ by \ref{1stt}. Thus, either of the two sets $Z_x, Z_y\in\rL$ separates $x$ from $y$. It remains to apply Lemma~\ref{7lem1}. \epf
\parf{Finalization: \boldmath{$\ip1{\nn+1}$} case} \las{176}
This will be a mild variation of the argument presented in the previous section.
\bpf [Theorems~\ref{33} and~\ref{mt}: $\ip1{\nn+1}$ case, sketch] Emulating \ref{0xy} and \ref{1stt} above, we assume that a set $G\sq\dQ$ is \dd\dQ generic over $\rL$, and $x,y\sq\om$ are disjoint $\ip1{\nn+1}$ sets in $\rL[G\res (\Wg G\cup W)]$, defined by
parameter-free $\ip1{\nn+1}$ formulas $\vpi(\cdot)$ and $\psi(\cdot)$, respectively. The implication $\kaz \ell\:(\vpi(\ell)\imp\neg\:\psi(\ell))$ is forced to hold in $\rL[\dgwgw]$ by a condition $p_0\in G$ satisfying $\abp {p_0}=\ans{-1}$. The proof of Lemma~\ref{1122} goes through for $\ip1{\nn+1}$ formulas $\vpi,\psi$ in the same way, with the only difference that we define $t(j)=s(j)\cap s'(j)$ for $j<m$. This is still compatible with the application of Lemma~\ref{sn+7}, because now $\vpi,\psi$ are $\ip1{\nn+1}$ formulas. \epf
\parf{Conclusions and discussion} \las{cd}
In this study, the method of almost-disjoint forcing was applied to the problem of obtaining a model of $\ZFC$ in which the \sep\ principle holds for the lightface classes $\ip1{n+1}$ and $\is1{n+1}$, for a given $n\ge2$, for sets of integers. The problem of obtaining such models has been known since the early years of modern set theory; see, \eg, Problems 3029 and 3030 in the survey \cite{mathiasS} by Mathias.
Harrington \cite[Addendum A3]{h74} claimed the existence of such models; yet, a detailed proof has never appeared.
{From our study, it is concluded that the technique developed in our earlier paper \cite{kl57} solves the general case of the problem by providing, for any chosen value $n\ge2$, a generic extension of $\rL$ in which the \lsep\ principle holds for the classes $\ip1{n+1}$ and $\is1{n+1}$ for sets of integers. }
From this result, we immediately come to the following problem:
\begin{vopr} \lam{prob1} Define a generic extension of\/ $\rL$ in which the\/ \lsep\ principle holds for classes\/ $\ip1{n+1}$ and\/ $\is1{n+1}$, for\/ {\ubf all} $n\ge2$, for sets of integers.
\end{vopr}
The intended solution is expected to be obtained on the basis of a suitable product of the forcing notions $\dQ_\nn$, $\nn\ge2$, defined in Section~\ref{*Pex}.
And we recall the following major problem.
\begin{vopr} \lam{prob2} Given\/ $n\ge2$, define a generic extension of $\rL$ in which the\/ \sep\ principle holds for the classes\/ $\is1{n+1}$ and\/ $\fs1{n+1}$ for\/ {\ubf sets of reals}.
\end{vopr}
The case of \rit{sets of reals} in the \sep\ problem is more general, and obviously much more difficult, than the case of sets of integers.
\addcontentsline{toc}{subsection}{\hspace*{5.5ex}References}
{\small
}
\end{document} |
\begin{document}
\title{Irreducibility of $x^n-a$} \begin{abstract} A. Capelli gave a necessary and sufficient condition for the reducibility of $x^n-a$ over $\Q$. In this article, we are providing an alternate elementary proof for the same. \end{abstract} {\bf{Key Words}}: Irreducible polynomials, cyclotomic polynomials.\\ {\bf{AMS(2010)}}: 11R09, 12D05.\\
In this article, we present an elementary proof of a theorem about the irreducibility of $x^n-a$ over $\Q$. Vahlen~\cite{vahlen} was the first to characterize the irreducibility of $x^n-a$ over $\Q$. A. Capelli~\cite{capelli} extended this result to all fields of characteristic zero, and later L. R\'edei~\cite{redei} proved it for all fields of positive characteristic. Nevertheless, this theorem is usually referred to as Capelli's theorem.
\begin{theorem}[\cite{capelli}, \cite{vahlen}, \cite{redei}] \label{thm:vahlen}
Let $n\ge 2.$ A polynomial $x^n-a\in \Q[x]$ is reducible over $\Q$ if and only if
either $a=b^t$ for some $t|n, t>1$, or $4|n$ and $a=-4b^4$, for some $b\in \Q.$
\end{theorem}
Since Theorem \ref{thm:vahlen} is true for arbitrary fields, all of the known proofs use field extensions, except the one given by Vahlen \cite{vahlen}. Vahlen assumes that the binomial $x^n-a$ is reducible and proves Theorem \ref{thm:vahlen} by using the properties of $n^{th}$ roots of unity and by comparing the coefficients on both sides of the equation $$x^n-a=(x^m+a_{m-1}x^{m-1}+\cdots+a_0)(x^{n-m}+b_{n-m-1}x^{n-m-1}+\cdots+b_0)$$ for some $m$, $0<m<n$. The reader can consult (\cite{karpilovsky}, p.~425) for a proof using field theory. We give a proof, specifically over $\Q$, using very little machinery.
Let $f(x)=x^n-a$, $a=\frac{b}{c}\in \Q$ and $(b, c)=1$. Then $c^nf(x)=(cx)^n-c^{n-1}b\in \Z[x].$ Hence $x^n-a$ is reducible over $\Q$ if and only if $y^n-c^{n-1}b$ is reducible over $\Z$. It is, therefore, sufficient to consider $a\in \Z$ and throughout the article, by reducibility, we will mean reducible over $\Z$.
\begin{theorem}\label{thm:capelli}
Let $n\ge 2.$ A polynomial $x^n-a\in \Z[x]$ is reducible over $\Z$ if and only if
either $a=b^t$ for some $t|n, t>1$, or $4|n$ and $a=-4b^4$, for some $b\in \Z.$ \end{theorem}
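For orientation, here is a standard illustration of Theorem \ref{thm:capelli} (not part of the original argument): for $n=4$ and $a=-4=-4\cdot 1^4$ we have $4|n$, and indeed $$x^4+4=(x^2-2x+2)(x^2+2x+2),$$ an instance of the second alternative with $b=1$; on the other hand, $x^4-2$ is irreducible, since $2$ is neither of the form $b^t$ with $t|4$, $t>1$, nor of the form $-4b^4$.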
The polynomial $x^n-1$ is a product of cyclotomic polynomials and if $n=2^{n_1}u$ with $2\nmid u$, then \begin{equation*}
x^n+1=\prod\limits_{d|u}\Phi_{2^{n_1+1}d}(x). \end{equation*} Therefore, from now onwards we assume that $a>1$, unless specified otherwise, and
check the reducibility of the polynomial $x^n\pm a$ for $n\ge 2$. If there exists a prime $p$ such that $p|a$ but $p^2\nmid a,$ then $x^n\pm a$ is irreducible by Eisenstein's criterion. In other words, if $a=p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$ is the prime factorization of $a$ and $x^n\pm a$ is reducible, then $a_i\ge 2$ for every $i\in \{1,2,\ldots,k\}.$ More generally,
\begin{lemma}\label{lem:gcdge2}
Let $n\ge 2$, $a=p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$ be the prime factorization of $a$ and let $x^n\pm a$ be reducible. Then $\gcd(a_1,a_2,\ldots,a_k)\ge 2$ and $\gcd(\gcd(a_1,a_2,\ldots,a_k),n)>1.$ \end{lemma} \begin{proof}
We prove the result by induction on $k=\omega(a)$, the number of distinct prime divisors of $a.$ The roots of $x^n\pm a$ are of the form $a^{1/n}(\mp \zeta_n)^e$, where $e\in \Z$ and $\zeta_n$ is a primitive $n^{\text{th}}$ root of unity. Since the proof barely depends upon the sign of roots, we restrict to the case $x^n-a$. Let $f(x)$ be a proper factor of $x^n-a,$ where $\deg(f)=s<n$. If $f(0)=\pm d$, then $\pm d=a^{s/n}\zeta_n^w$ for some $w\in \Z$.
Let $k=1$ and $a=p_1^{a_1}$. From Eisenstein's criterion, $a_1\ge 2$. If $d=p_1^{\alpha},$ then $d^n=a^s$ gives $\alpha n=a_1s$. If we had $(a_1,n)=1$, then $n|s$, contradicting $0<s<n$; hence $(a_1, n)>1$.
Let $k=2$ and $a=p_1^{a_1}p_2^{a_2}$. From $d^n=a^s,$ let $d=p_1^{d_1}p_2^{d_2}$ be the prime factorization of $d$. Then $nd_1=a_1s, nd_2=a_2s$ would give $d_1a_2=d_2a_1$. If $(a_1, a_2)=1$, then $d_1=a_1c$ for some $c|d_2$, and $nc=s<n$ is a contradiction. Thus, $m=(a_1, a_2)\ge 2$. Next we need to show that $(n,m)>1$. Suppose $a_1=mb_1, a_2=mb_2$ with $(b_1,b_2)=1.$ From $d_1a_2=d_2a_1$, we deduce that $d_1b_2=d_2 b_1.$ Then $d_1=b_1 r$ for some $r|d_2.$ Substituting this in $nd_1=a_1s=mb_1s$, we get $nr=ms$; if $(n,m)=1$, this gives $n|s$, a contradiction. Hence, $(n,m)>1$.
Suppose the result is true for some $k\ge 2$; thus, if $a=p_1^{a_1}\cdots p_k^{a_k}$, then $(a_1, \ldots, a_k)=u>1$ and $(u,n)>1$. We show that the result is true for $k+1$. Let $a=p_1^{a_1}\cdots p_k^{a_k}p_{k+1}^{a_{k+1}}=A^up_{k+1}^{a_{k+1}}$, where $A=p_1^{w_1}\cdots p_k^{w_k}$ and $(w_1, \ldots, w_k)=1$. From $d^n=a^s$, we can write $d$ as $d=B^vp_{k+1}^{d_{k+1}},$ where $B=p_1^{v_1}\cdots p_k^{v_k},$ $(v_1, \ldots, v_k)=1$. From the fundamental theorem of arithmetic, $d_{k+1}n=a_{k+1}s$, $B=A,$ and $vn=us$. That is, $d_{k+1}u=a_{k+1}v$. If $(u,a_{k+1})=1,$ then $d_{k+1}=a_{k+1}h$ for some $h|v,$ and $na_{k+1}h=a_{k+1}s$ implies $n|s$. This contradicts the fact that $s<n$. Thus, $(u,a_{k+1})=m>1$.
It remains to show that $(n,m)>1$. Write $u=mu_1$ and $a_{k+1}=ma_{k+1}',$ where $(u_1, a_{k+1}')=1$. From $d_{k+1}u=a_{k+1}v$, we get $u_1d_{k+1}=va_{k+1}'$. Since $(u_1,a_{k+1}')=1$, we have $d_{k+1}=a_{k+1}'t$ for some $t|v$. On the other hand, $nd_1=a_1s$ and $nd_{k+1}=a_{k+1}s$ (where $a_1$ and $d_1$ denote the exponents of $p_1$ in $a$ and in $d$, respectively) imply $a_1d_{k+1}=d_1a_{k+1}$. Writing $a_1=ua_1'=mu_1a_1',$ the equality $a_1d_{k+1}=d_1a_{k+1}$ gives $d_1=u_1a_1't$. Using this in $nd_1=a_1s$, we obtain $nt=ms$. If $(n,m)=1$, then $n|s$, a contradiction. Thus, $(n,m)>1$. By the induction principle, the result is true for every $k\ge 1.$ \end{proof}
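For example (an illustration of ours, not needed in the sequel): if $a=72=2^3\cdot 3^2$, then $\gcd(3,2)=1$, so Lemma \ref{lem:gcdge2} shows that $x^n\pm 72$ is irreducible for every $n\ge 2$.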
In other words, if $x^n\pm a$ is reducible, then $a$ has to be of the form $b^m,$ where $(n,m)>1$ and $m\ge 2$. After a rearrangement of the exponents, we can state the following.
\begin{cor}\label{cor:restrictedb}
Let $n\ge 2$ and $x^n\pm a $ be reducible over $\Z$. Then $ a= b^m$ for some $m\ge 2, m|n$, and $b$ is either a prime number or $b=(p_1^{b_1}p_2^{b_2}\cdots p_k^{b_k})^d,$ where $k\ge 2, (b_1, b_2, \ldots, b_k)=1$ and $(d, n)=1.$ \end{cor}
Suppose $f(x)=x^{25}\pm 6^8$. Then $b=6,$ $m=8$, and $(8,25)=1$, so the polynomial $x^{25}\pm 6^8$ is irreducible by Corollary \ref{cor:restrictedb}. Now let $g(x)=x^{25}\pm (243)^2$. If we took $b=243$ and $m=2$, then $(2,25)=1$ would suggest that $x^{25}\pm (243)^2$ is irreducible; but $x^5\pm 9$ divides $g(x)$. The reason is that $b=243$ is not of the form described in Corollary \ref{cor:restrictedb}: since $(243)^2=(3^2)^5$, one should take $b=3^2$ and $m=5$, so that $m|n$. For this reason, we adopt the following terminology.
We say that a positive integer $b$ \emph{has the property $\P$} if $b$ is of the form given in Corollary \ref{cor:restrictedb}.
\begin{lemma}\label{lem:m|n}
Let $m\ge 2$, $m|n$, and let $b$ have the property $\P$. Then $x^n\pm b^m$ is reducible, except possibly for $x^n+b^{2^r}, r\ge 1$. \end{lemma}
\begin{proof} If $m|n$, then \begin{equation*} x^n-b^m=\left(x^{n/m}-b\right)\left(x^{n(m-1)/m}+x^{n(m-2)/m}b+\cdots+x^{n/m}b^{m-2}+b^{m-1}\right). \end{equation*} Let $m=2^{r}m_1$, where $2\nmid m_1$ and $r\ge 0$. Then \begin{align}
x^n+b^m&= \prod\limits_{d|m_1}b^{2^r\varphi(d)}\Phi_{2^{r+1}d}\left(\frac{x^{n/m}}{b}\right), \notag \end{align} where $b^{2^r\varphi(d)}\Phi_{2^{r+1}d}\left(\frac{x^{n/m}}{b}\right)\in \Z[x]$ and $\varphi$ is the Euler totient function. \end{proof}
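To illustrate the identity used in the proof above (a worked instance, not part of the argument): take $m=6$, so $r=1$ and $m_1=3$, and write $y=x^{n/6}$; then $b^{2\varphi(1)}\Phi_{4}(y/b)=y^2+b^2$ and $b^{2\varphi(3)}\Phi_{12}(y/b)=y^4-b^2y^2+b^4$, and indeed $$(y^2+b^2)(y^4-b^2y^2+b^4)=y^6+b^6=x^n+b^6.$$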
Lemma \ref{lem:m|n} is true even if $b$ does not have the property $\P$. If $m=2^r\ge 2$, $m|n,$ and $b$ has the property $\P$, then determining exactly when $x^n+b^m$ is reducible will complete the proof of Theorem \ref{thm:capelli}.
Selmer(\cite{selmer}, p.298) made the following observation. Let $g(x)\in \Z[x]$ be an arbitrary irreducible polynomial of degree $n$. If $g(x^2)$ is reducible, then, using the fact that $\Z[x]$ is a unique factorization domain, we get $$g(x^2)=(-1)^ng_1(x)g_1(-x),$$ where $g_1(x)$ is an irreducible polynomial in $\Z[x]$. Thus, if $g_1(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$, then \begin{equation*} g(x^2)=(a_nx^n+a_{n-2}x^{n-2}+\cdots+a_0)^2-(a_{n-1}x^{n-1}+\cdots+a_1x)^2. \end{equation*} Let $k$ be an odd integer. Then $g(k^2)\equiv g(1)\pmod{4}$. Since the right hand side of the last equation is the difference between the two squares, $g(k^2)\equiv 0, \pm 1\pmod{4}$. Combining all of these, one can conclude that
\begin{lemma}\label{general} Let $g(x)\in \Z[x]$ be an irreducible polynomial. \begin{enumerate}[label=(\alph*)] \item\label{gpart1} If $g(k^2)\equiv 2\pmod{4}$ for an odd integer $k$, then $g(x^2)$ is irreducible over $\Z$. \item\label{gpart2} If $g(x^2)$ is reducible, then there are unique (up to sign) polynomials $f_1(x)$ and $f_2(x)$ such that $g(x^2)=f_1(x)^2-f_2(x)^2$. Furthermore, in this case, we can write $f_1(x)=h_1(x^2)$ and $f_2(x)=xh_2(x^2),$ where $h_1(x),h_2(x)\in \Z[x]$. \end{enumerate} \end{lemma} The proof of \ref{gpart2} follows from the fact that $\Z[x]$ is a unique factorization domain.
\begin{lemma}\label{capelli:lem3} Let $m=2^r\ge 2$ and $n$ be an odd positive integer. If $b$ has the property $\P$, then $x^{2^in}+b^m$ is irreducible for every $i$, $0\le i\le r$. \end{lemma}
\begin{proof} We proceed by induction on $i$. If $i=0$ and $b$ has the property $\P$, then $f(x)=x^n+b^m$ is irreducible by Lemma \ref{lem:gcdge2}. If $i=1$, then $f(x)=x^n+b^m$ is irreducible, and if $f(x^2)=x^{2n}+b^m=(x^n+b^{m/2})^2-2b^{m/2}x^n$ is reducible, from Lemma \ref{general}, $2b^{m/2}x^n$ has to be of the form $x^2h(x^2)^2$ for some $h(x)\in \Z[x]$. Since $n$ is odd, this is not possible and hence $f(x^2)$ is irreducible.
Suppose the result is true for some $i$, $0\le i\le r$ and we will show that it is true for $i+1\le r$. So, $f(x)=x^{2^in}+b^m$ is irreducible for some $i\le r$. From Lemma \ref{general}, if \begin{equation*} f(x^2)=x^{2^{i+1}n}+b^m=(x^{2^in}+b^{m/2})^2-2b^{m/2}x^{2^in} \end{equation*} is reducible, then $2b^{m/2}x^{2^in}$ has to be of the form $(xg(x^2))^2$ for some $g\in \Z[x]$. This is possible only when $m=2$ and $b=2^{\alpha}b_1^2,$ where $\alpha, b_1$ are odd positive integers. That is $r=1$ and hence $i=0$. We have already seen that $f(x^2)$ is irreducible in this case. Therefore, by the induction principle, $x^{2^in}+b^{2^r}$ is irreducible for every $i\le r$. \end{proof}
\begin{lemma}
Let $m=2^r\ge 2$ and let $b$ be an odd integer which has the property $\P$. If $m|n,$ then $x^n+b^m$ is irreducible. \end{lemma}
\begin{proof} Let $n=mt$. If $t$ is odd, then by Lemma~\ref{capelli:lem3}, $x^n+b^m$ is irreducible. Let $t=2^{t_1}u$, $t_1\ge 1$ and $u$ is odd. From Lemma \ref{capelli:lem3}, $g(x)=x^{mu}+b^m$ is irreducible. Since $b$ is odd, $g(k^2)\equiv 2\pmod{4}$ for any odd integer $k$. Applying Lemma \ref{general} repeatedly to $g(x)$, the result follows. \end{proof}
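For instance (an illustrative case of ours): the above lemma gives that $x^{12}+3^4=x^{12}+81$ is irreducible, since $b=3$ is odd, has the property $\P$, and $m=2^2$ divides $n=12$; this agrees with Theorem \ref{thm:capelli}, as $-81$ is neither $c^t$ with $t|12$, $t>1$, nor of the form $-4c^4$.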
\begin{cor}\label{cor:tbeven}
Let $m=2^r\ge 2,$ $m|n,$ and $b$ has the property $\P$. If $x^n+b^m$ is reducible, then both $b$ and $\frac{n}{m}$ are even integers. \end{cor}
\begin{lemma}\label{lastlem} Let $t,r\in \N$ and $b$ has the property $\P$. Then $x^{2^rt}+b^{2^r}$ is reducible if and only if $t$ is even, $r=1$, and $b=2d^2$ for some $d\in \N$. \end{lemma}
\begin{proof} If $t$ is even, $r=1$, and $b=2d^2,$ then $$x^{2t}+4d^4=(x^t+2d^2)^2-4d^2x^t=(x^{t}-2dx^{t/2}+2d^2)(x^{t}+2dx^{t/2}+2d^2).$$ Conversely, suppose that $g(x)=x^{2^rt}+b^{2^r}$ is reducible. By Corollary \ref{cor:tbeven}, both $t$ and $b$ are even integers. Let $t=2^{t_1}u, b=2^{b_1}v,$ where $u,v$ are odd integers and $t_1, b_1\ge 1$. By Lemma \ref{capelli:lem3}, the polynomial $h(x)=x^{2^ru}+b^{2^r}$ is irreducible. Since $g(x)=h(x^{2^{t_1}})$ is reducible, there is some $i$, $1\le i \le t_1$ such that $h(x^{2^{i-1}})$ is irreducible and
\begin{equation*} h(x^{2^i})=x^{2^{r+i}u}+2^{2^rb_1}v^{2^r}=(x^{2^{r+i-1}u}+2^{2^{r-1}b_1}v^{2^{r-1}})^2-2^{2^{r-1}b_1+1}v^{2^{r-1}}x^{2^{r+i-1}u} \end{equation*} is reducible. From uniqueness property of Lemma \ref{general}, $2^{2^{r-1}b_1+1}v^{2^{r-1}}x^{2^{r+i-1}u}$ has to be of the form $(xl(x^2))^2$ for some $l(x)\in \Z[x]$. This is possible only when $r=1$ and $2^{2^{r-1}b_1+1}v^{2^{r-1}}$ is a perfect square. Hence, $2^{b_1+1}v=c^2$ would imply $b=2\left(\frac{c}{2}\right)^2$ with $\frac{c}{2}\in \N$. \end{proof}
The proof of Theorem~\ref{thm:capelli}, and hence of Theorem~\ref{thm:vahlen}, follows from Lemmas~\ref{lem:gcdge2}, \ref{lem:m|n}, and \ref{lastlem}.\\
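As an optional sanity check of Theorem~\ref{thm:capelli} on small instances (a computational aside, not part of the argument; it assumes the Python library SymPy), one may run the following.
\begin{verbatim}
# Spot-check Capelli's criterion on two small binomials.
from sympy import symbols, factor

x = symbols('x')

# 4 | 12 and -4 = -4*1^4, so x^12 + 4 = x^12 - (-4) should be reducible:
print(factor(x**12 + 4))   # expected: (x**6 - 2*x**3 + 2)*(x**6 + 2*x**3 + 2)

# -9 is neither b^t with t | 12, t > 1, nor -4*b^4, so x^12 + 9 should stay irreducible:
print(factor(x**12 + 9))   # expected: x**12 + 9
\end{verbatim}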
{\bf Acknowledgement.} We would like to thank the referee for valuable comments.
\thebibliography{99} \bibitem{capelli} A. Capelli, {\em Sulla riduttibilit\`a delle equazioni algebriche}, Nota prima, Red. Accad. Fis. Mat. Soc. Napoli(3), 3(1897), 243--252. \bibitem{selmer} E. S. Selmer, {\em On the irreducibility of certain trinomials}, Math. Scand., 4 (1956), 287--302. \bibitem{karpilovsky} G. Karpilovsky, {\em Topics in field theory}, ISBN: $0444872973$, North-Holland, 1989. \bibitem{vahlen} K. Th. Vahlen, {\em \"Uber reductible Binome}, Acta Math., 19(1)(1895), 195--198.
\bibitem{redei} L. R\'edei, {\em Algebra}, Erster Teil, Akademische Verlaggesellschaft, Leipzig, 1959.
\end{document} |
\begin{document}
\title{Certain Transformations for Hypergeometric series in $p$-adic setting}
\author{Rupam Barman} \address{Department of Mathematics, Indian Institute of Technology, Hauz Khas, New Delhi-110016, INDIA} \curraddr{} \email{[email protected]} \thanks{}
\author{Neelam Saikia} \address{Department of Mathematics, Indian Institute of Technology, Hauz Khas, New Delhi-110016, INDIA} \curraddr{} \email{[email protected]} \thanks{}
\subjclass[2010]{Primary: 11G20, 33E50; Secondary: 33C99, 11S80, 11T24.} \date{10th March, 2014} \keywords{Character of finite fields, Gaussian hypergeometric series, Elliptic curves, Trace of Frobenius, Teichm\"{u}ller character, $p$-adic Gamma function.}
\begin{abstract} In \cite{mccarthy2}, McCarthy defined a function $_{n}G_{n}[\cdots]$ using the Teichm\"{u}ller character of finite fields and quotients of the $p$-adic gamma function. This function extends hypergeometric functions over finite fields to the $p$-adic setting. In this paper, we give certain transformation formulas for the function $_{n}G_{n}[\cdots]$ which are not implied from the analogous hypergeometric functions over finite fields. \end{abstract} \maketitle \section{Introduction and statement of results} In \cite{greene}, Greene introduced the notion of hypergeometric functions over finite fields or \emph{Gaussian hypergeometric series}. He established these functions as analogues of classical hypergeometric functions. Many interesting relations between special values of Gaussian hypergeometric series and the number of points on certain varieties over finite fields have been obtained. By definition, results involving hypergeometric functions over finite fields are often restricted to primes in certain congruence classes. For example, the expressions for the trace of Frobenius map on certain families of elliptic curves given in \cite{BK1, BK2, Fuselier, lennon, lennon2} are restricted to such congruence classes. In \cite{mccarthy2}, McCarthy defined a function $_{n}G_{n}[\cdots]$ which can best be described as an analogue of hypergeometric series in the $p$-adic setting. He showed how results involving Gaussian hypergeometric series can be extended to a wider class of primes using the function $_{n}G_{n}[\cdots]$. \par Let $p$ be an odd prime, and let $\mathbb{F}_q$ denote the finite field with $q$ elements, where $q=p^r, r\geq 1$. Let $\phi$ be the quadratic character on $\mathbb{F}_q^{\times}$ extended to all of $\mathbb{F}_q$ by setting $\phi(0):=0$. Let $\mathbb{Z}_p$ denote the ring of $p$-adic integers. Let $\Gamma_p(.)$ denote the Morita's $p$-adic gamma function, and let $\omega$ denote the Teichm\"{u}ller character of $\mathbb{F}_q$. We denote by $\overline{\omega}$ the inverse of $\omega$. For $x \in \mathbb{Q}$ we let $\lfloor x\rfloor$ denote the greatest integer less than or equal to $x$ and $\langle x\rangle$ denote the fractional part of $x$, i.e., $x-\lfloor x\rfloor$. Also, we denote by $\mathbb{Z}^{+}$ and $\mathbb{Z}_{\geq 0}$ the set of positive integers and non negative integers, respectively. The definition of the function $_{n}G_{n}[\cdots]$ is as follows. \begin{definition}\cite[Definition 5.1]{mccarthy2} \label{defin1} Let $q=p^r$, for $p$ an odd prime and $r \in \mathbb{Z}^+$, and let $t \in \mathbb{F}_q$. For $n \in \mathbb{Z}^+$ and $1\leq i\leq n$, let $a_i$, $b_i$ $\in \mathbb{Q}\cap \mathbb{Z}_p$. Then the function $_{n}G_{n}[\cdots]$ is defined by \begin{align} &_nG_n\left[\begin{array}{cccc}
a_1, & a_2, & \ldots, & a_n \\
b_1, & b_2, & \ldots, & b_n
\end{array}|t
\right]_q:=\frac{-1}{q-1}\sum_{j=0}^{q-2}(-1)^{jn}~~\overline{\omega}^j(t)\notag\\ &\times \prod_{i=1}^n\prod_{k=0}^{r-1}(-p)^{-\lfloor \langle a_ip^k \rangle-\frac{jp^k}{q-1} \rfloor -\lfloor\langle -b_ip^k \rangle +\frac{jp^k}{q-1}\rfloor}
\frac{\Gamma_p(\langle (a_i-\frac{j}{q-1})p^k\rangle)}{\Gamma_p(\langle a_ip^k \rangle)}
\frac{\Gamma_p(\langle (-b_i+\frac{j}{q-1})p^k \rangle)}{\Gamma_p(\langle -b_ip^k \rangle)}.\notag \end{align} \end{definition} The aim of this paper is to explore possible transformation formulas for the function $_{n}G_{n}[\cdots]$. In \cite{mccarthy2}, McCarthy showed that transformations for hypergeometric functions over finite fields can be re-written in terms of $_{n}G_{n}[\cdots]$. However, such transformations will hold for all $p$ where the original characters existed over $\mathbb{F}_p$, and hence restricted to primes in certain congruence classes. In the same paper, McCarthy posed an interesting question about finding transformations for $_{n}G_{n}[\cdots]$ which exist for all but finitely many $p$. In \cite{BS1}, the authors find the following two transformations for the function $_{n}G_{n}[\cdots]$ which exist for all prime $p > 3$. \begin{result}\cite[Corollary 1.5]{BS1}\label{cor1}
Let $q=p^r$, $p>3$ be a prime. Let $a, b \in \mathbb{F}_q^{\times}$ and $-\dfrac{27b^2}{4a^3}\neq 1$. Then
\begin{align} &{_2}G_2\left[ \begin{array}{cc}
\frac{1}{4}, & \frac{3}{4} \\
\frac{1}{3}, & \frac{2}{3}
\end{array}|-\dfrac{27b^2}{4a^3}
\right]_q\notag\\
&=\left\{
\begin{array}{ll}
\phi(b(k^3+ak+b))\cdot {_2}G_2\left[ \begin{array}{cc}
\frac{1}{2}, & \frac{1}{2} \\
\frac{1}{3}, & \frac{2}{3}
\end{array}|-\dfrac{k^3+ak+b}{4k^3}\right]_q \hbox{if~ $a=-3k^2$;}\\
\phi(-b(3h^2+a))\cdot {_2}G_2\left[ \begin{array}{cc}
\frac{1}{2}, & \frac{1}{2} \\
\frac{1}{4}, & \frac{3}{4}
\end{array}|\dfrac{4(3h^2+a)}{9h^2}
\right]_q \hbox{if ~$h^3+ah+b=0$.}
\end{array}
\right.\notag \end{align} \end{result} Apart from the transformations which can be implied from the hypergeometric functions over finite fields, the above two transformations are the only transformations for the function $_{n}G_{n}[\cdots]$ in full generality to date. In this paper, we prove two more such transformations which are given below. \begin{theorem}\label{MT1}
Let $q=p^r$, $p>3$ be a prime. Let $m=-27d(d^3+8)$, $n=27(d^6-20d^3-8)$ $\in \mathbb{F}_q^{\times}$ be such that $d^3\neq 1$, and
$-\dfrac{27n^2}{4m^3}\neq 1$. Then \begin{align} &q\phi(-3d)\cdot {_2}G_2\left[ \begin{array}{cc}
\frac{1}{2}, & \frac{1}{2} \\
\frac{1}{6}, & \frac{5}{6}
\end{array}|\dfrac{1}{d^3}\right]_q\notag\\ &=\alpha-q+\phi(-3(8+92d^3+35d^6))+q\phi(n)\cdot{_2}G_2\left[ \begin{array}{cc}
\frac{1}{4}, & \frac{3}{4} \\
\frac{1}{3}, & \frac{2}{3}
\end{array}|-\dfrac{27n^2}{4m^3}
\right]_q,\notag
\end{align}
where $\alpha=\left\{
\begin{array}{ll}
5-6\phi(-3), & \hbox{if~ $q\equiv 1\pmod{3}$;} \\
1, & \hbox{if~ $q\not\equiv 1\pmod{3}$.}
\end{array}
\right.$
\end{theorem} Combining Result \ref{cor1} and Theorem \ref{MT1}, we have another four such transformations for the function $_{n}G_{n}[\cdots]$ which are listed below. \begin{corollary}\label{cor2}
Let $q=p^r$, $p>3$ be a prime.
Let $\alpha$ be defined as in Theorem \ref{MT1}, and $m=-27d(d^3+8)$, $n=27(d^6-20d^3-8)\in \mathbb{F}_q^{\times}$ be such that $d^3\neq 1$ and
$-\dfrac{27n^2}{4m^3}\neq 1$.
\begin{enumerate}
\item If $3k^2+m=0$, then \begin{align} &q\phi(-3d)\cdot {_2}G_2\left[ \begin{array}{cc} \frac{1}{2}, & \frac{1}{2} \\ \frac{1}{6}, & \frac{5}{6}
\end{array}|\dfrac{1}{d^3}\right]_q\notag\\ &=\alpha-q+\phi(-3(8+92d^3+35d^6))+q\phi(k^3+mk+n)\notag\\ &~\times{_2}G_2\left[ \begin{array}{cc}
\frac{1}{2}, & \frac{1}{2} \\
\frac{1}{3}, & \frac{2}{3}
\end{array}|-\dfrac{k^3+mk+n}{4k^3}
\right]_q.\notag \end{align} \item If $h^3+mh+n=0$, then \begin{align} &q\phi(-3d)\cdot {_2}G_2\left[ \begin{array}{cc} \frac{1}{2}, & \frac{1}{2} \\ \frac{1}{6}, & \frac{5}{6}
\end{array}|\dfrac{1}{d^3}\right]_q\notag\\ &=\alpha-q+\phi(-3(8+92d^3+35d^6))+q\phi(-3h^2-m)\notag\\ &~\times{_2}G_2\left[ \begin{array}{cc}
\frac{1}{2}, & \frac{1}{2} \\
\frac{1}{4}, & \frac{3}{4}
\end{array}|\dfrac{4(3h^2+m)}{9h^2}
\right]_q.\notag \end{align} \end{enumerate} \end{corollary} For an elliptic curve $E$ defined over $\mathbb{F}_q$, the trace of Frobenius of $E$ is defined as $a_q(E):=q+1-\#E(\mathbb{F}_q)$, where $\#E(\mathbb{F}_q)$ denotes the number of $\mathbb{F}_q$- points on $E$ including the point at infinity. Also, $j(E)$ denotes the $j$-invariant of $E$. We now state a result of McCarthy which will be used to prove our main results. \begin{theorem}\cite[Theorem 1.2]{mccarthy2}\label{mc} Let $p>3$ be a prime. Consider an elliptic curve $E_s/\mathbb{F}_p$ of the form $E_s: y^2=x^3+ax+b$ with $j(E_s)\neq 0, 1728$. Then \begin{align}
a_p(E_s)=\phi(b)\cdot p\cdot {_2}G_2\left[ \begin{array}{cc}
\frac{1}{4}, & \frac{3}{4} \\
\frac{1}{3}, & \frac{2}{3}
\end{array}|-\frac{27b^2}{4a^3}
\right]_p. \end{align} \end{theorem} \begin{remark} McCarthy proved Theorem \ref{mc} over $\mathbb{F}_p$ and remarked that the result could be generalized to $\mathbb{F}_q$. We have verified that Theorem \ref{mc} is also true over $\mathbb{F}_q$, and we will apply it over $\mathbb{F}_q$ to prove our results. \end{remark}
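We also record an elementary observation (not taken from \cite{mccarthy2}, but convenient when applying Theorem \ref{mc}): for $E_s: y^2=x^3+ax+b$ one has $$j(E_s)=\frac{1728\cdot 4a^3}{4a^3+27b^2},$$ where $4a^3+27b^2\neq 0$ since $E_s$ is an elliptic curve; hence the hypothesis $j(E_s)\neq 0, 1728$ is equivalent to $ab\neq 0$, and in particular the argument $-\frac{27b^2}{4a^3}$ and the factor $\phi(b)$ in Theorem \ref{mc} are then well defined and nonzero.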
\section{Preliminaries} Let $\widehat{\mathbb{F}_q^\times}$ denote the set of all multiplicative characters $\chi$ on $\mathbb{F}_q^{\times}$. It is known that $\widehat{\mathbb{F}_q^\times}$ is a cyclic group of order $q-1$ under the multiplication of characters: $(\chi\psi)(x)=\chi(x)\psi(x)$, $x\in \mathbb{F}_q^{\times}$. The domain of each $\chi \in \mathbb{F}_q^{\times}$ is extended to $\mathbb{F}_q$ by setting $\chi(0):=0$ including the trivial character $\varepsilon$. We now state the \emph{orthogonality relations} for multiplicative characters in the following lemma. \begin{lemma}\emph{(\cite[Chapter 8]{ireland}).}\label{lemma2} We have \begin{enumerate} \item $\displaystyle\sum_{x\in\mathbb{F}_q}\chi(x)=\left\{
\begin{array}{ll}
q-1 & \hbox{if~ $\chi=\varepsilon$;} \\
0 & \hbox{if ~~$\chi\neq\varepsilon$.}
\end{array}
\right.$ \item $\displaystyle\sum_{\chi\in \widehat{\mathbb{F}_q^\times}}\chi(x)~~=\left\{
\begin{array}{ll}
q-1 & \hbox{if~~ $x=1$;} \\
0 & \hbox{if ~~$x\neq1$.}
\end{array}
\right.$ \end{enumerate} \end{lemma} \par Let $\mathbb{Z}_p$ and $\mathbb{Q}_p$ denote the ring of $p$-adic integers and the field of $p$-adic numbers, respectively. Let $\overline{\mathbb{Q}_p}$ be the algebraic closure of $\mathbb{Q}_p$ and $\mathbb{C}_p$ the completion of $\overline{\mathbb{Q}_p}$. Let $\mathbb{Z}_q$ be the ring of integers in the unique unramified extension of $\mathbb{Q}_p$ with residue field $\mathbb{F}_q$. We know that $\chi\in \widehat{\mathbb{F}_q^{\times}}$ takes values in $\mu_{q-1}$, where $\mu_{q-1}$ is the group of $(q-1)$-th root of unity in $\mathbb{C}^{\times}$. Since $\mathbb{Z}_q^{\times}$ contains all $(q-1)$-th root of unity, we can consider multiplicative characters on $\mathbb{F}_q^\times$ to be maps $\chi: \mathbb{F}_q^{\times} \rightarrow \mathbb{Z}_q^{\times}$. \par We now introduce some properties of Gauss sums. For further details, see \cite{evans}. Let $\zeta_p$ be a fixed primitive $p$-th root of unity in $\overline{\mathbb{Q}_p}$. The trace map $\text{tr}: \mathbb{F}_q \rightarrow \mathbb{F}_p$ is given by \begin{align} \text{tr}(\alpha)=\alpha + \alpha^p + \alpha^{p^2}+ \cdots + \alpha^{p^{r-1}}.\notag \end{align} Then the additive character $\theta: \mathbb{F}_q \rightarrow \mathbb{Q}_p(\zeta_p)$ is defined by \begin{align} \theta(\alpha)=\zeta_p^{\text{tr}(\alpha)}.\notag \end{align} For $\chi \in \widehat{\mathbb{F}_q^\times}$, the \emph{Gauss sum} is defined by \begin{align} G(\chi):=\sum_{x\in \mathbb{F}_q}\chi(x)\theta(x).\notag \end{align} We let $T$ denote a fixed generator of $\widehat{\mathbb{F}_q^\times}$ and denote by $G_m$ the Gauss sum $G(T^m)$. We now state three results on Gauss sums which will be used to prove our main results. \begin{lemma}\emph{(\cite[Eqn. 1.12]{greene}).}\label{fusi3} If $k\in\mathbb{Z}$ and $T^k\neq\varepsilon$, then $$G_kG_{-k}=qT^k(-1).$$ \end{lemma} \begin{lemma}\emph{(\cite[Lemma 2.2]{Fuselier}).}\label{lemma1} For all $\alpha \in \mathbb{F}_q^{\times}$, $$\theta(\alpha)=\frac{1}{q-1}\sum_{m=0}^{q-2}G_{-m}T^m(\alpha).$$ \end{lemma} \begin{theorem}\emph{(\cite[Davenport-Hasse Relation]{Lang}).}\label{lemma3} Let $m$ be a positive integer and let $q=p^r$ be a prime power such that $q\equiv 1 \pmod{m}$. For multiplicative characters $\chi, \psi \in \widehat{\mathbb{F}_q^\times}$, we have \begin{align} \prod_{\chi^m=1}G(\chi \psi)=-G(\psi^m)\psi(m^{-m})\prod_{\chi^m=1}G(\chi). \end{align} \end{theorem} \par In the proof of our results, the Gross-Koblitz formula plays an important role. It relates the Gauss sums and the $p$-adic gamma function. For $n \in\mathbb{Z}^+$, the $p$-adic gamma function $\Gamma_p(n)$ is defined as \begin{align} \Gamma_p(n):=(-1)^n\prod_{0<j<n,p\nmid j}j\notag \end{align} and one extends it to all $x\in\mathbb{Z}_p$ by setting $\Gamma_p(0):=1$ and \begin{align} \Gamma_p(x):=\lim_{n\rightarrow x}\Gamma_p(n)\notag \end{align} for $x\neq0$, where $n$ runs through any sequence of positive integers $p$-adically approaching $x$. This limit exists, is independent of how $n$ approaches $x$, and determines a continuous function on $\mathbb{Z}_p$ with values in $\mathbb{Z}_p^{\times}$. \par Let $\pi \in \mathbb{C}_p$ be the fixed root of $x^{p-1} + p=0$ which satisfies $\pi \equiv \zeta_p-1 \pmod{(\zeta_p-1)^2}$. Then the Gross-Koblitz formula relates Gauss sums and $p$-adic gamma function as follows. Recall that $\omega$ denotes the Teichm\"{u}ller character of $\mathbb{F}_q$. 
\begin{theorem}\emph{(\cite[Gross-Koblitz]{gross}).}\label{thm4} For $a\in \mathbb{Z}$ and $q=p^r$, \begin{align} G(\overline{\omega}^a)=-\pi^{(p-1)\sum_{i=0}^{r-1}\langle\frac{ap^i}{q-1} \rangle}\prod_{i=0}^{r-1}\Gamma_p\left(\langle \frac{ap^i}{q-1} \rangle\right).\notag \end{align} \end{theorem} \section{Proof of the results} \par We first state a lemma which we will use to prove the main results. This lemma is a generalization of Lemma 4.1 in \cite{mccarthy2}. For a proof, see \cite{BS1}. \begin{lemma}\emph{(\cite[Lemma 3.1]{BS1}).}\label{lemma4} Let $p$ be a prime and $q=p^r$. For $0\leq j\leq q-2$ and $t\in \mathbb{Z^+}$ with $p\nmid t$, we have \begin{align}\label{eq8} \omega(t^{tj})\prod_{i=0}^{r-1}\Gamma_p\left(\langle \frac{tp^ij}{q-1}\rangle\right) \prod_{h=1}^{t-1}\Gamma_p\left(\langle\frac{hp^i}{t}\rangle\right) =\prod_{i=0}^{r-1}\prod_{h=0}^{t-1}\Gamma_p\left(\langle\frac{p^ih}{t}+\frac{p^ij}{q-1}\rangle\right) \end{align} and \begin{align}\label{eq9} \omega(t^{-tj})\prod_{i=0}^{r-1}\Gamma_p\left(\langle\frac{-tp^ij}{q-1}\rangle\right) \prod_{h=1}^{t-1}\Gamma_p\left(\langle \frac{hp^i}{t}\rangle\right) =\prod_{i=0}^{r-1}\prod_{h=0}^{t-1}\Gamma_p\left(\langle\frac{p^i(1+h)}{t}-\frac{p^ij}{q-1}\rangle \right). \end{align} \end{lemma} \begin{lemma}\label{lemma5} For $1\leq l\leq q-2$ such that $l\neq \frac{q-1}{2}$, and $0\leq i\leq r-1$, we have \begin{align}\label{eq-51} &\lfloor\frac{3lp^i}{q-1}\rfloor+3\lfloor\frac{-lp^i}{q-1}\rfloor- 3\lfloor\frac{-2lp^i}{q-1}\rfloor-\lfloor\frac{6lp^i}{q-1}\rfloor\notag\\ &=-2\lfloor\langle \frac{p^i}{2}\rangle- \frac{lp^i}{q-1}\rfloor -\lfloor\langle \frac{-p^i}{6} \rangle+ \frac{lp^i}{q-1}\rfloor-\lfloor\langle \frac{-5p^i}{6} \rangle+\frac{lp^i}{q-1}\rfloor. \end{align} \end{lemma} \begin{proof} Since $\lfloor\frac{6lp^i}{q-1}\rfloor$ can be written as $6u+v$, for some $u,v \in \mathbb{Z}$ such that $0\leq v\leq 5$, \eqref{eq-51} can be verified by considering the cases $v=0, 1, \ldots, 5$. For the case $v=0$ we have $\lfloor\frac{6lp^i}{q-1}\rfloor=6u$, and then it is easy to check that both the sides of \eqref{eq-51} are equal to zero. Similarly, for other values of $v$ one can verify the result. \end{proof} To prove Theorem \ref{MT1}, we will first express the number of points on the Hessian form of elliptic curves. Let $a\in \mathbb{F}_q$ be such that $a^3\neq 1$. Then the Hessian curve over $\mathbb{F}_q$ is given by the cubic equation \begin{align}\label{hessian1}
C_a: x^3+y^3+1=3axy. \end{align} We express the number of $\mathbb{F}_q$-points on $C_a$ in the following theorem. Let $C_a(\mathbb{F}_q)=\{(x, y)\in \mathbb{F}_q^2: x^3+y^3+1=3axy\}$ be the set of all $\mathbb{F}_q$-points on $C_a$. \begin{theorem}\label{hessian2} Let $q=p^r, p > 5$. Then \begin{align} \#C_a(\mathbb{F}_q)=\alpha-1+q-q\phi(-3a)\cdot{_2}G_2\left[ \begin{array}{cc}
\frac{1}{2}, & \frac{1}{2} \\
\frac{1}{6}, & \frac{5}{6}
\end{array}|\dfrac{1}{a^3}
\right]_q,\notag \end{align} where $\alpha=\left\{
\begin{array}{ll}
5-6\phi(-3), & \hbox{if~ $q\equiv 1\pmod{3}$;} \\
1, & \hbox{if~ $q\not\equiv 1\pmod{3}$.}
\end{array}
\right.$ \end{theorem} \begin{proof} We have $\#C_{a}(\mathbb{F}_{q})=\#\{(x,y)\in\mathbb{F}_{q}\times\mathbb{F}_{q}:\ P(x,y)=0\}$,\\ where $P(x,y)=x^{3}+y^{3}-3axy+1$. Using the identity \begin{align} \sum_{z\in\mathbb{F}_q}\theta(zP(x,y))=\left\{
\begin{array}{ll}
q, & \hbox{if $P(x,y)=0$;} \\
0, & \hbox{if $P(x,y)\neq0$,}
\end{array}
\right.\notag \end{align} we obtain \begin{align}\label{eq-53} q\cdot\#C_{a}(\mathbb{F}_{q})&=\sum_{x,y,z\in\mathbb{F}_{q}}\theta(zP(x,y))\notag\\ &=q^2+\sum_{z\in\mathbb{F}_{q}^{\times}}\theta(z)+\sum_{y,z\in\mathbb{F}_{q}^{\times}}\theta(zy^3)\theta(z)\notag\\ &~+\sum_{x,z\in\mathbb{F}_{q}^{\times}}\theta(zx^3)\theta(z)+\sum_{x,y,z\in\mathbb{F}_{q}^{\times}}\theta(z)\theta(zx^3) \theta(zy^3)\theta(-3axyz)\notag\\ &:=q^2+A+B+C+D. \end{align} Using Lemma \ref{lemma2}, Lemma \ref{fusi3} and Lemma \ref{lemma1}, we find $A$, $B$, $C$ and $D$ separately. We have \begin{align} A=\frac{1}{q-1}\sum_{l=0}^{q-2}G_{-l}\sum_{z\in\mathbb{F}_{q}^{\times}}T^{l}(z).\notag \end{align} The inner sum in the expression of $A$ is non zero only if $l=0$, and hence $A=-1$. We have \begin{align} B&=\sum_{y,z\in\mathbb{F}_{q}^{\times}}\theta(zy^3)\theta(z)\notag\\ &=\frac{1}{(q-1)^2}\sum_{y,z\in\mathbb{F}_{q}^{\times}}\sum_{l,m=0}^{q-2}G_{-m}T^m(zy^3)G_{-l}T^l(z)\notag\\ &=\frac{1}{(q-1)^2}\sum_{l,m=0}^{q-2}G_{-m}G_{-l}\sum_{z\in\mathbb{F}_{q}^{\times}}T^{l+m}(z) \sum_{y\in\mathbb{F}_{q}^{\times}}T^{3m}(y),\notag \end{align} which is non zero only if $l+m=0$ and $3m=0$. By considering the following two cases we find $B$.\\ Case 1: If $q\equiv 1\pmod{3}$ then $m=0,\frac{q-1}{3}$ or $\frac{2(q-1)}{3}$; and $l=0,-\frac{q-1}{3}$ or $-\frac{2(q-1)}{3}$. Hence, \begin{align} B&=G_{0}G_{0}+G_{-\frac{q-1}{3}}G_{\frac{q-1}{3}}+G_{-\frac{2(q-1)}{3}}G_{\frac{2(q-1)}{3}}\notag\\ &=1+2q.\notag \end{align} Case 2: If $q\not\equiv 1\pmod{3}$ then $l=m=0$, and hence $B=G_{0}G_{0}=1$. Also, \begin{align} C&=\sum_{x,z\in\mathbb{F}_{q}^{\times}}\theta(zx^3)\theta(z)\notag\\ &=B.\notag \end{align} Finally, \begin{align} D&=\sum_{x,y,z\in\mathbb{F}_{q}^{\times}}\theta(z)\theta(zx^3)\theta(zy^3)\theta(-3axyz)\notag\\ &=\frac{1}{(q-1)^4}\sum_{x,y,z\in\mathbb{F}_{q}^{\times}}\sum_{l,m,n,k=0}^{q-2}G_{-l}G_{-m}G_{-n}G_{-k}T^{l}(zx^3)\notag\\ &~\times T^{m}(zy^3)T^{n}(z)T^{k}(-3axyz)\notag\\ &=\frac{1}{(q-1)^4}\sum_{l,m,n,k=0}^{q-2}G_{-l}G_{-m}G_{-n}G_{-k}T^{k}(-3a)\notag\\ &\times~\sum_{x\in\mathbb{F}_{q}^{\times}}T^{3l+k}(x)\sum_{y\in\mathbb{F}_{q}^{\times}}T^{3m+k}(y) \sum_{z\in\mathbb{F}_{q}^{\times}}T^{l+m+n+k}(z),\notag \end{align} which is non zero only if $3l+k=0$, $3m+k=0$, and $l+m+n+k=0$. We now consider the following two cases.\\ Case 1: If $q\equiv 1\pmod{3}$ then $m=l$, $l+\frac{q-1}{3}$ or $l+\frac{2(q-1)}{3}$; $k=-3l$; and $n=l$, $l-\frac{q-1}{3}$ or $l-\frac{2(q-1)}{3}$, and hence \begin{align}\label{eq-54} D&=\frac{1}{q-1}\sum_{l=0}^{q-2}G_{-l}G_{-l}G_{-l}G_{3l}T^{-3l}(-3a)\notag\\ &~+\frac{2}{q-1}\sum_{l=0}^{q-2}G_{-l}G_{-l-\frac{q-1}{3}}G_{-l-\frac{2(q-1)}{3}}G_{3l}T^{-3l}(-3a). 
\end{align} Transforming $l\rightarrow l-\frac{q-1}{2}$, we have \begin{align} D&=\frac{1}{q-1}\sum_{l=0}^{q-2}G_{-l+\frac{q-1}{2}}G_{-l+\frac{q-1}{2}}G_{-l+\frac{q-1}{2}}G_{3l-\frac{q-1}{2}} T^{-3l+\frac{q-1}{2}}(-3a)\notag\\ &~+\frac{2}{q-1}\sum_{l=0}^{q-2}G_{-l+\frac{q-1}{2}}G_{-l+\frac{q-1}{6}}G_{-l-\frac{q-1}{6}}G_{3l-\frac{q-1}{2}} T^{-3l+\frac{q-1}{2}}(-3a)\notag\\ &=\frac{\phi(-3a)}{q-1}\sum_{l=0}^{q-2}G_{-l+\frac{q-1}{2}}G_{-l+\frac{q-1}{2}}G_{-l+\frac{q-1}{2}}G_{3l-\frac{q-1}{2}} T^{-3l}(-3a)\notag\\ &~+\frac{2\phi(-3a)}{q-1}\sum_{l=0}^{q-2}G_{-l+\frac{q-1}{2}}G_{-l+\frac{q-1}{6}}G_{-l-\frac{q-1}{6}}G_{3l-\frac{q-1}{2}} T^{-3l}(-3a).\notag \end{align} Using Davenport-Hasse relation for certain values of $m$ and $\psi$ we deduce the following relations: For $m=2$, $\psi=T^{-l}$, we have \begin{align} G_{-l+\frac{q-1}{2}}=\frac{G_{\frac{q-1}{2}}G_{-2l}T^l(4)}{G_{-l}},\notag \end{align} and for $m=2$, $\psi=T^{3l}$, we have \begin{align} G_{3l-\frac{q-1}{2}}=\frac{G_{\frac{q-1}{2}}G_{6l}T^{-3l}(4)}{G_{3l}}.\notag \end{align} For $m=6$, $\psi=T^{-l}$, we have \begin{align} &G_{-l+\frac{q-1}{2}}G_{-l+\frac{q-1}{3}}G_{-l+\frac{2(q-1)}{3}}G_{-l+\frac{q-1}{6}}G_{-l+\frac{5(q-1)}{6}}\notag\\ &=\frac{q^2\phi(-1)G_{\frac{q-1}{2}}G_{-6l}T^{6l}(6)}{G_{-l}},\notag \end{align} and for $m=3$, $\psi=T^{-l}$, we have \begin{align} G_{-l+\frac{q-1}{3}}G_{-l+\frac{2(q-1)}{3}}=\frac{qG_{-3l}T^{3l}(3)}{G_{-l}}.\notag \end{align} Using all these expressions and Lemma \ref{lemma2} and Lemma \ref{fusi3} we find that \begin{align} D&=\frac{\phi(-3a)}{q-1}\sum_{l=0}^{q-2}\frac{G_{-2l}G_{-2l}G_{-2l}G_{6l}G_{\frac{q-1}{2}}^4T^{-3l}(-3a)} {G_{-l}G_{-l}G_{-l}G_{3l}}\notag\\ &~+\frac{2\phi(-3a)}{q-1}\sum_{l=0}^{q-2}\frac{G_{-l+\frac{q-1}{2}}G_{-l+\frac{q-1}{3}}G_{-l+\frac{2(q-1)}{3}} G_{-l+\frac{q-1}{6}}G_{-l+\frac{5(q-1)}{6}}G_{3l-\frac{q-1}{2}}T^{-3l}(-3a)}{G_{-l+\frac{q-1}{3}}G_{-l+\frac{2(q-1)}{3}}}\notag\\ &=\frac{q^2\phi(-3a)}{q-1}\sum_{l=0}^{q-2}\frac{G_{-2l}G_{-2l}G_{-2l}G_{6l}T^{-3l}(-3a)}{G_{-l}G_{-l}G_{-l}G_{3l}}\notag\\ &~+\frac{2q^2\phi(-3a)}{q-1}\sum_{l=0}^{q-2}\frac{G_{-6l}G_{6l}T^{-3l}(-a)}{G_{3l}G_{-3l}}\notag\\ &=\frac{q^2\phi(-3a)}{q-1}\sum_{l=0}^{q-2}\frac{G_{-2l}^3G_{6l}}{G_{-l}^3G_{3l}}T^{-3l}(-3a)+ \frac{6q^2\phi(-3a)\phi(a)}{(q-1)q}\notag\\ &~+\frac{2q^2\phi(-3a)}{q-1}\sum_{l=0, l\neq\frac{q-1}{6},\frac{q-1}{2},\frac{5(q-1)}{6}}^{q-2}T^{3l}(\frac{1}{a})\notag\\ &=\frac{q^2\phi(-3a)}{q-1}\sum_{l=0}^{q-2}\frac{G_{-2l}^3G_{6l}}{G_{-l}^3G_{3l}}T^{-3l}(-3a)+ \frac{6q\phi(-3)}{q-1}\notag\\ &~+\frac{2q^2\phi(-3a)}{q-1}\sum_{l=0}^{q-2}T^{3l}(\frac{1}{a})-\frac{6q^2\phi(-3a)\phi(a)}{q-1}\notag\\ &=\frac{q^2\phi(-3a)}{q-1}\sum_{l=0}^{q-2}\frac{G_{-2l}^3G_{6l}}{G_{-l}^3G_{3l}}T^{-3l}(-3a)-6q\phi(-3)\notag\\ &=\frac{q^2\phi(-3a)}{q-1}\sum_{l=0,l\neq\frac{q-1}{2}}^{q-2}\frac{G_{-2l}^3G_{6l}}{G_{-l}^3G_{3l}}T^{-3l}(-3a) +\frac{1}{q-1}-6q\phi(-3).\notag \end{align} Taking $T=\overline{\omega}$ and using Gross-Koblitz formula we deduce that \begin{align} D&=\frac{q^2\phi(-3a)}{q-1}\sum_{l=0,l\neq\frac{q-1}{2}}^{q-2}\pi^{(p-1)\sum_{i=0}^{r-1} \{3\langle\frac{-2lp^i}{q-1}\rangle+\langle\frac{6lp^i}{q-1}\rangle-\langle\frac{3lp^i}{q-1}\rangle- 3\langle\frac{-lp^i}{q-1}\rangle\}}\notag\\ &~\times\overline{\omega}^l\left(-\frac{1}{27a^3}\right) \prod_{i=0}^{r-1}\frac{\Gamma_{p}^3(\langle\frac{-2lp^i}{q-1}\rangle)\Gamma_p(\langle\frac{6lp^i}{q-1}\rangle)} {\Gamma_{p}^3(\langle\frac{-lp^i}{q-1}\rangle)\Gamma_p(\langle\frac{3lp^i}{q-1}\rangle)}\notag\\ &~+\frac{1}{q-1}-6q\phi(-3).\notag \end{align} From Lemma 
\ref{lemma4} we deduce that \begin{align} D&=\frac{q^2\phi(-3a)}{q-1}\sum_{l=0,l\neq\frac{q-1}{2}}^{q-2}\pi^{(p-1)s}~~\overline{\omega}^l\left(-\frac{1}{a^3}\right)\notag\\ &~\times\prod_{i=0}^{r-1}\left\{\frac{\Gamma_{p}^3(\langle(\frac{1}{2}-\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle(\frac{1}{6}+\frac{l}{q-1})p^i\rangle)} {\Gamma_{p}^3(\langle\frac{p^i}{2}\rangle)\Gamma_p(\langle\frac{p^i}{6}\rangle)}\right\}\notag\\ &~\times\prod_{i=0}^{r-1}\left\{\frac{\Gamma_p(\langle(\frac{1}{2}+\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle(\frac{5}{6}+\frac{l}{q-1})p^i\rangle)} {\Gamma_p(\langle\frac{p^i}{2}\rangle)\Gamma_p(\langle\frac{5p^i}{6}\rangle)}\right\}\notag\\ &~+\frac{1}{q-1}-6q\phi(-3),\notag \end{align}
where $s=\sum_{i=0}^{r-1} \{3\langle\frac{-2lp^i}{q-1}\rangle+\langle\frac{6lp^i}{q-1}\rangle-\langle\frac{3lp^i}{q-1}\rangle- 3\langle\frac{-lp^i}{q-1}\rangle\}$. \begin{align} D&=\frac{q^2\phi(-3a)}{q-1}\sum_{l=0,l\neq\frac{q-1}{2}}^{q-2}\pi^{(p-1)s}~~\overline{\omega}^l\left(-\frac{1}{a^3}\right)\notag\\ &~\times\underbrace{\prod_{i=0}^{r-1}\left\{\frac{\Gamma_p(\langle(\frac{1}{2}-\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle(\frac{1}{2}+\frac{l}{q-1})p^i\rangle)} {\Gamma_p(\langle\frac{p^i}{2}\rangle)\Gamma_p(\langle\frac{p^i}{2}\rangle)}\right\}}\\ &\hspace{3.9cm}I_{l}\notag\\ &~\times\prod_{i=0}^{r-1}\left\{\frac{\Gamma_{p}^2(\langle(\frac{1}{2}-\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle(\frac{1}{6}+\frac{l}{q-1})p^i\rangle)\Gamma_p(\langle(\frac{5}{6}+\frac{l}{q-1})p^i\rangle)} {\Gamma_{p}^2(\langle\frac{p^i}{2}\rangle)\Gamma_p(\langle\frac{5p^i}{6}\rangle)}\right\}\notag\\ &~+\frac{1}{q-1}-6q\phi(-3).\notag \end{align} For $l\neq \frac{q-1}{2}$, we have \begin{align}\label{eq-55} I_{l}&=\prod_{i=0}^{r-1}\frac{\Gamma_p(\langle(\frac{1}{2}-\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle(\frac{1}{2}+\frac{l}{q-1})p^i\rangle)} {\Gamma_p(\langle\frac{p^i}{2}\rangle)\Gamma_p(\langle\frac{p^i}{2}\rangle)}\notag\\ &=\prod_{i=0}^{r-1}\frac{\Gamma_p(\langle(\frac{1}{2}-\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle(1-\frac{l}{q-1})p^i\rangle)\Gamma_p(\langle\frac{lp^i}{q-1}\rangle) \Gamma_p(\langle(\frac{1}{2}+\frac{l}{q-1})p^i\rangle)} {\Gamma_p(\langle\frac{p^i}{2}\rangle)\Gamma_p(\langle\frac{p^i}{2}\rangle)}\notag\\ &\times\frac{1}{\Gamma_p(\langle(1-\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle\frac{lp^i}{q-1}\rangle)}. \end{align} Applying Lemma \ref{lemma4} in equation \eqref{eq-55} we deduce that \begin{align}\label{eq-56} I_l&=\prod_{i=0}^{r-1}\frac{\Gamma_p(\langle\frac{-2lp^i}{q-1}\rangle)\Gamma_p(\langle\frac{2lp^i}{q-1}\rangle)} {\Gamma_p(\langle(1-\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle\frac{lp^i}{q-1}\rangle)}. \end{align} From \cite[Eqn. 2.9]{mccarthy2} we have that for $0<l<q-1$, $$\prod_{i=0}^{r-1}\Gamma_p(\langle(1-\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle\frac{lp^i}{q-1}\rangle)=(-1)^r\overline{\omega}^l(-1).$$ Putting this value in equation \eqref{eq-56}, and using Gross-Koblitz formula [Theorem \ref{thm4}], Lemma \ref{fusi3}, and the fact that $$\langle\frac{-2lp^i}{q-1}\rangle+\langle\frac{2lp^i}{q-1}\rangle=1,$$ we have \begin{align} I_l&=\frac{\pi^{(p-1)\sum_{i=0}^{r-1}\langle\frac{-2lp^i}{q-1}\rangle}\prod_{i=0}^{r-1}\Gamma_p\left(\langle\frac{-2lp^i}{q-1} \rangle\right) \pi^{(p-1)\sum_{i=0}^{r-1}\langle\frac{2lp^i}{q-1}\rangle}\prod_{i=0}^{r-1}\Gamma_p\left(\langle\frac{2lp^i}{q-1}\rangle\right)} {(-1)^r\overline{\omega}^l(-1)\pi^{(p-1)\sum_{i=0}^{r-1}\{\langle\frac{-2lp^i}{q-1}\rangle+\langle\frac{2lp^i}{q-1}\rangle\}}} \notag\\ &=\frac{G(\overline{\omega}^{~-2l})G(\overline{\omega}^{~2l})}{q\overline{\omega}^l(-1)}\notag\\ &=\frac{q~\overline{\omega}^{2l}(-1)}{q~\overline{\omega}^{l}(-1)}\notag\\ &=\overline{\omega}^{l}(-1).\notag \end{align} Using the above relation we obtain \begin{align}\label{eq-52} D&=\frac{q^2\phi(-3a)}{q-1}\sum_{l=0,l\neq\frac{q-1}{2}}^{q-2}\pi^{(p-1)s}~~ \overline{\omega}^l\left(\frac{1}{a^3}\right)\notag\\ &~\times\prod_{i=0}^{r-1}\left\{\frac{\Gamma_{p}^2(\langle(\frac{1}{2}-\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle(\frac{1}{6}+\frac{l}{q-1})p^i\rangle)\Gamma_p(\langle(\frac{5}{6}+\frac{l}{q-1})p^i\rangle)} {\Gamma_{p}^2(\langle\frac{p^i}{2}\rangle)\Gamma_p(\langle\frac{5p^i}{6}\rangle)}\right\}\notag\\ &~+\frac{1}{q-1}-6q\phi(-3). 
\end{align} Now \begin{align} s&=\sum_{i=0}^{r-1} \{3\langle\frac{-2lp^i}{q-1}\rangle+\langle\frac{6lp^i}{q-1}\rangle-\langle\frac{3lp^i}{q-1}\rangle- 3\langle\frac{-lp^i}{q-1}\rangle\}\notag\\ &=\sum_{i=0}^{r-1} \{3(\frac{-2lp^i}{q-1})+(\frac{6lp^i}{q-1})-(\frac{3lp^i}{q-1})- 3(\frac{-lp^i}{q-1})\}\notag\\ &~+\sum_{i=0}^{r-1} \{-3\lfloor\frac{-2lp^i}{q-1}\rfloor-\lfloor\frac{6lp^i}{q-1}\rfloor+\lfloor\frac{3lp^i}{q-1}\rfloor+ 3\lfloor\frac{-lp^i}{q-1}\rfloor\}\notag\\ &=\sum_{i=0}^{r-1} \{-3\lfloor\frac{-2lp^i}{q-1}\rfloor-\lfloor\frac{6lp^i}{q-1}\rfloor+\lfloor\frac{3lp^i}{q-1}\rfloor+ 3\lfloor\frac{-lp^i}{q-1}\rfloor\},\notag \end{align} which is an integer. Therefore equation \eqref{eq-52} becomes \begin{align} D&=\frac{q^2\phi(-3a)}{q-1}\sum_{l=0,l\neq\frac{q-1}{2}}^{q-2}(-p)^s~~ \overline{\omega}^l\left(\frac{1}{a^3}\right)\notag\\ &~\times\prod_{i=0}^{r-1}\left\{\frac{\Gamma_{p}^2(\langle(\frac{1}{2}-\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle(\frac{1}{6}+\frac{l}{q-1})p^i\rangle)\Gamma_p(\langle(\frac{5}{6}+\frac{l}{q-1})p^i\rangle)} {\Gamma_{p}^2(\langle\frac{p^i}{2}\rangle)\Gamma_p(\langle\frac{5p^i}{6}\rangle)}\right\}\notag\\ &~+\frac{1}{q-1}-6q\phi(-3). \end{align} Lemma \ref{lemma5} gives \begin{align} D&=\frac{q^2\phi(-3a)}{q-1}\sum_{l=0,l\neq\frac{q-1}{2}}^{q-2}\overline{\omega}^l\left(\frac{1}{a^3}\right) (-p)^{\sum_{i=0}^{r-1}\{-2\lfloor\langle\frac{p^i}{2}\rangle-\frac{lp^i}{q-1}\rfloor\}}\notag\\ &~\times(-p)^{\sum_{i=0}^{r-1}\{-\lfloor\langle\frac{-p^i}{6}\rangle+\frac{lp^i}{q-1}\rfloor -\lfloor\langle\frac{-5p^i}{6}\rangle+\frac{lp^i}{q-1}\rfloor\}}\notag\\ &~\times\prod_{i=0}^{r-1}\left\{\frac{\Gamma_{p}^2(\langle(\frac{1}{2}-\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle(\frac{1}{6}+\frac{l}{q-1})p^i\rangle)\Gamma_p(\langle(\frac{5}{6}+\frac{l}{q-1})p^i\rangle)} {\Gamma_{p}^2(\langle\frac{p^i}{2}\rangle)\Gamma_p(\langle\frac{5p^i}{6}\rangle)}\right\}\notag\\ &~+\frac{1}{q-1}-6q\phi(-3)\notag\\ &=\frac{q^2\phi(-3a)}{q-1}\sum_{l=0}^{q-2}\overline{\omega}^l\left(\frac{1}{a^3}\right) (-p)^{\sum_{i=0}^{r-1}\{-2\lfloor\langle\frac{p^i}{2}\rangle-\frac{lp^i}{q-1}\rfloor\}}\notag\\ &~\times(-p)^{\sum_{i=0}^{r-1}\{-\lfloor\langle\frac{-p^i}{6}\rangle+\frac{lp^i}{q-1}\rfloor -\lfloor\langle\frac{-5p^i}{6}\rangle+\frac{lp^i}{q-1}\rfloor\}}\notag\\ &~\times\prod_{i=0}^{r-1}\left\{\frac{\Gamma_{p}^2(\langle(\frac{1}{2}-\frac{l}{q-1})p^i\rangle) \Gamma_p(\langle(\frac{1}{6}+\frac{l}{q-1})p^i\rangle)\Gamma_p(\langle(\frac{5}{6}+\frac{l}{q-1})p^i\rangle)} {\Gamma_{p}^2(\langle\frac{p^i}{2}\rangle)\Gamma_p(\langle\frac{5p^i}{6}\rangle)}\right\}\notag\\ &~-\frac{q}{q-1}+\frac{1}{q-1}-6q\phi(-3)\notag\\ &=-1-6q\phi(-3)-q^2\phi(-3a)\cdot {_2}G_{2}\left[\begin{array}{cc}
\frac{1}{2} & \frac{1}{2} \\
\frac{1}{6} & \frac{5}{6}
\end{array}|\frac{1}{a^3} \right]_{q}. \end{align} Case 2: If $q\not\equiv 1 \pmod{3}$ then $m=l$, $k=-3l$, and $n=l$, and then \begin{align} D&=\frac{1}{q-1}\sum_{l=0}^{q-2}G_{-l}G_{-l}G_{-l}G_{3l}T^{-3l}(-3a),\notag \end{align} which is same as the first term of the equation \eqref{eq-54}. Thus we have $$D=-1-q^2\phi(-3a)\cdot {_2}G_{2}\left[\begin{array}{cc}
\frac{1}{2} & \frac{1}{2} \\
\frac{1}{6} & \frac{5}{6}
\end{array}|\frac{1}{a^3} \right]_{q}.$$ Substituting the values of $A$, $B$, $C$ and $D$ in equation \eqref{eq-53} we obtain the desired result. \end{proof} \noindent \textbf{Proof of Theorem \ref{MT1}}: Consider the elliptic curve $$E: y^2=x^3+mx+n,$$ where $m=-27d(d^3+8)$ and $n=27(d^6-20d^3-8)$. By the following transformation $x\rightarrow -\frac{36-9d^3+3dx-y}{6(9d^2+x)}$ and $y\rightarrow -\frac{36-9d^3+3dx+y}{6(9d^2+x)}$, we obtain the equivalent form $C_d$. In the proof of Theorem 1.2, Barman and Kalita \cite{BK1} proved that $$\#E(\mathbb{F}_q)+q=\#C_d(\mathbb{F}_q)+2+\phi(-3(8+92d^3+35d^6)).$$ For $d^3\neq 1$, from Theorem \ref{hessian2}, we have \begin{align}
a_q(E)&=q+1-\#E(\mathbb{F}_q)\notag\\
&=2q-1-\#C_d(\mathbb{F}_q)-\phi(-3(8+92d^3+35d^6))\notag\\
&=q-\alpha-\phi(-3(8+92d^3+35d^6))+q\phi(-3d)\cdot{_2}G_2\left[ \begin{array}{cc}
\frac{1}{2}, & \frac{1}{2} \\
\frac{1}{6}, & \frac{5}{6}
\end{array}|\dfrac{1}{d^3}
\right]_q,\notag \end{align} where $\alpha=\left\{
\begin{array}{ll}
5-6\phi(-3), & \hbox{if~ $q\equiv 1\pmod{3}$;} \\
1, & \hbox{if~ $q\not\equiv 1\pmod{3}$.}
\end{array}
\right.$\\ For $m, n \neq 0$ and $-\dfrac{27n^2}{4m^3}\neq 1$, we have $j(E)\neq 0, 1728$. Now, applying Theorem \ref{mc} over $\mathbb{F}_q$, we complete the proof of the theorem.
\end{document} |
\begin{document}
\title{Global existence and non-existence of stochastic parabolic equations }
\begin{abstract} This paper is concerned with the blowup phenomenon of stochastic parabolic equations, both on bounded domains and in the whole space. We introduce a new method to study the blowup phenomenon on bounded domains. Compared with the existing results, we remove the assumption that the solutions to stochastic heat equations are non-negative. The blowup phenomenon in the whole space is then obtained by using the properties of the heat kernel. We show that the solutions blow up in finite time for nontrivial initial data.
{\bf Keywords}: It\^{o}'s formula; Blowup; Stochastic heat equation; Impact of noise.
AMS subject classifications (2010): 35K20, 60H15, 60H40.
\end{abstract}
\baselineskip=15pt
\section{Introduction} \setcounter{equation}{0}
For deterministic partial differential equations, the finite time blowup phenomenon has been studied by many authors; see the book \cite{Hubook2018}. This problem is studied in two settings: on a bounded domain and in the whole space. On a bounded domain, the $L^p$-norm of the solution ($p>1$) may blow up in finite time. The methods used for bounded domains include Kaplan's first eigenvalue method, the concavity method and the comparison method, see Chapter 5 of \cite{Hubook2018}. The main result is the following: under the assumptions that the initial data is suitably large and that the nonlinear term $f(u)$ satisfies $f(u)\geq u^{1+\alpha}$ with $\alpha>0$, the solution of $u_t-\Delta u=f(u)$ with Dirichlet boundary condition blows up in finite time.
For the whole space, the following ``Fujita phenomenon'' has attracted much attention in the literature. Consider the following Cauchy problem
\bes\left\{\begin{array}{lll} u_t=\Delta u+u^p,\ \ \ &x\in\mathbb{R}^d, \ \ t>0,\ \ p>0,\\ u(0,x)=u_0(x), \ \ \ &x\in\mathbb{R}^d.
\end{array}\right.\lbl{1.1}\ees It has been proved that:
\begin{quote} (i) if $0<p<1$, then every nonnegative solution is global, but not necessarily unique;
(ii) if $1<p\leq1+\frac{2}{d}$, then any nontrivial, nonnegative solution blows up in finite time;
(iii) if $p>1+\frac{2}{d}$, then $u_0\in\mathcal{U}$ implies that $u(t,x,u_0)$ exists globally;
(iv) if $p>1+\frac{2}{d}$, then $u_0\in\mathcal{U_1}$ implies that $u(t,x,u_0)$ blows up in finite time,
\end{quote} where $\mathcal{U}$ and $\mathcal{U_1}$ are defined as follows
\begin{eqnarray*}
\mathcal{U}&=&\left\{v(x)|v(x)\in BC(\mathbb{R}^d,\mathbb{R}_+), v(x)\leq \delta e^{-k|x|^2},\ k>0,\delta=\delta(k)>0\right\},\\
\mathcal{U_1}&=&\left\{v(x)|v(x)\in BC(\mathbb{R}^d,\mathbb{R}_+), v(x)\geq c e^{-k|x|^2},\ k>0,c\gg1\right\}.
\end{eqnarray*} Here $BC=\{$ bounded and uniformly continuous functions $\}$, see Fujita \cite{F1966,F1970} and Hayakawa \cite{kH1973}.
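For instance, in dimension $d=1$ the critical (Fujita) exponent is
\begin{eqnarray*}
p_F:=1+\frac{2}{d}=3,
\end{eqnarray*}
so, by (ii) and (iii) above, every nontrivial nonnegative solution of $u_t=u_{xx}+u^p$ blows up in finite time when $1<p\leq3$, while for $p>3$ initial data in $\mathcal{U}$ lead to global solutions.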
It is easy to see that in the whole space there are four types of behaviour for problem (\ref{1.1}), namely, (1) unconditional global existence, although uniqueness may fail for certain solutions, (2) global existence for restricted initial data, (3) unconditional blowup, and (4) blowup for restricted initial data. The occurrence of these behaviours depends on the combined effect of the nonlinearity, represented by the parameter $p$, the size of the initial datum $u_0(x)$, represented by the choice of $\mathcal{U}$ or $\mathcal{U_1}$, and the dimension of the space.
Now, we recall some known results of stochastic partial differential equations (SPDEs). In this paper, we only focus on the stochastic parabolic equations. It is known that the existence and uniqueness of global solutions to SPDEs can be established under appropriate conditions (\cite{Cb2007,DW2014,LR2010,L2013,T2009}). For the finite time blowup phenomenon of stochastic parabolic equations, we first consider the case on bounded domain. Consider the following equation
\bes\left\{\begin{array}{lll}
du=(\Delta u+f(u))dt+\sigma(u)dW_t, \ \ \qquad t>0,&x\in D,\\[1.5mm]
u(x,0)=u_0(x)\geq0, \ \ \ &x\in D,\\[1.5mm]
u(x,t)=0, \qquad \qquad \qquad \qquad \qquad \qquad t>0, &x\in\partial D,
\end{array}\right.\lbl{1.2}\ees Da Prato-Zabczyk \cite{PZ1992} considered the existence of global solutions of (\ref{1.2}) with additive noise ($\sigma$ is constant). Manthey-Zausinger \cite{MZ1999} considered (\ref{1.2}), where $\sigma$ satisfied the global Lipschitz condition. Dozzi and L\'{o}pez-Mimbela \cite{DL2010} studied equation (\ref{1.2}) with $\sigma(u)=u$ and proved that if $f(u)\geq u^{1+\alpha}$ ($\alpha>0$) and the initial data is large enough, the solution will blow up in finite time, and that if $f(u)\leq u^{1+\beta}$ ($\beta$ is a certain positive constant) and the initial data is small enough, the solution will exist globally, see also \cite{NX2012}. A natural question arises: If $\sigma$ does not satisfy the global Lipschitz condition, what can we say about the solution? Will it blow up in finite time or exist globally? Chow \cite{C2009,C2011} answered part of this question. Lv-Duan \cite{LD2015} described the competition between the nonlinear term and the noise term for equation (\ref{1.2}). Bao-Yuan \cite{BY2014} and Li et al. \cite{LPJ2016} obtained the existence of local solutions of (\ref{1.2}) driven by jump processes and L\'{e}vy processes, respectively. For the blowup phenomenon of stochastic functional parabolic equations, see \cite{CL2012,LWW2016} for details.
In a somewhat different case, Mueller \cite{M1991} and, later, Mueller-Sowers \cite{MuS1993} investigated the problem of a noise-induced explosion for a special case of equation (\ref{1.2}), where $f(u)\equiv0,\,\sigma(u)=u^\gamma$ with $\gamma>0$ and $W(x,t)$ is a space-time white noise. It was shown that the solution explodes in finite time with positive probability when $\gamma>3/2$.
We remark that the method used to prove the finite time blowup on a bounded domain is a stochastic version of Kaplan's first eigenvalue method. In order to make sure that the inner product $(u,\phi)$ is positive, the authors first proved that the solutions of (\ref{1.2}) remain positive under some assumptions, see \cite{BY2014,C2009,C2011,LPJ2016,LD2015}. We find that in some special cases the positivity assumption can be removed. Moreover, in the present paper, we give a new method (a stochastic concavity method) to prove that the solutions blow up in finite time. The advantage of this method is that it does not require the positivity of the solutions.
For the whole space, Foondun et al. \cite{FLN2018} considered the finite time blowup phenomenon for the Cauchy problem of stochastic parabolic equations. Compared with the deterministic parabolic equations, they only obtained a result similar to type (4). In this paper, we establish results similar to types (1) and (3). The methods used here are the comparison principle and the properties of the heat kernel. We obtain phenomena that differ according to whether noise is present. Moreover, many types of noise are considered.
Compared with the results for deterministic partial differential equations, there is still much work to do, and we will study these issues in future work.
This paper is arranged as follows. In Sections 2 and 3, we consider the global existence and non-existence of solutions of stochastic parabolic equations on a bounded domain and in the whole space, respectively. The paper ends with a short discussion in Section 4.
Throughout this paper, we write $C$ as a general positive constant and $C_i$, $i=1,2,\cdots$ as a concrete positive constant.
\section{Bounded domain} \setcounter{equation}{0} In this section, we first recall some known results on bounded domains and then give some non-trivial generalizations. Consider the following SPDE
\bes\left\{\begin{array}{lll}
du=(\Delta u+f(u,x,t))dt+\sigma(u,\nabla u,x,t)dW_t, \ \ \qquad &t>0,\ x\in D,\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &\qquad\ \ x\in D,\\[1.5mm]
u(x,t)=0, \ \ \ \ &t>0,\ x\in\partial D,
\end{array}\right.\lbl{2.1}\ees where $\sigma$ is a given function, and $W(x,t)$ is a Wiener random field defined in a complete probability space $(\Omega,\mathcal {F},\mathbb{P})$ with a filtration $\mathcal {F}_t$. The Wiener random field has mean $\mathbb{E}W(x,t)=0$ and its covariance function $q(x,y)$ is defined by
\begin{eqnarray*} \mathbb{E}W(x,t)W(y,s)=(t\wedge s)q(x,y), \ \ \ x,y\in\mathbb{R}^n,
\end{eqnarray*} where $(t\wedge s)=\min\{t,s\}$ for $0\leq t,s\leq T$. The existence of strong solutions of (\ref{2.1}) has been studied by many authors \cite{Cb2007,PZ1992}. To consider positive solutions, one starts from the unique solution $u\in C(\bar D\times[0,T])\cap L^2((0,T);H^2)$ of equation (\ref{2.1}). Chow \cite{C2009,C2011} considered the finite time blowup problem for (\ref{2.1}), using the positivity of the solution to prove the finite time blowup. The positivity is guaranteed under the following conditions:
\begin{quote}
(P1) There exists a constant $\delta\geq0$ such that
\begin{eqnarray*}
\frac{1}{2} q(x,x)\sigma^2(r,\xi,x,t)-\sum_{i,j=1}^na_{ij}\xi_i\xi_j\leq\delta r^2
\end{eqnarray*}
for all $r\in\mathbb{R},x\in\bar D,\xi \in\mathbb{R}^n$ and $t\in[0,T]$;\\
(P2) The function $f(r,x,t)$ is continuous on $\mathbb{R}\times\bar D\times[0,T]$ and such that
$f(r,x,t)\geq0$ for $r\leq0$ and $x\in\bar D$, $t\in[0,T]$; and \\
(P3) The initial datum $u_0(x)$ on $\bar D$ is positive and continuous,
\end{quote}
\begin{prop}\lbl{p1.1}{\rm\cite[Theorem 3.3]{C2009}} Suppose that the conditions {\rm (P1),(P2)} and {\rm (P3)} hold true. Then the solution of the initial-boundary problem for the parabolic It\^{o}'s equation {\rm(\ref{2.1})} remains positive, i.e., $u(x,t)\geq 0$, a.s. for almost every $x\in D$ and for all $ t\in[0,T]$. \end{prop}
Let $\phi$ be the eigenfunction with respect to the first eigenvalue $\lambda_1$ on the bounded domain, i.e.,
\begin{eqnarray*}\left\{\begin{array}{llll} -\Delta \phi=\lambda_1 \phi, \ \ \ \ \ \ \ \ {\rm in} \ D,\\ \phi=0, \ \ \qquad\ \qquad {\rm on}\ \partial D.
\end{array}\right.\end{eqnarray*} And we normalize it in such a way that
\begin{eqnarray*} \phi(x)\geq0,\ \ \ \ \int_ D \phi(x)dx=1.
\end{eqnarray*} In paper \cite{C2011}, Chow assumed that the following conditions hold
\begin{quote}
(N1) There exist a continuous function $F(r)$ and a constant $r_1>0$ such that
$F$ is positive, convex and strictly increasing for $r\geq r_1$ and satisfies
\begin{eqnarray*} f(r,x,t)\geq F(r)
\end{eqnarray*} for $r\geq r_1$, $x\in\bar D$, $t\in[0,\infty)$;\\ (N2) There exists a constant $M_1>r_1$ such that $F(r)>\lambda_1r$ for $r\geq M_1$;\\ (N3) The positive initial datum satisfies the condition
\begin{eqnarray*} (\phi,u_0)=\int_ D u_0(x)\phi(x)dx>M_1;
\end{eqnarray*} (N4) The following condition holds
\begin{eqnarray*} \int_{M_1}^\infty\frac{dr}{F(r)-\lambda_1r}<\infty.
\end{eqnarray*}
\end{quote} Alternatively, he imposes the following conditions $S$ on the noise term:
\begin{quote} (S1) The correlation function $q(x,y)$ is continuous and positive for $x,y\in\bar D$ such that
\begin{eqnarray*} \int_ D\int_ D q(x,y)v(x)v(y)dxdy\geq q_1\int_ D v^2(x)dx
\end{eqnarray*} for any positive $v\in H$ and for some $q_1>0$;
(S2) There exist a positive constant $r_2$, continuous functions $\sigma_0(r)$ and $G(r)$ such that they are both positive, convex and strictly increasing for $r\geq r_2$ and satisfy
\begin{eqnarray*} \sigma(r,x,t)\geq \sigma_0(r)\ \ \ \ {\rm and} \ \ \ \ \sigma_0^2(r)\geq2G(r^2)
\end{eqnarray*} for $x\in\bar D$, $t\in[0,\infty)$;
(S3) There exists a constant $M_2>r_2$ such that $q_1G(r)>\lambda_1r$ for $r\geq M_2$;
(S4) The positive initial datum satisfies the condition
\begin{eqnarray*} (\phi,u_0)=\int_ D u_0(x)\phi(x)dx>M_2;
\end{eqnarray*}
(S5) The following integral is convergent so that
\begin{eqnarray*} \int_{M_2}^\infty\frac{dr}{q_1G(r)-\lambda_1r}<\infty.
\end{eqnarray*}
\end{quote}
\begin{prop}\lbl{p1.2} {\rm\cite[Theorem 3.1]{C2011}} Suppose the initial-boundary value problem {\rm(\ref{2.1})} has a unique local solution and the conditions {\rm(P1)-(P3)} are satisfied, where $\sigma$ does not depend on $\nabla u$. In addition, we assume that either the conditions {\rm(N1)-(N4)} or the alternative conditions {\rm(S1)-(S5)} given above hold true. Then, for a real number $p>0$, there exists a constant $T_p>0$ such that
\begin{eqnarray*}
\lim\limits_{t\rightarrow T_p-}\mathbb{E}\|u \|_p
=\lim\limits_{t\rightarrow T_p-}\mathbb{E}\left(\int_ D|u(x,t)|^pdx\right)^\frac{1}{p}=\infty,
\end{eqnarray*} where $p\geq1$ under conditions $N$, while $p\geq2$ under conditions $S$. \end{prop}
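To fix ideas, in the simplest one-dimensional case $D=(0,\pi)$ the normalization introduced above gives
\begin{eqnarray*}
\lambda_1=1,\qquad \phi(x)=\frac{1}{2}\sin x,
\end{eqnarray*}
since $-(\sin x)''=\sin x$, $\sin 0=\sin\pi=0$ and $\int_0^\pi\sin x\,dx=2$. In this case the conditions (N3) and (S4) simply require the weighted average $\frac{1}{2}\int_0^\pi u_0(x)\sin x\,dx$ of the initial datum to exceed $M_1$ and $M_2$, respectively.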
The positivity of solutions is needed for the case in which the nonlinear term induces the finite time blowup. But in a special case, we can show that the positivity assumption can be removed. Now, we consider the following SPDE
\bes\left\{\begin{array}{lll}
du=\Delta udt+\sigma(u,x,t)dW(x,t), \ \ \qquad &t>0,\ x\in D,\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &\qquad\ \ x\in D,\\[1.5mm]
u(x,t)=0, \ \ \ \ &t>0,\ x\in\partial D,
\end{array}\right.\lbl{2.3}\ees where $W(x,t)$ is time-space white noise and $D\subset\mathbb{R}$ is an interval in $\mathbb{R}$.
\begin{theo}\lbl{t2.1} Assume that the initial-boundary value problem (\ref{2.3}) has a unique local solution. Assume further that $C_1|u|^{\gamma}\leq|\sigma(u,x,t)|\leq C_2|u|^{\gamma_1}$ with $C_1>0$ and $\gamma_1\geq\gamma>1$, that $u_0\geq0$, and that
\begin{eqnarray*} \left(\int_ D u_0(x)\phi(x)dx\right)^{2(\gamma-1)}\geq\frac{\lambda_1}{q_1C_1^2}.
\end{eqnarray*} Then there exist constants $T^*>0$ and $p\geq2\gamma_1$ such that
\begin{eqnarray*}
\lim\limits_{t\rightarrow T^*-}\mathbb{E}\|u_t\|^p_{L^p}
=\lim\limits_{t\rightarrow T^*-}\mathbb{E}\int_D|u(x,t)|^pdx=\infty.
\end{eqnarray*}
\end{theo}
{\bf Proof.}
We will prove the theorem by contradiction. Suppose that finite time blowup does not occur. Then there exist a global solution $u$ and $p\geq2\gamma_1$ such that for any $T>0$
\begin{eqnarray*}
\sup_{0\leq t\leq T}\mathbb{E}\|u(\cdot,t)\|^p_{L^p}<\infty,
\end{eqnarray*} which implies that
\begin{eqnarray*}
\sup_{0\leq t\leq T}\mathbb{E}\Big|\int_ D u(x,t)\phi(x)dx\Big|^2\leq\|\phi\|^2_{L^q(D)}\sup_{0\leq t\leq T}\left(\mathbb{E}\|u(\cdot,t)\|^p_{L^p}\right)^{2/p}<\infty,
\end{eqnarray*} where $1/p+1/q=1$, $\phi$ is defined as below Proposition \ref{p1.1} and satisfies $\int_ D\phi(x)dx=1$. Define
\begin{eqnarray*} \hat u(t):=\int_ D u(x,t)\phi(x)dx.
\end{eqnarray*} By applying It\^{o}'s formula to $\hat u^2(t)$, we get
\bes \hat u^2(t)&=&(u_0,\phi)^2-2\lambda_1\int_0^t\hat u^2(s)ds+2\int_0^t\int_ D \hat u(s)\sigma(u,x,s)\phi(x)d W(x,s)dx\nonumber\\ &&+\int_0^t\int_ D\sigma^2(u,x,s)\phi^2(x)dxds
\lbl{2.4}\ees We note that the stochastic integral is in general only a local martingale; thus we need to use a stopping time argument. Let
\begin{eqnarray*} \tau_n=\inf\{t\geq0:\ \int_0^t\int_ D\sigma^2(u,x,s)\phi^2(x)dxds\geq n\}.
\end{eqnarray*} Let $\eta(t\wedge\tau_n)=\mathbb{E}\hat u^2(t\wedge\tau_n)$. By taking an expectation over (\ref{2.4}), we obtain
\begin{eqnarray*} \eta(t\wedge\tau_n)&=&(u_0,\phi)^2-2\lambda_1\int_0^{t\wedge\tau_n}\eta(s)ds +\int_0^{t\wedge\tau_n}\mathbb{E}\int_ D\sigma^2(u,x,s)\phi^2(x)dxds.
\end{eqnarray*} Noting that
\begin{eqnarray*} \eta(t\wedge\tau_n)\leq(u_0,\phi)^2 +\int_0^{t}\mathbb{E}\int_ D\sigma^2(u,x,s)\phi^2(x)dxds,
\end{eqnarray*} and letting $n\to\infty$, we have
\begin{eqnarray*} \eta(t)=(u_0,\phi)^2-2\lambda_1\int_0^{t}\eta(s)ds +\int_0^{t}\mathbb{E}\int_ D\sigma^2(u,x,s)\phi^2(x)dxds.
\end{eqnarray*} Using the assumptions $\inf_{x,y\in D}q(x,y)\geq q_1>0$ and $\sigma^2(u,x,s)\geq C_1^2|u|^{2\gamma}$ with $\gamma>1$, together with Jensen's inequality, we have
\begin{eqnarray*} \eta(t)&\geq&\eta(0)-2\lambda_1\int_0^t\eta(s)ds +2q_1C_1^2\int_0^t\eta^\gamma(s)ds,
\end{eqnarray*} or, in the differential form,
\begin{eqnarray*}\left\{\begin{array}{llll} \displaystyle\frac{d\eta(t)}{dt}=-2\lambda_1\eta(t)+2q_1C_1^2 \eta^\gamma(t)\\[1.5mm] \eta(0)=\eta_0.
\end{array}\right. \end{eqnarray*} Noting that
\begin{eqnarray*} \eta(0)=\left(\int_ D u_0(x)\phi(x)dx\right)^{2}\geq\left(\frac{\lambda_1}{q_1C_1^2}\right)^{\frac{1}{(\gamma-1)}},
\end{eqnarray*} we have $\eta'(0)\geq0$, and hence $\eta(t)\geq\eta(0)>0$ for all $t\geq0$. An integration of the differential equation gives
\begin{eqnarray*} T\leq\int_{\eta_0}^{\eta(T)}\frac{dr}{C^2_1q_1r^{\gamma}-\lambda_1r} \leq\int_{\eta_0}^\infty\frac{dr}{C^2_1q_1r^{\gamma}-\lambda_1r}<\infty,
\end{eqnarray*} which implies $\eta(t)$ must blow up at a time $T^*\leq\int_{\eta_0}^\infty\frac{dr}{C^2_1q_1r^{\gamma}-\lambda_1r}$. Hence this is a contradiction. This completes the proof. $\Box$
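To illustrate the blowup-time bound in a special case, take $\gamma=2$ and assume that the inequality on the initial datum in Theorem \ref{t2.1} is strict, so that $q_1C_1^2\eta_0>\lambda_1$. A partial fraction decomposition then gives
\begin{eqnarray*}
\int_{\eta_0}^\infty\frac{dr}{C^2_1q_1r^{2}-\lambda_1r}
=\frac{1}{\lambda_1}\log\frac{q_1C_1^2\eta_0}{q_1C_1^2\eta_0-\lambda_1},
\end{eqnarray*}
so the second moment of $(u,\phi)$ must blow up no later than this explicit time.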
The advantage of Theorem \ref{t2.1} is that the positivity of the solution is not needed. In the above theorem, we assumed that the initial-boundary value problem (\ref{2.3}) has a unique local solution. In fact, if $\sigma$ satisfies a local Lipschitz condition, one can follow the method of \cite{T2009} to obtain the existence and uniqueness of a local solution, see also \cite{eP1979}. In \cite{eP1979,T2009}, the authors established the existence and uniqueness of energy solutions, which belong to $H_0^1(D)$ for any fixed time almost surely. Since $H^{\frac{1}{2}+}(D)\hookrightarrow L^\infty(D)$ for $D\subset\mathbb{R}$, our assumptions are valid.
Consider now the case in which $\sigma$ does not depend on $\xi$, that is, $\sigma:=\sigma(u,x,t)$. Then assumption (P1) forces $\sigma(0,x,t)=0$, which implies that for additive noise the solutions may not remain positive. Hence the first eigenvalue method fails. Next, we introduce another method. For simplicity, we consider the following SPDE
\bes\left\{\begin{array}{lll}
du=[\Delta u+|u|^{p-1}u]dt+\sigma(x,t)dB_t, \ \ \qquad &t>0,\ x\in D,\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &\qquad\ \ x\in D,\\[1.5mm]
u(x,t)=0, \ \ \ \ &t>0,\ x\in\partial D,
\end{array}\right.\lbl{2.5}\ees where $B_t$ is an one-dimensional Brownian motion. If the initial data belongs to $H^1(D)$, Debussche et al. \cite{DMH2015} proved the solution of (\ref{2.5}) belongs to $H^3_0(D)$ during the lifespan.
\begin{theo}\lbl{t2.2} Suppose that $p>1$ and $u_0$ satisfies
\begin{eqnarray*}
-\frac{1}{2}\int_D|\nabla u_0(x)|^2dx+\frac{1}{p+1}\int_D|u_0(x)|^{p+1}dx-\frac{1}{2}\int_0^\infty\mathbb{E}\int_D|\nabla\sigma(x,t)|^2dxdt>0,
\end{eqnarray*} then the solution of (\ref{2.5}) must blow up in finite time in the mean square sense.
\end{theo}
{\bf Proof.} We will prove the theorem by contradiction. First we suppose that there exists a global solution $u$ such that
\begin{eqnarray*} \sup_{t\in[0,T]}\mathbb{E}\int_Du^2dx<\infty
\end{eqnarray*} for any $T>0$. Similar to the proof of Theorem \ref{t2.1}, by using It\^{o} formula, we have
\begin{eqnarray*}
\mathbb{E}\int_Du^2-\int_Du^2_0=-2\mathbb{E}\int_0^t\int_D|\nabla u|^2
+2\mathbb{E}\int_0^t\int_D|u|^{p+1}+\mathbb{E}\int_0^t\int_D|\sigma(x,s)|^2.
\end{eqnarray*} Denote
\begin{eqnarray*} v(t)=\mathbb{E}\int_Du^2, \ \ \ h(t)=\mathbb{E}\int_D\left(-2|\nabla u|^2
+2|u|^{p+1}+|\sigma(x,t)|^2\right),
\end{eqnarray*} then we have
\begin{eqnarray*} v(t)-v(0)=\int_0^th(s)ds.
\end{eqnarray*} Let
\begin{eqnarray*} I(t)=\int_0^tv(s)ds+A,\ \ \ A\ {\rm is \ a\ positive\ constant},
\end{eqnarray*} then we have $I'(t)=v(t),\ I''(t)=h(t)$. Set
\begin{eqnarray*} J(t)= \mathbb{E}\int_D\left(-\frac{1}{2}|\nabla u|^2
+\frac{1}{p+1}|u|^{p+1}\right).
\end{eqnarray*} It\^{o} formula implies that
\begin{eqnarray*}
&&\frac{1}{2}\mathbb{E}\int_D|\nabla u|^2-\frac{1}{2}\mathbb{E}\int_D|\nabla u_0|^2\\
&=&-\int_0^t\mathbb{E}\int_D\Delta u(\Delta u+|u|^{p-1}u)+\frac{1}{2}\int_0^t\mathbb{E}\int_D|\nabla\sigma(x,t)|^2,
\end{eqnarray*} and
\begin{eqnarray*}
&&\frac{1}{p+1}\mathbb{E}\int_D|u|^{p+1}-\frac{1}{p+1}\mathbb{E}\int_D|u_0|^{p+1}\\
&=&\int_0^t\mathbb{E}\int_D|u|^{p-1}u(\Delta u+|u|^{p-1}u)+\frac{p}{2}\int_0^t\mathbb{E}\int_D|u|^{p-1}\sigma^2(x,t).
\end{eqnarray*} Therefore, we have
\begin{eqnarray*} J(t)=J(0)+\int_0^t\mathbb{E}\int_D(\Delta u+|u|^{p-1}u)^2-\frac{1}{2}\int_0^t\mathbb{E}\int_D|\nabla\sigma(x,s)|^2
+\frac{p}{2}\int_0^t\mathbb{E}\int_D|u|^{p-1}\sigma^2(x,s).
\end{eqnarray*} Comparing $I''(t)$ with $J(t)$, we have, for $0<\delta\leq\frac{p-1}{2}$,
\begin{eqnarray*} I''(t)=h(t)\geq4(1+\delta)J(t).
\end{eqnarray*} Clearly,
\begin{eqnarray*} I'(t)=v(t)&=&v(0)+\int_0^th(s)ds\\
&=&v(0)+\int_0^t\mathbb{E}\int_D|\sigma(x,t)|^2+\int_0^t\mathbb{E}\int_D\left(-2|\nabla u|^2
+2|u|^{p+1}\right)dxds\\
&=&v(0)+\int_0^t\mathbb{E}\int_D|\sigma(x,t)|^2+\int_0^t\mathbb{E}\int_D\left(2u\Delta u
+2|u|^{p+1}\right)dxds.
\end{eqnarray*} It follows that, for any $\varepsilon>0$,
\begin{eqnarray*} I'(t)^2&\leq&4(1+\varepsilon)\left[\int_0^t\mathbb{E}\int_D\left(\Delta u
+|u|^{p-1}u\right)^2dxds\right]\left[\int_0^t\mathbb{E}\int_Du^2dxds\right]\\
&&+\frac{1}{1+\varepsilon}\left[v(0)+\int_0^t\mathbb{E}\int_D|\sigma(x,t)|^2\right]^2.
\end{eqnarray*} Combining the above estimates, we obtain
\begin{eqnarray*} &&I''(t)I(t)-(1+\alpha)I'(t)^2\\
&\geq&4(1+\delta)\left[J(0)+\int_0^t\mathbb{E}\int_D(\Delta u+|u|^{p-1}u)^2-\frac{1}{2}\int_0^t\mathbb{E}\int_D|\nabla\sigma(x,s)|^2\right.\\
&&\left.+\frac{p}{2}\int_0^t\mathbb{E}\int_D|u|^{p-1}\sigma^2(x,s)\right] \times\left[\int_0^t\int_Du^2dxds+A\right]\\ &&-4(1+\alpha)(1+\varepsilon)\left[\int_0^t\mathbb{E}\int_D\left(\Delta u
+|u|^{p-1}u\right)^2dxds\right]\left[\int_0^t\mathbb{E}\int_Du^2dxds\right]\\
&&-\frac{(1+\alpha)}{1+\varepsilon}\left[v(0)+\int_0^t\mathbb{E}\int_D|\sigma(x,t)|^2\right]^2
\end{eqnarray*} Now we choose $\varepsilon$ and $\alpha$ small enough such that
\begin{eqnarray*} 1+\delta>(1+\alpha)(1+\varepsilon).
\end{eqnarray*} By assumption,
\begin{eqnarray*}
J(0)-\frac{1}{2}\int_0^t\mathbb{E}\int_D|\nabla\sigma(x,s)|^2>0.
\end{eqnarray*} We can choose $A$ large enough such that
\begin{eqnarray*} I''(t)I(t)-(1+\alpha)I'(t)^2>0,
\end{eqnarray*} which implies that
\begin{eqnarray*} \frac{d}{dt}\left(\frac{I'(t)}{I^{1+\alpha}(t)}\right)>0.
\end{eqnarray*} Then we have
\begin{eqnarray*} \frac{I'(t)}{I^{1+\alpha}(t)}>\frac{I'(0)}{I^{1+\alpha}(0)}\ \ \ {\rm for}\ t>0.
\end{eqnarray*} Since $I'(0)=v(0)=\int_Du_0^2dx>0$, integrating this inequality gives $I^{-\alpha}(t)\leq I^{-\alpha}(0)-\alpha\frac{I'(0)}{I^{1+\alpha}(0)}\,t$, so $I^{-\alpha}(t)$ would reach zero in finite time. It follows that $I(t)$ cannot remain finite for all $t$. This is a contradiction. The proof is complete. $\Box$
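We also remark that the hypothesis on $u_0$ in Theorem \ref{t2.2} is easy to satisfy whenever the total noise intensity $\int_0^\infty\mathbb{E}\int_D|\nabla\sigma(x,t)|^2dxdt$ is finite: for any fixed nonzero $w\in H_0^1(D)\cap L^{p+1}(D)$ and $u_0=\kappa w$, we have
\begin{eqnarray*}
-\frac{\kappa^2}{2}\int_D|\nabla w|^2dx+\frac{\kappa^{p+1}}{p+1}\int_D|w|^{p+1}dx\to\infty \ \ {\rm as}\ \kappa\to\infty,
\end{eqnarray*}
since $p+1>2$, so the required positivity holds for all sufficiently large $\kappa$.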
\begin{remark}\lbl{r2.1} The advantage of the concavity method is that we do not use the positivity of solutions. Meanwhile, the disadvantage of Theorem \ref{t2.2} is that it only deals with additive noise. For multiplicative noise, when we treat the term $\mathbb{E}\int_D|\nabla u|^2$ by It\^{o}'s formula, the term $-\frac{1}{2}\int_0^t\mathbb{E}\int_D|\nabla\sigma(u)|^2$ appears, and we cannot control this term.
\end{remark}
\begin{remark}\lbl{r2.2} The effect of noise on the blowup problem can be described as follows:
(i) For additive noise, without the help of the nonlinear term, the solutions will not blow up in finite time; moreover, even if the solutions blow up in finite time without noise, the additive noise makes finite time blowup harder to achieve. In other words, the assumption on the initial data becomes stronger when additive noise is added.
(ii) For multiplicative noise, even without the help of the nonlinear term, the solutions blow up in finite time under some assumptions on the initial data.
\end{remark}
Looking back at Proposition \ref{p1.2} and Theorems \ref{t2.1} and \ref{t2.2}, we find that the finite time blowup appears in the $L^p$-norm of the solutions with $p>1$. One may ask what happens in the case $0<p<1$. The following result answers this question. Consider the following stochastic parabolic equation
\bes\left\{\begin{array}{lll}
du=[\Delta u+f(u)]dt+\sigma(u) dW(x,t), \ \ \qquad &t>0,\ x\in D,\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &\qquad\ \ x\in D,\\[1.5mm]
u(x,t)=0, \ \ \ \ &t>0,\ x\in\partial D.
\end{array}\right.\lbl{2.12}\ees
\begin{theo}\lbl{t2.3} Assume $f(r)\geq0$ for $r\leq0$. Then we have:
(i) Assume further that $f(r)\geq C_0r^p$, $q(x,y)\leq q_0$ for $x,y\in D$ and $\sigma^2(u)\leq C_1u^2$. If the initial data satisfies
\begin{eqnarray*} \left(\int_Du_0(x)\phi(x)dx\right)^{p-1}>\frac{\hat\lambda}{C_0\epsilon}, \ \ \hat\lambda=\epsilon\lambda_1+\frac{\epsilon}{2}(1-\epsilon)q_0C_1^2.
\end{eqnarray*} then the solution $u(x,t)$ of (\ref{2.12}) will blow up in finite time in the $L^1$-norm and in the $\epsilon$-order moment, where $0<\epsilon<1$ and $p>1$, i.e., there exists $T>0$ such that
\begin{eqnarray*}
\mathbb{E}\|u(\cdot,t)\|^\epsilon_{L^1(D)}\to\infty,\ \ \ {\rm as}\ \ \ t\to T;
\end{eqnarray*}
(ii) Assume further that $f(r)\leq C_0r^p$, $q(x,y)\geq q_1$ for $x,y\in D$ and $\frac{1}{C_1}u^m\leq\sigma^2(u)\leq C_2u^m$. If $m>p>1$, $(m-p)(2m-1)>mp$ and the initial data are bounded, then the solution $u(x,t)$ of (\ref{2.12}) exists globally in the following sense:
$\mathbb{E}[|(u,\phi)|^\epsilon]<\infty$ for any $t>0$.
\end{theo}
{\bf Proof.} (i) It follows from Proposition \ref{p1.1} that the solution of (\ref{2.12}) remains positive. As in the proof of Theorem \ref{t2.1}, we argue by contradiction. Suppose the claim is false. Then there exists a global positive solution $u$ such that for any $T>0$
\begin{eqnarray*}
\sup_{0\leq t\leq T}\mathbb{E}\|u(\cdot,t)\|^\epsilon_{L^1(D)}<\infty,
\end{eqnarray*} which implies that
\begin{eqnarray*}
\sup_{0\leq t\leq T}\mathbb{E}\left(\int_ D u(x,t)\phi(x)dx\right)^\epsilon\leq\|\phi\|^\epsilon_{L^\infty(D)}\sup_{0\leq t\leq T}\mathbb{E}\|u(\cdot,t)\|^\epsilon_{L^1(D)}<\infty.
\end{eqnarray*} Set $\hat u=(u,\phi)$. It\^{o} formula gives that
\bes \hat u^\epsilon(t)&=&(u_0,\phi)^\epsilon-\epsilon\lambda_1\int_0^t\hat u^\epsilon(s)ds +\epsilon\int_0^t\hat u(s)^{\epsilon-1}\int_ Df(u)\phi dxds\nonumber\\ &&+ \epsilon\int_0^t\int_ D \hat u(s)^{\epsilon-1}\sigma(u)\phi(x)d W(x,s)dx\nonumber\\ &&+\frac{\epsilon(\epsilon-1)}{2}\int_0^tu(s)^{\epsilon-2}\int_ D\int_ D q(x,y) \sigma(u)\phi(x)\sigma(u)\phi(y)dxdyds
\lbl{2.6}\ees Let $\eta(t)=\mathbb{E}\hat u^\epsilon(t)$. Similar to the proof of Theorem \ref{t2.1}, by taking an expectation over (\ref{2.6}), we obtain
\begin{eqnarray*} \eta(t)&=&(u_0,\phi)^\epsilon-\epsilon\lambda_1\int_0^t\eta(s)ds+ \epsilon\int_0^t\mathbb{E}\hat u(s)^{\epsilon-1}\int_ Df(u)\phi dxds\nonumber\\ &&+\frac{\epsilon(\epsilon-1)}{2}\int_0^t\mathbb{E}u(s)^{\epsilon-2}\int_ D\int_ D q(x,y) \sigma(u)\phi(x)\sigma(u)\phi(y)dxdyds.
\end{eqnarray*} Using the assumptions $\sup_{x,y\in D}q(x,y)\leq q_0$ and $\sigma^2(u)\leq C_1u^2$, together with Jensen's inequality, we have
\begin{eqnarray*} \eta(t)&\geq&\eta(0)-\varepsilon\lambda_1\int_0^t\eta(s)ds+C_0\epsilon\int_0^t\eta^{\frac{p+\epsilon-1}{\epsilon}}(s)ds -\frac{\epsilon}{2}(1-\epsilon)q_0C_1^2\int_0^t\eta(s)ds,
\end{eqnarray*} or, in the differential form,
\begin{eqnarray*}\left\{\begin{array}{llll} \displaystyle\frac{d\eta(t)}{dt}=-\hat\lambda\eta(t)+C_0\epsilon\eta^{\frac{p+\epsilon-1}{\epsilon}}(t)\\[1.5mm] \eta(0)=\eta_0.
\end{array}\right. \end{eqnarray*} Note that the assumption on $u_0$ gives $\eta'(0)>0$, and hence $\eta(t)\geq\eta(0)>0$ for all $t$. An integration of the differential equation gives
\begin{eqnarray*} T\leq\int_{\eta_0}^{\eta(T)}\frac{dr}{C_0\epsilon r^{\frac{p+\epsilon-1}{\epsilon}}-\hat\lambda r} \leq\int_{\eta_0}^\infty\frac{dr}{C_0\epsilon r^{\frac{p+\epsilon-1}{\epsilon}}-\hat\lambda r}<\infty,
\end{eqnarray*} which implies $\eta(t)$ must blow up at a time $T^*\leq\int_{\eta_0}^\infty\frac{dr}{C_0\epsilon r^{\frac{p+\epsilon-1}{\epsilon}}-\hat\lambda r}$. Hence this is a contradiction. Thus we obtain the desired result.
(ii) Define
\begin{eqnarray*} \tau_n=\inf\{t>0, \ \ (u,\phi)^\epsilon>n\}.
\end{eqnarray*} Set $\hat u=(u,\phi)$. By using It\^{o} formula, for $t\leq\tau_n$, we have
\bes \hat u^\epsilon(t)&=&(u_0,\phi)^\epsilon-\epsilon\lambda_1\int_0^t\hat u^\epsilon(s)ds +\epsilon\int_0^t\hat u(s)^{\epsilon-1}\int_ Df(u)\phi dxds\nonumber\\ &&+ \epsilon\int_0^t\int_ D \hat u(s)^{\epsilon-1}\sigma(u)\phi(x)d W(x,s)dx\nonumber\\ &&+\frac{\epsilon(\epsilon-1)}{2}\int_0^tu(s)^{\epsilon-2}\int_ D\int_ D q(x,y) \sigma(u)\phi(x)\sigma(u)\phi(y)dxdyds
\lbl{2.7}\ees Let $\eta(t)=\mathbb{E}\hat u^\epsilon(t)$. By taking an expectation over (\ref{2.7}), we obtain
\bes \eta(t)&=&(u_0,\phi)^\epsilon-\epsilon\lambda_1\int_0^t\eta(s)ds+ \epsilon\int_0^t\mathbb{E}\hat u(s)^{\epsilon-1}\int_ Df(u)\phi dxds\nonumber\\ &&+\frac{\epsilon(\epsilon-1)}{2}\int_0^t\mathbb{E}u(s)^{\epsilon-2}\int_ D\int_ D q(x,y) \sigma(u)\phi(x)\sigma(u)\phi(y)dxdyds \nonumber\\ &\leq&\eta(0)-\epsilon\lambda_1\int_0^t\eta(s)ds+C_0
\epsilon\int_0^t\mathbb{E}\hat u(s)^{\epsilon-1}\int_ D|u|^p\phi dxds\nonumber\\ &&+\frac{\epsilon(\epsilon-1)}{2}\int_0^t\mathbb{E}u(s)^{\epsilon-2}\int_ D\int_ D q(x,y) \sigma(u)\phi(x)\sigma(u)\phi(y)dxdyds.
\lbl{2.8}\ees H\"{o}lder's inequality and the $\varepsilon$-Young inequality yield that
\begin{eqnarray*}
&&C_0\epsilon\hat u(s)^{\epsilon-1}\int_ D|u|^p\phi dx\\
&\leq& C_0\epsilon\hat u(s)^{\epsilon-1}\left(\int_ D|u|^m\phi dx\right)^{\frac{p}{m}}\\
&\leq&\frac{\epsilon q_1(1-\epsilon)}{4C_1}u(s)^{\epsilon-2}\left(\int_ D|u|^m\phi dx\right)^2+C
u(s)^{\frac{2m}{2m-p}(2p-p\epsilon-1+\epsilon)}.
\end{eqnarray*} Substituting the above inequality into (\ref{2.8}) and using the assumptions on $\sigma$, we have
\bes \eta(t) &\leq&\eta(0)-\epsilon\lambda_1\int_0^t\eta(s)ds+C\int_0^tu(s)^{\frac{2m}{2m-p}(2p-p\epsilon-1+\epsilon)}ds\nonumber\\ &&
-\int_0^t\frac{\epsilon q_1(1-\epsilon)}{2C_1}u(s)^{\epsilon-2}\left(\int_ D|u|^m\phi dx\right)^2ds\nonumber\\ &\leq&\eta(0)-\epsilon\lambda_1\int_0^t\eta(s)ds+C\int_0^tu(s)^{\frac{2m}{2m-p}(2p-p\epsilon-1+\epsilon)}ds\nonumber\\ && -\int_0^t\frac{\epsilon q_1(1-\epsilon)}{2C_1}u(s)^{2m+\epsilon-2}ds.
\lbl{2.9} \ees The assumption $(m-p)(2m-1)>mp$ gives
\begin{eqnarray*} \epsilon<\frac{2m}{2m-p}(2p-p\epsilon-1+\epsilon)<2m+\epsilon-2.
\end{eqnarray*} Noting that for any $r<m<n$ and $u>0$, we have
\bes u^m=u^\beta u^{m-\beta}\leq \varepsilon u^n+C(\varepsilon)u^r, \ \ \ \beta=\frac{r(n-m)}{n-r}.
\lbl{2.10}\ees So we can use (\ref{2.10}) to deal with the second last term of right hand side of (\ref{2.9}). Eventually, we get for $t\leq\tau_n$
\begin{eqnarray*} \eta(t)\leq\eta(0)+C\int_0^t\eta(s)ds.
\lbl{2.11} \end{eqnarray*} We remark that the constant $C$ does not depend on $t$. Gronwall's lemma implies that
\begin{eqnarray*} \eta(t)\leq C+Ce^{Ct}, \ \ \ t\leq\tau_n.
\end{eqnarray*} Letting $n\to\infty$, the above inequality implies that $\mathbb{P}\{\tau_\infty<\infty\}=0$. The proof is complete. $\Box$
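We close this section by noting that the exponent condition in part (ii) of Theorem \ref{t2.3} is not vacuous. For instance, for $p=2$ and $m=5$ one has $(m-p)(2m-1)=27>10=mp$, and the exponents appearing above are ordered as
\begin{eqnarray*}
\epsilon<\frac{2m}{2m-p}(2p-p\epsilon-1+\epsilon)=\frac{5}{4}(3-\epsilon)<2m+\epsilon-2=8+\epsilon
\end{eqnarray*}
for every $\epsilon\in(0,1)$, so the interpolation (\ref{2.10}) indeed applies.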
\section{Whole space} \setcounter{equation}{0}
In this section, we consider stochastic parabolic equations in whole space. Our aim is to establish the global existence and non-existence under some assumptions. We first recall the results of Foondun et al. \cite{FLN2018}, where the authors considered the following equation
\bes \partial_tu_t(x)=\mathcal{L}u_t(x)+\sigma(u_t(x))\dot{F}(x,t)
\ t>0,\ x\in\mathbb{R}^d.
\lbl{3.1} \ees Here $\mathcal{L}$ denotes the fractional Laplacian, the generator of an $\alpha$-stable process and $\dot{F}$ is the random forcing term which they took to be white in time and possibly colored in space. They obtained the following results.
\begin{prop}\lbl{p3.1}\cite[Theorems 1.2, 1.5, 1.6, 1.8, 1.9]{FLN2018}
(i) Noise white both in time and space, i.e.,
\begin{eqnarray*} \mathbb{E}[\dot{F}(x,t)\dot{F}(y,s)]=\delta_0(t-s)\delta_0(x-y).
\end{eqnarray*} Assume that there exists a $\gamma>0$ such that
\begin{eqnarray*}
\sigma(x)\geq|x|^{1+\gamma}\ \ \ {\rm for\ all\ }\ x\in\mathbb{R}^d,
\end{eqnarray*} and that there is a positive constant $\kappa$ such that $\inf_{x\in\mathbb{R}^d}u_0(x)=:\kappa>0$. Then there exists a $t_0>0$ such that for all $x\in\mathbb{R}^d$, the solution $u_t(x)$ of (\ref{3.1}) blows up in finite time, i.e.,
\bes
\mathbb{E}|u_t(x)|^2=\infty\ \ \ {\rm whenever}\ \ t\geq t_0.
\lbl{3.2}\ees Furthermore, the assumption on the initial condition can be weakened as follows:
\bes \int_{B(0,1)}u_0(x)dx:=K_{u_0}>0,
\lbl{3.3}\ees where $B(0,1)$ is the ball centred at the origin with radius $1$. The solution $u_t(x)$ of (\ref{3.1}) also blows up in finite time whenever $K_{u_0}\geq K$, where $K$ is some positive constant.
(ii) Noise white in time and correlated in space, i.e.,
\begin{eqnarray*} \mathbb{E}[\dot{F}(x,t)\dot{F}(y,s)]=\delta_0(t-s)f(x,y).
\end{eqnarray*} Assume that for fixed $R>0$, there exists some positive number $K_f$ such that
\bes \inf_{x,y\in B(0,R)}f(x,y)\geq K_f.
\lbl{3.4}\ees Then, for fixed $t_0>0$, there exists a positive number $\kappa_0$ such that for all $\kappa\geq\kappa_0$ and $x\in\mathbb{R}^d$, (\ref{3.2}) holds.
In particular, suppose that the correlation function $f$ is given by
\begin{eqnarray*} f(x,y)=\frac{1}{|x-y|^\beta} \ \ {\rm with}\ \ \beta<\alpha\wedge d.
\end{eqnarray*} Then for $\kappa>0$ there exists a $t_0>0$ such that (\ref{3.2}) holds.
Furthermore, under the assumptions (\ref{3.3}) and (\ref{3.4}), there exists a $t_0>0$ such that (\ref{3.2}) holds for all $x\in\mathbb{R}^d$.
\end{prop} In the above proposition, Foondun et al. \cite{FLN2018} only considered the finite time blowup phenomenon driven by the noise. Our aim in this paper is to determine the effect of the noise, including both additive and multiplicative noise. We are also interested in behaviour of type (3), as described in the introduction.
We first consider the global existence of the following stochastic parabolic equations
\bes \left\{\begin{array}{llll} du_t=(\Delta u+f(u,x,t))dt+\sigma(u,x,t)dB_t,\ \ t>0,\ &x\in\mathbb{R}^d,\\ u(x,0)=u_0(x)\gneqq0, \ \ &&x\in\mathbb{R}^d,
\end{array}\right.\lbl{3.5}\ees where $B_t$ is a one-dimensional Brownian motion. A mild solution to (\ref{3.5}) in the sense of Walsh \cite{walsh1986} is any $u$ which is adapted to the filtration generated by the noise and satisfies the following evolution equation
\begin{eqnarray*} u(x,t)&=&\int_{\mathbb{R}^d}K(t,x-y)u_0(y)dy +\int_0^t\int_{\mathbb{R}^d}K(t-s,x-y)f(u,y,s)dyds\\ && +\int_0^t\int_{\mathbb{R}^d}K(t-s,x-y)\sigma(u,y,s)dydB_s,
\end{eqnarray*} where $K(t,x)$ denotes the heat kernel of Laplacian operator, i.e.,
\begin{eqnarray*} K(t,x)=\frac{1}{(2\pi t)^{d/2}}\exp\left(-\frac{|x|^2}{2t} \right)
\end{eqnarray*} satisfies
\begin{eqnarray*} \left(\frac{\partial}{\partial t}-\Delta\right)K(t,x)=0\ \ \ {\rm for}\ \ (x,t)\neq(0,0).
\end{eqnarray*} We get the following results.
\begin{theo}\lbl{t3.1} Suppose that there exist constants $C_0>0$ and $0<p<1$ such that
\begin{eqnarray*}
|h(u,x,t)|\leq C_0|u|^p,\ \ \ h=f \ {\rm or}\ \sigma.
\end{eqnarray*} Then the solutions of (\ref{3.5}) with bounded continuous initial data $u_0$ exist globally in any $r$-order moment, $r\geq1$.
\end{theo}
{\bf Proof.} By taking the second moment and using the Walsh isometry, we get for any $T>0$
\begin{eqnarray*}
\mathbb{E}|u(x,t)|^2&=&\mathbb{E}\left(\int_{\mathbb{R}^d}K(t,x-y)u_0(y)dy +\int_0^t\int_{\mathbb{R}^d}K(t-s,x-y)f(u,y,s)dyds\right.\\ &&\left. +\int_0^t\int_{\mathbb{R}^d}K(t-s,x-y)\sigma(u,y,s)dydB_s\right)^2\\ &\leq&4\int_{\mathbb{R}^d}K(t,x-y)u^2_0(y)dy+4C_0^2
\int_0^t\int_{\mathbb{R}^d}K(t-s,x-y) [\mathbb{E}|u(y,s)|^2]^pdyds\\ &&
+4C_0^2\int_0^t\mathbb{E}\left(\int_{\mathbb{R}^d}K(t-s,x-y) |u(y,s)|^p dy\right)^2ds\\
&\leq&4 \sup_{x\in\mathbb{R}^d}|u_0(x)|^2+8C_0^2\sup_{t\in[0,T],x\in\mathbb{R}^d}[\mathbb{E}|u(x,t)|^2]^p \int_0^T\int_{\mathbb{R}^d}K(t,x) dtdx
\end{eqnarray*} Then taking the supremum over $(t,x)\in[0,T]\times\mathbb{R}^d$ (the right-hand side is independent of $t$ and $x$), we get
\begin{eqnarray*}
\sup_{t\in[0,T],x\in\mathbb{R}^d} \mathbb{E}|u(x,t)|^2\leq 4 \sup_{x\in\mathbb{R}^d}|u_0(x)|^2+8C_0^2T\sup_{t\in[0,T],x\in\mathbb{R}^d}[\mathbb{E}|u(x,t)|^2]^p.
\end{eqnarray*} Since $0<p<1$, this inequality forces the left-hand side to be finite; that is, for any $T>0$
\begin{eqnarray*}
\sup_{t\in[0,T],x\in\mathbb{R}^d} \mathbb{E}|u(x,t)|^2\leq C(T)<\infty,
\end{eqnarray*} which implies that $\mathbb{P}\{|u(x,t)|=\infty\}=0$. The proof is complete. $\Box$
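In the last step we used the elementary observation that, for $0<p<1$ and $A,B\geq0$, any finite quantity $M\geq0$ satisfying $M\leq A+BM^{p}$ obeys
\begin{eqnarray*}
M\leq\max\Big\{2A,\ (2B)^{\frac{1}{1-p}}\Big\},
\end{eqnarray*}
since $M>(2B)^{\frac{1}{1-p}}$ would force $BM^{p}<M/2$ and hence $M<2A$.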
We remark that the heat kernel $K(t,\cdot)$ is normalized in $L^1(\mathbb{R}^d)$, while $K^2$ is not integrable over $(0,t)\times\mathbb{R}^d$ unless $d=1$. Hence the above argument does not carry over directly to noise that is white in both time and space. Meanwhile, if we assume that the covariance function $q(x,y)$ is uniformly bounded, then the above result also holds for noise that is white in time and correlated in space.
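Indeed, with the normalization of $K$ used above,
\begin{eqnarray*}
\int_0^t\int_{\mathbb{R}^d}K^2(s,y)\,dyds=\int_0^t\frac{(\pi s)^{d/2}}{(2\pi s)^{d}}ds
=\frac{1}{(4\pi)^{d/2}}\int_0^t s^{-d/2}ds,
\end{eqnarray*}
which is finite if and only if $d=1$.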
Next, we establish a result similar to type (3). To this end, we consider the following Cauchy problem
\bes \left\{\begin{array}{llll} du_t=\Delta udt+\sigma(u,x,t)dW(x,t),\ \ t>0,\ &x\in\mathbb{R}^d,\\ u(x,0)=u_0(x)\gneqq0, \ \ &&x\in\mathbb{R}^d,
\end{array}\right.\lbl{3.6}\ees where $W(t,x)$ is white noise in both time and space. In the rest of the paper, we always assume that the initial datum is a nonnegative continuous function. A mild solution to (\ref{3.6}) in the sense of Walsh \cite{walsh1986} is any $u$ which is adapted to the filtration generated by the white noise and satisfies the following evolution equation
\begin{eqnarray*} u(x,t)=\int_{\mathbb{R}^d}K(t,x-y)u_0(y)dy +\int_0^t\int_{\mathbb{R}^d}K(t-s,x-y)\sigma(u,y,s)W(dy,ds),
\end{eqnarray*} where $K(t,x)$ denotes the heat kernel of Laplacian operator. We get the following results.
\begin{theo}\lbl{t3.2} Suppose that $d=1$ and $\sigma^2(u,x,t)\geq C_0u^{2m}$ with $C_0>0$. Then for $1<m\leq\frac{3}{2}$, the solution of (\ref{3.6}) blows up in finite time for any nontrivial nonnegative initial datum $u_0$. That is to say, there exists a positive constant $T$ such that for all $x\in\mathbb{R}$
\begin{eqnarray*} \mathbb{E}u^2(x,t)=\infty\ \ {\rm for}\ t\geq T.
\end{eqnarray*}
\end{theo}
{\bf Proof.} We assume that the solution remains finite for all finite $t$ almost surely and want to derive a contradiction. By taking the second moment and using the Walsh isometry, we get
\begin{eqnarray*}
\mathbb{E}|u(x,t)|^2&=&\left(\int_{\mathbb{R}^d}K(t,x-y)u_0(y)dy\right)^2 +\int_0^t\int_{\mathbb{R}^d}K^2(t-s,x-y)\mathbb{E}\sigma^2(u,y,s)dyds\\ &=:&I^2_1(x,t)+I_2(x,t).
\end{eqnarray*} We may assume without loss of generality that $u_0(x)\geq C_1>0$ for $|x|<1$ by the assumption. A direct computation shows that
\bes I_1(x,t)&\geq&\frac{C_1}{(2\pi t)^{d/2}}\int_{B_1(0)}\exp\left(-\frac{|x|^2+|y|^2}{2t}\right)dy\nonumber\\
&\geq& \frac{C_1}{(2\pi t)^{d/2}}\exp\left(-\frac{|x|^2}{2t}\right)\int_{|y|\leq\frac{1}{\sqrt{t}}}
\exp\left(-\frac{|y|^2}{2}\right)dy\nonumber\\
&\geq& \frac{C}{(2\pi t)^{d/2}}\exp\left(-\frac{|x|^2}{2t}\right)
\lbl{3.7} \ees for $t>1$ and $C>0$.
It is easy to see that
\begin{eqnarray*} I_2(x,t)&\geq& C_0\int_0^t\int_{\mathbb{R}^d}K^2(t-s,x-y)\mathbb{E}|u(y,s)|^{2m}dyds\\
&\geq& C_0\int_0^t\int_{\mathbb{R}^d}K^2(t-s,x-y)[\mathbb{E}|u(y,s)|^{2}]^mdyds.
\end{eqnarray*} Denote $v(x,t)=\mathbb{E}|u(x,t)|^{2}$. Let
\begin{eqnarray*} G(t)=\int_{\mathbb{R}^d}K(t,x) v(x,t)dx.
\end{eqnarray*} Then for $t>1$,
\bes G(t)&=&\int_{\mathbb{R}^d}I^2_1(x,t)K(t,x)dx+\int_{\mathbb{R}^d}I_2(x,t)K(t,x)dx\nonumber\\ &\geq&\frac{C_2}{t^d}+\int_0^t\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}K(t,x)K^2(t-s,x-y)v^m(y,s)dydxds.
\lbl{3.8}\ees It is clear that
\begin{eqnarray*} &&\int_{\mathbb{R}^d}K(t,x)K^2(t-s,x-y)dx\\ &=&\frac{1}{(2\pi t)^{d/2}[2\pi (t-s)]^{d}}\int_{\mathbb{R}^d}
\exp\left(-\frac{|x|^2}{2t}-\frac{|x-y|^2}{t-s}\right)dx\\ &=&K(s,y)\frac{(2\pi s)^{d/2}}{(2\pi t)^{d/2}[2\pi (t-s)]^{d}}\int_{\mathbb{R}^d}
\exp\left(\frac{|y|^2}{2s}-\frac{|x|^2}{2t}-\frac{|x-y|^2}{t-s}\right)dx.
\end{eqnarray*} Since
\begin{eqnarray*}
&&\frac{|y|^2}{2s}-\frac{|x|^2}{2t}-\frac{|x-y|^2}{t-s}\\
&\geq&\frac{|y|^2}{2s}-\frac{|x-y|^2+|y|^2+2|x-y||y|}{2t}-\frac{|x-y|^2}{t-s}\\
&=&\frac{1}{2t}\left(-2|x-y||y|+\frac{t-s}{s}|y|^2\right)-\frac{|x-y|^2}{2t}-\frac{|x-y|^2}{t-s}\\
&\geq&-\frac{s|x-y|^2}{2t(t-s)}-\frac{|x-y|^2}{2t}-\frac{|x-y|^2}{t-s}\\
&\geq& -\frac{2|x-y|^2}{t-s}\ \ {\rm for}\ 0<s<t,
\end{eqnarray*} we get for $0<s<t$
\begin{eqnarray*} \int_{\mathbb{R}^d}
\exp\left(\frac{|y|^2}{2s}-\frac{|x|^2}{2t}-\frac{|x-y|^2}{t-s}\right)dx \geq \int_{\mathbb{R}^d}
\exp\left(-\frac{2|x-y|^2}{t-s}\right)dx =C_3(t-s)^{d/2}.
\end{eqnarray*} Substituting the above estimate into (\ref{3.8}) and applying Jensen's inequality, we obtain
\begin{eqnarray*} G(t)&\geq&\frac{C_2}{t^d}+C_4\int_0^t\frac{s^{d/2}}{t^d}\int_{\mathbb{R}^d}K(s,y)v^m(y,s)dyds\\ &\geq&\frac{C_2}{t^d}+C_4\int_0^t\frac{s^{d/2}}{t^d}G^m(s)ds
\end{eqnarray*} We can rewrite the above inequality as
\bes t^d G(t)&\geq& C_2+C_4\int_0^ts^{d/2}G^m(s)ds\lbl{3.9}\\ &=:&g(t).\nonumber
\ees Then for $t>1$, we have
\begin{eqnarray*} &&g(t)\geq C_2,\\ &&g'(t)\geq C_4t^{d/2}G^m(t)\geq C_4t^{d/2}\left(\frac{1}{t^d}g(t)\right)^m=C_4t^{\frac{d}{2}-dm}g^m(t),
\end{eqnarray*} which implies
\begin{eqnarray*} \frac{C_2^{1-m}}{m-1}\geq \frac{1}{m-1}g^{1-m}(t)\geq C_4\int_t^Ts^{\frac{d}{2}-dm}ds\ \ {\rm for}\ T>t\geq1.
\end{eqnarray*} If $m\leq\frac{d+2}{2d}$, that is, $\frac{d}{2}-dm+1\geq0$, then the right-hand side of the above inequality is unbounded as $T\to\infty$, which gives a contradiction. Since we must have $m>1$ in order to apply Jensen's inequality, and since $\frac{d+2}{2d}\leq1$ whenever $d\geq2$, the two requirements are compatible only for $d=1$ and $1<m\leq\frac{3}{2}$. This completes the proof. $\Box$
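For the reader's convenience, the last display follows by a standard ODE comparison: dividing the differential inequality for $g$ by $g^m$ and using $g\geq C_2$, we obtain, for $1\leq t<T$,
\begin{eqnarray*}
\frac{d}{ds}\left(\frac{g^{1-m}(s)}{1-m}\right)=g^{-m}(s)g'(s)\geq C_4 s^{\frac{d}{2}-dm},
\end{eqnarray*}
and integrating over $[t,T]$ yields
\begin{eqnarray*}
\frac{C_2^{1-m}}{m-1}\geq\frac{g^{1-m}(t)-g^{1-m}(T)}{m-1}\geq C_4\int_t^T s^{\frac{d}{2}-dm}ds,
\end{eqnarray*}
whose right-hand side diverges as $T\to\infty$ precisely when $\frac{d}{2}-dm\geq-1$.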
If the noise is just a one-dimensional Brownian motion, the result is different. To see this, we consider the following stochastic Cauchy problem
\bes \left\{\begin{array}{llll} du_t=\Delta udt+\sigma(u,x,t)dB_t,\ \ t>0,\ &x\in\mathbb{R}^d,\\ u(x,0)=u_0(x)\gneqq0, \ \ &&x\in\mathbb{R}^d,
\end{array}\right.\lbl{3.10}\ees where $B_t$ is a one-dimensional Brownian motion. A mild solution to (\ref{3.10}) in the sense of Walsh \cite{walsh1986} is any $u$ which is adapted to the filtration generated by the Brownian motion and satisfies the following evolution equation
\begin{eqnarray*} u(x,t)=\int_{\mathbb{R}^d}K(t,x-y)u_0(y)dy +\int_0^t\int_{\mathbb{R}^d}K(t-s,x-y)\sigma(u,y,s)dydB_s,
\end{eqnarray*} where $K(t,x)$ denotes the heat kernel of Laplacian operator.
\begin{theo}\lbl{t3.3} Suppose that $d=1$ and $\sigma^2(u,x,t)\geq C_0u^{2}$ with $C_0>0$. Then the solutions of (\ref{3.10}) blow up in finite time for any nontrivial nonnegative initial data $u_0$.
\end{theo}
{\bf Proof.} Similar to the proof of Theorem \ref{t3.2}, we assume that the solution remains finite for all finite $t$ almost surely. By taking the second moment and using the Walsh isometry, we get
\begin{eqnarray*}
\mathbb{E}|u(x,t)|^2&\geq&\left(\int_{\mathbb{R}^d}K(t,x-y)u_0(y)dy\right)^2 +\int_0^t\left(\int_{\mathbb{R}^d}K(t-s,x-y)\mathbb{E}\sigma(u,y,s)dy\right)^2ds\\ &=:&u^2_1(x,t)+u_2(x,t).
\end{eqnarray*} We may assume without loss of generality that $u_0(x)\geq C_1>0$ for $|x|<1$ by the assumption. The estimate (\ref{3.7}) also holds, i.e.,
\begin{eqnarray*} u_1(x,t)\geq \frac{C}{(2\pi t)^{d/2}}\exp\left(-\frac{|x|^2}{2t}\right)
\end{eqnarray*} for $t>1$ and $C>0$.
It is easy to see that, for $m\geq2$,
\begin{eqnarray*} u_2(x,t)&\geq& C_0\int_0^t\left(\int_{\mathbb{R}^d}K(t-s,x-y)\mathbb{E}|u(y,s)|^{m}dy\right)^2ds\\
&\geq& C_0\int_0^t\left(\int_{\mathbb{R}^d}K(t-s,x-y)[\mathbb{E}|u(y,s)|^{2}]^{m/2}dy\right)^2ds\\
&\geq& C_0\int_0^t\left(\int_{\mathbb{R}^d}K(t-s,x-y)\mathbb{E}|u(y,s)|^{2}dy\right)^mds.
\end{eqnarray*} Denote $v(x,t)=\mathbb{E}|u(x,t)|^{2}$. Let
\begin{eqnarray*} G(t)=\int_{\mathbb{R}^d}K(t,x) v(x,t)dx.
\end{eqnarray*} Then for $t>1$,
\bes G(t)&=&\int_{\mathbb{R}^d}u^2_1(x,t)K(t,x)dx+\int_{\mathbb{R}^d}u_2(x,t)K(t,x)dx\nonumber\\ &\geq&\frac{C_2}{t^d}+\int_0^t\left(\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}K(t,x)K(t-s,x-y)v(y,s)dydx\right)^mds.
\lbl{3.11}\ees It is clear that (see \cite[Page 42]{Hubook2018})
\begin{eqnarray*} \int_{\mathbb{R}^d}K(t,x)K(t-s,x-y)dx\geq C_3K(s,y)\left(\frac{s}{t}\right)^{d/2}.
\end{eqnarray*} Substituting the above estimate into (\ref{3.11}) and applying Jensen's inequality, we obtain
\begin{eqnarray*} G(t)\geq\frac{C_2}{t^d}+C_3\int_0^t\left(\frac{s^{d/2}}{t^{d/2}}\right)^mG^m(s)ds
\end{eqnarray*} We can rewrite the above inequality as
\bes t^{md/2} G(t)&\geq& C_2t^{(m-2)d/2}+C_3\int_0^ts^{dm/2}G^m(s)ds\lbl{3.12}\\ &=:&g(t).\nonumber
\ees Then for $t>1$, we have
\begin{eqnarray*} &&g(t)\geq C_2t^{(m-2)d/2},\\ &&g'(t)\geq C_3t^{dm/2}G^m(t)\geq C_3t^{dm/2}\left(\frac{1}{t^{dm/2}}g(t)\right)^m=C_3t^{(1-m)md/2}g^m(t),
\end{eqnarray*} which implies
\begin{eqnarray*} \frac{C_2^{1-m}}{m-1}t^{-d(m-1)(m-2)/2}\geq \frac{1}{m-1}g^{1-m}(t)\geq C_4\int_t^Ts^{(1-m)md/2}ds\ \ {\rm for}\ T>t\geq1.
\end{eqnarray*} If $(m-1)md/2\leq1$, we get a contradiction by letting $T\to\infty$. If $\frac{d(m-1)(m-2)}{2}>-1+ \frac{(m-1)md}{2}$, then we get a contradiction by letting $T\to\infty$ and then taking $t\gg1$. Note that when $m=2$ and $d=1$ we have $(m-1)md/2=1$, while the condition $\frac{d(m-1)(m-2)}{2}>-1+ \frac{(m-1)md}{2}$ is equivalent to $m<1+\frac{1}{d}$. Hence, recalling that $m\geq2$, we obtain a contradiction in the case $m=2$, $d=1$. The proof is complete. $\Box$
\begin{remark} \lbl{r3.1} Comparing Theorem \ref{t3.2} with Proposition \ref{p3.1}, the assumptions of Proposition \ref{p3.1} require a lower bound on the initial data, whereas Theorem \ref{t3.2} does not.
Theorems \ref{t3.2} and \ref{t3.3} show that time-space white noise and Brownian motion lead to different behavior. However, the method used here is not suitable for the fractional Laplacian operator. Sugitani \cite{Sug1975} established the Fujita index for the Cauchy problem of the fractional Laplacian operator. The main difficulty is that we cannot obtain an exact estimate of $\int_{\mathbb{R}^d}p^2(t,x)p(t-s,x-y)dx$, where $p(t,x)$ is the heat kernel of the fractional Laplacian operator.
\end{remark}
\section{Discussion} \setcounter{equation}{0} An interesting issue for stochastic partial differential equations is to identify the differences caused by adding noise, i.e., the impact of the noise. In particular, we want to know whether the solutions remain positive. In this section, we first consider the positivity of the solutions of stochastic parabolic equations in the whole space, and then consider the impact of noise.
In what follows, we will make use of a test function $\beta_\varepsilon(r)$. Define
\begin{eqnarray*} &&\beta_\varepsilon(r)=\int_r^\infty\rho_\varepsilon(s)ds,\ \ \ \rho_\varepsilon(r)=\int_{r+\varepsilon}^\infty J_\varepsilon(s)ds,\ \ \ r\in\mathbb{R},\nonumber\\
&&J_\varepsilon(|x|)=\varepsilon^{-n}J\left(\frac{|x|}{\varepsilon}\right),\ \ \
J(x)=\left\{\begin{array}{llll}
C\exp\left(\frac{1}{|x|^2-1}\right),\ \ \ &|x|<1,\\
0, \ \ \ &|x|\geq1.
\end{array}\right. \end{eqnarray*} Then by direct verification, we have the following result.
\begin{lem}\lbl{l4.1} The above constructed functions $\rho_\varepsilon,\beta_\varepsilon$ are in $C^\infty(\mathbb{R})$ and have the following properties: $\rho_\varepsilon$ is a non-increasing function and
\begin{eqnarray*} \beta_\varepsilon'(r)=-\rho_\varepsilon(r)=\left\{\begin{array}{llll}
0,\ \ \ &r\geq0,\\ -1,\ \ \ &r\leq-2\varepsilon.
\end{array}\right.\end{eqnarray*}
Additionally, $\beta_\varepsilon$ is convex and
\begin{eqnarray*} \beta_\varepsilon(r)=\left\{\begin{array}{llll}
0,\ \ \ \ \ &r\geq0,\\ -2\varepsilon-r+\varepsilon \hat C, \ \ &r\leq-2\varepsilon,
\end{array}\right.\end{eqnarray*} where $\hat C=\int^0_{-2}\int_{t+1}^1J(s)dsdt<2$. Furthermore,
\begin{eqnarray*} 0\leq\beta_\varepsilon''(r)=J_\varepsilon(r+\varepsilon)\leq \varepsilon^{-d}C, \ \ -2\varepsilon\leq r\leq0,
\end{eqnarray*} which implies that
\begin{eqnarray*} -2^dC\leq r^d\beta_\varepsilon''(r)\leq0 \ \ \ \ & {\rm for}\ -2\varepsilon\leq r\leq0,\ {\rm and}\ d\ {\rm is}\ {\rm odd};\\ 0\leq r^d\beta_\varepsilon''(r)\leq2^dC\ \ \ \ &{\rm for}\ -2\varepsilon\leq r\leq0,\ {\rm and}\ d\ {\rm is}\ {\rm even}.
\end{eqnarray*}
\end{lem}
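A sketch of the direct verification: assuming the normalization $\int_{\mathbb{R}}J_\varepsilon(s)\,ds=1$, the definitions give
\begin{eqnarray*}
\beta_\varepsilon'(r)=-\rho_\varepsilon(r),\qquad \beta_\varepsilon''(r)=-\rho_\varepsilon'(r)=J_\varepsilon(r+\varepsilon)\geq0,
\end{eqnarray*}
and since $J_\varepsilon$ is supported in $\{|s|\leq\varepsilon\}$, we have $\rho_\varepsilon(r)=1$ for $r\leq-2\varepsilon$ and $\rho_\varepsilon(r)=0$ for $r\geq0$, which is exactly the stated behavior of $\beta_\varepsilon'$; in particular $\beta_\varepsilon''$ is supported in $[-2\varepsilon,0]$.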
Now, we consider the following stochastic parabolic equations
\bes\left\{\begin{array}{lll}
du=(\Delta u+f(u,x,t))dt+g(u,x,t)dW(x,t), \ \ \qquad t>0,\ &x\in \mathbb{R},\\[1.5mm]
u(x,0)=u_0(x), \ \ \ \ \ &x\in \mathbb{R},
\end{array}\right.\lbl{4.1}\ees where $W(x,t)$ is time-space white noise.
\begin{theo}\lbl{t4.1} Assume that (i) the function $f(r,x,t)$ is continuous on
$\mathbb{R}\times\mathbb{R}\times[0,T]$; (ii) $f(r,x,t)\geq0$ for $r\leq0$, $x\in\mathbb{R}$ and $t\in[0,T]$; and (iii) $ g^2(u,x,t)\leq ku^{2m}$, where $k>0$, $2m>1$ and $(-1)^{2m-1}\in\mathbb{R}$. Then the solution of the Cauchy problem {\rm(\ref{4.1})} with nonnegative initial datum remains nonnegative: $u(x,t)\geq0$, a.s. for almost every $x\in \mathbb{R}$ and for all $ t\in[0,T]$. \end{theo}
{\bf Proof.} Define
\begin{eqnarray*} \Phi_\varepsilon(u_t)=(1,\beta_\varepsilon(u_t))=\int_{\mathbb{R}} \beta_\varepsilon(u(x,t))dx.
\end{eqnarray*} By It\^{o}'s formula, we have
\begin{eqnarray*} \Phi_\varepsilon(u_t)&=&\Phi_\varepsilon(u_0)+\int_0^t\int_{\mathbb{R}} \beta_\varepsilon'(u(x,s))\Delta u(x,s)dxds\\ &&+\int_0^t\int_ {\mathbb{R}} \beta_\varepsilon'(u(x,s))f(u(x,s),x,s)dxds\\ &&+\int_0^t\int_ {\mathbb{R}} \beta_\varepsilon'(u(x,s))g(u(x,s),x,s)W(dx,ds)\\ &&+\frac{1}{2}\int_0^t\int_{\mathbb{R}}\beta_\varepsilon''(u(x,s))g^2(u(x,s),x,s)dxds\\
&=&\Phi_\varepsilon(u_0)+\int_0^t\int_{\mathbb{R}}\beta_\varepsilon''(u(x,s))\left(\frac{1}{2}g^2(u(x,s),x,s)-|\nabla u|^2\right)dxds\\ &&+\int_0^t\int_ {\mathbb{R}} \beta_\varepsilon'(u(x,s))f(u(x,s),x,s)dxds\\ &&+\int_0^t\int_ {\mathbb{R}} \beta_\varepsilon'(u(x,s))g(u(x,s),x,s)W(dx,ds).
\end{eqnarray*} Taking expectation over the above equality and using Lemma \ref{l4.1}, we get
\begin{eqnarray*} \mathbb{E}\Phi_\varepsilon(u_t) &=&\mathbb{E}\Phi_\varepsilon(u_0)+\mathbb{E}\int_0^t\int_{\mathbb{R}} \beta_\varepsilon''(u(x,s))\\
&&\times\left(\frac{1}{2}g^2(u(x,s),x,s)-|\nabla u|^2\right)dxds\\ &&+\mathbb{E}\int_0^t\int_{\mathbb{R}} \beta_\varepsilon'(u(x,s))f(u(x,s),x,s)dxds\\ &\leq&\mathbb{E}\Phi_\varepsilon(u_0)+\frac{k}{2}\mathbb{E}\int_0^t\int_{\mathbb{R}} \beta_\varepsilon''(u(x,s)) u(x,s)^{2m}dxds\\ &&+\mathbb{E}\int_0^t\int_{\mathbb{R}}\beta_\varepsilon'(u(x,s))f(u(x,s),x,s)dxds.
\end{eqnarray*} Hereafter, we denote $\|\cdot\|_{L^1}$ by $\|\cdot\|_1$. Let $\eta(u)=u^-$
denote the negative part of $u$ for $u\in\mathbb{R}$. Then we have $\lim\limits_{\varepsilon\rightarrow0}\mathbb{E}\Phi_\varepsilon(u_t)=\mathbb{E}\|\eta(u_t)\|_1$. It follows from Lemma \ref{l4.1} that
\begin{eqnarray*} 0\geq u^{2m}\beta''_\varepsilon(u)\geq\left\{\begin{array}{llll}
0,\ \ \ \ & u\geq0\ {\rm or}\ u\leq-2\varepsilon,\\ -2Cu^{2m-1},\ \ \ \ &-2\varepsilon\leq u\leq0,\ {\rm and}\ u^{2m-1}\geq0,
\end{array}\right.
\end{eqnarray*} or
\begin{eqnarray*} 0\leq u^{2m}\beta''_\varepsilon(u)\leq\left\{\begin{array}{llll}
0,\ \ \ \ & u\geq0\ {\rm or}\ u\leq-2\varepsilon,\\ -2Cu^{2m-1},\ \ \ \ &-2\varepsilon\leq u\leq0,\ {\rm and}\ u^{2m-1}\leq0
\end{array}\right.
\end{eqnarray*} which implies that $\lim\limits_{\varepsilon\rightarrow0}u^{2m}\beta_\varepsilon''(u)=0$ provided that $2m>1$. By taking the limits termwise as $\varepsilon\rightarrow0$ and using Lemma \ref{l4.1}, we get
\begin{eqnarray*}
\mathbb{E}\|\eta(u_t)\|_1&\leq&\mathbb{E}\|\eta(u_0)\|_1 -\mathbb{E}\int_0^t\int_{\mathbb{R}} \eta'(u(x,s))f(u(x,s),x,s)dxds\nonumber\\ &\leq&0,
\end{eqnarray*} which implies that $u^-=0$ a.s. for a.e. $x\in \mathbb{R}$, $\forall t\in[0,T]$. This completes the proof. $\Box$
If $W(x,t)$ is replaced by $B_t$ in (\ref{4.1}), then Theorem \ref{t4.1} holds for any dimension. The reason why we only consider one dimension in Theorem \ref{t4.1} is that the It\^{o} formula only holds for one-dimensional time-space white noise.
In order to find the impact of noise, we first recall a well-known result of deterministic parabolic equations. Consider the Cauchy problem
\bes \left\{\begin{array}{llll} \frac{\partial u}{\partial t}=\Delta u+u^p,\ \ t>0,\ &x\in\mathbb{R}^d,\\ u(x,0)=u_0(x)\gneqq0, \ \ &x\in\mathbb{R}^d.
\end{array}\right.\lbl{4.2}\ees \begin{prop}\lbl{p4.1} (i) If $p>1+\frac{2}{d}$, then the solution of (\ref{4.2}) is global in time, provided the initial datum satisfies, for some small $\varepsilon>0$,
\begin{eqnarray*} u_0(x)\leq\varepsilon K(1,x), \ \ \ x\in\mathbb{R}^d.
\end{eqnarray*} (ii) If $1<p\leq1+\frac{2}{d}$, then all nontrivial solutions of (\ref{4.2}) blow up in finite time.
\end{prop} Next we consider the stochastic parabolic equation
\bes \left\{\begin{array}{llll}
du_t=[\Delta u+|u|^{p-1}u]dt+\sigma(u)dW(x,t),\ \ t>0,\ &x\in\mathbb{R}^d,\\ u(x,0)=u_0(x)\gneqq0, \ \ &x\in\mathbb{R}^d.
\end{array}\right.\lbl{4.3}\ees It is well known that the mild solution of (\ref{4.3}) can be written as
\begin{eqnarray*} u(x,t)&=&\int_{\mathbb{R}^d}K(t,x-y)u_0(y)dy+\int_0^t\int_{\mathbb{R}^d}K(t-s,x-y)|u|^{p-1}udyds\\ && +\int_0^t\int_{\mathbb{R}^d}K(t-s,x-y)\sigma(u,y,s)W(dy,ds).
\end{eqnarray*}
\begin{theo}\lbl{t4.2} Assume that all the assumptions of Theorem \ref{t4.1} hold. If $1<p\leq1+\frac{2}{d}$, then the expectation of every nontrivial solution of (\ref{4.3}) blows up in finite time. That is to say, there exists a positive constant $t_0>0$ such that $\mathbb{E}u(x,t)=\infty$ for $t\geq t_0$ and all $x\in\mathbb{R}^d$. When $m>1$, the mean square of the solutions to (\ref{4.3}) blows up in finite time provided the initial data is suitably large.
\end{theo}
{\bf Proof.} It follows from Theorem \ref{t4.1} that the solutions of (\ref{4.3}) remain nonnegative. Following the representation of the mild solution, we have
\begin{eqnarray*}
\mathbb{E}u(x,t)=\int_{\mathbb{R}^d}K(t,x-y)\mathbb{E}u_0(y)dy+\int_0^t\int_{\mathbb{R}^d}K(t-s,x-y)\mathbb{E}|u|^pdyds,
\end{eqnarray*} which implies that
\begin{eqnarray*} \mathbb{E}u(x,t)\geq\int_{\mathbb{R}^d}K(t,x-y)\mathbb{E}u_0(y)dy+\int_0^t\int_{\mathbb{R}^d}K(t-s,x-y)[\mathbb{E}u]^pdyds,
\end{eqnarray*} Denoting $v(x,t)=\mathbb{E}u(x,t)$, we have that $v(x,t)$ is a super-solution of (\ref{4.2}). By the results of Proposition \ref{p4.1} and comparison principle, we obtain that there exists a positive constant $t_0>0$ such that $\mathbb{E}u(x,t)=\infty,\ \ t\geq t_0$ for all $x\in\mathbb{R}^d$. Meanwhile, noting that
\begin{eqnarray*} \mathbb{E}u(x,t)\leq\left(\mathbb{E}u^p(x,t)\right)^{\frac{1}{p}},\ \ p>1,
\end{eqnarray*} we have that $\mathbb{E}u^p(x,t)$, $p>1$, will blow up in finite time.
When $m>1$, we have
\begin{eqnarray*}
\mathbb{E}|u(x,t)|^2&\geq&\left(\int_{\mathbb{R}^d}K(t,x-y)u_0(y)dy\right)^2 +\int_0^t\int_{\mathbb{R}^d}K^2(t-s,x-y)\mathbb{E}\sigma^2(u,y,s)dyds\\ &=:&w(x,t).
\end{eqnarray*} Foondun \cite{FLN2018} proved that the mean square of the function $w(x,t)$ blows up in finite time provided the initial data is suitably large. Hence the solution $u$ also blows up in finite time. The proof is complete. $\Box$
\noindent {\bf Acknowledgment} The first author was supported in part by NSFC of China grant 11771123. The authors thank Prof. Feng-yu Wang for discussions on this manuscript.
\end{document}
\begin{document}
\title[Uniqueness of Ancient Ovals in MCF] {Uniqueness of two-convex closed ancient solutions to the mean curvature flow } \author[Angenent]{Sigurd Angenent} \address{Department of Mathematics, University of Wisconsin -- Madison} \author[Daskalopoulos]{Panagiota Daskalopoulos} \address{Department of Mathematics, Columbia University, New York} \author[Sesum]{Natasa Sesum} \address{Department of Mathematics, Rutgers University, New Jersey}
\thanks{ P.~Daskalopoulos thanks the NSF for support in DMS-1266172. N.~Sesum thanks the NSF for support in DMS-1056387. }
\date{\today}
\begin{abstract}
In this paper we consider closed noncollapsed ancient solutions to the mean curvature flow ($n \ge 2$) which are uniformly two-convex.
We prove that any such ancient solution is, up to translations and scaling, the unique rotationally symmetric closed ancient noncollapsed solution constructed in \cite{Wh} and \cite{HH}. \end{abstract} \maketitle
\tableofcontents
\section{Introduction} In this paper we consider closed noncollapsed ancient solutions $F(\cdot,t): M^n \to \mathbb{R}^{n+1}$ to the mean curvature flow ($n \ge 2$) \begin{equation}
\label{eq-mcf}
\frac{\partial}{\partial t} F = -H\, \nu \end{equation} for $t\in (-\infty,0)$, where $H$ is the mean curvature of $M_t := F(M^n,t)$ and $\nu$ is the outward unit normal vector. We know by Huisken's result \cite{Hu} that the surfaces $M_t$ will contract to a point in finite time.
The main focus of the paper is the classification of two-convex {\em closed ancient solutions} to mean curvature flow, i.e.~solutions that are defined for $t\in (-\infty,T)$, for some $T < +\infty$. Ancient solutions play an important role in understanding the singularity formation in geometric flows, as such solutions are usually obtained after performing a blow up near points where the curvature is very large. In fact, Perelman's famous work on the Ricci flow \cite{P} shows that the high curvature regions are modeled on ancient solutions which have nonnegative curvature and are $\kappa$-noncollapsed. Similar results for mean curvature flow were obtained in \cite{HK}, \cite{Wh0}, \cite{Wh} assuming mean convexity and embeddedness.
Daskalopoulos, Hamilton and Sesum previously established the complete classification of ancient compact convex solutions to the curve shortening flow in \cite{DHS1}, and of ancient compact solutions to the Ricci flow on $S^2$ in \cite{DHS2}. The higher dimensional cases have remained open for both the mean curvature flow and the Ricci flow.
In an important work \cite{W}, Xu-Jia Wang introduced the following notion of non-collapsed solutions to the MCF, which is the analogue of the $\kappa$-non-collapsing condition for the Ricci flow discussed above. In the same work Xu-Jia Wang provided a number of results regarding the asymptotic behavior of ancient solutions, as $t \to -\infty$, and he also constructed new examples of ancient MCF solutions.
\begin{definition}
Let $K^{n+1} \subset \mathbb{R}^{n+1}$ be a smooth domain whose boundary is a mean convex hypersurface $M^n$.
We say that $M^n$ is $\alpha$-noncollapsed if for every $p\in M^n$ there are balls $B_1$ and $B_2$ of radius at least $\frac{\alpha}{H(p)}$ such that $\bar B_1 \subset K^{n+1}$ and $\bar B_2 \subset \mathbb{R}^{n+1}\setminus Int (K^{n+1})$, and such that $B_1$ and $B_2$ are tangent to $M^n$ at the point $p$, from the interior and exterior of $K^{n+1}$, respectively (in the limiting case $H(p) \equiv 0$, this means that $K^{n+1}$ is a halfspace).
A smooth mean curvature flow $\{M_t\}$ is $\alpha$-noncollapsed if $M_t$ is $\alpha$-noncollapsed for every $t$. \end{definition}
In \cite{And} Andrews showed that the $\alpha$-noncollapsedness property is preserved along mean curvature flow, namely, if the initial hypersurface is $\alpha$-noncollapsed at time $t = t_0$, then evolving hypersurfaces $M_t$ are $\alpha$-noncollapsed for all later times for which the solution exists. Haslhofer and Kleiner \cite{HK} showed that every closed, ancient, and $\alpha$-noncollapsed solution is necessarily convex.
In recent breakthrough works, Brendle and Choi \cite{BC,BC2} gave the complete classification of noncompact ancient solutions to the mean curvature flow that are both strictly convex and uniformly two-convex. More precisely, they show that any noncompact and complete ancient solution to mean curvature flow \eqref{eq-mcf} that is strictly convex, uniformly two-convex, and noncollapsed is the Bowl soliton, up to scaling and ambient isometries. Recall that the Bowl soliton is the unique rotationally-symmetric, strictly convex solution to mean curvature flow that translates with unit speed. It has the approximate shape of a paraboloid and its mean curvature is largest at the tip. The uniqueness of the Bowl soliton among convex and uniformly two-convex translating solitons has been proved by Haslhofer in \cite{Ha}.
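Recall also that, with the sign convention in \eqref{eq-mcf} ($\nu$ being the outward unit normal), a family of hypersurfaces of the form $M_t=M_0+tV$ solves mean curvature flow if and only if
\[
H=-\langle V,\nu\rangle \qquad \text{on } M_0,
\]
since the normal component of the constant velocity $V$ must equal $-H\nu$; for the Bowl soliton, $V$ is a unit vector along the axis of rotation.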
While the $\alpha$-noncollapsedness property for mean curvature flow is preserved forward in time, it is not necessarily preserved going back in time. Indeed, Xu-Jia Wang (\cite{W}) exhibited examples of ancient compact convex mean curvature flow solutions $\{M_t \,\,\,|\,\,\, t<0\}$ that are not uniformly $\alpha$-noncollapsed for any $\alpha > 0$. Such solutions lie in slab regions. The methods in \cite{W} rely on the level set flow. Recently, Bourni, Langford and Tinaglia \cite{BLT} provided a detailed construction of the Xu-Jia Wang solutions by different methods, showing also that the solution they construct is unique within the class of rotationally symmetric mean curvature flows that lie in a slab of a fixed width. In the present paper we will not consider these ancient collapsed solutions and focus on the classification of ancient closed noncollapsed mean curvature flows.
Ancient self-similar solutions to MCF are of the form $M_t = \sqrt{T-t}\,\bar M$ for some fixed surface $\bar M$ and some ``blow-up time'' $T$. We rewrite a general ancient solution $\{M_t : t<T\}$ as \begin{equation}
\label{eq-parabolic-blow-up}
M_t = \sqrt{T-t} \, \bar M_\tau, \qquad \tau:={-\log (T-t)}. \end{equation}
Haslhofer and Kleiner \cite{HK} proved that every closed ancient noncollapsed mean curvature flow with strictly positive mean curvature sweeps out the whole space. By Xu-Jia~Wang's result \cite{W}, it follows that in this case the backward limit as $\tau \to -\infty$ of the type-I rescaling $\bar M_\tau$ of the original solution $M_t$, defined by \eqref{eq-parabolic-blow-up}, is either a sphere or a generalized cylinder ${\mathbb R}^k\times S^{n-k}$ of radius $\sqrt{2(n-k)}$. In \cite{ADS} we showed that if the backward limit is a sphere then the ancient solution $\{M_t\}$ has to be a family of shrinking spheres itself.
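Regarding the normalization of the radius, note that the generalized cylinder $\mathbb{R}^k\times S^{n-k}(r(t))$ evolves under \eqref{eq-mcf} with $\dot r(t)=-\frac{n-k}{r(t)}$, so that $r(t)=\sqrt{2(n-k)(T-t)}$, and after the type-I rescaling \eqref{eq-parabolic-blow-up} one is left with a cylinder of the fixed radius $\sqrt{2(n-k)}$.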
\begin{definition}
\label{def-oval}
We say an ancient mean curvature flow $\{M_t : -\infty < t < T\}$ is an \emph{Ancient Oval} if it is compact, smooth, noncollapsed, and not self-similar.
\end{definition}
\begin{definition}
We say that an ancient solution $\{M_t : -\infty < t < T\}$ is \emph{uniformly 2-convex} if there exists a uniform constant $\beta > 0$
so that
\begin{equation}
\label{eqn-2convex}
\lambda_1 + \lambda_2 \ge \beta H, \qquad \mbox{ for all} \,\, t \le t_0.
\end{equation} \end{definition}
Throughout the paper we will be using the following observation: if an Ancient Oval $M_t$ is uniformly 2-convex, then by results in \cite{W}, the backward limit of its type-I parabolic blow-up must be a shrinking round cylinder $\mathbb{R}\times S^{n-1}$, with radius $\sqrt{2(n-1)}$.
Based on formal matched asymptotics, Angenent \cite{A} conjectured the existence of an Ancient Oval, that is, of an ancient solution that for $t\to 0$ collapses to a round point, but for $t\to -\infty$ becomes more and more oval in the sense that it looks like a round cylinder $\mathbb{R}\times S^{n-1}$ in the middle region, and like a rotationally symmetric translating soliton (the Bowl soliton) near the tips. A variant of this conjecture was proved already by White in \cite{Wh}. By considering convex regions of increasing eccentricity and using a limiting argument, he proved the existence of ancient flows of compact, convex sets that are not self-similar. Haslhofer and Hershkovits \cite{HH} carried out White's construction in more detail, including, in particular, the study of the geometry at the tips. As a result they gave a rigorous and simple proof for the existence of an Ancient Oval.
Our main result in this paper is as follows.
\begin{theorem}
\label{thm-main-main}
Let $\{ M_t, \, -\infty < t < T \} $ be a uniformly 2-convex Ancient Oval.
Then it is unique and hence must be the solution constructed by White in \cite{Wh} and later by Haslhofer and Hershkovits in \cite{HH}, up to ambient isometries, scaling and translations in time. \end{theorem}
The proof of this theorem will follow from the results stated below.
\begin{theorem}
\label{thm-rot-symm}
If $\{M_t : -\infty < t <0\}$ is an Ancient Oval which is uniformly 2-convex, then it is rotationally symmetric. \end{theorem}
Our proof of Theorem \ref{thm-rot-symm} closely follows the arguments by Brendle and Choi in \cite{BC, BC2} on the uniqueness of strictly convex, noncompact, uniformly 2-convex, and noncollapsed ancient mean curvature flow. It was shown in \cite{BC} that such solutions are rotationally symmetric. Then, by analyzing the rotationally symmetric solutions, Brendle and Choi showed that such solutions agree with the Bowl soliton.
Given Theorem~\ref{thm-rot-symm}, we may assume in our proof of Theorem~\ref{thm-main-main} that any Ancient Oval $M_t$ is rotationally symmetric. After applying a suitable Euclidean motion we may assume that its {\em axis of symmetry is the $x_1$-axis}. Then, $M_t$ can be represented as \begin{equation}
\label{eq-O1xOn-symmetry}
M_t = \bigl\{ (x, x') \in {\mathbb R}\times{\mathbb R}^{n} : -d_1(t)<x<d_2(t), \|x'\|=U(x, t)\bigr\} \end{equation}
for some function $\|x'\|=U(x,t)$, and from now on we will set $x:=x_1$ and $x'=(x_2, \cdots, x_{n+1})$. We call the points $(-d_1(t), 0)$ and $(d_2(t),0)$ \emph{the tips} of the surface. The function $U(x, t)$, which we call the \emph{profile} of the hypersurface $M_t$, is only defined for $x\in[-d_1(t), d_2(t)]$. Any surface $M_t$ defined by \eqref{eq-O1xOn-symmetry} is automatically invariant under $O(n)$ acting on ${\mathbb R}\times{\mathbb R}^n$. Convexity of the surface $M_t$ is equivalent to concavity of the profile $U$, i.e.~$M_t$ is convex if and only if $U_{xx}\leq0$.
A family of surfaces $M_t$ defined by $\|x'\|=U(x, t)$ evolves by mean curvature flow if and only if the profile $U(x,t)$ satisfies \begin{equation}
\label{eq-u-original}
\frac{\partial U}{\partial t} = \frac{U_{xx}}{1+U_x^2} - \frac{n-1}{U}. \end{equation} If $M_t$ satisfies MCF, then its parabolic rescaling $\bar{M}_\tau$ defined by \eqref{eq-parabolic-blow-up} evolves by the \emph{rescaled MCF} \[ \nu\cdot\frac{\partial \bar F}{\partial \tau} = H + \tfrac12 \bar F\cdot\nu, \] where $\bar F(x, \tau) = e^{\tau/2}F(x, T-e^{-\tau})$ is the parametrization of $\bar M_\tau$, and $\nu = \nu(x, t)$ is the corresponding unit normal. Also, \[ \bar{M}_{\tau} = \{(y,y')\in \mathbb{R}\times \mathbb{R}^n \mid
-\bar{d}_1(\tau) \le y \le \bar{d}_2(\tau), \,\,\, \|y'\| = u(y,\tau)\} \] for a profile function $u$, which is related to $U$ by \[ U(x,t) = \sqrt{T-t}\, u(y, \tau), \qquad y=\frac x{ \sqrt{T-t}}, \quad \tau=-\log (T-t). \] The points $(-\bar{d}_1(\tau),0)$ and $(\bar{d}_2(\tau),0)$ are referred to as the tips of rescaled surface $\bar{M}_{\tau}$. Equation \eqref{eq-u-original} for $U(x,t)$ is equivalent to the following equation for $u(y,\tau)$ \begin{equation}
\label{eq-u}
\frac{\partial u}{\partial \tau} = \frac{u_{yy}}{1+u_y^2} - \frac y2 \, u_y - \frac{n-1}{u}+ \frac u2. \end{equation}
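As a consistency check, the shrinking cylinder and sphere correspond to $\tau$-independent solutions of \eqref{eq-u}: for $u\equiv\sqrt{2(n-1)}$ the right hand side of \eqref{eq-u} vanishes identically, while for the sphere profile $u(y)=\sqrt{2n-y^2}$ one has $u_y=-y/u$, $1+u_y^2=2n/u^2$ and $u_{yy}=-2n/u^3$, hence
\[
\frac{u_{yy}}{1+u_y^2}-\frac y2\,u_y-\frac{n-1}{u}+\frac u2
=-\frac1u+\frac{y^2}{2u}-\frac{n-1}{u}+\frac u2
=\frac{y^2+u^2-2n}{2u}=0 .
\]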
It follows from the discussion above that our most general result, Theorem \ref{thm-main-main}, reduces to the following classification in the presence of rotational symmetry. \begin{theorem}
\label{thm-main}
Let $(M_1)_t$ and $(M_2)_t$, $-\infty < t < T$, be two $O(n)$-invariant Ancient Ovals with the same axis of symmetry (which is assumed to be the $x_1$-axis) whose profile functions $U_1(x,t)$ and $U_2(x,t)$ satisfy equation \eqref{eq-u-original}
and rescaled profile functions $u_1(y,\tau)$ and $u_2(y,\tau)$ satisfy equation \eqref{eq-u}.
Then, they are the same up to translations along the axis of symmetry (translations in $x$), translations in time and parabolic rescaling.
\end{theorem}
Since the asymptotics result from \cite{ADS} will play a significant role in this work, we state it below for the reader's convenience.
\begin{theorem}[Angenent, Daskalopoulos, Sesum in \cite{ADS}]
\label{thm-old}
Let $\{M_t\}$ be any $O(1)\times O(n)$ invariant Ancient Oval (see Definition \ref{def-oval}).
Then the solution $u(y,\tau)$ to \eqref{eq-u}, defined on $\mathbb{R}\times \mathbb{R}$, has the following asymptotic expansions:
\begin{enumerate}
\item[(i)] For every $M > 0$,
\[
u(y,\tau) = \sqrt{2(n-1)} \Bigl(1 - \frac{y^2 - 2}{4|\tau|}\Bigr) + o(|\tau|^{-1}), \qquad |y| \le M
\]
as $\tau \to -\infty$.
\item[(ii)] Define $z := {y}/{\sqrt{|\tau|}}$ and $\bar{u}(z,\tau) := u(z\sqrt{|\tau|}, \tau)$.
Then,
$$\lim_{\tau \to -\infty} \bar{u}(z,\tau) = \sqrt{(n-1)\, (2 - z^2)}$$
uniformly on compact subsets in $|z| \leq \sqrt{2}$.
\item[(iii)] Denote by $p_t$ the tip of $M_t \subset \mathbb{R}^{n+1}$, and define for any $t_*<0$ the rescaled flow at the tip
\[
\tilde{M}_{t_*}(t) = \lambda(t_*) \bigl\{M_{t_* + t \lambda(t_*)^{-2}} - p _{t_*}\bigr\}
\]
where
\[
\lambda(t) := H(p_{t}, t) = H_{\max}(t) = \sqrt{ \frac{\log|t|}{2\,|t|} } \,\bigl(1+o(1)\bigr)
\]
Then, as $t_*\to-\infty$, the family of mean curvature flows $\tilde M_{t_*}(\cdot)$ converges to the unique unit speed Bowl soliton, i.e.~the unique convex rotationally symmetric translating soliton with velocity one.
\end{enumerate} \end{theorem}\noindent
Before we conclude our introduction we give a short description of our proof for Theorem \ref{thm-main}. A more detailed outline of this proof is given in Section \ref{sec-regions}.
\noindent{\em Discussion on the proof of Theorem \ref{thm-main}.} The proof of Theorem \ref{thm-main} makes extensive use of our previous work \cite{ADS} where the detailed asymptotic behavior of Ancient Ovals, as $\tau:= - \log |t| \to -\infty$, was given under the assumption of $O(1)\times O(n)$ symmetry (see Theorem ~\ref{thm-old} below). Note that our symmetry result, Theorem ~\ref{thm-rot-symm}, which will be shown in Section \ref{sec-rot-symm}, only shows the $O(n)$-symmetry of solutions and not the $O(1)\times O(n)$-symmetry assumed in Theorem \ref{thm-old}. However, as we will demonstrate in the Appendix of this work (see Theorem \ref{thm-O1}), the estimates in Theorem ~\ref{thm-old} simply extend to the $O(n)$-symmetric case. Since the proof of Theorem \ref{thm-main} is quite involved, in Section \ref{sec-regions} we will give an outline of the different steps of our proof. The main idea is simple: given $U_1(x,t)$ and $U_2(x,t)$ any two solutions of \eqref{eq-u-original}, we will find parameters $\alpha, \beta, \gamma$, corresponding to translations along the x-axis, translations in time $t$ and parabolic rescaling respectively, such that $U_1(x,t) \equiv U^{\alpha\beta\gamma}(x,t)$, where $U^{\alpha\beta\gamma}$ denotes the image of $U(x,t)$ under these transformations (see \eqref{eq-Ualphabeta}). To achieve this uniqueness, we will consider the corresponding rescaled profiles $u_1(y,\tau), u^{\alpha\beta\gamma}_2(y,\tau)$ and show that $w:= u_1(y,\tau)- u^{\alpha\beta\gamma}_2(y,\tau) \equiv 0$. It will mainly follow from analyzing the equation for $w$ in the {\em cylindrical region} (the region $\{ (y, \tau): \,\, u_1(y,\tau) \geq \theta/2 >0 \}$, for some $\theta >0$ and small). We restrict $w$ to the cylindrical region by introducing an appropriate cut off function $\varphi_{\mathcal{C}}$ and setting $w_{\mathcal{C}}:= \varphi_{\mathcal{C}}\, w$. The difference $w_{\mathcal{C}}$ in this region satisfies the equation \begin{equation}\label{eq-wC-pde}
\partial_\tau w_{\mathcal{C}} = \mathcal{L} w_{\mathcal{C}} + \mathcal{E} [w, \varphi_{\mathcal{C}}] \end{equation} for a nonlinear error term $\mathcal{E} [w, \varphi_{\mathcal{C}}]$. The operator $ \mathcal{L} := \partial_y^2 - \frac y2 \partial_y + 1$ is simply the linearization of equation \eqref{eq-u} at the constant solution $u \equiv \sqrt{2(n-1)}$, i.e.\ at the cylinder which we see in the middle. This operator is well studied and it is known to have two unstable modes (corresponding to two positive eigenvalues) and one neutral mode (corresponding to the zero eigenvalue). The uniqueness at the end follows from a coercive estimate on \eqref{eq-wC-pde}
with the right norm (we call it $ \| \cdot \|_{2,\infty}$), which roughly implies that if $w \not\equiv 0$, then \begin{equation}\label{eq-wC-estimate}
\| w_{\mathcal{C}} \|_{2,\infty} \leq C \, \| \mathcal{E} [w, \varphi_{\mathcal{C}}] \|_{2,\infty} < \frac 12 \, \| w_{\mathcal{C}} \|_{2,\infty} \end{equation} thus leading to a contradiction. It is apparent that to obtain such a coercive estimate one needs to adjust the parameters $\alpha, \beta, \gamma$ in such a way that the projections $\mathcal{P}_+ w (\tau)$ and $\mathcal{P}_0 w(\tau)$ onto the positive and zero eigenspaces of $\mathcal{L}$ are all {\em simultaneously zero} at some time $\tau_0 \ll -1$. The main challenge in showing \eqref{eq-wC-estimate} comes from the \emph{error terms} which are introduced by the cut-off function $\varphi_{\mathcal{C}}$ and are supported in the {\em transition region} between the {\em cylindrical} and {\em tip} regions (the latter is defined to be the region $\{ (y, \tau): \,\, u_1(y,\tau) \leq 2\theta \}$). To estimate these errors one needs to consider our equation in the tip region and show a suitable coercive estimate there which allows us to {\em bound $w$ in the tip region back in terms of $w_{\mathcal{C}}$}. To achieve this, one heavily uses the \emph{a priori} estimates and Theorem \ref{thm-old} from \cite{ADS}. We also need to introduce an appropriate weighted norm in the tip region which lets us prove the Poincar\'e type estimate we need to proceed. Unfortunately, numerous technical difficulties arise from various facts, including the non-compactness of the limit as $\tau \to -\infty$ and the fact that $u_y \to \pm \infty$ at the tips.
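For orientation, we recall the standard spectral facts behind this statement: on the Gaussian space $L^2\bigl(\mathbb{R}, e^{-y^2/4}dy\bigr)$ the operator $\mathcal{L}=\partial_y^2-\frac y2\,\partial_y+1$ is self-adjoint with eigenvalues $1-\frac k2$, $k=0,1,2,\dots$, whose eigenfunctions are the Hermite polynomials $h_k(y)$. Thus the two unstable modes are spanned by $h_0=1$ and $h_1=y$ (eigenvalues $1$ and $\frac12$), while the neutral mode is spanned by $h_2(y)=y^2-2$, the quadratic profile that already appears in part (i) of Theorem \ref{thm-old}.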
In previous classifications of ancient solutions to mean curvature flow and Ricci flow, \cite{DHS1}, \cite{DHS2}, \cite{BC, BC2}, an essential role in the proofs was played by the fact that all such solutions were either given in closed form or were solitons. One significant feature of the techniques in our current work is that they overcome this requirement and can potentially be applied to many other parabolic equations, and in particular to other geometric flows. To our knowledge, our work and the recent work by Bourni, Langford and Tinaglia \cite{BLT} are the first classification results for geometric ancient solutions in which the solutions are neither given in closed form nor solitons. Let us also point out that our current techniques are reminiscent of the significant work by Merle and Zaag in \cite{MZ}, which has provided an inspiration for us.
{\bf Acknowledgements:} The authors are indebted to S. Brendle for many useful discussions regarding the rotational symmetry of ancient solutions.
\section{Rotational symmetry}\label{sec-rot-symm}
The main goal in this section is to prove Theorem \ref{thm-rot-symm}. Our proof of Theorem \ref{thm-rot-symm} follows closely the arguments of the recent work by Brendle and Choi \cite{BC, BC2} on the uniqueness of strictly convex, uniformly 2-convex, noncompact and noncollapsed ancient solutions of mean curvature flow in ${\mathbb R}^{n+1}$. It was shown in \cite{BC} that such solutions are rotationally symmetric. Then by analyzing the rotationally symmetric solutions, Brendle and Choi showed that such solutions agree with the Bowl soliton. For the reader's convenience we state their result next.
\begin{theorem}[Brendle and Choi \cite{BC}]
\label{thm-BC}
Let $\{M_t : t \in (-\infty,0)\}$ be a noncompact ancient mean curvature flow in ${\mathbb R}^{n+1}$ which is strictly convex, noncollapsed, and uniformly 2-convex.
Then $M_t$ agrees with the Bowl soliton, up to scaling and ambient isometries. \end{theorem}
In the proof of Theorem \ref{thm-rot-symm} we will use both the key results that led to the proof of the main theorem in \cite{BC} (see Propositions \ref{prop-neck} and \ref{prop-cap} below), and the uniqueness result as stated in Theorem \ref{thm-BC}.
Before we proceed with the proof of Theorem~\ref{thm-rot-symm}, let us recall some standard notation. Our solution $M_t$ is embedded in ${\mathbb R}^{n+1}$ for all $t \in (-\infty,T)$, and under mean curvature flow time scales like distance squared. We denote by $\mathcal{P}(\bar x,\bar t,r)$ the \emph{parabolic cylinder} centered at $(\bar x,\bar t) \in {\mathbb R}^{n+1}\times {\mathbb R}$ of radius $r > 0$, namely the set \[ \mathcal{P}(\bar x,\bar t,r) := \mathcal{B}(\bar x,r) \times [\bar t-r^2,\bar t] \]
where $\mathcal{B}(\bar x,r):= \{ x \in {\mathbb R}^{n+1} \mid |x-\bar x| \leq r \} $ denotes the \emph{closed} Euclidean ball of radius $r$ centered at $\bar x$ in ${\mathbb R}^{n+1}$.
Also, following the notation in \cite{HS} and \cite{BC}, we denote by $ \hat{\mathcal{P}} (\bar x,\bar t,r)$ the parabolic cylinder {\em rescaled by the mean curvature}, centered at $(\bar x,\bar t) \in \mathbb{R}^{n+1}\times {\mathbb R}$ and of radius $r > 0$, namely the set \[ \hat {\mathcal{P}} (\bar x,\bar t,r) := \mathcal{P}(\bar x,\bar t, \hat \rho (\bar x, \bar t)\, r), \qquad \hat \rho(\bar x, \bar t) :=\frac{n}{H(\bar x,\bar t)}. \]
Note that in \cite[\S7]{HS} Huisken and Sinestrari consider parabolic cylinders with respect to the intrinsic metric $g(t)$ on the solution $M_t$, which in our case is equivalent to the extrinsic metric on space-time that we are considering here.
We recall Brendle and Choi's \cite{BC} definition of a mean curvature flow being $\epsilon$-symmetric, in terms of the normal components of rotation vector fields. In what follows we identify ${\mathfrak{so}}(n)$ with the subalgebra of ${\mathfrak{so}}(n+1)$ consisting of skew symmetric matrices of the form \[ J = \begin{bmatrix}
0 & 0 \\ 0 & J' \end{bmatrix},\quad \text{ with }\quad J'\in{\mathfrak{so}}(n). \] Thus ${\mathfrak{so}}(n)$ acts on the second factor in the splitting ${\mathbb R}^{n+1}={\mathbb R}\times{\mathbb R}^n$. Any $J\in {\mathfrak{so}}(n+1)$ generates a vector field on ${\mathbb R}^{n+1}$ by $\vec v(x) = Jx$. If $\varPhi(x) = Sx+p$ is a Euclidean motion, with $p\in{\mathbb R}^{n+1}$ and $S\in O(n+1)$, then the pushforward of the vector field $\vec v(x) = Jx$ under $\varPhi$ is given by \[ \varPhi_*\vec v(x) = d\varPhi_x\cdot \vec v(\varPhi^{-1}x) = SJS^{-1}(x-p). \] Any vector field of this form is a \emph{rotation vector field.} \begin{definition}
A collection of vector fields $\mathcal{K} := \{K_{\alpha}\mid 1 \le \alpha \le \frac{1}{2}n(n-1)\}$ on $\mathbb{R}^{n+1}$ is a \emph{normalized set of rotation vector fields} if there exist an orthonormal basis $\{J_{\alpha}\mid 1 \le \alpha \le \frac{1}{2}n(n-1)\}$ of ${\mathfrak{so}}(n) \subset {\mathfrak{so}}(n+1)$, a matrix $S \in O(n+1)$, and a point $q\in \mathbb{R}^{n+1}$ such that
\[
K_{\alpha}(x) = S J_{\alpha} S^{-1}(x-q).
\] \end{definition}
\begin{definition}
\label{def-normalized}
Let $M_t$ be a solution of mean curvature flow.
We say that a point $(\bar{x},\bar{t})$ is $\epsilon$-symmetric if there exists a normalized set of rotation vector fields $\mathcal{K}^{(\bar{x},\bar{t})} = \{K_{\alpha}^{(\bar{x},\bar{t})} \mid 1 \le \alpha \le \frac{1}{2}n(n-1)\}$ such that $\max_{\alpha}|\langle K_{\alpha}, \nu\rangle| H \le \epsilon$ in the parabolic neighborhood $\bar{\mathcal{P}}(\bar{x},\bar{t},10)$. \end{definition}
Lemma 4.3 in \cite{BC} allows us to control how the axis of rotation of a normalized set of rotation vector fields $\mathcal{K}^{(x,t)}$ varies as we vary the point $(x,t)$.
The proof of Theorem \ref{thm-rot-symm} relies on the following two key propositions which were both shown in \cite{BC}. The first proposition is directly taken from \cite{BC} (see Theorem 4.4 in \cite{BC}). The second proposition required some modifications of arguments in \cite{BC} and hence we present parts of its proof below (see \ref{prop-cap}).
\begin{definition}
A point $(x, t)$ of a mean curvature flow lies on an $(\epsilon, L)$-neck if there is a Euclidean transformation $\varPhi:{\mathbb R}^{n+1}\to{\mathbb R}^{n+1}$ , and a scale $\lambda>0$ such that
\begin{itemize}
\item $\varPhi$ maps $x$ to $(0, \sqrt{2(n-1)}, 0, \dots, 0)$
\item for all $\tau\in[-L^2, 0]$ the hypersurface $\lambda^{-1} \varPhi \bigl(M_{t + \lambda^2\tau}\bigr)$ is $\epsilon$-close in $C^{20}$ to the cylinder of length $L$, of radius $\sqrt{2(n-1)(1-\tau)}$, and with the $x_1$-axis as symmetry axis.
\end{itemize} \end{definition}
\begin{proposition}[Neck Improvement - Theorem 4.4 in \cite{BC}]
\label{prop-neck}
There exists a large constant $L_0$ and a small constant $\epsilon_0$ with the following property.
Suppose that $M_t$ is a mean curvature flow, and suppose that $(\bar{x}, \bar{t})$ is a point in space-time with the property that every point in $\hat{\mathcal{P}}(\bar{x},\bar{t}, L_0)$ is $\epsilon$-symmetric and lies on an $(\epsilon_0, 10)$-neck, where $\epsilon \le \epsilon_0$.
Then $(\bar{x}, \bar{t})$ is $\frac{\epsilon}{2}$-symmetric. \end{proposition}
\begin{proof}
The proof is given in Theorem 4.4 in \cite{BC}. \end{proof}
The next result will be shown by slight modification of arguments in the proof of Theorem 5.2 in \cite{BC}. The proof of Proposition \ref{prop-cap} below follows closely arguments in \cite{BC}.
\begin{proposition}[Cap Improvement \cite{BC}]
\label{prop-cap}
Let $L_0$ and $\epsilon_0$ be chosen as in the Neck Improvement Proposition \ref{prop-neck}.
Then there exist a large constant $L_1 \ge 2L_0$ and a small constant $\epsilon_1 \le \frac{\epsilon_0}{2}$ with the following property.
Suppose that $M_t$ is a mean curvature flow solution defined on $\hat{\mathcal{P}}(\bar{x},\bar{t},L_1)$.
Moreover, we assume that
$\hat{\mathcal{P}}(\bar{x},\bar{t},L_1)$ is, after scaling to make $H(\bar x, \bar t)=1$, $\epsilon_1$-close in the $C^{20}$-norm to a piece of a Bowl soliton which includes the tip (where the tip lies well inside the interior of that piece of the Bowl soliton, at a definite distance from its boundary)
and that every point in $\hat{\mathcal{P}}(\bar{x},\bar{t},L_1)$ is $\epsilon$-symmetric, where $\epsilon \le \epsilon_0$.
Then $(\bar{x}, \bar{t})$ is $\frac{\epsilon}{2}$-symmetric. \end{proposition}
\begin{proof}
Without loss of generality assume that $\bar{t} = -1$, $H(\bar{x},-1) = 1$ and $\bar{x}\in \Omega_{-1}^1$.
For the sake of the proof, keep in mind that the statement of the Proposition is local in nature: only the behavior of our solution on a large parabolic cylinder $\hat{\mathcal{P}}(\bar{x},-1,L_1)$ matters, while the behavior outside of this neighborhood does not.
Since the Hessian of the mean curvature at the point of maximum mean curvature on the Bowl soliton is strictly negative definite, the assumptions in the Proposition imply that, if we take $\epsilon_1$ sufficiently small, the maximum of $H(\cdot,t)$ in $B(\bar{x},L_1)\cap M_t$ is attained at a unique interior point $q_t \in B(\bar{x},L_1)\cap M_t$.
Moreover, the Hessian of the mean curvature at $q_t$ is negative definite.
Hence, $q_t$ varies smoothly in $t$.
We now conclude that if $(x_0,t_0) \in \hat{\mathcal{P}}(\bar{x},-1,L_1)$, then
\begin{equation}
\label{eq-mon-tip}
\frac{d}{dt}|x_0 - q_t| < 0, \qquad -1 - L_1^2 \le t \le t_0.
\end{equation}
The proof of \eqref{eq-mon-tip} is the same as the proof of Lemma 5.2 in \cite{BC}.
We claim that there exists a uniform constant $s_*$ with the property that every point $(x,t) \in \hat{\mathcal{P}}(\bar{x},-1,L_1)$ with $|x - q_t| \ge s_*$ lies on an $(\epsilon_0,10)$-neck and satisfies $|x - q_t| H(x,t) \ge 1000 L_0$.
Indeed, knowing the behavior of the Bowl soliton, it is a straightforward computation to check that the previous claims hold on the Bowl soliton, with the constant $2000 L_0$, for example.
By our assumption, $\hat{\mathcal{P}}(\bar{x},-1,L_1)$ is $\epsilon_1$-close to the Bowl soliton and hence the claims are true for our solution as well.
If $|\bar{x} - q_{-1}| \ge s_*$, the Proposition follows immediately from Proposition \ref{prop-neck}.
Thus, we may assume that $|\bar{x} - q_{-1}| \le s_*$.
Then we have the following claim.
\begin{claim}
Suppose that $M_t$ is an ancient solution of mean curvature flow.
Given any positive integer $j$, there exist a large constant $L(j)$
and a small constant $\epsilon(j)$ with the following property: if the parabolic neighborhood $\hat{\mathcal{P}}(\bar{x},-1,L(j))$ is $\epsilon(j)$-close in the $C^{20}$-norm to a piece of the Bowl soliton which includes the tip, and every point in $\hat{\mathcal{P}}(\bar{x},-1,L(j))$ is $\epsilon$-symmetric, then every point $(x,t)\in \hat{\mathcal{P}}(\bar{x},-1,L(j))$ with $t \in [-2^{\frac{3j}{100}},-1]$ and $s_* 2^{\frac{j}{100}} \le |x - q_t| \le s_* 2^{\frac{j+1}{100}}$ is $2^{-j} \epsilon$-symmetric.
\end{claim}
\begin{proof}
Assume $\kappa$ is the maximal curvature of the tip of the Bowl soliton in the statement of the Claim.
Note that $\kappa$ may depend on $(\bar{x},\bar{t})$, but is independent of $j$.
Define $L(j) \ge \frac{2^{\frac{1}{100}}}{1000}\, s_* + s_* 2^{\frac{j+2}{100}} + \kappa\, (2^{\frac{3(j+1)}{100}}+1) + s_*$, which choice will become apparent later.
The proof is by induction on $j$ and is similar to the proof of Proposition 5.3 in \cite{BC}.
For $j = 0$ we have $s_* \le |x - q_t| \le s_* 2^{\frac{1}{100}}$.
By the above discussion we have that $(x,t)$ lies on an $(\epsilon_0,10)$-neck and $|x-q_t| \ge \frac{1000 L_0}{H(x,t)}$.
This implies $\frac{L_0}{H(x,t)} \le \frac{s_* 2^{\frac{1}{100}}}{1000}$ and hence $\hat{\mathcal{P}}(x,t,L_0) \subset \hat{\mathcal{P}}(\bar{x},-1,L_1)$ if we choose $L_1$ sufficiently big compared to $s_*$.
Hence, every point in $\hat{\mathcal{P}}(x,t,L_0)$ is $\epsilon$-symmetric and lies on an $(\epsilon_0,10)$-neck (where $\epsilon < \epsilon_0$).
By Proposition \ref{prop-neck} we conclude $(x,t)$ is $\frac{\epsilon}{2}$-symmetric.
Assume the claim holds for $j-1$.
We want to show it holds for $j$ as well, that is, if the parabolic neighborhood $\hat{\mathcal{P}}(\bar{x},-1,L(j))$ is $\epsilon(j)$-close in the $C^{20}$-norm to a piece of the Bowl soliton which includes the tip, and every point in $\hat{\mathcal{P}}(\bar{x},-1,L(j))$ is $\epsilon$-symmetric, then every point $(x,t)\in \hat{\mathcal{P}}(\bar{x},-1,L(j))$ with $t \in [-2^{\frac{3j}{100}},-1]$ and $s_* 2^{\frac{j}{100}} \le |x - q_t| \le s_* 2^{\frac{j+1}{100}}$ is $2^{-j} \epsilon$-symmetric.
Suppose this is false.
Then there exists $(x_0,t_0)$ so that $x_0\in M_{t_0}$ and $t_0\in [-2^{\frac{3j}{100}},-1]$ and $s_* 2^{\frac{j}{100}} \le |x_0 - q_{t_0}| \le s_* 2^{\frac{j+1}{100}}$ so that $(x_0,t_0)$ is not $2^{-j}\epsilon$-symmetric.
By Proposition \ref{prop-neck}, there exists a point $(y,s) \in \hat{\mathcal{P}}(x_0,t_0,L_0)$ such that either $(y,s)$ is not $2^{-j+1}\epsilon$-symmetric or $(y,s)$ does not lie at the center of an $(\epsilon_0,10)$-neck.
Note that if we choose $L(j) \ge \frac{2^{\frac{1}{100}}}{1000}\, s_* + s_* 2^{\frac{j+2}{100}} + \kappa\, (2^{\frac{3(j+1)}{100}}+1) + s_*$, then $(y,s) \in \hat{\mathcal{P}}(\bar{x},-1,L(j-1))$.
To see this we combine the inequalities $|y-x_0| \leq \frac{L_0}{H(x_0,t_0)}$, $|\bar x - q_{-1}| \leq s_*$,
$|x_0 - q_{t_0}| \le s_* 2^{\frac{j+1}{100}}$ to conclude that
\begin{equation*}
\begin{split}
|y-\bar x| &\leq |y-x_0| + |x_0 - q_{t_0}| + |q_{t_0} - q_{-1}| + |q_{-1} - \bar x| \\&\leq \frac{L_0}{H(x_0,t_0)} + s_* 2^{\frac{j+1}{100}} + \kappa \, |t_0+1| + s_* \\
&\leq \frac{2^{\frac 1{100}}} {1000} \, s_* + s_* 2^{\frac{j+1}{100}} + \kappa \, (2^{\frac{3j}{100}} +1 ) + s_*\\
&\le L(j-1).
\end{split}
\end{equation*}
In view of the induction hypothesis, we conclude that $|y - q_s| \le 2^{\frac{j-1}{100}} s_*$.
We can now follow the proof of Proposition 5.3 in \cite{BC} closely to obtain a contradiction.
In that part of the proof one needs to use \eqref{eq-mon-tip}, which is proved in Lemma 5.2 in \cite{BC}.
It is clear from the proof in \cite{BC} that if we take a bigger parabolic cylinder around $(\bar{x},-1)$ of size $L(j)$, in order to still have \eqref{eq-mon-tip} one needs to require that $\hat{\mathcal{P}}(\bar{x},-1,L(j))$ is $\epsilon(j)$ close to a Bowl soliton, where $\epsilon(j)$ needs to be taken very small, depending on $L(j)$.
This is clear from the proof of \eqref{eq-mon-tip} that can be found in \cite{BC}.
\end{proof}
In the following, $j$ will denote a large integer, which will be determined later.
Moreover, assume that $L \ge L(j)$ and $\epsilon \le \epsilon(j)$.
Using the Claim, we conclude that for every point $(x,t)\in \{(x,t)\,\,|\,\, t\in [-2^{\frac{3j}{100}},-1], \,\,\,\, s_* 2^{\frac{j}{100}} \le |x - q_t| \le s_* 2^{\frac{j+1}{100}}\}$ there exists a normalized set of rotation vector fields $\mathcal{K}^{(x,t)} = \{K_{\alpha}^{(x,t)}\,\,\,|\,\,\, 1 \le \alpha \le \frac{n(n-1)}{2}\}$, such that $\max_{\alpha}|\langle K_{\alpha}^{(x,t)},\nu\rangle| H \le 2^{-j} \epsilon$ on $\bar{\mathcal{P}}(x,t,10)$.
Moreover, since $|x-q_t| \ge s_*$ implies $H(x,t) |x-q_t| \ge 1000 L_0$, we have that
\[\max_{\alpha}|\langle K^{(x,t)}_{\alpha},\nu\rangle| \le \frac{2^{\frac{j+1}{100} - j} s_*}{1000 L_0}\, \epsilon \le C \, 2^{\frac{j}{10} - j} \, \epsilon, \qquad j \ge j_0,\]
for a uniform constant $C$ that is independent of $j$ and $\epsilon$.
Lemma 4.3 in \cite{BC} allows us to control how the axis of rotation of $\mathcal{K}^{(x,t)}$ varies as we vary the point $(x,t)$.
More precisely, as in \cite{BC}, if $(x_1,t_1)$ and $(x_2,t_2)$ are in $\{(x,t)\,\,|\,\, t\in [-2^{\frac{3j}{100}},-1], \,\,\,\, s_* 2^{\frac{j}{100}} \le |x - q_t| \le s_* 2^{\frac{j+1}{100}}\}$ and $(x_2,t_2) \in \hat{\mathcal{P}}(x_1,t_1,1)$, then
\begin{align*}
& \inf_{w\in O\left(\frac{n(n-1)}{2}\right)}\, \sup_{B_{10H(x_2,t_2)^{-1}}(x_2)}\, \max_{\alpha}\left| K_{\alpha}^{(x_1,t_1)} - \sum_{\beta=1}^{\frac{n(n-1)}{2}} w_{\alpha\beta} K_{\beta}^{(x_2,t_2)}\right| \\
&\le C 2^{-j} H(x_2,t_2)^{-1}.
\end{align*}
Hence, we can find a normalized set of rotation vector fields $\mathcal{K}^{(j)} = \{K_{\alpha}^{(j)}\,\,\,|\,\,\, 1 \le \alpha \le \frac{n(n-1)}{2}\}$ so that if $(x,t) \in \{(x,t)\,\,|\,\, t\in [-2^{\frac{3j}{100}},-1], \,\,\,\, s_* 2^{\frac{j}{100}} \le |x - q_t| \le s_* 2^{\frac{j+1}{100}}\}$, then,
\[
\inf_{w\in O\left(\frac{n(n-1)}{2}\right)}\, \max_{\alpha} \left|K_{\alpha}^{(j)} - \sum_{\beta=1}^{\frac{n(n-1)}{2}} w_{\alpha\beta} K_{\beta}^{(x,t)}\right| \le C 2^{-\frac j2},
\]
at the point $(x,t)$.
From this, as in \cite{BC}, we deduce that $\max_{\alpha}|\langle K_{\alpha}^{(j)},\nu\rangle| \le C 2^{-\frac j2}$ for all points $(x,t) \in \{(x,t)\,\,|\,\, t\in [-2^{\frac{3j}{100}},-1], \,\,\,\, s_* 2^{\frac{j}{100}} \le |x - q_t| \le s_* 2^{\frac{j+1}{100}}\}$.
Finally, note that $\max_{\alpha}|\langle K_{\alpha}^{(j)},\nu\rangle| \le C 2^{\frac{j}{100}}$ whenever $x\in \{x\,\,|\,\, |x - q_t| \le s_* 2^{\frac{j}{100}}\}$ and $t = -2^{\frac{3j}{100}}$.
As in \cite{BC}, for each $\alpha\in \{1,\dots, \frac{n(n-1)}{2}\}$ we define a function $f_{\alpha}^{(j)} : \{(x,t)\,\,\,|\,\,\, t\in [-2^{\frac{3j}{100}},-1], \,\,\,\, |x - q_t| \le s_* 2^{\frac{j}{100}}\} \to \mathbb{R}$ by
\[
f_{\alpha}^{(j)} := e^{2^{-\frac{j}{50}} t} \frac{\langle K_{\alpha}^{(j)},\nu\rangle}{H - 2^{-\frac{j}{100}}}.
\]
The same computation as in \cite{BC} implies that by the maximum principle applied to the evolution of $f_{\alpha}^{(j)}$ we get
\[
\sup_{
\{
(x,t)\,\,\,|\,\,\, t\in [-2^{\frac{3j}{100}},-1], \,\,\,\,
|x - q_t| \le s_* 2^{ \frac{j}{100} }
\} }
|f_{\alpha}^{(j)}(x,t)| \le C\, 2^{-\frac j4}.
\]
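Let us recall the structural fact underlying this maximum principle argument (it is also the starting point of the computation in \cite{BC}): for any ambient Killing vector field $K$, the normal component $\langle K,\nu\rangle$ satisfies the Jacobi-type equation
\[
\partial_t\langle K,\nu\rangle=\Delta_{M_t}\langle K,\nu\rangle+|A|^2\langle K,\nu\rangle
\]
along the mean curvature flow, and $H$ satisfies the same equation; roughly speaking, the factors $e^{2^{-\frac{j}{50}} t}$ and $H - 2^{-\frac{j}{100}}$ in the definition of $f_{\alpha}^{(j)}$ serve to absorb the lower order terms that arise when passing to the quotient.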
Standard interior estimates for parabolic equations give estimates for the higher order derivatives of $\langle K_{\alpha}^{(j)},\nu\rangle$.
Hence, if we choose $j$ sufficiently big, then the same reasoning as in \cite{BC} implies $(\bar{x},-1)$ is $\frac{\epsilon}{2}$-symmetric.
Having chosen $j$ in this way, we finally define $L_1 = L(j)$ and $\epsilon_1 := \epsilon(j)$.
Then $L_1$ and $\epsilon_1$ have the desired properties as stated in Proposition \ref{prop-cap}. \end{proof}
The goal of the remaining part of this section is to show how we can employ Propositions \ref{prop-neck} and \ref{prop-cap} to prove Theorem \ref{thm-rot-symm}.
Observe that by the crucial work of Haslhofer and Kleiner in \cite{HK} we know that a strictly convex $\alpha$-noncollapsed ancient solution to mean curvature flow sweeps out the whole space. Hence, the well known important result of X.J. Wang in \cite{W} shows that the rescaled flow, after a proper rotation of coordinates, converges, as time goes to $-\infty$, uniformly on compact sets, to a round cylinder of radius $\sqrt{2(n-1)}$.
This has as a consequence that $M_t\cap B_{8(n-1)\sqrt{|t|}}$ is a neck with radius $\sqrt{2(n-1)|t|}$. The complement
$M_t\backslash B_{8(n-1)\sqrt{|t|}}$ has two connected components, call them $\Omega_1^t$ and $\Omega_2^t$, both compact. Thus, for every $t$, the maximum of $H$ on $\Omega_1^t$ is attained at least at one point in $\Omega_1^t$ and similarly for $\Omega_2^t$.
For every $t$, we define the {\it tip} points $p_{t}^1$ and $p_{t}^2$ as follows. Let $p_t^k$, for $k=1,2$ be a point such that \[
|\langle F,\nu\rangle (p_t^k,t)| = |F|(p_t^k,t) \quad \mbox{and} \quad |F|(p_t^k,t) = \max_{\Omega_k^t} |F|(\cdot,t). \]
Denote by $d_k(t) := |F|(p_t^k,t)$, for $k\in \{1,2\}$.
Throughout the rest of the section we will be using the following observation about possible limits of our solution around an arbitrary sequence of points $(x_j,t_j)$, with $x_j \in M_{t_j}$ and $t_j \to -\infty$, when rescaled by $H(x_j,t_j)$.
\begin{lemma}
\label{lemma-possible-limits}
Let $M_t$, $t\in (-\infty,0)$ be an Ancient Oval satisfying the assumptions in Theorem \ref{thm-rot-symm}.
Fix a $k\in \{1,2\}$.
Then for every sequence of points $x_j\in M_{t_j}$ and any sequence of times $t_j\to -\infty$, the rescaled sequence of solutions $F_j(\cdot,t) := Q_j (F(\cdot,t_j+t Q_j^{-2}) - x_j)$, where $Q_j := H(x_j,t_j)$, subconverges to either a Bowl soliton or a shrinking round cylinder. \end{lemma}
\begin{proof}
By the global convergence theorem (Theorem 1.12) in \cite{HK} we have that after passing to a subsequence, the flow $M^j_t$ converges, as $j\to \infty$, to an ancient solution
$M^{\infty}_t$, for $t \in (-\infty, 0]$, which is convex and uniformly 2-convex.
Note that $H(0,0) = 1$ on the limiting manifold.
By the strong maximum principle applied to $H$ we have that $H > 0$ everywhere on $M^{\infty}_t$, where $t\in (-\infty,0]$.
If $M^{\infty}_t$ is strictly convex, by the classification result in \cite{BC} we have that it is a Bowl soliton.
If the limit is not strictly convex, by the strong maximum principle it splits off a line and hence it is of the form $N_t^{n-1}\times \mathbb{R}$, where $N_t^{n-1}$ is an $(n-1)$-dimensional ancient solution.
On the other hand the uniform 2-convexity assumption on our solution implies the inequality $\lambda_{\min}({N^{n-1}_t}) \ge \beta H (N^{n-1}_t)$,
for a uniform constant $\beta > 0$.
Thus, Lemma 3.14 in \cite{HK} implies that the limiting flow $M_t^{\infty}$ is a family of round shrinking cylinders $S^{n-1}\times\mathbb{R}$. \end{proof}
We will next show that points which are away from the tip points in both regions $\Omega_t^k$, $k=1,2$ are cylindrical.
\begin{lemma}
\label{lemma-cylindrical}
Let $M_t$, $t \in (-\infty,0)$, be an Ancient Oval satisfying the assumptions of Theorem \ref{thm-rot-symm} and fix $k\in \{1,2\}$.
Then, for every $\eta > 0$ there exist $\bar{L}$ and $t_0$, so that for all $t \le t_0$ and $L \ge \bar{L}$ the following holds
\begin{equation}\label{eqn-cyl10}
|x - p_t^k| \ge \frac{L}{H(x,t)} \implies \frac{\lambda_{min}}{H}(x,t) < \eta.
\end{equation}
We may choose $L$ so that \eqref{eqn-cyl10} holds for both $k=1,2$. \end{lemma}
\begin{proof}
Without loss of generality we may assume that $k=1$ and we will argue by contradiction.
If the statement is not true,
then there exist $L_j\to \infty$ and sequences of times $t_j\to -\infty$ and points $x_j \in M_{t_j}$ so that
\begin{equation}
\label{eqn-contra}
|x_j - p_{t_j}^1| \ge \frac{L_j}{H(x_j,t_j)} \quad \mbox{and} \quad \frac{\lambda_{\min}}{H}(x_j,t_j) \geq \eta.
\end{equation}
Rescale the flow around $(x_j,t_j)$ by $Q_j := H(x_j,t_j)$ as in Lemma \ref{lemma-possible-limits} and call the rescaled manifolds $M^j_t$.
Then,
\begin{equation}
|0 - \bar{p}_j^1| \ge L_j \to \infty, \qquad \mbox{as} \,\,\,\, j\to\infty
\end{equation}
where the origin and $\bar{p}_j^1$ correspond to $x_j$ and the tip points $p_{t_j}^1$ after rescaling, respectively.
By Lemma \ref{lemma-possible-limits} we have that passing to a subsequence $M_t^j$ converges to either a Bowl soliton or a cylinder.
Since $\frac{\lambda_{min}}{H}$ is a scaling invariant quantity, \eqref{eqn-contra} implies that on the limiting manifold we have
$\frac{\lambda_{min}}{H} (0,0) \geq \eta$
which immediately excludes the cylinder.
Thus the limiting manifold must be the Bowl soliton.
Let us next look at the tip points $p_{t_j}^2$ of our solution, which lie on the other side $\Omega_{t_j}^2$, and denote by $\bar{p}_j^2$ the corresponding points
on our rescaled solution.
Then we must have that $ |0 - \bar{p}_j^2| \le C_0 $ for some constant $C_0$.
Otherwise, if we had $\limsup_{j \to +\infty}|0 - \bar{p}_j^2| = +\infty$, this
together with \eqref{eqn-contra}, the convexity of our surface, the fact that the furthest points $p_{t_j}^1$ and $p_{t_j}^2$ lie on the opposite side of a necklike piece and the splitting theorem
would imply that the limit of $M_t^j$ would split off a line.
This and Lemma \ref{lemma-possible-limits} would yield that the limit of $M_t^j$ is the cylinder, which we have already ruled out.
In terms of our unrescaled solution $M_t$ then we conclude that
$ |x_j- p_{t_j}^2| \leq \frac{C_0}{H(x_j,t_j)}$.
Since $x_j\in \Omega_{t_j}^1$ and $p_{t_j}^2\in \Omega_{t_j}^2$, we would have that the whole neck-like region that divides the sets $\Omega_{t_j}^1$ and $\Omega_{t_j}^2$ lies at a distance less than or equal to $\frac{C_0}{H(x_j,t_j)}$ from $x_j$.
This would imply the whole neck-like region would have to lie on
a compact set of the Bowl soliton, implying that $\frac{\lambda_{min}}{H} (\cdot, t_j) \geq c_0 >0$ holds for some constant $c_0$, independent of $j$.
This is a contradiction since on the neck-like region of our solution, the scaling invariant quantity $ \lambda_{\min} H^{-1} \to 0$ as $t_j \to -\infty$.
The above discussion shows that $|x - p_t^1| \ge \frac{L}{H(x,t)}$ implies that $ \frac{\lambda_{min}}{H}(x,t) < \eta$, thus finishing the proof of the Lemma.
\end{proof}
In the following lemma we show that, for a fixed $k=1,2$, the mean curvature of an Ancient Oval solution satisfying the assumptions of Theorem \ref{thm-rot-symm} at points of $\Omega_t^k$ near the tip is comparable, in a quantitative way, to the mean curvature at the tip point.
\begin{lemma}
\label{lem-unif-equiv-curv}
Let $M_t$, $t \in (-\infty,0)$, be an Ancient Oval satisfying the assumptions of Theorem \ref{thm-rot-symm} and fix $k=1,2$.
For every $L > 0$ there exist uniform constants $c >0 $, $C < \infty$ and $t_0 \ll -1$ so that for all $t \le t_0$ we have
\begin{equation}
\label{eq-unif-equiv-curv}
c \, H(p_t^k,t) \le H(x,t) \le C\, H(p_t^k,t) \quad \mbox{{\em if}} \,\, |x - p_t^k| < \frac{L}{H(x,t)}, \, x \in \Omega_t^k.
\end{equation}
We may choose $c, C$ so that \eqref{eq-unif-equiv-curv} holds for both $k=1,2$. \end{lemma}
\begin{proof}
Let us take without loss of generality $k = 1$.
Let us first show the estimate from below.
Assume the statement is false.
This implies there exist a sequence of times $t_j\to -\infty$ and a sequence of constants $C_j\to \infty$ so that
\begin{equation}
\label{eq-violate}
H(p_{t_j}^1,t_j) \ge C_j\, H(x_j,t_j) \qquad \forall \,\, j
\end{equation}
for some $x_j \in \Omega_{t_j}^1$ such that $|x_j - p_{t_j}^1| < \frac{L}{H(x_j,t_j)}$.
Rescale the flow around $(x_j,t_j)$ by $Q_j := H(x_j,t_j)$.
By the global convergence theorem 1.12 in \cite{HK}, the sequence of rescaled flows subconverges uniformly on compact sets to an ancient noncollapsed solution.
Points $x_j$ get translated to the origin and points $p_{t_j}^1$ get translated to points $\tilde{p}_{t_j}^1$ under rescaling.
Since by our assumption we have
\[|0 - \tilde{p}_{t_j}^1| = H(x_j,t_j) \, |x_j - p_{t_j}^1| < L\]
then due to uniform convergence of the rescaled flow on bounded sets we have
\[H_j(\tilde{p}_{t_j}^1,0) \le C, \qquad j \ge j_0\]
for a uniform constant $C < \infty$, that depends on $L$, but is independent of $j$.
This implies
\[ H(p_{t_j}^1,t_j) \le C\, H(x_j,t_j), \qquad j \ge j_0\]
which contradicts \eqref{eq-violate}.
To prove the upper bound in \eqref{eq-unif-equiv-curv} note that the lower bound in \eqref{eq-unif-equiv-curv} that we have just proved implies $|x - p_t^1| \le \frac{L}{H(x,t)} \le \frac{L}{c\, H(p_t^1,t)}$.
Hence, we can switch the roles of $x$ and $p_t^1$ in the proof above.
This ends the proof of the Lemma. \end{proof}
\begin{remark}
Note that we can choose uniform $c >0$ and $t_0 \ll -1$ so that the conclusion of Lemma \ref{lem-unif-equiv-curv} holds for both $k = 1$ and $k = 2$. \end{remark}
Let $\epsilon > 0$ be a small number. By our assumption the flow is $\alpha$-noncollapsed and uniformly 2-convex, meaning that \eqref{eqn-2convex} holds. By the cylindrical estimate (\cite{HK}, \cite{HS}) we can find an $\eta = \eta(\epsilon, \alpha,\beta) > 0$ so that if the flow is defined in the normalized parabolic cylinder $\hat \mathcal{P}(x,t,\eta^{-1})$ and if \[ \frac{\lambda_1}{H}(x,t) < \eta \] then the flow $M_t$ is $\epsilon$-close to a shrinking round cylinder $S^{n-1}\times \mathbb{R}$ near $(x,t)$. Being $\epsilon$-close to a shrinking round cylinder near $(x,t)$ means that after parabolic rescaling by $H(x,t)$, shifting $(x,t)$ to $(0,0)$ and a rotation, the solution becomes $\epsilon$-close in the $C^{[\frac{1}{\epsilon}]}$-norm on $\mathcal{P}(0,0,1/\epsilon)$ to the standard shrinking cylinder with $H(0,0) = 1$ (see for more details \cite{HK}).
\begin{proposition}
\label{limit-tip}
Fix a $k \in \{1,2\}$ and let $L > 0$ be any fixed constant.
Let $M_t$ be an Ancient Oval that satisfies the assumptions of Theorem \ref{thm-rot-symm}.
Then for any sequence of times $t_j\to -\infty$, and any sequence of points $x_j\in \Omega_{t_j}^k$ such that $|x_j - p_{t_j}^k| \le \frac{L}{H(x_j,t_j)}$, the sequence of rescaled solutions around $(x_j,t_j)$ by the factors $H(x_j,t_j)$ subconverges to a Bowl soliton. \end{proposition}
In the course of proving this Proposition we need the following observation.
\begin{lemma}
\label{lem-simple-lambda}
For all $t \ll 0$ each of the two components $\Omega_t^j$ of $M_t \setminus B(0, \sqrt{8(n-1)})$ contains at least one point at which $\lambda_{\min}$ is not a simple eigenvalue. \end{lemma} \begin{proof}
Suppose $\lambda_{\min}$ is a simple eigenvalue at each point on $\Omega_t^1$.
Then the corresponding eigenspace defines a one dimensional subbundle of the tangent bundle $TM_t$.
Since $M_t$ is simply connected any one dimensional bundle over $M_t$ is trivial, and thus has a section $v:M_t\to TM_t$ with $v(p)\neq 0$ for all $p$.
Within the region $\bar B(0, \sqrt{8(n-1)})$ the hypersurfaces $M_t$ converge in $C^2$ to a cylinder with radius $\sqrt{2(n-1)}$, so within this region $\lambda_{\min}$ is a simple eigenvalue, and the eigenvector $v(p)$ will be transverse to the boundary $\partial\Omega_t^1$.
We may assume that it points outward relative to $\Omega_t^1$.
The component $\Omega_t^1$ is diffeomorphic to the unit ball $B^n\subset{\mathbb R}^n$, and under this diffeomorphism the vector field $v:\Omega_t^1\to T\Omega_t^1$ is mapped to a nonzero vector field $\tilde v:B^n\to{\mathbb R}^n$, which points outward on the boundary $S^{n-1} = \partial B^n$.
The normalized map $\hat v = \tilde v/|\tilde v| : S^{n-1}\to S^{n-1}$ is therefore homotopic to the unit normal, i.e.~the identity map $\mathrm{id} : S^{n-1}\to S^{n-1}$.
Its degree must then equal $+1$, which is impossible because $\hat v$ can be extended continuously to $\hat v = \tilde v/|\tilde v| : B^n\to S^{n-1}$. \end{proof}
\begin{proof}[Proof of Proposition \ref{limit-tip}]
Without any loss of generality take $k = 1$ and let $\tilde{L} > 0$ be an arbitrary fixed constant.
Let $t_j\to -\infty$ be an arbitrary sequence of times and let $x_j\in \Omega_{t_j}^1$ be an arbitrary sequence of points such that $|x_j - p_{t_j}^1| \le \frac{\tilde{L}}{H(x_j,t_j)}$.
Rescale our solution around $(x_j, t_j)$ by scaling factors $H(x_j,t_j)$.
By Lemma \ref{lemma-possible-limits} we know that the sequence of our rescaled solutions subconverges to either a Bowl soliton or a round shrinking cylinder.
If the limit is a Bowl soliton, we are done.
Hence, assume the limit is a shrinking round cylinder, which is a situation we want to rule out.
By Lemma \ref{lem-unif-equiv-curv} we have that for $j$ large enough, curvatures $H(p_{t_j}^1,t_j)$ and $H(x_j,t_j)$ are uniformly equivalent.
This together with $|x_j - p_{t_j}^1| \le \frac{\tilde{L}}{H(x_j,t_j)}$ implies that if we rescale our solution around points $(p_{t_j}^1,t_j)$ by factors $H(p_{t_j}^1,t_j)$, after taking a limit we also get a shrinking round cylinder.
Since the limit around $(p_{t_j}^1,t_j)$ is a round shrinking cylinder, for every $\epsilon > 0$ there exists a $j_0$
so that for $j \ge j_0$ we have ${\displaystyle \frac{\lambda_{\min}(p_{t_j}^1,t_j) }{H(p_{t_j}^1,t_j)} < \epsilon}$.
In the following two claims, $p_{t_j}^1 \in \Omega_{t_j}^1$ will be a sequence of the tip points as above, such that the limit of the sequence of rescaled solutions around $(p_{t_j}^1,t_j)$ by factors $H(p_{t_j}^1,t_j)$ is a shrinking round cylinder.
In the first claim we show the ratio $\frac{\lambda_{\min}}{H}$ can be made arbitrarily small not only at points $p_{t_j}^1$, but also at all the points that are at bounded distances away from them.
\noindent{\it Claim:}
For every $\epsilon > 0$ and every $C_0 > 0$ there exists a $j_0$ so that for $j \ge j_0$ we have
\begin{equation}
\label{eqn-tj99}
\frac{\lambda_{\min}(p, t_j)}{H(p,t_j)} < \epsilon, \qquad \mbox{whenever} \qquad |p - p_{t_j}^1| \le \frac{C_0}{H(p,t_j)} \,\,\,\, \mbox{and} \,\,\,\, p\in \Omega_{t_j}^1.
\end{equation}
\begin{proof}[Proof of Claim]
Assume the claim is not true, meaning there exist constants $\epsilon > 0$, $C_0 > 0$, a subsequence which we still denote by $t_j$ and points $p_j \in \Omega_{t_j}^1$ so that
\begin{equation}
\label{eq-contra100}
|p_j - p_{t_j}^1| \le \frac{C_0}{H(p_j,t_j)} \qquad \mbox{but} \qquad \frac{\lambda_{\min}(p_j,t_j)}{H(p_j,t_j)} \ge \epsilon.
\end{equation}
Consider the sequence of rescaled flows around $(p_j,t_j)$ by factors $H(p_j,t_j)$.
Lemma \ref{lemma-possible-limits} and the second inequality in \eqref{eq-contra100} imply the above sequence subconverges to a Bowl soliton.
On the other hand, since $|p_j - p_{t_j}^1| \le \frac{C_0}{H(p_j,t_j)}$, by Lemma \ref{lem-unif-equiv-curv}, the curvatures $H(p_j,t_j)$ and $H(p_{t_j}^1,t_j)$ are uniformly equivalent.
This together with our assumption on $(p_{t_j}^1,t_j)$ and the first inequality in \eqref{eq-contra100} yields that the rescaled sequence around $(p_j,t_j)$ by factors $H(p_j,t_j)$ subconverges at the same time to a round shrinking cylinder, and hence we get a contradiction.
This proves the Claim.
\end{proof}
Next we claim that for sufficiently big $j$, even far away from the tip points $p_{t_j}^1$ we see the cylindrical behavior.
Assume $\bar{L}$ is big enough so that the conclusion of Lemma \ref{lemma-cylindrical} holds.
An immediate consequence of Lemma \ref{lemma-cylindrical} is that for every $\epsilon > 0$ there exists a $j_0$ so that for $j \ge j_0$ we have
\begin{equation}\label{eqn-tj100}
\frac{\lambda_{\min}(p,t_j)}{H(p,t_j)} < \epsilon, \qquad \mbox{whenever} \qquad p \in \Omega_{t_j}^1 \,\,\,\, \mbox{and} \,\,\,\, |p - p_{t_j}^1| \ge \frac{\bar{L}}{H(p,t_j)}
\end{equation}
We continue proving Proposition \ref{limit-tip}. \\
Estimates \eqref{eqn-tj99}, after taking $C_0 = \bar{L}$, and \eqref{eqn-tj100} yield that for every $\epsilon > 0$ there exists a $j_0$ so that for $j \ge j_0$,
\begin{equation}
\label{eq-cond-cyl}
\frac{\lambda_{\min}}{H}(p,t_j) < \epsilon, \qquad \mbox{on all of} \qquad \Omega_{t_j}^1.
\end{equation}
By the cylindrical estimate (\cite{HS}, \cite{HK}) we have that for every $\epsilon > 0$ there
exists a $j_0$ so that for $j \ge j_0$ we have
\begin{equation}
\label{eq-cross-sphere}
\frac{|\lambda_l - \lambda_m|}{H}(p,t_j) < \epsilon, \qquad \mbox{for all} \qquad 2 \le l, m \le n,
\end{equation}
on $\Omega_{t_j}^1$.
For small enough $\epsilon>0$ the conditions~\eqref{eq-cond-cyl} and~\eqref{eq-cross-sphere} imply that $\lambda_{\min}$ is a simple eigenvalue at every point of $\Omega_{t_j}^1$, hence contradicting Lemma \ref{lem-simple-lambda}.
This finishes the proof of Proposition \ref{limit-tip}. \end{proof}
\begin{lemma}
\label{lemma-tip-Bowl}
Let $M_t$, $t \in (-\infty,0)$, be an Ancient Oval satisfying the assumptions of Theorem \ref{thm-rot-symm} and fix $k=1,2$.
Then for every $\epsilon > 0$ there exist uniform constants $\rho_0 < \infty$ and $t_0 \ll -1$, so that for every $t \le t_0$ we have that $\hat{\mathcal{P}}(p_t^k,t,\rho_0)$ is $\epsilon$-close to a piece of a Bowl soliton that includes the tip. \end{lemma}
\begin{proof}
First of all observe that by Proposition \ref{limit-tip} it is easy to argue that for every $\epsilon > 0$ and any $\rho_0 < \infty$ there exists a $t_0 \ll -1$ so that for $t \le t_0$, the parabolic cylinder $\hat{\mathcal{P}}(p_t^k,t,\rho_0)$ is $\epsilon$-close to a piece of a Bowl soliton.
The point of this lemma is to show that we can find $\rho_0$ big enough, but uniform in $t \le t_0 \ll -1$ so that the piece of the Bowl soliton above includes the tip.
To prove the statement we argue by contradiction.
Assume the statement is not true, meaning there exist an $\epsilon > 0$, a sequence $\rho_j\to \infty$ and a sequence $t_j\to -\infty$ so that $\hat{\mathcal{P}}(p_{t_j}^k,t_j,\rho_j)$ is $\epsilon$-close to a piece of Bowl soliton that does not include the tip.
Rescale the solution around $(p_{t_j}^k,t_j)$ by factors $H(p_{t_j}^k,t_j)$.
By Proposition \ref{limit-tip} we know that the rescaled solution subconverges to a piece of a Bowl soliton.
Hence there exists a uniform constant $C_0$ so that the origin, which lies on the limiting Bowl soliton and corresponds, after scaling, to the points $(p_{t_j}^k,t_j)$, is at distance at most $C_0$ from the tip of the soliton (which is the point of maximum curvature).
This implies there exist points $q_{t_j} \in \Omega_{t_j}^k$ so that $|q_{t_j} - p_{t_j}^k| \le \frac{2C_0}{H(p_{t_j}^k,t_j)}$ for $j \ge j_0$, with the property that the points $q_{t_j}$ converge to the tip of the Bowl soliton.
Furthermore, for sufficiently big $j \ge j_0$, parabolic cylinders $\hat{\mathcal{P}}(p_{t_j}^k,t_j,3C_0)$ are $\epsilon$-close to a piece of the Bowl soliton that includes the tip.
That contradicts our assumption that for every $j$, $\hat{\mathcal{P}}(p_{t_j}^k,t_j,\rho_j)$ is $\epsilon$-close to a piece of Bowl soliton that does not include its tip. \end{proof}
Finally we show the following proposition, which is crucial for our purposes; it says that every point on $M_t$ has a parabolic neighborhood of uniform size, around which it is either close to a Bowl soliton or to a round shrinking cylinder. \begin{prop}
\label{lem-either-or}
Let $M_t$ be an Ancient Oval that is uniformly 2-convex.
Let $\epsilon_0, \epsilon_1$, $L_0, L_1$ be the constants from Propositions \ref{prop-neck} and \ref{prop-cap} and let $\epsilon \le \min\{\epsilon_0,\epsilon_1\}$.
Then, there exists $t_0 \ll -1$, depending on these constants, with the following property: for every $(\bar x, \bar t)$ with $\bar x \in M_{\bar t}$ and $\bar{t} \le t_0$, either $\hat{\mathcal{P}} (\bar x, \bar t, L_0)$ lies
on an $(\epsilon,10)$-neck or every point in $\hat{\mathcal{P}} (\bar x, \bar t, L_1)$ is, after scaling by
$H(\bar x,\bar t)$, $\epsilon$-close in the $C^{20}$-norm to a piece of a Bowl soliton which includes the tip. \end{prop}
\begin{proof}
Recall that as a consequence of Hamilton's Harnack estimate our ancient solution satisfies $H_t \geq 0$.
This implies there exists a uniform constant $C_0$ so that
\begin{equation}
\label{eq-H}
\max_{M_t} H(\cdot, t) \le C_0, \qquad t \le t_0.
\end{equation}
Let $\bar{\epsilon} \ll \min\{\epsilon_0, \epsilon_1, L_0^{-1}\}$.
For this $\bar{\epsilon} > 0$ find a $\delta = \delta(\bar{\epsilon})$ as in Theorem 1.19 in \cite{HK} (see also \cite{HS} for a similar estimate) so that if
\begin{equation}
\label{eq-delta}
\frac{\lambda_{\min}}{H}(p,t) < \delta
\end{equation}
and the flow is defined in $\hat \mathcal{P}(p,t,\delta^{-1})$, then the solution $M_t$ is $\bar{\epsilon}$-close to a round cylinder around $(p,t)$, in the sense that a rescaled flow by $H(p,t)$ around $(p,t)$ is $\bar{\epsilon}$-close on $\mathcal{P}(0,0,{\bar{\epsilon}}^{-1})$ to a round cylinder with $H(0,0) = 1$.
Take $\delta > 0$ as in \eqref{eq-delta}.
For this $\delta$ choose $\bar{L}$ sufficiently big and $t_0 \ll -1$ so that Lemma \ref{lemma-cylindrical} holds (after we take $\eta$ in the Lemma to be equal to $\delta$).
Let $(\bar{x},\bar{t})$ be such that $\bar{x} \in M_{\bar{t}}$ and $\bar{t} < t_0$.
Then either $\bar{x}\in M_{\bar{t}}\cap B_{8(n-1)\sqrt{|\bar t|}}$, or $\bar{x} \in \Omega_{\bar{t}}^1$, or $\bar{x} \in \Omega_{\bar{t}}^2$.
In the first case, which has already been discussed above, we know that for $-\bar{t}$ sufficiently large $M_{\bar{t}}\cap B_{16(n-1)\sqrt{|\bar{t}|}}$ is neck-like, and hence there exists $t_0 \ll -1$ so that for $\bar t \le t_0$
\[\max_{M_{\bar{t}}\cap B_{16(n-1)\sqrt{|\bar{t}|}}} \frac{\lambda_{\min}}{H} < \delta\]
where $\delta$ is as in \eqref{eq-delta}.
Thus every point $\bar{x}\in M_{\bar{t}}\cap B_{8(n-1)\sqrt{|\bar t|}}$ has the property that every point in $\hat{\mathcal{P}}(\bar{x},\bar{t},L_0)$ lies at the center of an $(\epsilon,10)$-neck.
We may assume from now on, with no loss of generality, that $\bar{x}\in \Omega_{\bar{t}}^1$, since the discussion for $\bar{x}\in \Omega_{\bar{t}}^2$ is equivalent.
We either have $|\bar{x} - p_{\bar{t}}^1| \ge \frac{\bar{L}}{H(\bar{x},\bar{t})}$, or $|\bar{x} - p_{\bar{t}}^1| \le \frac{\bar{L}}{H(\bar{x},\bar{t})}$.
In the first case, Lemma \ref{lemma-cylindrical} gives that ${\displaystyle \frac{\lambda_{\min}}{H}(\bar{x},\bar{t}) < \delta}.$
As discussed above, the cylindrical estimate then implies that the rescaled flow $H(\bar{x},\bar{t}) (F_{\bar{t} + H(\bar{x},\bar{t})^{-2} t} - \bar{x})$ is $\bar{\epsilon}$-close to the round cylinder with $H(0,0) = 1$, in a parabolic cylinder $\mathcal{P}(0,0,\bar{\epsilon}^{-1})$.
It is straightforward then to conclude that every point in the normalized cylinder $\hat{\mathcal{P}}(\bar{x},\bar{t},L_0)$ lies on an $(\epsilon,10)$-neck, where we use that $L_0 \ll {\bar{\epsilon}}^{-1}$ and $\bar{\epsilon} \ll \epsilon$.
Assume now that $\bar x \in \Omega_{\bar t}^1$ and $|\bar{x} - p_{\bar{t}}^1| \le \frac{\bar{L}}{H(\bar{x},\bar{t})}$.
Combining this with Lemma \ref{lem-unif-equiv-curv} and Lemma \ref{lemma-tip-Bowl} yields that we can find a sufficiently large but uniform constant $L_1$ and a constant $t_0 \ll -1$, so that for $\bar{t} \le t_0$ we have that $\hat{\mathcal{P}}(\bar{x},\bar{t},L_1)$ is $\epsilon_1$-close to a piece of a Bowl soliton that also includes its tip. \end{proof}
We can now conclude the proof of Theorem \ref{thm-rot-symm}.
\begin{proof}[Proof of Theorem \ref{thm-rot-symm}]
Let $L_0, L_1, \epsilon_0, \epsilon_1$ be chosen so that Propositions \ref{prop-neck} and \ref{prop-cap} hold.
Let $\bar{\epsilon} \ll \epsilon := \min\{\epsilon_0, \epsilon_1\}$.
Let $t_0 \ll -1$ be as in Proposition \ref{lem-either-or} so that for every $(\bar x, \bar t)$ with $\bar x \in M_{\bar t}$ and $\bar{t} \le t_0$, either $\hat P (\bar x, \bar t, L_0)$ lies
on an $(\bar{\epsilon},10)$-neck (and hence on an $(\epsilon_0,10)$-neck, since $\bar{\epsilon} \le \epsilon_0$), or every point in $\hat P (\bar x, \bar t, L_1)$ is, after scaling, $\bar{\epsilon}$-close in the $C^{20}$-norm to a piece of the Bowl soliton
which includes the tip (and hence is also $\epsilon_1$-close, since $\bar{\epsilon} \le \epsilon_1$).
Note that the axis of symmetry of this Bowl soliton may depend on the point $(\bar x, \bar t)$.
The above implies that every point $(\bar{x},\bar{t})$, for $\bar{x} \in M_{\bar{t}}$ and $\bar{t}\le t_0$, lies in a parabolic neighborhood of uniform size that is, after scaling, $\bar{\epsilon}$-close to a rotationally symmetric surface (either a round cylinder or a Bowl soliton).
Hence, it follows that if we choose $\bar{\epsilon}$ sufficiently small relative to $\epsilon$, then $(\bar{x},\bar{t})$ is $\epsilon$-symmetric (defined as in Definition \ref{def-normalized}).
After applying Propositions \ref{prop-neck} and \ref{prop-cap} we then conclude that $(\bar{x},\bar{t})$ is $\frac{\epsilon}{2}$-symmetric, for all $\bar{x}\in M_{\bar{t}}$ and all $\bar{t} \le t_0$.
Iterative application of Propositions \ref{prop-neck} and \ref{prop-cap} yields that $(\bar{x},\bar{t})$ is $\frac{\epsilon}{2^j}$-symmetric, for all $\bar{x}\in M_{\bar{t}}$, $\bar{t} \le t_0$ and all $j\geq 1$.
Letting $j \to +\infty$ we finally conclude that $M_t$ is rotationally symmetric for all $t \le t_0$ which also implies that $M_t$ is rotationally symmetric for all $t \in (-\infty,0)$. \end{proof}
\section{Outline of the proof of Theorem \ref{thm-main}} \label{sec-regions}
Since the proof of Theorem \ref{thm-main} is quite involved, in this preliminary section we will give an outline of the main steps in the proof of the classification result in the presence of rotational symmetry. Our method is based on a priori estimates for various distance functions between two given ancient solutions in appropriate coordinates and measured in weighted $L^2$ norms. We need to consider two different regions: the {\em cylindrical} region and the {\em tip} region. Note that the tip region will be divided in two sub-regions: the {\em collar} and the {\em soliton} region. These are pictured in Figure~\ref{fig-three-regions} below. In what follows, we will define these regions, review the equations in each region and define appropriate weighted $L^2$ norms with respect to which we will prove coercive type estimates in the subsequent sections. At the end of the section we will give an outline of the proof of Theorem \ref{thm-main}.
Let $M_{1}(t), M_2(t)$ be two rotationally symmetric ancient oval solutions satisfying the assumptions of Theorem \ref{thm-main}. Being surfaces of rotation, they are each determined by a function $U = U_i(x, t)$, ($i=1,2$), which satisfies the equation \begin{equation}
\label{eq-MCF}
U_t = \frac{U_{xx}} {1+U_x^2} - \frac{n-1} {U}. \end{equation}
In the statement of Theorem \ref{thm-main} we claim that any two Ancient Ovals coincide, up to dilations and translations, for sufficiently negative times. In fact, since equation \eqref{eq-MCF} is invariant under translation in time, translation in space and also under cylindrical dilations in space-time, each solution $M_i(t)$ gives rise to a three parameter family of solutions \begin{equation}
\label{eq-two-parameters}
M_i^{\alpha \beta \gamma}(t) = e^{\gamma/2} \, \varPhi_{\alpha}( M_i(e^{-\gamma}(t - \beta))), \end{equation} where $\varPhi_{\alpha}$ is a rigid motion, namely the translation of the hypersurface along the $x$-axis by $\alpha$. The theorem claims the following: \emph{ given two ancient oval solutions we can find $\alpha, \beta, \gamma$ and $t_0 \in {\mathbb R}$ such that \[ M_1(t) = M_2^{\alpha\beta\gamma}(t), \qquad \mbox{for} \,\, t \leq t_0. \]} The profile function $U_i^{\alpha\beta\gamma}$ corresponding to the modified solution $M_i^{\alpha\beta\gamma}(t)$ is given by \begin{equation}
\label{eq-Ualphabeta}
U_i^{\alpha\beta\gamma} (x, t) = e^{\gamma/2} U_i\Bigl (e^{-\gamma/2}(x-\alpha), e^{-\gamma} (t - \beta) \Bigr). \end{equation} We rescale the solutions $M_i(t)$ by a factor $\sqrt{-t}$ and introduce a new time variable $\tau = -\log(-t)$, that is, we set \begin{equation}
\label{eq-type-1-blow-up}
M_i(t) = \sqrt{-t} \, \bar M_i(\tau), \qquad \tau:= - \log (-t). \end{equation} These are again $O(n)$ symmetric with profile function $u$, which is related to $U$ by \begin{equation}
\label{eq-cv1}
U(x,t) = \sqrt{-t}\, u(y, \tau), \qquad y=\frac x{ \sqrt{-t}}, \quad \tau=-\log (-t). \end{equation} If the $U_i$ satisfy the MCF equation \eqref{eq-MCF}, then the rescaled profiles $u_i$ satisfy~\eqref{eq-u}, i.e. \[ \frac{\partial u}{\partial \tau} = \frac{u_{yy}}{1+u_y^2} - \frac y2 \, u_y - \frac{n-1}{u}+ \frac u2. \] Translating and dilating the original solution $M_i(t)$ to $M_i^{\alpha\beta\gamma}(t)$ has the following effect on $u_i(y,\tau)$: \begin{equation}
\label{eq-ualphabeta}
u_i^{\alpha\beta\gamma}(y, \tau) = \sqrt{1+\beta e^{\tau}} u_i\Bigl( \frac{y-\alpha e^{\frac{\tau}{2}}} {\sqrt{1+\beta e^\tau} }, \tau+\gamma-\log\bigl(1+\beta e^\tau\bigr) \Bigr). \end{equation}
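For the reader's convenience, we record how \eqref{eq-ualphabeta} follows from \eqref{eq-Ualphabeta} and the change of variables \eqref{eq-cv1}. Writing $x' := e^{-\gamma/2}(x-\alpha)$ and $t' := e^{-\gamma}(t-\beta)$, and assuming $\tau$ is negative enough that $1+\beta e^{\tau}>0$, we have
\[
-t' = e^{-\gamma}(-t)\bigl(1+\beta e^{\tau}\bigr), \qquad
\frac{x'}{\sqrt{-t'}} = \frac{y-\alpha e^{\tau/2}}{\sqrt{1+\beta e^{\tau}}}, \qquad
-\log(-t') = \tau + \gamma - \log\bigl(1+\beta e^{\tau}\bigr),
\]
and therefore
\[
u_i^{\alpha\beta\gamma}(y,\tau)
= \frac{U_i^{\alpha\beta\gamma}(x,t)}{\sqrt{-t}}
= \frac{e^{\gamma/2}\sqrt{-t'}}{\sqrt{-t}}\,
u_i\Bigl(\frac{x'}{\sqrt{-t'}},\, -\log(-t')\Bigr)
= \sqrt{1+\beta e^{\tau}}\,
u_i\Bigl(\frac{y-\alpha e^{\tau/2}}{\sqrt{1+\beta e^{\tau}}},\,
\tau+\gamma-\log\bigl(1+\beta e^{\tau}\bigr)\Bigr),
\]
which is \eqref{eq-ualphabeta}.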
To prove the uniqueness theorem we will look at the difference $U_1 - U_2^{\alpha\beta\gamma}$, or equivalently at $u_1 - u_2^{\alpha\beta\gamma}$. The parameters $\alpha, \beta, \gamma$ will be chosen so that the projections of $u_1 - u_2^{\alpha\beta\gamma}$ onto the positive eigenspace (which is spanned by two independent eigenfunctions) and onto the zero eigenspace of the linearized operator $\mathcal{L}$ at the cylinder are equal to zero at time $\tau_0$, which will be chosen sufficiently close to $-\infty$.
Correspondingly, we denote the difference $U_1 - U_2^{\alpha\beta\gamma}$ by $U_1-U_2$ and $u_1 - u_2^{\alpha\beta\gamma}$ by $u_1-u_2$. What we will actually observe is that the parameters $\alpha, \beta$ and $\gamma$ can be chosen to lie in a certain range, which allows our main estimates to hold without having to keep track of these parameters during the proof. In fact, we will show in Section \ref{sec-conclusion} that for a given small $\epsilon >0$ there exists $\tau_0 \ll -1 $ sufficiently negative for which we have \begin{equation}
\label{eq-asymp-par1}
\alpha \le \epsilon \frac{e^{-\tau_0/2}}{|\tau_0|}, \qquad \beta \leq \epsilon \, \frac{e^{-\tau_0}}{|\tau_0|}, \qquad \gamma \leq \epsilon \, |\tau_0| \end{equation} and our estimates hold for $(u_1 - u_2^{\alpha\beta\gamma})(\cdot,\tau)$, $\tau \leq \tau_0$. This inspires the following definition. \begin{definition}[Admissible triple of parameters $(\alpha, \beta,\gamma)$]
\label{def-admissible}
We say that the triple of parameters $(\alpha, \beta,\gamma)$ is admissible with respect to time $\tau_0$ if they satisfy \eqref{eq-asymp-par1}. \end{definition}
\begin{figure}
\caption{\textbf{The three regions. } The \emph{cylindrical region} consists of all points with $u_1(y, \tau)\geq \frac12\theta$; the \emph{tip region} contains all points with $u_1(y, \tau)\leq 2\theta$ and is subdivided into the \emph{collar}, in which $u_1\geq 2L/\sqrt{|\tau|}$, and the \emph{soliton region}, where $u_1\leq 2L/\sqrt{|\tau|}$. }
\label{fig-three-regions}
\end{figure}
We will next define different regions and outline how we treat each region.
\subsection{The cylindrical region}\label{subsec-cylindrical} For a given $\tau \leq \tau_0$ and constant $\theta$ positive and small, the cylindrical region is defined by \[ \mathcal{C}_{\theta} = \bigl\{(y, \tau) : u_1(y,\tau) \ge \frac{\theta}{2} \bigr\} \] (see Figure~\ref{fig-three-regions}). We will consider in this region a \emph{cut-off function $\varphi_\mathcal{C}(y,\tau)$} with the following properties: \[ (i) \,\, \mathop{\mathrm {supp}} \varphi_\mathcal{C} \Subset \mathcal{C}_{\theta} \qquad (ii)\,\, 0 \leq \varphi_\mathcal{C} \leq 1 \qquad (iii) \,\, \varphi_\mathcal{C} \equiv 1 \mbox{ on } \mathcal{C}_{2\theta}. \] The solutions $u_i$, $i=1,2$, satisfy equation \eqref{eq-u}. Setting \[ w:=u_1-u_2^{\alpha\beta\gamma} \qquad \text{ and } \qquad w_\mathcal{C}:= w\, \varphi_\mathcal{C} \] we see that $w_\mathcal{C}$ satisfies the equation \begin{equation}\label{eqn-w100}
\frac{\partial}{\partial\tau} w_\mathcal{C} = \mathcal{L}[w_\mathcal{C}] + \mathcal{E}[w,\varphi_\mathcal{C}] \end{equation} where the operator $\mathcal{L}$ is given by \[ \mathcal{L} = \partial_y^2 - \frac y2 \partial_y + 1 \] and where the error term $\mathcal{E}$ is described in detail in Section~\ref{sec-cylindrical}. We will see that \[ \mathcal{E}[w,\varphi_\mathcal{C}] = \mathcal{E}(w_\mathcal{C}) + \bar \mathcal{E}[w,\varphi_\mathcal{C}] \] where $\mathcal{E}(w_\mathcal{C}) $ is the error introduced due to the nonlinearity of our equation and is given by \eqref{eq-E10} and $\bar \mathcal{E}[w,\varphi_\mathcal{C}]$ is the error introduced due to the cut off function $\varphi_\mathcal{C}$ and is given by \eqref{eq-bar-E} (to simplify the notation we have set $u_2:=u_2^{\alpha\beta\gamma}$).
The differential operator $\mathcal{L}$ is a well studied self-adjoint operator on the Hilbert space $\mathfrak{H} := L^2({\mathbb R}, e^{-y^2/4}dy)$ with respect to the norm and inner product \begin{equation}\label{eqn-normp0}
\|f\|_{\mathfrak{H}}^2 = \int_{\mathbb R} f(y)^2 e^{-y^2/4}\, dy, \qquad \langle f, g \rangle = \int_{\mathbb R} f(y)g(y) e^{-y^2/4}\, dy. \end{equation}
We split $\mathfrak{H}$ into the unstable, neutral, and stable subspaces $\mathfrak{H}_+$, $\mathfrak{H}_0$, and $\mathfrak{H}_-$, respectively. The unstable subspace $\mathfrak{H}_+$ is spanned by all eigenfunctions with positive eigenvalues (in this case $\mathfrak{H}_+$ is spanned by a constant function equal to $\psi_0 = 1$, that corresponds to eigenvalue 1 and by a linear function $\psi_1 = y$, that corresponds to eigenvalue $\frac 12$, that is, $\mathfrak{H}_+$ is two dimensional). The neutral subspace $\mathfrak{H}_0$ is the kernel of $\mathcal{L}$, and is the one dimensional space spanned by $\psi_2 = y^2-2$. The stable subspace $\mathfrak{H}_-$ is spanned by all other eigenfunctions. Let $\mathcal{P}_\pm$ and $\mathcal{P}_0$ be the orthogonal projections on $\mathfrak{H}_\pm$ and $\mathfrak{H}_0$.
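As a quick check of the stated spectral facts, using only the formula $\mathcal{L} = \partial_y^2 - \frac y2\,\partial_y + 1$, we compute
\[
\mathcal{L}[1] = 1, \qquad
\mathcal{L}[y] = -\tfrac y2 + y = \tfrac12\, y, \qquad
\mathcal{L}[y^2-2] = 2 - \tfrac y2\cdot 2y + (y^2-2) = 0,
\]
so that $\psi_0 = 1$, $\psi_1 = y$ and $\psi_2 = y^2-2$ are indeed eigenfunctions of $\mathcal{L}$ with eigenvalues $1$, $\frac12$ and $0$, respectively; the remaining eigenvalues of $\mathcal{L}$ on $\mathfrak{H}$ are negative.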
For any function $f : {\mathbb R} \times (-\infty,\tau_0] \to {\mathbb R}$, we define the cylindrical norm \[
\|f\|_{\mathfrak{H},\infty}(\tau) = \sup_{\sigma \le \tau} \Bigl(\int_{\sigma - 1}^{\sigma} \|f(\cdot,s)\|_{\mathfrak{H}}^2\, ds\Bigr)^{\frac12}, \qquad \tau \leq \tau_0 \] and we will often simply set \begin{equation}
\label{eqn-normp}
\|f\|_{\mathfrak{H},\infty}:= \|f\|_{\mathfrak{H},\infty}(\tau_0). \end{equation}
In the course of proving necessary estimates in the cylindrical region we define yet another Hilbert space $\mathfrak{D}$ by \[ \mathfrak{D} = \{f\in\mathfrak{H} : f, f_y \in \mathfrak{H}\}, \] equipped with a norm \[
\|f\|_\mathfrak{D}^2 = \int_{{\mathbb R}} \{f(y)^2 + f'(y)^2\} e^{-y^2/4}dy. \] We will write \[ ( f, g )_\mathfrak{D} = \int_{\mathbb R} \{f'(y)g'(y) + f(y)g(y)\} e^{-y^2/4} dy, \] for the inner product in $ \mathfrak{D} $.
Since we have a dense inclusion $ \mathfrak{D} \subset \mathfrak{H} $ we also get a dense inclusion $ \mathfrak{H} \subset \mathfrak{D}^* $ where every $ f\in\mathfrak{H} $ is interpreted as a functional on $ \mathfrak{D} $ via \[ g\in \mathfrak{D} \mapsto \langle f, g \rangle. \] Because of this we will also denote the duality between $ \mathfrak{D} $ and $ \mathfrak{D}^* $ by \[ (f, g) \in \mathfrak{D}\times \mathfrak{D}^* \mapsto \langle f, g \rangle . \] Similarly as above define the cylindrical norm \[
\|f\|_{\mathfrak{D},\infty}(\tau) = \sup_{\sigma\le\tau}\Bigl(\int_{\sigma-1}^{\sigma} \|f(\cdot,s)\|_{\mathfrak{D}}^2\, ds\Bigr)^{\frac12}, \]
and analogously we define the cylindrical norm $\|f\|_{\mathfrak{D}^*,\infty}(\tau)$ and set for simplicity $\| f\|_{\mathfrak{D}^*,\infty}:=\|f\|_{\mathfrak{D}^*,\infty}(\tau_0)$.
In Section \ref{sec-cylindrical} we will show a coercive estimate for $w_\mathcal{C}$ in terms of the error $E[w, \varphi_\mathcal{C}]$. However, as expected, this can only be achieved by removing the projection $\mathcal{P}_0 w_\mathcal{C}$ onto the kernel of $\mathcal{L}$, generated by $\psi_2$. More precisely, setting \[ \hat w_\mathcal{C} := \mathcal{P}_+ w_\mathcal{C} + \mathcal{P}_- w_\mathcal{C} = w_\mathcal{C} - \mathcal{P}_0 w_\mathcal{C} \] we will prove that for any $\epsilon >0$ there exist $\theta > 0$ and $\tau_0 \ll 0$ such that the following bound holds \begin{equation}\label{eqn-cylindrical100}
\| \hat w_\mathcal{C} \|_{\mathfrak{D},\infty} \leq C \, \|E[w,\varphi_\mathcal{C}]\|_{\mathfrak{D}^*,\infty} \end{equation} provided that $\mathcal{P}_+ w_\mathcal{C}(\tau_0) =0$. In fact, we will show in Proposition \ref{lem-rescaling-components-zero} that the parameters $\alpha, \beta$ and $\gamma$ can be adjusted so that for $w^{\alpha\beta\gamma}:= u_1 - u_2^{\alpha\beta\gamma}$, we have \begin{equation}\label{eqn-projections}
\mathcal{P}_+ w_\mathcal{C}(\tau_0) =0 \qquad \mbox{and} \qquad \mathcal{P}_0 w_\mathcal{C}(\tau_0) =0. \end{equation} Thus \eqref{eqn-cylindrical100} will hold for such a choice of $\alpha, \beta, \gamma$ and $\tau_0 \ll 0$. The condition $\mathcal{P}_0 w_\mathcal{C}(\tau_0) =0$ is essential and will be used in Section \ref{sec-conclusion} to give us that $w^{\alpha\beta\gamma} \equiv 0$. In addition, we will show in Proposition \ref{lem-rescaling-components-zero} that $\alpha$, $\beta$ and $\gamma$ can be chosen to be admissible according to our Definition \ref{def-admissible}.
The norm of the error term $\|E[w,\varphi_\mathcal{C}]\|_{\mathfrak{D}^*,\infty}$ on the right hand side of \eqref{eqn-cylindrical100} will be estimated in Section \ref{sec-cylindrical}, Lemmas \ref{lem-error1-est} and \ref{lem-error-bar}. We will show that given $\epsilon >0$ small, there exists a $\tau_0 \ll -1$ such that \begin{equation}\label{eqn-error-111}
\|E[w,\varphi_\mathcal{C}]\|_{\mathfrak{D}^*,\infty} \leq \epsilon \, \big ( \| w_\mathcal{C} \|_{\mathfrak{D},\infty} + \| w\, \chi_{D_\theta}\|_{\mathfrak{D}, \infty} \big ). \end{equation} where $D_\theta:= \{ (y,\tau): \,\, \theta/2 \leq u_1(y,\tau) \leq \theta \}$ denotes the support of the derivative of $\varphi_{\mathcal{C}}$. Combining \eqref{eqn-cylindrical100} and \eqref{eqn-error-111} yields the bound \begin{equation}\label{eqn-cylindrical1}
\| \hat w_\mathcal{C} \|_{\mathfrak{D},\infty} \le \epsilon \, (\|w_\mathcal{C}\|_{\mathfrak{D},\infty} + \| w\, \chi_{D_\theta} \|_{\mathfrak{H},\infty}), \end{equation} holding for all $\epsilon >0$ and $\tau_0 := \tau_0(\epsilon) \ll -1$.
To close the argument we need to estimate $\| w \, \chi_{D_\theta}\|_{\mathfrak{H},\infty}$ in terms of $\|w_\mathcal{C} \|_{\mathfrak{D},\infty}$. This will be done by considering the tip region and establishing an appropriate \emph{a priori} bound for the difference of our two solutions there.
\subsection{The tip region}\label{subsec-tip} The tip region is defined by \[ \mathcal{T}_{\theta} = \{(u, \tau):\,\, u_1 \le 2\theta, \, \tau \leq \tau_0\} \] (see Figure~\ref{fig-three-regions}). In the tip region we switch the variables $y$ and $u$ in our two solutions, with $u$ becoming now an independent variable. Hence, our solutions $u_1(y,\tau) $ and $u_2^{\alpha\beta\gamma}(y,\tau)$ become $Y_1(u,\tau)$ and $Y_2^{\alpha\beta\gamma}(u,\tau)$. In this region we consider a {\em cut-off function $\varphi_T(u)$} with the following properties: \begin{equation}\label{eqn-cutofftip}
(i) \,\, \mathop{\mathrm {supp}} \varphi_T \Subset \mathcal{T}_{\theta} \qquad (ii) \,\, 0 \leq \varphi_T \leq 1 \qquad (iii) \,\, \varphi_T \equiv 1, \,\, \mbox{on} \,\, \mathcal{T}_{\theta/2}. \end{equation}
Both functions $Y_1(u, \tau), Y_2^{\alpha\beta\gamma}$ satisfy the equation \begin{equation}
\label{eqn-Y}
Y_\tau =\frac{Y_{uu}}{1+Y_u^2} + \frac{n-1} {u} Y_u + \frac{1} {2}\bigl(Y-uY_u \bigr ). \end{equation} It follows from \eqref{eqn-Y} that the difference $W:= Y_1 - Y_2^{\alpha\beta\gamma}$ satisfies \begin{equation}
\label{eqn-WW}
W_\tau = \frac{W_{uu}} {1+ Y_{1u}^2} +\Bigl(\frac{n-1} {u} - \frac{u} {2} + D \Bigr) \, W_u + \frac{1} {2} W, \end{equation} where \[ D := -\frac{(Y_2^{\alpha\beta\gamma})_{uu}\, (Y_{1u} + Y_{2u}^{\alpha\beta\gamma})}{\bigl(1 + (Y_{1u})^2\bigr)\, \bigl(1 + (Y_{2u}^{\alpha\beta\gamma})^2\bigr)}. \]
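For the reader's convenience we indicate where the coefficient $D$ comes from: abbreviating $Y_2 := Y_2^{\alpha\beta\gamma}$ and subtracting the two copies of \eqref{eqn-Y}, the only nonlinear term contributes
\[
\frac{Y_{1uu}}{1+Y_{1u}^2}-\frac{Y_{2uu}}{1+Y_{2u}^2}
= \frac{W_{uu}}{1+Y_{1u}^2}
+ Y_{2uu}\,\frac{Y_{2u}^2-Y_{1u}^2}{\bigl(1+Y_{1u}^2\bigr)\bigl(1+Y_{2u}^2\bigr)}
= \frac{W_{uu}}{1+Y_{1u}^2} + D\, W_u,
\]
since $Y_{2u}^2-Y_{1u}^2 = -(Y_{1u}+Y_{2u})\,W_u$, while the remaining terms of \eqref{eqn-Y} are linear in $Y$ and give the coefficients $\frac{n-1}{u}-\frac u2$ and $\frac12$ in \eqref{eqn-WW}.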
Our next goal is to define an appropriate weighted $L^2$ norm \[
\| W(\tau) \| := \Bigl ( \int_0^\theta W^2(u,\tau) \, e^{\mu(u,\tau)} \, du \Bigr )^{1/2} \] in the tip region $\mathcal{T}_\theta$, by defining the weight $\mu(u,\tau)$. To this end we need to further distinguish between two regions in $\mathcal{T}_{\theta}$: for $L >0$ sufficiently large to be determined in Section \ref{sec-tip}, we define the {\it collar} region to be the set \[
\mathcal{K}_{\theta,L} := \Bigl\{y \mid \frac{L}{\sqrt{|\tau|}} \le u_1 (y,\tau) \le 2\theta\Bigr\} \] and the {\it soliton} region to be the set \[
\mathcal{S}_L := \Bigl\{y \mid 0 \le u_1(y,\tau) \le \frac{L}{\sqrt{|\tau|}}\Bigr\} \] (see Figure~\ref{fig-three-regions}). It will turn out later that one can regard the term $D$ in \eqref{eqn-WW} as an error term in $\mathcal{K}_{\theta,L}$, since in this region $D$ can be made arbitrarily small by taking $\tau_0 \ll -1$ and choosing $\theta$ and $L$ appropriately. By contrast, $D$ is not necessarily small in the entire {\it soliton} region $\mathcal{S}_L$, and hence its leading approximation needs to be taken as a part of the linear operator.
The soliton region is the set where our asymptotic result in Theorem \ref{thm-old} implies that the solutions $Y_1$ and $Y_2$ are very close to the Bowl soliton (after re-scaling). The collar is the transition region between the cylindrical region and the soliton region. Having this in mind we define the weight $\mu(u,\tau)$ on the collar region $\mathcal{K}_{\theta,L}$ to be \[ \mu(u,\tau) = - \frac{1}{4} Y_1^2(u,\tau), \qquad u \in\mathcal{K}_{\theta,L} \] which is in correspondence (after our coordinate switch) with the Gaussian weight $e^{-y^2/4}$ which we use in the cylindrical region.
In the soliton region we will define our weight $\mu(u,\tau)$ using the Bowl soliton. In fact, we center the solution $Y_1$ at the tip and zoom in to a length scale ${1}/{\sqrt{|\tau|}}$ by setting \begin{equation}
\label{eqn-Y-expansion}
Y_1(u,\tau) = Y_1(0, \tau) + \frac{1}{\sqrt{|\tau|}}\,Z_1 \Bigl(\rho, \tau\Bigr), \qquad \rho:= u\, \sqrt{|\tau|}. \end{equation}
By Corollary \ref{cor-old}, $Z_1(\rho,\tau)$ converges, as $\tau\to -\infty$, to the unique rotationally symmetric, translating Bowl solution $Z_0(\rho)$ with speed $\sqrt2/2$, which satisfies \begin{equation}
\frac{Z_{0\rho\rho}} {1+Z_{0\rho}^2} + \frac{n-1} {\rho} Z_{0\rho} + \frac12 \sqrt{2} = 0,\qquad Z_0 (0) = Z_0 '(0) = 0.
\label{eq-Z-bar} \end{equation}
Since $n>1$ these equations determine $Z_0$ uniquely. For large and small $\rho$ the function $Z_0(\rho)$ satisfies \begin{equation}
\label{eq-Z0-asymptotics}
Z_0(\rho) =
\begin{cases}
-\frac{\sqrt{2}\,\rho^2}{4(n-1)} + \mathcal{O}(\log \rho) & \rho\to\infty \\[2pt]
-\frac{\sqrt{2}\,\rho^2}{4n} +\mathcal{O}(\rho^4) & \rho\to0.
\end{cases} \end{equation} These expansions may be differentiated.
In order to motivate the choice for the weight in the soliton region, we formally approximate $Y_{1, u}$ in equation \eqref{eqn-WW} by ${Z_0}_\rho$ using \eqref{eqn-Y-expansion} and the convergence to the soliton. Using also the change of variables \[
{\bar W}(\rho, \tau) = \frac 1{\sqrt{|\tau|}} \, W(u, \tau), \qquad \rho:= u\, \sqrt{|\tau|}, \] it follows from \eqref{eqn-WW} that ${\bar W}$ satisfies \[ \Bigl( \frac{{\bar W}_\rho} {1+Z_{0\rho}^2} \Bigr)_\rho + \Bigl(\frac{n-1} {\rho} \Bigr){\bar W}_\rho = E[{\bar W}], \] where $E[{\bar W}]$ is the error term.
This prompts us to introduce the linear differential operator \[ \mathcal{M} = \frac{d} {d\rho} \Bigl ( \frac{1} {1+Z_{0\rho}^2} \frac{d} {d\rho} \Bigr ) + \frac{n-1} {\rho}\frac{d} {d\rho}, \] which we can write as \[ \mathcal{M}[\phi] = e^{-m} \frac{d} {d\rho}\Bigl\{ \frac{e^{m}} {1+Z_{0\rho}^2} \, \frac{d\phi} {d\rho}\Bigr\}, \] where $m:{\mathbb R}_+\to{\mathbb R}$ is the function given by \[ \begin{aligned}
m(\rho) &= (n-1)\log \rho + \int_0^\rho \frac{n-1}{s} Z_{0}'(s)^2\, ds \\
&= (n-1)\log \rho - \frac12\sqrt2 Z_0(\rho) -\frac12 \log \bigl(1+Z_0'(\rho)^2). \end{aligned} \] The operator $\mathcal{M}$ is symmetric and negative definite in the Hilbert space $ {\mathcal H} =L^2\Bigl({\mathbb R}_+, e^{m(\rho)}\,d\rho\Bigr) $ in which the norm is given by \[
\|f\|_{\mathcal{H}}^2 = \int_0^\infty f(\rho)^2 \, e^{m(\rho)}\,d\rho. \]
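The second expression for $m$ above follows from the first by a short computation which uses only the soliton equation \eqref{eq-Z-bar}: by \eqref{eq-Z-bar},
\[
\frac{d}{d\rho}\Bigl(-\tfrac{\sqrt2}{2}\,Z_0-\tfrac12\log\bigl(1+Z_0'(\rho)^2\bigr)\Bigr)
= -\tfrac{\sqrt2}{2}\,Z_0' - \frac{Z_0'\,Z_0''}{1+(Z_0')^2}
= -\tfrac{\sqrt2}{2}\,Z_0' + Z_0'\Bigl(\frac{n-1}{\rho}\,Z_0'+\tfrac{\sqrt2}{2}\Bigr)
= \frac{n-1}{\rho}\,Z_0'(\rho)^2,
\]
and both sides of the claimed identity vanish at $\rho=0$ (recall $Z_0(0)=Z_0'(0)=0$), so the two formulas for $m$ agree.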
We will use the variable $\rho$ in the proof of the Poincar\'e inequality in Proposition \ref{prop-Poincare}, while in our main estimate~\eqref{eqn-tip2} in the tip region we will bound $W(u,\tau)$ in an appropriate weighted norm, using the $u$ variable. We would like the weight $\mu(u,\tau)$ in the soliton region $\mathcal{S}_L$ to be, more or less, equal to $m(\rho)$. In order to make $\mu(u,\tau)$ a $C^1$ function in the whole tip region $\mathcal{T}_{\theta}$ we will modify $m(\rho)$ in $\mathcal{S}_L$ by adding a linear correction term, setting \[
\mu(u,\tau) = m(\rho) + a(L,\tau ) \rho + b(L,\tau), \qquad u \in \mathcal{S}_L, \,\, \rho = u\, \sqrt{|\tau|}. \]
It follows from our discussion above we have the following definition for the weight $\mu(u,\tau)$ in the tip region $\mathcal{T}_{\theta}$: \begin{equation}
\label{eq-weight}
\mu(u,\tau) :=
\begin{cases}
-\frac14 Y_1^2(u,\tau), & \quad u \in\mathcal{K}_{\theta,L} \\
m(\rho) + a(L,\tau)\, \rho + b(L,\tau), & \quad u \in \mathcal{S}_L.
\end{cases} \end{equation}
The requirement that $\mu(u, \tau)$ be $C^1$ at $u=L/\sqrt{|\tau|}$ dictates the following choice of $a$ and $b$: \begin{align}\label{eqn-aL}
a(L,\tau)&:= - m'(L) - \frac{1}{2\sqrt{|\tau|}} \, Y_1 Y_{1u}\Bigr|_{u=L/\sqrt{|\tau|}}
\\
\label{eqn-bL}
b(L,\tau) &:= -\frac{Y_1^2(L,\tau)}{4} - m(L) + L\, m'(L) + \frac{L}{2\sqrt{|\tau|}} \, Y_1 Y_{1u}\Bigr|_{u=L/\sqrt{|\tau|}}. \end{align}
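Indeed, with $\mu$ given by \eqref{eq-weight}, continuity of $\mu$ and of $\mu_u$ across $u = L/\sqrt{|\tau|}$ (that is, $\rho = L$) amounts to the two conditions
\[
m(L) + a\,L + b = -\tfrac14\, Y_1^2\Bigr|_{u=L/\sqrt{|\tau|}},
\qquad
\bigl(m'(L) + a\bigr)\sqrt{|\tau|} = -\tfrac12\, Y_1 Y_{1u}\Bigr|_{u=L/\sqrt{|\tau|}},
\]
where we used $\partial_u = \sqrt{|\tau|}\,\partial_\rho$ on the soliton side; solving this linear system for $a$ and $b$ gives \eqref{eqn-aL} and \eqref{eqn-bL}, with $Y_1(L,\tau)$ in \eqref{eqn-bL} understood as the value of $Y_1$ at $\rho = L$.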
For a function $W : [0, 2\theta]\to{\mathbb R}$ and any $\tau\leq \tau_0$, we define the weighted $L^2$ norm with respect to the weight $e^{\mu(\cdot,\tau)} \, du$ by \[
\|W\|_\tau^2 := \int_0^{2\theta} W^2(u, \tau) \, e^{\mu(u,\tau)}\, du, \qquad \tau \leq \tau_0 \] For a function $W:[0,2\theta]\times(-\infty, \tau_0] \to {\mathbb R}$ we define \emph{the cylindrical norms} \[
\|W\|_{2,\infty,\tau} = \sup_{\tau' \le \tau}|\tau'|^{-1/4}\, \Bigl(\int_{\tau'-1}^{\tau'} \|W\|_s^2\, ds\Bigr)^{\frac12} \]
for any $\tau\leq \tau_0$. We include the weight in time $|\tau|^{-1/4}$ to make the norms equivalent in the transition region, between the cylindrical and the tip region, as will become apparent in Lemma \ref{prop-norm-equiv}. We will also abbreviate \[
\|W\|_{2,\infty}:= \|W\|_{2,\infty,\tau_0}. \]
For a cutoff function $\varphi_T$ as in \eqref{eqn-cutofftip}, we set $W_T(u,\tau):= W(u,\tau) \, \varphi_T$. We will see in Section \ref{sec-tip} that the following bound holds in the tip region \begin{equation}
\label{eqn-tip2}
\| W_T \|_{\mathfrak{H},\infty} \leq \frac{C}{\sqrt{|\tau_0|}} \, \| W \chi \|_{2,\infty} \end{equation} where $\chi$ is the cut off function that is supported in the overlap between cylindrical and tip regions, for $\chi = 1$ on an open neighborhood of the support of $\partial_u\varphi_T$.
\subsection{The conclusion} \label{subsec-conclusion} The statement of Theorem \ref{thm-main} is equivalent to showing there exist parameters $\alpha, \beta$ and $\gamma$ so that $u_1(y,\tau) = u_2^{\alpha\beta\gamma}(y,\tau)$, where $u_2^{\alpha\beta\gamma}(y,\tau)$ is defined by \eqref{eq-ualphabeta} and both functions, $u_1(y,\tau)$ and $u_2^{\alpha\beta\gamma}(y,\tau)$, satisfy equation \eqref{eq-u}. We set $w:= u_1 - u_2^{\alpha\beta\gamma}$, where $(\alpha,\beta,\gamma)$ is an admissible triple of parameters with respect to $\tau_0$, such that \eqref{eqn-projections} holds for a $\tau_0 \ll -1$. Now for this $\tau_0$, the main estimates in each of the regions, namely \eqref{eqn-cylindrical1} and \eqref{eqn-tip2} hold for $w$. Next, we want to combine \eqref{eqn-cylindrical1} and \eqref{eqn-tip2}. To this end we need to show that the norms of the difference of our two solutions, with respect to the weights defined in the cylindrical and the tip regions, are equivalent in the intersection between the cylindrical and the tip regions, the so called {\em transition region}. More precisely, we will show in Section \ref{sec-conclusion} that for every $\theta > 0$ small, there exist $\tau_0 \ll 0$ and uniform constants $c(\theta), C(\theta) > 0$, so that for $\tau\le \tau_0$, we have \begin{equation}\label{eqn-normequiv1}
c(\theta ) \, \| W \chi_{_{[\theta, 2\theta]}} \|_{\mathfrak{H},\infty} \leq \| w \, \chi_{_{D_{2\theta}}} \|_{\mathfrak{H},\infty} \leq C(\theta) \, \| W \chi_{_{[\theta, 2\theta]}} \|_{\mathfrak{H},\infty} \end{equation}
where $\mathcal{D}_{2\theta} := \{y\,\,\, |\,\,\, \theta \le u_1(y,\tau) \le 2\theta\}$ and $\chi_{[\theta,2\theta]}$ is the characteristic function of the interval $[\theta,2\theta]$.
Combining \eqref{eqn-normequiv1} with \eqref{eqn-cylindrical1} and \eqref{eqn-tip2} finally shows that in the norm $\| w_\mathcal{C} \|_{\mathfrak{D},\infty}$ what actually dominates is $\| \mathcal{P}_0 w_\mathcal{C} \|_{\mathfrak{D},\infty}$. We will use this fact in Section \ref{sec-conclusion} to conclude that $w(y,\tau):= w^{\alpha\beta\gamma}(y,\tau) \equiv 0$ for our choice of parameters $\alpha, \beta$ and $\gamma$. To do so we will look at the projection $a(\tau):= \mathcal{P}_0 w_\mathcal{C}$ and define its norm \[
\|a\|_{\mathfrak{H},\infty}(\tau)
= \sup_{\sigma \le \tau} \Bigl(\int_{\sigma - 1}^{\sigma} \|a(s)\|^2\, ds\Bigr)^{\frac12}, \qquad \tau \leq \tau_0 \]
with $\|a\|_{\mathfrak{H},\infty}:=\|a\|_{\mathfrak{H},\infty}(\tau_0)$.
By projecting equation \eqref{eqn-w100} onto the zero eigenspace spanned by $\psi_2$ and estimating error terms by $\| a\|_{\mathfrak{H},\infty}$ itself, we will conclude in Section \ref{sec-conclusion} that $a(\tau)$ satisfies a certain differential inequality which combined with our assumption that $a(\tau_0)=0$ (that follows from the choice of parameters $\alpha, \beta$, $\gamma$ so that \eqref{eqn-projections} hold) will yield that $a(\tau)=0$ for all $\tau \leq \tau_0$. On the other hand, since $\| a\|_{\mathfrak{H},\infty} $ dominates the $\| w_\mathcal{C} \|_{\mathfrak{H},\infty}$, this will imply that $w_\mathcal{C} \equiv 0$, thus yielding $w \equiv 0$, as stated in Theorem \ref{thm-main}.
\begin{remark}
\label{rem-symmetry}
Note that our evolving hypersurface has $O(n)$ symmetry and can be represented as in \eqref{eq-O1xOn-symmetry}.
Due to asymptotics proved in Theorem \ref{thm-old}, when considering the tip region, it is enough to consider our solutions and prove the estimates only around $y = \bar{d}_1(\tau)$, where after switching the variables as in \eqref{eqn-Y-expansion}, we have $\rho \ge 0$.
There we have $Z(\rho,\tau) \le 0$ and $Z_{\rho} \le 0$.
We also have $Z_{\rho\rho} \le 0$, due to our convexity assumption.
The estimates around $y = -\bar{d}_2(\tau)$ are similar. \end{remark}
\section{A priori estimates} Let $u(y,\tau)$ be an ancient oval solution of \eqref{eq-u} which satisfies the asymptotics in Theorem \ref{thm-old}. In this section we will prove some further \emph{a priori} estimates on $u(y,\tau)$ which hold for $\tau \ll -1$. These estimates will be used in the subsequent sections. Throughout this section we will use the notation introduced in the previous section and in particular the definition of $Y(u,\tau)$ as the inverse function of $u(y,\tau)$ in the tip region and $Z(\rho,\tau)$ given by \eqref{eqn-Y-expansion}.
Before we start discussing \emph{a priori} estimates for our solution $u(y,\tau)$, we recall a corollary of Theorem \ref{thm-old} that will be used throughout the paper, especially in dealing with the tip region.
\begin{corollary}[Corollary of Theorem \ref{thm-old}]
\label{cor-old}
Let $M_t$ be any ancient oval satisfying the assumptions of Theorem \ref{thm-old}.
Consider the tip region of our solution as in part (iii) of Theorem \ref{thm-old} and switch the coordinates around the tip region as in formula \eqref{eqn-Y-expansion}.
Then, $Z(\rho,\tau)$ converges, as $\tau\to -\infty$, uniformly smoothly to the unique rotationally symmetric translating Bowl solution $Z_0(\rho)$ with speed $\sqrt2/2$. \end{corollary} \begin{proof}
According to the asymptotic description of the tip-region from \cite{ADS} (see part (iii) of Theorem~\ref{thm-old}) the family of hypersurfaces that we get by translating the tip of $M_t$ to the origin and then rescaling so that the maximal mean curvature becomes equal to one, converges to the translating Bowl soliton with velocity equal to one.
In defining $Z(\rho, \tau)$ by
\[
Y(u,\tau) = Y(0,\tau) + \frac{1}{\sqrt{|\tau|}}\, Z(\rho,\tau)
\]
we have in fact translated the tip to the origin, and rescaled the surface $M_t$, first by a factor $1/\sqrt{|t|} = e^{\tau/2}$ (the cylindrical rescaling \eqref{eq-type-1-blow-up}, which leads to $u(y,\tau)$ or equivalently $Y(u,\tau)$), and then by the factor $\sqrt{|\tau|}$ from \eqref{eqn-Y-expansion}.
These two rescalings together shrink $M_t$ by a factor $\sqrt{|t| / \log|t|}$.
Since by Theorem \ref{thm-old} the maximal mean curvature at the tip satisfies
\[
H_{\max}(t) = (1+o(1)) \sqrt{\frac {\log|t|}{2 |t|}}
\]
the hypersurface of rotation given by $z=Z(\rho, \tau)$ has maximal mean curvature
$ H_{\max} (t) \cdot \sqrt{{|t|}/{ \log |t|} } = \sqrt 2/2 + o(1).$ It therefore converges to the unique rotationally symmetric, translating Bowl solution $Z_0(\rho)$ with speed $\sqrt2/2$, which satisfies equation \eqref{eq-Z-bar}. \end{proof}
Next we prove a Proposition that will play an important role in obtaining the coercive type estimate \eqref{eqn-tip2} in the tip region.
\begin{prop}\label{pro-Sigurd} Let $u$ be an ancient oval solution of \eqref{eq-u} which satisfies the assumptions and conclusion of
Theorem \ref{thm-old}.
Then, there exists $\tau_0 \ll -1$ for which we have $(u^2)_{yy}(y,\tau) \le 0$, for all $\tau \leq \tau_0$. \end{prop}
The proof of this Proposition will combine a contradiction argument based on scaling and the following maximum principle lemma. \begin{lemma}
\label{lemma-help}
Under the assumptions of Proposition \ref{pro-Sigurd}, there exists time $\tau_0 \ll -1$ such that
\[ \max_{\bar M_\tau} (u^2)_{yy}(\cdot, \tau) >0 \quad \mbox{and} \quad \tau \leq \tau_0 \implies \frac {d}{d\tau} \max (u^2)_{yy} (\cdot, \tau) \leq 0.
\] \end{lemma} \begin{proof}
For the proof of this lemma it is more convenient to work in the original scaling $(x,t,U(x,t))$ (see equation \eqref{eq-u-original}), which is related to $(y,\tau,u(y,\tau))$ via the change of variables \eqref{eq-cv1}.
Set $Q(x,t):=U^2(x,t)$.
The inequality we want to show is scaling invariant,
namely $(U^2)_{xx}(x,t) = (u^2)_{yy}(y,\tau)$.
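Indeed, by \eqref{eq-cv1} we have, at each fixed time,
\[
U^2(x,t) = (-t)\,u^2(y,\tau), \qquad \partial_x = \frac{1}{\sqrt{-t}}\,\partial_y,
\qquad\text{so that}\qquad
(U^2)_{xx}(x,t) = (-t)\cdot\frac1{(-t)}\,(u^2)_{yy}(y,\tau) = (u^2)_{yy}(y,\tau).
\]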
Hence, it is sufficient to show that there exists $t_0 \ll -1$ such that
\[
\max_{M_t} Q_{xx}(\cdot, t) >0 \quad \mbox{and} \quad t \leq t_0 \implies \frac {d}{d t} \max_{M_t} Q_{xx} (\cdot, t) \leq 0.
\]
To this end, we will apply the maximum principle to the evolution of $Q_{xx}$.
Since $U$ satisfies \eqref{eq-u-original}, a simple calculation shows that
\[
Q_t = \frac{ 4QQ_{xx} - 2Q_x^2 }{ 4Q + Q_x^2 } - 2(n-1).
\]
Differentiate this equation with respect to $x$ to get
\begin{equation}
\label{eq-qxpde}
\begin{split}
Q_{xt} &= \frac{4QQ_{xxx}}{4Q+Q_x^2}
- (4QQ_{xx} - 2Q_x^2) \frac{4Q_x+2Q_xQ_{xx}}{(4Q+Q_x^2)^2} \\
&= \frac{4QQ_{xxx}}{4Q+Q_x^2} -(4QQ_{xx}-2Q_x^2)(Q_{xx}+2) \frac{2Q_x}{(4Q+Q_x^2)^2}.
\end{split}
\end{equation}
We differentiate again, but this time we only consider points where $Q_{xx}$ is either maximal or minimal, so that $Q_{xxx}=0$.
Note that
\begin{equation}
(4QQ_{xx}-2Q_x^2)_x = 4QQ_{xxx} = 0 \qquad \text{and}\qquad (Q_{xx}+2)_x = Q_{xxx} = 0,
\end{equation}
at those points.
Also,
\begin{equation*}
\begin{split}
\Bigl(\frac{2Q_x}{(4Q+Q_x^2)^2}\Bigr)_x &=
\frac{2Q_{xx}(4Q+Q_x^2) - 2(4Q_x+ 2 Q_xQ_{xx})(2Q_x)}{(4Q+Q_x^2)^3} \\
&= 2\; \frac{(4Q-3Q_x^2) Q_{xx} - 8 Q_x^2}{(4Q+Q_x^2)^3} \\
&= 2\frac{4Q-3Q_x^2}{(4Q+Q_x^2)^3} \Bigl(Q_{xx} - \frac{8Q_x^2}{4Q-3Q_x^2}\Bigr).
\end{split}
\end{equation*}
Using these facts we now differentiate \eqref{eq-qxpde}.
This leads us to
\begin{equation*}
\begin{split}
Q_{xxt} &- \frac{4QQ_{xxxx}}{4Q+Q_x^2}\\
&= - (Q_{xx}+2)(4QQ_{xx}-2Q_x^2) \cdot 2\frac{4Q-3Q_x^2}{(4Q+Q_x^2)^3} \Bigl(Q_{xx} - \frac{8Q_x^2}{4Q-3Q_x^2}\Bigr),
\end{split}
\end{equation*}
holding at the maximal or minimal points of $Q_{xx}$.
Recall that since $Q = U^2$, we have $Q_x^2 = 4U^2U_x^2$, and consequently $4Q+Q_x^2 = 4U^2(1+U_x^2)$ and $4Q-3Q_x^2 = 4U^2(1-3U_x^2)$.
Thus the previous equation becomes
\begin{equation}
\begin{split}
\bigl(Q_{xx}\bigr)_t &- \frac{\bigl(Q_{xx}\bigr)_{xx}}{1+U_x^2}\\
&= - \frac{2}{4Q} (Q_{xx}+2)(Q_{xx}-2U_x^2) \Bigl(Q_{xx} - \frac{8U_x^2}{1-3U_x^2}\Bigr)\frac{1-3U_x^2}{(1+U_x^2)^3}.
\label{eq-qxxpde}
\end{split}
\end{equation}
We will now use \eqref{eq-qxxpde} to conclude that at a maximum point of $Q_{xx}$, such that $Q_{xx} > 0$, we have
\begin{equation}\label{eqn-Qxx}
\bigl( Q_{xx}\bigr )_t - \frac{\bigl(Q_{xx}\bigr)_{xx}}{1+U_x^2} \leq 0.
\end{equation}
Since the equation becomes singular at the tip of the surface, we will
first show that very near the tip we have $Q_{xx} <0$.
After going to the $y$ variable and setting $q(y,\tau):=u^2(y,\tau)$, we have $Q_{xx}=q_{yy}$, where after switching coordinates
\begin{equation}\label{eqn-qyy10}
q_{yy} = 2\, (u u_{yy} + u_y^2) = 2\Bigl( -u\,\frac{Y_{uu}}{Y_u^3} + \frac{1}{Y_u^2}\Bigr) = \frac{2}{Z_{\rho}^3}\, (Z_{\rho} - \rho Z_{\rho\rho}).
\end{equation}
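The last equality in \eqref{eqn-qyy10} uses the inverse function relations together with the rescaling \eqref{eqn-Y-expansion}:
\[
u_y = \frac1{Y_u}, \qquad u_{yy} = -\frac{Y_{uu}}{Y_u^3}, \qquad
Y_u = Z_\rho, \qquad Y_{uu} = \sqrt{|\tau|}\; Z_{\rho\rho}, \qquad \rho = u\sqrt{|\tau|},
\]
so that $2\bigl(-u\,Y_{uu}/Y_u^3 + 1/Y_u^2\bigr) = \frac{2}{Z_\rho^3}\bigl(Z_\rho - \rho Z_{\rho\rho}\bigr)$.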
Since by Corollary \ref{cor-old} we have that $Z(\rho,\tau)$ converges
uniformly smoothly, as $\tau\to -\infty$, on the set $\rho \leq 1$, to the translating soliton $Z_0(\rho)$, it will be sufficient to show that ${\displaystyle \frac{2}{Z_{0\rho}^3}\, ( Z_{0\rho} - \rho Z_{0\rho\rho}) <0}$ near $\rho=0$.
Since $Z_0$ is a smooth function, this can be easily seen using the Taylor expansion of $Z_0$ near the origin.
Let $Z_0(\rho) = a \, \rho^2 + b\, \rho^4 + o(\rho^4)$, as $\rho \to 0$.
A direct calculation using \eqref{eq-Z-bar} shows that ${\displaystyle a=-\frac 1{2\sqrt{2}n}}$ and ${\displaystyle b=-\frac{\sqrt{2}}{16 n^3 (2+n)}}$, implying that
\begin{equation}\label{eqn-Qxx10}
\frac{2}{Z_{0\rho}^3}\, ( Z_{0\rho} - \rho Z_{0\rho\rho}) = \frac{-16\, b\, \rho^3}{(2a\rho)^3} + o(1) = -\frac{2b}{a^3} + o(1) = - \frac{4}{2+n} + o(1)
\end{equation}
as $\rho \to 0$.
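For instance, the value of $a$ can be read off by inserting $Z_0(\rho) = a\rho^2 + O(\rho^4)$ into \eqref{eq-Z-bar} and letting $\rho\to 0$:
\[
\frac{2a}{1+O(\rho^2)} + \frac{n-1}{\rho}\,\bigl(2a\rho + O(\rho^3)\bigr) + \frac{\sqrt2}{2}
\;\longrightarrow\; 2an + \frac{\sqrt2}{2} = 0,
\qquad\text{hence}\qquad a = -\frac{\sqrt2}{4n} = -\frac1{2\sqrt2\, n},
\]
while $b$ is determined by the $\rho^2$-terms of the same expansion.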
We conclude that for $\tau \leq \tau_0 \ll -1$ and $\rho$ sufficiently close to zero we have
\begin{equation}
\label{eq-Qxx-neg}
Q_{xx} =q_{yy} \leq - \frac{1}{2+n} < 0.
\end{equation}
We will now show that at a maximum point where $Q_{xx} > 0$, \eqref{eqn-Qxx} holds.
By \eqref{eq-Qxx-neg} we know this point cannot be at the tip, and hence all derivatives are well defined at the maximum point of $Q_{xx}$.
At such a point $Q_{xx}+2 >0$.
We also have $Q_{xx}= 2UU_{xx}+2U_x^2$, so convexity of the surface implies $ Q_{xx}-2U_x^2 = 2UU_{xx} <0 $ on the entire solution.
Thus, it is sufficient to show that when $Q_{xx} >0$,
\begin{equation}\label{eqn-Qxx2}
\Bigl ( Q_{xx}- \frac{8U_x^2}{1-3U_x^2} \Bigr)\frac{1-3U_x^2}{(1+U_x^2)^3} \leq 0
\end{equation}
holds.
To this end, we will look at the two different cases, when $3U_x^2 <1$ or $3U_x^2 \geq 1$.
When $3U_x^2 <1$, we also have ${\displaystyle \frac{ 8U_x^2 }{ 1-3U_x^2 } > 8 U_x^2 \geq 2 U_x^2}$, hence ${\displaystyle Q_{xx} - \frac{ 8U_x^2 }{ 1-3U_x^2 } < 0}$ implying that \eqref{eqn-Qxx2} holds.
In the region where $3U_x^2 \geq 1$, we have ${\displaystyle Q_{xx} - \frac{ 8U_x^2 }{ 1-3U_x^2 } \geq 0}$, thus \eqref{eqn-Qxx2} holds as well.
We conclude from both cases that at a maximum point where $Q_{xx} > 0$, \eqref{eqn-Qxx} holds.
\end{proof}
Let $Z_0$ be the translating Bowl soliton which satisfies \eqref{eq-Z-bar} and the asymptotics \eqref{eq-Z0-asymptotics}. Recall that we have $Z_0(0) = (Z_0)_\rho(0) =0$, and the sign conventions $(Z_0)_\rho(\rho) < 0$ and $(Z_0)_{\rho\rho} (\rho) < 0$, for $\rho >0$ (see Remark \ref{rem-symmetry}), which also imply that $Z_0(\rho) < 0$ for $\rho >0$. By Corollary \ref{cor-old} we have $\lim_{\tau \to -\infty} Z(\rho,\tau) =Z_0(\rho) $, smoothly on compact sets in $\rho$. Thus \eqref{eqn-qyy10} implies that ${\displaystyle q_{yy} \sim \frac{2}{(Z_0)_{\rho}^3}\, ((Z_0)_{\rho} - \rho (Z_0)_{\rho\rho})}$ for $\tau \leq \tau_0 \ll -1$. In the proof of the previous lemma we have shown that this quantity is negative near the origin $\rho=0$. We will next show that it remains negative for all $\rho >0$.
\begin{lemma}\label{lemma-help2} On the translating Bowl soliton
$Z_0(\rho)$ which satisfies equation \eqref{eq-Z-bar} we have
\[
\frac{2}{(Z_0)_{\rho}^3}\, \big ((Z_0)_{\rho} - \rho (Z_0)_{\rho\rho} \big )< 0
\]
for any $\rho \geq 0$. \end{lemma} \begin{proof}
The proof simply follows from the maximum principle in a similar manner as the proof of Lemma \ref{lemma-help}.
To use the calculations from before we need to flip the coordinates.
Setting $x=Z_0(\rho)$, after we flip coordinates we have $\rho=U_0(x)$ for some function $U_0>0$.
Since we have assumed above that $Z_0 \leq 0$, we also have that $x \leq 0$.
Setting $Q:=U_0^2$ we find that ${\displaystyle Q_{xx} = \frac{2}{(Z_0)_{\rho}^3}\, \big ((Z_0)_{\rho} - \rho (Z_0)_{\rho\rho} \big )}$, hence it is sufficient to show that $Q_{xx} <0$ for $x < 0$.
A direct calculation shows that $U_0$ satisfies the equation
\[
\frac{(U_0)_{xx}} {1+(U_0)_x^2} - \frac{n-1} {U_0} = \frac{\sqrt 2}{2} \, (U_0)_x.
\]
Note that in addition to $U_0 >0$ for $x<0$, we have $(U_0)_x = 1/(Z_0)_\rho <0$ and $(U_0)_{xx} = - (Z_0)_{\rho\rho}/(Z_0)_\rho^3 <0$.
Also since $(U_0)_x \to -\infty$ as $x \to 0$ the function $U_0$ fails to be a $C^1$ function near $x=0$.
However this is not a problem since we have shown in the proof of the previous Lemma that \eqref{eqn-Qxx10} holds, implying that $Q_{xx} <0$ for $|x| \leq \eta$, if $\eta$ is chosen sufficiently small.
In addition a direct calculation where we use that $Z_0(\rho)$ satisfies the asymptotics
\[
Z_0(\rho) = -\frac{\rho^2}{2\sqrt{2}(n-1)} + \log\rho + o(\log\rho), \qquad \mbox{as}\,\,\,\rho\to \infty,
\]
as shown in Proposition 2.1 in \cite{AV}, leads to
\[
Q_{xx} = \frac{2}{(Z_0)_{\rho}^3}\, \big ((Z_0)_{\rho} - \rho (Z_0)_{\rho\rho} \big ) < 0,
\]
for $\rho$ sufficiently large which is equivalent to $|x| \geq \ell $ with $\ell $ sufficiently large.
We will now use the maximum principle to conclude that $Q_{xx} <0$ for $\eta < |x| < \ell$.
Similarly to the computation in the proof of the previous lemma, after setting $Q:=U_0^2$, we find that
\[
\frac{ 4QQ_{xx} - 2Q_x^2 }{ 4Q + Q_x^2 } - 2(n-1) = \frac{\sqrt 2}{2} \, Q_x.
\]
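Indeed, multiplying the equation satisfied by $U_0$ by $2U_0$ and using that $Q_x = 2U_0\,(U_0)_x$ and $Q_{xx} = 2(U_0)_x^2 + 2U_0\,(U_0)_{xx}$, so that $2U_0\,(U_0)_{xx} = Q_{xx} - \frac{Q_x^2}{2Q}$ and $1+(U_0)_x^2 = \frac{4Q+Q_x^2}{4Q}$, yields exactly the identity above.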
After we differentiate twice in $x$, following the same calculations as in the proof of Lemma \ref{lemma-help}, we find that $Q_{xx}$ satisfies the equation
\[
\begin{split}
\frac{\sqrt 2}2 \, & \Bigl(Q_{xx}\Bigr)_x - \frac{\bigl(Q_{xx}\bigr)_{xx}}{1+(U_0)_x^2}\\
&= - \frac{2}{4Q} (Q_{xx}+2)(Q_{xx}-2(U_0)_x^2) \Bigl(Q_{xx} - \frac{8 (U_0) _x^2}{1-3(U_0)_x^2}\Bigr)\frac{1-3(U_0)_x^2}{(1+(U_0)_x^2)^3}.
\end{split}
\]
Assume that $Q_{xx}$ assumes a {\em positive} maximum at some point $x_0 \in [-\ell, -\eta]$.
Arguing exactly as in Lemma \ref{lemma-help} we conclude that at a maximum point of $Q_{xx}$ where $Q_{xx} >0$, we have
\[
- \frac{2}{4Q} (Q_{xx}+2)(Q_{xx}-2(U_0)_x^2) \Bigl(Q_{xx} - \frac{8(U_0)_x^2}{1-3(U_0)_x^2}\Bigr)\frac{1-3(U_0)_x^2}{(1+(U_0)_x^2)^3} \leq 0.
\]
On the other hand at this point we also have that $Q_{xxx} =0$ and $Q_{xxxx} \leq 0$.
If $Q_{xxxx}(x_0) <0$ at the maximum point $x_0$ we have reached a contradiction.
If $Q_{xxxx}(x_0)=0$, then by replacing $Q_{xx}$ by $Q_{xx} - \epsilon (x-x_0)^2$ where $\epsilon = \epsilon (\eta, \ell) >0$ and sufficiently small, then $Q_{xx} - \epsilon (x-x_0)^2$ also attains its maximum at point $x_0$, where now
\[
\frac{\sqrt 2}2 \, \Bigl(Q_{xx} - \epsilon (x-x_0)^2 \Bigr)_x - \frac{\bigl(Q_{xx} - \epsilon (x-x_0)^2 \bigr)_{xx}}{1+(U_0)_x^2} >0
\]
leading again to contradiction.
Hence, $Q_{xx}$ cannot achieve a positive maximum on $[-\ell,-\eta]$ finishing the proof of our lemma. \end{proof}
For the purpose of the next lemma we consider $U(x,t)$ to be a solution to the unrescaled mean curvature flow equation \eqref{eq-u-original}.
\begin{lemma}\label{lemma-help3}
If the hypersurface $M_{t_0}$ defined by $U(\cdot,t_0)$ encloses the interval $(x_0-2\ell, x_0+2\ell) \times \{0\}$, and if this interval is sufficiently long in the sense that
\[
\ell \geq \sqrt{2n+1}\,\, U(x_0, t_0),
\]
then
\[
U(x, t) + \ell \, | U_{x}(x, t)| \leq 8 \sqrt{2n+1} \, U(x_0, t_0),
\]
for all $x\in [x_0-\ell, x_0+\ell]$ and $t\in (t_0- U(x_0, t_0)^2, t_0]$. \end{lemma}
\begin{proof}
After translation in space and time, and after cylindrical rescaling of the solution $M_t$ we may assume that $U(x_0,t_0)=1$, $x_0=t_0=0$.
The assumption on $\ell$ then simply reduces to $\ell\geq \sqrt{2n+1}$.
Since the hypersurfaces $M_t$ are convex they expand in backward time under MCF.
Thus, if $M_0$ encloses the line segment $[-2\ell, +2\ell] \times \{0\}$ (i.e.~if $U(x, 0)$ is defined for $|x|\leq 2\ell$), then so does $M_t$ for all $t<0$.
For now we ignore the fact that $M_t$ evolves by MCF, and merely consider the consequences of convexity for the hypersurface at some time $t\in(-1,0]$.
If $x\mapsto U(x, t)$ is a nonnegative concave function that is defined for $|x|\leq 2\ell$, then we have
\begin{equation}
\label{eq-U-convex-estimate}
\tfrac12 U(0, t) \leq U(x,t)\leq 2 U(0, t)\,\, \text{ for } \,\, |x|\leq \ell \,\, \text{ and } \, -1<t\leq 0
\end{equation}
(see Figure~\ref{fig-cylindrical}).
The concavity of $x\mapsto U(x, t)$ also implies that at any $x\in{\mathbb R}$ for which $U (\cdot, t)$ is defined on the whole interval $(x-\ell, x+\ell)$, one has the derivative estimate
\[
|U_x(x, t)| \leq \frac{U(x,t)} {\ell}
\]
(see again Figure~\ref{fig-cylindrical}).
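Both facts are elementary consequences of concavity: for $0\le x\le \ell$ (and similarly for $-\ell \le x \le 0$), concavity and $U(2\ell,t)\ge 0$ give $U(x,t)\ge \bigl(1-\tfrac{x}{2\ell}\bigr)U(0,t)\ge \tfrac12\, U(0,t)$, while $U(0,t)\ge \tfrac12\bigl(U(x,t)+U(-x,t)\bigr)\ge \tfrac12\, U(x,t)$ gives the upper bound in \eqref{eq-U-convex-estimate}; similarly, $0\le U(x\pm\ell,t)\le U(x,t)\pm \ell\, U_x(x,t)$ yields the derivative estimate.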
Combined with \eqref{eq-U-convex-estimate} this leads us to
\begin{equation}
\label{eq-U-convex-derivative-estimate}
|U_x(x, t)| \leq 2\, \frac{U(0,t)}{\ell} \,\, \text{ for } \ |x|\leq \ell \ \text{ and } \, -1<t\leq 0.
\end{equation}
We now recall that $M_t$ is a solution to MCF.
Since $U(0,0)=1$, the hypersurface $M_0$ intersects the closed ball $\bar B_1(0,0)$.
By the maximum principle for MCF, the hypersurface $M_t$ must intersect $B_{\sqrt{1-2nt}}(0,0)$ for all $t\in(-1, 0]$.
It follows that for each $t\in(-1, 0]$ there is an $x'$, with $|x'|\leq \sqrt{1+2n}$, for which $U(x', t) \leq \sqrt{1+2n}$.
If we assume that $\ell \geq \sqrt{1+2n}$, then \eqref{eq-U-convex-estimate} implies that $U(0, t) \leq 2 U(x', t) \leq 2\sqrt{1+2n}$.
Applying this estimate to \eqref{eq-U-convex-estimate} and \eqref{eq-U-convex-derivative-estimate} we find
\[
|U(x,t)|\leq 4\sqrt{2n+1}, \qquad \ell |U_x(x, t)| \leq 4\sqrt{2n+1},
\]
when $|x|\leq \ell$ and $-1<t\leq 0$.
\begin{figure}
\caption{\textbf{Top: } If $M_t$ is a rotationally symmetric convex hypersurface that encloses the line segment $\{(x', 0) \mid x-2\ell \leq x'\leq x+2\ell\}$, then one has $\frac12 U(x,t) \leq U(x', t) \leq 2U(x, t)$ whenever $x-\ell\leq x'\leq x+\ell$. \textbf{Bottom: } If $M_t$ encloses the line segment $\{(x', 0) \mid x-\ell\leq x'\leq x+\ell\}$, then at $x$ one has $|U_x(x,t)| \leq U(x, t)/\ell$.}
\label{fig-cylindrical}
\end{figure}
\end{proof}
We will now proceed to the proof of Proposition \ref{pro-Sigurd}. \begin{proof}[Proof of Proposition \ref{pro-Sigurd}]
We will argue by contradiction.
Assuming that our claim doesn't hold, we can find a decreasing sequence $\tau_j \to -\infty$ and points $(y_j,\tau_j)$ such that $q_{yy} (y_j,\tau_j) = \max_{\bar M_{\tau_j }} q_{yy} (\cdot,\tau_j) > 0.$ By the symmetry of our surface, we may assume without loss of generality that $y_j >0$.
It follows from Lemma \ref{lemma-help} that the maximum $\max_{\bar M_{\tau}} q_{yy} (\cdot,\tau)$, whenever it is positive, is non-increasing in $\tau$; since the times $\tau_j$ are decreasing, this implies that
\begin{equation}\label{eq-qyy}
q_{yy} (y_j,\tau_j) = \max_{\bar M_{\tau_j }} q_{yy} (\cdot,\tau_j) \geq c >0, \qquad \forall j.
\end{equation}
We need the following two simple claims.
\begin{claim}\label{claim-uy}
Set $ \delta_j:= |u_y(y_j,\tau_j)|$, where $(y_j,\tau_j)$ is as in \eqref{eq-qyy}.
Then, we have
\[
\lim_{\tau_j \to -\infty} \delta_j = 0.
\]
\end{claim}
\begin{proof}[Proof of Claim \ref{claim-uy}] Indeed, if our claim doesn't hold this means that there exists a subsequence, which may be assumed without loss of generality to be the sequence $\tau_j$ itself, for which $ |u_y(y_j,\tau_j)| \geq \delta >0$.
However, after flipping the coordinates and using the change of variables
\[
Y(u,\tau) = Y(0, \tau) + \frac{1}{\sqrt{|\tau|}}\,Z \Bigl(\rho, \tau\Bigr), \qquad \rho = \sqrt{|\tau|}\, u
\]
we find that for $u_j=u(y_j, \tau_j), \rho_j = \sqrt{|\tau_j|}\, u_j$ we have
\[
|u_y(y_j,\tau_j)| = \frac 1{|Y_u(u_j,\tau_j)|} = \frac 1{|Z_\rho(\rho_j,\tau_j)|} \geq \delta \implies |Z_\rho(\rho_j,\tau_j)| \leq \frac 1{\delta}.
\]
The monotonicity of $Z_\rho(\rho,\tau)$ in $\rho$ and the convergence $\lim_{\tau \to -\infty} Z(\rho, \tau) = Z_0(\rho)$ smoothly on any compact set in $\rho$, imply that $\rho_j \leq \rho_{\delta}$, where $\rho_{\delta}$ is the point at which $|(Z_0)_\rho(\rho_\delta)|=2/\delta$.
We may assume, without loss of generality that $\delta $ is small, which means that $\rho_\delta$ is large.
The asymptotics \eqref{eq-Z0-asymptotics} for $Z_0(\rho)$ as $\rho \to \infty$, give that $|(Z_0)_\rho(\rho) | \sim {\rho}/({\sqrt{2} (n-1)})$, as $\rho \to +\infty$, implying that by choosing $\delta$ sufficiently small we have $2/ \delta = |(Z_0)_\rho(\rho_\delta)| \sim {\rho_\delta}/({\sqrt{2} (n-1)})$, or equivalently $\rho_\delta \sim 2\sqrt{2}(n-1)/\delta$.
Since $\rho_j \leq \rho_\delta$, we conclude that the points $(\rho_j, \tau_j, Z(\rho_j,\tau_j))$, or equivalently the points $(y_j,\tau_j, u(y_j,\tau_j))$, belong to the soliton region where we know that $q_{yy} <0$ by Lemma \ref{lemma-help2}, contradicting our assumption \eqref{eq-qyy}.
\end{proof}
\begin{claim}\label{claim-uy2}
Let $y_j^1>0$ be the point for which ${\displaystyle u(y_j^1,\tau_j) = \frac 12 u(y_j,\tau_j)}$, where $(y_j,\tau_j)$ is as in \eqref{eq-qyy}.
If $\bar \delta_j:= |u_y(y_j^1,\tau_j)|$, then
\[
\lim_{j \to +\infty}\bar \delta_j =0.
\]
\end{claim}
\begin{proof}[Proof of Claim \ref{claim-uy2}]
We will again argue by contradiction.
If our claim doesn't hold this means that there exists a subsequence, which may be assumed without loss of generality to be the sequence $\tau_j$ itself, for which $ |u_y(y_j^1,\tau_j)| \geq \delta >0$.
Then, arguing as in the previous claim implies that $(y_j^1,\tau_j,u(y_j^1,\tau_j))$ belongs to the tip region, which means that $u(y_j^1,\tau_j) \leq L / \sqrt{|\tau_j|} $, for some uniform number $L >0$.
Since ${\displaystyle u(y_j^1,\tau_j) = \frac 12 u(y_j,\tau_j)}$, we also have $u(y_j,\tau_j) \leq 2\, L / \sqrt{|\tau_j|} $, implying that the points $(y_j,\tau_j,u(y_j,\tau_j))$ belong to the tip region as well and by Lemma \ref{lemma-help2} we must have $q_{yy}(y_j,\tau_j) <0$, for $\tau_j \ll -1$ contradicting our assumption \eqref{eq-qyy}.
\end{proof}
We will now conclude the proof of the proposition.
Let $(y_j,\tau_j)$ be our maximum points for $q_{yy}$ as in \eqref{eq-qyy} and let $y_j^1$ be the points for which ${\displaystyle u(y_j^1,\tau_j) = \frac 12 u(y_j,\tau_j)}$, as in the previous claim.
Since $u_y <0$, we must have $ y_j^1 > y_j$ and by the concavity of $u$ we obtain
\begin{equation}\label{eqn-est100}
\frac 12 u(y_j,\tau_j) = u(y_j,\tau_j) - u(y_j^1,\tau_j) \leq |u_y (y_j^1, \tau_j)|\, |y_j - y_j^1| = \bar \delta_j \, |y_j - y_j^1|.
\end{equation}
We will now use a rescaling argument to reach a contradiction.
To this end it is more convenient to work in the original variables, rescaling the solution $U(x,t)$.
Denote by $(x_j,t_j)$, $(x_j^1,t_j)$ the points corresponding to $(y_j,\tau_j)$, $(y_j^1,\tau_j)$, respectively.
Setting
\[
U_j( \bar x, \bar t) = \frac 1{\alpha_j} U(x_j+ \alpha_j \, \bar x , t_j + \alpha_j^2 \, \bar t), \qquad \alpha_j := U(x_j,t_j)
\]
it follows that all $U_j$ satisfy the same equation \eqref{eq-u-original} with $U_j(0,0)=1$.
Denote by $\bar x_j^1$ the point at which $x_j + \alpha_j \, \bar x_j^1 = x_j^1$.
Since, by \eqref{eqn-est100} we have
\[
\frac 12 U(x_j,t_j) \leq \bar \delta_j \, |x_j - x_j^1|
\]
in terms of the rescaled solutions we have
\[
\frac 12 \alpha_j \, U_j(0,0) \leq \bar \delta_j \, |x_j - (x_j + \alpha_j \, \bar x_j^1 ) | = \bar \delta_j \, \alpha_j \, | \bar x_j^1|,
\]
where $x_j + \alpha_j \, \bar x_j^1 = x_j^1$.
Thus, defining the length $l_j$ so that $2 l_j:= \bar x_j^1 >0$, it follows that
\[
2l_j= \bar x_j^1 \geq \frac{1}{2 \bar \delta_j}.
\]
Now consider $U_j$ on the interval $[-2 l_j, 2 l_j]=[-\bar x_j^1, \bar x_j^1]$ and apply Lemma \ref{lemma-help3}.
Let us verify that the assumptions of the lemma hold.
The rescaled surface, defined through the rescaled width function $U_j(\cdot,0)$ encloses the interval $(-\bar{x}_j^1, \bar{x}_j^1)$ and
\[
\bar{x}_j^1 = 2l_j \ge \frac{1}{2\bar{\delta}_j} \to \infty,
\]
as $j\to\infty$, where we have used Claim \ref{claim-uy2}.
Moreover,
\[
l_j \ge \sqrt{2n+1} = \sqrt{2n+1} U_j(0,0), \quad \mbox{for} \,\, j \gg 1.
\]
We can now apply Lemma \ref{lemma-help3} to conclude
\[
|(U_j)_{\bar{x}}(\cdot,\bar{t})| \le \frac{8\sqrt{2n+1}}{l_j} \le C\, \bar{\delta}_j \ll 1, \quad \mbox{for} \,\, j \gg 1,
\]
for all $\bar t \in [-1,0]$ and $|\bar x | \leq l_j$, where $l_j \geq 1/(4 \bar \delta_j)$, in particular for $ |\bar x | \leq 2$.
Thus, on the cube $Q_2:= \{ | \bar x| \leq 2, \,\, -1 \leq t \leq 0 \}$ we have $U_j \geq 1/2$ (since $|(U_j)_{\bar x}(\cdot, \bar t)| \leq C\, \bar \delta_j \ll 1$).
By standard cylindrical estimates, passing if necessary to a subsequence $j_k$, we conclude that $\lim_{j_k \to +\infty} U_{j_k} = \hat U$ on the cube $Q_1:= \{ | \bar x| \leq 1, \,\, -1/4 \leq t \leq 0 \}$, where $\hat U$ still solves equation \eqref{eq-u-original} and satisfies $\hat U(0,0)=1$ and $\hat U_{\bar x}(\bar x,0)=0$ for $\bar x \in [-1,1]$.
This in particular implies that $\hat U_{\bar x\bar x}(0,0) =0$, thus $\hat Q_{\bar x \bar x}(0,0) := (\hat U^2)_{\bar x\bar x} (0,0)=0$.
On the other hand, since the quantity $Q_{xx}$ is scaling invariant, we have
\[
\lim_{j_k \to +\infty} (Q_{j_k})_{\bar x \bar x} (0,0) = \lim_{j_k \to +\infty} Q_{xx} (x_{j_k},t_{j_k}) = \lim_{j_k \to +\infty} q_{yy} (y_{j_k},\tau_{j_k}) \geq c >0,
\]
where in the last inequality we used our assumption \eqref{eq-qyy}.
This is a contradiction, finishing the proof of the Proposition. \end{proof}
In the rotationally symmetric case that we consider here, the principal curvatures of our hypersurface are given by \[ \lambda_1 = -\frac{ u_{yy}}{(1+u_y^2)^{3/2}} \qquad \mbox{and} \qquad \lambda_2 = \cdots = \lambda_n =\frac{1}{u \, (1 + u_y^2)^{1/2}}. \] In \cite{ADS} we showed that on our Ancient Ovals $M_t$ we have \[ \lambda_1 \le \lambda_2. \] We also showed $\lambda_1 = \lambda_2$ at the tip of the Ancient Ovals, at which the mean curvature is maximal as well. The quotient \[ R:= \frac{\lambda_1}{\lambda_2} = -\frac{ U\, U_{xx}}{1+U_x^2} = -\frac{ u\, u_{yy}}{1+u_y^2} \]
is a scaling invariant quantity and in some sense measures how close we are to a cylinder, in a given region and at a given scale. It turns out that this quotient can be made arbitrarily small {\em outside} the {\em soliton } region $ S_L(\tau) := \big \{y \mid 0 \le u(y,\tau) \le \frac{L}{\sqrt{|\tau|}}\big\}$, by choosing $L \gg 1$ and $\tau \leq \tau_0 \ll -1$. This is shown next.
\begin{proposition}
\label{prop-ratio-small}
For every $\eta > 0$, there exist $L \gg 1$ and $\tau_0 \ll -1$ so that
\[
\frac{\lambda_1}{\lambda_2}(y,\tau) < \eta, \qquad \mbox{if} \,\,\, u(y,\tau) > \frac{L}{\sqrt{|\tau|}} \,\,\, \mbox{and } \, \,\, \tau \le \tau_0.
\] \end{proposition}
\begin{proof} The proof is by contradiction in the spirit of Proposition \ref{pro-Sigurd} but easier.
Assuming that our proposition doesn't hold, this means that there is an $\eta >0$ and sequences $\tau_j \to -\infty$, $L_j \to +\infty$ and points $(y_j,\tau_j)$ for which we have
\begin{equation}
\label{eqn-ratio}
\frac{\lambda_1}{\lambda_2}(y_j,\tau_j) \geq \eta >0 \quad \mbox{and} \quad u(y_j,\tau_j) > \frac{L_j}{\sqrt{|\tau_j|}}.
\end{equation}
\begin{claim}\label{claim-uy5}
Set $ \delta_j:= |u_y(y_j,\tau_j)|$, where $(y_j,\tau_j)$ is as in \eqref{eqn-ratio}.
Then, we have
\[
\lim_{\tau_j \to -\infty} \delta_j = 0.
\]
\end{claim}
\begin{proof} [Proof of Claim \ref{claim-uy5}] This claim is shown in a way very similar to Claim \ref{claim-uy} in Proposition \ref{pro-Sigurd}.
Arguing by contradiction, if our claim doesn't hold this means that there exists a subsequence, which may be assumed without loss of generality to be the sequence $\tau_j$ itself, for which $ |u_y(y_j,\tau_j)| \geq \delta >0$.
Arguing exactly as in the proof of Claim \ref{claim-uy} we conclude that the points $(y_j,\tau_j)$ satisfy $\sqrt{|\tau_j|} \, u(y_j,\tau_j) \leq C/{\delta} $, for an absolute constant $C$, contradicting that $u(y_j,\tau_j) > {L_j}/{\sqrt{|\tau_j|}}$ with $L_j \to +\infty$.
\end{proof}
We will now use the same rescaling argument as in Proposition \ref{pro-Sigurd} to reach a contradiction.
Working again in the original variables, we rescale the solution $U(x,t)$, setting
\[
U_j( \bar x, \bar t) = \frac 1{\alpha_j} U(x_j+ \alpha_j \, \bar x , t_j + \alpha_j^2 \, \bar t), \qquad \alpha_j := U(x_j,t_j),
\]
where $(x_j,t_j)$ are the points in the original variables corresponding to $(y_j,\tau_j)$.
The same argument as before, based now on Claim \ref{claim-uy5} instead of Claim \ref{claim-uy} (note that Claim \ref{claim-uy2} still holds in our case), allows us to conclude that, after passing to a subsequence $j_k$, we have $U_{j_k} \to \hat U$, smoothly on compact sets.
The limit $\hat U$ still solves equation \eqref{eq-u-original} and satisfies $\hat U(0,0)=1$ and $\hat U_{\bar x}(\bar x,0)=0$ for $\bar x \in [-1,1]$.
This in particular implies that $\hat U_{\bar x\bar x}(0,0) =0$.
On the other hand, the ratio ${\displaystyle R:= \frac{ \lambda_1}{\lambda_2}}$ is scaling invariant, which means that
\[
\hat R(0,0) = \lim_{j_k \to +\infty} R_{j_k} (0,0) = \lim_{j_k \to +\infty} R (x_{j_k},t_{j_k}) = \lim_{j_k \to +\infty} R(y_{j_k},\tau_{j_k}) \geq \eta >0,
\]
where in the last inequality we used our assumption \eqref{eqn-ratio}.
This is a contradiction, since at the point $(0,0)$ we also have
\[
\hat R (0,0) = -\frac{\hat U\, \hat U_{\bar x \bar x} (0,0)}{1 + \hat U_{\bar x}^2(0,0)}=0,
\]
therefore finishing the proof of the Proposition.
\end{proof}
We will finally use the convexity estimate shown in Proposition \ref{pro-Sigurd} to show the estimates in the next two Corollaries, which will play a crucial role in estimating various terms in the tip region $\mathcal{T}_\theta$, in Section \ref{sec-tip}. The first Corollary concerns an estimate which holds in the collar region $\mathcal{K}_{\theta,L}$, as defined in Section \ref{subsec-tip}.
\begin{corollary}
\label{lemma-Sigurd}
Let $u$ be an ancient oval solution of \eqref{eq-u} which satisfies the asymptotics in Theorem \ref{thm-old}.
Then, for $0 < \theta \ll 1$ and $L \gg 1$ large, there exists $\epsilon(\theta,L)$ small and a $\tau_0 \ll -1$ for which we have
\[
\Bigl|1 + \frac{1}{2(n-1)} Y \, \frac{u}{Y_u} \Bigr | < \epsilon(\theta,L) \quad \mbox{in} \, \, \mathcal{K}_{\theta,L}, \quad \mbox{for} \,\, \tau \le \tau_0.
\]
Moreover, for $L \gg 1$ and $\theta \ll 1$, we can choose $\epsilon := \max\{4\theta^2, c(n) L^{-1}\}$. \end{corollary}
\begin{proof}
Recall that by Proposition \ref{pro-Sigurd} we have $(u^2)_{yy} \le 0$, for $\tau \leq \tau_0 \ll -1$.
We need to show that ${\displaystyle 1-\epsilon \leq - \frac 1{2(n-1)} \frac{Y\, u}{Y_u} \leq 1+ \epsilon}$ in the considered region, which is equivalent to
\begin{equation}\label{eqn-crucialY2}
1-\epsilon \leq - \frac 1{4(n-1)} y\, (u^2)_y \leq 1+ \epsilon.
\end{equation}
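Here we used that, in the flipped coordinates, $Y(u,\tau) = y$ and $Y_u = 1/u_y$, so that $\frac{Y\, u}{Y_u} = y\, u\, u_y = \tfrac 12\, y\, (u^2)_y$.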
Now since at $u=2\theta$ we have
\[
y \approx \sqrt{|\tau|} \, \sqrt{ 2 - \frac{u^2}{n-1}} = \sqrt{ 2|\tau|} \sqrt{1- \frac{2 \theta^2}{n-1}} \approx \sqrt{ 2|\tau|} \, \big ( 1- \frac{ \theta^2}{n-1} \big ),
\]
it follows that at $u=2\theta$ and for $\theta$ small, $y \geq \sqrt{ 2|\tau|} \, ( 1- 2 \theta^2 ).$ Hence, in the considered region ${L}/{\sqrt{|\tau|}} \leq u \leq 2\theta$, we have
\begin{equation}\label{eqn-y12}
\sqrt{2|\tau|} (1- 2 \theta^2) \leq y \leq \sqrt{2|\tau|}.
\end{equation}
Next, using the inequality $-(u^2)_{yy} \geq 0$ which was shown in Proposition \ref{pro-Sigurd}, we can estimate
\[
- (u^2)_{y}\bigr|_{u=2\theta}
\leq -(u^2)_{y}
\leq - (u^2)_{y}\bigr|_{u=L/\sqrt{|\tau|}}.
\]
Our intermediate region asymptotics from Theorem \ref{thm-old} imply that at $u=2\theta$,
\[
- (u^2)_{y} |_{u=2\theta} = 2 (n-1) \frac{y}{|\tau|} \approx \frac{2 \sqrt{2} (n-1)}{\sqrt{|\tau|}} ( 1- \frac{\theta^2}{n-1} ).
\]
On the other hand, the asymptotics in the tip region give us that at $u=L/\sqrt{|\tau|}$, we have
\[
- (u^2)_{y}|_{u=L/\sqrt{|\tau|}} = - \frac{2u}{Y_u}|_{u=L/\sqrt{|\tau|}} \approx \frac{2L}{\sqrt{|\tau|}} \, \frac 1{|Z_\rho(L,\tau)|}.
\]
The smooth convergence $\lim_{\tau\to -\infty} Z(\rho,\tau) = Z_0(\rho)$, together with the asymptotics \eqref{eq-Z0-asymptotics} imply that for $L \gg1$, we have
\[
|Z_\rho(L,\tau)| \geq \frac{L - c}{\sqrt{2} (n-1)},
\]
for a fixed constant $c=c(n)$, hence
\[
- (u^2)_{y}|_{u=L/\sqrt{|\tau|}} \leq \frac{2L}{\sqrt{|\tau|}} \, \frac{\sqrt{2} (n-1)}{L-c} = \frac{2 \sqrt {2} (n-1)}{\sqrt{|\tau|}} \, (1+ \epsilon),
\]
for $\epsilon = c/L$, for another fixed constant $c=c(n)$.
We conclude that
\begin{equation}\label{eqn-u12}
\frac{2 \sqrt{2} (n-1)}{\sqrt{|\tau|}} ( 1- \frac{\theta^2}{n-1} ) \leq -(u^2)_{y} \leq \frac{2 \sqrt {2} (n-1)}{\sqrt{|\tau|}} \, (1+ \epsilon).
\end{equation}
Combining \eqref{eqn-y12} and \eqref{eqn-u12} yields
\[
(1- 2 \theta^2)\, ( 1- \frac{\theta^2}{n-1} ) \leq - \frac 1{4(n-1)} y\, (u^2)_y \leq (1+\epsilon),
\]
which yields \eqref{eqn-crucialY2} for $\epsilon := \max (4\theta^2, c(n)\, L^{-1})$ and $L\gg1$, $\theta \ll1$. \end{proof}
\begin{remark}
\label{cor-sigurd}
It is an easy consequence of Corollary \ref{lemma-Sigurd} that for $0 < \theta \ll 1$ small and $L \gg 1$ large, there exists a $\tau_0 \ll -1$ for which we have
\begin{equation}
\label{eq-cor-Sigurd}
(1 - \epsilon)\,\frac{Y}{2(n-1)}
< \frac{|Y_u|}{u}
< (1 + \epsilon)\, \frac{Y}{2(n-1)}
\quad \mbox{in} \,\, \mathcal{K}_{\theta,L},
\quad \mbox{for} \,\, \tau \le \tau_0
\end{equation}
with $\epsilon:= \max\{4\theta^2, c(n) L^{-1}\}$ small.
From now on we denote by the same symbol $\epsilon =\epsilon(\theta,L)$ a constant that is small for $\theta \ll 1$ and $L \gg 1$, but may differ from line to line. \end{remark}
We will next show that if $u_i$, $i=1,2$ are two solutions as in Theorem \ref{thm-main}, then $Y_1(u,\tau)$ and $Y_2(u,\tau)$ are comparable to each other in the whole tip region $\mathcal{T}_{\theta}$.
\begin{corollary}
Let $u_i(y,\tau)$, $i=1,2$ be two solutions as in Theorem \ref{thm-main}, and let $Y_i(u,\tau)$, $i=1,2$ be the corresponding solutions in flipped coordinates.
Then, for every $\epsilon > 0$ there exist $0 < \theta \ll 1$ small and $\tau_0 \ll -1$ so that
\begin{equation}
\label{eq-comp-der}
1 - \epsilon < \frac{Y_{1u}}{Y_{2u}} < 1 + \epsilon \quad \, \mbox{in} \,\,\, \mathcal{T}_{\theta}, \quad \mbox{for} \,\, \tau \le \tau_0.
\end{equation} \end{corollary}
\begin{proof}
We begin by observing that \eqref{eq-comp-der} holds in the collar region $\mathcal{K}_{\theta,L}$, which is an immediate consequence of \eqref{eq-cor-Sigurd} and ${\displaystyle 1 -\epsilon < \frac{Y_1}{Y_2} < 1 + \epsilon}$, which holds in the considered region.
Hence, we only need to show that \eqref{eq-comp-der} holds in the soliton region $S_L$.
Let $Z_i$, $i=1,2$, be the functions defined in terms of $Y_i$ by \eqref{eqn-Y-expansion}.
Then $\lim_{\tau\to -\infty} Z_i(\rho,\tau) = Z_0(\rho)$, uniformly smoothly on compact sets in $\rho$, for both $i\in \{1,2\}$.
Write
\begin{equation}
\label{eq-Y1u-Y2u}
\Bigl|\frac{Y_{1u}}{Y_{2u}} - 1\Bigr| = \Bigl|\frac{Z_{1\rho}}{Z_{2\rho}} - 1\Bigr| \le \frac{|Z_{1\rho} - Z_{0\rho}|}{|Z_{2\rho}|} + \frac{|Z_{2\rho} - Z_{0\rho}|}{|Z_{2\rho}|}.
\end{equation}
By the smoothness of $Z_i(\rho,\tau)$, $Z_0(\rho)$ around the origin we have $(Z_i)_\rho(0,\tau) = (Z_0)_\rho (0) = 0$ and hence,
\begin{equation}
\label{eq-diff-Z-small}
|(Z_i(\rho,\tau) - Z_0(\rho))_{\rho}| \le \sup_{\rho\in [0,2L]} |(Z_i(\rho,\tau) - Z_0(\rho))_{\rho\rho}|\, \rho < \eta \, \rho
\end{equation}
if $\tau \le \tau_0$ with $\tau_0 \ll -1$ sufficiently negative (here $\eta > 0$ is a small number to be chosen below).
This also implies
\[
|Z_{i\rho}(\rho,\tau)| \ge |Z_{0\rho}(\rho)| - \eta \, \rho.
\]
On the other hand, by the asymptotics for $Z_0(\rho)$ we have
\[
\lim_{\rho\to 0} \frac{Z_{0\rho}}{\rho} = -\frac{1}{ \sqrt{2} \, n} \qquad \mbox{and} \qquad \lim_{\rho\to +\infty} \frac{Z_{0\rho}}{\rho} = -\frac{1}{\sqrt{2}\, (n-1)}.
\]
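The first limit follows from the expansion of $Z_0$ near the origin used in the proof of Lemma \ref{lemma-help}, which gives $Z_{0\rho}(\rho) = -\frac{\rho}{\sqrt{2}\, n} + O(\rho^3)$, and the second follows from the asymptotics \eqref{eq-Z0-asymptotics}, which give $Z_{0\rho}(\rho) \sim -\frac{\rho}{\sqrt{2}\, (n-1)}$ as $\rho\to +\infty$.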
Let
\[
2a := \min\Biggl\{\frac{1}{2\sqrt{2}\, (n-1)}, \min_{\rho\in [\rho_1,\rho_2] } \, \frac{|Z_{0\rho}|}\rho\Biggr\},
\]
where $\rho_1 > 0$ is close to zero and $\rho_2 > 0$ is very large, so that ${\displaystyle \frac{|Z_{0\rho}|}\rho > \frac{1}{2\sqrt{2} (n-1)} }$, for $\rho \le \rho_1$ or $\rho \ge \rho_2$.
By the definition of $a$ we have ${\displaystyle \frac{|Z_{0\rho}|}\rho \ge 2 a > 0}$ for all $\rho$.
Choosing ${\displaystyle 0 < \eta < \min\{a, \frac{\epsilon}{2}\, a\}}$ we can make
\[
|Z_{i\rho}(\rho,\tau)| \ge a \, \rho, \qquad \rho\in [0,L], \quad \tau\le \tau_0 \ll -1.
\]
Combining this, \eqref{eq-Y1u-Y2u} and \eqref{eq-diff-Z-small} yields
\[
\Bigl|\frac{Z_{1\rho}}{Z_{2\rho}} - 1\Bigr| < 2\, \frac{\eta }{a } \leq \epsilon, \qquad \rho\in [0,L], \quad \tau\le \tau_0 \ll -1.
\]
This concludes the proof of the Corollary. \end{proof}
\section{The cylindrical region} \label{sec-cylindrical}
Let $u_1(y,\tau)$ and $u_2(y,\tau)$ be the two solutions to equation \eqref{eq-u} as in the statement of Theorem \ref{thm-main} and let $u_2^{\alpha\beta\gamma}$ be defined by \eqref{eq-ualphabeta}. In this section we will estimate the difference $w:= u_1-u_2^{\alpha\beta\gamma}$ in the cylindrical region
$\mathcal{C}_{\theta} = \{y\,\,\, |\,\,\, u_1(y,\tau) \ge {\theta}/{2}\, \}$, for a given number $\theta > 0$ small and any $\tau \leq \tau_0 \ll -1$. Recall all the definitions and notation introduced in Section \ref{subsec-cylindrical}. Before we state and prove the main estimate in the cylindrical region, we give a remark that the reader should keep in mind throughout the whole section.
\begin{remark}
\label{rem-cylindrical}
Recall that we write simply $u_2(y,\tau)$ for $u_2^{\alpha\beta\gamma}(y,\tau)$, where
\[
u_2^{\alpha\beta\gamma}(y,\tau) = \sqrt{1+\beta e^{\tau}}\, u_2\Bigl(\frac{y}{\sqrt{1+\beta e^{\tau}}}, \tau + \gamma - \log(1 + \beta e^{\tau})\Bigr),
\]
which is still a solution to \eqref{eq-u}, and that we simply write $w(y,\tau)$ for $w^{\alpha\beta\gamma}(y,\tau) := u_1(y,\tau) - u_2^{\alpha\beta\gamma}(y,\tau)$.
As it has been already indicated in Section \ref{subsec-conclusion}, we will choose $\alpha=\alpha(\tau_0)$, $\beta = \beta(\tau_0)$ and $\gamma = \gamma(\tau_0)$ (as it will be explained in Section \ref{sec-conclusion}) so that the projections $\mathcal{P}_+ w_\mathcal{C}(\tau_0) = \mathcal{P}_0 w_\mathcal{C}(\tau_0) = 0$, at a suitably chosen $\tau_0 \ll -1$.
In Section \ref{sec-conclusion} we show that the triple $(\alpha,\beta,\gamma)$ is admissible with respect to $\tau_0$, in the sense of Definition \ref{def-admissible}, if $\tau_0$ is sufficiently small.
That will imply that all the estimates that follow are independent of the parameters $\alpha, \beta, \gamma$, as long as they are admissible with respect to $\tau_0$, and hold for $u_1(y,\tau) - u_2^{\alpha\beta\gamma}(y,\tau)$, for $\tau\le \tau_0$ (as explained in Section \ref{sec-regions}). \end{remark}
Our goal in this section is to prove that the bound~\eqref{eqn-cylindrical1} holds as stated next.
\begin{prop}\label{prop-cylindrical}
For every $\epsilon > 0$ and $\theta > 0$ small there exists a $\tau_0 \ll -1$ so that if $w(y,\tau)$ is a solution to \eqref{eqn-ww} for which $\mathcal{P}_+w_\mathcal{C}(\tau_0) = 0$, then we have
\[
\| \hat w_\mathcal{C} \|_{\mathfrak{D},\infty}
\leq
\epsilon\, \big(\| w_\mathcal{C} \|_{\mathfrak{D},\infty} + \|w\, \chi_{D_{\theta}}\|_{\mathfrak{H},\infty}\big),
\]
where $D_{\theta} := \{y\,\,\,|\,\,\, \theta/2 \le u_1(y,\tau) \le \theta\}$ and $\hat{w}_\mathcal{C} = \mathcal{P}_- w_\mathcal{C} + \mathcal{P}_+ w_\mathcal{C}$. \end{prop}
The rest of this section will be devoted to the proof of Proposition \ref{prop-cylindrical}. To simplify the notation for the rest of the section we will simply denote $u_2^{\alpha\beta\gamma}$ by $u_2$ and set $w:=u_1-u_2$. The difference $w$ satisfies \begin{equation}
\label{eqn-ww}
w_\tau = \frac{w_{yy}}{1+u_{1y}^2} -\frac{(u_{1y}+u_{2y})u_{2yy}}{(1+u_{1y}^2)(1+u_{2y}^2)} w_y -\frac y2 w_y + \frac 12 w +\frac{n-1}{u_1u_2} w \end{equation} which we can rewrite as \begin{equation}
w_\tau = \mathcal{L} w + \mathcal{E} w
\label{eq-w-linear} \end{equation} in which $ \mathcal{L} = \partial_y^2 - \frac y2 \partial_y + 1$ is as above, and where $ \mathcal{E} $ is given by \begin{equation}
\label{eq-E10}
\mathcal{E}[\phi] = -\frac{u_{1y}^2}{1+u_{1y}^2} \phi_{yy} -\frac{(u_{1y}+u_{2y})u_{2yy}}{(1+u_{1y}^2)(1+u_{2y}^2)} \phi_y +\frac{2(n-1) - u_1u_2}{2u_1u_2}\phi. \end{equation}
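Indeed, \eqref{eq-E10} is obtained by subtracting $\mathcal{L} w = w_{yy} - \frac y2 w_y + w$ from the right hand side of \eqref{eqn-ww}, using $\frac{1}{1+u_{1y}^2} - 1 = -\frac{u_{1y}^2}{1+u_{1y}^2}$ and $\frac{n-1}{u_1u_2} - \frac 12 = \frac{2(n-1) - u_1u_2}{2u_1u_2}$.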
\subsection{The operator $ \mathcal{L} $} We recall that the definitions of the Hilbert spaces $\mathfrak{H}$, $\mathfrak{D}$ and $\mathfrak{D}^*$ are given in Section \ref{subsec-cylindrical}. The formal linear operator \[ \mathcal{L} = \partial_y^2 - \frac y2 \partial_y + 1 = -\partial_y^* \partial_y + 1 \] defines a bounded operator $\mathcal{L}:\mathfrak{D}\to\mathfrak{D}^*$, meaning that for any $f\in \mathfrak{D}$ we have that $\mathcal{L} f\in \mathfrak{D}^*$ is the functional given by \[ \forall \phi\in \mathfrak{D} : \langle \mathcal{L} f, \phi\rangle = \int_{{\mathbb R}} \bigl ( -f_y\phi_y + f\phi\bigr ) \, e^{-y^2/4}\, dy. \] By integrating by parts one verifies that if $f\in C^2_c$, one has \[ \langle \mathcal{L} f, \phi \rangle = \int_{\mathbb R} \bigl ( f_{yy} - \frac y2 f_y + f\bigr ) \phi \,e^{-y^2/4}dy, \] so that the weak definition of $ \mathcal{L} f $ coincides with the classical definition.
\subsection{Operator bounds and Poincar\'e type inequalities}
The following inequality was shown in Lemma 4.12 in \cite{ADS}.
\begin{lemma} \label{lem-Poincare} For any $f\in \mathfrak{D}$ one has
\[
\int_{\mathbb R} y^2 f(y)^2 e^{-y^2/4} dy \leq C \int_{\mathbb R} \big ( f(y)^2 + f_y(y)^2 \big ) \, e^{-y^2/4} dy,
\]
which implies that the multiplication operator $ f\mapsto yf $ is bounded from $ \mathfrak{D} $ to $ \mathfrak{H} $, i.e.
\[
\|yf\|_\mathfrak{H} \leq C \|f\|_\mathfrak{D},
\]
for all $ f\in \mathfrak{D} $. \end{lemma}
As a consequence we have the following two lemmas:
\begin{lemma} \label{lem-bounded-first-order} The following operators are bounded both as operators from $\mathfrak{D}$ to $\mathfrak{H}$ and also as operators from $\mathfrak{H}$ to $\mathfrak{D}^*$:
\[
f\mapsto yf,\quad f\mapsto \partial_y f, \quad f\mapsto \partial_y^* f = \bigl(-\partial_y + \frac y2\bigr)f,
\]
where $\partial_y^*$ is the formal adjoint of the operator $\partial_y$, which satisfies $\langle f, \partial_y^* g\rangle = \langle \partial_yf, g\rangle$ for all $f,g\in\mathfrak{D}$. \end{lemma}
\begin{lemma}\label{lem-bounded-second-order}
The following operators are bounded from $\mathfrak{D}$ to $\mathfrak{D}^*$:
\[
f\mapsto y^2f, \quad f\mapsto y\partial_y f, \quad f\mapsto \partial_y^2 f.
\] \end{lemma}
\begin{proof}[Proof of Lemmas \ref{lem-bounded-first-order} and \ref{lem-bounded-second-order}]
By definition of the norms in $\mathfrak{D}$ and $\mathfrak{H}$ the operator $\partial_y$ is bounded from $\mathfrak{D}$ to $\mathfrak{H}$, and by duality its adjoint $\partial_y^* = -\partial_y + \frac y2$ is bounded from $\mathfrak{H}$ to $\mathfrak{D}^*$.
The Poincar\'e inequality from Lemma~\ref{lem-Poincare} implies directly that $f\mapsto yf$ is bounded from $\mathfrak{D}$ to $\mathcal{H}$.
By duality the same multiplication operator is also bounded from $ \mathcal{H} $ to $ \mathfrak{D}^* $; i.e.~for every $ f\in \mathcal{H} $ the product $ y f $ defines a linear functional on $ \mathfrak{D} $ by $ \langle yf, \phi \rangle = \langle f, y\phi \rangle $ for every $ \phi\in \mathfrak{D} $.
We get
\[
\|y\, f \|_{\mathfrak{D}^*} \leq C \|f\|_\mathfrak{H},
\]
for all $ f\in\mathfrak{H} $.
Composing the multiplications $y : \mathfrak{D} \to \mathcal{H} $ and $ y:\mathcal{H}\to\mathfrak{D}^* $ we see that multiplication with $ y^2 $ is bounded as operator from $ \mathfrak{D} $ to $ \mathfrak{D}^* $, i.e.~for all $ f\in\mathfrak{D} $ we have $ y^2f\in \mathfrak{D}^* $, and
\[
\|y^2\, f \|_{\mathfrak{D}^*} \leq C^2 \|f\|_\mathfrak{D}.
\]
Since $y:\mathfrak{D} \to \mathcal{H}$ and $\partial_y:\mathfrak{D}\to\mathcal{H}$ are both bounded operators, we find that $\partial_y^* = - \partial_y+\frac y2$ is also bounded from $\mathfrak{D}$ to $\mathcal{H}$.
By duality again, it follows that $\partial_y$ is bounded from $\mathcal{H}$ to $\mathfrak{D}^*$.
This proves Lemma~\ref{lem-bounded-first-order}.
Each of the operators in Lemma~\ref{lem-bounded-second-order} is the composition of two operators from Lemma~\ref{lem-bounded-first-order}, so they are also bounded. \end{proof}
More generally, to estimate the operator norm of multiplication with some function $ m:{\mathbb R}\to{\mathbb R} $, seen as operator from $ \mathfrak{D} $ to $ \mathcal{H} $, we have \[
\|m\,f\|_\mathfrak{H} \leq \sup_{y\in{\mathbb R}} \frac{|m(y)|}{1+|y|} \; \|f\|_\mathfrak{D}. \]
Indeed the following lemma can be easily shown.
\begin{lemma}
Let $m:{\mathbb R}\to{\mathbb R}$ be a measurable function, consider the multiplication operator $\mathcal{M} : f\mapsto mf$.
Then, the following hold:
$\mathcal{M}:\mathcal{H}\to\mathcal{H}$ is bounded if $m\in L^\infty({\mathbb R})$, and $\|\mathcal{M}\|_{\mathcal{H}\to\mathcal{H}} \leq \|m\|_{L^\infty}$.
$\mathcal{M}:\mathfrak{D}\to\mathcal{H}$ is bounded if and only if $\mathcal{M}:\mathcal{H}\to\mathfrak{D}^*$ is bounded.
Both operators are bounded if $(1+|y|)^{-1}m(y)$ is bounded, and
\[
\|\mathcal{M}\|_{\mathcal{H}\to\mathfrak{D}^*} = \|\mathcal{M}\|_{\mathfrak{D}\to\mathcal{H}} \leq C\,\mathrm{ess~sup}_{y\in{\mathbb R}} \frac{|m(y)|}{1+|y|}.
\]
Finally, $\mathcal{M}$ is a bounded operator from $ \mathfrak{D} $ to $ \mathfrak{D}^* $ if $(1+|y|)^{-2}m(y)$ is bounded, and the operator norm is bounded by
\[
\|\mathcal{M}\|_{\mathfrak{D}\to\mathfrak{D}^*} \leq \mathrm{ess~sup}_{y\in{\mathbb R}} \frac{|m(y)|}{(1+|y|)^2} .
\] \end{lemma}
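Indeed, for the $\mathfrak{D}\to\mathcal{H}$ bound one writes $\|m\,f\|_\mathfrak{H}^2 = \int_{\mathbb R} m^2 f^2 e^{-y^2/4}\, dy \le \sup_{y}\frac{m(y)^2}{(1+|y|)^2}\, \|(1+|y|)f\|_\mathfrak{H}^2$, and $\|(1+|y|)f\|_\mathfrak{H} \le \|f\|_\mathfrak{H} + \|yf\|_\mathfrak{H} \le C\,\|f\|_\mathfrak{D}$ by Lemma \ref{lem-Poincare}; the remaining statements follow by duality and by composition, exactly as in the proof of Lemmas \ref{lem-bounded-first-order} and \ref{lem-bounded-second-order}.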
\subsection{Eigenfunctions of $ \mathcal{L} $} There is a sequence of polynomials $ \psi_n(y) = y^n+\cdots$ that are eigenfunctions of the operator $ \mathcal{L} $. The $ n^{\rm th} $ eigenfunction has eigenvalue $ \lambda_n = 1 - \frac n2 $. The first few eigenfunctions are given by \[ \psi_0(y) = 1, \quad \psi_1(y) = y, \quad \psi_2(y) = y^2-2 \] up to scaling.
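For instance, a direct computation gives $\mathcal{L}\psi_1 = -\frac y2 + y = \frac 12\, \psi_1$ and $\mathcal{L}\psi_2 = 2 - y^2 + (y^2-2) = 0$, in agreement with $\lambda_1 = \frac 12$ and $\lambda_2 = 0$; in general, $\psi_n$ coincides, up to normalization, with the Hermite polynomial $H_n(y/2)$ associated with the Gaussian weight $e^{-y^2/4}$.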
The functions $ \{\psi_n : n\in\mathrm{N}\} $ form an orthogonal basis in all three Hilbert spaces $ \mathfrak{D}$, $\mathcal{H}$ and $\mathfrak{D}^*$. The three projections $ \mathcal{P}_\pm $ and $ \mathcal{P}_0 $ onto the subspaces spanned by the eigenfunctions with negative/positive, or zero eigenvalues are therefore the same on each of the three Hilbert spaces. Since $ \psi_2 $ is the eigenfunction with eigenvalue zero, they are given by \[ \mathcal{P}_+f = \sum_{j=0}^1 \frac{\langle \psi_j, f\rangle }{\langle \psi_j, \psi_j\rangle}\psi_j,\quad \mathcal{P}_-f = \sum_{j=3}^\infty \frac{\langle \psi_j, f\rangle }{\langle \psi_j, \psi_j\rangle}\psi_j,\quad \mathcal{P}_0f = \frac{\langle \psi_2, f\rangle }{\langle \psi_2, \psi_2\rangle}\psi_2. \]
\subsection{Estimates for ancient solutions of the linear cylindrical equation}
In this section we will give energy type estimates for ancient solutions $ f:(-\infty, \tau_0] \to \mathfrak{D} $ of the linear cylindrical equation \begin{equation}
\label{eqn-linear1}
\frac{df}{d\tau} - \mathcal{L} f(\tau) = g(\tau). \end{equation}
\begin{lemma}
\label{lem-linear-cylindrical-estimates}
Let $ f:(-\infty, \tau_0] \to \mathfrak{D} $ be a bounded solution of \eqref{eqn-linear1}.
Then there is a constant $ C <\infty $ that does not depend on $ f $, such that
\[
\sup_{\tau\leq\tau_0}\| \hat f(\tau)\|_\mathfrak{H}^2 +\frac 1C \int_{-\infty}^{\tau_0} \|\hat f(\tau)\|_\mathfrak{D}^2\, d\tau
\leq \|f_+(\tau_0)\|_\mathfrak{H}^2 + C \int_{-\infty}^{\tau_0} \|\hat g(\tau)\|_{\mathfrak{D}^*}^2\, d\tau
\]
where $ f_+ = \mathcal{P}_+ f $ and $ \hat f = \mathcal{P}_+f + \mathcal{P}_- f$. \end{lemma} \begin{proof}
This is a standard cylindrical estimate applied to the infinite time domain $ (-\infty, \tau_0] $.
Since the operator $ \mathcal{L} $ commutes with the projections $ \mathcal{P}_\pm$ we can split $ f(\tau) $ into its $ \mathcal{P}_+ $ and $ \mathcal{P}_- $ components, and estimate these separately.
Applying the projection $ \mathcal{P}_- $ to both sides of the equation $ f_\tau - \mathcal{L} f = g $ we get
\[
f_-'(\tau) = \mathcal{L} f_-(\tau) + g_-(\tau),
\]
where $g_-(\tau) = \mathcal{P}_- g(\tau)$.
This implies
\[
\frac 12\frac{d}{d\tau} \|f_-\|_\mathfrak{H}^2 = \langle f_-, \mathcal{L} f_- \rangle + \langle f_-, g_- \rangle.
\]
Using the eigenfunction expansion of $ f_- $ we get
\[
\langle f_-, \mathcal{L} f_- \rangle \leq - C \|f_-\|_\mathfrak{D}^2.
\]
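For instance, if the $\mathfrak{D}$ norm is given by $\|f\|_\mathfrak{D}^2 = \|f\|_\mathfrak{H}^2 + \|f_y\|_\mathfrak{H}^2$ (cf.\ Lemma \ref{lem-Poincare}), then writing $f_- = \sum_{n\geq 3} c_n \psi_n$ gives $\|f_-\|_\mathfrak{D}^2 = \langle f_-, (2-\mathcal{L})f_-\rangle = \sum_{n\geq 3}\bigl(1+\tfrac n2\bigr)c_n^2\|\psi_n\|_\mathfrak{H}^2$, while $\langle f_-,\mathcal{L}f_-\rangle = \sum_{n\geq 3}\bigl(1-\tfrac n2\bigr)c_n^2\|\psi_n\|_\mathfrak{H}^2$, so that one may take $C=\tfrac 15$, since $\tfrac n2 - 1 \geq \tfrac 15\bigl(\tfrac n2 + 1\bigr)$ for $n\geq 3$.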
We also have
\[
\langle f_-, g_- \rangle \leq \|f_-\|_\mathfrak{D} \; \|g_-\|_{\mathfrak{D}^*} \leq \frac C2 \|f_-\|_\mathfrak{D}^2 + \frac {1}{2C}\|g_-\|_{\mathfrak{D}^*}^2.
\]
We therefore get
\[
\frac 12\frac{d}{d\tau} \|f_-\|_\mathfrak{H}^2 \leq -\frac C2 \|f_-\|_\mathfrak{D}^2 + \frac 1 {2C} \|g_-\|_{\mathfrak{D}^*}^2.
\]
Integrating in time over the interval $ (-\infty, \tau]$ then leads to
\[
\frac12 \|f_-(\tau)\|_\mathfrak{H}^2 + \frac C2 \int _{-\infty}^{\tau} \|f_-(\tau')\|_\mathfrak{D}^2 \; d\tau' \leq \frac 1{2C} \int_{-\infty}^\tau \|g_-(\tau')\|_{\mathfrak{D}^*}^2 \;d\tau'.
\]
Taking the supremum over $ \tau\leq \tau_0 $ then gives us the $ \mathcal{P}_- $ component of \eqref{eq-basic-cylindrical-estimate}.
For the other component, $ f_+(\tau) = \mathcal{P}_+ f$, we have
\[
\langle f_+, \mathcal{L} f_+ \rangle \geq C \|f_+\|_\mathfrak{D}^2.
\]
A similar calculation then leads to
\[
\frac 12\frac{d}{d\tau} \|f_+\|_\mathfrak{H}^2 \geq \frac C2 \|f_+\|_\mathfrak{D}^2 - \frac 1 {2C} \|g_+\|_{\mathfrak{D}^*}^2.
\]
Integrating this over the interval $ [\tau, \tau_0] $ introduces the boundary term $ \|f_+(\tau_0)\|_\mathfrak{H}^2 $, and gives us the estimate
\[
\frac12 \|f_+(\tau)\|_\mathfrak{H}^2 + \frac C2 \int _{\tau}^{\tau_0} \|f_+(\tau')\|_\mathfrak{D}^2 \; d\tau' \leq \frac12 \|f_+(\tau_0)\|_\mathfrak{H}^2 + \frac 1{2C} \int_{\tau}^{\tau_0} \|g_+(\tau')\|_{\mathfrak{D}^*}^2 \; d\tau'.
\]
Adding the estimates for $ \mathcal{P}_+ f $ and $ \mathcal{P}_-f $ yields \eqref{eq-basic-cylindrical-estimate}. \end{proof}
\begin{lemma}
\label{lem-linear-cylindrical-estimates-sup-L2-version}
Let $f:(-\infty, \tau_0] \to \mathfrak{D}$ be a bounded solution of equation \eqref{eqn-linear1}.
If $T>0$ is sufficiently large, then there is a constant $C_{\star}$ such that
\begin{equation}
\begin{split}
\sup_{\tau\leq\tau_0}\| \hat f(\tau)\|_\mathfrak{H}^2
+\frac 1{C_{\star}} \sup_{n\geq 0} \int_{I_n} &\|\hat f(\tau)\|_\mathfrak{D}^2\, d\tau \\
&\leq \|f_+(\tau_0)\|_\mathfrak{H}^2 + C_\star \sup _{n\geq 0}\int_{I_n} \|\hat g(\tau)\|_{\mathfrak{D}^*}^2\, d\tau,
\label{eq-basic-cylindrical-estimate}
\end{split}
\end{equation}
where $I_n$ is the interval $ I_n = [\tau_0-(n+1)T, \tau_0-nT] $ and where $ f_+ = \mathcal{P}_+ f $ and $ \hat f = \mathcal{P}_+f + \mathcal{P}_- f$.
\end{lemma} \begin{proof}
To simplify notation we assume in this proof that $\mathcal{P}_0f(\tau)=0$, i.e.~that $\hat f(\tau) = f(\tau)$ for all $\tau$.
Likewise we assume that $\hat g(\tau) = g(\tau)$ for all $\tau\leq \tau_0$.
Choose a large number $T>0$ and let $\eta\in C^\infty_c({\mathbb R})$ be a smooth cut-off function with $\eta(t)=1$ for $t\in[-T,0]$, $\mathop{\mathrm {supp}}\eta \subset(-2T, +T)$.
We may assume that
\begin{equation}
\label{eq-cut-off-f-derivative-bound}
|\eta'(\tau)| \leq \frac 2 T\text{ for all }\tau\in{\mathbb R}.
\end{equation}
For any integer $n\geq 0$ we consider
\[
f_n(\tau) = \eta_n(\tau)f(\tau), \quad\text{where}\quad \eta_n(\tau) = \eta(\tau - \tau_0 + nT).
\]
\begin{figure}
\caption{The cut off function $\eta_n(\tau)$, and the intervals $I_n$ and $J_n$.}
\end{figure}
The cut-off function $\eta_n$ satisfies $\eta_n(\tau)=1$ for $\tau\in I_n$, and $\mathop{\mathrm {supp}}\eta_n \subset J_n$, where, by definition,
\[
J_n = I_{n+1}\cup I_n \cup I_{n-1}.
\]
The function $f_n$ is a solution of
\[
f_n'(\tau) -\mathcal{L} f_n(\tau) = \eta_n'(\tau) f(\tau) + \eta_n(\tau) g(\tau).
\]
If $n\geq 1$, then we can apply Lemma~\ref{lem-linear-cylindrical-estimates} to $f_n$, with $f_n(\tau_0) = 0$.
Since $f_n$ and $f$ coincide on $I_n$, we get
\begin{align*}
\sup_{\tau\in I_n}\|f(\tau)\|_\mathfrak{H}^2 +\frac1{C} \int_{I_n} \|f\|_{\mathfrak{D}}^2 d\tau &\leq \sup_{\tau\in J_n}\|f_n(\tau)\|_\mathfrak{H}^2
+\frac1{C} \int_{J_n} \|f_n\|_{\mathfrak{D}}^2 d\tau \\
&\leq C \int_{J_n} \|\eta_n' f +\eta_n g\|_{\mathfrak{D}^*}^2 d\tau.
\end{align*}
Here $C$ is the constant from Lemma~\ref{lem-linear-cylindrical-estimates}.
Using $(a+b)^2\leq 2(a^2+b^2)$ and also our bound \eqref{eq-cut-off-f-derivative-bound} for $\eta_n'(\tau)$ we get
\[
\sup_{\tau\in I_n}\|f(\tau)\|_\mathfrak{H}^2 +\frac1{C} \int_{I_n} \|f\|_{\mathfrak{D}}^2 d\tau \leq C \int_{J_n} \biggl\{\frac 2{T^2}\| f \|_{\mathfrak{D}^*}^2 + \| g \|_{\mathfrak{D}^*}^2 \biggr\} d\tau.
\]
It follows that
\begin{equation}
\label{eq-supinfty-proof-1}
\begin{split}
\sup_{\tau\in I_n}\|f(\tau)\|_\mathfrak{H}^2 + \frac1{C} \int_{I_n} &\|f\|_{\mathfrak{D}}^2 d\tau \\
&\leq \frac{3C}{T^2} \sup_k \int_{I_k} \| f\|_{\mathfrak{D}^*}^2 d\tau + 3C\sup_k \int_{I_k} \| g \|_{\mathfrak{D}^*}^2 d\tau.
\end{split}
\end{equation}
For $n=0$ the truncated function $f_n(\tau)$ is not defined for $\tau > \tau_0$ and we must use an estimate on $J_0 = I_1\cup I_0$.
We apply Lemma~\ref{lem-linear-cylindrical-estimates} to the function $f_0(\tau) = \eta_0(\tau)f(\tau)$:
\begin{equation} \label{eq-supinfty-proof-2}
\begin{split}
\sup_{\tau\in I_0}\|f(\tau)\|_\mathfrak{H}^2 &
+\frac1{C} \int_{I_0} \|f\|_{\mathfrak{D}}^2 d\tau\\
&\leq \sup_{\tau\leq\tau_0}\|f_0(\tau)\|_\mathfrak{H}^2 +\frac1{C} \int_{-\infty}^{\tau_0} \|f_0\|_{\mathfrak{D}}^2 d\tau
\\
&\leq \|f_+(\tau_0)\|_\mathfrak{H}^2 + C \int_{-\infty}^{\tau_0} \|\eta_0'f+\eta_0 g\|_{\mathfrak{D}^*}^2 d\tau
\\
&\leq \|f_+(\tau_0)\|_\mathfrak{H}^2 + 2C \int_{I_1} (\eta_0')^2 \|f\|_{\mathfrak{D}^*}^2 d\tau + 2C \int_{J_0} \|g\|_{\mathfrak{D}^*}^2 d\tau
\\
&\leq \|f_+(\tau_0)\|_\mathfrak{H}^2 + \frac{2C}{T^2} \sup_k \int_{I_k} \| f\|_{\mathfrak{D}^*}^2 d\tau + 2C\sup_k \int_{I_k} \| g \|_{\mathfrak{D}^*}^2 d\tau.
\end{split}
\end{equation}
Combining \eqref{eq-supinfty-proof-1} and \eqref{eq-supinfty-proof-2}, and taking the supremum over $n$, yield
\[
\begin{split}
\sup_{\tau\leq \tau_0}\|f(\tau)\|_\mathfrak{H}^2
&+\frac1{C}\sup_{n}\int_{I_n} \|f\|_{\mathfrak{D}}^2 d\tau \\
&\leq \|f_+(\tau_0)\|_\mathfrak{H}^2 +\frac{3C}{T^2} \sup_k \int_{I_k} \| f\|_{\mathfrak{D}^*}^2 d\tau + 3C\sup_k \int_{I_k} \| g \|_{\mathfrak{D}^*}^2 d\tau.
\end{split}
\]
Since $\|u\|_\mathfrak{H} \leq \|u\|_\mathfrak{D}$ for all $u\in\mathfrak{D}$, it follows by duality that $\|u\|_{\mathfrak{D}^*} \leq \|u\|_\mathfrak{H}$ for all $u\in\mathcal{H}$, and thus we have $\|f(\tau)\|_{\mathfrak{D}^*} \leq \|f(\tau)\|_{\mathfrak{D}}$.
Therefore
\[
\begin{split}
\sup_{\tau\leq \tau_0}\|f(\tau)\|_\mathfrak{H}^2
&+\frac1{C}\sup_{n}\int_{I_n} \|f\|_{\mathfrak{D}}^2 d\tau \\
&\leq \|f_+(\tau_0)\|_\mathfrak{H}^2 +\frac{3C}{T^2} \sup_k \int_{I_k} \| f\|_{\mathfrak{D}}^2 d\tau + 3C\sup_k \int_{I_k} \| g \|_{\mathfrak{D}^*}^2 d\tau.
\end{split}
\]
At this point we assume that $T$ is so large that $3C/T^2 \leq 1/{2C}$, which lets us move the terms with $f$ on the right to the left hand side of the inequality:
\[
\sup_{\tau\leq \tau_0}\|f(\tau)\|_\mathfrak{H}^2 +\frac1{2C}\sup_{n}\int_{I_n} \|f\|_{\mathfrak{D}}^2 d\tau \leq \|f_+(\tau_0)\|_\mathfrak{H}^2 + 3C\sup_k \int_{I_k} \| g \|_{\mathfrak{D}^*}^2 d\tau.
\] \end{proof}
\subsection{$L^2$-Estimates for the error terms}
The two solutions $u_1, u_2$ of equation \eqref{eq-u} that we are considering are only defined for \(y^2 \leq (2+o(1))|\tau|\). This follows from the asymptotics in our previous work \cite{ADS} (see also Theorems \ref{thm-old} and \ref{thm-O1}) where it was also shown that they satisfy the asymptotics \[ u(y, \tau) = \sqrt{(n-1)\, (2-z^2)} + o(1), \qquad \mbox{as}\,\, \tau\to-\infty \]
uniformly in $z$, where $ {\displaystyle z = \frac{y}{\sqrt{|\tau|}}}$.
We have seen that $w:= u_1-u_2$ satisfies \eqref{eq-w-linear} where the error term $\mathcal{E}$ is given by \eqref{eq-E10}. We will now consider this equation only in the ``cylindrical region,'' i.e.~the region where \[
u > \frac{\theta}2 \qquad \text{ i.e. }\quad \frac{y}{\sqrt{|\tau|}} < \sqrt{2 - \frac{\theta^2}{4(n-1)}} + o(1). \] To concentrate on this region, we choose a cut-off function $\Phi\in C^\infty({\mathbb R})$ which decreases smoothly from $1$ to $0$ in the interior of the interval \[ \sqrt{2 - \frac{\theta^2}{n-1}} < z < \sqrt{2 - \frac{\theta^2}{4(n-1)}}. \] With this cut-off function we then define \[
\varphi_\mathcal{C} (y, \tau) = \Phi\Bigl(\frac{y}{\sqrt{|\tau|}}\Bigr)
|(\varphi_\mathcal{C})_y|^2 + |(\varphi_\mathcal{C})_{yy}| + |(\varphi_\mathcal{C})_{\tau}| \le \frac{\bar{C}(\theta)}{|\tau|} \] where $\bar{C}(\theta)$ is a constant that depends on $\theta$ and that may change from line to line in the text. The localized difference function $w_\mathcal{C}$ satisfies \begin{equation}
w_{\mathcal{C},\tau} - \mathcal{L} w_\mathcal{C} = \mathcal{E}[w_\mathcal{C}] +\bar{\mathcal{E}} [w,\varphi_\mathcal{C}]
\label{eq-w-c-evolution} \end{equation} where the operator $\mathcal{E}$ is again defined by \eqref{eq-E10} and where the new error term $\bar{\mathcal{E}}$ is given by the commutator \[ \bar{\mathcal{E}}[w,\varphi_\mathcal{C}] = \bigl[ \partial_\tau - (\mathcal{L} +\mathcal{E}), \varphi_\mathcal{C} \bigr]w, \] i.e. \begin{multline}
\label{eq-bar-E}
\bar{\mathcal{E}}[w,\varphi_\mathcal{C}] = \\
\Biggl \{\varphi_{\mathcal{C},\tau} - \varphi_{\mathcal{C}, yy} + \frac{u_{1y}^2}{1+u_{1y}^2}\, \varphi_{\mathcal{C}, yy}
+ \frac{(u_{1y}+u_{2y})u_{2yy}}{(1+u_{1y}^2)(1+u_{2y}^2)} (\varphi_\mathcal{C})_y + \frac y2 (\varphi_\mathcal{C})_y\Biggr \} w \\
+ \biggl \{\frac{2u_{1y}^2}{1+u_{1y}^2} (\varphi_\mathcal{C})_y - 2(\varphi_\mathcal{C})_y\biggr\} w_y. \end{multline}
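Indeed, since $w$ itself solves \eqref{eq-w-linear}, we have $\partial_\tau(\varphi_\mathcal{C}\, w) - (\mathcal{L}+\mathcal{E})(\varphi_\mathcal{C}\, w) = \varphi_\mathcal{C}\,\bigl(w_\tau - (\mathcal{L}+\mathcal{E})w\bigr) + \bigl[\partial_\tau - (\mathcal{L} +\mathcal{E}), \varphi_\mathcal{C}\bigr] w$, and the first term on the right hand side vanishes.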
Equation \eqref{eq-w-c-evolution} for $w_\mathcal{C}$ is not self contained because of the last term $\bar{\mathcal{E}}[w,\varphi_\mathcal{C}]$, which involves $w$ rather than $w_\mathcal{C}$. The extra non local term is supported in the intersection of the cylindrical and tip regions because all the terms in it involve derivatives of $\varphi_\mathcal{C}$, but not $\varphi_\mathcal{C}$ itself.
Let us abbreviate the right hand side in \eqref{eq-w-c-evolution} to \[ g := \mathcal{E}[w_\mathcal{C}] + \bar{\mathcal{E}}[w,\varphi_\mathcal{C}]. \] Apply Lemma \ref{lem-linear-cylindrical-estimates-sup-L2-version} to $w_\mathcal{C}$ solving \eqref{eq-w-c-evolution}, to conclude that there exist $\tau_0 \ll -1$ and constant $C_* > 0$, so that if the parameters $(\alpha, \beta, \gamma)$ are chosen to ensure that $\mathcal{P}_+ w_\mathcal{C}(\tau_0) = 0$, then $\hat w_\mathcal{C} := \mathcal{P}_+ w_\mathcal{C} + \mathcal{P}_- w_\mathcal{C}$ satisfies the estimate \begin{equation}
\label{eq-first-step}
\|\hat{w}_\mathcal{C}\|_{\mathfrak{D},\infty} \le C_* \, \|g\|_{\mathfrak{D}^*,\infty} \end{equation} for all $\tau \le \tau_0$.
In the next two lemmas we focus on estimating $\|g\|_{\mathfrak{D}^*}$.
\begin{lemma}
\label{lem-error1-est}
For every $\epsilon > 0$ there exists a $\tau_0 \ll -1$ so that for $\tau\le \tau_0$ we have
\[
\|\mathcal{E}[w_\mathcal{C}]\|_{\mathfrak{D}^*} \le \epsilon\, \|w_\mathcal{C}\|_{\mathfrak{D}}.
\] \end{lemma}
\begin{proof}
Recall that
\[\mathcal{E}[w_\mathcal{C}]
= -\frac{u_{1y}^2}{1+u_{1y}^2} (w_\mathcal{C})_{yy} -\frac{(u_{1y}+u_{2y})u_{2yy}}{(1+u_{1y}^2)(1+u_{2y}^2)} (w_\mathcal{C})_y +\frac{2(n-1) - u_1u_2}{2u_1u_2}w_\mathcal{C}.\] In \cite{ADS} we showed that for $\tau \leq \tau_0 \ll-1$
\begin{equation}
\label{eq-apriori-bounds}
|(u_i)_y| + |(u_i)_{yy}| + |(u_i)_{yyy}| \le \frac{\bar{C}(\theta)}{\sqrt{|\tau|}}, \qquad \mbox{for} \,\,\, (y,\tau) \in \mathcal{C}_{\theta}
\end{equation}
where $u_i, i=1,2$ is any of the two considered solutions.
The constant $\bar C(\theta)$ depends on $\theta$ and may change from line to line, but it is independent of $\tau$ as long as $\tau \leq \tau_0 \ll -1$.
Using \eqref{eq-apriori-bounds} and Lemma \ref{lem-bounded-second-order} we have,
\begin{equation}
\label{eq-term-one}
\Bigl\|\frac{u_{1y}^2}{1+u_{1y}^2} (w_\mathcal{C})_{yy}\Bigr\|_{\mathfrak{D}^*} \le \frac{\bar{C}(\theta)}{|\tau|} \|(w_\mathcal{C})_{yy}\|_{\mathfrak{D}^*} \le \frac{\bar{C}(\theta)}{|\tau|}\, \|w_\mathcal{C}\|_{\mathfrak{D}},
\end{equation}
while by \eqref{eq-apriori-bounds} and Lemma \ref{lem-bounded-first-order} we have,
\begin{equation}
\label{eq-term-two}
\Bigl\|\frac{(u_{1y} + u_{2y}) u_{2yy}}{(1+u_{1y}^2) (1+ u_{2y}^2)}\, (w_\mathcal{C})_y\Bigr\|_{\mathfrak{D}^*} \le \frac{\bar{C}(\theta)}{|\tau|} \|(w_\mathcal{C})_y\|_{\mathfrak{D}^*} \le \frac{\bar{C}(\theta)}{|\tau|}\, \|w_\mathcal{C}\|_{\mathfrak{H}}.
\end{equation}
Also,
\[
\Bigl \|\frac{(2(n-1) - u_1 u_2)}{2u_1 u_2}\, w_\mathcal{C}\Bigr \|_{\mathfrak{D}^*} \le \Bigl \| \frac{(2(n-1) - u_1^2)}{2u_1 u_2}\, w_\mathcal{C}\Bigr \|_{\mathfrak{D}^*} + \Bigl \|\frac{(u_1 - u_2)}{2u_2}\, w_\mathcal{C}\Bigr\|_{\mathfrak{D}^*}.
\]
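The splitting follows from writing $2(n-1) - u_1 u_2 = \bigl(2(n-1) - u_1^2\bigr) + u_1\,(u_1 - u_2)$ and the triangle inequality.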
The two terms on the right hand side are treated in a very similar way, so we only explain how to deal with the first one: Lemma \ref{lem-bounded-first-order}, the uniform boundedness of our solutions and the fact that $u_i \ge \theta/4$ in $\mathcal{C}_\theta$ for $i\in \{1,2\}$, give
\[
\begin{split}
\Bigl \| \frac{(2(n-1) - u_1^2)}{2u_1 u_2}\, w_\mathcal{C}\Bigr\|_{\mathfrak{D}^*} &\le \frac{\bar{C}(\theta)}{\theta^2}\|(\sqrt{2(n-1)} - u_1) \, w_\mathcal{C}\|_{\mathfrak{D}^*} \\
&\le \frac{\bar{C}(\theta)}{\theta^2}\Bigl \|\frac{(\sqrt{2(n-1)} - u_1)}{y+1}\, w_\mathcal{C}\Bigr \|_{ \mathfrak{H}}.
\end{split}
\]
Then, for any $K >0$ we have
\[
\begin{split}
\Bigl\| \frac{(2(n-1) - u_1^2)}{2u_1 u_2}\, w_\mathcal{C}\Bigr\|_{\mathfrak{D}^*}^2 &\le
\frac{\bar{C}(\theta)}{\theta^2} \, \int_{0 \le y \le K} \frac{(\sqrt{2(n-1)} - u_1)^2}{(y+1)^2}\, w_\mathcal{C}^2 \, e^{-\frac{y^2}{4}}\, dy \\
&+ \frac{\bar{C}(\theta)}{\theta^2}\, \int_{y \ge K} \frac{(\sqrt{2(n-1)} - u_1)^2}{(y+1)^2}\, w_\mathcal{C}^2 \, e^{-\frac{y^2}{4}}\, dy.
\end{split}
\]
Now for any given $\epsilon >0$ we choose $K$ large so that ${\displaystyle \frac{\bar{C}(\theta)}{\theta^2 K^2} < \frac \epsilon{6} }$, and then for that chosen $K$ we choose a $\tau_0 \ll -1$ so that ${\displaystyle \frac{\bar{C}(\theta)}{\theta^2}(\sqrt{2(n-1)} - u_1) < \frac \epsilon{6}}$ for all $\tau \le \tau_0$ and $0 \le y \le K$ (note that here we use that $u_i(y,\tau)$ converges uniformly on compact sets in $y$ to $\sqrt{2(n-1)}$, as $\tau\to -\infty$).
We conclude that for $\tau \leq \tau_0$
\begin{equation}
\label{eq-term-three}
\Bigl\| \frac{(2(n-1) - u_1^2)}{2u_1 u_2}\, w_\mathcal{C}\Bigr\|_{\mathfrak{D}^*} \leq \frac{\epsilon}3 \, \|w_\mathcal{C}\|_{\mathfrak{H}} \leq \frac{\epsilon}3 \, \|w_\mathcal{C}\|_{\mathfrak{D}}.
\end{equation}
Finally, combining \eqref{eq-term-one}, \eqref{eq-term-two} and \eqref{eq-term-three} finishes the proof of the Lemma. \end{proof}
We will next estimate the error term $\bar{\mathcal{E}}[w,\varphi_\mathcal{C}]$.
\begin{lemma}
\label{lem-error-bar}
There exist a $\tau_0 \ll -1$ and a constant $\bar{C}(\theta)$ so that for all $\tau \le \tau_0$ we have
\[
\|\bar{\mathcal{E}}[w,\varphi_\mathcal{C}]\|_{\mathfrak{D}^*} \le \frac{\bar{C}(\theta)}{\sqrt{|\tau_0|}}\, \|\chi_{D_{\theta}}\, w\|_{\mathfrak{H}}
\]
where $\bar{\mathcal{E}}[w,\varphi_\mathcal{C}]$ is defined by \eqref{eq-bar-E} and $\chi_{D_{\theta}}$ is the characteristic function of the set $D_\theta:= \{ \theta/2 < u < \theta \}$. \end{lemma}
\begin{proof}
Setting
\[
a(y,\tau) := \varphi_{\mathcal{C},\tau} - \varphi_{\mathcal{C}, yy} + \frac{u_{1y}^2}{1+u_{1y}^2}\, \varphi_{\mathcal{C}, yy} + \frac{(u_{1y}+u_{2y})u_{2yy}}{(1+u_{1y}^2)(1+u_{2y}^2)} \varphi_{\mathcal{C},y}
\]
and
\[
b(y,\tau) := (\varphi_\mathcal{C})_y \qquad \mbox{and} \qquad d(y,\tau) := \frac{2u_{1y}^2}{1+u_{1y}^2} (\varphi_\mathcal{C})_y - 2(\varphi_\mathcal{C})_y
\]
we may write
\begin{equation}
\label{eq-rewriting-error}
\bar{\mathcal{E}}[w,\varphi_\mathcal{C}] = a(y,\tau) w + \frac y2 \, b(y,\tau)\, w + d(y,\tau) \, w_y.
\end{equation}
Note that the support of all three functions, $a(y,\tau)$, $b(y,\tau)$ and $d(y,\tau)$ is contained in $D_\theta$ and
\[
|a(y,\tau)| + |b(y,\tau)| + |d(y,\tau)| \le \frac{\bar{C}(\theta)}{\sqrt{|\tau|}}.
\]
Furthermore, by \eqref{eq-apriori-bounds} and Lemma \ref{lem-bounded-first-order} we get
\[
\|a(y,\tau) \, w\|_{\mathfrak{D}^*} \le \|a(y,\tau) w\|_{\mathfrak{H}} \le \frac{\bar{C}(\theta)}{\sqrt{|\tau|}}\, \|w \chi_{D_{\theta}}\|_{\mathfrak{H}},
\]
\[
\|\frac y2\, b(y,\tau) w\|_{\mathfrak{D}^*} \le \|b(y,\tau) \, w\|_{\mathfrak{H}} \le \frac{\bar{C}(\theta)}{\sqrt{|\tau|}}\, \|w \, \chi_{D_{\theta}}\|_{\mathfrak{H}}
\]
and
\[
\begin{split}
\|d(y,\tau) \, w_y\|_{\mathfrak{D}^*} &\le \|(d(y,\tau) w)_y\|_{\mathfrak{D}^*} + \|w d_y(y,\tau)\|_{\mathfrak{D}^*}\\
&\le \|d(y,\tau) \, w\|_{\mathfrak{H}} + \frac{\bar{C}(\theta)}{\sqrt{|\tau|}} \|w\, \chi_{D_{\theta}}\|_{\mathfrak{H}} \\
&\le \frac{\bar{C}(\theta)}{\sqrt{|\tau|}} \|w\, \chi_{D_{\theta}}\|_{\mathfrak{H}}.
\end{split}
\]
The above estimates together with \eqref{eq-rewriting-error} readily imply the lemma. \end{proof}
Finally, we now employ all the estimates shown above to conclude the proof of Proposition \ref{prop-cylindrical}.
\begin{proof}[Proof of Proposition \ref{prop-cylindrical}]
By \eqref{eq-first-step} with $g := \mathcal{E}[w_\mathcal{C}] + \bar{\mathcal{E}}[w,\varphi_\mathcal{C}]$
and using also Lemma \ref{lem-error1-est}, Lemma \ref{lem-error-bar} and the
assumption that $\mathcal{P}_+ w_\mathcal{C} (\tau_0) = 0$, we have that for every $\epsilon >
0$ there exists a $\tau_0 \ll -1$ so that
\[
\|\hat{w}_\mathcal{C}\|_{\mathfrak{D},\infty}
\le \epsilon\, \|w_\mathcal{C}\|_{\mathfrak{D},\infty}
+ \frac{\bar{C}(\theta)}{\sqrt{|\tau_0|}}\,
\|w \chi_{D_{\theta}}\|_{\mathfrak{H},\infty}.
\]
This readily gives the proposition. \end{proof}
\section{The tip region} \label{sec-tip}
Let $u_1(y,\tau)$ and $u_2(y,\tau)$ be the two solutions to equation \eqref{eq-u} as in the statement of Theorem \ref{thm-main} and let $u_2^{\alpha\beta\gamma}$ be defined by \eqref{eq-ualphabeta}. We will now estimate the difference of these solutions in the tip region which is defined by $\mathcal{T}_{\theta} = \{(y,\tau)\,\,\,|\,\,\, u_1 \le 2\theta\}$, for $\theta >0$ sufficiently small, and $\tau \le \tau_0 \ll -1$, where $\tau_0$ is going to be chosen later. In the tip region we need to switch the variables $y$ and $u$ in both of our solutions, with $u$ now becoming the independent variable. Hence, our solutions become $Y_1(u,\tau)$ and $Y_2^{\alpha\beta\gamma}(u,\tau)$. Define the difference $ W:=Y_1-Y_2^{\alpha\beta\gamma}$ and for a standard cutoff function $\varphi_T(u)$ as in \eqref{eqn-cutofftip} we denote $W_T := \varphi_T\, W$.
\begin{rem}
\label{rem-tip}
By the change of variables \eqref{eqn-Y-expansion} and by the definition of $u_2(y,\tau) := u_2^{\alpha\beta\gamma}(y,\tau)$ as in \eqref{eq-ualphabeta}, we have that
\[
Z_2^{\alpha\beta\gamma}(\rho,\tau) = \sqrt{|\tau|}\, \Bigl(Y_2^{\alpha\beta\gamma}\big (\frac{\rho}{\sqrt{|\tau|}},\tau\big ) - Y_2^{\alpha\beta\gamma}(0,\tau)\Bigr)
\]
where
\[
Y_2^{\alpha\beta\gamma}(u,\tau) = \sqrt{1+\beta e^{\tau}}\, Y_2\big (\frac{u}{\sqrt{1+\beta e^{\tau}}}, \sigma\big ), \quad \sigma:= \tau+\gamma-\log(1+\beta e^{\tau}).
\]
Combining the above two equations yields
\[Z_2^{\alpha\beta\gamma}(\rho,\tau) = \frac{\sqrt{|\tau|} \, \sqrt{1+\beta e^{\tau}}}{\sqrt{|\sigma|}}\, Z_2 \big (\rho\, \frac{\sqrt{|\sigma|}}{\sqrt{|\tau|}\sqrt{1+\beta e^{\tau}}}, \sigma \big ).
\]
Recall that $\alpha=\alpha(\tau_0)$, $\beta=\beta(\tau_0)$, $\gamma=\gamma(\tau_0)$ will be chosen in Section \ref{sec-conclusion} so that $(\alpha, \beta,\gamma)$ is admissible with respect to $\tau_0$.
Using the fact that $Z_2(\rho,\tau)$ converges as $\tau\to -\infty$, uniformly smoothly on compact sets in $\rho$, to the Bowl soliton $Z_0(\rho)$ we have
\[
\begin{split}
Z_2^{\alpha\beta\gamma}&(\rho,\tau) = (1 + o(1))\, \Bigl \{ Z_2(\rho, \sigma) + \Bigl ( Z_2 \big (\rho\, \frac{\sqrt{|\sigma|}}{\sqrt{|\tau|}\sqrt{1+\beta e^{\tau}}}, \sigma \big ) - Z_2(\rho, \sigma) \Bigr) \Bigr \} \\
&= (1 + o(1))\, \Bigl \{ Z_0(\rho) + o(1) + (Z_2)_\rho (\hat{\rho},\sigma) \, \rho \, \Bigl (\frac{\sqrt{|\sigma|}}{\sqrt{|\tau|}\, \sqrt{1+\beta e^{\tau}}} - 1\Bigr ) \Bigr \}
\end{split}
\]
where $o(1)$ denotes functions that may differ from line to line, but are uniformly small for all $\tau\le \tau_0 \ll -1$ and for all $(\alpha,\beta,\gamma)$ that are admissible with respect to $\tau_0$.
Note also that above we applied the mean value theorem, with $\hat{\rho}$ being a value in between $\rho$ and $\rho\, \frac{\sqrt{|\sigma|}}{\sqrt{|\tau|}\sqrt{1+\beta e^{\tau}}} = \rho \, (1 + o(1))$.
By the monotonicity of $(Z_2)_\rho(\cdot,\sigma)$ in $\rho$, we see that for $\tau_0$ sufficiently small we have
\[
(Z_2)_\rho (\rho + \epsilon, \sigma) \le (Z_2)_\rho(\hat{\rho}, \sigma) \le (Z_2)_\rho(\rho -\epsilon, \sigma)
\]
for some small $\epsilon$ and all $\tau \le \tau_0$, implying
\[
(Z_0)_\rho (\rho+\epsilon) + o(1) \le (Z_2)_\rho (\hat{\rho}, \sigma) \le (Z_0)_\rho (\rho-\epsilon) + o(1)
\]
for $\tau \le \tau_0$ and $\tau_0 \ll -1$.
All these together with $ \lim_{\tau\to -\infty} \Bigl(\frac{\sqrt{|\sigma|}}{\sqrt{|\tau|}\sqrt{1+\beta e^{\tau}}} - 1\Bigr) = 0 $ imply that $ Z_2^{\alpha\beta\gamma}(\rho,\tau) = Z_0(\rho) + o(1)$, where $o(1)$ is a function that is, as before, uniformly small for all $\tau\le \tau_0 \ll -1$ and all $\alpha, \beta$ and $\gamma$ that are admissible with respect to $\tau_0$.
Hence, it is easy to see that in all the estimates below, in this section, we can find a uniform $\tau_0 \ll -1$, independent of parameters $\alpha, \beta$ and $\gamma$ (as long as they are admissible with respect to $\tau_0$), so that all the estimates below hold for $Y_1(u,\tau) - Y_2^{\alpha\beta\gamma}(u,\tau)$, for all $\tau \le \tau_0$. \end{rem}
Our goal in this section is to show the following bound.
\begin{prop}\label{prop-tip} There exist $\theta$ with
$0 < \theta \ll 1$, $\tau_0 \ll -1$ and $C< +\infty$ such that
\begin{equation}
\label{eqn-tip}
\| W_T \|_{2,\infty} \leq \frac{C}{|\tau_0| } \, \| W \, \chi_{_{[\theta, 2\theta]} } \|_{2,\infty}
\end{equation}
holds. \end{prop}
To simplify the notation throughout this section we will drop the subscript on $Y_1$ and write $Y=Y_1$ instead. Also, we will denote $Y_2^{\alpha\beta\gamma}$ by $Y_2$. As already explained in Section \ref{subsec-tip}, the proof of this proposition will be based on a Poincar\'e inequality for the function $W_T$ which is supported in the tip region. These estimates will be shown to hold with respect to an appropriately chosen weight $e^{\mu(u,\tau)}\, du$, where $\mu(u,\tau)$ is given by \eqref{eq-weight}. We will begin by establishing various properties of the weight $\mu(u,\tau)$. We will continue with the proof of the Poincar\'e inequality and we will finish with the proof of the Proposition. Recall that the definitions of the {\em collar region} $\mathcal{K}_{\theta,L}$ and the {\em soliton region} $\mathcal{S}_L$ are given in Section \ref{subsec-tip}.
\subsection{Properties of $\mu(u,\tau)$} In a few subsequent lemmas we show estimates for the weight $\mu(u,\tau)$, which is given by \eqref{eq-weight}. Recall that in the soliton region $\mathcal{S}_L$ we have defined $\mu(u,\tau) := m(\rho) + a(L,\tau)\, \rho + b(L,\tau)$, where $a(L,\tau)$ and $b(L,\tau)$ are given by \eqref{eqn-aL} and \eqref{eqn-bL} respectively. \begin{lemma}
\label{lemma-prop-mu}
For all sufficiently large $L$ the limit \(a_\infty(L) = \lim_{\tau\to-\infty} a(L, \tau)\) exists.
Moreover, there is a constant $C<\infty$ such that
\[
|a_\infty(L)|\leq CL^{-1}.
\]
In particular, for every $\eta > 0$ there exists an $L_0$ so that for every $L \ge L_0$, there exists a $\tau_0 \ll -1$ such that
\[
|a(L,\tau)| \le \eta \text{ for all $\tau \leq \tau_0$. }
\] \end{lemma}
\begin{proof}
Recall that
\[
a(L,\tau) := -m'(L) - \frac{1}{2\sqrt{|\tau|}}Y\, Y_{u}\bigl({L / \sqrt{|\tau|}},\tau\bigr),
\]
where
\[
Y(u,\tau) = Y(0,\tau) + \frac{1}{\sqrt{|\tau|}}\, Z(\rho,\tau).
\]
Using $Y(0,\tau) = \sqrt{2|\tau|}(1 + o(1))$,
$Y_{u}(u,\tau) = Z_{\rho}(\rho, \tau)$
and $Z(\rho, \tau)\to Z_0(\rho)$ for $\tau\to-\infty$,
we get
\[
a(L, \tau) = -m'(L) - \frac12\sqrt{2|\tau|}(1 + o(1))
\frac{1}{\sqrt{|\tau|}} Z_\rho(L, \tau)
\]
so
\[
\lim_{\tau\to-\infty} a(L,\tau)
= a_\infty(L)
= -m'(L) - \frac12 \sqrt2 \, Z_0'(L).
\]
Since $m'(L) = \frac{n - 1}{L}\, (1 + Z_{0}'(L)^2)$, we have
\[
a_\infty(L)
= -\frac{n-1}{L} -\frac{n - 1}{L}\, Z_{0}'(L)^2 - \frac{1}{2}\sqrt{2}\, Z_{0}'(L).
\]
The asymptotic expansion \eqref{eq-Z0-asymptotics} for $Z_0(\rho)$ as $\rho\to\infty$ then implies \(a_\infty(L) = \mathcal{O}(L^{-1})\) as \(L\to\infty\). \end{proof}
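For the reader's convenience, we sketch the cancellation behind the bound $|a_\infty(L)| \le C L^{-1}$, under our reading of \eqref{eq-Z0-asymptotics}, namely that $Z_{0}'(L) = -\frac{L}{\sqrt{2}\,(n-1)} + \mathcal{O}(L^{-1})$ as $L\to\infty$: with this expansion,
\[
a_\infty(L) = -\frac{n-1}{L} - \frac{n-1}{L}\,\Bigl(\frac{L^2}{2(n-1)^2} + \mathcal{O}(1)\Bigr) + \Bigl(\frac{L}{2(n-1)} + \mathcal{O}(L^{-1})\Bigr) = \mathcal{O}(L^{-1}),
\]
since the two terms that grow linearly in $L$ cancel each other.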
In the following lemma we prove further properties of $\mu(u,\tau)$ that will be used later in the text.
\begin{lemma}
Fix $\eta > 0$ small.
There exist
$\theta > 0$, $L > 0$ and $\tau_0 \ll -1$ so that
\begin{equation}\label{eqn-mut}
\mu_\tau \leq \eta \, |\tau| \qquad \mbox{holds on} \,\,\,\, 0\le u \le 2\theta
\end{equation}
and
\begin{equation}\label{eqn-mus}
1- \eta \leq \frac{u\, \mu_u}{n-1}\, \frac{1}{1+Y_{u}^2} \leq 1+ \eta, \quad 1-\eta \le \frac{2(n-1)\,\mu_u}{u|\tau|} < 1 + \eta
\end{equation} holds on $\mathcal{K}_{\theta,L}$ and for all $\tau \le \tau_0$. \end{lemma}
\begin{proof}
To prove \eqref{eqn-mut} we first deal with the {\it collar region} ${\mathcal{K}}_{\theta,L}$.
By \eqref{eqn-Y} and \eqref{eq-weight} we have
\[
\mu_{\tau} = -\frac{Y Y_{\tau}}{2} = -\frac{Y}{2}\, \Bigl(\frac{Y_{uu}}{1+Y_u^2} + \big(\frac{n-1}{u} - \frac u2\big)\, Y_u + \frac Y2\Bigr).
\]
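Here we used that, by \eqref{eq-weight}, on the collar region the weight takes the form $\mu(u,\tau) = -\tfrac14\, Y^2(u,\tau)$, so that
\[
\mu_\tau = -\tfrac12\, Y\, Y_\tau \qquad \mbox{and} \qquad \mu_u = -\tfrac12\, Y\, Y_u,
\]
and then we substituted equation \eqref{eqn-Y} for $Y_\tau$.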
By Proposition \ref{prop-ratio-small} we have that for every $\eta > 0$ there exist $\theta, L > 0$ and $\tau_0 < 0$ so that
${\displaystyle
{\lambda_1}/{\lambda_2} < \frac{\eta}{100}}$, on $\mathcal{K}_{\theta,L}$ and for $\tau \leq \tau_0$,
implying the bound
\[
\frac{|Y_{uu}|}{1 + Y_u^2} \le \frac{\eta}{100}\, \frac{|Y_u|}{u}, \qquad \mbox{on} \,\,\,\,\mathcal{K}_{\theta,L},\,\, \tau \leq \tau_0.
\]
Using \eqref{eq-cor-Sigurd} and the previous estimate yields
\[
\frac{Y}{2}\, \frac{|Y_{uu}|}{1 + Y_u^2} \le \frac{\eta}{100}\, \frac{Y^2}{4(n-1)}\, (1 + \epsilon) < \frac{\eta}{50}\, |\tau|,
\]
if $\theta$ is chosen sufficiently small and $L$ sufficiently big (note that we used $Y(u,\tau) \le Y(0,\tau) = \sqrt{2|\tau|}\, (1 + o_{\tau}(1))$).
Furthermore, by Corollary \ref{lemma-Sigurd} we have
\begin{align*}
-\frac Y2 \Bigl\{ \Bigl(\frac{n-1}{u} - \frac u2\Bigr)\, Y_u + \frac Y2\Bigr\}
&\le -\frac Y2 \, \Bigl(\frac{n-1}{u} Y_u + \frac Y2\Bigr) \\
&= \frac{Y |Y_u|(n-1)}{2u}\, \Bigl(1 + \frac{u Y}{2(n-1)\, Y_u}\Bigr) \\
&< \frac{Y |Y_u| (n-1)}{2u}\, \epsilon(\theta,L) \le \tilde{C}\, |\tau| \epsilon(\theta,L) < \frac{\eta}{2}\, |\tau|.
\end{align*}
We conclude that \eqref{eqn-mut} holds in the collar region $\mathcal{K}_{\theta,L}$.
To estimate $\mu_{\tau}$ in the \emph{soliton region} $\mathcal{S}_L$, where $\rho \le L$, we note that \eqref{eq-weight} implies
\[
\mu_{\tau}
= \frac{d}{d \tau} a(L,\tau)\, \rho + \frac{d}{d\tau} b(L,\tau).
\]
By \eqref{eqn-aL} and \eqref{eqn-bL}, we have that $b(L,\tau) = - m(L) - L\, a(L,\tau)$ and hence,
\[
| \mu_{\tau} | = \big | \frac{d}{d \tau} a(L,\tau)\, ( \rho - L ) \big | \leq L\, \big | \frac{d}{d \tau} a(L,\tau) \big |.
\]
Now using the definition of $a(L,\tau)$ in \eqref{eqn-aL}, we have
\[
\begin{split}
\Bigl| \frac{d}{d\tau} & \, a(L,\tau)\Bigr| \le \frac{1}{4|\tau|^{3/2}}\, |Y Y_{u}| +\\
&+ \frac{1}{2\sqrt{|\tau|}}\Bigl( |Y_{\tau} Y_{u}| + |Y Y_{u\tau}| + \frac{L}{4|\tau|^{3/2}} Y_{u}^2 + \frac{L}{4|\tau|^{3/2}} |YY_{uu}|\Bigr),
\end{split}
\]
where all terms on the right hand side above are evaluated at
$\bigl(L/\sqrt{|\tau|},\tau\bigr)$.
Let us estimate all these terms.
While
doing so we will use \eqref{eqn-Y-expansion} and the smooth convergence of
$Z(\rho,\tau)$, as $\tau\to -\infty$, to the Bowl soliton $Z_0(\rho)$.
For
example,
\[
\frac{|Y\,Y_u|}{|\tau|^{3/2}} \le C\, \frac{|Z_{\rho}|}{|\tau|} \ll \frac{\eta}{100} \, |\tau|,
\]
by choosing $\tau_0 \ll -1$.
Furthermore, using \eqref{eqn-Y} we have
\[
\frac{|Y_{\tau} Y_u|}{2\sqrt{|\tau|}} = \frac{|Z_{\rho}|}{2\sqrt{|\tau|}} \Bigl|\frac{Z_{\rho\rho}\, \sqrt{|\tau|}}{1 + Z_{\rho}^2} + \frac{(n-1)\sqrt{|\tau|}}{\rho}\, Z_{\rho} - \frac{\rho}{2\sqrt{|\tau|}} Z_{\rho} + \frac{Y}{2}\Bigr|,
\]
leading to
\[
\frac{|Y_{\tau} Y_u|}{2\sqrt{|\tau|}} \le C(L) \ll \frac{\eta}{100} |\tau|,
\]
for $\tau \leq \tau_0 \ll -1$.
Next,
\[
\frac{L}{8|\tau|^2} Y_{u}^2 = \frac{L}{8|\tau|^2} Z_{\rho}^2 \le \frac{C(L)}{|\tau|^2} \ll \frac{\eta}{100}\, |\tau|,
\]
and
\[
\frac{L|Y Y_{uu}|}{8\,|\tau|^2} \le C(L)\frac{|Z_{\rho\rho}|}{|\tau|} \ll \eta\, |\tau|,
\]
for $\tau\le \tau_0 \ll -1$ sufficiently small.
Finally, differentiating equation \eqref{eqn-Y} in $u$ and using \eqref{eqn-Y-expansion} we have
\[
\frac{|Y_{u\tau}|}{\sqrt{|\tau|}} = \frac{1}{\sqrt{|\tau|}}\, \Bigl|\Bigl(\frac{Z_{\rho\rho}}{1 + Z_{\rho}^2}\Bigr)_{\rho}\, |\tau| + \Bigl(\Bigl(\frac{(n-1)|\tau|}{\rho} - \rho\Bigr)\, Z_{\rho}\Bigr)_{\rho} + \frac{Z_{\rho}}{2}\Bigr| \le C(L)\, \sqrt{|\tau|} \ll \frac{\eta}{100}\, |\tau|,
\]
for $\tau \le \tau_0 \ll -1$.
Combining the last estimates we conclude that \eqref{eqn-mut} holds also in the soliton region.
Combining the two estimates in the collar and soliton regions yields \eqref{eqn-mut}.
To prove the first estimate in \eqref{eqn-mus} note that
\[
\frac{u\mu_u}{(n-1)\, (1 + Y_{u}^2)} = -\frac{u\, Y Y_{u}}{2(n-1)\, (1+ Y_{u}^2)}.
\]
Using \eqref{eq-cor-Sigurd} we have that for every $\eta > 0$ we can choose $\theta \ll 1$ small and $L \gg 1$ big and $\tau_0 \ll -1$ so that
\[
(1 - \frac{\eta}{2})\, \frac{|Y_{u}|^2}{1 + Y_{u}^2} < \frac{u\mu_u}{(n-1)\, (1 + Y_{u}^2)} < (1 + \frac{\eta}{2})\, \frac{|Y_{u}|^2}{1 + Y_{u}^2}.
\]
Since $|Y_{u}|$ is large in $\mathcal{K}_{\theta,L}$, we get that
\[
1 - \eta < \frac{u\mu_u}{(n-1)\,(1 + Y_{u}^2)} < 1 + \eta, \qquad \mbox{in} \,\,\,\,\mathcal{K}_{\theta,L},
\]
for $\theta \ll 1$, $L \gg 1$ and $\tau \le \tau_0 \ll -1$.
To prove the second estimate in \eqref{eqn-mus}, note that
\[
\frac{2(n-1)\, \mu_u}{u|\tau|} = \frac{(n-1)Y |Y_{u}|}{u\, |\tau|},
\]
and use \eqref{eq-cor-Sigurd} together with the fact that $Y = \sqrt{|\tau|}\, (\sqrt{2} + o_{\tau,\theta}(1))$, where the limit $ \lim_{\tau\to-\infty, \theta\to 0} o_{\tau,\theta}(1) = 0$. \end{proof}
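For the reader's convenience we record the computation behind the last step: under our reading of \eqref{eq-cor-Sigurd}, namely that $|Y_{u}| = \frac{u\,Y}{2(n-1)}\,\bigl(1+o_{\theta,L,\tau}(1)\bigr)$ on $\mathcal{K}_{\theta,L}$, we have
\[
\frac{2(n-1)\,\mu_u}{u\,|\tau|} = \frac{(n-1)\,Y\,|Y_{u}|}{u\,|\tau|} = \frac{Y^2}{2\,|\tau|}\,\bigl(1+o_{\theta,L,\tau}(1)\bigr) = 1 + o_{\theta,L,\tau}(1),
\]
which gives the second estimate in \eqref{eqn-mus} once $\theta$ is sufficiently small, $L$ is sufficiently large and $\tau \le \tau_0 \ll -1$.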
\subsection{Poincar\'e inequality in the tip region} We will next show a {\em weighted Poincar\'e type estimate} (with respect to weight $\mu(u,\tau)$ defined in \eqref{eq-weight}) that will be needed in obtaining the coercive type estimate \eqref{eqn-tip} in the {\em tip region} $\mathcal{T}_{\theta}$. As we discussed earlier, near the tip we switch the variables $y$ and $u$ in both solutions, with $u$ becoming now an independent variable.
\begin{proposition}
\label{prop-Poincare}
There exist uniform constants $C > 0$ and $C(\theta) > 0$, with $C$ independent of $\theta$ and of $\tau_0$, so that for $\theta \le \theta_0$, $\tau \le \tau_0$, and every compactly supported function $f$ in $\mathcal{T}_{\theta}$, we have
\begin{equation}
\label{eq-Poincare}
|\tau|\int_0^{\theta} f^2(u) \, e^{\mu(u,\tau)}\, du \le C\,\int_0^{2\theta} \frac{f_u^2}{1 + Y_{u}^2}\, e^{\mu(u,\tau)}\, du + \int_{\theta}^{2\theta} f^2\, e^{\mu(u,\tau)}\, du.
\end{equation} \end{proposition}
\begin{proof}
We divide the proof into several steps.
In Step \ref{step-Poincare-pretip} we show the weighted Poincar\'e inequality for compactly supported functions in $\mathcal{K}_{\theta,\frac L2}$.
In Step \ref{step-Poincare-verytip} we show the weighted Poincar\'e inequality for compactly supported functions in $\rho = u\sqrt{|\tau|} \in [0,\infty)$.
In Step \ref{step-final} we use cut off functions to show \eqref{eq-Poincare}.
\begin{step} \label{step-Poincare-pretip}
We will first derive the weighted Poincar\'e inequality in the collar region, $\mathcal{K}_{\theta, \frac L2}$, for $\theta$ small, $L$ big and $\tau \ll -1$.
\end{step} Let $f(u)$ be a compactly supported function in $\mathcal{K}_{\theta,\frac L2}$.
We claim we have
\begin{equation}
\label{eq-help-111}
1+Y_{u}^2 \leq \frac 32 u \mu_u, \qquad \mbox{in} \,\,\,\mathcal{K}_{\theta,\frac L2}.
\end{equation}
To show \eqref{eq-help-111}, let us first consider the case when $u\in\mathcal{K}_{\theta,L}$.
By \eqref{eqn-mus} we have
\[
1 + Y_{u}^2 \le \frac{1}{1 - \eta}\, \frac{u \mu_u}{n-1} \le \frac 32\, u\mu_u, \qquad \mbox{in} \,\,\,\,\mathcal{K}_{\theta,L}.
\]
To finish the proof of \eqref{eq-help-111} we need to check the estimate holds for $u \in \Bigl[\frac{L}{2\sqrt{|\tau|}}, \frac{L}{\sqrt{|\tau|}}\Bigr]$, or equivalently, for
$\rho \in [L/2, L]$ as well.
Recall that in this region $\mu(u,\tau) = m(u\sqrt{|\tau|}) + a(L,\tau) u \sqrt{|\tau|} + b(L,\tau)$, and hence
\[
u \mu_u = \rho \, m_{\rho} + a(L,\tau)\, \rho = (n - 1)\, (1 + (Z_0)_{\rho}^2) + a(L,\tau)\, \rho.
\]
By Lemma \ref{lemma-prop-mu} we can make $|a(L,\tau)|$ as small as we want by taking $L$ sufficiently big and $\tau \le \tau_0 \ll -1$ sufficiently small.
Moreover, using the asymptotics for $Z_0(\rho)$ and its derivatives one concludes that for $\rho \in [ L/2, L]$, we have
\[
u\mu_u \ge (n - 1) (1 - \epsilon)\, ( 1 + (Z_0)_{\rho}^2).
\]
On the other hand, denote by $Z(\rho,\tau)$ a solution with respect to $\rho$ variable that corresponds to $Y(u,\tau)$, via rescaling \eqref{eqn-Y-expansion}.
By results in \cite{ADS} we know $Z(\rho,\tau)$ converges uniformly smoothly on compact sets in $\rho$ to the Bowl soliton, $Z_0(\rho)$.
This and the fact that $Y_{u} = Z_{\rho}$ yield
\[
u\, \mu_u \ge (n - 1) (1 - 2\epsilon)\, (1 + Y_{u}^2),
\]
for $L$ sufficiently big and $\tau \le \tau_0$, where $\tau_0 \ll -1$ is sufficiently small.
This implies
\[
1 + Y_{u}^2 \le \frac 32 u \mu_u, \qquad \mbox{for} \,\,\,\, u \in \Bigl[\frac{L}{2\sqrt{|\tau|}}, \frac{L}{\sqrt{|\tau|}}\Bigr],
\]
hence concluding the proof of \eqref{eq-help-111}.
Using this estimate, for any $f$ that is compactly supported in $\mathcal{K}_{\theta,\frac L2}$ we have
\begin{equation}\label{eqn-muu3}
\int \frac{f^2}{u^2} \, (1+Y_{u}^2) \, e^{\mu(u,\tau)}\, du \leq \frac 32 \int \frac{f^2}{u} \, \mu_u \, e^{\mu(u,\tau)}\, du.
\end{equation}
Furthermore,
\begin{multline}\label{eqn-poin5}
\int \frac {f^2}{u} \, \mu_u \, e^{\mu(u,\tau)}\,du = \int \frac{f^2}{ u }\, \frac{ \partial } {\partial u} \big ( e^{\mu} \big ) \, du
= -2\int \frac{f f_u}{u }\, e^{\mu(u,\tau)}\, du + \int \frac {f^2} {u^2} \, e^{\mu(u,\tau)}\, du\\
\leq 2\, \int \frac {f^2_u}{1+Y_{u}^2} \, e^{\mu(u,\tau)}\, du + \frac 12 \int \frac{f^2}{u^2} \, (1+Y_{u}^2) \, e^{\mu(u,\tau)}\, du + \int \frac {f^2} {u^2} \, e^{\mu(u,\tau)}\, du.
\end{multline}
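The last inequality above follows from the elementary estimate $2|ab| \le 2a^2 + \tfrac12\, b^2$ applied with $a = f_u/\sqrt{1+Y_{u}^2}$ and $b = f\,\sqrt{1+Y_{u}^2}/u$, namely
\[
\frac{2\,|f\, f_u|}{u} \le \frac{2\,f_u^2}{1+Y_{u}^2} + \frac{f^2\,(1+Y_{u}^2)}{2\,u^2}.
\]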
Also observe that in the considered region, where $u^2 |\tau| \geq L \gg1$, using the second estimate in \eqref{eqn-mus} we have
\begin{multline*}
\int \frac {f^2} {u^2} \, e^{\mu(u,\tau)}\, du = \int \frac {f^2}{u^2 |\tau|} \, \frac{\mu_u |\tau|}{\mu_u}\, e^{\mu(u,\tau)}\, du \leq \frac{1}{L^2} \int \frac {f^2}{u} \, \mu_u\, \frac{|\tau| u}{\mu_u} \, e^{\mu(u,\tau)}\, du
\\
\leq \frac 1{8} \, \int \frac {f^2}{u} \, \mu_u \,e^{\mu(u,\tau)}\, du.
\end{multline*}
Inserting this and \eqref{eqn-muu3} in \eqref{eqn-poin5}, finally yields
\[
\int \frac {f^2}{u} \, \mu_u \, e^{\mu(u,\tau)}\, du
\leq 16\, \int \frac {f^2_u}{1+Y_{u}^2} \, e^{\mu(u,\tau)}\, du.
\]
If we choose $\eta < 1/2$, the previous estimate and the second estimate in \eqref{eqn-mus} imply
\begin{equation} \label{eq-poin-pretip}
|\tau|\, \int f^2\, e^{\mu(u,\tau)}\, du \le 64(n - 1)\, \int \frac{f_u^2}{1 + Y_{u}^2}\, e^{\mu(u,\tau)}\, du,
\end{equation}
for any compactly supported function $f$ in $\mathcal{K}_{\theta,\frac L2}$.
Observe that the Poincar\'e constant in \eqref{eq-poin-pretip} is uniform, independent of $L$, $\theta$ and $\tau$.
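For the reader's convenience we record how the constant $64(n-1)$ arises: by the second estimate in \eqref{eqn-mus} (which, arguing as in the proof of \eqref{eq-help-111}, extends to $\rho \in [L/2, L]$ for $L$ large) we have $\mu_u/u \ge \frac{(1-\eta)\,|\tau|}{2(n-1)} \ge \frac{|\tau|}{4(n-1)}$ on the support of $f$, and hence
\[
|\tau|\, \int f^2\, e^{\mu(u,\tau)}\, du \le 4(n-1)\, \int \frac{f^2}{u}\, \mu_u \, e^{\mu(u,\tau)}\, du \le 64(n-1)\, \int \frac{f_u^2}{1+Y_{u}^2}\, e^{\mu(u,\tau)}\, du.
\]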
\begin{step}\label{step-Poincare-verytip}
Denote by $\bar{\mu}(\rho,\tau):= \mu(u,\tau) = m(\rho) + a(L,\tau)\, \rho + b(L,\tau)$.
We show there exists a $\delta > 0$ so that for all $f \in C_c^{\infty}([0,\infty))$, with $f'(0) = 0$ we have,
\begin{equation}
\label{eq-Poincare-verytip}
\delta\, \int_0^{\infty} f^2 e^{\bar{\mu}}\, d\rho \le \int_0^{\infty} \frac{f_{\rho}^2}{1 + (Z_0)_{\rho}^2}\, e^{\bar{\mu}}\, d\rho.
\end{equation}
\end{step}
To prove \eqref{eq-Poincare-verytip} we begin by establishing the inequality for functions supported on the interval $[A, \infty)$ for sufficiently large $A$.
Then we argue by contradiction to extend the inequality to functions defined on $[0, \infty)$.
Let $A<\infty$ be large and consider, for $f\in C^\infty_c((A, \infty))$,
\[
\int_A^\infty f^2 e^{\bar{\mu}} d\rho = \int_A^\infty \frac{f^2} {\bar{\mu}_\rho} de^{\bar{\mu}} = -\int_A^\infty \Bigl( \frac{2ff_\rho} {\bar{\mu}_\rho} - \bigl(\bar{\mu}_\rho^{-1}\bigr)_\rho\, f^2 \Bigr) e^{\bar{\mu}} \, d\rho.
\]
We use the asymptotic relation for $Z_0(\rho)$, which implies $m(\rho) = \frac{\rho^2}{4(n-1)} + o(\rho^2)$ and $m_\rho = \frac{\rho}{2(n-1)}+o(\rho)$, and Lemma \ref{lemma-prop-mu} to conclude that $(\bar{\mu}_\rho^{-1})_\rho = -2(n-1)/\rho^2 + o(\rho^{-2})$, for large $\rho$.
Continuing our estimate, we find for any $\epsilon>0$
\begin{equation}
\begin{aligned}
\int_A^\infty f^2 e^{\bar{\mu}} d\rho & \leq \int_A^\infty \Bigl(\epsilon f^2 + \frac{1}{\epsilon}\, \frac{f_\rho^2}{\bar{\mu}_\rho^2} +\frac{C}{\rho^2} f^2
\Bigr) \, e^{\bar{\mu}}\, d\rho\\
& \leq \bigl(\epsilon + CA^{-2}\bigr) \int _A^\infty f^2e^{\bar{\mu}}d\rho + \frac{1}{\epsilon}\int_A^\infty \frac{f_\rho^2}{\bar{\mu}_\rho^2} e^{\bar{\mu}}d\rho.
\end{aligned}
\end{equation}
Choose $\epsilon=1/4$ and let $A$ be so large that $C A^{-2}<1/4$; then we find
\[
\int_A^\infty f^2 e^{\bar{\mu}} d\rho \leq 8 \int_A^\infty \frac{f_\rho^2}{\bar{\mu}_\rho^2} e^{\bar{\mu}}d\rho.
\]
Finally, we note that for large $\rho$ both, $Z_{0\rho}$ and $\bar{\mu}_\rho$, are asymptotically proportional to $\rho$, so that $(1+Z_{0\rho}^2)^{-1} \leq C (\bar{\mu}_\rho)^{-2}$, and thus we have
\begin{equation}
\label{eq-Poincare-beyond-L}
\int_A^\infty f^2 e^{\bar{\mu}} d\rho \leq C \int_A^\infty \frac{f_\rho^2} {1+Z_{0\rho}^2}\,e^{\bar{\mu}} d\rho.
\end{equation}
Therefore the Poincar\'e inequality holds for all $f$ supported in $[A, \infty)$.
It is clear from the proof above that the Poincar\'e constant $C$ in \eqref{eq-Poincare-beyond-L} is a universal constant, independent of $L$.
We now show that the inequality holds for all $f\in C_c^{\infty}([0,\infty))$.
Suppose the inequality does not hold.
Then there is a sequence of functions $f_n\in C_c^{\infty}([0,\infty))$, for which
\[
\int_0^\infty f_n(\rho)^2 e^{\bar{\mu}} d\rho =1, \quad\text{and}\quad \lim_{n\to\infty} \int_0^\infty \frac{f_n'(\rho)^2}{1+Z_{0\rho}^2} e^{\bar{\mu}} d\rho =0.
\]
Since the weight $S(\rho) := e^{\bar{\mu}}/(1+Z_{0\rho}^2)$ is a positive continuous function on $(0, \infty)$ the assumption $\int_0^\infty f_{n}'(\rho)^2 S(\rho)d\rho \to 0$ implies that $f_n$ is bounded in $H^1_{\rm loc}({\mathbb R}_+)$, and thus that any subsequence has a further subsequence that converges locally uniformly.
Moreover, any limit $f(\rho) = \lim f_{n_i}(\rho)$ must have $\int_0^\infty f'(\rho)^2 S(\rho)d\rho=0$, i.e.~must be constant, and, because $\int_0^\infty f_n^2 e^{\bar{\mu}}d\rho =1$ for all $n$, the limit must also satisfy $\int_0^\infty f(\rho)^2 e^{\bar{\mu}}d\rho \leq 1$.
Since $\bar{\mu}\sim C\rho^2$ for large $\rho$, the only possible limit is $f(\rho)=0$.
We conclude that if the sequence $f_n\in C_c^{\infty}([0,\infty))$ exists, then it must converge locally uniformly to $f(\rho)=0$.
Choose $\varphi\in C^\infty([0,\infty))$ with $\varphi(\rho)=0$ for $\rho\leq L$ and $\varphi(\rho)=1$ for $\rho\geq 2L$.
Then $\varphi f_n$ is supported in $[L,\infty)$, so that the Poincar\'e inequality \eqref{eq-Poincare-beyond-L} that we already have established implies
\begin{align*}
\int_0^\infty (\varphi f_n)^2 e^{\bar{\mu}} d\rho & \leq C \int_0^\infty \frac{(\varphi f_n)_\rho^2}{1+Z_{0\rho}^2}
e^{\bar{\mu}} d\rho\\
& \leq C \int_0^\infty \Bigl\{ \frac{\varphi_\rho^2 f_n^2}{1+Z_{0\rho}^2} + \frac{\varphi^2 f_{n\rho}^2}{1+Z_{0\rho}^2}
\Bigr\} e^{\bar{\mu}} d\rho\\
& \leq C\int_L^{2L} f_n^2 e^{\bar{\mu}} d\rho + C\int_0^\infty \frac{f_{n\rho}^2}{1+Z_{0\rho}^2}e^{\bar{\mu}} d\rho,
\end{align*}
where we have used that $\varphi_\rho$ is supported in $[L, 2L]$.
Since $f_n$ converges to zero uniformly on $[L,2L]$, the first integral also converges to zero.
The second integral tends to zero by assumption, and therefore $\lim_{n\to\infty}\int \varphi^2 f_n^2\, e^{\bar{\mu}}\, d\rho = 0 $.
Next, we consider $(1-\varphi)f_n$.
These functions are supported in $[0, 2L]$.
On this interval we have
\[
c\, \rho^{n-1} \le \frac{e^{\bar{\mu}}}{1+Z_{0\rho}^2} \le e^{\bar{\mu}} \le C\, \rho^{n-1},
\]
for suitable constants $c < C$ (these depend on $L$, but here $L$ is fixed).
This allows us to compare the integrals with the $L^2$ and $H_0^1$ norms on $B_{2L}(0)\subset{\mathbb R}^n$.
The standard Poincar\'e inequality on $B_{2L}(0)$ implies
\[
\int_0^{2L} f^2 \rho^{n-1}d\rho \leq C \int_0^{2L} f_\rho^2 \rho^{n-1} d\rho,
\]
for all $f\in C^1([0,2L))$ with $f'(0)=f(2L)=0$.
Thus we have
\begin{align*}
\int(1-\varphi)^2f_n^2\, e^{\bar{\mu}}\, d\rho
& \leq C \int_0^{2L} (1-\varphi)^2 f_n^2 \rho^{n-1}d\rho \\
& \leq C \int_0^{2L} \bigl((1-\varphi)f_n\bigr)_\rho^2 \; \rho^{n-1} d\rho \\
& =C \int_0^{2L} \bigl(\varphi_\rho f_n + (1-\varphi)f_{n\rho}\bigr)^2 \rho^{n-1} d\rho \\
& \leq C\int_L^{2L} \varphi_\rho^2 f_n^2 \rho^{n-1}d\rho + C \int_0^{2L} f_{n\rho}^2 \rho^{n-1}d\rho.
\end{align*}
Here the first integral tends to zero because $f_n$ converges to zero uniformly on the bounded interval $[L,2L]$, while the second integral can be bounded by
\[
\int_0^{2L} f_{n\rho}^2 \rho^{n-1}d\rho \leq C \int_0^{2L} \frac{f_{n\rho}^2}{1+Z_{0\rho}^2} e^{\bar{\mu}} d\rho
\]
which also converges to zero as $n\to\infty$.
Thus we find that $ \int (1-\varphi)^2f_n^2\, e^{\bar{\mu}}\, d\rho\to 0$ as $n\to\infty$.
Combined with our previous estimate for $\int \varphi^2 f_n^2 e^{\bar{\mu}}\, d\rho$ we get
\[
\lim_{n\to\infty} \int f_n^2 e^{\bar{\mu}}\, d\rho \leq 2\lim_{n\to\infty} \int\varphi^2 f_n^2 e^{\bar{\mu}}\, d\rho + 2\lim_{n\to\infty} \int(1-\varphi)^2 f_n^2 e^{\bar{\mu}}\, d\rho =0.
\]
This contradicts the assumption $\int f_n^2 e^{\bar{\mu}}\, d\rho =1$ for all $n$.
\begin{step}
\label{step-final}
In this step we combine \eqref{eq-poin-pretip} and \eqref{eq-Poincare-verytip}, using cut off functions, to show \eqref{eq-Poincare}.
More precisely, there exist uniform constants $C$ and $C(\theta) > 0$, independent of $\tau \le \tau_0$, so that
\[
|\tau| \int_0^{\theta} f^2 e^{\mu}\, du \le C\, \int_0^{2\theta} \frac{f_u^2}{1+Y_{\theta,u}^2} \, e^{\mu}\, du + C(\theta)\, \int_{\theta}^{2\theta} f^2 \, e^{\mu}\, du.
\]
\end{step}
Let $\psi_1$ be a cut off function so that $\psi_1 = 1$ for $\frac{L}{\sqrt{|\tau|}} \le u \le \theta$ and $\psi_1 = 0$ outside of $[\frac{L}{\sqrt{2|\tau|}}, 2\theta]$.
Let $\psi_2$ be a cut off function so that $\psi_2 = 1$ for $0 \le u \le \frac{L}{\sqrt{|\tau|}}$ and $\psi_2 = 0$ for $u \ge \frac{2L}{\sqrt{|\tau|}}$.
By the estimates leading to \eqref{eq-poin-pretip}, applied to $\psi_1 f$ (here and below we write, for brevity, $d\sigma := e^{\mu(u,\tau)}\, du$), we have
\[
\int_{L/\sqrt{|\tau|}}^{\theta} \frac{f^2}{u}\, \mu_u \, d\sigma \le C\, \int \frac{(\psi_1 f)_u^2}{1 + Y_{u}^2}\, d\sigma.
\]
This yields
\begin{multline*}
\int_{L/\sqrt{|\tau|}}^{\theta} \frac{f^2}{u}\, \mu_u\, d\sigma \le C\, \int_{L/(2\sqrt{|\tau|})}^{2\theta} \frac{f_u^2}{1+Y_{u}^2}\, d\sigma + C(\theta)\, \int_{\theta}^{2\theta} f^2\, d\sigma
\\
+ \frac{C|\tau|}{L^2}\, \int_{L/(2\sqrt{|\tau|})}^{L/\sqrt{|\tau|}} \frac{f^2}{1+Y_{u}^2}\, d\sigma.
\end{multline*}
Combining this with the second estimate in \eqref{eqn-mus} yields
\begin{multline}
\label{eq-Poincare-part1}
|\tau|\, \int_{L/\sqrt{|\tau|}}^{\theta} f^2\, d\sigma
\le C\int_{L/\sqrt{|\tau|}}^{\theta} \frac{f^2}{u}\, \mu_u\, d\sigma\\
\le C\, \int_{L/(2\sqrt{|\tau|})}^{2\theta} \frac{f_u^2}{1+Y_{u}^2}\, d\sigma + C(\theta)\, \int_{\theta}^{2\theta} f^2\, d\sigma + \frac{C|\tau|}{L^2}\, \int_{L/(2\sqrt{|\tau|})}^{L/\sqrt{|\tau|}} f^2\, d\sigma.
\end{multline}
We can rewrite the weighted Poincar\'e inequality \eqref{eq-Poincare-verytip}, applied to $\psi_2 f$ as,
\[
|\tau|\, \int (\psi_2 f)^2\, e^{\mu}\, du \le C\, \int \frac{(\psi_2 f)_u^2}{1 + Y_{u}^2}\, e^{\mu}\, du
\]
where we use again the fact that in the considered tip region we have uniformly smooth convergence of solutions to the Bowl soliton and we can replace $Z_{0\rho}$ by $Y_{u}$.
This implies
\begin{equation}
\label{eq-Poincare-part2}
\begin{split}
\int_0^{L/\sqrt{|\tau|}} f^2 |\tau|\, e^{\mu}\, du &\le \int (\psi_2 f)^2 |\tau|\, d\sigma \le
C\int\frac{(\psi_2 f)_u^2}{1+Y_{u}^2}\, e^{\mu}\, du \\
&\le C\, \int_0^{2L/\sqrt{|\tau|}} \frac{f_u^2}{1+Y_{u}^2}\, e^{\mu}\, du + C\int \frac{(\psi_2)_u^2f^2}{1+Y_{u}^2}\, e^{\mu}\, du \\
&+ C\, \int \frac{|\psi_2||(\psi_2)_u| |f||f_u|}{1+Y_{u}^2}\, e^{\mu}\, du \\
&\le C\, \int_0^{2L/\sqrt{|\tau|}} \frac{f_u^2}{1+Y_{u}^2}\, e^{\mu}\, du + C\int \frac{(\psi_2)_u^2f^2}{1+Y_{u}^2}\, e^{\mu}\, du
\end{split}
\end{equation}
where we applied the Cauchy--Schwarz inequality to the last term on the right hand side.
Add \eqref{eq-Poincare-part1} and \eqref{eq-Poincare-part2} to get
\begin{equation*}
\begin{split}
|\tau| \int_0^{\theta} f^2\, d\sigma &\le C\Bigl( \int_0^{2L/\sqrt{|\tau|}} \frac{f_u^2}{1+Y_{u}^2}\, d\sigma + \int_{L/(2\sqrt{|\tau|})}^{2\theta} \frac{f_u^2}{1+Y_{u}^2}\, d\sigma\Bigr) \\
&+ C(\theta) \, \int_{\theta}^{2\theta} f^2\, d\sigma + \frac{C|\tau|}{L^2}\int_{L/(2\sqrt{|\tau|})}^{2L/\sqrt{|\tau|}} f^2\, d\sigma \\
&\le C\int_0^{2\theta} \frac{f_u^2}{1+Y_{u}^2}\, d\sigma + C(\theta) \, \int_{\theta}^{2\theta} f^2\, d\sigma + \frac{C|\tau|}{L^2}\int_{L/(2\sqrt{|\tau|})}^{2L/\sqrt{|\tau|}} f^2\, d\sigma. \end{split}
\end{equation*}
We can absorb the last term on the right hand side into the left hand side, for $L$ and $|\tau|$ large, which finally yields
\[
|\tau|\int_0^{\theta} f^2\, d\sigma \le C\int_0^{2\theta} \frac{f_u^2}{1+Y_{u}^2}\, d\sigma + C(\theta)\, \int_{\theta}^{2\theta} f^2\, d\sigma.
\] \end{proof}
\subsection{Proof of Proposition \ref{prop-tip}} We will now conclude the proof of Proposition \ref{prop-tip}. In order to prove the Proposition, we combine an energy estimate for the difference $W$, which will be shown below, with our Poincar\'e inequality \eqref{eq-Poincare} (recall that $W$ was defined at the beginning of Section \ref{sec-tip}). Let $\varphi_T(u)$ be a standard smooth cutoff function supported on $0 < u < 2\theta$, with $\varphi_T =1$ on $0 \leq u \leq \theta$ and $\varphi_T = 0$ for $u \ge 2\theta$, and let $W_T := W\, \varphi_T$.
\begin{proof}[Proof of Proposition \ref{prop-tip}]
After multiplying equation \eqref{eqn-WW} by $W \varphi_T^2 \, e^{\mu}$ and integrating by parts we obtain
\begin{equation}\label{eqn-energy-i1}
\begin{split}
& \frac{d}{d\tau} \Bigl ( \frac 12 \int W^2_T \, e^{\mu}\, du \Bigr ) = -\int \frac{W_u^2}{1+Y_{u}^2}\, \, \varphi_T^2 \,e^{\mu}\, du\\
&+ \int \Bigl(\frac{n-1}{u} - \frac u2 - \frac{\mu_u}{1+Y_{u}^2} + \frac{2Y_{u} (Y)_{uu}}{(1+Y_{u}^2)^2} - \frac{(Y_2)_{uu}}{1+ Y_{2u}^2}\, \frac{Y_{u} + Y_{2u}}{1 + Y_{u}^2}\Bigr) W_u W\, \varphi_T^2 \, e^{\mu}\, du \\
&+ 2 \int \frac 1{1+Y_{u}^2}\, W_u W \, \varphi_T\, (\varphi_T)_u \, e^{\mu}\, du + \int W^2_T \Bigl( \frac12 + \mu_{\tau}\Bigr)\, e^{\mu}\, du. \\
\end{split}
\end{equation}
Let us write
\[
\Bigl(\frac{n-1}{u} - \frac u2 - \frac{\mu_u}{1+Y_{u}^2} + \frac{2Y_{u} (Y)_{uu}}{(1+Y_{u}^2)^2} - \frac{(Y_2)_{uu} (Y_{u} + Y_{2u})}{(1 + Y_{u}^2)\,(1+ Y_{2u}^2)}\Bigr) = \frac {n-1}u \,G
\]
where
\begin{multline}
\label{eq-G-def}
G:= \Biggl \{1 - \frac{u^2}{2(n-1)} - \frac{u \mu_u}{n-1} \, \frac 1{1+Y_{u}^2}\\
+ \frac{2 Y_{u} (Y)_{uu}}{(n-1)\, (1 + Y_{u}^2)^2} - \frac{u\,(Y_2)_{uu} (Y_{u} + Y_{2u})}{(n-1)\,(1 + Y_{u}^2)\,(1+ Y_{2u}^2)}\Biggr \}.
\end{multline}
Denote by $C$ a uniform constant independent of $\tau$ that can vary from line to line.
Applying Cauchy-Schwarz to the two terms of \eqref{eqn-energy-i1} involving $W_u W$ we conclude
\begin{multline}\label{eqn-energy-i2}
\frac{d}{d\tau} \Bigl ( \frac 12 \int W^2_T \, e^{\mu}\, du \Bigr ) \leq - \frac 12 \int \frac{W_u^2}{1+Y_{u}^2}\,\varphi_T^2 \, e^{\mu}\, du
+ \int W^2_T \, (\frac 12 + \mu_\tau) \, e^{\mu}\, du \\
+ \int \frac{(n-1)^2}{u^2}\, G^2 W^2_T \, (1+Y_{u}^2)\, e^{\mu}\, du + 4 \int \frac {W^2}{1+Y_{u}^2} \, (\varphi_T)_u^2\, e^{\mu}\, du.
\end{multline}
Note that the support of $(\varphi_T)_u$ is contained in the region $\{\theta \le u \le 2\theta\}$.
By the intermediate region asymptotics in \cite{ADS}, in this region we have $c_1(\theta)\, \sqrt{|\tau|} \le |Y_{u}| \le C_1 \sqrt{|\tau|}$, if $\tau \le \tau_0 \ll -1$.
By this estimate and by \eqref{eqn-mut} we deduce from \eqref{eqn-energy-i2} the differential inequality
\begin{equation}\label{eqn-energy-i3}
\begin{split}
\frac{d}{d\tau} \Bigl ( \frac 12 \int W^2_T \, e^{\mu}\, du \Bigr )
&\leq - \frac 14 \int \frac{W_u^2}{1+Y_{u}^2}\,\varphi_T^2 \, e^{\mu}\, du + \eta |\tau|\, \int W^2_T \, e^{\mu}\, du \\
& \quad + \int \frac{(n-1)^2}{u^2}\, G^2 W^2_T \, (1+Y_{u}^2)\, e^{\mu}\, du + \frac{C}{|\tau|}\, \int (W\chi_{[\theta,2\theta]})^2 \, e^{\mu}\, du,
\end{split}
\end{equation}
for $\eta$ small (we have also used that $|(\varphi_T)_u| \le C(\theta)$ in $\{\theta \le u \le 2\theta\}$).
We will next estimate the quantity $\frac{(n-1)^2}{u^2}\, G^2$, separately in the regions $L/{\sqrt{|\tau|}} \leq u \leq 2\theta$ and $0 \leq u \leq L/{\sqrt{|\tau|}}$.
\begin{claim}
Fix $\eta $ small.
There exist $\theta, L >0$ depending on $\eta$ and $\tau_0 \ll0$ such that
\begin{equation}\label{eqn-G5} \frac{(n-1)^2}{u^2}\, G^2 \,
(1+Y_{u}^2) \, \le \eta |\tau|
\end{equation}
on $0 \leq u \leq 2\theta$ and $\tau \leq \tau_0$.
\end{claim}
\begin{proof}
We begin by establishing the bound for $L/{\sqrt{|\tau|}} \leq u \leq 2\theta$, where $L \gg1$ is large.
By \eqref{eq-G-def}, \eqref{eqn-mus}, Remark \ref{cor-sigurd} and the fact that $|Y_{u}|$ is large in the considered region, we have
\begin{equation}
\label{eq-G-small1}
\begin{split}
& \frac{(n-1)}{u}\, \Bigl| 1 - \frac{u\mu_u}{(n-1)\, (1 + Y_{u}^2)} - \frac{u^2}{2(n-1)}\Bigr| \, \sqrt{1 + Y_{u}^2} \\
&\le \frac{2(n-1)\, |Y_{u}|}{u}\, (\eta + \frac{u^2}{2})
\le (1 + \eta) (\eta + 2\theta^2)\, Y \\
&\le 4\, \eta \sqrt{|\tau|}.
\end{split}
\end{equation}
In the above estimate we also use that $Y$ is close to $\sqrt{2|\tau|}$ in the considered region, and that we can choose $\theta$ small so that $2\theta^2 < \eta$.
Note that by Remark \ref{cor-sigurd} we have
\[
1 - 2\eta \le (1 - \eta)\, \frac{Y}{Y_2} \le \frac{|Y_{u}|}{|Y_{2u}|} \le (1 + \eta)\, \frac{Y}{Y_2} \le 1 + 2\eta
\]
since $(1 - \eta)\, \sqrt{2|\tau|} \le Y_i \le (1 + \eta)\, \sqrt{2|\tau|}$, for $i \in \{1,2\}$, in the considered region, provided that $\theta$ is small enough, $L$ is big enough and $\tau \le \tau_0$ is big enough in its absolute value.
Furthermore, by Proposition \ref{prop-ratio-small} and Remark \ref{cor-sigurd} we have
\[
\Bigl|\frac{(Y_2)_{uu}}{1+Y_{2u}^2}\Bigr| \le \eta \frac{|Y_{2u}|}{u} \le \frac{\eta\, (1 + \eta)}{2(n-1)}\, Y_2 \le \frac{4\eta}{n-1}\, \sqrt{|\tau|},
\]
hence, implying
\[
\Bigl|\frac{(Y_2)_{uu}}{1+ Y_{2u}^2}\, \frac{Y_{u} + Y_{2u}}{1 + Y_{u}^2}\Bigr| \le 4\,\sqrt{2}\eta (1 + \eta)\,\sqrt{|\tau|}\, \frac{|Y_{u}|}{\sqrt{1 + Y_{u}^2}}.
\]
Similarly, we get
\[
\Bigl|\frac{2\,(Y)_{uu} Y_{u}}{(1+Y_{u}^2)^2}\Bigr| \le 4\, \sqrt{2}\eta (1 + \eta)\,\sqrt{|\tau|}\, \frac{|Y_{u}|}{\sqrt{1 + Y_{u}^2}},
\]
and hence,
\begin{equation}
\label{eq-G-small2}
\Bigl|\frac{(Y_2)_{uu}}{1+ Y_{2u}^2}\, \frac{Y_{u} + Y_{2u}}{1 + Y_{u}^2}\Bigr| + \Bigl|\frac{2\,(Y)_{uu} Y_{u}}{(1+Y_{u}^2)^2}\Bigr| < 16\eta\, \sqrt{|\tau|}.
\end{equation}
Combining \eqref{eq-G-small1} and \eqref{eq-G-small2}, by the definition of $G$ (see \eqref{eq-G-def}), we get that \eqref{eqn-G5} holds in $\mathcal{K}_{\theta,L}$, if we take $\eta < \frac{1}{544}$.
For the other region, $0 \leq u \leq L/{\sqrt{|\tau|}}$, which is very near the tip, we will use the fact that our solutions $Y, Y_2$ after rescaling are close to the soliton $Z_0(\rho)$.
Recall that in this case we have
\[
Y_i(u,\tau) = Y_i(0,\tau) + \frac 1{\sqrt{|\tau|}} \, Z_i(\rho,\tau), \qquad \rho:= u \sqrt{|\tau|}
\]
which gives $Y_{u} = Z_{\rho}$.
Hence,
\[
\frac{(n-1)^2}{u^2}\, G^2 \, (1+Y_{u}^2) = \frac{(n-1)^2\, |\tau| }{\rho^2}\, G^2 \, (1+Z_{\rho}^2).
\]
Also, $\mu_u = \bar{\mu}_\rho \, \sqrt{|\tau|}$, which gives $u\, \mu_u = \rho \, \bar{\mu}_\rho = \rho\, (m_{\rho}(\rho) + a(L,\tau))$ (we use the definition of weight $\mu(u,\tau)$ given by \eqref{eq-weight}).
Let us write $G = G_1 + G_2$, where
\[
G_1 = 1 - \frac{\rho^2}{2(n-1) |\tau|} - \frac{\rho \, m_\rho }{n-1} \, \frac 1{1+Z_{\rho}^2}
\]
and
\[
G_2 = -\frac{\rho\, a(L,\tau)}{(n-1)\, (1 + Z_{\rho}^2)} - \frac{\rho (Z_2)_{\rho\rho}\, (Z_{\rho} + Z_{2\rho})}{(1 + Z_{\rho}^2)\,(1 + Z_{2\rho}^2)} + \frac{2\rho\,(Z)_{\rho\rho} Z_{\rho}}{(1 + Z_{\rho}^2)^2}.
\]
From the definition of $m(\rho)$ we have
\[
1 - \frac{\rho \, m_\rho }{n-1} \, \frac 1{1+ Z_{0 \rho}^2} =0,
\]
which implies that
\[
G_1 = \frac{\rho \, m_\rho }{n-1} \Bigl ( \frac 1{1+Z_{\rho}^2} - \frac 1{1+ Z_{0\rho}^2} \Bigr ) - \frac{\rho^2}{2(n-1)|\tau|},
\]
and after squaring
\[
G_1^2 \leq \frac{2\rho^2 \, m_\rho^2 }{(n-1)^2} \Bigl ( \frac 1{1+Z_{\rho}^2} - \frac 1{1+ Z_{0\rho}^2} \Bigr )^2 + \frac{\rho^4}{2(n-1)^2|\tau|^2}.
\]
It follows that
\[
\frac{(n-1)^2\, |\tau| }{\rho^2}\, G_1^2 \, (1+Z_{\rho}^2)
\leq m_\rho^2 \, |\tau| \, (1+Z_{\rho}^2)
\Bigl ( \frac 1{1+Z_{\rho}^2} - \frac 1{1+ Z_{0\rho}^2} \Bigr )^2
+ \frac{\rho^2}{2|\tau|} \, (1+Z_{\rho}^2).
\]
Using that $Z(\rho,\tau)$ converges uniformly smoothly on compact sets to the soliton $Z_0(\rho)$, we have
\begin{equation*}
\begin{split}
m_{\rho}^2 |\tau|\, (1+Z_{\rho}^2)
\Bigl ( \frac 1{1+Z_{\rho}^2} - \frac 1{1+ Z_{0\rho}^2} \Bigr )^2
&=\frac{(n-1)^2}{\rho^2\, (1 + Z_{\rho}^2)}\,
(Z_{0\rho}^2 - Z_{\rho}^2)^2 \, |\tau|\\
&\leq \,\frac{(n-1)^2 \,(Z_{0\rho} + Z_{\rho})^2}{\rho^2} (Z_{0\rho} - Z_{\rho})^2\, |\tau| \\
&\le C\, (Z_{0\rho} - Z_{\rho})^2\, |\tau| < \frac{\eta}{3}\, |\tau|,
\end{split}
\end{equation*}
where we also used that $\frac{(n-1)^2 \,(Z_{0\rho} + Z_{\rho})^2}{\rho^2}$ is uniformly bounded for all $\tau \le \tau_0 \ll -1$ and all $\rho \in [0,L]$.
This follows from the fact that $Z(\rho,\tau)$ uniformly smoothly converges to $Z_0(\rho)$ as $\tau\to-\infty$, for $\rho \in [0,L]$, the asymptotics of $Z_0(\rho)$ around the origin and infinity and the fact that $(Z_0)_{\rho}(0) = Z_{\rho}(0,\tau) = 0$, for all $\tau$.
Furthermore,
\[
\frac{\rho^2}{2|\tau|}\, (1 + Z_{\rho}^2) \le \frac{C_1 L^2 + C_2}{2|\tau|} < \frac{\eta}{3},
\]
which can be achieved by taking $|\tau| \ge |\tau_0| \gg 1$ very large (relative to a fixed constant $L$).
Finally, the fact that $|m_\rho| \leq C(L) \, \rho^{-1}$, for $\rho\in [0,L]$ and the fact that $Z(\rho,\tau)$ converges uniformly smoothly, as $\tau\to -\infty$, to the soliton $Z_0(\rho)$, implies that ${\displaystyle m_\rho^2 \, (1+Z_{\rho}^2) \Bigl ( \frac 1{1+Z_{\rho}^2} - \frac 1{1+ Z_{0\rho}^2} \Bigr )^2 }$ can be made arbitrarily small if $\tau \leq \tau_0(L)\ll-1$.
The above estimates guarantee that
\begin{equation}
\label{eq-G1-est}
\frac{(n-1)^2}{\rho^2} |\tau|\, G_1^2 (1 + Z_{\rho}^2) \le \frac{\eta}{4}\, |\tau|,
\end{equation}
for $\tau \le \tau_0 \ll -1$.
On the other hand, by Lemma \ref{lemma-prop-mu} and the fact that both $Z(\rho,\tau)$ and $Z_2(\rho,\tau)$ converge uniformly smoothly on $\rho\in [0,L]$, as $\tau\to-\infty$, to the soliton $Z_0(\rho)$ we have that
\begin{equation}
\label{eq-G2-est}
\frac{(n-1)^2}{\rho^2}\, |\tau| G_2^2 (1 + Z_{\rho}^2) \le \frac{\eta}{4}\, |\tau|.
\end{equation}
Combining \eqref{eq-G1-est} and \eqref{eq-G2-est} yields \eqref{eqn-G5}.
\end{proof}
We now continue the proof of Proposition \ref{prop-tip}.
Let us insert the bound \eqref{eqn-G5} into the differential inequality \eqref{eqn-energy-i3}.
Using also the bound $| (\varphi_T)_u \chi_{[\theta,2\theta]}| \leq C(\theta) $ we obtain
\begin{multline*}
\frac{d}{d\tau} \Bigl ( \frac 12 \int W^2_T \, e^{\mu}\, du \Bigr )
\leq - \frac 14 \int \frac{(W_T)_u^2}{1+Y_{u}^2}\,e^{\mu}\, du \\
+ (2 \eta |\tau| +C) \int W^2_T \, e^{\mu}\, du + C(\theta) \, \int_{\theta}^{2\theta} W^2\, e^{\mu}\, du.
\end{multline*}
On the other hand, our Poincar\'e inequality says that
\[
\int \frac{(W_T)_u^2}{1+Y_{u}^2}\,e^{\mu}\, du + \int_{\theta}^{2\theta} W_T^2\, e^{\mu}\, du \geq c_0 \, |\tau | \int W^2_T \, e^{\mu}\, du,
\]
with $c_0 >0$ a constant which is uniform in $\tau$ and independent of $\theta$.
Hence,
\begin{equation*}
\begin{split}
- \frac 14 \int \frac{(W_T)_u^2}{1+Y_{u}^2}\,e^{\mu}\, du &+ (2 \eta |\tau| +C) \int W^2_T \, e^{\mu}\, du \leq \\
&- \frac {c_0}4\, |\tau| \int W^2_T \,e^{\mu}\, du + (2 \eta |\tau| +C) \int W^2_T \, e^{\mu}\, du \\
&\leq - \frac {c_0} 8 \int |\tau| \, W^2_T \,e^{\mu}\, du
\end{split}
\end{equation*}
if $\tau \leq \tau_0$, with $\tau_0$ depending on $\eta$, $c_0$ and $C$.
We conclude that in the tip region $\mathcal{T}_{\theta}$ the following holds
\begin{equation}\label{eqn-diff-ineq-tip1}
\frac{d}{d\tau} \int W^2_T \, e^{\mu}\, du \leq - \frac {c_0}8\, |\tau| \int \, W^2_T \, e^{\mu}\, du + \frac{C(\theta)}{|\tau|} \, \int (W\chi_{[\theta,2\theta]})^2\, e^{\mu}\, du.
\end{equation}
Define
\[
f(\tau) := \int W_T^2\, e^{\mu}\, du, \qquad g(\tau) := \int (W\chi_{[\theta,2\theta]})^2\, e^{\mu}\, du .
\]
Then equation \eqref{eqn-diff-ineq-tip1} becomes
\[
\frac{d}{d\tau} f(\tau) \le - \frac{c_0}{8} \, |\tau|\, f(\tau) + \frac{C(\theta)}{|\tau|}\,g(\tau).
\]
Furthermore, setting ${\displaystyle F(\tau) := \int_{\tau-1}^{\tau} f(s)\, ds}$ and ${\displaystyle G(\tau) := \int_{\tau-1}^{\tau} g(s)\, ds}$, we have
\begin{equation*}
\begin{split}
\frac{d}{d\tau} F(\tau) = f(\tau) - f(\tau-1) &= \int_{\tau-1}^{\tau} \frac{d}{ds} f(s)\, ds \\
&\le \frac{c_0}{8}\,\int_{\tau-1}^{\tau} s f(s)\, ds + \int_{\tau-1}^{\tau} \frac{C(\theta)}{|s|} g(s)\, ds
\end{split}
\end{equation*}
implying
\[
\frac{d}{d\tau} F(\tau) \le \frac{c_0}{16} \, \tau \, F(\tau) + \frac{C(\theta)}{|\tau|}G(\tau).
\]
Multiplying by the integrating factor $e^{-c_0\tau^2/32}$ and noting that $\frac{d}{d\tau}\, e^{-c_0\tau^2/32} = -\frac{c_0\tau}{16}\, e^{-c_0\tau^2/32}$, this is equivalent to
\[
\frac{d}{d\tau} \bigl( e^{-c_0 \tau ^2/32} F(\tau) \bigr) \le \frac{C(\theta)}{|\tau|} e^{-c_0 \tau^2/32}\, G(\tau).
\]
Since $W_T$ is uniformly bounded for $\tau \leq \tau_0 \ll -1$, it follows that $f(\tau)$ and therefore also $F(\tau)$ are uniformly bounded functions for $\tau \leq \tau_0$.
Hence, ${\displaystyle \lim_{\tau\to-\infty} e^{-c_0 \tau^2/32} F(\tau) = 0}$, so that from the last differential inequality we get
\begin{equation*}
\begin{split}
e^{-c_0 |\tau|^2/32}\, F(\tau)
&\le C\, \int_{-\infty}^{\tau} \frac{G(s)}{s^2} (|s|\, e^{-c_0 s^2/32} )\, ds \\
&\le \frac{C}{|\tau|^2} \, \sup_{s\le\tau} G(s) \, \int_{-\infty}^{\tau} |s|\, e^{-c_0 s^2/32} \, ds\\
&\le \frac{C}{|\tau|^2} \sup_{s\le\tau} G(s) \, e^{-c_0 \tau^2/32}
\end{split}
\end{equation*}
with $C=C(\theta,\delta)$.
This yields
\[
\sup_{s\le \tau} F(s) \le \frac{C}{|\tau|^2}\, \sup_{s\le\tau} G(s),
\]
or equivalently,
\[
\|W_T\|_{2,\infty} \le \frac{C(\theta)}{|\tau_0|} \|W \chi_{[\theta,2\theta]}\|_{2,\infty},
\]
therefore concluding the proof of Proposition \ref{prop-tip}. \end{proof}
\section{Proofs of Theorems \ref{thm-main-main} and \ref{thm-main}}\label{sec-conclusion}
We will now combine Propositions \ref{prop-cylindrical} and \ref{prop-tip} to conclude the proof of our main result Theorem \ref{thm-main}. Our most general result, Theorem \ref{thm-main-main} will then readily follow by combining Theorem \ref{thm-rot-symm} and Theorem \ref{thm-main}.
In fact, as we have seen at the beginning of Section \ref{sec-regions}, translating and dilating the original solution acts on the rescaled rotationally symmetric solution $u(y,\tau)$ as given in formula \eqref{eq-ualphabeta}.
Let $u_1(y,\tau)$ and $u_2(y,\tau)$ be any two solutions to equation \eqref{eq-u} as in the statement of Theorem \ref{thm-main} and let $u_2^{\alpha\beta\gamma}$ be defined by \eqref{eq-ualphabeta}. Our goal is to find parameters $(\alpha,\beta,\gamma)$ so that the difference \[ w^{\alpha\beta\gamma}:= u_1-u_2^{\alpha\beta\gamma}\equiv 0. \]
Proposition \ref{prop-tip} says that the weighted $L^2$-norm $\|W^{\alpha\beta\gamma} \|_{2,\infty}$ of the difference of our solutions $W^{\alpha\beta\gamma}(u,\tau):=Y_1(u,\tau) - Y_2^{\alpha\beta\gamma}(u,\tau)$ (after we switch the variables $y$ and $u$) in the whole tip region $\mathcal{T}_\theta$ is controlled by $\| W^{\alpha\beta\gamma} \, \chi_{[\theta,2\theta]}\|_{2,\infty}$, where $\chi_{[\theta,2\theta]}(u)$ is supported in the transition region
between the cylindrical and tip regions and is included in the cylindrical region $ \mathcal{C}_{\theta} = \{(y, \tau) : u_1(y,\tau) \ge \theta/2\}$. Lemma \ref{prop-norm-equiv} below says that the norms $\| W^{\alpha\beta\gamma} \, \chi_{[\theta, 2\theta]}\|_{2,\infty}$ and $\| w^{\alpha\beta\gamma} \, \chi_{D_{2\theta}}\|_{\mathfrak{H},\infty}$ are equivalent for every number $\theta >0$ sufficiently small (recall the definition of $\| \cdot \|_{\mathfrak{H},\infty}$ in \eqref{eqn-normp0}-\eqref{eqn-normp}). Therefore combining Propositions \ref{prop-cylindrical} and \ref{prop-tip} gives the crucial estimate \eqref{eqn-w1230} which will be shown in detail in Proposition \ref{prop-cor-main} below. This estimate says that the norm of the difference $w^{\alpha\beta\gamma}_{\mathcal{C}}$ of our solutions, when restricted to the cylindrical region, is dominated by the norm of the projection of $w^{\alpha\beta\gamma}_{\mathcal{C}}$ onto the zero eigenspace of the operator $\mathcal{L}$ (the linearization of our equation on the limiting cylinder). However, Proposition \ref{prop-cylindrical} holds under the assumption that the projection of $w_\mathcal{C}^{\alpha\beta\gamma}$ onto the positive eigenspace of $\mathcal{L}$ is zero, that is $\mathcal{P}_+ w_\mathcal{C}^{\alpha\beta\gamma}(\tau_0) =0$. Recall that the zero eigenspace of $\mathcal{L}$ is spanned by the function $\psi_2(y) = y^2 - 2$ and the positive eigenspace is spanned by the eigenvectors $\psi_0(y) = 1$ (corresponding to eigenvalue $1$) and $\psi_1(y) = y$ (corresponding to eigenvalue $1/2$).
After having established that the projection onto the zero eigenspace, $a(\tau):= \langle w_\mathcal{C}^{\alpha\beta\gamma}, \psi_2 \rangle$,
dominates the norm $\| w^{\alpha\beta\gamma}_{\mathcal{C}} \|_{\mathfrak{H},\infty}$, the conclusion of Theorem \ref{thm-main} will follow by establishing an appropriate differential inequality for $a(\tau)$, for $\tau \leq \tau_0\ll -1$, while also having $a(\tau_0) = \mathcal{P}_0 w_\mathcal{C}^{\alpha\beta\gamma}(\tau_0) = 0$ at the same time. The above discussion shows that it is essential for our proof to have \begin{equation}\label{eqn-abc}
\mathcal{P}_+ w_\mathcal{C}^{\alpha\beta\gamma}(\tau_0) = \mathcal{P}_0 w_\mathcal{C}^{\alpha\beta\gamma}(\tau_0) = 0. \end{equation} We will next show that for every $\tau_0 \ll -1$ we can find parameters $\alpha=\alpha(\tau_0), \beta=\beta(\tau_0)$ and $\gamma=\gamma(\tau_0)$ such that \eqref{eqn-abc} holds and we will also give their asymptotics relative to $\tau_0$. Let us emphasize that we need to be able {\em for every} $\tau_0 \ll -1$ to find parameters $\alpha, \beta, \gamma$ so that \eqref{eqn-abc} holds, since up to the final step of our proof we have to keep adjusting $\tau_0$ by taking it even more negative so that our estimates hold (see Remark \ref{rem-choice-par} below).
For $v_i$ related to $u_i$ by $u_i = \sqrt{2(n-1)}(1+v_i)$, the corresponding dilations by $(\alpha,\beta,\gamma)$ are given by \[ v_i^{\alpha\beta\gamma}(y, \tau) = \sqrt{1+\beta\, e^{\tau}} \Bigl\{ 1+ v_i\Bigl( \frac{y - \alpha \, e^{\tau/2}} {\sqrt{1+\beta e^\tau} }, \tau+\gamma-\log\bigl(1+\beta e^\tau\bigr) \Bigr) \Bigr\} -1. \] Simply write $v$ for $v_1$ and ${\bar v}$ for $v_2^{\alpha\beta\gamma}$.
Our asymptotics in Theorem \ref{thm-old} imply that each $v_i$ satisfies the following estimates in the cylindrical region $\mathcal{C}_{\theta}$: for any $\epsilon_0>0$ and any number $M>0$ there is a $\tau_{\epsilon_0,M} < 0$ such that \begin{equation}\label{eqn-good3}
v_i(y, \tau) = - \frac{y^2-2} {4|\tau|} + \frac{\epsilon(y, \tau)} {|\tau|}, \qquad \mbox{for} \,\,\, 0\leq y \leq 2M,\, \tau\leq \tau_{\epsilon_0,M} \end{equation} where $\epsilon(y, \tau)$ is a generic function whose definition may change from line to line, but which always satisfies \begin{equation}
|\epsilon(y, \tau)| \leq \epsilon_0, \qquad \mbox{for} \,\,\, 0\leq y \leq 2M,\, \tau\leq \tau_{\epsilon_0,M}. \end{equation} Furthermore, by choosing $\tau_{\epsilon_0,M} \ll -1$ we also have \begin{equation}
0\leq -v_i(y, \tau) \leq C \, \frac{y^2}{|\tau|} \qquad \text{in } \,\,\,\, \mathcal{C}_{\theta} \cap \{ |y| \geq M \}, \, \tau\leq \tau_{\epsilon_0,M}.
\label{eq-v-quadratic-upper-bound} \end{equation}
We will next estimate the first three components of the truncated difference $\varphi_\mathcal{C}({\bar v}-v)$, \[ \Bigl\langle \psi_j , \, \varphi_\mathcal{C}\, ( {\bar v} - v ) \Bigr\rangle \qquad (j=0, 1, 2) \] where $\varphi_\mathcal{C}$ is the cut-off function for the cylindrical region $ \mathcal{C}_{\theta}$ and we will show that the coefficients $\alpha, \beta$ and $\gamma$ can be chosen so as to make these components vanish. Instead of working directly with $\alpha, \beta$ and $\gamma$ it will be more convenient to use \begin{equation}
\label{btheta}
b= \sqrt{1+\beta e^\tau} -1, \qquad \Gamma = \frac{\gamma - \log(1+\beta e^\tau)}{\tau}, \qquad A = \alpha\, e^{\tau/2}. \end{equation} Then \begin{equation}
{\bar v}(y, \tau) = b + (1+b)\, v_2\Bigl( \frac{y-A} {1+b} , (1+\Gamma)\tau \Bigr), \end{equation} since, by \eqref{btheta}, $1+b = \sqrt{1+\beta e^\tau}$, $A = \alpha\, e^{\tau/2}$ and $(1+\Gamma)\tau = \tau + \gamma - \log\bigl(1+\beta e^\tau\bigr)$.
Our next goal is to show the following result.
\begin{prop}\label{lem-rescaling-components-zero}
There is a number $ \tau_* \ll -1 $ such that for all $\tau\leq \tau_*$ there exist $ b $, $ \Gamma $ and $A$ such that the difference $w^{\alpha\beta\gamma} :=u_1 - u_2^{\alpha\beta\gamma}$ satisfies
\[
\langle \psi_0, \varphi_\mathcal{C} \, w^{\alpha\beta\gamma} \rangle = \langle \psi_1, \varphi_\mathcal{C} \, w^{\alpha\beta\gamma} \rangle = \langle \psi_2, \varphi_\mathcal{C} \,w^{\alpha\beta\gamma} \rangle = 0.
\]
In addition, the parameters $\alpha, \beta$ and $\gamma$ can be chosen so that $b$, $\Gamma$ and $A$ defined in \eqref{btheta} satisfy
\begin{equation}\label{eq-b-Theta-bound}
b = o\Bigl( |\tau|^{-1} \Bigr), \qquad \Gamma = o(1)\qquad \mbox{and} \qquad |A| = o(1), \qquad \mbox{as}\,\, \tau \to -\infty.
\end{equation}
Equivalently, this means that the triple $(\alpha, \beta, \gamma)$ is admissible with respect to $\tau$, according to our Definition \ref{def-admissible}. \end{prop}
The proof of the proposition will be based on the following estimate.
\begin{lemma} \label{lem-rescaling-effect-on-components} For every $ \eta>0 $ there exist $ \tau_\eta <0 $ such that for all $ \tau\leq \tau_\eta $, and all $ b , \Gamma, A\in {\mathbb R} $ with
\[
|b| \leq \frac{1}{|\tau|}, \qquad |\Gamma|\leq \frac12, \qquad |A| \le 1
\]
one has
\begin{equation}\label{eq-rescaling-effect-on-components}
\begin{split}
\Bigl| \langle\hat\psi_0, \varphi_\mathcal{C}({\bar v}-v)\rangle - b + \frac{A^2}{4(\Gamma+1)|\tau|}\Bigr| &+ \Bigl| \langle\hat\psi_1, \varphi_\mathcal{C}({\bar v}-v)\rangle - \frac{A}{2|\tau|(\Gamma+1)} \Bigr| \\&+ \Bigl| \langle\hat\psi_2, \varphi_\mathcal{C}({\bar v}-v)\rangle - \frac{\Gamma} {4(\Gamma+1)|\tau|} \Bigr| \leq \frac{\eta}{|\tau|}
\end{split}
\end{equation}
where $ \hat\psi_j = \psi_j/\langle \psi_j, \psi_j \rangle $. \end{lemma} The conditions on $b$, $\Gamma$ and $A$ are met if the original parameters $\alpha, \beta$ and $\gamma$ satisfy \[
|\alpha\, e^{\tau/2}| \le 1, \qquad |\beta e^\tau|\leq \frac{C}{|\tau|}, \qquad |\gamma|\leq \tfrac{1}{3} |\tau|. \]
\begin{proof}[Proof that Lemma~\ref{lem-rescaling-effect-on-components} implies Proposition~\ref{lem-rescaling-components-zero}]
Let $\eta_*>0$ be given, and consider the ball
\[
\mathcal{B} = \Bigl\{ (b, \Gamma, A) \mid |\tau|^2b^2 + \Gamma^2 + A^2 \leq \eta_*^2\Bigr\}.
\]
On this ball we define the map $ \Phi : \mathcal{B} \to{\mathbb R}^3 $ given by
\[
\Phi(b, \Gamma, A) =
\begin{pmatrix}
|\tau|\langle \hat\psi_0, \varphi_\mathcal{C}({\bar v}-v) \rangle\\
|\tau|\langle \hat\psi_1, \varphi_\mathcal{C}({\bar v}-v) \rangle\\
|\tau| \langle \hat\psi_2, \varphi_\mathcal{C}({\bar v}-v) \rangle
\end{pmatrix}.
\]
The map $ \Phi $ is continuous because the solution $ {\bar v} $ depends continuously on the parameters $b, \Gamma , A$.
It follows from \eqref{eq-rescaling-effect-on-components} that if $\eta \ll\eta_*$ is chosen small enough, and if $ \tau $ is restricted to $ \tau < \tau_\eta $, with $ \tau_\eta $ defined as in Lemma~\ref{lem-rescaling-effect-on-components}, then the map $ \Phi $ restricted to the boundary of the ball $ \mathcal{B} $ is homotopic to the injective map
\[(b, \Gamma,A) \mapsto \left(|\tau|b - \frac{A^2}{4(\Gamma+1)}, \frac{A}{2(\Gamma+1)},\frac{\Gamma}{4(\Gamma+1)}\right),\]
through maps from $ \partial\mathcal{B} $ to $ {\mathbb R}^3\setminus\{0\} $.
The map $ \Phi $ from the full ball to ${\mathbb R}^3$ therefore has degree one, and it follows that for some $ (b',\Gamma',A')\in\mathcal{B} $ one has $ \Phi(b', \Gamma',A') = 0 $.
The fact that the triple $(b',\Gamma',A')$ we have just found does indeed satisfy~\eqref{eq-b-Theta-bound} follows from the definition of the ball $ \mathcal{B} $. \end{proof}
\begin{proof}[Proof of Lemma~\ref{lem-rescaling-effect-on-components}]
First we consider the outer region $\mathcal{C}_{\theta} \cap \{ |y| \geq M \}$ where \eqref{eq-v-quadratic-upper-bound} implies that
\begin{equation}\label{eq-v-bv-difference}
\big | \varphi_\mathcal{C} \, ({\bar v} - v ) \big | \leq |b| + \tilde C \frac{y^2}{|\tau|} \leq \frac{1+\tilde Cy^2} {|\tau|}, \quad\text{on } \mathcal{C}_{\theta} \cap \{ |y| \geq M \}.
\end{equation}
For the inner region $\mathcal{C}_{\theta} \cap \{ |y| \leq M \}$, \eqref{eqn-good3} implies that for all $\tau\leq 2\, \tau_{\epsilon_0,M} \ll -1$ one has
\begin{align*}
{\bar v}(y,\tau) &= b - \frac{1}{(1+b)} \frac{(y-A)^2-2(1+b)^2}{4(1+\Gamma)|\tau|}
+ \frac{2\epsilon(y,\tau)}{(1+\Gamma)|\tau|}\\
&= b + \frac{2b+b^2}{(1+b)} \frac{1}{2(1+\Gamma)|\tau|} - \frac{1}{(1+b)} \frac{y^2-2}{4(1+\Gamma)|\tau|} \\
&\qquad + \frac{2A y - A^2}{4|\tau|(\Gamma+1)(b+1)} + \frac{2\epsilon(y,\tau)}{(1+\Gamma)|\tau|}.
\end{align*}
Since $v = v_1$ also satisfies \eqref{eqn-good3}, we then have
\begin{align*}
& {\bar v}(y, \tau) - v(y,\tau)\\
&= b + \frac{2b+b^2}{2(1+\Gamma)(1+b)} \frac{1}{|\tau|} + \Bigl \{ 1 - \frac{1}{(1+b)} \frac{1}{(1+\Gamma)} \Bigr \}\frac{y^2-2}{4|\tau|} + \frac{2Ay - A^2}{4|\tau|(\Gamma+1)(b+1)}
+ \frac{5\epsilon(y,\tau)}{|\tau|}\\
&= b + \frac{2b+b^2}{2(1+\Gamma)(1+b)} \frac{1}{|\tau|} + \frac{b+\Gamma+b\Gamma}{(1+\Gamma)(1+b)} \frac{y^2-2}{4|\tau|} + \frac{2Ay - A^2}{4|\tau|(\Gamma+1)(b+1)} + \frac{5\epsilon(y,\tau)}{|\tau|}.
\end{align*}
We conclude that in the region $ |y|\leq M$ and for $ \tau\leq 2 \tau_{\epsilon_0,M}$ we have
\begin{equation*}
{\bar v}(y, \tau) - v(y,\tau) = b - \frac{A^2}{4(\Gamma+1)|\tau|} + \frac{\Gamma}{\Gamma+1} \frac{(y^2-2)}{4|\tau|} + \frac{Ay}{2|\tau|(\Gamma+1)}
+ R(y,\tau)
\end{equation*}
where the remainder $R$ satisfies
\begin{equation}\label{eq-R-in-cylindrical-bound}
|R(y,\tau)| \leq C \, \Bigl( \frac{1+y^2} {|\tau|^2} + \frac{\epsilon(y,\tau)}{|\tau|} \Bigr), \quad \mbox{on} \,\, \mathcal{C}_\theta \cap \{ |y| \leq M \}\end{equation} where $ |\epsilon(y,\tau)| \leq \epsilon_0$ and $C$ is a universal constant that does not depend on $\tau$ or $M$.
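To see the bound on $R$, note that, apart from the generic error $5\epsilon(y,\tau)/|\tau|$, the terms absorbed into $R$ from the previous display are quadratically small: using $|b| \le |\tau|^{-1}$, $|\Gamma| \le \tfrac12$ and $|A| \le 1$, we have, for instance,
\[
\Bigl|\frac{2b+b^2}{2(1+\Gamma)(1+b)}\Bigr|\,\frac{1}{|\tau|} \le \frac{C}{|\tau|^2},
\qquad
\Bigl|\frac{b+\Gamma+b\Gamma}{(1+\Gamma)(1+b)} - \frac{\Gamma}{1+\Gamma}\Bigr|\,\frac{|y^2-2|}{4|\tau|} = \frac{|b|}{(1+\Gamma)(1+b)}\,\frac{|y^2-2|}{4|\tau|} \le \frac{C\,(1+y^2)}{|\tau|^2},
\]
and similarly for the terms involving $A$.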
Combining the last bound, which holds on $|y| \leq M$, with our first bound \eqref{eq-v-bv-difference} on $|y|\geq M$
yields that there exists $\tau_* \ll -1$ such that for all $y\in \mathbb{R}$ and $ \tau\leq \tau_*$, we have
\begin{equation}
\label{eq-diff-expression}
\varphi_\mathcal{C}\, \big ({\bar v} - v\big ) = b - \frac{A^2}{4(\Gamma+1)|\tau|} + \frac{A\, y}{2|\tau|(\Gamma+1)} + \frac{\Gamma}{\Gamma+1} \, \frac{y^2-2}{4|\tau|}
+ R(y,\tau)
\end{equation}
where the new error $R$ still satisfies \eqref{eq-R-in-cylindrical-bound} when $|y|\leq M$, and
\begin{equation}
\label{eq-R-estimate}
|R(y, \tau)|\leq C \, \dfrac{1+y^2} {|\tau|}, \quad \mbox{on} \,\, \mathcal{C}_\theta \cap \{ |y| \geq M \}
\end{equation}
for a universal constant $C$.
\subsubsection*{Components of the error}
Using \eqref{eq-diff-expression}, we now estimate the inner products $\langle \psi_j, \varphi_\mathcal{C}({\bar v}-v)\rangle$:
\begin{align*}
\bigl\langle \psi_j, \varphi_\mathcal{C}({\bar v}-v)\bigr\rangle &= \langle\psi_j, 1\rangle \Bigl (b - \frac{A^2}{4(\Gamma+1)|\tau|}\Bigr ) + \langle \psi_j, y\rangle \, \frac{A}{2(\Gamma+1)|\tau|}\\
&\quad + \langle\psi_j, y^2-2\rangle \frac{\Gamma} {4(\Gamma+1)|\tau|} + \langle\psi_j, R\rangle.
\end{align*}
In view of the fact that $\psi_0=1$, $\psi_1 = y$ and $\psi_2=y^2-2$, we have
$$
\frac{\langle\psi_0, \varphi_\mathcal{C}({\bar v}-v)\rangle} {\langle\psi_0,\psi_0\rangle}
= b - \frac{A^2}{4(\Gamma+1)|\tau|}+ \frac{\langle\psi_0, R\rangle} {\langle\psi_0,\psi_0\rangle}
$$
and
$$\frac{\langle\psi_1, \varphi_\mathcal{C}({\bar v}-v)\rangle} {\langle\psi_1,\psi_1\rangle}
= \frac{A}{2(\Gamma+1)|\tau|} + \frac{\langle\psi_1, R\rangle} {\langle\psi_1,\psi_1\rangle}, \quad
\frac{\langle\psi_2, \varphi_\mathcal{C}({\bar v}-v)\rangle} {\langle\psi_2,\psi_2\rangle} = \frac{\Gamma} {4(\Gamma+1)|\tau|} + \frac{\langle\psi_2, R\rangle} {\langle\psi_2,\psi_2\rangle}.$$
We claim that \eqref{eq-R-in-cylindrical-bound}-\eqref{eq-R-estimate} imply that for every $ \eta>0 $ there exist $ \tau_\eta < 0 $ and $ M_\eta >0 $ such that for all $ \tau \leq \tau_\eta $ and $ M\geq M_\eta $ one has \begin{equation}\label{eqn-finalR}
\Bigl|\langle \psi_j, R\rangle\Bigr| \leq \frac{\eta} {|\tau|}. \end{equation} Indeed, to prove the above claim we notice that all three inner products can be bounded by the integral \[
\int_0^\infty (1+ y + y^2) \, |R(y, \tau)|\, e^{-y^2/4} \, dy. \] Split the integral into three parts, the first from $y=0$ to $y=M$: \begin{align*}
\int_0^M |R|\, (1+y+y^2) \, e^{-y^2/4}\, dy
&\leq \frac{C}{|\tau|} \int_0^M (1+y+y^2) \Bigl\{ \frac{1}{|\tau|}(1+y^2) + \epsilon \Bigr\} e^{-y^2/4} dy\\
&\leq \frac{C}{|\tau|^2} + \frac{C\epsilon}{|\tau|}. \end{align*}
If we choose $\tau_{\epsilon,M}$ so that $-\tau_{\epsilon,M} > 1/\epsilon $, then \[
\int_0^M |R| \, (1+y+y^2)\, e^{-y^2/4} \, dy \leq \frac{C\epsilon}{|\tau|} \leq \frac{\eta}{ 2|\tau|} \] by taking $\epsilon$ small enough. For the remaining part, using \eqref{eq-R-estimate} we obtain \begin{align*}
\int_{|y|\geq M} |R| \, (1+y + y^2)\, e^{-y^2/4}\, dy
&\leq C\int_{M}^\infty (1+y^2)\, \frac{1+y^2}{|\tau|} e^{-y^2/4} \, dy \\
&\leq \frac{C}{|\tau|} \int_M^\infty (1+y^2)^2 e^{-y^2/4} \, dy\\
&\leq \frac{C}{|\tau|} M^3e^{-M^2/4} \leq \frac{\eta}{2 |\tau|} \end{align*} by choosing $ M $ sufficiently large. The lemma now readily follows from \eqref{eq-diff-expression}, \eqref{eq-v-bv-difference} and \eqref{eqn-finalR}. \end{proof}
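In the last estimate of the proof above we used the Gaussian-tail bound $\int_M^\infty (1+y^2)^2 e^{-y^2/4}\,dy \le C\, M^3 e^{-M^2/4}$, which follows, for instance, from one integration by parts: for $M \ge 1$,
\[
\int_M^\infty (1+y^2)^2\, e^{-y^2/4}\,dy \le 4\int_M^\infty y^4\, e^{-y^2/4}\,dy = 8M^3 e^{-M^2/4} + 24\int_M^\infty y^2\, e^{-y^2/4}\,dy \le C\,M^3\, e^{-M^2/4}.
\]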
\begin{remark}[The choice of parameters $(\alpha,\beta, \gamma)$]\label{rem-choice-par}
We can choose $\tau_0 \ll -1$ to be any small number so that $\tau_0 \le \tau_*$, where $\tau_*$ is as in Proposition \ref{lem-rescaling-components-zero} and so that all our uniform estimates in previous sections hold for $\tau \le \tau_0$.
Note also that, having Proposition \ref{lem-rescaling-components-zero}, we can decrease $\tau_0$ if necessary and choose parameters $\alpha, \beta$ and $\gamma$ again so that we still have $\mathcal{P}_+ w_\mathcal{C}(\tau_0) = \mathcal{P}_0 w_\mathcal{C}(\tau_0) = 0$, without affecting our estimates.
Hence, from now on we will be assuming that we have fixed parameters $\alpha, \beta$ and $\gamma$ at some time $\tau_0 \ll -1$, to have both projections zero at time $\tau_0$.
As a consequence of Proposition \ref{lem-rescaling-components-zero} which shows that the parameters $(\alpha, \beta, \gamma)$ are {\em admissible} with respect to $\tau_0$, Remark \ref{rem-cylindrical} and Remark \ref{rem-tip}, all the estimates for $w = u_1- u_2^{\alpha\beta\gamma}$ will then hold for all $\tau \le \tau_0$, {\em independently of our choice} of
$(\alpha, \beta, \gamma)$. \end{remark}
As we pointed out above, we need to show next that the norms of the difference of our two solutions with respect to the weights defined in the cylindrical and the tip regions are equivalent in the intersection between the regions, the so-called {\em transition} region.
\begin{lemma}[Equivalence of the norms in the transition region]
\label{prop-norm-equiv}
Let $w, W$ denote the difference of the two solutions, $w:=u_1-u_2^{\alpha\beta\gamma}$ and $W:=Y_1-Y_2^{\alpha\beta\gamma}$ in the cylindrical and tip regions respectively.
Then, for every $\theta > 0$ small there exist $\tau_0 \ll -1$ and uniform constants $c(\theta), C(\theta) > 0$, so that for $\tau\le \tau_0$, we have
\begin{equation}\label{eqn-normequiv}
c(\theta ) \, \| W \chi_{_{[\theta, 2\theta]}} \|_{2,\infty} \leq \| w \, \chi_{_{D_{2\theta}}} \|_{\mathfrak{H},\infty} \leq C(\theta) \, \| W \chi_{_{[\theta, 2\theta]}} \|_{2,\infty}
\end{equation}
where $D_{2\theta} := \{(y,\tau):\,\, \theta \le u_1(y,\tau) \le 2\theta\}$. \end{lemma}
\begin{proof}
To simplify the notation we put $u_2:= u_2^{\alpha\beta\gamma}$ and $Y_2:=Y_2^{\alpha\beta\gamma}$ in this proof.
Define $A_{2\theta} := D_{2\theta}\cup\{ (y,\tau):\,\, \theta \le u_2(y,\tau) \le 2\theta\}$.
The convexity of both our solutions $u_1$ and $u_2$ implies that
\begin{equation}
\label{eq-secant}
\min_{A_{2\theta}}\,|(u_2)_y| \le \Bigl| \frac{u_1(y,\tau) - u_2(y,\tau)} {Y_1(u,\tau) - Y_2(u,\tau)}\Bigr| \le \max_{A_{2\theta}} |(u_2)_y|.
\end{equation}
This easily follows from
\[
\frac{|u_1(y,\tau) - u_2(y,\tau)|}{|Y_1(u,\tau) - Y_2(u,\tau)|} = \frac{|u_2(Y_1(u,\tau),\tau) - u_2(Y_2(u,\tau),\tau)|}{|Y_1(u,\tau) - Y_2(u,\tau)|} = |u_{2y}(\xi,\tau)|
\]
where $\xi$ is a point in between $Y_1(u,\tau)$ and $Y_2(u,\tau)$.
The results in \cite{ADS} (see also Theorem \ref{thm-old} in the current paper) show that by the asymptotics in the intermediate region for $u_2$, we have
\begin{equation}
\label{eq-u2y}
\frac{c_1(\theta)}{\sqrt{|\tau|}} \le |u_{2y}(y,\tau)| \le \frac{C_1(\theta)}{\sqrt{|\tau|}}, \qquad \mbox{for} \,\,\,\, y\in \{y \,\,\,|\,\,\, \theta \le u_2(y,\tau) \le 2\theta\}
\end{equation}
for uniform constants $c_1(\theta) > 0$ and $C_1(\theta) > 0$, independent of $\tau$ for $\tau \le \tau_0$.
On the other hand, using that $u_2$ has the same asymptotics in the intermediate region as $u_1$, it is easy to see that for $\tau \le \tau_0 \ll -1$,
\[
D_{2\theta} \subset \{ (y,\tau):\,\, \frac{\theta}{2} \le u_2(y,\tau) \le 3\theta\}
\]
and hence
\[
\frac{c_1(\theta)}{\sqrt{|\tau|}} \le |u_{2y}| \le \frac{C_1(\theta)}{\sqrt{|\tau|}}, \qquad \mbox{for} \,\,\,\, y\in D_{2\theta}.
\]
Combining this, \eqref{eq-u2y} and \eqref{eq-secant} yields
\begin{equation}
\label{eq-equiv-secant}
\frac{c_1(\theta)}{\sqrt{|\tau|}} \le \frac{|w(y,\tau)|}{|W(u,\tau)|} \le \frac{C_1(\theta)}{\sqrt{|\tau|}}
\end{equation}
for all $y\in D_{2\theta}$ and $\tau \le \tau_0 \ll -1$.
See Figure \ref{fig-conversion} on the next page.
By \eqref{eq-weight} we have $\mu(u,\tau) = - Y_1^2(u,\tau)/4$ for $u \in [\theta, 2\theta]$.
Introducing the change of variables $y = Y_1(u,\tau)$ (or equivalently $u = u_1(y,\tau)$), the inequality \eqref{eq-equiv-secant} yields
\[
\begin{split}
\int_{\theta}^{2\theta} W^2 \, e^{\mu(u,\tau)}\, du = \int_{\theta}^{2\theta} W^2 \, e^{- \frac{Y_1^2(u,\tau)}4}\, du &\le C(\theta) \sqrt{|\tau|} \int_{D_{2\theta}} w^2 e^{-\frac{y^2}{4}}\, dy
\end{split}
\]
where we used that $du = (u_1)_y\, dy$ and that due to our asymptotics from \cite{ADS} in the intermediate region, we have
\begin{equation}\label{eqn-cv}
\frac{c_2(\theta)}{\sqrt{|\tau|}} \le |(u_1)_y| \le \frac{C_2(\theta)}{\sqrt{|\tau|}}, \qquad \text{for } y\in D_{2\theta}.
\end{equation}
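Spelling out the arithmetic: on $D_{2\theta}$, the bounds \eqref{eq-equiv-secant} and \eqref{eqn-cv} give
\[
W^2 \le \frac{|\tau|}{c_1(\theta)^2}\, w^2 \qquad \mbox{and} \qquad |du| = |(u_1)_y|\, dy \le \frac{C_2(\theta)}{\sqrt{|\tau|}}\, dy,
\]
which, multiplied together, produce the factor $C(\theta)\sqrt{|\tau|}$ in the display above.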
In conclusion
\[
\| W \chi_{_{[\theta, 2\theta]}} \|_{2,\infty} \leq C(\theta) \, \| w \, \chi_{_{D_{2\theta}}} \|_{\mathfrak{H},\infty}
\]
which proves one of the inequalities in \eqref{eqn-normequiv}.
\begin{figure}
\caption{Converting the vertical distance $u_2(y, \tau)-u_1(y,\tau)$ to the horizontal distance $Y_2(u, \tau)-Y_1(u,\tau)$.
Given a point $(y, u)$ on the graph of $u_1(\cdot,\tau)$ we define $Y_1 = y$, $u=u_1(y, \tau)$, $Y_2 = Y_2(u, \tau)$.
By the Mean Value Theorem the ratio $\frac{u_2-u_1}{Y_2-Y_1}$ must equal the derivative $u_{2,y}(\tilde y,\tau)$ at some $\tilde y\in (Y_1, Y_2)$. }
\label{fig-conversion}
\end{figure}
We will next show the other inequality in \eqref{eqn-normequiv}.
To this end, we use again \eqref{eq-equiv-secant}, the change of variables $u=u_1(y,\tau)$ (or equivalently $y = Y_1(u,\tau)$) and \eqref{eqn-cv}, to obtain
\begin{equation}
\int_{D_{2\theta}} w^2 e^{-\frac{y^2}{4}}\, dy \le \frac{C(\theta)}{\sqrt{|\tau|}} \int_{\theta}^{2\theta} W^2 e^{-\frac{Y_1^2(u,\tau)}{4}}\, du = \frac{C(\theta)}{\sqrt{|\tau|}}\, \int_{\theta}^{2\theta} W^2 \, e^{\mu(u,\tau)}\, du
\end{equation}
from which the bound
\[
\| w \, \chi_{_{D_{2\theta}}} \|_{\mathfrak{H},\infty} \leq C(\theta) \, \| W \chi_{_{[\theta, 2\theta]}} \|_{2,\infty}
\]
readily follows. \end{proof}
We will next combine the main results in the previous two sections, Propositions \ref{prop-cylindrical} and \ref{prop-tip}, with the estimate \eqref{eqn-normequiv} above
to establish our {\em crucial estimate} which says that what actually dominates in the norm $\|w_\mathcal{C}\|_{\mathfrak{D},\infty}$ is $\|\mathcal{P}_0 w_\mathcal{C}\|_{\mathfrak{D},\infty}$.
\begin{prop}
\label{prop-cor-main}
For any $\epsilon >0$ there exists a $\tau_0 \ll -1$ so that we have
\begin{equation}
\label{eqn-w1230}
\| \hat w_\mathcal{C} \|_{\mathfrak{D},\infty} \leq \epsilon\, \|\mathcal{P}_0 w_\mathcal{C}(\tau)\|_{\mathfrak{D},\infty}.
\end{equation} \end{prop}
\begin{proof}
By Proposition \ref{lem-rescaling-components-zero} we know that for every $\tau_0 \ll -1$ sufficiently small we can choose parameters $\alpha, \beta$ and $\gamma$ which are admissible with respect to $\tau_0$ and such that $\mathcal{P}_+ w_\mathcal{C}(\tau_0) = \mathcal{P}_0 w_\mathcal{C}(\tau_0) = 0$.
From now on we will always consider $w(y,\tau) = u_1(y,\tau) - u_2^{\alpha\beta\gamma}(y,\tau)$, for these chosen parameters $\alpha, \beta$ and $\gamma$.
By Proposition \ref{prop-cylindrical}, for every $\epsilon > 0$, there exists a $\tau_0 \ll -1$ so that
\[
\|\hat{w}_C\|_{\mathfrak{D},\infty} < \frac{\epsilon}{3} \bigl( \|w_\mathcal{C}\|_{\mathfrak{D},\infty} + \|w \, \chi_{D_{\theta}}\|_{\mathfrak{H},\infty}\bigr)
\]
where $D_{\theta} = \{y \,\,\,|\,\,\, {\theta}/{2} \le u_1(y,\tau) \le \theta\}$.
Furthermore, by Lemma \ref{prop-norm-equiv}, by decreasing $\tau_0$ if necessary we ensure that the following holds
\begin{equation}
\label{eq-dominates1}
\begin{split}
\|\hat{w}_C\|_{\mathfrak{D},\infty} &< \frac{\epsilon}{3} (\|w_\mathcal{C}\|_{\mathfrak{D},\infty} + C(\theta) \|W\chi_{[\theta/2,\theta]}\|_{2,\infty}) \\
&< \frac{\epsilon}{3}\, (\|w_\mathcal{C}\|_{\mathfrak{D},\infty} + C(\theta)\, \|W_T\|_{2,\infty})
\end{split}
\end{equation}
where $\chi_{[\theta/2,\theta]}$ is the characteristic function of the interval $[\theta/2,\theta]$ in $u$, and where we used the property of the cut-off function $\varphi_T$ that $\varphi_T \equiv 1$ for $u\in [\theta/2,\theta]$.
By Proposition \ref{prop-tip}, there exist $0 < \theta \ll 1$ and $\tau_0 \ll -1$ so that
\[
\|W_T\|_{2,\infty} < \frac{C(\theta)}{\sqrt{|\tau_0|}} \, \|W \chi_{[\theta,2\theta]}\|_{2,\infty}.
\]
By Lemma \ref{prop-norm-equiv} we have
\[
\|W_T\|_{2,\infty} \le \frac{C(\theta)}{\sqrt{|\tau_0|}} \, \|w \, \chi_{D_{2\theta}}\|_{\mathfrak{H},\infty} \le \frac{C(\theta)}{\sqrt{|\tau_0|}}\, \|w_\mathcal{C}\|_{\mathfrak{H},\infty}
\]
where we also use that $\varphi_\mathcal{C} \equiv 1$ on $D_{2\theta}$.
Combining this with \eqref{eq-dominates1} yields
\[
\|\hat{w}_C\|_{\mathfrak{D},\infty} < \frac{\epsilon}{3}\Bigl( \|w_\mathcal{C}\|_{\mathfrak{D},\infty} + \frac{C(\theta)}{\sqrt{|\tau_0|}}\, \|w_\mathcal{C}\|_{\mathfrak{H},\infty}\Bigr) < \frac{2\epsilon}{3} \|w_\mathcal{C}\|_{\mathfrak{D},\infty}
\]
by choosing $|\tau_0|$ sufficiently large relative to $C(\theta)$.
Since $w_\mathcal{C} = \hat{w}_\mathcal{C} + \mathcal{P}_0 w_\mathcal{C}$, the last estimate gives $\|\hat{w}_C\|_{\mathfrak{D},\infty} \le \frac{2\epsilon/3}{1-2\epsilon/3}\, \|\mathcal{P}_0 w_\mathcal{C}\|_{\mathfrak{D},\infty}$, which, after choosing $\epsilon$ smaller if necessary, yields \eqref{eqn-w1230}, finishing the proof of the proposition.
\end{proof}
\begin{proof}[Proof of the Main Theorem \ref{thm-main}]
Recall that $w^{\alpha\beta\gamma}(y,\tau) = u_1(y,\tau) - u_2^{\alpha\beta\gamma}(y,\tau)$, which for brevity we denote by $w(y,\tau) = u_1(y,\tau) - u_2(y,\tau)$, where $u_2^{\alpha\beta\gamma}(y,\tau)$ is given by \eqref{eq-ualphabeta}.
Proposition \ref{lem-rescaling-components-zero} tells us that for every $\tau_0 \ll -1$ sufficiently small we can choose parameters $\alpha, \beta$ and $\gamma$ which are admissible with respect to $\tau_0$ and such that $\mathcal{P}_+ w_\mathcal{C}(\tau_0) = \mathcal{P}_0 w_\mathcal{C}(\tau_0) = 0$.
For a given $\tau_0$ sufficiently small we fix parameters $\alpha, \beta$ and $\gamma$ so that the above holds.
Due to the admissibility of the parameters, all our estimates hold for the difference $w^{\alpha\beta\gamma}(y,\tau)$, for $\tau \le \tau_0 \ll -1$, independently of $\alpha, \beta$ and $\gamma$.
Our goal is to show that for that choice of parameters $w(y,\tau) \equiv 0$.
Following the notation from previous sections we have
\[
\frac{\partial}{\partial \tau} w_\mathcal{C} = \mathcal{L}[w_\mathcal{C}] + \mathcal{E}[w_\mathcal{C}] + \bar{\mathcal{E}}[w,\varphi_\mathcal{C}]
\]
with $w_\mathcal{C} = \hat{w}_\mathcal{C} + a(\tau) \, \psi_2$, where $a(\tau) = \langle w_\mathcal{C}, \psi_2\rangle/\|\psi_2\|^2$.
Projecting the above equation on the eigenspace generated by $\psi_2$ while using that $\langle \mathcal{L}[w_\mathcal{C}] , \psi_2 \rangle =0$ we obtain
\[
\frac{d}{d\tau}a(\tau) = \frac{\langle \mathcal{E}[w_\mathcal{C}] + \bar{\mathcal{E}}[w,\varphi_\mathcal{C}] , \psi_2 \rangle}{\|\psi_2\|^2}.
\]
Since ${\displaystyle \frac{\langle \psi_2^2,\psi_2\rangle}{\| \psi_2\|^2} = 8}$ we can write the above equation as
\[
\frac{d}{d\tau}a(\tau) = \frac{2a(\tau)}{|\tau|} + F(\tau)
\]
where
\begin{equation}
\label{eq-F-tau}
\begin{split}
F(\tau) &:= \frac{\langle \mathcal{E}[w_\mathcal{C}] + \bar{\mathcal{E}}[w,\varphi_\mathcal{C}] - \frac{a(\tau)}{4|\tau|} \psi_2^2, \psi_2\rangle}{\|\psi_2\|^2} \\
&= \frac{\langle\bar{\mathcal{E}}[w,\varphi_\mathcal{C}], \psi_2\rangle}{\|\psi_2\|^2} + \frac{\langle \mathcal{E}[w_\mathcal{C}] - \frac{a(\tau)}{4|\tau|} \psi_2^2, \psi_2\rangle}{\|\psi_2\|^2}.
\end{split}
\end{equation}
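The constant $8$ above can be checked directly: with the convention $\psi_2(y) = y^2 - 2$ used here (compare the expansions of $u_i$ below) and the Gaussian weight $e^{-y^2/4}$, the even moments $\int_{\mathbb{R}} y^{2k}\, e^{-y^2/4}\, dy = 2\sqrt{\pi}\,(2k-1)!!\, 2^k$ give
\[
\|\psi_2\|^2 = \int_{\mathbb{R}} (y^2-2)^2\, e^{-y^2/4}\, dy = 16\sqrt{\pi}, \qquad
\langle \psi_2^2, \psi_2\rangle = \int_{\mathbb{R}} (y^2-2)^3\, e^{-y^2/4}\, dy = 128\sqrt{\pi},
\]
so that $\langle \psi_2^2, \psi_2\rangle / \|\psi_2\|^2 = 8$; note that this ratio does not depend on the overall normalization of the measure.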
Furthermore, solving the above ordinary differential equation for $a(\tau)$ yields
\[
a(\tau) = \frac{C}{\tau^2} - \frac{\int_{\tau}^{\tau_0} F(s) s^2\, ds}{\tau^2}.
\]
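For the reader's convenience: since $\tau < 0$ we have $2/|\tau| = -2/\tau$, so $\tau^2$ is an integrating factor for the equation above,
\[
\frac{d}{d\tau}\bigl(\tau^2\, a(\tau)\bigr) = \tau^2\Bigl(\frac{d}{d\tau} a(\tau) + \frac{2}{\tau}\, a(\tau)\Bigr) = \tau^2\, F(\tau),
\]
and integrating this over $[\tau, \tau_0]$ gives the formula above with $C = \tau_0^2\, a(\tau_0)$.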
By Remark \ref{rem-choice-par} we may assume $a(\tau_0) = 0$ and hence, $C = 0$, which implies
\begin{equation}
\label{eq-alpha-CC}
|a(\tau)| = \frac{|\int_{\tau}^{\tau_0} F(s) s^2\, ds|}{\tau^2}.
\end{equation}
Define $\|a\|_{\mathfrak{H},\infty}(\tau) = \sup_{s\le \tau} \Bigl(\int_{s-1}^{s} |a(\zeta)|^2\, d \zeta \Bigr)^{\frac12}$.
Since $\mathcal{P}_0 w_\mathcal{C}(\cdot,\tau) = a(\tau)\, \psi_2(\cdot)$, we have
\[
\|\mathcal{P}_0 w_\mathcal{C}\|_{\mathfrak{D},\infty}(\tau) =\|a\|_{\mathfrak{H},\infty}(\tau) \, \|\psi_2\|_{\mathfrak{D}}.
\]
Denote by $\|a\|_{\mathfrak{H},\infty} :=\|a\|_{\mathfrak{H},\infty}(\tau_0)$.
Note that
\[
\Bigl| \int_{\tau}^{\tau_0} F(s)\, s^2\, ds\Bigr| \le \sum_{j=[\tau]-1}^{\tau_0} \Bigl|\int_j^{j+1} s^2 F(s)\, ds\Bigr| \le C\, \sum_{j=[\tau]-1}^{\tau_0} j^2 \Bigl|\int_j^{j+1} F(s)\, ds\Bigr|
\]
where with no loss of generality we may assume $\tau_0$ is an integer.
Next we need the following claim.
\begin{claim}
\label{claim-Fs}
For every $\epsilon > 0$ there exists a $\tau_0$ so that
\[\Bigl|\int_{\tau-1}^{\tau} F(s)\, ds\Bigr| \le
\frac{\epsilon}{|\tau|}\,\|a\|_{\mathfrak{H},\infty}
\]
for $\tau \le \tau_0$.
\end{claim}
Assume for the moment that the Claim holds.
Then,
\begin{equation*}
\begin{split}
\Bigl|\int_{\tau}^{\tau_0} F(s) \, s^2\, ds\Bigr| &\le C \sum_{j=[\tau]-1}^{\tau_0} j^2 \Bigl|\int_{j}^{j+1} F(s)\, ds\Bigr| \le C\, \epsilon\, \|a\|_{\mathfrak{H},\infty}\,\sum_{j=[\tau]-1}^{\tau_0} |j| \\
&\le C\, \epsilon\, |\tau|^2\,\|a\|_{\mathfrak{H},\infty}.
\end{split}
\end{equation*}
Combining this with \eqref{eq-alpha-CC}, and choosing $\epsilon$ so that $C\epsilon \le 1/2$, yields
\[
|a(\tau)| \le \frac12\|a\|_{\mathfrak{H},\infty}, \qquad \mbox{for all} \,\,\, \tau\le \tau_0.
\]
This implies
\[
\|a\|_{\mathfrak{H},\infty} \le \frac12 \, \|a\|_{\mathfrak{H},\infty}
\]
and hence $\|a\|_{\mathfrak{H},\infty} = 0$, which further gives
\[
\|\mathcal{P}_0 w_\mathcal{C}\|_{\mathfrak{D},\infty} = 0.
\]
Finally, \eqref{eqn-w1230} implies $\hat{w}_C \equiv 0$ and hence, $w_\mathcal{C} \equiv 0$ for $\tau \le \tau_0$.
By \eqref{eqn-normequiv} and the fact that $\varphi_\mathcal{C} \equiv 1$ on $D_{2\theta}$ we have $W\chi_{[\theta,2\theta]} \equiv 0$ for $\tau \le \tau_0$.
Proposition \ref{prop-tip} then yields that $W_T \equiv 0$ for $\tau\le \tau_0$.
All these imply $u_1(y,\tau) \equiv u_2^{\alpha\beta\gamma}(y,\tau)$, for $\tau\le \tau_0$.
By forward uniqueness of solutions to the mean curvature flow (or equivalently to the cylindrical equation \eqref{eq-u}), we have $u_1 \equiv u_2^{\alpha\beta\gamma}$ for all $\tau$, and hence $
M_1 \equiv M_2^{\alpha\beta\gamma}.$
This concludes the proof of the main Theorem \ref{thm-main}.
To complete the proof of Theorem \ref{thm-main} we still need to prove Claim \ref{claim-Fs}, which we do below.
\begin{proof}[Proof of Claim \ref{claim-Fs}]
Throughout the proof we will use the estimate
\begin{equation}
\label{eq-par-int}
\|w_\mathcal{C}\|_{\mathfrak{D},\infty} \leq C \, \| a\|_{\mathfrak{H},\infty}, \qquad \mbox{for} \,\, \tau_0 \ll -1
\end{equation}
which follows from Proposition \ref{prop-cor-main}.
By the proof of the same Proposition we also have
\[
\|w \, \chi_{D_{\theta}}\|_{\mathfrak{H},\infty} < \frac{C(\theta)}{\sqrt{|\tau_0|}}\, \|w_\mathcal{C}\|_{\mathfrak{H},\infty}, \qquad \mbox{for} \,\, \tau_0 \ll -1.
\]
Also, throughout the proof we will use the a priori estimates on the solutions $u_i$ shown in our previous work \cite{ADS}, which continue to hold here without the assumption of $O(1)$ symmetry,
as we discuss in Theorem \ref{thm-O1} of our current paper.
From the definition of $\bar{\mathcal{E}}[w,\varphi_\mathcal{C}]$ given in \eqref{eq-bar-E} and the definition of the cut-off function $\varphi_\mathcal{C}$, we see that the support of $\bar{\mathcal{E}}[w,\varphi_\mathcal{C}]$ is contained in
\[
\Bigl(\sqrt{2 - \frac{\theta^2}{n-1}} - \epsilon_1\Bigr)\, \sqrt{|\tau|} \le |y| \le \Bigl(\sqrt{2 - \frac{\theta^2}{4(n-1)}} + \epsilon_1\Bigr)\, \sqrt{|\tau|}
\]
where $\epsilon_1$ is so tiny that $\sqrt{2 - \frac{\theta^2}{4(n-1)}} + \epsilon_1 < \sqrt{2}$.
Also by the \emph{a priori} estimates proved in \cite{ADS} we have \begin{equation}
\label{eq-der-bounds10}
|u_y| + |u_{yy}| \le \frac{C(\theta)}{\sqrt{|\tau|}}, \qquad \mbox{for} \,\, |y| \leq \big (\sqrt{2 - \frac{\theta^2}{4(n-1)}} + \epsilon_1\big ) \sqrt{|\tau|}.
\end{equation}
Furthermore, Lemma 5.14 in \cite{ADS} shows that our ancient solutions $u_i$, $i\in \{1,2\}$ satisfy
\begin{equation}
\label{eq-L2-asymp0}
\begin{split}
\Bigl \|u_i - \sqrt{2(n-1)} + \frac{\sqrt{2(n-1)}}{4|\tau|}\, \psi_2 \Bigr\| &= o(|\tau|^{-1}), \\
\Bigl\|\Bigl(u_i + \frac{\sqrt{2(n-1)}}{4|\tau|}\, \psi_2\Bigr)_y\Bigr\| &= o(|\tau|^{-1}).
\end{split}
\end{equation}
In particular, since $\|\psi_2\|$ and $\|(\psi_2)_y\|$ are fixed constants, this implies
\begin{equation}
\label{eq-L2-asymp1}
\Bigl\|u_i - \sqrt{2(n-1)} \Bigr\| = O(|\tau|^{-1}) \qquad \mbox{and} \qquad \Bigl\|(u_i) _y\Bigr\| = O(|\tau|^{-1}).
\end{equation}
We start by estimating the first term on the right hand side in \eqref{eq-F-tau}.
Using Lemma \ref{lem-error1-est} we conclude
\begin{equation}
\label{eq-barE-est}
\begin{split}
|\langle \bar{\mathcal{E}}[w,\varphi_\mathcal{C}], \psi_2\rangle| \le \|\bar{\mathcal{E}}[w,\varphi_\mathcal{C}]\|_{\mathfrak{D}^*} \|\psi_2 \, \bar {\chi} \|_{\mathfrak{D}} < \epsilon \, \|w_\mathcal{C}\|_{\mathfrak{D}} \, e^{-|\tau|/4}.
\end{split}
\end{equation}
where $\bar {\chi} $ denotes a smooth function with a support in $|y| \ge (\sqrt{2-\theta^2/(4(n-1))} - 2\epsilon_1)\, \sqrt{|\tau|}$, being equal to one for $|y| \ge (\sqrt{2-\theta^2/(4(n-1))} - \epsilon_1)\, \sqrt{|\tau|}$.
This implies that for every $\epsilon > 0$ we can find a $\tau_0 \ll -1$ so that for $\tau \le \tau_0$ we have
\[
\Bigl|\int_{\tau-1}^{\tau} \langle \bar{\mathcal{E}}[w,\varphi_\mathcal{C}], \psi_2\rangle\, ds \Bigr| \leq \frac{\epsilon\|a\|_{\mathfrak{H},\infty}}{|\tau|}
\]
where we used \eqref{eq-par-int}.
We focus next on the second term on the right hand side in \eqref{eq-F-tau}.
Let us write $w_\mathcal{C} = \hat{w}_\mathcal{C} + a(\tau) \psi_2$.
Recall that
\begin{equation}
\label{eq-E-recall}
\mathcal{E}[w_\mathcal{C}] = \frac{2(n-1) - u_1u_2}{2u_1u_2}w_\mathcal{C} -\frac{u_{1y}^2}{1+u_{1y}^2} (w_\mathcal{C})_{yy} -\frac{(u_{1y}+u_{2y})u_{2yy}}{(1+u_{1y}^2)(1+u_{2y}^2)} (w_\mathcal{C})_y.
\end{equation}
Then, for the first term on the right hand side of \eqref{eq-E-recall} we get
\begin{equation}
\label{eq-first-to-estimate}
\begin{split}
& \Bigl| \Bigl\langle \frac{2(n-1) - u_1 u_2}{2u_1 u_2}\, w_\mathcal{C} - \frac{a(\tau)}{4|\tau|}\, \psi_2^2, \psi_2\Bigr\rangle\Bigr| \le \\
&\le \Bigl| \Bigl\langle \frac{2(n-1) - u_1 u_2}{2 u_1 u_2}\, \hat{w}_C, \psi_2\Bigr\rangle\Bigr| + |a(\tau)|\, \Bigl|\Bigl\langle \frac{2(n-1) - u_1 u_2}{2 u_1 u_2} - \frac{1}{4|\tau|}\, \psi_2, \psi_2^2\Bigr\rangle\Bigr|.
\end{split}
\end{equation}
To estimate the first term on the right hand side in \eqref{eq-first-to-estimate}, we write
\begin{equation}
\label{eq-help-help100}
\begin{split}
& \Bigl|\Bigl\langle \frac{2(n-1) - u_1 u_2}{2 u_1 u_2}\, \hat{w}_C, \psi_2\Bigr\rangle\Bigr| \le
\Bigl|\Bigl\langle \frac{(\sqrt{2(n-1)} - u_1)(\sqrt{2(n-1)} + u_1)}{2 u_1 u_2} \hat{w}_C, \psi_2\Bigr\rangle\Bigr| + \\
&+ \Bigl|\Bigl\langle \frac{u_1 - \sqrt{2(n-1)}}{2 u_2}\, \hat{w}_C, \psi_2\Bigr\rangle\Bigr| + \Bigl|\Bigl\langle \frac{\sqrt{2(n-1)} - u_2}{2 u_2}\, \hat{w}_C, \psi_2\Bigr\rangle\Bigr|.
\end{split}
\end{equation}
Note that ${\displaystyle u_i \ge {\theta}/{2}}$ on the support of $\hat{w}_C$, and hence all three terms on the right hand side in \eqref{eq-help-help100} can be estimated by analogous arguments; we demonstrate the estimate for the second term.
Using Lemma \ref{lem-Poincare}, Proposition \ref{prop-cor-main} and \eqref{eq-L2-asymp1} we get that for every $\epsilon > 0$ there exists a $\tau_0 \ll -1$ so that for $\tau \le \tau_0$ we have
\[
\begin{split}
\Bigl|\Bigl\langle &\frac{u_1 - \sqrt{2(n-1)}}{2u_2}\, \hat{w}_C, \psi_2\Bigr\rangle\Bigr| \\
&\le C(\theta)\, \Bigl(\int \hat{w}_C^2 |\psi_2| e^{-y^2/4}\, dy\Bigr)^{1/2} \, \Bigl(\int (\sqrt{2(n-1)} - u_1)^2 |\psi_2| e^{-y^2/4}\, dy\Bigr)^{1/2}\\
&\le C(\theta) \|\hat{w}_C\|_{\mathfrak{D}} \, \|\sqrt{2(n-1)} - u_1\|_{\mathfrak{D}} \\
&< \frac{\epsilon}{|\tau|}\,\|a\|_{\mathfrak{H},\infty}
\end{split}
\]
implying
\begin{equation}
\label{eq-term1-bar}
\Bigl|\int_{\tau-1}^{\tau} \Bigl\langle \frac{2(n-1) - u_1 u_2}{2u_1u_2}\, \hat{w}_C, \psi_2\Bigr\rangle\, ds\Bigr| < \frac{\epsilon}{|\tau|}\,\|a\|_{\mathfrak{H},\infty}.
\end{equation}
Let us now estimate the second term on the right hand side in \eqref{eq-first-to-estimate}.
Writing $u_i = \sqrt{2(n-1)} (1 + v_i)$, we get
\begin{equation}
\label{eq-noname111}
\begin{split}
\Bigl\langle & \frac{2(n-1) - u_1 u_2}{2 u_1 u_2} - \frac{1}{4|\tau|} \psi_2, \psi_2^2\Bigr\rangle \\
&= - \Bigl\langle \frac{v_1 + v_2 + v_1 v_2}{2(1 + v_1) (1 + v_2)} + \frac{1}{4|\tau|}\, \psi_2, \psi_2^2\Bigr\rangle \\
&= -\frac 12 \Bigl\langle \frac{v_1}{(1+ v_1)(1+v_2)} + \frac{\psi_2}{4|\tau|}, \psi_2^2\Bigr\rangle - \frac12 \Bigl\langle \frac{v_2}{1 + v_2} + \frac{\psi_2}{4|\tau|}, \psi_2^2\Bigr\rangle.
\end{split}
\end{equation}
The two terms on the right hand side in the above equation can be estimated in the same way, so we demonstrate how to estimate the second one.
Using \eqref{eq-L2-asymp0}, \eqref{eq-L2-asymp1}, the bound \eqref{eq-v-quadratic-upper-bound} and H\"older's inequality we get that for every $\epsilon > 0$ there exist $K$ large enough and $\tau_0 \ll -1$ so that for $\tau \le \tau_0$ we have
\[
\begin{split}
\Bigl\langle \frac{v_2}{1+v_2} + \frac{\psi_2}{4|\tau|}, \psi_2^2\Bigr\rangle
&= \langle v_2 + \frac{\psi_2}{4|\tau|}, \psi_2^2\rangle - \langle \frac{v_2^2}{1+v_2}, \psi_2^2\rangle \\
&\le C\, \Bigl\| v_2 + \frac{\psi_2}{4|\tau|}\Bigr\| + C\int_{\mathbb{R}} v_2^2\, y^4\, e^{-\frac{y^2}{4}}\, dy \\
&\le \frac{o(1)}{|\tau|} + \Bigl(\int_{\mathbb{R}} v_2^2 \, e^{-\frac{y^2}{4}}\, dy\Bigr)^{\frac12} \, \Bigl(\int_{\mathbb{R}} v_2^2 \, y^8 \, e^{-\frac{y^2}{4}}\, dy\Bigr)^{\frac12} \\
&\le \frac{o(1)}{|\tau|} + \frac{C}{|\tau|} \left( \Bigl(\int_{|y| \le K} v_2^2 e^{-\frac{y^2}{4}}\, dy\Bigr)^{\frac12} + \Bigl(\int_{|y| \ge K} y^{10} e^{-\frac{y^2}{4}}\, dy\Bigr)^{\frac12} \right) \\
&< \frac{\epsilon}{|\tau|}.
\end{split}
\]
To justify the last inequality note that for a given $\epsilon > 0$ we can find $K$ large enough so that ${\displaystyle \Bigl(\int_{|y|\ge K} y^{10} e^{-\frac{y^2}{4}}\, dy\Bigr)^{\frac12} < \frac{\epsilon}{6C}}$.
On the other hand, using our asymptotics result proven in \cite{ADS}, for a chosen $K$ we can find a $\tau_0 \ll -1$, so that for $\tau \le \tau_0$ we have ${\displaystyle |v_i| < \frac{\epsilon}{6C\sqrt{K}}}$.
Finally, we conclude that for every $\epsilon > 0$ there exists a $\tau_0 \ll -1$, so that for all $\tau \le \tau_0$,
\begin{equation}
\label{eq-term2-bar}
\Bigl|\int_{\tau-1}^{\tau} \Bigl\langle \frac{2(n-1) - u_1 u_2}{2 u_1 u_2} - \frac{\psi_2}{4|\tau|}, \psi_2^2\Bigr\rangle \, ds\Bigr| < \frac{\epsilon}{2 |\tau|}\,\|a\|_{\mathfrak{H},\infty}.
\end{equation}
Since the first term on the right hand side in \eqref{eq-noname111} can be estimated in a similar manner, we conclude that this inequality holds.
It remains now to estimate the second and third terms in the error term \eqref{eq-E-recall}, which involve first and second order derivative bounds for our solutions $u_i$.
We claim that for every $K$ there exist $\tau_0 \ll -1$ and a uniform constant $C$ so that
\begin{equation}
\label{eq-der-bounds11}
|(u_i)_y| + |(u_i)_{yy}| \le \frac{C}{|\tau|}, \qquad \mbox{for} \,\,\,\, |y| \le K, \,\, \tau \leq \tau_0, \,\,\, \quad i=1,2.
\end{equation}
This follows by standard derivative estimates applied to the equation satisfied by each of the $v_i$, $i=1,2$ and the $L^\infty$ bound $|v_i| \leq \frac{C}{|\tau|}$,
which holds on $|y| \le 2K, \, \tau \leq \tau_0
\ll -1$.
Let us use \eqref{eq-der-bounds11} to estimate the projection involving the third term in \eqref{eq-E-recall}: for every $\epsilon > 0$, there exists a $\tau_0 \ll -1$ so that for
$\tau \le \tau_0$
\[
\begin{split}
\Bigl|\Bigl\langle
&\frac{({u_1}_y + {u_2}_y) u_{2yy}}{(1+u_{1y}^2)\, (1+u_{2y}^2)}\, (w_\mathcal{C})_y, \psi_2\Bigr\rangle\Bigr| \\
&\le C \, \int_{|y| \le K} (|u_{1y}| + |u_{2y}|)\, |u_{2yy}|\, |(w_\mathcal{C})_y| \, (y^2 +1 ) \, e^{-\frac{y^2}{4}}\, dy \\
&\qquad +C\, \int_{|y| \ge K } (|u_{1y}| + |u_{2y}|)\, |u_{2yy}|\, |(w_\mathcal{C})_y| \,y^2 \, e^{-\frac{y^2}{4}}\, dy\\
&\le \frac{C}{|\tau|^2}\, \|w_\mathcal{C}\|_{\mathfrak{D}} + \frac{C}{|\tau|}\, \|w_\mathcal{C}\|_{\mathfrak{D}} \, \Bigl(\int_{|y| \ge K} \, y^4 \, e^{-\frac{y^2}{4}}\, dy \Bigr)^{\frac12} \\
&< \frac{\epsilon}{|\tau|}\, \|w_\mathcal{C}\|_{\mathfrak{D}}
\end{split}
\]
where we used H\"older's inequality, estimate \eqref{eq-der-bounds10} in the region $\{|y| \ge K\}\cap \mathop{\mathrm {supp}} w_\mathcal{C}\}$ and estimate \eqref{eq-der-bounds11} in the region $\{|y| \le K\}$.
This implies that for every $\epsilon > 0$ there exists a $\tau_0 \ll -1$ so that for $\tau \le \tau_0$,
\begin{equation}
\label{eq-term3-bar}
\Bigl|\int_{\tau-1}^{\tau} \Bigl\langle \frac{(u_{1y} + u_{2y}) u_{2yy}}{(1+u_{1y}^2)(1+u_{2y}^2)}\, (w_\mathcal{C})_y, \psi_2\Bigr\rangle\, ds\Bigr| < \frac{\epsilon}{|\tau|}\, \|w_\mathcal{C}\|_{\mathfrak{D},\infty} < \frac{\epsilon}{|\tau|}\,\|a\|_{\mathfrak{H},\infty}.
\end{equation}
Finally, to estimate the projection involving the second term in \eqref{eq-E-recall}, we note that integration by parts yields
\begin{equation}
\label{eq-term4-bar}
\begin{split}
\Bigl\langle
& \frac{u_{1y}^2}{1 + u_{1y}^2} (w_\mathcal{C})_{yy}, \psi_2\Bigr\rangle \\
&= -2\int_{\mathbb{R}} \frac{u_{1yy} u_{1y}}{1 + u_{1y}^2}\, (w_\mathcal{C})_y \, \psi_2 \, e^{-\frac{y^2}{4}}\, dy
+ 2\int_{\mathbb{R}} \frac{u_{1y}^3 u_{1yy}}{(1 + u_{1y}^2)^2}\, (w_\mathcal{C})_y \, \psi_2 \, e^{-\frac{y^2}{4}}\, dy \\
&\quad - \int_{\mathbb{R}} \frac{ u_{1y}^2}{1 + u_{1y}^2 }\, (w_\mathcal{C})_{y} \, (\psi_2)_y \, e^{-\frac{y^2}{4}}\, dy + \frac12 \int_{\mathbb{R}} \frac{u_{1y}^2 }{1+u_{1y}^2} (w_\mathcal{C})_y \psi_2 \, y \, e^{-\frac{y^2}{4}}\, dy.
\end{split}
\end{equation}
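In the first two terms on the right hand side we used the identity
\[
\frac{d}{dy}\Bigl(\frac{u_{1y}^2}{1+u_{1y}^2}\Bigr) = \frac{2\, u_{1y}\, u_{1yy}}{(1+u_{1y}^2)^2} = \frac{2\, u_{1y}\, u_{1yy}}{1+u_{1y}^2} - \frac{2\, u_{1y}^3\, u_{1yy}}{(1+u_{1y}^2)^2},
\]
while the last two terms come from differentiating $\psi_2\, e^{-y^2/4}$.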
It is easy to see that all terms on the right hand side in \eqref{eq-term4-bar} can be estimated very similarly as in \eqref{eq-term3-bar}.
Hence, for every $\epsilon > 0$ there exists a $\tau_0$ so that for all $\tau \le \tau_0$ we have
\begin{equation}
\label{eq-term5-bar}
\Bigl|\int_{\tau-1}^{\tau} \Bigl\langle \frac{u_{1y}^2}{1 + u_{1y}^2} (w_\mathcal{C})_{yy}, \psi_2\Bigr\rangle\Bigr| < \frac{\epsilon}{|\tau|}\,\|a\|_{\mathfrak{H},\infty}.
\end{equation}
Combining \eqref{eq-F-tau}, \eqref{eq-barE-est}, \eqref{eq-E-recall}, \eqref{eq-term1-bar}, \eqref{eq-term2-bar}, \eqref{eq-term3-bar}, \eqref{eq-term4-bar} and \eqref{eq-term5-bar} concludes the proof of Claim \ref{claim-Fs}.
\end{proof}
The proof of our theorem is now also complete. \end{proof}
\section{Appendix - Reflection symmetry} In this appendix we will justify why the conclusions of Theorem \ref{thm-old}, proved in \cite{ADS} under the assumption of $O(1)\times O(n)$ symmetry, hold in the presence of $O(n)$-symmetry only. More precisely we will show the following result.
\begin{theorem}
\label{thm-O1}
If $M_t$ is an Ancient Oval that is rotationally symmetric, then the conclusions of Theorem \ref{thm-old} hold. \end{theorem}
\begin{proof}
We will follow closely the arguments in Theorem \ref{thm-old} and point out below only the steps in which the arguments change slightly because of the lack of reflection symmetry.
All other estimates can be argued in exactly the same way.
Recall that we consider noncollapsed, ancient solutions (and hence convex due to \cite{HK}) which are $O(n)$-invariant hypersurfaces in $\mathbb{R}^{n+1}$.
Such hypersurfaces can be represented as
\[\{(x,x') \,\,\in \mathbb{R}\times\mathbb{R}^n\,\,\,|\,\,\, -d_1(t) < x < d_2(t), \,\,\, \|x'\| = U(x,t)\}\]
for some function $\|x'\| = U(x,t)$.
The points $(-d_1(t),0)$ and $(d_2(t),0)$ are called the tips of the surface.
The profile function $U(x,t)$ is defined only for $x\in [-d_1(t), d_2(t)]$.
After parabolic rescaling
\[U(x,t) = \sqrt{T - t} \, u(y,\tau), \qquad y = \frac{x}{\sqrt{T-t}}, \, \qquad \tau = -\log(T-t)\]
the profile function $u(y,\tau)$ is defined for $-\bar{d}_1(\tau) \le y \le \bar{d}_2(\tau)$.
Theorem 1.11 in \cite{HK} and Corollary 6.3 in \cite{W} imply that as $\tau \to -\infty$, surfaces $M_{\tau}$ converge in $C_{loc}^{\infty}$ to a cylinder of radius $\sqrt{2(n-1)}$, with axis passing through the origin.
Due to concavity, for every $\tau$, there exists a $y(\tau)$ so that $u_y(\cdot,\tau) \le 0$ for $y \ge y(\tau)$, $u_y(\cdot,\tau) \ge 0$ for $y \le y(\tau)$ and $u_y(y(\tau),\tau) = 0$.
To finish the proof of Theorem \ref{thm-O1} we need the following lemma saying the maximum of $H$ is attained at one of the tips.
\begin{lemma}
\label{lem-Hmax-tips}
We have that $(\lambda_1)_y \ge 0$ for $y \in [y(\tau), \bar{d}_2(\tau))$ and $(\lambda_1)_y \le 0$ for $y \in (-\bar{d}_1(\tau), y(\tau)]$.
As a consequence, the mean curvature $H$ on $M_t$ attains its maximum at one of the tips $(-d_1(t),0)$ or $(d_2(t),0)$.
\end{lemma}
\begin{proof}
We follow the proof of Corollary 3.8 in \cite{ADS} where the result followed from the fact that the scaling invariant quantity
\[ R := \frac{\lambda_n}{\lambda_1} = -\frac{u u_{yy}}{1 + u_y^2} \geq 0\]
satisfies
\begin{equation}\label{eqn-RRR}
R \leq 1.
\end{equation}
Let us then show that \eqref{eqn-RRR} still holds in our case.
Note that at umbilic points one has $R = 1$.
Both tips of the surface are umbilic points and hence we have $R = 1$ at the tips for all $\tau$ (here we use that the surface is smooth and strictly convex and radially symmetric at the tips).
Hence, $R_{\max}(\tau)$ is achieved on the surface for all $\tau$ and is greater than or equal to one.
Thus it is sufficient to show that $R_{\max}(\tau) \leq 1$.
We first note that the quantity ${\displaystyle Q := \frac{u_y^2}{u^2(1+u_y^2)}}$ that we considered before in \cite{ADS} satisfies $Q_y \ge 0$ for $y \ge y(\tau)$ and $Q_y \le 0$ for $y \le y(\tau)$.
To prove \eqref{eqn-RRR}, we may assume $R_{\max}(\tau) =R(\bar y_\tau,\tau) > 1$, for all $\tau \le \tau_0$ and some $\bar y_\tau \in \bar M_\tau$, since otherwise the statement is true.
The convergence to the cylinder in the middle implies that $|\bar{y}_{\tau}| \to +\infty$, as $\tau \to -\infty$.
As in the proof of Lemma 3.5 in \cite{ADS} it is enough to show that the
\begin{equation}
\label{eq-Q-lower}
\liminf_{\tau\to -\infty} Q(\bar{y}_{\tau},\tau) \ge c > 0
\end{equation}
for a uniform constant $c > 0$ and all $\tau \le \tau_0$.
The same proof as in \cite{ADS} implies there exists a uniform constant $c_1 > 0$ so that for all $\tau \le \tau_0 \ll -1$ we have that
\begin{equation}
\label{eq-Q-1}
Q(y,\tau) \ge c_1, \qquad \mbox{whenever} \,\,\, R(y,\tau) = 1.
\end{equation}
We claim that this implies \eqref{eq-Q-lower}.
To prove this claim we argue by contradiction and hence, assume that there exists a sequence $\tau_i\to -\infty$ for which $Q(\bar{y}_{\tau_i},\tau_i) \to 0$ as $i\to \infty$.
This implies that $\lim_{\tau\to -\infty} R(y,\tau) = 0$, uniformly for $y$ bounded.
We conclude that for all $\tau \le \tau_0$ there exists at least one point $y_{\tau}$ such that $R(y_{\tau},\tau) = 1$.
By the convergence to the cylinder, and passing to a subsequence if necessary, we may assume without loss of generality that $y(\tau_i) < \bar{y}_{\tau_i}$.
We consider two different cases.
\noindent{\em Case} 1. $R(y(\tau_i),\tau_i) \le 1$.
Then, either $R(y(\tau_i),\tau_i) = 1$ (in which case set $\hat y_{\tau_i} := y(\tau_i)$), or $R(y(\tau_i),\tau_i) < 1$ (in which case we find $\hat y_{\tau_i} \in ( y(\tau_i), \bar{y}_{\tau_i})$ so that $R(\hat y_{\tau_i},\tau_i) = 1$).
In either case, since $R(\hat y_{\tau_i}, \tau_i)=1$, \eqref{eq-Q-1} implies that
$Q(\hat y_{\tau_i},\tau_i) \ge c_1$, for $i \geq i_0$.
Since $Q_y(\cdot,\tau) \ge 0$ for $y \ge y(\tau)$ and $\bar y_{\tau_i} \geq \hat y_{\tau_i} \geq y(\tau_i)$, we conclude that
$Q(\bar{y}_{\tau_i},\tau_i) \ge c_1 > 0$, for $ i\ge i_0$
contradicting our assumption that $\lim_{i\to\infty} Q(\bar{y}_{\tau_i},\tau_i) = 0$.
\noindent{\em Case} 2. $R(y(\tau_i),\tau_i) > 1$.
Recall that $u(y,\tau)$ satisfies the equation
\[\frac{\partial}{\partial\tau} u = \frac{u_{yy}}{1 + u_y^2} - \frac y2 u_y + \frac u2 - \frac{n-1}{u} = -H \sqrt{1 + u_y^2} - \frac y2 u_y + \frac u2.\]
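Here the second equality uses the standard formula for the mean curvature of the rotationally symmetric graph $\|x'\| = u$ (with the sign convention that $H > 0$ on convex hypersurfaces),
\[
H = -\frac{u_{yy}}{(1+u_y^2)^{3/2}} + \frac{n-1}{u\,\sqrt{1+u_y^2}}, \qquad \mbox{so that} \qquad -H\sqrt{1+u_y^2} = \frac{u_{yy}}{1+u_y^2} - \frac{n-1}{u}.
\]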
The maximum of $u(\cdot,\tau)$ is achieved at $y(\tau)$ and hence, by \eqref{eq-H} we have
\[\frac{d}{d\tau} u_{\max} \ge -C + \frac{u_{\max}}{2}\]
implying that
\[u(y(\tau),\tau) = u_{\max}(\tau) \le \max\{2C, u_{\max}(\tau_0)\}, \qquad \mbox{for} \,\,\,\, \tau \le \tau_0.\]
On the other hand, due to the convergence to the cylinder of radius $\sqrt{2(n-1)}$ in the middle we have that $u_{\max}(\tau) \ge u(0,\tau) \ge \frac 12 \,\sqrt{2(n-1)}$ for $\tau \le \tau_0 \ll -1$.
All these imply that for $\tau\le \tau_0 \ll -1$ we have
\[C_0 \ge H(y(\tau),\tau) \ge \frac{n-1}{u} \ge c_0 > 0.\]
Hence, we can take a limit around $(y(\tau_i), u(y(\tau_i),\tau_i))$ to conclude that the limit is a complete graph of a concave, nonnegative function $\hat{u}(y,\tau)$ so that $\hat{u}_y(0,0) = 0$.
All these yield $\hat{u} \equiv \text{constant}$, that is, the limit is the round cylinder $\mathbb{R}\times S^{n-1}$, contradicting that $R(y(\tau_i),\tau_i) > 1$.
This finishes the proof of estimate \eqref{eq-Q-lower} and then we can argue as in the proof of Lemma 3.5 in \cite{ADS} to conclude the proof that $R \le 1$, for $\tau \le \tau_0 \ll -1$.
To finish the proof of Lemma \ref{lem-Hmax-tips}, note that $R \le 1$ on $M_{\tau}$, for $\tau \le \tau_0$ implies that
$$(\lambda_1)_y \ge 0,\,\,\, \mbox{for} \,\, y\in [y(\tau),\bar{d}_2(\tau)] \quad \mbox{and} \quad
(\lambda_1)_y \le 0 \,\,\, \mbox{for} \,\, y\in [-\bar d_1(\tau),y(\tau)].$$
We now conclude, as in the proof of Corollary 3.8 in \cite{ADS}, that
\[H(y,\tau) \le \max \big ( H(\bar{d}_1(\tau),\tau), H(\bar{d}_2(\tau),\tau) \big ), \qquad y\in M_{\tau}\]
for all $\tau \le \tau_0 \ll -1$ finishing the proof of Lemma \ref{lem-Hmax-tips}.
\end{proof}
The a priori estimates from Section 4 in \cite{ADS} hold in our case as well; one has just to use that $u_y \le 0$ for $y \in [y(\tau), \bar{d}_2(\tau)]$ and $u_y \ge 0$ for $y\in [-\bar{d}_1(\tau),y(\tau)]$.
By using the same barriers that we constructed in \cite{ADS} one can easily see that we still have the inner-outer estimate we showed in Section 4.5 in \cite {ADS}.
Note that the same inner-outer estimates were proved and the same barriers were used in \cite{BC} without assuming any symmetry.
\begin{lemma}\label{lem-must-be-inside}
There is an $L_n>0$ such that for any rescaled Ancient Oval $u(y,\tau)$ there exist sequences $\tau_i, \tau_i'\to-\infty$ such that for all $i=1, 2, 3, \dots$ one has
\[
u(L_n, \tau_i) < \sqrt{2(n-1)} \quad\text{and}\quad u(-L_n, \tau_i') < \sqrt{2(n-1)}.
\]
\end{lemma}
\begin{proof}
Choose $L_n$ so that the region $\{(y, u) : y\geq L_n, 0\leq u\leq \sqrt{2(n-1)}\}$ is foliated by self-shrinkers as in \cite{ADS}, i.e.~for each $a\in(0, \sqrt{2(n-1)})$ there is a unique solution $U_a:[L_n, \infty)\to{\mathbb R}$ of
\begin{equation}
\frac{U_{yy}}{1+U_y^2} - \frac y2 U_y + \frac 12 U - \frac{n-1}{U} = 0,
\qquad U(L_n) = a.
\end{equation}
To prove the Lemma we argue by contradiction and assume that the sequence $\tau_i$ does not exist.
This means that for some $\tau_*$ one has $u(L_n, \tau)\geq \sqrt{2(n-1)}$ for all $\tau\leq \tau_*$.
The same arguments as in \cite[Section 4]{ADS} then imply that $u(y, \tau) \geq U_a(y)$ for all $y\geq L_n$, any $\tau\leq \tau_*$ and any $a\in (0, \sqrt{2(n-1)})$.
This implies that $u(y,\tau)\geq \sqrt{2(n-1)}$ for all $y\geq L_n$ and therefore contradicts the compactness of $M_\tau$.
\end{proof}
For any of our rescaled rotationally symmetric Ancient Ovals $u(y,\tau)$ we can consider the truncated difference
\[
v(y, \tau) = \varphi(\frac yL) \bigl ( \frac{u(y, \tau)}{\sqrt{2(n-1)}} - 1 \bigr )
\]
for some large $L$.
This function satisfies
\begin{equation}
v_\tau = \mathcal{L} v + E(\tau)
\end{equation}
where $E$ contains the nonlinear as well as the cut-off terms, and where $\mathcal{L}$ is the operator
\[
\mathcal{L}\phi = \phi_{yy} - \frac y2 \phi_y + \phi.
\]
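For later reference, the lowest eigenmodes of $\mathcal{L}$ (in the unnormalized convention used below) are obtained by direct computation:
\[
\mathcal{L}\, 1 = 1, \qquad \mathcal{L}\, y = \tfrac12\, y, \qquad \mathcal{L}\,(y^2 - 2) = 0,
\]
so that $\psi_0 = 1$, $\psi_1 = y$ and $\psi_2 = y^2-2$ are eigenfunctions with eigenvalues $1$, $\tfrac12$ and $0$, respectively; these eigenvalues are the growth rates appearing in the differential inequalities below.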
Using the fact that $v$ comes from an ancient solution, and comparing the Huisken functional of $M_\tau$ with that of the cylinder, we can show as in \cite{ADS} that for any $\epsilon>0$ one can choose $\ell=\ell_\epsilon$ and $\tau_\epsilon<0$ with $|\tau_\epsilon|$ large enough so that
\begin{equation}
\label{eq-error-small}
\| E(\tau) \|_\mathfrak{H} \leq \epsilon \|v(\cdot, \tau)\|_\mathfrak{H}
\end{equation}
holds for all $\tau\leq\tau_\epsilon$.
As in \cite{ADS} we can decompose $v$ into eigenfunctions of the linearized equation, i.e.
\[
v(y, \tau) = v_-(y, \tau) + c_2(\tau) \psi_2(y) + v_+(y, \tau)
\]
with the only difference that $v_\pm$ are no longer necessarily even functions of $y$.
The component in the unstable directions now has two terms,
\[
v_+(y) = c_0(\tau) \psi_0(y) + c_1(\tau) \psi_1(y) = c_0(\tau) + c_1(\tau)\, y.
\]
The estimate \eqref{eq-error-small} implies that the exponential growth rates of the various components $v_-$, $c_2$, $c_1$, $c_0$ are close to the growth rates predicted by the linearization, i.e.~if we write $V_-(\tau) = \|v_-(\cdot, \tau)\|_\mathfrak{H}$, then we have
\begin{subequations}
\begin{align}
&V_-'(\tau)
\leq -\tfrac 12 V_-(\tau)
+ \epsilon \|v(\cdot, \tau)\| \label{eq-component-Vmin}\\
&|c_2'(\tau)| \leq \epsilon \|v(\cdot, \tau)\| \\
&|c_1'(\tau) - \tfrac 12 c_1(\tau)| \leq \epsilon \|v(\cdot, \tau)\| \\
&|c_0'(\tau) - c_0(\tau)| \leq \epsilon \|v(\cdot, \tau)\|
\end{align}
\end{subequations}
The total norm, which appears on the right in each of these inequalities, is given by the Pythagorean identity
\[
\|v(\cdot, \tau)\|_\mathfrak{H}^2 = V_-(\tau)^2 + c_0(\tau)^2 + c_1(\tau)^2 + c_2(\tau)^2.
\]
Using the ODE Lemma (see Lemma in \cite{ADS}) we conclude that for $\tau\to-\infty$ exactly one of the four quantities $V_-(\tau)$, $c_0(\tau)$, $c_1(\tau)$, and $c_2(\tau)$ is much larger than the others.
Similarly to \cite{ADS}, we will now argue that $c_2(\tau)$ is in fact the largest term:
\begin{lemma}
For $\tau\to -\infty$ we have
\[
V_-(\tau) + |c_0(\tau)| + |c_1(\tau)|= o\bigl(|c_2(\tau)|\bigr).
\]
\end{lemma}
\begin{proof}
We must rule out that any of the three components $V_-$, $c_0$, or $c_1$ dominates for $\tau\ll 0$.
The simplest is $V_-$, for if $\|v(\tau)\|_\mathfrak{H} = \mathcal{O}(V_-(\tau))$, then \eqref{eq-component-Vmin} implies that $V_-(\tau)$ is exponentially decaying.
Since $v(\cdot, \tau)\to0$ as $\tau\to -\infty$, it would follow that $V_-(\tau)\equiv0$, and thus $v(\cdot, \tau)\equiv 0$, which is impossible.
If $c_0(\tau)$ dominates, i.e.\ $\|v(\cdot, \tau)\|_\mathfrak{H} = \mathcal{O}\bigl(|c_0(\tau)|\bigr)$, then on any bounded interval $|y|\leq L$ we have
\[
v(y,\tau) = c_0(\tau) \bigl( 1 + o(1)\bigr) \qquad (\tau\to -\infty).
\]
In this case we derive a contradiction using the same arguments as in \cite{ADS}.
Finally, if $c_1(\tau)$ were the largest component, then we would have
\[
v(y,\tau) = c_1(\tau) \bigl( y + o(1)\bigr) \qquad (\tau\to -\infty)
\]
so that we would have either $v(L,\tau)>0$, or $v(-L,\tau)>0$ for all $\tau\ll0$.
This again contradicts Lemma~\ref{lem-must-be-inside}.
\end{proof}
Once we have the result in Lemma \ref{lem-must-be-inside}, it follows as in \cite{ADS} that
\[
u(y,\tau) = \sqrt{2(n-1)}\, \left(1 - \frac{y^2 - 2}{4|\tau|}\right) + o(|\tau|^{-1}), \qquad \text{for } |y| \le M
\]
as $\tau \to -\infty$.
This implies that
$y(\tau)$, the maximum point of $u(y,\tau)$ (such that $u_y(y(\tau),\tau) = 0$) satisfies
\[
|y(\tau)| = o(1), \qquad \mbox{as}\,\, \tau \to -\infty.
\]
In particular we have that $y(\tau) \leq 1$ for $\tau \leq \tau_0 \ll -1$.
After we conclude this, the arguments in the intermediate and the tip region asymptotics in \cite{ADS} go through in our current case where we lack the reflection symmetry.
\end{proof}
\end{document}
\begin{document}
\title{Experimental Observation of Equilibrium and Dynamical Quantum Phase Transitions via Out-of-Time-Ordered Correlators}
\author{Xinfang Nie} \thanks{These authors contributed equally to this work.} \affiliation{Shenzhen Institute for Quantum Science and Engineering and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China} \affiliation{CAS Key Laboratory of Microscale Magnetic Resonance and Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China}
\author{Bo-Bo Wei} \thanks{These authors contributed equally to this work.} \affiliation{School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Shenzhen 518172, China}
\author{Xi Chen} \affiliation{CAS Key Laboratory of Microscale Magnetic Resonance and Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China}
\author{Ze Zhang} \affiliation{Shenzhen Institute for Quantum Science and Engineering and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China} \author{Xiuzhu Zhao} \affiliation{Shenzhen Institute for Quantum Science and Engineering and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China} \author{Chudan Qiu} \affiliation{Shenzhen Institute for Quantum Science and Engineering and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China} \author{Yu Tian} \affiliation{Shenzhen Institute for Quantum Science and Engineering and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China} \author{Yunlan Ji} \affiliation{Shenzhen Institute for Quantum Science and Engineering and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China}
\author{Tao Xin} \email{[email protected]} \affiliation{Shenzhen Institute for Quantum Science and Engineering and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China}
\author{Dawei Lu} \email{[email protected]} \affiliation{Shenzhen Institute for Quantum Science and Engineering and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China} \author{Jun Li} \email{[email protected]} \affiliation{Shenzhen Institute for Quantum Science and Engineering and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China}
\date{\today}
\begin{abstract} The out-of-time-ordered correlators (OTOC) have been established as a fundamental concept for quantifying quantum information scrambling and diagnosing quantum chaotic behavior. Recently, it was theoretically proposed that the OTOC can be used as an order parameter to dynamically detect both equilibrium quantum phase transitions (EQPTs) and dynamical quantum phase transitions (DQPTs) in one-dimensional many-body systems. Here we report the first experimental observation of EQPTs and DQPTs in a quantum spin chain via quench dynamics of OTOC on a nuclear magnetic resonance quantum simulator. We observe that the quench dynamics of both the order parameter and the two-body correlation function cannot detect the DQPTs, but the OTOC can unambiguously detect the DQPTs. Moreover, we demonstrate that the long-time average value of the OTOC in quantum quench signals the equilibrium quantum critical point and ordered quantum phases, thus one can measure the EQPTs from the non-equilibrium quantum quench dynamics. Our experiment paves a way for experimentally investigating DQPTs through OTOCs and for studying the EQPTs through the non-equilibrium quantum quench dynamics with quantum simulators.
\end{abstract}
\maketitle \emph{Introduction.} -- Equilibrium quantum phase transitions (EQPTs)~\cite{Sachdev2011} are among the most significant phenomena in many-body physics, since they signal new states of quantum matter. An EQPT is accompanied by a nonanalytic change of some physical observable at a quantum critical point and is well understood within the paradigm of renormalization group theory~\cite{Cardy:1996th}. Recently, dynamical quantum phase transitions (DQPTs) that emerge in the dynamics of isolated quantum many-body systems have attracted extensive theoretical~\cite{PhysRevLett.110.135704,PhysRevLett.113.205701,PhysRevLett.115.140602,PhysRevB.89.125120,PhysRevB.93.085416,PhysRevLett.117.086802,zvyagin2016dynamical,PhysRevB.93.144306,PhysRevB.95.075143,PhysRevLett.120.130601,heyl2018dynamical,PhysRevA.98.022129} and experimental~\cite{zhang2017observation2,PhysRevLett.119.080501,PhysRevApplied.11.044080,wang2018simulating,bernien2017probing,flaschner2018observation,PhysRevB.100.024310} interest. There are two different types of DQPTs. One type is witnessed by the nonanalyticity in the rate function of the Loschmidt echo at critical times~\cite{PhysRevLett.110.135704}, which resembles the nonanalyticity of the free energy density as a function of temperature or other control parameters in EQPTs. The other type is revealed by the nonanalyticity of some local order parameter, measured in the long-time limit of the quench dynamics, as a function of the control parameter of the quenched Hamiltonian~\cite{PhysRevLett.120.130601}. Both types of DQPTs are intrinsically dynamical quantum phenomena without equilibrium counterparts~\cite{heyl2018dynamical}.
EQPTs and DQPTs are both connected to large quantum fluctuations~\cite{Sachdev2011,Cardy:1996th} and therefore related to the fast propagation of quantum information in many-body systems, which can be captured by the recently proposed out-of-time-ordered correlator (OTOC)~\cite{shenker2014black,shenker2014multiple}. The OTOC is defined as \begin{equation}\label{Eq0} O(t)=\langle W(t)^\dagger V^\dagger W(t) V\rangle \end{equation} for a given physical system described by a Hamiltonian $H$ and an initial state $\vert \psi_0\rangle$. Here, $W$ and $V$ are two local Hermitian operators, $W(t)=U^\dagger(t) W(0)U(t)$ is an operator in the Heisenberg picture with time evolution operator $U(t)=e^{-iHt}$, and $\langle\cdot\rangle$ denotes the expectation value over the initial state $\vert \psi_0\rangle$. The OTOC has been proposed to describe the dispersal of local quantum information in quantum many-body systems, termed information scrambling~\cite{chen2016universal,banerjee2017solvable,he2017characterizing,shen2017out,slagle2017out,fan2017out,huang2017out,iyoda2018scrambling,lin2018out,pappalardi2018scrambling,zhang2019information}. Moreover, it has found numerous applications in far-from-equilibrium quantum phenomena, ranging from nonequilibrium statistical mechanics~\cite{PhysRevA.95.012120} and quantum thermalization~\cite{eisert2015quantum,Bohrdt_2017,swingle2018unscrambling,PhysRevE.95.062127} to black holes~\cite{PhysRevLett.120.231601,magan2018black}. A recent theoretical study~\cite{heyl2018detecting} proposes that the quench dynamics of the OTOC can be used to detect EQPTs and DQPTs in many-body systems~\cite{PhysRevLett.123.140602,sun2018out,PhysRevB.100.195107}. However, experimental progress on this OTOC-based detection scheme has been elusive.
Here, we report the first experimental observation of EQPTs and DQPTs from the quench dynamics of the OTOC. Specifically, we simulate quantum Ising models, including an integrable model and a nonintegrable model, on a four-qubit quantum simulator with the nuclear magnetic resonance (NMR) technique. We experimentally measure the quench dynamics of both the two-body correlation function of the order parameter and the OTOC of the order parameter, starting from a fully polarized initial quantum state. On the one hand, we observe that the two-body correlation function cannot signal the DQPTs but the OTOC can clearly detect the DQPTs in both integrable and nonintegrable quantum Ising models, which experimentally establishes OTOCs as a useful probe of DQPTs. On the other hand, we experimentally show that the long-time average value of the OTOC in a quantum quench signals the equilibrium critical points in both integrable and nonintegrable quantum models, showing that one can extract equilibrium quantum critical properties from non-equilibrium quantum quench dynamics.
\emph{Quenches in Ising models.} -- \begin{figure}
\caption{
(a) Illustration of the two kinds of sudden quantum quenches in a periodic one-dimensional ferromagnetic Ising chain from initial ferromagnetic phase to (i) ferromagnetic phase ($g<g_c$) , and (ii) paramagnetic phase ($g>g_c$).
(b) Molecular structure and the Hamiltonian parameters of $^{13}$C-iodotrifluoroethylene (C$_2$F$_3$I).
The precession frequencies $\omega_i$ and the scalar coupling strengths are given by the diagonal and off-diagonal elements in the table respectively (in units of Hz).
(c) Quantum circuit diagram of the experiment to detect the OTOC $F(t)$ of the one-dimensional Ising chains.}
\label{Fig1}
\end{figure} We study the quench dynamics of ferromagnetic one-dimensional transverse-field Ising model with periodic boundary condition, as shown in Fig.~\ref{Fig1}(a). The corresponding Hamiltonian is written as \begin{equation}\label{Eq1} H=-\sum_{n=1}^{N}[J\sigma^z_n\sigma^z_{n+1}+\Delta\sigma^z_n\sigma^z_{n+2}+g \sigma^x_n],
\end{equation} where $\sigma^\alpha_n$ $ (\alpha=x, y, z)$ are Pauli operators on the $n$-th site, $J$ and $\Delta$ denote nearest-neighbor (NN) and the next-nearest-neighbor (NNN) couplings, and $g$ is the uniform transverse field. For a ferromagnetic Ising model ($J>0$), we assume $J=1$ without loss of generality.
We investigate two kinds of Ising chains: the integrable version with only nearest-neighbor interactions, i.e., $\Delta=0$, termed the transverse-field Ising chain (TFIC), and the nonintegrable one with both NN and NNN interactions, namely the axial next-nearest-neighbor Ising (ANNNI) model. Both models serve as paradigms of EQPTs~\cite{Sachdev2011,PhysRevB.87.195104,bernien2017probing,PhysRevB.93.144306,SELKE1988213,chakrabarti2008quantum}. The TFIC undergoes a quantum phase transition at the critical point $g_c=1$: it stays in the ferromagnetic phase for $g<1$, and in the paramagnetic phase for $g>1$. The quantum phase diagram is much more complex for the ANNNI model~\cite{SELKE1988213,chakrabarti2008quantum,PhysRevB.24.6620,Peschel1981,PhysRevB.73.052402,PhysRevB.87.195104}. Here, we consider the phase transition between the ferromagnetic and paramagnetic phases in the case of $\Delta=0.5$, where the critical point is located at $g_c\simeq1.6$. The DQPTs in such Ising models have been studied theoretically and experimentally via Loschmidt echoes~\cite{PhysRevB.87.195104,PhysRevLett.110.135704,PhysRevLett.119.080501,zhang2017observation2}.
\begin{figure*}
\caption{Experimentally measured real parts of OTOC $F_R(t)$ (top panels) and autocorrelation $\chi_R(t)$ (bottom panels) of the quantum quench dynamics
as a function of time $t$
in (a) TFIC model with $\Delta=0$ and (b) ANNNI model with $\Delta=0.5$.
Both systems start from the fully polarized state $|\psi_0\rangle$ and then undergo unitary evolutions governed by the Hamiltonian $H(g)$.
The dots are the experimental data, the solid lines are the numerical simulation results,
and the error bars are computed from the imperfections of the pulses.}
\label{Fig2}
\end{figure*}
DQPTs are usually investigated via quantum quenches: the system starts from the ground state of an initial Hamiltonian $H_0$, and then evolves under another Hamiltonian $H_f$. For example, in the Ising model in Eq.~\eqref{Eq1}, we choose $\vert\psi_0\rangle$ as the fully polarized state $\left| \uparrow \uparrow \uparrow\ldots \right\rangle$, which is one of the two degenerate ground states of the initial Hamiltonian with $g=0$. There are two kinds of quantum quenches in the transverse-field Ising model, as shown in Fig.~\ref{Fig1}(a), depending on whether $g < g_c$ or $g > g_c$.
A DQPT occurs only in the second case, when the system is quenched across $g_c$.
The autocorrelation function $\chi(t)=\langle \sigma_n^z(t)\sigma_n^z\rangle$~\cite{Sachdev2011}, which can probe equilibrium dynamics, becomes featureless in the nonequilibrium case. It has been proposed that the second moment of the autocorrelation function~\cite{heyl2018detecting}, i.e., \begin{equation}\label{Eq2}
F(t)=\langle \sigma_n^z(t)\sigma_n^z\sigma_n^z(t)\sigma_n^z\rangle, \end{equation} can be used to distinguish the two kinds of quench dynamics.
In fact, this function $F(t)$ corresponds to the OTOC $O(t)$ in Eq.~\eqref{Eq0} when the two local operators $W$ and $V$ are both chosen to be $\sigma_n^z$. In the experiment, we set $W=V=\sigma_1^z$. By observing the time dependence of the real part $F_\text{R}(t)$, one can determine whether the time-evolving Hamiltonian $H$ is in the ferromagnetic or paramagnetic region.
Moreover, the time-averaged OTOC $\bar{F}_\text{R}=\frac{1}{t}\int_{\tau=0}^tF_\text{R}(\tau)d\tau$ also serves as an order parameter for DQPTs: $\bar{F}_\text{R}$ is nonzero in the ferromagnetic phase and vanishes gradually upon approaching the critical point, while it stays zero throughout the paramagnetic phase.
\emph{Experiment.} -- The experiments are carried out on a Bruker Ascend $600$ MHz spectrometer ($14.1$ T) equipped with a cryo probe. The physical system used to perform quantum simulation is the ensemble of $^{13}$C-iodotrifluoroethylene (C$_2$F$_3$I) dissolved in $d$-chloroform. The $^{13}$C nuclear spin (Qubit 1) and the three $^{19}$F nuclear spins (Qubits 2-4) constitute a four-qubit quantum simulator. Each nuclear spin corresponds to a spin site in the Ising model. The natural Hamiltonian of the nuclear system placed in a static magnetic field along $z$-direction is \begin{equation}\label{Eq3}
H_\text{NMR}=-\sum_{i=1}^4\frac{\omega_{i}}{2}\sigma^z_i+\sum_{1\le i<j\le 4}\frac{\pi J_{i,j}}{2}\sigma^z_i\sigma^z_j, \end{equation} where $\omega_{i}/2\pi$ is the Larmor frequency of the $i$-th spin, and $J_{i,j}$ is the scalar coupling between the $i$-th and $j$-th spins. The molecular structure and the NMR Hamiltonian parameters of the sample are given in Fig.~\ref{Fig1}(b).
Because $\vert\psi_0\rangle$ is an eigenstate of $\sigma_1^z$, the OTOC in Eq.~\eqref{Eq2} can be rewritten as \begin{equation}\label{Eq4}
F(t)=\langle \psi(t)\vert \sigma_1^z \vert \psi(t)\rangle, \end{equation}
with $\vert \psi(t)\rangle=e^{iHt}\sigma_1^ze^{-iHt}\vert\psi_0\rangle$. In other words, to measure the OTOC throughout the quench dynamics, we need to initialize the system to the fully polarized state $\vert \psi_0\rangle$, then apply a unitary transformation $U(t)=e^{iHt}\sigma_1^ze^{-iHt}$ and finally measure the expectation value $\langle\sigma_1^z\rangle$ of the final state. The whole experimental process is illustrated in Fig.~\ref{Fig1} (c). The major part of the experiment is to simulate the unitary operation $U(t) $, which we describe in the following.
Given the target Hamiltonian $H$, we divide the quench dynamics into $M$ discrete time steps, and record the instantaneous OTOCs at time $t=k\tau$ ($k=0,1,\cdots, M-1$). The unitary evolution $U(t)$ can be decomposed into a sequence of unitary transformations: a time evolution operator $e^{-iHk\tau}$, a single qubit rotation $\sigma_1^z$ and a backward time evolution $e^{iHk\tau}$. The key point of the unitary evolution lies in the realization of the two operators $e^{-iH\tau}$ and $e^{iH\tau}$. Utilizing the Trotter-Suzuki decomposition formula, the time evolution $e^{-iH\tau}$ can be simulated approximately by \begin{equation}\label{Trotter}
e^{-iH\tau}\approx [e^{-iH_\text{x}\delta\tau/2} e^{-iH_\text{zz}\delta\tau} e^{-iH_\text{x}\delta\tau/2}]^m, \end{equation} where the evolution time $\tau$ is divided into $m$ segments with equal time length $\delta\tau=\tau/m$. Here, $H_\text{x}=-g\sum_{n=1}^{4}\sigma_x^{n}$, $H_\text{zz}=-\sum_{n=1}^4\sigma_z^{n}\sigma_z^{n+1}$ for the TFIC model, and $H_\text{zz}=-[\sum_{n=1}^4\sigma_z^{n}\sigma_z^{n+1}+\Delta\sum_{n=1}^4\sigma_z^{n}\sigma_z^{n+2}]$ for the ANNNI model. In each segment of evolution, $e^{-iH_\text{x}\delta\tau/2}$ and $e^{-iH_\text{zz}\delta\tau}$ can be realized through optimized radio-frequency pulses combined with the NMR refocusing technique.
The reverse time evolution $e^{iH\tau}$ can also be done in a similar way. As to the operator $\sigma_1^z$, it is a $\pi$ rotation about the $z$-axis on the first nucleus.
In experiment, to improve the control accuracy, we engineer the unitary evolution $U(t)$ with a shaped pulse optimized by the gradient ascent technique~\cite{khaneja2005optimal}. The width of the shaped pulse for each $U(t)$ is $40$ ms with theoretical fidelity above $99.5\%$.
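As an independent cross-check of this measurement scheme (and not of the pulse-level NMR implementation itself), the quench OTOC $F_\text{R}(t)$ for the four-site periodic chain can be reproduced by a direct state-vector simulation; a minimal Python sketch using exact matrix exponentials in place of the Trotterized sequence \eqref{Trotter} is given below, with illustrative parameters.
\begin{verbatim}
# Minimal exact-diagonalization sketch of the quench OTOC F(t) for the
# 4-site periodic transverse-field Ising chain (illustrative parameters,
# not the experimental pulse-level implementation).
import numpy as np
from scipy.linalg import expm

N = 4                                   # number of spins (NMR qubits)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli(op, n):
    """Single-site operator `op` on site n (0-indexed), identity elsewhere."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, op if k == n else I2)
    return out

def ising_hamiltonian(g, J=1.0):
    """TFIC: H = -sum_n [J sz_n sz_{n+1} + g sx_n], periodic boundary."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for n in range(N):
        H -= J * pauli(sz, n) @ pauli(sz, (n + 1) % N)
        H -= g * pauli(sx, n)
    return H

# Fully polarized initial state |up,up,up,up>.
psi0 = np.zeros(2**N, dtype=complex)
psi0[0] = 1.0

sz1 = pauli(sz, 0)
H = ising_hamiltonian(g=1.5)            # quench into the paramagnetic region

for k in range(12):                     # 12 time steps of size 0.5
    t = 0.5 * k
    U = expm(-1j * H * t)
    # |psi(t)> = e^{iHt} sz_1 e^{-iHt} |psi0>, then F(t) = <psi(t)|sz_1|psi(t)>
    psi_t = U.conj().T @ (sz1 @ (U @ psi0))
    F = np.vdot(psi_t, sz1 @ psi_t)
    print(f"t = {t:4.1f}   F_R(t) = {F.real:+.4f}")
\end{verbatim}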
\emph{Integrable TFIC model.} -- We first study the quench dynamics in the TFIC by observing the time dependence of the OTOC.
In the experiment, we consider two different quenches, as shown in the upper panels of Fig.~\ref{Fig1}(a): (i) quenching from $g=0$ to $g=0.5$ and (ii) quenching from $g=0$ to $g=1.5$. The whole evolution is divided into $M=12$ steps with fixed time increment $\tau=0.5$, and the experimental results are shown in the upper panels of Fig.~\ref{Fig2}(a). Only the real part of the OTOC, $F_\text{R}(t)$, is measured in the experiment. In both quenches, $F_\text{R}(t)$ starts from $F_\text{R}(t=0)=1$ and then decays due to information spreading. Clearly, the long-time behaviors of the two cases are quite different. For $g=0.5$, where the Hamiltonian is in the ferromagnetic region, $F_\text{R}(t)$ oscillates as a function of time but remains positive. In contrast, for $g=1.5$, ${F}_\text{R}(t)$ oscillates around zero \footnote{This result differs somewhat from the theoretical prediction in \cite{heyl2018detecting}, which states that $F_\text{R}(t)$ will reach zero. This discrepancy is mainly due to the small size of the simulated system; see the supplemental material~\cite{SupplementalMaterial}.}.
From the behavior of $F_\text{R}(t)$, we can readily distinguish the dynamical ferromagnetic and paramagnetic phases; that is to say, there must be a DQPT in between. For comparison, we measure the time evolution of the autocorrelation $\chi(t)=\langle \sigma_1^z(t)\sigma_1^z \rangle$
during the quench dynamics, with the experimental results shown in the lower panels of Fig.~\ref{Fig2}(a). In theory, for a quantum quench from the polarized state $\left| \uparrow \uparrow \uparrow\ldots \right\rangle$, $\chi_\text{R}(t)=\langle \sigma_1^z(t)\rangle$ vanishes with time because the quantum system is heated by the quenching process, and there is no long-range quantum order for one-dimensional models with short-range interactions at non-zero temperature. Indeed, we observe that $\chi_\text{R}(t)$ oscillates around zero in both quantum quenches [the lower panels of Fig.~\ref{Fig2}(a)], which indicates that the autocorrelation function cannot be used to distinguish the two different dynamical quantum phases, and thus cannot detect DQPTs. Therefore, we experimentally verify that the OTOC of the order parameter, as a four-point correlation function, can detect different dynamical quantum phases and the DQPTs, while the order parameter and the two-body correlation function cannot.
\begin{figure}
\caption{Long-time averaged OTOC $\bar{F}_\text{R}$ as a function of the transverse field strength $g$ in
(a) TFIC model and (b) ANNNI model.
The blue dots and the solid lines represent the experimental data and numerical simulation results, respectively.
The red dots mark the critical points of the two models for the phase transition
between the ferromagnetic phase and the paramagnetic phase.
The red lines are the simulation results with $N=9$ for comparison.}
\label{Fig3}
\end{figure}
Furthermore, we study how the long-time average of OTOCs $\bar{F}_\text{R}$ changes with the transverse field $g$. We vary $g$ from $0.1$ to $1.9$ with increment $0.1$, and measure $\bar{F}_\text{R}$ during the time evolution of OTOCs. The experimental results are shown in Fig.~\ref{Fig3}(a).
In the ferromagnetic phase, $\bar{F}_\text{R}$ is nonzero and it decreases to zero as $g$ approaches the equilibrium critical point $g=1$. In the paramagnetic phase, $\bar{F}_\text{R}$ stays at zero. This result confirms the validity of using the long-time averaged OTOC as an order parameter to detect DQPTs as well as to locate the corresponding equilibrium quantum critical point. The fluctuation beyond the critical point in the simulation result (blue dashed line) is due to the small size of the system. We perform numerical simulations for larger systems ($N\geq9$) and find that the fluctuation is strongly suppressed (red solid line).
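The corresponding scan of the long-time average $\bar{F}_\text{R}$ over $g$ can be sketched in the same way (same conventions and assumptions as in the previous snippet); for the small $N=4$ system the average is expected to drop towards zero around $g=1$, with visible finite-size fluctuations.
\begin{verbatim}
import numpy as np
from functools import reduce
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
site = lambda op, i, n=4: reduce(np.kron, [op if j == i else np.eye(2) for j in range(n)])

n = 4
psi0 = np.zeros(2**n, dtype=complex); psi0[0] = 1.0
Z1 = site(sz, 0)
times = 0.5 * np.arange(1, 13)

for g in np.arange(0.1, 2.0, 0.1):
    H = (-g * sum(site(sx, i) for i in range(n))
         - sum(site(sz, i) @ site(sz, (i + 1) % n) for i in range(n)))
    vals = []
    for t in times:
        U = expm(-1j * H * t)
        Zt = U.conj().T @ Z1 @ U
        vals.append(np.vdot(psi0, Zt @ Z1 @ Zt @ Z1 @ psi0).real)
    print(round(float(g), 1), round(float(np.mean(vals)), 3))
\end{verbatim}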
\emph{Non-integrable ANNNI model.} --
We now turn to the non-integrable ANNNI model. Two different quantum quenches are investigated: from $g=0$ to $g=0.5$ and from $g=0$ to $g=2.0$. The dynamics is divided into $M=15$ segments, with the duration of each segment set to $\tau=0.2$. The experimental OTOCs $F_\text{R}(t)$ are shown in the top panels of Fig.~\ref{Fig2}(b), and $\chi_\text{R}(t)$ is also measured for comparison, shown in the bottom panels of Fig.~\ref{Fig2}(b). From the results, it can be seen that the two-body correlation function cannot distinguish the different dynamical phases and the DQPTs, while the OTOC of the order parameter does. The long-time averaged OTOC as a function of the quench parameter is experimentally observed in Fig.~\ref{Fig3}(b), where $g$ is varied from $0.1$ to $2.4$. Behaviors similar to those of the TFIC model are observed for the ANNNI model: $\bar{F}_\text{R}$ takes a finite value in the ferromagnetic phase, gradually approaches zero as $g$ approaches the critical point $g_c\simeq1.6$, and finally stays at zero throughout the paramagnetic region. This is evidence that the OTOC in quench dynamics can serve as an order parameter to locate the equilibrium quantum critical point also in non-integrable cases, outperforming the autocorrelation function. The numerical simulation for $N=9$ (red solid line in Fig.~\ref{Fig3}(b)) is also shown.
\emph{Conclusion.} -- In this work, we present the first experimental observation of EQPTs and DQPTs from the quench dynamics of OTOCs in both integrable and non-integrable Ising models on an NMR quantum simulator. In conclusion, neither the order parameter nor the ordinary two-body correlation function in a quantum quench can be used as a probe of DQPTs. The OTOC, however, which is a four-point correlation function, can detect DQPTs. Therefore, our experiment unveils the important connection between the OTOC and DQPTs. Moreover, our experiment demonstrates the feasibility of studying EQPTs experimentally by performing a dynamical non-equilibrium measurement, without carrying out the challenging initialization of the true many-body ground state. In addition to quantifying information scrambling and diagnosing chaotic behavior of quantum many-body systems, our experiment establishes the OTOC as a faithful probe of DQPTs and EQPTs. While our work focuses on short-range many-body systems, it would be interesting to investigate the relations among OTOCs, EQPTs and DQPTs in one- and higher-dimensional long-range many-body systems.
\emph{Acknowledgments}. This work is supported by the National Key Research and Development Program of China (Grants No. 2019YFA0308100), the National Natural Science Foundation of China (Grants No. 11605005, No. 11875159, No. 11905099, No. 11975117, No. U1801661, and No. 11604220), Science, Technology and Innovation Commission of Shenzhen Municipality (Grants No. ZDSYS20170303165926217 and No. JCYJ20170412152620376), Guangdong Innovative and Entrepreneurial Research Team Program (Grant No. 2016ZT06D348), Guangdong Basic and Applied Basic Research Foundation (Grant No. 2019A1515011383). B. B. W. also acknowledges the President's Fund of the Chinese University of Hong Kong, Shenzhen.
\begin{thebibliography}{54} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Sachdev}(2011)}]{Sachdev2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Sachdev}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Phase
Transitions}}}\ (\bibinfo {publisher} {Cambridge University Press,
Cambridge, England},\ \bibinfo {year} {2011})\BibitemShut {NoStop} \bibitem [{\citenamefont {Cardy}(1996)}]{Cardy:1996th}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Cardy}},\ }\href@noop {} {\emph {\bibinfo {title} {Scaling and
Renormalization in Statistical Physics}}}\ (\bibinfo {publisher} {Cambridge
University Press},\ \bibinfo {year} {1996})\BibitemShut {NoStop} \bibitem [{\citenamefont {Heyl}\ \emph {et~al.}(2013)\citenamefont {Heyl},
\citenamefont {Polkovnikov},\ and\ \citenamefont
{Kehrein}}]{PhysRevLett.110.135704}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Heyl}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Polkovnikov}}, \
and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Kehrein}},\ }\href
{\doibase 10.1103/PhysRevLett.110.135704} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo
{pages} {135704} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Heyl}(2014)}]{PhysRevLett.113.205701}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Heyl}},\ }\href {\doibase 10.1103/PhysRevLett.113.205701} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {113}},\ \bibinfo {pages} {205701} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Heyl}(2015)}]{PhysRevLett.115.140602}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Heyl}},\ }\href {\doibase 10.1103/PhysRevLett.115.140602} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {115}},\ \bibinfo {pages} {140602} (\bibinfo {year}
{2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Andraschko}\ and\ \citenamefont
{Sirker}(2014)}]{PhysRevB.89.125120}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Andraschko}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Sirker}},\ }\href {\doibase 10.1103/PhysRevB.89.125120} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume}
{89}},\ \bibinfo {pages} {125120} (\bibinfo {year} {2014})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Budich}\ and\ \citenamefont
{Heyl}(2016)}]{PhysRevB.93.085416}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont
{Budich}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Heyl}},\
}\href {\doibase 10.1103/PhysRevB.93.085416} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo
{pages} {085416} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ and\ \citenamefont
{Balatsky}(2016)}]{PhysRevLett.117.086802}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont
{Huang}}\ and\ \bibinfo {author} {\bibfnamefont {A.~V.}\ \bibnamefont
{Balatsky}},\ }\href {\doibase 10.1103/PhysRevLett.117.086802} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {117}},\ \bibinfo {pages} {086802} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zvyagin}(2016)}]{zvyagin2016dynamical}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont
{Zvyagin}},\ }\href {\doibase 10.1063/1.4969869} {\bibfield {journal}
{\bibinfo {journal} {Low Temp. Phys.}\ }\textbf {\bibinfo {volume} {42}},\
\bibinfo {pages} {971} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sharma}\ \emph {et~al.}(2016)\citenamefont {Sharma},
\citenamefont {Divakaran}, \citenamefont {Polkovnikov},\ and\ \citenamefont
{Dutta}}]{PhysRevB.93.144306}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Sharma}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Divakaran}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Polkovnikov}}, \ and\
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dutta}},\ }\href
{\doibase 10.1103/PhysRevB.93.144306} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo
{pages} {144306} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Karrasch}\ and\ \citenamefont
{Schuricht}(2017)}]{PhysRevB.95.075143}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Karrasch}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Schuricht}},\ }\href {\doibase 10.1103/PhysRevB.95.075143} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume}
{95}},\ \bibinfo {pages} {075143} (\bibinfo {year} {2017})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {\ifmmode \check{Z}\else
\v{Z}\fi{}unkovi\ifmmode~\check{c}\else \v{c}\fi{}}\ \emph
{et~al.}(2018)\citenamefont {\ifmmode \check{Z}\else
\v{Z}\fi{}unkovi\ifmmode~\check{c}\else \v{c}\fi{}}, \citenamefont {Heyl},
\citenamefont {Knap},\ and\ \citenamefont {Silva}}]{PhysRevLett.120.130601}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{\ifmmode \check{Z}\else \v{Z}\fi{}unkovi\ifmmode~\check{c}\else
\v{c}\fi{}}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Heyl}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Knap}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Silva}},\ }\href {\doibase
10.1103/PhysRevLett.120.130601} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {120}},\ \bibinfo {pages}
{130601} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Heyl}(2018)}]{heyl2018dynamical}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Heyl}},\ }\href {\doibase 10.1088/1361-6633/aaaf9a} {\bibfield {journal}
{\bibinfo {journal} {Rep. Prog. Phys.}\ }\textbf {\bibinfo {volume} {81}},\
\bibinfo {pages} {054001} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhou}\ \emph {et~al.}(2018)\citenamefont {Zhou},
\citenamefont {Wang}, \citenamefont {Wang},\ and\ \citenamefont
{Gong}}]{PhysRevA.98.022129}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Zhou}}, \bibinfo {author} {\bibfnamefont {Q.-h.}\ \bibnamefont {Wang}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wang}}, \ and\ \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Gong}},\ }\href {\doibase
10.1103/PhysRevA.98.022129} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {022129}
(\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhang}\ \emph {et~al.}(2017)\citenamefont {Zhang},
\citenamefont {Pagano}, \citenamefont {Hess}, \citenamefont {Kyprianidis},
\citenamefont {Becker}, \citenamefont {Kaplan}, \citenamefont {Gorshkov},
\citenamefont {Gong},\ and\ \citenamefont {Monroe}}]{zhang2017observation2}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Zhang}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Pagano}},
\bibinfo {author} {\bibfnamefont {P.~W.}\ \bibnamefont {Hess}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Kyprianidis}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Becker}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Kaplan}}, \bibinfo {author} {\bibfnamefont {A.~V.}\
\bibnamefont {Gorshkov}}, \bibinfo {author} {\bibfnamefont {Z.-X.}\
\bibnamefont {Gong}}, \ and\ \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Monroe}},\ }\href
{https://www.nature.com/articles/nature24654} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {551}},\ \bibinfo {pages}
{601} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jurcevic}\ \emph {et~al.}(2017)\citenamefont
{Jurcevic}, \citenamefont {Shen}, \citenamefont {Hauke}, \citenamefont
{Maier}, \citenamefont {Brydges}, \citenamefont {Hempel}, \citenamefont
{Lanyon}, \citenamefont {Heyl}, \citenamefont {Blatt},\ and\ \citenamefont
{Roos}}]{PhysRevLett.119.080501}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Jurcevic}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Shen}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Hauke}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Maier}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Brydges}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Hempel}}, \bibinfo {author} {\bibfnamefont {B.~P.}\
\bibnamefont {Lanyon}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Heyl}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}}, \ and\
\bibinfo {author} {\bibfnamefont {C.~F.}\ \bibnamefont {Roos}},\ }\href
{\doibase 10.1103/PhysRevLett.119.080501} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {119}},\ \bibinfo
{pages} {080501} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Guo}\ \emph {et~al.}(2019)\citenamefont {Guo},
\citenamefont {Yang}, \citenamefont {Zeng}, \citenamefont {Peng},
\citenamefont {Li}, \citenamefont {Deng}, \citenamefont {Jin}, \citenamefont
{Chen}, \citenamefont {Zheng},\ and\ \citenamefont
{Fan}}]{PhysRevApplied.11.044080}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-Y.}\ \bibnamefont
{Guo}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Yang}}, \bibinfo
{author} {\bibfnamefont {Y.}~\bibnamefont {Zeng}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont
{H.-K.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Deng}}, \bibinfo {author} {\bibfnamefont {Y.-R.}\
\bibnamefont {Jin}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Chen}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Zheng}}, \ and\
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Fan}},\ }\href {\doibase
10.1103/PhysRevApplied.11.044080} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Applied}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages}
{044080} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2018)\citenamefont {Wang},
\citenamefont {Qiu}, \citenamefont {Xiao}, \citenamefont {Zhan},
\citenamefont {Bian}, \citenamefont {Yi},\ and\ \citenamefont
{Xue}}]{wang2018simulating}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Wang}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Qiu}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {Xiao}}, \bibinfo {author}
{\bibfnamefont {X.}~\bibnamefont {Zhan}}, \bibinfo {author} {\bibfnamefont
{Z.}~\bibnamefont {Bian}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{Yi}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Xue}},\
}\href {https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.020501}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {122}},\ \bibinfo {pages} {020501} (\bibinfo {year}
{2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bernien}\ \emph {et~al.}(2017)\citenamefont
{Bernien}, \citenamefont {Schwartz}, \citenamefont {Keesling}, \citenamefont
{Levine}, \citenamefont {Omran}, \citenamefont {Pichler}, \citenamefont
{Choi}, \citenamefont {Zibrov}, \citenamefont {Endres}, \citenamefont
{Greiner}, \citenamefont {Vuleti{\'c}},\ and\ \citenamefont
{Lukin}}]{bernien2017probing}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bernien}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Schwartz}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Keesling}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Levine}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Omran}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Pichler}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Choi}}, \bibinfo {author} {\bibfnamefont {A.~S.}\
\bibnamefont {Zibrov}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Endres}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Greiner}},
\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Vuleti{\'c}}}, \ and\
\bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont {Lukin}},\ }\href
{https://www.nature.com/articles/nature24622} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {551}},\ \bibinfo {pages}
{579} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fl{\"a}schner}\ \emph {et~al.}(2018)\citenamefont
{Fl{\"a}schner}, \citenamefont {Vogel}, \citenamefont {Tarnowski},
\citenamefont {Rem}, \citenamefont {L{\"u}hmann}, \citenamefont {Heyl},
\citenamefont {Budich}, \citenamefont {Mathey}, \citenamefont {Sengstock},\
and\ \citenamefont {Weitenberg}}]{flaschner2018observation}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Fl{\"a}schner}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Vogel}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Tarnowski}},
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Rem}}, \bibinfo {author}
{\bibfnamefont {D.-S.}\ \bibnamefont {L{\"u}hmann}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Heyl}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Budich}}, \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Mathey}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Sengstock}}, \ and\ \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Weitenberg}},\ }\href
{https://www.nature.com/articles/s41567-017-0013-8} {\bibfield {journal}
{\bibinfo {journal} {Nat. Phys.}\ }\textbf {\bibinfo {volume} {14}},\
\bibinfo {pages} {265} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tian}\ \emph {et~al.}(2019)\citenamefont {Tian},
\citenamefont {Ke}, \citenamefont {Zhang}, \citenamefont {Lin}, \citenamefont
{Shi}, \citenamefont {Huang}, \citenamefont {Lee},\ and\ \citenamefont
{Du}}]{PhysRevB.100.024310}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Tian}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Ke}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {Zhang}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Lin}}, \bibinfo {author} {\bibfnamefont
{Z.}~\bibnamefont {Shi}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Huang}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Lee}}, \ and\
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Du}},\ }\href {\doibase
10.1103/PhysRevB.100.024310} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. B}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo {pages}
{024310} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shenker}\ and\ \citenamefont
{Stanford}(2014{\natexlab{a}})}]{shenker2014black}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~H.}\ \bibnamefont
{Shenker}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Stanford}},\ }\href {https://doi.org/10.1007/JHEP03(2014)067} {\bibfield
{journal} {\bibinfo {journal} {J. High Energy Phys.}\ }\textbf {\bibinfo
{volume} {2014}},\ \bibinfo {pages} {67} (\bibinfo {year}
{2014}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shenker}\ and\ \citenamefont
{Stanford}(2014{\natexlab{b}})}]{shenker2014multiple}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~H.}\ \bibnamefont
{Shenker}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Stanford}},\ }\href {https://doi.org/10.1007/JHEP12(2014)046} {\bibfield
{journal} {\bibinfo {journal} {J. High Energy Phys.}\ }\textbf {\bibinfo
{volume} {2014}},\ \bibinfo {pages} {46} (\bibinfo {year}
{2014}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}(2016)}]{chen2016universal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Chen}},\ }\href {https://arxiv.org/abs/1608.02765} {\bibfield {journal}
{\bibinfo {journal} {arXiv:1608.02765v2}\ } (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Banerjee}\ and\ \citenamefont
{Altman}(2017)}]{banerjee2017solvable}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Banerjee}}\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Altman}},\ }\href {\doibase 10.1103/PhysRevB.95.134302} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume}
{95}},\ \bibinfo {pages} {134302} (\bibinfo {year} {2017})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {He}\ and\ \citenamefont
{Lu}(2017)}]{he2017characterizing}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.-Q.}\ \bibnamefont
{He}}\ and\ \bibinfo {author} {\bibfnamefont {Z.-Y.}\ \bibnamefont {Lu}},\
}\href {\doibase 10.1103/PhysRevB.95.054201} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo
{pages} {054201} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shen}\ \emph {et~al.}(2017)\citenamefont {Shen},
\citenamefont {Zhang}, \citenamefont {Fan},\ and\ \citenamefont
{Zhai}}]{shen2017out}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Shen}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zhang}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Fan}}, \ and\ \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Zhai}},\ }\href {\doibase
10.1103/PhysRevB.96.054503} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. B}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {054503}
(\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Slagle}\ \emph {et~al.}(2017)\citenamefont {Slagle},
\citenamefont {Bi}, \citenamefont {You},\ and\ \citenamefont
{Xu}}]{slagle2017out}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Slagle}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Bi}}, \bibinfo
{author} {\bibfnamefont {Y.-Z.}\ \bibnamefont {You}}, \ and\ \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Xu}},\ }\href {\doibase
10.1103/PhysRevB.95.165136} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. B}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {165136}
(\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fan}\ \emph {et~al.}(2017)\citenamefont {Fan},
\citenamefont {Zhang}, \citenamefont {Shen},\ and\ \citenamefont
{Zhai}}]{fan2017out}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Fan}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zhang}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Shen}}, \ and\ \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Zhai}},\ }\href {\doibase
10.1016/j.scib.2017.04.011} {\bibfield {journal} {\bibinfo {journal} {Sci.
Bull.}\ }\textbf {\bibinfo {volume} {62}},\ \bibinfo {pages} {707} (\bibinfo
{year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2017)\citenamefont {Huang},
\citenamefont {Zhang},\ and\ \citenamefont {Chen}}]{huang2017out}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Huang}}, \bibinfo {author} {\bibfnamefont {Y.-L.}\ \bibnamefont {Zhang}}, \
and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Chen}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Annalen der Physik}\ }\textbf
{\bibinfo {volume} {529}},\ \bibinfo {pages} {1600318} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Iyoda}\ and\ \citenamefont
{Sagawa}(2018)}]{iyoda2018scrambling}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Iyoda}}\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Sagawa}},\
}\href {\doibase 10.1103/PhysRevA.97.042330} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo
{pages} {042330} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lin}\ and\ \citenamefont
{Motrunich}(2018)}]{lin2018out}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.-J.}\ \bibnamefont
{Lin}}\ and\ \bibinfo {author} {\bibfnamefont {O.~I.}\ \bibnamefont
{Motrunich}},\ }\href {\doibase 10.1103/PhysRevB.97.144304} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume}
{97}},\ \bibinfo {pages} {144304} (\bibinfo {year} {2018})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Pappalardi}\ \emph {et~al.}(2018)\citenamefont
{Pappalardi}, \citenamefont {Russomanno}, \citenamefont {\ifmmode
\check{Z}\else \v{Z}\fi{}unkovi\ifmmode~\check{c}\else \v{c}\fi{}},
\citenamefont {Iemini}, \citenamefont {Silva},\ and\ \citenamefont
{Fazio}}]{pappalardi2018scrambling}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Pappalardi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Russomanno}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {\ifmmode
\check{Z}\else \v{Z}\fi{}unkovi\ifmmode~\check{c}\else \v{c}\fi{}}}, \bibinfo
{author} {\bibfnamefont {F.}~\bibnamefont {Iemini}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Silva}}, \ and\ \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Fazio}},\ }\href {\doibase
10.1103/PhysRevB.98.134303} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. B}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {134303}
(\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhang}\ \emph {et~al.}(2019)\citenamefont {Zhang},
\citenamefont {Huang},\ and\ \citenamefont {Chen}}]{zhang2019information}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.-L.}\ \bibnamefont
{Zhang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Huang}}, \ and\
\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Chen}},\ }\href {\doibase
10.1103/PhysRevB.99.014303} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. B}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages} {014303}
(\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yunger~Halpern}(2017)}]{PhysRevA.95.012120}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Yunger~Halpern}},\ }\href {\doibase 10.1103/PhysRevA.95.012120} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{95}},\ \bibinfo {pages} {012120} (\bibinfo {year} {2017})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Eisert}\ \emph {et~al.}(2015)\citenamefont {Eisert},
\citenamefont {Friesdorf},\ and\ \citenamefont
{Gogolin}}]{eisert2015quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Eisert}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Friesdorf}}, \
and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gogolin}},\ }\href
{https://www.nature.com/articles/nphys3215} {\bibfield {journal} {\bibinfo
{journal} {Nat. Phys.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages}
{124} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bohrdt}\ \emph {et~al.}(2017)\citenamefont {Bohrdt},
\citenamefont {Mendl}, \citenamefont {Endres},\ and\ \citenamefont
{Knap}}]{Bohrdt_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Bohrdt}}, \bibinfo {author} {\bibfnamefont {C.~B.}\ \bibnamefont {Mendl}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Endres}}, \ and\ \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Knap}},\ }\href {\doibase
10.1088/1367-2630/aa719b} {\bibfield {journal} {\bibinfo {journal} {New J.
Phys.}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {063001}
(\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Swingle}(2018)}]{swingle2018unscrambling}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Swingle}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nat.
Phys.}\ }\textbf {\bibinfo {volume} {14}} (\bibinfo {year}
{2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Campisi}\ and\ \citenamefont
{Goold}(2017)}]{PhysRevE.95.062127}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Campisi}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Goold}},\ }\href {\doibase 10.1103/PhysRevE.95.062127} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {95}},\
\bibinfo {pages} {062127} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Grozdanov}\ \emph {et~al.}(2018)\citenamefont
{Grozdanov}, \citenamefont {Schalm},\ and\ \citenamefont
{Scopelliti}}]{PhysRevLett.120.231601}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}\
\bibnamefont {Grozdanov}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Schalm}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Scopelliti}},\ }\href {\doibase 10.1103/PhysRevLett.120.231601} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {120}},\ \bibinfo {pages} {231601} (\bibinfo {year}
{2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mag{\'a}n}(2018)}]{magan2018black}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Mag{\'a}n}},\ }\href
{https://link.springer.com/article/10.1007/JHEP09(2018)043#citeas} {\bibfield
{journal} {\bibinfo {journal} {J. High Energy Phys.}\ }\textbf {\bibinfo
{volume} {2018}},\ \bibinfo {pages} {43} (\bibinfo {year}
{2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Heyl}\ \emph {et~al.}(2018)\citenamefont {Heyl},
\citenamefont {Pollmann},\ and\ \citenamefont
{D{\'o}ra}}]{heyl2018detecting}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Heyl}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Pollmann}}, \
and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {D{\'o}ra}},\ }\href
{\doibase 10.1103/PhysRevLett.121.016801} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {121}},\ \bibinfo
{pages} {016801} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Da\ifmmode~\breve{g}\else \u{g}\fi{}}\ \emph
{et~al.}(2019)\citenamefont {Da\ifmmode~\breve{g}\else \u{g}\fi{}},
\citenamefont {Sun},\ and\ \citenamefont {Duan}}]{PhysRevLett.123.140602}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~B.}\ \bibnamefont
{Dag}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Sun}}, \ and\ \bibinfo {author} {\bibfnamefont {L.-M.}\
\bibnamefont {Duan}},\ }\href {\doibase 10.1103/PhysRevLett.123.140602}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {123}},\ \bibinfo {pages} {140602} (\bibinfo {year}
{2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sun}\ \emph {et~al.}(2018)\citenamefont {Sun},
\citenamefont {Cai}, \citenamefont {Tang}, \citenamefont {Hu},\ and\
\citenamefont {Fan}}]{sun2018out}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.-H.}\ \bibnamefont
{Sun}}, \bibinfo {author} {\bibfnamefont {J.-Q.}\ \bibnamefont {Cai}},
\bibinfo {author} {\bibfnamefont {Q.-C.}\ \bibnamefont {Tang}}, \bibinfo
{author} {\bibfnamefont {Y.}~\bibnamefont {Hu}}, \ and\ \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Fan}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {arXiv preprint arXiv:1811.11191}\ } (\bibinfo
{year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wei}\ \emph {et~al.}(2019)\citenamefont {Wei},
\citenamefont {Sun},\ and\ \citenamefont {Hwang}}]{PhysRevB.100.195107}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.-B.}\ \bibnamefont
{Wei}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Sun}}, \ and\
\bibinfo {author} {\bibfnamefont {M.-J.}\ \bibnamefont {Hwang}},\ }\href
{\doibase 10.1103/PhysRevB.100.195107} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo
{pages} {195107} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Karrasch}\ and\ \citenamefont
{Schuricht}(2013)}]{PhysRevB.87.195104}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Karrasch}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Schuricht}},\ }\href {\doibase 10.1103/PhysRevB.87.195104} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume}
{87}},\ \bibinfo {pages} {195104} (\bibinfo {year} {2013})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Selke}(1988)}]{SELKE1988213}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{Selke}},\ }\href {\doibase https://doi.org/10.1016/0370-1573(88)90140-8}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rep.}\ }\textbf {\bibinfo
{volume} {170}},\ \bibinfo {pages} {213 } (\bibinfo {year}
{1988})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chakrabarti}\ \emph {et~al.}(2008)\citenamefont
{Chakrabarti}, \citenamefont {Dutta},\ and\ \citenamefont
{Sen}}]{chakrabarti2008quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~K.}\ \bibnamefont
{Chakrabarti}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dutta}},
\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Sen}},\
}\href@noop {} {\emph {\bibinfo {title} {Quantum Ising phases and transitions
in transverse Ising models}}},\ Vol.~\bibinfo {volume} {41}\ (\bibinfo
{publisher} {Springer Science \& Business Media},\ \bibinfo {year}
{2008})\BibitemShut {NoStop} \bibitem [{\citenamefont {Ruj\'an}(1981)}]{PhysRevB.24.6620}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Ruj\'an}},\ }\href {\doibase 10.1103/PhysRevB.24.6620} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {24}},\
\bibinfo {pages} {6620} (\bibinfo {year} {1981})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Peschel}\ and\ \citenamefont
{Emery}(1981)}]{Peschel1981}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Peschel}}\ and\ \bibinfo {author} {\bibfnamefont {V.~J.}\ \bibnamefont
{Emery}},\ }\href {\doibase 10.1007/BF01297524} {\bibfield {journal}
{\bibinfo {journal} {Z. Phys. B}\ }\textbf {\bibinfo {volume} {43}},\
\bibinfo {pages} {241} (\bibinfo {year} {1981})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Beccaria}\ \emph {et~al.}(2006)\citenamefont
{Beccaria}, \citenamefont {Campostrini},\ and\ \citenamefont
{Feo}}]{PhysRevB.73.052402}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Beccaria}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Campostrini}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Feo}},\ }\href {\doibase 10.1103/PhysRevB.73.052402} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {73}},\
\bibinfo {pages} {052402} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Khaneja}\ \emph {et~al.}(2005)\citenamefont
{Khaneja}, \citenamefont {Reiss}, \citenamefont {Kehlet}, \citenamefont
{Schulte-Herbr{\"u}ggen},\ and\ \citenamefont {Glaser}}]{khaneja2005optimal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Khaneja}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Reiss}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Kehlet}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Schulte-Herbr{\"u}ggen}}, \ and\
\bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {Glaser}},\ }\href
{https://www.sciencedirect.com/science/article/pii/S1090780704003696}
{\bibfield {journal} {\bibinfo {journal} {J. Magn. Reson.}\ }\textbf
{\bibinfo {volume} {172}},\ \bibinfo {pages} {296} (\bibinfo {year}
{2005})}\BibitemShut {NoStop} \bibitem [{Note1()}]{Note1}
\BibitemOpen
\bibinfo {note} {This result is to some extent different in comparison to the
theoretical prediction in \cite {heyl2018detecting}, which states that
$F_\protect \text {R}(t)$ will reach zero. This discrepancy is mainly due to
the small size of the simulated system; see supplemental material~\cite
{SupplementalMaterial}.}\BibitemShut {Stop} \bibitem [{Sup()}]{SupplementalMaterial}
\BibitemOpen
\href@noop {} {}\bibinfo {note} {See Supplemental Material for more
details.}\BibitemShut {Stop} \end{thebibliography}
\appendix
\section{Appendix A. Experimental details}
\subsection{Initialization} The main experimental procedure is illustrated in Fig. 1(a) of the main text. Firstly, starting from the thermal equilibrium state $\rho_\text{eq}\simeq\frac{1}{N}(\bm{I}-\sum_{i=1}^{4}\alpha_i\sigma_i^z)$ with $\alpha_i=\omega_i/k_\text{B}T$,
we initialize the system to a pseudo-pure state (PPS)~\cite{cory1997ensemble} \begin{equation}\label{Eq1}
\rho_0=\frac{1-\epsilon}{16}\bm{I}+\epsilon \vert 0000\rangle \langle 0000\vert \end{equation}
using the line-selective method~\cite{peng2001preparation}, where $\bm{I}$ is the $16\times16$ identity matrix and $\epsilon\sim10^{-5}$ is the polarization of the nuclear spin system. The PPS behaves in the same way as the pure state $\vert 0000\rangle$, up to a scale factor $\epsilon$ on the readout signal. The pulse sequence of the PPS preparation includes two line-selective pulses together with two gradient fields to remove the undesired quantum coherence. The first selective pulse saturates the populations of all computational basis states except the first one, and the second transfers the zero-order quantum coherence to the positions of multi-order coherence. Both line-selective pulses are realized using shaped pulses optimized by the GRAPE algorithm~\cite{khaneja2005optimal}, whose durations are $25$ ms and $20$ ms, respectively.
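As a small numerical illustration of why the PPS mimics the pure state for the measurements performed here: for any traceless observable $O$, such as $\sigma_1^z$, the identity part of $\rho_0$ does not contribute, so $\mathrm{Tr}(\rho_0 O)=\epsilon\langle 0000\vert O\vert 0000\rangle$. The value of $\epsilon$ in the Python sketch below is only illustrative.
\begin{verbatim}
import numpy as np
from functools import reduce

sz = np.array([[1, 0], [0, -1]], dtype=complex)
site = lambda op, i, n=4: reduce(np.kron, [op if j == i else np.eye(2) for j in range(n)])

eps = 1e-5                                        # illustrative polarization
ket = np.zeros(16, dtype=complex); ket[0] = 1.0   # |0000>
rho0 = (1 - eps) / 16 * np.eye(16) + eps * np.outer(ket, ket.conj())

O = site(sz, 0)                                   # traceless observable sigma_1^z
print(np.trace(rho0 @ O).real, eps * np.vdot(ket, O @ ket).real)   # both equal eps
\end{verbatim}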
\subsection{Experimental measurement} \emph{Ensemble readout.}-- The values of both the OTOC and the autocorrelation in Fig. 2 of the main text are experimentally obtained by measuring the expectation value of $\sigma_1^z$ of the final states. Because the readout of an NMR experiment is an average over the nuclear spin ensemble, this expectation value can be acquired in a single experiment by measuring the magnetization along the $x$-direction after applying a $\pi/2$ single-spin rotation around the $y$-axis on the first nucleus.
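This readout relies on the single-spin identity that a $\pi/2$ rotation about the $y$-axis maps a subsequent $\sigma_x$ measurement onto $\sigma_z$ of the original state; a short numerical check (with the sign convention $R_y(\theta)=e^{-i\theta\sigma_y/2}$, which is an assumption of this sketch):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

R = expm(-1j * np.pi / 4 * sy)               # R_y(pi/2)
# Tr(sx R rho R^dag) = Tr((R^dag sx R) rho) = Tr(sz rho) for every state rho
print(np.allclose(R.conj().T @ sx @ R, sz))  # True
\end{verbatim}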
\begin{figure}
\caption{The quantum circuit used to measure the autocorrelation $\chi(t)$ in the $4$-spin one-dimensional Ising models
following a quench from the fully polarized state $\vert\psi_0\rangle=\left| \uparrow \uparrow \uparrow \uparrow \right\rangle$. }
\label{FigS1}
\end{figure} \emph{Measurement of autocorrelation.}-- The quantum circuit to measure the autocorrelation is different from that of the OTOC. In the quantum quench with initial state $\vert\psi_0\rangle=\vert0000\rangle$, the autocorrelation can be written as \begin{equation}\label{Eq2}
\chi(t)=\langle\Psi(t)\vert \sigma_1^z\vert \Psi(t)\rangle, \end{equation} where $\vert\Psi(t)\rangle=e^{-iHt}\vert\psi_0\rangle$. Discretizing the evolution time in the same way as in the main text, the quantum circuit of the measurement is shown in Fig.~\ref{FigS1}; the concrete forms of $H_\text{x}$ and $H_\text{zz}$ can be found in the main text. To reduce the experimental errors, we implement the unitary evolution $e^{-iHt}$ by a GRAPE pulse with a time length of $40$ ms.
\section{Appendix B. Analyses of experimental Results}
\subsection{The influence of system size} \begin{figure}
\caption{ The numerically simulated $F_\text{R}(t)$ following a quench from the fully polarized
state is shown for the two Ising models with different spin numbers.
The number of spins is chosen as $N=4$, $8$ and $12$, respectively.
Time evolution of $F_\text{R}$ following quenches in the TFIC model from $g=0$ (a) to $g=0.5$ and (b) to $g=1.5$;
Time evolution of $F_\text{R}$ following quenches in the ANNNI model from $g=0$ (c) to $g=0.5$ and (d) to $g=2.0$. }
\label{FigS2}
\end{figure}
To analyze the difference between the experimental data in Fig. 2 of the main text and the theoretically predicted results, we numerically simulate the time dependence of $F_\text{R}$ for different system sizes, as shown in Fig.~\ref{FigS2}. The simulation results for both the TFIC and ANNNI models reveal that the fluctuation of $F_\text{R}(t)$ in the second quench is suppressed as the system size increases. \subsection{Error analysis}
We use the average absolute deviation $\delta=\frac{1}{K}\sum_{i=1}^{K}\vert x^i_\text{exp}-x^i_\text{th}\vert$ to estimate the errors in Fig. 2 and Fig. 3 of the main text; the average errors are $0.081$ and $0.018$, respectively. Here $K$ is the number of data points, and $x^i_\text{exp}$ and $x^i_\text{th}$ represent the $i$-th experimental and theoretical values. The error of $\bar{F}_\text{R}$ is much smaller than that of $F_\text{R}$, mainly because the random error is reduced by averaging over multiple measurements. These experimental errors may be caused by the imperfection of the initial state, the inaccuracy of the GRAPE pulses, the effect of decoherence during the experimental time, and the sampling error. Since the duration of the quantum circuits in the experiments is much shorter than the relaxation time, the effect of decoherence can be ignored. Besides, the fidelities of the GRAPE pulses are all above $99.5\%$, which results in an error of about $0.3\%$. In the error analysis, we therefore mainly consider the effect of the imperfect initial states and the readout error. The imperfection of the initial state, with a fidelity of $95\%$, adds an average error of $2.5\%$ and $3.3\%$ to the experimental results of the TFIC and ANNNI models, respectively. The readout error is caused by the white noise of the NMR spectra, which is about $2\%$. \begin{figure}
\caption{The experimental quantum circuits for the measurements in the equilibrium dynamics of the $3$-spin Ising models, used to probe the equilibrium quantum phase transition with a four-qubit quantum simulator.
(a) The circuit to measure the real part of the OTOC, $F_\text{R}(t)$.
(b) The circuit to measure the real part of the autocorrelation, $\chi_\text{R}(t)$. }
\label{FigS3}
\end{figure}
\begin{figure*}
\caption{The time dependence of $F_\text{R}$ and $\chi_\text{R}$ of the equilibrium quantum dynamics for the $3$-spin (a) TFIC and (b) ANNNI models.
The upper panels are the time evolutions of $F_\text{R}$, and the lower panels are the time evolutions of $\chi _\text{R}$. Stars are experimental results, and the solid lines are numerical simulation results with the experimental parameters.}
\label{FigS4}
\end{figure*}
\begin{figure}
\caption{The long-time averaged OTOC as a function of $g$ for the (a) TFIC and (b) ANNNI models.
The dots are the experimental data and the solid lines are the numerical simulation results with the same parameters as the experiment; the red solid lines are the numerical simulation results for $8$-spin Ising models.
The red dots mark the theoretical quantum critical points.}
\label{FigS5}
\end{figure}
\section{Appendix C. Observation of quantum phase transition via OTOC in equilibrium dynamics} \subsection{Experimental Implementation} We also experimentally demonstrate that the OTOC of the equilibrium dynamics can be used to detect quantum phase transitions and the critical points in both integrable and non-integrable quantum systems. The initial state of the equilibrium dynamics is the ground state of the Hamiltonian $H$, i.e., $\vert\psi_0\rangle=\vert\psi_g\rangle$. As in the quench-dynamics experiments of the main text, we investigate the TFIC and ANNNI models in the equilibrium case. Compared with the quench experiment, the measurement of the OTOCs in the equilibrium case is more complicated. First, we have to prepare the ground state of the Hamiltonian $H$ for different $g$, which is much harder than preparing the fully polarized state. Second, since $\vert \psi_g\rangle$ is not an eigenstate of the local operator $\sigma_1^z$, we cannot rewrite $F(t)$ and $\chi(t)$ as in the quantum quench case, so they cannot be measured with the quantum circuit shown in Fig. 1(c) of the main text or with the quantum circuit in Fig.~\ref{FigS1}. In the following, we explain the experimental details and show the experimental results for the equilibrium case.
\textit{Quantum circuits}.-- We write the OTOC in the equilibrium dynamics in the form $F(t)=\langle\psi_1(t)\vert \psi_2(t)\rangle$, with $\vert\psi_1(t)\rangle=\sigma_1^z\sigma_1^z(t)\vert\psi_g\rangle$ and $\vert\psi_2(t)\rangle=\sigma_1^z(t)\sigma_1^z\vert\psi_g\rangle$. Such a state overlap can be measured by introducing an ancillary qubit. We use the same $4$-spin nuclear magnetic system as the quantum simulator and study the OTOC behavior of the $3$-spin Ising models, with the remaining spin serving as the ancillary qubit.
The quantum system is first initialized into the product state $\vert0\rangle\otimes\vert\psi_g\rangle$. Then a Hadamard gate is applied to the ancillary qubit so as to obtain the state $\frac{1}{\sqrt{2}}(\vert0\rangle+\vert1\rangle)\otimes \vert\psi_g\rangle$. After that, the system is evolved with two controlled gates of the form $U_1=\vert1\rangle\langle1\vert\otimes\sigma_1^z\sigma_1^z(t)$ and $U_2=\vert0\rangle\langle0\vert\otimes\sigma_1^z(t)\sigma_1^z$. The real part of the OTOC can then be read out by measuring the expectation value of $\sigma_x^1$. The quantum circuit is shown in Fig.~\ref{FigS3}(a).
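A direct numerical check that this interferometric scheme returns $\mathrm{Re}\,F(t)$ is sketched below, with the controlled operations written so that they act as the identity on the complementary ancilla sector; the $3$-spin TFIC parameters $g$ and $t$ are chosen only for illustration.
\begin{verbatim}
import numpy as np
from functools import reduce
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
site = lambda op, i, n: reduce(np.kron, [op if j == i else np.eye(2) for j in range(n)])

n, g, t = 3, 0.5, 1.0                                 # illustrative parameters
H = (-g * sum(site(sx, i, n) for i in range(n))
     - sum(site(sz, i, n) @ site(sz, (i + 1) % n, n) for i in range(n)))
psi_g = np.linalg.eigh(H)[1][:, 0]                    # ground state of H
U = expm(-1j * H * t)
Z1 = site(sz, 0, n)
Zt = U.conj().T @ Z1 @ U                              # sigma_1^z(t)
A, B = Z1 @ Zt, Zt @ Z1                               # |psi_1> = A|psi_g>, |psi_2> = B|psi_g>

P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])     # ancilla projectors
Had = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Isys = np.eye(2**n)
C1 = np.kron(P0, Isys) + np.kron(P1, A)               # apply A when the ancilla is |1>
C2 = np.kron(P0, B) + np.kron(P1, Isys)               # apply B when the ancilla is |0>

state = C2 @ C1 @ np.kron(Had, Isys) @ np.kron([1.0, 0.0], psi_g)
ancilla_x = np.vdot(state, np.kron(sx, Isys) @ state).real
print(ancilla_x, np.vdot(A @ psi_g, B @ psi_g).real)  # the two values agree
\end{verbatim}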
We also experimentally study the time evolution of the autocorrelation $\chi(t)=\langle\sigma_1^z(t)\sigma_1^z \rangle$ in the equilibrium dynamics. The quantum circuit is similar to that of the OTOC, as shown in Fig.~\ref{FigS3}(b). The only difference is in the two controlled gates, which take the form $U_1=\vert1\rangle\langle1\vert\otimes\sigma_1^z(t)$ and $U_2=\vert0\rangle\langle0\vert\otimes\sigma_1^z$, respectively.
\textit{Preparation of the ground states}.-- The initial state of the equilibrium dynamics is the ground state of the Hamiltonian $H$, i.e., $\vert\psi_0\rangle=\vert\psi_g\rangle$. It is usually difficult to prepare the ground state in experiment, because the concrete form of $\vert\psi_g\rangle$ depends on the value of $g$ and is in general highly entangled. Fortunately, for a small system the ground state can be solved exactly, and we can search for a unitary evolution $U_g$ satisfying $\vert\psi_g\rangle=U_g\vert00\cdots0\rangle$, such that the ground state $\vert\psi_g\rangle$ can be prepared directly from $\vert00\cdots0\rangle$. For a large system, one may have to design an adiabatic passage to obtain the ground state at a particular $g$.
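For a small system, one simple (purely illustrative, not pulse-level) way to obtain such a $U_g$ is to diagonalize $H$ numerically and complete $\vert\psi_g\rangle$ to an orthonormal basis, e.g. via a QR decomposition; the Python sketch below assumes $\langle 0\cdots0\vert\psi_g\rangle\neq 0$ (otherwise one would first permute the computational basis).
\begin{verbatim}
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
site = lambda op, i, n: reduce(np.kron, [op if j == i else np.eye(2) for j in range(n)])

n, g = 3, 0.5                                         # illustrative 3-spin TFIC
H = (-g * sum(site(sx, i, n) for i in range(n))
     - sum(site(sz, i, n) @ site(sz, (i + 1) % n, n) for i in range(n)))
psi_g = np.linalg.eigh(H)[1][:, 0]                    # ground state

# place |psi_g> as the first column and orthonormalize the remaining columns
M = np.eye(2**n, dtype=complex)
M[:, 0] = psi_g
Ug, _ = np.linalg.qr(M)
# QR fixes the first column only up to a phase; remove it so that Ug|0...0> = |psi_g>
Ug[:, 0] *= np.vdot(Ug[:, 0], psi_g)

print(np.allclose(Ug[:, 0], psi_g),
      np.allclose(Ug.conj().T @ Ug, np.eye(2**n)))    # True True
\end{verbatim}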
In principle, all the unitary operations can be realized with a well-designed pulse sequence composed of universal quantum gates, but this always results in large accumulated experimental errors. In the experiments, we instead use a single shaped pulse searched by the GRAPE algorithm to realize the unitary evolution $U_2U_1HU_g$; the time length and the theoretical fidelity of the shaped pulse are $40$ ms and $99.5\%$, respectively.
\subsection{Experimental Results and Discussion} The experimental results of the equilibrium dynamics are shown in Fig.~\ref{FigS4} and Fig.~\ref{FigS5}. Firstly, we observe the time dependence of the OTOC and the autocorrelation in the equilibrium dynamics, with the time-evolving Hamiltonian in different regions, for both the TFIC and ANNNI models; the experimental results are shown in Fig.~\ref{FigS4}. We find that $F_\text{R}(t)$ behaves analogously to the quantum quench case in both Ising models: when the Hamiltonian lies in the ferromagnetic region, $F_\text{R}$ maintains a positive value as time goes by, whereas in the paramagnetic region, $F_\text{R}$ diminishes to a small value near zero. This means that the time dependence of the OTOC can be used to characterize the phase of the Hamiltonian. Besides, the behavior of $\chi(t)$ also changes with the Hamiltonian: when the Hamiltonian is ferromagnetic, $\chi_\text{R}(t)$ oscillates around a positive value, while for the paramagnetic Hamiltonian, $\chi_\text{R}(t)$ diminishes to a negative value and then revives. This illustrates that $\chi(t)$ can be used to differentiate the phases of the Hamiltonian in the equilibrium dynamics, in contrast to the non-equilibrium dynamics.
The experimental long-time averaged OTOCs of the two Ising models are shown in Fig.~\ref{FigS5}: $\bar{F}_\text{R}$ gradually decreases to zero when approaching the corresponding theoretical critical points at $g_c=1.0$ and $g_c=1.6$ for the TFIC and ANNNI models, respectively, and oscillates around a small value near zero in the whole paramagnetic region. However, there are obvious deviations between the theoretical critical points and the experimental ones, which is mainly due to the small size of the Ising models simulated in the experiments. We also numerically calculate $\bar{F}_\text{R}$ in larger Ising systems with $N=8$ spins, shown as the red solid lines. This provides clear evidence that the discrepancy between the experimental data and the theoretical predictions diminishes as the system size grows.
\end{document} |
\begin{document}
\maketitle
\section{Introduction}
The classical connection between dynamical systems and $C^*$-algebras is the crossed product construction which associates a $C^*$-algebra to a ho\-meo\-mor\-phism of a compact metric space. This construction has been generalized stepwise by J. Renault (\cite{Re}), V. Deaconu (\cite{De1}) and C. Anantharaman-Delaroche (\cite{An}) to local ho\-meo\-mor\-phisms and recently also to locally injective surjections by the second named author in \cite{Th1}. The main motivation for the last generalisation was the wish to include the Matsumoto-type $C^*$-algebra of a subshift which was introduced by the first named author in \cite{C}.
In this paper we continue the investigation of the structure of the $C^*$-algebra of a locally injective surjection which was begun in \cite{Th1}. The main goal here is to give necessary and sufficient conditions for the algebras, or at least any simple quotient of them, to be purely infinite; a property they are known to have in many cases. Recall that a simple $C^*$-algebra is said to be purely infinite when all its non-zero hereditary $C^*$-subalgebras contain an infinite projection. Our main result is that a simple quotient of the $C^*$-algebra arising from a locally injective surjection on a compact metric space of finite covering dimension, as in Section 4 of \cite{Th1}, is one of the following kinds: \begin{enumerate} \item[1)] a full matrix algebra $M_n(\mathbb C)$ for some $n \in
\mathbb N$, or \item[2)] the crossed product $C(K) \times_f \mathbb Z$ corresponding
to a minimal homeomorphism $f$ of a compact metric space $K$ of
finite covering dimension, or \item[3)] a unital purely infinite simple $C^*$-algebra. \end{enumerate} In particular, when the algebra itself is simple it must be one of the three types, and in fact purely infinite unless the underlying map is a homeomorphism. Hence the problem of finding necessary and sufficient conditions for the $C^*$-algebra of a locally injective surjection on a compact metric space of finite covering dimension to be both simple and purely infinite has a strikingly straightforward solution: If the algebra is simple (and \cite{Th1} gives necessary and sufficient conditions for this to happen) then it is automatically purely infinite unless the map in question is a homeomorphism. A corollary of this result is that if the $C^*$-algebra of a one-sided subshift is simple, then it is also purely infinite.
On the way to the proof of the main result we study the ideal structure. We find first the gauge invariant ideals, obtaining an insight which combined with methods and results of Katsura (\cite{K}) leads to a list of the primitive ideals. We then identify the maximal ideals among the primitive ones and obtain in this way a description of the simple quotients which we use to obtain the conclusions described above. A fundamental tool all the way is the canonical locally homeomorphic extension discovered in \cite{Th2} which allows us to replace the given locally injective map with a local homeomorphism. It means, however, that much of the structure we investigate gets described in terms of the canonical locally homeomorphic extension, and this is unfortunate since it may not be easy to obtain a satisfactory understanding of it for a given locally injective surjection. Still, it allows us to obtain qualitative conclusions of the type mentioned above.
Besides the $C^*$-algebras of subshifts our results cover of course also the $C^*$-algebras associated to a local homeomorphism by the construction of Renault, Deaconu and Anantharaman-Delaroche, provided the map is surjective and the space is of finite covering dimension. This means that the results have bearing on many classes of $C^*$-algebras which have been associated to various structures, for example the $\lambda$-graph systems of Matsumoto (\cite{Ma}) and the continuous graphs of Deaconu (\cite{De2}).
\emph{Acknowledgement:} This work was supported by the NordForsk Research Network 'Operator Algebras and Dynamics' (grant 11580). The first named author was also supported by the Research Council of Norway through project 191195/V30.
\section{The $C^*$-algebra of a locally injective surjection}
Let $X$ be a compact metric space and $\varphi : X \to X$ a locally injective surjection. Set $$ \Gamma_{\varphi} = \left\{ (x,k,y) \in X \times \mathbb Z \times X :
\ \exists n,m \in \mathbb N, \ k = n -m , \ \varphi^n(x) =
\varphi^m(y)\right\} . $$ This is a groupoid with the set of composable pairs being $$ \Gamma_{\varphi}^{(2)} \ = \ \left\{\left((x,k,y), (x',k',y')\right) \in \Gamma_{\varphi} \times
\Gamma_{\varphi} : \ y = x'\right\}. $$ The multiplication and inversion are given by $$ (x,k,y)(y,k',y') = (x,k+k',y') \ \text{and} \ (x,k,y)^{-1} = (y,-k,x) . $$ Note that the unit space of $\Gamma_{\varphi}$ can be identified with $X$ via the map $x \mapsto (x,0,x)$. To turn $\Gamma_{\varphi}$ into a locally compact topological groupoid, fix $k \in \mathbb Z$. For each $n \in \mathbb N$ such that $n+k \geq 0$, set $$ {\Gamma_{\varphi}}(k,n) = \left\{ \left(x,l, y\right) \in X \times \mathbb
Z \times X: \ l =k, \ \varphi^{k+i}(x) = \varphi^i(y), \ i \geq n \right\} . $$ This is a closed subset of the topological product $X \times \mathbb Z \times X$ and hence a locally compact Hausdorff space in the relative topology. Since $\varphi$ is locally injective $\Gamma_{\varphi}(k,n)$ is an open subset of $\Gamma_{\varphi}(k,n+1)$ and hence the union $$ {\Gamma_{\varphi}}(k) = \bigcup_{n \geq -k} {\Gamma_{\varphi}}(k,n) $$ is a locally compact Hausdorff space in the inductive limit topology. The disjoint union $$ \Gamma_{\varphi} = \bigcup_{k \in \mathbb Z} {\Gamma_{\varphi}}(k) $$ is then a locally compact Hausdorff space in the topology where each ${\Gamma_{\varphi}}(k)$ is an open and closed set. In fact, as is easily verified, $\Gamma_{\varphi}$ is a locally compact groupoid in the sense of \cite{Re} and a semi \'etale groupoid in the sense of \cite{Th1}. The paper \cite{Th1} contains a construction of a $C^*$-algebra from any semi \'etale groupoid, but we give here only a description of the construction for $\Gamma_{\varphi}$.
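Let us note, for completeness, why the multiplication above is well defined. If $(x,k,y),(y,k',y') \in \Gamma_{\varphi}$, choose $n,m,n',m' \in \mathbb N$ with $k=n-m$, $\varphi^n(x)=\varphi^m(y)$, $k'=n'-m'$ and $\varphi^{n'}(y)=\varphi^{m'}(y')$. Then
$$
\varphi^{n+n'}(x) = \varphi^{n'}\left(\varphi^n(x)\right) = \varphi^{n'}\left(\varphi^m(y)\right) = \varphi^{m}\left(\varphi^{n'}(y)\right) = \varphi^{m}\left(\varphi^{m'}(y')\right) = \varphi^{m+m'}(y') ,
$$
and $(n+n') - (m+m') = k+k'$, so that $(x,k+k',y') \in \Gamma_{\varphi}$.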
Consider the space $B_c\left(\Gamma_{\varphi}\right)$ of compactly supported bounded functions on $\Gamma_{\varphi}$. They form a $*$-algebra with respect to the convolution-like product $$ f \star g (x,k,y) = \sum_{z,n+ m = k} f(x,n,z)g(z,m,y) $$ and the involution $$ f^*(x,k,y) = \overline{f(y,-k,x)} . $$ To turn it into a $C^*$-algebra, let $x \in X$ and consider the Hilbert space $H_x$ of square summable functions on $\left\{ (x',k,y')
\in \Gamma_{\varphi} : \ y' = x \right\}$ which carries a
representation $\pi_x$ of the $*$-algebra $B_c\left(\Gamma_{\varphi}\right)$
defined such that \begin{equation}\label{pirep} \left(\pi_x(f)\psi\right)(x',k, x) = \sum_{z, n+m = k} f(x',n,z)\psi(z,m,x) \end{equation} when $\psi \in H_x$. One can then define a $C^*$-algebra $B^*_r\left(\Gamma_{\varphi}\right)$ as the completion of $B_c\left(\Gamma_{\varphi}\right)$ with respect to the norm $$
\left\|f\right\| = \sup_{x \in X} \left\|\pi_x(f)\right\| . $$ The space $C_c\left(\Gamma_{\varphi}\right)$ of continuous and compactly supported functions on $\Gamma_{\varphi}$ generates a $*$-subalgebra $\operatorname{alg}^* \Gamma_{\varphi}$ of $B^*_r\left(\Gamma_{\varphi}\right)$ which, when completed with respect to the above norm, becomes the $C^*$-algebra $C^*_r\left(\Gamma_{\varphi}\right)$ which is our object of study. When $\varphi$ is open, and hence a local homeomorphism, $C_c\left(\Gamma_{\varphi}\right)$ is a $*$-subalgebra of $B_c\left(\Gamma_{\varphi}\right)$ so that $\operatorname{alg}^* \Gamma_{\varphi} = C_c\left(\Gamma_{\varphi}\right)$ and $C^*_r\left(\Gamma_{\varphi}\right)$ is then the completion of $C_c\left(\Gamma_{\varphi}\right)$. In this case $C_r^*\left(\Gamma_{\varphi}\right)$ is the algebra studied by Renault in \cite{Re}, by Deaconu in \cite{De1}, and by Anantharaman-Delaroche in \cite{An}.
The algebra $ C^*_r\left(\Gamma_{\varphi}\right)$ contains several canonical $C^*$-subalgebras which we shall need in our study of its structure. One is the $C^*$-algebra of the open sub-groupoid $$ R_{\varphi} = \Gamma_{\varphi}(0) $$ which is a semi \'etale groupoid (equivalence relation, in fact) in itself. The corresponding $C^*$-algebra $C^*_r\left(R_{\varphi}\right)$ is the $C^*$-subalgebra of $C^*_r\left(\Gamma_{\varphi}\right)$ generated by the continuous and compactly supported functions on $R_{\varphi}$. Equally important are two canonical abelian $C^*$-subalgebras, $D_{\Gamma_{\varphi}}$ and $D_{R_{\varphi}}$. They arise from the fact that the $C^*$-algebra $B(X)$ of bounded functions on $X$ sits canonically inside $B^*_r\left(\Gamma_{\varphi}\right)$, cf. p. 765 of \cite{Th1}, and are then defined as $$ D_{\Gamma_{\varphi}} = C^*_r\left(\Gamma_{\varphi}\right) \cap B(X) $$ and $$ D_{R_{\varphi}} = C^*_r\left(R_{\varphi}\right) \cap B(X), $$ respectively. There are faithful conditional expectations $P_{\Gamma_{\varphi}} : C^*_r\left(\Gamma_{\varphi}\right) \to D_{\Gamma_{\varphi}}$ and $P_{R_{\varphi}} : C^*_r\left(R_{\varphi}\right) \to D_{R_{\varphi}}$, obtained as extensions of the restriction map $\operatorname{alg}^* \Gamma_{\varphi} \to B(X)$ to $C^*_r\left(\Gamma_{\varphi}\right)$ and $C^*_r\left(R_{\varphi}\right)$, respectively. When $\varphi$ is open and hence a local homeomorphism, the two algebras $D_{\Gamma_{\varphi}}$ and $D_{R_{\varphi}}$ are identical and equal to $C(X)$, but in general the inclusion $D_{R_{\varphi}} \subseteq D_{\Gamma_{\varphi}}$ is strict.
Our approach to the study of $C^*_r\left(\Gamma_{\varphi}\right)$ hinges on a construction introduced in \cite{Th2} of a compact Hausdorff space $Y$ and a local homeomorphic surjection $\phi : Y \to Y$ such that $(X,\varphi)$ is a factor of $(Y,\phi)$ and \begin{equation}\label{basiciso} C^*_r\left(\Gamma_{\varphi}\right) \simeq C^*_r\left(\Gamma_{\phi}\right) . \end{equation} Everything we can say about ideals and simple quotients of $C^*_r\left(\Gamma_{\phi}\right)$ will have bearing on $C^*_r\left(\Gamma_{\varphi}\right)$, but while the isomorphism (\ref{basiciso}) is equivariant with respect to the canonical gauge actions (see Section \ref{gaugeac}), it will not in general take $C^*_r\left(R_{\varphi}\right)$ onto $C^*_r\left(R_{\phi}\right)$. This is one reason why we will work with $C^*_r\left(\Gamma_{\varphi}\right)$ whenever possible, instead of using (\ref{basiciso}) as a valid excuse for working with local homeomorphisms only. Another is that it is generally not so easy to get a workable description of $(Y,\phi)$. As in \cite{Th2} we will refer to $(Y,\phi)$ as the \emph{canonical locally homeomorphic extension} of $(X,\varphi)$. The space $Y$ is the Gelfand spectrum of $D_{\Gamma_{\varphi}}$ so when $\varphi$ is already a local homeomorphism itself, the extension is redundant and $(Y,\phi) = (X,\varphi)$.
\section{Ideals in $C^*_r\left(R_{\varphi}\right)$}
Recall from \cite{Th1} that there is a semi \'etale equivalence relation $$ R\left(\varphi^n\right) = \left\{ (x,y) \in X \times X : \varphi^n(x)
= \varphi^n(y) \right\} $$ for each $n \in \mathbb N$. They will be considered as open sub-equivalence relations of $R_{\varphi}$ via the embedding $(x,y) \mapsto (x,0,y) \in \Gamma_{\varphi}(0)$. In this way we get embeddings $C^*_r\left(R\left(\varphi^n\right)\right) \subseteq C^*_r\left(R\left(\varphi^{n+1}\right)\right) \subseteq C^*_r\left(R_{\varphi}\right)$ by Lemma 2.10 of \cite{Th1}, and then \begin{equation}\label{crux} C^*_r\left(R_{\varphi}\right) = \overline{\bigcup_n C^*_r\left(R\left(\varphi^n\right)\right)} , \end{equation} cf. (4.2) of \cite{Th1}. This inductive limit decomposition of $C^*_r\left(R_{\varphi}\right)$ defines in a natural way a similar inductive limit decomposition of $D_{R_{\varphi}}$. Set $$ D_{R\left(\varphi^n\right)} = C^*_r\left(R\left(\varphi^n\right)\right) \cap B(X) . $$
\begin{lemma}\label{AA1}
$D_{R_{\varphi}} =
\overline{\bigcup_{n=1}^{\infty} D_{R\left(\varphi^n\right)}}$. \begin{proof} Since $C^*_r\left(R\left(\varphi^n\right)\right)
\subseteq C^*_r\left(R_{\varphi}\right)$, it follows that $$ D_{R(\varphi^n)} = C^*_r\left(R\left(\varphi^n\right)\right) \cap B(X) \subseteq C^*_r\left(R_{\varphi}\right) \cap B(X) = D_{R_{\varphi}}. $$ Hence \begin{equation}\label{AA2}
\overline{\bigcup_{n=1}^{\infty} D_{R\left(\varphi^n\right)}}
\subseteq D_{R_{\varphi}} . \end{equation} Let $x \in D_{R_{\varphi}}$ and let $\epsilon > 0$. It follows from (\ref{crux}) that there is an $n \in \mathbb N$ and an element $y \in \operatorname{alg}^* R\left(\varphi^n\right)$ such that \begin{equation*}
\left\|x - P_{R_{\varphi}}(y)\right\| \leq \epsilon . \end{equation*} On $\operatorname{alg}^* R_{\varphi}$ the conditional expectation $P_{R_{\varphi}}$ is just the map which restricts functions to $X$ and the same is true for the conditional expectation $P_{R\left(\varphi^n\right)}$ on $\operatorname{alg}^* R\left(\varphi^n\right)$, where $P_{R\left(\varphi^n\right)}$ is the conditional expectation of Lemma 2.8 in \cite{Th1} obtained by considering $R\left(\varphi^n\right)$ as a semi \'etale groupoid in itself. Hence $P_{R_{\varphi}}(y) = P_{R\left(\varphi^n\right)}(y) \in D_{R\left(\varphi^n\right)}$. It follows that we have equality in (\ref{AA2}). \end{proof} \end{lemma}
In the following, by an ideal of a $C^*$-algebra we will always mean a closed and two-sided ideal. The next lemma is well known and crucial for the sequel.
\begin{lemma}\label{AA3} Let $Y$ be a compact Hausdorff space, $M_n$
the $C^*$-algebra of $n$-by-$n$ matrices for some natural number $n \in
\mathbb N$ and $p$ a projection in $C(Y,M_n)$. Set $A = pC(Y,M_n)p$
and let $C_A$ be the center of $A$.
For every ideal $I$ in $A$ there
is an approximate unit for $I$ in $I \cap C_A$. \end{lemma}
\begin{lemma}\label{A7} Let $I,J \subseteq
C^*_r\left(R_{\varphi}\right)$ be two ideals. Then $$ I \cap D_{R_{\varphi}} \subseteq J \cap D_{R_{\varphi}} \ \Rightarrow \ I \subseteq J. $$ \begin{proof} If $I \cap D_{R_{\varphi}} \subseteq J \cap
D_{R_{\varphi}}$ it
follows that $I \cap D_{R\left(\varphi^n\right)} \subseteq J \cap D_{R\left(\varphi^n\right)}$ for all $n$. Note that the center of $C^*_r\left(R\left(\varphi^n\right)\right)$ is contained in $D_{R\left(\varphi^n\right)}$ since $D_{R\left(\varphi^n\right)}$ is maximal abelian in $C^*_r\left(R\left(\varphi^n\right)\right)$ by Lemma 2.19 of \cite{Th1}. By using Corollary 3.3 of \cite{Th1} it follows therefore from Lemma \ref{AA3} that for each $n$ there is a sequence $\{x_j\}$ in $ I \cap D_{R\left(\varphi^n\right)}$ such that $\lim_{j \to \infty} x_ja = a$ for all $a \in I \cap C^*_r\left(R\left(\varphi^n\right)\right)$. Since $x_j \in J \cap D_{R\left(\varphi^n\right)}$ this implies that $I \cap C^*_r\left(R\left(\varphi^n\right)\right) \subseteq J \cap C^*_r\left(R\left(\varphi^n\right)\right)$ for all $n$. Combining with (\ref{crux}) we find that $$ I = \overline{\bigcup_n I \cap
C^*_r\left(R\left(\varphi^n\right)\right)} \subseteq \overline{\bigcup_n J \cap
C^*_r\left(R\left(\varphi^n\right)\right)} = J . $$ \end{proof} \end{lemma}
Recall from \cite{Th1} that an ideal $J$ in $D_{R_{\varphi}}$ is said to be \emph{$R_{\varphi}$-invariant} when $n^*Jn \subseteq J$ for all $n \in \operatorname{alg}^* R_{\varphi}$ supported in a bisection of $R_{\varphi}$. For every $R_{\varphi}$-invariant ideal $J$ in $D_{R_{\varphi}}$, set $$ \widehat{J} = \left\{ a \in C^*_r\left(R_{\varphi}\right) : \
P_{R_{\varphi}}(a^*a) \in J \right\} . $$
\begin{thm}\label{A4} The map $J \mapsto \widehat{J}$ is a bijection
between the $R_{\varphi}$-invariant ideals in
$D_{R_{\varphi}}$ and the ideals in $C^*_r\left(R_{\varphi}\right)$. The inverse is given by the map $I \mapsto I \cap D_{R_{\varphi}}$. \begin{proof} It follows from Lemma 2.13 of \cite{Th1} that $\widehat{J} \cap
D_{R_{\varphi}} = J$ for any $R_{\varphi}$-invariant ideal in
$D_{R_{\varphi}}$. It suffices therefore to show that every ideal in
$C^*_r\left(R_{\varphi}\right)$ is of the form $\widehat{J}$ for some
$R_{\varphi}$-invariant ideal $J$ in
$D_{R_{\varphi}}$.
Let $I$ be an ideal in
$C^*_r\left(R_{\varphi}\right)$. Set $J = I \cap
D_{R_{\varphi}}$, which is clearly an $R_{\varphi}$-invariant ideal in
$D_{R_{\varphi}}$. Since $\widehat{J} \cap D_{R_{\varphi}} = J
= I \cap D_{R_{\varphi}}$ by Lemma 2.13 of \cite{Th1},
we conclude from Lemma \ref{A7} that $\widehat{J} = I$.
\end{proof} \end{thm}
A subset $A \subseteq Y$ is said to be \emph{$\phi$-saturated} when $\phi^{-k}\left(\phi^k(A)\right) = A$ for all $k \in \mathbb N$.
\begin{cor}\label{A5} (Cf. Proposition II.4.6 of \cite{Re}) The map $$ L \mapsto I_L = \left\{ a \in C^*_r\left(R_{\phi}\right) : \
P_{R_{\phi}}(a^*a)(x) = 0\ \forall
x \in L \right\} $$ is a bijection from the non-empty closed $\phi$-saturated subsets $L$ of $Y$ onto the set of proper ideals in $C^*_r\left(R_{\phi}\right)$. \begin{proof} Since $\phi$ is a local homeomorphism, we have that $D_{R_{\phi}} =
C(Y)$ so the corollary follows from Theorem \ref{A4} by use of the
well-known bijection between ideals in $C(Y)$ and closed subsets of
$Y$. The only thing to show is that an open subset $U$ of $Y$ is
$\phi$-saturated if and only if the ideal $C_0(U)$ of $C(Y)$ is
$R_{\phi}$-invariant, which is straightforward, cf. the proof of
Corollary 2.18 in \cite{Th1}. \end{proof} \end{cor}
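As a trivial consistency check, not needed in what follows: when $\phi$ is a homeomorphism, each $R\left(\phi^n\right)$ is the diagonal, so $C^*_r\left(R_{\phi}\right) = C(Y)$ and every subset of $Y$ is $\phi$-saturated, and Corollary \ref{A5} reduces to the familiar correspondence between the non-empty closed subsets of $Y$ and the proper ideals in $C(Y)$.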
The next issue will be to determine which closed $\phi$-saturated subsets of $Y$ correspond to primitive ideals. For a point $x \in Y$ we define the \emph{$\phi$-saturation} of $x$ to be the set $$ H(x) = \bigcup_{n=1}^{\infty} \left\{ y \in Y : \ \phi^n(y) =
\phi^n(x) \right\} . $$ The closure $\overline{H(x)}$ of $H(x)$ will be referred to as the \emph{closed $\phi$-saturation} of $x$. Observe that both $H(x)$ and $\overline{H(x)}$ are $\phi$-saturated.
\begin{prop}\label{A17} Let $L \subseteq Y$ be a non-empty closed
$\phi$-saturated subset. The ideal $I_L$ is primitive if and only if
$L$ is the closed $\phi$-saturation of a point in $Y$.
\begin{proof} Since $C^*_r\left(R_{\phi}\right)$ is separable, an ideal
is primitive if and only if it is prime, cf. \cite{Pe}. We show that $I_L$ is prime
if and only if $L = \overline{H(x)}$ for some $x \in Y$.
Assume
first that $L = \overline{H(x)}$ and consider two ideals, $I_1$ and
$I_2$, in $C^*_r\left(R_{\phi}\right)$ such that $I_1I_2 \subseteq
I_{\overline{H(x)}}$. By Corollary \ref{A5} there are closed
$\phi$-saturated subsets, $L_1$ and $L_2$, in $Y$ such that $I_j =
I_{L_j}$, $j =1,2$. It follows from Corollary \ref{A5} that $\overline{H(x)} \subseteq L_1
\cup L_2$. At least one of the $L_j$'s must contain $x$, say $x \in
L_1$. Since $L_1$ is $\phi$-saturated and closed it follows that
$\overline{H(x)} \subseteq L_1$, and hence that $I_1 \subseteq
I_{\overline{H(x)}}$. Thus $I_{\overline{H(x)}}$ is prime.
Assume next that $I_L$ is prime. Let $\{U_k\}_{k=0}^{\infty}$ be a base for the topology of $L$ consisting of non-empty sets. We will construct sequences $\{B_k\}_{k=0}^{\infty}$ of compact non-empty neighbourhoods in $L$ and non-negative integers $\left\{n_k\right\}_{k=0}^{\infty}$ such that \begin{enumerate} \item[i)] $B_k \subseteq B_{k-1}$ for $ k \geq 1$, and \item[ii)] $\phi^{n_{k}}\left(B_{k}\right) \subseteq
\phi^{n_{k}}\left(U_{k}\right)$ for $ k \geq 0$. \end{enumerate} We start the induction by letting $B_0$ be any compact non-empty neighbourhood in $U_0$ and $n_0 = 0$. Assume then that $B_0,B_1,B_2,\dots , B_m$ and $n_0,n_1, \dots, n_m$ have been constructed. Choose a non-empty open subset $V_{m+1} \subseteq B_{m}$. Note that both of $$ L \backslash \bigcup_l \phi^{-l}\left(\phi^l(V_{m+1})\right) $$ and $$ L \backslash \bigcup_l \phi^{-l}\left(\phi^l(U_{m+1})\right) $$ are closed $\phi$-saturated subsets of $L$, and hence of $Y$, and none of them is all of $L$. It follows from Corollary \ref{A5} and primeness of $I_L$ that $L$ is not contained in their union, which in turn implies that $$ \phi^{-n_{m+1}}\left(\phi^{n_{m+1}}(V_{m+1})\right) \cap
\phi^{-n_{m+1}}\left(\phi^{n_{m+1}}(U_{m+1})\right) $$ is non-empty for some $n_{m+1} \in \mathbb N$. There is therefore a point $z \in V_{m+1}$ such that $\phi^{n_{m+1}}(z) \in \phi^{n_{m+1}}\left(U_{m+1}\right)$, and therefore also a compact non-empty neighbourhood $B_{m+1} \subseteq V_{m+1}$ of $z$ such that $\phi^{n_{m+1}}(B_{m+1}) \subseteq \phi^{n_{m+1}}\left(U_{m+1}\right)$. This completes the induction. Let $x \in \bigcap_m B_m$. By construction every $U_k$ contains an element from $H(x)$. It follows that $\overline{H(x)} = L$. \end{proof} \end{prop}
\section{On the ideals of $C^*_r\left(\Gamma_{\varphi}\right)$}\label{gaugeac}
The $C^*$-algebra $C^*_r\left(\Gamma_{\varphi}\right)$ carries a canonical circle action $\beta$, called the \emph{gauge action}, defined such that $$ \beta_{\lambda}(f)(x,k,y) = \lambda^k f(x,k,y) $$ when $f \in C_c\left(\Gamma_{\varphi}\right)$ and $\lambda \in \mathbb T$, cf. \cite{Th1}. As the next step we describe in this section the gauge-invariant ideals in $C^*_r\left(\Gamma_{\varphi}\right)$.
Consider first the function $m : X \to \mathbb N$ defined such that \begin{equation}\label{m-funk} m(x) = \# \left\{ y \in X : \ \varphi(y) = \varphi(x) \right\} . \end{equation} As shown in \cite{Th1}, $m \in D_{R(\varphi)} \subseteq D_{R_{\varphi}}$. Define a function $V_{\varphi} : \Gamma_{\varphi} \to \mathbb C$ such that $$ V_{\varphi} (x,k,y) = \begin{cases} m(x)^{-\frac{1}{2}} & \ \text{when} \ k = 1 \
\text{and} \ y = \varphi(x) \\ 0 & \ \text{otherwise.} \end{cases} $$ Then $V_{\varphi}$ is the product $V_{\varphi} = m^{-\frac{1}{2}} 1_{\Gamma_{\varphi}(1,0)}$ in $C^*_r\left(\Gamma_{\varphi}\right)$ and in fact an isometry which induces an endomorphism $\widehat{\varphi}$ of $C^*_r\left(R_{\varphi}\right)$, viz. $$ \widehat{\varphi}(a) = V_{\varphi}aV_{\varphi}^*$$ Together with $C^*_r\left(R_{\varphi}\right)$ the isometry $V_{\varphi}$ generates $C^*_r\left(\Gamma_{\varphi}\right)$ which in this way becomes a crossed product $C^*_r\left(R_{\varphi}\right) \times_{\widehat{\varphi}} \mathbb N$ in the sense of Stacey, cf. \cite{St} and \cite{Th1}; in particular Theorem 4.6 in \cite{Th1}.
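To indicate why $V_{\varphi}$ is an isometry we sketch the computation, using the convolution product introduced above: for $x,y \in X$, $$ V_{\varphi}^* \star V_{\varphi}\, (x,0,y) = \sum_{z \in \varphi^{-1}(x) \cap \varphi^{-1}(y)} m(z)^{-1} = \begin{cases} 1 & \ \text{when} \ x = y \\ 0 & \ \text{otherwise,} \end{cases} $$ because $m(z) = \# \varphi^{-1}(x)$ for every $z \in \varphi^{-1}(x)$; thus $V_{\varphi}^*V_{\varphi} = 1$. A similar computation gives $V_{\varphi} \star V_{\varphi}^*\,(x,0,y) = m(x)^{-\frac{1}{2}}m(y)^{-\frac{1}{2}}$ when $\varphi(x) = \varphi(y)$ and $0$ otherwise, so that $V_{\varphi}V_{\varphi}^* = \widehat{\varphi}(1)$ is a projection supported on $R(\varphi) \subseteq R_{\varphi}$.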
\subsection{Gauge invariant ideals }
Let $C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T}$ denote the fixed point algebra of the gauge action.
\begin{lemma}\label{kalgs} For each $k \in \mathbb N$ we have that ${V_{\varphi}^*}^k
C^*_r\left(R_{\varphi}\right)V_{\varphi}^k $ is a $C^*$-subalgebra of
$C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T}$, \begin{equation}\label{bkr0} {V_{\varphi}^*}^kC^*_r\left(R_{\varphi}\right)V_{\varphi}^k \subseteq
{V_{\varphi}^*}^{k+1}C^*_r\left(R_{\varphi}\right)V_{\varphi}^{k+1}, \end{equation} and \begin{equation}\label{bkr} C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T} = \overline{\bigcup_{k=0}^{\infty} {V_{\varphi}^*}^k
C^*_r\left(R_{\varphi}\right)V_{\varphi}^k }. \end{equation} \begin{proof} Since $V_{\varphi}^k{V_{\varphi}^*}^k \in C^*_r\left(R_{\varphi}\right)$, it
is easy to check that ${V_{\varphi}^*}^k
C^*_r\left(R_{\varphi}\right)V_{\varphi}^k$ is a $*$-algebra. To see that
${V_{\varphi}^*}^kC^*_r\left(R_{\varphi}\right)V_{\varphi}^k$ is closed let $\{a_n\}$ be a sequence
in $C^*_r\left(R_{\varphi}\right)$ such that
$\left\{{V_{\varphi}^*}^ka_nV_{\varphi}^k\right\}$ converges in
$C^*_r\left(\Gamma_{\varphi}\right)$, say $\lim_{n \to \infty}
{V_{\varphi}^*}^ka_nV_{\varphi}^k = b$. It follows that
$$ \left\{V_{\varphi}^k{V_{\varphi}^*}^ka_nV_{\varphi}^k{V_{\varphi}^*}^k\right\} $$ is Cauchy in
$C^*_r\left(R_{\varphi}\right)$ and hence convergent, say to $a \in
C^*_r\left(R_{\varphi}\right)$. But then $b = \lim_{n \to \infty}
{V_{\varphi}^*}^k a_nV_{\varphi}^k = \lim_{n \to \infty}
{V_{\varphi}^*}^kV_{\varphi}^k {V_{\varphi}^*}^k a_nV_{\varphi}^k{V_{\varphi}^*}^kV_{\varphi}^k = {V_{\varphi}^*}^kaV_{\varphi}^k$. It follows that
$$ {V_{\varphi}^*}^kC^*_r\left(R_{\varphi}\right)V_{\varphi}^k $$ is closed and hence a
$C^*$-subalgebra. The inclusion (\ref{bkr0}) follows from the observation that $V_{\varphi}^k = V_{\varphi}^*V_{\varphi}^{k+1}$ and $V_{\varphi}C^*_r\left(R_{\varphi}\right)V_{\varphi}^* \subseteq C^*_r\left(R_{\varphi}\right)$.
It is straightforward to check that $\beta_{\lambda}(V_{\varphi}) =
\lambda V_{\varphi}$ and that $C^*_r\left(R_{\varphi}\right) \subseteq
C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T}$. The inclusion
$\supseteq$ in (\ref{bkr}) follows from this. To obtain the other, let $x
\in C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T}$ and let $\epsilon > 0$. It follows from Theorem 4.6 of \cite{Th1} and Lemma 1.1 of \cite{BoKR} that there is an $n \in \mathbb N$ and an element $$ y \in \operatorname{Span} \bigcup_{i,j \leq n} {V_{\varphi}^*}^i C^*_r\left(R_{\varphi}\right)V_{\varphi}^j $$
such that $\left\|x - y\right\| \leq \epsilon$. Then $\left\| x -
\int_{\mathbb T} \beta_{\lambda}(y) \ d\lambda\right\| \leq \epsilon$ and since $$ \int_{\mathbb T} \beta_{\lambda}(y) \ d\lambda \in {V_{\varphi}^*}^nC^*_r\left(R_{\varphi}\right)V_{\varphi}^n, $$ we see that (\ref{bkr}) holds. \end{proof} \end{lemma}
\begin{lemma}\label{ident} Let $I$ be a gauge invariant ideal in
$C^*_r\left(\Gamma_{\varphi}\right)$. It follows that $$ I = \left\{ a \in C^*_r\left(\Gamma_{\varphi}\right) : \ \int_{\mathbb
T} \beta_{\lambda}(a^*a) \ d \lambda \in I \cap
C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T} \right\} . $$ \begin{proof} Set $B = C^*_r\left(\Gamma_{\varphi}\right)/I$. Since
$I$ is gauge-invariant there is an action $\hat{\beta}$ of $\mathbb
T$ on $B$ such that $q \circ \beta = \hat{\beta} \circ q$, where $q
: C^*_r\left(\Gamma_{\varphi}\right) \to B$ is the quotient
map. Thus, if $$ y \in \left\{ a \in C^*_r\left(\Gamma_{\varphi}\right) : \ \int_{\mathbb
T} \beta_{\lambda}(a^*a) \ d \lambda \in I \cap
C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T} \right\} , $$ we find that $$ \int_{\mathbb T} \hat{\beta}_{\lambda}(q(y^*y)) \ d \lambda = q\left(\int_{\mathbb T} \beta_{\lambda}(y^*y) \ d\lambda \right) = 0 . $$ Since $\int_{\mathbb T} \hat{\beta}_{\lambda}( \cdot ) \ d \lambda$ is faithful we conclude that $q(y) = 0$, i.e. $y \in I$. This establishes the non-trivial part of the asserted identity. \end{proof} \end{lemma}
\begin{lemma}\label{intersectunique} Let $I,I'$ be gauge invariant ideals in $C^*_r\left(\Gamma_{\varphi}\right)$. Then $$ I \cap D_{R_{\varphi}} \subseteq I' \cap D_{R_{\varphi}} \ \Rightarrow \ I \subseteq I' . $$ \begin{proof} Assume that $I \cap D_{R_{\varphi}} \subseteq I' \cap
D_{R_{\varphi}}$. It follows from Lemma \ref{A7} that $I \cap
C^*_r\left(R_{\varphi}\right) \subseteq I' \cap C^*_r\left(R_{\varphi}\right)$. Then \begin{equation*} \begin{split} &I \cap
{V_{\varphi}^*}^kC^*_r\left(R_{\varphi}\right)V_{\varphi}^k = {V_{\varphi}^*}^k\left(I \cap
C^*_r\left(R_{\varphi}\right)\right)V_{\varphi}^k \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \subseteq {V_{\varphi}^*}^k\left(I' \cap
C^*_r\left(R_{\varphi}\right)\right)V_{\varphi}^k = I' \cap
{V_{\varphi}^*}^kC^*_r\left(R_{\varphi}\right)V_{\varphi}^k \end{split} \end{equation*} for all $k \in \mathbb N$. Hence Lemma \ref{kalgs} implies that $I \cap C_r^*\left(\Gamma_{\varphi}\right)^{\mathbb T} \subseteq I' \cap C_r^*\left(\Gamma_{\varphi}\right)^{\mathbb T}$. It follows then from Lemma \ref{ident} that $I \subseteq I'$.
\end{proof} \end{lemma}
\begin{prop}\label{gaugeideals} The map $J \mapsto \widehat{J}$, where $$ \widehat{J} = \left\{ a \in C^*_r\left(\Gamma_{\varphi}\right) : \
P_{\Gamma_{\varphi}}(a^*a) \in J\right\}, $$ is a
bijection from the $\Gamma_{\varphi}$-invariant ideals of
$D_{\Gamma_{\varphi}}$ onto the gauge invariant ideals of
$C^*_r\left(\Gamma_{\varphi}\right)$. Its inverse is the map $I
\mapsto I \cap D_{\Gamma_{\varphi}}$. \begin{proof} Since $P_{\Gamma_{\varphi}} \circ \beta = P_{\Gamma_{\varphi}}$ the ideal $\widehat{J}$ is gauge invariant. It
follows from Lemma 2.13 of \cite{Th1} that $\widehat{J} \cap D_{\Gamma_{\varphi}} = J$ so it suffices to show that \begin{equation}\label{gaugeb} \widehat{I \cap D_{\Gamma_{\varphi}}} = I \end{equation} when $I$ is a gauge invariant ideal in $C^*_r\left(\Gamma_{\varphi}\right)$. It follows from Lemma 2.13 of \cite{Th1} that $\widehat{I \cap D_{\Gamma_{\varphi}}} \cap D_{\Gamma_{\varphi}} = I \cap D_{\Gamma_{\varphi}}$. Since $D_{R_{\varphi}} \subseteq D_{\Gamma_{\varphi}}$ this implies that $\widehat{I \cap D_{\Gamma_{\varphi}}} \cap D_{R_{\varphi}} = I \cap D_{R_{\varphi}}$. Then (\ref{gaugeb}) follows from Lemma \ref{intersectunique}.
\end{proof} \end{prop}
To simplify notation, set $D = D_{\Gamma_{\phi}} = C(Y)$. Every ideal $I$ in $C^*_r\left(\Gamma_{\phi}\right)$ determines a closed subset $\rho(I)$ of $Y$ defined such that \begin{equation}\label{rho} \rho(I) = \left\{ y \in Y : \ f(y) = 0 \ \forall f \in
I \cap D \right\} . \end{equation}
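Note that, since $I \cap D$ is a closed ideal in $D = C(Y)$, the set $\rho(I)$ records $I \cap D$ completely: $I \cap D = C_0\left(Y \setminus \rho(I)\right)$.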
We say that a subset $F \subseteq Y$ is \emph{totally $\phi$-invariant} when $\phi^{-1}(F) = F$.
\begin{lemma}\label{psiinv} $\rho(I)$ is totally $\phi$-invariant for
every ideal $I$ in $C^*_r\left(\Gamma_{\phi}\right)$. \begin{proof} It suffices to show that $Y\setminus
\rho(I)$ is totally
$\phi$-invariant, which is what we will do. Assume first that $x\in
Y \setminus \rho(I)$. Then there is an $f\in
I\cap D$ such that $f(x)\neq 0$.
Choose an open bisection $W \subseteq \Gamma_{\phi}$ such that
$(x,1,\phi(x)) \in W$. Choose then $\eta\in
C_c(\Gamma_\phi)$ such that $\eta(x,1,\phi(x))=1$ and $\operatorname{supp} \eta
\subseteq W$. It is not
difficult to check that $\eta^*f\eta\in D$ and that
$\eta^*f\eta(\phi(x))= f(x)\ne 0$, and since
$\eta^*f\eta\in I$, it follows that
$\phi(x)\in Y \setminus \rho(I)$. Assume then that $\phi(x)\in Y
\setminus \rho(I)$. Then there is a
$g\in I\cap D$ such that $g(\phi(x))\ne 0$. Choose an open bisection $W \subseteq \Gamma_{\phi}$ such that
$(x,1,\phi(x)) \in W$ and $\eta\in
C_c(\Gamma_\phi)$ such that $\eta(x,1,\phi(x))=1$ and $\operatorname{supp} \eta
\subseteq W$. Then $\eta g\eta^*\in D$ and
$\eta g \eta^*(x)= g(\phi(x))\ne 0$, and since
$\eta g\eta^*\in I$, this shows that
$x \in Y\backslash \rho(I)$, proving that
$\phi^{-1}\left(\rho(I)\right) = \rho(I)$. \end{proof} \end{lemma}
Thus every ideal in $C^*_r\left(\Gamma_{\phi}\right)$ gives rise to a closed totally $\phi$-invariant subset of $Y$. To go in the other direction, let $F$ be a closed totally $\phi$-invariant subset of $Y$. Then $Y\backslash F$ is open and totally $\phi$-invariant so that the reduction
$\Gamma_{\phi}|_{Y \backslash F}$ is an \'etale groupoid in its own right, cf. \cite{An}. In fact, $\phi$ restricts to local homeomorphic surjections $\phi : Y \backslash F \to Y \backslash F$ and $\phi : F \to F$, and $$
\Gamma_{\phi}|_{Y \backslash F} = \Gamma_{\phi|_{Y \backslash F}} . $$
Note that $C^*_r\left( \Gamma_{\phi}|_{Y \backslash
F}\right) = C^*_r\left( \Gamma_{\phi|_{Y \backslash
F}}\right)$ is an ideal in $C^*_r\left(\Gamma_{\phi}\right)$ because $Y \backslash F$ is totally $\phi$-invariant.
\begin{prop} \label{prop:canonic} (Cf. Proposition II.4.5 of \cite{Re}.)
Let $F$ be a non-empty, closed and totally $\phi$-invariant subset
of $Y$. There is then a
surjective $*$-homomorphism $\pi_F: C_r^*(\Gamma_\phi)\to
C_r^*(\Gamma_{\phi|_F})$ which extends the restriction map
$C_c\left(\Gamma_{\phi}\right) \to C_c\left(\Gamma_{\phi|_F}\right)$
and has the property that $\ker \pi_F =
C^*_r\left(\Gamma_{\phi|_{Y \backslash F}}\right)$, i.e. \begin{equation*} \begin{xymatrix}{
0 \ar[r] & C^*_r\left(\Gamma_{\phi|_{Y \backslash F}}\right) \ar[r] &
C_r^*(\Gamma_\phi) \ar[r]^-{\pi_F} & C_r^*(\Gamma_{\phi|_F}) \ar[r] & 0} \end{xymatrix} \end{equation*} is exact. Furthermore, \begin{equation}\label{rhoF} \rho(\ker\pi_F)=F. \end{equation} \end{prop}
\begin{proof} Let $\dot{\pi_F} : C_c\left(\Gamma_{\phi}\right) \to
C_c\left(\Gamma_{\phi|_F}\right)$ denote the restriction map which
is surjective by Tietze's theorem. By using that $F$ is totally
$\phi$-invariant, it follows straightforwardly that $\dot{\pi_F}$ is
a $*$-homomorphism. Since $\pi_x \circ \dot{\pi_F} = \pi_x$ when $x
\in F$, it follows that $\dot{\pi_F}$ extends by continuity to a
$*$-homomorphism $\pi_F : C_r^*(\Gamma_\phi)\to
C_r^*(\Gamma_{\phi|_F})$ which is surjective because $\dot{\pi_F}$
is. To complete the proof observe that \begin{equation*}\label{estblis}
\ker \pi_F \cap D = C_0\left(Y \backslash F\right) = C^*_r\left(\Gamma_{\phi|_{Y \backslash
F}}\right) \cap D . \end{equation*} The first identity shows that (\ref{rhoF}) holds, and since $\ker \pi_F$
and $C^*_r\left(\Gamma_{\phi|_{Y \backslash
F}}\right)$ are both gauge-invariant ideals, the second identity shows that
they are identical by Lemma \ref{intersectunique}. \end{proof}
By combining Proposition \ref{gaugeideals}, Lemma \ref{psiinv} and Proposition \ref{prop:canonic} we obtain the following.
\begin{thm}\label{psi-invariant} The map $\rho$ is a bijection
from the gauge-invariant ideals in
$C^*_r\left(\Gamma_{\phi}\right)$ onto the set of closed totally $\phi$-invariant
subsets of $Y$. The inverse is the map which sends a closed totally
$\phi$-invariant subset $F \subseteq Y$ to the ideal $$ \ker \pi_F = \left\{ a \in C^*_r\left(\Gamma_{\phi}\right) : \
P_{\Gamma_{\phi}}(a^*a)(x) = 0 \ \forall x \in F \right\} . $$
\end{thm}
We remark that since the isomorphism (\ref{basiciso}) is equivariant with respect to the gauge actions, Theorem \ref{psi-invariant} gives also a description of the gauge invariant ideals in $C^*_r\left(\Gamma_{\varphi}\right)$, as a complement to the one of Proposition \ref{gaugeideals}.
\subsection{The primitive ideals}
We are now in a position to obtain a complete description of the primitive ideals of $C^*_r\left(\Gamma_{\phi}\right)$. Much of what we do is merely a translation of Katsura's description of the primitive ideals in the more general $C^*$-algebras considered by him in \cite{K}. Recall that because we only deal with separable $C^*$-algebras the primitive ideals are the same as the prime ideals, cf. 3.13.10 and 4.3.6 in \cite{Pe}.
\begin{lemma} \label{prop:ideal-gen}
Let $I$ be an ideal in $C_r^*(\Gamma_\phi)$ and let
$A$ be a closed totally $\phi$-invariant subset of $Y$.
If $\rho(I)\subseteq A$, then $\ker\pi_A\subseteq I$. \end{lemma}
\begin{proof}
Since $\rho(I)\subseteq A$ it follows from
the Stone-Weierstrass theorem that $C_0(Y\setminus A)\subseteq I
\cap C(Y)$. Let
$\left\{i_n\right\}$ be an approximate unit in $C_0(Y \backslash
A)$. It follows from Proposition \ref{prop:canonic} that $\{i_n\}$ is
also an approximate unit in $\ker \pi_A$. Since $\{i_n\} \subseteq
I$ it follows that $\ker\pi_A\subseteq I$. \end{proof}
We say that a closed totally $\phi$-invariant subset $A$ of $Y$ is \emph{prime}
when it has the property that if $B$ and $C$ also are closed and
totally $\phi$-invariant subsets of $Y$ and $A\subseteq B\cup C$, then either
$A\subseteq B$ or $A\subseteq C$.
Let ${\mathcal{M}}:=\{A\subseteq Y : A\text{ is non-empty, closed, totally $\phi$-invariant and
prime}\}$. For $x\in Y$ let $$ \orb(x) =\{y\in Y : \exists m,n\in{\mathbb{N}}:\phi^n(x)=\phi^m(y)\}. $$ We call $\orb(x)$ the \emph{total
$\phi$-orbit of $x$}.
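To illustrate the notion, and only for orientation: when $\phi$ is the full one-sided shift on $\left\{1,2,\dots,N\right\}^{\mathbb N}$, which is a surjective local homeomorphism, $\orb(x)$ consists of the sequences $y$ for which there are $m,n\in{\mathbb{N}}$ with $y_{m+i} = x_{n+i}$ for all $i \geq 0$, i.e. the sequences whose tails eventually agree with a tail of $x$.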
\begin{prop}(Cf. Proposition 4.13 and 4.4 of \cite{K}.)\label{glemt}
\begin{equation*}
{\mathcal{M}}=\{\overline{\orb(x)} : x\in Y\}.
\end{equation*} \end{prop}
\begin{proof}
It is clear that $\overline{\orb(x)}\in{\mathcal{M}}$ for every $x\in Y$. Assume that $A\in{\mathcal{M}}$ and let $\{U_k\}_{k=1}^\infty$ be a basis for
the topology of $A$ consisting of non-empty sets. We will by induction show that we can choose
compact neighbourhoods $\{C_k\}_{k=0}^\infty$ and
$\{C_k'\}_{k=0}^\infty$ in $A$ and positive integers $(n_k)_{k=0}^\infty$
and $(n_k')_{k=0}^\infty$ such that $C_k\subseteq U_k$ and
$C_k'\subseteq \phi^{n_{k-1}}(C_{k-1})\cap
\phi^{n'_{k-1}}(C'_{k-1})$ for $k\ge 1$. For this set $C_0=C'_0=A$. Assume then that $n\ge 1$ and that
$C_1,\dots,C_n$, $C'_1,\dots,C'_n$, $n_0,\dots,n_{n-1}$ and $n'_0,\dots,n'_{n-1}$
satisfying the conditions above have been chosen. Choose non-empty
open subsets $V_n\subseteq C_n$ and $V'_n\subseteq C'_n$.
We then have that
\begin{equation*}
\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V_n))\text{ and }\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V'_n))
\end{equation*}
are non-empty open and totally $\phi$-invariant
subsets of $A$, and thus that
\begin{equation} \label{eq:1}
A\setminus\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V_n))\text{ and
}A\setminus\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V'_n))
\end{equation}
are closed, totally $\phi$-invariant subsets of $Y$. Since $A$ is prime and is not
contained in either of the sets from \eqref{eq:1}, it follows that
$A$ is not contained in
\begin{equation*}
\left(A\setminus\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V_n))\right)\bigcup
\left(A\setminus\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V'_n)) \right),
\end{equation*}
whence
\begin{equation*}
\left(\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V_n))\right)\bigcap
\left(\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V'_n))\right) \ne\emptyset.
\end{equation*}
It follows that there are positive integers $n_n$ and $n'_n$
such that $\phi^{n_n}(V_n)\cap\phi^{n'_n}(V'_n)$ is
non-empty. Thus we can choose a compact neighbourhood $C_{n+1}\subseteq
U_{n+1}$ and a compact neighbourhood $C'_{n+1}\subseteq
\phi^{n_n}(V_n)\cap\phi^{n'_n}(V'_n)\subseteq
\phi^{n_n}(C_n)\cap\phi^{n'_n}(C'_n)$ which is what is required for the induction step.
It is easy to check that
\begin{equation*}
C'_0\cap\phi^{-n'_0}(C'_1)\cap \dots \cap\phi^{-n'_0- \dots -n'_k}(C'_{k+1}),\ k=0,1,\dots
\end{equation*}
is a decreasing sequence of non-empty compact sets. It follows
that there is an
\begin{equation*}
x\in \bigcap_{k=0}^\infty \phi^{-n'_0-\dots-n'_k}(C'_{k+1})\cap C'_0.
\end{equation*}
We have for every $k\in{\mathbb{N}}$ that $\phi^{n'_0+\dots+n'_k}(x)\in
C'_{k+1}\subseteq \phi^{n_k}(C_k)\subseteq \phi^{n_k}(U_k)$, and it
follows that $\orb(x)$ is dense in $A$, and thus that $A=\overline{\orb(x)}$. \end{proof}
\begin{prop}(Cf. Proposition 9.3 of \cite{K}.) \label{prop:prime} Assume that $I$ is a prime ideal in $C_r^*(\Gamma_\phi)$. It follows that $\rho(I)\in{\mathcal{M}}$. \end{prop}
\begin{proof}
It follows from Lemma \ref{psiinv} that $\rho(I)$ is closed and
totally $\phi$-invariant.
To show that $\rho(I)$ is also prime, assume that $B$ and $C$ are
closed totally $\phi$-invariant subsets such that $\rho(I)\subseteq
B\cup C$.
It follows then from
Lemma \ref{prop:ideal-gen} that $\ker(\pi_{B\cup C})\subseteq
I$. Since $\ker \pi_B \cap \ker \pi_C \cap D =
C_0(Y \backslash B) \cap C_0(Y \backslash C) = C_0\left(Y \backslash
(B\cup C)\right) = \ker \pi_{B \cup C} \cap D$ it follows from Lemma \ref{intersectunique} that $\ker \pi_B \cap \ker \pi_C = \ker \pi_{B\cup C}$. Therefore $\ker(\pi_B)\subseteq I$ or
$\ker(\pi_C)\subseteq I$ since $I$ is prime. Hence $\rho(I)\subseteq B$ or
$\rho(I)\subseteq C$, thanks to (\ref{rhoF}). \end{proof}
We say that a point $x\in Y$ is $\phi$-periodic if $\phi^n(x)=x$ for some $n>0$. Let $\per $ denote the set of $\phi$-periodic points $x\in Y$ which are isolated in $\orb(x)$ and let $$ {\cip_{\per}} =\{\overline{\orb(x)} : x\in \per\} $$ and $$ {\cip_{\aper}} ={\mathcal{M}}\setminus{\cip_{\per}}. $$ Let $A \subseteq Y$ be a closed totally $\phi$-invariant subset. We say that
$\phi|_A$ is \emph{topologically free} if the set of $\phi$-periodic points in $A$ has empty interior in $A$.
\begin{prop} (Cf. Proposition 11.3 of \cite{K}.) \label{prop:free}
Let $A\in{\mathcal{M}}$. Then $\phi|_A$ is topologically free if and only
if $A\in{\cip_{\aper}}$. \end{prop}
\begin{proof}
We will show that $\phi|_A$ is not topologically free if and only
if $A\in{\cip_{\per}}$. If $x\in\per$ and $A=\overline{\orb(x)}$, then $\phi|_A$ is not
topologically free because $x$ is periodic and isolated in $\orb(x)$
and thus in $A$. Assume then that $\phi|_A$ is not topologically free. There is then
a non-empty open subset $U\subseteq A$ such that every element of $U$ is
$\phi$-periodic. Choose $x\in A$ such that
$A=\overline{\orb(x)}$. Then $U\cap\orb(x)\ne\emptyset$. Let $y\in
U\cap\orb(x)$. Then $y$ is $\phi$-periodic and
$\overline{\orb(y)}=\overline{\orb(x)}=A$, so if we can show that
$y$ is isolated in $\orb(y)$, then we have that $A\in{\cip_{\per}}$.
Since $y$ is $\phi$-periodic there is
an $n\ge 1$ such that $\phi^n(y)=y$. We claim that
$U\subseteq\{y,\phi(y),\dots,\phi^{n-1}(y)\}$. It will then follow that $y$
is isolated in $A$ and thus in $\orb(y)$.
Assume that $U\setminus \{y,\phi(y),\dots,\phi^{n-1}(y)\}$ is
non-empty. Since it is also open it follows that $\orb(y)\cap
U\setminus \{y,\phi(y),\dots,\phi^{n-1}(y)\}$ is non-empty. Let
$z\in \orb(y)\cap
U\setminus \{y,\phi(y),\dots,\phi^{n-1}(y)\}$. Since $z\in U$ there
is an $m\ge 1$ so that $\phi^m(z)=z$, and since $z\in\orb(y)$ there
are $k,l\in{\mathbb{N}}$ such that $\phi^k(z)=\phi^l(y)$. But then
$z=\phi^{mk}(z)=\phi^{(m-1)k+l}(y)\in
\{y,\phi(y),\dots,\phi^{n-1}(y)\}$ and we have a contradiction. It
follows that $U\subseteq\{y,\phi(y),\dots,\phi^{n-1}(y)\}$. \end{proof}
In particular, it follows from Proposition \ref{prop:free} that the elements of ${\cip_{\aper}}$ are infinite sets.
\begin{prop} (Cf. Proposition 11.5 of \cite{K}.) \label{prop:aper}
Let $A\in{\cip_{\aper}}$. Then $\ker\pi_A$ is the unique ideal $I$ in
$C_r^*(\Gamma_\phi)$ with $\rho(I)=A$. \end{prop}
\begin{proof}
We have already in Proposition \ref{prop:canonic} seen that
$\rho(\ker\pi_A)=A$. Assume that $I$ is an ideal in
$C_r^*(\Gamma_\phi)$ with $\rho(I)=A$. It follows then from
Lemma \ref{prop:ideal-gen} that $\ker\pi_A\subseteq I$. Thus
it suffices to show that $\pi_A(I)=\{0\}$. Note that $\pi_A(I)$ is an ideal in
$C_r^*(\Gamma_{\phi|_A})$ with $\rho(\pi_A(I))=A$. It
follows that $\pi_A(I)\cap C(A)=\{0\}$. To conclude from this that
$\pi_A(I) = \{0\}$ we will show that the points of $A$
whose isotropy group in $\Gamma_{\phi|_A}$ is trivial are dense in
$A$. It will then follow from
Lemma 2.15 of \cite{Th1} that $\pi_A(I)=\{0\}$ because $\pi_A(I) \cap
C(A) = \{0\}$.
That the points of $A$ with trivial isotropy in $\Gamma_{\phi|_A}$ are dense in
$A$ is established as follows: The points in $A$ with
non-trivial isotropy in $\Gamma_{\phi|_A}$ are the pre-periodic
points in $A$. Let $\operatorname{Per}_n A = \left\{ y \in A : \ \phi^n(y) = y \right\}$ denote the set of points in $A$ of
period $n$ under $\phi$, and note that $\operatorname{Per}_n A$ is closed and, consisting of $\phi$-periodic points,
has empty interior since
$\phi|_A$ is topologically free by Proposition \ref{prop:free}. It follows that
$A \backslash \phi^{-k}\left(\operatorname{Per}_n A\right)$ is open and dense in $A$
for all $k,n$. By the Baire category theorem it follows that
$$
A \backslash \left( \bigcup_{k,n} \phi^{-k}\left(\operatorname{Per}_n A\right) \right)
= \bigcap_{k,n} A \backslash \phi^{-k}\left(\operatorname{Per}_n A\right)
$$
is dense in $A$, proving the claim.
\end{proof}
\begin{lemma}\label{prop2} Let $A\in{\cip_{\aper}}$. Then $\ker\pi_A$ is a primitive ideal.
\begin{proof} Let $A = \overline{\orb(x)}$. To show that $\ker \pi_A$ is primitive it suffices to show that it is prime, cf. Proposition 4.3.6 of \cite{Pe}. Equivalently, it suffices to show that $C^*_r\left(\Gamma_{\phi|_A}\right)$ is a prime
$C^*$-algebra. Consider therefore two ideals $I_j \subseteq C^*_r\left(\Gamma_{\phi|_A}\right), j = 1,2$, such that $I_1I_2 = \{0\}$. Then $$ \left\{ y \in A : \ f(y) = 0 \ \forall f \in I_1 \cap C(A) \right\} \cup \left\{ y \in A : \ f(y) = 0 \ \forall f \in I_2 \cap C(A) \right\} = A . $$ In particular, $x$ must be in $\left\{ y \in A : \ f(y) = 0 \ \forall
f \in I_j \cap C(A) \right\}$, either for $j = 1$ or $j =2$. It follows then from Lemma \ref{psiinv}, applied to $\phi|_A$, that $$ A = \left\{ y \in A : \ f(y) = 0 \ \forall f \in I_j \cap C(A) \right\}. $$
Hence $I_j = \{0\}$ by Proposition \ref{prop:aper} applied to $\phi|_A$. \end{proof} \end{lemma}
Let $A\in{\cip_{\per}}$. Choose $x\in\per$ such that $\overline{\orb(x)}=A$, and let $n$ be the minimal period of $x$. Then $x$ is isolated in $A$. It follows that the characteristic functions $1_{(x,0,x)}$ and $1_{(x,n,x)}$
belong to $C_r^*(\Gamma_{\phi|_A})$. Let $p_x=1_{(x,0,x)}$ and $u_x=1_{(x,n,x)}$. For $w\in{\mathbb{T}}$ let
$\dot{P}_{x,w}$ denote the ideal in $C_r^*(\Gamma_{\phi|_A})$ generated by $u_x-wp_x$.
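For orientation we record the following identities, which follow directly from the convolution product: $$ p_x \star p_x = p_x = p_x^*, \ \ \ u_x^* \star u_x = u_x \star u_x^* = p_x, \ \ \ p_x \star u_x = u_x \star p_x = u_x . $$ Thus $p_x$ is a projection, and $u_x$ is a unitary in the corner $p_xC_r^*(\Gamma_{\phi|_A})p_x$; this corner will be identified with $C({\mathbb{T}})$ in the proof of Proposition \ref{prop:per} below.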
\begin{lemma} \label{remark:ens}
Suppose that $x,y\in\per$ and that
$\overline{\orb(x)}=\overline{\orb(y)}$ and let $w\in{\mathbb{T}}$. Then
$\dot{P}_{x,w}=\dot{P}_{y,w}$. \end{lemma} \begin{proof}
By symmetry, it is enough to show that
$\dot{P}_{y,w}\subseteq\dot{P}_{x,w}$.
Since $y$ is isolated in $\orb(y)$, it is isolated in
$\overline{\orb(y)}=\overline{\orb(x)}$. Thus $y$ must belong to
$\orb(x)$. This means that there are $k,l\in{\mathbb{N}}$ such that
$\phi^k(x)=\phi^l(y)$. Since $y$ is $\phi$-periodic, it follows that
there is an $i\in{\mathbb{N}}$ such that $y=\phi^i(x)$. Let
$A=\overline{\orb(y)}=\overline{\orb(x)}$. Since $x$ and $y$ are
isolated in $A$ we have that
$1_{(x,i,y)}\in C_r^*(\Gamma_{\phi|_A})$. Let $v=1_{(x,i,y)} $. It
is easy to check that $v^*p_xv=p_y$ and that $v^*u_xv=u_y$. Thus
$u_y-wp_y=v^*(u_x-wp_x)v\in \dot{P}_{x,w}$ and it follows that
$\dot{P}_{y,w}\subseteq\dot{P}_{x,w}$. \end{proof}
Let $A\in{\cip_{\per}}$ and let $w\in{\mathbb{T}}$. It follows from Lemma \ref{remark:ens} that the ideal $\dot{P}_{x,w}$ does not depend on the particular choice of $x \in A \cap \operatorname{Per}$, as long as $\overline{\orb(x)}=A$. We will therefore simply write $\dot{P}_{A,w}$ for $\dot{P}_{x,w}$. We then define $P_{A,w}$ to be the ideal $\pi_A^{-1}(\dot{P}_{A,w})$ in $C_r^*(\Gamma_\phi)$.
\begin{prop} (Cf. Proposition 11.13 of \cite{K}.) \label{prop:per}
Let $A\in{\cip_{\per}}$. Then
\begin{equation*}
w\mapsto P_{A,w}
\end{equation*}
is a bijection between ${\mathbb{T}}$ and the set of primitive ideals $P$ in
$C_r^*(\Gamma_\phi)$ with $\rho(P)=A$. \end{prop}
\begin{proof}
The map $P\mapsto\pi_A(P)$ gives a bijection between the primitive
ideals in $C_r^*(\Gamma_\phi)$ with $\ker\pi_A\subseteq P$ and the
primitive ideals in $C_r^*(\Gamma_{\phi|_A})$, cf. Theorem 4.1.11
(ii) in \cite{Pe}. The inverse of this
bijection is the map $Q\mapsto \pi_A^{-1}(Q)$.
If $P$ is a primitive ideal in $C_r^*(\Gamma_\phi)$ with $\rho(P)=A$, it
follows from Lemma \ref{prop:ideal-gen} that
$\ker\pi_A\subseteq P$. In addition $\rho(\pi_A(P))=A$. If on the other
hand $Q$ is a primitive ideal in $C_r^*(\Gamma_{\phi|_A})$ with
$\rho(Q)=A$, then $\pi_A^{-1}(Q)$ is a primitive ideal in
$C_r^*(\Gamma_\phi)$ and $\rho(\pi_A^{-1}(Q))=A$. Thus
$P\mapsto\pi_A(P)$ gives a bijection between the primitive ideals in
$C_r^*(\Gamma_\phi)$ with $\rho(P)=A$ and the
primitive ideals $Q$ in $C_r^*(\Gamma_{\phi|_A})$ with $\rho(Q)=A$.
Choose $x\in\per$ such that $\overline{\orb(x)}=A$.
Let $\langle p_x\rangle$ be the ideal in $C_r^*(\Gamma_{\phi|_A})$ generated by
$p_x$. Observe that $\dot{P}_{A,w} \subseteq \langle p_x \rangle$
for all $w \in \mathbb T$ since $p_x\left(u_x -wp_x\right) = u_x - wp_x$.
The map $Q\mapsto Q\cap\langle p_x\rangle$ gives a bijection between
the primitive ideals in $C_r^*(\Gamma_{\phi|_A})$ with $\langle
p_x\rangle\nsubseteq Q$ and the primitive ideals in $\langle
p_x\rangle$, cf. Theorem 4.1.11 (ii) in \cite{Pe}. We claim that
$\langle p_x\rangle\nsubseteq Q$ if and only if $\rho(Q)=A$. To see
this, let $Q$ be an ideal in $C_r^*(\Gamma_{\phi|_A})$. If $p_x\in Q$,
then $x\notin \rho(Q)$ and $\rho(Q)\ne A$. If on the other hand
$\rho(Q)\ne A$, then $x\notin \rho(Q)$ because $\rho(Q)$ is closed
and totally $\phi$-invariant and $\overline{\orb(x)}=A$. It follows that there
is an $f\in Q\cap C(A)$ such that $f(x)\ne 0$, whence $p_x\in
Q$. This proves the claim and it follows that $Q\mapsto Q\cap\langle p_x\rangle$ gives a bijection between
the primitive ideals in $C_r^*(\Gamma_{\phi|_A})$ with $\rho(Q)=A$ and the primitive ideals in $\langle
p_x\rangle$.
The $C^*$-algebra $\langle p_x\rangle$ is Morita equivalent to
$p_xC_r^*(\Gamma_{\phi|_A})p_x$ via the
$p_xC_r^*(\Gamma_{\phi|_A})p_x$- $\langle p_x\rangle$ imprimitivity
bimodule $p_xC_r^*(\Gamma_{\phi|_A})$, and therefore $T\mapsto p_xTp_x$ gives a
bijection between the primitive ideals $T$ in $\langle p_x\rangle$ and
the primitive ideals in $p_xC_r^*(\Gamma_{\phi|_A})p_x$,
cf. Proposition 3.24 and Corollary 3.33 in \cite{RW}. Now note that
\begin{equation*}
\{(x',n',y') \in\Gamma_{\phi|_A} :
x'=y' =x \}=\{(x,kn,x) : k\in{\mathbb{Z}}\}
\end{equation*}
where $n$ is the smallest positive integer such that
$\phi^n(x)=x$. It follows that $p_xC_r^*(\Gamma_{\phi|_A})p_x$ is
isomorphic to $C({\mathbb{T}})$ under an isomorphism taking the canonical
unitary generator of $C(\mathbb T)$ to $u_x$. In this way we
conclude that the primitive ideals of $p_xC_r^*(\Gamma_{\phi|_A})p_x$
are in one-to-one correspondence with $\mathbb T$ under the map $$
\mathbb T \ni w \mapsto p_x\overline{C_r^*(\Gamma_{\phi|_A})
\left(u_x-wp_x\right)C_r^*(\Gamma_{\phi|_A})}p_x = p_x \dot{P}_{A,w} p_x. $$ This completes the proof.
\end{proof}
By combining Propositions \ref{prop:prime}, \ref{prop:aper} and \ref{prop:per} with Lemma \ref{prop2} we get the following theorem.
\begin{thm}\label{primitive}
The set of primitive ideals in $C_r^*(\Gamma_\phi)$ is the disjoint
union of $\{\ker\pi_A : A\in{\cip_{\aper}}\}$ and $\{P_{A,w} : A\in{\cip_{\per}},\ w\in{\mathbb{T}}\}$. \end{thm}
\subsection{The maximal ideals}
The next step is to identify the maximal ideals among the primitive ones.
\begin{lemma}\label{intcx} Assume that not all points of $Y$ are
pre-periodic and that $C^*_r\left(\Gamma_{\phi}\right)$ contains a
non-trivial ideal. It follows that there is a non-trivial
gauge-invariant ideal $J$ in
$C^*_r\left(\Gamma_{\phi}\right)$ such that $J \cap C(Y) \neq \{0\}$.
\begin{proof} Let $I$ be a non-trivial ideal in
$C^*_r\left(\Gamma_{\phi}\right)$. Assume first that $I \cap C(Y) =
\{0\}$. Since we assume that not all points of $Y$ are pre-periodic
we can apply Lemma 2.16 of \cite{Th1} to conclude that $J_0 = \overline{P_{\Gamma_{\phi}}(I)}$ is a
non-trivial $\Gamma_{\phi}$-invariant ideal in $C(Y)$. Then $$ J = \left\{ a \in C^*_r\left(\Gamma_{\phi}\right) : \
P_{\Gamma_{\phi}}(a^*a) \in J_0 \right\} $$ is a non-trivial gauge-invariant ideal by Proposition \ref{gaugeideals}, and $J \cap C(Y) = J_0 \neq \{0\}$. Note that $J$ contains $I$ in this case. If $I \cap C(Y) \neq \{0\}$ we set $$ J = \left\{ a \in C^*_r\left(\Gamma_{\phi}\right) : \
P_{\Gamma_{\phi}}(a^*a) \in I \cap C(Y) \right\} $$ which is a non-trivial ideal in $C^*_r\left(\Gamma_{\phi}\right)$ such that $J \cap C(Y) = I \cap C(Y)$ by Lemma 2.13 of \cite{Th1}. Since $J$ is gauge-invariant, this completes the proof. \end{proof} \end{lemma}
\begin{lemma}\label{minimal} Let $F \subseteq Y$ be a minimal closed
non-empty totally $\phi$-invariant subset. Then either \begin{enumerate} \item[1)] $F \in \mathcal M_{Aper}$ and $\ker \pi_F$ is a maximal
ideal, or \item[2)] $F = \orb(x) = \left\{ \phi^n(x) : n \in \mathbb N\right\}$, where $x \in \operatorname{Per}$. \end{enumerate} \begin{proof} It follows from the minimality of $F$ that $\overline{\orb(x)}=F$
for all $x\in F$. We will show that 1) holds when $F$ does not
contain an element of $\per$, and that 2) holds when it does. Assume first that $F$ does not contain any elements of $\per$. Then
$F\in \mathcal M_{Aper}$. If there is a proper ideal $I$ in $C_r^*(\Gamma_\phi)$ such that $\ker \pi_F
\subsetneq I$, then $\pi_F(I)$ is a non-trivial ideal in
$C^*_r\left(\Gamma_{\phi|_F}\right)$, and then it follows from
Lemma \ref{intcx} that there is a non-trivial gauge-invariant ideal $J$ in
$C^*_r\left(\Gamma_{\phi|_F}\right)$. By Theorem
\ref{psi-invariant} $\rho(\pi_F^{-1}(J))$ is then a
non-trivial closed totally $\phi$-invariant subset of
$F$, contradicting the minimality of $F$. Thus 1) holds when $F$ does not
contain an element from $\per$.
Assume instead that there is an $x\in F\cap\operatorname{Per}$.
Then $x$ is isolated in $\orb(x)$, and thus in
$F$. It follows that $F=\orb(x)$, because if $y\in
F\setminus\orb(x)$ we would have that $x\notin
\overline{\orb(y)}=F$, which is absurd.
Since $\phi$ is a local homeomorphism and $Y$ is compact, $\phi$ is finite-to-one. Moreover, $\orb(x) = \bigcup_{m \geq 0} \phi^{-m}\left(\{x\}\right)$ because $x$ is $\phi$-periodic, and since $x$ is isolated in $F$ and the $\phi|_F$-preimage of a finite open subset of $F$ is again a finite open subset of $F$, every point of $\orb(x)$ is isolated in $F$. As $F = \orb(x)$ is compact, it must therefore be finite. Since $\phi$ is surjective, $\phi(F) = \phi\left(\phi^{-1}(F)\right) = F$, so $\phi$ restricts to a bijection of the finite set $F$ and hence
$\orb(x) = \left\{ \phi^n(x) : n \in \mathbb N\right\}$.
Thus 2) holds if $F$ contains
an element from $\operatorname{Per}$. \end{proof} \end{lemma}
\begin{lemma}\label{maxideal1} Let $I$ be a maximal ideal in
$C^*_r\left(\Gamma_{\phi}\right)$. Then either $I = \ker \pi_F$ for
some minimal closed totally $\phi$-invariant subset $F \in \mathcal M_{Aper}$, or $I =
P_{\orb(x),w}$ for some $w \in \mathbb T$ and some $x \in \operatorname{Per}$ such
that $\orb(x) = \left\{\phi^n(x): \ n \in \mathbb N\right\}$. \begin{proof} Since $I$ is also primitive we know from Theorem
\ref{primitive} that $I = \ker \pi_A$ for some $A \in \mathcal
M_{Aper}$ or $I = P_{A,w}$ for some $A \in \mathcal M_{\per}$ and
some $w \in \mathbb T$. In the first case it follows that $A$ must
be a minimal closed totally $\phi$-invariant subset since $I$ is a maximal
ideal. Assume then that $I = P_{A,w}$ for some $A \in \mathcal M_{\per}$ and
some $w \in \mathbb T$. In the notation from the proof of Proposition
\ref{prop:per}, observe that $\dot{P}_{A,w} \subseteq \langle p_x
\rangle$ since $p_x\left(u_x-wp_x\right) = u_x -wp_x$. Note that $\dot{P}_{A,w} \neq \langle p_x
\rangle$ because the latter of these ideals is gauge-invariant and the
first is not. By maximality of $I$ this implies that $\langle p_x
\rangle = C^*_r\left(\Gamma_{\phi|_A}\right)$. On the other hand,
$\orb(x)$ is an open totally $\phi$-invariant subset of $A$ and $p_x \in
C^*_r\left( \Gamma_{\phi|_{\orb(x)}}\right)$, so we see that $\langle p_x
\rangle = C^*_r\left(\Gamma_{\phi|_A}\right) =
C^*_r\left( \Gamma_{\phi|_{\orb(x)}}\right)$. This implies that $$ C_0\left(\orb(x)\right) = C(A) \cap C^*_r\left(
\Gamma_{\phi|_{\orb(x)}}\right) = C(A), $$ and hence that $A = \orb(x)$. Compactness of $A$ implies that $\orb(x)$ is finite and surjectivity of $\phi$ that $\orb(x) = \left\{\phi^n(x): \ n \in \mathbb N\right\}$. \end{proof} \end{lemma}
\begin{thm}\label{maximal2} The maximal ideals in
$C^*_r\left(\Gamma_{\phi}\right)$ consist of the primitive ideals of
the form $\ker \pi_F$ for some infinite minimal closed totally $\phi$-invariant subset $F
\subseteq Y$ and the primitive ideals $P_{A,w}$ for some $w \in {\mathbb{T}}$, where $A = \orb(x) =
\left\{\phi^n(x) : \ n \in \mathbb N\right\}$ for a $\phi$-periodic point
$x \in Y$. \begin{proof} This follows from the last two lemmas, after the
observation that a primitive ideal $P_{A,w}$ of the form described
in the statement is maximal. \end{proof} \end{thm}
\begin{cor}\label{maximal3} Let $A$ be a simple quotient of
$C^*_r\left(\Gamma_{\phi}\right)$. Assume $A$ is not finite dimensional. It follows that there is an infinite minimal closed totally $\phi$-invariant
subset $F$ of $Y$ such that $A \simeq C^*_r\left(
\Gamma_{\phi|_F}\right)$. \end{cor}
To make more detailed conclusions about the simple quotients we need to restrict to the case where $Y$ is of finite covering dimension so that the result of \cite{Th3} applies. For this reason we prove first that finite dimensionality of $Y$ follows from finite dimensionality of $X$.
\section{On the dimension of $Y$}
Let $\operatorname{Dim} X$ and $\operatorname{Dim} Y$ denote the covering dimensions of $X$ and $Y$, respectively. The purpose of this section is to establish
\begin{prop}\label{dim!!!} $\operatorname{Dim} Y \leq \operatorname{Dim} X$.\end{prop} \begin{proof}By definition $Y$ is the Gelfand spectrum of
$D_{\Gamma_{\varphi}}$. Since the conditional expectation
$P_{\Gamma_{\varphi}} : C^*_r\left(\Gamma_{\varphi}\right) \to
D_{\Gamma_{\varphi}}$ is invariant under the gauge action, in the
sense that $P_{\Gamma_{\varphi}} \circ \beta_{\lambda} =
P_{\Gamma_{\varphi}}$ for all $\lambda$, it follows that \begin{equation*}\label{D17} D_{\Gamma_{\varphi}} = P_{\Gamma_{\varphi}}\left(C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb
T}\right) . \end{equation*} To make use of this description of $D_{\Gamma_{\varphi}}$ we need a refined version of (\ref{bkr}). Note first that it follows from (4.4) and (4.5) of \cite{Th1} that $V_{\varphi}C^*_r\left(R\left(\varphi^l\right)\right)V_{\varphi}^* \subseteq C^*_r\left(R\left(\varphi^{l+1}\right)\right)$ for all $l \in
\mathbb N$. Consequently $$ {V_{\varphi}^*}^k C^*_r\left(R\left(\varphi^l\right)\right) V_{\varphi}^k = {V_{\varphi}^*}^{k+1}V_{\varphi} C^*_r\left(R\left(\varphi^l\right)\right)V_{\varphi}^* V_{\varphi}^{k+1} \subseteq {V_{\varphi}^*}^{k+1} C^*_r\left(R\left(\varphi^{l+1}\right)\right) V_{\varphi}^{k+1} $$ for all $k,l \in \mathbb N$. It follows therefore from (\ref{crux}) and (\ref{bkr}) that there are sequences $\{k_n\}$ and $\left\{l_n\right\}$ in $\mathbb N$ such that $l_n \geq k_n$, \begin{equation}\label{D17} {V_{\varphi}^*}^{k_n} C^*_r\left(R\left(\varphi^{l_n}\right)\right)V_{\varphi}^{k_n} \subseteq {V_{\varphi}^*}^{k_{n+1}} C^*_r\left(R\left(\varphi^{l_{n+1}}\right)\right)V_{\varphi}^{k_{n+1}} \end{equation} and \begin{equation}\label{D1} C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb
T} = \overline{ \bigcup_n {V_{\varphi}^*}^{k_n} C^*_r\left(R\left(\varphi^{l_n}\right)\right)V_{\varphi}^{k_n}}; \end{equation} we can for example use $k_n=n$ and $l_n=2n$.
Let $D_n$ denote the $C^*$-subalgebra of
$D_{\Gamma_{\varphi}}$ generated by $$ P_{\Gamma_{\varphi}}\left({V^*_{\varphi}}^{k_n}
C^*_r\left(R\left(\varphi^{l_n} \right)\right)V_{\varphi}^{k_n}\right) $$ and let $Y_n$ be the character space of $D_n$. Note that $C(X) \subseteq D_n$ since $V_{\varphi}^{k_n}g{V_{\varphi}^*}^{k_n} \in C^*_r\left(R\left(\varphi^{l_n}\right)\right)$ and $g = P_{\Gamma_{\varphi}}\left( {V^*_{\varphi}}^{k_n} V_{\varphi}^{k_n}g
{V_{\varphi}^*}^{k_n} V^{k_n}_{\varphi} \right)$ when $g \in C(X)$. There is therefore a continuous surjection $$ \pi_n : Y_n \to X $$ defined such that $g\left(\pi_n(y)\right) = y(g), \ g \in C(X)$. We claim that $\# \pi_n^{-1}(x) < \infty$ for all $x \in X$. To show this note that by definition $D_n$ is generated as a $C^*$-algebra by functions of the form \begin{equation}\label{expresssion} \begin{split} &x \mapsto P_{\Gamma_{\varphi}}\left( {V^*_{\varphi}}^{k_n}
fV_{\varphi}^{k_n}\right)(x) = \sum_{z,z' \in \varphi^{-k_n}(x)}
f(z,z') \prod_{j=0}^{k_n-1} m(\varphi^j(z))^{-\frac{1}{2}} m(\varphi^j(z'))^{-\frac{1}{2}} \end{split} \end{equation} for some $f \in C^*_r\left(R\left(\varphi^{l_n}\right)\right)$. In fact, since $\operatorname{alg}^* R\left(\varphi^{l_n}\right)$ is dense in $C^*_r\left(R\left(\varphi^{l_n}\right)\right)$, already functions of the form (\ref{expresssion}) with \begin{equation}\label{expression7} f = f_1 \star f_2 \star \dots \star f_N, \end{equation} for some $f_i \in C\left(R\left(\varphi^{l_n}\right)\right), i = 1,2,\dots, N$, will generate $D_n$.
Fix $x \in X$ and consider an element $y \in \pi_n^{-1}(x)$. Every $x' \in X$ defines a character $\iota_{x'}$ of $D_n$ by evaluation, viz. $\iota_{x'}(h) = h(x')$, and $\left\{\iota_{x'} : x'
\in X \right\}$ is dense in $Y_n$ because the implication $$ h \in D_n, \ h(x') = 0 \ \forall x' \in X \ \Rightarrow \ h = 0 $$ holds. In particular, there is a sequence $\left\{x_l\right\}$ in $X$ such that $\lim_{l \to \infty} \iota_{x_l} = y$ in $Y_n$. Recall now from Lemma 3.6 of \cite{Th1} that there is an
open neighbourhood $U$ of $x$ and open sets $V_j, j=1,2, \dots,d$, where $d = \# \varphi^{-k_n}(x)$, in
$X$ such
that \begin{enumerate} \item[1)] $\varphi^{-k_n}\left(\overline{U}\right) \subseteq V_1 \cup V_2
\cup \dots \cup V_d$, \item[2)] $\overline{V_i} \cap \overline{V_j} = \emptyset, \ i \neq
j$, and \item[3)] $\varphi^{k_n}$ is injective on $\overline{V_j}$ for
each $j$. \end{enumerate} Since $\lim_{l \to \infty} x_l = x$ in $X$ we can assume that $x_l \in U$ for all $l$. For each $l$, set $$ F_l = \left\{ j : \ \varphi^{-k_n}(x_l) \cap V_j \neq \emptyset \right\} \subseteq \left\{1,2,\dots,d\right\}. $$ Note that there is a subset $F \subseteq \left\{1,2,\dots,d\right\}$ such that $F_l = F$ for infinitely many $l$. Passing to a subsequence we can therefore assume that $F_l = F$ for all $l$. For each $k \in F$ we define a continuous map $\lambda_k : \varphi^{k_n}\left(\overline{V_k}\right) \to \overline{V_k}$ such that $\varphi^{k_n} \circ \lambda_k (z) = z$. Set $T = \max_{z
\in X} \# \varphi^{-1}(z)$. For each $j \in \left\{1,2, \dots,
T\right\}$, set $$ A_j = \left\{ z \in X : \# \varphi^{-1}\left(\varphi(z)\right)
= j \right\} = m^{-1}(j). $$ For each $l$ and each $k \in F$ there is a unique tuple $\left(j_0(k),j_1(k), \dots, j_{k_n-1}(k)\right) \in \left\{1,2, \dots, T\right\}^{k_n}$ such that $$ \varphi^{-k_n}(x_l) \cap V_k \cap A_{j_0(k)} \cap \varphi^{-1}\left(A_{j_1(k)}\right) \cap \varphi^{-2}\left(A_{j_2(k)}\right) \cap \dots \cap \varphi^{-k_n+1 }\left(A_{j_{k_n-1}(k)}\right) \neq \emptyset . $$ Since there are only finitely many choices we can arrange that the same tuples, $\left(j_0(k),j_1(k), \dots, j_{k_n-1}(k)\right), k \in F$, work for all $l$. Then \begin{equation}\label{expresssion2} \begin{split} &\iota_{x_l}\left(P_{\Gamma_{\varphi}}\left( {V^*_{\varphi}}^{k_n}
fV_{\varphi}^{k_n}\right)\right) = \sum_{k,k' \in F}
f\left(\lambda_k(x_l),\lambda_{k'}(x_l)\right) \prod_{i=0}^{k_n-1} j_i(k)^{-\frac{1}{2}} j_i(k')^{-\frac{1}{2}} \end{split} \end{equation} for all $f \in C^*_r\left(R\left(\varphi^{l_n}\right)\right)$ and all $l$.
There is an open neighbourhood $U'$ of $\varphi^{l_n-k_n}(x)$ and open sets $V'_j, j=1,2, \dots,d'$, where $d' = \# \varphi^{-l_n}\left(\varphi^{l_n-k_n}(x)\right)$, in
$X$ such
that \begin{enumerate} \item[1')] $\varphi^{-l_n}\left(\overline{U'}\right) \subseteq V'_1 \cup V'_2
\cup \dots \cup V'_{d'}$, \item[2')] $\overline{V'_i} \cap \overline{V'_j} = \emptyset, \ i \neq
j$, and \item[3')] $\varphi^{l_n}$ is injective on $\overline{V'_j}$ for
each $j$. \end{enumerate} Since $\lim_{l \to \infty} \varphi^{l_n-k_n}(x_l) = \varphi^{l_n
-k_n}(x)$ we can assume that $\varphi^{l_n -k_n}(x_l) \in U'$ for all $l$. By an argument identical to the way we found $F$ above we can now find a subset $F' \subseteq \{1,2,\dots, d'\}$ such that $$ F' = \left\{ j : \varphi^{-l_n}\left(\varphi^{l_n-k_n}(x_l)\right)
\cap V'_j \neq \emptyset \right\} $$ for all $l$. For $i \in F'$ we define a continuous map $\mu'_i : \varphi^{l_n}\left(\overline{V'_i}\right) \to \overline{V'_i}$ such that $\mu'_i \circ \varphi^{l_n}(z) = z$ when $z \in \overline{V'_i}$. Set $$ \mu_i = \mu'_i \circ \varphi^{l_n-k_n} $$ on $\varphi^{-(l_n-k_n)}\left(\varphi^{l_n}\left(\overline{V'_i}\right)\right)$. Assuming that $f$ has the form (\ref{expression7}) we find now that \begin{equation}\label{yrk} \begin{split} &f\left(\lambda_k(x_l),\lambda_{k'}(x_l)\right) = \\ & \sum_{i_1,i_2, \dots,
i_{N-1} \in F'} f_1\left(\lambda_k(x_l),
\mu_{i_1}(x_l)\right)f_2\left(\mu_{i_1}(x_l),
\mu_{i_2}(x_l)\right)\cdots f_N\left(\mu_{i_{N-1}}(x_l), \lambda_{k'}(x_l)\right) \end{split} \end{equation} for all $f \in C^*_r\left(R\left(\varphi^{l_n}\right)\right)$ and all $l$.
fV_{\varphi}^{k_n}\right)\right) = \sum_{k,k' \in F} H_{k,k'}(x)\prod_{i=0}^{k_n-1} j_i(k)^{-\frac{1}{2}} j_i(k')^{-\frac{1}{2}}, $$ where $$ H_{k,k'}(x) = \sum_{i_1,i_2, \dots,
i_{N-1} \in F'} f_1\left(\lambda_k(x),
\mu_{i_1}(x)\right)f_2\left(\mu_{i_1}(x),
\mu_{i_2}(x)\right)\dots \dots f_N\left(\mu_{i_{N-1}}(x), \lambda_{k'}(x)\right) . $$ Since this expression only depends on $F,F'$ and the tuples $$ \left(j_0(k),j_1(k), \dots, j_{k_n-1}(k)\right), k \in F, $$ it follows that the number of possible values of an element from $\pi_n^{-1}(x)$ on the generators of the form (\ref{expresssion}) does not exceed $2^d2^{d'}T^{k_n}$, proving that $\# \pi_n^{-1}(x) < \infty$ as claimed.
We can then apply Theorem 4.3.6 on page 281 of \cite{En} to conclude that $\operatorname{Dim} Y_n \leq \operatorname{Dim} X$. Note that $D_n \subseteq D_{n+1}$ and $D_{\Gamma_{\varphi}} = \overline{\bigcup_n D_n}$ by (\ref{D17}) and (\ref{D1}). Hence $Y$ is the projective limit of the sequence $Y_1 \gets Y_2 \gets Y_3 \gets \dots $. Since $\operatorname{Dim} Y_n \leq \operatorname{Dim} X$ for all $n$ we conclude now from Theorem 1.13.4 in \cite{En} that $\operatorname{Dim} Y \leq \operatorname{Dim} X$. \end{proof}
\section{The simple quotients}
Following \cite{DS} we say that $\phi$ is \emph{strongly transitive} when for any non-empty open subset $U \subseteq Y$ there is an $n \in \mathbb N$ such that $Y = \bigcup_{j=0}^n \phi^j(U)$. By Proposition 4.3 of \cite{DS}, $C^*_r\left(\Gamma_{\phi}\right)$ is simple if and only if $Y$ is infinite and $\phi$ is strongly transitive.
\begin{lemma}\label{hmzero} Assume that $\phi$ is strongly transitive
but not injective. It
follows that $$ \lim_{k \to \infty} \frac{1}{k} \log \left(\inf_{x \in Y}\# \phi^{-k}(x)\right) > 0. $$
\begin{proof} Note that $U = \left\{ x \in Y : \ \# \phi^{-1}(x) \geq 2 \right\}$ is open and not empty since $\phi$ is a local homeomorphism and not injective. It follows that there is an $m \in \mathbb N$ such that \begin{equation}\label{D18} \bigcup_{j=0}^{m-1} \phi^j(U) = Y \end{equation} because $\phi$ is strongly transitive. We claim that \begin{equation}\label{est1} \inf_{z \in Y} \# \phi^{-k}(z) \geq 2^{\left[\frac{k}{m}\right]} \end{equation} for all $k \in \mathbb N$, where $\left[\frac{k}{m}\right]$ denotes the integer part of $\frac{k}{m}$. This follows by induction: assume that (\ref{est1}) holds for all $k' < k$. Consider any $z \in Y$. If $k < m$ there is nothing to prove, so assume that $k \geq m$. By (\ref{D18}) we can then write $z = \phi^j(z_1) = \phi^j(z_2)$ for some $j \in \left\{1,2,\dots,
m\right\}$ and some $z_1 \neq z_2$. It follows that $$ \# \phi^{-k}(z) \geq \# \phi^{-(k-j)}(z_1) + \# \phi^{-(k-j)}(z_2) \geq 2 \cdot 2^{\left[\frac{k-j}{m}\right]} \geq 2^{\left[\frac{k}{m}\right]} . $$ It follows from (\ref{est1}) that $\lim_{k \to \infty} \frac{1}{k} \log \left(\inf_{x \in Y}\# \phi^{-k}(x)\right) \geq \frac{1}{m} \log 2$. \end{proof} \end{lemma}
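In the induction step of the preceding proof we used only the elementary estimate for integer parts: since $1 \leq j \leq m$,
$$
\left[\frac{k-j}{m}\right] \geq \left[\frac{k-m}{m}\right] = \left[\frac{k}{m}\right] - 1 ,
$$
so that $2 \cdot 2^{\left[\frac{k-j}{m}\right]} \geq 2^{\left[\frac{k}{m}\right]}$.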
Let $M_l$ denote the $C^*$-algebra of complex $l \times l$-matrices. In the following a \emph{homogeneous $C^*$-algebra} will be a $C^*$-algebra isomorphic to a $C^*$-algebra of the form $eC(X,M_l)e$ where $X$ is a compact metric space and $e$ is a projection in $C(X,M_l)$ such that $e(x) \neq 0$ for all $x \in X$.
\begin{defn}\label{slowdim} A unital $C^*$-algebra $A$ is an \emph{AH-algebra} when
there is an increasing sequence $A_1 \subseteq A_2 \subseteq A_3
\subseteq \dots$ of unital $C^*$-subalgebras of $A$ such that $A =
\overline{\bigcup_n A_n}$ and each $A_n$ is a homogeneous
$C^*$-algebra. We say that $A$ has \emph{no dimension
growth} when the sequence $\{A_n\}$ can be chosen such that $$ A_n \simeq e_nC\left(X_n,M_{l_n}\right)e_n $$ with $\sup_n \operatorname{Dim} X_n < \infty$ and $\lim_{n \to \infty} \min_{x \in
X_n} \operatorname{Rank} e_n(x) = \infty$. \end{defn}
Note that the no dimension growth condition is stronger than the slow dimension growth condition used in \cite{Th3}.
\begin{prop}\label{AHthm} Assume that $\operatorname{Dim} Y < \infty$ and that
$\phi$ is strongly transitive and not injective. It follows that
$C^*_r\left(R_{\phi}\right)$ is an AH-algebra with no dimension growth. \end{prop} \begin{proof} For each $n$ we have that \begin{equation}\label{renu}
C^*_r\left(R\left(\phi^n\right)\right) \simeq e_nC\left(Y,M_{m_n}\right)e_n \end{equation} for some $m_n \in \mathbb N$ and some projection $e_n \in C\left(Y,M_{m_n}\right)$. Although this seems to be well known, it is hard to find a proof anywhere, so we point out that it can be proved by specializing the proof of Theorem 3.2 in \cite{Th1} to the case of a surjective local homeomorphism $\phi$. In fact, it suffices to observe that the $C^*$-algebra $A_{\phi}$ which features in Theorem 3.2 of \cite{Th1} is $C(Y)$ in this case. Since $\min_{y \in Y} \operatorname{Rank} e_n(y)$ is the minimal dimension of an irreducible representation of $C^*_r\left(R\left(\phi^n\right)\right)$, it now suffices to show that the minimal dimension of the irreducible
representations of $C^*_r\left(R(\phi^n)\right)$ goes to infinity when
$n$ does. It follows from Lemma 3.4 of \cite{Th1} that the minimal dimension of the irreducible
representations of $C^*_r\left(R(\phi^n)\right)$ is the same as the
number $\min_{y \in Y} \# \phi^{-n}(y)$. It follows from Lemma \ref{hmzero} that $$ \lim_{n \to \infty} \min_{y \in Y} \# \phi^{-n}(y) = \infty , $$ exponentially fast in fact. \end{proof}
\begin{lemma}\label{?1} Assume that $C^*_r\left(\Gamma_{\phi }\right)$
is simple. Then either $\phi $ is a homeomorphism or else \begin{equation}\label{limit} \lim_{n \to \infty} \sup_{x \in Y} m(x)^{-1}m(\phi (x))^{-1}m\left(\phi ^2(x)\right)^{-1} \dots m\left(\phi ^{n-1}(x)\right)^{-1} = 0 , \end{equation} where $m : Y \to \mathbb N$ is the function (\ref{m-funk}). \begin{proof} Assume (\ref{limit}) does not hold. Since $\phi$ is a
local homeomorphism, the function $m$ is continuous so it follows
from Dini's theorem that
there is at least one $x$ for which \begin{equation}\label{limit2} \lim_{n \to \infty} m(x)^{-1}m(\phi (x))^{-1}m\left(\phi ^2(x)\right)^{-1} \dots m\left(\phi ^{n-1}(x)\right)^{-1} \end{equation} is not zero. For this $x$ there is a $K$ such that $\# \phi^{-1} \left(\phi^k(x)\right) = 1$ when $k \geq K$, whence the set $$ F = \left\{ y \in
Y : \ \# \phi ^{-1}\left( \phi^k(y)\right) =1 \ \forall k \geq 0\right\} $$ is not empty. Note that $F$ is closed and that $\phi ^{-k}\left(\phi ^k(F)\right) = F$ for all $k$, i.e. $F$ is $\phi$-saturated. It follows from Corollary \ref{A5} that $F$ determines a proper ideal $I_F$ in $C_r^*(R_\phi)$. Since $\phi(F) \subseteq F$, it follows that $\widehat{\phi}(I_F)\subseteq I_F$. Then Theorem 4.10 of \cite{Th1} and the simplicity of $C^*_r\left(\Gamma_{\phi}\right)$ imply that either $\phi$ is injective or $I_F = \{0\}$. But $I_F = \{0\}$ means that $F=Y$ and thus that $\phi$ is injective. Hence $\phi$ is a homeomorphism in both cases. \end{proof} \end{lemma}
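To illustrate the second alternative in Lemma \ref{?1}: for the full one-sided shift on a finite alphabet with $N \geq 2$ letters, which is strongly transitive and not injective, every point has exactly $N$ preimages, so $m \equiv N$ and the product in (\ref{limit}) equals
$$
N^{-n} \longrightarrow 0 \quad \text{as } n \to \infty .
$$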
\begin{thm}\label{quotients} Let $\varphi : X \to X$ be a locally
injective surjection on a compact metric
space $X$ of finite covering dimension, and let $(Y,\phi)$ be its canonical
locally homeomorphic extension. Let $A$ be a simple quotient of
$C^*_r\left(\Gamma_{\varphi}\right)$. It follows that $A$ is
$*$-isomorphic to either \begin{enumerate} \item[1)] a full matrix algebra $M_n(\mathbb C)$ for some $n \in
\mathbb N$, or
\item[2)] the crossed product $C(F) \times_{\phi|_F} \mathbb Z$
corresponding to an infinite minimal closed totally $\phi$-invariant
subset $F \subseteq Y$ on which $\phi$ is injective, or \item[3)] a purely infinite, simple, nuclear, separable $C^*$-algebra; more specifically to
the crossed product $C^*_r\left(R_{\phi|_F}\right)
\times_{\widehat{\phi|_F}} \mathbb N$ where $F$ is an infinite
minimal closed totally $\phi$-invariant subset of $Y$ and
$C^*_r\left(R_{\phi|_F}\right)$ is an AH-algebra with no dimension growth. \end{enumerate} \begin{proof} If $A$ is not a matrix algebra it has the form
$C^*_r\left(\Gamma_{\phi|_F}\right)$ for some infinite minimal closed totally $\phi$-invariant
subset $F \subseteq Y$ by (\ref{basiciso}) and Corollary \ref{maximal3}. If $\phi$ is injective on $F$ we are in case
2). Assume not. Since $\operatorname{Dim} F \leq \operatorname{Dim} Y \leq \operatorname{Dim} X$ by
Proposition \ref{dim!!!} it follows from Proposition \ref{AHthm}
that $C^*_r\left(R_{\phi|_F}\right)$ is an AH-algebra with no dimension growth. By \cite{An} (or Theorem 4.6 of \cite{Th1}) we have an isomorphism $$
C^*_r\left(\Gamma_{\phi|_F}\right) \simeq C^*_r\left(R_{\phi|_F}\right) \times_{\widehat{\phi|_F}} \mathbb N, $$
where $\widehat{\phi|_F}$ is the endomorphism of
$C^*_r\left(R_{\phi|_F}\right)$ given by conjugation with
$V_{\phi|_F}$. We claim that the pure infiniteness of $
C^*_r\left(R_{\phi|_F}\right) \times_{\widehat{\phi|_F}} \mathbb N$ follows from Theorem 1.1 of \cite{Th3}. For this it remains only to check that
$\widehat{\phi|_F} = \operatorname{Ad} V_{\phi|_F}$ satisfies the two conditions on
$\beta$ in Theorem 1.1 of \cite{Th3}, i.e. that $\widehat{\phi|_F}(1) =
V_{\phi|_F}V_{\phi|_F}^*$ is a full projection and that there is no
$\widehat{\phi|_F}$-invariant trace state on
$C^*_r\left(R_{\phi|_F}\right)$. The first thing was observed already in Lemma 4.7 of \cite{Th1} so we focus on the second. Observe that it follows from Lemma 2.24 of \cite{Th1} that $\omega = \omega \circ P_{R_{\phi}}$ for every trace state $\omega$ of $C^*_r\left(R_{\phi}\right)$. By using this, a direct calculation as on page 787 of \cite{Th1} shows that $$
\omega\left( V_{\phi|_F}^n{V_{\phi|_F}^*}^n\right) \leq \sup_{y \in Y} \left[m(y)m(\phi(y))\dots m\left(\phi^{n-1}(y)\right)\right]^{-1} . $$ Then Lemma \ref{?1} implies that $\lim_{n \to \infty} \omega\left( V_{\phi|_F}^n{V_{\phi|_F}^*}^n\right) = 0$. In particular, $\omega$ is not $\widehat{\phi|_F}$-invariant. \end{proof} \end{thm}
\begin{cor}\label{cor1} Assume that
$C^*_r\left(\Gamma_{\varphi}\right)$ is simple and that $\operatorname{Dim} X < \infty$. It follows that
$C^*_r\left(\Gamma_{\varphi}\right)$ is purely infinite if and only if
$\varphi$ is not injective. \end{cor}
\begin{proof} Assume first that $\varphi$ is injective. Then
$C^*_r\left(\Gamma_{\varphi}\right)$ is the crossed product $C(X)
\times_{\varphi} \mathbb Z$ which is stably finite and thus not
purely infinite.
Conversely, assume that $\varphi$ is not
injective. Then a direct calculation, as in the proof of Theorem 4.8
in \cite{Th1}, shows that $V_{\varphi}$ is a
non-unitary isometry in $C^*_r\left(\Gamma_{\varphi}\right)$.
Since the $C^*$-algebras which feature in case 1) and case 2) of
Theorem \ref{quotients} are stably finite, the presence of a non-unitary isometry
implies that $C^*_r\left(\Gamma_{\varphi}\right)$ is purely infinite. \end{proof}
\begin{cor}\label{tokecor}
Let $S$ be a one-sided subshift. If the $C^*$-algebra
$\mathcal{O}_S$ associated with $S$ in \cite{C} is simple, then it is also
purely infinite. \end{cor}
\begin{proof}
It follows from Theorem 4.18 in \cite{Th1} that $\mathcal{O}_S$ is
isomorphic to $C^*_r\left(\Gamma_{\sigma}\right)$ where $\sigma$ is
the shift map on $S$.
If $\mathcal{O}_S$ is simple, $S$ must be infinite and it then
follows from Proposition 2.4.1 in \cite{BS} (cf. Theorem 3.9 in
\cite{BL}) that $\sigma$ is not injective. The
conclusion follows then from Corollary \ref{cor1}. \end{proof}
In Corollary \ref{tokecor} we assume that the shift map $\sigma$ on $S$ is surjective. It is not clear if the result holds without this assumption.
For completeness we point out that when $X$ is totally disconnected
(i.e. zero dimensional) the algebra $C^*_r\left(R_{\phi|_F}\right)$ which features in case 3) of Theorem \ref{quotients} is approximately divisible, cf. \cite{BKR}. We don't know if this is the case in general, but a weak form of divisibility is always present in $C^*_r\left(R_{\phi}\right)$ when $C^*_r\left(\Gamma_{\varphi}\right)$ is simple and $\phi$ not injective, cf. \cite{Th3}.
\begin{prop}\label{propfinish} Assume that $Y$ is totally disconnected
and $\phi$ strongly transitive and not injective. It follows that
$C^*_r\left(R_{\phi}\right)$ is an approximately divisible AF-algebra. \begin{proof} It follows from Proposition 6.8 of \cite{DS} that $C^*_r\left(R_{\phi}\right)$ is an
AF-algebra. As pointed out in Proposition 4.1 of \cite{BKR} a unital
AF-algebra fails to be approximately divisible only if it has a
quotient with a non-zero abelian projection. If $C^*_r\left(R_{\phi}\right)$ has such a quotient there is also a primitive
quotient with an abelian projection; i.e. by Proposition \ref{A17} there is
an $x \in Y$ such that $C^*_r
\left(R_{\phi|_{\overline{H(x)}}}\right)$ has a non-zero abelian
projection $p$. It follows from (\ref{crux}) that every projection
of $C^*_r
\left(R_{\phi|_{\overline{H(x)}}}\right)$ is unitarily equivalent to
a projection in $C^*_r
\left(R\left(\phi^n|_{\overline{H(x)}}\right)\right)$ for some $n$. Since $\overline{H(x)}$ is totally disconnected we can use Proposition 6.1 of
\cite{DS} to conclude that every projection in $C^*_r
\left(R\left(\phi^n|_{\overline{H(x)}}\right)\right)$ is unitarily equivalent to
a projection in $D_{R_{\phi|_{\overline{H(x)}}}} =
C\left(\overline{H(x)}\right)$. We may therefore assume that $p \in
C\left(\overline{H(x)}\right)$ so that $p = 1_A$ for some clopen $A
\subseteq \overline{H(x)}$. Then $H(x) \cap A \neq \emptyset$ so by
exchanging $x$ with some element in $H(x)$ we may assume that $x \in
A$. If there is a $y \neq x$ in $A$ such that $\phi^k(x) =
\phi^k(y)$ for some $k \in \mathbb N$, consider functions $g \in
C\left(\overline{H(x)}\right)$ and $f \in C_c\left(R_{\phi}\right)$
such that $g(x) = 1, g(y) = 0$, $\operatorname{supp} g \subseteq A$, $\operatorname{supp} f \subseteq R_{\phi} \cap
\left(A \times A\right)$ and $f(x,y) \neq 0$. Then $f,g \in 1_AC^*_r
\left(R_{\phi|_{\overline{H(x)}}}\right)1_A$ and $gf \neq 0$
while $fg = 0$, contradicting that
$1_AC^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)1_A$ is abelian. Thus no
such $y$ can exist which implies that $\pi_x(1_A) = 1_{\{x\}}$, where
$\pi_x$ is the representation (\ref{pirep}), restricted to the
subspace of $H_x$ consisting of the functions supported in
$\left\{(x',k,x) \in \Gamma_{\phi} : \ k = 0 \right\}$. It follows
that
$\pi_x\left(1_AC^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)1_A\right) \simeq \mathbb C$. Consider a non-zero ideal $J \subseteq \pi_x\left(C^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)\right)$. Then $\pi_x^{-1}(J)$ is a non-zero ideal in $C^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)$ and it follows from Corollary \ref{A5} that there is an open non-empty subset $U$ of $\overline{H(x)}$ such that $\phi^{-k}\left( \phi^k(U)\right) = U$ for all $k$ and $C_0(U) = \pi_x^{-1}(J) \cap C\left(\overline{H(x)}\right)$. Since $H(x) \cap U \neq \emptyset$, it follows that $x \in U$ so there is a function $g \in \pi_x^{-1}(J) \cap C\left(\overline{H(x)}\right)$ such that $g(x) = 1$. It follows that $\pi_x\left(g1_A\right) = 1_{\left\{x\right\}} = \pi_x\left(1_A\right) \in J$. This shows that $\pi_x(1_A)$ is a full projection in $\pi_x\left(C^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)\right)$ and Brown's theorem, \cite{Br}, shows now that $\pi_x\left(C^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)\right)$ is stably isomorphic to $\pi_x\left(1_AC^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)1_A\right) \simeq \mathbb C$. Since $\pi_x\left(C^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)\right)$ is unital this means that it is a full matrix algebra. In conclusion we deduce that if $C^*_r\left(R_{\phi}\right)$ is not approximately divisible it has a full matrix
algebra as a quotient. By Corollary \ref{A5} this implies that there is a finite set $F'
\subseteq Y$ such that $F' = \phi^{-k}\left(\phi^k(F')\right)$
for all $k \in \mathbb N$. Since $$ \phi^{-k}\left(\phi^k(x)\right) \subseteq \phi^{-k-1}\left(\phi^{k+1}(x)\right) \subseteq F' $$ for all $k$ when $x \in F'$, there is for each $x \in F'$ a natural number $K$ such that $\phi^{-k}\left(\phi^k(x)\right) = \phi^{-K}\left(\phi^K(x)\right)$ when $k \geq K$. Then $\# \phi^{-1} \left(
\phi^k(x)\right) =1$ for $k \geq K+1$, so that $m\left(\phi^k(x)\right) = 1$ for all $k \geq K$, which by Lemma \ref{hmzero} contradicts that $\phi$ is not injective. This contradiction finally shows that $C^*_r\left(R_{\phi}\right)$ is approximately divisible, as desired. \end{proof} \end{prop}
\end{document} |
\begin{document}
\begin{abstract}
We define the $i$-restriction and $i$-induction functors on the category $\mathcal{O}$ of the cyclotomic rational double affine Hecke algebras. This yields a crystal on the set of isomorphism classes of simple modules, which is isomorphic to the crystal of a Fock space. \\ \\ \noindent\textsc{R\'esum\'e.} On d\'efinit les foncteurs de $i$-restriction et $i$-induction sur la cat\'egorie $\mathcal{O}$ des alg\`ebres de Hecke doublement affine rationnelles cyclotomiques. Ceci donne lieu \`a un cristal sur l'ensemble des classes d'isomorphismes de modules simples, qui est isomorphe au cristal d'un espace de Fock.
\end{abstract}
\maketitle
\section*{Introduction}
In \cite{A1}, S. Ariki defined the $i$-restriction and $i$-induction functors for cyclotomic Hecke algebras. He showed that the Grothendieck group of the category of finitely generated projective modules of these algebras admits a module structure over the affine Lie algebra of type $A^{(1)}$, with the action of Chevalley generators given by the $i$-restriction and $i$-induction functors.
The restriction and induction functors for rational DAHA's (= double affine Hecke algebras) were recently defined by R. Bezrukavnikov and P. Etingof. With these functors, we give an analogue of Ariki's construction for the category $\mathcal{O}$ of cyclotomic rational DAHA's: we show that as a module over the type $A^{(1)}$ affine Lie algebra, the Grothendieck group of this category is isomorphic to a Fock space. We also construct a crystal on the set of isomorphism classes of simple modules in the category $\mathcal{O}$; it is isomorphic to the crystal of the Fock space. This Fock space also enters into a conjectural description of the decomposition numbers for the category $\mathcal{O}$ considered here. See \cite{U}, \cite{Y}, \cite{R} for related works.
\section*{Notation}
For $A$ an algebra, we will write $A\modu$ for the category of finitely generated $A$-modules. For $f: A\rightarrow B$ an algebra homomorphism from $A$ to another algebra $B$ such that $B$ is finitely generated over $A$, we will write $$f_\ast: B\modu\rightarrow A\modu$$ for the restriction functor and we write $$f^\ast: A\modu\rightarrow B\modu,\quad M\mapsto B\otimes_AM.$$
A $\mathbb{C}$-linear category $\mathcal{A}$ is called artinian if the Hom sets are finite dimensional $\mathbb{C}$-vector spaces and every object has a finite length. Given an object $M$ in $\mathcal{A}$, we denote by $\soc(M)$ (resp. $\head(M)$) the socle (resp. the head) of $M$, that is, the largest semi-simple subobject (resp. quotient) of $M$.
Let $\mathcal{C}$ be an abelian category. The Grothendieck group of $\mathcal{C}$ is the quotient of the free abelian group generated by objects in $\mathcal{C}$ modulo the relations $M=M'+M''$ for all objects $M,M',M''$ in $\mathcal{C}$ such that there is an exact sequence $0\rightarrow M'\rightarrow M\rightarrow M''\rightarrow 0$. Let $K(\mathcal{C})$ denote the complexified Grothendieck group, a $\mathbb{C}$-vector space. For each object $M$ in $\mathcal{C}$, let $[M]$ be its class in $K(\mathcal{C})$. Any exact functor $F: \mathcal{C}\rightarrow\mathcal{C}'$ between two abelian categories induces a vector space homomorphism $K(\mathcal{C})\rightarrow K(\mathcal{C}')$, which we will denote by $F$ again. Given an algebra $A$ we will abbreviate $K(A)=K(A\modu)$.
Denote by $\Fct(\mathcal{C},\mathcal{C}')$ the category of functors from a category $\mathcal{C}$ to a category $\mathcal{C}'$. For $F\in\Fct(\mathcal{C},\mathcal{C}')$ write $\End(F)$ for the ring of endomorphisms of the functor $F$. We denote by $1_F: F\rightarrow F$ the identity element in $\End(F)$. Let $G\in\Fct(\mathcal{C'},\mathcal{C''})$ be a functor from $\mathcal{C}'$ to another category $\mathcal{C}''$. For any $X\in\End(F)$ and any $X'\in\End(G)$ we write $X'X:G\circ F\rightarrow G\circ F$ for the morphism of functors given by $X'X(M)=X'(F(M))\circ G(X(M))$ for any $M\in\mathcal{C}$.
Let $e\geqslant 2$ be an integer and $z$ be a formal parameter. Denote by $\mathfrak{sl}_e$ the Lie algebra of traceless $e\times e$ complex matrices. Write $E_{ij}$ for the elementary matrix with $1$ in the position $(i,j)$ and $0$ elsewhere. The type $A^{(1)}$ affine Lie algebra $\widehat{\mathfrak{sl}}_e$ is $\mathfrak{sl}_e\otimes\mathbb{C}[z,z^{-1}]\oplus\mathbb{C} c$ with $c$ a central element. The Lie bracket is the usual one. We will denote the Chevalley generators of $\widehat{\mathfrak{sl}}_e$ as follows: \begin{eqnarray*} &e_i=E_{i,i+1}\otimes 1,\quad &f_i=E_{i+1,i}\otimes 1,\quad h_i=(E_{ii}-E_{i+1,i+1})\otimes 1, \quad 1\leqslant i\leqslant e-1,\\ &e_0=E_{e1}\otimes z,\quad &f_0=E_{1e}\otimes z^{-1},\quad h_0=(E_{ee}-E_{11})\otimes 1+c. \end{eqnarray*} For $i\in\mathbb{Z}/e\mathbb{Z}$ we will denote the simple root (resp. coroot) corresponding to $e_i$ by $\alpha_i$ (resp. $\alpha\spcheck_i$). The fundamental weights are $\{\Lambda_i: i\in\mathbb{Z}/e\mathbb{Z}\}$ with $\alpha\spcheck_i(\Lambda_j)=\delta_{ij}$ for any $i,j\in\mathbb{Z}/e\mathbb{Z}$. We will write $P$ for the weight lattice, the free abelian group generated by the fundamental weights.
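Explicitly, with the usual cocycle the bracket is $[a\otimes z^m, b\otimes z^n]=[a,b]\otimes z^{m+n}+m\,\delta_{m+n,0}\,\mathrm{tr}(ab)\,c$; for instance one checks that
$$
[e_0,f_0]=[E_{e1},E_{1e}]\otimes 1+\mathrm{tr}(E_{e1}E_{1e})\,c=(E_{ee}-E_{11})\otimes 1+c=h_0 .
$$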
\section{Reminders on Hecke algebras, rational DAHA's and restriction functors}\label{s:reminder}
\iffalse We give some reminders on the restriction and induction functors of Hecke algebras, those of rational DAHA's and the Knizhnik-Zamolodchikov functor. The reminders on their definitions mainly serve the proof of Theorem \ref{iso}, while those on their properties will be more frequently used. This section contains no new result except in Proposition \ref{standard} and Proposition \ref{indstandard}.\fi
\subsection{Hecke algebras.}\label{ss:Hecke}
Let $\mathfrak{h}$ be a finite dimensional vector space over $\mathbb{C}$. Recall that a pseudo-reflection is a non trivial element $s$ of $GL(\mathfrak{h})$ which acts trivially on a hyperplane, called the reflecting hyperplane of $s$. Let $W\subset GL(\mathfrak{h})$ be a finite subgroup generated by pseudo-reflections. Let $\mathcal{S}$ be the set of pseudo-reflections in $W$ and $\mathcal{A}$ be the set of reflecting hyperplanes. We set $\mathfrak{h}_{reg}=\mathfrak{h}-\bigcup_{H\in\mathcal{A}}H$; it is stable under the action of $W$. Fix $x_0\in \mathfrak{h}_{reg}$ and identify it with its image in $\mathfrak{h}_{reg}/W$. By definition the braid group attached to $(W,\mathfrak{h})$, denoted by $B(W,\mathfrak{h})$, is the fundamental group $\pi_1(\mathfrak{h}_{reg}/W, x_0).$
For any $H\in\mathcal{A}$, let $W_H$ be the pointwise stabilizer of $H$. This is a cyclic group. Write $e_H$ for the order of $W_H$. Let $s_H$ be the unique element in $W_H$ whose determinant is $\exp(\frac{2\pi\sqrt{-1}}{e_H})$.
Let $q$ be a map from $\mathcal{S}$ to $\mathbb{C}^\ast$ that is constant on the $W$-conjugacy classes. Following \cite[Definition 4.21]{BMR} the Hecke algebra $\mathscr{H}_q(W,\mathfrak{h})$ attached to $(W,\mathfrak{h})$ with parameter $q$ is the quotient of the group algebra $\mathbb{C} B(W,\mathfrak{h})$ by the relations: \begin{equation}\label{heckerelation} (T_{s_H}-1)\prod_{t\in W_H\cap\mathcal{S}}(T_{s_H}-q(t))=0,\quad H\in\mathcal{A}. \end{equation} Here $T_{s_H}$ is a generator of the monodromy around $H$ in $\mathfrak{h}_{reg}/W$ such that the lift of $T_{s_H}$ to $\mathfrak{h}_{reg}$ via the map $\mathfrak{h}_{reg}\rightarrow\mathfrak{h}_{reg}/W$ is represented by a path from $x_0$ to $s_H(x_0)$. See \cite[Section 2B]{BMR} for a precise definition. When the subspace $\mathfrak{h}^W$ of fixed points of $W$ in $\mathfrak{h}$ is trivial, we abbreviate $$B_W=B(W,\mathfrak{h}), \quad \mathscr{H}_q(W)=\mathscr{H}_q(W,\mathfrak{h}).$$
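For example, when $W$ is a finite Coxeter group acting on the complexification of its reflection representation, every $W_H$ has order $e_H=2$ and $W_H\cap\mathcal{S}=\{s_H\}$, so the relations (\ref{heckerelation}) reduce to the quadratic relations
$$
(T_{s_H}-1)(T_{s_H}-q(s_H))=0,\quad H\in\mathcal{A},
$$
and one recovers the usual Iwahori-Hecke algebra of $W$.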
\subsection{Parabolic restriction and induction for Hecke algebras.}\label{ss:resHecke}
In this section we will assume that $\mathfrak{h}^W=1$. A parabolic subgroup $W'$ of $W$ is by definition the stabilizer of a point $b\in\mathfrak{h}$. By a theorem of Steinberg, the group $W'$ is also generated by pseudo-reflections. Let $q'$ be the restriction of $q$ to $\mathcal{S'}=W'\cap \mathcal{S}$. There is an explicit inclusion $\imath_q: \mathscr{H}_{q'}(W')\hookrightarrow \mathscr{H}_q(W)$ given by \cite[Section 2D]{BMR}. The restriction functor \begin{equation*} \Resh:\mathscr{H}_q(W)\modu\rightarrow\mathscr{H}_{q'}(W')\modu \end{equation*} is the functor $(\imath_q)_\ast$. The induction functor $$\Indh=\mathscr{H}_q(W)\otimes_{\mathscr{H}_{q'}(W')}-$$ is left adjoint to $\Resh$. The coinduction functor $$\coIndh=\Hom_{\mathscr{H}_{q'}(W')}(\mathscr{H}_q(W),-)$$ is right adjoint to $\Resh$. The three functors above are all exact.
Let us recall the definition of $\imath_q$. It is induced from an inclusion $\imath: B_{W'}\hookrightarrow B_{W}$, which is in turn the composition of three morphisms $\ell$, $\kappa$, $\jmath$ defined as follows. First, let $\mathcal{A}'\subset\mathcal{A}$ be the set of reflecting hyperplanes of $W'$. Write $$\overline{\mathfrak{h}}=\mathfrak{h}/\mathfrak{h}^{W'},\quad\overline{\mathcal{A}}=\{\overline{H}=H/\mathfrak{h}^{W'}:\,H\in \mathcal{A}'\}, \quad \overline{\mathfrak{h}}_{reg}=\overline{\mathfrak{h}}-\bigcup_{\overline{H}\in\overline{\mathcal{A}}}\overline{H}, \quad\mathfrak{h}'_{reg}=\mathfrak{h}-\bigcup_{H\in\mathcal{A}'}H.$$ The canonical epimorphism $p: \mathfrak{h}\rightarrow\overline{\mathfrak{h}}$ induces a trivial $W'$-equivariant fibration $p: \mathfrak{h}'_{reg}\rightarrow \overline{\mathfrak{h}}_{reg}$, which yields an isomorphism \begin{equation}\label{heckeres1} \ell: B_{W'}=\pi_1(\overline{\mathfrak{h}}_{reg}/{W'}, p(x_0))\overset{\sim}\rightarrow \pi_1(\mathfrak{h}'_{reg}/W',x_0). \end{equation}
Endow $\mathfrak{h}$ with a $W$-invariant hermitian scalar product. Let
$||\cdot||$ be the associated norm. Set \begin{equation}\label{eq:omega}
\Omega=\{x\in\mathfrak{h}:\,||x-b||< \varepsilon\}, \end{equation} where $\varepsilon$ is a positive real number such that the closure of $\Omega$ does not intersect any hyperplane that is in the complement of $\mathcal{A}'$ in $\mathcal{A}$. Let $\gamma: [0,1]\rightarrow \mathfrak{h}$ be a path such that $\gamma(0)=x_0$, $\gamma(1)=b$ and $\gamma(t)\in\mathfrak{h}_{reg}$ for $0<t<1$. Let $u\in[0,1[$ be such that $x_1=\gamma(u)$ belongs to $\Omega$, and write $\gamma_u$ for the restriction of $\gamma$ to $[0,u]$. Consider the homomorphism \begin{equation*} \sigma: \pi_1(\Omega\cap\mathfrak{h}_{reg},x_1)\rightarrow\pi_1(\mathfrak{h}_{reg}, x_0), \quad\lambda\mapsto \gamma^{-1}_u\cdot\lambda\cdot\gamma_u. \end{equation*} The canonical inclusion $\mathfrak{h}_{reg}\hookrightarrow\mathfrak{h}'_{reg}$ induces a homomorphism $\pi_1(\mathfrak{h}_{reg}, x_0)\rightarrow \pi_1(\mathfrak{h}'_{reg}, x_0)$. Composing it with $\sigma$ gives an invertible homomorphism $$\pi_1(\Omega\cap\mathfrak{h}_{reg},x_1)\rightarrow\pi_1(\mathfrak{h}'_{reg}, x_0).$$ Since $\Omega$ is $W'$-invariant, its inverse gives an isomorphism \begin{equation}\label{heckeres2} \kappa:\pi_1(\mathfrak{h}'_{reg}/W', x_0)\overset{\sim}\rightarrow\pi_1((\Omega\cap\mathfrak{h}_{reg})/W',x_1). \end{equation}
Finally, we see from above that $\sigma$ is injective. So it induces an inclusion $$\pi_1((\Omega\cap\mathfrak{h}_{reg})/W',x_1)\hookrightarrow\pi_1(\mathfrak{h}_{reg}/W', x_0).$$ Composing it with the canonical inclusion $\pi_1(\mathfrak{h}_{reg}/W', x_0)\hookrightarrow \pi_1(\mathfrak{h}_{reg}/W, x_0)$ gives an injective homomorphism \begin{equation}\label{heckeres3} \jmath:\pi_1((\Omega\cap\mathfrak{h}_{reg})/W',x_1)\hookrightarrow \pi_1(\mathfrak{h}_{reg}/W, x_0)=B_W. \end{equation} By composing $\ell$, $\kappa$, $\jmath$ we get the inclusion \begin{equation}\label{heckeres4} \imath=\jmath\circ\kappa\circ\ell: B_{W'}\hookrightarrow B_W. \end{equation} It is proved in \cite[Section 4C]{BMR} that $\imath$ preserves the relations in (\ref{heckerelation}). So it induces an inclusion of Hecke algebras which is the desired inclusion \begin{equation*} \imath_q: \mathscr{H}_{q'}(W')\hookrightarrow \mathscr{H}_q(W). \end{equation*}
For $\imath$, $\imath': B_{W'}\hookrightarrow B_W$ two inclusions defined as above via different choices of the path $\gamma$, there exists an element $\rho\in P_W=\pi_1(\mathfrak{h}_{reg},x_0)$ such that for any $a\in B_{W'}$ we have $\imath(a)=\rho\imath'(a)\rho^{-1}$. In particular, the functors $\imath_\ast$ and $(\imath')_\ast$ from $B_W\modu$ to $B_{W'}\modu$ are isomorphic. Also, we have $(\imath_q)_\ast\cong(\imath'_q)_\ast.$ Hence the restriction functor $\Resh$ is well defined up to isomorphism.
\subsection{Rational DAHA's.}\label{ss:DAHA}
Let $c$ be a map from $\mathcal{S}$ to $\mathbb{C}$ that is constant on the $W$-conjugacy classes. The rational DAHA attached to $W$ with parameter $c$ is the quotient $H_c(W,\mathfrak{h})$ of the smash product of $\mathbb{C} W$ and the tensor algebra of $\mathfrak{h}\oplus\mathfrak{h}^\ast$ by the relations \begin{equation*} [x,x']=0,\quad[y,y']=0,\quad [y,x]=\pair{x,y}-\sum_{s\in\mathcal{S}}c_s\pair{\alpha_s, y}\pair{x,\alpha_s\spcheck}s, \end{equation*} for all $x,x'\in\mathfrak{h}^\ast$, $y,y'\in\mathfrak{h}$. Here $\pair{\cdot,\cdot}$ is the canonical pairing between $\mathfrak{h}^\ast$ and $\mathfrak{h}$, the element
$\alpha_s$ is a generator of $\mathrm{Im}(s|_{\mathfrak{h}^\ast}-1)$ and
$\alpha_s\spcheck$ is the generator of $\mathrm{Im}(s|_{\mathfrak{h}}-1)$ such that $\pair{\alpha_s, \alpha_s\spcheck}=2$.
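For instance, let $W=\{1,s\}\simeq\mathbb{Z}/2\mathbb{Z}$ act on $\mathfrak{h}=\mathbb{C}$ by $s\cdot v=-v$, and write $\mathfrak{h}^\ast=\mathbb{C} x$, $\mathfrak{h}=\mathbb{C} y$ with $\pair{x,y}=1$. Choosing $\alpha_s=2x$ and $\alpha_s\spcheck=y$, the last relation becomes
$$
[y,x]=1-2c_s\,s ,
$$
so that $H_c(W,\mathfrak{h})$ is generated by $x$, $y$ and $s$ with $s^2=1$, $sxs=-x$, $sys=-y$ and this commutation relation.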
For $s\in\mathcal{S}$ write $\lambda_s$ for the non trivial eigenvalue of $s$ in $\mathfrak{h}^\ast$. Let $\{x_i\}$ be a basis of $\mathfrak{h}^\ast$ and let $\{y_i\}$ be the dual basis. Let \begin{equation}\label{euler1} \mathbf{eu}=\sum_{i}x_iy_i+\frac{\dim(\mathfrak{h})}{2}-\sum_{s\in\mathcal{S}}\frac{2c_s}{1-\lambda_s}s \end{equation} be the Euler element in $H_c(W,\mathfrak{h})$. Its definition is independent of the choice of the basis $\{x_i\}$. We have \begin{equation}\label{euler2} [\mathbf{eu},x_i]=x_i,\quad [\mathbf{eu},y_i]=-y_i,\quad [\mathbf{eu},s]=0. \end{equation}
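In the rank one example above we have $\lambda_s=-1$, so $\mathbf{eu}=xy+\frac{1}{2}-c_s s$, and the first relation in (\ref{euler2}) can be checked by hand:
$$
[\mathbf{eu},x]=x[y,x]-c_s[s,x]=x(1-2c_s s)+2c_s xs=x .
$$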
\subsection{}\label{ss:catO}
The category $\mathcal{O}$ of $H_c(W,\mathfrak{h})$ is the full subcategory $\mathcal{O}_c(W,\mathfrak{h})$ of the category of $H_c(W,\mathfrak{h})$-modules consisting of objects that are finitely generated as $\mathbb{C}[\mathfrak{h}]$-modules and $\mathfrak{h}$-locally nilpotent. We recall from \cite[Section 3]{GGOR} the following properties of $\mathcal{O}_c(W,\mathfrak{h})$.
The action of the Euler element $\mathbf{eu}$ on a module in $\mathcal{O}_c(W,\mathfrak{h})$ is locally finite. The category $\mathcal{O}_c(W,\mathfrak{h})$ is a highest weight category. In particular, it is artinian. Write $\Irr(W)$ for the set of isomorphism classes of irreducible representations of $W$. The poset of standard modules in $\mathcal{O}_c(W,\mathfrak{h})$ is indexed by $\Irr(W)$ with the partial order given by \cite[Theorem 2.19]{GGOR}. More precisely, for $\xi\in\Irr(W)$, equip it with a $\mathbb{C} W\ltimes\mathbb{C}[\mathfrak{h}^\ast]$-module structure by letting the elements in $\mathfrak{h}\subset\mathbb{C}[\mathfrak{h}^\ast]$ act by zero; the standard module corresponding to $\xi$ is $$\Delta(\xi)=H_c(W,\mathfrak{h})\otimes_{\mathbb{C} W\ltimes\mathbb{C}[\mathfrak{h}^\ast]}\xi.$$ It is an indecomposable module with a simple head $L(\xi)$. The set of isomorphism classes of simple modules in $\mathcal{O}_c(W,\mathfrak{h})$ is $$\{[L(\xi)]:\xi\in\Irr(W)\}.$$ It is a basis of the $\mathbb{C}$-vector space $K(\mathcal{O}_c(W,\mathfrak{h}))$. The set $\{[\Delta(\xi)]:\xi\in\Irr(W)\}$ gives another basis of $K(\mathcal{O}_c(W,\mathfrak{h}))$.
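Note that $\sum_{s\in\mathcal{S}}\frac{2c_s}{1-\lambda_s}s$ is central in $\mathbb{C} W$ (both $c_s$ and $\lambda_s$ are constant on conjugacy classes), so it acts on each $\xi\in\Irr(W)$ by a scalar; combined with (\ref{euler2}), this shows that $\mathbf{eu}$ acts on the degree $k$ part $\mathbb{C}[\mathfrak{h}]_k\otimes\xi$ of $\Delta(\xi)\cong\mathbb{C}[\mathfrak{h}]\otimes\xi$ by the scalar
$$
k+\frac{\dim(\mathfrak{h})}{2}-\frac{1}{\dim\xi}\sum_{s\in\mathcal{S}}\frac{2c_s}{1-\lambda_s}\,\chi_\xi(s),
$$
where $\chi_\xi$ denotes the character of $\xi$.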
We say a module $N$ in $\mathcal{O}_c(W,\mathfrak{h})$ has a standard filtration if it admits a filtration $$0=N_0\subset N_1\subset\ldots\subset N_n=N$$ such that each quotient $N_i/N_{i-1}$ is isomorphic to a standard module. We denote by $\mathcal{O}^\Delta_c(W,\mathfrak{h})$ the full subcategory of $\mathcal{O}_c(W,\mathfrak{h})$ consisting of such modules.
\begin{lemme}\label{standfilt} (1) Any projective object in $\mathcal{O}_c(W,\mathfrak{h})$ has a standard filtration.
(2) A module in $\mathcal{O}_c(W,\mathfrak{h})$ has a standard filtration if and only if it is free as a $\mathbb{C}[\mathfrak{h}]$-module. \end{lemme} Both (1) and (2) are given by \cite[Proposition 2.21]{GGOR}.
The category $\mathcal{O}_c(W,\mathfrak{h})$ has enough projective objects and has finite homological dimension \cite[Section 4.3.1]{GGOR}. In particular, any module in $\mathcal{O}_c(W,\mathfrak{h})$ has a finite projective resolution. Write $\Proj_c(W,\mathfrak{h})$ for the full subcategory of projective modules in $\mathcal{O}_c(W,\mathfrak{h})$. Let \begin{equation*} I: \Proj_c(W,\mathfrak{h})\rightarrow \mathcal{O}_c(W,\mathfrak{h}) \end{equation*} be the canonical embedding functor. We have the following lemma.
\begin{lemme}\label{projiso} For any abelian category $\mathcal{A}$ and any right exact functors $F_1$, $F_2$ from $\mathcal{O}_c(W,\mathfrak{h})$ to $\mathcal{A}$, the homomorphism of vector spaces \begin{equation*} r_I:\Hom(F_1, F_2)\rightarrow \Hom(F_1\circ I, F_2\circ I), \quad\gamma\mapsto \gamma1_I \end{equation*} is an isomorphism. \end{lemme} In particular, if the functor $F_1\circ I$ is isomorphic to $F_2\circ I$, then we have $F_1\cong F_2$. \begin{proof} We need to show that for any morphism of functors $\nu: F_1\circ I\rightarrow F_2\circ I$ there is a unique morphism $\tilde{\nu}: F_1\rightarrow F_2$ such that $\tilde{\nu}1_{I}=\nu$. Since $\mathcal{O}_c(W,\mathfrak{h})$ has enough projectives, for any $M\in \mathcal{O}_c(W,\mathfrak{h})$ there exist $P_0$, $P_1$ in $\Proj_c(W,\mathfrak{h})$ and an exact sequence in $\mathcal{O}_c(W,\mathfrak{h})$ \begin{equation}\label{eq:projresolution} P_1\overset{d_1}\longrightarrow P_0\overset{d_0}\longrightarrow M\longrightarrow 0. \end{equation} Applying the right exact functors $F_1$, $F_2$ to this sequence we get the two exact sequences in the diagram below. The morphism of functors $\nu:F_1\circ I\rightarrow F_2\circ I$ yields well defined morphisms $\nu(P_1)$, $\nu(P_0)$ such that the square commutes $$\xymatrix{F_1(P_1)\ar[r]^{F_1(d_1)}\ar[d]^{\nu(P_1)} & F_1(P_0)\ar[r]^{F_1(d_0)}\ar[d]^{\nu(P_0)} &F_1(M)\ar[r] \ar@{}[d] &0\ar@{}[d]\\F_2(P_1)\ar[r]^{F_2(d_1)} & F_2(P_0)\ar[r]^{F_2(d_0)} &F_2(M)\ar[r] &0.}$$ Define $\tilde{\nu}(M)$ to be the unique morphism $F_1(M)\rightarrow F_2(M)$ that makes the diagram commute. Its definition is independent of the choice of $P_0$, $P_1$ and of the exact sequence (\ref{eq:projresolution}). The assignment $M\mapsto \tilde{\nu}(M)$ gives a morphism of functors $\tilde{\nu}: F_1\rightarrow F_2$ such that $\tilde{\nu}1_{I}=\nu$. It is unique by the uniqueness of the morphism $\tilde{\nu}(M)$. \end{proof}
\subsection{KZ functor.}\label{ss:KZ}
The Knizhnik-Zamolodchikov functor is an exact functor from the category $\mathcal{O}_c(W,\mathfrak{h})$ to the category $\mathscr{H}_q(W,\mathfrak{h})\modu$, where $q$ is a certain parameter associated with $c$. Let us recall its definition from \cite[Section 5.3]{GGOR}.
Let $\mathcal{D}(\mathfrak{h}_{reg})$ be the algebra of differential operators on $\mathfrak{h}_{reg}$. Write $$H_c(W,\mathfrak{h}_{reg})=H_c(W,\mathfrak{h})\otimes_{\mathbb{C}[\mathfrak{h}]}\mathbb{C}[\mathfrak{h}_{reg}].$$ We consider the Dunkl isomorphism, which is an isomorphism of algebras \begin{equation*} H_c(W,\mathfrak{h}_{reg})\overset{\sim}\rightarrow \mathcal{D}(\mathfrak{h}_{reg})\rtimes\mathbb{C} W \end{equation*} given by $x\mapsto x$, $w\mapsto w$ for $x\in\mathfrak{h}^\ast$, $w\in W$, and \begin{equation*} y\mapsto \partial_y+\sum_{s\in\mathcal{S}}\frac{2c_s}{1-\lambda_s}\frac{\alpha_s(y)}{\alpha_s}(s-1),\quad\text{for }y\in\mathfrak{h}. \end{equation*}
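In the rank one example considered in Section \ref{ss:DAHA} (with the choices $\alpha_s=2x$, $\alpha_s\spcheck=y$ made there), we have $\frac{2c_s}{1-\lambda_s}=c_s$ and $\frac{\alpha_s(y)}{\alpha_s}=\frac{1}{x}$, so the Dunkl isomorphism sends $y$ to the classical Dunkl operator
$$
\partial_x+\frac{c_s}{x}(s-1).
$$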
For any $M\in \mathcal{O}_c(W,\mathfrak{h})$, write $$M_{\mathfrak{h}_{reg}}=M\otimes_{\mathbb{C}[\mathfrak{h}]}\mathbb{C}[\mathfrak{h}_{reg}].$$ It identifies via the Dunkl isomorphism with a $\mathcal{D}(\mathfrak{h}_{reg})\rtimes W$-module which is finitely generated over $\mathbb{C}[\mathfrak{h}_{reg}]$. Hence $M_{\mathfrak{h}_{reg}}$ is a $W$-equivariant vector bundle on $\mathfrak{h}_{reg}$ with an integrable connection $\nabla$ given by $\nabla_y(m)=\partial_ym$ for $m\in M$, $y\in\mathfrak{h}$. It is proved in \cite[Proposition 5.7]{GGOR} that the connection $\nabla$ has regular singularities. Now, regard $\mathfrak{h}_{reg}$ as a complex manifold endowed with the transcendental topology. Denote by $\mathcal{O}^{an}_{\mathfrak{h}_{reg}}$ the sheaf of holomorphic functions on $\mathfrak{h}_{reg}$. For any free $\mathbb{C}[\mathfrak{h}_{reg}]$-module $N$ of finite rank, we consider $$N^{an}=N\otimes_{\mathbb{C}[\mathfrak{h}_{reg}]}\mathcal{O}^{an}_{\mathfrak{h}_{reg}}.$$ It is an analytic locally free sheaf on $\mathfrak{h}_{reg}$. For $\nabla$ an integrable connection on $N$, the sheaf of holomorphic horizontal sections \begin{equation*} N^{\nabla}=\{n\in N^{an}:\,\nabla_y(n)=0\text{ for all }y\in\mathfrak{h}\} \end{equation*} is a $W$-equivariant local system on $\mathfrak{h}_{reg}$. Hence it identifies with a local system on $\mathfrak{h}_{reg}/W$. So it yields a finite dimensional representation of $\mathbb{C} B(W,\mathfrak{h})$. For $M\in \mathcal{O}_c(W,\mathfrak{h})$ it is proved in \cite[Theorem 5.13]{GGOR} that the action of $\mathbb{C} B(W,\mathfrak{h})$ on $(M_{\mathfrak{h}_{reg}})^{\nabla}$ factors through the Hecke algebra $\mathscr{H}_q(W,\mathfrak{h})$. The formula for the parameter $q$ is given in \cite[Section 5.2]{GGOR}.
The Knizhnik-Zamolodchikov functor is the functor $$\KZ(W,\mathfrak{h}): \mathcal{O}_c(W,\mathfrak{h})\rightarrow\mathscr{H}_q(W,\mathfrak{h})\modu,\quad M\mapsto (M_{\mathfrak{h}_{reg}})^{\nabla}.$$ By definition it is exact. Let us recall some of its properties following \cite{GGOR}. Assume in the rest of this subsection that \emph{the algebras $\mathscr{H}_q(W,\mathfrak{h})$ and $\mathbb{C} W$ have the same dimension over $\mathbb{C}$}. We abbreviate $\KZ=\KZ(W,\mathfrak{h})$. The functor $\KZ$ is represented by a projective object $P_{\KZ}$ in $\mathcal{O}_c(W,\mathfrak{h})$. More precisely, there is an algebra homomorphism \begin{equation*}
\rho:\mathscr{H}_q(W,\mathfrak{h})\rightarrow\End_{\mathcal{O}_c(W,\mathfrak{h})}(P_{\KZ})^{\op} \end{equation*} such that $\KZ$ is isomorphic to the functor $\Hom_{\mathcal{O}_c(W,\mathfrak{h})}(P_{\KZ},-)$. By \cite[Theorem 5.15]{GGOR} the homomorphism $\rho$ is an isomorphism. In particular $\KZ(P_{\KZ})$ is isomorphic to $\mathscr{H}_q(W,\mathfrak{h})$ as $\mathscr{H}_q(W,\mathfrak{h})$-modules.
Now, recall that the center of a category $\mathcal{C}$ is the algebra $Z(\mathcal{C})$ of endomorphisms of the identity functor $Id_{\mathcal{C}}$. So there is a canonical map $$Z(\mathcal{O}_c(W,\mathfrak{h}))\rightarrow\End_{\mathcal{O}_c(W,\mathfrak{h})}(P_{\KZ}).$$ The composition of this map with $\rho^{-1}$ yields an algebra homomorphism \begin{equation*} \gamma: Z(\mathcal{O}_c(W,\mathfrak{h}))\rightarrow Z(\mathscr{H}_q(W,\mathfrak{h})), \end{equation*} where $Z(\mathscr{H}_q(W,\mathfrak{h}))$ denotes the center of $\mathscr{H}_q(W,\mathfrak{h})$.
\begin{lemme}\label{lem:center} (1) The homomorphism $\gamma$ is an isomorphism.
(2) For a module $M$ in $\mathcal{O}_c(W,\mathfrak{h})$ and an element $f$ in $Z(\mathcal{O}_c(W,\mathfrak{h}))$ the morphism $$\KZ(f(M)): \KZ(M)\rightarrow\KZ(M)$$ is the multiplication by $\gamma(f)$. \end{lemme} See \cite[Corollary 5.18]{GGOR} for (1). Part (2) follows from the construction of $\gamma$.
The functor $\KZ$ is a quotient functor, see \cite[Theorem 5.14]{GGOR}. Therefore it has a right adjoint $S:\mathscr{H}_q(W,\mathfrak{h})\modu\rightarrow\mathcal{O}_c(W,\mathfrak{h})$ such that the canonical adjunction map $\KZ\circ S\rightarrow\Id_{\mathscr{H}_q(W,\mathfrak{h})\modu}$ is an isomorphism of functors. We have the following proposition.
\begin{prop}\label{KZ} Let $Q$ be a projective object in $\mathcal{O}_c(W,\mathfrak{h})$.
(1) For any object $M\in\mathcal{O}_c(W,\mathfrak{h})$, the following morphism of $\mathbb{C}$-vector spaces is an isomorphism $$\Hom_{\mathcal{O}_c(W,\mathfrak{h})}(M,Q)\overset{\sim}\lra \Hom_{\mathscr{H}_q(W)}(\KZ(M),\KZ(Q)),\quad f\mapsto\KZ(f).$$ In particular, the functor $\KZ$ is fully faithful over $\Proj_c(W,\mathfrak{h})$.
(2) The canonical adjunction map gives an isomorphism $Q\overset{\sim}\ra S\circ \KZ (Q)$. \end{prop} See \cite[Theorems 5.3, 5.16]{GGOR}.
\subsection{Parabolic restriction and induction for rational DAHA's.}\label{ss:resDAHA}
From now on we will always assume that $\mathfrak{h}^W=1$. Recall from Section \ref{ss:resHecke} that $W'\subset W$ is the stabilizer of a point $b\in\mathfrak{h}$ and that $\overline{\mathfrak{h}}=\mathfrak{h}/\mathfrak{h}^{W'}$. Let us recall from \cite{BE} the definition of the parabolic restriction and induction functors $$\Res_b:\mathcal{O}_c(W,\mathfrak{h})\rightarrow\mathcal{O}_{c'}(W',\overline{\mathfrak{h}})\,,\quad \Ind_b:\mathcal{O}_{c'}(W',\overline{\mathfrak{h}})\rightarrow\mathcal{O}_c(W,\mathfrak{h}).$$ First we need some notation. For any point $p\in\mathfrak{h}$ we write $\mathbb{C}[[\mathfrak{h}]]_p$ for the completion of $\mathbb{C}[\mathfrak{h}]$ at $p$, and we write $\widehat{\mathbb{C}[\mathfrak{h}]}_p$ for the completion of $\mathbb{C}[\mathfrak{h}]$ at the $W$-orbit of $p$ in $\mathfrak{h}$. Note that we have $\mathbb{C}[[\mathfrak{h}]]_0=\widehat{\mathbb{C}[\mathfrak{h}]}_0$. For any $\mathbb{C}[\mathfrak{h}]$-module $M$ let $$\widehat{M}_p=\widehat{\mathbb{C}[\mathfrak{h}]}_p\otimes_{\mathbb{C}[\mathfrak{h}]}M.$$ The completions $\widehat{H}_{c}(W,\mathfrak{h})_b$, $\widehat{H}_{c'}(W',\mathfrak{h})_0$ are well defined algebras. We denote by $\widehat{\mathcal{O}}_c(W,\mathfrak{h})_b$ the category of $\widehat{H}_{c}(W,\mathfrak{h})_b$-modules that are finitely generated over $\widehat{\mathbb{C}[\mathfrak{h}]}_b$, and we denote by $\widehat{\mathcal{O}}_{c'}(W',\mathfrak{h})_0$ the category of $\widehat{H}_{c'}(W',\mathfrak{h})_0$-modules that are finitely generated over $\widehat{\mathbb{C}[\mathfrak{h}]}_0$. Let $P=\mathrm{Fun}_{W'}(W,\widehat{H}_{c}(W',\mathfrak{h})_0)$ be the set of $W'$-invariant maps from $W$ to $\widehat{H}_{c}(W',\mathfrak{h})_0$. Let $Z(W,W',\widehat{H}_{c}(W',\mathfrak{h})_0)$ be the ring of endomorphisms of the right $\widehat{H}_{c}(W',\mathfrak{h})_0$-module $P$. We have the following proposition given by \cite[Theorem 3.2]{BE}. \begin{prop}\label{BEiso} There is an isomorphism of algebras $$\Theta: \widehat{H}_{c}(W,\mathfrak{h})_b\longrightarrow Z(W,W', \widehat{H}_{c'}(W',\mathfrak{h})_0)$$ defined as follows: for $f\in P$, $\alpha\in\mathfrak{h}^\ast$, $a\in\mathfrak{h}$, $u\in W$, \begin{eqnarray*} (\Theta(u)f)(w)&=&f(w u),\\ (\Theta(x_{\alpha})f)(w)&=&(x^{(b)}_{w\alpha}+\alpha(w^{-1}b))f(w),\\ (\Theta(y_a)f)(w)&=&y^{(b)}_{wa}f(w)+\sum_{s\in\mathcal{S}, s\notin W'}\frac{2c_s}{1-\lambda_s}\frac{\alpha_s(wa)}{x^{(b)}_{\alpha_s}+\alpha_s (b)}(f(sw)-f(w)), \end{eqnarray*} where $x_\alpha\in\mathfrak{h}^\ast\subset H_{c}(W,\mathfrak{h})$, $x^{(b)}_{\alpha}\in\mathfrak{h}^\ast\subset H_{c'}(W',\mathfrak{h})$, $y_a\in\mathfrak{h}\subset H_{c}(W,\mathfrak{h})$, $y_a^{(b)}\in\mathfrak{h}\subset H_{c'}(W',\mathfrak{h})$. \end{prop}
Using $\Theta$ we will identify $\widehat{H}_{c}(W,\mathfrak{h})_b$-modules with $Z(W,W', \widehat{H}_{c'}(W',\mathfrak{h})_0)$-modules. So the module $P=\mathrm{Fun}_{W'}(W,\widehat{H}_{c}(W',\mathfrak{h})_0)$ becomes an $(\widehat{H}_{c}(W,\mathfrak{h})_b,\widehat{H}_{c'}(W',\mathfrak{h})_0)$-bimodule. Hence for any $N\in \widehat{\mathcal{O}}_{c'}(W',\mathfrak{h})_0$ the module $P\otimes_{\widehat{H}_{c'}(W',\mathfrak{h})_0}N$ lives in $\widehat{\mathcal{O}}_c(W,\mathfrak{h})_b$. It is naturally identified with $\mathrm{Fun}_{W'}(W,N)$, the set of $W'$-invariant maps from $W$ to $N$. For any $\mathbb{C}[\mathfrak{h}^\ast]$-module $M$ write $E(M)\subset M$ for the locally nilpotent part of $M$ under the action of $\mathfrak{h}$.
The ingredients for defining the functors $\Res_b$ and $\Ind_b$ consist of: \begin{itemize} \item the adjoint pair of functors $(\widehat{\quad}_b, E^b)$ with $$\widehat{\quad}_b:\mathcal{O}_{c}(W,\mathfrak{h})\rightarrow\widehat{\mathcal{O}}_c(W,\mathfrak{h})_b,\quad M\mapsto\widehat{M}_b,$$ $$E^b: \widehat{\mathcal{O}}_c(W,\mathfrak{h})_b\rightarrow \mathcal{O}_{c}(W,\mathfrak{h}),\quad N\rightarrow E(N),$$ \item the Morita equivalence $$J:\widehat{\mathcal{O}}_{c'}(W',\mathfrak{h})_0\rightarrow\widehat{\mathcal{O}}_c(W,\mathfrak{h})_b,\quad N\mapsto \mathrm{Fun}_{W'}(W,N),$$ and its quasi-inverse $R$ given in Section \ref{ss:BEiso} below, \item the equivalence of categories $$E: \widehat{\mathcal{O}}_{c'}(W',\mathfrak{h})_0\rightarrow\mathcal{O}_{c'}(W',\mathfrak{h}), \quad M\mapsto E(M)$$ and its quasi-inverse given by $N\mapsto\widehat{N}_0$, \item the equivalence of categories \begin{equation}\label{zeta} \zeta: \mathcal{O}_{c'}(W',\mathfrak{h})\rightarrow \mathcal{O}_{c'}(W',\overline{\mathfrak{h}}),\quad M\mapsto\{v\in M:\,yv=0,\,\text{ for all }y\in \mathfrak{h}^{W'}\} \end{equation} and its quasi-inverse $\zeta^{-1}$ given in Section \ref{rmq:resDAHA} below. \end{itemize} For $M\in\mathcal{O}_c(W,\mathfrak{h})$ and $N\in\mathcal{O}_{c'}(W',\overline{\mathfrak{h}})$ the functors $\Res_b$ and $\Ind_b$ are defined by \begin{eqnarray} \Res_b(M)=\zeta\circ E\circ R(\widehat{M}_b),\label{Resb}\\ \Ind_b(N)=E^b\circ J(\widehat{\zeta^{-1}(N)}_0).\nonumber \end{eqnarray} We refer to \cite[Section 2,3]{BE} for details.
\subsection{The idempotent $x_{\pr}$ and the functor $R$.}\label{ss:BEiso}
We give some details on the isomorphism $\Theta$ for a future use. Fix elements $1=u_1, u_2,\ldots, u_r$ in $W$ such that $W=\bigsqcup_{i=1}^r W'u_i$. Let $\mathrm{Mat}_r(\widehat{H}_{c'}(W',\mathfrak{h})_0)$ be the algebra of $r\times r$ matrices with coefficients in $\widehat{H}_{c'}(W',\mathfrak{h})_0$. We have an algebra isomorphism \begin{eqnarray} \Phi:Z(W,W', \widehat{H}_{c'}(W',\mathfrak{h})_0)&\rightarrow& \mathrm{Mat}_r(\widehat{H}_{c'}(W',\mathfrak{h})_0),\label{Phi}\\ A&\mapsto& (\Phi(A)_{ij})_{1\leqslant i,j\leqslant r}\nonumber \end{eqnarray} such that \begin{equation*} (Af)(u_i)=\sum_{j=1}^r\Phi(A)_{ij}f(u_j), \quad \text{ for all }f\in P, \,1\leqslant i\leqslant r.\end{equation*} Denote by $E_{ij}$, $1\leqslant i,j\leqslant r$, the elementary matrix in $\mathrm{Mat}_r(\widehat{H}_{c'}(W',\mathfrak{h})_0)$ with coefficient $1$ in the position $(i,j)$ and zero elsewhere. Note that the algebra isomorphism $$\Phi\circ\Theta: \widehat{H}_{c}(W,\mathfrak{h})_b\overset{\sim}\lra \mathrm{Mat}_r(\widehat{H}_{c'}(W',\mathfrak{h})_0)$$ restricts to an isomorphism of subalgebras \begin{equation}\label{eq:phithetax}
\widehat{\mathbb{C}[\mathfrak{h}]}_b\cong\bigoplus_{i=1}^r \mathbb{C}[[\mathfrak{h}]]_0E_{ii}. \end{equation} Indeed, there is a unique isomorphism of algebras \begin{equation}\label{eq:varpi} \varpi:\widehat{\mathbb{C}[\mathfrak{h}]}_b\cong\bigoplus_{i=1}^r\mathbb{C}[[\mathfrak{h}]]_{u_i^{-1}b} \end{equation} extending the algebra homomorphism $$\mathbb{C}[\mathfrak{h}]\rightarrow\bigoplus_{i=1}^r\mathbb{C}[\mathfrak{h}],\quad x\mapsto (x,x,\ldots, x),\quad \forall\ x\in\mathfrak{h}^\ast.$$ For each $i$ consider the isomorphism of algebras $$\phi_i: \mathbb{C}[[\mathfrak{h}]]_{u_i^{-1}b}\rightarrow\mathbb{C}[[\mathfrak{h}]]_0,\quad x\mapsto u_ix+x(u_i^{-1}b),\quad\forall\ x\in\mathfrak{h}^\ast.$$ The isomorphism (\ref{eq:phithetax}) is exactly the composition of $\varpi$ with the direct sum $\oplus_{i=1}^r\phi_i.$ Here $E_{ii}$ is the image of the idempotent in $\widehat{\mathbb{C}[\mathfrak{h}]}_b$ corresponding to the component $\mathbb{C}[[\mathfrak{h}]]_{u_i^{-1}b}$. We will denote by $x_{\pr}$ the idempotent in $\widehat{\mathbb{C}[\mathfrak{h}]}_b$ corresponding to $\mathbb{C}[[\mathfrak{h}]]_b$, i.e., $\Phi\circ\Theta(x_{\pr})=E_{11}$. Then the following functor $$R:\widehat{\mathcal{O}}_c(W,\mathfrak{h})_b\rightarrow \widehat{\mathcal{O}}_{c'}(W',\mathfrak{h})_0,\quad M\mapsto x_{\pr}M$$ is a quasi-inverse of $J$. Here, the action of $\widehat{H}_{c'}(W',\mathfrak{h})_0$ on $R(M)=x_{\pr}M$ is given by the following formulas deduced from Proposition \ref{BEiso}. For any $\alpha\in\mathfrak{h}^\ast$, $w\in W'$, $a\in\mathfrak{h}$, $m\in M$ we have \begin{eqnarray}
x_\alpha^{(b)}x_{\pr}(m)&=&x_{\pr}((x_{\alpha}-\alpha(b))m), \label{xform}\\ wx_{\pr}(m)&=&x_{\pr}(wm), \label{wform}\\ y_a^{(b)}x_{\pr}(m)&=&x_{\pr}((y_a+\sum_{s\in\mathcal{S},\,s\notin W'}\frac{2c_s}{1-\lambda_s}\frac{\alpha_s(a)}{x_{\alpha_s}})m). \label{yform} \end{eqnarray} In particular, we have \begin{equation}\label{eq:R(M)}
R(M)=\phi_1^\ast(x_{\pr}(M)) \end{equation} as $\mathbb{C}[[\mathfrak{h}]]_0\rtimes W'$-modules. Finally, note that the following equality holds in $\widehat{H}_c(W,\mathfrak{h})_b$ \begin{equation}\label{killwform} x_{\pr}ux_{\pr}=0, \quad \forall\ u\in W-W'. \end{equation}
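Let us indicate why (\ref{killwform}) holds. For $u\in W-W'$ we have $u_1u=u\in W'u_j$ with $j\neq 1$, so $(\Theta(u)f)(u_1)=f(u)$ only involves the value of $f$ on the coset $W'u_j$; hence the $(1,1)$ entry of $\Phi\circ\Theta(u)$ vanishes, and therefore
$$
\Phi\circ\Theta\left(x_{\pr}ux_{\pr}\right)=E_{11}\left(\Phi\circ\Theta(u)\right)E_{11}=0 .
$$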
\subsection{A quasi-inverse of $\zeta$.}\label{rmq:resDAHA}
Let us recall from \cite[Section 2.3]{BE} the following facts. Let $\mathfrak{h}^{\ast W'}$ be the subspace of $\mathfrak{h}^\ast$ consisting of fixed points of $W'$. Set $$(\mathfrak{h}^{\ast W'})^\bot=\{v\in\mathfrak{h}: f(v)=0\text{ for all } f\in\mathfrak{h}^{\ast W'}\}.$$ We have a $W'$-invariant decomposition $$\mathfrak{h}=(\mathfrak{h}^{\ast W'})^\bot\oplus\mathfrak{h}^{W'}.$$ The $W'$-space $(\mathfrak{h}^{\ast W'})^\bot$ is canonically identified with $\overline{\mathfrak{h}}$. Since the action of $W'$ on $\mathfrak{h}^{W'}$ is trivial, we have an obvious algebra isomorphism \begin{equation}\label{isobete} H_{c'}(W',\mathfrak{h})\cong H_{c'}(W',\overline{\mathfrak{h}})\otimes \mathcal{D}(\mathfrak{h}^{W'}). \end{equation} It maps an element $y$ in the subset $\mathfrak{h}^{W'}$ of $H_{c'}(W',\mathfrak{h})$ to the operator $\partial_y$ in $\mathcal{D}(\mathfrak{h}^{W'})$. Write $\mathcal{O}(1,\mathfrak{h}^{W'})$ for the category of finitely generated $\mathcal{D}(\mathfrak{h}^{W'})$-modules that are $\partial_y$-locally nilpotent for all $y\in\mathfrak{h}^{W'}$. The algebra isomorphism above yields an equivalence of categories \begin{equation*} \mathcal{O}_{c'}(W',\mathfrak{h})\cong\mathcal{O}_{c'}(W',\overline{\mathfrak{h}})\otimes\mathcal{O}(1,\mathfrak{h}^{W'}). \end{equation*} The functor $\zeta$ in (\ref{zeta}) is an equivalence, because it is induced by the functor $$\mathcal{O}(1,\mathfrak{h}^{W'})\overset{\sim}\ra\mathbb{C}\modu,\quad M\rightarrow\{m\in M, \partial_y(m)=0\text{ for all }y\in\mathfrak{h}^{W'}\},$$ which is an equivalence by Kashiwara's lemma upon taking Fourier transforms. In particular, a quasi-inverse of $\zeta$ is given by \begin{equation}\label{eq:zetainverse} \zeta^{-1}: \mathcal{O}_{c'}(W',\overline{\mathfrak{h}})\rightarrow \mathcal{O}_{c'}(W',\mathfrak{h}),\quad N\mapsto N\otimes\mathbb{C}[\mathfrak{h}^{W'}],\end{equation} where $\mathbb{C}[\mathfrak{h}^{W'}]\in \mathcal{O}(1,\mathfrak{h}^{W'})$ is the polynomial representation of $\mathcal{D}(\mathfrak{h}^{W'})$.
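For instance, if $\dim\mathfrak{h}^{W'}=1$ with coordinate $t$, the objects of $\mathcal{O}(1,\mathfrak{h}^{W'})$ are the finitely generated $\mathcal{D}(\mathbb{C})$-modules on which $\partial_t$ acts locally nilpotently; applying the Fourier transform ($t\mapsto\partial_t$, $\partial_t\mapsto -t$) and Kashiwara's lemma, any such module is isomorphic to $\mathbb{C}[t]^{\oplus n}$ for some $n\geq 0$, and the functor above sends it to
$$
\{m:\,\partial_t(m)=0\}\cong\mathbb{C}^n .
$$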
Moreover, the functor $\zeta$ maps a standard module in $\mathcal{O}_{c'}(W',\mathfrak{h})$ to a standard module in $\mathcal{O}_{c'}(W',\overline{\mathfrak{h}})$. Indeed, for any $\xi\in\Irr(W')$, we have an isomorphism of $H_{c'}(W',\mathfrak{h})$-modules \begin{equation*} H_{c'}(W',\mathfrak{h})\otimes_{\mathbb{C}[\mathfrak{h}^\ast]\rtimes W'}\xi=(H_{c'}(W',\overline{\mathfrak{h}})\otimes_{\mathbb{C}[(\overline{\mathfrak{h}})^\ast]\rtimes W'}\xi)\otimes(\mathcal{D}(\mathfrak{h}^{W'})\otimes_{\mathbb{C}[(\mathfrak{h}^{W'})^\ast]}\mathbb{C}). \end{equation*} On the right hand side $\mathbb{C}$ denotes the trivial module of $\mathbb{C}[(\mathfrak{h}^{W'})^\ast]$, and the latter is identified with the subalgebra of $\mathcal{D}(\mathfrak{h}^{W'})$ generated by $\partial_y$ for all $y\in\mathfrak{h}^{W'}$. We have $$\mathcal{D}(\mathfrak{h}^{W'})\otimes_{\mathbb{C}[(\mathfrak{h}^{W'})^\ast]}\mathbb{C}\cong\mathbb{C}[\mathfrak{h}^{W'}]$$ as $\mathcal{D}(\mathfrak{h}^{W'})$-modules. So $\zeta$ maps the standard module $\Delta(\xi)$ for $H_{c'}(W',\mathfrak{h})$ to the standard module $\Delta(\xi)$ for $H_{c'}(W',\overline{\mathfrak{h}})$.
\subsection{}\label{ss:resprop}
Here are some properties of $\Res_b$ and $\Ind_b$. \begin{prop}\label{Res} \begin{itemize} \item[(1)] Both functors $\Res_b$ and $\Ind_b$ are exact. The functor $\Res_b$ is left adjoint to $\Ind_b$. In particular the functor $\Res_b$ preserves projective objects and $\Ind_b$ preserves injective objects. \iffalse \item[(2)] If $W'=1$ then for each $M\in \mathcal{O}_c(W,\mathfrak{h})$ the vector space $\Res_b(M)$ is the fiber of the $\mathbb{C}[\mathfrak{h}]$-module $M$ at $b$, and $\Ind_b(\mathbb{C})=P_{\KZ}.$\fi \item[(2)] Let $\Res^W_{W'}$ and $\Ind^W_{W'}$ be respectively the restriction and induction functors of groups. We have the following commutative diagram \begin{equation*} \xymatrix{K(\mathcal{O}_c(W,\mathfrak{h}))\ar[r]_{\sim}^{\omega}\ar@<1ex>[d]^{\Res_b} & K(\mathbb{C} W)\ar@<1ex>[d]^{\Res^W_{W'}}\\
K(\mathcal{O}_{c'}(W',\overline{\mathfrak{h}}))\ar[r]_{\sim}^{\omega'}\ar@<1ex>[u]^{\Ind_b} & K(\mathbb{C} W')\ar@<1ex>[u]^{\Ind^W_{W'}}.} \end{equation*} Here the isomorphism $\omega$ (resp. $\omega'$) is given by mapping $[\Delta(\xi)]$ to $[\xi]$ for any $\xi\in\Irr(W)$ (resp. $\xi\in\Irr(W')$). \end{itemize} \end{prop}
See \cite[Proposition $3.9$, Theorem $3.10$]{BE} for (1), \cite[Proposition $3.14$]{BE} for (2).
\subsection{Restriction of modules having a standard filtration}\label{ss:standardres}
In the rest of Section 1, we study the actions of the restriction functors on modules having a standard filtration in $\mathcal{O}_c(W,\mathfrak{h})$ (Proposition \ref{standard}). We will need the following lemmas.
\begin{lemme}\label{lem:MV} Let $M$ be a module in $\mathcal{O}^\Delta_c(W,\mathfrak{h})$.
(1) There is a finite dimensional subspace $V$ of $M$ such that $V$ is stable under the action of $\mathbb{C} W$ and the map \begin{equation*} \mathbb{C}[\mathfrak{h}]\otimes V\rightarrow M,\quad p\otimes v\mapsto pv \end{equation*} is an isomorphism of $\mathbb{C}[\mathfrak{h}]\rtimes W$-modules.
(2) The map $\omega:K(\mathcal{O}_c(W,\mathfrak{h}))\rightarrow K(\mathbb{C} W)$ in Proposition \ref{Res}(2) satisfies \begin{equation}\label{eq:omegakgp} \omega([M])=[V]. \end{equation} \end{lemme} \begin{proof} Let $$0=M_0\subset M_1\subset \ldots\subset M_l=M$$ be a filtration of $M$ such that for any $1\leqslant i\leqslant l$ we have $M_i/M_{i-1}\cong\Delta(\xi_i)$ for some $\xi_i\in\Irr(W)$. We prove (1) and (2) by induction on $l$. If $l=1$, then $M$ is a standard module. Both (1) and (2) hold by definition. For $l>1$, by induction we may suppose that there is a subspace $V'$ of $M_{l-1}$ such that the properties in (1) and (2) are satisfied for $M_{l-1}$ and $V'$. Now, consider the exact sequence $$0\longrightarrow M_{l-1}\longrightarrow M\overset{j}\longrightarrow \Delta(\xi_l)\longrightarrow 0.$$ From the isomorphism of $\mathbb{C}[\mathfrak{h}]\rtimes W$-modules $\Delta(\xi_l)\cong\mathbb{C}[\mathfrak{h}]\otimes \xi_l$ we see that $\Delta(\xi_l)$ is a projective $\mathbb{C}[\mathfrak{h}]\rtimes W$-module. Hence there exists a morphism of $\mathbb{C}[\mathfrak{h}]\rtimes W$-modules $s: \Delta(\xi_l)\rightarrow M$ that provides a section of $j$. Let $V=V'\oplus s(\xi_l)\subset M$. It is stable under the action of $\mathbb{C} W$. The map $\mathbb{C}[\mathfrak{h}]\otimes V\rightarrow M$ in (1) is an injective morphism of $\mathbb{C}[\mathfrak{h}]\rtimes W$-modules. Its image is $M_{l-1}\oplus s(\Delta(\xi_l))$, which is equal to $M$. So it is an isomorphism. We have $$\omega([M])=\omega([M_{l-1}])+\omega([\Delta(\xi_l)]),$$ by assumption $\omega([M_{l-1}])=[V']$, so $\omega([M])=[V']+[\xi_l]=[V]$. \end{proof}
\begin{lemme}\label{lem:eufinite} (1) Let $M$ be a $\widehat{H}_{c}(W,\mathfrak{h})_0$-module free over $\mathbb{C}[[\mathfrak{h}]]_0$. If there exist generalized eigenvectors $v_1,\ldots v_n$ of $\eu$ which form a basis of $M$ over $\mathbb{C}[[\mathfrak{h}]]_0$, then for $f_1,\ldots, f_n\in\mathbb{C}[[\mathfrak{h}]]_0$ the element $m=\sum_{i=1}^nf_iv_i$ is $\eu$-finite if and only if $f_1,\ldots,f_n$ all belong to $\mathbb{C}[\mathfrak{h}]$.
(2) Let $N$ be an object in $\mathcal{O}_c(W,\mathfrak{h})$. If $\widehat{N}_0$ is a free $\mathbb{C}[[\mathfrak{h}]]_0$-module, then $N$ is a free $\mathbb{C}[\mathfrak{h}]$-module. It admits a basis consisting of generalized eigenvectors $v_1,\ldots,v_n$ of $\eu$. \end{lemme} \begin{proof} (1) It follows from the proof of \cite[Theorem 2.3]{BE}.
(2) Since $N$ belongs to $\mathcal{O}_c(W,\mathfrak{h})$, it is finitely generated over $\mathbb{C}[\mathfrak{h}]$. Denote by $\mathfrak{m}$ the maximal ideal of $\mathbb{C}[[\mathfrak{h}]]_0$. The canonical map $N\rightarrow\widehat{N}_0/\mathfrak{m}\widehat{N}_0$ is surjective. So there exist $v_1,\ldots,v_n$ in $N$ such that their images form a basis of $\widehat{N}_0/\mathfrak{m}\widehat{N}_0$ over $\mathbb{C}$. Moreover, we may choose $v_1,\ldots,v_n$ to be generalized eigenvectors of $\eu$, because the $\eu$-action on $N$ is locally finite. Since $\widehat{N}_0$ is free over $\mathbb{C}[[\mathfrak{h}]]_0$, Nakayama's lemma yields that $v_1,\ldots,v_n$ form a basis of $\widehat{N}_0$ over $\mathbb{C}[[\mathfrak{h}]]_0$. By part (1) the set $N'$ of $\eu$-finite elements in $\widehat{N}_0$ is the free $\mathbb{C}[\mathfrak{h}]$-submodule generated by $v_1,\ldots, v_n$. On the other hand, since $\widehat{N}_0$ belongs to $\widehat{\mathcal{O}}_c(W,\mathfrak{h})_0$, by \cite[Proposition 2.4]{BE} an element in $\widehat{N}_0$ is $\mathfrak{h}$-nilpotent if and only if it is $\eu$-finite. So $N'=E(\widehat{N}_0).$ On the other hand, the canonical inclusion $N\subset E(\widehat{N}_0)$ is an equality by \cite[Theorem 3.2]{BE}. Hence $N=N'$. This implies that $N$ is free over $\mathbb{C}[\mathfrak{h}]$, with a basis given by $v_1,\ldots,v_n$, which are generalized eigenvectors of $\eu$. \end{proof}
\begin{prop}\label{standard} Let $M$ be an object in $\mathcal{O}^\Delta_c(W,\mathfrak{h})$.
(1) The object $\Res_b(M)$ has a standard filtration.
(2) Let $V$ be a subspace of $M$ that has the properties of Lemma \ref{lem:MV}(1). Then there is an isomorphism of $\mathbb{C}[\overline{\mathfrak{h}}]\rtimes W'$-modules \begin{equation*} \Res_b(M)\cong \mathbb{C}[\overline{\mathfrak{h}}]\otimes\Res^{W}_{W'}(V). \end{equation*} \end{prop} \begin{proof} (1) As explained at the end of Section \ref{rmq:resDAHA}, the equivalence $\zeta$ maps a standard module in $\mathcal{O}_{c'}(W',\mathfrak{h})$ to a standard one in $\mathcal{O}_{c'}(W',\overline{\mathfrak{h}})$. Hence to prove that $\Res_b(M)=\zeta\circ E\circ R(\widehat{M}_b)$ has a standard filtration, it is enough to show that $N=E\circ R(\widehat{M}_b)$ has one. We claim that the module $N$ is free over $\mathbb{C}[\mathfrak{h}]$; the result then follows from Lemma \ref{standfilt}(2).
Let us prove the claim. Recall from (\ref{eq:R(M)}) that we have $R(\widehat{M}_b)=\phi_1^\ast(x_{\pr}\widehat{M}_b)$ as $\mathbb{C}[[\mathfrak{h}]]_0\rtimes W'$-modules. Using the isomorphism of $\mathbb{C}[\mathfrak{h}]\rtimes W$-modules $M\cong\mathbb{C}[\mathfrak{h}]\otimes V$ given in Lemma \ref{lem:MV}(1), we deduce an isomorphism of $\mathbb{C}[[\mathfrak{h}]]_0\rtimes W'$-modules \begin{eqnarray*} R(\widehat{M}_b)&\cong&\phi_1^\ast(x_{\pr}(\widehat{\mathbb{C}[\mathfrak{h}]}_b\otimes V))\nonumber\\ &\cong & \mathbb{C}[[\mathfrak{h}]]_0\otimes V.\label{completeiso} \end{eqnarray*} So the module $R(\widehat{M}_b)$ is free over $\mathbb{C}[[\mathfrak{h}]]_0$. The completion of the module $N$ at $0$ is isomorphic to $R(\widehat{M}_b)$. By Lemma \ref{lem:eufinite}(2) the module $N$ is free over $\mathbb{C}[\mathfrak{h}]$. The claim is proved.
(2) Since $\Res_b(M)$ has a standard filtration, by Lemma \ref{lem:MV} there exists a finite dimensional vector space $V'\subset \Res_b(M)$ such that $V'$ is stable under the action of $\mathbb{C} W'$ and we have an isomorphism of $\mathbb{C}[\overline{\mathfrak{h}}]\rtimes W'$-modules \begin{equation*} \Res_b(M)\cong \mathbb{C}[\overline{\mathfrak{h}}]\otimes V'. \end{equation*} Moreover, we have $\omega'([\Res_b(M)])=[V']$ where $\omega'$ is the map in Proposition \ref{Res}(2). The same proposition yields that $\Res^W_{W'}(\omega[M])=\omega'([\Res_b(M)])$. Since $\omega([M])=[V]$ by (\ref{eq:omegakgp}), the $\mathbb{C} W'$-module $V'$ is isomorphic to $\Res^W_{W'}(V)$. So we have an isomorphism of $\mathbb{C}[\overline{\mathfrak{h}}]\rtimes W'$-modules $$\Res_b(M)\cong \mathbb{C}[\overline{\mathfrak{h}}]\otimes\Res^{W}_{W'}(V).$$ \end{proof}
\section{KZ commutes with restriction functors}\label{s:KZcommute}
In this section, we relate the restriction and induction functors for rational DAHA's to the corresponding functors for Hecke algebras via the functor $\KZ$. We will always assume that the Hecke algebras have the same dimension as the corresponding group algebras. Thus the Knizhnik-Zamolodchikov functors admit the properties recalled in Section \ref{ss:KZ}.
\subsection{}\label{ss: thmiso}
Let $W$ be a complex reflection group acting on $\mathfrak{h}$. Let $b$ be a point in $\mathfrak{h}$ and let $W'$ be its stabilizer in $W$. We will abbreviate $\KZ=\KZ(W,\mathfrak{h})$, $\KZ'=\KZ(W',\overline{\mathfrak{h}})$. \begin{thm}\label{iso} There is an isomorphism of functors \begin{equation*}
\KZ'\circ\Res_b\cong\Resh\circ\KZ. \end{equation*} \end{thm} \begin{proof} We will regard $\KZ: \mathcal{O}_c(W,\mathfrak{h})\rightarrow \mathscr{H}_q(W)\modu$ as a functor from $\mathcal{O}_c(W,\mathfrak{h})$ to $B_W\modu$ in the obvious way. Similarly we will regard $\KZ'$ as a functor to $B_{W'}\modu$. Recall the inclusion $\imath: B_{W'}\hookrightarrow B_W$ from (\ref{heckeres4}). The theorem amounts to proving that for any $M\in \mathcal{O}_c(W,\mathfrak{h})$ there is a natural isomorphism of $B_{W'}$-modules \begin{equation}\label{eq:thm} \KZ'\circ\Res_b(M)\cong\imath_\ast\circ\KZ(M).\end{equation}
\emph{Step 1.} Recall the functor $\zeta: \mathcal{O}_{c'}(W',\mathfrak{h})\rightarrow\mathcal{O}_{c'}(W',\overline{\mathfrak{h}})$ from (\ref{zeta}) and its quasi-inverse $\zeta^{-1}$ in (\ref{eq:zetainverse}). Let $$N=\zeta^{-1}(\Res_b(M)).$$ We have $N\cong\Res_b(M)\otimes\mathbb{C}[\mathfrak{h}^{W'}].$ Since the canonical epimorphism $\mathfrak{h}\rightarrow\overline{\mathfrak{h}}$ induces a fibration $\mathfrak{h}'_{reg}\rightarrow\overline{\mathfrak{h}}_{reg}$, see Section \ref{ss:resHecke}, we have \begin{equation}\label{h} N_{\mathfrak{h}'_{reg}}\cong \Res_b(M)_{\overline{\mathfrak{h}}_{reg}}\otimes\mathbb{C}[\mathfrak{h}^{W'}].\end{equation} By Dunkl isomorphisms, the left hand side is a $\mathcal{D}(\mathfrak{h}'_{reg})\rtimes W'$-module while the right hand side is a $(\mathcal{D}(\overline{\mathfrak{h}}_{reg})\rtimes W')\otimes\mathcal{D}(\mathfrak{h}^{W'})$-module. Identify these two algebras in the obvious way. The isomorphism (\ref{h}) is compatible with the $W'$-equivariant $\mathcal{D}$-module structures. Hence we have $$(N_{\mathfrak{h}'_{reg}})^\nabla\cong(\Res_b(M)_{\overline{\mathfrak{h}}_{reg}})^\nabla\otimes \mathbb{C}[\mathfrak{h}^{W'}]^\nabla.$$ Since $\mathbb{C}[\mathfrak{h}^{W'}]^\nabla=\mathbb{C}$, this yields a natural isomorphism \begin{equation*} \ell_\ast\circ\KZ(W',\mathfrak{h})(N)\cong\KZ'\circ\Res_b(M), \end{equation*} where $\ell$ is the homomorphism defined in (\ref{heckeres1}).
\emph{Step 2.} Consider the $W'$-equivariant algebra isomorphism $$\phi: \mathbb{C}[\mathfrak{h}]\rightarrow\mathbb{C}[\mathfrak{h}],\quad x\mapsto x+x(b)\text{ for } x\in\mathfrak{h}^\ast.$$ It induces an isomorphism $\hat{\phi}:\mathbb{C}[[\mathfrak{h}]]_b\overset{\sim}\rightarrow\mathbb{C}[[\mathfrak{h}]]_0$. The latter yields an algebra isomorphism $$\mathbb{C}[[\mathfrak{h}]]_b\otimes_{\mathbb{C}[\mathfrak{h}]}\mathbb{C}[\mathfrak{h}_{reg}]\simeq \mathbb{C}[[\mathfrak{h}]]_0\otimes_{\mathbb{C}[\mathfrak{h}]}\mathbb{C}[\mathfrak{h}'_{reg}].$$ To see this note first that by definition, the left hand side is $\mathbb{C}[[\mathfrak{h}]]_b[\alpha_{s}^{-1}, s\in\mathcal{S}]$. For $s\in\mathcal{S}$, $s\notin W'$ the element $\alpha_{s}$ is invertible in $\mathbb{C}[[\mathfrak{h}]]_b$, so we have $$\mathbb{C}[[\mathfrak{h}]]_b\otimes_{\mathbb{C}[\mathfrak{h}]}\mathbb{C}[\mathfrak{h}_{reg}]=\mathbb{C}[[\mathfrak{h}]]_b[\alpha_{s}^{-1}, s\in\mathcal{S}\cap W'].$$ For $s\in \mathcal{S}\cap W'$ we have $\alpha_{s}(b)=0$, so $\hat{\phi}(\alpha_s)=\alpha_s$. Hence \begin{eqnarray*} \hat{\phi}(\mathbb{C}[[\mathfrak{h}]]_b)[\hat{\phi}(\alpha_{s})^{-1}, s\in\mathcal{S}\cap W']&=&\mathbb{C}[[\mathfrak{h}]]_0[\alpha_{s}^{-1}, s\in\mathcal{S}\cap W']\\ &=&\mathbb{C}[[\mathfrak{h}]]_0\otimes_{\mathbb{C}[\mathfrak{h}]}\mathbb{C}[\mathfrak{h}'_{reg}]. \end{eqnarray*}
\emph{Step 3.} We will assume in Steps 3, 4, 5 that $M$ is a module in $\mathcal{O}^\Delta_c(W,\mathfrak{h})$. In this step we prove that $N$ is isomorphic to $\phi^\ast(M)$ as $\mathbb{C}[\mathfrak{h}]\rtimes W'$-modules. Let $V$ be a subspace of $M$ as in Lemma \ref{lem:MV}(1). So we have an isomorphism of $\mathbb{C}[\mathfrak{h}]\rtimes W$-modules \begin{equation}\label{inducefilt} M\cong \mathbb{C}[\mathfrak{h}]\otimes V. \end{equation} Also, by Proposition \ref{standard}(2) there is an isomorphism of $\mathbb{C}[\mathfrak{h}]\rtimes W'$-modules \begin{eqnarray*} N&\cong&\mathbb{C}[\mathfrak{h}]\otimes\Res^W_{W'}(V). \end{eqnarray*} So $N$ is isomorphic to $\phi^\ast(M)$ as $\mathbb{C}[\mathfrak{h}]\rtimes W'$-modules.
\emph{Step 4.} In this step we compare $(\widehat{(\phi^\ast(M))}_0)_{\mathfrak{h}'_{reg}}$ and $(\widehat{N}_0)_{\mathfrak{h}'_{reg}}$ as $\widehat{\mathcal{D}(\mathfrak{h}'_{reg})}_0$-modules. The definition of these $\widehat{\mathcal{D}(\mathfrak{h}'_{reg})}_0$-module structures will be given below in terms of connections. By (\ref{Resb}) we have $N=E\circ R(\widehat{M}_b)$, so we have $\widehat{N}_0\cong R(\widehat{M}_b).$ Next, by (\ref{eq:R(M)}) we have an isomorphism of $\mathbb{C}[[\mathfrak{h}]]_0\rtimes W'$-modules \begin{eqnarray*}
R(\widehat{M}_b)&=&\hat{\phi}^\ast (x_{\pr}(\widehat{M}_b))\\
&=&\widehat{(\phi^\ast(M))}_0. \end{eqnarray*} So we get an isomorphism of $\mathbb{C}[[\mathfrak{h}]]_0\rtimes W'$-modules $$\hat\Psi: \widehat{(\phi^\ast(M))}_0\rightarrow\widehat{N}_0.$$ Now, let us consider connections on these modules. Note that by Step $2$ we have $$(\widehat{(\phi^\ast(M))}_0)_{\mathfrak{h}'_{reg}} =\hat{\phi}^\ast(x_{\pr}(\widehat{M}_b)_{\mathfrak{h}_{reg}}).$$ Write $\nabla$ for the connection on $M_{\mathfrak{h}_{reg}}$ given by the Dunkl isomorphism for $H_c(W,\mathfrak{h}_{reg})$. We equip $(\widehat{(\phi^\ast(M))}_0)_{\mathfrak{h}'_{reg}}$ with the connection $\tilde{\nabla}$ given by $$\tilde{\nabla}_a(x_{\pr}m)=x_{\pr}(\nabla_a(m)),\quad\forall\ m\in(\widehat{M}_b)_{\mathfrak{h}_{reg}},\ a\in\mathfrak{h}.$$ Let $\nabla^{(b)}$ be the connection on $N_{\mathfrak{h}'_{reg}}$ given by the Dunkl isomorphism for $H_{c'}(W',\mathfrak{h}'_{reg})$. This restricts to a connection on $(\widehat{N}_0)_{\mathfrak{h}'_{reg}}$. We claim that $\Psi$ is compatible with these connections, i.e., we have \begin{equation}\label{but} \nabla_a^{(b)}(x_{\pr} m)=x_{\pr}\nabla_a(m),\quad\forall\ m\in (\widehat{M}_b)_{\mathfrak{h}_{reg}}. \end{equation} Recall the subspace $V$ of $M$ from Step 3. By Lemma \ref{lem:MV}(1) the map $$(\widehat{\mathbb{C}[\mathfrak{h}]}_b\otimes_{\mathbb{C}[\mathfrak{h}]}\mathbb{C}[\mathfrak{h}_{reg}])\otimes V\rightarrow (\widehat{M_b})_{\mathfrak{h}_{reg}},\quad p\otimes v\mapsto pv$$ is a bijection. So it is enough to prove (\ref{but}) for $m=pv$ with $p\in\widehat{\mathbb{C}[\mathfrak{h}]}_b\otimes_{\mathbb{C}[\mathfrak{h}]}\mathbb{C}[\mathfrak{h}_{reg}]$, $v\in V$. We have \begin{eqnarray}\label{conncal} \nabla^{(b)}_a(x_{\pr}p v)&=&(y^{(b)}_a-\sum_{s\in\mathcal{S}\cap W'}\frac{2c_s}{1-\lambda_s}\frac{\alpha_s(a)} {x_{\alpha_s}^{(b)}}(s-1))(x_{\pr}p v)\nonumber\\ &=&x_{\pr}(y_a+\sum_{s\in\mathcal{S},s\notin W'}\frac{2c_s}{1-\lambda_s}\frac{\alpha_s(a)}{x_{\alpha_s}}-\nonumber\\ &&-\sum_{s\in\mathcal{S}\cap W'}\frac{2c_s}{1-\lambda_s}\frac{\alpha_s(a)}{x_{\alpha_s}}(s-1))(x_{\pr}p v)\nonumber\\ &=&x_{\pr}(\nabla_a+\sum_{s\in\mathcal{S},s\notin W'}\frac{2c_s}{1-\lambda_s}\frac{\alpha_s(a)}{x_{\alpha_s}}s)(x_{\pr}p v)\nonumber\\ &=&x_{\pr}\nabla_a(x_{\pr}p v) . \end{eqnarray} Here the first equality is by the Dunkl isomorphism for $H_{c'}(W',\mathfrak{h}'_{reg})$. The second is by (\ref{xform}), (\ref{wform}), (\ref{yform}) and the fact that $x_{\pr}^2=x_{\pr}$. The third is by the Dunkl isomorphism for $H_{c}(W,\mathfrak{h}_{reg})$. The last is by (\ref{killwform}). Next, since $x_{\pr}$ is the idempotent in $\widehat{\mathbb{C}[\mathfrak{h}]}_b$ corresponding to the component $\mathbb{C}[[\mathfrak{h}]]_b$ in the decomposition (\ref{eq:varpi}), we have \begin{eqnarray*} \nabla_a(x_{\pr}p v)&=&(\partial_a(x_{\pr}p))v+x_{\pr}p\,(\nabla_av)\\ &=&x_{\pr}(\partial_a(p))v+x_{\pr}p\,(\nabla_av)\\ &=&x_{\pr}\nabla_a(p v). \end{eqnarray*} Together with (\ref{conncal}) this implies that $$\nabla^{(b)}_a(x_{\pr}p v)=x_{\pr}\nabla_a(p v).$$ So (\ref{but}) is proved.
\emph{Step 5.} In this step we prove isomorphism (\ref{eq:thm}) for $M\in\mathcal{O}^\Delta_c(W,\mathfrak{h})$. Here we need some more notation. For $X=\mathfrak{h}$ or $\mathfrak{h}'_{reg}$, let $U$ be an open analytic subvariety of $X$, write $i:U\hookrightarrow X$ for the canonical embedding. For $F$ an analytic coherent sheaf on $X$ we write $i^\ast (F)$ for the restriction of $F$ to $U$. If $U$ contains $0$, for an analytic locally free sheaf $E$ over $U$, we write $\widehat{E}$ for the restriction of $E$ to the formal disc at $0$.
Let $\Omega\subset \mathfrak{h}$ be the open ball defined in (\ref{eq:omega}). Let $f:\mathfrak{h}\rightarrow\mathfrak{h}$ be the morphism defined by $\phi$. It maps $\Omega$ to an open ball $\Omega_0$ centered at $0$. We have $$f(\Omega\cap\mathfrak{h}_{reg})=\Omega_0\cap\mathfrak{h}'_{reg}.$$ Let $u:\Omega_0\cap\mathfrak{h}'_{reg}\hookrightarrow\mathfrak{h}$ and $v: \Omega\cap\mathfrak{h}_{reg}\hookrightarrow\mathfrak{h}$ be the canonical embeddings. By Step 3 there is an isomorphism of $W'$-equivariant analytic locally free sheaves over $\Omega_0\cap\mathfrak{h}'_{reg}$ $$u^\ast (N^{an})\cong \phi^\ast(v^\ast (M^{an})).$$ By Step 4 there is an isomorphism $$\widehat{u^\ast (N^{an})}\overset{\sim}\ra\widehat{\phi^\ast(v^\ast (M^{an}))}$$ which is compatible with their connections. It follows from Lemma \ref{monodromie} below that there is an isomorphism $$(u^\ast (N^{an}))^{\nabla^{(b)}} \cong \phi^\ast((v^\ast (M^{an}))^{\nabla}).$$ Since $\Omega_0\cap\mathfrak{h}'_{reg}$ is homotopy equivalent to $\mathfrak{h}'_{reg}$ via $u$, the left hand side is isomorphic to $(N_{\mathfrak{h}'_{reg}})^{\nabla^{(b)}}$. So we have \begin{equation*} \kappa_\ast\circ\jmath_\ast\circ\KZ(M)\cong\KZ(W',\mathfrak{h})(N), \end{equation*} where $\kappa$, $\jmath$ are as in (\ref{heckeres2}), (\ref{heckeres3}). Combined with Step 1 we have the following isomorphisms \begin{eqnarray}\label{i} \KZ'\circ\Res_b(M)&\cong&\ell_\ast\circ\KZ(W',\mathfrak{h})(N)\nonumber\\ &\cong&\ell_\ast\circ\kappa_\ast\circ\jmath_\ast\circ\KZ(M)\\ &=&\imath_\ast\circ\KZ(M).\nonumber \end{eqnarray} They are functorial on $M$.
\begin{lemme}\label{monodromie} Let $E$ be an analytic locally free sheaf over the complex manifold $\mathfrak{h}'_{reg}$. Let $\nabla_1$, $\nabla_2$ be two integrable connections on $E$ with regular singularities. If there exists an isomorphism $\hat{\psi}:(\widehat{E},\nabla_1)\rightarrow (\widehat{E},\nabla_2)$, then the local systems $E^{\nabla_1}$ and $E^{\nabla_2}$ are isomorphic. \end{lemme} \begin{proof} Write $\End(E)$ for the sheaf of endomorphisms of $E$. Then $\End(E)$ is a locally free sheaf over $\mathfrak{h}'_{reg}$. The connections $\nabla_1$, $\nabla_2$ define a connection $\nabla$ on $\End(E)$ as follows, $$\nabla: \End(E)\rightarrow\End(E),\quad f\mapsto \nabla_2\circ f-f\circ\nabla_1.$$ So the isomorphism $\hat{\psi}$ is a horizontal section of $(\widehat{\End(E)},\nabla)$. Let $(\End(E)^\nabla)_0$ be the set of germs of horizontal sections of $(\End(E),\nabla)$ on zero. By the Comparison theorem \cite[Theorem 6.3.1]{KK} the canonical map $(\End(E)^\nabla)_0\rightarrow (\widehat{\End(E)})^\nabla$ is bijective. Hence there exists a holomorphic isomorphism $\psi: (E,\nabla_1)\rightarrow (E,\nabla_2)$ which maps to $\hat{\psi}$. Now, let $U$ be an open ball in $\mathfrak{h}'_{reg}$ centered at $0$ with radius $\varepsilon$ small enough such that the holomorphic isomorphism $\psi$ converges in $U$. Write $E_U$ for the restriction of $E$ to $U$. Then $\psi$ induces an isomorphism of local systems $(E_U)^{\nabla_1}\cong (E_U)^{\nabla_2}$. Since $\mathfrak{h}'_{reg}$ is homotopy equivalent to $U$, we have $$E^{\nabla_1}\cong E^{\nabla_2}.$$ \end{proof}
\emph{Step 6.} Finally, write $I$ for the inclusion of $\Proj_c(W,\mathfrak{h})$ into $\mathcal{O}_c(W,\mathfrak{h})$. By Lemma \ref{standfilt}(1) any projective object in $\mathcal{O}_c(W,\mathfrak{h})$ has a standard filtration, so (\ref{i}) yields an isomorphism of functors $$\KZ'\circ\Res_b\circ I\rightarrow \imath_\ast\circ\KZ\circ I.$$ Applying Lemma \ref{projiso} to the exact functors $\KZ'\circ\Res_b$ and $\imath_\ast\circ\KZ$ yields that there is an isomorphism of functors $$\KZ'\circ\Res_b\cong\imath_\ast\circ\KZ.$$ \end{proof}
\subsection{}\label{ss:coriso}
We give some corollaries of Theorem \ref{iso}. \begin{cor}\label{indiso} There is an isomorphism of functors \begin{equation*} \KZ\circ\Ind_b\cong\coIndh\circ\KZ'. \end{equation*} \end{cor} \begin{proof} To simplify notation let us write $$\mathcal{O}=\mathcal{O}_c(W,\mathfrak{h}), \quad\mathcal{O}'=\mathcal{O}_{c'}(W',\overline{\mathfrak{h}}), \quad\mathscr{H}=\mathscr{H}_q(W), \quad\mathscr{H}'=\mathscr{H}_{q'}(W').$$ Recall that the functor $\KZ$ is represented by a projective object $P_{\KZ}$ in $\mathcal{O}$. So for any $N\in \mathcal{O}'$ we have a morphism of $\mathscr{H}$-modules \begin{eqnarray} \KZ\circ\Ind_b(N)&\cong&\Hom_{\mathcal{O}}(P_{\KZ},\Ind_b(N))\nonumber\\ &\cong&\Hom_{\mathcal{O}'}(\Res_b(P_{\KZ}),N)\nonumber\\ \quad&\rightarrow&\Hom_{\mathscr{H}'}(\KZ'(\Res_b (P_{\KZ})), \KZ'(N)).\label{a} \end{eqnarray} By Theorem \ref{iso} we have $$\KZ'\circ\Res_b(P_{\KZ})\cong \sideset{^{\scriptscriptstyle\mathscr{H}}}{^W_{W'}}\Res\circ\KZ(P_{\KZ}).$$ Recall from Section \ref{ss:KZ} that the $\mathscr{H}$-module $\KZ(P_{\KZ})$ is isomorphic to $\mathscr{H}$. So as $\mathscr{H}'$-modules $\KZ'(\Res_b(P_{\KZ}))$ is also isomorphic to $\mathscr{H}$. Therefore the morphism (\ref{a}) rewrites as \begin{equation}\label{b} \chi(N):\KZ\circ\Ind_b(N)\rightarrow\Hom_{\mathscr{H}'}(\mathscr{H},\KZ'(N)). \end{equation} It yields a morphism of functors $$\chi: \KZ\circ\Ind_b\rightarrow \coIndh\circ \KZ'.$$ Note that if $N$ is a projective object in $\mathcal{O}'$, then $\chi(N)$ is an isomorphism by Proposition \ref{KZ}(1). So Lemma \ref{projiso} implies that $\chi$ is an isomorphism of functors, because both functors $\KZ\circ\Ind_b$ and $\coIndh\circ \KZ'$ are exact. \end{proof}
\subsection{}
The following lemma will be useful to us.
\begin{lemme}\label{fullyfaithful} Let $K$, $L$ be two right exact functors from $\mathcal{O}_1$ to $\mathcal{O}_2$, where $\mathcal{O}_1$ and $\mathcal{O}_2$ can be either $\mathcal{O}_c(W,\mathfrak{h})$ or $\mathcal{O}_{c'}(W',\overline{\mathfrak{h}})$. Suppose that $K$, $L$ map projective objects to projective ones. Then the vector space homomorphism \begin{equation}\label{y} \Hom(K,L)\rightarrow\Hom(\KZ_2\circ K, \KZ_2\circ L),\quad f\mapsto 1_{\KZ_2}f, \end{equation} is an isomorphism. \end{lemme} Notice that if $K=L$, this is even an isomorphism of rings.
\begin{proof} Let $\Proj_1$, $\Proj_2$ be respectively the subcategories of projective objects in $\mathcal{O}_1$, $\mathcal{O}_2$. Write $\tilde{K}$, $\tilde{L}$ for the functors from $\Proj_1$ to $\Proj_2$ given by the restrictions of $K$, $L$, respectively. Let $\mathscr{H}_2$ be the Hecke algebra corresponding to $\mathcal{O}_2$. Since the functor $\KZ_2$ is fully faithful over $\Proj_2$ by Proposition \ref{KZ}(1), the following functor $$\Fct(\Proj_1,\Proj_2)\rightarrow\Fct(\Proj_1, \mathscr{H}_2\modu)\,,\quad G\mapsto \KZ_2\circ G$$ is also fully faithful. This yields an isomorphism $$\Hom(\tilde{K},\tilde{L})\overset{\sim}\ra\Hom(\KZ_2\circ\tilde{K},\KZ_2\circ\tilde{L}),\quad f\mapsto 1_{\KZ_2}f.$$ Next, by Lemma \ref{projiso} the canonical morphisms $$\Hom(K,L)\rightarrow\Hom(\tilde{K}, \tilde{L}),\quad\Hom(\KZ_2\circ K,\KZ_2\circ L)\rightarrow \Hom(\KZ_2\circ\tilde{K},\KZ_2\circ\tilde{L})$$ are isomorphisms. So the map (\ref{y}) is also an isomorphism. \end{proof}
Let $b(W,W'')$ be a point in $\mathfrak{h}$ whose stabilizer is $W''$. Let $b(W',W'')$ be its image in $\overline{\mathfrak{h}}=\mathfrak{h}/\mathfrak{h}^{W'}$ via the canonical projection. Write $b(W,W')=b$. \begin{cor}\label{corcom} There are isomorphisms of functors \begin{eqnarray*} \Res_{b(W',W'')}\circ\Res_{b(W,W')}&\cong&\Res_{b(W,W'')},\\ \Ind_{b(W,W')}\circ\Ind_{b(W',W'')}&\cong&\Ind_{b(W,W'')}. \end{eqnarray*} \end{cor} \begin{proof} Since the restriction functors map projective objects to projective ones by Proposition \ref{Res}(1), Lemma \ref{fullyfaithful} applied to the categories $\mathcal{O}_1=\mathcal{O}_c(W,\mathfrak{h})$, $\mathcal{O}_2=\mathcal{O}_{c''}(W'',\mathfrak{h}/\mathfrak{h}^{W''})$ yields an isomorphism \begin{eqnarray*} &&\Hom(\Res_{b(W',W'')}\circ\Res_{b(W,W')},\Res_{b(W,W'')})\\ &&\cong\Hom(\KZ''\circ\Res_{b(W',W'')}\circ\Res_{b(W,W')},\KZ''\circ\Res_{b(W,W'')}). \end{eqnarray*} By Theorem \ref{iso} the set on the second row is \begin{equation}\label{d} \Hom(\sideset{^{\scriptscriptstyle\mathscr{H}}}{^{W'}_{W''}}\Res\circ\Resh\circ\KZ, \sideset{^{\scriptscriptstyle\mathscr{H}}}{^W_{W''}}\Res\circ\KZ). \end{equation}
By the presentations of Hecke algebras in \cite[Proposition 4.22]{BMR}, there is an isomorphism $$\sigma:\sideset{^{\scriptscriptstyle\mathscr{H}}}{^{W'}_{W''}}\Res\circ\Resh\overset{\sim}\ra\sideset{^{\scriptscriptstyle\mathscr{H}}}{^{W}_{W''}}\Res.$$ Hence the element $\sigma 1_{\KZ}$ in the set (\ref{d}) maps to an isomorphism $$\Res_{b(W',W'')}\circ\Res_{b(W,W')}\cong\Res_{b(W,W'')}.$$ This proves the first isomorphism in the corollary. The second one follows from the uniqueness of right adjoint functors. \end{proof}
\subsection{Biadjointness of $\Res_b$ and $\Ind_b$.}\label{ss:biadjoint}
Recall that a finite dimensional $\mathbb{C}$-algebra $A$ is symmetric if $A$ is isomorphic to $A^\ast=\Hom_{\mathbb{C}}(A,\mathbb{C})$ as $(A,A)$-bimodules.
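A basic example to keep in mind (not needed in the sequel) is the group algebra of a finite group $G$: the pairing $$\langle a,b\rangle=\text{coefficient of the unit element in }ab,\qquad a,b\in\mathbb{C} G,$$ is symmetric and non-degenerate, and it identifies $\mathbb{C} G$ with $(\mathbb{C} G)^\ast$ as $(\mathbb{C} G,\mathbb{C} G)$-bimodules, so $\mathbb{C} G$ is a symmetric algebra.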
\begin{lemme}\label{heckeind}
Assume that $\mathscr{H}_{q}(W)$ and $\mathscr{H}_{q'}(W')$ are symmetric
algebras. Then the functors $\Indh$ and $\coIndh$ are isomorphic,
i.e., the functor $\Indh$ is biadjoint to $\Resh$. \end{lemme} \begin{proof}
We abbreviate $\mathscr{H}=\mathscr{H}_{q}(W)$ and $\mathscr{H}'=\mathscr{H}_{q'}(W')$. Since $\mathscr{H}$ is free as a left $\mathscr{H}'$-module,
for any $\mathscr{H}'$-module $M$ the map
\begin{equation}\label{eq:proj}
\Hom_{\mathscr{H}'}(\mathscr{H},\mathscr{H}')\otimes_{\mathscr{H}'}M\rightarrow
\Hom_{\mathscr{H}'}(\mathscr{H},M)
\end{equation} given by multiplication is an
isomorphism of $\mathscr{H}$-modules. By assumption $\mathscr{H}'$ is isomorphic to $(\mathscr{H}')^\ast$ as $(\mathscr{H}',\mathscr{H}')$-bimodules.
Thus we have the following $(\mathscr{H},\mathscr{H}')$-bimodule isomorphisms \begin{eqnarray*} \Hom_{\mathscr{H}'}(\mathscr{H},\mathscr{H}')&\cong&\Hom_{\mathscr{H}'}(\mathscr{H},(\mathscr{H}')^\ast)\\ &\cong&\Hom_{\mathbb{C}}(\mathscr{H}'\otimes_{\mathscr{H}'}\mathscr{H},\mathbb{C})\\ &\cong&\mathscr{H}^\ast\\ &\cong&\mathscr{H}. \end{eqnarray*} The last isomorphism follows from the fact that $\mathscr{H}$ is symmetric. Thus, by (\ref{eq:proj}) the functors $\Indh$ and $\coIndh$ are isomorphic. \end{proof}
\begin{rmq}\label{rmq:symmetric} It has been proved that $\mathscr{H}_{q}(W)$ is a symmetric algebra for all irreducible complex reflection groups $W$ except for some of the 34 exceptional groups in the Shephard-Todd classification. See \cite[Section 2A]{BMM} for details. \end{rmq}
The biadjointness of $\Res_b$ and $\Ind_b$ was conjectured in \cite[Remark 3.18]{BE} and was announced by I. Gordon and M. Martino. We give a proof in Proposition \ref{leftadjunction}, since such a proof does not yet seem to be available in the literature. Let us first consider the following lemma.
\begin{lemme}\label{lem:adjunction}
(1) Let $A$, $B$ be noetherian algebras and $T$ be a functor $$T:A\modu\rightarrow B\modu.$$ If $T$ is right exact and commutes with direct sums, then it has a right adjoint.
(2) The functor $$\Res_b:\mathcal{O}_{c}(W,\mathfrak{h})\rightarrow\mathcal{O}_{c'}(W',\overline{\mathfrak{h}})$$
has a left adjoint. \end{lemme} \begin{proof} (1) Consider the $(B,A)$-bimodule $M=T(A)$. We claim that the functor $T$ is isomorphic to the functor $M\otimes_A-$. Indeed, by definition we have $T(A)\cong M\otimes_AA$ as $B$-modules. Now, for any $N\in A\modu$, since $N$ is finitely generated and $A$ is noetherian there exist $m$, $n\in\mathbb{N}$ and an exact sequence $$A^{\oplus n}\longrightarrow A^{\oplus m}\longrightarrow N\longrightarrow 0.$$ Since both $T$ and $M\otimes_A-$ are right exact and they commute with direct sums, the fact that $T(A)\cong M\otimes_AA$ implies that $T(N)\cong M\otimes_AN$ as $B$-modules. This proves the claim. Now, the functor $M\otimes_A-$ has a right adjoint $\Hom_B(M,-)$, so $T$ also has a right adjoint.
(2) Recall that for any complex reflection group $W$, a contravariant duality functor $$(-)\spcheck:\mathcal{O}_{c}(W,\mathfrak{h})\rightarrow\mathcal{O}_{c^\dag}(W,\mathfrak{h}^\ast)$$ was defined in \cite[Section 4.2]{GGOR}, here $c^\dag:\mathcal{S}\rightarrow\mathbb{C}$ is another parameter explicitly determined by $c$. Consider the functor $$\Res_b\spcheck=(-)\spcheck\circ\Res_b\circ(-)\spcheck: \mathcal{O}_{c^\dag}(W,\mathfrak{h}^\ast)\rightarrow \mathcal{O}_{{c'}^\dag}(W',(\overline{\mathfrak{h}})^\ast).$$ The category $\mathcal{O}_{c^\dag}(W,\mathfrak{h}^\ast)$ has a projective generator $P$. The algebra $\End_{\mathcal{O}_{c^\dag}(W,\mathfrak{h}^\ast)}(P)^{\op}$ is finite dimensional over $\mathbb{C}$ and by Morita theory we have an equivalence of categories $$\mathcal{O}_{c^\dag}(W,\mathfrak{h}^\ast)\cong \End_{\mathcal{O}_{c^\dag}(W,\mathfrak{h}^\ast)}(P)^{\op}\modu.$$ Since the functor $\Res_b\spcheck$ is exact and obviously commutes with direct sums, by part (1) it has a right adjoint $\Psi$. Then it follows that $(-)\spcheck\circ\Psi\circ(-)\spcheck$ is left adjoint to $\Res_b$. The lemma is proved. \end{proof} \begin{prop}\label{leftadjunction} Under the assumption of Lemma \ref{heckeind}, the functor $\Ind_b$ is left adjoint to $\Res_b$. \end{prop} \begin{proof} \emph{Step 1.} We abbreviate $\mathcal{O}=\mathcal{O}_c(W,\mathfrak{h})$, $\mathcal{O}'=\mathcal{O}_{c'}(W',\overline{\mathfrak{h}})$, $\mathscr{H}=\mathscr{H}_q(W)$, $\mathscr{H}'=\mathscr{H}_{q'}(W')$, and write $\Id_{\mathcal{O}}$, $\Id_{\mathcal{O}'}$, $\Id_{\mathscr{H}}$, $\Id_{\mathscr{H}'}$ for the identity functor on the corresponding categories. We also abbreviate $E^{\scriptscriptstyle\mathscr{H}}=\Resh$, $F^{\scriptscriptstyle\mathscr{H}}=\Indh$ and $E=\Res_b$. By Lemma \ref{lem:adjunction} the functor $E$ has a left adjoint. We denote it by $F:\mathcal{O}'\rightarrow\mathcal{O}$. Recall the functors $$\KZ:\mathcal{O}\rightarrow\mathscr{H}\modu,\quad \KZ':\mathcal{O}'\rightarrow\mathscr{H}'\modu.$$ The goal of this step is to show that there exists an isomorphism of functors \begin{equation*}
\KZ\circ F\cong F^{\scriptscriptstyle\mathscr{H}}\circ\KZ'. \end{equation*} To this end, let $S$, $S'$ be respectively the right adjoints of $\KZ$, $\KZ'$, see Section \ref{ss:KZ}. We will first give an isomorphism of functors $$F^{\scriptscriptstyle\mathscr{H}}\cong\KZ\circ F\circ S'.$$ Let $M\in\mathscr{H}'\modu$ and $N\in\mathscr{H}\modu$. Consider the following equalities given by adjunctions \begin{eqnarray*}
\Hom_{\mathscr{H}}(\KZ\circ F\circ S'(M), N)&=&\Hom_{\mathcal{O}}(F\circ S'(M), S(N))\\
&=&\Hom_{\mathcal{O}'}(S'(M), E\circ S(N)). \end{eqnarray*} The functor $\KZ'$ yields a map \begin{equation}\label{eq:mapKZ'}
a(M,N): \Hom_{\mathcal{O}'}(S'(M), E\circ S(N))\rightarrow\Hom_{\mathscr{H}'}(\KZ'\circ S'(M), \KZ'\circ E\circ S(N)). \end{equation} Since the canonical adjunction maps $\KZ'\circ S'\rightarrow \Id_{\mathscr{H}'}$, $\KZ\circ S\rightarrow \Id_{\mathscr{H}}$ are isomorphisms (see Section \ref{ss:KZ}) and since we have an isomorphism of functors $\KZ'\circ E\cong E^{\scriptscriptstyle\mathscr{H}}\circ \KZ$ by Theorem \ref{iso}, we get the following equalities \begin{eqnarray*}
\Hom_{\mathscr{H}'}(\KZ'\circ S'(M), \KZ'\circ E\circ S(N))&=&\Hom_{\mathscr{H}'}(M, E^{\scriptscriptstyle\mathscr{H}}\circ \KZ\circ S(N))\\
&=&\Hom_{\mathscr{H}'}(M, E^{\scriptscriptstyle\mathscr{H}}(N))\\
&=&\Hom_{\mathscr{H}}(F^{\scriptscriptstyle\mathscr{H}}(M), N). \end{eqnarray*} In the last equality we used that $F^{\scriptscriptstyle\mathscr{H}}$ is left adjoint to $E^{\scriptscriptstyle\mathscr{H}}$. So the map (\ref{eq:mapKZ'}) can be rewritten into the following form \begin{equation*}
a(M,N): \Hom_{\mathscr{H}}(\KZ\circ F\circ S'(M), N)\rightarrow\Hom_{\mathscr{H}}(F^{\scriptscriptstyle\mathscr{H}}(M), N). \end{equation*} Now, take $N=\mathscr{H}$. Recall that $\mathscr{H}$ is isomorphic to $\KZ(P_{\KZ})$ as $\mathscr{H}$-modules. Since $P_{\KZ}$ is projective, by Proposition \ref{KZ}(2) we have a canonical isomorphism in $\mathcal{O}$ $$P_{\KZ}\cong S(\KZ(P_{\KZ}))=S(\mathscr{H}).$$ Further $E$ maps projectives to projectives by Proposition \ref{Res}(1), so $E\circ S(\mathscr{H})$ is also projective. Hence Proposition \ref{KZ}(1) implies that in this case (\ref{eq:mapKZ'}) is an isomorphism for any $M$, i.e., we get an isomorphism \begin{equation*}
a(M,\mathscr{H}): \Hom_{\mathscr{H}}(\KZ\circ F\circ S'(M), \mathscr{H})\overset{\sim}\ra\Hom_{\mathscr{H}}(F^{\scriptscriptstyle\mathscr{H}}(M), \mathscr{H}). \end{equation*} Further this is an isomorphism of right $\mathscr{H}$-modules with respect to the $\mathscr{H}$-actions induced by the right action of $\mathscr{H}$ on itself. Now, the fact that $\mathscr{H}$ is a symmetric algebra yields that for any finite dimensional $\mathscr{H}$-module $N$ we have isomorphisms of right $\mathscr{H}$-modules \begin{eqnarray*}
\Hom_{\mathscr{H}}(N,\mathscr{H})&\cong&\Hom_{\mathscr{H}}(N,\Hom_{\mathbb{C}}(\mathscr{H},\mathbb{C}))\\
&\cong&\Hom_\mathbb{C}(N,\mathbb{C}). \end{eqnarray*} Therefore $a(M,\mathscr{H})$ yields an isomorphism of right $\mathscr{H}$-modules $$\Hom_{\mathbb{C}}(\KZ\circ F\circ S'(M), \mathbb{C})\rightarrow\Hom_{\mathbb{C}}(F^{\scriptscriptstyle\mathscr{H}}(M), \mathbb{C}).$$ We deduce a natural isomorphism of left $\mathscr{H}$-modules $$\KZ\circ F\circ S'(M)\cong F^{\scriptscriptstyle\mathscr{H}}(M)$$ for any $\mathscr{H}'$-module $M$. This gives an isomorphism of functors $$\psi:\KZ\circ F\circ S'\overset{\sim}\ra F^{\scriptscriptstyle\mathscr{H}}.$$ Finally, consider the canonical adjunction map $\eta:\Id_{\mathcal{O}'}\rightarrow S'\circ\KZ'$. We have a morphism of functors $$\phi=(\psi 1_{\KZ'})\circ(1_{\KZ\circ F}\eta):\KZ\circ F\rightarrow F^{\scriptscriptstyle\mathscr{H}}\circ\KZ'.$$ Note that $\psi 1_{\KZ'}$ is an isomorphism of functors. If $Q$ is a projective object in $\mathcal{O}'$, then by Proposition \ref{KZ}(2) the morphism $\eta(Q): Q\rightarrow S'\circ\KZ'(Q)$ is also an isomorphism, so $\phi(Q)$ is an isomorphism. This implies that $\phi$ is an isomorphism of functors by Lemma \ref{projiso}, because both $\KZ\circ F$ and $F^{\scriptscriptstyle\mathscr{H}}\circ\KZ'$ are right exact functors. Here the right exactness of $F$ follows from the fact that it is left adjoint to $E$. So we get the desired isomorphism of functors $$\KZ\circ F\cong F^{\scriptscriptstyle\mathscr{H}}\circ\KZ'.$$
\emph{Step 2.} Let us now prove that $F$ is right adjoint to $E$. By uniqueness of adjoint functors, this will imply that $F$ is isomorphic to $\Ind_b$. First, by Lemma \ref{heckeind} the functor $F^{\scriptscriptstyle\mathscr{H}}$ is isomorphic to $\coIndh$. So $F^{\scriptscriptstyle\mathscr{H}}$ is right adjoint to $E^{\scriptscriptstyle\mathscr{H}}$, i.e., we have morphisms of functors $$\varepsilon^{\scriptscriptstyle\mathscr{H}}: E^{\scriptscriptstyle\mathscr{H}}\circ F^{\scriptscriptstyle\mathscr{H}}\rightarrow\Id_{\mathscr{H}'},\quad \eta^{\scriptscriptstyle\mathscr{H}}: \Id_{\mathscr{H}}\rightarrow F^{\scriptscriptstyle\mathscr{H}}\circ E^{\scriptscriptstyle\mathscr{H}}$$ such that $$(\varepsilon^{\scriptscriptstyle\mathscr{H}} 1_{E^{\scriptscriptstyle\mathscr{H}}})\circ(1_{E^{\scriptscriptstyle\mathscr{H}}}\eta^{\scriptscriptstyle\mathscr{H}})=1_{E^{\scriptscriptstyle\mathscr{H}}},\quad (1_{F^{\scriptscriptstyle\mathscr{H}}}\varepsilon^{\scriptscriptstyle\mathscr{H}})\circ(\eta^{\scriptscriptstyle\mathscr{H}}1_{F^{\scriptscriptstyle\mathscr{H}}})=1_{F^{\scriptscriptstyle\mathscr{H}}}.$$ Next, both $F$ and $E$ have exact right adjoints, given respectively by $E$ and $\Ind_b$. Therefore $F$ and $E$ map projective objects to projective ones. Applying Lemma \ref{fullyfaithful} to $\mathcal{O}_1=\mathcal{O}_2=\mathcal{O}'$, $K=E\circ F$, $L=\Id_{\mathcal{O}'}$ yields that the following map is bijective \begin{equation}\label{eq:isoFE} \Hom(E\circ F,\Id_{\mathcal{O}'})\rightarrow\Hom(\KZ'\circ E\circ F,\KZ'\circ\Id_{\mathcal{O}'}),\quad f\mapsto 1_{\KZ'}f. \end{equation} By Theorem \ref{iso} and Step $1$ there exist isomorphisms of functors $$\phi_E: E^{\scriptscriptstyle\mathscr{H}}\circ\KZ\overset{\sim}\ra\KZ'\circ E,\quad \phi_F: F^{\scriptscriptstyle\mathscr{H}}\circ\KZ'\overset{\sim}\ra\KZ\circ F.$$ Let \begin{eqnarray*} \phi_{EF}=(\phi_E 1_F)\circ(1_{E^{\scriptscriptstyle\mathscr{H}}}\phi_F): E^{\scriptscriptstyle\mathscr{H}}\circ F^{\scriptscriptstyle\mathscr{H}}\circ\KZ'\overset{\sim}\ra\KZ'\circ E\circ F,\\ \phi_{FE}=(\phi_F 1_E)\circ(1_{F^{\scriptscriptstyle\mathscr{H}}}\phi_E):F^{\scriptscriptstyle\mathscr{H}}\circ E^{\scriptscriptstyle\mathscr{H}}\circ\KZ\overset{\sim}\ra\KZ\circ F\circ E. 
\end{eqnarray*} Identify $$\KZ\circ\Id_{\mathcal{O}}=\Id_{\mathscr{H}}\circ\KZ,\quad \KZ'\circ\Id_{\mathcal{O}'}=\Id_{\mathscr{H}'}\circ\KZ'.$$ We have a bijective map $$\Hom(\KZ'\circ E\circ F,\KZ'\circ\Id_{\mathcal{O}'})\overset{\sim}\ra \Hom(E^{\scriptscriptstyle\mathscr{H}}\circ F^{\scriptscriptstyle\mathscr{H}}\circ\KZ',\Id_{\mathscr{H}'}\circ\KZ'),\quad g\mapsto g\circ\phi_{EF}.$$ Together with (\ref{eq:isoFE}), it implies that there exists a unique morphism $\varepsilon: E\circ F\rightarrow\Id_{\mathcal{O}'}$ such that $$(1_{\KZ'}\varepsilon)\circ\phi_{EF}=\varepsilon^{\scriptscriptstyle\mathscr{H}}1_{\KZ'}.$$ Similarly, there exists a unique morphism $\eta: \Id_{\mathcal{O}}\rightarrow F\circ E$ such that $$(\phi_{FE})^{-1}\circ(1_{\KZ}\eta)=\eta^{\scriptscriptstyle\mathscr{H}}1_{\KZ}.$$ Now, we have the following commutative diagram $$\xymatrix{E^{\scriptscriptstyle\mathscr{H}}\circ\KZ\ar@{=}[r]\ar[d]_{1_{E^{\scriptscriptstyle\mathscr{H}}} \eta^{\scriptscriptstyle\mathscr{H}}1_{\KZ}}&E^{\scriptscriptstyle\mathscr{H}}\circ\KZ\ar[r]^{\phi_E} \ar[d]^{1_{E^{\scriptscriptstyle\mathscr{H}}}1_{\KZ}\eta} &\KZ'\circ E\ar[d]^{1_{\KZ'}1_E\eta}\\ E^{\scriptscriptstyle\mathscr{H}}\circ F^{\scriptscriptstyle\mathscr{H}}\circ E^{\scriptscriptstyle\mathscr{H}}\circ\KZ\quad\ar[r]^{\,1_{E^{\scriptscriptstyle\mathscr{H}}}\phi_{FE}}\ar@{=}[d] &\quad E^{\scriptscriptstyle\mathscr{H}}\circ\KZ\circ F\circ E\quad\ar[r]^{\,\phi_E1_F1_E} &\quad\KZ'\circ E\circ F\circ E\ar@{=}[d]\\ E^{\scriptscriptstyle\mathscr{H}}\circ F^{\scriptscriptstyle\mathscr{H}}\circ E^{\scriptscriptstyle\mathscr{H}}\circ\KZ\quad\ar[r]^{\,1_{E^{\scriptscriptstyle\mathscr{H}}}1_{F^{\scriptscriptstyle\mathscr{H}}}\phi_E} \ar[d]_{\varepsilon^{\scriptscriptstyle\mathscr{H}}1_{E^{\scriptscriptstyle\mathscr{H}}}1_{\KZ}} &\quad E^{\scriptscriptstyle\mathscr{H}}\circ F^{\scriptscriptstyle\mathscr{H}}\circ\KZ'\circ E\quad\ar[u]_{1_{E^{\scriptscriptstyle\mathscr{H}}}\phi_F1_E} \ar[r]^{\,\phi_{EF}1_E}\ar[d]^{\varepsilon^{\scriptscriptstyle\mathscr{H}}1_{\KZ'}1_E} &\quad\KZ'\circ E\circ F\circ E\ar[d]^{1_{\KZ'}\varepsilon 1_E}\\ E^{\scriptscriptstyle\mathscr{H}}\circ\KZ\ar[r]^{\phi_E} &\KZ'\circ E\ar@{=}[r]&\KZ'\circ E.}$$ It yields that $$(1_{\KZ'}\varepsilon 1_E)\circ(1_{\KZ'}1_E\eta)= \phi_E\circ(\varepsilon^{\scriptscriptstyle\mathscr{H}}1_{E^{\scriptscriptstyle\mathscr{H}}}1_{\KZ})\circ(1_{E^{\scriptscriptstyle\mathscr{H}}} \eta^{\scriptscriptstyle\mathscr{H}}1_{\KZ})\circ(\phi_E)^{-1}.$$ We deduce that \begin{eqnarray} 1_{\KZ'}((\varepsilon 1_E)\circ(1_E\eta))&=&\phi_E\circ(1_{E^{\scriptscriptstyle\mathscr{H}}}1_{\KZ}) \circ(\phi_E)^{-1}\nonumber\\ &=&1_{\KZ'}1_E.\label{eq:unit} \end{eqnarray} By applying Lemma \ref{fullyfaithful} to $\mathcal{O}_1=\mathcal{O}$, $\mathcal{O}_2=\mathcal{O}'$, $K=L=E$, we deduce that the following map is bijective $$\End(E)\rightarrow\End(\KZ'\circ E),\quad f\mapsto1_{\KZ'}f.$$ Hence (\ref{eq:unit}) implies that $$(\varepsilon 1_E)\circ(1_E\eta)=1_E.$$ Similarly, we have $(1_F\varepsilon)\circ (\eta 1_F)=1_F$. So $E$ is left adjoint to $F$. By uniqueness of adjoint functors this implies that $F$ is isomorphic to $\Ind_b$. Therefore $\Ind_b$ is biadjoint to $\Res_b$. \end{proof}
\section{Reminders on the Cyclotomic case.}\label{s:cyclotomiccase}
From now on we will concentrate on the cyclotomic rational DAHA's. We fix some notation in this section.
\subsection{}\label{ss:cyclot1}
Let $l,n$ be positive integers. Write $\varepsilon=\exp(\frac{2\pi \sqrt{-1}}{l})$. Let $\mathfrak{h}=\mathbb{C}^n$, write $\{y_1,\ldots,y_n\}$ for its standard basis. For $1\leqslant i,j,k\leqslant n$ with $i,j,k$ distinct, let $\varepsilon_k$, $s_{ij}$ be the following elements of $GL(\mathfrak{h})$: $$\varepsilon_k(y_k)=\varepsilon y_k,\quad \varepsilon_k(y_j)=y_j,\quad s_{ij}(y_i)=y_j,\quad s_{ij}(y_k)=y_k.$$ Let $B_n(l)$ be the subgroup of $GL(\mathfrak{h})$ generated by $\varepsilon_k$ and $s_{ij}$ for $1\leqslant k\leqslant n$ and $1\leqslant i<j\leqslant n$. \iffalse We have $B_n(l)\cong\mathfrak{S}_n\ltimes(\mu_l)^n$ where $\mathfrak{S}_n$ is the symmetric group on $n$ elements and $(\mu_l)^n$ is $n$ copies of the cyclic group $\mu_l$ which is generated by $\varepsilon=\exp(\frac{2\pi\sqrt{-1}}{l})$. \fi It is a complex reflection group with the set of reflections $$\mathcal{S}_n=\{\varepsilon_i^p:1\leqslant i\leqslant n, 1\leqslant p \leqslant l-1\}\bigsqcup \{s_{ij}^{(p)}=s_{ij}\varepsilon_i^p\varepsilon_j^{-p}:1\leqslant i<j\leqslant n, 1\leqslant p\leqslant l\}.$$ Note that there is an obvious inclusion $\mathcal{S}_{n-1}\hookrightarrow\mathcal{S}_{n}$. It yields an embedding \begin{equation}\label{eq:inclusiongroup}
B_{n-1}(l)\hookrightarrow B_n(l). \end{equation} This embedding identifies $B_{n-1}(l)$ with the parabolic subgroup of $B_{n}(l)$ given by the stabilizer of the point $b_n=(0,\ldots,0,1)\in\mathbb{C}^n$.
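To make these definitions concrete: the element $s^{(p)}_{ij}$ exchanges the lines $\mathbb{C} y_i$ and $\mathbb{C} y_j$ (twisting the basis vectors by $\varepsilon^{\pm p}$), fixes $y_k$ for $k\neq i,j$, and has order $2$, so it is indeed a reflection. In particular, the reflections of $B_n(l)$ fixing the point $b_n=(0,\ldots,0,1)$ are exactly those of $\mathcal{S}_{n-1}$, i.e., those involving only the indices $1,\ldots,n-1$, and they generate the copy of $B_{n-1}(l)$ given by the embedding (\ref{eq:inclusiongroup}).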
The cyclotomic rational DAHA is the algebra $H_{c}(B_n(l),\mathfrak{h})$. We will use another presentation in which we replace the parameter $c$ by an $l$-tuple $\mathbf{h}=(h,h_1,\ldots,h_{l-1})$ such that \begin{equation*} c_{s^{(p)}_{ij}}=-h, \quad c_{\varepsilon_p}=\frac{-1}{2}\sum_{p'=1}^{l-1}(\varepsilon^{-pp'}-1)h_{p'}. \end{equation*}
We will denote $H_c(B_n(l),\mathfrak{h})$ by $H_{\mathbf{h},n}$. The corresponding category $\mathcal{O}$ will be denoted by $\mathcal{O}_{\mathbf{h},n}$. In the rest of the paper, we will fix the positive integer $l$. We will also fix a positive integer $e\geqslant 2$ and an $l$-tuple of integers $\mathbf{s}=(s_1,\ldots,s_l)$. \emph{We will always assume that the parameter $\mathbf{h}$ is given by the following formulas:} \begin{equation}\label{assumptionh} h=\frac{-1}{e},\quad h_p=\frac{s_{p+1}-s_p}{e}-\frac{1}{l},\quad 1\leqslant p\leqslant l-1\,. \end{equation}
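As a small sanity check on these conventions (an illustrative computation, taking $l=2$ only for concreteness, so that $\varepsilon=-1$), the formulas above give $$c_{s^{(p)}_{ij}}=-h=\frac{1}{e},\qquad c_{\varepsilon_1}=\frac{-1}{2}(\varepsilon^{-1}-1)h_1=h_1=\frac{s_2-s_1}{e}-\frac{1}{2}.$$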
The functor $\KZ(B_n(l),\mathbb{C}^n)$ goes from $\mathcal{O}_{\mathbf{h},n}$ to the category of finite dimensional modules of a certain Hecke algebra $\mathscr{H}_{\mathbf{q},n}$ attached to the group $B_n(l)$. Here the parameter is $\mathbf{q}=(q,q_1,\ldots, q_l)$ with \begin{equation*} q=\exp(2\pi\sqrt{-1}/e),\quad q_p=q^{s_p},\quad 1\leqslant p\leqslant l. \end{equation*} The algebra $\mathscr{H}_{\mathbf{q},n}$ has the following presentation: \begin{itemize} \item Generators: $T_0, T_1,\ldots, T_{n-1}$, \item Relations: \begin{gather*} (T_0-q_1)\cdots(T_0-q_l)=(T_i+1)(T_i-q)=0,\quad 1\leqslant i\leqslant n-1, \notag \\ T_0T_1T_0T_1=T_1T_0T_1T_0,\notag \\
T_iT_j=T_jT_i,\quad\text{if }|i-j|>1,\label{pres} \\ T_iT_{i+1}T_i=T_{i+1}T_iT_{i+1},\quad 1\leqslant i\leqslant n-2. \notag \end{gather*} \end{itemize} The algebra $\mathscr{H}_{\mathbf{q},n}$ satisfies the assumption of Section \ref{s:KZcommute}, i.e., it has the same dimension as $\mathbb{C} B_n(l)$.
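For orientation (these standard special cases are not used below): when $l=1$ the relation $(T_0-q_1)=0$ makes $T_0$ the scalar $q_1$, and $\mathscr{H}_{\mathbf{q},n}$ is essentially the Iwahori-Hecke algebra of the symmetric group $\mathfrak{S}_n$ with parameter $q$; when $l=2$ it is the Hecke algebra of type $B_n$ with parameters $(q_1,q_2;q)$. For general $l$ this presentation is that of the Ariki-Koike algebra attached to $B_n(l)$.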
\subsection{}\label{ss:cyclot2}
For each positive integer $n$, the embedding (\ref{eq:inclusiongroup}) of $B_{n}(l)$ into $B_{n+1}(l)$ yields an embedding of Hecke algebras $$\imath_{\mathbf{q}}: \mathscr{H}_{\mathbf{q},n}\hookrightarrow \mathscr{H}_{\mathbf{q},n+1},$$ see Section \ref{ss:resHecke}. Under the presentation above this embedding is given by $$\imath_{\mathbf{q}}(T_i)=T_i,\quad\forall\ 0\leqslant i\leqslant n-1,$$ see \cite[Proposition 2.29]{BMR}.
We will consider the following restriction and induction functors: \begin{eqnarray*} E(n)=\Res_{b_n},\quad E(n)^{\scriptscriptstyle\mathscr{H}}=\sideset{^{\scriptscriptstyle\mathscr{H}}}{^{B_{n}(l)}_{B_{n-1}(l)}}\Res,\\ F(n)=\Ind_{b_n},\quad F(n)^{\scriptscriptstyle\mathscr{H}}=\sideset{^{\scriptscriptstyle\mathscr{H}}}{^{B_{n}(l)}_{B_{n-1}(l)}}\Ind.
\end{eqnarray*} The algebra $\mathscr{H}_{\mathbf{q},n}$ is symmetric (see Remark \ref{rmq:symmetric}). Hence by Lemma \ref{heckeind} we have $$F(n)^{\scriptscriptstyle\mathscr{H}}\cong\sideset{^{\scriptscriptstyle\mathscr{H}}}{^{B_{n}(l)}_{B_{n-1}(l)}}\coInd.$$ We will abbreviate $$\mathcal{O}_{\mathbf{h},\mathbb{N}}=\bigoplus_{n\in\mathbb{N}}\mathcal{O}_{\mathbf{h},n},\quad \KZ=\bigoplus_{n\in\mathbb{N}}\KZ(B_n(l),\mathbb{C}^n),\quad \mathscr{H}_{\mathbf{q},\mathbb{N}}\modu=\bigoplus_{n\in\mathbb{N}}\mathscr{H}_{\mathbf{q},n}\modu.$$ So $\KZ$ is the Knizhnik-Zamolodchikov functor from $\mathcal{O}_{\mathbf{h},\mathbb{N}}$ to $\mathscr{H}_{\mathbf{q},\mathbb{N}}\modu$. Let
\begin{eqnarray*} E=\bigoplus_{n\geqslant 1}E(n),\quad E^{\scriptscriptstyle\mathscr{H}}=\bigoplus_{n\geqslant 1}E(n)^{\scriptscriptstyle\mathscr{H}},\\ F=\bigoplus_{n\geqslant 1}F(n),\quad F^{\scriptscriptstyle\mathscr{H}}=\bigoplus_{n\geqslant 1}F(n)^{\scriptscriptstyle\mathscr{H}}.\end{eqnarray*} So $(E^{\scriptscriptstyle\mathscr{H}},F^{\scriptscriptstyle\mathscr{H}})$ is a pair of biadjoint endo-functors of $\mathscr{H}_{\mathbf{q},\mathbb{N}}\modu$, and $(E,F)$ is a pair of biadjoint endo-functors of $\mathcal{O}_{\mathbf{h},\mathbb{N}}$ by Proposition \ref{leftadjunction}.
\subsection{Fock spaces.}\label{ss: fock}
Recall that an $l$-partition is an $l$-tuple $\lambda=(\lambda^1,\cdots, \lambda^l)$ with each $\lambda^j$ a partition, that is, a finite sequence of integers $(\lambda^j)_1\geqslant\cdots\geqslant(\lambda^j)_k>0$. To any $l$-partition $\lambda=(\lambda^1,\ldots,\lambda^l)$ we attach the set \begin{equation*} \Upsilon_\lambda=\{(a,b,j)\in \mathbb{N}\times\mathbb{N}\times(\mathbb{Z}/l\mathbb{Z}): 0<b\leqslant(\lambda^j)_a\}. \end{equation*}
Write $|\lambda|$ for the number of elements in this set; we say that
$\lambda$ is an $l$-partition of $|\lambda|$. For $n\in\mathbb{N}$ we denote by $\mathcal{P}_{n,l}$ the set of $l$-partitions of $n$. For any $l$-partition $\mu$ such that $\Upsilon_\mu$ contains $\Upsilon_\lambda$, we write $\mu/\lambda$ for the complement of
$\Upsilon_\lambda$ in $\Upsilon_\mu$. Let $|\mu/\lambda|$ be the number of elements in this set. To each element $(a,b,j)$ in $\Upsilon_\lambda$ we attach an element $$\res ((a,b,j))=b-a+s_j\in\mathbb{Z}/e\mathbb{Z},$$ called the residue of $(a,b,j)$. Here $s_j$ is the $j$-th component of our fixed $l$-tuple $\mathbf{s}$.
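For illustration, suppose (purely as a hypothetical choice of the data fixed above) that $l=2$, $e=2$ and $\mathbf{s}=(0,1)$, and let $\lambda=((2),(1))$. Then $$\Upsilon_\lambda=\{(1,1,1),\ (1,2,1),\ (1,1,2)\},\qquad \res((1,1,1))=0,\quad \res((1,2,1))=1,\quad \res((1,1,2))=1$$ in $\mathbb{Z}/2\mathbb{Z}$, and $\lambda$ is a $2$-partition of $|\lambda|=3$.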
The Fock space with multi-charge $\mathbf{s}$ is the $\mathbb{C}$-vector space $\mathcal{F}_\mathbf{s}$ spanned by the $l$-partitions, i.e., $$\mathcal{F}_\mathbf{s}=\bigoplus_{n\in\mathbb{N}}\bigoplus_{\lambda\in\mathcal{P}_{n,l}}\mathbb{C}\lambda.$$ It admits an integrable $\widehat{\mathfrak{sl}}_e$-module structure with the Chevalley generators acting as follows (cf. \cite{JMMO}): for any $i\in\mathbb{Z}/e\mathbb{Z}$, \begin{equation}\label{fockei}
e_i(\lambda)=\sum_{|\lambda/\mu|=1,\res(\lambda/\mu)=i}\mu,\quad f_i(\lambda)=\sum_{|\mu/\lambda|=1,\res(\mu/\lambda)=i}\mu. \end{equation} For each $n\in\mathbb{Z}$ set $\Lambda_n=\Lambda_{\underline{n}}$, where $\underline{n}$ is the image of $n$ in $\mathbb{Z}/e\mathbb{Z}$ and $\Lambda_{\underline{n}}$ is the corresponding fundamental weight of $\widehat{\mathfrak{sl}}_e$. Set $$\Lambda_{\mathbf{s}}=\Lambda_{\underline{s_1}}+\cdots+\Lambda_{\underline{s_l}}.$$ Each $l$-partition $\lambda$ is a weight vector of $\mathcal{F}_\mathbf{s}$ with weight \begin{equation}\label{wt} \wt(\lambda)=\Lambda_{\mathbf{s}}-\sum_{i\in\mathbb{Z}/e\mathbb{Z}}n_i\alpha_i, \end{equation} where $n_i$ is the number of elements in the set $\{ (a,b,j)\in\Upsilon_\lambda: \res((a,b,j))=i\}$. We will call $\wt(\lambda)$ the weight of $\lambda$.
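Continuing the illustrative choice $l=2$, $e=2$, $\mathbf{s}=(0,1)$ from above: the empty $2$-partition $\emptyset=(\emptyset,\emptyset)$ has two addable boxes, of residues $s_1=0$ and $s_2=1$, so (\ref{fockei}) gives $$f_0(\emptyset)=((1),\emptyset),\qquad f_1(\emptyset)=(\emptyset,(1)),$$ and for $\lambda=((2),(1))$ as before, formula (\ref{wt}) gives $\wt(\lambda)=\Lambda_{\mathbf{s}}-\alpha_0-2\alpha_1$, since $n_0=1$ and $n_1=2$.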
In \cite[Section $6.1.1$]{R} an explicit bijection was given between the sets $\Irr(B_n(l))$ and $\mathcal{P}_{n,l}$. Using this bijection we identify these two sets and index the standard and simple modules in $\mathcal{O}_{\mathbf{h},\mathbb{N}}$ by $l$-partitions. In particular, we have an isomorphism of $\mathbb{C}$-vector spaces \begin{equation}\label{Fock} \theta: K(\mathcal{O}_{\mathbf{h},\mathbb{N}})\overset{\sim}\rightarrow\mathcal{F}_{\mathbf{s}},\quad [\Delta(\lambda)]\mapsto \lambda. \end{equation}
\subsection{}\label{kzspecht}
We end this section with the following lemma. Recall that the functor $\KZ$ gives a map $K(\mathcal{O}_{\mathbf{h},n})\rightarrow K(\mathscr{H}_{\mathbf{q},n})$. For any $l$-partition $\lambda$ of $n$ let $S_\lambda$ be the corresponding Specht module in $\mathscr{H}_{\mathbf{q},n}\modu$; see \cite[Definition $13.22$]{A} for its definition. \begin{lemme}\label{Specht} In $K(\mathscr{H}_{\mathbf{q},n})$, we have $\KZ([\Delta(\lambda)])=[S_\lambda]$. \end{lemme} \begin{proof} Let $R$ be any commutative ring over $\mathbb{C}$. For any $l$-tuple $\mathbf{z}=(z,z_1,\ldots,z_{l-1})$ of elements in $R$ one defines the rational DAHA over $R$ attached to $B_n(l)$ with parameter $\mathbf{z}$ in the same way as before. Denote it by $H_{R,\mathbf{z},n}$. The standard modules $\Delta_R(\lambda)$ are also defined as before. For any $(l+1)$-tuple $\mathbf{u}=(u,u_1,\ldots, u_l)$ of invertible elements in $R$ the Hecke algebra $\mathscr{H}_{R,\mathbf{u},n}$ over $R$ attached to $B_n(l)$ with parameter $\mathbf{u}$ is defined by the same presentation as in Section \ref{ss:cyclot1}. The Specht modules $S_{R,\lambda}$ are also well-defined (see \cite{A}). If $R$ is a field, we will write $\Irr(\mathscr{H}_{R,\mathbf{u},n})$ for the set of isomorphism classes of simple $\mathscr{H}_{R,\mathbf{u},n}$-modules.
Now, fix $R$ to be the ring of holomorphic functions of one variable $\varpi$. We choose $\mathbf{z}=(z,z_1,\ldots,z_{l-1})$ to be given by \begin{equation*}
z=l\varpi,\quad z_p=(s_{p+1}-s_p)l\varpi+e\varpi,\quad 1\leqslant p\leqslant
l-1. \end{equation*} Write $x=\exp(-2\pi\sqrt{-1}\varpi)$. Let $\mathbf{u}=(u,u_1,\ldots, u_l)$ be given by \begin{equation*}
u=x^{l},\quad u_p=\varepsilon^{p-1}x^{s_pl-(p-1)e},\quad 1\leqslant p\leqslant
l. \end{equation*} By \cite[Theorem 4.12]{BMR} the same definition as in Section \ref{ss:KZ} yields a well-defined $\mathscr{H}_{R,\mathbf{u},n}$-module $$T_{R}(\lambda)=\KZ_{R}(\Delta_{R}(\lambda)).$$ It is a free $R$-module of finite rank and it commutes with the base change functor by the existence and uniqueness theorem for linear differential equations, i.e., for any ring homomorphism $R\rightarrow R'$ over $\mathbb{C}$, we have a canonical isomorphism of $\mathscr{H}_{R',\mathbf{u},n}$-modules \begin{equation}\label{eq:basechange}
T_{R'}(\lambda)=\KZ_{R'}(\Delta_{R'}(\lambda))\cong T_{R}(\lambda)\otimes_RR'. \end{equation} In particular, let $a: R\rightarrow\mathbb{C}$ be a ring homomorphism. Write $\mathbb{C}_a$ for the vector space $\mathbb{C}$ equipped with the $R$-module structure given by $a$. Let $a(\mathbf{z})$, $a(\mathbf{u})$ denote the images of $\mathbf{z}$, $\mathbf{u}$ by $a$. Note that we have $H_{a(\mathbf{z}),n}=H_{R,\mathbf{z},n}\otimes_R\mathbb{C}_a$ and $\mathscr{H}_{a(\mathbf{u}),n}=\mathscr{H}_{R,\mathbf{u},n}\otimes_R\mathbb{C}_a$. Denote the Knizhnik-Zamolodchikov functor of $H_{a(\mathbf{z}),n}$ by $\KZ_{a(\mathbf{z})}$ and the standard module corresponding to $\lambda$ by $\Delta_{a(\mathbf{z})}(\lambda)$. Then we have an isomorphism of $\mathscr{H}_{a(\mathbf{u}),n}$-modules \begin{equation*}
T_{R}(\lambda)\otimes_{R}\mathbb{C}_{a}\cong\KZ_{a(\mathbf{z})}(\Delta_{a(\mathbf{z})}(\lambda)). \end{equation*}
Let $K$ be the fraction field of $R$. By \cite[Theorem 2.19]{GGOR} the category $\mathcal{O}_{K,\mathbf{z},n}$ is split semisimple. In particular, the standard modules are simple. We have $$\{T_K(\lambda),\lambda\in\mathcal{P}_{n,l}\}=\Irr(\mathscr{H}_{K,\mathbf{u},n}).$$ The Hecke algebra $\mathscr{H}_{K,\mathbf{u},n}$ is also split semisimple and we have $$\{S_{K,\lambda},\lambda\in\mathcal{P}_{n,l}\}=\Irr(\mathscr{H}_{K,\mathbf{u},n}),$$ see for example \cite[Corollary 13.9]{A}. Thus there is a bijection $\varphi: \mathcal{P}_{n,l}\rightarrow\mathcal{P}_{n,l}$ such that $T_K(\lambda)$ is isomorphic to $S_{K,\varphi(\lambda)}$ for all $\lambda$. We claim that $\varphi$ is the identity. To see this, consider the algebra homomorphism $a_0:R\rightarrow\mathbb{C}$ given by $\varpi\mapsto 0$. Then $\mathscr{H}_{a_0(\mathbf{u}),n}$ is canonically isomorphic to the group algebra $\mathbb{C} B_n(l)$, thus it is semisimple. Let $\overline{K}$ be the algebraic closure of $K$. Let $\overline{R}$ be the integral closure of $R$ in $\overline{K}$ and fix an extension $\overline{a}_0$ of $a_0$ to $\overline{R}$. By Tits' deformation theorem (see for example \cite[Section 68A]{CuR}), there is a bijection $$\psi:\Irr(\mathscr{H}_{\overline{K},\mathbf{u},n})\overset{\sim}\ra\Irr(\mathscr{H}_{a_0(\mathbf{u}),n})$$ such that \begin{equation*}
\psi(T_{\overline{K}}(\lambda))=T_{\overline{R}}(\lambda)\otimes_{\overline{R}}\mathbb{C}_{\overline{a}_0},\quad
\psi(S_{\overline{K},\lambda})=S_{\overline{R},\lambda}\otimes_{\overline{R}}\mathbb{C}_{\overline{a}_0}. \end{equation*} By the definition of Specht modules we have $S_{\overline{R},\lambda}\otimes_{\overline{R}}\mathbb{C}_{\overline{a}_0}\cong\lambda$ as $\mathbb{C} B_n(l)$-modules. On the other hand, since $a_0(\mathbf{z})=0$, by (\ref{eq:basechange}) we have the following isomorphisms \begin{eqnarray*}
T_{\overline{R}}(\lambda)\otimes_{\overline{R}}\mathbb{C}_{\overline{a}_0}&\cong&T_R(\lambda)\otimes_R\mathbb{C}_{a_0}\\
&\cong&\KZ_0(\Delta_0(\lambda))\\
&=&\lambda. \end{eqnarray*} So $\psi(T_{\overline{K}}(\lambda))=\psi(S_{\overline{K},\lambda})$. Hence we have $T_{\overline{K}}(\lambda)\cong S_{\overline{K},\lambda}$. Since $T_{\overline{K}}(\lambda)=T_K(\lambda)\otimes_K\overline{K}$ is isomorphic to $S_{\overline{K},\varphi(\lambda)}=S_{K,\varphi(\lambda)}\otimes_K\overline{K}$, we deduce that $\varphi(\lambda)=\lambda$. The claim is proved.
Finally, let $\mathfrak{m}$ be the maximal ideal of $R$ consisting of the functions vanishing at $\varpi=-1/el$. Let $\widehat R$ be the completion of $R$ at $\mathfrak{m}$. It is a discrete valuation ring with residue field $\mathbb{C}$. Let $a_1:\widehat R\rightarrow \widehat{R}/\mathfrak{m}\widehat{R}=\mathbb{C}$ be the quotient map. We have $a_1(\mathbf{z})=\mathbf{h}$ and $a_1(\mathbf{u})=\mathbf{q}$. Let $\widehat{K}$ be the fraction field of $\widehat R$. Recall that the decomposition map is given by $$d: K(\mathscr{H}_{\widehat{K},\mathbf{u},n})\rightarrow K(\mathscr{H}_{\mathbf{q},n}),\quad [M]\mapsto [L\otimes_{\widehat{R}}\mathbb{C}_{a_1}].$$ Here $L$ is any free $\widehat{R}$-submodule of $M$ such that $L\otimes_{\widehat{R}}\widehat{K}=M$. The choice of $L$ does not affect the class $[L\otimes_{\widehat{R}}\mathbb{C}_{a_1}]$ in $K(\mathscr{H}_{\mathbf{q},n})$. See \cite[Section 13.3]{A} for details on this map. Now, observe that we have \begin{eqnarray*}
&d([S_{\widehat{K},\lambda}])= [S_{\widehat{R},\lambda}\otimes_{\widehat{R}}\mathbb{C}_{a_1}]=[S_\lambda],\\
&d([T_{\widehat{K}}(\lambda)])= [T_{\widehat{R}}(\lambda)\otimes_{\widehat{R}}\mathbb{C}_{a_1}]=[\KZ(\Delta(\lambda))]. \end{eqnarray*} Since $\widehat{K}$ is an extension of $K$, by the last paragraph we have $[S_{\widehat{K},\lambda}]=[T_{\widehat{K}}(\lambda)]$. We deduce that $[\KZ(\Delta(\lambda))]=[S_\lambda]$. \end{proof}
\section{$i$-Restriction and $i$-Induction}\label{iresiind}
We define in this section the $i$-restriction and $i$-induction functors for the cyclotomic rational DAHA's. This is done in parallel with the Hecke algebra case.
\subsection{}\label{ss:ireshecke}
Let us recall the definition of the $i$-restriction and $i$-induction functors for $\mathscr{H}_{\mathbf{q},n}$. First define the Jucys-Murphy elements $J_0,\ldots, J_{n-1}$ in $\mathscr{H}_{\mathbf{q},n}$ by \begin{equation*} J_0=T_0,\quad J_i=q^{-1}T_iJ_{i-1}T_i\quad\text{ for }1\leqslant i\leqslant n-1. \end{equation*} Write $Z(\mathscr{H}_{\mathbf{q},n})$ for the center of $\mathscr{H}_{\mathbf{q},n}$. For any symmetric polynomial $\sigma$ in $n$ variables the element $\sigma(J_0,\ldots,J_{n-1})$ belongs to $Z(\mathscr{H}_{\mathbf{q},n})$ (cf. \cite[Section $13.1$]{A}). In particular, if $z$ is a formal variable the polynomial $C_n(z)=\prod_{i=0}^{n-1}(z-J_i)$ in $\mathscr{H}_{\mathbf{q},n}[z]$ has coefficients in $Z(\mathscr{H}_{\mathbf{q},n})$.
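For instance, for $n=2$ we have $J_0=T_0$, $J_1=q^{-1}T_1T_0T_1$ and $$C_2(z)=(z-J_0)(z-J_1)=z^2-(J_0+J_1)z+J_0J_1,$$ so the coefficients of $C_2(z)$ are, up to sign, the elementary symmetric polynomials in $J_0,J_1$ and hence lie in $Z(\mathscr{H}_{\mathbf{q},2})$ by the fact just recalled.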
Now, for any $a(z)\in\mathbb{C}(z)$ let $P_{n,a(z)}$ be the exact endo-functor of the category $\mathscr{H}_{\mathbf{q},n}\modu$ that maps an object $M$ to the generalized eigenspace of $C_n(z)$ in $M$ with the eigenvalue $a(z)$.
For any $i\in\mathbb{Z}/e\mathbb{Z}$ the $i$-restriction functor and $i$-induction functor $$E_i(n)^{\scriptscriptstyle\mathscr{H}}: \mathscr{H}_{\mathbf{q},n}\modu\rightarrow\mathscr{H}_{\mathbf{q},n-1}\modu, \quad F_i(n)^{\scriptscriptstyle\mathscr{H}}: \mathscr{H}_{\mathbf{q},n-1}\modu\rightarrow\mathscr{H}_{\mathbf{q},n}\modu$$ are defined as follows (cf. \cite[Definition 13.33]{A}): \begin{eqnarray*} E_i(n)^{\scriptscriptstyle\mathscr{H}}=\bigoplus_{a(z)\in\mathbb{C}(z)}P_{n-1,a(z)/(z-q^i)}\circ E(n)^{\scriptscriptstyle\mathscr{H}}\circ P_{n,a(z)},\label{e}\\ F_i(n)^{\scriptscriptstyle\mathscr{H}}=\bigoplus_{a(z)\in\mathbb{C}(z)}P_{n,a(z)(z-q^i)}\circ F(n)^{\scriptscriptstyle\mathscr{H}}\circ P_{n-1,a(z)}.\label{f} \end{eqnarray*} We will write \begin{equation*} E^{\scriptscriptstyle\mathscr{H}}_i=\bigoplus_{n\geqslant 1}E_i(n)^{\scriptscriptstyle\mathscr{H}},\quad F^{\scriptscriptstyle\mathscr{H}}_i=\bigoplus_{n\geqslant 1}F_i(n)^{\scriptscriptstyle\mathscr{H}}. \end{equation*} They are endo-functors of $\mathscr{H}_{\mathbf{q},\mathbb{N}}$. For each $\lambda\in\mathcal{P}_{n,l}$ set $$a_\lambda(z)=\prod_{v \in\Upsilon_\lambda}(z-q^{\res(v)}).$$ We recall some properties of these functors in the following proposition. \begin{prop}\label{hv} (1) The functors $E_i(n)^{\scriptscriptstyle\mathscr{H}}$, $F_i(n)^{\scriptscriptstyle\mathscr{H}}$ are exact. The functor $E_i(n)^{\scriptscriptstyle\mathscr{H}}$ is biadjoint to $F_i(n)^{\scriptscriptstyle\mathscr{H}}$.
(2) For any $\lambda\in\mathcal{P}_{n,l}$ the element $C_n(z)$ has a unique eigenvalue on the Specht module $S_\lambda$. It is equal to $a_\lambda(z)$.
(3) We have \begin{equation*} E_i(n)^{\scriptscriptstyle\mathscr{H}}([S_\lambda])=\sum_{\res(\lambda/\mu)=i}[S_\mu],\qquad F_i(n)^{\scriptscriptstyle\mathscr{H}}([S_\lambda])=\sum_{\res(\mu/\lambda)=i}[S_\mu]. \end{equation*}
(4) We have \begin{equation*} E(n)^{\scriptscriptstyle\mathscr{H}}=\bigoplus_{i\in\mathbb{Z}/e\mathbb{Z}}E_i(n)^{\scriptscriptstyle\mathscr{H}},\quad F(n)^{\scriptscriptstyle\mathscr{H}}=\bigoplus_{i\in\mathbb{Z}/e\mathbb{Z}}F_i(n)^{\scriptscriptstyle\mathscr{H}}. \end{equation*} \end{prop} \begin{proof}
Part $(1)$ is obvious. See \cite[Theorem $13.21$(2)]{A} for $(2)$ and \cite[Lemma $13.37$]{A} for $(3)$. Part $(4)$ follows from (3) and \cite[Lemma $13.32$]{A}. \end{proof}
\subsection{}\label{iresdaha}
By Lemma \ref{lem:center}(1) we have an algebra isomorphism $$\gamma: Z(\mathcal{O}_{\mathbf{h},n})\overset{\sim}\rightarrow Z(\mathscr{H}_{\mathbf{q},n}).$$ So there are unique elements $K_1,\ldots,K_n\in Z(\mathcal{O}_{\mathbf{h},n})$ such that the polynomial $$D_n(z)=z^n+K_1z^{n-1}+\cdots+K_n$$ maps to $C_n(z)$ by $\gamma$. Since the elements $K_1,\ldots,K_n$ act on simple modules by scalars and the category $\mathcal{O}_{\mathbf{h},n}$ is artinian, every module $M$ in $\mathcal{O}_{\mathbf{h},n}$ is a direct sum of generalized eigenspaces of $D_n(z)$. For $a(z)\in\mathbb{C}(z)$ let $Q_{n,a(z)}$ be the exact endo-functor of $\mathcal{O}_{\mathbf{h},n}$ which maps an object $M$ to the generalized eigenspace of $D_n(z)$ in $M$ with the eigenvalue $a(z)$. \begin{df}\label{def} The \emph{$i$-restriction} functor and the \emph{$i$-induction} functor $$E_i(n): \mathcal{O}_{\mathbf{h},n}\rightarrow\mathcal{O}_{\mathbf{h},n-1},\quad F_i(n): \mathcal{O}_{\mathbf{h},n-1}\rightarrow\mathcal{O}_{\mathbf{h},n}$$ are given by \begin{eqnarray*} E_i(n)=\bigoplus_{a(z)\in\mathbb{C}(z)}Q_{n-1,a(z)/(z-q^i)}\circ E(n)\circ Q_{n,a(z)},\\ F_i(n)=\bigoplus_{a(z)\in\mathbb{C}(z)}Q_{n,a(z)(z-q^i)}\circ F(n)\circ Q_{n-1,a(z)}. \end{eqnarray*} \end{df} We will write \begin{equation}\label{ireso} E_i=\bigoplus_{n\geqslant 1}E_i(n),\quad F_i=\bigoplus_{n\geqslant 1}F_i(n). \end{equation}
We have the following proposition. \begin{prop}\label{isoi} For any $i\in\mathbb{Z}/e\mathbb{Z}$ there are isomorphisms of functors \begin{eqnarray*} \KZ\circ E_i(n)\cong E^{\scriptscriptstyle\mathscr{H}}_i(n)\circ\KZ, \quad \KZ\circ F_i(n)\cong F^{\scriptscriptstyle\mathscr{H}}_i(n)\circ\KZ. \end{eqnarray*} \end{prop} \begin{proof} Since $\gamma(D_n(z))=C_n(z)$, by Lemma \ref{lem:center}(2) for any $a(z)\in\mathbb{C}(z)$ we have $$\KZ\circ Q_{n,a(z)}\cong P_{n, a(z)}\circ\KZ.$$ So the proposition follows from Theorem \ref{iso} and Corollary \ref{indiso}. \end{proof}
The next proposition is the DAHA version of Proposition \ref{hv}.
\begin{prop}\label{dv} (1) The functors $E_i(n)$, $F_i(n)$ are exact. The functor $E_i(n)$ is biadjoint to $F_i(n)$.
(2) For any $\lambda\in\mathcal{P}_{n,l}$ the unique eigenvalue of $D_n(z)$ on the standard module $\Delta(\lambda)$ is $a_\lambda(z)$.
(3) We have the following equalities \begin{equation}\label{Pierii} E_i(n)([\Delta(\lambda)])=\sum_{\res(\lambda/\mu)=i}[\Delta(\mu)],\qquad F_i(n)([\Delta(\lambda)])=\sum_{\res(\mu/\lambda)=i}[\Delta(\mu)]. \end{equation}
(4) We have \begin{equation*} E(n)=\bigoplus_{i\in\mathbb{Z}/e\mathbb{Z}}E_i(n),\quad F(n)=\bigoplus_{i\in\mathbb{Z}/e\mathbb{Z}}F_i(n). \end{equation*} \end{prop} \begin{proof} (1) This is by construction and by Proposition \ref{leftadjunction}.
(2) Since a standard module is indecomposable, the element $D_n(z)$ has a unique eigenvalue on $\Delta(\lambda)$. By Lemma \ref{Specht} this eigenvalue is the same as the eigenvalue of $C_n(z)$ on $S_\lambda$.
(3) Let us prove the equality for $E_i(n)$. The Pieri rule for the group $B_n(l)$ together with Proposition \ref{Res}(2) yields \begin{equation}\label{Pieri}
E(n)([\Delta(\lambda)])=\sum_{|\lambda/\mu|=1}[\Delta(\mu)],\quad F(n)([\Delta(\lambda)])=\sum_{|\mu/\lambda|=1}[\Delta(\mu)]. \end{equation} So we have \begin{eqnarray*} E_i(n)([\Delta(\lambda)])&=&\bigoplus_{a(z)\in\mathbb{C}(z)}Q_{n-1,a(z)/(z-q^i)}(E(n)(Q_{n,a(z)}([\Delta(\lambda)])))\\ &=&Q_{n-1,a_\lambda(z)/(z-q^i)}(E(n)(Q_{n,a_\lambda(z)}([\Delta(\lambda)])))\\ &=&Q_{n-1,a_\lambda(z)/(z-q^i)}(E(n)([\Delta(\lambda)]))\\
&=&Q_{n-1,a_\lambda(z)/(z-q^i)}(\sum_{|\lambda/\mu|=1}[\Delta(\mu)])\\ &=&\sum_{\res(\lambda/\mu)=i}[\Delta(\mu)]. \end{eqnarray*} The last equality follows from the fact that for any $l$-partition
$\mu$ such that $|\lambda/\mu|=1$ we have $a_\lambda(z)=a_\mu(z)(z-q^{\res(\lambda/\mu)})$. The proof for $F_i(n)$ is similar.
(4) It follows from part (3) and (\ref{Pieri}). \end{proof}
\begin{cor}\label{rep} Under the isomorphism $\theta$ in (\ref{Fock}) the operators $E_i$ and $F_i$ on $K(\mathcal{O}_{\mathbf{h},\mathbb{N}})$ go respectively to the operators $e_i$ and $f_i$ on $\mathcal{F}_\mathbf{s}$. When $i$ runs over $\mathbb{Z}/e\mathbb{Z}$ they yield an action of $\widehat{\mathfrak{sl}}_e$ on $K(\mathcal{O}_{\mathbf{h},\mathbb{N}})$ such that $\theta$ is an isomorphism of $\widehat{\mathfrak{sl}}_e$-modules. \end{cor} \begin{proof} This is clear from Proposition \ref{dv}$(3)$ and from (\ref{fockei}). \end{proof}
\section{$\widehat{\mathfrak{sl}}_e$-categorification}\label{categorification}
In this section, we construct an $\widehat{\mathfrak{sl}}_e$-categorification on the category $\mathcal{O}_{\mathbf{h},\mathbb{N}}$ under some mild assumption on the parameter $\mathbf{h}$ (Theorem \ref{thm:categorification}).
\subsection{}\label{defcategorification}
Recall that we put $q=\exp(\frac{2\pi\sqrt{-1}}{e})$ and $P$ denotes the weight lattice. Let $\mathcal{C}$ be a $\mathbb{C}$-linear artinian abelian category. For any functor $F:\mathcal{C}\rightarrow\mathcal{C}$ and any $X\in\End(F)$, the generalized eigenspace of $X$ acting on $F$ with eigenvalue $a\in\mathbb{C}$ will be called the $a$-eigenspace of $X$ in $F$. By \cite[Definition 5.29]{R2} an \emph{$\widehat{\mathfrak{sl}}_e$-categorification} on $\mathcal{C}$ is the data of \begin{itemize} \item[(a)] an adjoint pair $(U,V)$ of exact functors $\mathcal{C}\rightarrow\mathcal{C}$, \item[(b)] $X\in\End(U)$ and $T\in\End(U^2)$, \item[(c)] a decomposition $\mathcal{C}=\bigoplus_{\tau\in P}\mathcal{C}_\tau$. \end{itemize} such that, set $U_i$ (resp. $V_i$) to be the $q^i$-eigenspace of $X$ in $U$ (resp. in $V$)\footnote{Here $X$ acts on $V$ via the isomorphism $\End(U)\cong\End(V)^{op}$ given by adjunction, see \cite[Section 4.1.2]{CR} for the precise definition.} for $i\in\mathbb{Z}/e\mathbb{Z}$, we have \begin{itemize} \item[(1)] $U=\bigoplus_{i\in\mathbb{Z}/e\mathbb{Z}}U_i$, \item[(2)] the endomorphisms $X$ and $T$ satisfy \begin{eqnarray} &(1_{U}T)\circ(T1_{U})\circ(1_{U}T)=(T1_{U})\circ(1_{U}T)\circ(T1_{U}),\nonumber\\ &(T+1_{U^2})\circ(T-q1_{U^2})=0,\label{affineHeckerelation}\\ &T\circ(1_{U}X)\circ T=qX1_{U},\nonumber \end{eqnarray} \item[(3)] the action of $e_i=U_i$, $f_i=V_i$ on $K(\mathcal{C})$ with $i$ running over $\mathbb{Z}/e\mathbb{Z}$ gives an integrable representation of $\widehat{\mathfrak{sl}}_e$. \item[(4)] $U_i(\mathcal{C}_\tau)\subset \mathcal{C}_{\tau+\alpha_i}$ and $V_i(\mathcal{C}_\tau)\subset \mathcal{C}_{\tau-\alpha_i}$, \item[(5)] $V$ is isomorphic to a left adjoint of $U$. \end{itemize}
\subsection{}\label{ss:XT}
We construct a $\widehat{\Lie{sl}}_e$-categorification on $\mathcal{O}_{\mathbf{h},\mathbb{N}}$ in the following way. The adjoint pair will be given by $(E,F)$. To construct the part (b) of the data we need to go back to Hecke algebras. Following \cite[Section $7.2.2$]{CR} let $X^{\scriptscriptstyle\mathscr{H}}$ be the endomorphism of $E^{\scriptscriptstyle\mathscr{H}}$ given on $E^{\scriptscriptstyle\mathscr{H}}(n)$ as the multiplication by the Jucys-Murphy element $J_{n-1}$. Let $T^{\scriptscriptstyle\mathscr{H}}$ be the endomorphism of $(E^{\scriptscriptstyle\mathscr{H}})^2$ given on $E^{\scriptscriptstyle\mathscr{H}}(n)\circ E^{\scriptscriptstyle\mathscr{H}}(n-1)$ as the multiplication by the element $T_{n-1}$ in $\mathscr{H}_{\mathbf{q},n}$. The endomorphisms $X^{\scriptscriptstyle\mathscr{H}}$ and $T^{\scriptscriptstyle\mathscr{H}}$ satisfy the relations (\ref{affineHeckerelation}). Moreover, for any $i\in\mathbb{Z}/e\mathbb{Z}$ the $q^i$-eigenspaces of $X^{\scriptscriptstyle\mathscr{H}}$ in $E^{\scriptscriptstyle\mathscr{H}}$ and in $F^{\scriptscriptstyle\mathscr{H}}$ give respectively the $i$-restriction functor $E^{\scriptscriptstyle\mathscr{H}}_i$ and the $i$-induction functor $F^{\scriptscriptstyle\mathscr{H}}_i$.
By Theorem \ref{iso} we have an isomorphism $\KZ\circ E\cong E^{\scriptscriptstyle\mathscr{H}}\circ\KZ$. This yields an isomorphism $$\End(\KZ\circ E)\cong \End(E^{\scriptscriptstyle\mathscr{H}}\circ\KZ).$$ By Proposition \ref{standard}(1) the functor $E$ maps projective objects to projective ones, so Lemma \ref{fullyfaithful} applied to $\mathcal{O}_1=\mathcal{O}_2=\mathcal{O}_{\mathbf{h},\mathbb{N}}$ and $K=L=E$ yields an isomorphism $$\End(E)\cong\End(\KZ\circ E).$$ Composing it with the isomorphism above gives a ring isomorphism \begin{equation}\label{sigmae} \sigma_{E}:\End(E)\overset{\sim}\rightarrow\End(E^{\scriptscriptstyle\mathscr{H}}\circ\KZ). \end{equation} Replacing $E$ by $E^2$ we get another isomorphism $$\sigma_{E^2}:\End(E^2)\overset{\sim}\rightarrow\End((E^{\scriptscriptstyle\mathscr{H}})^2\circ\KZ).$$ The data of $X\in\End(E)$ and $T\in\End(E^2)$ in our $\widehat{\mathfrak{sl}}_e$-categorification on $\mathcal{O}_{\mathbf{h},\mathbb{N}}$ will be provided by \begin{equation*} X=\sigma^{-1}_{E}(X^{\scriptscriptstyle\mathscr{H}} 1_{\KZ}),\quad T=\sigma^{-1}_{E^2}(T^{\scriptscriptstyle\mathscr{H}} 1_{\KZ}). \end{equation*}
Finally, the part (c) of the data will be given by the block decomposition of the category $\mathcal{O}_{\mathbf{h},\mathbb{N}}$. Recall from \cite[Theorem 2.11]{LM} that the block decomposition of the category $\mathscr{H}_{\mathbf{q},\mathbb{N}}\modu$ yields $$\mathscr{H}_{\mathbf{q},\mathbb{N}}\modu=\bigoplus_{\tau\in P}(\mathscr{H}_{\mathbf{q},\mathbb{N}}\modu)_\tau,$$ where $(\mathscr{H}_{\mathbf{q},\mathbb{N}}\modu)_\tau$ is the subcategory generated by the composition factors of the Specht modules $S_\lambda$ with $\lambda$ running over $l$-partitions of weight $\tau$. By convention $(\mathscr{H}_{\mathbf{q},\mathbb{N}}\modu)_\tau$ is zero if such $\lambda$ does not exist. By Lemma \ref{lem:center} the functor $\KZ$ induces a bijection between the blocks of the category $\mathcal{O}_{\mathbf{h},\mathbb{N}}$ and the blocks of $\mathscr{H}_{\mathbf{q},\mathbb{N}}\modu$. So the block decomposition of $\mathcal{O}_{\mathbf{h},\mathbb{N}}$ is \begin{equation*} \mathcal{O}_{\mathbf{h},\mathbb{N}}=\bigoplus_{\tau\in P}(\mathcal{O}_{\mathbf{h},\mathbb{N}})_\tau, \end{equation*} where $(\mathcal{O}_{\mathbf{h},\mathbb{N}})_\tau$ is the block corresponding to $(\mathscr{H}_{\mathbf{q},\mathbb{N}}\modu)_\tau$ via $\KZ$.
\subsection{}\label{ss:categorification} Now we prove the following theorem. \begin{thm}\label{thm:categorification} The data of \begin{itemize} \item[(a)] the adjoint pair $(E,F)$, \item[(b)] the endomorphisms $X\in\End(E)$, $T\in\End(E^2)$, \item[(c)] the decomposition $\mathcal{O}_{\mathbf{h},\mathbb{N}}=\bigoplus_{\tau\in P}(\mathcal{O}_{\mathbf{h},\mathbb{N}})_\tau$ \end{itemize} is a $\widehat{\Lie{sl}}_e$-categorification on $\mathcal{O}_{\mathbf{h},\mathbb{N}}$. \end{thm} \begin{proof} First, we prove that for any $i\in\mathbb{Z}/e\mathbb{Z}$ the $q^i$-generalized eigenspaces of $X$ in $E$ and $F$ are respectively the $i$-restriction functor $E_i$ and the $i$-induction functor $F_i$ as defined in (\ref{ireso}).
Recall from Proposition \ref{hv}(4) and Proposition \ref{dv}(4) that we have $$E=\bigoplus_{i\in\mathbb{Z}/e\mathbb{Z}}E_i\quad\text{ and }\quad E^{\scriptscriptstyle\mathscr{H}}=\bigoplus_{i\in\mathbb{Z}/e\mathbb{Z}}E^{\scriptscriptstyle\mathscr{H}}_i.$$ By the proof of Proposition \ref{isoi} we see that any isomorphism $$\KZ\circ E\cong E^{\scriptscriptstyle\mathscr{H}}\circ\KZ$$ restricts to an isomorphism $\KZ\circ E_i\cong E^{\scriptscriptstyle\mathscr{H}}_i\circ\KZ$ for each $i\in\mathbb{Z}/e\mathbb{Z}$. So the isomorphism $\sigma_E$ in (\ref{sigmae}) maps $\Hom(E_i,E_j)$ to $\Hom(E_i^{\scriptscriptstyle\mathscr{H}}\circ\KZ,E_j^{\scriptscriptstyle\mathscr{H}}\circ\KZ)$. Write $$X=\sum_{i,j\in\mathbb{Z}/e\mathbb{Z}}X_{ij},\quad X^{\scriptscriptstyle\mathscr{H}} 1_{\KZ}=\sum_{i,j\in\mathbb{Z}/e\mathbb{Z}}(X^{\scriptscriptstyle\mathscr{H}} 1_{\KZ})_{ij}$$ with $X_{ij}\in\Hom(E_i, E_j)$ and $(X^{\scriptscriptstyle\mathscr{H}} 1_{\KZ})_{ij}\in\Hom(E_i^{\scriptscriptstyle\mathscr{H}}\circ\KZ,E_j^{\scriptscriptstyle\mathscr{H}}\circ\KZ)$. We have $$\sigma_E(X_{ij})=(X^{\scriptscriptstyle\mathscr{H}} 1_{\KZ})_{ij}.$$ Since $E^{\scriptscriptstyle\mathscr{H}}_i$ is the $q^i$-eigenspace of $X^{\scriptscriptstyle\mathscr{H}}$ in $E^{\scriptscriptstyle\mathscr{H}}$, we have $(X^{\scriptscriptstyle\mathscr{H}} 1_{\KZ})_{ij}=0$ for $i\neq j$ and $(X^{\scriptscriptstyle\mathscr{H}} 1_{\KZ})_{ii}-q^i$ is nilpotent for any $i\in\mathbb{Z}/e\mathbb{Z}$. Since $\sigma_{E}$ is an isomorphism of rings, this implies that $X_{ij}=0$ and $X_{ii}-q^i$ is nilpotent in $\End(E)$. So $E_i$ is the $q^i$-eigenspace of $X$ in $E$. The fact that $F_i$ is the $q^i$-eigenspace of $X$ in $F$ follows from adjunction.
Now, let us check the conditions (1)--(5):
(1) It is given by Proposition \ref{dv}(4).
(2) Since $X^{\scriptscriptstyle\mathscr{H}}$ and $T^{\scriptscriptstyle\mathscr{H}}$ satisfy the relations in (\ref{affineHeckerelation}), the endomorphisms $X$ and $T$ also satisfy them, because these relations are preserved by ring homomorphisms.
(3) It follows from Corollary \ref{rep}.
(4) By the definition of $(\mathcal{O}_{\mathbf{h},\mathbb{N}})_\tau$ and Lemma \ref{Specht}, the standard modules in $(\mathcal{O}_{\mathbf{h},\mathbb{N}})_\tau$ are all the $\Delta(\lambda)$ such that $\wt(\lambda)=\tau$. By $(\ref{wt})$ if $\mu$ is an $l$-partition such that $\res(\lambda/\mu)=i$ then $\wt(\mu)=\wt(\lambda)+\alpha_i.$ Now, the result follows from (\ref{Pierii}).
(5) This is Proposition \ref{leftadjunction}. \end{proof}
\section{Crystals}\label{s:crystal} Using the $\widehat{\Lie{sl}}_e$-categorification in Theorem \ref{thm:categorification} we construct a crystal on $\mathcal{O}_{\mathbf{h},\mathbb{N}}$ and prove that it coincides with the crystal of the Fock space $\mathcal{F}_\mathbf{s}$ (Theorem \ref{thm:main}).
\subsection{}\label{defcrystal}
A \emph{crystal} (or more precisely, an $\widehat{\mathfrak{sl}}_e$-crystal) is a set $B$ together with maps $$\wt: B\rightarrow P,\quad \tilde{e}_i, \tilde{f}_i: B\rightarrow B\sqcup \{0\},\quad \epsilon_i,\varphi_i: B\rightarrow \mathbb{Z}\sqcup\{-\infty\},$$ such that \begin{itemize} \item $\varphi_i(b)=\epsilon_{i}(b)+\pair{\alpha\spcheck_i,\wt(b)}$, \item if $\tilde{e}_ib\in B$, then $\wt(\tilde{e}_ib)=\wt(b)+\alpha_i,\quad \epsilon_i(\tilde{e}_ib)=\epsilon_i(b)-1,\quad \varphi_i(\tilde{e}_ib)=\varphi_i(b)+1$, \item if $\tilde{f}_ib\in B$, then $\wt(\tilde{f}_ib)=\wt(b)-\alpha_i,\quad \epsilon_i(\tilde{f}_ib)=\epsilon_i(b)+1,\quad \varphi_i(\tilde{f}_ib)=\varphi_i(b)-1$, \item let $b, b'\in B$, then $\tilde{f}_ib=b'$ if and only if $\tilde{e}_ib'=b$, \item if $\varphi_i(b)=-\infty$, then $\tilde{e}_ib=0$ and $\tilde{f}_ib=0$. \end{itemize}
Let $V$ be an integrable $\widehat{\mathfrak{sl}}_e$-module. For any nonzero $v\in V$ and any $i\in\mathbb{Z}/e\mathbb{Z}$ we set $$l_i(v)=\max\{l\in\mathbb{N}:\,e_i^{l}v\neq 0\}.$$ Write $l_i(0)=-\infty$. For $l\geqslant 0$ let $$V_i^{<l}=\{v\in V:\,l_i(v) < l\}.$$ A weight basis of $V$ is a basis $B$ of $V$ such that each element of $B$ is a weight vector. Following A. Berenstein and D. Kazhdan (cf. \cite[Definition 5.30]{BK}), a \emph{perfect basis} of $V$ is a weight basis $B$ together with maps $\tilde{e}_i,\tilde{f}_i: B\rightarrow B\sqcup\{0\}$ for $i\in \mathbb{Z}/e\mathbb{Z}$ such that \begin{itemize} \item for $b, b'\in B$ we have $\tilde{f}_ib=b'$ if and only if $\tilde{e}_ib'=b,$ \item we have $\tilde{e}_i(b)\neq 0$ if and only if $e_i(b)\neq 0$, \item if $e_i(b)\neq 0$ then we have \begin{equation}\label{perf} e_i(b)\in\mathbb{C}^\ast\tilde{e}_i(b)+V_i^{<l_i(b)-1}. \end{equation} \end{itemize} We denote it by $(B,\tilde{e}_i, \tilde{f}_i)$. For such a basis let $\mathrm{wt}(b)$ be the weight of $b$, let $\epsilon_i(b)=l_i(b)$ and let $$\varphi_i(b)=\epsilon_i(b)+\pair{\alpha_i\spcheck, \mathrm{wt}(b)}$$ for all $b\in B$. The data \begin{equation}\label{crystaldata} (B,\wt,\tilde{e}_i,\tilde{f}_i,\epsilon_i,\varphi_i) \end{equation} is a crystal. We will always attach this crystal structure to $(B,\tilde{e}_i,\tilde{f}_i)$. We call $b\in B$ a primitive element if $e_i(b)=0$ for all $i\in\mathbb{Z}/e\mathbb{Z}$. Let $B^+$ be the set of primitive elements in $B$. Let $V^+$ be the vector space spanned by all the primitive vectors in $V$. The following lemma is \cite[Claim $5.32$]{BK}. \begin{lemme}\label{basis} For any perfect basis $(B,\tilde{e}_i,\tilde{f}_i)$ the set $B^+$ is a basis of $V^+$. \end{lemme} \begin{proof} By definition we have $B^+\subset V^+$. Given a vector $v\in V^+$, there exist $\zeta_1,\ldots, \zeta_r\in\mathbb{C}^\ast$ and distinct elements $b_1,\ldots,b_r\in B$ such that $v=\sum_{j=1}^r\zeta_jb_j$. For any $i\in\mathbb{Z}/e\mathbb{Z}$ let $l_i=\max\{l_i(b_j):\,1\leqslant j\leqslant r\}$ and $J=\{j:\,l_i(b_j)=l_i,\,1\leqslant j\leqslant r\}$. Then by the third property of a perfect basis there exist $\eta_j\in\mathbb{C}^\ast$ for $j\in J$ and a vector $w\in V_i^{<l_i-1}$ such that $0=e_i(v)=\sum_{j\in J}\zeta_j\eta_j\tilde{e}_i(b_j)+w$. For distinct $j, j'\in J$, we have $b_j\neq b_{j'}$, so $\tilde{e}_i(b_j)$ and $\tilde{e}_i(b_{j'})$ are different unless they are zero. Moreover, since $l_i(\tilde{e}_i(b_j))=l_i-1$, the equality yields that $\tilde{e}_i(b_j)=0$ for all $j\in J$. So $l_i=0$. Hence $b_j\in B^+$ for $j=1,\ldots,r$. \end{proof}
\subsection{}\label{ss:perfectbasis}
Given an $\widehat{\Lie{sl}}_e$-categorification on a $\mathbb{C}$-linear artinian abelian category $\mathcal{C}$ with the adjoint pair of endo-functors $(U,V)$, $X\in\End(U)$ and $T\in\End(U^2)$, one can construct a perfect basis of $K(\mathcal{C})$ as follows. For $i\in\mathbb{Z}/e\mathbb{Z}$ let $U_i$, $V_i$ be the $q^i$-eigenspaces of $X$ in $U$ and $V$. By definition, the action of $X$ restricts to each $U_i$. One can prove that $T$ also restricts to an endomorphism of $(U_i)^2$, see for example the beginning of Section 7 in \cite{CR}. It follows that the data $(U_i, V_i, X, T)$ gives an $\mathfrak{sl}_2$-categorification on $\mathcal{C}$ in the sense of \cite[Section 5.21]{CR}. By \cite[Proposition 5.20]{CR} this implies that for any simple object $L$ in $\mathcal{C}$, the object $\mathrm{head}(U_i(L))$ (resp. $\mathrm{soc}(V_iL)$) is simple unless it is zero.
Let $B_{\mathcal{C}}$ be the set of isomorphism classes of simple objects in $\mathcal{C}$. As part of the data of the $\widehat{\Lie{sl}}_e$-categorification, we have a decomposition $\mathcal{C}=\oplus_{\tau\in P}\mathcal{C}_\tau$. For a simple module $L\in\mathcal{C}_\tau$, the weight of $[L]$ in $K(\mathcal{C})$ is $\tau$. Hence $B_{\mathcal{C}}$ is a weight basis of $K(\mathcal{C})$. Now for $i\in\mathbb{Z}/e\mathbb{Z}$ define the maps \begin{eqnarray*} \tilde{e}_i:& B_{\mathcal{C}}\rightarrow B_{\mathcal{C}}\sqcup\{0\},\quad &[L]\mapsto [\head (U_iL)],\\ \tilde{f}_i:& B_{\mathcal{C}}\rightarrow B_{\mathcal{C}}\sqcup\{0\},\quad &[L]\mapsto [\soc (V_iL)]. \end{eqnarray*} \begin{prop}\label{perfectbasis} The data $(B_\mathcal{C},\tilde{e}_i,\tilde{f}_i)$ is a perfect basis of $K(\mathcal{C})$. \end{prop} \begin{proof} Fix $i\in \mathbb{Z}/e\mathbb{Z}$. Let us check the conditions in the definition in order: \begin{itemize} \item for two simple modules $L$, $L'\in\mathcal{C}$, we have $\tilde{e}_i([L])=[L']$ if and only if $0\neq\Hom(U_iL,L')=\Hom(L,V_iL'),$ if and only if $\tilde{f}_i([L'])=[L]$. \item it follows from the fact that any non trivial module has a non trivial head. \item this is \cite[Proposition 5.20(d)]{CR}. \end{itemize} \end{proof}
\subsection{}\label{ss:mainresult}
Let $B_{\mathcal{F}_\mathbf{s}}$ be the set of $l$-partitions. In \cite{JMMO} this set is given a crystal structure. We will call it the crystal of the Fock space $\mathcal{F}_\mathbf{s}$.
\begin{thm}\label{thm:main} (1) The set $$B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}=\{[L(\lambda)]\in K(\mathcal{O}_{\mathbf{h},\mathbb{N}}): \lambda\in\mathcal{P}_{n,l}, n\in\mathbb{N}\}$$ and the maps \begin{eqnarray*} \tilde{e}_i:& B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}\rightarrow B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}\sqcup\{0\},\quad &[L]\mapsto [\head (E_iL)],\\ \tilde{f}_i:& B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}\rightarrow B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}\sqcup\{0\},\quad &[L]\mapsto [\soc (F_iL)]. \end{eqnarray*} define a crystal structure on $B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}$.
(2) The crystal $B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}$ given by (1) is isomorphic to the crystal $B_{\mathcal{F}_\mathbf{s}}$. \end{thm}
\begin{proof} (1) Applying Proposition \ref{perfectbasis} to the $\widehat{\Lie{sl}}_e$-categorification in Theorem \ref{thm:categorification} yields that $(B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}},\tilde{e}_i,\tilde{f}_i)$ is a perfect basis. So it defines a crystal structure on $B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}$ by (\ref{crystaldata}).
(2) It is known that $B_{\mathcal{F}_\mathbf{s}}$ is a perfect basis of $\mathcal{F}_\mathbf{s}$. Identify the $\widehat{\Lie{sl}}_e$-modules $\mathcal{F}_\mathbf{s}$ and $K(\mathcal{O}_{\mathbf{h},\mathbb{N}})$. By Lemma \ref{basis} the sets $B_{\mathcal{F}_\mathbf{s}}^+$ and $B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}^+$ are two weight bases of $\mathcal{F}_\mathbf{s}^+$. So there is a bijection $\psi: B_{\mathcal{F}_\mathbf{s}}^+\rightarrow B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}^+$ such that $\wt(b)=\wt(\psi(b))$. Since $\mathcal{F}_\mathbf{s}$ is a direct sum of highest weight simple $\widehat{\mathfrak{sl}}_e$-modules, this bijection extends to an automorphism $\psi$ of the $\widehat{\mathfrak{sl}}_e$-module $\mathcal{F}_\mathbf{s}$. By \cite[Main Theorem $5.37$]{BK} any automorphism of $\mathcal{F}_\mathbf{s}$ which maps $B_{\mathcal{F}_\mathbf{s}}^+$ to $B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}^+$ induces an isomorphism of crystals $B_{\mathcal{F}_\mathbf{s}}\cong B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}$. \end{proof}
\begin{rmq} One can prove that if $n < e$ then a simple module $L\in\mathcal{O}_{\mathbf{h},n}$ has finite dimension over $\mathbb{C}$ if and only if the class $[L]$ is a primitive element in $B_{\mathcal{O}_{\mathbf{h},\mathbb{N}}}$. In the case $n=1$, we have $B_n(l)=\mu_l$, the cyclic group, and the primitive elements in the crystal $B_{\mathcal{F}_\mathbf{s}}$ have explicit combinatorial descriptions. This yields another proof of the classification of finite-dimensional simple modules of $H_\mathbf{h}(\mu_l)$, which was first given by W. Crawley-Boevey and M. P. Holland. See the type $A$ case of \cite[Theorem $7.4$]{CB}. \end{rmq}
\end{document} |
\begin{document}
\author{G. Boyadzhiev} \title{Comparison principle for non-cooperative elliptic systems} \date{28.02.2007} \maketitle
\section{Introduction} In this paper we consider weakly coupled linear elliptic systems of the form
\ \\(1)\qquad $ L_Mu=0$ in a bounded domain $\Omega \subset R^n$ with smooth boundary
\ \\and boundary data $u(x)=g(x)$ on $\partial \Omega $, where $L_M=L+M$, $L$ is the diagonal matrix operator $L=diag\left(L_1, L_2, ... L_N\right) $ (with null off-diagonal elements), and $M=\{m_{ik}(x)\}_{i,k=1}^{N}$ is a matrix. The scalar operators
\ \\ \qquad $L_ku_k = -\sum_{i,j=1}^{n}D_j \left( a_k^{ij}(x)D_iu_k \right) +\sum_{i=1}^{n}b_k^i(x)D_iu_k+c_ku_k$ in $ \Omega $
\ \\are uniformly elliptic for $k=1,2,...N$, i.e. there are constants $\lambda,\Lambda >0$ such that
\ \\(2) \qquad $\lambda \left| \xi \right|
^2\leq \sum_{i,j=1}^{n}a_k^{ij}(x) \xi _i\xi _j\leq \Lambda \left|
\xi \right| ^2$
\ \\for every $k$ and any $\xi =(\xi _1,...\xi _n)\in R^n$.
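For instance (an illustrative special case, not part of the original assumptions), if $L_ku_k=-\Delta u_k$, i.e. $a_k^{ij}(x)=\delta_{ij}$ and $b_k^i=c_k=0$, then
$$\sum_{i,j=1}^{n}a_k^{ij}(x)\xi_i\xi_j=\sum_{i=1}^{n}\xi_i^2=\left|\xi\right|^2,$$
so (2) holds with $\lambda=\Lambda=1$.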
The coefficients $c_k$ and $m_{ik}$ in (1) are assumed to be continuous in $\overline{\Omega}$, and $a_k^{ij}(x),b_k^i(x)\in W^{1,\infty}(\Omega)\cap C(\overline{\Omega})$.
Quasi-linear weakly coupled elliptic systems
\ \\(3) \qquad $Q^l(u)=-diva^l(x,u^l,Du^l)+F^l(x,u^1,...u^N,Du^l)=f^l(x)$ in $\Omega $
\ \\(4) \qquad $u^l(x)=g^l(x)$ on $\partial \Omega $
\ \\$l=1,...N$ are considered as well.
System (3) is assumed to be uniformly elliptic, i.e. there are continuous and positive functions $\lambda (\left| u\right|),\Lambda (\left| u\right| )$, $\left| u\right| =\left( \left( u^1\right) ^2+...+\left( u^N\right) ^2\right) ^{1/2}$, such that $\lambda (s)$ is monotone decreasing, $\Lambda (s)$ is monotone increasing, and
\ \\ \qquad $\lambda (\left| u\right| )\left| \xi ^l\right| ^2\leq \sum_{i,j=1}^{n}\frac{\partial a^{li}}{
\partial p_j^l}(x,u^1,...u^N,p^l)\xi _i^l\xi _j^l\leq \Lambda (\left|
u\right| )\left| \xi ^l\right| ^2$
\ \\for every $u^l$ and $\xi ^l=(\xi _1^l,...\xi _n^l)\in R^n$, $ l=1,2,...N.$
The coefficients $a^l(x,u,p)$, $ F^l(x,u,p)$, $f^l(x)$, $g^l(x)$ are assumed to be at least measurable with respect to the $x$ variable and locally Lipschitz continuous in $u^l,u$ and $p$, i.e.
\ \\ $ \begin{array}{l}
\left| F^l(x,u,p)-F^l(x,v,q)\right| \leq C(K)\left( \left|
u-v\right|
+\left| p-q\right| \right) , \\ \\
\left| a^l(x,u^l,p)-a^l(x,v^l,q)\right| \leq C(K)\left( \left|
u^l-v^l\right| +\left| p-q\right| \right) \end{array}$
\ \\for every $x\in \Omega$, $\left| u\right| +\left| v\right|
+\left| p\right| +\left| q\right| \leq K$, $l=1,...N.$
Hereafter $f^-(x)=min(f(x),0)$ and $f^+(x)=max(f(x),0)$ denote the non-positive and, respectively, the non-negative part of the function $f$. The same convention is used for matrices as well. For instance, we denote by $M^+$ the non-negative part of $M$, i.e. $M^+={\{ m^+_{ij}(x)\} }_{i,j=1}^{N} $.
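As a purely illustrative example of this notation (the matrix below is chosen only for illustration), for $N=2$ one may take
$$M=\left( \begin{array}{cc} 1 & -2 \\ 3 & -4 \end{array} \right),\qquad M^-=\left( \begin{array}{cc} 0 & -2 \\ 0 & -4 \end{array} \right),\qquad M^+=\left( \begin{array}{cc} 1 & 0 \\ 3 & 0 \end{array} \right),$$
so that $M=M^-+M^+$, all entries of $M^-$ are non-positive and all entries of $M^+$ are non-negative.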
\ \\This paper concerns the validity of the comparison principle for weakly-coupled elliptic systems. Let us briefly recall the definition of the comparison principle in a weak sense for linear systems.
{\it \ The comparison principle holds in a weak sense for the operator $L_M$ if $(L_Mu,v)\leq 0$ and $u|_{\partial \Omega}\leq 0$ imply $(u,v)\leq 0$ in $\Omega$ for every $v>0$, $v\in \left(W^{1,\infty}(\Omega)\cap C_0(\overline{\Omega})\right)^N $ and $u\in \left(W^{1,\infty}(\Omega)\cap C(\overline{\Omega})\right)^N $.}
\ \\As is well known, the comparison principle fails for some elliptic systems (see Theorem 6 below). On the other hand, there are broad classes of elliptic systems for which the comparison principle holds. According to Theorem 1 below, one of these classes can be constructed using the following condition:
\ \\(6) {\it \qquad There is a real-valued principal eigenvalue $\lambda_{\Omega_0}$ of $L_M$ and of its adjoint operator ${L^*}_M$ for every $\Omega_0\subseteq\Omega$, such that the corresponding eigenfunctions $\tilde{w}_{\Omega_0},w_{\Omega_0}\in {\left( W_{loc}^{2}(\Omega_0)\bigcap C_0(\overline{\Omega_0}) \right)}^N$ are positive}.$\Box$
\ \\\emph{{Remark 1: By adjoint operator we mean ${L^*}_M=L^{*}+M^{t}$, $L^{*}=diag\left(L^{*}_1, L^{*}_2,..., L^{*}_N\right) $, and $L^{*}_{k}$ are $L^2$-adjoint operators to $L_{k}$}. The principal eigenvalue is the first one, or the smallest eigenvalue.}
\ \\More precisely, the class is $C^6=\{L_M$ satisfies (6) and $\lambda_{\Omega_0}>0$ for every $\Omega_0\subseteq\Omega\}$, i.e. $C^6$ contains the elliptic systems possessing a positive principal eigenvalue with a positive corresponding eigenfunction in every $\Omega_0\subseteq\Omega$. In this case the necessary and sufficient condition for the validity of the comparison principle for systems (Theorem 1) is the same as the one for a single equation (see [2]).
{\bf Theorem 1}{\it : Assume that (2) and (6) are satisfied. The comparison principle holds for system (1) if $\lambda_{\Omega_0} >0$ for every $\Omega_0\subseteq\Omega$, where $\lambda_{\Omega_0}$ is the principal eigenvalue of the operator $L_{M}$ on $\Omega_0$. If the principal eigenvalue $\lambda=\lambda_{\Omega}\leq 0$, then the comparison principle does not hold.}
If we consider classical solutions, then the comparison principle holds if and only if $\lambda=\lambda_{\Omega}> 0$.
Proof: 1. Assume that the comparison principle does not hold for $L_M$. Let $\underline{u}, \overline{u}\in \left(W^{1,\infty}(\Omega)\cap C(\overline{\Omega})\right)^N$ be arbitrary weak sub- and super-solutions of $L_M$. Then $u=\underline{u}- \overline{u}\in \left(W^{1,\infty}(\Omega)\cap C(\overline{\Omega})\right)^N$ is a weak sub-solution of $L_M$, i.e. $(L_M(u),v)\leq 0$ in $\Omega$ for any $v\in \left( W^{1,\infty}(\Omega)\cap C_0(\overline{\Omega})\right)^N$, $v>0$, and $u^+\equiv 0$ on $\partial\Omega$. Suppose $u^+\neq 0$. Then
$$0\geq \left(L_Mu^+,w_{\Omega_0}\right)=\left(u^+,L^*_Mw_{\Omega_0}\right)=\lambda\left(u^+,w_{\Omega_0}\right)>0$$
\ \\for $\lambda_{\Omega_0}$, $w_{\Omega_0}$ defined in (6).
Therefore $u^+\equiv 0$, i.e. for any sub- and super-solution of $L_M$ we obtain $\underline{u}\leq \overline{u}$.
2. Suppose $\lambda\leq 0$ and $\tilde{w}$ is the corresponding positive eigenfunction of $L_M$. Then $\tilde{w}>0$ but $L_M(\tilde{w})=\lambda \tilde{w} \leq 0$. Therefore the comparison principle does not hold for (1).$\Box$
Unfortunately, there are some obstacles to applying this general theorem, since condition (6) is not easy to check. First of all, the system (1) may have no principal eigenvalue at all (see [10]). Another obstacle is the computation of $\lambda$ even when it exists.
The comparison principle holds for the members of another broad class, the so-called cooperative elliptic systems, i.e. the systems with $m_{ij}(x)\leq 0$ for $i\neq j$ (see [9]). Most results on the positivity of the classical solutions of linear elliptic systems with non-negative boundary data are obtained for cooperative systems (see [6,7,13,15,16,18,19,21]). As is well known, positivity and the comparison principle are equivalent for linear systems. As for the non-linear ones, positivity of the solutions is a weaker statement than the comparison principle; positivity can hold without ordering of sub- and super-solutions or uniqueness of the solutions at all.
The comparison principle for the diffraction problem for weakly coupled quasi-linear elliptic systems is proved in [3].
The spectral properties of the cooperative $L_M$ have been studied as well. A powerful tool in the cooperative case is the theory of positive operators (see [17]), since the inverse of the cooperative operator $L_{M^-}$ is positive in the weak sense. Unfortunately, this approach cannot be applied to the general case $M\neq M^-$, since $(L_M)^{-1}$ is not a positive operator in general. Nevertheless, in [20] the validity of the comparison principle is proved for non-cooperative systems obtained by small perturbations of cooperative ones.
Using an unconventional approach, an interesting result is obtained in [14] for the two-dimensional system (1) with $m_{11}=m_{22}=0$ and $m_{ij}=p_i(x)>0$ for $i\neq j$, $i=1,2$. Theorem 6.5 in [14] states the existence of a principal eigenvalue with a positive principal eigenfunction in the cone $C_U=P_U\times (-P_U)$, where $P_U$ is the cone of the positive functions in $W^1_{\infty}(\Omega)$. In the same paper, Theorem 6.3 provides sharp conditions for the validity of the comparison principle with respect to the order in $C_U=P_U\times (-P_U)$, i.e. $(u_1,u_2)\leq (v_1,v_2)$ if and only if $u_1\leq v_1$ and $u_2\geq v_2$.
In [12] the existence and local stability of positive solutions are studied for systems with $L_k=-d_k \Delta$, a linear cooperative and a non-linear competitive part, and Neumann boundary conditions. Theorem 2.4 in [12] is similar to Theorem 2 in the present article for $L_k=-d_k \Delta$.
Let us recall that the comparison principle was proved in [11] for viscosity sub- and super-solutions of general fully non-linear elliptic systems $ G^l(x,u^1,...u^N,Du^l,D^2u^l)=0$, $l=1,...N$ (see also the references therein). The systems considered in [11] are degenerate elliptic and satisfy the same structure-smoothness conditions as in the case of a single equation. The first main assumption in [11] guarantees the quasi-monotonicity of the system. Quasi-monotonicity in the non-linear case is the condition equivalent to cooperativeness in the linear one.
The second main assumption in [11] comes from the method of doubling of the variables in the proof.
\ \\This work extends the results obtained for cooperative systems to non-cooperative ones. The general idea is the separation of the cooperative and the competitive part of system (1), as displayed below. Then, using appropriate spectral properties of the cooperative part, conditions for the validity of the comparison principle for the initial system are derived in Theorems 3 and 4. In particular, Theorem 3 employs the fact that an irreducible cooperative system possesses a principal eigenvalue whose corresponding eigenfunction is positive, i.e. condition (6) holds. In this way some sufficient conditions for the validity of the comparison principle for the non-cooperative system are obtained as well. Analogously, Theorem 4 provides the corresponding conditions for the validity of the comparison principle for competitive systems. The conditions derived in Theorems 3 and 4 are not sharp.
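In the notation introduced above, this separation reads
$$L_M=L+M=\left(L+M^-\right)+M^+=L_{M^-}+M^+,$$
where $L_{M^-}$ is cooperative and $M^+$ collects the non-negative (competitive) entries of $M$.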
Since predator-prey systems are a basic model example of non-cooperative systems, Theorem 5 adapts the main idea of Theorem 4 to systems whose cooperative part is a triangular matrix. A sufficient condition for the validity of the comparison principle for predator-prey systems is derived there.
Theorems 6 and 7 give conditions under which the comparison principle fails.
The results of Theorems 3 and 4 are adapted to quasi-linear systems in Theorem 8.
\section{Comparison principle for linear elliptic systems}
As a preliminary statement we need the following well-known fact.
{\bf Theorem 2}{\it : Every irreducible cooperative system $L_{M^-}$ has a unique principal eigenvalue and the corresponding eigenfunction is positive}.
The principal eigenfunction of a linear operator is unique up to positive multiplicative constants, but for our purpose it is the positivity that matters.
In fact, Theorem 2 is in the scope of Theorems 11 and 12 in [1]. Theorems 11 and 12 in [1] concern second order cooperative linear elliptic systems with cooperative boundary conditions and are more general than Theorem 2. For the sake of completeness, a sketch of the proof of Theorem 2 follows. It is based on the idea of adding a large positive constant to the operator. The same idea appears, for instance, in [16] and in many other works.
Sketch of the proof: Let us consider the operator $L_c= L_{M^-}+cI$, where $c\in R$ is a constant and $I$ is the $N\times N$ identity matrix. Then $L_c$ satisfies the conditions of Theorem 1.1.1 in [16] if $c$ is large enough, namely
1. $L_c$ is cooperative;
2. $L_c$ is fully coupled;
3. There is a super-solution $\varphi$ of $L_c \varphi =0$.
Conditions 1 and 2 above are obviously fulfilled by $L_c$, since $L_{M^-}$ is cooperative and fully coupled, and $L_c$ inherits these properties from $L_{M^-}$.
As for condition 3, we construct the super-solution $\varphi$ using the principal eigenfunctions of the operators $L_k-c_k$. More precisely, $\varphi = (\varphi_1, \varphi_2,..., \varphi_N)$, where $ \left( L_k-c_k\right) \varphi_k={\lambda}_k \varphi_k$, and ${\lambda}_k,\varphi_k>0$ in $\Omega$. The existence of $\varphi_k$ is a well-known fact.
We claim that if $c$ is large enough then $\varphi$ is a super-solution of $L_c$, i.e. $\varphi \in {\left( W_{loc}^{2,n}(\Omega)\bigcap C(\overline{\Omega})\right)}^N$, $\varphi\geq 0$, $L_c \varphi \geq 0$ and $\varphi$ is not identically zero in $\Omega$.
Since we have chosen ${\varphi}_k$ to be the principal eigenfunctions of $L_k-c_k$, we have ${\varphi}_k \in C^{2}(\Omega)\bigcap C(\overline{\Omega})$ and ${\varphi}_k> 0$. It remains to prove that $L_c \varphi \geq 0$.
Let $$A_k = {\left(L_c\varphi \right)}_k = -\sum_{i,j=1}^{n}D_j \left( a_k^{ij}(x)D_i{\varphi}_k \right) +\sum_{i=1}^{n}b_k^i(x)D_i{\varphi}_k+ \sum_{i=1}^{N}m_{ki}(x){\varphi}_i +(c_k+c){\varphi}_k =$$
$$=({\lambda}_k+c_k+c){\varphi}_k+ \sum_{i=1}^{n}m_{ki}(x){\varphi}_i. $$
We show that $A_k> 0$ in $\Omega$ for every $k$, provided $c$ is chosen large enough.
First of all, if we denote by $n$ the outer unit normal vector to $\partial\Omega$, then
$${\frac{dA_k}{dn}}|_{\partial\Omega}= ({\lambda}_k+c_k+c)\frac{d{\varphi}_k}{dn} +\sum_{i=1}^{N}m_{ki}(x)\frac{d{\varphi}_i}{dn}$$ since
${\varphi_i}|_{\partial\Omega}=0$. Therefore there is a constant
$c'$, such that ${\frac{dA_k}{dn}}|_{\partial\Omega}<0$ for $c>c'$, since $\frac{{d\varphi}_i}{dn}<0$ on $\partial\Omega$ (see [14], Theorem 7, p.65) and the ${\lambda}_i$ are independent of $c$.
Hence there is a neighbourhood ${\Omega}_\varepsilon = \{x\in\overline{\Omega}:dist(x,\partial\Omega)<\varepsilon\}$ for some $\varepsilon>0$, such that
$${\frac{dA_k}{dn}}|_{{\Omega}_\varepsilon}<0$$.
Since $A_k=0$ on $\partial\Omega$, it follows that $A_k>0$ in ${\Omega}_\varepsilon$.
The set $\Omega \setminus {\Omega }_\varepsilon$ is compact, therefore there is $c''>0$ such that $A_k>0$ in the compact set $\Omega \setminus {\Omega}_\varepsilon$ for $c>c''$, since ${\varphi}_k>0$ in ${\Omega} \setminus {\Omega}_{\varepsilon}$.
Taking $c>\max(c',c'')$ we obtain $A_k>0$ in $\Omega$, therefore $\varphi$ is indeed a super-solution of $L_c$.
The rest of the proof follows the proof of Theorem 1.1.1 [16].$\Box$
A reasonable question is: could the non-cooperative part of the system ``improve'' the spectral properties of the cooperative part? In other words, if the cooperative part of the system has a non-positive principal eigenvalue, what conditions on the competitive part guarantee that the comparison principle holds for the system? An answer to this question is given in the following
{\bf Theorem 3}{\it : Let (1) be a weakly coupled system satisfying (2) and such that the adjoint $L^*_{M^-}$ of its cooperative part is irreducible. Then the comparison principle holds for system (1) if there is $x_0\in \Omega$ such that
\ \\(7)\qquad $
\left(\lambda+\sum_{k=1}^{N}m_{kj}^{+}(x_0)\right)>0$ for
$j=1...N$
\ \\and
\ \\(8)\qquad $
\lambda+ m_{jj}^{+}(x)\geq 0$ for every $x\in \Omega$ and
$j=1...N$
\ \\where $\lambda=inf_{\Omega_0\subseteq \Omega}\{\lambda_{\Omega_0}$ : $\lambda_{\Omega_0}$ is the principal eigenvalue of the operator $L_{M^-}$ on $\Omega_0$\}}.
It is obvious that if $ \lambda> 0$, then the comparison principle holds. The more interesting case is $ \lambda < 0$. Then $m^+_{kj}$ can ``improve'' the properties of $L_M$ with respect to the validity of the comparison principle. Furthermore, if $ \lambda+ m_{jj}^{+}(x)> 0$, then (7) is a consequence of (8). Condition (7) is important when $ \lambda+ m_{jj}^{+}(x)\equiv 0$.
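A purely numerical illustration of this effect (the values below are chosen only for illustration): let $N=2$ and $\lambda=-1$. If $m^+_{11}(x)\geq 1$ and $m^+_{22}(x)\geq 1$ in $\Omega$, then (8) holds, and if moreover $m^+_{21}(x_0)>0$ and $m^+_{12}(x_0)>0$ at some point $x_0$, then
$$\lambda+m^+_{11}(x_0)+m^+_{21}(x_0)>0,\qquad \lambda+m^+_{12}(x_0)+m^+_{22}(x_0)>0,$$
i.e. (7) holds as well, so Theorem 3 applies (provided its irreducibility assumption on $L^*_{M^-}$ is met) although the cooperative part alone has a negative principal eigenvalue.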
\emph{Remark 2: If $L^*_{M^-}$ is irreducible, then $L_{M^-}$ is irreducible as well. In fact $L^*_{M^-}=L^*+{M^-}^t$ and if ${M^-}^t$ is irreducible, then such is $M^-$.}
Proof: Suppose all conditions of Theorem 3 are satisfied by $L_M$ but the comparison principle does not hold for $L_M$. Let $\underline{u}, \overline{u}\in \left(W^{1,\infty}(\Omega)\cap C(\overline{\Omega})\right)^N$ be arbitrary weak sub- and super-solutions of $L_M$. Then $u=\underline{u}- \overline{u}\in \left(W^{1,\infty}(\Omega)\cap C(\overline{\Omega})\right)^N$ is a weak sub-solution of $L_M$ as well, i.e. $(L_M(u),v)\leq 0$ in $\Omega$ for any $v\in \left(W^{1,\infty}(\Omega)\cap C_0(\overline{\Omega})\right)^N$, $v>0$, and $u^+\equiv 0$ on $\partial\Omega$.
Assume $u^+\neq 0$. Let $\Omega_{supp(u^+)}\subseteq supp(u^+)$ be a subdomain with smooth boundary. Then for any $v> 0$, $v\in \left(W^{1,\infty}(\Omega_{supp(u^+)})\cap C(\overline{\Omega_{supp(u^+)}})\right)^N$
\ \\(9)\qquad $0\geq \left( L_{M}u^+,v\right) =\left(u^+,L^*_{M^-}v\right)+\left(M^+u^+,v\right)$
\ \\is satisfied since $L_{M}(u^+)\leq 0$.
Since $L_{M^-}$ is a cooperative operator, so is ${\left( L_{M^-}\right)}^{*}=L^{*}+(M^-)^{t}$. According to Theorem 2 above, there is a unique positive eigenfunction $w\in {\left( W_{loc}^{2,n}(\Omega_{supp(u^+)})\bigcap C_0(\overline{\Omega_{supp(u^+)}}) \right)}^N$ such that $w>0$ and $L^*_{M^-}w=\lambda_{\Omega_{supp(u^+)}} w$, where $\lambda_{\Omega_{supp(u^+)}}\geq\lambda$ is the principal eigenvalue of $L_{M^-}$ (and of $L^*_{M^-}$) on $\Omega_{supp(u^+)}$.
Then $w$ is a suitable test-function for (9). Rewriting the inequality (9) for $v=w$ we obtain
\ \\$0\geq \left(u^+,L^*_{M^-}w\right)+\left(M^+u^+,w\right)= \left(u^+,\lambda_{\Omega_{supp(u^+)}} w\right)+\left(M^+u^+,w\right)\geq \left(u^+,\lambda w\right)+\left(M^+u^+,w\right)$ (using $u^+\geq 0$, $w>0$ and $\lambda_{\Omega_{supp(u^+)}}\geq\lambda$),
\ \\or componentwise
\ \\(10)\qquad $0\geq \left(u^{+}_{k},\lambda w_k\right)+\left({\sum}_{j=1}^{N}m^{+}_{kj}u^{+}_{j},w_k\right)$
\ \\for $k=1,...N$.
The sum of inequalities (10) is
\ $0\geq {\sum}_{k=1}^{N}\left( \left(u^{+}_{k},\lambda w_k\right) +\left({\sum}_{j=1}^{N}m^{+}_{kj}u^{+}_{j},w_k\right)\right)=$
\ \\ $={\sum}_{k=1}^{N}\left(u^{+}_{k},{\lambda}w_k\right) +{\sum}_{k,j=1}^{N}\left(u^{+}_{j},m^{+}_{kj}w_k\right)=$
\ \\ $={\sum}_{j=1}^{N}\left(u^{+}_{j},{\sum}_{k=1}^{N}\left( {{\delta}_{jk}\lambda}+m^{+}_{kj}\right)w_k\right)>0$
\ \\since $u^{+}\geq 0$, $u^{+}\not\equiv 0$, $w_k>0$, and conditions (7) and (8) hold. Condition (8) is used in $\left(u^{+}_{k},({\lambda}+m^{+}_{kk})w_k\right)\geq 0$.
\ \\The above contradiction proves that $u^+\equiv 0$
and therefore the comparison principle holds for operator $L_M$.$\Box$
Since only systems with irreducible cooperative part are considered in [1] and [18], the ones with reducible $L_{M^-}$ are excluded from the range of Theorem 3. Nevertheless, the same idea is applicable to some systems with reducible cooperative part as well, as given in Theorem 4.
{\bf Theorem 4}{\it : Assume $m^-_{ij}\equiv 0$ for $i\neq j$ and (2) is satisfied. Then the comparison principle holds for system (1) if there is $x_0\in \Omega$ such that
\ \\(11)\qquad $
\left(\lambda_{j}+\sum_{k=1}^{N} m_{kj}^{+}(x_0)\right)>0$ for
$j=1...N$
\ \\and
\ \\(12)\qquad $
\lambda_{j}+ m_{jj}^{+}(x)\geq 0$ for every $x\in \Omega$
$j=1...N$,
\ \\where $\lambda_{j}=inf_{\Omega_0\subseteq\Omega}\{\lambda_{j\Omega_0}$ : $\lambda_{j\Omega_0}$ is the principal eigenvalue of the operator $L_j+m^-_{jj}$ on $\Omega_0$\}}.
Theorem 4 is formulated for a diagonal matrix $M^-$. The statement remains valid, with obvious modifications, if $M^-$ has a block-diagonal structure, i.e.
$$M^-=\left( \begin{array}{cccc} M^-_1 & 0 & \cdots & 0 \\ 0 & M^-_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & M^-_r \end{array} \right) $$
\ \\where $M^-_k$ are $d_k$-dimensional square matrices, $\sum d_k\leq N$.
Proof: Set $\tilde{L}=L+M^-=diag\left(L_1+m^-_{11},...,L_N+m^-_{NN}\right)$, so that $L_M=\tilde{L}+M^+=\tilde{L}_{M^+}$. Let all conditions of Theorem 4 be satisfied by $L_M$ but suppose that the comparison principle does not hold for $\tilde{L}_{M^+}$. Let $\underline{u}, \overline{u}\in \left(W^{1,\infty}(\Omega)\cap C(\overline{\Omega})\right)^N$ be arbitrary weak sub- and super-solutions of $\tilde{L}_{M^+}$. Then $u=\underline{u}- \overline{u}\in \left(W^{1,\infty}(\Omega)\cap C(\overline{\Omega})\right)^N$ is a weak sub-solution of $\tilde{L}_{M^+}$ as well, i.e. $(\tilde{L}_{M^+}(u),v)\leq 0$ in $\Omega$ for any $v\in \left(W^{1,\infty}(\Omega)\cap C_0(\overline{\Omega})\right)^N$, $v>0$, and $u^+\equiv 0$ on $\partial\Omega$.
Suppose that $u^+\neq 0$. Let $\Omega_{supp(u^+)}\subseteq supp(u^+)$ be a subdomain with smooth boundary. Then for any $v> 0$, $v\in \left(W^{1,\infty}(\Omega_{supp(u^+)})\cap C(\overline{\Omega_{supp(u^+)}})\right)^N$
\ \\(13)\qquad $0\geq \left( \tilde{L}_{M^+}u^+,v\right) =\left(u^+,\tilde{L}^*v\right)+\left(M^+u^+,v\right)$
\ \\is satisfied since $\tilde{L}_{M^+}u^+\leq 0$.
According to Theorem 2.1 in [2], there is a positive principal eigenfunction for the operator ${\tilde{L}}^*_k$ in $\Omega_{supp(u^+)}$, i.e. there exists $w_k(x)\in C^2(\Omega_{supp(u^+)})$ such that ${{\tilde{L}}^*}_kw_k(x)={\lambda}_kw_k(x)$ and $w_k(x)>0$. Note that the $w_k$ are classical solutions.
Then the vector-function $w(x)=(w_1(x),...,w_n(x))$, composed of the principal eigenfunctions $w_k(x)$, is suitable as a test-function in (13).
Writing componentwise inequality (13) for $v=w$ we obtain
\ \\(14)\qquad $0\geq \left(u^{+}_{k},{{\tilde{L}}^*}_{k}w_k\right)+\left({\sum}_{j=1}^{N}m^{+}_{kj}u^{+}_{j},w_k\right)$
\ \\for $k=1,...N$.
The sum of inequalities (14) is
\ $0\geq {\sum}_{k=1}^{N}\left( \left(u^{+}_{k},{{\tilde{L}}^*}_{k}w_k\right) +\left({\sum}_{j=1}^{N}m^{+}_{kj}u^{+}_{j},w_k\right)\right)=$
\ \\ $={\sum}_{k=1}^{N}\left(u^{+}_{k},{\lambda}_{k}w_k\right) +{\sum}_{k,j=1}^{N}\left(u^{+}_{j},m^{+}_{kj}w_k\right)=$
\ \\ $={\sum}_{j=1}^{N}\left(u^{+}_{j},{\sum}_{k=1}^{N}\left( {{\delta}_{jk}\lambda}_{j}+m^{+}_{kj}\right)w_k\right)>0$
\ \\since $u^{+}> 0$, $w_k>0$, (11) and (12).
\ \\The above contradiction proves that $u^+\equiv 0$
and therefore the comparison principle holds for the operator $L_M=\tilde{L}_{M^+}$.$\Box$
\emph{Remark 3: It is obvious that conditions (7),(8), and respectively, (11), (12), can be substituted by the sharper condition ${\sum}_{k=1}^{N}\left( {{\delta}_{jk}\lambda_k}+m_{kj}\right) w_k>0$ for every $x\in \Omega$ and every $j=1...N$, which is useful only if the exact values of the eigenfunctions $w_k$ can be computed.}
The main idea in Theorem 4 can be modified for systems with triangular cooperative part, say with null elements above the main diagonal; predator-prey systems, for instance, have a triangular cooperative part. Of course, if $m^-_{ij}(x)<0$ for every $x\in\Omega$ and $i=1,...N$, $j<i$, then the system is within the scope of Theorem 3. In Theorem 5 this condition is not necessary, i.e. some of the species may become extinct in some subregion of $\Omega$.
{\bf Theorem 5}{\it : Assume (2) is satisfied and the cooperative part $M^-$ is triangular for the system (1), i.e. $m^-_{ij}=0$ for $i=1,...N$, $j>i$. Then the comparison principle holds for system (1), if there is} $\varepsilon > 0$ \emph{such that}
\ \\(15)\qquad $
\left(\lambda_{j}-(1-\delta_{1j})\varepsilon+\sum_{k=1}^{N} m_{kj}^{+}(x_0)\right)>0$ for
$j=1...N$ \emph{for some} $x_0\in\Omega$
\ \\\emph{and}
\ \\(16)\qquad $
\lambda_{j}-(1-\delta_{1j})\varepsilon +m^+_{jj}(x)\geq 0$ for
every $x\in\Omega$ and $j=1...N$,
\ \\\emph{where $\lambda_{j}=inf_{\Omega_0\subseteq\Omega}\{\lambda_{j\Omega_0}$ : $\lambda_{j\Omega_0}$ is the principal eigenvalue of the operator $L_j+m^-_{jj}$ on $\Omega_0$\}}.
Note that the condition for triangular cooperative part does not exclude $m^-_{ij}(x_0)=0$ for some $x_0\in \Omega$, $i,j=1,...N$.
Proof: 1. The first equation in $L_{M^-}$ is not coupled, and there are a principal eigenvalue $\lambda_1$ and a principal eigenfunction $w_1>0$ of $L_1+m^-_{11}$ (see Theorem 2.1 in [2]). We put $\widetilde{w}_1=w_1$.
2. The equation $(L_2+m^-_{22})\widetilde{w}_2-\lambda\widetilde{w}_2 = m_{21}\widetilde{w}_1$ with null boundary conditions has a unique solution for $\lambda<\lambda_2$, where $\lambda_2$ is the principal eigenvalue of $L_2+m^-_{22}$. We put $\lambda=\lambda_2-\varepsilon$. Since the right-hand side $m_{21}\widetilde{w}_1$ is positive, the solution $\widetilde{w}_2$ is positive as well.
3. By induction we construct positive functions $\widetilde{w}_j$, $j=3,...N$ as solutions of $(L_j+m^-_{jj})\widetilde{w}_j-(\lambda_j-\varepsilon)\widetilde{w}_j = \sum_{i=1}^{j-1}m_{ji}\widetilde{w}_i$ with null boundary conditions. As usual, $\lambda_j$ are the principal eigenvalues of $L_j+m^-_{jj}$.
4. The rest of the proof follows the proof of Theorem 4 where $\lambda_j$ is substituted with $\lambda_j-\varepsilon$ and $w_j$ is substituted with $\widetilde{w}_j$.
For the simplest predator-prey system, $N=2$, $m_{11}=m_{22}=0$, $m_{12}>0$ and $m_{21}<0$, conditions (15) and (16) are $\lambda_1\geq0$, $\lambda_2>0$, where $\lambda_{j}$ is the principal eigenvalue of the operator $L_j$, $j=1,2$.
Condition (12) in Theorem 4 is useful for the construction of a counter-example showing that the comparison principle does not hold in general.
{\bf Theorem 6}{\it : Let (1) be a weakly coupled system with reducible cooperative part $L_{M^-}$ and (2) be satisfied. Suppose that (12) is not true, i.e. there is some $j\in \{ 1...N\}$ such that $ \left( \lambda_j + m_{jj}^{+}(x)\right)<0$ for every $x\in\Omega$, and $m^+_{jl}=0$ for $l\neq j$, $l=1,...N$. Then the comparison principle does not hold for system (1)}.
Proof: Let us suppose for simplicity that $j=1$ and $m^-_{1,j}=0$ for $j=2,...N$. We consider the vector-function $w(x)=(w_1(x),0,...,0)$, where $w_1(x)$ is the principal eigenfunction of $L_1+m^-_{11}$.
Then for the first component of $L_Mw$ we have $(L_Mw)_1= \lambda_1 w_1(x) + m_{11}^{+}w_1(x)<0$ in $\Omega$, where $\lambda_1$ is the principal eigenvalue of $L_1+m^-_{11}$, and $(L_Mw)_k=0$ for $k=2,...N$. Therefore, $L_Mw\leq 0$ but $w(x)\geq 0$, and the comparison principle fails. $\Box$
The simplest case illustrating Theorems 4 and 6 is $N=2$. Let us consider the irreducible competitive system
\ \\(17)\qquad $L_ju_j+\sum_{k=1}^2m_{jk}u_k=f_j$, $j=1,2$,
\ \\where $m_{11}=m_{22}=0$, $m_{12}>0$, $m_{21}>0$.
Suppose $\lambda_j$ is the principal eigenvalue of $L^*_j$, $j=1,2$. If $\lambda_j\geq 0$ and there is $x_0\in\Omega$ such that $\lambda_1+m_{21}(x_0)>0$ and $\lambda_2+m_{12}(x_0)>0$, then according to Theorem 4 the comparison principle holds for system (17), i.e. if $f_1>0$, $f_2>0$, then $u_1>0$ and $u_2>0$, where $u=\underline{u}-\overline{u}$ is defined in the proof of Theorem 3.
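As a concrete instance (chosen here only for illustration), take $\Omega=(0,\pi)\subset R^1$ and $L_1=L_2=-\frac{d^2}{dx^2}$ with null Dirichlet boundary conditions. Then $L^*_j=L_j$ and
$$-\frac{d^2}{dx^2}\sin x=\sin x \mbox{ in } (0,\pi),\qquad \sin x>0 \mbox{ in } (0,\pi),$$
so $\lambda_{j\Omega_0}\geq\lambda_{j\Omega}=1$ for every $\Omega_0\subseteq\Omega$ by the domain monotonicity of the principal Dirichlet eigenvalue, i.e. $\lambda_j=1>0$. Hence conditions (11) and (12) hold for any continuous $m_{12},m_{21}>0$ (here $m_{11}=m_{22}=0$), and the comparison principle holds for (17) by Theorem 4.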
If $\lambda_2+m_{12}(x)<0$ for every $x\in\Omega$, then according to Theorem 6 there is no comparison principle for system (17) in the componentwise order used in this paper.
A more detailed analysis of the validity of the comparison principle for system (17) can be carried out if we consider the order in the cone $C_U=P_U\times (-P_U)$, i.e. $(u_1,u_2)\leq (v_1,v_2)$ if and only if $u_1\leq v_1$ and $u_2\geq v_2$. Then Theorem 6.5 in [14] states the existence of a principal eigenvalue $\lambda$ of $L^*$ with a principal eigenfunction which is positive in $C_U$, i.e. $w_1(x)>0$, $w_2(x)<0$.
If $\lambda>0$, then according to Theorem 6.3 [14] the comparison principle holds in the order in $C_U$, i.e. if $f_1>0$, $f_2<0$, then $u_1>0$ and $u_2<0$.
If $\lambda<0$, then $(L_1(-u_1)+m_{12}u_2,w_1)+(L_2u_2+m_{21}(-u_1),w_2)= (-u_1, \lambda w_1+m_{21}w_2)+(u_2, m_{12}w_1+\lambda w_2)>0$. Hence $u_1<0$ and $u_2>0$ for $f_1>0$, $f_2>0$.
A statement analogous to Theorem 6 is valid for irreducible systems as well.
{\bf Theorem 7}{\it : Let (1) be a weakly coupled system with irreducible cooperative part $L_{M^-}$ and (2) be satisfied. Suppose that (7) is not true, i.e. there is some $j\in \{ 1...N\}$ such that $ \left( \lambda + m_{jj}^{+}(x)\right)<0$ for every $x\in\Omega$, and $m^+_{jl}=0$ for $l\neq j$, $l=1,...N$. Then the comparison principle does not hold for system (1)}.
Note that in Theorem 6 and Theorem 7 we need the violation of condition (12) and, respectively, condition (7) in all $\Omega$. The proof of Theorem 7 follows the proof of Theorem 6 with obvious adaptation.
\section{Comparison principle for quasi-linear elliptic systems}
Considering the quasi-linear system (3), (4), we use the results of the previous section to derive conditions for the validity of the comparison principle.
Let $u(x)\in \left(W^{1,\infty}(\Omega)\cap C(\overline{\Omega})\right)^N$ be a sub-solution and $v(x)\in \left(W^{1,\infty}(\Omega)\cap C(\overline{\Omega})\right)^N$ be a super-solution of (3), (4). The comparison principle holds for (3), (4) if $Q(u)\leq Q(v)$ in $\Omega$ and $u\leq v$ on $\partial \Omega$ imply $u\leq v$ in $\Omega$. The last three inequalities are understood in the weak sense.
Recall that the vector-function $u(x)$ is a weak sub-solution of (3), (4) if
$$\int_\Omega \left( a^{li}(x,u^l,Du^l)\eta _{x_i}^l+F^l(x,u^1,...u^N,Du^l)\eta ^l-f^l(x)\eta ^l\right) dx\leq 0$$
\ \\for $l=1,...N$ and for every nonnegative vector function $\eta\in \left(\stackrel{\circ }{W}^{1,\infty}(\Omega)\cap C(\overline{\Omega})\right)^N$ (i.e. $\eta=(\eta ^1,...\eta ^N)$, $\eta ^l\geq 0$, $\eta ^l\in W^{1,\infty}(\Omega)\cap C( \overline{\Omega})$ and $\eta ^l=0$ on $\partial \Omega$).
Analogously, $v(x)\in \left(W^{1,\infty}(\Omega)\cap C(\overline{\Omega})\right)^N$ is a super-solution of (3), (4), if $$\int_\Omega \left( a^{li}(x,v^l,Dv^l)\eta _{x_i}^l+F^l(x,v^1,...v^N,Dv^l)\eta ^l-f^l(x)\eta ^l\right) dx\geq 0$$
\ \\for $l=1,...N$ and for every nonnegative vector function $\eta \in \left(\stackrel{\circ }{W}^{1,\infty}(\Omega)\cap C(\overline{\Omega})\right)^N$.
Since $u(x)$ and $v(x)$ are sub-and super-solution respectively, then $\tilde{w}(x)=u(x)-v(x)$ is a weak sub-solution of the following problem
\ \\ $-\sum^{n}_{i,j=1} D_i \left( B_j^{li}D_j \tilde{w}^l +B_0^{li}\tilde{w}^l\right) +{\sum}^{N}_{k=1}E_k^l\tilde{w}^k+{\sum}^{n}_{i=1}H_i^lD_i \tilde{w}^l=0$ in $ \Omega $
\ \\with non-positive boundary data on $\partial \Omega $. Here
\ \\$B_j^{li}=\int_0^1\frac{\partial a^{li}}{\partial p_j} (x,P^l)ds$, $B_0^{li}=\int_0^1\frac{\partial a^{li}}{\partial u^l}(x,P^l)ds$, $E_k^l=\int_0^1\frac{\partial F^l}{\partial u^k}(x,S^l)ds$,
\ \\$H_i^l=\int_0^1\frac{\partial F^l}{\partial p_i}(x,S^l)ds$, $ P^l=\left( v^l+s(u^l-v^l),Dv^l+sD(u^l-v^l)\right) $,
\ \\ $S^l=\left( v+s(u-v),Dv^l+sD(u^l-v^l)\right)$.
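For the reader's convenience (a routine verification under the Lipschitz assumptions above, not spelled out in the original), the identity behind this linearization is
$$F^l(x,u^1,...u^N,Du^l)-F^l(x,v^1,...v^N,Dv^l)=\int_0^1\frac{d}{ds}F^l(x,S^l)\,ds={\sum}^{N}_{k=1}E_k^l\,\tilde{w}^k+{\sum}^{n}_{i=1}H_i^l\,D_i\tilde{w}^l,$$
and analogously $a^{li}(x,u^l,Du^l)-a^{li}(x,v^l,Dv^l)={\sum}^{n}_{j=1}B_j^{li}D_j\tilde{w}^l+B_0^{li}\tilde{w}^l$. Subtracting the integral inequalities for $u$ and $v$ and inserting these identities shows that $\tilde{w}=u-v$ is indeed a weak sub-solution of the linear problem above.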
Therefore, $\tilde{w}_{+}(x)=\max \left( \tilde{w}(x),0\right)$ is a sub-solution of
\ \\(18)\qquad $-\sum^{n}_{i,j=1} D_i\left( B_j^{li}D_j{\tilde{w}_+}^l +B_0^{li}{\tilde{w}_+}^l\right) +\sum^{N}_{k=1}E_k^l{\tilde{w}_+}^k+\sum^{n}_{i=1}H_i^lD_i {\tilde{w}_+}^l=0$ in $ \Omega $
\ \\with zero boundary data on $\partial \Omega$.
Equation (18) is equivalent in terms of matrix to
\ \\(19)\qquad $B_E\tilde{w}_+=(B+E)\tilde{w}_+=0$ in $\Omega$,
\ \\where $B=diag(B_1,B_2,...B_N)$, $B_l=-\sum^{n}_{i,j=1} D_i\left( B_j^{li}D_j{\tilde{w}_+}^l +B_0^{li}{\tilde{w}_+}^l\right) +\sum^{n}_{i=1}H_i^lD_i {\tilde{w}_+}^l$ and $E=\{E_k^l\}_{l,k=1}^N$.
If we denote $B_i^{kj}$ by $a_k^{ij}$, $B_0^{ki}+H_i^k$ by $b_k^i$, ${\sum}_{i=1}^n D_iB_0^{ki}+E_k^k$ by $m_{kk}(x)$ for $i,j=1...n$, $k=1...N$ and $E_k^l$ by $m_{lk}(x)$ for $k,l=1...N$, $k\neq l$, system (18) looks like system (1). Hereafter we follow the notations for system (1).
Suppose now that $\tilde{w}_{+}(x)$ is not identically equal to zero in $\Omega $, i.e. the comparison principle fails for (3), (4). Suppose $L_{M^-}$ is irreducible. Then
\ \\$0\geq \left(L_M \tilde{w}_+,w \right) =\left(\tilde{w}_{+},L^*_{M^-}w\right)+\left(M^+\tilde{w}_{+},w\right)= \left(\tilde{w}_{+},\lambda w\right)+\left(M^+\tilde{w}_{+},w\right)$
\ \\where $\lambda$ is the principal eigenvalue of $L^*_{M^-}$ and $w$ is the corresponding eigenfunction.
Suppose $a_k^{ij}$ and $m_{lk}(x)$ satisfy the conditions (2), (7) and (8) in Theorem 3. Following the proof of Theorem 3, we obtain that $\tilde{w}_{+}\equiv 0$ in $\Omega$, i.e. comparison principle holds for the system (3), (4).
If $L_{M^-}$ is reducible, then
\ \\$0\geq \left(L_M\tilde{w}_{+},w \right) =\left(\tilde{w}_{+},L^*w\right)+\left(M^+\tilde{w}_{+},w\right)=\left(\tilde{w}_{+},\tilde{\lambda} w\right)+\left(M^+\tilde{w}_{+},w\right)$
\ \\where $\tilde{\lambda}w=(\tilde{\lambda}_1w_1,\tilde{\lambda}_2w_2,...\tilde{\lambda}_Nw_N)$, $\tilde{\lambda}_k$ is the principal eigenvalue of $L^*_{k}$ and $w_k$ is the corresponding eigenfunction for $k=1,...N$.
Suppose $a_k^{ij}$ and $m_{lk}(x)$ satisfy the conditions (2), (11) and (12) in Theorem 4. Following the proof of Theorem 4, we obtain that $\tilde{w}_{+}\equiv 0$ in $\Omega$, i.e. comparison principle holds for the system (3), (4).
We have sketched the proof of the following
{\bf Theorem 8}{\it : Suppose (3), (4) is a quasi-linear system and the corresponding system $B_{E^-}$ in (19) is elliptic. Then the comparison principle holds for system (3), (4) if}
\ \\(i)\qquad {\it $B_{E^-}$ in (19) is irreducible and for every $j=1...N$}
\ \\(ii)\qquad $ \lambda+\left(\sum_{k=1}^{N} \frac{\partial F^k}{\partial p^j}(x,p,Dp^j)+{\sum}_{i=1}^n D_i\frac{\partial a^{ji}}{\partial p^j}(x,p^j,Dp^j)\right)^+ >0$,
\ \\(iii)\qquad $
\lambda+ \left({\sum}_{i=1}^n D_i\frac{\partial a^{ji}}{\partial p^j}(x,p^j,Dp^j)+{\frac{\partial F^j}{\partial p^j}(x,p,Dp^j)}\right)^+\geq 0$
\ \\ {\it where $x\in \Omega$, $p\in R^n$ and $\lambda=inf_{\Omega_0\subseteq \Omega}\{\lambda_{\Omega_0}$ : $\lambda_{\Omega_0}$ is the principal eigenvalue of the operator $B_{E^-}$ on $\Omega_0\}$;}
\ \\{\it or}
\ \\(i')\qquad {\it $B_{E^-}$ in (19) is reducible and for every $j=1...N$}
\ \\(ii')\qquad $
{\lambda}_j+\left(\sum_{k=1}^{N} \frac{\partial F^k}{\partial p^j}(x,p,Dp^j)+{\sum}_{i=1}^n D_i\frac{\partial a^{ji}}{\partial p^j}(x,p^j,Dp^j)\right)^+ >0$,
\ \\(iii')\qquad $
{\lambda}_j+ \left({\sum}_{i=1}^n D_i\frac{\partial a^{ji}}{\partial p^j}(x,p^j,Dp^j)+{\frac{\partial F^j}{\partial p^j}(x,p,Dp^j)}\right)^+\geq 0$
\ \\{\it where $x\in \Omega$, $p\in R^n$ and $\lambda_{l}=\inf_{\Omega_0\subseteq\Omega}\{\lambda_{l\Omega_0}$ : $\lambda_{l\Omega_0}$ is the principal eigenvalue of the operator $B_l$ on $\Omega_0$\}.}
\section{Final remarks}
The sufficient conditions in Theorems 3 and 4 are derived from the spectral properties of the cooperative part of (1), the operator $L_{M^-}$, or, in other words, by comparing the principal eigenvalue of $L_{M^-}$ with the quantities in $M^{+}$. In fact the positive matrix $M^+$ causes a migration of the principal eigenvalue of $L_{M^-}$ to the left.
Theorems 3 and 4 provide a large class of non-cooperative systems for which the comparison principle is valid. The idea of shifting the spectrum of a positive operator to the right works in this case as well, though the spectrum itself is not studied in this article. The results for non-cooperative systems in this paper are not sharp, and the precise range of validity of the comparison principle remains to be determined in future work.
\section{Acknowledgment}
The author would like to thank Professor Alexander Sobolev for the very useful discussions on the theory of positive operators during the author's stay at the University of Sussex as a Marie Curie fellow.
Author's address:
Institute of Mathematics and Informatics,
Bulgarian Academy of Sciences,
Acad.G.Bonchev st., bl.8,
Sofia, Bulgaria
\end{document} |
\begin{document}
\title {Elliptic functions from $F(\frac{1}{3}, \frac{2}{3} ; \frac{1}{2} ; \bullet)$ }
\date{}
\author[P.L. Robinson]{P.L. Robinson}
\address{Department of Mathematics \\ University of Florida \\ Gainesville FL 32611 USA }
\email[]{[email protected]}
\subjclass{} \keywords{}
\begin{abstract} Li-Chien Shen developed a family of elliptic functions from the hypergeometric function $_2F_1(\frac{1}{3}, \frac{2}{3} ; \frac{1}{2} ; \bullet)$. We comment on this development, offering some new proofs. \end{abstract}
\maketitle
\medbreak
\medbreak
Shen [1] has presented an interesting construction of elliptic functions based on the hypergeometric function $F(\frac{1}{3}, \frac{2}{3} ; \frac{1}{2} ; \bullet)$. We open our commentary with a brief review of his construction, making some minor notational changes (most of which amount to the dropping of suffixes).
\medbreak
Fix $0 < k < 1$ and write $$u = \int_0^{\sin \phi} F(\frac{1}{3}, \frac{2}{3} ; \frac{1}{2} ; k^2 t^2) \, \frac{{\rm d} t}{\sqrt{1 - t^2}}$$ so that $$\frac{{\rm d} u}{{\rm d} \phi} = F(\frac{1}{3}, \frac{2}{3} ; \frac{1}{2} ; k^2 \sin^2 \phi).$$ \medbreak \noindent In a (connected) neighbourhood of the origin, the relation $\phi \mapsto u$ inverts to $u \mapsto \phi$ fixing $0$. Define functions $s, c, d$ by $$s(u) = \sin \phi(u)$$ $$c(u) = \cos \phi(u)$$ and $$d(u) = \phi\,'(u) = 1/F(\frac{1}{3}, \frac{2}{3} ; \frac{1}{2} ; k^2 s^2 (u)).$$ Plainly, $s$ and $c$ satisfy the Pythagorean relation $$s^2 + c^2 = 1.$$ Shen uses the hypergeometric identity $$F(\frac{1}{3}, \frac{2}{3} ; \frac{1}{2} ; \sin^2 z) = \frac{\cos \tfrac{1}{3} z}{\cos z}$$ and the trigonometric triplication formula $$4 \cos^3 \tfrac{1}{3} z - 3 \cos \tfrac{1}{3} z = \cos z$$ to show that $s$ and $d$ satisfy the relation $$d^3 + 3 d^2 = 4(1 - k^2 s^2)$$ or equivalently $$4 k^2 s^2 = (1 - d) (2 + d)^2.$$ By differentiation, $$s\,' = c \, d$$ $$c\,' = - s \,d$$ and $$d\,' = - \frac{8}{3} k^2 \frac{s \, c}{2 + d}.$$ By means of the $(s, d)$ and $(s, c)$ relations, it follows that $d$ satisfies the differential equation $$(d\,')^2 = \frac{4}{9} (1 - d) (d^3 + 3 d^2 + 4 k^2 - 4).$$
\medbreak
\begin{theorem} \label{d} The function $d$ is $1 - \frac{4}{9} k^2 (\wp + \tfrac{1}{3})^{-1}$ where $\wp$ is the Weierstrass function with invariants $$g_2 = \frac{4}{27} (9 - 8 k^2)$$ and $$g_3 = \frac{8}{27^2} (8 k^4 - 36 k^2 + 27).$$ \end{theorem}
\begin{proof} Of course, we mean that $d = \phi'$ extends to the stated rational function of $\wp$. In [1] this is proved by appealing to a standard formula for the integral of $f^{-1/2}$ when $f$ is a quartic. Instead, we may work with the differential equation itself, as follows. First, the form of the differential equation $$(d\,')^2 = \frac{4}{9} (1 - d) (d^3 + 3 d^2 + 4 k^2 - 4)$$ suggests the substitution $r = (1 - d)^{-1}$: this has the effect of removing the explicit linear factor, thus $$(r\,')^2 = \frac{4 k^2}{9} \Big(4 r^3 - \frac{9}{k^2} r^2 + \frac{6}{k^2} r - \frac{1}{k^2}\Big).$$ Next, the rescaling $q = \frac{4 k^2}{9} r$ leads to $$(q\,')^2 = 4 q^3 - 4 q^2 + \frac{2^5}{3^3} k^2 q - \frac{2^6}{3^6} k^4$$ and the shift $p = q - \frac{1}{3}$ removes the quadratic term on the right side, yielding $$(p\,')^2 = 4 p^3 - g_2 p - g_3$$ with $g_2$ and $g_3$ as stated in the theorem. Finally, the initial condition $d(0) = 1$ gives $p$ a pole at $0$; thus $p$ is the Weierstrass function $\wp$ and so $d$ is as claimed. \end{proof}
\medbreak
We remark that $\wp$ has discriminant $$g_2^3 - 27 g_3^2 = \frac{16^3}{27^3} k^6 (1 - k^2).$$
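\medbreak

The substitutions used in the proof of Theorem \ref{d}, as well as the discriminant just noted, can be checked symbolically. The following is a small verification sketch (it uses the Python library SymPy and is offered only as a convenience, not as part of the argument): it substitutes $d = 1 - \tfrac{4}{9} k^2 (p + \tfrac{1}{3})^{-1}$ into the differential equation for $d$ and confirms that $(p\,')^2 = 4 p^3 - g_2 p - g_3$, with $g_2$ and $g_3$ as in Theorem \ref{d}.
\begin{verbatim}
# Symbolic check (SymPy) of the change of variables used in the proof above,
# and of the discriminant g_2^3 - 27 g_3^2.
from sympy import symbols, Rational, simplify

k, p = symbols('k p', positive=True)

g2 = Rational(4, 27) * (9 - 8*k**2)
g3 = Rational(8, 27**2) * (8*k**4 - 36*k**2 + 27)

# d written through p, as in the statement of the theorem.
d = 1 - Rational(4, 9) * k**2 / (p + Rational(1, 3))
dd_dp = d.diff(p)

# (d')^2 = (4/9)(1 - d)(d^3 + 3 d^2 + 4 k^2 - 4)  implies  (p')^2 = (d')^2 / (dd/dp)^2.
p_prime_sq = Rational(4, 9) * (1 - d) * (d**3 + 3*d**2 + 4*k**2 - 4) / dd_dp**2

print(simplify(p_prime_sq - (4*p**3 - g2*p - g3)))                          # 0
print(simplify(g2**3 - 27*g3**2 - Rational(16**3, 27**3)*k**6*(1 - k**2)))  # 0
\end{verbatim}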
\bigbreak
Now, recall that $0 < k < 1$. It follows that the Weierstrass function $\wp$ has real invariants and positive discriminant. Consequently, the period lattice of $\wp$ is rectangular; let $2 K$ and $2 {\rm i} K'$ be fundamental periods, with $K > 0$ and $K' > 0$. We may take the period parallelogram to have vertices $0, 2 K, 2 K + 2 {\rm i} K', 2 {\rm i} K'$ (in counter-clockwise order); alternatively, we may take it to have vertices $\pm K \pm {\rm i} K'$ (with all four choices of sign). The values of $\wp$ around the rectangle $0 \to K \to K + {\rm i} K' \to {\rm i} K' \to 0$ strictly decrease from $+ \infty$ to $- \infty$; moreover, the extreme `midpoint values' satisfy $\wp(K) > 0 > \wp({\rm i} K')$. In particular, $\wp$ is strictly negative along the purely imaginary interval $(0, {\rm i} K')$.
\medbreak
As the Weierstrass function $\wp$ is elliptic of order two, with $2 K$ and $2 {\rm i} K'$ as fundamental periods, the same is true of the function $$d = 1 - \tfrac{4}{9} k^2 (\wp + \tfrac{1}{3})^{-1}.$$
\medbreak
One of the most substantial efforts undertaken in [1] is the task of locating the poles of $d$. As a preliminary step, in [1] Lemma 3.1 it is shown (using conformal mapping theory) that $d$ has a pole in the interval $(0, {\rm i} K')$; as $d$ is even, it also has a pole in $(- {\rm i} K', 0)$. The precise location of the poles of $d$ is announced in [1] Lemma 3.2; the proof of this Lemma is prepared in [1] Section 4 and takes up essentially the whole of [1] Section 5. The approach taken in [1] rests heavily on the theory of theta functions and does more than just locate the poles of $d$. Our approach to locating the poles of $d$ will be more direct: we work with $\wp$ alone, without the need for theta functions. The following is our version of [1] Lemma 3.2.
\medbreak
\begin{theorem} \label{poles} The elliptic function $d$ has a pole at $\frac{2}{3} {\rm i} K'$. \end{theorem}
\begin{proof} Theorem \ref{d} makes it plain that $d$ has a pole precisely where $\wp = - 1/3$. For convenience, write $a = \frac{2}{3} {\rm i} K'$; our task is to establish that $\wp(a) = -1/3$. Recall the Weierstrassian duplication formula $$\wp(2 a) + 2 \, \wp(a) = \frac{1}{4} \Big\{ \frac{\wp''(a)}{\wp'(a)}\Big\}^2$$ where $$\wp''(a) = 6 \, \wp(a)^2 - \frac{1}{2} \, g_2$$ and $$\wp'(a)^2 = 4 \, \wp(a)^3 - g_2 \, \wp(a) - g_3.$$ Here, $$2 a = \frac{4}{3} {\rm i} K' \equiv - \frac{2}{3} {\rm i} K' = - a$$ where the middle congruence is modulo the period $2 {\rm i} K'$ of $\wp$; consequently, $$\wp(2 a) = \wp (- a) = \wp (a)$$ because $\wp$ is even. The left side of the duplication formula thus reduces to $3 \, \wp(a)$ and we deduce that $b = \wp(a)$ satisfies $$3 b = \frac{1}{4} \frac{(6 b^2 - \frac{1}{2} g_2)^2}{4 b^3 - g_2 b - g_3}.$$ Otherwise said, $\wp(a)$ is a zero of the quartic $f$ defined by $$f(z) = 12 z (4 z^3 - g_2 z - g_3) - (6 z^2 - \tfrac{1}{2} g_2)^2.$$ For convenience we work with $$\frac{27}{4} f(\frac{w}{3}) = w^4 - \frac{2}{3} (9 - 8 k^2) w^2 - \frac{8}{27} (8 k^4 - 36 k^2 + 27) w - \frac{1}{27} (9 - 8 k^2)^2$$ which factorizes as $$\frac{27}{4} f(\frac{w}{3}) = (w + 1) \Big(w^3 - w^2 + \frac{1}{3} (16 k^2 - 15) w - \frac{1}{27} (9 - 8 k^2)^2\Big).$$ Here, the cubic factor has discriminant $$- \frac{4096}{27} k^4 (1 - k^2)^2 < 0$$ and so has just one real zero, which is clearly positive. Thus, $f$ has four zeros: a conjugate pair of non-real zeros, a positive zero and $-1/3$. As the values of $\wp$ along $(0, {\rm i} K')$ are strictly negative, it follows that $\wp(a) = - 1/3$ as claimed.
\end{proof}
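\medbreak

Similarly, the factorization of $\tfrac{27}{4} f(\tfrac{w}{3})$ used above, and the fact that $z = -1/3$ is a zero of $f$, can be confirmed with a few lines of SymPy (again, a verification sketch of ours rather than part of the proof):
\begin{verbatim}
# Symbolic check (SymPy) of the quartic f and the factorization used in the proof above.
from sympy import symbols, Rational, expand, simplify

k = symbols('k', positive=True)
z, w = symbols('z w')

g2 = Rational(4, 27) * (9 - 8*k**2)
g3 = Rational(8, 27**2) * (8*k**4 - 36*k**2 + 27)

f = 12*z*(4*z**3 - g2*z - g3) - (6*z**2 - g2/2)**2

quartic = expand(Rational(27, 4) * f.subs(z, w/3))
cubic = w**3 - w**2 + Rational(1, 3)*(16*k**2 - 15)*w - Rational(1, 27)*(9 - 8*k**2)**2

print(simplify(quartic - expand((w + 1)*cubic)))   # 0: the stated factorization
print(simplify(f.subs(z, Rational(-1, 3))))        # 0: z = -1/3 is a zero of f
\end{verbatim}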
\medbreak
As an even function, $d$ also has a pole at $- \frac{2}{3} {\rm i} K'$. Both of the poles $\pm \frac{2}{3} {\rm i} K'$ lie in the period parallelogram with vertices $\pm K \pm {\rm i} K'$ and $d$ has order two, so each pole is simple and the accounting of poles (modulo periods) is complete; of course, this may be verified otherwise.
\medbreak
We close our commentary with a couple of remarks.
\medbreak
In [1] it is mentioned that the squares $s^2$ and $c^2$ are elliptic: as $d$ is elliptic, these facts follow in turn from the $(s, d)$ relation $4 k^2 s^2 = (1 - d) (2 + d)^2$ and the $(s, c)$ relation $c^2 = 1 - s^2$. As $d$ has simple poles, it follows that the poles of $s^2$ and $c^2$ are triple; thus, $s$ and $c$ themselves are not elliptic, as is also mentioned in [1]. Beyond this, the product $s \, c$ is elliptic with triple poles: indeed, $$s\, c = - \frac{3}{8 k^2} (2 + d) d\,' = - \frac{3}{16 k^2} \{ (d + 2)^2\}'.$$
\medbreak
\section*{}
\bigbreak
\begin{center} {\small R}{\footnotesize EFERENCES} \end{center} \medbreak
[1] Li-Chien Shen, {\it On the theory of elliptic functions based on $_2F_1(\frac{1}{3}, \frac{2}{3} ; \frac{1}{2} ; z)$}, Transactions of the American Mathematical Society {\bf 357} (2004) 2043-2058.
\medbreak
\end{document} |
\begin{document}
\title{Connecting geometry and performance of two-qubit parameterized quantum circuits}
\begin{abstract} Parameterized quantum circuits (PQCs) are a central component of many variational quantum algorithms, yet there is a lack of understanding of how their parameterization impacts algorithm performance. We initiate this discussion by using principal bundles to geometrically characterize two-qubit PQCs. On the base manifold, we use the Mannoury-Fubini-Study metric to find a simple equation relating the Ricci scalar (geometry) and the concurrence (entanglement). Calculating the Ricci scalar during a variational quantum eigensolver (VQE) optimization process then offers a new perspective on how and why Quantum Natural Gradient outperforms the standard gradient descent. We argue that the key to the Quantum Natural Gradient's superior performance is its ability to find regions of high negative curvature early in the optimization process, and that these regions appear to be important in accelerating the optimization. \end{abstract}
\section{Introduction} \label{sec:introduction}
Parameterized quantum circuits (PQCs) are central components of variational quantum algorithms, a class of algorithms well-suited for being implemented on near-term quantum devices. In these algorithms, parameters of PQCs are tuned or optimized, using a classical computer, to prepare quantum states on a quantum computer that encode solutions of a problem, e.g.~the ground state of a quantum system or a target probability distribution \cite{cerezo2021variational,bharti2022noisy}. While PQCs can be constructed leveraging physical insights, e.g.~unitary coupled-cluster \cite{Yung2015,Cao2019Quantum,Anand2021Quantum}, in applications such as quantum machine learning or implementation of proof-of-principle experimental demonstrations of variational quantum algorithms, PQCs follow a more heuristic design. These PQCs comprise repeated layers of a particular low-depth configuration of single-qubit and two-qubit gate operations \cite{Havlicek2018Supervised, Kandala2017}. Despite the rapid developments in variational quantum algorithms, PQCs are not yet well understood nor effectively designed. For instance, while ``hardware-efficient'' circuits may correspond to very low depths, it was shown that many of the parameters (and their corresponding gates) are often unnecessary or redundant \cite{Rasmussen2020Reducing,Sim2020Adaptive,Funcke2021Dimensional}. In our work, to develop a more concrete understanding of the role of parameters in PQCs, we begin with a mathematical description of PQCs.
Formally, parameterized quantum circuits are maps $\Psi(\boldsymbol\theta)$ between a set of continuous parameters $\boldsymbol\theta$ and the output statistics of a set of observables on a given system. \begin{figure}
\caption{Progress on the study of parameterized quantum circuits (PQCs) and their connections to the algorithm performance. Past works have investigated how use of particular circuits leads to features in the cost function landscape that hinder algorithm performance (e.g.~barren plateaus). Other works have introduced circuit metrics such as expressibility and started correlating them to algorithm performance. Our work, while limited to two qubits, initiates the discussion to connect the geometry of PQCs to the algorithm performance and provides valuable insights into why and how Quantum Natural Gradient accelerates the optimization step of near-term quantum algorithms. }
\label{fig:intro}
\end{figure} In recent years, significant progress has been made to better understand PQCs and their connection to the algorithm performance as illustrated in Fig.~\ref{fig:intro}. For instance, Ref.~\cite{mcclean2018barren} introduced the phenomenon of ``barren plateaus'' or regions of non-informative gradients in cost function landscapes resulting from employing PQCs that form approximate 2-designs. Several works extended this work, connecting particular circuit structures to cost function landscape features such as barren plateaus and narrow gorges \cite{arrasmith2021equivalence} that hinder algorithm performance. For a better understanding and evaluation of PQCs, Ref.~\cite{Sim2019} introduced ``expressibility'' as a quantity to compare among PQCs and rule out circuits with limited capabilities. Since then, works such as Ref.~\cite{hubregtsen2021evaluation} have started correlating expressibility to performance metrics of particular variational quantum algorithms. Additionally, Ref.~\cite{holmes2022connecting} connected high expressibility to the presence of barren plateaus in cost function landscapes.
Among past works connecting PQCs to algorithm performance, Ref.~\cite{Stokes2019} introduced and investigated the use of Quantum Natural Gradient (QNG) descent to accelerate the optimization of parameters, making connections to imaginary time evolution. This modification of gradient descent involves the inverse of a metric that is one fourth of the Quantum Fisher Information. The Fubini-Study metric tensor employed in QNG does not contain information about the objective function (unlike gradients or the Hessian) and yet has been shown to significantly accelerate optimization by accounting for the geometry of the wave function. Since its discovery, the QNG has been the subject of additional study. For example, while it might suffer from the same \textit{barren plateau} \cite{mcclean2018barren} problem, there is numerical evidence that, at least for shallow circuits, the variance of the cost function is orders of magnitude bigger than when vanilla gradient descent is used \cite{haug2021capacity}. In \cite{haug2021optimal}, methods for using the QNG and adaptively choosing learning rates that can be derived from the QNG were shown to outperform traditional methods like ADAM and LBFGS in learning quantum states. These performance gains come with the overhead cost of calculating the QNG itself, but it has been shown that for quantum simulation there exist efficient algorithms to calculate it \cite{jones2020efficient}, and even in the setting where one is required to calculate it from measurements coming from quantum hardware, the overall cost is asymptotically negligible in the number of iterations and qubits \cite{VanStraaten2021}. The QNG has also been extended to the non-unitary case where some depolarizing noise is allowed in the circuit \cite{koczor2019quantum}. To gain a deeper understanding of the connection between the geometry of the wave function and algorithm performance, we initiate the discussion by providing a geometric characterization of two-qubit PQCs.
The structure of the paper is as follows: \begin{enumerate}[label= (\roman*) ]
\item In Sec.~\ref{sec:geometry_intro}, we introduce the geometrical formalism we shall use to analyze PQCs. We consider four specific two-qubit PQCs and take some time to carefully geometrically characterize the circuits.
\item In Sec.~\ref{sec:concurrence_curvature}, we introduce the notions of concurrence and the scalar curvature and uncover a simple and remarkable relationship between the two concepts.
\item In Sec.~\ref{sec:numerical_experiments}, we capitalize on the relationship between concurrence and curvature in order to provide geometrical insights to the VQE optimization process in the PQCs considered in this work.
\item In Sec.~\ref{sec:conclusion}, we summarize our work, providing future directions of research and open questions. \end{enumerate}
\section{Geometry, quantum mechanics, and parameterized quantum circuits}\label{sec:geometry_intro}
A geometric reformulation of quantum mechanics has largely been a mathematical and theoretical pursuit, while the algebraic view of quantum mechanics appears to be more practical for the everyday quantum mechanician. The question that may arise is whether the geometric approach remains a mere mathematical happenstance or can be leveraged in more practical day-to-day settings. For the sake of completeness, we provide a quick introduction to the geometric structure of quantum mechanics and lay bare the complications that arise from the projective nature of quantum states. The discussion will hopefully explain more clearly the geometric connection and what goes wrong if it is not fully incorporated in one's understanding of parameterized quantum circuits. The K{\"a}hler structure of quantum mechanics furnishes us with a Riemannian metric and a symplectic structure, encompassed in what is called the \textit{Quantum Geometric Tensor} (for more in-depth pedagogical introductions to this large and rich area, we refer the readers to \cite{Heydari2016,robertgeroch2013}).
\subsection{Quick introduction to fiber bundles} Most topological spaces have a complicated geometry for which there most likely is no easy intuitive picture. One way of addressing this problem is instead to think about the local geometric structure of this complicated topological space. The local geometry will be thought of as simply a Cartesian product of lower dimensional geometries. For the rest of the paper, we are going to assume that the complicated topological space $\mathcal{E}$ can be endowed with a manifold structure. The idea of a \textit{fiber bundle} is to imagine the geometry of $\mathcal{E}$ as locally being a Cartesian product of two lower dimensional manifolds $\mathcal{B}$ and $\mathcal{F}$. If in fact the \textit{total space} $\mathcal{E}$ can be thought of as merely $\mathcal{B} \times \mathcal{F}$, then the fiber bundle is called a \textit{trivial bundle}. Otherwise, in general, the local Cartesian structure will be patched together with some twisting that makes the global geometry look very different from the local geometry. Locally, $\mathcal{E}$ can be pictured as a \textit{base space} $\mathcal{B}$ (reached from the total space $\mathcal{E}$ by a projection map $\pi$), with a second geometry, called the \textit{fiber} $\mathcal{F}$, attached at each point of the base space. We denote the fiber bundle as
\begin{equation}
\mathcal{F} \rightarrow \mathcal{E} \xrightarrow{\pi} \mathcal{B}. \end{equation}
\begin{figure}
\caption{Fiber bundle perspective of a cylinder (a) and a M\"obius strip (b). The lines in both pictures represent the fibers, while the circle represents the base manifold. The fiber bundle perspective helps us think locally of complicated manifolds in terms of smaller dimensional manifolds.}
\label{fig:fiber_bundle_cartoon}
\end{figure}
For quantum mechanics, we are interested in fiber bundles for which we have an action of a group $\mathcal{G}$. How does the group act on our fiber bundle? For all $g \in \mathcal{G}$ and for any base point $b \in \mathcal{B}$, acting with $g$ on a point of the fiber attached at $b$ gives merely another point in the same fiber. We demand that we have a continuous right action of the group, so that in fact $\mathcal{G}$ is a Lie group, and that this action is \textit{free} and \textit{transitive}, i.e. that, respectively, only the identity element preserves a point in the fiber $\mathcal{F}$, and that for any two points $f, f'$ in the fiber there is a unique group element $g$ such that $f = f'g$. What this amounts to is that the group $\mathcal{G}$ is in fact isomorphic to the fiber $\mathcal{F}$. When this happens the fiber bundle is called a \textit{Principal Fiber Bundle}. This principal fiber bundle is then denoted as \begin{equation}
\mathcal{G} \rightarrow \mathcal{E} \xrightarrow{\pi} \mathcal{B}. \end{equation}
\subsubsection{Quantum mechanics and geometry} \label{sec:qm_geometry} We are often taught early that global phases do not matter. While this is usually glossed over since these phases have no physical consequences, it has major consequences from our point of view. Consider a wave function in a Hilbert space, $\ket{\psi} \in\mathbb{H}$. The fact that the global phase does not physically matter means that we are instead considering equivalence classes denoted as $[\ket{\psi}]$. This means that the physical space is in fact a projectivized Hilbert space $\mathbb{P}(\mathbb{H})$. Note that we can also think of this as a principal fiber bundle where the group action is given by $U(1)$ so that the principal fiber bundle we have is
\begin{equation}
U(1) \rightarrow \mathbb{H} \xrightarrow{\pi} \mathbb{P}(\mathbb{H}). \end{equation}
On the way to introducing a metric on the Hilbert Space $\mathcal{H}$, we recall that we have a Hermitian inner product, $h :\mathbb{H} \times \mathbb{H} \rightarrow \mathbb{C}$ defined by \begin{equation} \label{hermitianform}
h(\phi, \psi) = \langle \phi | \psi \rangle = G(\phi, \psi) + i F(\phi,\psi), \end{equation}
where $G(\phi, \psi) = \text{Re} (\langle \phi | \psi \rangle)$ and $F(\phi, \psi) = \text{Im} (\langle \phi | \psi \rangle)$. The real part is what produces a Riemannian metric and the imaginary part is what gives us a symplectic structure. To arrive at the metric we shall need to consider the tangent space of $\mathbb{P(H)}$. The tangent space at a point of $\mathbb{P(H)}$, $T_{[ \ket{\psi}]}\mathbb{P(\mathbb{H})}$, is the quotient vector space $T_{\ket{\psi}}\mathbb{H}/\!\sim$ (with $T_{\ket{\psi}}\mathbb{H} \simeq \mathbb{H}$), where the equivalence is $\ket{\phi_1} \sim \ket{\phi_2}$ whenever $ \ket{\phi_1} -\ket{\phi_2} = a \ket{\psi} $ with $a \in \mathbb{C}$. By choosing an orthogonal complement to the space spanned by $\ket{\psi}$, i.e. $\langle \psi | \phi \rangle = 0$, we remove this ambiguity in the tangent space. To ensure that this condition is always met, we use the projection to the orthogonal complement: \begin{equation}
P^{\perp}_{\psi} = \mathbb{I} - \frac{\ket{\psi}\!\!\bra{\psi}}{\langle \psi | \psi \rangle}. \end{equation}
On the tangent space remembering $T_{\ket{\psi}}\mathbb{H} \simeq \mathbb{H}$, we see that Eq.~\eqref{hermitianform} may be written in the form \begin{equation} \label{hermitianform2}
h(\phi_1, \phi_2)= \bra{\phi_1} P^{\perp}_{\psi}\ket{\phi_2} = \langle \phi_1| \phi_2 \rangle - \langle \phi_1| \psi \rangle \langle \psi| \phi_2 \rangle. \end{equation}
This is in fact what is called the \textit{Quantum Geometric Tensor} in the literature. To see this, consider the wave function $\ket{\psi}$, then we go to the tangent space with bases $\{ \ket{\partial_i \psi} \} $ and then plugging this into (\ref{hermitianform2}) we get
\begin{equation}
h(\partial_i \psi, \partial_j \psi)= \bra{ \partial_i \psi} P^{\perp}_{\psi}\ket{ \partial_j \psi} = \langle \partial_i \psi | \partial_j \psi \rangle - \langle \partial_i \psi | \psi \rangle \langle \psi| \partial_j \psi \rangle. \end{equation}
\textit{Remark:} Note the intrinsic nature of the above derivation of the Quantum Geometric Tensor. Usually this is arrived at by considering the transition between a state $\ket{\psi_{\vec{u}}}$ and an infinitesimally close-by state $\ket{\psi_{\vec{u + \delta u}}}$ and then imposing `gauge invariance' of quantum mechanics \cite{cheng2010quantum}, or by noting that the squared differential, $ds^2 = g_{ij}dx^idx^j$, can in quantum mechanics be written as $ds^2 \propto (1-F^2)$, where $F = | \langle\phi| \psi \rangle| $ \cite{Stokes2019}, to lowest order in the parameters that appear in the wave function. The latter derivations, \cite{cheng2010quantum} and \cite{Stokes2019}, while correct, obscure its intimate origins in the projective nature of quantum mechanics. The real part of the Quantum Geometric Tensor in (\ref{hermitianform2}) is the \textit{Fubini-Study metric}.
\subsection{Fiber bundles and PQCs}
A bit of care must be taken in applying the information geometric view to PQCs. In order to see why, we carefully define what a PQC is. To do this we follow Ref.~\cite{juthohaegeman2014} and define a variational ansatz or PQCs as follows:
\begin{definition}
A \textbf{\textit{parameterized quantum circuit}} is the image of the following map $\Psi : \mathcal{U} \subset \mathbb{R}^m \rightarrow \mathbb{P}(\mathbb{H})$, i.e.
\begin{equation}
\mathcal{M} = \Psi(\mathcal{U}):= \{ [\ket{\Psi( \boldsymbol \theta)}] \}
\end{equation}
where $ \boldsymbol \theta = (\theta_1, \theta_2, \dots, \theta_m ) \in \mathcal{U} \subset \mathbb{R}^m $. \end{definition}
To get at the metric we go to the tangent space and as a consequence define the following push-forward map: \begin{equation}
d\Psi(\boldsymbol v) : T_p \mathbb{R}^m \rightarrow T_{[\ket{\Psi(\boldsymbol\theta)}]}\mathcal{M} : v^i \partial_{\theta_i} \longrightarrow v^i \partial_{\theta_i} [\ket{\Psi(\boldsymbol \theta)}]= v^i [\ket{ \partial_{\theta_i}\Psi(\boldsymbol \theta)}] \end{equation} where $\boldsymbol v = v^i \partial_{\theta_i}$. This allows us to define the metric on $\mathbb{P}(\mathbb{H})$ using our points in $\mathbb{R}^m$, and this is done by using the pull-back by $\Psi$, back to $T_p\mathbb{R}^m$, i.e. \begin{equation}
\label{metricdef}
g_{\boldsymbol \theta}: T_p\mathbb{R}^m \times T_p\mathbb{R}^m \rightarrow \mathbb{R} : (\boldsymbol v_1,\boldsymbol v_2) = g(\boldsymbol v_1,\boldsymbol v_2) = g(d\Psi(\boldsymbol v_1), d\Psi(\boldsymbol v_2)) \end{equation} where we define the value of the metric as \begin{equation} \label{metricvalue}
g(\boldsymbol v_1,\boldsymbol v_2) = v_1^i v_2^j( \langle\partial_i \Psi| \partial_j \Psi \rangle - \langle \partial_i \Psi | \Psi \rangle \langle \Psi | \partial_j \Psi \rangle). \end{equation} Here, we have suppressed the dependence on $\boldsymbol \theta$ for ease of notation; we have implicitly chosen a point on a ray in evaluating the value for the metric.
Now the map $\Psi$ need not be injective, so that in fact the metric in Eq.~\eqref{metricvalue} is, in general, degenerate. This is in fact what happens with the circuits we study in this work. As a consequence, no unique inverse metric exists. The problem derives from the fact that the parameters in the wave function are not chosen with the projective nature of quantum mechanics in mind, and thus when we ask physical questions, like expectation values of Hermitian operators, we will in fact live on some constrained surface in $\mathbb{P}(\mathbb{H})$; in other words, our tangent space at some point will in general be of smaller dimension than $T_p\mathbb{R}^m $.
Naively, we have the following Quantum Geometric Tensor for our PQC: \begin{equation}
\label{ParametrizedQGT}
G_{\theta_i \theta_j} = \langle\partial_{\theta_i} \Psi(\boldsymbol\theta)|{\partial_{\theta_j} \Psi(\boldsymbol\theta)} \rangle - \langle\partial_{\theta_i} \Psi(\boldsymbol\theta)| \Psi(\boldsymbol\theta) \rangle
\langle \Psi(\boldsymbol\theta)| \partial_{\theta_j} \Psi(\boldsymbol\theta) \rangle. \end{equation}
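For readers who prefer a computational handle on Eq.~\eqref{ParametrizedQGT}, the tensor can be estimated numerically for any circuit that returns a statevector. The sketch below (NumPy, central finite differences) uses a toy two-qubit ansatz that is \emph{not} one of the circuits studied in this work; it is included only to make the formula concrete.
\begin{verbatim}
# Numerical sketch of the Quantum Geometric Tensor G_{ij} of the equation above,
# via central finite differences on the statevector of a toy two-qubit ansatz.
# The ansatz is a placeholder and is not one of the circuits studied in this paper.
import numpy as np

def ansatz(theta):
    """|Psi(theta)> = (RY(t0) x RY(t1)) CNOT (RY(t2) x I) |00>  (toy example)."""
    def ry(t):
        return np.array([[np.cos(t/2), -np.sin(t/2)],
                         [np.sin(t/2),  np.cos(t/2)]])
    cnot = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=float)
    psi = np.array([1, 0, 0, 0], dtype=complex)
    psi = np.kron(ry(theta[2]), np.eye(2)) @ psi
    psi = cnot @ psi
    psi = np.kron(ry(theta[0]), ry(theta[1])) @ psi
    return psi

def qgt(theta, eps=1e-6):
    """G_ij = <d_i Psi|d_j Psi> - <d_i Psi|Psi><Psi|d_j Psi>."""
    psi = ansatz(theta)
    grads = []
    for i in range(len(theta)):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[i] += eps
        tm[i] -= eps
        grads.append((ansatz(tp) - ansatz(tm)) / (2*eps))
    G = np.zeros((len(theta), len(theta)), dtype=complex)
    for i, di in enumerate(grads):
        for j, dj in enumerate(grads):
            G[i, j] = np.vdot(di, dj) - np.vdot(di, psi) * np.vdot(psi, dj)
    return G

G = qgt([0.3, 0.7, 1.1])
print(np.round(G.real, 6))   # the real part is the Fubini-Study metric
\end{verbatim}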
\subsubsection{Two-qubit parameterized quantum circuits and geometry}
In this section we study how the PQCs fit in the geometry of $\mathbb{P}(\mathbb{H})$ and find the constrained surfaces they live on. This exercise elucidates what part of the geometry the PQCs have access to.
There will be four major circuits we shall consider in studying the information-geometric view of PQCs, namely: \begin{enumerate}
\item the \textit{Hardware-Efficient Ansatz} (HEA) introduced by \cite{Kandala2017} and studied in the context of the Fubini-Study metric by \cite{Yamamoto2019},
\item the \textit{Low Depth Circuit Ansatz} (LDCA) introduced in \cite{Dallaire-Demers2018},
\item an ansatz introduced in the context of quantum generative adversarial networks (QGANs) \cite{QGAN_PLDD2018}, and
\item an ansatz used in \cite{Dallaire-Demers2020} composed of gates native to the Sycamore chip \cite{Arute2019}, e.g. the fermionic simulation (fSim) operation. We will refer to this ansatz as the Sycamore HEA (sHEA). \end{enumerate}
\begin{figure}
\caption{Two-qubit circuit blocks considered in this work. Dashed lines indicate different circuit layers or moments. Gate definitions are provided in Appendix~\ref{app:gate_definitions}.}
\label{fig:circuit_diagrams_v2}
\end{figure}
\noindent In general, we have the following isomorphisms \begin{equation}
\label{generalisomorphisms}
\mathbb{P}(\mathbb{H}^{n+1}) \simeq S^{2n+1}/S^1 \simeq \mathbb{C}P^n \simeq G_{1, n+1} \end{equation} where $\mathbb{P}(\mathbb{H}^{n+1})$ is the projectivized version of an $n+1$ dimensional Hilbert space, $\mathbb{C}P^n$ is the complex projective space and $G_{1, n+1}$ is the Grassmannian of dimension $n$.
\begin{equation}
\label{twoqubitisomorphisms}
\mathbb{P}(\mathbb{H}^{4}) \simeq S^{7}/S^1 \simeq \mathbb{C}P^3 \simeq G_{1, 4}. \end{equation}
Luckily, for two and incidentally for three qubits, we have a fiber bundle picture namely the Hopf Fibration. For two qubits, the Hopf Fibration can be thought of as an $SU(2)$ principal fiber bundle. We now concentrate on $S^7$ and think of it having a Hopf Fibration. It is a well-known fact that the fibration has the following character:
\begin{equation}
S^3 \rightarrow S^7 \xrightarrow{\pi} S^4 \end{equation} The fiber $S^3$ is isomorphic to SU(2).
From a quantum mechanical point of view, $S^4$, which can locally be split into two spheres, i.e. $S^2 \times S^2$, can be given the following interpretation: the first sphere is a \textit{quasi}-Bloch sphere representing the degrees of freedom one observer has access to for a given two-qubit entangled state, and the second sphere parameterizes the amount of entanglement shared by the two qubits \cite{wie2020bloch}.
For each of the major circuits, we calculate the version of the metric that is in general degenerate, a metric that is specifically tailored to the geometry of $S^7$, and provide explicit parametrizations of how the two-qubit circuits sit inside the geometry.
The Hopf base and fiber parameters are calculated as follows \cite{wie2020bloch, levay2004geometry}. For a 2-qubit state: \begin{align}
\ket\psi = \alpha\ket {00}
+ \beta\ket {01}
+ \gamma\ket {10}
+ \delta\ket {11} \end{align} where $\alpha,\beta,\gamma,\delta \in \mathbb C$, compute the following: \begin{enumerate}
\item First, calculate the $S^4$ Hopf base parameters $(\theta_A,\phi_A)$ and $(\chi,\xi)$ and the equivalent Cartesian coordinates $(x_0,x_1,x_2,x_3,x_4)$ as follows:
\begin{enumerate}
\item $x_0 = |\alpha|^2 +
|\beta|^2 - |\gamma|^2 - |\delta|^2
$
\item $\theta_A = \arccos(x_0)$.
\item $x_1 = 2\, \mathrm{Re}(\bar\alpha\gamma+\bar\beta\delta)$
\item $x_4 = 2\, \mathrm{Im}(\bar\alpha\gamma+\bar\beta\delta)$
\item $\phi_A = \arccos\left(x_1 \csc(\theta_A)\right)$
\item $\chi = \arccos(x_4\csc(\theta_A)\csc(\phi_A))$
\item $x_3=2 \,\mathrm{Re}(\alpha\delta-\beta\gamma)$
\item $x_2=-2 \,\mathrm{Im}(\alpha\delta-\beta\gamma)$
\item $\xi = \arctan(x_3/x_2)$ \end{enumerate}
In calculating how the PQCs fit inside the fibers using intrinsic co-ordinates, one needs to solve highly non-linear trigonometric equations. We simplify our task by switching co-ordinates to the constrained extrinsic co-ordinates. We take advantage of the following parameterization equivalence:
\begin{equation}
\ket{\psi_q} = \begin{pmatrix}
\cos\theta_A \\
\sin \theta_A e^{t \phi_A}
\end{pmatrix}q = \frac{1}{\sqrt{2}} \begin{pmatrix}
\sqrt{1\pm \sqrt{1-|z|^2-|w|^2}} \\
\sqrt{1\mp \sqrt{1-|z|^2-|w|^2}} \frac{z+wj}
{\sqrt{|z|^2+|w|^2}}
\end{pmatrix} q, \end{equation}
where $q $ is a unit quaternion used to parameterize the fiber and $e^{t \phi_A}$ is a unit quaternion used to parameterize a point on the entanglement sphere.
\item Next, calculate the $S^3$ Hopf fiber parameters, which are represented by the quaternion $q$. \begin{enumerate}
\item $z=\tfrac 12(x_1+ix_4)$
\item $w=\tfrac 12(x_3-ix_2)$
\item $\gamma_{\pm} = \sqrt{1\pm \sqrt{1-|z|^2-|w|^2}}$
\item $\ket{\psi_H} =
(\alpha+\beta j)\ket 0 +
(\gamma+\delta j)\ket 1
$
\item $\ket {c_\pm} = \frac 1{\sqrt 2} \begin{pmatrix}
\gamma_{\pm} \\
\gamma_{\mp} \frac{z+wj}{
\sqrt{|z|^2+|w|^2}
}
\end{pmatrix}
$
\item $q_\pm = \langle c_\pm|\psi_H\rangle$ \end{enumerate} \end{enumerate}
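To make step 1 of this procedure concrete, the following NumPy sketch (ours; the example amplitudes are arbitrary) computes the Cartesian coordinates $(x_0,\dots,x_4)$ and the base angles $(\theta_A,\phi_A,\chi,\xi)$ for a given two-qubit state, and checks that the point indeed lies on the unit $S^4$.
\begin{verbatim}
# Sketch of step 1 above: S^4 Hopf base coordinates of a two-qubit state
# |psi> = a|00> + b|01> + c|10> + d|11>.  The example amplitudes are arbitrary.
import numpy as np

def hopf_base(a, b, c, d):
    x0 = abs(a)**2 + abs(b)**2 - abs(c)**2 - abs(d)**2
    s = np.conj(a)*c + np.conj(b)*d
    x1, x4 = 2*s.real, 2*s.imag
    t = a*d - b*c
    x3, x2 = 2*t.real, -2*t.imag
    theta_A = np.arccos(x0)
    phi_A = np.arccos(x1 / np.sin(theta_A))
    chi = np.arccos(x4 / (np.sin(theta_A) * np.sin(phi_A)))
    xi = np.arctan2(x3, x2)              # arctan(x3/x2), quadrant-aware
    return (x0, x1, x2, x3, x4), (theta_A, phi_A, chi, xi)

amps = np.array([0.6, 0.3 + 0.4j, 0.2j, 0.5 - 0.3j])
amps = amps / np.linalg.norm(amps)       # normalize the example state
xs, angles = hopf_base(*amps)
print("x0..x4:", np.round(xs, 4))
print("theta_A, phi_A, chi, xi:", np.round(angles, 4))
print("sum of squares:", round(sum(x**2 for x in xs), 6))   # 1 for a normalized state
\end{verbatim}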
\begin{figure}
\caption{Local geometry of $S^4$.}
\label{fig:my_label}
\end{figure}
We compute the above Hopf base and fiber parameters for the circuit blocks in Fig.~\ref{fig:circuit_diagrams_v2} and present our results in Appendix \ref{app:hopfBaseFiber}.
\subsubsection{Local geometric expressibility} The notion of expressibility for PQCs has been explored in \cite{Sim2019}. By thinking of the Hilbert space from the geometric point of view, namely as $S^7$, we can arrive at a picture of \textit{local geometric expressibility}. From this perspective we ask which points in $S^7$ our PQCs can reach, and on which constrained surfaces they live, in terms of some unconstrained co-ordinates. As discussed earlier, the local degrees of freedom accessible to an observer are parameterized by one of the spheres on the base manifold, with intrinsic coordinates $(\theta_A, \phi_A)$, while the entanglement properties are parameterized by the second sphere, with intrinsic coordinates $(\chi, \xi)$.
{ \renewcommand{\arraystretch}{1.5} \begin{table}[ht]
\centering
\caption{PQC base manifold geometry}
\begin{tabular}{ |p{2.7cm}||p{4cm}|p{4cm}|p{3.2cm}| }
\hline
\centering \textbf{Quantum Circuit} & \centering $\boldsymbol{S^2}$ \textbf{(local sphere)} & \centering $\boldsymbol{S^2}$ \textbf{(entanglement sphere)} & \textbf{Constrained Base Space Geometry} \\
\hline
\centering$\text{HEA}$ & Explores with $(\phi_A, \theta_A)$ & Does not explore (Stuck to just a point) & $S^2$ \\
\hline
\centering$\text{LDCA}$ & Explores with $(\theta_A)$ & Explores with $\xi$ & $S^2$ \\
\hline
\centering$\text{QGAN}$ & Explores with $(\phi_A, \theta_A)$ & Explores with $\chi$ & $S^3$ \\
\hline
\centering $\text{sHEA}$ & Explores with $(\phi_A, \theta_A)$ & Explores with $(\chi, \xi)$ & $S^4$ \\
\hline
\end{tabular}
\label{tab:pqc_base_manifold_geometry} \end{table} }
The sHEA circuit covers the largest surface area of points, while we see that both HEA and LDCA cover points on a two-sphere. Interestingly, although both HEA and LDCA cover points constrained to live on $S^2$, LDCA uses one of its co-ordinates to explore the entanglement sphere. This, in principle, allows LDCA to explore more sub-manifolds of different entanglement in $S^4$.
Next we consider how the different circuits differ in the fiber space. Points in the fiber space appear, from one observer's point of view, as a difference in the gauge, i.e. they do not change the entanglement properties of the quantum state (this can be made more explicit by introducing a connection). To help abstract out the minutiae, we first re-express (\ref{hea_fiber}), (\ref{ldca_fiber}), (\ref{qgans_fiber}), which represent the parameterization in the fiber space, in a way that makes physical interpretation easier. For a fixed fiber, these functions will in general depend on a subset of the parameters in the quantum circuit as one moves in that fiber. Using the algebraic equivalence between quaternions and the Lie algebra of $SU(2)$, namely $$ i \longmapsto i \sigma^x, \quad j \longmapsto i \sigma^y, \quad k \longmapsto i\sigma^z, $$
we re-write (\ref{hea_fiber}), (\ref{ldca_fiber}), (\ref{qgans_fiber}) respectively as follows: \begin{align}
q_{\pm HEA} &= \frac{1}{2 \lambda_2} \left(e_{\pm}(\theta_4)I + i f_{\pm}(\theta_4)\sigma^y \right) \\
q_{\pm LDCA} &= \frac{1}{2 \mu_3} i\left(g_{\pm}(\theta_1, \theta_2, \theta_3, \theta_4, \theta_5)\sigma^y + h_{\pm}(\theta_1, \theta_2, \theta_3, \theta_4, \theta_5)\sigma^z \right) \\
\begin{split}
q_{\pm QGANS} &= \frac 1{\sqrt 2} \left(\sin \left(\frac{\theta _1}{2}\right) \gamma_\mp+\cos \left(\frac{\theta _1}{2}\right) \gamma_\pm \right) i \big( a_{\pm}(\theta_2, \theta_3, \theta_4, \theta_5)I + b_{\pm}(\theta_2, \theta_3, \theta_4, \theta_5)\sigma^x \\
&\quad + c_{\pm}(\theta_2, \theta_3, \theta_4, \theta_5)\sigma^y + d_{\pm}(\theta_2, \theta_3, \theta_4, \theta_5)\sigma^z \big)
\end{split} \\
q_{\pm sHEA} &= i \big( m_{\pm}(\theta_1, \theta_2, \theta_3, \theta_5, \theta_6)I + n_{\pm}(\theta_1, \theta_2, \theta_3, \theta_5, \theta_6)\sigma^x \nonumber\\
&\quad + r_{\pm}(\theta_1, \theta_2, \theta_3, \theta_5, \theta_6)\sigma^y + s_{\pm}(\theta_1, \theta_2, \theta_3, \theta_5, \theta_6)\sigma^z
\big). \end{align} In the equations above, only those $\theta_i$ that can vary within a specific fiber over a point on the base manifold are shown. Expressions for $a$--$h$, $m$, $n$, $r$, $s$ can be found in Appendix \ref{app:hopfBaseFiber}.
{ \renewcommand{\arraystretch}{2} \begin{table}[ht]
\centering
\caption{PQC Geometry Summary Table
}
\begin{tabular}{ |p{2.2cm}||p{12cm}| }
\hline
\textbf{Quantum Circuit} & \textbf{Fiber Space}\\
\hline
$\text{HEA}$ & $\frac{1}{2 \lambda_2} \left(e_{\pm}(\theta_4)I + i f_{\pm}(\theta_4)\sigma^y \right)$ \\
\hline
$\text{LDCA}$ & $\frac{1}{2 \mu_3} i\left(g_{\pm}(\theta_1, \theta_2, \theta_3, \theta_4, \theta_5)\sigma^y + h_{\pm}(\theta_1, \theta_2, \theta_3, \theta_4, \theta_5)\sigma^z \right)$ \\
\hline
$\text{QGAN}$ & $ \begin{multlined} \frac 1{\sqrt 2} \left(\sin \left(\frac{\theta _1}{2}\right) \gamma_\mp+\cos \left(\frac{\theta _1}{2}\right) \gamma_\pm \right) i(a_{\pm}(\theta_2, \theta_3, \theta_4, \theta_5)I \\ + b_{\pm}(\theta_2, \theta_3, \theta_4, \theta_5)\sigma^x +
c_{\pm}(\theta_2, \theta_3, \theta_4, \theta_5)\sigma^y + d_{\pm}(\theta_2, \theta_3, \theta_4, \theta_5)\sigma^z ) \end{multlined}$ \\
\hline
$\text{sHEA}$ & $ \begin{multlined} i \big( m_{\pm}(\theta_1, \theta_2, \theta_3, \theta_5, \theta_6)I + n_{\pm}(\theta_1, \theta_2, \theta_3, \theta_5, \theta_6)\sigma^x + \\
r_{\pm}(\theta_1, \theta_2, \theta_3, \theta_5, \theta_6)\sigma^y + s_{\pm}(\theta_1, \theta_2, \theta_3, \theta_5, \theta_6)\sigma^z
\big) \end{multlined}$ \\
\hline
\end{tabular}
\label{tab:pqc_fiber_space} \end{table} }
\subsection{Quantum Natural Gradient descent} \label{sec:qng_descent}
We have discussed the possibility that the metric obtained by directly applying Eq.~\eqref{ParametrizedQGT} is, in general, degenerate. In this section, we calculate the ``Fubini-Study'' metrics (by following Eq.~\eqref{ParametrizedQGT}) and see that most are indeed degenerate. Although this is the case, it has been shown in \cite{Stokes2019} that these degenerate matrices, in practice, lead to improvements in the number of iterations for the optimization process, through approximating them by a block-diagonal or diagonal form, adding numerically small values to make the matrices invertible, and/or taking the pseudo-inverse. A more recent study, however, notes that (block-)diagonal approximations to the metric tensor may not be necessary, as the cost of computing the metric tensor is asymptotically negligible \cite{VanStraaten2021}. Using the standard gradient descent method, the parameter update rule for the parameters in the quantum circuit is: \begin{equation} \label{gradient_descent}
\boldsymbol\theta^{(t+1)} = \boldsymbol\theta^{(t)} - \eta \frac{\partial \mathcal{L}(\boldsymbol\theta)}{\partial \boldsymbol\theta}. \end{equation} where $\eta$ is the step size or learning rate and $\mathcal{L}(\boldsymbol\theta)$ is the objective function to be minimized.
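For concreteness, one step of the standard update above can be contrasted with one step of Quantum Natural Gradient descent in a few lines. The fragment below is schematic (ours): \texttt{grad} and \texttt{metric} are placeholders for the gradient of $\mathcal{L}$ and the Fubini-Study metric tensor evaluated at the current parameters, however they are obtained, and the pseudo-inverse is one of the regularization choices mentioned above.
\begin{verbatim}
# Schematic comparison of a vanilla gradient-descent step with a Quantum Natural
# Gradient step.  `grad` and `metric` are placeholders for the gradient of the
# objective and the Fubini-Study metric tensor at the current parameters.
import numpy as np

eta = 0.1                                    # learning rate
theta = np.array([0.3, 0.7, 1.1, 0.2])       # current circuit parameters (example values)
grad = np.array([0.05, -0.20, 0.10, 0.00])   # dL/dtheta (placeholder values)
metric = np.diag([1.0, 1.0, 0.25, 0.0])      # Fubini-Study metric (possibly singular)

theta_gd = theta - eta * grad                            # standard gradient descent
theta_qng = theta - eta * np.linalg.pinv(metric) @ grad  # QNG via a pseudo-inverse
# Tikhonov-regularized alternative:
# theta_qng = theta - eta * np.linalg.solve(metric + 1e-6 * np.eye(4), grad)
print(theta_gd)
print(theta_qng)
\end{verbatim}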
For the four circuit types considered, we have the following Fubini-Study metrics assuming (\ref{ParametrizedQGT}) and naively calculating them. Matrix elements that are included in the block-diagonal approximation to the metric tensor are in {\color{blue}blue}.
\begin{align}
g^{\mathrm{HEA}} &= \begin{pmatrix}
\color{blue}{1} & \color{blue}{0} & \langle \sigma^x_2 \rangle_1 & 0 \\
\color{blue}{0} & \color{blue}{1} & 0 & \langle \sigma^z_1 \rangle _1 \\
\langle \sigma^x_2 \rangle_1 & 0 & \color{blue}{1} & \color{blue}{\frac{ \langle \{ \sigma^y_{1},\sigma^y_{2} \} \rangle_2}{2}} \\
0& \langle \sigma^z_{1} \rangle_1 & \color{blue}{\frac{\langle \{ \sigma^y_{1},\sigma^y_{2} \} \rangle_2}{2}} & \color{blue}{1}
\end{pmatrix} \label{eq:fs_metric_tensor_hea} \\
g^{\mathrm{LDCA}} &= \begin{pmatrix}
\color{blue}{0} & \color{blue}{0} & 0& 0 & 0 \\
\color{blue}{0} & \color{blue}{0} & 0 & 0 & 0 \\
0 & 0 & \color{blue}{4} & 0 & 0 \\
0 & 0 & 0 & \color{blue}{0} & 0 \\
0&0 &0&0& \color{blue}{1 + \frac{\langle \{\sigma^z_1, \sigma^z_2\}\rangle_4}{2}}
\end{pmatrix} \label{eq:fs_metric_tensor_ldca}\\
g^{\mathrm{QGAN}} &= \begin{pmatrix}
\color{blue}{1} & \color{blue}{0} & 0 & 0 & 0 \\
\color{blue}{0} & \color{blue}{1} & 0 & 0 & 0 \\
0 & 0 & \color{blue}{1- \langle \sigma^z_1\rangle_1^2} & \color{blue}{0} & \langle \sigma^z_2\rangle_2 -\langle \sigma^z_1\rangle_1 \langle \sigma^z_1\sigma^z_2\rangle_1 \\
0 & 0 & \color{blue}{0} & \color{blue}{1- \langle\sigma^z_2\rangle^2_1} & \langle \sigma^z_1\rangle_2 -\langle\sigma^z_2\rangle_1\langle\sigma^z_1\sigma^z_2\rangle_1 \\
0 & 0 & \langle \sigma^z_2\rangle_2 -\langle\sigma^z_1\rangle_1 \langle \sigma^z_1\sigma^z_2\rangle_1 &\langle\sigma^z_1\rangle_2 -\langle \sigma^z_2\rangle_1 \langle\sigma^z_1 \sigma^z_2\rangle_1 & \color{blue}{1- \langle \sigma^z_1 \sigma^z_2 \rangle_1^2}
\end{pmatrix} \label{eq:fs_metric_tensor_qgan} \\
g^{\mathrm{sHEA}} &= \begin{pmatrix}
\color{blue}{1} & \color{blue}{0} & 0 & 0 & A & -A\\
\color{blue}{0} & \color{blue}{1} & 0 & 0 & B & -B \\
0 & 0 & \color{blue}{C} & D & E & -E \\
0 & 0 & D & \color{blue}{F} & -G & G \\
A & B & E & -G & \color{blue}{H} & \color{blue}{I} \\
-A & -B & -E & G & \color{blue}{I} & \color{blue}{H} \\
\end{pmatrix} \label{eq:fs_metric_tensor_shea} \\ \end{align}
where for $g^{sHEA}$ we have \begin{align*}
A &= \ ^1\!\langle \sigma^x_1 \mathcal{S}(\theta_3)^{\dagger} \sigma^z_1 \rangle_2 - \langle \sigma_1^x \rangle_0 \langle \sigma^z_1 \rangle_2 \\
B &=\ ^1\!\langle \sigma^x_2 \mathcal{S}(\theta_3)^{\dagger} \sigma^z_1 \rangle_2 - \langle \sigma_2^x \rangle_0 \langle \sigma^z_1 \rangle_2 \\
C &= \frac{1}{2}\left( 1 - \langle \sigma^z_1 \sigma^z_2 \rangle \right) - \frac{1}{4} \left( \langle \sigma^x_1 \sigma^x_2 + \sigma^y_1 \sigma^y_2 \rangle_1 \right)^2 \\
D &= \frac{1}{8} \left( -\langle (\sigma^x_1 \sigma^x_2 + \sigma^y_1\sigma^y_2)(I_1 - \sigma^z_1)(I_2- \sigma^z_2) \rangle_2 + \langle\sigma^x_1 \sigma^x_2 + \sigma^y_1 \sigma^y_2 \rangle_1 \langle (I_1 - \sigma^z_1)(I_2- \sigma^z_2) \rangle_2 \right) \\
E &= \frac{1}{2} \left( \langle ( \sigma^x_1 \sigma^x_2 + \sigma^y_1 \sigma^y_2) \sigma^z_1 \rangle_2 + \langle\sigma^x_1 \sigma^x_2 + \sigma^y_1 \sigma^y_2 \rangle_1 \langle \sigma^z_1 \rangle_2 \right) \\
F &= \frac{1}{4}\langle (I_1 - \sigma^z_1)(I_2- \sigma^z_2) \rangle_2 - \frac{1}{16}\langle (I_1 - \sigma^z_1)(I_2- \sigma^z_2) \rangle_2^2 \\
G &= \frac{1}{4}\langle (I_1 - \sigma^z_1)(I_2- \sigma^z_2) \rangle_2 - \frac{1}{4}\langle (I_1 - \sigma^z_1)(I_2- \sigma^z_2) \rangle_2 \langle \sigma^z_1 \rangle_2 \\
H &= 1 - \langle \sigma^z_2 \rangle_2^2 \\
I &= \langle \sigma^z_1 \sigma^z_2 \rangle_2 - \langle \sigma^z_1 \rangle_2 \langle \sigma^z_2 \rangle_2. \end{align*}
\textbf{Note:} The notation $\langle \mathcal{O} \rangle_i $ used above means the expectation value of the operator $ \mathcal{O}$ with respect to the wave function at the $i^{\textup{th}}$ layer of the circuit, while $ ^j \!\langle \mathcal{O} \rangle_i = \bra{\psi_j} \mathcal{O} \ket{\psi_i} $, where $\ket{\psi_i}$ is the wave function at the $i^{\textup{th}}$ layer.
Of the four metrics, the ones for the QGAN circuit and the sHEA are non-degenerate. From the previous section, we can see why this is true; we have 3 independent parameters in the base space and 3 parameters in the fiber space, but we also have the global constraint that the amplitudes are normalized, so that we have a total of 5 independent global parameters. This matches the number of parameters in the quantum circuit. The same can be seen with the sHEA geometry, where we get an extra degree of independence from the base space.
In practice, $g$ may be singular due to redundant parameterization, or inverting $g$ becomes computationally challenging with an increasing number of parameters. Thus, in the case of natural gradient descent for training classical neural networks with many parameters, the inverse of the Fisher Information Matrix (FIM) has been approximated as an inverse of a block-diagonal matrix (and further approximated as Kronecker products of inverses of smaller matrices) to reduce the computational overhead \cite{martens2015optimizing}. The FIM of deep linear networks is also often singular due to parameter redundancies. Thus, generalized inverses are used. Ref.~\cite{bernacchia2018exact} showed that for deep linear networks, any choice of generalized inverse was effective in accelerating natural gradient descent.
\section{Concurrence and the Ricci Scalar}\label{sec:concurrence_curvature}
Unlike its classical analogue, the Quantum Natural Gradient has no obvious connection to the geometry encountered in the optimization process. One might have considered calculating the curvature from the Fubini-Study metric derived from the real part of the Quantum Geometric Tensor, but as has been previously argued, one does not get a legitimate metric since the metric used in Quantum Natural Gradient is in general degenerate. As a consequence, one could consider regularizing the metric either by considering the pseudo-inverse or by adding a small constant multiplied by the identity matrix ($\epsilon I$), a kind of Tikhonov regularization. The downside to this path is that the value of the curvature depends on the regularization procedure and in fact for the Tikhonov regularization the curvature in some cases is ill-defined since the value at a point depends on how this small constant is brought to zero. Nevertheless, in practice, carefully setting regularization has been shown to reduce the measurement costs of QNG \cite{VanStraaten2021}.
What we opt for, therefore, is a rather indirect way of seeing the Quantum Natural Gradient geometrically at work. We will calculate another metric, a \textit{quaternionic} Fubini-Study metric, that can be placed on the base manifold of two qubits, $S^4$. Amazingly enough, this metric connects the concurrence (entanglement) of the quantum circuit and the curvature of the base manifold.
\subsection{Concurrence} In the case of two qubits, the entanglement entropy is the unique measure of entanglement. In work by Hill and Wootters \cite{Scot_Hill1997}, it was noted that the entanglement entropy can be thought of as a function of the concurrence which is defined in the following manner:
\begin{equation}
C(\psi) = 2|\alpha \delta- \beta \gamma|, \end{equation} where $\ket{\psi}=\alpha\ket{00}+\beta\ket{01}+\gamma\ket{10}+\delta\ket{11}$. The concurrence has been measured in experiments \cite{lichenmingyanglihuazhangzhuoliangcao2017,Zhou2015ConcurrenceMF}. Extensions to mixed density matrices can be considered by calculating the convex roof extension, but in this work we stick to pure states when calculating the measure.
We can also formulate the notion of concurrence within the geometric picture thus far outlined. First note that the concurrence can be written in terms of the extrinsic co-ordinates of $S^4$ as: \begin{equation}
\label{concurrence_geometry}
C(\psi) = \sqrt{|x_2|^2 + |x_3|^2} . \end{equation}
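Both expressions for the concurrence are straightforward to evaluate numerically; the short sketch below (ours, with an arbitrary example state) checks that Eq.~\eqref{concurrence_geometry} agrees with the definition $C(\psi) = 2|\alpha\delta - \beta\gamma|$.
\begin{verbatim}
# Concurrence of a pure two-qubit state, from the definition and from the
# extrinsic S^4 coordinates x2, x3 of the equation above.  Example state is arbitrary.
import numpy as np

amps = np.array([0.6, 0.3 + 0.4j, 0.2j, 0.5 - 0.3j])
a, b, c, d = amps / np.linalg.norm(amps)

t = a*d - b*c
C_def = 2 * abs(t)                       # C = 2 |alpha*delta - beta*gamma|
x3, x2 = 2*t.real, -2*t.imag
C_geo = np.sqrt(x2**2 + x3**2)           # C = sqrt(x2^2 + x3^2)
print(round(C_def, 8), round(C_geo, 8))  # the two values coincide
\end{verbatim}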
The concurrences for the four circuits in terms of their corresponding circuit parameters are:
\begin{align}
C_{\text{HEA}} &= |\text{sin}(2\theta_1)\text{cos}(2\theta_2)|, \label{eq:hea_concurrence}\\
C_{\text{LDCA}} &= \frac{1}{2} \sqrt{3 - 2\text{cos}(4\theta_3)\text{cos}^2(2\theta_5) - \text{cos}(4\theta_5)}, \label{eq:ldca_concurrence}\\
C_{\text{QGAN}} &= |\text{sin}(\theta_1)\text{sin}(\theta_2)\text{sin}(\theta_5)|, \label{eq:qgan_concurrence}\\
C_{\text{sHEA}} &= \frac{1}{2} \biggl\{\sin ^2\left(\theta _1\right) \sin ^2\left(\theta _2\right) \left(\cos \left(\theta _3\right)-\cos \left(\frac{\theta _4}{4}\right)\right){}^2 \\
&\qquad +\left(\sin \left(\theta _3\right)-\sin \left(\theta _1\right) \sin \left(\theta _2\right) \sin \left(\frac{\theta _4}{4}\right)+\sin \left(\theta _3\right) \cos \left(\theta _1\right) \cos \left(\theta _2\right)\right){}^2\biggr\}^{\!1/2}.
\label{eq:shea_concurrence} \end{align}
\subsection{Ricci Scalar} \label{sec:ricci_scalar}
In Riemannian geometry, the scalar curvature, also known as the \textit{Ricci scalar}, is an invariant that characterizes the curvature of a Riemannian manifold. It may be defined in terms of the metric tensor $g$ as follows: Let $g_{ab}$ denote the components of $g$, and let $g^{ab}$ denote the components of its inverse $g^{-1}$. The \textit{scalar curvature} is defined as \begin{align}
\mathcal R = g^{ab}\left(\christoffel c{ab,c} - \christoffel c{ac,b} + \christoffel d{ab}\christoffel c{cd} - \christoffel d{ac}\christoffel c{bd} \right),
\label{eq:defScalarCurvature} \end{align} where \begin{align}
\christoffel c{ab} = \frac 12 g^{cd}(g_{da,b}+g_{db,a}-g_{ab,d}) \end{align} are the \textit{Christoffel symbols of the second kind}. In the equations above, note that we have used Einstein's summation convention and that the commas in the subscripts indicate a partial derivative: for example, $\christoffel c{ab,d} = \partial_d \christoffel c{ab}$. For more details on the scalar curvature, we refer the reader to \cite{Carroll:2004st}.
Apart from its importance in areas of differential geometry and in cosmology, the Ricci scalar is slowly finding applications in hitherto unexpected places, for example the study of phase transitions in quantum many-body systems \cite{Dey2012, rizaerdem2020, michaelkolodrubetzvladimirgritsevanatolipolkovnikov2013}. In classical machine learning, a formulation of neural nets in the context of Riemannian Geometry has been explored \cite{hauser2017principles}, while specific use of Riemannian curvature has been explored in \cite{8546273, 8746812}.
Using the isomorphism $HP^1 \simeq S^4$ where $HP^1$ is the quaternionic projective space we can incorporate the concurrence as part of the calculation of the \textit{Mannoury-Fubini-Study} metric \cite{levay2004geometry} as follows:
\begin{equation}
\label{mannoury-fubini-study}
g_{\mu \nu} = \frac{1}{1- C^2}dC^2 + C^2 d\chi^2 + (1-C^2)(d \Phi^2 + \sin^2 \Theta d \Theta^2) \end{equation}
where $w = |C| e^{i \chi}$ (defined in Sec.~2.3), and $0 < \Phi \leq 2 \pi, 0 < \Theta \leq \pi $ are the usual polar co-ordinates for $B^3$ (the three-dimensional ball). We may make the amusing observation that this is the same metric as the Euclidean Schwarzschild metric at fixed time.
The metrics for the four circuits are therefore
\begin{align}
\label{eq:pqc_metric1}
g_{\text{HEA}} &= \begin{pmatrix}
\frac{1}{1 - C^2_{\text{HEA}}} & 0 & 0 & 0 \\
0 & C^2_{\text{HEA}}& 0 & 0 \\
0 & 0 & (1- C^2_{\text{HEA}}) & 0 \\
0 & 0 & 0 & (1- C^2_{\text{HEA}})
\end{pmatrix} \\
\label{eq:pqc_metric2}
g_{\text{LDCA}} &= \begin{pmatrix}
\frac{1}{1 - C^2_{\text{LDCA}}} & 0 & 0 & 0 \\
0 & C^2_{\text{LDCA}}& 0 & 0 \\
0 & 0 & (1- C^2_{\text{LDCA}}) & 0 \\
0 & 0 & 0 & (1- C^2_{\text{LDCA}})
\end{pmatrix} \\
\label{eq:pqc_metric3}
g_{\text{QGAN}} &= \begin{pmatrix}
\frac{1}{1 - C^2_{\text{QGAN}}} & 0 & 0 & 0 \\
0 & C^2_{\text{QGAN}}& 0 & 0 \\
0 & 0 & (1- C^2_{\text{QGAN}}) & 0 \\
0 & 0 & 0 & (1- C^2_{\text{QGAN}}) \sin \Theta_{\text{QGAN}}
\end{pmatrix} \\
\label{eq:pqc_metric4}
g_{\text{sHEA}} &= \begin{pmatrix}
\frac{1}{1 - C^2_{\text{sHEA}}} & 0 & 0 & 0 \\
0 & C^2_{\text{sHEA}}& 0 & 0 \\
0 & 0 & (1- C^2_{\text{sHEA}}) & 0 \\
0 & 0 & 0 & (1- C^2_{\text{sHEA}}) \sin \Theta_{\text{sHEA}}
\end{pmatrix} \end{align} where: \begin{equation} \sin \Theta_{\text{QGAN}} = \sqrt{1 - (\sin \theta_1 (\sin \theta_3 \cos \theta_5) + \sin \theta_5 \cos \theta_2 \cos \theta_3)^2} \end{equation} for the QGAN ansatz and \begin{equation}
\sin \Theta_{\text{sHEA}} = \sqrt{1 - y^2}, \end{equation} where \begin{align}
y &=
\frac{1}{2} \bigg(\sin (\theta _1) \cos \bigg(\frac{\theta _3}{2}\bigg) \bigg(-\bigg(-\cos (\theta _2)+\bigg(\cos (\theta _2)+1\bigg) \cos \bigg(\frac{\theta _4}{4}\bigg)+1\bigg) \cos (\theta _5) \nonumber\\
&\quad -\sin \bigg(\frac{\theta _4}{4}\bigg) \sin (\theta _5) \bigg(\cos (\theta _2)+1\bigg)\bigg)-\sin(\theta _2) \sin \bigg(\frac{\theta _3}{2}\bigg) \bigg(2 \sin \bigg(\frac{\theta _4}{4}\bigg) \sin ^2\bigg(\frac{\theta _1}{2}\bigg) \cos (\theta _5)\nonumber\\ &\quad +\sin (\theta _5) \bigg(\cos (\theta _1) +\big(\cos (\theta _1)-1\big) \cos \bigg(\frac{\theta _4}{4}\bigg)+1\bigg)\bigg)\bigg) \end{align} for the sHEA ansatz.
The scalar curvatures $\mathcal R_{\mathcal A}$ (see Eq.~\eqref{eq:defScalarCurvature}) of all the PQC metrics (Eqs.~\eqref{eq:pqc_metric1}--\eqref{eq:pqc_metric4}) $g_{\mathcal A}$ (for $\mathcal A = $ HEA, LDCA, QGAN, sHEA) are calculated to be: \begin{align}
\label{eq:ricci_scalar}
\mathcal R_{\mathcal A} = \frac{2(6C_{\mathcal A}^2-5)}{C_{\mathcal A}^2-1}. \end{align} Note that the Ricci scalar can be calculated by just knowing the metric. Since the metric in our case is just a function of the concurrence of a general 2-qubit circuit, Eq.~\eqref{eq:ricci_scalar} is completely general for the two-qubit case.
We note that the formula is a simple rational function of the concurrence, and from it we see that singularities on the base manifold correspond to maximally entangled states, i.e. states with a concurrence value of one.
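As a cross-check, Eq.~\eqref{eq:ricci_scalar} can be reproduced symbolically from the metric alone; below is a minimal \texttt{sympy} sketch (our own) that computes the scalar curvature of an arbitrary metric via the Christoffel symbols and Ricci tensor, applied here to the diagonal metric of Eqs.~\eqref{eq:pqc_metric1}--\eqref{eq:pqc_metric2}:
\begin{verbatim}
import sympy as sp

def scalar_curvature(g, coords):
    """Scalar curvature R = g^{ab} R_{ab} of a metric matrix g(coords)."""
    n = len(coords)
    ginv = g.inv()
    # Christoffel symbols of the second kind, Gamma[c][a][b] = Gamma^c_{ab}
    Gamma = [[[sp.simplify(sp.Rational(1, 2) * sum(
                ginv[c, d] * (sp.diff(g[d, a], coords[b])
                              + sp.diff(g[d, b], coords[a])
                              - sp.diff(g[a, b], coords[d]))
                for d in range(n)))
               for b in range(n)] for a in range(n)] for c in range(n)]
    # Ricci tensor R_{ab}
    Ric = sp.zeros(n)
    for a in range(n):
        for b in range(n):
            Ric[a, b] = sp.simplify(
                sum(sp.diff(Gamma[c][a][b], coords[c]) for c in range(n))
                - sum(sp.diff(Gamma[c][a][c], coords[b]) for c in range(n))
                + sum(Gamma[c][c][d] * Gamma[d][a][b]
                      for c in range(n) for d in range(n))
                - sum(Gamma[c][b][d] * Gamma[d][a][c]
                      for c in range(n) for d in range(n)))
    return sp.simplify(sum(ginv[a, b] * Ric[a, b]
                           for a in range(n) for b in range(n)))

C, chi, Phi, Theta = sp.symbols("C chi Phi Theta", real=True)
g = sp.diag(1 / (1 - C**2), C**2, 1 - C**2, 1 - C**2)
# Simplifies to 2*(6*C**2 - 5)/(C**2 - 1), i.e. Eq. (ricci_scalar)
print(scalar_curvature(g, [C, chi, Phi, Theta]))
\end{verbatim}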
The scalar curvatures of the individual circuits, written in terms of their circuit parameters, are: \begin{align}
\mathcal R_{\text{HEA}} &= 12 -\frac{1}{\sin (2 \text{$\theta_1$}) \cos (2 \text{$\theta_2$})+1}+\frac{1}{\sin (2 \text{$\theta_1$}) \cos (2 \text{$\theta_2$})-1}, \label{eq:hea_ricci_scalar}\\
\mathcal R_{\text{LDCA}} &= 12 - 2 \sec ^2(2 \text{$\theta_3$}) \sec ^2(2 \text{$\theta_5$}), \label{eq:ldca_ricci_scalar}\\
\mathcal R_{\text{QGAN}} &= 12 - \frac{1}{\sin (\text{$\theta_1$}) \sin (\text{$\theta_2$}) \sin (\text{$\theta_5$})+1}+\frac{1}{\sin (\text{$\theta_1$}) \sin (\text{$\theta_2$}) \sin (\text{$\theta_5$})-1}, \label{eq:qgan_ricci_scalar}\\
\mathcal R_{\text{sHEA}} &=
\frac{
\Bigg( \splitfrac{
12 \sin ^2\left(\theta _1\right) \sin ^2\left(\theta _2\right) \left(\cos \left(\theta _3\right)-\cos \left(\frac{\theta _4}{4}\right)\right){}^2
}{
+ 12\left(\sin \left(\theta _3\right) \left(\cos \left(\theta _1\right) \cos \left(\theta _2\right)+1\right)-\sin \left(\theta _1\right) \sin \left(\theta _2\right) \sin \left(\frac{\theta _4}{4}\right)\right){}^2 - 40
}
\Bigg) }{
\splitfrac{
\sin ^2\left(\theta _1\right) \sin ^2\left(\theta _2\right) \left(\cos \left(\theta _3\right)-\cos \left(\frac{\theta _4}{4}\right)\right){}^2
}{
+\left(\sin \left(\theta _3\right) \left(\cos \left(\theta _1\right) \cos \left(\theta _2\right)+1\right)-\sin \left(\theta _1\right) \sin \left(\theta _2\right) \sin \left(\frac{\theta _4}{4}\right)\right){}^2-4
} }. \label{eq:shea_ricci_scalar} \end{align} These scalar curvatures are additionally plotted in Fig. \ref{fig:curvature_landscape}. For HEA and LDCA, the Ricci scalars are each a function of two circuit parameters, i.e. only two parameters influence the entanglement in the system. On the other hand, the Ricci scalars of QGAN and sHEA are functions of three and four parameters, respectively. To visualize how parameter values impact the curvature, the values of $\theta_1$ and $\theta_2$ are scanned over the range $[0, 2\pi]$, while the values of the other parameters, if they appear in the expression for the Ricci scalar, are fixed at particular values. With QGAN, by tuning the value of $\theta_5$ from $0$ to $\frac{\pi}{2}$, we observe a gradual emergence of ``wells'' of negative curvature. With sHEA, the landscape depends significantly on the values of $\theta_3$ and $\theta_4$. For instance, when $\theta_3 = \pi$, there are wells of negative curvature similar to those in QGAN's landscape. However, when $\theta_3$ is decreased to approximately $\frac{\pi}{2}$, the wells are replaced by ``valleys'' of negative curvature. Comparing these landscapes provides some insight into the (relative) entangling capabilities of the circuit blocks and the ease of generating highly entangled states by tuning the circuit parameters. Circuits whose curvature landscapes contain extensive regions of negative curvature (i.e. high concurrence), such as LDCA, can more readily generate states with high entanglement than circuits with only limited regions of negative curvature, such as QGAN. Additionally, it appears easier to generate highly entangled states with LDCA than with circuits such as QGAN and sHEA, in which one must tune multiple circuit parameters, several of which need to be set near specific values to generate states with high entanglement.
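For concreteness, the HEA panel of Fig.~\ref{fig:curvature_landscape} can be regenerated directly from Eq.~\eqref{eq:hea_ricci_scalar}; a minimal sketch (the grid resolution and plotting choices are ours, while the clipping range $[-5,10]$ matches the figure):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

t1, t2 = np.meshgrid(np.linspace(0, 2 * np.pi, 400),
                     np.linspace(0, 2 * np.pi, 400), indexing="ij")
x = np.sin(2 * t1) * np.cos(2 * t2)
with np.errstate(divide="ignore", invalid="ignore"):
    R = 12.0 - 1.0 / (x + 1.0) + 1.0 / (x - 1.0)   # Eq. (hea_ricci_scalar)
R = np.clip(R, -5.0, 10.0)

plt.pcolormesh(t1, t2, R, shading="auto")
plt.xlabel(r"$\theta_1$")
plt.ylabel(r"$\theta_2$")
plt.colorbar(label=r"$\mathcal{R}_{\mathrm{HEA}}$")
plt.show()
\end{verbatim}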
\begin{figure}
\caption{ Scalar curvature ``landscapes'' of the four circuit blocks: (a) HEA, (b) LDCA, (c) QGAN, and (d) sHEA. In the case of (a) and (b), the scalar curvature (denoted using $\mathcal{R}$) is a function of only two parameters: $\theta_1$ and $\theta_2$ for HEA and $\theta_3$ and $\theta_5$ for LDCA. For the other two circuit blocks, in which $\mathcal{R}$ is a function of more than two circuit parameters, $\mathcal{R}$ is plotted as a function of $\theta_1$ and $\theta_2$, while scanning specific values of the remaining parameters to produce snapshots of the landscapes. We limit the range of $\mathcal{R}$ to [-5, 10] for ease of visualization. }
\label{fig:curvature_landscape}
\end{figure}
Why might one be interested in calculating the Ricci scalar? After all, on the face of it, it seems to contain the same amount of information as the concurrence. The answer lies in the goal of this study, namely understanding PQCs from a geometric point of view and their optimization performance with regard to the Quantum Natural Gradient. Our intuition is that the Quantum Natural Gradient incorporates the geometric information of the projective Hilbert space into the optimization procedure in order to improve the speed of convergence. The question is: can we map out the geometry of the quantum circuits and see \textit{geometrically} what the quantum natural gradient is doing differently? This is the question we take up and explore in the next section.
\section{Insights into VQE performance through scalar curvature: a two-qubit study}\label{sec:numerical_experiments}
In this section, we numerically demonstrate the connection between the geometry and performance of two-qubit parameterized quantum circuits. To quantify the performance, we consider a toy problem instance of the Variational Quantum Eigensolver (VQE) algorithm \cite{Peruzzo2014}, which, despite its simplicity, shows how the scalar curvature of a parameterized quantum circuit for two qubits may help inform the effectiveness of an ansatz prior to execution of the algorithm. Namely, the task at hand is to estimate the ground state energy of molecular hydrogen at a particular bond length, employing each of the two-qubit circuits. We consider the two-qubit Hamiltonian for molecular hydrogen from an early VQE experimental study \cite{O_Malley2016}: \begin{equation}
H = \nu_1 \mathbb{I} + \nu_2 Z_1 + \nu_3 Z_2 + \nu_4 Z_1 Z_2 + \nu_5 X_1 X_2 + \nu_6 Y_1 Y_2, \end{equation} where $\boldsymbol \nu = \{ \nu_1, \nu_2, ..., \nu_6 \}$ corresponds to Hamiltonian coefficients. For this problem, the ground state wave function is of the form: $\ket{\psi_g} = \alpha(r_{H-H}) \ket{01} + \beta(r_{H-H}) \ket{10}$, where the wave function coefficients $\alpha$ and $\beta$ are functions of $r_{H-H}$, the inter-atomic distance between the two Hydrogen atoms. At $r_{H-H} = 3.19 \ \textup{\AA}$, the ground state wave function is a highly entangled state of the form: \begin{equation}
\ket{\psi_g} = \alpha \ket{01} + \beta \ket{10}, \end{equation}
for $\alpha, \beta \in \mathbb{C}$ where $|\alpha|^2 \approx 0.47$ and $|\beta|^2 \approx 0.53$. In this case, the ability to generate (highly) entangled states using a parameterized quantum circuit is important for representing the solution state. We executed wave function simulations for the VQE calculations. For the optimization in VQE, we consider the standard gradient descent optimizer as well as the Quantum Natural Gradient optimizer using the block-diagonal and diagonal matrix approximations for the Fubini-Study metric tensor \cite{Stokes2019}. We fix the step size of each type of gradient descent to $0.05$ and terminate each optimization run based on a convergence threshold of $10^{-6}$.
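For reference, the exact ground-state energy used as the error baseline can be obtained by direct diagonalization of the $4\times4$ Hamiltonian matrix; a minimal sketch (the coefficients $\boldsymbol\nu$ below are placeholders, not the published values):
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def h2_hamiltonian(nu):
    """H = nu1*I + nu2*Z1 + nu3*Z2 + nu4*Z1Z2 + nu5*X1X2 + nu6*Y1Y2."""
    terms = [np.kron(I2, I2), np.kron(Z, I2), np.kron(I2, Z),
             np.kron(Z, Z), np.kron(X, X), np.kron(Y, Y)]
    return sum(c * t for c, t in zip(nu, terms))

nu = [-0.3, 0.2, -0.2, 0.17, 0.04, 0.04]   # placeholder coefficients
energies, states = np.linalg.eigh(h2_hamiltonian(nu))
print("exact ground-state energy:", energies[0])
\end{verbatim}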
Using this toy problem, we discuss two main observations:\footnote{We repeat the VQE simulations and analyses for a bond length corresponding to an unentangled ground state and obtain similar results. These results are shown in Appendix \ref{app:50_runs_product}.} \begin{enumerate}
\item Particular circuit structures lead to precise and/or accurate solutions. Using 50 independent optimizations, we show that circuits like LDCA initialize states at high concurrences and consistently and rapidly converge to the solution. Others have wider spread in the final energies and their accuracies. We argue that further insight about these results can be understood from the scalar curvature.
\item We investigate the role of the fiber space by considering the QGAN circuit as a case study. This circuit, as constructed, is unable to reach the ground state but is able to do so by appending local rotation gates. By calculating the curvature, we are also able to see how the Quantum Natural Gradient is able to take advantage of the geometry while the standard gradient descent does not. \end{enumerate}
\subsection{Curvature landscape and optimization} We first investigate the connection between the curvature landscape and VQE optimization. Fig.~\ref{fig:h2_entangled_ground_state_data} tracks the energy error, concurrence, and scalar curvature of optimization paths using the four circuit blocks. For each circuit, 50 independent optimization trials were performed using random parameter initialization. For three of the four circuits, namely HEA, LDCA, and sHEA, using Quantum Natural Gradients enabled the optimizations to converge to errors below the chemical accuracy threshold ($\approx 10^{-3}$ Ha) within 200 descent steps. Referring back to Fig.~\ref{fig:curvature_landscape}, these three circuits correspond to curvature landscapes with extensive regions of negative curvature or high concurrence. For sHEA, $\theta_3$ and $\theta_4$ were tuned over the course of each optimization such that highly entangled states became accessible (e.g. $\theta_3 \approx \pi/2$).
In particular, optimization runs for LDCA started at higher values of concurrence on average and rapidly converged to accurate ground state energies within around 20 descent steps for QNG methods. In addition, the standard deviation over 50 runs is smaller than those corresponding to other circuits.
\begin{figure*}\label{fig:h2_entangled_ground_state_data}
\end{figure*}
Additional insights into the optimization procedure can be gathered by looking at the curvature landscapes and also the scalar curvatures reached during the optimization process. \begin{enumerate}[label=(\roman*)]
\item For the circuits that reach the ground state, we observe more regions of negative curvature; these regions in the Hilbert space are useful in accelerating the optimizations. Having more of these regions seems to correlate with the performance of the optimization process. Consider the curvature landscape of LDCA, in which we observe small, repeating hills of positive curvature that are surrounded by regions of negative curvature. This implies that LDCA may have greater access to highly entangled states that are additionally constrained to the subspace spanned by $\{\ket{01}, \ket{10}\}$, both of which led to superior performance in this VQE algorithm instance. On the other hand, looking at the QGAN curvature landscapes, we see that for the majority of parameter settings the curvature is positive.
\item Because the Quantum Natural Gradient is tailored to the wave function geometry, it is able to reach these areas of negative curvature faster than standard gradient descent. Once it reaches these regions, the optimization is accelerated early on, allowing for a lower number of iterations. In all the cases we examined, the standard gradient descent takes longer to find these regions.
\item Lastly, we briefly comment on the relative costs of Quantum Natural Gradient, which depend on the number of non-zero elements in the metric tensor or its approximations (a finite-difference sketch of how this tensor can be estimated is given after this list).
From Eqs.~\ref{eq:fs_metric_tensor_hea}- \ref{eq:fs_metric_tensor_shea}, we see that the diagonal or block-diagonal approximation of the metric tensor for LDCA captures all of the non-zero elements.
This is in contrast to the other circuits which have at least two unique elements that are not captured by the approximations to the metric tensor.
To make better use of QNG, these other circuits may require computations of elements that are not captured by the block-diagonal or diagonal approximations.
For example, in Fig. \ref{fig:h2_entangled_ground_state_data}d, we ran the VQE optimizations using the dense or full metric tensor for sHEA, a case in which the block-diagonal approximation does not capture many of the non-zero elements.
Shown using purple lines, we observe that the optimization significantly improves in efficiency though at the cost of more function calls needed at each QNG descent step to compute the full metric tensor.
On the other hand, LDCA only requires one non-zero element of the metric tensor to be computed at each QNG descent step.
This appears to imply that certain circuits, by the way that they are parameterized, are better suited for QNG methods than others. \end{enumerate}
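As referenced in item (iii) above, the Fubini--Study metric tensor (the real part of the quantum geometric tensor) can be estimated numerically from any statevector simulator, after which its diagonal or block-diagonal approximations can be read off. The following is a minimal finite-difference sketch (our own; \texttt{psi} is a placeholder for a function returning the normalized statevector at the given parameters, and \texttt{eps} is an illustrative step size):
\begin{verbatim}
import numpy as np

def fubini_study_metric(psi, theta, eps=1e-5):
    """G_ij = Re[<d_i psi|d_j psi> - <d_i psi|psi><psi|d_j psi>],
    with parameter derivatives estimated by central finite differences."""
    theta = np.asarray(theta, dtype=float)
    n = len(theta)
    psi0 = psi(theta)
    dpsi = []
    for i in range(n):
        step = np.zeros(n)
        step[i] = eps
        dpsi.append((psi(theta + step) - psi(theta - step)) / (2 * eps))
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            overlap = np.vdot(dpsi[i], dpsi[j])
            berry = np.vdot(dpsi[i], psi0) * np.vdot(psi0, dpsi[j])
            G[i, j] = np.real(overlap - berry)
    return G
\end{verbatim}
The diagonal approximation keeps only the entries $G_{ii}$, while the block-diagonal approximation keeps the blocks associated with each circuit layer, as in \cite{Stokes2019}.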
\subsection{Non-trivial role of adding single-qubit gates} As observed in Fig. \ref{fig:h2_entangled_ground_state_data}b, QGAN consistently fails to reach the ground state. QGAN, as constructed, is unable to reach states corresponding to high entanglement, or equivalently negative curvature. Often, in similar situations in which the circuit seems insufficient, one adds a set of gates or a circuit layer in the hopes of providing greater flexibility to reach the solution state. Thus, we tried augmenting the QGAN circuit with local rotations, adding $RX$ then $RZ$ single-qubit gates to each of the two qubits. This adds four new parameters to the circuit. Appending local rotations should not impact the concurrence and thus the curvature. It was surprising to observe, however, that adding local rotations greatly improved the optimization as shown in Fig. \ref{fig:qgan_aug}a. \begin{figure}
\caption{ (a) Energy error, concurrence, and curvature tracked over 50 independent optimization trials of QGAN augmented with local rotations (RX and RZ applied to each qubit). (b) Curvature landscape corresponding to final/optimized parameter value of $\theta_5$ of a particular optimization run using the original QGAN circuit. (c) Curvature landscape corresponding to final parameter value of $\theta_5$ of a particular optimization run using \emph{augmented} QGAN circuit.}
\label{fig:qgan_aug}
\end{figure} While the average value of the concurrence at the start of the optimization using the augmented QGAN was lower than that of the original QGAN circuit, the updated QGAN circuit was able to estimate the ground state energy with high accuracy using quantum natural gradient methods. The puzzle is then the following: while the concurrence expression in Eq.~\eqref{eq:qgan_concurrence} should not change for the augmented QGAN (since single-qubit gates cannot increase entanglement), the additional parameterized single-qubit gates appear to have helped guide the other circuit parameters to values at which highly entangled states become accessible (shown in Fig. \ref{fig:qgan_aug}c). In other words, without increasing the entanglement available in the circuit, we are able to turn an optimization procedure that does not reach an entangled state into one that does. We suspect that this is where another part of the geometry plays a role, namely the fibers. Locally the geometry is $S^4 \times S^3$, but this is not true globally, which means that how we move through the fibers has a non-trivial impact on the optimization process. This case also illustrates the role geometry plays in the optimization process, because the standard gradient descent still fails to find the ground state, at least within a reasonable number of optimization steps. From the point of view of the optimization alone, we simply see that the concurrence fails to reach the target value and the energy fails to reach low enough values; but when we look at the scalar curvature, we see that the Quantum Natural Gradient descent finds pockets of high negative curvature, allowing the optimization path to move more easily. In other words, as we change parts of the available geometry, the Quantum Natural Gradient is able to leverage it better.
\section{Concluding remarks}\label{sec:conclusion}
Often, investigations or uses of PQCs are limited to considering their inputs (parameters) and outputs (resulting quantum states or observables). Our work provided an investigation into the inner workings of two-qubit PQCs, which we argue are the simplest instances of PQCs and yet can still provide valuable insights for better understanding these circuits. We first defined PQCs from a geometric perspective. With the help of the specific two-qubit geometry, we explicitly worked out the intrinsic co-ordinates for four examples of two-qubit PQC blocks, at least for the base manifold, which is important for characterizing entanglement in the circuit. As a consequence of our ability to parameterize our circuits in terms of the two-qubit geometry, we introduced a notion of local geometric expressibility, which describes how much of the two-qubit geometry can be explored by a two-qubit PQC.
In trying to understand the connection between the geometry of the projective Hilbert space and how it is used by the Quantum Natural Gradient in the optimization process, we used the Mannoury-Fubini-Study metric as a way to calculate the curvature of the base manifold, $S^4$. This provided a simple and remarkable connection between the curvature of the base manifold and the amount of entanglement in the ansatz.
With this connection we were able to establish a bridge from the amount of entanglement and the quality of the optimization procedure to the geometry of the projective Hilbert space. This allowed us to notice a correlation between the ability of an ansatz to find regions of high negative curvature early in the optimization process and the acceleration of that process. We strongly suspect that this connection is not merely a correlation but represents a chain of causation, and for this we give two pieces of evidence: \begin{enumerate}[label=(\roman*)]
\item The performance of the QGAN circuit, which could not find either the entangled ground state or the product ground state, was enhanced by creating the augmented QGAN ansatz, i.e. by adding single-qubit rotations. The optimization process was then able to reach regions of high negative curvature that were not accessible before, after which the circuit was able to reach the entangled ground state.
Furthermore, in the numerical simulations for which the ground state is a product state and hence lies in a region of positive curvature (Appendix \ref{app:50_runs_product}), the augmented QGAN performed better by initially finding regions that are less positive than the original QGAN.
\item Secondly, by inspecting the performance of standard gradient descent we see that, for the circuits that did find the ground state, the regions of high negative curvature were found later in the optimization process than with QNG. Indeed, we believe the behavior of the standard gradient descent for the augmented QGAN is highly suggestive: in the augmented QGAN ansatz it is not able to find these regions of high negative curvature and fails to reach the ground state, even though we know the circuit can find it. At the other extreme, looking at the performance of sHEA with the dense QNG, which finds regions of negative curvature the earliest, one sees that this correlates with finding the ground state much more efficiently than the other approximations of QNG and the standard gradient descent. \end{enumerate}
In summary, the geometric perspective, through the use of the scalar curvature, has provided us with a tool to explain, for example, why LDCA outperforms the other circuits, beyond simply tracking energies or entanglement. Overall, we show that the number of parameters of a given PQC does not necessarily correspond to high circuit flexibility or capability; it matters \emph{how} the circuit is parameterized. In connection with this point, we observed how single-qubit gates can significantly impact the circuit performance, i.e. do more than just provide extra parameters. The single-qubit gates allow us to explore the fibers of the geometry in such a way as to find regions of negative or less positive curvature. This effect is ultimately possible because the geometry is not just a Cartesian product of the base manifold and the fibers but a non-trivially twisted bundle. The geometric perspective also provides insights into how and why the QNG is better than the standard gradient descent; the QNG can find the regions of negative curvature, and if there are none, it opts for regions with less positive curvature early in the optimization process. We stress that the geometric perspective is necessary in order to understand the QNG, since the metric entering the QNG does not depend on the cost function.
While we provided an extensive study on two-qubit PQCs, there remain several puzzles or open questions, which we outline in the following subsection for future work.
\subsection{Future directions and open questions} \label{sec:future_directions}
\subsubsection*{Decomposition Problem} Refs.~\cite{Stokes2019, Yamamoto2019} numerically showed the potential for Quantum Natural Gradients in optimizing parameterized quantum circuits in the context of VQE. A main challenge will be to scale up this method for larger and deeper quantum circuits. In an effort to formulate a hopefully simpler scenario that may be able to be scaled, we describe the following ``decomposition problem'':
Suppose we have a four-qubit circuit: the first and second qubits are entangled using one of the PQC building blocks, e.g. an LDCA block, while the third and fourth qubits are also entangled using a two-qubit block. Overall, we have two two-qubit subsystems, for each of which we know the metric tensor; call them $g^{(12)}$ and $g^{(34)}$. Now suppose we add a static/non-parametric entangler (e.g. a CNOT) to the second and third qubits. How is the metric tensor of the four-qubit system $g^{(1234)}$ constructed, compared to the structures of $g^{(12)}$ and $g^{(34)}$? The idea is that, rather than analyzing the $n$-qubit geometry from the ground up in order to understand PQCs for $n$ qubits, we may be able to use the simpler geometry of 2 qubits to bootstrap our way to understanding PQCs on larger numbers of qubits.
\subsubsection*{Three-qubit case} For this work, we crucially relied on the fact that the geometry of 2 qubits could easily be understood, since it is simply given by the Hopf fibration of $S^7$. As a consequence of the Hopf Invariant One theorem \cite{johnfrankadams1958}, there is one more Hopf fibration that can be studied in great detail, namely $S^7 \hookrightarrow S^{15} \xrightarrow{\pi} S^8 $. In this case, the use of octonions should be helpful in parameterizing the geometry.
\section{Gate definitions}\label{app:gate_definitions} We define the rotation operations in the following way: \begin{align}
R_P(\theta) = \exp ( -i \theta P /2 ), \end{align}
where $P \in \{ X, Y, Z\}$ for single-qubit gates and $P \in \{ XX, YY, ZZ, XY, YX\}$ for two-qubit gates.
We define the iSWAP$^\dagger$ gate as: \begin{equation}
\text{iSWAP}(\theta)^\dagger = \text{e}^{-i \theta (X \otimes X + Y \otimes Y) /2}, \end{equation}
and the CPHASE gate as: \begin{equation}
\text{CPHASE}(\phi) = \text{e}^{-i \phi (I-Z)\otimes (I-Z) /4}. \end{equation}
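For concreteness, these gate definitions translate directly into small matrix exponentials; a minimal numerical sketch (our own, using \texttt{scipy}):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def rotation(P, theta):
    """R_P(theta) = exp(-i theta P / 2) for a single- or two-qubit Pauli string P."""
    return expm(-1j * theta * P / 2)

def iswap_dagger(theta):
    return expm(-1j * theta * (np.kron(X, X) + np.kron(Y, Y)) / 2)

def cphase(phi):
    return expm(-1j * phi * np.kron(I2 - Z, I2 - Z) / 4)

# Example: a two-qubit XX rotation by pi/3
R_XX = rotation(np.kron(X, X), np.pi / 3)
\end{verbatim}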
{ \renewcommand{\arraystretch}{1.5} \begin{table}[ht]
\centering
\caption{Abbreviations and symbols}
\begin{tabular}{ll}
\toprule
\textbf{Symbol/Abbreviation} & \textbf{Description}\\
\midrule
$G$, $G(\boldsymbol\theta)$ & Fubini-Study metric tensor\\
$\mathcal{R}$ & Scalar curvature\\
$C$, $C(\psi)$ & Concurrence\\
$\boldsymbol\theta$ & Parameter vector\\
$\theta_j$ & $j$-th parameter\\
PQC & Parameterized quantum circuit\\
HEA & Hardware-efficient ansatz\\
LDCA & Low-depth circuit ansatz \cite{Dallaire-Demers2018}\\
QGAN & Quantum generative adversarial network\\
\bottomrule
\end{tabular}
\label{tab:table} \end{table} }
\section{Hopf base and fiber parameters for various circuit blocks} \label{app:hopfBaseFiber} \subsection{Hardware-efficient ansatz (HEA)}
The output state of the quantum hardware-efficient ansatz (HEA) depicted by Fig. \ref{fig:circuit_diagrams_v2}(a) is \begin{align}
\ket{\psi}_{HEA} &= \left(\cos \left(\theta _1\right) \cos \left(\theta _3\right) \cos \left(\theta _2+\theta _4\right)-\sin \left(\theta _1\right) \sin \left(\theta _3\right) \sin \left(\theta _2-\theta _4\right)\right) \ket{00}\nonumber\\ &\quad +\left(\sin \left(\theta _2+\theta _4\right) \cos \left(\theta _1\right) \cos \left(\theta _3\right)-\sin \left(\theta _1\right) \sin \left(\theta _3\right) \cos \left(\theta _2-\theta _4\right)\right) \ket{01} \nonumber\\ &\quad +\left(\sin \left(\theta _3\right) \cos \left(\theta _1\right) \cos \left(\theta _2+\theta _4\right)+\sin \left(\theta _1\right) \sin \left(\theta _2-\theta _4\right) \cos \left(\theta _3\right)\right) \ket{10} \nonumber\\ &\quad +\left(\sin \left(\theta _1\right) \cos \left(\theta _3\right) \cos \left(\theta _2-\theta _4\right)+\sin \left(\theta _3\right) \sin \left(\theta _2+\theta _4\right) \cos \left(\theta _1\right)\right)
\ket{11}. \end{align}
The angle parameters of the $S^4$ Hopf base $(\theta_A,\phi_A)$ and $(\chi,\xi)$ and the equivalent Cartesian coordinates $(x_0,x_1,x_2,x_3,x_4)$ are calculated to be \begin{align}
x_0 &= \cos \left(2 \theta _1\right) \cos \left(2 \theta _3\right)-8 \sin \left(\theta _1\right) \sin \left(\theta _2\right) \sin \left(\theta _3\right) \cos \left(\theta _1\right) \cos \left(\theta _2\right) \cos \left(\theta _3\right)\\
x_1 &= \sin \left(2 \theta _1\right) \sin \left(2 \theta _2\right) \cos \left(2 \theta _3\right)+\sin \left(2 \theta _3\right) \cos \left(2 \theta _1\right) \\ x_2 &= 0 \\ x_3 &= \sin \left(2 \theta _1\right) \cos \left(2 \theta _2\right) \\ x_4 &= 0 \end{align} and \begin{align}
\theta_A &= \arccos\left(\cos \left(2 \theta _1\right) \cos \left(2 \theta _3\right)-8 \sin \left(\theta _1\right) \sin \left(\theta _2\right) \sin \left(\theta _3\right) \cos \left(\theta _1\right) \cos \left(\theta _2\right) \cos \left(\theta _3\right)\right) \\
\phi_A &= \arccos\left(\frac{\sin \left(2 \theta _1\right) \sin \left(2 \theta _2\right) \cos \left(2 \theta _3\right)+\sin \left(2 \theta _3\right) \cos \left(2 \theta _1\right)}{\sqrt{1-\left(\cos \left(2 \theta _1\right) \cos \left(2 \theta _3\right)-8 \sin \left(\theta _1\right) \sin \left(\theta _2\right) \sin \left(\theta _3\right) \cos \left(\theta _1\right) \cos \left(\theta _2\right) \cos \left(\theta _3\right)\right){}^2}}\right) \\
\chi &= \frac{\pi }{2}
\\
\xi &= \frac{\pi }{2}. \end{align}
The quaternion $q_\pm=\langle c_\pm|\psi_H\rangle$ is calculated to be \begin{align}
\label{hea_fiber}
q_\pm = \frac 1{2\lambda_2} (e_\pm + jf_\pm) \end{align} where \begin{align}
e_\pm &= \left(\cos \left(\theta _1\right) \cos \left(\theta _3\right) \cos \left(\theta _2+\theta _4\right)-\sin \left(\theta _1\right) \sin \left(\theta _3\right) \sin \left(\theta _2-\theta _4\right)\right) \left(\lambda _3 \gamma_\mp+\lambda _1 \gamma_\pm\right) \\
f_\pm &=
\left(\sin \left(\theta _1\right) \sin \left(\theta _3\right) \cos \left(\theta _2-\theta _4\right)-\sin \left(\theta _2+\theta _4\right) \cos \left(\theta _1\right) \cos \left(\theta _3\right)\right) \left(-\lambda _3 \gamma_\mp-\lambda _1 \gamma_\pm \right) \end{align} where \begin{align}
\lambda_1 &=
\sqrt{2 \sin \left(4 \theta _1\right) \sin \left(2 \theta _2\right) \sin \left(4 \theta _3\right)+4 \sin ^2\left(2 \theta _1\right) \left(\cos ^2\left(2 \theta _2\right)+\sin ^2\left(2 \theta _2\right) \cos ^2\left(2 \theta _3\right)\right)+4 \sin ^2\left(2 \theta _3\right) \cos ^2\left(2 \theta _1\right)}
\nonumber\\
\lambda_2
&=
\sqrt{\sin \left(4 \theta _1\right) \sin \left(2 \theta _2\right) \sin \left(4 \theta _3\right)+2 \sin ^2\left(2 \theta _1\right) \left(\cos ^2\left(2 \theta _2\right)+\sin ^2\left(2 \theta _2\right) \cos ^2\left(2 \theta _3\right)\right)+2 \sin ^2\left(2 \theta _3\right) \cos ^2\left(2 \theta _1\right)}
\nonumber\\
\lambda_3 &=
2 \sin \left(2 \theta _1\right) \sin \left(2 \theta _2\right) \sin \left(2 \theta _3\right)-2 \cos \left(2 \theta _1\right) \cos \left(2 \theta _3\right)+2
\nonumber\\
\gamma_\pm &=
\sqrt{1\pm \sqrt{1-\frac{1}{4} \sin ^2\left(2 \theta _1\right) \cos ^2\left(2 \theta
_2\right)-\frac{1}{4} \left(\sin \left(2 \theta _1\right) \sin \left(2 \theta _2\right)
\cos \left(2 \theta _3\right)+\sin \left(2 \theta _3\right) \cos \left(2 \theta
_1\right)\right){}^2}}. \end{align}
\subsection{Low-depth circuit ansatz (LDCA)}
The output state of the low-depth circuit ansatz (LDCA) depicted by Fig. \ref{fig:circuit_diagrams_v2}(c) is \begin{align} \ket{\Psi}_{LDCA} &= e^{-\frac{1}{2} i \left(\theta _1-\theta _2-\theta _4\right)} \left(\cos \left(\theta _3\right) \cos \left(\theta _5\right)-i \sin \left(\theta _3\right) \sin \left(\theta _5\right)\right) \ket{01} \nonumber\\ &\quad - e^{-\frac{1}{2} i \left(\theta _1-\theta _2-\theta _4\right)} \left(\sin \left(\theta _5\right) \cos \left(\theta _3\right)+i \sin \left(\theta _3\right) \cos \left(\theta _5\right)\right) \ket{10} . \end{align}
The angle parameters of the $S^4$ Hopf base $(\theta_A,\phi_A)$ and $(\chi,\xi)$ and the equivalent Cartesian coordinates $(x_0,x_1,x_2,x_3,x_4)$ are calculated to be \begin{align}
x_0 &= \cos \left(2 \theta _3\right) \cos \left(2 \theta _5\right)\\
x_1 &= 0 \\ x_2 &= \sin \left(\theta _1-\theta _2-\theta _4\right) \sin \left(2 \theta _5\right)-\sin \left(2 \theta _3\right) \cos \left(\theta _1-\theta _2-\theta _4\right) \cos \left(2 \theta _5\right) \\ x_3 &= \sin \left(2 \theta _3\right) \sin \left(\theta _1-\theta _2-\theta _4\right) \cos \left(2 \theta _5\right)+\sin \left(2 \theta _5\right) \cos \left(\theta _1-\theta _2-\theta _4\right) \\ x_4 &= 0 \end{align} and \begin{align}
\theta_A &= \arccos \left(\cos \left(2 \theta _3\right) \cos \left(2 \theta _5\right)\right) \\
\phi_A &= \frac{\pi }{2} \\
\chi &= \frac{\pi }{2}
\\
\xi &= -\arctan \left(\frac{\sin \left(2 \theta _3\right) \cos \left(2 \theta _5\right)+\sin \left(2 \theta _5\right) \cot \left(\theta _1-\theta _2-\theta _4\right)}{\sin \left(2 \theta _3\right) \cos \left(2 \theta _5\right) \cot \left(\theta _1-\theta _2-\theta _4\right)-\sin \left(2 \theta _5\right)}\right). \end{align}
The quaternion $q_\pm=\langle c_\pm|\psi_H\rangle$ is calculated to be \begin{align}
\label{ldca_fiber}
q_\pm = \frac 1{\sqrt {\mu_3}} (j g_\pm + k h_\pm), \end{align} where \begin{align}
g_\pm &= \left(\cos \left(\theta _3\right) \cos \left(\frac{1}{2} \left(\theta _1-\theta _2-\theta _4\right)\right) \cos \left(\theta _5\right)-\sin \left(\theta _3\right) \sin \left(\frac{1}{2} \left(\theta _1-\theta _2-\theta _4\right)\right) \sin \left(\theta _5\right)\right) (\mu_1 \gamma_\pm +\mu_2 \gamma_\mp ), \nonumber\\
h_\pm &= \left(\sin \left(\frac{1}{2} \left(\theta _1-\theta _2-\theta _4\right)\right) \cos \left(\theta _3\right) \cos \left(\theta _5\right)+\sin \left(\theta _3\right) \sin \left(\theta _5\right) \cos \left(\frac{1}{2} \left(\theta _1-\theta _2-\theta _4\right)\right)\right) (-\mu_1 \gamma_\pm -\mu_2 \gamma_\mp ).
\end{align}
where
\begin{align}
\mu_1 &=
\sqrt{-2 \cos \left(4 \theta _3\right) \cos ^2\left(2 \theta _5\right)-\cos \left(4 \theta _5\right)+3}, \nonumber\\
\mu_2 &=
2-2 \cos \left(2 \theta _3\right) \cos \left(2 \theta _5\right), \nonumber\\ \mu_3 &=
2 \left(-2 \cos \left(4 \theta _3\right) \cos ^2\left(2 \theta _5\right)-\cos \left(4 \theta _5\right)+3\right),
\nonumber\\
\gamma_\pm &= \frac{1}{2} \sqrt{4\pm \sqrt{\cos \left(4 \theta _5\right)+\cos \left(4 \theta _3\right) \left(\cos \left(4 \theta _5\right)+1\right)+13}}. \end{align}
\subsection{Quantum generative adversarial network (QGAN) ansatz}
The output state of the quantum generative adversarial network (QGAN) depicted by Fig. \ref{fig:circuit_diagrams_v2}(b) is \begin{align}
\ket{\psi}_{QGAN} &= e^{-\frac{1}{2} i \left(\theta _3+\theta _4+\theta _5\right)} \cos\! \left(\frac{\theta _1}{2}\right) \cos\! \left(\frac{\theta _2}{2}\right) \ket{00} -i e^{-\frac{1}{2} i \left(\theta _3-\theta _4-\theta _5\right)} \sin \left(\frac{\theta
_2}{2}\right) \cos \left(\frac{\theta _1}{2}\right) \ket{01} \nonumber\\ &\quad -i e^{\frac{1}{2} i \left(\theta _3-\theta _4+\theta _5\right)} \sin \left(\frac{\theta
_1}{2}\right) \cos \left(\frac{\theta _2}{2}\right) \ket{10} -e^{\frac{1}{2} i \left(\theta _3+\theta _4-\theta _5\right)} \sin \left(\frac{\theta
_1}{2}\right) \sin \left(\frac{\theta _2}{2}\right)
\ket{11} \end{align}
The angle parameters of the $S^4$ Hopf base $(\theta_A,\phi_A)$ and $(\chi,\xi)$ and the equivalent Cartesian coordinates $(x_0,x_1,x_2,x_3,x_4)$ are calculated to be \begin{align}
x_0 &= \cos \left(\theta _1\right)\\
x_1 &= \sin \left(\theta _1\right) \left[\sin \left(\theta _3\right) \cos \left(\theta _5\right)+\sin \left(\theta _5\right) \cos \left(\theta_2\right) \cos \left(\theta _3\right)\right] \\ x_2 &= -\sin \left(\theta _1\right) \sin \left(\theta _2\right) \sin \left(\theta _5\right) \\ x_3 &= 0 \\ x_4 &= \sin \left(\theta _1\right) \left[\sin \left(\theta _3\right) \sin \left(\theta _5\right) \cos \left(\theta _2\right)-\cos \left(\theta _3\right) \cos \left(\theta _5\right)\right] \end{align} and \begin{align}
\theta_A &= \theta_1 \\
\phi_A &= \arccos\left[\sin \left(\theta _3\right) \cos \left(\theta _5\right)+\sin \left(\theta _5\right) \cos \left(\theta _2\right) \cos \left(\theta _3\right)\right] \\
\chi &= \arccos\left(\frac{\sin \left(\theta _3\right) \sin \left(\theta _5\right) \cos \left(\theta _2\right)-\cos \left(\theta _3\right) \cos \left(\theta _5\right)}{\sqrt{1-\left(\sin \left(\theta _3\right) \cos \left(\theta _5\right)+\sin \left(\theta _5\right) \cos \left(\theta _2\right) \cos \left(\theta _3\right)\right){}^2}}\right)\\
\xi &= 0 \end{align}
The quaternion $q_\pm=\langle c_\pm|\psi_H\rangle$ is calculated to be \begin{align}
\label{qgans_fiber}
q_\pm = \frac 1{\sqrt 2} \left(\sin \left(\frac{\theta _1}{2}\right) \gamma_\mp+\cos \left(\frac{\theta _1}{2}\right) \gamma_\pm \right)(a_\pm + ib_\pm + jc_\pm+kd_\pm) \end{align} where \begin{align}
a_\pm &= \cos \left(\frac{\theta _2}{2}\right) \cos \left(\frac{\theta _3+\theta _4+\theta _5}{2}
\right)
\\
b_\pm &=
-\cos \left(\frac{\theta _2}{2}\right) \sin \left(\frac{\theta _3+\theta _4+\theta _5}{2}
\right)
\\
c_\pm &=
-\sin \left(\frac{\theta _2}{2}\right) \sin \left(\frac{\theta _3-\theta _4-\theta _5}{2}
\right)
\\
d_\pm &=
-\sin \left(\frac{\theta _2}{2}\right) \cos \left(\frac{\theta _3-\theta _4-\theta _5}{2}
\right) \end{align} where \begin{align}
\gamma_\pm=
\frac{1}{2} \sqrt{4\pm \sqrt{14+2\cos \left(2 \theta _1\right)}}. \end{align}
where, for example, $\langle \sigma^z_2\rangle_2 $ denotes the expectation value of $\sigma^z_2$ after the second layer of the circuit.
\subsection{Sycamore hardware-efficient ansatz (sHEA)}
The output state of the Sycamore hardware-efficient ansatz (sHEA) depicted by Fig. \ref{fig:circuit_diagrams_v2}(d) is \begin{align} \ket{\psi}_{sHEA} &= -i e^{-\frac{1}{2} i \left(\theta _5+\theta _6\right)} \sin \left(\frac{\theta _2}{2}\right) \cos \left(\frac{\theta _1}{2}\right) \ket{00} \nonumber\\ &\quad + e^{-\frac{1}{2} i \left(\theta _5-\theta _6\right)} \left(\cos \left(\frac{\theta _1}{2}\right) \cos \left(\frac{\theta _2}{2}\right) \cos \left(\frac{\theta _3}{2}\right)-i \sin \left(\frac{\theta _1}{2}\right) \sin \left(\frac{\theta _2}{2}\right) \sin \left(\frac{\theta _3}{2}\right)\right) \ket{01} \nonumber\\ &\quad + e^{\frac{1}{2} i \left(\theta _5-\theta _6\right)} \left(-\sin \left(\frac{\theta _1}{2}\right) \sin \left(\frac{\theta _2}{2}\right) \cos \left(\frac{\theta _3}{2}\right)+i \sin \left(\frac{\theta _3}{2}\right) \cos \left(\frac{\theta _1}{2}\right) \cos \left(\frac{\theta _2}{2}\right)\right) \ket{10} \nonumber\\ &\quad - i e^{-\frac{1}{4} i \left(\theta _4-2 \left(\theta _5+\theta _6\right)\right)} \sin \left(\frac{\theta _1}{2}\right) \cos \left(\frac{\theta _2}{2}\right)
\ket{11} \end{align}
\begin{landscape} The angle parameters of the $S^4$ Hopf base $(\theta_A,\phi_A)$ and $(\chi,\xi)$ and the equivalent Cartesian coordinates $(x_0,x_1,x_2,x_3,x_4)$ are calculated to be \begin{align}
x_0 &= \frac{1}{2} \left(\cos \left(\theta _2\right) \left(\cos \left(\theta _3\right)-1\right)+\cos \left(\theta _1\right) \left(\cos \left(\theta _3\right)+1\right)\right)\\
x_1 &= \sin \left(\theta _2\right) \sin \left(\frac{\theta _3}{2}\right) \left(\sin ^2\left(\frac{\theta _1}{2}\right) \cos \left(\frac{\theta _4}{4}-\theta _5\right)-\cos ^2\left(\frac{\theta _1}{2}\right) \cos \left(\theta _5\right)\right) \nonumber\\
&\quad + \sin \left(\theta _1\right) \cos \left(\frac{\theta _3}{2}\right) \left(\sin ^2\left(\frac{\theta _2}{2}\right) \sin \left(\theta _5\right)-\sin \left(\frac{\theta _4}{4}-\theta _5\right) \cos ^2\left(\frac{\theta _2}{2}\right)\right)\\
x_2 &= \frac{1}{2} \left(\sin \left(\theta _3\right)-\sin \left(\theta _1\right) \sin \left(\theta _2\right) \sin \left(\frac{\theta _4}{4}\right)+\sin \left(\theta _3\right) \cos \left(\theta _1\right) \cos \left(\theta _2\right)\right) \\
x_3 &= \frac{1}{2} \sin \left(\theta _1\right) \sin \left(\theta _2\right) \left(\cos \left(\theta _3\right)-\cos \left(\frac{\theta _4}{4}\right)\right)\\
x_4 &= \frac{1}{2} \bigg(\sin (\theta _1) \cos \bigg(\frac{\theta _3}{2}\bigg) \bigg(-\bigg(-\cos (\theta _2)+\bigg(\cos (\theta _2)+1\bigg) \cos \bigg(\frac{\theta _4}{4}\bigg)+1\bigg) \cos (\theta _5) \nonumber\\
&\quad -\sin \bigg(\frac{\theta _4}{4}\bigg) \sin (\theta _5) \bigg(\cos (\theta _2)+1\bigg)\bigg)-\sin(\theta _2) \sin \bigg(\frac{\theta _3}{2}\bigg) \bigg(2 \sin \bigg(\frac{\theta _4}{4}\bigg) \sin ^2\bigg(\frac{\theta _1}{2}\bigg) \cos (\theta _5)\nonumber\\ &\quad +\sin (\theta _5) \bigg(\cos (\theta _1) +\big(\cos (\theta _1)-1\big) \cos \bigg(\frac{\theta _4}{4}\bigg)+1\bigg)\bigg)\bigg) \end{align} and \begin{align}
\theta_A &= \arccos \left[ \frac{1}{2} \left(\cos \left(\theta _2\right) \left(\cos \left(\theta _3\right)-1\right)+\cos \left(\theta _1\right) \left(\cos \left(\theta _3\right)+1\right)\right) \right] \\
\phi_A &= \arccos \left[
\frac{
\splitfrac{
\sin \left(\theta _2\right) \sin \left(\frac{\theta _3}{2}\right) \left(\sin ^2\left(\frac{\theta _1}{2}\right) \cos \left(\frac{\theta _4}{4}-\theta _5\right)-\cos ^2\left(\frac{\theta _1}{2}\right) \cos \left(\theta _5\right)\right)
}{
+\sin \left(\theta _1\right) \cos \left(\frac{\theta _3}{2}\right) \left(\sin ^2\left(\frac{\theta _2}{2}\right) \sin \left(\theta _5\right)-\sin \left(\frac{\theta _4}{4}-\theta _5\right) \cos ^2\left(\frac{\theta _2}{2}\right)\right)
}
}{
\sqrt{1-\frac{1}{4} \left(\cos \left(\theta _2\right) \left(\cos \left(\theta _3\right)-1\right)+\cos \left(\theta _1\right) \left(\cos \left(\theta _3\right)+1\right)\right){}^2}
} \right]\\
\chi &= \arccos \left[
\frac{
\splitfrac{
\sin \left(\theta _1\right) \cos \left(\frac{\theta _3}{2}\right) \left(-\left(-\cos \left(\theta _2\right)+\left(\cos \left(\theta _2\right)+1\right) \cos \left(\frac{\theta _4}{4}\right)+1\right) \cos \left(\theta _5\right)
-\sin \left(\frac{\theta _4}{4}\right) \sin \left(\theta _5\right) \left(\cos \left(\theta _2\right)+1\right)\right)
}{
-\sin \left(\theta _2\right) \sin \left(\frac{\theta _3}{2}\right) \left(2 \sin \left(\frac{\theta _4}{4}\right) \sin ^2\left(\frac{\theta _1}{2}\right) \cos \left(\theta _5\right)
+\sin \left(\theta _5\right) \left(\cos \left(\theta _1\right)+\left(\cos \left(\theta _1\right)-1\right) \cos \left(\frac{\theta _4}{4}\right)+1\right)\right)
}
}{
\splitfrac{
2 \left(
\left( 1-\frac{1}{4} \left(\cos \left(\theta _2\right) \left(\cos \left(\theta _3\right)-1\right)+\cos \left(\theta _1\right) \left(\cos \left(\theta _3\right)+1\right)\right){}^2\right)
\right.
}{
\left.
\times \left(1-\frac{\left(\sin \left(\theta _2\right) \sin \left(\frac{\theta _3}{2}\right) \left(\sin ^2\left(\frac{\theta _1}{2}\right) \cos \left(\frac{\theta _4}{4}-\theta _5\right)-\cos ^2\left(\frac{\theta _1}{2}\right) \cos \left(\theta _5\right)\right)+\sin \left(\theta _1\right) \cos \left(\frac{\theta _3}{2}\right) \left(\sin ^2\left(\frac{\theta _2}{2}\right) \sin \left(\theta _5\right)-\sin \left(\frac{\theta _4}{4}-\theta _5\right) \cos ^2\left(\frac{\theta _2}{2}\right)\right)\right){}^2}{1-\frac{1}{4} \left(\cos \left(\theta _2\right) \left(\cos \left(\theta _3\right)-1\right)+\cos \left(\theta _1\right) \left(\cos \left(\theta _3\right)+1\right)\right){}^2}\right)
\right)^{-1}
}
} \right] \\
\xi &= \arctan \left[ \frac{\sin \left(\theta _1\right) \sin \left(\theta _2\right) \left(\cos \left(\theta _3\right)-\cos \left(\frac{\theta _4}{4}\right)\right)}{\sin \left(\theta _3\right)-\sin \left(\theta _1\right) \sin \left(\theta _2\right) \sin \left(\frac{\theta _4}{4}\right)+\sin \left(\theta _3\right) \cos \left(\theta _1\right) \cos \left(\theta _2\right)} \right] \end{align}
The quaternion $q_\pm=\langle c_\pm|\psi_H\rangle$ is calculated to be \begin{align}
\label{shea_fiber}
q_\pm =
m_\pm + im_\pm + jr_\pm+ks_\pm \end{align} where \begin{align}
m_\pm &= \frac{\sin \left(\frac{\theta _2}{2}\right) \sin \left(\frac{1}{2} \left(\theta _5+\theta _6\right)\right) \cos \left(\frac{\theta _1}{2}\right) \left(\frac{2 \left(\cos \left(\theta _2\right) \left(\cos \left(\theta _3\right)-1\right)+\cos \left(\theta _1\right) \left(\cos \left(\theta _3\right)+1\right)-2\right) \gamma_{\mp}}{\sqrt{-4 \cos \left(2 \theta _1\right) \cos ^4\left(\frac{\theta _3}{2}\right)-\cos \left(2 \theta _3\right)-4 \sin ^4\left(\frac{\theta _3}{2}\right) \cos \left(2 \theta _2\right)+4 \sin ^2\left(\theta _3\right) \cos \left(\theta _1\right) \cos \left(\theta _2\right)+5}}-\sqrt{2} \gamma_{\pm} \right)}{\sqrt{2}}
\\
r_\pm &=
\frac{
\left(\cos \left(\frac{1}{2} \left(\theta _1+\theta _2-\theta _3+\theta _5-\theta _6\right)\right)+\cos \left(\frac{1}{2} \left(\theta _1-\theta _2+\theta _3+\theta _5-\theta _6\right)\right)+2 \cos \left(\frac{1}{2} \left(\theta _2+\theta _3\right)\right) \cos \left(\frac{1}{2} \left(\theta _1-\theta _5+\theta _6\right)\right)\right)
\left(
- B_\mp +
\sqrt{2} \gamma_\pm
C
\right)
}{
A
}
\\
s_\pm &=
\frac{
\left(\sin \left(\frac{1}{2} \left(\theta _1+\theta _2-\theta _3+\theta _5-\theta _6\right)\right)+\sin \left(\frac{1}{2} \left(\theta _1-\theta _2+\theta _3+\theta _5-\theta _6\right)\right)-2 \sin \left(\frac{1}{2} \left(\theta _1-\theta _5+\theta _6\right)\right) \cos \left(\frac{1}{2} \left(\theta _2+\theta _3\right)\right)\right)
\left(
B_\mp -
\sqrt{2} \gamma_\pm
C
\right)
}{
A
} \end{align}
where \begin{align} A &= 4 \sqrt{2} \sqrt{-4 \cos \left(2 \theta _1\right) \cos ^4\left(\frac{\theta _3}{2}\right)-\cos \left(2 \theta _3\right)-4 \sin ^4\left(\frac{\theta _3}{2}\right) \cos \left(2 \theta _2\right)+4 \sin ^2\left(\theta _3\right) \cos \left(\theta _1\right) \cos \left(\theta _2\right)+5}, \end{align} \begin{align} B_\mp &= 2 \left(\cos \left(\theta _2\right) \left(\cos \left(\theta _3\right)-1\right)+\cos \left(\theta _1\right) \left(\cos \left(\theta _3\right)+1\right)-2\right) \gamma_{\mp}, \end{align} \begin{align} C &= \sqrt{-4 \cos \left(2 \theta _1\right) \cos ^4\left(\frac{\theta _3}{2}\right)-\cos \left(2 \theta _3\right)-4 \sin ^4\left(\frac{\theta _3}{2}\right) \cos \left(2 \theta _2\right)+4 \sin ^2\left(\theta _3\right) \cos \left(\theta _1\right) \cos \left(\theta _2\right)+5}, \end{align} and \begin{align}
\gamma_\pm=
\frac{1}{2} \sqrt{\frac{\pm \sqrt{4 \cos \left(2 \theta _1\right) \cos ^4\left(\frac{\theta _3}{2}\right)+\cos \left(2 \theta _3\right)+4 \sin ^4\left(\frac{\theta _3}{2}\right) \cos \left(2 \theta _2\right)-4 \sin ^2\left(\theta _3\right) \cos \left(\theta _1\right) \cos \left(\theta _2\right)+27}}{\sqrt{2}}+4}. \end{align}
\end{landscape}
\section{Geometric origins of the quantum natural gradient}
Considering the geometric point of view, how might one arrive at the concept of the quantum natural gradient? The idea is simply to work out how the gradient operation changes in different curved geometries. For quantum mechanics, we need to consider the geometry of $\mathbb{P}(H)$.
Using the correspondence between a vector space $V$ and its dual $V^*$ for finite-dimensional spaces, we can pair the co-vector $df$ with the vector $\nabla f$, i.e.
\begin{equation}
\label{gradientdefinition}
df(w)= \langle \nabla f, w \rangle
\end{equation}
where $\langle \cdot , \cdot \rangle$ is a chosen bilinear form. Now consider the following calculation:
\begin{equation*}
\langle v, w \rangle = \sum_{ij} v^{i} \langle e_i, e_j \rangle w^j = \nu (w),
\end{equation*}
where $v, w \in V$, $\nu \in V^*$ and $ \{ e_i\}$ is a basis for $V$. We can expand $\nu$ in a basis $\{\sigma^i\}$ for $V^*$ so that $\nu = \sum_i v_i \sigma^i $ with $v_i =\nu(e_i)= \langle v, e_i \rangle $. From the fact that $v^i = \sum_{j}g^{ij}v_j = \sum_j g^{ij}\langle v, e_j \rangle$, we have
\begin{equation}
\label{transformation_rule}
v = \sum_{i} v^i e_i = \sum_i \left( \sum_j g^{ij} \langle v, e_j \rangle \right) e_i .
\end{equation}
By the discussion above, we have that
\begin{equation}
\label{gradient_coordinate_definition}
\nabla f = \sum_i (\nabla f)^i e_i = \sum_i \left( \sum_j g^{ij} \langle \nabla f, e_j \rangle \right) e_i.
\end{equation}
Considering (\ref{gradientdefinition}) we have that $df(w)= \langle \nabla f, w \rangle = w(f) = \sum_{j} w^j \frac{\partial f}{\partial x^j} $, so that $\langle \nabla f, e_j \rangle = e_j(f)=\frac{\partial f}{\partial x^j} $; combining this with (\ref{transformation_rule}) and (\ref{gradient_coordinate_definition}) we arrive at
\begin{equation}
(\nabla f)^i = \sum_j g^{ij} \frac{\partial f}{\partial x^j}.
\end{equation}
Now, in Euclidean geometry $df$ and $\nabla f$ have the same co-ordinates, but in general they do not. For $\mathbb{P}(H)$ we pick the metric to be the Fubini-Study metric and thus arrive at the following update rule for the parameters:
\begin{equation} \label{natural_gradient_descent}
\boldsymbol\theta^{t+1} = \boldsymbol\theta^{t} - \eta g^{-1} \frac{\partial \mathcal{L}(\boldsymbol\theta)}{\partial \boldsymbol\theta}. \end{equation}
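Concretely, Eq.~\eqref{natural_gradient_descent} preconditions the plain gradient with the inverse of the metric; a minimal sketch of a single update (our own; the regularizer \texttt{reg} is an added safeguard against a singular metric):
\begin{verbatim}
import numpy as np

def qng_step(theta, grad, metric, eta=0.05, reg=1e-8):
    """One quantum natural gradient update:
    theta <- theta - eta * g^{-1} dL/dtheta."""
    g = metric + reg * np.eye(len(theta))
    return theta - eta * np.linalg.solve(g, grad)
\end{verbatim}
Replacing \texttt{metric} by the identity matrix recovers standard gradient descent, which is the comparison made in Sec.~\ref{sec:numerical_experiments}.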
\section{Further insights into VQE performance through scalar curvature: unentangled ground state}\label{app:50_runs_product}
In this section, we consider another regime of the same VQE problem from Sec. \ref{sec:numerical_experiments}. At $r_{H-H} = 0.2 \textup{\AA}$, the ground state wave function is a product state: \begin{equation}
\ket{\psi_g} \approx \beta \ket{10}, \end{equation}
with $|\beta|^2 \approx 1$. We repeat the simulation procedure, running 50 independent VQE optimizations for each of the four two-qubit circuits, as shown in Fig.~\ref{fig:h2_separable_ground_state_data_50_runs}. We observe similar results for $r_{H-H} = 0.2 \textup{\AA}$ as those for $r_{H-H} = 3.19 \textup{\AA}$; using LDCA, the optimizations are both precise and accurate. While initial concurrence values start at high values, they rapidly decrease to near 0. This corresponds to starting in a negative curvature region but quickly moving up to a hill of positive curvature. QGAN, in its original structure, is again insufficient for converging to the ground state energy. However, with added local rotations (Fig.~\ref{fig:h2_separable_ground_state_data_50_runs}e), the optimizations reach sufficient accuracy.
\begin{figure}\label{fig:h2_separable_ground_state_data_50_runs}
\end{figure}
\end{document} |
\begin{document}
\title[A priori bounds and existence of solutions]{A priori bounds and existence of solutions for some nonlocal elliptic problems}
\author[B. Barrios, L. Del Pezzo, J. Garc\'{\i}a-Meli\'{a}n and A. Quaas] {B. Barrios, L. del Pezzo, J. Garc\'{\i}a-Meli\'{a}n\\ and A. Quaas}
\date{}
\address{B. Barrios
\break\indent Department of Mathematics, University of Texas at Austin
\break\indent Mathematics Dept. RLM 8.100 2515 Speedway Stop C1200
\break\indent Austin, TX 78712-1202, USA.} \email{{\tt [email protected]}}
\address{L. Del Pezzo
\break\indent CONICET
\break\indent Departamento de Matem\'{a}tica, FCEyN UBA
\break\indent Ciudad Universitaria, Pab I (1428)
\break\indent Buenos Aires, ARGENTINA. } \email{{\tt [email protected]}}
\address{J. Garc\'{\i}a-Meli\'{a}n
\break\indent Departamento de An\'{a}lisis Matem\'{a}tico, Universidad de La Laguna
\break \indent C/. Astrof\'{\i}sico Francisco S\'{a}nchez s/n, 38271 -- La Laguna, SPAIN
\break\indent {\rm and}
\break \indent Instituto Universitario de Estudios Avanzados (IUdEA) en F\'{\i}sica At\'omica,
\break\indent Molecular y Fot\'onica, Universidad de La Laguna
\break\indent C/. Astrof\'{\i}sico Francisco S\'{a}nchez s/n, 38203 -- La Laguna, SPAIN.} \email{{\tt [email protected]}}
\address{A. Quaas
\break\indent Departamento de Matem\'{a}tica, Universidad T\'ecnica Federico Santa Mar\'{\i}a
\break\indent Casilla V-110, Avda. Espa\~na, 1680 -- Valpara\'{\i}so, CHILE.} \email{{\tt [email protected]}}
\begin{abstract} In this paper we show existence of solutions for some elliptic problems with nonlocal diffusion by means of nonvariational tools. Our proof is based on the use of topological degree, which requires a priori bounds for the solutions. We obtain the a priori bounds by adapting the classical scaling method of Gidas and Spruck. We also deal with problems involving gradient terms. \end{abstract}
\maketitle
\section{Introduction} \setcounter{section}{1} \setcounter{equation}{0}
Nonlocal diffusion problems have received considerable attention during the last years, mainly because of their appearance when modelling different situations. To name a few, let us mention anomalous diffusion and quasi-geostrophic flows, turbulence and water waves, molecular dynamics and relativistic quantum mechanics of stars (see \cite{BoG,CaV,Co,TZ} and references therein). They also appear in mathematical finance (cf. \cite{A,Be,CoT}), elasticity problems \cite{signorini}, the thin obstacle problem \cite{Caf79}, phase transitions \cite{AB98, CSM05, SV08b}, crystal dislocation \cite{dfv, toland} and stratified materials \cite{savin_vald}.
A particular class of nonlocal operators which have been widely analyzed is given, up to a normalization constant, by $$
(-\Delta)^s_K u(x) = \int_{\mathbb R^N} \frac{2u(x) -u(x+y)-u(x-y)}{|y|^{N+2s}} K(y) dy, $$ where $s\in (0,1)$ and $K$ is a measurable function defined in $\mathbb R^N$ ($N\ge 2$). A remarkable example of such operators is obtained by setting $K=1$, when $(-\Delta)^s_K$ reduces to the well-known fractional Laplacian (see \cite[Chapter 5]{Stein} or \cite{NPV, Landkof, S} for further details). Of course, we will require the operators $(-\Delta)^s_K$ to be elliptic, which in our context means that there exist positive constants $\lambda\le \Lambda$ such that \begin{equation}\label{elipticidad} \lambda \le K(x) \le \Lambda \quad \hbox{in } \mathbb R^N \end{equation} (cf. \cite{CS}). While there is a large literature dealing with this class of operators, very little is known about existence of solutions for nonlinear problems, except for cases where variational methods can be employed (see for instance \cite{barrios2, barrios4, barrios3, SV2, servadeivaldinociBN, servadeivaldinociBNLOW} and references therein).
But when the problem under consideration is not of variational type, for instance when gradient terms are present, results about existence of solutions are, as far as we know, very scarce in the literature. Thus our objective is to find a way to show existence of solutions for some problems of this type. For this aim, we will resort to the use of fruitful topological methods, in particular the Leray-Schauder degree.
It is well-known that the use of these methods requires the knowledge of the so-called a priori bounds for all possible solutions. Therefore we will be mainly concerned with the obtention of these a priori bounds for a particular class of equations. A natural starting point for this program is to consider the problem: \begin{equation}\label{problema} \left\{ \begin{array}{ll} (-\Delta)_K^s u = u^p + g(x,u) & \hbox{in }\Omega,\\[0.35pc] \ \ u=0 & \hbox{in }\mathbb R^N \setminus \Omega, \end{array} \right. \end{equation} where $\Omega \subset \mathbb R^N$ is a smooth bounded domain, $p>1$ and $g$ is a perturbation term which is small in some sense. Under several expected restrictions on $g$ and $p$ we will show that all positive solutions of this problem are a priori bounded. The most important requirement is that $p$ is subcritical, that is \begin{equation}\label{subcritico} 1<p< \frac{N+2s}{N-2s} \end{equation} and that the term $g(x,u)$ is a small perturbation of $u^p$ at infinity. By adapting the classical scaling method of Gidas and Spruck (\cite{GS}) we can show that all positive solutions of \eqref{problema} are a priori bounded.
An important additional assumption that we will be imposing on the kernel $K$ is that \begin{equation}\label{continuidad} \lim_{x\to 0} K(x)=1. \end{equation}
It is important to clarify at this moment that we are always dealing with viscosity solutions $u \in C(\mathbb R^N)$ in the sense of \cite{CS}, although in some cases the solutions will turn out to be more regular with the help of the regularity theory developed in \cite{CS, CS2}.
With regard to problem \eqref{problema}, our main result is the following:
\begin{teorema}\label{th-1} Assume $\Omega$ is a $C^2$ bounded domain of $\mathbb R^N$, $N\ge 2$, $s\in (0,1)$ and $p$ verifies \eqref{subcritico}. Let $K$ be a measurable kernel that satisfies \eqref{elipticidad} and \eqref{continuidad}. If $g\in C(\overline{\Omega} \times \mathbb R)$ verifies $$
|g(x,z)| \le C |z|^r \qquad x\in \overline{\Omega}, \ z\in \mathbb R, $$ where $1<r<p$, then problem \eqref{problema} admits at least a positive viscosity solution. \end{teorema}
It is to be noted that the scaling method requires, on the one hand, good estimates for solutions, both in the interior and at the boundary, and on the other hand, a Liouville theorem in $\mathbb R^N$. In the present case interior estimates are well known (cf. \cite{CS}), but good local estimates near the boundary do not seem to be available. We overcome this problem by constructing suitable barriers which can be controlled when the scaled domains are moving. It is worthy of mention at this point that the corresponding Liouville theorems are already available (cf. \cite{ZCCY,CLO1,QX,FW}).
Let us also mention that we were not aware of any work dealing with the question of a priori bounds for problem \eqref{problema}; however, while we were completing this manuscript, the very recent preprint \cite{CLC} came to our attention, where a priori bounds for smooth solutions are obtained for problem \eqref{problema} with $K=1$ and $g=0$ (but no existence is shown). On the other hand, it is important to mention the papers \cite{BCPS,CT,CZ,C}, where a priori bounds and Liouville results have been obtained for related operators, like the ``spectral" fractional Laplacian. To see some differences between this operator and $(-\Delta)^s$, obtained by setting $K=1$ in the present work, see for instance \cite{SV}. In all the previous works dealing with the spectral fractional Laplacian, the main tool is the well-known Caffarelli-Silvestre extension obtained in \cite{CS3}. This tool is not available for us here, hence we will treat the problem in a nonlocal way with a direct approach.
As we commented before, we will also be concerned with the adaptation of the previous result to some more general equations. More precisely, we will study the perturbation of equation \eqref{problema} with the introduction of gradient terms, that is, \begin{equation}\label{problema-grad} \left\{ \begin{array}{ll} (-\Delta)_K^s u = u^p + h(x,u,\nabla u) & \hbox{in }\Omega,\\[0.35pc] \ \ u=0 & \hbox{in }\mathbb R^N \setminus \Omega. \end{array} \right. \end{equation} For the type of nonlocal equations that we are analyzing, a natural restriction in order that the gradient is meaningful is $s>\frac{1}{2}$. However, there seem to be few works dealing with nonlocal equations with gradient terms (see for example \cite{AI,BCI,BK2,CaV,CL,CV,GJL,S2,SVZ,W}).
It is to be noted that, at least in the case $K=1$, solutions $u$ are expected to behave like ${\rm dist}(x,\partial\Omega)^s$ near the boundary by Hopf's principle (cf. \cite{ROS}), so the gradient is expected to be singular near $\partial\Omega$. This implies that the standard scaling method has to be modified to take care of this singularity. We achieve this by introducing some suitable weighted norms which have already been used in the context of second order elliptic equations (cf. \cite{GT}).
However, the introduction of these weighted norms presents some problems since the scaling needed near the boundary is not the same as in the interior. Therefore we need to split our study into two parts: first, we obtain ``rough" universal bounds for all solutions of \eqref{problema-grad}, by using the well-known doubling lemma in \cite{PQS}. Since our problems are nonlocal in nature, this forces us to strengthen the subcriticality hypothesis \eqref{subcritico} and to require instead \begin{equation}\label{subserrin} 1<p< \frac{N}{N-2s} \end{equation} (cf. Remarks \ref{comentario} (b) in Section 3). After that, we reduce the obtention of the a priori bounds to an analysis near the boundary. With a suitable scaling, the lack of a priori bounds leads to a problem in a half-space which has no solutions according to the results in \cite{QX} or \cite{FW}.
It is worth stressing that the main results in this paper rely on the construction of suitable barriers for equations with a singular right-hand side; these barriers are well behaved with respect to suitable perturbations of the domain (cf. Section \ref{s2}).
Let us finally state our result for problem \eqref{problema-grad}. In this context, a solution of \eqref{problema-grad} is a function $u\in C^1(\Omega)\cap C(\mathbb R^N)$ vanishing outside $\Omega$ and verifying the equation in the viscosity sense.
\begin{teorema}\label{th-grad} Assume $\Omega$ is a $C^2$ bounded domain of $\mathbb R^N$, $N\ge 2$, $s \in (\frac{1}{2},1)$ and $p$ verifies \eqref{subserrin}. Let $K$ be a measurable kernel that satisfies \eqref{elipticidad} and \eqref{continuidad}. If $h\in C(\Omega \times \mathbb R\times \mathbb R^N)$ is nonnegative and verifies $$
h(x,z,\xi) \le C (|z|^r + |\xi|^t), \quad x\in\Omega,\ z\in \mathbb R,\ \xi\in \mathbb R^N, $$ where $1<r<p$ and $1<t<\frac{2sp}{p+2s-1}$, then problem \eqref{problema-grad} admits at least one positive solution. \end{teorema}
The rest of the paper is organized as follows: in Section 2 we recall some interior regularity results needed for our arguments, and we solve some linear problems by constructing suitable barriers. Section 3 is dedicated to the obtention of a priori bounds, while in Section 4 we show the existence of solutions, that is, we give the proofs of Theorems \ref{th-1} and \ref{th-grad}.
\section{Interior regularity and some barriers}\label{s2} \setcounter{section}{2} \setcounter{equation}{0}
The aim of this section is to collect several results regarding the construction of suitable barriers and also some interior regularity for equations related to \eqref{problema} and \eqref{problema-grad}. We will use throughout the standard convention that the letter $C$ denotes a positive constant, probably different from line to line.
Consider $s\in (0,1)$, a measurable kernel $K$ verifying \eqref{elipticidad} and \eqref{continuidad} and a $C^2$ bounded domain $\Omega$. We begin by analyzing the linear equation \begin{equation}\label{eq-regularidad} (-\Delta)^s_K u = f \quad \hbox{in } \Omega, \end{equation} where $f\in L^\infty_{\rm loc}(\Omega)$. As a consequence of Theorem 12.1 in \cite{CS} we get that if $u \in C(\Omega)\cap L^\infty( \mathbb R^N)$ is a viscosity solution of \eqref{eq-regularidad} then $u\in C^\alpha_{\rm loc}(\Omega)$ for some $\alpha \in (0,1)$. Moreover, for every ball $B_R\subset \subset \Omega$ there exists a positive constant $C=C(N,s,\lambda,\Lambda,R)$ such that: \begin{equation}\label{est-ca}
\| u\|_{C^{\alpha}(\overline{B_{R/2}})} \le C\left(\| f\| _{L^\infty(B_R)} + \| u\|_{L^\infty(\mathbb R^N)}\right). \end{equation} The precise dependence of the constant $C$ on $R$ can be determined by means of a simple scaling, as in Lemma \ref{lema-regularidad} below; however, for interior estimates this will be of no importance to us. When $s>\frac{1}{2}$, the H\"older estimate for the solution can be improved to obtain an estimate for the first derivatives. In fact, as a consequence of Theorem 1.2 in \cite{K}, we have that $u\in C^{1,\beta}_{\rm loc}(\Omega)$, for some $\beta=\beta(N,s,\lambda,\Lambda) \in (0,1)$. Also, for every ball $B_R\subset \subset \Omega$ there exists a positive constant $C=C(N,s,\lambda,\Lambda,R)$ such that: \begin{equation}\label{est-c1a}
\| u\|_{C^{1,\beta}(\overline{B_{R/2}})} \le C
\left( \| f\| _{L^\infty(B_R)} + \| u\|_{L^\infty(\mathbb R^N)}\right). \end{equation} Both estimates will play a prominent role in our proof of a priori bounds for positive solutions of \eqref{problema} and \eqref{problema-grad}.
Next we need to deal with problems with a right hand side which is possibly singular at $\partial \Omega$. For this aim, it is convenient to introduce some norms which will help us to quantify the singularity of both the right hand sides and the gradient of the solutions in case $s>\frac{1}{2}$.
Let us denote, for $x\in \Omega$, $d(x)={\rm dist}(x,\partial \Omega)$. It is well known that $d$ is Lipschitz continuous in $\Omega$ with Lipschitz constant 1 and it is a $C^2$ function in a neighborhood of $\partial\Omega$. We modify it outside this neighborhood to make it a $C^2$ function (still with Lipschitz constant 1), and we extend it to be zero outside $\Omega$.
Now, for $\theta \in \mathbb R$ and $u \in C(\Omega)$, let us denote (cf. Chapter 6 in \cite{GT}): $$
\| u\|_0^{(\theta)} =\sup_\Omega\; d(x)^{\theta} |u(x)|. $$ When $u\in C^1(\Omega)$ we also set \begin{equation}\label{norma-c1}
\| u\|_1^{(\theta)} = \sup_\Omega\; \left(d(x)^{\theta} |u(x)|+ d(x)^{\theta+1} |\nabla u(x)|\right). \end{equation} Then we have the following existence result for the Dirichlet problem associated to \eqref{eq-regularidad}.
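Before stating it, let us add a minimal illustration of the weighted norms just introduced (a remark included only for orientation, using nothing beyond the boundedness of $\Omega$): for $u=d^\beta$ with $\beta>0$ one has
$$
\| d^\beta\|_0^{(\theta)} =\sup_\Omega\; d(x)^{\theta+\beta}<+\infty \quad \hbox{if and only if}\quad \theta+\beta\ge 0.
$$
Thus $\| u\|_0^{(\theta)}<+\infty$ with $\theta>0$ allows $u$ to blow up near $\partial\Omega$ at a rate at most $d^{-\theta}$, while $\theta<0$ forces a decay of order $d^{-\theta}$; in the norm \eqref{norma-c1} the gradient is allowed to be one power of $d$ more singular than the function itself.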
\begin{lema}\label{lema-existencia} Assume $\Omega$ is a $C^2$ bounded domain, $0<s<1$ and $K$ is a measurable function verifying \eqref{elipticidad} and \eqref{continuidad}. Let $f\in C(\Omega)$ be such that
$\| f \|_0^{(\theta)}<+\infty$ for some $\theta \in (s,2s)$. Then the problem \begin{equation}\label{prob-lineal} \left\{ \begin{array}{ll} (-\Delta)_K^s u=f & \hbox{in } \Omega,\\[0.35pc] \; \; u=0 & \hbox{in } \mathbb R^N\setminus \Omega, \end{array} \right. \end{equation} admits a unique viscosity solution. Moreover, there exists a positive constant $C$ such that \begin{equation}\label{est-1}
\| u \|_0^{(\theta-2s)} \le C \| f \| _0^{(\theta)}. \end{equation} Finally, if $f\ge 0$ in $\Omega$ then $u\ge 0$ in $\Omega$. \end{lema}
The proof of this result relies on the construction of a suitable barrier in a neighborhood of the boundary of $\Omega$, which we will undertake in the following lemma. This barrier will also turn out to be important to obtain bounds for the solutions when applying the scaling method. It is worth mentioning that for quite general operators, the lemma below can be obtained provided that $\theta$ is taken close enough to $2s$ (cf. for instance Lemma 3.2 in \cite{FQ}). But the precise assumptions we are imposing on $K$, specifically \eqref{continuidad}, allow us to construct the barrier in the whole range $\theta \in (s,2s)$.
In what follows, we denote, for small positive $\delta$, $$ \Omega_\delta=\{x\in \Omega: \hbox{dist}(x,\partial \Omega)<\delta\}, $$ and $K_\mu(x)= K(\mu x)$ for $\mu>0$.
\begin{lema}\label{lema-barrera-1} Let $\Omega$ be a $C^2$ bounded domain of $\mathbb R^N$, $0<s<1$ and $K$ be measurable and verify \eqref{elipticidad} and \eqref{continuidad}. For every $\theta \in (s,2s)$ and $\mu_0>0$, there exist $C_0,\delta>0$ such that $$ (-\Delta)^s_{K_\mu} d^{2s-\theta} \ge C_0 d^{-\theta} \quad \hbox{in } \Omega_\delta, $$ if $0<\mu \le \mu_0$. \end{lema}
\begin{proof} By contradiction, let us assume that the conclusion of the lemma is not true. Then there exist $\theta\in (s,2s)$, $\mu_0>0$, sequences of points $x_n\in \Omega$ with $d(x_n)\to 0$ and numbers $\mu_n\in (0,\mu_0]$ such that \begin{equation}\label{contradiction} \lim_{n\to +\infty} d(x_n)^{\theta} (-\Delta)^s_{K_{\mu_n}} d^{2s-\theta} (x_n) \le 0. \end{equation} Denoting for simplicity $d_n:=d(x_n)$, and performing the change of variables $y= d_n z$ in the integral appearing in \eqref{contradiction} we obtain \begin{equation}\label{contradiction-2} \int_{\mathbb R^N} \frac{2 - \left(\frac{d(x_n+d_n z)}{d_n}\right)^{2s-\theta}-
\left(\frac{d(x_n-d_n z)}{d_n}\right)^{2s-\theta}}{|z|^{N+2s}} K(\mu_n d_n z) dz \le o(1). \end{equation} Before passing to the limit in this integral, let us estimate it from below. Observe that when
$x_n+d_n z\in \Omega$, we have by the Lipschitz property of $d$ that $d(x_n+d_n z) \le d_n (1+|z|)$. Of course, the same is true when $x_n +d_n z \not\in \Omega$ and it similarly follows that $d(x_n-d_n z)
\le d_n (1+|z|)$. Thus, taking $L>0$ we obtain for large $n$ \begin{equation}\label{ineq1} \begin{array}{l}
\displaystyle \int_{|z| \ge L} \frac{2 - \left(\frac{d(x_n+d_n z)}{d_n}\right)^{2s-\theta}- \left(\frac{d(x_n-d_n z)}{d_n}\right)^{2s-\theta}}
{|z|^{N+2s}} K(\mu_n d_n z) dz\\[1.4pc]
\quad \ge \displaystyle - 2 \Lambda \int_{ |z| \ge L} \frac{(1+|z|)^{2s-\theta}}{|z|^{N+2s}} dz. \end{array} \end{equation}
On the other hand, since $d$ is smooth in a neighborhood of the boundary, when $|z|\le L$ and $x_n +d_n z\in \Omega$, we obtain by Taylor's theorem \begin{equation}\label{taylor}
d(x_n + d_n z )= d_n + d_n \nabla d(x_n) z + \Theta_n(d_n,z) d_n^2 |z|^2, \end{equation} where $\Theta_n$ is uniformly bounded. Hence \begin{equation}\label{eq3}
d(x_n+d_n z ) \le d_n + d_n \nabla d(x_n) z + C d_n^2 |z|^2. \end{equation}
Now choose $\eta \in (0,1)$ small enough. Since $d(x_n)\to 0$ and $|\nabla d|=1$ in a neighborhood of the boundary, we can assume that \begin{equation}\label{extra1} \nabla d(x_n)\to e \hbox{ as }n\to +\infty \hbox{ for some unit vector }e. \end{equation} Without loss of generality, we may take
$e=e_N$, the last vector of the canonical basis of $\mathbb R^N$. If we restrict $z$ further to satisfy $|z|\le \eta$, we obtain $1+\nabla d(x_n) z \sim 1 + z_N \ge 1-\eta>0$ for large $n$, since $|z_N| \le |z|\le \eta$. Therefore, the right-hand side in \eqref{eq3} is positive for large $n$ (depending only on $\eta$), so that the inequality \eqref{eq3} is also true when $x_n+d_n z\not\in \Omega$. Moreover, by using again Taylor's theorem $$
(1+\nabla d(x_n) z + C d_n |z|^2)^{2s-\theta} \le 1+ (2s-\theta) \nabla d(x_n) z + C |z|^2, $$ for large enough $n$. Thus from \eqref{eq3}, $$
\left(\frac{d(x_n+d_n z)}{d_n}\right)^{2s-\theta} \le 1+ (2s-\theta) \nabla d(x_n) z + C |z|^2, $$ for large enough $n$. A similar inequality is obtained for the term involving $d(x_n-d_n z)$. Therefore we deduce that \begin{equation}\label{ineq2} \begin{array}{l}
\displaystyle \int_{ |z| \le \eta }
\frac{2 - \left(\frac{d(x_n+d_n z)}{d_n}\right)^{2s-\theta}- \left(\frac{d(x_n-d_n z)}{d_n}\right)^{2s-\theta}}{|z|^{N+2s}} K(\mu_n d_n z) dz\\[1.4pc]
\quad \ge \displaystyle - 2 \Lambda C \int_{ |z| \le \eta} \frac{1}{|z|^{N-2(1-s)}} dz. \end{array} \end{equation} We finally observe that it follows from the above discussion (more precisely from \eqref{taylor} and
\eqref{extra1} with $e=e_N$) that for $\eta \le |z| \le L$ \begin{equation}\label{extra2} \frac{d(x_n \pm d_n z)}{d_n} \to (1\pm z_N)_+ \qquad \hbox{as } n \to +\infty. \end{equation} Therefore using \eqref{ineq1}, \eqref{ineq2} and \eqref{extra2}, and passing to the limit as $n\to +\infty$ in \eqref{contradiction-2}, by dominated convergence we arrive at $$ \begin{array}{ll}
\displaystyle - 2 \Lambda \int_{ |z| \ge L} \frac{(1+|z|)^{2s-\theta}}{|z|^{N+2s}} dz +
\int_{ \eta \le |z| \le L} \frac{2 - (1+z_N)_+^{2s-\theta}- (1-z_N)_+^{2s-\theta}}{|z|^{N+2s}} dz \\[1.4pc]
\displaystyle \qquad \qquad - 2 \Lambda C \int_{ |z| \le \eta} \frac{1}{|z|^{N-2(1-s)}} dz \le 0. \end{array} $$ We have also used that $\lim_{n\to +\infty} K(\mu_n d_n z)=1$ uniformly, by \eqref{continuidad} and the boundedness of $\{\mu_n\}$. Letting now $\eta \to 0$ and then $L\to +\infty$, we have $$
\int_{ \mathbb R^N} \frac{2 - (1+z_N)_+^{2s-\theta}- (1-z_N)_+^{2s-\theta}}{|z|^{N+2s}} dz \le 0. $$ It is well-known, with the use of Fubini's theorem and a change of variables, that this integral can be rewritten as a one-dimensional integral \begin{equation}\label{contra-final}
\int_ \mathbb R \frac{2 - (1+t)_+^{2s-\theta}- (1-t)_+^{2s-\theta}}{|t|^{1+2s}} dt \le 0. \end{equation} We will see that this is impossible because of our assumption $\theta \in (s,2s)$. Indeed, consider the function $$
F(\tau) = \int_ \mathbb R \frac{ 2- (1+t)_+^\tau- (1-t)_+^\tau}{|t|^{1+2s}} dt, \quad \tau \in (0,2s), $$ which is well-defined. We claim that $F \in C^\infty(0,2s)$ and it is strictly concave. In fact, observe that for $k\in \mathbb N$, the candidate for the $k-$th derivative $F^{(k)}(\tau)$ is given by $$
- \int_ \mathbb R \frac{(1+t)_+^\tau (\log(1+t)_+)^k + (1-t)_+^\tau (\log(1-t)_+)^k}{|t|^{1+2s}} dt. $$ It is easily seen that this integral converges for every $k\ge 1$, since by Taylor's expansion for $t\sim 0$ we deduce $(1+t)^\tau (\log(1+t))^k + (1-t)^\tau (\log(1-t))^k =O( t^2)$. Therefore it follows that $F$ is $C^\infty$ in $(0,2s)$. To see that $F$ is strictly concave, just notice that $$
F''(\tau)= - \int_\mathbb R \frac{ (1+t)_+^\tau (\log (1+t)_+ )^2+ (1-t)_+^\tau (\log (1-t)_+ )^2}{|t|^{1+2s}} dt < 0. $$ Finally, it is clear that $F(0)=0$. Moreover, since $v(x)=(x_+)^s$, $x\in \mathbb R$, verifies $(-\Delta)^s v=0$ in $\mathbb R_+$ (see for instance the introduction in \cite{CJS} or Proposition 3.1 in \cite{ROS}), we also deduce that $F(s)=0$. By strict concavity we have $F(\tau)>0$ for $\tau\in (0,s)$, which clearly contradicts \eqref{contra-final} if $\theta\in (s,2s)$. Therefore \eqref{contra-final} is not true and this concludes the proof of the lemma. \end{proof}
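For the reader's convenience, let us sketch the reduction to the one-dimensional integral \eqref{contra-final} used above (a standard computation; the positive constant $c_{N,s}$ below is a normalization whose exact value plays no role). Writing $z=(z',z_N)\in\mathbb R^{N-1}\times\mathbb R$ and performing the change of variables $z'=|z_N| w$,
$$
\int_{\mathbb R^{N-1}} \frac{dz'}{(|z'|^2+z_N^2)^{\frac{N+2s}{2}}}= |z_N|^{-1-2s}\int_{\mathbb R^{N-1}} \frac{dw}{(1+|w|^2)^{\frac{N+2s}{2}}}=:c_{N,s}\,|z_N|^{-1-2s},
$$
so that, by Fubini's theorem,
$$
\int_{\mathbb R^N} \frac{2 - (1+z_N)_+^{2s-\theta}- (1-z_N)_+^{2s-\theta}}{|z|^{N+2s}}\, dz
= c_{N,s}\int_{\mathbb R} \frac{2 - (1+t)_+^{2s-\theta}- (1-t)_+^{2s-\theta}}{|t|^{1+2s}}\, dt,
$$
and the sign in \eqref{contra-final} is unaffected since $c_{N,s}>0$.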
\begin{proof}[Proof of Lemma \ref{lema-existencia}] By Lemma \ref{lema-barrera-1} with $\mu_0=1$, there exist $C_0>0$ and $\delta>0$ such that \begin{equation}\label{extra3} \mbox{$(-\Delta)^s_K d^{2s-\theta} \ge C_0 d^{-\theta}$ in $\Omega_\delta$.} \end{equation} Let us show that it is possible to construct a supersolution of the problem \begin{equation}\label{supersolucion} \left\{ \begin{array}{ll} (-\Delta)_K^s v=C_0 d^{-\theta} & \hbox{in } \Omega,\\[0.35pc] \; \; v=0 & \hbox{in } \mathbb R^N\setminus \Omega, \end{array} \right. \end{equation} vanishing outside $\Omega$.
First of all, by Theorem 3.1 in \cite{FQ}, there exists a nonnegative function $w\in C(\mathbb R^N)$ such that $(-\Delta)_K^s w = 1$ in $\Omega$, with $w=0$ in $\mathbb R^N \setminus \Omega$. We claim that $v= d^{2s-\theta} + t w$ is a supersolution of \eqref{supersolucion} if $t>0$ is large enough. For this aim, observe that $(-\Delta)^s_K d^{2s-\theta} \ge -C$ in $\Omega \setminus \Omega_\delta$, since $d$ is a $C^2$ function there. Therefore, $$ \mbox{$(-\Delta)^s_K v\ge t-C \ge C_0 d^{-\theta}$ in $\Omega\setminus \Omega_\delta$} $$ if $t$ is large enough. Since clearly $(-\Delta)^s_K v \ge C_0 d^{-\theta}$ in $\Omega_\delta$ as well, we see that $v$ is a supersolution of \eqref{supersolucion}, which vanishes outside $\Omega$.
Now choose a sequence of smooth functions $\{\psi_n\}$ verifying $0\le \psi_n\le 1$, $\psi_n=1$ in $\Omega \setminus \Omega_{2/n}$ and $\psi_n=0$ in $\Omega_{1/n}$. Define $f_n= f\psi_n$, and consider the problem \begin{equation}\label{perturbado} \left\{ \begin{array}{ll} (-\Delta)_K^s u= f_n & \hbox{in } \Omega,\\[0.35pc] \; \; u=0 & \hbox{in } \mathbb R^N\setminus \Omega. \end{array} \right. \end{equation} Since $f_n \in C(\overline{\Omega})$, we can use Theorem 3.1 in \cite{FQ} which gives a viscosity solution $u_n\in C(\mathbb R^N)$ of \eqref{perturbado}.
On the other hand, $|f_n| \le |f| \le \| f \|_0^{(\theta)} d^{-\theta}$ in $\Omega$, so that the functions
$v_{\pm} = \pm C_0^{-1} \| f \|_0^{(\theta)} v$ are sub and supersolution of \eqref{perturbado}. By comparison (cf. Theorem 5.2 in \cite{CS}), we obtain $$
- C_0^{-1} \| f\| _0^{(\theta)} v \le u_n \le C_0^{-1} \| f\| _0^{(\theta)}v \qquad \hbox{in } \Omega. $$ Now, this bound together with \eqref{est-ca}, Ascoli-Arzel\'a's theorem and a standard diagonal argument allow us to obtain a subsequence, still denoted by $\{u_n\}$, and a function $u\in C(\Omega)$ such that $u_n\to u$ uniformly on compact sets of $\Omega$. In addition, $u$ verifies \begin{equation}\label{eq-ult}
|u |\le C_0^{-1} \| f\| _0^{(\theta)}v \quad \hbox{in }\Omega. \end{equation} By Corollary 4.7 in \cite{CS}, we can pass to the limit in \eqref{perturbado}
to obtain that $u \in C(\mathbb{R}^{N})$ is a viscosity solution of \eqref{prob-lineal}. Moreover inequality \eqref{eq-ult} implies that $|u |\le C \| f\| _0^{(\theta)} d^{2s-\theta}$ in $\Omega\setminus\Omega_\delta$ for some $C>0$, so that, by \eqref{prob-lineal}, \eqref{extra3} and the comparison principle, we obtain that $$
|u| \le C \| f\|_0^{(\theta)} d^{2s-\theta} \quad \hbox{in }\Omega, $$ which shows \eqref{est-1}.
The uniqueness and the nonnegativity of $u$ when $f \ge 0$ are a consequence of the maximum principle (again Theorem 5.2 in \cite{CS}). This concludes the proof. \end{proof}
Our next estimate concerns the gradient of the solutions of \eqref{prob-lineal} when $s>\frac{1}{2}$. The proof is more or less standard, starting from \eqref{est-c1a} (cf. \cite{GT}), but we include it for completeness.
\begin{lema}\label{lema-regularidad} Assume $\Omega$ is a smooth bounded domain and $s>\frac{1}{2}$. There exists a constant $C_0$ which depends on $N,s, \lambda$ and $\Lambda$ but not on $\Omega$ such that, for every $\theta \in (s,2s)$
and $f\in C(\Omega)$ with $\| f \|_0^{(\theta)}<+\infty$ the unique solution $u$ of \eqref{prob-lineal} verifies \begin{equation}\label{est-adimensional}
\| \nabla u \|_0^{(\theta - 2s +1)} \le C_0 ( \|f \|_0^{(\theta)} + \|u \|_{0}^{(\theta - 2s)}). \end{equation} \end{lema}
\begin{proof} By \eqref{est-c1a} with $R=1$ we know that if $(-\Delta)^s_K u=f$ in $B_1$ then there exists a constant which depends on $N,s, \lambda$ and
$\Lambda$ such that $\| \nabla u\|_{L^\infty (B_{1/2})} \le C ( \| f\| _{L^\infty(B_1)} + \| u\|_{L^\infty(\mathbb R^N)})$. By a simple scaling, it can be seen that if $(-\Delta)^s_K u=f$ in $\Omega$ and $B_R\subset \subset \Omega$ then $$
R \| \nabla u\|_{L^\infty (B_{R/2})} \le C ( R^{2s} \| f\| _{L^\infty(B_R)} + \| u\|_{L^\infty(\mathbb R^N)}). $$ Choose a point $x\in \Omega$. By applying the previous inequality in the ball $B=B_{d(x)/2}(x)$ and multiplying by $d(x)^{\theta-2s}$ we arrive at $$
d(x)^{\theta-2s+1} | \nabla u(x) | \le C \left( d(x)^\theta \| f\| _{L^\infty(B)} +
d(x)^{\theta-2s} \| u\|_{L^\infty(\mathbb R^N)}\right). $$
Finally, notice that $\frac{d(x)}{2} < d(y) <\frac{3d(x)}{2}$ for every $y\in B$, so that $d(x)^\theta |f(y)|
\le 2^\theta d(y)^\theta |f(y)| \le 2^{2s} \| f \|_0^{(\theta)}$, which implies $d(x)^\theta \| f \|_{L^\infty(B)}
\le 2^{2s} \| f \|_0^{(\theta)}$. A similar inequality can be achieved for the term involving $\| u\|_{L^\infty(\mathbb R^N)}$. After taking supremum, \eqref{est-adimensional} is obtained. \end{proof}
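For completeness, we record the simple scaling invoked at the beginning of the preceding proof (an elementary verification; here we write the operator, up to an inessential normalizing constant, as $(-\Delta)^s_K u(x)=\int_{\mathbb R^N}\big(2u(x)-u(x+y)-u(x-y)\big)\,\frac{K(y)}{|y|^{N+2s}}\,dy$, in line with the integral expression used in the proof of Lemma \ref{lema-barrera-1}). If $(-\Delta)^s_K u=f$ in $B_R(x_0)\subset\subset\Omega$ and $u_R(y):=u(x_0+Ry)$, the change of variables $z=Rw$ in the integral defining the operator gives
$$
(-\Delta)^s_{K_R} u_R(y)=R^{2s} f(x_0+Ry) \quad\hbox{in } B_1,\qquad K_R(y):=K(Ry),
$$
and $K_R$ verifies \eqref{elipticidad} with the same constants $\lambda,\Lambda$. Applying \eqref{est-c1a} with $R=1$ to $u_R$, and using $\nabla u_R(y)=R\,\nabla u(x_0+Ry)$ together with $\| u_R\|_{L^\infty(\mathbb R^N)}=\| u\|_{L^\infty(\mathbb R^N)}$, we obtain
$$
R\,\| \nabla u\|_{L^\infty(B_{R/2}(x_0))}=\| \nabla u_R\|_{L^\infty(B_{1/2})}\le C\big( R^{2s}\| f\|_{L^\infty(B_R(x_0))}+\| u\|_{L^\infty(\mathbb R^N)}\big),
$$
which is the inequality used above, with $C=C(N,s,\lambda,\Lambda)$.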
Our next lemma is intended to take care of the constant in \eqref{est-1} when we consider problem \eqref{prob-lineal} in expanding domains, since in general it depends on $\Omega$. This is the key for the scaling method to work properly in our setting. For a $C^2$ bounded domain $\Omega$, we take $\xi \in \partial\Omega$, $\mu>0$ and let $$ \mbox{$\Omega^\mu:=\{y\in \mathbb R^N:\ \xi +\mu y\in \Omega\}$.} $$ It is clear then that $d_\mu(y):={\rm dist}(y,\partial \Omega^\mu)=\mu^{-1} d(\xi +\mu y)$. Let us explicitly remark that the constant in \eqref{est-1} for the solution of \eqref{prob-lineal} posed in $\Omega^\mu$ will depend then on the domain $\Omega$, but not on the dilation parameter $\mu$, as we show next.
\begin{lema}\label{lema-barrera-2} Assume $\Omega$ is a $C^2$ bounded domain, $0<s<1$ and $K$ is a measurable function verifying \eqref{elipticidad} and \eqref{continuidad}. For every $\theta \in (s,2s)$ and $\mu_0>0$, there exist $C_0,\delta>0$ such that $$ (-\Delta)^s_{K_\mu} d_\mu ^{2s-\theta} \ge C_0 d_\mu^{-\theta} \quad \hbox{in } (\Omega^\mu)_\delta, $$ if $0<\mu \le \mu_0$. Moreover, if $u$ verifies $(-\Delta)_{K_\mu}^s u \le C_1 d_\mu^{-\theta}$ in $\Omega^\mu$ for some $C_1>0$ with $u=0$ in $\mathbb R^N\setminus \Omega^\mu$, then $$
u (x) \le C_2( C_1 +\|u \|_{L^\infty(\Omega^\mu)} )\; d_\mu(x) ^{2s-\theta} \quad \hbox{for } x \in (\Omega^\mu)_\delta, $$ for some $C_2>0$ only depending on $s$, $\delta$, $\theta$ and $C_0$. \end{lema}
\begin{proof} The first part of the proof is similar to that of Lemma \ref{lema-barrera-1} but taking a little more care in the estimates. By contradiction let us assume that there exist sequences $\xi_n\in \partial \Omega$, $\mu_n \in (0,\mu_0]$ and $$ \mbox{$x_n \in \Omega^n:=\{y\in \mathbb R^N:\ \xi_n + \mu_n y\in \Omega\}$}, $$ such that $d_n(x_n)\to 0$ and $$ d_n(x_n)^{\theta} (-\Delta)^s_{K_{\mu_n}} d_n^{2s-\theta} (x_n) \le o(1). $$ Here we have denoted $$ \mbox{$d_n(y):={\rm dist}(y,\partial \Omega^n) = \mu_n^{-1} d(\xi_n+\mu_n y)$.} $$ For $L>0$, we obtain as in Lemma \ref{lema-barrera-1}, letting $d_n=d_n(x_n)$ $$ \begin{array}{l}
\displaystyle \int_{|z| \ge L} \frac{2 - \left(\frac{d_n(x_n+d_n z)}{d_n}\right)^{2s-\theta}- \left(\frac{d_n (x_n-d_n z)}{d_n}\right)^{2s-\theta}}
{|z|^{N+2s}} K(\mu_n d_n z) dz\\[1.4pc]
\quad \ge \displaystyle - 2 \Lambda \int_{ |z| \ge L} \frac{(1+|z|)^{2s-\theta}}{|z|^{N+2s}} dz. \end{array} $$ Moreover, we also have an equation like \eqref{taylor}. In fact taking into account that
$\| D^2 d_n \| = \mu_n \| D^2 d\|$ is bounded we have for $|z|\le \eta <1$: $$
d_n (x_n \pm d_n z ) \le d_n \pm d_n \nabla d_n (x_n) z + C d_n^2 |z|^2. $$ with a constant $C>0$ independent of $n$. Hence $$ \begin{array}{l}
\displaystyle \int_{ |z| \le \eta }
\frac{2 - \left(\frac{d_n(x_n+d_n z)}{d_n}\right)^{2s-\theta}- \left(\frac{d_n(x_n-d_n z)}{d_n}\right)^{2s-\theta}}{|z|^{N+2s}} K(\mu_n d_n z) dz\\[1.4pc]
\quad \ge \displaystyle - 2 \Lambda C \int_{ |z| \le \eta} \frac{1}{|z|^{N-2(1-s)}} dz. \end{array} $$ Now observe that $d_n(x_n)\to 0$ implies in particular $d(\xi_n+\mu_n x_n) \to 0$, so that
$|\nabla d(\xi_n+\mu_n x_n)|=1$ for large $n$ and then $|\nabla d_n (x_n)|=1$. As in \eqref{extra1}, passing to a subsequence we may assume that $\nabla d_n(x_n)\to e_N$. Then $$ \frac{d_n(x_n \pm d_n z)}{d_n} \to (1\pm z_N)_+ \qquad \hbox{as } n \to +\infty, $$
for $\eta \le |z| \le L$ and the proof of the first part concludes as in Lemma \ref{lema-barrera-1}.
Now let $u$ be a viscosity solution of $$\left\{ \begin{array}{ll} (-\Delta)_{K_\mu}^s u \le C_1 d_\mu^{-\theta} & \hbox{in } \Omega^\mu,\\[0.35pc] \ \ u=0 & \hbox{in }\mathbb R^N\setminus \Omega^\mu. \end{array} \right. $$ Choose $R>0$ and let $v=R d_\mu^{2s-\theta}$. Then clearly $$ \mbox{$(-\Delta)^s_{K_\mu} v \ge RC_0 d_\mu ^{-\theta}\ge C_1 d_\mu^{-\theta} \ge (-\Delta)^s_{K_\mu} u$ in $(\Omega^\mu)_\delta$,} $$ if we choose $R>C_1 C_0^{-1}$. Moreover, $u=v=0$ in $\mathbb R^N\setminus \Omega^\mu$ and
$v \ge R \delta^{2s-\theta} \ge u$ in $\Omega^\mu \setminus (\Omega^\mu)_\delta$ if $R$ is chosen so that $R \delta^{2s-\theta} \ge \|u\|_{L^\infty(\Omega^\mu)}$. Thus by comparison $u\le v$ in $(\Omega^\mu)_\delta$, which gives the desired result, with, for instance $C_2=\delta^{\theta-2s} +C_0^{-1}$. This concludes the proof. \end{proof}
We close this section with a statement of the strong comparison principle for the operator $(-\Delta)^s_K$, which will be frequently used throughout the rest of the paper. We include a proof for completeness (cf. Lemma 12 in \cite{LL} for a similar proof).
\begin{lema}\label{PFM} Let $K$ be a measurable function verifying \eqref{elipticidad} and assume $u\in C(\mathbb R^N)$, $u\ge 0$ in $\mathbb R^N$ verifies $(-\Delta)^s_K u \ge 0$ in the viscosity sense in $\Omega$. Then $u>0$ or $u\equiv 0$ in $\Omega$. \end{lema}
\begin{proof} Assume $u(x_0)=0$ for some $x_0\in \Omega$ but $u\not\equiv 0$ in $\Omega$. Choose a nonnegative test function $\phi \in C^2(\mathbb R^N)$ such that $u\ge \phi$ in a neighborhood $U$ of $x_0$ with $\phi(x_0)=0$ and let $$ \psi=\left\{ \begin{array}{ll} \phi & \hbox{in } U\\ u & \hbox{in } \mathbb R^N\setminus U. \end{array} \right. $$ Observe that $\psi$ can be taken to be nontrivial since $u$ is not identically zero, by diminishing $U$ if necessary. Since $(-\Delta)^s_K u\ge 0$ in $\Omega$ in the viscosity sense, it follows that $(-\Delta)^s_K \psi (x_0)\ge 0$. Taking into account that for a nonconstant $\psi$ we should have $(-\Delta)^s_K \psi < 0$ at a global minimum, we deduce that $\psi$ is a constant function. Moreover, since $\psi(x_0)=\phi(x_0)=0$ then
$\psi\equiv 0$ in $\mathbb R^N$, which is a contradiction. Therefore if $u(x_0)=0$ for some $x_0\in \Omega$ we must have $u\equiv 0$ in $\Omega$, as was to be shown. \end{proof}
\section{A priori bounds} \setcounter{section}{3} \setcounter{equation}{0}
In this section we will be concerned with our most important step: the obtention of a priori bounds for positive solutions for both problems \eqref{problema} and \eqref{problema-grad}. We begin with problem \eqref{problema}, with the essential assumption of subcriticality of $p$, that is equation \eqref{subcritico} and assuming that $g$ verifies the growth restriction \begin{equation}\label{crec-g-2}
|g(x,z)| \le C (1 + |z|^r), \quad x\in\Omega,\ z\in \mathbb R, \end{equation} where $C>0$ and $0<r<p$.
\begin{teorema}\label{cotas} Assume $\Omega$ is a $C^2$ bounded domain and $K$ a measurable function verifying \eqref{elipticidad} and \eqref{continuidad}. Suppose $p$ is such that \eqref{subcritico} holds and $g$ verifies \eqref{crec-g-2}. Then there exists a constant $C>0$ such that for every positive viscosity solution $u$ of \eqref{problema} we have $$
\| u\|_{L^\infty (\Omega)} \le C. $$ \end{teorema}
\begin{proof} Assume on the contrary that there exists a sequence of positive solutions $\{u_k\}$ of \eqref{problema}
such that $M_k=\| u_k \|_{L^\infty(\Omega)} \to +\infty$. Let $x_k\in \Omega$ be points with $u_k(x_k) =M_k $ and introduce the functions $$ v_k(y)= \frac{u_k(x_k+\mu_k y)}{M_k}, \quad y\in \Omega^k, $$ where $\mu_k=M_k^{-\frac{p-1}{2s}}\to 0$ and $$ \mbox{$\Omega^k:=\{y\in \mathbb R^N:\ x_k+\mu_k y\in \Omega\}$.} $$ Then $v_k$ is a function verifying $0< v_k \le 1$, $v_k(0)=1$ and \begin{equation}\label{rescale-1} (-\Delta)^s_{K_k} v_k = v_k^p + h_k \quad \hbox{in } \Omega^k \end{equation}
where $K_k(y)=K(\mu_k y)$ and $h_k \in C(\Omega^k)$ verifies $|h_k|\le C M_k^{r-p}$.
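Let us briefly indicate the computation behind \eqref{rescale-1} and the bound on $h_k$ (a routine verification; we use the scaling identity $(-\Delta)^s_{K_k}\big[u_k(x_k+\mu_k\,\cdot\,)\big](y)=\mu_k^{2s}\,\big[(-\Delta)^s_K u_k\big](x_k+\mu_k y)$, obtained exactly as in the change of variables of the proof of Lemma \ref{lema-barrera-1}, and we recall that the equation in \eqref{problema} reads $(-\Delta)^s_K u=u^p+g(x,u)$). Since $u_k(x_k+\mu_k y)=M_k v_k(y)$,
$$
(-\Delta)^s_{K_k} v_k(y)=\frac{\mu_k^{2s}}{M_k}\big(u_k^p+g(\cdot,u_k)\big)(x_k+\mu_k y)
=\mu_k^{2s}M_k^{p-1}\, v_k(y)^p+\frac{\mu_k^{2s}}{M_k}\, g\big(x_k+\mu_k y,M_k v_k(y)\big),
$$
and the choice $\mu_k=M_k^{-\frac{p-1}{2s}}$ makes the first coefficient equal to $1$. Moreover, by \eqref{crec-g-2} and $0<v_k\le 1$, the last term, which is $h_k$, verifies $|h_k|\le C M_k^{-p}(1+M_k^{r})\le C M_k^{r-p}$ for large $k$.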
By passing to subsequences, two situations may arise: either $d(x_k) \mu_k^{-1} \to +\infty$ or $d(x_k) \mu_k^{-1} \to d \ge 0$.
Assume the first case holds, so that $\Omega^k \to \mathbb R^N$ as $k\to +\infty$. Since the right hand side in \eqref{rescale-1} is uniformly bounded and $v_k\le 1$, we may use estimates \eqref{est-ca} with an application of Ascoli-Arzel\'a's theorem and a diagonal argument to obtain that $v_k \to v$ locally uniformly in $\mathbb R^N$. Passing to the limit in \eqref{rescale-1} and using that $K$ is continuous at zero with $K(0)=1$, we see that $v$ solves $(-\Delta)^s v= v^p$ in $\mathbb R^N$ in the viscosity sense (use for instance Lemma 5 in \cite{CS2}).
By standard regularity (cf. for instance Proposition 2.8 in \cite{S}) we obtain $v\in C^{2s+\alpha}(\mathbb R^N)$ for some $\alpha \in (0,1)$. Moreover, since $v(0)=1$, the strong maximum principle implies $v>0$. Then by bootstrapping using again Proposition 2.8 in \cite{S} we would actually have $v\in C^\infty(\mathbb R^N)$. In particular we deduce that $v$ is a strong solution of $(-\Delta)^s v=v^p$ in $\mathbb R^N$ in the sense of \cite{ZCCY}. However, since $p<\frac{N+2s}{N-2s}$, this contradicts for instance Theorem 4 in \cite{ZCCY} (see also \cite{CLO1}).
If the second case holds then we may assume $x_k\to x_0\in \partial\Omega$. With no loss of generality assume also $\nu (x_0)=-e_N$. In this case, rather than working with the functions $v_k$, it is more convenient to deal with $$ w_k(y)= \frac{u_k(\xi_k+\mu_k y)}{M_k}, \quad y\in D^k, $$ where $\xi_k\in \partial\Omega$ is the projection of $x_k$ on $\partial\Omega$ and \begin{equation}\label{Dk} \mbox{$D^k:=\{y\in \mathbb R^N:\ \xi_k+\mu_k y \in \Omega\}$.} \end{equation} Observe that \begin{equation}\label{cero} 0\in \partial D^k, \end{equation} and $$\mbox{$D^k \to \mathbb R^N_+=\{y\in \mathbb R^N:\ y_N>0\}$ as $k\to +\infty$.} $$ It also follows that $w_k$ verifies \eqref{rescale-1} in $D^{k}$ with a slightly different function $h_k$, but with the same bounds.
Moreover, setting $$ y_k:=\frac{x_k-\xi_k}{\mu_k}, $$
so that $|y_k|= d(x_k)\mu_k^{-1}$, we see that
$w_k(y_k)=1$. We claim that $d=\lim_{k\to +\infty} d(x_k) \mu_k^{-1}>0$. This in particular guarantees that by passing to a further subsequence $y_k\to y_0$, where $|y_0|=d>0$, thus $y_0$ is in the interior of the half-space $\mathbb R^N_+$.
Let us show the claim. Observe that by \eqref{rescale-1}, and since $r<p$, we have $$ (-\Delta)^{s}_{K_k}w_k\leq C\leq C_1 d_k^{-\theta} \quad \hbox{in } D^k $$ for every $\theta \in (s,2s)$, where $d_k(y)={\rm dist}(y,\partial D^k)$.
By Lemma \ref{lema-barrera-2}, fixing any such $\theta$, there exist constants $C_0>0$ and $\delta>0$ such that $w_k(y) \le C_0 d_k(y)^{2s-\theta}$ if $d_k(y) < \delta$. In particular, since by \eqref{cero} $|y_k|\ge d_k(y_k)$, if $d_k(y_k) <\delta$, then $1\le C_0 d_k(y_k)^{2s-\theta} \le C_0
|y_k|^{2s-\theta}$, which implies $|y_k|$ is bounded from below so that $d>0$.
Now we can employ \eqref{est-ca} as above to obtain that $w_k\to w$ uniformly on compact sets of $\mathbb R^N_+$, where $w$ verifies $0\le w \le 1$ in $\mathbb R^N_+$, $w(y_0)=1$ and $w(y) \le C y_N^{2s-\theta}$ for $y_N <\delta$. Therefore $w\in C(\mathbb R^N)$ is a nonnegative, bounded solution of $$ \left\{ \begin{array}{ll} (-\Delta)^s w = w^p & \hbox{in } \mathbb R^N_+,\\[0.25pc] w=0 & \hbox{in } \mathbb R^N \setminus \mathbb R^N_+. \end{array} \right. $$ Again by bootstrapping and the strong maximum principle we have $w\in C^\infty (\mathbb R^N_+)$, $w>0$. Since $p<\frac{N+2s}{N-2s}<\frac{N-1+2s}{N-1-2s}$, this is a contradiction with Theorem 1.1 in \cite{QX} (cf. also Theorem 1.2 in \cite{FW}). This contradiction proves the theorem. \end{proof}
We now turn to analyze the a priori bounds for solutions of problem \eqref{problema-grad}. We have already remarked that due to the expected singularity of the gradient of the solutions near the boundary we need to work in spaces with weights which take care of the singularity. Thus we fix $\sigma \in (0,1)$ verifying \begin{equation}\label{cond-sigma} 0<\sigma< 1-\frac{s}{t}<1 \end{equation} and let \begin{equation}\label{E}
E_\sigma=\{u\in C^1(\Omega): \ \| u\|_1^{(-\sigma)}<+\infty\}, \end{equation}
where $\| \cdot \|_1^{(-\sigma)}$ is given by \eqref{norma-c1} with $\theta=-\sigma$. As for the function $h$, we assume that it has a prescribed growth at infinity: there exists $C^{0}>0$ such that for every $x\in\Omega$, $z\in\mathbb R$ and $\xi\in\mathbb R^N$, \begin{equation}\label{crec-h-2}
|h(x,z,\xi)| \le C^0 (1 + |z|^r + |\xi|^t), \end{equation} where $0<r<p$ and $1<t<\frac{2sp}{p+2s-1}<2s$ (observe that there is no loss of generality in assuming $t>1$). We recall that in the present situation we require the stronger restriction \eqref{subserrin} on the exponent $p$.
Then we can prove:
\begin{teorema}\label{cotas-grad} Assume $\Omega$ is a $C^2$ bounded domain and $K$ a measurable function verifying \eqref{elipticidad} and \eqref{continuidad}. Suppose that $s>\frac{1}{2}$, $p$ verifies \eqref{subserrin} and $h$ is nonnegative and such that \eqref{crec-h-2} holds. Then there exists a constant $C>0$ such that for every positive solution $u$ of \eqref{problema-grad} in $E_\sigma$ with $\sigma$ satisfying \eqref{cond-sigma} we have $$
\| u\|_1^{(-\sigma)} \le C. $$ \end{teorema}
We prove the a priori bounds in two steps. In the first one we obtain rough bounds for all solutions of the equation which are universal, in the spirit of \cite{PQS}. It is here where the restriction \eqref{subserrin} comes in.
\begin{lema}\label{cotas-pqs} Assume $\Omega$ is a $C^2$ (not necessarily bounded) domain and $K$ a measurable function verifying \eqref{elipticidad} and \eqref{continuidad}. Suppose that $s>\frac{1}{2}$ and $p$ verifies \eqref{subserrin}. Then there exists a positive constant $C=C(N,s,p,r,t,C^0,\Omega)$ (where $r$, $t$ and $C^0$ are given in \eqref{crec-h-2}) such that for every positive function $u\in C^1(\Omega)\cap L^\infty(\mathbb R^N)$ verifying $(-\Delta)^s_K u= u^p +h(x,u,\nabla u)$ in the viscosity sense in $\Omega$, we have $$
u(x) \le C (1+{\rm dist}(x,\partial \Omega)^{-\frac{2s}{p-1}}) ,\quad |\nabla u(x)| \le C (1+{\rm dist}(x,\partial\Omega) ^{-\frac{2s}{p-1}-1}) $$ for $x\in \Omega$. \end{lema}
\begin{proof} Assume on the contrary that there exist sequences of positive functions $u_k\in C^1(\Omega)\cap L^\infty (\mathbb R^N)$ verifying $(-\Delta)^s_K u_k= u_k^p +h(x,u_k,\nabla u_k)$ in $\Omega$ and points $y_k\in \Omega$ such that \begin{equation}\label{hipo}
u_k(y_k)^\frac{p-1}{2s} + |\nabla u_k(y_k)|^\frac{p-1}{p+2s-1} > 2k\: (1+{\rm dist}(y_k,\partial \Omega)^{-1}). \end{equation}
Denote $N_k(x)=u_k(x)^\frac{p-1}{2s} + |\nabla u_k(x)|^\frac{p-1}{p+2s-1}$, $x\in \Omega$. By Lemma 5.1 in \cite{PQS} (cf. also Remark 5.2 (b) there) there exists a sequence of points $x_k\in \Omega$ with the property that $N_k(x_k) \ge N_k(y_k)$, $N_k(x_k)>2k\: {\rm dist}(x_k,\partial \Omega)^{-1}$ and \begin{equation}\label{conjuntos} \mbox{$N_k(z) \le 2 N_k(x_k)$ in $B(x_k, kN_k(x_k)^{-1})$.} \end{equation} Observe that, in particular, \eqref{hipo} implies that $N_k(x_k)\to +\infty$. Let $\nu_k := N_k(x_k)^{-1}\to 0$ and define \begin{equation}\label{v_k}
v_k(y) := \nu_k^\frac{2s}{p-1} u_k (x_k+\nu _k y), \quad y \in B_k:=\{y\in \mathbb R^N:\ |y|<k\}. \end{equation} Then the functions $v_k$ verify $(-\Delta)^s_{K_k} v_k= v_k^p + h_k$ in $B_k$, where $K_k(y) = K (\nu_k y)$ and $$ h_k (y)=\nu_k ^{\frac{2sp}{p-1}} h(x_k+\nu_k y ,\nu_k^{-\frac{2s}{p-1}} v_k(y),\nu_k^{-\frac{2s+p-1}{p-1}} \nabla v_k(y)). $$
Since $h$ verifies \eqref{crec-h-2}, we have $| h_k| \le C_0 \nu_k^{\gamma}
(1+v_k^r+|\nabla v_k|^t)$ in $B_k$, where $$ \gamma= \min\left\{ \frac{2s(p-r)}{p-1}, \frac{2sp-(2s+p-1)t}{p-1}\right\} >0. $$ Moreover by \eqref{conjuntos} it follows that \begin{equation}\label{eq1}
v_k(y)^\frac{p-1}{2s} + |\nabla v_k(y)|^\frac{p-1}{p+2s-1}\le 2, \quad y\in B_k . \end{equation} Also it is clear that \begin{equation}\label{eq2}
v_k(0)^\frac{p-1}{2s} + |\nabla v_k(0)|^\frac{p-1}{p+2s-1} =1. \end{equation}
Since $\nu_k\to 0$ and $v_k$ and $|\nabla v_k|$ are uniformly bounded in $B_k$, we see that $ h_k$ is also uniformly bounded in $B_k$. We may then use estimate \eqref{est-c1a} to obtain, again with the use of Ascoli-Arzel\'a's theorem and a diagonal argument, that there exists a subsequence, still labeled $v_k$ such that
$v_k\to v$ in $C^1_{\rm loc}(\mathbb R^N)$ as $k\to +\infty$. Since $v(0)^\frac{p-1}{2s} + |\nabla v(0)|^\frac{p-1}{p+2s-1} =1$, we see that $v$ is nontrivial.
Now let $w_k$ be the functions obtained by extending $v_k$ to be zero outside $B_k$. Then it is easily seen that $(-\Delta)^s_{K_k} w_k\ge w_k^p$ in $B_k$. Passing to the limit using again Lemma 5 of \cite{CS2}, we arrive at $(-\Delta)^s v \ge v^p$ in $\mathbb R^N$, which contradicts Theorem 1.3 in \cite{FQ2} since $p<\frac{N}{N-2s}$. This concludes the proof. \end{proof}
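For the reader's convenience, here is the elementary bookkeeping behind the bound on $h_k$ in the preceding proof (a routine verification, using only \eqref{crec-h-2} and the fact that $\nu_k\le 1$ for large $k$; $C^0$ is the constant of \eqref{crec-h-2}):
$$
|h_k(y)|\le C^0\,\nu_k^{\frac{2sp}{p-1}}\Big(1+\nu_k^{-\frac{2sr}{p-1}}\, v_k(y)^r+\nu_k^{-\frac{(2s+p-1)t}{p-1}}\,|\nabla v_k(y)|^t\Big)
\le C^0\,\nu_k^{\gamma}\big(1+v_k^r+|\nabla v_k|^t\big),
$$
since each of the exponents $\frac{2sp}{p-1}$, $\frac{2s(p-r)}{p-1}$ and $\frac{2sp-(2s+p-1)t}{p-1}$ is at least $\gamma$, and all of them are positive thanks to $r<p$ and $t<\frac{2sp}{2s+p-1}$.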
\begin{obss}\label{comentario} {\rm \
\noindent (a) With a minor modification in the above proof, it can be seen that the constants given by Lemma \ref{cotas-pqs} can be taken independent of the domain $\Omega$ (cf. the proof of Theorem 2.3 in \cite{PQS}).
\noindent (b) We expect Lemma \ref{cotas-pqs} to hold in the full range given by \eqref{subcritico}. Unfortunately, this method of proof seems purely local and needs to be properly adapted to deal with nonlocal equations. Observe that there is no information available for the functions $v_k$ defined in \eqref{v_k} in $\Omega\setminus B_k$, which makes it difficult to pass to the limit appropriately in the equation satisfied by $v_k$. }\end{obss}
We now come to the proof of the a priori bounds for positive solutions of \eqref{problema-grad}.
\begin{proof}[Proof of Theorem \ref{cotas-grad}]
Assume that the conclusion of the theorem is not true. Then there exists a sequence of positive solutions $u_k\in E_\sigma$ of \eqref{problema-grad} such that $\| u_k \|_1^{(-\sigma)}\to +\infty$, where $\sigma$ satisfies \eqref{cond-sigma}. Define $$
M_k(x)= d(x)^{-\sigma} u_k(x) + d(x)^{1-\sigma} |\nabla u_k(x)|. $$ Now choose points $x_k\in \Omega$ such that $M_k(x_k) \ge \sup_\Omega M_k -\frac{1}{k}$ (this supremum may not be achieved). Observe that our assumption implies $M_k(x_k)\to +\infty$.
Let $\xi_k$ be a projection of $x_k$ on $\partial \Omega$ and introduce the functions: $$ v_k(y) = \frac{u_k(\xi_k + \mu_k y)}{\mu_k^\sigma M_k(x_k)}, \quad y\in D^k, $$ where $\mu_k=M_k(x_k)^{-\frac{p-1}{2s+\sigma(p-1)}}\to 0$ and $D^{k}$ is the set defined in \eqref{Dk}. It is not hard to see that \begin{equation}\label{eq-rescalada} \left\{ \begin{array}{ll} (-\Delta)_{K_k}^s v_k = v_k^p + h_k & \hbox{in } D^k,\\[0.35pc] \ \ v_k=0 & \hbox{in }\mathbb R^N \setminus D^k, \end{array} \right. \end{equation} where $K_k(y) = K (\mu_k y)$ and $$ h_k (y)\hspace{-1mm}=\hspace{-1mm}M_k(x_k)^{-\frac{2sp}{2s+\sigma (p-1)}} h(\xi_k+\mu_k y ,M_k(x_k)^\frac{2s}{2s+\sigma(p-1)} v_k, M_k(x_k)^\frac{2s+p-1}{2s+\sigma(p-1)} \nabla v_k). $$ By assumption \eqref{crec-h-2} on $h$, it is readily seen that $h_k$ verifies the inequality
$|h_k|\le C M_k(x_k) ^{-\bar \gamma} (1+ v_k^r+ |\nabla v_k|^t)$ for some positive constant $C$ independent of $k$, where $$ \bar \gamma=\frac{2sp}{2s+\sigma(p-1)} -\frac{\max\{2sr, (2s+p-1)t\}}{2s+\sigma(p-1)}>0. $$ Moreover, the functions $v_k$ verify $$
\mu_k^\sigma d(\xi_k+\mu_k y)^{-\sigma} v_k(y)+ \mu_k^{\sigma-1} d(\xi_k+\mu_k y)^{1-\sigma} |\nabla v_k(y)| =\frac{M_k(\xi_k+\mu_k y) }{M_k(x_k)}. $$ Then, using that $\mu_k ^{-1} d(\xi_k+\mu_k y)={\rm dist}(y,\partial D^k)=:d_k(y)$ and the choice of the points $x_k$, we obtain for large $k$ \begin{equation}\label{eq-normal-1}
d_k(y)^{-\sigma} v_k(y)+d_k(y)^{1-\sigma} |\nabla v_k(y)| \le 2 \quad \mbox{ in } D^k \end{equation} and \begin{equation}\label{eq-normal-2}
d_k(y_k)^{-\sigma} v_k(y_k)+d_k(y_k)^{1-\sigma} |\nabla v_k(y_k)| =1, \end{equation} where, as in the proof of Theorem \ref{cotas}, $y_k :=\mu_k^{-1}(x_k-\xi_k)$.
Next, since $u_k$ solves \eqref{problema-grad}, we may use Lemma \ref{cotas-pqs} to obtain that $M_k(x_k) \le C d(x_k)^{-\sigma}(1+d(x_k)^{-\frac{2s}{p-1}})$ for some positive constant independent of $k$, which implies $d(x_k) \mu_k^{-1}\le C$. This bound immediately entails that (passing to subsequences) $x_k\to x_0\in \partial\Omega$ and
$|y_k|=d(x_k)\mu_k^{-1}\to d\ge 0$ (in particular the points $\xi_k$ are uniquely determined at least for large $k$). Assuming that the outward unit normal to $\partial \Omega$ at $x_0$ is $-e_N$, we also obtain then that $D^k \to \mathbb R^N_+$ as $k \to +\infty$.
We claim that $d>0$. To show this, notice that from \eqref{eq-rescalada} and \eqref{eq-normal-1} we have $(-\Delta)^s_{K_k} v_k \le C d_k^{(\sigma-1)t}$ in $D^k$, for some constant $C$ not depending on $k$. By our choice of $\sigma$ and $t$, we get that \begin{equation}\label{cond-sigma2} \sigma>\frac{t-2s}{t}. \end{equation} That is, we have \begin{equation}\label{sigma3} s<(1-\sigma)t<2s, \end{equation} so that Lemma \ref{lema-barrera-2} can be applied to give $\delta>0$ and a positive constant $C$ such that \begin{equation}\label{these1} v_k(y) \le C d_k(y)^{2s+(\sigma-1) t}, \quad \hbox{when } d_k(y) <\delta. \end{equation} Moreover, since $1<t<2s$, \eqref{cond-sigma2} in particular implies that \begin{equation}\label{sigma2} \sigma>\frac{t-2s}{t-1}, \end{equation} and, therefore, $-\sigma+2s+(\sigma-1)t =\sigma(t-1)+2s-t >0.$ Thus, by \eqref{eq-normal-1} we have $$\mbox{$v_k (y) \le 2 d_k(y)^{\sigma}\le 2 \delta^{\sigma-2s-(\sigma-1)t} d_k(y)^{2s+(\sigma-1)t}$ when $d_k(y)\ge \delta$.}$$
Hence $\| v_k\|_0^{(-2s-(\sigma-1)t)}$ is bounded. We can then use Lemma \ref{lema-regularidad}, with $\theta=(1-\sigma)t$, to obtain that \begin{equation}\label{these2}
|\nabla v_k(y)| \le C d_k(y)^{2s+(\sigma-1) t-1} \quad \hbox{in } D^k, \end{equation} where $C$ is also independent of $k$. Plugging inequalities \eqref{these1} and \eqref{these2} into \eqref{eq-normal-2}, we deduce $$ 1 \le C d_k(y_k) ^{-\sigma+2s+(\sigma-1)t}, $$ thus, by \eqref{sigma2}, we see that $d_k (y_k)$ is bounded away from zero. Hence, by \eqref{cero},
$|y_k|$ also is, so that $d>0$, as claimed.
Finally, we can use \eqref{est-c1a} together with Ascoli-Arzel\'a's theorem and a diagonal argument to obtain that $v_k \to v$ in $C^1_{\rm loc}(\mathbb R^N_+)$, where by \eqref{eq-normal-2}, the function $v$ verifies $d^{-\sigma} v(y_0)+ d^{1-\sigma} |\nabla v(y_0)|=1$ for some $y_0\in \mathbb R^N_+$, hence it is nontrivial and $v(y) \le C y_N ^{2s+(\sigma-1)t}$ if $0<y_N<\delta$. Thus $v\in C(\mathbb R^N)$ and $v=0$ outside $\mathbb R^N_+$. Passing to the limit in \eqref{eq-rescalada} with the aid of Lemma 5 in \cite{CS2} and using that $K$ is continuous at zero with $K(0)=1$, we obtain $$ \left\{ \begin{array}{ll} (-\Delta)^s v = v^p & \hbox{in } \mathbb R^N_+,\\[0.25pc] v=0 & \hbox{in } \mathbb R^N \setminus \mathbb R^N_+. \end{array} \right. $$ Using again bootstrapping and the strong maximum principle we have $v>0$ and $v\in C^\infty(\mathbb R^N_+)$, therefore it is a classical solution. Moreover, by Lemma \ref{cotas-pqs}, we also see that $v(y)\le C y_N^{-\frac{2s}{p-1}}$ in $\mathbb R^N_+$, so that $v$ is bounded. This is a contradiction with Theorem 1.2 in \cite{FW} (see also \cite{QX}), because we are assuming $p<\frac{N}{N-2s} <\frac{N-1+2s}{N-1-2s}$. The proof is therefore concluded. \end{proof}
\section{Existence of solutions} \setcounter{section}{4} \setcounter{equation}{0}
This final section is devoted to the proof of our existence results, Theorems \ref{th-1} and \ref{th-grad}. Both proofs are very similar, the only difference being that the proof of Theorem \ref{th-grad} is slightly more involved. Therefore we only give the latter.
Thus we assume $s>\frac{1}{2}$. Fix $\sigma$ verifying \eqref{cond-sigma} and
consider the Banach space $E_\sigma$, defined in \eqref{E}, which is an ordered Banach space with the cone of nonnegative functions $P=\{u\in E_\sigma:\ u\ge 0 \hbox{ in }\Omega\}$. For the sake of brevity, we will drop the subindex $\sigma$ throughout the rest of the section and will denote $E$ and $\| \cdot \|$ for the space and its norm.
We will assume that $h$ is nonnegative and verifies the growth condition in the statement of Theorem \ref{th-grad}: \begin{equation}\label{hipo-h-2}
h(x,z,\xi) \le C (|z|^r +|\xi|^t), \quad x\in \Omega,\ z\in \mathbb R, \ \xi \in \mathbb R^N, \end{equation} where $1<r<p$ and $1<t<\frac{2sp}{2s+p-1}$. Observe that for every $v\in P$ we have \begin{equation}\label{h}
h(x,v(x),\nabla v(x)) \le C (\|v\|) d(x)^{(\sigma-1)t}. \end{equation} Moreover, by \eqref{sigma3} we may apply Lemma \ref{lema-existencia} to deduce that the problem $$ \left\{ \begin{array}{ll} (-\Delta)_K^s u = v^p + h(x,v,\nabla v) & \hbox{in }\Omega,\\[0.35pc] \ \ u=0 & \hbox{in }\mathbb R^N \setminus \Omega, \end{array} \right. $$
admits a unique nonnegative solution $u$, with $\|u \|_0^{(-\sigma)}<+\infty$. By Lemma \ref{lema-regularidad}
we also deduce $\| \nabla u \|_0^{(1-\sigma)}<+\infty$. Hence $u\in E$. In this way, we can define an operator $T: P \to P$ by means of $u=T(v)$. It is clear that nonnegative solutions of \eqref{problema-grad} in $E$ coincide with the fixed points of this operator.
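Let us spell out the elementary estimate behind \eqref{h} (a routine verification, using only \eqref{hipo-h-2}, the definition of $\|\cdot\|_1^{(-\sigma)}$ and the boundedness of $\Omega$): for $v\in P$ we have $|v(x)|\le \| v\|\, d(x)^{\sigma}$ and $|\nabla v(x)|\le \| v\|\, d(x)^{\sigma-1}$, hence
$$
h(x,v(x),\nabla v(x))\le C\big(\| v\|^r d(x)^{\sigma r}+\| v\|^t d(x)^{(\sigma-1)t}\big)
\le C\Big(\| v\|^r \big(\sup_\Omega d\big)^{\sigma r+(1-\sigma)t}+\| v\|^t\Big)\, d(x)^{(\sigma-1)t},
$$
which is \eqref{h}. In particular, since $(1-\sigma)t\in (s,2s)$ by \eqref{sigma3}, the right-hand side $v^p+h(x,v,\nabla v)$ has finite $\|\cdot\|_0^{((1-\sigma)t)}$ norm, so Lemma \ref{lema-existencia} indeed applies.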
We begin by showing a fundamental property of $T$.
\begin{lema}\label{lema-compacidad} The operator $T: P \to P$ is compact. \end{lema}
\begin{proof} We show continuity first: let $\{u_n\} \subset P$ be such that $u_n\to u$ in $E$. In particular, $u_n\to u$ and $\nabla u_n\to \nabla u$ uniformly on compact sets of $\Omega$, so that the continuity of $h$ implies \begin{equation}\label{conv-unif} h(\cdot,u_n,\nabla u_n) \to h(\cdot,u,\nabla u) \hbox{ uniformly on compact sets of }\Omega. \end{equation} Moreover, since $\{u_n\}$ is bounded in $E$, similarly as in \eqref{h} we also have that $h(\cdot,u_n,\nabla u_n) \le C d^{(\sigma-1)t}$ in $\Omega$, for a constant that does not depend on $n$ (and the same is true for $u$ after passing to the limit). This implies \begin{equation}\label{claim!}
\sup_\Omega d^\theta |h(\cdot,u_n,\nabla u_n)- h(\cdot,u,\nabla u)| \to 0, \end{equation} for every $\theta>(1-\sigma)t>s$. Indeed, if we take $\varepsilon>0$ then $$
d^\theta |h(\cdot,u_n,\nabla u_n)- h(\cdot,u,\nabla u)|\leq C d^{\theta-(1-\sigma)t} \le C \delta^{\theta-(1-\sigma)t}\le \varepsilon, $$ if $d\le \delta$, by choosing a small $\delta$. When $d\ge \delta$, $$
d^\theta |h(\cdot,u_n,\nabla u_n)- h(\cdot,u,\nabla u)|\leq (\sup_\Omega d)^\theta
|h(\cdot,u_n,\nabla u_n)- h(\cdot,u,\nabla u)| \le \varepsilon, $$ just by choosing $n\ge n_0$, by \eqref{conv-unif}. This shows \eqref{claim!}.
From Lemmas \ref{lema-existencia} and \ref{lema-regularidad} for every $(1-\sigma)t<\theta<2s$, we obtain $$
\sup_\Omega d^{\theta-2s} |T(u_n)-T(u)| + d^{\theta-2s+1} |\nabla (T(u_n)-T(u))| \to 0. $$ The desired conclusion follows by choosing $\theta$ such that $$ (1-\sigma) t < \theta \le 2s-\sigma. $$ This shows continuity.
To prove compactness, let $\{u_n\} \subset P$ be bounded. As we did before, $h(\cdot,u_n,\nabla u_n) \le C d^{(\sigma-1)t}$ in $\Omega$. By \eqref{est-c1a} we obtain that for every $\Omega' \subset \subset \Omega$ the $C^{1,\beta}$ norm of $T(u_n)$ in $\Omega'$ is bounded. Therefore, we may assume by passing to a subsequence that $T(u_n)\to v$ in $C^1_{\rm loc} (\Omega)$.
From Lemmas \ref{lema-existencia} and \ref{lema-regularidad} we deduce that $T(u_n) \le Cd^{(\sigma-1)t+2s}$,
$|\nabla T(u_n)| \le Cd^{(\sigma-1)t+2s-1}$ in $\Omega$, and the same estimates hold for $v$ and $\nabla v$ by passing to the limit. Hence $$
\sup_\Omega d^{-\sigma} |T(u_n)-v| + d^{1-\sigma} |\nabla (T(u_n)-v)| \to 0, $$ which shows compactness. The proof is concluded. \end{proof}
The proof of Theorem \ref{th-grad} relies on the use of topological degree, with the aid of the bounds provided by Theorem \ref{cotas-grad}. The essential tool is the following well-known result (see for instance Theorem 3.6.3 in \cite{Ch}).
\begin{teorema}\label{th-chang} Suppose that $E$ is an ordered Banach space with positive cone $P$, and $U\subset P$ is an open bounded set containing 0. Let $\rho>0$ be such that $B_\rho(0)\cap P\subset U$. Assume $T: U\to P$ is compact and satisfies
\begin{itemize}
\item[(a)] for every $\mu\in [0,1)$, we have $u\ne \mu T(u)$ for every $u \in P$ with $\| u \|=\rho$;
\item[(b)] there exists $\psi \in P\setminus \{0\}$ such that $u-T(u) \ne t \psi$, for every $u\in \partial U$, for every $t\ge 0$.
\end{itemize} Then $T$ has a fixed point in $U \setminus B_\rho(0)$. \end{teorema}
The final ingredient in our proof is some knowledge of the principal eigenvalue of the operator $(-\Delta)^s_K$. The natural definition of such an eigenvalue in our context resembles that of \cite{BNV} for linear second order elliptic operators, that is: \begin{equation}\label{eigenvalue} \lambda_1 :=\sup\left\{ \lambda\in \mathbb R: \begin{array}{cc} \hbox{ there exists } u\in C(\mathbb R^N),\ u>0 \hbox{ in } \Omega, \hbox{ with } \\[0.25pc] u=0 \hbox{ in } \mathbb R^N\setminus \Omega \hbox{ and } (-\Delta)^s_K u \ge \lambda u \hbox{ in } \Omega \end{array} \right\}. \end{equation} To the best of our knowledge, there are no results available for the eigenvalues of $(-\Delta)^s_K$, although it seems likely that the first one will enjoy the usual properties (see \cite{QS}).
For our purposes here, we only need to show the finiteness of $\lambda_1$:
\begin{lema}\label{lema-auto} $\lambda_1<+\infty$. \end{lema}
\begin{proof} We begin by constructing a suitable subsolution. The construction relies on a sort of ``implicit" Hopf's principle (it is to be noted that Hopf's principle is not well understood for general kernels $K$ verifying \eqref{elipticidad}; see for instance Lemma 7.3 in \cite{RO} and the comments after it). However, a relaxed version is enough for our purposes.
Let $B'\subset \subset B\subset \subset \Omega$ and consider the unique solution $\phi$ of $$ \left\{ \begin{array}{ll} (-\Delta)_K^s \phi = 0 & \hbox{in } B\setminus B',\\[0.35pc] \ \ \phi = 1 & \hbox{in } B',\\[0.25pc] \ \ \phi = 0 & \hbox{in }\mathbb R^N \setminus B. \end{array} \right. $$ given for instance by Theorem 3.1 in \cite{FQ}, and the unique viscosity solution $v$ of $$ \left\{ \begin{array}{ll} (-\Delta)_K^s v= \phi & \hbox{in } B,\\[0.35pc] \ \ v = 0 & \hbox{in }\mathbb R^N \setminus B. \end{array} \right. $$ given by the same theorem. By Lemma \ref{PFM} we have both $\phi>0$ and $v>0$ in $B$, so that there exists $C_0>0$ such that $C_0 v \ge \phi$ in $B'$. Hence by comparison $C_0 v \ge \phi$ in $\mathbb R^N$. In particular, \begin{equation}\label{extra4_1} \mbox{$(-\Delta)^s_K v \le C_0 v$ in $B$.} \end{equation} We claim that $\lambda_1\le C_0$. Indeed, if we assume $\lambda_1>C_0$, then there exist $\lambda>C_0$ and a positive function $u\in C(\mathbb R^N)$ vanishing outside $\Omega$ such that \begin{equation}\label{extra4_2} \mbox{$(-\Delta)^s_K u \ge \lambda u$ in $\Omega$.} \end{equation} Since $u>0$ in $\overline{B}$, the number $$ \omega =\sup _B \frac{v}{u} $$ is finite. Moreover, $\omega u \ge v$ in $\mathbb R^N$. Observe that, since we are assuming $\lambda>C_0$, by \eqref{extra4_1} and \eqref{extra4_2} it follows that $$ \left\{ \begin{array}{ll} (-\Delta)^s_K (\omega u-v)\ge 0 & \hbox{in } B,\\[0.35pc] \ \ \omega u-v > 0 & \hbox{in }\mathbb R^N \setminus B. \end{array} \right. $$ Hence the strong maximum principle (Lemma \ref{PFM}) implies $\omega u -v>0$ in $\overline{B}$. However this would imply $(\omega-\varepsilon) u >v$ in $\overline{B}$ for small $\varepsilon$, contradicting the definition of $\omega$. Then $\lambda_1\le C_0$ and the lemma follows. \end{proof}
Now we are in a position to prove Theorem \ref{th-grad}.
\begin{proof}[Proof of Theorem \ref{th-grad}] As already remarked, we will show that Theorem \ref{th-chang} is applicable to the operator $T$ in $P\subset E$.
Let us check first hypothesis (a) in Theorem \ref{th-chang}. Assume we have $u=\mu T(u)$ for some $\mu \in [0,1)$ and $u\in P$. This is equivalent to $$ \left\{ \begin{array}{ll} (-\Delta)_K^s u = \mu (u^p + h(x,u,\nabla u)) & \hbox{in }\Omega,\\[0.35pc] \ \ u=0 & \hbox{in }\mathbb R^N \setminus \Omega. \end{array} \right. $$ By our hypotheses on $h$ we get that the right hand side of the previous equation can be bounded by $$ \begin{array}{rl}
\mu (u^p+h(x,u,\nabla u)) \hspace{-2mm} & \le d^{\sigma p} \| u\| ^p + C_0( d^{\sigma r} \| u\|^r
+ d^{(\sigma-1)t} \| u\| ^t)\\[0.25pc]
& \le C d^{(\sigma-1) t} ( \| u\| ^p + \| u\|^r + \| u\| ^t). \end{array} $$
Therefore, by Lemmas \ref{lema-existencia} and \ref{lema-regularidad} and \eqref{sigma3}, we have $\| u \| \le C
( \| u\| ^p + \| u\|^r + \| u\| ^t)$. Since $p,r,t>1$, this implies that $\| u \| > \rho$ for some small positive $\rho$. Thus there are no solutions of $u=\mu T(u)$ if $\| u \|=\rho$ and $\mu\in [0,1)$, and (a) follows.
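In more detail (an elementary remark added for clarity): writing $q=\min\{p,r,t\}>1$, any such $u$ with $\| u\|=\rho\le 1$ would satisfy
$$
\rho \le C\left( \rho^p+\rho^r+\rho^t\right) \le 3C\rho^{q},\qquad\hbox{hence}\qquad \rho\ge (3C)^{-\frac{1}{q-1}},
$$
so that every $\rho<\min\big\{1,(3C)^{-\frac{1}{q-1}}\big\}$ is admissible in (a).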
To check (b), we take $\psi \in P$ to be the unique solution of the problem: $$ \left\{ \begin{array}{ll} (-\Delta)_K^s \psi = 1 & \hbox{in }\Omega,\\[0.35pc] \ \ \psi = 0 & \hbox{in }\mathbb R^N \setminus \Omega \end{array} \right. $$ given by Theorem 3.1 in \cite{FQ}. We claim that there are no solutions in $P$ of the equation $u-T(u)=t \psi$ if $t$ is large enough. For that purpose we note that this equation is equivalent to \begin{equation}\label{problema-t} \left\{ \begin{array}{ll} (-\Delta)_K^s u = u^p + h(x,u,\nabla u)+t & \hbox{in }\Omega,\\[0.35pc] \ \ u=0 & \hbox{in }\mathbb R^N \setminus \Omega. \end{array} \right. \end{equation} Fix $\mu> \lambda_1$, where $\lambda_1$ is given by \eqref{eigenvalue}. Using the nonnegativity of $h$, and since $p>1$, there exists a positive constant $C$ such that $u^p + h(x,u,\nabla u)+t \ge \mu u - C +t$. If $t\ge C$, then $(-\Delta)^s_K u \ge \mu u$ in $\Omega$, which is against the choice of $\mu$ and the definition of $\lambda_1$. Therefore $t< C$, and \eqref{problema-t} does not admit positive solutions in $E$ if $t$ is large enough.
Finally, since $h+t$ also verifies condition \eqref{crec-h-2} for $t\le C$, we can apply Theorem \ref{cotas-grad} to obtain that the solutions of \eqref{problema-t} are a priori bounded, that is, there exists $M > \rho$ such that
$\| u\| < M$ for every positive solution of \eqref{problema-t} with $t\ge 0$. Thus Theorem \ref{th-chang} is applicable with $U=B_M(0)\cap P$ and the existence of a solution in $P$ follows. This solution is positive by Lemma \ref{PFM}. The proof is concluded. \end{proof}
\noindent {\bf Acknowledgements.} B. B. was partially supported by a postdoctoral fellowship given by Fundaci\'on Ram\'on Areces (Spain) and MTM2013-40846-P, MINECO. L. D. P. was partially supported by PICT2012 0153 from ANPCyT (Argentina). J. G-M and A. Q. were partially supported by Ministerio de Ciencia e Innovaci\'on under grant MTM2011-27998 (Spain) and Conicyt MEC number 80130002. A. Q. was also partially supported by Fondecyt Grant No. 1151180 Programa Basal, CMM. U. de Chile and Millennium Nucleus Center for Analysis of PDE NC130017.
\end{document} |
\begin{document}
\title{Rank 2 vector bundles on ind-grassmannians}
\author[I.Penkov]{\;Ivan~Penkov}
\address{ Jacobs University Bremen\footnote{International University Bremen prior to Spring 2007} \\ School of Engineering and Science, Campus Ring 1, 28759 Bremen, Germany} \email{[email protected]}
\author[Tikhomirov]{\;Alexander~S.~Tikhomirov}
\address{ Department of Mathematics\\ State Pedagogical University\\ Respublikanskaya Str. 108 \newline 150 000 Yaroslavl, Russia} \email{[email protected]}
\begin{flushright} \begin{tabular}{l} To Yuri Ivanovich Manin\\ on the occasion of his 70$^{th}$\\ birthday \end{tabular}
\end{flushright}
\maketitle
\thispagestyle{empty}
\section{Introduction} \label{Introduction}
The simplest example of an ind-Grassmannian is the infinite projective space $\mathbf P^\infty$. The Barth-Van de Ven-Tyurin (BVT) Theorem, proved more than 30 years ago \cite{BV}, \cite{T}, \cite{Sa} (see also a recent proof by A. Coand\u a and G. Trautmann, \cite{CT}), claims that any vector bundle of finite rank on $\mathbf P^\infty$ is isomorphic to a direct sum of line bundles. In the last decade natural examples of infinite flag varieties (or flag ind-varieties) have arisen as homogeneous spaces of locally linear ind-groups, \cite{DPW}, \cite{DiP}. In the present paper we concentrate our attention on the special case of ind-Grassmannians, i.e. on inductive limits of Grassmannians of growing dimension. If $V=\displaystyle\bigcup_{n>k} V^n$ is a countable-dimensional vector space, then the ind-variety $\mathbf G(k;V)=\displaystyle\lim_\to G(k;V^n)$ (or simply $\mathbf G(k;\infty)$) of $k$-dimensional subspaces of $V$ is of course an ind-Grassmannian: this is the simplest example beyond $\mathbf P^\infty=\mathbf G(1;\infty)$. A significant difference between $\mathbf G(k;V)$ and a general ind-Grassmannian $\mathbf X=\displaystyle\lim_\to G(k_i;V^{n_i})$ defined via a sequence of embeddings \begin{equation}\label{eq1} G(k_1;V^{n_1})\stackrel{\varphi_1}{\longrightarrow}G(k_2;V^{n_2}) \stackrel{\varphi_2}{\longrightarrow}\dots\stackrel{\varphi_{m-1}}{\longrightarrow}G(k_m;V^{n_m}) \stackrel{\varphi_m}{\longrightarrow}\dots, \end{equation} is that in general the morphisms $\varphi_m$ can have arbitrary degrees. We say that the ind-Grassmannian $\mathbf X$ is \emph{twisted} if $\deg\varphi_m>1$ for infinitely many $m$, and that $\mathbf X$ is \emph{linear} if $\deg\varphi_m=1$ for almost all $m$. \begin{conjecture}\label{con1} Let the ground field be $\CC$, and let $\mathbf E$ be a vector bundle of rank $r\in\ZZ_{>0}$ on an ind-Grassmannian $\mathbf X=\displaystyle\lim_\to G(k_m;V^{n_m})$, i.e. $\mathbf E=\displaystyle\lim_\gets E_m$, where $\{E_m\}$ is an inverse system of vector bundles of (fixed) rank $r$ on $G(k_m;V^{n_m})$. Then \begin{itemize} \item[(i)] $\mathbf E$ is semisimple: it is isomorphic to a direct sum of simple vector bundles on $\mathbf X$, i.e. vector bundles on $\mathbf X$ with no non-trivial subbundles; \item[(ii)] for $m\gg0$ the restriction of each simple bundle $\mathbf E$ to $G(k_m,V^{n_m})$ is a homogeneous vector bundle; \item[(iii)] each simple bundle $\mathbf E'$ has rank 1 unless $\mathbf X$ is isomorphic to $\mathbf G(k;\infty)$ for some $k$: in the latter case $\mathbf E'$, twisted by a suitable line bundle, is isomorphic to a simple subbundle of the tensor algebra $T^{\cdot}(\mathbf S)$, $\mathbf S$ being the tautological bundle of rank $k$ on $\mathbf G(k;\infty)$; \item[(iv)] each simple bundle $\mathbf E$ (and thus each vector bundle of finite rank on $\mathbf X$) is trivial whenever $\mathbf X$ is a twisted ind-Grassmannian. \end{itemize} \end{conjecture} The BVT Theorem and Sato's theorem about finite rank bundles on $\mathbf G(k;\infty)$, \cite{Sa}, \cite{Sa2}, as well as the results in \cite{DP}, are particular cases of the above conjecture. The purpose of the present note is to prove Conjecture \ref{con1} for vector bundles of rank 2, and also for vector bundles of arbitrary rank $r$ on linear ind-Grassmannians $\mathbf X$.
In the 70's and 80's Yuri Ivanovich Manin taught us mathematics in (and beyond) his seminar, and the theory of vector bundles was a recurring topic (among many others). In 1980, he asked one of us (I.P.) to report on A. Tyurin's paper \cite{T}, and most importantly to try to understand this paper. The present note is a very preliminary progress report.
\textbf{Acknowledgement. }We acknowledge the support and hospitality of the Max Planck Institute for Mathematics in Bonn where the present note was conceived. A. S. T. also acknowledges partial support from Jacobs University Bremen. Finally, we thank the referee for a number of sharp comments.
\section{Notation and Conventions} The ground field is $\CC$. Our notation is mostly standard: if $X$ is an algebraic variety (over $\CC$), $\mathcal{O}_X$ denotes its structure sheaf, and $\Omega^1_X$ (respectively $T_X$) denotes the cotangent (resp. tangent) sheaf on $X$ under the assumption that $X$ is smooth, etc. If $F$ is a sheaf on $X$, its cohomologies are denoted by $H^i(F)$, $h^i(F):=\dim H^i(F)$, and $\chi(F)$ stands for the Euler characteristic of $F$. The Chern classes of $F$ are denoted by $c_i(F)$. If $f:X\to Y$ is a morphism, $f^*$ and $f_*$ denote respectively the inverse and direct image functors of $\mathcal{O}$-modules. All vector bundles are assumed to have finite rank. We denote the dual of a sheaf of $\mathcal O_X$-modules $F$ (or that of a vector space) by the superscript $^\vee$. Furthermore, in what follows, for any ind-Grassmannian $\mathbf X$ defined by \refeq{eq1} we assume that no embedding $\varphi_i$ is an isomorphism.
We fix a finite dimensional space $V$ and denote by $X$ the Grassmannian $G(k;V)$ for $k<\dim V$. In the sequel we write sometimes $G(k;n)$ indicating simply the dimension of $V$. Below we will often consider (parts of) the following diagram of flag varieties: \begin{equation}\label{eqDiag} \xymatrix{ &&Z:=\Fl(k-1,k,k+1;V) \ar[ld]_{\pi_1} \ar[dr]^{\pi_2} & \\ &Y:=\Fl(k-1,k+1;V)\ar[ld]_{p_1}\ar[rd]^{p_2}&&X:=G(k;V), \\ Y^1:=G(k-1;V)&&Y^2:=G(k+1;V)&\\ } \end{equation} under the assumption that $k+1<\dim V$. Moreover we reserve the letters $X,Y,Z$ for the varieties in the above diagram. By $S_k$, $S_{k-1}$, $S_{k+1}$ we denote the tautological bundles on $X$,$Y$ and $Z$, whenever they are defined ($S_k$ is defined on $X$ and $Z$, $S_{k-1}$ is defined on $Y^1$, $Y$ and $Z$, etc.). By $\mathcal O_X(i)$, $i\in \ZZ$, we denote the isomorphism class (in the Picard group $\operatorname{Pic}\nolimits X$) of the line bundle $(\Lambda^k(S_k^\vee))^{\otimes i}$, where $\Lambda^k$ stands for the $k^{th}$ exterior power (in this case maximal exterior power as $\rk S_k^\vee=k$). The Picard group of $Y$ is isomorphic to the direct product of the Picard groups of $Y^1$ and $Y^2$, and by $\mathcal{O}_Y(i,j)$ we denote the isomorphism class of the line bundle $p_1^*(\Lambda^{k-1}(S_{k-1}^\vee))^{\otimes i} \otimes_{\mathcal{O}_Y}p_2^*(\Lambda^{k+1}(S_{k+1}^\vee))^{\otimes j}$.
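Let us also record, for the reader's convenience, the following standard identification (a remark added for orientation; it involves no new data): under the Pl\"ucker embedding $X=G(k;V)\hookrightarrow\mathbb{P}(\Lambda^kV)$ the tautological line bundle of the projective space $\mathbb{P}(\Lambda^kV)$ restricts to $\Lambda^kS_k$, so that
\[
\mathcal{O}_X(1)=\Lambda^k(S_k^\vee)\simeq\mathcal{O}_{\mathbb{P}(\Lambda^kV)}(1)_{|X};
\]
in particular, for $k=1$, i.e. for $X=G(1;V)$, one has $S_1\simeq\mathcal{O}(-1)$ and $\mathcal{O}_X(1)$ is the usual $\mathcal{O}(1)$ on the projective space $G(1;V)$.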
If $\varphi:X=G(k;V)\to X':=G(k;V')$ is an embedding, then $\varphi^*\mathcal{O}_{X'}(1)\simeq \mathcal{O}_X(d)$ for some $d\in\ZZ_{\geq 0}$: by definition $d$ is the \emph{degree} $\deg\varphi$ of $\varphi$. We say that $\varphi$ is linear if $\deg\varphi=1$. By a \textit{projective subspace} (in particular a \emph{line}, i.e. a 1-dimensional projective subspace) of $X$ we mean a linearly embedded projective space into $X$. It is well known that all such are Schubert varieties of the form
$\{V^k\in X| V^{k-1}\subset V^k\subset V^t\}$ or $\{V^k\in X| V^i\subset V^k\subset V^{k+1}\}$, where $V^k$ is a variable $k$-dimensional subspace of $V$, and $V^{k-1}$, $V^{k+1}$, $V^t$, $V^i$ are fixed subspaces of $V$ of respective dimensions $k-1$, $k+1$, $t$, $i$. (Here and in what follows $V^t$ always denotes a vector space of dimension $t$). In other words, all projective subspaces of $X$ are of the form $G(1;V^t/V^{k-1})$ or $G(k-i, V^{k+1}/V^i)$. Note also that $Y=\Fl(k-1,k+1;V)$ is the variety of lines in $X=G(k;V)$.
\section{The linear case} We consider the cases of linear and twisted ind-Grassmannians separately. In the case of a linear ind-Grassmannian, we show that Conjecture \ref{con1} is a straightforward corollary of existing results combined with the following proposition. We recall from \cite{DP} that a \textit{standard extension} of Grassmannians is an embedding of the form \begin{equation}\label{eq31} G(k;V)\to G(k+a;V\oplus \hat W), \quad \{V^k\subset V\}\mapsto\{V^k\oplus W\subset V\oplus\hat W\}, \end{equation} where $W$ is a fixed $a$-dimensional subspace of a finite dimensional vector space $\hat W$.
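Let us record a short check, in the notation of the previous section, of the well-known fact that a standard extension indeed has degree $1$: for $\varphi$ as in \refeq{eq31} the tautological bundle pulls back as $\varphi^*S_{k+a}\simeq S_k\oplus\left(W\otimes_\CC\mathcal{O}_{G(k;V)}\right)$, whence
\[
\varphi^*\mathcal{O}_{G(k+a;V\oplus\hat W)}(1)=\varphi^*\Lambda^{k+a}(S_{k+a}^\vee)\simeq\Lambda^{k}(S_{k}^\vee)\otimes\Lambda^{a}\left(W^\vee\otimes_\CC\mathcal{O}_{G(k;V)}\right)\simeq\mathcal{O}_{G(k;V)}(1),
\]
i.e. $\deg\varphi=1$; compare with \refeq{eqLE7} below, where the converse direction of this observation is used.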
\begin{proposition}\label{linear embed} Let $\varphi:X=G(k;V)\to X':=G(k';V')$ be an embedding of degree 1. Then $\varphi$ is a standard extension, or $\varphi$ factors through a standard extension $\mathbb{P}^r\to G(k';V')$ for some $r$. \end{proposition} \begin{proof} We assume that $k\leq n-k$, $k\leq n'-k'$, where $n=\dim V$ and $n'=\dim V'$, and use induction on $k$. For $k=1$ the statement is obvious as the image of $\varphi$ is a projective subspace of $G(k';V')$ and hence $\varphi$ is a standard extension. Assume that the statement is true for $k-1$. Since $\deg \varphi=1$, $\varphi$ induces an embedding $\varphi_Y:Y\to Y'$, where $Y=\Fl(k-1,k+1;V)$ is the variety of lines in $X$ and $Y':=\Fl(k'-1,k'+1;V')$ is the variety of lines in $X'$. Moreover, clearly we have a commutative diagram of natural projections and embeddings \[
\xymatrix{ &Z\ar[rrr]^{\varphi_Z}\ar[dl]_{\pi_1}\ar[dr]^{\pi_2}&&&Z'\ar[dl]_{\pi_1'}\ar[dr]^{\pi_2'}& \\ Y\ar[dr]&&X\ar[dr]&Y'&&X',\\ &\ar[r]_{\varphi_Y}&\ar[ur]&\ar[r]_{\varphi}&\ar[ur]& } \] where $Z:=\Fl(k-1,k,k+1;V)$ and $Z':=\Fl(k'-1,k',k'+1;V')$.
We claim that there is an isomorphism \begin{equation}\label{eqLE1} \varphi^*_Y\mathcal{O}_{Y'}(1,1)\simeq\mathcal{O}_Y(1,1). \end{equation} Indeed, $\varphi^*_Y\mathcal{O}_{Y'}(1,1)$ is determined up to isomorphism by its restriction to the fibers of $p_1$ and $p_2$ (see diagram \refeq{eqDiag}), and therefore it is enough to check that \begin{equation}\label{eqLE2}
\varphi^*_Y\mathcal{O}_{Y'}(1,1)_{|p_1^{-1}(V^{k-1})}\simeq\mathcal{O}_{p_1^{-1}(V^{k-1})}(1), \end{equation} \begin{equation}\label{eqLE21}
\varphi^*_Y\mathcal{O}_{Y'}(1,1)_{|p_2^{-1}(V^{k+1})}\simeq \mathcal{O}_{p_2^{-1}(V^{k+1})}(1) \end{equation} for some fixed subspaces $V^{k-1}\subset V$, $V^{k+1}\subset V$. Note that the restriction of $\varphi$ to the projective subspace $G(1;V/V^{k-1})\subset X$ is simply an isomorphism of $G(1;V/V^{k-1})$ with a projective subspace of $X'$, hence the map induced by $\varphi$ on the variety $G(2;V/V^{k-1})$ of projective lines in $G(1;V/V^{k-1})$ is an isomorphism with the Grassmannian of 2-dimensional subspaces of an appropriate subquotient of $V'$. Note furthermore that $p_1^{-1}(V^{k-1})$ is nothing but the variety of lines $G(2;V/V^{k-1})$ in $G(1;V/V^{k-1})$, and that the image of $G(2;V/V^{k-1})$ under $\varphi$ is nothing but $\varphi_Y(p_1^{-1}(V^{k-1}))$. This shows that the restriction of $\varphi^*_Y\mathcal{O}_{Y'}(1,1)$ to $G(2;V/V^{k-1})$ is isomorphic to the restriction of $\mathcal{O}_Y(1,1)$ to $G(2;V/V^{k-1})$, and we obtain \refeq{eqLE2}. The isomorphism \refeq{eqLE21} follows from a very similar argument.
The isomorphism \refeq{eqLE1} leaves us with two alternatives: \begin{equation}\label{eqLE3} \varphi^*_{Y}\mathcal{O}_{Y'}(1,0)\simeq\mathcal{O}_Y \mathrm{~or~} \varphi_Y^*\mathcal{O}_{Y'}(0,1)\simeq \mathcal{O}_Y, \end{equation} or \begin{equation}\label{eqLE4} \varphi^*_{Y}\mathcal{O}_{Y'}(1,0)\simeq\mathcal{O}_Y(1,0) \mathrm{~or~} \varphi_Y^*\mathcal{O}_{Y'}(1,0)\simeq \mathcal{O}_Y(0,1). \end{equation} Let \refeq{eqLE3} hold, more precisely let $\varphi_Y^*\mathcal{O}_{Y'}(1,0)\simeq\mathcal{O}_Y$. Then $\varphi_Y$ maps each fiber of $p_2$ into a single point in $Y'$ (depending on the image in $Y^2$ of this fiber), say $({(V')}^{k'-1}\subset {(V')}^{k'+1})$, and moreover the space ${(V')}^{k'-1}$ is constant. Thus $\varphi$ maps $X$ into the projective subspace $G(1;V'/{(V')}^{k'-1})$ of $X'$. If $\varphi_Y^*\mathcal{O}_{Y'}(0,1)\simeq\mathcal{O}_Y$, then $\varphi$ maps $X$ into the projective subspace $G(1;{(V')}^{k'+1})$ of $X'$. Therefore, the Proposition is proved in the case \refeq{eqLE3} holds.
We assume now that \refeq{eqLE4} holds. It is easy to see that \refeq{eqLE4} implies that $\varphi$ induces a linear embedding $\varphi_{Y^1}$ of $Y^1:=G(k-1;V)$ into $G(k'-1;V')$ or $G(k'+1;V')$. Assume that $\varphi_{Y^1}:Y^1\to {(Y')}^1:=G(k'-1;V')$ (the other case is completely similar). Then, by the induction assumption, $\varphi_{Y^1}$ is a standard extension or factors through a standard extension $\mathbb{P}^r\to {(Y')}^1$. If $\varphi_{Y^1}$ is a standard extension corresponding to a fixed subspace $W\subset \hat W$, then $\varphi_{Y^1}^* S_{k'-1}\simeq S_{k-1}\oplus \left(W\otimes_\CC\mathcal{O}_{Y^1}\right)$ and we have a vector bundle monomorphism \begin{equation}\label{eqLE5} 0\to\pi_1^*p_1^*\varphi_{Y^1}^*S_{k'-1}\to \pi_2^*\varphi^*S_{k'}. \end{equation} By restricting \refeq{eqLE5} to the fibers of $\pi_1$ we see that the quotient line bundle $\pi_2^*\varphi^*S_{k'}/\pi_1^*p_1^*\varphi_{Y^1}^*S_{k'-1}$ is isomorphic to $S_k/S_{k-1}\otimes \pi_1^*p_1^*\mathcal{L}$, where $\mathcal{L}$ is a line bundle on $Y^1$. Applying $\pi_{2*}$ we obtain \begin{equation}\label{eqLE6} 0\to W\otimes_\CC \mathcal{O}_X\to\pi_{2*}(\pi_2^*\varphi^*S_{k'})=\varphi^*S_{k'}\to \pi_{2*}((S_k/S_{k-1})\otimes\pi_1^*p_1^*\mathcal{L}) \to 0. \end{equation} Since $\rk\varphi^*S_{k'}=k'$ and $\dim W=k'-k$, $\rk\pi_{2*}((S_k/S_{k-1})\otimes\pi_1^*p_1^*\mathcal{L})=k$, which implies immediately that $\mathcal{L}$ is trivial. Hence \refeq{eqLE6} reduces to $0\to W\otimes_{\CC}\mathcal{O}_X\to\varphi^*S_{k'}\to S_k\to 0$, and thus \begin{equation}\label{eqLE7} \varphi^*S_{k'}\simeq S_k\oplus \left(W\otimes_\CC\mathcal{O}_X\right) \end{equation} as there are no non-trivial extensions of $S_k$ by a trivial bundle. Now \refeq{eqLE7} implies that $\varphi$ is a standard extension.
It remains to consider the case when $\varphi_{Y^1}$ maps $Y^1$ into a projective subspace $\mathbb{P}^s$ of ${(Y')}^1$. Then $\mathbb{P}^s$ is of the form $G(1;V'/{(V')}^{k'-2})$ for some ${(V')}^{k'-2}\subset V'$, or of the form $G(k'-1;{(V')}^{k'})$ for some ${(V')}^{k'}\subset V'$. The second case is clearly impossible because it would imply that $\varphi$ maps $X$ into the single point ${(V')}^{k'}$. Hence $\mathbb{P}^s=G(1;V'/{(V')}^{k'-2})$ and $\varphi$ maps $X$ into the Grassmannian $G(2;V'/{(V')}^{k'-2})$ in $G(k';V')$. Let $S_2'$ be the rank 2 tautological bundle on $G(2;V'/{(V')}^{k'-2})$. Then its restriction $S'':=\varphi^*S_2'$ to any line $l$ in $X$ is isomorphic to $\mathcal{O}_{l}\oplus\mathcal{O}_{l}(-1)$, and we claim that this implies one of the two alternatives: \begin{equation}\label{eqLE8} S''\simeq\mathcal{O}_X\oplus\mathcal{O}_X(-1) \end{equation} or \begin{equation}\label{eqLE9} S''\simeq S_2 \text{~and~} k=2,\text{~or~} S''\simeq(V\otimes_\CC \mathcal{O}_X)/S_2\text{~and~}k=n-k=2. \end{equation} Let $k\geq 2$. The evaluation map $\pi_1^*\pi_{1*}\pi_2^*S''\to \pi_2^*S''$ is a monomorphism of the line bundle $ \pi_1^*\mathcal{L}:=\pi_1^*\pi_{1*}\pi_2^*S''$ into $\pi_2^*S''$ (here $\mathcal{L}:=\pi_{1*}\pi_2^*S''$). Restricting this monomorphism to the fibers of $\pi_2$ we see immediately that $\pi_1^*\mathcal{L}$ is trivial when restricted to those fibers and is hence trivial. Therefore $\mathcal{L}$ is trivial, i.e. $\pi_1^*\mathcal{L}=\mathcal{O}_Z$. Push-down to $X$ yields \begin{equation}\label{eqLE10} 0\to\mathcal{O}_X\to S''\to\mathcal{O}_X(-1)\to 0, \end{equation} and hence \refeq{eqLE10} splits as $\Ext^1(\mathcal{O}_X(-1),\mathcal{O}_X)=0$. Therefore \refeq{eqLE8} holds. For $k=2$, there is an additional possibility for the above monomorphisms to be of the form $\pi_1^*\mathcal{O}_Y(-1,0)\to\pi_2^*S$ (or of the form $\pi_1^*\mathcal{O}_Y(0,-1)\to\pi_2^*S$ if $n-k=2$) which yields the option \refeq{eqLE9}.
If \refeq{eqLE8} holds, $\varphi$ maps $X$ into an appropriate projective subspace of $G(2;V'/{(V')}^{k'-2})$ which is then a projective subspace of $X'$, and if \refeq{eqLE9} holds, $\varphi$ is a standard extension corresponding to a zero dimensional space $W$. The proof is now complete. \end{proof}
We are now ready to prove the following theorem. \begin{theorem} Conjecture \ref{con1} holds for any linear ind-Grassmannian $\mathbf X$. \end{theorem} \begin{proof} Since deleting finitely many terms of the sequence \refeq{eq1} does not change $\mathbf X$, we may assume that $\deg \varphi_m=1$ for all $m$, and apply Proposition \ref{linear embed}. If infinitely many $\varphi_m$'s factor through respective projective subspaces, then $\mathbf X$ is isomorphic to $\mathbf P^\infty$ and the BVT Theorem implies Conjecture \ref{con1}. Otherwise, almost all $\varphi_m$'s are standard extensions of the form \refeq{eq31}, and we may again assume that all of them are. There are two alternatives: $\displaystyle\lim_{m\to\infty} k_{m}=\lim_{m\to\infty}(n_{m}-k_{m})=\infty$, or one of the limits $\displaystyle\lim_{m\to \infty}k_{m}$ or $\displaystyle\lim_{m\to \infty}(n_{m}-k_{m})$ equals $l$ for some $l\in \NN$. In the first case the claim of Conjecture \ref{con1} is proved in \cite[Theorem 4.2]{DP}. In the second case $\mathbf X$ is isomorphic to $\mathbf G(l;\infty)$, and therefore Conjecture \ref{con1} is proved in this case by E. Sato in \cite{Sa2}. \end{proof}
\section{Auxiliary results}
In order to prove Conjecture \ref{con1} for rank 2 bundles $\mathbf E$ on a twisted ind-Grassmannian $\mathbf X=\displaystyle \lim_\to G(k_m;V^{n_m})$, we need to show that the vector bundle $\mathbf E=\displaystyle\lim_{\gets}E_m$ of rank 2 on $\mathbf X$ is trivial, i.e. that $E_m$ is a trivial bundle on $G(k_m;V^{n_m})$ for each $m$. From this point on we assume that none of the Grassmannians $G(k_m;V^{n_m})$ is a projective space, as for a twisted projective ind-space Conjecture \ref{con1} is proved in \cite{DP} for bundles of arbitrary rank $r$.
The following known proposition gives a useful triviality criterion for vector bundles of arbitrary rank on Grassmannians.
\begin{prop}\label{prop31} A vector bundle $E$ on $X=G(k;n)$ is trivial iff its restriction
$E_{|l}$ is trivial for every line $l$ in $G(k;n)$, $l\in Y=\Fl(k-1,k+1;n)$. \end{prop} \begin{proof} We recall the proof given in \cite{P}. It uses the well-known fact that the Proposition holds for any projective space, see \cite[Theorem 3.2.1]{OSS}. Consider first the case $k=2$, $n=4$, i.e. $X=G(2;4)$. Since $E$ is linearly trivial, $\pi_2^*E$ is trivial along the fibers of $\pi_1$ (we refer here to diagram \refeq{eqDiag}). Moreover, $\pi_{1*}\pi_2^*E$ is trivial along the images of the fibers of $\pi_2$ in $Y$. These images are of the form $\mathbb{P}_1^1\times\mathbb{P}_2^1$, where $\mathbb{P}_1^1$ (respectively $\mathbb{P}_2^1$) are lines in $Y^1:=G(1;4)$ and $Y^2:=G(3;4)$. The fiber of $p_1$ is filled by lines of the form $\mathbb{P}^1_2$, and thus $\pi_{1*}\pi_2^*E$ is linearly trivial, and hence trivial along the fibers of $p_1$. Finally, the lines of the form $\mathbb{P}_1^1$ fill $Y^1$, hence ${p_1}_*\pi_{1*}\pi_2^*E$ is also a trivial bundle. This implies that $E=\pi_{2*}\pi_1^*p_1^*(p_{1*}\pi_{1*}\pi_2^*E)$ is also trivial.
The next case is the case when $k=2$ and $n$ is arbitrary, $n\geq 5$. Then the above argument goes through by induction on $n$ since the fiber of $p_1$ is isomorphic to $G(2;n-1)$. The proof is completed by induction on $k$ for $k\geq 3$: the base of $p_1$ is $G(k-1;n)$ and the fiber of $p_1$ is $G(2;n-1)$. \end{proof}
If $C\subset N$ is a smooth rational curve in an algebraic variety $N$ and $E$ is a vector bundle on $N$, then by a classical theorem of Grothendieck,
$\displaystyle E_{|C}$ is isomorphic to $\bigoplus_i\mathcal{O}_C(d_i)$ for some $d_1\geq d_2\geq\dots\geq d_{\rk E}$. We call the ordered $\rk E$-tuple $(d_1,\dots,d_{\rk E})$ \emph{the splitting type} of
$E_{|C}$ and denote it by $\mathbf{d}_E(C)$. If $N=X=G(k;n)$, then the lines on $N$ are parametrized by points $l\in Y$, and we obtain a map \[ Y\to \ZZ^{\rk E}\ :\ l\mapsto \mathbf{d}_E(l). \] By semicontinuity (cf. \cite[Ch.I, Lemma 3.2.2]{OSS}), there is a dense open set $U_E\subset Y$ of lines with minimal splitting type with respect to the lexicographical ordering on $\ZZ^{\rk E}$. Denote this
minimal splitting type by $\mathbf{d}_E$. By definition, $U_E=\{l\in Y|~ \mathbf{d}_E(l)=\mathbf{d}_E\}$ is the set of \emph{non-jumping} lines of $E$, and its complement $Y\setminus U_E$ is the proper closed set of \emph{jumping} lines.
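As a simple illustration of these notions (an elementary example, included only for orientation): for the tautological bundle $S_k$ on $X=G(k;n)$ every line is non-jumping. Indeed, for a line $l=\{V^k\ |\ V^{k-1}\subset V^k\subset V^{k+1}\}$ the fixed subspace $V^{k-1}$ yields a trivial subbundle of ${S_k}_{|l}$ of rank $k-1$ whose quotient is the tautological line bundle of $l\simeq G(1;V^{k+1}/V^{k-1})$, and the resulting extension splits since $\Ext^1(\mathcal{O}_{l}(-1),\mathcal{O}_{l})=0$, so that
\[
{S_k}_{|l}\simeq\mathcal{O}_{l}^{\oplus(k-1)}\oplus\mathcal{O}_{l}(-1),\qquad \mathbf{d}_{S_k}(l)=(0,\dots,0,-1)
\]
for every $l\in Y$; hence $U_{S_k}=Y$.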
A coherent sheaf $F$ over a smooth irreducible variety $N$ is called \emph{normal} if for every open set $U\subset N$ and every closed algebraic subset $A\subset U$ of codimension at least 2 the restriction map ${F}(U)\to {F}(U\smallsetminus A)$ is surjective. It is well known that, since $N$ is smooth, hence normal, a normal torsion-free sheaf $F$ on $N$ is reflexive, i.e. $F^{\vee\vee}=F$. Therefore, if $F$ has rank 1, then by \cite[Ch.II, Theorem 2.1.4]{OSS} $F$ is necessarily a line bundle (see \cite[Ch.II, 1.1.12 and 1.1.15]{OSS}).
\begin{theorem}\label{thSubbdl} Let $E$ be a rank $r$ vector bundle of splitting type $\mathbf{d}_E=(d_1,...,d_r),\ d_1\ge...\ge d_r,$ on $X=G(k;n)$. If $d_s-d_{s+1}\ge2$ for some $s<r$, then there is a normal subsheaf $F\subset E$ of rank $s$ with the following properties: over the open set $\pi_2(\pi_1^{-1}(U_E))\subset X$ the sheaf $F$ is a subbundle of $E$, and for any $l\in U_E$ $$
F_{|l}\simeq\overset{s}{\underset{i=1}\bigoplus}\mathcal{O}_{l}(d_i). $$ \end{theorem} \begin{proof} It is similar to the proof of Theorem 2.1.4 of \cite[Ch.II]{OSS}. Consider the vector bundle $E'=E\bigotimes\mathcal{O}_X(-d_s)$ and the evaluation map $\Phi:\pi_1^*\pi_{1*}\pi_2^*E'\to \pi_2^*E'$. The definition of $U_E$ implies that
$\Phi_{|\pi_1^{-1}(U_E)}$ is a morphism of constant rank $s$ and that its image ${\rm im}\,\Phi\subset \pi_2^*E'$ is a subbundle of rank $s$ over $\pi_1^{-1}(U_E)$. Let $M:=\pi_2^*E'/{\rm im}\,\Phi$, let $T(M)$ be the torsion subsheaf of $M$, and $F':=\ker(\pi_2^*E'\to M':=M/T(M))$. Consider the singular set $\operatorname{Sing}\nolimits F'$ of the sheaf $F'$ and set $A:=Z\smallsetminus\operatorname{Sing}\nolimits F'$. By the above, $A$ is
an open subset of $Z$ containing $\pi_1^{-1}(U_E)$ and $f={\pi_2}_{|A}:A\to B:=\pi_2(A)$ is a submersion with connected fibers.
Next, take any point $l\in Y$ and put $ L:=\pi_1^{-1}(l)$. By definition, $L\simeq\mathbb{P}^1$, and we have \begin{equation}\label{tangent}
{T_{Z/X}}_{|L}\simeq\mathcal{O}_{L}(-1)^{\oplus(n-2)}, \end{equation} where $T_{Z/X}$ is the relative tangent bundle of $Z$ over $X$. The construction of the sheaves $F'$ and $M$ implies that for any $l\in U_E$:
${F'}^{\vee}_{|{L}}=\oplus_{i=1}^s\mathcal{O}_{L}(-d_i+d_s),\ \
{M'}_{|{L}} =\oplus_{i=s+1}^r\mathcal{O}_{L}(d_i-d_s)$. This, together with (\ref{tangent}) and the condition
$d_s-d_{s+1}\ge2,$ immediately implies that $H^0(\Omega^1_{A/B}\otimes{F'}^{\vee}\otimes M'_{|{L}})=0$. Hence
$H^0(\Omega^1_{A/B}\otimes{F'}^{\vee}\otimes M'_{|\pi_1^{-1}(U_E)})=0$, and thus, since $\pi_1^{-1}(U_E)$ is dense open in $Z$,
$\Hom(T_{A/B},\mathcal H om(F',M'_{|A}))=
H^0(\Omega^1_{A/B}\otimes{F'}^{\vee}\otimes M'_{|A})=0.$ Now we apply the Descent Lemma (see \cite[Ch.II, Lemma 2.1.3]{OSS}) to the data
$(f_{|\pi_1^{-1}(U_E)}:\pi_1^{-1}(U_E)\to V_E:=\pi_2(\pi_1^{-1}(U_E)),\ F'_{|\pi_1^{-1}(U_E)}
\subset E'_{|\pi_1^{-1}(U_E)})$. Then $F:=(\pi_{2*}F')\otimes\mathcal{O}_X(-d_s)$ is the desired sheaf. \end{proof}
\section{The case $\rk\mathbf{E}=2$} In what follows, when considering a twisted ind-Grassmannian $\mathbf X=\displaystyle\lim_\to G(k_m;V^{n_m})$ we set $G(k_m;V^{n_m})=X_m$. \refth{thSubbdl} yields now the following corollary. \begin{corollary}\label{d=(0,0)} Let $\displaystyle\mathbf{E}=\lim_{\gets}E_m$ be a rank 2 vector bundle on a twisted ind-Grassmannian $\displaystyle\mathbf{X}=\lim_{\to}X_m$. Then there exists $m_0\ge1$ such that $\mathbf{d}_{E_m}=(0,0)$ for any $m\ge m_0.$ \end{corollary} \begin{proof} Note first that the fact that $\mathbf X$ is twisted implies \begin{equation}\label{c_1=0} c_1(E_m)=0,\ m\ge1. \end{equation} Indeed, $c_1(E_m)$ is nothing but the integer corresponding to the line bundle $\Lambda^2(E_m)$ in the identification of $\operatorname{Pic}\nolimits X_m$ with $\ZZ$. As $\mathbf X$ is twisted, $c_1(E_m)=\deg\varphi_m\deg\varphi_{m+1}\dots\deg\varphi_{m+k}c_1(E_{m+k+1})$ for any $k\geq 1$, in other words $c_1(E_m)$ is divisible by larger and larger integers and hence $c_1(E_m)=0$ (cf. \cite[Lemma 3.2]{DP}). Suppose that for any $m_0\ge1$ there exists $m\ge m_0$ such that $\mathbf{d}_{E_m}=(a_m,-a_m)$ with $a_m>0$. Then Theorem \ref{thSubbdl} applies to $E_m$ with $s=1$, and hence $E_m$ has a normal rank-1 subsheaf $F_m$ such that
\begin{equation}\label{F|l}
F_{m|l}\simeq\mathcal{O}_{l}(a_m) \end{equation} for a certain line $l$ in $X_m$. Since $F_m$ is a torsion-free normal subsheaf of the vector bundle $E_m$, the sheaf $F_m$ is a line bundle, i.e. $F_m\simeq\mathcal{O}_{X_m}(a_m)$. Therefore we have a monomorphism: \begin{equation}\label{injectn} 0\to\mathcal{O}_{X_m}(a_m)\to E_m,\ \ \ a_m\ge1. \end{equation} This is clearly impossible. In fact, this monomorphism implies in view of (\ref{c_1=0}) that any rational curve $C\subset X_m$ of degree $\delta_m:=\deg\varphi_1\cdot...\cdot\deg\varphi_{m-1}$ has splitting type $\mathbf{d}_{E_m}(C)=(a'_m,-a'_m)$, where $a'_m\ge a_m\delta_m\ge\delta_m$. Hence, by semicontinuity, any line $l$ in $X_1$ has splitting type $\mathbf{d}_{E_1}(l)=(b,-b),\ \ b\ge\delta_m$. Since $\delta_m\to\infty$ as $m_0\to\infty,$ this is a contradiction. \end{proof}
We now recall some standard facts about the Chow rings of $X_m=G(k_m;V^{n_m})$ (see, e.g., \cite[14.7]{F}): \begin{itemize} \item[(i)] $A^1(X_m)=\operatorname{Pic}\nolimits(X_m)=\mathbb{Z}[\mathbb{V}_m]$, $A^2(X_m)=\mathbb{Z}[\mathbb{W}_{1,m}]\oplus\mathbb{Z}[\mathbb{W}_{2,m}]$, where $\mathbb{V}_m,\mathbb{W}_{1,m},\mathbb{W}_{2,m}$ are the following Schubert varieties:
$\mathbb{V}_m:=\{V^{k_m}\in X_m|\ \dim(V^{k_m}\cap V_0^{n_m-k_m})\ge1$ for a fixed subspace $V_0^{n_m-k_m}$ of $V^{n_m}\}$,
$\mathbb{W}_{1,m}:=\{V^{k_m}\in X_m| $ $\dim (V^{k_m}\cap V_0^{n_m-k_m-1})\ge1$ for a fixed subspace $V_0^{n_m-k_m-1}$ in $V^{n_m}\}$,
$\mathbb{W}_{2,m}:=\{{V}^{k_m}\in X_m|\ \dim({V}^{k_m}\cap V_0^{n_m-k_m+1})\ge2$ for a fixed subspace $V_0^{n_m-k_m+1}$ of $V^{n_m}\}$; \item[(ii)] $[\mathbb{V}_m]^2=[\mathbb{W}_{1,m}]+[\mathbb{W}_{2,m}]$ in $A^2(X_m)$; \item[(iii)] $A_2(X_m)=\mathbb{Z}[\mathbb{P}^2_{1,m}]\oplus\mathbb{Z}[\mathbb{P}^2_{2,m}]$, where the projective planes $\mathbb{P}^2_{1,m}$ (called \emph{$\alpha$-planes}) and $\mathbb{P}^2_{2,m}$ (called \emph{$\beta$-planes}) are respectively the Schubert varieties
$\mathbb{P}^2_{1,m}:=\{V^{k_m}\in X_m|\ V_0^{k_m-1}\subset {V}^{k_m}\subset V_0^{k_m+2}$ for a fixed flag $V_0^{k_m-1}\subset V_0^{k_m+2}$ in $V^{n_m}\}$,
$\mathbb{P}^2_{2,m}:=\{V^{k_m}\in X_m|\ V_0^{k_m-2}\subset {V}^{k_m}\subset V_0^{k_m+1}$ for a fixed flag $V_0^{k_m-2}\subset V_0^{k_m+1}$ in $V^{n_m}\};$ \item[(iv)] the bases $[\mathbb{W}_{i,m}]$ and $[\mathbb{P}^2_{j,m}]$ are dual in the standard sense that $[\mathbb{W}_{i,m}]\cdot[\mathbb{P}^2_{j,m}]=\delta_{i,j}.$ \end{itemize} \begin{lemma}\label{c_2(E_m)=0} There exists $m_1\in\ZZ_{>0}$ such that for any $m\ge m_1$ one of the following holds: \begin{itemize}
\item[(1)] $c_2({E_m}_{|\mathbb{P}^2_{1,m}})>0,$
$c_2({E_m}_{|\mathbb{P}^2_{2,m}})\le0$,
\item[(2)] $c_2({E_m}_{|\mathbb{P}^2_{2,m}})>0,$
$c_2({E_m}_{|\mathbb{P}^2_{1,m}})\le0$,
\item[(3)] $c_2({E_m}_{|\mathbb{P}^2_{1,m}})=0$,
$c_2({E_m}_{|\mathbb{P}^2_{2,m}})=0$. \end{itemize} \end{lemma}
\begin{proof} According to (i), for any $m\ge1$ there exist $\lambda_{1m},\lambda_{2m}\in\ZZ$ such that \begin{equation}\label{c_2(E_m)} c_2(E_m)=\lambda_{1m}[\mathbb{W}_{1,m}]+\lambda_{2m}[\mathbb{W}_{2,m}]. \end{equation} Moreover, (iv) implies \begin{equation}\label{lambda_jm}
\lambda_{jm}=c_2({E_m}_{|\mathbb{P}^2_{j,m}}),\ \ j=1,2. \end{equation} Next, (i) yields: \begin{equation}\label{abcd} \varphi_m^*[\mathbb{W}_{1,m+1}]=a_{11}(m)[\mathbb{W}_{1,m}]+a_{21}(m)[\mathbb{W}_{2,m}],\ \ \varphi_m^*[\mathbb{W}_{2,m+1}]=a_{12}(m)[\mathbb{W}_{1,m}]+a_{22}(m)[\mathbb{W}_{2,m}], \end{equation} where $a_{ij}(m)\in\mathbb{Z}$. Consider the $2\times2$-matrix $A(m)=(a_{ij}(m))$ and the column vector $\Lambda_m=(\lambda_{1m},\lambda_{2m})^t.$ Then, in view of (iv), the relation (\ref{abcd}) gives: $\Lambda_m=A(m)\Lambda_{m+1}$. Iterating this equation and denoting by $A(m,i)$ the $2\times2$-matrix $A(m)\cdot A(m+1)\cdot...\cdot A(m+i),\ i\ge1,$ we obtain \begin{equation}\label{Lambda_m} \Lambda_m=A(m,i)\Lambda_{m+i+1}. \end{equation} The twisting condition $\varphi_m^*[\mathbb{V}_{m+1}]=\deg\varphi_m[\mathbb{V}_{m}]$ together with (ii) implies: $\varphi_m^*([\mathbb{W}_{1,m+1}]+[\mathbb{W}_{2,m+1}])=(\deg\varphi_m)^2([\mathbb{W}_{1,m}]+[\mathbb{W}_{2,m}])$. Substituting (\ref{abcd}) into the last equality, we have: $a_{11}(m)+a_{12}(m)=a_{21}(m)+a_{22}(m)=(\deg\varphi_m)^2,\ \ \ m\ge1.$ This means that the column vector ${v}=(1,1)^t$ is an eigenvector of $A(m)$ with eigenvalue $(\deg\varphi_m)^2$. Hence, it is an eigenvector of $A(m,i)$ with the eigenvalue $d_{m,i}=(\deg\varphi_m)^2(\deg\varphi_{m+1})^2...(\deg\varphi_{m+i})^2:$ \begin{equation}\label{eigen} A(m,i){v}=d_{m,i}{v}. \end{equation} Notice that the entries of $A(m),\ m\ge1,$ are nonnegative integers (in fact, from the definition of the Schubert varieties $\mathbb{W}_{j,m+1}$ it immediately follows that $\varphi_m^*[\mathbb{W}_{j,m+1}]$ is an effective cycle on $X_m$, so that (\ref{abcd}) and (iv) give $0\le\varphi_m^*[\mathbb{W}_{i,m+1}]\cdot[\mathbb{P}^2_{j,m}]=a_{ij}(m)$); hence also the entries of $A(m,i),\ m,i\ge1,$ are nonnegative integers). Besides, clearly $d_{m,i}\to\infty$ as $i\to\infty$ for any $m\ge1$. This, together with (\ref{Lambda_m}) and (\ref{eigen}), implies that, for $m\gg1$, $\lambda_{1m}$ and $\lambda_{2m}$ cannot both be nonzero and have the same sign. This together with (\ref{lambda_jm}) is equivalent to the statement of the Lemma. \end{proof}
In what follows we denote the $\alpha$-planes and the $\beta$-planes on $X=G(2;4)$ respectively by $\mathbb{P}_\alpha^2$ and $\mathbb{P}_\beta^2$. \begin{proposition}\label{not exist} There exists no rank 2 vector bundle $E$ on the Grassmannian $X=G(2;4)$ such that: \begin{itemize} \item[(a)] $c_2(E)=a[\mathbb{P}^2_{\alpha}],\ \ a>0,$
\item[(b)] $E_{|\mathbb{P}^2_{\beta}}$ is trivial for a generic $\beta$-plane $\mathbb{P}^2_{\beta}$ on $X$. \end{itemize} \end{proposition}
\begin{proof} Now assume that there exists a vector bundle $E$ on $X$ satisfying the conditions (a) and (b) of the Proposition. Fix a $\beta$-plane $P\subset X$ such that
\begin{equation}\label{E|Y}
E_{|P}\simeq\mathcal{O}_{P}^{\oplus2}. \end{equation} As $X$ is the Grassmannian of lines in $\mathbb{P}^3$, the plane $P$ is the dual plane of a certain plane $\tilde P$ in $\mathbb{P}^3$. Next, fix a point $x_0\in\mathbb{P}^3\smallsetminus\tilde P$ and denote by $S$ the variety of lines in $\mathbb{P}^3$ which contain $x_0$. Consider the variety
$Q=\{(x,l)\in\mathbb{P}^3\times X\ |\ x\in l\cap\tilde P\}$ with natural projections $p:Q\to S:(x,l)\mapsto\Span(x,x_0)$ and $\sigma:Q\to X:(x,l)\mapsto l$. Clearly, $\sigma$ is the blowing up of $X$ at the plane $P$, and the exceptional divisor $D_P=\sigma^{-1}(P)$ is isomorphic to the incidence subvariety of $P\times\tilde{P}$. Moreover, one easily checks that $Q\simeq\mathbb{P}(\mathcal{O}_{S}(1)\oplus T_{S}(-1))$, so that the projection $p:Q\to S$ coincides with the structure morphism $\mathbb{P}(\mathcal{O}_{S}(1)\oplus T_{S}(-1))\to S$. Let $\mathcal{O}_Q(1)$ be the Grothendieck line bundle on $Q$ such that $p_*\mathcal{O}_Q(1)=\mathcal{O}_{S}(1)\oplus T_{S}(-1)$. Using the Euler exact triple on $Q$ \begin{equation}\label{Euler} 0\to\Omega^1_{Q/S}\to p^*(\mathcal{O}_{S}(1)\oplus T_{S}(-1)) \otimes\mathcal{O}_Q(-1)\to\mathcal{O}_Q\to 0, \end{equation} we find the $p$-relative dualizing sheaf $\omega_{Q/S}:=\det(\Omega^1_{Q/S})$: \begin{equation}\label{rel dual} \omega_{Q/S}\simeq\mathcal{O}_Q(-3)\otimes p^*\mathcal{O}_{S}(2). \end{equation}
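For the reader's convenience we indicate the short computation behind (\ref{rel dual}); it uses only the Euler triple (\ref{Euler}) and the fact that $S\simeq\mathbb{P}^2$, so that $\det T_{S}\simeq\mathcal{O}_{S}(3)$ and hence $\det(T_{S}(-1))\simeq\mathcal{O}_{S}(1)$:
\[
\omega_{Q/S}=\det\Omega^1_{Q/S}\simeq\mathcal{O}_Q(-3)\otimes p^*\det\big(\mathcal{O}_{S}(1)\oplus T_{S}(-1)\big)\simeq\mathcal{O}_Q(-3)\otimes p^*\mathcal{O}_{S}(2).
\]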
Set $\mathcal{E}:=\sigma^*E$. By construction, for each $y\in S$ the
fiber $Q_y=p^{-1}(y)$ is a plane such that $l_y=Q_y\cap D_P$ is a line, and, by (\ref{E|Y}), \begin{equation}\label{triv on l}
\mathcal{E}_{|l_y}\simeq\mathcal{O}_{l_y}^{\oplus2}. \end{equation} Furthermore, $\sigma(Q_y)$ is an $\alpha$-plane in $X$, and from (\ref{triv on l}) it follows clearly that
$h^0(\mathcal{E}_{|Q_y}(-1))=h^0(\mathcal{E}^\vee_{|Q_y}(-1))=0$. Hence, in view of condition (a) of the Proposition, the sheaf $\mathcal{E}_{|Q_y}$ is the cohomology sheaf of a monad
h^1(\mathcal{E}_{|Q_y}(-1))=h^1(\mathcal{E}_{|Q_y}(-2))=a,\ \
h^1(\mathcal{E}_{|Q_y}\otimes\Omega^1_{Q_y})=2a+2, \end{equation} $$
h^i(\mathcal{E}_{|Q_y}(-1))=h^i(\mathcal{E}_{|Q_y}(-2))=
h^i(\mathcal{E}_{|Q_y}\otimes\Omega^1_{Q_y})=0,\ \ i\ne1. $$ Consider the sheaves of $\mathcal{O}_{S}$-modules \begin{equation}\label{E_i} E_{-1}:=R^1p_*(\mathcal{E}\otimes\mathcal{O}_Q(-2)\otimes p^*\mathcal{O}_{S}(2)),\ \ \ E_0:=R^1p_*(\mathcal{E}\otimes\Omega^1_{Q/S}), \ \ \ E_1:=R^1p_*(\mathcal{E}\otimes\mathcal{O}_Q(-1)). \end{equation} The equalities (\ref{cohomology}) together with Cohomology and Base Change imply that $E_{-1},\ E_1$ and $E_0$ are locally free $\mathcal{O}_{S}$-modules, and $\rk(E_{-1})=\rk(E_1)=a,$ and $\rk(E_0)=2a+2$. Moreover, \begin{equation}\label{R_i} R^ip_*(\mathcal{E}\otimes\mathcal{O}_Q(-2))= R^ip_*(\mathcal{E}\otimes\Omega^1_{Q/S})=R^ip_*(\mathcal{E}\otimes\mathcal{O}_Q(-1))=0 \end{equation} for $i\ne 1$. Note that $\mathcal{E}^\vee\simeq\mathcal{E}$ as $c_1(\mathcal{E})=0$ and $\rk\mathcal{E}=2$. Furthermore, (\ref{rel dual}) implies that the nondegenerate pairing ($p$-relative Serre duality) $R^1p_*(\mathcal{E}\otimes\mathcal{O}_Q(-1))\otimes R^1p_*(\mathcal{E}^\vee\otimes\mathcal{O}_Q(1)\otimes \omega_{Q/S})\to R^2p_*\omega_{Q/S}=\mathcal{O}_{S}$ can be rewritten as $E_1\otimes E_{-1}\to\mathcal{O}_{S}, $ thus giving an isomorphism \begin{equation}\label{isom dual} E_{-1}\simeq E_1^\vee. \end{equation} Similarly, since $\mathcal{E}^\vee\simeq\mathcal{E}$ and $\Omega^1_{Q/S}\simeq T_{Q/S}\otimes\omega_{Q/S}$, $p$-relative Serre duality yields a nondegenerate pairing $E_0\otimes E_0=R^1p_*(\mathcal{E}\otimes\Omega^1_{Q/S})\otimes R^1p_*(\mathcal{E}\otimes\Omega^1_{Q/S})= R^1p_*(\mathcal{E}\otimes\Omega^1_{Q/S})\otimes R^1p_*(\mathcal{E}^\vee\otimes T_{Q/S}\otimes\omega_{Q/S}) \to R^2p_*\omega_{Q/S}=\mathcal{O}_{S}$. Therefore $E_0$ is self-dual, i.e. $E_0\simeq E_0^\vee$, and in particular $c_1(E_0)=0$.
Now, let $J$ denote the fiber product $Q\times_{S}Q$ with projections $Q\overset{pr_1}\leftarrow J\overset{pr_2}\to Q$ such that $p\circ pr_1=p\circ pr_2$. Put $F_1\boxtimes F_2:=pr_1^*F_1\otimes pr_2^*F_2$ for sheaves $F_1$ and $F_2$ on $Q$, and consider the standard $\mathcal{O}_J$-resolution of the structure sheaf $\mathcal{O}_{\Delta}$ of the diagonal $\Delta\hookrightarrow J$ \begin{equation}\label{resoln of diag} 0\to\mathcal{O}_Q(-1)\otimes p^*\mathcal{O}_{S}(2)\boxtimes\mathcal{O}_Q(-2)\to {\Omega^1}_{Q/S}(1)\boxtimes\mathcal{O}_Q(-1)\to\mathcal{O}_J\to \mathcal{O}_{\Delta}\to0. \end{equation} Twist this sequence by the sheaf $(\mathcal{E}\otimes\mathcal{O}_Q(-1))\boxtimes\mathcal{O}_Q(1)$ and apply the functor $R^ipr_{2*}$ to the resulting sequence. In view of (\ref{E_i}) and (\ref{R_i}) we obtain the following monad for $\mathcal{E}$: \begin{equation}\label{monad1} 0\to p^*E_{-1}\otimes\mathcal{O}_Q(-1)\overset{\lambda}\to p^*E_0\overset{\mu} \to p^*E_1\otimes\mathcal{O}_Q(1)\to0,\ \ \ \ \ \ker(\mu)/{\rm im}(\lambda)=\mathcal{E}. \end{equation} Put $R:=p^*h$, where $h$ is the class of a line in $S$. Furthermore, set $H:=\sigma^*H_X$, $[\mathbb{P}_\alpha]:=\sigma^*[\mathbb{P}^2_\alpha]$, $[\mathbb{P}_\beta]:=\sigma^*[\mathbb{P}^2_\beta]$, where $H_X$ is the class of a hyperplane section of $X$ (via the Pl\"ucker embedding), and respectively, $[\mathbb{P}^2_\alpha]$ and $[\mathbb{P}^2_\beta]$ are the classes of an $\alpha$- and $\beta$-plane. Note that, clearly, $\mathcal{O}_Q(H)\simeq\mathcal{O}_Q(1)$. Thus, taking into account the duality (\ref{isom dual}), we rewrite the monad (\ref{monad1}) as \begin{equation}\label{monad2} 0\to p^*E_1^\vee\otimes\mathcal{O}_Q(-H)\overset{\lambda}\to p^*E_0\overset{\mu}\to p^*E_1\otimes\mathcal{O}_Q(H)\to0,\ \ \ \ \ \ \ker(\mu)/{\rm im}(\lambda)\simeq\mathcal{E}. \end{equation} In particular, it becomes clear that \refeq{monad1} is a relative version of the monad \refeq{eqMonad}.
As a next step, we are going to express all Chern classes of the sheaves in (\ref{monad2}) in terms of $a$. We start by writing down the Chern polynomials of the bundles $p^*E_1\otimes\mathcal{O}_Q(H)$ and $p^*E_1^\vee\otimes\mathcal{O}_Q(-H)$ in the form \begin{equation}\label{Chern1} c_t(p^*E_1\otimes\mathcal{O}_Q(H))=\prod_{i=1}^a(1+(\delta_i+H)t),\ \ \ c_t(p^*E_1^\vee\otimes\mathcal{O}_Q(-H))=\prod_{i=1}^a(1-(\delta_i+H)t), \end{equation} where $\delta_i$ are the Chern roots of the bundle $p^*E_1$. Thus \begin{equation}\label{c,d} cR^2=\sum_{i=1}^a\delta_i^2,\ \ dR=\sum_{i=1}^a\delta_i. \end{equation} for some $c,d\in\mathbb{Z}$. Next we invoke the following easily verified relations in $A^\cdot(Q)$: \begin{equation}\label{rel in A(Q)} H^4=RH^3=2[pt],\ \ \ R^2H^2=R^2[\mathbb{P}_\alpha]= RH[\mathbb{P}_\alpha]=H^2[\mathbb{P}_\alpha]=RH[\mathbb{P}_\beta]=H^2[\mathbb{P}_\beta]=[pt], \end{equation} $$ [\mathbb{P}_\alpha][\mathbb{P}_\beta]=R^2[\mathbb{P}_\beta]=R^4=R^3H=0, $$ where $[pt]$ is the class of a point. This, together with (\ref{c,d}), gives \begin{equation}\label{sums} \sum_{1\le i<j\le a}\delta_i^2\delta_j^2= \sum_{1\le i<j\le a}(\delta_i^2\delta_j+\delta_i\delta_j^2)H=0, \sum_{1\le i<j\le a}\delta_i\delta_jH^2=\frac{1}{2}(d^2-c)[pt], \sum_{1\le i\le a}(\delta_i+\delta_j)H^3=2(a-1)d[pt]. \end{equation} Note that, since $c_1(E_0)=0$, \begin{equation}\label{Chern2} c_t(p^*E_0)=1+bR^2t^2 \end{equation} for some $b\in\mathbb{Z}$. Furthermore, \begin{equation}\label{c_t(E)} c_t(\mathcal{E})=1+a[\mathbb{P}_\alpha]t^2 \end{equation} by the condition of the Proposition. Substituting (\ref{Chern2}) and (\ref{c_t(E)}) into the polynomial $f(t):=c_t(\mathcal{E})c_t(p^*E_1\otimes\mathcal{O}_Q(H)) c_t(p^*E_1^\vee\otimes\mathcal{O}_Q(-H))$, we have $f(t)=(1+a[\mathbb{P}_\alpha]t^2)\prod_{i=1}^a(1-(\delta_i+H)^2t^2)$. Expanding $f(t)$ in $t$ and using (\ref{c,d})-(\ref{sums}), we obtain \begin{equation}\label{f(t)2} f(t)=1+(a[\mathbb{P}_\alpha]-cR^2-2dRH-aH^2)t^2+e[pt]t^4,\ \ \ \end{equation} where \begin{equation}\label{e} e=-3c-a(2d+a)+(a-1)(a+4d)+2d^2. \end{equation} Next, the monad (\ref{monad2}) implies $f(t)=c_t(p^*E_0)$. A comparison of (\ref{f(t)2}) with (\ref{Chern2}) yields \begin{equation}\label{c_2} c_2(\mathcal{E})=a[\mathbb{P}_\alpha]=(b+c)R^2+2dRH+aH^2, \end{equation} \begin{equation}\label{c_4} e=c_4(p^*E_0)=0. \end{equation} The relation (\ref{c_4}) is the crucial relation which enables us to express the Chern classes of all sheaves in (\ref{monad2}) just in terms of $a$. More precisely, (\ref{c_2}) and (\ref{rel in A(Q)}) give $0=c_2(\mathcal{E})[\mathbb{P}_\beta]=2d+a$, hence $a=-2d$. Substituting these latter equalities into (\ref{e}) we get $e=-a(a-2)/2-3c$. Hence $c=-a(a-2)/6$ by (\ref{c_4}). Since $a=-2d$, (\ref{c,d}) and the equality $c=-a(a-2)/6$ give $c_1(E_1)=-a/2,\ \ c_2(E_1)=(d^2-c)/2=a(5a-4)/24$. Substituting this into the standard formulas $e_k:=c_k(p^*E_1\otimes\mathcal{O}_Q(H))=\sum_{i=0}^2\binom{a-i}{k-i}R^iH^{k-i}c_i(E_1), \ \ 1\le k\le4$, we obtain \begin{equation}\label{ee_i} e_1=-aR/2+aH,\ \ e_2=(5a^2/24-a/6)R^2+(a^2-a)(-RH+H^2)/2, \end{equation} $$e_3=(5a^3/24-7a^2/12+a/3)R^2H+(-a^3/4+3a^2/4-a/2)RH^2+(a^3/6-a^2/2+a/3)H^3,$$ $$ e_4=(-7a^4/144+43a^3/144-41a^2/72+a/3)[pt]. $$ It remains to write down explicitely $c_2(p^*E_0)$: (\ref{rel in A(Q)}), (\ref{c_2}) and the relations $a=-2d$, $c=-a(a-2)/6$ give $a=c_2(\mathcal{E})[\mathbb{P}_\alpha]=b+c,$ hence \begin{equation}\label{c_2(E_0)} c_2(E_0)=b=(a^2+4a)/6 \end{equation} by (\ref{Chern2}).
Our next and final step will be to obtain a contradiction by computing the Euler characteristic of the sheaf $\mathcal{E}$ and two different ways. We first compute the Todd class ${\rm td}(T_Q)$ of the bundle $T_Q$. From the exact triple dual to (\ref{Euler}) we find $c_t(T_{Q/S})=1+(-2R+3H)t+(2R^2-4RH+3H^2)t^2$. Next, $c_t(T_Q)=c_t(T_{Q/S})c_t(p^*T_S)$. Hence $c_1(T_Q)=R+3H,\ c_2(T_Q)=-R^2+5RH+3H^2,\ c_3(T_Q)=-3R^2H+9H^2R,\ c_4(T_Q)=9[pt].$ Substituting into the formula for the Todd class of $T_Q$, ${\rm td}(T_Q)=1+\frac{1}{2}c_1+\frac{1}{12}(c_1^2+c_2)+\frac{1}{24}c_1c_2 -\frac{1}{720}(c_1^4-4c_1^2c_2-3c_2^2-c_1c_3+c_4)$, where $c_i:=c_i(T_Q)$ (see, e.g., \cite[p.432]{H}), we get \begin{equation}\label{td(T_Q)} {\rm td}(T_Q)=1+\frac{1}{2}R+\frac{3}{2}H+\frac{11}{12}RH+H^2+\frac{1}{12}HR^2+ \frac{3}{4}H^2R+\frac{3}{8}H^3+[pt]. \end{equation} Next, by the hypotheses of Proposition $c_1(\mathcal{E})=0,\ c_2(\mathcal{E})=a[\mathbb{P}_{\alpha}],\ c_3(\mathcal{E})=c_4(\mathcal{E})=0$. Substituting this into the general formula for the Chern character of a vector bundle $F$, $$ {\rm ch}(F)=\rk(F)+c_1+(c_1^2-2c_2)/2+(c_1^3-3c_1c_2-3c_3)/6+(c_1^4-4c_1^2c_2+4c_1c_3+2c_2^2-4c_4)/24, \ \ $$ $c_i:=c_i(F)$ (see, e.g., \cite[p.432]{H}), and using (\ref{td(T_Q)}), we obtain by the Riemann-Roch Theorem for $F=\mathcal{E}$ \begin{equation}\label{chi(E)} \chi(\mathcal{E})=\frac{1}{12}a^2-\frac{23}{12}a+2. \end{equation} In a similar way, using (\ref{ee_i}), we obtain \begin{equation}\label{chi(E1)+chi(E-1)} \chi(p^*E_1\otimes\mathcal{O}_Q(H))+\chi(p^*E_1^\vee\otimes\mathcal{O}_Q(-H))= \frac{5}{216}a^4-\frac{29}{216}a^3-\frac{1}{54}a^2+\frac{113}{36}a. \end{equation} Next, in view of (\ref{c_2(E_0)}) and the equality $c_1(E_0)=0$ the Riemann-Roch Theorem for $E_0$ easily gives \begin{equation}\label{chi(E_0)} \chi(p^*E_0)=\chi(E_0)=-\frac{1}{6}a^2+\frac{4}{3}a+2. \end{equation} Together with (\ref{chi(E)}) and (\ref{chi(E1)+chi(E-1)}) this yields $$ \Phi(a):=\chi(p^*E_0)-(\chi(\mathcal{E})+ \chi(p^*E_1\otimes\mathcal{O}_Q(H))+\chi(p^*E_1^\vee\otimes\mathcal{O}_Q(-H)))= -\frac{5}{216}a(a-2)(a-3)(a-\frac{4}{5}). $$ The monad (\ref{monad2}) implies now $\Phi(a)=0.$ The only positive integer roots of the polynomial $\Phi(a)$ are $a=2$ and $a=3$. However, (\ref{chi(E)}) implies $\chi(\mathcal{E})=-\frac{3}{2}$ for $a=2$, and (\ref{chi(E_0)}) implies $\chi(p^*E_0)=\frac{9}{2}$ for $a=3$. This is a contradiction as the values of $\chi(\mathcal{E})$ and $\chi(p^*E_0)$ are integers by definition. \end{proof}
We need one last piece of notation. Consider the flag variety $\Fl(k_m-2,k_m+2;V^{n_m})$. Any point $u=(V^{k_m-2},V^{k_m+2})\in \Fl(k_m-2,k_m+2;V^{n_m})$ determines a standard extension \begin{equation}\label{i_z} i_{u}:\ X=G(2;4)\hookrightarrow X_m, \end{equation} \begin{equation}\label{eq} W^2\mapsto V^{k_m-2}\oplus W^2\subset V^{k_m-2}\oplus W^4\simeq V^{k_m+2}\subset V^{n_m}, \end{equation} where $W^2\in X=G(2;W^4)$ and an isomorphism $V^{k_m-2}\oplus W^4\simeq V^{k_m+2}$ is fixed (clearly $i_{u}$ does not depend on the choice of this isomorphism modulo $\Aut(X_m)$). We clearly have isomorphisms of Chow groups \begin{equation}\label{isomChow} i_{u}^*:\ A^2(X_m)\overset{\sim}\to A^2(X),\ \ \ i_{u*}:\ A_2(X)\overset{\sim}\to A_2(X_m), \end{equation} and the flag variety $Y_m:=\Fl(k_m-1,k_m+1;V^{n_m})$ (respectively, $Y:=\Fl(1,3;4)$) is the set of lines in $X_m$ (respectively, in $X$).
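Note also the following short verification, which is used implicitly below: the standard extension $i_u$ takes $\alpha$-planes of $X$ to planes of type $\mathbb{P}^2_{1,m}$ and $\beta$-planes of $X$ to planes of type $\mathbb{P}^2_{2,m}$, since
\[
i_u\big(\{W^2\ |\ W^1_0\subset W^2\subset W^4\}\big)=\{V^{k_m}\in X_m\ |\ V^{k_m-2}\oplus W^1_0\subset V^{k_m}\subset V^{k_m+2}\},
\]
\[
i_u\big(\{W^2\ |\ W^2\subset W^3_0\}\big)=\{V^{k_m}\in X_m\ |\ V^{k_m-2}\subset V^{k_m}\subset V^{k_m-2}\oplus W^3_0\}.
\]
In particular, under the isomorphisms \refeq{isomChow} the classes $[\mathbb{P}^2_{\alpha}]$ and $[\mathbb{P}^2_{\beta}]$ correspond to $[\mathbb{P}^2_{1,m}]$ and $[\mathbb{P}^2_{2,m}]$ respectively, and dually for the classes $[\mathbb{W}_{1,m}]$ and $[\mathbb{W}_{2,m}]$.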
\begin{theorem}\label{th56} Let $\displaystyle\mathbf{X} = \lim_{\to}X_m$ be a twisted ind-Grassmannian. Then any vector bundle $\displaystyle\mathbf{E}=\lim_{\gets}E_m$ on $\mathbf{X}$ of rank 2 is trivial, and hence Conjecture \ref{con1}(iv) holds for vector bundles of rank 2. \end{theorem}
\begin{proof} Fix $m\ge\max\{m_0,m_1\},$ where $m_0$ and $m_1$ are as in Corollary \ref{d=(0,0)} and Lemma \ref{c_2(E_m)=0}. For $j=1,2$, let $E^{(j)}$ denote the restriction of $E_m$ to a projective plane of type $\mathbb{P}^2_{j,m}$,
$T^j\simeq\Fl(k_m-j,k_m+3-j;V^{n_m})$ be the variety of planes of the form $\mathbb{P}^2_{j,m}$
in $X_m$, and $\Pi^j:=\{\mathbb{P}^2_{j,m}\in T^j|\
{E_m}_{|\mathbb{P}^2_{j,m}}$ is properly unstable (i.e. not semistable)$\}.$ As semistability is an open condition, $\Pi^j$ is a closed subset of $T^j$.
(i) Assume that $c_2(E^{(1)})>0$. Then, since $m\ge m_1$, Lemma \ref{c_2(E_m)=0} implies $c_2(E^{(2)})\le0$.
(i.1) Suppose that $c_2(E^{(2)})=0$. If $\Pi^2\ne T^2$, then for any $\mathbb{P}^2_{2,m}\in T^2\smallsetminus \Pi^2$ the corresponding bundle $E^{(2)}$ is semistable, hence $E^{(2)}$ is trivial as $c_2(E^{(2)})=0$, see \cite[Prop. 2.3,(4)]{DL}. Thus, for a generic point $u\in Fl(k_m-2,k_m+2;V^{n_m})$, the bundle $E=i_{u}^*E_m$ on $X=G(2;4)$ satisfies the conditions of Proposition \ref{not exist}, which is a contradiction.
We therefore assume $\Pi^2=T^2$. Then for any $\mathbb{P}^2_{2,m}\in T^2$ the corresponding bundle $E^{(2)}$ has a maximal destabilizing subsheaf $0\to\mathcal{O}_{\mathbb{P}^2_{2,m}}(a)\to E^{(2)}.$ Moreover $a>0$. In fact, otherwise the condition $c_2(E^{(2)})=0$ would imply that $a=0$ and $E^{(2)}/\mathcal{O}_{\mathbb{P}^2_{2,m}}=\mathcal{O}_{\mathbb{P}^2_{2,m}}$, i.e. $E^{(2)}$ would be trivial, in particular semistable. Hence \begin{equation}\label{a,-a} \mathbf{d}_{E^{(2)}}=(a,-a). \end{equation} Since any line in $X_m$ is contained in a plane $\mathbb{P}^2_{2,m}\in T^2$, (\ref{a,-a}) implies $\mathbf{d}_{E_m}=(a,-a)$ with $a>0$ for $m>m_0$, contrary to Corollary \ref{d=(0,0)}.
(i.2) Assume $c_2(E^{(2)})<0$. Since $E^{(2)}$ is not stable for any $\mathbb{P}^2_{2,m}\in T^2$, its maximal destabilizing subsheaf $0\to\mathcal{O}_{\mathbb{P}^2_{2,m}}(a)\to E^{(2)}$ clearly satisfies the condition $a>0$, i.e. $E^{(2)}$ is properly unstable, hence $\Pi^2=T^2$. Then we again obtain a contradiction as above.
(ii) Now we assume that $c_2(E^{(2)})>0$. Then, replacing $E^{(2)}$ by $E^{(1)}$ and vice versa, we arrive at a contradiction by the same argument as in case (i).
(iii) We must therefore assume $c_2(E^{(1)})=c_2(E^{(2)})=0$. Set
$D(E_m):=\{l\in Y_m|~\mathbf{d}_{E_m}(l)\ne(0,0)\}$ and $D(E):=\{l\in
Y|~\mathbf{d}_E(l)\ne(0,0)\}$. By Corollary \ref{d=(0,0)}, $\mathbf{d}_{E_m}=(0,0),$ hence $\mathbf{d}_E=(0,0)$ for a generic embedding $i_u:X\hookrightarrow X_m$. Then by deformation theory \cite{B}, $D(E_m)$ (respectively, $D(E)$) is an effective divisor on $Y_m$ (respectively, on $Y$). Hence, $\mathcal{O}_Y(D(E))=p_1^*\mathcal{O}_{Y^1}(a) \otimes p_2^*\mathcal{O}_{Y^2}(b)$ for some $a,b\ge0$, where $p_1$, $p_2$ are as in diagram \refeq{eqDiag}. Note that each fiber of $p_1$ (respectively, of $p_2$) is a plane $\tilde{\mathbb{P}}^2_{\alpha}$ dual to some $\alpha$-plane $\mathbb{P}^2_{\alpha}$ (respectively, a plane $\tilde{\mathbb{P}}^2_{\beta}$ dual to some $\beta$-plane $\mathbb{P}^2_{\beta}$). Thus, setting
$D(E_{|\mathbb{P}^2_{\alpha}}):=\{l\in\tilde{\mathbb{P}}^2_{\alpha}|~\mathbf{d}_E(l)\ne(0,0)\}$,
$D(E_{|\mathbb{P}^2_{\beta}}):=\{l\in\tilde{\mathbb{P}}^2_{\beta}|~\mathbf{d}_E(l)\ne(0,0)\}$, we obtain
$\mathcal{O}_{\tilde{\mathbb{P}}^2_{\alpha}}(D(E_{|\mathbb{P}^2_{\alpha}}))=
\mathcal{O}_Y(D(E))_{|\tilde{\mathbb{P}}^2_{\alpha}}= \mathcal{O}_{\tilde{\mathbb{P}}^2_{\alpha}}(b),\ \ \
\mathcal{O}_{\tilde{\mathbb{P}}^2_{\beta}}(D(E_{|\mathbb{P}^2_{\beta}}))=
\mathcal{O}_Y(D(E))_{|\tilde{\mathbb{P}}^2_{\beta}}= \mathcal{O}_{\tilde{\mathbb{P}}^2_{\beta}}(a).$ Now if
$E_{|\mathbb{P}^2_{\alpha}}$ is semistable, a theorem of Barth \cite[Ch. II, Theorem 2.2.3]{OSS} implies that
$D(E_{|\mathbb{P}^2_{\alpha}})$ is a divisor of degree
$c_2(E_{|\mathbb{P}^2_{\alpha}})=a$ on
$\mathbb{P}^2_{\alpha}$. Hence $a=c_2(E^{(1)})=0$ for a semistable $E_{|\mathbb{P}^2_{\alpha}}$. If $E_{|\mathbb{P}^2_{\alpha}}$ is not semistable, it is unstable and the equality $\mathbf{d}_E(l)=(0,0)$ yields
$\mathbf{d}_{E_{|\mathbb{P}^2_{\alpha}}}=(0,0)$. Then the maximal destabilizing subsheaf of $E_{|\mathbb{P}^2_{\alpha}}$ is isomorphic to $\mathcal{O}_{\mathbb{P}^2_{\alpha}}$ and, since
$c_2(E_{|\mathbb{P}^2_{\alpha}})=0,$ we obtain an exact triple
$0\to\mathcal{O}_{\mathbb{P}^2_{\alpha}}\to E_{|\mathbb{P}^2_{\alpha}}\to \mathcal{O}_{\mathbb{P}^2_{\alpha}}\to 0$, so that
$E_{|\mathbb{P}^2_{\alpha}}\simeq\mathcal{O}_{\mathbb{P}^2_{\alpha}}^{\oplus2}$ is semistable, a contradiction. This shows that $a=0$ whenever $c_2(E^{(1)})=c_2(E^{(2)})=0$. Similarly, $b=0$. Therefore $D(E_m)=\emptyset$, and Proposition \ref{prop31} implies that $E_m$ is trivial. Therefore $\mathbf{E}$ is trivial as well. \end{proof}
In \cite{DP} Conjecture \ref{con1} (iv) was proved not only when $\mathbf{X}$ is a twisted projective ind-space, but also for finite rank bundles on special twisted ind-Grassmannians defined through certain homogeneous embeddings $\varphi_m$. These include embeddings of the form \[ G(k;n)\to G(ka;nb) \] \[ V^k\subset V\mapsto V^k\otimes W^a\subset V\otimes W^b, \] where $W^a\subset W^b$ is a fixed pair of finite-dimensional spaces with $a\le b$, or of the form \[ G(k;n)\to G\left(\frac{k(k+1)}{2};n^2\right) \] \[ V^k\subset V\mapsto S^2(V^k)\subset V\otimes V. \] More precisely, Conjecture \ref{con1} (iv) was proved in \cite{DP} for twisted ind-Grassmannians whose defining embeddings are homogeneous embeddings satisfying some specific numerical conditions relating the degrees $\deg\varphi_m$ with the pairs of integers $(k_m,n_m)$. There are many twisted ind-Grassmannians for which those conditions are not satisfied. For instance, this applies to the ind-Grassmannians defined by iterating each of the following embeddings: \begin{eqnarray*} G(k;n)\to G\left(\frac{k(k+1)}{2};\frac{n(n+1)}{2}\right)\\ V^k\subset V\mapsto S^2(V^k)\subset S^2(V), \\ G(k;n)\to G\left(\frac{k(k-1)}{2};\frac{n(n-1)}{2}\right)\\ V^k\subset V\mapsto \Lambda^2(V^k)\subset \Lambda^2(V). \end{eqnarray*} Therefore the resulting ind-Grassmannians $\mathbf G(k,n,S^2)$ and $\mathbf G(k,n,\Lambda^2)$ are examples of twisted ind-Grassmannians for which \refth{th56} is new.
\end{document} |
\begin{document}
\title*{Replication of Wiener-transformable stochastic processes with application to financial markets with memory}
\titlerunning{Replication of Wiener-transformable processes}
\author{Elena Boguslavskaya, Yuliya Mishura, and Georgiy Shevchenko}
\institute{ Elena Boguslavskaya \at Department of Mathematics, Brunel University London, Uxbridge UB8 3PH, UK\\ \email{[email protected]} \and Yuliya Mishura \at Department of Probability Theory, Statistics and Actuarial Mathematics, Taras Shevchenko National University of Kyiv, 64, Volodymyrs'ka St.,
01601 Kyiv, Ukraine\\ \email{[email protected]} \and Georgiy Shevchenko\at Department of Probability Theory, Statistics and Actuarial Mathematics, Taras Shevchenko National University of Kyiv, 64, Volodymyrs'ka St.,
01601 Kyiv, Ukraine\\ \email{[email protected]}}
\maketitle
\abstract*{We investigate Wiener-transformable markets, where the driving process is given by an adapted transformation of a Wiener process. This includes processes with long memory, like fractional Brownian motion and related processes, and, in general, Gaussian processes satisfying certain regularity conditions on their covariance functions. Our choice of markets is motivated by the well-known phenomena of the so-called ``constant'' and ``variable depth'' memory observed in real world price processes, for which fractional and multifractional models are the most adequate descriptions. Motivated by integral representation results in general Gaussian setting, we study the conditions under which random variables can be represented as pathwise integrals with respect to the driving process. From financial point of view, it means that we give the conditions of replication of contingent claims on such markets. As an application of our results, we consider the utility maximization problem in our specific setting. Note that the markets under consideration can be both arbitrage and arbitrage-free, and moreover, we give the representation results in terms of bounded strategies. \keywords{Wiener-transformable process; fractional Brownian motion; long memory; pathwise integral; martingale representation; utility maximization}}
\abstract{We investigate Wiener-transformable markets, where the driving process is given by an adapted transformation of a Wiener process. This includes processes with long memory, like fractional Brownian motion and related processes, and, in general, Gaussian processes satisfying certain regularity conditions on their covariance functions. Our choice of markets is motivated by the well-known phenomena of the so-called ``constant'' and ``variable depth'' memory observed in real world price processes, for which fractional and multifractional models are the most adequate descriptions. Motivated by integral representation results in general Gaussian setting, we study the conditions under which random variables can be represented as pathwise integrals with respect to the driving process. From financial point of view, it means that we give the conditions of replication of contingent claims on such markets. As an application of our results, we consider the utility maximization problem in our specific setting. Note that the markets under consideration can be both arbitrage and arbitrage-free, and moreover, we give the representation results in terms of bounded strategies. \keywords{Wiener-transformable process; fractional Brownian motion; long memory; pathwise integral; martingale representation; utility maximization}}
\section{Introduction}
Consider a general continuous-time market model with one risky asset. For simplicity, we will work with discounted values. Let the stochastic process $\{X_t,t\in [0,T]\}$ model the discounted price of the risky asset. Then the discounted final value of a self-financing portfolio\index{portfolio} is given by a stochastic integral \begin{equation}\label{wti:capital} V^\psi(T) = V^\psi(0) + \int_0^T \psi(t) dX(t), \end{equation} where the adapted process $\psi$ is the quantity of the risky asset in the portfolio. Loosely speaking, the self-financing assumption means that no capital is withdrawn from or added to the portfolio; for a precise definition and a general overview of continuous-time financial market models we refer the reader to \cite{bjork,karat}.
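To illustrate the notation by a trivial example: for the buy-and-hold portfolio $\psi(t)\equiv 1$, formula \eqref{wti:capital} gives
\begin{equation*}
V^\psi(T)=V^\psi(0)+\int_0^T 1\, dX(t)=V^\psi(0)+X(T)-X(0),
\end{equation*}
i.e.\ the discounted gain equals the increment of the price process; the questions below concern which other random variables can be obtained in this way from general adapted integrands $\psi$.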
Formula \eqref{wti:capital} raises several important questions of financial modeling; we will focus here on the following two. \begin{itemize} \item \textit{Replication}: \index{replication} identifying random variables (i.e.\ discounted contingent claims) which can be represented as final capitals of some self-financing portfolios. In other words, one looks at integral representations \begin{equation}\label{wti:represent} \xi = \int_0^T \psi(t) dX(t) \end{equation} with adapted integrand $\psi$; the initial value may be subtracted from $\xi$, so we can assume that it is zero. \item \textit{Utility maximization}: \index{utility maximization} maximizing the expected utility of the final capital over some set of admissible self-financing portfolios. \end{itemize}
An important issue is the meaning of the stochastic integral in \eqref{wti:capital} or \eqref{wti:represent}. When the process $X$ is a semimartingale, it can be understood as an It\^o integral. In this case \eqref{wti:capital} is a kind of It\^o representation; see e.g.\ \cite{KS} for an extensive coverage of this topic. When the It\^o integral is understood in some extended sense, the integral representation may exist under very mild assumptions and may be non-unique. For example, if $X=W$ is a Wiener process, then, as was shown in \cite{dudley}, any random variable can be represented as the final value of some self-financing portfolio whose integrand $\psi$ satisfies only $\int_0^T \psi_s^2 ds<\infty$ a.s., and this for any value of the initial capital.
However, empirical studies suggest that financial markets often exhibit long-range dependence (in contrast to stochastic volatility, which can be both smooth and rough, i.e., can demonstrate both long- and short-range dependence). The standard model for the phenomenon of long-range dependence is the fractional Brownian motion with Hurst index $H>1/2$. It is not a semimartingale, so the usual It\^o integration theory is not available. The standard approach now is to define the stochastic integral in such models as a pathwise integral; namely, one usually considers the fractional integral, see \cite{bender-sottinen-valkeila,zahle}.
The models based on the fractional Brownian motion usually admit arbitrage possibilities, i.e.\ there are self-financing portfolios $\psi$ such that $V^\psi(0)\le 0$, $V^\psi(T)\ge 0$ almost surely, and $V^\psi(T)>0$ with positive probability. In the fractional Black--Scholes model, where $X_t=X_0\exp\{at+bB_t^H\}$, and $B^H$ is a fractional Brownian motion with $H>1/2$, the existence of arbitrage was shown in \cite{rogers}. Specifically, the strategy constructed there was of a ``doubling'' type, blowing up the position in the case of negative values; thus the potential intermediate losses could be arbitrarily large. It is worth mentioning that such arbitrage exists even in the classical Black--Scholes model: the aforementioned result by Dudley allows one to gain any positive final capital from zero initial capital by using a similar ``doubling'' strategy. For this reason, one usually restricts the class of admissible strategies by imposing a lower bound on the running value:\index{non-doubling strategy} \begin{equation}\label{wti:eq:nds} V^\psi(t)\ge -a,\quad t\in(0,T), \end{equation} which in particular disallows the ``doubling'' strategies. However, in the fractional Black--Scholes model, the arbitrage exists even in the class of strategies satisfying \eqref{wti:eq:nds}, as was shown in \cite{Cheridito1}.
There are several ways to exclude arbitrage in the fractional Brownian model\index{fractional Brownian motion}. One possibility is to restrict the class of admissible strategies. For example, in \cite{Cheridito1} the absence of arbitrage is proved under the further restriction that the interval between subsequent trades is bounded from below (i.e.\ high frequency trading is prohibited). Another possibility is to add to the fractional Brownian motion an independent Wiener process, thus getting the so-called mixed fractional Brownian motion\index{mixed fractional Brownian motion} $M^H = B^H + W$. The absence of arbitrage in such mixed models was addressed in \cite{andrmish,Cheridito}. In \cite{andrmish}, it was shown that there is no arbitrage in the class of self-financing strategies $\gamma_t = f(t,M^H_t)$ of Markov type, depending only on the current value of the stock. In \cite{Cheridito}, it was shown that for $H\in(3/4,1)$ the distribution of the mixed fractional Brownian motion on a finite interval is equivalent to that of a Wiener process. As a result, in such models there are no arbitrage strategies satisfying the non-doubling assumption \eqref{wti:eq:nds}. A more detailed exposition concerning arbitrage in models based on fractional Brownian motion is given in \cite{bender-sottinen-valkeila1}.
The replication question, i.e.\ the question when a random variable can be represented as a pathwise (fractional) integral in models with long memory, was studied in many articles, even in the case where arbitrage opportunities are present. The first results were established in \cite{msv}, where it was shown that a random variable $\xi$ has representation \eqref{wti:represent} with respect to fractional Brownian motion if it is the final value of some H\"older continuous adapted process. The assumption of H\"older continuity might seem too restrictive at first glance. However, the article \cite{msv} gives numerous examples of random variables satisfying this assumption.
The results of \cite{msv} were extended in \cite{shev-viita}, where similar results were shown for a wide class of Gaussian integrators. The article \cite{mish-shev} extended them even further and studied when a combination of H\"older continuity of the integrator and small ball estimates leads to the existence of representation \eqref{wti:represent}.
For the mixed fractional Brownian motion, the question of replication was considered in \cite{shev-viita}. The authors defined the integral with respect to the fractional Brownian motion in the pathwise sense and that with respect to the Wiener process in the extended It\^o sense and showed, similarly to the result of \cite{dudley}, that any random variable has representation \eqref{wti:represent}.
It is worth mentioning that the representations constructed in \cite{msv,mish-shev,shev-viita} involve integrands of ``doubling'' type, so in particular they do not satisfy the admissibility assumption \eqref{wti:eq:nds}.
Our starting point for this article was to see which contingent claims are representable as final values of H\"older continuous adapted processes. It turned out that the situation is quite transparent whenever the Gaussian integrator generates the same flow of sigma-fields as a Wiener process. As a result, we came up with the concept of a Wiener-transformable financial market, which turned out to be a fruitful idea, as many models of financial markets are Wiener-transformable; we consider numerous examples of such models in this paper. Moreover, the novelty of the present results is that we prove representation theorems that, in financial interpretation, are equivalent to the possibility of hedging contingent claims in the class of \textit{bounded} strategies. While even with such strategies the non-doubling assumption \eqref{wti:eq:nds} may fail, boundedness seems a feasible admissibility assumption.
More specifically, in the present paper we study the replication and utility maximization problems for a broad class of asset price processes which are obtained by a certain adapted transformation of a Wiener process; we call such processes \textit{Wiener-transformable} and provide several examples. We concentrate mainly on non-semimartingale markets, because semimartingale markets have been studied thoroughly in the literature. We would like to draw the attention of the reader once again to the fact that the possibility of representation means that arbitrage is possible in the considered class of strategies, even though these strategies may be bounded, while in a narrower and more familiar class of strategies the market can be arbitrage-free. Therefore, our results demonstrate rather subtle differences in the properties of markets under different classes of strategies.
The article is organized as follows. In Section~\ref{wti:sec:2}, we recall the basics of pathwise integration in the fractional sense. In Section~\ref{wti:sec:3}, we prove a new representation result establishing the existence of an integral representation with a bounded integrand, which is of particular importance in financial applications. We also define the main object of study, Wiener-transformable markets, and provide several examples. Section~\ref{wti:sec:4} is devoted to the application of the representation results to utility maximization problems.
\section{Elements of fractional calculus}\label{wti:sec:2} As announced in the introduction, the integral with respect to Wiener-transformable processes will be defined in the pathwise sense, as a fractional integral. Here we present the basic facts on fractional integration; for more details see \cite{samko,zahle}. Consider functions $f,g:[0,T]\rightarrow \mathbb{R}$, and let $[a,b]\subset [0,T]$. For $\alpha\in (0,1)$ define the Riemann--Liouville fractional derivatives on the finite interval $[a,b]$\index{Riemann-Liouville fractional derivative} \begin{gather*} \big(\mathcal{D}_{a+}^{\alpha}f\big)(x)=\frac{1}{\Gamma(1-\alpha)}\bigg(\frac{f(x)}{(x-a)^\alpha}+\alpha \int_{a}^x\frac{f(x)-f(u)}{(x-u)^{1+\alpha}}du\bigg)1_{(a,b)}(x),\end{gather*} \begin{gather}\label{wti:equ:dif}\big(\mathcal{D}_{b-}^{ \alpha}g\big)(x)=\frac{1} {\Gamma(1-\alpha)}\bigg(\frac{g(x)}{(b-x)^{ \alpha}}+ \alpha \int_{x}^b\frac{g(x)-g(u)}{(u-x)^{1+\alpha}}du\bigg)1_{(a,b)}(x). \end{gather} Assuming that
$\mathcal{D}_{a+}^{\alpha}f\in L_1[a,b]$, $\mathcal{D}_{b-}^{1-\alpha}g_{b-}\in L_\infty[a,b]$, where $g_{b-}(x) = g(x) - g(b)$, the generalized Lebesgue--Stieltjes integral\index{generalized Lebesgue--Stieltjes integral} is defined as \begin{equation*}\int_a^bf(x)dg(x)= \int_a^b\big(\mathcal{D}_{a+}^{\alpha}f\big)(x) \big(\mathcal{D}_{b-}^{1-\alpha}g_{b-}\big)(x)dx. \end{equation*}
Let the function $g$ be $\theta$-H\"{o}lder continuous, $g\in C^\theta[0,T]$ with $\theta\in(\frac12,1)$, i.e.\ $$ \sup_{t,s\in[0,T],t\neq s}\frac{\wtiabs{g(t)-g(s)}}{\wtiabs{t-s}^\theta}<\infty. $$
In order to integrate with respect to the function $g$ and to find an upper bound for the integral, fix some $\alpha \in(1-\theta,1/2)$ and introduce the following norm: \begin{gather*}
\|f\|_{\alpha,[a,b]} = \int_a^b \left(\frac{|{f(s)}|}{(s-a)^\alpha} + \int_a^s \frac{|{f(s)-f(z)}|}{(s-z)^{1+\alpha}}dz\right)ds. \end{gather*}
For simplicity we abbreviate $\|\cdot\|_{\alpha,t} = \|\cdot\|_{\alpha,[0,t]}$. Denote $$\Lambda_\alpha(g):= \sup_{0\le s<t\le T} |{\mathcal{D}_{t-}^{1-\alpha}g_{t-}}(s)|.$$ In view of H\"{o}lder continuity, $\Lambda_\alpha(g)<\infty$.
Then for any $t\in(0,T]$ and any $f$ with $\|f\|_{\alpha,t}<\infty$, the integral $\int_0^t f(s) dg(s)$ is well defined as a generalized Lebesgue--Stieltjes integral, and the following bound is evident: \begin{gather}\label{wti:equ:ineq}
\Big|{\int_0^t f(s)dg(s)}\Big|\le \Lambda_\alpha(g) \|f\|_{\alpha,t}. \end{gather} It is well known that in the case where $f$ is $\beta$-H\"{o}lder continuous, $f\in C^\beta[a,b]$, with $\beta+\theta>1$, the generalized Lebesgue--Stieltjes integral $\int_a^bf(x)dg(x)$ exists, equals the limit of Riemann sums, and admits the bound \eqref{wti:equ:ineq} for any $\alpha \in(1-\theta, \beta\wedge 1/2)$.
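To make the last statement concrete, here is a short numerical sketch (in Python; it is not part of the formal exposition, and the test functions are arbitrary smooth choices) that approximates $\int_0^1 f(x)\,dg(x)$ by forward Riemann sums; for smooth $g$ the generalized Lebesgue--Stieltjes integral coincides with $\int_0^1 f(x)g'(x)\,dx$, so the convergence of the sums can be checked directly.
\begin{verbatim}
import numpy as np

# Forward Riemann sums for int_a^b f(x) dg(x); for beta-Hoelder f and
# theta-Hoelder g with beta + theta > 1 the sums converge to the
# generalized Lebesgue-Stieltjes integral.  The test functions below are
# arbitrary smooth (hence Lipschitz) illustrative choices.
def riemann_stieltjes(f, g, a, b, n):
    s = np.linspace(a, b, n + 1)
    return np.sum(f(s[:-1]) * np.diff(g(s)))

f = lambda x: np.cos(x)
g = lambda x: x + 0.5 * np.sin(3.0 * x)
gp = lambda x: 1.0 + 1.5 * np.cos(3.0 * x)          # derivative of g

# reference value int_0^1 f(x) g'(x) dx, computed on a fine grid
exact = riemann_stieltjes(lambda x: f(x) * gp(x), lambda x: x, 0.0, 1.0, 200000)
for n in (10, 100, 1000, 10000):
    print(n, riemann_stieltjes(f, g, 0.0, 1.0, n) - exact)
\end{verbatim}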
\section{Representation results for Gaussian and Wiener-transformable processes}\label{wti:sec:3}
Throughout the paper, let $(\Omega, \mathcal{F}, {\bf P})$ be a complete probability space supporting all the stochastic processes mentioned below, and let $\mathbb{F} = \{\mathcal F_t,t\in[0,T]\}$ be a filtration satisfying the standard assumptions. In what follows, the adaptedness of a process $X = \{X(t),t\in[0,T]\}$ is understood with respect to $\mathbb{F}$, i.e.\ $X$ is called adapted if for any $t\in[0,T]$, $X(t)$ is $\mathcal{F}_t$-measurable.
We start with representation results, which supplement those of \cite{mish-shev}.
Consider a continuous centered Gaussian process $G$ whose incremental variance satisfies the following two-sided power bounds for some $H\in (1/2,1)$. \begin{itemize}
\item[$(A)$] There exist $C_1, C_2>0$ such that for any $s,t\in [0,T]$
\begin{equation}\label{wti:eq:helix}
C_1\left|t-s\right|^{2H}\le {\bf E}\left|G(t)-G(s)\right|^2\le C_2 \left|t-s\right|^{2H}.
\end{equation} \end{itemize} Assume additionally that the increments of $G$ are positively correlated. More exactly, let the following condition hold \begin{itemize} \item[$(B)$] For any $0 \le s_1 \le t_1 \le s_2 \le t_2\le T$ $$ {\bf E}\left(G({t_1})-G({s_1})\right)\left(G({t_2})-G({s_2})\right)\ge0.$$ \end{itemize} A process satisfying \eqref{wti:eq:helix} is often referred to as a \textit{quasi-helix}.\index{quasi-helix}
Note that the right inequality in \eqref{wti:eq:helix} implies that \begin{equation}\label{wti:eq:Gmodcont}
\sup_{t,s\in[0,T],\,t\neq s}\frac{|G(t)-G(s)|}{|t-s|^{H}\left|\log|t-s|\right|^{1/2}} <\infty \end{equation} almost surely (see e.g.\ p.~220 in \cite{lifshits}).
We will need the following small deviation estimate for sum of squares of Gaussian random variables, see e.g.\ \cite{lishao}.
\begin{lemma} \label{wti:lem:small} Let $\{\xi_i\}_{i=1,\ldots,n}$ be jointly Gaussian centered random variables. For all $x$ such that $0<x<\sum_{i=1}^n {\bf E} \xi^2_i$, it holds \begin{gather*} \wtipr{\sum_{i=1}^n\xi_i^2\le x}\leq\exp\left\{-\frac{\left(\sum_{i=1}^n {\bf E} {\xi_i^2}-x\right)^2}{\sum_{i,j=1}^n ( {\bf E} {\xi_i\xi_j})^2}\right\}. \end{gather*} \end{lemma}
\begin{theorem}\label{wti:thm:representation}
Let a centered Gaussian process $G$ satisfy $(A)$ and $(B)$, and let $\xi$ be a random variable such that there exists an adapted $r$-H\"older continuous process $Z$ with $Z(T) = \xi$. Then there exists a bounded adapted process $\psi$ such that $\left\|\psi\right\|_{\alpha,T}<\infty$ for some $\alpha\in \left(1-H,1\right)$ and $\xi$ admits the representation \begin{equation}\label{wti:reprez} \xi=\int_{0}^{T}\psi(s) dG(s), \end{equation} almost surely. \end{theorem} \begin{remark} A similar result was proved in \cite{mish-shev}, Theorem 4.1, which assumed \eqref{wti:eq:helix} with possibly different exponents on the right-hand and left-hand sides of the inequality. Having equal exponents allows us to establish the existence of a \textit{bounded} integrand $\psi$, thus extending the previous results. \end{remark} \begin{proof} To construct an integrand, we modify the ideas of \cite{mish-shev} and \cite{shalaiko}. Throughout the proof, $C$ will denote a generic constant and $C(\omega)$ a random constant; their values may change between lines.
Choose some $\alpha \in \big(1-H ,(r+1-H)\wedge \frac12\big)$.
We start with the construction of $\psi$. First take some $\theta\in (0,1)$, put $t_n = T-\theta^{n}$, $n\ge 1$, and let $\Delta_n = t_{n+1}-t_n$. It is easy to see that \begin{gather} T-t_n\le C\Delta_n. \label{wti:t_n-ineq} \end{gather} Denote for brevity $\xi_n = Z(t_n)$. Then, by the $r$-H\"older continuity of $Z$, $\wtiabs{\xi_n -\xi_{n+1}}\le C(\omega) \theta^{rn}$. Therefore, there exists some $N_0 = N_0(\omega)$ such that \begin{equation}\label{wti:eq:deltaxi} \wtiabs{\xi_n -\xi_{n+1}}\le n \theta^{rn} \end{equation} for all $n\ge N_0(\omega)$.
We construct the integrand $\psi$ inductively between the points $\{t_n,n\ge 1\}$. First let $\psi(t)=0$, $t\in[0,t_1]$. Assuming that we have already constructed $\psi(t)$ on $[0,t_n)$, define $V(t)=\int_0^t \psi(s)dG(s), t\in[0,t_n]$.
Consider some cases.
\underline{Case I.} $V(t_n)\neq \xi_{n-1}$. By Lemma 4.1 in \cite{mish-shev}, there exists an adapted process $\{\phi_n(t),t\in[t_n,t_{n+1}]\}$, bounded on $[t_n,t]$ for any $t\in(t_n,t_{n+1})$ and such that $\int_{t_n}^{t}\phi_n(s) d G(s)\to +\infty$ as $t\to t_{n+1}-$. Define a stopping time \begin{gather*}
\tau_n=\inf\left\{t\geq t_n: \int_{t_n}^{t}\phi_n(s) dG(s)\geq |\xi_n-V_{t_n}| \right\}, \end{gather*} and set \begin{gather*} \psi(t)=\phi_n(t) \operatorname{sign}\big(\xi_n-V(t_n)\big)\mathbf{1}_{[t_n,\tau_n]}(t), \,t\in[t_n,t_{n+1}). \end{gather*} It is obvious that $\int_{t_n}^{t_{n+1}}\psi(s)dG(s)=\xi_n-V(t_n)$ and $V(t_{n+1})=\xi_n$.
\underline{Case II.} $V(t_n)=\xi_{n-1}$. We consider a uniform partition $s_{n,k} = t_n + k\delta_n$, $k=1,\ldots,n$ of $[t_n,t_{n+1}]$ with a mesh $\delta_n=\Delta_n/n$ and an auxiliary function \begin{gather*} \phi_n(t)=a_n\sum_{k=0}^{n-1} \big(G(t)-G(s_{n,k})\big)\mathbf{1}_{[s_{n,k},s_{n,k+1})}(t), \end{gather*} where $a_n = n^{-2}\theta^{(\alpha-H-1)n}$. Since $\phi_n$ is piecewise H\"older continuous of order up to $H$, by the change of variables formula (Theorem 4.3.1 in \cite{zahle}) \begin{equation*} \int_{t_n}^{t_{n+1}} \phi_n(t) dG(t) = a_n \sum_{k=0}^{n-1} \big(G(s_{n,k+1})-G(s_{n,k})\big)^2. \end{equation*}
Define a stopping time \begin{gather*}
\sigma_n=\inf\Big\{t\geq t_n: \int_{t_n}^t \phi_n(s)dG(s)\geq |\xi_n-\xi_{n-1}|\Big\}\wedge t_{n+1}, \end{gather*} and set \begin{gather*} \psi(t)=\operatorname{sign}(\xi_n-\xi_{n-1})\phi_n(t)\mathbf{1}_{[t_n,\sigma_n]}(t),\, t\in[t_n,t_{n+1}). \end{gather*} Now we want to ensure that, almost surely, $V(t_{n}) = \xi_{n-1}$ for all $n$ large enough. By construction, Case I is always succeeded by Case II. So we need to ensure that $\sigma_n <t_{n+1}$ for all $n$ large enough, equivalently, that $$ a_n \sum_{k=0}^{n-1} \big(G(s_{n,k+1})-G(s_{n,k})\big)^2> \wtiabs{\xi_n - \xi_{n-1}}. $$ Thanks to \eqref{wti:eq:deltaxi}, it is enough to ensure that $$
\sum_{k=0}^{n-1} \big(G(s_{n,k+1})-G(s_{n,k})\big)^2 > a_n^{-1}n\theta^{rn} = n^2 \theta^{(r+H +1-\alpha)n} $$ for all $n$ large enough. Define $\xi_k = G(s_{n,k+1})-G(s_{n,k})$, $k=0,\dots,n-1$. Thanks to our choice of $\alpha$, $r+H+1-\alpha > 2H$, so $n^2 \theta^{(r+H +1-\alpha)n} < C_1 n^{1-2H}\theta^{2Hn}$ for all $n$ large enough. Therefore, in view of \eqref{wti:eq:helix}, \begin{gather*} \sum_{k=0}^{n-1} {\bf E} \xi_k^2 \ge C_1 n \delta_n^{2H} = C_1 n^{1-2H}\theta^{2H n} > n^2 \theta^{(r+H +1-\alpha)n}, \end{gather*} so we can use Lemma \ref{wti:lem:small}. Using $(A)$ and $(B)$, estimate \begin{gather*} \sum_{i,j=0}^{n-1} \big( {\bf E} \xi_i\xi_j \big)^2\le \max_{0\le i,j\le n-1} {\bf E} \xi_i\xi_j \sum_{i,j=0}^{n-1} {\bf E} \xi_i\xi_j\\
\le C_2 \delta_n^{2H} {\bf E} \Big( \sum_{i=0}^{n-1} \xi_i\Big)^2 = C_2 \delta_n^{2H} {\bf E} \big(G(t_{n+1}) - G(t_n) \big)^2\\
\le C_2^2 \delta_n^{2H}\Delta_n^{2H} = C_2^2 n^{-2H} \Delta_n^{4H} \le C_2^2 n^{-2H} \theta^{4H n}. \end{gather*} Hence, by Lemma~\ref{wti:lem:small}, \begin{gather*} \wtipr{ \sum_{k=0}^{n-1} \big(G(s_{n,k+1})-G(s_{n,k})\big)^2 \le n^2 \theta^{(r+H +1-\alpha)n}}\\ \le \exp\left\{ - \frac{\big(C_1 n^{1-2H}\theta^{2H n} - n^2 \theta^{(r+H +1-\alpha)n}\big)^2}{C_2^2 n^{-2H} \theta^{4H n}}\right\} \le \exp \left\{ - C n^{2 -2H}\right\}. \end{gather*} Therefore, by the Borel--Cantelli lemma, almost surely there exists some $N_1(\omega)\ge N_0(\omega)$ such that for all $n\ge N_1(\omega)$ $$ \sum_{k=0}^{n-1} \big(G(s_{n,k+1})-G(s_{n,k})\big)^2 > n^2 \theta^{(r+H +1-\alpha)n}, $$ so, as explained above, we have $V(t_n) = \xi_{n-1}$, $n\ge N_1(\omega)$.
Since all functions $\phi_n$ are bounded, we have that $\psi$ is bounded on $[0,t_N]$ for any $N\ge 1$. Further, thanks to \eqref{wti:eq:Gmodcont}, for $t\in[t_n,t_{n+1}]$ with $n\ge N_1(\omega)$, \begin{equation}\label{wti:eq:psibound} \begin{gathered} \wtiabs{\psi(s)}\le C(\omega) a_n \delta_n^H\wtiabs{\log \delta_n}^{1/2} \le C(\omega) n^{-2}\theta^{(\alpha-H-1)n} n^{-H} \theta^{Hn} n^{1/2}\\ = C(\omega) n^{\alpha - H - 3/2}\theta^{(\alpha-1)n}. \end{gathered} \end{equation} Therefore, $\psi$ is bounded (moreover, $\psi(t)\to 0$, $t\to T-$).
Further, by construction, $\wtinorm{\psi}_{\alpha,t_N}<\infty$ for any $N\ge 1$. Moreover, $\wtiabs{V(t)-\xi_{N-1}}\le \wtiabs{\xi_N - \xi_{N-1}}$, $t\in[t_{N},t_{N+1}]$. Thus, it remains to verify that $\wtinorm{\psi}_{\alpha,[t_N,T]}<\infty$ and $\int_{t_N}^T \psi(s) dG(s)\to 0$, $N\to\infty$, which would follow from $\wtinorm{\psi}_{\alpha,[t_N,T]}\to 0$, $N\to\infty$.
Let $N\ge N_1(\omega)$. Write \begin{equation*}
\wtinorm{\psi}_{\alpha,[t_N,T]} = \sum_{n=N}^\infty \int_{t_n}^{t_{n+1}}\left(\frac{|\psi(s)|}{(s-t_N)^{\alpha}}+\int_{t_N}^s \frac{|\psi(s)-\psi(u)|}{|s-u|^{1+\alpha}}du\right) ds. \end{equation*} Thanks to \eqref{wti:eq:psibound}, \begin{gather*}
\int_{t_n}^{t_{n+1}}\frac{|\psi(s)|}{(s-t_N)^{\alpha}} ds \le C(\omega)\Delta_n^{1-\alpha}n^{\alpha - H - 3/2}\theta^{(\alpha-1)n} = C(\omega)n^{\alpha - H - 3/2}. \end{gather*} Further, \begin{gather*}
\int_{t_N}^{t_{n+1}}\int_{t_n}^s\frac{|\psi(s)-\psi(u)|}{|s-u|^{1+\alpha}}du\, ds\\ =
\sum_{k=1}^n \int_{s_{n,k-1}}^{s_{n,k}}\left(\int_{t_N}^{t_n}+\int_{t_n}^{s_{n,k-1}}+\int_{s_{n,k-1}}^{s} \right)\frac{|\psi(s)-\psi(u)|}{|s-u|^{1+\alpha}}du\, ds=:I_1+I_2+I_3. \end{gather*} Start with $I_1$, observing that $\psi$ vanishes on $(\sigma_n,t_{n+1}]$: \begin{gather*}
I_1\leq \int_{t_n}^{t_{n+1}}\sum_{j=N}^n \int_{t_{j-1}}^{t_j}\frac{|\psi(s)|+|\psi(u)|}{|s-u|^{1+\alpha}}du\, ds\\ \leq C(\omega) n^{\alpha - H - 3/2}\theta^{(\alpha-1)n} \int_{t_n}^{t_{n+1}}(s-t_n)^{-\alpha}ds\\ + C(\omega)\sum_{j=N}^{n-1} j^{\alpha-H-3/2}\theta^{(\alpha-1)j} \int_{t_n}^{t_{n+1}}(s-t_{j+1})^{-\alpha}ds\\ \leq C(\omega)n^{\alpha - H - 3/2}\theta^{(\alpha-1)n}\Delta_n^{1-\alpha} + C(\omega) \sum_{j=N}^{n-1}j^{\alpha-H-3/2}\theta^{(\alpha-1)j} \Delta_n^{1-\alpha}\\ = C(\omega)n^{\alpha - H - 3/2} + C(\omega) \sum_{j=N}^{n-1}j^{\alpha-H-3/2}\theta^{(\alpha-1)(j-n)}. \end{gather*} Similarly, \begin{gather*}
I_2\leq C(\omega)n^{\alpha - H - 3/2}\theta^{(\alpha-1)n} \sum_{k=1}^n \int_{s_{n,k-1}}^{s_{n,k}}\int_{t_n}^{s_{n,k-1}}|s-u|^{-1-\alpha}du\, ds \\ \le C(\omega)n^{\alpha - H - 3/2}\theta^{(\alpha-1)n} \sum_{k=1}^n \int_{s_{n,k-1}}^{s_{n,k}} (s-s_{n,k-1})^{-\alpha}ds\\ \le C(\omega)n^{\alpha - H - 3/2}\theta^{(\alpha-1)n} n \delta_n^{1-\alpha}=C(\omega)n^{2\alpha - H - 3/2}. \end{gather*} Finally, assuming that $\sigma_n\in [s_{n,l-1},s_{n,l})$, \begin{gather*}
I_3\leq C(\omega)\sum_{k=1}^{l-1} \int_{s_{n,k-1}}^{s_{n,k}}\int_{s_{n,k-1}}^s a_n\frac{(s-u)^{H}|\log (s-u)|^{1/2}}{(s-u)^{1+\alpha}}du\, ds\\
+ \int_{s_{n,l-1}}^{\sigma_n}\int_{s_{n,l-1}}^s\frac{|\psi(s)-\psi(u)|}{|s-u|^{1+\alpha}}du\, ds+\int_{\sigma_n}^{s_{n,l}}\int_{s_{n,l-1}}^{\sigma_n}\frac{|\psi(s)-\psi(u)|}{|s-u|^{1+\alpha}}du\, ds\\
\le C(\omega)a_n\sum_{k=1}^n \int_{s_{n,k-1}}^{s_{n,k}}(s-s_{n,k-1})^{H-\alpha}|\log(s-s_{n,k-1})|^{1/2}ds\\
+ C(\omega) n^{\alpha - H - 3/2}\theta^{(\alpha-1)n}\int_{\sigma_n}^{s_{n,l}}\int_{s_{n,l-1}}^{\sigma_n}\frac{1}{|s-u|^{1+\alpha}}du\, ds\\ \leq C(\omega)a_n n\delta_n^{H+1-\alpha}|\log \delta_n|^{1/2} + C(\omega) n^{\alpha - H - 3/2}\theta^{(\alpha-1)n}\delta_n^{-\alpha}\\ = C(\omega)n^{\alpha-H-3/2} + C(\omega)n^{2\alpha - H - 3/2}\le C(\omega)n^{2\alpha - H - 3/2}. \end{gather*}
Gathering all estimates we get \begin{gather*}
\int_{t_N}^T \big|\big(\mathcal{D}^\alpha_{t_N+}\psi\big)(s)\big|ds \leq C(\omega)\sum_{n=N}^\infty \Big(n^{2\alpha - H - 3/2} + \sum_{j=N}^{n-1}j^{\alpha-H-3/2}\theta^{(\alpha-1)(j-n)}\Big)\\ \le C(\omega)\Big( N^{2\alpha - H-1/2} + \sum_{j=N}^ \infty j^{\alpha-H-3/2}\sum_{n=j+1}^\infty \theta^{(1-\alpha)(n-j)} \Big)\\ \le C(\omega) N^{2\alpha - H-1/2}, \end{gather*} which implies that $\wtinorm{\psi}_{\alpha,[t_N,T]}\to 0$, $N\to\infty$, finishing the proof. \end{proof}
Now we turn to the main object of this article.\index{Wiener-transformable process}
\begin{definition}\label{wti:def1} A Gaussian process $G=\{G(t), t\in{\mathbb{R}}^+\}$ is called $m$-Wiener-trans\-for\-mable if there exists an $m$-dimensional Wiener process $W=\{W(t), t\in{\mathbb{R}}^+\}$ such that $G$ and $W$ generate the same filtration, i.e.\ for any $t\in{\mathbb{R}}^+$ $$\mathcal{F}_t^G=\mathcal{F}_t^W.$$
We say that $G$ is $m$-Wiener-transformable to $W$ (evidently, the process $W$ can be non-unique).
\end{definition}
\begin{remark}\begin{itemize}
\item[$(i)$] In the case when $m=1$ we say that the process $G$ is Wiener-trans\-formable.
\item[$(ii)$] Being Gaussian and hence having moments of any order, an $m$-Wiener-transformable process admits at each time $t\in{\mathbb{R}}^+$
the martingale representation $G(t)= {\bf E}(G(t))+\sum_{i=1}^m\int_0^tK_i(t,s)dW_i(s),$
where $K_i(t,s)$ is $\mathcal{F}_s^W$-measurable for any $0\leq s\leq t$ and $\int_0^t {\bf E}(K_i(t,s))^2ds<\infty$ for any $t\in{\mathbb{R}}^+$.
\end{itemize}
\end{remark}
Now let the random variable $\xi$ be ${\mathcal{F}}_T^W$-measurable, $ {\bf E}\xi^2<\infty$. Then in view of martingale representation theorem, $\xi$ can be represented as \begin{equation}\label{wti:mart-repr} \xi = {\bf E} \xi + \int_0^T \vartheta(t) dW(t), \end{equation} where $\vartheta$ is an adapted process with $\int_0^T {\bf E}\vartheta(t)^2 dt<\infty$.
As explained in the introduction, we are interested in when $\xi$ can be represented in the form $$ \xi = \int_0^T \psi(s) dG(s), $$ where the integrand is adapted and the integral is understood in the pathwise sense.
\begin{theorem}\label{wti:thm1} Let the following conditions hold. \begin{itemize} \item[$(i)$] The Gaussian process $G$ satisfies conditions $(A)$ and $(B)$. \item[$(ii)$] The stochastic process $\vartheta$ in representation \eqref{wti:mart-repr} satisfies \begin{equation}\label{wti:thetaassump}
\int_{0}^{T}|\vartheta(s)|^{2p}ds<\infty \end{equation} a.s.\ with some $p> 1$. \end{itemize}
Then there exists a bounded adapted process $\psi$ such that $\left\|\psi\right\|_{\alpha,T}<\infty$ for some $\alpha\in \left(1-H,\frac{1}{2}\right)$ and $\xi$ admits the representation
\begin{equation*} \xi=\int_{0}^{T}\psi(s) dG(s), \end{equation*} almost surely. \end{theorem}
\begin{remark} As it was mentioned in \cite{mish-shev}, it is sufficient to require the properties $(A)$ and $(B)$ to hold on some subinterval $[T-\delta,T]$. Similarly, it is enough to require in $(ii)$ that $\int_{T-\delta}^T |\vartheta(t)|^{2p}dt<\infty$ almost surely. \end{remark}
First we prove a simple result establishing H\"older continuity of It\^o integral.
\begin{lemma}\label{wti:lem1}
Let $\vartheta=\{\vartheta(t), t\in [0,T]\}$ be a real-valued progressively measurable process such that for some $p\in(1,+\infty]$ $$\int_{0}^{T}|\vartheta(s)|^{2p}ds<\infty$$ a.s. Then the stochastic integral $\int_{0}^{t}\vartheta(s)dW(s)$ is H\"{o}lder continuous of any order up to $\frac{1}{2}-\frac{1}{2p}$. \end{lemma} \begin{proof} First note that if there exist non-random positive constants $a,C$ such that for any $s,t\in [0,T]$ with $s<t$ $$ \int_{s}^{t}\vartheta^2(u)du \le C(t-s)^a, $$ then $\int_0^t \vartheta(s)dW(s)$ is H\"older continuous of any order up to $a/2$. Indeed, in this case by the Burkholder inequality, for any $r>1$ and $s,t\in [0,T]$ with $s<t$ $$
{\bf E} \left| \int_s^t \vartheta(u) dW(u)\right|^r\le C_r {\bf E} \left( \int_s^t \vartheta^2(u) du\right)^{r/2} \le C (t-s)^{ar/2}, $$ so by the Kolmogorov--Chentsov theorem, $\int_0^t \vartheta(s)dW(s)$ is H\"older continuous of any order up to $\frac{1}{r}\left(\frac{ar}2-1\right) = \frac{a}2 - \frac{1}{r}$. Since $r$ can be arbitrarily large, we deduce the claim.
Now let for $n\ge 1$, $\vartheta_n(t) = \vartheta(t)\mathbf{1}_{\int_0^t |\vartheta(s)|^{2p} ds\le n}$, $t\in[0,T]$. By the H\"older inequality, for any $s,t\in [0,T]$ with $s<t$ $$
\int_s^t \vartheta_n^2(u)du \le (t-s)^{1-1/p}\left(\int_s^t |\vartheta(u)|^{2p} du\right)^{1/p} \le n^{1/p}(t-s)^{1-1/p}. $$
Therefore, by the above claim, $\int_0^t \vartheta_n(s)dW(s)$ is a.s.\ H\"older continuous of any order up to $\frac12 - \frac{1}{2p}$. However, $\vartheta_n$ coincides with $\vartheta$ on $\Omega_n = \{\int_0^T |\vartheta(t)|^{2p} dt\le n\}$. Consequently, $\int_0^t \vartheta(s)dW(s)$ is a.s.\ H\"older continuous of any order up to $\frac12 - \frac{1}{2p}$ on $\Omega_n$. Since $ {\bf P}(\bigcup_{n\ge 1} \Omega_n) = 1$, we arrive at the statement of the lemma. \end{proof}
\noindent \textbf{Proof of Theorem~\ref{wti:thm1}.} Define $$ Z(t) = {\bf E}\xi + \int_0^t \vartheta(s) dW(s). $$ This is an adapted process with $Z(T) = \xi$, moreover, it follows from Lemma~\ref{wti:lem1} that $Z$ is H\"older continuous of any order up to $\frac{1}{2}- \frac{1}{2p}$. Thus, the statement follows from Theorem~\ref{wti:thm:representation}.
In the case where one looks at improper representation, no assumptions on $\xi$ are needed. \begin{theorem}\label{wti:thm2} (Improper representation theorem)
Assume that an adapted Gaussian process $G=\{G(t)$, $t\in[0,T]\}$ satisfies conditions $(A)$ and $(B)$. Then for any random variable $\xi$ there exists an adapted process $\psi$ such that $\left\|\psi\right\|_{\alpha,t}<\infty$ for some $\alpha\in \left(1-H,\frac{1}{2}\right)$ and any $t\in[0,T)$, and $\xi$ admits the representation
\begin{equation*} \xi=\lim_{t\to T-}\int_{0}^{t}\psi(s) dG(s), \end{equation*} almost surely. \end{theorem} \begin{proof} The proof is exactly the same as for Theorem 4.2 in \cite{shev-viita}, so we just sketch the main idea.
Consider an increasing sequence of points $\{t_n,n\ge 1\}$ in $[0,T)$ such that $t_n\to T$, $n\to\infty$, and let $\{\xi_n,n\ge 1\}$ be a sequence of random variables such that $\xi_n$ is $\mathcal{F}_{t_n}$-measurable for each $n\ge 1$, and $\xi_n\to \xi$, $n\to\infty$, a.s. Set for convenience $\xi_0 = 0$. Similarly to Case I in Theorem~\ref{wti:thm:representation}, for each $n\ge 1$, there exists an adapted process $\{\phi_n(t),t\in[t_n,t_{n+1}]\}$, such that $\int_{t_n}^{t}\phi_n(s) d G(s)\to +\infty$ as $t\to t_{n+1}-$. For $n\ge 1$, define a stopping time \begin{gather*}
\tau_n=\inf\left\{t\geq t_n: \int_{t_n}^{t}\phi_n(s) dG(s)\geq |\xi_n-\xi_{n-1}| \right\} \end{gather*} and set \begin{gather*} \psi(t)=\phi_n(t) \operatorname{sign}\big(\xi_n-\xi_{n-1}\big)\mathbf{1}_{[t_n,\tau_n]}(t), \,t\in[t_n,t_{n+1}). \end{gather*} Then for any $n\ge 1$, we have $\int_{0}^{t_{n+1}}\psi(s)dG(s)=\xi_n$ and $\int_{0}^{t}\psi(s)dG(s)$ lies between $\xi_{n-1}$ and $\xi_n$ for $t\in[t_{n},t_{n+1}]$. Consequently, $\int_{0}^{t}\psi(s) dG(s)\to \xi$, $t\to T-$, a.s., as required. \end{proof}
Further we give several examples of Wiener-transformable Gaussian processes satisfying conditions $(A)$ and $(B)$ (for more detail and proofs see, e.g. \cite{mish-shev}) and formulate the corresponding representation results.
\subsection{Fractional Brownian motion}
Fractional Brownian motion\index{fractional Brownian motion} $B^H$ with Hurst parameter $H\in(0,1)$ is a centered Gaussian process with the covariance $$
{\bf E} B^H(t)B^H(s) = \frac{1}{2}\left(t^{2H}+s^{2H}-|t-s|^{2H}\right); $$ an extensive treatment of fractional Brownian motion is given in \cite{Mish}. For $H=\frac12$, fractional Brownian motion is a Wiener process; for $H\neq \frac12$ it is Wiener-transformable to the Wiener process $W$ via relations \begin{equation}\label{wti:fbmviawin} B^H(t)=\int_0^t K^H(t,s) dW(s) \end{equation} and \begin{equation}\label{wti:winviafbm}
W(t)=\int_0^t k^H(t,s)dB^H(s), \end{equation} see e.g.\ \cite{norros}.
Fractional Brownian motion with index $H\in(0,1)$ satisfies condition $(A)$ and satisfies condition $(B)$ if $H\in(\frac{1}{2},1)$.
Therefore, for $H\in(\frac12,1)$, any random variable whose integrand $\vartheta$ in the martingale representation \eqref{wti:mart-repr} satisfies \eqref{wti:thetaassump} with some $p>1$ admits the representation \eqref{wti:reprez}.
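As an illustration (not needed for the theory), the following Python sketch simulates fractional Brownian motion on a grid by the Cholesky decomposition of its covariance matrix and checks the incremental variance bound in $(A)$ empirically; the Hurst index and grid size are arbitrary choices.
\begin{verbatim}
import numpy as np

# Fractional Brownian motion on an equidistant grid via the Cholesky
# factorization of the covariance matrix; for fBm the incremental variance
# equals |t-s|^{2H}, so condition (A) holds with C_1 = C_2 = 1.
rng = np.random.default_rng(1)
H, T, n = 0.7, 1.0, 400
t = np.linspace(T / n, T, n)                       # grid without the origin
cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
L = np.linalg.cholesky(cov)
paths = rng.standard_normal((2000, n)) @ L.T       # 2000 sample paths of B^H

k = 25                                             # lag, so |t - s| = k*T/n
emp = np.mean((paths[:, k:] - paths[:, :-k]) ** 2) # empirical E|B^H(t)-B^H(s)|^2
print(emp, (k * T / n) ** (2 * H))                 # the two numbers are close
\end{verbatim}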
\subsection{Fractional Ornstein--Uhlenbeck process} Let $H\in (\frac{1}{2},1)$. Then the fractional Ornstein--Uhlenbeck process\index{fractional Ornstein--Uhlenbeck process} $Y=\{Y(t), t\ge 0\}$, involving fractional Brownian component and satisfying the equation $$ Y(t)=Y_0+\int_0^t(b-aY(s))ds+\sigma B^H(t),$$ where $a,b\in{\mathbb{R}}$ and $\sigma>0$, is Wiener-transformable to the same Wiener process as the underlying fBm $B^H$.
Consider a fractional Ornstein--Uhlenbeck process of the simplified form \begin{equation*} Y(t) = Y_0 + a \int_0^t Y(s) ds + B^H(t), \mbox{ } t \geq 0. \end{equation*} It satisfies condition $(A)$; if $a>0$, it satisfies condition $(B)$ as well.
As mentioned in \cite{mish-shev}, the representation theorem is also valid for a fractional Ornstein--Uhlenbeck process with a negative drift coefficient. Indeed, we can annihilate the drift with the help of the Girsanov theorem, transforming a fractional Ornstein--Uhlenbeck process with negative drift into a fractional Brownian motion $\widetilde{B}^H$. Then, assuming \eqref{wti:thetaassump}, we represent the random variable $\xi$ as $\xi=\int_0^T\psi(s)d\widetilde{B}^H(s)$ on the new probability space. Finally, we return to the original probability space: due to the pathwise nature of the integral, its value is not changed under an equivalent change of measure.
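A minimal simulation sketch for the simplified fractional Ornstein--Uhlenbeck equation (Euler scheme on an equidistant grid; the parameter values are arbitrary illustrative choices) may look as follows.
\begin{verbatim}
import numpy as np

# Euler scheme for Y(t) = Y_0 + a * int_0^t Y(s) ds + B^H(t), with B^H
# simulated by the Cholesky method as in the previous sketch.
rng = np.random.default_rng(2)
H, T, n, a, Y0 = 0.7, 1.0, 500, 0.5, 1.0
t = np.linspace(T / n, T, n)
cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
BH = np.concatenate(([0.0], np.linalg.cholesky(cov) @ rng.standard_normal(n)))

dt = T / n
Y = np.empty(n + 1)
Y[0] = Y0
for k in range(n):
    Y[k + 1] = Y[k] + a * Y[k] * dt + (BH[k + 1] - BH[k])
print(Y[-1])                                       # terminal value of the path
\end{verbatim}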
\subsection{Subfractional Brownian motion} Subfractional Brownian motion\index{subfractional Brownian motion} with index $H$, that is a centered Gaussian process $G^H=\left\{G^H(t), t \geq 0 \right\}$ with covariance function
$$ {\bf E} G^H(t) G^H(s) = t^{2H}+s^{2H} -\frac{1}{2}\left(|t+s|^{2H} + |t-s|^{2H} \right),$$
satisfies condition $(A)$ and condition $(B)$ for $H\in(\frac{1}{2},1)$.
\subsection{Bifractional Brownian motion} Bifractional Brownian motion\index{bifractional Brownian motion} with indices $A \in (0,1)$ and $K \in (0,1)$, that is a centered Gaussian process with covariance function
$$ {\bf E} G^{A,K}(t) G^{A,K}(s) = \frac{1}{2^K} \left( \left(t^{2A}+s^{2A}\right)^K - |t-s|^{2AK}\right),$$ satisfies condition $(A)$ with $H = AK$ and satisfies condition $(B)$ for $AK>\frac{1}{2}$.
\subsection{Geometric Brownian motion}\index{geometric Brownian motion} Geometric Brownian motion involving the Wiener component and having the form $$S=\left\{S(t)=S(0)\exp\left\{\mu t+\sigma W(t)\right\}, \;\; t\ge 0 \right\},$$ with $S(0)>0$, $\mu\in{\mathbb{R}}$, $\sigma>0$, is Wiener-transformable to the underlying Wiener process $W$. However, it does not satisfy the assumptions of Theorem~\ref{wti:thm1}. One should appeal here to the standard semimartingale tools, like the martingale representation theorem.
\subsection{Linear combination of fractional Brownian motions} Consider a collection of Hurst indices $\frac{1}{2}\le H_1< H_2<\ldots<H_m<1$ and independent fractional Brownian motions with corresponding Hurst indices $H_i$, $1\le i \le m$. Then the linear combination $\sum_{i=1}^{m}a_iB^{H_i}$ is $m$-Wiener-transformable to the Wiener process $W=(W_1,\ldots,W_m)$, where $W_i$ is the Wiener process to which the fractional Brownian motion $B^{H_i}$ is Wiener-transformable. In particular, the mixed fractional Brownian motion $M^H=W+B^H$, introduced in \cite{Cheridito}, is $2$-Wiener-transformable.
The linear combination $\sum_{i=1}^{m}a_iB^{H_i}$ satisfies condition $(A)$ with $H=H_1$, and condition $(B)$ whenever $H_1>1/2$.
We note that in the case of the mixed fractional Brownian motion, the existence of representation \eqref{wti:reprez} cannot be derived from Theorem~\ref{wti:thm1}, as we have $H = \frac12$ in this case. By slightly different methods, it was established in \cite{shev-viita} that an arbitrary $\mathcal{F}_T$-measurable random variable $\xi$ admits the representation $$ \xi = \int_0^T \psi(s) d\big(B^H(s) + W(s)\big), $$ where the integral with respect to $B^H$ is understood, as here, in the pathwise sense, and the integral with respect to $W$ in the extended It\^o sense. In contrast to Theorem \ref{wti:thm:representation}, we cannot at the moment establish this result for bounded strategies. Therefore, it would be interesting to study which random variables have representations with bounded $\psi$ in the mixed model.
\subsection{Volterra process}\index{Volterra process} Consider Volterra integral transform of Wiener process, that is the process of the form $G(t) = \int_0^t K(t,s) dW(s)$ with non-random kernel $K(t, \cdot) \in L_2[0,t]$ for $t\in[0,T]$. Let the constant $r\in[0,1/2)$ be fixed. Let the following conditions hold. \begin{itemize} \item[ $(B1)$] The kernel $K$ is non-negative on $[0,T]^2$ and for any $s\in [0,T]$ $K(\cdot,s)$ is non-decreasing in the first argument;
\item[$(B2)$] There exist constants $D_i>0, i=2,3$ and $H\in(1/2,1)$ such that $$|K(t_2,s) - K(t_1,s)| \leq D_2 |t_2-t_1|^{H}s^{-r},\quad s, t_1,t_2 \in [0,T] $$
and $$\ K(t,s)\leq D_3(t-s)^{H-1/2}s^{-r};$$ \end{itemize}
and at least one of the following conditions \begin{itemize}
\item[$(B3,a)$] There exists a constant $D_1>0$ such that $$
D_1|t_2-t_1|^{H}s^{-r}\leq|K(t_2,s) - K(t_1,s)|,\quad s, t_1,t_2 \in [0,T];
$$
\item[$(B3,b)$] There exists a constant $D_1>0$ such that $$
K(t,s)\geq D_1(t-s)^{H-1/2}s^{-r},\quad s, t \in [0,T].$$ \end{itemize} Then the Gaussian process $G(t) = \int_0^t K(t,s) dW(s)$ satisfies conditions $(A)$ and $(B)$ on any subinterval $[T-\delta, T]$ with $\delta\in (0,1)$.
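For illustration, the following sketch simulates a Volterra Gaussian process on a grid; the kernel $K(t,s)=(t-s)^{H-1/2}s^{-r}$ is only an illustrative choice matching the pointwise bounds in $(B2)$ and $(B3,b)$ with $D_1=D_3=1$, and we do not claim that it satisfies all of the conditions above.
\begin{verbatim}
import numpy as np

# Discretization of G(t) = int_0^t K(t,s) dW(s) by left-point Volterra sums,
# with the illustrative kernel K(t,s) = (t-s)^{H-1/2} s^{-r}.
rng = np.random.default_rng(4)
H, r, T, n = 0.7, 0.1, 1.0, 1000
dt = T / n
s = (np.arange(n) + 0.5) * dt                      # midpoints of the increments
dW = rng.standard_normal(n) * np.sqrt(dt)          # Wiener increments

def K(t, s):
    return np.where(s < t, np.clip(t - s, 0.0, None) ** (H - 0.5) * s ** (-r), 0.0)

tgrid = np.arange(1, n + 1) * dt
G = np.array([np.sum(K(t, s) * dW) for t in tgrid])
print(G[-1])                                       # a sample of G(T)
\end{verbatim}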
\section{Expected utility maximization in Wiener-transformable markets}\label{wti:sec:4}
\subsection{Expected utility maximization for unrestricted capital profiles}\index{utility maximization} Consider the problem of maximizing the expected utility. Our goal is to characterize the optimal asset profiles in the framework of markets with risky assets involving Gaussian processes satisfying the conditions of Theorem \ref{wti:thm1}. We follow the general approach described in \cite{ekel} and \cite{karat}, but apply its interpretation from \cite{Foll-Sch}. We fix $T>0$ and from now on consider ${\mathcal{F}}_T^W$-measurable random variables. Let the utility function $u:{\mathbb{R}}\rightarrow{\mathbb{R}}$ be strictly increasing and strictly concave, let $L^0(\Omega, {\mathcal{F}}_T^W, {\bf P})$ be the set of all ${\mathcal{F}}_T^W$-measurable random variables, and let the set of admissible capital profiles coincide with $L^0(\Omega, {\mathcal{F}}_T^W, {\bf P})$. Let $ {\bf P}^*$ be a probability measure on $(\Omega, {\mathcal{F}}_T^W)$, which is equivalent to $ {\bf P}$, and denote $\varphi(T)=\frac{d {\bf P}^*}{d {\bf P}}$. The budget constraint is given by $ {\bf E}_{ {\bf P}^*}(X)=w$, where $w>0$ is some number that can, in some cases but not necessarily, be interpreted as the initial wealth. Thus the budget set is defined as
$$\mathcal{B}=\left\{X\in L^0\left(\Omega, {\mathcal{F}}_T^{W}, {\bf P}\right)\cap L^1\left(\Omega, {\mathcal{F}}_T^W, {\bf P}^* \right)| {\bf E}_{ {\bf P}^*}(X)=w\right\}.$$ The problem is to find $X^*\in\mathcal{B}$ for which $ {\bf E}( u(X^*))=\max_{X\in\mathcal{B}} {\bf E}( u(X))$. Consider the function $I=(u')^{-1}$, i.e.\ the inverse of the marginal utility $u'$.
\begin{theorem}[{\cite[Theorem 3.34]{Foll-Sch}}]\label{wti:Theorem main for max} Let the following condition hold: \label{wti:Follmer-Sch}
The strictly increasing and strictly concave utility function $u:{\mathbb{R}}\rightarrow{\mathbb{R}}$ is continuously differentiable, bounded from above, and $$\lim_{x\to -\infty} u'(x)=+\infty.$$ Then the solution of this maximization problem has the form $$X^*=I(c\varphi(T)),$$ under the additional assumption that $ {\bf E}_{ {\bf P}^*}(X^*)=w$. \end{theorem} To connect the solution of the maximization problem with a specific $W$-transform\-able Gaussian process describing the price process, we consider the following items.
1. Consider random variable $\varphi(T)$, $\varphi(T)>0$ a.s. and let $ {\bf E}(\varphi(T))=1.$ Being the terminal value of a positive martingale $\varphi=\{\varphi_t= {\bf E}(\varphi(T)|{\mathcal{F}}_t^W), t\in[0,T]\}$, $\varphi(T)$ admits the following representation \begin{equation}\label{wti:fi1} \varphi(T)=\exp\left\{\int_0^T \vartheta(s)dW_s-\frac{1}{2}\int_0^T \vartheta^2(s)ds\right\}, \end{equation}
where $\vartheta$ is a real-valued progressively measurable process for which $$ {\bf P}\left\{\int_{0}^{T}\vartheta^2(s)ds<\infty\right\}=1.$$ Assume that $\vartheta$ satisfies \eqref{wti:thetaassump}. Then $\varphi(T)$ is the terminal value of a H\"{o}lder continuous process of any order up to $\frac{1}{2} - \frac{1}{2p}$.
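The following short Monte Carlo sketch (with the hypothetical deterministic choice $\vartheta(s)=s^{-1/4}$, which satisfies \eqref{wti:thetaassump} for every $p<2$, in particular for some $p>1$) simulates $\varphi(T)$ from representation \eqref{wti:fi1} and checks the normalization ${\bf E}\varphi(T)=1$.
\begin{verbatim}
import numpy as np

# Simulate phi(T) = exp( int_0^T theta dW - 0.5 int_0^T theta^2 ds ) for the
# deterministic theta(s) = s^{-1/4} and check E phi(T) = 1 by Monte Carlo.
rng = np.random.default_rng(5)
T, n, m = 1.0, 500, 20000
dt = T / n
s = (np.arange(n) + 0.5) * dt                      # midpoint grid
theta = s ** (-0.25)

dW = rng.standard_normal((m, n)) * np.sqrt(dt)     # m independent Wiener paths
log_phi = dW @ theta - 0.5 * np.sum(theta ** 2) * dt
print(np.exp(log_phi).mean())                      # close to 1
\end{verbatim}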
2. Consider $W$-transformable Gaussian process $G=\{G(t), t\in[0,T]\}$ satisfying conditions $(A)$ and $(B)$, and introduce the set \begin{gather*}
\mathcal{B}_w^G=\bigg\{\psi\colon [0,T]\times \Omega\to {\mathbb{R}}\ \Big|\ \text{$\psi$ is bounded ${\mathcal{F}}_t^W$-adapted, there exists a generalized}\\ \text{Lebesgue-Stieltjes integral}
\int_0^T \psi(s)dG(s), \;\; \text{and} \;\; {\bf E}\bigg(\varphi(T)\int_0^T\psi(s) dG(s)\bigg)=w\bigg\}. \end{gather*} \begin{theorem} \label{wti:TheoremIntRepresentation} Let the following conditions hold \begin{itemize}
\item[$(i)$] The Gaussian process $G$ satisfies conditions $(A)$ and $(B)$.
\item[$(ii)$] Function $I(x),x\in{\mathbb{R}}$ is H\"older continuous.
\item[$(iii)$] Stochastic process $\vartheta$ in representation \eqref{wti:fi1} satisfies \eqref{wti:thetaassump} with some $p>1$.
\item[$(iv)$] There exists $c\in{\mathbb{R}}$ such that $ {\bf E}(\varphi(T)I(c\varphi(T)))=w$. \end{itemize} Then the random variable $X^*=I(c\varphi(T))$ admits the representation \begin{equation}\label{wti:reprmain} X^*=\int_0^T\overline{\psi}(s)dG(s), \end{equation} with some $\overline{\psi}\in \mathcal{B}_w^G,$ and \begin{equation}\label{wti:maxim} {\bf E}( u(X^*))=\max_{\psi\in\mathcal{B}_w^G} {\bf E} \left(u\left(\int_0^T\psi(s)dG(s)\right)\right).\end{equation} \end{theorem}
\begin{proof} From Lemma \ref{wti:lem1} we have that for any $c \in \mathbb{R}$ the random variable $\xi= I(c \varphi(T))$ is the final value of a H\"{o}lder continuous process $$ U(t)= I(c \varphi(t)) = I\left(c \exp\left\{\int_0^t \vartheta(s) d W(s) - \frac{1}{2} \int_0^t \vartheta^2(s) ds\right\}\right), $$ whose H\"{o}lder exponent is positive. Together with $(i)$--$(iii)$ this allows us to apply Theorem \ref{wti:thm1} to obtain the existence of representation (\ref{wti:reprmain}). Assume now that (\ref{wti:maxim}) is not valid, and there exists $\psi_0 \in \mathcal{B}_w^G$ such that $ {\bf E}\left(\varphi(T)\int_0^T \psi_0(s) d G(s)\right)=w$ and $ {\bf E} u \left( \int_0^T \psi_0(s) d G(s) \right)> {\bf E} u(X^*)$. But in this case $\int_0^T \psi_0(s) d G(s)$ belongs to $\mathcal{B}$, and we get a contradiction with Theorem \ref{wti:Theorem main for max}. \end{proof} \begin{remark} Assuming only $(i)$ and $(iv)$, one can show in a similar way, but using Theorem~\ref{wti:thm2} instead of Theorem~\ref{wti:thm1}, that $$ {\bf E}( u(X^*))=\sup_{\psi\in\mathcal{B}_w^G} {\bf E} \left(u\left(\int_0^T\psi(s)dG(s)\right)\right). $$ However, the existence of a maximizer is not guaranteed in this case. \end{remark}
\begin{example} Let $u(x) = 1 - e^{- \beta x}$ be an exponential utility function with constant absolute risk aversion $\beta>0$. In this case $I(x) = - \frac{1}{\beta} \log ( \frac{x}{\beta})$. Assume that $$\varphi(T) = \exp \left\{ \int_0^T \vartheta(s) dW(s) - \frac{1}{2} \int_0^T\vartheta^2(s) ds \right\}$$ is chosen in such a way that \begin{equation}\begin{gathered} \label{wti:eq1ex41}
{\bf E} \left( \varphi(T) |\log \varphi(T)|\right)\\ = {\bf E} \bigg( \exp\left\{ \int_0^T \vartheta(s) dW(s) - \frac{1}{2} \int_0^T\vartheta^2(s) ds \right\}\\ \times
\left|\int_0^T \vartheta(s) dW(s) - \frac{1}{2} \int_0^T\vartheta^2(s) ds\right| \bigg)<\infty. \end{gathered}\end{equation} Then, according to Example 3.35 from \cite{Foll-Sch}, the optimal profile can be written as \begin{equation} \label{wti:ex41optprofile}
X^* = - \frac{1}{\beta} \left( \int_0^T \vartheta(s) dW(s) - \frac{1}{2} \int_0^T\vartheta^2(s) ds \right) + w + \frac{1}{\beta} H( {\bf P}^*| {\bf P}), \end{equation}
where $H( {\bf P}^*| {\bf P}) = {\bf E} \left(\varphi(T) \log \varphi(T)\right)$, condition (\ref{wti:eq1ex41}) ensures that $H( {\bf P}^*| {\bf P})$ is finite, and the maximal value of the expected utility is $$
{\bf E} (u(X^*)) = 1 - \exp\left\{-\beta w - H( {\bf P}^*| {\bf P}) \right\}. $$ Let $\varphi(T)$ be chosen in such a way that the corresponding process $\vartheta$ satisfies the assumption of Lemma \ref{wti:lem1}, i.e.\ $\vartheta$ satisfies \eqref{wti:thetaassump} with some $p>1$, and let the $W$-transformable process $G$ satisfy conditions $(A)$ and $(B)$.
Then we can conclude directly from representation (\ref{wti:ex41optprofile}) that the conditions of Theorem \ref{wti:thm1} hold. Therefore, the optimal profile $X^*$ admits the representation $X^* = \int_0^T \psi (s) d G(s).$ \end{example}
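A short Monte Carlo sketch of the optimal profile \eqref{wti:ex41optprofile} (with the hypothetical deterministic choice $\vartheta(s)=s^{-1/4}$, $\beta=2$ and budget $w=1$; for deterministic $\vartheta$ one has $H({\bf P}^*|{\bf P})=\frac12\int_0^T\vartheta^2(s)ds$) is given below; it also checks the budget constraint ${\bf E}(\varphi(T)X^*)=w$.
\begin{verbatim}
import numpy as np

# Optimal profile for the exponential utility u(x) = 1 - exp(-beta*x):
# X* = -(1/beta)(int theta dW - 0.5 int theta^2 ds) + w + H(P*|P)/beta,
# with H(P*|P) = E( phi(T) log phi(T) ) estimated by Monte Carlo.
rng = np.random.default_rng(6)
T, n, m, beta, w = 1.0, 500, 20000, 2.0, 1.0
dt = T / n
s = (np.arange(n) + 0.5) * dt
theta = s ** (-0.25)                               # hypothetical deterministic theta

dW = rng.standard_normal((m, n)) * np.sqrt(dt)
stoch = dW @ theta                                 # int_0^T theta dW
quad = np.sum(theta ** 2) * dt                     # int_0^T theta^2 ds
log_phi = stoch - 0.5 * quad
phi = np.exp(log_phi)

H = np.mean(phi * log_phi)                         # Monte Carlo estimate of H(P*|P)
X_star = -(stoch - 0.5 * quad) / beta + w + H / beta

print(H, 0.5 * quad)                               # both are close to 0.5*int theta^2
print(np.mean(phi * X_star), w)                    # budget constraint E_{P*}(X*) = w
\end{verbatim}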
\begin{remark} Similarly, under the same conditions as above, we can conclude that for any constant $d\in \mathbb{R}$ there exists $\psi_d$ such that $X^* = d + \int_0^T \psi_d(s) d G(s).$ Therefore, we can start from any initial value of the capital and achieve the desired wealth. In this sense, $w$ is not necessarily the initial wealth, as is often assumed in the semimartingale framework, but is rather a budget constraint in a generalized sense.
\end{remark}
\begin{remark}
In the case when the $W$-transformable Gaussian process $G$ is a semimartingale, we can use Girsanov's theorem in order to get a representation similar to \eqref{wti:reprmain}. Indeed,
let, for example, $G$ be a Gaussian process of the form $G(t)=\int_0^t\mu(s)ds +\int_0^t a(s)dW(s)$, where $|\mu(s)|\leq \mu$ and $a(s)>a>0$ are non-random measurable functions, and let $\xi$ be an ${\mathcal{F}}_T^W$-measurable random variable with $ {\bf E}(\xi^2)<\infty$. Then we transform $G$ into $\widetilde{G}=\int_0^\cdot a(s)d\widetilde{W}(s)$ with the help of the equivalent probability measure $\widetilde{ {\bf P}}$ having Radon--Nikodym derivative $$\frac{d\widetilde{ {\bf P}}}{d {\bf P}}=\exp\left\{-\int_0^T\frac{\mu(s)}{a(s)}dW(s)-\frac{1}{2}
\int_0^T\left(\frac{\mu(s)}{a(s)}\right)^2d s \right\}.$$ With respect to this measure $ {\bf E}_{\widetilde{ {\bf P}}}|X^*|<\infty$, and we get the following representation \begin{gather}\label{wti:repraux} X^*= {\bf E}_{\widetilde{ {\bf P}}}(X^*)+\int_0^T\psi(s)d\widetilde{W}_s= {\bf E}_{\widetilde{ {\bf P}}}(X^*)+\int_0^T \frac{\psi(s)}{a(s)}d\widetilde{G}(s)\\= {\bf E}_{\widetilde{ {\bf P}}}(X^*)+\int_0^T \frac{\psi(s)}{a(s)}d{G}(s) = {\bf E}_{\widetilde{ {\bf P}}}(X^*)+\int_0^T \frac{\psi(s)\mu(s)}{a(s)} ds+\int_0^T
{\psi(s)} dW(s). \end{gather} Representations \eqref{wti:reprmain} and \eqref{wti:repraux} have the following distinction: \eqref{wti:reprmain} ``starts'' from $0$ (but can start from any other constant) while \eqref{wti:repraux} ``starts'' exactly from $ {\bf E}_{\widetilde{ {\bf P}}}(X^*)$. \end{remark}
As we can see, the solution of the utility maximization problem for a $W$--trans\-formable process depends on the process in an indirect way, through the random variable $\varphi(T)$ such that $ {\bf E}\varphi(T)=1$, $\varphi(T)>0$ a.s. Also, this solution depends on whether or not we can choose the appropriate value of $c$, but this is more or less a technical issue. Let us return to the choice of $\varphi(T)$. In the case of a semimartingale market, $\varphi(T)$ can reasonably be chosen as the likelihood ratio of some martingale measure, and the choice is unique in the case of a complete market. A non-semimartingale market can contain some hidden semimartingale structure. To illustrate this, consider two examples.
\begin{example}\label{wti:ex4.2} Let the market consist of a bond $B$ and a stock $S$, $$B(t)=e^{rt},\;S(t)=\exp\left\{\mu t +\sigma B_t^{H}\right\},$$ $r\geq0$, $\mu\in{\mathbb{R}}$, $\sigma>0$, $H>\frac{1}{2}$. The discounted price process has the form $Y(t)=\exp\left\{(\mu-r)t+\sigma B_t^H\right\}$. It is well known that such a market admits arbitrage, but even in these circumstances the utility maximization problem makes sense. How should one choose $\varphi(T)$? There are at least two natural approaches.
1. Note that for $H>\frac{1}{2}$ the kernel $K^H$ from (\ref{wti:fbmviawin}) has a form $$ K^H(t,s)= C(H) s^{\frac{1}{2}- H}\int_s^t u^{H-\frac{1}{2}}(u-s)^{H-\frac{3}{2}} du, $$ and representation (\ref{wti:winviafbm}) has a form $$ W(t) = \left(C(H)\right)^{-1}\int_0^t s^{\frac{1}{2}-H} K^*(t,s) d B_s^H, $$ where \begin{equation*}\begin{gathered} K^*(t,s) = \Big(t^{H-\frac{1}{2}}(t-s)^{\frac{1}{2} - H} \\ -\left(H-\frac{1}{2}\right)\int_s^t u^{H-\frac{3}{2}}(u-s)^{\frac{1}{2}-H}du\Big)\frac{1}{\Gamma\left(\frac{3}{2} - H\right)}. \end{gathered}\end{equation*}
Therefore, \begin{eqnarray*} & &\left(C(H)\right)^{-1}\int_0^t s^{\frac{1}{2}-H} K^*(t,s) d \left( (\mu - r) s + \sigma B_s^H\right)\\ &=& \sigma W(t) + \frac{\mu-r}{C(H)} \int_0^t s^{\frac{1}{2}-H} K^*(t,s) ds\\ &=& \sigma W(t) + \frac{\mu-r}{C(H)\Gamma\left(\frac{3}{2} - H\right)} \int_0^t\left( s^{\frac{1}{2}-H} t^{H- \frac{1}{2}} (t-s)^{\frac{1}{2}-H}\right.\\ & &{}- \left.\left(H-\frac{1}{2}\right)s^{\frac{1}{2}-H} \int_s^t u^{H-\frac{3}{2}}(u-s)^{\frac{1}{2}-H}du \right)ds\\ &=&\sigma W_t + \frac{\mu -r}{C(H) \Gamma(\frac{3}{2}-H)}\frac{\Gamma^2(\frac{3}{2}-H)}{(\frac{3}{2}-H)\Gamma (2 - 2 H)} t^{\frac{3}{2}-H}\\ &=& \sigma W_t + (\mu-r) C_1(H) t^{\frac{3}{2}-H}, \end{eqnarray*} where $$ C_1(H) = \left(\frac{3}{2}-H\right)^{-1} \left(\frac{\Gamma(\frac{3}{2} - H)}{2H\Gamma(2 - 2H)\Gamma(H+\frac{1}{2})} \right)^\frac{1}{2}. $$ In this sense we say that the model involves a hidden semimartingale structure.\\ Consider a virtual semimartingale asset \begin{align*} \hat{Y}(t) &= \exp \left\{ (C(H))^{-1} \int_0^t s^{\frac{1}{2}-H}K^*(t,s) d \log Y(s) \right\} \\ &= \exp\left\{\sigma W_t + (\mu-r) C(H)t^{\frac{3}{2}-H}\right\}. \end{align*} We see that measure $ {\bf P}^*$ such that \begin{equation}\begin{split}\label{wti:ex42ChangeMeasure} \frac{d {\bf P}^*}{d {\bf P}} &=\exp \left\{ - \int_0^T \left(\frac{(\mu-r)C_2(H)}{\sigma} s^{\frac{1}{2}-H}+ \frac{\sigma}{2}\right)dW_s\right. \\ & \quad \left.- \frac{1}{2} \int_0^T \left(\frac{(\mu-r)C_2(H)}{\sigma} s^{\frac{1}{2}-H}+ \frac{\sigma}{2}\right)^2 ds\right\}, \end{split}\end{equation} where $C_2(H) = C_1(H) \left(\frac{3}{2}-H\right),$ reduces $\hat{Y}(t)$ to the martingale of the form\linebreak $\exp \left\{\sigma W_t - \frac{\sigma^2}{2} t\right\}$. Therefore, we can put $\varphi(T) = \frac{d {\bf P}^*}{d {\bf P}}$ from (\ref{wti:ex42ChangeMeasure}). Regarding the H\"{o}lder property, $\vartheta(s)= s^{\frac{1}{2}-H}$ satisfies \eqref{wti:thetaassump} with some $p>1$ for any $H\in(\frac12,1)$. Therefore, for the utility function $u(x) = 1 - e^{-\alpha x}$ we have $$
X^* = -\frac{1}{\alpha} \left(\int_0^T \varsigma(s) dW_s - \frac{1}{2} \int_0^T \varsigma^2(s) ds \right) + w + \frac{1}{\alpha} H( {\bf P}^*| {\bf P}), $$
where $\varsigma(s) = \frac{(\mu-r)C_2(H)}{\sigma} s^{\frac{1}{2}-H}+ \frac{\sigma}{2}$, and $|H( {\bf P}^*| {\bf P})|<\infty.$
2. It was proved in \cite{Dung} that the fractional Brownian motion $B^H$ is the limit in $L_p(\Omega,\mathcal{F}, {\bf P})$ for any $p>0$ of the process $$B^{H,\epsilon}(t)=\int_0^tK(s+\epsilon,s)dW(s)+\int_0^t\psi_\epsilon (s)ds,$$ where $W$ is the underlying Wiener process, i.e. $B^{H}(t)=\int_0^tK(t,s)dW(s),$ where \begin{gather*} K(t,s)=C_H s^{\frac{1}{2}-H}\int_s^t u^{H-\frac{1}{2}}(u-s)^{H-\frac{3}{2}}du,\\ \psi_\epsilon(s)=\int_0^s\partial_1K(s+\epsilon,u)dW_u,\\ \partial_1K(t,s)=\frac{\partial K(t,s)}{\partial t}=C_Hs^{\frac{1}{2}-H}t^{H-\frac{1}{2}}(t-s)^{H-\frac{3}{2}}. \end{gather*} Consider the prelimit market with discounted risky asset price $Y^{\epsilon}$ of the form $$Y^{\epsilon}(t)=\exp{\left\{(\mu-r)t+\sigma\int_{0}^{t}\psi_\epsilon(s)ds+\sigma\int_0^tK(s+\epsilon,s)dW_s\right\}}.$$ This financial market is arbitrage-free and complete, and the unique martingale measure has the Radon--Nikodym derivative $$\varphi_\epsilon(T)=\exp\left\{-\int_0^T\zeta_\epsilon(t)dW_t-\frac{1}{2}\int_0^T\zeta^2_\epsilon(t)dt\right\},$$ where $$\zeta_\epsilon(t)=\frac{\mu-r+\sigma\psi_\epsilon(t)}{\sigma K(t+\epsilon,t)}+\frac{1}{2}\sigma K(t+\epsilon,t).$$ Note that $K(t+\epsilon,t)\rightarrow 0$ as $\epsilon\rightarrow0$. Furthermore, $\rho_t=\frac{\mu-r+\sigma\psi_\epsilon(t)}{\sigma K(t+\epsilon,t)}$ is a Gaussian process, and \begin{gather*} \operatornamewithlimits{\bf var} \zeta_\epsilon(t)=\int_{0}^{t}\left(\frac{\partial_1 K(t+\epsilon,u)}{ K(t+\epsilon,t)}\right)^2du\\ =\int_{0}^{t}\left(\frac{u^{1/2-H}(t+\epsilon)^{H-1/2}(t+\epsilon-u)^{H-3/2}}{t^{1/2-H}\int_t^{t+\epsilon}v^{H-1/2}(v-t)^{H-3/2}dv}\right)^2du\\ \ge\epsilon^{1-2H}\int_0^t(t+\epsilon-u)^{2H-3}du=\frac{\epsilon^{1-2H}}{2-2H}\left(\epsilon^{2H-2}-(t+\epsilon)^{2H-2}\right)\rightarrow\infty. \end{gather*} Therefore, we cannot get a reasonable limit of $\varphi_\epsilon(T)$ as $\epsilon\rightarrow0$. Thus one should use this approach with great caution. \end{example}
\subsection{Expected utility maximization for restricted capital profiles}
Consider now the case when the utility function $u$ is defined on some interval $(a,\infty)$. Assume for technical simplicity that $a=0$. Therefore, in this case the set $\mathcal{B}_0$ of admissible capital profiles has the form $$\mathcal{B}_0=\left\{X\in L^0(\Omega,{\mathcal{F}}, {\bf P}):X\ge0 \;\; \text{a.s. and} \;\; {\bf E}(\varphi(T) X)=w\right\}.$$ Assume that the utility function $u$ is continuously differentiable on $(0,\infty)$, introduce $\pi_1=\lim\limits_{x\uparrow\infty}u'(x)\ge0$, $\pi_2=u'(0+)=\lim\limits_{x\downarrow0}u'(x)\le +\infty$, and define $I^+:(\pi_1,\pi_2)\longrightarrow(0,\infty)$ as the continuous, bijective function inverse to $u'$ on $(\pi_1,\pi_2)$.
Extend $I^+$ to the whole half-axis $\left[0,\infty\right]$ by setting \begin{displaymath} I^+(y)=\left\{ \begin{array}{ll} +\infty,& y\le\pi_1\\ 0,& y\ge\pi_2. \end{array}\right. \end{displaymath}
\begin{theorem}[\cite{Foll-Sch}, Theorem 3.39] Let the random variable $X^*\in\mathcal{B}_0$ have the form $X^*=I^+(c\varphi(T))$ for a constant $c>0$ such that $ {\bf E}( \varphi(T) I^{+}(c\varphi(T)))=w$. If $ {\bf E} u(X^*)<\infty$, then $$ {\bf E}( u(X^*))=\max\limits_{X\in\mathcal{B}_0} {\bf E} (u(X)),$$ and this maximizer is unique. \end{theorem}
From here we deduce the corresponding result on the solution of the utility maximization problem, similarly to Theorem~\ref{wti:TheoremIntRepresentation}. Define \begin{gather*}
\widetilde{\mathcal{B}}_w^G=\bigg\{\psi\colon [0,T]\times \Omega\to {\mathbb{R}}\ \Big|\ \text{$\psi$ is bounded ${\mathcal{F}}_t^W$-adapted, there exists a generalized}\\ \text{Lebesgue-Stieltjes integral}
\int_0^T \psi(s)dG(s)\ge 0, \;\; \text{and} \;\; {\bf E}\bigg(\varphi(T)\int_0^T\psi(s) dG(s)\bigg)=w\bigg\}. \end{gather*} \begin{theorem} Let the following conditions hold \begin{itemize}
\item[$(i)$] Gaussian process $G$ satisfies conditions $(A)$ and $(B)$.
\item[$(ii)$] Function $I^+(x),x\in{\mathbb{R}}$ is H\"older continuous.
\item[$(iii)$] Stochastic process $\vartheta$ in representation \eqref{wti:fi1} satisfies \eqref{wti:thetaassump} with some $p> 1$.
\item[$(iv)$] There exists $c\in{\mathbb{R}}$ such that $ {\bf E}(\varphi(T)I^+(c\varphi(T)))=w$. \end{itemize} Then the random variable $X^*=I^+(c\varphi(T))$ admits the representation \begin{equation*} X^*=\int_0^T\overline{\psi}(s)dG(s), \end{equation*} with some $\overline{\psi}\in \widetilde{\mathcal{B}}_w^G$. If $ {\bf E} u(X^*)<\infty$, then $X^*$ is the solution to the expected utility maximization problem: \begin{equation*} {\bf E}( u(X^*))=\max_{\psi\in\widetilde{\mathcal{B}}_w^G} {\bf E} \left(u\left(\int_0^T\psi(s)dG(s)\right)\right).\end{equation*} \end{theorem}
\begin{example} Consider the case of a CRRA (power) utility function $u$. Let first $u(x)=\frac{x^\gamma}{\gamma}$, $x>0$, $\gamma\in(0,1)$. Then, according to \cite[Example 3.43]{Foll-Sch}, $$I^+(c\varphi(T))=c^{-\frac{1}{1-\gamma}}(\varphi(T))^{-\frac{1}{1-\gamma}}.$$ If $d:= {\bf E} (\varphi(T))^{-\frac{\gamma}{1-\gamma}}<\infty$, then the unique optimal profile is given by $X^*=\frac{w}{d}(\varphi(T))^{-\frac{1}{1-\gamma}}$, and the maximal value of the expected utility is equal to $$ {\bf E}( u(X^*))=\frac{1}{\gamma}w^{\gamma}d^{1-\gamma}.$$ As it was mentioned, \begin{equation}\label{wti:fi} \varphi=\varphi(T)=\exp\left\{\int\limits_{0}^T\vartheta(s)dW(s)-\frac{1}{2}\int\limits_0^T \vartheta^2(s)ds\right\}, \end{equation}
thus $$(\varphi(T))^{-\frac{1}{1-\gamma}}=\exp\left\{-\frac{1}{1-\gamma}\int\limits_0^T \vartheta(s)dW(s)+\frac{1}{2(1-\gamma)}\int\limits_0^T \vartheta^2(s)ds\right\}.$$ Therefore, we get the following result. \begin{theorem} Let the process $\vartheta$ in the representation \eqref{wti:fi} satisfy \eqref{wti:thetaassump}, and let $$ {\bf E}\exp\left\{-\frac{\gamma}{1-\gamma}\int\limits_0^T\vartheta(s)dW(s)+\frac{\gamma}{2(1-\gamma)}\int\limits_0^T\vartheta^2(s)ds\right\}<\infty.$$ Let the process $G$ satisfy the same conditions as in Theorem \ref{wti:TheoremIntRepresentation}. Then $X^*$ admits the representation $X^*=\int\limits_0^T\psi(s)dG(s)$ with some adapted process $\psi$. \end{theorem}
In the case where $u(x)=\log x$, we have $\gamma=0$ and $X^*=\frac{w}{\varphi(T)}$. Assuming that the relative entropy $H\left({ {\bf P}}|{ {\bf P}^*}\right)= {\bf E}\left(\log \frac{1}{\varphi(T)}\right)=-{\bf E}\left(\log\varphi(T)\right)$ is finite, we get that
$$ {\bf E}(\log X^*)=\log w + H\left({ {\bf P}}|{ {\bf P}^*}\right).$$ \end{example}
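For completeness, a Monte Carlo sketch for the power utility case (again with the hypothetical deterministic choice $\vartheta(s)=s^{-1/4}$, $\gamma=1/2$ and $w=1$) evaluates $d$, the optimal profile $X^*=\frac wd(\varphi(T))^{-\frac1{1-\gamma}}$ and the maximal expected utility $\frac1\gamma w^\gamma d^{1-\gamma}$.
\begin{verbatim}
import numpy as np

# Power utility u(x) = x^gamma / gamma: Monte Carlo evaluation of
# d = E phi(T)^{-gamma/(1-gamma)}, of X* = (w/d) phi(T)^{-1/(1-gamma)},
# of the budget constraint and of the maximal expected utility.
rng = np.random.default_rng(7)
T, n, m, gamma, w = 1.0, 500, 20000, 0.5, 1.0
dt = T / n
s = (np.arange(n) + 0.5) * dt
theta = s ** (-0.25)                               # hypothetical deterministic theta

dW = rng.standard_normal((m, n)) * np.sqrt(dt)
log_phi = dW @ theta - 0.5 * np.sum(theta ** 2) * dt
phi = np.exp(log_phi)

d = np.mean(phi ** (-gamma / (1 - gamma)))
X_star = (w / d) * phi ** (-1 / (1 - gamma))
print(np.mean(phi * X_star), w)                    # budget constraint
print(np.mean(X_star ** gamma) / gamma,            # expected utility of X*
      w ** gamma * d ** (1 - gamma) / gamma)       # theoretical maximal value
\end{verbatim}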
\section*{Conclusion}
We have studied a broad class of non-semimartingale financial market models in which the random drivers are Wiener-transformable Gaussian processes, i.e.\ certain adapted transformations of a Wiener process. Under the assumption that the incremental variance of the process satisfies two-sided power bounds, we have given sufficient conditions for random variables to admit integral representations with a bounded adapted integrand; such representations model replication by bounded strategies. It turned out that these representation results can be applied to solve utility maximization problems in non-semimartingale market models.
\begin{petit} \noindent \textbf{Acknowledgements } Elena Boguslavskaya is supported by Daphne Jackson fellowship funded by EPSRC. The research of Yu.~Mishura was funded (partially) by the Australian Government through the Australian Research Council (project number DP150102758). Yu.~Mishura acknowledges that the present research is carried through within the frame and support of the ToppForsk project nr. 274410 of the Research Council of Norway with title STORM: Stochastics for Time-Space Risk Models. \end{petit}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Estimating Linear Mixed-effects State Space Model Based on Disturbance Smoothing}
\tnotetext[]{Corresponding author, email: \textit{[email protected].} Financial support from
NSFC (71271165) is gratefully acknowledged.}
\author{Jie Zhou $^{\star}$} \author{Aiping Tang}
\address{Department of Statistics, Xidian University,
Xi'an, 710071, P R China}
\begin{abstract}
In this paper we extend the linear mixed-effects state space model to accommodate correlated individuals and investigate its parameter and state estimation based on disturbance smoothing. For parameter estimation, EM and score-based algorithms are considered. The intermediate quantity of the EM algorithm is investigated first, and explicit recursive formulas for its maximizer are derived for two given models. As for score-based algorithms, explicit formulas for the score vector are obtained, from which it is shown that maximum likelihood estimation is equivalent to a moment estimation. For state estimation,
we advocate that it should be carried out without assuming the random effects to be known in advance, especially when the longitudinal observations are sparse. To this end an algorithm named kernel smoothing based mixture Kalman filter (MKF-KS) is proposed. Numerical studies are carried out to investigate the proposed algorithms and validate the efficacy of the proposed inference approaches. \end{abstract}
\begin{keyword}
State space model\sep Mixed-effects\sep Parameter estimation
\sep State estimation \sep Disturbance smoothing
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{introduction} State space models are widely used in various fields such as economics, engineering and biology. In particular, structural time series models are special state space models. For the linear state space model with Gaussian errors, it is known that the Kalman filter is optimal for state estimation. For nonlinear state space models, no optimal algorithm exists, and various suboptimal algorithms for state estimation have been proposed in the literature; see Harvey (1989), Durbin and Koopman (2012) for details about these algorithms. Traditionally, state space models are designed for single processes. \par In recent years, in order to deal with longitudinal data, state space models for multiple processes have been proposed, and this field has attracted much attention. These models can be classified into two categories, i.e., the discrete and continuous models. For single processes the discrete models are often referred to as hidden Markov models (HMMs). Historically, the discrete models with random effects were introduced by Langeheine and van de Pol (1994), while Altman (2007) provided a general framework for implementing the random effects in the discrete models. For parameter estimation, Altman (2007) evaluated the likelihood as a product of matrices and performed numerical integration via Gaussian quadrature; a quasi-Newton method was used for maximum likelihood estimation. Maruotti (2011) discussed mixed hidden Markov models and their estimation using the EM algorithm. Jackson et al (2014) extended the work of Altman (2007) by allowing the hidden state to jointly model longitudinal binary and count data. The likelihood was evaluated by the forward-backward algorithm and adaptive Gaussian quadrature. For continuous state space models, Gamerman and Migon (1993) were the first to use the state space model to deal with multiple processes. They proposed dynamic hierarchical models for longitudinal data. Unlike the usual hierarchical model, where the parameters are modeled by a hierarchical structure, the hierarchy in Gamerman and Migon (1993) is built for the state variables. Landim and Gamerman (2000) generalized such models to multiple processes. It should be noted that dynamic hierarchical models are still linear state space models with Gaussian errors, and so the statistical inference for such models can be carried out using traditional methods. Lodewyckx et al (2011) proposed a hierarchical linear state space model to model emotion dynamics. Here the hierarchy is built for the parameters. Unlike the models in Gamerman and Migon (1993), these models are essentially nonlinear state space models, and a Bayesian approach was employed to estimate the unknown parameters. Liu et al (2011) proposed a similar model, called the mixed-effects state space model (MESSM), to model the longitudinal observations of a group of HIV infected patients. As for the statistical inference of the model, both the EM algorithm and a Bayesian approach were investigated. In order to justify their statistical inference, Liu et al (2011) assumed that the individuals in the group are independent and that the model has a linear form in the parameters. As for the state estimation, they took the predicted values of the random effects as the true values and then estimated the state using the Kalman filter. \par In this paper we extend the models proposed in Liu et al (2011) and Lodewyckx et al (2011).
The proposed models can accommodate groups with correlated individuals and do not require the system matrices to be linear functions of the parameters. The model will still be called MESSM, just as in Liu et al (2011).
For this generalized MESSM, both parameter and state estimation are considered. As for parameter estimation, an EM algorithm is considered first. Unlike Liu et al (2011), in which the EM algorithm is based on state smoothing, we establish the EM algorithm based on disturbance smoothing, which greatly simplifies the algorithm. In fact, the proposed EM algorithm can be seen as a Rao-Blackwellized version of that proposed in Liu et al (2011). For two important special MESSMs, we obtain elegant recursive formulas for the maximizer of the intermediate quantity of the EM algorithm. Since the convergence rate of the EM algorithm is only linear, score-based algorithms, e.g., the quasi-Newton algorithm, are then investigated. Also based on disturbance smoothing, an explicit and simple expression for the score vector is derived for both the fixed effects and the variance components involved in MESSM. Based on the score vector, it is shown that maximum likelihood estimation for MESSM is in fact equivalent to a particular moment estimation. \par
As for state estimation, Liu et al (2011) employed the Kalman filter with the random effects replaced by their predicted values. Such prediction is based on the batch data and so is not recursive. In many cases, e.g., clinical trials, recursive prediction is more meaningful. Furthermore, it is known that the prediction error of the random effects is rather large when the longitudinal observations are sparse. Ignoring this error will result in a large bias and an underestimated mean squared error of the Kalman filter. In this paper we propose an algorithm adapted from Liu and West (2001) to estimate the state; it is recursive and does not require the random effects to be known in advance. Thus the algorithm can be applied whether the longitudinal observations are sparse or not. \par Finally, the models are further extended to accommodate several practical issues, including missing data, non-diagonal transition matrices and time-dependent effects. Simulation examples are carried out which validate the efficacy of the proposed parameter estimation algorithms. These approaches are applied to a real clinical trial data set, and the results show that, although the state estimation is based only on the data up to the present time point, the resulting mean squared errors are comparable to those obtained from the Kalman filter proposed by Liu et al (2011).
\par This paper is organized as follows. In section \ref{model formulation}, the data generating process for the generalized MESSM is described. In section \ref{model estimation} the algorithms for both parameter and state estimation are detailed. Several further extensions of the MESSM are considered in section \ref{extensions}. In section \ref{numerical studies}, two numerical examples are investigated to illustrate the efficacy of the proposed algorithms. Section \ref{conclusions} presents a brief discussion of the proposed algorithms. \section{Model Formulation}\label{model formulation} Consider a group of dynamic individuals. For the $i$th individual ($i=1,\cdots,m$), the following linear state space model is assumed, \begin{eqnarray} x_{it}&=&T(\theta_i)x_{i,t-1}+v_{it},\ \ v_{it}\sim N(0,Q),\label{equation 1} \\ y_{it}&=&Z(\theta_i)x_{it}+w_{it},\ \ w_{it}\sim N(0,R), \label{equation 2} \end{eqnarray} where $x_{it}$ and $y_{it}$ are the $p\times 1$ state vector and $q\times 1$ observation vector for the $i$th individual at time $t$; $v_{it}$ is the $p\times 1$ state disturbance and $w_{it}$ is the $q\times 1$ observational error, both of which are normally distributed with mean zero and variance matrices $Q$ and $R$ respectively. The $p\times p$ state transition matrix $T(\theta_i)$ and the $q\times p$ observation matrix $Z(\theta_i)$ are parameterized by the $r\times 1$ parameter vector $\theta_i$. \par For $\{v_{it},t=1,2,\cdots\}$, the following correlation structure is assumed $$Cov(v_{it},v_{i't'})=\left\{ \begin{array}{lll} Q(i,i')_{p\times p}&&\mbox{if}\ \ t=t'\\ 0 &&\mbox{else}\\ \end{array} \right.,\label{assumption 2} $$ i.e., at the same time point the covariance between different individuals $i$ and $i'$ is $Q(i,i')$, so the individuals in this group are correlated with each other. If $i=i'$, then $Q(i,i')=Q$. More complex relationships are also possible; see section \ref{general transition matrix} for another way of modeling the relationship among the individuals. For $\{w_{it},t=1,2,\cdots\}$, we assume $$Cov(w_{it},w_{i't'})=\left\{ \begin{array}{lll} R_{q\times q}&&\mbox{if}\ \ i=i', t=t'\\ 0&&\mbox{else}\ \ \end{array} \right..\label{assumption 3}$$
There is another layer of complexity in model (\ref{equation 1}) $\sim$ (\ref{equation 2}), i.e., we have to specify the structure of $\theta_i\ (1\leq i \leq m)$, for which we assume \begin{eqnarray}\label{theta} \theta_i&=&\psi_ia+b_i, \ \ b_i\sim N(0,D), \end{eqnarray} where $\psi_i$ is the exogenous variable representing the characteristics of the $i$th individual, $a$ is the fixed effect and $b_i$ the random effect. We assume the $b_i$'s are independent with ${\rm Var}(b_{i})=D$. Here an implicit assumption is that the individual parameter $\theta_i$ is static. A time-dependent $\theta_i$ may be more appropriate in some cases, which will be considered in section \ref{time dependent effects}.
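\par To make the data generating process concrete, the following Python sketch (a minimal illustration using \texttt{numpy}; the function and variable names are ours and are not part of any established package) simulates one group from (\ref{equation 1})$\sim$(\ref{theta}) in the simple special case of independent individuals, i.e., $Q(i,i')=0$ for $i\neq i'$.
\begin{verbatim}
import numpy as np

def simulate_messm(T_fun, Z_fun, a, Psi, D, Q, R, n_steps, x0, rng):
    """Simulate one group from the MESSM, assuming independent individuals.

    T_fun(theta), Z_fun(theta): build T(theta_i) and Z(theta_i).
    a: fixed effects; Psi: list of design matrices psi_i;
    D, Q, R: covariance matrices of b_i, v_it and w_it.
    """
    states, obs = [], []
    for Psi_i in Psi:                          # loop over individuals
        theta_i = Psi_i @ a + rng.multivariate_normal(np.zeros(D.shape[0]), D)
        T_i, Z_i = T_fun(theta_i), Z_fun(theta_i)
        x, xs, ys = x0.copy(), [], []
        for _ in range(n_steps):
            x = T_i @ x + rng.multivariate_normal(np.zeros(Q.shape[0]), Q)
            y = Z_i @ x + rng.multivariate_normal(np.zeros(R.shape[0]), R)
            xs.append(x); ys.append(y)
        states.append(np.array(xs)); obs.append(np.array(ys))
    return states, obs

# illustrative call: a scalar autoregressive plus noise specification
rng = np.random.default_rng(0)
states, obs = simulate_messm(
    T_fun=lambda th: np.array([[th[0]]]), Z_fun=lambda th: np.array([[1.0]]),
    a=np.array([0.3]), Psi=[np.eye(1)] * 15, D=np.array([[0.1]]),
    Q=np.array([[3.0]]), R=np.array([[0.3]]), n_steps=20,
    x0=np.zeros(1), rng=rng)
\end{verbatim}
The illustrative call corresponds to the autoregressive plus noise specification used later in section \ref{numerical studies}.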
For the correlation structure among $v_{it}, w_{it}$ and $\theta_i$, we assume \begin{eqnarray} Cov(\theta_{i},v_{i't'})=Cov(\theta_{i},w_{i't'})= Cov(v_{it},w_{i't'})=0 \label{assumption 4} \end{eqnarray}
for $1\leq i \leq m, 1\leq i'\leq m, t\geq 1,t'\geq 1$. \par The model given above is a generalized version of the MESSM given in Liu et al (2011) and Lodewyckx et al (2011), in which the disturbance $v_{it}$ is assumed to be independent of $v_{i't}$ for $i\neq i'$. Here we allow a static correlation among the individuals. Another critical assumption in Liu et al (2011) is that both $T(\theta_i)$ and $Z(\theta_i)$ must be linear functions of $\theta_i$. This restriction is not required here either. \par The following notations are adopted in this paper. $\{_m \ a_{ij}\}{_{i=1}^p}{_{j=1}^q} =\{_m \ a_{ij}\}$ denotes a $p\times q$ matrix with elements $a_{ij}$; $\{_c\ u_i\}_{i=1}^n$ denotes an $n$-dimensional column vector; $\{_r\ u_i\}_{i=1}^m$ denotes an $m$-dimensional row vector; a diagonal matrix is denoted by $\{_d\ a_i\}_{i=1}^n$. All the elements can be replaced by matrices, which results in a block matrix. As for the model (\ref{equation 1})$\sim$(\ref{equation 2}), define $x_t=\{_c\ x_{it}\}_{i=1}^m$, $\theta=\{_c\ \theta_{i}\}_{i=1}^m$, $\tilde{T}(\theta)=\{_d\ T(\theta_i)\}_{i=1}^m$, $\tilde{Z}(\theta)=\{_d\ Z(\theta_i)\}_{i=1}^m$,
$v_t=\{_c\ v_{it}\}_{i=1}^m$, $y_t=\{_c\ y_{it}\}_{i=1}^m$, $w_t=\{_c\ w_{it}\}_{i=1}^m$, and then the model can be written in matrix form as \begin{eqnarray} x_t&=&\tilde{T}(\theta)x_{t-1}+v_t,\label{model1}\\ y_t&=&\tilde{Z}(\theta)x_{t}+w_t.\label{model2} \end{eqnarray} Here $Var(v_t)\triangleq \tilde{Q}={ \{_m\ Q(i,i')\}_{i=1}^m}_{i'=1}^m$, $Var(w_t)\triangleq \tilde{R}=\{_d\ R\}_{i=1}^m$, $Var(\theta)=\{_d\ D\}_{i=1}^m$ and $Cov(v_t,w_t)=Cov(\theta,v_t)=Cov(\theta,w_t)=0$. Equations (\ref{equation 1})$\sim$(\ref{model2}) represent the data generating process. Given the observations up to time $t$, $y_{1:t}=(y_{11},\cdots,y_{m1},\cdots,y_{1t},\cdots,y_{mt})$, we will study the following problems: (1)\ How to estimate the parameters involved in the model,
including the covariance matrices $\tilde{Q}$, $\tilde{R}$, $D$ and the fixed effects $a$. (2)\ How to obtain an online estimate of the state $x_{it}$ for $1\leq i \leq m$. Though these problems have been studied in the literature, we will adopt different ways to address them, which turn out to be more efficient in most settings. \section{Model Estimation}\label{model estimation}
The parameters involved in MESSM include the fixed effects $a$ and those involved in the variance matrices $(\tilde{Q}, \tilde{R}, D)$, which are denoted by $\delta$. We write $(\tilde{Q}(\delta), \tilde{R}(\delta), D(\delta))$ to indicate explicitly the dependence of the variance matrices on $\delta$. In this section we consider how to estimate the parameter $\Delta^T\triangleq(a^T,\delta^T)$ and the state $x_t$ based on the observations $y_{1:T}$. Lodewyckx et al (2011) and Liu et al (2011) investigated these questions in detail, including EM-algorithm-based maximum likelihood estimation and Bayesian estimation. While these approaches are shown to be efficient for the given illustrations, they are cumbersome to carry out. On the other hand, it is well known that the rate of convergence of the EM algorithm is linear, which is slower than that of the quasi-Newton algorithm. In the following we first consider a new version of the EM algorithm which is simpler than the existing ones. Then score-based algorithms are investigated, and an explicit and simple expression for the score vector is derived. State estimation is also investigated using an adaptation of the filter algorithm proposed by Liu and West (2001). \subsection{Maximizing the likelihood via EM algorithm} \label{EM} For model (\ref{model1})$\sim$(\ref{model2}), we take $(\theta^T,x_1^T,\cdots,x_T^T)^T$ as the missing data and $(\theta^T, x_1^T,\cdots,x_T^T,y_{1:T})^T$ the complete data. Note
$f(\theta,x_{1:T},y_{1:T}|\Delta)=f(\theta|\Delta)f(x_{1:T}|\theta,\Delta)
f(y_{1:T}|\theta,x_{1:T},\Delta)$ in which all the terms
$f(\theta|\Delta)$, $f(x_{1:T}|\theta,\Delta)$ and
$f(y_{1:T}|\theta,x_{1:T},\Delta)$ are normal densities by assumption. For the sake of simplicity, we let $x_1\sim N(a_1,P_1)$ with known $a_1$ and $P_1$. Then omitting constants, the log joint density can be written as \begin{eqnarray}\label{jointdensity}
&&\log f(\theta,x_{1:T},y_{1:T}|\Delta)=-\frac{m}{2}\log |D(\delta)|
-\frac{T}{2} \log |\tilde{R}(\delta)|-\frac{T}{2}\log
|\tilde{Q}(\delta)|\\ &&-\frac{1}{2}\sum_{i=1}^m {\rm tr}\left[D(\delta)^{-1}(\theta_i-\Psi_i a)(\theta_i-\Psi_i a)^T\right]-\frac{1}{2}\sum_{t=1}^T {\rm tr}[\tilde{R}(\delta)^{-1} \{y_{t}-\tilde{Z}(\theta)x_{t}\}\nonumber\\ &&\times \{y_{t}-\tilde{Z}(\theta)x_{t}\}^T]-\frac{1}{2}\sum_{t=1}^T {\rm tr}[\tilde{Q}(\delta)^{-1}\{x_t-\tilde{T}(\theta)x_{t-1}\} \{x_t-\tilde{T}(\theta)x_{t-1}\}^T]\nonumber \end{eqnarray} where for $t=1$, $\tilde{Q}(\delta)^{-1}\{x_t-\tilde{T}(\theta)x_{t-1}\} \{x_t-\tilde{T}(\theta)x_{t-1}\}^T$ is interpreted as $P_1^{-1}(x_1-a_1)(x_1-a_1)^T$. Let $\Delta^{\star}=(a^{\star},\delta^{\star})^T$ denote the value of $\Delta$ at the current step of the EM algorithm; then $Q(\Delta,\Delta^{\star})$, the intermediate quantity of the EM algorithm, is defined as the expectation of $ \log f(\theta,x_{1:T},y_{1:T}|\Delta)$ conditional on $\Delta^{\star}$ and the observations $y_{1:T}$. Let $\tilde{E}(\cdot)$ denote this conditional expectation; then with (\ref{jointdensity}) and the normality assumptions in hand, we have \begin{eqnarray}\label{E-step} Q(\Delta,\Delta^{\star})&\triangleq&
\tilde{E}[\log f(\theta,x_{1:T},y_{1:T}|\Delta)] \end{eqnarray} \begin{eqnarray*}
&=& -\frac{m}{2}\log |D(\delta)| -\frac{T}{2}\log
|\tilde{R}(\delta)|-\frac{T}{2}\log |\tilde{Q}(\delta)|\\ \nonumber
&&-\frac{1}{2}\sum_{i=1}^m {\rm tr}\left[D(\delta)^{-1}\left\{(\Psi_i(a^{\star}-a)+b_{i|T})
(\Psi_i(a^{\star}-a)+b_{i|T})^T\right.\right.\nonumber\\
&&\left.\left.+{\rm Var}(b_i|y_{1:T},\Delta^{\star}) \right\}\right]
-\frac{1}{2}\sum_{t=1}^T {\rm tr}\left[\tilde{R}(\delta)^{-1}\left\{w_{t|T}w_{t|T}^T+{\rm Var}(w_t|y_{1:T},\Delta^{\star})\right\}\right]\nonumber\\
&&-\frac{1}{2}\sum_{t=1}^T {\rm tr}\left[\tilde{Q}(\delta)^{-1}\left\{v_{t|T}v_{t|T}^T+{\rm Var}(v_t|y_{1:T},\Delta^{\star})\right\}\right],\nonumber \end{eqnarray*}
where $ b_{i|T}=\tilde{E}(b_i), w_{t|T}=\tilde{E}(w_{t}),
v_{t|T}=\tilde{E}(v_t)$. In order to find
the maximizer of $Q(\Delta, \Delta^{\star})$ with respect to
$\Delta$, we first have to compute these conditional expectations and variances. Note that \begin{eqnarray}
b_{i|T}= \tilde{E}(b_{i|T}(\theta)),\quad w_{t|T}=\tilde{E}(w_{t|T}(\theta)),\quad
v_{t|T}= \tilde{E}(v_{t|T}(\theta)),\label{expectation} \end{eqnarray} where \begin{eqnarray*}
b_{i|T}(\theta)=E(b_i|y_{1:T},\Delta^{\star},\theta),\
w_{t|T}(\theta)=E(w_t|y_{1:T},\Delta^{\star},\theta),\
v_{t|T}(\theta)=E(v_t|y_{1:T},\Delta^{\star},\theta) \end{eqnarray*}
and \begin{eqnarray}
{\rm Var}(v_t|y_{1:T},\Delta^{\star})&=&\tilde{E}({\rm Var}(v_t|y_{1:T}, \Delta^{\star},\theta))+{\rm Var}(v_{t|T}(\theta)|y_{1:T},\Delta^{\star}),\label{variance1}\\
{\rm Var}(w_t|y_{1:T},\Delta^{\star})&=&\tilde{E}({\rm Var}(w_t|y_{1:T}, \Delta^{\star},\theta))+{\rm Var}(w_{t|T}(\theta)|y_{1:T},\Delta^{\star}).\label{variance2} \end{eqnarray}
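\par In practice the outer expectation $\tilde{E}(\cdot)$ and the outer variance in (\ref{expectation})$\sim$(\ref{variance2}) are approximated by averaging over posterior draws of $\theta$, as described later in this subsection. A minimal Python sketch of this law-of-total-variance combination step (the names are ours) is:
\begin{verbatim}
import numpy as np

def combine_over_theta(cond_means, cond_vars):
    """Combine per-theta smoothed moments over posterior draws of theta.

    cond_means[j]: E(v_t | y_{1:T}, Delta*, theta^(j)), shape (d,)
    cond_vars[j] : Var(v_t | y_{1:T}, Delta*, theta^(j)), shape (d, d)
    Returns Monte Carlo estimates of E(v_t | y_{1:T}, Delta*) and
    Var(v_t | y_{1:T}, Delta*), using Var = E[Var|theta] + Var[E|theta].
    """
    cond_means = np.asarray(cond_means)            # shape (M, d)
    mean = cond_means.mean(axis=0)
    centered = cond_means - mean
    var_of_means = centered.T @ centered / cond_means.shape[0]
    mean_of_vars = np.mean(cond_vars, axis=0)
    return mean, mean_of_vars + var_of_means
\end{verbatim}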
For the smoothed disturbances $w_{t|T}(\theta), v_{t|T}(\theta)$ and the relevant variances we have, \begin{eqnarray}
w_{t|T}(\theta)=\tilde{R}(\delta^{\star})e_t(\theta),&& {\rm Var}(w_t|y_{1:T},\Delta^{\star},\theta)=\tilde{R}(\delta^{\star})- \tilde{R}(\delta^{\star})D_t(\theta)\tilde{R}(\delta^{\star}),\label{recursion1}\\
v_{t|T}(\theta)=\tilde{Q}(\delta^{\star})r_{t-1}(\theta),&& {\rm Var}(v_t|y_{1:T},\Delta^{\star},\theta)=\tilde{Q} (\delta^{\star})-\tilde{Q}(\delta^{\star})N_{t-1}(\theta) \tilde{Q}(\delta^{\star}), \end{eqnarray} where the backward recursions for $e_t(\theta)$, $r_{t}(\theta)$, $D_t(\theta)$ and $N_t(\theta)$ are given by \begin{eqnarray} e_t(\theta)&=&F_t(\theta)^{-1}\nu_t-K_t(\theta)^T r_t(\theta),\label{e and D1}\\ r_{t-1}(\theta)&=&Z(\theta)^T F_t^{-1}(\theta)\nu_t+L_t(\theta)^Tr_t(\theta),\label{e and D2}\\ D_t(\theta)&=&F_t(\theta)^{-1}+K_t(\theta)^TN_t(\theta)K_t(\theta),\label{r and N1}\\ N_{t-1}(\theta)&=&Z(\theta)^T F_t(\theta)^{-1}Z(\theta) +L_t(\theta)^TN_t(\theta)L_t(\theta)\label{r and N2} \end{eqnarray} for $t=T,\cdots,1$. These terms are calculated backwardly with $r_T=0$ and $N_T=0$. Here $F_t(\theta), K_t(\theta)$ are respectively the variance matrix of innovation and gain matrix involved in Kalman filter. The recursions for these matrix can be stated as follows, \begin{eqnarray}
P_{t+1|t}(\theta)=T(\theta)P_{t|t-1}(\theta)L_t^T+\tilde{Q}(\delta^{\star}),&&
F_t(\theta)=Z(\theta)P_{t|t-1}(\theta)Z(\theta)^T+\tilde{R}(\delta^{\star}),\\
K_t(\theta)=T(\theta)P_{t|t-1}(\theta)Z(\theta)^TF_t(\theta)^{-1},&& L_t(\theta)=T(\theta)-K_t(\theta)Z(\theta).\label{recursion2} \end{eqnarray} The recursions (\ref{recursion1})$\sim$(\ref{recursion2}) can be found in Durbin and Koopman (2012). Combining these recursive formulas with (\ref{expectation})$\sim$(\ref{variance2}) yields \begin{eqnarray*}
w_{t|T}w_{t|T}^T&=&\tilde{R}(\delta^{\star})\tilde{E}(e_t(\theta)) \tilde{E}(e_t(\theta))^T \tilde{R}(\delta^{\star}),\label{w}\\
v_{t|T}v_{t|T}^T&=&\tilde{Q}(\delta^{\star})\tilde{E}(r_{t-1}(\theta)) \tilde{E}(r_{t-1}(\theta))^T \tilde{Q}(\delta^{\star}),\label{v}\\
{\rm Var}(w_t|y_{1:T},\Delta^{\star})&=&\tilde{R}(\delta^{\star})- \tilde{R}(\delta^{\star})\tilde{E}(D_t(\theta))\tilde{R}(\delta^{\star})
+ \tilde{R}(\delta^{\star}){\rm Var}(e_t(\theta)|y_{1:T},\Delta^{\star})\tilde{R} (\delta^{\star}),\label{e}\\
{\rm Var}(v_t|y_{1:T},\Delta^{\star})&=&\tilde{Q}(\delta^{\star})- \tilde{Q}(\delta^{\star})\tilde{E}(N_{t-1}(\theta))\tilde{Q}(\delta^{\star})
+ \tilde{Q}(\delta^{\star}){\rm Var}(r_{t-1}(\theta)|y_{1:T},\Delta^{\star})\tilde{Q}(\delta^{\star}).\label{r} \end{eqnarray*} Substituting these expression into (\ref{E-step}) we have \begin{eqnarray}\label{Q function}
Q(\Delta,\Delta^{\star}) &=& -\frac{m}{2}\log |D(\delta)|
-\frac{T}{2}\log |\tilde{R}(\delta)|-\frac{T}{2}\log
|\tilde{Q}(\delta)|\\
&&-\frac{1}{2}\sum_{i=1}^m {\rm tr}\left[D(\delta)^{-1}\left\{(\Psi_i(a^{\star}-a)+b_{i|T})
(\Psi_i(a^{\star}-a)+b_{i|T})^T\right.\right.\nonumber\\
&&\left.\left.+{\rm Var}(b_i|y_{1:T},\Delta^{\star}) \right\}\right]\nonumber\\ && -\frac{1}{2}\sum_{t=1}^T {\rm tr}\left[\tilde{R}(\delta)^{-1}\left\{\tilde{R}(\delta^{\star})+ \tilde{R}(\delta^{\star})\tilde{E}(e_t^2(\theta)-D_t(\theta))\tilde{R}(\delta^{\star}) \right\}\right]\nonumber\\ &&-\frac{1}{2}\sum_{t=1}^T {\rm tr}\left[\tilde{Q}(\delta)^{-1}\left\{\tilde{Q}(\delta^{\star})+ \tilde{Q}(\delta^{\star})\tilde{E}(r_{t-1}^2(\theta)-N_{t-1}(\theta))\tilde{Q}(\delta^{\star}) \right\}\right].\nonumber \end{eqnarray}
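\par For a fixed value of $\theta$, all the ingredients of (\ref{Q function}), namely $e_t(\theta)$, $D_t(\theta)$, $r_{t-1}(\theta)$ and $N_{t-1}(\theta)$, are obtained from one forward pass of the Kalman filter followed by the backward recursions (\ref{e and D1})$\sim$(\ref{recursion2}). The following Python sketch (plain \texttt{numpy}; the function name is ours) illustrates the computation for a time-invariant model:
\begin{verbatim}
import numpy as np

def disturbance_smoother(y, T, Z, Q, R, a1, P1):
    """Forward Kalman filter and backward disturbance smoother.

    y: (n, q) observations.  Returns lists with e_t, D_t, r_{t-1}, N_{t-1}
    for t = 1,...,n; the smoothed disturbances are then R e_t and Q r_{t-1}.
    """
    n = y.shape[0]
    p = T.shape[0]
    x_pred, P_pred = a1.copy(), P1.copy()
    nu, Finv, K, L = [], [], [], []
    for t in range(n):                               # forward pass
        F = Z @ P_pred @ Z.T + R
        Fi = np.linalg.inv(F)
        Kt = T @ P_pred @ Z.T @ Fi
        Lt = T - Kt @ Z
        innov = y[t] - Z @ x_pred
        nu.append(innov); Finv.append(Fi); K.append(Kt); L.append(Lt)
        x_pred = T @ x_pred + Kt @ innov
        P_pred = T @ P_pred @ Lt.T + Q
    r, N = np.zeros(p), np.zeros((p, p))             # r_T = 0, N_T = 0
    e, D, rs, Ns = [None] * n, [None] * n, [None] * n, [None] * n
    for t in range(n - 1, -1, -1):                   # backward pass
        e[t] = Finv[t] @ nu[t] - K[t].T @ r
        D[t] = Finv[t] + K[t].T @ N @ K[t]
        r = Z.T @ Finv[t] @ nu[t] + L[t].T @ r       # r_{t-1}
        N = Z.T @ Finv[t] @ Z + L[t].T @ N @ L[t]    # N_{t-1}
        rs[t], Ns[t] = r, N
    return e, D, rs, Ns
\end{verbatim}
Running this routine for each posterior draw $\theta^{(j)}$ and averaging the outputs gives the Monte Carlo approximations of $\tilde{E}\{e_t(\theta)e_t(\theta)^T\}$, $\tilde{E}D_t(\theta)$, etc.\ that appear in (\ref{Q function}).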
\par Now we have obtained the expression for the intermediate quantity of the EM algorithm. Except for the conditional expectations and variances, all the quantities involved can easily be computed by the Kalman filter. These conditional expectations and variances include $b_{i|T}$, $\tilde{E}D_t(\theta)$, $\tilde{E}N_{t}(\theta)$, $\tilde{E}\{e_t(\theta)e_t(\theta)^T\}$, $\tilde{E}\{r_{t-1}(\theta)r_{t-1}(\theta)^T\}$ and $Var(b_i|y_{1:T},\Delta^{\star})$. Here we adopt the Monte Carlo method to approximate these expectations and variances. Specifically, given random samples $\{\theta^{(j)},j=1,\cdots,M\}$ from the posterior $f(\theta|y_{1:T}, \Delta^{\star})$, each population expectation is approximated by the corresponding sample average. For example, we approximate $b_{i|T}$ by $\frac{1}{M} \sum_{j=1}^M \theta^{(j)}_i-\Psi_i a^{\star}$. The same approximation applies to the other expectations and variances. As for sampling from the posterior $f(\theta|y_{1:T},\Delta^{\star})$, the random-walk Metropolis algorithm is employed in this paper to generate the samples. Certainly it is also possible to use other sampling schemes, such as importance sampling, to generate random samples from $f(\theta|y_{1:T},\Delta^{\star})$. In our limited experience the MCMC algorithm is superior to importance sampling in the present setting. It is meaningful to compare the proposed EM algorithm with that in Liu et al (2011). Recall that the EM algorithm in Liu et al (2011) has to sample both $x_t$ and $\theta$ from the joint distribution $f(\theta,x_t|y_{1:T},\Delta^{\star})$, where a Gibbs sampler was proposed to implement the sampling in their study. Here only the random samples of $\theta$ from
$f(\theta|y_{1:T},\Delta^{\star})$ are needed for running the EM algorithm, and thus the proposed EM algorithm can be seen as a Rao-Blackwellized version of that in Liu et al (2011). Note that the dimension of $x_t$ increases with the number of individuals, and consequently a faster and more stable convergence of the proposed algorithm can be expected, especially when the number of correlated individuals is large. \par For the purpose of illustration consider the following autoregressive plus noise model, \begin{eqnarray}\label{autoregression1} y_{it}=x_{it}+w_{it},\ w_{it}\sim {\rm i.i.d.}\ N(0,\delta_1),\ x_{it}=\theta_ix_{i,t-1}+v_{it},\ v_{it}\sim {\rm i.i.d.}\ N(0,\delta_2) \end{eqnarray} where \begin{eqnarray}\label{autoregression2} \theta_i&=&\mu_{\theta}+b_i,\ b_i\sim {\rm i.i.d.}\ N(0,\delta_3), i=1,\cdots,m. \end{eqnarray} Here we assume all the individuals in this group are independent of each other. This model can be rewritten in matrix form as \begin{eqnarray*} y_t=\tilde{Z}(\theta)x_t+w_t,w_t\sim N_m(0,\delta_1 I_m), && x_t=\tilde{T}(\theta)x_{t-1}+v_t, v_t\sim N_m(0,\delta_2 I_m) \end{eqnarray*} with $y_t=(y_{1t},\cdots,y_{mt})^T$, $x_t=(x_{1t},\cdots,x_{mt})^T$,
$w_t=(w_{1t},\cdots,w_{mt})^T$, $v_t=(v_{1t},\cdots,v_{mt})^T$,
$\theta=(\theta_1,\cdots,\theta_m)^T$, $\delta=(\delta_1,\delta_2,\delta_3)^T$,
$\tilde{Z}(\theta)=I_m$, $\tilde{T}(\theta)={\rm diag}(\theta_1,\cdots,\theta_m)$, $\tilde{R}(\delta)=\delta_1I_m$, $\tilde{Q}(\delta)=\delta_2I_m$, $\Psi_i=1$, $D(\delta)=\delta_3$. From (\ref{Q function}) we get the following recursive formulas, \begin{eqnarray}\label{recursive1} \hat{\mu}_{\theta}&=&\frac{1}{m}\sum_{i=1}^m \tilde{E}(\theta_i),\quad \hat{\delta}_3=\frac{1}{m}\sum_{i=1}^m [\hat{\mu}_{\theta}-\tilde{E}(\theta_i)]^2,\nonumber\\ \hat{\delta}_1&=&\frac{1}{Tm}\sum_{i=1}^m \sum_{t=1}^T [\delta_1^{\star}+{\delta_1^{\star}}^2\tilde{E}\{e_{it}^2(\theta)-D_{it}(\theta)\}],\\ \hat{\delta}_2&=&\frac{1}{Tm}\sum_{i=1}^m \sum_{t=1}^T [\delta_2^{\star}+{\delta_2^{\star}}^2\tilde{E}\{r_{it-1}^2(\theta) -N_{it-1}(\theta)\}].\nonumber \end{eqnarray} After obtaining $(\hat{\mu}_{\theta},\hat{\delta}_1, \hat{\delta}_2, \hat{\delta}_3 )^T$ from (\ref{recursive1}), we take it as the new $\Delta^{\star}$ and use it to compute the next maximizer of $Q(\Delta,\Delta^{\star})$, iterating until convergence is achieved. The point of convergence is taken as the estimator of $\Delta$. \par The second illustration we consider is the damped local linear model, which can be expressed as \begin{eqnarray}\label{local linear1} y_{it}&=&z_{it}+\epsilon_{it},\quad z_{it}=z_{i(t-1)}+u_{it}+\eta_{it},\quad u_{it}=\theta_i u_{i(t-1)}+\tau_{it}, \end{eqnarray} with $\theta_i=\mu_{\theta}+b_i$ and \begin{eqnarray}\label{local linear2} \epsilon_{it}\sim {\rm i.i.d.} N(0,\delta_1),\quad\eta_{it}\sim {\rm i.i.d.} N(0,\delta_2),\quad\tau_{it}\sim {\rm i.i.d.} N(0,\delta_3), \quad b_i\sim {\rm i.i.d.} N(0,\delta_4). \end{eqnarray} We also assume that the individuals in the group are independent of each other. Defining the state variable as $x_{it}=(z_{it},u_{it})^T$, the damped local linear model can be rewritten as the state space model (\ref{equation 1})$\sim$(\ref{equation 2}) with $$Z=(1,0),\ T =\left( \begin{array}{cc} 1&1\\ 0&\theta_i\\ \end{array} \right),\ Q=\left( \begin{array}{cc} \delta_2&0\\ 0&\delta_3\\ \end{array} \right),\ R=\delta_1.$$ Here the unknown parameters include $\Delta\triangleq(\mu_{\theta},\delta_1,\delta_2,\delta_3,\delta_4)^T$. Let $$r_{it}\triangleq(r_{it}^{(z)}, r_{it}^{(u)})^T,\quad N_{it} \triangleq\left( \begin{array}{cc} N_{it}^{(zz)}&N_{it}^{(zu)}\\ N_{it}^{(zu)}&N_{it}^{(uu)}\\ \end{array} \right),$$ then from (\ref{Q function}) the recursive formulas of the EM algorithm turn out to be \begin{eqnarray*} \hat{\mu}_{\theta}&=&\frac{1}{m}\sum_{i=1}^m \tilde{E}(\theta_i),\quad \hat{\delta}_4=\frac{1}{m}\sum_{i=1}^m [\hat{\mu}_{\theta}-\tilde{E}(\theta_i)]^2,\\ \hat{\delta}_1&=&\frac{1}{Tm}\sum_{i=1}^m \sum_{t=1}^T [\delta_1^{\star}+{\delta_1^{\star}}^2\tilde{E}\{e_{it}^2(\theta)-D_{it}(\theta)\}],\\ \hat{\delta}_2&=&\frac{1}{Tm}\sum_{i=1}^m \sum_{t=1}^T [\delta_2^{\star}+{\delta_2^{\star}}^2\tilde{E}\{(r_{it-1}^{(z)})^2(\theta)- N_{it-1}^{(zz)}(\theta)\}],\\ \hat{\delta}_3&=&\frac{1}{Tm}\sum_{i=1}^m \sum_{t=1}^T [\delta_3^{\star}+{\delta_3^{\star}}^2\tilde{E}\{(r_{it-1}^{(u)})^2(\theta)- N_{it-1}^{(uu)}(\theta)\}]. \end{eqnarray*} \subsection{Maximizing the likelihood via score based algorithms} In this section we consider score-based algorithms, which include the quasi-Newton algorithm, the steepest ascent algorithm, and so on. The core of such algorithms is how to compute the score vector. Here the likelihood
$ L(\Delta|y_{1:T})$ is a complex function of $\Delta$, and direct computation of the score is difficult both analytically and numerically. We consider the following transformation of $L(\Delta|y_{1:T})$,
\begin{eqnarray}\label{likelihood}
\log L(\Delta|y_{1:T})&=&\log f(\theta,x_{1:T},y_{1:T}|\Delta)-\log f(\theta,x_{1:T}|y_{1:T}, \Delta)
\end{eqnarray}
Recall from section \ref{model estimation} that $f(\theta,x_{1:T},y_{1:T}|\Delta)$ denotes the joint distribution of $(\theta,x_{1:T},y_{1:T})$ conditional on $\Delta$ and $\tilde{E}(\cdot)$ the conditional expectation $E(\cdot|y_{1:T},\Delta^{\star})$. In the present situation we let $\Delta^{\star}$ denote the current value of $\Delta$ in the quasi-Newton algorithm. Then taking $\tilde{E}$ on both sides of (\ref{likelihood}) yields
\begin{eqnarray}
\log
L(\Delta|y_{1:T})&=&\tilde{E}\left[\log f(\theta,x_{1:T},y_{1:T}|\Delta)\right]-
\tilde{E}\left[\log f(\theta,x_{1:T}|y_{1:T},\Delta)\right].
\end{eqnarray}
Under the assumption that the exchange of integration and differentiation
is legitimate it can be shown that
\begin{eqnarray}
\tilde{E}\left[\left.\frac{\partial \log f(\theta,x_{1:T}|y_{1:T},\Delta)}{\partial
\Delta}\right|_{\Delta=\Delta^{\star}}\right]=0,
\end{eqnarray}
Consequently we have \begin{eqnarray}\label{score1}
\left.\frac{\partial \log L(\Delta | y_{1:T})}{\partial
\Delta}\right|_{\Delta=\Delta^{\star}}&=&\frac{\partial }{\partial
\Delta}\left.\tilde{E}\left[ \log f(\theta,x_{1:T},y_{1:T}|\Delta)
\right]\right|_{\Delta=\Delta^{\star}}. \end{eqnarray}
Note that the expectation on the right-hand side has the same form as the intermediate quantity of the EM algorithm in the previous subsection, and so substituting (\ref{Q function}) into (\ref{score1}) we get \begin{eqnarray}\label{score2}
\left.\frac{\partial \log L(\Delta | y_{1:T})} {\partial a}\right|_{\Delta=\Delta^{\star}}&=&\sum_{i=1}^m
\psi_i^TD(\delta^{\star})^{-1}b_{i|T},\\
\left.\frac{\partial \log L(\Delta |
y_{1:T})}{\partial \delta_j}\right |_{\Delta=\Delta^{\star}}&=& -\frac{1}{2}\sum_{i=1}^m {\rm tr } \left[ D(\delta^{\star})^{-1}\frac{\partial D(\delta^{\star})}{\partial \delta_j}\right.\label{score3}\\
&&-D(\delta^{\star})^{-1}\left\{b_{i|T}b_{i|T}^T+Var(b_i|y_{1:T}, \Delta^{\star})\right\} \left.D(\delta^{\star})^{-1}\frac{\partial D(\delta^{\star})}{\partial \delta_j}\right]\nonumber\\ &&+\frac{1}{2}\sum_{t=1}^T {\rm tr}\left[\tilde{E}\left\{e_t(\theta) e_t(\theta)^T-D_t(\theta)\right\}\frac{\partial \tilde{R}(\delta^{\star})}{\partial \delta_j}\right]\nonumber\\ &&+\frac{1}{2}\sum_{t=1}^T {\rm tr}\left[\tilde{E}\left\{r_{t-1} (\theta)r_{t-1}(\theta)^T-N_{t-1}(\theta)\right\}\frac{\partial \tilde{Q}(\delta^{\star})}{\partial \delta_j}\right]\nonumber \end{eqnarray}
\par Inspection of the score vector (\ref{score2})$\sim$(\ref{score3}) shows that, in order to evaluate it at the present value $\Delta^{\star}$, we need (1) a single pass of the Kalman filter and smoother, and (2) to run an MCMC algorithm to obtain random samples $\theta^{(j)}\ (j=1,\cdots,M)$ from $f(\theta|y_{1:T},\Delta^{\star})$. These calculations can be carried out readily. It is interesting to compare this result with existing results for fixed-effects state space models. Engle and Watson (1981) constructed a set of filters for computing the score vector analytically. However, as pointed out by Koopman and Shephard (1992), this approach is cumbersome, difficult to program and typically much more expensive to use than numerically differentiating the likelihood. Koopman and Shephard (1992) and Koopman (1993) also obtained an analytical expression for the score vector, but their expressions are only feasible for the variance components, and the scores for the parameters in the observation and state transition matrices have to be computed by numerical differentiation. On the contrary, the exact expressions for the score vector given in (\ref{score2})$\sim$(\ref{score3}) can be used to compute the scores not only for the variance components but also for the fixed effects straightforwardly. \par As an illustration consider the autoregressive plus noise model given by (\ref{autoregression1})$\sim$(\ref{autoregression2}). The scores defined in (\ref{score2})$\sim$(\ref{score3}) can be
shown to be \begin{eqnarray}
\frac{\partial \log L(\Delta^{\star}|y_{1:T})}{\partial
\mu_{\theta}}&=&\sum_{i=1}^m \frac{E(\theta_i|y_{1:T}, \Delta^{\star})-\mu_{\theta}^{\star}}{\delta_3^{\star}},\label{exam1_score1}\\
\frac{\partial \log L(\Delta^{\star}|y_{1:T})}{\partial \delta_1}&=&\frac{1}{2}\sum_{t=1}^T \sum_{i=1}^m \left[\tilde{E}\left\{e_{it}^2(\theta_i)-D_{it}(\theta_i)\right\}\right], \label{exam1_score2}\\
\frac{\partial \log L(\Delta^{\star}|y_{1:T})}{\partial \delta_2}&=&\frac{1}{2}\sum_{t=1}^T \sum_{i=1}^m \left[\tilde{E} \left\{r_{it-1}^2(\theta_i)-N_{it-1}(\theta_i)\right\}\right], \label{exam1_score3}\\
\frac{\partial \log L(\Delta^{\star}|y_{1:T})}{\partial \delta_3}&=&-\frac{1}{2}\sum_{i=1}^m \frac{\delta_3^{\star}
-b_{i|T}^2-Var(b_i|y_{1:T},\Delta^{\star})}{\delta_3^{\star 2}}.\label{exam1_score4} \end{eqnarray} Here $e_{it}(\theta_i), r_{it}(\theta_i), D_{it}(\theta_i)$ and $N_{it}(\theta_i)$ have been defined in (\ref{e and D1})$\sim$(\ref{r and N2}) which correspond to the $i$th individual. If we denote the MLE of $\Delta$ by $\hat{\Delta}=(\hat{\mu}_{\theta},\hat{\delta}_1,\hat{\delta}_2,\hat{\delta}_3)^T$, then by equating these scores at $\hat{\Delta}$ to zero we have \begin{eqnarray}
&&\hat{\mu}_{\theta}=\frac{1}{m}\sum_{i=1}^m E(\theta_i|y_{1:T},\hat{\Delta}),\ \hat{\delta}_3=\frac{1}{m}\sum_{i=1}^m\left(
b_{i|T}^2+Var(b_i|y_{1:T},\hat{\Delta})\right),\label{exam1_1}\\ &&\hspace*{30pt}\sum_{i=1}^m\sum_{t=1}^T \left[\tilde{E}\{e_{it}^2(\theta_i)\}\right]= \sum_{i=1}^m\sum_{t=1}^T \left[\tilde{E}D_{it}(\theta_i)\right],\label{exam1_2}\\ &&\hspace*{30pt}\sum_{i=1}^m\sum_{t=1}^T \left[\tilde{E}\{r_{it-1}^2(\theta_i)\}\right]= \sum_{i=1}^m \sum_{t=1}^T \left[\tilde{E}N_{it-1}(\theta_i)\right].\label{exam1_3} \end{eqnarray}
Equation (\ref{exam1_1}) says that $\hat{\mu}_{\theta}$ is the sample mean of the posterior means $\tilde{E}(\theta_i)$ at $\Delta=\hat{\Delta}$. As for the second term in (\ref{exam1_1}), note that at the true parameter $\Delta_0$, $$E\{b_{i|T}^2+Var(b_i|y_{1:T},\Delta_0)\}=Var(E(b_i|y_{1:T},\Delta_0)) +EVar(b_i|y_{1:T},\Delta_0),$$ where the right-hand side is just equal to $\delta_3$, and so $\hat{\delta}_3$ can also be seen as a moment estimator. As for equations (\ref{exam1_2}) and (\ref{exam1_3}), it can easily be checked that for given $\theta \in \Theta$ \begin{eqnarray} E\{e_{it}^2(\theta)| \hat{\Delta},\theta\}=D_{it}(\theta),\quad E\{r_{it}^2(\theta)| \hat{\Delta},\theta\}=N_{it}(\theta), \end{eqnarray} i.e., (\ref{exam1_2}) and (\ref{exam1_3}) are the moment equations for estimating $\delta_1$ and $\delta_2$. Consequently $\hat{\Delta}$ can be regarded as a moment estimator. \par As another illustration consider the damped local linear model defined by (\ref{local linear1})$\sim$ (\ref{local linear2}). The score vectors can also be obtained from formulas (\ref{score2})$\sim$(\ref{score3}). In fact it turns out that the scores with respect to $\mu_{\theta}$, $\delta_1$ and $\delta_4$ have the same form as the scores given in (\ref{exam1_score1}), (\ref{exam1_score2}) and (\ref{exam1_score4}) respectively. As for $\delta_2$ and $\delta_3$ we have \begin{eqnarray}
\frac{\partial \log L(\Delta^{\star}|y_{1:T})}{\partial \delta_2}&=&\frac{1}{2}\sum_{t=1}^T \sum_{i=1}^m \left[\tilde{E} \left\{(r_{it-1}^{(z)})^2(\theta_i)-N_{it-1}^{(zz)} (\theta_i)\right\}\right], \label{exam2_score1}\\
\frac{\partial \log L(\Delta^{\star}|y_{1:T})}{\partial \delta_3}&=&\frac{1}{2}\sum_{t=1}^T \sum_{i=1}^m \left[\tilde{E} \left\{(r_{it-1}^{(u)})^2(\theta_i)-N_{it-1}^{(uu)} (\theta_i)\right\}\right]. \label{exam2_score2} \end{eqnarray} Here $r_{it}^{(z)}$, $r_{it}^{(u)}$, $N_{it}^{(zz)}$ and $N_{it}^{(uu)}$ have been defined in section \ref{EM}. \par From these two illustrations it can be seen that for i.i.d. individuals, maximum likelihood estimation of the MESSM is equivalent to a moment estimation. For the general cases where the individuals may be correlated, this conclusion also holds, but more complex moment equations are needed in those situations.
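\par To indicate how the scores (\ref{exam1_score1})$\sim$(\ref{exam1_score4}) can be evaluated in practice, the following sketch averages the per-individual smoother output over posterior draws $\theta^{(j)}$; it reuses the \texttt{disturbance\_smoother} sketch of section \ref{EM}, assumes the draws have already been produced (e.g.\ by the random-walk Metropolis sampler mentioned there), and all names are ours.
\begin{verbatim}
import numpy as np

def ar_plus_noise_scores(y, theta_draws, mu, d1, d2, d3, a1, P1):
    """Monte Carlo scores for the autoregressive plus noise model.

    y: (m, n) observations; theta_draws: (M, m) posterior draws of theta_i;
    mu, d1, d2, d3: current values of mu_theta, delta_1, delta_2, delta_3;
    a1, P1: mean and variance of the initial state (illustrative inputs).
    """
    M, m = theta_draws.shape
    n = y.shape[1]
    s_mu = np.sum(theta_draws.mean(axis=0) - mu) / d3
    b = theta_draws - mu                          # posterior draws of b_i
    s_d3 = -0.5 * np.sum(d3 - b.mean(axis=0) ** 2 - b.var(axis=0)) / d3 ** 2
    s_d1 = s_d2 = 0.0
    for j in range(M):
        for i in range(m):
            e, D, r, N = disturbance_smoother(
                y[i][:, None], np.array([[theta_draws[j, i]]]), np.eye(1),
                np.array([[d2]]), np.array([[d1]]), np.array([a1]),
                np.array([[P1]]))
            s_d1 += sum(e[t][0] ** 2 - D[t][0, 0] for t in range(n))
            s_d2 += sum(r[t][0] ** 2 - N[t][0, 0] for t in range(n))
    return s_mu, 0.5 * s_d1 / M, 0.5 * s_d2 / M, s_d3
\end{verbatim}
The four returned numbers can then be supplied to a quasi-Newton routine (e.g.\ BFGS) as the gradient of the log likelihood at $\Delta^{\star}$.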
\subsection{State estimation}\label{state estimation} In this section we discuss algorithms for state estimation of the MESSM under the assumption that the true parameter $\Delta_0$ is known. If the random effects $b_i$ were also known, then the Kalman filter would yield the optimal state estimator. As mentioned in section \ref{introduction}, however, it is inappropriate to assume that the $b_i$'s are known in the setting of sparse longitudinal data, and consequently the Kalman filter should not be applied directly.
\par One way out is to treat the random effects as additional state variables; the MESSM then becomes a nonlinear state space model, and the usual nonlinear filters or Monte Carlo filters can be employed to estimate the state. Though straightforward, this approach is suboptimal because it does not use the structural information contained in MESSM (\ref{equation 1})$\sim$(\ref{equation 2}) in an efficient way.\par Note that given the random effects,
MESSM is a conditional linear state space model and so the
mixture Kalman filter proposed in Chen and Liu (2001)
seems to be a good candidate for state
estimation. However, because the parameter $\theta$ is static in the present setting, the re-sampling step in the mixture Kalman filter makes the sample $\left\{\theta_{t}^{(1)}, \cdots, \theta_{t}^{(M)}\right \}$ at time $t$ a subsample of the sample $\{\theta_{t-1}^{(1)}, \cdots, \theta_{t-1}^{(M)} \}$ at time $t-1$. This makes $\{\theta_{t}^{(1)}, \cdots, \theta_{t}^{(M)} \}$ a poor representation of the posterior $f(\theta|y_{1:t})$ as time passes. In order to obtain an improved representation of $f(\theta|y_{1:t})$, in the following we present another algorithm which overcomes the problem of particle degeneracy and usually represents $f(\theta|y_{1:t})$ better than the usual mixture Kalman filter. This filter algorithm is adapted from the work of Liu and West (2001). The idea is to approximate the posterior distribution $f(\theta|y_{1:t})$ sequentially by a suitable mixture of normal distributions. Then the problem of sampling from the complex posterior $f(\theta|y_{1:t})$ becomes a problem of sampling from a mixture distribution, which can be carried out straightforwardly. Specifically, at time $t$ we assume the following approximation is appropriate \begin{eqnarray}
f(\theta|y_{1:t})\approx \sum_{j=1}^M w_{t}^{(j)}N(m_t^{(j)}, h^2 V_t) \end{eqnarray} for some proper $w_t^{(j)}$, $m_t^{(j)}$ and $V_t$. The choices of $w_t^{(j)}$, $m_t^{(j)}$ and $V_t$ depend on the last particles $\{\theta_{t-1}^{(1)}, \cdots, \theta_{t-1}^{(M)} \}$ and the present observation $y_t$. The smoothing parameter $h$ controls the overall scale. We denote the Kalman filter at time $t\geq 1$ corresponding to $\theta_{t}^{(j)}$ by
$KF_{t}^{(j)}=\left(x_{t|t}^{(j)},P_{t|t}^{(j)},x_{t+1|t}^{(j)},P_{t+1|t}^{(j)}
\right)$ where $x_{t|t}^{(j)}$ denotes the filter estimator of $x_t$
with variance $P_{t|t}^{(j)}$; $x_{t+1|t}^{(j)}$ denotes the one-step-ahead predictor of $x_{t+1}$ with variance
$P_{t+1|t}^{(j)}$. The filter algorithm can then be stated as follows. \par
Suppose the Monte Carlo samples $\theta_{t-1}^{(j)}$ and weights $w_{t-1}^{(j)}$ ($j=1,\cdots,M$), representing the posterior $f(\theta|y_{1:t-1})$, are available, and that the Kalman filter quantities $KF_{t-1}^{(j)}$ have been computed. Let $\bar{\theta}_{t-1}$ and $V_{t-1}$ denote the weighted sample mean and variance of the particles $\{\theta_{t-1}^{(1)}, \cdots, \theta_{t-1}^{(M)} \}$, respectively.
Then at time $t$ when the observation $y_t$ is brought in,
\par $\bullet$ For each $j=1,\cdots,M$, compute
$ m_{t-1}^{(j)}=a\theta_{t-1}^{(j)}+(1-a)\bar{\theta}_{t-1}$ where $a=\sqrt{1-h^2}$.
\par $\bullet$ Sample an auxiliary integer from the set $\{1,\cdots,M\}$ with probabilities proportional to $z_t^{(j)}\propto w_{t-1}^{(j)} f(y_t|x_{t|t-1}^{(j)}, \theta_{t-1}^{(j)})$, and denote the sampled index by $k$. \par $\bullet$ Sample a new parameter vector $\theta_t^{(k)}$ from the $k$th normal component of the kernel density, i.e., $\theta_t^{(k)}\sim N(m_{t-1}^{(k)}, h^2V_{t-1})$. \par $\bullet$ For $\theta_t^{(k)}$, compute $KF_t^{(k)}$ and evaluate the corresponding weight $$w_{t}^{(k)}=\frac{f(y_{t}|x_{t|t}^{(k)}, \theta_t^{(k)})}{ f(y_{t}|x_{t|t-1}^{(k)}, m_{t-1}^{(k)})}.$$ \par $\bullet$ Repeat the second to fourth steps a large number of times to produce a final posterior approximation consisting of particles $\theta_t^{(k)}$ and Kalman filters $KF_t^{(k)}$, both associated with weights $w_t^{(k)}$. \par We call the above algorithm the mixture Kalman filter with kernel smoothing (MKF-KS). Historically, using kernel smoothing of densities to approximate the posterior distribution of a dynamic system originated with West (1993a,1993b). MKF-KS assumes that the posterior can be well approximated by a mixture of normal distributions, which in many cases is a reasonable assumption. More importantly, MKF-KS solves the problem of particle degeneracy satisfactorily in most settings. From Example 2 in section \ref{numerical studies} it can be seen that MKF-KS does perform well. Therefore we recommend using MKF-KS to estimate the state of the MESSM when the observations are sparse.\par In addition to state estimation, MKF-KS can also be used as a basis to estimate the observed information matrix, whose inverse is usually taken as the estimate of the variance matrix of the maximum likelihood estimator. Poyiadjis et al (2011) were the first to use the particle filter to approximate the observed information matrix. Nemeth et al (2013) improved the efficiency of such algorithms by using the kernel smoothing idea of Liu and West (2001). The details of this algorithm are omitted for brevity; see Nemeth et al (2013). In section \ref{numerical studies}, we combine MKF-KS with Algorithm 3 in Nemeth et al (2013) to estimate the observed information matrix. \section{Extensions}\label{extensions} \subsection{Incomplete observations}\label{incomplete observations} In the previous sections we have assumed that all the individuals are observed at all time points. For longitudinal data, however, such an assumption often does not hold, and the observations for some or even all of the individuals may be missing at a given time point. In this section we show that the mixed-effects state space model can easily be adapted to accommodate such situations. \par Assume first that the observations for all of the individuals are missing at time $t$ for $\tau\leq t \leq \tau^{\star}-1$. As for the EM algorithm in section \ref{model estimation}, the intermediate quantity is now given by (\ref{Q function}) minus the following terms, \begin{eqnarray}\label{missing Q} &&\quad \quad\quad \quad -\frac{\tau^{\star}-\tau}{2}\log
|\tilde{R}(\delta)|-\frac{\tau^{\star}-\tau}{2}\log
|\tilde{Q}(\delta)|\nonumber\\ &&-\frac{1}{2}\sum_{t=\tau}^{\tau^{\star}-1} {\rm tr}\left[\tilde{R}(\delta)^{-1}\left\{\tilde{R}(\delta^{\star})+ \tilde{R}(\delta^{\star})\tilde{E}(e_t^2(\theta)-D_t(\theta))\tilde{R}(\delta^{\star}) \right\}\right] \end{eqnarray} \begin{eqnarray} &&-\frac{1}{2}\sum_{t=\tau}^{\tau^{\star}-1} {\rm tr}\left[\tilde{Q}(\delta)^{-1}\left\{\tilde{Q}(\delta^{\star})+ \tilde{Q}(\delta^{\star})\tilde{E}(r_{t-1}^2(\theta)-N_{t-1}(\theta))\tilde{Q}(\delta^{\star}) \right\}\right].\nonumber \end{eqnarray} Note here $\tilde{E}(\cdot)$ is interpreted as
$\tilde{E}(\cdot)=E(\cdot|y_{1:\tau-1,\tau^{\star}:T},\Delta^{\star})$. As for the quasi-Newton algorithm in section \ref{model estimation}, equation (\ref{score1}) still holds in the present situation with the new interpretation of $\tilde{E}(\cdot)$. It can be shown straightforwardly that the scores with respect to the fixed effects are the same as those given in (\ref{score2}), while the scores with respect to the variance components are just those given in (\ref{score3}) minus the following terms, \begin{eqnarray*} \sum_{t=\tau}^{\tau^{\star}-1} {\rm tr}\left[\tilde{E}\left\{e_t(\theta) e_t(\theta)^T-D_t(\theta)\right\}\frac{\partial \tilde{R}(\delta^{\star})}{\partial \delta_j}+\tilde{E}\left\{r_{t-1}(\theta) r_{t-1}(\theta)^T-N_{t-1}(\theta)\right\}\frac{\partial \tilde{Q}(\delta^{\star})}{\partial \delta_j}\right]. \end{eqnarray*} \par As for state estimation, the only changes occur when $\tau\leq t \leq \tau^{\star}-1$. Given $\theta^{(j)}$ with $1\leq j \leq M$, the Kalman filter involved in MKF-KS at time $\tau\leq t \leq \tau^{\star}-1$
can be stated as
\begin{eqnarray}
&&\hspace*{40pt}x_{t|t}^{(j)}=x_{t|t-1}^{(j)},\quad P_{t|t}^{(j)}=P_{t|t-1}^{(j)},\nonumber\\
&&x_{t+1|t}^{(j)}=T(\theta^{(j)})x_{t|t}^{(j)},\quad P_{t+1|t}^{(j)}=T(\theta^{(j)})P_{t|t}^{(j)}T(\theta^{(j)})^T+\tilde{Q}.\nonumber \end{eqnarray}
As for the weights involved in MKF-KS, we only need to modify the probability in the second step of MKF-KS from $w_{t-1}^{(j)}f(y_t|x_{t|t-1}^{(j)},\theta_{t-1}^{(j)})$ to $w_{t-1}^{(j)}$; the weight in the fourth step is unchanged.
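\par A minimal Python sketch of the prediction-only Kalman step displayed above (inputs are \texttt{numpy} arrays; the function name is ours):
\begin{verbatim}
def kalman_step_missing(x_pred, P_pred, T, Q):
    """Kalman step at a time point where the whole observation is missing:
    the filtered moments equal the predicted ones, which are then propagated."""
    x_filt, P_filt = x_pred, P_pred          # x_{t|t},   P_{t|t}
    x_next = T @ x_filt                      # x_{t+1|t}
    P_next = T @ P_filt @ T.T + Q            # P_{t+1|t}
    return x_filt, P_filt, x_next, P_next
\end{verbatim}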
\par Another type of missing data occurs when only some of the individuals are unobserved at a given time point. In order to accommodate this case, we only need to allow the observation matrix $\tilde{Z}(\theta)$ to be time-dependent. Now model
(\ref{model1})$\sim$(\ref{model2}) becomes \begin{eqnarray} x_t&=&\tilde{T}(\theta)x_{t-1}+v_t,\label{model3}\\ y_t&=&\tilde{Z}_t(\theta)x_{t}+w_t.\label{model4} \end{eqnarray} Equations (\ref{model3})$\sim$(\ref{model4}) allow $\tilde{Z}_t(\theta)$ to have different dimensions at different time points and thus can accommodate this type of missing data. The algorithms for parameter and state estimation given in section \ref{model estimation} can be extended straightforwardly to accommodate this more general model. Example 2 in the next section involves a real data set which contains both types of missing data. \subsection{General transition matrix}\label{general transition matrix} In section \ref{model formulation}, we assumed that the individuals in the group can be correlated, i.e., the covariance matrix $Q(i,i')$ may be a non-diagonal matrix. In addition to allowing a non-diagonal covariance matrix, the correlation within the group can also be modeled by adopting a different form of $\tilde{T}(\theta)$, the state transition matrix. In section \ref{model formulation}, we assumed that $\tilde{T}(\theta)$ is a block diagonal matrix, i.e., $\tilde{T}(\theta)=\{_d\ T(\theta_i)\}_{i=1}^m$. It can be seen that the algorithms for parameter and state estimation in the previous sections apply regardless of whether $\tilde{T}(\theta)$ is block diagonal or not. Non-diagonal transition matrices occur in many different situations. Consider the following target tracking model, \begin{eqnarray} d\dot{S}_{it}&=&\{-\alpha_i[S_{it}-h(S_t)]-\gamma_i \dot{S}_{it}-\beta_i[\dot{S}_{it}-g(\dot{S}_t)]\}dt+dW_{it}+dB_t,\label{ground tracking} \end{eqnarray} where $S_{it}=(S_{it}^{(x)},S_{it}^{(y)})^T$ denotes the position of target $i$ at time $t$; $\dot{S}_{it}=(\dot{S}_{it}^{(x)},\dot{S}_{it}^{(y)})^T$ denotes the velocity of target $i$ at time $t$; $h(S_t)=\frac{1}{m}\sum_{i=1}^m S_{it}$ and $g(\dot{S}_t)=\frac{1}{m}\sum_{i=1}^m \dot{S}_{it}$ denote the average position and velocity at time $t$. $B_t$ is a 2-dimensional Brownian motion common to all targets; $W_{it}$ is another 2-dimensional Brownian motion assumed to be independently generated for each target $i$ in the group; $\alpha_i$ denotes the rate at which $S_{it}$ restores to the average position $h(S_t)$; $\beta_i$ denotes the rate at which $\dot{S}_{it}$ restores to the average velocity $g(\dot{S}_t)$; $\gamma_i$ denotes the rate at which $\dot{S}_{it}$ restores to zero. Model (\ref{ground tracking}) is the fundamental model for the group tracking problem. In the existing literature, e.g., Khan et al (2005), Pang et al (2008, 2011), the three restoring parameters $\alpha_i,\beta_i, \gamma_i$ are assumed to be identical across different individuals, i.e., $\alpha_1=\cdots=\alpha_m$, $\beta_1=\cdots=\beta_m$, $\gamma_1=\cdots=\gamma_m$. Here, with the MESSM in hand, we can relax this restriction and allow different restoring parameters for different individuals, which is more reasonable in most situations. Let $\theta_i=(\alpha_i,\beta_i,\gamma_i)$ and $\theta^T=(\theta_1^T,\cdots,\theta_m^T)$.
For $i=1,\cdots,m$, let $$A_{i2}=\left( \begin{array}{cc} 0&1\\ -\alpha_i+\frac{\alpha_i}{m}&-\beta_i-\gamma_i+\frac{\beta_i}{m} \end{array}\right),\quad A_{i4}=\left( \begin{array}{cc} 0&0\\ \frac{\alpha_i}{m}& \frac{\beta_i}{m} \end{array}\right), $$ and $A_{i1}=\{_d\ A_{i2},A_{i2}\}$, $A_{i3}=\{_d\ A_{i4}, A_{i4}\}$, $$A(\theta)=\left( \begin{array}{cccc} A_{11}&A_{13}&\cdots&A_{13}\\ A_{23}&A_{21}&\cdots&A_{23}\\ \cdot&\cdot&\cdots&\cdot\\ \cdot&\cdot&\cdots&\cdot\\ A_{m3}&A_{m3}&\cdots&A_{m1} \end{array} \right)_{4m\times 4m}. \label{assumption 1}$$ Defining the non-diagonal matrix $T(\theta)=\exp (A(\theta)\tau)$ where $\tau$ is the time between successive observations , then we have the following discretized version of model (\ref{ground tracking}) for $m$ targets, \begin{eqnarray}\label{state equation} x_t=T(\theta)x_{t-1}+v_t, \end{eqnarray} where $x_t=(S_{1t}^{(x)},\dot{S}_{1t}^{(x)}, S_{1t}^{(y)}, \dot{S}_{1t}^{(y)},\cdots,S_{mt}^{(x)},\dot{S}_{mt}^{(x)}, S_{mt}^{(y)}, \dot{S}_{mt}^{(y)} )^T$, $v(t)=(v_{1t}^T,\cdots,v_{mt}^T)^T$ and $v_{it}$ denotes the state disturbance for the $i$th target with $v_{it} \sim N_4(0,Q)$ and $ Cov(v_{it},v_{jt})=\Sigma_{4\times 4} $ for $i\neq j$. Consequently $Var(v(t))=(\mathbf{1}_m\otimes\mathbf{1}_m)\Sigma+\{_d \ Q-\Sigma\}_{i=1}^m$ where $\mathbf{1}_m$ denotes the $m$-dimensional vector with entry one.
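\par To illustrate the construction of this non-diagonal transition matrix, the following sketch assembles $A(\theta)$ from the blocks defined above and forms $T(\theta)=\exp(A(\theta)\tau)$ via the matrix exponential of \texttt{scipy}; the function name is ours.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def group_transition_matrix(theta, tau):
    """Build T(theta) = expm(A(theta) * tau) for m interacting targets.

    theta: array of shape (m, 3) whose rows are (alpha_i, beta_i, gamma_i).
    """
    m = theta.shape[0]
    A = np.zeros((4 * m, 4 * m))
    for i, (alpha, beta, gamma) in enumerate(theta):
        A2 = np.array([[0.0, 1.0],
                       [-alpha + alpha / m, -beta - gamma + beta / m]])
        A4 = np.array([[0.0, 0.0],
                       [alpha / m, beta / m]])
        Ai1 = np.kron(np.eye(2), A2)    # block diag(A_{i2}, A_{i2})
        Ai3 = np.kron(np.eye(2), A4)    # block diag(A_{i4}, A_{i4})
        for j in range(m):
            A[4*i:4*i+4, 4*j:4*j+4] = Ai1 if j == i else Ai3
    return expm(A * tau)
\end{verbatim}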
Furthermore, for $\theta_i\ (i=1,\cdots,m)$, we assume \begin{eqnarray}\label{mixed equation} \theta_i=\mu_{\theta}+b_i\sim {\rm i.i.d.}N(\mu_{\theta},D), \end{eqnarray} where $\mu_{\theta}^T\triangleq(\alpha,\beta,\gamma)$ represents the fixed effects and $b_i\sim N_3(0,D)$ the random effects. In matrix form we have $ \theta=\mathbf{1}_m\otimes\mu_{\theta}+b$ where $b^T=(b_1^T,\cdots,b_m^T)\sim N_{3m}\left(0,\{_d\ D\}_{i=1}^m\right)$. Equations (\ref{state equation})$\sim$(\ref{mixed equation}) constitute the state equations of a MESSM. The measurement model is more complex and we refer to Khan et al (2005), Pang et al (2008, 2011) for more details in this respect. These state equations are a meaningful generalization of existing group target tracking models. \subsection{Time-dependent effects}\label{time dependent effects} In the previous sections both the fixed effects $a$ and the random effects $b_i$ are assumed to be static, i.e., constant across the time range. In some situations, as can be seen in Example 2 in the next section, $a$ and $b_i$ can be time-dependent. It turns out that the results given in the previous sections can easily be adapted to accommodate time-dependent effects. For illustrative purposes, consider the case in which there exists a time point $1<T'<T$ such that for $1\leq t\leq T'$ we have $\theta_i=\Psi_i^{(1)} a_1+b_{i1}$ with $b_{i1}\sim N(0,D_1)$, while for $T'<t\leq T$ we have $\theta_i=\Psi_i^{(2)} a_2+b_{i2}$ with $b_{i2}\sim N(0,D_2)$. For ease of exposition, we assume the individuals are independent of each other. The unknown parameters include $\Delta=(a_1,a_2,\delta^T)^T$, where $\delta$ denotes the unknown parameters contained in $D_1, D_2, Q$ and $R$. In this situation, the intermediate quantity of the EM algorithm can be shown to be \begin{eqnarray*}\label{extended Q} Q(\Delta,\Delta^{\star}) &=& -\frac{m}{2}\log
|D_1(\delta)|-\frac{m}{2}\log |D_2(\delta)| -\frac{Tm}{2}\log
|R(\delta)|-\frac{Tm}{2}\log |Q(\delta)|\nonumber\\
&&-\frac{1}{2}\sum_{i=1}^m {\rm tr}\left[D_1(\delta)^{-1}\left\{(\Psi_i^{(1)}(a^{\star}_1-a_1)+b_{i1|T})\right.\right.\\ &&\quad\quad \quad\quad
\left.\left.\times(\Psi_i^{(1)}(a^{\star}_1-a_1)+b_{i1|T})^T+{\rm Var}(b_{i1}|y_{i,1:T},\Delta^{\star}) \right\}\right]\nonumber\\
&&-\frac{1}{2}\sum_{i=1}^m {\rm tr}\left[D_2(\delta)^{-1}\left\{(\Psi_i^{(2)}(a^{\star}_2-a_2)+b_{i2|T})\right.\right.\\ &&\quad\quad \quad\quad
\left.\left.\times(\Psi_i^{(2)}(a^{\star}_2-a_2)+b_{i2|T})^T +{\rm Var}(b_{i2}|y_{i,1:T},\Delta^{\star}) \right\}\right]\nonumber \end{eqnarray*} \begin{eqnarray*}
&&-\frac{1}{2}\sum_{i=1}^m \sum_{t=1}^T
{\rm tr}\left[R(\delta)^{-1}\left\{w_{it|T}w_{it|T}^T+{\rm Var}(w_{it}|y_{1:T},\Delta^{\star})\right\}\right]\nonumber\\
&&-\frac{1}{2}\sum_{i=1}^m \sum_{t=1}^T {\rm tr}\left[Q(\delta)^{-1}\left\{v_{it|T}v_{it|T}^T+{\rm Var}(v_{it}|y_{1:T},\Delta^{\star})\right\}\right],\nonumber \end{eqnarray*}
where $b_{i1|T}, b_{i2|T}, w_{it|T}, v_{it|T}$ have the same interpretation as $b_{i|T}, w_{t|T}, v_{t|T}$ in section \ref{model estimation}. As for the quasi-Newton algorithm, the score vector can now be shown to be \begin{eqnarray*}
\left.\frac{\partial \log L(\Delta | y_{1:T})} {\partial a_1}\right|_{\Delta=\Delta^{\star}}=\sum_{i=1}^m
\psi_i^TD_1(\delta^{\star})^{-1}b_{i1|T}, \end{eqnarray*}
\begin{eqnarray*} \left.\frac{\partial \log L(\Delta | y_{1:T})}
{\partial a_2}\right|_{\Delta=\Delta^{\star}}=\sum_{i=1}^m
\psi_i^TD_2(\delta^{\star})^{-1}b_{i2|T}, \end{eqnarray*} \begin{eqnarray*}
&&\left.\frac{\partial \log L(\Delta |
y_{1:T})}{\partial \delta_j}\right |_{\Delta=\Delta^{\star}}= -\frac{1}{2}\sum_{i=1}^m {\rm tr } \left[ D_1(\delta^{\star})^{-1}\frac{\partial D_1(\delta^{\star})}{\partial \delta_j}\right.\\ &&\quad\quad\quad\quad
-D_1(\delta^{\star})^{-1}\left\{b_{i1|T}b_{i1|T}^T+Var(b_{i1}|y_{1:T}, \Delta^{\star})\right\} \left.D_1(\delta^{\star})^{-1}\frac{\partial D_1(\delta^{\star})}{\partial \delta_j}\right]\nonumber\\ &&-\frac{1}{2}\sum_{i=1}^m {\rm tr } \left[ D_2(\delta^{\star})^{-1}\frac{\partial D_2(\delta^{\star})}{\partial \delta_j}\right.\\ &&\quad\quad\quad\quad
-D_2(\delta^{\star})^{-1}\left\{b_{i2|T}b_{i2|T}^T+Var(b_{i2}|y_{1:T}, \Delta^{\star})\right\}\nonumber \times \left.D_2(\delta^{\star})^{-1}\frac{\partial D_2(\delta^{\star})}{\partial \delta_j}\right]\\ && +\frac{1}{2}\sum_{i=1}^m \sum_{t=1}^T {\rm tr}\left[\tilde{E}\left\{e_t(\theta) e_t(\theta)^T-D_t(\theta)\right\}\frac{\partial R(\delta^{\star})}{\partial \delta_j}\right]\nonumber\\ &&+\frac{1}{2}\sum_{i=1}^m \sum_{t=1}^T {\rm tr}\left[\tilde{E}\left\{r_{t-1} (\theta)r_{t-1}(\theta)^T-N_{t-1}(\theta)\right\}\frac{\partial Q(\delta^{\star})}{\partial \delta_j}\right]. \nonumber \end{eqnarray*} \par Liu et al (2011) used time-dependent effects to model the dynamics of HIV viral load in vivo. Their model can be formulated as that defined in (\ref{autoregression1})$\sim$(\ref{autoregression2}) with the modification that for $1\leq t\leq T'$, $\theta_i=\mu_{\theta_1}+b_{i1}$ with $b_{i1}\sim N(0,\delta_3)$, while for $T'<t\leq T$, $\theta_i=\mu_{\theta_2}+b_{i2}$ with $b_{i2}\sim N(0,\delta_4)$. For this model, recursive formulas for the EM and quasi-Newton algorithms can be derived straightforwardly from the expressions given above. It turns out that these formulas are similar to those given in section \ref{model estimation}, and so the details are omitted. \section{Numerical Studies}\label{numerical studies} In this section we investigate the performance of the proposed algorithms through two numerical examples. The first example uses simulated data generated from the autoregressive
plus noise model; the second example involves a clinical trial data set which has been investigated by several other authors. For parameter estimation both the EM and BFGS algorithms are carried out, but only the results of BFGS are reported because the results are similar. The variances are calculated from the observed information matrix based on MKF-KS and Algorithm 3 in Nemeth et al (2013). \par {\bf Example 1.}\ Consider the model
given by (\ref{autoregression1})$\sim$ (\ref{autoregression2}). The unknown parameters include $\Delta=(\mu_{\theta},\delta_1,\delta_2,\delta_3)$. To generate the simulated data, the true parameters are set to $\Delta_0=(0.3,0.3,3,0.1)$ and the initial state satisfies $x_0\sim N(0, 3.2)$. We only consider the problem of parameter estimation in this example, and three sample sizes, $m=15,30,50$, are investigated. In each case, three series lengths, $T=10,20,30$, are considered. The number of replications for each combination is set to 500. The number of random samples generated from the posterior distribution of the random effects is set to $M=200$. The results, including the parameter estimates and the corresponding standard errors, are reported in Table 1. From Table 1 it can be seen that the proposed inference approaches provide reasonable estimates of the unknown parameters. \par {\bf Example 2.}\ A data set from a clinical trial on AIDS
has been investigated in Liu et al (2011), Wu and Ding (1999) and Lederman et al (1998). This data set contains the records of 48 HIV infected patients who were treated with potent antiviral drugs. Dynamic models with mixed effects for this data set have been constructed in the literature; see Wu and Ding (1999) and Liu et al (2011). In particular, the model proposed in Liu et al (2011) is just the model given in the last paragraph of section \ref{time dependent effects}. For parameter estimation, they investigated the EM algorithm and a Bayesian method. For state estimation, they took the estimates as the true values of the parameters and then employed the Kalman filter to estimate the state. Here the same model is investigated, and the focus is placed on the statistical inference for this model. The observations $y_{it}$ are the base 10 logarithm of the viral load for patient $i$ at week $t$. The unknown parameters include $\Delta=(\mu_{\theta_1}, \mu_{\theta_2}, \delta_1,\delta_2, \delta_3,\delta_4)^T$. Note that for each patient there exist some time points at which the corresponding records $y_{it}$ are missing. Thus the models in sections \ref{incomplete observations} and \ref{time dependent effects} need to be combined to analyze this data set.
\par For parameter estimation the results, including the parameter estimates and the corresponding standard errors, are reported in Table 2. Table 3 presents the estimated individual parameters using the particles $\{(\theta^{(j)}_t,w_t^{(j)}), j=1,\cdots,M\}$ generated by the MKF-KS algorithm at the last time point. These estimates are just the weighted means of $\theta_t^{(j)}$ with weights $w_t^{(j)}$. With the estimated population parameters in hand, state estimation is carried out using the MKF-KS algorithm. The resulting filter estimates and one-step-ahead predictions are plotted in Figure 1 for the four patients who have the most observations among these 48 patients. For the purpose of comparison we also run the Kalman filter with the individual parameters replaced by their estimates. Figure 2 presents box plots of the mean squared errors of MKF-KS and the Kalman filter for the 48 patients. It seems that the two MSE's are similar in magnitude. This can be explained as follows. On the one hand, the Kalman filter uses all the observations and should outperform the MKF-KS algorithm, which uses only the observations up to the present time point. On the other hand, the predicted random effects are taken as the true random effects in the Kalman filter, which results in bias in state estimation, while for MKF-KS the random effects are integrated out when the states are estimated, so the state estimates are less affected by estimation errors. Both factors affect the magnitude of the MSE's. Recall that, in contrast to the Kalman filter, the main advantage of MKF-KS is that it can provide recursive state estimation without the random effects being known. This point is more important in the setting of sparse data, in which the random effects cannot be estimated accurately. \section{Conclusion}\label{conclusions} We consider both parameter and state estimation for the linear mixed-effects state space model, which can accommodate correlated individuals. For parameter estimation, EM and score-based algorithms are investigated based on disturbance smoothing. The implementation of
the EM and score-based algorithms requires only random samples of the random effects from their posterior distribution. In particular, the proposed EM algorithm can be regarded as a Rao-Blackwellized version of that proposed in Liu et al (2011). For state estimation, because longitudinal data sets usually involve sparse data with which the random effects cannot be estimated accurately, we advocate that state estimation be carried out without assuming the random effects to be known. To this end a kernel-smoothing-based mixture Kalman filter is proposed to estimate the state. Numerical studies show that the proposed inference procedures perform well in finite samples. The proposed models and statistical inferences can be extended in different ways. For example, nonlinear mixed-effects state space models with additive Gaussian errors can be handled by ideas similar to those in this paper without much difficulty. But for the general nonlinear/non-Gaussian state space model with mixed effects, the proposed algorithms cannot be applied, and new inference techniques need to be developed. Another interesting problem is how to carry out the parameter estimation in a recursive manner. For ordinary fixed-effects state space models, there exist some studies in this respect. Extending such inference to state space models with mixed effects is also meaningful.
\appendix \begin{table} \caption{Parameter estimates and standard errors with true parameters $\mu_{\theta}=0.3,\delta_1=0.3, \delta_2=3,\delta_3=0.1$.} \label{table1} \centering \begin{tabular}{rccccccccc} \hline\hline &&\multicolumn{2}{c}{$\mu_{\theta}$}&\multicolumn{2}{c}{$\delta_1$} &\multicolumn{2}{c}{$\delta_2$}&\multicolumn{2}{c}{$\delta_3$}\\
Cases&&Estimate&SE&Estimate&SE&Estimate&SE&Estimate&SE\\
\hline $m$=\small{15}&$T$=\small{10}&\small{0.27}&\small{0.05}&\small{0.40}&\small{0.1} &\small{4.70}&\small{0.52}&\small{0.14}&\small{0.007}\\ &$T$=\small{20}&\small{0.26}&\small{0.04}&\small{0.38}&\small{0.08}&\small{3.21} &\small{0.47}&\small{0.07}&\small{0.006}\\ &$T$=\small{30}&\small{0.28}&\small{0.04}&\small{0.34}&\small{0.03} &\small{3.17}&\small{0.27}&\small{0.07}&\small{0.006}\\ \hline $m$=\small{30}&$T$=\small{10}&\small{0.27}&\small{0.02}&\small{0.37}&\small{0.07} &\small{3.82}&\small{0.29}&\small{0.13}&\small{0.005}\\ &$T$=\small{20}&\small{0.28}&\small{0.02}&\small{0.34}&\small{0.03}&\small{2.43}&\small{0.17}&\small{0.12}&\small{0.006}\\ &$T$=\small{30}&\small{0.31}&\small{0.01}&\small{0.24}&\small{0.02}& \small{2.71}&\small{0.20}&\small{0.08}&\small{0.005}\\ \hline $m$=\small{50}&$T$=\small{10}&\small{0.30}&\small{0.01}&\small{0.34}&\small{0.06}&\small{3.51}&\small{0.27}&\small{0.12}&\small{0.004}\\ &$T$=\small{20}&\small{0.31}&\small{0.01}&\small{0.32}&\small{0.04}&\small{3.22} &\small{0.21}&\small{0.11}&\small{0.005}\\ &$T$=\small{30}&\small{0.31}&\small{0.01}&\small{0.32}&\small{0.04}&\small{2.87}&\small{0.20}&\small{0.11}&\small{0.001}\\ \hline \end{tabular} \end{table}
\begin{table} \caption{Population parameter estimates and standard errors} \label{table2} \centering \begin{tabular*}{0.8\textwidth}{@{\extracolsep{\fill}}lcccccc} \hline \hline &$\mu_{\theta_1}$&$\mu_{\theta_2}$&$\delta_1$&$\delta_2$&$\delta_3$&$\delta_4$\\ \hline Estimates&\small{0.85}&\small{0.86}&\small{0.33}&\small{0.76}&\small{0.007}&\small{0.044}\\ \hline SE's&\small{0.06}&\small{0.04}&\small{0.08}&\small{0.23}&\small{0.002}&\small{0.01}\\ \hline \end{tabular*} \end{table}
\begin{table} \caption{Estimation of the individual parameters for 48 patients} \begin{tabular}{lccccccc} \hline\hline $\theta_{i}^{(1)}:$&&&&&&&\\ \small{0.868915}&\small{0.849607}&\small{0.868957} &\small{0.827189}&\small{0.851339}&\small{0.847058}&\small{0.848824}&\small{0.851733}\\ \small{0.849319}&\small{0.868136}&\small{0.838490}&\small{0.835906}&\small{0.859012}&\small{0.825839}&\small{0.846816}&\small{0.859276}\\ \small{0.867417}&\small{0.857401}&\small{0.843068}&\small{0.835888}&\small{0.837270} &\small{0.852747}&\small{0.832048}&\small{0.842219}\\ \small{0.850987}&\small{0.852832}&\small{0.835151} &\small{0.856031}&\small{0.872748}&\small{0.873013}&\small{0.840243}&\small{0.851437}\\ \small{0.893272}&\small{0.865324}&\small{0.853658}&\small{0.858038} &\small{0.863467}&\small{0.836726}&\small{0.837801}&\small{0.846284}\\ \small{0.809998}&\small{0.844643}&\small{0.846764}&\small{0.848282} &\small{0.846723}&\small{0.833354}&\small{0.837123}&\small{0.828165}\\ \hline $\theta_{i}^{(2)}:$&&&&&&&\\ \small{0.852200}&\small{0.841703}&\small{0.947872}&\small{0.894520}&\small{0.848136} &\small{0.975277}&\small{0.862129}&\small{0.925632}\\ \small{0.859690}&\small{0.865170}&\small{0.762160}&\small{0.921414} &\small{0.932375}&\small{0.892159}&\small{0.858365}&\small{0.776656}\\ \small{0.938613}&\small{0.870105}&\small{0.900053}&\small{0.844046} &\small{0.962848}&\small{0.872085}&\small{0.826155}&\small{0.900559}\\ \small{0.843995}&\small{0.712202}&\small{0.901383}&\small{0.924811}&\small{0.910035} &\small{0.957107}&\small{0.920373}&\small{0.878261}\\ \small{0.868421}&\small{0.928374}&\small{0.867860}&\small{0.915143}&\small{0.849401} &\small{0.908050}&\small{0.944003}&\small{0.925122}\\ \small{0.936315}&\small{0.905399}&\small{0.872215}&\small{0.858642}&\small{0.821628} &\small{0.875818}&\small{0.753861}&\small{0.948502}\\ \hline \end{tabular} \end{table}
\begin{figure}
\caption{\footnotesize{Estimation of the viral load for four patients in the HIV dynamic study. The circles represent the base 10 logarithm of the viral loads. The green solid lines represent the one-step ahead prediction; the dotted lines represent the filtering estimates; the dashed lines represent the 95\% confidence interval of the filtering estimates; the pink solid lines represent the 95\% confidence interval of the one-step ahead prediction.}}
\label{fig1}
\end{figure}
\begin{figure}
\caption{\footnotesize{Mean square errors of the one-step ahead prediction for 48 patients. The left panel corresponds to the MKF-KS algorithm. The right panel corresponds to the Kalman filter with the estimated individual parameters.}}
\label{fig2}
\end{figure}
\end{document}
\begin{document}
\title{\LARGE \bf
Stochastic Optimal Control With \\Dynamic, Time-Consistent Risk Constraints
} \thispagestyle{empty} \pagestyle{empty}
\begin{abstract}
In this paper we present a dynamic programming approach to stochastic optimal control problems with dynamic, time-consistent risk constraints. Constrained stochastic optimal control problems, which naturally arise when one has to consider multiple objectives, have been extensively investigated in the past 20 years; however, in most formulations, the constraints are formulated either in a risk-neutral fashion (i.e., by considering an expected cost), or by applying static, single-period risk metrics with limited attention to ``time-consistency" (i.e., to whether such metrics ensure rational consistency of risk preferences across multiple periods). Recently, significant strides have been made in the development of a rigorous theory of dynamic, \emph{time-consistent} risk metrics for multi-period (risk-sensitive) decision processes; however, their integration within constrained stochastic optimal control problems has received little attention. The goal of this paper is to bridge this gap. First, we formulate the stochastic optimal control problem with dynamic, time-consistent risk constraints and we characterize the tail subproblems (which requires the addition of a Markovian structure to the risk metrics). Second, we develop a dynamic programming approach for its solution, which allows us to compute the optimal costs by value iteration. Finally, we discuss both theoretical and practical features of our approach, such as generalizations, construction of optimal control policies, and computational aspects. A simple, two-state example is given to illustrate the problem setup and the solution approach. \end{abstract}
\section{Introduction} Constrained stochastic optimal control problems naturally arise in several domains, including engineering, finance, and logistics. For example, in a telecommunication setting, one is often interested in the maximization of the throughput of some traffic subject to constraints on delays \cite{Altman_99, Korilis_95}, or seeks to minimize the average delays of some traffic types, while keeping the delays of other traffic types within a given bound \cite{Nain_86}. Arguably, the most common setup is the optimization of a \emph{risk-neutral expectation} criterion subject to a \emph{risk-neutral} constraint \cite{Chen_04, Piunovskiy_06, Chen_07}. This model, however, is not suitable in scenarios where risk-aversion is a key feature of the problem setup. For example, financial institutions are interested in trading assets while keeping the \emph{riskiness} of their portfolios below a threshold; or, in the optimization of rover planetary missions, one seeks to find a sequence of divert and driving maneuvers so that the rover drive is minimized and the \emph{risk} of a mission failure (e.g., due to a failed landing) is below a user-specified bound \cite{Pavone_12}.
A common strategy to include risk-aversion in constrained problems is to have constraints where a static, single-period risk metric is applied to the future stream of costs; typical examples include variance-constrained stochastic optimal control problems (see, e.g., \cite{Piunovskiy_06, Sniedovich_80, Mannor_11}), or problems with probability constraints \cite{Chen_04, Piunovskiy_06}. However, using static, single-period risk metrics in multi-period decision processes can lead to an over or under-estimation of the true dynamic risk, as well as to a potentially ``inconsistent" behavior (whereby risk preferences change in a seemingly irrational fashion between consecutive assessment periods), see \cite{Iancu_11} and references therein. In \cite{Rudloff_11}, the authors provide an example of a portfolio selection problem where the application of a static risk metric in a multi-period context leads a risk-averse decision maker to (erroneously) show risk neutral preferences at intermediate stages.
Indeed, in the recent past, the topic of \emph{time-consistent} risk assessment in multi-period decision processes has been heavily investigated \cite{rus_shapiro_2004, rus_shapiro_06, rus_shapiro_06_2, rus_09, shapiro_12, Cheridito_11, Penner_06}. The key idea behind time consistency is that if a certain outcome is considered less risky in all states of the world at stage $k+1$, then it should also be considered less risky at stage $k$ \cite{Iancu_11}. Remarkably, in \cite{rus_09}, it is proven that any risk measure that is time consistent can be represented as a composition of one-step conditional risk mappings; in other words, in multi-period settings, risk (as expected) should be compounded over time.
Despite the widespread usage of constrained stochastic optimal control and the significant strides in the theory of dynamic, time-consistent risk metrics, their integration within constrained stochastic optimal control problems has received little attention. The purpose of this paper is to bridge this gap. Specifically, the contribution of this paper is threefold. First, we formulate the stochastic optimal control problem with dynamic, time-consistent risk constraints and we characterize the tail subproblems (which requires the addition of a Markovian structure to the risk metrics). Second, we develop a dynamic programming approach for the solution, which allows us to compute the optimal costs by value iteration. There are two main reasons behind our choice of a dynamic programming approach: (a) the dynamic programming approach can be used as an analytical tool in special cases and as the basis for the development of either exact or approximate solution algorithms; and (b) in the risk-neutral setting (i.e., both objective and constraints given as expectations of the sum of stage-wise costs) the dynamic programming approach appears numerically convenient with respect to other approaches (e.g., with respect to the convex analytic approach \cite{Altman_99}) and allows one to build all (Markov) optimal control strategies \cite{Piunovskiy_06}. Finally, we discuss both theoretical and practical features of our approach, generalizations, construction of optimal control policies, and computational aspects. A simple, two-state example is given to illustrate the problem setup and the solution approach.
The rest of the paper is structured as follows. In Section \ref{sec:prelim} we present background material for this paper, in particular about dynamic, time-consistent risk measures. In Section \ref{sec:ps} we formally state the problem we wish to solve, while in Section \ref{sec:dp} we present a dynamic programming approach for the solution. In Section \ref{sec:dis} we discuss several aspects of our approach and provide a simple example. Finally, in Section \ref{sec:conc}, we draw our conclusions and offer directions for future work.
\section{Preliminaries}\label{sec:prelim} In this section we provide some known concepts from the theory of Markov decision processes and of dynamic risk measures, on which we will rely extensively later in the paper.
\subsection{Markov Decision Processes}
A finite Markov Decision Process (MDP) is a four-tuple $(S, U, Q, U(\cdot))$, where $S$, the state space, is a finite set; $U$, the control space, is a finite set; for every $x\in S$, $U(x)\subseteq U$ is a nonempty set which represents the set of admissible controls when the system state is $x$; and, finally, $Q(\cdot|x,u)$ (the transition probability) is a conditional probability on $S$ given the set of admissible state-control pairs, i.e., the sets of pairs $(x,u)$ where $x\in S$ and $u\in U(x)$.
Define the space $H_k$ of admissible histories up to time $k$ by $H_k = H_{k-1} \times S\times U$, for $k\geq 1$, and $H_0=S$. A generic element $h_{0,k}\in H_k$ is of the form $h_{0,k} = (x_0, u_0, \ldots , x_{k-1}, u_{k-1}, x_k)$. Let $\Pi$ be the set of all deterministic policies with the property that at each time $k$ the control is a function of $h_{0,k}$. In other words, $\Pi := \Bigl \{ \{\pi_0: H_0 \rightarrow U,\, \pi_1: H_1 \rightarrow U, \ldots\} | \pi_k(h_{0,k}) \in U(x_k) \text{ for all } h_{0,k}\in H_k, \, k\geq 0 \Bigr\}$.
\subsection{Time-Consistent Dynamic Risk Measures} This subsection follows closely the discussion in \cite{rus_09}. Consider a probability space $(\Omega, \mathcal F, P)$, a filtration $\mathcal F_1\subset \mathcal F_2 \subset\cdots \subset \mathcal F_N \subset \mathcal F$, and an adapted sequence of random variables $Z_k$, $k\in \{0, \cdots,N\}$. We assume that $\mathcal F_0 = \{\Omega, \emptyset\}$, i.e., $Z_0$ is deterministic. In this paper we interpret the variables $Z_k$ as stage-wise costs. For each $k\in\{1, \cdots, N\}$, define the spaces of random variables with finite $p$th order moment as $\mathcal Z_k:= L_p(\Omega, \mathcal F_k, P)$, $p\in [1,\infty]$; also, let $\mathcal Z_{k, N}:=\mathcal Z_k \times \cdots \times \mathcal Z_N$.
The fundamental question in the theory of dynamic risk measures is the following: how do we evaluate the risk of the subsequence $Z_k, \ldots, Z_N$ from the perspective of stage $k$? Accordingly, the following definition introduces the concept of dynamic risk measure (here and in the remainder of the paper equalities and inequalities are in the almost sure sense).
\begin{definition}[Dynamic Risk Measure] A dynamic risk measure is a sequence of mappings $\rho_{k,N}:\mathcal Z_{k, N}\rightarrow\mathcal Z_k$, $k\in\{0, \ldots,N\}$, obeying the following monotonicity property: \[ \rho_{k,N}(Z)\leq \rho_{k,N}(W) \text{ for all } Z,W \in\mathcal Z_{k,N} \text{ such that } Z\leq W. \] \end{definition} The above monotonicity property is arguably a natural requirement for any meaningful dynamic risk measure. Yet, it does not imply the following notion of \emph{time consistency}: \begin{definition}[Time Consistency] A dynamic risk measure $\{ \rho_{k,N}\}_{k=0}^N$ is called time-consistent if, for all $0\leq l<k\leq N$ and all sequences $Z, W \in \mathcal Z_{l,N}$, the conditions \begin{equation} \begin{split} &Z_i = W_i,\,\, i = l,\cdots,k-1, \text{ and }\\ &\rho_{k,N}(Z_k, \cdots,Z_N)\leq \rho_{k,N}(W_k, \cdots,W_N), \end{split} \end{equation} imply that \[
\rho_{l,N}(Z_l, \cdots,Z_N)\leq \rho_{l,N}(W_l, \cdots,W_N). \] \end{definition} In other words, if the $Z$ cost sequence is deemed less risky than the $W$ cost sequence from the perspective of a future time $k$, and they yield identical costs from the current time $l$ to the future time $k$, then the $Z$ sequence should be deemed as less risky at the current time $l$, as well. The pitfalls of time-inconsistent dynamic risk measures have already been mentioned in the introduction and are discussed in detail in \cite{Cheridito_09, shapiro_09, Iancu_11}.
The issue then is what additional ``structural" properties are required for a dynamic risk measure to be time consistent. To answer this question we need one more definition: \begin{definition}[Coherent one-step conditional risk measures] A coherent one-step conditional risk measure is a mapping $\rho_k:\mathcal Z_{k+1}\rightarrow \mathcal Z_k$, $k\in\{0,\ldots,N\}$, with the following four properties: \begin{itemize} \item Convexity: $\rho_k(\lambda Z + (1-\lambda)W)\leq \lambda\rho_k(Z) + (1-\lambda)\rho_k(W)$, $\forall \lambda\in[0,1]$ and $Z,W \in\mathcal Z_{k+1}$; \item Monotonicity: if $Z\leq W$ then $\rho_k(Z)\leq\rho_k(W)$, $\forall Z,W \in\mathcal Z_{k+1}$; \item Translation invariance: $\rho_k(Z+W)=Z + \rho_k(W)$, $\forall Z\in\mathcal Z_k$ and $W \in \mathcal Z_{k+1}$; \item Positive homogeneity: $\rho_k(\lambda Z) = \lambda \rho_k(Z)$, $\forall Z \in \mathcal Z_{k+1}$ and $\lambda\geq 0$. \end{itemize} \end{definition}
We are now in a position to state the main result of this section. \begin{theorem}[Dynamic, time-consistent risk measures]\label{thrm:tcc} Consider, for each $k\in\{0,\cdots,N\}$, the mappings $\rho_{k,N}:\mathcal Z_{k, N}\rightarrow\mathcal Z_k$ defined as \begin{equation}\label{eq:tcrisk} \begin{split} \rho_{k,N} &= Z_k + \rho_k(Z_{k+1} + \rho_{k+1}(Z_{k+2}+\ldots+\\
&\qquad\rho_{N-2}(Z_{N-1}+\rho_{N-1}(Z_N))\ldots)), \end{split} \end{equation} where the $\rho_k$'s are coherent one-step risk measures. Then, the ensemble of such mappings is a time-consistent dynamic risk measure. \end{theorem} \begin{proof} See \cite{rus_09}. \end{proof} Remarkably, Theorem 1 in \cite{rus_09} shows (under weak assumptions) that the ``multi-stage composition" in equation \eqref{eq:tcrisk} is indeed \emph{necessary for time consistency}. Accordingly, in the remainder of this paper, we will focus on the \emph{dynamic, time-consistent risk measures} characterized in Theorem \ref{thrm:tcc}.
With dynamic, time-consistent risk measures, since at stage $k$ the value of $\rho_k$ is $\mathcal F_k$-measurable, the evaluation of risk can depend on the whole past (even though in a time-consistent way). On the one hand, this generality appears to be of little value in most practical cases; on the other hand, it leads to optimization problems that are intractable from a computational standpoint (and, in particular, do not allow for a dynamic programming solution). For these reasons, in this paper we consider a (slight) refinement of the concept of dynamic, time-consistent risk measure, which involves the addition of a Markovian structure \cite{rus_09}. \begin{definition}[Markov dynamic risk measures]\label{def:Markov} Let $\mathcal V:=L_p(S, \mathcal B, P)$ be the space of random variables on $S$ with finite $p$\emph{th} moment. Given a controlled Markov process $\{x_k\}$, a Markov dynamic risk measure is a dynamic, time-consistent risk measure if each coherent one-step risk measure $\rho_k:\mathcal Z_{k+1}\rightarrow \mathcal Z_k$ in equation \eqref{eq:tcrisk} can be written as: \begin{equation}\label{eq:Markov}
\rho_k(V(x_{k+1})) = \sigma_k(V(x_{k+1}),x_k, Q(x_{k+1} |x_k, u_k)), \end{equation}
for all $V(x_{k+1})\in \mathcal V$ and $u\in U(x_k)$, where $\sigma_k$ is a coherent one-step risk measure on $\mathcal V$ (with the additional technical property that for every $V(x_{k+1})\in \mathcal V$ and $u\in U(x_k)$ the function $x_k \mapsto \sigma_k(V(x_{k+1}), x_k, Q(x_{k+1}|x_k, u_k))$ is an element of $\mathcal V$). \end{definition}
In other words, in a Markov dynamic risk measure, the evaluation of risk is not allowed to depend on the whole past.
\begin{example} An important example of coherent one-step risk measure satisfying the requirements for Markov dynamic risk measures (Definition \ref{def:Markov}) is the mean-semideviation risk function: \begin{align} \rho_k(V) = \expectation{V} + \lambda\, \Bigl (\expectation{[V - \expectation{V}]_{+}^p} \Bigr)^{1/p},\label{MD_risk} \end{align} where $p\in[1,\infty)$, $[z]_+^p:=(\max(z,0))^p$, and $\lambda\in [0,\, 1].$ \end{example} Other important examples include the conditional average value at risk and, of course, the risk-neutral expectation \cite{rus_09}. Accordingly, in the remainder of this paper we will restrict our analysis to Markov dynamic risk measures.
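To illustrate how the composition in Theorem~\ref{thrm:tcc} is evaluated in practice, the following small numerical sketch computes $\rho_{0,2}(Z_0,Z_1,Z_2)=Z_0+\rho_0(Z_1+\rho_1(Z_2))$ on a two-period binary scenario tree, using the mean-semideviation measure \eqref{MD_risk} as the one-step risk. All numerical values (costs, probabilities, $\lambda$, $p$) are made up for illustration and are not taken from this paper.
\begin{verbatim}
import numpy as np

def mean_semideviation(values, probs, lam=0.5, p=2):
    """rho(V) = E[V] + lam * ( E[ (V - E[V])_+^p ] )^(1/p) for a discrete V."""
    values, probs = np.asarray(values, float), np.asarray(probs, float)
    m = probs @ values
    return m + lam * (probs @ np.maximum(values - m, 0.0) ** p) ** (1.0 / p)

# Two-period binary scenario tree with equal branch probabilities; the stage-wise
# costs below are arbitrary toy numbers.  Z_0 = 1 is deterministic.
Z1 = {"u": 2.0, "d": 0.0}
Z2 = {"uu": 4.0, "ud": 1.0, "du": 3.0, "dd": 0.0}

# Innermost evaluation: rho_1(Z_2), conditionally on each stage-1 node.
rho1 = {w: mean_semideviation([Z2[w + "u"], Z2[w + "d"]], [0.5, 0.5])
        for w in ("u", "d")}

# Outer evaluation: rho_0(Z_1 + rho_1(Z_2)), then add the deterministic Z_0.
rho0 = mean_semideviation([Z1["u"] + rho1["u"], Z1["d"] + rho1["d"]], [0.5, 0.5])
print(1.0 + rho0)    # = rho_{0,2}(Z_0, Z_1, Z_2)
\end{verbatim}
Note how the evaluation proceeds backwards through the tree: the innermost risk $\rho_1(Z_2)$ is computed conditionally on each stage-$1$ node, exactly as the nesting in equation \eqref{eq:tcrisk} prescribes.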
\section{Problem Statement}\label{sec:ps}
In this section we formally state the problem we wish to solve. Consider an MDP and let $c : S \times U \rightarrow \real$ and $d : S \times U \rightarrow \real$ be functions which denote costs associated with state-action pairs. Given a policy $\pi\in \Pi$, an initial state $x_0\in S$, and an horizon $N\geq 1$, the cost function is defined as \[ J^{\pi}_N(x_0):=\expectation{\sum_{k=0}^{N-1}\, c(x_k, u_k)}, \] and the risk constraint is defined as \[ R^{\pi}_N(x_0):= \rho_{0,N}\Bigl(d(x_0,u_0), \ldots, d(x_{N-1},u_{N-1}),0\Bigr), \] where $\rho_{k,N}(\cdot)$, $k\in \{0,\ldots, N-1\}$, is a time consistent multi-period risk measure with $\rho_i$ being a Markov risk measure for any $i\in[k,N-1]$ (for simplicity, we do not consider terminal costs, even though their inclusion is straightforward). The problem we wish to solve is then as follows: \begin{quote} {\bf Optimization problem $\mathcal{OPT}$} --- Given an initial state $x_0\in S$, a time horizon $N\geq 1$, and a risk threshold $r_0 \in \real$, solve \begin{alignat*}{2}
\min_{\pi \in \Pi} & & \quad&J^{\pi}_N(x_0) \\ \text{subject to} & & \quad&R^{\pi}_N(x_0) \leq r_0. \end{alignat*} \end{quote} If problem $\mathcal{OPT}$ is not feasible, we say that its value is $\overline C$, where $\overline C$ is a ``large" constant (namely, an upper bound over the $N$-stage cost). Note that, when the problem is feasible, an optimal policy always exists since the state and control spaces are finite. When $\rho_{0,N}$ is replaced by an expectation, we recover the usual risk-neutral constrained stochastic optimal control problem studied, e.g., in \cite{Chen_04, Piunovskiy_06}. In the next section we present a dynamic programming approach to solve problem $\mathcal{OPT}$.
\section{A Dynamic Programming Algorithm for Risk-Constrained Multi-Stage Decision-Making}\label{sec:dp} In this section we discuss a dynamic programming approach to solve problem $\mathcal{OPT}$. We first characterize the relevant value functions, and then we present the Bellman's equation that such value functions have to satisfy.
\subsection{Value functions}
Before defining the value functions we need to define the tail subproblems. For a given $k\in \{0,\ldots,N-1\}$ and a given state $x_k\in S$, we define the \emph{sub-histories} as $h_{k,j}:=(x_k, u_k, \ldots,x_j)$ for $j\in \{k,\ldots, N\}$; also, we define the \emph{space of truncated policies} as $\Pi_k:=\Bigl \{ \{\pi_k, \pi_{k+1}, \ldots\} | \pi_j(h_{k,j}) \in U(x_j) \text{ for } j\geq k \Bigr\}$. For a given stage $k$ and state $x_k$, the cost of the tail process associated with a policy $\pi\in \Pi_k$ is simply $J^{\pi}_N(x_k):=\expectation{\sum_{j=k}^{N-1}\, c(x_j, u_j)}$. The risk associated with the tail process is: \[ R^{\pi}_N(x_k):= \rho_{k,N}\Bigl(d(x_k,u_k), \ldots, d(x_{N-1},u_{N-1}),0\Bigr), \] which is \emph{only} a function of the current state $x_k$ and does \emph{not} depend on the history $h_{0,k}$ that led to $x_k$. This crucial fact stems from the assumption that $\{\rho_{k,N}\}_{k=0}^{N-1}$ is a Markov dynamic risk measure, and hence the evaluation of risk only depends on the \emph{future} process and on the present state $x_k$ (formally, this can be easily proven by repeatedly applying equation \eqref{eq:Markov}). Hence, the tail subproblems are \emph{completely} specified by the knowledge of $x_k$ and are defined as \begin{alignat}{2}
\min_{\pi \in \Pi_k} & & \quad&J^{\pi}_N(x_k) \label{problem_SOCP}\\ \text{subject to} & & \quad&R^{\pi}_N(x_k) \leq r_k(x_k),\label{constraint_SOCP} \end{alignat} for a given (undetermined) threshold value $r_k(x_k) \in \real$ (i.e., the tail subproblems are specified up to a threshold value). We are interested in characterizing a ``minimal" set of \emph{feasible} thresholds at each step $k$, i.e., a ``minimal" interval of thresholds for which the subproblems are feasible.
The minimum risk-to-go for each state $x_k\in S$ and $k\in \{0,\ldots,N-1\}$ is given by: \[ \underline{R}_N(x_k):=\min_{\pi\in \Pi_k} \, R_N^{\pi}(x_k). \] Since $\{\rho_{k,N}\}_{k=0}^{N-1}$ is a Markov dynamic risk measure, $\underline{R}_N(x)$ can be computed by using a dynamic programming recursion (see Theorem 2 in \cite{rus_09}). The function $\underline{R}_N(x_k)$ is clearly the lowest value for a feasible constraint threshold. To characterize the upper bound, let: \[ \rho_{\text{max}}:=\max_{k\in\{0,\ldots,N-1\}} \, \, \max_{(x,u)\in S\times U} \, \rho_k(d(x,u)). \] By the monotonicity and translation invariance of Markov dynamic risk measures, one can easily show that \[ \max_{\pi\in \Pi_k} \, R_N^{\pi}(x_k)\leq (N-k)\rho_{\text{max}}:=\overline{R}_N. \] Accordingly, for each $k\in\{0,\ldots, N-1\}$ and $x_k\in S$, we define the set of feasible constraint thresholds as \[ \Phi_k(x_k):=[\underline{R}_N(x_k), \overline{R}_N],\quad \Phi_N(x_N):=\{0\}. \] (Indeed, thresholds larger than $\overline{R}_N$ would still be feasible, but would be redundant and would increase the complexity of the optimization problem.)
The value functions are then defined as follows: \begin{itemize} \item If $k<N$ and $r_k \in \Phi_k(x_k)$: \begin{alignat*}{2} V_k(x_k, r_k) = &\min_{\pi \in \Pi_k} & \quad&J^{\pi}_N(x_k) \\ &\text{subject to} & &R^{\pi}_N(x_k) \leq r_k(x_k); \end{alignat*} the minimum is well-defined since the state and control spaces are finite. \item If $k\leq N$ and $r_k \notin \Phi_k(x_k)$: \[ V_k(x_k, r_k) = \overline{C}; \] \item If $k=N$ and $r_N=0$: \[ V_N(x_N,r_N) = 0. \] \end{itemize}
Clearly, for $k=0$, we recover the definition of problem $\mathcal{OPT}$.
\subsection{Dynamic programming recursion} In this section we prove that the value function can be computed by dynamic programming. Let $B(S)$ denote the space of real-valued bounded functions on $S$, and $B(S \times \real)$ denote the space of real-valued bounded functions on $S\times \real$. For $k\in \{0, \ldots, N-1\}$, we define the dynamic programming operator $T_k : B(S \times \real) \rightarrow B(S \times \real)$ according to the equation: \begin{equation}\label{eq:T} \begin{split} T_k[V_{k+1}]&(x_k, r_k) := \inf_{(u,r^{\prime})\in F_k(x_k, r_k)} \, \biggl\{c(x_k,u) \, \,+\\
& \ \sum_{x_{k+1} \in S} \, Q(x_{k+1}|x_k,u)\, V_{k+1}(x_{k+1}, r^{\prime}(x_{k+1})) \biggr\}, \end{split} \end{equation} where $F_k \subset U \times B(S)$ is the set of control/threshold \emph{functions}: \begin{equation*} \begin{split}
F_k(x_k,& r_k):= \biggl\{(u, r^{\prime}) \Big | u\in U(x_k), r^{\prime}(x^{\prime}) \in \Phi_{k+1}(x^{\prime}) \text{ for}\\& \text{all } x^{\prime} \in S, \text{ and } d(x_k, u) + \rho_k(r^{\prime}(x_{k+1})) \leq r_k\biggr\}. \end{split} \end{equation*} If $F_k(x_k,r_k) = \emptyset$, then $T_k[V_{k+1}](x_k, r_k)=\overline C$.
Note that, for a given state and threshold constraint, the set $F_k$ characterizes the set of feasible pairs of actions and subsequent constraint thresholds. Feasible subsequent constraint thresholds are thresholds which, if satisfied at the next stage, ensure that the current state satisfies the given threshold constraint (see \cite{Chen_07} for a similar statement in the risk-neutral case). Also, note that equation \eqref{eq:T} involves a functional minimization over the Banach space $B(S)$. Indeed, since $S$ is finite, $B(S)$ is isomorphic with $\real^{|S|}$, hence the minimization in equation \eqref{eq:T} can be recast as a regular (although possibly large) optimization problem in the Euclidean space. Computational aspects are further discussed in the next section.
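To make the recursion concrete, the following sketch tabulates one possible discretization of the operator in equation \eqref{eq:T} for a toy two-state problem. It is only an illustration, not the algorithm analyzed in this paper: all model data (states, controls, costs, transition kernel, $\lambda$) are made-up values, the constraint thresholds are restricted to a uniform grid, and infeasibility is handled through a large penalty $\overline{C}$ instead of the explicit sets $\Phi_k$; the one-step risk is the mean-semideviation measure \eqref{MD_risk} with $p=1$.
\begin{verbatim}
import itertools
import numpy as np

S = [0, 1]                                # states
U = {0: [0, 1], 1: [0, 1]}                # admissible controls U(x)
Q = {(0, 0): [0.9, 0.1], (0, 1): [0.99, 0.01],   # Q(. | x, u), rows over S
     (1, 0): [0.0, 1.0], (1, 1): [0.9, 0.1]}
c = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.0, (1, 1): 1.0}  # objective stage cost
d = {(0, 0): 0.0, (0, 1): 0.0, (1, 0): 0.5, (1, 1): 0.5}  # constraint stage cost
lam, N, C_BAR = 0.5, 3, 1e6
R_GRID = np.linspace(0.0, 2.0, 21)        # discretized constraint thresholds

def rho(W, x, u):
    """Mean-semideviation (p = 1) of the next-stage quantity W : S -> R, given (x, u)."""
    q, W = np.array(Q[(x, u)]), np.asarray(W, dtype=float)
    m = q @ W
    return m + lam * (q @ np.maximum(W - m, 0.0))

def bellman(V_next):
    """One application of the operator T_k to V_next, tabulated on S x R_GRID."""
    V = np.full((len(S), len(R_GRID)), C_BAR)
    for x in S:
        for i, r in enumerate(R_GRID):
            best = C_BAR
            for u in U[x]:
                # brute force over threshold functions r' : S -> R_GRID (as grid indices)
                for rp in itertools.product(range(len(R_GRID)), repeat=len(S)):
                    if d[(x, u)] + rho([R_GRID[j] for j in rp], x, u) > r:
                        continue          # (u, r') violates the risk constraint
                    cost = c[(x, u)] + sum(Q[(x, u)][xp] * V_next[xp, rp[xp]]
                                           for xp in S)
                    best = min(best, cost)
            V[x, i] = best
    return V

V = np.zeros((len(S), len(R_GRID)))       # terminal condition V_N = 0
for _ in range(N):                        # value iteration, backwards in time
    V = bellman(V)
print(V.round(2))                         # V_0(x, r) on the threshold grid
\end{verbatim}
Brute-forcing over the grid of threshold functions is exponential in $|S|$ and is only viable for very small state spaces, consistent with the computational remarks in the next section.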
We are now in a position to prove the main result of this paper. \begin{theorem}[Bellman's equation with risk constraints]\label{TC_good} Assume that the infimum in equation \eqref{eq:T} is attained. Then, for all $k\in \{0, \ldots, N-1\}$, the optimal cost functions satisfy the Bellman's equation: \[ V_k(x_k, r_k) = T_k[V_{k+1}](x_k, r_k). \] \end{theorem} \begin{proof} The proof style is similar to that of Theorem 3.1 in \cite{Chen_04}. The proof consists of two steps. First, we show that $V_k(x_k, r_k) \geq T_k[V_{k+1}](x_k, r_k)$ for all pairs $(x_k, r_k) \in S\times \real$. Second, we show $V_k(x_k, r_k) \leq T_k[V_{k+1}](x_k, r_k)$ for all pairs $(x_k, r_k) \in S\times \real$. These two results will prove the claim.
\emph{Step (1)}. If $r_k \notin \Phi_k(x_k)$, then, by definition, $V_k(x_k, r_k) = \overline{C}$. Also, $r_k \notin \Phi_k(x_k)$ implies that $F_k(x_k, r_k)$ is empty (this can be easily proven by contradiction). Hence, $T_k[V_{k+1}](x_k, r_k) = \overline C$. Therefore, if $r_k \notin \Phi_k(x_k)$, \begin{equation}\label{eq:easy} V_k(x_k, r_k) = \overline{C} = T_k[V_{k+1}](x_k, r_k), \end{equation} i.e., $V_k(x_k, r_k)\geq T_k[V_{k+1}](x_k, r_k)$.
Assume, now, $r_k \in \Phi_k(x_k)$. Let $\pi^* \in \Pi_k$ be the optimal policy that yields the optimal cost $V_k(x_k, r_k)$. Construct the ``truncated" policy $\bar \pi \in \Pi_{k+1}$ according to: \[ \bar \pi_j(h_{k+1,j}):= \pi^*_j(x_k, \pi_k^*(x_k), h_{k+1,j}), \quad \text{ for } j\geq k+1. \] In other words, $\bar \pi$ is a policy in $\Pi_{k+1}$ that acts as prescribed by $\pi^*$. By applying the law of total expectation, we can write: \begin{equation*} \begin{split} &V_k(x_k, r_k) = \expectation{\sum_{j=k}^{N-1} \, c(x_j, \pi^*_j(h_{k,j}))} = \\ &\quad c(x_k, \pi_k^*(x_k)) + \expectation{\sum_{j=k+1}^{N-1} \, c(x_j, \pi^*_j(h_{k,j}))}=\\
&\quad c(x_k, \pi_k^*(x_k)) + \expectation{\expectation{\sum_{j=k+1}^{N-1} \, c(x_j, \pi_j^*(h_{k,j})) \, \Big | \, h_{k,k+1}}}. \end{split} \end{equation*}
Note that $\expectation{\sum_{j=k+1}^{N-1} \, c(x_j, \pi^*_j(h_{k,j})) \, \Big | \, h_{k,k+1}} = J_N^{\bar \pi}(x_{k+1})$. Clearly, the truncated policy $\bar \pi$ is a feasible policy for the tail subproblem \begin{alignat*}{2}
\min_{\pi \in \Pi_{k+1}} & & \quad&J^{\pi}_N(x_{k+1}) \\ \text{subject to} & & \quad&R^{\pi}_N(x_{k+1}) \leq R^{\bar \pi}_N(x_{k+1}). \end{alignat*} Collecting the above results, we can write \begin{equation*} \begin{split} V_k(x_k, r_k) &= c(x_k, \pi_k^*(x_k)) + \expectation{J_N^{\bar \pi}(x_{k+1})} \\ &\geq c(x_k, \pi_k^*(x_k)) + \expectation{V_{k+1}(x_{k+1}, R^{\bar \pi}_N(x_{k+1}))}\\ &\geq T_k[V_{k+1}](x_k, r_k), \end{split} \end{equation*} where the last inequality follows from the fact that $R^{\bar \pi}_N(\cdot)$ can be viewed as a valid threshold function in the minimization in equation \eqref{eq:T}.
\emph{Step (2)}. If $r_k \notin \Phi_k(x_k)$, equation \eqref{eq:easy} holds and, therefore, $V_k(x_k, r_k)\leq T_k[V_{k+1}](x_k, r_k)$.
Assume, now, $r_k \in \Phi_k(x_k)$. For the given pair $(x_k, r_k)$, let $u^*$ and $r^{\prime, *}$ be the minimizers in equation \eqref{eq:T} (here we are exploiting the assumption that the minimization problem in equation \eqref{eq:T} admits a minimizer). By definition, $r^{\prime, *}(x_{k+1}) \in \Phi_{k+1}(x_{k+1})$ for all $x_{k+1}\in S$. Also, let $\pi^* \in \Pi_{k+1}$ be the optimal policy for the tail subproblem: \begin{alignat*}{2}
\min_{\pi \in \Pi_{k+1}} & & \quad&J^{\pi}_N(x_{k+1}) \\ \text{subject to} & & \quad&R^{\pi}_N(x_{k+1}) \leq r^{\prime,*}(x_{k+1}). \end{alignat*} Construct the ``extended" policy $\bar \pi \in \Pi_k$ as follows: \[ \bar \pi_k(x_k) = u^*, \text{ and } \bar \pi_j(h_{k,j}) = \pi^*_j(h_{k+1,j}) \text{ for } j\geq k+1. \] Since $\pi^*$ is an optimal, and a fortiori feasible, policy for the tail subproblem (from stage $k+1$) with threshold function $r^{\prime, *}$, the policy $\bar \pi \in \Pi_k$ is a feasible policy for the tail subproblem (from stage $k$): \begin{alignat*}{2}
\min_{\pi \in \Pi_{k}} & & \quad&J^{\pi}_N(x_{k}) \\ \text{subject to} & & \quad&R^{\pi}_N(x_{k}) \leq r_k. \end{alignat*} Hence, we can write \begin{equation*} \begin{split} &V_{k}(x_k, r_k) \leq J^{\bar \pi}_N(x_k) = \\
&\quad c(x_k, \bar \pi_k(x_k)) + \expectation{\expectation{\sum_{j=k+1}^{N-1} \, c(x_j, \bar \pi_j(h_{k,j})) \, \Big | \, h_{k,k+1}}}. \end{split} \end{equation*}
Note that $\expectation{\sum_{j=k+1}^{N-1} \, c(x_j, \bar \pi_j(h_{k,j})) \, \Big | \, h_{k,k+1}} = J_N^{\pi^*}(x_{k+1})$. Hence, from the definition of $\pi^*$, one easily obtains: \begin{equation*} \begin{split} &V_{k}(x_k, r_k) \leq c(x_k, \bar \pi_k(x_k)) + \expectation{J^{\pi^*}_N(x_{k+1})} = c(x_k, u^*) \, +\\
&\qquad \sum_{x_{k+1}\in S} Q(x_{k+1}|x_k, u^*) V_{k+1}(x_{k+1}, r^{\prime, *}(x_{k+1}))= \\ &\qquad T_k[V_{k+1}](x_k, r_k). \end{split} \end{equation*}
Collecting the above results, the claim follows. \end{proof}
\begin{remark}[On the assumption in Theorem \ref{TC_good}] In Theorem \ref{TC_good} we assume that the infimum in equation \eqref{eq:T} is attained. This is indeed true under very weak conditions (namely that $U(x_k)$ is a compact set, $\sigma_k(\nu(x_{k+1}),x_{k+1},Q)$ is a lower semi-continuous function in $Q$, $Q(\cdot|x_k,u_k)$ is continuous in $u_k$, and the stage-wise costs $c$ and $d$ are lower semi-continuous in $u_k$). The proof of this statement is omitted in the interest of brevity and is left for a forthcoming publication.
\end{remark}
\section{Discussion}\label{sec:dis} In this section we show how to construct optimal policies, discuss computational aspects, and present a simple two-state example for machine repairing.
\subsection{Construction of optimal policies} Under the assumption of Theorem \ref{TC_good}, optimal control policies can be constructed as follows. For any given $x_k\in S$ and $r_k \in \Phi_k(x_k)$, let $u^*(x_k, r_k)$ and $r^{\prime}(x_k, r_k)(\cdot)$ be the minimizers in equation \eqref{eq:T} (recall that $r^{\prime}$ is a function). \begin{theorem}[Optimal policies]\label{them:optPoli} Assume that the infimum in equation \eqref{eq:T} is attained. Let $\pi \in \Pi$ be a policy recursively defined as follows: \begin{equation*} \begin{split} \pi_k(h_{0,k}) = u^*(x_k, r_k) \text{ \em with } r_k = r^{\prime}(x_{k-1},r_{k-1})(x_k), \end{split} \end{equation*} when $k\in \{1,\ldots, N-1\}$, and \[ \pi(x_0) = u^*(x_0, r_0), \] for a given threshold $r_0\in \Phi_0(x_0)$. Then, $\pi$ is an optimal policy for problem $\mathcal{OPT}$ with initial condition $x_0$ and constraint threshold $r_0$. \end{theorem} \begin{proof} As usual for dynamic programming problems, the proof uses induction arguments (see, in particular, \cite{bertsekas_05} and \cite[Theorem 4]{Chen_07} for a similar proof in the risk-neutral case).
Consider a tail subproblem starting at stage $k$, for $k=0,\ldots,N-1$; for a given initial state $x_k\in S$ and constraint threshold $r_k \in \Phi_k(x_k)$, let $\pi^{k, r_k} \in \Pi_k$ be a policy recursively defined as follows: \[ \pi_j^{k, r_k}(h_{k,j}) = u^*(x_j, r_j) \text{ with } r_j=r^{\prime}(x_{j-1}, r_{j-1})(x_j), \] when $j\in \{k+1,\ldots, N-1\}$, and \[ \pi_k^{k, r_k}(x_k) = u^*(x_k, r_k). \] We prove by induction that $\pi^{k, r_k}$ is optimal. Clearly, for $k=0$, such result implies the claim of the theorem.
Let $k=N-1$ (base case). In this case the tail subproblem is: \begin{alignat*}{2}
\min_{\pi \in \Pi_{N-1}} & & \quad&c(x_{N-1}, \pi(x_{N-1})) \\ \text{subject to} & & \quad& d(x_{N-1}, \pi(x_{N-1})) \leq r_{N-1}. \end{alignat*} Since, by definition, $r^{\prime}(x_N)$ and $V_N(x_N, r_N)$ are identically equal to zero, and due to the positive homogeneity of one-step conditional risk measures, the above tail subproblem is identical to the optimization problem in the Bellman's recursion \eqref{eq:T}, hence $\pi^{N-1, r_{N-1}}$ is optimal.
Assume as induction step that $\pi^{k+1,r_{k+1}}$ is optimal for the tail subproblems starting at stage $k+1$ with $x_{k+1}\in S$ and $r_{k+1} \in \Phi_{k+1}(x_{k+1})$. We want to prove that $\pi^{k, r_k}$ is optimal for the tail subproblems starting at stage $k$ with initial state $x_k\in S$ and constraint threshold $r_k \in \Phi_k(x_k)$. First, we prove that $\pi^{k, r_k}$ is a feasible control policy. Note that, from the recursive definitions of $\pi^{k, r_k}$ and $\pi^{k+1, r_{k+1}}$, one has \[ R_N^{\pi^{k, r_k}}(x_{k+1}) = R_N^{\pi^{k+1, r^{\prime}(x_k, r_k)(x_{k+1})}}(x_{k+1}). \] Hence, one can write: \begin{equation} \begin{split} &R_N^{\pi^{k, r_k}}(x_k) = d(x_k, u^*(x_k, r_k)) + \rho_k(R_N^{\pi^{k, r_k}}(x_{k+1}))=\\ &\qquad d(x_k, u^*(x_k, r_k)) + \rho_k\Bigl(R_N^{\pi^{k+1, r^{\prime}(x_k, r_k)(x_{k+1})}}(x_{k+1})\Bigr)\leq\\ &\qquad d(x_k, u^*(x_k, r_k)) + \rho_k\Bigl(r^{\prime}(x_k, r_k)(x_{k+1})\Bigr) \leq r_k, \end{split} \end{equation} where the first inequality follows from the inductive step and the monotonicity of coherent one-step conditional risk measures, and the last step follows from the definition of $u^*$ and $r^{\prime}$. Hence, $\pi^{k, r_k}$ is a feasible control policy (assuming initial state $x_k\in S$ and constraint threshold $r_k \in \Phi_k(x_k)$). As for its cost, one has, similarly as before, \[ J_N^{\pi^{k, r_k}}(x_{k+1}) = J_N^{\pi^{k+1, r^{\prime}(x_k, r_k)(x_{k+1})}}(x_{k+1}). \] Then, one can write: \begin{equation} \begin{split} &J_N^{\pi^{k, r_k}}(x_k) = c(x_k, u^*(x_k, r_k)) + \expectation{J_N^{\pi^{k, r_k}}(x_{k+1})}=\\ &\qquad c(x_k, u^*(x_k, r_k)) + \expectation{ J_N^{\pi^{k+1, r^{\prime}(x_k, r_k)(x_{k+1})}}(x_{k+1})}=\\ &\qquad c(x_k, u^*(x_k, r_k)) +\expectation{V_{k+1}(x_{k+1}, r^{\prime}(x_k, r_k)(x_{k+1}))}=\\ &\qquad T_k[V_{k+1}](x_k, r_k) = V_k(x_k, r_k), \end{split} \end{equation} where the third equality follows from the inductive step, the fourth equality follows from the definition of the dynamic programming operator in equation \eqref{eq:T}, and the last equality follows from Theorem \ref{TC_good}. Since policy $\pi^{k, r_k}$ is feasible and achieves the optimal cost, it is optimal. This concludes the proof. \end{proof}
Note that the optimal policy in the statement of Theorem \ref{them:optPoli} can be written in ``compact" form without the aid of the extra variable $r_k$. Indeed, for $k=1$, by defining the threshold transition function $\mathcal R_1(h_{0,1}):=r'(x_{0},r_{0})(x_1)$, one can write $r_1=\mathcal{R}_1(h_{0,1})$. Then, by induction arguments, one can write, for any $k\in\{1,\ldots,N\}$, $r_{k}=\mathcal{R}_k(h_{0,k})$, where $\mathcal R_k$ is the threshold transition function at stage $k$. Therefore, the optimal policy in the statement of Theorem \ref{them:optPoli} can be written as $\pi(h_{0,k}) = u^*(x_k, \mathcal R_k(h_{0,k}))$, which makes explicit the dependency of $\pi$ over the process history.
Interestingly, if one views the constraint thresholds as state variables, the optimal policies of problem $\mathcal{OPT}$ have a Markovian structure with respect to the augmented control problem.
\subsection{Computational issues}
In our approach, the solution of problem $\mathcal{OPT}$ entails the solution of two dynamic programming problems, the first one to find the lower bound for the set of feasible constraint thresholds (i.e., the function $\underline{R}_N(x)$, see Section \ref{sec:dp}), and the second one to compute the value functions $V_k(x_k, r_k)$. The latter problem is the most challenging one since it involves a functional minimization. However, as already noted, since $S$ is finite, $B(S)$ is isomorphic with $\mathbb{R}^{|S|}$, and the functional minimization in the Bellman operator \eqref{eq:T} can be recast as an optimization problem in the Euclidean space. This problem, however, can be large and, in general, is not convex.
\begin{figure}
\caption{Left figure: transition probabilities for control $u=0$. Right figure: transition probabilities for control $u=1$. Circles represent states. The transition probabilities satisfy $1\geq q>h\geq 0$.}
\label{fig:tp}
\end{figure}
\subsection{System maintenance example} Finally, we illustrate the above concepts with a simple two-stage (i.e., $N=2$) example that represents the problem of scheduling maintenance operations for a given system. The state space is given by $S=\{0,1\}$, where $\{0\}$ represents a normal state and $\{1\}$ represents a failure state; the control space is given by $U=\{0,1\}$, where $\{0\}$ means ``do nothing" and $\{1\}$ means ``perform maintenance". The transition probabilities are given in Figure \ref{fig:tp} for some $1\geq q>h\geq 0$.
Also, the cost functions and the constraint cost functions are as follows: \begin{alignat*}{1} &c(0,0)=c(1,0)=0,\quad c(0,1)=c(1,1)=c_2,\\ &d(0,1)=d(0,0)=0,\quad d(1,0)=d(1,1)=c_1\in(0,1). \end{alignat*} The terminal costs are zero. The one-step conditional risk measure is the mean semi-deviation (see equation (\ref{MD_risk})) with fixed $\lambda\in [0, 1]$ and $p\in[1,\infty)$. We wish to solve problem $\mathcal{OPT}$ for this example.
Note that, for any $\lambda$ and $p$, the function \begin{alignat*}{1} f(x):=\lambda x(1-x)^{1/p}+(1-x) \end{alignat*} is a non-increasing function in $x\in[0,1]$. Therefore, $f(q)\leq f(h)\leq f(0)$. At stage $k=2$, $V_2(1,r_2)=V_2(0,r_2)=0$, and $\Phi_2(1)=\Phi_2(0)=\{0\}$. At stage $k=1$, \begin{alignat*}{1} &V_1(0,r_1)=\left\{\begin{array}{ll} 0&\!\textrm{if }r_1\geq 0,\\ \bar{C}&\!\textrm{else}. \end{array}\right.\\ &V_1(1,r_1)=\left\{\begin{array}{ll} 0&\!\textrm{if }r_1\geq c_1,\\ \bar{C}&\!\textrm{else}. \end{array}\right.\nonumber \end{alignat*} Also, $\Phi_1(0)=[0,\infty)$ and $\Phi_1(1)=[c_1,\infty)$. At stage $k=0$, define $K^{(x)}:=f(x)c_1$ (hence $K^{(0)}=c_1$) and \begin{alignat*}{1} &E_x(r'(0),r'(1)):=r'(0)x+r'(1)(1-x)\\ &M_x(r'(0),r'(1)):=\left(\small\begin{array}{l} (1-x)[r'(1)-E_x(r'(0),r'(1))]_+^p\\ +x[r'(0)-E_x(r'(0),r'(1))]_+^p \end{array}\right)^{1/p}; \end{alignat*} hence, $E_0(r'(0),r'(1))=r'(1)$ and $M_0(r'(0),r'(1))=0$. Then, we can write \begin{alignat*}{1} \begin{split} F_0(0,r_0)=&\emptyset\quad\textrm{if }r_0<K^{(q)}\\ F_0(0,r_0)=&\{(1,r'):r'(0)\in[0,\infty),r'(1)\in[c_1,\infty), \\ &E_q(r'(0),r'(1))+\lambda M_q(r'(0),r'(1))\leq r_0\}\\ &\textrm{if }K^{(q)}\leq r_0<K^{(h)}\\ F_0(0,r_0)=&\{(1,r'):r'(0)\in[0,\infty),r'(1)\in[c_1,\infty), \\ &E_q(r'(0),r'(1))+\lambda M_q(r'(0),r'(1))\leq r_0\}\\ &\bigcup\{(0,r'):r'(0)\in[0,\infty),r'(1)\in[c_1,\infty), \\ &\qquad E_h(r'(0),r'(1))+\lambda M_h(r'(0),r'(1))\leq r_0\}\\ &\textrm{if } r_0\geq K^{(h)}\\ \end{split} \end{alignat*} \begin{alignat*}{1} \begin{split} F_0(1,r_0)=&\emptyset\quad\textrm{if }r_0<c_1+K^{(q)}\\ F_0(1,r_0)=&\{(1,r'):r'(0)\in[0,\infty),r'(1)\in[c_1,\infty), \\ &c_1+E_q(r'(0),r'(1))+\lambda M_q(r'(0),r'(1))\leq r_0\}\\ &\textrm{if }c_1+K^{(q)}\leq r_0<c_1+K^{(0)}\\ F_0(1,r_0)=&\{(1,r'):r'(0)\in[0,\infty),r'(1)\in[c_1,\infty), \\ &c_1+E_q(r'(0),r'(1))+\lambda M_q(r'(0),r'(1))\leq r_0\}\\ &\bigcup\{(0,r'):r'(0)\in[0,\infty),r'(1)\in[c_1,\infty), \\ &\qquad c_1+r'(1)\leq r_0\}\\ &\textrm{if } r_0\geq c_1+K^{(0)}\\ \end{split} \end{alignat*} As a consequence, \begin{alignat*}{1} &V_0(1,r_0)=\left\{\begin{array}{ll} \bar{C}&\textrm{if }r_0<c_1+K^{(q)}\\ c_2&\textrm{if }c_1+K^{(q)}\leq r_0<c_1+K^{(0)}\\ 0&\textrm{if } r_0\geq c_1+K^{(0)}\\ \end{array}\right.\nonumber\\ &V_0(0,r_0)=\left\{\begin{array}{ll} \bar{C}&\textrm{if }r_0<K^{(q)}\\ c_2&\textrm{if }K^{(q)}\leq r_0<K^{(h)}\\ 0&\textrm{if } r_0\geq K^{(h)}\\ \end{array}\right.\nonumber\\ \end{alignat*} Therefore, for $V_0(1,c_1+K^{(q)})$, the infimum of the Bellman's equation is attained with $u=1$, $r'(0)=0$, $r'(1)=c_1$. For $V_0(0,K^{(h)})$, the infimum of the Bellman's equation is attained with $u=0$, $r'(0)=0$, $r'(1)=c_1$. Note that, as expected, the value function is a decreasing function with respect to the risk threshold. \addtolength{\textheight}{-13cm} \section{Conclusions}\label{sec:conc} In this paper we have presented a dynamic programming approach to stochastic optimal control problems with dynamic, time-consistent (in particular Markov) risk constraints. We have shown that the optimal cost functions can be computed by value iteration and that the optimal control policies can be constructed recursively. This paper leaves numerous important extensions open for further research. First, it is of interest to study how to solve the Bellman's equation efficiently; a possible strategy involving convex programming has been briefly discussed.
Second, to address problems with large state spaces, we plan to develop approximate dynamic programming algorithms for problem $\mathcal{OPT}$. Third, it is of both theoretical and practical interest to study the relation between stochastic optimal control problems with time-consistent and time-inconsistent constraints, e.g., in terms of the optimal costs. Fourth, we plan to extend our approach to the case with partial observations and an infinite horizon. Finally, we plan to apply our approach to real settings, e.g., to the architectural analysis of planetary missions or to the risk-averse optimization of multi-period investment strategies.
\end{document}
\begin{document}
\keywords{Normal function, derivative, ordinal exponentiation, dilator, reverse mathematics.} \subjclass[2010]{03B30, 03D60, 03E10, 03F15.}
\title[A note on ordinal exponentiation and derivatives of normal functions]{A note on ordinal exponentiation and\\ derivatives of normal functions}
\author{Anton Freund} \address{Fachbereich Mathematik, Technische Universit\"at Darmstadt, Schlossgartenstr.~7, 64289~Darmstadt, Germany} \email{[email protected]}
\begin{abstract} Michael Rathjen and the present author have shown that $\Pi^1_1$-bar induction is equivalent to (a suitable formalization of) the statement that every normal function has a derivative, provably in $\mathbf{ACA}_0$. In this note we show that the base theory can be weakened to $\mathbf{RCA}_0$. Our argument makes crucial use of a normal function~$f$ with $f(\alpha)\leq 1+\alpha^2$ and $f'(\alpha)=\omega^{\omega^\alpha}$. We will also exhibit a normal function $g$ with $g(\alpha)\leq 1+\alpha\cdot 2$ and~$g'(\alpha)=\omega^{1+\alpha}$. \end{abstract}
\maketitle {\let\thefootnote\relax\footnotetext{\copyright~2020 The Authors. \emph{Mathematical Logic Quarterly} published by Wiley-VCH GmbH.\\ This is the accepted version of a publication in \emph{Mathematical Logic Quarterly} 66:3 (2020)~326-335. Please cite the official journal publication, which is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License.}}
\section{Introduction}
A function $f$ from ordinals to ordinals is called normal if it is strictly increasing and continuous at limit stages. The latter means that we have $f(\lambda)=\sup_{\alpha<\lambda}f(\alpha)$ whenever~$\lambda$ is a limit ordinal. For any normal function~$f$, the class $\{\alpha\,|\,f(\alpha)=\alpha\}$ of fixed points is closed and unbounded. The strictly increasing enumeration of these fixed points is itself a normal function, which is called the derivative $f'$ of~$f$. In the present paper we will only be concerned with normal functions that map countable ordinals to countable ordinals. These play an important role in proof theory and have interesting computability theoretic properties (see~\cite{schuette77,marcone-montalban}).
In many investigations of normal functions, ordinal exponentiation is presupposed as a starting point. Most notably, the first function in the Veblen hierarchy is usually defined as $\varphi_0(\alpha)=\omega^\alpha$ (see e.\,g.~\cite{schuette77}). This makes a lot of sense in the context of ordinal notation systems, since a non-zero ordinal is of the form $\omega^\alpha$ if and only if it is closed under addition. On the other hand, ordinal exponentiation does itself presuppose certain set existence principles, as the following result from reverse mathematics shows (see below for an introduction):
\begin{theorem}[J.-Y.~Girard~\cite{girard87}, J.~Hirst~\cite{hirst94}]\label{thm:girard-hirst}
The following are equivalent over the base theory $\mathbf{RCA}_0$:
\begin{itemize}
\item arithmetical comprehension (i.\,e.~the principal axiom of $\mathbf{ACA}_0$),
\item if $(X,<_X)$ is a well-order, then so is
\begin{equation*}
2^X=\{\langle x_1,\dots,x_n\rangle\,|\,x_1,\dots,x_n\in X\text{ and }x_n<_X\dots<_X x_1\}
\end{equation*}
with the lexicographic order.
\end{itemize} \end{theorem}
Note that elements of $2^X$ correspond to ordinals in base-$2$ Cantor normal form. In particular, $2^X$ has order type $2^\alpha$ (as usually defined in ordinal arithmetic) if $X$ has order type~$\alpha$. The theorem is also valid with base $\omega$ (recall $\omega^\alpha=2^{\omega\cdot\alpha}$), but base $2$ will have technical advantages in the following.
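As a purely computational aside (a sketch that plays no role in the formal development below), one can enumerate $2^X$ for a finite order $X$ and check that the lexicographic order lines up with the base-$2$ normal forms of the natural numbers:
\begin{verbatim}
from itertools import combinations

def two_to_the(n):
    """2^X for X = {0,...,n-1}: strictly descending tuples in the lexicographic order."""
    elems = [tuple(sorted(a, reverse=True))
             for k in range(n + 1) for a in combinations(range(n), k)]
    return sorted(elems)   # tuple comparison is lexicographic, with proper prefixes first

for i, sigma in enumerate(two_to_the(4)):
    assert sum(2 ** x for x in sigma) == i   # the i-th element is the normal form of i

print(two_to_the(3))
# (), (0,), (1,), (1, 0), (2,), (2, 0), (2, 1), (2, 1, 0)
\end{verbatim}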
Theorem~\ref{thm:girard-hirst} has been formulated as a result of reverse mathematics. In this research program one investigates implications between foundational and mathematical principles that can be expressed in the language of second order arithmetic. Implications and equivalences are proved over some weak base theory, most commonly in~$\mathbf{RCA}_0$. The latter can handle primitive recursive constructions relative to an oracle. More specifically, the acronym $\mathbf{RCA}_0$ alludes to the recursive comprehension axiom, which allows one to form $\Delta^0_1$-definable subsets of~$\mathbb N$. Furthermore, $\mathbf{RCA}_0$ allows induction for formulas of complexity $\Sigma^0_1$. For a detailed introduction to reverse mathematics we refer to~\cite{simpson09}.
The case of Theorem~\ref{thm:girard-hirst} shows that it is relatively straightforward to consider specific normal functions in reverse mathematics. It is considerably more difficult to express statements that quantify over all normal functions, or at least over a sufficiently rich class. In order to do so, one needs a general representation of normal functions by subsets of the natural numbers. Such a representation is possible via J.-Y.~Girard's~\cite{girard-pi2} notion of dilator and related work by P.~Aczel~\cite{aczel-phd,aczel-normal-functors}. Full details have been worked out in~\cite[Section~2]{freund-rathjen_derivatives}; we will recall them as they become relevant for the present paper. Relative to the representation of normal functions in second order arithmetic, M.~Rathjen and the present author have shown that the following are equivalent over $\mathbf{ACA}_0$ (see~\cite[Theorem~5.9]{freund-rathjen_derivatives}): \begin{enumerate}
\item Every normal function has a derivative.
\item The principle of $\Pi^1_1$-bar induction (also called transfinite induction) holds. \end{enumerate} Considering the proof given in~\cite{freund-rathjen_derivatives}, we see that the implication from (1) to (2) uses arithmetical comprehension (in the form of the Kleene normal form theorem, cf.~\cite[Lemma~V.1.4]{simpson09}). The proof that (2) implies (1) is carried out in $\mathbf{RCA}_0$. In any case, a result of J.~Hirst~\cite{hirst99} shows that (2) implies arithmetical comprehension (the author is grateful to E.~Frittaion for pointing this out). To establish the equivalence between (1) and (2) over $\mathbf{RCA}_0$ it remains to show that~(1) implies arithmetical comprehension as well. This is the main result of the present paper.
Concerning terminology, we use ``bar induction" and ``transfinite induction" synonymously, since this coincides with the usage in other papers on the reverse mathematics of well ordering principles (cf.~e.\,g.~\cite{rathjen-model-bi}). At the same time, we appreciate that bar induction is a conceptually different notion in constructive mathematics.
In the rest of this introduction we sketch the proof that statement (1) above implies arithmetical comprehension. Since we have not yet explained the representation of normal functions in second order arithmetic, the following argument will be rather informal. Formal versions of all claims will be established in the following sections. The idea of the proof is to construct a normal function $f$ such that the following holds for any ordinal $\alpha$ (where $f'$ is the derivative of $f$): \begin{enumerate}[label=(\roman*)]
\item We have $f(\alpha)\leq 1+\alpha^2\leq(1+\alpha)^2$.
\item We have $2^\alpha\leq f'(\alpha)$. \end{enumerate} Part~(i) is supposed to ensure that $\mathbf{RCA}_0$ recognizes $f$ as a normal function (since it proves that~$(1+\alpha)^2$ is well-founded for any well-order~$\alpha$). Invoking~(1) from above, we obtain access to the well-founded values $f'(\alpha)$ of the derivative. The inequality in~(ii) corresponds to an order embedding of $2^\alpha$ into $f'(\alpha)$, which witnesses that $2^\alpha$ is also well-founded. By Theorem~\ref{thm:girard-hirst} this yields arithmetical comprehension.
Let us now show how clauses (i) and (ii) can be satisfied: Working in a sufficiently strong set theory, the required function $f$ can be described by \begin{equation*} f(\alpha)=1+\sum_{\gamma<\alpha}(1+\gamma). \end{equation*} More formally, this infinite sum corresponds to the recursive clauses \begin{align*} f(0)&=1,\\ f(\alpha+1)&=f(\alpha)+1+\alpha,\\ f(\lambda)&=\textstyle\sup_{\alpha<\lambda}f(\alpha)\qquad\text{for $\lambda$ limit}, \end{align*} which immediately reveal that $f$ is normal. It might appear more natural to set $f(\alpha+1)=f(\alpha)+\alpha$ in the successor case (at least for $\alpha>0$), but the summand~$1$ will be crucial in the following sections. A straightforward induction on $\alpha$ shows that we have $f(\alpha)\leq1+\alpha^2$. The inequality $2^\alpha\leq f'(\alpha)$ is also proved by induction on $\alpha$: In view of $1=f(0)\leq f'(0)$ the claim holds for $\alpha=0$. In case $\alpha\neq 0$ we have \begin{equation*}
2^\alpha=\sup\{2^\beta+\gamma\,|\,\beta<\alpha\text{ and }\gamma<2^\beta\}. \end{equation*} Given $\beta<\alpha$ and $\gamma<2^\beta$, the induction hypothesis yields \begin{multline*} 2^\beta+\gamma< f'(\beta)+2^\beta\leq f(f'(\beta))+1+f'(\beta)={}\\ {}=f(f'(\beta)+1)\leq f(f'(\beta+1))=f'(\beta+1)\leq f'(\alpha), \end{multline*} which completes the induction step. When we formalize the proof, we will see that the use of transfinite induction can be avoided, which may be somewhat surprising.
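As a quick illustration (not needed for the argument), the recursive clauses yield $f(0)=1$, $f(1)=2$, $f(2)=4$, $f(3)=7$ and, more generally, $f(n)=1+\frac{n(n+1)}{2}$ for finite~$n$, so that $f(\omega)=\sup_{n<\omega}f(n)=\omega$. Hence $\omega$ is the smallest fixed point of~$f$, i.\,e.~we have $f'(0)=\omega$, and the inequality $2^0\leq f'(0)$ from~(ii) holds with plenty of room.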
The bound $2^\alpha\leq f'(\alpha)$ suffices to lower the base theory of~\cite[Theorem~5.9]{freund-rathjen_derivatives}, but it is not optimal: In the last section of this note we will establish~$f'(\alpha)=\omega^{\omega^\alpha}$. In ordinal arithmetic one is particularly interested in the function $\alpha\mapsto\omega^\alpha$, which is the most common starting function for the Veblen hierarchy (see e.\,g.~\cite{schuette77}). In this context it is natural to ask whether $\alpha\mapsto\omega^\alpha$ itself is the derivative of some normal function. This is not the case: For any normal function~$g$ we see that $g(0)=0$ implies $g'(0)=0<\omega^0$ while $g(0)>0$ implies $g(1)>1$ and then $g'(0)>1=\omega^0$. However, this value is the only obstruction: We will exhibit a normal function $g$ with $g(\alpha)\leq 1+\alpha\cdot 2$ and $g'(\alpha)=\omega^{1+\alpha}$.
\section{A normal function justified by recursive comprehension}
In the present section we recall how normal functions can be represented in second order arithmetic; further explanations and full details of all missing proofs can be found in~\cite[Section~2]{freund-rathjen_derivatives}. We then apply this representation to the normal function~$f$ that has been considered in the introduction.
To find a representation of normal functions, we need to understand how they can be determined by a countable amount of information. Clearly, normal functions are not determined by their values on some fixed countable set of arguments. This suggests to extend our functions into objects with more internal structure. For this purpose it is convenient to use some notions from category theory, namely those of category, functor and natural transformation. We will not need anything that goes beyond these basic concepts; an introduction to category theory (which is much too comprehensive for our purpose) can be found in~\cite{maclane-working}.
The category of linear orders consists of the linear orders as objects and the embeddings (strictly increasing functions) as morphisms. We will be particularly interested in the finite orders $n=\{0,\dots,n-1\}$ (with the usual order relation). These orders and the embeddings between them form the category of natural numbers. Girard's~\cite{girard-pi2} idea was to consider particularly uniform functors from linear orders to linear orders; those functors that preserve well-foundedness are called dilators. Due to their uniformity, dilators are essentially determined by their restrictions to the category of natural numbers. Provided that these restrictions are countable, they can be used to represent normal functions in reverse mathematics.
In order to describe the uniformity property of dilators, we consider the finite subset functor on the category of sets, which is given by \begin{align*} [X]^{<\omega}&=\text{``the set of finite subsets of $X$"},\\
[f]^{<\omega}(a)&=\{f(x)\,|\,x\in a\}, \end{align*}
where the second clause refers to $f:X\to Y$ and $a\in[X]^{<\omega}$. The cardinality of a finite set~$a$ will be denoted by~$|a|\in\mathbb N$ (which can in turn stand for the finite order~$\{0,\dots,|a|-1\}$). The following is essentially due to Girard~\cite{girard-pi2}; we refer to~\cite[Remark~2.2.2]{freund-thesis} for a detailed comparison with his original definition.
\begin{definition}[$\mathbf{RCA}_0$]\label{def:prae-dil} A prae-dilator consists of \begin{enumerate}[label=(\roman*)] \item a functor $T$ from natural numbers to linear orders, such that each order $T(n)=(T(n),<_{T(n)})$ has field $T(n)\subseteq\mathbb N$, and
\item a natural transformation $\operatorname{supp}:T\Rightarrow[\cdot]^{<\omega}$ that satisfies the following support condition: Each $\sigma\in T(n)$ lies in the image of the embedding~$T(\operatorname{en}_\sigma)$, where $\operatorname{en}_\sigma:|\operatorname{supp}_n(\sigma)|\to n$ is the strictly increasing enumeration of the set $\operatorname{supp}_n(\sigma)\subseteq n=\{0,\dots,n-1\}$. \end{enumerate} \end{definition}
In second order arithmetic, the function $n\mapsto T(n)$ can be represented by the set $T^0=\{(n,\sigma)\,|\,\sigma\in T(n)\}$; the latter is a subset of~$\mathbb N$ if we code each pair by a single number. Officially, this turns $\sigma\in T(n)$ into an abbreviation for $(n,\sigma)\in T^0$, which can be expressed by a formula that is $\Delta^0_1$ in~$\mathbf{RCA}_0$. Assuming that finite sets and functions are coded by natural numbers, the action $f\mapsto T(f)$ on morphisms and the map $(n,\sigma)\mapsto\operatorname{supp}_n(\sigma)$ can be represented in the same way.
Above we have mentioned that certain functors on linear orders are determined by their restrictions to the category of natural numbers. Conversely, we now explain how a prae-dilator can be extended into a functor on linear orders. In $\mathbf{RCA}_0$ we define \begin{equation}\label{eq:extend-dil}
D^T(X)=\{\langle a,\sigma\rangle\,|\,a\in[X]^{<\omega}\text{ and }\sigma\in T(|a|)\text{ and }\operatorname{supp}_{|a|}(\sigma)=|a|\} \end{equation}
for any prae-dilator $T=(T,\operatorname{supp})$ and any linear order $X$. Informally speaking, the pair $\langle a,\sigma\rangle$ represents the element $T(\operatorname{en}_a)(\sigma)\in T(X)$, where $\operatorname{en}_a:|a|\to X$ is the increasing function with image $a\subseteq X$ (note that $T(\operatorname{en}_a)(\sigma)$ would make sense if~$T$ was defined on all linear orders). Due to the condition $\operatorname{supp}_{|a|}(\sigma)=|a|$, the representation is unique (we would have $a=\operatorname{supp}_X(T(\operatorname{en}_a)(\sigma))$ if $\operatorname{supp}$ was defined beyond the category of natural numbers). In order to define the appropriate order relation on $D^T(X)$, we introduce the following notation: Given an embedding $f:a\to b$ between finite orders, let $|f|:|a|\to|b|$ be the unique function that makes \begin{equation*} \begin{tikzcd}
{|a|}\arrow["\cong"]{r}\arrow[swap,"|f|"]{d} & a\arrow["f"]{d}\\
{|b|}\arrow["\cong"]{r} & b \end{tikzcd} \end{equation*} a commutative diagram. We can now stipulate \begin{equation*}
\langle a_0,\sigma_0\rangle <_{D^T(X)} \langle a_1,\sigma_1\rangle\quad:\Leftrightarrow\quad T(|\iota_0|)(\sigma_0)<_{T(|a_0\cup a_1|)} T(|\iota_1|)(\sigma_1), \end{equation*} where $\iota_i:a_i\hookrightarrow a_0\cup a_1$ are the inclusions. It is also possible to turn $D^T(\cdot)$ into a functor and to define natural support functions $\operatorname{supp}_X:D^T(X)\to[X]^{<\omega}$. In particular we can declare that $T$ is a dilator if and only if the order $D^T(X)$ is well-founded for any well-order~$X$ (the two obvious definitions of well-ordering are equivalent over $\mathbf{RCA}_0$, see e.\,g.~\cite[Lemma~2.3.12]{freund-thesis}). From the viewpoint of a sufficiently strong set theory, each dilator $T$ gives rise to a function $f_T$ from ordinals to ordinals, with \begin{equation}\label{eq:induced-function} f_T(\alpha)=\operatorname{otp}(D^T(\alpha)). \end{equation} Here we view $\alpha$ as a linear order and write $\operatorname{otp}(X)$ for the order type of~$X$. We can view $T$ as a representation of the function $f_T$ in second order arithmetic.
It is straightforward to specify a dilator $T$ with $f_T(\alpha)=\alpha+1$. In particular, the function $f_T$ does not need to be normal. The following condition, which was identified by Aczel~\cite{aczel-phd,aczel-normal-functors}, ensures that we are concerned with a normal function:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:normal-dil} A normal (prae-)dilator consists of a (prae-)dilator $T$ and a natural family of embeddings $\mu_n:n\to T(n)$ such that \begin{equation*} \sigma<_{T(n)}\mu_n(m)\quad\Leftrightarrow\quad\operatorname{supp}_n(\sigma)\subseteq m=\{0,\dots,m-1\} \end{equation*} holds for all $\sigma\in T(n)$ and all $m<n$. \end{definition}
Note that we necessarily have $\operatorname{supp}_1(\mu_1(0))=1$, since $\operatorname{supp}_1(\mu_1(0))=\emptyset$ would yield $\mu_1(0)<_{T(1)}\mu_1(0)$. This allows us to define $D^\mu_X:X\to D^T(X)$ by \begin{equation}\label{eq:extend-normal} D^\mu_X(x)=\langle\{x\},\mu_1(0)\rangle. \end{equation} One can show that we have \begin{equation*}
\langle a,\sigma\rangle<_{D^T(X)}D^\mu_X(x)\quad\Leftrightarrow\quad a\subseteq X\!\restriction\!x=\{x'\in X\,|\,x'<_X x\}. \end{equation*} Hence the elements $D^\mu_X(x)$ are cofinal in $D^T(X)$ if $X$ has limit type. In a sufficiently strong set theory one can deduce that $f_T$ is normal (cf.~\cite[Proposition~2.12]{freund-rathjen_derivatives}).
In the introduction we have considered a normal function $f$ with \begin{equation*} f(\alpha)=1+\sum_{\gamma<\alpha}(1+\gamma). \end{equation*} Our next goal is to construct a normal dilator $F$ that represents this function. Given an order $X$, we write \begin{equation*} 1+X=\{\bot\}\cup X \end{equation*} for the extension of $X$ by a new minimum element~$\bot$. To obtain a functor we map each embedding $h:X\rightarrow Y$ to the embedding $1+h:1+X\to 1+Y$ with \begin{equation*} (1+h)(x)=\begin{cases} \bot & \text{if }x=\bot,\\ h(x) & \text{if }x\in X. \end{cases} \end{equation*} In order to define a dilator $F$ we must specify a linear order $F(n)$ for each finite order~$n=\{0,\dots,n-1\}$. It will later be convenient to have a more general definition, which explains $F(X)$ for any linear order $X$.
\begin{definition}[$\mathbf{RCA}_0$]\label{def:F} For each linear order $X$ we define \begin{equation*}
F(X)=1+\sum_{x\in 1+X}(1+X)\!\restriction\!x=\{\bot\}\cup\{\langle x,y\rangle\in(1+X)^2\,|\,y<_{1+X}x\}. \end{equation*} Note that $F(X)$ contains no pairs of the form $\langle \bot,y\rangle$, since $y<_{1+X}\bot$ must fail. To turn $F(X)$ into a linear order we declare that $\bot$ is minimal and that we have \begin{equation*} \langle x_0,y_0\rangle<_{F(X)}\langle x_1,y_1\rangle\quad\Leftrightarrow\quad\begin{cases} \text{either $x_0<_X x_1$},\\ \text{or $x_0=x_1$ and $y_0<_{1+X}y_1$}. \end{cases} \end{equation*} For an embedding $h:X\to Y$, define $F(h):F(X)\to F(Y)$ by $F(h)(\bot)=\bot$ and \begin{equation*} F(h)(\langle x,y\rangle)=\langle h(x),(1+h)(y)\rangle. \end{equation*} Each order $X$ gives rise to a function $\operatorname{supp}^F_X:F(X)\to[X]^{<\omega}$ with \begin{equation*} \operatorname{supp}^F_X(\bot)=\emptyset\qquad\text{and}\qquad\operatorname{supp}^F_X(\langle x,y\rangle)=\begin{cases} \{x\} & \text{if }y=\bot,\\ \{x,y\} & \text{if }y\in X. \end{cases} \end{equation*} Finally, we define functions $\mu^F_X:X\to F(X)$ by setting $\mu^F_X(x)=\langle x,\bot\rangle$. \end{definition}
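The construction can be made concrete with a few lines of code. The following sketch (our illustration in plain Python; the names are not taken from the text) enumerates $F(n)$ for a finite order $n$ and implements the comparison, the support function and the map $\mu^F$:
\begin{verbatim}
# Sketch: the order F(n) from the definition above, for n = {0,...,n-1}.
# The new minimum of 1+X is coded by the string "bot".
BOT = "bot"

def F_elements(n):
    """Elements of F(n): bot plus pairs <x,y> with x in n and y <_{1+n} x."""
    elems = [BOT]
    for x in range(n):
        elems.append((x, BOT))                   # y = bot lies below every x
        elems.extend((x, y) for y in range(x))   # y in X with y < x
    return elems

def less(sigma, tau):
    """The order <_{F(n)}: bot is minimal, pairs compare lexicographically."""
    if sigma == BOT:
        return tau != BOT
    if tau == BOT:
        return False
    (x0, y0), (x1, y1) = sigma, tau
    if x0 != x1:
        return x0 < x1
    if y0 == BOT:
        return y1 != BOT
    return y1 != BOT and y0 < y1

def supp(sigma):
    """The support function supp^F_n."""
    if sigma == BOT:
        return set()
    x, y = sigma
    return {x} if y == BOT else {x, y}

def mu(x):
    """The embedding mu^F_n(x) = <x, bot>."""
    return (x, BOT)

assert len(F_elements(4)) == 1 + 4 + 6   # 1 + n + n(n-1)/2 elements
\end{verbatim}
On finite arguments the cardinality of $F(n)$ is $1+n+\binom{n}{2}$, which matches the values $f(n)=1+\sum_{k<n}(1+k)$ of the function from the introduction.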
Note that the relations $\sigma\in F(n)$, $\sigma<_{F(n)}\tau$, $F(h)(\sigma)=\tau$ with $h:n\to m$, $a=\operatorname{supp}^F_n(\sigma)$ and $\sigma=\mu^F_n(m)$ are $\Delta^0_1$-definable in $\mathbf{RCA}_0$. Hence the restriction of $F$ to the category of natural numbers exists as a set. It is straightforward to verify that Definitions~\ref{def:prae-dil} and~\ref{def:normal-dil} are satisfied (the condition $y<_{1+X}x$ in the definition of $F(X)$ is crucial for the latter):
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:F-prae-dil} Restricting Definition~\ref{def:F} to the category of natural numbers yields a normal prae-dilator~$F$. \end{lemma}
To show that $F$ is a dilator we need to consider the ordered sets $D^F(X)$ from equation~(\ref{eq:extend-dil}). As a preparation, we relate $D^F(X)$ to the order $F(X)$ constructed in Definition~\ref{def:F}. Let us also recall that $\mu^F$ (or rather its restriction to the category of natural numbers) gives rise to a family of functions $D^{\mu^F}_X:X\to D^F(X)$, as defined by equation~(\ref{eq:extend-normal}). For later use, we relate these to the functions $\mu^F_X:X\to F(X)$.
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:iso-reconstruct} For each order~$X$ we have an isomorphism \begin{equation*} \eta_X:D^F(X)\xrightarrow{\cong} F(X) \end{equation*} with $\eta_X\circ D^{\mu^F}_X=\mu^F_X$. \end{lemma} \begin{proof}
Recall that $D^F(X)$ consists of pairs $\langle a,\sigma\rangle$, where $a$ is a finite suborder of $X$ and $\sigma\in F(|a|)$ satisfies $\operatorname{supp}^F_{|a|}(\sigma)=|a|$. We set \begin{equation*} \eta_X(\langle a,\sigma\rangle)=F(\operatorname{en}_a)(\sigma), \end{equation*}
writing $\operatorname{en}_a:|a|\to X$ for the increasing function with range $a$. It is straightforward to verify that $F$ is an endofunctor on the category of linear orders. Using this fact one can show that $\eta_X$ is an embedding, as in the proof of~\cite[Proposition~2.1]{freund-computable}: Given sets~$Y\subseteq Z$, we agree to write $\iota_Y^Z:Y\hookrightarrow Z$ for the inclusion. Assume that we have $\langle a,\sigma\rangle<_{D^F(X)}\langle b,\tau\rangle$, and note that this amounts to \begin{equation*}
F(|\iota_a^{a\cup b}|)(\sigma)<_{F(|a\cup b|)} F(|\iota_b^{a\cup b}|)(\tau). \end{equation*}
Given a finite order~$c$, we write $\operatorname{en}^0_c:|c|\to c$ for the increasing enumeration. For the function $\operatorname{en}_a:a\to X$ from above, the definition of~$|\cdot|$ yields \begin{equation*}
\operatorname{en}_a=\iota_a^X\circ\operatorname{en}^0_a=\iota_{a\cup b}^X\circ\iota_a^{a\cup b}\circ\operatorname{en}^0_a=\iota_{a\cup b}^X\circ\operatorname{en}^0_{a\cup b}\circ|\iota_a^{a\cup b}|. \end{equation*} Since $F(\iota_{a\cup b}^X\circ\operatorname{en}^0_{a\cup b})$ is an embedding, the above inequality implies \begin{multline*}
\eta_X(\langle a,\sigma\rangle)=F(\operatorname{en}_a)(\sigma)=F(\iota_{a\cup b}^X\circ\operatorname{en}^0_{a\cup b})\circ F(|\iota_a^{a\cup b}|)(\sigma)<_{F(X)}\\
<_{F(X)} F(\iota_{a\cup b}^X\circ\operatorname{en}^0_{a\cup b})\circ F(|\iota_b^{a\cup b}|)(\tau)=\eta_X(\langle b,\tau\rangle). \end{multline*} This confirms that $\eta_X$ is an embedding and in particular injective. It remains to show surjectivity. As a representative example, consider $\langle x,y\rangle\in F(X)$ with~$y\neq\bot$. According to Definition~\ref{def:F} we must have $y<_X x$. Hence $a:=\{x,y\}$ has two elements, and the function $\operatorname{en}_a:2\to X$ has values $\operatorname{en}_a(0)=y$ and $\operatorname{en}_a(1)=x$. Since $\sigma:=\langle 1,0\rangle\in F(2)$ satisfies $\operatorname{supp}^F_2(\sigma)=\{0,1\}=2$, we get $\langle a,\sigma\rangle\in D^F(X)$. By construction we have \begin{equation*} \eta_X(\langle a,\sigma\rangle)=F(\operatorname{en}_a)(\langle 1,0\rangle)=\langle\operatorname{en}_a(1),(1+\operatorname{en}_a)(0)\rangle=\langle x,y\rangle. \end{equation*} A similar argument shows that the image of $\eta_X$ contains $\bot$ and all elements of the form $\langle x,\bot\rangle$. In order to verify the remaining claim we consider $x\in X$ and write $\operatorname{en}_{\{x\}}:1\to X$ for the function with range $\{x\}$. In view of equation~(\ref{eq:extend-normal}) we obtain \begin{multline*} \eta_X\circ D^{\mu^F}_X(x)=\eta_X(\langle\{x\},\mu^F_1(0)\rangle)=F(\operatorname{en}_{\{x\}})(\langle 0,\bot\rangle)=\\ =\langle\operatorname{en}_{\{x\}}(0),(1+\operatorname{en}_{\{x\}})(\bot)\rangle=\langle x,\bot\rangle=\mu^F_X(x), \end{multline*} as required. \end{proof}
The normal function $f$ from the introduction satisfies $f(\alpha)\leq(1+\alpha)^2$. We can now recover this result on the level of the prae-dilator $F$.
\begin{lemma}[$\mathbf{RCA}_0$] For each linear order $X$ we have an embedding of $D^F(X)$ into $(1+X)^2$, where the latter is equipped with the lexicographic order. \end{lemma} \begin{proof} In view of the previous lemma it suffices to exhibit an embedding of $F(X)$ into $(1+X)^2$. Indeed, we have defined $F(X)\backslash\{\bot\}$ as a suborder of $(1+X)^2$. In order to obtain the desired embedding it suffices to map $\bot\in F(X)$ to the minimum element $\langle\bot,\bot\rangle\in(1+X)^2$. This is possible because $\langle\bot,\bot\rangle$ does not lie in the suborder $F(X)\backslash\{\bot\}$, due to the condition $y<_{1+X} x$ in Definition~\ref{def:F}. \end{proof}
The following result concludes the reconstruction of $f$ in second order arithmetic:
\begin{corollary}[$\mathbf{RCA}_0$]\label{cor:F-dil} The normal prae-dilator $F$ is a normal dilator. \end{corollary} \begin{proof} In view of Lemma~\ref{lem:F-prae-dil} it remains to show that $D^F(X)$ is well-founded for any well-order~$X$. By the previous lemma this reduces to the claim that $(1+X)^2$ is well-founded. More generally, the usual proof that any product $X\times Y$ of well-orders is well-founded goes through in $\mathbf{RCA}_0$: Assume that there is a strictly decreasing sequence $(\langle x_n,y_n\rangle)_{n\in\mathbb N}$ in $X\times Y$. Then the sequence $(x_n)_{n\in\mathbb N}$ is non-increasing. Since $X$ is well-founded, there is an $N\in\mathbb N$ such that $x_n=x_N$ holds for all $n\geq N$ (otherwise a strictly decreasing sequence in $X$ could be constructed by recursion). Then $(y_n)_{n\geq N}$ is a strictly decreasing sequence in $Y$, which contradicts the assumption that $Y$ is well-founded. \end{proof}
\section{From derivative to arithmetical comprehension}
In the present section we recall how derivatives of normal functions are defined in the context of second order arithmetic. We then show how the inequality $2^\alpha\leq f'(\alpha)$ from the introduction can be recovered in $\mathbf{RCA}_0$. Finally, we conclude that the base theory in a result of Rathjen and the present author can be lowered from $\mathbf{ACA}_0$ to $\mathbf{RCA}_0$.
If $g'$ is the derivative of a normal function $g$, then we have $g\circ g'=g'$. To formulate this condition in second order arithmetic, we need to define the composition $T\circ S$ of normal prae-dilators. This is not entirely straightforward: In view of Definition~\ref{def:prae-dil} the orders $S(n)$ may be infinite, while $T$ is only defined on finite orders represented by natural numbers. In order to overcome this obstacle we use equation~(\ref{eq:extend-dil}) to extend $T$ beyond the category of natural numbers, and set \begin{equation*} (T\circ S)(n)=D^T(S(n)). \end{equation*} One can equip $T\circ S$ with the structure of a prae-dilator, as shown in~\cite[Section~2]{freund-rathjen_derivatives}. According to~\cite[Proposition~2.14]{freund-rathjen_derivatives} there is a family of isomorphisms \begin{equation*} \zeta^{T,S}_X:D^T\circ D^S(X)\xrightarrow{\cong} D^{T\circ S}(X). \end{equation*} If $S$ and $T$ are dilators, then equation (\ref{eq:induced-function}) yields \begin{equation*} f_{T\circ S}(\alpha)=\operatorname{otp}(D^{T\circ S}(\alpha))=\operatorname{otp}(D^T\circ D^S(\alpha))=\operatorname{otp}(D^T(f_S(\alpha)))=f_T\circ f_S(\alpha), \end{equation*} where the third equality relies on $D^S(\alpha)\cong\operatorname{otp}(D^S(\alpha))=f_S(\alpha)$ and the fact that~$D^T$ is functorial. Hence the given composition of dilators represents the usual composition of functions on the ordinals. If $T=(T,\mu^T)$ and $S=(S,\mu^S)$ are normal prae-dilators, then we can invoke equation~(\ref{eq:extend-normal}) to define $\mu^{T\circ S}_n:n\to(T\circ S)(n)$ by \begin{equation*} \mu^{T\circ S}_n=D^{\mu^T}_{S(n)}\circ\mu^S_n. \end{equation*} In~\cite[Lemma~2.16]{freund-rathjen_derivatives} it has been verified that this turns $T\circ S$ into a normal prae-dilator, and that we have \begin{equation}\label{eq:mu-compose} D^{\mu^{T\circ S}}_X=\zeta^{T,S}_X\circ D^{\mu^T}_{D^S(X)}\circ D^{\mu^S}_X. \end{equation} We can now recall the following notion, which has been introduced in~\cite{freund-rathjen_derivatives}:
\begin{definition}[$\mathbf{RCA}_0$] Let $T$ be a normal prae-dilator. An upper derivative of~$T$ consists of a normal prae-dilator $S$ and a natural transformation $\xi:T\circ S\Rightarrow S$ that satisfies $\xi\circ\mu^{T\circ S}=\mu^S$. \end{definition}
According to~\cite[Lemma~2.19]{freund-rathjen_derivatives}, the natural transformation $\xi$ can be extended into a family of order embeddings $D^\xi_X:D^{T\circ S}(X)\to D^S(X)$ with \begin{equation}\label{eq:extend-upper-deriv} D^\xi_X\circ D^{\mu^{T\circ S}}_X=D^{\mu^S}_X. \end{equation} If $S$ is a dilator, then the embedding $D^\xi_\alpha$ witnesses \begin{equation*} f_T\circ f_S(\alpha)=\operatorname{otp}(D^{T\circ S}(\alpha))\leq\operatorname{otp}(D^S(\alpha))=f_S(\alpha), \end{equation*} for any ordinal $\alpha$. The converse inequality is automatic when $f_T$ is a normal function. Hence $f_S$ does indeed enumerate fixed points of $f_T$. It is possible that some fixed points are omitted. In this case $f_S$ grows faster than the derivative of $f_T$, which justifies the term ``upper derivative". To characterize the actual derivative on the level of normal dilators one can consider initial objects in the category of upper derivatives, as shown in~\cite{freund-rathjen_derivatives}.
We can now state the main technical result of this paper. As explained in the introduction, the order $2^X$ consists of finite descending sequences with entries in~$X$.
\begin{theorem}[$\mathbf{RCA}_0$] Assume that $G$ and $\xi:F\circ G\Rightarrow G$ form an upper derivative of the normal dilator $F$ from Definition~\ref{def:F}. Then there is an order embedding of $2^X$ into~$D^G(X)$, for each linear order~$X$. \end{theorem} \begin{proof} As preparation, we observe the following: By Lemma~\ref{lem:iso-reconstruct} and the results that we have recalled in the first part of the present section, we get an embedding \begin{equation*} \xi^F_X:=D^\xi_X\circ\zeta^{F,G}_X\circ\eta_{D^G(X)}^{-1}:F(D^G(X))\to D^G(X). \end{equation*} According to Definition~\ref{def:normal-dil}, the normal prae-dilator $G$ comes with a natural transformation $\mu^G$. The latter extends into an embedding $D^{\mu^G}_X:X\to D^G(X)$, by equation~(\ref{eq:extend-normal}). The values of the desired embedding \begin{equation*} J:2^X\to D^G(X) \end{equation*} will be defined by recursion along sequences in $2^X$. To ensure that the recursion goes through we will simultaneously verify that we have \begin{equation}\label{eq:embedding-sim-ind} J(\langle x_1,\dots,x_n\rangle)<_{D^G(X)} D^{\mu^G}_X(x)\qquad\text{if we have $x_1<_X x$ or $n=0$}. \end{equation} Officially, the recursive construction and the inductive verification should be untangled. To show that this is possible, we point out that it will always be decidable whether the prerequisites of our recursive clauses are satisfied. Whenever they fail, we can thus assign $\xi^F_X(\bot)\in D^G(X)$ as a default value. Once the recursion is completed, the inductive verification shows that the default value is never required. Let us also point out that the induction statement has complexity $\Pi^0_1$ (note that (\ref{eq:embedding-sim-ind}) involves a universal quantification over~$x$). Since $\Sigma^0_1$-induction and $\Pi^0_1$-induction are equivalent (see e.g.~\cite[Corollary~II.3.10]{simpson09}), this shows that the induction can be carried out in~$\mathbf{RCA}_0$. Let us now specify the details: For the base of the recursion we use the minimum element $\bot$ of $F(D^G(X))$ and set \begin{equation*} J(\langle\rangle)=\xi^F_X(\bot). \end{equation*} To verify condition~(\ref{eq:embedding-sim-ind}) we observe that equations~(\ref{eq:extend-upper-deriv}) and (\ref{eq:mu-compose}) and Lemma~\ref{lem:iso-reconstruct} yield \begin{multline*} D^{\mu^G}_X(x)=D^\xi_X\circ D^{\mu^{F\circ G}}_X(x)=D^\xi_X\circ\zeta^{F,G}_X\circ D^{\mu^F}_{D^G(X)}\circ D^{\mu^G}_X(x)=\\ =D^\xi_X\circ\zeta^{F,G}_X\circ\eta_{D^G(X)}^{-1}\circ\mu^F_{D^G(X)}\circ D^{\mu^G}_X(x)=\xi^F_X\circ\mu^F_{D^G(X)}\circ D^{\mu^G}_X(x)=\xi^F_X(\langle D^{\mu^G}_X(x),\bot\rangle). \end{multline*} In view of $\bot<_{F(D^G(X))}\langle D^{\mu^G}_X(x),\bot\rangle$ we get $J(\langle\rangle)<_{D^G(X)}D^{\mu^G}_X(x)$ for any $x\in X$, as required by condition~(\ref{eq:embedding-sim-ind}). In the recursion step we put \begin{equation*} J(\langle x_0,\dots,x_n\rangle)=\xi^F_X(\langle D^{\mu^G}_X(x_0),J(\langle x_1,\dots,x_n\rangle)\rangle). \end{equation*} To see that the argument $\langle D^{\mu^G}_X(x_0),J(\langle x_1,\dots,x_n\rangle)\rangle$ does indeed lie in the domain $F(D^G(X))$ of $\xi^F_X$, we must establish the condition $J(\langle x_1,\dots,x_n\rangle)<_{D^G(X)}D^{\mu^G}_X(x_0)$ from Definition~\ref{def:F}. By the definition of the order $2^X$ we have $x_1<_Xx_0$ or $n=0$. Hence the required inequality holds by condition~(\ref{eq:embedding-sim-ind}). 
Furthermore, condition~(\ref{eq:embedding-sim-ind}) remains valid in the recursion step: For $x_0<_X x$ we have $D^{\mu^G}_X(x_0)<_{D^G(X)}D^{\mu^G}_X(x)$. Together with the definition of $J(\langle x_0,\dots,x_n\rangle)$ and the equality $D^{\mu^G}_X(x)=\xi^F_X(\langle D^{\mu^G}_X(x),\bot\rangle)$ from above, this does indeed imply the condition $J(\langle x_0,\dots,x_n\rangle)<_{D^G(X)}D^{\mu^G}_X(x)$. It remains to show that $J$ is an order embedding. We establish \begin{equation*} \sigma<_{2^X}\tau\quad\Rightarrow\quad J(\sigma)<_{D^G(X)}J(\tau) \end{equation*} by joint induction on $\sigma$ and $\tau$ (or by induction on the length of $\tau$, which leads to an induction statement of complexity~$\Pi^0_1$). Let us first assume that we have \begin{equation*} \sigma=\langle\rangle<_{2^X}\langle y_0,\dots,y_m\rangle=\tau \end{equation*} with $\tau\neq\langle\rangle$. Since $\bot\in F(D^G(X))$ is minimal we do indeed get \begin{equation*} J(\sigma)=\xi^F_X(\bot)<_{D^G(X)}\xi^F_X(\langle D^{\mu^G}_X(y_0),J(\langle y_1,\dots,y_m\rangle)\rangle)=J(\tau). \end{equation*} Now consider an inequality \begin{equation*} \sigma=\langle x_0,\dots,x_n\rangle<_{2^X}\langle y_0,\dots,y_m\rangle=\tau. \end{equation*} We must either have $x_0<_X y_0$, or $x_0=y_0$ and $\langle x_1,\dots,x_n\rangle<_{2^X}\langle y_1,\dots,y_m\rangle$. If the latter holds, then we get $J(\langle x_1,\dots,x_n\rangle)<_{D^G(X)}J(\langle y_1,\dots,y_m\rangle)$ by the induction hypothesis. In either case we obtain \begin{equation*} \langle D^{\mu^G}_X(x_0),J(\langle x_1,\dots,x_n\rangle)\rangle<_{F(D^G(X))}\langle D^{\mu^G}_X(y_0),J(\langle y_1,\dots,y_m\rangle)\rangle. \end{equation*} By applying $\xi^F_X$ to both sides we get $J(\sigma)<_{D^G(X)}J(\tau)$. \end{proof}
Recall that a (normal) prae-dilator $S$ is a dilator if and only if the order $D^S(X)$ is well-founded for any well-order~$X$. We can draw the following conclusion.
\begin{corollary}[$\mathbf{RCA}_0$]\label{cor:deriv-to-aca} Assume that any normal dilator $T$ has an upper derivative $(S,\xi)$ such that $S$ is a dilator. Then arithmetical comprehension holds. \end{corollary} \begin{proof} In view of Theorem~\ref{thm:girard-hirst} it suffices to show that $2^X$ is well-founded for any given well-order~$X$. Construct $F$ as in Definition~\ref{def:F}. From Lemma~\ref{lem:F-prae-dil} and Corollary~\ref{cor:F-dil} we know that $F$ is a normal dilator. Hence the assumption of the present corollary yields an upper derivative $\xi:F\circ G\Rightarrow G$ such that $D^G(X)$ is well-founded. The previous theorem provides an order embedding of $2^X$ into $D^G(X)$, which witnesses that $2^X$ is well-founded as well. \end{proof}
According to~\cite[Definition~2.26]{freund-rathjen_derivatives}, a derivative of a normal prae-dilator is an upper derivative that is initial in a suitable sense. In~\cite[Section~4]{freund-rathjen_derivatives} it has been shown how to construct a derivative $(\partial T,\xi^T)$ of a given normal prae-dilator $T$. The transformation of $T$ into $\partial T$ and $\xi^T$ can be implemented in $\mathbf{RCA}_0$ (in particular it is computable). Hence $\mathbf{RCA}_0$ proves that (upper) derivatives exist. What $\mathbf{RCA}_0$ cannot show is that $X\mapsto D^{\partial T}(X)$ preserves well-foundedness when $X\mapsto D^T(X)$ does. Indeed, Rathjen and the present author have shown that the latter is equivalent to $\Pi^1_1$-bar induction (which asserts that $\Pi^1_1$-induction is available along any well-order). As explained in the introduction, we can now lower the base theory over which this equivalence holds (Theorem~5.9 of~\cite{freund-rathjen_derivatives} proves it over $\mathbf{ACA}_0$).
\begin{corollary}[$\mathbf{RCA}_0$]\label{cor:main-equiv} The following are equivalent: \begin{enumerate} \item If $T$ is a normal dilator, then so is $\partial T$. \item Any normal dilator~$T$ has an upper derivative $(S,\xi)$ such that $S$ is a dilator. \item The principle of $\Pi^1_1$-bar induction holds. \end{enumerate} \end{corollary} \begin{proof} To see that (1) implies (2) it suffices to know that $\partial T$ and $\xi^T$ form an upper derivative of $T$. This holds by~\cite[Proposition~4.11]{freund-rathjen_derivatives}, which was proved in~$\mathbf{RCA}_0$. The implication from~(2) to (3) holds over $\mathbf{ACA}_0$, by the original proof of~\cite[Theorem~5.9]{freund-rathjen_derivatives}. Now Corollary~\ref{cor:deriv-to-aca} of the present paper tells us that (2) implies arithmetical comprehension, which means that all ingredients of the proof become available over~$\mathbf{RCA}_0$. The implication from (3) to (1) holds by~\cite[Theorem~5.8]{freund-rathjen_derivatives}, which was established in $\mathbf{RCA}_0$. \end{proof}
Combining Corollaries~\ref{cor:main-equiv} and~\ref{cor:deriv-to-aca} yields a somewhat indirect proof that $\Pi^1_1$-bar induction implies arithmetical comprehension. In fact, the latter is equivalent to the weaker principle of arithmetical transfinite induction, as shown by J.~Hirst~\cite{hirst99}.
\section{Ordinal exponentiation as a derivative}
In the present section we show that the derivative of the normal function $f$ from the introduction is given by $f'(\alpha)=\omega^{\omega^\alpha}$. We also specify a normal function $g$ with $g(\alpha)\leq 1+\alpha\cdot 2$ and $g'(\alpha)=\omega^{1+\alpha}$. The relevance of this result has been discussed at the end of the introduction. In contrast to the previous sections, we do not aim to formalize this section in a weak base theory.
The normal function~$f$ was defined by $f(\alpha)=1+\sum_{\gamma<\alpha}(1+\gamma)$, or more formally by the recursive clauses \begin{align*} f(0)&=1,\\ f(\alpha+1)&=f(\alpha)+1+\alpha,\\ f(\lambda)&=\textstyle\sup_{\alpha<\lambda}f(\alpha)\qquad\text{for $\lambda$ limit}. \end{align*} Recall that $\alpha>0$ is multiplicatively (resp.~additively) principal if $\beta,\gamma<\alpha$ implies $\beta\cdot\gamma<\alpha$ (resp.~$\beta+\gamma<\alpha$). The following determines the derivative of $f$.
\begin{lemma}
We have $f(\alpha)=\alpha$ if and only if $\alpha$ is a multiplicatively principal limit ordinal. \end{lemma} \begin{proof}
Assume that $f(\alpha)=\alpha$ holds. In view of $f(1)>f(0)=1$ we get $\alpha>1$. By the definition of $f$ we also see that $0<\beta<\alpha$ implies
\begin{equation*}
\beta+1\leq f(\beta)+1<f(\beta)+1+\beta=f(\beta+1)\leq f(\alpha)=\alpha,
\end{equation*}
so that $\alpha$ is a limit. We can now infer that $\alpha$ is additively principal: Consider $\beta,\gamma<\alpha$ and set $\delta:=\max\{\beta,\gamma\}$. Since $\alpha$ is a limit, we get $\delta+1<\alpha$; using $\beta\leq\delta\leq f(\delta)$ and $\gamma\leq\delta$, we then obtain
\begin{equation*}
\beta+\gamma\leq f(\delta)+1+\delta=f(\delta+1)<f(\alpha)=\alpha.
\end{equation*}
By a straightforward induction on $\gamma$ we get $\beta\cdot\gamma\leq f(\beta+\gamma)$. Since $\alpha$ is additively principal, it follows that $\beta,\gamma<\alpha$ implies
\begin{equation*}
\beta\cdot\gamma\leq f(\beta+\gamma)<f(\alpha)=\alpha.
\end{equation*}
Now assume that $\alpha$ is a multiplicatively (and hence additively) principal limit ordinal. Then $\gamma<\alpha$ implies $1+\gamma^2<\alpha$. In the introduction we have noted that $f(\gamma)$ is bounded by $1+\gamma^2$. Hence we get
\begin{equation*}
f(\alpha)=\textstyle\sup_{\gamma<\alpha}f(\gamma)\leq\textstyle\sup_{\gamma<\alpha}(1+\gamma^2)\leq\alpha.
\end{equation*}
The inequality $\alpha\leq f(\alpha)$ is automatic, since $f$ is strictly increasing. \end{proof}
The derivative of $f$ can now be described as follows:
\begin{corollary}
We have $f'(\alpha)=\omega^{\omega^\alpha}$ for any ordinal $\alpha$. \end{corollary} \begin{proof}
It is known that an infinite ordinal is multiplicatively principal if and only if it is of the form $\omega^{\omega^\alpha}$ (see e.\,g.~\cite[Exercise~3.3.15]{pohlers-proof-theory}). Hence the previous lemma implies that $\alpha\mapsto\omega^{\omega^\alpha}$ is the increasing enumeration of the fixed points of $f$. The claim follows by the definition of the derivative. \end{proof}
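For orientation we record a small worked computation: on finite arguments one gets $f(n)=1+n+\binom{n}{2}$, so that $f(\omega)=\sup_{n<\omega}f(n)=\omega$ and $f(\omega+1)=f(\omega)+1+\omega=\omega\cdot 2$. Hence $\omega$ is the smallest fixed point of~$f$, in accordance with $f'(0)=\omega^{\omega^0}=\omega$.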
In the rest of this section we construct a normal function $g$ with $g'(\alpha)=\omega^{1+\alpha}$. Such a function can be defined by \begin{align*}
g(0)&=1,\\
g(\alpha+1)&=(\alpha+1)\cdot 2,\\
g(\lambda)&=\textstyle\sup_{\alpha<\lambda}g(\alpha)\quad\text{for $\lambda$ limit}. \end{align*} By induction on the limit ordinal $\lambda$ we get \begin{equation*}
g(\lambda)\leq\textstyle\sup_{\alpha<\lambda}\alpha\cdot2\leq\lambda\cdot 2. \end{equation*} In particular we have $g(\lambda)<g(\lambda+1)$, which readily implies that $g$ is strictly increasing. We also obtain $g(\alpha)\leq1+\alpha\cdot 2$ for any ordinal $\alpha$, as promised above. To characterize the derivative of $g$ we show the following:
\begin{lemma}
We have $g(\alpha)=\alpha$ if and only if $\alpha$ is an additively principal limit ordinal. \end{lemma} \begin{proof}
First assume that we have $g(\alpha)=\alpha$. In view of $g(0)=1$ we get $\alpha>0$. Since $g(\gamma+1)>\gamma+1$ holds for any successor, we learn that $\alpha$ must be a limit. In order to show that $\alpha$ is additively principal we consider arbitrary ordinals $\beta,\gamma<\alpha$. Setting $\delta:=\max\{\beta,\gamma\}$, we get
\begin{equation*}
\beta+\gamma<(\delta+1)\cdot 2=g(\delta+1)\leq g(\alpha)=\alpha.
\end{equation*}
Conversely, assume that $\alpha$ is an additively principal limit ordinal. Then $\gamma<\alpha$ implies~$\gamma\cdot 2<\alpha$, which yields
\begin{equation*}
g(\alpha)\leq\textstyle\sup_{\gamma<\alpha}\gamma\cdot 2\leq\alpha.
\end{equation*}
Yet again, the inequality $\alpha\leq g(\alpha)$ is automatic. \end{proof}
We can now describe the derivative of $g$:
\begin{corollary}
We have $g'(\alpha)=\omega^{1+\alpha}$ for any ordinal $\alpha$. \end{corollary} \begin{proof}
It is well-known that an ordinal is additively principal if and only if it is of the form $\omega^\alpha$ (consider Cantor normal forms). Excluding $\omega^0=1$, we see that the additively principal limit ordinals are those of the form~$\omega^{1+\alpha}$. Now the claim follows by the previous lemma. \end{proof}
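For instance, we have $g(\omega)=\sup_{n<\omega}(n+1)\cdot 2=\omega$, so that $\omega=\omega^{1+0}$ is the smallest fixed point of~$g$; the next fixed point is $\omega^2=\omega^{1+1}$.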
To conclude, we explain why we have used $f$ rather than $g$ to lower the base theory of~\cite[Theorem~5.9]{freund-rathjen_derivatives}: In order to represent $g$ by a normal dilator we would need uniform notation systems for the values of this function. Elements of $g(\alpha+1)$ can be written as $\beta$ or $(\alpha+1)+\beta$ with $\beta<\alpha+1$, which suggests a relativized ordinal notation system. Canonical representations for elements of $g(\lambda)$ appear less obvious when $\lambda$ is a limit. For example, the ordinal $\omega+2\in g(\omega\cdot 2)$ could be written as $(\omega+1)+1\in g(\omega+1)$, as $(\omega+2)+0\in g(\omega+2)$ or as $\omega+2\in g(\omega+3)$. It would be interesting to know whether $g$ does have a reasonable representation as a normal dilator.
\end{document} |
\begin{document}
\title{Orthogonalization of partly unknown quantum states}
\author{M. Je\v{z}ek} \affiliation{Department of Optics, Palack\'{y} University, 17. listopadu 1192/12, CZ-771 46 Olomouc, Czech Republic}
\author{M. Mi\v{c}uda} \affiliation{Department of Optics, Palack\'{y} University, 17. listopadu 1192/12, CZ-771 46 Olomouc, Czech Republic}
\author{I. Straka} \affiliation{Department of Optics, Palack\'{y} University, 17. listopadu 1192/12, CZ-771 46 Olomouc, Czech Republic}
\author{M. Mikov\'{a}} \affiliation{Department of Optics, Palack\'{y} University, 17. listopadu 1192/12, CZ-771 46 Olomouc, Czech Republic}
\author{M. Du\v{s}ek} \affiliation{Department of Optics, Palack\'{y} University, 17. listopadu 1192/12, CZ-771 46 Olomouc, Czech Republic}
\author{J. Fiur\'{a}\v{s}ek} \affiliation{Department of Optics, Palack\'{y} University, 17. listopadu 1192/12, CZ-771 46 Olomouc, Czech Republic}
\begin{abstract} A quantum analog of the fundamental classical NOT gate is a quantum gate that would transform
any input qubit state onto an orthogonal state. Intriguingly, this universal NOT gate is forbidden by the laws
of quantum physics. This striking phenomenon has far-reaching implications concerning quantum information processing
and encoding information about directions and reference frames into quantum states. It also triggers the question under what conditions
the preparation of quantum states orthogonal to input states becomes possible. Here we report on experimental
demonstration of orthogonalization of partly unknown single- and two-qubit quantum states. A state orthogonal
to an input state is conditionally prepared by quantum filtering, and the only required information about the input state is
a mean value of a single arbitrary operator. We show that perfect orthogonalization of partly unknown two-qubit entangled
states can be performed by applying the quantum filter to one of the qubits only. \end{abstract}
\pacs{03.67.-a, 42.50.Dv, 42.50.Ex}
\maketitle
\section{Introduction}
The laws of quantum physics impose fundamental limits on the processing of information encoded into states of quantum systems. Our ability to extract information from a quantum register, represented by a sequence of spin 1/2 particles, depends on how the individual spins are oriented. Probably the most striking example is the higher efficiency of encoding a spin direction into a pair of orthogonal spins compared to parallel ones \cite{GisinPopescu99}. The different information capacities of orthogonal and parallel quantum states stem from their symmetry properties, which make it impossible to freely convert between these configurations. Explicitly, one cannot construct
a perfect universal NOT gate that would map an arbitrary pure qubit state $|\psi\rangle$ onto an orthogonal
qubit state $|\psi_\perp\rangle$, $\langle \psi_\perp|\psi\rangle=0$. An {\em imperfect} implementation of the universal NOT gate is possible, although it is fundamentally limited by conservation laws \cite{vanEnk05}. The average fidelity of the optimal approximate universal NOT gate reads $2/3$ \cite{Buzek99,Buzek00,DeMartini02} and it cannot be increased even if we allow for probabilistic operations \cite{Fiurasek04}.
However, very recently, it has been shown by Vanner \emph{et al.} that the task of quantum state orthogonalization becomes feasible provided that we possess some a-priori information about the input state \cite{Vanner}. In particular, it suffices to know a mean value $a= \langle \psi |A |\psi\rangle$
of some operator $A$. A state orthogonal to the input state $|\psi\rangle$ can then be conditionally prepared by applying a quantum filter ${A}-a{{I}}$ to the input state, \begin{equation}
|\psi_\perp\rangle \propto \left( {A}- a {{I}}\right)|\psi\rangle, \label{orthogonalization} \end{equation}
where ${{I}}$ denotes the identity operator. It is simple to check that $\langle \psi_\perp|\psi\rangle=0$ as required. It follows from Eq. (\ref{orthogonalization}) that the success probability $p_\perp$ of the orthogonalization procedure can be expressed as
$p_\perp =(\langle A^\dagger A\rangle-|a|^2)/\lambda^2$, where $\lambda=\max_j|\Delta A_j|$ and $\Delta A_j$ denotes the singular values of $\Delta A=A -a {{I}} $.
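The filtering step is easy to simulate. The following minimal numerical sketch (our illustration, assuming only the numpy library; it is not part of the experiment described below) verifies the orthogonality and the success-probability formula for a randomly chosen qubit state:
\begin{verbatim}
# Orthogonalization filter: (A - a*I)|psi> is orthogonal to |psi>
# whenever a = <psi|A|psi>.
import numpy as np

rng = np.random.default_rng(1)
d = 2
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

A = np.diag([1.0, -1.0])             # here A = sigma_Z; any operator A works
a = np.vdot(psi, A @ psi)            # mean value <A>

filtered = (A - a * np.eye(d)) @ psi
print(abs(np.vdot(psi, filtered)))   # ~0: the filtered state is orthogonal

# success probability p = (<A^dag A> - |a|^2) / lambda^2, with lambda the
# largest singular value of A - a*I (the filter normalized to unit norm)
lam = np.linalg.norm(A - a * np.eye(d), ord=2)
p_perp = (np.vdot(psi, A.conj().T @ A @ psi).real - abs(a)**2) / lam**2
print(p_perp, np.linalg.norm(filtered / lam)**2)   # the two values agree
\end{verbatim}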
Interestingly, the task of preparing a state orthogonal to a completely unknown input pure state $|\psi\rangle$ becomes easier
with increasing Hilbert space dimension $d$. If the input states $|\psi\rangle$ are randomly chosen according to a uniform a-priori distribution induced by the Haar measure on $\mathrm{SU}(d)$, then the minimum achievable
average overlap between input states $|\psi\rangle$ and output orthogonalized states reads \begin{equation} F_{\perp}(d)=\frac{1}{d+1}. \label{Fperp} \end{equation} For comparison, a hypothetical perfect orthogonalization device would achieve $F_\perp(d)=0$, where by perfect orthogonalization we mean preparation of a state that is perfectly orthogonal to a given unknown quantum state. Formula (\ref{Fperp}) follows from the relation between average state fidelity $F$ and quantum process fidelity $F_{\chi}$ valid for an arbitrary deterministic quantum operation \cite{Horodecki99}, \begin{equation} F=\frac{dF_{\chi}+1}{d+1}. \end{equation}
Since $F_\chi\geq 0$ by definition, $F$ is minimized when $F_\chi=0$ and we obtain Eq. (\ref{Fperp}). For unitary operations $U$, $F_\chi=|\mathrm{Tr}[U]|^2/d^2$, hence the minimum average overlap (\ref{Fperp}) can be achieved by any unitary operation that satisfies $\mathrm{Tr}[U]=0$. Another option is to employ the universal quantum inverter \cite{Rungta00} that represents an extension of the
approximate universal-NOT operation \cite{Buzek99} to qudits, $\mathcal{G}_{\mathrm{NOT}}(\rho)=\left(d{{I}} -\rho \right)/(d^2-1)$. As discussed in Ref. \cite{Rungta00}, $\mathcal{G}_{\mathrm{NOT}}$ can be implemented by a measure-and-prepare strategy. An isotropic measurement with POVM elements
$|\varphi\rangle \langle \varphi| d\varphi$ is performed on the input state, and after obtaining a measurement outcome $|\varphi\rangle$
an output state $({{I}} - |\varphi\rangle\langle \varphi|)/(d-1)$ is prepared.
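The value in Eq.~(\ref{Fperp}) can also be checked by a simple Monte-Carlo simulation. The sketch below (our illustration, again assuming numpy) samples Haar-random pure states and applies a fixed traceless unitary:
\begin{verbatim}
# For Haar-random pure states and a traceless unitary U, the average overlap
# |<psi|U|psi>|^2 approaches 1/(d+1).
import numpy as np

rng = np.random.default_rng(0)
d = 4
U = np.diag(np.exp(2j * np.pi * np.arange(d) / d))    # traceless diagonal unitary

overlaps = []
for _ in range(20000):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)                        # Haar-distributed pure state
    overlaps.append(abs(np.vdot(psi, U @ psi)) ** 2)

print(np.mean(overlaps), 1 / (d + 1))                 # ~0.2 in both cases
\end{verbatim}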
The above discussion suggests that the orthogonalization is most difficult for qubits while it becomes feasible for continuous variable states in infinite dimensional Hilbert space.
This can be intuitively understood by realizing that a continuous-variable state $|\psi\rangle$
can be coherently displaced by an arbitrary amount $\alpha$. The overlap between the displaced state $D(\alpha)|\psi\rangle$
and the input state $|\psi\rangle$ can be made arbitrarily small by choosing a sufficiently large displacement $\alpha$.
Here we focus on qubit systems and report on experimental perfect conditional orthogonalization of partly unknown pure single-qubit and two-qubit states encoded in polarization states of photons generated by spontaneous parametric downconversion. The rest of the paper is organized as follows. In Section II we describe the orthogonalization procedure in detail and in Section III we present our experimental setup. Experimental results for orthogonalization of partly unknown single-qubit states are discussed in Section IV. In Section V we consider orthogonalization of partly unknown two-qubit entangled states and we demonstrate that such states can be orthogonalized by a local operation where the quantum filter is applied to one of the qubits only. Finally, Section VI contains brief conclusions.
\begin{figure}
\caption{(Color online) Orthogonalization of partly unknown single-qubit states. (a) Attenuation of amplitude of state $|0\rangle$.
(b) Unitary $\pi$ phase shift. For details, see text.}
\end{figure}
\section{Orthogonalization protocol}
In our study, the operator $A$ is chosen to be equal to the Pauli operator \begin{equation}
\sigma_Z=|0\rangle\langle 0|-|1\rangle\langle 1|. \end{equation}
We parametrize the pure qubit states by spherical angles $\theta$ and $\phi$ on the Poincar\'{e} sphere, $|\psi\rangle=\cos \frac{\theta}{2}|0\rangle + e^{i\phi}\sin\frac{\theta}{2}|1\rangle$
and $|\psi_\perp\rangle=\sin \frac{\theta}{2}|0\rangle-e^{i\phi}\cos\frac{\theta}{2}|1\rangle$. Since $\langle\sigma_Z\rangle=\cos\theta$, the knowledge of $\langle \sigma_Z\rangle$ specifies the latitude on the Poincar\'{e} sphere, see Fig. 1. However, the state is still partly unknown, because $\phi$ can be arbitrary. Without loss of generality, we can assume that $\theta \leq \frac{\pi}{2}$, hence
$\langle \sigma_Z\rangle \geq 0$ and the input state is located on the northern hemisphere of the Poincar\'{e} sphere.
The quantum filter $Z \propto \sigma_Z- {{I}} \cos\theta $ producing a state orthogonal to $|\psi\rangle$ then reads, \begin{equation}
Z= \tan^2\frac{\theta}{2}\,|0\rangle \langle 0| -| 1\rangle \langle 1|. \label{Zdefinition} \end{equation} This operator is normalized such that the maximum of the absolute values of its eigenvalues is equal to $1$.
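Indeed, a direct computation gives
\[
Z|\psi\rangle=\frac{\sin^2\frac{\theta}{2}}{\cos\frac{\theta}{2}}\,|0\rangle-e^{i\phi}\sin\frac{\theta}{2}\,|1\rangle
=\tan\frac{\theta}{2}\left(\sin\frac{\theta}{2}\,|0\rangle-e^{i\phi}\cos\frac{\theta}{2}\,|1\rangle\right)
=\tan\frac{\theta}{2}\,|\psi_\perp\rangle,
\]
so the filtered state is proportional to $|\psi_\perp\rangle$, and its squared norm $\tan^2\frac{\theta}{2}$ is the success probability quoted below.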
As illustrated in Fig. 1, the orthogonalization procedure can be divided into two steps.
In the first step, the amplitude of the state $|0\rangle$ is attenuated by a factor of $\tan^2(\theta/2)$. A circle on the Poincar\'{e} sphere, which is specified by $\theta$, is transformed onto a similar circle symmetrically positioned with respect to the equator of the Poincar\'{e} sphere, $\theta'=\pi-\theta$. In the second step, a unitary $\pi$ phase shift rotates this circle by $180^\circ$, $\phi'=\phi+\pi$, which maps all the states on the original circle onto orthogonal states. The orthogonalization procedure succeeds with a probability $p_\perp=\tan^2(\theta/2)$. $p_\perp$ is maximal for states on the equator of Poincar\'{e} sphere ($\theta=\pi/2$), whose orthogonalization can be performed by a deterministic unitary $\pi$ phase shift \cite{Buzek99,Buzek00}. On the other hand, when we approach the limit $\theta=0$, then the amplitudes of the input states become highly unbalanced and heavy filtering is required, which results in a small success probability.
The orthogonalization procedure can be straightforwardly generalized to multipartite systems. Consider a bipartite pure state $|\Psi\rangle_{\mathrm{AB}}$. Suppose that we know a mean value of an operator $A$ acting on subsystem A,
$a =\langle \Psi| A_\mathrm{A}\otimes {{I}}_\mathrm{B}|\Psi\rangle$. Then it holds that the state \begin{equation}
|\Psi_\perp\rangle_{\mathrm{AB}} \propto \left(A -a {{I}}\right)_\mathrm{A} \otimes {{I}}_{\mathrm{B}} |\Psi\rangle_{\mathrm{AB}} \end{equation}
is orthogonal to the state $|\Psi\rangle_{\mathrm{AB}}$. The orthogonalization can thus be performed by a local filtering operation on subsystem A alone.
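A corresponding numerical sketch (again our illustration with numpy) confirms that this purely local filter yields a state orthogonal to the full bipartite input:
\begin{verbatim}
# Local orthogonalization of a two-qubit state: the filter acts on qubit A only.
import numpy as np

rng = np.random.default_rng(2)
Psi = rng.normal(size=4) + 1j * rng.normal(size=4)
Psi /= np.linalg.norm(Psi)                 # arbitrary (generally entangled) state

sZ = np.diag([1.0, -1.0])
I2 = np.eye(2)
a = np.vdot(Psi, np.kron(sZ, I2) @ Psi)    # mean value <sigma_Z x I>

Psi_perp = np.kron(sZ - a * I2, I2) @ Psi  # filter on qubit A, identity on B
print(abs(np.vdot(Psi, Psi_perp)))         # ~0
\end{verbatim}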
\section{Experimental setup}
Our experimental setup is depicted in Fig. 2(a). Time correlated orthogonally polarized photon pairs with central wavelength of 810 nm are generated in the process of type-II collinear spontaneous parametric downconversion in a 2~mm thick BBO crystal pumped by a CW laser diode with 75 mW of power and central wavelength of 405 nm \cite{Jezek11}. The orthogonally polarized signal and idler photons are spatially separated on a polarizing beam splitter and coupled into single mode fibers. Detection of idler photon heralds the presence of signal photon. The signal photon is released into free space and a desired input polarization state $|\psi\rangle$ is prepared with the help of a sequence of quarter- and half-wave plates.
The filtering operation (\ref{Zdefinition}) was realized by a tunable polarization-dependent attenuator which consists of a pair of calcite beam displacers and half-wave plates \cite{Kwiat04,Kwiat05,Lemr11,Micuda12}. The two beam displacers form an inherently stable Mach-Zehnder interferometer \cite{OBrien03}. The first beam displacer introduces transversal spatial offset between vertically (V) and horizontally (H) polarized beams, and the half-wave plate HWP1 set at $45^\circ$ transforms the vertical polarization onto horizontal and vice versa.
The polarization qubit $|\psi\rangle$ is thus converted into a spatial qubit such that the states $|0\rangle$ and $|1\rangle$ correspond to the photon propagating in the upper and lower interferometer arms, respectively. The amplitude of the photon propagating in the upper arm is selectively attenuated by rotating the half-wave plate HWP2. Rotation of HWP2 by angle $\vartheta$ transforms the initial horizontal polarization onto a linear polarization at angle $2\vartheta$.
The second beam displacer collects only the vertically polarized signal from the upper arm while the horizontally polarized signal is deflected and discarded. The amplitude attenuation factor of this device is thus given by $\cos(2\vartheta)$.
The output polarization state behind the second beam displacer was analyzed with the help of a detection block that
consists of a HWP and QWP followed by polarizing beam splitter and two single-photon detectors monitoring both output ports of the PBS.
The unitary $\pi$ phase shift which is a part of the orthogonalization operation was in our implementation incorporated
into the setting of waveplates that are part of the detection block.
\begin{figure}
\caption{(Color online) Experimental setup for orthogonalization of single-qubit (a) and two-qubit (b) states.
NLC---nonlinear crystal, SMF---single-mode fiber,
BD---calcite beam displacer, PPBS---partially polarizing beam splitter,
PBS---polarizing beam splitter, HWP---half-wave plate, QWP---quarter-wave plate,
D---single-photon detector, DB---detection block consisting of a HWP, QWP, PBS
and two single-photon detectors.}
\end{figure}
\section{Single-qubit states}
We have carried out measurements for 4 different values of $\theta$ and 4 different values of $\phi$, which gives in total $16$ different input single-qubit states $|\psi\rangle$.
For each input state, the HWP2 was first set to $\vartheta=0$ (no attenuation), and the input state was characterized by a
tomographically complete measurement consisting of a sequence of projective measurements in three mutually unbiased bases $H/V$, $D/A$ and $R/L$ \cite{Wootters89,James01, Rehacek04,Altepeter05}. Here D and A denote the diagonally and anti-diagonally linearly polarized states, and
R and L denote the right- and left-handed circularly polarized states. Then we set the attenuation according to the value of $\theta$ used in the state preparation procedure and performed quantum state tomography of the orthogonalized state. Finally, $\langle\sigma_Z\rangle$ was also
estimated from projective measurement on the input state in the $H/V$ basis, the attenuation was set according to this measurement,
and a quantum state tomography of the output orthogonalized state was carried out. The states were reconstructed from the experimental data using the standard
maximum-likelihood estimation algorithm \cite{MaxLik}.
\begin{figure}
\caption{(Color online) Overlap $F$ between input and orthogonalized single-qubit states. The results are shown for the two approaches where $\langle\sigma_Z\rangle$ is determined either from the theoretical
knowledge of the prepared input state (a) or from measurements on the input state in the $H/V$ basis (b).}
\end{figure}
The reconstructed input single-qubit states exhibited very high purity $\mathcal{P}=\mathrm{Tr}(\rho^2)$ exceeding in all cases $0.992$. The orthogonalized states were slightly more mixed but the minimum observed purity was still as high as $0.986$. We employ fidelity \begin{equation} F= \left[\mathrm{Tr}\sqrt{\rho_1^{1/2}\rho_2 \rho_1^{1/2}}\right]^2 \end{equation} to quantify the overlap of two mixed states $\rho_1$ and $\rho_2$. If $F=0$ then the two density matrices $\rho_1$ and $\rho_2$
have orthogonal supports. For single qubits it holds that $F=0$ if and only if both states are pure and orthogonal, $\rho_1=|\psi\rangle\langle\psi|$ and
$\rho_2=|\psi_\perp\rangle\langle\psi_\perp|$. The overlaps between input and orthogonalized states are plotted in Fig. 3. We can see that the overlap is in all cases smaller than $0.0254$ which indicates good performance of the orthogonalization procedure. Note that the overlap is higher for smaller $\theta$. A likely explanation of this feature is that small $\theta$ requires heavy filtering, as discussed above. In this case, any imperfection in setting of the attenuation factor can have a significant impact.
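For reference, the overlap measure used above can be evaluated numerically as follows (a sketch using only numpy; the helper names are ours):
\begin{verbatim}
# Uhlmann fidelity F = ( Tr sqrt( sqrt(rho1) rho2 sqrt(rho1) ) )^2,
# evaluated via eigendecompositions of positive semidefinite matrices.
import numpy as np

def psd_sqrt(rho):
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0.0, None)        # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def fidelity(rho1, rho2):
    s = psd_sqrt(rho1)
    return np.trace(psd_sqrt(s @ rho2 @ s)).real ** 2

rho_0 = np.diag([1.0, 0.0])                # |0><0|
rho_1 = np.diag([0.0, 1.0])                # |1><1|
print(fidelity(rho_0, rho_1), fidelity(rho_0, rho_0))   # 0.0 and 1.0
\end{verbatim}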
\begin{figure}
\caption{(Color online) Success probability of orthogonalization is plotted as a function of $\theta$. The solid line represents theoretical dependence, and symbols indicate experimental results for single-qubit (blue circles)
and two-qubit (red triangles) states. Results are shown for both methods of determination of $\langle \sigma_Z\rangle$, as in Fig.~3.}
\end{figure}
For comparison, we plot in Fig. 4 the minimum average overlap between input and output single-qubit states that is achievable by deterministic quantum operations when the input states
$|\psi\rangle$ have known fixed $\theta$. As shown in the Appendix, this minimum overlap is given by \begin{equation} F_{\mathrm{min}}=\left \{ \begin{array}{lcl} \displaystyle \frac{1}{4}\sin^2\theta-\frac{\sin^6\frac{\theta}{2}}{\cos\theta}, & \quad & 0\leq \theta \leq \theta_{T}, \\[3mm] \cos^2\theta, & & \theta_T < \theta \leq \frac{\pi}{2}, \end{array} \right. \label{Fmin} \end{equation} where $\theta_T=2\arcsin(1/\sqrt{3})$. We can see that $F_{\mathrm{\min}}$ vanishes only if the set of input states shrinks into a single state representing a pole of the Poincar\'{e} sphere, or if the states lie on the equator of the Poincar\'{e} sphere ($\theta=90^\circ$).
All experimentally determined overlaps plotted in Fig.~3, except those for $\theta=88^\circ$, lie well below $F_{\mathrm{min}}$. This confirms that the probabilistic orthogonalization outperforms the best deterministic strategy.
The success probability of conditional orthogonalization was determined as a ratio of the total number of measured coincidences for the orthogonalized and the input states, respectively, recorded over the time interval of $600$~s. The results are plotted in Fig. 5 and they agree well with the theoretical prediction. A higher vertical spread of data points corresponding to states with the same $\theta$ in Fig. 5(b) occurs because in this case the attenuation was determined from measurements on input states. Therefore, the exact attenuation factors slightly varied among the states with identical
$\theta$ but different $\phi$.
\section{Two-qubit entangled states}
We have also experimentally tested orthogonalization of partly unknown two-qubit entangled states by local single-qubit quantum filtration. In our experiment, the two-photon entangled states were generated from input product states with the help of a linear optical quantum controlled-Z gate \cite{Ralph02,Hofmann02,Langford05,Kiesel05,Okamoto05}.
As shown in Fig. 2(b), the two photons interfere on a partially polarizing beam splitter PPBS with transmittances $T_V=1/3$ and $T_H=1$ for vertical and horizontal polarizations, respectively. This interference gives rise to a $\pi$ phase shift only if both qubits are in logical state $|1\rangle$. The gate also includes two additional partially polarizing beam splitters which balance the amplitudes and ensure unitarity of the gate. The gate operates in the coincidence basis which means that we have to post-select the events where a single photon is detected in each output port of the gate \cite{Ralph02,Hofmann02,Langford05,Kiesel05,Okamoto05}. For technical reasons, the PPBS was placed inside the interferometer formed by the two beam displacers, see Fig. 2(b). The idler photon thus interferes with the signal photon only if the latter propagates through the lower interferometer arm. The setup is designed so that the signal photon propagating in the lower interferometer arm is vertically polarized, which ensures correct operation of the quantum CZ gate in this configuration.
\begin{table*}[!t!] \caption{Overlap $F$ between the input (I) and orthogonalized (O) two-qubit states and purity $\mathcal{P}$ and entanglement of formation $E_f$ of the input and orthogonalized states. The data are presented for orthogonalization using the knowledge of $\langle \sigma_{Z1}\rangle$ from state preparation ($F$, $\mathcal{P}_O$, $E_{f,O}$) and for orthogonalization where $\langle \sigma_{Z1}\rangle$ is determined from measurements on the first qubit ($F'$, $\mathcal{P}_O'$, $E_{f,O}'$).} \begin{ruledtabular} \begin{tabular}{cccccccccccc} $\theta_1$ & $\phi_1$ & $\theta_2$ & $\phi_2$ & $F$ & $F'$ & $\mathcal{P}_I$ & $\mathcal{P}_O$ & $\mathcal{P}_O'$ & $E_{f,I}$ & $E_{f,O}$ & $E_{f,O}'$ \\ $45^\circ$ & $0^\circ$ & $90^\circ$ & $0^\circ$ & 0.040 & 0.044 & 0.964 & 0.890 & 0.909 & 0.547 & 0.612 & 0.622 \\ $67.5^\circ$ & $0^\circ$ & $90^\circ$ & $0^\circ$ & 0.031 & 0.037 & 0.961 & 0.891 & 0.907 & 0.819 & 0.807 & 0.781\\ $45^\circ$ & $0^\circ$ & $45^\circ$ & $0^\circ$ & 0.021 & 0.029 & 0.936 & 0.944 & 0.942 & 0.286 & 0.334 & 0.361 \\ $67.5^\circ$ & $0^\circ$ & $45^\circ$ & $0^\circ$ & 0.008 & 0.008 & 0.975 & 0.952 & 0.941 & 0.523 & 0.482 & 0.496 \\ $67.5^\circ$ & $90^\circ$ & $45^\circ$ & $90^\circ$ & 0.041 & 0.035 & 0.971 & 0.946 & 0.935 & 0.497 & 0.518 & 0.468
\end{tabular} \end{ruledtabular} \end{table*}
Let $\theta_1$, $\phi_1$ and $\theta_2$, $\phi_2$ denote the parameters of the input single-qubit states of signal ($|\psi_1\rangle$) and idler ($|\psi_2\rangle$) photon, respectively.
The quantum CZ gate is diagonal in the computational basis, $U_{CZ}|jk\rangle=(-1)^{jk}|jk\rangle$, and for input product state $|\psi_1\rangle|\psi_2\rangle$ we obtain \begin{equation}
|\Psi\rangle=U_{CZ}|\psi_1\rangle|\psi_2\rangle=\cos \frac{\theta_1}{2}|0\rangle|\psi^{+}\rangle+ e^{i\phi_1}\sin \frac{\theta_1}{2}|1\rangle|\psi^{-}\rangle, \end{equation}
where $|\psi^{\pm}\rangle= \cos \frac{\theta_2}{2}|0\rangle \pm e^{i\phi_2}\sin \frac{\theta_2}{2}|1\rangle$. Since $\langle\sigma_{Z1}\rangle=\cos\theta_1$ the amount of filtering required for orthogonalization depends only on $\theta_1$.
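The following short sketch (our illustration with numpy; the test angles are arbitrary) reproduces this state and the relation $\langle\sigma_{Z1}\rangle=\cos\theta_1$:
\begin{verbatim}
# CZ-gate output state and the mean value that fixes the local filtering.
import numpy as np

theta1, phi1, theta2, phi2 = 0.7, 0.3, 1.1, 2.0     # arbitrary test angles

def qubit(theta, phi):
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

U_CZ = np.diag([1, 1, 1, -1])
Psi = U_CZ @ np.kron(qubit(theta1, phi1), qubit(theta2, phi2))

sZ1 = np.kron(np.diag([1.0, -1.0]), np.eye(2))
print(np.vdot(Psi, sZ1 @ Psi).real, np.cos(theta1))  # both equal cos(theta1)
\end{verbatim}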
After preparation of the input entangled state $|\Psi\rangle$, filtering operation (\ref{Zdefinition}) can be applied to the first qubit by rotating HWP2. Polarization states of both photons are then measured with the help of two detection blocks DB identical to that shown in Fig. 2(a) and coincidences between clicks of detectors in the two blocks are counted. We have performed full tomographic reconstruction of the input entangled states
$|\Psi\rangle$ as well as of the orthogonalized states. Like for single-qubit states, we have first used the theoretical value of $\langle \sigma_{Z1}\rangle$ known from state preparation and then we have also used the value determined from measurements on the first qubit in the $H/V$ basis.
The experimental results are summarized in Table I.
A successful orthogonalization is indicated by low overlaps between input and orthogonalized states. The purities of the orthogonalized states
are generally lower than the purity of the input state. This occurs because the filtration effectively enhances the
terms sensitive to the visibility of two-photon interference on the central PPBS. In our experiment, we have measured visibility $\mathcal{V}=0.94$. The success probability of orthogonalization
is plotted in Fig. 5 (red triangles) and the results agree well with the theory.
We have also determined entanglement of formation of the input and orthogonalized states \cite{Wootters98}. The values are listed in Table I
and they confirm that the states are highly entangled. Due to various experimental imperfections, the observed $E_{f,I}$ is slightly lower than the theoretically predicted entanglement
of pure two-qubit state $|\Psi\rangle$ which can be expressed as $S_E=-x\log_2 x -(1-x)\log_2(1-x)$, where $x=\frac{1}{2}\left(1+\sqrt{1-\sin^2\theta_1\sin^2\theta_2}\right)$. This latter formula also indicates that in the present case the orthogonalization should preserve the amount of entanglement, because $S_E$ is invariant with respect to the transformation $\theta_1\rightarrow \pi-\theta_1$. The differences between the measured $E_{f,I}$ and $E_{f,O}$ are indeed rather small and can be attributed to the fact that
the experimentally generated input states are not entirely pure and slightly differ from the theoretical states $|\Psi\rangle$.
\section{Conclusions}
In summary, we have experimentally demonstrated orthogonalization of partly unknown single-qubit and two-qubit states by quantum filtering. Our experimental data clearly show that if we possess some partial prior information about the state that should be orthogonalized, then conditional orthogonalization significantly outperforms the best deterministic procedure. Remarkably, bipartite entangled states can be orthogonalized by a local strategy where the quantum filter is applied just to one of the qubits and no information about the state of the other qubit is necessary. The conditional orthogonalization represents an intriguing addition to the toolbox of probabilistic protocols such as unambiguous quantum state discrimination \cite{IDP87a,IDP87b,IDP87c}, probabilistic quantum cloning \cite{Duan98,Muller12}, and quantum metrology assisted with abstention \cite{Gendra13}. We anticipate applications of the orthogonalization procedure in quantum information processing and quantum state engineering.
\acknowledgments This work was supported by the Czech Science Foundation (Project No. 13-20319S) and by Palack\'{y} University (Project No. PrF-2013-008).
\appendix* \section{Deterministic orthogonalization of single-qubit states with prior information}
Let us consider single-qubit input states \begin{equation}
|\psi\rangle=\cos\frac{\theta}{2}|0\rangle+e^{i\phi}\sin\frac{\theta}{2}|1\rangle \label{psiin} \end{equation}
with known $\langle \sigma_Z\rangle$, i.e. with known fixed $\theta$. Here we derive the minimum average overlap between input states (\ref{psiin}) and output states $\mathcal{E}(|\psi\rangle\langle \psi|)$, which is achievable by deterministic quantum operations $\mathcal{E}$,
i.e. by trace-preserving completely positive maps. According to the Choi-Jamiolkowski isomorphism \cite{Jamiolkowski72,Choi75},
any trace preserving completely positive map is isomorphic to a positive semidefinite operator $\chi$ on the tensor product
of the Hilbert spaces of the input and output states. Given an input state $\rho_{\mathrm{in}}$
the corresponding output state can be calculated as
$\rho_{\mathrm{out}}=\mathrm{Tr}_{\mathrm{in}}[\rho_{\mathrm{in}}^T\otimes{{I}}_{\mathrm{out}} \,\chi]$, where $T$ stands for a transposition
in a fixed basis. The trace preservation condition can be expressed as
\begin{equation}
\mathrm{Tr}_{\mathrm{out}}[\chi]={{I}}_{\mathrm{in}}.
\label{chitrace}
\end{equation}
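The following short Python sketch illustrates this formalism in the single-qubit case: it applies a map given by its operator $\chi$ via the formula above and checks the trace-preservation condition (\ref{chitrace}); the identity channel is used only as a simple test case.
\begin{verbatim}
import numpy as np

d = 2  # single-qubit channels

def apply_channel(chi, rho_in):
    # rho_out = Tr_in[(rho_in^T (x) I_out) chi];  chi acts on H_in (x) H_out
    chi4 = chi.reshape(d, d, d, d)      # indices (j_in, k_out, i_in, l_out)
    return np.einsum('ji,jkil->kl', rho_in, chi4)

def trace_out(chi):
    # partial trace over the output factor; must equal I_in for a TP map
    return np.einsum('ikjk->ij', chi.reshape(d, d, d, d))

# sanity check with the identity channel: chi = |Phi><Phi|, |Phi> = |00> + |11>
phi = np.array([1.0, 0.0, 0.0, 1.0])
chi_id = np.outer(phi, phi)
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
print(np.allclose(trace_out(chi_id), np.eye(d)))       # True
print(np.allclose(apply_channel(chi_id, rho), rho))    # True
\end{verbatim}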
Assuming homogeneous prior distribution of angle $\phi$, the average overlap between input and output states achieved by quantum operation $\chi$ can be expressed as
\begin{equation}
F_\theta=\frac{1}{2\pi}\int_{0}^{2\pi} \mathrm{Tr}[\psi^T\otimes \psi \, \chi] d\phi,
\label{Fintegral}
\end{equation}
where $\psi=|\psi\rangle\langle \psi|$. This integral can easily be evaluated, and we get \begin{equation} F_\theta=\mathrm{Tr}[R_\theta\chi], \end{equation} where the operator $R_\theta$ is given by \begin{eqnarray}
R_\theta&=& c^4 |00\rangle\langle 00|+c^2s^2(|01\rangle\langle 01|+|10\rangle\langle 10|) +s^4 |11\rangle\langle 11| \nonumber \\
& &+c^2s^2(|00\rangle \langle 11|+|11\rangle \langle 00|). \end{eqnarray} Here we introduced abbreviations \begin{equation} c=\cos \frac{\theta}{2}, \qquad s=\sin\frac{\theta}{2}. \end{equation} The optimal trace-preserving quantum operation that minimizes the average overlap $F_\theta$ can be determined by solving a semidefinite program \cite{Vandenberghe96,Audenaert02}. We shall first present the resulting operation and then prove its optimality. For the sake of simplicity we shall restrict ourselves to the northern hemisphere of the Poincar\'{e} sphere, $\theta \leq \pi/2$. Then, the optimal $\chi$ can be expressed as \begin{equation}
\chi_{\mathrm{opt}}=(a |00\rangle- |11\rangle)(a\langle 00|-\langle 11|)+(1-a^2)|01\rangle\langle 01|, \end{equation} where the parameter $a$ depends on $\theta$ as follows, \begin{equation} a= \left \{ \begin{array}{lcl} \displaystyle \frac{\sin^2\!\frac{\theta}{2}}{\cos\theta}, & \quad & 0\leq \theta \leq \theta_{T}, \\[3mm] 1, & & \theta_T < \theta \leq \frac{\pi}{2}. \end{array} \right. \end{equation} The threshold angle $\theta_T=2\arcsin(1/\sqrt{3})$ is determined by the condition $2s^2=c^2=2/3$. The average overlap achieved by the optimal operation $\chi_{\mathrm{opt}}$ reads \begin{equation} F_{\mathrm{min}}=\left \{ \begin{array}{lcl} \displaystyle \frac{1}{4}\sin^2\theta-\frac{\sin^6\frac{\theta}{2}}{\cos\theta}, & \quad & 0\leq \theta \leq \theta_{T}, \\[3mm] \cos^2\theta, & & \theta_T < \theta \leq \frac{\pi}{2}. \end{array} \right. \end{equation}
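As an informal numerical cross-check of the expressions above, one may build $R_\theta$ and $\chi_{\mathrm{opt}}$ explicitly and verify that $\mathrm{Tr}[R_\theta\chi_{\mathrm{opt}}]$ reproduces $F_{\mathrm{min}}$; a short Python sketch, with test angles chosen arbitrarily in $[0,\pi/2]$, follows.
\begin{verbatim}
import numpy as np

def basis_op(i, j, d=4):
    # |i><j| on a d-dimensional space
    E = np.zeros((d, d)); E[i, j] = 1.0
    return E

def R(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    # basis ordering |00>, |01>, |10>, |11>
    return (c**4 * basis_op(0, 0)
            + c**2 * s**2 * (basis_op(1, 1) + basis_op(2, 2))
            + s**4 * basis_op(3, 3)
            + c**2 * s**2 * (basis_op(0, 3) + basis_op(3, 0)))

theta_T = 2 * np.arcsin(1 / np.sqrt(3))

def chi_opt(theta):
    s = np.sin(theta / 2)
    a = s**2 / np.cos(theta) if theta <= theta_T else 1.0
    v = np.zeros(4); v[0], v[3] = a, -1.0
    return np.outer(v, v) + (1 - a**2) * basis_op(1, 1)

def F_min(theta):
    if theta <= theta_T:
        return 0.25 * np.sin(theta)**2 - np.sin(theta / 2)**6 / np.cos(theta)
    return np.cos(theta)**2

for theta in np.linspace(0.05, np.pi / 2, 7):
    assert np.isclose(np.trace(R(theta) @ chi_opt(theta)), F_min(theta))
print("Tr[R_theta chi_opt] matches F_min on all test angles")
\end{verbatim}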
If $s^2>1/3$, then $a=1$ and $\chi_{\mathrm{opt}}$ represents a unitary $\pi$ rotation about the $z$ axis, $|\psi\rangle \rightarrow \sigma_Z |\psi\rangle$, which perfectly orthogonalizes states lying on the equator of the Poincar\'{e} sphere \cite{Buzek99}. If we get close enough to the north pole of the Poincar\'{e} sphere such that $\theta < \theta_T$, then the optimal operation becomes a sequence of a unitary transformation $\sigma_Z$
and an amplitude damping channel, where the state $|0\rangle$ decays into state $|1\rangle$ with probability $1-a^2$.
To prove the optimality of $\chi_{\mathrm{opt}}$, we first define an operator $\lambda=\mathrm{Tr}_{\mathrm{out}}[R_\theta\chi_{\mathrm{opt}}]$. We get \begin{equation} \lambda= \left\{ \begin{array}{ll}
\displaystyle \frac{1}{4}\sin^2\theta|0\rangle\langle 0|-\frac{\sin^6\frac{\theta}{2}}{\cos\theta}|1\rangle\langle 1|, & 0 \leq \theta \leq \theta_T,\\[3mm]
\displaystyle \cos\theta\left(\cos^2\frac{\theta}{2}|0\rangle\langle 0| -\sin^2\frac{\theta}{2} |1\rangle \langle 1|\right), & \theta_T < \theta \leq \frac{\pi}{2}. \end{array} \right. \end{equation} It holds by definition that $F_{\mathrm{min}}=\mathrm{Tr}[\lambda]$. We now prove that the operator \begin{equation} M= R_\theta-\lambda\otimes {{I}} \end{equation} is positive semidefinite, $M\geq 0$. This implies that $F_{\mathrm{min}}$ is the minimum achievable over all deterministic quantum operations $\chi$. Indeed, since $\chi \geq 0$, we have $\mathrm{Tr}[M\chi] \geq 0$, which yields \begin{equation} \mathrm{Tr}[R_\theta\chi] \geq \mathrm{Tr}[\lambda \otimes {{I}}\, \chi]= \mathrm{Tr}[\lambda]=F_{\mathrm{min}}. \end{equation} Here we used the trace-preservation condition (\ref{chitrace}). If $0 \leq \theta \leq \theta_T$, then the eigenvalues of $M$ read \begin{equation} \begin{array}{lcl} \displaystyle m_1=0, & \quad & \displaystyle m_3=c^2(c^2-s^2)+\frac{c^2s^4}{c^2-s^2}, \\[3mm] \displaystyle m_2=0, & & \displaystyle m_4=c^2s^2+\frac{s^6}{c^2-s^2}. \\[2mm] ~~ \end{array} \end{equation} Since $c^2 >s^2$ for all $0\leq \theta \leq \theta_T$, all eigenvalues $m_j$ are non-negative. If $\theta_T <\theta \leq \pi/2$, then the eigenvalues read \begin{equation} \begin{array}{lcl} \displaystyle m_1=0, & \quad & m_3=c^2(2s^2-c^2), \\[3mm] \displaystyle m_2=2c^2s^2, & & m_4=s^2(2c^2-s^2). \end{array} \label{meigtwo} \end{equation}
In this case $2s^2>c^2\geq s^2$ (see the definition of $\theta_T$ above), which ensures that all eigenvalues (\ref{meigtwo}) are also non-negative. This concludes the proof of the optimality of $\chi_{\mathrm{opt}}$. Due to symmetry, the optimal operation for $\theta> \pi/2$ can be obtained from the optimal operation for $\pi-\theta$ by bit flips on both input and output qubits, $|0\rangle \rightarrow |1\rangle$, $|1\rangle \rightarrow |0\rangle$.
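The positive semidefiniteness of $M$ can also be confirmed numerically; the short Python sketch below builds $R_\theta$ and the piecewise operator $\lambda$ above and checks that the smallest eigenvalue of $M=R_\theta-\lambda\otimes I$ is non-negative on a grid of test angles.
\begin{verbatim}
import numpy as np

theta_T = 2 * np.arcsin(1 / np.sqrt(3))

def R(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    R = np.zeros((4, 4))                  # basis |00>, |01>, |10>, |11>
    R[0, 0], R[1, 1], R[2, 2], R[3, 3] = c**4, c**2 * s**2, c**2 * s**2, s**4
    R[0, 3] = R[3, 0] = c**2 * s**2
    return R

def lam(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    if theta <= theta_T:
        return np.diag([0.25 * np.sin(theta)**2, -s**6 / np.cos(theta)])
    return np.cos(theta) * np.diag([c**2, -s**2])

for theta in np.linspace(0.05, np.pi / 2, 9):
    M = R(theta) - np.kron(lam(theta), np.eye(2))
    assert np.linalg.eigvalsh(M).min() > -1e-12   # M is positive semidefinite
print("M >= 0 on all test angles")
\end{verbatim}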
\end{document}
\begin{document}
\thanks{Research supported by NSF Grants DMS-1404754 and DMS-1708249} \date{\today}
\begin{abstract} Andersen, Masbaum and Ueno conjectured that certain quantum representations of surface mapping class groups should send pseudo-Anosov mapping classes to elements of infinite order (for large enough level $r$).
In this paper, we relate the AMU conjecture to a question about the growth of the Turaev-Viro invariants $TV_r$ of hyperbolic 3-manifolds. We show that if the $r$-growth of $|TV_r(M)|$ for a hyperbolic 3-manifold $M$ that fibers over the circle is exponential, then the monodromy of the fibration of $M$ satisfies the AMU conjecture. Building on earlier work \cite{DK} we give broad constructions of (oriented) hyperbolic fibered links, of arbitrarily high genus, whose $SO(3)$-Turaev-Viro invariants have exponential $r$-growth. As a result, for any $g>n\geqslant 2$, we
obtain infinite families of non-conjugate pseudo-Anosov mapping classes, acting on surfaces of genus $g$ and $n$ boundary components, that satisfy the AMU conjecture.
We also discuss integrality properties of the traces of quantum representations and we answer a question of Chen and Yang about Turaev-Viro invariants of torus links. \end{abstract}
\vskip 0.1in
\title{Quantum representations and monodromies of fibered links}
\section{Introduction} Given a compact oriented surface $\Sigma,$ possibly with boundary, the mapping class group $\mathrm{Mod}(\Sigma)$ is the group of isotopy classes of orientation preserving homeomorphisms of $\Sigma$ that fix the boundary. The Witten-Reshetikhin-Turaev Topological Quantum Field Theories \cite{ReTu, Turaevbook} provide families of finite dimensional projective representations of mapping class groups. For each semi-simple Lie algebra, there is an associated theory and an infinite family of such representations. In this article we are concerned with the
$SO(3)$-theory and we will follow the skein-theoretic framework given by Blanchet, Habegger, Masbaum and Vogel \cite{BHMV2}: For each odd integer $r\geqslant 3,$ let $U_r=\lbrace 0,2,4,\ldots, r-3\rbrace$ be the set of even integers smaller than $r-2.$ Given a primitive $2r$-th root of unity $\zeta_{2r},$ a compact oriented surface $\Sigma,$ and a coloring $c$ of the components of $\partial \Sigma$ by elements of $U_r,$ a finite dimensional ${\mathbb{C}}$-vector space $RT_r(\Sigma,c)$ is constructed in \cite{BHMV2}, as well as a projective representation: $$\rho_{r,c} : \mathrm{Mod}(\Sigma) \rightarrow \mathbb{P}\mathrm{Aut}(RT_r(\Sigma,c)).$$
For
different choices of root of unity, the traces of $\rho_{r,c}$, which are of particular interest to us in this paper,
are related by actions of Galois groups of cyclotomic fields.
Unless otherwise indicated, we will always choose $\zeta_{2r}=e^{\frac{i\pi}{r}},$ which is important for us in order to apply results from \cite{DK, DKY}.
The representation $\rho_{r,c}$ is called the $SO(3)$-quantum representation of $\mathrm{Mod}(\Sigma)$ at level $r.$ Although the representations are known to be asymptotically faithful \cite{FZW, Andersen}, the question of how well these representations reflect the geometry of the mapping class groups remains wide open.
By the Nielsen-Thurston classification, mapping classes $f\in \mathrm{Mod}(\Sigma)$ are divided into three types: periodic, reducible and pseudo-Anosov. Furthermore, the type of $f$ determines the geometric structure, in the sense of Thurston, of the 3-manifold obtained as mapping torus of $f.$ In \cite{AMU} Andersen, Masbaum and Ueno formulated the following conjecture and proved it when $\Sigma$ is the four-holed sphere.
\begin{conjecture}\label{AMU}{\rm{(AMU conjecture \cite{AMU})}}{ Let $\phi \in \mathrm{Mod}(\Sigma)$ be a pseudo-Anosov mapping class. Then for any big enough level $r,$ there is a choice of colors $c$ of the components of $\partial \Sigma,$ such that $\rho_{r,c}(\phi)$ has infinite order.} \end{conjecture}
Note that it is known that the representations $\rho_{r,c}$ send Dehn twists to elements of finite order and criteria for recognizing reducible mapping classes from their images under $\rho_{r,c}$ are given in
\cite{Andersen2}.
The results of \cite{AMU} were extended by Egsgaard and Jorgensen in \cite{EgsJorgr} and by Santharoubane in \cite{San17} to prove Conjecture \ref{AMU} for some mapping classes of spheres with $n\geqslant 5$ holes. In \cite{San12}, Santharoubane proved the conjecture for the one-holed torus. However, until recently there were no known cases of the AMU conjecture for mapping classes of surfaces of genus at least $2.$ In \cite{MarSan}, March\'e and Santharoubane used skein theoretic techniques in $\Sigma \times S^1$ to obtain such examples of mapping classes in arbitrarily high genus. As explained by Koberda and Santharoubane \cite{KS}, by means of Birman exact sequences of mapping class groups, one extracts representations of $\pi_1(\Sigma)$ from the representations $\rho_{r,c}.$ Elements in $\pi_1(\Sigma)$ that correspond to pseudo-Anosov mapping classes via Birman exact sequences are characterized by a result of Kra \cite{Kra}. March\'e and Santharoubane used this approach to obtain their examples of pseudo-Anosov mapping classes satisfying the AMU conjecture by exhibiting elements in $\pi_1(\Sigma)$ satisfying an additional technical condition they called Euler incompressibility. However, they informed us that they suspect their construction yields only finitely many mapping classes in any surface of fixed genus, up to mapping class group action.
The purpose of the present paper is to describe an alternative method for approaching the AMU conjecture and use it to construct mapping classes acting on surfaces of any genus, that satisfy the conjecture. In particular, we produce infinitely many non-conjugate mapping classes acting on surfaces of fixed genus that satisfy the conjecture. Our approach is to relate the conjecture with a question on the growth rate, with respect to $r$, of the $SO(3)$-Turaev-Viro 3-manifold invariants $TV_r$.
For $M$ a compact orientable $3$-manifold, closed or with boundary, the invariants $TV_r(M)$ are real-valued topological invariants of $M,$ that can be computed from state sums over triangulations of $M$ and are closely related to the $SO(3)$-Witten-Reshetikhin-Turaev TQFTs. For a compact 3-manifold $M$ (closed or with boundary) we define:
$$lTV(M)=\underset{r \rightarrow \infty, \ r \ \textrm{odd}}{\liminf} \frac{2\pi}{r}\log |TV_r(M,q)|,$$ where $q=e^{\frac{2i\pi}{r}}$.
Let $f \in \mathrm{Mod}(\Sigma)$ be a mapping class represented by a pseudo-Anosov homeomorphism of $\Sigma$ and let $M_f=\Sigma \times [0,1]/_{(x,1)\sim (f(x),0)}$ be the mapping torus of $f$.
\begin{theorem}\label{amu-ltv}Let $f \in \mathrm{Mod}(\Sigma)$ be a pseudo-Anosov mapping class and let $M_{f}$ be the mapping torus of $f$. If $lTV(M_{f})>0,$ then $f$ satisfies the conclusion of the AMU conjecture. \end{theorem}
The proof of the theorem relies heavily on the properties of TQFT underlying the Witten-Reshetikhin-Turaev $SO(3)$-theory as developed in \cite{BHMV2}.
As a consequence of Theorem \ref{amu-ltv}, whenever a hyperbolic 3-manifold $M$ that fibers over the circle satisfies $lTV(M)>0,$ the monodromy of the fibration represents a mapping class that satisfies the AMU conjecture.
By a theorem of Thurston, a mapping class $f\in \mathrm{Mod}(\Sigma)$ is represented by a pseudo-Anosov homeomorphism of $\Sigma$ if and only if the mapping torus $M_{f}$ is hyperbolic. In \cite{Chen-Yang} Chen and Yang conjectured that for any hyperbolic 3-manifold $M$ with finite volume we should have $lTV(M)={\rm vol} (M)$. Their conjecture implies, in particular, that
the aforementioned technical condition $lTV(M_f)>0$ is true for all pseudo-Anosov mapping classes $f\in \mathrm{Mod}(\Sigma)$. Hence, the Chen-Yang conjecture implies the AMU conjecture.
Our method can also be used to produce new families of mapping classes acting on punctured spheres that satisfy the AMU conjecture (see Remark \ref{spheres}). In this paper we will be concerned with surfaces with boundary and mapping classes that appear as monodromies of fibered links in $S^3.$ In \cite{BDKY}, with Belletti and Yang, we construct families of 3-manifolds in which the monodromies of all hyperbolic fibered links satisfy the AMU conjecture. In this paper we show the following. \begin{theorem} \label{hyperbgeneral} Let $L\subset S^3$ be a link with $lTV(S^3{\smallsetminus} L)>0.$ Then
there are fibered hyperbolic links $L',$ with $L\subset L'$ and $lTV(S^3{\smallsetminus} L')>0,$ and
such that the complement of $L'$ fibers over $S^1$ with fiber a surface of arbitrarily large genus.
In particular, the monodromy of such a fibration
gives a mapping class in $\mathrm{Mod}(\Sigma)$ that satisfies the AMU conjecture. \end{theorem}
In \cite{DK} the authors gave criteria for constructing 3-manifolds, and in particular link complements, whose $SO(3)$-Turaev-Viro invariants satisfy the condition $lTV>0$. Starting from these links, and applying Theorem \ref{hyperbgeneral}, we obtain fibered links whose monodromies give examples of mapping classes that satisfy Conjecture \ref{AMU}. However, the construction yields only finitely many mapping classes in the mapping class groups of fixed surfaces. This is because the links $L'$ obtained by Theorem \ref{hyperbgeneral} are represented by closed homogeneous braids and it is known that there are only finitely many links of fixed genus and number of components represented that way. To obtain infinitely many mapping classes for surfaces of fixed genus and number of boundary components, we need to refine our construction. We do this
by using Stallings twists and appealing to a result of Long and Morton \cite{LongMorton} on compositions of pseudo-Anosov maps with powers of a Dehn twist. The general process is given in Theorem \ref{infinitegen}. As an application we have the following.
\begin{theorem} \label{general} Let $\Sigma$ denote an orientable surface of genus $g$ with $n$ boundary components. Suppose that either $n=2$ and $g\geqslant 3$ or $g\geqslant n \geqslant 3.$ Then there are infinitely many non-conjugate pseudo-Anosov mapping classes in $\mathrm{Mod}(\Sigma)$ that satisfy the AMU conjecture. \end{theorem}
In the last section of the paper we discuss integrality properties of quantum representations for mapping classes of finite order (i.e. periodic mapping classes) and how they reflect on the Turaev-Viro invariants of the corresponding mapping tori. To state our result, we recall that the traces of the representations $\rho_{r,c}$ are known to be algebraic numbers. For periodic mapping classes we have
the following. \begin{theorem}\label{thm:integertrace} Let $f\in \mathrm{Mod}(\Sigma)$ be periodic of order $N.$ For any odd integer $r\geqslant 3$, with $\mathrm{gcd}(r, N)=1$, we have
$|\mathrm{Tr} \rho_{r,c}(f)| \in {\mathbb{Z}},$
for any $U_r$-coloring $c$ of $\partial \Sigma,$ and any primitive
$2r$-th root of unity. \end{theorem}
As a consequence of Theorem \ref{thm:integertrace} we have the following corollary that was conjectured by Chen and Yang \cite[Conjecture 5.1]{Chen-Yang}. \begin{corollary}\label{cor:toruslinks} For integers $p,q$ let $T_{p,q}$ denote the $(p,q)$-torus link. Then, for any odd $r$ coprime with $p$ and $q$, we have $TV_r(S^3{\smallsetminus} T_{p,q})\in {\mathbb{Z}}$. \end{corollary}
The paper is organized as follows: In Section \ref{sec:TV}, we summarize results from the $SO(3)$-Witten-Reshetikhin-Turaev TQFT and their relation to Turaev-Viro invariants that we need in this paper. In Section \ref{sec:lTV>0}, we discuss how to construct families of links whose $SO(3)$-Turaev-Viro invariants have exponential growth (i.e. $lTV>0$) and then we prove Theorem \ref{amu-ltv} that explains how this exponential growth relates to the AMU Conjecture. In Section \ref{sec:homogenize}, we describe a method to get hyperbolic fibered links with any given sublink and we prove Theorem \ref{hyperbgeneral}. In Section \ref{sec:examples}, we explain how to refine the construction of Section \ref{sec:homogenize} to get infinite families of mapping classes on fixed genus surfaces that satisfy the AMU Conjecture (see Theorem \ref{general}). We also provide an explicit construction that leads to Theorem \ref{general}.
Finally in Section \ref{sec:integertraces}, we discuss periodic mapping classes and we prove Theorem \ref{thm:integertrace} and Corollary \ref{cor:toruslinks}. We also state a non-integrality conjecture about Turaev-Viro invariants of hyperbolic mapping tori.
\section{TQFT properties and quantum representations} \label{sec:TV}
In this section, we summarize some properties of the $SO(3)$-Witten-Reshetikhin-Turaev TQFTs, which we introduce in the skein-theoretic framework of \cite{BHMV2}, and briefly discuss their relation to the $SO(3)$-Turaev-Viro invariants.
\subsection{Witten-Reshetikhin-Turaev $SO(3)$-TQFTs} Given an odd $r\geqslant 3,$ let $U_r$ denote the set of even integers less than $r-2.$ A banded link in a manifold $M$ is an embedding of a disjoint union of annuli $S^1\times[0,1]$ in $M,$ and a $U_r$-colored banded link $(L,c)$ is a banded link whose components are colored by elements of $U_r.$ For a closed, oriented 3-manifold $M,$ the Reshetikhin-Turaev invariants $RT_r(M)$ are complex valued topological invariants. They also extend to invariants $RT_r(M,(L,c))$ of manifolds containing colored banded links. These invariants are part of a compatible set of invariants of compact surfaces and compact 3-manifolds, which is called a TQFT. Below we summarize the main properties of the theory that will be useful to us in this paper, referring the reader to \cite{BHMV, BHMV2} for the precise definitions and details.
\begin{theorem}\label{thm:TQFTdef}{\rm{ (\cite[Theorem 1.4]{BHMV2})}} For any odd integer $r \geqslant 3$ and any primitive $2r$-th root of unity $\zeta_{2r},$ there is a TQFT functor $RT_r$ with the following properties: \begin{itemize} \item[(1)] For $\Sigma$ a compact oriented surface, and if $\partial \Sigma\neq \emptyset$ a coloring $c$ of $\partial \Sigma$ by elements of $U_r$, there is a finite dimensional ${\mathbb{C}}$-vector space $RT_r(\Sigma,c),$ with a Hermitian form $\langle , \rangle.$ Moreover for disjoint unions, we have $$RT_r(\Sigma \coprod \Sigma')=RT_r(\Sigma)\otimes RT_r(\Sigma').$$ \item[(2)]For $M$ a closed compact oriented $3$-manifold, containing a $U_r$-colored banded link $(L,c),$ the value $RT_r(M,(L,c),\zeta_{2r}) \in {\mathbb{Q}}[\zeta_{2r}]\subset {\mathbb{C}}$ is the $SO(3)$-Reshetikhin-Turaev invariant at level $r.$ \vskip 0.06in
\item[(3)] For $M$ a compact oriented $3$-manifold with $\partial M=\Sigma$, and $(L,c)$ a $U_r$-colored banded link in $M,$ the invariant $RT_r(M,(L,c))$ is a vector in $RT_r(\Sigma).$
Moreover, for compact oriented 3-manifolds $M_1$, $M_2$ with $\partial M_1=-\partial M_2=\Sigma,$ we have $$RT_r(M_1\underset{\Sigma}{\cup}M_2)=\langle RT_r(M_1),RT_r(M_2)\rangle.$$ Finally, for disjoint unions $M=M_1\coprod M_2,$ we have $$RT_r(M)=RT_r(M_1)\otimes RT_r(M_2).$$
\item[(4) ]For a cobordism $M$ with $\partial M= -\Sigma_0 \cup \Sigma_1,$ there is a map
$$RT_r(M) \in \mathrm{Hom}(RT_r(\Sigma_0),RT_r(\Sigma_1)).$$ \item[(5)]The composition of cobordisms is sent by $RT_r$ to the composition of linear maps, up to a power of $\zeta_{2r}.$ \end{itemize} \end{theorem}
In \cite{BHMV2} the authors construct some explicit orthogonal basis $E_r$ for $RT_r(\Sigma,c)$: Let $\Sigma$ be a compact, oriented surface that is not the 2-torus or the 2-sphere with less than four holes. Let $P$ be a collection of simple closed curves on $\Sigma$ that contains the boundary $\partial \Sigma$ and gives a pants decomposition of $\Sigma.$ The elements of $E_r$ are in one-to-one correspondence with colorings ${\hat{c}}: P \longrightarrow U_r$, such that ${\hat{c}}$ agrees with $c$ on $\partial \Sigma$ and for each pant the colors of the three boundary components satisfy certain admissibility conditions. We will not make use of the general construction. What we need is the following:
\begin{theorem}\label{thm:TQFTbasis} {\rm{(\cite[Theorem 4.11, Corollary 4.10]{BHMV2}) }} \\ \item[(1)] For $\Sigma$ a compact, oriented surface, with genus $g$ and $n$ boundary components, such that $(g,n)\neq (1,0),(0,0),(0,1),(0,2),(0,3),$ we have $$\mathrm{dim}(RT_r(\Sigma,c))\leqslant r^{3g-3+n}.$$
\item[(2)] If $\Sigma=T$ is the 2-torus we actually have an orthonormal basis for $RT_r(T).$ It consists of the elements $e_0,e_2,\ldots,e_{r-3},$ where $$e_i=RT_r(D^2\times S^1, ([0,\frac{1}{2}]\times S^1, i))$$ is the Reshetikhin-Turaev vector of the solid torus with the core viewed as banded link and colored by $i.$ \end{theorem}
\subsection{$SO(3)$-quantum representations of the mapping class groups} For any odd integer $r \geqslant 3,$ any choice of a primitive $2r$-th root of unity $\zeta_{2r}$ and a coloring $c$ of the boundary components of $\Sigma$ by elements of $U_r,$ we have a finite dimensional projective representation, $$\rho_{r,c} : \mathrm{Mod}(\Sigma) \rightarrow \mathbb{P}\mathrm{Aut}(RT_r(\Sigma,c)).$$ If $\Sigma$ is a closed surface and $f \in \mathrm{Mod}(\Sigma),$ we simply have $\rho_r(f)=RT_r(C_{f}),$ where the cobordism $C_{f}$ is the mapping cylinder of $f:$ $$C_{f}=\Sigma\times [0,1]\underset{(1,x)\sim f(x)}{\coprod} \Sigma.$$ The fact that this gives a projective representation of $\mathrm{Mod}(\Sigma)$ is a consequence of points (4) and (5) of Theorem \ref{thm:TQFTdef}.
For $\Sigma$ with non-empty boundary, giving the precise definition of the quantum representations would require us to discuss the functor $RT_r$ for cobordisms containing colored tangles (see \cite{BHMV2}).
Since in this paper we will only be interested in the traces of the quantum representations, we will not recall the definition of the quantum representations in its full generality. We will use the following theorem:
\begin{theorem}\label{thm:tracequantumrep} For $r\geqslant 3$ odd, let $\Sigma$ be a compact oriented surface with $c$ a $U_r$-coloring on the components of $\partial \Sigma.$ Let $\tilde{\Sigma}$ be the surface obtained from $\Sigma$ by capping the components of $\partial \Sigma$ with disks. For $f\in \mathrm{Mod}(\Sigma),$ let $\tilde{f}\in \mathrm{Mod}(\tilde{\Sigma})$ denote the mapping class of the extension of $f$ by the identity on the capping disks. Let $M_{\tilde {f}}=\tilde{\Sigma} \times [0,1]/_{(x,1)\sim ({\tilde {f}}(x),0)}$
be the mapping torus of $\tilde{f}$ and let $L \subset M_{\tilde {f}}$ denote the link whose components consist of the cores of the solid tori in $M_{\tilde {f}}$ over the capping disks. Then, we have $$\mathrm{Tr} (\rho_{r,c}(f))=RT_r(M_{\tilde {f}} ,(L,c)).$$ \end{theorem}
\subsection{SO(3)-Turaev-Viro invariants} In \cite{TuraevViro}, Turaev and Viro introduced invariants of compact oriented $3$-manifolds as state sums on triangulations of 3-manifolds. The triangulations are colored by representations of a semi-simple quantum Lie algebra. In this paper, we are only concerned with the $SO(3)$-theory: Given a compact 3-manifold $M$, an odd integer $r\geqslant 3,$ and a primitive $2r$-th root of unity, there is an ${\mathbb{R}}$-valued invariant $TV_r.$ We refer to \cite{DKY} for the precise flavor of Turaev-Viro invariants we are using here, and to \cite{TuraevViro} for the original definitions and proofs of invariance. We will make use of the following theorem, which relates the Turaev-Viro invariants $TV_r(M)$ of a $3$-manifold $M$ with the Witten-Reshetikhin-Turaev TQFT $RT_r.$ For closed 3-manifolds it was proved by Roberts \cite{Roberts}, and was extended to manifolds with boundary by Benedetti and Petronio \cite{BePe}. In fact, as Benedetti and Petronio formulated their theorem in the case of $\mathrm{SU}_2$-TQFT, the adaptation of the proof in the setting of $SO(3)$-TQFT we use here can be found in \cite{DKY}. \begin{theorem}{\rm{ (\cite[Theorem 3.2]{BePe})}}\label{thm:BePe} For $M$ an oriented compact $3$-manifold with empty or toroidal boundary and $r\geqslant 3$ an odd integer, we have:
$$TV_r(M,q=e^{\frac{2i\pi}{r}})=||RT_r(M,\zeta=e^{\frac{i\pi}{r}})||^2.$$ \end{theorem}
\section{Growth of Turaev-Viro invariants and the AMU conjecture}
In this section, first we explain how the growth of the $SO(3)$-Turaev-Viro invariants is related to the AMU conjecture. Then we give examples of link complements $M$ for which the $SO(3)$-Turaev-Viro invariants have exponential growth with respect to $r$; that is, we have $lTV(M)>0.$
\subsection{Exponential growth implies the AMU conjecture} Let $\Sigma$ denote a compact orientable surface with or without boundary and, as before, let $\mathrm{Mod}(\Sigma)$ denote the mapping class group of $\Sigma$ fixing the boundary.
\begin{named}{Theorem \ref{amu-ltv}} Let $f \in \mathrm{Mod}(\Sigma)$ be a pseudo-Anosov mapping class and let $M_{f}$ be the mapping torus of $f$. If $lTV(M_{f})>0,$ then $f$ satisfies the conclusion of the AMU conjecture. \end{named} \label{sec:amu-ltv} The proof of Theorem \ref{amu-ltv} relies on the following elementary lemma:
\begin{lemma}\label{lem:order} If $A \in \mathrm{GL}_n (\mathbb{C})$ is such that $|\mathrm{Tr}(A)|>n,$ then $A$ has infinite order. \end{lemma} \begin{proof} Up to conjugation we can assume that $A$ is upper triangular. If the sum of the $n$ diagonal entries has modulus bigger than $n,$ one of these entries (the eigenvalues of $A$) must have modulus bigger than $1$. This implies that $A$ has infinite order. \end{proof}
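The following toy Python computation illustrates the lemma in the $2\times 2$ case; the matrix is an arbitrary example, not one arising from a quantum representation.
\begin{verbatim}
import numpy as np

A = np.array([[1.5, 1.0], [0.0, 1.0]])        # |Tr(A)| = 2.5 > 2
print(np.abs(np.linalg.eigvals(A)).max())     # 1.5 > 1: an eigenvalue of modulus > 1
print(np.linalg.matrix_power(A, 40)[0, 0])    # grows like 1.5**40, so A has infinite order
\end{verbatim}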
\vskip 0.06in
\begin{proof}[Proof of Theorem \ref{amu-ltv}] Suppose that for the mapping torus $M_{f}$ of some $f \in \mathrm{Mod}(\Sigma)$, we have $lTV(M_{f})>0.$ We will prove Theorem \ref{amu-ltv} by relating $TV_r(M_{f})$ to traces of the quantum representations of $\mathrm{Mod}(\Sigma)$. By Theorem \ref{thm:BePe}, we have $$TV_r(M_{f})=||RT_r(M_{f})||^2= \langle RT_r(M_{f}), \ RT_r(M_{f})\rangle,$$ where, with the notation of Theorem \ref{thm:TQFTdef}, $ \langle , \rangle$ is the Hermitian form on $RT_r(\Sigma,c).$
Suppose that $\Sigma$ has genus $g$ and $n$ boundary components. Now $\partial M_f$ is a disjoint union of $n$ tori. Note that by Theorem \ref{thm:TQFTbasis}-(2) and Theorem \ref{thm:TQFTdef}-(1), $RT_r(\partial M_f)$ admits an orthonormal basis given by vectors $${\bf e}_{c}=e_{c_1}\otimes e_{c_2} \otimes \cdots \otimes e_{c_n},$$ where $c=(c_1,c_2,\ldots, c_n)$ runs over all $n$-tuples of colors in $U_r,$ one for each boundary component. By Theorem \ref{thm:TQFTdef}-(3) and Theorem \ref{thm:TQFTbasis}-(2), this vector is also the $RT_r$-vector of the cobordism consisting of $n$ solid tori, with the $i$-th solid torus containing the core colored by $c_i.$
We can write $ RT_r(M_{f})=\underset{c}{\sum} \lambda_c {\bf e}_{c}$ where $\lambda_c=\langle RT_r(M_f), {\bf e}_{c}\rangle$. Thus we have
$$TV_r(M_{f})=\underset{c}{\sum}|\lambda_c|^2=\underset{c}{\sum} |\langle RT_r(M_f), {\bf e}_{c}\rangle|^2,$$ where ${\bf e}_c$ is the above orthonormal basis of $RT_r(\partial M_{f})$ and the sum runs over $n$-tuples of colors in $U_r.$
By Theorem \ref{thm:TQFTdef}-(3), the pairing $\langle RT_r(M_{f}),{\bf e}_{c}\rangle$ is obtained by filling the boundary components of $M_{f}$ with solid tori and adding the link $L$ given by the union of their cores, where the core over the $i$-th boundary component is colored by $c_i.$ Thus by Theorem \ref{thm:tracequantumrep}, we have $$\langle RT_r(M_{f}), {\bf e}_{c} \rangle= RT_r({M_{\tilde{f}}},(L,c))=\mathrm{Tr} (\rho_{r,c}(f)),$$ and thus
$$TV_r(M_{f})=\underset{c}{\sum}|\mathrm{Tr}\rho_{r,c}(f)|^2,$$ where the sum ranges over all colorings of the boundary components of $M_{f}$ by elements of $U_r.$ Now, on the one hand, since $lTV(M_{f})>0,$ the sequence $\{TV_r(M_{f})\}_r$ is bounded below by a sequence that is exponentially growing in $r$ as $r \rightarrow \infty.$ On the other hand, by Theorem \ref{thm:TQFTbasis}-(1), the sequence $\underset{c}{\sum}\mathrm{dim}(RT_r(\Sigma,c))$ only grows polynomially in $r.$
For big enough $r,$ there will be at least one $c$ such that $|\mathrm{Tr}\rho_{r,c}(f)|>\mathrm{dim}(RT_r(\Sigma,c))$. Thus by Lemma \ref{lem:order}, $\rho_{r,c}(f)$ will have infinite order. \end{proof}
By a theorem of Thurston \cite{thurston:mappingtori}, a mapping class $f \in \mathrm{Mod}(\Sigma)$ is represented by a pseudo-Anosov homeomorphism of $\Sigma$ if and only if the mapping torus $M_f$ is hyperbolic.
As a consequence of Theorem \ref{amu-ltv}, whenever a hyperbolic 3-manifold $M$ that fibers over the circle has $lTV(M)>0,$ the monodromy of the fibration represents a mapping class that satisfies the AMU Conjecture.
In the remainder of this paper we will be concerned with surfaces with boundary and mapping classes that appear as monodromies of fibered links in $S^3.$
\subsection{Link complements with $lTV>0$} \label{sec:lTV>0} Links with exponentially growing Turaev-Viro invariants will be the fundamental building block of our construction of examples of pseudo-Anosov mapping classes satisfying the AMU conjecture. We will need the following result proved by the authors in \cite{DK}.
\begin{theorem}\label{thm:ltvdehnfilling}{\rm {(\cite[Corollary 5.3]{DK})}} Assume that $M$ and $M'$ are oriented compact 3-manifolds with empty or toroidal boundaries and such that
$M$ is obtained by a Dehn-filling of $M'.$ Then we have:
$$lTV(M)\leqslant lTV(M').$$
\end{theorem}
Note that for a link $L\subset S^3$, and a sublink $K\subset L$, the complement of $K$ is obtained from that of $L$ by Dehn-filling. Thus Theorem \ref{thm:ltvdehnfilling} implies that if $K$ is a sublink of a link $L\subset S^3$ and $lTV(S^3 {\smallsetminus} K)>0,$ then we have $lTV(S^3 {\smallsetminus} L)>0.$
\begin{corollary} \label{positive} Let $K\subset S^3$ be the knot $4_1$ or a link with complement homeomorphic to that of the Borromean rings or the Whitehead link. If $L$ is any link containing $K$ as a sublink then $lTV(S^3 {\smallsetminus} L)>0.$ \end{corollary} \begin{proof}Denote by $B$ the Borromean rings. By \cite{DKY}, $lTV(S^3{\smallsetminus} 4_1)=2v_3\simeq 2.02988$ and $lTV(S^3{\smallsetminus} B)=2v_8\simeq 7.32772;$ and hence the conclusion holds for $B$ and $4_1.$
The complement of $K=4_1$ is obtained by Dehn filling along one of the components of the Whitehead link $W.$ Thus, by Theorem \ref{thm:ltvdehnfilling}, $lTV(S^3{\smallsetminus} W)\geqslant 2 v_3>0.$
For links with homeomorphic complements the conclusion follows since the Turaev-Viro invariants are homeomorphism invariants of the link complement; that is, they will not distinguish different links with homeomorphic complements. \end{proof}
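The numerical values above can be reproduced from the standard expressions $v_3=2\Lambda(\pi/6)$ and $v_8=8\Lambda(\pi/4)$ for the volumes of the regular ideal tetrahedron and octahedron, where $\Lambda$ denotes the Lobachevsky function; these expressions are not used elsewhere in the paper and serve here only as an informal sanity check, e.g. in Python:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def lobachevsky(theta):
    # Lobachevsky function  Lambda(theta) = -int_0^theta log|2 sin t| dt
    val, _ = quad(lambda t: -np.log(2 * np.sin(t)), 1e-12, theta)
    return val

v3 = 2 * lobachevsky(np.pi / 6)   # volume of the regular ideal tetrahedron
v8 = 8 * lobachevsky(np.pi / 4)   # volume of the regular ideal octahedron
print(2 * v3)   # ~ 2.02988 = lTV(S^3 \ 4_1)
print(2 * v8)   # ~ 7.32772 = lTV(S^3 \ Borromean rings)
\end{verbatim}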
\begin{remark}Additional classes of links with $lTV>0$ are given by the authors in \cite{DKY} and \cite{DK}. Some of these examples are non-hyperbolic. However it is known that any link is a sublink of a hyperbolic link \cite{Baker}. Thus one can start with any link $K$ with $lTV(S^3{\smallsetminus} K)>0$ and construct hyperbolic links $L$ containing $K$ as sublink; by Theorem \ref{thm:ltvdehnfilling} these will still have $lTV(S^3 {\smallsetminus} L)> 0.$ \end{remark}
\section{A hyperbolic version of Stallings's homogenization} \label{sec:homogenize} A classical result of Stallings \cite{Stallings} states that every link $L$ is a sublink of fibered links with fibers of arbitrarily large genera. Our purpose in this section is to prove the following hyperbolic version of this result.
\begin{theorem} \label{mainofsection} Given a link $L\subset S^3,$
there are hyperbolic links $L',$ with $L\subset L'$ and
such that the complement of $L'$ fibers over $S^1$ with fiber a surface of arbitrarily large genus. \end{theorem} \subsection{Homogeneous braids} Let $\sigma_1,\ldots, \sigma_{n-1}$ denote the standard braid generators of the $n$-strand braid group $B_n$. We recall that a braid $\sigma \in B_n$ is said to be \emph{homogeneous} if each standard generator $\sigma_i$ appearing in $\sigma$ always appears with exponents of the same sign. In \cite{Stallings}, Stallings studied relations between closed homogeneous braids and fibered links. We summarize his results as follows:
\begin{enumerate}
\item The closure of any homogeneous braid $\sigma \in B_n$ is a fibered link: The complement fibers over $S^1$ with fiber the surface $F$ obtained by Seifert's algorithm from the homogeneous closed braid diagram. The Euler characteristic of $F$ is $\chi(F)=n-c(\sigma),$ where $c(\sigma)$ is the number of crossings of $\sigma$ (see the small worked example after this list).
\item Given a link $L=\hat{\sigma}$ represented as the closure of a braid $\sigma \in B_n,$ one can add strands to obtain a homogeneous braid $\sigma' \in B_{n+k}$ so that the closure of $\sigma'$ is a link $L\cup K,$ where $K,$ the closure of the $k$ additional strands, represents the unknot. Furthermore, we can arrange $\sigma'$ so that the linking numbers of $K$ with the components of $\hat{\sigma}$ are arbitrary prescribed integers. The link $L\cup K,$ as a closed homogeneous braid, is fibered. \\
Throughout the paper we will refer to the component $K$ of $L\cup K$ as the Stallings component. \end{enumerate}
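As a small worked example of item (1): the trefoil is the closure of the homogeneous braid $\sigma_1^3\in B_2$, so its fiber has $\chi(F)=2-3=-1$ and, since the closure is a knot, $\chi(F)=2-2g-b$ gives the well-known fiber genus $g=1$. The arithmetic is summarized in the short Python sketch below.
\begin{verbatim}
def fiber_genus(num_strands, num_crossings, num_components):
    # Stallings: the fiber F of a homogeneous closed braid has chi(F) = n - c(sigma);
    # combined with chi(F) = 2 - 2g - b, where b is the number of link components.
    chi = num_strands - num_crossings
    genus = (2 - num_components - chi) / 2
    return chi, genus

# trefoil = closure of the homogeneous braid sigma_1^3 in B_2 (a knot, b = 1)
print(fiber_genus(2, 3, 1))   # (-1, 1.0): the genus-1 fiber of the trefoil
\end{verbatim}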
In order to prove Theorem \ref{mainofsection}, given a hyperbolic link $L$, we want to apply Stallings' homogenizing method in a way such that the resulting link is still hyperbolic.
Let $L$ be a hyperbolic link with $n$ components $L_1,\ldots, L_n.$ The complement $M_L:=S^3{\smallsetminus} L$ is a hyperbolic 3-manifold with $n$ cusps; one for each component. For each cusp, corresponding to some component $L_i$, there is a conjugacy class of a rank two abelian subgroup of $\pi_1(M_L).$ We will refer to this as the {\emph{peripheral group}} of $L_i.$
\begin{definition} \label{defcondition}Let $L$ be a hyperbolic link with $n$ components $L_1,\ldots, L_n.$ We say that an unknotted circle $K$ embedded in $S^3 {\smallsetminus} L$ satisfies condition $(\clubsuit )$ if (i) the free homotopy class $[K]$ does not lie in a peripheral group of any component of $L$; and (ii) we have $$\mathrm{gcd}\left(lk(K,L_1),lk(K,L_2),\ldots , lk(K,L_n)\right)=1.$$ \end{definition}
The rest of this subsection is devoted to the proof of the following proposition that is needed for the proof of Theorem \ref{mainofsection}.
\begin{proposition}\label{prop:condition} Given a hyperbolic link $L,$ one can choose the Stallings component $K$ so that (i) $K$ satisfies condition $(\clubsuit )$; and (ii) the fiber of the complement of $L\cup K$ has arbitrarily high genus. \end{proposition}
Since $L$ is hyperbolic, we have the
discrete faithful representation
$$\rho: \pi_1(S^3{\smallsetminus} L)\longrightarrow \mathrm{PSL}_2({\mathbb{C}}).$$
We recall that an element $A\in \mathrm{PSL}_2({\mathbb{C}})$ is called {\emph{parabolic}} if $\mathrm{Tr} (A)=\pm 2,$
and that $\rho$ takes elements in the peripheral subgroups of $ \pi_1(S^3 {\smallsetminus} L)$ to parabolic elements in $ \mathrm{PSL}_2({\mathbb{C}}).$
Since the matrix trace is invariant under conjugation, in the discussion below we will not distinguish between elements of $\pi_1(M_L)$ and their conjugacy classes.
With this understanding we recall that
if an element $\gamma \in \pi_1(S^3 {\smallsetminus} L)$ satisfies $\mathrm{Tr} (\rho (\gamma))\neq \pm 2,$ then it does not lie in any peripheral subgroup \cite[Chapter 5]{thurston:notes}.
\begin{lemma}\label{lem:traces} Let $A$ and $B$ be elements in $\mathrm{PSL}_2({\mathbb{C}}).$ \begin{itemize}
\item[(1)]If $A$ and $B$ are non-commuting parabolic elements then $|\mathrm{Tr}( A^l B^{-l})| >2$ for some $l.$
\item[(2)]If $|\mathrm{Tr}(A)|>2,$ then $|\mathrm{Tr} (A^k B)|\neq 2$ for all $k$ big enough. \end{itemize} \begin{proof} For (1), note that after conjugation we can take $A=\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$ and $B=\begin{pmatrix} 1 & x \\ 0 & 1
\end{pmatrix}.$ Then $$|\mathrm{Tr}(A^l B^{-l})|=|2-l^2 x| \underset{l\rightarrow\infty}{\rightarrow} \infty.$$ For (2), after conjugation take $A=\begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1}
\end{pmatrix}$ where $|\lambda|>1$ and write $B=\begin{pmatrix}
u & v \\ w & x \end{pmatrix}.$ Then
$$\mathrm{Tr} (A^k B)=\lambda^k u+\lambda^{-k} x,$$
which, as $k \rightarrow +\infty$, tends to infinity if $u\neq 0$ and to $0$ otherwise. In the first case we will have
$|\mathrm{Tr} (A^k B)|> 2$ for $k$ big enough; in the second case we will have
$|\mathrm{Tr} (A^k B)|< 2$. In both cases we have $|\mathrm{Tr} (A^k B)|\neq 2$ as desired. \end{proof} \end{lemma}
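The matrix computations in the proof can be checked symbolically; in the following Python/sympy sketch the lower-right entry of $B$ in part (2) is renamed $y$ to avoid a clash with the $x$ of part (1).
\begin{verbatim}
import sympy as sp

l, k, x, lam, u, v, w, y = sp.symbols('l k x lambda u v w y')

# For the parabolics A = [[1,0],[1,1]] and B = [[1,x],[0,1]] one has, by induction,
# A^l = [[1,0],[l,1]] and B^(-l) = [[1,-l*x],[0,1]].
Al = sp.Matrix([[1, 0], [l, 1]])
Bml = sp.Matrix([[1, -l * x], [0, 1]])
print(sp.simplify((Al * Bml).trace()))   # 2 - l**2*x, unbounded in l when x != 0

# For A diagonalized as diag(lambda, 1/lambda) and B = [[u,v],[w,y]]:
Ak = sp.Matrix([[lam**k, 0], [0, lam**(-k)]])
B = sp.Matrix([[u, v], [w, y]])
print(sp.expand((Ak * B).trace()))       # lambda**k*u + lambda**(-k)*y
\end{verbatim}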
Next we will consider the Wirtinger presentation of $\pi_1(S^3 {\smallsetminus} L)$ corresponding to a link diagram representing a hyperbolic link $L$ as a closed braid $\hat{\sigma}.$ The Wirtinger generators are conjugates of meridians of the components of $L$ and are mapped to parabolic elements of $\mathrm{PSL}_2({\mathbb{C}})$ by $\rho$. A key point in the proof of Proposition \ref{prop:condition} is to choose the Stallings component $K$ so that the word it represents in $\pi_1(S^3 {\smallsetminus} L)$ is conjugate to one that begins with a sub-word
$(a^{-l} b^{l})^k,$ where $a$ and $b$ are Wirtinger generators mapped to non-commuting elements under $\rho$. Then we will use
Lemma \ref{lem:traces} to prove that the free homotopy class $[K]$ is not in a peripheral subgroup of any component of $L$.
We first need the following lemma:
\begin{lemma}\label{lem:meridians} Let $L$ be a hyperbolic link in $S^3,$ with a link diagram of a closed braid $\hat{\sigma}.$ We can find two strands of $\sigma$ meeting at a crossing so that if $a$ and $b$ are the Wirtinger generators corresponding to an under-strand and the over-strand of the crossing respectively, then $\rho(a)$ and $\rho(b)$ do not commute.
\end{lemma} \begin{proof} Suppose that for any pair of Wirtinger generators $a, b$ corresponding to a crossing as above, $\rho(a)$ and $\rho(b)$ commute. Since $\rho(a)$ and $\rho(b)$ are commuting parabolic elements of infinite order in $\mathrm{PSL}_2({\mathbb{C}}),$ elementary linear algebra shows that they share
their unique eigenline. Then, proceeding crossing by crossing along the diagram, we get that the images under $\rho$ of all Wirtinger generators share an eigenline. But this would imply that $\rho\left(\pi_1(S^3{\smallsetminus} L)\right)$ is abelian,
which is a contradiction. \end{proof} We now turn to the proof of Proposition \ref{prop:condition}, which we obtain by tweaking Stallings' homogenization procedure.
\begin{proof}[Proof of Proposition \ref{prop:condition}] Let $L$ be a hyperbolic link with components
$L_1,\ldots,L_n$, represented as a braid closure ${\hat{ \sigma}}$. Let $a,b$ be Wirtinger generators of $\pi_1(S^3{\smallsetminus} L)$ chosen as in
Lemma \ref{lem:meridians}.
Starting with the projection of ${\hat{ \sigma}}$, we proceed in the following way:
We arrange the crossings of ${\hat{ \sigma}}$ to occur at different verticals on the projection plane. \begin{enumerate}
\item Begin drawing the Stallings component so that, near the strands where the above chosen Wirtinger generators $a,b$ occur, we create the pattern shown on the left of Figure \ref{fig:pattern}.
\item We deform the strands of $\sigma$ to create ``zigzags'' as shown in the second drawing of Figure \ref{fig:homogenization}.
\item We fill the empty spaces in the verticals with new braid strands and choose the new crossings so that the resulting braid is homogeneous and so that the new strands meet the strands of $\sigma$ in both positive and negative crossings. Adding enough ``zigzags'' at the previous step ensures that there is enough freedom in choosing the crossings to make this second condition possible.
\item At this stage, we have turned the braid $\sigma$ into a homogeneous braid, say $\sigma_h$. The closure ${\hat{ \sigma_h}}$ contains $L$ as a sublink and some number $s\geqslant 1$ of unknotted components. To reduce the number of components added, we connect the new components with a single crossing between each pair of neighboring new components. Doing so we may have to create new crossings with the components of $L,$ but we can always choose them to preserve homogeneity. Thus we have homogenized $L$ by adding a single unknotted component $K$ to it. \end{enumerate} The four-step process described above is illustrated in Figure \ref{fig:homogenization}. \begin{figure}
\caption{The four step homogenization process.}
\label{fig:homogenization}
\end{figure}
Now, because we have positive and negative crossings of $K$ with each component of $L,$ we can set the linking numbers as we want just by adding an even number of positive or negative crossings between $K$ and a component of $L$ locally. If the strands $a$ and $b$ correspond to the same component $L_1,$ we simply ask that $lk(K,L_1)=1.$ If they correspond to two distinct components $L_1$ and $L_2,$ we choose $(lk(K,L_1),lk(K,L_2))=(1,0).$
Recall that we have chosen $a,b$ to be Wirtinger generators of $\pi_1(S^3{\smallsetminus} L)$, as in
Lemma \ref{lem:meridians}, and so that $K$ is added to $L$ so that the pattern shown on the left hand side of Figure \ref{fig:pattern} occurs near the corresponding crossing. Assume that $[K]$ is conjugate to a word $w \in \pi_1(S^3{\smallsetminus} L).$ Now one may modify the diagram of $L\cup K$ locally, as shown in the right hand side of Figure \ref{fig:pattern}, to make $[K]$ conjugate to $(a^{-l} b^l)^k w$ for any non-negative $k$ and $l.$ Notice also that this move leaves $K$ unknotted and that $L \cup K$ is still a closed homogeneous braid.
Also notice that, doing so, we left $lk(K,L_1)$ unchanged if $a$ and $b$ were part of the same component $L_1,$ and we turned $(lk(K,L_1),lk(K,L_2))$ into $(1-kl,kl)$ if they correspond to different components $L_1$ and $L_2.$ In both cases, since any common divisor of $1-kl$ and $kl$ divides their sum $1$, we preserved the fact that
$$\mathrm{gcd}\left(lk(K,L_1),lk(K,L_2),\ldots ,lk(K,L_n)\right)=1$$
and $K$ satisfies part (ii) of condition $(\clubsuit).$
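For completeness, the gcd claim can also be confirmed by a direct check over a range of $k,l$, e.g. in Python:
\begin{verbatim}
from math import gcd

# linking numbers (lk(K,L1), lk(K,L2)) after inserting (a^{-l} b^{l})^k,
# starting from (1, 0)
for k in range(1, 30):
    for l in range(1, 30):
        lk1, lk2 = 1 - k * l, k * l
        assert gcd(abs(lk1), lk2) == 1
print("gcd(1 - k*l, k*l) = 1 for all tested k, l")
\end{verbatim}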
\begin{figure}
\caption{Changing $[K]$ from a conjugate of $w \in \pi_1(S^3{\smallsetminus} L)$ (left), to a conjugate of $w(a^{-l}b^l)^k$ (right), for any non-negative $l$ and $k.$ Here $l=k=2.$}
\label{fig:pattern}
\end{figure}
To ensure that part (i) of the condition is satisfied, note that since $\rho(a)$ and $\rho(b)$ are non-commuting, Lemma \ref{lem:traces} (1) implies $|\mathrm{Tr} (\rho(a)^{-l}\rho(b)^l)|> 2$ for $l\gg 0.$ Thus by choosing $k\gg 0,$ and using Lemma \ref{lem:traces} (2), we may assume that $|\mathrm{Tr}(A^k B)|\neq 2,$ where $A=\rho(a)^{-l} \rho(b)^{l}$ and $B=\rho(w).$ Since $\rho([K])=A^k B$ is then not parabolic, $[K]$ is not in a peripheral subgroup of $\pi_1(S^3{\smallsetminus} L).$
Now notice that as the above positive integers $k,l$ become arbitrarily large, the crossing number of the resulting homogeneous braid projections becomes arbitrarily large while the braid index remains unchanged. Since the fiber of the fibration of a closed homogeneous braid is the Seifert surface of the closed braid projection, it follows that as $k,l\to \infty$ the genus of the fiber becomes arbitrarily large.
\end{proof}
\subsection{Ensuring hyperbolicity}
In this subsection we will finish the proof of Theorem \ref{mainofsection}. For this we need the following:
\begin{proposition}\label{prop:hyperbolic} Suppose that $L$ is a hyperbolic link and let $L\cup K$ be a homogeneous closed braid obtained from $L$ by adding a Stallings component $K$ that satisfies condition $(\clubsuit )$. Then $L\cup K$ is a hyperbolic link. \end{proposition}
Before we can proceed with the proof of Proposition \ref{prop:hyperbolic} we need some preparation: We recall that when an oriented link $L$ is embedded in a solid torus $V$, the total winding number of $L$ is the non-negative integer $n$ such that $L$ represents $n$ times a generator of $H_1(V,{\mathbb{Z}}).$ When convenient we will consider $M_{L\cup K}$ to be the compact 3-manifold obtained by removing the interiors of neighborhoods of the components of $L\cup K$; the interior of $M_{L\cup K}$ is homeomorphic to $S^3 {\smallsetminus} (L\cup K)$. In the course of the proof of the proposition we will see that condition $(\clubsuit)$ ensures that the complement of $L\cup K$ cannot contain embedded tori that are not boundary parallel or compressible (i.e. $M_{L\cup K}$ is {\emph{atoroidal}}). We need the following lemma that provides restrictions on winding numbers of satellite fibered links.
\begin{lemma}\label{lem:windingnumber}We have the following:
\begin{itemize} \item[(1)] Suppose that $L$ is an oriented fibered link in $S^3$ that is embedded in a solid torus $V$ with boundary $T$ incompressible in $S^3{\smallsetminus} L.$ Then, some component of $L$ must have non-zero
winding number. \item[(2)] Suppose that $L$ is an oriented fibered link in $S^3$ such that only one component $K$ is embedded inside a solid torus $V.$ If $K$ has winding number $1,$ then $K$ is isotopic to the core of $V.$ \end{itemize} \end{lemma} Though this statement is fairly classical in the context of fibered knots \cite{HiraMuraSilver}, we include a proof as we are working with fibered links. \begin{proof} The complement $M_L=S^3{\smallsetminus} N(L)$ fibers over $S^1$ with fiber a surface $(F, \partial F)\subset (M_L, \partial M_L)$. Then $S^3{\smallsetminus} L$ cut along $F=F\times \{0\}=F\times \{1\}$ is homeomorphic to $F \times [0,1].$ It is known that $F$ maximizes the Euler characteristic in its homology class in $H_2(M_L, \partial M_L)$ and thus $F$ is incompressible and $\partial$-incompressible.
(1) Assume that the winding number of every component of $L$ is zero, and consider the intersection of $F$ with $T,$ the boundary of the solid torus containing $L.$ Since $F \times [0,1]$ is irreducible, and $F$ is incompressible in the complement of $L$, up to isotopy, one can assume that the intersection $T\cap F$ consists of a collection of parallel curves in $T,$ each of which is homotopically essential in $T.$ The hypothesis on the winding number implies that the intersection $F\cap T$ is null-homologous in $T,$ where each component of $F\cap T$ is given the orientation inherited from the surface $V\cap F.$ Thus the curves in $F\cap T$ can be partitioned into pairs of parallel curves with opposite orientations in $T \cap F$. Each such pair bounds an annulus in $T,$ and in $F \times (0,1)$ each of these annuli has both ends on $F \times \lbrace 0 \rbrace$ or on $F \times \lbrace 1 \rbrace.$ This implies that we can find $0<t<1$ such that $F_t=F\times \{t\}$ misses the torus $T$. This in turn implies that $T$ must be an essential torus in the manifold obtained by cutting $S^3{\smallsetminus} L$ along the fiber $F_t$. But this is impossible since the latter manifold is $F_t\times I,$ which is a handlebody and cannot contain essential tori; contradiction.
(2) By an argument similar to that used in case (1) above, we can simplify the intersection of the fiber surface $F$ with $T$ until it consists of one curve only. This curve, say $\gamma$, cuts $T$ into an essential annulus embedded in $F \times (0,1)$ with one boundary component on $F \times \lbrace 0 \rbrace$ and the other on $F \times \lbrace 1 \rbrace.$ As the annulus closes up, the curve $\gamma$ must be fixed by the monodromy of the fibration and one can isotope $T$ to make it compatible with the fibration. Then one has that $K$ is fibered in $V,$ and as the winding number of $K$ is $1,$ by Corollary 1 in \cite{HiraMuraSilver}, $K$ must be isotopic to the core of $V.$ \end{proof}
We are now ready to give the proof of Proposition \ref{prop:hyperbolic}.
\begin{proof}[Proof of Proposition \ref{prop:hyperbolic}] First we remark that $S^3 {\smallsetminus} (L \cup K)$ is non-split, since $S^3 {\smallsetminus} L$ is non-split and $K$ represents a non-trivial element in $\pi_1(S^3 {\smallsetminus} L).$
Next we argue that $S^3 {\smallsetminus} (L \cup K)$ is atoroidal: Assume that we have an essential torus, say $T,$ in $M_{L\cup K}=S^3 {\smallsetminus} (L \cup K).$ Since $L$ is hyperbolic, in $M_L=S^3 {\smallsetminus} L$ the torus $T$ becomes either boundary parallel or compressible. Moreover, the torus $T$ bounds a solid torus $V$ in $S^3.$
Suppose that $T$ becomes boundary parallel in the complement of $L$. Then, we may assume that $V$ is a tubular neighborhood of a component $L_i$ of $L.$ Then $K$ must lie inside $V$; for otherwise $T$ would still be boundary parallel in $M_{L\cup K}$. But then the free homotopy class $[K]$ would represent a conjugacy class in the peripheral subgroup of $\pi_1(M_L)$ corresponding to $L_i.$ However this contradicts condition $(\clubsuit);$ thus this case cannot happen.
Suppose now that $T$ becomes compressible in $M_L$. In $S^3,$ the torus $T$ bounds a solid torus $V$ that contains a compressing disk of $T$ in $M_L.$ If $V$ contains no component of $L \cup K,$ the torus $T$ is still compressible in $M_{L\cup K}$. Otherwise, there are again two cases:
{\it Case 1:} The solid torus $V$ contains some components of $L.$ We claim that $V$ actually contains all the components of $L.$ Otherwise, after compressing $T$ in $M_L,$ one would get a sphere that separates the components of $L,$ which cannot happen as $L$ is non-split. Moreover, as the compressing disk is inside $V,$ all components of $L$ have winding number zero in $V.$
Since $T$ is incompressible in $M_{L\cup K},$ the component $K$ must also lie inside $V.$ Note that $V$ has to be knotted since otherwise $T$ would compress outside $V$ and thus in $M_{L\cup K}$. But then since $K$ is unknotted, it must have winding number zero in $V.$ Thus we have the fibered link $L\cup K$ lying inside $V$ so that each component has winding number zero. But then $T$ cannot be incompressible in $M_{L\cup K}$ by Lemma \ref{lem:windingnumber}-(1); contradiction. Thus this case will not happen.
{\it Case 2:} The solid torus $V$ contains only $K.$ Since $T$ is incompressible in $M_{L\cup K},$ $K$ must be geometrically essential in $V;$ that is, it does not lie
in a 3-ball inside $V.$ Since $K$ is unknotted, it follows that $V$ is unknotted.
For each component $L_i$ of $L,$ we have
$$lk(K,L_i)=w\cdot lk(c,L_i),$$ where $c$ is the core of $V,$
and $w$ denotes the winding number of $K$ in $V.$ Since $K$ satisfies condition $(\clubsuit),$ we know that
$$\mathrm{gcd}\left(lk(K,L_1),lk(K,L_2),\ldots ,lk(K,L_n)\right)=1,$$ and since $w$ divides each $lk(K,L_i),$ this implies that we must have $w=1.$
Thus by Lemma \ref{lem:windingnumber}-(2), $K$ is isotopic to the core of $V$ and $T$ is boundary parallel, contradicting the assumption that $T$ is essential in $M_{L\cup K}.$ This finishes the proof that $M_{L\cup K}$ is atoroidal.
Since $M_{L\cup K}$ contains no essential spheres or tori, and has toroidal boundary, it is either a Seifert fibered space or a hyperbolic manifold. But $M_L$ is a Dehn-filling of $M_{L\cup K}$ which is hyperbolic. Since the Gromov norm $||\cdot ||$ does not increase under Dehn filling \cite{thurston:notes} we get $||M_{L\cup K}||\geq ||M_L||>0.$ The Gromov norm of Seifert 3-manifolds is zero, thus $L \cup K$ must be hyperbolic. \end{proof}
We can now finish the proof of Theorem \ref{mainofsection} and the proof of Theorem \ref{hyperbgeneral} stated in the Introduction.
\begin{proof}[Proof of Theorem \ref{mainofsection}] Let $L$ be any link. If $L$ is not hyperbolic, then we can find a hyperbolic link $L',$ that contains $L$ as a sub-link. See for example \cite{Baker}. If $L$ is hyperbolic then set $L=L'.$ Then apply Proposition \ref{prop:condition} to $L'$ to get links $L'\cup K$ that are closed homogeneous braids with arbitrarily high crossing numbers and fixed braid index. By \cite{Stallings}, the links $L'\cup K$ are fibered and the fibers have arbitrarily large genus and by Proposition \ref{prop:hyperbolic} they are hyperbolic. \end{proof}
\begin{proof}[Proof of Theorem \ref{hyperbgeneral}] Suppose that $L$ is a link with $lTV(S^3{\smallsetminus} L)>0$. By Theorem \ref{mainofsection} we have fibered hyperbolic links $L'$ that contain $L$ as a sublink and whose fibers have arbitrarily large genus. By Theorem \ref{thm:ltvdehnfilling} we have $lTV(S^3{\smallsetminus} L')\geqslant lTV(S^3{\smallsetminus} L)>0$. \end{proof}
\section{Stallings twists and the AMU conjecture} \label{sec:examples} By our results in the previous sections, starting from a hyperbolic link $L\subset S^3$ with $lTV(S^3{\smallsetminus} L) >0,$ one can add an unknotted component $K$ to obtain a hyperbolic fibered link $L\cup K,$ with $lTV(S^3{\smallsetminus} (L\cup K)) >0.$
The monodromy of a fibration of $L\cup K$ provides a pseudo-Anosov mapping class on the surface $\Sigma=\Sigma_{g,n},$ where $g$ is the genus of the fiber and $n$ is the number of components of $L\cup K.$
One can always increase the number of boundary components $n$ by adding more components to $L$ and appealing to Theorem \ref{thm:ltvdehnfilling}. However, since $L\cup K$ is a closed homogeneous braid, this construction alone will not provide infinite families of examples for fixed genus and number of boundary components.
In this section we show how to address this problem and prove Theorem \ref{general}, stated in the introduction, which we restate here for the convenience of the reader.
\begin{named}{Theorem \ref{general}} Let $\Sigma$ denote an orientable surface of genus $g$ with $n$ boundary components. Suppose that either $n=2$ and $g\geqslant 3$ or $g\geqslant n \geqslant 3.$ Then there are infinitely many non-conjugate pseudo-Anosov mapping classes in $\mathrm{Mod}(\Sigma)$ that satisfy the AMU conjecture. \end{named}
\subsection{Stallings twists and pseudo-Anosov mappings}
Stallings \cite{Stallings} introduced an operation that transforms a fibered link into a fibered link with a fiber of the same genus:
Let $L$ be a fibered link with fiber $F$ and let $c$ be a simple closed curve on $F$ that is unknotted in $S^3$ and such that $lk(c,c^+)=0,$ where $c^+$ is the curve $c$ pushed along the normal of $F$ in the positive direction. The curve $c$ bounds a disk $D\subset S^3$ that is transverse to $F$. Let $L_m$ denote the link obtained from $L$ by a full twist of order $m$ along $D.$ This operation is known as Stallings twist of order $m.$
Alternatively, one can think of the Stallings twist operation as performing $1/m$-surgery on $c,$ where the framing of $c$ is induced by the normal vector on $F.$
\begin{theorem} {\rm { (\cite[Theorem 4]{Stallings})}}\label{twist} Let $L$ be a link whose complement fibers over $S^1$ with fiber $F$ and monodromy $f$. Let $L_m$ denote a link obtained by a Stallings twist of order $m$ along a curve $c$ on $F$. Then, the complement of $L_m$ fibers over $S^1$ with fiber $F$ and the monodromy is $f \circ \tau_c^m,$ where $\tau_c$ is the Dehn-twist on $F$ along $c.$ \end{theorem}
Note that when $c$ is parallel to a component of $L,$ then such an operation does not change the homeomorphism class of the link complement; we call these Stallings twists trivial.
To facilitate the identification of non-trivial Stallings twists on link fibers, we recall the notion of {\emph {state graphs}}:
Recall that the fiber for the complement of a homogeneous closed braid ${\hat{\sigma}}$ is obtained as follows: Resolve all the crossings in the projection of ${\hat{\sigma}}$ in a way consistent with the braid orientation. The result is a collection of nested embedded circles (Seifert circles) each bounding a disk on the projection plane; the disks can be made disjoint by pushing them slightly above the projection plane. Then we construct the fiber $F$ by attaching a half-twisted band for each crossing. The state graph consists of the collection of the Seifert circles together with an edge for each crossing of ${\hat{\sigma}}.$ We will label each edge by $A$ or $B$ according to whether the resolution of the corresponding crossing during the construction of $F$ is of type $A$ or $B$ as shown in Figure \ref{fig:resolutions}, viewed as an unoriented resolution.
\begin{figure}
\caption{A crossing and its $A$ and $B$ resolutions.}
\label{fig:resolutions}
\end{figure} \begin{figure}
\caption{Left: Pattern in the state graph exhibiting a non-trivial Stallings twist. Right: The curve $c$ obtained as a connected sum of the curves $c_1, c_2$ with self-linking numbers $+2$ and $-2$. }
\label{fig:stallingstwist}
\end{figure}
\begin{remark}\label{locate} {\rm As the homogeneous braids get more complicated, the fiber is more likely to admit a non-trivial Stallings twist. Indeed, if the state graph of $L={\hat{\sigma}}$ exhibits the local pattern shown in the left-hand side of
Figure \ref{fig:stallingstwist}, we can perform a non-trivial Stallings twist along the curve $c$ which corresponds to the connected sum of the two curves $c_1$ and $c_2$ shown in the figure. We can see that $lk(c_1,c_1^+)=+2$ and $lk(c_2,c_2^+)=-2,$ while the mixed linking numbers vanish; hence $lk(c,c^+)=2-2=0,$ as spelled out below.}
\end{remark}
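For concreteness, the count in Remark \ref{locate} can be organized using the bilinearity of the linking number: since $c$ is obtained as a connected sum of $c_1$ and $c_2$ on the fiber, $c$ is homologous to $c_1+c_2$ in the complement of $c^+,$ and $c^+$ is homologous to $c_1^++c_2^+$ in the complement of $c,$ so that
$$lk(c,c^+)=lk(c_1,c_1^+)+lk(c_2,c_2^+)+lk(c_1,c_2^+)+lk(c_2,c_1^+)=2+(-2)+0+0=0.$$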
We will need the following theorem, stated and proved by Long and Morton \cite{LongMorton} for closed surfaces. Here we state the version for surfaces with boundary and, for completeness, we sketch the slight adaptation of their argument to this setting.
\begin{theorem} {\rm {(\cite[Theorem A]{LongMorton})}}\ \label{thm:LongMorton} Let $F$ be a compact oriented surface with $\partial F\neq \emptyset.$ Let $f$ be
a pseudo-Anosov homeomorphism on $F$ and let $c$ be a non-trivial, non-boundary parallel simple closed curve on $F.$
Let $\tau_{c}$ denote the Dehn-twist along $c.$ Then, the family $\{f \circ \tau_{c}^m\}_m$ contains infinitely many non-conjugate pseudo-Anosov homeomorphisms.
\end{theorem} \begin{proof}
The proof rests on the fact that the mapping torus of $f_m = f \circ \tau_{c}^m$ is obtained from $M_f$ by performing $1/m$-surgery on the curve $c$ with framing induced by a normal vector in $F.$ Once we prove that $M_f {\smallsetminus} c$ is hyperbolic, Thurston's hyperbolic Dehn surgery theorem implies, for $m$ big enough, that the mapping tori $M_{f_{m}}$ are hyperbolic and pairwise non-homeomorphic (as their hyperbolic volumes differ); in particular, the corresponding $f_m$ are pseudo-Anosov. Since conjugate maps have homeomorphic mapping tori, it follows that infinitely many of the $f_m$ are pairwise non-conjugate.
We will consider the curve $c$ as embedded on the fiber $F \times \lbrace 1/2 \rbrace \subset M_f.$ Notice that $M_f {\smallsetminus} c$ is irreducible, as $c$ is non-trivial in $\pi_1(F)$ and thus in $\pi_1(M_f).$
We need to show that $M_f {\smallsetminus} c$ contains no essential embedded tori: Let $T$ be a torus embedded in $M_f {\smallsetminus} c.$
If $T$ is boundary parallel in $M_f,$ it will also be boundary parallel in $M_f {\smallsetminus} c$: otherwise one would be able to isotope $c$ onto the boundary of $M_f,$ and as $c$ is actually a curve on $F \times \lbrace 1/2 \rbrace,$ $c$ would be conjugate in
$\pi_1(F)$ to a boundary component. As we chose $c$ to be non-boundary parallel in $F,$ this does not happen.
Now, assume that $T$ is non-boundary parallel in $M_f.$ Then we can put $T$ in general position and consider the intersection $T\cap F\times \lbrace 1/2 \rbrace.$ If this intersection is empty, then $T$ is compressible, as $F \times [0, 1]$ does not contain any essential tori. After isotopy we can assume that $T \cap (F \times [0, 1])$ is a collection of properly embedded annuli in $ F \times [0, 1],$ each of which either misses a fiber $F$ or is vertical with respect to the $I$-bundle structure. Now note that if one of these annuli misses a fiber, then we can remove it by isotopy in $M_f {\smallsetminus} c,$ unless it connects curves parallel to $c$ on opposite sides of $c$ on $ F\times \lbrace 1/2 \rbrace.$ Also observe that we cannot have annuli that connect the non-boundary parallel curve $c \subset F\times \lbrace 1/2 \rbrace$ to $f(c)$: for, since $f$ is pseudo-Anosov, the curves $f^k (c)$ and $f^l (c)$ are freely homotopic on the fiber if and only if $k=l,$ and thus the annuli would never close up to give $T.$ In the end, since $M_f$ is hyperbolic,
we are left with two annuli connecting the two sides of $c$, and $T$ is boundary parallel in $M_f {\smallsetminus} c.$
\\ Finally, $M_f{\smallsetminus} c$ is irreducible and atoroidal, and since its Gromov norm satisfies $||M_f {\smallsetminus} c||\geqslant||M_f||>0,$ it is hyperbolic.\end{proof}
\subsection{Infinite families of mapping classes} We are now ready to present our examples of infinite families of non-conjugate pseudo-Anosov mapping classes of fixed surfaces that satisfy the AMU conjecture. The following theorem gives the general process of the construction. \begin{theorem} \label{infinitegen} Let $L$ be a hyperbolic fibered link with fiber $\Sigma$ and monodromy $f.$ Suppose that $L$ contains a sublink $K$ with $lTV(S^3{\smallsetminus} K)>0.$ Suppose, moreover, that the fiber $\Sigma$ admits a non-trivial Stallings twist along a curve $c\subset \Sigma$ such that the interior of the twisting disc $D$ intersects $K$ at most once geometrically. Let $\tau_c$ denote the Dehn twist of $\Sigma$ along $c$. Then the family $\{f \circ \tau_{c}^m\}_m$ of homeomorphisms gives infinitely many non-conjugate pseudo-Anosov mapping classes in $\mathrm{Mod}(\Sigma)$ that satisfy the AMU conjecture. \end{theorem} \begin{proof} Since $L$ contains $K$ as a sublink, we have $lTV(S^3{\smallsetminus} L)\geqslant lTV(S^3{\smallsetminus} K)>0.$ Since $D$ intersects $K$ at most once, each of the links $L'$ obtained by Stallings twists along $c$ will also contain a sublink isotopic to $K$ and hence $lTV(S^3{\smallsetminus} L')\geqslant lTV(S^3{\smallsetminus} K)>0.$ The conclusion follows by Theorems \ref{twist}, \ref{thm:LongMorton} and \ref{amu-ltv}. \end{proof}
We finish the section with concrete constructions of infinite families obtained by applying Theorem \ref{infinitegen}. Start with $K_1=4_1$ represented as the closure of the homogeneous braid $\sigma_2^{-1} \sigma_1 \sigma_2^{-1} \sigma_1.$ We construct a 2-parameter family of links $L_{n,m}$, where $n\geqslant 2$ and $m\geqslant 1,$ defined as follows:
The link $L_{4,m}$
is shown in the left panel of Figure \ref{fig:example2}, where the box shown contains $2m$ crossings. It is obtained from $K_1$ by adding three unknotted components.
The link $L_{3,m}$ is obtained from $L_{4,m}$ by removing the unknotted component corresponding to the outermost string of the braid.
The link $L_{n+1,m}$ for $n\geqslant 4$ is obtained from $L_{n,m}$ by adding one strand in the following way: denote by $K_1,\ldots ,K_n$ the components of $L_{n,m}$ from innermost to outermost, $K_1$ being the $4_1$ component.
To get $L_{n+1,m},$ we add one strand $K_{n+1}$ to $L_{n,m}$, so that traveling along $K_n$ one finds $2$ crossings with $K_{n-1},$ then $2$ crossings with $K_{n+1},$ then $2$ crossings with $K_{n-1},$ then $2$ crossings with $K_{n+1},$ and, moreover, the crossings with $K_{n-1}$ and $K_{n+1}$ have opposite signs. There is only one way to choose this new strand, and doing so we add one unknotted component to $L_{n,m};$ thus $L_{n+1,m}$ has $n+1$ components and $4$ more crossings than $L_{n,m}.$
\\ In the special case $n=2,$ the link $L_{2,m}$ is obtained from the link $L_{3,m}$ by replacing the box with $2m$ crossings with a box with $2m-1$ crossings. The links $L_{2,m}$ are then $2$-component links. We note that all the links $L_{n,m}$ contain the component $K_1$ we started with.
\begin{figure}
\caption{The link $L_{4,m}$ and the state graph for $L_{4,2}.$}
\label{fig:example2}
\end{figure} \begin{proposition}\label{prop:example} The link $L_{n,m}$ is hyperbolic, fibered and satisfies the hypotheses of Theorem \ref{infinitegen}. The fiber has genus $g=m+2$ if $n=2,$
and $g=n+m-1$ otherwise. \end{proposition} \begin{proof} For every $n\geqslant 2$ and every $m\geqslant 1,$ the link $L_{n,m}$ contains the knot $K_1=4_1$ as a sublink and, as said earlier, we have $lTV(S^3{\smallsetminus} K_1)>0.$ Since $L_{n,m}$ is alternating, hyperbolicity follows from Menasco's criterion \cite{menasco:primediagram}: any prime non-split alternating diagram of a link that is not the standard diagram of a $T(2,q)$ torus link represents a hyperbolic link. Since $L_{n,m}$ is represented by an alternating (and thus homogeneous) closed braid, fiberedness follows from Stallings' criterion. For $n\geqslant 3,$ the resulting closed braid diagram has braid index $2+n$ and $4n+2m-2$ crossings. Hence the Euler characteristic of the fiber is $(2+n)-(4n+2m-2)=4-3n-2m$ and, since the fiber has $n$ boundary components, its genus is $m+n-1.$ In the case $n=2,$ the braid index is $5$ and the number of crossings is $9+2m,$ thus the Euler characteristic of the fiber is $-4-2m$ and the genus is $m+2.$
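As an illustrative check of these counts, not needed for the argument, take $n=2$ and $m=1$: the fiber is built from one disk for each of the $5$ Seifert circles and one band for each of the $9+2m=11$ crossings, so $\chi(F)=5-11=-6$; since $F$ has two boundary components, $-6=2-2g-2$ gives $g=3=m+2$, as claimed.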
Using Remark \ref{locate} and the state graph given in Figure \ref{fig:example2} we can easily locate a simple closed curve $c$ on the fiber with the properties in the statement of Theorem \ref{infinitegen}. \end{proof}
Now Proposition \ref{prop:example} and Theorem \ref{infinitegen} immediately give Theorem \ref{general} stated in the beginning of the section.
\begin{remark} Note that if we restrict ourselves to closed homogeneous braids to get explicit examples of links satisfying the hypotheses of Theorem \ref{infinitegen}, it seems unavoidable that the genus grows with the number of components. However, one could consider other methods that increase the number of components while keeping the fiber genus low, to produce links satisfying the hypotheses of Theorem \ref{infinitegen}. One way would be to take a Murasugi sum of the link $L_{2,m}$ with links of arbitrarily large number of components and whose complement fibers over $S^1$ with fiber of genus $0$. This should be done so that the Murasugi sum operation leaves the component $4_1\subset L_{2,m}$ unaffected and produces hyperbolic links. It seems plausible that combining homogeneous braids and Murasugi sums should give explicit examples of infinite families of mapping classes that satisfy the AMU Conjecture, for all surfaces $\Sigma$ with genus $g\geqslant 2$ and $n\geqslant 2$ boundary components. \end{remark}
\begin{remark}\label {spheres} Our methods also apply to surfaces of genus zero to produce examples of mapping classes that satisfy the conclusion of the AMU Conjecture. Given that such examples were previously known, we just outline an explicit construction without pursuing the details of determining the Nielsen-Thurston types of the resulting mapping classes or discussing how the construction relates to that of Santharoubane \cite{San17}: Let $\Sigma_{0, n}$ denote the sphere with $n$ holes. A mapping class in $\mathrm{Mod}(\Sigma_{0, n+1})$ can be thought of as an element of the pure braid group on $n$ strings, say $P_n$. It is known that for $n>3$, $P_n$ is a semi-direct product of a subgroup $W_n$, which is itself a semi-direct product of free subgroups of $P_n$, and of $P_3$. See, for example, \cite[Theorem 1.8]{birman:book}. As a result, any braid $b\in P_n$ can be uniquely written as a product $\beta\cdot w$ where $\beta\in P_3$ and $w\in W_n$. Now take any $b=\beta \cdot w$ for which the closure of $\beta\in P_3$ represents the Borromean rings $B$. The braid $b$ represents a mapping class $\phi\in \mathrm{Mod}(\Sigma_{0, n+1})$ whose mapping torus is the complement of the link $L_b$ that is the closure of $b$ together with the braid axis. The link $L_b$ contains $B$ as a sublink and hence $lTV(S^3{\smallsetminus} L_b)\geqslant lTV(S^3{\smallsetminus} B)>0.$ Thus by Theorem \ref{amu-ltv}, for $r$ big enough, there is a choice of colors $c$ of the boundary components of $\Sigma_{0, n+1}$ such that $\rho_{r,c}(\phi)$ has infinite order. \end{remark} \begin{remark} \label{knots} Theorem \ref{infinitegen} leads to constructions of mapping classes on surfaces with at least two boundary components that satisfy the AMU conjecture. Furthermore, all these mapping classes are obtained as monodromies of fibered links in $S^3$. In \cite{BDKY} we prove the Turaev-Viro invariants volume conjecture for an infinite family of cusped hyperbolic 3-manifolds. Considering the doubles of these 3-manifolds we obtain an infinite family of closed 3-manifolds $M$ with $lTV(M)>0$. It is known that every closed 3-manifold contains hyperbolic fibered knots \cite{Soma}. By Theorem \ref{amu-ltv}, monodromies of such knots provide examples of pseudo-Anosov mappings on surfaces with a single boundary component that satisfy the AMU conjecture. \end{remark}
\section{Integrality properties of periodic mapping classes} \label{sec:integertraces} In this section we give the proofs of Theorem \ref{thm:integertrace} and Corollary \ref{cor:toruslinks} stated in the Introduction. We also state a conjecture about traces of quantum representations of pseudo-Anosov mapping classes and we give some supporting evidence.
\begin{named} {Theorem \ref{thm:integertrace}} Let $f\in \mathrm{Mod}(\Sigma)$ be periodic of order $N.$ For any odd integer $r\geqslant 3$, with $\mathrm{gcd}(r, N)=1$, we have
$|\mathrm{Tr} \rho_{r,c}(f)| \in {\mathbb{Z}},$
for any $U_r$-coloring $c$ of $\partial \Sigma,$ and any primitive
$2r$-root of unity. \end{named}
\begin{proof} For any choice of a primitive $2r$-root of unity $\zeta_{2r}$, the traces $\mathrm{Tr} \rho_{r,c}(f)$ lie in the field $ {\mathbb{Q}}[\zeta_{2r}]$. Since ${\mathbb{Z}}$ is invariant under the action of the Galois group of this field, the property $|\mathrm{Tr} \rho_{r,c}(f)| \in {\mathbb{Z}}$ does not depend on the choice of the root of unity used to define the TQFT.
In the rest of the proof, for any positive integer $n,$ we write $\zeta_n=e^{\frac{2i\pi}{n}}.$
By choosing a lift we can consider $\rho_{r,c}(f)$ as an element of $\mathrm{Aut}(RT_r(\Sigma,c))$ instead of $\mathbb{P}\mathrm{Aut}(RT_r(\Sigma,c)).$
Since $f^N=\mathrm{id}$ and $\rho_{r,c}$ is a projective representation with projective ambiguity a $2r$-root of unity, we have $\rho_{r,c}(f)^N=\zeta_{2r}^k\ \mathrm{Id}_{RT_r(\Sigma,c)}$ for some integer $k$. Since $N$ and $r$ are coprime, by changing the lift $\rho_{r,c}(f)$ by a power of $\zeta_{2r}$ we can in fact assume that $\rho_{r,c}(f)^N=\pm \mathrm{Id}_{RT_r(\Sigma,c)}.$ Then $\rho_{r,c}(f)$ is diagonalizable, with eigenvalues that are $2N$-th roots of unity. This implies that $|\mathrm{Tr} \rho_{r,c} (f)| \in {\mathbb{Z}}[\zeta_{2N}].$
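For completeness, here is the elementary congruence behind this choice of lift (a routine verification): replacing $\rho_{r,c}(f)$ by $\zeta_{2r}^{a}\rho_{r,c}(f)$ turns the relation $\rho_{r,c}(f)^N=\zeta_{2r}^k\,\mathrm{Id}_{RT_r(\Sigma,c)}$ into
$$\big(\zeta_{2r}^{a}\rho_{r,c}(f)\big)^{N}=\zeta_{2r}^{aN+k}\,\mathrm{Id}_{RT_r(\Sigma,c)},$$
so it suffices to find $a$ with $aN+k\equiv 0$ or $r \pmod{2r}.$ Since $\mathrm{gcd}(N,r)=1$ and $r$ is odd, $\mathrm{gcd}(N,2r)$ is $1$ or $2$; in the first case the congruence $aN\equiv -k \pmod{2r}$ is solvable, and in the second case exactly one of $-k$ and $r-k$ is even, so one of the two congruences is solvable.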
On the other hand we know that the traces of quantum representations $\rho_{r, c}$ take values in ${\mathbb{Q}} [\zeta_{2r}]$, and the same is true for $RT_r$ invariants of any closed $3$-manifold with a colored link (see Theorem \ref{thm:TQFTdef}-(2)). Thus we have
$$|\mathrm{Tr} \rho_{r,c}(f)| \in {\mathbb{Z}}[\zeta_{2N}]\cap {\mathbb{Q}}[\zeta_{2r}].$$ By elementary number theory, it is known that $${\mathbb{Q}}[\zeta_m]\cap {\mathbb{Q}}[\zeta_n]= {\mathbb{Q}}[\zeta_{d}],$$ where $d=\mathrm{gcd}(m, n)$. See, for example, \cite[Theorem 3.4]{Konrad} for a proof of this fact.
Hence, since $\mathrm{gcd}(2N, 2r)=2\,\mathrm{gcd}(N, r)=2$, ${\mathbb{Q}}[\zeta_2]={\mathbb{Q}}[-1]={\mathbb{Q}}$, and the algebraic integers in ${\mathbb{Q}}$ are the integers,
we have ${\mathbb{Z}}[\zeta_{2N}]\cap {\mathbb{Q}}[\zeta_{2r}]={\mathbb{Z}}.$
Thus we obtain $|\mathrm{Tr} \rho_{r,c}(f)| \in {\mathbb{Z}}.$ \end{proof}
It is known that the mapping torus of a class $f\in \mathrm{Mod}(\Sigma)$ is a Seifert fibered manifold if and only if $f$ is periodic. In particular, the complement $S^3{\smallsetminus} T_{p,q}$ of a $(p, q)$ torus link is fibered with periodic monodromy of order $pq$ \cite{orlik:seifert-manifolds}. As a corollary of Theorem \ref{thm:integertrace} we have the following result which in particular implies Corollary \ref{cor:toruslinks} that settles a question of \cite{Chen-Yang}.
\begin{corollary}\label{integerTV} Let $M_f$ be the mapping torus of a periodic mapping class $f\in \mathrm{Mod}(\Sigma)$ of order $N$. Then, for any odd integer $r\geqslant 3$, with $\mathrm{gcd}(r, N)=1$, we have $TV_r(M_f)\in {\mathbb{Z}},$
for any choice of root of unity. \end{corollary}
\begin{proof}
As in the proof of Theorem \ref{amu-ltv}, we write
$$TV_r(M_f)=\underset{\mathbf{c}}{\sum} |\mathrm{Tr} \rho_{r,\mathbf{c}}(f)|^2$$ where $f$ is the monodromy and the sum is over $U_r$-colorings of the components of $\partial \Sigma$.
But if $r$ is coprime to $N$, this sum is a sum of integers by Theorem \ref{thm:integertrace}. \end{proof}
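For a sample application, consider the trefoil $T_{2,3}$: its complement fibers over $S^1$ with periodic monodromy of order $pq=6$, so Corollary \ref{integerTV} gives $TV_r(S^3{\smallsetminus} T_{2,3})\in {\mathbb{Z}}$ for every odd $r\geqslant 3$ not divisible by $3$, and for any choice of root of unity.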
Corollary \ref{integerTV} implies that for mapping tori of periodic classes the Turaev-Viro invariants take integer values at infinitely many levels, and this property is independent of the choice of the root of unity. In contrast with this, we have the following, where $lTV$ is as defined in the Introduction.
\begin{proposition}\label{prop:integertrace} Let $f \in \mathrm{Mod}(\Sigma)$ be such that $lTV(M_f)>0.$ Then, there can be at most finitely many odd integers $r$ such that $TV_r(M_f)\in {\mathbb{Z}}$. \end{proposition} \begin{proof} As in the proofs of Corollary \ref{integerTV} and Theorem \ref{thm:integertrace}, for any odd $r\geq 3$ and any choice of a primitive $2r$-root of unity $\zeta_{2r}$, the invariant $TV_r(M_f, e^{\frac{2i\pi}{r}})$ lies in $\mathbb F={\mathbb{Q}} [e^{\frac{i\pi}{r}}].$
Suppose that there are arbitrarily large odd levels $r$ such that $TV_r(M_f, e^{\frac{2i\pi}{r}})\in {\mathbb{Z}}$. Then since ${\mathbb{Z}}$ is left fixed under the action of the Galois group of $\mathbb F$, we would have $TV_r(M_f, e^{\frac{i\pi}{r}})=TV_r(M_f, e^{\frac{2i\pi}{r}})$, for all $r$ as above.
But this is a contradiction: Indeed, on the one hand, the assumption $lTV(M_f)>0$ implies that the invariants $TV_r(M_f, e^{\frac{2i\pi}{r}})$ grow exponentially in $r$; that is, $TV_r(M_f, e^{\frac{2i\pi}{r}}) > \exp(Br)$ for some constant $B>0$. On the other hand, by combining results of \cite{Garoufalidis} and \cite{BePe}, the invariants $TV_r(M_f, e^{\frac{i\pi}{r}})$ grow at most polynomially in $r$; that is, $TV_r(M_f, e^{\frac{i\pi}{r}})\leqslant D r^N$ for some constants $D>0$ and $N$. \end{proof}
As discussed earlier, the Turaev-Viro invariants volume conjecture of \cite{Chen-Yang} implies that for all pseudo-Anosov mapping classes $f$ we have $lTV(M_f)>0,$ and the latter hypothesis implies that $f$ satisfies the AMU Conjecture. These implications and Proposition \ref{prop:integertrace} prompt the following conjecture, suggesting that the Turaev-Viro invariants of mapping tori distinguish pseudo-Anosov mapping classes from periodic ones.
\begin{conjecture}\label{conj:integertrace} Suppose that $f\in \mathrm{Mod}(\Sigma)$ is pseudo-Anosov. Then, there can be at most finitely many odd integers $r$ such that $TV_r(M_f)\in {\mathbb{Z}}.$ \end{conjecture}
\end{document} |
\begin{document}
\begin{abstract} A partial automorphism of a finite graph is an isomorphism between its vertex-induced subgraphs. The set of all partial automorphisms of a given finite graph forms an inverse monoid under composition {(of partial maps)}. We describe the algebraic structure of such inverse monoids by means of standard tools of inverse semigroup theory, {namely} Green's relations and some properties of the natural partial order, and give a characterization of inverse monoids which arise as inverse monoids of partial graph automorphisms. We extend our results to digraphs and edge-colored digraphs as well. \end{abstract}
\title{Inverse monoids of partial graph automorphisms}
\section{Introduction}
The study of almost any type of combinatorial structures includes problems of distinguishing different objects via the use of isomorphisms, and, more specifically, leads to questions concerning {the} automorphism groups of the structures under consideration. The problem of determining the automorphism group for a specific combinatorial structure or a class of combinatorial structures is a notoriously hard computational task whose exact complexity is the subject of intense research efforts. For example, the time complexity of determining the automorphism group of a finite graph (as a function of its order) has been the focus of a series of recent highly publicized lectures of Babai \cite{Bab}.
Knowledge of the automorphism group of a specific combinatorial object allows one to make various claims about its structure. As an extreme example, the fact that the induced action of the automorphism group of a graph on the set of unordered pairs of its vertices is transitive yields immediately that the graph under consideration is either the complete graph $K_n$ or its complement $\widetilde{K_n}$. Similarly, vertex-transitivity of the action of the automorphism group of a graph yields that each vertex of the graph is contained in the same number of cycles of prescribed length (e.g., \cite{Fil&Jaj}).
However, the usefulness of knowing the automorphism group of a graph decreases with an increasing number of orbits of its action on the vertices. Invoking once again an extreme case, knowing that the automorphism group of a graph is trivial yields almost no information about the structure of the graph (those who would like to disagree should consider the fact that almost all finite graphs have a trivial automorphism group, \cite[Corollary 2.3.3]{God&Roy}).
This observation appears to suggest rather limited use of algebraic tools in general graph theory. To counter this view, we propose an approach that relies on the use of an algebraic theory, often viewed as a generalization of group theory, which applies to all finite graphs and relies on the concept of partial isomorphisms of combinatorial structures \cite{Lau3,Lawson}. Namely, we propose to study {\emph{the partial automorphism monoid $\paut\Gamma$ of a finite graph $\Gamma$}, which consists of all isomorphisms between the vertex-induced subgraphs of $\Gamma$, called partial automorphisms of $\Gamma$,} with the operation of {the usual} composition {of partial maps}. {Unlike the case of automorphism groups, no finite graph with at least two vertices admits a trivial inverse monoid of partial automorphisms. Indeed, partial identity maps are always partial automorphisms, and these already account for exponentially more elements than the number of vertices of the graph. In addition, as we will see, there are usually many more partial automorphisms.
In this paper, we make steps towards developing a general theory of partial automorphism monoids of combinatorial structures, in particular, of finite graphs, digraphs and edge-colored digraphs.
Our main aim is to understand which inverse monoids arise as partial automorphism monoids of these structures.
We begin by reviewing {the} relevant concepts of graph theory in Section \ref{sec:graph}, {followed by the basics needed} from inverse semigroup theory in Section \ref{sec:sem}. In Section \ref{sec:struc}, we {formulate the most important properties which determine} the structure of the partial automorphism monoid of a graph. We then address two closely related classification problems concerning the partial automorphism monoids of graphs, digraphs and edge-colored digraphs.
In Theorems \ref{thm:pautgraph} and \ref{thm:pautdigraph}, we describe those inverse monoids which arise as partial automorphism monoids of graphs, that is, we characterize those inverse submonoids $\mathcal{S}$ of the so-called symmetric inverse monoid on $X$ for which a {graph (resp. digraph or edge-colored digraph) $\Gamma$ exists on the set of vertices $X$ such that $\paut\Gamma$ is {\em equal} to $\mathcal{S}$.
This problem is similar to the more specialized problem of the classification of finite groups that admit a Graphical Regular Representation (for short, GRR). A finite group $G$ admits a GRR if there exists a set of edges $E$ on $G$ with the property that the automorphism group of the graph $(G,E)$ is equal to $G_L$, the left multiplication regular representation of $G$ in its action on itself \cite{God}. Cayley's left multiplication representation of groups has a direct analogue in inverse semigroup theory in the form of the Wagner--Preston representation \cite[Theorem 1.5.1]{Lawson}, which represents $\mathcal{S}$ as a set of partial permutations on itself. These partial permutations of $\mathcal{S}$ give rise to color-preserving partial automorphisms of its Cayley graph, just like in the group case; however, the Cayley graph of an inverse semigroup $\mathcal{S}$ has many more partial automorphisms than just those induced by $\mathcal{S}$.
Thus, an exact equivalent to the GRR classification would be the classification of the finite inverse monoids $\mathcal{S}$ that admit a set of edges $E$ on $\mathcal{S}$, {such that} the partial automorphism monoid of the graph $(\mathcal{S},E)$ is equal to the partial permutation representation of $\mathcal{S}$ on itself given by the Wagner--Preston theorem. However, by the observation made above concerning the number of vertices and the number of partial automorphisms of a graph, the existence of such a graph $(\mathcal{S},E)$ is impossible for any non-trivial inverse monoid $\mathcal{S}$.
}
In Section \ref{sec:frucht2}, building on Theorems \ref{thm:pautgraph} and \ref{thm:pautdigraph}, we give a similar description in Theorems \ref{thm:pautgraph2} and \ref{thm:pautdigraph2} for the finite inverse monoids which are {\em isomorphic} to {partial automorphism monoids of} finite graphs, digraphs or edge-colored digraphs. It turns out that the class of finite inverse monoids arising as {partial automorphism monoids of} (edge-colored di)graphs is very restrictive. This is in stark contrast to the well-known result of Frucht \cite{fru} who proved that every finite group is isomorphic to the automorphism group of a finite graph. A kind of `extended' Frucht theorem for finite inverse monoids has been obtained in \cite{Nem}, where it is proved that every finite inverse monoid is isomorphic to the partial weighted automorphism monoid of a finite weighted graph. The construction takes a cleverly modified version of the Cayley color graph of an inverse monoid $\mathcal{S}$, and adds weights to the vertices in order to avoid the partial automorphisms not arising from the Wagner--Preston representation. The resulting structure therefore has a partial automorphism monoid isomorphic to $\mathcal{S}$.
A slightly different approach is taken in \cite{Sie} in order to achieve a theorem of a similar flavor: here, the Cayley color graph of $\mathcal{S}$ remains unchanged, but the notion of partial automorphisms is restricted sufficiently so that only the partial automorphisms arising from the Wagner--Preston representation satisfy the definition. The inverse monoid of such partial automorphisms is therefore again isomorphic to $\mathcal{S}$.
We feel obliged to address the similarities and differences between our paper and the recent paper \cite{Chi&Ple} that appeared while we were preparing our article. The authors of \cite{Chi&Ple} consider two types of inverse monoids associated with a finite undirected graph which is allowed to have multiple edges and loops: the first type being the inverse monoids of all partial automorphisms between any two (not necessarily vertex-induced) subgraphs of such graphs, and the second type being the inverse monoids considered in our paper. The majority of the results obtained in \cite{Chi&Ple} deal with the partial automorphism monoids of the first type (not considered in our paper), and focus on the structure of their idempotents and ideals. None of the results obtained in our paper is included among the results contained in \cite{Chi&Ple}.
{Finally, let us mention papers devoted to finite monoids and semigroups and their relation to graphs that take a different approach from ours. Graphs play a fundamental role in studying certain classes of finite monoids and semigroups (see, e.g., \cite{Cam2}, \cite{Pin}, \cite{RhodesSteinberg}), while monoids attached to graphs have also been considered (see, e.g., \cite{Cam1}). The monoids and semigroups appearing in this type of research consist almost exclusively of transformations (e.g., transition monoids of automata, transformation semigroups generated by idempotent transformations coming from graphs, endomorphism monoids of graphs) rather than of partial permutations considered in our paper.}
\section{Preliminaries from Graph Theory}\label{sec:graph}
A \emph{graph} is an ordered pair $\Gamma=(V,E)$, where $V$ is the set of vertices, and $E$ is the set of (undirected) edges, which is a set of $2$-element subsets of $V$. Similarly, a \emph{digraph} is an ordered pair $\Gamma=(V,E)$, with $V$ being the set of vertices, and $E$ the set of (directed) edges, consisting of ordered pairs of vertices. Graphs can naturally be regarded as special digraphs {where each edge $\{u,v\}$ of the graph is replaced by} two directed edges of opposite directions, $(u, v)$ and $(v, u)$. Notice that digraphs can also have \emph{loops}, while graphs or digraphs associated to graphs cannot. In both cases, we use the notation $V(\Gamma)$ and $E(\Gamma)$ for $V$ and $E$, respectively. Furthermore, we also consider \emph{edge-colored digraphs}, that is, structures $\Gamma=(V, E_1, \ldots, E_l)$, where $\{1,\ldots, l\}$ is the set of colors, and $E_c\subseteq V \times V \ (c\in \{1,2,\ldots,l\})$ are the pairwise disjoint sets of (directed) edges of color $c$. In this case, $V(\Gamma)$ and $E(\Gamma)$ stand for $V$ and $\bigcup_{c=1}^l E_c$, respectively. For example, Cayley color graphs constitute a natural class of edge-colored digraphs.
A \emph{partial automorphism} of an edge-colored digraph $\Gamma$ (as well as of a digraph or a graph) is an isomorphism between two vertex-induced subgraphs of $\Gamma$, that is, a bijection $\varphi\colon V_1 \to V_2$ between two sets of vertices $V_1, V_2 \subseteq V(\Gamma)$ such that any pair of vertices $u, v \in V_1$ satisfies the condition $(u,v) \in E_c$ if and only if $(\varphi(u), \varphi(v)) \in E_c$ for any color $c$. The set of {all} partial automorphisms of $\Gamma$ together with the operation of {the usual composition of partial maps} form an inverse monoid, which we denote by $\paut\Gamma$. {This composition, as well as further concepts needed later and several basic properties of $\paut\Gamma$ are discussed} in detail in the next section. In the rest of the paper, vertex-induced subgraphs are simply called induced subgraphs.
As argued in the introduction, we believe that developing a full-fledged theory of partial automorphism monoids of graphs might lead to discoveries in areas of graph theory that have traditionally resisted the use of algebraic methods. As an example of one such possible impact, we mention the long-standing open problem called \emph{Graph Reconstruction Conjecture}, first introduced in \cite{Kel}. Given a finite graph $\Gamma=(\{v_1,\ldots,v_n\},E)$ of order $n$, let the \emph{deck of} $\Gamma$, $\operatorname{Deck}(\Gamma)$, be the multiset consisting of the subgraphs $\Gamma - v_i\ (1 \leq i \leq n)$ induced by ($n-1$)-vertex subsets of $\Gamma$. The Graph Reconstruction Conjecture predicts the unique `reconstructability' of any graph $\Gamma$ of order at least $3$ from its $\operatorname{Deck}(\Gamma)$, or, in other words, predicts that the multisets $\operatorname{Deck}(\Gamma_1)$ and $\operatorname{Deck}(\Gamma_2)$, up to isomorphism, coincide if and only if $\Gamma_1$ and $\Gamma_2$ are isomorphic (in notation, $\Gamma_1 \cong \Gamma_2$) for any pair of graphs $\Gamma_1,\Gamma_2$ of order at least $3$. This conjecture appears closely related to partial automorphisms \cite{Koc1,Ill}. Namely, any two induced subgraphs $\Gamma - v_i$ and $\Gamma - v_j\ (i\neq j)$ admit at least one partial isomorphism $\varphi$ with domain of size $n-2$, namely the `identity' isomorphism between $(\Gamma - v_i)-v_j$ and $(\Gamma - v_j)-v_i$. Clearly, in the case where $\Gamma - v_i$ and $\Gamma - v_j$ admit exactly one partial isomorphism $\varphi$ with domain of size $n-2$, $\Gamma$ is reconstructable from $\Gamma - v_i$ and $\Gamma - v_j$ alone, by `gluing' together $v$ and $\varphi(v)$, for each $v$ in the domain of $\varphi$. Furthermore, two induced subgraphs $\Gamma - v_i$ and $\Gamma - v_j$ of $\Gamma$ (or sometimes the vertices $v_i, v_j$) are said to be \emph{pseudo-similar} if $\Gamma - v_i \cong \Gamma - v_j$, but no automorphism of $\Gamma$ maps $v_i$ to $v_j$ and $\Gamma - v_i$ to $\Gamma - v_j$. Or, using the language of partial automorphisms, $\Gamma - v_i$ and $\Gamma - v_j$ are pseudo-similar, if $\paut\Gamma$ contains a partial automorphism mapping $\Gamma - v_i$ to $\Gamma - v_j$ that cannot be extended to an automorphism of $\Gamma$. It has been claimed in \cite{lau} that the Graph Reconstruction Conjecture holds for graphs containing no pseudo-similar vertices (more precisely, an alleged `proof' of the Graph Reconstruction Conjecture is claimed to have been submitted for publication that falsely assumed the non-existence of pseudo-similar vertices). This suggests that a valid proof of the Graph Reconstruction Conjecture might be found through better understanding of the finite graphs containing pseudo-similar vertices, or equivalently, through understanding the graphs $\Gamma$ with $\paut\Gamma$ containing elements with domain size $n-1$ that cannot be extended to automorphisms of $\Gamma$. While graphs whose decks split into pairs of pseudo-similar subgraphs can be easily constructed from graphical regular representations of groups of odd order \cite{Kim&Sch&Sto}, graphs containing arbitrarily large subsets of mutually pseudo-similar subgraphs, while they exist \cite{Kim&Sch&Sto,Lau2}, are generally hard to find and tend to be large compared to the size of their sets of mutually pseudo-similar subgraphs. The topic of constructing graphs containing a small number of mutually pseudo-similar vertices has also been the focus of intense research \cite{God&Koc1,God&Koc2,Lau4}. 
Inspired by these results, we suggest that the following problem from \cite{Kim&Sch&Sto} may be accessible via the use of methods of partial automorphism monoids.
\begin{Probl} Determine the orders of the smallest graphs $\Gamma_k$ containing $k \geq 2$ mutually pseudo-similar vertices. \end{Probl}
\section{Preliminaries from semigroup theory}\label{sec:sem}
In this section, we provide an introduction to our main tool {needed in the paper}: the algebraic structure of {finite} inverse monoids. We follow \cite{Lawson}, a monograph addressed to a much wider readership than researchers interested in semigroup theory, in its conventional notation of maps (functions): given a map $\varphi$, the element assigned to an element $x$ is denoted $\varphi(x)$, and the composition of maps $\varphi$ and $\psi$, where $\varphi$ is applied first and $\psi$ after that, is denoted $\psi\varphi$. {The interested reader is referred to \cite{Lawson} for further details of the theory of inverse semigroups.}
\subsection{Partial permutations}
Given a set $X$, we call a bijection between two subsets $Y, Z \subseteq X$ a \emph{partial permutation} of $X$. {The set of all partial permutations of $X$ is denoted $\is X$.}
In the paper, we will always assume $X$ to be finite. We allow for $Y=Z=\emptyset$, in which case we obtain the empty map. If $\varphi\colon Y\to Z\in \is X$, then $Y$ and $Z$ are the \emph{domain} and \emph{range} of $\varphi$, denoted by $\operatorname{dom}\varphi$ and $\operatorname{ran}\varphi$, respectively. The common size $|\operatorname{dom}\varphi|=|\operatorname{ran}\varphi|$ of the sets $\operatorname{dom}\varphi$ and $\operatorname{ran}\varphi$ is called the \emph{rank} of $\varphi$. {The inverse of $\varphi$, considered as a bijection, is also a partial permutation of $X$ denoted by $\varphi^{-1}$, and $\operatorname{dom}\varphi^{-1}=\operatorname{ran}\varphi$ and $\operatorname{ran}\varphi^{-1}=\operatorname{dom}\varphi$.}
Partial permutations can be composed: given two maps $\varphi_1\colon Y_1 \to Z_1$ and $\varphi_2\colon Y_2 \to Z_2$, one obtains their composition $\varphi_2 \varphi_1$ by composing them on the largest {subset of $X$} where it `makes sense' to do so, that is, on ${\operatorname{dom} \varphi_2\varphi_1=}\varphi_1^{-1}(Z_1 \cap Y_2)$, where by definition $(\varphi_2 \varphi_1)(x)=\varphi_2(\varphi_1(x))$ for any $x$. {The range of $\varphi_2 \varphi_1$ is $\operatorname{ran} \varphi_2 \varphi_1=\varphi_2(Z_1 \cap Y_2)$.} It may happen that $Z_1 \cap Y_2=\emptyset$, in which case $\varphi_2 \varphi_1$ is the empty map.
One of the first theorems one learns about permutations is that they can be written as {compositions of pairwise disjoint cyclic permutations} in a unique way, where the cyclic permutations correspond to the orbits. An analogous theorem exists for partial permutations as well, as described in \cite{Lips}, although orbits are slightly more complicated, and a partial permutation is decomposed into the union of its restrictions to the orbits rather than as a product. We slightly deviate from the notation of \cite{Lips}; in particular, the right-to-left composition of maps followed in the paper induces notation which is quite unusual both in semigroup and permutation group theories.
If $\varphi$ is a partial permutation of $X$ and $x \in \operatorname{dom} \varphi$, then either there exists some $n \in \mathbb{N}$ such that $\varphi^{n}(x)=x$, in which case
the orbit of $x$ is permuted cyclically by $\varphi$ (which is always the case if $\varphi$ is a permutation), or it may be that $\varphi^{n}(x)$ eventually ends up outside $\operatorname{dom}\varphi$, in which case the orbit forms what is called a path.
A cyclic permutation cyclically permutes all the elements it does not fix. In the realm of partial permutations, this notion is not as useful as that of partial permutations which permute all elements in their domain cyclically. We call these partial permutations \emph{cycles}. Of course, one can turn a nonidentical cyclic permutation into a cycle by omitting fixed points from its domain, and vice versa. The cycle $x_1\mapsfrom x_k\mapsfrom \cdots\mapsfrom x_2\mapsfrom x_1$, where $x_1, x_2,\ldots, x_k$ are pairwise distinct and $k \geq 1$, is denoted by $(x_k\, \cdots\, x_2\, x_1)$. A {\emph{path}} is a partial permutation of the form $x_k\mapsfrom \cdots\mapsfrom x_2\mapsfrom x_1$, denoted by $[x_k\, \cdots\, x_2\, x_1)$, where $x_1, x_2, \ldots x_k$ are all distinct and $k \geq 2$. Notice that $\operatorname{dom} [x_k\, \cdots\, x_2\,x_1)=\{x_1, x_2,\ldots, x_{k-1}\}$ and $\operatorname{ran} [x_k\, \cdots\, x_2\,x_1)=\{x_2, x_3,\ldots, x_k\}$.
To understand how partial permutations decompose into cycles and paths, consider for instance the following partial permutations of the set $X=\{1,\ldots, 6\}$: { \begin{eqnarray*} \varphi\colon \{1,2,3,4\} \to \{1,2,4,5\}&\!\!\!\!\!,& 1 \mapsto 2,\ 2 \mapsto 1,\ 3 \mapsto 4,\ 4 \mapsto 5;\\ \psi\colon \{1,2,3,4,5,6\} \to \{1,2,3,4,5,6\}&\!\!\!\!\!,& 1 \mapsto 2,\ 2 \mapsto 1,\ 3 \mapsto 4,\ 4 \mapsto 5,\ 5 \mapsto 3,\ 6 \mapsto 6. \end{eqnarray*}} {Notice that $\varphi$} decomposes into the cycle $(2\,1)$ and the path $[5\,4\,3)$, {where the sets $\{1,2\},\ \{3,4,5\}$ are the orbits of $\varphi$. That is, $\varphi$ is the union of these two partial permutations, which we denote by $({2\,1}) \vee [5\,4\,3)$. Similarly, $\psi=(2\,1)\vee (5\,4\,3)\vee (6)$, and it is a permutation of $X$. Notice that the last member $(6)$ of this union cannot be deleted because the partial permutation $(2\,1)\vee (5\,4\,3)$ has $\{1,2,3,4,5\}$ as its common domain and range rather than $X$.}
Two partial permutations $\varphi,\psi\in \is X$ are said to be {\em disjoint} if the unions $\operatorname{dom}\varphi \cup \operatorname{ran}\varphi$ and $\operatorname{dom}\psi \cup \operatorname{ran}\psi$ are disjoint. It is true that every partial permutation in $\is X$ is {the union of pairwise disjoint paths and cycles}, and the set of the members of this union is unique. Using this form, one can multiply partial permutations similarly to how it is usually done in the case of permutations (from right to left) {by starting with an element of the domain of the right factor, finding its image, then finding the image of this element, etc., but paths make the calculation somewhat more complicated because the choice of a starting element determines whether the whole path is obtained or only a part of it.} For example, $([4\,3\,1) \vee (2))([4\,1) \vee (3\,2))=[4\,2)\vee [4\,2\,3)=[4\,2\,3)$.
The inverse of a partial permutation is obtained by reversing all paths and cycles, e.g.\ $(({2\,1}) \vee [5\,4\,3))^{-1}=({1\,2}) \vee [3\,4\,5)$.
To avoid having to separate too many cases later in our proofs, we will sacrifice uniqueness and allow for the possibility of $[x_k\, \cdots\, x_2\,x_1)$ denoting a cycle with $x_1=x_k$, but still indicate cycles in the usual form when it is more convenient. For example, this will allow us to write any $\varphi\in \is X$ with $\operatorname{dom} \varphi=\{x_1, x_2,\ldots,x_k\}$ in the form $\varphi=[\varphi(x_1)\, x_1)\vee [\varphi(x_2)\, x_2)\vee \cdots \vee [\varphi(x_k)\, x_k)$.
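For instance, with the partial permutation $\psi$ considered above, this convention allows us to write
$$\psi=(2\,1)\vee (5\,4\,3)\vee (6)=[2\,1)\vee[1\,2)\vee[4\,3)\vee[5\,4)\vee[3\,5)\vee[6\,6),$$
where $[6\,6)$ denotes the cycle $(6)$, and each member of the union is the restriction of $\psi$ to a single element of its domain.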
Let us further explore the partial operation $\vee$ we used in the decomposition. Given arbitrary partial permutations $\varphi$ and $\psi$, their union is a map if and only if they coincide on their common domain $\operatorname{dom} \varphi \cap \operatorname{dom} \psi$. In this case, this map is injective, that is, a partial permutation itself, if and only if $\varphi$ and $\psi$ map elements outside their common domain into disjoint sets. In this case, we call $\varphi$ and $\psi$ \emph{compatible}, and denote their union by $\varphi \vee \psi$. This is in fact the least upper bound of $\varphi$ and $\psi$ in the partial order of restriction, defined by $\varphi_1 \leq \varphi_2$ if $\varphi_1$ is the restriction of $\varphi_2$ to some set $Y \subseteq X$, that is, if $\varphi_1=\varphi_2\operatorname{id}_Y$. For this reason, from now on we refer to $\varphi \vee \psi$ as the \emph{join} of $\varphi$ and $\psi$. Given an arbitrary subset $S$ of $\is X$, its join exists if and only if the elements of $S$ are pairwise compatible, and in this case, its join is the union of its elements.
The operations of composition and taking inverse {in $\is X$} distribute over joins. More precisely, for any $\varphi,\psi,\eta\in\is X$, if $\varphi\vee\psi$ exists then all of $\varphi\eta\vee\psi\eta$, $\eta\varphi\vee\eta\psi$, and $\varphi^{-1}\vee\psi^{-1}$ exist, and we have $(\varphi\vee\psi)\eta=\varphi\eta\vee\psi\eta$, $\eta(\varphi\vee\psi)=\eta\varphi\vee\eta\psi$, and $(\varphi\vee\psi)^{-1}=\varphi^{-1}\vee\psi^{-1}$.
\subsection{Inverse monoids}\label{ssec:invmon}
The concept of inverse monoids was independently defined by Wagner and Preston in the early 50s as the algebraic abstraction of partial permutations. The set {$\is X$} of {all} partial permutations of a set $X$ forms a monoid under composition, $\operatorname{id}_X$ being the identity element. Elements also have `local' inverses: for any partial permutation $\varphi\colon Y \to Z$, its inverse $\varphi^{-1}\colon Z \to Y$ has the property that $\varphi^{-1}\varphi=\operatorname{id}_{Y}$ and $\varphi\varphi^{-1}=\operatorname{id}_{Z}$, and therefore the identities $\varphi\varphi^{-1}\varphi=\varphi$ and $\varphi^{-1}\varphi\varphi^{-1}=\varphi^{-1}$ hold. Moreover, $\varphi^{-1}$ is the only element with these properties.
Generalizing the above, a monoid $\mathcal{S}$ {is said to be} an \emph{inverse monoid} if for every $s \in \mathcal{S}$ there exists a unique element $s^{-1} \in \mathcal{S}$ called the inverse of $s$ such that $ss^{-1}s=s$ and $s^{-1}ss^{-1}=s^{-1}$ hold. Note that the unary operation of taking inverse has the properties $(s^{-1})^{-1}=s$ and $(st)^{-1}=t^{-1}s^{-1}$ for any $s,t\in \mathcal{S}$. {The inverse monoid of all partial permutations of $X$ is called the \emph{symmetric inverse monoid on $X$}, and is also denoted by $\is X$.}
Symmetric inverse monoids are the archetypal examples of inverse monoids. They are archetypal in the same sense as symmetric groups are the archetypal groups: in parallel with Cayley's theorem, the Wagner--Preston theorem states that every inverse monoid can be embedded into a suitable symmetric inverse monoid.
Another example of an inverse monoid is the set $\paut \Gamma$ of partial automorphisms of a finite edge-colored digraph (graph, digraph) $\Gamma$; we again allow for the empty map as well. Each partial automorphism of $\Gamma$, that is, an isomorphism between two induced subgraphs of $\Gamma$, is a partial permutation of $V(\Gamma)$, thus, $\paut \Gamma \subseteq \is {V(\Gamma)}$. It is easy to see that $\paut \Gamma$ is closed under partial {composition} and taking inverses and contains the identity element $\operatorname{id}_{V(\Gamma)}$, thus, $\paut \Gamma$ forms an \emph{inverse submonoid} of $\is {V(\Gamma)}$. If $\varphi\in \paut \Gamma$ then the subgraphs of $\Gamma$ induced on the vertex sets $\operatorname{dom} \varphi$ and $\operatorname{ran} \varphi$ are also denoted by $\operatorname{dom} \varphi$ and $\operatorname{ran} \varphi$, respectively.
The partial order discussed above induced by restriction on $\is X$ can be extended to any inverse monoid $\mathcal{S}$ as follows. First, consider the set $E(\mathcal{S})$ of \emph{idempotent} elements in $\mathcal{S}$ satisfying $e^2=e$. These are the elements arising in the form $ss^{-1}$ {for some $s\in \mathcal{S}$}. Idempotents in $E(\mathcal{S})$ necessarily commute and the product of two idempotents is again an idempotent. {Thus, the set of idempotents $E(\mathcal{S})$ forms a semilattice under multiplication.} Furthermore, an idempotent is always its own inverse, which means that $E(\mathcal{S})$ forms an inverse submonoid.
In the case of $\mathcal{S}=\is X$, the set of idempotents is $E(\mathcal{S})=\{\operatorname{id}_{Y}\colon Y \subseteq X\}$ with $\operatorname{id}_\emptyset$ being the empty map. If $Y, Z \subseteq X$, then $\operatorname{id}_Y \operatorname{id}_Z=\operatorname{id}_{Y \cap Z}$, so the subsemilattice {$E(\mathcal{S})$ is} isomorphic to the semilattice $(\mathcal P(X), \cap)$. The semilattice structure defines a partial order on $E(\mathcal{S})$, and this partial order can be naturally extended to $\mathcal{S}$ by putting $s \leq t$ if and only if there exists $e \in E(\mathcal{S})$ such that $s=te$. This is called the \emph{natural partial order} on $\mathcal{S}$. Notice that this is indeed the restriction order on $\is X$. {Moreover, if} $\Gamma$ is a finite edge-colored digraph, then $E(\is{V(\Gamma)})\subseteq \paut{\Gamma}$, therefore $E(\paut{\Gamma})=E(\is{V(\Gamma)})$. So the induced natural partial order on $\paut{\Gamma}$ is also the restriction order.
The concept of compatibility we introduced in $ \is X$ can also be extended to abstract inverse monoids. Two partial permutations $\varphi$ and $\psi$ in $\is X$ coincide on their common domain if and only if $\varphi^{-1}\psi$ is an idempotent, while $\varphi$ and $\psi$ map elements outside their common domain into disjoint sets if and only if $\varphi\psi^{-1}$ is an idempotent. In an abstract inverse monoid $\mathcal{S}$, we will call a pair of elements $a,b \in \mathcal{S}$ \emph{compatible} if $ab^{-1}$ and $a^{-1}b$ are idempotents. A \emph{subset} of $\mathcal{S}$ is called \emph{compatible} if all its elements are pairwise compatible. Although every compatible subset of a symmetric inverse monoid has a join (a least upper bound), compatibility is not a sufficient condition for the existence of a join in general inverse monoids. For instance, let $\Gamma_0$ be the graph in Figure \ref{fig:graph}, and consider the elements $[1\,2)$ and $[4\,3)$ of $\paut {\Gamma_0}$. These are compatible, as $[1\,2)[4\,3)^{-1}=[1\,2)^{-1}[4\,3)=\operatorname{id}_\emptyset$, but any common upper bound of $[1\,2)$ and $[4\,3)$ in the restriction order would need to take the edge $\{2,3\}$ to the non-edge $\{1,4\}$, and so their join in $\paut{\Gamma_0}$ does not exist.
\begin{figure}
\caption{The graph $\Gamma_0$}
\label{fig:graph}
\end{figure}
An inverse monoid $\mathcal{S}$ with zero is called {\em Boolean} if the semilattice $E(\mathcal{S})$ is the meet semilattice of a Boolean algebra. {Notice that the zero element of a Boolean inverse monoid $\mathcal{S}$ is necessarily the minimum element of $E(\mathcal{S})$.} Both symmetric inverse monoids and partial automorphism monoids {of edge-colored digraphs (for brevity, partial automorphism monoids)} are Boolean: in both cases, the semilattice of idempotents is isomorphic to $(\mathcal P(X), \cap)$, and $\mathcal P(X)$ is of course a Boolean algebra. Furthermore, the minimum element $\operatorname{id}_{\emptyset}$ is indeed a zero element for the whole monoid.
In this paper, we will mainly be concerned with finite inverse monoids: if $\mathcal{S}$ is finite, the fact that $E(\mathcal{S})$ is Boolean automatically implies that $E(\mathcal{S})$ is isomorphic to $(\mathcal P(X), \cap)$ for some finite set $X$.
We say that $\mathcal{T}$ is a \emph{full inverse submonoid} of $\mathcal{S}$ if $\mathcal{T}$ is an inverse submonoid of $\mathcal{S}$ such that $E(\mathcal{T})=E(\mathcal{S})$. For instance, we have seen that $\paut{\Gamma}$ is a full inverse submonoid of $\is {V(\Gamma)}$. Some of our previous assertions were a direct consequence of this fact, for instance that the natural partial orders of $\paut{\Gamma}$ and $\is {V(\Gamma)}$ coincide on $\paut{\Gamma}$, or that $\paut{\Gamma}$ is Boolean. Notice that any full inverse submonoid of a symmetric inverse monoid $\is X$ is Boolean, and is closed under taking restrictions.
\subsection{Green's relations}\label{ssec:green}
A key to understanding the structure of inverse monoids (or semigroups in general) is knowing which elements can be multiplied to which elements. Some information is carried by the natural partial order -- one can never move `up' by multiplication in finite inverse monoids -- but an accurate description of this is best captured by five equivalence relations, called \emph{Green's relations}, two of which coincide in the cases we are concerned with.
Let $\mathcal{S}$ be {an arbitrary monoid}, and $a, b \in \mathcal{S}$ be arbitrary elements. We define two equivalence relations $\mathrel{\mathcal{L}}$ and $\mathrel{\mathcal{R}}$ the following way:
\begin{center}
$a \mathrel{\mathcal{L}} b$ if and only if there exist $x,y \in \mathcal{S}$ such that $xa=b$ and $yb=a$,\\
$a \mathrel{\mathcal{R}} b$ if and only if there exist $x,y \in \mathcal{S}$ such that $ax=b$ and $by=a$. \end{center}
\noindent In a symmetric inverse monoid for instance, note that $\operatorname{dom} \varphi\psi \subseteq \operatorname{dom} \psi$ for any pair of partial permutations $\varphi$, $\psi$, so if $\varphi_1 \mathrel{\mathcal{L}} \varphi_2$, then clearly $\operatorname{dom} \varphi_1=\operatorname{dom} \varphi_2$ holds. On the other hand if $\operatorname{dom} \varphi_1=\operatorname{dom} \varphi_2$, then $(\varphi_2\varphi_1^{-1})\varphi_1=\varphi_2 \operatorname{id}_{\operatorname{dom}\varphi_1}=\varphi_2$, and similarly, $(\varphi_1\varphi_2^{-1})\varphi_2=\varphi_1 \operatorname{id}_{\operatorname{dom}\varphi_2}=\varphi_1$, so $\varphi_1 \mathrel{\mathcal{L}} \varphi_2$. Therefore, the $\mathrel{\mathcal{L}}$ relation coincides with having the same domain. Dually, the $\mathrel{\mathcal{R}}$ relation coincides with pairs of partial permutations having the same range. Note that the above argument holds without any modifications in $\paut \Gamma$ as well (or in any other inverse submonoid of a symmetric inverse monoid).
So, for instance, returning to $\paut {\Gamma_0}$ discussed above, the $\mathrel{\mathcal{R}}$-class of the partial automorphism $(1)\vee(2)$ consists of all partial automorphisms of range $\{1,2\}$, that is, of $\{(1)\vee(2), (1\,2), [1\,2\,3), (2) \vee [1\,3)\}$. Similarly, the $\mathrel{\mathcal{L}}$-class of $(1)\vee(2)$ consists of all partial automorphisms with domain $\{1,2\}$, that is, of $\{(1)\vee(2), (1\,2), [3\,2\,1), (2) \vee [3\,1)\}$. Note that the $\mathrel{\mathcal{L}}$-class of $(1)\vee(2)$ contains exactly the inverses of the elements in its $\mathrel{\mathcal{R}}$-class. This is no coincidence: in any inverse monoid, $a \mathrel{\mathcal{R}} b$ if and only if $a^{-1} \mathrel{\mathcal{L}} b^{-1}$.
The third Green's relation is ${\mathrel{\mathcal{H}}} = {\mathrel{\mathcal{R}} \cap \mathrel{\mathcal{L}}}$, which is again an equivalence relation. In a symmetric inverse monoid, or in a partial automorphism monoid, we clearly have $\varphi_1 \mathrel{\mathcal{H}} \varphi_2$ if and only if $\operatorname{dom} \varphi_1=\operatorname{dom} \varphi_2$ and $\operatorname{ran} \varphi_1 =\operatorname{ran} \varphi_2$. The $\mathrel{\mathcal{H}}$-class of $(1) \vee (2)$ in $\paut {\Gamma_0}$ is $\{(1)\vee(2), (1\,2)\}$. Note that it forms a subgroup. This is again not a coincidence. In an inverse monoid, each $\mathrel{\mathcal{R}}$-class and each $\mathrel{\mathcal{L}}$-class are known to contain precisely one idempotent, and the $\mathrel{\mathcal{H}}$-classes containing these idempotents are the maximal subgroups of the inverse monoid. This is easy to see in partial automorphism monoids: if $\Gamma$ is an edge-colored digraph and $\Delta$ is an induced subgraph, then there is exactly one idempotent with a prescribed domain (or range) $V(\Delta)$, namely $\operatorname{id}_{V(\Delta)}$. The $\mathrel{\mathcal{H}}$-class of the idempotent $\operatorname{id}_{V(\Delta)}$ consists of all partial automorphisms $\varphi$ with $\operatorname{dom}\varphi=\operatorname{ran}\varphi=V(\Delta)$, which is exactly the automorphism group of the graph $\Delta$.
The smallest equivalence relation containing both $\mathrel{\mathcal{R}}$ and $\mathrel{\mathcal{L}}$ is called the $\mathrel{\mathcal{D}}$ relation, and in any monoid, ${\mathrel{\mathcal{D}}}={\mathrel{\mathcal{R}} \circ \mathrel{\mathcal{L}}} = {\mathrel{\mathcal{L}} \circ \mathrel{\mathcal{R}}}$, where $\circ$ denotes the composition of relations. Thus, for any $a,b \in \mathcal{S}$, we have $a \mathrel{\mathcal{D}} b$ if and only if there exists $c \in \mathcal{S}$ such that {$a \mathrel{\mathcal{L}} c \mathrel{\mathcal{R}} b$}. In a symmetric inverse monoid, this means that $\varphi_1 \mathrel{\mathcal{D}} \varphi_2$ if and only if there exists a partial permutation $\psi$ with $\operatorname{dom}\psi=\operatorname{dom}\varphi_1$ and $\operatorname{ran}\psi=\operatorname{ran}\varphi_2$, which is equivalent to
$|\operatorname{dom}\varphi_1|=|\operatorname{ran}\varphi_2|$, or, in other words, $\varphi_1$ and $\varphi_2$ having the same rank.
The $\mathrel{\mathcal{D}}$ relation has a different characterization in partial automorphism monoids. It is still necessary to require that $|\operatorname{dom}\varphi_1|=|\operatorname{ran}\varphi_2|$, but it is no longer sufficient. For instance, consider $\Gamma_0$ and the partial automorphisms $(1\,2), (3\,4) \in \paut{\Gamma_0}$ again. Both have rank $2$, but there is no partial graph automorphism $\psi \in \paut{\Gamma_0}$ with $\operatorname{dom}\psi=\{1,2\}$ and $\operatorname{ran}\psi=\{3,4\}$, as $\{1,2\}$ {is an edge} while $\{3,4\}$ is not. It turns out that {in} partial automorphism monoids, the $\mathrel{\mathcal{D}}$ relation corresponds to isomorphism classes of {induced} subgraphs, as formulated in the following proposition.
\begin{Prop} \label{prop:d} For any edge-colored digraph $\Gamma$, the $\mathrel{\mathcal{D}}$-classes of $\paut{\Gamma}$ correspond to the isomorphism classes of {the} induced subgraphs of\/ $\Gamma$, that is, two elements are $\mathrel{\mathcal{D}}$-related if and only if the subgraphs induced {on their domains (or ranges)} are isomorphic. \end{Prop}
\begin{Proof} Let $\varphi_1, \varphi_2 \in \paut{\Gamma}$. Then $\varphi_1 \mathrel{\mathcal{D}} \varphi_2$ if and only if there exists $\psi \in \paut{\Gamma}$ with $\varphi_1 \mathrel{\mathcal{L}} \psi \mathrel{\mathcal{R}} \varphi_2$. The latter relation is equivalent to the equations $\operatorname{dom}{\varphi_1}=\operatorname{dom}{\psi}$ and $\operatorname{ran}{\varphi_2}=\operatorname{ran}{\psi}$. Thus, such $\psi$ exists in $\paut{\Gamma}$ if and only if there is a graph isomorphism between the induced subgraphs on $\operatorname{dom}{\varphi_1}$ and $\operatorname{ran}{\varphi_2}$, which proves our statement. \end{Proof}
In an inverse monoid, each $\mathrel{\mathcal{D}}$-class is a disjoint union of $\mathrel{\mathcal{R}}$-classes, and a disjoint union of $\mathrel{\mathcal{L}}$-classes, and any $\mathrel{\mathcal{R}}$- and $\mathrel{\mathcal{L}}$-class of a $\mathrel{\mathcal{D}}$-class intersect in an $\mathrel{\mathcal{H}}$-class. Moreover, every $\mathrel{\mathcal{D}}$-class contains the same number of $\mathrel{\mathcal{R}}$-classes as $\mathrel{\mathcal{L}}$-classes. A $\mathrel{\mathcal{D}}$-class is therefore usually depicted in what is called an `eggbox' diagram: the rows are the $\mathrel{\mathcal{R}}$-classes, the columns are the $\mathrel{\mathcal{L}}$-classes, the small rectangles are the $\mathrel{\mathcal{H}}$-classes, and they are arranged in such a way that the $\mathrel{\mathcal{H}}$-classes containing idempotents are on the main diagonal.
For an example, see Figure~\ref{fig:D} which depicts the $\mathrel{\mathcal{D}}$-class of $\paut{\Gamma_0}$ corresponding to the edges of the graph. The first row (column) corresponds to the range (domain) $\{1,2\}$, and the second row (column) corresponds to the range (domain) $\{2,3\}$. These belong to the same $\mathrel{\mathcal{D}}$-class since both $\{1,2\}$ and $\{2,3\}$ represent edges of $\Gamma_0$, and therefore there exists a partial automorphism of rank $2$ mapping $\{1,2\}$ to $\{2,3\}$. The idempotents are colored grey.
\begin{center} \begin{figure} \caption{The $\mathrel{\mathcal{D}}$-class of $\paut{\Gamma_0}$ corresponding to the edges of $\Gamma_0$}\label{fig:D}
\end{figure} \end{center}
In the case of finite semigroups, the $\mathrel{\mathcal{D}}$-classes form a partially ordered set: if we denote the $\mathrel{\mathcal{D}}$-class of $a$ by $D_a$, we put $D_a \leq D_b$ if $a=xby$ for some $x,y\in\mathcal{S}$, that is, if $b$ can be multiplied into $a$. In the case of $\is{X}$, this is just the ordering of $\mathrel{\mathcal{D}}$-classes according to their rank; the $\mathrel{\mathcal{D}}$-classes form a chain. The minimum element is the $\mathrel{\mathcal{D}}$-class of partial permutations of rank $0$, which is just $\{\operatorname{id}_{\emptyset}\}$.
The maximum element is the $\mathrel{\mathcal{D}}$-class of rank {$|X|$} partial permutations, which are the permutations of $X$. This maximal $\mathrel{\mathcal{D}}$-class therefore consists of a single $\mathrel{\mathcal{H}}$-class, which is the symmetric group $\operatorname{Sym}(X)$.
In the case of partial automorphism monoids, this partial order corresponds to the `induced subgraph' relation between the isomorphism classes of graphs,
as implied by the following proposition.
\begin{Prop} Let $\Gamma$ be an edge-colored digraph, and let $\varphi_1, \varphi_2 $ be elements of $ \paut{\Gamma}$. Then, $D_{\varphi_1} \le D_{\varphi_2}$ (i.e., there exist partial permutations $\psi, \sigma \in \paut{\Gamma}$ such that $\varphi_1 =\psi\varphi_2\sigma$) if and only if $\operatorname{dom}\varphi_1$ is isomorphic to an induced subgraph of $\operatorname{dom}\varphi_2$. \end{Prop}
\begin{Proof} First suppose that such $\psi$ and $\sigma$ exist. Then $$\operatorname{dom}\varphi_1=\sigma^{-1}(\operatorname{dom}(\psi\varphi_2)\cap \operatorname{ran} \sigma)\subseteq\sigma^{-1}(\operatorname{dom}(\psi\varphi_2)) \subseteq \sigma^{-1}(\operatorname{dom}\varphi_2),$$ so $\sigma(\operatorname{dom}\varphi_1)\subseteq \operatorname{dom}\varphi_2$, and $\sigma$ restricts to an isomorphism from the subgraph induced on $\operatorname{dom}\varphi_1$ onto an induced subgraph of $\operatorname{dom}\varphi_2$; this proves the assertion.
Conversely, suppose that $\operatorname{dom}\varphi_1$ is isomorphic to an induced subgraph of $\operatorname{dom}\varphi_2$, and denote the set of vertices of this subgraph by $W$. Then there exists a partial automorphism $\sigma \colon \operatorname{dom} \varphi_1 \to W$. Note that $\varphi_2(W)$ is isomorphic to $\operatorname{ran}\varphi_1$; indeed, the composition $\psi=\varphi_1\sigma^{-1}\varphi_2^{-1}$, restricted to $\varphi_2(W)$, is a partial automorphism from $\varphi_2(W)$ onto $\operatorname{ran}\varphi_1$, and $\varphi_1=\psi\varphi_2\sigma$. \end{Proof}
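The order relation in this proposition can likewise be checked mechanically. The sketch below (again only an illustration, using the same hypothetical stand-in graph as before) tests whether the subgraph induced on one domain embeds as an induced subgraph of the other by brute force.
\begin{verbatim}
from itertools import permutations, combinations

def embeds(edges, A, B):
    """Is the subgraph induced on A isomorphic to an induced subgraph of
    the subgraph induced on B?  Brute force over |A|-subsets of B."""
    A = list(A)
    for C in combinations(B, len(A)):
        for image in permutations(C):
            m = dict(zip(A, image))
            if all((frozenset({u, v}) in edges) ==
                   (frozenset({m[u], m[v]}) in edges)
                   for u, v in combinations(A, 2)):
                return True
    return False

# With the hypothetical graph 1-2, 2-3 (plus vertex 4): an edge embeds into
# the path on {1, 2, 3}, but the path does not embed into the non-edge {3, 4}.
E = {frozenset({1, 2}), frozenset({2, 3})}
print(embeds(E, {1, 2}, {1, 2, 3}), embeds(E, {1, 2, 3}, {3, 4}))  # True False
\end{verbatim}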
In $\paut\Gamma$, the minimum element of the poset of $\mathrel{\mathcal{D}}$-classes is again the singleton $\mathrel{\mathcal{D}}$-class $\{\operatorname{id}_{\emptyset}\}$, while the maximum element consists of a single $\mathrel{\mathcal{H}}$-class, namely the automorphism group $\operatorname{Aut}(\Gamma)$.
If $\mathcal{S}$ is any finite semigroup with a zero element $0$, the $\mathrel{\mathcal{D}}$-class $D_0=\{0\}$ is always the minimum element of this poset. {I}n a {finite} poset $P$ with minimum element $0$, the {\em height} of an element $a$ is the largest
$s\in\mathbb{N}_0$ such that there exist elements $a_1,a_2,\ldots,a_s\in P$ with
$0<a_1<a_2<\ldots<a_s=a$. An element of height $1$ is usually called \emph{$0$-minimal}. This allows us to define the height of $\mathrel{\mathcal{D}}$-classes in $\mathcal{S}$, and also the height of elements in $\mathcal{S}$ with respect to the natural partial order. In any {finite} inverse monoid with zero, the height of each element is known to be equal to the height of its $\mathrel{\mathcal{D}}$-class. In $\is{X}$, the height of a partial permutation is exactly its rank, and the same is true in $\paut\Gamma$.
The $\mathrel{\mathcal{D}}$-class depicted in Figure \ref{fig:D} is a $\mathrel{\mathcal{D}}$-class of height $2$ in $\paut{\Gamma_{0}}$. There is one other $\mathrel{\mathcal{D}}$-class of height $2$: the class that belongs to the isomorphism class {of} \emph{non-edges} (that is, pairs of vertices with no edge between them). Figure \ref{fig:green} illustrates the entire eggbox structure of $\paut{\Gamma_0}$. The graph isomorphism class corresponding to a $\mathrel{\mathcal{D}}$-class is depicted to the right of it.
\begin{center} \begin{figure}
\caption{The structure of the partial automorphism monoid $\paut{\Gamma_0}$}
\label{fig:green}
\end{figure} \end{center}
\section{The structure of partial automorphism monoids of graphs} \label{sec:struc}
We begin with some simple observations. The partial automorphism monoids of a graph $\Gamma$ and its complement $\widetilde{\Gamma}$ are equal, that is, $ \paut{\Gamma} = \paut{\widetilde{\Gamma}} $. In particular, the partial automorphism monoids of the complete graph $K_n$ and its complement $\widetilde{K_n}$ are equal to the symmetric inverse monoid $\is{V(K_n)}$. If the graphs $ \Gamma $ and $ \Gamma' $ are isomorphic, then $\paut{\Gamma}$ and $\paut{\Gamma'}$ are also isomorphic (more specifically, $\paut{\Gamma} = \varphi^{-1} \paut{\Gamma'} \varphi $ for any isomorphism $ \varphi\colon \Gamma \to \Gamma' $).
A key observation in describing the partial automorphism monoids of edge-colored digraphs (in particular, of graphs) is the fact that the elements of rank $1$ and $2$ determine the rest of the monoid. This is formalized in the following lemmas. The proof of the first lemma is obvious.
\begin{Lem} \label{lem:rank2} Let $\Gamma$ be an edge-colored digraph with vertex set $X$, and let $\varphi \in \is{X}$ be a partial permutation of rank at least $2$.
Then $\varphi \in \paut{\Gamma}$ if and only if $\varphi|_{Y} \in \paut{\Gamma}$ for any $2$-element subse{t $Y$ of $\operatorname{dom}{\varphi}$.} \end{Lem}
Lemma \ref{lem:rank2} implies that one can build all partial automorphisms of an edge-colored digraph (in particular, of a graph) from partial automorphisms of rank at most $2$, using joins. \begin{Prop} \label{prop:joins1} The partial automorphism monoid $\mathcal{S}=\paut{\Gamma}$ of any edge-colored digraph $\Gamma$ has the following property: \begin{enumerate} \item[{\rm (U)}] for any compatible subset $A \subseteq \mathcal{S}$ of partial permutations of rank $1$, if $\mathcal{S}$ contains the join of any two elements of $A$, then $\mathcal{S}$ contains the join of the set $A$. \end{enumerate} \end{Prop}
\begin{Proof} Since $A$ is compatible{, $\varphi=\bigvee A$ i}s an element of $\is{V(\Gamma)}$. The elements of $A$ have rank $1$, therefore compatibility of $A$ implies that distinct elements of $A$ have distinct domains and distinct ranges. Thus, for any distinct element{s $\psi_1, \psi_2 \in A$, $\psi_1 \vee \psi_2$ i}s a rank $2$ partial permutation {on} $V(\Gamma)$ (and, by assumption, also a partial graph automorphism of $\Gamma$), namely the restriction of $\varphi$ to the $2$-element set $\operatorname{dom}{\psi_1} \cup \operatorname{dom}{\psi_2}$. Moreover, by definition, any rank $2$ restriction of $\varphi$ arises in this way. Therefore $\varphi$ satisfies the conditions of Lemma \ref{lem:rank2}, and hence belongs to $\mathcal{S}$. \end{Proof}
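The following sketch illustrates Lemma \ref{lem:rank2} and condition {\rm (U)} computationally (the 4-cycle used here is a hypothetical example, not taken from the text): a join of pairwise-joinable rank $1$ partial automorphisms is tested for membership in $\paut{\Gamma}$ by checking all rank $2$ restrictions.
\begin{verbatim}
from itertools import combinations

def preserves_pair(edges, phi, u, v):
    """Does phi preserve the edge/non-edge status of the pair {u, v}?"""
    return (frozenset({u, v}) in edges) == (frozenset({phi[u], phi[v]}) in edges)

def is_partial_automorphism(edges, phi):
    """Lemma lem:rank2: an injective partial map is a partial automorphism
    iff every restriction to a 2-element subset of its domain is."""
    if len(set(phi.values())) != len(phi):          # injectivity
        return False
    return all(preserves_pair(edges, phi, u, v) for u, v in combinations(phi, 2))

def join(pieces):
    """Join of compatible rank-1 partial permutations: their union as a map."""
    out = {}
    for p in pieces:
        out.update(p)
    return out

# Hypothetical example: the 4-cycle 1-2-3-4-1 and the rank-1 pieces of the
# rotation v -> v+1 (mod 4).  All two-element joins are partial automorphisms,
# and so is the join of the whole set, as condition (U) requires.
edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
pieces = [{1: 2}, {2: 3}, {3: 4}, {4: 1}]
pairwise = all(is_partial_automorphism(edges, join([p, q]))
               for p, q in combinations(pieces, 2))
print(pairwise, is_partial_automorphism(edges, join(pieces)))  # True True
\end{verbatim}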
\begin{Prop} \label{prop:joins2} If $\mathcal{S},\mathcal{T}$ are full inverse submonoids of $\is{X}$ which coincide on their elements of rank at most $2$ and satisfy condition {\rm (U)}, then $\mathcal{S} =\mathcal{T} $. \end{Prop}
\begin{Proof} To verify $\mathcal{S} \subseteq \mathcal{T}$, let $\varphi \in \mathcal{S}$ be of rank greater than $2$. Then all restrictions of $\varphi$ are in $\mathcal{S}$, since $\mathcal{S}$ is {a} full {inverse submonoid of $\is{X}$}. In particular, with $A$ denoting the set of rank $1$ restrictions of $\varphi$, we have $A \subseteq \mathcal{S}$. The elements of $A$ are, of course, pairwise compatible, and as all their two-element joins are rank $2$ restrictions of $\varphi$, they belong to $\mathcal{S}$ as well. Since we assume that $\mathcal{T}$ contains the same elements of ranks $1$ and $2$ as $\mathcal{S}$ and that $\mathcal{T}$ satisfies condition {\rm (U)}, this yields $\varphi=\bigvee A $ is an element of $ \mathcal{T}$, as needed. The inclusion $\mathcal{T} \subseteq \mathcal{S}$ is obtained by the symmetric argument, swapping the roles of $\mathcal{T} $ and $ \mathcal{S}$. \end{Proof}
\section{When does an inverse monoid of partial permutations coincide with the partial automorphism monoid of a graph?} \label{sec:frucht}
The first theorem of this section answers the question from the title of the section. The second theorem answers the same question for edge-colored digraphs; the conditions in this second characterization form a subset of the conditions for graphs. In the case of graphs, it is also observed that the partial automorphism monoid determines the graph uniquely up to forming the complement.
\begin{Thm}[Partial automorphism monoids of graphs] \label{thm:pautgraph} Given an inverse submonoid $\mathcal{S}$ of $\is{X}$, where $X$ is a finite set,
$|X| \geq 2 $, there exists a graph $\Gamma=(X,E)$ whose partial automorphism monoid $\paut\Gamma$ is equal to $\mathcal{S}$ if and only if the following conditions hold: \begin{enumerate}
\item \label{thm:pautgraphi}
$\mathcal{S}$ is a full inverse submonoid of $\is{X}$,
\item \label{thm:pautgraphii}
for any compatible subset $A \subseteq \mathcal{S}$ of rank $1$ partial
permutations,
if $\mathcal{S}$ contains the join of any two elements of $A$, then
$\mathcal{S}$ contains the join of the set $A$,
\item \label{thm:pautgraphiii}
the rank $2$ elements of $\mathcal{S}$ form at least one and at most two $\mathrel{\mathcal{D}}$-classes,
\item \label{thm:pautgraphiv}
the $\mathrel{\mathcal{H}}$-classes of rank $2$ elements are nontrivial. \end{enumerate} \end{Thm}
\begin{Proof} Suppose $\Gamma$ is a graph on the vertex set $X$ of size at least $2$, and take the inverse monoid $\mathcal{S}=\paut{\Gamma}$. We begin by proving that $\mathcal{S}$ has the asserted properties. The simple observation that all partial identity permutations are partial automorphisms implies condition (\ref{thm:pautgraphi}), while condition (\ref{thm:pautgraphii}) follows from Proposition \ref{prop:joins1}.
For condition (\ref{thm:pautgraphiii}), recall that by Proposition \ref{prop:d}, the $\mathrel{\mathcal{D}}$-classes correspond to the isomorphism classes of the induced subgraphs. If $\Gamma$ is {a graph on at least two vertices, then} any induced subgraph of $\Gamma$ with two vertices either has no edge or has a single edge between the two vertices. Since any two graphs on two vertices containing an edge are isomorphic, and so are any two graphs on two vertices containing no edges, this yields at most two $\mathrel{\mathcal{D}}$-classes of partial automorphisms of rank $2$. Since $\Gamma$ contains at least one subgraph on at least two vertices, it admits at least one $\mathrel{\mathcal{D}}$-class of partial automorphisms of rank $2$.
A group $\mathrel{\mathcal{H}}$-class of any $\mathrel{\mathcal{D}}$-class is isomorphic to the automorphism group of the induced subgraph on the common domain. Both kinds of subgraphs on two vertices have a two-element automorphism group, consisting of the identity map and a transposition, and hence isomorphic to $\mathbb{Z}_2$. Therefore the $\mathrel{\mathcal{H}}$-classes of rank $2$ elements are non-trivial, proving condition (\ref{thm:pautgraphiv}).
Our proof of the converse statement is constructive: Given an inverse submonoid $\mathcal{S}$ of $\is{X}$ that satisfies the conditions listed in the theorem, we will construct a graph $\Gamma_\mathcal{S}$ on the set $X$ for which $\paut{\Gamma_\mathcal{S}} = \mathcal{S}$.
We define $\Gamma_\mathcal{S}=(X,E)$ as follows. If $\mathcal{S}$ contains just one $\mathrel{\mathcal{D}}$-class of rank $2$, denote it either $D_{e}$ or $D_{n}$, and if $\mathcal{S}$ contains two $\mathrel{\mathcal{D}}$-classes of rank $2$, denote them by $D_{e}$ and $D_{n}$ (standing for `edges' and `non-edges', respectively; the choice of `which is which' can be made arbitrarily). For any two distinct vertices $v_1, v_2 \in X$, we le{t $\{v_1, v_2\} \in E$ i}f and only if the partial permutation $(v_1)\vee(v_2)$ belongs to $D_{e}$.
Next, we show that $\mathcal{S}$ and $\paut{\Gamma_\mathcal{S}}$ coincide on their elements of rank at most $2$. Let $\varphi \in \mathcal{S}$ be a partial permutation of rank at most $2$; we intend to verify that $\varphi\in\paut{\Gamma_\mathcal{S}}$. If $\varphi$ has rank $1$ or $0$, then it is a partial graph automorphism of $\Gamma_\mathcal{S}$ (and of any graph on $X$), as all subgraphs induced by one vertex are isomorphic, and the empty map is a partial graph automorphism of any graph by definition. Next, suppose $\varphi$ has rank $2$, that is, $\varphi=[v_1\, u_1)\vee[v_2\, u_2)$, where $u_1 \neq u_2$ (and therefore $v_1 \neq v_2$). Since $\varphi\varphi^{-1}=(v_1) \vee (v_2)$ and $\varphi^{-1}\varphi=(u_1) \vee (u_2)$, the relation $\varphi\varphi^{-1} \mathrel{\mathcal{D}} \varphi^{-1}\varphi$ implies $(v_1) \vee (v_2) \mathrel{\mathcal{D}} (u_1) \vee (u_2)$ in $\mathcal S$. Thus, by the definition of $\Gamma_\mathcal{S}$, we obtain the equivalence $\{u_1, u_2\}\in E$ if and only if $\{v_1, v_2\}\in E$. Hence, $\varphi \in \paut{\Gamma_\mathcal{S}}$.
Next we turn to proving that each partial automorphism of $\Gamma_\mathcal{S}$ of rank at most $2$ belongs to $\mathcal{S}$. Since $\mathcal{S}$ is a full inverse submonoid of $\is{X}$, all idempotents of $\is{X}$ belong to $\mathcal{S}$. In particular, the empty map belongs to $\mathcal{S}$, and it is, of course, the single rank $0$ partial permutation in $\mathcal{S}$. We establish that every rank $1$ partial permutatio{n $[v_2v_1)$ of $X$ is} contained in $\mathcal{S}$. By condition (\ref{thm:pautgraphiv}), the $\mathrel{\mathcal{H}}$-class of the rank $2$ idempotent $(v_2)\vee(v_1)$ is nontrivial, therefore, besides the idempotent itself, it contains another partial permutation which is an automorphism of the induced subgraph on $\{v_1,v_2\}$. Necessarily, this partial permutation swaps the two vertices $v_1,v_2$. This implies that {the cycle $(v_2 v_1)$ belongs} to $\mathcal{S}$ for every pair $v_1,v_2$ of distinct elements of $X$. Since any rank $1$ partial permutation arises as a restriction of such a partial permutation, the claim about rank $1$ partial permutations follows.
Now suppose $\varphi$ is a rank $2$ partial automorphism of $\Gamma_\mathcal{S}$, that is, $\varphi=[v_1\, u_1)\vee[v_2\, u_2)$ for some $u_1,u_2,v_1,v_2\in X$ with $u_1\not= u_2$ and $v_1\not= v_2$. Then $u_1$ and $u_2$ are connected by an edge if and only if $v_1$ and $v_2$ are, whence $(v_1) \vee (v_2) \mathrel{\mathcal{D}} (u_1) \vee (u_2)$ in $\mathcal{S}$ by the definition of $\Gamma_\mathcal{S}$. Therefore, there exists an element $\psi$ in $\mathcal{S}$ with domain $\{u_1, u_2\}$ and range $\{v_1, v_2\}$. Moreover, since the rank $2$ $\mathrel{\mathcal{H}}$-classes of $\mathcal{S}$ are nontrivial by condition (\ref{thm:pautgraphiv}), the $\mathrel{\mathcal{H}}$-class of $\psi$ contains both partial permutations wit{h d}omain $\{u_1,u_2\}$ and range $\{v_1,v_2\}$, in particular, $\varphi \in \mathcal{S}$.
We have shown that $\mathcal{S}$ and $\paut{\Gamma_\mathcal{S}}$ coincide on their elements of rank at most $2$, and so, by applying Proposition \ref{prop:joins2}, we obtain $\paut{\Gamma_\mathcal{S}}=\mathcal{S}$. \end{Proof}
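The constructive direction of the proof can be phrased as a small algorithm. The sketch below is illustrative only: the membership test stands in for an arbitrary inverse submonoid $\mathcal{S}$ satisfying the hypotheses (here it is fed $\paut{\Gamma}$ of a hypothetical graph), a reference pair is fixed to name the $\mathrel{\mathcal{D}}$-class $D_e$, and $\Gamma_\mathcal{S}$ is then recovered, up to complementation, exactly as in the proof.
\begin{verbatim}
from itertools import combinations

def graph_from_S(contains, X):
    """Construction of Gamma_S: fix a reference pair to name the `edge'
    D-class D_e, and put {v1, v2} in E iff (v1)v(v2) is D-related to it,
    i.e. iff S contains a partial permutation mapping {v1, v2} onto the
    reference pair (equivalently, its inverse)."""
    pairs = list(combinations(sorted(X), 2))
    ref = pairs[0]                       # arbitrary choice of `which is which'
    def in_De(v1, v2):
        return (contains({v1: ref[0], v2: ref[1]}) or
                contains({v1: ref[1], v2: ref[0]}))
    return {frozenset(p) for p in pairs if in_De(*p)}

# Membership test standing in for S = PAut(Gamma) of a hypothetical graph,
# the path 1-2-3 with an extra vertex 4.
E = {frozenset({1, 2}), frozenset({2, 3})}
def contains(phi):
    if len(set(phi.values())) != len(phi):
        return False
    return all((frozenset({u, v}) in E) == (frozenset({phi[u], phi[v]}) in E)
               for u, v in combinations(phi, 2))

print(graph_from_S(contains, {1, 2, 3, 4}))   # recovers E or its complement
\end{verbatim}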
As we have observed, for any graph $\Gamma$ the idempotents of the (at most two) rank $2$ $\mathrel{\mathcal{D}}$-classes of $\paut{\Gamma}$ correspond to the pairs of vertices forming edges and to those forming non-edges. Consequently, the graph $\Gamma$ with $\mathcal{S}=\paut{\Gamma}$ is uniquely determined up to forming the complement. Hence the following can be deduced.
\begin{Cor} \label{cor:paut-eq} {If $\Gamma,\Gamma'$ are graphs with $\paut\Gamma=\paut{\Gamma'}$ then either $\Gamma=\Gamma'$ or $\Gamma=\widetilde{\Gamma'}$.} \end{Cor}
\begin{Thm}[Partial automorphism monoids of edge-colored digraphs] \label{thm:pautdigraph} Given an inverse submonoid $\mathcal{S}$ of $\is{X}$, where $X$ is a finite set, there exists an edge-colored digraph $\Gamma=(X,E_1,\ldots,E_l)$ whose partial automorphism monoid $\paut\Gamma$ is equal to $\mathcal{S}$ if and only if the following conditions hold: \begin{enumerate}
\item \label{thm:pautdigraphi}
$\mathcal{S}$ is a full inverse submonoid of $\is{X}$,
\item \label{thm:pautdigraphii}
for any compatible subset $A \subseteq \mathcal{S}$ of rank $1$ partial permutations,
if $\mathcal{S}$ contains the join of any two elements of $A$, then
$\mathcal{S}$ contains the join of the set $A$. \end{enumerate} \end{Thm}
\begin{Proof} Suppose $\Gamma=(X,E_1,\ldots,E_l)$ is an edge-colored digraph, and consider $\mathcal{S}=\paut{\Gamma}$. As before, the fact that all partial identity permutations are partial automorphisms implies that $\mathcal{S}$ satisfies condition (\ref{thm:pautdigraphi}), while condition (\ref{thm:pautdigraphii}) follows from Proposition \ref{prop:joins1}.
The converse part of our proof is again constructive. Suppose $\mathcal{S} \subseteq \is X$ has the asserted properties, and let us define an edge-colored digraph $\Gamma_\mathcal{S}=(X,E_c\ (c\in C))$ as follows. Let the color palette be $C=C_1 \cup C_2$ with $C_1 \cap C_2=\emptyset$, where $C_1$ indexes the set of rank $1$ $\mathrel{\mathcal{D}}$-classes and $C_2$ the set of rank $2$ $\mathrel{\mathcal{D}}$-classes of $\mathcal{S}$. For every $c\in C_1$, let $E_c$ consist of all loops $(v,v)$ where $(v)$ is in the $\mathrel{\mathcal{D}}$-class $D_c$. Moreover, for every $c\in C_2$, let us choose and fix vertices $v_1^{c}, v_2^{c}\in X$ with $(v_1^c)\vee(v_2^c) \in D_c$, and define $E_c$ to consist of all edges $(u_1,u_2)$ such that $[u_1\, v_1^c)\vee[u_2\, v_2^c) \in \mathcal{S}$. Note that this implies $(v_1^c, v_2^c)\in E_c$ and $(v_1^c)\vee(v_2^c) \mathrel{\mathcal{D}} (u_1)\vee(u_2)$ by definition. The converse of the latter relation is not quite true. However, if $(v_1^c)\vee(v_2^c) \mathrel{\mathcal{D}} (u_1)\vee(u_2)$ in $\mathcal{S}$, then at least one of the partial permutations $[u_1\, v_1^c)\vee[u_2\, v_2^c)$ and $[u_2\, v_1^c)\vee[u_1\, v_2^c)$ is in $\mathcal{S}$, and therefore, for any such $u_1, u_2$, at least one of the edges $(u_1, u_2)$ and $(u_2,u_1)$ belongs to $E_c$. Both belong to $E_c$ if and only if the $\mathrel{\mathcal{H}}$-classes in $D_c$ are nontrivial.
To complete the proof, we again intend to show that $\mathcal{S}$ and $\paut{\Gamma_\mathcal{S}}$ coincide on their elements of rank at most $2$. Suppose {first} that $\varphi \in \mathcal{S}$ is of rank $1$ or $2${, and check that $\varphi\in\paut{\Gamma_\mathcal{S}}$}. In the first case, $\varphi=[v_2\, v_1)$, and so $(v_1) \mathrel{\mathcal{D}} (v_2)$ in $\mathcal{S}$. Therefore, by the definition of $\Gamma_\mathcal{S}$, {both} subgraphs induced {on $\{v_1\}$ and $\{v_2\}$ contain} a single loop of the same color, making $\varphi$ a partial automorphism. In the second case, suppose that $\varphi=[v_1\, u_1)\vee[v_2\, u_2)$ with $u_1 \neq u_2$ and $(u_1,u_2) \in E_c$. Then $[u_1\, v_1^c)\vee[u_2\, v_2^c) \in \mathcal{S}$, and $\big([v_1\, u_1)\vee[v_2\, u_2)\big)\big([u_1\, v_1^c)\vee[u_2\, v_2^c)\big)= [v_1\, v_1^c)\vee[v_2\, v_2^c) \in \mathcal{S}$ follows from $u_1\neq u_2$. This implies $(v_1, v_2)\in E_c$. Similar arguments interchanging the $u$'s and $v$'s imply that $\varphi$ is indeed in $\paut{\Gamma_\mathcal{S}}$.
Now suppose that $\varphi \in \paut{\Gamma_\mathcal{S}}$ is of rank $1$ or $2$; we need to verify that $\varphi\in\mathcal{S}$. If $\varphi=[v_2\, v_1)$, then the subgraphs induced on $\{v_1\}$ and $\{v_2\}$ are isomorphic. This implies by the definition of $\Gamma_\mathcal{S}$ that $(v_1) \mathrel{\mathcal{D}} (v_2)$ in $\mathcal{S}$, and hence $\varphi \in \mathcal{S}$. Otherwise, let $\varphi=[v_1\, u_1)\vee[v_2\, u_2)$ with $u_1 \neq u_2$, and let $D_c$ be the $\mathrel{\mathcal{D}}$-class of $\mathcal{S}$ containing $(v_1)\vee(v_2)$. As we have seen above, at least one of $(v_1,v_2)$ and $(v_2,v_1)$ belongs to $E_c$, and since $\varphi^{-1}$ is a partial automorphism of $\Gamma_\mathcal{S}$ mapping $v_i$ to $u_i$, it follows that $(u_1, u_2) \in E_c$ or $(u_2, u_1) \in E_c$. If $(u_1, u_2) \in E_c$, that is, $[u_1\, v_1^c)\vee[u_2\, v_2^c) \in \mathcal{S}$, then $(v_1, v_2) \in E_c$ as well, therefore $[v_1\, v_1^c)\vee[v_2\, v_2^c) \in \mathcal{S}$, and $\big([v_1\, v_1^c)\vee[v_2\, v_2^c)\big)\big([u_1\, v_1^c)\vee[u_2\, v_2^c)\big)^{-1}= [v_1\, u_1)\vee[v_2\, u_2) \in \mathcal{S}$ follows. If $(u_2, u_1) \in E_c$, then $[v_1\, u_1)\vee[v_2\, u_2) \in \mathcal{S}$ can be similarly deduced.
Thus we have seen that $\mathcal{S}$ and $\paut{\Gamma_\mathcal{S}}$ coincide on their elements of rank at most $2$, and by applying Proposition \ref{prop:joins2}, we conclude $\paut{\Gamma_\mathcal{S}}=\mathcal{S}$. \end{Proof}
\begin{Rem} Since any edge-colored digraph corresponds to a digraph admitting the same inverse monoid of partial automorphisms in which the colors are encoded with different numbers of edges, partial automorphism monoids of (monochromatic) digraphs admit the same characterization as the one in Theorem \ref{thm:pautdigraph}. \end{Rem}
\section{When is an inverse monoid isomorphic to the partial automorphism monoid of a graph?} \label{sec:frucht2}
This section gives abstract characterizations of the partial automorphism monoids of graphs and, more generally, of edge-colored digraphs. It is also determined when the partial automorphism monoids of two graphs are isomorphic.
These characterizations are obtained from the theorems of the previous section. The link between the abstract inverse monoids and partial permutation monoids is established by a modified version of the so-called Munn representation \cite[Chapter 5.2]{Lawson}.
As mentioned before, all inverse monoids can be represented by partial permutations. The standard representation is the Wagner--Preston representation, which represents $\mathcal{S}$ as an inverse submonoid of $\is{\mathcal{S}}$ faithfully, and is the analogue of the Cayley representation. Nevertheless, as in the case of groups, there also exist representations on smaller sets which are faithful for special classes of inverse monoids. In this section, we shall use one of them which represents an inverse monoid on the set of its idempotents.
Let $\mathcal{S}$ be an inverse monoid, and for any $e \in E(\mathcal{S})$, let $[e]=\{f \in E(\mathcal{S}): f \leq e\}$, the \emph{order ideal} of $E(\mathcal{S})$ generated by $e$. The \emph{Munn representation} of $\mathcal{S}$ is the map $$\delta_\mathcal{S} \colon \mathcal{S} \to \is{E(\mathcal{S})},\ s \mapsto \delta_s,$$ where $$\delta_s \colon [s^{-1}s] \to [ss^{-1}], \ e \mapsto ses^{-1}.$$ An inverse monoid $\mathcal{S}$ is called \emph{fundamental} if its Munn representation is faithful (injective). It can be proven that each symmetric inverse monoid is fundamental, and each full inverse submonoid of a fundamental inverse semigroup is also fundamental. Hence the partial automorphism monoid $\paut \Gamma$ of any graph, digraph, or colored digraph $\Gamma$, is fundamental.
Let $\mathcal{S}$ be a finite inverse monoid, and let $X$ denote the set of atoms of $E(\mathcal{S})$. The {\em restricted Munn representation} of $\mathcal{S}$ is the Munn representation of $\mathcal{S}$ restricted to the atoms of $E(\mathcal{S})$, that is, it is the {homomorphism}
$$\alpha_\mathcal{S} \colon \mathcal{S} \to \is{X},\ s \mapsto \delta_s|_{X},$$ where
$$\delta_s|_{X} \colon [s^{-1}s] \cap X \to [ss^{-1}] \cap X,\ e \mapsto ses^{-1}.$$
Note that $\alpha_\mathcal{S}$ is well defined for any finite inverse monoid, as the atoms of an order ideal of $E(\mathcal{S})$ are atoms of $E(\mathcal{S})$, and an order isomorphism between posets maps atoms to atoms. Naturally, this representation can only be faithful (injective) if $\mathcal{S}$ is fundamental. However, this is not a sufficient condition in general. The next proposition states that within the class of finite Boolean inverse monoids, the two properties are indeed equivalent.
\begin{Prop} \label{prop:boolfund} For a finite Boolean inverse monoid $\mathcal{S}$, the restricted Munn representation of $\mathcal{S}$ is {faithful (injective)} if and only if $\mathcal{S}$ is fundamental. \end{Prop}
\begin{Proof} If $\mathcal{S}$ is a finite Boolean inverse monoid, then any principal order ideal of $E(\mathcal{S})$ is a (finite) Boolean algebra, and an order isomorphism between order ideals is also a Boolean algebra isomorphism (see \cite{lattice}). Since finite Boolean algebras are generated by their atoms, an isomorphism $\delta \colon I \to J$ between Boolean algebras is uniquely determined by its restriction to the atoms of $I$.
Hence, for any elements $s, t \in \mathcal{S}$, we have $\delta_s|_{X}=\delta_t|_{X}$ if and only if $\delta_s=\delta_t$, making the restricted Munn representation faithful (injective) if and only if the Munn representation is. \end{Proof}
There is another reason why the restricted Munn representation is a very natural representation to consider for the partial automorphism monoids, and this is the following statement.
\begin{Prop} \label{prop:repr} For a partial automorphism monoid $\mathcal{S}=\paut{\Gamma}$ of an {edge}-colored digraph $\Gamma$, the restricted Munn representation $\alpha_\mathcal{S}$ and the representation of $\mathcal{S}$ on $V(\Gamma)$ are essentially the same. More precisely, if $X$ is the set of atoms of $E(\mathcal{S})$, then the map $$\xi\colon V(\Gamma)\to X,\ v\mapsto \operatorname{id}_{\{v\}}$$ is a bijection with the property that, for every $\varphi\in \mathcal{S}$, we have \begin{equation} \label{prop:repr1} {\xi(\operatorname{dom}\varphi)=\operatorname{dom}(\alpha_\mathcal{S}(\varphi)),\quad \xi(\operatorname{ran}\varphi)=\operatorname{ran}(\alpha_\mathcal{S}(\varphi)),} \end{equation} {and} \begin{equation} \label{prop:repr2} \xi(\varphi(v))=(\alpha_\mathcal{S}(\varphi))(\xi(v))\quad {\hbox{for any}\ v\in V(\Gamma).} \end{equation} \end{Prop}
\begin{Proof} The semilattice of idempotents $E(\mathcal{S})$ of $\mathcal{S}$ consists of the identical maps on the subsets of $V(\Gamma)$, that is, $\Theta\colon \mathcal P(V(\Gamma))\to E(\mathcal{S}),\ Y\mapsto \operatorname{id}_{Y}$ is an isomorphism from the {meet-semilattice} $(\mathcal{P}(V(\Gamma));\cap)$ of the Boolean algebra $\mathcal{P}(V(\Gamma))$ to $E(\mathcal{S})$. Since the atoms of $\mathcal{P}(V(\Gamma))$ are the singleton subsets, and the set of atoms of $E(\mathcal{S})$ is $X=\{\operatorname{id}_{\{v\}}: v\in V(\Gamma)\}$, the restriction of $\Theta$ to the sets of atoms induces the bijection $\xi\colon V(\Gamma)\to X,\ v\mapsto \operatorname{id}_{\{v\}}$. This provides a natural identification of the atoms of $E(\mathcal{S})$ with the vertices of $\Gamma$.
The equalities (\ref{prop:repr1}) and (\ref{prop:repr2}), which we prove in this paragraph, say that, under this identification, $\varphi$ and $\alpha_\mathcal{S}(\varphi)$ are the same for every $\varphi\in \mathcal{S}$. By definition, $\operatorname{dom}(\alpha_\mathcal{S}(\varphi))=[\varphi^{-1}\varphi]\cap X= \{\operatorname{id}_{\{v\}}: v\in \operatorname{dom}\varphi\}=\xi(\operatorname{dom}\varphi)$, and the second equality in (\ref{prop:repr1}) follows similarly. Furthermore, if $v\in\operatorname{dom}\varphi$, then we have $(\alpha_\mathcal{S}(\varphi))\left(\operatorname{id}_{\{v\}}\right)
=\delta_{\varphi}|_{X}\left(\operatorname{id}_{\{v\}}\right) =\varphi\operatorname{id}_{\{v\}}\varphi^{-1}=\operatorname{id}_{\{\varphi(v)\}}$. \end{Proof}
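Proposition \ref{prop:repr} can be made concrete for partial permutations given as finite maps. In the sketch below (an added illustration; the example map is hypothetical) conjugating the atom $\operatorname{id}_{\{v\}}$ by $\varphi$ returns $\operatorname{id}_{\{\varphi(v)\}}$, so reading $\alpha_\mathcal{S}(\varphi)$ on atoms reproduces $\varphi$ itself.
\begin{verbatim}
def compose(p, q):
    """Composition p o q of partial permutations as dicts (q applied first)."""
    return {x: p[q[x]] for x in q if q[x] in p}

def inverse(p):
    return {v: k for k, v in p.items()}

def restricted_munn(phi, vertices):
    """delta_phi restricted to the atoms id_{v}: the conjugate
    phi . id_{v} . phi^{-1} equals id_{phi(v)}, so on atoms the
    representation is just v |-> phi(v)."""
    out = {}
    for v in vertices:
        conj = compose(compose(phi, {v: v}), inverse(phi))
        if conj:                       # nonempty exactly when v lies in dom(phi)
            (w,) = conj                # conj = {w: w} for a single vertex w
            out[v] = w
    return out

phi = {1: 3, 2: 1}                     # hypothetical partial permutation of {1, 2, 3}
print(restricted_munn(phi, [1, 2, 3])) # {1: 3, 2: 1}: alpha_S(phi) `is' phi
\end{verbatim}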
We are now ready to prove the main theorems of the section.
\begin{Thm}[Partial automorphism monoids of graphs, up to isomorphism] \label{thm:pautgraph2} Given a finite inverse monoid $\mathcal{S}$, there exists a finite graph whose partial automorphism monoid is isomorphic to $\mathcal{S}$ if and only if the following conditions hold: \begin{enumerate}
\item \label{thm:pautgraph2i}
$\mathcal{S}$ is Boolean,
\item \label{thm:pautgraph2ii}
$\mathcal{S}$ is fundamental,
\item \label{thm:pautgraph2iii}
for any subset $A \subseteq \mathcal{S}$ of compatible $0$-minimal elements, if all $2$-element subsets of $A$ have {joins in} $\mathcal{S}$, then the set $A$ has a join in $\mathcal{S}$,
\item \label{thm:pautgraph2iv}
$\mathcal{S}$ has at most two $\mathcal D$-classes of height $2$,
\item \label{thm:pautgraph2v}
the $\mathcal H$-classes of the height $2$ $\mathcal D$-classes of $\mathcal{S}$ are nontrivial. \end{enumerate} \end{Thm}
\begin{Proof} For any graph $\Gamma$, we have seen that $\paut{\Gamma}$ is Boolean and fundamental, proving condition{s (\ref{thm:pautgraph2i}) and (\ref{thm:pautgraph2ii})}.
Since the $0$-minimal elements of $\paut{\Gamma}$ are just the rank $1$ elements, and the $\mathrel{\mathcal{D}}$-classes of height $2$ are just the rank $2$ $\mathrel{\mathcal{D}}$-classes, conditions (\ref{thm:pautgraph2iii})--(\ref{thm:pautgraph2v}) are immediate consequences of our previous observations on partial automorphism monoids; see properties (\ref{thm:pautgraphii})--(\ref{thm:pautgraphiv}) in Theorem \ref{thm:pautgraph}.
Conversely, let $\mathcal{S}$ be an inverse monoid having the asserted properties, and let $X$ denote the set of all atoms of $E(\mathcal{S})$. By propert{ies} (\ref{thm:pautgraph2i}) and (\ref{thm:pautgraph2ii}), Proposition \ref{prop:boolfund} implies that the restricted Munn representation $\alpha_\mathcal{S}$ of $\mathcal{S}$ embeds $\mathcal{S}$ into $\is{X}$. We check that the inverse submonoid $\alpha_\mathcal{S}(\mathcal{S})$ of $\is{X}$ satisfies conditions {(\ref{thm:pautgraphi})--(\ref{thm:pautgraphii})} of Theorem \ref{thm:pautgraph}. Since the rank $1$ elements of $\alpha_\mathcal{S}(\mathcal{S})$ are clearly $0$-minimal, condition (\ref{thm:pautgraphii}) is immediate from property (\ref{thm:pautgraph2iii}) on joins in $\mathcal{S}$. To verify condition (\ref{thm:pautgraphi}), that is, that $\alpha_\mathcal{S}(\mathcal{S})$ is a full inverse submonoid of $\is{X}$, note that for any element $e \in X$,
$\operatorname{id}_{\{e\}}=\alpha_\mathcal{S}(e) \in \alpha_\mathcal{S}(\mathcal{S})$. Since $E(\mathcal{S})$ is a meet-semilattic{e o}f a Boolean algebra with $|X|$ atoms, the same holds also for $E(\alpha_\mathcal{S}(\mathcal{S}))$. However, $E(\is{X})$ is also a Boolean algebra of the same size which contains $E(\alpha_\mathcal{S}(\mathcal{S}))$. Hence $E(\alpha_\mathcal{S}(\mathcal{S}))=E(\is{X})$, and $\alpha_\mathcal{S}(\mathcal{S})$ is, indeed, a full inverse submonoid of $\is{X}$. Since $\alpha_\mathcal{S}(\mathcal{S})$ is a full inverse submonoid of $\is{X}$, its $\mathrel{\mathcal{D}}$-classes of height $2$ are just the rank $2$ $\mathrel{\mathcal{D}}$-classes, therefore properties (\ref{thm:pautgraph2iv})--(\ref{thm:pautgraph2v}) imply that $\alpha_\mathcal{S}(\mathcal{S})$ satisfies conditions (\ref{thm:pautgraphiii})--(\ref{thm:pautgraphiv}) of Theorem \ref{thm:pautgraph} as well.
Theorem \ref{thm:pautgraph} therefore shows the existence of a required $\Gamma$, thus completing the proof. \end{Proof}
Using exactly the same arguments and Theorem \ref{thm:pautdigraph}, one obtains the following theorem for edge-colored digraphs.
\begin{Thm}[Partial automorphism monoids of edge-colored digraphs, up to isomorphism] \label{thm:pautdigraph2} Given a finite inverse monoid $\mathcal{S}$, there exists a finite edge-colored digraph whose partial automorphism monoid is isomorphic to $\mathcal{S}$ if and only if the following conditions hold: \begin{enumerate}
\item
$\mathcal{S}$ is Boolean,
\item
$\mathcal{S}$ is fundamental,
\item
for any subset $A \subseteq \mathcal{S}$ of compatible $0$-minimal elements, if all $2$-element subsets of $A$ have {joins in} $\mathcal{S}$, then the set $A$ has a join in $\mathcal{S}$. \end{enumerate} \end{Thm}
{We can combine our results to determine when the partial automorphism monoids of graphs are isomorphic.}
\begin{Cor} {For any graphs $\Gamma$ and $\Gamma'$, the partial automorphism monoids $\paut\Gamma$ and $\paut{\Gamma'}$ are isomorphic if and only if either $\Gamma$ and $\Gamma'$, or $\Gamma$ and $\widetilde{\Gamma'}$ are isomorphic.} \end{Cor}
\begin{Proof} The `if' part of the statement is obvious. To see the converse, suppose that $\iota\colon \paut{\Gamma}\to \paut{\Gamma'}$ is an isomorphism, and put $X=V(\Gamma),\ X'=V(\Gamma')$. {The sets of elements of rank at most $1$ form inverse subsemigroups $S_1$ and $S'_1$ in $\paut{\Gamma}$ and $\paut{\Gamma'}$, respectively. The isomorphism $\iota$ maps idempotents to idempotents and preserves their heights, and so it also preserves ranks of elements. Therefore $\iota(S_1)=S'_1$, and $\iota$ restricts to a bijection between the sets of rank $1$ idempotents of $\paut{\Gamma}$ and $\paut{\Gamma'}$. Combining this bijection with the bijections $v\mapsto \operatorname{id}_{\{v\}}$ from the sets $X$ and $X'$ to the latter sets, we obtain a bijection $\overline{\iota}\colon X\to X'$, where $\operatorname{id}_{\{\overline{\iota}(v)\}}=\iota(\operatorname{id}_{\{v\}})$ for every $v\in X$. Recall that $S_1\setminus \{\operatorname{id}_{\emptyset}\}$ and $S'_1\setminus \{\operatorname{id}_{\emptyset}\}$ constitute $\mathrel{\mathcal{D}}$-classes in $\is{X}$ and $\is{X'}$, respectively. Since isomorphisms map $\mathrel{\mathcal{L}}$-related ($\mathrel{\mathcal{R}}$-related) elements to elements with the same property, and the $\mathrel{\mathcal{H}}$-classes of $S_1$ and $S'_1$ are singletons, we obtain that $\iota([v\,u))=[\overline{\iota}(v)\,\overline{\iota}(u))$ for any $u,v\in X$.}
Now let us define a graph $\Gamma''=(X,E'')$ in the way that $\{u,v\}\in E''$ for distinct $u,v\in X$ if and only if $\{\overline{\iota}(u),\overline{\iota}(v)\}\in E'$. It is clear that $\Gamma''$ is isomorphic to $\Gamma'$. We intend to verify that the rank $2$ elements of $\paut{\Gamma''}$ and $\paut{\Gamma}$ coincide.
Let $\varphi=[v_1\,u_1)\vee [v_2\,u_2)$ be a rank $2$ element of $\is{X}$. By definition, $\varphi\in\paut{\Gamma''}$ if and only if either both $\{\overline{\iota}(u_1),\overline{\iota}(u_2)\}$ and $\{\overline{\iota}(v_1),\overline{\iota}(v_2)\}$ are edges, or both are non-edges in $\Gamma'$, that is, if and only if $[\overline{\iota}(v_1)\,\overline{\iota}(u_1))\vee [\overline{\iota}(v_2)\,\overline{\iota}(u_2))\in\paut{\Gamma'}$. As isomorphisms are easily seen to respect joins, we obtain that $$[\overline{\iota}(v_1)\,\overline{\iota}(u_1))\vee [\overline{\iota}(v_2)\,\overline{\iota}(u_2))= \iota\big([v_1\,u_1)\big)\vee \iota\big([v_2\,u_2)\big)= \iota\big([v_1\,u_1)\vee [v_2\,u_2)\big)=\iota(\varphi).$$ Hence $\varphi\in\paut{\Gamma''}$ if and only if $\varphi\in \iota^{-1}\big(\paut{\Gamma'}\big)=\paut{\Gamma}$, and so the rank $2$ elements of $\paut{\Gamma''}$ and $\paut{\Gamma}$, indeed, coincide.
Since both in $\paut{\Gamma''}$ and $\paut{\Gamma}$, the elements of rank at most $1$ are just the elements of the same kind in $\is X$, we deduce by Proposition \ref{prop:joins2} that $\paut{\Gamma}=\paut{\Gamma''}$. Thus Corollary \ref{cor:paut-eq} implies that either $\Gamma=\Gamma''$ or $\Gamma=\widetilde{\Gamma''}$, and since $\Gamma''$ is isomorphic to $\Gamma'$, our corollary follows. \end{Proof}
Motivated by this result and the Graph Reconstruction Conjecture, we propose a related question.
\begin{Probl} Given graphs $\Gamma_1$ and $\Gamma_2$, if their decks $\operatorname{Deck}(\Gamma_1)$ and $\operatorname{Deck}(\Gamma_2)$ coincide up to isomorphism and complementation, do $\Gamma_1$ and $\Gamma_2$ necessarily coincide up to isomorphism and complementation? \end{Probl}
This can be translated to the language of semigroups in the following way. Note that given a graph $\Gamma$, the partial automorphism monoids $\paut{\Gamma-v_i}$ ($v_i \in V$) are inverse submonoids of $\paut{\Gamma}$, specifically, the maximal Boolean inverse submonoids of height $|V(\Gamma)|-1$. Let $\operatorname{Deck}(\paut{\Gamma})$ denote the multiset of such inverse submonoids of $\paut{\Gamma}$.
\begin{Probl} \label{probl:last} Given two inverse monoids $\paut{\Gamma_1}$ and $\paut{\Gamma_2}$, if $\operatorname{Deck}(\paut{\Gamma_1})$ and $\operatorname{Deck}(\paut{\Gamma_2})$ coincide up to isomorphism, do we have $\paut{\Gamma_1} \cong \paut{\Gamma_2}$? \end{Probl}
We note that the answer to the above problems can be negative for small graphs. For example, take the following two graphs on four vertices:
\begin{center} \end{center}
One can check that $\operatorname{Deck}(\paut{\Gamma_1})$ and $\operatorname{Deck}(\paut{\Gamma_2})$ are the same, but $\paut{\Gamma_1}$ and $\paut{\Gamma_2}$ are different. However, we believe that for large enough graphs, the answer to Problem \ref{probl:last} may be positive.
\end{document} |
\begin{document}
\title{Path-wise solutions of SDEs driven by L\'evy processes} \def\fnsymbol{footnote}{\fnsymbol{footnote}} \def\hbox to\z@{$\m@th^{\@thefnmark}$\hss}{\hbox to\z@{$\m@th^{\@thefnmark}$\hss}} \footnotesize\rm\noindent \hspace*{16pt}\strut\footnote[0]{{\it AMS\/\ {\rm 1991} subject classifications}. 60H20, 60G17, 60H05}\footnote[0]{{\it Key words and phrases}. L\'evy process, path integral, $p$-variation, area process, stochastic differential equations} \normalsize\rm
\begin{abstract} In this paper we show that a path-wise solution to the following integral equation $$ Y_t = Y_0 + \int_0^t f(Y_s)\;dX_s \qquad Y_0=a \in \R^d $$ exists under the assumption that $X_t$ is a L\'evy process of finite $p$-variation for some $p \geq1$ and that $f$ is an $\alpha$-Lipschitz function for some $\alpha>p$. There are two types of solution, determined by the solution's behaviour at the jump times of the process $X$: one we call geometric, the other forward. The geometric solution is obtained by adding fictitious time and solving an associated integral equation. The forward solution is derived from the geometric solution by correcting the solution's jump behaviour.
L\'evy processes, generally, have unbounded variation. So we must use a pathwise integral different from the Lebesgue-Stieltjes integral. When $X$ has finite $p$-variation almost surely for $p<2$ we use Young's integral. This is defined whenever $f$ and $g$ have finite $p$ and $q$-variation for $1/p+1/q>1$ (and they have no common discontinuities). When $p>2$ we use the integral of Lyons. In order to use this integral we construct the L\'evy area of the L\'evy process and show that it has finite $(p/2)$-variation almost surely. \end{abstract}
\section*{Introduction}\label{s.intro}
In this paper I give a path-wise method for solving the integral equation \begin{equation}\label{e.1} Y_t=Y_0 + \int_0^t f(Y_s) \;dX_s\qquad Y_0=a\in\R^d \end{equation} when the driving process is a L\'evy process.
Typically, a L\'evy process a.s. has unbounded variation. The integral does not exist in a Lebesgue-Stieltjes sense. However, the integral still makes sense as a random variable due to the stochastic calculus of semi-martingales developed by the Strasbourg school \cite{myrintsto}.
The semi-martingale integration theory is not complete though. There are processes of interest which do not fit into the semi-martingale framework, for example the fractional Brownian motion. An alternative integral is provided by the path-wise approach studied by Lyons \cite{tel1}, \cite{tel5} and Dudley \cite{dud}. The basis of their papers is that of Young \cite{you}, who showed that the integral \begin{equation}\label{e.youdef} \int_0^t f\;dg \end{equation} is defined whenever $f$ and $g$ have finite $p$ and $q$-variation for $1/p+1/q>1$ (and they have no common discontinuities). For a comprehensive overview of the theory we recommend the lecture notes of Dudley and Norvai{\v s}a \cite{dudnor}.
Recently in \cite{miknor}, a system of linear Riemann-Stieltjes integral equations was solved when the integrator has finite $p$-variation for some $0<p<2$. These results are contained in Theorem \ref{thm.1}, where we moreover allow the vector field $f$ to be non-linear; this is possible because our approach extends the method of \cite{tel1}, \cite{tel5}.
The approach that I follow distinguishes two cases. The first is when the process has finite $p$-variation a.s., for some $p<2$. We use the Young integral \cite{you}. In \cite{tel1} \eqref{e.1} is solved when $X_t$ is a continuous path of finite $p$-variation for some $p<2$.
The second case is when the process has finite $p$-variation a.s., for some $p>2$. The Young integral is only defined when $f$ and $g$ have finite $p$ and $q$-variation for $1/p+1/q>1$. So an iteration scheme on the space of paths with finite $p$-variation does not work. However, Lyons defined an integral against a continuous function of $p$-variation for some $p>2$ \cite{tel5}. The integral is developed in the space of geometric multiplicative functionals (described in Appendix \ref{app.hom}). The key idea is that we enhance the path by adding an area function to it. If there is sufficient control of the pair, path {\em and} area, then the integral is defined. The canonical example in \cite{tel5} is Brownian motion. The area process enhancing the Brownian motion is the L\'evy area \cite[Ch.7, Sect.55]{levy}. I show that there is an area process of a L\'evy process which has finite $(p/2)$-variation a.s..
In order to solve \eqref{e.1} for a discontinuous function I add fictitious time during which linear segments remove the discontinuities, creating a continuous path. By solving for the continuous path and then removing the fictitious time we recover a solution for the discontinuous path. This is called a geometric solution. A second type of solution, which we call the forward solution, is derived from the geometric solution.
The first section treats the case where the discontinuous driving path has finite $p$-variation for some $p<2$. The second section treats the case where the path has finite $p$-variation for some $p>2$ only. The main proofs of the second section are deferred to the third section. In the appendix I prove the homeomorphic flow property for the solutions when the driving path is continuous. This is used in proving that forward solutions can be recovered from geometric solutions.
\section{Discontinuous processes - $p<2$}\label{s.pl2}
In this section we extend the results of \cite{tel1} to allow the driving path of \eqref{e.1} to have discontinuities. The results are applied to sample paths of some L\'evy processes, those that have finite $p$-variation a.s. for some $p<2$. Throughout this section $p\in[1,2)$ unless otherwise stated.
First, we determine the solution's behaviour when the integrator jumps. There are two possibilities to consider: the first is an extension of the Lebesgue-Stieltjes integral; the second is based on a geometric approach.
Suppose that the discontinuous integrator has bounded variation. The solution $y$ would jump $$ y_t-y_{t-} = f(y_{t-})\;(x_t-x_{t-}) $$ at a jump time $t$ of $x$. If $x$ has finite $p$-variation for some $1<p<2$ we insert these jumps at the discontinuities of $x$. We call a path $y$ with the above jump behaviour a forward solution.
The other jump behaviour we consider is the following: When a jump of the integrator occurs we insert some fictitious time during which the jump is traversed by a linear segment, creating a continuous path on an extended time frame. Then we solve the differential equation driven by the continuous path. Finally we remove the fictitious time component of the solution path. We call this a geometric solution because the solution has an 'instantaneous flow' along an integral curve at the jump times. This jump behaviour has been considered before by \cite{marc} and \cite{kur2}.
The disadvantage of the first approach is that the solution does not, generally, generate a flow of diffeomorphisms \cite{lea2}.
In this section we prove the following theorem:
\begin{thm}\label{thm.1} Let $x_t$ be a discontinuous function of finite $p$-variation for some $p<2$. Let $f$ be an $\alpha$-Lipschitz vector field for some $\alpha>p$. Then there exists a unique geometric solution to the integral equation \begin{equation}\label{e.2} y_t = y_0 + \int_0^t f(y_s)\;dx_s \qquad y_0=a \in\R^d. \end{equation} With the above assumptions, there exists a unique forward solution as well. \end{thm}
Before proving the theorem we recall the definitions of $p$-variation and $\alpha$-Lipschitz:
\begin{defn} The $p$-variation of a function ${ x }(s)$ over the interval $[0,t]$ is defined as follows: \begin{equation*} {\Vert}{ x }{\Vert}_{_{p,[0,t]}}=\bigg\{\sup_{\pi\in\pi[0,t]}\quad{\sum_\pi}{\vert}{ x }(t_k)-{ x }(t_{k-1}){\vert}^p\bigg\}^{1\over p} \end{equation*} where $\pi[0,t]$ is the collection of all finite partitions of the interval $[0,t]$. \end{defn} \begin{rem} This is the strong $p$-variation. Usually probabilists use the weaker form where the supremum is over partitions restricted by a mesh size which tends to zero. \end{rem}
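As a purely numerical illustration (added here and not used in any proof), the strong $p$-variation of a path observed at finitely many times can be computed by maximising over all partitions drawn from the sample points with a simple dynamic programme:
\begin{verbatim}
def p_variation(xs, p):
    """Strong p-variation of a finitely sampled scalar path: maximise
    sum |x(t_k) - x(t_{k-1})|^p over all partitions drawn from the
    sample points (O(n^2) dynamic programme), then take the 1/p root."""
    n = len(xs)
    best = [0.0] * n            # best[j]: optimal sum for the samples xs[0..j]
    for j in range(1, n):
        best[j] = max(best[i] + abs(xs[j] - xs[i]) ** p for i in range(j))
    return best[-1] ** (1.0 / p)

# For p = 1 this is the total variation; for p > 1 a coarse partition may
# dominate a fine one, which is why a supremum over partitions is taken.
xs = [0.0, 1.0, 0.0, 2.0, -1.0]
print(p_variation(xs, 1.0), p_variation(xs, 2.0))
\end{verbatim}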
\begin{defn}
A function $f$ is in $\lip(\alpha)$ for some $\alpha>1$ if \begin{equation*} \norm{f}{\infty} < \infty \;\hbox{and}\; {{\partial f}\over{\partial x_j}} \in \hbox{Lip}(\alpha -1)\quad j=1,\dots ,d\,. \end{equation*} Its norm is given by \begin{equation*} \norm{f}{\lip(\alpha)} \dfn \norm{f}{\infty} + \sum_{j=1}^d \lnorm{{{\partial{f}}\over{\partial{x_j}}}}{\lip(\alpha -1)} \qquad\qquad\hbox{for}\;\alpha>1. \end{equation*} \end{defn}
This is Stein's \cite{ste} definition of $\alpha$-Lipschitz continuity for $\alpha>1$. It extends the classical definition: $f$ is in $\lip(\alpha)$ for some $\alpha\in(0,1]$ if \begin{equation*} \snorm{f(x) - f(y)} \leq K \snorm{x-y}^{\alpha} \end{equation*} with norm \begin{equation*} \norm{f}{\infty} + \sup_{x \neq y} \frac{\snorm{f(x)-f(y)}}{\snorm{x-y}^{\alpha}}. \end{equation*}
\subsection{Geometric Solutions.} In this subsection we define a parametrisation for a c\`adl\`ag path $x$ of finite $p$-variation. The parametrisation adds fictitious time allowing the traversal of the discontinuities of the path $x$. We prove that the resulting continuous path $x^{\delta}$ has the same $p$-variation that $x$ has. We solve \eqref{e.2} driven by $x^{\delta}$ using the method of Lyons \cite{tel1}. Then we get a geometric solution of \eqref{e.2} by removing the fictitious time (i.e. by undoing the parametrisation).
\begin{defn}\label{d.tau} Let ${x}$ be a c\`adl\`ag path of finite $p$-variation. Let $\delta>0$, and for each $n \geq 1$ let $t_n$ be the time of the $n$'th largest jump of ${x}$. We define a map $\tau^{\delta} : [0,T]\rightarrow [0, T+\delta \sum_{i=1}^{\infty}\vert j(t_i) \vert^p] $ (where $j(u)$ denotes the jump of the path ${x}$ at time $u$) in the following way: \begin{equation}\label{e.tau} \tau^{\delta}(t) = t + \delta \sum_{n=1}^{\infty} \vert j(t_n)\vert^p \chi_{\{t_n \leq t\}}. \end{equation} The map $\tau^{\delta} : [0,T]\rightarrow[0,\tau^{\delta}(T)]$ extends the time interval into one on which we define the continuous process ${x}^{\delta}(s)$ by \begin{align}\label{e.xdel} {x}^{\delta}(s)&\nonumber\\ =& \left\{ \begin{array}{ll}
{x}(t) &\text{if}\; s = \tau^{\delta}(t) ,\\
{x}(t_n^-) + (s - \tau^{\delta}(t_n^-))\, j(t_n)\,\delta^{-1}\vert j(t_n) \vert ^{-p} &\text{if}\; s\in[\tau^{\delta}(t_n^-), \tau^{\delta}(t_n)) . \end{array} \right. \end{align} \end{defn} \begin{rems}
\begin{enumerate} \item $(s,\,{x}^{\delta}_s),\;s\in [0,\tau^{\delta}(T)]$ is a parametrisation of the driving path ${x}$. \item The terms $\snorm{j(t_n)}^{p}$ in \eqref{e.tau} ensure that the addition of the fictitious time does not make $\tau^{\delta}(t)$ explode. \item In Figure \ref{fig.1} we see an example of a parametrisation of a discontinuous path $x_s$ in terms of the pair $(t(s),\,y(s))$. \end{enumerate} \end{rems} The next proposition shows that the above parametrisation has the same $p$-variation as the original path, on the extended time frame $[0,\tau^{\delta}(T)]$. \begin{figure}\label{fig.1}
\end{figure}
\begin{propn}\label{p.pvarn} Let ${x}$ be a c\`adl\`ag path of finite $p$-variation. Let ${x}^{\delta}$ be a parametrisation of ${{x}}$ as above. Then \begin{equation*} \variation{{x}^{\delta}}{p}{\tau^{\delta}(T)} = \variation{{x}}{p}{T} \qquad \forall \delta > 0 . \end{equation*} \end{propn} \proof Let $\pi_0$ be a partition of $[0,\tau^{\delta}(T)]$ and set \begin{equation*} V_{x^{\delta}}(\pi_0) = \sum_{\pi_0}\vert {x}^{\delta}(t_i) - {x}^{\delta}(t_{i-1}) \vert^p . \end{equation*} We show that the value of $V_{x^{\delta}}(\pi_0)$ does not decrease when points lying in the interiors of the jump segments are moved to suitable endpoints of those segments.
Let $t_{i-1},t_i,t_{i+1}$ be three neighbouring points in the partition $\pi_0$ such that $t_i$ lies in a jump segment. Consider the following term: \begin{equation}\label{e.dom} \snorm{x^{\delta}_{t_i} - x^{\delta}_{t_{i-1}}}^p + \snorm{x^{\delta}_{t_{i+1}}-x^{\delta}_{t_i} }^p. \end{equation} We show that \eqref{e.dom} does not decrease when $x^{\delta}_{t_i}$ is replaced by one of $x^{\delta}_l$ and $x^{\delta}_r$, where $l$ and $r$ denote the left and right endpoints of the jump segment containing $t_i$.
For simplicity we set $a=x^{\delta}_{t_{i-1}}$, $b=x^{\delta}_{t_{i+1}}$, $c=x^{\delta}_l$ and ${x}=x^{\delta}_r-x^{\delta}_l\neq0$, so that the jump segment is traversed by the points ${d}={c}+k{x}$, $k\in[0,1]$. Let the function $f:[0,1]\rightarrow [0,\infty)$ be defined by \begin{equation*} f(k) = \vert {a} - {d}\vert^p + \vert {d} - {b}\vert^p , \qquad {d} = {c}+k{x}. \end{equation*} Each summand is the composition of a convex function of $k$ (the norm of an affine function of $k$) with the convex, non-decreasing function $t\mapsto t^p$ on $[0,\infty)$, $p\geq1$, so $f$ is convex on $[0,1]$ and attains its maximum at $k=0$ or $k=1$. To conclude the proof we move along the partition, replacing each point $t_i$ which lies in the interior of a jump segment by an endpoint of that segment chosen so that $V_{x^{\delta}}(\pi_0)$ does not decrease. The partition $\pi_0$ is thus replaced by a partition $\pi_0'$ whose points lie in the image of $\tau^{\delta}$. Therefore we have \begin{equation*} V_{x^{\delta}}(\pi_0) \leq V_{x^{\delta}}(\pi_0') = V_{x}(\pi_0'), \end{equation*} where on the right-hand side $\pi_0'$ is identified with the corresponding partition of $[0,T]$. Hence $\variation{{x}^{\delta}}{p}{\tau^{\delta}(T)} \leq \variation{{x}}{p}{T}$. Conversely, $\tau^{\delta}$ maps every partition of $[0,T]$ to a partition of $[0,\tau^{\delta}(T)]$ with the same sum, so $\variation{{x}}{p}{T} \leq \variation{{x}^{\delta}}{p}{\tau^{\delta}(T)}$, and equality follows.\endproof
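Definition \ref{d.tau} is easy to realise numerically for a path with finitely many jumps. The following sketch (an added illustration, not part of the argument; it treats a piecewise-constant path started at $0$, a special case of the definition) returns the breakpoints of the continuous path $x^{\delta}$ on the extended time interval.
\begin{verbatim}
def parametrise(jumps, T, delta, p):
    """Breakpoints (s, x^delta(s)) of the parametrisation in Definition d.tau
    for a piecewise-constant path with jumps [(t_1, j_1), ...], 0 < t_i < T,
    started at 0.  Between breakpoints, x^delta is constant in real time and
    traverses each jump linearly in the inserted fictitious time."""
    pts, shift, x = [(0.0, 0.0)], 0.0, 0.0
    for t, j in sorted(jumps):
        pts.append((t + shift, x))            # tau^delta(t-): start of the segment
        shift += delta * abs(j) ** p          # fictitious time spent on this jump
        x += j
        pts.append((t + shift, x))            # tau^delta(t): end of the segment
    pts.append((T + shift, x))
    return pts

# One jump of size 2 at time 1 on [0, 2], with delta = 1 and p = 1.5:
print(parametrise([(1.0, 2.0)], T=2.0, delta=1.0, p=1.5))
# [(0.0, 0.0), (1.0, 0.0), (1.0 + 2**1.5, 2.0), (2.0 + 2**1.5, 2.0)]
\end{verbatim}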
\begin{thm}\label{thm.geo} Let ${x}$ be a c\`adl\`ag path with finite $p$-variation for some $p<2$. Let $f$ be a $\lip(\gamma)$ vector field on $\R^n$ for some $\gamma> p$. Then there exists a unique geometric solution ${y}$, having finite $p$-variation which solves the differential equation \begin{equation}\label{e.21} d{y}_t = f({y}_t)\; d{x}_t \qquad {y}_0 = {a} \in \R^n. \end{equation} \end{thm} \proof Let $x^{\delta}$ be the parametrisation given in \eqref{e.xdel}. The theorem of section three of \cite{tel1} proves that there is a continuous solution ${y}^{\delta}$ which solves \eqref{e.2} on $[0,\tau^{\delta}(T)]$. Then $(s,\,{y}^{\delta}_s)$ is a parametrisation of a c\`adl\`ag path ${y}$ on $[0,\,T]$.
The solution is well-defined. To see this, consider two parametrisations of ${x}$ and note that there exists a monotonically increasing function $\lambda_s$ such that \begin{equation*} (s,{x}^{\delta}_s) = (\lambda_s,{x}^{\nu}_{\lambda_s}).\eendproof \end{equation*}
\subsection{Forward Solutions.}\label{ss.for} In this subsection we show how to recover forward solutions from geometric solutions. The idea behind our approach is to correct the jump behaviour of the geometric solution using a Taylor series expansion (Lemma \ref{l.tay}). The correction terms are controlled by $$ \sum_{i=1}^{\infty} \snorm{x_{t_i}-x_{t^-_i}}^2 , $$ which is finite because the path $x$ has finite $p$-variation for some $p<2$.
In the case where the driving path has only a finite number of jumps we note that the forward solution can be recovered trivially. It is enough to mark the jump times of $x$ and solve the differential equation on the components where $x$ is continuous, inserting the forward jump behaviour when the jumps occur. It remains to show that the forward solution exists when the driving path has a countably infinite number of jumps. The method we use requires the following property of the geometric solution:
\begin{thm}\label{thm.hom} Let $x$ be a continuous path of finite $p$-variation for some $p>1$. Let $f$ be in $\lip(\alpha)$ for some $\alpha>p$. The maps $(\pi_t)_{t\geq0}:\R^n\rightarrow \R^n$ obtained by varying the initial condition of the following differential equation generate a flow of homeomorphisms: \begin{equation}\label{eq.fde} d\pi_t = f(\pi_t)\;dx_t \qquad \pi_0 = Id, \quad (\text{the identity map}). \end{equation} \end{thm}
We leave the proof of Theorem \ref{thm.hom} until Appendix \ref{app.hom}. We note the uniform estimate \begin{equation}\label{e.simhom} \sup_{0\leq t \leq T}\slnorm{\pi^{a}_t-\pi^{b}_t} \leq C(T) \slnorm{a-b}. \end{equation}
The following lemma will enable estimates to be made when the geometric jumps are replaced by the forward jumps:
\begin{lem}\label{l.tay} Let $x$ be a c\`adl\`ag path with finite $p$-variation. Let $f$ be in $\lip(\alpha)$ for some $\alpha>p$. Let $\Delta y_i $ (resp. $\Delta z_i $) denote the jump of the geometric (resp. forward) solution corresponding to $\Delta x_i$, the $i$'th largest jump of $x$. Then we have the following estimate on the difference of the two jumps: \begin{equation*} \norm{\Delta y_i - \Delta z_i}{\infty} \leq K \vert \Delta x_i \vert^2 \end{equation*} where the constant $K$ depends on $\norm{f}{\lip(\alpha)}$. \end{lem}
\proof Parametrise the path $x$ so that it traverses its discontinuity in unit time. Solve geometrically over this interval with the solution having initial point $a$. Note that the forward jump is the first order Taylor approximation to the geometric jump. Then \begin{eqnarray} y_1(a) = y_0(a) &+& \left.\deriv{y_s(a)}{s}\right\vert_{s=0} + \half\;\left.\deriv{^2y_s(a)}{s^2}\right\vert_{s=\theta}\quad \hbox{for some}\;0<\theta<1\nonumber\\ &=& z_1(a) + \half\; \left.\deriv{^2y_s(a)}{s^2}\right\vert_{s=\theta}. \end{eqnarray} We estimate the second order term by \begin{eqnarray} \llnorm{\half\;\deriv{^2y_s(a)}{s^2}}{\infty} &=& \llnorm{\half\;\frac{d}{ds}f(y_s(a))(\Delta x_i)}{\infty}\nonumber\\ &\leq& \half\;\lnorm{\nabla f}{\infty} \lnorm{f}{\infty} \vert \Delta x_i\vert^2\nonumber\\ &\leq& \half\; \lnorm{f}{\lip(\alpha)}^2 \vert \Delta x_i\vert^2 \end{eqnarray} Both $\norm{\nabla f}{\infty}$ and $\norm{f}{\infty}$ are finite because $f$ is $\lip(\alpha)$ for some $\alpha >p \geq 1$.\endproof
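A quick numerical sanity check of the lemma (added for illustration; the vector field, the starting point and the Euler scheme are all hypothetical choices, not part of the argument) compares the geometric jump, obtained by traversing the jump in unit fictitious time, with the forward jump $f(a)\,\Delta x$:
\begin{verbatim}
import math

def geometric_jump(f, a, dx, steps=10000):
    """Traverse the jump in unit fictitious time: integrate y' = f(y) * dx
    on [0, 1] with an explicit Euler scheme (illustrative only)."""
    y, h = a, 1.0 / steps
    for _ in range(steps):
        y += h * f(y) * dx
    return y - a

def forward_jump(f, a, dx):
    return f(a) * dx

f, a = math.sin, 0.7                   # hypothetical smooth vector field and start
for dx in [0.4, 0.2, 0.1, 0.05]:
    diff = abs(geometric_jump(f, a, dx) - forward_jump(f, a, dx))
    print(dx, diff, diff / dx ** 2)    # the last ratio stays bounded, as in the lemma
\end{verbatim}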
\begin{thm}\label{thm.24} Let $x$ be a c\`adl\`ag path with finite $p$-variation. Let $f$ be in $\lip(\alpha)$ for some $\alpha >p$. Then there exists a unique forward solution to the following differential equation: \begin{equation}\label{e.unifor} dz_t = f(z_t)\;dx_t \qquad\qquad z_0=a. \end{equation} \end{thm}
\proof By Theorem \ref{thm.hom} there exists a unique homeomorphism $y$ which solves \begin{equation*} dy_t = f(y_t)\;dx_t \qquad\qquad y_0=a \end{equation*} in a geometric sense.
Label the jumps of $x$ by $j_{x}=\{j_i\}_{i=1}^{\infty}$ according to their decreasing size. Let $z^n$ denote the path made by replacing the geometric jumps of $y$ corresponding to $\{j_i\}_{i=1}^{n}$ by the forward jumps $\{f(z^n_{t_i-})\,\Delta x_i\}_{i=1}^n$. We show that the paths $(z^n)_{n\geq1}$ have a uniform limit.
We order the corrected jumps chronologically, say $\{t_i\}_{i=1}^n$. Then we estimate the following term using Lemma \ref{l.tay} and the uniform bound on the growth of $y$ given in \eqref{e.simhom}: \begin{eqnarray} \snorm{z^n_s(a) - y_s(a)} &\leq& \sum_{i=1}^n \snorm{y_{t_i,s}(z^n_{t_i}(a)) - y_{t_i,s}(y_{t_{i-1},t_i}(z^n_{t_{i-1}}(a)))}\nonumber\\ &\leq& C(T) \sum_{i=1}^n \snorm{z^n_{t_i}(a) - y_{t_{i-1},t_i}(z^n_{t_{i-1}}(a))}\nonumber\\ &\leq& C^2(T)\; K\;\sum_{i=1}^{\infty} \snorm{\Delta x_i}^2. \end{eqnarray} So we have the uniform estimate \begin{equation}\label{e.234} \norm{z^n-y}{\infty} \leq K(C_3(T),\norm{f}{\lip(\alpha)}) \sum_{i=1}^{\infty} \vert \Delta x_i \vert^2 < \infty \qquad \forall n \geq 1. \end{equation}
We use an analogous bound to get Cauchy convergence of $\{z^n\}_{n\geq1}$. Let $m, r \geq 1$. \begin{equation*} \norm{z^m - z^{m+r}}{\infty} \leq K (C(T,z^m),\norm{f}{\lip(\alpha)}) \sum_{i=m+1}^{\infty} \vert \Delta x_i \vert^2\,. \end{equation*} One notes that $\{C(T,z^m)\}$ are uniformly bounded, because of the boundedness of $C(T)= C(T,y)$ and the Lipschitz condition on $f$. Therefore we have the following estimate: \begin{equation*} \norm{z^m - z^{m+r}}{\infty} \leq L \sum_{i=m+1}^{\infty} \snorm{\Delta x_i}^2. \end{equation*} This implies that $\{z^n\}$ are Cauchy in the supremum norm because $x$ has finite $p$-variation $(p<2)$ which implies that $\sum_{m+1}^{\infty} \snorm{\Delta x_i}^2$ tends to zero as $m$ increases.\endproof
\begin{rem} Theorems \ref{thm.24} and \ref{thm.geo} combine to prove Theorem \ref{thm.1}. \end{rem}
\begin{cor}\label{cor.p} With the above notation, $z$ has finite $p$-variation. \end{cor} \proof Let $s<t \in [0,T]$. \begin{align*} \snorm{z_t-z_s} &\leq \snorm{(z_t-z_s)-(y_t-y_s)} + \snorm{y_t-y_s}\\ \intertext{where $(y_t-y_s)$ is the increment of the geometric solution starting from $z_s$ driven by the path $x_t$ on the interval $[s,T]$. Then} \snorm{(z_t-z_s)-(y_t-y_s)} \leq& \quad C\;\sum_{\substack{ j_x\arrowvert_{[s,t]} }} \snorm{\Delta x_i}^2 \qquad\text{ and } \qquad\snorm{y_t-y_s} \leq \variationt{x}{p}{s}{t}\,,\\ \intertext{which implies that} \snorm{z_t-z_s}^p &\leq 2^{p-1} \bigg\{ C^p \big(\sum_{\substack{j_x\arrowvert_{[s,t]} }} \snorm{\Delta x_i}^2\big)^p + \variationt{x}{p}{s}{t}^p\bigg\}\,,\\ \intertext{hence} \variation{z}{p}{T} &\leq 2^{(p-1)/p} \bigg\{ C^p \big(\sum_{\substack{ j_x\arrowvert_{[0,T]} }} \snorm{\Delta x_i}^2\big)^p + \variation{x}{p}{T}^p \bigg\}^{1/p} < \infty. \end{align*} \hspace{8cm}\endproof
\subsection{$p$-variation of L\'evy processes}\label{ss.plevy}
In this subsection we apply Theorem \ref{thm.1} to L\'evy processes which a.s. have finite $p$-variation.
L\'evy processes are the class of processes with stationary, independent increments which are continuous in probability. The class includes Brownian motion, although this process is atypical due to its continuous sample paths. Typically a L\'evy process will be a combination of a deterministic drift, a Gaussian process and a jump process. For further information on L\'evy processes we direct the reader to \cite{bertbk}.
The regularity of the sample paths of a L\'evy process has been studied intensively. In the 1960s several authors worked on the question of characterising the sample path $p$-variation. The following theorem, due to Monroe, gives the characterisation:
\begin{thm}\cite[Theorem 2]{mon}\label{thm.monroe} Let $(X_t)_{t\geq0}$ be a L\'evy process in $\R^n$ without a Gaussian part. Let $\nu$ be the L\'evy measure. Let $\beta$ denote the index of $X_t$, that is \begin{equation}\label{e.index} \beta \dfn \inf\bigg\{ \alpha >0 \,:\, \int_{\vert {y}\vert \leq 1} \vert {y}\vert^{\alpha}\, \nu(d{y}) < \infty\bigg\} \end{equation} and suppose that $1\leq \beta\leq2$. If $ \gamma> \beta$ then \begin{equation} \prob{\norm{X}{\gamma} < \infty} = 1 \end{equation} where the $\gamma$-variation is considered over any compact interval. \end{thm}
\begin{rem} Note that all L\'evy processes with a Gaussian part only have finite $p$-variation for $p>2$. \end{rem}
\begin{cor}\label{c.levy} Let $(X_t)_{t\geq0}$ be a L\'evy process with index $\beta<2$ and no Gaussian part. Let $f$ be a vector field in $\lip(\alpha)$ for some $\alpha>p$. Then, a.s., the following stochastic differential equation has a unique forward and a unique geometric solution: $$ dY_t = f(Y_t)\;dX_t\qquad Y_0=a. $$ \end{cor}
\proof The corollary follows immediately from Theorems \ref{thm.monroe} and \ref{thm.1}.\endproof
\section{Discontinuous processes - $p>2$}\label{s.pg2}
The goal of this section is to extend Corollary \ref{c.levy} so that any L\'evy process may serve as the integrator of \eqref{e.1}.
One problem is that the Young integral is no longer useful: the Picard iteration scheme we use fails condition \eqref{e.youdef} when $p>2$. However, we can use the method from \cite{tel5}. To define the integral we need to provide more information about the sample path. We do this by defining an area process of the L\'evy process. Then we prove that the enhanced process (path {\em and} area) has finite $p$-variation in the sense of Definition \ref{d.enhp}.
We parametrise the enhanced process in an analogous manner to \eqref{e.xdel} (adding fictitious time). Then we solve \eqref{e.1} in a geometric sense using the method for continuous paths $(p>2)$ given in \cite{tel5}. Finally, forward solutions are obtained by jump correction as before.
Before enhancing $(X_t)_{t\geq0}$ we give an example which shows that there exist L\'evy measures with index two. Hence a L\'evy process need not have a Gaussian part in order to have, a.s., finite $p$-variation only for $p>2$.
\begin{eg} {\allowdisplaybreaks
One can define the following measures on $\R$: \begin{align*} \nu_k\;(dx) &\dfn \snorm{x}^{-3+1/k}\;dx\qquad \snorm{x}\in((k+1)^{-3(k+1)},k^{-3k}] \dfn J_k\\ \eta_m\;(dx) &\dfn \sum_{k=1}^m \nu_k\;(dx\cap J_k\cap (-J_k)) . \end{align*} We show that $\eta \dfn \lim_{m\rightarrow\infty} \eta_m$ is a L\'evy measure. The integrability condition \begin{equation}\label{e.intcon} \int_{\snorm{x}\leq 1} \snorm{x}^2 \eta\;(dx) < \infty. \end{equation} must be satisfied. \begin{align*} \int_{\snorm{x}\leq 1} \snorm{x}^2 \eta_m\;(dx) &= 2 \int_0^1 \sum_{k=1}^m x^{-1+1/k} \chi_{J_k}(x)\;dx = 2 \sum_{k=1}^m \bigg[ k x^{1/k} \bigg]_{(k+1)^{-3(k+1)}}^{k^{-3k}}\\ &= 2 \sum_{k=1}^m k\bigg\{k^{-3} - (k+1)^{-3(1+1/k)}\bigg\}\\ &\leq 2 \sum_{k=1}^m k\bigg\{k^{-3} - 2^{-3(1+1/k)} k^{-3(1+1/k)}\bigg\}\\ &= 2 \sum_{k=1}^m k^{-2}\bigg\{1 - 2^{-3(1+1/k)} k^{-3/k}\bigg\}\\ &< C \sum_{k=1}^{\infty} k^{-2}\;<\infty, \end{align*} where $C$ is some suitable constant. We take the limit as $m$ tends to infinity on the left hand side to prove \eqref{e.intcon}.
Now we show that \begin{equation}\label{e.intpvar} \int_{\snorm{x}\leq 1} \snorm{x}^{\alpha} \eta\;(dx) = \infty \end{equation} for all $\alpha<2$.} {\allowdisplaybreaks Fix $\alpha<2$. Define the following number: \begin{align*} m(\alpha) &\dfn \inf\{ k\;:\; \alpha+1/k <2\} < \infty \qquad\text{as}\;\alpha<2.\\ \intertext{Let $m>m(\alpha)$. Then} &\int_{\snorm{x}\leq 1} \snorm{x}^{\alpha} \eta_m\;(dx) \\ &\geq 2 \sum_{k=m(\alpha)}^m {{1}\over{(\alpha+1/k-2)}}\bigg\{ k^{-3k(\alpha+1/k-2)}-(k+1)^{-3(k+1)(\alpha+1/k-2)}\bigg\}\\ &= 2 \sum_{k=m(\alpha)}^m {{1}\over{(2-(\alpha+1/k))}}\bigg\{ (k+1)^{-3(k+1)(\alpha+1/k-2)}- k^{-3k(\alpha+1/k-2)}\bigg\}\\ &\geq {{2}\over{2-\alpha}} \sum_{m(\alpha)}^m \bigg\{ (k+1)^{3(k+1)(2-(\alpha+1/k))}- k^{3k(2-(\alpha+1/k))}\bigg\}\\ &\rightarrow \infty \qquad\text{as}\; m\rightarrow \infty\,. \end{align*} } This proves that the index $\beta$ of $\eta$ equals two. Theorem \ref{thm.monroe} implies that the pure jump process associated to the L\'evy measure $\eta$ a.s. has finite $p$-variation for $p>2$ only. \end{eg}
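The two integrals in the example can also be checked numerically. The Python sketch below is an illustration only; the truncation levels and the exponent $\alpha=1.7$ are arbitrary choices. It evaluates the partial sums of $\int_{|x|\leq1}|x|^{2}\,\eta_m(dx)$ and of $\int_{|x|\leq1}|x|^{\alpha}\,\eta_m(dx)$ in closed form, level by level; the first column stabilises while the second grows without bound.

\begin{verbatim}
import numpy as np

def partial_sum(beta, m):
    # sum_{k<=m} 2 * int_{J_k} |x|^beta |x|^(-3+1/k) dx,
    # with J_k = ((k+1)^(-3(k+1)), k^(-3k)]
    total = 0.0
    for k in range(1, m + 1):
        lo = (k + 1.0) ** (-3.0 * (k + 1))
        hi = float(k) ** (-3.0 * k)
        p = beta - 3.0 + 1.0 / k            # exponent of the integrand
        if abs(p + 1.0) < 1e-12:            # logarithmic case
            total += 2.0 * (np.log(hi) - np.log(lo))
        else:
            total += 2.0 * (hi ** (p + 1.0) - lo ** (p + 1.0)) / (p + 1.0)
    return total

for m in (5, 10, 20, 40):
    # beta = 2 stabilises (integrability condition); beta = 1.7 < 2 blows up
    print(m, partial_sum(2.0, m), partial_sum(1.7, m))
\end{verbatim}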
The following theorem gives a construction of the L\'evy area of the L\'evy process $(X_t)_{t\geq0}$. The L\'evy area process and the L\'evy process form the enhanced process which we need in order to use the method of Lyons \cite{tel5}.
\begin{thm}\label{thm.areaconv} The $d$-dimensional L\'evy process $(X_t)_{t\geq0}$ has an anti-symmetric area process \begin{equation*} (A_{s,t})^{ij} \dfn \half \int_s^t X^i_{u-} \circ dX^j_u - X^j_{u-} \circ dX^i_u \qquad i,j = 1,2. \qquad\as \end{equation*} \end{thm} The proof is deferred to Section \ref{s.proofs}.
\begin{thm}\label{thm.parea} The L\'evy area of the L\'evy process $(X_t)_{t\geq0}$ a.s. has finite $(p/2)$-variation for $p>2$. That is \begin{equation*} \sup_{\pi} \big(\sum_{\pi} \snorm{A_{t_{k-1},t_k}}^{p/2}\big)^{2/p}\;<\infty\qquad\as \end{equation*} where the supremum is taken over all finite partitions $\pi$ of $[0,T]$. \end{thm}
\noindent The proof is deferred to Section \ref{s.proofs}.
Now we parametrise the sample paths of $(X_t)_{t\geq0}$ as before, see \eqref{e.xdel}.
\begin{propn} Parametrising the process $(X_t)_{t\geq0}$ does not affect the $(p/2)$-variation of its area process. \end{propn}
\proof The proof is similar to the proof of Proposition \ref{p.pvarn}. One can show that if $\lambda$ lies in a jump segment then $$ \snorm{A_{s,\lambda}}^{(p/2)} +\snorm{A_{\lambda,t}}^{(p/2)}\qquad s<\lambda<t $$ is maximised when $\lambda$ is moved to one of the endpoints of the jump segment.\endproof
With the parametrisation of the path and the area we can define the integral in the sense of Lyons \cite{tel5}. Consequently we have the following theorem:
\begin{thm} Let $(X_t)_{t\geq0}$ be a L\'evy process with finite $p$-variation for some $p>2$. Let $f$ be in $\lip(\alpha)$ for some $\alpha>p$. Then there exists, with probability one, a unique geometric and a unique forward solution to the following integral equation: \begin{equation}\label{e.ass} Y_t = Y_0 +\int_0^t f(Y_s)\;dX_s\qquad Y_0=a\in\R^d. \end{equation} \end{thm}
\begin{rem} When constructing the forward solution it is necessary that the sum $$ \sum_{n=1}^{\infty} \snorm{\Delta X_n}^2 $$ remains finite. This is guaranteed by the requirement on L\'evy measures to satisfy $$ \int_{\snorm{x}\leq1}\;\snorm{x}^2\wedge1\;\nu(dx) < \infty. $$ \end{rem}
\section{Proofs of Theorem \ref{thm.areaconv} and Theorem \ref{thm.parea}}\label{s.proofs}
For clarity throughout this section we assume that the L\'evy process $(X_t)_{t\geq0}$ is two dimensional and takes the following form: \begin{equation}\label{e.form} X_t=B_t+\int_{\snorm{x}\leq1}\;x\;(N_t(dx)-t\nu(dx)). \end{equation} That is, $(X_t)_{t\geq0}$ is the sum of a Gaussian process and a compensated pure jump process whose L\'evy measure is supported on $\{x\in\R^2\, :\, \snorm{x}\leq1\}$.
\begin{propn}\label{prop.areaconv} The $d$-dimensional L\'evy process $(X_t)_{t\geq0}$ has an anti-symmetric area process \begin{equation*} (A_{s,t})^{ij} \dfn \half \int_s^t X^i_{u-} \circ dX^j_u - X^j_{u-} \circ dX^i_u \qquad i,j = 1,2. \qquad\as \end{equation*} For fixed $s<t$ we obtain the area process by the following limiting procedure: \begin{equation*} (A_ {s,t})^{ij} = \lim_{n\rightarrow \infty} \sum_{m=0}^n \sum _{\substack{ k=1,\\ odd }} ^{2^m -1} A^{i,j}_{k,m}\qquad \as \end{equation*} where $A^{ij}_{k,m}$ is the area of the $(ij)$-projected triangle with vertices \begin{equation*} X(u_{(k+1)/2,m-1}),\; X(u_{(k-1)/2,m-1}), \;X(u_{k,m}) \end{equation*} where $u_{k,m} \dfn s+ k2^{-m}(t-s)$. Also we have the second order moment estimate \begin{equation}\label{e.2ndmom} \expn{(A_{s,t}^{ij})^2} \leq C(\nu) (t-s)^2. \end{equation} \end{propn}
\proof We define $A_{s,t}(n)$ \begin{align*} A_{s,t}(n) &\dfn \half \sum_{k=0}^{2^n-1} (X^{(1)}(u_{k,n})-X^{(1)}(s))(X^{(2)}(u_{k+1,n})-X^{(2)}(u_{k,n}))\\ &\qquad \qquad\qquad- (X^{(2)}(u_{k,n})-X^{(2)}(s))(X^{(1)}(u_{k+1,n})-X^{(1)}(u_{k,n}))\\ & = \sum_{k=0}^{2^n-1} B_{k,n} \end{align*} where $B_{k,n}$ is the (signed) area of the triangle with vertices \begin{equation*} X(s),\; X(u_{k,n}),\; X(u_{k+1,n}). \end{equation*} By considering the difference between $A_{s,t}(n)$ and $A_{s,t}(n+1)$ we see that \begin{equation*} B_{2k,n+1} + B_{2k+1,n+1} - B_{k,n} \end{equation*} is the area of the triangle with vertices \begin{equation*} X(u_{k,n}),\; X(u_{k+1,n}),\; X(u_{2k+1,n+1}) \end{equation*} which we denote by $A_{k,n}$. We re-order $A_{s,t}(n)$ \begin{align*} A_{s,t}(n) &= \half \sum_{m=0}^n \sum _{\substack{ k=1, \\ odd }} ^{2^m-1} \bigg(X(u_{k,m}) - d_{k,m}\bigg)\\ &\qquad\qquad\qquad\qquad\otimes\bigg(X(u_{(k+1)/2,m-1})-X(u_{(k-1)/2,m-1})\bigg)\\ &= \half \sum_{m=0}^n \sum _{\substack{ k=1, \\ odd }} ^{2^m-1} A_{k,m} \end{align*} where $d_{k,m} \dfn 1/2\, (X(u_{(k+1)/2,m-1}) + X(u_{(k-1)/2,m-1}))$. The convergence to the area process is completed using martingale methods.
Let $\sigf_n \dfn \sigma \big( X(u_{k,n}) \;:\; k=0,\ldots,2^n\big).$ Then \begin{lem} \begin{equation}\label{e.cent} \condexp{X(u_{k,m})}{\sigf_{m-1}} = d_{k,m} \qquad \as \end{equation} \end{lem} \proof For ease of presentation we let \begin{align*} U_1 &\dfn X(u_{k,m}) - X(u_{(k-1)/2,m-1})\\ U_2 &\dfn X((u_{(k+1)/2,m-1}) - X(u_{k,m}). \end{align*} Then \begin{align} &\condexp{X(u_{k,m}) - d_{k,m}}{\sigf_{m-1}} \nonumber\\ &= \condexp{X(u_{k,m}) - d_{k,m}}{X(u_{(k-1)/2,m-1}),X(u_{(k+1)/2,m-1})}\nonumber\\ &= \half\; \condexp{U_1 - U_2}{X(u_{(k-1)/2,m-1}),X(u_{(k+1)/2,m-1})} \nonumber \end{align} {\allowdisplaybreaks
Using the stationarity and the independence of the increments of $X$ we see that $U_1$ and $U_2$ are exchangeable, that is \begin{equation*} \prob{U_1 \in A ,\; U_2 \in B} = \prob{U_2 \in A ,\; U_1 \in B}\qquad \forall A,B \in \borel(\R^2). \end{equation*} The exchangeability extends to the random variables \begin{equation*} \big(U_i \vert X(u_{(k-1)/2,m-1}),X(u_{(k+1)/2,m-1})\big)\qquad i=1,2. \end{equation*} We deduce that \begin{equation*} \expn{U_1-U_2\;\vert\;X(u_{(k-1)/2,m-1}),X(u_{(k+1)/2,m-1})} = 0.\eendproof \end{equation*}
Returning to the proof of Proposition \ref{prop.areaconv}, we compute the variance of $A_{k,m}$. This will be used to show that \begin{equation*} \sup_{n\geq 1}\, \expn{A_{s,t}(n)^2} < \infty. \end{equation*}
\begin{align*} \E\big(&A^2_{k,m}\big)\\ = \E\bigg(&\bigg([ X^{(1)}(u_{k,m}) - d^{(1)}_{k,m}][U_1^{(2)}+U_2^{(2)}] - [ X^{(2)}(u_{k,m}) - d^{(2)}_{k,m} ][U_1^{(1)}+U_2^{(1)}]\bigg)^2 \bigg)\\ &= \quart \expn{\bigg\{(U_1^{(1)}-U_2^{(1)})(U_1^{(2)}+U_2^{(2)})-(U_1^{(2)}-U_2^{(2)})(U_1^{(1)}+U_2^{(1)})\}^2}\\ &= \quart\expn{(U_1^{(1)}U_2^{(2)})^2 -2 U_1^{(1)}U_2^{(2)}U_2^{(1)}U_1^{(2)} +(U_2^{(1)}U_1^{(2)})^2 }\\ &\dfn \quad(1)\quad+\quad(2)\quad+\quad(3). \end{align*} We use the independence of the increments and It\^o's formula for discontinuous semi-martingales to compute $(1),(2)$ and $(3)$. \begin{align*} (1) &= \expn{(U_1^{(1)}U_2^{(2)})^2} = \expn{(U_1^{(1)})^2}\expn{(U_2^{(2)})^2}. \end{align*} By applying It\^o's formula and using the stationarity of the L\'evy process we find that \begin{align*} &(3)=(1)=2^{-2m} \;(t-s)^2 \int_{\snorm{x}\leq 1} \snorm{x_1}^2 \;\nu(dx)\;\int_{\snorm{x}\leq 1} \snorm{x_2}^2 \;\nu(dx). \end{align*} Another application of It\^o's formula gives \begin{align*} (2)&= -2 \expn{U_1^{(1)}U_2^{(2)}U_2^{(1)}U_1^{(2)}} = -2 \expn{U_1^{(1)}U_1^{(2)}}\expn{U_2^{(2)}U_2^{(1)}}\\ &= - 2^{-2m+1}\; (t-s)^2 \;\bigg(\int_{\snorm{x}\leq 1} x_1\,x_2 \;\nu(dx)\bigg)^2. \end{align*} Collecting the terms together we have the following expression: \begin{align*} &\expn{A^2_{k,m}}= C_0(\nu)\; 2^{-2m+1} \;(t-s)^2 \\ \intertext{where} &C_0(\nu) \dfn \bigg\{\int_{\snorm{x}\leq 1} \snorm{x_1}^2 \;\nu(dx)\;\int_{\snorm{x}\leq 1} \snorm{x_2}^2 \;\nu(dx)-\bigg(\int_{\snorm{x}\leq 1} x_1\,x_2 \;\nu(dx)\bigg)^2\bigg\}. \end{align*}
Now we estimate the following term: \begin{align*} \expn{A_{s,t}^2(n)} &= \expn{\bigg(\sum_{m=1}^n \sum _{\substack{ k=1,\\ odd }} ^{2^m -1} A_{k,m}\bigg)^2}\\ \intertext{which through conditioning and independence arguments equals} &= \expn{\sum_{m=1}^n \sum _{\substack{ k=1,\\ odd }} ^{2^m -1} A_{k,m}^2}\\ &= C_0(\nu) \sum_{m=1}^n \sum _{\substack{ k=1,\\ odd }} ^{2^m -1} 2^{-2m+1}(t-s)^2\\ &\leq C_0(\nu) \sum_{m=1}^{\infty} \sum _{\substack{ k=1,\\ odd }} ^{2^m -1} 2^{-2m+1}(t-s)^2 \dfn C(\nu) (t-s)^2. \end{align*} We use the martingale convergence theorem to deduce that a.s. there is a unique limit of $A_{s,t}(n)$. Furthermore the last calculation implies that there is a moment estimate of the area process given by \begin{equation*} \expn{A^2_{s,t}} \leq C(\nu) \;(t-s)^2.\eendproof \end{equation*}
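The dyadic construction of Proposition \ref{prop.areaconv} is easy to implement for a sampled path. The Python sketch below is illustrative only: it uses a crude compound-Poisson surrogate for $X$ on a fixed grid rather than a genuine L\'evy process, and simply displays the successive approximations $A_{0,1}(n)$, which settle down as $n$ grows in accordance with the martingale argument above.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_path(n_points):
    # crude 2-d compound-Poisson surrogate for X on a fine grid
    jumps = rng.normal(scale=0.2, size=(n_points, 2))
    mask = (rng.random(n_points) < 0.05)[:, None]
    return np.cumsum(jumps * mask, axis=0)

def area_level(X, n):
    # A_{0,1}(n): sum of signed areas of the triangles B_{k,n} with apex X(0)
    idx = np.linspace(0, len(X) - 1, 2 ** n + 1).astype(int)
    P = X[idx] - X[idx[0]]
    a, b = P[:-1], P[1:]
    return 0.5 * np.sum(a[:, 0] * (b[:, 1] - a[:, 1])
                        - a[:, 1] * (b[:, 0] - a[:, 0]))

X = sample_path(2 ** 12 + 1)
for n in range(2, 11):
    print(n, area_level(X, n))   # successive approximations of the area
\end{verbatim}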
We note that there is another way that one could define an area process of a L\'evy process. One could define the area process for the truncated L\'evy processes and look for a limit as the small (compensated) jumps are put in. Using the above construction one can define $A^{\epsilon}_{s,t}$ for a fixed pair of times, corresponding to the L\'evy process $X^{\epsilon}$. With the $\sigma$-fields $(\sigg^{\epsilon})_{\epsilon>0}$ defined by $$ \gothic{G}^{\epsilon} \dfn \sigma ( X^{\delta} \,:\, \delta >\epsilon)\;\text{for} \;\epsilon >0 $$ we have the following proposition:
\begin{propn}\label{p.spamart} $(A^{\epsilon}_{s,t})_{\epsilon>0}$ form a $(\sigg^{\epsilon})$-martingale. \end{propn} \proof Let $\eta>\epsilon>0$. By considering the construction of the area given above for the truncated processes $X^{\eta}$ and $X^{\epsilon}$ we look at the difference at the level of the triangles $A^{\eta}_{k,n}$ and $A^{\epsilon}_{k,n}$. \begin{align*} \condexp{A^{\epsilon}_{k,n}-A^{\eta}_{k,n}}{\sigg^{\eta}} &= \E\bigg(A^{\eta,\epsilon}_{k,n}\\ &\quad+(X^{\eta,\epsilon}_{k,n} - d^{\eta,\epsilon}_{k,n})\otimes(X^{\eta}_{(k+1)/2,n-1}-X^{\eta}_{(k-1)/2,n-1})\\ &\quad+(X^{\eta}_{k,n} - d^{\eta}_{k,n})\otimes(X^{\eta,\epsilon}_{(k+1)/2,n-1}-X^{\eta,\epsilon}_{(k-1)/2,n-1})\big\vert \sigg^{\eta}\bigg) \end{align*} where the superscript $\eta,\epsilon$ signifies that the process is generated by the part of the L\'evy measure whose support is $(\epsilon,\eta]$. Using the spatial independence of the underlying L\'evy process we have \begin{align*} &= \expn{A^{\eta,\epsilon}_{k,n}} + \expn{(X^{\eta,\epsilon}_{k,n} - d^{\eta,\epsilon}_{k,n})}\otimes(X^{\eta}_{(k+1)/2,n-1}-X^{\eta}_{(k-1)/2,n-1})\\ &\qquad\qquad+(X^{\eta}_{k,n} - d^{\eta}_{k,n})\otimes\expn{(X^{\eta,\epsilon}_{(k+1)/2,n-1}-X^{\eta,\epsilon}_{(k-1)/2,n-1})}\\ &= 0\,.\eendproof \end{align*}
With the uniform control on the second moment of the martingale \begin{equation*} \expn{(A^{\epsilon}_{s,t})^2} \leq C(\nu) \;(t-s)^2 \qquad \forall \epsilon>0 \end{equation*} we conclude that $A^{\epsilon}_{s,t}$ converges a.s. as $\epsilon\rightarrow0$.
The algebraic identity \begin{equation}\label{e.algarea}
A_{s,u} = A_{s,t} + A_{t,u} + \half [X_{s,t},X_{t,u}] \qquad s<t<u \end{equation} for the anti-symmetric area process $A$ generated by a piecewise smooth path $X$ extends to the area process of the L\'evy process. This is due to \eqref{e.algarea} holding for the area processes $A^{\epsilon}$ of the truncated L\'evy processes $X^{\epsilon}$.
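For a piecewise smooth path the identity can be checked directly. Writing the area in increment form,
\begin{equation*}
A_{s,t}^{ij} = \half\int_s^t (X^i_r-X^i_s)\;dX^j_r - (X^j_r-X^j_s)\;dX^i_r ,
\end{equation*}
which is the form appearing in the construction of Proposition \ref{prop.areaconv}, and splitting the integral at $t$,
\begin{align*}
A_{s,u}^{ij} &= A_{s,t}^{ij} + \half\int_t^u (X^i_r-X^i_s)\;dX^j_r - (X^j_r-X^j_s)\;dX^i_r\\
&= A_{s,t}^{ij} + A_{t,u}^{ij} + \half\big( X^i_{s,t}\, X^j_{t,u} - X^j_{s,t}\, X^i_{t,u}\big)
= A_{s,t}^{ij} + A_{t,u}^{ij} + \half\,[X_{s,t},X_{t,u}]^{ij},
\end{align*}
where $X_{s,t} \dfn X_t - X_s$ and $[a,b] \dfn a\otimes b - b\otimes a$.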
\begin{propn}\label{prop.parea} The L\'evy area of the L\'evy process $(X_t)_{t\geq0}$ a.s. has finite $(p/2)$-variation for $p>2$. That is \begin{equation*} \sup_{\pi} \big(\sum_{\pi} \snorm{A_{t_{k-1},t_k}}^{p/2}\big)^{2/p}\;<\infty\qquad\as \end{equation*} where the supremum is taken over all finite partitions $\pi$ of $[0,T]$. \end{propn} \proof In Proposition \ref{prop.areaconv} we constructed the area process, a.s., for a fixed pair of times. This extends, a.s., to a countable collection of pairs of times. In the proof below we assume that the area process has been defined for the times \begin{equation*} k2^{-n}T, (k+1)2^{-n}T \qquad k=0,1,\ldots,2^n-1, \qquad n\geq1. \end{equation*}
The proof follows the method of estimation used in \cite{benterry}. To estimate the area process for two arbitrary times $u<v$ we split up the interval $[u,v]$ in the following manner:
We select the largest dyadic interval $[(k-1)2^{-n}T,k2^{-n}T]$ which is contained within $[u,v]$. Then we add dyadic intervals to either side of the initial interval, which are chosen maximally with respect to inclusion in the interval $[u,v]$. Continuing in this fashion we label the partition according to the lengths of the dyadics. We note that there are at most two dyadics of the same length in the partition, which we label $[l_{1,k},r_{1,k}]$ and $[l_{2,k},r_{2,k}]$ where $r_{1,k}\leq l_{2,k}$. Then \begin{equation*} [u,v] = \bigcup_{k=1}^{\infty} \bigcup_{i=1,2} [l_{i,k},r_{i,k}]. \end{equation*} We estimate $A_{u,v}$ using the algebraic formula \eqref{e.algarea}. \begin{align*} A_{l_{1,m},r_{2,m}} &= \sum_{k=1}^m \sum_{i=1,2} A_{l_{i,k},r_{i,k}}\\ & + \half \; \sum_{1\leq a\leq b \leq2} \sum_{1 \leq j < k \leq m} \bigg[ X_{r_{a,k}}-X_{l_{a,k}} , X_{r_{b,j}}-X_{l_{b,j}} \bigg]. \end{align*} Noting that \begin{align*}
&\sum_{1\leq a\leq b \leq2} \sum_{1 \leq j < k \leq m} \slnorm{\bigg[ X_{r_{a,k}}-X_{l_{a,k}} , X_{r_{b,j}}-X_{l_{b,j}}\bigg]} \\ &=\sum_{1\leq a\leq b \leq2} \sum_{1 \leq j < k \leq m} \slnorm{(X_{r_{a,k}}-X_{l_{a,k}})\otimes (X_{r_{b,j}}-X_{l_{b,j}})\\ &\qquad \qquad \qquad \qquad \qquad-\;(X_{r_{b,j}}-X_{l_{b,j}})\otimes (X_{r_{a,k}}-X_{l_{a,k}})}\\ &\leq \sum_{1\leq a\leq b \leq2} \sum_{1 \leq j < k \leq m}\slnorm{X_{r_{a,k}}-X_{l_{a,k}}}\slnorm{X_{r_{b,j}}-X_{l_{b,j}}}\\ &\leq \bigg( \sum_{k=1}^m \sum_{i=1,2} \slnorm{X_{r_{i,k}}-X_{l_{i,k}}}\bigg)^2 \end{align*} we have the estimate: \begin{equation}\label{e.areaest} \snorm{A_{u,v}}^{p/2} \leq 2^{(p/2)-1} \bigg[ \bigg(\sum_{k=1}^{\infty} \sum_{i=1,2} \snorm{A_{l_{i,k},r_{i,k}}}\bigg)^{p/2} + \half \bigg(\sum_{k=1}^{\infty} \sum_{i=1,2} \snorm{X_{r_{i,k}}-X_{l_{i,k}}}\bigg)^{p} \bigg]. \end{equation} Using H\"older's inequality, with $p>2$ and $\gamma>p-1$, we have \begin{align} \snorm{A_{u,v}}^{p/2} &\leq 2^{(p/2)-1} \bigg[ \bigg( \sum_{n=1}^{\infty} n^{-\gamma/((p/2)-1)} \bigg)^{(p/2)-1} \; \sum_{n=1}^{\infty} n^{\gamma} \bigg( \sum_{i=1,2} \snorm{A_{l_{i,k},r_{i,k}}} \bigg)^{p/2}\nonumber\\ &\qquad \qquad + \half \;\bigg( \sum_{n=1}^{\infty} n^{-\gamma/(p-1)} \bigg)^{p-1} \; \sum_{n=1}^{\infty} n^{\gamma} \bigg( \sum_{i=1,2} \snorm{X_{r_{i,k}}-X_{l_{i,k}}} \bigg)^{p} \bigg]\nonumber\\ &\leq C_1(p,\gamma)\; \sum_{n=1}^{\infty} n^{\gamma} \sum_{i=1,2} \snorm{A_{l_{i,k},r_{i,k}}}^{p/2} \nonumber\\ &\qquad+ C_2(p,\gamma) \; \sum_{n=1}^{\infty} n^{\gamma} \sum_{i=1,2} \snorm{X_{r_{i,k}}-X_{l_{i,k}}}^{p}\label{e.est}. \end{align}
One can uniformly bound $\snorm{A_{u,v}}^{p/2}$ for any pair of times $u<v \in [0,T]$ by extending the estimate in \eqref{e.est} over all the dyadic intervals at each level $n$, that is, \begin{align*}
\snorm{A_{u,v}}^{p/2}&\leq C_1(p,\gamma)\; \sum_{n=1}^{\infty} n^{\gamma} \sum_{i=1}^{2^n} \snorm{A_{l_{i,k},r_{i,k}}}^{p/2}\\ &\qquad\qquad + C_2(p,\gamma) \; \sum_{n=1}^{\infty} n^{\gamma} \sum_{i=1}^{2^n} \snorm{X_{r_{i,k}}-X_{l_{i,k}}}^{p}. \end{align*} If the right hand side is finite a.s. then the area can be defined for any pair of times.
The $(p/2)$-variation of the L\'evy area can be estimated by the same bound. \begin{align}\label{e.est2} \sup_{\pi} \sum_{\pi} \snorm{A_{u,v}}^{p/2}&\leq C_1(p,\gamma)\; \sum_{n=1}^{\infty} n^{\gamma} \sum_{i=1}^{2^n} \snorm{A_{l_{i,k},r_{i,k}}}^{p/2}\nonumber\\ &\qquad\qquad + C_2(p,\gamma) \; \sum_{n=1}^{\infty} n^{\gamma} \sum_{i=1}^{2^n} \snorm{X_{r_{i,k}}-X_{l_{i,k}}}^{p}. \end{align}
We use \eqref{e.2ndmom} to control the first sum \begin{equation*} \expn{\snorm{A_{s,t}}^{p/2}} \leq C\; (t-s)^{p/2} \qquad\text{for}\; p\leq 4. \end{equation*} So we have \begin{align*} \expn{\sum_{n=1}^{\infty} n^{\gamma} \sum_{i=1}^{2^n} \snorm{A_{l_{i,k},r_{i,k}}}^{p/2}} &\leq C \sum_{n=1}^{\infty}n^{\gamma} \sum_{i=1}^{2^n} (2^{-n}T)^{p/2}\\ &= C \sum_{n=1}^{\infty}n^{\gamma} 2^{-n((p/2)-1)}\\ &< \infty \qquad \text{for}\; p>2. \end{align*} This implies that the first term in the right hand side of \eqref{e.est2} is a.s. finite. Now we consider the second term of \eqref{e.est2}.
\begin{lem}\label{l.finsum} \begin{equation*} \sum_{n=1}^{\infty} n^{\gamma} \sum_{k=0}^{2^n -1} \snorm{X_{(k+1)2^{-n}T}-{X_{k2^{-n}T}}}^p\quad<\infty \quad\as \end{equation*} \end{lem} Before proving the lemma we recall a result of Monroe \cite{mon2}. \begin{defn}\label{df.min} Let $B_t$ be a Brownian motion defined on a probability space $(\Omega, \sigf, \P)$. A stopping time $T$ is said to be minimal if for any stopping time $S\leq T$, $B(T) \indist B(S)$ implies that a.s. $S=T$. \end{defn}
\begin{thm}\cite[Theorem 11]{mon2}\label{thm.monroe11} Let $(M_t)_{t\geq0}$ be a right continuous martingale. Then there is a Brownian motion $(\Omega, \siggt, B_t)$ and a family $(T_t)$ of $\siggt$-stopping times such that the process $B_{T_t}$ has the same finite distributions as $M_t$. The family $T_t$ is right continuous, increasing, and for each $t$, $T_t$ is minimal. Moreover, if $M_t$ has stationary independent increments then so does $T_t$. \end{thm} \begin{rem} It should be noted that the stopping times $T_t$ are not generally independent of $B_t$. However, in the case of $\alpha$-stable processes $0< \alpha<2$ one can use subordination to gain independence of the stopping times \cite{boch}. \end{rem}
\proofof{Lemma \ref{l.finsum}} Let $(\tau_t)_{t\geq0}$ denote the collection of minimal stopping times for which \begin{equation*} X_t \indist B_{\tau_t}. \end{equation*} The proof will be completed once it has been shown that \begin{equation}\label{e.lem1} \sum_{n=1}^{\infty} n^{\gamma} \sum_{k=0}^{2^n -1} \snorm{B_{\tau((k+1)2^{-n}T)}-{B_{\tau(k2^{-n}T)}}}^p\quad<\infty \quad\as \end{equation} The following inequality holds because Brownian motion is $(1/p^{\prime})$-H\"older continuous a.s. for $p^{\prime}>2$: \begin{align}\label{e.lem2} \snorm{B_{\tau(t_{k+1,n})}-{B_{\tau(t_{k,n})}}}^p &\leq C \,\snorm{\tau(t_{k+1,n}) - \tau(t_{k,n})}^{{{p}\over{p^{\prime}}}}\\ & \qquad \forall\,k=0,\ldots,2^n-1,\forall\,n\geq 1, \;\as\nonumber \end{align} where $t_{k,n} \dfn k2^{-n}T$ and $2< p^{\prime} <p$.
\cite[Theorem 1]{mon} shows that the index of the process $\tau(s)$ is half that of the L\'evy process. Therefore, with probability one, $\tau(s)$ has finite $(1+\delta)$-variation for all $\delta>0$.
\begin{thm}\cite[Theorem 5]{mon2}\label{thm.inf} If $\tau$ is a minimal stopping time and $\E (B_{\tau}) =0$, then $\E(\tau) = \E(B_{\tau}^2)$. \end{thm} Consequently the process $(\tau_t)_{t \geq 0}$ can be controlled in the following way: \begin{equation}\label{e.finmom} \expn{\tau_t} = \expn{B_{\tau_t}^2} = \expn{X_t^2} = t\,\int_{\snorm{x}<1} \snorm{x}^2 \;\nu(dx) \end{equation} where $\nu$ is the L\'evy measure corresponding to the process $X_t$. From \eqref{e.finmom} and Theorem \ref{thm.monroe11} we note that the process $\tau_t$ is a L\'evy process whose L\'evy measure, say $\mu$, satisfies the following: \begin{equation*} \int_0^1 \,x \;\mu(dx) < \infty. \end{equation*} From this result we deduce that the process $\tau_t$ a.s. has bounded variation. From \cite[Theorem 5]{shtat} we note that there is a positive constant $A$ such that \begin{equation*} \prob{\tau_t \leq A\;t \;,\; \forall t\geq0} = 1. \end{equation*} From the above bound and using the fact that $\tau$ has stationary independent increments one can show \begin{align*} \prob{\tau(t_{k+1,n}) - \tau(t_{k,n}) \leq A (t_{k+1,n} - t_{k,n}) = A 2^{-n}\: \big\vert\;\tau(t_{k,n}) } &=1, \\ \prob{\bigcap_{n\geq1} \bigcap_{k=0}^{2^n-1} \bigg(\snorm{\tau(t_{k+1,n}) - \tau(t_{k,n})} \leq A \,2^{-n} \bigg)} &= 1. \end{align*} Returning to \eqref{e.lem2} we see that \begin{align*} \snorm{B_{\tau(t_{k+1,n})}-{B_{\tau(t_{k,n})}}}^p &\leq C\,\snorm{\tau(t_{k+1,n}) - \tau(t_{k,n})}^{{{p}\over{p^{\prime}}}}\\ &\leq C\, A\, 2^{-n({{p}\over{p^{\prime}}})}\\ \intertext{which implies that} \sum_{n=1}^{\infty} n^{\gamma} \sum_{k=0}^{2^n -1} \snorm{B_{\tau((k+1)2^{-n}T)}-{B_{\tau(k2^{-n}T)}}}^p &\leq C\, A\,\sum_{n=1}^{\infty} n^{\gamma}2^{-n({{p}\over{p^{\prime}}}-1)}< \infty \end{align*} due to $p^{\prime}$ being chosen in the interval $(2,p)$.\endproof
This lemma concludes the proof that the bound in \eqref{e.est2} is finite, which shows that the area process a.s. has finite $(p/2)$-variation.\endproof
In this section we have proved that the area process exists and has finite $(p/2)$-variation when $(X_t)_{t\geq 0}$ has the form \eqref{e.form}. To prove Theorems \ref{thm.areaconv} and \ref{thm.parea} we note that a general L\'evy process has the form $$ X_t = at + B_t + L_t + \sum_{\substack{ 0\leq s< t \\ \snorm{\Delta X_s}\geq1}} \Delta X_s \qquad\as $$ So we need to add the area corresponding to the drift vector and to the jumps of size at least one. However, this part of the L\'evy process has bounded variation and is piecewise smooth, so there is no problem defining its area; similarly, this area a.s. has finite $(p/2)$-variation.
\appendix \section{Homeomorphic flows}\label{app.hom}
In this section we give a proof that the solutions, generated by \eqref{e.1} as the initial condition is varied, form a flow of homeomorphisms when the integrator is a continuous function. The proof modifies the one given in \cite{tel5} for the existence and uniqueness of solutions to \eqref{e.1}. The main idea is to bound, uniformly, a sequence of iterated maps; one projection of this sequence gives the convergence of the solutions started from two different initial points, while another bounds the difference between these solutions.
First, we need some notation.
\begin{defn} Let $T^{(n)}(\R^d)$ denote the truncated tensor algebra of length $n$ over $\R^d$. That is \begin{equation*} T^{(n)}(\R^d) \dfn \bigoplus_{i=0}^n (\R^d)^{\otimes i} \end{equation*} where $(\R^d)^{\otimes 0} = \R$ and $T^{(\infty)}(\R^d)$ denotes the tensor algebra over $\R^d$.
Let $\Delta = [0,T]\times [0,T]$. A map $X:\Delta \rightarrow T^{(n)}(\R^d)$ will be called a multiplicative functional of size $n$ if for all times $s<t<u$ in $[0,T]$ the following relation holds in $T^{(n)}(\R^d)\,:$ \begin{equation*} X_{st}\otimes X_{tu} = X_{su} \end{equation*} and $X^{(0)}_{st} \equiv 1$.
A map $X : \Delta \rightarrow T^{(n)}(\R^d)$ is called a classical multiplicative functional if $t\rightarrow X_t \dfn X^{(1)}_{0t}$ is continuous and piecewise smooth and \begin{equation}\label{e.class} X^{(i)}_{st} = \iint_{s<u_1<\ldots<u_i<t} \;dX_{u_1}\ldots dX_{u_i} \end{equation} where the right hand side is a Lebesgue-Stieltjes integral. We denote the set of all classical multiplicative functionals in $T^{(n)}(\R^d)$ by $S^{(n)}(\R^d)$. \end{defn}
\begin{defn}\label{d.control} We call a continuous function $\omega:\Delta \rightarrow \R^+$ a control function if it is super-additive and regular, that is, \begin{alignat*}{3} \omega\,(s,t) + \omega\,(t,u) &\leq \omega\,(s,u)& \qquad\forall\; s<t<u\in [0,T]\\ \omega\,(s,s) &= 0 &\forall \;s\in [0,T]\,. \end{alignat*} \end{defn} \begin{eg} Let $X$ be a path of strong finite $p$-variation. Then we can define the following control function: \begin{equation} \ost \dfn \variationt{X}{p}{s}{t}^p. \end{equation} \end{eg}
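To see that this example is indeed a control function, note that the union of a partition of $[s,t]$ with a partition of $[t,u]$ is a partition of $[s,u]$, so taking suprema gives
\begin{equation*}
\omega\,(s,t)+\omega\,(t,u) = \variationt{X}{p}{s}{t}^p + \variationt{X}{p}{t}{u}^p \leq \variationt{X}{p}{s}{u}^p = \omega\,(s,u),
\end{equation*}
while $\omega\,(s,s)=0$ because every partition of a degenerate interval gives a zero sum.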
\begin{defn}\label{d.enhp} A functional $X = (1,X^{(1)},\ldots,X^{(n)})$ defined on $T^{(n)}(\R^d)$ where $n=[p]$ is said to have finite $p$-variation if there is a control function $\omega$ such that \begin{equation}\label{e.varcon} \snorm{X_{st}^{(i)}} \leq {{\ost^{i/p}}\over{\beta (i/p)!}} \qquad \forall\; (s,t)\in \Delta,\; i=1,\ldots,n \end{equation} for some sufficiently large $\beta$ and $x! \dfn \Gamma(x+1)$. \end{defn} \begin{thm}\cite[Theorem 2.2.1]{tel5}\label{thm.enhance} Let $X^{(n)}$ be a multiplicative functional of degree $n$ which has finite $p$-variation, with $n\dfn [p]$ ($[p]$ denotes the integer part of $p$). Then for $m>n$ there is a unique multiplicative extension $X^{(m)}$ in $T^{(m)}(\R^d)$ which has finite $p$-variation. \end{thm}
\begin{rem} The above theorem shows that once a sufficient number of low order integrals associated to a path $X_t$ have been defined, then the remaining iterated integrals of $X_t$ are defined. \end{rem}
\begin{defn} We call a multiplicative functional $ X:\Delta\rightarrow T^{(n)}(\R^d)$ geometric if there is a control function $\omega$ such that for any positive $\epsilon$ there exists a classical multiplicative functional $ Y(\epsilon)$ which approximates $X$ in the following way: \begin{equation*} \sllnorm{\bigg( X_{st}- Y_{st}(\epsilon)\bigg)^{(i)}} \leq \epsilon \,\ost^{i/p} \qquad i=1,\ldots,n=[p]\,. \end{equation*} We denote the class of geometric multiplicative functionals with finite $p$\linebreak[4]-variation by $\Omega G (\R^d)^p$. \end{defn}
\begin{eg} Let $W_t$ be an $\R^d$-valued Brownian motion. Then the following functional $ W$ defined on $T^{(2)}(\R^d)$ belongs to $\Omega G (\R^d)^p$ for any $p>2$. \begin{equation}\label{e.bmgmf}
W_{st} \dfn \bigg(1,W_t - W_s, \iint_{s<u_1<u_2<t} \circ dW_{u_1}\, \circ dW_{u_2} \bigg) \end{equation} where $\circ dW_u$ denotes the Stratonovich integral. It should be noted that if one replaced the Stratonovich differential in \eqref{e.bmgmf} by the It\^o differential then one would not get an element of $\Omega G (\R^d)^p$. This is due to the quadratic variation term which occurs in the symmetric part of the area process $$ W^{(2)}_{st}=\iint_{s<u_1<u_2<t} dW_{u_1}\, dW_{u_2}.$$ \end{eg}
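To make the last remark concrete, the symmetric part of a geometric functional is determined algebraically by the first level, $\mathrm{Sym}\,(W^{(2)}_{st}) = \half (W_t-W_s)\otimes(W_t-W_s)$, and this is what the Stratonovich integral produces; by contrast, It\^o's formula gives
\begin{equation*}
\mathrm{Sym}\bigg(\iint_{s<u_1<u_2<t} dW_{u_1}\otimes dW_{u_2}\bigg) = \half (W_t-W_s)\otimes(W_t-W_s) - \half\,(t-s)\,I ,
\end{equation*}
so the extra quadratic variation term $\half(t-s)I$ prevents the It\^o enhancement from being approximated by classical multiplicative functionals in the sense above.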
It was shown in \cite{sip} that one had sufficient control of the above functional to generate path-wise solutions to SDEs driven by a Brownian motion. This control was derived from a moment condition in the same spirit as Kolmogorov's criterion for H\"older continuous paths. The moment condition was verified for the above area by the use of known stochastic integral results, though one could also derive it from a construction depending on the linearly interpolated Brownian motion.
There are two stages to defining the integral against a geometric multiplicative functional. The first gives a functional which is almost multiplicative (see \cite{tel5} for definition). The second associates, uniquely, a multiplicative functional to the almost multiplicative functional.
\begin{thm}\cite{tel5} There is a unique geometric multiplicative functional $Y$ which we call the integral of the 1-form $\theta$ against the geometric multiplicative functional $ X$. We denote this by \begin{equation*} Y_{st} \dfn \int_s^t \theta (X_u) \;\delta X. \end{equation*} \end{thm}
\begin{cor}\label{cor.pint} One has the following control on the $p$-variation of $Y$: \begin{equation} \sllnorm{\bigg(\int_s^t \theta (X_u) \;\delta X\bigg)^{(i)}} \leq \bigg(C \;\ost\bigg)^{i/p}/\bigg(\beta \bigg(i/p\bigg)!\bigg)\qquad i=1,\ldots,[p] \end{equation} where $C$ depends on $p, \norm{f}{\lip(\gamma)}, \gamma, \lambda, \beta,L$ and $[p]$. \end{cor}
The estimate is derived from estimating both the almost multiplicative functional and the difference of it from the integral.
We now state two lemmas which help prove that the solutions of \eqref{e.1} are homeomorphic flows when the initial condition is varied.
\begin{lem}\label{lem.gron} Let $X$ be in $\geo{\R^d}$ controlled by a regular $\omega_0$. Let $f:\R^n\rightarrow \hom(\R^d,\R^n)$ be a $\lip(\gamma)$ map for some $\gamma>p$. Let $ Y^{(i)}_{st},\, i=1,2$ denote the element in $\geo{\R^n}$ which solves the rough integral equation $$ Y^{(i)}_{st} = \int_s^t f(Y^{(i)})\,\delta X $$ with initial condition $Y^{(i)}_{0}=a_i,\, i=1,2$. Let $ W_{st}$ be the multiplicative functional which records the difference in the multiplicative functionals $ Y^{(1)}_{st}$ and $ Y^{(2)}_{st}$. Then \begin{equation}\label{e.origbd} \slnorm{ W^{(i)}_{st}} \leq \theta^i {{\ost^{(i/p)}}\over{\beta(i/p)!}}\qquad \forall\; i \geq 1, \end{equation} where $\theta = \snorm{a_1 -a_2}$, $\omega \dfn C\,\omega_0$, the constant $C$ depends on $p,$ $\norm{f}{\lip(\gamma)},$ $\beta,$ $\gamma$. The bound holds for all times $s\leq t$ on the interval $J\dfn \{u\,:\,\omega\,(0,u) \leq 1\}$. \end{lem}
\begin{lem}\label{lem.gron2} With the assumptions of Lemma \ref{lem.gron} one can estimate the difference of the increments of $Y^{(1)}_{st}$ and $Y^{(2)}_{st}$ for any pair of times $0\leq s<t$ which satisfy $\ost \leq 1$ as follows: \begin{align*} \snorm{Y^{(1)}_{st}-Y^{(2)}_{st}} &\leq \theta\,\exp\bigg[ {{1}\over{\beta(1/p)!}}\,\big( \omega\,(0,s) + \omega\,(0,s)^{(1/p)}\big)\bigg]\;{{\ost^{(1/p)}}\over{\beta(1/p)!}} \end{align*} In particular for any $t>0$ one has: \begin{equation}\label{e.gro} \snorm{Y^{(1)}_t - Y^{(2)}_t} \leq \snorm{a_1 -a_2}\; C(t). \end{equation} \end{lem}
Now we can prove that the solutions form a flow of homeomorphisms as the initial condition is varied.
\proofof{Theorem \ref{thm.hom}} The continuity of solutions follows from Lemma \ref{lem.gron2}. It remains to show that the inverse map exists and is continuous. This can be checked by repeating all the previous arguments using the reversed path $(X_{t-s})_{0\leq s\leq t}$ as the integrator.\endproof
The induction part of the proof of Lemma \ref{lem.gron} will require the following lemma about rescaling:
\begin{lem}\label{lem.scal}\cite{tel5} Let $ X$ be a multiplicative functional in $T^{([p])}(\R^d)$ which is of finite $p$-variation controlled by $\omega$. Let $( X, Y)$ be an extension of $ X$ to $T^{([p])}(\R^d\oplus\R^n)$ of finite $p$-variation controlled by $K \omega$. Then $( X, \phi Y)$ is controlled by \begin{equation*} \max \bigg\{ 1, \phi^{kp/i} K : 1 \leq k \leq i \leq [p] \bigg\} \omega \end{equation*} where $\phi \in \R$. In particular, if $\phi \leq K^{-[p]/p} \leq 1$ then $( X, \phi Y)$ is controlled by $\omega$. \end{lem}
\proofof{Lemma \ref{lem.gron}} We set up an iteration scheme of multiplicative functionals which we will bound uniformly, by induction. A projection of the sequence proves that a Picard iteration scheme converges to the solutions of \eqref{e.1} starting from $a_1$ and $a_2$. Another projection shows that the difference of these solutions is bounded.
Let $\epsilon>0$ and $\eta >1$. Let $V^{(1)}_{st}$ be the geometric multiplicative functional given by \begin{align*} V^{(1)}_{st}&\dfn(Z_{st}^{(1)(1)},\,Y_{st}^{(1)(1)},\,Y_{st}^{(1)(0)},\,Z_{st}^{(2)(1)},\,Y_{st}^{(2)(1)},\,Y_{st}^{(2)(0)},\,W_{st}^{(1)},\,\epsilon^{-1}X_{st})\\ &=\bigg(\int_s^t\,f(a_1)\;\delta X -a_1,\,\int_s^t\,f(a_1)\;\delta X,\,a_1,\,\int_s^t\,f(a_2)\;\delta X -a_2,\,\\ &\qquad\int_s^t\,f(a_2)\;\delta X,\,a_2,\, \int_s^t\,f(a_1)-f(a_2)\;\delta X,\,\epsilon^{-1}X_{st}\bigg). \end{align*} The iteration step is a two stage process. Given $V^{(m)}$ we set $$ \tilde V^{(m+1)} = \int k^{m}_{\theta} (V^{(m)})\;\delta V^{(m)} $$ where $k^m_{\theta}$ is the $1$-form on $((\R^n)^{\oplus7}\oplus \R^d)$ given by \begin{align*} &k^m_{\theta}(a_1,\ldots,a_8)\,(dA_1,\ldots,dA_8)\\ &\qquad = \bigg( a_1\,g(a_2,a_3)\,dA_8,\, dA_3 + \eta^{-m} dA_1,\, dA_2,\,a_4\,g(a_5,a_6)\,dA_8,\\ &\qquad\qquad\qquad \, dA_6 + \eta^{-m} dA_4, \, dA_5, \, \theta^{-1}\,g(a_2,a_4)\,dA_8,\, dA_8 \bigg). \end{align*} $g(x,y)$ is the 1-form appearing in \cite[Lemma 3.2]{tel1} which satisfies the following relation with respect to $f$: \begin{equation*}\label{e.g} f^i(x)-f^i(y) = \sum_j (x-y)^j\;g^{ij}(x,y). \end{equation*} $\tilde V^{(m+1)}$ is well defined because $g$ and $k^m_{\theta}$ are both $\lip(\gamma)$ for some $\gamma>p-1$.
We define $V^{(m+1)}$ to be the geometric multiplicative functional obtained by rescaling the first and fourth components of $\tilde V^{(m+1)}$ by $\epsilon\eta$ and the seventh component by $\epsilon$.
The uniform bound on the iterates $(V^{(m)})_{m\geq1}$ will be obtained by induction. $X$ is controlled by a regular $\omega_0$ so there exists a constant $C$ such that $V^{(1)}$ is controlled by $\omega \dfn C\,\omega_0$. Suppose that $V^{(k)}$ $(k\leq m)$ are controlled by $\omega$. From Corollary \ref{cor.pint} there is a constant $C_1$ such that $\tilde V^{(m+1)}$ is controlled by $C_1 \omega$. If we choose $\epsilon>0,\,\eta>1$ such that $\epsilon \leq C_1^{{{-[p]}\over{p}}}$ and $\epsilon \eta \leq C_1^{{{-[p]}\over{p}}}$, then Lemma \ref{lem.scal} implies that $V^{(m+1)}$ is controlled by $\omega$, completing the induction step.
The uniform control on the iterates $ V^{(m)}$ ensures the convergence of \linebreak $\{ Y^{(i)(m)}\}_{m\geq 1}$ to the solutions of \begin{align*} dY^{(i)}_t &=f(Y^{(i)}_t)\;dX_t \qquad Y^{(i)}_0=a_i,\;i=1,2. \end{align*}
Through the definition of $\{{}^{\theta} W^{(m)}\}_{m\geq 1}$, the sequence at the level of the paths will converge to the scaled difference of the two solutions $\theta^{-1} (Y^{(2)}-Y^{(1)})$. For $s,t$ in $J$ one has \begin{equation*} \snorm{{}^{\theta} W_{st}^{(i)}} \leq {{\ost^{i/p}}\over{\beta (i/p)!}} \qquad i=1,\ldots,[p], \end{equation*} which implies that
\begin{equation*} \snorm{ W_{st}^{(i)}} \leq \theta^i {{\ost^{i/p}}\over{\beta (i/p)!}} \qquad i=1,\ldots,[p].\eendproof \end{equation*}
\proofof{Lemma \ref{lem.gron2}} We define the following set of times: \begin{align}\label{e.times} t_0 \dfn 0 &\text{ and } t_j \dfn \inf\{ u > t_{j-1} \,:\, \omega\,(t_{j-1},u) = 1\} \qquad \forall\; j\in\{1,\ldots,n(s)\}\\ \text{where} \;&n(s) \dfn \max\{j\,:\, t_j \leq s\}\;\text{and}\; t_{n(s)+1}=s\nonumber\,. \end{align}
We solve the differential equation starting from $s$ and use \eqref{e.origbd} to show that \begin{equation*} \snorm{ W^k_{st}} \leq K(s)^k\,{{\ost^{(k/p)}}\over{\beta(k/p)!}} \end{equation*} where $K(s)$ is an upper bound on the supremum over all the possible differences of the paths $\snorm{ Y^{(1)}_s - Y^{(2)}_s}\,,$ at time $s$. The bound $K(s)$ is derived recursively by considering the analogous upper bound for the difference of the solutions to the differential equation over the time interval $[t_{i-1},t_i]$ given below: \begin{align} \snorm{Y^{(1)}_{t_{i}}-Y^{(2)}_{t_{i}}} &\leq \snorm{Y^{(1)}_{t_{i-1}}-Y^{(2)}_{t_{i-1}}} + \snorm{W_{t_{i-1}\,t_i}}\nonumber\\ &\leq \snorm{Y^{(1)}_{t_{i-1}}-Y^{(2)}_{t_{i-1}}}\bigg( 1 + {{\omega\,(t_{i-1},t_i)^{(1/p)}}\over{\beta(1/p)!}} \bigg)\nonumber\\ \intertext{which implies that} K(t_j) &\leq K(t_{j-1})\;\bigg\{ 1+ {{\omega\,(t_{j-1},t_j)^{(1/p)}}\over{\beta(1/p)!}}\bigg\}\qquad j=1,\ldots,n(s)+1.\nonumber \end{align} Therefore \begin{align*} \snorm{ W^k_{st}} &\leq K(t_0)^k\,\prod_{j=1}^{n(s)+1} \, \bigg\{ 1+ {{\omega\,(t_{j-1},t_j)^{(1/p)}}\over{\beta(1/p)!}}\bigg\}^k\;{{\ost^{(k/p)}}\over{\beta(k/p)!}}\\ &\leq \theta^k\,\exp\bigg[ k\,\big(\sum_{j=1}^{n(s)} {{\omega\,(t_{j-1},t_j)^{(1/p)}}\over{\beta(1/p)!}} + {{\omega\,(t_{n(s)},s)^{(1/p)}}\over{\beta(1/p)!}}\big)\bigg]\;{{\ost^{(k/p)}}\over{\beta(k/p)!}}, \end{align*} noting that $\omega(t_{j-1},t_j) = 1$ and using the sub-additivity of $\omega$ we obtain \begin{align*} &\leq \theta^k\,\exp\bigg[ {{k}\over{\beta(1/p)!}}\,\big( \omega\,(0,s) + \omega\,(0,s)^{(1/p)}\big)\bigg]\;{{\ost^{(k/p)}}\over{\beta(k/p)!}}. \end{align*} By considering the above bound at the level of the paths $(k=1)$ and repeatedly using the triangle inequality one deduces \eqref{e.gro}.\endproof
\end{document} |
\begin{document}
\begin{abstract} In this work we look for planar central configurations of the $1+n$ body problem which remain central configurations after the addition of one or two satellites. We determine all such configurations in two cases: the addition of two satellites when all satellites have equal infinitesimal masses, and the addition of one satellite when the infinitesimal masses are not necessarily equal. \end{abstract}
\noindent{{\it Key words.} Planar central configuration, stacked central configuration, planar coorbital satellites problem.}
\section{Introduction}
\hspace{0.5cm} Central configurations of a system of $N$ bodies form one of the most classical and relevant topics in Celestial Mechanics. They are configurations such that the total Newtonian acceleration of every body is equal to a constant multiplied by the position vector of this body with respect to the center of mass of the configuration. Central configurations allow homographic motion, i.e., motion where the configuration of the system changes its size but keeps its shape. One of the reasons why central configurations are interesting is that they allow one to obtain explicit homographic solutions of the $N$-body problem. They also arise as the limiting configuration of a total collapse.
Central configurations are invariant under rotations and homotheties, so we are interested in the classes of central configurations modulo such transformations. There is an extensive literature concerning these configurations; see e.g. \cite{hagihara}, \cite{saari}, \cite{wintner} for classical background.
In this work we study central configurations of the planar $1+n$ body problem, where we have one dominant mass and $n$ infinitesimal masses, called satellites, on a plane. This problem was first considered by Maxwell \cite{maxwell} in an attempt to construct a model for Saturn's rings. Considering satellites with small and equal masses, Hall \cite{hall} showed that in a central configuration of $n$ satellites the large body is at the center of a circle which passes through the satellites. Moreover, in this unpublished work, Hall showed that, if $n\geq e^{27000}$, there is a unique central configuration of this problem, namely the one where the satellites are at the vertices of a regular polygon. Casasayas, Llibre and Nunes \cite{llibre1} proved that the regular polygon is the only one if $n \geq e^{73}$. Cors, Llibre and Ollé \cite{llibre2} obtained numerical evidence that there is only one central configuration if $n\geq 9$ and that every central configuration is symmetric with respect to a straight line. Moreover, they proved that there are only 3 symmetric central configurations of the $1+4$ body problem. Albouy and Fu \cite{albouy} proved that all central configurations of the $1+4$ body problem are symmetric, which settles the question in this case.
In the case where the satellites do not necessarily have equal masses, Renner and Sicardy \cite{sicardy} obtained results about the inverse problem, that is, given a configuration of the coorbital satellites, find the infinitesimal masses making it a central configuration. They also studied the linear stability. Corbera, Cors and Llibre \cite{llibre3}, considering the $1+3$ body problem, found two different classes exhibiting symmetric and nonsymmetric configurations. Moreover, when two infinitesimal masses are equal, they provided evidence that the number of central configurations varies from five to seven.
Hampton \cite{hampton} provided a new family of planar central configurations for the $5$-body problem with the property that two bodies can be removed and the remaining three bodies still form a central configuration. Such configurations are called stacked central configurations. Mello and Llibre \cite{mello} studied one case of stacked planar central configurations with $5$ bodies in which three bodies are at the vertices of an equilateral triangle and the other two bodies lie on a mediatrix of the triangle.
In this work we study stacked central configurations of the planar $1+n$ body problem in two situations: the first, the addition (or removal) of one satellite when the infinitesimal masses are arbitrary, and the second, the addition (or removal) of two satellites when all satellites have equal masses. Moreover, the argument used in the latter case also shows that, in a central configuration of the coorbital satellite problem with $n\geq 5$, if two satellites have the same mass then their removal does not result in a central configuration.
\section{Preliminaries}\label{preliminaries}
\hspace{0.5cm}Consider $N$ masses, $m_1,...,m_N$, in $\mathbb{R}^2$ subject to their mutual Newtonian gravitational interaction. Let $M=\mathrm{diag}\{m_1,m_1,...,m_N,m_N\}$ be the matrix of masses and let $q=(q_1,...,q_N)$, $q_i\in \mathbb{R}^2$, be the position vector. The equations of motion in an inertial reference frame with origin at the center of mass are given by $$M\ddot{q} = \frac{\partial V}{\partial q},$$
where $V(q_1,...,q_N)=\displaystyle\sum_{1\leq i<j\leq N}\frac{m_im_j}{\|q_i-q_j\|}.$
A non-collision configuration $q=(q_1,...,q_N)$ with $\sum_{i=1}^Nm_iq_i =0$ is a {\it central configuration} if there exists a positive constant $\lambda$ such that $$M^{-1}V_q = \lambda q.$$
We are interested in the planar $N=1+n$ body problem, where the big mass is equal to $1$ and has position $q_0=0$. The remaining $n$ bodies with positions $q_i$, called satellites, have masses $m_i =\mu_i\epsilon$, $i=1,\ldots,n$, where $\mu_i\in \mathbb{R}^{+}$ and $\epsilon > 0$ is a small parameter that tends to zero.
In every central configuration of the planar $1+n$ body problem the satellites lie on a circle centered at the big mass \cite{llibre1,hall}, i.e. the satellites are coorbital. Since we are interested in central configurations modulo rotations and homothetic transformations, we can assume that the circle has radius $1$ and that $q_1=(1,0)$.
We exclude collisions in the definition of central configuration and take as coordinates the angles $\theta_i$ between two consecutive particles. See e.g. \cite{llibre1} for details. In these coordinates the configuration space is the simplex $$\Delta = \{\theta = (\theta_1,...,\theta_n); \sum_{i=1}^n\theta_i = 2\pi, \theta_i>0, i=1,\ldots,n\}$$ and the equations characterizing the central configurations of the planar $1+n$ body problem are \begin{eqnarray} &&\mu_2f(\theta_1)+\mu_3f(\theta_1+\theta_2)+...+\mu_nf(\theta_1+\theta_2+...+\theta_{n-1})=0, \nonumber \\ &&\mu_3f(\theta_2)+\mu_4f(\theta_2+\theta_3)+...+\mu_1f(\theta_2+\theta_3+...+\theta_{n})=0, \nonumber \\ &&\mu_4f(\theta_3)+\mu_5f(\theta_3+\theta_4)+...+\mu_2f(\theta_3+\theta_4+...+\theta_{n}+\theta_1)=0,\nonumber\\ &&...\label{sistemageral}\\ &&\mu_nf(\theta_{n-1})+\mu_1f(\theta_{n-1}+\theta_{n})+...+\mu_{n-2}f(\theta_{n-1}+\theta_n+\theta_1+...+\theta_{n-3})=0, \nonumber\\ &&\mu_1f(\theta_{n})+\mu_2f(\theta_{n}+\theta_{1})+...+\mu_{n-1}f(\theta_{n}+\theta_1+\theta_2+...+\theta_{n-2})=0, \nonumber\\ &&\theta_1+...+\theta_n=2\pi,\nonumber \end{eqnarray}
where $f(x) = \displaystyle\sin(x)\left(1 - \frac{1}{8|\sin^3(x/2)|} \right).$ \\ \begin{definition} We say that a solution $(\theta_1,\ldots,\theta_n)$ of the system (\ref{sistemageral}) is a central configuration of the planar $1+n$ body problem associated to the masses $\mu_1,...,\mu_n$. \end{definition} The following results can be found in \cite{albouy} and exhibit the properties of the function $f$ used to prove our results.
\begin{lemma}\label{lema1}
The function
$$f(x)=\sin(x)\left(1 - \frac{1}{8|\sin^3(x/2)|} \right), \ \ x\in (0,2\pi)$$ satisfies: \begin{enumerate}[i)]
\item $f(\pi-x)=-f(\pi+x), \forall x \in (0,\pi);$
\item $\displaystyle f'(x)=\cos(x)+\frac{3+\cos(x)}{16|\sin^3(x/2)|}\geq f'(\pi)=-7/8$, for all $x\in(0,2\pi);$
\item $f'''(x)> 0$, for all $x\in(0,2\pi);$ \item In $(0,\pi)$ there is a unique critical point $\tet c$ of $f$ such that $\tet c>3\pi/5$, $f'(\theta)>0$ in $(0,\tet c)$ and $f'(\theta)<0$ in $(\tet c,\pi).$ \end{enumerate} \end{lemma}
\begin{lemma}\label{lema2albouy} Consider four points $t_1^L,t_1^R,t_2^L,t_2^R$ such that $0<t_1^L<t_2^L<\theta_c<t_2^R<t_1^R<2\pi, f(t_1^L)=f(t_1^R)=f_1$ and $f(t_2^L)=f(t_2^R)=f_2$. Then $t_2^L+t_2^R<t_1^L+t_1^R.$ \end{lemma}
\begin{cor}\label{corolarioalbouy} Consider $0<t_1<\theta_c<t_2<2\pi.$ If $f(t_1)\geq f(t_2)$ then $t_1+t_2>2\theta_c>6\pi/5.$ \end{cor}
\begin{figure}
\caption{Graph of $f$.}
\end{figure}
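Although not needed for the proofs, these facts about $f$ are easy to confirm numerically. The following Python sketch is purely illustrative; the bracketing intervals passed to the root finder are our own choice. It locates the three zeros of $f$ in $(0,2\pi)$, which are used repeatedly in the next sections.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def f(x):
    return np.sin(x) * (1.0 - 1.0 / (8.0 * np.abs(np.sin(x / 2.0)) ** 3))

# zeros of f in (0, 2*pi): expect pi/3, pi and 5*pi/3
zeros = [brentq(f, a, b) for a, b in [(0.5, 1.5), (2.5, 3.5), (4.8, 5.5)]]
print(np.array(zeros) / np.pi)   # approximately [1/3, 1, 5/3]
\end{verbatim}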
\section{Adding one satellite }\label{addone}
\hspace{0.5cm}In this section we consider the planar problem of $1+n$ bodies, where the satellites do not necessarily have the same mass. We are interested in the stacked central configurations obtained when one new satellite is added. In the following result we show that such a construction is possible in only one case. See Fig. \ref{fig_ccp34}.
\begin{theorem}\label{teoremastacked1} Let $(\theta_1,...,\theta_{n-1},\theta_n)$ be a planar central configuration of the $1+n$ body problem where the $n$ satellites have masses $\mu_i, i=1,...,n.$ Suppose that we add another satellite with mass $\mu_{n+1}$ forming a new central configuration of the $1+N$ body problem, with $N=n+1$. Then $n=3, \theta_1=\theta_2=\theta_3=2\pi/3$, the fourth satellite is on the bisector of one of these angles and the masses of the satellites are all the same. \end{theorem}
\noindent{\bf Proof: } First we will treat the case $n=2$. By hypothesis $(\theta_1,\theta_2)$ is a central configuration relative to the masses $\mu_1$ and $\mu_2.$ Without loss of generality, suppose that the third satellite, with mass $\mu_3$, is placed such that $(\theta_1,\theta_2',\theta_3')$ is a central configuration of the $1+3$ body problem, where $\theta_2=\theta_2'+\theta_3'$. So the following equations are satisfied: $$\mu_2f(\theta_1)=0$$ $$\mu_1f(\theta_2)=0$$ $$\mu_2f(\theta_1)+\mu_3f(\theta_1+\theta_2')=0$$ $$\mu_3f(\theta_2')+\mu_1f(\theta_2'+\theta_3')=0$$ $$\mu_1f(\theta_3')+\mu_2f(\theta_3'+\theta_1)=0$$
\begin{figure}
\caption{The only stacked central configuration according to Theorem \ref{teoremastacked1}.}
\label{fig_ccp34}
\end{figure}
It is easy to see that the above equations give us $$f(\theta_1)=f(\theta_1+\theta_2')= f(\theta_2')=0.$$
The roots of $f$ in $(0,2\pi)$ are $\pi/3$, $\pi$ and $5\pi/3$, which makes the above equalities impossible.
Consider now the case $n=3.$ In the same way, suppose that the fourth satellite, with mass $\mu_4$, is placed between the first satellite and the third one, forming the central configuration $(\theta_1, \theta_2, \theta_3', \theta_4'),$ with $\theta_3=\theta_3'+\theta_4'.$ The following equations are satisfied: $$\mu_2f(\theta_1)+\mu_3f(\theta_1+\theta_2)=0,$$ $$\mu_3f(\theta_2)+\mu_1f(\theta_2+\theta_3)=0,$$ $$\mu_1f(\theta_3)+\mu_2f(\theta_1+\theta_3)=0,$$ $$\mu_2f(\theta_1)+\mu_3f(\theta_1+\theta_2)+\mu_4f(\theta_1+\theta_2+\theta_3')=0,$$ $$\mu_3f(\theta_2)+\mu_4f(\theta_2+\theta_3')+\mu_1f(\theta_2+\theta_3'+\theta_4')=0,$$ $$\mu_4f(\theta_3')+\mu_1f(\theta_3'+\theta_4')+\mu_2f(\theta_3'+\theta_4'+\theta_1)=0.$$
Since $\theta_3=\theta_3'+\theta_4'$ the equations above give us $$f(\theta_3')=f(\theta_3'+\theta_2)=f(\theta_3'+\theta_2+\theta_1)=0.$$
So $\theta_3'=\pi/3, \theta_3'+\theta_2=\pi$ and $\theta_3'+\theta_2+\theta_1=5\pi/3.$ Hence, $\theta_1=\theta_2=\theta_3=2\pi/3$ and $\theta_3'=\theta_4'=\pi/3$. Substituting these values into the above equations, we see that $\mu_1=\mu_2=\mu_3=\mu_4$.
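As an aside, this configuration is easy to check numerically. The Python sketch below is purely illustrative and not part of the proof; the helper names are our own. It evaluates, for equal masses, the maximal residual of system (\ref{sistemageral}) at the equilateral configuration $(2\pi/3,2\pi/3,2\pi/3)$ and at the stacked configuration $(2\pi/3,2\pi/3,\pi/3,\pi/3)$; both vanish to machine precision.

\begin{verbatim}
import numpy as np

def f(x):
    return np.sin(x) * (1.0 - 1.0 / (8.0 * np.abs(np.sin(x / 2.0)) ** 3))

def max_residual(thetas):
    # largest equal-mass residual of the system characterising
    # central configurations of the planar 1+n body problem
    n, res = len(thetas), []
    for i in range(n):
        s, r = 0.0, 0.0
        for j in range(1, n):
            s += thetas[(i + j - 1) % n]
            r += f(s)
        res.append(abs(r))
    return max(res)

print(max_residual([2 * np.pi / 3] * 3))                      # ~ 0
print(max_residual([2 * np.pi / 3, 2 * np.pi / 3,
                    np.pi / 3, np.pi / 3]))                   # ~ 0
\end{verbatim}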
Finally we treat the case $n\geq 4.$ Assume that $(\theta_1,...,\theta_{n-1},\theta_n)$ and $(\theta_1,...,\theta_{n-1},\theta_n',\theta_{n+1}'),$ with $\theta_n=\theta_n'+\theta_{n+1}',$ are central configurations associated to the masses $\mu_1,...,\mu_n$ and $\mu_1,...,\mu_n,\mu_{n+1}$, respectively. In fact we are assuming, without loss of generality, that the new satellite, with mass $\mu_{n+1}$, was placed between the first satellite and the $n$th one.
The first equations of the respective systems are $$\mu_2f(\theta_1)+\mu_3f(\theta_1+\theta_2)+...+\mu_nf(\theta_1+...+\theta_{n-1})=0$$ and $$\mu_2f(\theta_1)+\mu_3f(\theta_1+\theta_2)+...+\mu_nf(\theta_1+...+\theta_{n-1})+\mu_{n+1}f(\theta_1+...+\theta_{n-1}+\theta_n')=0.$$ This gives us $$f(\theta_1+...+\theta_{n-1}+\theta_n')=-f(\theta_{n+1}')=0.$$
In the same way, the second equations are $$\mu_3f(\theta_2)+...+\mu_nf(\theta_2+...+\theta_{n-1})+\mu_1f(\theta_2+...+\theta_{n-1}+\theta_n)=0$$ and $$\mu_3f(\theta_2)+...+\mu_nf(\theta_2+...+\theta_{n-1})+\mu_{n+1}f(\theta_2+...+\theta_{n-1}+\theta_n')+\mu_1f(\theta_2+...+\theta_{n-1}+\theta_n'+\theta_{n+1}')=0.$$ So $$f(\theta_2+...+\theta_{n-1}+\theta_n')=-f(\theta_{n+1}'+\theta_1)=0.$$
Analogously, comparing the $i$th equations of the respective systems, we have $$f(\theta_{n+1}'+\theta_1+\theta_2+...+\theta_{i-1})=0.$$
Thus, we found $n$ distinct roots of $f$ in $(0,2\pi)$ given by $$\theta_{n+1}',\theta_{n+1}'+\theta_1,\theta_{n+1}'+\theta_1+\theta_2,...,\theta_{n+1}'+\theta_1+\theta_2+...+\theta_{n-1}.$$
So we have a contradiction for $n\geq 4$, and the theorem follows.
\begin{flushright}\caixapreta
\end{flushright} \section{Adding two satellites}\label{addtwo}
\begin{lemma}\label{lema_stacked2} Let $C=\{x_1,...,x_m\}$, with $x_i>0$ and $x_1+...+x_m\leq 2\pi$. Consider four non-empty subsets of $C$, $A_1,A_2,B_1,B_2$ such that $A_1\cup A_2=B_1\cup B_2=C$ and $B_1\subsetneq A_1$. Suppose that $$f\left(\sum_{x_i\in A_1}x_i\right)=f\left(\sum_{x_i\in A_2}x_i\right)$$ and $$f\left(\sum_{x_i\in B_1}x_i\right)=f\left(\sum_{x_i\in B_2}x_i\right).$$
Then $$\sum_{x_i\in C}x_i>2\theta_c>6\pi/5$$ and exactly one of the situations below must occur: \begin{itemize}
\item $\displaystyle\sum_{x_i\in A_1}x_i=\sum_{x_i\in A_2}x_i$;
\item $\displaystyle\sum_{x_i\in B_1}x_i=\sum_{x_i\in B_2}x_i$;
\item $\displaystyle\sum_{x_i\in A_1}x_i=\sum_{x_i\in B_2}x_i.$ \end{itemize} \end{lemma}
\noindent{\bf Proof: } By hypothesis we can use Lemma \ref{lema2albouy} and, observing that $$\displaystyle\sum_{x_i\in A_1}x_i+\displaystyle\sum_{x_i\in A_2}x_i=\displaystyle\sum_{x_i\in B_1}x_i+\displaystyle\sum_{x_i\in B_2}x_i=\displaystyle\sum_{x_i\in C}x_i,$$ we get $$\displaystyle\sum_{x_i\in A_1}x_i=\displaystyle\sum_{x_i\in A_2}x_i$$ or $$\displaystyle\sum_{x_i\in B_1}x_i=\displaystyle\sum_{x_i\in B_2}x_i.$$ Since $B_1\subsetneq A_1$ and hence $A_2\subsetneq B_2$, only one of the equalities above can be satisfied. So, by Corollary \ref{corolarioalbouy}, $$\sum_{x_i\in C}x_i>2\theta_c>6\pi/5.$$ \begin{flushright}\caixapreta
\end{flushright}
Considering all satellites with the same mass, the following three propositions give all possible ways of obtaining a new planar central configuration of $1+(n+2)$ bodies by adding two satellites to a planar central configuration of $1+n$ bodies.
\begin{figure}
\caption{Illustration of the situation treated in Proposition \ref{prop_stacked1}}
\label{fig_stacked1}
\end{figure}
\begin{prop}\label{prop_stacked1} Let $(\theta_1,...,\theta_n)$ be a planar central configuration of $1+n$ bodies, with $n\geq4$ and satellites with the same mass. Suppose that we add two new satellites separated on the circle by at least two of the original satellites. Then the new $1+(n+2)$ bodies cannot form a central configuration. \end{prop} \noindent{\bf Proof: } Without loss of generality, suppose that the new satellites are placed one between the first and second satellites and the other between the $k$th and $(k+1)$th satellites, where $3\leq k\leq n-1$. See Fig. \ref{fig_stacked1}. So, we get the central configurations $$(\theta_1,\theta_2,...,\theta_k,...,\theta_n)$$ and $$(\theta_1^*,\theta_2^*,...,\theta_k^*,\theta_{k+1}^*,\theta_{k+2}^*,...,\theta_{n+2}^*)$$ of $1+n$ and $1+(n+2)$ bodies, respectively, satisfying $$\theta_1=\theta_1^*+\theta_2^*,$$ $$\theta_k=\theta_{k+1}^*+\theta_{k+2}^*,$$ $$\theta_i=\theta_{i+1}^* \mbox{ for } 2\leq i\leq k-1,$$ and $$\theta_i=\theta_{i+2}^* \mbox{ for } k+1\leq i \leq n.$$
The respective systems are \begin{eqnarray} &&f(\theta_1)+f(\theta_1+\theta_2)+...+f(\theta_1+\theta_2+...+\theta_{n-1})=0, \nonumber \\ &&f(\theta_2)+f(\theta_2+\theta_3)+...+f(\theta_2+\theta_3+...+\theta_{n})=0, \nonumber \\ &&f(\theta_3)+f(\theta_3+\theta_4)+...+f(\theta_3+\theta_4+...+\theta_{n}+\theta_1)=0,\nonumber\\ &&...\label{sistema_staked1}\\ &&f(\theta_{n-1})+f(\theta_{n-1}+\theta_{n})+...+f(\theta_{n-1}+\theta_n+\theta_1+...+\theta_{n-3})=0, \nonumber\\ &&f(\theta_{n})+f(\theta_{n}+\theta_{1})+...+f(\theta_{n}+\theta_1+\theta_2+...+\theta_{n-2})=0 \nonumber \end{eqnarray} and \begin{eqnarray} &&f(\theta_1^*)+f(\theta_1^*+\theta_2^*)+...+f(\theta_1^*+\theta_2^*+...+\theta_{n+1}^*)=0, \nonumber \\ &&f(\theta_2^*)+f(\theta_2^*+\theta_3^*)+...+f(\theta_2^*+\theta_3^*+...+\theta_{n+2}^*)=0, \nonumber \\ &&f(\theta_3^*)+f(\theta_3^*+\theta_4^*)+...+f(\theta_3^*+\theta_4^*+...+\theta_{n+2}^*+\theta_1^*)=0,\nonumber\\ &&...\label{sistema_staked2}\\ &&f(\theta_{n+1}^*)+f(\theta_{n+1}^*+\theta_{n+2}^*)+...+f(\theta_{n+1}^*+\theta_{n+2}^*+\theta_1^*+...+\theta_{n-1}^*)=0, \nonumber\\ &&f(\theta_{n+2}^*)+f(\theta_{n+2}^*+\theta_{1}^*)+...+f(\theta_{n+2}^*+\theta_1^*+\theta_2^*+...+\theta_{n}^*)=0 \nonumber \end{eqnarray}
The third equation of (\ref{sistema_staked2}) together with the second equation of (\ref{sistema_staked1}) give us $$f(\theta_2+...+\theta_{k-1}+\theta_{k+1}^*)+f(\theta_2+...+\theta_n+\theta_1^*)=0\Rightarrow$$ $$f(\theta_2+...+\theta_{k+1}^*)=f(\theta_2^*).$$
The $(k+1)$th equation of (\ref{sistema_staked2}) together with the $k$th of (\ref{sistema_staked1}) give us $$f(\theta_{k+1}^*)+f(\theta_k+...+\theta_n+\theta_1^*)=0\Rightarrow$$ $$f(\theta_{k+1}^*)=f(\theta_2^*+\theta_2+...+\theta_{k-1}).$$
So, by Lemma \ref{lema_stacked2}, we have $$\theta_2^*+\theta_2+...+\theta_{k-1}+\theta_{k+1}^*>\pi.$$
The $(k+3)$th equation of (\ref{sistema_staked2}) together with the $(k+1)$th equation of (\ref{sistema_staked1}) and the first equations of (\ref{sistema_staked1}) and (\ref{sistema_staked2}) give us, respectively $$f(\theta_{k+1}+...+\theta_n+\theta_1^*)+f(\theta_{k+1}+...+\theta_n+\theta_1+...+\theta_{k-1}+\theta_{k+1}^*)=0$$ and $$f(\theta_1^*)+f(\theta_1^*+\theta_2+...+\theta_{k-1}+\theta_{k+1}^*)=0.$$ So $$ f(\theta_{k+1}+...+\theta_n+\theta_1^*)=f(\theta_{k+2}^*)$$ and $$f(\theta_1^*)=f(\theta_{k+2}^*+\theta_{k+1}+...+\theta_n).$$
Again, by Lemma \ref{lema_stacked2} $$\theta_1^*+\theta_{k+2}^*+\theta_{k+1}+...+\theta_n>\pi.$$
Therefore we have the contradiction $$\theta_1+...+\theta_n>2\pi.$$ \begin{flushright}\caixapreta
\end{flushright}
\begin{figure}
\caption{Illustration of the situation treated in Proposition \ref{prop_stacked2}}
\label{fig_stacked2}
\end{figure}
\begin{prop}\label{prop_stacked2} Let $(\theta_1,...,\theta_n)$ be a planar central configuration of $1+n$ bodies, with $n\geq2$ and satellites with the same mass. Suppose that we add two satellites separated by one of the original satellites. Then the new configuration with $1+(n+2)$ bodies is a central configuration exactly in the cases below. See Fig. \ref{fig2_stacked2}.
\noindent $n=2:$\\ Original configuration: collinear = $(\pi,\pi)$.\\ New configurations: $$\mbox{square = }\left(\frac{\pi}{2},\frac{\pi}{2},\frac{\pi}{2},\frac{\pi}{2}\right), $$ or $$\mbox{kite = }\left(\frac{\pi}{3},\frac{\pi}{3},\frac{2\pi}{3},\frac{2\pi}{3}\right).$$ $n=4:$\\ Original configuration: kite = $\left(\frac{\pi}{3},\frac{\pi}{3},\frac{2\pi}{3},\frac{2\pi}{3}\right).$\\ New configuration: regular 6-gon = $\left(\frac{\pi}{3},\frac{\pi}{3},\frac{\pi}{3},\frac{\pi}{3},\frac{\pi}{3},\frac{\pi}{3}\right)$. \end{prop}
\noindent{\bf Proof: } First, consider the case $n=2.$ Suppose that we have the central configurations $(\tet1,\tet2)$ and $(\tet1^*,\tet2^*,\tet3^*,\tet4^*)$ of $1+2$ and $1+4$ bodies, respectively, where $\tet1=\tet1^*+\tet2^*$ and $\tet2=\tet3^*+\tet4^*$. The equations are: \begin{eqnarray} f(\tet1^*)&=&f(\tet4^*),\label{eq1n2n4}\\ f(\tet3^*)&=&f(\tet2^*),\label{eq2n2n4}\\ f(\tet2^*)+f(\tet2^*+\tet3^*)&=&f(\tet1^*).\label{eqn2n4} \end{eqnarray}
By Corollary \ref{corolarioalbouy}, the equations (\ref{eq1n2n4}) and (\ref{eq2n2n4}) give us $$\tet1^*=\tet4^* \mbox{ or } \tet3^*=\tet2^*.$$
In fact, by Proposition 2 in \cite{albouy} we have $$\tet1^*=\tet4^* \mbox{ and } \tet3^*=\tet2^*.$$ So $\tet1=\tet2=\pi.$
Since $\tet1^*=\pi-\tet2^*$, the equation (\ref{eqn2n4}) becomes $h(\tet2^*)=0,$ where $$h(x):=f(x)+f(2x)-f(\pi-x), \ x\in (0,\pi).$$ It is easy to see that $h'''(x)>0$, because $f'''>0$ on $(0,2\pi)$. Hence, $h$ has at most three roots in $(0,\pi).$ We can check that $h(\pi/2)=h(\pi/3)=h(2\pi/3)=0.$ Therefore, $\tet2^*=\pi/2,$ $\tet2^*=\pi/3$ or $\tet2^*=2\pi/3.$ The first root gives us the square configuration and the last two roots correspond to the same configuration, the kite, given by $(\pi/3,\pi/3,2\pi/3,2\pi/3).$
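These roots can be verified numerically from the closed form of $f$ recalled below in the treatment of the case $n=4$; a minimal Python sketch of the check (the names \texttt{f} and \texttt{h} are as above):
\begin{verbatim}
from math import sin, cos, pi

def f(x):
    # closed form recalled in the case n = 4:
    # f(x) = cos(x/2) (8 sin^3(x/2) - 1) / (4 sin^2(x/2))
    return cos(x/2) * (8*sin(x/2)**3 - 1) / (4*sin(x/2)**2)

def h(x):
    return f(x) + f(2*x) - f(pi - x)

for x in (pi/2, pi/3, 2*pi/3):
    assert abs(h(x)) < 1e-12   # h vanishes at pi/2, pi/3 and 2*pi/3
\end{verbatim}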
\begin{figure}
\caption{Stacked central configurations according to Proposition \ref{prop_stacked2}}
\label{fig2_stacked2}
\end{figure}
Consider now $n\geq 3$ and suppose that the two new satellites were placed separated by the original second satellite, as in Fig. \ref{fig_stacked2}. Suppose that we get the central configurations $(\theta_1,...,\theta_n)$ and $(\theta_1^*,...,\theta_{n+2}^*)$ satisfying $$\theta_1=\theta_1^*+\theta_2^*,$$ $$\theta_2=\theta_3^*+\theta_4^*,$$ $$\theta_i=\theta_{i+2}^*, \mbox{ for } 3\leq i \leq n.$$
Let us see the case $n=3$. The equations reduce to \begin{eqnarray}
f(\theta_1^*)&=&f(\theta_3+\theta_4^*),\label{eqprop2st1} \\
f(\theta_3+\theta_1^*)&=&f(\theta_4^*),\label{eqprop2st2}\\
f(\theta_2^*)&=&f(\theta_3^*),\label{eqprop2st3}\\ f(\tet2^*)+f(\tet2^*+\tet3^*)&=&f(\tet3+\tet1^*)+f(\tet1^*).\label{eqprop2st4} \end{eqnarray}
By Lemma \ref{lema_stacked2}, (\ref{eqprop2st1}) and (\ref{eqprop2st2}) we have $$\theta_1^*+\theta_4^*+\theta_3>2\theta_c>6\pi/5.$$
It follows from this fact and from (\ref{eqprop2st3}) that $\theta_2^*=\theta_3^*$.
By Lemma \ref{lema_stacked2}, from (\ref{eqprop2st1}) and (\ref{eqprop2st2}) we have exactly one of the following situations \begin{eqnarray} \nonumber \theta_1^*=\theta_4^*,\\ \theta_1^*=\theta_3+\theta_4^*,\\ \nonumber \theta_4^*=\theta_3+\theta_1^*. \end{eqnarray} Suppose $\theta_1^*=\tet3+\tet4^*$. The equation (\ref{eqprop2st2}) becomes \begin{equation}\label{eqprop2st5} f(\tet4^*)=f(\tet4^*+2\tet3). \end{equation}
Since $(\tet1,\tet2,\tet3)$ is a planar central configuration of the $1+3$ body problem, we know that $\tet1=\tet2, \tet1=\tet3$ or $\tet2=\tet3.$ See \cite{llibre2}. As $\tet1=\tet1^*+\tet2^*=\tet3+\tet4^*+\tet2^*>\tet3,$ we have $$\tet1=\tet2>\tet3 \mbox{ or } \tet1>\tet2=\tet3.$$ In the first case, we have $\tet3>4\pi/9 \mbox{ and } \tet1<7\pi/9.$ From (\ref{eqprop2st5}) we have $\pi/4<\tet4^*<\pi/3$ and so by Lemma \ref{lema2albouy}, (\ref{eqprop2st5}) implies $$2\tet4^*+2\tet3>\pi/3+\pi \Rightarrow \tet1^*=\tet4^*+\tet3>2\pi/3\Rightarrow \tet2^*=\tet1-\tet1^*<7\pi/9-2\pi/3=\pi/9.$$ So $f(\tet2^*)<f(\pi/9)<-7.$ Therefore (\ref{eqprop2st4}) is not satisfied.
If $\tet1>\tet2=\tet3$ then $\tet3<5\pi/18.$ Equation (\ref{eqprop2st5}) gives us $\tet4^*>\pi/3$ and hence $\tet3=\tet2=\tet3^*+\tet4^*>\pi/3>5\pi/18>\tet3.$
Analogously we see that the case $\tet4^*=\tet3+\tet1^*$ is impossible too. So $$\theta_1^*=\theta_4^*$$ and, consequently $$\theta_1=\theta_2.$$ So, (\ref{eqprop2st1}) becomes \begin{equation}\label{eqp2st} f(\tet1^*)=f(\tet3+\tet1^*). \end{equation}
Since $(\theta_1,\theta_1,\theta_3)$ is a central configuration of $1+3$ bodies, we see that $$f(\tet1)+f(2\tet1)=0$$ and we will consider the three roots of this equation again. See \cite{llibre2}.
If $\tet1=2\pi/3$, then $\tet3 =2\pi/3$ and, by (\ref{eqp2st}) we have $$\tet1^*=\pi/3.$$
But, it is easy to see that the resultant configuration does not agree with (\ref{eqprop2st4}).
Consider now that $\tet1$ is the smallest solution of $f(\tet1)+f(2\tet1)=0$. We know that, in this case, $\theta_1<50^{\circ}<\tet c/2$, so $\tet3>\tet c'=2\pi-\tet c$, the second critical point of $f$. Therefore, $\tet3$ lies on the last increasing interval of $f$. Thus, $$f(\tet1^*)<f(\tet1)=f(\tet3)<f(\tet3+\tet1^*),$$ contradicting (\ref{eqp2st}).
Now let $\tet1$ be the largest solution of $f(\tet1)+f(2\tet1)=0$. We know that $138^{\circ}<\tet1<139^{\circ}$, and thus $$\theta_3>82^{\circ}.$$
It is easy to see that, in this case, by (\ref{eqp2st}), we have $\pi/3<\tet1^*<\tet c<\tet1^*+\tet3<\pi.$
If $\tet1^*\geq 73^{\circ}$, then $\tet1^*+\tet3>155^{\circ}$, which leads to the contradiction $$0.365...=f(155^{\circ})>f(\tet1^*+\tet3)=f(\tet1^*)>f(73\pi/180)=0.388...$$ Therefore, $$\tet1^*<73^{\circ}$$ and consequently $$\tet2^*=\tet1-\tet1^*>65^{\circ}.$$
The function $f(\theta)+f(2\theta)$ is increasing from $0$ to its first critical point and then decreasing to its second critical point, which is larger than $2\pi/3$. Since $f'(65^{\circ})+2f'(130^{\circ})>0$, the first critical point is larger than $65^{\circ}.$ Thus, $$f(\tet2^*)+f(2\tet2^*)>f(65^{\circ})+f(130^{\circ})>0.8.$$
However, $2f(\tet1^*)<2f(73^{\circ})<2(0.39)<0.8$, contradicting (\ref{eqprop2st4}).
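The numerical values used above can be checked with the same closed form of $f$; a short sketch:
\begin{verbatim}
from math import sin, cos, radians

def f(x):
    return cos(x/2) * (8*sin(x/2)**3 - 1) / (4*sin(x/2)**2)

print(f(radians(155)))                    # ~ 0.3658
print(f(radians(73)))                     # ~ 0.3883
print(f(radians(65)) + f(radians(130)))   # ~ 0.813  > 0.8
print(2 * f(radians(73)))                 # ~ 0.777  < 0.8
\end{verbatim}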
Now we consider $n\geq5$. The system for the $\tet i^*$'s, by the relations between the angles and the properties of $f$, reduces to \begin{eqnarray}
\label{eqsta6} f(\theta_4^*) &=& f(\theta_1^*+\theta_3+...+\theta_n), \\
\label{eqsta7} f(\theta_4^*+\theta_3) &=& f(\theta_1^*+\theta_4+...+\theta_n),\\
\label{eqsta8} f(\theta_1^*) &=& f(\theta_4^*+\theta_3+...+\theta_n),\\
\label{eqsta9} f(\theta_1^*+\theta_n) &=& f(\theta_4^*+\theta_3+...+\theta_{n-1}). \end{eqnarray}
The equations (\ref{eqsta6}) and (\ref{eqsta8}), by Lemma \ref{lema_stacked2}, imply exactly one of the following situations: \begin{eqnarray}\label{eqsta10}
\nonumber \theta_1^* &=& \theta_4^*, \\
\theta_1^* &=& \theta_4^*+\theta_3+...+\theta_n,\mbox{ or } \\ \nonumber \theta_4^* &=& \theta_3+...+\theta_n+\theta_1^*. \end{eqnarray} Analogously, the equations (\ref{eqsta6}) and (\ref{eqsta7}), by Lemma \ref{lema_stacked2}, imply that exactly one of the following equations holds: \begin{eqnarray}\label{eqsta11}
\nonumber \theta_4^* &=& \theta_1^*+\theta_4+...+\theta_n, \\
\theta_4^* &=& \theta_1^*+\theta_3+...+\theta_n,\mbox{ or } \\ \nonumber \theta_4^*+\theta_3 &=&\theta_1^*+ \theta_4+...+\theta_n \end{eqnarray} and, by (\ref{eqsta8}) and (\ref{eqsta7}), the same Lemma implies exactly one of the following equalities: \begin{eqnarray}\label{eqsta12}
\nonumber \theta_1^* &=& \theta_4^*+\theta_3, \\
\theta_1^* &=& \theta_4^*+\theta_3+...+\theta_n, \mbox{ or } \\ \nonumber \theta_4^*+\theta_3 &=& \theta_4+...+\theta_n+\theta_1^*. \end{eqnarray}
The only possibility for a single equation to be satisfied in each group (\ref{eqsta10}), (\ref{eqsta11}) and (\ref{eqsta12}) is $$\theta_1^*=\theta_4^*.$$
From the group of equations (\ref{eqsta12}) we have \begin{equation}\label{stackedte3}
\theta_3=\theta_4+...+\theta_n. \end{equation}
Again, by Lemma \ref{lema_stacked2} applied to (\ref{eqsta8}) and (\ref{eqsta9}) and, since $\theta_1^*=\theta_4^*$, we have $$\theta_n = \theta_3+...+\theta_{n-1},$$ contradicting (\ref{stackedte3}).
Finally we consider the case $n=4$. The equations are \begin{eqnarray} f(\tet1^*)&=&f(\tet3+\tet4+\tet4^*),\label{stackedn4eq1}\\ f(\tet2^*)+f(\tet2^*+\tet3^*)+f(\tet2^*+\tet2)&=&f(\tet4+\tet1^*)+f(\tet1^*),\label{stackedn4eq2}\\ f(\tet3^*)&=&f(\tet2^*),\label{stackedn4eq3}\\ f(\tet4^*)&=&f(\tet3+\tet4+\tet1^*),\\ f(\tet4+\tet1^*)&=&f(\tet4^*+\tet3). \end{eqnarray}
Using the same argument of the case $n\geq 5$ we get $$\tet1^*=\tet4^*$$ and $$\tet3=\tet4.$$
Moreover, from (\ref{stackedn4eq1}) and (\ref{stackedn4eq3}), since the angles involved cannot exceed $2\pi$, we must have $$\tet2^*=\tet3^*.$$
In this case the original configuration with $n=4$ has a symmetry axis passing through two satellites, and hence it must be either the square $\tet1=\tet2=\tet3=\tet4=\pi/2$ or one of the kite configurations given by $$\tet1=\tet2=\pi/3 \mbox{ and } \tet3=\tet4=2\pi/3$$ or $$\tet1=\tet2=2\pi/3 \mbox{ and } \tet3=\tet4=\pi/3.$$
If $\tet3=2\pi/3$, from (\ref{stackedn4eq1}), we have $$f(\tet1^*)=f(\tet1^*+4\pi/3).$$
So, $$\tet1^*>\pi/4\Rightarrow \tet2^*=\tet1-\tet1^*=\pi/3-\tet1^*<\pi/12$$ $$ \Rightarrow f(\tet2^*)<f(\pi/12)<-14.$$
It is easy to see that the equation (\ref{stackedn4eq2}) is not satisfied in this case.
If $\tet3=\pi/3$, from (\ref{stackedn4eq1}) we have $$f(\tet1^*)=f(\tet1^*+2\pi/3).$$
Therefore, by the graph of $f$ $$\tet1^*=\pi/3$$ $$\Rightarrow \tet2^*=\pi/3$$ hence we have a regular 6-gon.
Finally, consider the case $\tet1=\tet2=\tet3=\tet4=\pi/2.$ Taking $x=\tet1^*$, we observe that the equations are \begin{eqnarray} f(x)&=&f(\pi+x),\label{quadradoparan6eq1}\\ 3f(x)&=&f(\pi/2-x)+f(\pi-2x).\label{quadradoparan6eq2} \end{eqnarray}
We will show that the roots of the equations above are different. We can write $$f(x)=\frac{1}{4}\cos(x/2)\sin^{-2}(x/2)(8\sin^3(x/2)-1).$$
It is easy to see that, from (\ref{quadradoparan6eq1}), we have $0<x<\pi/3.$ Taking $u=\sin(x/2)$ and $v=\cos(x/2)$, we get \begin{eqnarray*} 4f(x)&=&vu^{-2}(8u^3-1),\\ 4f(\pi+x)&=&-uv^{-2}(8v^3-1),\\ 4f(\pi-2x)&=&-2uv(v^2-u^2)^{-2}(8(v^2-u^2)^3-1),\\ 4f(\pi/2-x)&=&\sqrt{2}(u+v)(v-u)^{-2}(2\sqrt{2}(v-u)^3-1). \end{eqnarray*}
Setting $P=4f(x)-4f(\pi+x)$ and $Q=12f(x)-4f(\pi/2-x)-4f(\pi-2x),$ the equations are $$P=0, \ \ Q=0.$$
Denote by $p$ and $q$ the numerators of $P$ and $Q$, respectively. Then $$p(u,v)=-u^3 - v^3 + 16 u^3 v^3$$ and $$q(u,v)=\sqrt{2} u^5+4 u^8-2 u^3 v-3 u^4 v+3 \sqrt{2} u^4 v+24 u^7 v-16 u^9 v+3 \sqrt{2} u^3 v^2-12 u^6 v^2+6 u^2 v^3+$$ $$+\sqrt{2} u^2 v^3-48 u^5 v^3+48 u^7 v^3+12 u^4 v^4-3 v^5+24 u^3 v^5-48 u^5 v^5-4 u^2 v^6+16 u^3 v^7.$$
Let $R_1$ and $R_2$ be the resultants of $p$ and $q$ with respect to $u$ and $v$, respectively. $R_1$ and $R_2$ are polynomials in the variables $v$ and $u$, respectively, such that, if $(u_0,v_0)$ is a solution for $p=0$ and $q=0$, then $R_1(v_0)=R_2(u_0)=0.$
We get $$R_1(v)=-2v^{12}r_1(v),$$ where \begin{eqnarray*} r_1(v)&=&-4+18 v+18 \sqrt{2} v-135 v^2+384 v^3-1800 v^4-1728 \sqrt{2} v^4+ 12240 v^5+...\\&&...+231928233984 v^{32}-21474836480 v^{33}-154618822656 v^{34}+34359738368 v^{36} \end{eqnarray*} and $$R_2(u)=2u^{12}r_2(u),$$ where \begin{eqnarray*} r_2(u)&=&4+18 u-18 \sqrt{2} u+135 u^2-384 u^3-1800 u^4+1728 \sqrt{2} u^4-...\\&&...-21474836480 u^{33}-154618822656 u^{34}+34359738368 u^{36}. \end{eqnarray*}
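The elimination step can be reproduced with a computer algebra system; a minimal sympy sketch, assuming the displayed expressions for $p$ and $q$ (the check for $r_2$ with the substitution $u=\frac{21+t}{50+2t}$ below is analogous):
\begin{verbatim}
import sympy as sp

u, v, s = sp.symbols('u v s', positive=True)
w = sp.sqrt(2)

p = -u**3 - v**3 + 16*u**3*v**3
q = (w*u**5 + 4*u**8 - 2*u**3*v - 3*u**4*v + 3*w*u**4*v + 24*u**7*v
     - 16*u**9*v + 3*w*u**3*v**2 - 12*u**6*v**2 + 6*u**2*v**3 + w*u**2*v**3
     - 48*u**5*v**3 + 48*u**7*v**3 + 12*u**4*v**4 - 3*v**5 + 24*u**3*v**5
     - 48*u**5*v**5 - 4*u**2*v**6 + 16*u**3*v**7)

R1 = sp.resultant(p, q, u)   # eliminates u; equals -2 v^12 r_1(v) above

# substitute v = (9+s)/(10+s) and inspect the signs of the coefficients
num, _ = sp.fraction(sp.cancel(R1.subs(v, (9 + s)/(10 + s))))
signs = {sp.sign(c.evalf()) for c in sp.Poly(num, s).all_coeffs()}
print(len(signs) == 1)       # constant sign => no roots with 9/10 < v < 1
\end{verbatim}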
We reject the solution $u=v=0$ and focus on the polynomials $r_1(v)$ and $r_2(u).$ We will show that $r_1(v)$ has no roots in $[9/10,1]$. The substitution $v=\frac{9+s}{10+s}$ maps $(0,\infty)$ onto $(9/10,1)$ and we get
\begin{eqnarray*} r_1(s)&=&\left.\frac{1}{(10+s)^{36}}\right(-1008258045536460519041298702036036780097011712+\\ &&+703154989278597577665542304052850511052800000 \sqrt{2}-...\\ &&...-6975267000684 s^{35}+4183796399670 \sqrt{2} s^{35}-21095906033 s^{36}+\\&&+\left.12595023450 \sqrt{2} s^{36}\right). \end{eqnarray*} $r_1(s)$ has no roots in $(0,\infty)$ since its numerator is a polynomial of degree 36 where all terms are negative. So we have $0<v<9/10.$
Since $u^2+v^2=1$ and $u=\sin(x/2)\leq 1/2$, we get $$\frac{21}{50}<\frac{\sqrt{19}}{10}=\sqrt{1-(9/10)^2}<u\leq \frac{1}{2}.$$
Taking the rational substitution $u=\frac{21+t}{50+2t}$, we have
\begin{eqnarray*} r_2(t)&=&\left.\frac{1}{16(25+t)^{36}}\right(39355704304464511525941168410542220606454431557393-\\ &&-39249115788439924325393189378990445455012929687500 \sqrt{2}-...\\ &&...\left.-57616560 \sqrt{2} t^{34}+168012 t^{35}-152568 \sqrt{2} t^{35}+217 t^{36}-196 \sqrt{2} t^{36}\right). \end{eqnarray*}
Again, all coefficients of the numerator are negative. This shows that $r_2(u)$ does not have roots in $\left(\frac{21}{50},\frac{1}{2}\right)$. Therefore the solutions of (\ref{quadradoparan6eq1}) and (\ref{quadradoparan6eq2}) are different and the proposition follows. \begin{flushright}\caixapreta
\end{flushright}
\begin{figure}
\caption{Illustration of the situation treated in Proposition \ref{prop_stacked3}}
\label{fig_stacked3}
\end{figure}
\begin{prop}\label{prop_stacked3} Let $(\theta_1,...,\theta_n)$ be a planar central configuration of $1+n$ bodies. Suppose that we add two satellites between two consecutive original satellites and that the resulting configuration (with $1+(n+2)$ bodies) is again a central configuration. Then $n=2$, the original configuration is the one in which the satellites and the massive body form an equilateral triangle, and the new configuration is the kite given by $\left(\frac{\pi}{3},\frac{\pi}{3},\frac{2\pi}{3},\frac{2\pi}{3}\right).$ See Fig. \ref{fig3_stacked2}.
\end{prop}
\noindent{\bf Proof: } We suppose that the new satellites are placed between the first and the second original satellites. The relations between the angles are
$$\theta_1=\theta_1^*+\theta_2^*+\theta_3^*$$ and $$\theta_i=\theta_{i+2}^*, \ 2\leq i \leq n.$$
See Fig. \ref{fig_stacked3}.
Firstly we consider the case $n=2.$ The following equations are satisfied: \begin{eqnarray} f(\tet1^*)&=&f(\tet2+\tet3^*),\label{eqst1n2n4}\\ f(\tet1^*)&=&f(\tet2^*)+f(\tet2^*+\tet3^*),\label{eqst2n2n4}\\ f(\tet2^*)&=&f(\tet1^*)+f(\tet3^*),\label{eqst3n2n4}\\ f(\tet3^*)&=&f(\tet1^*+\tet2).\label{eqst4n2n4} \end{eqnarray}
As $(\tet1,\tet2)$ is a planar central configuration for $n=2$ we get $\tet2=\pi,\tet2=5\pi/3$ or $\tet2=\pi/3.$
If $\tet2=\pi,$ then $\tet1=\tet1^*+\tet2^*+\tet3^*=\pi$ and, by Lemma \ref{lema_stacked2}, the equations (\ref{eqst1n2n4}) and (\ref{eqst4n2n4}) imply $\tet1^*=\tet3^*.$ So $f(\tet1^*)=f(\pi+\tet1^*)$ and, by the graph of $f$, $\tet1^*<\pi/3$; thus $f(\tet1^*)<0.$
From (\ref{eqst3n2n4}) we get $$f(\tet2^*)=2f(\tet1^*)<0\Rightarrow \tet2^*<\frac{\pi}{3}.$$ This implies the contradiction $$ \tet1^*+\tet2^*+\tet3^*<\pi=\tet1.$$
\begin{figure}
\caption{Stacked central configuration according to Proposition \ref{prop_stacked3}}
\label{fig3_stacked2}
\end{figure}
If $\tet2=5\pi/3$, we conclude by the same argument that $\tet1^*=\tet3^*$ and so $$f(\tet1^*)=f(\tet1^*+5\pi/3).$$ By the graph of $f$ this equation is impossible, because it would force $\tet1^*>\pi/3$, and then the sum of the angles would exceed $2\pi.$
Finally the case $\tet2=\pi/3.$ By (\ref{eqst1n2n4}) and (\ref{eqst4n2n4}), the Lemma \ref{lema_stacked2} gives us three disjoint possibilities, namely, $\tet1^*=\pi/3+\tet3^*, \tet3^*=\pi/3+\tet1^*$ or $\tet1^*=\tet3^*.$
If $\tet1^*=\pi/3+\tet3^*$ then, by (\ref{eqst4n2n4}), we have $$f(\tet3^*)=f(2\pi/3+\tet3^*)\Rightarrow \tet3^*=\pi/3\Rightarrow f(\tet3^*)=0.$$ From (\ref{eqst2n2n4}) and (\ref{eqst3n2n4}) we get $f(\tet2^*+\pi/3)=0,$ so $\tet2^*=2\pi/3$ and consequently we have the kite configuration.
The case $\tet3^*=\pi/3+\tet1^*$ is analogous and results in the same configuration.
Now consider $\tet1^*=\tet3^*$. So $f(\tet1^*)=f(\pi/3+\tet1^*)$ and thus $$\pi/3<\tet1^*<\tet c<\tet1^*+\pi/3<\pi.$$
By Lemma \ref{lema2albouy} we have $$2\tet1^*+\pi/3<\pi/3+\pi\Rightarrow \tet1^*<\pi/2$$ and $$2\tet1^*+\pi/3>2\tet c>6\pi/5 \Rightarrow \tet1^*>\frac{13\pi}{30}.$$
From (\ref{eqst3n2n4}) we have $f(\tet2^*)=2f(\tet1^*).$ Since $\tet1^*<\pi/2$, we get $\tet2^*=5\pi/3-2\tet1^*>2\pi/3.$ Therefore $$2f(\tet1^*)=f(\tet2^*)<f(2\pi/3)<0.7.$$ However, this contradicts the inequality $\tet1^*>13\pi/30,$ because $f(13\pi/30)>f(2\pi/5)>0.36$, so that $2f(\tet1^*)>0.72>0.7.$
Consider the case $n=3.$ The equations become $$f(\theta_1)=f(\theta_2)=f(\theta_3),$$ $$f(\theta_1^*)=f(\theta_2+\theta_3+\theta_3^*),$$ $$f(\theta_3^*)=f(\theta_2+\theta_3+\theta_1^*),$$ $$f(\theta_3+\theta_1^*)=f(\theta_2+\theta_3^*).$$
By Lemma \ref{lema_stacked2}, the last three equations above imply $$\theta_1^*=\theta_3^*.$$ Hence $$\theta_2=\theta_3.$$
We get then, \begin{eqnarray} f(\theta_1^*)&=&f(2\theta_2+\theta_1^*),\label{eqprop3stacked1}\\ f(\theta_2)&=&f(2\theta_1^*+\theta_2^*),\label{eqprop3stacked2}\\ f(\theta_2^*)+f(\theta_1^*+\theta_2^*)&=&f(\theta_1^*+\theta_2)+f(\theta_1^*).\label{eqprop3stacked3} \end{eqnarray}
On the other hand, the angle $\theta_2$ satisfies $$f(\theta_2)+f(2\theta_2)=0.$$ This equation has three solutions in $(0,\pi)$, namely, $\theta_2=2\pi/3, \theta_2=a \mbox{ and } \theta_2=b,$ with $0<a<50^{\circ} \mbox{ and } 2\pi/3<b<\pi.$ See \cite{llibre2}.
If $\theta_2=2\pi/3$ then $\theta_1=2\pi/3$. Since $f$ is injective in $(0,\pi/4)$, by (\ref{eqprop3stacked1}) we have $\theta_1^* > \pi/4$. So $\theta_2^*=\theta_1-2\theta_1^*=2\pi/3-2\theta_1^*<\pi/6.$ Therefore $$f(\theta_2^*)<-3.$$
It is not hard to see that (\ref{eqprop3stacked3}) is not satisfied in this case.
If $\theta_2=a<\theta_c/2$, then $\theta_1>2\pi-\theta_c$. Since $a>\pi/4$, we have $2\theta_2>\pi/2$.
From the graph of $f$ we can conclude that $\pi/3<\theta_1^*<\theta_c$. Furthermore, we claim that $\theta_1^*<7\pi/18$. In fact, if $\theta_1^*>7\pi/18$ then $\theta_1^*+2\theta_2>8\pi/9$. So, we have the contradiction: $$0.317<f(7\pi/18)<f(\theta_1^*)=f(\theta_1^*+2\theta_2)<f(8\pi/9)<0.298.$$
Hence, $$\theta_1^*<7\pi/18\Rightarrow \theta_2^*>2\pi/3$$ $$\Rightarrow \theta_2+\theta_1^*<2\pi/3<\theta_2^*$$ $$\Rightarrow f(\theta_2^*)<f(\theta_2+\theta_1^*).$$
From (\ref{eqprop3stacked3}) we get $$f(\theta_1^*+\theta_2^*)>f(\theta_1^*)=f(\theta_1^*+2\theta_2)$$ $$\Rightarrow \theta_1^*+2\theta_2>\theta_1^*+\theta_2^*$$ $$\Rightarrow 2\theta_2>\theta_2^*,$$
However, this is impossible because $\theta_2<50^{\circ}$ and $\theta_2^*>120^{\circ}$.
The remaining case is $\theta_2=b>138^{\circ}$. But then $\theta_1=2\pi-2\theta_2<\pi/2$, which contradicts $\theta_1=2\theta_1^*+\theta_2^*>2\theta_1^*>\pi/2$, where the inequality $\theta_1^*>\pi/4$ comes from (\ref{eqprop3stacked1}) and the injectivity of the function $f$ in $(0,\pi/4)$.
Now suppose $n\geq 4.$ We will show that the new configuration is not a central one. From $$f(\theta_1)+f(\theta_1+\theta_2)+...+f(\theta_1+...+\theta_{n-1})=0,$$ the first equation for the new configuration becomes \begin{equation}\label{eqtheta1estrela} f(\theta_1^*)=f(\theta_3^*+\theta_2+...+\theta_n). \end{equation}
Analogously for the other equations, we get: \begin{equation}\label{eqtheta3estrela}
f(\theta_3^*)=f(\theta_2+...+\theta_n+\theta_1^*), \end{equation} \begin{equation}\label{eqtheta3estrela2}
f(\theta_3^*+\theta_2)=f(\theta_3+...+\theta_n+\theta_1^*), \end{equation} \begin{equation}\label{eqtheta3estrela23}
f(\theta_3^*+\theta_2+\theta_3)=f(\theta_4+...+\theta_n+\theta_1^*). \end{equation}
Notice that we can apply Lemma \ref{lema_stacked2} to equations (\ref{eqtheta1estrela}), (\ref{eqtheta3estrela}), (\ref{eqtheta3estrela2}) and (\ref{eqtheta3estrela23}) chosen pairwise. So, from equations (\ref{eqtheta1estrela}) and (\ref{eqtheta3estrela}) we have exactly one of the following equalities: \begin{eqnarray}\label{eqsta1}
\nonumber \theta_1^* &=& \theta_3^*, \\
\theta_1^* &=& \theta_3^*+\theta_2+...+\theta_n, \\
\nonumber \theta_3^* &=& \theta_2+...+\theta_n+\theta_1^*. \end{eqnarray}
From the equations (\ref{eqtheta1estrela}) and (\ref{eqtheta3estrela2}) we have exactly one of the following equalities:
\begin{eqnarray}\label{eqsta2}
\theta_1^* &=& \theta_3^*+\theta_2, \nonumber\\
\theta_1^* &=& \theta_3^*+\theta_2+...+\theta_n, \\
\theta_3^*+\theta_2 &=& \theta_3+...+\theta_n+\theta_1^*. \nonumber \end{eqnarray}
And from (\ref{eqtheta3estrela}) and (\ref{eqtheta3estrela2}) we have exactly one of the situations below:
\begin{eqnarray}\label{eqsta3}
\theta_3^* &=& \theta_2+...+\theta_n+\theta_1^*, \nonumber\\
\theta_3^*+\theta_2 &=& \theta_1^*+\theta_3+...+\theta_n, \\
\theta_3^* &=& \theta_3+...+\theta_n+\theta_1^*. \nonumber \end{eqnarray}
It is easy to see that from (\ref{eqsta1}), (\ref{eqsta2}) and (\ref{eqsta3}), we have \begin{eqnarray}\label{eqsta4}
\nonumber \theta_1^*&=&\theta_3^*,\\ \theta_2&=&\theta_3+...+\theta_n. \end{eqnarray}
Now, as $\theta_1^*=\theta_3^*$, by Lemma \ref{lema_stacked2} applied to (\ref{eqtheta1estrela}) and (\ref{eqtheta3estrela23}) we have $$\theta_2+\theta_3 = \theta_4+...+\theta_n.$$
However, this contradicts equation (\ref{eqsta4}), and the proposition follows.
\begin{flushright}\caixapreta
\end{flushright}
\noindent {\bf Remark:} Note that the argument showing that there are no stacked central configurations in the situations treated in Propositions \ref{prop_stacked1} and \ref{prop_stacked3} for $n\geq 4$, and in Proposition \ref{prop_stacked2} for $n\geq 5$, still works in the more general situation where the original satellites do not necessarily have the same masses, but the two added satellites have equal masses.
\begin{theorem} Consider a central configuration of the $1+n$ body problem associated with the masses $\mu_1,...,\mu_n$. Suppose that we add two satellites with the same mass $\mu$. If $n\geq 5$, then the new configuration is not a central one. If the two new satellites are not placed separated by exactly one of the original satellites, then the result holds for $n\geq 4.$ \end{theorem}
\noindent{\bf Proof: } The proofs of Propositions \ref{prop_stacked1} and \ref{prop_stacked3} for $n\geq 4$ and of Proposition \ref{prop_stacked2} for $n\geq 5$ used Lemma \ref{lema_stacked2} in some of the equations. The equations in the present case are the same, because the parameter $\mu$ now multiplies both sides of each equality and can therefore be canceled. Hence the proofs carry over unchanged. \begin{flushright}\caixapreta
\end{flushright}
\end{document} |
\begin{document}
\begin{abstract} We investigate the $K$-theory of unital UCT Kirchberg algebras $\mathcal{Q}_S$ arising from families $S$ of relatively prime numbers. It is shown that $K_*(\mathcal{Q}_S)$ is the direct sum of a free abelian group and a torsion group, each of which is realized by another distinct $C^*$-algebra naturally associated to $S$. The $C^*$-algebra representing the torsion part is identified with a natural subalgebra $\mathcal{A}_S$ of $\mathcal{Q}_S$. For the $K$-theory of $\mathcal{Q}_S$, the cardinality of $S$ determines the free part and is also relevant for the torsion part, for which the greatest common divisor $g_S$ of $\{p-1 : p \in S\}$ plays a central role as well. In the case where $\lvert S \rvert \leq 2$ or $g_S=1$ we obtain a complete classification for $\mathcal{Q}_S$. Our results support the conjecture that $\mathcal{A}_S$ coincides with $\otimes_{p \in S} \mathcal{O}_p$. This would lead to a complete classification of $\mathcal{Q}_S$, and is related to a conjecture about $k$-graphs. \end{abstract} \maketitle
\section{Introduction} Suppose $S$ is a non-empty family of relatively prime natural numbers and consider the submonoid of $\mathbb{N}^\times$ generated by $S$. Its action on $\mathbb{Z}$ by multiplication can be represented on $\ell^2(\mathbb{Z})$ by the bilateral shift $U$ and isometries $(S_p)_{p \in S}$ defined by $U\xi_n = \xi_{n+1}$ and $S_p\xi_n = \xi_{pn}$. The associated $C^*$-algebra $C^*\bigl(U,(S_p)_{p \in S}\bigr)$ admits a universal model $\mathcal{Q}_S$ that is generated by a unitary $u$ and isometries $(s_p)_{p \in S}$, subject to\vspace*{-2mm} \[\begin{array}{c} s_ps_q = s_qs_p \text{ for } q \in S,\quad s_pu=u^ps_p, \quad \text{ and } \quad \sum\limits_{m=0}^{p-1}u^ms_p^{\phantom{*}}s_p^*u^{-m} = 1. \end{array}\vspace*{-2mm}\] By results of \cites{KOQ} or \cite{Sta1}, $\mathcal{Q}_S$ is isomorphic to $C^*\bigl(U,(S_p)_{p \in S}\bigr)$ and belongs to the class of unital UCT Kirchberg algebras. In view of the Kirchberg-Phillips classification theorem \cites{Kir,Phi}, the information on $S$ encoded in $\mathcal{Q}_S$ can therefore be read off from its $K$-theory.
In special cases, $\mathcal{Q}_S$ and its $K$-theory have been considered before: If $S$ is the set of all primes, then $\mathcal{Q}_S$ coincides with the algebra $\mathcal{Q}_\mathbb{N}$ from \cite{CuntzQ} and it follows that $K_i(\mathcal{Q}_S) = \mathbb{Z}^\infty$ for $i=0,1$ and $[1]=0$. The other extreme case, where $S=\{p\}$ for some $p \geq 2$, appeared already in \cite{Hir}: Hirshberg showed that $(K_0(\mathcal{Q}_{\{p\}}),[1],K_1(\mathcal{Q}_{\{p\}})) = (\mathbb{Z} \oplus \mathbb{Z}/(p-1)\mathbb{Z},(0,1),\mathbb{Z})$. This result was recovered later in \cite{KatsuraIV} and \cites{CuntzVershik} as a byproduct. Note that $\mathcal{Q}_{\{p\}}$ coincides with Katsura's algebra $\mathcal{O}(E_{p,1})$, see \cite{KatsuraIV}*{Example~A.6}. Moreover, Larsen and Li analyzed the situation for $p=2$ in great detail, see \cite{LarsenLi}. The similarities and differences among these known cases raise several questions: \begin{enumerate}[(i)] \item Is $K_1(\mathcal{Q}_S)$ always torsion free? \item Is $2 \in S$ the only obstruction to torsion in $K_0(\mathcal{Q}_S)$? \item What is the $K$-theory of $\mathcal{Q}_S$ in the general case of $\lvert S \rvert \geq 2$? \item What does $\mathcal{Q}_S \cong \mathcal{Q}_T$ reveal about the relationship between $S$ and $T$? \end{enumerate} Through the present work, we provide a complete description in the case of $\lvert S \rvert=2$, for which the $K$-theory of $\mathcal{Q}_S$ satisfies \[(K_0(\mathcal{Q}_S),[1],K_1(\mathcal{Q}_S)) = (\mathbb{Z}^2 \oplus \mathbb{Z}/g_S\mathbb{Z},(0,1),\mathbb{Z}^2 \oplus \mathbb{Z}/g_S\mathbb{Z}),\] where $g_S=\gcd(\{p-1 : p \in S\})$, see Theorem~\ref{thm:main result}~(c). Thus we see that the first two questions from above have a negative answer (for instance, consider $S=\{3,5\}$ and $S=\{5,6\}$, respectively). More generally, we completely determine $K_*(\mathcal{Q}_S)$ in the case of $\lvert S \rvert \leq 2$ or $g_S = 1$, see Theorem~\ref{thm:main result}, and conclude that $\mathcal{Q}_S \cong \mathcal{Q}_T$ if and only if $\lvert S \rvert=\lvert T \rvert$ and $g_S=g_T$ in this case. In addition, Theorem~\ref{thm:main result} substantially reduces the problem in the remaining case of $\lvert S \rvert \geq 3$ and $g_S >1$. Thereby we also make progress towards a general answer to the remaining questions (iii) and (iv) from above.
In order to prove Theorem~\ref{thm:main result}, we first compare the stabilization of $\mathcal{Q}_S$ to the $C^*$-algebra $C_0(\mathbb{R}) \rtimes N \rtimes H$, where $N=\mathbb{Z}\bigl[\{\frac{1}{p} : p \in S\}\bigr]$, $H$ is the subgroup of $\mathbb{Q}_+^\times$ generated by $S$, and the action comes from the natural $ax+b$-action of $N \rtimes H$ on $\mathbb{R}$, see Section~\ref{sec:comp with real dyn}. This approach is inspired by methods of Cuntz and Li from \cite{CLintegral2}. However, the final part of their strategy is to use the Pimsner-Voiculescu sequence iteratively, see \cite{CLintegral2}*{Remark~3.16}, and depends on having free abelian $K$-groups, which does not work in our situation. Instead, we show that $K_*(\mathcal{Q}_S)$ decomposes as a direct sum of a free abelian group and a torsion group, both arising in a natural way from two distinguished $C^*$-algebras related to $\mathcal{Q}_S$, see Theorem~\ref{thm:decomposition of K-theory} and Corollary~\ref{cor:torsion and free part K-theory}. The determination of the torsion free part of $K_*(\mathcal{Q}_S)$ uses a homotopy argument, and thereby benefits heavily from the comparison with real dynamics. This allows us to prove that the rank of the torsion free subgroup of $K_i(\mathcal{Q}_S)$ equals $2^{\lvert S \rvert-1}$ for both $i=0,1$, see Proposition~\ref{prop:K-theory torsion free part}.
The torsion subgroup of $K_*(\mathcal{Q}_S)$ is realized by the semigroup crossed product $M_{d^\infty} \rtimes^e_\alpha H^+$, where $d$ is the product of all primes dividing some element of $S$, $H^+$ is the submonoid of $\mathbb{N}^\times$ generated by $S$, and the action $\alpha$ is inherited from a semigroup crossed product description of $\mathcal{Q}_S$, see Corollary~\ref{cor:torsion and free part K-theory}. Appealing to the recently introduced machinery for equivariantly sequentially split $*$-homomorphisms from \cite{BarSza1}, we show that $M_{d^\infty} \rtimes^e_\alpha H^+$ is a unital UCT Kirchberg algebra, just like $\mathcal{Q}_S$, see Corollary~\ref{cor:UHF into BD seq split cr pr}. Quite intriguingly, this paves the way to identify $M_{d^\infty} \rtimes^e_\alpha H^+$ with the subalgebra $\mathcal{A}_S = C^*(\{u^ms_p : p \in S, 0 \leq m \leq p-1\})$ of $\mathcal{Q}_S$, see Corollary~\ref{cor:subalgebra for torsion part}. That is why we decided to name $\mathcal{A}_S$ the \emph{torsion subalgebra}. This $C^*$-algebra is interesting in its own right as, for instance, it admits a model as the boundary quotient $\mathcal{Q}(U)$ of a particular right LCM submonoid $U$ of $\mathbb{N} \rtimes H^+$, see Proposition~\ref{prop:A_S as BQ of U}. As explained in Remark~\ref{rem:A_S as BQ of U}, this gives rise to a remarkable diagram for the semigroup $C^*$-algebras and boundary quotients related to the inclusion of right LCM semigroups $U \subset \mathbb{N} \rtimes H^+$.
With regards to the $K$-theory of $\mathcal{A}_S$ and hence $\mathcal{Q}_S$, the $k$-graph description for finite $S$ obtained in Corollary~\ref{cor:tor subalgebra via k-graphs} is more illuminating: The canonical $k$-graph for $\mathcal{A}_S$ has the same skeleton as the standard $k$-graph for $\bigotimes_{p \in S}\mathcal{O}_p$, but uses different factorization rules, see Remark~\ref{rem:Lambda_S flip}. It is apparent from the given presentation that $\mathcal{A}_S$ is isomorphic to $\mathcal{O}_p$ for $S=\{p\}$. If $S$ consists of two relatively prime numbers $p$ and $q$, then a result from \cite{Evans} shows that $\mathcal{A}_S$ coincides with $\mathcal{O}_p \otimes \mathcal{O}_q$. For the remaining cases, we extract vital information on $\mathcal{A}_S$ by applying Kasparov's spectral sequence \cite{kasparov} (see also \cite{barlak15}) to the $H^+$-action $\alpha$ on $M_{d^{\infty}}$, see Theorem~\ref{thm:K-theory for A_S}. More precisely, we obtain that $\mathcal{A}_S$ is isomorphic to $\bigotimes_{p \in S}\mathcal{O}_p$ if $\lvert S \rvert \leq 2$ or $g_S=1$. In the latter case, it actually coincides with $\mathcal{O}_2$. Additionally, we show that the order of every element in $K_*(\mathcal{A}_S)$ divides $g_S^{2^{\lvert S \rvert - 2}}$. As we remark at the end of this work, the same results can be obtained by employing the $k$-graph representation of $\mathcal{A}_S$ and using Evans' spectral sequence \cite{Evans} for the $K$-theory of $k$-graph $C^*$-algebras. In view of these results, it is very plausible that $\mathcal{A}_S$ always coincides with $\bigotimes_{p \in S}\mathcal{O}_p$. This would be in accordance with Conjecture~\ref{conj:k-graph}, which addresses independence of $K$-theory from the factorization rules for $k$-graphs under certain constraints. If $\mathcal{A}_S \cong \bigotimes_{p \in S}\mathcal{O}_p$ holds for all $S$, then we get a complete classification for $\mathcal{Q}_S$ with the rule that $\mathcal{Q}_S$ and $\mathcal{Q}_T$ are isomorphic if and only if $\lvert S \rvert = \lvert T \rvert$ and $g_S=g_T$, see Conjecture~\ref{conj:K-theory of QQ_S}.
At a later stage, the authors learned that Li and Norling obtained interesting results for the multiplicative boundary quotient for $\mathbb{N} \rtimes H^+$ by using completely different methods, see \cite{LN2}*{Subsection~6.5}. Briefly speaking, the multiplicative boundary quotient related to $\mathcal{Q}_S$ is obtained by replacing the unitary $u$ by an isometry $v$, see Subsection~\ref{subsec:BQ for ADS} for details. As a consequence, the $K$-theory of the multiplicative boundary quotient does not feature a non-trivial free part. It seems that $\mathcal{A}_S$ is the key to reveal a deeper connection between the $K$-theoretical structure of these two $C^*$-algebras. As this is beyond the scope of the present work, we only note that the inclusion map from $\mathcal{A}_S$ into $\mathcal{Q}_S$ factors through the multiplicative boundary quotient as an embedding of $\mathcal{A}_S$ and the natural quotient map. The results of \cite{LN2} together with our findings indicate that this embedding might be an isomorphism in $K$-theory. This idea is explored further in \cite{Sta3}*{Section~5}.
The paper is organized as follows: In Section~\ref{sec:prelim}, we set up the relevant notation and list some useful known results in Subsection~\ref{subsec:notation and basics}. We then link $\mathcal{Q}_S$ to boundary quotients of right LCM semigroups, see Subsection~\ref{subsec:BQ for ADS}, and $a$-adic algebras, see Subsection~\ref{subsec:a-adic algs}. These parts explain the central motivation behind our interest in the $K$-theory of $\mathcal{Q}_S$. In addition, the connection to $a$-adic algebras allows us to apply a duality theorem from \cite{KOQ}, see Theorem~\ref{thm:duality}, making it possible to invoke real dynamics. This leads to a decomposition result for $K_*(\mathcal{Q}_S)$ presented in Section~\ref{sec:K-theory}, which essentially reduces the problem to determining the $K$-theory of $\mathcal{A}_S$. The structure of the torsion subalgebra $\mathcal{A}_S$ is discussed in Section~\ref{sec:torsion part}. Finally, the progress on the classification of $\mathcal{Q}_S$ we obtain via a spectral sequence argument for the $K$-theory of $\mathcal{A}_S$ is presented in Section~\ref{sec:classification}.
\subsection*{Acknowledgments} The first named author was supported by SFB~$878$ \emph{Groups, Geometry and Actions}, GIF Grant~$1137$-$30.6/2011$, ERC through AdG~$267079$, and the Villum Fonden project grant `Local and global structures of groups and their algebras' (2014-2018). The second named author was supported by RCN through FRIPRO~$240913$. A significant part of the work was done during the second named author's research stay at Arizona State University, and he is especially grateful to the analysis group at ASU for their hospitality. He would also like to thank the other two authors of this paper for their hospitality during two visits to M\"{u}nster. The third named author was supported by ERC through AdG~$267079$ and by RCN through FRIPRO~$240362$. We are grateful to Alex Kumjian for valuable suggestions.
\section{Preliminaries}\label{sec:prelim} \subsection{Notation and basics}\label{subsec:notation and basics}
Throughout this paper, we assume that $S \subset \mathbb{N}^\times \setminus \{1\}$ is a non-empty family of relatively prime numbers. We write $p|q$ if $q \in p\mathbb{N}^\times$ for $p,q \in \mathbb{N}^\times$. Given $S$, we let $P := \{p \in \mathbb{N}^\times : p \text{ prime and } p|q \text{ for some } q \in S\}$. Also, we define $d:= \prod_{p \in P}p$ (which is a supernatural number in case $S$ is infinite, see Remark~\ref{rem:supernatural}) and $g_S$ to be the greatest common divisor of $\{p-1 : p \in S\}$, i.e.\ $g_S := \gcd(\{p-1 : p \in S\})$.
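For concreteness, the quantities $P$, $d$ and $g_S$ can be computed directly from $S$; a small Python sketch for finite $S$:
\begin{verbatim}
from math import gcd
from functools import reduce
from sympy import primefactors

def invariants(S):
    assert all(gcd(p, q) == 1 for p in S for q in S if p != q)
    P = sorted({r for p in S for r in primefactors(p)})
    d = 1
    for r in P:
        d *= r
    g_S = reduce(gcd, [p - 1 for p in S])
    return P, d, g_S

print(invariants({3, 5}))    # ([3, 5], 15, 2)
print(invariants({5, 6}))    # ([2, 3, 5], 30, 1)
print(invariants({10, 21}))  # ([2, 3, 5, 7], 210, 1)
\end{verbatim}
In particular, $g_{\{3,5\}}=2$ and $g_{\{5,6\}}=1$, in line with the two examples mentioned in the introduction.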
Recall that $\mathbb{N}^\times$ is an Ore semigroup with enveloping group $\mathbb{Q}_+^\times$, that is, $\mathbb{N}^\times$ embeds into $\mathbb{Q}_+^\times$ (in the natural way) so that each element of $\mathbb{Q}_+^\times$ can be written as $p^{-1}q$ with $p,q \in \mathbb{N}^\times$. The subgroup of $\mathbb{Q}_+^\times$ generated by $S$ is denoted by $H$. Note that the submonoid of $\mathbb{N}^\times$ generated by $S$, which we refer to as $H^+$, forms a positive cone inside $H$. As the elements in $S$ are relatively prime, $H^+$ is isomorphic to the free abelian monoid in $\lvert S \rvert$ generators. Finally, we let $H_k$ be the subgroup of $H$ generated by the $k$ smallest elements of $S$ for $1 \leq k \leq \lvert S \rvert$, and define $H_k^+$ as the analogous submonoid of $H^+$.
Though the natural action of $H^+$ on $\mathbb{Z}$ given by multiplication is irreversible, it has a natural extension to an action of $H$ by automorphisms, namely by acting upon the ring extension $\mathbb{Z}\bigl[\{ \frac{1}{p} : p \in S\}\bigr] = \mathbb{Z}\bigl[\{ \frac{1}{p} : p \in P\}\bigr]$, that will be denoted by $N$. Within this context we will consider the collection of cosets \[\begin{array}{c} \mathcal{F} := \{m+ h\mathbb{Z} : m \in \mathbb{Z}, h \in \mathbb{N}^\times, \frac{1}{h} \in N\}. \end{array}\]
\begin{defn}\label{def:Q_S} $\mathcal{Q}_S$ is defined to be the universal $C^*$-algebra generated by a unitary $u$ and isometries $(s_p)_{p \in S}$ subject to the relations: \[\begin{array}{llllll} \textnormal{(i)} & s_p^*s_q^{\phantom{*}} = s_q^{\phantom{*}}s_p^*, \quad & \textnormal{(ii)} & s_pu = u^ps_p, \quad \text{and} \quad & \textnormal{(iii)} & \sum\limits_{m=0}^{p-1} e_{m+p\mathbb{Z}} = 1 \end{array}\] for all $p,q \in S, p \neq q$, where $e_{m+p\mathbb{Z}} = u^ms_p^{\phantom{*}}s_p^*u^{-m}$. \end{defn}
Observe that the notation $e_{m+p\mathbb{Z}}$ is unambiguous, i.e.\ it does not depend on the representative $m$ of the coset $m+p\mathbb{Z}$, as \[ u^{m+pn}s_p^{\phantom{*}}s_p^*u^{-m-pn} \stackrel{(ii)}{=} u^ms_p^{\phantom{*}}u^{n-n}s_p^*u^{-m} = u^ms_p^{\phantom{*}}s_p^*u^{-m}. \]
\begin{rem}\label{rem:Q_S basic I} Let us briefly discuss the defining relations for $\mathcal{Q}_S$: \begin{enumerate}[a)] \item Condition~(i) is known as the double commutation relation for the isometries $s_p$ and $s_q$ with $p \neq q$. In particular, they commute as $s_p^*s_q^{\phantom{*}} = s_q^{\phantom{*}}s_p^*$ implies that $(s_ps_q)^*s_q^{\phantom{*}}s_p^{\phantom{*}} = 1$, which forces $s_ps_q=s_qs_p$. Thus the family $(s_p)_{p \in S}$ gives rise to a representation of the monoid $H^+$ by isometries and we write $s_h$ for $s_{p_1}\cdots s_{p_n}$ whenever $h = p_1 \cdots p_n \in H^+$ with $p_i \in S$. In fact, $u$ and $(s_p)_{p \in S}$ yield a representation of $\mathbb{Z} \rtimes H^+$ due to Definition~\ref{def:Q_S}~(i),(ii).
\item $\mathcal{Q}_S$ can also be defined as the universal $C^*$-algebra generated by a unitary $u$ and isometries $(s_p)_{p \in H^+}$ subject to (ii), (iii) and \[
\textnormal{(i') } s_ps_q = s_{pq} \text{ for all } p,q \in H^+. \] By a), we only need to show that (i') implies (i) for $p \neq q$. Note that (i') and (ii) imply that (iii) holds for all $p \in H^+$. In addition, (iii) implies the following: If $r \in S$ and $k \in \mathbb{Z}$ satisfy $s_r^*u^ks_r^{\phantom{*}} \neq 0$, then $k \in r\mathbb{Z}$. As $pq=qp$ and $p\mathbb{Z} \cap q\mathbb{Z} = pq\mathbb{Z}$, we get \[\begin{array}{lclclclcl} s_p^*s_q^{\phantom{*}} &\stackrel{(iii)}{=}& \sum\limits_{k=0}^{pq-1} s_p^*u^ks_{pq}^{\phantom{*}}s_{pq}^*u^{-k}s_q^{\phantom{*}} &\stackrel{(i')}{=}& \sum\limits_{k=0}^{pq-1} (s_p^*u^ks_p^{\phantom{*}})s_q^{\phantom{*}}s_p^*(s_q^*u^{-k}s_q^{\phantom{*}}) &=& s_q^{\phantom{*}}s_p^*. \end{array}\] \end{enumerate} \end{rem}
\begin{rem}\label{rem:can rep} The $C^*$-algebra $\mathcal{Q}_S$ has a canonical representation on $\ell^2(\mathbb{Z})$: Let $(\xi_n)_{n \in \mathbb{Z}}$ denote the standard orthonormal basis for $\ell^2(\mathbb{Z})$. If we define $U\xi_n := \xi_{n+1}$ and $S_p\xi_n := \xi_{pn}$, then it is routine to verify that $U$ and $(S_p)_{p \in S}$ satisfy (i)--(iii) from Definition~\ref{def:Q_S}. $\mathcal{Q}_S$ is known to be simple, see \cite{Sta1}*{Example~3.29~(a) and Proposition~3.2} for proofs and Proposition~\ref{prop:Q_S as O-alg} for the connection to \cite{Sta1}. Therefore, the representation from above is faithful and $\mathcal{Q}_S$ can be regarded as a subalgebra of $B(\ell^2(\mathbb{Z}))$. \end{rem}
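Since the canonical representation permutes (or annihilates) the basis vectors $\xi_n$, the defining relations can be checked exactly on the level of indices; a minimal Python sketch for two relatively prime values, chosen as $p=2$, $q=3$ for illustration:
\begin{verbatim}
def U(n):        return n + 1                              # U xi_n = xi_{n+1}
def S(p, n):     return p * n                              # S_p xi_n = xi_{pn}
def Sstar(p, n): return n // p if n % p == 0 else None     # S_p^* xi_n

p, q = 2, 3                       # relatively prime
window = range(-60, 61)

# (i)  S_p^* S_q = S_q S_p^*
for n in window:
    lhs = Sstar(p, S(q, n))
    rhs = None if Sstar(p, n) is None else S(q, Sstar(p, n))
    assert lhs == rhs

# (ii) S_p U = U^p S_p
for n in window:
    assert S(p, U(n)) == S(p, n) + p

# (iii) each xi_n is fixed by exactly one projection e_{m+pZ}, m = 0,...,p-1
for n in window:
    fixed = [m for m in range(p) if Sstar(p, n - m) is not None]
    assert fixed == [n % p]
\end{verbatim}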
\begin{rem}\label{rem:Q_S for 2 and N} For the case of $S=\{\text{all primes}\}$, the algebra $\mathcal{Q}_S$ coincides with $\mathcal{Q}_\mathbb{N}$ as introduced by Cuntz in \cite{CuntzQ}. Moreover, $S=\{2\}$ yields the \emph{$2$-adic ring $C^*$-algebra of the integers} that has been studied in detail by Larsen and Li in \cite{LarsenLi}. \end{rem}
\begin{defn} \label{def:D_S} The commutative subalgebra of $\mathcal{Q}_S$ generated by the projections $e_{m+h\mathbb{Z}}= u^ms_h^{\phantom{*}}s_h^*u^{-m}$ with $m \in \mathbb{Z}$ and $h \in H^+$ is denoted by $\mathcal{D}_S$. \end{defn}
\begin{rem}\label{rem:Q_S basic II} We record the following observations: \begin{enumerate}[a)] \item In view of Remark~\ref{rem:can rep}, $e_{m+h\mathbb{Z}}$ can be regarded as the orthogonal projection from $\ell^2(\mathbb{Z})$ onto $\ell^2(m+h\mathbb{Z})$. \item With regards to a), the projections $e_{m+h\mathbb{Z}}$ correspond to certain cosets from $\mathcal{F}$. However, projections arising as sums of such elementary projections may lead to additional cosets: If $h \in \mathbb{N}^\times$ belongs to the submonoid generated by $P$, then there is $h' \in H^+$ so that $h'=h\ell$ for some $\ell \in \mathbb{N}^\times$. Therefore, we get \[\begin{array}{c} e_{m+h\mathbb{Z}} = \sum\limits_{k=0}^{\ell-1} e_{m+hk + h'\mathbb{Z}}\end{array}\] and $e_{m+h\mathbb{Z}} \in \mathcal{D}_S$ for all such $h$. In fact, $\mathcal{F}$ equals the collection of all cosets for which the corresponding projection appears in $\mathcal{D}_S$, that is, the projection is expressible as a finite sum of projections $e_{m_i+h_i\mathbb{Z}}$ with $m_i \in \mathbb{Z}$ and $h_i \in H^+$. \end{enumerate} \end{rem}
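The refinement described in b) can be checked on any finite window of integers; a small sketch with hypothetical values of $m$, $h$ and $\ell$:
\begin{verbatim}
m, h, l = 1, 6, 5                 # h' = h*l = 30
hp = h * l
window = range(-300, 300)
coset  = {x for x in window if (x - m) % h == 0}
pieces = [{x for x in window if (x - m - h*k) % hp == 0} for k in range(l)]
assert coset == set().union(*pieces)                  # union of the pieces
assert sum(len(s) for s in pieces) == len(coset)      # pairwise disjoint
\end{verbatim}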
\begin{defn}\label{def:B_S} The subalgebra of $\mathcal{Q}_S$ generated by $\mathcal{D}_S$ and $u$ is denoted by $\mathcal{B}_S$. \end{defn}
\begin{rem}\label{rem:BD subalg B_S} The $C^*$-algebra $\mathcal{B}_S$ is isomorphic to the Bunce-Deddens algebra of type $d^\infty$. If $p \in H^+$ and $(e_{i, j}^{(p)})_{0 \leq i,j \leq p-1}$ denote the standard matrix units in $M_p(\mathbb{C})$, then there is a unital $*$-homomorphism $M_p(\mathbb{C}) \otimes C^*(\mathbb{Z}) \to \mathcal{B}_S$ mapping $e^{(p)}_{m,n} \otimes u^k$ to $e_{m+p\mathbb{Z}}u^{m-n+pk}$. Given another $q \in H^+$, the $*$-homomorphisms so constructed for $p$ and $pq$ are compatible with the embedding $\iota_{p,pq}\colon M_p(\mathbb{C}) \otimes C^*(\mathbb{Z}) \to M_{pq}(\mathbb{C}) \otimes C^*(\mathbb{Z})$ given by $e_{i, j}^{(p)} \otimes 1 \mapsto \sum_{k=0}^{q-1}e_{i+pk, j+pk}^{(pq)} \otimes 1$ and $1 \otimes u \mapsto 1 \otimes u^q$. The inductive limit associated with $(M_p(\mathbb{C}) \otimes C^*(\mathbb{Z}),\iota_{p,pq})_{p,q \in H^+}$, where $H^+ \subset \mathbb{N}^\times$ is a directed set in the usual way, is isomorphic to the Bunce-Deddens algebra of type $d^\infty$. Moreover, under this identification, the natural UHF subalgebra $M_{d^\infty}$ of the Bunce-Deddens algebra corresponds to the $C^*$-subalgebra of $\mathcal{B}_S$ generated by all elements of the form $e_{m+p\mathbb{Z}}u^{m-n}$ with $p\in H^+$ and $0\leq m,n \leq p-1$.
There is a natural action $\alpha$ of $\mathbb{Z} \rtimes H^+$ on $\mathcal{B}_S$ given by $\alpha_{(k,p)}(x) = u^ks_pxs_p^*u^{-k}$ for $(k,p) \in \mathbb{Z} \rtimes H^+$. Under the above identification, $M_{d^\infty} \subset \mathcal{B}_S$ is invariant under the restricted $H^+$-action, as for $p,q \in H^+$ and $0\leq m,n \leq p-1$, \[ s^{\phantom{*}}_qe^{\phantom{*}}_{m+p\mathbb{Z}}u^{m-n}s_q^* = s_q^{\phantom{*}} u^ms^{\phantom{*}}_ps_p^*u^{-n} s_q^* = u^{qm}s_{pq}^{\phantom{*}}s_{pq}^*u^{-qn} =e^{\phantom{*}}_{qm+pq\mathbb{Z}}u^{qm-qn}. \] \vspace*{0mm} \end{rem}
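On the matrix algebra leg, the connecting maps $\iota_{p,pq}$ from the preceding remark can be realized concretely; the following numpy sketch checks, for randomly chosen matrices, that $e^{(p)}_{i,j}\otimes 1 \mapsto \sum_k e^{(pq)}_{i+pk,j+pk}\otimes 1$ defines a unital $*$-homomorphism $M_p(\mathbb{C})\to M_{pq}(\mathbb{C})$ (the tensor factor $C^*(\mathbb{Z})$ is not modelled here):
\begin{verbatim}
import numpy as np

def iota(a, p, q):
    """M_p -> M_{pq}, e^{(p)}_{i,j} |-> sum_k e^{(pq)}_{i+pk, j+pk}."""
    b = np.zeros((p*q, p*q), dtype=complex)
    for i in range(p):
        for j in range(p):
            for k in range(q):
                b[i + p*k, j + p*k] = a[i, j]
    return b

p, q = 2, 3
a = np.random.randn(p, p) + 1j*np.random.randn(p, p)
c = np.random.randn(p, p) + 1j*np.random.randn(p, p)
assert np.allclose(iota(a @ c, p, q), iota(a, p, q) @ iota(c, p, q))
assert np.allclose(iota(a.conj().T, p, q), iota(a, p, q).conj().T)
assert np.allclose(iota(np.eye(p), p, q), np.eye(p*q))
\end{verbatim}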
Another way to present the algebra $\mathcal{Q}_S$ is provided by the theory of semigroup crossed products. Recall that, for an action $\beta$ of a discrete, left cancellative semigroup $T$ on a unital $C^*$-algebra $B$ by $*$-endomorphisms, a unital, covariant representation of $(B, \beta, T)$ is given by a unital $*$-homomorphism $\pi$ from $B$ to some unital $C^*$-algebra $C$ and a semigroup homomorphism $\varphi$ from $T$ to the isometries in $C$ such that the covariance condition $\varphi(t)\pi(b)\varphi(t)^* = \pi(\beta_t(b))$ holds for all $b \in B$ and $t \in T$. The semigroup crossed product $B \rtimes^{e}_\beta T$ is then defined as the $C^*$-algebra generated by a universal unital, covariant representation $(\iota_B,\iota_T)$ of $(B, \beta, T)$. We refer to \cite{LacRae} for further details. Note that if $T$ is a group, then this crossed product agrees with the full group crossed product $B \rtimes_\beta T$. Semigroup crossed products may be pathological or extremely complicated in some cases. But we will only be concerned with crossed products of left Ore semigroups acting by injective endomorphisms so that we maintain a close connection to group crossed products, see \cite{Lac}. With respect to $\mathcal{Q}_S$, we get isomorphisms \begin{equation}\label{eq:B_S and Q_S as crossed products} \mathcal{Q}_S \cong \mathcal{D}_S \rtimes^e_\alpha \mathbb{Z} \rtimes H^+ \cong \mathcal{B}_S \rtimes^e_\alpha H^+ \quad\text{and}\quad \mathcal{B}_S \cong \mathcal{D}_S\rtimes_\alpha \mathbb{Z}, \end{equation} see \cite{Sta1}*{Proposition~3.18 and Theorem~A.5}. Remark~\ref{rem:BD subalg B_S} reveals that the canonical subalgebra $M_{d^\infty} \subset \mathcal{B}_S$ and the isometries $(s_p)_{p \in H^+}$ give rise to a unital, covariant representation of $(M_{d^{\infty}},\alpha,H^+)$ on $\mathcal{Q}_S$. We will later see that this representation is faithful so that we can view $M_{d^\infty} \rtimes^e_\alpha H^+$ as a subalgebra of $\mathcal{Q}_S$, see Corollary~\ref{cor:subalgebra for torsion part}.
\subsection{Boundary quotients} \label{subsec:BQ for ADS} The set $S \subset \mathbb{N}^\times\setminus\{1\}$ itself can be thought of as a data encoding a dynamical system, namely the action $\theta$ of the free abelian monoid $H^+ \subset \mathbb{N}^\times$ on the group $\mathbb{Z}$ given by multiplication. $\theta_h$ is injective for $h \in H^+$ and surjective only if $h=1$. Furthermore, as every two distinct elements $p$ and $q$ in $S$ are relatively prime, we have $\theta_p(\mathbb{Z})+\theta_q(\mathbb{Z})=\mathbb{Z}$. Hence $(\mathbb{Z},H^+,\theta)$ forms an \emph{irreversible algebraic dynamical system} in the sense of \cite{Sta1}*{Definition 1.5}, compare \cite{Sta1}*{Example~1.8~(a)}. In fact, dynamics of this form were one of the key motivations for \cite{Sta1}. In order to compare the $C^*$-algebra $\mathcal{O}[\mathbb{Z},H^+,\theta]$ from \cite{Sta1}*{Definition 3.1} with $\mathcal{Q}_S$, let us recall the definition:
\begin{defn}\label{def:O-alg for IADS} Let $(G,P,\theta)$ be an irreversible algebraic dynamical system. Then $\mathcal{O}[G,P,\theta]$ is the universal $C^*$-algebra generated by a unitary representation $u$ of the group $G$ and a representation $s$ of the semigroup $P$ by isometries subject to the relations: \[\begin{array}{lrcl} (\text{CNP }1) & s_{p}u_{g} &\hspace*{-2.5mm}=\hspace*{-2.5mm}& u_{\theta_{p}(g)}s_{p},\vspace*{2mm}\\ (\text{CNP }2) & s_{p}^{*}u_gs_{q} &\hspace*{-2.5mm}=\hspace*{-2.5mm}& \begin{cases} u_{g_1}s_{\gcd(p,q)^{-1}q}^{\phantom{*}}s_{\gcd(p,q)^{-1}p}^{*}u_{g_2}& \text{ if } g = \theta_p(g_1)\theta_q(g_2),\\ 0& \text{ else,}\end{cases}\vspace*{2mm}\\ (\text{CNP }3) & 1 &\hspace*{-2.5mm}=\hspace*{-2.5mm}& \sum\limits_{[g] \in G/\theta_{p}(G)}{e_{g,p}} \hspace*{2mm}\text{ if } [G : \theta_{p}(G)]< \infty, \end{array}\] where $e_{g,p} = u_{g}s_{p}s_{p}^{*}u_{g}^{*}$. \end{defn}
Clearly, Definition~\ref{def:Q_S}~(ii) is the same as (CNP $1$). As $[\mathbb{Z}:h\mathbb{Z}] = h < \infty$ for every $h \in H^+$, (iii) corresponds to (CNP $3$) once we use Remark~\ref{rem:Q_S basic I}~a) and note that it is enough to have the summation relation for a set of generators of $H^+$. The case of distinct $p,q \in S$ and $g=0$ in (CNP $2$) yields (i). On the other hand, a slight modification of the argument in Remark~\ref{rem:Q_S basic I}~b) with $s_p^*u^ms_q$ in place of $s_p^*s_q$ establishes (CNP $2$) based on (i)--(iii). Thus we arrive at:
\begin{prop}\label{prop:Q_S as O-alg} The $C^*$-algebras $\mathcal{Q}_S$ and $\mathcal{O}[\mathbb{Z},H^+,\theta]$ are canonically isomorphic. \end{prop} According to \cite{Sta1}*{Corollary~3.28 and Example~3.29~(a)} $\mathcal{Q}_S$ is therefore a unital UCT Kirchberg algebra. While classification of $\mathcal{O}[G,P,\theta]$ by $K$-theory was achieved in \cite{Sta1} for irreversible algebraic dynamical systems $(G,P,\theta)$ under mild assumptions, and even generalized to \emph{algebraic dynamical systems} in \cite{bsBQforADS}, the range of the classifying invariant remained a mystery beyond the case of a single group endomorphism, where the techniques of \cite{CuntzVershik} apply. It thus seemed natural to go back to examples of dynamical systems involving $P = \mathbb{N}^k$ and try to understand the invariant in this case. In other words, our path led back to $\mathcal{Q}_S$, and the present work aims at making progress precisely in this direction. \vspace*{1em}
\noindent There is also an alternative way of constructing $\mathcal{Q}_S$ directly from either of the semigroups $\mathbb{N} \rtimes H^+$ or $\mathbb{Z} \rtimes H^+$ using the theory of boundary quotients of semigroup $C^*$-algebras. To begin with, let us note that $(\mathbb{N} \rtimes H^+,N \rtimes H)$ forms a quasi lattice-ordered group. Hence we can form the Toeplitz algebra $\mathcal{T}(\mathbb{N} \rtimes H^+,N \rtimes H)$ using the work of Nica, see \cite{Nic}. But $\mathbb{Z} \rtimes H^+$ has non-trivial units, so it cannot be part of a quasi lattice-ordered pair. In order to treat both semigroups within the same framework, let us instead employ the theory of semigroup $C^*$-algebras from \cite{Li1}, which generalizes Nica's approach tremendously.
We note that both $\mathbb{N} \rtimes H^+$ and $\mathbb{Z} \rtimes H^+$ are cancellative, countable, discrete semigroups with unit. Moreover, they are \emph{right LCM} semigroups, meaning that the intersection of two principal right ideals is either empty or another principal right ideal (given by a right least common multiple for the representatives of the two intersected ideals). Thus their semigroup $C^*$-algebras both enjoy a particularly nice and tractable structure, see \cites{BLS1,BLS2}. Additionally, both are left Ore semigroups with amenable enveloping group $N \rtimes H \subset \mathbb{Q} \rtimes \mathbb{Q}_+^\times$. However, we would like to point out that $\mathbb{N} \rtimes H^+$ and $\mathbb{Z} \rtimes H^+$ are not left amenable (but right amenable) as they fail to be left reversible, see \cite{Li1}*{Lemma~4.6} for details.
Roughly speaking, semigroup $C^*$-algebras have the flavor of Toeplitz algebras. In particular, they tend to be non-simple except for very special situations. Still, we might hope for $\mathcal{Q}_S$ to be a quotient of $C^*(\mathbb{N} \rtimes H^+)$ or $C^*(\mathbb{Z} \rtimes H^+)$ obtained through some systematic procedure. This was achieved in \cite{LacRae} for $\mathbb{N} \rtimes \mathbb{N}^\times$, i.e.\ $S$ consisting of all primes, by showing that the boundary quotient of $\mathcal{T}(\mathbb{N} \rtimes \mathbb{N}^\times,\mathbb{Q} \rtimes \mathbb{Q}_+^\times) = C^*(\mathbb{N} \rtimes \mathbb{N}^\times)$ in the sense of \cite{CrispLaca} coincides with $\mathcal{Q}_\mathbb{N}$. Recently, this concept of a boundary quotient for a quasi lattice-ordered group from \cite{CrispLaca} was transferred to semigroup $C^*$-algebras in the context of right LCM semigroups, see \cite{BRRW}*{Definition 5.1}:
\begin{defn}\label{def:FS and boundary quotient} Let $T$ be a right LCM semigroup. A finite set $F \subset T$ is called a \emph{foundation set} if, for all $t\in T$, there is $f \in F$ satisfying $tT \cap fT \neq \emptyset$. The \emph{boundary quotient} $\mathcal{Q}(T)$ of a right LCM semigroup $T$ is the quotient of $C^*(T)$ by the relation \begin{equation}\label{eq:BQ} \begin{array}{c}\prod\limits_{f \in F} (1-e_{fT}) = 0 \quad \text{ for all foundation sets } F.\end{array} \end{equation} \end{defn}
To emphasize the relevance of this approach, let us point out that right LCM semigroups are much more general than quasi lattice-ordered groups. For instance, right cancellation may fail, so right LCM semigroups need not embed into groups.
On the one hand, this notion of a quotient of a semigroup $C^*$-algebra seems suitable as $\mathbb{N} \rtimes H^+$ and $\mathbb{Z} \rtimes H^+$ are right LCM semigroups. On the other hand, the abstract condition~\eqref{eq:BQ} prohibits an immediate identification of $\mathcal{Q}_S$ with $\mathcal{Q}(\mathbb{N} \rtimes H^+)$ or $\mathcal{Q}(\mathbb{Z} \rtimes H^+)$. This gap has been bridged successfully through \cite{bsBQforADS}:
\begin{prop}\label{prop:Q_S as BQ} There are canonical isomorphisms $\mathcal{Q}_S \cong \mathcal{Q}(\mathbb{Z} \rtimes H^+) \cong \mathcal{Q}(\mathbb{N} \rtimes H^+)$. \end{prop} \begin{proof} For $\mathbb{Z} \rtimes H^+$, \cite{bsBQforADS}*{Corollary~4.2} shows that $\mathcal{Q}(\mathbb{Z} \rtimes H^+) \cong \mathcal{O}[\mathbb{Z},H^+,\theta]$, and hence $\mathcal{Q}_S \cong \mathcal{Q}(\mathbb{Z} \rtimes H^+)$ by Proposition~\ref{prop:Q_S as O-alg}. Noting that $H^+$ is directed, this can also be seen immediately from \cite{bsBQforADS}*{Remark~2.2 and Proposition~4.1}. For $\mathbb{N}\rtimes H^+$, we infer from \cite{bsBQforADS}*{Example~2.8~(b)} that it suffices to consider \emph{accurate foundation sets} $F$ for \eqref{eq:BQ} by \cite{bsBQforADS}*{Proposition~2.4}, that is, $F$ consists of elements with mutually disjoint principal right ideals. Now $F \subset \mathbb{N} \rtimes H^+$ is an accurate foundation set if and only if it is an accurate foundation set for $\mathbb{Z} \rtimes H^+$. Conversely, given an accurate foundation set $F' =\{(m_1,h_1),\dots,(m_n,h_n)\} \subset \mathbb{Z} \rtimes H^+$, we can replace each $m_i$ by some $m_i' \in m_i+h_i\mathbb{N}$ with $m_i' \in \mathbb{N}$ to get an accurate foundation set $F \subset \mathbb{N} \rtimes H^+$ which uses the same right ideals as $F'$. This allows us to conclude that $\mathcal{Q}(\mathbb{Z} \rtimes H^+)$ and $\mathcal{Q}(\mathbb{N} \rtimes H^+)$ are isomorphic. \end{proof}
The fact that $\mathcal{Q}(\mathbb{N} \rtimes H^+)$ and $\mathcal{Q}(\mathbb{Z} \rtimes H^+)$ coincide is not at all surprising if we take into account \cite{BaHLR} and view $C^*(\mathbb{Z} \rtimes H^+)$ as the \emph{additive boundary quotient} of $\mathbb{N} \rtimes H^+$. Where there is an additive boundary, there is also a multiplicative boundary, see the boundary quotient diagram in \cite{BaHLR}*{Section~4}: The \emph{multiplicative boundary quotient} of $C^*(\mathbb{N} \rtimes H^+)$ is obtained by imposing the analogous relation to (iii) from Definition~\ref{def:Q_S}, i.e.\ $\sum_{k=0}^{p-1} e_{k+p\mathbb{N}} = 1$ for each $p \in S$. In comparison with Definition~\ref{def:Q_S}, the essential difference is that the semigroup element $(1,1)$ is implemented by a proper isometry $v_{(1,1)}$ instead of a unitary $u$. This multiplicative boundary quotient has been considered in \cite{LN2}*{Subsection~6.5}. As it turns out, its $K$-theory is hard to compute for larger $S$ as it leads to increasingly complicated extension problems of abelian groups. It is quite remarkable that there seems to be a deep common theme underlying the structure of the $K$-theory for both the multiplicative boundary quotient and $\mathcal{Q}_S$.
\subsection{\texorpdfstring{The $a$-adic algebras}{The a-adic algebras}} \label{subsec:a-adic algs}
Our aim is to compute the $K$-theory of $\mathcal{Q}_S$, and for this we need to make use of a certain duality result \cite{KOQ}*{Theorem~4.1} that allows us to translate our problem into real dynamics. This will be explained in the next section, but let us first recall the definition and some facts about $a$-adic algebras from \cite{KOQ} and \cite{Oml}, see also \cite{hr}*{Sections~10 and~25} for more on $a$-adic numbers.
Let $a=(a_k)_{k\in\mathbb{Z}}$ be a sequence of numbers in $\mathbb{N}^\times\setminus\{1\}$, and define the \emph{$a$-adic numbers} as the abelian group of sequences \[\begin{array}{c} \Omega_a = \left\{ x\in\prod\limits_{k=-\infty}^{\infty}\{0,1,\dotsc,a_k-1\} : \text{there exists $\ell\in\mathbb{Z}$ such that $x_k=0$ for all $k<\ell$}\right\} \end{array}\] under addition with carry (that is, like a doubly infinite odometer). The subgroups $\{x\in\Omega_a:x_k=0\text{ for }k<\ell\}$, $\ell\in\mathbb{Z}$, form a neighborhood basis of the identity. This induces a topology that makes $\Omega_a$ a totally disconnected, locally compact Hausdorff group. The \emph{$a$-adic integers} form the compact open subgroup \begin{equation}\label{eq:D-spectrum} \Delta_a=\{x\in\Omega_a:x_k=0\text{ for }k<0\}\subset\Omega_a. \end{equation} For $k \in \mathbb{Z}$, define the sequence $e_k \in \Omega_a$ by $(e_k)_\ell=\delta_{k \ell}$. For $k\geq 1$, we may associate the rational number $(a_{-1}a_{-2}\dotsm a_{-k})^{-1}$ with $e_{-k}$ to get an injective group homomorphism from the non-cyclic subgroup \[\begin{array}{c} N_a=\left\{\frac{j}{a_{-1}a_{-2}\cdots a_{-k}}:j\in\mathbb{Z}, k\geq 1\right\} \subset \mathbb{Q} \end{array}\] into $\Omega_a$ with dense range. Note that $N_a$ contains $\mathbb{Z} \subset \mathbb{Q}$, and by identifying $N_a$ and $\mathbb{Z} \subset N_a$ with their images under the embedding into $\Omega_a$, it follows that $N_a \cap\Delta_a = \mathbb{Z}$.
The subgroups $N_a \cap\{x\in\Omega_a:x_k=0\text{ for }k<\ell\}$ for $\ell\in\mathbb{Z}$ give rise to a subgroup topology of $N_a$, and $\Omega_a$ is the Hausdorff completion (i.e.\ inverse limit completion) of $N_a$ with respect to this filtration. Therefore, the class of $a$-adic numbers $\Omega_a$ comprises all groups that are Hausdorff completions of non-cyclic subgroups of $\mathbb{Q}$. Loosely speaking, the negative part of the sequence $a$ determines a subgroup $N_a$ of $\mathbb{Q}$, and the positive part determines a topology that gives rise to a completion of $N_a$. Given a sequence $a$, let $a^*$ denote the dual sequence defined by $a^*_k=a_{-k}$, and write $N_a^*$ and $\Omega_a^*$ for the associated groups.
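For instance, if $a_k = 2$ for all $k \in \mathbb{Z}$, then $N_a = \mathbb{Z}[\tfrac{1}{2}]$, the subgroups $N_a \cap \{x\in\Omega_a : x_k=0 \text{ for } k<\ell\}$ are precisely the subgroups $2^\ell\mathbb{Z}$, $\ell \in \mathbb{Z}$, and the completion of $\mathbb{Z}[\tfrac{1}{2}]$ with respect to this filtration is $\Omega_a \cong \mathbb{Q}_2$, with $\Delta_a \cong \mathbb{Z}_2$.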
Let $H_a$ be any non-trivial subgroup of $\mathbb{Q}^\times_+$ acting on $N_a$ by continuous multiplication, meaning that for all $h\in H_a$, the map $N_a\to N_a$, $x\mapsto hx$ is continuous with respect to the topology described above. The largest subgroup with this property is generated by the primes dividing infinitely many terms of both the positive and negative tail of the sequence $a$, see \cite{KOQ}*{Corollary~2.2}, so we must assume that this subgroup is non-trivial (which holds in the cases we study). Then $H_a$ also acts on $\Omega_a$ by multiplication, and therefore $N_a\rtimes H_a$ acts on $\Omega_a$ by an $ax+b$-action.
\begin{defn}\label{def:Q_a,H} For a sequence $a=(a_k)_{k \in \mathbb{Z}}$ in $\mathbb{N}^\times\setminus\{1\}$ and a non-trivial subgroup $H_a$ of $\mathbb{Q}^\times_+$ acting by continuous multiplication on $N_a$, the crossed product $\overline{\mathcal{Q}}(a,H_a) := C_0(\Omega_a)\rtimes N_a\rtimes H_a$ is called the \emph{$a$-adic algebra} of $(a,H_a)$. \end{defn}
Clearly, interchanging $a$ and $a^*$ and manipulating the position of $a_0$ will not affect any structural property on the level of algebras. In fact, for our purposes, it will usually be convenient to assume that $a=a^*$. Therefore, we will often use the positive tail of the sequence $a$ in the description of $N_a$, and think of $N_a$ as the inductive limit of the system $\left\lbrace (\mathbb{Z} ,\cdot a_k) : k \geq 0 \right\rbrace$ via the isomorphism induced by \begin{equation}\label{eq:N_a ind lim} \begin{gathered} \begin{xy}
\xymatrix{
\mathbb{Z} \ar[rr]^{\cdot a_k} \ar[rd]_(0.3){\cdot \tfrac{1}{a_0a_1a_2\dotsm a_{k-1}}} & & \mathbb{Z} \ar[ld]^(0.3){\cdot \tfrac{1}{a_0a_1a_2\dotsm a_{k-1}a_k}}\\
& N_a &
} \end{xy} \end{gathered} \end{equation}
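For instance, for the constant sequence given by $a_k = p$ for all $k$, this realizes $N_a = \mathbb{Z}[\tfrac{1}{p}]$ as the inductive limit $\varinjlim (\mathbb{Z} \stackrel{\cdot p}{\longrightarrow} \mathbb{Z} \stackrel{\cdot p}{\longrightarrow} \dotsb)$, the $k$th copy of $\mathbb{Z}$ being identified with the subgroup $\tfrac{1}{p^k}\mathbb{Z}$ of $\mathbb{Z}[\tfrac{1}{p}]$.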
\begin{rem}\label{rem:a-adic stability} By \cite{KOQ}*{Corollary~2.8}, the $a$-adic algebra $\overline{\mathcal{Q}}(a,H_a)$ is always a non-unital UCT Kirchberg algebra, hence it is stable by Zhang's dichotomy, see \cite{Z} or \cite{rordam-zd}*{Proposition~4.1.3}.
An immediate consequence of \eqref{eq:N_a ind lim} is that \[ \begin{array}{c} C_0(\Omega_a)\rtimes N_a \cong \overline{\bigcup\limits_{k=0}^\infty C(\frac{1}{a_0\dotsm a_k}\Delta_a)\rtimes \frac{1}{a_0\dotsm a_k}\mathbb{Z}}. \end{array} \] Moreover, by writing $\frac{1}{a_0\dotsm a_k}\Delta_a=\Delta_a\sqcup\bigl(\frac{a_0\dotsm a_k - 1}{a_0\dotsm a_k}+\Delta_a\bigr)\sqcup\dotsb\sqcup\bigl(\frac{1}{a_0\dotsm a_k}+\Delta_a\bigr)$ and checking how the translation action of $\frac{1}{a_0\dotsm a_k}\mathbb{Z}$ permutes the cosets in this decomposition, one sees that \[ \begin{array}{c} C(\frac{1}{a_0\dotsm a_k}\Delta_a)\rtimes \frac{1}{a_0\dotsm a_k}\mathbb{Z} \cong M_{a_0\dotsm a_k}\left(C(\Delta_a)\rtimes\mathbb{Z}\right). \end{array} \] In particular, the natural embeddings of the increasing union above translate into embeddings into the upper left corners. Hence, it follows that $C_0(\Omega_a)\rtimes N_a$ is also stable. \end{rem}
\begin{rem}\label{rem:supernatural} A supernatural number is a function $d\colon\{\text{all primes}\} \to \mathbb{N} \cup \{\infty\}$ such that $\sum_{\text{$p$ prime}}d(p) = \infty$; it is often written as a formal product $\prod_{\text{$p$ prime}} p^{d(p)}$. It is well known that there is a one-to-one correspondence between supernatural numbers and non-cyclic subgroups of $\mathbb{Q}$ containing $1$, and that the supernatural numbers form a complete isomorphism invariant both for the UHF algebras and the Bunce-Deddens algebras, see \cite{Gl} and \cite{BD}.
Every sequence $a=(a_k)_{k\geq 0}$ defines a function $d_a\colon\{\text{all primes}\} \to \mathbb{N} \cup \{\infty\}$ given by $d_a(p)=\sup\{n \in \mathbb{N} : p^n|a_0a_1 \dotsm a_k \text{ for some } k\geq 0\}$. More intuitively, $d_a$ is thought of as the infinite product $d_a=a_0a_1a_2\dotsm$. Moreover (see e.g.\ \cite{KOQ}*{Lemma~5.1}), we have \begin{equation}\label{eq:a-adic int} \begin{array}{c} \Delta_a\cong\prod\limits_{p\in d_a^{-1}(\infty)}\mathbb{Z}_p\times\prod\limits_{p\in d_a^{-1}(\mathbb{N})}\mathbb{Z}/p^{d_a(p)}\mathbb{Z}, \end{array} \end{equation} and thus the supernatural number $d_a$ is a complete invariant for the $a$-adic integers $\Delta_a$ up to isomorphism of topological groups. \end{rem}
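For instance, the constant sequence $a_k = 10$ has $d_a = 2^\infty 5^\infty$ and hence $\Delta_a \cong \mathbb{Z}_2 \times \mathbb{Z}_5$, whereas the sequence $a_k = k+2$ satisfies $d_a(p) = \infty$ for every prime $p$, so that $\Delta_a \cong \prod_{\text{$p$ prime}}\mathbb{Z}_p$.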
Now, as in Section~\ref{sec:prelim}, let $S$ be a set consisting of relatively prime numbers, and let $H^+$ and $H$ denote the submonoid of $\mathbb{N}^\times$ and the subgroup of $\mathbb{Q}^\times_+$ generated by $S$, respectively. The sequence $a_S$ is defined as follows: Since $H^+$ is a subset of $\mathbb{N}^\times$, its elements can be sorted into increasing order $1<a_{S,0}<a_{S,1}<\dotsb$, where $a_{S,0}=\min S$. Finally, we set $a_{S,k}=a_{S,-k}$ for $k<0$. If $S$ is a finite set, an easier way to form a suitable sequence $a_S$ is to let $q$ denote the product of all elements of $S$, and set $a_{S,k}=q$ for all $k\in\mathbb{Z}$. In both cases, $a_S^*=a_S$ and $N_{a_S}=N$. Henceforth we fix such a sequence $a_S$ and denote $\Omega_{a_S}$ and $\Delta_{a_S}$ by $\Omega$ and $\Delta$, respectively. The purpose of self-duality of $a_S$ is to have $N^*=N$, making the statement of Theorem~\ref{thm:duality} slightly more convenient by avoiding the explicit use of $N^*$. The sequences $a_S$ are the ones associated with supernatural numbers $d$ for which $d(p) \in \{0,\infty\}$ for every prime $p$. In this case \eqref{eq:a-adic int} implies that \[\begin{array}{c} \Delta \cong\prod\limits_{p \in P}\mathbb{Z}_p \quad\text{and}\quad \Omega \cong\prod\limits_{p \in P}\hspace*{-2mm}' \hspace*{2mm} \mathbb{Q}_p=\prod\limits_{p \in P}(\mathbb{Q}_p,\mathbb{Z}_p), \end{array}\] where the latter denotes the restricted product with respect to $\left\lbrace \mathbb{Z}_p : p\in P \right\rbrace$.
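For instance, for $S = \{2,3\}$ one may take the constant sequence $a_{S,k} = 6$, or the sequence $2,3,4,6,8,9,\dots$ obtained by sorting $H^+\setminus\{1\}$; in either case the associated supernatural number is $2^\infty 3^\infty$, and we get $N = \mathbb{Z}[\tfrac{1}{6}]$, $\Delta \cong \mathbb{Z}_2 \times \mathbb{Z}_3$ and $\Omega \cong \mathbb{Q}_2 \times \mathbb{Q}_3$.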
\begin{rem}\label{rem:spec of D_S} The spectrum of the commutative subalgebra $\mathcal{D}_S$ of $\mathcal{Q}_S$ from Definition~\ref{def:D_S} coincides with the $a$-adic integers $\Delta$ described in \eqref{eq:D-spectrum}. Indeed, for every $X=m+h\mathbb{Z}\in\mathcal{F}$, the projection $e_X$ in $\mathcal{D}_S$ corresponds to the characteristic function on the compact open subset $m+h\Delta$ of $\Delta$. Moreover, this correspondence extends to an isomorphism between the $C^*$-algebra $\mathcal{B}_S\cong\mathcal{D}_S\rtimes\mathbb{Z}$ of Definition~\ref{def:B_S} and $C(\Delta)\rtimes\mathbb{Z}$, which is equivariant for the natural $H^+$-actions on the algebras, both denoted by $\alpha$. \end{rem}
Let us write $e$ for the projection in $\overline{\mathcal{Q}}(a_S,H)$ representing the characteristic function on $\Delta$ in $C_0(\Omega)$. It is explained in \cite{Oml}*{Section~11.6} that $e$ is a full projection, and thus, by using Remark~\ref{rem:spec of D_S} together with \eqref{eq:B_S and Q_S as crossed products}, we have \begin{equation}\label{eq:full corner Q_S} e\overline{\mathcal{Q}}(a_S,H)e \cong (C(\Delta) \rtimes \mathbb{Z}) \rtimes^e_\alpha H^+ \cong (\mathcal{D}_S \rtimes \mathbb{Z}) \rtimes^e_\alpha H^+ \cong \mathcal{B}_S \rtimes^e_\alpha H^+ \cong \mathcal{Q}_S. \end{equation} In fact, since $N$ coincides with $(H^+)^{-1}\mathbb{Z}$, the above also follows from \cite{Lac}. Moreover, the argument in \cite{Oml} does not require $H$ to be non-trivial, so it can be used together with Remark~\ref{rem:spec of D_S} and \eqref{eq:B_S and Q_S as crossed products} to get \begin{equation}\label{eq:full corner B_S} e(C_0(\Omega) \rtimes N)e \cong C(\Delta) \rtimes \mathbb{Z} \cong \mathcal{D}_S \rtimes \mathbb{Z} \cong \mathcal{B}_S. \end{equation} Hence, by applying Remark~\ref{rem:a-adic stability} we arrive at the following result: \begin{prop}\label{prop:stable Q_S} The stabilization of $\mathcal{Q}_S$ is isomorphic to $\overline{\mathcal{Q}}(a_S,H)$, and the stabilization of $\mathcal{B}_S$ is isomorphic to $C_0(\Omega)\rtimes N$. \end{prop} Therefore Proposition~\ref{prop:stable Q_S} gives an alternative way to see that $\mathcal{Q}_S$ is a unital UCT Kirchberg algebra, which is also a consequence of Proposition~\ref{prop:Q_S as O-alg}.
\section{Comparison with real dynamics}\label{sec:comp with real dyn}
Let $S$ and $H$ be as specified in Section~\ref{sec:prelim}, let $a=(a_k)_{k \in \mathbb{Z}}$ be a sequence in $\mathbb{N}^\times \setminus\{1\}$, and let $H_a$ be a non-trivial subgroup of $\mathbb{Q}_+^\times$ that acts on $N_a$ by continuous multiplication. For convenience, we will assume $a^*= a$ so that $N_a^*=N_a$. Moreover, $N_a$ acts on $\mathbb{R}$ by translation and $H_a$ acts on $\mathbb{R}$ by multiplication, giving rise to an $ax+b$-action of $N_a\rtimes H_a$ on $\mathbb{R}$. Let $\widehat{N}_a$ denote the Pontryagin dual of $N_a$. By \cite{KOQ}*{Theorem~3.3}, the diagonal embedding $N_a \to\mathbb{R}\times\Omega_a$ has discrete range, and gives an isomorphism \[(\mathbb{R}\times\Omega_a)/N_a \cong \widehat{N}_a.\] By applying Green's symmetric imprimitivity theorem, see e.g.\ \cite{Wil}*{Corollary~4.11}, we obtain that \[C_0(\Omega_a) \rtimes N_a \sim_M C_0(\mathbb{R}) \rtimes N_a,\] and this Morita equivalence is equivariant for the actions of $H_a$ by multiplication on one side and inverse multiplication on the other. The inverse map on $H_a$ does not have any impact on the crossed products, and thus \[\overline{\mathcal{Q}}(a,H_a) \sim_M C_0(\mathbb{R}) \rtimes N_a \rtimes H_a.\] All the above is explained in detail in \cite{KOQ}*{Proof of Theorem~4.1}. Moreover, recall that UCT Kirchberg algebras are either unital or stable, so by using Proposition~\ref{prop:stable Q_S} we get: \begin{thm}\label{thm:duality} The $a$-adic algebra $\overline{\mathcal{Q}}(a,H_a)$ is isomorphic to $C_0(\mathbb{R}) \rtimes N_a \rtimes H_a$. In particular, the stabilization of $\mathcal{Q}_S$ is isomorphic to $C_0(\mathbb{R}) \rtimes N \rtimes H$. \end{thm}
\begin{rem}\label{rem:generality} It follows from Theorem~\ref{thm:duality}, based on \cite{KOQ}*{Theorem~4.1}, that any $a$-adic algebra $\overline{\mathcal{Q}}(a,H_a)$ is isomorphic to a crossed product $C_0(\mathbb{R})\rtimes N_a \rtimes H_a$. Recall that $N_a$ can be any non-cyclic subgroup of $\mathbb{Q}$ and $H_a$ can be any non-trivial subgroup of $\mathbb{Q}_+^\times$ that acts on $N_a$ by multiplication. In the present work, we limit our scope to the case where $N_a$ and $H_a$ can be obtained from a family $S$ of relatively prime numbers for the benefit of a more concise exposition. In a forthcoming project, we aim at establishing analogous results to the ones proven here for all $a$-adic algebras. \end{rem}
\begin{rem}\label{rem:ind limit C_0(R) cross N} By employing the description of $N_a$ from \eqref{eq:N_a ind lim}, we can write $C_0(\mathbb{R}) \rtimes N_a$ as an inductive limit. For $k \geq 0$, define the automorphism $\gamma_k$ of $C_{0}(\mathbb{R})$ by \[\begin{array}{c} \gamma_0(f)(s)=f(s-1) \text{ and } \gamma_{k+1}(f)(s)=f\bigl(s-\frac{1}{a_0a_1\dotsm a_k}\bigr),\quad f\in C_0(\mathbb{R}). \end{array}\] Under the identification in \eqref{eq:N_a ind lim}, these automorphisms give rise to the natural $N_a$-action on $C_0(\mathbb{R})$, where $\gamma_k$ corresponds to the generator for the $k$th copy of $\mathbb{Z}$. For $k \geq 0$, let $u_k \in \mathcal{M}(C_0(\mathbb{R}) \rtimes_{\gamma_k} \mathbb{Z})$ denote the canonical unitary implementing $\gamma_k$ and consider the $*$-homomorphism $\phi_k\colon C_0(\mathbb{R}) \rtimes_{\gamma_k} \mathbb{Z} \to C_0(\mathbb{R}) \rtimes_{\gamma_{k+1}} \mathbb{Z}$ given by $\phi_k(f) = f$ and $\phi_k(fu_k) = fu_{k+1}^{a_k}$ for every $f\in C_0(\mathbb{R})$. The inductive limit description \eqref{eq:N_a ind lim} of $N_a$ now yields an isomorphism $\varphi\colon\varinjlim \left\lbrace C_0(\mathbb{R}) \rtimes_{\gamma_k} \mathbb{Z},\phi_k \right\rbrace \stackrel{\cong}{\longrightarrow} C_0(\mathbb{R}) \rtimes N_a$. \end{rem} \begin{rem}\label{rem:duality B_S} A modification of \cite{CuntzQ}*{Lemma~6.7}, using the inductive limit description from Remark~\ref{rem:ind limit C_0(R) cross N}, shows that $C_0(\mathbb{R}) \rtimes N_a$ is stable. Hence, it follows from the above together with Remark~\ref{rem:a-adic stability} that $C_0(\Omega_a) \rtimes N_a$ is isomorphic to $C_0(\mathbb{R}) \rtimes N_a$. In particular, Proposition~\ref{prop:stable Q_S} shows that the stabilization of $\mathcal{B}_S$ is isomorphic to $C_0(\mathbb{R}) \rtimes N$. \end{rem} We will make use of this fact below. \begin{lem} \label{lem:torsion algebra Morita} Let $\widetilde{\alpha}$ and $\beta$ denote the actions of $H_a$ on $C_0(\Omega_a)\rtimes N_a$ and $C_0(\mathbb{R})\rtimes N_a$, respectively. Then $\beta^{-1}$ is exterior equivalent to an action $\widetilde{\beta}$ for which there is an $\widetilde{\alpha}$ - $\widetilde{\beta}$-equivariant isomorphism $C_0(\Omega_a) \rtimes N_a \stackrel{\cong}{\longrightarrow} C_0(\mathbb{R}) \rtimes N_a$. \end{lem} \begin{proof} The respective actions $\widetilde{\alpha}$ and $\beta^{-1}$ of $H_a$ are Morita equivalent by \cite{KOQ}*{Proof of Theorem~4.1}. Moreover, both $C^*$-algebras are separable and stable, see Remark~\ref{rem:duality B_S}. Therefore, \cite{Com}*{Proposition on p.~16} implies that the actions are also outer conjugate, and the statement follows. \end{proof}
In the following, we denote by $\iota_{N_a}\colon C_0(\mathbb{R}) \into C_0(\mathbb{R}) \rtimes N_a$ the canonical embedding, which is equivariant for the respective $H_a$-actions $\beta$ (and also $\beta^{-1}$). We conclude this section by proving that $\iota_{N_a}$ induces an isomorphism between the corresponding $K_1$-groups.
\begin{prop} \label{prop:KtheoryA0} The canonical embedding $\iota_{N_a}\colon C_0(\mathbb{R}) \into C_0(\mathbb{R})\rtimes N_a$ induces an isomorphism between the corresponding $K_1$-groups. \end{prop} \begin{proof} Recall the isomorphism $\varphi\colon\varinjlim \left\lbrace C_0(\mathbb{R}) \rtimes_{\gamma_k} \mathbb{Z},\phi_k \right\rbrace \stackrel{\cong}{\longrightarrow} C_0(\mathbb{R}) \rtimes N_a$ from Remark~\ref{rem:ind limit C_0(R) cross N}. For $k \geq 0$, let $\iota_k\colon C_0(\mathbb{R})\to C_0(\mathbb{R})\rtimes_{\gamma_k}\mathbb{Z}$ be the canonical embedding. As $\iota_{k+1} = \phi_k \circ \iota_k$, we obtain the following commutative diagram \begin{equation}\label{eq:ind lim C_0(R)CrossN_a} \begin{gathered} \begin{xy} \xymatrix{ C_0(\mathbb{R}) \ar[r]^{\iota_{N_a}} \ar[d]_{\phi_{k,\infty} \circ \ \iota_k} & C_0(\mathbb{R}) \rtimes N_a \\
\varinjlim \left\lbrace C_0(\mathbb{R}) \rtimes_{\gamma_m} \mathbb{Z},\phi_m \right\rbrace \ar[ur]_\varphi } \end{xy} \end{gathered} \end{equation} Here, $\phi_{k,\infty}\colon C_0(\mathbb{R}) \rtimes_{\gamma_k} \mathbb{Z} \to \varinjlim \left\lbrace C_0(\mathbb{R}) \rtimes_{\gamma_m} \mathbb{Z},\phi_m \right\rbrace$ denotes the canonical $*$-homomorphism given by the universal property of the inductive limit.
As $K_0(C_0(\mathbb{R})) = 0$, the Pimsner-Voiculescu sequence \cite{PV} for $\gamma_k \in \op{Aut}(C_0(\mathbb{R}))$ reduces to an exact sequence \[ K_0(C_0(\mathbb{R}) \rtimes_{\gamma_k} \mathbb{Z}) \into K_1(C_0(\mathbb{R})) \stackrel{\op{id} - K_1(\gamma_k)}{\longrightarrow} K_1(C_0(\mathbb{R})) \stackrel{K_1(\iota_k)}{\onto} K_1(C_0(\mathbb{R}) \rtimes_{\gamma_k} \mathbb{Z}). \] For each $k\geq 0$, the automorphism $\gamma_k$ is induced by a translation of $\mathbb{R}$, which is homotopic to the identity map, so that $K_1(\gamma_k) = \op{id}$. It thus follows that $K_1(\iota_k)$ is an isomorphism. As $\iota_{k+1} = \phi_k \circ \iota_k$, we therefore get that $K_1(\phi_k)$ is an isomorphism as well. Hence, by continuity of $K$-theory, $K_1(\phi_{k,\infty})$ is an isomorphism. It now follows from \eqref{eq:ind lim C_0(R)CrossN_a} that $K_1(\iota_{N_a})$ is an isomorphism, which completes the proof. \end{proof}
\section{\texorpdfstring{A decomposition of the $K$-theory of $\mathcal{Q}_S$}{A decomposition of the K-theory}} \label{sec:K-theory} In this section, we show that $K_*(\mathcal{Q}_S)$ decomposes as a direct sum of a free abelian group and a torsion group, see Theorem~\ref{thm:decomposition of K-theory} and Corollary~\ref{cor:torsion and free part K-theory}. We would like to highlight that this is not just an abstract decomposition of $K_*(\mathcal{Q}_S)$, but a result that facilitates a description of the two parts by distinguished $C^*$-algebras associated to $S$, namely $M_{d^\infty} \rtimes^e_\alpha H^+$ for the torsion part, and $C_0(\mathbb{R}) \rtimes_\beta H$ for the free part. The free abelian part is then shown to have rank $2^{\lvert S\rvert - 1}$, see Proposition \ref{prop:K-theory torsion free part}, so that $\mathcal{Q}_S$ and $\mathcal{Q}_T$ can only be isomorphic if $S$ and $T$ have the same cardinality.
The following is the key tool for the proof of this section's main result, and we think it is of interest in its own right.
\begin{prop} \label{prop:splitted K-theory} Let $k \in \mathbb{N}\cup \left\lbrace \infty \right\rbrace$, $A,B,C$ $C^*$-algebras, and $\alpha\colon \mathbb{Z}^k \curvearrowright A$, $\beta\colon \mathbb{Z}^k \curvearrowright B$, and $\gamma\colon \mathbb{Z}^k \curvearrowright C$ actions. Let $v\colon \mathbb{Z}^k \to \mathcal U(\mathcal{M}(C))$ be a $\gamma$-cocycle and denote by $\tilde{\gamma}\colon \mathbb{Z}^k \curvearrowright C$ the induced action given by $\tilde{\gamma}_h = \operatorname{Ad}(v_h) \circ \gamma_h$ for $h \in \mathbb{Z}^k$. Let $\kappa\colon C \rtimes_{\tilde{\gamma}} \mathbb{Z}^k \stackrel{\cong}{\longrightarrow} C \rtimes_{\gamma} \mathbb{Z}^k$ be the $*$-isomorphism induced by the $\gamma$-cocycle $v$. Assume that $\varphi\colon A \to C$ is a non-degenerate $\alpha$ - $\gamma$-equivariant $*$-homomorphism and $\psi\colon B\to C$ a non-degenerate $\beta$ - $\tilde{\gamma}$-equivariant $*$-homomorphism such that $K_0(\varphi)$ and $K_1(\psi)$ are isomorphisms and $K_1(\varphi)$ and $K_0(\psi)$ are trivial. Then \[ K_*(\varphi \rtimes \mathbb{Z}^k) \oplus K_*(\kappa \circ (\psi \rtimes \mathbb{Z}^k))\colon K_*(A \rtimes_\alpha \mathbb{Z}^k) \oplus K_*(B \rtimes_\beta \mathbb{Z}^k) \to K_*(C \rtimes_\gamma \mathbb{Z}^k) \] is an isomorphism. \end{prop} \begin{proof} Consider the amplified action $\gamma^{(2)}\colon \mathbb{Z}^k \curvearrowright M_2(C)$ given by entrywise application of $\gamma$. Let $w\colon \mathbb{Z}^k \to \mathcal U(\mathcal{M}(M_2(C)))$ be the $\gamma^{(2)}$-cocycle given by $w_h= \operatorname{diag}(1,v_h)$ for $h \in \mathbb{Z}^k$. The induced $\mathbb{Z}^k$-action $\delta = \operatorname{Ad}(w) \circ \gamma^{(2)}$ satisfies $\delta_h(\operatorname{diag}(c,c')) = \operatorname{diag}(\gamma_h(c),\tilde{\gamma}_h(c'))$ for all $h \in \mathbb{Z}^k$ and $c,c' \in C$. Thus, $\eta = \varphi \oplus \psi\colon A \oplus B \to M_2(C)$ is a non-degenerate $\alpha \oplus \beta$ - $\delta$-equivariant $*$-homomorphism.
By additivity of $K$-theory, $K_*(\eta) = K_*(\varphi) + K_*(\psi)$. Hence, $K_*(\eta)$ is an isomorphism, as $K_0(\eta) = K_0(\varphi)$ and $K_1(\eta) = K_1(\psi)$. If $k \in \mathbb{N}$, an iterative use of the naturality of the Pimsner-Voiculescu sequence and the Five Lemma yields that $K_*(\eta \rtimes \mathbb{Z}^k)$ is an isomorphism. If $k = \infty$, it follows from continuity of $K$-theory that $K_*(\eta \rtimes \mathbb{Z}^\infty)$ is an isomorphism, since $K_*(\eta \rtimes \mathbb{Z}^k)$ is an isomorphism for every $k\in \mathbb{N}$.
Let $u \colon \mathbb{Z}^k \to \mathcal U(\mathcal{M}((A \oplus B) \rtimes_{\alpha \oplus \beta} \mathbb{Z}^k))$ and $\tilde{u} \colon \mathbb{Z}^k \to \mathcal U(\mathcal{M}(M_2(C)\rtimes_\delta \mathbb{Z}^k))$ denote the canonical representations, respectively. The covariant pair given by the natural inclusion $A \into 1_{\mathcal{M}(A)}((A \oplus B) \rtimes_{\alpha \oplus \beta} \mathbb{Z}^k) 1_{\mathcal{M}(A)}$ and the unitary representation $1_{\mathcal{M}(A)}u_h$, $h \in \mathbb{Z}^k$, gives rise to a $*$-homomorphism $\Phi_A \colon A \rtimes_\alpha \mathbb{Z}^k \to (A \oplus B) \rtimes_{\alpha \oplus \beta} \mathbb{Z}^k$. Similarly, we define $\Phi_B \colon B \rtimes_\beta \mathbb{Z}^k \to (A \oplus B) \rtimes_{\alpha \oplus \beta} \mathbb{Z}^k$. It is easy to check that $\Phi_A$ and $\Phi_B$ are orthogonal and \[ \Phi_A \oplus \Phi_B \colon A \rtimes_\alpha \mathbb{Z}^k \oplus B \rtimes_\beta \mathbb{Z}^k \to (A \oplus B) \rtimes_{\alpha \oplus \beta} \mathbb{Z}^k \] is an isomorphism. Moreover, let $\tilde{\varphi} \colon A \rtimes_\alpha \mathbb{Z}^k \to M_2(C) \rtimes_\delta \mathbb{Z}^k$ be the $*$-homomorphism induced by the covariant pair in $\mathcal{M}(e_{11}(M_2(C)\rtimes_\delta \mathbb{Z}^k) e_{11})$ given by the composition of the embedding $C \into M_2(C)$ into the upper left corner with $\varphi$ and the unitary representation $e_{11}\tilde{u}_h$, $h \in \mathbb{Z}^k$. Define $\tilde{\psi} \colon B \rtimes_\beta \mathbb{Z}^k \to M_2(C) \rtimes_\delta \mathbb{Z}^k$ analogously by considering the embedding $C \into M_2(C)$ into the lower right corner. By construction, the following diagram commutes \[ \xymatrix{ A \rtimes_\alpha \mathbb{Z}^k \oplus B \rtimes_\beta \mathbb{Z}^k \ar[drr]_{\tilde{\varphi} \oplus \tilde{\psi}} \ar[rr]_\cong^{\Phi_A \oplus \Phi_B} & & (A \oplus B) \rtimes_{\alpha \oplus \beta} \mathbb{Z}^k \ar[d]^{\eta \rtimes \mathbb{Z}^k} \\ & & M_2(C) \rtimes_\delta \mathbb{Z}^k }\] which shows that \[ K_*(\tilde{\varphi}) \oplus K_*(\tilde{\psi})\colon K_*(A \rtimes_\alpha \mathbb{Z}^k) \oplus K_*(B \rtimes_\beta \mathbb{Z}^k) \to K_*(M_2(C) \rtimes_{\delta} \mathbb{Z}^k) \] is an isomorphism.
Let $\kappa'\colon M_2(C) \rtimes_{\delta} \mathbb{Z}^k \stackrel{\cong}{\longrightarrow} M_2(C) \rtimes_{\gamma^{(2)}} \mathbb{Z}^k$ denote the isomorphism induced by the $\gamma^{(2)}$-cocycle $w$. Then the following diagram commutes and the proof is complete: \[ \xymatrix{
A \rtimes_\alpha \mathbb{Z}^k \ar[d]_{\tilde{\varphi}} \ar[r]^{\varphi \rtimes \mathbb{Z}^k} & C \rtimes_\gamma \mathbb{Z}^k \ar[rd]^{\op{id}_{C \rtimes_\gamma \mathbb{Z}^k} \oplus 0} \\
M_2(C) \rtimes_\delta \mathbb{Z}^k \ar[r]^{\kappa'}_\cong & M_2(C) \rtimes_{\gamma^{(2)}} \mathbb{Z}^k \ar[r]^\cong & M_2(C \rtimes_\gamma \mathbb{Z}^k) \\
B \rtimes_\beta \mathbb{Z}^k \ar[u]^{\tilde{\psi}} \ar[r]^*!/_0.5mm/{\labelstyle \psi \rtimes \mathbb{Z}^k} & C \rtimes_{\tilde{\gamma}} \mathbb{Z}^k \ar[r]^\kappa_\cong & C \rtimes_\gamma \mathbb{Z}^k \ar[u]_*!/_1.5mm/{\labelstyle 0 \oplus \op{id}_{C\rtimes_\gamma \mathbb{Z}^k}} } \] \end{proof}
\begin{rem} Proposition~\ref{prop:splitted K-theory} is true in a more general setting. In fact, $\mathbb{Z}^k$ could be replaced by any locally compact group $G$ with the following property: If $\varphi\colon A \to B$ is an $\alpha$ - $\beta$-equivariant $*$-homomorphism such that $K_*(\varphi)$ is an isomorphism, then $K_*(\varphi \rtimes G)$ is an isomorphism as well. \end{rem}
\begin{rem}\label{rem:dilations} Note that $K_1(M_{d^\infty}) = 0$ and the natural embedding $j\colon M_{d^\infty} \into \mathcal{B}_S$ induces an isomorphism between the corresponding $K_0$-groups. The invariance of $M_{d^\infty} \subset \mathcal{B}_S$ under the $H^+$-action $\alpha$, see Remark~\ref{rem:BD subalg B_S}, yields a non-degenerate $*$-homomorphism $j_\infty\colon M_{d^\infty,\infty} \to \mathcal{B}_{S,\infty}$ between the minimal automorphic dilations for $\alpha$, which is equivariant for the induced $H$-actions $\alpha_\infty$. From the concrete model of the minimal automorphic dilation as an inductive limit, see \cite{Lac}*{Proof of Theorem~2.1}, we conclude that $K_1(M_{d^\infty,\infty}) = 0$ and $K_0(j_\infty)$ is an isomorphism. Moreover, there is an isomorphism between $\mathcal{B}_S$ and $C(\Delta)\rtimes \mathbb{Z}$ that intertwines the actions of $H^+$, see Remark~\ref{rem:spec of D_S}. It then follows from \eqref{eq:full corner B_S} and \cite{Lac}*{Theorem~2.1} that $\alpha\colon H^+ \curvearrowright C(\Delta)\rtimes\mathbb{Z}$ dilates to $\widetilde{\alpha}\colon H \curvearrowright C_0(\Omega)\rtimes N$, where $\widetilde{\alpha}$ coincides with the $H$-action from Lemma~\ref{lem:torsion algebra Morita}. Consequently, there is an $\alpha_\infty$ - $\widetilde{\alpha}$-equivariant isomorphism $\mathcal{B}_{S,\infty} \stackrel{\cong}{\longrightarrow} C_0(\Omega)\rtimes N$. \end{rem}
As in Section~\ref{sec:comp with real dyn}, let $\iota_N\colon C_0(\mathbb{R}) \into C_0(\mathbb{R})\rtimes N$ denote the canonical embedding. Note that $\iota_N$ is non-degenerate and equivariant with respect to the $H$-actions $\beta$ (and also $\beta^{-1}$).
\begin{thm}\label{thm:decomposition of K-theory} The map \[ K_*(j \rtimes^e H^+) \oplus K_*(\iota_N \rtimes H)\colon K_*(M_{d^\infty} \rtimes^e_\alpha H^+) \oplus K_*(C_0(\mathbb{R}) \rtimes_{\beta} H) \to K_*(\mathcal{Q}_S), \] induced by the identifications $\mathcal{B}_S \rtimes^e_\alpha H^+ \cong \mathcal{Q}_S$ from \eqref{eq:B_S and Q_S as crossed products} and $(C_0(\mathbb{R}) \rtimes N) \rtimes_\beta H \cong \mathcal{Q}_S \otimes \mathcal{K}$ from Theorem~\ref{thm:duality}, is an isomorphism. \end{thm} \begin{proof} By combining Remark~\ref{rem:dilations} with Lemma~\ref{lem:torsion algebra Morita}, there exist an $H$-action $\widetilde\beta$ on $C_0(\mathbb{R}) \rtimes N$ that is exterior equivalent to $\beta^{-1}$, and a non-degenerate $\alpha_\infty$ - $\widetilde\beta$-equivariant $*$-homomorphism $\psi\colon M_{d^\infty,\infty}\to C_0(\mathbb{R}) \rtimes N$, namely the one coming from the composition \[ M_{d^\infty,\infty} \stackrel{j_\infty}{\longrightarrow} \mathcal{B}_{S,\infty} \stackrel{\cong}{\longrightarrow} C_0(\Omega) \rtimes N \stackrel{\cong}{\longrightarrow} C_0(\mathbb{R}) \rtimes N. \] Since $K_1(j_\infty) = 0$ and $K_0(j_\infty)$ is an isomorphism by Remark~\ref{rem:dilations}, the same also holds for $K_1(\psi)$ and $K_0(\psi)$, respectively. Now Proposition~\ref{prop:KtheoryA0} gives that $K_0(\iota_N)$ is trivial and $K_1(\iota_N)$ is an isomorphism. As $\psi$ and $\iota_N$ are non-degenerate, \begin{multline*} K_*(\kappa \circ (\psi \rtimes H)) \oplus K_*(\iota_N \rtimes H)\colon \\ K_*(M_{d^\infty,\infty} \rtimes_{\alpha_\infty} H) \oplus K_*(C_0(\mathbb{R}) \rtimes_{\beta^{-1}} H) \to K_*((C_0(\mathbb{R})\rtimes N)\rtimes_{\beta^{-1}} H) \end{multline*} is an isomorphism by Proposition~\ref{prop:splitted K-theory}, where $\kappa\colon(C_0(\mathbb{R})\rtimes N)\rtimes_{\widetilde\beta} H \stackrel{\cong}{\longrightarrow} (C_0(\mathbb{R})\rtimes N) \rtimes_{\beta^{-1}} H$ denotes the isomorphism induced by a fixed $\beta^{-1}$-cocycle defining $\widetilde{\beta}$. Since $K_*(j \rtimes^e H^+)$ corresponds to $K_*(j_\infty \rtimes H)$ under the isomorphisms induced by the minimal automorphic dilations, we also get that $K_*(j \rtimes^e H^+)$ corresponds to $K_*(\kappa \circ (\psi \rtimes H))$ under the isomorphism $K_*(\mathcal{B}_S \rtimes^e_\alpha H^+) \cong K_*((C_0(\mathbb{R}) \rtimes N) \rtimes_{\beta^{-1}} H)$. As $\mathcal{B}_S \rtimes^e_\alpha H^+ \cong \mathcal{Q}_S$ by \eqref{eq:B_S and Q_S as crossed products} and $(C_0(\mathbb{R})\rtimes N)\rtimes_{\beta^{-1}} H \cong (C_0(\mathbb{R})\rtimes N)\rtimes_\beta H \cong \mathcal{Q}_S \otimes \mathcal{K}$ by Theorem~\ref{thm:duality}, the conclusion follows. \end{proof}
We will now show that the two summands appearing in Theorem~\ref{thm:decomposition of K-theory} correspond to the torsion and the free part of $K_*(\mathcal{Q}_S)$, respectively.
\begin{prop}\label{prop:K-theory torsion free part} For $i=0,1$, $K_i(C_0(\mathbb{R}) \rtimes_{\beta} H)$ is the free abelian group in $2^{\lvert S \rvert-1}$ generators. \end{prop} \begin{proof} The result holds for any non-trivial subgroup $H$ of $\mathbb{Q}^\times_+$, and we prove it in this generality, not necessarily requiring $H$ to be generated by $S$. Suppose that $k$ is the (possibly infinite) rank of $H$. Let $\{ h_i : 1 \leq i \leq k \}$ be a minimal generating set for $H$. For $t\in [0,1]$ and $1 \leq i\leq k$, define $\tilde{\beta}_{h_i,t} \in \operatorname{Aut}(C_0(\mathbb{R}))$ by $\tilde{\beta}_{h_i,t}(f)(s) = f((th_i^{-1} + 1 - t)s)$. Note that $\tilde{\beta}_{h_i,t}$ is indeed an automorphism as $h_i > 0$. Since multiplication on $\mathbb{R}$ is commutative, we see that for each $t \in [0,1]$, $\left\lbrace \tilde{\beta}_{h_i,t} \right\rbrace_{1\leq i\leq k}$ defines an $H$-action. Let $\gamma\colon H \curvearrowright C_0([0,1],C_0(\mathbb{R}))$ be the action given by $\gamma_{h_i}(f)(t) = \tilde{\beta}_{h_i,t}(f(t))$. We have the following short exact sequence of $C^*$-algebras: \[ C_0((0,1],C_0(\mathbb{R}))\rtimes_\gamma H \into C_0([0,1],C_0(\mathbb{R}))\rtimes_\gamma H \stackrel{\operatorname{ev}_0\rtimes H}{\onto} C_0(\mathbb{R})\rtimes_{\op{id}} H \] The Pimsner-Voiculescu sequence shows that $K_*(C_0((0,1],C_0(\mathbb{R}))\rtimes_\gamma H) = 0$, where we also use continuity of $K$-theory if $k = \infty$. The six-term exact sequence corresponding to the above extension now yields that $K_*(\operatorname{ev}_0 \rtimes H)$ is an isomorphism. A similar argument shows that $K_*(\operatorname{ev}_1 \rtimes H)$ is an isomorphism. We therefore conclude that for $i=0,1$, \[K_i(C_0(\mathbb{R}) \rtimes_{\beta} H) \cong K_i(C_0(\mathbb{R}) \rtimes_{\op{id}} H) \cong K_i(C_0(\mathbb{R}) \otimes C^*(H)).\] This completes the proof as $K_i(C_0(\mathbb{R}) \otimes C^*(H))$ is the free abelian group in $2^{k-1}$ generators. \end{proof}
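For instance, if $\lvert S \rvert = 1$, then $C^*(H) \cong C(\mathbb{T})$ and $K_i(C_0(\mathbb{R}) \otimes C^*(H)) \cong K_{1-i}(C(\mathbb{T})) \cong \mathbb{Z}$, while for $\lvert S \rvert = 2$ we have $C^*(H) \cong C(\mathbb{T}^2)$ and $K_i(C_0(\mathbb{R}) \otimes C^*(H)) \cong \mathbb{Z}^2$, in accordance with the count $2^{\lvert S \rvert - 1}$.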
\begin{prop} \label{prop:UHF cross prod torsion K-th} $K_*(M_{d^\infty} \rtimes^e_\alpha H^+)$ is a torsion group, which is finite if $S$ is finite. \end{prop} \begin{proof} As in Remark \ref{rem:BD subalg B_S}, we think of $M_{d^\infty}$ as the inductive limit $(M_p(\mathbb{C}),\iota_{p,pq})_{p,q \in H^+}$ with $\iota_{p,pq}\colon M_p(\mathbb{C}) \to M_{pq}(\mathbb{C})$ given by $e_{i, j}^{(p)} \otimes 1 \mapsto \sum_{k=0}^{q-1}e_{i+pk, j+pk}^{(pq)} \otimes 1$. With this perspective, $\alpha$ satisfies $\alpha_q(e^{(p)}_{m,m}) = e^{(pq)}_{qm,qm}$ for all $p,q \in H^+$ and $0 \leq m \leq p-1$. From this, one concludes that for $q \in H^+$, $K_0(\alpha_q)$ is given by multiplication with $1/q$ on $K_0(M_{d^\infty}) \cong N$. Hence, for $p \in S$, there exists a Pimsner-Voiculescu type exact sequence, see \cite{Pas}*{Theorem~4.1} and also \cite{CunPV}*{Proof of Proposition~3.1}, \[ \xymatrix{ 0 \ar[r] & K_1(M_{d^\infty} \rtimes^e_{\alpha_p} \mathbb{N}) \ar[r] & N \ar[r]^{\frac{p-1}{p}} & N \ar[r] & K_0(M_{d^\infty} \rtimes^e_{\alpha_p} \mathbb{N}) \ar[r] & 0 } \] This shows that $K_1(M_{d^\infty} \rtimes^e_{\alpha_p} \mathbb{N}) = 0$ and $K_0(M_{d^\infty} \rtimes^e_{\alpha_p} \mathbb{N}) \cong N / (p-1)N$. In particular, $K_*(M_{d^\infty} \rtimes^e_{\alpha_p} \mathbb{N})$ is a torsion group. If $S$ is finite, we can write $M_{d^\infty} \rtimes^e_\alpha H^+$ as an $\lvert S\rvert$-fold iterative crossed product by $\mathbb{N}$ and apply the Pimsner-Voiculescu type sequence repeatedly to get that $K_*(M_{d^\infty} \rtimes^e_\alpha H^+)$ is a torsion group. If $S$ is infinite, we may use continuity of $K$-theory to conclude the claim from the case of finite $S$.
Finiteness of $S$ implies finiteness of $K_*(M_{d^\infty} \rtimes^e_\alpha H^+)$ because $N/(p-1)N$ is finite for all $p \in S$, which follows from the forthcoming Lemma~\ref{lem:N/gN}. \end{proof}
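For instance, for $S = \{3\}$ we have $H^+ = \{3^k : k \in \mathbb{N}\}$ and $N = \mathbb{Z}[\tfrac{1}{3}]$, so the exact sequence above gives $K_0(M_{d^\infty} \rtimes^e_\alpha H^+) \cong N/2N \cong \mathbb{Z}/2\mathbb{Z}$ and $K_1(M_{d^\infty} \rtimes^e_\alpha H^+) = 0$, in accordance with $K_*(\mathcal{O}_3)$ and the identification of $M_{d^\infty} \rtimes^e_\alpha H^+$ with $\mathcal{A}_S \cong \mathcal{O}_3$ obtained in Section~\ref{sec:torsion part} below.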
Using Propositions~\ref{prop:K-theory torsion free part} and~\ref{prop:UHF cross prod torsion K-th}, we record the following immediate consequence of the decomposition of $K_*(\mathcal{Q}_S)$ given in Theorem~\ref{thm:decomposition of K-theory}.
\begin{cor}\label{cor:torsion and free part K-theory} $K_*(\mathcal{Q}_S)$ decomposes as a direct sum of a free abelian group and a torsion group. More precisely, $K_*(j \rtimes^e H^+)$ is a split-injection onto the torsion subgroup of $K_*(\mathcal{Q}_S)$, and $K_*(\iota_N \rtimes H)$ is a split-injection onto the torsion-free part. \end{cor}
\section{The torsion subalgebra}\label{sec:torsion part} Within this section we analyze the structure of $M_{d^\infty} \rtimes^e_\alpha H^+$ and its role relative to $\mathcal{Q}_S$ more closely. First, we show that the inclusion $M_{d^\infty} \into \mathcal{B}_S$ is equivariantly sequentially split with respect to the $H^+$-actions $\alpha$ in the sense of \cite{BarSza1}*{Remark~3.17}, see Proposition~\ref{prop:UHF into BD is eq seq split}. According to \cite{BarSza1}, we thus get that $M_{d^\infty} \rtimes^e_\alpha H^+$ shares many structural properties with $\mathcal{B}_S \rtimes^e_\alpha H^+ \cong \mathcal{Q}_S$. Most importantly, $M_{d^\infty} \rtimes^e_\alpha H^+$ is a unital UCT Kirchberg algebra, see Corollary~\ref{cor:UHF into BD seq split cr pr}. By simplicity of $M_{d^\infty} \rtimes^e_\alpha H^+$, we conclude that this $C^*$-algebra is in fact isomorphic to the natural subalgebra $\mathcal{A}_S$ of $\mathcal{Q}_S$ that is generated by all the isometries $u^ms_p$ with $p \in S$ and $0 \leq m \leq p-1$, see Corollary~\ref{cor:subalgebra for torsion part}. By Corollary \ref{cor:torsion and free part K-theory}, it thus follows that the canonical inclusion $\mathcal{A}_S \into \mathcal{Q}_S$ induces a split-injection onto the torsion subgroup of $K_*(\mathcal{Q}_S)$. Due to this remarkable feature, we call $\mathcal{A}_S$ the \emph{torsion subalgebra} of $\mathcal{Q}_S$.
We then present two additional interesting perspectives on the torsion subalgebra $\mathcal{A}_S$. Firstly, $\mathcal{A}_S$ can be described as the boundary quotient of the right LCM subsemigroup $U = \{ (m,p) : p \in H^+, 0 \leq m \leq p-1\}$ of $\mathbb{N} \rtimes H^+$ in the sense of \cite{BRRW}, see Proposition~\ref{prop:A_S as BQ of U}. This yields a commutative diagram which might be of independent interest, see Remark~\ref{rem:A_S as BQ of U}.
Secondly, the boundary quotient perspective allows us to identify $\mathcal{A}_S$ for $k:= \lvert S \rvert < \infty$ with the $C^*$-algebra of the $k$-graph $\Lambda_{S,\theta}$ consisting of a single vertex with $p$ loops of color $p$ for every $p \in S$, see Corollary~\ref{cor:tor subalgebra via k-graphs}. Quite intriguingly, $\Lambda_{S,\theta}$ differs from the canonical $k$-graph model $\Lambda_{S,\sigma}$ for $\bigotimes_{p \in S} \mathcal{O}_p$ only with respect to its factorization rules, see Remark~\ref{rem:Lambda_S flip}. In fact, the corresponding $C^*$-algebras coincide for $\lvert S \rvert \leq 2$, see Proposition~\ref{prop:A_S for |S|=2}. After obtaining these intermediate results, we were glad to learn from Aidan Sims that, in view of Conjecture~\ref{conj:k-graph}, it is reasonable to expect that the results for $\lvert S \rvert \leq 2$ already display the general form, i.e.\ that $\mathcal{A}_S$ is always isomorphic to $\bigotimes_{p \in S} \mathcal{O}_p$.
\begin{prop}\label{prop:UHF into BD is eq seq split} The embedding $M_{d^\infty} \into \mathcal{B}_S$ is $\alpha$-equivariantly sequentially split. \end{prop} \begin{proof} Let $\iota\colon M_{d^\infty} \into \prod_{p \in H^+}M_{d^\infty} \big / \bigoplus_{p \in H^+} M_{d^\infty}$ denote the canonical inclusion as constant sequences and $\bar{\alpha}$ the induced action of $H^+$ on $\prod_{p \in H^+}M_{d^\infty} \big / \bigoplus_{p \in H^+} M_{d^\infty}$ given by componentwise application of $\alpha_h$ for $h \in H^+$. Clearly, $\prod_{p \in H^+}M_{d^\infty} \big / \bigoplus_{p \in H^+} M_{d^\infty}$ is canonically isomorphic to the sequence algebra of $M_{d^\infty}$, $\prod_{n \in \mathbb{N}}M_{d^\infty} \big / \bigoplus_{n \in \mathbb{N}} M_{d^\infty}$. In particular, this isomorphism intertwines $\bar{\alpha}$ and the natural $H^+$-action on the sequence algebra induced by $\alpha$. We therefore need to construct an $\alpha$ - $\bar{\alpha}$-equivariant $*$-homomorphism $\chi\colon \mathcal{B}_S \to \prod_{p \in H^+}M_{d^\infty} \big / \bigoplus_{p \in H^+} M_{d^\infty}$ making the following diagram commute: \begin{equation}\label{dia:UHF into BD is eq seq split} \begin{gathered} \begin{xy} \xymatrix{ M_{d^\infty} \ar@<-1ex>@{^{(}->}[dr] \ar[rr]^(0.35){\iota}&&\prod\limits_{p \in H^+}M_{d^\infty} \big / \bigoplus\limits_{p \in H^+} M_{d^\infty}\\ &\mathcal{B}_S \ar@<-1ex>@{-->}_(0.35){\chi}[ur]} \end{xy} \end{gathered} \end{equation} Recall the inductive system $(M_p(\mathbb{C}) \otimes C^*(\mathbb{Z}),\iota_{p,pq})_{p,q \in H^+}$ from Remark~\ref{rem:BD subalg B_S} whose inductive limit is isomorphic to $\mathcal{B}_S$. The canonical subalgebra $M_p(\mathbb{C}) \subset M_p(\mathbb{C}) \otimes C^*(\mathbb{Z})$ can in this way be considered as a subalgebra of $M_{d^\infty} \subset \mathcal{B}_S$. For each $p \in H^+$, the map $\chi_p\colon M_p(\mathbb{C}) \otimes C^*(\mathbb{Z}) \to M_p(\mathbb{C})$ determined by $a \otimes u^k \mapsto a$ for $a \in M_p(\mathbb{C})$ and $k \in \mathbb{Z}$ is a $*$-homomorphism. Thus, the family $(\chi_p)_{p \in H^+}$ gives rise to a $*$-homomorphism \[ \begin{array}{c} \chi'\colon \prod\limits_{p \in H^+} M_p(\mathbb{C}) \otimes C^*(\mathbb{Z}) \to \prod\limits_{p \in H^+} M_{d^\infty}.\end{array} \] Clearly, $\chi'\bigl(\bigoplus_{p \in H^+} M_p(\mathbb{C}) \otimes C^*(\mathbb{Z})\bigr) \subset \bigoplus_{p \in H^+} M_{d^\infty}$, so $\chi'$ induces a map \[ \begin{array}{c} \chi\colon \prod\limits_{p \in H^+} M_p(\mathbb{C}) \otimes C^*(\mathbb{Z}) \bigr/ \bigl(\bigoplus\limits_{p \in H^+} M_p(\mathbb{C}) \otimes C^*(\mathbb{Z})\bigr) \to \prod\limits_{p \in H^+}M_{d^\infty} \big / \bigoplus\limits_{p \in H^+} M_{d^\infty}. \end{array} \] Using the inductive limit description of $\mathcal{B}_S$ from Remark~\ref{rem:BD subalg B_S}, we can think of $\mathcal{B}_S$ as a subalgebra of $\prod_{p \in H^+} M_p(\mathbb{C}) \otimes C^*(\mathbb{Z}) \bigr/ \bigl(\bigoplus_{p \in H^+} M_p(\mathbb{C}) \otimes C^*(\mathbb{Z})\bigr)$. Moreover, because of the concrete realization of $M_{d^\infty}$ as the inductive limit associated with $(M_p(\mathbb{C}),\iota_{p,pq})_{p,q \in H^+}$, we have that $\chi$ restricts to the canonical embedding $\iota$ on $M_{d^\infty}$. Hence, \eqref{dia:UHF into BD is eq seq split} is commutative; in other words, ignoring equivariance for the moment, the embedding $M_{d^\infty} \into \mathcal{B}_S$ is sequentially split as an ordinary $*$-homomorphism.
However, we claim that we also have a commutative diagram \begin{equation}\label{dia:UHF into BD equivariance} \begin{gathered} \begin{xy} \xymatrix{ \prod\limits_{p \in H^+}M_{d^\infty} \big / \bigoplus\limits_{p \in H^+} M_{d^\infty} \ar^{\bar{\alpha}_p}[r] & \prod\limits_{p \in H^+}M_{d^\infty} \big / \bigoplus\limits_{p \in H^+} M_{d^\infty} \\ \mathcal{B}_S \ar[u]^(0.4){\chi} \ar_{\alpha_p}[r] & \mathcal{B}_S \ar[u]_(0.4){\chi} } \end{xy} \end{gathered} \end{equation} for each $p \in H^+$. Let us expand this diagram for fixed $p$ and arbitrary $q \in H^+$ to: \begin{equation}\label{dia:UHF into BD equivariance zoom} \begin{gathered} \scalebox{0.8}{\begin{xy} \xymatrix{ \prod\limits_{p \in H^+}M_{d^\infty} \big / \bigoplus\limits_{p \in H^+} M_{d^\infty} \ar^{\bar{\alpha}_p}[rrr] &&& \prod\limits_{p \in H^+}M_{d^\infty} \big / \bigoplus\limits_{p \in H^+} M_{d^\infty} \\ &M_q(\mathbb{C}) \ar@{_{(}->}[ul] \ar^{\alpha_p}[r] & M_{pq}(\mathbb{C}) \ar@{^{(}->}[ur] \\ &M_q(\mathbb{C}) \otimes C^*(\mathbb{Z}) \ar@{^{(}->}[dl] \ar^{\chi_q}[u] \ar_*!/_0.5mm/{\labelstyle \alpha_p}[r] & M_{pq}(\mathbb{C}) \otimes C^*(\mathbb{Z}) \ar_{\chi_{pq}}[u] \ar@{_{(}->}[dr]\\ \mathcal{B}_S \ar[uuu]^{\chi} \ar_*!/_0.5mm/{\labelstyle \alpha_p}[rrr] &&& \mathcal{B}_S \ar[uuu]_{\chi} } \end{xy}} \end{gathered} \end{equation} It is clear that the four outer squares are commutative, so we only need to check the centre. For every $0 \leq i,j \leq q-1$, we get \[\chi_{pq} \circ \alpha_p (e_{i, j}^{(q)} \otimes u) = \chi_{pq}(e_{pi, pj}^{(pq)} \otimes u^p) = e_{pi, pj}^{(pq)} = \alpha_p \circ \chi_q (e_{i, j}^{(q)} \otimes u)\] and therefore $\chi_{pq} \circ \alpha_p = \alpha_p \circ \chi_q$ on $M_q(\mathbb{C}) \otimes C^*(\mathbb{Z})$. This establishes the claim as we have $M_{d^\infty} = \varinjlim(M_q(\mathbb{C}),q\in H^+)$ and $\mathcal{B}_S = \varinjlim(M_q(\mathbb{C}) \otimes C^*(\mathbb{Z}),q\in H^+)$. \end{proof}
\begin{cor}\label{cor:UHF into BD seq split cr pr} The inclusion $M_{d^\infty} \rtimes^e_\alpha H^+ \to \mathcal{B}_S \rtimes^e_\alpha H^+$ is sequentially split. In particular, $M_{d^\infty} \rtimes^e_\alpha H^+$ is a UCT Kirchberg algebra. \end{cor} \begin{proof} By Proposition~\ref{prop:UHF into BD is eq seq split}, we know that $M_{d^\infty} \into \mathcal{B}_S$ is $\alpha$-equivariantly sequentially split. As this inclusion preserves the units, we can use the universal property of the semigroup crossed products $M_{d^\infty} \rtimes^e_\alpha H^+$ and $\mathcal{B}_S \rtimes^e_\alpha H^+$ to obtain a commutative diagram of $*$-homomorphisms \[ \xymatrix{ M_{d^\infty} \rtimes^e_\alpha H^+ \ar[dr] \ar[rr]^(0.4){\iota \rtimes^e H^+} && \left(\prod\limits_{p \in H^+} M_{d^\infty} \big / \bigoplus\limits_{p \in H^+} M_{d^\infty} \right)\rtimes^e_{\bar{\alpha}} H^+\\ &\mathcal{B}_S \rtimes^e_\alpha H^+ \ar[ur] } \] Again by the universal property of semigroup crossed products, there is a natural $*$-homomorphism \[\begin{array}{c} \psi\colon\left(\prod\limits_{p \in H^+} M_{d^\infty} \big / \bigoplus\limits_{p \in H^+} M_{d^\infty} \right)\rtimes^e_{\bar{\alpha}} H^+ \to \prod\limits_{p \in H^+} M_{d^\infty} \rtimes^e_\alpha H^+ \big /\bigoplus\limits_{p \in H^+} M_{d^\infty} \rtimes^e_\alpha H^+ \end{array}\] such that $\psi \circ (\iota \rtimes^e H^+)$ coincides with the standard embedding. This shows that the inclusion $M_{d^\infty} \rtimes^e_\alpha H^+ \to \mathcal{B}_S \rtimes^e_\alpha H^+$ is sequentially split. It now follows from \cite{BarSza1}*{Theorem~2.9~(1)+(8)} that $M_{d^\infty} \rtimes^e_\alpha H^+$ is a Kirchberg algebra. Moreover, $M_{d^\infty} \rtimes^e_\alpha H^+$ satisfies the UCT by \cite{BarSza1}*{Theorem~2.10}. We note that this part also follows from standard techniques combined with the central result of \cite{Lac}. \end{proof}
We will now see that simplicity enables us to identify $M_{d^\infty} \rtimes^e_\alpha H^+$ with the following natural subalgebra of $\mathcal{Q}_S$, whose name is justified by the next result.
\begin{defn}\label{def:torsion subalgebra} The \emph{torsion subalgebra} $\mathcal{A}_S$ of $\mathcal{Q}_S$ is the $C^*$-subalgebra of $\mathcal{Q}_S$ generated by $\{u^ms_p : p \in S, 0 \leq m \leq p-1\}$. \end{defn}
Note that for $S=\{p\}$, the subalgebra $\mathcal{A}_S$ is canonically isomorphic to $\mathcal{O}_p$.
\begin{cor}\label{cor:subalgebra for torsion part}\label{cor:torsion subalgebra justification} The isomorphism $\mathcal{B}_S \rtimes^e_\alpha H^+ \stackrel{\cong}{\longrightarrow} \mathcal{Q}_S$ from \eqref{eq:B_S and Q_S as crossed products} restricts to an isomorphism $M_{d^\infty} \rtimes^e_\alpha H^+ \stackrel{\cong}{\longrightarrow} \mathcal{A}_S$. In particular, the canonical inclusion $\mathcal{A}_S \into \mathcal{Q}_S$ induces a split-injection onto the torsion subgroup of $K_*(\mathcal{Q}_S)$. \end{cor} \begin{proof} $\mathcal{A}_S$ contains the copy of $M_{d^\infty} \subset \mathcal{B}_S$ described in Remark~\ref{rem:BD subalg B_S}. Together with $s_p, p \in S$, which are also contained in $\mathcal{A}_S$, this defines a covariant representation of $(M_{d^\infty},\alpha)$ inside $\mathcal{A}_S$. The resulting $*$-homomorphism $M_{d^\infty} \rtimes^e_\alpha H^+ \to \mathcal{A}_S$ is surjective. By Corollary~\ref{cor:UHF into BD seq split cr pr}, $M_{d^\infty} \rtimes^e_\alpha H^+$ is simple, so this map is an isomorphism. The second claim is due to Corollary~\ref{cor:torsion and free part K-theory}. \end{proof}
Let us continue with the representation of $\mathcal{A}_S$ as a boundary quotient. When $\lvert S\rvert=k<\infty$, this will lead us to a $k$-graph model for $\mathcal{A}_S$ that is closely related to the canonical $k$-graph representation for $\bigotimes_{p \in S}\mathcal{O}_p$, see Remark~\ref{rem:Lambda_S flip}. Consider the subsemigroup $U:= \{ (m,h) \in \mathbb{N} \rtimes H^+ : 0 \leq m \leq h-1\}$ of $\mathbb{N} \rtimes H^+$. Observe that $U$ is a right LCM semigroup because \begin{equation}\label{eq:right LCM subsemigroup} (m,h)U \cap (m',h')U = \bigl((m,h)(\mathbb{N} \rtimes H^+) \cap (m',h')(\mathbb{N} \rtimes H^+)\bigr) \cap U \end{equation} for all $(m,h),(m',h') \in U$, and $\mathbb{N} \rtimes H^+$ is right LCM. We note that $U$ can be used to describe $\mathbb{N} \rtimes H^+$ as a Zappa-Sz\'{e}p product $U \bowtie \mathbb{N}$, where action and restriction are given in terms of the generator $1 \in \mathbb{N}$ and $(m,h) \in U$ by \[1.(m,h) = \begin{cases} (m+1,h) &\text{if } m<h-1, \\ (0,h) &\text{if } m=h-1,\end{cases} \quad \quad 1\rvert_{(m,h)} = \begin{cases} 0 &\text{if } m<h-1, \text{ and}\\ 1 &\text{if } m=h-1.\end{cases}\] In the case of $H^+=\mathbb{N}^\times$ this has been discussed in detail in \cite{BRRW}*{Subsection~3.2} and the very same arguments apply for the cases we consider here.
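For instance, for $h = 2$ the generator $1 \in \mathbb{N}$ acts by $1.(0,2) = (1,2)$ with restriction $1\rvert_{(0,2)} = 0$, and by $1.(1,2) = (0,2)$ with restriction $1\rvert_{(1,2)} = 1$; in other words, the $\mathbb{N}$-part acts on the first coordinates by addition with carry, in line with the odometer picture behind the $a$-adic numbers.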
\begin{prop}\label{prop:A_S as BQ of U} $\mathcal{A}_S$ is canonically isomorphic to the boundary quotient $\mathcal{Q}(U)$. \end{prop} \begin{proof} Recall that $\mathcal{Q}(U)$ is the quotient of the full semigroup $C^*$-algebra $C^*(U)$ by relation \eqref{eq:BQ}. In particular, it is generated as a $C^*$-algebra by a representation $v$ of $U$ by isometries whose range projections are denoted $v^{\phantom{*}}_{(m,h)}v_{(m,h)}^*= e^{\phantom{*}}_{(m,h)U}$ for $(m,h) \in U$.
For every $h \in H^+$, we get a family of matrix units $(v^{\phantom{*}}_{(m,h)}v_{(n,h)}^*)_{0 \leq m,n \leq h-1}$ because \[\begin{array}{l} v_{(n,h)}^*v^{\phantom{*}}_{(m,h)} = v_{(n,h)}^*e^{\phantom{*}}_{(n,h)U \cap (m,h)U}v_{(m,h)} = \delta_{m,n} \quad \text{and} \quad \sum\limits_{m=0}^{h-1} e_{(m,h)U} = 1 \end{array}\] as $\{(m,h) : 0 \leq m \leq h-1\}$ is an accurate foundation set for $U$. That is to say that, for each $u \in U$, there is $0 \leq m \leq h-1$ such that $uU \cap (m,h)U \neq \emptyset$, and $(m,h)U \cap (n,h)U = \emptyset$ unless $m=n$, see \cite{bsBQforADS} for further details. Since \[\begin{array}{lcl} v^{\phantom{*}}_{(m,h)}v_{(n,h)}^* &=& v^{\phantom{*}}_{(m,h)} \bigl(\sum\limits_{k=0}^{h'-1}e^{\phantom{*}}_{(k,h')U}\bigr)v_{(n,h)}^*\\ &=& \sum\limits_{k=0}^{h'-1}v^{\phantom{*}}_{(m+hk,hh')}v_{(n+hk,hh')}^* \end{array}\] for each $h' \in H^+$, we see that $C^*(\{v^{\phantom{*}}_{(m,h)}v_{(n,h)}^* : h \in H^+, 0 \leq m,n \leq h-1\}) \subset \mathcal{Q}(U)$ is isomorphic to $M_{d^\infty}$. In fact, we get a covariant representation for $(M_{d^\infty},H^+,\alpha)$ as \[v^{\phantom{*}}_{(0,p)}v^{\phantom{*}}_{(m,h)}v_{(n,h)}^*v_{(0,p)}^* = v^{\phantom{*}}_{(pm,ph)}v_{(pn,ph)}^*.\] Thus we get a $*$-homomorphism $\varphi\colon \mathcal{A}_S \cong M_{d^\infty} \rtimes^e_\alpha H^+ \to \mathcal{Q}(U)$ given by $u^ms_h \mapsto v_{(m,h)}$, see Corollary~\ref{cor:subalgebra for torsion part}. The map is surjective, and due to Corollary~\ref{cor:UHF into BD seq split cr pr}, the domain is simple so that $\varphi$ is an isomorphism. \end{proof}
\begin{rem}\label{rem:A_S as BQ of U} Conceptually, it seems that there is more to Proposition~\ref{prop:A_S as BQ of U} than the proof entails: There is a commutative diagram \begin{equation}\label{eq:right LCM inclusion diagram} \begin{gathered} \xymatrix{ C^*(U) \ar^(0.4){\iota}@{^{(}->}[r] \ar@{->>}_{\pi_U}[d] & C^*(\mathbb{N} \rtimes H^+) \ar@{->>}^{\pi_{\mathbb{N} \rtimes H^+}}[d] \\ \mathcal{Q}(U) \ar@{^{(}->}_(0.4){\varphi^{-1}}[r] & \mathcal{Q}(\mathbb{N} \rtimes H^+) } \end{gathered} \end{equation} with $\iota$ induced by $U \subset \mathbb{N} \rtimes H^+$ and $\varphi$ as in the proof of Proposition~\ref{prop:A_S as BQ of U}. The fact that $\iota$ is an injective $*$-homomorphism follows from \cite{BLS2}*{Proposition~3.6}: $N \rtimes H$ is amenable and hence $C^*(\mathbb{N} \rtimes H^+) \cong C^*_r(\mathbb{N} \rtimes H^+)$, see \cite{BLS1}*{Example~6.3}, and similarly $C^*(U) \cong C^*_r(U)$. Note that the bottom row of \eqref{eq:right LCM inclusion diagram} is given by $\mathcal{A}_S \into \mathcal{Q}_S$, see Proposition~\ref{prop:Q_S as BQ} and Proposition~\ref{prop:A_S as BQ of U}. \end{rem}
By Corollary~\ref{cor:torsion subalgebra justification} and Proposition~\ref{prop:A_S as BQ of U}, the torsion part of the $K$-theory of the boundary quotient of $\mathbb{N} \rtimes H^+$ arises from the boundary quotient of the distinguished submonoid $U$, which in fact sits inside $\mathcal{Q}(\mathbb{N} \rtimes H^+)$ in the natural way.
For the remainder of this section, we will assume that $S$ is finite with cardinality $k$. This restriction is necessary in order to derive a $k$-graph model for $\mathcal{A}_S$, which we obtain via the boundary quotient representation of $\mathcal{A}_S$. Note that for $p,q \in \mathbb{N}^\times$ and $(m,n) \in \{0,\dots,p-1\} \times \{0,\dots,q-1\}$, there is a unique pair $(n',m') \in \{0,\dots,q-1\} \times \{0,\dots,p-1\}$ such that $m+pn = n'+qm'$. In other words, the map \[\theta_{p,q}\colon \{0,\dots,p-1\} \times \{0,\dots,q-1\} \to \{0,\dots,q-1\} \times \{0,\dots,p-1\}\] with $ (m,n) \mapsto (n',m')$ determined by $n'+qm' = m+pn$ is bijective.
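For instance, $\theta_{2,3}(1,2) = (2,1)$, since $1 + 2\cdot 2 = 5 = 2 + 3\cdot 1$, and $\theta_{2,3}(0,1) = (2,0)$, since $0 + 2\cdot 1 = 2 = 2 + 3\cdot 0$.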
\begin{rem}\label{rem:k-graph from S} For each $p \in S$, we can consider the $1$-graph given by a single vertex with $p$ loops $(m,p), 0 \leq m \leq p-1$. If we think of the collection of these $1$-graphs as the skeleton of a $k$-graph, i.e.\ the set of all edges of length at most $1$, where the vertices for different $p$ are identified, then the maps $\theta_{p,q}$ satisfy condition~(2.8) in \cite{FS}*{Remark~2.3}, and hence define a row-finite $k$-graph $\Lambda_{S,\theta}$. Indeed, this is obvious for $k=2$. For $k \geq 3$, let $p,q,r \in S$ be pairwise distinct elements and fix $0 \leq m_t \leq t-1$ for $t = p,q,r$. We compute \[\begin{array}{lclcl} m_p+p(m_q+qm_r) &=& m_p+p(m^{(1)}_r+rm^{(1)}_q) &=& m^{(2)}_r+r(m^{(1)}_p+pm^{(1)}_q)\\ &=& m^{(2)}_r+r(m^{(2)}_q+qm^{(2)}_p) &=& m^{(3)}_q+q(m^{(3)}_r+rm^{(2)}_p)\\ &=& m^{(3)}_q+q(m^{(3)}_p+pm^{(4)}_r) &=& m^{(4)}_p+p(m^{(4)}_q+qm^{(4)}_r), \end{array}\] where $0 \leq m^{(i)}_t \leq t-1$ for $t=p,q,r$ and $i=1,\ldots,4$ are uniquely determined by the $\theta_{s,t}$ for the respective values of $s$ and $t$. The bijection from (2.8) in \cite{FS}*{Remark~2.3} now maps $((m_p,p),(m_q,q),(m_r,r))$ to $((m^{(4)}_p,p),(m^{(4)}_q,q),(m^{(4)}_r,r))$. It is easy to check that $m^{(4)}_t = m_t$ for $t=p,q,r$, which shows that condition~(2.8) in \cite{FS}*{Remark~2.3} is valid. Applying \cite{KP}*{Definition 1.5} to the case of $\Lambda_{S,\theta}$, we see that $C^*(\Lambda_{S,\theta})$ is the universal $C^*$-algebra generated by isometries $(t_{(m,p)})_{p \in S, 0 \leq m \leq p-1}$ subject to the relations: \[\begin{array}{c} \textnormal{(i)} \ t_{(m,p)}t_{(n,q)} = t_{(n',q)}t_{(m',p)} \text{ if } m+pn = n'+qm' \quad \text{and} \quad\textnormal{(ii)} \ \sum\limits_{m=0}^{p-1} t^{\phantom{*}}_{(m,p)}t_{(m,p)}^* = 1 \end{array}\] for all $p,q \in S$. \end{rem}
\begin{cor}\label{cor:tor subalgebra via k-graphs} $\mathcal{A}_S$ is isomorphic to $C^*(\Lambda_{S,\theta})$. \end{cor} \begin{proof} We will work with $\mathcal{Q}(U)$ in place of $\mathcal{A}_S$ and invoke Proposition~\ref{prop:A_S as BQ of U}. Condition~(i) guarantees that $(m,p) \mapsto t_{(m,p)}$ yields a representation of $U$ by isometries as $U$ is generated by $(m,p)$ with $p \in S, 0 \leq m \leq p-1$, and $(m,p)(n,q) = (m+pn,pq) = (n',q)(m',p)$. (ii) holds for arbitrary $p \in H^+$ if we write $t_{(m,p)}$ for the product $t_{(m_1,p_1)}\cdots t_{(m_k,p_k)}$ where $(m_1,p_1)\cdots (m_k,p_k) = (m,p) \in U$ with $p_i \in S$. It is then straightforward to verify that we get a $*$-homomorphism $C^*(U) \to C^*(\Lambda_{S,\theta})$. Now let $F \subset U$ be a foundation set and set $h := \text{lcm}(\{h' : (m',h') \in F \text{ for some } 0 \leq m' \leq h'-1\})$. Then $F_a := \{ (m,h) : 0 \leq m \leq h-1\}$ is a foundation set that refines $F$. Therefore, it suffices to establish \eqref{eq:BQ} for $F_a$ in place of $F$. But as $F_a$ is accurate, \eqref{eq:BQ} takes the form $\sum_{m=0}^{h-1} t^{\phantom{*}}_{(m,h)}t_{(m,h)}^* = 1$, which follows from (ii) as explained in the proof of Proposition~\ref{prop:A_S as BQ of U}. Thus $v_{(m,p)} \mapsto t_{(m,p)}$ defines a surjective $*$-homomorphism $\mathcal{Q}(U) \to C^*(\Lambda_{S,\theta})$. By simplicity, see Corollary~\ref{cor:UHF into BD seq split cr pr} and Proposition~\ref{prop:A_S as BQ of U}, this map is also injective. \end{proof}
\begin{rem}\label{rem:Lambda_S flip} Similar to $\Lambda_{S,\theta}$, we can also consider the row-finite $k$-graph $\Lambda_{S,\sigma}$ with $\sigma_{p,q}$ being the flip, i.e.\ $\sigma_{p,q}(m,n) := (n,m)$. That is to say, we keep the skeleton of $\Lambda_{S,\theta}$, but replace $\theta$ by $\sigma$. In this case, it is easy to see that $C^*(\Lambda_{S,\sigma}) \cong \bigotimes_{p \in S} \mathcal{O}_p$. \end{rem}
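For instance, for $S = \{2,3\}$ the two factorization rules already differ on products of single edges: in $C^*(\Lambda_{S,\theta})$ relation (i) gives $t_{(0,2)}t_{(1,3)} = t_{(2,3)}t_{(0,2)}$ (as $0 + 2\cdot 1 = 2 = 2 + 3\cdot 0$), whereas in $C^*(\Lambda_{S,\sigma})$ the corresponding relation reads $t_{(0,2)}t_{(1,3)} = t_{(1,3)}t_{(0,2)}$.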
With regard to the $K$-theory of $\mathcal{Q}_S$, it is interesting to ask whether or not $C^*(\Lambda_{S,\theta})$ and $C^*(\Lambda_{S,\sigma})$ are isomorphic. At least for $\lvert S \rvert \leq 2$, the answer is known to be positive.
\begin{prop} \label{prop:A_S for |S|=2} Let $p,q \geq 2$ be two relatively prime numbers and $S = \{p,q\}$. Then $\mathcal{A}_S \cong C^*(\Lambda_{S,\theta})\cong C^*(\Lambda_{S,\sigma}) \cong \mathcal{O}_p \otimes \mathcal{O}_q$. \end{prop} \begin{proof} We have seen in Corollary~\ref{cor:tor subalgebra via k-graphs} and Remark~\ref{rem:Lambda_S flip} that the UCT Kirchberg algebras $\mathcal{A}_S$ and $\mathcal{O}_p \otimes \mathcal{O}_q$ are both expressible as $C^*$-algebras associated with row-finite $2$-graphs $\Lambda_{S,\theta}$ and $\Lambda_{S,\sigma}$ sharing the same skeleton. The claim therefore follows from \cite{Evans}*{Corollary~5.3}. \end{proof}
Concerning a generalization of Proposition~\ref{prop:A_S for |S|=2} to the case of $\lvert S \rvert \geq 3$, we learned from Aidan Sims that the following conjecture for $k$-graphs might be true:
\begin{conj}\label{conj:k-graph} Suppose $\Lambda$ and $\Lambda'$ are row-finite $k$-graphs without sources such that $C^*(\Lambda)$ and $C^*(\Lambda')$ are unital, purely infinite and simple. If $\Lambda$ and $\Lambda'$ have the same skeleton, then the associated $C^*$-algebras are isomorphic. \end{conj}
Note that $C^*(\Lambda)$ and $C^*(\Lambda')$ are indeed unital UCT Kirchberg algebras, as separability, nuclearity and the UCT are automatically satisfied, see \cite{KP}*{Theorem~5.5}. We will come back to Conjecture~\ref{conj:k-graph} at the end of the next section.
\section{\texorpdfstring{Towards a classification of $\mathcal{Q}_S$}{Towards a classification}}\label{sec:classification}
This final section provides a survey of the progress on the classification of $\mathcal{Q}_S$ that we achieve through the preceding sections and a spectral sequence argument for $K_*(\mathcal{A}_S)$, see Theorem~\ref{thm:main result} and Theorem~\ref{thm:K-theory for A_S}. Recall that $N = \mathbb{Z}\bigl[\{ \frac{1}{p} : p \in S\}\bigr]$ and $g_S$ denotes the greatest common divisor of $\{p-1 : p \in S\}$. We begin by stating our main result.
\begin{thm}\label{thm:main result} Let $S \subset \mathbb{N}^\times\setminus\{1\}$ be a non-empty family of relatively prime numbers. Then the $K$-theory of $\mathcal{Q}_S$ satisfies \[\begin{array}{c} K_{i}(\mathcal{Q}_{S})\cong \mathbb{Z}^{2^{\lvert S \rvert-1}} \oplus K_i(\mathcal{A}_S),\quad i=0,1,\end{array}\] where $K_i(\mathcal{A}_S)$ is a torsion group. Moreover, the following statements hold: \begin{enumerate}[(a)] \item If $g_S=1$, then $K_i(\mathcal{Q}_S)$ is free abelian in $2^{\lvert S \rvert-1}$ generators for $i=0,1$, and $[1]=0$. \item If $\lvert S \rvert=1$, then $(K_{0}(\mathcal{Q}_{S}),[1],K_{1}(\mathcal{Q}_{S})) \cong (\mathbb{Z} \oplus \mathbb{Z}/g_S\mathbb{Z}, (0,1), \mathbb{Z})$. \item If $\lvert S \rvert=2$, then $(K_{0}(\mathcal{Q}_{S}),[1],K_{1}(\mathcal{Q}_{S})) \cong (\mathbb{Z}^2 \oplus \mathbb{Z}/g_S\mathbb{Z}, (0,1), \mathbb{Z}^2 \oplus \mathbb{Z}/g_S\mathbb{Z})$. \end{enumerate} \end{thm}
\begin{rem}\label{rem:K-theory of Q_S mod tor} Note that for $S = \left\lbrace p \right\rbrace$, the torsion subalgebra $\mathcal{A}_S$ is canonically isomorphic to the Cuntz algebra $\mathcal{O}_p$. Therefore, Theorem~\ref{thm:main result}~(b) recovers known results by Hirshberg \cite{Hir}*{Example~1, p.~106} and Katsura \cite{KatsuraIV}*{Example~A.6}. Indeed, it is already clear from the presentation for $\mathcal{O}(E_{p,1})$ described in \cite{KatsuraIV}*{Example~A.6} that it coincides with $\mathcal{Q}_S$. Theorem~\ref{thm:main result}~(c) shows an unexpected result for the $K$-groups of $\mathcal{Q}_S$ in the case of $S=\{p,q\}$ for two relatively prime numbers $p$ and $q$ with $g_S > 1$: $K_1(\mathcal{Q}_S)$ has torsion and is therefore, for instance, not a graph $C^*$-algebra, see \cite{RS}*{Theorem~3.2}. By virtue of (a), Theorem~\ref{thm:main result} also explains why $\mathcal{Q}_\mathbb{N}$ and $\mathcal{Q}_{2}$ have torsion free $K$-groups. More importantly, it shows that the presence of $2$ in the family $S$ is not the only way to achieve this. Indeed, $S$ can contain at most one even number. If $g_S=1$, then $S$ must contain an even number, and there are many examples, e.g.\ $S$ with $2^m+1,2n \in S$ for some $m,n \geq 1$. \end{rem}
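To illustrate Theorem~\ref{thm:main result} with concrete (and purely exemplary) numbers: for $S=\{4,7\}$ we have $g_S=\gcd(3,6)=3$, so part~(c) gives $(K_{0}(\mathcal{Q}_{S}),[1],K_{1}(\mathcal{Q}_{S})) \cong (\mathbb{Z}^2 \oplus \mathbb{Z}/3\mathbb{Z}, (0,1), \mathbb{Z}^2 \oplus \mathbb{Z}/3\mathbb{Z})$, and $K_1(\mathcal{Q}_S)$ indeed has torsion. By contrast, for $S=\{3,4\}$ we have $g_S=\gcd(2,3)=1$, so part~(a) applies and both $K$-groups are isomorphic to $\mathbb{Z}^{2}$.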
In view of the Kirchberg-Phillips classification theorem \cites{Kir,Phi}, we get the following immediate consequence of Theorem~\ref{thm:main result}.
\begin{cor}\label{cor:isomorphism classes for |S| at most 2} Let $S,T \subset \mathbb{N}^\times \setminus\{1\}$ be non-empty families of relatively prime numbers. Then $\mathcal{Q}_S \cong \mathcal{Q}_T$ implies $\lvert S \rvert = \lvert T \rvert$. Moreover, the following statements hold: \begin{enumerate}[(a)] \item If $g_S=1=g_T$, then $\mathcal{Q}_S$ is isomorphic to $\mathcal{Q}_T$ if and only if $\lvert S \rvert=\lvert T \rvert$. \item If $\lvert S \rvert \leq 2$, then $\mathcal{Q}_S$ is isomorphic to $\mathcal{Q}_T$ if and only if $\lvert S \rvert=\lvert T \rvert$ and $g_S=g_T$. \end{enumerate} \end{cor}
Observe that the decomposition of $K_*(\mathcal{Q}_S)$ claimed in Theorem~\ref{thm:main result} follows from Corollary~\ref{cor:torsion and free part K-theory}, Proposition~\ref{prop:K-theory torsion free part} and Corollary~\ref{cor:subalgebra for torsion part}. To prove our main result, it is therefore enough to establish the following theorem reflecting our present knowledge on the torsion subalgebra $\mathcal{A}_S$, an object which is certainly of interest in its own right.
\begin{thm}\label{thm:K-theory for A_S} Let $S \subset \mathbb{N}^\times\setminus\{1\}$ be a non-empty family of relatively prime numbers. Then the following statements hold: \begin{enumerate}[(a)] \item If $g_S=1$, then $\mathcal{A}_S \cong \mathcal{O}_2 \cong \bigotimes_{p \in S} \mathcal{O}_p$. \item If $S = \left\lbrace p \right\rbrace$, then $\mathcal{A}_S \cong \mathcal{O}_p$. \item If $S = \left \lbrace p,q \right\rbrace$ with $p\neq q$, then $\mathcal{A}_S \cong \mathcal{O}_p \otimes \mathcal{O}_q$. \item For $\lvert S \rvert\geq 3$ and $g_S > 1$, $K_i(\mathcal{A}_S)$ is a torsion group in which the order of any element divides $g_S^{2^{\lvert S \rvert-2}}$. Moreover, $K_i(\mathcal{A}_S)$ is finite whenever $S$ is finite. \end{enumerate} \end{thm}
Note that in the case of infinite $S$ with $g_S > 1$, part (d) still makes sense within the realm of supernatural numbers. Based on Theorem~\ref{thm:main result} and Theorem~\ref{thm:K-theory for A_S}, we suspect that the general situation is in accordance with Conjecture~\ref{conj:k-graph}:
\begin{conj}\label{conj:K-theory of QQ_S} For a family $S \subset \mathbb{N}^\times\setminus\{1\}$ of relatively prime numbers with $\lvert S\rvert\geq 2$, $\mathcal{A}_S$ is isomorphic to $\bigotimes_{p \in S} \mathcal{O}_p$. Equivalently, $\mathcal{Q}_S$ is the unital UCT Kirchberg algebra with \[ (K_0(\mathcal{Q}_S),[1],K_1(\mathcal{Q}_S)) = (\mathbb{Z}^{2^{\lvert S \rvert-1}} \oplus (\mathbb{Z}/g_S\mathbb{Z})^{2^{\lvert S \rvert-2}},(0,e_1),\mathbb{Z}^{2^{\lvert S \rvert-1}} \oplus (\mathbb{Z}/g_S\mathbb{Z})^{2^{\lvert S \rvert-2}}), \] where $e_1 = (\delta_{1,j})_j \in (\mathbb{Z}/g_S\mathbb{Z})^{2^{\lvert S \rvert-2}}$. In particular, if $S,T \subset \mathbb{N}^\times \setminus\{1\}$ are non-empty sets of relatively prime numbers, then $\mathcal{Q}_S$ is isomorphic to $\mathcal{Q}_T$ if and only if $\lvert S \rvert=\lvert T \rvert$ and $g_S=g_T$. \end{conj}
\begin{rem}\label{rem:stable relations} It follows from Theorem~\ref{thm:main result} and Theorem~\ref{thm:K-theory for A_S}~(d) that the $K$-theory of $\mathcal{Q}_S$ is finitely generated if and only if $S$ is finite. Consequently, when $S$ is finite, the defining relations of $\mathcal{Q}_S$ from Definition~\ref{def:Q_S} are \emph{stable}, see \cite{enders1}*{Corollary~4.6} and \cite{loring1}*{Chapter~14}. \end{rem}
For the proof of Theorem~\ref{thm:K-theory for A_S}, we will employ the isomorphism $\mathcal{A}_S \cong M_{d^\infty} \rtimes_\alpha^e H^+$ and make use of a spectral sequence by Kasparov constructed in \cite{kasparov}*{6.10}. Let us briefly review the relevant ideas and refer to \cite{barlak15} for a detailed exposition. Given a $C^*$-dynamical system $(B,\beta,\mathbb{Z}^k)$, we can consider its \emph{mapping torus} \[ \mathcal{M}_\beta(B):=\left\lbrace f\in C(\mathbb{R}^k,B)\ : \ \beta_z (f(x))=f(x+z)\ \text{for all}\ x\in \mathbb{R}^k,\ z\in \mathbb{Z}^k \right\rbrace. \] It is well-known that $K_*(\mathcal{M}_\beta(B))$ is isomorphic to $K_{*+k}(B\rtimes_\beta\mathbb{Z}^k)$, see e.g.\ \cite{barlak15}*{Section~1}. The mapping torus admits a finite cofiltration \begin{equation} \label{cofiltrationMappingTorus} \mathcal{M}_\beta(B)=F_k \stackrel{\pi_k}{\onto} F_{k-1} \stackrel{\pi_{k-1}}{\onto} \cdots \stackrel{\pi_1}\onto F_0=B \stackrel{\pi_0}{\onto} F_{-1}=0 \end{equation} arising from the filtration of $\mathbb{R}^k$ by its skeletons \[ \emptyset = X_{-1} \subset \mathbb{Z}^k = X_0 \subset X_1 \subset \cdots\subset X_k = \mathbb{R}^k, \] where $X_\ell := \{ (x_1,\ldots,x_k) \in \mathbb{R}^k : \lvert\{ 1 \leq i \leq k : x_i \in \mathbb{R}\setminus\mathbb{Z}\}\rvert \leq \ell\}$.
As for filtrations of $C^*$-algebras by closed ideals \cite{schochet}, there is a standard way relying on Massey's technique of exact couples \cites{massey1,massey2} of associating a spectral sequence to a given finite cofiltration of a $C^*$-algebra. In this way, the cofiltration \eqref{cofiltrationMappingTorus} yields a spectral sequence $(E_\ell,d_\ell)_{\ell\geq 1}$ that converges to $K_*(\mathcal{M}_\beta(B))\cong K_{*+k}(B\rtimes_\beta \mathbb{Z}^k)$. Using Savinien-Bellissard's \cite{savinienBellissard} description of the $E_1$-term, we can summarize as follows.
\begin{thm}[cf.\ {\cite{kasparov}*{6.10}}, {\cite{savinienBellissard}*{Theorem~2}} and {\cite{barlak15}*{Corollary~2.5}}]\label{thm:spec seq for cr prod}~\newline Let $(B,\beta,\mathbb{Z}^k)$ be a $C^*$-dynamical system. There exists a cohomological spectral sequence $(E_\ell,d_\ell)_{\ell\geq 1}$ converging to $K_*(\mathcal{M}_\beta(B))\cong K_{*+k}(B\rtimes_\beta \mathbb{Z}^k)$. The $E_1$-term is given by \[ \begin{array}{l} E_1^{p,q}:= K_q(B) \otimes_\mathbb{Z} \Lambda^p(\mathbb{Z}^k),\text{ with}\\ d_1^{p,q}\colon E_1^{p,q}\to E_1^{p+1,q}, \quad x\otimes e\mapsto \sum\limits_{j=1}^k (K_q(\beta_j)-\operatorname{id})(x)\otimes (e_j\wedge e). \end{array} \] Furthermore, the spectral sequence collapses at the $(k+1)$th page, so that $E_\infty = E_{k+1}$. \end{thm}
By Bott periodicity, we have that $(E_\ell^{p,q+2},d_\ell^{p,q+2})=(E_\ell^{p,q},d_\ell^{p,q})$ for all $p,q \in \mathbb{Z}$. In particular, the $E_\infty$-term reduces to $E_\infty^{p,q}$ with $p\in \mathbb{Z}$ and $q=0,1$.
\begin{rem} Let us recall the meaning of convergence of the spectral sequence $(E_\ell,d_\ell)_{\ell\geq 1}$. For $q=0,1$, consider the diagram \[ K_q(\mathcal{M}_\beta(B))=K_q(F_k) \longrightarrow K_q(F_{k-1}) \longrightarrow \cdots \longrightarrow K_q(F_0) \longrightarrow K_q(F_{-1})=0. \] Define $\mathcal{F}_p K_q(\mathcal{M}_\beta(B)):=\operatorname{ker}(K_q(\mathcal{M}_\beta(B))\to K_q(F_p))$ for $p=-1,\ldots,k$, and observe that this gives rise to a filtration of abelian groups \[ 0 \into \mathcal{F}_{k-1} K_q(\mathcal{M}_\beta(B)) \into \cdots \into \mathcal{F}_{-1} K_q(\mathcal{M}_\beta(B)) = K_q(\mathcal{M}_\beta(B)). \] One can now show the existence of exact sequences \begin{equation}\label{eq:exact sequences from filtration} 0 \longrightarrow \mathcal{F}_p K_{p+q}(\mathcal{M}_\beta(B)) \longrightarrow \mathcal{F}_{p-1} K_{p+q}(\mathcal{M}_\beta(B)) \longrightarrow E_{\infty}^{p,q} \longrightarrow 0, \end{equation} or in other words, there are isomorphisms \[ E_{\infty}^{p,q}\cong \mathcal{F}_{p-1} K_{p+q}(\mathcal{M}_\beta(B))/\mathcal{F}_p K_{p+q}(\mathcal{M}_\beta(B)). \] Hence, the $E_\infty$-term determines the $K$-theory of $\mathcal{M}_\beta(B)$, and thus of $B\rtimes_\beta \mathbb{Z}^k$, up to group extension problems. \end{rem}
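For instance, in the simplest case $k=1$ the spectral sequence collapses at the second page and \eqref{eq:exact sequences from filtration} reduces to a single extension \[ 0 \longrightarrow E_2^{1,q-1} \longrightarrow K_q(\mathcal{M}_\beta(B)) \longrightarrow E_2^{0,q} \longrightarrow 0, \] with $E_2^{0,q}=\operatorname{ker}(K_q(\beta_1)-\operatorname{id})$ and $E_2^{1,q-1}=\operatorname{coker}(K_{q-1}(\beta_1)-\operatorname{id})$. In view of $K_q(\mathcal{M}_\beta(B))\cong K_{q+1}(B\rtimes_\beta\mathbb{Z})$, this essentially recovers the Pimsner-Voiculescu exact sequence \cite{PV}.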
Let us now turn to the $K$-theory of $M_{d^\infty} \rtimes_\alpha^e H^+$. By Laca's dilation theorem \cite{Lac}, see also Remark~\ref{rem:dilations}, we may and will determine the $K$-theory of the dilated crossed product $M_{d^\infty,\infty} \rtimes_{\alpha_\infty} H$ instead. Fix a natural number $1 \leq k \leq \lvert S \rvert$ and observe that $H_k \cong \mathbb{Z}^k$. Let $\alpha_\infty(k)$ be the $H_k$-action on $M_{d^\infty,\infty}$ induced by the $k$ smallest elements $p_1 < p_2 < \dotsb < p_k$ of $S$. It follows from the proof of Proposition~\ref{prop:UHF cross prod torsion K-th} that $K_0(\alpha_{\infty,p_\ell})$ is given by multiplication with $1/p_\ell$ on $K_0(M_{d^\infty}) \cong N$. It turns out to be more convenient to work with the action $\alpha_\infty^{-1}(k)$ given by the inverses of the $\alpha_\ell$, whose crossed product is canonically isomorphic to $M_{d^\infty,\infty} \rtimes_{\alpha_\infty(k)} H_k$.
Let $(E_\ell,d_\ell)_{\ell\geq 1}$ denote the spectral sequence associated with $\alpha^{-1}_\infty(k)$. As $K_1(M_{d^\infty})=0$, it follows directly from Theorem~\ref{thm:spec seq for cr prod} that $E_1^{p,1}=0$ for all $p \in \mathbb{Z}$. Moreover, according to Theorem~\ref{thm:spec seq for cr prod}, $d_1^{p,0}\colon N\otimes_\mathbb{Z} \Lambda^p(\mathbb{Z}^k)\to N\otimes_\mathbb{Z} \Lambda^{p+1}(\mathbb{Z}^k)$, $p \in \mathbb{Z}$, is given by \[\begin{array}{c} d_1^{p,0}(x\otimes e)=\sum_{\ell=1}^k (p_\ell -1)x\otimes e_\ell\wedge e=\sum_{\ell=1}^k x\otimes (p_\ell -1)e_\ell\wedge e. \end{array}\] In other words, $d_1^{p,0} = \op{id}_N\otimes h^p$ with \begin{equation}\label{eq:differential formula} \begin{array}{c} h^p\colon \Lambda^p(\mathbb{Z}^k)\to \Lambda^{p+1}(\mathbb{Z}^k),\quad h^p(e)=\sum_{\ell=1}^k (p_\ell-1)e_\ell\wedge e. \end{array} \end{equation} To obtain $E_2^{p,0}$, we therefore compute the cohomology of the complex $(\Lambda^p(\mathbb{Z}^k),h^p)_{p\in \mathbb{Z}}$. To do so, we consider $h^p$ as a matrix $A_p\in M_{\binom k {p+1} \times \binom k p}(\mathbb{Z})$, where the identification is taken with respect to the canonical bases of $\Lambda^p(\mathbb{Z}^k)$ and $\Lambda^{p+1}(\mathbb{Z}^k)$ in lexicographical ordering. The computation then mainly reduces to determining the Smith normal form of $A_p$.
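For illustration, consider for a moment the case $k=2$; this is meant only as an example of the general pattern. The complex reads $\mathbb{Z}\stackrel{h^0}{\longrightarrow}\mathbb{Z}^2\stackrel{h^1}{\longrightarrow}\mathbb{Z}$, and with respect to the canonical bases $1$; $e_1,e_2$; $e_1\wedge e_2$ the matrices are \[ A_0=\begin{pmatrix} p_1-1\\ p_2-1\end{pmatrix},\qquad A_1=\begin{pmatrix} -(p_2-1) & p_1-1\end{pmatrix}. \] Writing $g_2=\gcd(p_1-1,p_2-1)$, one checks directly that $\operatorname{im}(h^0)$ is generated by $(p_1-1)e_1+(p_2-1)e_2$, while $\operatorname{ker}(h^1)$ is generated by $\frac{1}{g_2}\bigl((p_1-1)e_1+(p_2-1)e_2\bigr)$, so that $\operatorname{ker}(h^1)/\operatorname{im}(h^{0})\cong\mathbb{Z}/g_2\mathbb{Z}$.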
\begin{thm}[Smith normal form]\label{thm:Smith normal form} Let $A$ be a non-zero $m\times n$-matrix over a principal ideal domain $R$. There is an invertible $m\times m$-matrix $S$ and an invertible $n\times n$-matrix $T$ over $R$, so that \[D:=SAT=\operatorname{diag}(\delta_1,\ldots,\delta_r,0,\ldots,0)\]
for some $r \leq \min(m,n)$ and non-zero $\delta_i\in R$ satisfying $\delta_i|\delta_{i+1}$ for $1 \leq i \leq r-1$. The elements $\delta_i$ are unique up to multiplication with some unit and are called \emph{elementary divisors} of $A$. The diagonal matrix $D$ is called a \emph{Smith normal form} of $A$. The $\delta_i$ can be computed as \begin{equation}\label{eq:det divisor formula} \begin{array}{c} \delta_1 = d_1(A),\quad \delta_i = \frac{d_i(A)}{d_{i-1}(A)}, \end{array} \end{equation} where $d_i(A)$, called the \emph{$i$-th determinant divisor}, is the greatest common divisor of all $i\times i$-minors of $A$. \end{thm}
Of course, $D$ can only be a diagonal matrix if $m=n$. The notation in Theorem~\ref{thm:Smith normal form} is supposed to mean that $D$ is the $m\times n$ matrix over $R$ with the $\min(m,n)\times \min(m,n)$ left upper block matrix being $\operatorname{diag}(\delta_1,\ldots,\delta_r,0,\ldots,0)$ and all other entries being zero.
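As a purely numerical illustration of \eqref{eq:det divisor formula} (the entries below are chosen only for the sake of the example), take $R=\mathbb{Z}$ and $A=\left(\begin{smallmatrix} 2 & 4\\ 6 & 8\end{smallmatrix}\right)$: the greatest common divisor of all entries is $d_1(A)=2$, the only $2\times 2$-minor is $\operatorname{det}(A)=-8$, so $d_2(A)=8$. Hence $\delta_1=2$ and $\delta_2=8/2=4$, i.e.\ $\operatorname{diag}(2,4)$ is a Smith normal form of $A$.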
For each $1\leq k\leq\lvert S\rvert$, set $g_k :=\gcd(\{p_\ell-1 : \ell=1,\cdots,k\})$. \begin{lem}\label{lem:ker d^p / im d^(p-1)} The group $\operatorname{ker}(h^p)/\operatorname{im}(h^{p-1})$ is isomorphic to $(\mathbb{Z}/g_k\mathbb{Z})^{\binom{k-1}{p-1}}$ for $1 \leq p \leq k$ and vanishes otherwise. \end{lem} \begin{proof} For $p\in \mathbb{Z}$, let $D_p=S_pA_pT_p$ denote the Smith normal form of $A_p$ with elementary divisors $\delta^{(p)}_1,\ldots,\delta^{(p)}_{r_p}$. As $\Lambda^p(\mathbb{Z}^k) = 0$ unless $0 \leq p\leq k$, $\operatorname{ker}(h^p)/\operatorname{im}(h^{p-1})$ vanishes if $p < 0$ or $p \geq k+1$.
If $p=0$, then $h^0\colon \mathbb{Z} \to \mathbb{Z}^k$ is given by the column vector $A_0 = (p_1-1,\ldots,p_k-1)^t$. Thus we have $r_0=1$ and $\delta^{(0)}_1=g_k$. Moreover, $h^0$ is injective, so $\operatorname{ker}(h^0)/\operatorname{im}(h^{-1})=0$.
Likewise, $p=k$ is simple as $h^{k-1}\colon \mathbb{Z}^k \to \mathbb{Z}$ is given, up to the signs of its entries, by the row vector $A_{k-1} = (p_1-1,\ldots,p_k-1)$ and $h^k$ is zero because $\Lambda^{k+1}(\mathbb{Z}^k) = 0$. Therefore, $r_{k-1}=1$ and $\delta^{(k-1)}_1=g_k$, and hence \[ \operatorname{ker}(h^k)/\operatorname{im}(h^{k-1}) = \mathbb{Z}/g_k\mathbb{Z}=(\mathbb{Z}/g_k\mathbb{Z})^{\binom{k-1}{k-1}}. \] As this completes the proof for $p\leq0$ and $p\geq k$, we will assume $1\leq p\leq k-1$ from now on.
We start by showing that for $\ell=1,\cdots,k$, the matrix $A_p$ contains a $\binom{k-1}{p}\times \binom{k-1}p$ diagonal matrix with entries $\pm (p_\ell-1)$ (obtained by deleting suitable rows and columns). This will allow us to conclude that $A_p$ has a $j \times j$-minor equal to $\pm(p_\ell-1)^j$ for each $j=1,\ldots,\binom{k-1}p$. Since $\gcd(\{(p_\ell-1)^j : \ell=1,\ldots,k\})=g_k^j$, we thus obtain that $d_j(A_p)$ divides $g_k^j$ for $j=1,\ldots,\binom{k-1}{p}$:
First, keep only those columns of $A_p$ which correspond to basis elements $e_{i_1}\wedge\ldots\wedge e_{i_p} \in \Lambda^p(\mathbb{Z}^k)$ satisfying $\ell \neq i_j$ for all $j=1,\ldots,p$. As this amounts to choosing $p$ elements out of $k-1$ without order and repetition, we are left with $\binom{k-1}p$ columns (out of $\binom k p$). Next, we restrict to those rows which correspond to basis elements $e_{i_1}\wedge\ldots\wedge e_{i_{p+1}} \in \Lambda^{p+1}(\mathbb{Z}^k)$ satisfying $\ell=i_j$ for some (necessarily unique) $j=1,\ldots,p+1$. Here again $\binom{k-1}p$ rows (out of $\binom k {p+1}$) remain. The resulting matrix describes the linear map \[ \Lambda^p(\mathbb{Z}^k) \supset \mathbb{Z}^{\binom{k-1}p}\to \mathbb{Z}^{\binom{k-1}p} \subset \Lambda^{p+1}(\mathbb{Z}^k),\ e_{i_1}\wedge\ldots\wedge e_{i_p}\mapsto (p_\ell-1)\cdot e_\ell\wedge e_{i_1}\wedge\ldots\wedge e_{i_p}, \]
which is nothing but a diagonal matrix of size $\binom{k-1}p$ with entries $\pm(p_\ell-1)$. As explained above, we thus obtain that $d_j(A_p)|g_k^j$ for $j=1,\ldots,\binom{k-1}{p}$.
We will now show that the converse holds as well, i.e.\ $g_k^j|d_j(A_p)$ for $j=1,\ldots,\binom{k-1}{p}$. Note that every $1\times 1$-minor is either zero or $\pm(p_\ell-1)$ for some $\ell=1,\ldots,k$. This shows that $d_1(A_p)=g_k$ for $1 \leq p \leq k-1$. Let $1\leq j\leq \binom{k-1}{p}-1$ and assume that $d_j(A_p)=g_k^j$. Let $L$ be any $(j+1)\times (j+1)$-matrix arising from $A_p$ by deleting rows and columns. By the Laplace expansion theorem, the determinant of $L$ is given as a linear combination of some of its $j\times j$-minors. The coefficients in this linear combination are, up to sign, entries of $L$. The occurring minors are themselves $j\times j$-minors of $A_p$. Hence, $g_k^j|\operatorname{det}(L)$ by assumption. In fact, we have $g_k^{j+1}|\operatorname{det}(L)$ because all entries in $A_p$ are divisible by $g_k$. Altogether, $d_j(A_p)=g_k^j$ for $j=1,\ldots,\binom{k-1}{p}$ and we have shown that for $p=1,\ldots,k-1$, $r_p\geq \binom{k-1}{p}$ and $\delta^{(p)}_j=g_k$ for $j=1,\ldots,\binom{k-1}p$.
Since $A_p$ and $D_p$ have isomorphic kernel and image, our considerations show that \[\begin{array}{lcr} \binom{k-1}p \leq \operatorname{rank}(\operatorname{im}(h^p)) &\quad\text{ and }\quad& \operatorname{rank}(\operatorname{ker}(h^p)) \leq \binom k p -\binom{k-1}p=\binom{k-1}{p-1}. \end{array}\] By $h^{p+1}\circ h^p=0$, we conclude that $\operatorname{rank}(\operatorname{ker}(h^{p+1})) = \operatorname{rank}(\operatorname{im}(h^p)) = \binom{k-1}p$ which implies $r_p=\binom{k-1}p$. Moreover, $h^{p}\circ h^{p-1}=0$ forces $T_p^{-1}S_{p-1}^{-1}(\operatorname{im}(D_{p-1}))\subset\operatorname{ker}(D_p)$ or, equivalently, $\operatorname{im}(D_{p-1})\subset\operatorname{ker}(D_pT_p^{-1}S_{p-1}^{-1})$. Since \[\operatorname{im}(D_{p-1})=g_k\mathbb{Z}^{\binom{k-1} {p-1}}\oplus\{0\}^{\binom k p - \binom{k-1} {p-1}}\] has the same rank as $\operatorname{ker}(D_pT_p^{-1}S_{p-1}^{-1})$, it means that \[\operatorname{ker}(D_pT_p^{-1}S_{p-1}^{-1})=\mathbb{Z}^{\binom{k-1}{p-1}}\oplus\{0\}^{\binom k p -\binom{k-1}{p-1}}.\] Moreover, $S_{p-1}$ is an automorphism of $\mathbb{Z}^{\binom k p}$ that restricts both to an isomorphism $\operatorname{ker}(A_p) \stackrel{\cong}{\longrightarrow} \operatorname{ker}(D_pT_p^{-1}S_{p-1}^{-1})$ and to an isomorphism $\operatorname{im}(A_{p-1}) \stackrel{\cong}{\longrightarrow} \operatorname{im}(D_{p-1})$. Hence, \[\begin{array}{lclcl} \operatorname{ker}(h^p) / \operatorname{im}(h^{p-1}) &=& \operatorname{ker}(A_p) / \operatorname{im}(A_{p-1}) & \cong & \operatorname{ker}(D_p T_p^{-1}S_{p-1}^{-1}) / \operatorname{im}(D_{p-1}) \vspace*{2mm}\\ &&&\cong& (\mathbb{Z}/g_k\mathbb{Z})^{\binom{k-1}{p-1}}. \end{array}\] \end{proof}
Lemma~\ref{lem:ker d^p / im d^(p-1)} now allows us to compute the $E_2$-term of the spectral sequence associated to $\alpha_{\infty}^{-1}(k)\colon H_k\curvearrowright M_{d^\infty,\infty}$ by appealing to the following simple, but useful observation.
\begin{lem}\label{lem:N/gN} The group $N/g_SN$ is isomorphic to $\mathbb{Z}/g_S\mathbb{Z}$. Moreover, for every $1\leq k\leq \lvert S\rvert$, the group $N/g_kN$ is isomorphic to a subgroup of $\mathbb{Z}/g_k\mathbb{Z}$. \end{lem} \begin{proof} Recall that $S$ consists of relatively prime numbers, $N=\mathbb{Z}\bigl[\{\frac{1}{p}:p\in S\}\bigr]$ and let us simply write $g$ for $g_S=\gcd(\{p-1:p\in S\})$. The map \[\begin{array}{c} N/gN \to \mathbb{Z}/g\mathbb{Z}, \quad \frac{1}{r}+gN \mapsto s + g\mathbb{Z}, \end{array}\]
where $r$ is a natural number and $s$ is the unique solution in $\{0,1,\dotsc,g-1\}$ of $rs=1 \pmod{g}$, defines a group homomorphism. To see this, note first that for every $p\in P$ there is a $q\in S$ such that $p|q$, and hence $\gcd{(q-1,p)}=1$. Therefore, $\gcd{(g,p)}=1$ for all $p\in P$. If $\frac{1}{r}\in N$, then all the prime factors of $r$ come from $P$, and it follows that $\gcd{(g,r)}=1$. Thus, the above map is well-defined and extends additively to the whole domain. Moreover, every $s$ appearing as a solution is relatively prime with $g$, meaning that the kernel is $gN$, i.e.\ the map is injective. Finally, the inverse map is given by $1+g\mathbb{Z}\mapsto 1+gN$.
For the second part, set $g_k'=g_k/\max{(\gcd{(g_k,r)})}$, where the maximum is taken over all natural numbers $r$ such that $\frac{1}{r}\in N$, i.e.\ $g_k'$ is the largest number dividing $g_k$ so that $\gcd{(g_k',r)}=1$ for all such $r$. Then $g_k'N=g_kN$ and a similar proof as above shows that $N/g_k N=N/g_k'N\cong\mathbb{Z}/g_k'\mathbb{Z}$. \end{proof}
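As an illustrative example, take $S=\{4,7\}$, so that $N=\mathbb{Z}\bigl[\tfrac{1}{4},\tfrac{1}{7}\bigr]=\mathbb{Z}\bigl[\tfrac{1}{14}\bigr]$ and $g_S=\gcd(3,6)=3$. Since $\gcd(14,3)=1$, the map from the proof sends $\tfrac{1}{14}+3N$ to $2+3\mathbb{Z}$, as $14\cdot 2\equiv 1 \pmod{3}$, and one recovers $N/3N\cong\mathbb{Z}/3\mathbb{Z}$.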
\begin{prop}\label{prop:E for alpha(k)} For every $1 \leq k \leq \lvert S \rvert$, the respective group $E_2^{p,0}$ is isomorphic to a subgroup of $(\mathbb{Z}/g_k\mathbb{Z})^{\binom{k-1}{p-1}}$ for $1 \leq p \leq k$, and vanishes otherwise. $E_2^{p,1}$ vanishes for $p \in \mathbb{Z}$. \end{prop} \begin{proof} Note that $N$ is torsion free and hence a flat module over $\mathbb{Z}$. Thus, an application of Lemma~\ref{lem:ker d^p / im d^(p-1)} yields \[\begin{array}{lclclcl} E_2^{p,0} &\hspace*{-1mm}=\hspace*{-1mm}& \operatorname{ker}(\op{id}_{N}\otimes h^p)/\operatorname{im}(\op{id}_{N}\otimes h^{p-1}) &\hspace*{-1mm}\cong\hspace*{-1mm}& N\otimes_\mathbb{Z} \operatorname{ker}(h^p)/\operatorname{im}(h^{p-1}) \vspace*{2mm}\\ &\hspace*{-1mm}\cong\hspace*{-1mm}& N \otimes_\mathbb{Z} (\mathbb{Z}/g_k\mathbb{Z})^{\binom {k-1}{p-1}} &\hspace*{-1mm}\cong\hspace*{-1mm}& (N \otimes_\mathbb{Z} \mathbb{Z}/g_k\mathbb{Z})^{\binom {k-1}{p-1}} &\hspace*{-1mm}\cong\hspace*{-1mm}& (N/g_kN)^{\binom {k-1}{p-1}} \end{array}\] and Lemma~\ref{lem:N/gN} shows that $N/g_kN$ is isomorphic to a subgroup of $\mathbb{Z}/g_k\mathbb{Z}$. The second claim follows from the input data. \end{proof}
\begin{rem}\label{rem:apps for E2term lemma} Assume that $g_k=1$ for some $1 \leq k \leq \lvert S \rvert$, $k < \infty$, and let $k \leq \ell \leq \lvert S \rvert$ be a natural number. If $(E_i^{p,q})_{i \geq 1}$ denotes the spectral sequence associated with $\alpha_\infty^{-1}(\ell)$, then Proposition~\ref{prop:E for alpha(k)} yields $E_2^{p,0}=0$ for all $p \in \mathbb{Z}$. \end{rem}
\begin{proof}[Proof of Theorem~\ref{thm:K-theory for A_S}] Let $k \geq 1$ be finite with $k \leq \lvert S \rvert$. The main idea is to use the $E_\infty$-term of the spectral sequence associated with $\alpha^{-1}_\infty(k)$ to compute $K_*(M_{d^\infty} \rtimes^e_{\alpha(k)} H^+_k)$, up to certain group extension problems, by employing convergence of this spectral sequence, see Theorem~\ref{thm:spec seq for cr prod}. Recall the general form \eqref{eq:exact sequences from filtration} of the extension problems involved. Since $\mathcal{F}_k K_q(\mathcal{M}_{\alpha_\infty}(M_{d^\infty,\infty})) =0$ and hence $\mathcal{F}_{k-1} K_q(\mathcal{M}_{\alpha_\infty}(M_{d^\infty,\infty})) \cong E_{\infty}^{k,q-k}$, we face $k$ iterative extensions of the form: \begin{equation}\label{eq:iterative extensions} \begin{array}{ccccc} E_{\infty}^{k,q-k} &\hspace*{-2mm}\into\hspace*{-2mm}& \mathcal{F}_{k-2} K_q(\mathcal{M}_{\alpha_\infty}(M_{d^\infty,\infty})) &\hspace*{-2mm}\onto\hspace*{-2mm}& E_{\infty}^{k-1,q-k+1} \vspace*{2mm}\\ \mathcal{F}_{k-2} K_q(\mathcal{M}_{\alpha_\infty}(M_{d^\infty,\infty})) &\hspace*{-2mm}\into\hspace*{-2mm}& \mathcal{F}_{k-3} K_q(\mathcal{M}_{\alpha_\infty}(M_{d^\infty,\infty})) &\hspace*{-2mm}\onto\hspace*{-2mm}& E_{\infty}^{k-2,q-k+2} \vspace*{0mm}\\ \vdots&&\vdots&&\vdots \vspace*{0mm}\\ \mathcal{F}_{0} K_{q}(\mathcal{M}_{\alpha_\infty}(M_{d^\infty,\infty})) &\hspace*{-2mm}\into\hspace*{-2mm}& \mathcal{F}_{-1} K_{q}(\mathcal{M}_{\alpha_\infty}(M_{d^\infty,\infty})) &\hspace*{-2mm}\onto\hspace*{-2mm}& E_{\infty}^{0,q}. \end{array} \end{equation} Using \[\begin{array}{lcl} \mathcal{F}_{-1} K_{q}(\mathcal{M}_{\alpha_\infty}(M_{d^\infty,\infty})) &=& K_{q}(\mathcal{M}_{\alpha_\infty}(M_{d^\infty,\infty})) \\ &\cong& K_{k+q}(M_{d^\infty,\infty} \rtimes_{\alpha_\infty(k)} H_k) \cong K_{k+q}(M_{d^\infty} \rtimes^e_{\alpha(k)} H^+_k), \end{array}\] we will thus arrive at $K_{k+q}(M_{d^\infty} \rtimes^e_{\alpha(k)} H^+_k)$, see Theorem~\ref{thm:spec seq for cr prod}. Recall that by Bott periodicity, $E_\infty^{p,q+2} \cong E_\infty^{p,q}$ for all $q \in \mathbb{Z}$. In addition, we know from Proposition~\ref{prop:E for alpha(k)} that for $p \in \mathbb{Z}$, the group $E_\infty^{p,1}$ is trivial, and $E_\infty^{p,0}$ vanishes unless $1 \leq p \leq k$, in which case it is a subquotient of $E_2^{p,0}$, and thus a subquotient of $(\mathbb{Z}/g_k\mathbb{Z})^{\binom {k-1}{p-1}}$.
Assume now that $g_S=1$. Clearly, this holds exactly if $g_k = 1$ for some $k \geq 1$. For such $k \geq 1$, the corresponding $E_\infty$-term is trivial, yielding $K_*(M_{d^\infty} \rtimes^e_{\alpha(k)} H^+_k) = 0$. Using continuity of $K$-theory if necessary, we obtain that $\mathcal{A}_S \cong M_{d^\infty} \rtimes^e_{\alpha} H^+$ has trivial $K$-theory. It follows from the Kirchberg-Phillips classification that $\mathcal{A}_S \cong \mathcal{O}_2$. This proves (a).
If $S = \{p\}$, $\mathcal{A}_S \cong \mathcal{O}_p$ by the definition of $\mathcal{A}_S$, and (b) follows.
Claim (c) is nothing but Proposition~\ref{prop:A_S for |S|=2}.
Lastly, let us prove claim (d). Let $k \geq 2$, and denote by $(E_\ell,d_\ell)_{\ell \geq 1}$ the spectral sequence associated with $\alpha^{-1}_\infty(k)$. Recall that only those $E_\infty^{\ell,q-\ell}$ with $1 \leq \ell \leq k$ and $q-\ell \in 2\mathbb{Z}$ may be non-trivial; in this case they are subquotients of $(\mathbb{Z}/g_k\mathbb{Z})^{\binom{k-1}{\ell-1}}$. Keeping track of the indices, we get \[\begin{array}{c} \sum\limits_{\substack{1 \leq \ell \leq k:\\ \ell \text{ even}}} \binom {k-1} {\ell-1} =2^{k-2} = \sum\limits_{\substack{1 \leq \ell \leq k:\\ \ell \text{ odd}}} \binom {k-1} {\ell-1}. \end{array}\] This allows us to conclude that the order of every element in $K_i(M_{d^\infty} \rtimes^e_{\alpha(k)} H^+_k)$ divides $g_k^{2^{k-2}}$. Moreover, if $S$ is finite, then, taking $k=\lvert S\rvert$, all groups $E_\infty^{\ell,q-\ell}$ are finite, so that $K_i(\mathcal{A}_S) \cong K_i(M_{d^\infty} \rtimes^e_{\alpha} H^+)$ is finite as well. This concludes the proof, as $g_S = g_k$ for $k \leq \lvert S\rvert$ sufficiently large. \end{proof}
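As a quick sanity check of the counting above: for $k=3$, the even part gives $\binom{2}{1}=2=2^{3-2}$ and the odd part gives $\binom{2}{0}+\binom{2}{2}=2$, so the argument shows that the order of every element of $K_i(M_{d^\infty} \rtimes^e_{\alpha(3)} H^+_3)$ divides $g_3^{2}$, in accordance with Theorem~\ref{thm:K-theory for A_S}~(d).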
\begin{rem}\label{rem:K-theory for M_d^infty} It is possible to say something about the case $\lvert S \rvert=3$, though the answer is incomplete: Noting that $E_{\infty}^{2,-2} $ is a subgroup of $(\mathbb{Z}/g\mathbb{Z})^2$, $E_{\infty}^{3,-2}$ and $E_{\infty}^{1,0}$ are subgroups of $\mathbb{Z}/g\mathbb{Z}$, and the remaining terms vanish, we know that $K_1(M_{d^\infty} \rtimes^e_\alpha H^+) \cong E_\infty^{2,-2}$ and $K_0(M_{d^\infty} \rtimes^e_\alpha H^+)$ fits into an exact sequence \[E_\infty^{3,-2} \into K_0(M_{d^\infty} \rtimes^e_\alpha H^+) \onto E_\infty^{1,0}.\] But we cannot say more without additional information here. \end{rem}
\begin{rem} By considering $\mathcal{A}_S$ as the $k$-graph $C^*$-algebra $C^*(\Lambda_{S,\theta})$ for finite $S$, see Corollary~\ref{cor:tor subalgebra via k-graphs}, one could probably also apply Evans' spectral sequence \cite{Evans}*{Theorem~3.15} to obtain Theorem~\ref{thm:K-theory for A_S} by performing basically the same proof. In fact, Evans' spectral sequence is the homological counterpart of the spectral sequence used here. \end{rem}
\section*{References} \begin{biblist}
\bib{barlak15}{article}{
author={Barlak, Sel\c {c}uk},
title={On the spectral sequence associated with the Baum-Connes Conjecture for $\mathbb Z^n$},
note={\href {http://arxiv.org/abs/1504.03298}{arxiv:1504.03298v2}}, }
\bib{BarSza1}{article}{
label={BaSz},
author={Barlak, Sel\c {c}uk},
author={Szab\'{o}, G\'{a}bor},
title={Sequentially split $*$-homomorphisms between $C^*$-algebras},
note={\href {http://arxiv.org/abs/1510.04555}{arxiv:1510.04555v2}}, }
\bib{BD}{article}{
author={Bunce, John W.},
author={Deddens, James A.},
title={A family of simple $C^*$-algebras related to weighted shift operators},
journal={J. Funct. Anal.},
volume={19},
year={1975},
pages={13--24}, }
\bib{BaHLR}{article}{
author={Brownlowe, Nathan},
author={an Huef, Astrid},
author={Laca, Marcelo},
author={Raeburn, Iain},
title={Boundary quotients of the {T}oeplitz algebra of the affine semigroup over the natural numbers},
journal={Ergodic Theory Dynam. Systems},
volume={32},
year={2012},
number={1},
pages={35--62},
issn={0143-3857},
doi={\href {http://dx.doi.org/10.1017/S0143385710000830}{10.1017/S0143385710000830}}, }
\bib{BLS1}{article}{
author={Brownlowe, Nathan},
author={Larsen, Nadia S.},
author={Stammeier, Nicolai},
title={On $C^*$-algebras associated to right LCM semigroups},
journal={Trans. Amer. Math. Soc.},
year={2016},
doi={\href {http://dx.doi.org/10.1090/tran/6638}{10.1090/tran/6638}}, }
\bib{BLS2}{article}{
author={Brownlowe, Nathan},
author={Larsen, Nadia S.},
author={Stammeier, Nicolai},
title={$C^*$-algebras of algebraic dynamical systems and right LCM semigroups},
note={\href {http://arxiv.org/abs/1503.01599}{arxiv:1503.01599v1}}, }
\bib{BRRW}{article}{
author={Brownlowe, Nathan},
author={Ramagge, Jacqui},
author={Robertson, David},
author={Whittaker, Michael F.},
title={Zappa-{S}z\'{e}p products of semigroups and their $C^*$-algebras},
journal={J. Funct. Anal.},
volume={266},
year={2014},
number={6},
pages={3937--3967},
issn={0022-1236},
doi={\href {http://dx.doi.org/10.1016/j.jfa.2013.12.025}{10.1016/j.jfa.2013.12.025}}, }
\bib{bsBQforADS}{article}{
label={BrSt},
author={Brownlowe, Nathan},
author={Stammeier, Nicolai},
title={The boundary quotient for algebraic dynamical systems},
journal={J. Math. Anal. Appl.},
volume={438},
year={2016},
number={2},
pages={772--789},
doi={\href {http://dx.doi.org/10.1016/j.jmaa.2016.02.015}{10.1016/j.jmaa.2016.02.015}}, }
\bib{Com}{article}{
author={Combes, Fran\c {c}ois},
title={Crossed products and {M}orita equivalence},
journal={Proc. Lond. Math. Soc. (3)},
volume={49},
year={1984},
number={2},
pages={289--306},
issn={0024-6115},
doi={\href {http://dx.doi.org/10.1112/plms/s3-49.2.289}{10.1112/plms/s3-49.2.289}}, }
\bib{CrispLaca}{article}{
author={Crisp, John},
author={Laca, Marcelo},
title={Boundary quotients and ideals of {T}oeplitz {$C^*$}-algebras of {A}rtin groups},
journal={J. Funct. Anal.},
volume={242},
year={2007},
number={1},
pages={127--156},
issn={0022-1236},
doi={\href {http://dx.doi.org/10.1016/j.jfa.2006.08.001}{10.1016/j.jfa.2006.08.001}}, }
\bib{CunPV}{article}{
author={Cuntz, Joachim},
title={A class of $C^*$-algebras and topological Markov chains II. Reducible chains and the Ext-functor for $C^*$-algebras},
journal={Invent. Math.},
year={1981},
pages={25--40},
volume={63},
doi={\href {http://dx.doi.org/10.1007/BF01389192}{10.1007/BF01389192}}, }
\bib{CuntzQ}{article}{
author={Cuntz, Joachim},
title={$C^*$-algebras associated with the $ax+b$-semigroup over $\mathbb {N}$},
conference={ title={$K$-theory and noncommutative geometry}, },
book={ series={EMS Ser. Congr. Rep.}, publisher={Eur. Math. Soc., Z\"urich}, },
date={2008},
pages={201--215},
doi={\href {http://dx.doi.org/10.4171/060-1/8}{10.4171/060-1/8}}, }
\bib{CLintegral2}{article}{
author={Cuntz, Joachim},
author={Li, Xin},
title={{$C^*$}-algebras associated with integral domains and crossed products by actions on adele spaces},
journal={J. Noncommut. Geom.},
volume={5},
year={2011},
number={1},
pages={1--37},
issn={1661-6952},
doi={\href {http://dx.doi.org/10.4171/JNCG/68}{10.4171/JNCG/68}}, }
\bib{CuntzVershik}{article}{
author={Cuntz, Joachim},
author={Vershik, Anatoly},
title={{$C^*$}-algebras associated with endomorphisms and polymorphisms of compact abelian groups},
year={2013},
issn={0010-3616},
journal={Comm. Math. Phys.},
volume={321},
number={1},
doi={\href {http://dx.doi.org/10.1007/s00220-012-1647-0}{10.1007/s00220-012-1647-0}},
publisher={Springer-Verlag},
pages={157-179}, }
\bib{enders1}{article}{
author={Enders, Dominic},
title={Semiprojectivity for Kirchberg algebras},
note={\href {http://arxiv.org/abs/1507.06091}{arxiv:1507.06091v1}}, }
\bib{Evans}{article}{
author={Evans, D. Gwion},
title={On the {$K$}-theory of higher rank graph {$C\sp *$}-algebras},
journal={New York J. Math.},
volume={14},
year={2008},
pages={1--31},
issn={1076-9803},
note={\href {http://nyjm.albany.edu:8000/j/2008/14_1.html}{nyjm:8000/j/2008/14\textunderscore 1}}, }
\bib{FS}{article}{
author={Fowler, Neal J.},
author={Sims, Aidan},
title={Product systems over right-angled {A}rtin semigroups},
journal={Trans. Amer. Math. Soc.},
volume={354},
year={2002},
number={4},
pages={1487--1509},
issn={0002-9947},
doi={\href {http://dx.doi.org/10.1090/S0002-9947-01-02911-7}{10.1090/S0002-9947-01-02911-7}}, }
\bib{Gl}{article}{
author={Glimm, James G.},
title={On a certain class of operator algebras},
journal={Trans. Amer. Math. Soc.},
volume={95},
year={1960},
pages={318--340}, }
\bib{hr}{book}{
author={Hewitt, Edwin},
author={Ross, Kenneth A.},
title={Abstract harmonic analysis. Vol.~I},
series={Grundlehren Math. Wiss.},
volume={115},
edition={2},
note={Structure of topological groups, integration theory, group representations},
publisher={Springer-Verlag, Berlin-New York},
date={1979},
pages={ix+519},
isbn={3-540-09434-2}, }
\bib{Hir}{article}{
author={Hirshberg, Ilan},
title={On $C^*$-algebras associated to certain endomorphisms of discrete groups},
journal={New York J. Math.},
volume={8},
year={2002},
pages={99--109},
issn={1076-9803},
note={\href {http://nyjm.albany.edu:8000/j/2002/8_99.html}{nyjm.albany.edu:8000/j/2002/8\textunderscore 99}}, }
\bib{KOQ}{article}{
author={Kaliszewski, Steve},
author={Omland, Tron},
author={Quigg, John},
title={Cuntz-Li algebras from $a$-adic numbers},
journal={Rev. Roumaine Math. Pures Appl.},
volume={59},
date={2014},
number={3},
pages={331--370},
issn={0035-3965}, }
\bib{kasparov}{article}{
author={Kasparov, Gennadi},
title={Equivariant $KK$-theory and the Novikov conjecture},
journal={Invent. Math.},
volume={91},
date={1988},
number={1},
pages={147--201},
issn={0020-9910},
doi={\href {http://dx.doi.org/10.1007/BF01404917}{10.1007/BF01404917}}, }
\bib{KatsuraIV}{article}{
author={Katsura, Takeshi},
title={A class of {$C^*$}-algebras generalizing both graph algebras and homeomorphism {$C^*$}-algebras. {IV}. {P}ure infiniteness},
journal={J. Funct. Anal.},
volume={254},
year={2008},
number={5},
pages={1161--1187},
issn={0022-1236},
doi={\href {http://dx.doi.org/10.1016/j.jfa.2007.11.014}{10.1016/j.jfa.2007.11.014}}, }
\bib{Kir}{article}{
author={Kirchberg, Eberhard},
title={The classification of purely infinite C*-algebras using Kasparov's theory},
journal={to appear in Fields Inst. Commun, Amer. Math. Soc., Providence, RI}, }
\bib{KP}{article}{
author={Kumjian, Alex},
author={Pask, David},
title={Higher rank graph {$C^*$}-algebras},
journal={New York J. Math.},
volume={6},
year={2000},
pages={1--20},
issn={1076-9803},
note={\href {http://nyjm.albany.edu:8000/j/2000/6_1.html}{nyjm:8000/j/2000/6\textunderscore 1}}, }
\bib{Lac}{article}{
author={Laca, Marcelo},
title={From endomorphisms to automorphisms and back: dilations and full corners},
journal={J. Lond. Math. Soc. (2)},
volume={61},
year={2000},
number={3},
pages={893--904},
doi={\href {http://dx.doi.org/10.1112/S0024610799008492}{10.1112/S0024610799008492}}, }
\bib{LacRae}{article}{
author={Laca, Marcelo},
author={Raeburn, Iain},
title={Semigroup crossed products and the {T}oeplitz algebras of nonabelian groups},
journal={J. Funct. Anal.},
volume={139},
year={1996},
number={2},
pages={415--440},
issn={0022-1236},
doi={\href {http://dx.doi.org/10.1006/jfan.1996.0091}{10.1006/jfan.1996.0091}}, }
\bib{LarsenLi}{article}{
author={Larsen, Nadia S.},
author={Li, Xin},
title={The $2$-adic ring {$C^*$}-algebra of the integers and its representations},
journal={J. Funct. Anal.},
volume={262},
year={2012},
number={4},
pages={1392--1426},
doi={\href {http://dx.doi.org/10.1016/j.jfa.2011.11.008}{10.1016/j.jfa.2011.11.008}}, }
\bib{Li1}{article}{
author={Li, Xin},
title={Semigroup {$C^*$}-algebras and amenability of semigroups},
journal={J. Funct. Anal.},
volume={262},
year={2012},
number={10},
pages={4302--4340},
issn={0022-1236},
doi={\href {http://dx.doi.org/10.1016/j.jfa.2012.02.020}{10.1016/j.jfa.2012.02.020}}, }
\bib{LN2}{article}{
author={Li, Xin},
author={Norling, Magnus D.},
title={Independent resolutions for totally disconnected dynamical systems {II}: {$C^*$}-algebraic case},
journal={J. Operator Theory},
volume={75},
year={2016},
number={1},
pages={163--193},
note={\href {http://www.theta.ro/jot/archive/2016-075-001/2016-075-001-009.pdf}{2016-075-001/2016-075-001-009}}, }
\bib{loring1}{book}{
author={Loring, Terry A.},
title={Lifting solutions to perturbing problems in $C^*$-algebras},
series={Fields Inst. Monogr.},
volume={8},
publisher={Amer. Math. Soc., Providence, RI},
date={1997},
pages={x+165},
isbn={0-8218-0602-5}, }
\bib{massey1}{article}{
author={Massey, William S.},
title={Exact couples in algebraic topology.~I, II},
journal={Ann. of Math. (2)},
volume={56},
date={1952},
pages={363--396},
issn={0003-486X}, }
\bib{massey2}{article}{
author={Massey, William S.},
title={Exact couples in algebraic topology.~III, IV, V},
journal={Ann. of Math. (2)},
volume={57},
date={1953},
pages={248--286},
issn={0003-486X}, }
\bib{Nic}{article}{
author={Nica, Alexandru},
title={$C^*$-algebras generated by isometries and {W}iener-{H}opf operators},
journal={J. Operator Theory},
volume={27},
year={1992},
number={1},
pages={17--52},
issn={0379-4024}, }
\bib{Oml}{article}{
author={Omland, Tron},
title={$C^*$-algebras associated with $a$-adic numbers},
conference={ title={Operator algebra and dynamics}, },
book={ series={Springer Proc. Math. Stat.}, volume={58}, publisher={Springer, Heidelberg}, },
date={2013},
pages={223--238},
doi={\href {http://dx.doi.org/10.1007/978-3-642-39459-1_11}{10.1007/978-3-642-39459-1\textunderscore 11}}, }
\bib{Pas}{article}{
author={Paschke, William L.},
title={$K$-theory for actions of the circle group on $C^*$-algebras},
journal={J. Operator Theory},
volume={6},
number={1},
year={1981},
pages={125--133}, }
\bib{Phi}{article}{
author={Phillips, N. Christopher},
title={A classification theorem for nuclear purely infinite simple $C^*$-algebras},
journal={Doc. Math.},
volume={5},
year={2000},
pages={49--114},
issn={1431-0635}, }
\bib{PV}{article}{
author={Pimsner, Mihai V.},
author={Voiculescu, Dan-Virgil},
title={Exact sequences for $K$-groups and Ext-groups of certain cross-product $C^*$-algebras},
journal={J. Operator Theory},
volume={4},
number={1},
year={1980},
pages={93--118}, }
\bib{RS}{article}{
author={Raeburn, Iain},
author={Szyma{\'n}ski, Wojciech},
title={Cuntz-Krieger algebras of infinite graphs and matrices},
journal={Trans. Amer. Math. Soc.},
volume={356},
number={1},
pages={39--59},
year={2004},
doi={\href {http://dx.doi.org/10.1090/S0002-9947-03-03341-5}{10.1090/S0002-9947-03-03341-5}}, }
\bib{rordam-zd}{article}{
author={R{\o }rdam, Mikael},
title={Classification of nuclear, simple $C^*$-algebras},
conference={ title={Classification of nuclear $C^*$-algebras. Entropy in operator algebras} },
book={ series={Encyclopaedia Math. Sci.}, volume={126}, publisher={Springer, Berlin}, },
date={2002},
pages={1--145},
doi={\href {http://dx.doi.org/10.1007/978-3-662-04825-2_1}{10.1007/978-3-662-04825-2\textunderscore 1}}, }
\bib{savinienBellissard}{article}{
author={Savinien, Jean},
author={Bellissard, Jean},
title={A spectral sequence for the $K$-theory of tiling spaces},
journal={Ergodic Theory Dynam. Systems},
volume={29},
date={2009},
number={3},
pages={997--1031},
issn={0143-3857},
doi={\href {http://dx.doi.org/10.1017/S0143385708000539}{10.1017/S0143385708000539}}, }
\bib{schochet}{article}{
author={Schochet, Claude L.},
title={Topological methods for {$C^*$}-algebras.~I. Spectral sequences},
year={1981},
journal={Pacific J. Math.},
volume={96},
number={1},
pages={193--211},
note={\href {http://projecteuclid.org/euclid.pjm/1102734956}{euclid.pjm/1102734956}}, }
\bib{Sta1}{article}{
author={Stammeier, Nicolai},
title={On {$C^*$}-algebras of irreversible algebraic dynamical systems},
journal={J. Funct. Anal.},
volume={269},
year={2015},
number={4},
pages={1136--1179},
doi={\href {http://dx.doi.org/10.1016/j.jfa.2015.02.005}{10.1016/j.jfa.2015.02.005}}, }
\bib{Sta3}{article}{
author={Stammeier, Nicolai},
title={A boundary quotient diagram for right LCM semigroups},
note={\href {http://arxiv.org/abs/1604.03172}{arxiv:1604.03172}}, }
\bib{Wil}{book}{
author={Williams, Dana P.},
title={Crossed products of $C^*$-algebras},
series={Math. Surveys Monogr.},
volume={134},
publisher={Amer. Math. Soc., Providence, RI},
date={2007},
pages={xvi+528},
isbn={978-0-8218-4242-3},
isbn={0-8218-4242-0},
doi={\href {http://dx.doi.org/10.1090/surv/134}{10.1090/surv/134}}, }
\bib{Z}{article}{
author={Zhang, Shuang},
title={Certain $C^*$-algebras with real rank zero and their corona and multiplier algebras.~I},
journal={Pacific J. Math.},
volume={155},
number={1},
year={1992},
pages={169--197},
note={\href {http://projecteuclid.org/euclid.pjm/1102635475}{euclid.pjm/1102635475}}, }
\end{biblist}
\end{document}
\begin{document}
\title{The Master Equation in a Bounded Domain with Neumann conditions}
\author{Michele Ricciardi}\thanks{Dipartimento di Informatica, Universit\`{a} degli studi di Verona.
Via S. Francesco, 22, 37129 Verona (VR), Italy.
\texttt{[email protected]}}
\date{\today}
\maketitle
\begin{abstract}
In this article we study the well-posedness of the Master Equation of Mean Field Games in a framework of Neumann boundary conditions. The definition of solution is closely related to the classical one of the Mean Field Games system, but the boundary condition here leads to two Neumann conditions in the Master Equation formulation, for both the space and the measure variable. The global regularity of the linearized system, which is crucial in order to prove the existence of solutions, is obtained through a careful study of the boundary conditions and of the global boundary regularity of a suitable class of parabolic equations.
\end{abstract}
\section{Introduction}
Mean Field Games theory is devoted to the study of differential games with a large number $N$ of small and indistinguishable agents. The theory was initially introduced by J.-M. Lasry and P.-L. Lions in 2006 (\cite{LL1, LL2, LL3, LL-japan}), using tools from mean-field theories, and in the same years by P. Caines, M. Huang and R. Malham\'{e} \cite{HCM}.
The macroscopic description used in mean field game theory leads to the study of coupled systems of PDEs, where the Hamilton-Jacobi-Bellman equation satisfied by the single agent's value function $u$ is coupled with the Kolmogorov Fokker-Planck equation satisfied by the distribution law of the population $m$. The simplest form of this system is the following
\begin{equation}\label{meanfieldgames}
\begin{cases}
-\partial_t u - \mathrm{tr}(a(x)D^2u) +H(x,Du)=F(x,m)\,,\\
\partial_t m - \sum\limits_{i,j} \partial_{ij}^2 (a_{ij}(x)m) -\mathrm{div}(mH_p(x,Du))=0\,,\\
m(0)=m_0\,, \hspace{2cm} u(T)=G(x,m(T))\,.
\end{cases}
\end{equation}
Here, $H$ is called the \emph{Hamiltonian} of the system, whereas $a$ is a uniformly elliptic matrix, representing the square of the diffusion term in the stochastic dynamic of the generic player, and $F$ and $G$ are the running cost and the final cost related to the generic player.\\
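A model case, recalled here only for illustration, is the quadratic Hamiltonian $H(x,p)=\frac12|p|^2$ with $a(x)\equiv I_d$: then $H_p(x,Du)=Du$ and the system \eqref{meanfieldgames} becomes
\begin{equation*}
\begin{cases}
-\partial_t u - \Delta u +\frac12|Du|^2=F(x,m)\,,\\
\partial_t m - \Delta m -\mathrm{div}(m\,Du)=0\,,\\
m(0)=m_0\,, \hspace{2cm} u(T)=G(x,m(T))\,.
\end{cases}
\end{equation*}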
In his lectures at Coll\`{e}ge de France \cite{prontoprontopronto}, P.-L. Lions proved that the solutions $(u,m)$ of \eqref{meanfieldgames} are the trajectories of a new infinite dimensional partial differential equation.
This PDE is called the \textbf{\emph{Master Equation}} and summarizes the information contained in \eqref{meanfieldgames} in a single equation.
The definition of the Master Equation is related to its characteristics, which are solutions of the MFG system. To be more precise, considering the solution $(u,m)$ of the system \eqref{meanfieldgames} with initial condition $m(t_0)=m_0$, one defines the function
\begin{equation}\label{defu}
U:[0,T]\times\Omega\times\mathcal{P}(\Omega)\to\R\,,\qquad U(t_0,x,m_0)=u(t_0,x)\,,
\end{equation}
where $\Omega\subseteq\R^d$ and $\mathcal{P}(\Omega)$ is the set of Borel probability measures on $\Omega$.
In order to give sense to this definition, the system \eqref{meanfieldgames} must have a unique solution defined in $[t_0,T]\times\Omega$ for all $(t_0,m_0)$. So, we assume that $F$ and $G$ are \emph{monotone} functions with respect to the measure variable, a structure condition which ensures existence and uniqueness of solutions on time intervals of arbitrary length.
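Let us recall that monotonicity is understood, as is customary after Lasry and Lions, in the following sense: $F$ is monotone in the measure variable if
$$
\ensuremath{\int_{\Omega}}\left(F(x,m_1)-F(x,m_2)\right)d(m_1-m_2)(x)\ge0\qquad\mbox{for all } m_1,m_2\in\mathcal{P}(\Omega)\,,
$$
and similarly for $G$.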
If we compute, at least formally, the equation satisfied by $U$, we obtain a Hamilton-Jacobi equation in the space of measures, called Master Equation.\\
The relevance of the Master Equation has been recognized in several papers, and fundamental questions such as existence, uniqueness and regularity of solutions have been investigated. For example, in \cite{14} and \cite{15} Bensoussan, Frehse and Yam reformulated this equation as a PDE set on an $L^2$ space, and in \cite{24} Carmona and Delarue interpreted it as a decoupling field of forward-backward stochastic differential equations in infinite dimension. A very general result of well-posedness of the Master Equation was given by Cardaliaguet, Delarue, Lasry and Lions in \cite{card}.
So far, most of the literature on the Master Equation considers the case where the state variable $x$ belongs to the flat torus (i.e. periodic solutions, $\Omega=\mathbb{T}^d$) or, especially in the probabilistic literature, to the whole space $\R^d$.
But in many economic and financial applications it is useful to work with a process that remains in a certain domain of existence; thus, some conditions at the boundary need to be prescribed. See for instance the models analyzed by Achdou, Buera et al. in \cite{gol}.
In this paper we want to analyze this situation, by studying the well-posedness of the Master Equation in a framework of Neumann condition at the boundary.
In this case the MFG system \eqref{meanfieldgames} is constrained with the following boundary conditions: for all $x\in\partial\Omega$
\begin{equation}\label{fame}
a(x)D_x u(t,x)\cdot\nu(x)=0\,,\qquad \mathlarger{[}a(x) Dm(t,x)+m(H_p(x,Du)+\tilde{b}(x))\mathlarger{]}\cdot\nu(x)=0\,,
\end{equation}
where $\nu(\cdot)$ is the outward normal at $\partial\Omega$ and $\tilde{b}$ is a vector field defined as follows:
$$
\tilde{b}_i(x)=\mathlarger{\sum}\limits_{j=1}^d\frac{\partial a_{ji}}{\partial x_j}(x)\hspace{0.08cm},\hspace{2cm}i=1,\dots,d\hspace{0.08cm}.
$$
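For instance, in the model case $a(x)\equiv I_d$ one has $\tilde{b}\equiv0$, and the conditions \eqref{fame} reduce to $Du(t,x)\cdot\nu(x)=0$ and $\bigl(Dm(t,x)+m\,H_p(x,Du)\bigr)\cdot\nu(x)=0$ for $x\in\partial\Omega$.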
The Master Equation, in this case, takes the following form
\begin{equation}\begin{split}
\label{Master}
\left\{
\begin{array}{rl}
&-\,\partial_t U(t,x,m)-\mathrm{tr}\left(a(x)D_x^2 U(t,x,m)\right)+H\left(x,D_x U(t,x,m)\right)\\&-\mathlarger{\ensuremath{\int_{\Omega}}}\mathrm{tr}\left(a(y)D_y D_m U(t,x,m,y)\right)dm(y)\\&+\mathlarger{\ensuremath{\int_{\Omega}}} D_m U(t,x,m,y)\cdot H_p(y,D_x U(t,y,m))dm(y)= F(x,m)\\&\mbox{in }(0,T)\times\Omega\times\mathcal{P}(\Omega)\hspace{0.08cm},
\\
&U(T,x,m)=G(x,m)\hspace{1cm}\mbox{in }\Omega\times\mathcal{P}(\Omega)\hspace{0.08cm},
\\
&a(x)D_x U(t,x,m)\cdot\nu(x)=0\hspace{1cm}\quad\,\mbox{for }(t,x,m)\in(0,T)\times\partial\Omega\times\mathcal{P}(\Omega)\,,\\
&a(y)D_m U(t,x,m,y)\cdot\nu(y)=0\hspace{1cm}\mbox{for }(t,x,m,y)\in(0,T)\times\Omega\times\mathcal{P}(\Omega)\times\partial\Omega\,,
\end{array}
\right.
\end{split}\end{equation}
where $D_mU$ is a derivative of $U$ with respect to the measure, whose precise definition will be given later. In any case, this notion is closely related to the one given by Ambrosio, Gigli and Savar\'{e} in \cite{ags} and by Lions in \cite{prontoprontopronto}.\\
We stress the fact that the last boundary condition, i.e.
$$
a(y)D_m U(t,x,m,y)\cdot\nu(y)=0\hspace{1cm}\mbox{for }(t,x,m,y)\in(0,T)\times\Omega\times\mathcal{P}(\Omega)\times\partial\Omega\,,
$$
is completely new in the literature. It reflects the fact that one has to pay attention to the space where $U$ is defined, i.e. $[0,T]\times\Omega\times\mathcal{P}(\Omega)$. Hence, together with the final datum and the Neumann condition with respect to $x$, there is another boundary condition coming from the boundary of $\mathcal{P}(\Omega)$.\\
The importance of this equation arises in the so-called \emph{convergence problem}. The Mean Field Games system approximates the $N$-player differential game, in the sense that the optimal strategies in the Mean Field Games system provide approximated Nash equilibria (called $\varepsilon$-Nash equilibria) in the $N$-player game. See, for instance, \cite{resultuno}, \cite{resultdue}, \cite{resulttre}.
Conversely, the convergence of the Nash Equilibria in the $N$-player game towards an optimal strategy in the Mean Field Games presents many difficulties, due to the lack of compactness properties of the problem. Hence, the Master Equation plays an instrumental role in order to study this problem. A convergence result in a framework of Neumann conditions at the boundary will be given in the forthcoming paper \cite{prossimamente}.\\
There are many papers about the well-posedness of the Master Equation. We point out that these papers deal with two different settings: the first one is the so-called \emph{First order Master Equation}, studied in this article, where the Brownian motions in the dynamics of the agents in the $N$-player differential game are independent of each other. The second one is the \emph{Second order Master Equation}, or \emph{Master Equation with common noise}. In this case, the dynamics also contain
an additional Brownian term $dW_t$, which is common to all players. This leads to a different and more difficult type of Master Equation, with some additional terms depending also on the second derivative $D_{mm}U$. It is worth mentioning that Mean Field Games with common noise were already studied by Carmona, Delarue and Lacker in \cite{loacker}.
Some preliminary results about the Master Equation were given by Lions in \cite{prontoprontopronto} and a first exhaustive result of existence and uniqueness of solutions was proved, with a probabilistic approach, by Chassagneux, Crisan and Delarue in \cite{28}, who worked in a framework with diffusion and without common noise. Buckdahn, Li, Peng and Rainer in \cite{bucchin} proved the existence of a classical solution using probabilistic arguments, when there is no coupling and no common noise. Furthermore, Gangbo and Swiech proved short-time existence for the Master Equation with common noise, see \cite{nuova16}.
But the most important result in this framework was achieved by Cardaliaguet, Delarue, Lasry and Lions in \cite{card}, who proved existence and uniqueness of solutions for the Master Equation with and without common noise, including applications to the convergence problem, in a periodic setting ($\Omega=\mathbb{T}^d$). Other recent results about Master Equation and convergence problem can be found in \cite{dybala, nuova1, gomez, nuova14, cicciocaputo, nuova11, ramadan, nuova4, fifa21, tonali, koulibaly}.\\
This article follows the main ideas of \cite{card}, but many issues connected to the Neumann boundary conditions appear, and more effort is needed in order to obtain analogous results.
The function $U$ is defined as in \eqref{defu} and some estimates like global bounds and global Lipschitz regularity are proved.
The main issue in order to show that $U$ solves \eqref{Master} is to prove the $\mathcal{C}^1$ character of $U$ with respect to $m$. This step requires a careful analysis of the linearized mean field game system (see \cite{card}), together with strong regularity estimates for $U$, and hence for the Mean Field Games system, in the space and in the measure variable.
The space regularity is obtained in \cite{card} by differentiating the equation with respect to $x$. But in the Neumann case, and in general in the presence of boundary conditions, this method cannot be applied so directly, and these bounds are obtained using different kinds of space-time estimates, which must be handled with care.
Indeed, regularity estimates for parabolic equations with Neumann boundary conditions require compatibility conditions between initial and boundary data. Unfortunately, these compatibility conditions are not always guaranteed in this context. This forces us to generalize the estimates obtained in \cite{card} by a deeper study of the regularity of solutions of the Fokker-Planck equation.\\
The article is divided as follows.
In section $2$ we define some useful tools and we state the main assumptions we will need in order to prove the next results.
In the rest of the article (sections $3$ to $6$), we analyze the well-posedness of the Master Equation \eqref{Master}.
The idea is quite classical: for each $(t_0,m_0)$ we consider the $MFG$ system \eqref{meanfieldgames} in $[t_0,T]\times\Omega$, with conditions \eqref{fame}, and we define
\begin{equation}\label{U}
U(t_0,x,m_0)=u(t_0,x)\,.
\end{equation}
Then we prove that $U$ is a solution of the Master Equation.\\
Sections $3$--$5$ are devoted to proving the technical results needed to ensure this kind of differentiability.
In section $3$ we prove a first estimate of a solution $(u,m)$ of the Mean Field Games system, namely
$$
\norm{u}\amd\le C\,,\qquad\mathbf{d}_1(m(t),m(s))\le C|t-s|^\miezz\,,
$$
where $\mathbf{d}_1$ is a distance between measures called \emph{Wasserstein distance}, whose definition will be given in Section $3.2$.
In section $4$ we use the definition of $U$ from the Mean Field Games system in order to prove a Lipschitz character of $U$ with respect to $m$:
$$
\norm{U(t,\cdot,m_1)-U(t,\cdot,m_2)}_{2+\alpha}\le C\mathbf{d}_1(m_1,m_2)\,.
$$
In section $5$ we prove the $\mathcal{C}^1$ character of $U$ with respect to $m$. This goes through different estimates on linearized MFG systems.
Once the $\mathcal{C}^1$ character of $U$ is established, we can prove that $U$ is actually the unique solution of the Master Equation \eqref{Master}. This will be done in Section $6$.\\
\section{Notation, Assumptions and Main Result}
Throughout this chapter, we fix a time $T>0$. $\Omega\subset\R^d$ will be the closure of an open bounded set, with boundary of class $\mathcal{C}^{2+\alpha}$, and we define $Q_T:=[0,T]\times\Omega$.
For $n\ge0$ and $\alpha\in(0,1)$ we denote by $\mathcal{C}^{n+\alpha}(\Omega)$, or simply $\mathcal{C}^{n+\alpha}$, the space of functions $\phi\in\mathcal{C}^n(\Omega)$ such that, for each multi-index $\ell$ with $1\le|\ell|\le n$, the derivative $D^\ell\phi$ is H\"{o}lder continuous with exponent $\alpha$. The norm is defined in the following way:
\begin{align*}
\norm{\phi}_{n+\alpha}:=\sum\limits_{|\ell|\le n}\norminf{D^\ell\phi}+\sum\limits_{|\ell|= n}\sup\limits_{x\neq y}\frac{|D^\ell\phi(x)-D^\ell\phi(y)|}{|x-y|^\alpha}\,.
\end{align*}
Sometimes, in order to deal with Neumann boundary conditions, we will need to work with a suitable subspace of $\mathcal{C}^{n+\alpha}(\Omega)$.
So we will call $\mathcal{C}^{n+\alpha,N}(\Omega)$, or simply $\mathcal{C}^{n+\alpha,N}$, the set of functions $\phi\in\mathcal{C}^{n+\alpha}$ such that $aD\phi\cdot\nu_{|\partial\Omega}=0$, endowed with the same norm $\norm{\phi}_{n+\alpha}$.\\
Then, we define several parabolic spaces we will need to work with during the chapter, starting from $\mathcal{C}^{\frac{n+\alpha}{2},n+\alpha}([0,T]\times\Omega)$.
We say that $\phi:[0,T]\times\Omega\to\R$ is in $\mathcal{C}^{\frac{n+\alpha}{2},n+\alpha}([0,T]\times\Omega)$ if $\phi$ is continuous in both variables, together with all derivatives $D_t^rD_x^s\phi$, with $2r+s\le n$. Moreover, $\norm{\phi}_{\frac{n+\alpha}{2},n+\alpha}$ is bounded, where
\begin{align*}
\norm{\phi}_{\frac{n+\alpha}{2},n+\alpha}:=\sum\limits_{2r+s\le n}\norminf{D_t^rD^s_x\phi}&+\sum\limits_{2r+s=n}\sup\limits_t\norm{D_t^rD_x^s\phi(t,\cdot)}_{\alpha}\\&+\sum\limits_{0<n+\alpha-2r-s<2}\sup\limits_x\norm{D_t^rD_x^s\phi(\cdot,x)}_{\frac{n+\alpha-2r-s}{2}}\,.
\end{align*}
The space of continuous space-time functions which satisfy a H\"{o}lder condition in $x$ will be denoted by $\mathcal{C}^{0,\alpha}([0,T]\times\Omega)$. It is endowed with the norm
$$
\norm{\phi}_{0,\alpha}=\sup\limits_{t\in[0,T]}\norm{\phi(t,\cdot)}_\alpha\,.
$$
The same definition can be given for the space $\mathcal{C}^{\alpha,0}$. Finally, we define the space $\mathcal{C}^{1,2+\alpha}$ of functions differentiable in time and twice differentiable in space, with all derivatives in $\mathcal{C}^{0,\alpha}(\overline{Q_T})$. The natural norm for this space is
$$
\norm{\phi}_{1,2+\alpha}:=\norminf{\phi}+\norm{\phi_t}_{0,\alpha}+\norminf{D_x\phi}+\norm{D^2_x\phi}_{0,\alpha}\,.
$$
We note that, thanks to \emph{Lemma 5.1.1} of \cite{lunardi}, the first order derivatives of $\phi\in\mathcal{C}^{1,2+\alpha}$ also satisfy a H\"{o}lder condition in time. Namely
\begin{equation}\label{precisissimongulaeva}
\norm{D_x\phi}_{\miezz,\alpha}\le C\norm{\phi}_{1,2+\alpha}\,.
\end{equation}
In order to study distributional solutions for the Fokker-Planck equation, we also need to define a structure for the dual spaces of regular functions.
We define, for $n\ge0$ and $\alpha\in(0,1)$, the space $\mathcal{C}^{-(n+\alpha)}(\Omega)$, called for simplicity $\mathcal{C}^{-(n+\alpha)}$ in this chapter, as the dual space of $\mathcal{C}^{n+\alpha}$, endowed with the norm
$$
\norm{\rho}_{-(n+\alpha)}=\sup\limits_{\norm{\phi}_{n+\alpha}\le 1}\langle\rho,\phi\rangle\,.
$$
With the same notations we define the space $\mathcal{C}^{-(n+\alpha),N}$ as the dual space of $\mathcal{C}^{n+\alpha,N}$ endowed with the same norm:
$$
\norm{\rho}_{-(n+\alpha),N}=\sup\limits_{\substack{\norm{\phi}_{n+\alpha}\le 1\\aD\phi\cdot\nu_{|\partial\Omega}=0}}\langle \rho,\phi \rangle\,.
$$
Finally, for $k\ge1$ and $1\le p\le+\infty$, we can also define the space $W^{-k,p}(\Omega)$, called for simplicity $W^{-k,p}$, as the dual space of $W^{k,p}(\Omega)$, endowed with the norm
$$
\norm{\rho}_{W^{-k,p}}=\sup\limits_{\norm{\phi}_{W^{k,p}}\le 1}\langle\rho,\phi\rangle\,.
$$
\begin{defn}
Let $m_1,m_2\in\mathcal{P}(\Omega)$ be two Borel probability measures on $\Omega$.\\
We call \emph{Wasserstein distance} between $m_1$ and $m_2$, written $\mathbf{d}_1(m_1,m_2)$, the quantity
\begin{equation}\label{wass1}
\mathbf{d}_1(m_1,m_2):=\sup\limits_{Lip(\phi)\le 1}\ensuremath{\int_{\Omega}} \phi(x)d(m_1-m_2)(x)\,.
\end{equation}
\end{defn}
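As a simple illustration, not needed in the sequel: for two Dirac masses $\delta_{x_1},\delta_{x_2}\in\mathcal{P}(\Omega)$ one has
$$
\mathbf{d}_1(\delta_{x_1},\delta_{x_2})=\sup\limits_{Lip(\phi)\le 1}\left(\phi(x_1)-\phi(x_2)\right)=|x_1-x_2|\,,
$$
the supremum being attained, for instance, by $\phi(x)=|x-x_2|$.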
We note that we can also write \eqref{wass1} as
\begin{align}\label{wass}
\mathbf{d}_1(m_1,m_2):=\sup\limits_{\substack{\norm{\phi}_{W^{1,\infty}}\le C\\Lip(\phi)\le 1}}\ensuremath{\int_{\Omega}} \phi(x)d(m_1-m_2)(x)\,,
\end{align}
for a certain $C>0$. Actually, for a fixed $x_0\in\Omega$, we can restrict ourselves to the functions $\phi$ such that $\phi(x_0)=0$, since
$$
\ensuremath{\int_{\Omega}} \phi(x)d(m_1-m_2)(x)=\ensuremath{\int_{\Omega}} (\phi(x)-\phi(x_0))d(m_1-m_2)(x)\,,
$$
and these functions satisfy $\norm{\phi}_{W^{1,\infty}}\le C$ for a certain $C>0$: indeed, $|\phi(x)|=|\phi(x)-\phi(x_0)|\le|x-x_0|\le\mathrm{diam}(\Omega)$ for every $x\in\Omega$, so one can take $C=\max(1,\mathrm{diam}(\Omega))$.
We will always work with \eqref{wass}, where the restriction in $W^{1,\infty}$ allows us to obtain some desired estimates with respect to $\mathbf{d}_1$.
\begin{comment}
We will need also some different distances, that are a natural extension of the Wasserstein distance.
\begin{defn}
Let $m_1,m_2\in\mathcal{P}(\Omega)$, and let $\alpha\in(0,1)$. We define a generalized Wasserstein distance in this way:
$$
\daw{\alpha}(m_1,m_2):=\sup\limits_{\norm{\phi}_{\alpha}\le 1}\ensuremath{\int_{\Omega}} \phi(x)d(m_1-m_2)(x)\,.
$$
\end{defn}
Actually, we can define this distance for all $\alpha>0$, but only in case of $\alpha\in(0,1)$ these distances are equivalent to the Wasserstein one, as stated in the following proposition.
\begin{prop}
Let $\alpha\in(0,1)$. Then the following inequalities hold true:
\begin{equation}\label{equivwass}
\mathbf{d}_1(m_1,m_2)\le\daw{\alpha}(m_1,m_2)\le 3\,\mathbf{d}_1(m_1,m_2)^\alpha\,.\\
\end{equation}
Hence, $\mathbf{d}_1$ and $\daw{\alpha}$ are equivalent distances.
\begin{proof}
The inequality
$$
\mathbf{d}_1(m_1,m_2)\le\daw{\alpha}(m_1,m_2)
$$
is obvious. In order to prove the other one, we take $\phi$ $\alpha$-H\"older, with H\"older constant smaller than $1$, and we consider, for $L>0$, the following function:
$$
\phi_L(x)=\sup\limits_{z\in\Omega}\left(\phi(z)-L|x-z|\right)\,.
$$
It is easy to prove that $\phi_L$ is an $L$-Lipschitz function and that $\phi_L(x)\ge\phi(x)$.
We call $z_x$ a point where the $\sup$ is attained. Then we have
$$
\phi(x)\le\phi(z_x)-L|x-z_x|\implies L|x-z_x|\le|x-z_x|^\alpha\implies |x-z_x|\le L^{-\frac{1}{1-\alpha}}\,.
$$
Using these computations, we find
$$
\norm{\phi-\phi_L}_{\infty}=\sup\limits_{x\in\Omega}(\phi(z_x)-L|x-z_x|-\phi(x))\le L^{-\frac{\alpha}{1-\alpha}}\,.
$$
Now we can directly show the last inequality of \eqref{equivwass}:
\begin{align*}
\daw{\alpha}(m_1,m_2)&=\sup\limits_{\norm{\phi}_{\alpha}\le 1}\left(\ensuremath{\int_{\Omega}}(\phi(x)-\phi_L(x))d(m_1-m_2)(x)+\ensuremath{\int_{\Omega}}\phi_L(x)d(m_1-m_2)(x)\right)\le\\
&\le 2L^{-\frac{\alpha}{1-\alpha}}+L\,\daw{1}(m_1,m_2)\,.
\end{align*}
Choosing $L=\mathbf{d}_1(m_1,m_2)^{-(1-\alpha)}$, we finally obtain
$$
\daw{\alpha}(m_1,m_2)\le 3\,\mathbf{d}_1(m_1,m_2)^\alpha\,.
$$
\end{proof}
\end{prop}
Finally, we define the distance $\daw{0}$ in this way:
$$
\daw{0}(m_1,m_2):=\sup\limits_{\norminf{\phi}\le 1}\ensuremath{\int_{\Omega}} \phi(x)d(m_1-m_2)(x)\,.
$$
We note that this distance is well defined only if $m_1-m_2$ has a positive density with respect to the Lebesgue measure.
We note that $\mathcal{P}(\Omega)\subseteq W^{-1,\infty}(\Omega)$. Hence, the distance $\mathbf{d}_1$ is just the restriction of the $\norm{\cdot}_{W^{-1,\infty}}$ to the set $\mathcal{P}(\Omega)$.\\
\end{comment}
In order to give a meaning to equation \eqref{Master}, we need to define a suitable notion of derivative of $U$ with respect to the measure $m$.
\begin{defn}\label{dmu}
Let $U:\mathcal{P}(\Omega)\to\R$. We say that $U$ is of class $\mathcal{C}^1$ if there exists a continuous map $K:\mathcal{P}(\Omega)\times\Omega\to\R$ such that, for all $m_1$, $m_2\in\mathcal{P}(\Omega)$ we have
\begin{equation}\label{deu}
\lim\limits_{s\to0^+}\frac{U(m_1+s(m_2-m_1))-U(m_1)}{s}=\ensuremath{\int_{\Omega}} K(m_1,x)(m_2(dx)-m_1(dx))\,.
\end{equation}
\end{defn}
Note that $K$ is defined only up to an additive constant. Then, we define the derivative $\dm{U}$ as the unique map $K$ satisfying \eqref{deu} and the normalization convention
$$
\ensuremath{\int_{\Omega}} K(m,x)dm(x)=0\,.
$$
As an immediate consequence, we obtain the following equality, that we will use very often in the rest of the chapter: for each $m_1$, $m_2\in\mathcal{P}(\Omega)$ we have
$$
U(m_2)-U(m_1)=\int_0^1\ensuremath{\int_{\Omega}}\dm{U}(m_1+s(m_2-m_1),x)(m_2(dx)-m_1(dx))\,ds\,.
$$
Finally, we can define the \emph{intrinsic derivative} of $U$ with respect to $m$.
\begin{defn}\label{Dmu}
Let $U:\mathcal{P}(\Omega)\to\R$. If $U$ is of class $\mathcal{C}^1$ and $\dm{U}$ is of class $\mathcal{C}^1$ with respect to the last variable, we define the intrinsic derivative $D_mU:\mathcal{P}(\Omega)\times\Omega\to\R^d$ as
$$
D_mU(m,x):=D_x\dm{U}(m,x)\,.
$$
\end{defn}
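As an elementary example, not used in the proofs: if $U$ is the linear functional $U(m)=\ensuremath{\int_{\Omega}}\varphi(y)\,dm(y)$ for a fixed smooth function $\varphi$, then the incremental ratio in \eqref{deu} equals $\ensuremath{\int_{\Omega}}\varphi(y)(m_2(dy)-m_1(dy))$ for every $s$, and the normalization convention gives
$$
\dm{U}(m,y)=\varphi(y)-\ensuremath{\int_{\Omega}}\varphi\,dm\,,\qquad D_mU(m,y)=D\varphi(y)\,,
$$
since constants integrate to zero against $m_2-m_1$.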
\begin{comment}
In order to work with Neumann conditions, we need to readapt these distances in this framework, in the following way:
\begin{defn}
Let $m_1, m_2\in\mathcal{P}(\Omega)$, and let $\alpha\in(0,2)$. We define the following distance:
$$
\daw{\alpha,N}(m_1,m_2)=\sup\limits_{\substack{\norm{\phi}_{\alpha}\le 1\\aD\phi\cdot\nu_{|\partial\Omega}=0}}\ensuremath{\int_{\Omega}} \phi(x)d(m_1-m_2)(x)\,.
$$
\end{defn}
If $\alpha<1$, $\phi$ may not have a derivative. In this case, with the condition $aD\phi\cdot\nu_{|\partial\Omega}$, we mean that
$$
-\int_{\Omega}\phi\,\mathrm{div}(aD\xi_\delta)\,dx+\int_{\partial\Omega}\phi\,aD\xi_\delta\cdot\nu\,dx\overset{\delta\to0}{\to}0\,,
$$
for each $\xi_\delta\ge0$ such that $\xi_\delta=0$ in $\Omega_{\delta}$, $\xi_\delta=1$ in $\partial\Omega$.
A very useful result is the following:
\begin{prop}
For each $\alpha\in(0,2)$ there exists a $C>0$ such that, $\forall m_1,m_2\in\mathcal{P}(\Omega)$
$$
\daw{\alpha,N}(m_1,m_2)\le\daw{\alpha}(m_1,m_2)\le C\daw{\alpha,N}(m_1,m_2)\,.
$$
In particular, the distances $\daw{\alpha}$ and $\daw{\alpha,N}$ are equivalent.
\begin{proof}
The left inequality is obvious.\\
To prove the right inequality we take a function $\phi\in\mathcal{C}^{\alpha}$ with $\norm{\phi}_\alpha\le 1$ and we make a suitable approximation of it.\\
In order to do that, we first need some useful tools.\\
For $\delta>0$ and $x\in\Omega\setminus\Omega_{\delta}$, we consider the following ODE in $\R^d$:
\begin{align}
\begin{cases}
\xi'(t;x)=-a(\xi(t;x))\nu(\xi(t;x))\,,\\
\xi(0;x)=x\,,
\end{cases}
\end{align}
and the corresponding hitting time with $\partial\Omega_\delta$:
$$
T(x):=\inf\left\{t\ge0 \,|\, \xi(t;x)\notin\Omega\setminus\Omega_\delta\right\}\,.
$$
We have that $T(x)<+\infty$ for each $x\in\Omega\setminus\Omega_{\delta}$. To prove that, we consider the auxiliary function
$$
\Phi(t,x)=\delta-d(\xi(t;x)).
$$
So, the function $T(x)$ can be rewritten as
$$
T(x)=\inf\left\{ t\ge0\,|\, \Phi(t,x)=0 \right\}\,,
$$
and his finiteness is an obvious consequence of the decreasing character of $\Phi$ in time:
$$
\partial_t\Phi(t,x)=-Dd(\xi(t;x))\cdot\xi'(t;x)=-\langle a(\xi(t;x))\nu(\xi(t;x)),\nu(\xi(t;x))\rangle\le-\lambda<0\,.
$$
Moreover, thanks to Dini's theorem we obtain that $T(x)$ is a $\mathcal{C}^1$ function and his gradient is given by
$$
\nabla T(x)=-\frac{\nabla_x\Phi(T(x),x)}{\partial_t\Phi(T(x),x)}=\frac{\nu(\xi(T(x);x))\mathrm{Jac}_x\xi(T(x);x)}{\langle a\nu,\nu\rangle (\xi(T(x);x))}\,.
$$
Thanks to the regularity of $a$ and $\Omega$, we can differentiate w.r.t. $x$ the ODE \eqref{ODE} and obtain that $\xi(t;\cdot)\in\mathcal{C}^{1+\gamma}$ for each $\gamma<1$. This implies $T(\cdot)\in\mathcal{C}^{\alpha}$ and
$$
\norm{T}_\alpha\le C\,,
$$
for a certain $C=C(a,\Omega)$.\\
Now we define, for $\phi\in\mathcal{C}^{\alpha}$ and $\delta>0$, the approximating functions $\phi_\delta$ in the following way:
$$
\phi_\delta(x)=
\left\{
\begin{array}{lll}
\phi(x)\quad & \mbox{if } & x\in\Omega_\delta\,,\\
\phi(\xi(T(x);x))\quad & \mbox{if } & x\in\Omega\setminus\Omega_\delta\,,
\end{array}
\right.
$$
eventually considering a $\mathcal{C}^{\alpha}$ regularization in $\Omega_{\delta}\setminus{\Omega_{2\delta}}\,$.
\end{proof}
\end{prop}
\end{comment}
We need the following assumptions:
\begin{hp}\label{ipotesi}
Assume that
\begin{itemize}
\item [(i)] (Uniform ellipticity) $\norm{a(\cdot)}_{1+\alpha}<\infty$ and $\exists\,\mu>\lambda>0$ s.t. $\forall x\in\Omega$, $\forall\xi\in\mathbb{R}^d$ $$ \mu|\xi|^2\ge\langle a(x)\xi,\xi\rangle\ge\lambda|\xi|^2\,;$$
\item [(ii)]$H:\Omega\times\R^d\to\R$, $G:\Omega\times\mathcal{P}(\Omega)\to\R$ and $F:\Omega\times\mathcal{P}(\Omega)\to\R$ are smooth functions with $H$ Lipschitz with respect to the last variable;
\item [(iii)]$\exists C>0$ s.t.
$$
0< H_{pp}(x,p)\le C I_{d\times d}\hspace{0.08cm};
$$
\item [(iv)]$F$ satisfies, for some $0<\alpha<1$ and $C_F>0$,
$$
\ensuremath{\int_{\Omega}} \left(F(x,m)-F(x,m')\right) d(m-m')(x)\ge0
$$
and
$$
\sup\limits_{m\in\mathcal{P}(\Omega)}\left(\norm{F(\cdot,m)}_{\alpha}+\norm{\frac{\delta F}{\delta m}(\cdot,m,\cdot)}_{\alpha,2+\alpha}\right)+\mathrm{Lip}\left(\dm{F}\right)\le C_F\hspace{0.08cm},
$$
with
$$
\mathrm{Lip}\left(\dm{F}\right):=\sup\limits_{m_1\neq m_2}\left(\mathbf{d}_1(m_1,m_2)^{-1}\norm{\dm{F}(\cdot,m_1,\cdot)-\dm{F}(\cdot,m_2,\cdot)}_{\alpha,1+\alpha}\right)\hspace{0.08cm};
$$
\item [(v)]$G$ satisfies the same estimates as $F$ with $\alpha$ and $1+\alpha$ replaced by $2+\alpha$, i.e.
$$
\sup\limits_{m\in\mathcal{P}(\Omega)}\left(\norm{G(\cdot,m)}_{2+\alpha}+\norm{\frac{\delta G}{\delta m}(\cdot,m,\cdot)}_{2+\alpha,2+\alpha}\right)+\mathrm{Lip}\left(\dm{G}\right)\le C_G\hspace{0.08cm},
$$
with
$$
\mathrm{Lip}\left(\dm{G}\right):=\sup\limits_{m_1\neq m_2}\left(\mathbf{d}_1(m_1,m_2)^{-1}\norm{\dm{G}(\cdot,m_1,\cdot)-\dm{G}(\cdot,m_2,\cdot)}_{2+\alpha,2+\alpha}\right)\hspace{0.08cm};
$$
\item [(vi)] The following Neumann boundary conditions are satisfied:
\begin{align*}
&\left\langle a(y)D_y\dm{F}(x,m,y), \nu(y)\right\rangle_{|\partial\Omega}=0\,,\qquad \left\langle a(y)D_y\dm{G}(x,m,y),\nu(y)\right\rangle_{|\partial\Omega}=0\,,\\
&\langle a(x)D_xG(x,m), \nu(x)\rangle_{|\partial\Omega}=0\,,
\end{align*}
for all $m\in\mathcal{P}(\Omega)$.
\end{itemize}
\end{hp}
Some comments about the previous hypotheses: the first five are standard in order to obtain existence and uniqueness of solutions for the Mean Field Games system. The hypotheses on the derivatives of $F$ and $G$ with respect to the measure will be essential in order to obtain some estimates on a linearized MFG system.
As regards hypothesis $(vi)$, the second and the third boundary conditions are natural compatibility conditions, essential to obtain a classical solution for the $MFG$ and the linearized $MFG$ systems. The first boundary condition will be essential in order to prove the Neumann boundary condition of $D_mU$, see Corollary \ref{delarue}.
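For instance, a purely illustrative coupling satisfying the monotonicity requirement in $(iv)$ is $F(x,m)=\varphi(x)\ensuremath{\int_{\Omega}}\varphi(y)\,dm(y)$ for a fixed smooth function $\varphi$: indeed
$$
\ensuremath{\int_{\Omega}}\left(F(x,m)-F(x,m')\right)d(m-m')(x)=\left(\ensuremath{\int_{\Omega}}\varphi\,d(m-m')\right)^2\ge0\,,
$$
and $\dm{F}(x,m,y)=\varphi(x)\varphi(y)$ up to the normalization constant; of course, the regularity and boundary requirements in $(iv)$ and $(vi)$ impose further conditions on $\varphi$, such as $aD\varphi\cdot\nu_{|\partial\Omega}=0$.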
With these hypotheses we are able to prove existence and uniqueness of a classical solution for the Master Equation \eqref{Master}. The main result of this chapter is the following.
\begin{thm}\label{settepuntouno}
Suppose hypotheses \ref{ipotesi} are satisfied. Then there exists a unique classical solution $U$ of the Master Equation \eqref{Master}.
\end{thm}
First, however, we have to prove some preliminary estimates on the Mean Field Games system, and further estimates on a linearized Mean Field Games system, which will be essential in order to ensure the $\mathcal{C}^1$ character of $U$ with respect to $m$.
\section{Preliminary estimates and Mean Field Games system}
In this section we start by giving some technical results for linear parabolic equations, which will be useful in the rest of the chapter.
Then we will obtain some preliminary estimates for the Master Equation, derived from a careful analysis of the associated Mean Field Games system.
We start with this technical Lemma.
\begin{lem}\label{sonobravo}
Suppose $a$ satisfies $(i)$ of Hypotheses \ref{ipotesi}, $b,f\in L^\infty(Q_T)$. Furthermore, let $\psi\in\mathcal{C}^{1+\alpha,N}(\Omega)$, with $0\le\alpha<1$. Then the unique solution $z$ of the problem
\begin{equation*}
\begin{cases}
-z_t-\mathrm{tr}(a(x)D^2z)+b(t,x)\cdot Dz=f(t,x)\,,\\
z(T)=\psi\,,\\
aDz\cdot\nu_{|\partial\Omega}=0
\end{cases}
\end{equation*}
satisfies
\begin{equation}\label{estensione}
\norm{z}\amu\le C\left(\norminf{f}+\norm{\psi}_{1+\alpha}\right)\,.
\end{equation}
\begin{proof}
Note that, if $f$ and $b$ are continuous bounded functions, with $b$ depending only on $x$, this result is simply \emph{Theorem 5.1.18} of \cite{lunardi}. In the general case, we argue as follows.
We can write $z=z_1+z_2$, where $z_1$ satisfies
\begin{equation}\label{problemadue}
\begin{cases}
-{(z_1)}_t-\mathrm{tr}(a(x)D^2z_1)=0\,,\\
z_1(T)=\psi\,,\\
aDz_1\cdot\nu_{|\partial\Omega}=0\,.
\end{cases}
\end{equation}
and $z_2$ satisfies
\begin{equation}\label{problemauno}
\begin{cases}
-{(z_2)}_t-\mathrm{tr}(a(x)D^2z_2)+b(t,x)\cdot Dz_2=f(t,x)-b(t,x)\cdot Dz_1\,,\\
z_2(T)=0\,,\\
aDz_2\cdot\nu_{|\partial\Omega}=0\,,
\end{cases}
\end{equation}
Since in the equation \eqref{problemadue} of $z_1$ we do not have a drift term depending on time, we can apply \emph{Theorem 5.1.18} of \cite{lunardi} and obtain
$$
\norm{z_1}\amu\le C\norm{\psi}_{1+\alpha}\,.
$$
As regards \eqref{problemauno}, obviously $z_2(T)\in W^{2,p}(\Omega)$ $\forall p$, and from the estimate of $z_1$ we know that $f-bDz_1\in L^\infty$. So we can apply the Corollary of \emph{Theorem IV.9.1} of \cite{lsu} to obtain that, $\forall r\ge\frac{d+2}{2}\,,$
\begin{equation*}
\norm{z_2}_{1-\frac{d+2}{2r},2-\frac{d+2}{r}}\le C\norminf{f-bDz_1}\le C\left(\norminf{f}+\norm{\psi}_{1+\alpha}\right)\,.
\end{equation*}
Choosing $r=\frac{d+2}{1-\alpha}$, one has $2-\frac{d+2}{r}=1+\alpha$, and so \eqref{estensione} is satisfied for $z_2$.
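In fact, with this choice of $r$ one also has
$$
1-\frac{d+2}{2r}=1-\frac{1-\alpha}{2}=\frac{1+\alpha}{2}\,,
$$
so both the time and the space exponents of the previous estimate match the parabolic scale of the norm appearing in \eqref{estensione}.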
Since $z=z_1+z_2$, estimate \eqref{estensione} holds also for $z$. This concludes the proof.
\end{proof}
\end{lem}
When $f=0$, the result of Lemma \ref{sonobravo} can be generalized to the case where $\psi$ is only a Lipschitz function.
This result is well known if $a\in\mathcal{C}^2(\Omega)$, by applying a classical Bernstein method. In our framework, we have the following result.
\begin{lem}\label{davverotecnico}
Let $a$ and $b$ be bounded continuous functions, and $\psi\in W^{1,\infty}(\Omega)$. Then the unique solution $z$ of the problem
\begin{equation}\label{bohmovrim}
\begin{cases}
-z_t-\mathrm{tr}(a(x)D^2z)+b(t,x)\cdot Dz=0\,,\\
z(T)=\psi\,,\\
aDz\cdot\nu_{|\partial\Omega}=0
\end{cases}
\end{equation}
satisfies a H\"{o}lder condition in $t$ and a Lipschitz condition in $x$, namely $\exists C$ such that
\begin{align}\label{nnavonmmna}
|z(t,x)-z(s,x)|\le C\norm{\psi}_{W^{1,\infty}}|t-s|^\miezz\,,\qquad|z(t,x)-z(t,y)|\le C\norm{\psi}_{W^{1,\infty}}|x-y|\,.
\end{align}
\begin{proof}
If $\psi\in\mathcal{C}^{1,N}$, estimates \eqref{nnavonmmna} are guaranteed by \eqref{estensione} of Lemma \ref{sonobravo}.
In the general case, we take $\psi^n\in\mathcal{C}^{1}$ such that $\psi^n\to\psi$ in $\mathcal{C}([0,T]\times\Omega)$ and\\$\norm{\psi^n}_{1}\le C\norm{\psi}_{W^{1,\infty}}$, and we want to make a suitable approximation of it in order to obtain a function $\tilde{\psi}^n\in\mathcal{C}^{1,N}$, also converging to $\psi$.\\
In order to do that, we first need some useful tools.\\
For $\delta>0$, $d(\cdot)$ the distance function from $\partial\Omega$, $\Omega_\delta=\{ x\in\Omega\,|\,d(x)\ge\delta \}$ and $x\in\Omega\setminus\Omega_{\delta}$, we consider the following ODE in $\R^d$:
\begin{align}\label{ODE}
\begin{cases}
\xi'(t;x)=-a(\xi(t;x))\nu(\xi(t;x))\,,\\
\xi(0;x)=x\,,
\end{cases}
\end{align}
where $\nu$ is an extension of the outward unit normal in $\Omega\setminus\Omega_{\delta}$. Actually, we know from \cite{cingul} that $$Dd(x)_{|\partial\Omega}=-\nu(x)\,,$$
so a suitable extension can be $\nu(x)=-Dd(x)\,$.
Then we consider the corresponding hitting time of $\partial\Omega_\delta$:
$$
T(x):=\inf\left\{t\ge0 \,|\, \xi(t;x)\notin\Omega\setminus\Omega_\delta\right\}\,.
$$
We have that $T(x)<+\infty$ for each $x\in\Omega\setminus\Omega_{\delta}$. To prove that, we consider the auxiliary function
$$
\Phi(t,x)=\delta-d(\xi(t;x)).
$$
So, the function $T(x)$ can be rewritten as
$$
T(x)=\inf\left\{ t\ge0\,|\, \Phi(t,x)=0 \right\}\,,
$$
and its finiteness is an obvious consequence of the decreasing character of $\Phi$ in time:
$$
\partial_t\Phi(t,x)=-Dd(\xi(t;x))\cdot\xi'(t;x)=-\langle a(\xi(t;x))\nu(\xi(t;x)),\nu(\xi(t;x))\rangle\le-\lambda<0\,.
$$
Moreover, thanks to Dini's (implicit function) theorem we obtain that $T(x)$ is a $\mathcal{C}^1$ function and its gradient is given by
$$
\nabla T(x)=-\frac{\nabla_x\Phi(T(x),x)}{\partial_t\Phi(T(x),x)}=\frac{\nu(\xi(T(x);x))\mathrm{Jac}_x\xi(T(x);x)}{\langle a\nu,\nu\rangle (\xi(T(x);x))}\,.
$$
Actually, thanks to the regularity of $a$ and $\Omega$, we can differentiate the ODE \eqref{ODE} w.r.t. $x$ and obtain that $\xi(t;\cdot)\in\mathcal{C}^{1}$.
Now we define the approximating functions $\tilde{\psi}^n$ in the following way:
\begin{equation}\label{psitildan}
\tilde{\psi}^n(x)=
\left\{
\begin{array}{lll}
\psi^n(x)\quad & \mbox{if } & x\in\Omega_\delta\,,\\
\psi^n(\xi(T(x);x))\quad & \mbox{if } & x\in\Omega\setminus\Omega_\delta\,,
\end{array}
\right.
\end{equation}
eventually considering a $\mathcal{C}^{1}$ regularization in $\Omega_{\delta}\setminus{\Omega_{2\delta}}\,$.
From the definition of $\tilde{\psi}^n$ and the $\mathcal{C}^1$ regularity of $\xi$ and $T$ we have $\tilde{\psi}^n\in\mathcal{C}^1$ and
$$
\|{\tilde{\psi}^n}\|_{1}\le C\norm{\psi^n}_1\le C\norm{\psi}_{W^{1,\infty}}\,.
$$
Moreover, since near the boundary $\tilde{\psi}^n$ is constant along the trajectories $a(\cdot)\nu(\cdot)$, we have that on $\partial\Omega$
$$
a(x)D{\tilde{\psi^n}}(x)\cdot\nu(x)_{|\partial\Omega}=\frac{\partial\tilde{\psi^n}}{\partial (a\nu(x))}(x)_{|\partial\Omega}=0\,,
$$
so $\tilde{\psi}^n\in\mathcal{C}^{1,N}$.
Now we consider $z^n$ as the solution of \eqref{bohmovrim} with $\psi$ replaced by $\tilde{\psi}^n$. Then Lemma \ref{sonobravo} implies that $z^n$ satisfies
$$
\norm{z^n}_{\miezz,1}\le C\|{\tilde{\psi}^n}\|_{1}\le C\norm{\psi}_{W^{1,\infty}}\,.
$$
Then, Ascoli-Arzel\`a's Theorem tells us that $\exists z$ such that $z^n\to z$ in $\mathcal{C}([0,T]\times\Omega)$. Passing to the limit in the weak formulation of $z^n$, we obtain that $z$ is the unique solution of \eqref{bohmovrim}.
Finally, since $z^n$ satisfies \eqref{nnavonmmna}, we can pass to the pointwise limit when $n\to+\infty$ and obtain the estimate \eqref{nnavonmmna} for $z$. This concludes the proof.
\end{proof}
\end{lem}
Now we start with the first estimates for the Master Equation.
The first result is obtained by the study of some regularity properties of the $MFG$ system, uniformly in $m_0$.
\begin{prop}
The system \eqref{meanfieldgames} with conditions \eqref{fame} has a unique classical solution $(u,m)\in \mathcal{C}^{1+\frac{\alpha}{2},2+\alpha}\times \mathcal{C}([0,T];\mathcal{P}(\Omega))$, and this solution satisfies
\begin{equation}\label{first}
\sup\limits_{t_1\neq t_2}\frac{\mathbf{d}_1(m(t_1),m(t_2))}{|t_1-t_2|^\frac 1 2}+\norm{u}_{1+\frac{\alpha}{2},2+\alpha}\le C\hspace{0.08cm},
\end{equation}
where $C$ does not depend on $(t_0,m_0)$.\\
Furthermore, $m(t)$ has a positive density for each $t>0$ and, if $m_0\in\mathcal{C}^{2+\alpha}$ and satisfies the Neumann boundary condition
\begin{equation}\label{neumannmzero}
\left(a(x)Dm_0+(\tilde{b}(0,x)+H_p(x,Du(0,x)))m_0\right)\cdot\nu_{|\partial\Omega}=0\,,
\end{equation}
then $m\in\mathcal{C}^{1+\frac{\alpha}{2},2+\alpha}$.\\
Finally, the solution is stable: if $m_{0n}\to m_0$ in $\mathcal{P}(\Omega)$, then there is convergence of the corresponding solutions of \eqref{meanfieldgames}-\eqref{fame}: $(u_n,m_n)\to (u,m)$ in $\mathcal{C}^{1,2}\times\mathcal{C}([0,T];\mathcal{P}(\Omega))$.
\begin{proof}
We use a Schauder fixed point argument.\\
Let $X\subset\mathcal{C}([t_0,T];\mathcal{P}(\Omega))$ be the set
$$
X:=\left\{m\in\mathcal{C}([t_0,T];\mathcal{P}(\Omega))\mbox{ s.t. }\mathbf{d}_1(m(t),m(s))\le L|t-s|^\miezz\ \forall s,t\in[t_0,T] \right\}\hspace{0.08cm},
$$
where $L$ is a constant that will be chosen later.\\
It is easy to prove that $X$ is a convex compact set for the uniform distance.\\
We define a map $\Phi:X\to X$ as follows.\\
Given $\beta\in X$, we consider the solution of the following Hamilton-Jacobi equation
\begin{equation}\label{hj}
\begin{cases}
-u_t-\mathrm{tr}(a(x)D^2u)+H(x,Du)=F(x,\beta(t))\,,\\
u(T)=G(x,\beta(T))\,,\\
a(x)Du\cdot\nu(x)_{|\partial\Omega}=0\,.
\end{cases}
\end{equation}
Thanks to hypothesis $(iv)$ of Hypotheses \ref{ipotesi} and the $\miezz$-H\"{o}lder continuity in time of $\beta$, we have ${F(\cdot,\beta(\cdot))}\in\mathcal{C}^{\frac{\alpha}2,\alpha}$, with norm bounded by a constant independent of $\beta$. Similarly, thanks to $(v)$, $G(\cdot,\beta(T))\in\mathcal{C}^{2+\alpha}$.\\
It is well known that these hypotheses guarantee the existence and uniqueness of a classical solution. A proof can be found in \cite{lsu}, \textit{Theorem V.7.4}.\\
So, we can expand the gradient term with Taylor's formula and obtain a linear equation satisfied by $u$:
\begin{align*}
\begin{cases}
-u_t-\mathrm{tr}(a(x)D^2u)+H(x,0)+V(t,x)\cdot Du=F(x,\beta(t))\,,\\
u(T)=G(x,\beta(T))\,,\\
a(x)Du\cdot\nu_{|\partial\Omega}=0\,.
\end{cases}
\end{align*}
with
$$
V(t,x):=\int_0^1 H_p(x,\lambda Du(t,x))\hspace{0.08cm} d\lambda\hspace{0.08cm}.
$$
Thanks to the Lipschitz hypothesis on $H$, $(ii)$ of Hypotheses \ref{ipotesi}, we know that $V\in L^\infty$. So, we can use the Corollary of \emph{Theorem IV.9.1} of \cite{lsu} to obtain
$$
Du\in\mathcal{C}^{\frac{\alpha}{2},\alpha}\,,\qquad\mbox{which implies}\qquad V\in\mathcal{C}^{\frac\alpha2,\alpha}\hspace{0.08cm}.
$$
So, we can apply \emph{Theorem IV.5.3} of \cite{lsu} and get
\begin{align*}
\norm{u}_{1+\frac{\alpha}{2},2+\alpha}\le C\left(\norm{F}_{\frac{\alpha}{2},\alpha}+\norm{G}_{2+\alpha}\right)\hspace{0.08cm},
\end{align*}
where the constant $C$ does not depend on $\beta$, $t_0$, $m_0$.\\
Now, we define $\Phi(\beta)=m$, where $m\in\mathcal{C}([t_0,T];\mathcal{P}(\Omega))$ is the solution of the Fokker-Planck equation
\begin{equation}\label{fpk}
\begin{cases}
m_t-\mathrm{div}(a(x)Dm)-\mathrm{div}(m(\tilde{b}(x)+H_p(x,Du)))=0\,,\\
m(t_0)=m_0\,,\\
\left(a(x)Dm+(\tilde{b}+H_p(x,Du))m\right)\cdot\nu_{|\partial\Omega}=0\,.
\end{cases}
\end{equation}
It is easy to prove that the above equation has a unique solution in the sense of distributions. A proof in a more general case will be given in Proposition \ref{peggiodellagerma}. We want to check that $m\in X$.\\
Thanks to the distributional formulation, we have
\begin{equation}\begin{split}\label{sotis}
&\ensuremath{\int_{\Omega}} \phi(t,x)m(t,dx)-\ensuremath{\int_{\Omega}}\phi(s,x)m(s,dx)\\\,+&\int_s^t\ensuremath{\int_{\Omega}}(-\phi_t-\mathrm{tr}(a(x)D^2\phi)+H_p(x,Du)\cdot D\phi)m(r,dx)dr=0\hspace{0.08cm},
\end{split}\end{equation}
for each $\phi\in L^{\infty}$ satisfying in the weak sense
$$
\begin{cases}
-\phi_t-\mathrm{tr}(a(x)D^2\phi)+H_p(x,Du)\cdot D\phi\in L^\infty(Q_T)\\
aD\phi\cdot\nu_{|\partial\Omega}=0
\end{cases}\hspace{0.08cm}.
$$
Take $\psi(\cdot)$ a $1$-Lipschitz function in $\Omega$. Then we choose $\phi$ in the weak formulation as the solution in $[t_0,t]$ of the following linear equation
\begin{equation}\label{coglia}\begin{cases}
-\phi_t-\mathrm{tr}(a(x)D^2\phi)+H_p(x,Du)\cdot D\phi=0\,,\\
\phi(t)=\psi\,,\\
a(x)D\phi\cdot\nu_{|\partial\Omega}=0\,.
\end{cases}
\end{equation}
Thanks to Lemma \ref{davverotecnico}, we know that $\phi(\cdot,x)\in\mathcal{C}^{\miezz}([0,T])$ and its H\"{o}lder norm in time is bounded uniformly if $\psi$ is $1$-Lipschitz.\\
Coming back to \eqref{sotis}, we obtain
\begin{align*}
\ensuremath{\int_{\Omega}}\psi(x)(m(t,dx)-m(s,dx))=\ensuremath{\int_{\Omega}}(\phi(s,x)-\phi(t,x))m(s,dx)\le C|t-s|^\miezz\hspace{0.08cm},
\end{align*}
and taking the $\sup$ over all $1$-Lipschitz functions $\psi$,
$$
\mathbf{d}_1(m(t),m(s))\le C|t-s|^\miezz\hspace{0.08cm}.
$$
Choosing $L=C$, we have proved that $m\in X$.\\
Since $X$ is convex and compact, to apply Schauder's theorem we only need to show the continuity of $\Phi$.\\
Let $\beta_n\to\beta$, and let $u_n$ and $m_n$ be the solutions of \eqref{hj} and \eqref{fpk} related to $\beta_n$. Since $\{u_n\}_n$ is uniformly bounded in $\mathcal{C}^{1+\frac{\alpha}{2},2+\alpha}$, from Ascoli-Arzel\`a's Theorem we have $u_n\to u$ in $\mathcal{C}^{1,2}$.\\
To prove the convergence of $\{m_n\}_n$, we take $\phi_n$ as the solution of \eqref{coglia} with $Du$ replaced by $Du_n$. Then, as before, $\{\phi_n\}_n$ is a Cauchy sequence in $\mathcal{C}^1$. Actually, the difference $\phi_{n,k}:=\phi_n-\phi_k$ satisfies
$$
\begin{cases}
-(\phi_{n,k})_t-\mathrm{tr}(a(x)D^2\phi_{n,k})+H_p(x,Du_n)\cdot D\phi_{n,k}=(H_p(x,Du_k)-H_p(x,Du_n))\cdot D\phi_k\,,\\
\phi_{n,k}(t)=0\,,\\
\bdone{\phi_{n,k}}\,,
\end{cases}
$$
and so Lemma \ref{sonobravo} implies
$$
\norm{\phi_{n,k}}\amu\le C\norminf{(H_p(x,Du_k)-H_p(x,Du_n))\cdot D\phi_k}\le C\norminf{Du_k-Du_n}\le \omega(n,k)\,,
$$
where $\omega(n,k)\to 0$ when $n,k\to\infty$, and where we use Lemma \ref{davverotecnico} in order to bound $D\phi_k$ in $L^\infty$, without compatibility conditions.\\
Using \eqref{sotis} with $(m_n,\phi_n)$ and $(m_k,\phi_k)$, for $n,k\in\mathbb{N}$, $s=0$, and subtracting the two equalities, we get
\begin{align*}
\ensuremath{\int_{\Omega}}\psi(x)(m_n(t,dx)-m_k(t,dx))=\ensuremath{\int_{\Omega}}(\phi_n(0,x)-\phi_k(0,x))m_0(dx)\le\omega(n,k)\,.
\end{align*}
Taking the sup over the $\psi$ $1$-Lipschitz and over $t\in[0,T]$, we obtain
\begin{align*}
\sup\limits_{t\in[0,T]}\mathbf{d}_1(m_n(t),m_k(t))\le\omega(n,k)\,,
\end{align*}
which proves that $\{m_n\}_n$ is a Cauchy sequence in $X$. Then, $\exists m$ such that $m_n\to m$ in $X$.\\
Passing to the limit in \eqref{fpk}, we immediately obtain $m=\Phi(\beta)$, which concludes the proof of continuity.\\
So we can apply Schauder's theorem and obtain a classical solution of the problem \eqref{meanfieldgames}-\eqref{fame}. The estimate \eqref{first} follows from the above estimates for \eqref{hj} and \eqref{fpk}.\\
To prove the uniqueness, let $(u_1,m_1)$, $(u_2,m_2)$ be two solutions of \eqref{meanfieldgames}-\eqref{fame}.\\
We use inequality \eqref{dopo}, whose proof will be given in the next lemma, with $m_{01}=m_{02}=m_0$:
\begin{align*}
&\intc{t_0}\mathlarger{(}H(x,Du_2)-H(x,Du_1)-H_p(x,Du_1)(Du_2-Du_1)\mathlarger{)}m_1(t,dx)dt\hspace{0.08cm}+\\
+&\intc{t_0}\mathlarger{(}H(x,Du_1)-H(x,Du_2)-H_p(x,Du_2)(Du_1-Du_2)\mathlarger{)}m_2(t,dx)dt\le 0\,.
\end{align*}
Since $H$ is strictly convex, the above inequality gives us $Du_1=Du_2$ in the set\\ $\{m_1>0\}\cup\{m_2>0\}$. Then $m_1$ and $m_2$ solve the same Fokker-Planck equation, and for uniqueness we have $m_1=m_2$.\\
So $F(x,m_1(t))=F(x,m_2(t))$, $G(x,m_1(T))=G(x,m_2(T))$ and $u_1$ and $u_2$ solve the same Hamilton-Jacobi equation, which implies $u_1=u_2$. The proof of uniqueness is complete.\\
Finally, if $m_0\in\mathcal{C}^{2+\alpha}$ satisfies \eqref{neumannmzero}, then, splitting the divergence terms in \eqref{fpk}, we have
\begin{equation*}
\begin{cases}
m_t-\mathrm{tr}(a(x)D^2m)-m\hspace{0.08cm}\mathrm{div}\left(\tilde{b}(x)+H_p(x,Du)\right)-\left(2\tilde{b}(x)+H_p(x,Du)\right)Dm=0\\
m(t_0)=m_0\\
\left(a(x)Dm+(\tilde{b}+H_p(x,Du))m\right)\cdot\nu_{|\partial\Omega}=0
\end{cases}\hspace{0.08cm}.
\end{equation*}
Then, thanks to \textit{Theorem IV.5.3} of \cite{lsu}, $m$ is of class $\mathcal{C}^{1+\frac{\alpha}{2},2+\alpha}$.\\
The stability of solutions is obtained in the same way as the continuity of $\Phi$. This concludes the proof.
\end{proof}
\end{prop}
With this proposition, we have obtained that
\begin{equation}\label{firstmaster}
\sup\limits_{t\in[0,T]}\sup\limits_{m\in\mathcal{P}(\Omega)}\norm{U(t,\cdot,m)}_{2+\alpha}\le C\hspace{0.08cm},
\end{equation}
which gives us an initial regularity result for the function $U$.\\
To complete the previous proposition, we need the following lemma, based on the so-called \textit{Lasry-Lions monotonicity argument}.
\begin{lem}
Let $(u_1,m_1)$ and $(u_2,m_2)$ be two solutions of System \eqref{meanfieldgames}-\eqref{fame}, with $m_1(t_0)=m_{01}$, $m_2(t_0)=m_{02}$. Then
\begin{equation}\begin{split}\label{dopo}
&\intc{t_0}\mathlarger{(}H(x,Du_2)-H(x,Du_1)-H_p(x,Du_1)(Du_2-Du_1)\mathlarger{)}m_1(t,dx)dt\\
+&\intc{t_0}\mathlarger{(}H(x,Du_1)-H(x,Du_2)-H_p(x,Du_2)(Du_1-Du_2)\mathlarger{)}m_2(t,dx)dt\\\le-&\ensuremath{\int_{\Omega}}(u_1(t_0,x)-u_2(t_0,x))(m_{01}(dx)-m_{02}(dx))\hspace{0.08cm}.
\end{split}
\end{equation}
\begin{proof}
See \emph{Lemma 3.1.2} of \cite{card}.
\end{proof}
\end{lem}
\section{Lipschitz continuity of $U$}
\begin{prop}\label{holder}
Let $(u_1,m_1)$ and $(u_2,m_2)$ be two solutions of system \eqref{meanfieldgames}-\eqref{fame}, with $m_1(t_0)=m_{01}$, $m_2(t_0)=m_{02}$. Then
\begin{equation}\begin{split}\label{lipsch}
\norm{u_1-u_2}_{1,2+\alpha}&\le C\mathbf{d}_1(m_{01},m_{02})\,,\\
\sup\limits_{t\in[t_0,T]}\mathbf{d}_1(m_1(t),m_2(t))& \le C\mathbf{d}_1(m_{01},m_{02})\,,
\end{split}\end{equation}
where $C$ does not depend on $t_0$, $m_{01}$, $m_{02}$. In particular
\begin{equation*}
\sup\limits_{t\in[0,T]}\sup_{m_1\neq m_2}\left[\left(\mathbf{d}_1(m_1,m_2)\right)^{-1}\norm{U(t,\cdot,m_1)-U(t,\cdot,m_2)}_{2+\alpha}\right]\le C\,.
\end{equation*}
\emph{
So, the solution of the Master Equation is Lipschitz continuous in the measure variable. This will be essential in order to prove the $\mathcal{C}^1$ character of $U$ with respect to $m$.
}
\begin{proof}
For simplicity, we show the result for $t_0=0$.\\
\textit{First step: An initial estimate.} Thanks to the hypotheses on $H$ and the Lipschitz bound of $u_1$ and $u_2$, \eqref{dopo} implies
\begin{align*}
&\ensuremath{\int_{0}^{T}\int_{\Omega}}|Du_1-Du_2|^2(m_1(t,dx)+m_2(t,dx))dt\le\\\le C&\ensuremath{\int_{\Omega}} (u_1(0,x)-u_2(0,x))(m_{01}(dx)-m_{02}(dx))\le C\norm{u_1-u_2}\amu\mathbf{d}_1(m_{01},m_{02}).
\end{align*}
\textit{Second step: An estimate on $m_1-m_2$}. We call $m:=m_1-m_2$. We take $\phi$ a sufficiently regular function satisfying $aD\phi\cdot\nu=0$, which will be chosen later. By subtracting the weak formulations \eqref{sotis} of $m_1$ and $m_2$ for $s=0$ and for $\phi$ as test function, we obtain
\begin{equation}\label{immigrato}
\begin{split}
&\ensuremath{\int_{\Omega}}\phi(t,x)m(t,dx)+\ensuremath{\int_{0}^{t}\int_{\Omega}}\left(-\phi_t-\mathrm{tr}(a(x)D^2\phi)+H_p(x,Du_1)D\phi\right)m(s,dx)ds+\\+&\ensuremath{\int_{0}^{t}\int_{\Omega}}(H_p(x,Du_1)-H_p(x,Du_2))D\phi\hspace{0.08cm} m_2(s,dx)ds=\ensuremath{\int_{\Omega}}\phi(0,x)(m_{01}(dx)-m_{02}(dx))\,.
\end{split}
\end{equation}
We choose $\phi$ as the solution of \eqref{coglia} related to $u_1$, with terminal condition $\psi\in W^{1,\infty}$. Using the Lipschitz continuity of $H_p$ with respect to $p$, we get
\begin{align*}
\ensuremath{\int_{\Omega}} \psi(x) m(t,dx)\le C\ensuremath{\int_{0}^{t}\int_{\Omega}} |Du_1-Du_2| m_2(s,dx)ds+C\mathbf{d}_1(m_{01},m_{02})\hspace{0.08cm},
\end{align*}
since, by Lemma \ref{davverotecnico}, $\phi$ is Lipschitz continuous with a constant bounded uniformly when $\psi$ is $1$-Lipschitz.\\
Now we use the Cauchy-Schwarz inequality and the first step to obtain
\begin{align*}
\ensuremath{\int_{\Omega}} \psi(x) m(t,dx)\le\hspace{0.08cm}&C\left(\ensuremath{\int_{0}^{t}\int_{\Omega}} |Du_1-Du_2|^2 m_2(s,dx)\right)^\miezz+C\mathbf{d}_1(m_{01},m_{02})\le\\\le& \hspace{0.08cm} C\left(\norm{u_1-u_2}\amu^\miezz\mathbf{d}_1(m_{01},m_{02})^\miezz+\mathbf{d}_1(m_{01},m_{02})\right)\hspace{0.08cm},
\end{align*}
and finally, taking the sup over all $1$-Lipschitz functions $\psi$ and over $t\in[0,T]$,
\begin{equation}\label{secondstep}
\sup\limits_{t\in[0,T]}\mathbf{d}_1(m_1(t),m_2(t))\le C\left(\norm{u_1-u_2}\amu^\miezz\mathbf{d}_1(m_{01},m_{02})^\miezz+\mathbf{d}_1(m_{01},m_{02})\right)\hspace{0.08cm}.
\end{equation}
\textit{Third step: Estimate on $u_1-u_2$ and conclusion.} We call $u:=u_1-u_2$. Then $u$ solves the following equation
\begin{equation*}
\begin{cases}
-u_t-\mathrm{tr}(a(x)D^2 u)+V(t,x)Du=f(t,x)\\
u(T)=g(x)\\
a(x)Du\cdot\nu_{|\partial\Omega}=0
\end{cases}\hspace{0.08cm},
\end{equation*}
where
\begin{align*}
&V(t,x)=\int_0^1 H_p(x,\lambda Du_1(t,x)+(1-\lambda)Du_2(t,x))\,d\lambda\hspace{0.08cm};\\
&f(t,x)=\int_0^1\ensuremath{\int_{\Omega}}\dm{F}(x,\lambda m_1(t)+(1-\lambda)m_2(t),y)(m_1(t,dy)-m_2(t,dy))d\lambda\hspace{0.08cm};\\
&g(x)=\int_0^1\ensuremath{\int_{\Omega}}\dm{G}(x,\lambda m_1(T)+(1-\lambda)m_2(T),y)(m_1(T,dy)-m_2(T,dy))d\lambda\hspace{0.08cm}.
\end{align*}
From the regularity of $u_1$ and $u_2$, we have that $V$ is bounded in $\mathcal{C}^{\frac{\alpha}{2},\alpha}$.\\
We want to apply \emph{Theorem 5.1.21} of \cite{lunardi}. To do this, we have to estimate
$
\sup\limits_t\norm{f(t,\cdot)}_\alpha\,.
$
First, we call
$$
m_\lambda(\cdot):=\lambda m_1(\cdot)+(1-\lambda)m_2(\cdot)\hspace{0.08cm}.
$$
We get
\begin{align*}
\sup\limits_{t\in[0,T]}\norm{f(t,\cdot)}_{\alpha}&\le\sup\limits_{t\in[0,T]}\int_0^1\norm{D_y\dm{F}(\cdot,m_\lambda(t),\cdot)}_{\alpha,\infty}d\lambda\,\mathbf{d}_1(m_1(t),m_2(t))\\
&\le C\sup\limits_{t\in[0,T]}\mathbf{d}_1(m_1(t),m_2(t))\hspace{0.08cm},
\end{align*}
where $C$ depends on the constant $C_F$ in hypotheses \ref{ipotesi}.\\
\begin{comment}
As regards the time estimate, one has
\begin{equation}\begin{split}\label{nemequittepas}
&\left|f(t,x)-f(s,x)\right|\le\\\le&\left|\int_0^1\ensuremath{\int_{\Omega}}\left(\dm{F}(x,m_\lambda(t),y)-\dm{F}(x,m_\lambda(s),y)\right)\left(m_1(t,dy)-m_2(t,dy)\right)d\lambda\right|+\\+&\left|\int_0^1\ensuremath{\int_{\Omega}}\dm{F}(x,m_\lambda(s),y)(m_1(t,dy)-m_1(s,dy)+m_2(s,dy)-m_2(t,dy)d\lambda\right|
\end{split}\end{equation}
Thanks to the hypotheses on $F$, the first integral is bounded above by
\begin{align*}
C\int_0^1\mathbf{d}_1(m_\lambda(t),m_\lambda(s))&\mathbf{d}_1(m_1(t),m_2(t))d\lambda\le\\&\le C\sup\limits_{r\in[0,T]}\mathbf{d}_1(m_1(r),m_2(r))\int_0^1\mathbf{d}_1(m_\lambda(t),m_\lambda(s))d\lambda\hspace{0.08cm}.
\end{align*}
We compute the Wasserstein distance between $m_\lambda(t)$ and $m_\lambda(s)$. For $\phi$ $1-$Lipschitz
\begin{align*}
&\ensuremath{\int_{\Omega}} \phi(x)(m_\lambda(t,dx)-m_\lambda(s,dx))=\\=\lambda&\ensuremath{\int_{\Omega}} \phi(x)(m_1(t,dx)-m_1(s,dx))\hspace{0.08cm}+\hspace{0.08cm}(1-\lambda)\ensuremath{\int_{\Omega}} \phi(x)(m_2(t,dx)-m_2(s,dx))\le\\\le \lambda\hspace{0.08cm}&\mathbf{d}_1(m_1(t),m_1(s))+(1-\lambda)\hspace{0.08cm}\mathbf{d}_1(m_2(t),m_2(s))\le C|t-s|^\miezz\le C|t-s|^{\frac{\alpha}{2}},
\end{align*}
where $C$ is changed in the last inequality.\\
So, the first integral in \eqref{nemequittepas} is bounded by
$$
C|t-s|^\frac{\alpha}{2}\sup\limits_{r\in[0,T]}\mathbf{d}_1(m_1(r),m_2(r))\hspace{0.08cm}.
$$
Now we estimate the second integral. We have
\begin{align*}
\left|\int_0^1\ensuremath{\int_{\Omega}}\dm{F}(x,m_\lambda(s),y)(m_1(t,dy)-m_1(s,dy)+m_2(s,dy)-m_2(t,dy)d\lambda\right|\le\\\le C\mathbf{d}_1(m_1(t),m_1(s))+C\mathbf{d}_1(m_2(t),m_2(s))\le C|t-s|^\miezz\hspace{0.08cm}.
\end{align*}
On the other hand, we have
\begin{align*}
\left|\int_0^1\ensuremath{\int_{\Omega}}\dm{F}(x,m_\lambda(s),y)(m_1(t,dy)-m_1(s,dy)+m_2(s,dy)-m_2(t,dy)d\lambda\right|\le\\\le C\mathbf{d}_1(m_1(t),m_2(t))+C\mathbf{d}_1(m_1(s),m_2(s))\le C\sup\limits_{r\in[0,T]}\mathbf{d}_1(m_1(r),m_2(r))\hspace{0.08cm}.
\end{align*}
Now, we use a very easy trick: if $A\le B$ and $A\le C$ it is obvious that
\begin{align*}
A=A^pA^{1-p}\le B^pC^{1-p}\hspace{0.08cm},\hspace{1cm}\forall\ 0\le p\le 1\hspace{0.08cm}.
\end{align*}
So, choosing $p=\alpha$, the second integral in \eqref{nemequittepas} is bounded by
$$
C|t-s|^{\frac{\alpha}2}\left(\sup\limits_{r\in[0,T]}\mathbf{d}_1(m_1(r),m_2(r))\right)^{1-\frac{\alpha}2}\hspace{0.08cm}.
$$
\end{comment}
In the same way
\begin{align}
\norm{g(\cdot)}_{2+\alpha}\le C\sup\limits_{r\in[0,T]}\mathbf{d}_1(m_1(r),m_2(r))\,.
\end{align}
So we can apply \emph{Theorem 5.1.21} of \cite{lunardi} and obtain
\begin{equation}\label{finalcountdown}
\begin{split}
\norm{u_1-u_2}_{1,2+\alpha}&\le C\sup\limits_{r\in[0,T]}\mathbf{d}_1(m_1(r),m_2(r))\,.
\end{split}
\end{equation}
\begin{comment}
Conversely, \emph{Theorem 5.1.18} of \cite{lunardi} tells us that
\begin{equation}\label{caniofatone}
\norm{u_1-u_2}\amu\le C\left( \norminf{f}+\norm{g}_{1+\alpha}\right)\le C\sup\limits_{r\in[0,T]}\mathbf{d}_1(m_1(r),m_2(r))\,.
\end{equation}
\end{comment}
Coming back to \eqref{secondstep}, this implies
\begin{align*}
\sup\limits_{t\in[0,T]}&\mathbf{d}_1(m_1(t),m_2(t))\le\\&\le C\left(\left(\sup\limits_{r\in[0,T]}\mathbf{d}_1(m_1(r),m_2(r))\right)^{\miezz}\mathbf{d}_1(m_{01},m_{02})^\miezz+\mathbf{d}_1(m_{01},m_{02})\right)\hspace{0.08cm},
\end{align*}
and, using the generalized Young inequality $Cab\le\frac{1}{2}a^2+\frac{C^2}{2}b^2$ with $a=\left(\sup_{r\in[0,T]}\mathbf{d}_1(m_1(r),m_2(r))\right)^{\miezz}$ and $b=\mathbf{d}_1(m_{01},m_{02})^{\miezz}$, this allows us to conclude:
\begin{align}\label{oterz}
\sup\limits_{t\in[0,T]}&\mathbf{d}_1(m_1(t),m_2(t))\le C\mathbf{d}_1(m_{01},m_{02})\,.
\end{align}
Plugging this estimate in \eqref{finalcountdown}, we finally obtain
\begin{align*}
&\norm{u_1-u_2}_{1,2+\alpha}\le C\mathbf{d}_1(m_{01},m_{02})\label{osicond}\,.\\
\end{align*}
\end{proof}
\end{prop}
\section{Linearized system and differentiability of $U$ with respect to the measure}
The proof of existence and uniqueness of solutions for the Master Equation strongly relies on the $\mathcal{C}^1$ character of $U$ with respect to $m$.\\
The definition of the derivative $\frac{\delta U}{\delta m}$ is strictly related to the solution $(v,\mu)$ of the following \emph{linearized system}:
\begin{equation}\label{linDuDm}
\begin{cases}
-v_t-\mathrm{tr}(a(x)D^2v)+H_p(x,Du)\cdot Dv=\mathlarger{\frac{\delta F}{\delta m}}(x,m(t))(\mu(t))\,,\\
\mu_t-\mathrm{div}(a(x)D\mu)-\mathrm{div}(\mu(H_p(x,Du)+\tilde{b}))-\mathrm{div}(mH_{pp}(x,Du)Dv)=0\,,\\
v(T,x)=\mathlarger{\frac{\delta G}{\delta m}}(x,m(T))(\mu(T))\,,\qquad \mu(t_0)=\mu_0\,,\\
a(x)Dv\cdot\nu_{|\partial\Omega}=0\,,\hspace{1cm}\left(a(x)D\mu+\mu(H_p(x,Du)+\tilde{b})+mH_{pp}(x,Du)Dv\right)\cdot\nu_{|\partial\Omega}=0\,,
\end{cases}
\end{equation}
where we use the notation
$$
{\dm{F}}(x,m(t))(\rho(t)):=\left\langle{\dm{F}}(x,m(t),\cdot),\rho(t)\right\rangle
$$
and the same for $G$.\\
We want to prove that this system admits a solution and that the following equality holds:
\begin{equation}\label{reprform}
v(t_0,x)=\left\langle\frac{\delta U}{\delta m}(t_0,x,m_0,\cdot),\mu_0\right\rangle\,.
\end{equation}
First, we have to analyze separately the well-posedness of the Fokker-Planck equation in distribution sense:
\begin{equation}\label{linfp}
\begin{cases}
\mu_t-\mathrm{div}(a(x)D\mu)-\mathrm{div}(\mu b)=f\,,\\
\mu(0)=\mu_0\,,\\
\left(a(x)D\mu+\mu b\right)\cdot\nu_{|\partial\Omega}=0\,,
\end{cases}
\end{equation}
where $f\in L^1(W^{-1,\infty})$, $\mu_0\in\mathcal{C}^{-(1+\alpha)}$, $b\in L^\infty$.
A suitable distributional definition of solution is the following:
\begin{defn}\label{canzonenuova}
Let $f\in L^1(W^{-1,\infty})$, $\mu_0\in\mathcal{C}^{-(1+\alpha)}$, $b\in L^\infty$. We say that a function $\mu\in\mathcal{C}([0,T];\mathcal{C}^{-(1+\alpha),N})\cap L^1(Q_T)$ is a weak solution of \eqref{linfp} if, for all $t\in[0,T]$, $\psi\in L^\infty(Q_T)$, $\xi\in\mathcal{C}^{1+\alpha,N}$ and $\phi$ the solution in $[0,t]\times\Omega$ of the following linear equation
\begin{equation}\label{hjbfp}
\begin{cases}
-\phi_t-\mathrm{div}(aD\phi)+bD\phi=\psi\,,\\
\phi(t)=\xi\,,\\
aD\phi\cdot\nu_{|\partial\Omega}=0\,,
\end{cases}
\end{equation}
the following formulation holds:
\begin{equation}\label{weakmu}
\langle \mu(t),\xi\rangle+\ensuremath{\int_{0}^{t}\int_{\Omega}}\mu(s,x)\psi(s,x)\,dxds=\langle\mu_0,\phi(0,\cdot)\rangle+\int_0^t\langle f(s),\phi(s,\cdot) \rangle\,ds\,,
\end{equation}
where $\langle \cdot,\cdot\rangle$ denotes the duality between $\mathcal{C}^{-(1+\alpha),N}$ and $\mathcal{C}^{1+\alpha,N}$ in the first case, between $\mathcal{C}^{-(1+\alpha)}$ and $\mathcal{C}^{1+\alpha}$ in the second case and between $W^{-1,\infty}$ and $W^{1,\infty}$ in the last case.
\end{defn}
We note that the definition is well-posed. Actually, $\phi(s,\cdot)$ is in $\mathcal{C}^{1+\alpha}$ $\forall s$ thanks to Lemma \ref{sonobravo}, so $ \langle\mu_0,\phi(0,\cdot)\rangle$ and $\langle f(s),\phi(s,\cdot)\rangle$ are well defined. Moreover, we have
$$
\norm{\phi(s,\cdot)}_{W^{1,\infty}}\le C\,.
$$
Hence, since $f\in L^1(W^{-1,\infty})$, the last integral is well defined too.
\begin{rem} We are mainly interested in a particular case of distribution $f$. If there exists an integrable function $c:[0,T]\times\Omega\to\R^n$ such that $\forall\phi\in W^{1,\infty}$
$$
\langle f(t),\phi\rangle=\ensuremath{\int_{\Omega}} c(t,x)\cdot D\phi(x)\,dx\,,
$$
then we can write the problem \eqref{linfp} in this way:
\begin{equation*}
\begin{cases}
\mu_t-\mathrm{div}(a(x)D\mu)-\mathrm{div}(\mu b)=\mathrm{div}(c)\,,\\
\mu(0)=\mu_0\,,\\
\left(a(x)D\mu+\mu b+c\right)\cdot\nu_{|\partial\Omega}=0\,,
\end{cases}
\end{equation*}
writing $f$ as a divergence and adjusting the Neumann condition, in order to make sense of the integration by parts in the regular case.
In this case, in order to ensure the condition $f\in L^1(W^{-1,\infty})$, we can simply require $c\in L^1(Q_T)$. Actually, since $\norminf{D\phi}\le1$ whenever $\norm{\phi}_{W^{1,\infty}}\le 1$, we have
\begin{align*}
\norm{f}_{L^1(W^{-1,\infty})}=\int_0^T\sup\limits_{\norm{\phi}_{W^{1,\infty}}\le 1}\left(\ensuremath{\int_{\Omega}} c(t,x)\cdot D\phi(x)\,dx\right)dt\le C\ensuremath{\int_{0}^{T}\int_{\Omega}}|c(t,x)|\,dxdt=C\norm{c}_{L^1}\,,
\end{align*}
where $|\cdot|$ is any equivalent norm in $\R^d$.
\end{rem}
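A typical instance of this remark, used in Corollary \ref{samestrategies} below, is $c=m_2\left(H_p(x,Du_2)-H_p(x,Du_1)\right)$, where $m_2\in L^1(Q_T)$ is a density and $H_p$ is bounded: in this case $c\in L^1(Q_T)$ and therefore $f=\mathrm{div}(c)\in L^1(W^{-1,\infty})$, with
$$
\norm{f}_{L^1(W^{-1,\infty})}\le C\norm{m_2\left(H_p(x,Du_2)-H_p(x,Du_1)\right)}_{L^1(Q_T)}\,.
$$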
The next Proposition gives us an exhaustive existence and uniqueness result for \eqref{linfp}.
\begin{prop}\label{peggiodellagerma}
Let $f\in L^1(W^{-1,\infty})$, $\mu_0\in\mathcal{C}^{-(1+\alpha)}$, $b\in L^\infty$. Then there exists a unique solution of the Fokker-Planck equation \eqref{linfp}.
This solution satisfies
\begin{equation}\label{stimefokker}
\sup_t\norm{\mu(t)}_{-(1+\alpha),N}+\norm{\mu}_{L^p}\le C\left(\norm{\mu_0}_{-(1+\alpha)}+\norm{f}_{L^1(W^{-1,\infty})}\right)\,,
\end{equation}
where $p=\frac{d+2}{d+1+\alpha}\,$.
\begin{comment}
\begin{equation}\label{acciessmij}
\begin{split}
\sup_t\norm{\mu(t)}_{-(1+\alpha),N}+\norm{\mu}_{L^p}&\le C\left(\norm{\mu_0}_{-(1+\alpha)}+\norm{f}_{L^1(W^{-1,\infty})}\right)\,,\\
\norm{\mu}_{\gamma,-(1+\alpha),N}&\le C\left(\norm{\mu_0}_{-(1+\alpha)}+\norm{f}_{L^q(W^{-1,\infty})}\right)\,,
\end{split}
\end{equation}
where $\gamma=\min\left\{\frac{q-1}{q},\frac{\alpha}{2}\,\right\}\,.$
\end{comment}
Finally, the solution is stable: if $\mu^n_0\to\mu_0$ in $\mathcal{C}^{-(1+\alpha)}$, $\{b^n\}_n$ uniformly bounded and $b^n\to b$ in $L^p$ $\forall\,p$, $f^n\to f$ in $L^1(W^{-1,\infty})$, then, calling $\mu^n$ and $\mu$ the solutions related, respectively, to $(\mu^n_0,b^n,f^n)$ and $(\mu_0,b,f)$, we have $\mu^n\to\mu$ in $\mathcal{C}([0,T];\mathcal{C}^{-(1+\alpha),N})\cap L^p(Q_T)$.
\begin{proof}
For the existence part, we start assuming that $f$, $b$, $\mu_0$ are smooth functions, and that $\mu_0$ satisfies
\begin{equation}\label{neumannmu}
\left(a(x)D\mu_0+\mu_0 b\right)\cdot\nu_{|\partial\Omega}=0\,.
\end{equation}
In this case, we can split the divergence terms in \eqref{linfp} and obtain that $\mu$ is a solution of a linear equation with smooth coefficients. So the existence of solutions in this case is a straightforward consequence of the classical results in \cite{lsu}, \cite{lunardi}.
We consider the unique solution $\phi$ of \eqref{hjbfp} with $\psi=0$ and $\xi\in\mathcal{C}^{1+\alpha,N}$. Multiplying the equation of $\mu$ for $\phi$ and integrating by parts in $[0,t]\times\Omega$ we obtain
\begin{equation}\label{rhs}
\langle \mu(t),\xi\rangle=\langle\mu_0,\phi(0,\cdot)\rangle+\int_0^t\langle f(s),\phi(s,\cdot) \rangle\,ds\,.
\end{equation}
Thanks to Lemma \ref{sonobravo}, we know that
\begin{equation}\label{upa}
\norm{\phi}\amu\le C\norm{\xi}_{1+\alpha}\,.
\end{equation}
Then the right hand side term of \eqref{rhs} is bounded in this way:
\begin{equation*}
\langle\mu_0,\phi(0,\cdot)\rangle+\int_0^t\langle f(s),\phi(s,\cdot) \rangle\,ds\le C\norm{\xi}_{1+\alpha}\left(\norm{\mu_0}_{-(1+\alpha)}+
\int_0^t\norm{f(s)}_{W^{1,\infty}}\right)\,.
\end{equation*}
Coming back to \eqref{rhs} and passing to the $sup$ when $\xi\in\mathcal{C}^{1+\alpha,N}$, $\norm{\xi}_{1+\alpha}\le 1$, we obtain
\begin{equation}
\sup\limits_t\norm{\mu(t)}_{-(1+\alpha),N}\le C\left(\norm{\mu_0}_{-(1+\alpha)}+\norm{f}_{L^1(W^{-1,\infty})}\right)\,.
\end{equation}
\begin{comment}
For the time estimate, we consider the same $\phi$ and, for $s<t$, we integrate the equation in $[s,t]\times\Omega$, obtaining
$$
\langle\mu(t)-\mu(s),\xi\rangle=\langle\mu(s),\phi(s)-\phi(t)\rangle+\int_s^t\langle f(r),\phi(r,\cdot)\rangle\,dr\,.
$$
The first term in the right-hand side is easily estimated, using \eqref{upa}:
$$
\langle\mu(s),\phi(s)-\phi(t)\rangle\le C\norm{\phi(s)-\phi(t)}_{1+\alpha}\norm{\mu_0}_{-(1+\alpha)}\le C|t-s|^{\frac{\alpha}2}\norm{\xi}_{1+\alpha}\norm{\mu_0}_{-(1+\alpha)}\,.
$$
As regards the last term, we use again \eqref{upa} and Jensen's inequality to obtain
\begin{align*}
&\int_s^t\langle f(r),\phi(r,\cdot)\rangle\,dr\le C\norm{\xi}_{1+\alpha}\int_s^t\norm{f(r)}_{W^{-1,\infty}}\,dr\\
\le C\norm{\xi}_{1+\alpha}(t-s)^{\frac{q-1}{q}}&\left(\int_s^t\norm{f(r)}^q_{W^{-1,\infty}}\right)^{\frac 1q}= C\norm{\xi}_{1+\alpha}(t-s)^{\frac{q-1}{q}}\norm{f}_{L^q(W^{-1,\infty})}\,.
\end{align*}
Putting togethere these estimates and passing to the $sup$ again with $\xi\in\mathcal{C}^{1+\alpha,N}$ and $\norm{\xi}_{1+\alpha}\le1$, we obtain
$$
\norm{\mu}_{\gamma,-(1+\alpha),N}\le C\left(\norm{\mu_0}_{-(1+\alpha)}+\norm{f}_{L^q(W^{-1,\infty})}\right)\,.
$$
\end{comment}
Now we have to prove the $L^p$ estimate. We consider the solution of \eqref{hjbfp} with $t=T$, $\xi=0$ and $\psi\in L^r$, with $r>d+2$ (we recall that in this chapter $d$ denotes the dimension of the space).
Then the Corollary of \emph{Theorem IV.9.1} of \cite{lsu} tells us that
\begin{equation}\label{napule}
\norm{\phi}_{1-\frac{d+2}{2r},2-\frac{d+2}{r}}\le C\norm{\psi}_{L^r}\,.
\end{equation}
Choosing $r=\frac{d+2}{1-\alpha}$, one has $2-\frac{d+2}{r}=1+\alpha$. Integrating in $[0,T]\times\Omega$ the equation of $\mu$ one has
$$
\ensuremath{\int_{0}^{T}\int_{\Omega}}\mu\psi\,dxds=\langle\mu_0,\phi(0,\cdot)\rangle+\int_0^T\langle f(s),\phi(s,\cdot)\rangle\,ds\,.
$$
Thanks to \eqref{napule} we can estimate the terms on the right-hand side and obtain
\begin{equation}\label{minecessita}
\ensuremath{\int_{0}^{T}\int_{\Omega}}\mu\psi\,dxds\le C\norm{\psi}_{L^r}\left( \norm{\mu_0}_{-(1+\alpha)}+\norm{f}_{L^1(W^{-1,\infty})} \right)\,.
\end{equation}
Passing to the $sup$ for $\norm{\psi}_{L^r}\le 1$, we finally get
$$
\norm{\mu}_{L^p}\le C\left( \norm{\mu_0}_{-(1+\alpha)}+\norm{f}_{L^1(W^{-1,\infty})} \right)\,,
$$
with $p$ defined as the conjugate exponent of $r$, i.e. $p=\frac{d+2}{d+1+\alpha}$.
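For completeness, the elementary computation behind this value of $p$ is the following: since $\frac1p=1-\frac1r$ and $\frac1r=\frac{1-\alpha}{d+2}$, we get
$$
\frac1p=1-\frac{1-\alpha}{d+2}=\frac{d+1+\alpha}{d+2}\,,\qquad\mbox{that is}\qquad p=\frac{d+2}{d+1+\alpha}\,.
$$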
This proves estimates \eqref{stimefokker} in the regular case.\\
In the general case, we consider suitable smooth approximations $\mu_{0}^k$, $f^k$, $b^k$ converging to $\mu_0$, $f$, $b$ respectively in $\mathcal{C}^{-(1+\alpha),N}$, $L^1(W^{-1,\infty})$ and $L^q(Q_T)$ $\forall q\ge 1$, with $b^k$ bounded uniformly in $k$ and with $\mu_0^k$ satisfying \eqref{neumannmu}.
We call $\mu^k$ the related solution of \eqref{linfp}. The above convergences tell us that, for a certain $C$,
\begin{align*}
\|{\mu^k_0}\|_{-(1+\alpha)}\le C\norm{\mu_0}_{-(1+\alpha)}\,,\qquad&\norminf{b^k}\le C\norminf{b}\,,\\
&\|{f^k}\|_{L^1(W^{-1,\infty})}\le C\norm{f}_{L^1(W^{-1,\infty})}\,.
\end{align*}
Then we apply \eqref{stimefokker}, to obtain, uniformly in $k$,
\begin{equation}\label{blaffoff}
\sup_t\|{\mu^k(t)}\|_{-(1+\alpha),N}+\|{\mu^k}\|_{L^p}\le C\left(\norm{\mu_0}_{-(1+\alpha)}+\norm{f}_{L^1(W^{-1,\infty})}\right)\,,
\end{equation}
\begin{comment}
\begin{equation*}
\begin{split}
\sup_t\|{\mu^k(t)}\|_{-(1+\alpha),N}+\|{\mu^k}\|_{L^p}&\le C\left(\norm{\mu_0}_{-(1+\alpha)}+\norm{f}_{L^1(W^{-1,\infty})}\right)\,,\\
\|{\mu^k}\|_{\gamma,-(1+\alpha),N}&\le C\left(\norm{\mu_0}_{-(1+\alpha)}+\norm{f}_{L^q(W^{-1,\infty})}\right)\,.
\end{split}
\end{equation*}
\end{comment}
where $C$ actually depends on $b^k$, but since $b^k\to b$ it is bounded uniformly in $k$.
Moreover, the function $\mu^{k,h}:=\mu^k-\mu^h$ also satisfies \eqref{linfp} with data $b=b^k$,\\$f=f^k-f^h+\mathrm{div}(\mu^h(b^k-b^h))$, $\mu_0=\mu^k_0-\mu^h_0$. Then estimates \eqref{stimefokker} tell us that
\begin{align*}
&\sup_t\|{\mu^{k,h}(t)}\|_{-(1+\alpha),N}\,+\,\|{\mu^{k,h}}\|_{L^p}\\\le C&\left(\|{\mu^k_0-\mu^h_0}\|_{-(1+\alpha)}+\|{f^k-f^h}\|_{L^1(W^{-1,\infty})}+\|{\mathrm{div}(\mu^h(b^k-b^h))}\|_{L^1(W^{-1,\infty})}\right)\,,
\end{align*}
The first two terms in the right-hand side easily go to $0$ when $h,k\to+\infty$, since $\mu^k_0$ and $f^k$ are Cauchy sequences. As regards the last term, calling $p'$ the conjugate exponent of $p$, we have
\begin{align}\label{luigicoibluejeans}
\|{\mathrm{div}(\mu^h(b^k-b^h))}\|_{L^1(W^{-1,\infty})}\le C\ensuremath{\int_{0}^{T}\int_{\Omega}}\left|\mu^h(b^k-b^h)\right|\,dxdt\le C\|{b^k-b^h}\|_{L^{p'}}\,,
\end{align}
since $\mu^h$ is bounded in $L^p$ by \eqref{blaffoff} (here $C$ depends also on $\mu_0$ and $f$). So the last term also goes to $0$, since $\{b^k\}_k$ is a Cauchy sequence in $L^q$ $\forall q\ge1$.
Hence, $\{\mu^k\}_k$ is a Cauchy sequence, and so there exists $\mu\in \mathcal{C}([0,T];\mathcal{C}^{-(1+\alpha),N})\cap L^p(Q_T)$ such that
\begin{comment}
Then we can use Ascoli-Arzel\`a Theorem, the reflexivity of $L^p$ for $p>1$ and Banach-Alaoglu Theorem in order to obtain the existence of $\mu\in\mathcal{C}([0,T];\mathcal{C}^{-(1+\alpha),N})\cap L^1(Q_T)$ such that
$$
\mu^k\to\mu\qquad\mbox{strongly in }\mathcal{C}([0,T];\mathcal{C}^{-(1+\alpha),N})\,,\mbox{ weakly in } L^p(Q_T)\,.
$$
\end{comment}
$$
\mu^k\to\mu\qquad\mbox{strongly in }\mathcal{C}([0,T];\mathcal{C}^{-(1+\alpha),N})\,,\mbox{ strongly in } L^p(Q_T)\,.
$$
Furthermore, $\mu$ satisfies \eqref{stimefokker}.
To conclude, we have to prove that $\mu$ is actually a solution of \eqref{linfp} in the sense of Definition \ref{canzonenuova}.
We take $\phi$ and $\phi^k$ as the solutions of \eqref{hjbfp} related to $b$ and $b^k$. The weak formulation for $\mu^k$ implies that
\begin{equation*}
\langle \mu^k(t),\xi\rangle+\ensuremath{\int_{0}^{t}\int_{\Omega}}\mu^k(s,x)\psi(s,x)\,dxds=\langle\mu_0^k,\phi^k(0,\cdot)\rangle+\int_0^t\langle f^k(s),\phi^k(s,\cdot) \rangle\,ds\,,
\end{equation*}
We can immediately pass to the limit in the left-hand side, using the convergence of $\mu^k$ previously obtained.
For the right-hand side, we first need to prove the convergence of $\phi^k$ towards $\phi$. This is immediate: actually, the function $\tilde{\phi}^k:=\phi^k-\phi$ satisfies
\begin{equation*}
\begin{cases}
-\tilde{\phi}^k_t-\mathrm{div}(aD\tilde{\phi}^k)+b^kD\tilde{\phi}^k=(b^k-b)D\phi\,,\\
\tilde{\phi}^k(t)=0\,,\\
aD\tilde{\phi}^k\cdot\nu_{|\partial\Omega}=0\,.
\end{cases}
\end{equation*}
Then, the Corollary of \emph{Theorem IV.9.1} of \cite{lsu} implies, for a certain $q>d+2$ depending on $\alpha$,
$$
\|{\tilde{\phi}^k}\|\amu\le C\|{(b^k-b)D\phi}\|_{L^q}\to0\,,
$$
since $D\phi$ is bounded in $L^\infty$ using Lemma \ref{sonobravo}.
Hence, $\phi^k\to\phi$ in $\mathcal{C}^{\frac{1+\alpha} 2,1+\alpha}$. This allows us to pass to the limit in the right-hand side too and prove that \eqref{weakmu} holds true, and so that $\mu$ is a weak solution of \eqref{linfp}. This concludes the existence part.\\
For the uniqueness part, we consider $\mu_1$ and $\mu_2$ two weak solutions of the system. Then, by linearity, the function $\mu:=\mu_1-\mu_2$ is a weak solution of
$$
\begin{cases}
\mu_t-\mathrm{div}(a(x)D\mu)-\mathrm{div}(\mu b)=0\,,\\
\mu(0)=0\,,\\
\left(a(x)D\mu+\mu b\right)\cdot\nu_{|\partial\Omega}=0\,.
\end{cases}
$$
Hence, the weak formulation \eqref{weakmu} implies, $\forall \psi\in L^\infty$ and $\forall\xi\in\mathcal{C}^{1+\alpha,N}$,
$$
\langle \mu(t),\xi\rangle+\ensuremath{\int_{0}^{t}\int_{\Omega}}\mu(s,x)\psi(s,x)\,dxds=0\,,
$$
which implies (choosing first $\xi=0$ and then $\psi=0$) $$\norm{\mu}_{L^1}=\sup\limits_{t\in[0,T]}\norm{\mu(t)}_{-(1+\alpha),N}=0$$ and concludes the uniqueness part.\\
Finally, the stability part is an easy consequence of the estimates obtained previously. Let $f^n\to f$, $\mu^n_0\to\mu_0$ and $b^n\to b$. Then the function $\tilde{\mu}^n:=\mu^n-\mu$ satisfies \eqref{linfp} with $b\,,\mu_0$ and $f$ replaced by $b^n$, $\mu^n_0-\mu_0$, $f^n-f+\mathrm{div}(\mu(b^n-b))$. Then we use \eqref{stimefokker} to obtain
\begin{align*}
&\sup_t\|{\tilde{\mu}^n}\|_{-(1+\alpha),N}\,+\,\|{\tilde{\mu}^n}\|_{L^p}\\\le C&\left(\|{\mu^n_0-\mu_0}\|_{-(1+\alpha)}+\|{f^n-f}\|_{L^1(W^{-1,\infty})}+\|{\mathrm{div}(\mu(b^n-b))}\|_{L^1(W^{-1,\infty})}\right)\,,
\end{align*}
The first two terms in the right-hand side go to $0$. For the last term, the same computations as in \eqref{luigicoibluejeans} imply
$$
\|{\mathrm{div}(\mu(b^n-b))}\|_{L^1(W^{-1,\infty})}\le C\|{b^n-b}\|_{L^{p'}}\to0\,.
$$
Then $\mu^n\to\mu$ in $\mathcal{C}([0,T];\mathcal{C}^{-(1+\alpha),N})\cap L^p(Q_T)$, which concludes the Proposition.
\end{proof}
\end{prop}
The last proposition allows us to obtain a further regularity result for $\mu$, when the drift $b$ is more regular. This result will be essential in order to improve the regularity of $\dm{U}$ with respect to $y$.
\begin{cor}
Let $\mu_0\in\mathcal{C}^{-(1+\alpha)}$, $f\in L^1(W^{-1,\infty})$, $b\in\mathcal{C}^{\frac\alpha 2,\alpha}$. Then the unique solution $\mu$ of \eqref{linfp} satisfies
\begin{equation}\label{forsemisalvo}
\sup\limits_{t\in[0,T]}\norm{\mu(t)}_{-(2+\alpha),N}\le C\left(\norm{\mu_0}_{-(2+\alpha)}+\norm{f}_{L^1(W^{-1,\infty})}\right)\,.
\end{equation}
\begin{proof}
We take $\phi$ as the solution of \eqref{hjbfp}, with $\xi\in C^{2+\alpha,N}(\Omega)$ and $\psi=0$. Then we know from the classical results of \cite{lsu}, \cite{lunardi} (it is important here that $b\in\mathcal{C}^{\frac\alpha2,\alpha}$), that
$$
\norm{\phi}\amd\le C\norm{\xi}_{2+\alpha}\,.
$$
The weak formulation \eqref{weakmu} of $\mu$ tells us that
$$
\langle\mu(t),\xi\rangle=\langle\mu_0,\phi(0,\cdot)\rangle+\int_0^t\langle f(s),\phi(s,\cdot)\rangle\,ds\le C\left(\norm{\mu_0}_{-(2+\alpha)}+\norm{f}_{L^1(W^{-1,\infty})}\right)\norm{\xi}_{2+\alpha}\,.
$$
Hence, we can pass to the $sup$ for $\xi\in\mathcal{C}^{2+\alpha,N}$ with $\norm{\xi}_{2+\alpha}\le 1$ and obtain \eqref{forsemisalvo}.
\end{proof}
\begin{rem}
We stress the fact that \emph{we shall not formulate problem \eqref{linfp} directly with $\mu_0\in\mathcal{C}^{-(2+\alpha)}$.} Actually, the core of the existence theorem is the $L^p$ bound in space-time of $\mu$, and this is obtained by duality, considering test functions $\phi$ with data $\psi\in L^r$. For these functions it is not guaranteed that $\phi(0,\cdot)\in\mathcal{C}^{2+\alpha}(\Omega)$, and an estimate like \eqref{minecessita} is no longer possible.
\end{rem}
\end{cor}
We can also obtain some useful estimates for the density function $m$, as stated in the next result.
\begin{cor}\label{samestrategies}
Let $(u,m)$ be the solution of the MFG system defined in \eqref{meanfieldgames}-\eqref{fame}. Then we have $m\in L^p(Q_T)$ for $p=\frac{d+2}{d+1+\alpha}$, with
\begin{equation}\label{mlp}
\norm{m}_{L^p}\le C\norm{m_0}_{-(1+\alpha)}\,.
\end{equation}
Furthermore, if $(u_1,m_1)$ and $(u_2,m_2)$ are two solutions of \eqref{meanfieldgames}-\eqref{fame} with initial conditions $m_{01}$ and $m_{02}$, then we have
\begin{equation}\label{m12p}
\norm{m_1-m_2}_{L^p(Q_T)}\le C\mathbf{d}_1(m_{01},m_{02})\,.
\end{equation}
\begin{proof}
Since $m$ satisfies \eqref{linfp} with $\mu_0=m_0\in\mathcal{P}(\Omega)\subset\mathcal{C}^{-(1+\alpha)}$, $b=H_p(x,Du)+\tilde{b}\in L^\infty$ and $f=0$, inequality \eqref{mlp} comes from Proposition \ref{peggiodellagerma}.
For the second inequality, we consider $m:=m_1-m_2$. Then $m$ solves the equation
\begin{equation*}
\begin{cases}
m_t-\mathrm{div}(aDm)-\mathrm{div}(m(H_p(x,Du_1)+\tilde{b}))=\mathrm{div}(m_2(H_p(x,Du_2)-H_p(x,Du_1)))\,,\\
m(t_0)=m_{01}-m_{02}\,,\\
\left[aDm+m\tilde{b}+m_1H_p(x,Du_1)-m_2H_p(x,Du_2)\right]\cdot\nu_{|\partial\Omega}=0\,,
\end{cases}
\end{equation*}
i.e. $m$ is a solution of \eqref{linfp} with $f=\mathrm{div}(m_2(H_p(x,Du_2)-H_p(x,Du_1)))$, $\mu_0=m_{01}-m_{02}$, $b=H_p(x,Du_1)+\tilde{b}$. Then estimate \eqref{stimefokker} implies
$$
\norm{m_1-m_2}_{L^p(Q_T)}\le C\left(\norm{\mu_0}_{-(1+\alpha)}+\norm{f}_{L^1(W^{-1,\infty})}\right)\,.
$$
We estimate the right-hand side term. As regards $\mu_0$ we have
$$
\norm{\mu_0}_{-(1+\alpha)}=\sup\limits_{\norm{\phi}_{1+\alpha}\le 1}\ensuremath{\int_{\Omega}} \phi(x)(m_{01}-m_{02})(dx)\le C\mathbf{d}_1(m_{01},m_{02})\,.
$$
For the $f$ term we argue in the following way:
\begin{align*}
\norm{f}_{L^1(W^{-1,\infty})}&=\int_0^T\sup\limits_{\norm{\phi}_{W^{1,\infty}}\le 1}\left(\ensuremath{\int_{\Omega}} \left(H_p(x,Du_2)-H_p(x,Du_1)\right)\cdot D\phi\,m_2(t,dx)\right)\,dt\\&\le C\norm{u_1-u_2}\amu\le C\mathbf{d}_1(m_{01},m_{02})\,,
\end{align*}
which allows us to conclude.
\end{proof}
\end{cor}
In order to prove the representation formula \eqref{reprform}, we need to obtain some estimates for a more general linearized system of the form
\begin{equation}\label{linear}
\begin{cases}
-z_t-\mathrm{tr}(a(x)D^2z)+H_p(x,Du)Dz=\mathlarger{\dm{F}}(x,m(t))(\rho(t))+h(t,x)\,,\\
\rho_t-\mathrm{div}(a(x)D\rho)-\mathrm{div}(\rho(H_p(x,Du)+\tilde{b}))-\mathrm{div}(m H_{pp}(x,Du) Dz+c)=0\,,\\
z(T,x)=\mathlarger{\dm{G}}(x,m(T))(\rho(T))+z_T(x)\,,\qquad\rho(t_0)=\rho_0\,,\\
a(x)Dz\cdot\nu_{|\partial\Omega}=0\,,\quad\left(a(x)D\rho+\rho(H_p(x,Du)+\tilde{b})+mH_{pp}(x,Du) Dz+c\right)\cdot\nu_{|\partial\Omega}=0\,,
\end{cases}
\end{equation}
where we require
$$
z_T\in\mathcal{C}^{2+\alpha},\quad\rho_0\in\mathcal{C}^{-(1+\alpha)},\quad h\in \mathcal{C}^{0,\alpha}([t_0,T]\times\Omega),\quad c\in L^1([t_0,T]\times\Omega)\,.
$$
Moreover, $z_T$ satisfies
\begin{equation}\label{neumannzT}
aDz_T\cdot\nu_{|\partial\Omega}=0\,.
\end{equation}
A suitable definition of solution for this system is the following:
\begin{defn}\label{defn}
\begin{comment}
Let $\mathcal{C}^{-(2+\alpha),N}(\Omega))$ the dual space of $\{\phi\in\mathcal{C}^{2+\alpha}(\Omega)\mbox{ s.t. }a(x)D\phi(x)\cdot\nu(x)=0\ \forall x\in\partial\Omega\}$, endowed with the norm
$$
\norm{\rho}_{-(2+\alpha),N}=\sup\limits_{\substack{\norm{\phi}_{2+\alpha}\le 1\\aD\phi\cdot\nu_{|\partial\Omega}=0}}\langle \rho,\phi \rangle\,.
$$
\end{comment}
We say that a couple $(z,\rho)\in\mathcal{C}^{1,2+\alpha}\times\,\left(\mathcal{C}([0,T];\mathcal{C}^{-(1+\alpha),N}(\Omega))\cap L^1(Q_T)\right)$ is a solution of the system \eqref{linear} if
\begin{itemize}
\item $z$ is a classical solution of the linear equation;
\item $\rho$ is a distributional solution of the Fokker-Planck equation in the sense of Definition \ref{canzonenuova}.
\end{itemize}
\end{defn}
We start with the following existence result.
\begin{prop}\label{linearD}
Let hypotheses \ref{ipotesi} hold for $0<\alpha<1$. Then there exists a unique solution $(z,\rho)\in\mathcal{C}^{1,2+\alpha}\times\,\left(\mathcal{C}([0,T];\mathcal{C}^{-(1+\alpha),N}(\Omega))\cap L^1(Q_T)\right)$ of system \eqref{linear}. This solution satisfies, for a certain $p>1$,
\begin{equation}
\begin{split}\label{stimelin}
\norm{z}_{1,2+\alpha}+\sup\limits_t\norm{\rho(t)}_{-(1+\alpha),N}+\norm{\rho}_{L^p}\le CM\hspace{0.08cm},
\end{split}
\end{equation}
where $C$ depends on $H$ and where $M$ is given by
\begin{equation}\label{emme}
M:=\norm{z_T}_{2+\alpha}+\norm{\rho_0}_{-(1+\alpha)}+\norm{h}_{0,\alpha}+\norm{c}_{L^1}\,.
\end{equation}
\begin{proof}
As always, we can assume $t_0=0$ without loss of generality.
The main idea is to apply Schaefer's Theorem.\\
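Let us recall, for completeness, the form of Schaefer's fixed point theorem that we use (a standard statement, not specific to this paper): if $X$ is a Banach space and $\mathbf{\Phi}:X\to X$ is continuous and compact, and if the set
$$
\left\{\rho\in X\,:\,\rho=\sigma\mathbf{\Phi}(\rho)\ \mbox{ for some }\sigma\in[0,1]\right\}
$$
is bounded, then $\mathbf{\Phi}$ has a fixed point. Steps 1 and 2 below verify precisely these assumptions.\\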
\emph{Step 1: Definition of the map $\mathbf{\Phi}$ satisfying Schaefer's Theorem}.
We set $X:=\mathcal{C}([0,T];\mathcal{C}^{-(1+\alpha),N})$, endowed with the norm
\begin{equation*}
\norm{\phi}_X:=\sup\limits_{t\in[0,T]}\norm{\phi(t)}_{-(1+\alpha),N}\,.
\end{equation*}
For $\rho\in X$, we consider the classical solution $z$ of the following equation
\begin{equation}
\label{zlin}
\begin{cases}
-z_t-\tr{z}+H_p(x,Du)Dz=\mathlarger{\dm{F}}(x,m(t))(\rho(t))+h(t,x)\,,\\
z(T)=\mathlarger{\dm{G}}(x,m(T))(\rho(T))+z_T\,,\\
a(x)Dz\cdot\nu_{|\partial\Omega}=0\,.
\end{cases}
\end{equation}
We note that, from Hypotheses \ref{ipotesi}, we have
$$
\langle a(x)D_xG(x,m), \nu(x)\rangle_{|\partial\Omega}=0\quad\forall m\in\mathcal{P}(\Omega)\implies\left\langle a(x)D_x\dm{G}(x,m(T))(\mu(T)), \nu(x)\right\rangle_{|\partial\Omega}\!\!\!\!\!\!\!=0\,.
$$
Hence, compatibility conditions are satisfied for equation \eqref{zlin} and, from \emph{Theorem 5.1.21} of \cite{lunardi}, $z$ satisfies
\begin{equation}\label{stimz}
\begin{split}
\norm{z}_{1,2+\alpha}&\le C\left(\norm{z_T}_{2+\alpha}+\sup\limits_{t\in[0,T]}\norm{\rho(t)}_{-(2+\alpha),N}+\norm{h}_{0,\alpha}\right)\\&\le C\left(M+\sup\limits_{t\in[0,T]}\norm{\rho(t)}_{-(1+\alpha),N}\right)\,,
\end{split}
\end{equation}
where we also use hypothesis $(vi)$ of \ref{ipotesi}, for the boundary condition of $\dm{F}$.
Then we define $\mathbf{\Phi}(\rho):=\tilde{\rho}$, where $\tilde{\rho}$ is the solution in the sense of Definition \ref{canzonenuova} to:
\begin{equation}
\label{plin}
\begin{cases}
\tilde{\rho}_t-\mathrm{div}(a(x)D\tilde{\rho})-\mathrm{div}(\tilde{\rho} (H_p(x,Du)+\tilde{b}))-\mathrm{div}(mH_{pp}(x,Du) Dz+c)=0\\
\tilde{\rho}(0)=\rho_0\\
\left(a(x)D\tilde{\rho}+\tilde{\rho}(H_p(x,Du)+\tilde{b})+mH_{pp}(x,Du) Dz+c\right)\cdot\nu_{|\partial\Omega}=0
\end{cases}\hspace{0.08cm}.
\end{equation}
Thanks to Proposition \ref{peggiodellagerma}, we have $\tilde{\rho}\in X$. We want to prove that the map $\mathbf{\Phi}$ is continuous and compact.\\
For the compactness, let $\{\rho_n\}_n\subset X$ be a sequence with $\norm{\rho_n}_X\le{C}$ for a certain $C>0$. For each $n$ we consider the solutions $z_n$ and $\tilde{\rho}_n$ of \eqref{zlin} and \eqref{plin} associated to $\rho_n$.\\
Using \eqref{stimz}, we have $\norm{z_n}_{1,2+\alpha}\le C_1$, where $C_1$ depends on $C$. Then, thanks to Ascoli-Arzel\`a's Theorem, and using also \eqref{precisissimongulaeva}, there exists $z$ such that, up to a subsequence, $z_n\to z$ at least in $\mathcal{C}([0,T];\mathcal{C}^1(\Omega))$.
Using the pointwise convergence of $Dz_n$ and the $L^p$ boundedness of $m$ stated in \eqref{mlp}, we immediately obtain
$$
mH_{pp}(x,Du)Dz_n\,+\,c\to mH_{pp}(x,Du)Dz\,+\,c\qquad\mbox{in }L^1(Q_T)\,,
$$
which immediately implies
$$
\mathrm{div}(mH_{pp}(x,Du)Dz_n\,+\,c)\to\mathrm{div}{(mH_{pp}(x,Du)Dz\,+\,c)}\qquad\mbox{in }L^1(W^{-1,\infty})\,.
$$
Hence, the stability results proved in Proposition \ref{peggiodellagerma} imply that $\tilde{\rho}_n\to\tilde{\rho}$ in $X$, where $\tilde{\rho}$ is the solution associated with $Dz$. This proves the compactness of $\mathbf{\Phi}$.
The continuity of $\mathbf{\Phi}$ can be proved with the same computations as for the compactness.
Finally, in order to apply Schaefer's theorem, we have to prove that
$$
\exists M>0 \mbox{ s.t. } \rho=\sigma\mathbf{\Phi}(\rho)\ \mbox{ and }\sigma\in[0,1]\implies\norm{\rho}_X\le M\hspace{0.08cm}.
$$
We will prove in the next step that, if $\rho=\sigma\mathbf{\Phi}(\rho)$, then the couple $(z,\rho)$ satisfies \eqref{stimelin}. This allows us to apply Schaefer's theorem and also gives us the desired estimate \eqref{stimelin}, since each solution $(z,\rho)$ of the system satisfies $\rho=\sigma\mathbf{\Phi}(\rho)$ with $\sigma=1$.\\
\emph{Step 2: Estimate of $\rho$ and $z$}. Let $(\rho,\sigma)\in X\times[0,1]$ such that $\rho=\sigma\mathbf{\Phi}(\rho)$. Then the couple $(z,\rho)$ satisfies
\begin{equation*}
\begin{cases}
-z_t-\mathrm{tr}(a(x)D^2z)+H_p(x,Du)Dz=\mathlarger{\dm{F}}(x,m(t))(\rho(t))+h(t,x)\\
\rho_t-\mathrm{div}(a(x)D\rho)-\mathrm{div}(\rho(H_p(x,Du)+\tilde{b}))-\sigma\mathrm{div}(mH_{pp}(x,Du) Dz+c)=0\\
z(T,x)=\mathlarger{\dm{G}}(x,m(T))(\rho(T))+z_T(x)\hspace{2cm}\rho(0)=\sigma\rho_0\\
a(x)Dz\cdot\nu_{|\partial\Omega}=0\hspace{1cm}\left(a(x)D\rho+\rho(H_p(x,Du)+\tilde{b})+\sigma(mH_{pp}(x,Du) Dz+c)\right)\cdot\nu_{|\partial\Omega}=0
\end{cases}\hspace{0.08cm}.
\end{equation*}
We want to use $z$ as test function for the equation of $\rho$. This is allowed since $z$ satisfies \eqref{hjbfp} with
\begin{align*}
\psi=\dm{F}(x,m(t))(\rho(t))+h(t,x)\in L^\infty(\Omega)\,,\qquad\xi=\dm{G}(x,m(T))(\rho(T))+z_T(x)\in\mathcal{C}^{1+\alpha,N}
\end{align*}
We obtain from the weak formulation of $\rho$:
\begin{equation*}
\begin{split}
&\ensuremath{\int_{\Omega}} \left(\rho(T,x)z(T,x)-\sigma\rho_0(x)z(0,x)\right)dx=-\sigma\ensuremath{\int_{0}^{T}\int_{\Omega}}\langle c,Dz\rangle dxdt+\\
-&\ensuremath{\int_{0}^{T}\int_{\Omega}}\rho(t,x)\left(\dm{F}(x,m(t))(\rho(t))+h\right)dxdt-\sigma\ensuremath{\int_{0}^{T}\int_{\Omega}} m\langle H_{pp}(x,Du) Dz,Dz\rangle\hspace{0.08cm} dxdt\,.
\end{split}
\end{equation*}
Using the terminal condition of $z$ and the monotonicity of $F$ and $G$, we get a first estimate:
\begin{equation}\label{stimasigma}
\begin{split}
\sigma\ensuremath{\int_{0}^{T}\int_{\Omega}} m\langle H_{pp}(x,Du) Dz,Dz\rangle\hspace{0.08cm} dxdt
\le&\sup\limits_{t\in[0,T]}\norm{\rho(t)}_{-(2+\alpha),N}\norm{z_T}_{2+\alpha}+\norm{\rho}_{L^p}\norminf{h}\\
+&\norm{z}_{1,2+\alpha}\left(\norm{\rho_0}_{-(2+\alpha),N}+\norm{c}_{L^1}\right)\\\le\,&M \left(\sup\limits_{t\in[0,T]}\norm{\rho(t)}_{-(1+\alpha),N}+\norm{\rho}_{L^1}+\norm{z}_{1,2+\alpha}\right)\,.
\end{split}
\end{equation}
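Here the monotonicity of $F$ and $G$ is used in the following form: the terms
$$
\ensuremath{\int_{0}^{T}\int_{\Omega}}\rho(t,x)\,\dm{F}(x,m(t))(\rho(t))\,dxdt\qquad\mbox{and}\qquad\ensuremath{\int_{\Omega}}\rho(T,x)\,\dm{G}(x,m(T))(\rho(T))\,dx
$$
are non-negative, so they appear with a favourable sign and can be dropped when rearranging the duality identity above (this is only a reminder of how the assumption enters; no additional hypothesis is used).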
We already know an initial estimate on $z$ in \eqref{stimz}. Now we need to estimate $\rho$.
Using \eqref{stimefokker} we obtain
\begin{equation}\label{duality}
\sup\limits_{t\in[0,T]}\norm{\rho}_{-(1+\alpha),N}+\norm{\rho}_{L^p}\le C\left(\norm{\sigma mH_{pp}(x,Du)Dz}_{L^1}+\norm{c}_{L^1}+\norm{\rho_0}_{-(1+\alpha)}\right)
\end{equation}
As regards the first term in the right hand side, we can use H\"{o}lder's inequality and \eqref{stimasigma} to obtain
\begin{align*}
&\norm{\sigma mH_{pp}(x,Du)Dz}_{L^1}=\sigma\sup\limits_{\substack{\norminf{\phi}\le 1\\\phi\in L^\infty(Q_T;\R^d)}}\ensuremath{\int_{0}^{T}\int_{\Omega}} m\langle H_{pp}(x,Du)Dz,\phi\rangle\,dxdt\\
&\le\sigma\left(\ensuremath{\int_{0}^{T}\int_{\Omega}} m\langle H_{pp}(x,Du) Dz,Dz\rangle\hspace{0.08cm} dxdt\right)^\miezz\left(\ensuremath{\int_{0}^{T}\int_{\Omega}} m\langle H_{pp}(x,Du)\phi,\phi\rangle\hspace{0.08cm} dxdt\right)^\miezz\\
&\le
M^\miezz\left(\sup\limits_{t\in[0,T]}\norm{\rho(t)}^\miezz_{-(1+\alpha),N}+\norm{\rho}^\miezz_{L^1}+\norm{z}^\miezz_{1,2+\alpha}\right)\,
\end{align*}
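In the last step we also used (a short justification, implicit above) that $m(t)$ has unit mass for every $t$ and that $H_{pp}(\cdot,Du)$ is bounded, as used implicitly throughout since $Du$ is bounded, so that, for $\norminf{\phi}\le1$,
$$
\ensuremath{\int_{0}^{T}\int_{\Omega}} m\langle H_{pp}(x,Du)\phi,\phi\rangle\hspace{0.08cm} dxdt\le T\,\norminf{H_{pp}(\cdot,Du)}\,,
$$
together with \eqref{stimasigma} and $\sigma\le 1$; the resulting constant is absorbed into the generic constants.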
Putting these estimates into \eqref{duality} we obtain
\begin{align*}
\sup\limits_{t\in[0,T]}\norm{\rho}_{-(1+\alpha),N}+\norm{\rho}_{L^p}\le C\left( M+M^\miezz\left(\sup\limits_{t\in[0,T]}\norm{\rho(t)}^\miezz_{-(1+\alpha),N}+\norm{\rho}^\miezz_{L^1}+\norm{z}^\miezz_{1,2+\alpha}\right)\right)\,.
\end{align*}
Using a generalized Young's inequality with suitable coefficients, we get
\begin{align}\label{stimarho}
\sup\limits_{t\in[0,T]}\norm{\rho}_{-(1+\alpha),N}+\norm{\rho}_{L^p}\le C\left(M+M^\miezz\norm{z}_{1,2+\alpha}^\miezz\right)\hspace{0.08cm}.
\end{align}
This gives us an initial estimate for $\rho$, depending on the estimate of $z$.
Coming back to \eqref{stimz}, \eqref{stimarho} implies
\begin{align*}
\norm{z}_{1,2+\alpha}\le C\left(M+M^\miezz\norm{z}_{1,2+\alpha}^\miezz\right)\hspace{0.08cm}.
\end{align*}
Using a generalized Young's inequality with suitable coefficients, this implies
$$
\norm{z}_{1,2+\alpha}\le CM\,.
$$
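For the reader's convenience, we make the absorption explicit (the same elementary argument is used for \eqref{stimarho} above): by Young's inequality, for every $\varepsilon>0$,
$$
CM^\miezz\norm{z}_{1,2+\alpha}^\miezz\le\frac{C^2}{2\varepsilon}M+\frac{\varepsilon}{2}\norm{z}_{1,2+\alpha}\,,
$$
so choosing $\varepsilon<2$ (say $\varepsilon=1$) the last term can be absorbed in the left-hand side of the previous estimate, yielding $\norm{z}_{1,2+\alpha}\le CM$ with a possibly larger constant $C$.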
Plugging this estimate in \eqref{stimarho}, we finally obtain
\begin{align*}
\norm{z}_{1,2+\alpha}+\sup\limits_{t\in[0,T]}\norm{\rho}_{-(1+\alpha),N}+\norm{\rho}_{L^p}\le CM\hspace{0.08cm}.
\end{align*}
This concludes the existence result.\\\\
\emph{Step 3. Uniqueness}. Let $(z_1,\rho_1)$ and $(z_2,\rho_2)$ be two solutions of \eqref{linear}. Then the couple $(z,\rho):=(z_1-z_2,\rho_1-\rho_2)$ satisfies the following linear system:
\begin{equation*}
\begin{cases}
-z_t-\mathrm{tr}(a(x)D^2z)+H_p(x,Du)Dz=\mathlarger{\dm{F}}(x,m(t))(\rho(t))\,,\\
\rho_t-\mathrm{div}(a(x)D\rho)-\mathrm{div}(\rho(H_p(x,Du)+\tilde{b}))-\mathrm{div}(m H_{pp}(x,Du) Dz)=0\,,\\
z(T,x)=\mathlarger{\dm{G}}(x,m(T))(\rho(T))\,,\qquad\rho(t_0)=0\,,\\
a(x)Dz\cdot\nu_{|\partial\Omega}=0\,,\quad\left(a(x)D\rho+\rho(H_p(x,Du)+\tilde{b})+mH_{pp}(x,Du) Dz\right)\cdot\nu_{|\partial\Omega}=0\,,
\end{cases}
\end{equation*}
i.e., a system of the form \eqref{linear} with $h=c=z_T=\rho_0=0$. Then estimate \eqref{stimelin} tells us that
$$
\norm{z}_{1,2+\alpha}+\sup\limits_{t\in[0,T]}\norm{\rho}_{-(1+\alpha),N}+\norm{\rho}_{L^p}\le 0\,,
$$
and so $z=0$ and $\rho=0$. This concludes the proof of the proposition.
\end{proof}
\end{prop}
\begin{comment}
Once proved the existence result of \eqref{linear}, and consequently of \eqref{linDuDm}, we can obtain a different kind of regularity, that will be essential in order to prove the Lipschitz estimate of $\dm{U}$.
\begin{cor}
Suppose hypotheses \eqref{ipotesi} be satisfied. Then, if $(z,\rho)$ is a solution of \eqref{linear} and $c\in L^1(Q_T)$, we have
\begin{equation}\label{stimauno}
\norm{z}\amu+\sup\limits_{t\in[t_0,T]}\norm{\rho(t)}_{-1}\le C\left(\norm{z_T}_{1+\alpha}+\sup\limits_{t\in[t_0,T]}\norm{b(t,\cdot)}_1+\norm{c}_{L^1}+\norm{\rho_0}_{-1}\right)\,,
\end{equation}
where $\norm{c}_{L^1}=\int_{t_0}^T\norm{c(t)}_0 dt$, and $\norm{c(t)}_0$ is defined in this way:
$$
\norm{c(t)}_{0}:=\sup\limits_{\norminf{\phi}\le 1}\langle c(t),\phi\rangle\,.
$$
Moreover, if $(v,\mu)$ is a solution of \eqref{linDuDm}, we have $\forall t\in(t_0,T]$
\begin{equation}\label{stimadue}
\norm{\mu(t)}_{L^1(\Omega)}\le \frac{C}{\sqrt{t-t_0}}\norm{\mu_0}_{-1}\,.
\end{equation}
\begin{proof}
We suppose, without loss of generality, $t_0=0$.
We start from \eqref{stimauno}.
From now on, we call for simplicity
$$
R:=\norm{z_T}_{1+\alpha}+\sup\limits_{t\in[t_0,T]}\norm{b(t,\cdot)}_1+\norm{c}_{L^1}+\norm{\rho_0}_{-1}\,.
$$
Thanks to \emph{Theorem 5.1.18} of \cite{lunardi}, we have
$$
\norm{z}\amu\le C\left( \norm{z_T}_{1+\alpha}+\sup\limits_t\norm{\rho(t)}_{-1}+\norminf{b} \right)\le C\left(R+\sup\limits_t\norm{\rho(t)}_{-1}\right)\,.
$$
We have to estimate $\sup\limits_{t}\norm{\rho(t)}_{-1}$. First, the duality computation \eqref{duality} with $\sigma=1$ implies in this case
$$
\ensuremath{\int_{0}^{T}\int_{\Omega}} m\langle H_{pp}(x,Du) Dz,Dz\rangle\, dxdt\le R\left(\sup\limits_t\norm{\rho(t)}_{-1}+\norm{z}\amu\right)\,.
$$
In order to estimate $\sup\limits_{t}\norm{\rho(t)}_{-1}$, we consider the solution $w$ of \eqref{eqw} with $\xi$ a Lipschitz function.
Then $w$ is a Lipschitz function with respect to $x$ and we have
$$
\sup\limits_t\norm{w(t,\cdot)}_1\le C\norm{\xi}_1\,.
$$
Equation \eqref{sonno} with $\sigma=1$ implies here, with the same techniques of the previous Theorem,
$$
\ensuremath{\int_{\Omega}}\rho(t)\xi\,dx\le C\left(R+R^\miezz(\sup\limits_t\norm{\rho(t)}_{-1}^\miezz+\norm{z}\amu^\miezz)\right)\norm{\xi}_1\,.
$$
Then, passing to the sup when $\norm{\xi}_1\le 1$ and using the estimate on $\norm{z}\amu$ and a generalized Young's inequality, we easily obtain \eqref{stimauno}.
Now, we consider a solution $(v,\mu)$ of \eqref{linDuDm}. Since here $b=c=z_T=0$, we have $R=\norm{\mu_0}_{-1}$.
Thanks to \eqref{stimauno} we have
\begin{equation}\label{corri}
\norm{v}\amu\le C\norm{\mu_0}_{-1}\,.
\end{equation}
Taking $w$ as solution of \eqref{eqw} with $\xi\in L^\infty(\Omega)$, we have
$$
\norm{w(s,\cdot)}_1\le C\sqrt{t-s}\norminf{\xi}
$$
Moreover, \eqref{porretta} and \eqref{corri} easily imply
$$
\intm{t}m|Dz|^2\,dxds\le C\norm{\mu_0}_{-1}^2\,,\qquad\intm{t}m|Dw|^2\, dxds\le C\norminf{\xi}^2\,.
$$
Then, from \eqref{sonno} we easily obtain
$$
\ensuremath{\int_{\Omega}}\xi\mu(t,dx)\le C\norminf{\xi}\norm{\mu_0}_{-1}+\frac1{\sqrt{t}}\norminf{\xi}\,.
$$
Passing to the sup when $\norminf{\xi}\le 1$, we obtain \eqref{stimadue}.
\end{proof}
\end{cor}
Now we come back to system \eqref{linDuDm}.
Since this system is a particular case of \eqref{linear}, with $b=c=z_T=0$, existence and uniqueness of solutions $(v,\mu)$ in the space $\mathcal{C}([t_0,T];\mathcal{C}^{2+\alpha}\times\mathcal{C}^{-\alpha})$ is already proved in Proposition \ref{linearD}.
But, in order to work with the Master Equation, we need further estimates for this system in the space $\mathcal{C}^{-(2+\alpha),N}$, i.e. the dual space of $\{\phi\in\mathcal{C}^{2+\alpha}(\Omega)\mbox{ s.t. }a(x)D\phi(x)\cdot\nu(x)=0\ \forall x\in\partial\Omega\}$, endowed with the norm
The following result gives us these estimates.
\begin{prop}\label{linearM}
The unique solution $(v,\mu)$ of the problem \eqref{linDuDm} satisfies
\begin{equation}\label{stimelin0}
\norm{v}\amd +\norm{\mu}_{\frac{\alpha}{2},-(2+\alpha),N}\le C\norm{\mu_0}_{-(2+\alpha),N}\,.
\end{equation}
\begin{proof}
As always, we can assume $t_0=0$ without loss of generality.
We take the couple $(v,\mu)$ solution of \eqref{linDuDm} in $\mathcal{C}([t_0,T];\mathcal{C}^{2+\alpha}\times\mathcal{C}^{-\alpha})$.
Since $\{\phi\in\mathcal{C}^{2+\alpha}(\Omega)\mbox{ s.t. }a(x)D\phi(x)\cdot\nu(x)=0\ \forall x\in\partial\Omega\}\subsetneq\mathcal{C}^\alpha$, passing to the dual spaces we have
$$
\mathcal{C}^{-\alpha}\subset\mathcal{C}^{-(2+\alpha),N}\,.
$$
Hence, $(v,\mu)$ belongs to the space $\mathcal{C}([t_0,T];\mathcal{C}^{2+\alpha}\times\mathcal{C}^{-(2+\alpha),N})$.\\
In order to obtain \eqref{stimelin0}, we have to readapt the computations developed in Proposition \ref{linearD} in this framework.
From \emph{Theorem IV.5.3} of \cite{lsu}, $v$ satisfies
$$
\norm{v}\amd\le C\left(\norm{v(T)}_{2+\alpha}+\norm{\dm{F}(x,m(t))(\mu(t))}\am\right)\,,
$$
and so, using the regularity of $F$ and $G$,
\begin{equation}\label{stimv}
\norm{v}\amd\le C\norm{\mu}_{\frac\alpha 2,-(2+\alpha),N}\,.
\end{equation}
We stress the fact that, in order to obtain \eqref{stimv}, we strongly need the boundary conditions $(vi)$ of Hypotheses \eqref{ipotesi}.
The duality argument between $v$ and $\mu$ tells us that
\begin{equation*}
\begin{split}
&\ensuremath{\int_{\Omega}} \left(\mu(T,x)v(T,x)-\mu_0(x)v(0,x)\right)dx=
-\ensuremath{\int_{0}^{T}\int_{\Omega}}\mu(t,x)\dm{F}(x,m(t))(\mu(t))\,dxdt\\-&\ensuremath{\int_{0}^{T}\int_{\Omega}} m\langle H_{pp}(x,Dv) Dv,Dv\rangle\hspace{0.08cm} dxdt\hspace{0.08cm}.
\end{split}
\end{equation*}
Using the terminal condition of $v$ and the monotonicity of $F$ and $G$, we get this initial estimate:
\begin{equation}\label{stimasigma0}
\begin{split}
\ensuremath{\int_{0}^{T}\int_{\Omega}} m\langle H_{pp}(x,Du) Dv,Dv\rangle\hspace{0.08cm} dxdt\le \norm{\mu_0}_{-(2+\alpha),N}\norm{v}\amd
\end{split}
\end{equation}
We already know an initial estimate on $v$ in \eqref{stimv}. Now we need to estimate $\mu$, and we will do it by duality. Let $t\in(0,T]$, $\xi\in\mathcal{C}^{2+\alpha}$ such that $a(x)D\xi\cdot\nu_{|\partial\Omega}=0$ and let $\phi$ be the solution to the backward equation
\begin{equation}\label{eqw0}
\begin{cases}
-w_t-\tr{w}+H_p(x,Du)Dw=0\,,\hspace{2cm}\mbox{ in }[0,t]\times\Omega\,,\\
w(t)=\xi\,,\\
\bdone{w}\,.
\end{cases}\hspace{0.08cm}.
\end{equation}
Thanks to \emph{Theorem 5.1.20} of \cite{lunardi}, we have
\begin{equation}\label{sphi}
\norm{w}\amd\le C\norm{\xi}_{2+\alpha}\,.
\end{equation}
By duality, we obtain
\begin{equation}\label{sonno0}
\begin{split}
\ensuremath{\int_{\Omega}} \mu(t)\xi\hspace{0.08cm} dx-\ensuremath{\int_{\Omega}} \mu_0 w(0)\hspace{0.08cm} dx=-\intm{t}m\langle H_{pp}(x,Du) Dv,Dw\rangle\hspace{0.08cm} dxds\,.
\end{split}
\end{equation}
The last term in the left hand side is easily bounded by
$$
C\norm{\mu_0}_{-(2+\alpha),N}\norm{w}\amd\le C\norm{\mu_0}_{-(2+\alpha),N}\norm{\xi}_{2+\alpha}\,.
$$
As regards the term in the right hand side, we can use H\"{o}lder's inequality and \eqref{stimasigma0} to bound the integral by
\begin{align*}
&\left(\intm{t}m\langle H_{pp}(x,Du) Dv,Dv\rangle\hspace{0.08cm} dxds\right)^\miezz\left(\intm{t}m\langle H_{pp}(x,Du) Dw,Dw\rangle\hspace{0.08cm} dxds\right)^\miezz\hspace{0.08cm}\le\\\le\hspace{0.08cm}&C\norm{\xi}_{2+\alpha}\sigma\norm{\mu_0}_{-(2+\alpha),N}^\miezz\norm{v}\amd^\miezz\,.
\end{align*}
Taking the $\sup$ with $\norm{\xi}_{2+\alpha}\le1$, we get
\begin{align*}
\sup\limits_{t\in[0,T]}\norm{\mu(t)}_{-(2+\alpha),N}\le C\left(\norm{\mu_0}_{-(2+\alpha),N}+\norm{\mu_0}_{-(2+\alpha),N}^\miezz\norm{v}\amd^\miezz\right)\,.
\end{align*}
This gives us an initial space estimate for $\mu$, depending on $v$.\\
As regards the time estimate, we argue in a similar way.
Given $t\in(0,T]$, we take $w$ as in \eqref{eqw0}. For $s<t$, we use $w$ in duality with $\mu$ in $(s,t)\times\Omega$:
\begin{align*}
\ensuremath{\int_{\Omega}} (\mu(t)-\mu(s))\xi\hspace{0.08cm} dx=\ensuremath{\int_{\Omega}}\mu(s)(w(s)-w(t))\,+&\int_{s}^{t}\ensuremath{\int_{\Omega}}(\mu w)_t dxd\tau\le\\
\le\norm{w(s)-w(t)}_{2+\alpha}\norm{\mu(s)}_{-(2+\alpha),N}-&\int_s^t\ensuremath{\int_{\Omega}} m H_{pp}(x,Du) DvDw\hspace{0.08cm} dxd\tau\,.
\end{align*}
The first term, thanks to \eqref{sphi}, is bounded by
\begin{align*}
C|t-s|^{\frac{\alpha}2}\norm{\xi}_{2+\alpha}\left(\norm{\mu_0}_{-(2+\alpha),N}+\norm{\mu_0}_{-(2+\alpha),N}^\miezz\norm{v}\amd^\miezz\right)\,.
\end{align*}
For the second term we argue as in the estimation of $sup\norm{\mu(t)}$:
\begin{align*}
&\qquad\int_s^t\ensuremath{\int_{\Omega}} m H_{pp}(x,Du) DvDw\hspace{0.08cm} dxd\tau\le\\&\le \left(\ensuremath{\int_{0}^{T}\int_{\Omega}} m H_{pp}(x,Du) DvDv\right)^\miezz\left(\int_s^t\ensuremath{\int_{\Omega}} m H_{pp}(x,Du) DwDw\right)^\miezz\le\\&\le\hspace{0.08cm} C\norm{\xi}_{2+\alpha}|t-s|^\miezz\norm{\mu_0}_{-(2+\alpha)}^\miezz\norm{v}\amd^\miezz\,.
\end{align*}
Putting these inequalities together and passing to the $sup$ when $\norm{\xi}_{2+\alpha}\le 1$, we obtain
\begin{align*}
\norm{\mu(t)-\mu(s)}_{-(2+\alpha),N}\le C|t-s|^\miezz\left(\norm{\mu_0}_{-(2+\alpha),N}+\norm{\mu_0}_{-(2+\alpha)}^\miezz\norm{v}\amd^\miezz\right)\,,
\end{align*}
and so
\begin{align}
\label{stimamu}
\norm{\mu}_{\frac\alpha 2,-(2+\alpha),N}\le C\left(\norm{\mu_0}_{-(2+\alpha),N}+\norm{\mu_0}_{-(2+\alpha),N}^\miezz\norm{v}\amd^\miezz\right)\,.
\end{align}
Coming back to \eqref{stimv}, \eqref{stimamu} implies
\begin{align*}
\norm{v}\amd\le C\left(\norm{\mu_0}_{-(2+\alpha),N}+\norm{\mu_0}_{-(2+\alpha),N}^\miezz\norm{v}\amd^\miezz\right)\,.
\end{align*}
Using a generalized Young's inequality with suitable coefficients, this implies
\begin{align}\label{finalmente0}
\norm{v}\amd+\norm{\mu}_{\frac\alpha 2,-(2+\alpha),N}\le C\norm{\mu_0}_{-(2+\alpha),N}\hspace{0.08cm}.
\end{align}
This concludes the proof.
\end{proof}
\end{prop}
\end{comment}
We are ready to prove that \eqref{linDuDm} has a fundamental solution. This solution will be the desired derivative $\dm{U}$.
\begin{prop}
Equation \eqref{linDuDm} has a fundamental solution, i.e. there exists a function $K:[0,T]\times\Omega\times\mathcal{P}(\Omega)\times\Omega\to\R$ such that, for any $(t_0,m_0,\mu_0)$ we have
\begin{equation}\label{repres}
v(t_0,x)=_{-(1+\alpha)}\!\!\langle \mu_0,K(t_0,x,m_0,\cdot)\rangle_{1+\alpha}
\end{equation}
Moreover, $K(t_0,\cdot,m_0,\cdot)\in\mathcal{C}^{2+\alpha}(\Omega)\times \mathcal{C}^{1+\alpha}(\Omega)$ with
\begin{equation}\label{kappa}
\sup\limits_{(t,m)\in[0,T]\times\mathcal{P}(\Omega)}\norm{K(t,\cdot,m,\cdot)}_{2+\alpha,1+\alpha}\le C\,,
\end{equation}
and the second derivatives w.r.t. $x$ and the first derivatives w.r.t. $y$ are continuous in all variables.
\begin{proof}
From now on, we indicate with $v(t,x;\mu_0)$ the solution of the first equation of \eqref{linDuDm} related to $\mu_0$.
We start considering, for $y\in\Omega$, $\mu_0=\delta_y$, the Dirac function at $y$. We define
$$
K(t_0,x,m_0,y)=v(t_0,x;\delta_y)
$$
Thanks to \eqref{stimelin}, one immediately knows that $K$ is twice differentiable w.r.t. $x$ and
$$
\norm{K(t_0,\cdot,m_0,y)}_{2+\alpha}\le C\norm{\delta_y}_{-(1+\alpha)}=C
$$
Moreover, we can use the linearity of the system \eqref{linear} to obtain
$$
\frac{K(t_0,x,m_0,y+he_j)-K(t_0,x,m_0,y)}h=v(t_0,x;\Delta_{h,j}\delta_{y})\,,
$$
where $\Delta_{h,j}\delta_{y}=\frac1h(\delta_{y+he_j}-\delta_y)$. Using stability results for \eqref{linDuDm}, proved previously, we can pass to the limit and find that
$$
\frac{\partial K}{\partial y_j}(t_0,x,m_0,y)=v(t_0,x;-\partial_{y_j}\delta_y)\,,
$$
where the derivative of the Dirac delta function is in the sense of distribution.
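The convergence $\Delta_{h,j}\delta_y\to-\partial_{y_j}\delta_y$ in $\mathcal{C}^{-(1+\alpha)}$ can be checked directly (a short sketch): for $\phi\in\mathcal{C}^{1+\alpha}(\Omega)$ with $\norm{\phi}_{1+\alpha}\le1$, the mean value theorem gives
$$
\left\langle\Delta_{h,j}\delta_y+\partial_{y_j}\delta_y,\phi\right\rangle=\frac{\phi(y+he_j)-\phi(y)}{h}-\partial_{y_j}\phi(y)=\partial_{y_j}\phi(\bar{y}_h)-\partial_{y_j}\phi(y)
$$
for some $\bar{y}_h$ on the segment between $y$ and $y+he_j$, and the right-hand side is bounded by $|h|^\alpha$, since the H\"older seminorm of $\partial_{y_j}\phi$ is at most $1$. The stability of \eqref{linDuDm} with respect to the initial datum then allows the passage to the limit above.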
Since $\partial_{y_i}\delta_y$ is uniformly bounded in $\mathcal{C}^{-(1+\alpha)}$ for all $i$ and all $y\in\Omega$, from \eqref{stimelin} we deduce that the derivatives $\partial_{y_i}K$, together with their first and second derivatives with respect to $x$, are well defined and bounded.
The representation formula \eqref{repres} is an immediate consequence of the linearity of the equation and of the density of the linear span of Dirac masses. This concludes the proof.
\end{proof}
\end{prop}
Now we are ready to prove the differentiability of the function $U$ with respect to the measure $m$.
In particular, we want to prove that this fundamental solution $K$ is actually the derivative of $U$ with respect to the measure.
\begin{thm}
Let $(u_1,m_1)$ and $(u_2,m_2)$ be two solutions of the Mean Field Games system \eqref{meanfieldgames}-\eqref{fame}, associated with the starting initial conditions $(t_0,m_0^1)$ and $(t_0,m_0^2)$.
Let $(v,\mu)$ be the solution of the linearized system \eqref{linDuDm} related to $(u_2,m_2)$, with initial condition $(t_0,m_0^1-m_0^2)$. Then we have
\begin{equation}\label{boundmder}
\norm{u_1-u_2-v}_{1,2+\alpha}+\sup\limits_{t\in[0,T]}\norm{m_1(t)-m_2(t)-\mu(t)}_{-(1+\alpha),N}\le C\mathbf{d}_1(m_0^1,m_0^2)^{2}\,,
\end{equation}
Consequently, the function $U$ defined in \eqref{U} is differentiable with respect to $m$.
\begin{proof}
We call $(z,\rho)=(u_1-u_2-v,m_1-m_2-\mu)$. Then $(z,\rho)$ satisfies
\begin{equation*}
\begin{cases}
-z_t-\mathrm{tr}(a(x)D^2z)+H_p(x,Du_2)Dz=\mathlarger{\dm{F}}(x,m_2(t))(\rho(t))+h(t,x)\,,\\
\rho_t-\mathrm{div}(a(x)D\rho)-\mathrm{div}(\rho(H_p(x,Du_2)+\tilde{b}))-\mathrm{div}(m H_{pp}(x,Du_2) Dz+c)=0\,,\\
z(T,x)=\mathlarger{\dm{G}}(x,m_2(T))(\rho(T))+z_T(x)\,,\qquad\rho(t_0)=0\,,\\
a(x)Dz\cdot\nu_{|\partial\Omega}=0\,,\quad\left(a(x)D\rho+\rho(H_p(x,Du)+\tilde{b})+mH_{pp}(x,Du) Dz+c\right)\cdot\nu_{|\partial\Omega}=0\,,
\end{cases}
\end{equation*}
where
\begin{align*}
&h(t,x)=h_1(t,x)+h_2(t,x)\,,\\
&h_1=-\int_0^1 (H_p(x,sDu_1+(1-s)Du_2)-H_p(x,Du_2))\cdot D(u_1-u_2)\,ds\,,\\
&h_2=\int_0^1\!\ensuremath{\int_{\Omega}}\left(\dm{F}(x,sm_1(t)+(1-s)m_2(t),y)-\dm{F}(x,m_2(t),y)\right)(m_1(t)-m_2(t))(dy)ds,\\
&c(t)=c_1(t)+c_2(t)\,,\\
&c_1(t)=(m_1(t)-m_2(t))H_{pp}(x,Du_2)(Du_1-Du_2)\,,\\
&c_2(t)=m_1\int_0^1\left(H_{pp}(x,sDu_1+(1-s)Du_2)-H_{pp}(x,Du_2)\right)(Du_1-Du_2)\,ds\,,\\
&z_T=\int_0^1\ensuremath{\int_{\Omega}}\left(\dm{G}(x,sm_1(T)+(1-s)m_2(T),y)\right.\\
&\left.\hspace{6cm}-\dm{G}(x,m_2(T),y)\right)(m_1(T)-m_2(T))(dy)ds\,.
\end{align*}
So, \eqref{stimelin} implies that
\begin{equation}\label{rogueuno}
\norm{u_1-u_2-v}_{1,2+\alpha}+\sup\limits_{t\in[0,T]}\norm{m_1(t)-m_2(t)-\mu(t)}_{-(1+\alpha),N}\le C\left(\norm{h}_{0,\alpha}+\norm{c}_{L^1}+\norm{z_T}_{2+\alpha}\right)\,.
\end{equation}
Now we bound the right-hand side term in order to obtain \eqref{boundmder}.
We start with the term $h=h_1+h_2$. We can write
$$
h_1=-\int_0^1\int_0^1 s\, \langle H_{pp}(x,rsDu_1+(1-rs)Du_2)\,(Du_1-Du_2)\,,\, (Du_1-Du_2)\rangle\,drds\,.
$$
Using the properties of the H\"{o}lder norm and \eqref{lipsch}, we immediately obtain
\begin{align*}
\norm{h_1}_{0,\alpha}\le C\norm{D(u_1-u_2)}_{0,\alpha}^2\le C\mathbf{d}_1(m_{0}^1,m_{0}^2)^2\,.
\end{align*}
As regards the $h_2$ term, we can immediately bound the quantity
$$
|h_2(t,x)-h_2(t,y)|
$$
by
\begin{align*}
|x-y|^\alpha\mathbf{d}_1(m_1(t),m_2(t))\int_0^1\norm{D_m F(\cdot,sm_1(t)+(1-s)m_2(t),\cdot)-D_m F(\cdot,m_2(t),\cdot)}_{\alpha,\infty}ds\,.
\end{align*}
Using the regularity of $F$ and \eqref{lipsch}, we get
$$
\norm{h_2}_{0,\alpha}=\sup\limits_{t\in[0,T]}\norm{h_2(t,\cdot)}_\alpha\le C\mathbf{d}_1(m_0^1,m_0^2)^2\,.
$$
\begin{comment}
For the time regularity, we use the notation $m_{1+s}(t)=sm_1(t)+(1-s)m_2(t)$. Then we have
\begin{align*}
|b(t,x)-&b(s,x)|\le\int_0^1\ensuremath{\int_{\Omega}}\left(\dm{F}(m_{1+s}(t))-\dm{F}(m_{1+s}(r))\right)(m_1(t)-m_2(t))(dy)ds\\
+&\int_0^1\ensuremath{\int_{\Omega}} \left(\dm{F}(m_2(r))-\dm{F}(m_2(t))\right)(m_1(t)-m_2(t))(dy)ds\\+&\int_0^1\ensuremath{\int_{\Omega}}\left(\dm{F}(m_s(r))-\dm{F}(m_2(r))\right)(m_1(t)-m_1(r)+m_2(r)-m_2(t))(dy)ds\,.
\end{align*}
Arguing as in the proof of the H\"{o}lder regularity of $U$, we obtain
$$
|b(t,x)-b(s,x)|\le C|t-s|^{\frac{\alpha}{2}}\mathbf{d}_1(m_{01},m_{02})^{2-2\alpha}
$$
This means that
$$
\norm{b_2}\am\le C\mathbf{d}_1(m_0^1,m_0^2)^{}\implies\norm{b}\am\le C\mathbf{d}_1(m_0^1,m_0^2)^{2-2\alpha}\,.
$$
\end{comment}
A similar estimate holds for the function $z_T$. As regards the function $c$, we have
\begin{align*}
\norm{c_1}_{L^1}=\ensuremath{\int_{0}^{T}\int_{\Omega}} H_{pp}(x,Du_2)(Du_1-Du_2)(m_1(t,dx)-m_2(t,dx))\,dt\\ \le C\norm{u_1-u_2}_{1,2+\alpha}\mathbf{d}_1(m_1(t),m_2(t))\le C\mathbf{d}_1(m_0^1,m_0^2)^{2}\,,
\end{align*}
and, using the notation $u_{1+s}:=su_1+(1-s)u_2$ (so that $Du_{1+s}=sDu_1+(1-s)Du_2$),
\begin{align*}
\norm{c_2}_{L^1}=&\int_0^1\ensuremath{\int_{0}^{T}\int_{\Omega}} \left(H_{pp}(x,Du_{1+s})-H_{pp}(x,Du_2)\right)(Du_1-Du_2)m_1(t,dx)\,dtds\\
\le&\,C\norminf{Du_1-Du_2}^2\le C\mathbf{d}_1(m_0^1,m_0^2)^2\,.
\end{align*}
Substituting these estimates in \eqref{rogueuno}, we obtain \eqref{boundmder} and we conclude the proof.
\end{proof}
\end{thm}
Since
$$
v(t_0,x)=\ensuremath{\int_{\Omega}} K(t_0,x,m_{0}^2,y)(m_{0}^1(dy)-m_{0}^2(dy))\,,
$$
equation \eqref{boundmder} implies
$$
\norminf{U(t_0,\cdot,m_{0}^1)-U(t_0,\cdot,m_{0}^2)-\ensuremath{\int_{\Omega}} K(t_0,\cdot,m_{0}^2,y)(m_{0}^1-m_{0}^2)(dy) }\le C\mathbf{d}_1(m_{0}^1,m_{0}^2)^{2}.
$$
As a straightforward consequence, we have that $U$ is differentiable with respect to $m$ and
$$
\dm{U}(t,x,m,y)=K(t,x,m,y)\,.
$$
Consequently, using \eqref{kappa} we obtain
\begin{equation}\label{regdu}
\sup\limits_t\norm{\dm{U}(t,\cdot,m,\cdot)}_{2+\alpha,1+\alpha}\le C\,.
\end{equation}
However, in order to make sense of equation \eqref{Master}, we need at least that $\dm{U}$ is twice differentiable with respect to $y$ almost everywhere.
To this end, we need to improve the estimates \eqref{stimelin} for a pair $(v,\mu)$ solving \eqref{linDuDm}.
\begin{prop}
Let $\mu_0\in\mathcal{C}^{-(1+\alpha)}$. Then the unique solution $(v,\mu)$ of \eqref{linDuDm} satisfies
\begin{equation}\label{sbrigati}
\norm{v}_{1,2+\alpha}+\sup\limits_{t\in[0,T]}\norm{\mu(t)}_{-(2+\alpha),N}\le C\norm{\mu_0}_{-(2+\alpha)}\,.
\end{equation}
\begin{proof}
We consider the solution $(v,\mu)$ obtained in Proposition \ref{linearD}. Since $\mu$ satisfies $\mu=\sigma\Phi(\mu)$ with $\sigma=1$, we can use \eqref{stimz} with $z_T=h=0$ and obtain
\begin{equation}\label{cumnupnat}
\norm{v}_{1,2+\alpha}\le C\sup\limits_{t\in[0,T]}\norm{\mu(t)}_{-(2+\alpha),N}\,.
\end{equation}
We want to estimate the right-hand side. Using \eqref{forsemisalvo} we have
\begin{equation}\label{ngroc}
\sup\limits_{t\in[0,T]}\norm{\mu(t)}_{-(2+\alpha),N}\le C\left(\norm{\mu_0}_{-(2+\alpha)}+\norm{ mH_{pp}(x,Du)Dv}_{L^1}\right)\,.
\end{equation}
The last term is estimated, as in Proposition \ref{linearD}, by
\begin{equation}\label{mannaggia}
\norm{mH_{pp}(x,Du)Dv}_{L^1}\le C\left(\ensuremath{\int_{0}^{T}\int_{\Omega}} m\langle H_{pp}(x,Du)Dv,Dv\rangle\,dxdt\right)^\miezz\,.
\end{equation}
The right-hand side term can be bounded using \eqref{stimasigma} with $h=z_T=c=0$:
\begin{equation}\label{crist}
\begin{split}
\ensuremath{\int_{0}^{T}\int_{\Omega}} m\langle H_{pp}(x,Du) Dv,Dv\rangle\hspace{0.08cm} dxdt
\le\norm{v}_{1,2+\alpha}\norm{\mu_0}_{-(2+\alpha)}\,.
\end{split}
\end{equation}
Hence, plugging estimates \eqref{mannaggia} and \eqref{crist} into \eqref{ngroc} we obtain
\begin{equation}\label{probbiatottquanta}
\sup\limits_{t\in[0,T]}\norm{\mu(t)}_{-(2+\alpha),N}\le C\left(\norm{\mu_0}_{-(2+\alpha)}+\norm{v}_{1,2+\alpha}^\miezz\norm{\mu_0}_{-(2+\alpha),N}^\miezz\right)\,.
\end{equation}
Coming back to \eqref{cumnupnat} and using a generalized Young's inequality, we get
$$
\norm{v}_{1,2+\alpha}\le C\norm{\mu_0}_{-(2+\alpha)}\,,
$$
and finally, substituting the last estimate into \eqref{probbiatottquanta}, we obtain \eqref{sbrigati} and we conclude.
\end{proof}
\end{prop}
As an immediate Corollary, we get the desired estimate for $\dm{U}$.
\begin{cor}
Suppose hypotheses \ref{ipotesi} are satisfied. Then the derivative $\dm{U}$, together with its first and second derivatives with respect to $x$, is twice differentiable with respect to $y$, and the following estimate holds:
\begin{equation}\label{lentezza}
\norm{\dm{U}(t,\cdot,m,\cdot)}_{2+\alpha,2+\alpha}\le C\,.
\end{equation}
\begin{proof}
We want to prove that, $\forall\,i,j$, the incremental ratio
\begin{equation}\label{alotteriarubabbeh}
R^h_{i,j}(x,y):=\frac{\partial_{y_i}\dm{U}(t_0,x,m_0,y+he_j)-\partial_{y_i}\dm{U}(t_0,x,m_0,y)}{h}
\end{equation}
is a Cauchy sequence as $h\to0$, together with its first and second derivatives with respect to $x$. Then we have to estimate, for $h,k>0$, the quantity $\left|D^l_xR^h_{i,j}(x,y)-D^l_xR^k_{i,j}(x,y)\right|\,,$ for $|l|\le 2$.
We already know that
$$
\partial_{y_i}\dm{U}(t_0,x,m_0,y)=v(t_0,x;-\partial_{y_i}\delta_y)\,.
$$
Using the linearity of the system \eqref{linDuDm}, we obtain that
$$
\left|D^l_xR^h_{i,j}(x,y)-D^l_xR^k_{i,j}(x,y)\right|=D^l_xv\left(t_0,x;\Delta_h^j(-\partial_{y_i}\delta_y)-\Delta_k^j(-\partial_{y_i}\delta_y)\right)\,,
$$
where $\Delta_h^j(-\partial_{y_i}\delta_y)=-\frac1h(\partial_{y_i}\delta_{y+he_j}-\partial_{y_i}\delta_y)\,.$
Hence, estimate \eqref{sbrigati} and Lagrange's theorem (the mean value theorem) imply
\begin{align*}
&\left|D^l_xR^h_{i,j}(x,y)-D^l_xR^k_{i,j}(x,y)\right|\le C\norm{\Delta_h^j(-\partial_{y_i}\delta_y)-\Delta_k^j(-\partial_{y_i}\delta_y)}_{-(2+\alpha)}\\&=C\sup\limits_{\norm{\phi}_{2+\alpha}\le 1}\left(\frac{\partial_{y_i}\phi(y+he_j)-\partial_{y_i}\phi(y)}{h}-\frac{\partial_{y_i}\phi(y+ke_j)-\partial_{y_i}\phi(y)}{k}\right)\\&=C\sup\limits_{\norm{\phi}_{2+\alpha}\le 1}\left(\partial^2_{y_iy_j}\phi(y_{\phi,h})-\partial^2_{y_iy_j}\phi(y_{\phi,k})\right)\le C\sup\limits_{\norm{\phi}_{2+\alpha}\le 1}|y_{\phi,h}-y_{\phi,k}|^\alpha\le C\left(|h|^\alpha+|k|^\alpha\right)\,,
\end{align*}
for a certain $y_{\phi,h}$ in the line segment between $y$ and $y+he_j$ and $y_{\phi,k}$ in the line segment between $y$ and $y+ke_j$.
Since the last term goes to $0$ when $h,k\to0$, we have proved that the incremental ratio \eqref{alotteriarubabbeh} and its first and second derivative w.r.t $x$ are Cauchy sequences in $h$, and so converging when $h\to0$. This proves that $D^l_x\dm{U}$ is twice differentiable with respect to $y$, for all $0\le|l|\le2\,$.
In order to show the H\"{o}lder bound for $\dm{U}$ w.r.t. $y$, we consider $y,y'\in\Omega$ and we consider the function
$$
R^h_{i,j}(x,y)-R^h_{i,j}(x,y')\,.
$$
Then we know from the linearity of \eqref{linDuDm}
$$
R^h_{i,j}(x,y)-R^h_{i,j}(x,y')=v(t_0,x;\Delta^j_h(-\partial_{y_i}\delta_y)-\Delta^j_h(-\partial_{y_i}\delta_{y'}))\,,
$$
and so, using \eqref{sbrigati}, we obtain
$$
\norm{R^h_{i,j}(\cdot,y)-R^h_{i,j}(\cdot,y')}_{2+\alpha}\le C\norm{\Delta^j_h(-\partial_{y_i}\delta_y)-\Delta^j_h(-\partial_{y_i}\delta_{y'})}_{-(2+\alpha)}\,.
$$
Now we pass to the limit when $h\to0$.
It is immediate to prove that
$$
\Delta^j_h(-\partial_{y_i}\delta_y)-\Delta^j_h(-\partial_{y_i}\delta_{y'})\overset{h\to0}{\longrightarrow}\partial_{y_j}\partial_{y_i}\delta_y-\partial_{y_j}\partial_{y_i}\delta_{y'}\qquad\mbox{in }\mathcal{C}^{-(2+\alpha)}\,.
$$
Since $D^l_xR^h_{i,j}(x,y)\to \partial^2_{y_iy_j}D^l_x\dm{U}(x,y)$ for all $|l|\le2$, we can use Ascoli-Arzel\`a to obtain that
$$
\norm{\partial^2_{y_iy_j}\dm{U}(t,\cdot,m,y)-\partial^2_{y_iy_j}\dm{U}(t,\cdot,m,y')}_{2+\alpha}\le C\norm{\partial_{y_j}\partial_{y_i}\delta_y-\partial_{y_j}\partial_{y_i}\delta_{y'}}_{-(2+\alpha)}\le C|y-y'|^\alpha\,,
$$
which proves \eqref{lentezza} and concludes the proof.
\end{proof}
\end{cor}
We conclude this part with a last property on the derivative $D_mU$, which will be essential in order to prove the uniqueness of solutions for the Master Equation.
\begin{cor}\label{delarue}
The function $U$ satisfies the following Neumann boundary conditions:
\begin{equation*}
\begin{split}
&a(x)D_x\dm{U}(t,x,m,y)\cdot\nu(x)=0\,,\qquad\forall x\in\partial\Omega, y\in\Omega,t\in[0,T],m\in\mathcal{P}(\Omega)\,,\\
&a(y)D_mU(t,x,m,y)\,\,\,\,\cdot\nu(y)=0\,,\qquad\forall x\in\Omega, y\in\partial\Omega,t\in[0,T],m\in\mathcal{P}(\Omega)\,.
\end{split}
\end{equation*}
\begin{proof}
Since $\dm{U}(t_0,x,m_0,y)=v(t_0,x)$, where $(v,\mu)$ is the solution of \eqref{linDuDm} with $\mu_0=\delta_y$, the first condition is immediate because of the Neumann condition of \eqref{linDuDm}.
For the second condition, we consider $y\in\partial\Omega$ and we take
$$
\mu_0=-\partial_w(\delta_y)\,,\qquad\mbox{with }w=a(y)\nu(y)\,.
$$
\begin{comment}
Then we consider the unique solution $(v,\mu)$ of \eqref{linDuDm}. The estimate \eqref{stimelin} tells us that
$$
\norm{v}_{1,2+\alpha}+\sup\limits_{t\in[0,T]}\norm{\mu(t)}_{-(1+\alpha),N}+\norm{\rho}_{L^p}\le C\norm{-\partial_w\delta_y}_{-(1+\alpha)}.
$$
We compute the right-hand side term. For a function $\varphi\in \mathcal{C}^{2+\alpha}(\Omega)$ such that $a(x)D\varphi(x)\cdot\nu(x)_{|\partial\Omega}=0$ we have
$$
\langle -\partial_w\delta_y,\varphi\rangle=\langle\delta_y,\partial_w\varphi\rangle=a(y)D\varphi(y)\cdot\nu(y)=0\,.
$$
Hence $\norm{-\partial_t\delta_y}_{-(2+\alpha),N}=0$, which implies
\begin{equation}\label{bastavaquesto}
v=0\,,\qquad\langle\mu,\varphi\rangle=0\quad\forall\varphi\in\mathcal{C}^{2+\alpha} \mbox{ s.t. }aD\varphi\cdot\nu_{|\partial\Omega}=0\,.
\end{equation}.
\end{comment}
We want to prove that $(v,\mu)=(0,\mu)$ is a solution of \eqref{linDuDm} with $\mu_0=-\partial_w\delta_y$, where $\mu$ is the unique solution in the sense of Definition \ref{canzonenuova} of
$$
\begin{cases}
\mu_t-\mathrm{div}(a(x)D\mu)-\mathrm{div}(\mu (H_p(x,Du)+\tilde{b}))=0\,,\\
\mu(t_0)=\mu_0\,,\\
\left(a(x)D\mu+\mu (H_p(x,Du)+\tilde{b})\right)\cdot\nu_{|\partial\Omega}=0\,.
\end{cases}
$$
We only have to check that, if $\mu$ is a solution of this equation, then $v=0$ solves
\begin{equation}\label{muovt}
\begin{cases}
-v_t-\mathrm{tr}(a(x)D^2v)+H_p(x,Du)\cdot Dv=\mathlarger{\frac{\delta F}{\delta m}}(x,m(t))(\mu(t))\,,\\
v(T,x)=\mathlarger{\frac{\delta G}{\delta m}}(x,m(T))(\mu(T))\,,\\
a(x)Dv\cdot\nu_{|\partial\Omega}=0\,,
\end{cases}
\end{equation}
which reduces to proving that
$$
\dm{F}(x,m(t))(\mu(t))=\dm{G}(x,m(T))(\mu(T))=0\,.
$$
\begin{comment}
This is already proved in \eqref{bastavaquesto}, since $\dm{F}(x,m(t),\cdot)$ and $\dm{G}(x,m(T),\cdot)$ satisfy the condition $aD\varphi\cdot\nu_{|\partial\Omega}=0$ and are in $\mathcal{C}^{2+\alpha}$, thanks to hypotheses $(iv)$, $(v)$ and $(vi)$ of \ref{ipotesi}.
\end{comment}
We will give a direct proof.
Choosing as test function the solution $\phi(t,y)$ of \eqref{hjbfp} with $\psi(t,y)=0$ and $\xi(y)=\dm{F}(x,m(t),y)$, we deduce from the boundary conditions on $\dm{F}$ that $\phi$ is a $\mathcal{C}^{\frac{1+\alpha}{2},1+\alpha}$ function satisfying Neumann boundary conditions.
It follows from the weak formulation of $\mu$ that
$$
\dm{F}(x,m(t))(\mu(t))=\langle \mu(t),\dm{F}(x,m(t),\cdot)\rangle=\langle\mu_0,\phi(0,\cdot)\rangle=0\,,
$$
since $aD\phi\cdot\nu_{|\partial\Omega}=0$ and
$$
\langle\mu_0,\phi(0,\cdot)\rangle=\langle -\partial_w\delta_y,\phi(0,\cdot)\rangle=a(y)D\phi(0,y)\cdot\nu(y)=0\,.
$$
Same computations hold for $\dm{G}$, proving that $v=0$ satisfies \eqref{muovt}.
Then we can easily conclude:
\begin{align*}
a(y)D_mU(t_0,x,m_0,y)\cdot\nu(y)&=D_y\dm{U}(t_0,x,m_0,y)\cdot w\\&=\left\langle\dm{U}(t_0,x,m_0,\cdot),\mu_0\right\rangle=v(t_0,x)=0\,.
\end{align*}
\end{proof}
\end{cor}
\section{Solvability of the first-order Master Equation}
The $\mathcal{C}^1$ character of $U$ with respect to $m$ is crucial in order to prove the main theorem of this section.
\begin{proof}[Proof of Theorem \ref{settepuntouno}]
We start from the existence part.\\
\emph{Existence}. We start by assuming that $m_0$ is a smooth and positive function satisfying \eqref{neumannmzero}, and we consider $(u,m)$, the solution of the MFG system starting from $m_0$ at time $t_0$. Then
$$
\partial_t U(t_0,x,m_0)
$$
can be computed as the sum of the two limits:
$$
\lim\limits_{h\to0} \frac{U(t_0+h,x,m_0)-U(t_0+h,x,m(t_0+h))}{h}
$$
and
$$
\lim\limits_{h\to0} \frac{U(t_0+h,x,m(t_0+h))-U(t_0,x,m_0)}{h}\,.
$$
The second limit, using the very definition of $U$, is equal to
\begin{align*}
\lim\limits_{h\to0} \frac{u(t_0+h,x)-u(t_0,x)}{h}=u_t(t_0,x)=-\mathrm{tr}(a(x)D^2u(t_0,x))+H(x,Du(t_0,x))\\-F(x,m(t_0))=-\mathrm{tr}(a(x)D^2_xU(t_0,x,m_0))+H(x,D_xU(t_0,x,m_0))-F(x,m_0)\,.
\end{align*}
As regards the first limit, defining $m_s:=(1-s)m(t_0)+sm(t_0+h)$ and using the $\mathcal{C}^1$ regularity of $U$ with respect to $m$, we can write it as
\begin{align*}
-\lim\limits_{h\to 0}\int_0^1\ensuremath{\int_{\Omega}}\dm{U}(t_0+h,x,m_s,y)\frac{(m(t_0+h,y)-m(t_0,y))}h\,dyds\\
=-\int_0^1\ensuremath{\int_{\Omega}}\dm{U}(t_0,x,m_0,y)m_t(t_0,y)\,dyds=-\ensuremath{\int_{\Omega}}\dm{U}(t_0,x,m_0,y)m_t(t_0,y)\,dy\\=-\ensuremath{\int_{\Omega}}\dm{U}(t_0,x,m_0,y) \,\mathrm{div}\!\left(a(y)Dm(t_0,y) +m(t_0,y)(\tilde{b}+H_p(y,Du(t_0,y)))\right)dy\,.
\end{align*}
Taking into account the representation formula \eqref{reprform} for $\dm{U}$, we integrate by parts and use the boundary conditions on $\dm{U}$ and $m$ to obtain
\begin{align*}
\ensuremath{\int_{\Omega}}\left[H_p(y,D_xU(t_0,y,m_0))D_mU(t_0,x,m_0,y)-\mathrm{tr}\left(a(y)D_yD_mU(t_0,x,m_0,y)\right)\right]dm_0(y)
\end{align*}
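More explicitly, the computation behind this step runs (formally) as follows. Since $\left(a(y)Dm(t_0,y)+m(t_0,y)(\tilde{b}+H_p(y,Du(t_0,y)))\right)\cdot\nu(y)=0$ on $\partial\Omega$ and $D_mU=D_y\dm{U}$, a first integration by parts gives
$$
-\ensuremath{\int_{\Omega}}\dm{U}\,\mathrm{div}\!\left(a(y)Dm+m(\tilde{b}+H_p(y,Du))\right)dy=\ensuremath{\int_{\Omega}} a(y)D_mU\cdot Dm\,dy+\ensuremath{\int_{\Omega}} D_mU\cdot(\tilde{b}+H_p(y,Du))\,m\,dy\,,
$$
where all derivatives of $U$ are evaluated at $(t_0,x,m_0,y)$ and those of $u$ and $m$ at $(t_0,y)$. A second integration by parts on the first term, using $a(y)D_mU\cdot\nu_{|\partial\Omega}=0$ (Corollary \ref{delarue}) and the relation $\mathrm{div}(a(y)D\phi)=\mathrm{tr}(a(y)D^2\phi)+\tilde{b}\cdot D\phi$ (used also in the uniqueness part below), gives
$$
\ensuremath{\int_{\Omega}} a(y)D_mU\cdot Dm\,dy=-\ensuremath{\int_{\Omega}}\left(\mathrm{tr}\left(a(y)D_yD_mU\right)+\tilde{b}\cdot D_mU\right)m\,dy\,,
$$
so that the $\tilde{b}$ terms cancel and we are left exactly with the expression displayed above.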
Combining the computations of the two limits, we obtain
\begin{align*}
&\partial_t U(t,x,m)=-\mathrm{tr}\left(a(x)D_x^2 U(t,x,m)\right)+H\left(x,D_x U(t,x,m)\right)\\&-\mathlarger{\ensuremath{\int_{\Omega}}}\mathrm{tr}\left(a(y)D_y D_m U(t,x,m,y)\right)dm(y)+\\&\mathlarger{\ensuremath{\int_{\Omega}}} D_m U(t,x,m,y)\cdot H_p(y,D_x U(t,y,m))dm(y)- F(x,m)\,.
\end{align*}
So the equation is satisfied for all $m_0\in\mathcal{C}^\infty$ satisfying \eqref{neumannmzero}, and so, with a density argument, for all $m_0\in\mathcal{P}(\Omega)$.
The boundary conditions are easily verified thanks to Corollary \ref{delarue}. This concludes the existence part.\\
\emph{Uniqueness}. Let $V$ be another solution of the Master Equation \eqref{Master} with Neumann boundary conditions. We consider, for fixed $t_0$ and $m_0$, with $m_0$ smooth satisfying \eqref{neumannmzero}, the solution $\tilde{m}$ of the Fokker-Planck equation:
\begin{equation*}
\begin{cases}
\tilde{m}_t-\mathrm{div}(a(x)D\tilde{m})-\mathrm{div}\left(\tilde{m}\left(H_p(x,D_xV(t,x,\tilde{m}))+\tilde{b}\right)\right)=0\,,\\
\tilde{m}(t_0)=m_0\,,\\
\left[a(x)D\tilde{m}+\tilde{m}\left(H_p(x,D_xV(t,x,\tilde{m}))+\tilde{b}\right)\right]\cdot\nu(x)_{|\partial\Omega}=0\,.
\end{cases}
\end{equation*}
This solution is well defined since $D_xV$ is Lipschitz continuous with respect to the measure variable.
Then we define $\tilde{u}(t,x)=V(t,x,\tilde{m}(t))$. Using the equations of $V$ and $\tilde{m}$, we obtain
\begin{align*}
\tilde{u}_t(t,x)=&\,V_t(t,x,\tilde{m}(t))+\ensuremath{\int_{\Omega}}\dm{V}(t,x,\tilde{m}(t),y)\,\tilde{m}_t(t,y)\,dy\\
=&\,V_t(t,x,\tilde{m}(t))+\ensuremath{\int_{\Omega}}\dm{V}(t,x,\tilde{m}(t),y)\,\mathrm{div}\!\left(a(y)D\tilde{m}(t,y)\right)\,dy\\+&\,\ensuremath{\int_{\Omega}} \dm{V}(t,x,\tilde{m}(t),y)\,\mathrm{div}\!\left(\tilde{m}\left(H_p(y,D_xV(t,y,\tilde{m}))+\tilde{b}\right)\right)\,dy\,.
\end{align*}
We compute the two integrals by parts. As regards the first, we have
\begin{align*}
&\ensuremath{\int_{\Omega}}\dm{V}(t,x,\tilde{m}(t),y)\,\mathrm{div}\!\left(a(y)D\tilde{m}(t,y)\right)\,dy\\=-&\ensuremath{\int_{\Omega}} a(y)D\tilde{m}(t,y)\,D_mV(t,x,\tilde{m}(t),y)\,dy+\int_{\partial\Omega}\dm{V}(t,x,\tilde{m}(t),y)\,a(y)D\tilde{m}(t,y)\cdot\nu(y)\,dy\\
=&\ensuremath{\int_{\Omega}}\mathrm{div}(a(y)D_mV(t,x,\tilde{m}(t),y))\,\tilde{m}(t,y)\,dy-\int_{\partial\Omega} a(y)D_mV(t,x,\tilde{m}(t),y)\cdot\nu(y)\tilde{m}(t,y)dy\\+&\int_{\partial\Omega}\dm{V}(t,x,\tilde{m}(t),y)\,a(y)D\tilde{m}(t,y)\cdot\nu(y)\,dy\,,
\end{align*}
while for the second
\begin{align*}
&\ensuremath{\int_{\Omega}}\dm{V}(t,x,\tilde{m}(t),y)\,\mathrm{div}\!\left(\tilde{m}\left(H_p(y,D_xV(t,y,\tilde{m}))+\tilde{b}\right)\right)\,dy\\
=-&\ensuremath{\int_{\Omega}}\left(H_p(y,D_xV(t,y,\tilde{m}))+\tilde{b}\right)D_mV(t,x,\tilde{m},y)\tilde{m}(t,y)dy\\+&\int_{\partial\Omega}\dm{V}(t,x,\tilde{m}(t),y)\left(H_p(y,D_xV(t,y,\tilde{m}))+\tilde{b}\right)\cdot\nu(y)\,\tilde{m}(t,y)dy\,.
\end{align*}
Putting together these estimates and taking into account the boundary conditions on $V$ and $m$:
$$
\left[a(x)D\tilde{m}+\tilde{m}\left(H_p(x,D_xV(t,x,\tilde{m}))+\tilde{b}\right)\right]\cdot\nu(x)_{|x\in\partial\Omega}=0\,,\qquad a(y)D_mV(t,x,m,y)\cdot\nu(y)_{|y\in\partial\Omega}=0\,,
$$
and the relation between the divergence and the trace term
$$
\mathrm{div}(a(x)D\phi(x))=\mathrm{tr}(a(x)D^2\phi(x))+\tilde{b}(x)D\phi(x)\,,\qquad\forall\phi\in W^{2,\infty}(\Omega)\,,
$$
we find
\begin{align*}
\tilde{u}_t(t,x)=&\,V_t(t,x,\tilde{m}(t))+\ensuremath{\int_{\Omega}}\mathrm{tr}(a(y)D_yD_mV(t,x,\tilde{m},y))\,d\tilde{m}(y)\\-&\,\ensuremath{\int_{\Omega}} H_p(y,D_xV(t,y,\tilde{m}))\, D_mV(t,x,\tilde{m},y)\,d\tilde{m}(y)\\=&\,-\mathrm{tr}(a(x)D^2_xV(t,x,\tilde{m}(t)))+H(x,D_xV(t,x,\tilde{m}(t)))-F(x,\tilde{m}(t))\\=&\,-\mathrm{tr}(a(x)D^2\tilde{u}(t,x))+H(x,D\tilde{u}(t,x))-F(x,\tilde{m}(t))\,.
\end{align*}
This means that $(\tilde{u},\tilde{m})$ is a solution of the MFG system \eqref{meanfieldgames}-\eqref{fame}. Since the solution of the Mean Field Games system is unique, we get $(\tilde{u},\tilde{m})=(u,m)$ and so $V(t_0,x,m_0)=U(t_0,x,m_0)$ whenever $m_0$ is smooth.\\
Then, using a density argument, the uniqueness is proved.
\end{proof}
\textbf{Acknowledgements.}
I wish to sincerely thank P. Cardaliaguet and A. Porretta for their help and support during the preparation of this article. I also wish to thank F. Delarue for his enlightening ideas.
\end{document} |
\begin{document}
\title{A randomized polynomial kernelization for Vertex Cover with a smaller parameter}
\begin{abstract} In the \problem{Vertex Cover} problem we are given a graph $G=(V,E)$ and an integer $k$ and have to determine whether there is a set $X\subseteq V$ of size at most $k$ such that each edge in $E$ has at least one endpoint in $X$. The problem can be easily solved in time $\mathcal{O}^*(2^k)$, making it fixed-parameter tractable (FPT) with respect to $k$. While the fastest known algorithm takes only time $\mathcal{O}^*(1.2738^k)$, much stronger improvements have been obtained by studying \emph{parameters that are smaller than~$k$}. Apart from treewidth-related results, the arguably best algorithm for \problem{Vertex Cover} runs in time $\mathcal{O}^*(2.3146^p)$, where $p=k-LP(G)$ is only the excess of the solution size $k$ over the best fractional vertex cover (Lokshtanov et al.\ TALG 2014). Since $p\leq k$ but $k$ cannot be bounded in terms of $p$ alone, this strictly increases the range of tractable instances.
Recently, Garg and Philip (SODA 2016) greatly contributed to understanding the parameterized complexity of the \problem{Vertex Cover} problem. They prove that $2LP(G)-MM(G)$ is a lower bound for the vertex cover size of $G$, where $MM(G)$ is the size of a largest matching of $G$, and proceed to study parameter $\ell=k-(2LP(G)-MM(G))$. They give an algorithm of running time $\mathcal{O}^*(3^\ell)$, proving that \problem{Vertex Cover} is FPT in $\ell$. It can be easily observed that $\ell\leq p$ whereas $p$ cannot be bounded in terms of $\ell$ alone. We complement the work of Garg and Philip by proving that \problem{Vertex Cover} admits a randomized polynomial kernelization in terms of $\ell$, i.e., an efficient preprocessing to size polynomial in $\ell$. This improves over parameter $p=k-LP(G)$ for which this was previously known (Kratsch and Wahlstr\"om FOCS 2012). \end{abstract}
\section{Introduction}
A \emph{vertex cover} of a graph $G=(V,E)$ is a set $X\subseteq V$ such that each edge $e\in E$ has at least one endpoint in $X$. The \probname{Vertex Cover}\xspace problem of determining whether a given graph $G$ has a vertex cover of size at most $k$ has been an important benchmark problem in parameterized complexity for both \emph{fixed-parameter tractability} and \emph{(polynomial) kernelization},\footnote{Detailed definitions can be found in Section~\ref{section:preliminaries}. Note that we use $\ell$, rather than $k$, as the default symbol for parameters and use \probname{Vertex Cover}\xspace{}$(\ell)$ to refer to the \probname{Vertex Cover}\xspace problem with parameter $\ell$.} which are the two notions of tractability for parameterized problems. Kernelization, in particular, formalizes the widespread notion of efficient preprocessing, allowing a rigorous study (cf.~\cite{Kratsch14}). We present a randomized polynomial kernelization for \probname{Vertex Cover}\xspace for the to-date smallest parameter, complementing a recent fixed-parameter tractability result~by~Garg~and~Philip~\cite{GargP16}.
Let us first recall what is known for the so-called \emph{standard parameterization} \probname{Vertex Cover$(k)$}\xspace, i.e., with parameter $\ell=k$: There is a folklore $\mathcal{O}^*(2^k)$ time\footnote{We use $\mathcal{O}^*$ notation, which suppresses polynomial factors.} algorithm for testing whether a graph $G$ has a vertex cover of size at most $k$, proving that \probname{Vertex Cover$(k)$}\xspace is fixed-parameter tractable (\ensuremath{\mathsf{FPT}}\xspace); this has been improved several times with the fastest known algorithm due to Chen et al.~\cite{ChenKX10} running in time $\mathcal{O}^*(1.2738^k)$. Under the Exponential Time Hypothesis of Impagliazzo et al.~\cite{ImpagliazzoPZ01} there is no algorithm with runtime $\mathcal{O}^*(2^{o(k)})$. The best known kernelization for \probname{Vertex Cover$(k)$}\xspace reduces any instance $(G,k)$ to an equivalent instance $(G',k')$ with $|V(G')|\leq 2k$; the total size is $\mathcal{O}(k^2)$~\cite{ChenKJ01}. Unless \ensuremath{\mathsf{NP \subseteq coNP/poly}}\xspace and the polynomial hierarchy collapses there is no kernelization to size $\mathcal{O}(k^{2-\varepsilon})$~\cite{DellM14}.
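For intuition, the folklore algorithm is a simple bounded search tree over edges: every vertex cover must contain an endpoint of any fixed edge, so one can branch on the two choices while decreasing the budget $k$. A minimal sketch of this idea (ours, purely for illustration) in Python:
\begin{verbatim}
# Folklore O*(2^k) branching for Vertex Cover (illustration only).
def has_vertex_cover(edges, k):
    # edges: list of pairs (u, v); k: remaining budget.
    if not edges:
        return True            # nothing left to cover
    if k == 0:
        return False           # edges remain but budget is exhausted
    u, v = edges[0]
    # Any vertex cover contains u or v; branch on both possibilities.
    return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
            has_vertex_cover([e for e in edges if v not in e], k - 1))

# Example: the path on 4 vertices has a vertex cover of size 2 but not 1.
print(has_vertex_cover([(0, 1), (1, 2), (2, 3)], 2))  # True
print(has_vertex_cover([(0, 1), (1, 2), (2, 3)], 1))  # False
\end{verbatim}
The recursion depth is at most $k$ and each node has two children, which gives the $\mathcal{O}^*(2^k)$ bound.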
At first glance, the \ensuremath{\mathsf{FPT}}\xspace and kernelization results for \probname{Vertex Cover$(k)$}\xspace seem essentially best possible. This is true for parameter $\ell=k$, but there are \emph{smaller parameters} $\ell'$ for which both \ensuremath{\mathsf{FPT}}\xspace-algorithms and polynomial kernelizations are known. The motivation for this is that even when $\ell'=\mathcal{O}(1)$, the value $\ell=k$ may be as large as $\Omega(n)$, making both the \ensuremath{\mathsf{FPT}}\xspace-algorithm and the kernelization for parameter $k$ useless for such instances (time $2^{\Omega(n)}$ and size guarantee $\mathcal{O}(n)$). In contrast, for $\ell'=\mathcal{O}(1)$ an \ensuremath{\mathsf{FPT}}\xspace-algorithm with respect to $\ell'$ runs in polynomial time (with only the leading constant depending on $\ell'$). Let us discuss the relevant type of smaller parameter, which relates to \emph{lower bounds on the optimum} and was introduced by Mahajan and Raman~\cite{MahajanR99}; two other types are discussed briefly under related work.
Two well-known lower bounds for the size of vertex covers of a graph $G=(V,E)$ are the maximum size of a matching of $G$ and the smallest size of fractional vertex covers for $G$; we (essentially) follow Garg and Philip~\cite{GargP16} in denoting these two values by $MM(G)$ and $LP(G)$. The notation $LP(G)$ comes from the fact that fractional vertex covers arise naturally in the linear programming relaxation of the \probname{Vertex Cover}\xspace problem, where we must assign each vertex a fractional value such that each edge is incident with a total value of at least $1$. In this regard, it is useful to observe that the LP relaxation of the \problem{Maximum Matching} problem is exactly the dual of this. Accordingly, we have $MM(G)\leq LP(G)$: each integral matching is also a fractional matching, i.e., one in which each vertex is incident to a total value of at most $1$, and by LP duality the value of any fractional matching is at most $LP(G)$. Similarly, using $VC(G)$ to denote the minimum size of vertex covers of $G$, we get $VC(G)\geq LP(G)$ and, hence, $VC(G)\geq LP(G)\geq MM(G)$.
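For concreteness (and only as a reminder of standard definitions), the two linear programs are
\begin{align*}
LP(G)&=\min\Big\{\sum_{v\in V}x_v\;:\;x_u+x_v\geq 1\mbox{ for all }\{u,v\}\in E,\ x\geq 0\Big\}\,,\\
FM(G)&=\max\Big\{\sum_{e\in E}y_e\;:\;\sum_{e\ni v}y_e\leq 1\mbox{ for all }v\in V,\ y\geq 0\Big\}\,,
\end{align*}
where $FM(G)$ denotes the maximum value of a fractional matching (the notation $FM(G)$ is ours). Weak duality gives $MM(G)\leq FM(G)\leq LP(G)$, and by strong LP duality in fact $FM(G)=LP(G)$.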
A number of papers have studied vertex cover with respect to ``above lower bound'' parameters $\ell'=k-MM(G)$ or $\ell''=k-LP(G)$ \cite{RazgonO09,RamanRS11,CyganPPW13,NarayanaswamyRRS12,LokshtanovNRRS14}. Observe that \[
k\geq k-MM(G) \geq k-LP(G). \] For the converse, note that $k$ can be unbounded in terms of $k-MM(G)$ and $k-LP(G)$, whereas $k-MM(G)\leq 2(k-LP(G))$ holds~\cite{KratschW12,Jansen_Thesis}. Thus, from the perspective of achieving fixed-parameter tractability (and avoiding large parameters) both parameters are equally useful for improving over parameter $k$. Razgon and O'Sullivan~\cite{RazgonO09} proved fixed-parameter tractability of \probname{Almost 2-SAT($k$)}\xspace, which implies that \probname{Vertex Cover}$(k-\mbox{\textsc{mm}})$\xspace is \ensuremath{\mathsf{FPT}}\xspace due to a reduction to \probname{Almost 2-SAT($k$)}\xspace by Mishra et al.~\cite{MishraRSSS11}. Using $k-MM(G)\leq 2(k-LP(G))$, this also entails fixed-parameter tractability of \probname{Vertex Cover}$(k-\mbox{\textsc{lp}})$\xspace.
After several improvements~\cite{RamanRS11,CyganPPW13,NarayanaswamyRRS12,LokshtanovNRRS14} the fastest known algorithm, due to Lokshtanov et al.~\cite{LokshtanovNRRS14}, runs in time $\mathcal{O}^*(2.3146^{k-MM(G)})$. The algorithms of Narayanaswamy et al.~\cite{NarayanaswamyRRS12} and Lokshtanov et al.~\cite{LokshtanovNRRS14} achieve the same parameter dependency also for parameter $k-LP(G)$. The first (and to our knowledge only) kernelization result for these parameters is a randomized polynomial kernelization for \probname{Vertex Cover}$(k-\mbox{\textsc{lp}})$\xspace by Kratsch and Wahlstr\"om~\cite{KratschW12}, which of course applies also to the larger parameter $k-MM(G)$.
Recently, Garg and Philip~\cite{GargP16} made an important contribution to understanding the parameterized complexity of the \probname{Vertex Cover}\xspace problem by proving it to be \ensuremath{\mathsf{FPT}}\xspace with respect to parameter $\ell=k-(2LP(G)-MM(G))$. Building on an observation of Lov\'asz and Plummer~\cite{LovaszP1986} they prove that $VC(G)\geq 2LP(G)-MM(G)$, i.e., that $2LP(G)-MM(G)$ is indeed a lower bound for the minimum vertex covers size of any graph $G$. They then design a branching algorithm with running time $\mathcal{O}^*(3^\ell)$ that builds on the well-known Gallai-Edmonds decomposition for maximum matchings to guide its branching choices.
\problembox{\probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace}{A graph $G=(V,E)$ and an integer $k\in\mathbb{N}$.}{$\ell=k-(2LP(G)-MM(G))$ where $LP(G)$ is the minimum size of fractional vertex covers for $G$ and $MM(G)$ is the maximum cardinality of matchings of $G$.}{Does $G$ have a vertex cover of size at most $k$, i.e., a set $X\subseteq V$ of size at most $k$ such that each edge of $E$ has at least one endpoint in $X$?}
Since $LP(G)\geq MM(G)$, we clearly have $2LP(G)-MM(G)\geq LP(G)$ and hence $\ell= k-(2LP(G)-MM(G))$ is indeed at most as large as the previously best parameter $k-LP(G)$. We can easily observe that $k-LP(G)$ cannot be bounded in terms of $\ell$: For any odd cycle $C$ of length $2s+1$ we have $LP(C)=\frac12(2s+1)$, $VC(C)=s+1$, and $MM(C)=s$. Thus, a graph $G$ consisting of $t$ vertex-disjoint odd cycles of length $2s+1$ has $LP(G)=\frac12t(2s+1)$, $VC(G)=t(s+1)$, and $MM(G)=ts$. For $k=VC(G)=t(s+1)$ we get \[ \ell=k - (2LP(G)-MM(G))=t(s+1) - t(2s+1) + ts=0 \] whereas \[ k-LP(G) = t(s+1) - \frac12t(2s+1) = \frac12t(2s+2) - \frac12t(2s+1) = \frac12t. \] Generally, it can be easily proved that $LP(G)$ and $2LP(G)-MM(G)$ differ by exactly $\frac12$ on any \emph{factor-critical} graph (cf.~Proposition~\ref{proposition:factorcritical:minvc:minfvc}).
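The arithmetic above is also easy to check mechanically; the following small script (ours, for illustration only; it assumes the networkx and scipy Python packages) computes $MM(G)$ and $LP(G)$ for a disjoint union of odd cycles and evaluates both parameters:
\begin{verbatim}
# Check MM, LP, and the two parameters on t disjoint cycles of length 2s+1.
import networkx as nx
from scipy.optimize import linprog

def lp_vertex_cover(G):
    """Optimal value of the fractional vertex cover LP."""
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    A = [[-1 if i in (idx[u], idx[v]) else 0 for i in range(len(nodes))]
         for u, v in G.edges()]              # -x_u - x_v <= -1 for every edge
    res = linprog(c=[1] * len(nodes), A_ub=A,
                  b_ub=[-1] * G.number_of_edges(),
                  bounds=[(0, 1)] * len(nodes))
    return res.fun

t, s = 4, 3
G = nx.disjoint_union_all([nx.cycle_graph(2 * s + 1) for _ in range(t)])
MM = len(nx.max_weight_matching(G, maxcardinality=True))
LP = lp_vertex_cover(G)
k = t * (s + 1)                              # = VC(G) for t odd cycles
print(round(k - (2 * LP - MM), 6))           # ell = 0
print(round(k - LP, 6))                      # k - LP = 2.0 (= t/2)
\end{verbatim}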
As always in parameterized complexity, when presented with a new fixed-parameter tractability result, the next question is whether the problem also admits a polynomial kernelization. It is well known that decidable problems are fixed-parameter tractable if and only if they admit a (not necessarily polynomial) kernelization.\footnote{We sketch this folklore fact for \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace: If the input is larger than $3^\ell$, where $\ell=k-(2LP(G)-MM(G))$, then the algorithm of Garg and Philip~\cite{GargP16} runs in polynomial time and we can reduce to an equivalent small yes- or no-instance; else, the instance size is bounded by $3^\ell$; in both cases we get size at most $3^\ell$ in polynomial time. The converse holds since a kernelization followed by any brute-force algorithm on an instance of, say, size $g(\ell)$ gives an \ensuremath{\mathsf{FPT}}\xspace running time in terms of $\ell$.} Nevertheless, not all problems admit polynomial kernelizations and, in the present case, both an extension of the methods for parameter $k-LP(G)$ \cite{KratschW12} and a lower bound proof similar to Cygan et al.~\cite{CyganLPPS14} or Jansen~\cite[Section 5.3]{Jansen_Thesis} (see related work) are conceivable.
\subparagraph{Our result.} We give a randomized polynomial kernelization for \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace. This improves upon parameter $k-LP(G)$ by giving a strictly smaller parameter for which a polynomial kernelization is known. At high level, the kernelization takes the form of a (randomized) polynomial parameter transformation from \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace to \probname{Vertex Cover}$(k-\mbox{\textsc{mm}})$\xspace, i.e., a polynomial-time many-one (Karp) reduction with \emph{output parameter polynomially bounded in the input parameter}. It is well known (cf.~Bodlaender et al.~\cite{BodlaenderTY11}) that this implies a polynomial kernelization for the source problem, i.e., for \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace in our case. Let us give some more details of this transformation.
Since the transformation is between different parameterizations of the same problem, it suffices to handle parts of any input graph $G$ where the input parameter $\ell=k-(2LP(G)-MM(G))$ is (much) smaller than the output parameter $k-MM(G)$. After the well-known LP-based preprocessing (cf.~\cite{GargP16}), the difference in parameter values is equal to the number of vertices that are exposed (unmatched) by any maximum matching $M$ of $G$. Consider the Gallai-Edmonds decomposition $V=A\mathbin{\dot\cup} B\mathbin{\dot\cup} D$ of $G=(V,E)$, where $D$ contains the vertices that are exposed by at least one maximum matching, $A=N(D)$, and $B=V\setminus (A\cup D)$. Let $M$ be a maximum matching and let $t$ be the number of exposed vertices. There are $t$ components of $G[D]$ that have exactly one exposed vertex each. The value $2LP(G)-MM(G)$ is equal to $|M|+t$ when $LP(G)=\frac12|V|$, as implied by LP-based preprocessing.
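To spell out the last claim: every vertex of $G$ is either covered by an edge of $M$ or one of the $t$ exposed vertices, so $|V|=2|M|+t$ and, using $LP(G)=\frac12|V|$ and $MM(G)=|M|$, \[ 2LP(G)-MM(G)=|V|-|M|=(2|M|+t)-|M|=|M|+t. \] (The same calculation appears again, in slightly more general form, in the proof of Lemma~\ref{lemma:nice:vclb}.)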
To reduce the difference in parameter values we will remove all but $\mathcal{O}(\ell^4)$ components of $G[D]$ that have an exposed vertex; they are called \emph{unmatched components} for lack of a matching edge to $A$ and we can ensure that they are not singletons. It is known that any such component $C$ is factor-critical and hence has no vertex cover smaller than $\frac12(|C|+1)$; this exactly matches its contribution to $|M|+t$: It has $\frac12(|C|-1)$ edges of $M$ and one exposed vertex. Unless the instance is trivially \textbf{no}\xspace, all but at most $\ell$ of these components $C$ have a vertex cover of size $\frac12(|C|+1)$, later called a \emph{tight vertex cover}. The only reason not to use a tight vertex cover for $C$ is the presence of adjacent vertices in $A$ that are not selected; this happens at most $\ell$ times. A technical lemma proves that this can always be traced to at most three vertices of $C$ and hence at most three vertices in $A$ that are adjacent to $C$.
In contrast, there are (matched, non-singleton) components $C$ of $G[D]$ that together with a matched vertex $v\in A$ contribute $\frac12(|C|+1)$ to the lower bound due to containing this many matching edges. To cover them at this cost requires not selecting vertex $v$. This in turn propagates along $M$-alternating paths until the cover picks both vertices of an $M$-edge, which happens at most $\ell$ times, or until reaching an unmatched component, where it may help prevent a tight vertex cover. We translate this effect into a two-way separation problem in an auxiliary directed graph. Selecting both vertices of an $M$-edge is analogous to adding a vertex to the separator. Relative to a separator, the question becomes which of the sets of at most three vertices of $A$ that can prevent tight vertex covers are still reachable by propagation. At this point we can apply representative set tools from Kratsch and Wahlstr\"om~\cite{KratschW12} to identify a small family of such triplets that works for all separators (and hence for all so-called \emph{dominant} vertex covers) and keep only the corresponding components.
\subparagraph{Related work.} Let us mention some further kernelization results for \probname{Vertex Cover}\xspace with respect to nonstandard parameters. There are two further types of interesting parameters: \begin{enumerate}
\item \emph{Width-parameters:} Parameters such as treewidth allow dynamic programming algorithms running in time, e.g., $\mathcal{O}^*(2^{\tw})$, independently of the size of the vertex cover. It is known that there are no polynomial kernels for \probname{Vertex Cover}\xspace (or most other \ensuremath{\mathsf{NP}}\xspace-hard problems) under such parameters~\cite{BodlaenderDFH09}. The treewidth of a graph is upper bounded by the smallest vertex cover, whereas graphs of bounded treewidth can have vertex cover size $\Omega(n)$.
\item \emph{``Distance to tractable case''-parameters:} \probname{Vertex Cover}\xspace can be efficiently solved on forests. By a simple enumeration argument it is fixed-parameter tractable when $\ell$ is the minimum number of vertices to delete such that $G$ becomes a forest. Jansen and Bodlaender~\cite{JansenB13} gave a polynomial kernelization to $\mathcal{O}(\ell^3)$ vertices. Note that the vertex cover size is an upper bound on $\ell$, whereas trees can have unbounded vertex cover size. The \ensuremath{\mathsf{FPT}}\xspace-result can be carried over to smaller parameters corresponding to distance from larger graph classes on which \probname{Vertex Cover}\xspace is polynomial-time solvable, however, Cygan et al.~\cite{CyganLPPS14} and Jansen~\cite[Section 5.3]{Jansen_Thesis} ruled out polynomial kernels for some of them. E.g., if $\ell$ is the deletion-distance to an outerplanar graph then there is no kernelization for \probname{Vertex Cover}\xspace{}$(\ell)$ to size polynomial in $\ell$ unless the polynomial hierarchy collapses~\cite{Jansen_Thesis}. \end{enumerate}
\subparagraph{Organization.} Section~\ref{section:preliminaries} gives some preliminaries. In Section~\ref{section:tightvertexcovers:factorcritical} we discuss vertex covers of factor-critical graphs and prove the claimed lemma about critical sets. Section~\ref{section:nicedecompositions} introduces a relaxation of the Gallai-Edmonds decomposition, called \emph{nice decomposition}, and Section~\ref{section:nicedecompositionsandvertxcovers} explores the relation between nice decompositions and vertex covers. The kernelization for \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace is given in Section~\ref{section:kernelization}. In Section~\ref{section:proofofmatroidresult} we provide for self-containment a result on representative sets that follows readily from~\cite{KratschW12}. We conclude in Section~\ref{section:conclusion}.
\section{Preliminaries}\label{section:preliminaries}
We use the shorthand~$[n]:=\{1,\ldots,n\}$. We write $A\mathbin{\dot\cup} B$ for the disjoint union of $A$ and $B$.
\subparagraph{Parameterized complexity.} Let us recall that a \emph{parameterized problem} is a set $Q\subseteq\Sigma^*\times\mathbb{N}$ where $\Sigma$ is any finite alphabet, i.e., a language of pairs $(x,\ell)$ where the component $\ell\in\mathbb{N}$ is called the \emph{parameter}. Recall also that a classical (unparameterized) problem is usually given as a set (language) $L\subseteq\Sigma^*$. For the classical problem \problem{Vertex Cover}, with instances $(G,k)$, asking whether $G$ has a vertex cover of size at most $k$, the canonical parameterized problem is \probname{Vertex Cover$(k)$}\xspace where the parameter value is simply $\ell=k$; this is the same procedure for any other decision problem obtained from an optimization problem by asking whether $\opt\leq k$ resp.\ $\opt\geq k$ and is called the \emph{standard parameterization}. We remark that this notation is usually abused by, e.g., using $(G,k)$ for an instance of \probname{Vertex Cover$(k)$}\xspace rather than the redundant $((G,k),k)$; we will use $(G,k)$ for $((G,k),k)$ and $(G,k,\ell)$ for $((G,k),\ell)$.
A parameterized problem $Q$ is \emph{fixed-parameter tractable} (\ensuremath{\mathsf{FPT}}\xspace) if there exists a function $f\colon\mathbb{N}\to\mathbb{N}$, a constant $c$, and an algorithm $A$ that correctly decides $(x,\ell)\in Q$ in time $f(\ell)\cdot |x|^c$ for all $(x,\ell)\in\Sigma^*\times\mathbb{N}$. A parameterized problem $Q$ has a \emph{kernelization} if there is a function $g\colon\mathbb{N}\to\mathbb{N}$ and a polynomial-time algorithm $K$ that on input $(x,\ell)$ returns an instance $(x',\ell')$ with $|x'|,\ell'\leq g(\ell)$ and with $(x,\ell)\in Q$ if and only if $(x',\ell')\in Q$. The function $g$ is called the \emph{size} of the kernelization $K$ and a polynomial kernelization requires that $g$ is polynomially bounded. A \emph{randomized (polynomial) kernelization} may err with some probability, in which case the returned instance is not equivalent to the input instance. Natural variants with one-sided error respectively bounded error are defined completely analogously to randomized algorithms. For a more detailed introduction to parameterized complexity we recommend the recent books by Downey and Fellows~\cite{DowneyF13} and Cygan et al.~\cite{CyganFKLMPPS15}.
\subparagraph{Graphs.} We require both directed and undirected graphs; all graphs are finite and simple, i.e., they have no parallel edges or loops. Accordingly, an undirected graph $G=(V,E)$ consists of a finite set $V$ of vertices and a set $E\subseteq\binom{V}{2}$ of edges; a directed graph $H=(V,E)$ consists of a finite set $V$ and a set $E\subseteq V^2\setminus\{(v,v)\mid v\in V\}$. For clarity, all undirected graphs are called $G$ and all directed graphs are called $H$ (possibly with indices etc.). For a graph $G=(V,E)$ and vertex set $X\subseteq V$ we use $G-X$ to denote the graph induced by $V\setminus X$; we also use $G-v$ if $X=\{v\}$. Analogous definitions are used for directed graphs $H$.
Let $H=(V,E)$ be a directed graph and let $S$ and $T$ be two not necessarily disjoint vertex sets in $H$. A set $X\subseteq V$ is an \emph{$S,T$-separator} if in $H-X$ there is no path from $S\setminus X$ to $T\setminus X$; note that $X$ may overlap both $S$ and $T$ and that $S\cap T\subseteq X$ is required. The set $T$ is \emph{closest to $S$} if there is no $S,T$-separator $X$ with $X\neq T$ and $|X|\leq|T|$, i.e., if $T$ is the unique minimum $S,T$-separator in $H$ (cf.~\cite{KratschW12}). Both separators and closeness have analogous definitions in undirected graphs but they are not required here.
\begin{proposition}[cf.~\cite{KratschW12}]\label{proposition:closest}
Let $H=(V,E)$ be a directed graph and let $S,T\subseteq V$ such that $T$ is closest to $S$. For any vertex $v\in V\setminus T$ that is reachable from $S$ in $H-T$ there exist $|T|+1$ (fully) vertex-disjoint paths from $S$ to $T\cup\{v\}$. \end{proposition}
\begin{proof}
Assume for contradiction that such $|T|+1$ directed paths do not exist. By Menger's Theorem there must be an $S,T\cup\{v\}$-separator $X$ of size at most $|T|$. Observe that $X\neq T$ since $v$ is reachable from $S$ in $H-T$. Thus, $X$ is an $S,T$-separator of size at most $|T|$ that is different from $T$; this contradicts closeness of $T$. \end{proof}
For an undirected graph $G=(V,E)$, a \emph{matching} is any set $M\subseteq E$ such that no two edges in $M$ have an endpoint in common. If $M$ is a matching in $G=(V,E)$ then we will say that a path is $M$-alternating if its edges are alternatingly from $M$ and from $\overline{M}:=E\setminus M$. An $M,M$-path is an $M$-alternating path whose first and last edge are from $M$; it must have odd length. Similarly, we define $\overline{M},M$-paths, $M,\overline{M}$-paths (both of even length), and $\overline{M},\overline{M}$-paths (of odd length). If $M$ is a matching of $G$ and $v$ is incident with an edge of $M$ then we use $M(v)$ to denote the other endpoint of that edge, i.e., the \emph{mate} or \emph{partner} of $v$. Say that a vertex $v$ is \emph{exposed by $M$} if it is not incident with an edge of $M$; we say that $v$ is \emph{exposable} if it is exposed by some maximum matching of $G$. A graph $G=(V,E)$ is \emph{factor-critical} if for each vertex $v\in V$ the graph $G-v$ has a perfect matching (a \emph{near-perfect matching of $G$}); observe that all factor-critical graphs must have an odd number of vertices.
A \emph{vertex cover} of a graph $G=(V,E)$ is a set $X\subseteq V$ such that each edge $e\in E$ has at least one endpoint in $X$. There is a well-known linear programming relaxation of the \probname{Vertex Cover}\xspace problem for a graph $G=(V,E)$: \begin{align*} \min \quad& \sum_{v\in V} x(v)\\ \text{s.t.} \quad& x(u)+x(v)\geq 1 \quad\text{for all } \{u,v\}\in E\\ &x(v)\geq 0 \quad\text{for all } v\in V \end{align*} The optimum value of this linear program can be computed in polynomial time and it is denoted $LP(G)$. The feasible solutions $x\colon V\to\mathbb{R}_{\geq 0}$ are called fractional vertex covers; the \emph{cost} of a solution/fractional vertex cover $x$ is $\sum_{v\in V} x(v)$. It is well-known that the extremal points $x$ of the linear program are half-integral, i.e., $x\in\{0,\frac12,1\}^V$. With this in mind, we will tacitly assume that all considered fractional vertex covers are half-integral. We will often use the simple fact that the size of any matching $M$ of $G$ lower bounds both the cardinality of vertex covers and the cost of fractional vertex covers of $G$.
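As an aside, the half-integral optimum $LP(G)$ can also be obtained combinatorially, namely as half the size of a maximum matching of the bipartite double cover of $G$ (by K\H{o}nig's theorem and half-integrality). The following sketch, which assumes the Python library \texttt{networkx} and is meant purely as an illustration rather than as part of the formal development, computes $LP(G)$ and $MM(G)$ in this way.
\begin{verbatim}
import networkx as nx

def lp(G):
    """LP(G) via the bipartite double cover and Koenig's theorem."""
    # Double cover: vertices (v,0),(v,1); each edge {u,v} of G becomes
    # the two edges {(u,0),(v,1)} and {(v,0),(u,1)}.
    B = nx.Graph()
    B.add_nodes_from((v, side) for v in G for side in (0, 1))
    for u, v in G.edges():
        B.add_edge((u, 0), (v, 1))
        B.add_edge((v, 0), (u, 1))
    top = {(v, 0) for v in G}
    matching = nx.bipartite.maximum_matching(B, top_nodes=top)
    # The dict lists every matched vertex twice, so |matching|/2 is the
    # matching size of B, which equals the minimum vertex cover of B
    # (Koenig) and twice the half-integral optimum LP(G).
    return (len(matching) // 2) / 2

def mm(G):
    """Maximum matching size MM(G)."""
    return len(nx.max_weight_matching(G, maxcardinality=True))

# Example: the 5-cycle has LP = 2.5, MM = 2, and 2*LP - MM = 3 = VC.
C5 = nx.cycle_graph(5)
print(lp(C5), mm(C5), 2 * lp(C5) - mm(C5))
\end{verbatim}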
\subparagraph{Gallai-Edmonds decomposition.} We will now introduce the Gallai-Edmonds decomposition following the well-known book of Lov\'asz and Plummer~\cite{LovaszP1986}.\footnote{We use $B$ instead of $C$ for $V\setminus (A\cup D)$ to leave the letter $C$ for cycles and connected components.}
\begin{definition}\label{definition:ged} Let $G=(V,E)$ be a graph. The \emph{Gallai-Edmonds decomposition} of $G$ is a partition of $V$ into three sets $A$, $B$, and $D$ where \begin{itemize}
\item $D$ consists of all vertices $v$ of $G$ such that there is a maximum matching $M$ of $G$ that contains no edge incident with $v$, i.e., that leaves $v$ exposed,
\item $A$ is the set of neighbors of $D$, i.e., $A:=N(D)$, and
\item $B$ contains all remaining vertices, i.e., $B:=V\setminus(A\cup D)$. \end{itemize} \end{definition}
It is known (and easy to verify) that the Gallai-Edmonds decomposition of any graph $G$ is unique and can be computed in polynomial time. The Gallai-Edmonds decomposition has a number of useful properties; the following theorem states some of them.
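Purely as an illustration of Definition~\ref{definition:ged} (and with no claim to efficiency), the decomposition can be obtained by testing each vertex for exposability, since $v\in D$ if and only if $MM(G-v)=MM(G)$; a sketch in Python, again assuming the \texttt{networkx} library:
\begin{verbatim}
import networkx as nx

def mm(G):
    return len(nx.max_weight_matching(G, maxcardinality=True))

def gallai_edmonds(G):
    """Gallai-Edmonds partition (A, B, D) of G, computed naively from
    the definition: v lies in D iff some maximum matching exposes v,
    i.e., iff MM(G - v) = MM(G)."""
    def without(v):
        H = G.copy()
        H.remove_node(v)
        return H
    total = mm(G)
    D = {v for v in G if mm(without(v)) == total}
    A = {u for v in D for u in G[v]} - D   # neighbours of D outside D
    B = set(G) - A - D
    return A, B, D

# Example: for the path 0-1-2 this yields A = {1}, B = set(), D = {0, 2}.
P3 = nx.path_graph(3)
print(gallai_edmonds(P3))
\end{verbatim}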
\begin{theorem}[cf.\ {\cite[Theorem~3.2.1]{LovaszP1986}}]\label{theorem:ged} Let $G=(V,E)$ be a graph and let $V=A\mathbin{\dot\cup} B\mathbin{\dot\cup} D$ be its Gallai-Edmonds decomposition. The following properties hold: \begin{enumerate}
\item The connected components of $G[D]$ are factor-critical.
\item The graph $G[B]$ has a perfect matching.
\item Every maximum matching $M$ of $G$ consists of a perfect matching of $G[B]$, a near-perfect matching of each component of $G[D]$, and a matching of $A$ into $D$. \end{enumerate} \end{theorem}
\section{Tight vertex covers of factor-critical graphs}\label{section:tightvertexcovers:factorcritical}
In this section we study vertex covers of factor-critical graphs, focusing on those that are of smallest possible size (later called tight vertex covers). We first recall the fact that any factor-critical graph with $n\geq 3$ vertices has no vertex cover of size less than $\frac12(n+1)$. By a similar argument such graphs have no fractional vertex cover of cost less than $\frac12n$.
\begin{proposition}[folklore]\label{proposition:factorcritical:minvc:minfvc}
Let $G=(V,E)$ be a factor-critical graph with at least three vertices. Every vertex cover $X$ of $G$ has cardinality at least $\frac12(|V|+1)$ and every fractional vertex cover $x\colon V\to \mathbb{R}_{\geq 0}$ of $G$ has cost at least $\frac12|V|$. \end{proposition}
\begin{proof}
Let $X\subseteq V$ be a vertex cover of $G$. Since $G$ has at least three vertices and is factor-critical, it has a maximum matching $M$ of size $\frac12(|V|-1)\geq 1$. It follows that $X$ has size at least one. (This is not true for graphs consisting of a single vertex, which are also factor-critical. All other factor-critical graphs have at least three vertices.) Pick any vertex $v\in X$. Since $G$ is factor-critical, there is a maximum matching $M_v$ of $G-v$ of size $\frac12(|V|-1)$. It follows that $X$ must contain at least one vertex from each edge of $M_v$, and no vertex is contained in two of them. Together with $v$, which is not in any edge of $M_v$, this gives a lower bound of $1+\frac12(|V|-1)=\frac12(|V|+1)$, as claimed.
Let $x\colon V\to \mathbb{R}_{\geq 0}$ be a fractional vertex cover of $G$. We use again the matching $M$ of size at least one from the previous case; let $\{u,v\}\in M$. It follows that $x(u)+x(v)\geq 1$; w.l.o.g. we have $x(v)\geq \frac12$. Let $M_v$ be a maximum matching of $G-v$ of size $\frac12(|V|-1)$. For each edge $\{p,q\}\in M_v$ we have $x(p)+x(q)\geq 1$. Since the matching edges are disjoint we get a lower bound of $\sum_{p\in V\setminus\{v\}} x(p)\geq \frac12(|V|-1)$. Together with $x(v)\geq \frac12$ we get the claimed lower bound of $\frac12|V|$ for the cost of $x$. \end{proof}
Note that Proposition~\ref{proposition:factorcritical:minvc:minfvc} is tight for example for all odd cycles of length at least three, all of which are factor-critical. We now define tight vertex covers and critical sets.
\begin{definition}[tight vertex covers, critical sets]
Let $G=(V,E)$ be a factor-critical graph with $|V|\geq 3$. A vertex cover $X$ of $G$ is \emph{tight} if $|X|=\frac12(|V|+1)$. Note that being tight is a stronger requirement than being minimum, and a factor-critical graph need not have a tight vertex cover; e.g., odd cliques with at least five vertices are factor-critical but have no tight vertex cover.
A set $Z\subseteq V$ is called a \emph{bad set} of $G$ if there is no tight vertex cover of $G$ that contains $Z$. The set $Z$ is a \emph{critical set} if it is a minimal bad set, i.e., no tight vertex cover of $G$ contains $Z$ but for all proper subsets $Z'$ of $Z$ there is a tight vertex cover containing $Z'$. \end{definition}
Observe that a factor-critical graph $G=(V,E)$ has no tight vertex cover if and only if $Z=\emptyset$ is a critical set of $G$. It may be interesting to note that a set $X\subseteq V$ of size $\frac12(|V|+1)$ is a vertex cover of $G$ if and only if it contains no critical set. (We will not use this fact and hence leave its two-line proof to the reader.) The following lemma proves that all critical sets of a factor-critical graph have size at most three; this is of central importance for our kernelization. For the special case of odd cycles, the lemma has a much shorter proof and we point out that all critical sets of odd cycles have size exactly three.
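As a small illustration, consider the $5$-cycle on vertices $v_1,\ldots,v_5$. Its tight vertex covers are exactly the five sets of size three whose complement is a non-adjacent pair, e.g., $\{v_1,v_2,v_4\}$ and $\{v_1,v_3,v_4\}$. Every pair of vertices is contained in at least one of them, whereas three consecutive vertices, say $\{v_1,v_2,v_3\}$, are contained in none (their complement $\{v_4,v_5\}$ is an edge that would remain uncovered). Hence the critical sets of the $5$-cycle are precisely the five sets of three consecutive vertices.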
\begin{lemma}\label{lemma:criticalsets:boundsize} Let $G=(V,E)$ be a factor-critical graph with at least three vertices. All critical sets $Z$ of $G$ have size at most three. \end{lemma}
\begin{proof}
Let $\ell\in\mathbb{N}$ with $\ell\geq1$ such that $|V|=2\ell+1$; recall that all factor-critical graphs have an odd number of vertices.
Assume for contradiction that there is a critical set $Z$ of $G$ of size at least four. Let $w,x,y,z\in Z$ be any four pairwise different vertices from $Z$. Let $M$ be a maximum matching of $G-w$. Since $G$ is factor-critical, we get that $M$ is a perfect matching of $G-w$ and has size $|M|=\ell$. Observe that any tight vertex cover of $G$ that contains $w$ must contain exactly one vertex from each edge of $M$, since its total size is $\frac12(|V|+1)=\ell+1$. We will first analyze $G$ and show that the presence of certain structures would imply that some proper subset $Z'$ of $Z$ is bad, contradicting the assumption that $Z$ is critical. Afterwards, we will use the absence of these structures to find a tight vertex cover that contains $Z$, contradicting the fact that it is a critical set.
If there is an $M,M$-path from $x$ to $y$ then $\{w,x,y\}$ is a bad set, i.e., no tight vertex cover of $G$ contains all three vertices $w$, $x$, and $y$, contradicting the choice of $Z$: Let $P=(v_1,v_2,\ldots,v_{p-1},v_p)$ denote an $M,M$-path from $v_1=x$ to $v_p=y$. Accordingly, we have $\{v_1,v_2\},\ldots,\{v_{p-1},v_p\}\in M$ and the path $P$ has odd length. Assume that $X$ is a tight vertex cover containing $w$, $x$, and $y$. It follows, since $w\in X$, that $X$ contains exactly one vertex per edge in $M$; in particular it contains exactly one vertex per matching edge on the path $P$. Since $v_1=x\in X$ we have $v_2\notin X$. Thus, as $\{v_2,v_3\}$ is an edge of $G$, we must have $v_3\in X$ to cover this edge; this in turn implies that $v_4\notin X$ since it already contains $v_3$ from the matching edge $\{v_3,v_4\}$. Continuing this argument along the path $P$ we conclude that $v_{p-1}\in X$ and $v_p\notin X$, contradicting the fact that $v_p=y\in X$. Thus, if there is an $M,M$-path from $x$ to $y$ then there is no tight vertex cover of $G$ that contains $w$, $x$, and $y$, making $\{w,x,y\}$ a bad set and contradicting the assumption that $Z$ is a critical set. It follows that there can be no $M,M$-path from $x$ to $y$. The same argument can be applied also to $x$ and $z$, and to $y$ and $z$, ruling out $M,M$-paths connecting them.
Similarly, if there is an edge $\{u,v\}\in M$ such that $z$ reaches both $u$ and $v$ by (different, not necessarily disjoint) $M,\overline{M}$-paths then no tight vertex cover of $G$ contains both $w$ and $z$, contradicting the choice of $Z$: Let $P=(v_1,v_2,\ldots,v_{p-1},v_p)$ denote an $M,\overline{M}$-path from $v_1=z$ to $v_p=u$ with $\{v_1,v_2\},\{v_3,v_4\},\ldots,\{v_{p-2},v_{p-1}\}\in M$. Let $X$ be a tight vertex cover of $G$ that contains $w$ and $z$. It follows (as above) that $v_1,v_3,\ldots,v_{p-2}\in X$ and $v_2,v_4,\ldots,v_{p-1}\notin X$, by considering the induced $M,M$-path from $z=v_1$ to $v_{p-1}$. The fact that $v_{p-1}\notin X$ directly implies that $v_p=u\in X$ in order to cover the edge $\{v_{p-1},v_p\}$. Repeating the same argument on an $M,\overline{M}$-path from $z$ to $v$ we get that $v\in X$. Thus, we conclude that $u$ and $v$ are both in $X$, contradicting the fact that $X$ must contain exactly one vertex of each edge in $M$. Hence, there is no tight vertex cover of $G$ that contains both $w$ and $z$. We conclude that $\{w,z\}$ is a bad set, contradicting the choice of $Z$. Hence, there is no edge $\{u,v\}\in M$ such that $z$ has $M,\overline{M}$-paths (not necessarily disjoint) to both $u$ and $v$.
Now we will complete the proof by using the established properties, i.e., the non-existence of certain $M$-alternating paths starting in $z$, to construct a tight vertex cover of $G$ that contains all of $Z$, giving the final contradiction. Using minimality of $Z$, let $X$ be a tight vertex cover of $G$ that contains $Z\setminus\{z\}$; by choice of $Z$ we have $z\notin X$. We construct the claimed vertex cover $X'\supseteq Z$ from $X'=X$ as follows: \begin{enumerate}
\item Add vertex $z$ to $X'$ and remove $M(z)$ from $X'$, i.e., remove the vertex that $z$ is matched to.
\item Add all vertices $v$ to $X'$ that can be reached from $z$ by an $M,\overline{M}$-path.
\item Remove all vertices from $X'$ that can be reached from $z$ by an $M,M$-path of length at least three. (There is a single such path of length one from $z$ to $M(z)$ which, for clarity, was handled already above.) \end{enumerate}
We need to check four things: (1) The procedure above is well-defined, i.e., no vertex can be reached by both $M,M$- and $M,\overline{M}$-paths from $z$. (2) The size of $X'$ is at most $|X|=\ell+1$. (3) $X'$ is a vertex cover. (4) The set $X'$ contains $w$, $x$, $y$, and $z$.
(1) Assume that there is a vertex $v$ such that $z$ reaches $v$ both by an $M,M$-path $P=(v_1,v_2,\ldots,v_p)$ with $v_1=z$ and $v_p=v$, and by an $M,\overline{M}$-path $P'$. Observe that $\{v_{p-1},v_p\}\in M$ since $P$ is an $M,M$-path and, hence, that $P''=(v_1,\ldots,v_{p-1})$ is an $M,\overline{M}$-path from $z$ to $v_{p-1}$. Together, $P'$ and $P''$ constitute two $M,\overline{M}$-paths from $z$ to both endpoints $v_{p-1}$ and $v_p$ of the matching edge $\{v_{p-1},v_p\}$; a contradiction (since we ruled out this case earlier).
(2) In the first step, we add $z$ and remove $M(z)$. Note that $z\notin X$ implies that $M(z)\in X$ (we start with $X'=X$). Thus the size of $X'$ does not change. Consider a vertex $v$ that is added in the second step, i.e., with $v\notin X$: There is an $M,\overline{M}$-path $P$ from $z$ to $v$. Since $w\in X$ we know that $v\neq w$. Thus, since $M$ is a perfect matching of $G-w$, there is a vertex $u$ with $u=M(v)$. The vertex $u:=M(v)$ must be in $X$ to cover the edge $\{v,u\}\in M$, as $v\notin X$. Moreover, $u$ cannot be on $P$ since that would make it incident with a second matching edge other than $\{u,v\}$. Thus, by extending $P$ with $\{v,u\}$ we get an $M,M$-path from $z$ to $u$, implying that $u$ is removed in the third step. Since $u\in X$ the total size change is zero. Observe that the vertex $u=M(v)$ used in this argument is not used for any other vertex $v'$ added in the second step since it is only matched to $v$. Similarly, due to (1), the vertex $u$ is not also added in the second step since it cannot simultaneously have an $M,\overline{M}$-path from $z$.
(3) Assume for contradiction that some edge $\{u,v\}$ is not covered by $X'$, i.e., that $u,v\notin X'$. Since $w$ is the only vertex not matched by $M$ and $w\in X'$ (see (4) below), it follows that both $u$ and $v$ are incident with some edge of $M$. We distinguish two cases, namely (a) $\{u,v\}\in M$ and (b) $\{u,v\}\notin M$.
(3.a) If $\{u,v\}\in M$ then without loss of generality assume $u\in X$ (as $X$ is a vertex cover). By our assumption we have $u\notin X'$, which implies that we have removed it on account of having an $M,M$-path $P$ from $z$ to $u$. Since $\{u,v\}\in M$ the path $P$ must visit $v$ as its penultimate vertex; there is no other way for an $M,M$-path to reach $u$. This, however, implies that there is an $M,\overline{M}$-path from $z$ to $v$, and that we have added $v$ in the second step; a contradiction.
(3.b) In this case we have $\{u,v\}\notin M$. Again, without loss of generality, assume that $u\in X$. Since $u\notin X'$ there must be an $M,M$-path $P$ from $z$ to $u$. If $P$ does not contain $v$ then extending $P$ by edge $\{u,v\}\notin M$ would give an $M,\overline{M}$-path from $z$ to $v$ and imply that $v\in X'$; a contradiction. In the remaining case, the vertex $v$ is contained in $P$; let $P'$ denote the induced path from $z$ to $v$ (not containing $u$ as it is the final vertex of $P$). Since $v\notin X'$ we know that $P'$ cannot be an $M,\overline{M}$-path, or else we would have $v\in X'$, and hence it must be an $M,M$-path. Now, however, extending $P'$ via $\{v,u\}\notin M$ yields an $M,\overline{M}$-path from $z$ to $u$, contradicting (1). Altogether, we conclude that $X'$ is indeed a vertex cover.
(4) Clearly, $z\in X'$ by construction. Similarly, $w\in X'$ since it is contained in $X$ and it cannot be removed since there is no incident $M$-edge (i.e., no $M,M$-paths from $z$ can end in $w$). Finally, regarding $x$ and $y$, we proved earlier that there are no $M,M$-paths from $z$ to $x$ or from $z$ to $y$. Thus, since both $x$ and $y$ are in $X$ they must also be contained in $X'$. The same reasoning extends to every further vertex $q\in Z\setminus\{w,x,y,z\}$: an $M,M$-path from $z$ to $q$ would, exactly as in the first part of the proof, make the proper subset $\{w,q,z\}$ of $Z$ a bad set, contradicting the minimality of $Z$. Hence no vertex of $Z\setminus\{z\}$ is removed from $X'$ (in particular $M(z)\notin Z$), and $X'$ contains all of $Z$.
We have shown that, under the assumption of minimality of $Z$ and using $|Z|\geq 4$, one can construct a vertex cover $X'$ of tight size $\ell+1$ that contains $Z$ entirely. This contradicts the choice of $Z$ and completes the proof. \end{proof}
\section{(Nice) relaxed Gallai-Edmonds decomposition}\label{section:nicedecompositions}
The Gallai-Edmonds decomposition of a graph has a number of strong properties and, amongst others, has played a vital role in the \ensuremath{\mathsf{FPT}}\xspace-algorithm of Garg and Philip~\cite{GargP16}. It is thus not surprising that we find it rather useful for the claimed kernelization. Unfortunately, in the context of reduction rules, there is the drawback that the Gallai-Edmonds decomposition of a graph and that of the graph obtained from a reduction rule might be quite different. (E.g., even deleting entire components of $G[D]$ may ``move'' an arbitrary number of vertices from $A\cup D$ to $B$.) We cope with this problem by defining a relaxed variant of this decomposition. The relaxed form is no longer unique, but when applying certain reduction rules the created graph can effectively inherit the decomposition.
The definition mainly drops the requirement that $D$ is the set of exposable vertices and instead allows any set $D$ that gives the desired properties. Moreover, instead of a (strong) statement about all maximum matchings of $G$ (cf.\ Definition~\ref{definition:ged}) we simply require that a single maximum matching $M$ with appropriate properties be given along with $V=A\mathbin{\dot\cup} B\mathbin{\dot\cup} D$.
\begin{definition}[relaxed Gallai-Edmonds decomposition]\label{definition:relaxedged} Let $G=(V,E)$ be a graph. A \emph{relaxed Gallai-Edmonds decomposition of $G$} is a tuple $(A,B,D,M)$ where $V=A\mathbin{\dot\cup} B\mathbin{\dot\cup} D$ and $M$ is a maximum matching of $G$ such that \begin{enumerate}
\item $A=N(D)$,
\item each connected component of $G[D]$ is factor-critical,
\item $M$ restricted to $B$ is a perfect matching of $G[B]$,
\item $M$ restricted to any component $C$ of $G[D]$ is a near-perfect matching of $G[C]$, and
\item each vertex of $A$ is matched by $M$ to a vertex of $D$. \end{enumerate} \end{definition}
\begin{observation} Let $G=(V,E)$ be a graph and let $(A,B,D,M)$ be a relaxed Gallai-Edmonds decomposition of $G$. For each connected component $C$ of $G[D]$ we have $N(C)\subseteq A$. (Note that this is purely a consequence of $N(D)=A$ and $C$ being a connected component of $G[D]$.) \end{observation}
It will be of particular importance for us how the matching $M$ of a decomposition $(A,B,D,M)$ of $G$ matches vertices of $A$ to vertices of components of $G[D]$. We introduce appropriate definitions next. In particular, we define sets $\ensuremath{\mathcal{C}_1}\xspace$, $\ensuremath{\hat{\mathcal{C}}_1}\xspace$, $\ensuremath{\mathcal{C}_3}\xspace$, $\ensuremath{\hat{\mathcal{C}}_3}\xspace$, $A_1$, and $A_3$ that are derived from $(A,B,D,M)$ and $G$ in a well-defined way. Whenever we have a decomposition $(A,B,D,M)$ of $G$ we will use these sets without referring again to this definition. We will use, e.g., $\ensuremath{\mathcal{C}_1}\xspace'$ in cases where we require these sets for two decomposed graphs $G$ and $G'$.
\begin{definition}[matched/unmatched connected components of {$G[D]$}]\label{definition:mumcomponents} Let $G=(V,E)$ be a graph and let $(A,B,D,M)$ be a relaxed Gallai-Edmonds decomposition of $G$. We say that a connected component $C$ of $G[D]$ is \emph{matched} if there are vertices $v\in C$ and $u\in N(C)\subseteq A$ such that $\{u,v\}\in M$; we will also say that $u$ and $C$ are matched to one another. Otherwise, we say that $C$ is \emph{unmatched}. Note that edges of $M$ with both ends in $C$ have no influence on whether $C$ is matched or unmatched.
We use $\ensuremath{\mathcal{C}_1}\xspace$ and $\ensuremath{\hat{\mathcal{C}}_1}\xspace$ to denote the set of matched and unmatched singleton components in $G[D]$. We use $\ensuremath{\mathcal{C}_3}\xspace$ and $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ for matched and unmatched non-singleton components. By $A_1$ and $A_3$ we denote the set of vertices in $A$ that are matched to singleton respectively non-singleton components of $G[D]$; note that $A=A_1\mathbin{\dot\cup} A_3$. We remark that the names $\ensuremath{\mathcal{C}_3}\xspace$ and $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ refer to the fact that these components have at least three vertices each as they are factor-critical and non-singleton. \end{definition}
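The sets of Definition~\ref{definition:mumcomponents} are easy to read off from a given decomposition. Purely for illustration, and assuming (as in the earlier sketches) the \texttt{networkx} library together with an encoding of $M$ as a dictionary that maps every matched vertex to its partner, they could be computed as follows.
\begin{verbatim}
import networkx as nx

def classify(G, A, D, M):
    """Split the components of G[D] into matched/unmatched singletons
    (C1, C1hat) and matched/unmatched non-singletons (C3, C3hat),
    and collect A1 and A3, the A-vertices matched to them."""
    C1, C1hat, C3, C3hat = [], [], [], []
    A1, A3 = set(), set()
    for comp in nx.connected_components(G.subgraph(D)):
        # The unique A-vertex matched into this component, if any.
        partner = next((M[v] for v in comp if v in M and M[v] in A), None)
        matched = partner is not None
        if len(comp) == 1:
            (C1 if matched else C1hat).append(comp)
            if matched:
                A1.add(partner)
        else:
            (C3 if matched else C3hat).append(comp)
            if matched:
                A3.add(partner)
    return C1, C1hat, C3, C3hat, A1, A3
\end{verbatim}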
\begin{observation} Let $G=(V,E)$ be a graph and let $(A,B,D,M)$ be a relaxed Gallai-Edmonds decomposition of $G$. If a component $C$ of $G[D]$ is matched then there is a unique edge of $M$ that matches a vertex of $C$ with a vertex in $A$. This is a direct consequence of $M$ inducing a near-perfect matching on $G[C]$, i.e., that only a single vertex of $C$ is not matched to another vertex of $C$. \end{observation}
We now define the notion of a nice relaxed Gallai-Edmonds decomposition (short: nice decomposition), which only requires in addition that there are no unmatched singleton components with respect to decomposition $(A,B,D,M)$ of $G$, i.e., that $\ensuremath{\hat{\mathcal{C}}_1}\xspace=\emptyset$. Not every graph has a nice decomposition, e.g., the independent sets (edgeless graphs) have none. For the moment, we will postpone the question of how to actually find a nice decomposition (and how to ensure that there is one for each considered graph).
\begin{definition}[nice decomposition]\label{definition:nicedecomposition} Let $(A,B,D,M)$ be a relaxed Gallai-Edmonds decomposition of a graph $G$. We say that $(A,B,D,M)$ is a \emph{nice relaxed Gallai-Edmonds decomposition} (short a \emph{nice decomposition}) if there are no unmatched singleton components. \end{definition}
In the following section we will derive several lemmas about how vertex covers of $G$ and a nice decomposition $(A,B,D,M)$ of $G$ interact. For the moment, we will only prove the desired property that certain operations for deriving a graph $G'$ from $G$ allow $G'$ to effectively inherit the nice decomposition of $G$ (and also keep most of the related sets $\ensuremath{\mathcal{C}_1}\xspace$ etc.\ the same).
\begin{lemma}\label{lemma:inheritance} Let $G=(V,E)$ be a graph, let $(A,B,D,M)$ be a relaxed Gallai-Edmonds decomposition, and let $C\in\ensuremath{\hat{\mathcal{C}}_1}\xspace\mathbin{\dot\cup}\ensuremath{\hat{\mathcal{C}}_3}\xspace$ be an unmatched component of $G[D]$. Then $(A,B,D',M')$ is a relaxed Gallai-Edmonds decomposition of $G'=G-C$ where $M'$ is $M$ restricted to $V(G')=V\setminus C$ and where $D':=D\setminus C$. The corresponding sets $A_1$, $A_3$, $\ensuremath{\mathcal{C}_1}\xspace$, and $\ensuremath{\mathcal{C}_3}\xspace$ are the same as for $G$. The sets $\ensuremath{\hat{\mathcal{C}}_1}\xspace$ and $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ differ only by the removal of component $C$, i.e., $\ensuremath{\hat{\mathcal{C}}_1}\xspace'=\ensuremath{\hat{\mathcal{C}}_1}\xspace\setminus\{C\}$ and $\ensuremath{\hat{\mathcal{C}}_3}\xspace'=\ensuremath{\hat{\mathcal{C}}_3}\xspace\setminus\{C\}$. Moreover, if $(A,B,D,M)$ is a nice decomposition then so is $(A,B,D',M')$. \end{lemma}
\begin{proof} Clearly, $A\mathbin{\dot\cup} B\mathbin{\dot\cup} D'$ is a partition of the vertex set of $G'$. Let us first prove that $M'$ is a maximum matching of $G'$: To get $M'$ we delete the edges of $M$ in $C$ and we delete all vertices in $C$. Thus, any matching of $G'$ that is larger than $M'$ could be extended to a matching larger than $M$ for $G$ by adding the edges of $M$ on vertices of $C$. Now, we consider the connected components of $G'[D']$: We deleted the entire component $C$ of $G[D]$ to get $G'=G-C$. It follows that the connected components of $G'[D']$ are the same except for the absence of $C$, and they are factor-critical since that holds for all components of $G[D]$. Moreover, for any component $C'$ of $G'[D']$ the set $M'$ induces a near-perfect matching, as it is the restriction of $M$ to $G-C$. Similarly, since $B\cap C=\emptyset$ the set $M'$ induces a perfect matching on $G[B]$. In the same way, if $\{u,v\}\in M$ where $u\in A$ and $v\in C'$ where $C'$ is a connected component of $G[D]$ other than $C$ then $u$ is also matched to a component of $G'[D']$ in $G'$, namely to $C'$. It follows that $A\subseteq N_{G'}(D')$ using that $C$ is unmatched. The inverse inclusion follows since $N_G(D)=A$ and we did not make additional vertices adjacent to $D'$. Thus, $N_{G'}(D')=A$. Thus, $(A,B,D',M')$ is a relaxed Gallai-Edmonds decomposition.
Let us now check that the sets $A_1$, $A_3$, etc.\ are almost the same: We already saw that matching edges between vertices of $A$ and components of $G[D]$ persist in $G'$. It follows that $A'_1=A_1$ and $A'_3=A_3$, and that $\ensuremath{\mathcal{C}_3}\xspace'=\ensuremath{\mathcal{C}_3}\xspace$ and $\ensuremath{\mathcal{C}_1}\xspace'=\ensuremath{\mathcal{C}_1}\xspace$. If $C\in\ensuremath{\hat{\mathcal{C}}_1}\xspace$ then we get $\ensuremath{\hat{\mathcal{C}}_1}\xspace'=\ensuremath{\hat{\mathcal{C}}_1}\xspace\setminus\{C\}$; else we get $\ensuremath{\hat{\mathcal{C}}_1}\xspace'=\ensuremath{\hat{\mathcal{C}}_1}\xspace=\ensuremath{\hat{\mathcal{C}}_1}\xspace\setminus\{C\}$. Similarly, $\ensuremath{\hat{\mathcal{C}}_3}\xspace'=\ensuremath{\hat{\mathcal{C}}_3}\xspace\setminus\{C\}$. This completes the main statement of the lemma. The moreover part follows because $(A,B,D,M)$ being a nice decomposition of $G$ implies $\ensuremath{\hat{\mathcal{C}}_1}\xspace=\emptyset$, which yields $\ensuremath{\hat{\mathcal{C}}_1}\xspace'=\emptyset$ and, hence, that $(A,B,D',M')$ is a nice decomposition of $G'$. \end{proof}
\section{Nice decompositions and vertex covers}\label{section:nicedecompositionsandvertxcovers}
In this section we study the relation of vertex covers $X$ of a graph $G$ and any nice decomposition $(A,B,D,M)$ of $G$. As a first step, we prove a lower bound of $|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|$ on the size of vertex covers of $G$; this bound holds also for relaxed Gallai-Edmonds decompositions. Additionally, we show that $|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|=2LP(G)-MM(G)$, if $(A,B,D,M)$ is a nice decomposition. Note that Garg and Philip~\cite{GargP16} proved that $2LP(G)-MM(G)$ is a lower bound for the vertex cover size for every graph $G$, but we require the bound of $|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|$ related to our decompositions, and the equality to $2LP(G)-MM(G)$ serves ``only'' to later relate to the parameter value $\ell=k-(2LP(G)-MM(G))$.
\begin{lemma}\label{lemma:nice:vclb}
Let $G=(V,E)$ be a graph and let $(A,B,D,M)$ be a nice decomposition of $G$. Each vertex cover of $G$ has size at least $|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|=2LP(G)-MM(G)$. \end{lemma}
\begin{proof}
Let $X$ be any vertex cover of $G$. For each edge of $M$ that is not inside a component of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ the set $X$ contains at least one of its endpoints. For each component $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$ the set $X$ contains at least $\frac12(|C|+1)$ vertices of $C$ by Proposition~\ref{proposition:factorcritical:minvc:minfvc}, since $C$ is also a component of $G[D]$ and all those components are factor-critical. Since $M$ contains a near-perfect matching of $C$, i.e., of cardinality $\frac12(|C|-1)$, the at least $\frac12(|C|+1)$ vertices of $C$ in $X$ can also be counted as one vertex per matching edge in $G[C]$ plus one additional vertex. Overall, the set $X$ contains at least $|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|$ vertices.
We now prove that $|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|=2LP(G)-MM(G)$; first we show that $LP(G)=\frac12|V|$. Let $x\colon V\to\{0,\frac12,1\}$ be a fractional vertex cover of $G$. For each edge $\{u,v\}\in M$ that is not in a component of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ we have $x(u)+x(v)\geq 1$, in other words, $\frac12$ per vertex of the matching edge. For each $C\in \ensuremath{\hat{\mathcal{C}}_3}\xspace$ we have $\sum_{v\in C} x(v)\geq\frac12|C|$ by Proposition~\ref{proposition:factorcritical:minvc:minfvc} since $G[C]$ is factor-critical; again this equals $\frac12$ per vertex (of $C$). Using that $(A,B,D,M)$ is a nice decomposition, we can show that these considerations yield a lower bound of $\frac12|V|$; it suffices to check that all vertices have been considered: Components in $\ensuremath{\mathcal{C}_1}\xspace\cup\ensuremath{\mathcal{C}_3}\xspace$ are fully matched, all vertices in $A$ are matched (to components in $\ensuremath{\mathcal{C}_1}\xspace\cup\ensuremath{\mathcal{C}_3}\xspace$), and $M$ restricts to a perfect matching of $G[B]$; all these vertices contribute $\frac12$ each since they are in an edge of $M$ that is not in a component of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$. All remaining vertices are in components of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ since $\ensuremath{\hat{\mathcal{C}}_1}\xspace=\emptyset$; these vertices contribute $\frac12$ per vertex by being in some component $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$ that contributes $\frac12|C|$. Overall we get that $x$ has cost at least $\frac12|V|$ and, hence, that $LP(G)\geq\frac12|V|$. Since $x(v)\equiv\frac12$ is a feasible fractional vertex cover for every graph, we conclude that $LP(G)=\frac12|V|$.
Now, let us consider $MM(G)$: Note that $MM(G)=|M|$ as $M$ is a maximum matching of $G$. Since $\ensuremath{\hat{\mathcal{C}}_1}\xspace=\emptyset$, we know that the only exposed vertices (w.r.t.~$M$) are in components of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$; exactly one vertex per component. Thus, $|V|=2|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|$, which implies \[
2LP(G)-MM(G)=|V|-|M|=2|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|-|M|=|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|. \] This completes the proof. \end{proof}
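As a quick sanity check, let $G$ be a triangle and let $M$ consist of a single edge of it. Then $(\emptyset,\emptyset,V,M)$ is a nice decomposition, $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ consists of the triangle itself, and the lemma gives the lower bound $|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|=1+1=2=2\cdot\frac32-1=2LP(G)-MM(G)$, which is attained by any two vertices of the triangle.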
Intuitively, if the size of a vertex cover $X$ is close to the lower bound of $|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|$ then, apart from few exceptions (at most as many as the excess over the lower bound), it contains exactly one vertex per matching edge and exactly $\frac12(|C|+1)$ vertices per component $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$, i.e., it induces a tight vertex cover on all but few components $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$.
Our analysis of vertex covers $X$ in relation to a fixed nice decomposition will focus on those parts of the graph where $X$ exceeds the number of one vertex per matching edge respectively $\frac12(|C|+1)$ vertices per (unmatched, non-singleton) component $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$. To this end, we introduce the terms \emph{active component} and a set $\ensuremath{X_{\mathtt{op}}}\xspace\subseteq X$, which essentially capture the places where $X$ ``overpays'', i.e., where it locally exceeds the lower bound.
\begin{definition}[active component]\label{definition:activecomponent}
Let $G=(V,E)$ be a graph, let $(A,B,D,M)$ be a nice decomposition of $G$, and let $X$ be a vertex cover of $G$. A component $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$ is \emph{active (with respect to $X$)} if $X$ contains more than $\frac12(|C|+1)$ vertices of $C$, i.e., if $X\cap C$ is not a tight vertex cover of $G[C]$. \end{definition}
\begin{definition}[set $\ensuremath{X_{\mathtt{op}}}\xspace$]\label{definition:setxop} Let $G=(V,E)$ be a graph and let $(A,B,D,M)$ be a nice decomposition of $G$. For $X\subseteq V$ define \emph{$\ensuremath{X_{\mathtt{op}}}\xspace=\ensuremath{X_{\mathtt{op}}}\xspace(A_1,A_3,M,X)\subseteq A\cap X$} to contain all vertices $v$ that fulfill either of the following two conditions: \begin{enumerate}
\item $v\in A_1$ and $X$ contains both $v$ and $M(v)$.
\item $v\in A_3$ and $X$ contains $v$.\label{definition:setxop:condition2} \end{enumerate} \end{definition}
Both conditions of Definition~\ref{definition:setxop} capture parts of the graph where $X$ contains more vertices than implied by the lower bound. To see this for the second condition, note that if $v\in A_3\cap X$ then $X$ still needs at least $\frac12(|C|+1)$ vertices of the component $C\in\ensuremath{\mathcal{C}_3}\xspace$ that $v$ is matched to; since there are $\frac12(|C|+1)$ matching edges that $M$ has between vertices of $C\cup\{v\}$ we find that $X$ (locally) exceeds the lower bound, as $|X\cap(C\cup\{v\})|\geq 1+\frac12(|C|+1)$. Conversely, if $X$ does match the lower bound on $C\cup\{v\}$ then it cannot contain $v$.
We now prove formally that a vertex cover $X$ of size close to the lower bound of Lemma~\ref{lemma:nice:vclb} has only few active components and only a small set $\ensuremath{X_{\mathtt{op}}}\xspace\subseteq X$.
\begin{lemma}\label{lemma:nice:boundxh:boundac}
Let $G=(V,E)$ be a graph, let $(A,B,D,M)$ be a nice decomposition of $G$, let $X$ be a vertex cover of $G$, and let $\ensuremath{X_{\mathtt{op}}}\xspace=\ensuremath{X_{\mathtt{op}}}\xspace(A_1,A_3,M,X)$. The set $\ensuremath{X_{\mathtt{op}}}\xspace$ has size at most $\ell$ and there are at most $\ell$ active components in $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ with respect to $X$ where $\ell=|X|-(|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|)=|X|-(2LP(G)-MM(G))$. \end{lemma}
\begin{proof}
By Lemma~\ref{lemma:nice:vclb} we have that $X$ has size at least $|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|=2LP(G)-MM(G)$. Let $\ell=|X|-(|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|)$.
Assume first that $|\ensuremath{X_{\mathtt{op}}}\xspace|>\ell$. Let $\overline{M}\subseteq M$ denote the matching edges between a vertex of $A_1$ and the vertex of a (matched) singleton component from $\ensuremath{\mathcal{C}_1}\xspace$ that $X$ contains both endpoints of. Let $\overline{A}_3:=A_3\cap X$. By definition of $\ensuremath{X_{\mathtt{op}}}\xspace$ we get that $|\ensuremath{X_{\mathtt{op}}}\xspace|=|\overline{M}|+|\overline{A}_3|$. For $u\in \overline{A}_3$ consider the component $C_u\in\ensuremath{\mathcal{C}_3}\xspace$ with $\{u,v\}\in M$ and $v\in C_u$. Observe that $C_u$ is factor-critical and has at least three vertices, which implies that $X$ needs to contain at least $\frac12(|C_u|+1)$ vertices of $C_u$ (Proposition~\ref{proposition:factorcritical:minvc:minfvc}). Note that $M$ contains exactly $\frac12(|C_u|+1)$ matching edges between vertices of $C_u\cup\{u\}$, but $X$ contains at least $\frac12(|C_u|+1)+1$ vertices of $C_u\cup\{u\}$.
Observe that the arguments of Lemma~\ref{lemma:nice:vclb} still apply. That is, for each component $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$ the set $X$ contains at least $\frac12(|C|+1)$ of its vertices, and for all matching edges not in such a component we know that it contains at least one of its endpoints. Summing this up as in Lemma~\ref{lemma:nice:vclb} yields the lower bound of $|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|=2LP(G)-MM(G)$. Now, however, for each edge of $\overline{M}$ we get an extra $+1$ in the bound, and the same is true for each vertex $u\in \overline{A}_3$ since $X$ contains at least $\frac12(|C_u|+1)+1$ vertices of $C_u\cup\{u\}$, which is one more than the number of matching edges on these vertices. Thus, the size of $X$ is at least $2LP(G)-MM(G)+|\overline{M}|+|\overline{A}_3|>2LP(G)-MM(G)+\ell=|X|$; a contradiction.
Assume now that there are more than $\ell$ active components. We can apply the same accounting argument as before since $X$ needs to independently contain at least one vertex per matching edge and at least $\frac12(|C|+1)$ vertices per component $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$. Having more than $\ell$ active components, i.e., more than $\ell$ components $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$ where $X$ has more than $\frac12(|C|+1)$ vertices, would then give a lower bound of $|X|> 2LP(G)-MM(G)+\ell=|X|$; a contradiction. \end{proof}
The central question is of course how the different structures where $X$ exceeds the lower bound interact. We are only interested in aspects that are responsible for not allowing a tight vertex cover for the (unmatched, non-singleton) components $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$. This happens exactly due to vertices in $A$ that are adjacent to $C$ and that are not selected by $X$. Between components of $G[B]$ and non-singleton components of $G[D]$ there are $M$-alternating paths with vertices alternatingly from $A$ and from singleton components of $G[D]$ since vertices in $A$ are all matched to $D$ and singleton components in $G[D]$ have all their neighbors in $A$. Unless $X$ contains both vertices of a matching edge, it contains the $A$- or the $D$-vertices of such a path. Unmatched components of $G[D]$ and components of $G[B]$ have all neighbors in $A$. Matched components $C$ in $G[D]$ with matched neighbor $v\in A$ enforce not selecting $v$ for $X$ unless $X$ spends more than the lower bound; in this way, they lead to selection of $D$-vertices on $M$-alternating paths. Intuitively, this leads to two ``factions'' that favor either $A$- or $D$-vertices and that are effectively separated when $X$ selects both $A$- and $D$-endpoint of a matching edge. An optimal solution need not separate all neighbors in $A$ of every component $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$: such a component $C$ may still have a tight vertex cover, or paying for a larger cover of $C$ may be beneficial overall. The following auxiliary directed graph $H$ captures this situation and for certain vertex covers $X$ reachability of $v\in A$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace$ will be proved to be equivalent to $v\notin X$.
\begin{definition}[auxiliary directed graph $H$]\label{definition:graphh} Let $G=(V,E)$ be a graph and let $(A,B,D,M)$ be a nice decomposition of $G$. Define a \emph{directed graph $H=H(G,A,B,D,M)$} on vertex set $A$ by letting $(u,v)$ be a directed edge of $H$, for $u,v\in A$, whenever there is a vertex $w\in D$ with $\{u,w\}\in E\setminus M$ and $\{w,v\}\in M$. \end{definition}
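Definition~\ref{definition:graphh} translates directly into a short construction; purely as an illustration (again assuming \texttt{networkx} and the dictionary encoding of $M$ from the earlier sketch):
\begin{verbatim}
import networkx as nx

def auxiliary_digraph(G, A, D, M):
    """Directed graph H on vertex set A with an arc (u, v) whenever some
    w in D satisfies {u, w} in E \ M and {w, v} in M."""
    H = nx.DiGraph()
    H.add_nodes_from(A)
    for w in D:
        v = M.get(w)                  # the M-partner of w, if any
        if v is None or v not in A:   # only matching edges from D into A matter
            continue
        for u in G[w]:                # remaining neighbours of w: non-matching edges
            if u in A and u != v:
                H.add_edge(u, v)
    return H
\end{verbatim}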
The first relation between $G$, with decomposition~$(A,B,D,M)$, and the corresponding directed graph $H=H(G,A,B,D,M)$ is straightforward: It shows how inclusion and exclusion of vertices in a vertex cover work along an $M$-alternating path, when $X$ contains exactly one vertex per edge. We will later prove a natural complement of this lemma, but it involves significantly more work and does not hold for all vertex covers.
\begin{lemma}\label{lemma:reachable} Let $G=(V,E)$ be a graph, let $(A,B,D,M)$ be a nice decomposition of $G$, and let $X$ be a vertex cover of $G$. Let $H=H(G,A,B,D,M)$ and $\ensuremath{X_{\mathtt{op}}}\xspace=\ensuremath{X_{\mathtt{op}}}\xspace(A_1,A_3,M,X)$. If $v\in A$ is reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace$ then $X$ does not contain $v$. \end{lemma}
\begin{proof} Let $P_H=(v_1,\ldots,v_p)$ be a directed path in $H-\ensuremath{X_{\mathtt{op}}}\xspace$ from some vertex $v_1\in A_3\subseteq A$ to $v_p=v\in A$, and with $v_1,\ldots,v_p\in A\setminus \ensuremath{X_{\mathtt{op}}}\xspace=V(H-\ensuremath{X_{\mathtt{op}}}\xspace)$. By construction of $H$, for each edge $(v_i,v_{i+1})$ with $i\in[p-1]$ there is a vertex $u_i\in D$ with $\{v_i,u_i\}\in E\setminus M$ and $\{u_i,v_{i+1}\}\in M$. Since $M$ is a matching, all vertices $u_i$ are pairwise different and none of them are in $P_H$ as $u_i\in D$ and $A\cap D=\emptyset$. It follows that there is a path \[ P=(v_1,u_1,v_2,u_2,v_3,\ldots,v_{p-1},u_{p-1},v_p) \] in $G$ where $\{v_i,u_i\}\in E\setminus M$ and $\{u_i,v_{i+1}\}\in M$ for $i\in[p-1]$. In other words, $P$ is an $\overline{M},M$-path from $v_1\in A_3$ to $v_p=v\in A$.
Consider any edge $\{u_i,v_{i+1}\}\in M$ of $P$ and apply Definition~\ref{definition:setxop}: If $v_{i+1}\in A_3$ then $v_{i+1}\notin \ensuremath{X_{\mathtt{op}}}\xspace$ implies that $v_{i+1}\notin X$. If $v_{i+1}\in A_1$ then $v_{i+1}\notin \ensuremath{X_{\mathtt{op}}}\xspace$ implies that $X$ does not contain both $u_i$ and $v_{i+1}$. In both cases $X$ does not contain both vertices of the edge $\{u_i,v_{i+1}\}\in M$. Thus, $X$ contains exactly one vertex each from $\{u_1,v_2\},\ldots,\{u_{p-1},v_p\}$.
Let us check that this implies that $u_{p-1}\in X$ and $v_p\notin X$. Observe that $v_1\notin X$ since $v_1\in A_3$ and $v_1\in X$ would imply $v_1\in \ensuremath{X_{\mathtt{op}}}\xspace$. Clearly, $X$ must then contain $u_1$ to cover the edge $\{v_1,u_1\}$, but then it does not contain $v_2$, which would be a second vertex from $\{u_1,v_2\}$. Thus, to cover $\{v_2,u_2\}$ the set $X$ must contain $u_2$, implying that it does not contain also $v_3$ from $\{u_2,v_3\}\in M$. By iterating this argument we get that $u_{p-1}\in X$ and $v_p\notin X$. Since $v=v_p$, this completes the proof. \end{proof}
We will now work towards a complement of Lemma~\ref{lemma:reachable}: We would like to show that, under the same setup as in Lemma~\ref{lemma:reachable}, if $v$ is not reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace$ then $X$ does contain $v$. In general, this does not hold. Nevertheless, one can show that there always \emph{exists} a vertex cover of at most the same size, and with same set $\ensuremath{X_{\mathtt{op}}}\xspace$, that does contain $v$. Equivalently, we may put further restrictions on $X$ under which the lemma holds; to this end, we define the notion of a \emph{dominant} vertex cover.
\begin{definition}[dominant vertex cover]
Let $G=(V,E)$ be a graph and let $(A,B,D,M)$ be a nice decomposition of $G$. A vertex cover $X\subseteq V$ of $G$ is \emph{dominant} if $G$ has no vertex cover of size less than $|X|$ and no vertex cover of size $|X|$ contains fewer vertices of $D$. \end{definition}
We continue with a technical lemma that will be used to prove two lemmas about dominant vertex covers. The lemma statement is unfortunately somewhat opaque, but essentially it comes down to a fairly strong replacement routine that, e.g., can turn a given vertex cover into one that contains further vertices of $A$ and strictly fewer vertices of $D$.
\begin{lemma}\label{lemma:unify} Let $G=(V,E)$ be a graph, let $(A,B,D,M)$ be a nice decomposition of $G$, and let $H=H(G,A,B,D,M)$. Let $X\subseteq V$ and $\ensuremath{X_{\mathtt{op}}}\xspace=\ensuremath{X_{\mathtt{op}}}\xspace(A_1,A_3,M,X)$. Suppose that there is a nonempty set $Z\subseteq A\setminus X$ such that \begin{enumerate}
\item $X\cup Z$ is a vertex cover of $G$,
\item $X$ contains $M(z)$ for all $z\in Z$, and
\item $Z$ is not reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace$. \end{enumerate}
Then there exists a vertex cover $\ensuremath{\overline{X}}\xspace$ of size at most $|X|$ that contains $Z$. Moreover, $\ensuremath{\overline{X}}\xspace\cap D\subsetneq X\cap D$ and $\ensuremath{\overline{X}}\xspace\cap A\supsetneq X\cap A$. \end{lemma}
\begin{proof}
We give a proof by minimum counterexample. Assume that the lemma does not hold, and pick sets $X$ and $Z$ that fulfill the conditions of the lemma but for which the claimed set $\overline{X}$ does not exist, and with minimum value of $|X\cap D|$ among such pairs of sets. (It is no coincidence that the choice of $X$ is reminiscent of a dominant vertex cover, but note that $X$ is not necessarily a vertex cover.) We will derive sets $X'$ and $Z'$ such that either $Z'=\emptyset$ and we can choose $\ensuremath{\overline{X}}\xspace:=X'$, or $Z'\neq\emptyset$ but then $X'$ and $Z'$ fulfill the conditions of the lemma and we have $|X'\cap D|<|X\cap D|$. In the latter case, the lemma must hold for $X'$ and $Z'$ and we will see that $\ensuremath{\overline{X}}\xspace:=\ensuremath{\overline{X}}\xspace'$ fulfills the then-part of the lemma for $X$ and $Z$. Thus, both cases contradict the assumption that $X$ and $Z$ constitute a counterexample, proving correctness of the lemma.
First, let us find an appropriate set $X'$. To this end, let $U:=\{M(z) \mid z\in Z\}$. Since $Z\subseteq A$ we know that $U\subseteq D$ and that each vertex $z\in Z$ is matched to a private vertex $M(z)\in U$; hence $|U|=|Z|\geq 1$. We have $U\subseteq X$ since $X$ contains $M(z)$ for all $z\in Z$. Define $X':=(X\setminus U)\cup Z$. We have $|X'|=|X|-|U|+|Z|=|X|$ since $U\subseteq X$ and $|Z|=|U|$, and because $Z\subseteq A\setminus X$ entails that $X\cap Z=\emptyset$. Moreover, since $\emptyset\neq U\subseteq D$ and $Z\cap D=\emptyset$, we get that $X'\cap D\subsetneq X\cap D$; this also means that $|X'\cap D|<|X\cap D|$. Similarly, since $\emptyset\neq Z\subseteq A$, $X\cap Z=\emptyset$, and $U\cap A=\emptyset$, we get $X'\cap A\supsetneq X\cap A$. Finally, note that $X'\cup U$ is a vertex cover since $X'\cup U = X\cup Z$ is a vertex cover.
Second, we define $Z':=\{v\mid v\in N(u) \mbox{ for some } u\in U\}\setminus X'$, i.e., $Z'$ contains all vertices $v$ that are neighbors of some $u\in U$ and that are not in $X'$. (Note that this is not the same as $N(U)\setminus X'$ since a vertex $u\in U$ could have a neighbor $u'\in U$. Nevertheless, we show in a moment that $Z'\subseteq A$, ruling out this case as $U\subseteq D$ and $A\cap D=\emptyset$.) Clearly $X'\cap Z'=\emptyset$, and $Z\cap Z'=\emptyset$ since $Z\subseteq X'$. Observe that $X'\cup Z'$ is a vertex cover since $X'\cup U$ is a vertex cover: The only edges not covered by $X'\subsetneq X'\cup U$ have one endpoint in $U$ and the other one not in $X'$; these edges are covered by $Z'$ by definition.
Let us prove that $Z'\subseteq A\setminus X'$; it remains to prove $Z'\subseteq A$: Since $U\subseteq D$ and $N(D)=A$ we know that $N(u)\subseteq A\cup D$ for $u\in U$. Assume for contradiction that some $u\in U\subseteq D$ has a neighbor $v\in D$, and let $z\in Z$ with $u=M(z)$, using the definition of $U$. It follows that $u$ and $v$ are contained in the same non-singleton component $C$ of $G[D]$, as they are adjacent vertices of $D$. Moreover, $C$ is matched to $z$ since $u=M(z)$ implies $\{u,z\}\in M$. This in turn implies that $C$ is a matched non-singleton component, i.e., $C\in\ensuremath{\mathcal{C}_3}\xspace$, and, hence, $z\in A_3$. We also find that $z\notin\ensuremath{X_{\mathtt{op}}}\xspace$ since $Z\subseteq A\setminus X$ entails $z\notin X$ (cf.\ Definition~\ref{definition:setxop}). Together, however, this implies that $z$ is reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace$, namely from $z\in A_3$; a contradiction. Thus, no vertex $u\in U$ has a neighbor $v\in D$, implying that $Z'\subseteq A$. Together with $X'\cap Z'=\emptyset$ we get $Z'\subseteq A\setminus X'$.
We now prove that $M(z')\in X'$ for all $z'\in Z'$. Pick any $z'\in Z'$ and note that $z'\in A\setminus X'$. Thus, $z'$ is matched to some vertex $w\in D$, i.e., $w=M(z')$. The set $X'\cup U$ is a vertex cover, implying that it contains at least one vertex of the edge $\{z',w\}$. Since $z'\in A\setminus X'\subseteq A$, it is neither in $X'$ nor in $U$ (recall that $U\subseteq D$ and $A\cap D=\emptyset$). Thus, $w\in X'\cup U$. If $w\in U$ then there exists $z\in Z$ with $w=M(z)$ by definition of $U$. Clearly, as $M$ is a matching, we must have $z=z'$. This, however, violates our earlier observation that $Z\cap Z'=\emptyset$ since both sets would contain $z$. Thus, the only remaining possibility is that $w\in X'$. Hence, we get $M(z')=w\in X'$, as claimed.
Define $\ensuremath{X_{\mathtt{op}}}\xspace':=\ensuremath{X_{\mathtt{op}}}\xspace(A_1,A_3,M,X')$; to prove that no vertex of $Z'$ is reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace'$ it will be convenient to first prove $\ensuremath{X_{\mathtt{op}}}\xspace\subseteq\ensuremath{X_{\mathtt{op}}}\xspace'$: Let $v\in\ensuremath{X_{\mathtt{op}}}\xspace$ and recall that $\ensuremath{X_{\mathtt{op}}}\xspace\subseteq A$. If $v\in A_3$ then $v\in\ensuremath{X_{\mathtt{op}}}\xspace$ implies that $v \in X$. By definition of $X'$ we have $v\in X'$ as only vertices in $U\subseteq D$ are in $X$ but not in $X'$. From $v\in X'$, for $v\in A_3$, we directly conclude that $v\in \ensuremath{X_{\mathtt{op}}}\xspace'$. If $v\in A_1$ then $v\in\ensuremath{X_{\mathtt{op}}}\xspace$ implies that $v,M(v)\in X$. This implies $v\in X'$ as before but we still need to show that $M(v)\in X'$. Assume for contradiction that $M(v)\notin X'$. Observe that this implies $M(v)\in U$ by definition of $X'$, as $M(v)\in X$. Thus, by definition of $U$, we get that $M(v)$ is matched to some vertex $z\in Z$, i.e., $M(v)=M(z)$. Since $M$ is a matching and $M(v)$ is matched to $v$, we of course get $v=z$. This implies $v=z\in Z$, which contradicts $v\in X$ as $Z\subseteq A\setminus X$. Thus, we have both $v\in X'$ and $M(v)\in X'$, which, for $v\in A_1$, implies that $v\in\ensuremath{X_{\mathtt{op}}}\xspace'$. Both cases together imply that $\ensuremath{X_{\mathtt{op}}}\xspace\subseteq\ensuremath{X_{\mathtt{op}}}\xspace'$.
We will now prove that no vertex of $Z'$ is reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace'$, using $\ensuremath{X_{\mathtt{op}}}\xspace\subseteq\ensuremath{X_{\mathtt{op}}}\xspace'$. Let $P=(v_1,\ldots,v_p)$ be any directed path in $H$ with $v_1\in A_3$ and $v_p=z'\in Z'$. As $z'\in Z'$ there is $u\in U$ with $z'\in N(u)\setminus X'$. Similarly, since $u\in U$ there must be $z\in Z$ with $u=M(z)$; we have $z\neq z'$ since $Z\cap Z'=\emptyset$. Observe that this means that $\{z,u\}\in M$ and $\{u,z'\}\in E\setminus M$ as $u$ cannot be incident with two matching edges. This implies, by Definition~\ref{definition:graphh}, that $(z',z)$ is an edge in $H$. Thus, there is a directed walk $W$ from $v_1\in A_3$ to $z\in Z$ in $H$ by using path $P$ and appending the edge $(z',z)$. (With slightly more work one could see that this must be a path, but we do not need this fact.) Since no vertex of $Z$ is reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace$ we conclude that $W$ contains at least one vertex of $\ensuremath{X_{\mathtt{op}}}\xspace$. Note that $\ensuremath{X_{\mathtt{op}}}\xspace$ does not contain $z\in Z$ since we assumed $Z\subseteq A\setminus X$ and $\ensuremath{X_{\mathtt{op}}}\xspace\subseteq X$. Thus, $\ensuremath{X_{\mathtt{op}}}\xspace$ contains a vertex of $P$ (noting that $z$ is the only vertex of $W$ that may not be in $P$). Since $\ensuremath{X_{\mathtt{op}}}\xspace\subseteq\ensuremath{X_{\mathtt{op}}}\xspace'$ it follows that $\ensuremath{X_{\mathtt{op}}}\xspace'$ also contains a vertex of $P$; since $P$ was chosen arbitrarily it follows that no vertex of $Z'$ is reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace'$, as claimed.
Finally, we distinguish two cases: (1) $Z'=\emptyset$ and (2) $Z'\neq\emptyset$. In the former case, we show that $\ensuremath{\overline{X}}\xspace:=X'$ is feasible; in the latter case we use the lemma on $X'$ and $Z'$ to get $\ensuremath{\overline{X}}\xspace'$ and then show that $\ensuremath{\overline{X}}\xspace:=\ensuremath{\overline{X}}\xspace'$ fulfills the then-part of the lemma.
(1) $Z'=\emptyset$: We get that $X'=X'\cup Z'$ is a vertex cover of $G$. We showed that $|X'|=|X|$, and that $X'\cap D\subsetneq X\cap D$ and $X'\cap A\supsetneq X\cap A$. Finally, by construction we have that $Z\subseteq X'$. Thus, $\ensuremath{\overline{X}}\xspace:=X'$ fulfills the properties claimed in the lemma, contradicting the fact that $X$ and $Z$ constitute a counterexample.
(2) $Z'\neq\emptyset$: The above considerations, together with $Z'\neq\emptyset$, show that $X'$ and $Z'$ fulfill the conditions of the lemma: The set $Z'$ is a nonempty subset of $A\setminus X'$; the set $X'\cup Z'$ is a vertex cover of $G$; the set $X'$ contains $M(z')$ for all $z'\in Z'$; and $Z'$ is not reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace'$, where $\ensuremath{X_{\mathtt{op}}}\xspace'=\ensuremath{X_{\mathtt{op}}}\xspace(A_1,A_3,M,X')$. Moreover, we know that $|X'\cap D|<|X\cap D|$, which implies that the lemma must hold for this choice of sets, as $X$ and $Z$ were assumed to be a counterexample with minimum value of $|X\cap D|$. Let $\ensuremath{\overline{X}}\xspace'$ be the outcome of applying the lemma to $X'$ and $Z'$; let us check that $\ensuremath{\overline{X}}\xspace:=\ensuremath{\overline{X}}\xspace'$ is feasible: \begin{itemize}
\item The lemma guarantees that $\ensuremath{\overline{X}}\xspace'$ is a vertex cover of $G$.
\item The lemma guarantees $|\ensuremath{\overline{X}}\xspace'|\leq |X'|$, and using $|X'|=|X|$ we conclude that $|\ensuremath{\overline{X}}\xspace'|\leq |X|$.
\item We know, as discussed in case (1), that $Z\subseteq X'$. The lemma guarantees that $\ensuremath{\overline{X}}\xspace'\cap A\supsetneq X'\cap A$ and $\ensuremath{\overline{X}}\xspace'\cap D\subsetneq X'\cap D$. The former, together with $Z\subseteq X'$ and $Z\subseteq A$, yields $Z\subseteq X'\cap A\subsetneq \ensuremath{\overline{X}}\xspace'\cap A$. Together with $X'\cap A\supsetneq X\cap A$ and $X'\cap D\subsetneq X\cap D$, we get $\ensuremath{\overline{X}}\xspace'\cap A\supsetneq X'\cap A \supsetneq X\cap A$ and $\ensuremath{\overline{X}}\xspace'\cap D\subsetneq X'\cap D\subsetneq X\cap D$. \end{itemize} Thus, $\ensuremath{\overline{X}}\xspace:=\ensuremath{\overline{X}}\xspace'$ is a feasible choice. Altogether, we find that in both cases there does in fact exist a valid set $\ensuremath{\overline{X}}\xspace$. This means that $X$ and $Z$ do not constitute a counterexample. Since there is no minimum counterexample, the lemma holds as claimed. \end{proof}
Now, as a first application of Lemma~\ref{lemma:unify} we prove a complement to Lemma~\ref{lemma:reachable}. Note that this lemma only applies to dominant vertex covers, whereas Lemma~\ref{lemma:reachable} holds for any vertex cover of $G$. Fortunately, after the rather long proof of Lemma~\ref{lemma:unify}, the present lemma is now a fairly straightforward consequence.
\begin{lemma}\label{lemma:notreachable} Let $G=(V,E)$ be a graph, let $(A,B,D,M)$ be a nice decomposition of $G$, and let $H=H(G,A,B,D,M)$. Let $X$ be a dominant vertex cover of $G$ and let $\ensuremath{X_{\mathtt{op}}}\xspace=\ensuremath{X_{\mathtt{op}}}\xspace(A_1,A_3,M,X)$. If $v\in A$ is not reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace$ then $X$ contains $v$. \end{lemma}
\begin{proof} First, let us note that if $v\in\ensuremath{X_{\mathtt{op}}}\xspace$ then, by Definition~\ref{definition:setxop}, we know that $v\in X$. It remains to consider the more interesting case that $v\in A\setminus \ensuremath{X_{\mathtt{op}}}\xspace$.
Assume for contradiction that $v\notin X$. We will apply Lemma~\ref{lemma:unify} to reach a contradiction. To this end, we will define a set $Z$ such that $X$ and $Z$ fulfill the conditions of Lemma~\ref{lemma:unify}. Let $Z:=\{v\}$. Clearly, we have $\emptyset\neq Z\subseteq A\setminus X$. Since $X$ is a vertex cover and $v\notin X$, the vertex $M(v)$ must be in $X$ in order to cover the edge $\{v,M(v)\}$. (Note that $v\in A$ implies that $M(v)\in D$ exists.) By assumption of the present lemma, $v$ is not reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace$, where $\ensuremath{X_{\mathtt{op}}}\xspace=\ensuremath{X_{\mathtt{op}}}\xspace(A_1,A_3,M,X)$. Thus, Lemma~\ref{lemma:unify} applies to $X$ and $Z$, and yields a set $\ensuremath{\overline{X}}\xspace$ that is a vertex cover of $G$ of size at most $|X|$ and with $|\ensuremath{\overline{X}}\xspace\cap D|<|X\cap D|$, contradicting the assumption that $X$ is a dominant vertex cover. Thus, the assumption that $v\notin X$ is wrong, and the lemma follows. \end{proof}
As a second application of Lemma~\ref{lemma:unify} we prove that sets $\ensuremath{X_{\mathtt{op}}}\xspace$ corresponding to dominant vertex covers are always closest to $A_3$ in the auxiliary directed graph $H$. This is a requirement for applying the matroid tools from Kratsch and Wahlstr\"om~\cite{KratschW12} later since closest sets allow us to translate between reachability with respect to a closest cut and independence in an appropriate matroid. Unlike for the previous lemma, there is still quite some work involved before applying Lemma~\ref{lemma:unify} in the proof.
\begin{lemma}\label{lemma:closest} Let $G=(V,E)$ be a graph, let $(A,B,D,M)$ be a nice decomposition of $G$, and let $H=H(G,A,B,D,M)$. Let $X$ be a dominant vertex cover of $G$ and let $\ensuremath{X_{\mathtt{op}}}\xspace=\ensuremath{X_{\mathtt{op}}}\xspace(A_1,A_3,M,X)$. Then $\ensuremath{X_{\mathtt{op}}}\xspace$ is closest to $A_3$ in $H$. \end{lemma}
\begin{proof}
Assume that $\ensuremath{X_{\mathtt{op}}}\xspace$ is not closest to $A_3$ in $H$ and, consequently, let $Y\subseteq V(H)=A$ be a minimum $A_3,\ensuremath{X_{\mathtt{op}}}\xspace$-separator in $H$ with $|Y|\leq|\ensuremath{X_{\mathtt{op}}}\xspace|$ and $Y\neq \ensuremath{X_{\mathtt{op}}}\xspace$. We will apply Lemma~\ref{lemma:unify} to appropriately chosen sets $X'$ and $Z$ (with $X'$ and $Z$ playing the roles of $X$ and $Z$ in the lemma).
Let $X':=(X\setminus (\ensuremath{X_{\mathtt{op}}}\xspace\setminus Y))\cup (Y\setminus \ensuremath{X_{\mathtt{op}}}\xspace)$. Note that \begin{align*}
|\ensuremath{X_{\mathtt{op}}}\xspace\setminus Y|=|\ensuremath{X_{\mathtt{op}}}\xspace|-|\ensuremath{X_{\mathtt{op}}}\xspace\cap Y|\geq |Y|-|\ensuremath{X_{\mathtt{op}}}\xspace\cap Y|=|Y\setminus \ensuremath{X_{\mathtt{op}}}\xspace|. \end{align*}
This implies that $|X'|\leq|X|$, using that $\ensuremath{X_{\mathtt{op}}}\xspace\setminus Y \subseteq \ensuremath{X_{\mathtt{op}}}\xspace \subseteq X$ (see Definition~\ref{definition:setxop}). We can also observe that $X'$ and $X$ contain the same vertices of $D$, and hence also the same number since $\ensuremath{X_{\mathtt{op}}}\xspace\setminus Y$ and $Y\setminus \ensuremath{X_{\mathtt{op}}}\xspace$ are both subsets of $A$. (Let us mention that these two properties are not needed to apply Lemma~\ref{lemma:unify} to $X'$ but they are needed for the outcome to have relevance for $X$.)
Let $\ensuremath{X_{\mathtt{op}}}\xspace'=\ensuremath{X_{\mathtt{op}}}\xspace(A_1,A_3,M,X')$ according to Definition~\ref{definition:setxop}. We show that $Y\subseteq \ensuremath{X_{\mathtt{op}}}\xspace'$ by proving that $y\in\ensuremath{X_{\mathtt{op}}}\xspace'$ for all $y\in Y$; we distinguish two cases depending on whether $y\in\ensuremath{X_{\mathtt{op}}}\xspace$.
Let $y\in Y\cap \ensuremath{X_{\mathtt{op}}}\xspace$. If $y\in A_1$ then $y\in\ensuremath{X_{\mathtt{op}}}\xspace$ implies $y,M(y)\in X$. By definition of $X'$ we also have $y,M(y)\in X'$: Only elements of $\ensuremath{X_{\mathtt{op}}}\xspace\setminus Y\subseteq A$ are in $X$ but not in $X'$; neither $y\in Y\cap\ensuremath{X_{\mathtt{op}}}\xspace$ nor $M(y)\in D$ are affected by this. Thus, if $y\in A_1$, then $y,M(y)\in X'$, which implies $y\in \ensuremath{X_{\mathtt{op}}}\xspace'$. If $y\in A_3$ then $y\in\ensuremath{X_{\mathtt{op}}}\xspace$ implies $y\in X$. As before, the definition of $X'$ implies $y\in X'$, which yields $y\in \ensuremath{X_{\mathtt{op}}}\xspace'$. Thus, all $y\in Y\cap\ensuremath{X_{\mathtt{op}}}\xspace$ are also contained in $\ensuremath{X_{\mathtt{op}}}\xspace'$.
Now, let $y\in Y\setminus\ensuremath{X_{\mathtt{op}}}\xspace$. Since $Y$ is a minimum (and in particular minimal) $A_3,\ensuremath{X_{\mathtt{op}}}\xspace$-separator, there must be an $A_3,y$-path in $H-(Y\setminus\{y\})$ or else $Y\setminus\{y\}$ would also be an $A_3,\ensuremath{X_{\mathtt{op}}}\xspace$-separator. (This is a standard argument: if $Y\setminus\{y\}$ were not a separator, then there would be an $A_3,\ensuremath{X_{\mathtt{op}}}\xspace$-path avoiding $Y\setminus\{y\}$. This path needs to contain $y$, as $Y$ is a separator, and can be shortened to a path from $A_3$ to $y$.) Let $P$ be a directed path from some vertex $v\in A_3$ to $y$ in $H-(Y\setminus\{y\})$, i.e., a path in $H$ containing no vertex of $Y\setminus\{y\}$. We find that there can be no vertex of $\ensuremath{X_{\mathtt{op}}}\xspace$ on $P$: We already know that the final vertex $y$ of $P$ is not in $\ensuremath{X_{\mathtt{op}}}\xspace$. If $u$ is any earlier vertex of $P$ that is in $\ensuremath{X_{\mathtt{op}}}\xspace$ then $P$ could be shortened to a path from $v\in A_3$ to $u\in \ensuremath{X_{\mathtt{op}}}\xspace$ that avoids all vertices of $Y$ (since $y$ was the only vertex of $Y$ on $P$ but it comes after $u$); thus $Y$ would not separate $A_3$ from $\ensuremath{X_{\mathtt{op}}}\xspace$ in $H$. Since $P$ contains no vertex of $\ensuremath{X_{\mathtt{op}}}\xspace$, we conclude that $y$ is reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace$. By Lemma~\ref{lemma:reachable} we conclude that $y\notin X$. Since $Y\subseteq A$, the vertex $y$ is matched to some vertex $u\in D$, and $X$ must contain $u$ to cover the edge $\{u,y\}$. Since $X$ and $X'$ contain the same vertices of $D$, as observed above, we have $u\in X'$. Additionally, by construction of $X'$, we have $Y\setminus\ensuremath{X_{\mathtt{op}}}\xspace\subseteq X'$, implying that $y\in X'$. Thus, if $y\in A_1$ then we have $y\in X'$ and $M(y)=u\in X'$, which implies $y\in\ensuremath{X_{\mathtt{op}}}\xspace'$; if $y\in A_3$ then $y\in X'$ suffices to conclude $y\in\ensuremath{X_{\mathtt{op}}}\xspace'$. Together we get that $y\in Y\setminus\ensuremath{X_{\mathtt{op}}}\xspace$ implies $y\in\ensuremath{X_{\mathtt{op}}}\xspace'$; combined with the case $y\in Y\cap\ensuremath{X_{\mathtt{op}}}\xspace$ we get $Y\subseteq\ensuremath{X_{\mathtt{op}}}\xspace'$.
Let $Z:=\ensuremath{X_{\mathtt{op}}}\xspace\setminus Y$. By definition of $X'$ we have $X'\cap Z=\emptyset$; since $\ensuremath{X_{\mathtt{op}}}\xspace\subseteq A$ this entails $Z\subseteq A\setminus X'$. Since $|Y|\leq|\ensuremath{X_{\mathtt{op}}}\xspace|$ and $Y\neq\ensuremath{X_{\mathtt{op}}}\xspace$, we conclude that $Z=\ensuremath{X_{\mathtt{op}}}\xspace\setminus Y\neq\emptyset$. The set $X'\cup Z$ contains $X$ by definition of $X'$ and hence it is also a vertex cover of $G$. To get that $M(z)\in X'$ for $z\in Z$ we need to distinguish two cases: If $z\in A_1$ then $z\in\ensuremath{X_{\mathtt{op}}}\xspace$ implies $M(z)\in X$; note that $M(z)\in D$ as $z\in A$. Since $X'$ contains the same vertices of $D$ as $X$ we get $M(z)\in X'$. If $z\in A_3$ then we reach a contradiction: Recall that $Y$ is an $A_3,\ensuremath{X_{\mathtt{op}}}\xspace$-separator. This necessitates that $Y$ contains all vertices of $A_3\cap\ensuremath{X_{\mathtt{op}}}\xspace$, implying that $z\in Y$, contradicting $z\in\ensuremath{X_{\mathtt{op}}}\xspace\setminus Y$. Thus, if $z\in\ensuremath{X_{\mathtt{op}}}\xspace\setminus Y$ then $z\in A_1$ and we get $M(z)\in X'$ as claimed. Finally, let us check that no vertex of $Z$ is reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace'$. This follows immediately from $Z\subseteq \ensuremath{X_{\mathtt{op}}}\xspace$ and $Y\subseteq\ensuremath{X_{\mathtt{op}}}\xspace'$, and the fact that $Y$ is an $A_3,\ensuremath{X_{\mathtt{op}}}\xspace$-separator in $H$.
By the above considerations we may apply Lemma~\ref{lemma:unify} to $X'$ and $Z$ and obtain a vertex cover \ensuremath{\overline{X}}\xspace of $G$ of size at most $|X'|\leq|X|$ that contains fewer vertices of $D$ than $X'$. Since $X$ and $X'$ contain the same number of vertices of $D$, we get $|\ensuremath{\overline{X}}\xspace\cap D|<|X\cap D|$, contradicting the choice of $X$ as a dominant vertex cover. \end{proof}
\section{Randomized polynomial kernelization}\label{section:kernelization}
In this section, we describe our randomized polynomial kernelization for \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace. For convenience, let us fix an input instance $(G,k,\ell)$, i.e., $G=(V,E)$ is a graph for which we want to know whether it has a vertex cover of size at most $k$; the parameter is $\ell=k-(2LP(G)-MM(G))$, where $LP(G)$ is the minimum cost of a fractional vertex cover of $G$ and $MM(G)$ is the size of a largest matching.
From previous work of Garg and Philip~\cite{GargP16} we know that the well-known linear program-based preprocessing for \probname{Vertex Cover}\xspace (cf.~\cite{CyganFKLMPPS15}) can also be applied to \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace; the crucial new aspect is that this operation does not increase the value $k-(2LP-MM)$. The LP-based preprocessing builds on the half-integrality of fractional vertex covers and a result of Nemhauser and Trotter~\cite{NemhauserT1975} stating that all vertices with value $1$ and $0$ in an optimal fractional vertex cover $x\colon V\to\{0,\frac12,1\}$ are, respectively, included in and excluded from at least one minimum (integral) vertex cover. Thus, only vertices with value $x(v)=\frac12$ remain and the best LP solution costs exactly $\frac12$ times the number of (remaining) vertices. For our kernelization we only require the fact that if $G$ is reduced under this reduction rule then $LP(G)=\frac12|V(G)|$; e.g., we do not require $x\colon V\to\{\frac12\}$ to be the unique optimal fractional vertex cover. Without loss of generality, we assume that our given graph $G=(V,E)$ already fulfills $LP(G)=\frac12|V|$.
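For concreteness, the following is a minimal sketch of one standard way to obtain an optimal half-integral fractional vertex cover, namely via the bipartite double cover and K\H{o}nig's theorem; the use of \texttt{networkx} and the function name \texttt{half\_integral\_lp\_vc} are our own illustrative choices and not part of the cited preprocessing routines.
\begin{verbatim}
import networkx as nx

def half_integral_lp_vc(G):
    # Optimal half-integral LP vertex cover via the bipartite double cover:
    # every edge {u,v} of G becomes (u,0)-(v,1) and (v,0)-(u,1); a minimum
    # vertex cover of the double cover (Koenig) yields x(v) in {0, 1/2, 1}.
    B = nx.Graph()
    B.add_nodes_from(((v, 0) for v in G), bipartite=0)
    B.add_nodes_from(((v, 1) for v in G), bipartite=1)
    for u, v in G.edges():
        B.add_edge((u, 0), (v, 1))
        B.add_edge((v, 0), (u, 1))
    top = {(v, 0) for v in G}
    matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=top)
    cover = nx.bipartite.to_vertex_cover(B, matching, top_nodes=top)
    return {v: (((v, 0) in cover) + ((v, 1) in cover)) / 2 for v in G}
\end{verbatim}
Vertices receiving value $1$ or $0$ would then be handled as described above, leaving a graph in which the LP optimum is exactly half the number of remaining vertices.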
\begin{observation}
If $LP(G)=\frac12|V|$ then $2LP(G)-MM(G)=|V|-MM(G)$. In other words, if $M$ is a maximum matching of $G$ then the lower bound $2LP(G)-MM(G)=|V|-MM(G)=|V|-|M|$ is equal to the cardinality of $M$ plus the number of vertices left exposed by $M$. \end{observation}
As a first step, let us compute the Gallai-Edmonds decomposition $V=A\mathbin{\dot\cup} B\mathbin{\dot\cup} D$ of $G$ according to Definition~\ref{definition:ged}; this can be done in polynomial time.\footnote{The main expenditure is finding the set $D$. A straightforward approach is to compute a maximum matching $M_v$ of $G-v$ for each $v\in V$. If $|M_v|=MM(G)$ then $v$ is in $D$ as $M_v$ is maximum and exposes $v$; otherwise $v\notin D$ as no maximum matching exposes $v$.} Using $LP(G)=\frac12|V|$ we can find a maximum matching $M$ of $G$ such that $(A,B,D,M)$ is a nice decomposition of $G$.
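Before turning to the matching (Lemma~\ref{lemma:kernel:nicedecomposition} below), here is a minimal sketch of the straightforward approach for finding $D$ described in the footnote; the use of \texttt{networkx} and the function name \texttt{gallai\_edmonds\_parts} are illustrative assumptions, and faster methods exist.
\begin{verbatim}
import networkx as nx

def gallai_edmonds_parts(G):
    # v is in D iff some maximum matching of G exposes v,
    # i.e. iff MM(G - v) = MM(G); then A = N(D) minus D, B = rest.
    mm = len(nx.max_weight_matching(G, maxcardinality=True))
    D = set()
    for v in list(G.nodes()):
        H = G.copy()
        H.remove_node(v)
        if len(nx.max_weight_matching(H, maxcardinality=True)) == mm:
            D.add(v)
    A = set().union(*(set(G.neighbors(v)) for v in D)) - D
    B = set(G.nodes()) - A - D
    return A, B, D
\end{verbatim}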
\begin{lemma}\label{lemma:kernel:nicedecomposition}
Given $G=(V,E)$ with $LP(G)=\frac12|V|$ and a Gallai-Edmonds decomposition $V=A\mathbin{\dot\cup} B\mathbin{\dot\cup} D$ of $G$ one can in polynomial time compute a maximum matching $M$ of $G$ such that $(A,B,D,M)$ is a nice decomposition of $G$. \end{lemma}
\begin{proof}
Let $\ensuremath{\mathcal{C}_1}\xspace$ denote the set of singleton components of $G[D]$ and let $I=V(\ensuremath{\mathcal{C}_1}\xspace)\subseteq D$ contain all vertices that are in singleton components of $G[D]$. Clearly, $I$ is an independent set since $G[I]$ is the subgraph of $G[D]$ containing just the singleton components. Assume for contradiction that there is a set $I'\subseteq I$ with $|N_G(I')|<|I'|$. It follows directly that there would be a fractional vertex cover of $G$ of cost less than $\frac12|V|$, namely assign $0$ to vertices of $I'$, assign $1$ to vertices of $N(I')$, and assign $\frac12$ to all other vertices. The total cost is \begin{align*}
0\cdot |I'| + 1\cdot |N(I')| + \frac12 |V\setminus(I'\cup N(I'))|<\frac12|I'|+\frac12 |N(I')|+\frac12|V\setminus(I'\cup N(I'))| = \frac12|V|. \end{align*}
All edges incident with $I'$ have their other endpoint in $N(I')$, which has value $1$. All other edges have two endpoints with value at least $\frac12$. This contradicts the assumption that $LP(G)=\frac12|V|$.
Thus, each $I'\subseteq I$ has at least $|I'|$ neighbors in $G$. By Hall's Theorem there exists a matching of $I$ into $N(I)$, and standard bipartite matching algorithms can find one in polynomial time; let $M_1$ be such a matching. Using any matching algorithm that finds a maximum matching by processing augmenting paths, we can compute from $M_1$ in polynomial time a maximum matching $M$ of $G$. The matching $M$ still contains edges incident with all vertices of $I$ since extending a matching along an augmenting path does not expose any previously matched vertices.
Using the maximum matching $M$, let us check briefly that $(A,B,D,M)$ is indeed a nice decomposition of $G$. We know already that there are no unmatched singleton components since $M$ contains matching edges incident with all vertices of $I=V(\ensuremath{\mathcal{C}_1}\xspace)$ and all these edges are also incident to a vertex in $A$. (Recall that the neighborhood of each component of $G[D]$ in $G$ lies in $A$.) Since $V=A\mathbin{\dot\cup} B\mathbin{\dot\cup} D$ is a Gallai-Edmonds decomposition of $G$ we get from Definition~\ref{definition:ged} and Theorem~\ref{theorem:ged} that $A=N(D)$, each component of $G[D]$ is factor-critical, and that $M$ (by being a maximum matching of $G$) must induce a perfect matching of $G[B]$, a near-perfect matching of each component $C$ of $G[D]$, and a matching of $A$ into $D$. This completes the proof. \end{proof}
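The first phase of the above proof, matching all singleton components of $G[D]$ into $A$, could look as follows; this is a hedged sketch only (again assuming \texttt{networkx}), and the subsequent extension to a maximum matching of $G$ along augmenting paths is not shown.
\begin{verbatim}
import networkx as nx

def match_singletons_into_A(G, A, D):
    # I = vertices of singleton components of G[D]; Hall's condition,
    # argued above, guarantees that a matching saturating I exists.
    I = {v for v in D if all(u not in D for u in G.neighbors(v))}
    B = nx.Graph()
    B.add_nodes_from(I, bipartite=0)
    B.add_nodes_from(A, bipartite=1)
    B.add_edges_from((v, a) for v in I for a in G.neighbors(v) if a in A)
    matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=I)
    return {(v, matching[v]) for v in I}
\end{verbatim}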
We fix a nice decomposition $(A,B,D,M)$ of $G$ obtained via Lemma~\ref{lemma:kernel:nicedecomposition}. We have already learned about the relation of dominant vertex covers $X$, their intersection with the set $A$, and separation of $A$ vertices from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace$, where $H=H(G,A,B,D,M)$. It is safe to assume that solutions are dominant vertex covers since among the minimum vertex covers there is always one with the smallest possible intersection with $D$. We would now like to establish that most components of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ can be deleted (while reducing $k$ by the cost of corresponding tight vertex covers). Clearly, since any vertex cover pays at least the cost of tight covers on these components, we cannot turn a yes-instance into a no-instance this way. However, if the instance is no then it might become yes.
In the following, we will try to motivate both the selection process for components of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ that are deleted as well as the high-level proof strategy for establishing correctness. We will tacitly ignore most technical details, like parameter values, getting appropriate nice decompositions, etc., and refer to the formal proof instead. Assume that we are holding a no-instance $(G,k,\ell)$. Consider, for the moment, the effect of deleting all components $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$ that have tight vertex covers and updating the budget accordingly; for simplicity, say they all have such vertex covers. Let $(G_0,k_0,\ell)$ be the obtained instance; if this instance is no as well, then deleting any subset of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ also preserves the correct answer (namely: no). Else, if $(G_0,k_0,\ell)$ is yes then pick any dominant vertex cover $X^0$ for it. We could attempt to construct a vertex cover of $G$ of size at most $k$ by adding back the deleted components and picking a tight vertex cover for each; crucially, these covers must also handle edges between each such component $C$ and $A$. Since $(G,k,\ell)$ was assumed to be a no-instance, there must be too many components $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$ for which this approach fails. For any such component, the adjacent vertices in $A\setminus X^0$ force a selection of their neighbors $Z_A=N(A\setminus X^0)\cap C$ that cannot be completed to a tight vertex cover of $C$. To avoid turning the no-instance $(G,k,\ell)$ into a yes-instance $(G',k',\ell)$ we have to keep enough components of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ in order to falsify any suggested solution $X'$ of size at most $k'$ for $G'$. The crux is that there may be an exponential number of such solutions and that we do not know any of them. This is where the auxiliary directed graph and related technical lemmas as well as the matroid-based tools of Kratsch and Wahlstr\"om~\cite{KratschW12} are essential.
Let us outline how we arrive at an application of the matroid-based tools. Crucially, if $C$ (as above) has no tight vertex cover containing $Z_A=N(A\setminus X^0)\cap C$ then, by Lemma~\ref{lemma:criticalsets:boundsize}, there is a set $Z\subseteq Z_A$ of size at most three such that no tight vertex cover contains $Z$. Accordingly, there is a set $T\subseteq A\setminus X^0$ of size at most three whose neighborhood in $C$ contains $Z$. Thus, the fact that $X^0$ contains no vertex of $T$ is responsible for not allowing a tight vertex cover of $C$. This in turn means, by Lemma~\ref{lemma:notreachable}, that all vertices in $T$ are reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace^0$. Recalling that a set $\ensuremath{X_{\mathtt{op}}}\xspace^0$ corresponding to a dominant vertex cover is also closest to $A_3$, we can apply a result from~\cite{KratschW12} that generates a sufficiently small representative set of sets $T$ corresponding to components of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$. If a dominant vertex cover has any reachable sets $T$ then the lemma below guarantees that at least one such set is in the output. For each set we select a corresponding component $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$ and then start over on the remaining components. After $\ell+1$ iterations we can prove that for any component $C$ that was not selected (and is hence deleted) and any proposed solution $X'$ for the resulting graph that does not allow a tight vertex cover for $C$, there are $\ell+1$ other selected components on which $X'$ cannot be tight. This is a contradiction as there are at most $\ell$ such active components by Lemma~\ref{lemma:nice:boundxh:boundac}.
Concretely, we will use the following lemma about representative sets of vertex sets of size at most three regarding reachability in a directed graph (modulo deleting a small set of vertices). The notation of the lemma is adapted to the present application. The original result is for pairs of vertices in a directed graph (see~\cite[Lemma 2]{KratschW11_arxiv}) but extends straightforwardly to sets of fixed size $q$ and to sets of size at most $q$; a proof is provided in Section~\ref{section:proofofmatroidresult} for completeness. Note that the lemma is purely about reachability of small sets in a directed graph (like the \problem{Digraph Pair Cut} problem studied in~\cite{KratschW11_arxiv,KratschW12}) and we require the structural lemmas proved so far to negotiate between this and \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace.
\begin{lemma}\label{lemma:repsetofcriticalsets} Let $H=(V_H,E_H)$ be a directed graph, let $S_H\subseteq V_H$, let $\ell\in\mathbb{N}$, and let $\ensuremath{\mathcal{T}}\xspace$ be a family of nonempty vertex sets $T\subseteq V_H$ each of size at most three. In randomized polynomial time, with failure probability exponentially small in the input size, we can find a set $\ensuremath{\mathcal{T}}\xspace^*\subseteq\ensuremath{\mathcal{T}}\xspace$ of size $\mathcal{O}(\ell^3)$ such that the following holds for any set $X_H\subseteq V_H$ of size at most $\ell$ that is closest to $S_H$: if there is a set $T\in\ensuremath{\mathcal{T}}\xspace$ such that all vertices $v\in T$ are reachable from $S_H$ in $H-X_H$, then there is a corresponding set $T^*\in\ensuremath{\mathcal{T}}\xspace^*$ satisfying the same properties. \end{lemma}
Using the lemma we will be able to identify a small set $\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace$ of components of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ that contains for each dominant vertex cover $X$ of $G$ of size at most $k$ all active components with respect to $X$. Conversely, if there is no solution of size $k$, we will have retained enough components of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ to preserve this fact. Concretely, the set $\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace$ is computed as follows: \begin{enumerate}
\item Let $\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^0$ contain all components $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$ that have no vertex cover of size at most $\frac12(|C|+1)$. Clearly, these components are active for every vertex cover of $G$. We know from Lemma~\ref{lemma:nice:boundxh:boundac} that there are at most $\ell$ such components if the instance is \textbf{yes}\xspace. We can use the algorithm of Garg and Philip~\cite{GargP16} to test in polynomial time whether any $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$ has a vertex cover of size at most $k_C:=\frac12(|C|+1)$: We have parameter value
\[
k_C-(2LP(G[C])-MM(G[C]))=\frac12(|C|+1)-(|C|-\frac12(|C|-1))=0.
\]
We could of course also use an algorithm for \probname{Vertex Cover}\xspace parameterized above maximum matching size, where we would have parameter value $1$. If there are more than $\ell$ components $C$ with no vertex cover of size $\frac12(|C|+1)$ then we can safely reject the instance. Else, as indicated above, let $\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^0$ contain all these components and continue.
\item Let $i=1$. We will repeat the following steps for $i\in\{1,\ldots,\ell+1\}$.
\item Let $\ensuremath{\mathcal{T}}\xspace^i$ contain all nonempty sets $T\subseteq A$ of size at most three such that there is a component $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace\setminus(\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^0\cup\ldots\cup\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^{i-1})$ such that:\label{step:selectti}
\begin{enumerate}
\item There is a set $Z\subseteq N_G(T)\cap C$ of at most three neighbors of $T$ in $C$ such that no vertex cover of $G[C]$ of size $\frac12(|C|+1)$ contains $Z$. Note that $Z\neq\emptyset$ since $C\notin \ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^0$ implies that it has at least some vertex cover of size $\frac12(|C|+1)$.
\item For each $C$ and $Z\subseteq C$ of size at most three, existence of a vertex cover of $G[C]$ of size $k_C:=\frac12(|C|+1)$ containing $Z$ can be tested by the algorithm of Garg and Philip~\cite{GargP16} since the parameter value is constant. Concretely, run the algorithm on $G[C\setminus Z]$ and solution size $k_C-|Z|$ and observe that the parameter value is
\[
(k_C-|Z|)-(2LP(G[C\setminus Z])-MM(G[C\setminus Z])).
\]
Using that $LP(G[C\setminus Z])\geq LP(G[C])-|Z|$ and $MM(G[C\setminus Z])\leq MM(G[C])=\frac12(|C|-1)$ this value can be upper bounded by
\begin{align*}
& k_C-|Z|-2LP(G[C]) +2|Z| + MM(G[C])\\
={} & \frac12(|C|+1) - |Z| - |C| + 2 |Z| + \frac12(|C|-1)\\
={} & |Z|.
\end{align*}
Since $|Z|\leq 3$ the parameter value is at most three and the FPT-algorithm of Garg and Philip~\cite{GargP16} runs in polynomial time.
\end{enumerate}
Intuitively, the condition is that $C$ must always be active for vertex covers that contain no vertex of $T$; for the formal correctness proof that we give later, however, the above description is more convenient.
\item Apply Lemma~\ref{lemma:repsetofcriticalsets} to graph $H=H(G,A,B,D,M)$ on vertex set $V_H=A$, set $S_H=A_3\subseteq A$, integer $\ell$, and family $\ensuremath{\mathcal{T}}\xspace^i$ of nonempty subsets of $A$ of size at most three to compute a subset $\ensuremath{\mathcal{T}}\xspace^{i*}$ of $\ensuremath{\mathcal{T}}\xspace^i$ in randomized polynomial time. Its size $|\ensuremath{\mathcal{T}}\xspace^{i*}|$ is $\mathcal{O}(\ell^3)$.
\item Select a set $\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^i$ as follows: For each $T\in\ensuremath{\mathcal{T}}\xspace^{i*}$ add to $\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^i$ a component $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace\setminus(\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^0\cup\ldots\cup\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^{i-1})$ such that $C$ fulfills the condition for $T$ in Step~\ref{step:selectti}, i.e., such that: \label{step:selectci}
\begin{enumerate}
\item There is a set $Z\subseteq N_G(T)\cap C$ of at most three neighbors of $T$ in $C$ such that no vertex cover of $G[C]$ of size $\frac12(|C|+1)$ contains $Z$. (We know that $Z$ must be nonempty.)
\end{enumerate}
Clearly, the size $|\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^i|$ is $\mathcal{O}(\ell^3)$. Note that the same component $C$ can be chosen for multiple sets $T\in\ensuremath{\mathcal{T}}\xspace^{i*}$, but we only require an upper bound on $|\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^i|$.
\item If $i<\ell+1$ then increase $i$ by one and return to Step~\ref{step:selectti}. Else return the set
\[
\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace:=\bigcup_{i=0}^{\ell+1}\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^i.
\]
The size of $\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace$ is $\mathcal{O}(\ell^4)$ since it is the union of $\ell+2$ sets that are each of size $\mathcal{O}(\ell^3)$.\label{step:returnrelc} \end{enumerate}
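To summarize Steps~1--6 in one place, the following is a hedged, high-level sketch of the computation of $\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace$. The helper functions passed as arguments are hypothetical stand-ins: \texttt{has\_tight\_vc\_with} for the Garg--Philip test used in Steps~1 and~3, \texttt{neighbors\_in} for computing $N_G(T)\cap C$, and \texttt{rep\_family} for the representative-family computation of Lemma~\ref{lemma:repsetofcriticalsets}; components are assumed to be given as frozensets of vertices with orderable labels.
\begin{verbatim}
from itertools import combinations

def compute_C_rel(C3_hat, A, ell, has_tight_vc_with, neighbors_in, rep_family):
    # Step 1: components without any tight vertex cover.
    layers = [[C for C in C3_hat if not has_tight_vc_with(C, frozenset())]]
    if len(layers[0]) > ell:
        return None  # more than ell always-active components: reject
    for _ in range(ell + 1):  # Steps 2-6: iterations i = 1, ..., ell+1
        used = {C for layer in layers for C in layer}
        remaining = [C for C in C3_hat if C not in used]
        witnesses = {}  # T -> some remaining component C blocked by T
        for C in remaining:
            for q in (1, 2, 3):  # Step 3: all nonempty T of size <= 3
                for T in combinations(sorted(A), q):
                    Z_all = sorted(neighbors_in(frozenset(T), C))
                    if any(not has_tight_vc_with(C, frozenset(Z))
                           for r in (1, 2, 3) if r <= len(Z_all)
                           for Z in combinations(Z_all, r)):
                        witnesses.setdefault(frozenset(T), C)
        # Step 4: representative subfamily of size O(ell^3).
        T_star = rep_family(set(witnesses), ell)
        # Step 5: one blocked component per representative set.
        layers.append(list({witnesses[T] for T in T_star}))
    return [C for layer in layers for C in layer]
\end{verbatim}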
In particular, we will be interested in the components $C\in\ensuremath{\hat{\mathcal{C}}_3}\xspace$ that are not in $\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace$. We call these \emph{irrelevant components} and let $\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace:=\ensuremath{\hat{\mathcal{C}}_3}\xspace\setminus\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace$ denote the set of all irrelevant components. (Of course we still need to prove that they are true to their name.)
\begin{lemma}\label{lemma:removeirrelevantcomponents}
Let $G'$ be obtained by deleting from $G$ all vertices of irrelevant components, i.e., $G':=G-\bigcup_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace}C$, and let $k'=k-\sum_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace}\frac12(|C|+1)$, i.e., $k'$ is equal to $k$ minus the lower bounds for vertex covers of the irrelevant components. Then $G$ has a vertex cover of size at most $k$ if and only if $G'$ has a vertex cover of size at most $k'$. Moreover, $k-(2LP(G)-MM(G))=k'-(2LP(G')-MM(G'))$, i.e., the instances $(G,k,\ell)$ and $(G',k',\ell')$ of \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace have the same parameter value $\ell=\ell'$. \end{lemma}
\begin{proof}
Let us first discuss the easy direction: Assume that $G$ has a vertex cover $X$ of size at most $k$; we prove that $G'$ has a vertex cover of size at most $k'$. Let $X'$ denote the restriction of $X$ to $G'$, i.e., $X'=X\cap V(G')=X\setminus U$ where $U=\bigcup_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace} C$. Clearly, $X'$ is a vertex cover of $G'$. Concerning the size of $X'$ let us observe the following: For each component $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$ the set $X\cap C$ must be a vertex cover of $G[C]$ (this of course holds for any set of vertices in $G$). We know that each graph $G[C]$ for $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace\subseteq\ensuremath{\hat{\mathcal{C}}_3}\xspace$ is factor-critical and, hence, the size of $X\cap C$ is at least $\frac12(|C|+1)$. Summing over all $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$ we find that $X'$ contains at least $\sum_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace}\frac12(|C|+1)$ fewer vertices than $X$. This directly implies that $|X'|\leq k'$ and completes this part of the proof.
Now, assume that $G'$ has a vertex cover of size at most $k'$; let $V':=V(G')$ and $E':=E(G')$. This part requires most of the lemmas that we established in the previous sections. It is of particular importance that from the nice decomposition $(A,B,D,M)$ of $G$ we can derive a very similar nice decomposition of $G'$. For convenience let $\ensuremath{V_{\mathtt{irr}}}\xspace:=\bigcup_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace}C$. By Lemma~\ref{lemma:inheritance} we may repeatedly delete unmatched components, such as those in $\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace\subseteq \ensuremath{\hat{\mathcal{C}}_3}\xspace$, and always derive a nice decomposition of the resulting graph. Doing this for all components in $\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$ we end up with graph $G'$ and the nice decomposition $(A,B,D',M')$ where $M'$ is the restriction of $M$ to $V(G')=V\setminus \ensuremath{V_{\mathtt{irr}}}\xspace$ and $D'=D\setminus \ensuremath{V_{\mathtt{irr}}}\xspace$.
Let us now fix an arbitrary dominant vertex cover $X'$ of $G'$ with respect to $(A,B,D',M')$, i.e., $X'$ is of minimum size and contains the fewest vertices of $D'$ among minimum vertex covers of $G'$; clearly $|X'|\leq k'$. Our strategy will be to construct a vertex cover of $G$ of size at most $k$ by adding a vertex cover of size $\frac12(|C|+1)$ for each component $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$. The crux with this idea lies in the edges between components $C$ and the set $A$. We will need to show that we can cover edges between $C$ and $A\setminus X'$ by the selection of vertices in $C$ without spending more than $\frac12(|C|+1)$. Define $H':=H(G',A,B,D',M')$ according to Definition~\ref{definition:graphh} and define $\ensuremath{X_{\mathtt{op}}}\xspace':=\ensuremath{X_{\mathtt{op}}}\xspace(A_1,A_3,M',X')$ according to Definition~\ref{definition:setxop}; by Lemma~\ref{lemma:closest} the set $\ensuremath{X_{\mathtt{op}}}\xspace'$ is closest to $A_3$ in $H'$ and by Lemma~\ref{lemma:nice:boundxh:boundac} we have $|\ensuremath{X_{\mathtt{op}}}\xspace'|\leq \ell$. We claim that $H'$ is in fact identical with $H=H(G,A,B,D,M)$; let us see why this holds: Both graphs are on the same vertex set $A$. There is a directed edge $(u,v)$ in $H$ if there is a vertex $w\in D$ with $\{u,w\}\in E\setminus M$ and $\{w,v\}\in M$. Note that this implies $w\notin \ensuremath{V_{\mathtt{irr}}}\xspace$ as all components in $\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace\subseteq\ensuremath{\hat{\mathcal{C}}_3}\xspace$ are unmatched, whereas $\{w,v\}\in M$ and $w\in D$ and $v\in A$. Thus, $w$ exists also in $G'$ and $\{w,v\}\in M'$ since it is not an edge between vertices of a component in $\ensuremath{\hat{\mathcal{C}}_3}\xspace$. Similarly, $\{u,w\}\in E'\setminus M'$ since $M'\subseteq M$ and $E'$ contains all edges of $E$ that have no endpoint in $\ensuremath{V_{\mathtt{irr}}}\xspace$. Thus, $(u,v)$ is also an edge of $H'$. Conversely, if $(u,v)$ is a directed edge of $H'$ then there exists $w\in D'$ with $\{w,v\}\in M'$ and $\{u,w\}\in E'\setminus M'$. Clearly, $w\in D\supseteq D'$ and $\{w,v\}\in M\supseteq M'$. Since $M$ is a matching, it cannot contain both $\{u,w\}$ and $\{w,v\}$, hence $\{u,w\}\notin M$. Thus, using $E'\subseteq E$ we have $\{u,w\}\in E\setminus M$, implying that $(u,v)$ is also an edge of $H$. Thus, the two graphs $H$ and $H'$ are identical and, in particular, $\ensuremath{X_{\mathtt{op}}}\xspace'$ is also closest to $A_3$ in $H$.
Consider now the set $X'$ as a partial vertex cover of $G$. There are uncovered edges, i.e., with no endpoint in $X'$, inside components $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$ and between such components and vertices in $A\setminus X'$. Since the remaining budget of $k-k'$ is exactly equal to the total size of smallest vertex covers for the components in $\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$ we cannot add any vertices that are not in such a component (and not more than $\frac12(|C|+1)$ per component $C$). Thus, if $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$ and $A\setminus X'$ has a set $Z$ of neighbors in $C$, then the question is whether there is a vertex cover of $G[C]$ of size $\frac12(|C|+1)$ that includes $Z$. We will prove that this is always the case.
\begin{claim}\label{claim:key}
Let $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace\subseteq\ensuremath{\hat{\mathcal{C}}_3}\xspace$ and let $Z_C:=N_G(A\setminus X')\cap C$. There is a vertex cover $X_C$ of $G[C]$ with $Z_C\subseteq X_C$ of size at most $\frac12(|C|+1)$. \end{claim}
\begin{proof}
Assume for contradiction that there is no vertex cover of $G[C]$ that includes $Z_C$ and has size at most $\frac12(|C|+1)$. By Lemma~\ref{lemma:criticalsets:boundsize} there is a subset $Z\subseteq Z_C$ of size at most three such that no vertex cover of $G[C]$ of size at most $\frac12(|C|+1)$ contains $Z$: Let $Z$ be any minimal subset of $Z_C$ with this property; the lemma implies that $|Z|\leq 3$. (Note that $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace\subseteq \ensuremath{\hat{\mathcal{C}}_3}\xspace$ implies that $C$ has at least three vertices and that it is factor-critical as a component of $G[D]$.) Let $A_C$ be a minimal subset of $A\setminus X'$ such that its neighborhood in $C$ includes the set $Z$; since $Z$ has size at most three, the set $A_C$ also has size at most three. Since $A_C\cap X'=\emptyset$, by Lemma~\ref{lemma:notreachable}, each $v\in A_C$ is reachable from $A_3$ in $H'-\ensuremath{X_{\mathtt{op}}}\xspace'=H-\ensuremath{X_{\mathtt{op}}}\xspace'$.
We first prove that $C$ must have been considered in all $\ell+1$ iterations of computing $\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace$. If $Z=\emptyset$ then $C$ has no vertex cover of size $\frac12(|C|+1)$. This, however, would imply that $C\in\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^0\subseteq\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace$; a contradiction. For the remainder of the proof we have $Z\neq\emptyset$, i.e., $1\leq |Z|\leq 3$, and hence the set $A_C$ must be nonempty to ensure $Z\subseteq N_G(A_C)\cap C$ (and of size at most three). It follows that in each repetition of Step~\ref{step:selectti} the sets $T=A_C\subseteq A$, component $C$, and set $Z$ were considered. (Note that $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace=\ensuremath{\hat{\mathcal{C}}_3}\xspace\setminus \ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace=\ensuremath{\hat{\mathcal{C}}_3}\xspace\setminus (\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^0\cup\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^1\cup\ldots\cup\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^{\ell+1})$.) We have $T=A_C\subseteq A$ nonempty and of size at most three, $Z\subseteq N_G(T)\cap C$ of size at most three, and there is no vertex cover of $G[C]$ of size $\frac12(|C|+1)$ that contains $Z$. Thus, the set $A_C$ is contained in all sets $\ensuremath{\mathcal{T}}\xspace^1,\ldots,\ensuremath{\mathcal{T}}\xspace^{\ell+1}$.
Now, for each $i\in\{1,\ldots,\ell+1\}$, we need to consider why $C$ was not added to $\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^i$. Let us consider two cases, namely $A_C\in\ensuremath{\mathcal{T}}\xspace^{i*}$ and $A_C\notin\ensuremath{\mathcal{T}}\xspace^{i*}$: If $A_C\in\ensuremath{\mathcal{T}}\xspace^{i*}$ then in Step~\ref{step:selectci} we have selected a component $C^i\neq C$, with $C^i\in\ensuremath{\hat{\mathcal{C}}_3}\xspace\setminus(\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^0\cup\ldots\cup\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^{i-1})$, such that there is a nonempty set $Z^i\subseteq N_G(A_C)\cap C^i$ such that $G[C^i]$ has no vertex cover of size $\frac12(|C^i|+1)$ that contains $Z^i$. For later reference let us remember the triple $(C^i,Z^i,A^i)$ with $A^i:=A_C$. We know that $C^i\in\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^i\subseteq\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace$, we know that there is no vertex cover of $G[C^i]$ of size $\frac12(|C^i|+1)$ that contains $Z^i$, and $Z^i\subseteq N_G(A^i)\cap C^i$. Crucially, all vertices $v\in A^i$ are reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace'$. (There will be a second source of such triples in the case that $A_C\notin\ensuremath{\mathcal{T}}\xspace^{i*}$, but with $A^i\neq A_C$ and with slightly more work for proving these properties of the triples in question.)
In the second case we have $A_C\notin\ensuremath{\mathcal{T}}\xspace^{i*}$. By Lemma~\ref{lemma:repsetofcriticalsets}, since $A_C\in\ensuremath{\mathcal{T}}\xspace^i$, all vertices of $A_C$ are reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace'$, and $\ensuremath{X_{\mathtt{op}}}\xspace'\subseteq A$ is closest to $A_3$ and of size at most $\ell$, it follows that $\ensuremath{\mathcal{T}}\xspace^{i*}$ contains a set $A^i$ such that all vertices of $A^i$ are reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace'$. Thus, in Step~\ref{step:selectci} we have selected a component $C^i$ such that there is a set $Z^i\subseteq N_G(A^i)\cap C^i$ such that $G[C^i]$ has no vertex cover of size $\frac12(|C^i|+1)$ that contains $Z^i$. We remember the triple $(C^i,Z^i,A^i)$.
We find that, independently of whether $A_C\in\ensuremath{\mathcal{T}}\xspace^{i*}$ in iteration $i\in\{1,\ldots,\ell+1\}$, we get a triple $(C^i,Z^i,A^i)$ such that $C^i\in\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^i$, with $Z^i\subseteq N_G(A^i)\cap C^i$, and such that all vertices $v\in A^i$ are reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace'$. We observe that the components $C^i$ are pairwise distinct: Say $1\leq i<j\leq\ell+1$. Then $C^i\in\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^i$ and $C^j\in \ensuremath{\hat{\mathcal{C}}_3}\xspace\setminus(\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^0\cup\ldots\cup\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^{j-1})$, implying that $C^j\notin \ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace^i$ as $i\leq j-1$, and hence that $C^j\neq C^i$. We use these components to prove that $X'$, as a vertex cover of $G'$, has at least $\ell+1$ active components, namely $C^1,\ldots,C^{\ell+1}$, which will be seen to contradict that it has size at most $k'$.
Let $i\in \{1,\ldots,\ell+1\}$. We have that all vertices of $A^i$ are reachable from $A_3$ in $H-\ensuremath{X_{\mathtt{op}}}\xspace'$; the same is true in $H'-\ensuremath{X_{\mathtt{op}}}\xspace'$ since $H=H'$. From Lemma~\ref{lemma:reachable} applied to graph $G'$, nice decomposition $(A,B,D',M')$ of $G'$, and vertex cover $X'$ we get that $X'$ contains no vertex of $A^i$. Since $C^i\in\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace$ we know that $C^i$ is also an unmatched non-singleton component of $G'[D']$ with respect to matching $M'$ (Lemma~\ref{lemma:inheritance}). As $X'\cap A^i=\emptyset$ it follows directly that $X'$ contains $N_{G'}(A^i)\cap C^i=N_G(A^i)\cap C^i\supseteq Z^i$ (regarding $N_{G'}(A^i)\cap C^i=N_G(A^i)\cap C^i$ note that $G'$ differs from $G$ only by removing vertices of some other components of $\ensuremath{\hat{\mathcal{C}}_3}\xspace$, none of which are in these sets). Since $G'[C^i]=G[C^i]$ has no vertex cover of size $\frac12(|C^i|+1)$ that contains $Z^i$, it follows that $X'$ contains more than $\frac12(|C^i|+1)$ vertices of $C^i$, making $C^i$ an active component with respect to $X'$.
We proved that the vertex cover $X'$ of $G'$ has at least $\ell+1$ active components. Thus, by Lemma~\ref{lemma:nice:boundxh:boundac} its size is at least $|M'|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace'|+\ell+1$ where $\ensuremath{\hat{\mathcal{C}}_3}\xspace'$ is the set of unmatched non-singleton components of $G'[D']$ with respect to $M'$. On the other hand, we have $|X'|\leq k'= k-\sum_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace}\frac12(|C|+1)$ and $\ell=k-(2LP(G)-MM(G))=k-(|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|)$. It remains to compare $|M'|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace'|$ with $|M|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace|$. \begin{enumerate}
\item Let $\ensuremath{M_{\mathtt{irr}}}\xspace\subseteq M$ denote the set of edges in $M$ whose endpoints are in $\ensuremath{V_{\mathtt{irr}}}\xspace$; recall that $\ensuremath{V_{\mathtt{irr}}}\xspace$ denotes the set of all vertices of (irrelevant) components in $\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$.
\item Thus, when creating $G'$, we are deleting $|\ensuremath{V_{\mathtt{irr}}}\xspace|$ vertices and $|\ensuremath{M_{\mathtt{irr}}}\xspace|$ matching edges (recalling that there are no matching edges with exactly one endpoint in $\ensuremath{V_{\mathtt{irr}}}\xspace$). Each component $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$ contributes $|C|$ vertices to $\ensuremath{V_{\mathtt{irr}}}\xspace$ and $\frac12(|C|-1)$ edges to $\ensuremath{M_{\mathtt{irr}}}\xspace$. Thus, $|\ensuremath{M_{\mathtt{irr}}}\xspace|=\sum_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace}\frac12(|C|-1)=\sum_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace}\frac12(|C|+1)-|\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace|$.
\item Using this, we get
\begin{align*}
k'&= k-\sum_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace}\frac12(|C|+1)\\
&= |M| + |\ensuremath{\hat{\mathcal{C}}_3}\xspace| + \ell - (|\ensuremath{M_{\mathtt{irr}}}\xspace| + |\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace|)\\
&= |M'| + |\ensuremath{\hat{\mathcal{C}}_3}\xspace'| + \ell.
\end{align*} \end{enumerate}
Since $|X'|\geq |M'|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace'|+\ell+1=k'+1$, this contradicts the assumption that $|X'|\leq k'$. Thus, the initial assumption in the proof of the claim must be wrong, and there does exist a vertex cover $X_C$ of $G[C]$ of size at most $\frac12(|C|+1)$ that contains $Z_C$. \end{proof}
Using the claim we can now easily complete $X'$ to a vertex cover $X$ of $G$ of size at most $k$: As observed before, we need to add vertices so as to cover all edges inside components $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$ and edges between such components and $A\setminus X'$. Begin with $X:=X'$. Consider any component $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$ and let $Z_C:=N_G(A\setminus X')\cap C$. By Claim~\ref{claim:key}, we know that there exists a vertex cover $X_C$ of $G[C]$ of size $\frac12(|C|+1)$ that contains $Z_C$. Clearly, by adding $X_C$ to $X$ we cover all edges of $C$ and all edges between $C$ and neighbors of $C$ that were not covered by $X'$. (The endpoints of these edges in $C$ exactly constitute the set $Z_C\subseteq X_C$.) By performing this step for all components $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$ we add exactly $\sum_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace}\frac12(|C|+1)$ vertices, implying that \[
|X|= |X'|+\sum_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace}\frac12(|C|+1) \leq k' + \sum_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace}\frac12(|C|+1). \] Since $\ensuremath{V_{\mathtt{irr}}}\xspace=\bigcup_{C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace} C$ are the only vertices present in $G$ but not in $G'=G-\ensuremath{V_{\mathtt{irr}}}\xspace$ it follows that all edges with no endpoint in $\ensuremath{V_{\mathtt{irr}}}\xspace$ are already covered by $X'$. Thus, $X$ is indeed a vertex cover of $G$ of size at most $k$, as claimed.
It remains to prove that $k-(2LP(G)-MM(G))=k'-(2LP(G')-MM(G'))$. We already proved that $k'=|M'|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace'|+\ell$. By Lemma~\ref{lemma:nice:vclb}, since $(A,B,D',M')$ is a nice decomposition of $G'$, we have $|M'|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace'|=2LP(G')-MM(G')$. This directly implies that $k'-(2LP(G')-MM(G'))=k'-(|M'|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace'|)=\ell=k-(2LP(G)-MM(G))$. \end{proof}
We can now complete our kernelization. According to Lemma~\ref{lemma:removeirrelevantcomponents} it is safe to delete all irrelevant components (and update $k$ accordingly). We obtain a graph $G'$ and integer $k'$ such that the following holds: \begin{enumerate}
\item $G'$ has a vertex cover of size at most $k'$ if and only if $G$ has a vertex cover of size at most $k$, i.e., the instances $(G,k)$ and $(G',k')$ for \probname{Vertex Cover}\xspace are equivalent.
\item As a part of the proof of Lemma~\ref{lemma:removeirrelevantcomponents} we showed that
\[
k'=|M'|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace'|+\ell
\]
where $\ensuremath{\hat{\mathcal{C}}_3}\xspace'$ is the set of unmatched non-singleton components of $G'[D']$ with respect to $M'$.
\item From Lemma~\ref{lemma:inheritance} we know that $\ensuremath{\hat{\mathcal{C}}_3}\xspace'$ is equal to the set $\ensuremath{\hat{\mathcal{C}}_3}\xspace$ (of unmatched non-singleton components of $G[D]$ with respect to $M$) minus the components $C\in\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace$ that were removed to obtain $G'$. In other words, $\ensuremath{\hat{\mathcal{C}}_3}\xspace'=\ensuremath{\hat{\mathcal{C}}_3}\xspace\setminus\ensuremath{\mathcal{C}_{\mathtt{irr}}}\xspace=\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace$.
\item We know from Step~\ref{step:returnrelc} that $|\ensuremath{\mathcal{C}_{\mathtt{rel}}}\xspace|=\mathcal{O}(\ell^4)$. Hence, $|\ensuremath{\hat{\mathcal{C}}_3}\xspace'|=\mathcal{O}(\ell^4)$.
\item Let us consider $p:=k'-|M'|$, which is the parameter value of $(G',k')$ when considered as an instance of \probname{Vertex Cover}\xspace parameterized above the size of a maximum matching. Clearly,
\[
p=k'-|M'|=|M'|+|\ensuremath{\hat{\mathcal{C}}_3}\xspace'|+\ell-|M'|=\ell+\mathcal{O}(\ell^4)=\mathcal{O}(\ell^4).
\]
\item We can now apply any polynomial kernelization for \probname{Vertex Cover}$(k-\mbox{\textsc{mm}})$\xspace to get a polynomial kernelization for \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace. On input of $(G',k',p)$ it returns an equivalent instance $(G^*,k^*,p^*)$ of size $\mathcal{O}(p^c)$ for some constant $c$. We may assume that $k^*=\mathcal{O}(p^c)$ since else it would exceed the number of vertices in $G^*$ and we may as well return a \textbf{yes}\xspace-instance of constant size.
Let $\ell^*=k^*-(2LP(G^*)-MM(G^*))$, i.e., the parameter value of the instance $(G^*,k^*,\ell^*)$ of \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace. Clearly, $\ell^*\leq k^*=\mathcal{O}(p^c)$. Thus, $(G^*,k^*,\ell^*)$ has size and parameter value $\mathcal{O}(p^c)$. \end{enumerate}
Kratsch and Wahlstr\"om~\cite{KratschW12} give a randomized polynomial kernelization for \probname{Vertex Cover}$(k-\mbox{\textsc{mm}})$\xspace. The size is not analyzed since it relies on equivalence of \probname{Almost 2-SAT($k$)}\xspace and \probname{Vertex Cover}$(k-\mbox{\textsc{mm}})$\xspace under polynomial parameter transformations~\cite{RamanRS11}; the reductions preserve the parameter value but may increase the size polynomially. The size obtained for \probname{Almost 2-SAT($k$)}\xspace is $\mathcal{O}(p^{12})$, following from a kernelization to $\mathcal{O}(p^6)$ variables. Even without a size increase by the transformation back to \probname{Vertex Cover}$(k-\mbox{\textsc{mm}})$\xspace, which seems doable, we only get a size of $\mathcal{O}(p^{12})=\mathcal{O}(\ell^{48})$. We note, however, that the kernelization for \probname{Almost 2-SAT($k$)}\xspace also relies, amongst others, on computing a representative set of reachable tuples in a directed graph. It is likely that a direct approach for kernelizing \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace could make do with only a single iteration of this strategy.
\begin{theorem}\label{theorem:main} \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace has a randomized polynomial kernelization with error probability exponentially small in the input size. \end{theorem}
\section{Proof of Lemma~\ref{lemma:repsetofcriticalsets}}\label{section:proofofmatroidresult}
In this section we provide a proof of Lemma~\ref{lemma:repsetofcriticalsets}, which is a generalization of \cite[Lemma 2]{KratschW11_arxiv}; in that work, it is already pointed out that a generalization to $q$-tuples is possible by the same approach. Accordingly, the proof in this section is provided only to make the present work self-contained.
We need to begin with some basics on matroids; for a detailed introduction to matroids see Oxley~\cite{OxleyBook}: A \emph{matroid} is a pair $M=(U,\ensuremath{\mathcal{I}}\xspace)$ where $U$ is the \emph{ground set} and $\ensuremath{\mathcal{I}}\xspace\subseteq 2^U$ is a family of \emph{independent sets} such that \begin{enumerate}
\item $\emptyset\in\ensuremath{\mathcal{I}}\xspace$,
\item if $I\subseteq I'$ and $I'\in\ensuremath{\mathcal{I}}\xspace$ then $I\in\ensuremath{\mathcal{I}}\xspace$, and
\item if $I,I'\in\ensuremath{\mathcal{I}}\xspace$ with $|I|<|I'|$ then there exists $u\in I'\setminus I$ with $I\cup\{u\}\in\ensuremath{\mathcal{I}}\xspace$; this is called the augmentation axiom. \end{enumerate} A set $I\in\ensuremath{\mathcal{I}}\xspace$ is \emph{independent}; all other subsets of $U$ are \emph{dependent}. The maximal independent sets are called \emph{bases}; by the augmentation axiom they all have the same size. For $X\subseteq U$, the \emph{rank $r(X)$ of $X$} is the cardinality of the largest independent set $I\subseteq X$. The \emph{rank of $M$} is $r(M):=r(U)$.
Let $A$ be a matrix over a field $\ensuremath{\mathbb{F}}\xspace$, let $U$ be the set of columns of $A$, and let $\ensuremath{\mathcal{I}}\xspace$ contain those subsets of $U$ that are linearly independent over $\ensuremath{\mathbb{F}}\xspace$. Then $(U,\ensuremath{\mathcal{I}}\xspace)$ defines a matroid $M$ and we say that $A$ \emph{represents} $M$. A matroid $M$ is \emph{representable (over \ensuremath{\mathbb{F}}\xspace)} if there is a matrix $A$ (over \ensuremath{\mathbb{F}}\xspace) that represents it. A matroid representable over at least one field is called \emph{linear}.
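For illustration (this example is not needed later), consider the uniform matroid $U_{2,n}$: its ground set has $n$ elements and a set is independent exactly if it has at most two elements. It is linear, since over any field $\ensuremath{\mathbb{F}}\xspace$ containing $n$ distinct elements $x_1,\ldots,x_n$ it is represented by the $2\times n$ matrix whose $i$-th column is $(1,x_i)^\top$; any two such columns are linearly independent because $x_i\neq x_j$.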
Let $D=(V,E)$ be a directed graph and $S,T\subseteq V$. The set $T$ is \emph{linked to $S$} if there exist $|T|$ vertex-disjoint paths from $S$ to $T$; paths of length zero are permitted. For a directed graph $D=(V,E)$ and $S,T\subseteq V$ the pair $M=(T,\ensuremath{\mathcal{I}}\xspace)$ is a matroid, where $\ensuremath{\mathcal{I}}\xspace$ contains those subsets $T'\subseteq T$ that are linked to $S$ \cite{Perfect1968}. Matroids that can be defined in this way are called \emph{gammoids}; the special case with $T=V$ is called a \emph{strict gammoid}. Marx~\cite{Marx09} gave an efficient randomized algorithm for finding a representation of a strict gammoid given the underlying graph; the error probability can be made exponentially small in the runtime.
\begin{theorem}[\cite{Perfect1968,Marx09}]\label{theorem:gammoidrepresentation} Let $D=(V,E)$ be a directed graph and let $S\subseteq V$. The subsets $T\subseteq V$ that are linked to $S$ form the independent sets of a matroid over the ground set $V$. Furthermore, a representation of this matroid can be obtained in randomized polynomial time with one-sided error. \end{theorem}
As in previous work~\cite{KratschW12} we use the notion of \emph{representative sets}. The definition was introduced by Marx~\cite{Marx09} inspired by earlier work of Lov\'asz~\cite{Lovasz1977}.
\begin{definition}[\cite{Marx09}]\label{definition:representativesets} Let $M=(U,\ensuremath{\mathcal{I}}\xspace)$ be a matroid and let $\ensuremath{\mathcal{Y}}\xspace$ be a family of subsets of $U$. A subset $\ensuremath{\mathcal{Y}}\xspace^*\subseteq\ensuremath{\mathcal{Y}}\xspace$ is \emph{$r$-representative} for $\ensuremath{\mathcal{Y}}\xspace$ if the following holds: For every $X\subseteq U$ of size at most $r$, if there is a set $Y\in\ensuremath{\mathcal{Y}}\xspace$ such that $X\cap Y=\emptyset$ and $X\cup Y\in\ensuremath{\mathcal{I}}\xspace$ then there is a set $Y^*\in\ensuremath{\mathcal{Y}}\xspace^*$ such that $X\cap Y^*=\emptyset$ and $X\cup Y^*\in\ensuremath{\mathcal{I}}\xspace$. \end{definition}
Note that in the definition we may as well require that $\ensuremath{\mathcal{Y}}\xspace$ is a family of independent sets of $M$; independence of $X\cup Y$ requires independence of $Y$.
Marx~\cite{Marx09} proved an upper bound on the required size of representative subsets of a family $\ensuremath{\mathcal{Y}}\xspace$ in terms of the rank of the underlying matroid and the size of the largest set in $\ensuremath{\mathcal{Y}}\xspace$. The upper bound proof is similar to \cite[Theorem 4.8]{Lovasz1977} by Lov\'asz.
\begin{lemma}[\cite{Marx09}]\label{lemma:representativeset}
Let $M$ be a linear matroid of rank $r+s$ and let $\ensuremath{\mathcal{Y}}\xspace=\{Y_1,\ldots,Y_m\}$ be a collection of independent sets, each of size $s$. If $|\ensuremath{\mathcal{Y}}\xspace|>\binom{r+s}{s}$ then there is a set $Y\in\ensuremath{\mathcal{Y}}\xspace$ such that $\ensuremath{\mathcal{Y}}\xspace\setminus \{Y\}$ is $r$-representative for $\ensuremath{\mathcal{Y}}\xspace$. Furthermore, given a representation $A$ of $M$, we can find such a set $Y$ in $f(r,s)\cdot(||A||m)^{\mathcal{O}(1)}$ time. \end{lemma}
The factor of $f(r,s)$ in the runtime of Lemma~\ref{lemma:representativeset} is due to performing linear algebra operations on vectors of dimension $\binom{r+s}{s}$. Since our application of the lemma has $s=3$ and $r$ bounded by the number of vertices in the underlying graph, this factor of the runtime is polynomial in the input size. We also remark that we will tacitly use the lemma for directly computing an $r$-representative subset $\ensuremath{\mathcal{Y}}\xspace^*\subseteq\ensuremath{\mathcal{Y}}\xspace$ of size at most $\binom{r+s}{s}$ since the lemma can clearly be iterated to achieve this. We note that faster algorithms for computing representative sets were given by Fomin et al.~\cite{FominLS14}, which lead to significantly better runtimes, in particular for the case of uniform matroids, when $s$ is not constant.
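As a concrete illustration of the parameters used later in the proof of Lemma~\ref{lemma:repsetofcriticalsets}: for $s=3$ and $r=3\ell$, the iterated application of Lemma~\ref{lemma:representativeset} yields an $r$-representative subset of size at most
\[
\binom{r+s}{s}=\binom{3\ell+3}{3}=\frac{(3\ell+3)(3\ell+2)(3\ell+1)}{6}=\mathcal{O}(\ell^3).
\]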
Now we are ready to prove Lemma~\ref{lemma:repsetofcriticalsets}. The proof follows the strategy used for \cite[Lemma 2]{KratschW11_arxiv}. For convenience, let us recall the lemma statement.
\begin{lemma}[recalling Lemma~\ref{lemma:repsetofcriticalsets}] Let $H=(V_H,E_H)$ be a directed graph, let $S_H\subseteq V_H$, let $\ell\in\mathbb{N}$, and let $\ensuremath{\mathcal{T}}\xspace$ be a family of nonempty vertex sets $T\subseteq V_H$ each of size at most three. In randomized polynomial time, with failure probability exponentially small in the input size, we can find a set $\ensuremath{\mathcal{T}}\xspace^*\subseteq\ensuremath{\mathcal{T}}\xspace$ of size $\mathcal{O}(\ell^3)$ such that for any set $X_H\subseteq V_H$ of size at most $\ell$ that is closest to $S_H$ the following holds: if there is a set $T\in\ensuremath{\mathcal{T}}\xspace$ such that all vertices $v\in T$ are reachable from $S_H$ in $H-X_H$, then there is a corresponding set $T^*\in\ensuremath{\mathcal{T}}\xspace^*$ satisfying the same properties. \end{lemma}
\begin{proof} We begin with constructing a directed graph $D$ and vertex set $S$: \begin{enumerate}
\item Create a graph $\overline{H}$ from $H$ by adding $\ell+1$ new vertices $s_1,\ldots,s_{\ell+1}$ and adding all edges $(s_i,s)$ for $i\in\{1,\ldots,\ell+1\}$ and $s\in S_H$. Define $\overline{S}:=\{s_1,\ldots,s_{\ell+1}\}$ and note that $V(\overline{H})=V_H\cup \overline{S}$.
\item Let $D$ consist of three vertex-disjoint copies of $\overline{H}$. The vertex set $V^j$ of copy $j$ is $V^j=\{v^j\mid v\in V(\overline{H})\}$; let $S^j:=\{s_i^j\mid s_i \in \overline{S}\}\subseteq V^j$.
\item Let $S:=S^1\cup S^2\cup S^3$. Note that $|S|=3(\ell+1)$. \end{enumerate} Let $M$ be the strict gammoid defined by graph $D$ and source set $S$. Compute in randomized polynomial time a matrix $A$ that represents $M$ using Theorem~\ref{theorem:gammoidrepresentation}; it suffices to prove that we arrive at the claimed set $\ensuremath{\mathcal{T}}\xspace^*$ if $A$ does indeed represent $M$, i.e., if no error occurred.
We now define a family $\ensuremath{\mathcal{Y}}\xspace$ of subsets of $V(D)$, each of size three; for convenience, let $<$ be an arbitrary linear ordering of the vertex set $V_H$ of $H$: \begin{enumerate}
\item For $\{u,v,w\}\in\ensuremath{\mathcal{T}}\xspace$ with $u<v<w$ let $Y(\{u,v,w\}):=\{u^1,v^2,w^3\}$.
\item For $\{u,v\}\in\ensuremath{\mathcal{T}}\xspace$ with $u<v$ let $Y(\{u,v\}):=\{u^1,v^2,v^3\}$.
\item For $\{u\}\in\ensuremath{\mathcal{T}}\xspace$ let $Y(\{u\}):=\{u^1,u^2,u^3\}$. \end{enumerate} Let us remark that the particular assignment of vertices in $T\in\ensuremath{\mathcal{T}}\xspace$ to the three disjoint copies of $\overline{H}$ is immaterial so long as copies of all vertices are present. The following claim relates reachability of vertices in $T\in\ensuremath{\mathcal{T}}\xspace$ in $H-X_H$ to independence of $Y(T)$ in $M$.
\begin{claim}\label{claim:reachability:independence} Let $X_H\subseteq V_H$ be a set of at most $\ell$ vertices that is closest to $S_H$ in $H$, and let $T\in\ensuremath{\mathcal{T}}\xspace$. The vertices in $T$ are all reachable from $S_H$ in $H-X_H$ if and only if $Y(T)\cup I$ is independent in $M$ and $Y(T)\cap I=\emptyset$, where $I:=\{x^1,x^2,x^3\mid x\in X_H\}$. \end{claim}
\begin{proof}
Assume first that each vertex of $T$ is reachable from $S_H$ in $H-X_H$. Observe that this requires $T\cap X_H=\emptyset$. By Proposition~\ref{proposition:closest}, since $X_H$ is closest to $S_H$, we have that there exist $|X_H|+1$ vertex-disjoint paths from $S_H$ to $X_H\cup\{v\}$ for each vertex $v\in T$; in other words, $X_H\cup\{v\}$ is linked to $S_H$ in $H$. Since $|X_H\cup\{v\}|\leq \ell+1$, it follows directly that $X_H\cup\{v\}$ is linked to $\overline{S}$ in $\overline{H}$. Thus, for $v^j\in Y(T)$ with $j\in\{1,2,3\}$, it follows that $I^j\cup\{v^j\}$ is linked to $S^j$ in $D$, where $I^j:=\{x^j \mid x\in X_H\}$. Since the three copies of $\overline{H}$ in $D$ are vertex-disjoint, we conclude that $Y(T)\cup I^1\cup I^2\cup I^3=Y(T)\cup I$ is linked to $S=S^1\cup S^2\cup S^3$ in $D$. Thus, $Y(T)\cup I$ is independent in $M$, as claimed. To see that $Y(T)\cap I=\emptyset$ note that $v^j\in Y(T)\cap I$ would imply $v\in T$ and $v\in X_H$; a contradiction to $T\cap X_H=\emptyset$.
For the converse, assume that $Y(T)\cup I$ is independent in $M$ and that $Y(T)\cap I=\emptyset$. Let $v\in T$ and let $j\in\{1,2,3\}$ such that $v^j\in Y(T)$. Observe that $Y(T)\cap I=\emptyset$ implies $v\notin X_H$: Indeed, if $v\in X_H$ then $v^j\in Y(T)\cap I$; a contradiction. Now, independence of $Y(T)\cup I$ implies that $Y(T)\cup I$ is linked to $S$ in $D$. It follows, by vertex-disjointness of the three copies of $\overline{H}$ in $D$, that $\{v^j\}\cup I^j$ is linked to $S^j$ using only vertices $u^j$ with $u\in V(\overline{H})$. This implies that $\{v\}\cup X_H$ is linked to $\overline{S}$ in $\overline{H}$. Observe now that any path from $\overline{S}$ to $\{v\}\cup X_H$ must contain as its second vertex a vertex of $S_H$; here it is convenient that $X_H\subseteq V_H$ and $\overline{S}\cap V_H=\emptyset$, causing all paths to have length at least one and at least two vertices. Thus, we conclude that $\{v\}\cup X_H$ is linked to $S_H$ in $\overline{H}$, and hence also in $H$ since vertices of $\overline{S}$ cannot be internal vertices of paths (as they have only outgoing edges). Clearly, in a collection of $|X_H|+1$ paths from $S_H$ to $X_H\cup \{v\}$ the path from $S_H$ to $v$ cannot contain any vertex of $X_H$ as they are endpoints of the other paths. Thus, there exists a path from $S_H$ to $v$ that avoids $X_H$, implying that $v$ is reachable from $S_H$ in $H-X_H$. Since $v$ was chosen arbitrarily from $T$, the claim follows. \end{proof}
Now, use Lemma~\ref{lemma:representativeset} on the gammoid $M$ defined by graph $D$ and source set $S$, represented by the matrix $A$. The rank of $M$ is obviously exactly $|S|=3\ell+3$ since no set larger than $S$ can be linked to $S$ and $S$ itself is an independent set (as it is linked to itself). For the lemma choose $r=|S|-3=3\ell$ and $s=3$ and note that all sets in $\ensuremath{\mathcal{Y}}\xspace$ have size exactly $s=3$ as required. We obtain a set $\ensuremath{\mathcal{Y}}\xspace^*$ of size at most $\binom{r+s}{s}=\mathcal{O}(|S|^3)=\mathcal{O}(\ell^3)$ that $r$-represents $\ensuremath{\mathcal{Y}}\xspace$. Define a set $\ensuremath{\mathcal{T}}\xspace^*\subseteq\ensuremath{\mathcal{T}}\xspace$ by letting $\ensuremath{\mathcal{T}}\xspace^*$ contain those sets $T\in\ensuremath{\mathcal{T}}\xspace$ with $Y(T)\in\ensuremath{\mathcal{Y}}\xspace^*$. The size of $\ensuremath{\mathcal{T}}\xspace^*$ is equal to $|\ensuremath{\mathcal{Y}}\xspace^*|=\mathcal{O}(\ell^3)$ since each $Y\in\ensuremath{\mathcal{Y}}\xspace^*$ has exactly one $T\in\ensuremath{\mathcal{T}}\xspace$ with $Y=Y(T)$. (To see this, note that dropping the superscripts in $Y$ yields exactly the members of the corresponding set $T$; some may be repeated.)
\begin{claim}\label{claim:tstar} For any set $X_H\subseteq V_H$ of size at most $\ell$ that is closest to $S_H$ the following holds: if there is a set $T\in\ensuremath{\mathcal{T}}\xspace$ such that all vertices $v\in T$ are reachable from $S_H$ in $H-X_H$, then there is a corresponding set $T^*\in\ensuremath{\mathcal{T}}\xspace^*$ satisfying the same properties. \end{claim}
\begin{proof}
Let $T\in\ensuremath{\mathcal{T}}\xspace$ such that all vertices $v\in T$ are reachable from $S_H$ in $H-X_H$. By Claim~\ref{claim:reachability:independence} the set $Y(T)\cup I$ is independent in $M$ and $Y(T)\cap I=\emptyset$, where $I=\{x^1,x^2,x^3\mid x\in X_H\}$. Note that $Y(T)\in\ensuremath{\mathcal{Y}}\xspace$ and that $|I|=3|X_H|\leq 3\ell=r$. Thus, by Lemma~\ref{lemma:representativeset} there must be a set $Y^*\in\ensuremath{\mathcal{Y}}\xspace^*$ such that $Y^*\cap I=\emptyset$ and $Y^*\cup I$ is an independent set of $M$. Let $T^*\in\ensuremath{\mathcal{T}}\xspace$ with $Y^*=Y(T^*)$; such a set $T^*$ exists by definition of $\ensuremath{\mathcal{Y}}\xspace$ and, as discussed above, it is uniquely defined; by the definition of $\ensuremath{\mathcal{T}}\xspace^*$ it satisfies $T^*\in\ensuremath{\mathcal{T}}\xspace^*$. By Claim~\ref{claim:reachability:independence} it follows that all vertices of $T^*$ are reachable from $S_H$ in $H-X_H$. This completes the proof of Claim~\ref{claim:tstar}. \end{proof}
We recall that for our case of $s=3$ the computation of $\ensuremath{\mathcal{Y}}\xspace^*$ can be seen to take time polynomial in the input size. The derived set $\ensuremath{\mathcal{T}}\xspace^*$ fulfills the lemma statement unless the gammoid representation computed by Theorem~\ref{theorem:gammoidrepresentation} is erroneous, which has exponentially small chance of occurring. Note that boosting the success chance of Theorem~\ref{theorem:gammoidrepresentation} works by increasing the range of the random integers used therein (respectively, the field size): An additional factor of $2^p$ in the range of integers decreases the error probability by a factor of $2^{-p}$, while increasing the encoding size of the integers only by $p$ bits. Thus, by only a polynomial increase in the running time, we can get exponentially small error. This completes the proof. \end{proof}
\section{Conclusion}\label{section:conclusion}
We have presented a randomized polynomial kernelization for \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace by giving a (randomized) polynomial parameter transformation to \probname{Vertex Cover}$(k-\mbox{\textsc{mm}})$\xspace. This improves upon the smallest parameter, namely $k-LP(G)$, for which such a result was known~\cite{KratschW12}. The kernelization for \probname{Vertex Cover}$(k-\mbox{\textsc{mm}})$\xspace \cite{KratschW12} involves reductions to and from \probname{Almost 2-SAT($k$)}\xspace, which can be done without affecting the parameter value (cf.~\cite{RamanRS11}). We have not attempted to optimize the total size. Given an instance $(G,k,\ell)$ for \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace we get an equivalent instance of \probname{Almost 2-SAT($k$)}\xspace with $\mathcal{O}(\ell^{24})$ variables and size $\mathcal{O}(\ell^{48})$, which still needs to be reduced to a \probname{Vertex Cover}\xspace instance.
It seems likely that the kernelization can be improved if one avoids the blackbox use of the kernelization for \probname{Vertex Cover}$(k-\mbox{\textsc{mm}})$\xspace and the detour via \probname{Almost 2-SAT($k$)}\xspace. In particular, the underlying kernelization for \probname{Almost 2-SAT($k$)}\xspace applies, in part, the same representative set machinery to reduce the number of a certain type of clauses. Conceivably the two applications can be merged, thus avoiding the double blow-up in size. As a caveat, it appears to be likely that this would require a much more obscure translation into a directed separation problem. Moreover, the kernelization for \probname{Almost 2-SAT($k$)}\xspace requires an approximate solution, and it is likely that the same would be true for this approach. It would of course also be interesting whether a deterministic polynomial kernelization is possible, but this is, e.g., already not known for \probname{Almost 2-SAT($k$)}\xspace and \probname{Vertex Cover}$(k-\mbox{\textsc{mm}})$\xspace.
We find the appearance of a notion of critical sets of size at most three and the derived separation problem in the auxiliary directed graph quite curious. For the related problem of separating at least one vertex from each of a given set of triples from some source $s$ by deleting at most $\ell$ vertices (a variant of \problem{Digraph Paircut}~\cite{KratschW12}) there is a natural $\mathcal{O}^*(3^\ell)$ time algorithm that performs at most $\ell$ three-way branchings before finding a solution (if possible). It would be interesting to know whether a complete encoding of \probname{Vertex Cover}$(k-(2\mbox{\textsc{lp}}-\mbox{\textsc{mm}}))$\xspace into a similar form is possible, since that would imply an algorithm that exactly matches the running time of the algorithm by Garg and Philip~\cite{GargP16}.
\end{document} |
\begin{document}
\title{Entropy of automorphisms of ${\rm II}_1$-factors arising from the dynamical systems theory}
\author{V.Ya. Golodets, S.V. Neshveyev}
\date{\it B. Verkin Institute for Low Temperature Physics and Engineering, National Academy of Sciences of Ukraine, 47, Lenin Ave., 310164, Kharkov, Ukraine}
\maketitle
\begin{abstract} Let a countable amenable group $G$ act freely and ergodically on a Lebesgue space $(X, \mu)$, preserving the measure $\mu$. If $T\in\mathop{\mbox{\rm Aut\,}}(X, \mu)$ is an automorphism of the equivalence relation defined by $G$ then $T$ can be extended to an automorphism $\alpha_T$ of the II$_1$-factor $M=L^\infty(X,\mu)\rtimes G$. We prove that if $T$ commutes with the action of $G$ then $H(\alpha_T)=h(T)$, where $H(\alpha_T)$ is the Connes-St{\o}rmer entropy of $\alpha_T$, and $h(T)$ is the Kolmogorov--Sinai entropy of $T$. We also prove that for given $s$ and $t$, $0\le s\le t\le\infty$, there exists a $T$ such that $h(T)=s$ and $H(\alpha_T)=t$. \end{abstract}
\section*{Introduction}
Entropy is an important notion in classical statistical mechanics and information theory. The concept of entropy for automorphisms in ergodic theory was first introduced by Kolmogorov and Sinai in 1958. This invariant proved to be extremely useful in classical dynamical systems theory and topological dynamics. The extension of this notion to quantum dynamical systems was carried out by Connes, Narnhofer, St{\o}rmer and Thirring~\cite{CS,CNT}. At present there are several other promising approaches to the entropy of $C^*$-dynamical systems~\cite{S, AF, V}.
An important trend in dynamical entropy is its computation for various models. Many interesting results have been obtained in this field in recent years. We note several of them. St{\o}rmer, Voiculescu~\cite{SV}, and the second author~\cite{N} computed the entropy of Bogoliubov automorphisms of CAR and CCR algebras (see also~\cite{BG,GN2}). Pimsner, Popa~\cite{PP} and Choda~\cite{Ch1} computed the entropy of shifts of Temperley-Lieb algebras, Choda~\cite{Ch2}, Hiai~\cite{H} and St{\o}rmer~\cite{St} computed the entropy of canonical shifts. The first author, St{\o}rmer~\cite{GS1,GS2}, and Price~\cite{Pr} computed the entropy for a wide class of binary shifts.
In this paper we consider automorphisms of II$_1$ factors arising from the dynamical systems theory. Let a countable group $G$ act freely and ergodically on a Lebesgue space $(X, \mu)$, preserving $\mu$. Then one can construct the crossed product $M=L^\infty(X, \mu)\rtimes G$, which, as is well known, is a II$_1$-factor. If $T\in\mathop{\mbox{\rm Aut\,}}(X, \mu)$ defines an automorphism of the ergodic equivalence relation induced by $G$ then $T$ can be extended to an automorphism $\alpha_T$ of $M$~\cite{FM}. It is a natural problem to compute the dynamical entropy $H(\alpha_T)$ in the sense of~\cite{CS} and to compare it with the Kolmogorov-Sinai entropy $h(T)$ of $T$. It should be noted that this last problem is a part of a more general problem. Namely, let $M$ be a II$_1$-factor, $\alpha\in\mathop{\mbox{\rm Aut\,}} M$, and $A$ an $\alpha$-invariant Cartan subalgebra, $\alpha(A)=A$; then it is natural to investigate when $H(\alpha)$ is equal to
$H(\alpha|_A)$.
These problems are studied in our paper. In Section~\ref{2} we prove that if $T$ commutes with the action of $G$ then $H(\alpha_T)=h(T)$. More generally, we prove that this result is valid for crossed products of arbitrary algebras for entropies of Voiculescu~\cite{V} and of Connes-Narnhofer-Thirring~\cite{CNT}. In Section~\ref{3} we consider two examples to illustrate this result. These examples give non-isomorphic ergodic automorphisms of the hyperfinite ergodic equivalence relation with the same entropy. In Section~\ref{4} we construct several examples showing that the entropies $h(T)$ and $H(\alpha_T)$ can be distinct. These systems are non-commutative analogs of dynamical systems of algebraic origin (see \cite{A,Y,LSW,S}). In particular, some of our examples are automorphisms of non-commutative tori. In Section~\ref{5} we construct flows $T_t$ such that $H(\alpha_{T_1})>h(T_1)$. In particular, we show that the values $h(T)$ and $H(\alpha_T)$ can be arbitrary.
\section{Computation of entropy of automorphisms of crossed products}\label{2}
Let $(X, \mu)$ be a Lebesgue space, $G$ a countable amenable group of automorphisms $S_g$, $g\in G$, of $(X,\mu)$ preserving $\mu$, and $T$ an automorphism of $(X, \mu)$, $\mu\circ T=\mu$, such that $$ TS_g=S_gT,\quad g\in G. $$
\begin{theorem} \label{2.1} Let $(X, \mu)$, $G$ and $T$ be as above. Suppose $G$ acts freely and ergodically on $(X, \mu)$. Then $M=L^\infty(X, \mu)\rtimes_SG$ is the hyperfinite II$_1$-factor with the trace-state $\tau$ induced by $\mu$. The automorphism $T$ can be canonically extended to an automorphism $\alpha_T$ of $M$, and $$ H(\alpha_T)=h(T)\,, $$ where $H(\alpha_T)$ is the Connes-St{\o}rmer entropy of $\alpha_T$, and $h(T)$ is the Kolmogorov-Sinai entropy of $T$. \end{theorem}
We will prove the following more general result.
\begin{theorem} \label{2.2} Let $M$ be an approximately finite-dimensional W$^*$-algebra, $\sigma$ its normal state, $T$ a $\sigma$-preserving automorphism. Suppose a discrete amenable group $G$ acts on $M$ by automorphisms $S_g$ that commute with $T$ and preserve $\sigma$. The automorphism $T$ defines an automorphism $\alpha_T$ of $M\rtimes_SG$, and the state $\sigma$ is extended to the dual state which we continue to denote by $\sigma$. Then
(i) $hcpa_\sigma(\alpha_T)=hcpa_\sigma(T)$, where $hcpa_\sigma$ is the completely positive approximation entropy of Voiculescu \cite{V};
(ii) $h_\sigma(\alpha_T)=h_\sigma(T)$, where $h_\sigma$ is the dynamical entropy of Connes-Narnhofer-Thirring~\cite{CNT}. \end{theorem}
Since CNT-entropy coincides with KS-entropy in the classical case, and with CS-entropy for tracial $\sigma$ and approximately finite-dimensional $M$, Theorem~\ref{2.1} follows from Theorem~\ref{2.2}.
To prove Theorem~\ref{2.2} we will generalize a construction of Voiculescu~\cite{V}.
\begin{lemma} \label{2.3} Let $B$ be a C$^*$-algebra, $x_1,\ldots,x_n\in B$. Then the mapping $\Psi\colon{\rm Mat}_n({\mathbb C})\otimes B\to B$, $$ \Psi(e_{ij}\otimes b)=x_ibx^*_j, $$ is completely positive. \end{lemma}
\begin{verif}{} Consider the element $V\in{\rm Mat}_n(B)={\rm Mat}_n({\mathbb C})\otimes B$, $$ V=\pmatrix{x_1 & \ldots & x_n\cr
0 & \ldots & 0 \cr
& \ldots & \cr
0 & \ldots & 0 \cr}. $$ Consider also the projection $p=e_{11}\otimes 1\in{\rm Mat}_n({\mathbb C})\otimes B$. Then $\Psi$ is the mapping ${\rm Mat}_n(B)\to p{\rm Mat}_n(B)p=B$, $x\mapsto VxV^*$. \end{verif}
Let $\lambda$ be the canonical representation of $G$ in $M\rtimes G$, so that $({\rm Ad}\,\lambda(g))(a)=S_g(a)$ for $a\in M$.
\begin{lemma} \label{2.4} For any finite subset $F$ of $G$, there exist normal unital completely positive mappings $I_F\colon B(l^2(F))\otimes M\to M\rtimes G$ and $J_F\colon M\rtimes G\to B(l^2(F))\otimes M$ such that \begin{eqnarray*} I_F(e_{g,h}\otimes a)
&=&{1\over|F|}\lambda(g)a\lambda(h)^*
={1\over|F|}\lambda(gh^{-1})S_h(a),\\ J_F(\lambda(g)a)
&=&\sum_{h\in F\cap g^{-1}F}e_{gh,h}\otimes S_{h^{-1}}(a),\\ (I_F\circ J_F)(\lambda(g)a)
&=&{|F\cap g^{-1}F|\over|F|}\lambda(g)a,\\ \sigma\circ I_F
&=&{\rm tr}_F\otimes\sigma,\ \ \alpha_T\circ I_F=I_F\circ({\rm id}\otimes T),\\ ({\rm tr}_F\otimes\sigma)\circ J_F
&=&\sigma,\ \ ({\rm id}\otimes T)\circ J_F=J_F\circ\alpha_T, \end{eqnarray*} where ${\rm tr}_F$ is the unique tracial state on $B(l^2(F))$. \end{lemma}
\begin{verif}{} The complete positivity of $I_F$ follows from Lemma~\ref{2.3}. Consider $J_F$. Suppose that $M\subset B(H)$, and consider the regular representation of $M\rtimes G$ on $l^2(G)\otimes H$: $$ \lambda(g)(\delta_h\otimes\xi)=\delta_{gh}\otimes\xi, \ \
a(\delta_h\otimes\xi)=\delta_h\otimes S_{h^{-1}}(a)\xi\ \ (a\in M). $$ Let $P_F$ be the projection onto $l^2(F)\otimes H$. Then a direct computation shows that the mapping $J_F(x)=P_FxP_F$, $x\in M\rtimes G$, has the form written above. All other assertions follow immediately. \end{verif}
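For the reader's convenience we record how the formula for $I_F\circ J_F$ follows from the first two identities of Lemma~\ref{2.4} and the linearity of $I_F$: for $g\in G$ and $a\in M$, $$ (I_F\circ J_F)(\lambda(g)a)=\sum_{h\in F\cap g^{-1}F}I_F\bigl(e_{gh,h}\otimes S_{h^{-1}}(a)\bigr) =\sum_{h\in F\cap g^{-1}F}{1\over|F|}\lambda(g)S_h(S_{h^{-1}}(a)) ={|F\cap g^{-1}F|\over|F|}\lambda(g)a. $$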
\begin{verif}{ of Theorem~\ref{2.2}}
\noindent (i) Since there exists a $\sigma$-preserving conditional expectation $M\rtimes G\to M$, we have $hcpa_\sigma(\alpha_T)\ge hcpa_\sigma(T)$. To prove the opposite inequality we have to show that $hcpa_\sigma(\alpha_T,\omega)\le hcpa_\sigma(T)$ for any finite subset $\omega$ of $M\rtimes G$. Fix $\varepsilon>0$. We can find a finite subset $F$ of
$G$ such that $||(I_F\circ J_F)(x)-x||_\sigma<\varepsilon$ for any $x\in\omega$. Let $(\psi,\phi,B)\in{CPA}(B(l^2(F))\otimes M,{\rm tr}_F\otimes\sigma)$. Then $(I_F\circ\psi,\phi\circ J_F,B)\in{CPA}(M\rtimes G,\sigma)$. Suppose $$
||(\psi\circ\phi)(J_F(x))-J_F(x)||_{{\rm tr}_F\otimes\sigma}<\delta $$ for some $x\in\alpha^k_T(\omega)$ and $k\in{\mathbb N}$. Then $$
||(I_F\circ\psi\circ\phi\circ J_F)(x)-x||_\sigma
\le||(\psi\circ\phi)(J_F(x))-J_F(x)||_{\sigma\circ I_F}
+||(I_F\circ J_F)(x)-x||_\sigma<\delta+\varepsilon, $$ where we have used the facts that $\sigma\circ I_F={\rm tr}_F\otimes\sigma$ and that $\alpha_T$ commutes with $I_F\circ J_F$. Since $J_F\circ\alpha_T=({\rm id}\otimes T)\circ J_F$, we infer that $$ {rcp}_\sigma(\omega\cup\alpha_T(\omega)\cup\ldots\cup\alpha^{n-1}_T(\omega);
\delta+\varepsilon)\le{rcp}_{{\rm tr}_F\otimes\sigma}(J_F(\omega)\cup
\ldots\cup({\rm id}\otimes T)^{n-1}(J_F(\omega));\delta), $$ so that (for $\delta<\varepsilon$) \begin{eqnarray*} hcpa_\sigma(\alpha_T,\omega;2\varepsilon)
&\le& hcpa_\sigma(\alpha_T,\omega;\varepsilon+\delta)
\le hcpa_{{\rm tr}_F\otimes\sigma}({\rm id}\otimes T,J_F(\omega);\delta)\\
&\le& hcpa_{{\rm tr}_F\otimes\sigma}({\rm id}\otimes T)=hcpa_\sigma(T), \end{eqnarray*} where the last equality follows from the subadditivity of the entropy~\cite{V}. Since $\varepsilon>0$ was arbitrary, the proof of the inequality $hcpa_\sigma(\alpha_T,\omega)\le hcpa_\sigma(T)$ is complete.
\noindent (ii) We always have $h_\sigma(\alpha_T)\ge h_\sigma(T)$. To prove the opposite inequality consider a channel $\gamma\colon B\to M\rtimes G$, i.~e., a unital completely positive mapping of a finite-dimensional C$^*$-algebra~$B$. We have to prove that $h_\sigma(\alpha_T;\gamma)\le h_\sigma(T)$. Fix $\varepsilon>0$. We can choose $F$ such that $$
||(I_F\circ J_F\circ\gamma-\gamma)(x)||_\sigma\le\varepsilon||x||
\ \ \hbox{for any}\ x\in B. $$ By~\cite[Theorem IV.3]{CNT}, \begin{equation} \label{e2.1} {1\over n}H_\sigma(\gamma,\alpha_T\circ\gamma,\ldots,\alpha^{n-1}_T\circ\gamma)
\le\delta+{1\over n}
H_\sigma(I_F\circ J_F\circ\gamma,\alpha_T\circ I_F\circ J_F\circ\gamma,
\ldots,\alpha^{n-1}_T\circ I_F\circ J_F\circ\gamma), \end{equation} where $\delta=\delta(\varepsilon,{\rm rank}\, B)\to0$ as $\varepsilon\to0$. Since $\sigma\circ I_F={\rm tr}_F\otimes\sigma$, it is easy to see from the definition of mutual entropy $H_\sigma$ \cite{CNT} that \begin{equation} \label{e2.2} H_\sigma(I_F\circ J_F\circ\gamma,I_F\circ J_F\circ\alpha_T\circ\gamma,
\ldots,I_F\circ J_F\circ\alpha^{n-1}_T\circ\gamma)
\le H_{{\rm tr}_F\otimes\sigma}(J_F\circ\gamma,J_F\circ\alpha_T\circ\gamma,
\ldots,J_F\circ\alpha^{n-1}_T\circ\gamma) \end{equation} Since $I_F\circ J_F$ commutes with $\alpha_T$, and $J_F\circ\alpha_T=({\rm id}\otimes T)\circ J_F$, we infer from (\ref{e2.1}) and (\ref{e2.2}) that $$ h_\sigma(\alpha_T;\gamma)\le\delta
+h_{{\rm tr}_F\otimes\sigma}({\rm id}\otimes T;J_F\circ\gamma)
\le\delta+h_{{\rm tr}_F\otimes\sigma}({\rm id}\otimes T). $$ Since we could choose $F$ such that $\delta$ was arbitrarily small, we see that it suffices to prove that $h_{{\rm tr}_F\otimes\sigma}({\rm id}\otimes T)=h_\sigma(T)$. For abelian $M$ this is proved by standard arguments, using \cite[Corollary VIII.8]{CNT}. To handle the general case we need the following lemma.
\begin{lemma} \label{2.5} For any finite-dimensional C$^*$-algebra $B$, any state $\phi$ of $B$, and any positive linear functional $\psi$ on ${\rm Mat}_n({\mathbb C})\otimes B$, we have $$
S({\rm tr}_n\otimes\phi,\psi)\le S(\phi,\psi|_B)+2\psi(1)\log n. $$ \end{lemma}
\begin{verif}{} By \cite[Theorem 1.13]{OP}, $$
S({\rm tr}_n\otimes\phi,\psi)=S(\phi,\psi|_B)+S(\psi\circ E,\psi), $$ where $E={\rm tr}_n\otimes{\rm id}\colon{\rm Mat}_n({\mathbb C})\otimes B\to B$ is the (${\rm tr}_n\otimes\phi$)-preserving conditional expectation (note that we adopt the notations of~\cite{CNT}, so we denote by $S(\omega_1,\omega_2)$ the quantity which is denoted by $S(\omega_2,\omega_1)$ in~\cite{OP}). By the Pimsner-Popa inequality~\cite[Theorem 2.2]{PP}, we have $$ E(x)\ge{1\over n^2}x\ \ \hbox{for any}\ \ x\in{\rm Mat}_n({\mathbb C})\otimes B,\ x\ge0. $$ In particular, $\psi\circ E\ge{1\over n^2}\psi$, whence $S(\psi\circ E,\psi)\le2\psi(1)\log n$. \end{verif}
Since $M$ is an AFD-algebra, to compute the entropy of ${\rm id}\otimes T$ it suffices to consider subalgebras of the form $B(l^2(F))\otimes B$, where $B\subset M$. From Lemma~\ref{2.5} and the definitions~\cite{CNT} we immediately get $$ h_{{\rm tr}_F\otimes\sigma}({\rm id}\otimes T;B(l^2(F))\otimes B)\le h_\sigma(T;B)
+2\log|F|. $$ Hence
$h_{{\rm tr}_F\otimes\sigma}({\rm id}\otimes T)\le h_\sigma(T)+2\log|F|$. Applying this inequality to $T^m$, we obtain $$
h_{{\rm tr}_F\otimes\sigma}(({\rm id}\otimes T)^m)\le h_\sigma(T^m)+2\log|F|
\ \ \forall m\in{\mathbb N}. $$ But since $M$ is an AFD-algebra, we have $h_{{\rm tr}_F\otimes\sigma}(({\rm id}\otimes T)^m)
=m\cdot h_{{\rm tr}_F\otimes\sigma}({\rm id}\otimes T)$ and $h_\sigma(T^m)=m\cdot h_\sigma(T)$. So dividing the above inequality by $m$, and letting $m\to\infty$, we obtain $h_{{\rm tr}_F\otimes\sigma}({\rm id}\otimes T)\le h_\sigma(T)$, and the proof of the theorem is complete. \end{verif}
\noindent{\bf Remarks.}
\noindent(i) For any AFD-algebra $N$ and any normal state $\omega$ of $N$, we have $h_{\omega\otimes\sigma}({\rm id}\otimes T)=h_\sigma(T)$. Indeed, we may suppose that $N$ is finite-dimensional and $\omega$ is faithful (because if $p$ is the support of $\omega$, then $h_{\omega\otimes\sigma}({\rm id}\otimes T)
=h_{\omega\otimes\sigma}(({\rm id}\otimes T)|_{pNp\otimes M})$). Now the only thing we need is a generalization of the Pimsner-Popa inequality. Let $p_1,\ldots,p_m$ be the atoms of a maximal abelian subalgebra of the centralizer of the state $\omega$. Then $$ (\omega\otimes{\rm id})(x)\ge\left(\sum^m_{i=1}{1\over\omega(p_i)}\right)^{-1}x
\ \ \hbox{for any}\ \ x\in N\otimes M,\ x\ge0, $$ by \cite[Theorem 4.1 and Proposition 5.4]{L}.
\noindent(ii) By Corollary 3.8 in \cite{V}, $hcpa_\mu(T)=h(T)$ for ergodic $T$. For non-ergodic $T$, the entropies can be distinct. Indeed, let $X_1$
be a $T$-invariant measurable subset of $X$, $\lambda=\mu(X_1)$, $0<\lambda<1$. Set $\mu_1=\lambda^{-1}\mu|_{X_1}$, $T_1=T|_{X_1}$, $X_2=X\backslash X_1$,
$\mu_2=(1-\lambda)^{-1}\mu|_{X_2}$, $T_2=T|_{X_2}$. It is easy to see that $h(T)=\lambda h(T_1)+(1-\lambda)h(T_2)$. On the other hand, it can be proved that $$ hcpa_\mu(T)=\max\{hcpa_{\mu_1}(T_1),hcpa_{\mu_2}(T_2)\}. $$ So if $h(T_1),h(T_2)<\infty$, $h(T_1)\ne h(T_2)$, then $h(T)<hcpa_\mu(T)$.
To obtain an invariant which coincides with KS-entropy in the classical case, one can modify Voiculescu's definition replacing ${\rm rank}\, B$ with $\exp S(\sigma\circ\psi)$ in~\cite[Definition 3.1]{V}. Theorem~\ref{2.2} remains true for this modified entropy.
\section{Examples}\label{3}
We present two examples to illustrate Theorem~\ref{2.1}. These examples give non-isomorphic ergodic automorphisms of amenable equivalence relations with the same KS-entropy.
Let us first describe a general construction.
\begin{proposition} \label{3.1} Let $S_0$, $S_1$, $S_2$ be ergodic automorphisms of $(X,\mu)$ such that $S_0$ commutes with $S_1$ and $S_2$, and $S_1$ is conjugate with neither $S_2$, nor $S^{-1}_2$ by an automorphism commuting with $S_0$. Set $M_i=L^\infty(X,\mu)\rtimes_{S_i}{\mathbb Z}$, $i=1,2$, and let $\alpha_i$ be the automorphism of $M_i$ induced by $S_0$. Then there is no isomorphism $\phi$ of $M_1$ onto $M_2$ such that $\phi\circ\alpha_1=\alpha_2\circ\phi$ and $\phi(L^\infty(X,\mu))=L^\infty(X,\mu)$. \end{proposition}
\begin{verif}{} Suppose such a $\phi$ exists. Let $U_i\in M_i$ be the unitary corresponding to $S_i$, $i=1,2$, and $A=L^\infty(X,\mu)\subset M_1$. Set $U=\phi^{-1}(U_2)$. Since $U$ is a unitary operator from $M_1$ such that $({\rm Ad}\, U)(A)=A$, it is easy to check that $U$ has the form $$ U=\sum_{i\in{\mathbb Z}}a_iU_1^i E_i,\quad a_i\in{\mathbb T}, $$ where $\{E_i\} $ is a family of projections from $A$, $E_iE_j=0$ for $i\not=j$, $\sum\limits_iE_i=\sum\limits_iU_1^iE_iU_1^{-i}=I$. Since $\alpha_1(U)=U$, we have $\alpha_1(E_i)=E_i$, $i\in{\mathbb Z}$. But $S_0$ is ergodic, therefore $E_i=I$ or $E_i=0$. Hence
$U=a_iU_1^i$ for some $i\in{\mathbb Z}$ and $a_i\in{\mathbb T}$. Since $\phi$ is an isomorphism, we have either $i=-1$, or $i=1$. We see that $\phi|_{L^\infty(X,\mu)}$ is an automorphism that commutes with $S_0$ and conjugates $S_2$ with either $S^{-1}_1$, or $S_1$. \end{verif}
\noindent{\bf Remark.} It follows from Proposition~\ref{3.1} that $S_0$ defines non-isomorphic automorphisms of the ergodic equivalence relations induced by $S_1$ and $S_2$ on $X$, respectively, although $H(\alpha_1)=H(\alpha_2)=h(S_0)$.
\begin{example} \label{3.2} \rm Let $X=[0, 1]$ be the unit interval, $\mu$ the Lebesgue measure on $X$, $t_0$, $t_1$ and $t_2$ irrational numbers from $[0, 1]$ such that $t_2\ne t_1,\,1-t_1$. Consider the shifts $S_ix=x+t_i\pmod1$, $x\in[0, 1]$. Any automorphism of $X$ commuting with $S_0$ commutes with $S_1$ and $S_2$. Since $S_1\ne S^{\pm1}_2$, Proposition~\ref{3.1} is applicable. Note that $h(S_0)=0$. \end{example}
\begin{example} \label{3.3} \rm Let $(X, \mu)$ be a Lebesgue space, $T_t$ a Bernoulli flow on $(X, \mu)$ with $h(T_1)=\log2$ \cite{O}. Choose $t_i\in{\mathbb R}$, $t_i\ne0$ ($i=0,1,2$), $t_1\ne\pm t_2$, and set $S_i=T_{t_i}$. Then $h(S_1)\ne h(S_2)$, and we can apply Proposition~\ref{3.1}. \end{example}
\section{Entropy of automorphisms and their restrictions to a Cartan subalgebra} \label{4}
Let $M$ be a II$_1$-factor, $A$ its Cartan subalgebra, $\alpha\in\mathop{\mbox{\rm Aut\,}} M$ such that $\alpha(A)=A$. We consider cases when
$H(\alpha)>H(\alpha|_A)$.
Suppose a discrete abelian group $G$ acts freely and ergodically by automorphisms $S_g$ on $(X,\mu)$, $\beta$ is an automorphism of $G$, and $T$ is an automorphism of $(X,\mu)$ such that $TS_g=S_{\beta(g)}T$. Then $T$ induces an automorphism $\alpha_T$ of $M=L^\infty(X,\mu)\rtimes_SG$. Explicitly, $$ \alpha_T(f)(x)=f(T^{-1}x) \ \hbox{for}\ f\in L^\infty(X,\mu),
\ \alpha_T(\lambda(g))=\lambda(\beta(g)). $$ The algebra $A=L^\infty(X,\mu)$ is a Cartan subalgebra of $M$. On the other hand, the operators $\lambda(g)$ generate a maximal abelian subalgebra
$B\cong L^\infty(\hat G)$ of $M$, and $\alpha_T|_B=\hat\beta$, the dual automorphism of $\hat G$. We have $$ H(\alpha_T)\ge\max\{h(T),h(\hat\beta)\}, $$
so if $h(\hat\beta)>h(T)$, then $H(\alpha_T)>H(\alpha_T|_A)$.
To construct such examples we consider systems of algebraic origin.
Let $G_1$ and $G_2$ be discrete abelian groups, and $T_1$ an automorphism of $G_1$. Suppose there exists an embedding $l\colon G_2\hookrightarrow\hat G_1$ such that $l(G_2)$ is a dense $\hat T_1$-invariant subgroup. Set $T_2=\hat T_1|_{G_2}$. The group $G_2$ acts by translations on $\hat G_1$ ($g_2\cdot\chi_1=\chi_1+l(g_2)$), and we fall into the situation described above (with $X=\hat G_1$, $G=G_2$, $T=\hat T_1$ and $\beta=T_2$).
The roles of $G_1$ and $G_2$ above are almost symmetric. Indeed, to be given an embedding $G_2\hookrightarrow\hat G_1$ with dense range is just the same as to be given a non-degenerate pairing $\langle\cdot\,,\,\cdot\rangle\colon G_1\times G_2\to{\mathbb T}$, then the equality
$T_2=\hat T_1|_{G_2}$ means that this pairing is $T_1\times T_2$-invariant. The pairing gives rise to an embedding $r\colon G_1\hookrightarrow\hat G_2$. Then $G_1$ acts on $\hat G_2$ by translations $g_1\cdot\chi_2=\chi_2-r(g_1)$, and $L^\infty(\hat G_1)\rtimes G_2\cong G_1\ltimes L^\infty(\hat G_2)$. In fact, both algebras are canonically isomorphic to the twisted group W$^*$-algebra $W^*(G_1\times G_2,\omega)$, where $\omega$ is the bicharacter defined by $$ \omega((g'_1,g'_2),(g''_1,g''_2))=\langle g''_1,g'_2\rangle. $$ Then $\alpha_T$ is nothing else than the automorphism induced by the $\omega$-preserving automorphism $T_1\times T_2$.
Let $R={\mathbb Z}[t,t^{-1}]$ be the ring of Laurent polynomials over ${\mathbb Z}$, $f\in{\mathbb Z}[t]$, $f\ne 1$, a polynomial whose irreducible factors are not cyclotomic, equivalently, $f$ has no roots of modulus 1. Fix $n\in\{2,3,\ldots,\infty\}$. Set $G_1=R/(f^\sim)$ and $G_2=\mathop{\oplus}\limits^n_{k=1}R/(f)$, where $f^\sim(t)=f(t^{-1})$. Let $T_i$ be the automorphism of $G_i$ of multiplication by $t$. Let $\chi$ be a character of $G_2$. Then the mapping $R\ni f_1\mapsto f_1(\hat T_2)\chi\in\hat G_2$ defines an equivariant homomorphism $G_1\to\hat G_2$. In other words, if $\chi=(\chi_1,\ldots,\chi_n)\in\hat G_2\subset\hat R^n$, then the pairing is given by $$ \langle f_1,(g_1,\dots,g_n)\rangle=\prod^n_{k=1}\chi_k(f^\sim_1\cdot g_k), $$ where $(f^\sim_1\cdot g_k)(t)=f_1(t^{-1})g_k(t)$. This pairing is non-degenerate iff the orbit of $\chi$ under the action of $\hat T_2$ generates a dense subgroup of $\hat G_2$. Since $T_2$ is aperiodic, the dual automorphism is ergodic. Hence the orbit is dense for almost every choice of $\chi$.
Now let us estimate entropy. First, by Yuzvinskii's formula \cite{Y,LW}, $h(\hat T_1)=m(f)$, $h(\hat T_2)=n\cdot m(f)$, where $m(f)$ is the logarithmic Mahler measure of $f$, $$
m(f)=\int^1_0\log|f(e^{2\pi is})|ds=\log|a_m|
+\sum_{j\colon|\lambda_j|>1}\log|\lambda_j|, $$ where $a_m$ is the leading coefficient of $f$, and $\{\lambda_j\}_j$ are the roots of $f$. Now suppose that the coefficients of the leading and the lowest terms of $f$ are equal to 1. Then $G_1\times G_2$ is a free abelian group of rank $(n+1)\deg f$, and by a result of Voiculescu~\cite{V} we have $H(\alpha_T)\le h(\hat T_1\times\hat T_2)=(n+1)m(f)$.
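For a concrete illustration, one can take $f(t)=t^2-3t+1$: its leading and constant coefficients are equal to 1, its roots $(3\pm\sqrt5)/2$ have moduli different from 1, and $$ m(f)=\log\frac{3+\sqrt5}{2}\approx0.962. $$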
Note also that since the automorphism $T_1\times T_2$ is aperiodic, the automorphism $\alpha_T$ is mixing.
Let us summarize what we have proved.
\begin{theorem} \label{4.1} For given $n\in\{2,3,\ldots,\infty\}$ and a polynomial $f\in{\mathbb Z}[t]$, $f\ne1$, whose coefficients of the leading and the lowest terms are equal to 1 and which has no roots of modulus 1, there exists a mixing automorphism $\alpha$ of the hyperfinite II$_1$-factor and an $\alpha$-invariant Cartan subalgebra $A$ such that $$
H(\alpha|_A)=m(f),\ \ n\cdot m(f)\le H(\alpha)\le(n+1)m(f). $$ \end{theorem}
The possibility of constructing in this way systems with arbitrary values $H(\alpha|_A)<H(\alpha)$ is closely related to the question of whether 0 is a cluster point of the set
$\{m(f)\,|\,f\in{\mathbb Z}[t]\}$ (note that it suffices to consider irreducible polynomials whose leading coefficients and constant terms are equal to 1). This question is known as Lehmer's problem, and there is an evidence that the answer is {\it negative} (see \cite{LSW} for a discussion).
In estimating the entropy above we used the result of Voiculescu stating that the entropy of an automorphism of a non-commutative torus is not greater than the entropy of its abelian counterpart. It is clear that this result should be true for a wider class of systems. Consider the most simple case where the polynomial $f$ is a constant.
\begin{example} \label{4.2} \rm Let $f=2$ and $n=2$. Then $G_1=R/(2)\cong\mathop\oplus\limits_{k\in{\mathbb Z}}{\mathbb Z}/2{\mathbb Z}$, $G_2=G_1\oplus G_1$, $T_1$ is the shift to the right, $T_2=T_1\oplus T_1$. Let $G_1(0)={\mathbb Z}/2{\mathbb Z}\subset G_1$ and $G_2(0)={\mathbb Z}/2{\mathbb Z}\oplus{\mathbb Z}/2{\mathbb Z}\subset G_2$ be the subgroups sitting at the 0th place. Set $$ G^{(n)}_i=G_i(0)\oplus T_iG_i(0)\oplus\ldots\oplus T^n_iG_i(0). $$ Then $\displaystyle H(\alpha_T)\le hcpa_\tau(\alpha_T)\le\lim_{n\to\infty}{1\over n}\log{\rm rank}\,
C^*(G^{(n)}_1\times G^{(n)}_2,\omega)\le 3\log2$, so (for $A=L^\infty(\hat G_1)$) $$
H(\alpha_T|_A)=\log2\ \ \hbox{and}\ \ 2\log2\le H(\alpha_T)\le 3\log2. $$ The actual value of $H(\alpha_T)$ probably depends on the choice of the character $\chi\in\hat G_2$. We want to show that $H(\alpha_T)=2\log2$ for some special choice of $\chi$. For this it suffices to require the pairing
$\langle\cdot\,,\,\cdot\rangle|_{G^{(n)}_1\times G^{(n)}_2}$ be non-degenerate in the first variable for any $n\ge0$ (so that $C^*(G^{(n)}_2)$ is a maximal abelian subalgebra of $C^*(G^{(n)}_1\times G^{(n)}_2,\omega)$, and the rank of the latter algebra is equal to $4(n+1)$). The embedding $G_1\hookrightarrow\hat G_2$ is given by $$ g_1\mapsto\prod_{n\in{\mathbb Z}:g_1(n)\ne0}\hat T^n_2\chi, \ \
g_1=(g_1(n))_n\in\mathop{\oplus}_{n\in{\mathbb Z}}{\mathbb Z}/2{\mathbb Z}. $$ So we must choose $\chi$ in a way such that the character $\prod^m_{k=1}\hat T^{n_k}_2\chi$ is non-trivial on $G^{(n)}_2$ for any $0\le n_1<\ldots< n_m\le n$. Identify $\hat G_2$ with $\prod_{n\in{\mathbb Z}}({\mathbb Z}/2{\mathbb Z}\oplus{\mathbb Z}/2{\mathbb Z})$. Then $\hat T_2$ is the shift to the right, and we may take any $\chi=(\chi_n)_n$ such that
(i) $\chi_n=0$ for $n<0$, $\chi_0\ne0$;
(ii) the group generated by $\hat T^n_2\chi$ is dense in $\hat G_2$. \end{example}
Finally, we will show that it is possible to construct systems with positive entropy, which have zero entropy on a Cartan subalgebra.
\begin{example} \label{4.3} \rm Let $p$ be a prime number, $p\ne2$, $\hat G_1={\mathbb Z}_p$ (the group of $p$-adic integers), $G_2=\cup_{n\in{\mathbb N}}2^{-n}{\mathbb Z}\subset\hat G_1$, $\hat T_1$ and $T_2$ the automorphisms of multiplication by $2$. The group $G_1$ is the inductive limit of the groups ${\mathbb Z}/p^n{\mathbb Z}$, and $T_1$ acts on them as the automorphism of division by~$2$. Hence $$
H(\alpha_T|_A)=\lim_{n\to\infty}H(\alpha_T|_{C^*({\mathbb Z}/p^n{\mathbb Z})})=0. $$ Since $G_2=R/(t-2)$, we have $h(\hat T_2)=\log2$, so $H(\alpha_T)\ge\log2$. We state that $$ H(\alpha_T)=hcpa_\tau(\alpha_T)=\log2. $$ The automorphism $T^{p^{n-1}(p-1)}_1$ is identical on ${\mathbb Z}/p^n{\mathbb Z}$. Since $$ W^*({\mathbb Z}/p^n{\mathbb Z}\times G_2,\omega)={\mathbb Z}/p^n{\mathbb Z}\ltimes L^\infty(\hat G_2), $$ by Theorem \ref{2.2} we infer $$
hcpa_\tau(\alpha^{p^{n-1}(p-1)}_T|_{W^*({\mathbb Z}/p^n{\mathbb Z}\times G_2,\omega)})
=h(\hat T^{p^{n-1}(p-1)}_2), $$
whence $hcpa_\tau(\alpha_T|_{W^*({\mathbb Z}/p^n{\mathbb Z}\times G_2,\omega)})=\log2$, and $$ hcpa_\tau(\alpha_T)=\lim_{n\to\infty}
hcpa_\tau(\alpha_T|_{W^*({\mathbb Z}/p^n{\mathbb Z}\times G_2,\omega)})=\log2. $$ \end{example}
\section{Flows on II$_1$-factors with invariant Cartan subalgebras} \label{5}
Using examples from the previous sections and the construction of the associated flow, we will construct systems with arbitrary values of
$H(\alpha|_A)$ and $H(\alpha)$ ($0\le H(\alpha|_A)\le H(\alpha)\le\infty$).
Suppose a discrete amenable group $G$ acts freely and ergodically by measure-preserving transformations $S_g$ on $(X,\mu)$, $T$ an automorphism of $(X,\mu)$ and $\beta$ an automorphism of $G$ such that $TS_g=S_{\beta(g)}T$. Consider the flow $F_t$ associated with $T$. So $Y={\mathbb R}/{\mathbb Z}\times X$, $d\nu=dt\times d\mu$, $$ F_t(\dot{r},x)=(\dot{r}+\dot{t},T^{[r+t]}x)\ \ \hbox{for}\ \ r\in[0,1),\
x\in X, $$ where $t\mapsto\dot{t}$ is the factorization mapping ${\mathbb R}\to{\mathbb R}/{\mathbb Z}$. The semidirect product group $G_0=G\times_\beta{\mathbb Z}$ acts on $(X,\mu)$. This action is ergodic. It is also free, if \begin{equation} \label{e5.1} \hbox{there exist no }g\in G\hbox{ and no }n\in{\mathbb N}\hbox{ such that }S_g=T^n
\hbox{ on a set of positive measure.} \end{equation} Let $\Gamma$ be a countable dense subgroup of ${\mathbb R}/{\mathbb Z}$, acting by translations on ${\mathbb R}/{\mathbb Z}$. Set ${\cal G}=\Gamma\times G_0$. The group ${\cal G}$ is amenable. It acts freely and ergodically on $(Y,\nu)$. The corresponding equivalence relation is invariant under the flow, so we obtain a flow $\alpha_t$ on $L^\infty(Y,\nu)\rtimes{\cal G}$. Let us compute its entropy.
Let $\alpha_T$ be the automorphism of $L^\infty(X,\mu)\rtimes G$ defined by $T$. We state that \begin{equation} \label{e5.3}
H(\alpha_t)=|t|H(\alpha_T),\ \ hcpa_\tau(\alpha_t)=|t|hcpa_\tau(\alpha_T),
\ \ \hbox{and}\ \ H(\alpha_t|_{L^\infty(Y,\nu)})=|t|h(T). \end{equation}
Since $h(F_t)=|t|h(F_1)=|t|h({\rm id}\times T)$, the last equality in~(\ref{e5.3}) is evident. To prove the first two note that $$
H(\alpha_t)=|t|H(\alpha_1)\ \ \hbox{and}\ \
hcpa_\tau(\alpha_t)=|t|hcpa_\tau(\alpha_1) $$ (see \cite[Proposition 10.16]{OP} for the first equality, the second is proved analogously). We have $$ L^\infty(Y,\nu)\rtimes{\cal G}=(L^\infty({\mathbb R}/{\mathbb Z})\rtimes\Gamma)\otimes
(L^\infty(X,\mu)\rtimes G_0),\ \ \alpha_1={\rm id}\otimes\tilde\alpha_T, $$ where $\tilde\alpha_T$ is the automorphism of $L^\infty(X,\mu)\rtimes G_0$ defined by $T$. Since completely positive approximation entropy is subadditive and monotone \cite{V}, we have $hcpa_\tau({\rm id}\otimes\tilde\alpha_T)=hcpa_\tau(\tilde\alpha_T)$. We have also $H({\rm id}\otimes\tilde\alpha_T)=H(\tilde\alpha_T)$ by Remark following the proof of Theorem~\ref{2.2}. Since $$ L^\infty(X,\mu)\rtimes G_0=(L^\infty(X,\mu)\rtimes_SG)\rtimes_{\alpha_T}{\mathbb Z}, $$ we obtain $hcpa_\tau(\tilde\alpha_T)=hcpa_\tau(\alpha_T)$ and $H(\tilde\alpha_T)=H(\alpha_T)$ by virtue of Theorem~\ref{2.2}. So $hcpa_\tau(\alpha_1)=hcpa_\tau(\alpha_T)$ and $H(\alpha_1)=H(\alpha_T)$,
and the proof of the equalities~(\ref{e5.3}) is complete.
\begin{theorem} \label{5.1} For any $s$ and $t$, $0\le s< t\le\infty$, there exist an automorphism $\alpha$ of the hyperfinite II$_1$-factor and an $\alpha$-invariant Cartan subalgebra~$A$ such that $$
H(\alpha|_A)=s\ \ \hbox{and} \ \ H(\alpha)=t. $$ \end{theorem}
\begin{verif}{} Consider a system from Example~\ref{4.3}. Then the condition (\ref{e5.1}) is satisfied, so the construction above leads to a flow $\alpha_t$ and an $\alpha_t$-invariant Cartan subalgebra
$A_1$ such that $$ H(\alpha_t|_{A_1})=0 \ \ \hbox{and}\ \ H(\alpha_t)=hcpa_\tau(\alpha_t)
=|t|\log2. $$ As in Example~\ref{3.3}, consider a Bernoulli flow $S_t$ on $(X,\mu)$ with $h(S_1)=\log2$. Then for the corresponding flow $\beta_t$ on $L^\infty(X,\mu)\rtimes_{S_1}{\mathbb Z}$ we have (with $A_2=L^\infty(X,\mu)$) $$
H(\beta_t|_{A_2})=H(\beta_t)=hcpa_\tau(\beta_t)=|t|\log2. $$ Since Connes-St{\o}rmer' entropy is superadditive~\cite{SV} and Voiculescu's entropies are subadditive, we conclude that $$
H((\alpha_t\otimes\beta_s)|_{A_1\otimes A_2})=|s|\log2,\ \
H(\alpha_t\otimes\beta_s)=H(\alpha_t)+H(\beta_s)=(|t|+|s|)\log2. $$
Finally, consider an infinite tensor product of systems from Example~\ref{4.3}. Thus we obtain an automorphism $\gamma$ and a $\gamma$-invariant Cartan subalgebra~$A_3$ such that $$
H(\gamma|_{A_3})=0\ \ \hbox{and}\ \ H(\gamma)=\infty. $$
Then $H((\beta_s\otimes\gamma)|_{A_2\otimes A_3})=|s|\log2$ and $H(\beta_s\otimes\gamma)=\infty$. \end{verif}
\section{Final remarks} \label{6}
\subsection{} Let $p_1$ and $p_2$ be prime numbers, $p_i\ge3$, $i=1,2$. Construct automorphisms $\alpha_1$ and $\alpha_2$ as in Example~\ref{4.3}.
\begin{proposition} If $p_1\ne p_2$, then $\alpha_1$ and $\alpha_2$ are not conjugate as automorphisms of the hyperfinite II$_1$-factor, though $H(\alpha_1)=H(\alpha_2)=\log 2$. \end{proposition}
\begin{verif}{} Indeed, the automorphisms define unitary operators $U_i$ on $L^2(M, \tau)$. As we can see, the point part $S_i$ of the spectrum of $U_i$ is non-trivial. If $p_1\ne p_2$, then $S_1\ne S_2$, so $\alpha_1$ and $\alpha_2$ are not conjugate. \end{verif}
\subsection{} The automorphisms of Theorem~\ref{4.1} and Example~\ref{4.2} are ergodic. On the other hand, the automorphisms of Example~\ref{4.3} are not ergodic, even on the Cartan subalgebra. Moreover, any ergodic automorphism of a compact abelian group has positive entropy (it is even Bernoullian), so with the methods of Section~\ref{4} we cannot construct ergodic automorphisms with positive entropy and zero entropy restriction to a Cartan subalgebra (however, for actions of ${\mathbb Z}^d$, $d\ge2$, we are able to construct such examples).
The construction of Section~\ref{5} leads to non-ergodic automorphisms also, even if we start with an ergodic automorphism (such as in Example~\ref{4.2}).
{\bf Acknowledgement.} The first author (V.G.) is grateful to Erling St{\o}rmer for interesting and helpful discussions of the first version of this paper.
\end{document} |
\begin{document}
\title{Quantum criticality of the sub-Ohmic spin-boson model within displaced Fock states} \author{Shu He$^{1}$, Liwei Duan$^{1}$, and Qing-Hu Chen$^{1,2,*}$}
\address{ $^{1}$ Department of Physics, Zhejiang University, Hangzhou 310027, P. R. China \\ $^{2}$ Center for Statistical and Theoretical Condensed Matter Physics, Zhejiang Normal University, Jinhua 321004, P. R. China
}
\date{\today }
\begin{abstract} The spin-boson model is analytically studied using displaced Fock states (DFS) without discretization of the continuum bath. In the orthogonal displaced Fock basis, the ground-state wavefunction can be systematically improved in a controllable way. Interestingly, the zeroth-order DFS reproduces exactly the well-known Silbey-Harris results. In the framework of the second-order DFS, the magnetization and the entanglement entropy are exactly calculated. It is found that the magnetic critical exponent $\beta$ converges to $0.5$ in the whole sub-Ohmic bath regime $0<s<1$, compared with that obtained by the exactly solvable generalized Silbey-Harris ansatz. It is strongly suggested that the system with a sub-Ohmic bath is always above its upper critical dimension, in sharp contrast with previous findings. This is the first evidence of the violation of the quantum-classical mapping for $1/2<s<1$. \end{abstract}
\pacs{03.65.Yz, 03.65.Ud, 71.27.+a, 71.38.-k}
\maketitle
The spin-boson model~\cite{Leggett,weiss} describes a qubit (two-level system) coupled with a dissipative environment represented by a continuous bath of bosonic modes. There is currently considerable interest in this quantum many-body system due to the rich physics of quantum criticality and decoherence~\cite{weiss,Hur,Kopp}, with applications to the emerging fields of quantum computation~\cite{Thorwart}, quantum devices~\cite{mak}, and quantum biology~\cite{reng,omer}. It is widely used to study the microscopic behavior of open quantum systems~\cite{Leggett}. The coupling between the qubit and the environment is characterized by a spectral function $J(\omega)$ which is proportional to $\omega^s$. The spectral exponent $s$ distinguishes three different cases: sub-Ohmic ($s<1$), Ohmic ($s=1$), and super-Ohmic ($s>1$).
As a paradigmatic model for studying the influence of the environment on a quantum system, the spin-boson model has been extensively and persistently studied by many analytical and numerical approaches. On the analytical side, a pioneering work is undoubtedly the variational study based on the polaronic unitary transformation with the Silbey-Harris (SH) ansatz~\cite{Silbey}. Based on the GHZ ansatz, Zheng \textit{et al.} developed an analytical approach \cite{zheng} to study both the static and the dynamical behavior of the dissipative two-level system. Chin \textit{et al.} generalized the Silbey-Harris (GSH) variational polaronic ansatz to an asymmetric one in the sub-Ohmic spin-boson model~\cite{Chin}. All these studies are based on a single coherent state in both levels. Recently, this single-coherent-state ansatz was improved by simply adding other coherent states on an equal footing\cite{mD1} and by superpositions of two degenerate single coherent states\cite{ZhengLu}. A similar idea was also independently proposed by one of the present authors and a collaborator in 2005 for the single-mode case\cite{ren}.
On the numerical side, almost all advanced numerical approaches in quantum many-particle physics have been applied and extended to this model. The numerical renormalization group (NRG) was applied at an early stage \cite{Bulla} for the sub-Ohmic baths, but the direct application yields incorrect critical exponents of the quantum phase transition (QPT) for $0<s<1/2$ and therefore seemingly invalidates the famous quantum-to-classical correspondence, due to the Hilbert-space truncation error and the mass-flow error\cite{vojta,phi,Tong}. Later on, quantum Monte Carlo simulations based on an imaginary-time path integral~\cite{QMC}, a sparse polynomial space approach~\cite{ED}, and exact diagonalization in terms of shift bosons \cite{Zhang} were developed sequentially, and all found the mean-field critical exponent for $0<s<1/2$. The density matrix renormalization group (DMRG) was also applied, but was not successful in the analysis of the critical phenomena~\cite{DMRG}. More recently, using the DMRG algorithm combined with an optimized phonon basis, a variational matrix product state (MPS) approach formulated on a Wilson chain\cite{VMPS} was developed, in which the Hilbert-space truncation can be alleviated systematically. Very recently, an alternative to the conventional MPS representation was also proposed\cite{Frenzel}. For $1/2<s<1$, the magnetic critical exponent $\beta$ obtained in two MPS approaches~\cite{VMPS,Frenzel} and the NRG \cite{Bulla,Tong} is much less than $0.5$, indicating that the system is below its upper critical dimension.
Among the numerical approaches to the celebrated continuum spin-boson model, the discretization of the energy spectrum of the bath has to be performed at the very beginning, except for some approaches formulated on path integrals \cite{QMC} where the bath is analytically integrated out. Whether this artificial discretization changes the nature of the model system is still unclear. To ensure convergence, the number of bosonic modes is then set large enough so that the Hilbert-space truncation can be controlled systematically, and therefore the bath is described in a very complicated way, as in the various MPS approaches~\cite{VMPS,Frenzel} and the NRG \cite{Bulla,Tong}. To the best of our knowledge, the phonon state in the bath of the spin-boson model has not been analytically well described, except for approaches with more than one nonorthogonal coherent state \cite{mD1,ZhengLu}.
In this work, we propose an analytic ground state (GS) for the spin-boson model without discretization of the spectrum. The phonon state is expanded in a novel orthogonal basis and is therefore described in a controllable way. The GS wavefunction can be obtained self-consistently, and all GS properties can then be calculated numerically exactly. The convergence of the criticality is discussed without ambiguity.
The Hamiltonian of the spin-boson model is given by \begin{equation} H=-\frac \Delta 2\sigma _x+\sum_k\omega _ka_k^{\dagger }a_k+\frac 12\sigma _z\sum_kg_k(a_k^{\dagger }+a_k), \label{hamiltonian} \end{equation} where $\sigma _x$ and $\sigma _z$ are Pauli matrices, $\Delta $ is the tunneling amplitude between the two levels, $\omega _k$ and $a_k^{\dagger }$ are the frequency and creation operator of the $k$-th harmonic oscillator, and $g_k$ is the interaction strength between the $k$-th bosonic mode and the local spin. The spin-boson coupling is characterized by the spectral function \begin{equation} J(\omega )=\pi \sum_kg_k^2\delta (\omega _k-\omega )=2\pi \lambda \omega _c^{1-s}\omega ^s,\quad 0<\omega <\omega _c, \end{equation} with $\omega _c$ a cutoff frequency. The dimensionless parameter $\lambda $ denotes the coupling strength. The rich physics of quantum dissipation includes a second-order QPT from delocalization to localization for $0<s<1$, as a consequence of the competition between the tunneling of the spin and the effect of the dissipative bath.
To outline the approach more intuitively, we first consider the case without symmetry breaking, such as the delocalized phase. Using $|\uparrow \rangle $ and $|\downarrow \rangle $ to represent the eigenstates of $\sigma _z,$ the GS wavefunction can in principle be expressed in the following set of complete orthogonal basis states $\prod_{i=0}^na_{k_i}^{\dagger }|0\rangle $
\begin{eqnarray}
&&|\Psi ^{\prime }\rangle =\left( 1+\sum_k\alpha _ka_k^{\dagger }+\sum_{k_1,k_2}u_{k_1,k_2}a_{k_1}^{\dagger }a_{k_2}^{\dagger }+...\right)
|0\rangle |\uparrow \rangle \nonumber \\ &&+\left( 1-\sum_k\alpha _ka_k^{\dagger }+\sum_{k_1,k_2}u_{k_1,k_2}a_{k_1}^{\dagger }a_{k_2}^{\dagger }+...\right)
|0\rangle |\downarrow \rangle , \label{Fock} \end{eqnarray}
where $|0\rangle $ is the vacuum of the bath modes, $\alpha _k,u_{k_1k_2},\ldots$ are the coefficients, and even parity is considered. However, it is practically impossible to perform a direct diagonalization in this way and obtain reasonable results, because very high order expansions are needed. Alternatively, the wavefunction (\ref{Fock}) can also be expressed in terms of the other set of complete orthogonal basis states, $D\left( \alpha _k\right) {\prod }a_{k_i}^{\dagger }|0\rangle $, with $D\left( \alpha _k\right) =\exp \left[ \sum_k\alpha _k\left( a_k^{\dagger }-a_k\right) \right] $ a unitary operator with displacements $\alpha _k$ given in Eq. (\ref{Fock}), as \begin{eqnarray}
&&|\Psi \rangle =D\left( \alpha _k\right) \left( 1+\sum_{k_1,k_2}b_{k_1k_2}a_{k_1}^{\dagger }a_{k_2}^{\dagger }+...\right)
|0\rangle |\uparrow \rangle \nonumber \\ &&+D\left( -\alpha _k\right) \left( 1+\sum_{k_1,k_2}b_{k_1k_2}a_{k_1}^{\dagger }a_{k_2}^{\dagger }+...\right)
|0\rangle |\downarrow \rangle , \label{DFS} \end{eqnarray}
where the linear term $a_k^{\dagger }|0\rangle $ should be absent, because the expansion of the whole phonon state of each level in this basis can already reproduce the first two terms in Eq. (\ref{Fock}) completely. Note that the phonon state in each level is generated by operating on the Fock states with a unitary displacement operator; we therefore call these states displaced Fock states (DFS). Only the first term $D\left( \pm \alpha _k\right) |0\rangle $ can reach the whole Hilbert space, so no truncation is made in this sense. If the expansion is carried to infinity, an exact solution would be obtained. In other words, the true wavefunction should take the form of Eq. (\ref{DFS}). However, it is impossible to really perform an infinite expansion, and even an expansion with only a few terms is very time consuming. Fortunately, it will be shown later that keeping only two terms in the expansion already gives converged results for some important issues.
First, as the zeroth-order DFS, we only consider the first term in Eq. (\ref{DFS}). Projecting the Schr\"{o}dinger equation onto the orthogonal states $\left\langle 0\right| D^{\dagger }\left( \alpha _k\right)$ and $\left\langle 0\right| a_kD^{\dagger }\left( \alpha _k\right) $ gives \begin{equation} \sum_k\omega _k\alpha _k^2+\sum_kg_k\alpha _k-\frac \Delta 2\exp \left[ -2\sum_k\alpha _k^2\right] =E, \label{Eq_ohmic_01} \end{equation} \begin{equation} \omega _k\alpha _k+\frac 12g_k+\Delta \exp \left[ -2\sum_k\alpha _k^2\right] \alpha _k=0, \label{Eq_ohmic_02} \end{equation} where we have used the properties of the unitary displacement operators \begin{equation} D^{\dagger }\left( \alpha _k\right) a_k^{\dagger }D\left( \alpha _k\right) =a_k^{\dagger }+\alpha _k;\;D^{\dagger }\left( \alpha _k\right) a_kD\left( \alpha _k\right) =a_k+\alpha _k. \nonumber \end{equation} Eq. (\ref{Eq_ohmic_02}) immediately yields \begin{equation} \alpha _k=\frac{-\frac 12g_k}{\omega _k+\Delta \exp \left( -2\sum_k\alpha _k^2\right) }. \label{Eq_dis} \end{equation} Interestingly, this is just the SH result, although here it is not obtained through a variational scheme. Thus, already at the zeroth order, we recover the well-known analytical results. The advantage of this technique is that we can easily go further and obtain more accurate results in a controllable way, by both modifying the displacements of the unitary operators and adding correlations among different bosonic modes step by step.
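For orientation, a minimal numerical sketch of the fixed-point iteration implied by Eq.~(\ref{Eq_dis}) is given below (Python). The linear frequency grid, the number of modes, and the parameter values are illustrative assumptions made only for this sketch; they are not the discretization used later in this work.
\begin{verbatim}
import numpy as np

# Illustrative discretized bath (assumption for this sketch only)
K, s, lam, Delta, omega_c = 2000, 0.5, 0.05, 0.1, 1.0
omega = (np.arange(K) + 0.5) * omega_c / K       # mode frequencies
dw = omega_c / K
g = np.sqrt(2.0 * lam * omega_c**(1 - s) * omega**s * dw)   # g_k from J(w)

alpha = -0.5 * g / omega                          # adiabatic starting guess
for _ in range(1000):
    eta = np.exp(-2.0 * np.sum(alpha**2))         # eta = exp(-2 sum_k alpha_k^2)
    alpha_new = -0.5 * g / (omega + Delta * eta)  # Eq. (Eq_dis)
    if np.max(np.abs(alpha_new - alpha)) < 1e-12:
        alpha = alpha_new
        break
    alpha = alpha_new

# zeroth-order ground-state energy, Eq. (Eq_ohmic_01)
E = np.sum(omega * alpha**2 + g * alpha) - 0.5 * Delta * np.exp(-2.0 * np.sum(alpha**2))
print("zeroth-order (SH) ground-state energy:", E)
\end{verbatim}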
In the second-order DFS, we only keep two terms in Eq. (\ref{DFS}). Similarly, projecting the Schr\"{o}dinger equation onto $\left\langle 0\right| \ D^{\dagger }\left( \alpha _k\right) ,\;\left\langle 0\right|
a_kD^{\dagger }\left( \alpha _k\right) $, and $\left\langle 0\right| a_{k_1}a_{k_2}D^{\dagger }\left( \alpha _k\right) $ yields the following three equations for the unknowns $E$, $\alpha _k$, and $b_{k_1,k_2}$, \begin{equation} E=\sum_k\left( \omega _k\alpha _k^2+g_k\alpha _k\right) -\frac 12\Delta \eta \left( 1+4\sum_kB_k\alpha _k\right) , \label{Eq_2_en} \end{equation} \begin{equation} \alpha _k=-\frac{\frac{g_k}2+2\sum_{k^{\prime }}b_{k,k^{\prime }}\left[ \left( \omega _{k^{\prime }}-\Delta \eta \right) \alpha _{k^{\prime }}+\frac{ g_{k^{\prime }}}2\right] }{\omega _k+\Delta \eta \left( 1+4\sum_kB_k\alpha _k\right) }, \label{Eq_21} \end{equation} \begin{equation} b_{k_1,k_2}=-\frac{B_{k_1}\alpha _{k_2}+B_{k_2}\alpha _{k_1}-\alpha _{k_1}\alpha _{k_2}\left( 1+4\sum_kB_k\alpha _k\right) }{2\sum_kB_k\alpha _k+\left( \omega _{k_1}+\omega _{k_2}\right) /\left( \Delta \eta \right) }, \label{Eq_22} \end{equation} where \begin{eqnarray*} B_k &=&\sum_{k^{\prime }}b_{k,k^{\prime }}\alpha _{k^{\prime }}, \\ \eta &=&\exp \left[ -2\sum_k\alpha _k^2\right] . \end{eqnarray*} Both $\alpha _k$ and $b_{k_1,k_2}$ can be obtained by solving the two coupled equations (\ref{Eq_21}) and (\ref{Eq_22}) self-consistently, which in turn give the GS energy and wavefunction. In our opinion, this is actually a parameter-free analytical approach.
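A schematic self-consistent loop for Eqs.~(\ref{Eq_21}) and (\ref{Eq_22}) on a small discretized bath may look as follows (Python). The grid, the parameter values, and the plain Picard iteration without mixing are assumptions made purely for illustration; near the critical point some damping of the update would probably be needed.
\begin{verbatim}
import numpy as np

# Illustrative grid and parameters (assumptions for this sketch only)
K, s, lam, Delta, omega_c = 200, 0.5, 0.05, 0.1, 1.0
omega = (np.arange(K) + 0.5) * omega_c / K
dw = omega_c / K
g = np.sqrt(2.0 * lam * omega_c**(1 - s) * omega**s * dw)

alpha = -0.5 * g / omega            # start from the adiabatic displacement
b = np.zeros((K, K))                # second-order coefficients b_{k1,k2}

for _ in range(500):
    eta = np.exp(-2.0 * np.sum(alpha**2))
    B = b @ alpha                   # B_k = sum_k' b_{k,k'} alpha_k'
    corr = 1.0 + 4.0 * np.sum(B * alpha)
    # Eq. (Eq_21)
    num = 0.5 * g + 2.0 * (b @ ((omega - Delta * eta) * alpha + 0.5 * g))
    alpha_new = -num / (omega + Delta * eta * corr)
    # Eq. (Eq_22), evaluated with the current alpha, B and eta
    numer = np.outer(B, alpha) + np.outer(alpha, B) - np.outer(alpha, alpha) * corr
    denom = 2.0 * np.sum(B * alpha) + np.add.outer(omega, omega) / (Delta * eta)
    b_new = -numer / denom
    err = max(np.max(np.abs(alpha_new - alpha)), np.max(np.abs(b_new - b)))
    alpha, b = alpha_new, b_new
    if err < 1e-10:
        break
\end{verbatim}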
Due to the QPT from the delocalized phase to the localized one in the sub-Ohmic spin-boson model, we have to relax the wavefunction (\ref{DFS}) to an asymmetric one \begin{eqnarray}
&&|\Psi \rangle =D\left( \alpha _k\right) \left( 1+\sum_{k_1,k_2}b_1(k_1,k_2)a_{k_1}^{\dagger }a_{k_2}^{\dagger }+...\right)
|0\rangle |\uparrow \rangle \nonumber \\ &&+D(\beta _k)\left( r+\sum_{k_1,k_2}b_2(k_1,k_2)a_{k_1}^{\dagger }a_{k_2}^{\dagger }+...\right) |0\rangle |\downarrow \rangle , \label{sub-Ohmic} \end{eqnarray} where $r$ is the asymmetry parameter. If $r=1$ and $\beta _k=-\alpha _k$, the previous symmetric results are recovered.
The zeroth-order DFS gives the same results as the generalized SH polaronic ansatz \cite{Chin}; it is therefore also called the GSH ansatz in the remainder of this paper. In the second-order DFS, we have twice as many equations as in the symmetric case. The number of unknown parameters is also doubled, due to the asymmetric coefficients. We leave the detailed derivations to Appendix A.
Proceeding with the scheme outlined above, we can straightforwardly perform further expansions in the orthogonal displaced Fock basis $D\left( \alpha _k\right) {\prod }a_{k_i}^{\dagger }|0\rangle $ in a controllable way and, in principle, obtain the solution within any desired accuracy. The remaining challenges on this pathway are the high-dimensional integrals in the further extensions, due to both the analytical derivations and the exponentially increasing computational effort. On the other hand, a criterion for a precise description of the criticality is that a further correction does not change the nature of the previous approximation. Fortunately, it will be shown later that the second-order correction does not change the critical exponents of the GSH ansatz at all, so further corrections to the second-order DFS are not necessary, at least as far as the criticality is concerned.
The magnetization $\langle \sigma _z\rangle $ can be used as an order parameter in the QPT of this model. It shows a power law behavior near the critical point, \begin{equation} \langle \sigma _z\rangle \propto \left( \lambda -\lambda _c\right) ^\beta . \end{equation} The entanglement entropy between the qubit and the bath is defined as \cite {Osterloh} \begin{eqnarray*} S &=&-Tr\rho _A\log _2\rho _A=-Tr\rho _B\log _2\rho _B, \\
\rho _{A(B)} &=&Tr_{B(A)}|\Psi \rangle \langle \Psi |, \end{eqnarray*} where $A$ is the qubit, $B$ is the bath, and $\Psi $ is the GS wavefunction of the whole system. In the spin-boson model it takes the form \cite{Kopp} \[ S=-p_{+}\log _2p_{+}-p_{-}\log _2p_{-}, \] where \[ p_{\pm }=\left( 1\pm \sqrt{\langle \sigma _x\rangle ^2+\langle \sigma _z\rangle ^2}\right) /2. \]
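The entanglement entropy is thus an elementary function of the two spin expectation values. A short sketch (Python) of this evaluation reads as follows; the input values of $\langle\sigma_x\rangle$ and $\langle\sigma_z\rangle$ are placeholders.
\begin{verbatim}
import numpy as np

def entanglement_entropy(sx, sz):
    """Qubit-bath entanglement entropy from <sigma_x> and <sigma_z>."""
    r = np.sqrt(sx**2 + sz**2)
    p = np.array([(1.0 + r) / 2.0, (1.0 - r) / 2.0])
    p = p[p > 0.0]                     # avoid log2(0) when r = 1
    return -np.sum(p * np.log2(p))

print(entanglement_entropy(sx=0.8, sz=0.0))   # example (assumed) values
\end{verbatim}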
We stress that in the present approach we do not need to discretize the bosonic energy band at the very beginning, as was done in many previous studies. All $k$-summations in the coupled equations can be transformed into continuous integrals of the form $\int_0^{\omega _c}d\omega J(\omega )I(\omega ) $. In this work, all integrals are calculated numerically within a Gaussian-logarithmical (GL) integration scheme with very high accuracy. A detailed demonstration is given in Appendix B. Without loss of generality, we set $\Delta =0.1,\omega _c=1$ throughout this paper, if not specified otherwise.
First, we calculate the magnetization and evaluate the critical points within both the GSH ansatz and the second-order DFS. The results for $s=0.2,0.4,0.6$, and $0.8$ are presented in Fig. 1(a) by the solid lines (second-order DFS) and dashed lines (GSH). Both show that there exists a critical point separating the delocalized phase ($\langle \sigma _z\rangle =0$) from the localized one ($\langle \sigma _z\rangle \neq 0$). The critical coupling strengths $\lambda _c$ obtained by the second-order DFS are larger than those by the GSH ansatz, and the correction becomes remarkable for $s>0.5$. It follows that the GSH critical point is modified by the second-order DFS. To determine the critical point within the second-order DFS more precisely, we also calculate the entanglement entropy, which exhibits a cusp around the critical point. The results are given in Fig. 1(b). Both the magnetization and the entanglement entropy yield a consistent value for the QPT critical point.
\begin{figure}
\caption{(Color online) The magnetization and the entanglement entropy as functions of the coupling strength within the GSH (dashed lines) and the second-order DFS (solid lines) for $ s=0.2,0.4,0.6$, and $0.8$.}
\label{tunneling}
\end{figure}
It should be pointed out that the critical coupling strength obtained in the second-order DFS is not the true one of this model either, because it will certainly be revised by the third-order DFS, although the revision is probably small. It is expected that a converged critical point would only be obtained in a high-order DFS, which is, however, a challenging task at the moment and also beyond the scope of the present study. The more crucial issue in a QPT is the criticality. The natural question is therefore whether the criticality described by the GSH ansatz is changed in the second-order DFS.
In Fig. 2, we present the magnetization within both the GSH ansatz and the second-order DFS as a function of $\lambda /(\lambda -\lambda _c)$ in a log-log plot for $s=0.2,0.4,0.6$, and $0.8$. It is demonstrated that, in both approaches, the magnetic critical exponent $\beta $ is always $0.5$ within an error bar of $(-0.01,0.01)$, even for $s=0.6$ and $0.8$. Note that the second-order DFS should provide the dominant correction to the GSH ansatz, as indicated by the critical points. For the critical exponent, however, we do not find any visible deviation from the GSH values, and it is hard to imagine that further corrections would change this observation when the second-order correction does not. It is therefore strongly suggested that, even for $s>1/2$, the magnetic exponent $\beta$ in the sub-Ohmic spin-boson model is always $0.5$, quite different from the values obtained in the MPS \cite{VMPS,Frenzel} and NRG \cite{Bulla,Tong} studies.
\begin{figure}
\caption{ The log-log plot of the magnetization $\langle \sigma _z\rangle $ as a function of the coupling strength within the GSH and the second-order DFS for $s=0.2,0.4,0.6$, and $0.8$. $\omega _c=1,\Delta =0.1$. }
\label{critical_curves_ECS}
\end{figure}
In summary, a new analytic approach, referred to as DFS, is proposed for the spin-boson model with a continuum spectral function. The zeroth-order approximation is just the well-known SH approach, and further corrections can be performed step by step. For the sub-Ohmic baths, the second-order DFS modifies the GSH critical coupling strength of the QPT, especially for $s>1/2$. The critical exponent, however, is not changed at all and is always $0.5$ in the whole bath regime $0<s<1$, a mean-field value for a system above its upper critical dimension. This is direct evidence of the failure of the quantum-to-classical mapping in the sub-Ohmic spin-boson model, at least for $s>1/2$.
\textsl{Outlook}. It is expected that the number of integration grid points needed for converged results increases exponentially with further corrections in the DFS. Monte Carlo integration might be used for the high-dimensional integrals, but the analytical derivation in the high-order DFS is also a challenging task. New methods, probably some diagrammatic techniques within the framework of the DFS, are highly called for. Progress along this avenue may hopefully lead to a true exact solution of this celebrated model, which is perhaps our future ambition.
Because each summation over $k$ in the final expressions is related to $\sum_kg_k^2$, we propose a discretized spin-boson Hamiltonian ($\omega _c=1$) as follows \begin{equation} H=-\frac \Delta 2\sigma _x+\sum_k\omega _ka_k^{\dagger }a_k+\frac 12\sigma _z\sum_k\sqrt{\frac{W(\omega _k)J(\omega _k)}\pi }(a_k^{\dagger }+a_k), \label{dis_H} \end{equation} where $\omega _k=\omega _{m,n}$ is a Gaussian integration point in Eq. (\ref{combination}) and $W(\omega _k)$ is the corresponding Gaussian weight. Applying the present DFS approach to this Hamiltonian, all results obtained in this paper are recovered completely by direct summation over $k$. It is shown in Appendix B that a limited number of discretization points can give results with very high accuracy. In this sense, Hamiltonian (\ref{dis_H}) is equivalent to a model of one qubit coupled to a finite number of bosonic modes, which facilitates further studies. We believe that Eq. (\ref{dis_H}) with discretized bosonic modes could be a new starting Hamiltonian for any advanced approach. The dynamics based on the polaron trial state known as the Davydov D1 ansatz within the Dirac-Frenkel time-dependent variational procedure \cite{duan} can be revisited using this discretized Hamiltonian directly.
This work is supported by National Natural Science Foundation of China under Grant No. 11474256, and National Basic Research Program of China under Grant No.~2011CBA00103.
$^{\ast }$ Corresponding author. Email:[email protected]
\appendix
\section{DFS for the sub-Ohmic baths}
In the zeroth-order approximation, we only keep the first term in Eq. (\ref{sub-Ohmic}). Similar to the derivation in the symmetric case, projecting the Schr\"{o}dinger equation in the upper level onto the orthogonal basis $
\left\langle 0\right| \ D^{\dagger }\left( \alpha _k\right) \;$and$\
\left\langle 0\right| a_kD^{\dagger }\left( \alpha _k\right) $ and in the lower level onto $\left\langle 0\right| \ D^{\dagger }\left( \beta _k\right) \;$and$\
\left\langle 0\right| a_kD^{\dagger }\left( \beta _k\right) $ result in \begin{eqnarray} \sum_k\left( \omega _k\alpha _k^2+g_k\alpha _k\right) -\frac \Delta 2r\ \Gamma &=&E, \label{E_upper} \\ \omega _k\alpha _k+\frac 12g_k+\frac \Delta 2r\Gamma D_k &=&0, \label{dis_upper} \end{eqnarray} and \begin{eqnarray} \sum_k\left( \omega _k\beta _k^2-g_k\beta _k\right) -\frac \Delta {2r}\ \Gamma &=&E, \label{E_down} \\ \omega _k\beta _k-\frac 12g_k-\frac \Delta {2r}\Gamma D_k &=&0, \label{dis_down} \end{eqnarray} where \begin{eqnarray*} \Gamma &=&\exp \left[ -\frac 12\sum_kD_k^2\right] , \\ D_k &=&\alpha _k-\beta _k, \end{eqnarray*} which are the same as those obtained variationally within the GSH ansatz\ \cite{Chin}.
For the second-order DFS, the first two terms in Eq. (\ref{sub-Ohmic}) are kept. Proceeding as outlined above, projecting the Schr\"{o}dinger equation in the upper level onto the orthogonal states $
\left\langle 0\right| \ D^{\dagger }\left( \alpha _k\right) ,\ \left\langle 0\right| a_kD^{\dagger }\left( \alpha _k\right) $, and $\ \left\langle 0\right| a_{k_1}a_{k_2}D^{\dagger }\left( \alpha _k\right) $ and low level onto $\left\langle 0\right| \ D^{\dagger }\left( \beta _k\right)
,\left\langle 0\right| a_kD^{\dagger }\left( \beta _k\right)$, and $\
\left\langle 0\right| a_{k_1}a_{k_2}D^{\dagger }\left( \beta _k\right) $ yield the following six equations \begin{widetext} \begin{eqnarray} \sum_k\left[ \omega _k\alpha _k^2+g_k\alpha _k\right] -\frac \Delta 2\Gamma \left[ r+\sum_kB_kD_k\right] \ &=&E, \label{sub_1up} \\ r\sum_k\left[ \omega _k\beta _k^2-g_k\beta _k\right] -\frac \Delta 2\Gamma \left[ 1+\sum_kA_kD_k\right] \ &=&rE, \label{sub_1down} \end{eqnarray}
\begin{equation} \left[ \omega _k\alpha _k+\frac{g_k}2\right] +\sum_{k^{\prime }}2b_1\left( k,k^{\prime }\right) \left[ \omega _{k^{\prime }}\alpha _{k^{\prime }}+\frac{ g_{k^{\prime }}}2\right] -\Delta \Gamma B_k+\frac \Delta 2\Gamma D_k\left[ r+\sum_kB_kD_k\right] =0, \label{sub_2up} \end{equation} \begin{equation} r\left[ \omega _k\beta _k-\frac{g_k}2\right] +\sum_{k^{\prime }}2b_2\left( k,k^{\prime }\right) \left[ \omega _{k^{\prime }}\beta _{k^{\prime }}-\frac{ g_{k^{\prime }}}2\right] +\Delta \Gamma A_k-\frac \Delta 2\Gamma D_k\left[ 1+\sum_kA_kD_k\right] =0, \label{sub_2down} \end{equation} \begin{eqnarray} &&b_1(k_1,k_2)\left( \omega _{k_1}+\omega _{k_2}\right) +\frac \Delta 2\Gamma \left[ r+\sum_kB_kD_k\right] b_1(k_1,k_2) \nonumber \\ &&-\frac \Delta 2b_2(k_1,k_2)\Gamma +\frac \Delta 2\Gamma \left[ B_{k_1}D_{k_2}+B_{k_2}D_{k_1}\right] -\frac \Delta 4\Gamma D_{k_1}D_{k_2}\left[ r+\sum_kB_kD_k\right] =0, \label{sub_3up} \end{eqnarray} \begin{eqnarray} &&b_2(k_1,k_2)\left( \omega _{k_1}+\omega _{k_2}\right) +\frac \Delta {2r}\Gamma \left[ 1+\sum_kA_kD_k\right] b_2(k_1,k_2) \nonumber \\ &&-\frac \Delta 2b_1(k_1,k_2)\Gamma +\frac \Delta 2\Gamma \left[ A_{k_1}D_{k_2}+A_{k_2}D_{k_1}\right] -\frac \Delta 4\Gamma D_{k_1}D_{k_2}\left[ 1+\sum_kA_kD_k\right] =0, \label{sub_3down} \end{eqnarray} \end{widetext} where \begin{eqnarray*} \ \ A_k &=&\sum_{k^{\prime }}b_1(k^{\prime },k)D_{k^{\prime }}, \\ \ \ B_k &=&\sum_{k^{\prime }}b_2(k^{\prime },k)D_{k^{\prime }}. \end{eqnarray*} The self-consistent solutions for the four coupled equations Eqs. (\ref {sub_2up}), (\ref{sub_2down}), (\ref{sub_3up}) and (\ref{sub_3down}) will give all results in the second-order DFS. If set $r=1,\alpha _k=-\beta _k$ and $b_1(k_1,k_2)=b_2(k_1,k_2)$, Eqs. (\ref{Eq_21}) and (\ref{Eq_22}) in the symmetric case are recovered completely.
\section{Gaussian-logarithmical integration for the continuous integral}
The symmetric case is also used to illustrate clearly an effective numerical approach to the calculation of the summations. In the zeroth-order approximation, which is also the well-known SH ansatz, we can set \[ \alpha _k=\alpha _k^{\prime }g_k, \] so that Eq. (\ref{Eq_dis}) becomes \[ \alpha _k^{\prime }=-\frac{1/2}{\omega _k+\Delta \exp \left( -2\sum_k\alpha _k^{\prime 2}g_k^2\right) }, \] and $\alpha _k^{\prime }$ is related to $g_k$ only implicitly.
According to the spectral density, we have \begin{equation} \alpha ^{\prime }(\omega )=-\frac{1/2}{\omega +\Delta \exp \left[ -\frac 2\pi \int_0^{\omega _c}d\omega ^{\prime }\alpha ^{\prime 2}(\omega ^{\prime })J(\omega ^{\prime })\right] }, \label{SH_cont} \end{equation} which can be solved numerically by iterations.
In the second-order approximation, we can set \begin{eqnarray*} \alpha _k &=&\alpha _k^{\prime }g_k, \\ b_{k_1,k_2} &=&b_{k_1,k_2}^{\prime }g_{_{k_1}}g_{_{k_2}}. \end{eqnarray*} Inserting these into Eqs. (\ref{Eq_21}) and (\ref{Eq_22}) gives \[ \alpha _k^{\prime }=\frac{-\frac 12+2\sum_{k^{\prime }}g_{k^{\prime }}^2b_{k,k^{\prime }}^{\prime }\left[ \left( \omega _{k^{\prime }}-\Delta \eta \right) \alpha _{k^{\prime }}^{\prime }+1/2\right] }{\omega _k+\Delta \eta \left( 1+4\zeta \right) }, \]
\[ b_{k_1,k_2}^{\prime }=\frac{\alpha _{k_1}^{\prime }\alpha _{k_2}^{\prime }\left( 1+4\zeta \right) -\sum_{k^{\prime }}g_{k^{\prime }}^2\alpha _{k^{\prime }}^{\prime }\left( b_{k_1,k^{\prime }}^{\prime }\alpha _{k_2}^{\prime }+b_{k_2,k^{\prime }}^{\prime }\alpha _{k_1}^{\prime }\right) }{2\zeta +\left( \omega _{k_1}+\omega _{k_2}\right) /\left( \Delta \eta \right) }, \]
where \[ \zeta =\sum_kg_k^2\sum_{k^{\prime }}g_{k^{\prime }}^2b_{k,k^{\prime }}^{\prime }\alpha _{k^{\prime }}^{\prime }\alpha _k^{\prime }. \] Given $g_k$, both $\alpha _k^{\prime }$ and $b_{k_1,k_2}^{\prime }$ can be obtained self-consistently. Note that each $k$-summation takes the form $\sum_kg_k^2I(k)$, where $I(k)$ does not depend on $g_k^2$ explicitly, and so both $\alpha _k^{\prime }$ and $b_{k_1,k_2}^{\prime }$ are functionals of $g_k$. Without loss of generality, $k$ corresponds to $\omega $ one-to-one, and the $k$-summation can be transformed into an $\omega $ integral as \[ \sum_kg_k^2I(k)\rightarrow \int_0^{\omega _c}d\omega \frac{J(\omega )}\pi I(\omega ), \] so we have \begin{equation} \alpha ^{\prime }(\omega )=\frac{-\frac 12+\xi (\omega )-2\Delta \eta \chi (\omega )}{\omega +\Delta \eta \left( 1+4\zeta \right) }, \label{displacement} \end{equation}
\begin{equation} b^{\prime }\left( \omega _1,\omega _2\right) =\frac{\alpha ^{\prime }(\omega _1)\alpha ^{\prime }(\omega _2)\left( 1+4\zeta \right) -\kappa (\omega _1,\omega _2)}{2\zeta +\left( \omega _{_1}+\omega _{_2}\right) /(\Delta \eta )}, \label{sec_coeff} \end{equation} where \begin{eqnarray*} \xi (\omega ) &=&\int_0^{\omega _c}d\omega ^{\prime }\frac{J(\omega ^{\prime })}\pi \left[ 2\omega ^{\prime }\alpha ^{\prime }(\omega ^{\prime })+1\right] b^{\prime }\left( \omega ,\omega ^{\prime }\right) , \\ \chi (\omega ) &=&\int_0^{\omega _c}d\omega ^{\prime }\frac{J(\omega ^{\prime })}\pi \alpha ^{\prime }(\omega ^{\prime })b^{\prime }\left( \omega ,\omega ^{\prime }\right) , \\ \kappa (\omega _1,\omega _2) &=&\chi (\omega _1)\alpha ^{\prime }(\omega _2)+\chi (\omega _2)\alpha ^{\prime }(\omega _1), \end{eqnarray*} are functions of $\omega $, and \begin{eqnarray*} \zeta &=&\int_0^{\omega _c}d\omega \frac{J(\omega )}\pi \int_0^{\omega _c}d\omega ^{\prime }\frac{J(\omega ^{\prime })}\pi \alpha ^{\prime }(\omega )\alpha ^{\prime }(\omega ^{\prime })b^{\prime }\left( \omega ,\omega ^{\prime }\right) , \\ \eta &=&\exp \left[ -2\int_0^{\omega _c}d\omega ^{\prime }\frac{J(\omega ^{\prime })}\pi \alpha ^{\prime 2}(\omega ^{\prime })\right] , \end{eqnarray*} are constants. If both $\alpha ^{\prime }(\omega )$ and $b^{\prime }\left( \omega _1,\omega _2\right) $ are obtained, all observables can in turn be calculated. For example, using Eq. (\ref{Eq_2_en}), the energy in the second-order DFS can be calculated as \begin{equation} E=\int_0^{\omega _c}d\omega \frac{J(\omega )}\pi \alpha ^{\prime }(\omega )\left[ \omega \alpha ^{\prime }(\omega )+1\right] -\frac 12\Delta \eta \left( 1+4\zeta \right) \ . \end{equation}
The self-consistent solutions of the coupled equations (\ref{displacement}) and (\ref{sec_coeff}) cannot be obtained analytically; a numerical calculation has to be performed. Note that the low frequency modes play the dominant role in the QPT of the sub-Ohmic spin-boson model. At the critical point, there is an infrared divergence of the integrand, of the type $\int_0^{\omega _c}\omega ^{s-2}d\omega $ in the limit $\omega \rightarrow 0$ for the sub-Ohmic bath, which is known as the infrared catastrophe. Thanks to the Gaussian quadrature rules, the zero frequency is never touched: we can discretize the whole frequency interval with Gaussian grids, and the integral can be obtained numerically exactly with a large number of Gaussian grid points. It is, however, very time consuming to calculate the integrals in this way, especially for the high-dimensional integrals involved in the high-order DFS. According to the structure of the integrand, it is not economical to treat the high and low frequency regimes on an equal footing. To increase the efficiency, we combine the logarithmic discretization and the Gaussian quadrature rule. First, we divide the $\omega $ interval $[0,1]$ into $M+1$ sub-intervals $[\Lambda ^{-(m+1)},\Lambda ^{-m}]\;(m=0,1,\ldots,M-1)\;$and$\;[0,\Lambda ^{-M}]$, and then we apply the Gaussian quadrature rule to each logarithmic sub-interval. The continuous integral is thus calculated by the following summation \begin{equation} \int_0^1J(\omega )I(\omega )d\omega =\sum_{m=0}^M\sum_{n=1}^NW_{m,n}J(\omega _{m,n})I(\omega _{m,n}), \label{combination} \end{equation} where $N$ is the number of Gaussian points inserted in each sub-interval and $W_{m,n}$ is the corresponding Gaussian weight.
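A minimal sketch (Python) of this combined grid and of the summation in Eq.~(\ref{combination}) is given below; the values $M=6$, $N=9$, $\Lambda=9$ quoted below in this appendix are used as defaults, and the final lines are only a simple accuracy check on a spectral function with a known integral.
\begin{verbatim}
import numpy as np

def gl_grid(M=6, N=9, Lam=9.0):
    """Gaussian-logarithmical grid: N-point Gauss-Legendre rule on each
    logarithmic sub-interval [Lam^-(m+1), Lam^-m], m = 0..M-1, plus [0, Lam^-M]."""
    x, w = np.polynomial.legendre.leggauss(N)     # nodes/weights on [-1, 1]
    edges = [(Lam**-(m + 1), Lam**-m) for m in range(M)] + [(0.0, Lam**-M)]
    pts = [0.5 * (b - a) * x + 0.5 * (b + a) for a, b in edges]
    wts = [0.5 * (b - a) * w for a, b in edges]
    return np.concatenate(pts), np.concatenate(wts)

# accuracy check: the integral of J(w)/pi = 2*lam*w^s over [0,1] equals 2*lam/(s+1)
s, lam = 0.5, 0.05
omega, W = gl_grid()
print(np.sum(W * 2.0 * lam * omega**s), 2.0 * lam / (s + 1.0))
\end{verbatim}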
\begin{figure}
\caption{ Magnetization $\langle \sigma _z\rangle$ as a function of the coupling strength $\lambda$ in the GSH ansatz. (a) For $s = 0.6$, converged results within GL integration (open squares), numerically exact ones (solid lines), and those within logarithmic discretization with different truncation numbers $K = 10, 20$, and $30$. (b) The converged magnetization within GL integration (open circles) and logarithmic discretization (filled squares) for $s = 0.2,0.4$, and $0.6$. Numerically exact results are denoted by the solid lines.}
\label{CompareIntegral}
\end{figure}
\begin{figure}
\caption{ (Color online) Magnetization $\langle \sigma _z\rangle$ as a function of the coupling strength $\lambda$ in the second-order DFS within GL integration using different $M, N$, and $\Lambda$ for (a) $s=0.6$ and (b) $s=0.8$.}
\label{Fig_converge}
\end{figure}
To demonstrate the efficiency of the Gaussian-logarithmical (GL) integration, we first apply it to the GSH ansatz, which is also the zeroth-order approximation of the DFS. The one-dimensional integral can be performed numerically exactly by Gaussian integration over the whole interval with a huge number of discretization points, and the corresponding results can be regarded as a benchmark. After careful examination, using the GL technique, converged results for the magnetization are achieved if we set $M=6,N=9,$ and $\Lambda =9$. The corresponding results for $s=0.6$ are presented in Fig. \ref{CompareIntegral}(a) with open squares, which agree excellently with the numerically exact results obtained with a huge number of discretization points in the Gaussian integration (solid lines).
We can also perform the logarithmic discretization of the bosonic energy band, as widely used in previous studies, such as the NRG \cite{Bulla} and the multi-coherent-state approach \cite{mD1}. In the GSH ansatz, this can easily be done by setting \begin{equation} g_k^2=\int_{\Lambda ^{-(k+1)}}^{\Lambda ^{-k}}\frac{J(\omega )}\pi d\omega ,\;\omega _k=\frac 1{g_k^2}\int_{\Lambda ^{-(k+1)}}^{\Lambda ^{-k}}\frac{ J(\omega )}\pi \omega d\omega , \label{Dis_GSH_H} \end{equation} in Eqs. (A1-A4) of Appendix A. The spectral density is truncated to a number $K$ of modes. The summation is performed over the integer $k$ directly, and the self-consistent solution in this discretized form can also be obtained. The logarithmic grid is chosen as $\Lambda =2$, the same as in Refs. \cite{Bulla,mD1}. The magnetization as a function of $\lambda $ for $s=0.6$ with such a logarithmic discretization is collected in Fig. \ref{CompareIntegral}(a) for different truncation numbers $K$ of bosonic modes. Converged results can also be obtained for $K\geq 20$, which are, however, obviously different from the numerically exact ones. Note that this kind of logarithmic discretization of the bosonic energy band at the very beginning is not equivalent to the logarithmic discretization of the continuous integrals derived at the end of the DFS approach.
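For completeness, a short sketch (Python) of the discretization of Eq.~(\ref{Dis_GSH_H}) for $J(\omega)=2\pi\lambda\omega^s$ and $\omega_c=1$ is shown below; the closed-form integrals are elementary, $\Lambda=2$ and $K=20$ follow the text, and the values of $s$ and $\lambda$ are only example inputs.
\begin{verbatim}
import numpy as np

def log_discretize(s, lam, Lam=2.0, K=20):
    """g_k^2 and omega_k from Eq. (Dis_GSH_H) for J(w) = 2*pi*lam*w^s, omega_c = 1."""
    k = np.arange(K)
    upper, lower = Lam**(-k), Lam**(-(k + 1))
    g2 = 2.0 * lam / (s + 1.0) * (upper**(s + 1.0) - lower**(s + 1.0))
    omega = 2.0 * lam / (s + 2.0) * (upper**(s + 2.0) - lower**(s + 2.0)) / g2
    return omega, g2

omega_k, g2_k = log_discretize(s=0.6, lam=0.1)    # example values of s and lambda
\end{verbatim}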
The converged magnetization within both the GL integration and the logarithmic discretization for different values of $s$ is carefully examined, and the results are exhibited in Fig. \ref{CompareIntegral}(b). The deviation between these two converged results increases with $s$ and becomes remarkable for $s\ge 0.3$.
Then we turn to the second-order DFS study, a central issue of this work. A two-dimensional integral is involved in this case, so a direct Gaussian integration with a huge number of discretization points is practically difficult. Fortunately, it has been convincingly shown above that, in the framework of the GSH ansatz, the Gaussian-logarithmic discretization of the continuous integral with dozens of grid points can effectively give results with very high accuracy. Therefore we extend this numerical technique to the present case. Interestingly, an excellent convergence behavior for (a) $s=0.6$ and (b) $s=0.8$ is demonstrated in Fig. \ref{Fig_converge} for different values of $M,N$, and $\Lambda $. The converged results obtained in this way constitute the main achievement of this work.
\end{document} |
\begin{document}
\title{Digital nets in dimension two with the optimal order of $L_p$ discrepancy}
\begin{abstract} We study the $L_p$ discrepancy of two-dimensional digital nets for finite $p$. In the year 2001 Larcher and Pillichshammer identified a class of digital nets for which the symmetrized version in the sense of Davenport has $L_2$ discrepancy of the order $\sqrt{\log N}/N$, which is best possible due to the celebrated result of Roth. However, it remained open whether this discrepancy bound also holds for the original digital nets without any modification.
In the present paper we identify nets from the above mentioned class for which the symmetrization is not necessary in order to achieve the optimal order of $L_p$ discrepancy for all $p \in [1,\infty)$.
Our findings are in the spirit of a paper by Bilyk from 2013, who considered the $L_2$ discrepancy of lattices consisting of the elements $(k/N,\{k \alpha\})$ for $k=0,1,\ldots,N-1$, and who gave Diophantine properties of $\alpha$ which guarantee the optimal order of $L_2$ discrepancy. \end{abstract}
\centerline{\begin{minipage}[hc]{130mm}{ {\em Keywords:} $L_p$ discrepancy, digital nets, Hammersley net\\ {\em MSC 2010:} 11K06, 11K38} \end{minipage}}
\allowdisplaybreaks
\section{Introduction} Discrepancy is a measure for the irregularities of point distributions in the unit interval (see, e.g., \cite{kuinie}). Here we study point sets $\mathcal{P}$ with $N$ elements in the two-dimensional unit interval $[0,1)^2$. We define the {\it discrepancy function} of such a point set by $$ \Delta_{\mathcal{P}}(\boldsymbol{t})=\frac{1}{N}\sum_{\boldsymbol{z}\in\mathcal{P}}\boldsymbol{1}_{[\boldsymbol{0},\boldsymbol{t})}(\boldsymbol{z})-t_1t_2, $$ where for $\boldsymbol{t}=(t_1,t_2)\in [0,1]^2$ we set $[\boldsymbol{0},\boldsymbol{t})=[0,t_1)\times [0,t_2)$ with area $t_1t_2$ and denote by $\boldsymbol{1}_{[\boldsymbol{0},\boldsymbol{t})}$ the indicator function of this interval. The {\it $L_p$ discrepancy} for $p\in [1,\infty)$ of $\mathcal{P}$ is given by
$$ L_{p}(\mathcal{P}):=\|\Delta_{\mathcal{P}}\|_{L_{p}([0,1]^2)}=\left(\int_{[0,1]^2}|\Delta_{\mathcal{P}}(\boldsymbol{t})|^p\,\mathrm{d} \boldsymbol{t}\right)^{\frac{1}{p}} $$ and the {\it star discrepancy} or {\it $L_{\infty}$ discrepancy} of $\mathcal{P}$ is defined as
$$ L_{\infty}(\mathcal{P}):=\|\Delta_{\mathcal{P}}\|_{L_{\infty}([0,1]^2)}=\sup_{\boldsymbol{t} \in [0,1]^2}|\Delta_{\mathcal{P}}(\boldsymbol{t})|. $$
The $L_p$ discrepancy is a quantitative measure for the irregularity of distribution of a point set. Furthermore, it is intimately related to the worst-case integration error of quasi-Monte Carlo rules; see \cite{DP10,kuinie, LP14,Nied92}.
It is well known that for every $p\in [1,\infty)$ we have\footnote{Throughout this paper, for functions $f,g:\mathbb{N} \rightarrow \mathbb{R}^+$, we write $g(N) \lesssim f(N)$, if there exists a $C>0$ such that $g(N) \le C f(N)$ with a positive constant $C$ that is independent of $N$. Likewise, we write $g(N) \gtrsim f(N)$ if $g(N) \geq C f(N)$. Further, we write $f(N) \asymp g(N)$ if the relations $g(N) \lesssim f(N)$ and $g(N) \gtrsim f(N)$ hold simultaneously.} \begin{equation} \label{roth} L_p(\mathcal{P}) \gtrsim_p \frac{\sqrt{\log{N}}}{N}, \end{equation} for every $N \ge 2$ and every $N$-element point set $\mathcal{P}$ in $[0,1)^2$. Here $\log$ denotes the natural logarithm. This was first shown by Roth \cite{Roth2} for $p = 2$ and hence for all $p \in [2,\infty]$ and later by Schmidt \cite{schX} for all $p\in(1,2)$. The case $p=1$ was added by Hal\'{a}sz \cite{hala}. For the star discrepancy we have according to Schmidt~\cite{Schm72distrib} that \begin{equation} \label{schmidt} L_{\infty}(\mathcal{P}) \gtrsim \frac{\log{N}}{N}, \end{equation} for every $N \ge 2$ and every $N$-element point set $\mathcal{P}$ in $[0,1)^2$.
\paragraph{Irrational lattices.} It is well known that the lower bounds in \eqref{roth} and \eqref{schmidt} are best possible in the order of magnitude in $N$. For example, when the irrational number $\alpha=[a_0;a_1,a_2,\ldots]$ has bounded partial quotients in its continued fraction expansion, then the lattice $\mathcal{P}_{\alpha}$ consisting of the points $(k/N,\{k \alpha\})$ for $k=0,1,\ldots,N-1$, where $\{\cdot\}$ denotes reduction modulo one, has optimal order of star discrepancy in the sense of \eqref{schmidt} (see, e.g., \cite{lerch} or \cite[Corollary~3.5 in combination with Lemma~3.7]{Nied92}). This is, in this generality, no longer true when, e.g., the $L_2$ discrepancy is considered. However, in 1956 Davenport~\cite{daven} showed that the symmetrized version $\mathcal{P}_{\alpha}^{{\rm sym}}:=\mathcal{P}_{\alpha}\cup \mathcal{P}_{-\alpha}$ of $\mathcal{P}_{\alpha}$ consisting of $2N$ points has $L_2$ discrepancy of the order $\sqrt{\log N}/N$, which is optimal with respect to \eqref{roth}.
Later Bilyk~\cite{bil} introduced a further condition on $\alpha$ which guarantees the optimal order of $L_2$ discrepancy without the process of symmetrization: $L_2(\mathcal{P}_{\alpha}) \asymp_{\alpha} \sqrt{\log N}/N$ if and only if the bounded partial quotients satisfy $|\sum_{k=0}^{n} (-1)^k a_k| \lesssim_{\alpha} \sqrt{n}$ for all $n$.
\paragraph{Digital nets.} In this paper we study analog questions for digital nets over $\mathbb{Z}_2$, which are an important class of point sets with low star discrepancy. Since we only deal with digital nets over $\mathbb{Z}_2$ and in dimension 2 we restrict the necessary definitions to this case. For the general setting we refer to the books of Niederreiter~\cite{Nied92} (see also \cite{Nied87}), of Dick and Pillichshammer~\cite{DP10}, or of Leobacher and Pillichshammer~\cite{LP14}.
Let $n\in \mathbb{N}$ and let $\mathbb{Z}_2$ be the finite field of order 2, which we identify with the set $\{0,1\}$ equipped with arithmetic operations modulo 2. A two-dimensional digital net over $\mathbb{Z}_2$ is a point set $\{\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{2^n-1}\}$ in $[0,1)^2$, which is generated by two $n\times n$ matrices over $\mathbb{Z}_2$. The procedure is as follows. \begin{enumerate} \item Choose two $n \times n$ matrices $C_1$ and $C_2$ with entries from $\mathbb{Z}_2$. \item For $r\in\{0,1,\dots,2^n-1\}$ let $r=r_0+2r_1 +\cdots +2^{n-1}r_{n-1}$ with $r_i\in\{0,1\}$ for all $i\in\{0,\dots,n-1\}$ be the dyadic expansion of $r$, and set $\vec{r}=(r_0,\ldots,r_{n-1})^{\top}\in \mathbb{Z}_2^n$. \item For $j=1,2$ compute $C_j \vec{r}=:(y_{r,1}^{(j)},\ldots ,y_{r,n}^{(j)})^{\top}\in \mathbb{Z}_2^n$, where all arithmetic operations are over $\mathbb{Z}_2$. \item For $j=1,2$ compute $x_r^{(j)}=\frac{y_{r,1}^{(j)}}{2}+\cdots +\frac{y_{r,n}^{(j)}}{2^n}$ and set $\boldsymbol{x}_{r}=(x_r^{(1)},x_{r}^{(2)})\in [0,1)^2$. \item Set $\mathcal{P}:=\{\boldsymbol{x}_0,\dots,\boldsymbol{x}_{2^n-1}\}$. We call $\mathcal{P}$ a {\it digital net over $\mathbb{Z}_2$} generated by $C_1$ and $C_2$. \end{enumerate}
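The construction in steps 1--5 can be implemented in a few lines. The following sketch (Python) is for illustration only; the helper name \texttt{digital\_net} is ours, and the example reproduces the Hammersley net defined in the next paragraph.
\begin{verbatim}
import numpy as np

def digital_net(C1, C2):
    """Digital net over Z_2 generated by the n x n binary matrices C1, C2 (steps 1-5)."""
    n = C1.shape[0]
    pts = []
    for r in range(2**n):
        rvec = np.array([(r >> i) & 1 for i in range(n)])   # digits r_0, ..., r_{n-1}
        point = []
        for C in (C1, C2):
            y = C.dot(rvec) % 2                              # arithmetic over Z_2
            point.append(sum(int(y[i]) / 2**(i + 1) for i in range(n)))
        pts.append(point)
    return np.array(pts)

# Example: the two-dimensional Hammersley net in base 2 for n = 4
n = 4
C1 = np.eye(n, dtype=int)[::-1]     # anti-diagonal matrix
C2 = np.eye(n, dtype=int)           # identity matrix
P_ham = digital_net(C1, C2)         # first coordinates are r / 2^n, r = 0, ..., 2^n - 1
\end{verbatim}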
One of the most well-known digital nets is the {\it 2-dimensional Hammersley net $\mathcal{P}^{{\rm Ham}}$ in base 2} which is generated by the matrices $$C_1 = \left ( \begin{array}{llcll} 0 & 0 & \cdots & 0 & 1\\ 0 & 0 & \cdots & 1 & 0 \\ \multicolumn{5}{c}\dotfill\\ 0 & 1 & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 0 \end{array} \right ) \ \ \mbox{ and } \ \ C_2 = \left ( \begin{array}{llcll} 1 & 0 & \cdots & 0 & 0\\ 0 & 1 & \cdots & 0 & 0 \\ \multicolumn{5}{c}\dotfill\\ 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & \cdots & 0 & 1 \end{array} \right ).$$ Due to the choice of $C_1$ the first coordinates of the elements of the Hammersley net are $x_r^{(1)}=r/2^n$ for $r=0,1,\ldots,2^n -1$.
\paragraph{$(0,n,2)$-nets in base 2.} A point set $\mathcal{P}$ consisting of $2^n$ elements in $[0,1)^2$ is called a {\it $(0,n,2)$-net in base 2}, if every dyadic box $$\left[\frac{m_1}{2^{j_1}},\frac{m_1+1}{2^{j_1}}\right) \times \left[\frac{m_2}{2^{j_2}},\frac{m_2+1}{2^{j_2}}\right),$$ where $j_1,j_2\in\mathbb{N}_0$ and $m_1\in\{0,1,\dots,2^{j_1}-1\}$ and $m_2\in\{0,1,\dots,2^{j_2}-1\}$ with volume $2^{-n}$, i.e. with $j_1+j_2=n$, contains exactly one element of $\mathcal{P}$.
It is well known that a digital net over $\mathbb{Z}_2$ is a $(0,n,2)$-net in base 2 if and only if the following condition holds: For every choice of integers $d_1,d_2\in \mathbb{N}_0$ with $d_1+d_2=n$ the first $d_1$ rows of $C_1$ and the first $d_2$ rows of $C_2$ are linearly independent.
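This rank condition is easy to verify by Gaussian elimination over $\mathbb{Z}_2$; a small sketch (Python) of such a check, with hypothetical helper names, is given below.
\begin{verbatim}
def gf2_rank(rows):
    """Rank over Z_2 of a list of equal-length 0/1 rows (Gaussian elimination)."""
    A = [list(map(int, r)) for r in rows]
    rank = 0
    for col in range(len(A[0]) if A else 0):
        pivot = next((i for i in range(rank, len(A)) if A[i][col]), None)
        if pivot is None:
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        for i in range(len(A)):
            if i != rank and A[i][col]:
                A[i] = [a ^ b for a, b in zip(A[i], A[rank])]
        rank += 1
    return rank

def is_0n2_net(C1, C2):
    """Check the (0,n,2)-net criterion for binary generating matrices C1, C2."""
    n = len(C1)
    return all(gf2_rank(list(C1[:d1]) + list(C2[:n - d1])) == n for d1 in range(n + 1))
\end{verbatim}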
Every digital $(0,n,2)$-net achieves the optimal order of star discrepancy in the sense of \eqref{schmidt}, whereas there exist nets which do not have the optimal order of $L_p$ discrepancy for finite $p$. One example is the Hammersley net as defined above for which we have (see \cite{FauPil,Lar,Pill}) $$L_p(\mathcal{P}^{{\rm Ham}})=\left(\left(\frac{n}{8 \cdot 2^n}\right)^p+O(n^{p-1})\right)^{1/p} \ \ \mbox{for all $p\in [1,\infty)$}$$ and $$L_{\infty}(\mathcal{P}^{{\rm Ham}})=\frac{1}{2^n} \left(\frac{n}{3}+\frac{13}{9}-(-1)^n \frac{4}{9 \cdot 2^n}\right).$$
\paragraph{Symmetrized nets.}
Motivated by the results of Davenport for irrational lattices, Larcher and Pillichshammer~\cite{lp01} studied the symmetrization of digital nets. Let $\boldsymbol{x}_r=(x_r,y_r)$ for $r=0,1,\ldots,2^n-1$ be the elements of a digital net generated by the matrices $$C_1 = \left ( \begin{array}{llcll} 0 & 0 & \cdots & 0 & 1\\ 0 & 0 & \cdots & 1 & 0 \\ \multicolumn{5}{c}\dotfill\\ 0 & 1 & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 0 \end{array} \right )\ \ \mbox{ and }\ \ C_2 = \left ( \begin{array}{llcll} 1 & a_{1,2} & \cdots & a_{1,n-1} & a_{1,n}\\ 0 & 1 & \cdots & a_{2,n-1} & a_{2,n} \\ \multicolumn{5}{c}\dotfill\\ 0 & 0 & \cdots & 1 & a_{n-1,n} \\ 0 & 0 & \cdots & 0 & 1 \end{array} \right ), $$ with entries $a_{j,k} \in \mathbb{Z}_2$ for $1 \le j <k \le n$. The matrix $C_2$ is a so-called ``{\it non-singular upper triangular (NUT) matrix}''. Then the {\it symmetrized net} $\mathcal{P}^{{\rm sym}}$ consisting of $(x_r,y_r)$ and $(x_r,1-y_r)$ for $r=0,1,\ldots,2^n-1$ has $L_2$ discrepancy of optimal order $$L_2(\mathcal{P}^{{\rm sym}}) \asymp \frac{\sqrt{n}}{2^{n+1}} \ \ \ \mbox{for every $n \in \mathbb{N}$.}$$
In the present paper we show, in the spirit of the paper of Bilyk~\cite{bil}, that there are NUT matrices $C_2$ for which symmetrization is not necessary in order to achieve the optimal order of $L_2$ discrepancy. Our result holds for the $L_p$ discrepancy for all finite $p$, not only for the $L_2$ case.
\section{The result}
The central aim of this paper is to provide conditions on the generating matrices $C_1,C_2$ which lead to the optimal order of $L_p$ discrepancy of the corresponding nets. We do so for a class of nets which are generated by $n\times n$ matrices over $\mathbb{Z}_2$ of the following form: \begin{equation} \label{matrixa} C_1= \begin{pmatrix} 0 & 0 & \cdots & 0 & 1\\ 0 & 0 & \cdots & 1 & 0 \\ \multicolumn{5}{c}\dotfill\\ 0 & 1 & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 0 \end{pmatrix} \end{equation} and a NUT matrix of the special form \begin{equation} C_2= \begin{pmatrix} 1 & a_{1} & a_{1} & \cdots & a_{1} & a_{1} & a_{1} \\ 0 & 1 & a_{2} & \cdots & a_{2} & a_{2} & a_{2} \\ 0 & 0 & 1 & \cdots & a_{3} & a_{3} & a_{3} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \\ 0 & 0 & 0 & \cdots & 1 & a_{n-2} & a_{n-2} \\ 0 & 0 & 0 & \cdots & 0 & 1 & a_{n-1} \\ 0 & 0 & 0 & \cdots & 0 & 0 & 1 \end{pmatrix}, \end{equation} where $a_i\in \mathbb{Z}_2$ for all $i\in\{1,\dots,n-1\}$. We study the $L_p$ discrepancy of the digital net $\mathcal{P}_{\boldsymbol{a}}$ generated by $C_1$ and $C_2$, where $\boldsymbol{a}=(a_1,\dots,a_{n-1})\in \mathbb{Z}_2^{n-1}$. The set $\mathcal{P}_{\boldsymbol{a}}$ can be written as \begin{equation}\label{darstPa} \mathcal{P}_{\boldsymbol{a}}=\left\{\bigg(\frac{t_n}{2}+\dots+\frac{t_1}{2^n},\frac{b_1}{2}+\dots+\frac{b_n}{2^n}\bigg):t_1,\dots, t_n \in\{0,1\}\right\}, \end{equation} where $b_k=t_k\oplus a_{k}(t_{k+1}\oplus \dots \oplus t_n)$ for $k\in\{1,\dots,n-1\}$ and $b_n=t_n$. The operation $\oplus$ denotes addition modulo 2.
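The representation \eqref{darstPa} also gives a direct way to generate the point set; a sketch (Python) is shown below, where \texttt{net\_Pa} is a hypothetical helper name and $\boldsymbol{a}$ is passed as a $0/1$ tuple of length $n-1$.
\begin{verbatim}
from itertools import product

def net_Pa(a):
    """Point set P_a from (darstPa): b_k = t_k XOR a_k*(t_{k+1} XOR ... XOR t_n), b_n = t_n."""
    n = len(a) + 1
    pts = []
    for t in product((0, 1), repeat=n):                          # t = (t_1, ..., t_n)
        x = sum(t[n - 1 - i] / 2**(i + 1) for i in range(n))     # t_n/2 + ... + t_1/2^n
        suffix = [0] * (n + 1)                                   # suffix[k] = t_{k+1} xor ... xor t_n
        for k in range(n - 1, -1, -1):
            suffix[k] = suffix[k + 1] ^ t[k]
        b = [t[k] ^ (a[k] & suffix[k + 1]) for k in range(n - 1)] + [t[n - 1]]
        y = sum(b[k] / 2**(k + 1) for k in range(n))
        pts.append((x, y))
    return pts

P1 = net_Pa((1,) * 7)     # the net P_1 for n = 8
P0 = net_Pa((0,) * 7)     # the Hammersley net P_0 for n = 8
\end{verbatim}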
The following result states that the order of the $L_p$ discrepancy of the digital nets $\mathcal{P}_{\boldsymbol{a}}$ is determined by the number of zero elements in $\boldsymbol{a}$.
\begin{theorem} \label{theo1} Let $h_n=h_n(\boldsymbol{a})=\sum_{i=1}^{n-1}(1-a_i)$ be the number of zeroes in the tuple $\boldsymbol{a}$. Then we have for all $p\in[1,\infty)$ $$L_p(\mathcal{P}_{\boldsymbol{a}})\asymp_p \frac{\max\{\sqrt{n},h_n(\boldsymbol{a})\}}{2^n}.$$ In particular, the net $\mathcal{P}_{\boldsymbol{a}}$ achieves the optimal order of $L_p$ discrepancy for all $p\in [1,\infty)$ if and only if $h_n(\boldsymbol{a})\lesssim \sqrt{n}$. \end{theorem}
The proof of Theorem~\ref{theo1}, which will be given in Section~\ref{haarf}, is based on Littlewood-Paley theory and tight estimates of the Haar coefficients of the discrepancy function $\Delta_{\mathcal{P}_{\boldsymbol{a}}}$.
For example, if $\boldsymbol{a}=\boldsymbol{0}:=(0,0,\ldots,0)$ we get the Hammersley net $\mathcal{P}^{{\rm Ham}}$ in dimension 2. We have $h_n(\boldsymbol{0})=n-1$ and hence $$L_p(\mathcal{P}_{\boldsymbol{0}})\asymp_p \frac{n}{2^n}.$$ If $\boldsymbol{a}=\boldsymbol{1}:=(1,1,\ldots,1)$, then we have $h_n(\boldsymbol{1})=0$ and hence $$L_p(\mathcal{P}_{\boldsymbol{1}})\asymp_p \frac{\sqrt{n}}{2^n}.$$
\begin{remark} \rm The approach via Haar functions allows the precise computation of the $L_2$ discrepancy of digital nets via Parseval's identity. We did so for a certain class of nets in~\cite{Kritz}. It would be possible but tedious to do the same for the class $\mathcal{P}_{\boldsymbol{a}}$ of nets considered in this paper. However, we only executed the massive calculations for the special case where $\boldsymbol{a}=\boldsymbol{1}:=(1,1,\dots,1)$, hence where $C_2$ is a NUT matrix filled with ones in the upper right triangle. We conjecture that this net has the lowest $L_2$ discrepancy among the class of nets $\mathcal{P}_{\boldsymbol{a}}$ for a fixed $n\in\mathbb{N}$. The exact value of its $L_2$ discrepancy is given by \begin{equation}\label{LpP1} L_2(\mathcal{P}_{\boldsymbol{1}})=\frac{1}{2^n}\left(\frac{5n}{192}+\frac{15}{32}+\frac{1}{4\cdot 2^{n}}-\frac{1}{72\cdot 2^{2n}}\right)^{1/2}. \end{equation} We omit the lengthy proof, but its correctness may be checked with Warnock's formula~\cite{Warn} (see also \cite[Proposition~2.15]{DP10}) for small values of $n$. Compare \eqref{LpP1} with the exact $L_2$ discrepancy of $\mathcal{P}^{{\rm Ham}}=\mathcal{P}_{\boldsymbol{0}}$, which is given by (see \cite{FauPil,HaZa,Pill,Vi}) $$L_2(\mathcal{P}_{\boldsymbol{0}})=\frac{1}{2^n}\left(\frac{n^2}{64}+\frac{29n}{192}+\frac{3}{8}-\frac{n}{16 \cdot 2^n}+\frac{1}{4\cdot 2^n}-\frac{1}{72 \cdot 2^{2n}}\right)^{1/2}.$$ \end{remark}
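For such a check, the standard two-dimensional form of Warnock's formula can be coded in a few lines; a sketch (Python) is given below, with \texttt{warnock\_L2} a hypothetical name and the point set supplied, e.g., by the \texttt{net\_Pa} sketch after \eqref{darstPa}.
\begin{verbatim}
import numpy as np

def warnock_L2(P):
    """Exact L_2 star discrepancy of a point set P in [0,1)^2 via Warnock's formula."""
    P = np.asarray(P, dtype=float)           # shape (N, 2)
    N = P.shape[0]
    term1 = 1.0 / 9.0
    term2 = (2.0 / N) * np.sum(np.prod((1.0 - P**2) / 2.0, axis=1))
    term3 = np.sum(np.prod(1.0 - np.maximum(P[:, None, :], P[None, :, :]), axis=2)) / N**2
    return np.sqrt(term1 - term2 + term3)

# e.g. compare warnock_L2(net_Pa((1,) * (n - 1))) with the closed form above for small n
\end{verbatim}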
\section{The proof of Theorem~\ref{theo1} via Haar expansion of the discrepancy function} \label{haarf}
A dyadic interval of length $2^{-j}, j\in {\mathbb N}_0,$ in $[0,1)$ is an interval of the form $$ I=I_{j,m}:=\left[\frac{m}{2^j},\frac{m+1}{2^j}\right) \ \ \mbox{for } \ m\in \{0,1,\ldots,2^j-1\}.$$ The left and right half of $I_{j,m}$ are the dyadic intervals $I_{j+1,2m}$ and $I_{j+1,2m+1}$, respectively. The Haar function $h_{j,m}$ is the function on $[0,1)$ which is $+1$ on the left half of $I_{j,m}$, $-1$ on the right half of $I_{j,m}$ and 0 outside of $I_{j,m}$. The $L_\infty$-normalized Haar system consists of all Haar functions $h_{j,m}$ with $j\in{\mathbb N}_0$ and $m=0,1,\ldots,2^j-1$ together with the indicator function $h_{-1,0}$ of $[0,1)$. Normalized in $L_2([0,1))$ we obtain the orthonormal Haar basis of $L_2([0,1))$.
Let ${\mathbb N}_{-1}=\mathbb{N}_0 \cup \{-1\}$ and define ${\mathbb D}_j=\{0,1,\ldots,2^j-1\}$ for $j\in{\mathbb N}_0$ and ${\mathbb D}_{-1}=\{0\}$. For $\boldsymbol{j}=(j_1,j_2)\in{\mathbb N}_{-1}^2$ and $\boldsymbol{m}=(m_1,m_2)\in {\mathbb D}_{\boldsymbol{j}} :={\mathbb D}_{j_1} \times {\mathbb D}_{j_2}$, the Haar function $h_{\boldsymbol{j},\boldsymbol{m}}$ is given as the tensor product $$h_{\boldsymbol{j},\boldsymbol{m}}(\boldsymbol{t}) = h_{j_1,m_1}(t_1) h_{j_2,m_2}(t_2) \ \ \ \mbox{ for } \boldsymbol{t}=(t_1,t_2)\in[0,1)^2.$$
We speak of $I_{\boldsymbol{j},\boldsymbol{m}} = I_{j_1,m_1} \times I_{j_2,m_2}$ as dyadic boxes with level $|\boldsymbol{j}|=\max\{0,j_1\}+\max\{0,j_2\}$, where we set $I_{-1,0}=\boldsymbol{1}_{[0,1)}$. The system
$$ \left\{2^{\frac{|\boldsymbol{j}|}{2}}h_{\boldsymbol{j},\boldsymbol{m}}: \boldsymbol{j}\in\mathbb{N}_{-1}^2, \boldsymbol{m}\in \mathbb{D}_{\boldsymbol{j}}\right\} $$ is an orthonormal basis of $L_2([0,1)^2)$ and we have Parseval's identity which states that for every function $f\in L_2([0,1)^2)$ we have \begin{equation} \label{parseval}
\|f\|_{L_2([0,1)^2)}^2=\sum_{\boldsymbol{j}\in \mathbb{N}_{-1}^2} 2^{|\boldsymbol{j}|} \sum_{\boldsymbol{m}\in\mathbb{D}_{\boldsymbol{j}}} |\mu_{\boldsymbol{j},\boldsymbol{m}}|^2, \end{equation} where the numbers $\mu_{\boldsymbol{j},\boldsymbol{m}}=\mu_{\boldsymbol{j},\boldsymbol{m}}(f)=\langle f, h_{\boldsymbol{j},\boldsymbol{m}} \rangle =\int_{[0,1)^2} f(\boldsymbol{t}) h_{\boldsymbol{j},\boldsymbol{m}}(\boldsymbol{t})\,\mathrm{d}\boldsymbol{t}$ are the so-called Haar coefficients of $f$. There is no such identity for the $L_p$ norm of $f$ for $p \not=2$; however, for a function $f\in L_p([0,1)^2)$ we have a so-called Littlewood-Paley inequality. It involves the square function $S(f)$ of a function $f\in L_p([0,1)^2)$ which is given as
$$S(f) = \left( \sum_{\boldsymbol{j} \in \mathbb{N}_{-1}^2} \sum_{\boldsymbol{m} \in \mathbb{D}_{\boldsymbol{j}}} 2^{2|\boldsymbol{j}|} \, |\mu_{\boldsymbol{j},\boldsymbol{m}}|^2 \, {\mathbf 1}_{I_{\boldsymbol{j},\boldsymbol{m}}} \right)^{1/2},$$ where ${\mathbf 1}_I$ is the characteristic function of $I$.
\begin{lemma}[Littlewood-Paley inequality]\label{lpi}
Let $p \in (1,\infty)$ and let $f\in L_p([0,1)^2)$. Then
$$ \| S(f) \|_{L_p} \asymp_{p} \| f \|_{L_p}.$$ \end{lemma}
In the following let $\mu_{\boldsymbol{j},\boldsymbol{m}}$ denote the Haar coefficients of the local discrepancy function $\Delta_{\mathcal{P}_{\boldsymbol{a}}}$, i.e., $$\mu_{\boldsymbol{j},\boldsymbol{m}}=\int_{[0,1)^2} \Delta_{\mathcal{P}_{\boldsymbol{a}}}(\boldsymbol{t}) h_{\boldsymbol{j},\boldsymbol{m}}(\boldsymbol{t}) \,\mathrm{d} \boldsymbol{t}.$$ In order to estimate the $L_p$ discrepancy of $\mathcal{P}_{\boldsymbol{a}}$ by means of Lemma~\ref{lpi} we require good estimates of the Haar coefficients $\mu_{\boldsymbol{j},\boldsymbol{m}}$. This is a very technical and tedious task which we defer to the appendix. In the following we just collect the obtained bounds:
\begin{lemma} \label{coro1} Let $\boldsymbol{j}=(j_1,j_2)\in \mathbb{N}_{0}^2$. Then
\begin{itemize}
\item[(i)] if $j_1+j_2\leq n-3$ and $j_1,j_2\geq 0$ then $|\mu_{\boldsymbol{j},\boldsymbol{m}}| \lesssim 2^{-2n}$.
\item[(ii)] if $j_1+j_2\ge n-2$ and $0\le j_1,j_2\le n$ then $|\mu_{\boldsymbol{j},\boldsymbol{m}}| \lesssim 2^{-n-j_1-j_2}$ and
$|\mu_{\boldsymbol{j},\boldsymbol{m}}| = 2^{-2j_1-2j_2-4}$ for all but at most $2^n$ coefficients $\mu_{\boldsymbol{j},\boldsymbol{m}}$ with $\boldsymbol{m}\in {\mathbb D}_{\boldsymbol{j}}$.
\item[(iii)] if $j_1 \ge n$ or $j_2 \ge n$ then $|\mu_{\boldsymbol{j},\boldsymbol{m}}| = 2^{-2j_1-2j_2-4}$.
\end{itemize}
Now let $\boldsymbol{j}=(-1,k)$ or $\boldsymbol{j}=(k,-1)$ with $k\in \mathbb{N}_0$. Then
\begin{itemize}
\item[(iv)] if $k<n$ then $|\mu_{\boldsymbol{j},\boldsymbol{m}}| \lesssim 2^{-n-k}$.
\item[(v)] if $k\ge n$ then $|\mu_{\boldsymbol{j},\boldsymbol{m}}| = 2^{-2k-3}$.
\end{itemize}
Finally, if $h_n=\sum_{i=1}^{n-1}(1-a_i)$, then
\begin{itemize}
\item[(vi)] $\mu_{(-1,-1),(0,0)} = 2^{-n-3}(h_n+5)+2^{-2n-2}$.
\end{itemize} \end{lemma}
\begin{remark}\rm We remark that Lemma~\ref{coro1} shows that the only Haar coefficient that is relevant in our analysis is the coefficient $\mu_{(-1,-1),(0,0)}$. All other coefficients do not affect the order of $L_p$ discrepancy significantly: they are small enough such that their contribution to the overall $L_p$ discrepancy is of the order of Roth's lower bound.
The proof of Lemma~\ref{coro1} is split into several cases which take several pages of very technical and tedious computations. We would like to mention that the proof of the formula for the important coefficient $\mu_{(-1,-1),(0,0)}$ is manageable without excessive effort. \end{remark}
Now the proof of Theorem~\ref{theo1} can be finished by inserting the upper bounds on the Haar coefficients of $\Delta_{\mathcal{P}_{\boldsymbol{a}}}$ into Lemma~\ref{lpi}. This shows the upper bound. For details we refer to the paper \cite{HKP14} where the same method was applied (we remark that our Lemma~\ref{coro1} is a direct analog of \cite[Lemma~1]{HKP14}; hence the proof of Theorem~\ref{theo1} runs along the same lines as the proof of \cite[Theorem 1]{HKP14} but with \cite[Lemma~1]{HKP14} replaced by Lemma~\ref{coro1}).
The matching lower bound is a consequence of $$L_p(\mathcal{P}_{\boldsymbol{a}}) \ge L_1(\mathcal{P}_{\boldsymbol{a}}) =\int_{[0,1]^2} | \Delta_{\mathcal{P}_{\boldsymbol{a}}}(\boldsymbol{t})| \,\mathrm{d} \boldsymbol{t} \ge \left|\int_{[0,1]^2} \Delta_{\mathcal{P}_{\boldsymbol{a}}}(\boldsymbol{t}) \,\mathrm{d} \boldsymbol{t}\right|=|\mu_{(-1,-1),(0,0)}|$$ and item {\it (vi)} of Lemma~\ref{coro1}.
\section{Appendix: Computation of the Haar coefficients $\mu_{\boldsymbol{j},\boldsymbol{m}}$}
Let $\mathcal{P}$ be an arbitrary $2^n$-element point set in the unit square. The Haar coefficients of its discrepancy function $\Delta_{\mathcal{P}}$ are given as follows (see~\cite{hin2010}). We write $\boldsymbol{z}=(z_1,z_2)$.
\begin{itemize}
\item If $\boldsymbol{j}=(-1,-1)$, then
\begin{equation} \label{art1} \mu_{\boldsymbol{j},\boldsymbol{m}}=\frac{1}{2^n}\sum_{\boldsymbol{z}\in \mathcal{P}} (1-z_1)(1-z_2)-\frac14. \end{equation}
\item If $\boldsymbol{j}=(j_1,-1)$ with $j_1\in \mathbb{N}_0$, then
\begin{equation} \label{art2} \mu_{\boldsymbol{j},\boldsymbol{m}}=-2^{-n-j_1-1}\sum_{\boldsymbol{z}\in \mathcal{P}\cap I_{\boldsymbol{j},\boldsymbol{m}}} (1-|2m_1+1-2^{j_1+1}z_1|)(1-z_2)+2^{-2j_1-3}. \end{equation}
\item If $\boldsymbol{j}=(-1,j_2)$ with $j_2\in \mathbb{N}_0$, then
\begin{equation} \label{art3} \mu_{\boldsymbol{j},\boldsymbol{m}}=-2^{-n-j_2-1}\sum_{\boldsymbol{z}\in \mathcal{P}\cap I_{\boldsymbol{j},\boldsymbol{m}}} (1-|2m_2+1-2^{j_2+1}z_2|)(1-z_1)+2^{-2j_2-3}. \end{equation}
\item If $\boldsymbol{j}=(j_1,j_2)$ with $j_1,j_2\in \mathbb{N}_0$, then
\begin{align} \label{art4} \mu_{\boldsymbol{j},\boldsymbol{m}}=&2^{-n-j_1-j_2-2}\sum_{\boldsymbol{z}\in \mathcal{P}\cap I_{\boldsymbol{j},\boldsymbol{m}}} (1-|2m_1+1-2^{j_1+1}z_1|)(1-|2m_2+1-2^{j_2+1}z_2|) \nonumber \\ &-2^{-2j_1-2j_2-4}. \end{align}
\end{itemize}
In all these identities the first summands involving the sum over $\boldsymbol{z}\in \mathcal{P}\cap I_{\boldsymbol{j},\boldsymbol{m}}$ come from the counting part $\frac{1}{N}\sum_{\boldsymbol{z}\in\mathcal{P}}\boldsymbol{1}_{[\boldsymbol{0},\boldsymbol{t})}(\boldsymbol{z})$ and the second summands come from the linear part $-t_1t_2$ of the discrepancy function, respectively. Note that we could also write $\boldsymbol{z}\in \mathring{I}_{\boldsymbol{j},\boldsymbol{m}}$, where $\mathring{I}_{\boldsymbol{j},\boldsymbol{m}}$ denotes the interior of $I_{\boldsymbol{j},\boldsymbol{m}}$, since the summands in the formulas~\eqref{art2}--\eqref{art4} vanish if $\boldsymbol{z}$ lies on the boundary of the dyadic box. Hence, in order to compute the Haar coefficients of the discrepancy function, we have to deal with the sums over $\boldsymbol{z}$ which appear in the formulas above and to determine which points $\boldsymbol{z}=(z_1,z_2)\in \mathcal{P}$ lie in the dyadic box $I_{\boldsymbol{j},\boldsymbol{m}}$ with $\boldsymbol{j}\in \mathbb{N}_{-1}^2$ and $\boldsymbol{m}=(m_1,m_2)\in\mathbb{D}_{\boldsymbol{j}}$. If $m_1$ and $m_2$ are non-negative integers, then they have a dyadic expansion of the form \begin{equation} \label{mdyadic} m_1=2^{j_1-1}r_1+\dots+r_{j_1} \text{\, and \,} m_2=2^{j_2-1}s_1+\dots+s_{j_2} \end{equation} with digits $r_{i_1},s_{i_2}\in\{0,1\}$ for all $i_1\in\{1,\dots,j_1\}$ and $i_2\in\{1,\dots,j_2\}$, respectively. Let $\boldsymbol{z}=(z_1,z_2)=\big(\frac{t_n}{2}+\dots+\frac{t_1}{2^n},\frac{b_1}{2}+\dots+\frac{b_n}{2^n}\big)$ be a point of our point set $\mathcal{P}_{\boldsymbol{a}}$. Then $\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}$ if and only if \begin{equation} \label{cond} t_{n+1-k}=r_k \text{\, for all \,} k\in \{1,\dots, j_1\} \text{\, and \,} b_k=s_k \text{\, for all \,} k\in \{1,\dots, j_2\}. \end{equation} Further, for such a point $\boldsymbol{z}=(z_1,z_2)\in I_{\boldsymbol{j},\boldsymbol{m}}$ we have \begin{equation} \label{z1} 2m_1+1-2^{j_1+1}z_1=1-t_{n-j_1}-2^{-1}t_{n-j_1-1}-\dots-2^{j_1-n+1}t_1 \end{equation} and \begin{equation} \label{z2} 2m_2+1-2^{j_2+1}z_2=1-b_{j_2+1}-2^{-1}b_{j_2+2}-\dots-2^{j_2-n+1}b_n. \end{equation}
There are several parallel tracks between the proofs in this section and the proofs in~\cite[Section 3]{Kritz}, where we computed the Haar coefficients for a simpler class of digital nets. \\
Let in the following $\mathcal{H}_j:=\{i\in\{1,\dots,j\}: a_i=0\}$ for $j\in\{1,\dots,n-1\}$. Then $h_n=|\mathcal{H}_{n-1}|$ is the parameter as defined in Theorem~\ref{theo1}.
\paragraph{Case 1: $\boldsymbol{j}\in\mathcal{J}_1:=\{(-1,-1)\}$}
\begin{proposition} \label{prop1}
Let $\boldsymbol{j}\in \mathcal{J}_1$ and $\boldsymbol{m}\in \mathbb{D}_{\boldsymbol{j}}$. Then we have
$$ \mu_{\boldsymbol{j},\boldsymbol{m}}=\frac{h_n+5}{2^{n+3}}+\frac{1}{2^{2n+2}}. $$ \end{proposition}
\begin{proof}
By~\eqref{art1} we have
\begin{align*}
\mu_{\boldsymbol{j},\boldsymbol{m}}=& \frac{1}{2^n}\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}} (1-z_1)(1-z_2)-\frac14 \\
=&1-\frac{1}{2^n}\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}}z_1-\frac{1}{2^n}\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}}z_2+\frac{1}{2^n}\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}}z_1z_2-\frac14 \\
=&-\frac14+\frac{1}{2^n}+\frac{1}{2^n}\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}}z_1z_2,
\end{align*}
where we used $\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}}z_1=\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}}z_2=\sum_{l=0}^{2^n-1}l/2^n=2^{n-1}-2^{-1}$ in the last step.
It remains to evaluate $\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}}z_1z_2$. Using the representation of $\mathcal{P}_{\boldsymbol{a}}$ in \eqref{darstPa}, we have
\begin{align*}
\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}}z_1z_2=& \sum_{t_1,\dots,t_n=0}^1 \left(\frac{t_n}{2}+\dots+\frac{t_{1}}{2^n}\right)\left(\frac{b_1}{2}+\dots+\frac{b_n}{2^n}\right) \\
=&\sum_{k=1}^n \sum_{t_1,\dots,t_n=0}^1 \frac{t_kb_k}{2^{n+1-k}2^k}+\sum_{\substack{k_1,k_2=1 \\ k_1 \neq k_2}}^n \sum_{t_1,\dots,t_n=0}^1 \frac{t_{k_1}b_{k_2}}{2^{n+1-k_1}2^{k_2}}=:S_1+S_2.
\end{align*}
Note that $b_k$ only depends on $t_k,t_{k+1},\ldots,t_n$ and $b_n=t_n$. We have
\begin{align*}
S_1=& \frac{1}{2^{n+1}}\sum_{k=1}^n 2^{k-1} \sum_{t_k,\dots,t_n=0}^1 t_kb_k=\frac{1}{2^{n+2}}2^n \sum_{t_n=0}^1 t_nb_n+\frac{1}{2^{n+2}}\sum_{k=1}^{n-1} 2^k \sum_{t_k,\dots,t_n=0}^1 t_kb_k \\
=& \frac14+\frac{1}{2^{n+2}}\sum_{k=1}^{n-1} 2^k \sum_{t_{k+1},\dots,t_n=0}^1 (1\oplus a_k(t_{k+1}\oplus \dots \oplus t_n)) \\
=& \frac14+\frac{1}{2^{n+2}}\sum_{k=1}^{n-1} 2^k 2^{n-k-1}(2-a_k)=\frac14+\frac18 \left(n-1+\sum_{k=1}^{n-1}(1-a_k)\right)=\frac18(n+h_n+1).
\end{align*}
To compute $S_2$, assume first that $k_1<k_2$. Then
\begin{align*}
\sum_{t_1,\dots,t_n=0}^1 t_{k_1}b_{k_2}=& 2^{k_1-1} \sum_{t_{k_1},\dots,t_n=0}^1 t_{k_1}b_{k_2}=2^{k_1-1} \sum_{t_{k_1+1},\dots,t_n=0}^1 b_{k_2}\\
=&2^{k_1-1}2^{k_2-k_1-1}\sum_{t_{k_2},\dots,t_n=0}^1 b_{k_2}=2^{k_1-1}2^{k_2-k_1-1}2^{n-k_2}=2^{n-2}.
\end{align*}
Similarly, we observe that we obtain the same result also for $k_1>k_2$ and hence
$$ S_2=\frac{1}{2^{n+1}}\sum_{\substack{k_1,k_2=1 \\ k_1 \neq k_2}}^n 2^{k_1-k_2}2^{n-2}=\frac18 \sum_{\substack{k_1,k_2=1 \\ k_1 \neq k_2}}^n 2^{k_1-k_2}=\frac{1}{8}\left(\left(2^{n+1}-2\right)\left(1-\frac{1}{2^n}\right)-n\right)=\frac{1}{8}\left(-n+2^{n+1}-4+\frac{2}{2^n}\right). $$
Now we put everything together to arrive at the claimed formula. \end{proof}
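The closed formula of Proposition~\ref{prop1} can also be confirmed by a brute-force computation. The following Python sketch (again purely illustrative; it presupposes formula~\eqref{art1} exactly as used in the first line of the proof and the digit relation $b_k=t_k\oplus a_k(t_{k+1}\oplus\dots\oplus t_n)$, $b_n=t_n$) checks the formula for all vectors $\boldsymbol{a}$ and several small values of $n$.
\begin{verbatim}
# Brute-force confirmation of the proposition above:
#   (1/2^n) * sum_{z in P_a} (1-z1)(1-z2) - 1/4  ==  (h_n+5)/2^(n+3) + 1/2^(2n+2)
from itertools import product
from fractions import Fraction

def net(a, n):
    for t in product((0, 1), repeat=n):
        b = []
        for k in range(1, n + 1):
            tail = 0
            for i in range(k, n):                  # XOR of t_{k+1}, ..., t_n
                tail ^= t[i]
            b.append(t[k-1] ^ ((a[k-1] & tail) if k < n else 0))
        yield (sum(Fraction(t[k-1], 2**(n+1-k)) for k in range(1, n+1)),
               sum(Fraction(b[k-1], 2**k) for k in range(1, n+1)))

for n in (2, 3, 4, 5):
    for a in product((0, 1), repeat=n-1):
        mu = sum((1-z1)*(1-z2) for z1, z2 in net(a, n)) / 2**n - Fraction(1, 4)
        h_n = a.count(0)                           # number of indices with a_i = 0
        assert mu == Fraction(h_n + 5, 2**(n+3)) + Fraction(1, 2**(2*n+2))
print("the closed formula is confirmed for n = 2,...,5 and all vectors a")
\end{verbatim}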
\paragraph{Case 2: $\boldsymbol{j}\in\mathcal{J}_2:=\{(-1,j_2): 0\leq j_2 \leq n-2\}$} \begin{proposition} \label{prop2} Let $\boldsymbol{j}=(-1,j_2)\in \mathcal{J}_2$ and $\boldsymbol{m}\in \mathbb{D}_{\boldsymbol{j}}$. If $\mathcal{H}_{j_2}=\{1,\dots,j_2\}$, then
$$ \mu_{\boldsymbol{j},\boldsymbol{m}}=2^{-2n-2j_2-4}\left(-2^{2j_2+2}(a_{j_2+1}-1)+2^{n+j_2}(a_{j_2+1}a_{j_2+2}-2)+2^{2n+2}\sum_{k=1}^{j_2}\frac{s_k}{2^{n+1-k}}\right), $$ where the latter sum is zero for $j_2=0$. Otherwise, let $w\in\{1,\dots,j_2\}$ be the greatest index with $a_w=1$. If $a_{j_2+1}=0$, then \begin{align*} \mu_{\boldsymbol{j},\boldsymbol{m}}=&2^{-2n-2}-2^{-n-j_2-3}+2^{-n-2j_2+w-5}+2^{-2j_2-2}\varepsilon \\ &+2^{-2n-j_2+w-4}a_{j_2+2}(1-2(s_{w}\oplus \dots \oplus s_{j_2})). \end{align*} If $a_{j_2+1}=1$, then \begin{align*} \mu_{\boldsymbol{j},\boldsymbol{m}}=&-2^{-n-j_2-3}+2^{-j_2+w-2n-3}+2^{-2j_2-n+w-4}+2^{-2j_2-2}\varepsilon \\ &-2^{-2n-j_2+w-2}(s_{w}\oplus \dots \oplus s_{j_2})+2^{-n-j_2-4}a_{j_2+2}. \end{align*}
In the latter two expressions, we put $\varepsilon=\sum_{\substack{k=1 \\k\neq w} }^{j_2}\frac{t_k(m_2)}{2^{n+1-k}}$, where the values $t_k(m_2)$ depend only on $m_2$ and are either 0 or 1. Hence, in any case we have $|\mu_{\boldsymbol{j},\boldsymbol{m}}|\lesssim 2^{-n-j_2}$. \end{proposition}
\begin{proof} We only show the case where $j_2\geq 1$ and $\mathcal{H}_{j_2} \neq \{1,\dots,j_2\}$, since the other cases are similar but easier.
Let $w\in\{1,\dots,j_2\}$ be the greatest index with $a_w=1$. By~\eqref{art3}, we need to evaluate the sum
$$ \sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}}(1-z_1)(1-|2m_2+1-2^{j_2+1}z_2|). $$
By~\eqref{cond}, the condition $\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}$ yields the identities $b_k=s_k$ for all $k\in \{1,\dots,j_2\}$, which lead to
$t_k=s_k$ for all $k\in\{1,\dots,j_2\}$ such that $a_k=0$. Assume
that $$ \{k\in \{1,\dots,j_2\}: a_k=1\}=\{k_1,\dots,k_l\} $$ for some $l\in\{1,\dots, j_2\}$, where $k_1<k_2<\dots<k_l$ and $k_l=w$.
We have $t_{k_i}=s_{k_i}\oplus s_{k_i+1}\oplus \dots \oplus s_{k_{i+1}}$ for all $i\in\{1,\dots,l-1\}$ and
$t_w=s_w \oplus \dots \oplus s_{j_2} \oplus t_{j_2+1}\oplus \dots \oplus t_n$. Hence, we can write
$$ 1-z_1=1-u-\frac{t_{j_2+1}}{2^{n-j_2}}-\frac{s_w \oplus \dots \oplus s_{j_2} \oplus t_{j_2+1}\oplus \dots \oplus t_n}{2^{n+1-w}}-\varepsilon, $$
where $u=2^{-1}t_n+\dots+2^{-(n-j_2-1)}t_{j_2+2}$ and $$\varepsilon=\varepsilon(m_2)=\sum_{\substack{k=1 \\k\neq w} }^{j_2}\frac{t_k(m_2)}{2^{n+1-k}}.$$
For the expression $1-|2m_2+1-2^{j_2+1}z_2|$ we find by~\eqref{z2}
$$ 1-|2m_2+1-2^{j_2+1}z_2|=1-|1-t_{j_2+1}\oplus a_{j_2+1}(t_{j_2+2}\oplus \dots \oplus t_n)-v|, $$
where $v=v(t_{j_2+2},\dots,t_n)=2^{-1}b_{j_2+2}+\dots+2^{-(n-j_2-1)}b_n.$ With these observations, we find (writing $T_j=t_j\oplus \dots\oplus t_n$ for $1\leq j \leq n-1$ and $t_w(t_{j_2+1})=s_{w}\oplus\dots\oplus s_{j_2}\oplus t_{j_2+1} \oplus T_{j_2+2}$)
\begin{align*}
\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}}&(1-z_1)(1-|2m_2+1-2^{j_2+1}z_2|) \\
=& \sum_{t_{j_2+1}, \dots , t_n=0}^{1}\left(1-u-\frac{t_{j_2+1}}{2^{n-j_2}}-\frac{t_w(t_{j_2+1})}{2^{n+1-w}}-\varepsilon\right) \\
&\times\left(1-|1-t_{j_2+1}\oplus a_{j_2+1}T_{j_2+2}-v|\right) \\
=&\sum_{t_{j_2+2}, \dots , t_n=0}^{1} \bigg\{ \left(1-u-\frac{a_{j_2+1}T_{j_2+2}}{2^{n-j_2}}-\frac{t_w(a_{j_2+1}T_{j_2+2})}{2^{n+1-w}}-\varepsilon\right)v \\
&+ \left(1-u-\frac{a_{j_2+1}T_{j_2+2}\oplus 1}{2^{n-j_2}}-\frac{t_w(a_{j_2+1}T_{j_2+2}\oplus 1)}{2^{n+1-w}}-\varepsilon\right)(1-v) \bigg\}\\
=& \sum_{t_{j_2+2}, \dots , t_n=0}^{1} 2^{-n-1}\bigg(-2^{j_2+1}-2^w+2^{n+1}-2^{n+1}\varepsilon+2^wt_w(a_{j_2+1}T_{j_2+2})-2^{n+1}u \\ &+2^{j_2+1}v+2^w v
-2^{w+1}t_w(a_{j_2+1}T_{j_2+2})v-2^{j_2+1}(a_{j_2+1}T_{j_2+2})(2v-1)\bigg).
\end{align*}
Assume first that $a_{j_2+1}=1$; then $t_w(a_{j_2+1}T_{j_2+2})=t_w(T_{j_2+2})=s_{w}\oplus\dots\oplus s_{j_2}$ does not depend on the $t_i$.
Since
$$ \sum_{t_{j_2+2}, \dots , t_n=0}^{1} u=\sum_{t_{j_2+2}, \dots , t_n=0}^{1} v=\sum_{l=0}^{2^{n-j_2-1}-1}\frac{l}{2^{n-j_2-1}}=2^{n-j_2-2}-\frac12, $$
we obtain
\begin{align*}
\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}}&(1-z_1)(1-|2m_2+1-2^{j_2+1}z_2|) \\
=& 2^{-n-1}\bigg((-2^{j_2+1}-2^w+2^{n+1}-2^{n+1}\varepsilon+2^wt_w(T_{j_2+2}))2^{n-j_2-1} \\
&+(2^w+2^{j_2+1}-2^{n+1}-2^{w+1}t_w(T_{j_2+2}))\left(2^{n-j_2-2}-\frac12\right) \\
&-2^{j_2+1}\sum_{t_{j_2+2}, \dots , t_n=0}^{1}T_{j_2+2}(2v-1)\bigg).
\end{align*}
We analyze the last expression. We find
\begin{align*} \sum_{t_{j_2+2}, \dots , t_n=0}^{1}& T_{j_2+2}(2v-1) \\ =& 2\sum_{t_{j_2+2}, \dots , t_n=0}^{1}T_{j_2+2}v-\sum_{t_{j_2+2}, \dots , t_n=0}^{1}T_{j_2+2}=2\sum_{t_{j_2+2}, \dots , t_n=0}^{1}T_{j_2+2}v-2^{n-j_2-2},\end{align*}
where
\begin{align*}
\sum_{t_{j_2+2}, \dots , t_n=0}^{1}T_{j_2+2}v=&\sum_{t_{j_2+2}, \dots , t_n=0}^{1} (t_{j_2+2}\oplus T_{j_2+3})\left(\frac{t_{j_2+2}\oplus a_{j_2+2} T_{j_2+3}}{2}+\frac{b_{j_2+3}}{4}+\cdots+\frac{b_n}{2^{n-j_2-1}}\right) \\
=& \sum_{t_{j_2+3}, \dots , t_n=0}^{1} \left(\frac{(T_{j_2+3}\oplus 1)\oplus a_{j_2+2} T_{j_2+3}}{2}+\frac{b_{j_2+3}}{4}+\cdots+\frac{b_n}{2^{n-j_2-1}}\right) \\
=& \sum_{t_{j_2+3}, \dots , t_n=0}^{1} \frac{1\oplus (1-a_{j_2+2})T_{j_2+3}}{2}+\sum_{l=0}^{2^{n-j_2-2}-1}\frac{l}{2^{n-j_2-1}} \\
=&\frac12 \sum_{t_{j_2+3}, \dots , t_n=0}^{1} \left(1-(1-a_{j_2+2})T_{j_2+3}\right)+2^{n-j_2-4}-\frac14 \\
=&\frac12 \left(2^{n-j_2-2}-(1-a_{j_2+2})2^{n-j_2-3}\right)+2^{n-j_2-4}-\frac14 \\
=&2^{n-j_2-4}(1+a_{j_2+2})+2^{n-j_2-4}-\frac14.
\end{align*}
We put everything together and apply~\eqref{art3} to find the result for $a_{j_2+1}=1$. \\
Now assume that $a_{j_2+1}=0$. Then $t_w(a_{j_2+1}T_{j_2+2})=t_w(0)=s_w\oplus\dots\oplus s_{j_2} \oplus T_{j_2+2}$. Hence we have
\begin{align*}
\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}}&(1-z_1)(1-|2m_2+1-2^{j_2+1}z_2|) \\
=& 2^{-n-1}\bigg( (-2^{j_2+1}+2^{n+1}-2^w-2^{n+1}\varepsilon)2^{n-j_2-1}+(2^{j_2+1}-2^{n+1})(2^{n-j_2-2}-\frac12) \\
&+ 2^w \cdot 2^{n-j_2-2}-2^{w+1}\sum_{t_{j_2+2},\dots,t_n=0}^{1} vt_w(0)\bigg).
\end{align*}
Here we used $\sum_{t_{j_2+2},\dots,t_n=0}^{1}t_w(0)=2^{n-j_2-2}$. It remains to evaluate $\sum_{t_{j_2+2},\dots,t_n=0}^{1} vt_w(0)$. We find
\begin{eqnarray*}
\lefteqn{\sum_{t_{j_2+2},\dots,t_n=0}^{1} (s_w\oplus\dots\oplus s_{j_2}\oplus t_{j_2+2}\oplus T_{j_2+3})\left(\frac{t_{j_2+2}\oplus a_{j_2+2}T_{j_2+3}}{2}+\frac{b_{j_2+3}}{4}+\dots+\frac{b_n}{2^{n-j_2-1}}\right)} \\
&=& \sum_{t_{j_2+3},\dots,t_n=0}^{1} \left(\frac{(s_w\oplus\dots\oplus s_{j_2}\oplus T_{j_2+3} \oplus 1)\oplus a_{j_2+2}T_{j_2+3}}{2}+\frac{b_{j_2+3}}{4}+\dots+\frac{b_n}{2^{n-j_2-1}}\right) \\
&=& \frac12 \sum_{t_{j_2+3},\dots,t_n=0}^{1} (1-a_{j_2+2})T_{j_2+3}\oplus s_w\oplus\dots\oplus s_{j_2} \oplus 1 +\sum_{l=0}^{2^{n-j_2-2}-1}\frac{l}{2^{n-j_2-1}} \\
&=& 2^{n-j_2-4}\big(1+a_{j_2+2}(1-2(s_w\oplus\dots\oplus s_{j_2}))\big) +2^{n-j_2-4}-\frac14.
\end{eqnarray*}
Again, we put everything together and apply~\eqref{art3} to find the result for $a_{j_2+1}=0$. \end{proof}
\paragraph{Case 3: $\boldsymbol{j}\in\mathcal{J}_3:=\{(k,-1): k\geq n\}\cup\{(-1,k): k\geq n\}$}
\begin{proposition}
Let $\boldsymbol{j}\in \mathcal{J}_3$ and $\boldsymbol{m}\in \mathbb{D}_{\boldsymbol{j}}$. Then we have
$$ \mu_{\boldsymbol{j},\boldsymbol{m}}=\frac{1}{2^{2k+3}}. $$ \end{proposition}
\begin{proof}
This claim follows from~\eqref{art2} and~\eqref{art3} together with the fact that no point of $\mathcal{P}_{\boldsymbol{a}}$ is contained in the interior of $I_{\boldsymbol{j},\boldsymbol{m}}$ if $j_1\geq n$ or $j_2\geq n$. Hence, only the linear part of $\Delta_{\mathcal{P}_{\boldsymbol{a}}}$ contributes to the Haar coefficients in this case. \end{proof}
\paragraph{Case 4: $\boldsymbol{j}\in\mathcal{J}_4:=\{(0,-1)\}$}
\begin{proposition} \label{prop5}
Let $\boldsymbol{j}\in \mathcal{J}_4$ and $\boldsymbol{m}\in \mathbb{D}_{\boldsymbol{j}}$. Then we have
$$ \mu_{\boldsymbol{j},\boldsymbol{m}}=-\frac{1}{2^{n+3}}+\frac{1}{2^{2n+2}}. $$ \end{proposition}
\begin{proof} For $\boldsymbol{z}=(z_1,z_2)\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}=\mathcal{P}_{\boldsymbol{a}} $ we have $1-z_2=1-\frac{b_1}{2}-\dots-\frac{b_n}{2^n}$ and
$$ 1-|2m_1+1-2z_1|=1-\left|1-t_n-\frac{t_{n-1}}{2}-\dots-\frac{t_1}{2^{n-1}}\right| $$ by~\eqref{z1}.
We therefore find, after summation over $t_n$,
\begin{align*}
\sum_{\boldsymbol{z} \in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}}&(1-|2m_1+1-2z_1|)(1-z_2) \\
=&\sum_{t_1,\dots,t_n=0}^1 \left(1-\left|1-t_n-\frac{t_{n-1}}{2}-\dots-\frac{t_1}{2^{n-1}}\right|\right)\left(1-\frac{b_1(t_n)}{2}-\dots-\frac{b_n(t_n)}{2^n}\right) \\
=&\sum_{t_1,\dots,t_{n-1}=0}^1 \left(u(1-v(0))+(1-u)\left(1-v(1)-\frac{1}{2^n}\right)\right) \\
=& \sum_{t_1,\dots,t_{n-1}=0}^1 \left(1-\frac{1}{2^n}-v(1)+\frac{1}{2^n}u+uv(1)-uv(0)\right) \\
=& 2^{n-1}\left(1-\frac{1}{2^n}\right)+\left(\frac{1}{2^n}-1\right)(2^{n-2}-2^{-1})+\sum_{t_1,\dots,t_{n-1}=0}^1uv(1)-\sum_{t_1,\dots,t_{n-1}=0}^1uv(0).
\end{align*}
Here we use the short-hands $u=2^{-1}t_{n-1}+\dots+2^{-n+1}t_1$ and $v(t_n)=2^{-1}b_1(t_n)+\dots+2^{-n+1}b_{n-1}(t_n)$ and the fact
that $\sum_{t_1,\dots,t_{n-1}=0}^1 u= \sum_{t_1,\dots,t_{n-1}=0}^1 v(1)=2^{n-2}-2^{-1}$. It is not difficult to observe
that $\sum_{t_1,\dots,t_{n-1}=0}^1 uv(0)= \sum_{t_1,\dots,t_{n-1}=0}^1 uv(1)$; hence
$$ \sum_{\boldsymbol{z} \in \mathcal{P}_{\boldsymbol{a}}}(1-|2m_1+1-2z_1|)(1-z_2)=\frac14+2^{n-2}-\frac{1}{2^{n+1}}. $$
The rest follows with~\eqref{art2}. \end{proof}
For the following two propositions, we use the shorthand $R=r_1\oplus \dots \oplus r_{j_1}$.
\paragraph{Case 5: $\boldsymbol{j}\in\mathcal{J}_5:=\{(j_1,-1): 1\leq j_1 \leq n-2 \}$}
\begin{proposition}
Let $\boldsymbol{j}\in \mathcal{J}_5$ and $\boldsymbol{m}\in \mathbb{D}_{\boldsymbol{j}}$. Then we have
$$ \mu_{\boldsymbol{j},\boldsymbol{m}}=2^{-2n-2}-2^{-n-j_1-3}+2^{-2j_1-2}\varepsilon-2^{-2n-1}R-2^{-n-j_1-3}a_{n-j_1-1}(1-2R), $$
where \begin{equation} \label{abcd} \varepsilon=\varepsilon(m_1)=\frac{r_1}{2^n}+\sum_{k=2}^{j_1}\frac{r_k \oplus a_{n+1-k}(r_{k-1}\oplus\dots\oplus r_1)}{2^{n+1-k}}.\end{equation}
Hence, in any case we have $|\mu_{\boldsymbol{j},\boldsymbol{m}}|\lesssim 2^{-n-j_1}$. \end{proposition}
\begin{proof} By~\eqref{art2}, we need to evaluate the sum
$$\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}} (1-|2m_1+1-2^{j_1+1}z_1|)(1-z_2).$$ The condition $\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}$ forces $t_n=r_1,\dots, t_{n+1-j_1}=r_{j_1}$ and therefore \begin{align*}
1-z_2=& 1-\frac{b_1}{2}-\dots-\frac{b_n}{2^n}=1-v(t_{n-j_1})-\frac{t_{n-j_1}\oplus a_{n-j_1} R}{2^{n-j_1}}-\varepsilon, \end{align*} where \begin{align*} v(t_{n-j_1})&=\frac{b_1}{2}+\dots+\frac{b_{n-j_1-1}}{2^{n-j_1-1}} \\ &=\frac{t_1\oplus a_1(t_2\oplus\dots\oplus t_{n-j_1}\oplus R)}{2}+\dots+\frac{t_{n-j_1-1}\oplus a_{n-j_1-1}(t_{n-j_1}\oplus R)}{2^{n-j_1-1}} \end{align*} and $\varepsilon$ as in~\eqref{abcd}. Further, by~\eqref{z1} we write $2m_1+1-2^{j_1+1}z_1=1-t_{n-j_1}-u$, where $u=2^{-1}t_{n-j_1-1}+\dots+2^{j_1-n+1}t_1$. Then \begin{align*}
\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}} &(1-|2m_1+1-2^{j_1+1}z_1|)(1-z_2) \\
=& \sum_{t_1,\dots,t_{n-j_1}=0}^{1}\left(1-v(t_{n-j_1})-\frac{t_{n-j_1}\oplus a_{n-j_1} R}{2^{n-j_1}}-\varepsilon\right)(1-|1-t_{n-j_1}-u|) \\
=& \sum_{t_1,\dots,t_{n-j_1-1}=0}^{1} \Bigg\{\left(1-v(0)-\frac{a_{n-j_1} R}{2^{n-j_1}}-\varepsilon\right)u\\&+\left(1-v(1)-\frac{1\oplus a_{n-j_1} R}{2^{n-j_1}}-\varepsilon\right)(1-u)\Bigg\}\\
=& \sum_{t_1,\dots,t_{n-j_1-1}=0}^{1} \{ 1-2^{j_1-n}-\varepsilon+2^{j_1-n}a_{n-j_1}R+2^{j_1-n}u-v(1)\\ &-2^{1+j_1-n}a_{n-j_1}Ru+uv(1)-uv(0) \} \\
=& 2^{n-j_1-1}( 1-2^{j_1-n}-\varepsilon+2^{j_1-n}a_{n-j_1}R) \\&+\left(2^{n-j_1-2}-2^{-1}\right)(2^{j_1-n}-1-2^{1+j_1-n}a_{n-j_1}R)
+\sum_{t_1,\dots,t_{n-j_1-1}=0}^{1}(uv(1)-uv(0)). \end{align*} We understand $b_1,\dots,b_{n-j_1-1}$ as functions of $t_{n-j_1}$ and have \begin{align*}
\sum_{t_1,\dots,t_{n-j_1-1}=0}^{1} uv(0)=& \sum_{t_1,\dots,t_{n-j_1-1}=0}^{1}\left(\frac{t_{n-j_1-1}}{2}+\dots+\frac{t_1}{2^{n-j_1-1}}\right)\left(\frac{b_1(0)}{2}+\dots+\frac{b_{n-j_1-1}(0)}{2^{n-j_1-1}}\right) \\
=& \sum_{t_1,\dots,t_{n-j_1-1}=0}^{1} \left(\sum_{k=1}^{n-j_1-1} \frac{t_kb_k(0)}{2^{n-j_1-k}2^k}+\sum_{\substack{k_1,k_2=1 \\ k_1 \neq k_2}}^{n-j_1-1} \frac{t_{k_1}b_{k_2}(0)}{2^{n-j_1-k_1}2^{k_2}}\right). \end{align*} The first sum simplifies to \begin{align*}
\sum_{k=1}^{n-j_1-1}& 2^{k-1}\sum_{t_k,\dots,t_{n-j_1-1}=0}^{1}\frac{t_kb_k(0)}{2^{n-j_1-k}2^k} \\
=& \frac{1}{2^{n-j_1}}\sum_{k=1}^{n-j_1-2}2^{k-1}\sum_{t_k,\dots,t_{n-j_1-1}=0}^{1}t_k(t_k\oplus a_k(t_{k+1}\oplus\dots\oplus t_{n-j_1-1}\oplus R)) \\
&+\frac{1}{2^{n-j_1}}2^{n-j_1-2}\sum_{t_{n-j_1-1}=0}^{1}t_{n-j_1-1}(t_{n-j_1-1}\oplus a_{n-j_1-1}R) \\
=& \frac{1}{2^{n-j_1}}\sum_{k=1}^{n-j_1-2}2^{k-1}\sum_{t_{k+1},\dots,t_{n-j_1-1}=0}^{1}(1\oplus a_k(t_{k+1}\oplus\dots\oplus t_{n-j_1-1}\oplus R)) \\
&+\frac{1}{4}(1 \oplus a_{n-j_1-1}R) \\
=& \frac{1}{2^{n-j_1}}\sum_{k=1}^{n-j_1-2}2^{k-1}2^{n-j_1-k-2}(2-a_k)+\frac14 (1 \oplus a_{n-j_1-1}R) \\
=& \frac{1}{8}\sum_{k=1}^{n-j_1-2}(2-a_k)+\frac14 (1 \oplus a_{n-j_1-1}R). \end{align*} Basically by the same arguments as in the proof of Proposition~\ref{prop1} we also find \begin{align*}
\sum_{t_1,\dots,t_{n-j_1-1}=0}^{1}\sum_{\substack{k_1,k_2=1 \\ k_1 \neq k_2}}^{n-j_1-1} \frac{t_{k_1}b_{k_2}}{2^{n-j_1-k_1}2^{k_2}}=\frac18 \sum_{\substack{k_1,k_2=1 \\ k_1 \neq k_2}}^{n-j_1-1} 2^{k_1-k_2}. \end{align*} Hence, we obtain $$ \sum_{t_1,\dots,t_{n-j_1-1}=0}^{1} uv(0)=\frac{1}{8}\sum_{k=1}^{n-j_1-2}(2-a_k)+\frac14(1 \oplus a_{n-j_1-1}R)+\frac18 \sum_{\substack{k_1,k_2=1 \\ k_1 \neq k_2}}^{n-j_1-1} 2^{k_1-k_2}. $$ We can evaluate $\sum_{t_1,\dots,t_{n-j_1-1}=0}^{1} uv(1)$ in almost the same way; the result is $$ \sum_{t_1,\dots,t_{n-j_1-1}=0}^{1} uv(1)=\frac{1}{8}\sum_{k=1}^{n-j_1-2}(2-a_k)+\frac14 (1 \oplus a_{n-j_1-1}(R\oplus 1))+\frac18 \sum_{\substack{k_1,k_2=1 \\ k_1 \neq k_2}}^{n-j_1-1} 2^{k_1-k_2}. $$ Hence the difference of these two expressions is given by $$ \sum_{t_1,\dots,t_{n-j_1-1}=0}^{1} uv(1)-\sum_{t_1,\dots,t_{n-j_1-1}=0}^{1} uv(0)=\frac14 a_{n-j_1-1}(2R-1). $$ Now we put everything together and use~\eqref{art2} to find the claimed result on the Haar coefficients. \end{proof}
\paragraph{Case 6: $\boldsymbol{j}\in\mathcal{J}_6:=\{(j_1,j_2): j_1+j_2 \leq n-3 \}$}
\begin{proposition} Let $\boldsymbol{j}\in \mathcal{J}_6$ and $\boldsymbol{m}\in \mathbb{D}_{\boldsymbol{j}}$. If $\mathcal{H}_{j_2}=\{1,\dots,j_2\}$ or if $j_2=0$, then we have
$$ \mu_{\boldsymbol{j},\boldsymbol{m}}=2^{-2n-2}(1-2a_{n-j_1}R)(1-a_{j_2+1}). $$ Otherwise, let $w\in\{1,\dots,j_2\}$ be the greatest index with $a_w=1$. If $a_{j_2+1}=0$, then $$ \mu_{\boldsymbol{j},\boldsymbol{m}}=2^{-2n-2}(1-2a_{n-j_1}R). $$ If $a_{j_2+1}=1$, then $$ \mu_{\boldsymbol{j},\boldsymbol{m}}=-2^{-2n-j_2+w-3}(1-2a_{n-j_1}R)(1-2(s_w\oplus\dots\oplus s_{j_2})). $$
Note that for $j_1=0$ we set $a_{n-j_1}R=0$ in all these formulas. Hence, in any case we have $|\mu_{\boldsymbol{j},\boldsymbol{m}}|\lesssim 2^{-2n}$ \end{proposition}
\begin{proof}
The proof is similar in all cases; hence we only treat the most complicated case where $j_2\geq 1$ and $\mathcal{H}_{j_2} \neq \{1,\dots,j_2\}$.
By~\eqref{art4}, we need to study the sum
$$ \sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}} (1-|2m_1+1-2^{j_1+1}z_1|)(1-|2m_2+1-2^{j_2+1}z_2|), $$
where the condition $\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}$ forces $t_{n+1-k}=r_k$ for all $k\in\{1,\dots,j_1\}$
as well as $b_k=s_k$ for all $k\in\{1,\dots,j_2\}$.
We have already seen in the proof of Proposition~\ref{prop2} that the latter equalities allow us to
express the digits $t_k$ by the digits $s_1,\dots,s_{j_2}$ of $m_2$ for all $k\in\{1,\dots,j_2\}\setminus\{w\}$.
We also have $t_w=s_w\oplus\dots\oplus s_{j_2}\oplus t_{j_2+1}\oplus\dots\oplus t_n$. With~\eqref{z1}, these observations lead to
$$ 2m_1+1-2^{j_1+1}z_1=1-t_{n-j_1}-u-2^{j_1+j_2-n+1}t_{j_2+1}-2^{j_1+w-1}t_w-\varepsilon_2(m_2), $$
where $u=2^{-1}t_{n-j_1-1}+\dots+2^{j_1+j_2-n+2}t_{j_2+2}$ and
$ \varepsilon_2 $ is determined by $m_2$.
Further, we write with~\eqref{z2}
$$ 2m_2+1-2^{j_2+1}z_2=1-b_{j_2+1}-v-2^{j_1+j_2-n+1}b_{n-j_1}-\varepsilon_1(m_1), $$
where $v=v(t_{n-j_1})=2^{-1}b_{j_2+2}+\dots+2^{j_1+j_2-n+2}b_{n-j_1-1}$ and $\varepsilon_1$ is obviously determined by $m_1$. Hence, we have
\begin{align*}
\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}} &(1-|2m_1+1-2^{j_1+1}z_1|)(1-|2m_2+1-2^{j_2+1}z_2|) \\
=& \sum_{t_{j_2+1},\dots,t_{n-j_1}=0}^1 \left(1-|1-t_{n-j_1}-u-2^{j_1+j_2-n+1}t_{j_2+1}-2^{j_1+w-1}t_w-\varepsilon_2(m_2)|\right) \\
&\times \bigg(1-|1-t_{j_2+1}\oplus a_{j_2+1}(t_{j_2+2}\oplus \dots \oplus t_{n-j_1}\oplus R)-v(t_{n-j_1})\\ &-2^{j_1+j_2-n+1}(t_{n-j_1}\oplus a_{n-j_1}R)-\varepsilon_1|\bigg).
\end{align*}
Recall we may write $t_w=s_w\oplus \dots \oplus s_{j_2} \oplus t_{j_2+1}\oplus t_{j_2+2}\oplus\dots\oplus t_{n-j_1-1} \oplus t_{n-j_1}\oplus R$. We stress the
dependence of $t_w$ on $t_{j_2+1}\oplus t_{n-j_1}$ by writing $t_w(t_{j_2+1}\oplus t_{n-j_1})$. If $a_{j_2+1}=0$, then we
obtain after summation over $t_{j_2+1}$ and $t_{n-j_1}$
\begin{align*}
\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}} &(1-|2m_1+1-2^{j_1+1}z_1|)(1-|2m_2+1-2^{j_2+1}z_2|) \\
=& \sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1} \bigg\{
(u+2^{j_1+w-1}t_w(0)+\varepsilon_2)(v(0)+2^{j_1+j_2-n+1}a_{n-j_1}R+\varepsilon_1)\\
&+ (u+2^{j_1+j_2-n+1}+2^{j_1+w-1}t_w(1)+\varepsilon_2)(1-v(0)-2^{j_1+j_2-n+1}a_{n-j_1}R-\varepsilon_1) \\
&+ (1-u-2^{j_1+w-1}t_w(1)-\varepsilon_2)(v(0)+2^{j_1+j_2-n+1}(a_{n-j_1}R\oplus 1)+\varepsilon_1)\\
&+ (1-u-2^{j_1+j_2-n+1}-2^{j_1+w-1}t_w(0)-\varepsilon_2)(1-v(1) \\ &-2^{j_1+j_2-n+1}(a_{n-j_1}R\oplus 1)-\varepsilon_1)
\bigg\} \\
=& \sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1} \bigg\{1+2^{2(n+j_1+j_2+1)}+2^{j_1+w-1}-2^{2j_1+j_2-n+w}-2^{2j_1+2j_2-2n+3}a_{n-j_1}R \\
&+(2^{1+2j_1+j_2-n+w}-2^{j_1+w})t_w(0) +2^{j_1+w}(2t_w(0)-1)+2^{n+j_1+j_2+1}(v(1)-v(0)) \\
&-2^{w+j_1-1}(v(1)+v(0))+2^{j_1+w}(t_w(0)v(0)+t_w(0)v(1)).\bigg\}
\end{align*}
Here we used $t_w(1)=1-t_w(0)$. By standard arguments, we find
$$ \sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1}v(0)=\sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1}v(1)=2^{n-j_1-j_2-3}-\frac12 $$
and
$$ \sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1} t_w(0)=\sum_{t_{j_2+2},\dots,t_{n-j_1-2}=0}^{1}1=2^{n-j_1-j_2-3}. $$
We use the short-hand $T=t_{j_2+3}\oplus\dots\oplus t_{n-j_1-1}$, which allows us to write
\begin{align*}
\sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1} &t_w(0)v(0)=\sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1} (s_w\oplus\dots \oplus s_{j_2}\oplus t_{j_2+2} \oplus \dots\oplus t_{n-j_1-1}\oplus R) \\
&\times \bigg( \frac{t_{j_2+2}\oplus a_{j_2+2}(t_{j_2+3}\oplus\dots\oplus t_{n-j_1-1}\oplus R)}{2} \\
&+\frac{t_{j_2+3}\oplus a_{j_2+3}(t_{j_2+4}\oplus\dots\oplus t_{n-j_1-1}\oplus R)}{4}+\dots+\frac{t_{n-j_1-1}\oplus a_{n-j_1-1}R}{2^{n-j_1-j_2-2}}\bigg) \\
=& \sum_{t_{j_2+3},\dots,t_{n-j_1-1}=0}^{1}\frac12 (s_w\oplus\dots \oplus s_{j_2}\oplus T \oplus R \oplus 1 \oplus a_{j_2+2}(T\oplus R)) \\&+\sum_{l=0}^{2^{n-j_1-j_2-3}-1}\frac{l}{2^{n-j_1-j_2-2}} \\
=& \sum_{t_{j_2+3},\dots,t_{n-j_1-1}=0}^{1}\frac12 (s_w\oplus\dots \oplus s_{j_2}\oplus 1 \oplus (1-a_{j_2+2})(T\oplus R))\\ &+2^{n-j_1-j_2-5}-\frac14.
\end{align*}
Similarly, we can show
\begin{align*}
\sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1} t_w(0)v(1)=&\sum_{t_{j_2+3},\dots,t_{n-j_1-1}=0}^{1}\frac12 (s_w\oplus\dots \oplus s_{j_2} \oplus (1-a_{j_2+2})(T\oplus R \oplus 1))
\\ &+2^{n-j_1-j_2-5}-\frac14
\end{align*}
and therefore
$$ \sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1} t_w(0)(v(0)+v(1))=2^{n-j_1-j_2-4}+2^{n-j_1-j_2-5}-\frac14, $$
a fact which can be found by distinguishing the cases $a_{j_2+2}=0$ and $a_{j_2+2}=1$.
We put everything together and obtain
\begin{align*} \sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}} &(1-|2m_1+1-2^{j_1+1}z_1|)(1-|2m_2+1-2^{j_2+1}z_2|) \\ &=2^{j_1+j_2-n}+2^{n-j_1-j_2-2}-2^{-n+j_1+j_2+1}a_{n-j_1}R, \end{align*}
which leads to the claimed result for $a_{j_2+1}=0$ via~\eqref{art4}.\\
Now assume that $a_{j_2+1}=1$. In this case, it is more convenient to consider $t_w$ as a function
of $t_{j_2+1}\oplus\dots\oplus t_{n-j_1}\oplus R$. We obtain after summation over $t_{j_2+1}$ and $t_{n-j_1}$
\begin{align*}
\sum_{\boldsymbol{z}\in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}} &(1-|2m_1+1-2^{j_1+1}z_1|)(1-|2m_2+1-2^{j_2+1}z_2|) \\
=& \sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1} \bigg\{
(u+2^{j_1+j_2-n+1}(T\oplus R)+2^{j_1+w-1}t_w(0)+\varepsilon_2)\\&\times(v(0) +2^{j_1+j_2-n+1}a_{n-j_1}R+\varepsilon_1)\\
&+ (u+2^{j_1+j_2-n+1}(T\oplus R\oplus 1)+2^{j_1+w-1}t_w(1)+\varepsilon_2)\\&\times(1-v(0)-2^{j_1+j_2-n+1}a_{n-j_1}R-\varepsilon_1) \\
&+ (1-u-2^{j_1+j_2-n+1}(T\oplus R\oplus 1)-2^{j_1+w-1}t_w(0)-\varepsilon_2)\\&\times(v(1)+2^{j_1+j_2-n+1}(a_{n-j_1}R\oplus 1)+\varepsilon_1)\\
&+ (1-u-2^{j_1+j_2-n+1}(T\oplus R)-2^{j_1+w-1}t_w(1)-\varepsilon_2)\\&\times(1-v(1)-2^{j_1+j_2-n+1}(a_{n-j_1}R\oplus 1)-\varepsilon_1)
\bigg\} \\
=& \sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1} 2^{-2n}\bigg\{2^{n+j_1+j_2+1}+2^{2j_1+j_2+w+1}(1-2t_w(0)+2a_{n-j_1}R(2t_w(0)-1)) \\
&-2^{2(j_1+j_2+1)}+2^{2n}+(2^{2j_1+2j_2+3}-2^{n+j_1+j_2+2})(T\oplus R) \\ &+2^{n+j_1+j_2+2}\varepsilon_1(2(T\oplus R)-1)\\
&-2^{n+j_1+j_2+1}(v(0)+v(1))+2^{n+j_1+j_2+1}(2t_w(0)-1)(v(1)-v(0))\\&+2^{n+j_1+w}(v(0)+v(1)) +2^{n+j_1+j_2+2}(T\oplus R)(v(1)+v(0)).\bigg\}
\end{align*}
Again, we used $t_w(1)=1-t_w(0)$. Note that $t_w(0)=s_w\oplus\dots\oplus s_{j_2}$ is independent of the
digits $t_{j_2+2},\dots,t_{n-j_1-1}$. We have
$$ \sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1}T=\sum_{t_{j_2+2},\dots,t_{n-j_1-2}=0}^{1}1=2^{n-j_1-j_2-3} $$
and we know the sums $\sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1}v(0)$ and $\sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1}v(1)$ from above. Similarly as above we can show
\begin{align*}
\sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1}&(T\oplus R)v(0) \\
=& \frac12 \sum_{t_{j_2+3},\dots,t_{n-j_1-1}=0}^{1} (1\oplus (1-a_{j_2+2})(t_{j_2+3}\oplus \dots\oplus t_{n-j_1-1}\oplus R)) \\&+\sum_{l=0}^{2^{n-j_1-j_2-3}-1} \frac{l}{2^{n-j_1-j_2-2}}
\end{align*}
as well as
\begin{align*}
\sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1}&(T\oplus R)v(1) \\
=& \frac12 \sum_{t_{j_2+3},\dots,t_{n-j_1-1}=0}^{1} (1-a_{j_2+2})(t_{j_2+3}\oplus \dots\oplus t_{n-j_1-1}\oplus R\oplus 1)\\&+\sum_{l=0}^{2^{n-j_1-j_2-3}-1} \frac{l}{2^{n-j_1-j_2-2}},
\end{align*}
which yields
$$ \sum_{t_{j_2+2},\dots,t_{n-j_1-1}=0}^{1}(T\oplus R)(v(1)+v(0))=2^{n-j_1-j_2-4}+2\sum_{l=0}^{2^{n-j_1-j_2-3}-1} \frac{l}{2^{n-j_1-j_2-2}}. $$
Now we can combine our results with~\eqref{art4} to obtain the claimed result.
\end{proof}
\paragraph{Case 7: $\boldsymbol{j}\in\mathcal{J}_7:=\{(j_1,j_2): 0\leq j_1,j_2 \leq n-1 \textit{\, and \,} j_1+j_2\geq n-2\}$}
\begin{proposition} Let $\boldsymbol{j}\in \mathcal{J}_7$ and $\boldsymbol{m}\in \mathbb{D}_{\boldsymbol{j}}$.
Then we have $|\mu_{\boldsymbol{j},\boldsymbol{m}}|\lesssim 2^{-n-j_1-j_2}$ for all $\boldsymbol{m}\in\mathbb{D}_{\boldsymbol{j}}$ and $|\mu_{\boldsymbol{j},\boldsymbol{m}}|=2^{-2j_1-2j_2-4}$ for all
but at most $2^n$ elements $\boldsymbol{m}\in\mathbb{D}_{\boldsymbol{j}}$.
\end{proposition}
\begin{proof}
At most $2^n$ of the $2^{|\boldsymbol{j}|}$ dyadic boxes $I_{\boldsymbol{j},\boldsymbol{m}}$ for $\boldsymbol{m}\in\mathbb{D}_{\boldsymbol{j}}$ contain points of $\mathcal{P}_{\boldsymbol{a}}$. For the empty boxes, only the linear part of the discrepancy function contributes
to the corresponding Haar coefficients; hence $|\mu_{\boldsymbol{j},\boldsymbol{m}}|=2^{-2j_1-2j_2-4}$ for all
but at most $2^n$ elements $\boldsymbol{m}\in\mathbb{D}_{\boldsymbol{j}}$. The non-empty boxes contain at most 4 points. Hence we find by~\eqref{art4}
\begin{align*}
|\mu_{\boldsymbol{j},\boldsymbol{m}}|\leq& 2^{-n-j_1-j_2-2}\sum_{\boldsymbol{z} \in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}}|(1-|2m_1+1-2^{j_1+1}z_1|)(1-|2m_2+1-2^{j_2+1}z_2|)| \\&+2^{-2j_1-2j_2-4} \\
\leq& 2^{-n-j_1-j_2-2}4 +2^{-2j_1-2j_2-4}\leq 2^{-n-j_1-j_2}+2^{-j_1-j_2-(n-2)-4}\lesssim 2^{-n-j_1-j_2}.
\end{align*}
\end{proof}
\paragraph{Case 8: $\boldsymbol{j}\in\mathcal{J}_8:=\{(n-1,-1),(-1,n-1)\}$}
\begin{proposition} Let $\boldsymbol{j}\in \mathcal{J}_8$ and $\boldsymbol{m}\in \mathbb{D}_{\boldsymbol{j}}$.
Then $|\mu_{\boldsymbol{j},\boldsymbol{m}}|\lesssim 2^{-2n}$.
\end{proposition}
\begin{proof}
At most 2 points lie in $I_{\boldsymbol{j},\boldsymbol{m}}$. Hence, if $\boldsymbol{j}=(n-1,-1)$, then by~\eqref{art2} we have
\begin{align*} |\mu_{\boldsymbol{j},\boldsymbol{m}}|\leq& 2^{-n-j_1-1}\sum_{\boldsymbol{z} \in \mathcal{P}_{\boldsymbol{a}}\cap I_{\boldsymbol{j},\boldsymbol{m}}}|(1-|2m_1+1-2^{j_1+1}z_1|)(1-z_2)|+2^{-2j_1-3} \\
\leq& 2^{-n-j_1-1}\cdot 2+2^{-2j_1-3}=2^{-2n+1}+2^{-2n-1}\lesssim 2^{-2n}.
\end{align*}
The case $\boldsymbol{j}=(-1,n-1)$ can be shown the same way.
\end{proof}
\paragraph{Case 9: $\boldsymbol{j}\in\mathcal{J}_{9}:=\{(j_1,j_2): j_1\geq n \textit{\, or \,} j_2\geq n\}$}
\begin{proposition} Let $\boldsymbol{j}\in \mathcal{J}_{9}$ and $\boldsymbol{m}\in \mathbb{D}_{\boldsymbol{j}}$.
Then $\mu_{\boldsymbol{j},\boldsymbol{m}}=-2^{-2j_1-2j_2-4}$.
\end{proposition}
\begin{proof}
The reason is that no point is contained in the interior of $I_{\boldsymbol{j},\boldsymbol{m}}$ in this case and hence only the
linear part of the discrepancy function contributes to the Haar coefficient in~\eqref{art4}.
\end{proof}
\noindent{\bf Authors' Address:}
\noindent Ralph Kritzinger and Friedrich Pillichshammer, Institut f\"{u}r Finanzmathematik und angewandte Zahlentheorie, Johannes Kepler Universit\"{a}t Linz, Altenbergerstra{\ss}e 69, A-4040 Linz, Austria.\\ {\bf Email:} ralph.kritzinger(at)jku.at and friedrich.pillichshammer(at)jku.at
\end{document} |
\begin{document}
\title{Extremal sequences for the Bellman function of three variables of the dyadic maximal operator related to Kolmogorov's inequality}
\begin{abstract} We give a characterization of the extremal sequences for the Bellman function of three variables of the dyadic maximal operator in relation to Kolmogorov's inequality. In fact we prove that they behave approximately like eigenfunctions of this operator for a specific eigenvalue. For this approach we use the methods introduced in \cite{11}, where the respective Bellman function has been precisely evaluated. \end{abstract}
\section{Introduction} \label{sec:1} The dyadic maximal operator on $\mb R^n$ is a useful tool in analysis and is defined by \begin{equation} \label{eq:1p1}
\mc M_d\phi(x) = \sup\left\{ \frac{1}{|S|} \int_S |\phi(u)|\,\mr du: x\in S,\ S\subseteq \mb R^n\ \text{is a dyadic cube} \right\}, \end{equation}
for every $\phi\in L^1_\text{loc}(\mb R^n)$, where $|\cdot|$ denotes the Lebesgue measure on $\mb R^n$, and the dyadic cubes are those formed by the grids $2^{-N}\mb Z^n$, for $N=0, 1, 2, \ldots$.\\
It is well known that it satisfies the following weak type (1,1) inequality \begin{equation} \label{eq:1p2}
\left|\left\{ x\in\mb R^n: \mc M_d\phi(x) > \lambda \right\}\right| \leq \frac{1}{\lambda} \int_{\left\{\mc M_d\phi > \lambda\right\}} |\phi(u)|\,\mr du, \end{equation} for every $\phi\in L^1(\mb R^n)$, and every $\lambda>0$,
from which follows in view of Kolmogorov's inequality the following $L^q$-inequality \begin{equation} \label{eq:1p3}
\int_E \left|\mc M_d\phi(u)\right|^q\mr du \leq \frac{1}{1-q} |E|^{1-q} \|\phi\|_1^q, \end{equation} for every $q\in(0,1)$, every $\phi\in L^1(\mb R^n)$ and every measurable subset $E$ of $\mb R^n$ of finite measure. It is not difficult to see that the weak type inequality \eqref{eq:1p2} is best possible. For refinements of this inequality one can see \cite{15}, \cite{17} and \cite{18}.
An approach for studying the behaviour of this maximal operator in more depth is the introduction of the so-called Bellman functions related to it, which reflect certain deeper properties of the operator by localizing. Such functions related to the $L^q$ inequality \eqref{eq:1p3} have been precisely evaluated in \cite{11}. Define $\Av_E(\psi)=\frac{1}{|E|} \int_E |\psi|$, where $E\subseteq \mb R^n$ is measurable of positive measure and $\psi$ is measurable on $E$, and, fixing a dyadic cube $Q$, define the localized maximal operator $\mc M'_d\phi$ as in \eqref{eq:1p1} but with the dyadic cubes $S$ being assumed to be contained in the ambient dyadic cube $Q$. Then for every $q\in(0,1)$ we let
\begin{equation} \label{eq:1p4}
B_q(f,h)=\sup\left\{ \frac{1}{|Q|} \int_Q (\mc M'_d\phi)^q: \Av_Q(\phi)=f,\ \Av_Q(\phi^q)=h \right\}, \end{equation}
where $\phi$ is nonnegative in $L^1(Q)$ and the variables $f, h$ satisfy $0<h\leq f^q$. By a scaling argument it is easy to see that the above is independent of the choice of $Q$ (so we just have written $B_q(f,h)$ and we may take $Q=[0,1]^n$).
In \cite{11} the function \eqref{eq:1p4} has been precisely evaluated. The proof has been given in the much more general setting of tree-like structures on probability spaces.
More precisely we consider a non-atomic probability space $(X,\mu)$ and let $\mc T$ be a family of measurable subsets of $X$, that has a tree-like structure similar to the one in the dyadic case (the exact definition will be given in Section \ref{sec:2}).
Then we define the dyadic maximal operator associated with $\mc T$, by \begin{equation} \label{eq:1p5}
\mc M_{\mc T}\phi(x) = \sup \left\{ \frac{1}{\mu(I)} \int_I |\phi|\,\mr d\mu: x\in I\in \mc T \right\}, \end{equation} for every $\phi\in L^1(X,\mu)$. \\
This operator is related to the theory of martingales and satisfies essentially the same inequalities as $\mc M'_d$ does. Now we define the corresponding Bellman function of $\mc M_{\mc T}$ by \begin{multline} \label{eq:1p6} B_q^{\mc T}(f,h,L,k) = \sup \left\{ \int_E \left[ \max(\mc M_{\mc T}\phi, L)\right]^q\mr d\mu: \phi\geq 0, \int_X\phi\,\mr d\mu=f, \right. \\ \left. \int_X\phi^q\,\mr d\mu = h,\ E\subseteq X\ \text{measurable with}\ \mu(E)=k\right\}, \end{multline} the variables $f, h, L, k$ satisfying $0<h\leq f^q$, $L\geq f$ and $k\in (0,1]$.
The evaluation of \eqref{eq:1p6} is now given in \cite{11}, and has been done in several steps. The first one is to find the value of \begin{equation} \label{eq:1p7} B_q^{\mc T}(f,h,f,1) = \sup\left\{ \int_X (\mc M_{\mc T}\phi)^q\,\mr d\mu:\ \phi \geq 0,\ \int_X \phi\,\mr d\mu = f,\ \int_X \phi^q\,\mr d\mu = h\right\}. \end{equation}
It is proved in \cite{11} that \eqref{eq:1p7} equals $h\,\omega_q\!\left(\frac{f^q}{h}\right)$, where $\omega_q: [1,+\infty) \to [1,+\infty)$ is defined as $\omega_q(z) = \left[H^{-1}_q(z)\right]^q$, and where $H^{-1}_q$ is the inverse of $H_q$ given by $H_q(z) = (1-q)z^q + qz^{q-1}$, for $z\geq 1$. \\
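For concreteness, $\omega_q$ can be evaluated numerically by inverting $H_q$ on $[1,+\infty)$, where it is strictly increasing with $H_q(1)=1$. The following Python sketch (an illustration only; the sample data and the bisection tolerance are chosen ad hoc) does this and evaluates the quantity $h\,\omega_q\!\left(\frac{f^q}{h}\right)$ appearing in \eqref{eq:1p7}.
\begin{verbatim}
# Numerical evaluation of H_q, its inverse on [1, infinity), and omega_q.
def H(q, z):
    # H_q(z) = (1-q) z^q + q z^(q-1); increasing on [1, infinity), H_q(1) = 1
    return (1 - q) * z**q + q * z**(q - 1)

def H_inv(q, y, tol=1e-12):
    # invert H_q on [1, infinity) by bisection (assumes y >= 1)
    lo, hi = 1.0, 2.0
    while H(q, hi) < y:                 # bracket the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H(q, mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def omega(q, z):
    # omega_q(z) = (H_q^{-1}(z))^q
    return H_inv(q, z) ** q

q, f, h = 0.5, 2.0, 1.0                 # sample data with 0 < h <= f^q
print("h * omega_q(f^q / h) =", h * omega(q, f**q / h))
\end{verbatim}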
The second step for the evaluation of \eqref{eq:1p6} is to find $B_q^{\mc T}(f,h,L,1)$ for arbitrary $L\ge f$. We state the related result: \begin{ctheorem}{1} \label{thm:1} With the above notation \begin{equation} \label{eq:1p8} B_q^{\mc T}(f,h,L,1) = h\,\omega_q\!\left( \frac{(1-q)L^q + qL^{q-1}f}{h}\right). \end{equation} \end{ctheorem}
Our aim in this paper is to characterize the extremal sequences of functions involving \eqref{eq:1p8}. More precisely we will prove the following \begin{ctheorem}{A} \label{thm:a} Let $\phi_n: (X,\mu) \to \mb R^+$ be such that $\int_X \phi_n\mr d\mu=f$ and $\int_X \phi_n^q\mr d\mu=h$, where $f,h$ are fixed with $0< h\leq f^q$, $q\in (0,1)$ and $n\in \mb N$. Suppose additionally that $L\geq f$. Then the following are equivalent: \begin{enumerate}[i)] \item \quad $\displaystyle \lim_n \int_X \left[ \max(\mc M_{\mc T}\phi_n, L)\right]^q\mr d\mu = B_q^{\mc T} (f, h, L, 1) $
\item \quad $\displaystyle \lim_n \int_X \left| \max(\mc M_{\mc T}\phi_n, L) - c^\frac{1}{q}\phi_n\right|^q\mr d\mu = 0 $, \end{enumerate} where $c = \omega_q\!\left(\frac{(1-q)L^q + qL^{q-1}f}{h}\right)$. \end{ctheorem}
We discuss now the method of the proof of Theorem \ref{thm:a}. We begin by proving two theorems (\ref{thm:4p1} and \ref{thm:4p2}) which are generalizations of the results in \cite{11}. By using these theorems, we prove Theorem \ref{thm:4p3}, which is valid for any extremal sequence and in fact is a weak form of Theorem \ref{thm:a}. We then apply Theorem \ref{thm:4p3} to a new sequence of functions, called $(g_{\phi_n})_n$, which arises from $(\phi_n)_n$ in a natural way, as we shall see.
The function $g_{\phi_n}$ is in fact equal to $\phi_n$ on the set $\left\{ \mc M_{\mc T} \phi_n \leq L\right\}$, and constant on certain subsets of $E_n = \left\{\mc M_{\mc T}\phi_n > L\right\}$, which are enough for one to describe the behavior of $\mc M_{\mc T}\phi_n$ in $E_n$.
This new sequence has the property that it is in fact arbitrarily close to $(\phi_n)_n$, and thus it is extremal. An application of Theorem \ref{thm:4p3} to this new sequence, combined with some lemmas, will then enable us to provide the proof of Theorem \ref{thm:a}.
We also need to mention that the extremizers for the standard Bellman function for the case $p>1$ have been studied in \cite{16}, inspired by \cite{10}. In this paper we study the more general case (for the Bellman function of three variables and for $q\in (0,1)$), which presents additional difficulties because of the presence of the third variable $L$.
We note also that further study of the dyadic maximal operator can be seen in \cite{19} and \cite{18} where symmetrizations principles for this operator are presented, while other approaches for the determination of certain Bellman function can be found in \cite{26}, \cite{27}, \cite{31}, \cite{32}, and \cite{33}.
Also, the phenomenon that the norm of a maximal operator is attained by a sequence of eigenfunctions does not occur here for the first time; see for example \cite{4} and \cite{5}.
Nevertheless, as far as we know, this phenomenon is presented here and in \cite{6} for the first time for the case of more general norms, such as the Bellman functions that we describe.
There are several problems in Harmonic Analysis where Bellman functions naturally arise. Such problems (including the dyadic Carleson imbedding and weighted inequalities) are described in \cite{14} (see also \cite{12}, \cite{13}) and also connections to Stochastic Optimal Control are provided, from which it follows that the corresponding Bellman functions satisfy certain nonlinear second order PDE.
The exact computation of a Bellman function is a difficult task which is connected with the deeper structure of the corresponding Harmonic Analysis problem. Thus far several Bellman functions have been computed (see \cite{2}, \cite{3}, \cite{10}, \cite{25}, \cite{27}, \cite{31}, \cite{32}, \cite{33}). L.~Slavin, A.~Stokolos and V.~Vasyunin \cite{26} linked the Bellman function computation to solving certain PDE's of the Monge-Amp\`{e}re type, and in this way they obtained an alternative proof of the Bellman functions related to the dyadic maximal operator in \cite{10}. In this last mentioned work the Bellman function corresponding to \eqref{eq:1p7} is precisely evaluated for the case $q>1$. Also, in \cite{33}, using the Monge-Amp\`{e}re equation approach, a more general Bellman function than the one related to the dyadic Carleson imbedding Theorem has been precisely evaluated, thus generalizing the corresponding result in \cite{10}. For more recent developments and results related to the Bellman function technique we refer to \cite{1}, \cite{6}, \cite{7}, \cite{22}, \cite{23}, \cite{24}, \cite{28}, \cite{29}, \cite{36}. Additional results can be found in \cite{2}, \cite{21}, \cite{34}, \cite{35}, while for the study of the general theory of maximal operators one can consult \cite{30}.
In this paper, as in our previous ones, we use Bellman functions as a means to gain a deeper understanding of the corresponding maximal operators, and we do not use the standard techniques such as Bellman dynamics and induction, corresponding PDE's, obstacle conditions etc. Instead, our methods being different from the Bellman function technique, we rely on the combinatorial structure of these operators. For such approaches, which enable us to study and solve problems such as the one described in this article, one can see \cite{8}, \cite{9}, \cite{10}, \cite{11}, \cite{16} and \cite{19}.
\section{Preliminaries} \label{sec:2} Let $(X,\mu)$ be a nonatomic probability space. We give the following from \cite{10} or \cite{11}. \begin{definition} \label{def:2p1} A set $\mc T$ of measurable subsets of $X$ will be called a tree if the following are satisfied: \begin{enumerate}[i)] \item $X\in\mc T$ and for every $I\in\mc T$, $\mu(I) > 0$. \item For every $I\in\mc T$ there corresponds a finite or countable subset $C(I)$ of $\mc T$ containing at least two elements such that
\begin{enumerate}[a)] \item the elements of $C(I)$ are pairwise disjoint subsets of $I$. \item $I = \cup\, C(I)$. \end{enumerate} \item $\mc T = \cup_{m\geq 0} \mc T_{(m)}$, where $\mc T_{(0)} = \left\{ X \right\}$ and $\mc T_{(m+1)} = \cup_{I\in \mc T_{(m)}} C(I)$. \item The following holds \[ \lim_{m\to\infty} \sup_{I\in \mc T_{(m)}} \mu(I) = 0 \] \end{enumerate} \end{definition}
\noindent We state now the following lemma as is given in \cite{10}. \begin{lemma} \label{lem:2p1} For every $I\in \mc T$ and every $\alpha\in (0,1)$ there exists a subfamily $\mc F(I) \subseteq \mc T$ consisting of pairwise disjoint subsets of $I$ such that \[ \mu\!\left( \underset{J\in\mc F(I)}{\bigcup} J \right) = \sum_{J\in\mc F(I)} \mu(J) = (1-\alpha)\mu(I). \] \end{lemma}
\noindent Suppose now that we are given a tree $\mc T$ on a nonatomic probability space $(X,\mu)$. Then we define the associated dyadic maximal operator $\mc M_{\mc T}$ by \eqref{eq:1p5}. (see the Introduction).
\begin{definition} \label{def:2p2} Let $(\phi_n)_n$ be a sequence of $\mu$-measurable nonnegative functions defined on $X,\ q\in(0,1)$, $0<h\leq f^q$ and $L\geq f$. Then $(\phi_n)_n$ is called extremal if the following hold $\int_X \phi_n\,\mr d\mu = f$, $\int_X \phi_n^q\,\mr d\mu = h$ for every $n\in \mb N$, and $\lim_n \int_X \left[ \max\left( \mc M_{\mc T}\phi_n,L\right) \right]^q\mr d\mu = c h$, where $c = \omega_q\!\left( \frac{(1-q)L^q + qL^{q-1}f}{h} \right)$. (See Theorem \ref{thm:1}, relation \eqref{eq:1p8}). \end{definition}
For the proof of Theorem \ref{thm:1} an effective linearization was introduced for the operator $\mc M_{\mc T}$ valid for certain functions $\phi$.
We describe it. For $\phi \in L^1(X,\mu)$ nonnegative function and $I\in \mc T$ we define $\Av_I(\phi) = \frac{1}{\mu(I)} \int_I \phi\,\mr d\mu$. We will say that $\phi$ is $\mc T$-good if the set \[ \mc A_\phi = \left\{ x\in X: \mc M_{\mc T}\phi(x) > \Av_I(\phi)\ \text{for all}\ I\in \mc T\ \text{such that}\ x\in I \right\} \] has $\mu$-measure zero.
\noindent Let now $\phi$ be $\mc T$-good and $x\in X\setminus\mc A_\phi$. We define $I_\phi(x)$ to be the largest element of the nonempty set \[ \big\{I\in \mc T: x\in I\ \text{and}\ \mc M_{\mc T}\phi(x) = \Av_I(\phi)\big\}. \]
Now given $I\in\mc T$ let \begin{align*} A(\phi,I) &= \big\{ x\in X\setminus\mc A_\phi: I_\phi(x)=I \big\} \subseteq I,\ \text{and} \\[2pt] S_\phi &= \big\{ I\in\mc T: \mu\left(A(\phi,I)\right) > 0 \big\} \cup \big\{X\big\}. \end{align*}
Obviously then, $\mc M_{\mc T}\phi = \sum_{I\in S_\phi} \Av_I(\phi) \mc X_{A(\phi,I)}$, $\mu$-almost everywhere on $X$, where $\mc X_S$ is the characteristic function of $S\subseteq X$.
We define also the following correspondence $I \to I^\star$ by: $I^\star$ is the smallest element of $\{ J\in S_\phi: I\subsetneq J \}$.
This is defined for every $I\in S_\phi$ except $X$. It is obvious that the $A(\phi,I)$'s are pairwise disjoint and that $\mu\!\left( \cup_{I\notin S_\phi} A(\phi,I) \right)=0$, so that $\cup_{I\in S_\phi} A(\phi,I) \approx X$, where by $A\approx B$ we mean that $\mu(A\setminus B) = \mu(B\setminus A) = 0$. \\
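Although it is not needed for the proofs, the linearization above is easy to visualize on a finite model. The Python sketch below (a toy example on the dyadic subintervals of $[0,1)$ down to a fixed depth, not the general tree of Definition~\ref{def:2p1}; the depth and the random step function are chosen ad hoc) computes $\mc M_{\mc T}\phi$ for a step function $\phi$, determines $I_\phi(x)$ and the sets $A(\phi,I)$, and checks the identity $\mc M_{\mc T}\phi = \sum_{I\in S_\phi} \Av_I(\phi)\, \mc X_{A(\phi,I)}$ cell by cell.
\begin{verbatim}
# Toy illustration of the linearization on a finite dyadic tree of [0,1).
import random

N = 6                                   # depth; 2^N cells of length 2^-N
random.seed(0)
phi = [random.random() for _ in range(2**N)]     # step function on the cells

def avg(level, pos):
    # average of phi over the dyadic interval of length 2^-level at position pos
    size = 2**(N - level)
    return sum(phi[pos*size:(pos+1)*size]) / size

I_phi, maximal = [], []
for cell in range(2**N):
    vals = [avg(level, cell >> (N - level)) for level in range(N + 1)]
    m = max(vals)                       # value of the maximal function on the cell
    level = min(l for l, v in enumerate(vals) if abs(v - m) < 1e-12)
    maximal.append(m)
    I_phi.append((level, cell >> (N - level)))   # largest maximizing interval

A = {}                                  # A(phi, I) = {cells x with I_phi(x) = I}
for cell, I in enumerate(I_phi):
    A.setdefault(I, []).append(cell)

for I, cells in A.items():              # the linearization identity, cell by cell
    for cell in cells:
        assert abs(maximal[cell] - avg(*I)) < 1e-12
print("linearization holds with", len(A), "sets A(phi, I)")
\end{verbatim}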
We will need the following \begin{lemma} \label{lem:2p2} Let $\phi$ be $\mc T$-good and $I\in \mc T$, $I \neq X$. Then $I\in S_\phi$ if and only if every $J\in \mc T$ that contains properly $I$ satisfies $\Av_J(\phi) < \Av_I(\phi)$. \end{lemma}
\begin{proof} Suppose that $I\in S_\phi$. Then $\mu(A(\phi,I))>0$. As a consequence $A(\phi,I)\neq \emptyset$, so there exists $x\in A(\phi,I)$. By the definition of $A(\phi,I)$ we have that $I_\phi(x)=I$, that is $I$ is the largest element of $\mc T$ such that $\mc M_{\mc T}\phi(x) = \Av_I(\phi)$. As a consequence the implication stated in our lemma holds.
Conversely now, suppose that $I\in \mc T$ and for every $J\in \mc T$ with $J \supsetneq I$ we have that $\Av_J(\phi) < \Av_I(\phi)$. Then since $\phi$ is $\mc T$-good, for every $x\in I\setminus \mc A_\phi$ there exists $J_x$ ($= I_\phi(x)$) in $S_\phi$ such that $\mc M_{\mc T}\phi(x) = \Av_{J_x}(\phi)$ and $x\in J_x$. By our hypothesis we must have that $J_x \subseteq I$.
Now, consider the family $S' = \left\{ J_x,\ x\in I\setminus \mc A_\phi \right\}$. This has the property $\cup_{x\in I\setminus \mc A_\phi} J_x \approx I$. Choose a subfamily $S^2 = \{J_1, J_2, \ldots\}$ of $S'$, maximal under the $\subseteq$ relation. Then $I\approx \cup_{i=1}^\infty J_i$ where the last union is pairwise disjoint because of the maximality of $S^2$.
Suppose now that $I$ does not belong to $S_\phi$. This means that $\mu(A(\phi,I))=0$, that is we must have that for every $x\in I\setminus\mc A_\phi$, $J_x\subsetneq I$. Since $J_x$ belongs to $S_\phi$ for every such $x$, by the first part of the proof of this Lemma we conclude that $\Av_{J_x}(\phi) > \Av_I(\phi)$.
Thus for every $i$, we must have that $\Av_{J_i}(\phi) > \Av_I(\phi)$. Since $S^2$ is a partition of $I$, we reach a contradiction. Thus we must have that $I\in S_\phi$. \end{proof}
Now the following is true, obtained in \cite{3}. \begin{lemma} \label{lem:2p3} Let $\phi$ be $\mc T$-good. \begin{enumerate}[i)] \item If $I, J\in S_\phi$ then either $A(\phi,J)\cap I = \emptyset$ or $J\subseteq I$. \item If $I\in S_\phi$, then there exists $J\in C(I)$ such that $J\notin S_\phi$. \item For every $I\in S_\phi$ we have that \[ I \approx \underset{\substack{J\in S_\phi \\ J\subseteq I\ \,}}{\cup} A(\phi,J). \] \item For every $I\in S_\phi$ we have that \begin{gather*} A(\phi,I) = I\setminus \underset{\substack{J\in S_\phi \\ J^\star = I\ }}{\cup} J,\ \ \text{so that} \\ \mu(A(\phi,I)) = \mu(I) - \sum_{\substack{J\in S_\phi\\ J^\star=I\ }} \mu(J). \end{gather*} \end{enumerate} \end{lemma}
\section{Some technical Lemmas} In this section we collect some technical results whose proofs can be seen in \cite{4}. We begin with
\begin{lemma} \label{lem:3p1} Let $0<q<1$ be fixed. Then \begin{enumerate}[i)] \item The function $\omega_q: [1,+\infty) \to [1,+\infty)$ is strictly increasing and strictly concave. \item The function $U_q(x) = \frac{\omega_q(x)}{x}$ is strictly increasing on $[1,+\infty)$. \end{enumerate} \end{lemma}
\noindent For the next Lemma we consider for any $q\in (0,1)$ the following formula \[ \sigma_q(k,x) = \frac{H_q\!\left(\frac{x(1-k)}{1-kx}\right)}{H_q(x)}, \] defined for all $k, x$ such that $0<k<1$ and $0<x<\frac{1}{k}$. A straightforward computation shows (as is mentioned in \cite{4}) that \[ \sigma_q(k,x) = \frac{(1-q)x + q - kx}{(1-k)^{1-q}(1-kx)^q((1-q)x+q)}. \]
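The closed form for $\sigma_q(k,x)$ above is readily confirmed numerically; the following short Python sketch (illustrative only, on an ad hoc grid of admissible parameters) compares it with the defining quotient.
\begin{verbatim}
# Check that the closed form of sigma_q(k,x) agrees with its definition
# sigma_q(k,x) = H_q( x(1-k)/(1-kx) ) / H_q(x)  for 0 < k < 1, 0 < x < 1/k.
def H(q, z):
    return (1 - q) * z**q + q * z**(q - 1)

def sigma_def(q, k, x):
    return H(q, x * (1 - k) / (1 - k * x)) / H(q, x)

def sigma_closed(q, k, x):
    return ((1 - q)*x + q - k*x) / \
           ((1 - k)**(1 - q) * (1 - k*x)**q * ((1 - q)*x + q))

for q in (0.2, 0.5, 0.8):
    for k in (0.1, 0.4, 0.7):
        for s in (0.2, 0.5, 0.8):
            x = s / k                            # a point of (0, 1/k)
            assert abs(sigma_def(q, k, x) - sigma_closed(q, k, x)) < 1e-10
print("closed form of sigma_q confirmed on the test grid")
\end{verbatim}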
We state now the following \begin{lemma} \label{lem:3p2} \begin{enumerate}[i)] \item For any fixed $\lambda>1$ the equation \begin{equation} \label{eq:3p1} H_q\!\left(\frac{x(1-k)}{1-kx}\right) = \lambda H_q(x), \end{equation} has a unique solution $x = \mc X(\lambda,k) = \mc X_\lambda(k)$ on the interval $\left(1,\frac 1 k\right)$ and it has a solution in the interval $(0,1)$ if and only if $\lambda<(1-k)^{q-1}$ in which case this is also unique.
\item For $\mu\geq 0$ define the following function \begin{equation} \label{eq:3p2} R_{q,\mu}(k,x) = \left(\frac{x(1-k)}{1-kx}\right)^q \frac{1}{\sigma_q(k,x)} + \left(\mu^q-x^q\right)(1-k), \end{equation} on $W=\big\{(k,x): 0<k<1\ \text{and}\ 1<x<\frac{1}{k}\big\}$. \\
Then if $\mu > 1$ and $\xi$ is in $(0,1]$ the maximum value of $R_{q,\mu}$ on the set $\left\{ (k,x)\in W: 0<k\leq \xi\ \text{and}\ \sigma_q(k,x)=\lambda\right\}$ is equal to $\frac{1}{\lambda}\omega_q(\lambda H_q(\mu))$ if $\xi\geq k_0(\lambda,\mu)$, where $k_0(\lambda,\mu)$ is given by \begin{equation} \label{eq:3p3} k_0(\lambda,\mu) = \frac{\omega_q(\lambda H_q(\mu))^\frac{1}{q} - \mu}{\mu\!\left(\omega_q(\lambda H_q(\mu))^\frac{1}{q}-1\right)}, \end{equation} and is the unique in $\left(0, \frac 1 \mu\right)$ solution of the equation $\sigma_q(k_0,\mu)=\lambda$.
Additionally comparing with \rnum{1}) of this Lemma we have that $\mc X_\lambda(k_0)=\mu$. \end{enumerate} \end{lemma}
\noindent Now for the next Lemma we fix real numbers $f,\ h$ and $k$ with $0<h<f^q$ and $0<k<1$, and we consider the functions \[ \ell_k(B) = (1-k)^{1-q}(f-B)^q + k^{1-q}B^q, \] defined for $0\leq B\leq f$ and \begin{equation} \label{eq:3p4} R_k(B) = \left\{ \begin{aligned} & \left( h - (1-k)^{1-q}(f-B)^q \right) \omega_q\!\left( \frac{k^{1-q}B^q}{h - (1-k)^{1-q}(f-B)^q} \right), \\ & \hphantom{\frac{k^{1-q}B^q}{1-q},}\hspace{70pt} \text{if}\ (1-k)^{1-q}(f-B)^q< h\leq \ell_k(B), \\[5pt] & \frac{k^{1-q}B^q}{1-q},\hspace{70pt} \text{if}\ h\leq (1-k)^{1-q}(f-B)^q, \end{aligned} \right. \end{equation} defined for all $B\in [0,f]$ such that $\ell_k(B)\geq h$. \\
Noting that $\ell_k$ has an absolute maximum at $B=kf$ with $\ell_k(kf) = f^q > h$ and that it is monotone on each of the intervals $(0, kf)$ and $(kf, f)$, we conclude that either $\ell_k(f) < h$, i.e. $k^{1-q}f^q < h$, in which case the equation $\ell_k(B) = h$ has a unique solution in $(kf,f)$ and this is denoted by $\rho_1 = \rho_1(f,h,k)$, or $\ell_k(f) \geq h$, in which case we set $\rho_1 = \rho_1(f,h,k) = f$.
Also either $\ell_k(0)<h$, i.e. $(1-k)^{1-q}f^q < h$ in which case the equation $\ell_k(B)=h$ has a unique solution on $(0,kf)$ and this is denoted by $\rho_0 = \rho_0(f,h,k)$, or $\ell_k(0)\geq h$ in which case we set $\rho_0 = \rho_0(f,h,k) = 0$. In all cases the domain of definition of $R_k$ is the interval $W_k = W_k(f,h) = [\rho_0,\rho_1]$. \\
We are now able to give the following \begin{lemma} \label{lem:3p3} The maximum value of the function $R_k$ on $W_k$ is attained at the unique point $B^\star = \mc X_\lambda(k)kf > kf$ where $\lambda = \frac{f^q}{h}$ (see Lemma \ref{lem:3p2}). Moreover \begin{equation} \label{eq:3p5} \max_{W_k}(R_k) = h\,\omega_q\!\left(\frac{f^q}{h} H_q(\mc X_\lambda(k)) \right) - (1-k)f^q(\mc X_\lambda(k))^q. \end{equation}
Additionally $B^\star$ satisfies \[ (1-k)^{1-q}(f-B^\star)^q < h < \ell_k(B^\star). \] \end{lemma}
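Lemma~\ref{lem:3p3} can be illustrated numerically: one computes $\mc X_\lambda(k)$ by bisection from \eqref{eq:3p1} and compares the maximizer of $R_k$ over a grid of $W_k$ with $B^\star=\mc X_\lambda(k)kf$. The following Python sketch does this for sample data (it is an illustration only; the sample values, the grid and the bisection depth are chosen ad hoc).
\begin{verbatim}
# Illustration of the lemma above: the grid maximizer of R_k should be close to
# B* = X_lambda(k)*k*f, with maximum close to the claimed closed-form value.
def H(q, z):
    return (1 - q) * z**q + q * z**(q - 1)

def H_inv(q, y):
    lo, hi = 1.0, 2.0
    while H(q, hi) < y:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H(q, mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def omega(q, z):
    return H_inv(q, max(z, 1.0)) ** q

q, f, h, k = 0.5, 2.0, 1.2, 0.3          # sample data: 0 < h < f^q, 0 < k < 1
lam = f**q / h

lo, hi = 1.0 + 1e-9, 1.0/k - 1e-9        # X_lambda(k): the root in (1, 1/k)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if H(q, mid*(1-k)/(1-k*mid)) < lam * H(q, mid) else (lo, mid)
X = 0.5 * (lo + hi)
B_star = X * k * f

def ell(B):
    return (1-k)**(1-q) * (f-B)**q + k**(1-q) * B**q

def R(B):                                # the function R_k defined in the text
    rest = (1-k)**(1-q) * (f-B)**q
    if h <= rest:
        return k**(1-q) * B**q / (1 - q)
    return (h - rest) * omega(q, k**(1-q) * B**q / (h - rest))

grid = [i*f/10000 for i in range(1, 10000) if ell(i*f/10000) >= h]
B_num = max(grid, key=R)
print("grid maximizer", B_num, " vs  B* =", B_star)
print("grid maximum  ", R(B_num), " vs  claimed value =",
      h*omega(q, lam*H(q, X)) - (1-k) * f**q * X**q)
\end{verbatim}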
The above Lemmas are enough for us to study the extremal sequences for \eqref{eq:1p8} as we shall see in the next Section.
\section{Extremal sequences for the Bellman function} We prove the following \begin{theorem} \label{thm:4p1} Let $\phi$ be a $\mc T$-good function such that $\int_X \phi\,\mr d\mu = f$. Let also $B=\{I_j\}_j$ be a family of pairwise disjoint elements of $S_\phi$ which is maximal in $S_\phi$ under the $\subseteq$ relation, that is, $I\in S_\phi \Rightarrow I\cap (\cup_j I_j) \neq \emptyset$. Then the following inequality holds \begin{multline*} \int_{X\setminus \cup_j I_j} (\mc M_{\mc T}\phi)^q\,\mr d\mu \leq \\ \frac{1}{(1-q)\beta} \left[ (\beta+1) \left(f^q - \sum\mu(I_j)y_{I_j}^q\right) - (\beta+1)^q \int_{X\setminus \cup_j I_j} \phi^q\,\mr d\mu \right] \end{multline*} for every $\beta>0$, where $y_{I_j} = \mathrm{Av}_{I_j}(\phi)$. \end{theorem}
\begin{proof} We follow \cite{4}.
Let $S = S_\phi$, $\alpha_I = \mu(A(\phi,I))$, $\rho_I = \frac{\alpha_I}{\mu(I)}\in(0,1]$ and \[ y_I = \Av_I(\phi) = \frac{1}{\mu(I)} \sum_{J\in S: J\subseteq I} \alpha_J x_J,\ \ \text{for every}\ \ I\in S, \] where $x_J= \frac{1}{\alpha_J} \int_{A(\phi,J)} \phi\,d\mu$, for any $J \in S_\phi$. It is easy now to see in view of Lemma \ref{lem:2p3} \rnum{4}) that \[ y_I\mu(I) = \sum_{J\in S: J^\star=I} y_J\mu(J) + \alpha_I x_I, \] and so by using concavity of the function $t\to t^q$, we have for any $I\in S$, \begin{align} \label{eq:4p1} [y_I\mu(I)]^q &= \left( \sum_{J\in S: J^\star=I} y_J\mu(J) + \alpha_Ix_I\right)^q \notag \\
&= \left( \sum_{J\in S: J^\star = I} \tau_I\mu(J)\frac{y_J}{\tau_I} + \sigma_I\alpha_I\frac{x_I}{\sigma_I}\right)^q \notag \\
&\geq \sum_{J\in S: J^\star = I} \tau_I\mu(J)\left(\frac{y_J}{\tau_I}\right)^q + \sigma_I\alpha_I\left(\frac{x_I}{\sigma_I}\right)^q, \end{align} where $\tau_I, \sigma_I > 0$ satisfy
\[ \tau_I(\mu(I)-\alpha_I) + \sigma_I\alpha_I = \sum_{J\in S: J^\star=I} \tau_I\mu(J) + \sigma_I\alpha_I = 1. \] We now fix $\beta>0$ and let \[ \sigma_I = ((\beta+1)\mu(I) - \beta\alpha_I)^{-1},\ \ \tau_I = (\beta+1)\sigma_I \]
which satisfy the above relation and thus we get by dividing with $\sigma_I^{1-q}$ that \begin{equation} \label{eq:4p2} ((\beta+1)\mu(I) - \beta\alpha_I)^{1-q} (y_I\mu(I))^q \geq \sum_{J\in S: J^\star=I} (\beta+1)^{1-q} \mu(J) y_J^q + \alpha_Ix_I^q, \end{equation}
However, \begin{equation} \label{eq:4p3} x_I^q = \left( \frac{1}{\alpha_I} \int_{A(\phi,I)} \phi\,d\mu\right)^q \geq \frac{1}{\alpha_I}\int_{A(\phi,I)} \phi^q\,\mr d\mu. \end{equation}
We sum now \eqref{eq:4p2} over all $I\in S$ such that $I\supsetneq I_j$ for some $j$ (which we denote by $I\supsetneq \mr{piece}(B)$) and we obtain \begin{multline} \label{eq:4p4} \sum_{I\supsetneq \mr{piece}(B)} ((\beta+1)\mu(I) - \beta\alpha_I)^{1-q} (y_I\mu(I))^q \geq\\ \sum_{\substack{I\supsetneq \mr{piece}(B)\\ I\neq X}} (\beta+1)^{1-q} \mu(I) y_I^q + \sum_j (\beta+1)^{1-q} \mu(I_j) y_{I_j}^q + \sum_{I\supsetneq \mr{piece}(B)} a_I x_I^q. \end{multline}
Note that the first two sums are produced in \eqref{eq:4p4} because of the maximality of $(I_j)$. Relation \eqref{eq:4p4} now gives \begin{multline} \label{eq:4p5} \sum_{I\supsetneq \mr{piece}(B)} (\beta+1)^{1-q}\mu(I)y_I^q - \sum_{I\supsetneq \mr{piece}(B)} ((\beta+1)\mu(I) - \beta\alpha_I)^{1-q} (y_I\mu(I))^q \leq \\ (\beta+1)^{1-q}y_X^q - \int_{X\setminus \cup I_j} \phi^q\,\mr d\mu - \sum_j (\beta+1)^{1-q}\mu(I_j)y_{I_j}^q, \end{multline}
in view of H\"{o}lder's inequality \eqref{eq:4p3}. Thus \eqref{eq:4p5} gives \begin{multline} \label{eq:4p6} \sum_{I\supsetneq \mr{piece}(B)} \left[(\beta+1)^{1-q}\mu(I) - ((\beta+1)\mu(I) - \beta\alpha_I)^{1-q}\mu(I)^q\right]y_I^q \leq \\ (\beta+1)^{1-q} \left(f^q - \sum\mu(I_j)y_{I_j}^q\right) - \int_{X\setminus \cup I_j} \phi^q\,\mr d\mu, \end{multline}
On the other hand we have that \begin{align} \label{eq:4p7} & \frac{1}{\mu(I)} \left[ (\beta+1)^{1-q}\mu(I) - ((\beta+1)\mu(I) - \beta\alpha_I)^{1-q}\mu(I)^q\right] = \notag \\ & (\beta+1)^{1-q} - ((\beta+1) - \beta\rho_I)^{1-q} \geq
(1-q)(\beta+1)^{-q}\beta\rho_I = \notag \\ & (1-q)(\beta+1)^{-q}\beta\frac{\alpha_I}{\mu(I)}, \end{align}
where the inequality in \eqref{eq:4p7} follows from the mean value theorem.
From the last two inequalities we conclude \begin{equation} \label{eq:4p8} (1-q)(\beta+1)^{-q}\beta \!\! \sum_{I\supsetneq \mr{piece}(B)} a_Iy_I^q \leq (\beta+1)^{1-q} \left( f^q - \sum\mu(I_j)y_{I_j}^q\right) - \int_{X\setminus \cup I_j} \!\!\phi^q\,\mr d\mu. \end{equation}
Now it is easy to see that \[ \sum_{I\supsetneq \mr{piece}(B)} a_Iy_I^q = \int_{X\setminus \cup I_j} (\mc M_{\mc T}\phi)^q\,\mr d\mu, \]
because $B=\{I_j\}_j$ is a family of elements of $S_\phi$. Then \eqref{eq:4p8} becomes \begin{multline*} \int_{X\setminus \cup I_j} (\mc M_{\mc T}\phi)^q\,\mr d\mu \leq \\ \frac{1}{(1-q)\beta} \left[ (\beta+1)\bigg(f^q - \sum_j \mu(I_j)y_{I_j}^q\bigg) - (\beta+1)^q\int_{X\setminus \cup I_j} \phi^q\,\mr d\mu \right] \end{multline*} for any fixed $\beta>0$, and $\phi: \mc T$-good.
This completes the proof of Theorem \ref{thm:4p1}. \end{proof}
In the same lines as above we can prove: \begin{theorem} \label{thm:4p2} Let $\phi$ be $\mc T$-good and $\mc A=\{I_j\}$ be a pairwise disjoint family of elements of $S_\phi$. Then for every $\beta>0$ we have that: \[ \int_{\cup I_j} (\mc M_{\mc T}\phi)^q\,\mr d\mu \leq \frac{1}{(1-q)\beta} \left[ (\beta+1) \sum \mu(I_j)y_{I_j}^q - (\beta+1)^q \int_{\cup I_j} \phi^q\,\mr d\mu \right]. \] \end{theorem}
\begin{proof} We use the technique from the proof of Theorem \ref{thm:4p1}, summing inequality \eqref{eq:4p2} over all $I\in S_\phi$ with $I\subseteq I_j$ for some $j$. The remaining details are easily verified. \end{proof}
We have now the following generalization of Theorem \ref{thm:4p1}. \begin{corollary} \label{cor:4p1} Let $\phi$ be $\mc T$-good and $\mc A = \{I_j\}$ be a pairwise disjoint family of elements of $S_\phi$. Then for every $\beta>0$ \begin{multline} \label{eq:4p9} \int_{X\setminus\cup I_j} (\mc M_{\mc T}\phi)^q\,\mr d\mu \leq \frac{1}{(1-q)\beta} \left[ (\beta+1) \left( f^q - \sum\mu(I_j) y_{I_j}^q\right) - \right. \\ \left. (\beta+1)^q \int_{X\setminus\cup I_j} \phi^q\,\mr d\mu \right], \end{multline} where $y_{I_j} = Av_{I_j}(\phi)$, $f=\int_X \phi\,\mr d\mu$. \end{corollary}
\begin{proof} We choose a pairwise disjoint family $(J_i)_i = B \subseteq S_\phi$ such that the union $\mc A\cup B$ is maximal under the relation $\subseteq$ in $S_\phi$, and $I_j\cap J_i = \emptyset$ for all $i, j$.
Then if we apply Theorem \ref{thm:4p1} for $\mc A\cup B$ and Theorem \ref{thm:4p2} for $B$, and sum the two inequalities we derive the proof of our Corollary. \end{proof}
\noindent We now proceed to the \begin{proof}[Proof of Theorem \ref{thm:a}] Suppose that we are given an extremal sequence $\phi_n: (X,\mu)\to \mb R^+$ of functions, such that $\int_X \phi_n\,\mr d\mu = f$, $\int_X \phi_n^q\,\mr d\mu = h$ for any $n\in \mb N$ and \begin{equation} \label{eq:4p10} \lim_n \int_X \left[ \max(\mc M_{\mc T}\phi_n, L)\right]^q\mr d\mu = h c. \end{equation}
We prove that \begin{equation} \label{eq:4p11}
\lim_n \int_X \left| \max(\mc M_{\mc T}\phi_n,L) - c^{\frac{1}{q}}\phi_n \right|^q\mr d\mu = 0. \end{equation}
For the proof of \eqref{eq:4p11} we are going to give the chain of inequalities from which one gets Theorem \ref{thm:1}.
Then we use the fact that these inequalities become equalities in the limit. \\
Fix an $n\in\mb N$ and write $\phi=\phi_n$. For this $\phi$ we have the following \begin{equation} \label{eq:4p12} I_\phi := \int_X \left[ \max(\mc M_{\mc T}\phi,L) \right]^q\mr d\mu = \int_{\{\mc M_{\mc T}\phi \geq L\}} (\mc M_{\mc T}\phi)^q\,\mr d\mu + L^q(1-\mu(E_\phi)) \end{equation} where $E_\phi = \{\mc M_{\mc T}\phi \geq L\}$. \\
We write $E_\phi$ as $E_\phi = \cup I_j$, where $I_j$ are maximal elements of the $\mc T$, such that \begin{equation} \label{eq:4p13} \frac{1}{\mu(I_j)} \int_{I_j} \phi\,\mr d\mu \geq L. \end{equation}
We set for any $j$, $\alpha_j = \int_{I_j} \phi^q\,\mr d\mu$ and $\beta_j = \mu(I_j)^{1-q} \left( \int_{I_j} \phi\,\mr d\mu\right)^q$. Additionally we set $A = \sum \alpha_j = \int_E \phi^q\,\mr d\mu \leq h$, where $E := E_\phi$, and $B = \sum_j \left( \mu(I_j)^{q-1} \beta_j\right)^\frac{1}{q} = \int_E \phi\,\mr d\mu \leq f$. \\
We also set $k=\mu(E)$. Note that the variables $A$, $B$, $k$ depend on the function $\phi$. \\
From \eqref{eq:4p12} we now obtain \begin{equation} \label{eq:4p14} I_\phi = L^q(1-k) + \sum_j \int_{I_j} (\mc M_{\mc T}\phi)^q\,\mr d\mu. \end{equation}
Note now that from the maximality of any $I_j$ we have that $\mc M_{\mc T}\phi(x) =\\ \mc M_{\mc T(I_j)}\phi(x)$, for every $x\in I_j$ where $\mc T(I_j) = \{ J\in\mc T: J\subseteq I_j \}$.
We now apply Theorem \ref{thm:a} for the measure space $\left( I_j, \frac{\mu(\cdot)}{\mu(I_j)}\right)$ and for $L=\frac{1}{\mu(I_j)} \int_{I_j}\phi\,\mr d\mu = \Av_{I_j}(\phi)$, for any $j$, and we get that \begin{equation} I_\phi \leq L^q(1-k) + \sum_j \alpha_j\, \omega_q\! \left(\frac{\beta_j}{\alpha_j}\right). \end{equation}
Note that $k^{1-q} B^q = \left(\sum_j \mu(I_j)\right)^{1-q} \left(\sum_j \left(\mu(I_j)^{q-1} \beta_j\right)^\frac{1}{q}\right)^q \geq \sum \beta_j \geq A$ in view of H\"{o}lder's inequality. \\
We now use the concavity of the function $\omega_q$, as can be seen in Lemma \ref{lem:3p1} \rnum{1}), and we conclude that \begin{equation} \label{eq:4p16} I_\phi \leq L^q(1-k) + A\,\omega_q\!\left(\frac{\sum \beta_j}{A}\right) \leq L^q(1-k) + A\,\omega_q\!\left(\frac{k^{1-q}B^q}{A}\right), \end{equation} where the last inequality comes from the fact that $\omega_q$ is increasing.
It is not difficult to see that the parameters $A, B$ and $k$ satisfy the following inequalities: \begin{align*} & A \leq k^{1-q} B^q,\quad A\leq h,\quad B\leq f,\quad 0\leq k\leq 1\quad \text{and} \\ & h-A \leq (1-k)^{1-q}(f-B)^q, \end{align*} the last one being $\int_{X\setminus E} \phi^q\,\mr d\mu \leq \mu(X\setminus E)^{1-q} \left(\int_{X\setminus E} \phi\,\mr d\mu\right)^q$.
It is also easy to see that $B\geq kL$, by \eqref{eq:4p13} and the disjointness of $\{I_j\}_j$. From the above inequalities and \eqref{eq:4p16} we conclude that \begin{equation} \label{eq:4p17} I_\phi \leq L^q(1-k) + R_k(B), \end{equation}
where $R_k$ is given by \eqref{eq:3p4}. Thus using Lemma \ref{lem:3p3} we have that \begin{multline} \label{eq:4p18} I_\phi \leq L^q(1-k) + R_k(B^\star) = \\ L^q(1-k) + h\,\omega_q\!\left(\frac{f^q}{h} H_q(\mc X_\lambda(k))\right) - (1-k)f^q\left(\mc X_\lambda(k)\right)^q, \end{multline}
where $\lambda = \frac{f^q}{h}$, $\mc X_\lambda(k)$ is given in Lemma \ref{lem:3p2} and $B^\star = \mc X_\lambda(k) k f > k f$. According to Lemma \ref{lem:3p2}, $\mc X_\lambda(k)$ satisfies $1< \mc X_\lambda(k)< \frac{1}{k}$ and \[ H_q\!\left(\frac{\mc X_\lambda(k)(1-k)}{1-k\mc X_\lambda(k)}\right) = \lambda H_q(\mc X_\lambda(k)). \]
From \eqref{eq:4p18} we have that \begin{equation} \label{eq:4p19} I_\phi \leq \left[ L^q - f^q(\mc X_\lambda(k))^q\right](1-k) + h\,\omega_q\!\left(\frac{f^q}{h} H_q(\mc X_\lambda(k))\right). \end{equation}
We now set $\mu = \frac{L}{f} > 1$. Then \eqref{eq:4p19} becomes \begin{equation} \label{eq:4p20} I_\phi \leq f^q \left\{ \left[ \mu^q - (\mc X_\lambda(k))^q\right](1-k) + \omega_q\!\left(\frac{f^q}{h} H_q(\mc X_\lambda(k))\right)\frac{1}{\sigma_q(k,\mc X_\lambda(k))}\right \} \end{equation}
Remember that $\mc X_\lambda(k)$ satisfies $\sigma_q(k,\mc X_\lambda(k)) = \lambda = \frac{f^q}{h}$ by Lemma \ref{lem:3p2}. Now by the last equation we have that \begin{multline} \label{eq:4p21} \frac{f^q}{h} H_q(\mc X_\lambda(k)) = H_q\!\left(\frac{\mc X_\lambda(k)(1-k)}{1-k\mc X_\lambda(k)}\right) \implies \\ \omega_q\!\left(\frac{f^q}{h}H_q(\mc X_\lambda(k))\right) = \omega_q\!\left(H_q\!\left(\frac{\mc X_\lambda (k)(1-k)}{1-k\mc X_\lambda(k)}\right)\right). \end{multline}
Remember that $\omega_q(z) = \left(H_q^{-1}(z)\right)^q$, for any $z\geq 1$. Thus $\omega_q\!\left(\frac{f^q}{h}H_q(\mc X_\lambda(k))\right) = \left(\frac{\mc X_\lambda(k)(1-k)}{1-k\mc X_\lambda(k)}\right)^q$. Hence from \eqref{eq:4p20} we have as a consequence that \begin{align} \label{eq:4p22} I_\phi &\leq f^q \left\{ \left[ \mu^q - \mc X_\lambda(k)^q\right](1-k) + \frac{1}{\sigma_q(k,\mc X_\lambda(k))} \left(\frac{(1-k)\mc X_\lambda(k)}{1-k\mc X_\lambda(k)}\right)^q\right\} \notag \\
&= f^q R_{q,\mu}(k,\mc X_\lambda(k)). \end{align}
According then to Lemma \ref{lem:3p2} \rnum{2}) we have that \begin{multline} \label{eq:4p23} I_\phi \leq f^q\left\{ \frac{1}{\lambda} \omega_q(\lambda H_q(\mu))\right\} = h\,\omega_q\!\left(\frac{f^q}{h} H_q\!\left(\frac{L}{f}\right)\right) = \\ h\,\omega_q\!\left(\frac{(1-q)L^q + qL^{q-1}f}{h}\right) = h c = B_q^{\mc T}(f,h,L,1). \end{multline}
Now if $\phi$ runs along $(\phi_n)$, we see by the extremality of this sequence that in the limit we have equality in \eqref{eq:4p23}.
That is we have equalities in the limit to all the previous steps which lead to \eqref{eq:4p23}. \\
For $\phi=\phi_n$ we write $A=A_n,\ B=B_n$ and $k=k_n$. Since we have equality in the limit in the last inequality giving \eqref{eq:4p23}, we conclude that $k_n\to k_0$, where $k_0$ satisfies: \[ k_0(\lambda,\mu) = \frac{\omega_q(\lambda H_q(\mu))^{\frac{1}{q}} - \mu}{\mu\left(\omega_q(\lambda H_q(\mu))^\frac{1}{q} - 1\right)} \quad \text{and} \quad \mc X_\lambda(k_0) = \mu = \frac{L}{f}. \]
Additionally we must have that $B_n \to B^\star = k_0 f \mc X_\lambda(k_0) = k_0 f \frac{L}{f} = k_0 L$, which means exactly that $\lim_n \frac{1}{\mu(E_n)} \int_{E_n} \phi_n\,\mr d\mu = L$, where $E_n = \{\mc M_{\mc T}\phi_n \geq L\}$, with $\mu(E_n) = k_n \to k_0$.
This gives us equality in the weak type inequality for $(\phi_n)_n$, in the case where $\lambda=L$. \\
We wish to prove that if we define $I_n^{(1)} := \int_X \big| \max(\mc M_{\mc T}\phi_n, L) - c^{\frac{1}{q}}\phi_n\big|^q\mr d\mu$, we then have that $\lim_n I_n^{(1)} = 0$.
Thus we write \begin{align*}
I_n^{(1)} &= \int_{E_n} \left| \mc M_{\mc T}\phi_n - c^\frac{1}{q}\phi_n\right|^q\mr d\mu + \int_{X\setminus E_n} \left|L-c^\frac{1}{q}\phi_n\right|^q\mr d\mu \\
&= J_n + \Lambda_n, \end{align*} where $J_n$ and $\Lambda_n$ have the obvious meaning.
Remember now that in the above chain of inequalities leading to \eqref{eq:4p23}, we have already proved that \[ \int_X [\max(\mc M_{\mc T}\phi_n,L)]^q\,\mr d\mu \leq L^q(1-k_n) + A_n \omega_q\!\left(\frac{k_n^{1-q}B_n^q}{A_n}\right) \]
and that we used the fact that $A_n\omega_q\!\left(\frac{k_n^{1-q}B_n^q}{A_n}\right) \leq R_{k_n}(B_n)$. Thus, according to the way that $R_k(B)$ is defined, we must have equality in the limit in the inequality $A_n \geq h-(1-k_n)^{1-q}(f-B_n)^q=C_n$.
Thus we must have that $h-A_n \approx (1-k_n)^{1-q}(f-B_n)^q$, or equivalently \begin{equation} \label{eq:4p24} \bigg( \frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n} \phi_n\,\mr d\mu\bigg)^q \approx \frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n} \phi_n^q\,\mr d\mu. \end{equation}
Additionally we must have that \begin{equation} \label{eq:4p25} \int_{E_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \approx A_n\,\omega_q\!\left(\frac{k_n^{1-q}B_n^q}{A_n}\right). \end{equation}
We first prove that $\Lambda_n = \int_{X\setminus E_n} \big|L-c^\frac{1}{q}\phi_n\big|^q\,\mr d\mu \to 0$, as $n\to \infty$.
Since $\int_{E_n}\phi_n\,\mr d\mu = B_n \to B^\star = L k_0$, we must have that $\frac{1}{\mu(X\setminus E_n)} \int_{X\setminus E_n} \phi_n\,\mr d\mu = \frac{f-B_n}{1-k_n} \to \frac{f-B^\star}{1-k_0} = \frac{f-L k_0}{1-k_0}$. \\
By the properties that $k_0$ satisfies, we have that \begin{equation} \label{eq:4p26} k_0 = k_0(\lambda,\mu) = \frac{\omega_q(\lambda H_q(\mu))^\frac{1}{q} - \mu}{\mu\left(\omega_q(\lambda H_q(\mu))^\frac{1}{q} - 1\right)}, \end{equation}
where $\lambda = \frac{f^q}{h}$, $\mu = \frac{L}{f}$. Of course $\omega_q\!\left( \frac{f^q}{h} H_q\!\left(\frac{L}{f}\right)\right) = c$, thus \eqref{eq:4p26} gives \[ k_0 = \frac{c^\frac{1}{q} - \frac{L}{f}}{\frac{L}{f}(c^\frac{1}{q}-1)} = \frac{f c^\frac{1}{q} - L}{L c^\frac{1}{q} - L} \implies \frac{f-k_0L}{1-k_0} = \frac{L}{c^\frac{1}{q}}. \]
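For completeness, the elementary computation behind the last implication is the following: since $k_0 = \frac{f c^{\frac{1}{q}} - L}{L\left(c^{\frac{1}{q}}-1\right)}$, we have
\[
f - k_0 L = \frac{f\left(c^{\frac{1}{q}}-1\right) - f c^{\frac{1}{q}} + L}{c^{\frac{1}{q}}-1} = \frac{L-f}{c^{\frac{1}{q}}-1}
\quad\text{and}\quad
1 - k_0 = \frac{c^{\frac{1}{q}}(L-f)}{L\left(c^{\frac{1}{q}}-1\right)},
\]
so that dividing the two expressions indeed gives $\frac{f-k_0L}{1-k_0} = \frac{L}{c^{1/q}}$.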
Thus \eqref{eq:4p24} becomes: \begin{equation} \label{eq:4p27} \left[ \frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n} \phi_n^q\,\mr d\mu\right]^\frac{1}{q} \approx \left( \frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n} \phi_n\,\mr d\mu\right) \cong \frac{L}{c^\frac{1}{q}} =: \tau \end{equation}
In order to show that $\Lambda_n \to 0$, as $n\to \infty$ it is enough to prove that $\int_{X\setminus E_n} |\phi_n-\tau|\,\mr d\mu \to 0$, as $n\to\infty$, where $\tau$ is defined as above.
We use now the following elementary inequality \begin{equation} \label{eq:4p28} t + \frac{1-q}{q} \geq \frac{t^q}{q}, \end{equation} which holds for every $q\in(0,1)$ and every $t>0$.
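One quick way to verify \eqref{eq:4p28} (recorded here only for the reader's convenience) is the following: for fixed $q\in(0,1)$ set $\psi(t) = t + \frac{1-q}{q} - \frac{t^q}{q}$ for $t>0$. Then
\[
\psi'(t) = 1 - t^{q-1},
\]
which is negative on $(0,1)$ and positive on $(1,\infty)$, so $\psi$ attains its minimum at $t=1$, where $\psi(1)=0$; hence $\psi(t)\geq 0$ for every $t>0$, and, since $\psi$ is strictly decreasing on $(0,1)$ and strictly increasing on $(1,\infty)$, it vanishes only at $t=1$.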
Additionally we have equality in \eqref{eq:4p28} only if $t=1$. We may also assume that $\tau=1$ in \eqref{eq:4p27}; otherwise we overcome this difficulty by dividing \eqref{eq:4p27} by $\tau$ and by considering $\frac{\phi_n}{\tau}$ instead of $\phi_n$.\\
By \eqref{eq:4p28} we have that \begin{equation} \label{eq:4p29} \frac{\phi_n^q(x)}{q} \leq \frac{1-q}{q} + \phi_n(x),\quad \text{for all}\ \ x\in (X\!\setminus\! E_n)\cap \{\phi_n>1\} \end{equation}
and that \begin{equation} \label{eq:4p30} \frac{\phi_n^q(y)}{q} \leq \frac{1-q}{q} + \phi_n(y),\quad \text{for all}\ \ y\in (X\!\setminus\! E_n) \cap \{\phi_n \leq 1\}. \end{equation}
By integrating in the respective domains inequalities \eqref{eq:4p29} and \eqref{eq:4p30} we immediately get: \begin{align} \label{eq:4p31} \frac{1}{q} \int\limits_{(X\setminus E_n)\cap\{\phi_n > 1\}}\!\! \phi_n^q\,\mr d\mu \leq \frac{1-q}{q}\,\mu\!\left( (X\!\setminus\! E_n) \cap \{\phi_n>1\} \right) + \int\limits_{(X\setminus E_n)\cap\{\phi_n>1\}}\!\! \phi_n\,\mr d\mu, \\ \frac{1}{q} \int\limits_{(X\setminus E_n)\cap\{\phi_n\leq 1\}}\!\! \phi_n^q\,\mr d\mu \leq \frac{1-q}{q}\,\mu\!\left( (X\!\setminus\! E_n) \cap \{\phi_n \leq 1\} \right) + \int\limits_{(X\setminus E_n)\cap\{\phi_n \leq 1\}}\!\! \phi_n\,\mr d\mu. \label{eq:4p32} \end{align}
Adding \eqref{eq:4p31} and \eqref{eq:4p32} we conclude that \begin{equation} \label{eq:4p33} \frac{1}{q}\frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n} \phi_n^q\,\mr d\mu \leq \frac{1-q}{q} + \frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n} \phi_n\,\mr d\mu. \end{equation}
Since now \eqref{eq:4p27} holds, with $\tau=1$, we conclude that we have equality in \eqref{eq:4p33} in the limit.
Thus we must have equality in the limit in both of \eqref{eq:4p31} and \eqref{eq:4p32}. Thus we have that \begin{align} \label{eq:4p34} \frac{1}{\mu((X\!\setminus\! E_n)\cap\{\phi_n > 1\})} \int_{(X\setminus E_n)\cap\{\phi_n > 1\}} \phi_n\,\mr d\mu &\approx 1\ \ \text{and} \notag \\ \frac{1}{\mu((X\!\setminus\! E_n)\cap\{\phi_n \leq 1\})} \int_{(X\setminus E_n)\cap\{\phi_n \leq 1\}} \phi_n\,\mr d\mu &\approx 1. \end{align}
Then from \eqref{eq:4p34} we have as a consequence that \begin{multline*} \int\limits_{(X\setminus E_n)\cap\{\phi_n > 1\}} (\phi_n-1)\,\mr d\mu = \mu((X\!\setminus\! E_n)\cap\{\phi_n > 1\})\, \cdot \\ \Bigg\{\frac{1}{\mu((X\!\setminus\! E_n)\cap\{\phi_n > 1\})} \int\limits_{(X\setminus E_n)\cap\{\phi_n > 1\}}\!\! \phi_n\,\mr d\mu - 1\Bigg\} \end{multline*} tends to zero, as $n\to\infty$. \\
In the same way $\int_{(X\setminus E_n)\cap\{\phi_n \leq 1\}} (1-\phi_n)\,\mr d\mu \to 0$, so as a result we have $\int_{X\setminus E_n} |\phi_n-1|\,\mr d\mu \approx 0$. Since now $\int_{X\setminus E_n} |\phi_n-1|^q\,\mr d\mu \leq \mu(X\setminus E_n)^{1-q} \big[\int_{X\setminus E_n} |\phi_n-1|\,\mr d\mu\big]^q$ and $\mu(E_n)\to k_0\in (0,1)$
we have that $\lim_n \int_{X\setminus E_n} |\phi_n-1|^q\,\mr d\mu = 0$. \\
By the above reasoning we conclude $\Lambda_n = \int_{X\setminus E_n} |L-c^\frac{1}{q}\phi_n|^q\,\mr d\mu \to 0$, as $n\to \infty$. \end{proof}
\noindent We now prove the following \begin{theorem} \label{thm:4p3} Let $(\phi_n)_n$ be extremal, where $0 < h \leq L^q$, $L \geq f$ are fixed. Consider for each $n\in\mb N$ a pairwise disjoint family $\mc A_n = (I_{j,n})_j$ such that the following limit exists: \[ \lim_n \sum_{I\in \mc A_n} \mu(I) y_{I,n}^q,\ \ \text{where}\ \ y_{I,n} = \Av_I(\phi_n),\ I\in \mc A_n. \]
Suppose also that $\cup\mc A_n = \cup_j I_{j,n} \subseteq \{\mc M_{\mc T}\phi_n \geq L\}$, for each $n=1, 2, \ldots$. Then $\lim_n \int_{\cup\mc A_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu = c \lim_n \int_{\cup\mc A_n} \phi_n^q\,\mr d\mu$, where $c = \omega_q\!\left(\frac{(1-q)L^q + qL^{q-1}f}{h}\right)$. \end{theorem}
\begin{proof} Define $\ell_n = \sum_{I\in\mc A_n} \mu(I) y_{I,n}^q$. By Theorem \ref{thm:4p2} we immediately see that for each $n\in\mb N^\star$ \begin{equation} \label{eq:4p35} \int_{\cup\mc A_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \leq \frac{1}{(1-q)\beta} \left[ (\beta+1) \sum_{I\in\mc A_n} \mu(I)y_{I,n}^q - (\beta+1)^q \int_{\cup\mc A_n} \phi_n^q\,\mr d\mu \right]. \end{equation}
Suppose that $E_n = \{\mc M_{\mc T}\phi_n \geq L\} = \cup_j I_j^{(n)}$, $n=1, 2, \ldots$, where $I_j^{(n)}\in S_{\phi_n}$, for each $j$. \\
Now by our hypothesis we have that $\cup\mc A_n\subseteq E_n$, for all $n\in\mb N^\star$. Thus \[ E_n\setminus \cup\mc A_n = \cup_j\left[ I_j^{(n)}\setminus \cup\mc A_n\right]. \]
Consider now for each $j$ and $n$ the probability space $\left( I_j^{(n)}, \frac{\mu(\cdot)}{\mu(I_j^{(n)})}\right)$, and apply there Theorem \ref{thm:4p1}, to get after summing on $j$ the following inequality \begin{multline} \label{eq:4p36} \int_{E_n\setminus \cup\mc A_n}(\mc M_{\mc T}\phi_n)^q\,\mr d\mu \leq \\ \frac{1}{(1-q)\beta} \bigg\{ (\beta+1) \bigg[ \sum_{I\in \{I_j^{(n)}\}_j} \mu(I) y_{I,n}^q - \sum_{I\in\mc A_n} \mu(I)y_{I,n}^q\bigg] - (\beta+1)^q\!\!\! \int\limits_{E_n\setminus \cup\mc A_n}\! \phi_n^q\,\mr d\mu \bigg\}. \end{multline}
Summing \eqref{eq:4p35} and \eqref{eq:4p36} we have as a consequence that: \begin{equation} \label{eq:4p37} \int_{E_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \leq \frac{1}{(1-q)\beta} \bigg[ (\beta+1) \sum_{I\in \{I_j^{(n)}\}_j} \mu(I)y_{I,n}^q - (\beta+1)^q\int_{E_n}\phi_n^q\,\mr d\mu \bigg]. \end{equation}
Using now the concavity of $t\mapsto t^q$, for $q\in(0,1)$ we obtain the inequality \begin{equation} \label{eq:4p38} \sum_{I\in\{I_j^{(n)}\}_j} \mu(I)y_{I,n}^q \leq \frac{\left(\sum_{I\in\{I_j^{(n)}\}_j} \mu(I)y_{I,n}\right)^q}{\left(\sum_{I\in\{I_j^{(n)}\}_j} \mu(I)\right)^{q-1}} = \frac{\left(\int_{E_n} \phi_n\,\mr d\mu\right)^q}{\mu(E_n)^{q-1}}. \end{equation}
Thus \eqref{eq:4p37} in view of \eqref{eq:4p38} gives \begin{multline} \label{eq:4p39} \int_{E_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \leq \frac{1}{(1-q)\beta} \left[ (\beta+1) \frac{1}{\mu(E_n)^{q-1}} \left( \int_{E_n} \phi_n\,\mr d\mu \right)^q - \right. \\ \left. (\beta+1)^q\int_{E_n} \phi_n^q\,\mr d\mu \right] \end{multline}
By our hypothesis we have that \begin{equation} \label{eq:4p40} \int_{E_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \approx \\ \int_{E_n}\phi_n^q\,\mr d\mu\cdot \omega_q\!\left(\frac{k_n^{1-q}\left(\int_{E_n}\phi_n\,\mr d\mu\right)^q}{\int_{E_n}\phi_n^q\,\mr d\mu}\right), \end{equation} since $(\phi_n)$ is extremal, where $k_n=\mu(E_n)$, for all $n\in\mb N$. \\
But then, by the definition of $\omega_q$, this means exactly that we have equality in the limit in \eqref{eq:4p39} for $\beta=\beta_n=\omega_q\!\left(\frac{k_n^{1-q}\left(\int_{E_n}\phi_n\,\mr d\mu\right)^q}{\int_{E_n}\phi_n^q\,\mr d\mu}\right)^\frac{1}{q}-1$ (see (3.18) and (3.19) in \cite{4}). \\
We set $c_{1,n} = \frac{k_n^{1-q}\left(\int_{E_n}\phi_n\,\mr d\mu\right)^q}{\int_{E_n}\phi_n^q\,\mr d\mu}$. We now prove that $c_{1,n}\to \frac{(1-q)L^q + qL^{q-1}f}{h}$, as $n\to \infty$.
Indeed, note that by the chain of inequalities leading to the least upper bound $B_q^{\mc T}(f,h,L,1) = c\,h$, we must have that \begin{equation} \label{eq:4p41} L^q(1-k_0) + \omega_q(c_{1,n})\int_{E_n}\phi_n^q\,\mr d\mu \approx c\,h,\quad \text{as}\ \ n\to\infty, \end{equation} where we suppose that $k_n\to k_0$ (we pass to a subsequence if necessary).
Now \eqref{eq:4p41} can be written as \begin{equation} \label{eq:4p42} L^q(1-k_n) + \omega_q(c_{1,n})\int_{E_n}\phi_n^q\,\mr d\mu \approx \left(h - \int_{E_n}\phi_n^q\,\mr d\mu\right)c + \int_{E_n}\phi_n^q\,\mr d\mu\!\cdot\! c. \end{equation}
But as we have already proved before, we have \begin{align*} & L^q(1-k_n) \approx \left(h-\int_{E_n}\phi_n^q\,\mr d\mu\right)c \iff \frac{1}{1-k_n} \left(h - \int_{E_n}\phi_n^q\,\mr d\mu\right) \approx \frac{L^q}{c} \iff \\ & \frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n}\phi_n^q\,\mr d\mu \approx \frac{L^q}{c}, \end{align*} which is indeed right, since \[ \int_{X\setminus E_n} \left|\phi_n - \frac{L}{c^\frac{1}{q}}\right|^q\mr d\mu \approx 0. \]
Thus \eqref{eq:4p41} gives $\omega_q(c_{1,n})\int_{E_n}\phi_n^q\,\mr d\mu \approx c\int_{E_n}\phi_n^q\,\mr d\mu$ and since it is easy to see that $\lim_n \int_{E_n}\phi_n^q\,\mr d\mu > 0$,
since $(\phi_n)$ is extremal, we have that $\lim \omega_q(c_{1,n}) = c = \omega_q\!\left(\frac{(1-q)L^q + qL^{q-1}f}{h}\right)$ or that $c_{1,n} \to \frac{(1-q)L^q + qL^{q-1}f}{h}$, as $n\to \infty$. \\
From \eqref{eq:4p40} we conclude now that $\int_{E_n}(\mc M_{\mc T}\phi_n)^q\,\mr d\mu \approx c\int_{E_n}\phi_n^q\,\mr d\mu$ and that, as we have said before, we have equality in the limit in \eqref{eq:4p39} for the value $\beta = c^\frac{1}{q}-1$.
But \eqref{eq:4p39} comes from \eqref{eq:4p35} and \eqref{eq:4p36} by summing, so we must have equality in \eqref{eq:4p35} in the limit for this value of $\beta = c^\frac{1}{q}-1 = \omega_q\!\left(\frac{(1-q)L^q + qL^{q-1}f}{h}\right)^\frac{1}{q}-1$. \\
But the right side of \eqref{eq:4p35} is minimized for $\beta=\beta_n=\omega_q\!\left(\frac{\ell}{s}\right)^\frac{1}{q}-1$, where $\ell=\lim_n\sum_{I\in\mc A_n} \mu(I)y_{I,n}^q$, $s=\lim_n\int_{\cup\mc A_n}\phi_n^q\,\mr d\mu$. \\
Thus we must have that $\omega_q\!\left(\frac{\ell}{s}\right)=c$, and for the value of $\beta=c^\frac{1}{q}-1$, we get by the equality in \eqref{eq:4p35} that \[ \lim_n\int_{\cup\mc A_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu = c \lim_n \int_{\cup\mc A_n}\phi_n^q\,\mr d\mu. \]
By this we end the proof of our theorem. \end{proof}
\noindent We now proceed to prove that \[
J_n = \int_{E_n} \left| \mc M_{\mc T}\phi_n - c^\frac{1}{q}\phi_n\right|^q\mr d\mu \to 0,\ \ \text{as}\ n \to \infty. \]
For this proof we are going to use Theorem \ref{thm:4p3} for a sequence $\left(g_{\phi_n}\right)_n$ which arises from $(\phi_n)_n$ in a canonical way and is extremal by construction. We prove the following
\begin{lemma} \label{lem:4p1} Let $\phi$ be $\mc T$-good and $L \geq f = \int_X\phi\,\mr d\mu$. There exists a measurable function $g_\phi: X\to \mb R^+$ such that for every $I\in \mc T$ such that $I\in S_\phi$ and $\Av_I(\phi)\geq L$ we have that $g_\phi$ assumes two values ($c_I^\phi$ or $0$) on $A(\phi,I)=A_I$.
Moreover $g_\phi$ satisfies $\Av_I(\phi) = \Av_I(g_\phi)$, for every $I\in \mc T$ that contains an element of $S_\phi$ (that is, it is not contained in any of the $A_J$'s). \end{lemma}
\begin{proof} Let $\phi$ be $\mc T$-good and $L\geq f$. \\ We first define $g_\phi(t)=\phi(t)$, $t\in X\!\setminus\! E_\phi$, where we set $E_\phi = \{\mc M_{\mc T}\phi \geq L\}$.
Then we write $E_\phi = \cup I_j$, where $I_j$ are maximal elements of the tree $\mc T$ such that $\Av_{I_j}(\phi)\geq L$. Then by Lemma \ref{lem:2p2} we have that $I_j\in S_\phi$. Note now that \begin{equation} \label{eq:4p43} I_j = A(\phi,I_j) \cup \bigg( \cup_{\substack{J^\star=I_j\\ J\in S_\phi}} J\bigg),\ \ \text{for any}\ j. \end{equation}
Fix a $j$. We define first the following function $g_{\phi,j}^{(1)}(t) = \phi(t)$, $t\in I_j\!\setminus\! A(\phi,I_j)$. We write $A_{I_j} = A(\phi,I_j) = \cup_i I_{i,j}$, where $I_{i,j}$ are maximal elements of $\mc T$, subject to the relation $I_{i,j}\subseteq A_{I_j}$ for any $i$. For each $i$ we have $I_{i,j}\in \mc T_{\left(k_{i,j}\right)}$ for some $k_{i,j} \geq 1$, $k_{i,j} \in \mb N$. Let $I_{i,j}'$ be the unique element of $\mc T$, such that $I_{i,j}' \in \mc T_{(k_{i,j}-1)}$ and $I_{i,j}' \supsetneq I_{i,j}$. \\
We set $\Omega_j = \cup_i I_{i,j}'$, for our $j$. Note now that for every $i$, $I_{i,j}' \notin S_\phi$. This is true because of \eqref{eq:4p43} and the structure that the tree $\mc T$ has by its definition.
Consider now a maximal subfamily of $(I_{i,j}')$ that still covers $\Omega_j$. Then we can write $\Omega_j = \cup_{k=1}^\infty I_{i_k,j}'$, for some sequence of integers $i_1\!<\!i_2\!<\! \ldots \!<\!i_k\!<\!\ldots$, possibly finite, where the family $\left(I_{i_k,j}'\right)_k$, $k=1, 2, \ldots$, consists of pairwise disjoint elements of $\mc T$.
Additionally we obviously have that $\Omega_j \subseteq I_j$. By the maximality of any $I_{i_k,j}$, $k\in \mb N$, we have that $I_{i_k,j}' \cap \left(I_j\setminus A_{I_j}\right) \neq \emptyset$, so there exists $J_i\in S_\phi$ such that $J_i^\star = I_j$ with $J_i \cap I_{i_k,j}' \neq \emptyset$.
Since now each $I_{i_k,j}'$ is not contained in any of the $J_i$'s (since it contains elements of $A_{I_j}$) we must have that it actually contains any such $J_i$. That is we can write, for any $k\in \mb N$ \[ I_{i_k,j}' = \left[ \cup_m J_{k,j,m}\right] \cup \left[B_{j,k}\right], \] where for any $m\in \mb N$ \[ J_{k,j,m}\in S_\phi,\ \ J_{k,j,m}^\star = I_j\ \ \text{and}\ \ B_{j,k} = I_{i_k,j}'\cap A_{I_j}. \] Of course we have $\cup_k B_{j,k} = A_{I_j}$. \\
We define the following function on $I_j$. We name it as $g_{\phi,j,1}: I_j \to \mb R^+$. We set $g_{\phi,j,1}(t) = \phi(t)$, $t\in I_j\setminus A_{I_j}$. Now we are going to construct $g_{\phi,j,1}$ on $A_{I_j}$ in such way that for every $I\in \mc T$ such that $I$ contains an element $J\in S_\phi$, such that $J^\star=I$, we have that $\Av_I(g_{\phi,j,1}) = \Av_I(\phi)$.
We proceed to this as follows: For any $k$, $B_{j,k}$ is a union of elements of the tree $\mc T$. Using Lemma \ref{lem:2p1}, we construct for any $\alpha\in (0,1)$ (that will be fixed later) a pairwise disjoint family of elements of $\mc T$, all of them subsets of $B_{j,k}$, named $\mc A_{\phi,j,k}$, such that $\sum_{J\in \mc A_{\phi,j,k}} \mu(J) = \alpha\mu(B_{j,k})$.
We define now the function $g_{\phi,j,k,1}: B_{j,k}\to \mb R^+$ in the following way: \[ g_{\phi,j,k,1}(t) := \left\{ \begin{aligned} & c_{j,k,1}^\phi, &t&\in \cup\left\{J: J\in\mc A_{\phi,j,k}\right\} \\ & 0, \quad &t&\in B_{j,k}\setminus \cup\left\{J: J\in\mc A_{\phi,j,k}\right\} \end{aligned} \right. \]
such that \begin{equation} \label{eq:4p44} \left. \begin{aligned} \int_{B_{j,k}} g_{\phi,j,k,1}\,\mr d\mu = c_{j,k,1}^\phi \gamma_{j,k,1}^\phi = \int_{B_{j,k}}\phi\,\mr d\mu&,\ \text{and} \\ \int_{B_{j,k}} g_{\phi,j,k,1}^q\,\mr d\mu = \left(c_{j,k,1}^\phi\right)^q \gamma_{j,k,1}^\phi = \int_{B_{j,k}} \phi^q\,\mr d\mu& \end{aligned}\ \right\}, \end{equation} where $\gamma_{j,k,1}^\phi = \mu\!\left(\cup_{J\in A_{\phi,j,k}} J\right) = \alpha \mu(B_{j,k})$.
It is easy to see that such choices for $c_{j,k,1}^\phi\geq 0$, $\gamma_{j,k,1}^\phi\in [0,1]$ always exist. In fact we just need to set \[ \gamma_{j,k,1}^\phi = \left[ \frac{\left(\int_{B_{j,k}} \phi\,\mr d\mu\right)^q}{\int_{B_{j,k}}\phi^q\,\mr d\mu} \right]^\frac{1}{q-1} \leq \mu(B_{j,k}), \] by H\"{o}lder's inequality, and also $\alpha = \gamma_{j,k,1}^\phi / \mu(B_{j,k})$, $c_{j,k,1}^\phi = \int_{B_{j,k}} \phi\,\mr d\mu / \gamma_{j,k,1}^\phi$. \\
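To see where this formula for $\gamma_{j,k,1}^\phi$ comes from, note that the two equations in \eqref{eq:4p44} read $c\gamma = \int_{B_{j,k}}\phi\,\mr d\mu$ and $c^q\gamma = \int_{B_{j,k}}\phi^q\,\mr d\mu$ in the unknowns $c = c_{j,k,1}^\phi$ and $\gamma = \gamma_{j,k,1}^\phi$. Substituting $c = \frac{1}{\gamma}\int_{B_{j,k}}\phi\,\mr d\mu$ into the second equation gives
\[
\Big(\int_{B_{j,k}}\phi\,\mr d\mu\Big)^q \gamma^{1-q} = \int_{B_{j,k}}\phi^q\,\mr d\mu
\quad\Longleftrightarrow\quad
\gamma = \left[ \frac{\int_{B_{j,k}}\phi^q\,\mr d\mu}{\left(\int_{B_{j,k}}\phi\,\mr d\mu\right)^q} \right]^{\frac{1}{1-q}},
\]
which is exactly the expression displayed above (written with the reciprocal base and the exponent $\frac{1}{q-1}$), and H\"{o}lder's inequality guarantees that this value does not exceed $\mu(B_{j,k})$.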
We let then $g_{\phi,j,1}(t) := g_{\phi,j,k,1}(t)$, if $t\in B_{j,k}$. Note that $g_{\phi,j,1}$ may assume more than one positive value on $A_{I_j} = \cup_k B_{j,k}$. It is easy then to see that there exist a common positive value, denoted by $c_{I_j}^{\phi}$, and measurable sets $L_k\subseteq B_{j,k}$, such that if we define $g'_{\phi,j,1}(t) :=c_{I_j}^{\phi}\chi_{L_k}(t)$ for $t\in B_{j,k}$, for any $k$, and $g'_{\phi,j,1}(t) :=\phi(t)$ for $t\in X\setminus A_{I_j}$, where $\chi_S$ denotes the characteristic function of $S$, we still have $\int_{B_{j,k}}\phi\,\mr d\mu = \int_{B_{j,k}}g'_{\phi,j,1}\,\mr d\mu=c_{I_j}^{\phi}\mu(L_k)$ and $\int_{A_{I_j}}\phi^q\,\mr d\mu = \int_{A_{I_j}}(g'_{\phi,j,1})^q\,\mr d\mu$. For the construction of the $L_k$ and of $c_{I_j}^{\phi}$ we just need to find first the subsets $L_k$ of $B_{j,k}$ for which the first equalities mentioned above are true, and this can be done for an arbitrary $c_{I_j}^{\phi}$, since $(X,\mu)$ is nonatomic. Then we just need to find the value of the constant $c_{I_j}^{\phi}$ for which the second integral equality is also true. Note also that for these choices of $L_k$ and $c_{I_j}^{\phi}$ we may not have $\int_{B_{j,k}}\phi^q\,\mr d\mu = \int_{B_{j,k}}(g'_{\phi,j,1})^q\,\mr d\mu$, but the respective equality with $A_{I_j}$ in place of $B_{j,k}$ does hold.
We have thus defined $g_{\phi,j,1}'$. It is obvious now that if $I\in\mc T$ is such that $I\cap A_{I_j} \neq \emptyset$ and $I\nsubseteq A_{I_j}$ (that is, $I\cap J\neq \emptyset$ for some $J\in S_\phi$ with $J^\star=I_j$), we must have that $I$ is a union of some of the $I_{i_k,j}'$ and some of the $J$'s for which $J\in S_\phi$ and $J^\star=I_j$.
Then obviously we should have, by the construction we just made, that $\int_I g_{\phi,j,1}'\,\mr d\mu = \int_I\phi\,\mr d\mu$. We continue inductively and define $g_{\phi,j,2}':= g_{\phi,j,1}'$ on $A_{I_j}$, and by working also in any $J\in S_\phi$ such that $J^\star=I_j$, we define it on all the $A_J$'s in the same way as before.
We continue defining $g_{\phi,j,\ell}'$, $\ell=3, 4, \ldots$. We set at last $g_\phi(t):= \lim_\ell g_{\phi,j,\ell}'(t)$ for any $t\in I_j$. Note that the sequence $(g_{\phi,j,\ell}'(t))_\ell$ is in fact eventually constant for every $t\in I_j$. Then by its definition, $g_\phi$ satisfies the conclusions of our lemma. In this way we derive its proof. \end{proof}
\begin{cremark} It is not difficult to see that for every $I\in S_\phi$, $I\subseteq I_j$ for some $j$ the function $g_\phi$ that is constructed in the previous lemma satisfies $\mu(\{g_\phi=0\} \cap A_I) \geq \mu(\{\phi=0\} \cap A_I)$. This can be seen if we repeat the previous proof by working on the set $\{\phi>0\}\cap A_I$ for any such $I$.
As a consequence, since $E_\phi = \cup_j I_j = \cup_j \bigg(\cup_{\substack{I\subseteq I_j\\ I\in S_\phi}} A(\phi,I)\bigg)$, we conclude that $\mu(\{g_\phi=0\}\cap E_\phi) \geq \mu(\{\phi=0\}\cap E_\phi)$. \\ \end{cremark}
\noindent We prove now the following \begin{lemma} \label{lem:4p2} For an extremal sequence $(\phi_n)_n$ of $\mc T$-good functions we have that $\lim_n \mu( \{\phi_n=0\} \cap \{ \mc M_{\mc T}\phi_n \geq L\} ) = 0$. \end{lemma}
\noindent Before we proceed to the proof of the above Lemma, we prove the following \begin{lemma} \label{lem:4p3} For an extremal sequence $(\phi_n)$, consisting of $\mc T$-good functions, such that $S_{\phi_n}$ is the respective subtree of any $\phi_n$, the following holds \[ \lim_n \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} \frac{\left(\int_{A(\phi_n,I)} \phi_n\,\mr d\mu\right)^q}{\alpha_{I,n}^{q-1}} = \lim_n \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu, \] where $\alpha_{I,n} = \mu(A(\phi_n,I))$, for $I\in S_{\phi_n}$, $n=1, 2, \ldots$. \end{lemma}
\begin{proof} Remember that the following inequality has been used in the evaluation of the function $B_q^{\mc T}(f,h,L,1)$: \begin{equation} \label{eq:4p45} \int_{I_j} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \leq \alpha_j \omega_q\! \left(\frac{\beta_j}{\alpha_j}\right). \end{equation}
Thus we must have equality in the limit in the following inequality: \begin{equation} \label{eq:4p46} \sum_j \int_{I_j} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \leq \sum_j \alpha_j \omega_q \left(\frac{\beta_j}{\alpha_j}\right). \end{equation}
But in the proof of \eqref{eq:4p45} the following inequality was used in order to pass from (3.16) to (3.17) in \cite{4}: \[ \sum_{\substack{I\in S_{\phi}}} \alpha_I x_I^q \geq \int_X \phi^q\,\mr d\mu. \]
Now in place of $X$ in the last integral we have the $I_j$'s, so from equality in \eqref{eq:4p46} in the limit, we immediately obtain the statement of our Lemma \ref{lem:4p3}. Our proof is complete. \end{proof}
\noindent We now return to the \begin{proof}[Proof of Lemma \ref{lem:4p2}] It is enough, due to the comments mentioned above, to prove that $\lim_n \mu(\{g_{\phi_n}=0\} \cap E_{\phi_n}) = 0$. For this, we just need to prove that $$\lim_n \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} (\alpha_{I,n}-\gamma_I^{\phi_n}) = 0,$$ where $\alpha_{I,n} = \mu(A(\phi_n,I))$, and $\gamma_I^{\phi_n}=\mu(A(\phi_n,I)\cap \{g_{\phi_n}>0\})$ for $I\in S_{\phi_n}$, $I\subseteq E_{\phi_n}$. \\
For those $I$ we set \[ P_{I,n} = \frac{\int_{A_{I,n}} \phi_n^q\,\mr d\mu}{\alpha_{I,n}^{q-1}},\ \text{where}\ A_{I,n} = A(\phi_n,I). \]
Then we obviously have that $\sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} \alpha_{I,n}^{q-1} P_{I,n} = \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu$. \\ Additionally $\sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} (\gamma_I^{\phi_n})^{q-1} P_{I,n} \geq \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu$, since $0<q<1$ and $\gamma_I^{\phi_n} \leq \alpha_{I,n}$ for $I\in S_{\phi_n}$, $I\subseteq E_{\phi_n}$. However \begin{multline} \label{eq:4p47} \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} (\gamma_I^{\phi_n})^{q-1} P_{I,n} = \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} (\gamma_I^{\phi_n})^{q-1} \frac{(c_I^{\phi_n})^q \gamma_I^{\phi_n}}{(\alpha_{I,n})^{q-1}} = \\ \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} \frac{(\gamma_I^{\phi_n} \cdot c_I^{\phi_n})^q}{(\alpha_{I,n})^{q-1}} = \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} \frac{\left(\int_{A_{I,n}} \phi_n\,\mr d\mu\right)^q}{(\alpha_{I,n})^{q-1}} \approx \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu, \end{multline} by Lemmas \ref{lem:4p1} and \ref{lem:4p3}. \\
We define now for any $R>0$ the set \[ S_{\phi_n,R} = \cup\left\{ A_{I,n}: I\in S_{\phi_n},\ I\subseteq E_{\phi_n},\ P_{I,n}<R(\alpha_{I,n})^{2-q}\right\}. \]
Then for $I\in S_{\phi_n}$ such that $I\subseteq E_{\phi_n}$ and $P_{I,n} < R(\alpha_{I,n})^{2-q}$ we have that \begin{align} \label{eq:4p48} & \int_{A_{I,n}} \phi_n^q\,\mr d\mu < R \alpha_{I,n} \implies\ \text{(by summing up to all such $I$)} \notag \\ & \int_{S_{\phi_n,R}} \phi_n^q\,\mr d\mu < R \mu(S_{\phi_n,R}). \end{align}
Additionally we have that \begin{equation} \label{eq:4p49}
\Bigg|\!\!\! \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}\geq R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \alpha_{I,n}^{q-1} P_{I,n} - \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu\ \Bigg| = \int_{S_{\phi_n,R}} \phi_n^q\,\mr d\mu, \end{equation}
and \begin{align} \label{eq:4p50}
& \Bigg|\!\! \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}\geq R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \left(\gamma_I^{\phi_n}\right)^{q-1} P_{I,n} - \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu\ \Bigg| \overset{\eqref{eq:4p47}}{\approx} \notag \\
& \Bigg|\!\! \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}\geq R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \left(\gamma_I^{\phi_n}\right)^{q-1} P_{I,n} - \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} \left(\gamma_I^{\phi_n}\right)^{q-1} P_{I,n}\ \Bigg| = \notag \\
& \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}< R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \left(\gamma_I^{\phi_n}\right)^{q-1} P_{I,n} =
\sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}< R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \left(\gamma_I^{\phi_n}\right)^{q-1} \frac{(c_I^{\phi_n})^q\gamma_I^{\phi_n}}{(\alpha_{I,n})^{q-1}} = \notag \\
& \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}< R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \frac{(\gamma_I^{\phi_n} c_I^{\phi_n})^q}{(\alpha_{I,n})^{q-1}} =
\sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}< R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \frac{\left(\int_{A_{I,n}} \phi_n\,\mr d\mu\right)^q}{\alpha_{I,n}^{q-1}} \approx
\int_{S_{\phi_n,R}} \phi_n^q\,\mr d\mu, \end{align} where the last equality in the limit is explained by the same reasons as Lemma \ref{lem:4p3} does.
Using \eqref{eq:4p49} and \eqref{eq:4p50} we conclude that \begin{equation} \label{eq:4p51} \limsup_n\hspace{-10pt} \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}\geq R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \left[ \left(\gamma_I^{\phi_n}\right)^{q-1} - (\alpha_{I,n})^{q-1} \right] P_{I,n} \leq 2 \lim_n \int_{S_{\phi_n,R}} \phi_n^q\,\mr d\mu, \end{equation}
By Theorem \ref{thm:4p3} now, and Lemma \ref{lem:2p3} (using the form that the $A_{I,n}$ have, and a diagonal argument) we have that the following is true \begin{equation} \label{eq:4p52} \lim_n \int_{S_{\phi_n,R}} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu = c \lim_n \int_{S_{\phi_n,R}} \phi_n^q\,\mr d\mu. \end{equation}
Since $\mc M_{\mc T}\phi_n \geq f$ on $X$, we conclude by \eqref{eq:4p48} and \eqref{eq:4p52} that \begin{equation} \label{eq:4p53} f^q \limsup_n \mu(S_{\phi_n,R}) \leq c\,R \limsup_n \mu(S_{\phi_n,R}). \end{equation}
Thus if $R>0$ is chosen small enough, we must have because of \eqref{eq:4p53} that $\lim_n \mu(S_{\phi_n,R})=0$, thus by \eqref{eq:4p48} we have $\lim_n \int_{S_{\phi_n,R}}\phi_n^q\,\mr d\mu=0$, and so by \eqref{eq:4p51} we obtain \begin{equation} \label{eq:4p54} \lim_n\hspace{-10pt} \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}, P_{I,n} \geq R \alpha_{I,n}^{2-q}}}\hspace{-10pt} \left[ \left(\gamma_I^{\phi_n}\right)^{q-1} - \left(\alpha_{I,n}\right)^{q-1}\right] P_{I,n} = 0. \end{equation}
We consider now, for any $y>0$, the function $\phi_y(x) = \frac{x^{q-1}y^{2-q}-y}{y-x}$, defined for $x\in(0,y)$. It is easy to see that $\lim_{x\to 0^+} \phi_y(x) = +\infty$, $\lim_{x\to y^-} \phi_y(x)=1-q$. Moreover $\phi_y'(x) = \frac{(q-1)x^{q-2}y^{3-q} - (q-2)x^{q-1}y^{2-q} - y}{(y-x)^2}$, $x\in (0,y)$. \\
Then by setting $x=\lambda y$, $\lambda\in(0,1)$ we define the following function $g(\lambda)=(q-1)\lambda^{q-2} - (q-2)\lambda^{q-1} - 1$, which as is easily seen satisfies $g(\lambda)<0$, for all $\lambda\in(0,1)$. But $\phi_y'(x) = \frac{y\,g(\lambda)}{(1-\lambda)^2y^2}<0$, so that $\phi_y$ is decreasing on $(0,y)$.
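Indeed, the sign of $g$ can be checked directly (a short verification, for the reader's convenience): $g(1) = (q-1)-(q-2)-1 = 0$, while
\[
g'(\lambda) = (q-1)(q-2)\lambda^{q-3} - (q-1)(q-2)\lambda^{q-2} = (q-1)(q-2)\lambda^{q-3}(1-\lambda) > 0, \qquad \lambda\in(0,1),
\]
since $(q-1)(q-2)>0$ for $q\in(0,1)$; thus $g$ is strictly increasing on $(0,1)$ and therefore $g(\lambda) < g(1) = 0$ there.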
Thus $\phi_y(x) \geq 1-q$, for all $x\in(0,y)$ $\implies x^{q-1}y^{2-q}-y \geq (1-q)(y-x)$, $\forall x\in(0,y)$. \\ From the above and \eqref{eq:4p54} we see that \[ \lim_n \quad \sum_{\mathpalette\mathclapinternal{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}, P_{I,n}\geq R\alpha_{I,n}^{2-q}}}} \quad \left(\alpha_{I,n}-\gamma_I^{\phi_n}\right) = 0 \implies \mu(E_{\phi_n}) - \mu(S_{\phi_n,R}) - \quad\sum_{\mathpalette\mathclapinternal{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}, P_{I,n}\geq R\alpha_{I,n}^{2-q}}}}\quad \left(\gamma_I^{\phi_n}\right) \approx 0. \]
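Let us briefly indicate how \eqref{eq:4p54} yields the last display: for each $I$ in the above range of summation with $\gamma_I^{\phi_n}>0$, applying the inequality just proved with $x=\gamma_I^{\phi_n}$ and $y=\alpha_{I,n}$ gives
\[
\left[ \left(\gamma_I^{\phi_n}\right)^{q-1} - \left(\alpha_{I,n}\right)^{q-1} \right] P_{I,n}
\;\geq\; R\,\alpha_{I,n}^{2-q}\left[ \left(\gamma_I^{\phi_n}\right)^{q-1} - \left(\alpha_{I,n}\right)^{q-1} \right]
\;\geq\; R(1-q)\left(\alpha_{I,n}-\gamma_I^{\phi_n}\right) \;\geq\; 0,
\]
since $P_{I,n}\geq R\alpha_{I,n}^{2-q}$ there; summing over this range and using \eqref{eq:4p54}, the sum of the nonnegative differences $\alpha_{I,n}-\gamma_I^{\phi_n}$ over the same range must also tend to zero.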
Since then $\mu(S_{\phi_n,R})\to0$ we conclude that \begin{equation} \label{eq:4p55} \mu(E_{\phi_n}) \approx \quad\sum_{\mathpalette\mathclapinternal{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}, P_{I,n}\geq R\alpha_{I,n}^{2-q}}}}\quad \gamma_I^{\phi_n} \leq \sum_{I\in S_{\phi_n}, I\subseteq E_{\phi_n}} (\gamma_I^{\phi_n}) \leq \sum_{I\in S_{\phi_n}, I\subseteq E_{\phi_n}} \alpha_{I,n} = \mu(E_{\phi_n}). \end{equation}
Thus from \eqref{eq:4p55} we immediately see that $\sum_{I\in S_{\phi_n}, I\subseteq E_{\phi_n}}\!\! \left(\alpha_{I,n}-\gamma_I^{\phi_n}\right) \approx 0$, or that $\mu(\{g_{\phi_n}=0\}\cap E_{\phi_n}) \approx 0$, and by this we end the proof of Lemma \ref{lem:4p2}. \end{proof}
Now as we have mentioned before, by the construction of $g_{\phi_n}$, we have that $\int_I g_{\phi_n}\,\mr d\mu = \int_I \phi_n\,\mr d\mu$, for every $I\in S_{\phi_n}$.
Thus $\mc M_{\mc T}g_{\phi_n} \geq \mc M_{\mc T}\phi_n$ on $X$, which implies $\lim_n \int_X (\mc M_{\mc T}g_{\phi_n})^q\,\mr d\mu \geq h\, c$. Since $\int_X g_{\phi_n}\,\mr d\mu = f$ and $\int_X g_{\phi_n}^q\,\mr d\mu=h$, by construction, we conclude that $$\lim_n\int_X (\mc M_{\mc T}g_{\phi_n})^q\,\mr d\mu =c\,h,$$ or that $(g_{\phi_n})_n$ is an extremal sequence.
We prove now the following lemmas, needed for the end of the proof of the characterization of the extremal sequences for \eqref{eq:1p8}.
\begin{lemma} \label{lem:4p4} With the above notation there holds: \[
\lim \int_{E_{\phi_n}} \left|\mc M_{\mc T}g_{\phi_n} - c^\frac{1}{q}g_{\phi_n}\right|^q\mr d\mu = 0. \] \end{lemma}
\begin{proof} We define for every $n\in\mb N^\star$ the set: \[ \Delta_n = \left\{ t\in E_{\phi_n}:\ \mc M_{\mc T}g_{\phi_n} \geq c^\frac{1}{q} g_{\phi_n}(t)\right\}. \]
It is obvious, by passing to a subsequence if necessary, that \begin{equation} \label{eq:4p56} \lim_n \int_{\Delta_n} (\mc M_{\mc T}g_{\phi_n})^q\,\mr d\mu \geq c \lim_n \int_{\Delta_n} g_{\phi_n}^q\,\mr d\mu. \end{equation}
We consider now for every $I\in S_{\phi_n}$, $I\subseteq E_{\phi_n}$ the set $(E_{\phi_n}\!\setminus \Delta_n)\cap A_{I,n}$ where $A_{I,n} = A(\phi_n,I)$. We distinguish two cases.
\begin{enumerate}[\hspace{-5pt}(i)] \item $\Av_I(\phi_n) = y_{I,n} > c^\frac{1}{q} c_I^{\phi_n}$, where $c_I^{\phi_n}$ is the positive value of $g_{\phi_n}$ on $A_{I,n}$ (if it exists). Then because of Lemma \ref{lem:4p1} we have that \[ \mc M_{\mc T}g_{\phi_n}(t) \geq \Av_I(g_{\phi_n}) =\Av_I(\phi_n) > c^\frac{1}{q} c_I^{\phi_n} \geq c^\frac{1}{q} g_{\phi_n}(t), \] for each $t\in A_{I,n}$. Thus $(E_{\phi_n}\!\setminus \Delta_n)\cap A_{I,n} = \emptyset$ in this case. \\
We study now the second one. \\ \item $y_{I,n} \leq c^\frac{1}{q} c_I^{\phi_n}$. Let now $t\in A_{I,n}$ with $g_{\phi_n}(t)>0$, that is $g_{\phi_n}(t) = c_I^{\phi_n}$. We prove that for this $t$ we have $\mc M_{\mc T}g_{\phi_n}(t) \leq c^\frac{1}{q} g_{\phi_n}(t) = c^\frac{1}{q} c_I^{\phi_n}$. \end{enumerate}
Suppose now that we have the opposite inequality. Then there exists $J_t\in \mc T$ such that $t\in J_t$ and $\Av_{J_t}(g_{\phi_n}) > c^\frac{1}{q} c_I^{\phi_n}$. Then one of the following holds
\begin{enumerate}[(a)] \item $J_t \subseteq A_{I,n}$. Then by the form of $g_{\phi_n}$ on $A_{I,n}$ (it equals $0$ or $c_I^{\phi_n}$ there), we have that $\Av_{J_t}(g_{\phi_n}) \leq c_I^{\phi_n} \leq c^\frac{1}{q} c_I^{\phi_n}$, which is a contradiction. Thus this case is excluded. \item $J_t$ is not a subset of $A_{I,n}$. Then two subcases can occur. \begin{enumerate}[($\mr b_1$)] \setlength{\parsep}{0pt} \item $J_t \subseteq I \subseteq E_{\phi_n}$ and contains properly an element of $S_{\phi_n}$, $J'$, for which $(J')^\star = I$. Since now (\rnum{2}) holds, $t\in J_t$ and $\Av_{J_t}(g_{\phi_n}) > c^\frac{1}{q}c_I^{\phi_n}$, we must have that $J' \subsetneq J_t \subsetneq I$.
We choose now an element $J_t'$ of $\mc T$ with $J_t\subseteq J_t' \subsetneq I$, for which the average $\Av_{J_t'}(\phi_n)$ is maximal. Then by its choice we have that for each $K\in \mc T$ such that $J_t' \subset K \subsetneq I$ there holds $\Av_K(\phi_n) \leq \Av_{J_t'}(\phi_n)$. Since now $I\in S_{\phi_n}$ and $\Av_I(\phi_n) \leq c^\frac{1}{q}c_I^{\phi_n}$, by Lemma \ref{lem:2p2} and the choice of $J_t'$ we have that $\Av_K(\phi_n) < \Av_{J_t'}(\phi_n)$ for every $K\in \mc T$ such that $J_t' \subsetneq K$. So again by Lemma \ref{lem:2p2} we conclude that $J_t'\in S_{\phi_n}$. But this is impossible, since $J' \subsetneq J_t' \subsetneq I$, $J'\!, I\in S_{\phi_n}$ and $(J')^\star = I$. We turn now to the second subcase. \item $I\subsetneq J_t$. Then by application of Lemma \ref{lem:4p1} we have that $\Av_{J_t}(\phi_n) = \Av_{J_t}(g_{\phi_n}) > c^\frac{1}{q}c_I^{\phi_n} \geq y_{I,n} = \Av_I(\phi_n)$ which is impossible by Lemma \ref{lem:2p2}, since $I\in S_{\phi_n}$. \end{enumerate} \end{enumerate}
Thus each of the two subcases ($\mr b_1$) and ($\mr b_2$) leads to a contradiction, and so we have proved that in case (\rnum{2}) we have \\ $(E_{\phi_n}\!\setminus \Delta_n) \cap A_{I,n} = A_{I,n} \setminus \{g_{\phi_n}\!=0\}$, while we showed that in case (\rnum{1}), $(E_{\phi_n}\!\setminus \Delta_n)\cap A_{I,n} = \emptyset$. \\
Since $\cup \{A_I: I\in S_{\phi_n},\ I\subseteq E_{\phi_n}\} \approx E_{\phi_n}$, we conclude by the above discussion that $E_{\phi_n}\!\setminus \Delta_n$ can be written as $\left(\cup_{I\in S_{1,\phi_n}} A_{I,n}\right)\setminus \Gamma_{\phi_n}$, where $\mu(\Gamma_{\phi_n})\to 0$ and $S_{1,\phi_n}$ is a subtree of $S_{\phi_n}$. Then by Lemma \ref{lem:2p3} and Theorem \ref{thm:4p3}, by passing if necessary to a subsequence, we have
\[ \lim_n \int_{\cup_{I\in S_{1,\phi_n}}\! A_{I,n}} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu = c \lim_n \int_{\cup_{I\in S_{1,\phi_n}}\! A_{I,n}} \phi_n^q\,\mr d\mu, \] so since $\lim_n \mu(\Gamma_{\phi_n}) = 0$, we conclude that \begin{equation} \label{eq:4p57} \lim_n \int_{E_{\phi_n}\!\setminus\Delta_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu = c \lim_n \int_{E_{\phi_n}\!\setminus\Delta_n} \phi_n^q\,\mr d\mu. \end{equation}
Because then of the relation $\mc M_{\mc T}g_\phi \geq \mc M_{\mc T}\phi$, on $X$ we have by \eqref{eq:4p57} as a consequence that \begin{equation} \label{eq:4p58} \lim_n \int_{E_{\phi_n}\!\setminus\Delta_n} (\mc M_{\mc T}g_{\phi_n})^q\,\mr d\mu \geq c \lim_n \int_{E_{\phi_n}\!\setminus \Delta_n} g_{\phi_n}^q\,\mr d\mu. \end{equation}
Adding \eqref{eq:4p56} and \eqref{eq:4p58}, we obtain \begin{equation} \label{eq:4p59} \lim_n \int_{E_{\phi_n}} (\mc M_{\mc T}g_{\phi_n})^q\,\mr d\mu \geq c\lim_n\int_{E_{\phi_n}} g_{\phi_n}^q\,\mr d\mu, \end{equation} which in fact is an equality, because if we had strict inequality in \eqref{eq:4p59} we would deduce, since $g_{\phi_n} = \phi_n$ on $X\setminus E_{\phi_n}$, that $\lim_n \int_X (\mc M_{\mc T}g_{\phi_n})^q\,\mr d\mu > c\, h$, as we can easily see.
This is a contradiction, since $\int_X g_{\phi_n}\,\mr d\mu = f$ and $\int_X g_{\phi_n}^q\,\mr d\mu = h$, for every $n\in\mb N$, and because of Theorem \ref{thm:1}.
Thus we must have equality in both \eqref{eq:4p56} and \eqref{eq:4p58}. \\ Our proof is completed. \end{proof} We proceed now to the following
\begin{lemma} \label{lem:4p5} Let $X_n\subset X$, and $h_n, z_n: X_n\to \mb R^+$ be measurable functions such that $h_n^q = z_n$, where $q\in(0,1)$ is fixed. Suppose additionally that $g_n, w_n: X\to \mb R^+$ satisfy $g_n^q = w_n$. Suppose also that $g_n \geq h_n$, on $X_n$.
Then if $\lim_n \int_{X_n} (w_n-z_n)\,\mr d\mu = 0$ and the sequence $\int_{X_n}\! w_n\,\mr d\mu$ is bounded, we have that $\lim_n \int_{X_n} (g_n-h_n)^q\,\mr d\mu = 0$. \end{lemma}
\begin{proof} We set $I_n = \int_{X_n} \Big(w_n^\frac{1}{q} - z_n^\frac{1}{q}\Big)^q\mr d\mu$. \\ For every $p>1$, the following elementary inequality is true: $x^p-y^p \leq p(x-y)x^{p-1}$, for $x>y>0$. Thus for $p=\frac{1}{q}$, we have $w_n^p - z_n^p \leq p(w_n-z_n)w_n^{p-1} \implies$ \begin{equation} \label{eq:4p60} I_n \leq \left(\frac{1}{q}\right)^q \int_{X_n} (w_n-z_n)^q w_n^{1-q}\,\mr d\mu. \end{equation}
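The elementary inequality invoked here can be seen, for instance, via the mean value theorem (a short justification, included only for completeness): for $p>1$ and $x>y>0$ there exists $\xi\in(y,x)$ such that
\[
x^p - y^p = p\,\xi^{p-1}(x-y) \leq p\,(x-y)\,x^{p-1},
\]
since $\xi\mapsto \xi^{p-1}$ is increasing when $p>1$.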
If we use now H\"{o}lder's inequality in \eqref{eq:4p60} we immediately obtain that $I_n \leq \left(\frac{1}{q}\right)^q \left(\int_{X_n} (w_n-w)\,\mr d\mu\right)^q \left(\int_{X_n} w_n\right)^{1-q} \to 0$, as $n\to \infty$, by our hypothesis. \end{proof}
\noindent Let now $(\phi_n)$ be an extremal sequence of functions. We define $g_{\phi_n}': (X,\mu)\to \mb R^+$ by \[ g_{\phi_n}'(t) = c_I^{\phi_n},\ t\in A_{I,n} = A(\phi_n,I),\ \ \text{for}\ I\in S_{\phi_n}. \]
\noindent We prove now the following \begin{lemma} \label{lem:4p6}
With the above notation $\lim_n \int_{E_{\phi_n}} |g_{\phi_n}' - \phi_n|^q\,\mr d\mu = 0$. \end{lemma}
\begin{proof} We are going to use again the inequality $t + \frac{1-q}{q} \geq \frac{t^q}{q}$, which holds for every $t>0$ and $q\in(0,1)$. In view of Lemma \ref{lem:4p5} we just need to prove that \[ \int_{\{\phi_n \geq g_{\phi_n}'\}\cap E_{\phi_n}}\!\!\! [\phi_n^q - (g_{\phi_n}')^q]\,\mr d\mu \to 0\quad \text{and}\quad \int_{\{g_{\phi_n}' > \phi_n\}\cap E_{\phi_n}}\!\!\! [(g_{\phi_n}')^q - \phi_n^q]\,\mr d\mu \to 0. \]
We proceed to this as follows. \\ For every $I\in S_{\phi_n}$, $I\subseteq E_{\phi_n}$, we set \begin{align*} \Delta_{I,n}^{(1)} = \{g_{\phi_n}' \leq \phi_n\} \cap A(\phi_n, I), \\ \Delta_{I,n}^{(2)} = \{\phi_n < g_{\phi_n}'\} \cap A(\phi_n,I). \end{align*}
From the inequality mentioned in the beginning of this proof we have that, if $c_I^{\phi_n} > 0$, then \[ \frac{\phi_n(x)}{c_I^{\phi_n}} + \frac{1-q}{q} \geq \frac{1}{q} \frac{\phi_n^q(x)}{(c_I^{\phi_n})^q}, \ \ \forall x\in A_{I,n}, \]
so integrating over every $\Delta_{I,n}^{(j)}$, $j=1,2$ we obtain \begin{align} \label{eq:4p61} & \frac{1}{c_I^{\phi_n}} \int_{\Delta_{I,n}^{(j)}} \phi_n\,\mr d\mu + \frac{1-q}{q}\mu(\Delta_{I,n}^{(j)}) \geq \frac{1}{q} \frac{1}{(c_I^{\phi_n})^q} \int_{\Delta_{I,n}^{(j)}} \phi_n^q\,\mr d\mu \implies \notag \\ & \sum_{I\in S_{\phi_n}'} (c_I^{\phi_n})^{q-1} \int_{\Delta_{I,n}^{(j)}} \phi_n\,\mr d\mu + \frac{1-q}{q} \sum_{I\in S_{\phi_n}'} \mu(\Delta_{I,n}^{(j)}) (c_I^{\phi_n})^q \geq
\frac{1}{q} \int_{\cup_{I\in S_{\phi_n}'}\!\!\! \Delta_{I,n}^{(j)}} \phi_n^q\,\mr d\mu, \end{align} for $j=1,2$, where $S_{\phi_n}' = \{I\in S_{\phi_n}: I\subseteq E_{\phi_n},\ c_I^{\phi_n}>0\}$. \\
From the definition of $g_{\phi_n}'$ we see that \eqref{eq:4p61} gives \begin{multline} \label{eq:4p62} \int_{\cup_{I\in S_{\phi_n}'} \Delta_{I,n}^{(j)}} (g_{\phi_n}')^{q-1}\phi_n\,\mr d\mu + \frac{1-q}{q} \sum_{I\in S_{\phi_n}'} (c_I^{\phi_n})^q \mu(\Delta_{I,n}^{(j)}) \geq \\ \frac{1}{q} \int_{\cup_{I\in S_{\phi_n}'} \Delta_{I,n}^{(j)}} \phi_n^q\,\mr d\mu,\ \ \text{for}\ j=1,2. \end{multline}
Note now that \[ \sum_{I\in S_{\phi_n}'} (c_I^{\phi_n})^q \mu(\Delta_{I,n}^{(j)}) = \begin{cases} \int_{\{\phi_n\geq g_{\phi_n}'\}\cap E_{\phi_n}} (g_{\phi_n}')^q\,\mr d\mu, & j=1 \\[10pt] \int_{\{\phi_n<g_{\phi_n}'\}\cap E_{\phi_n}} (g_{\phi_n}')^q\,\mr d\mu, & j=2, \end{cases} \] and \[ \int_{\cup_{I\in S_{\phi_n}'}\!\!\!\Delta_{I,n}^{(j)}} \phi_n^q\,\mr d\mu = \begin{cases} \int_{\{\phi_n\geq g_{\phi_n}'\}\cap E_{\phi_n}} \phi_n^q\,\mr d\mu, & j=1 \\[10pt] \int_{\{\phi_n<g_{\phi_n}'\}\cap E_{\phi_n}} \phi_n^q\,\mr d\mu, & j=2, \end{cases} \] because if $c_I^{\phi_n}=0$, for some $I\in S_{\phi_n}'$, $I\subseteq E_{\phi_n}$, then $\phi_n=0$ on the respective $A_{I,n}$, and conversely. Additionally: \[ \int_{\cup_{I\in S_{\phi_n}'}\!\!\!\Delta_{I,n}^{(j)}} (g_{\phi_n}')^{q-1}\phi_n\,\mr d\mu = \begin{cases} \int_{\{\phi_n\geq g_{\phi_n}'\}\cap E_{\phi_n}} (g_{\phi_n}')^{q-1}\phi_n\,\mr d\mu, & j=1 \\[10pt] \int_{\{\phi_n<g_{\phi_n}'\}\cap E_{\phi_n}} (g_{\phi_n}')^{q-1}\phi_n\,\mr d\mu, & j=2. \end{cases} \]
So we conclude the following two inequalities:
\begin{equation} \label{eq:4p63} \int\limits_{\{0 < g_{\phi_n}' \leq \phi_n\} \cap E_{\phi_n}}\hspace{-26pt} (g_{\phi_n}')^{q-1}\phi_n\,\mr d\mu + \frac{1-q}{q}\ \int\limits_{\mathpalette\mathclapinternal{\{g_{\phi_n}' \leq \phi_n\} \cap E_{\phi_n}}} (g_{\phi_n}')^q\,\mr d\mu\quad \geq \quad \frac{1}{q}\ \int\limits_{\mathpalette\mathclapinternal{\{g_{\phi_n}' \leq \phi_n\} \cap E_{\phi_n}}} \phi_n^q\,\mr d\mu, \end{equation}
and \begin{equation} \label{eq:4p64} \int\limits_{\{g_{\phi_n}' > \phi_n\} \cap E_{\phi_n}}\hspace{-21pt} (g_{\phi_n}')^{q-1}\phi_n\,\mr d\mu + \frac{1-q}{q}\ \int\limits_{\mathpalette\mathclapinternal{\{g_{\phi_n}' > \phi_n\} \cap E_{\phi_n}}} (g_{\phi_n}')^q\,\mr d\mu\quad \geq \quad \frac{1}{q}\ \int\limits_{\mathpalette\mathclapinternal{\{g_{\phi_n}' >\phi_n\} \cap E_{\phi_n}}} \phi_n^q\,\mr d\mu. \end{equation}
If we sum the above inequalities we get: \begin{equation} \label{eq:4p65} \sum_{I\in S_{\phi_n}'} (c_I^{\phi_n})^{q-1} (c_I^{\phi_n} \gamma_I^{\phi_n}) + \frac{1-q}{q} \int_{E_{\phi_n}} (g_{\phi_n}')^q\,\mr d\mu \geq \frac{1}{q} \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu. \end{equation}
Now the following are true because of Lemma \ref{lem:4p2} \[ \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu = \sum_{I\in S_{\phi_n}'} \gamma_I^{\phi_n}(c_I^{\phi_n})^q \approx \int_{E_{\phi_n}} (g_{\phi_n}')^q\,\mr d\mu. \]
Thus in \eqref{eq:4p65} we must have equality in the limit. As a result we obtain equalities in both \eqref{eq:4p63} and \eqref{eq:4p64} in the limit. \\
As a consequence, if we set \[ t_n = \int_{\{g_{\phi_n}'\leq \phi_n\}\cap E_{\phi_n}} \phi_n^q\,\mr d\mu, \qquad s_n = \int_{\{g_{\phi_n}'\leq \phi_n\}\cap E_{\phi_n}} (g_{\phi_n}')^q\,\mr d\mu, \] we must have that \begin{equation} \label{eq:4p66} \int_{\{\phi_n \geq g_{\phi_n}' > 0\}\cap E_{\phi_n}} \phi_n (g_{\phi_n}')^{q-1}\,\mr d\mu + \frac{1-q}{q}s_n \approx \frac{1}{q}t_n. \end{equation}
But as can be easily seen we have that \begin{equation} \label{eq:4p67} \bigg[\qquad \int\limits_{\qquad\mathpalette\mathclapinternal{\{0 < g_{\phi_n}' \leq \phi_n\}\cap E_{\phi_n}}} \phi_n(g_{\phi_n}')^{q-1}\,\mr d\mu\bigg]^q \cdot \bigg[\qquad \int\limits_{\qquad\mathpalette\mathclapinternal{\{0 < g_{\phi_n}' \leq \phi_n\}\cap E_{\phi_n}}} (g_{\phi_n}')^q\,\mr d\mu\bigg]^{1-q} \geq \qquad \int\limits_{\qquad\mathpalette\mathclapinternal{\{0 < g_{\phi_n}' \leq \phi_n\}\cap E_{\phi_n}}} \phi_n^q\,\mr d\mu. \end{equation}
From \eqref{eq:4p66} and \eqref{eq:4p67} we have as a result that \begin{equation} \label{eq:4p68} \frac{t_n^\frac{1}{q}}{s_n^{\frac{1}{q}-1}} + \frac{1-q}{q}s_n \leq \frac{1}{q}t_n \implies \left(\frac{t_n}{s_n}\right)^\frac{1}{q} + \frac{1-q}{q} \leq \frac{1}{q}\left(\frac{t_n}{s_n}\right), \end{equation} in the limit. \\
But for every $n\in\mb N$ we have that $\left(\frac{t_n}{s_n}\right)^\frac{1}{q} + \frac{1-q}{q} \geq \frac{1}{q}\left(\frac{t_n}{s_n}\right)$ (this is just \eqref{eq:4p28} applied to $t=\left(\frac{t_n}{s_n}\right)^{\frac{1}{q}}$, with equality only when $\frac{t_n}{s_n}=1$). Thus we have equality in \eqref{eq:4p68} in the limit. This means that $\frac{t_n}{s_n}\approx 1$ and, since $(t_n)_n$ and $(s_n)_n$ are bounded sequences, we conclude that \[ t_n-s_n\to 0 \implies \int_{\{g_{\phi_n}'\leq \phi_n\}\cap E_{\phi_n}} [\phi_n^q-(g_{\phi_n}')^q]\,\mr d\mu \to 0,\ \ \text{as}\ n\to\infty. \]
In a similar way we prove that $\int_{\{\phi_n < g_{\phi_n}'\}\cap E_{\phi_n}} [(g_{\phi_n}')^q - \phi_n^q]\,\mr d\mu \to 0$. Thus Lemma \ref{lem:4p6} is proved. \\ \end{proof}
\noindent We now proceed to the following. \begin{lemma} \label{lem:4p7} With the above notation, we have that \[
\lim_n \int_{E_{\phi_n}} \left| \mc M_{\mc T}\phi_n - c^\frac{1}{q}\phi_n\right|^q\mr d\mu = 0. \] \end{lemma}
\begin{proof}
We set $J_n = \int_{E_{\phi_n}} \left|\mc M_{\mc T}\phi_n - c^\frac{1}{q}\phi_n\right|^q\mr d\mu$. \\ It is true that $(x+y)^q < x^q+y^q$, whenever $x,y>0$, $q\in (0,1)$.
Thus \begin{multline*}
J_n \leq \int_{E_{\phi_n}} |\mc M_{\mc T}\phi_n - \mc M_{\mc T}g_{\phi_n}|^q\,\mr d\mu + \int_{E_{\phi_n}} |\mc M_{\mc T}g_{\phi_n}-c^\frac{1}{q}g_{\phi_n}|^q\,\mr d\mu + \\
c\int_{E_{\phi_n}} |g_{\phi_n}-\phi_n|^q\,\mr d\mu = J_n^{(1)} + J_n^{(2)} + J_n^{(3)}. \end{multline*}
By Lemmas \ref{lem:4p6} and \ref{lem:4p2} we have that $J_n^{(3)}\to 0$, as $n\to \infty$. Also, $J_n^{(2)}\to 0$ by Lemma \ref{lem:4p4}. We look now at $J_n^{(1)} = \int_{E_{\phi_n}} |\mc M_{\mc T}\phi_n - \mc M_{\mc T}g_{\phi_n}|^q\,\mr d\mu$.
As we have mentioned before $\mc M_{\mc T}g_{\phi_n} \geq \mc M_{\mc T}\phi_n$, on $X$, thus $J_n^{(1)} = \int_{E_{\phi_n}} (\mc M_{\mc T}g_{\phi_n} - \mc M_{\mc T}\phi_n)^q\,\mr d\mu$. \\
Since $\lim_n\int_{E_{\phi_n}} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu = \lim_n\int_{E_{\phi_n}} (\mc M_{\mc T}g_{\phi_n})^q\,\mr d\mu = c \lim_n \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu$ we immediately see that $J_n^{(1)}\to 0$, by Lemma \ref{lem:4p5}. \\
The proof of Lemma \ref{lem:4p7} is thus complete, completing also the proof of Theorem \ref{thm:a}. \end{proof}
\begin{remark} \label{rem:4p1} We need to mention that Theorem \ref{thm:a} holds true on $\mb R^n$ without the hypothesis that the sequence $(\phi_n)_n$ consists of $\mc T$-good functions. This is true since in the case of $\mb R^n$, where $\mc T$ is the usual tree of dyadic subcubes of a fixed cube $Q$, the class of $\mc T$-good functions contains that of the dyadic step functions on $Q$, which are dense in $L^1(X,\mu)$. \end{remark}
\noindent Nikolidakis Eleftherios\\ Visiting Professor\\ Department of Mathematics \\ University of Ioannina \\ Greece\\ E-mail address: [email protected]
\end{document} |
\begin{document}
\author[M. Nasernejad, A. A. Qureshi, K. Khashyarmanesh, and L. G. Roberts]{ Mehrdad ~Nasernejad$^{1}$, Ayesha Asloob Qureshi$^{2,*}$, Kazem Khashyarmanesh$^{1}$, and Leslie G. Roberts$^{3}$} \title[Classes of normally and nearly normally torsion-free ideals]{Classes of normally and nearly normally torsion-free monomial ideals} \subjclass[2010]{13B25, 13F20, 05E40.} \keywords { Normally torsion-free ideals, Nearly normally torsion-free ideals, Associated prime ideals, $t$-spread monomial ideals, Hypergraphs.} \thanks{$^*$Corresponding author}
\thanks{E-mail addresses: m\_{[email protected]}, [email protected], [email protected], and [email protected]} \maketitle
\begin{center} {\it $^1$Department of Pure Mathematics,
Ferdowsi University of Mashhad,\\ P.O.Box 1159-91775, Mashhad, Iran\\
$^2$Sabanc\i\;University, Faculty of Engineering and Natural Sciences, \\ Orta Mahalle, Tuzla 34956, Istanbul, Turkey\\ $^3$Department of Mathematics and Statistics, Queen's University, \\ Kingston, Ontario, Canada, K7L 3N6 } \end{center}
\begin{abstract} In this paper, our main focus is to explore different classes of nearly normally torsion-free ideals. We first characterize all finite simple connected graphs with nearly normally torsion-free cover ideals. Next, we characterize all normally torsion-free $t$-spread principal Borel ideals that can also be viewed as edge ideals of uniform multipartite hypergraphs. \end{abstract}
\section{Introduction}
Let $\mathcal{H}=(V_{\mathcal{H}},E_{\mathcal{H}})$ be a simple hypergraph on vertex set $V_{\mathcal{H}}$ and edge set $E_{\mathcal{H}}$. The edge ideal of $\mathcal{H}$, denoted by $I(\mathcal{H})$, is the ideal generated by the monomials corresponding to the edges of $\mathcal{H}$. A hypergraph $\mathcal{H}$ is called {\em Mengerian} if it satisfies a certain min-max equation, which is known as the Mengerian property in hypergraph theory or as the max-flow min-cut property in integer programming. Algebraically, it is equivalent to $I(\mathcal{H})$ being normally torsion-free, see \cite[Corollary 10.3.15]{HH1}, \cite[Theorem 14.3.6]{V1}.
Let $R$ be a commutative Noetherian ring and $I$ be an ideal of $R$. In addition, let $\mathrm{Ass}_R(R/I)$ be the set of all prime ideals associated to $I$. An ideal $I$ is called {\it normally torsion-free} if $\mathrm{Ass}(R/I^k)\subseteq \mathrm{Ass}(R/I)$, for all $k\geq 1$ \cite[Definition 1.4.5]{HH1}. In other words, if $I$ has no embedded primes, then $I$ is normally torsion-free if and only if the ordinary powers of $I$ coincide with the symbolic powers of $I$. Normally torsion-free ideals have been a topic of several papers; however, a few classes of these ideals originate from graph theory. Simis, Vasconcelos and Villarreal showed in \cite{SVV} that a finite simple graph is bipartite if and only if its edge ideal is normally torsion-free. An analogue of bipartite graphs in higher dimensions is considered to be a hypergraph that avoids ``special odd cycles''. Such hypergraphs are called {\em balanced}, and a well-known result of Fulkerson, Hoffman and Oppenheim in \cite{FHO} states that balanced hypergraphs are Mengerian. It follows immediately that the edge ideals of balanced hypergraphs are normally torsion-free. However, unlike the case of bipartite graphs, it should be noted that the converse of this statement is not true. We refer the reader to \cite{B} for the related definitions in hypergraph theory.
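A standard example illustrating the graph-theoretic statement (recorded here only for illustration) is the triangle $C_3$, the smallest non-bipartite graph: its edge ideal $I=(xy,yz,xz)\subset R=K[x,y,z]$ satisfies $\mathrm{Ass}_R(R/I)=\mathrm{Min}(I)=\{(x,y),(y,z),(x,z)\}$, while
\[
(I^2 :_R xyz) = (x,y,z) \quad\text{and}\quad xyz\notin I^2,
\]
so the maximal ideal $(x,y,z)$ is an embedded prime of $I^2$; in particular, $I$ is not normally torsion-free.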
A monomial ideal $I$ in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$ is called {\it nearly normally torsion-free} if there exist a positive integer $k$ and a monomial prime ideal
$\mathfrak{p}$ such that $\mathrm{Ass}_R(R/I^m)=\mathrm{Min}(I)$ for all $1\leq m\leq k$, and
$\mathrm{Ass}_R(R/I^m) \subseteq \mathrm{Min}(I) \cup \{\mathfrak{p}\}$ for all $m \geq k+1$, see \cite[Definition 2.1]{Claudia}. This concept generalizes normally torsion-freeness to some extent. Recently, the author, in \cite[Theorem 2.3]{Claudia}, characterized all connected graphs whose edge ideals are nearly normally torsion-free. Our main aim is to further explore different classes of nearly normally torsion-free ideals and normally torsion-free ideals that originate from hypergraph theory.
This paper is organized as follows: In Section~\ref{prem}, we provide some notation and definitions which appear throughout the paper. In Section~\ref{generalresults}, we give some results to develop different criteria that help to investigate whether an ideal is normally torsion-free. For this purpose, we employ the notion of monomial localization of a monomial ideal with respect to a prime monomial ideal, see Lemmas \ref{Lem. 1} and \ref{Lem. 2}. In what follows, we provide two applications of Corollary \ref{Cor. 1}. In the first application, we give a class of nearly normally torsion-free monomial ideals, which is concerned with the monomial ideals of intersection type (Proposition \ref{App. 1}). For the second application, we turn our attention to the $t$-spread principal Borel ideals, and reprove one of the main results of \cite{Claudia} (Proposition \ref{App. 2}). We close Section~\ref{generalresults} by giving a concrete example of a nearly normally torsion-free ideal that does not have the strong persistence property, see Definition~\ref{spersistence}. Note that the ideal given in our example is not square-free. The question whether nearly normally torsion-free square-free monomial ideals have the persistence or the strong persistence property remains open.
In Section~\ref{coverideals}, one of our main results is Theorem \ref{Main. 1}, which proves that if $G$ is a finite simple connected graph, then the cover ideal of $G$ is nearly normally torsion-free if and only if $G$ is either a bipartite graph or an almost bipartite graph. In proving this we found reference \cite{JS} very helpful. The second main result of Section~\ref{coverideals} is Corollary~\ref{Cor. NFT1}, which provides a new class of normally torsion-free ideals based on the existing ones. To reach this purpose, we start with a hypergraph $\mathcal{H}$ whose edge ideal is normally torsion-free; in other words, we take a Mengerian hypergraph $\mathcal{H}$. By using the notion of coloring of hypergraphs, in Theorem~\ref{NTF1}, we show that the new hypergraph $\mathcal{H'}$ obtained by adding a ``whisker'' to $\mathcal{H}$ is also Mengerian. Here, to add a whisker to $\mathcal{H}$, one adds a new vertex and an edge of size two consisting of this new vertex and an existing vertex of $\mathcal{H}$.
In Section~\ref{borel}, we study $t$-spread principal Borel ideals, see Definition~\ref{boreldef}. An interesting feature of $t$-spread principal Borel ideals is noted in Theorem~\ref{complete}, which shows that a $t$-spread principal Borel ideal is normally torsion-free if and only if it can be viewed as an edge ideal of a certain $d$-uniform $d$-partite hypergraph. As we note in Example~\ref{oddcycle}, the hypergraph associated to a $t$-spread principal Borel ideal may contain special odd cycles, and hence these hypergraphs are not necessarily balanced. This prevents us from using the result of Fulkerson, Hoffman and Oppenheim in \cite{FHO}. We prove Theorem~\ref{complete} by applying a combination of algebraic and combinatorial techniques. We make use of \cite[Theorem 3.7]{SNQ}, which gives a criterion to check whether an ideal is normally torsion-free. In addition, we use the notion of the linear relation graph associated to monomial ideals, see Definition~\ref{linearrelationgraphdef}, to study the set of associated prime ideals of $t$-spread principal Borel ideals.
\section{Preliminaries}\label{prem}
Let $R$ be a commutative Noetherian ring and $I$ be an ideal of $R$. A prime ideal $\mathfrak{p}\subset R$ is an {\it associated prime} of $I$ if there exists an element $v$ in $R$ such that $\mathfrak{p}=(I:_R v)$, where $(I:_R v)=\{r\in R |~ rv\in I\}$. The {\it set of associated primes} of $I$, denoted by $\mathrm{Ass}_R(R/I)$, is the set of all prime ideals associated to $I$. The minimal members of $\mathrm{Ass}_R(R/I)$ are called the {\it minimal} primes of $I$, and $\mathrm{Min}(I)$ denotes the set of minimal prime ideals of $I$. Moreover, the associated primes of $I$ which are not minimal are called the {\it embedded} primes of $I$. If $I$ is a square-free monomial ideal, then $\mathrm{Ass}_R(R/I)=\mathrm{Min}(I)$, for example see \cite[Corollary 1.3.6]{HH1}.
\begin{definition}\label{spersistence} The ideal $I$ is said to have the {\it persistence property} if $\mathrm{Ass}(R/I^k)\subseteq \mathrm{Ass}(R/I^{k+1})$ for all positive integers $k$. Moreover, an ideal $I$ satisfies the {\it strong persistence property} if $(I^{k+1}: I)=I^k$ for all positive integers $k$; for more details, we refer to \cite{HQ, N2}. The strong persistence property implies the persistence property, but the converse is not true, as noted in \cite{HQ}. \end{definition}
Here, we should recall the definition of symbolic powers of an ideal.
\begin{definition} (\cite[Definition 4.3.22]{V1})
Let $I$ be an ideal of a ring $R$ and $\mathfrak{p}_1, \ldots, \mathfrak{p}_r$ the minimal primes of $I$. Given an integer $n \geq 1$, the {\it $n$-th symbolic power} of $I$ is defined to be the ideal $$I^{(n)} = \mathfrak{q}_1 \cap \cdots \cap \mathfrak{q}_r,$$ where $\mathfrak{q}_i$ is the primary component of $I^n$ corresponding to $\mathfrak{p}_i$. \end{definition}
Furthermore, we say that $I$ has the {\it symbolic strong persistence property} if $(I^{(k+1)}: I^{(1)})=I^{(k)}$ for all $k$, where $I^{(k)}$ denotes the $k$-th symbolic power of $I$, cf. \cite{KNT, RT}. \par
Let $R$ be a unitary commutative ring and $I$ an ideal in $R$. An element $f\in R$ is {\it integral} over $I$, if there exists an equation
$$f^k+c_1f^{k-1}+\cdots +c_{k-1}f+c_k=0 ~~\mathrm{with} ~~ c_i\in I^i.$$
The set $\overline{I}$ of all elements of $R$ which are integral over $I$ is called the
{\it integral closure} of $I$. The ideal $I$ is {\it integrally closed} if $I=\overline{I}$, and $I$ is {\it normal} if all powers of $I$ are integrally closed; we refer to \cite{HH1} for more information. \par
In particular, if $I$ is a monomial ideal, then the notion of integrality becomes simpler, namely, a monomial $u \in R=K[x_1, \ldots, x_n]$ is integral over $I\subset R$ if and only if there exists an integer $k$ such that $u^k \in I^k$, see \cite[Theorem 1.4.2]{HH1}.
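For instance, for $I=(x_1^2,x_2^2)\subset K[x_1,x_2]$ the monomial $u=x_1x_2$ does not belong to $I$, but $u^2=x_1^2x_2^2\in I^2$; hence $u\in \overline{I}$ and, in particular, $I$ is not integrally closed.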
An ideal $I$ is called {\it normally torsion-free} if $\mathrm{Ass}(R/I^k)\subseteq \mathrm{Ass}(R/I)$, for all $k\geq 1$. If $I$ is a square-free monomial ideal, then $I$ is normally torsion-free if and only if $I^k=I^{(k)}$, for all $k \geq 1$, see \cite[Theorem 1.4.6]{HH1}.
\begin{definition}
A monomial ideal $I$ in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$ is called {\it nearly normally torsion-free} if there exist a positive integer $k$ and a monomial prime ideal
$\mathfrak{p}$ such that $\mathrm{Ass}_R(R/I^m)=\mathrm{Min}(I)$ for all $1\leq m\leq k$, and
$\mathrm{Ass}_R(R/I^m) \subseteq \mathrm{Min}(I) \cup \{\mathfrak{p}\}$ for all $m \geq k+1$, see \cite[Definition 2.1]{Claudia}.
\end{definition}
Next, we recall some notions which are related to graph theory and hypergraphs.
\begin{definition} Let $G=(V(G), E(G))$ be a finite simple graph on the vertex set $V(G)=\{1,\ldots,n\}$. Then the {\it edge ideal } associated to $G$ is the monomial ideal $$I(G)=(x_ix_j \ : \ \{i,j\}\in E(G)) \subset R=K[x_1,\ldots, x_n]. $$ \end{definition}
A finite {\it hypergraph} $\mathcal{H}$ on a vertex set $[n]=\{1,2,\ldots,n\}$ is a collection of edges $\{ E_1, \ldots, E_m\}$ with $E_i \subseteq [n]$ for all $i=1, \ldots,m$. The vertex set $[n]$ of $\mathcal{H}$ is denoted by $V_{\mathcal{H}}$, and the edge set of $\mathcal{H}$ is denoted by $E_{\mathcal{H}}$. Typically, a hypergraph is represented as a pair $(V_{\mathcal{H}}, E_{\mathcal{H}})$. A hypergraph $\mathcal{H}$ is called {\it simple} if $E_i \subseteq E_j$ implies $i = j$. Moreover, if $|E_i|=d$ for all $i=1, \ldots, m$, then $\mathcal{H}$ is called a {\em $d$-uniform} hypergraph. A 2-uniform hypergraph $\mathcal{H}$ is just a finite simple graph. If $\mathcal{W}$ is a subset of the vertices of $\mathcal{H}$, then the {\it induced subhypergraph} of $\mathcal{H}$ on $\mathcal{W}$ is $(\mathcal{W}, E_{\mathcal{W}})$, where $E_{\mathcal{W}}=\{E\cap \mathcal{W}: E \in E_{\mathcal{H}} \text{ and } E \cap \mathcal{W} \neq \emptyset\}$.
In \cite{HaV}, H\`a and Van Tuyl extended the concept of the edge ideal to hypergraphs.
\begin{definition} \cite{HaV} Let $\mathcal{H}$ be a hypergraph on the vertex set $V_{\mathcal{H}}=[n]$, and $E_{\mathcal{H}} = \{E_1, \ldots, E_m\}$ be the edge set of $\mathcal{H}$. Then the {\it edge ideal} corresponding to $\mathcal{H}$ is given by
$$I(\mathcal{H}) = (\{x^{E_i}~ | ~E_i\in E_{\mathcal{H}}\}),$$ where $x^{E_i}=\prod_{j\in E_i} x_j$.
A subset $W \subseteq V_{\mathcal{H}}$ is a {\it vertex cover} of $\mathcal{H}$ if $W \cap E_i\neq \emptyset$ for all $i=1, \ldots, m$. A vertex cover $W$ is {\it minimal} if no proper subset of $W$ is a vertex cover of $\mathcal{H}$. Let $W_1, \ldots, W_t$ be the minimal vertex covers of $\mathcal{H}$. Then the cover ideal of the hypergraph $\mathcal{H}$, denoted by $J(\mathcal{H})$, is given by $J(\mathcal{H})=(X_{W_1}, \ldots, X_{W_t}),$ where $X_{W_j}=\prod_{r\in W_j}x_r$ for each $j=1, \ldots, t$.
\end{definition}
Moreover, recall that the Alexander dual of a square-free monomial ideal $I$, denoted by $I^\vee$, is given by
$$I^\vee= \bigcap_{u\in \mathcal{G}(I)} (x_i~:~ x_i|u).$$
In particular, according to Proposition 2.7 in \cite{FHM}, one has $J(\mathcal{H})=I(\mathcal{H})^{\vee}$, where $I(\mathcal{H})^{\vee}$ denotes the Alexander dual of $I(\mathcal{H})$.
Throughout this paper, we denote the unique minimal set of monomial generators of a monomial ideal $I$ by $\mathcal{G}(I)$. Also, $R=K[x_1,\ldots, x_n]$ is a polynomial ring over a field $K$, $\mathfrak{m}=(x_1, \ldots, x_n)$ is the graded maximal ideal of $R$, and $x_1, \ldots, x_n$ are indeterminates.
\section{Some classes of nearly normally torsion-free ideals}\label{generalresults}
In this section, our aim is to give additional classes of nearly normally torsion-free monomial ideals. To achieve this, we first recall the definition of the monomial localization of a monomial ideal with respect to a monomial prime ideal. Let $I$ be a monomial ideal in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$.
We denote by $V^*(I)$ the set of monomial prime ideals containing $I$. Let $\mathfrak{p}=(x_{i_1}, \ldots, x_{i_r})$ be a monomial prime ideal. The {\it monomial localization} of $I$ with respect to $\mathfrak{p}$, denoted by $I(\mathfrak{p})$, is the ideal in the polynomial ring $R(\mathfrak{p})=K[x_{i_1}, \ldots, x_{i_r}]$ which is obtained from $I$ by applying the $K$-algebra homomorphism $R\rightarrow R(\mathfrak{p})$ with $x_j\mapsto 1$ for all $x_j\notin \{x_{i_1}, \ldots, x_{i_r}\}$.
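For instance, if $I=(x_1x_2,\, x_2x_3x_4)\subset K[x_1,x_2,x_3,x_4]$ and $\mathfrak{p}=(x_1,x_2)$, then applying $x_3\mapsto 1$ and $x_4\mapsto 1$ yields $I(\mathfrak{p})=(x_1x_2,\, x_2)=(x_2)$ in $R(\mathfrak{p})=K[x_1,x_2]$.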
\begin{lemma} \label{Lem. 1} Let $I$ be a nearly normally torsion-free monomial ideal in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$. Then there exists a monomial prime ideal $\mathfrak{p}\in V^*(I)$ such that $I(\mathfrak{p}\setminus \{x_i\})$ is normally torsion-free for all $x_i\in \mathfrak{p}$. \end{lemma}
\begin{proof} Since $I$ is nearly normally torsion-free, there exist a positive integer $k$ and a monomial prime ideal $\mathfrak{p}$ such that $\mathrm{Ass}_R(R/I^m)=\mathrm{Min}(I)$ for all $1\leq m\leq k$, and
$\mathrm{Ass}_R(R/I^m) \subseteq \mathrm{Min}(I) \cup \{\mathfrak{p}\}$ for all $m \geq k+1$. We claim that $I(\mathfrak{p}\setminus \{x_i\})$ is normally torsion-free for all $x_i\in \mathfrak{p}$. To do this, fix $x_i\in \mathfrak{p}$, and set $\mathfrak{q}:=\mathfrak{p}\setminus \{x_i\}$.
We need to show that
$\mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/(I(\mathfrak{q}))^\ell) \subseteq \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/I(\mathfrak{q}))$ for all $\ell$. Fix $\ell \geq 1$. It follows from \cite[Lemma 4.6]{RNA} that $$\mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/(I(\mathfrak{q}))^\ell)= \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/I^\ell(\mathfrak{q}))=\{Q~:~ Q\in \mathrm{Ass}_{R}(R/I^\ell)~\mathrm{and}~ Q \subseteq \mathfrak{q}\}.$$ Pick an arbitrary element $Q\in \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/(I(\mathfrak{q}))^\ell)$. Thus, $Q\in \mathrm{Ass}_{R}(R/I^\ell)$ and $ Q \subseteq \mathfrak{q}$. Since $\mathfrak{q}=\mathfrak{p}\setminus \{x_i\}$, this yields that $Q\neq \mathfrak{p}$; thus, one must have $Q\in \mathrm{Min}(I)$. Hence, $Q\in \mathrm{Ass}_R(R/I)$, and so $Q\in \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/I(\mathfrak{q}))$. Therefore, $I(\mathfrak{p}\setminus \{x_i\})$ is normally torsion-free. \end{proof}
\begin{lemma} \label{Lem. 2} Let $I$ be a monomial ideal in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$ such that $\mathrm{Ass}_R(R/I)=\mathrm{Min}(I)$. Let $I(\mathfrak{m}\setminus \{x_i\})$ be normally torsion-free for all $i=1, \ldots, n$, where $\mathfrak{m}=(x_1, \ldots, x_n)$. Then $I$ is nearly normally torsion-free. \end{lemma}
\begin{proof} In light of $\mathrm{Ass}_R(R/I)=\mathrm{Min}(I)$, it is enough to show that $\mathrm{Ass}_R(R/I^k) \subseteq \mathrm{Min}(I) \cup \{\mathfrak{m}\}$ for all $k \geq 2.$ To achieve this, fix $k \geq 2$, and take an arbitrary element $Q \in \mathrm{Ass}_R(R/I^k)$. If $Q=\mathfrak{m}$, then the proof is over. Hence, let $Q\neq \mathfrak{m}$. Since $Q$ is a monomial prime ideal, this implies that $Q\subseteq \mathfrak{m}\setminus \{x_j\}$ for some $x_j \in \mathfrak{m}$. Put $\mathfrak{q}:=\mathfrak{m}\setminus \{x_j\}$. Because $I(\mathfrak{q})$ is normally torsion-free, we have $\mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/(I(\mathfrak{q}))^k) \subseteq \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/I(\mathfrak{q}))$. Also, one can deduce from \cite[Lemma 4.6]{RNA} that $Q\in \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/(I(\mathfrak{q}))^k)$, and so $Q \in \mathrm{Ass}_{R(\mathfrak{q})}(R(\mathfrak{q})/I(\mathfrak{q}))$. This yields that $Q \in\mathrm{Ass}_R(R/I)$, and hence $Q\in \mathrm{Min}(I)$. This completes the proof. \end{proof}
As an immediate consequence of Lemma \ref{Lem. 2}, we obtain the following corollary:
\begin{corollary} \label{Cor. 1} Let $I$ be a square-free monomial ideal in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$. Let $I(\mathfrak{m}\setminus \{x_i\})$ be normally torsion-free for all $i=1, \ldots, n$, where $\mathfrak{m}=(x_1, \ldots, x_n)$. Then $I$ is nearly normally torsion-free. \end{corollary}
As an application of Lemma \ref{Lem. 2}, we give a class of nearly normally torsion-free monomial ideals in the subsequent proposition. To see this, one needs to recall from \cite{HV} that a monomial ideal is said to be a {\it monomial ideal of intersection type} when it can be presented as an intersection of powers of monomial prime ideals.
\begin{proposition} \label{App. 1}
Let $R=K[x_1, \ldots, x_n]$ be a polynomial ring and
$I=\cap_{\mathfrak{p}\in \mathrm{Ass}_R(R/I)}\mathfrak{p}^{d_{\mathfrak{p}}}$ be a monomial ideal of intersection type such that for any $1\leq i \leq n$, there exists a unique
$\mathfrak{p}\in \mathrm{Ass}_R(R/I)$ with $I(\mathfrak{m}\setminus \{x_i\})=\mathfrak{p}^{d_{\mathfrak{p}}}$, where $\mathfrak{m}=(x_1, \ldots, x_n)$. Then $I$ is nearly normally torsion-free.
\end{proposition}
\begin{proof}
We first note that the assumption implies that the monomial ideal $I$ must have the following form
$$I=(\mathfrak{m}\setminus \{x_1\})^{d_1} \cap (\mathfrak{m}\setminus \{x_2\})^{d_2} \cap \cdots \cap (\mathfrak{m}\setminus \{x_n\})^{d_n},$$
for some positive integers $d_1, \ldots, d_n$.
This gives rise to $\mathrm{Ass}_R(R/I)=\mathrm{Min}(I)$. In addition, since $(\mathfrak{m}\setminus \{x_i\})^{d_i}$, for each $i=1, \ldots, n$, is normally torsion-free, the claim can be deduced promptly from Lemma \ref{Lem. 2}. \end{proof}
To show Lemma \ref{Lem. Multipe}, we require the following auxiliary result.
\begin{theorem} (\cite[Theorem 5.2]{KHN2})\label{5.2KHN}
Let $I$ be a monomial ideal of $R$ with $\mathcal{G}(I) =\{u_1,\ldots,u_m\}$. Also assume that there exists a monomial $h=x_{j_1}^{b_1}\cdots x_{j_s}^{b_s}$ such that $h | u_i$ for all $i=1,\ldots,m$. By setting $J:=(u_1/h,\ldots,u_m/h)$, we have $$\mathrm{Ass}_R(R/I)=\mathrm{Ass}_R(R/J)\cup\{ (x_{j_1}),\ldots,(x_{j_s})\}.$$ \end{theorem}
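For instance, if $I=(x_1x_2,\,x_1x_3)\subset K[x_1,x_2,x_3]$, then taking $h=x_1$ and $J=(x_2,x_3)$ in Theorem \ref{5.2KHN} gives $\mathrm{Ass}_R(R/I)=\mathrm{Ass}_R(R/J)\cup \{(x_1)\}=\{(x_2,x_3),\,(x_1)\}$, in accordance with the primary decomposition $I=(x_1)\cap (x_2,x_3)$.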
The next lemma says that a monomial ideal is nearly normally torsion-free if and only if its monomial multiple is nearly normally torsion-free under certain conditions. It is an updated version of \cite[Lemma 3.5]{HLR} and \cite[Lemma 3.12]{SN}.
\begin{lemma}\label{Lem. Multipe} Let $I$ be a monomial ideal in a polynomial ring $R=K[x_1, \ldots, x_n]$, and $h$ be a monomial in $R$ such that $\mathrm{gcd}(h,u)=1$ for all $u\in \mathcal{G}(I)$. Then $I$ is nearly normally torsion-free if and only if $hI$ is nearly normally torsion-free. \end{lemma} \begin{proof} $(\Rightarrow)$ Assume that $I$ is nearly normally torsion-free. Let $h=x_{j_1}^{b_1}\cdots x_{j_s}^{b_s}$ with $j_1, \ldots, j_s \in \{1, \ldots, n\}$. On account of Theorem \ref{5.2KHN}, we obtain, for all $\ell$, \begin{equation} \mathrm{Ass}_R(R/(hI)^{\ell})=\mathrm{Ass}_R(R/I^{\ell})\cup\{ (x_{j_1}),\ldots,(x_{j_s})\}. \label{12} \end{equation} Due to $\mathrm{gcd}(h,u)=1$ for all $u\in \mathcal{G}(I)$, it is routine to check that \begin{equation} \mathrm{Min}(hI)=\mathrm{Min}(I)\cup\{ (x_{j_1}),\ldots,(x_{j_s})\}. \label{13} \end{equation} Since $I$ is nearly normally torsion-free, there exist a positive integer $k$ and a monomial prime ideal
$\mathfrak{p}$ such that
$\mathrm{Ass}_R(R/I^m)=\mathrm{Min}(I)$ for all $1\leq m\leq k$, and
$\mathrm{Ass}_R(R/I^m) \subseteq \mathrm{Min}(I) \cup \{\mathfrak{p}\}$ for all $m \geq k+1$. Select an arbitrary element $\mathfrak{q}\in \mathrm{Ass}_R(R/(hI)^m)$.
Then \eqref{12} implies that $\mathfrak{q}\in \mathrm{Ass}_R(R/I^m)\cup\{ (x_{j_1}),\ldots,(x_{j_s})\}$.
If $1\leq m \leq k$, then \eqref{13} yields that
$\mathfrak{q}\in \mathrm{Min}(hI)$. Hence, let $m\geq k+1$. The claim follows readily from \eqref{12}, \eqref{13}, and $\mathrm{Ass}_R(R/I^m) \subseteq \mathrm{Min}(I) \cup \{\mathfrak{p}\}$.
$(\Leftarrow)$ Let $hI$ be nearly normally torsion-free. This means that there exist a positive integer $k$ and a monomial prime ideal
$\mathfrak{p}$ such that
$\mathrm{Ass}_R(R/(hI)^m)=\mathrm{Min}(hI)$ for all $1\leq m\leq k$, and
$\mathrm{Ass}_R(R/(hI)^m) \subseteq \mathrm{Min}(hI) \cup \{\mathfrak{p}\}$ for all $m \geq k+1$. Take an arbitrary element $\mathfrak{q}\in \mathrm{Ass}_R(R/I^m)$. Because $\mathrm{gcd}(h,u)=1$ for all $u\in \mathcal{G}(I)$, no minimal generator of $I$ is divisible by any of $x_{j_1},\ldots,x_{j_s}$; hence $I^m \not\subseteq (x_{j_l})$ for every $l$, and since every associated prime of $I^m$ contains $I^m$, we get $\mathfrak{q} \notin
\{ (x_{j_1}),\ldots,(x_{j_s})\}$. If $1\leq m \leq k$, then \eqref{12} gives that
$\mathfrak{q}\in \mathrm{Ass}_R(R/(hI)^m)$, and so $\mathfrak{q}\in \mathrm{Min}(hI)$. As $\mathfrak{q} \notin
\{ (x_{j_1}),\ldots,(x_{j_s})\}$, we obtain $\mathfrak{q}\in \mathrm{Min}(I)$. We thus assume that $m\geq k+1$.
One can derive the assertion according to the facts $\mathfrak{q} \notin \{ (x_{j_1}),\ldots,(x_{j_s})\}$,
\eqref{12}, \eqref{13}, and $\mathrm{Ass}_R(R/(hI)^m) \subseteq \mathrm{Min}(hI) \cup \{\mathfrak{p}\}$. \end{proof}
We conclude this section by observing that nearly normally torsion-freeness does not imply the strong persistence property. It is not known if nearly normally torsion-freeness implies the persistence property.
\begin{example} Let $R=K[x_1, x_2, x_3]$ be the polynomial ring over a field $K$ and $I:=(x_2^4, x_1x_2^3, x_1^3x_2, x_1^4x_3)$ be a monomial ideal of $R$. On account of $$ I=(x_1, x_2^4) \cap (x_1^3, x_2^3) \cap (x_1^4, x_2) \cap (x_2, x_3), $$ one can conclude that $\mathrm{Ass}_R(R/I)=\mathrm{Min}(I)=\{(x_1, x_2), (x_2, x_3)\}.$ We claim that $\mathrm{Ass}_R(R/I^m)=\mathrm{Min}(I) \cup \{(x_1, x_2, x_3)\}$ for all $m\geq 2$. To prove this claim, fix $m\geq 2$. In what follows, we verify that $(x_1, x_2, x_3)\in \mathrm{Ass}_R(R/I^m)$. To establish this, we show that $(I^m:_Rv)=(x_1, x_2, x_3)$, where $v:=x_1^{3m-1} x_2^{m+1}$. To see this, one may consider the following statements: \begin{itemize} \item[(i)] Since $vx_1=x_1^{3m}x_2^{m+1}= (x_1^3x_2)^mx_2$ and $x_1^3x_2\in I$, we get $vx_1\in I^m,$ and so $x_1 \in (I^m:_Rv)$; \item[(ii)] Due to $vx_2=x_1^{3m-1}x_2^{m+2}= (x_1^3x_2)^{m-1} (x_1^2x_2^3)$ and $x_1^2x_2^3\in I$, one has $vx_2 \in I^m,$ and hence $x_2 \in (I^m:_Rv)$; \item[(iii)] As $vx_3=x_1^{3m-1}x_2^{m+1}x_3 =(x_1^3x_2)^{m-2}(x_1^5x_2^3x_3)$ and $x_1^5x_2^3x_3\in I^2$, we obtain $vx_3 \in I^m,$ and thus $x_3 \in (I^m:_Rv)$. \end{itemize}
Consequently, $(x_1, x_2, x_3) \subseteq (I^m:_Rv).$ For the reverse inclusion, it suffices to show that $v \notin I^m$. Suppose, on the contrary, that $v \in I^m$. Then
there exist monomials $h_1, \ldots, h_m \in \mathcal{G}(I)$ such that $h_1 \cdots h_m | x_1^{3m-1} x_2^{m+1}$; in particular, $x_3\nmid h_i$ for each $i=1, \ldots, m$, and so each $h_i\in \{x_2^4,\, x_1x_2^3,\, x_1^3x_2\}$. Hence, we have the following equality \begin{equation} h_1 \cdots h_m=(x_2^4)^{{\alpha}_1} (x_1x_2^3)^{{\alpha}_2} (x_1^3x_2)^{{\alpha}_3}, \label{14} \end{equation} for some nonnegative integers $ {\alpha}_1, \alpha_2, {\alpha}_3$ with ${\alpha}_1 + \alpha_2 + {\alpha}_3=m.$ In particular, comparing the exponents of $x_2$ and $x_1$ in (\ref{14}) with those of $v$ gives \begin{equation} 4\alpha_1 + 3\alpha_2 + \alpha_3 \leq m+1, \label{15} \end{equation} and \begin{equation} \alpha_2 + 3\alpha_3 \leq 3m-1. \label{16} \end{equation} Since $\sum_{i=1}^3 \alpha_i =m$, we obtain from (\ref{15}) that $m+3\alpha_1 + 2 \alpha_2 \leq m+1$, and so $3\alpha_1 + 2\alpha_2 \leq 1$. We thus have $\alpha_1= \alpha_2=0$, and so $\alpha_3 =m$. But then (\ref{16}) gives $3m \leq 3m-1$, which is a contradiction. Therefore, $v \notin I^m$, so $(I^m:_Rv)$ is a proper ideal containing $(x_1, x_2, x_3)$, and hence $(I^m:_Rv) = (x_1, x_2, x_3)$. This gives rise to $(x_1, x_2, x_3) \in \mathrm{Ass}_R(R/I^m)$ for all $m\geq 2$. Moreover, every associated prime of $I^m$ contains $I$, the only monomial prime ideals containing $I$ are $(x_1,x_2)$, $(x_2,x_3)$ and $(x_1,x_2,x_3)$, and the minimal primes of $I$ are always associated to $I^m$. Hence, $\mathrm{Ass}_R(R/I^m)=\mathrm{Min}(I) \cup \{(x_1, x_2, x_3)\}$ for all $m\geq 2$. This means that $I$ has the persistence property and is a nearly normally torsion-free ideal. On the other hand, using Macaulay2 \cite{GS} shows that $(I^2:_RI) \neq I$. We therefore deduce that $I$ does not satisfy the strong persistence property. \end{example}
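The computation mentioned above can be reproduced with a few lines of Macaulay2. The following sketch is only an illustration (it assumes a recent version of Macaulay2 with its default packages, and the trailing comments merely record the expected output); it verifies the associated primes of $I$ and $I^2$ as well as the failure of $(I^2:_RI)=I$.
\begin{verbatim}
-- a small verification sketch in Macaulay2 (output format may vary by version)
R = QQ[x_1, x_2, x_3];
I = monomialIdeal(x_2^4, x_1*x_2^3, x_1^3*x_2, x_1^4*x_3);
associatedPrimes I      -- expected: the minimal primes (x_1,x_2) and (x_2,x_3)
associatedPrimes (I^2)  -- expected: the minimal primes together with (x_1,x_2,x_3)
(I^2 : I) == I          -- expected: false, so the strong persistence property fails
\end{verbatim}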
\section{Nearly normally torsion-freeness and cover ideals}\label{coverideals}
In this section, our goal is to characterize all finite simple connected graphs such that their cover ideals are nearly normally torsion-free. To do this, one has to recall some results which will be used in the proof of Theorem \ref{Main. 1}. We begin with the following lemma.
\begin{lemma} \cite[Lemma 2.11]{FHV2} \label{FHV2}
Let $\mathcal{H}$ be a finite simple hypergraph on
$V = \{x_1, \ldots , x_n\}$ with cover ideal
$J(\mathcal{H}) \subseteq R=K[x_1, \ldots, x_n]$.
Then
$$P = (x_{i_1} , \ldots , x_{i_r}) \in \mathrm{Ass}(R/J(\mathcal{H})^d) \Leftrightarrow P = (x_{i_1} , \ldots , x_{i_r}) \in \mathrm{Ass}(K[P]/J(\mathcal{H}_P)^d),$$ where $K[P] =K[x_{i_1} , \ldots , x_{i_r}]$, and $\mathcal{H}_P$ is the induced hypergraph of $\mathcal{H}$ on the vertex set $P = \{x_{i_1} , \ldots , x_{i_r}\} \subseteq V$. \end{lemma}
In the following proposition, we recall the associated primes of powers of the cover ideals of odd cycle graphs.
\begin{proposition}\label{Pro. 1} \cite[Proposition 3.6]{NKA}
Suppose that $C_{2n+1}$ is a cycle graph on the vertex set $[2n+1]$, $R=K[x_1, \ldots, x_{2n+1}]$ is a polynomial ring over a field $K$,
and $\mathfrak{m}$ is the unique homogeneous maximal ideal of $R$. Then $$\mathrm{Ass}_R(R/(J(C_{2n+1}))^s)= \mathrm{Ass}_R(R/J(C_{2n+1}))\cup \{\mathfrak{m}\},$$ for all $s\geq 2$. In particular,
$$\mathrm{Ass}^\infty(J(C_{2n+1}))=\{(x_i, x_{i+1})~: ~ i=1, \ldots, 2n\}\cup\{(x_{2n+1}, x_1)\}\cup \{\mathfrak{m}\}.$$
\end{proposition}
The next theorem relates the associated primes of powers of the cover ideal of the union of two finite simple graphs to the associated primes of powers of the cover ideals of each of them, under the condition that they have exactly one common vertex.
\begin{theorem} \cite[Theorem 11]{KNT} \label{IntersectionOne}
Let $G=(V(G), E(G))$ and $H=(V(H), E(H))$ be two finite simple connected graphs such that $|V(G)\cap V(H)|=1$. Let $L=(V(L), E(L))$ be the finite simple graph such that $V(L):=V(G) \cup V(H)$ and $E(L):=E(G) \cup E(H)$.
Then $$\mathrm{Ass}_{R}(R/J(L)^s)=\mathrm{Ass}_{R_1}(R_1/J(G)^s)\cup \mathrm{Ass}_{R_2}(R_2/J(H)^s),$$ for all $s$, where $R_1=K[ x_\alpha : \alpha\in V(G)]$, $R_2=K[ x_\alpha : \alpha\in V(H)]$,
and $R=K[ x_\alpha : \alpha\in V(L)]$. \end{theorem}
The subsequent theorem relates the associated primes of powers of the cover ideal of the union of two finite simple connected graphs to the associated primes of powers of the cover ideals of each of them, under the condition that they have exactly one edge in common.
\begin{theorem} \cite[Theorem 12]{KNT} \label{IntersectionTwo}
Let $G=(V(G), E(G))$ and $H=(V(H), E(H))$ be two finite simple connected graphs such that $|V(G)\cap V(H)|=2$ and
$|E(G)\cap E(H)|=1$. Let $L=(V(L), E(L))$ be the finite simple graph such that $V(L):=V(G) \cup V(H)$ and $E(L):=E(G) \cup E(H)$.
Then $$\mathrm{Ass}_{R}(R/J(L)^s)=\mathrm{Ass}_{R_1}(R_1/J(G)^s)\cup \mathrm{Ass}_{R_2}(R_2/J(H)^s),$$ for all $s$, where $R_1=K[ x_\alpha : \alpha\in V(G)]$, $R_2=K[ x_\alpha : \alpha\in V(H)]$,
and $R=K[ x_\alpha : \alpha\in V(L)]$. \end{theorem}
To understand the proof of Theorem \ref{Main. 1}, we first review some notation from \cite{JS} as follows:
Let $G=(V(G), E(G))$ be a finite simple connected graph. For any $x,y \in V(G)$, an $(x,y)$-path is simply a path between the vertices $x$ and $y$ in $G$. Also, for a vertex subset $W$ of a graph $G$, $\langle W \rangle$ will denote the subgraph of $G$ induced by $W$.
Let $G$ be an {\it almost bipartite} graph, that is to say, $G$ has only one induced odd cycle subgraph, say $C_{2k+1}$. For each $i\in V(C_{2k+1})$, let
$$A_i=\{x\in V(G)|~x\neq i ~\text{and for all~} j\in V(C_{2k+1}), ~i \text{~appears on every~} (x,j)\text{-path}\}.$$ Based on \cite[Page 540]{JS}, it should be noted that the set $A_i$ may be empty for some $i$, and it follows from \cite[Lemma 2.3]{JS} that if $A_i\neq \emptyset$, then the induced subgraph $\langle A_i \rangle$ is bipartite in its own right. Also, for every edge $e=\{i,j\} \in E(C_{2k+1})$, let \begin{align*}
B_e= \{& x\in V(G)\setminus V(C_{2k+1})|~\text{for every}~ m\in V(C_{2k+1})\setminus \{i,j\}, ~\text{there } \\ & \text{is an}~ (x,m)\text{-path in which} ~ i \text{~appears but}~ j~\text{does not, and an } \\ & (x,m)\text{-path in which} ~ j \text{~appears but}~ i~ \text{does not}\}. \end{align*} Once again, according to \cite[Page 541]{JS}, the set $B_e$ may be empty for some $e$, and it can be deduced from \cite[Lemma 2.3]{JS} that if $B_e\neq \emptyset$, then the induced subgraph $\langle B_e \rangle$ is bipartite in its own right. Furthermore, the $A_i$'s and $B_e$'s are all mutually disjoint; moreover, each vertex of $G$ lies in exactly one of the following sets: (i) $V(C_{2k+1})$, (ii) $A_i$ for some $i$, and (iii) $B_e$ for some $e$. Moreover, for each $i,j\in V(C_{2k+1})$ with $i\neq j$, and $e, e'\in E(C_{2k+1})$ with $e\neq e'$, one can easily derive from the definitions that there is no path between any two vertices of $\langle A_i \rangle$ and $\langle B_e \rangle$, or $\langle A_i \rangle$ and $\langle A_j \rangle$, or $\langle B_e \rangle$ and $\langle B_{e'} \rangle$.
As an example, consider the following graph $G$ from \cite{JS}.
\begin{center}
\scalebox{1} { \begin{pspicture}(0,-2.5629687)(6.0628123,2.5629687) \psdots[dotsize=0.12](2.7809374,0.42453125) \psdots[dotsize=0.12](3.7809374,-0.37546876) \psdots[dotsize=0.12](1.7809376,-0.37546876) \psdots[dotsize=0.12](3.1809375,-1.3954687) \psdots[dotsize=0.12](2.3809376,-1.4154687) \psline[linewidth=0.04cm](2.7609375,0.42453125)(3.7209375,-0.35546875) \psline[linewidth=0.04cm](2.7609375,0.42453125)(1.7809376,-0.35546875) \psline[linewidth=0.04cm](3.7409375,-0.37546876)(3.1809375,-1.3354688) \psline[linewidth=0.04cm](1.7809376,-0.39546874)(2.3609376,-1.3754687) \psline[linewidth=0.04cm](2.3809376,-1.3954687)(3.1409376,-1.3954687) \psdots[dotsize=0.12](3.9809375,-1.3954687) \psline[linewidth=0.04cm](3.1809375,-1.4154687)(3.2009375,-1.3954687) \psline[linewidth=0.04cm](3.1809375,-1.3954687)(3.9809375,-1.3954687) \psline[linewidth=0.04cm](3.9609375,-1.4154687)(3.9609375,-1.3954687) \psdots[dotsize=0.12](4.7809377,-1.3754687) \psline[linewidth=0.04cm](3.9809375,-1.3954687)(4.7409377,-1.3954687) \psdots[dotsize=0.12](4.5809374,0.00453125) \psdots[dotsize=0.12](5.4009376,0.40453124) \psdots[dotsize=0.12](3.5609374,0.8045313) \psdots[dotsize=0.12](4.4009376,1.2045312) \psdots[dotsize=0.12](3.2009375,1.2245313) \psdots[dotsize=0.12](2.7409375,2.0445313) \psdots[dotsize=0.12](2.3809376,1.2245313) \psdots[dotsize=0.12](1.4009376,-0.97546875) \psdots[dotsize=0.12](0.6009375,-0.15546875) \psline[linewidth=0.04cm](0.5809375,-0.15546875)(1.7409375,-0.35546875) \psline[linewidth=0.04cm](0.6009375,-0.17546874)(1.3609375,-0.9554688) \psline[linewidth=0.04cm](1.3809375,-0.9554688)(2.3609376,-1.4154687) \psline[linewidth=0.04cm](2.7409375,2.0645313)(2.3809376,1.2445313) \psline[linewidth=0.04cm](2.7409375,2.0245314)(3.1609375,1.2645313) \psline[linewidth=0.04cm](2.3809376,1.2245313)(2.7609375,0.44453126) \psline[linewidth=0.04cm](3.2009375,1.2245313)(2.7809374,0.44453126) \psline[linewidth=0.04cm](2.7609375,0.44453126)(3.5409374,0.8045313) \psline[linewidth=0.04cm](3.5609374,0.82453126)(4.4009376,1.2045312) \psline[linewidth=0.04cm](4.3809376,1.2245313)(5.3809376,0.40453124) \psline[linewidth=0.04cm](3.7809374,-0.35546875)(4.5609374,0.00453125) \psline[linewidth=0.04cm](4.5609374,0.04453125)(4.5809374,0.08453125) \psline[linewidth=0.04cm](4.5809374,0.02453125)(5.4009376,0.40453124) \psline[linewidth=0.04cm](3.5609374,0.8045313)(4.5609374,0.04453125) \usefont{T1}{ptm}{m}{n} \rput(2.4723437,0.47453126){$1$} \usefont{T1}{ptm}{m}{n} \rput(1.7723438,-0.08546875){$2$} \usefont{T1}{ptm}{m}{n} \rput(2.3523438,-1.7054688){$3$} \usefont{T1}{ptm}{m}{n} \rput(3.1923437,-1.7254688){$4$} \usefont{T1}{ptm}{m}{n} \rput(3.9123437,-0.62546873){$5$} \usefont{T1}{ptm}{m}{n} \rput(3.4723437,1.4145312){$6$} \usefont{T1}{ptm}{m}{n} \rput(2.7323437,2.3745313){$7$} \usefont{T1}{ptm}{m}{n} \rput(2.0523438,1.3945312){$8$} \usefont{T1}{ptm}{m}{n} \rput(3.9523437,-1.7254688){$9$} \usefont{T1}{ptm}{m}{n} \rput(4.802344,-1.7054688){$10$} \usefont{T1}{ptm}{m}{n} \rput(3.4823437,1.0345312){$11$} \usefont{T1}{ptm}{m}{n} \rput(4.662344,-0.24546875){$12$} \usefont{T1}{ptm}{m}{n} \rput(4.382344,1.4745313){$13$} \usefont{T1}{ptm}{m}{n} \rput(5.662344,0.43453124){$14$} \usefont{T1}{ptm}{m}{n} \rput(0.32234374,-0.08546875){$15$} \usefont{T1}{ptm}{m}{n} \rput(1.4623437,-0.72546875){$16$} \usefont{T1}{ptm}{m}{n} \rput(2.7523437,-2.3854687){$G$} \psdots[dotsize=0.12](0.9809375,-1.3954687) \psline[linewidth=0.04cm](0.5809375,-0.17546874)(0.9609375,-1.3554688) \psline[linewidth=0.04cm](0.9809375,-1.3954687)(2.3609376,-1.4154687) 
\usefont{T1}{ptm}{m}{n} \rput(0.96234375,-1.7054688){$17$} \end{pspicture} }
\end{center}
Direct computations show that $A_1=\{6, 7, 8\}, ~A_4=\{9, 10\}, ~A_2=A_3=A_5=\emptyset,$
and
$B_{\{1,5\}}=\{11, 12, 13, 14\}, ~
B_{\{2,3\}}=\{15, 16, 17\},~ B_{\{1,2\}}=B_{\{3,4\}}=B_{\{4,5\}}=\emptyset.$
The following theorem is the first main result of this section.
\begin{theorem} \label{Main. 1} Assume that $G=(V(G), E(G))$ is a finite simple connected graph, and $J(G)$ denotes the cover ideal of $G$. Then $J(G)$ is nearly normally torsion-free if and only if $G$ is either a bipartite graph or an almost bipartite graph. \end{theorem}
\begin{proof} To show the forward implication, let $J(G)$ be nearly normally torsion-free. Suppose, on the contrary, that $G$ is neither bipartite nor almost bipartite. This implies that $G$ has at least two induced odd cycle subgraphs, say $C$ and $C'$. It follows from Proposition \ref{Pro. 1} that $\mathfrak{p}=(x_j~:~ j \in V(C))\in \mathrm{Ass}(J(C)^s)$ and $\mathfrak{p}'=(x_j~:~ j \in V(C'))\in \mathrm{Ass}(J(C')^s)$ for all $s\geq 2$. Since $G_{\mathfrak{p}}=C$ and $G_{\mathfrak{p}'}=C'$, we can deduce from Lemma \ref{FHV2} that $\mathfrak{p}, \mathfrak{p}' \in \mathrm{Ass}_R(R/J(G)^s)$ for all $s\geq 2$. Thus $J(G)^s$ has at least two embedded associated primes for every $s\geq 2$, which contradicts the assumption that $J(G)$ is nearly normally torsion-free.
Conversely, if $G$ is bipartite, then on account of \cite[Corollary 2.6]{GRV}, one has $J(G)$ is normally torsion-free, and so $J(G)$ is nearly normally torsion-free. Next, we assume that $G$ is an almost bipartite graph, and let $C$ be its unique induced odd cycle subgraph. Put $\mathfrak{p}=(x_j~:~ j\in V(C))$. We claim that $\mathrm{Ass}(J(G)^s)=\mathrm{Min}(J(G)) \cup \{\mathfrak{p}\}$ for all $s\geq 2$. Fix $s\geq 2$. For any $i\in V(C)$ and $e\in E(C)$, assume that $A_i$ and $B_e$ are the vertex subsets of $G$ as defined in the discussion above. Without loss of generality, suppose that $A_i \neq \emptyset$ for all $i=1, \ldots, r$ and $B_{e_j}\neq \emptyset$ for all $j=1, \ldots, t$. Set $H_i:=\langle A_i \cup \{i\}\rangle$ for all
$i=1, \ldots, r$, and $L_j:=\langle B_{e_j} \cup \{\alpha_j, \beta_j\}\rangle$ for all $j=1, \ldots, t$, where $e_j=\{\alpha_j, \beta_j\}$. Accordingly, we get $|V(C) \cap V(H_i)|=1$ for all $i=1, \ldots, r$, $|V(C) \cap V(L_j)|=2$
and $|E(C) \cap E(L_j)|=1$ for all $j=1, \ldots, t$. On the other hand, it should be noted that all $H_i$ and $L_j$ are bipartite, and so \cite[Corollary 2.6]{GRV} yields that $\mathrm{Ass}(J(H_i)^s)=\mathrm{Min}(J(H_i))$ and $\mathrm{Ass}(J(L_j)^s)=\mathrm{Min}(J(L_j))$ for all $i=1, \ldots, r$ and $j=1, \ldots, t$. Now, repeated applications of Theorems \ref{IntersectionOne} and \ref{IntersectionTwo} give that $\mathrm{Ass}(J(G)^s)=\mathrm{Ass}(J(C)^s) \cup \bigcup_{i=1}^{r}\mathrm{Min}(J(H_i)) \cup \bigcup_{j=1}^{t}\mathrm{Min}(J(L_j))$. By virtue of Proposition \ref{Pro. 1}, one obtains $\mathrm{Ass}(J(C)^s)=\{\mathfrak{p}\} \cup \mathrm{Min}(J(C))$. We thus have $\mathrm{Ass}(J(G)^s)= \mathrm{Min}(J(G)) \cup \{\mathfrak{p}\}$, as claimed. This shows that $J(G)$ is nearly normally torsion-free, and the proof is done. \end{proof}
Now, we focus on cover ideals of hypergraphs. Before stating Theorem \ref{NTF1}, we recall some definitions which will be needed for its statement and proof.
\begin{definition} (see \cite[Definition 2.7]{FHV2})
Let $\mathcal{H}= (V_{\mathcal{H}} , E_{\mathcal{H}})$ be a hypergraph. A {\em $d$-coloring} of $\mathcal{H}$ is any partition of $V_{\mathcal{H}} = C_1\cup \cdots \cup C_d$ into $d$ disjoint sets such that for every $E \in E_{\mathcal{H}}$, we have $E\nsubseteq C_i$ for all $i = 1, \ldots ,d$. (In the case of a graph $G$, this simply means that any two vertices connected by an edge receive different colors.) The $C_i$'s are called the color classes of $\mathcal{H}$. Each color class $C_i$ is an {\em independent set}, meaning that $C_i$ does not contain any edge of the hypergraph. The chromatic number of $\mathcal{H}$, denoted by $\chi(\mathcal{H})$, is the minimal $d$ such that $\mathcal{H}$ has a $d$-coloring. \end{definition} \begin{definition} (see \cite[Definition 2.8]{FHV2})
A hypergraph $\mathcal{H}$ is called {\em critically $d$-chromatic} if $\chi(\mathcal{H})= d$, but for every vertex $x\in V_{\mathcal{H}}$, $\chi(\mathcal{H}\setminus \{x\})< d$, where $\mathcal{H}\setminus \{x\}$ denotes the hypergraph $\mathcal{H}$ with $x$ and all edges containing $x$ removed. \end{definition}
\begin{definition} (see \cite[Definition 4.2]{FHV2})
Let $\mathcal{H}= (V_{\mathcal{H}} , E_{\mathcal{H}})$ be a hypergraph with $V_{\mathcal{H}}=\{x_1, \ldots, x_n\}$. For each $s$, the {\em $s$-th expansion} of $\mathcal{H}$ is defined to be the hypergraph obtained by replacing each vertex $x_i \in V_{\mathcal{H}}$ by a collection $\{x_{ij}~|~ j=1, \ldots, s\}$, and replacing $E_{\mathcal{H}}$ by the edge set that consists of edges
$\{x_{i_1l_1}, \ldots, x_{i_rl_r}\}$ whenever
$\{x_{i_1}, \ldots, x_{i_r}\}\in E_{\mathcal{H}}$ and edges
$\{x_{il}, x_{ik}\}$ for $l\neq k$. We denote this hypergraph by $\mathcal{H}^s$. The new variables $x_{ij}$ are called the shadows of $x_i$. The process of setting $x_{il}$ equal to $x_i$ for all $i$ and $l$ is called the {\em depolarization}.
\end{definition}
\begin{theorem} \label{NTF1} Assume that $\mathcal{G}=(V(\mathcal{G}), E(\mathcal{G}))$ and $\mathcal{H}=(V(\mathcal{H}), E(\mathcal{H}))$ are
two finite simple hypergraphs such that $V(\mathcal{H})=V(\mathcal{G})\cup \{w\}$ with $w\notin V(\mathcal{G})$, and $E(\mathcal{H})=E(\mathcal{G}) \cup \{\{v,w\}\}$ for some vertex $v\in V(\mathcal{G})$. Then $$\mathrm{Ass}_{R'}(R'/J(\mathcal{H})^s)=\mathrm{Ass}_{R}(R/J(\mathcal{G})^s)\cup \{(x_v, x_w)\},$$
for all $s$, where $R=K[ x_\alpha : \alpha\in V(\mathcal{G})]$ and $R'=K[ x_\alpha : \alpha\in V(\mathcal{H})]$. \end{theorem} \begin{proof} For convenience of notation, set $I:=J(\mathcal{G})$ and $J:=J(\mathcal{H})$. We first prove that $\mathrm{Ass}_{R}(R/I^s)\cup \{(x_v, x_w)\}\subseteq \mathrm{Ass}_{R'}(R'/J^s)$ for all $s$. Fix $s\geq 1$, and assume that $\mathfrak{p}=(x_{i_1}, \ldots, x_{i_r})$ is an arbitrary element of $\mathrm{Ass}_R(R/I^s)$. According to \cite[Lemma 2.11]{FHV2}, we get $\mathfrak{p}\in \mathrm{Ass}(K[\mathfrak{p}]/J(\mathcal{G}_\mathfrak{p})^s)$, where $K[\mathfrak{p}]=K[x_{i_1}, \ldots, x_{i_r}]$ and $\mathcal{G}_\mathfrak{p}$ is the induced subhypergraph of $\mathcal{G}$ on the vertex set $\{i_1, \ldots, i_r\}\subseteq V(\mathcal{G})$. Since $\mathcal{G}_\mathfrak{p}= \mathcal{H}_\mathfrak{p}$, we have $\mathfrak{p}\in \mathrm{Ass}(K[\mathfrak{p}]/J(\mathcal{H}_\mathfrak{p})^s)$. This yields that $\mathfrak{p}\in \mathrm{Ass}_{R'}(R'/J^s)$. Since $\{v,w\}$ is an edge of $\mathcal{H}$, we also have $(x_v, x_w)\in \mathrm{Min}(J)\subseteq \mathrm{Ass}_{R'}(R'/J^s)$; hence $\mathrm{Ass}_{R}(R/I^s)\cup \{(x_v, x_w)\}\subseteq \mathrm{Ass}_{R'}(R'/J^s)$. To complete the proof, it is enough to show the reverse inclusion. Assume that $\mathfrak{p}=(x_{i_1}, \ldots, x_{i_r})$ is
an arbitrary element of $\mathrm{Ass}_{R'}(R'/J^s)$ with
$\{i_1, \ldots, i_r\}\subseteq V(\mathcal{H})$. If $\{i_1, \ldots, i_r\}\subseteq V(\mathcal{G})$, then \cite[Lemma 2.11]{FHV2} implies that $\mathfrak{p}\in \mathrm{Ass}_R(R/I^s)$, and the proof is done. Thus, let $w\in \{i_1, \ldots, i_r\}$.
It follows from \cite[Corollary 4.5]{FHV2} that the associated primes of $J(\mathcal{H})^s$ correspond to critically $(s+1)$-chromatic subhypergraphs in the $s$-th expansion of $\mathcal{H}$. This means that one can take the induced subhypergraph on the vertex set $\{i_1, \ldots, i_r\}$, and
then form the $s$-th expansion on this induced subhypergraph, and within this new hypergraph find a critical $(s+1)$-chromatic hypergraph. Notice that since this expansion cannot have any critical chromatic subgraphs, this implies that $\mathcal{H}_{\mathfrak{p}}$ must be connected. Hence,
$v\in \{i_1, \ldots, i_r\}$. Without loss of generality, one may assume that $i_1=v$ and $i_2=w$.
Since $w$ is only connected to $v$ in the hypergraph $\mathcal{H}$, and because this induced subhypergraph is critical, removing the vertex $w$ yields a hypergraph that can be colored with at most $s$ colors. This implies that $w$ has to be adjacent to at least $s$ vertices. But the only vertices $w$ is adjacent to are the shadows of $w$ and the shadows of $v$, and these vertices form a clique. Accordingly,
$w$ and its neighbors will form a clique of size $s+1$. Since
a clique is a critical graph, it follows that we do not need any element of $\{i_3, \ldots, i_r\}$ or their shadows when forming the critical $(s+1)$-chromatic hypergraph. Consequently, we obtain $\mathfrak{p}=(x_v, x_w)$, as required. \end{proof}
Before stating the next result, it should be noted that one can always view a square-free monomial ideal as the cover ideal of a
simple hypergraph. In fact, assume that $I$ is a square-free monomial ideal, and $I^\vee$ denotes its Alexander dual.
Also, let $\mathcal{H}$ denote the hypergraph corresponding to $I^\vee$. Then, we have $I=J(\mathcal{H})$, where
$J(\mathcal{H})$ denotes the cover ideal of the hypergraph $\mathcal{H}$. Consult \cite{FHM} for further details and information.
For instance, consider the following square-free monomial ideal in the polynomial ring $R=K[x_1,x_2,x_3,x_4,x_5]$ over a field $K$,
$$I=(x_1x_2x_3, x_2x_3x_4, x_3x_4x_5, x_4x_5x_1, x_5x_1x_2).$$
Then the Alexander dual of $I$ is given by
\begin{align*}
I^\vee = &(x_1, x_2, x_3) \cap (x_2, x_3, x_4) \cap (x_3, x_4, x_5) \cap (x_4, x_5, x_1) \cap (x_5, x_1, x_2)\\
= &(x_3x_5, x_2x_5, x_2x_4, x_1x_4, x_1x_3).
\end{align*}
Now, define the hypergraph $\mathcal{H}=(\mathcal{X}, \mathcal{E})$ with $\mathcal{X}=\{x_1,x_2,x_3,x_4,x_5\}$ and
$$\mathcal{E}=\{\{x_3,x_5\}, \{x_2,x_5\}, \{x_2, x_4\}, \{x_1, x_4\}, \{x_1, x_3\}\}.$$ Then the edge ideal and cover ideal of the hypergraph $\mathcal{H}$ are given by $$I(\mathcal{H})=(x_3x_5, x_2x_5, x_2x_4, x_1x_4, x_1x_3),$$ and $$J(\mathcal{H})= I(\mathcal{H})^\vee =(x_3,x_5) \cap (x_2,x_5) \cap (x_2, x_4) \cap (x_1, x_4) \cap (x_1, x_3).$$
It is easy to see that $J(\mathcal{H})=I$, as claimed.
We are in a position to provide the second main result of this section in the following corollary.
\begin{corollary} \label{Cor. NFT1}
Let $I$ be a normally torsion-free square-free monomial ideal in $ R=K[x_{1},\ldots ,x_{n}]$ with $\mathcal{G}(I) \subset R$. Then the ideal $L:=IS\cap (x_{n},x_{n+1})\subset S=R[x_{n+1}]$ satisfies the following statements: \begin{itemize} \item[(i)] $L$ is normally torsion-free. \item[(ii)] $L$ is nearly normally torsion-free. \item[(iii)] $L$ is normal. \item[(iv)] $L$ has the strong persistence property. \item[(v)] $L$ has the persistence property. \item[(vi)] $L$ has the symbolic strong persistence property.
\end{itemize}
\end{corollary}
\begin{proof} (i) In light of the argument stated above, we may assume that $I=J(\mathcal{H})$ such that the hypergraph
$\mathcal{H}$ corresponds to $I^\vee$, where
$I^\vee$ denotes the Alexander dual of $I$. Fix $t\geq 1$.
It follows now from Theorem \ref{NTF1} that
$$\mathrm{Ass}_{S}(S/L^t)=\mathrm{Ass}_{R}(R/J(\mathcal{H})^t)\cup \{(x_n, x_{n+1})\}.$$ Since $I$ is normally torsion-free, one can deduce that $\mathrm{Ass}_{R}(R/J(\mathcal{H})^t)=\mathrm{Min}(J(\mathcal{H}))$, and so $\mathrm{Ass}_{S}(S/L^t)=\mathrm{Min}(J(\mathcal{H}))\cup \{(x_n, x_{n+1})\}.$ Therefore, $\mathrm{Ass}_{S}(S/L^t)=\mathrm{Min}(L)$. This means that $L$ is normally torsion-free, as desired. \par (ii) It is obvious from the definition of nearly normally torsion-freeness and (i). \par (iii) By virtue of \cite[Theorem 1.4.6]{HH1}, every normally torsion-free square-free monomial ideal is normal. Now, the assertion can be deduced from (i). \par (iv) Due to \cite[Theorem 6.2]{RNA}, every normal monomial ideal has the strong persistence property, and hence the claim follows readily from (iii). \par (v) It is shown in \cite{HQ} that the strong persistence property implies the persistence property. Therefore, one can derive the assertion from (iv). \par (vi) Thanks to \cite[Theorem 11]{RT}, the strong persistence property implies the symbolic strong persistence property, and thus the claim is an immediate consequence of (iv).
\end{proof}
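As a small sanity check of Corollary \ref{Cor. NFT1}(i), the following Macaulay2 sketch may be helpful; it assumes a recent version of Macaulay2 with its default packages, it starts from the normally torsion-free edge ideal of a path (a bipartite graph), and the trailing comments only record the behavior expected from the corollary for the first few powers of $L=IS\cap(x_4,x_5)$.
\begin{verbatim}
-- hedged sketch: I is the edge ideal of the path 1-2-3-4 (bipartite, hence
-- normally torsion-free), and L plays the role of IS \cap (x_4,x_5)
S = QQ[x_1..x_5];
I = monomialIdeal(x_1*x_2, x_2*x_3, x_3*x_4);
L = intersect(I, monomialIdeal(x_4, x_5));
associatedPrimes L      -- L is square-free, so this list equals Min(L)
associatedPrimes (L^2)  -- expected: the same list as above
associatedPrimes (L^3)  -- expected: the same list as above
\end{verbatim}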
\section{The case of $t$-spread principal Borel ideals}\label{borel}
In this section, we focus on normally torsion-free and nearly normally torsion-free $t$-spread principal Borel ideals. Let $R=K[x_1, \ldots, x_n]$ be a polynomial ring over a field $K$. Let $t$ be a positive integer. A monomial $x_{i_1} x_{i_2} \cdots x_{i_d} \in R$ with $i_1 \leq i_2 \leq \cdots \leq i_d$ is called {\it $t$-spread} if $i_j -i_{j-1} \geq t$ for all $j=2, \ldots, d$. A monomial ideal in $R$ is called a {\it $t$-spread monomial ideal} if it is generated by $t$-spread monomials. A 0-spread monomial ideal is just an ordinary monomial ideal, while a 1-spread monomial ideal is just a square-free monomial ideal. In the following text, we will assume that $t \geq 1$.
Let $I\subset R$ be a $t$-spread monomial ideal. Then $I$ is called a {\it $t$-spread strongly stable ideal} if for all $t$-spread monomials $u\in \mathcal{G}(I)$, all $j\in \mathrm{supp}(u)$
and all $1\leq i <j$ such that $x_i(u/x_j)$ is a $t$-spread monomial, it follows that $x_i(u/x_j)\in I$.
\begin{definition}\label{boreldef}
A monomial ideal $I\subset R$ is called a {\it $t$-spread principal Borel} ideal if there exists a $t$-spread monomial $u\in \mathcal{G}(I)$ such that $I$ is the smallest $t$-spread strongly stable ideal which contains $u$. We write $I=B_t(u)$. It should be noted that for a $t$-spread monomial $u=x_{i_1} x_{i_2} \cdots x_{i_d} \in R$, we have $x_{j_1} x_{j_2} \cdots x_{j_d}\in \mathcal{G}(B_t(u))$ if and only if
$j_1\leq i_1, \ldots, j_d \leq i_d$ and $j_k - j_{k-1} \geq t$ for $k\in \{2, \ldots, d\}$. We refer the reader to \cite{EHQ} for more information.
\end{definition}
To see an application of Corollary \ref{Cor. 1}, we re-prove Proposition 4.4 from \cite{Claudia}.
\begin{proposition} \label{App. 2} Let $u=x_ix_n$ be a $t$-spread monomial in $R=K[x_1, \ldots, x_n]$ with $i\geq t$. Then $I=B_t(u)$ is nearly normally torsion-free. \end{proposition}
\begin{proof} We first assume that $i=t$. In this case, it follows from the definition that $$I=(x_1x_{t+1}, x_1x_{t+2}, \ldots, x_1x_{n}, x_2x_{t+2}, \ldots, x_2x_n, \ldots, x_tx_{2t}, x_tx_{2t+1}, \ldots, x_tx_n).$$ It is routine to check that $I$ is the edge ideal of a bipartite graph with vertex partition $ \{1, 2, \ldots, t\} \cup \{t+1, t+2, \ldots, n\}$. In addition, \cite[Corollary 14.3.15]{V1} implies that $I$ is normally torsion-free. Now, let $i>t$. One can conclude from the definition that the minimal generators of $I$ are as follows: $$x_i x_n, x_i x_{n-1}, \ldots, x_i x_{i+t}, x_{i-1} x_n, x_{i-1} x_{n-1}, \ldots, x_{i-1}x_{i+t-1}, $$ $$ x_{i-2} x_n, x_{i-2} x_{n-1}, \ldots, x_{i-2} x_{i+t-2}, \ldots, x_1x_n, x_1 x_{n-1}, \ldots, x_1 x_{t+1}.$$ Our strategy is to use Corollary \ref{Cor. 1}. To do this, one has to show that $I(\mathfrak{m}\setminus \{x_z\})$ is normally torsion-free for all $z=1, \ldots, n$, where $\mathfrak{m}=(x_1, \ldots, x_n)$. First, let $1\leq z \leq i$. Direct computation gives that \begin{align*} I(\mathfrak{m}\setminus \{x_z\})&= (x_{\alpha} x_{\beta}~:~ \alpha = \max(1, z-t+1), \ldots, z-1, ~\beta = t+1, \ldots, z+t-1, ~\beta - \alpha \geq t) \\ & + (x_1, \ldots, x_{z-t}) + (x_n, x_{n-1}, \ldots, x_{z+t}), \end{align*} where the summand $(x_1, \ldots, x_{z-t})$ is understood to be zero when $z\leq t$ (compare with the example below). One can easily see that $(x_{\alpha} x_{\beta}~:~ \alpha = \max(1, z-t+1), \ldots, z-1, ~\beta = t+1, \ldots, z+t-1, ~\beta - \alpha \geq t)$ is the edge ideal of a bipartite graph with vertex partition $$\{\max(1, z-t+1), \ldots, z-1\} \cup \{z+1, \ldots, z+t-1\},$$ and so is normally torsion-free. Also, we know that every monomial prime ideal is normally torsion-free. Moreover, since the monomial prime ideal $(x_1, \ldots, x_{z-t}, x_{z+t}, \ldots, x_n)$ and the above edge ideal do not have any common variables, we conclude from \cite[Theorem 2.5]{SN} that $I(\mathfrak{m}\setminus \{x_z\})$ is normally torsion-free as well. Now, let $i+1 \leq z \leq n$. It is routine to check that \begin{align*} I(\mathfrak{m}\setminus \{x_z\})&= (x_{\alpha} x_{\beta}~:~ \alpha = z-t+1, \ldots, i, ~\beta = z+1, \ldots, n, ~\beta - \alpha \geq t) \\ & + (x_1, x_2, \ldots, x_{\min(i,\, z-t)}). \end{align*} It is not hard to see that $(x_{\alpha} x_{\beta}~:~ \alpha = z-t+1, \ldots, i, ~\beta = z+1, \ldots, n, ~\beta - \alpha \geq t)$ is the edge ideal of a bipartite graph with vertex partition $$\{z-t+1, \ldots, i\} \cup \{z+1, \ldots, n\},$$ and hence is normally torsion-free. In addition, the monomial prime ideal $(x_1, x_2, \ldots, x_{\min(i,\, z-t)})$ is normally torsion-free. Furthermore, since these two ideals have no common variables, we derive from \cite[Theorem 2.5]{SN} that $I(\mathfrak{m}\setminus \{x_z\})$ is normally torsion-free too. By Corollary \ref{Cor. 1}, we conclude that $I$ is nearly normally torsion-free, as claimed. \end{proof}
We illustrate the statement of Proposition~\ref{App. 2} through the following example.
\begin{example} Let $u= x_4x_7$, $t=3$, and $I=B_3(u)$. If $v=x_ax_b \in \mathcal{G}(B_3(u))$, then $a \in [1,4]$ and $b \in [4,7]$. The complete list of minimal generators of $B_3(x_4x_{7})$ is given below: \begin{center} \begin{tabular}{ c c c c} $x_1x_4$\\ $x_1x_5$\quad & \quad $x_2x_5$\\ $x_1x_6$\quad & \quad $x_2x_6$\quad & \quad $x_3x_6$\\ $x_1x_7$\quad & \quad $x_2x_7$\quad & \quad $x_3x_7$\quad & \quad $x_4x_7$ \end{tabular} \end{center} The monomial localizations of $I$ at $\mathfrak{m}\setminus \{x_k\}$, for each $k=1, \ldots, 7$, are listed below:
\begin{center} \begin{tabular}{l} $I(\mathfrak{m}\setminus \{x_1\})=(x_4,x_5,x_6,x_7)$;\\ $I(\mathfrak{m}\setminus \{x_2\})=(x_5,x_6,x_7)+(x_1x_4)$;\\ $I(\mathfrak{m}\setminus \{x_3\})=(x_6,x_7)+(x_1x_4,x_1x_5,x_2x_5) $;\\ $I(\mathfrak{m}\setminus \{x_4\})=(x_7)+(x_1,x_2x_5,x_2x_6,x_3x_6)$;\\
$I(\mathfrak{m}\setminus \{x_5\})=(x_1,x_2)+(x_3x_6,x_3x_7,x_4x_7)$;\\ $I(\mathfrak{m}\setminus \{x_6\})=(x_1,x_2,x_3)+(x_4x_7)$;\\ $I(\mathfrak{m}\setminus \{x_7\})=(x_1,x_2,x_3,x_4)$. \end{tabular} \end{center}
Therefore, for each $k=1, \ldots, 7$, the monomial localization $I(\mathfrak{m}\setminus \{x_k\})$ is either a monomial prime ideal, or sum of a monomial prime ideal and an edge ideal of a bipartite graph, which have no common variables. In each of the above cases, $I(\mathfrak{m}\setminus \{x_k\})$ is a normally torsion-free ideal. Hence, it follows from Corollary~\ref{Cor. 1} that $I$ is nearly normally torsion-free. \end{example}
We further investigate the class of ideals of type $B_t(u)$ in the context of normally torsion-freeness and nearly normally torsion-freeness. Given $a,b \in \mathbb{N}$, we set $[a,b]:=\{c \in \mathbb{N}: a \leq c \leq b\}$. Let $u:=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ be a $t$-spread monomial and \begin{equation}\label{ak} A_k:=[(k-1)t+1,i_k], \text{ for each } k=1, \ldots, d. \end{equation}
We can describe the minimal generators of $I=B_t(u)$ in the following way: if $x_{j_1}\cdots x_{j_d} \in \mathcal{G}(I)$, then for each $k=1, \ldots ,d$, we have $j_k \in A_k$. For example, a complete list of minimal generators of $B_2(x_3x_5x_{7})$ is given below: \begin{center} \begin{tabular}{ c c c } $x_1x_3x_5$\\ $x_1x_3x_6$\\ $x_1x_3x_7$\\
$x_1x_4x_6$\quad & \quad $x_2x_4x_6$\\ $x_1x_4x_7$\quad & \quad $x_2x_4x_7$\\ $x_1x_5x_7$\quad & \quad $x_2x_5x_7$\quad & \quad $x_3x_5x_7$ \end{tabular} \end{center} Here, $A_1=[1,3]$, $A_2=[3,5]$, and $A_3=[5,7]$.
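The description above can also be checked mechanically. The following Macaulay2 sketch is only an illustration (it assumes a recent version of Macaulay2; the names \texttt{t}, \texttt{i}, \texttt{tuples} and \texttt{spread} are ours): it enumerates the tuples $(j_1,j_2,j_3)$ with $j_k\leq i_k$ and $j_k-j_{k-1}\geq t$, and is expected to recover the ten generators of $B_2(x_3x_5x_7)$ listed above.
\begin{verbatim}
-- hedged sketch: enumerate the generators of B_2(x_3 x_5 x_7)
R = QQ[x_1..x_7];
t = 2; i = {3, 5, 7}; d = #i;
tuples = toList ((d:1) .. toSequence i);          -- all (j_1,j_2,j_3) with 1 <= j_k <= i_k
spread = select(tuples, j -> all(toList(1..(d-1)), k -> j#k - j#(k-1) >= t));
apply(spread, j -> product(toList j, k -> x_k))   -- expected: the ten monomials above
\end{verbatim}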
If $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ with $i_{d-1} \leq(d-1)t$, then $i_k \leq kt$ for each $k=1,\ldots, d-2$, because $u$ is a $t$-spread monomial. In this case, $A_i\cap A_j = \emptyset$ for all $i\neq j$. For example, a complete list of minimal generators of $B_3(x_3x_6x_9)$ is given below: \begin{center} \begin{tabular}{ c c c } $x_1x_4x_7$\\ $x_1x_4x_8$\\ $x_1x_4x_9$\\ $x_1x_5x_8$\quad & \quad $x_2x_5x_8$\\ $x_1x_5x_9$\quad & \quad $x_2x_5x_9$\\ $x_1x_6x_9$\quad & \quad $x_2x_6x_9$\quad & \quad $x_3x_6x_9$
\end{tabular} \end{center}
Here, $A_1=[1,3]$, $A_2=[4,6]$, and $A_3=[7,9]$.
\begin{remark}\label{dpartiteduniform}
If $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ with $i_{d-1} \leq(d-1)t$, then $B_t(u)$ is an edge ideal of a {\em $d$-uniform $d$-partite} hypergraph whose vertex partition is given by $A_1, \ldots, A_d$. Recall that $\mathcal{H}$ is a $d$-partite hypergraph if its vertex set $V_{\mathcal{H}}$ is a disjoint union of sets $V_1, \ldots, V_d$ such that if $E$ is an edge of $\mathcal{H}$, then $|E \cap V_i | \leq 1$. Moreover, a hypergraph is $d$-uniform if each edge of $\mathcal{H}$ has size $d$. In particular, if $\mathcal{H}$ is a $d$-uniform $d$-partite hypergraph with vertex partition $V_1, \ldots, V_d$, then $|E|=d$ and $|E \cap V_i | =1$ for each $E \in E_{\mathcal{H}}$. \end{remark}
Let $\mathcal{A}_t$ denote the family of all ideals of the form $B_t(u)$ such that $B_t(u)$ is an edge ideal of a $d$-partite $d$-uniform hypergraph, for some $d$. From the above discussion it follows that $B_t(u) \in \mathcal{A}_t$ if and only if there exists some $d$ such that $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ with $i_{d-1} \leq(d-1)t$.
Let $\mathcal{H}$ be a hypergraph. A sequence $v_1, E_1, v_2, E_2, \ldots, v_s, E_s, v_{s+1}=v_1$ of distinct edges and vertices of $\mathcal{H}$ is called a cycle in $\mathcal{H}$ if $v_i,v_{i+1} \in E_i$ for all $i=1, \ldots, s$. Such a cycle is called {\em special} if no edge contains more than two vertices of the cycle. After translating the language of simplicial complexes to hypergraphs, it can be seen from \cite[Theorem 10.3.16]{HH1} that if a hypergraph does not have any special odd cycles, then its edge ideal is normally torsion-free. The following example shows that hypergraphs with edge ideals in $\mathcal{A}_t$ may contain special odd cycles.
\begin{example}\label{oddcycle} Let $\mathcal{H}$ be the hypergraph whose edge ideal is $I=B_3(x_3x_6x_9x_{12})$. Then the following sequence of the vertices and the edges of $\mathcal{H}$ gives a special odd cycle. \[ 1, \{1,4,9,12\}, 9, \{2,5,9,12\}, 5, \{1,5,8,11\}, 1 \] \end{example}
For any monomial ideal $I \subset R=K[x_1, \ldots,x_n]$, the {\it deletion} of $I$ at $x_i$ with $1\leq i \leq n$, denoted by $I\setminus x_i$, is obtained by setting $x_i=0$ in every minimal generator of $I$, that is, we delete every minimal generator $u\in \mathcal{G}(I)$ with $x_i\mid u$. For a monomial $u \in R$, we denote the support of $u$ by $\mathrm{supp}(u)=\{x_i : x_i|u \}$. Moreover, for a monomial ideal $I\subset R$, we set $\mathrm{supp}(I)=\cup_{u \in \mathcal{G}(I)} \mathrm{supp}(u)$. Then we observe the following:
\begin{remark}\label{rem1} Let $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ with $i_{d-1} \leq(d-1)t$, and $I=B_t(u) \subset R$. Note that we have $I \in \mathcal{A}_t$. \begin{enumerate} \item If $i_k < kt$, for some $1 \leq k \leq d-1$, then the variables $x_{i_k+1}, \ldots, x_{kt}$ do not appear in $\mathrm{supp}(I)$. For example, a complete list of minimal generators of $B_3(x_2x_5x_9)$ is given below: \begin{center} \begin{tabular}{ c c c } $x_1x_4x_7$\\ $x_1x_4x_8$\\ $x_1x_4x_9$\\ $x_1x_5x_8$\quad & \quad $x_2x_5x_8$\\ $x_1x_5x_9$\quad & \quad $x_2x_5x_9$\\ \end{tabular} \end{center} Here, $A_1=[1,2]$, $A_2=[4,5]$, and $A_3=[7,9]$, and $x_3, x_6 \notin \mathrm{supp}(B_3(x_2x_5x_9))$.
\item If $i_d<n$, then $x_{{i_d}+1}, \ldots, x_n \notin \mathrm{supp} (I)$. Hence we can always assume that $i_d=n$.
\item If $i_k > (k-1)t+1$, that is, $|A_k| >1$ for some $k = 1, \ldots, d$, then $I\setminus x_{i_k}=B_t(v)$ where $v$ is chosen with the following property: $v=x_{j_1}\ldots x_{j_d} \in \mathcal{G}(I\setminus x_{i_k})$ and for any other $w=x_{l_1}\ldots x_{l_d} \in \mathcal{G}(I\setminus x_{i_k})$, we have $l_k \leq j_k$. For example, $B_3(x_2x_5x_9)\setminus x_5=B_3(x_1x_4x_9)$. Therefore, we conclude that $I\setminus x_{i_k} \in \mathcal{A}_t$, for all $x_{i_k}$, with $k = 1, \ldots, d$.
\item The definition of $A_k$ immediately implies that $|A_k|=1$ if and only if $i_k=(k-1)t+1$. Moreover, if $i_k=(k-1)t+1$ for some $1 \leq k \leq d$, then $i_j=(j-1)t+1$ for all $j \leq k$, because $u$ is a $t$-spread monomial. In this case $x_1x_{t+1}\cdots x_{(k-1)t+1}$ divides every minimal generator of $B_t(u)$ and hence $B_t(u)=x_1x_{t+1}\cdots x_{(k-1)t+1} J$, where $J$ can be identified as a $t$-spread principal Borel ideal generated in degree $d-k$ in its ambient polynomial ring. Hence, $J=B_t(v) \in \mathcal{A}_t$, with $v=u/x_1x_{t+1}\cdots x_{(k-1)t+1} $.
For example, if $u=x_1x_4x_9x_{12}$, then $B_3(u)=x_1x_4J$ with $J=B_t(v) \subset K[x_7, \ldots, x_{12}]$ and $v=x_9x_{12}$. In fact, by substituting $y_{i-6}=x_i$ for $i=7, \ldots, 12$, $J$ can be identified with $B_3(y_3y_6)\subset K[y_1, \ldots, y_6]$.
\item If $|A_k| \geq 2$, then $i_k > (k-1)t+1$. This forces $|A_j| \geq 2$ for all $j=k, \ldots,d$ because $u$ is a $t$-spread monomial.
\end{enumerate} \end{remark}
In what follows, our aim is to show that for any fixed $t$, all ideals in $\mathcal{A}_t$ are normally torsion-free. It is known from \cite[Corollary]{AEL} that $B_t(u)$ satisfies the persistence property and the Rees algebra $\mathcal{R}(B_t(u))$ is a normal Cohen-Macaulay domain. It is a well-known fact that for any non-zero graded ideal $I \subset R=K[x_1, \ldots, x_n]$, if $\mathcal{R}(I)$ is Cohen-Macaulay, then $\lim_{k \rightarrow \infty} \mathrm{depth}(R/I^k)= n-\ell(I)$, see for example \cite[Proposition 10.3.2]{HH1}, where $\ell(I)$ denotes the analytic spread of $I$, that is, the Krull dimension of the fiber ring $\mathcal{R}(I)/\mathfrak{m}\mathcal{R}(I)$. This leads us to the following corollary, which will be used in the subsequent results; here one also uses the persistence property of $B_t(u)$, which guarantees that if $\mathfrak{m}\in \mathrm{Ass}(R/B_t(u)^k)$ for some $k$, then $\mathfrak{m}\in \mathrm{Ass}(R/B_t(u)^j)$ for all $j\geq k$. Note that due to Remark~\ref{rem1}(1), one needs to pay attention to the ambient ring of $B_t(u)$. Here by the ambient ring, we mean the polynomial ring $R$ containing $B_t(u)$ such that all variables of $R$ appear in $\mathrm{supp}(B_t(u))$.
\begin{corollary}\label{cor1} Let $I=B_t(u)\subset R=K[x_i: x_i \in \mathrm{supp}(I)]$ and $n=|\mathrm{supp}(I)|$. If $\ell(I)<n$, then $\lim_{k \rightarrow \infty} \mathrm{depth}(R/I^k)\neq 0$. In particular, if $\ell(I)<n$, then $\mathfrak{m} \notin \mathrm{Ass} (R/I^k)$ for all $k\geq 1$, where $\mathfrak{m}$ is the unique graded maximal ideal of $R$. \end{corollary}
Next we compute $\ell(I)$, for all $I \in \mathcal{A}_t$. For this, we first recall the definition of linear relation graph from \cite[Definition 3.1]{HQ}.
\begin{definition}\label{linearrelationgraphdef} Let $I\subset R$ be a monomial ideal with $\mathcal{G}(I)=\{u_1, \ldots, u_m\}$. The {\em linear relation graph} $\Gamma$ of $I$ is the graph with the edge set \[ E(\Gamma)=\{\{i,j\}: \text{there exist $u_k$,$u_l \in \mathcal{G}(I)$ such that $x_iu_k=x_ju_l$}\}, \]
and the vertex set $V(\Gamma)=\bigcup_{\{i,j\}\in E(\Gamma)}\{i,j\}$.
\end{definition}
It is known from \cite[Lemma 5.2]{DHQ} that if $I$ is a monomial ideal generated in degree $d$ and the first syzygy of $I$ is generated in degree $d+1$ then \begin{equation}\label{eq1} \ell(I)=r-s+1 \end{equation} where $r$ is the number of vertices and $s$ is the number of connected components of the linear relation graph of $I$. If $u$ is a $t$-spread monomial of degree $d$, then it can be concluded from \cite[Theorem 2.3]{AEL} that the first syzygy of $B_t(u)$ is generated in degree $d+1$. Hence \cite[Lemma 5.2]{DHQ} gives a way to compute the analytic spread of $B_t(u)$. Before proving the other main result of this section, we first analyze the linear relation graph of $B_t(u)$.
\begin{lemma}\label{aklemma} Let $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ and $A_k=[(k-1)t+1,i_k]$, for each $k=1, \ldots, d$. Moreover, let $\Gamma$ be the linear relation graph of $I=B_t(u)$. Then we have the following:
\begin{enumerate}
\item[(i)] For each $k=1, \ldots, d$, we have $|A_k| \geq 2$ if and only if $A_k \subseteq V(\Gamma)$. Moreover, if $|A_k| \geq 2$, then the induced subgraph of $\Gamma$ on $A_k$ is a complete graph. \item[(ii)] Let $i_k < kt+1$ for some $1 \leq k \leq d$. Then $\Gamma$ does not contain any edge with one endpoint in $A_r$ and the other endpoint in $A_s$ for any $1 \leq r<s\leq k$. \end{enumerate} \end{lemma}
\begin{proof}
(i) First we show that if $A_k \subseteq V(\Gamma)$, then $|A_k| \geq 2$. Assume that $|A_k|=1$, for some $k$. Then from Remark~\ref{rem1}(4), we see that $i_k=(k-1)t+1$. In this case, each generator of $B_t(u)$ is a multiple of $x_{i_k}$, and it follows from the definition of $\Gamma$ that $A_k=\{i_k\} \not\subseteq V(\Gamma)$.
Conversely, we show that if $|A_k| \geq 2$, for some $k$, then $A_k \subseteq V(\Gamma)$ and the induced subgraph on $A_k$ is a complete graph. Take any $f,h \in A_k$ with $f<h$. If $k=1$, then $f<h \leq i_1$ and the monomials $v=x_h(u/x_{i_1})$ and $v'=x_f(u/x_{i_1})$ belong to $\mathcal{G}(I)$ because $I$ is $t$-spread strongly stable. Since $x_fv=x_hv'$, we obtain $\{f,h\} \in E(\Gamma)$, as required. A similar argument shows that if $|A_d| \geq 2$, then $A_d \subseteq V(\Gamma)$ and the induced subgraph on $A_d$ is also a complete graph. Now assume that $1 < k < d$. Since $|A_k| \geq 2$, we have $i_k > (k-1)t+1$ and $(k-1)t+1 \leq f<h \leq i_k$. By using the fact that $I$ is $t$-spread strongly stable, we see that $v=x_1 \cdots x_{(k-2)t+1}x_f x_{kt+1}\cdots x_{i_d} \in \mathcal{G}(I)$ and $v'=x_1 \cdots x_{(k-2)t+1}x_h x_{kt+1}\cdots x_{i_d} \in \mathcal{G}(I)$. Moreover, $v=(v'/x_h)x_f$ and $\{f,h\} \in E(\Gamma)$, as required.
(ii) If $|A_r|=1$ or $|A_s|=1$, then by using (i), we see that the statement holds trivially. Assume that $|A_r| \geq 2$ and $|A_s| \geq 2$. Since $i_k< kt+1$, we have $i_j<jt+1$, for all $j=1, \ldots, k$, because $u$ is a $t$-spread monomial. In this case $A_r \cap A_s = \emptyset$ for all $1 \leq r < s \leq k$. Moreover, $|A_r| <t$ for all $r=1, \ldots, k$. Take $f \in A_r$ and $h \in A_s$ for some $1\leq r < s \leq k$. Then for any $w \in \mathcal{G}(I)$ with $x_h \mid w$, we have $(w/x_h)x_f \notin \mathcal{G}(I)$. This can be easily verified because $(w/x_h)x_f$ is not a $t$-spread monomial, as it contains two variables with indices in $A_r$ and $|A_r|<t$. Hence, we do not have any edge in $\Gamma$ of the form $\{f,h\}$ where $f \in A_r$ and $h \in A_s$. \end{proof}
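The following small example, worked out by hand (it does not appear in the references), illustrates Lemma~\ref{aklemma} and the use of the formula (\ref{eq1}). Let $t=2$ and $I=B_2(x_2x_6)\subset K[x_1, \ldots, x_6]$, so that $d=2$, $A_1=\{1,2\}$ and $A_2=\{3,4,5,6\}$. By Lemma~\ref{aklemma}(i), the induced subgraphs of $\Gamma$ on $A_1$ and on $A_2$ are complete graphs, and a direct inspection of the generators shows that no edge of $\Gamma$ joins $A_1$ to $A_2$. Hence $\Gamma$ has $r=6$ vertices and $s=2$ connected components, and (\ref{eq1}) gives
\[
\ell(I)=r-s+1=6-2+1=5<6=|\mathrm{supp}(I)|,
\]
so, by Corollary~\ref{cor1}, the maximal ideal $(x_1, \ldots, x_6)$ is not associated to any power of $I$.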
\begin{proposition}\label{limdepth}
Let $I=B_t(x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}) \subset R=K[x_i: x_i \in \mathrm{supp}(I)]$ with $i_{d-1} \leq(d-1)t$. Then $\ell(I) < n=|\mathrm{supp}(I)|$. In particular, $\mathfrak{m} \notin \mathrm{Ass} (R/I^k)$ for all $k\geq 1$, where $\mathfrak{m}$ is the unique graded maximal ideal of $R$. \end{proposition}
\begin{proof}
If $I$ is a principal ideal, then the assertion holds trivially. Therefore, we may assume that $I$ is not a principal ideal. To show that $\ell(I)<n$, from the equality in (\ref{eq1}), it is enough to prove that the linear relation graph $\Gamma$ of $I$ has more than one connected component. Let $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ and $A_k=[(k-1)t+1,i_k]$, for $k=1, \ldots, d$. Since $I$ is not principal, it follows from Remark~\ref{rem1}(4) that $|A_k| \geq 2$, for some $1 \leq k \leq d$.
Since $i_{d-1} \leq(d-1)t$, one can deduce from Lemma~\ref{aklemma} that we do not have any edge in $\Gamma$ of the form $\{f,h\}$ with $f$ and $h$ in different $A_k$'s, and that if $|A_k| \geq 2$, for some $k$, then $A_k \subset V(\Gamma)$ and the induced subgraph on $A_k$ is a complete graph. This shows that $V(\Gamma)$ is the union of all $A_i$'s for which $|A_i|\geq 2$. Moreover, each such $A_i$ determines a connected component of $\Gamma$. Hence $\Gamma$ has only one connected component if and only if $|A_d|\geq 2$ and $|A_i| =1$, for all $i=1, \ldots, d-1$. In this case, $\ell(I)= |A_d|<n$. Otherwise, $\Gamma$ has at least two connected components and again we obtain $\ell(I)<n$, as required. Then the assertion $\mathfrak{m} \notin \mathrm{Ass} (R/I^k)$, for all $k \geq 1$, follows from Corollary~\ref{cor1}. \end{proof}
Before proving Theorem~\ref{main-NTF}, we first recall the following theorem which gives a criterion to check whether a square-free monomial ideal is normally torsion-free or not.
\begin{theorem}\label{use}\cite[Theorem 3.7]{SNQ} Let $I$ be a square-free monomial ideal in a polynomial ring $R=K[x_1, \ldots, x_n]$ over a field $K$ and $\mathfrak{m}=(x_1, \ldots, x_n)$. If there exists a square-free monomial $v \in I$ such that $v\in \mathfrak{p}\setminus \mathfrak{p}^2$ for every $\mathfrak{p}\in \mathrm{Min}(I)$, and $\mathfrak{m}\setminus x_i \notin \mathrm{Ass}(R/(I\setminus x_i)^s)$ for all $s$ and $x_i \in \mathrm{supp}(v)$, then the following statements hold: \begin{itemize} \item[(i)] $I$ is normally torsion-free. \item[(ii)] $I$ is normal. \item[(iii)] $I$ has the strong persistence property. \item[(iv)] $I$ has the persistence property. \item[(v)] $I$ has the symbolic strong persistence property. \end{itemize}
\end{theorem}
Now, we state the second main result of this section. \begin{theorem}\label{main-NTF} Let $I=B_t(x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}) \subset R=K[x_i: x_i \in \mathrm{supp}(I)]$ with $i_{d-1} \leq(d-1)t$. Then $I$ is normally torsion-free. \end{theorem}
\begin{proof} We may assume that $i_k \neq (k-1)t+1$, for all $1 \leq k \leq d$. Otherwise, from Remark~\ref{rem1}(4), it follows that $I=wB_t(v)$ where $w$ is the product of all variables for which $i_k =(k-1)t+1$, $v=u/w$, and $B_t(v)$ is a $t$-spread principal Borel ideal in its ambient ring. Then from \cite[Lemma 3.12]{SN}, it follows that $I$ is normally torsion-free if and only if $B_t(v)$ is normally torsion-free. Therefore, one may reduce the discussion to $B_t(v)$, whose generators are not all multiples of a common variable.
Let $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ with $i_1>1$. To show that $I=B_t(u)$ is normally torsion-free, we will use Theorem~\ref{use}. It can be seen from \cite[Theorem 4.2]{Claudia} that $u \in \mathfrak{p}\setminus \mathfrak{p}^2$, for all $\mathfrak{p} \in \mathrm{Min}(I)$. Recall that the family of ideals $\mathcal{A}_t$ is defined by: $B_t(u) \in \mathcal{A}_t$ if and only if there exists some $d$ such that $u=x_{i_1}x_{i_2}\cdots x_{i_{d-1}}x_{i_d}$ with $i_{d-1} \leq(d-1)t$. From Remark~\ref{rem1}(3), it follows that $I\setminus{x_{i_k}} \in \mathcal{A}_t$, for all $k=1, \ldots, d$. Furthermore, Proposition~\ref{limdepth} implies that the unique graded maximal ideal of the ambient ring of $I\setminus{x_{i_k}}$ does not belong to $\mathrm{Ass}(R/(I\setminus x_{i_k})^s)$ for all $s$ and all $k=1, \ldots, d$. Hence all hypotheses of Theorem~\ref{use} are satisfied (with $v=u$), and $I$ is normally torsion-free. \end{proof}
Next, we prove the converse of Theorem~\ref{main-NTF} to obtain the complete characterization of normally torsion-free $t$-spread principal Borel ideals.
\begin{theorem}\label{complete} Let $I=B_t(x_{i_1}x_{i_2}\ldots x_{i_{d-1}}x_{i_d}) \subset R=K[x_i: x_i \in \mathrm{supp}(I)]$. Then $I$ is normally torsion-free if and only if $i_{d-1} \leq(d-1)t$. In other words, $I$ is normally torsion-free if and only if $I$ can be viewed as an edge ideal of a $d$-uniform $d$-partite hypergraph. \end{theorem} \begin{proof}
Following Theorem~\ref{main-NTF}, it is enough to show that if $i_{d-1} >(d-1)t$, then $I$ is not normally torsion-free. Let $k$ be the smallest integer for which $i_k > kt$. As we explained in Remark~\ref{rem1}(5), if $i_k > kt$, then $i_j> jt$ and $|A_j| \geq 2$, for all $j= k, \ldots, d$. The sets $A_j$ are defined in (\ref{ak}). It follows from Lemma~\ref{aklemma} that $A=A_k \cup A_{k+1} \cup \cdots \cup A_d \subset V(\Gamma)$, where $\Gamma$ is the linear relation graph of $I$, and for each $j=k, \ldots, d$, the induced subgraph of $\Gamma$ on $A_j$ is a complete graph. Moreover, since $i_j> jt$ for each $j=k, \ldots, d-1$, we have $i_j \in A_j \cap A_{j+1}$, so $A_j \cap A_{j+1} \neq \emptyset$ for each $j=k, \ldots, d-1$. Therefore, we conclude that the induced subgraph on $A$ is connected.
Set $\mathfrak{p}=(x_{(k-1)t+1}, \ldots, x_{i_d})$. Here the crucial observation is that if we take the monomial localization of $I$ at $\mathfrak{p}$, in other words, if we map to $1$ all variables $x_i$ with $i \in A_1 \cup \cdots \cup A_{k-1}$, then we reduce the degree of each generator of $B_t(u)$ by $k-1$. This is because $A \cap B = \emptyset$, where $B=A_1 \cup \cdots \cup A_{k-1}$. Hence $I(\mathfrak{p})$ can be viewed as a $t$-spread principal Borel ideal by a shift of indices of variables. More precisely, each $j \in A$ is shifted to $j-(k-1)t$. Therefore, $I(\mathfrak{p})= B_t(x_{j_1} \cdots x_{j_{d-k+1}})$, where $t<j_1 < \cdots < j_{d-k+1}$. The linear relation graph of $I(\mathfrak{p})$ is isomorphic to the induced subgraph of $\Gamma$ on the vertex set $A$. Then $\ell( I(\mathfrak{p})) = |\mathrm{supp} (B_t(x_{j_1} \cdots x_{j_{d-k+1}}))|=\mathrm{dim}(R(\mathfrak{p}))$, and $\lim_{s \rightarrow \infty} \mathrm{depth}(R(\mathfrak{p})/I(\mathfrak{p})^s)= 0$. This shows that $\mathfrak{p} \in \mathrm{Ass}(R/I^s)$, for some $s>1$. Moreover, $\mathfrak{p} \notin \mathrm{Ass}(R/I)$ because of \cite[Theorem 4.2]{Claudia}. Hence we conclude that $I$ is not normally torsion-free. \end{proof} In the subsequent example, we illustrate the construction of $\mathfrak{p}$ as in the proof of Theorem~\ref{complete}. \begin{example} Let $u=x_2x_7x_{10}x_{13}$ be a 3-spread monomial and $I=B_3(u)$. Here $i_1=2, i_2=7, i_3=10, i_4=13$. Moreover, $i_1=2 < t=3$, but $i_2=7 >2t=6$. Set $\mathfrak{p}=(x_4, x_5, \ldots, x_{13})$ as in the proof of Theorem~\ref{complete}. Then the minimal generators of $I(\mathfrak{p})$ are listed below. In the following table, $u \rightarrow v$ indicates that $u \in \mathcal{G}(I(\mathfrak{p}))$ and $v$ is the monomial obtained by shifting each $j \in [4,13]$ to $j-3$. \begin{center} \begin{tabular}{ l l l l} $x_4x_7x_{10} \rightarrow x_1x_4x_{7}$\\ $x_4x_7x_{11} \rightarrow x_1x_4x_{8}$\\ $x_4x_7x_{12} \rightarrow x_1x_4x_{9}$\\ $x_4x_7x_{13} \rightarrow x_1x_4x_{10}$\\ $x_4x_8x_{11} \rightarrow x_1x_5x_{8}$&$x_5x_8x_{11} \rightarrow x_2x_5x_{8}$\\ $x_4x_8x_{12} \rightarrow x_1x_5x_{9}$&$x_5x_8x_{12} \rightarrow x_2x_5x_{9}$\\ $x_4x_8x_{13} \rightarrow x_1x_5x_{10}$&$x_5x_8x_{13} \rightarrow x_2x_5x_{10}$\\ $x_4x_9x_{12} \rightarrow x_1x_6x_{9}$&$x_5x_9x_{12} \rightarrow x_2x_6x_{9}$&$x_6x_9x_{12} \rightarrow x_3x_6x_{9}$\\ $x_4x_9x_{13} \rightarrow x_1x_6x_{10}$&$x_5x_9x_{13} \rightarrow x_2x_6x_{10}$&$x_6x_9x_{13}\rightarrow x_3x_6x_{10}$\\ $x_4x_{10}x_{13} \rightarrow x_1x_7x_{10}$&$x_5x_{10}x_{13} \rightarrow x_2x_7x_{10}$&$x_6x_{10}x_{13}\rightarrow x_3x_7x_{10}$&$x_7x_{10}x_{13}\rightarrow x_4x_7x_{10}$ \end{tabular} \end{center} Therefore, $I(\mathfrak{p})$ can be viewed as a $3$-spread principal Borel ideal $B_3(x_4x_7x_{10})$. A direct computation in Macaulay2 \cite{GS} shows that $(x_1, \ldots, x_{10})$ is an associated prime of the third power of $B_3(x_4x_7x_{10})$. Consequently, $\mathfrak{p}$ is also an associated prime of the third power of $I$. \end{example}
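For readers who wish to reproduce such computations, the following Macaulay2 session is a minimal sketch of one possible way to do it; the variable names and the loop producing $\mathcal{G}(B_3(x_4x_7x_{10}))$ are ours, and we assume a recent version of Macaulay2 in which \texttt{associatedPrimes} is available by default.
\begin{verbatim}
R = QQ[x_1..x_10];
-- generators of B_3(x_4*x_7*x_10): the 3-spread monomials
-- x_a*x_b*x_c with a <= 4, b <= 7 and c <= 10
L = flatten for a from 1 to 4 list
      flatten for b from a+3 to 7 list
        for c from b+3 to 10 list x_a*x_b*x_c;
I = monomialIdeal L;
associatedPrimes(I^3)
-- according to the computation quoted above, the maximal ideal
-- (x_1, ..., x_10) should appear in this list
\end{verbatim}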
When $u$ is a monomial of degree 3, then we have the following:
\begin{lemma} Let $u=x_ax_bx_n$ be a $t$-spread monomial and $I=B_t(u)\subset R=K[x_i: x_i \in \mathrm{supp}(I)]$. Then we have the following: \begin{enumerate} \item[\em{(i)}] if $b < 2t+1$, then $I$ is normally torsion-free.
\item[\em{(ii)}] if $a=1$ and $b\geq 2t+1$, then $I$ is nearly normally torsion-free.
\item[\em{(iii)}] if $a>1$ and $b\geq 2t+1$, then $I$ is not nearly normally torsion-free.
\end{enumerate} \end{lemma}
\begin{proof} (i) It follows from Theorem~\ref{main-NTF}.
(ii) Let $w\in \mathcal{G}(I)$. Then $w=x_1x_{i_2}x_{i_3}$, where $i_2\in \{t+1, \ldots, b\}$ and $i_3\in \{2t+1,\ldots,n\}$. It can be easily seen that $I=x_1J$, where $J=B_t(x_{b}x_n)$. It follows from Proposition~\ref{App. 2} that $B_t(x_{b}x_n)$ is nearly normally torsion-free. Then we get the required result by using Lemma~\ref{Lem. Multipe}.
(iii) We will prove the assertion by constructing two monomial prime ideals $\mathfrak{p}_1$ and $\mathfrak{p}_2$ that belong to $\mathrm{Ass}(R/I^k)$, for some $k>1$, but $\mathfrak{p}_1, \mathfrak{p}_2 \notin \mathrm{Ass}(R/I)$. Recall from (\ref{ak}) that if $v \in \mathcal{G}(I)$, then $v=x_{a'}x_{b'}x_{c'}$ with $ a' \in A_1=[1, a]$, $b' \in A_2=[t+1,b]$, and $c'\in A_3=[2t+1, n]$. Since $a>1$, we have $|A_1|> 1$ and since $b\geq 2t+1$, we have $|A_2|> 1$ and $|A_3|>1$.
Let $\mathfrak{p}_1=(x_{t+1}, \ldots, x_{n})$. As shown in the last part of the proof of Theorem~\ref{complete}, $\mathfrak{p}_1 \in \mathrm{Ass}(R/I^k)$, for some $k>1$, but $\mathfrak{p}_1 \notin \mathrm{Ass}(R/I)$. Let $\mathfrak{p}_2=(x_1, x_{t+2}, x_{t+3}, \ldots, x_n)$; then from \cite[Theorem 4.2]{Claudia}, we have $\mathfrak{p}_2 \notin \mathrm{Ass}(R/I)$. We claim that $I(\mathfrak{p}_2)$ can be viewed as the $t$-spread principal Borel ideal $B_t(x_rx_n)$ where $r>t$. Indeed, by substituting $x_{t+1}=1$, all $t$-spread monomials of the form $x_1x_{t+1}x_{c'}$, with $c' \in A_3$, are reduced to $x_1x_{c'}$. Therefore, the monomials $x_1x_{b'}x_{c'} \in \mathcal{G}(I)$ with $b' \in [t+2,b]$ and $c' \in [2t+2,n]$ do not appear in $\mathcal{G}(I(\mathfrak{p}_2))$.
Since $a>1$, we have $2 \in A_1$. Moreover, if $v=x_{a'}x_{b'}x_{c'} \in \mathcal{G}(I)$ with $a'\geq 2$, then $b' \geq t+2$ and $c' \geq 2t+2$ because $v$ is a $t$-spread monomial.
By substituting $x_{i}=1$, for all $i \in [2,a]$, all monomials of the form $x_i x_{b'}x_{c'}$, with $b' \in [t+2, b]$, $c' \in [2t+2,n]$ are reduced to $x_{b'}x_{c'}$. This shows that $I(\mathfrak{p}_2)$ is generated in degree 2 by $t$-spread monomials of the following form: \begin{center} \begin{tabular}{l} $x_1x_{c'}$ with $c' \in A_3=[2t+1,n]$; \\
$x_{b'}x_{c'}$ with $b'\in [t+2,b]$ and $c' \in [2t+2,n]$ and $c' - b'\geq t$. \end{tabular} \end{center}
Note that $\mathrm{supp}(I(\mathfrak{p}_2))=\{x_1, x_{t+2}, x_{t+3}, \ldots, x_n\}$. Moreover, if we shift the indices of the variables in $\{x_{t+2}, x_{t+3}, \ldots, x_n\}$ by $k\rightarrow k-t$, then $I(\mathfrak{p}_2)$ can be viewed as a $t$-spread principal Borel ideal $B_t(x_{r}x_{s})$ where $s=n-t$ and $r=b-t>t$ because $b \geq 2t+1$. Moreover, the linear relation graph of $B_t(x_{r}x_{s})$ is connected with $ |\mathrm{supp}(I(\mathfrak{p}_2))| $ vertices. Hence, $\lim_{k \rightarrow \infty} \mathrm{depth}(R(\mathfrak{p}_2)/I(\mathfrak{p}_2)^k)= 0$, and consequently $\mathfrak{p}_2 \in \mathrm{Ass}(R/I^k)$, for some $k >1$. This completes the construction of $\mathfrak{p}_1$ and $\mathfrak{p}_2$, and hence the proof. \end{proof}
\noindent{\bf Acknowledgments.}
We would like to thank Professor Adam Van Tuyl for his valuable comments in preparation of Theorem \ref{NTF1}. In addition, the authors are deeply grateful to the referee for careful reading of the manuscript and valuable suggestions which led to significant improvements in the quality of this paper.
\end{document} |
\begin{document}
\begin{abstract} In this paper, we study
non-wandering homeomorphisms of the two torus homotopic to
the identity,
whose rotation sets
are non-trivial segments from $(0,0)$ to some totally irrational point $(\alpha,\beta)$. We show that for any $r \geq 1,$ this rotation set only appears for $C^r$ diffeomorphisms satisfying some degenerate conditions. And when such a rotation set does appear, assuming several natural conditions that are generically satisfied in the area-preserving world, we give a clearer description of its rotational behaviour. More precisely, the dynamics admits bounded deviation along the direction $-(\alpha,\beta)$ in the lift, and the rotation set is locked inside an arbitrarily small cone with respect to
small $C^0$-perturbations of the dynamics. On the other hand, for any non-wandering homeomorphism $f$ with this kind of rotation set, we also present a perturbation scheme in order for the rotation set to be absorbed by the rotation set of some nearby dynamics, in the sense that the latter set has non-empty interior and contains the former one. These two flavors interplay and share the common goal of understanding the stability/instability properties of this kind of rotation set. \end{abstract}
\maketitle{}
\tableofcontents
\section{Introduction}
The notion of a rotation number was introduced by Poincar\'e in order to gather information on the average ``rotational'' linear speed of a dynamical system. Rotation theory is well understood
only for circle homeomorphisms
or endomorphisms,
and it
is still a great source of
problems in two dimensional manifolds (annulus, two torus, higher genus
surfaces, and so on).
Focusing only on the two torus,
there
are two closely related topics of central importance, namely,
the shape of the rotation set
and how it changes depending on the dynamics.
In this paper, we will work on both topics, based on one specific
type of rotation set.
On the one hand, we try to understand the variation of a rotation set depending on the homeomorphism, under different regularities. On the other hand, we study how a certain shape of the rotation set restricts the dynamics.
In general, with respect to the Hausdorff topology,
the rotation set varies
upper-semicontinuously.
It is also known that if
the rotation set of some $f$ has non-empty interior,
then it is in fact continuous at
$f$ (see ~\cite{MZ} and ~\cite{MZ2}).
Moreover,
for a $C^0$-open and dense subset of homeomorphisms, the rotation set is stable (i.e., it remains unchanged under small perturbations, see~\cite{Passeggi_rational} and \cite{Andres_C0}). Note that this is true both in the set of all homeomorphisms, and in the set of area-preserving ones.
Below, in order to state our main results,
we will use some notations that are mostly standard, and postpone their precise definitions to Section~\ref{preliminary}.
Our objective is to look at an interesting situation, where the rotation set is a segment connecting $(0,0)$ to some totally irrational point $(\alpha,\beta)$. We want to understand how it can be changed under sufficiently small $C^0$ perturbations, and what properties such a homeomorphism should satisfy. An example with such a rotation set was first described in~\cite{Handel_no_periodic_orbit} by Handel, who attributed it to Katok. It is produced by a slowing-down procedure applied to a constant-speed irrational flow.
A smooth area-preserving example was obtained by Addas-Zanata and Tal in~\cite{Salvador_fareast}. See also the paper~\cite{Kwapisz_Diophantine} by Kwapisz and Mathison which shows some special ergodic properties for an explicit example.
In the $C^0$ category, the more precise task for us is to study when and how the rotation set of a dynamical system is ready to grow.
One of the first results on this subject appeared in ~\cite{Salvador}, where it was proved that if the rotation set $\rho(\widetilde f)$
contains a non-rational vector $(\alpha,\beta)$ as an extremal point,
then for any supporting line $r$ (i.e., a straight line containing $(\alpha, \beta)$,
such that the rotation set avoids one connected component of its complement, denoted $O$),
by certain arbitrarily small $C^0$-perturbations,
the new rotation set will intersect $O$
(See also \cite{Guiheneuf_instability} for a $C^1$ version of this theorem).
In this paper, we are able to go one step further in this direction.
\begin{theorem}\label{instability_general} Let $\widetilde f \in \widetilde{\text{Homeo}}_{0,\text{nw}}(\Bbb T^2)$, whose rotation set $\rho(\widetilde f)$ is a segment from $(0,0)$ to some totally irrational point $(\alpha,\beta)$. For any $\varepsilon>0$, there exists $\widetilde g \in \widetilde {\text{Homeo}}_0(\Bbb T^2)$
with $\text{dist}_{C^0}(\widetilde g, \widetilde f)<\varepsilon$, such that,
$\rho(\widetilde g)$ has non-empty interior,
and $\rho(\widetilde f)\backslash \{(0,0)\}
\subset \text{Int}(\rho(\widetilde g))$. \end{theorem}
\begin{Remark} If the map $\widetilde{f}$ from the above theorem preserves area, then the perturbed map $\widetilde{g}$ can also be chosen in the area-preserving world. This is a well-known fact (it follows from Lemma \ref{firstclosing}), but it deserves to be mentioned, as it is one of the few cases known to us of how to perturb a non-wandering homeomorphism and remain non-wandering. \end{Remark}
It is interesting to ask if the same statement is also true in $C^1$ topology. If we go on to consider $C^r$ diffeomorphisms, $r\geq 1$, clearer descriptions should be expected. In particular, we proved the following result, which suggests
the non-genericity of the set of non-wandering diffeomorphisms
with this special kind of rotation set. We say ``suggest'', because it is not known how to perturb a non-wandering diffeomorphism and remain non-wandering (except, of course, in some particular cases, like that of area-preserving maps).
\begin{theorem}\label{generic_case} Let $\widetilde f\in \widetilde{\text{Diff}}_{0,\text{nw}} (\Bbb T^2)$. Assume that for every integer $n>0$, the linear part $Df^n$ computed at each $n$-periodic point does not have $1$ as an eigenvalue and there are no saddle-connections. Then, the rotation set $\rho(\widetilde f)$ is not a segment from $(0,0)$ to some totally irrational point $(\alpha,\beta)$. \end{theorem}
Note that the above conditions are satisfied by generic area-preserving $C^r$ diffeomorphisms, for all $r \geq 1.$ Naturally, the next
task is the following. How can we understand a typical non-wandering diffeomorphism $f$ which does admit such a rotation set? This seems to be hard, because very little is known on the set of non-wandering homeomorphisms or diffeomorphisms of a surface. As we said above, unlike in the area-preserving case, there does not exist a method available to make a perturbation within the set of non-wandering homeomorphisms.
Nevertheless, in the area-preserving setting, if one wants to obtain more information on
the non-generic diffeomorphisms, traditionally,
one works with generic families. This inspires us to formulate nice conditions, which hold true in a broader set of diffeomorphisms. This approach eventually helps us to detect properties, which general non-wandering homeomorphisms might satisfy. Along this direction, we obtain the following two results.
\begin{theorem}\label{carac_map} Suppose $f\in \text{Diff}_{0,\text{nw}}(\Bbb T^2)$ satisfies certain natural conditions,
see Definition~(\ref{generic_in_generic}).
Suppose also that $\rho(\widetilde {f})$ for some lift $\widetilde f$
is a segment from $(0,0)$ to a totally irrational point $(\alpha,\beta).$ Then, $f$ has (finitely many) fixed points (and no periodic point which is not fixed), all with $0$ topological index, and the local dynamics around them is obtained by gluing exactly two hyperbolic sectors. The stable branch of any of these fixed points does not intersect the unstable branch of any other point. Moreover, in the plane, for each fixed point, its unstable and stable branches are bounded in the direction orthogonal to $(\alpha,\beta)$ and the unstable (resp. stable) branch goes to infinity following the vector $(\alpha,\beta)$ (resp. $-(\alpha,\beta)$). Finally, any stable or unstable branch of a fixed point is dense in the torus. \end{theorem}
\begin{theorem}\label{generic_family_not_perturbable} Under the same hypotheses of the above Theorem,
for any non-zero integer vector $(a,b),$ $a$ and $b$ coprime, there exists a simple closed curve $\theta$ in $\Bbb T^2,$ with homological direction $(a,b),$ such that any connected component of the lift of $\theta$ to the plane is a Brouwer line for $\widetilde{f}$. Moreover, for any straight line $\gamma$ containing $(0,0)$ and avoiding $(\alpha,\beta)$, there exists $\varepsilon>0$, such that, for any $\widetilde g\in \widetilde{\text{Homeo}}_{0}(\Bbb T^2)$, with $d_{C^0}(\widetilde f,\widetilde g)<\varepsilon$, $\rho(\widetilde g)$ is contained in the union of $\gamma$ and one of the connected components of its complement, the one which contains $\rho(\widetilde f)\backslash \{(0,0)\}$.
\end{theorem}
The next Corollary is a direct consequence of Theorem \ref{generic_family_not_perturbable}. Nevertheless, we will actually prove the Corollary before the theorem (see Lemma~\ref{Generic_Bounded_in_(-alpha,-beta)}).
\begin{coro}\label{coro_bounded} Let $f$ satisfy the conditions in
Theorem~\ref{generic_family_not_perturbable}. Then it has bounded deviation along the direction $-(\alpha,\beta)$. \end{coro}
We also consider the unbounded deviation along the direction $(\alpha,\beta)$. The next theorem requires one more condition, the existence of an invariant foliation, which is clearly satisfied in the particular case when $f$ is the time-one map of some flow. See its more precise statement in Section 7.
\begin{theorem}\label{generic_bounded_unbounded} Consider any $\widetilde f$ as in Theorem~\ref{generic_family_not_perturbable}. If $f$ preserves a $C^0$ foliation of $\Bbb T^2$, then $\widetilde f$ has unbounded deviation along $(\alpha,\beta)$. \end{theorem}
This last result leads to the following interesting question. \begin{question}\label{Question_unbounded}
For $\widetilde f$ which lifts $f\in \text{Homeo}_{0,\text{nw}}(\Bbb T^2)$,
suppose $\rho(\widetilde f)$ is the line segment from $(0,0)$ to a totally
irrational $(\alpha,\beta)$. Is it true that $\widetilde f$ has
unbounded deviation along the direction $(\alpha,\beta)$? \end{question}
To conclude, let us briefly describe the organisation of this paper and the main ideas used in the proofs.
In Section 2, we will summarise some notations and previous results that will be used along the text.
In Section 3, we introduce a perturbation technique, which is very useful under the condition of unbounded deviations along some direction. The purpose is to find $\varepsilon$-pseudo periodic orbits,
which can be "closed" in order to become periodic orbits, so with rational rotation vector.
The difficulty is that,
a priory, the original method in \cite{Salvador} does not give enough information
to locate the position of these rational numbers, except that they are outside the rotation set
$\rho(\widetilde f)$.
In Section 4, we focus on the case when the map has bounded deviations.
We apply a result proved by J{\"a}ger (see~\cite{Linearization}) in order to obtain
a semi-conjugacy between the restriction of the dynamics to a certain
minimal set and the rigid torus rotation.
Then, we prove that whenever one can perturb the rigid rotation, we
can also perturb the original homeomorphism. Combining both results from Section 3 and
Section 4, we complete the proof of Theorem~\ref{instability_general}.
From Section 5, we start working with diffeomorphisms.
There, we prove Theorem~\ref{generic_case}.
There are three main ingredients in this proof. The first one belongs to the theory of
prime ends rotation numbers.
The second one is the so called \textit{bounded disk lemma},
firstly proved by Koropecki and Tal in \cite{Strictly_Toral}.
The third one consists of certain properties of invariant branches at hyperbolic periodic saddles, mostly from Fernando Oliveira's paper \cite{Oliveira}.
In Section 6, we work with diffeomorphisms satisfying
certain conditions,
which in the area-preserving case, are generic in the complement of
the set of maps which satisfy the hypotheses of Theorem \ref{generic_case}. See for instance \cite{Dumortier} and \cite{Melo}. First, we collect several results describing dynamical properties of maps which satisfy the conditions in Definition~(\ref{generic_in_generic}). These results together imply Theorem \ref{carac_map} and are also an important part
in the proof of existence of the Brouwer lines (Theorem \ref{generic_family_not_perturbable}).
In Section 7,
we continue to study diffeomorphisms satisfying the
conditions from Section 6 and prove
Theorem~\ref{generic_bounded_unbounded}.
\\
\noindent \textbf{Notational Remark.} There will be
a small abuse of notation throughout the text below.
For example,
when we introduce integers or positive constants along the arguments, the choices may be made differently in different subsections, sometimes with the same name.
However, they will be consistent
within one single subsection. \\
\noindent \textbf{Acknowledgements.} We thank Andres Koropecki and F\'abio Armando Tal for many helpful discussions.
S.A-Z. was partially supported by CNPq grant (Grant number 306348/2015-2). X-C.L. is supported by Fapesp P\'os-Doutorado grant
(Grant Number 2018/03762-2).
\section{Preliminaries and Previous Results}\label{preliminary} The main purpose of this section is to fix notations and to recall some previous results for later use. In some cases, the formulation contains some minor variations from the reference, and we will give some short proofs only stressing the differences. We will also show some elementary lemmas as well.
Note that some of the notations were already used in the statements of the theorems in the introduction. \subsection{Planar Topology and Dynamics}\label{notation1}
\\
For any planar subset $M$, denote by $\text{Inter}(M)$ the interior of $M$, and by $\partial M$ the boundary of $M$. The following property in planar topology will be used. We say a planar set $F$ separates the points $x$ and $y$ if they are in different connected components of $F^c.$
\begin{lemma}\label{Newman}[Theorem 14.3 in Chapter V of~\cite{Newman_elements}] Let $F$ be a closed subset of the plane $\Bbb R^2$, separating two points $x$ and $y$. Then some connected component of $F$ also separates $x$ and $y$. \end{lemma}
Let $M$ denote a metric space, and consider a homeomorphism $f:M\to M$. For any starting point $x_0$, we often use subscripts to denote the $f$-iterates of $x_0$, i.e., $x_n=f^n(x_0)$. For $\varepsilon>0$, we call a finite sequence of points $\{x^{(0)}, x^{(1)}, \cdots, x^{(n-1)} \}$ an $\varepsilon$-pseudo periodic orbit (with period $n$) if it has the following properties. \begin{align} \text{dist} (f(x^{(i)}),x^{(i+1)}) & <\varepsilon, \text{ for any } i=0,\cdots,n-2, \text{ and, }\\ \text{dist} (f(x^{(n-1)}), x^{(0)}) & <\varepsilon. \end{align} Moreover, any point in an $\varepsilon$-pseudo periodic orbit as above will be referred to as an $\varepsilon$-pseudo periodic point.
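For instance (a trivial illustration which is not used later): if $R_\theta$ is an irrational rotation of the circle, then for every $\varepsilon>0$ and every point $x$, recurrence provides some $n\geq 1$ with $\text{dist}(R_\theta^n(x),x)<\varepsilon$, and the finite segment $\{x, R_\theta(x), \ldots, R_\theta^{n-1}(x)\}$ is then an $\varepsilon$-pseudo periodic orbit with period $n$: the first $n-1$ conditions hold with error equal to $0$, and the last one is exactly the recurrence estimate.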
We say a point $p$ is $f$-recurrent, if there exists $n_k\to \infty$, such that $f^{n_k}(p)\to p$. The following is a standard fact for all non-wandering dynamical systems on compact metric spaces $M$. We provide a proof for completeness.
\begin{lemma}\label{dense} Suppose $f:M \to M$ is non-wandering. Then, the set of $f$-recurrent points, denoted by $R(f)\subset M$, is dense. \end{lemma} \begin{proof} Pick any open disk $B=B_0$. It suffices to show $B\bigcap R(f)\neq \emptyset$. Since $f$ is non-wandering, there is some $n_1$ such that $f^{-n_1}(B_0)\bigcap B_0 \neq \emptyset$. Then we can choose some small closed disk $B_1$, with radius smaller than $1$, such that $B_1 \subset f^{-n_1}(B_0)\bigcap B_0$. Note that every point in $B_1$ will return to $B_0$ at iterate $n_1$.
Inductively, suppose we have found increasing integers $n_1<\cdots <n_k$, and closed disks $\{B_i\}_{i=1}^k$, such that for all $i=1,\cdots, k$, $B_i$ has diameter smaller than $\frac 1i$ and $B_i \subset \text{Inter}( B_{i-1}\bigcap f^{-n_i}(B_{i-1}) )$. Then, we can choose some $n_{k+1}> n_k$ such that $B_k \bigcap f^{-n_{k+1}}(B_k)\neq \emptyset$, and some closed disk $B_{k+1}$ with diameter smaller than $\frac {1}{k+1}$, such that $B_{k+1}\subset \text{Inter}(B_k \bigcap f^{-n_{k+1}}(B_k))$.
Now, for all $k\geq 1$, $B_k$ consists of points that return to $B_{k-1}$ at time $n_k$. Since the disks $B_k$ are closed, nested and have diameters tending to zero, $\bigcap_{k\geq 1}B_k$ is a singleton, say $\{x^\ast\}$. It follows that $f^{n_k}(x^\ast)\to x^\ast.$ \end{proof}
Denote by $\Bbb T^2$ the two-dimensional ``flat'' torus, whose universal covering space is $\Bbb R^2$, and let $\pi :\Bbb R^2 \to \Bbb T^2$ be the natural projection. Let $\text{Homeo}_0(\Bbb T^2)$ denote the set of homeomorphisms of $\Bbb T^2$ homotopic to the identity. Then, denote by $\text{Homeo}_{0,\text{nw}}(\Bbb T^2)$ and $\text{Homeo}_{0,\text{Leb}}(\Bbb T^2),$ the set of non-wandering and area-preserving homeomorphisms, respectively. Note that both are subsets of $\text{Homeo}_0(\Bbb T^2)$. We also denote by $\widetilde {\text{Homeo}}_0(\Bbb T^2)$ (respectively, $\widetilde{\text{Homeo}}_{0,nw}(\Bbb T^2)$) the set of lifts of homeomorphisms from $\text{Homeo}_0(\Bbb T^2)$ (respectively, $\text{Homeo}_{0,nw}(\Bbb T^2)$) to the plane.
Similarly, for $r\geq1$, or $r=\infty$, denote by $\text{Diff}_{r,\text{nw}}(\Bbb T^2)$ (respectively $\text{Diff}_{r,\text{Leb}}(\Bbb T^2)$) the set of $C^r$ diffeomorphisms of $\Bbb T^2$, which are non-wandering (respectively, area-preserving) and homotopic to the identity. Also, the sets of their lifts are denoted by $\widetilde{\text{Diff}}_{r,\text{nw}}(\Bbb T^2)$ and $\widetilde{\text{Diff}}_{r,\text{Leb}} (\Bbb T^2)$, respectively.
Usually, the choice of a lift
is not relevant for our purposes,
as can be seen in the next subsection.
We say a subset $K\subset \Bbb T^2$ is \textit{essential} if there exists a non-trivial homotopy class, such that for any representative loop $\beta$ of it, $\beta\cap K\neq \emptyset$. A subset $K$ is called \textit{inessential} if it is contained in a topological disk in $\Bbb T^2$. In this case, the complement of it is called \textit{fully essential}. We will need the following result. For more details about the above notations, see ~\cite{KoroleCalTal} and ~\cite{Strictly_Toral}.
\begin{lemma}\label{Bounded Invariant Disks} [Theorem 6 in ~\cite{KoroleCalTal}] Let $f\in \text{Homeo}_{0,\text{nw}}(\Bbb T^2)$ and suppose that the set of fixed points
is inessential. Then there exists $M>0$, such that, for any $f$-invariant topological open disk $D$, each connected component of $\pi^{-1}(D)$ has diameter bounded from above by $M$. \end{lemma}
\subsection{Misiurewicz-Ziemian Rotation Set}
\\
Consider $\widetilde{f} \in \widetilde{\text{Homeo}}_0 (\Bbb T^2)$. The foundations of the rotation theory in the torus were mainly developed by Misiurewicz and Ziemian in~\cite{MZ}. There, the following notion of a rotation set $\rho(\widetilde f)$ appears: \begin{equation}\label{Misiurewicz_ziemann}
\rho(\widetilde f):=\Big\{v\in\Bbb R^2 \ \Big|\ v=\lim_{k\to \infty}\frac{1}{n_k}\big(\widetilde f^{n_k}(\widetilde {z}_k)-\widetilde z_k\big) \text{ for some } n_k\to \infty \text{ and } \widetilde z_k\in \Bbb R^2 \Big\}. \end{equation} \begin{Remark} For any $f \in \text{Homeo}_0(\Bbb T^2)$, and any $f$-invariant compact subset $K \subset \Bbb T^2$, $\rho(\widetilde f, K)$ is the rotation set of the map restricted to the invariant set $K$, which is defined similarly as in (\ref{Misiurewicz_ziemann}), the only difference being that the points $\widetilde z_k$ in the expression are only allowed to be chosen in $\pi^{-1}(K)$. \end{Remark}
One can also define the point-wise rotation vector as follows. For any $z\in \Bbb T^2$, \begin{equation} \rho(\widetilde f,z): =\lim_{n\to \infty} \frac{1}{n} (\widetilde f^n(\widetilde z)-\widetilde z), \text{ when the limit exists}. \end{equation}
Another important definition is as follows. Consider any $f$-invariant Borel probability measure $\mu$, and denote by $\rho_\mu(\widetilde f)=\int_{\Bbb T^2} \big( \widetilde f(\widetilde x)-\widetilde x \big) d\mu(x)$ the average rotation vector of the measure $\mu$ (note that the integrand in this expression is in fact a function on $\Bbb T^2$). Define \begin{equation} \rho_{\text{meas}}(\widetilde f):=
\{\rho_\mu(\widetilde f) \big | \mu \text{ is a $f$-invariant Borel probability measure} \}. \end{equation}
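As an elementary illustration (included here only for the reader's convenience), consider the lift $\widetilde f(\widetilde x)=\widetilde x+(\alpha,\beta)$ of the rigid rotation $R_{(\alpha,\beta)}$. Then $\widetilde f^{n}(\widetilde z)-\widetilde z=n(\alpha,\beta)$ for every $n\geq 1$ and every $\widetilde z\in\Bbb R^2$, so every limit in (\ref{Misiurewicz_ziemann}) equals $(\alpha,\beta)$ and $\rho(\widetilde f)=\{(\alpha,\beta)\}$. Likewise, the displacement function $\widetilde f(\widetilde x)-\widetilde x$ is constant, so $\rho_\mu(\widetilde f)=(\alpha,\beta)$ for every $R_{(\alpha,\beta)}$-invariant Borel probability measure $\mu$, and $\rho_{\text{meas}}(\widetilde f)=\{(\alpha,\beta)\}$ as well.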
The following result gathers many important properties of these notions:
\begin{theorem}\label{rotationset}[See~\cite{MZ} and~\cite{MZ2}] For any $\widetilde f\in \widetilde{\text{Homeo}}_0(\Bbb T^2)$, $\rho_{\text{meas}}(\widetilde f)$ equals $\rho(\widetilde f)$, which is a compact and convex subset of $\Bbb R^2$. Moreover, every extremal point of the rotation set can be realized as the average rotation vector of some ergodic measure $\mu$.
\end{theorem}
\subsection{Bounded Deviations}
\\
For any non-trivial vector $w \in \Bbb R^2$, denote by $\text{pr}_w$ the projection of a vector along the $w$ direction. \begin{equation} \text{pr}_w: \Bbb R^2 \to \Bbb R, r \mapsto \langle r, w\rangle. \end{equation} Next, we introduce the important notion of bounded deviations.
\begin{definition}\label{definition_deviation} Fix a non-trivial vector $w \in \Bbb R^2$. We say that $\widetilde f$ has bounded deviation along direction $w$ (from its rotation set $\rho(\widetilde f)$), if there exists $M>0$, such that for any $n\geq 0$ and any $\widetilde x\in \Bbb R^2$, \begin{equation}\label{one_direction} \text{pr}_w(\widetilde f^n(\widetilde x)-\widetilde x-n \rho(\widetilde f)) \leq M. \end{equation} \end{definition} \begin{Remark} Note that with respect to this definition, having bounded deviation along $w$ and $-w$ are two different conditions.
\end{Remark}
The following statement essentially follows from the Gottschalk-Hedlund theorem; see also ~\cite{Linearization} for a somewhat more elementary proof.
\begin{lemma}[Proposition A in~\cite{Linearization}]\label{jager} Let $f \in \text{Homeo}_0(\Bbb T^2)$ preserve a minimal set $K \subset \Bbb T^2$. Suppose $\rho(\widetilde f, K) = \{ (\alpha,\beta)\}$ where $(\alpha,\beta)$ is totally irrational, and
$\widetilde f\big|_{\pi^{-1}(K)}$ has bounded deviation along every direction.
Then, there exists a continuous surjective map $\phi:K\to \Bbb T^2$, homotopic to the inclusion, satisfying that \begin{equation}
\phi\circ f\big|_K=R_{(\alpha,\beta)}\circ \phi, \end{equation} where $R_{(\alpha,\beta)}$ is the rigid rotation on $\Bbb T^2$. \end{lemma}
The following result establishes the bounded deviation property in the perpendicular direction when the rotation set is the special one we are interested in.
\begin{theorem}[\cite{Guilherme_Thesis}]\label{Fabio_Bounded} Suppose $\widetilde f \in \widetilde{\text{Homeo}}_0(\Bbb T^2)$, and $\rho(\widetilde f)$ is the segment from $(0,0)$ to a totally irrational point $(\alpha,\beta)$. Then $\widetilde f$ has bounded deviation along the perpendicular directions $(-\beta,\alpha)$ and $(\beta,-\alpha)$. \end{theorem}
\subsection{Some Fundamental Tools in Topological Dynamics}
\\ Let $f: \Bbb R^2 \to \Bbb R^2$ denote an orientation-preserving
homeomorphism.
A properly
embedded oriented line $\gamma:\Bbb R\to \Bbb R^2$
is called a \textit{Brouwer line}, if $f(\gamma(\Bbb R))$ and $f^{-1}(\gamma(\Bbb R))$ belong to different connected components of the complement of $\gamma(\Bbb R)$.
We also abuse notation by writing
$\gamma=\gamma(\Bbb R).$
We call these components
\emph{the right of $\gamma$} and \emph{the left of $\gamma$},
and denote them by $\mathcal R(\gamma)$ and $\mathcal L(\gamma)$,
respectively.
The following result is usually attributed to
Brouwer (see also~\cite{Brown_newproof}); we refer to \cite[Proposition 1.3]{Franks_PB} for a very useful generalization. Here we state a weaker version, which is sufficient for our use. \begin{lemma}\label{Brouwer} Suppose $f:\Bbb R^2\to \Bbb R^2$ is an orientation-preserving homeomorphism whose fixed points are isolated. If there exists a topological open disk $U$ such that $f(U)\bigcap U =\emptyset$ and, for some $k\geq 2$, $f^k(U)\bigcap U\neq \emptyset$, then $f$ admits a fixed point with positive topological index. \end{lemma}
\begin{definition} Let $f: (U,p)\to (V,p)$ denote a local homeomorphism, where $U$ and $V$ are two open neighbourhoods of an isolated fixed point $p \in \Bbb R^2$. Choose a small disk $D\subset U$ containing $p$ and no other fixed point of $f$, whose boundary is $\beta=\partial D$. Define $g:\beta \to S^1$ by $g(x)=\frac{x-f(x)}{\|x-f(x)\|}$. The topological index of $f$ at the point $p$ is defined as the degree of the map $g$, denoted by $\text{Index}_f(p)$. \end{definition} The following is a consequence of the classical result usually referred to as the Lefschetz fixed-point formula (see for example Theorem 8.6.2 in \cite{Katokbook}).
\begin{lemma}\label{Lefschetz} Let $f \in \text{Homeo}_+^0(\Bbb T^2)$ and assume all fixed points are isolated. Then \begin{equation} \sum_{p\in \text{Fix}(f)} \text{Index}_f(p)=0. \end{equation} \end{lemma}
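For instance (a standard computation, recorded here only for the reader's convenience), if $p$ is a fixed point of a local diffeomorphism such that $1$ is not an eigenvalue of $Df(p)$, then $\text{Index}_f(p)=\text{sign}\det(\text{Id}-Df(p))$. In particular, a hyperbolic saddle with positive eigenvalues $0<\mu<1<\lambda$ has index $\text{sign}\,(1-\lambda)(1-\mu)=-1$, while a hyperbolic sink or source has index $+1$; Lemma~\ref{Lefschetz} then says that, over all fixed points, these contributions must cancel out.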
\subsection{Generic Conditions}
\\
Recall $\text{Diff}_{r,\text{nw}}(\Bbb T^2)$ is the set of non-wandering $C^r$ diffeomorphisms of $\Bbb T^2$, which are homotopic to the identity, and $\text{Diff}_{r,\text{Leb}}(\Bbb T^2)\subset \text{Diff}_{r,\text{nw}}(\Bbb T^2)$ is the subset of area-preserving ones. Below, we say $p$ is a periodic \textit{saddle-like} point for $f$ if there is $n>0$, such that $p$ is an $f^n$-fixed point, and with respect to $f^n$, the dynamics near $p$ is obtained by gluing a finite number of topological saddle-sectors, see \cite{Dumortier}.
As usual, we denote by $W^u(p)$ (respectively, $W^s(p)$) the union of $p$ and all the unstable branches at $p$ (respectively, the stable branches at $p$).
\begin{definition}\label{Gr_definition} For any $r\geq 1$ or $r=\infty$, define $\mathcal G^r \subset \text{Diff}_{r,\text{nw}}(\Bbb T^2)$ as the subset of diffeomorphisms $f$ satisfying the following two conditions: \begin{enumerate} \item if $p \in \Bbb T^2$ is an $n$-periodic point, then $Df^n(p)$ does not have $1$ as an eigenvalue. \item $f$ does not have saddle connections. \end{enumerate} \end{definition}
\begin{Remark}\label{these_conditions_are_generic} By Theorem 3 (iii) and Theorem 9 of~\cite{Robinson}, for all $r \geq 1$, the set $\mathcal G^r \bigcap \text{Diff}_{r,\text{Leb}}(\Bbb T^2)$ contains a residual subset of $\text{Diff}_{r,\text{Leb}}(\Bbb T^2)$. Thus the above conditions are generic for area-preserving diffeomorphisms. \end{Remark}
\begin{definition}\label{topological_transverse} Let $f:S\to S$ be a $C^1$ diffeomorphism on a closed surface $S$. Let $p,q$ be two periodic saddle-like points. We say that $W^u(p)$ and $W^s(q)$ intersect at a point $w$ in a topologically transverse way, if there exists an open disk $B=B(w,\delta)$ with some radius $\delta>0$ such that (denote by $\alpha,$ respectively $\beta$, the connected component of $W^u(p)\bigcap B,$ respectively $W^s(q)\bigcap B$, both containing the point $w$):
\begin{enumerate} \item $B \backslash \alpha= B_1 \sqcup B_2$, which is a disjoint union. \item $\beta\backslash \{w\}= \beta_1\sqcup \beta_2,$ which is a disjoint union and $\beta_1 \subset B_1\bigcup \alpha$, $\beta_2 \subset B_2\bigcup \alpha$, with $\beta_1 \not \subset \alpha$ and $\beta_2\not\subset \alpha$. In other words, $\beta_1$ intersects $B_1$ and $\beta_2$ intersects $B_2$. \end{enumerate} \end{definition}
\begin{Remark}\label{saddle_like_case_noncontracible} The radius of the disk, $\delta>0,$ might have to be taken strictly away from $0$, because the connected component of $\alpha\bigcap\beta$ containing $w$ could be a non-trivial arc. \end{Remark}
The following lemma will be useful for obtaining non-contractible periodic orbits, i.e., those orbits with non-zero rotation vectors. \begin{lemma}\label{create_topological_horseshoe}[Lemma 0 in \cite{Salvador_Growth}] Suppose $\widetilde f\in \widetilde{ \text{Diff}}_{r}(\Bbb T^2)$ has
a hyperbolic periodic saddle point $\widetilde q$, and suppose $W^u(\widetilde q)$ and $W^s(\widetilde q)+(a,b)$ intersect in a
topologically transverse way, for some integer vector $(a,b).$ Then there exists some integer $N>0$ such that the diffeomorphism $\widetilde g: = \widetilde f^{N}-(a,b)$ admits a fixed point $\widetilde p$. Thus, $\rho(\widetilde f, \pi(\widetilde p) )=(\frac{a}{N},\frac{b}{N})$. \end{lemma} \begin{Remark}\label{saddle_like_points_horseshoe} Following exactly the same proof as in \cite{Salvador_Growth}, the conclusion is also true when the periodic point $\widetilde q$ has topological index $0$, and admits exactly one stable and one unstable branch, whose local dynamics is described in Figure 1. \end{Remark} \subsection{A Broader Class of (Non-generic) Diffeomorphisms}
\\
The following definition appears as Definition 1.6 on page 12 of~\cite{Dumortier}.
Such a study is based on the important work of Takens
(See Theorem 4.6 in~\cite{Takens}).
Although all the theory in \cite{Dumortier} was stated for $C^\infty$ maps, for the results we consider below,
there is no substantial difference in the $C^r$ case.
\begin{definition}\label{Lojasiewicz_type} Let $f: (U,p)\to (W,p)$ be
a local planar $C^r$-diffeomorphism with an isolated fixed point $p$.
Assume all eigenvalues of $Df(p)$ belong to the unit circle and
let $S$ be the semi-simple part of $Df(p)$ in its Jordan
normal form. Then up to a $C^r$-coordinate change,
there exists a formal $C^r$ vector field $X$, invariant under $S$,
such that, the $r$-jet of $f$ and
the $r$-jet of $S \circ X_1$ coincide at $p$, where $X_1$ is the time-1 map of the formal flow generated by $X$.
We say $f$ is \textit{of Lojasiewicz type} at $p$, if the following condition holds:
\begin{itemize}
\item there exist an integer $k\leq r$ and constants $C,\delta>0$,
such that, for any $z$ satisfying $\|z-p\|\leq \delta,$
\begin{equation}
\|X(z)\| \geq C \|z-p\|^k.
\end{equation}
\end{itemize}
\end{definition}
The following result was essentially obtained in Section 2 of ~\cite{Addas_Calvez}. \begin{lemma}\label{section_two_lemma} Assume $f\in \text{Diff}_{r,\text{nw}}(\Bbb T^2)$ has an isolated fixed point $p$ and the topological index of $f$ at $p$ is zero. Also suppose that if both eigenvalues of $Df$ at $p$ lie in the unit circle, then $f$ is of Lojasiewicz type at $p.$ Then, there exists exactly one stable and one unstable branch at $p$. Moreover, the local dynamics can be precisely described, see Figure~\ref{the_local_situation}.
\end{lemma}
\begin{figure}
\caption{The Local Dynamics Around a Fixed Point}
\label{the_local_situation}
\end{figure} \begin{proof} Consider the linear transformation $Df(p)$, which has positive determinant. If $1$ is not an eigenvalue of $Df(p)$, then $p$ is either a hyperbolic fixed saddle point, an elliptic fixed point (that is, both eigenvalues are in the unit circle and not real), or $-1$ is an eigenvalue. In all the above possibilities, $p$ has topological index equal to $-1$ or $1$. As the index at $p$ is zero, the above cases do not happen. If the two eigenvalues of $Df(p)$ are $1$ and some $a>0$ with $a\neq 1$, then as an application of center manifold theory (see \cite{Carr}), we get that $p$ can be a topological saddle, a topological sink (or source), or a saddle-node. Since by assumption $p$ has topological index $0$, it must be a saddle-node. In this case, $p$ has two saddle sectors, and one sector which is either contracting or expanding, a contradiction with the non-wandering condition. For more details, see Proposition 6 from~\cite{Addas_Calvez}, as well as~\cite{Carr}.
Thus, $Df(p)$ must have both eigenvalues equal to $1$. The rest of the proof follows exactly the same lines from the argument in Section 2 of \cite{Addas_Calvez}. \end{proof}
\begin{definition}\label{generic_in_generic}
For any $r\geq 1$ or $r=\infty$, define $\mathcal K^r \subset \text{Diff}_{r,\text{nw}}(\Bbb T^2)$ to be the subset of diffeomorphisms $f$ satisfying the following four conditions: \begin{enumerate} \item for all $n>0$, there are at most finitely many $f^n$-fixed points.
\item for any $f^n$-fixed point $z$, of prime period $n$, if $1$ is an eigenvalue of $Df^n(z),$ then it has multiplicity $2$ and $f^n$ is of Lojasiewicz type at $z$. Moreover, in this case $\text{Index}_{f^n}(z)$ is zero, and so Lemma~\ref{section_two_lemma} implies that the local dynamics near $z$ is given by Figure 1. For families of maps, this situation corresponds to the birth or death of periodic points. \item for any $f^n$-fixed point $w$, of prime period $n$, if $-1$ is an eigenvalue of $Df^n(w),$ then $\text{Index}_{f^n}(w)$ is $1.$ For families, this situation corresponds to a period-doubling bifurcation. \item there are no connections between saddle-like periodic points. \end{enumerate}
\end{definition}
$\mathcal K^r\backslash \mathcal G^r$
can be thought of as a set of typical diffeomorphisms in the complement of $\mathcal G^r$. The definition can also be justified as follows. Let $\mathcal F$ denote some one-parameter
$C^r$-generic family of area-preserving diffeomorphisms,
\begin{equation}
\mathcal F:=\{f_t\}_{t\in[a,b]} \subset \text{Diff}_{r,\text{Leb}}(\Bbb T^2), \text{ for some } r\geq 1.
\end{equation}
The following statement
is a combination of results from ~\cite{Salvador_Growth} and ~\cite{Addas_Calvez}.
The proofs were based on previous results contained in~\cite{Meyer} and~\cite{Dumortier}.
\begin{lemma}\label{generic_family}[Section 1.3.3 of~\cite{Salvador_Growth} and
Section 2 of ~\cite{Addas_Calvez}]
For such a generic
$C^r$-family $\mathcal F$ as above,
all maps $f_t$ belong to $\mathcal K^r$; in particular, such a family never has saddle-like connections (tangencies are of course allowed), and with respect to periodic points, the only allowed degeneracies for a certain parameter $t$ are period-doubling bifurcations (item (3) above) and the one which appears in item (2), which, as we already said, is related to the birth or disappearance of periodic points for families of maps.
\end{lemma}
The next result gives a perturbation consequence based on the local dynamics near fixed points. Let $Fix(\widetilde{f})=\{z\in \Bbb T^2:\forall \widetilde{z}\in \pi ^{-1}(z),$ $ \widetilde{f}(\widetilde{z})=\widetilde{z}\}.$
\begin{prop}[Proposition 9 of \cite{Addas_Calvez}]\label{AC} Suppose $\widetilde f \in \widetilde{\text{Homeo}}_0 (\Bbb T^2)$. Assume $Fix(\widetilde{f})$ is finite and for any $z_0\in \text{Fix}(\widetilde{f})$, there exists a local continuous chart $\psi:U\to \Bbb R^2$, such that, for any $z\in ( U\cap f^{-1}(U) )\backslash \{z_0\}$, $\text{pr}_1\circ \psi\circ f(z) > \text{pr}_1 \circ \psi(z)$.
Then, there exists $\varepsilon>0$,
such that for
any $\widetilde g \in \widetilde{\text{Homeo}}_0 (\Bbb T^2)$
with $\text{dist}_{C^0}(\widetilde g,\widetilde f)<\varepsilon$,
$(0,0)\notin \text{int}(\rho(\widetilde g))$. \end{prop} \subsection{Prime Ends Rotation Numbers}
\\
For an open topological disk $D$ contained in some surface, one can attach an artificial circle, called the \textit{prime ends circle}, denoted by $b_{\text{PE}}(D)$. Moreover, the prime ends circle $b_{\text{PE}}(D)$ can be topologized, such that the union $D\bigcup b_{\text{PE}}(D)$ is homeomorphic to the standard closed unit disk in $\Bbb R^2$. We call it the \textit{prime ends disk}, and this procedure is referred to as
\textit{prime ends compactification}. This is the beginning of Carath\'eodory's prime ends theory, and we refer to~\cite{Mather_topological}, \cite{Mather_invariant} and \cite{PE_01} for more details.
If $f$ is a homeomorphism of the closure of an open topological disk $D$ onto itself, then $f$ extends to the prime ends disk $D\bigcup b_{\text{PE}}(D)$, so it induces a homeomorphism on the prime ends circle $b_{\text{PE}}(D)$.
Then, the dynamics restricted to this circle defines
a rotation number, called the \textit{prime ends rotation number}, and denoted by $\rho_{\text{PE}}(f,D)$. The following lemma is a combination of important results from several papers.
\begin{lemma}\label{primeends} Suppose $\widetilde h$
is a lift of some $h\in \text{Diff}_{r ,\text{nw}}(\Bbb T^2)$
satisfying the following properties: \begin{enumerate} \item There are at most finitely many $h^n$-fixed points for all $n\geq 1$;
\item For all $n\geq 1$, and any $h^n$-fixed point $p$, of prime period $n$,
\begin{enumerate}
\item either none of the eigenvalues of $Dh^n(p)$ is equal to $1$, \item or, if one of the eigenvalues of $Dh^n(p)$ is $1$, then the topological index of $h^n$ at $p$ is zero and actually, both eigenvalues are equal to $1$ (see the proof of Lemma \ref{section_two_lemma}). Moreover, $p$ is of Lojasiewicz type for $h^n$. \end{enumerate} \end{enumerate} Let $K\subset \Bbb R^2$ be an $\widetilde h$-invariant continuum and let $O$ denote an $\widetilde h$-invariant connected component of $K^c.$ Write $\rho_{\text{PE}}(\widetilde {h}, O)$ to denote the prime ends rotation number of $\widetilde h$ restricted to $O$. Then the following statements are true: \begin{enumerate}[i)] \item If $\rho_{\text{PE}}(\widetilde {h}, O)$ is rational and $O$ is bounded, or $\rho_{\text{PE}}(\widetilde {h}, O)$ is zero, then
$\partial O$ contains only saddle-like $\widetilde {h}$-periodic points, and connections between these saddle-like periodic points. \item If $O$ is not equal to $K^c$ and $\rho_{\text{PE}}(\widetilde {h}, O)$ is irrational, then there is no $\widetilde h$-periodic point in $\partial O$. \end{enumerate} \end{lemma}
\begin{proof}[Sketch of the proof] Let us show item (i). Assume $\rho_{\text{PE}}(\widetilde {h}, O)=\frac pq$ which is in reduced form. Then, as $h$ is non-wandering, a prime chain corresponding to a $q$-periodic prime end $\widehat{z}$ has the property that each of its crosscuts must intersect its image under $\widetilde{h}^{q},$ otherwise the corresponding cross-sections would contain wandering domains, even in the torus (this argument goes back to Cartwright-Littlewood, see for instance Proposition 2.1 of \cite{Franks_LeCalvezRegions}). So, the principal set of $\widehat{z}$ is made of $\widetilde{h}^{q}$-fixed points, something that implies the first assertion of item (i), i.e., there exists some $\widetilde{h}^{q}$-fixed point $z\in \partial O$.
The main results of \cite{PE_02} (see also Theorem 1.2, Corollary 1.3 and Theorem 1.4 of the report \cite{ICMKoro} from ICM 2018), imply: \begin{enumerate} \item if $q=1$, then all $\widetilde{h}$-periodic points $z\in \partial O$ are fixed and the eigenvalues of $D\widetilde{h}(z)$ are both real and positive; \item if $O$ is bounded, for any value of $p/q$, all $\widetilde{h}$-periodic points $z\in \partial O$ have prime period $q$ and the eigenvalues of $D\widetilde{h}^q(z)$ are both real and positive; \end{enumerate}
So, as $h$ is non-wandering, a periodic point $z\in \partial O$ is either a hyperbolic saddle or both eigenvalues of $Dh^q(z)$ are equal to $1$ and $z$ has topological index $0$. In this way, Lemma \ref{section_two_lemma} implies that $z$ is a saddle-like periodic point, either a hyperbolic saddle or a point with zero index and local dynamics as in Figure~\ref{the_local_situation}.
With this local description, Theorem 5.1 in~\cite{Mather_invariant} implies that the boundary $\partial O$ contains connections between saddle-like periodic points, as described above. We should remark that although in reference \cite{ICMKoro}, most statements assume preservation of area and boundedness of $O$ as a planar subset, the arguments therein only use the fact that the dynamics is non-wandering restricted to a small neighbourhood of the compact set $K$. This completes the proof of
item (i).
Item (ii) is a direct consequence of Theorem C of \cite{PE_01}. \end{proof}
\section{Perturbations for Homeomorphisms with Unbounded Deviation} \label{proofs_unbounded}
In this section and in the next, for any $w\in \Bbb R^2$, we will write $[w]$ to denote an integer vector closest to $w$. Also, a condition assumed in all the theorems proved here is unbounded deviation along a fixed direction, the direction along which we want the rotation set to grow.
\begin{theorem}\label{instability} Let $\widetilde f \in \widetilde{\text{Homeo}}_{0,\text{nw}}(\Bbb T^2)$ whose rotation set $\rho(\widetilde f)$ is a line segment from $(0,0)$ to some $(\alpha,\beta) \in \Bbb R ^2$ which is totally irrational. Assume $\widetilde f$ has unbounded deviation along the direction $(\alpha,\beta)$. Then $\widetilde f$ can be $C^0$-approximated by $\widetilde g \in \widetilde {\text{Homeo}_0}(\Bbb T^2)$ such that $\rho(\widetilde g)$ has non-empty interior and
\begin{equation}
\rho(\widetilde f)\backslash \{(0,0)\}
\subset \text{Int}( \rho(\widetilde g) ).
\end{equation} \end{theorem}
When the rotation set is as above, we can also study a similar situation around the other endpoint, $(0,0)$.
\begin{theorem}\label{case_of_0} Let $\widetilde{f}$ and $\rho(\widetilde f)$ be as in Theorem~\ref{instability}. Assume $\widetilde {f}$ admits unbounded deviation along $-(\alpha,\beta)$. Then $\widetilde {f}$ can be $C^0$-approximated by $\widetilde g \in \widetilde{\text{Homeo}}_0(\Bbb T^2)$, such that $(0,0) \in \text{Int}( \rho(\widetilde g) )$. \end{theorem}
\subsection{Some Preparations}\label{forms}
\\ Given the totally irrational vector $(\alpha,\beta)$, define \begin{align} L_0 & : y=\frac{\beta}{\alpha}x. \\ L_1 & : \alpha x+\beta y=\alpha^2+\beta^2, \end{align} which are straight lines along the directions $(\alpha, \beta)$, $(-\beta,\alpha)$, respectively, and intersecting at the point $(\alpha,\beta)$. Also define the four connected components of the complement of $L_0 \bigcup L_1$ in $\Bbb R^2$. See Figure~\ref{the_four_regions}. \begin{align}
\Delta_0 & = \{ \widetilde z \in \Bbb R^2 \big| \text{pr}_{(-\beta,\alpha)} (\widetilde z) >0
\text{ and } \text{pr}_{(\alpha,\beta)} \big(\widetilde z - (\alpha,\beta) \big) > 0 \}. \label{region1}\\
\Delta_1 & = \{ \widetilde z \in \Bbb R^2 \big| \text{pr}_{(-\beta,\alpha)} (\widetilde z)<0 \text{ and } \text{pr}_{(\alpha,\beta)}\big( \widetilde z-(\alpha, \beta) \big) < 0\}. \\
\Omega_0 & = \{ \widetilde z \in \Bbb R^2 \big| \text{pr}_{(-\beta,\alpha)} (\widetilde z)<0 \text{ and } \text{pr}_{(\alpha,\beta)} \big(\widetilde z-(\alpha,\beta) \big) >0 \}. \\
\Omega_1 & = \{ \widetilde z \in \Bbb R^2 \big| \text{pr}_{(-\beta,\alpha)}(\widetilde z) >0 \text{ and } \text{pr}_{(\alpha,\beta)} \big(\widetilde z -(\alpha,\beta) \big) < 0 \}.\label{region4} \end{align}
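For orientation, note that the four regions depend only on the signs of the two projections (so any of the usual normalizations of $\text{pr}_v$ defines the same sets): writing a point in the orthogonal frame determined by $(\alpha,\beta)$ and $(-\beta,\alpha)$,
\begin{equation*}
\widetilde z=(\alpha,\beta)+s\,(\alpha,\beta)+t\,(-\beta,\alpha)\in \Delta_0
\quad\Longleftrightarrow\quad
s>0 \ \text{ and } \ t>0,
\end{equation*}
and the remaining three regions correspond to the other sign combinations of $(s,t)$.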
\begin{figure}
\caption{The Four Regions.}
\label{the_four_regions}
\end{figure}
\begin{lemma}\label{firstclosing} Let $\{x^{(0)}, \cdots, x^{(n-1)}\}$ be an $\varepsilon$-pseudo periodic orbit of some $f\in \text{Homeo}_0(\Bbb T^2)$. Then, $f$ can be $C^0$-$\varepsilon$-perturbed to $g$, also in $\text{Homeo}_0(\Bbb T^2)$
such that $g^n(x^{(0)})=x^{(0)}$. Moreover, if $f$ preserves area, then so does $g$. \end{lemma}
\begin{proof} See~\cite{Franks_ETDS_8}. For the area-preserving case, just note that $g$ is obtained from $f$ by a series of perturbations supported in finitely many disjoint disks, and it is well known that these perturbations, which are the identity on the boundary of each disk, can be chosen to be area-preserving themselves.
\end{proof}
Recall that Theorem~\ref{Fabio_Bounded} says that $\widetilde f$ has bounded deviation along the directions $(-\beta,\alpha)$ and $(\beta,-\alpha)$, perpendicular to the rotation set. Based on this, the following lemma gives small displacements along these perpendicular directions, for some chosen iterates.
\begin{lemma}[Small Displacement]\label{displacement}
Let $w$ be either $(-\beta,\alpha)$ or $(\beta,-\alpha).$
Then, for any $\delta>0$ and any $\widetilde z_0\in \Bbb R^2$,
there exists $n_0$ such that,
\begin{equation}\label{bounded_displacement}
\text{pr}_{w}
(\widetilde f^n (\widetilde{z}_0) -\widetilde f^{n_0} (\widetilde {z}_0) )
< \delta, \text{ for any } n>n_0.
\end{equation} \end{lemma}
\begin{proof}
By Theorem~\ref{Fabio_Bounded}, the supremum below is finite, so we can choose $n_0 \geq 1$ with
\begin{equation}
\text{pr}_{w} (\widetilde f^{n_0}
(\widetilde {z}_0)- \widetilde {z}_0) >
\sup_{n\geq 1} \{
\text{pr}_{w} (\widetilde f^n (\widetilde {z}_0)- \widetilde {z}_0) \} - \delta.
\end{equation}
Then, for any $n>n_0$,
\begin{equation*}
\text{pr}_{w} \big(\widetilde f^n (\widetilde{z}_0) -\widetilde f^{n_0} (\widetilde {z}_0)\big)
= \text{pr}_{w} \big(\widetilde f^n (\widetilde{z}_0) -\widetilde {z}_0\big)
- \text{pr}_{w} \big(\widetilde f^{n_0} (\widetilde{z}_0) -\widetilde {z}_0\big)
< \delta,
\end{equation*}
which is exactly (\ref{bounded_displacement}). \end{proof}
\subsection{The Irrational Endpoint} \begin{proof}[Proof of Theorem~\ref{instability}]
Fix $\varepsilon>0$. By Theorem 1 of~\cite{Salvador}, for any ergodic measure with average rotation vector $(\alpha,\beta)$, around a typical point which is $f$-recurrent, there is some point $y$ and a positive integer $n$, such that \begin{align} & \text{dist}_{\Bbb T^2}(f^n(y), y)<\varepsilon.\label{epsilon_close}\\ & \text{pr}_{(\alpha,\beta)}([ \widetilde f^n(\widetilde y) - \widetilde y] -n (\alpha,\beta)) > 0 . \label{epsilon_away} \end{align} Considering Figure~\ref{the_four_regions}, the last estimate implies that \begin{equation}\label{Delta0Omega0} \frac {1}{n} [\widetilde f^n(\widetilde y) -\widetilde y] \in \Omega_0 \bigcup \Delta_0. \end{equation}
From now on, we assume that $\widetilde y_0$ and $n^\ast \geq 1$ are such that $f^{n^\ast}(y_0)$ is $\varepsilon$-close to $y_0$, and \begin{equation}\label{choice_of_y0} \frac {1}{n^\ast} [\widetilde f^{n^\ast}(\widetilde y_0)-\widetilde y_0] \in \Delta_0. \end{equation} The other possibility (the rotation vector obtained in the above expression belonging to $\Omega_0$) is analogous.
The goal is to show that we can always find another $6\varepsilon$-pseudo periodic orbit, of some period $m$, lifting to a $6\varepsilon$-pseudo $\widetilde f$-orbit segment starting at some $\widetilde z'$ and ending at some $\widetilde z''$, so that the rational vector \begin{equation} \frac {1}{m} [\widetilde z''-\widetilde z'] \in \Omega_0. \end{equation}
We assume the totally irrational $(\alpha,\beta)$ has norm $1$ for simplicity. For any $K>0$, define $\mathcal R_K\subset \Bbb T^2$ to be the set of points $x$ such that for at least $K$ choices of positive integers $n$,
we have $\text{dist}_{\Bbb T^2}(f^n(x), x) \leq \frac 1K$ and $\text{pr}_{(\alpha,\beta)} \big(\widetilde f^n (\widetilde x) - \widetilde x - n(\alpha,\beta) \big) \geq K.$ We claim that $\mathcal R_K$ is non-empty for any positive constant $K$. In fact, we can cover $\Bbb T^2$ with $N$ disks (for some integer $N$), all with diameter $\frac 1K.$ Then by the assumption on unbounded deviation in the direction $(\alpha, \beta)$, it is not hard to find $\widetilde x$, and integers $0=m_0<m_1<\cdots<m_{KN}$, such that, for any $k=1,\cdots,KN$, \begin{equation} \text{pr}_{(\alpha,\beta)}
\big ( \widetilde f^{m_k}(\widetilde x) - \widetilde f^{m_{k-1}}(\widetilde x) - (m_k-m_{k-1})(\alpha,\beta) \big) \geq K. \end{equation}
By the pigeonhole principle,
for at least $K+1$ choices of the indices among those $m_j$'s, the corresponding
iterates of $x$ lie in one single disk of diameter no more than $\frac 1K$.
Clearly, between any two of these chosen ones, say $m_i<m_j$, we see
\begin{equation}
\text{pr}_{(\alpha,\beta)}
\big(\widetilde f^{m_j} (\widetilde x)
- \widetilde f^{m_i}(\widetilde x)
-(m_j-m_i)(\alpha,\beta) \big) \geq K,
\end{equation}
and in particular the claim follows by taking the first iterate among those.
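For completeness, the estimate above is just a telescoping sum, using the linearity of $\text{pr}_{(\alpha,\beta)}$:
\begin{align*}
&\text{pr}_{(\alpha,\beta)}
\big(\widetilde f^{m_j}(\widetilde x)-\widetilde f^{m_i}(\widetilde x)-(m_j-m_i)(\alpha,\beta)\big)\\
&\qquad=\sum_{k=i+1}^{j}\text{pr}_{(\alpha,\beta)}
\big(\widetilde f^{m_k}(\widetilde x)-\widetilde f^{m_{k-1}}(\widetilde x)-(m_k-m_{k-1})(\alpha,\beta)\big)
\;\geq\;(j-i)\,K\;\geq\;K.
\end{align*}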
Next we take an accumulation point $x^\ast$ of the sets $\mathcal R_K$ as $K$ tends to infinity. By Lemma~\ref{dense}, the point $x^\ast -4\varepsilon (-\beta,\alpha)$
is approximated by an $f$-recurrent point. Then, with the help of Lemma~\ref{displacement}, one can find a point $z_1$ which is some forward iterate of the recurrent point we just found, and a positive integer $n_1$, such that both $z_1$ and $f^{n_1} (z_1)$ are $\varepsilon$-close to $x^\ast -4\varepsilon (-\beta,\alpha)$ and \begin{equation}
\text{pr}_{(-\beta,\alpha)} \big ( \widetilde f^{n_1} (\widetilde z_1)-
\widetilde{z}_1 \big ) < \varepsilon. \end{equation}
Next, by choosing another $f$-recurrent point near the point $ f^{n_1} (z_1)- 4\varepsilon (-\beta,\alpha)$ and then applying Lemma~\ref{displacement}, we can find another orbit segment satisfying similar estimates. In fact, we can inductively find $z_1, z_2, \cdots,z_{K_0}$ with disjoint orbits, and integers $n_1,n_2,\cdots,n_{K_0}$, such that, for any $i=2,\cdots, K_0$, both $z_i$ and $f^{n_i}(z_i)$ are $\varepsilon$-close to $f^{n_{i-1}}(z_{i-1})-4\varepsilon(-\beta,\alpha)$ and \begin{equation}
\text{pr}_{(-\beta,\alpha)} \big ( \widetilde f^{n_i} (\widetilde z_i)-
\widetilde{z}_i \big ) < \varepsilon. \end{equation} Moreover, $K_0$ is chosen as the smallest integer such that $f^{n_{K_0}}(z_{K_0})-4\varepsilon (-\beta,\alpha)$ is $\varepsilon$-close to $x^\ast$ in $\Bbb T^2$. It is not difficult to ensure that such a finite $K_0$ exists, so the process stops. Then we sum the deviations (in the direction $(\alpha,\beta)$) of all the above orbit segments: \begin{equation}\label{M2} M := \sum_{i=1}^{K_0} \text{pr}_{(\alpha,\beta)} \big( \widetilde f^{n_i}(\widetilde z_i) - \widetilde z_i - n_i(\alpha,\beta) \big). \end{equation}
When we consider a point $x$
in $\mathcal R_K$ with $K> |M| +\varepsilon(K_0+1)$,
by definition, for at least $K$ positive integers, the corresponding iterates of $x$ all lie in the same disk of diameter at most $1/K$, and, pairwise, the deviation along the direction $(\alpha,\beta)$ is
at least $|M| +\varepsilon(K_0+1)$. Moreover, the dynamics has bounded deviation along the perpendicular directions (see Theorem~\ref{Fabio_Bounded}). So, when $K$ is sufficiently large, we can find two of these iterates so that along the direction $(-\beta,\alpha)$ the deviation is at most $\varepsilon$ (by an argument similar to the one in Lemma~\ref{displacement}).
Recall now that $x^\ast$ is an accumulation point of $\mathcal R_K$. Therefore, by the above paragraph, we can find $z_0$ and an integer $n_0$, such that both $z_0$ and $f^{n_0}(z_0)$ are $\varepsilon$-close to $x^\ast$,
\begin{align} & \text{pr}_{(-\beta,\alpha)} \big( \widetilde f^{n_0}(\widetilde z_0) - \widetilde z_0 \big) < \varepsilon \text{ and } \label{jump_01} \\ & \text{pr}_{(\alpha,\beta)} \big( \widetilde f^{n_0}(\widetilde z_0) - \widetilde z_0 - n_0 (\alpha,\beta)\big)
> |M| +\varepsilon(K_0+1). \label{large_deviation} \end{align}
Moreover, we can require that the orbits of $z_i$, $i=0,\cdots,K_0$ and $y_0$ are all pairwise disjoint. Then we write down the following $K_0+1$ point-wise disjoint
$f$-orbit segments, namely \begin{equation}\label{K+1segments}
\{z_1, f(z_1), \cdots, f^{n_1}(z_1)\}, \cdots,
\{z_{K_0}, \cdots, f^{n_{K_0}}(z_{K_0})\},
\{z_0, \cdots, f^{n_0}(z_0)\}. \end{equation} They together form a $6\varepsilon$-pseudo periodic orbit of period
\begin{equation}
\ell= \sum_{j=0}^{K_0}n_j.
\end{equation} Note that (\ref{large_deviation}) implies that the final deviation of the whole pseudo-orbit along $(\alpha,\beta)$ is positive.
Among those $K_0+1$ segments in (\ref{K+1segments}), the way we jump between two consecutive orbit segments gives at least $2\varepsilon$ deviation along the direction $(\beta,-\alpha)$, and within each segment the deviation along $(-\beta,\alpha)$ is at most $\varepsilon$. So in the end we see positive deviation along $(\beta,-\alpha)$. It follows that this pseudo-orbit sees a rotation vector in the region $\Omega_0$.
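To make the positivity along $(\beta,-\alpha)$ explicit, here is a rough accounting, not optimizing constants and using only the estimates stated above: the closed pseudo-orbit consists of $K_0+1$ orbit segments and $K_0+1$ jumps, each jump contributing at least $2\varepsilon$ and each segment at least $-\varepsilon$ to the deviation along $(\beta,-\alpha)$, so the total deviation along $(\beta,-\alpha)$ is at least
\begin{equation*}
(K_0+1)\cdot 2\varepsilon-(K_0+1)\cdot\varepsilon=(K_0+1)\,\varepsilon>0.
\end{equation*}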
Thus, we can apply Lemma~\ref{firstclosing} twice in order to close both this pseudo-orbit and the $\varepsilon$-pseudo periodic orbit containing $y_0$ which we found at the beginning of this proof.
So we obtain $\widetilde g$, which is $6\varepsilon$-close to $\widetilde f$ in the $C^0$ topology and admits two periodic points $y_0$ and $z_1$.
By (\ref{choice_of_y0}),
$\rho(\widetilde g, y_0)\in \Delta_0$. And by the above construction, it follows that $\rho(\widetilde g, z_1)\in \Omega_0$.
Since $(0,0)$ is an extremal point of $\rho(\widetilde f)$, by~\cite{Franks_ETDS_8} $f$ admits a contractible fixed point $p^\ast$. Looking back at the whole perturbation process above, we can choose all the orbit segments far from $p^\ast$, so that
the perturbations are supported away from $p^\ast$.
Thus, the rotation set of $\widetilde g$ satisfies
$\rho(\widetilde g) \supset
\{(0,0), \rho(\widetilde g, z_1),
\rho(\widetilde g, y_0)\}$. Since any rotation set is convex (Theorem~\ref{rotationset}),
$\rho(\widetilde g)$ is as required.
As $\varepsilon$ can be arbitrarily small, the proof of Theorem~\ref{instability} is completed. \end{proof}
\subsection{Origin as Endpoint}
\begin{proof}[Proof of Theorem~\ref{case_of_0}] This proof is similar to the previous one. Fix $\varepsilon>0$. First, let us define two new regions as follows. \begin{align}
\mathcal D_0: = \{\widetilde z \in \Bbb R^2 \big| \text{pr}_{(\alpha,\beta)}(\widetilde z)<0 \text{ and } \text{pr}_{(-\beta,\alpha)} (\widetilde z) >0 \} .\\
\mathcal D_1: = \{\widetilde z \in \Bbb R^2 \big| \text{pr}_{(\alpha,\beta)}(\widetilde z)<0 \text{ and } \text{pr}_{(-\beta,\alpha)} (\widetilde z)<0 \}. \end{align}
For any $K>0$, define $\mathcal R'_K$ to be the set of points $z$ such that, for at least $K$ choices of integers $n>0$, the following hold. \begin{align}
\text{dist}_{\Bbb T^2}
(f^n(z), z ) & \leq \frac 1K. \label{recurrence} \\
\text{pr}_{(\alpha,\beta)}
(\widetilde f^n(\widetilde z)-\widetilde z) &
\leq -K. \label{deviation} \end{align}
With the condition of unbounded deviations in the direction $-(\alpha,\beta)$,
by an argument similar to the one in the previous subsection, we can show that $\mathcal R'_K$ is non-empty for any $K>0$.
By taking $K\geq \frac 1\varepsilon$,
we can find an $\varepsilon$-pseudo periodic point $y_0$ of period $n_\ast$,
which realizes a rotation vector in the region $\mathcal D_0\cup \mathcal D_1$.
Without loss of generality, assume it lies in $\mathcal D_0$ (the other possibility is analogous).
More precisely, for a lift $\widetilde y_0$ of $y_0$ and some integer $n_\ast>0$,
$\text{dist}_{\Bbb T^2}(f^{n_\ast}(y_0), y_0) <\varepsilon$, and
\begin{equation}
v =\frac{1}{n_\ast}[\widetilde f^{n_\ast}(\widetilde y_0)- \widetilde y_0] \in \mathcal D_0.
\end{equation}
Then, very similarly to the previous subsection, let $y^{\ast}$ be an accumulation point of
$\mathcal R'_K$ as $K$ tends to infinity. With the help of Lemmas~\ref{dense} and~\ref{displacement}, as well as the definition of $\mathcal R'_K$,
we can choose finitely many orbit segments,
which altogether form a $6\varepsilon$-pseudo periodic orbit, lifting to a $6\varepsilon$-pseudo orbit for $\widetilde f$,
namely $\{\widetilde z_0, \widetilde z_1, \cdots, \widetilde z_{n'}\}$, such that
\begin{equation}
v'= \frac{1}{n'}[\widetilde z_{n'} -\widetilde z_0] \in \mathcal D_1.
\end{equation}
Thus, we obtain two pseudo periodic orbits seeing rotation vectors
$v\in \mathcal D_0$ and $v' \in \mathcal D_1.$
Since $(\alpha,\beta)$ is an
extremal point of $\rho(\widetilde f)$,
by Theorem~\ref{rotationset}, there exists some ergodic $f$-invariant measure $\mu$
satisfying $\rho_\mu(\widetilde f)=(\alpha,\beta)$.
Moreover, for a $\mu$-typical point $x \in \Bbb T^2$,
for some increasing integer sequence $n_j$,
and any lift $\widetilde {x}$,
\begin{align}
\lim_{j\to \infty} f^{n_j}(x) & = x.\\
\lim_{j\to \infty}
\frac{1}{n_j} \big( \widetilde f^{n_j}(\widetilde x) -\widetilde x \big )& = (\alpha,\beta).
\end{align} Then for sufficiently large $n_j$,
$\{x, f(x),\cdots, f^{n_j}(x)\}$ forms an $\varepsilon$-pseudo periodic
orbit, such that the vectors $v, v'$ and
\begin{equation}
w = \frac {1}{n_j} [ \widetilde f^{n_j}(\widetilde x) -\widetilde x ]
\end{equation}
span a triangle which contains the origin $(0,0)$ in its interior.
Note that we can choose the three pseudo orbits to be pair-wise disjoint.
Then, applying Lemma~\ref{firstclosing} three times, we obtain a $6\varepsilon$-perturbation $\widetilde g$ of $\widetilde f$. The rotation set $\rho(\widetilde g)$ contains
at least the three rational points
$v,v'$ and $w$.
By the convexity of the rotation set,
$(0,0)$ is contained in $\text{Int}\big(\rho(\widetilde g) \big)$.
We have completed the proof. \end{proof}
\section{Perturbations for Homeomorphisms with Bounded Deviation}
\subsection{The Totally Irrational Rigid Rotation}\label{proofs_bounded}
\\
We start by showing a simple perturbation result for the rigid rotation $R_{(\alpha,\beta)}$ on $\Bbb T^2$, where $(\alpha,\beta)$ is totally irrational. \begin{prop}\label{Rigid_Case}
For any $\varepsilon>0$, there exists $\widetilde g\in \widetilde{ \text{Homeo}}_0(\Bbb T^2)$ with $\text{dist}_{C^0}(\widetilde g,R_{(\alpha,\beta)})<\varepsilon$, such that, $\rho(\widetilde g)$ has interior, and $(\alpha,\beta) \in \text{Int}(\rho(\widetilde g))$. \end{prop} \begin{proof} Since $(\alpha,\beta)$ is totally irrational, the rigid rotation $R_{(\alpha,\beta)}$ is minimal. For any $x_0\in \Bbb T^2$ and for any small $\varepsilon>0$, consider a small disk $B=B(x_0,\varepsilon)$, which is divided into four regions as follows. \begin{align} \Delta_1(x_0,\varepsilon): & = \{x\in B(x_0,\varepsilon)
\big| \text{pr}_{(\alpha,\beta)}(x-x_0)< 0, \text{pr}_{(-\beta,\alpha)}(x-x_0)<0\}. \label{new_delta_1}\\ \Delta_0 (x_0,\varepsilon): & = \{x\in B(x_0,\varepsilon)
\big| \text{pr}_{(\alpha,\beta)}(x-x_0)> 0, \text{pr}_{(-\beta,\alpha)}(x-x_0)> 0\}.\\ \Omega_1(x_0,\varepsilon): & = \{x\in B(x_0,\varepsilon)
\big| \text{pr}_{(\alpha,\beta)}(x-x_0) < 0, \text{pr}_{(-\beta,\alpha)}(x-x_0)> 0\}.\\
\Omega_0(x_0,\varepsilon):
& = \{x\in B(x_0,\varepsilon)
\big| \text{pr}_{(\alpha,\beta)}(x-x_0)> 0, \text{pr}_{(-\beta,\alpha)}(x-x_0)<0\}. \label{new_omega_0} \end{align} By minimality of $R_{(\alpha,\beta)}$, we can choose some integer $n$, such that $R_{(\alpha,\beta)}^{n}(x_0) \in \Delta_1(x_0,\varepsilon)$. Then, for proper choices of lifts $\widetilde R$ and $\widetilde{x_0}$
of $R_{(\alpha,\beta)}$ and $x_0$, respectively,
$\widetilde R^{n}(\widetilde x_0)$ is $\varepsilon$-close to $\widetilde x_0 +(a,b)$ for some $(a,b)\in \Bbb Z^2$. Moreover, we can write $v_1=(\frac{a}{n},\frac{b}{n})$, and then clearly $v_1 \in \Delta_0((\alpha,\beta),\frac{\varepsilon}{n})$.
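Spelling out the last assertion: since $R_{(\alpha,\beta)}^{n}(x_0)\in \Delta_1(x_0,\varepsilon)$, we may write
\begin{equation*}
n(\alpha,\beta)=(a,b)+u,
\qquad \|u\|<\varepsilon,\quad
\text{pr}_{(\alpha,\beta)}(u)<0,\quad
\text{pr}_{(-\beta,\alpha)}(u)<0,
\end{equation*}
so that $v_1-(\alpha,\beta)=-u/n$ has norm less than $\varepsilon/n$ and positive projections along both $(\alpha,\beta)$ and $(-\beta,\alpha)$, which is exactly the condition defining $\Delta_0\big((\alpha,\beta),\frac{\varepsilon}{n}\big)$.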
In other words, we can find an $\varepsilon$-pseudo periodic orbit for the rigid rotation $R_{(\alpha,\beta)}$, which sees a rational rotation vector in the region $\Delta_0((\alpha,\beta),\frac{\varepsilon}{n})$. We argue in a similar way with respect to the other three regions. Then, we obtain four $\varepsilon$-pseudo periodic orbits for $R_{(\alpha,\beta)}$, starting with $x_0, y_0,z_0,w_0$, respectively. We can also require these orbit segments to be point-wise disjoint. Then, applying Lemma~\ref{firstclosing} four times, these four pseudo orbits can be closed via an $\varepsilon$-perturbation, which produces four periodic orbits. These periodic orbits will have four rational rotation vectors $v_1,v_2,v_3,v_4$, respectively, whose convex hull contains $(\alpha,\beta)$ in the interior. Therefore $(\alpha,\beta)\in \text{Int}(\rho(\widetilde g))$. \end{proof}
\begin{Remark} This proposition can be compared with Theorem 1 in~\cite{Karaliolios}, where, for sufficiently high regularity and provided $(\alpha,\beta)$ satisfies certain Diophantine conditions, due to the ``KAM'' phenomenon the perturbed rotation set either misses $(\alpha,\beta)$ or equals $\{(\alpha, \beta)\}$.
\end{Remark}
\subsection{Bounded Deviations}
\\
In this subsection, we assume that $\widetilde f$ has bounded deviation along the direction $(\alpha, \beta)$. We consider this case for the sake of completeness, although it may not occur at all (cf. Theorem~\ref{generic_bounded_unbounded} and Question~\ref{Question_unbounded}).
\begin{theorem}\label{Bounded_Deviation_Case}
Let $\widetilde f\in \widetilde{\text{Homeo}}_{0}(\Bbb T^2)$ be such that its rotation set
$\rho(\widetilde f)$ is the segment from $(0,0)$ to the totally irrational point $(\alpha,\beta)$.
Assume $\widetilde f$ has bounded deviation along the direction $(\alpha,\beta)$. Then $\widetilde f$ can be $C^0$-approximated by $\widetilde g\in \widetilde{\text{Homeo}}_{0}(\Bbb T^2)$,
such that
$\rho(\widetilde g)$ has interior,
and
$\rho(\widetilde f)\backslash\{(0,0)\} \subset \text{Int} (\rho(\widetilde g) )$.
\end{theorem}
Assume for some $M>0$, for any $\widetilde x \in \Bbb R^2$
and any $n\geq 1$,
\begin{equation}\label{Bounded_by_M}
\text{pr}_{(\alpha,\beta)}( \widetilde f^n (\widetilde x) -\widetilde x -n(\alpha,\beta)) \leq M.
\end{equation}
\begin{definition}\label{Mab_Sab}
Let $\mathcal M_{(\alpha,\beta)}$
denote the set of ergodic $f$-invariant Borel probability measures,
which have $(\alpha,\beta)$ as rotation vector.
Then write
\begin{equation}
\mathcal S_{(\alpha,\beta)} :=
\overline {\bigcup_{\mu \in \mathcal M_{(\alpha,\beta)}} \text{supp}(\mu)},
\end{equation}
where $\text{supp}(\mu)$ denotes the support of $\mu$. \end{definition} The following lemma, whose proof depends on Atkinson's theorem on cocycles (see~\cite{Atkinson}), appears as Lemma 6 of~\cite{Salvador_Uniform} or Proposition 65 of~\cite{Forcing}. \begin{lemma}\label{geq_-M}
Suppose $\widetilde f$ satisfies condition (\ref{Bounded_by_M}).
Then for any $x\in \mathcal S_{(\alpha,\beta)}$ with a lift $\widetilde x$,
and for any $n\geq 1$,
\begin{equation}\label{Bounded_Below}
\text{pr}_{(\alpha,\beta)} \big(\widetilde f^n(\widetilde x) -\widetilde x -n(\alpha,\beta) \big) \geq -M.
\end{equation} In particular,
any invariant ergodic measure $\mu$ such that $\text{supp}(\mu) \subset \mathcal S_{(\alpha,\beta)}$
belongs to $\mathcal M_{(\alpha,\beta)}$.
\end{lemma}
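For the reader's convenience, here is the short argument behind the ``in particular'' part, in the setting at hand (where $\rho(\widetilde f)$ is the segment from $(0,0)$ to $(\alpha,\beta)$): for such a $\mu$ and a $\mu$-typical point $x\in \text{supp}(\mu)$, the Birkhoff ergodic theorem gives $\frac1n(\widetilde f^n(\widetilde x)-\widetilde x)\to \rho_\mu(\widetilde f)$, while (\ref{Bounded_by_M}) and (\ref{Bounded_Below}) give
\begin{equation*}
\big|\text{pr}_{(\alpha,\beta)}\big(\widetilde f^n(\widetilde x)-\widetilde x-n(\alpha,\beta)\big)\big|\leq M
\quad\text{for all } n\geq 1.
\end{equation*}
Dividing by $n$ and letting $n\to\infty$ yields $\text{pr}_{(\alpha,\beta)}\big(\rho_\mu(\widetilde f)-(\alpha,\beta)\big)=0$, and since $\rho_\mu(\widetilde f)$ belongs to the segment $\rho(\widetilde f)$, this forces $\rho_\mu(\widetilde f)=(\alpha,\beta)$, that is, $\mu\in \mathcal M_{(\alpha,\beta)}$.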
\begin{proof}[Proof of Theorem~\ref{Bounded_Deviation_Case}]
Fix $\varepsilon>0$. Choose any minimal set $K\subset \mathcal S_{(\alpha,\beta)}$.
Then $\rho(\widetilde f, K)=\{(\alpha,\beta)\}$.
By assumption (\ref{Bounded_by_M}),
Theorem~\ref{Fabio_Bounded} and Lemma~\ref{geq_-M},
$\widetilde f\big |_{\pi^{-1}(K)}$
has bounded deviation along every direction.
So we can apply Lemma~\ref{jager} in order to
find a semi-conjugacy $\phi$
between $(K,f\big|_K)$ and
$(\mathbb T^2,R)$, where $R= R_{(\alpha,\beta)}$
denotes the rigid rotation on $\mathbb T^2$ by $(\alpha,\beta)$.
There is a lift $\widetilde \phi$ of $\phi$,
which conjugates
$\widetilde f\big|_{\pi^{-1}(K)}$ and $\widetilde R$.
Note that the pre-image of every point under $\widetilde \phi$
has diameter uniformly bounded from above, say by a constant $C_\phi>0$.
Up to renormalizing the torus to a finite cover, we can assume that
\begin{equation}\label{bounded_diameter}
C_\phi<1/6.
\end{equation}
Then, there exists a positive integer $n_0$, such that for any $n\geq n_0$, the following can be ensured. For any $x\in\Bbb T^2$ and its pre-image $\gamma=\phi^{-1}(x)$,
and for any points $p\in \gamma$ and $q\in f^n(\gamma)$,
one can find an $\varepsilon$-pseudo orbit segment of length $n$,
starting at $p$ and ending at $q$, such that every jump happens within a leaf $f^k(\gamma)=\phi^{-1}(R^k(x))$, for some $1\leq k\leq n-1$.
Recall the four regions defined from (\ref{new_delta_1}) to
(\ref{new_omega_0}).
Since $R= R_{(\alpha,\beta)}$
is minimal, for some sufficiently large positive integer $n > n_0$,
and for a point $x_0\in \Bbb T^2$ with its pre-image $\gamma_0=\phi^{-1}(x_0)$,
we have that
\begin{align}
& R^{n}(x_0)\in \Omega_1(x_0,\varepsilon). \label{belong_to_omega_1}\\
& \text{dist}_{\Bbb T^2}(f^n(\gamma_0), \gamma_0)<\varepsilon. \label{gamma_0_close}
\end{align}
Estimate (\ref{belong_to_omega_1}) means that, by choosing a lift $\widetilde x_0$ of $x_0$, and writing $(a,b)= [\widetilde R^{n}(\widetilde x_0) - \widetilde x_0]$, we have
\begin{equation}\label{vk_definition}
v= \frac{[\widetilde R^{n}(\widetilde x_0) -
\widetilde x_0]}{n}= (\frac{a}{n}, \frac{b}{n}) \in \Omega_0.
\end{equation}
Equivalently (see Figure~\ref{the_four_regions}),
\begin{align}
\text{pr}_{(\alpha,\beta)} \big (v-(\alpha,\beta) \big) > 0. \label{pr_alphabeta}\\
\text{pr}_{(-\beta,\alpha)} \big (v-(\alpha,\beta) \big) < 0. \label{pr_betaalpha}
\end{align}
On the other hand, by estimate (\ref{gamma_0_close}) and noting $n> n_0$,
we can find an $\varepsilon$-pseudo periodic orbit, containing a point
$p\in\gamma_0$ so that for some $q\in f^n(\gamma_0)$,
\begin{equation}
\text{dist}_{\Bbb T^2}(p, q)=
\text{dist}_{\Bbb T^2}(\gamma_0,f^n(\gamma_0))<\varepsilon.
\end{equation}
This pseudo-orbit is obtained in the following way.
For each $k=1,\cdots,n-1$, the orbit has a jump within
the leaf $f^k(\gamma_0)=\phi^{-1}(R^k(x_0))$, so that
at the $(n-1)$-th time, it arrives at $f^{-1}(q)$. Then the final jump
happens from $q$ to $p$.
This pseudo-orbit lifts to
an $\varepsilon$-pseudo orbit for $\widetilde f$ in $\Bbb R^2$.
Recall from (\ref{bounded_diameter}) that
the pre-image under the lifted semi-conjugacy $\widetilde \phi$ of any point
has diameter bounded by $1/6$. It follows that the pseudo-orbit must see the same
integer translate as the rigid rotation. More precisely,
the $\varepsilon$-pseudo-orbit above must start at some
point $\widetilde p$ and end at $\widetilde p+(a,b),$ so that
it sees the rotation vector $v=(\frac an,\frac bn) \in \Omega_0$.
In a similar way, we can find another $\varepsilon$-pseudo $f$-periodic orbit which sees
a rational rotation vector in $\Delta_0$.
Moreover, as we explained before, there is also at least one contractible fixed point $p_\ast$.
Note that we can choose the two pseudo orbits and the fixed point to be pairwise disjoint.
So, if we apply Lemma~\ref{firstclosing} twice, we obtain
an $\varepsilon$-perturbation
$\widetilde g\in
\widetilde{\text{Homeo}}_0
(\mathbb T^2)$,
with at least three periodic points, $p_\ast$, $p_0$ and $u_0$, such that
$\rho(\widetilde g,p_\ast)=(0,0)$,
$\rho(\widetilde g,p_0)=v$ and $\rho(\widetilde g,u_0)=u$, where $u\in\Delta_0$ is the rational rotation vector of the second pseudo orbit.
These three vectors span a triangle which contains $\rho(\widetilde f)\backslash \{(0,0)\}$
in its interior. By Theorem~\ref{rotationset},
this triangle is included in $\rho(\widetilde g)$,
and the proof of the theorem is complete.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{instability_general}]
Theorem~\ref{instability_general} follows immediately from
Theorem~\ref{instability} and Theorem~\ref{Bounded_Deviation_Case}.
\end{proof}
\section{Generic Diffeomorphisms}
In this section, we prove the following theorem.
\begin{theorem}[Theorem~\ref{generic_case} restated] \label{generic_not_this_set} Let $f\in \mathcal G^r$. Then, for any lift $\widetilde{f}$, the rotation set $\rho ( \widetilde{f})$ cannot be a segment from $(0,0)$ to a totally irrational point $(\alpha ,\beta )$. \end{theorem}
{\it Proof. }The proof of this theorem will occupy the whole section. We assume the following conditions and arrive at a contradiction at the end of the proof. \begin{align}\label{Gr_condition} & f\in \mathcal G^r.\\ & \rho(\widetilde f) \text{ is the segment from } (0,0) \text{ to the totally irrational point } (\alpha,\beta). \label{rotation_set_assumption} \end{align}
Since $(\alpha,\beta)$ is an extremal point of $\rho(\widetilde f)$, by Theorem~\ref{rotationset}, we can choose an $f$-recurrent point $z_\ast$, such that \begin{equation} \label{choice_of_zast} \rho (\widetilde f,z_\ast)=\lim_{n\to \infty}\frac1n \big(\widetilde f^n(\widetilde{z}_\ast ) -\widetilde z_\ast \big)= (\alpha,\beta). \end{equation}
Now, consider the family of all the $f$-invariant open topological disks. Inclusion defines a partial order on this family, and it is standard to check that every chain has an upper bound. So, by Zorn's lemma, we conclude the existence of maximal elements. As $f$ is non-wandering, Lemma~\ref{Bounded Invariant Disks} implies that there exists a constant $M>0$ such that every connected component $\widetilde{D}$ of the lift of a maximal open $f$-invariant disk $D$ is $\widetilde{f}$-invariant, and \begin{equation} \text{diam}(\widetilde{D})<M. \end{equation} Also, if $D$ is an $f$-invariant maximal open topological disk, then it contains fixed points. This follows from a classical argument: pick some point $p\in D$. If $p$ is not fixed, then for a sufficiently small open ball $B$ centered at $p$ and contained in $D$, we have that $B$ is disjoint from $f(B)$ and that $f^n(B)$ intersects $B$ for some sufficiently large $n>0$ (there are no wandering points). This implies the existence of a fixed point inside $D$, see Lemma \ref{Brouwer}. Since for $f\in \mathcal G^r$ there are finitely many fixed points, it follows that there are at most finitely many maximal open $f$-invariant disks.
Note that $z_{*}$ does not belong to the closure of the union of these finitely many maximal open $f$-invariant disks, because the orbit of any $\widetilde{z}_{*}\in \pi ^{-1}(z_{*})$ is unbounded in $\Bbb R^2$. Thus, we can choose some $\delta >0$ such that the open disk $B(z_{*},\delta )$ is still disjoint from the closure of the union of these maximal open $f$-invariant disks. Define the set \begin{equation} \label{defineu} U:=\text{ the connected component of }\bigcup_{n\in \Bbb Z}f^n\big( B(z_{*},\delta )\big) \text{ which contains }z_{*}. \end{equation}
The following lemma allows us to find a good hyperbolic saddle fixed point.
\begin{lemma} \label{Saddle_Essential} There exists at least one fixed hyperbolic saddle point $Q_{*}$ which is contained in $\overline{U}$. \end{lemma}
\begin{proof}
Clearly $U$ is open. Since $f$ is non-wandering, $U$ is $f^{n^\ast}$-invariant for some $n^\ast>0$. Recall the notions in subsection~\ref{notation1}. We claim that $U$ is essential. Suppose otherwise, and let $U_{\text{filled}}$ be the union of $U$ with all the connected components of the complement of $U$ which are contractible. This construction implies that $U_{\text{filled}}$ is an open disk and, by Lemma~\ref{Bounded Invariant Disks} applied to $f^{n^\ast}$, all connected components of the lift of $U_{\text{filled}}$ to the plane have bounded diameter and are $\widetilde{f}^{n^\ast}$-invariant (because $(0,0)$ is the only rational point contained in the rotation set).
In particular, any lift of $z_\ast$ has bounded orbit, a contradiction with (\ref{choice_of_zast}).
The next claim is that $U$ is in fact fully essential. Suppose it is not; then all the homotopically non-trivial loops contained in $U$ are homotopic to each other. Fix one homotopically non-trivial loop $\gamma \subset U$ and choose connected components $\widetilde \gamma$ and $\widetilde U$ of the lifts of $\gamma$ and $U$, respectively, such that \begin{equation} \widetilde \gamma \subset \widetilde U. \end{equation} Clearly, there exists some integer vector $(a,b)\neq (0,0)$ such that $\widetilde \gamma=\widetilde \gamma+(a,b)$. Moreover, since by assumption $U$ is not fully essential, $\widetilde U \bigcap (\widetilde U+ i(-b,a)) = \emptyset$ for any integer $i\neq 0$. Therefore, \begin{equation} \widetilde U \text{ is contained in the strip bounded by } \widetilde \gamma-(-b,a) \text{ and }\widetilde \gamma+(-b,a). \end{equation} This is a contradiction with the choice of $z_\ast$
and the fact that $(\alpha, \beta)$ is totally irrational.
So $U$ is fully essential. Moreover, if $\overline{U}$ is not the whole torus, then any connected component of $(\overline{U})^c$ is a periodic open disk (the periodicity follows from the non-wandering hypothesis). In particular, by the choice of $\delta>0,$ each $f$-invariant maximal open disk (if any) is a connected component of $(\overline{U})^c$.
By assumption~(\ref{Gr_condition}), every fixed point has non-zero topological index. In this case, the fixed point is also called a non-degenerate fixed point. In general there are two types of such points: \begin{enumerate} \item $p\in\text{Fix}(f)$ has topological index $1$. \item $p\in \text{Fix}(f)$ has topological index $-1$. In this case, $p$ is a hyperbolic saddle, and both eigenvalues of $Df(p)$ are positive real numbers, one larger than $1$ and the other smaller. \end{enumerate}
Now by Lemma~\ref{primeends}, condition (2) of Definition~\ref{Gr_definition} implies that each $f$-invariant open disk $D$ has prime ends rotation number $\rho_{\text{PE}}(f,D)\notin \Bbb Q$. In particular, such a prime ends rotation number is not zero, so the sum of the indices of fixed points contained in the union of all the (finitely many) maximal $f$-invariant open disks is positive (or zero, in case $\overline{U}$ is the whole torus). By the Lefschetz fixed point formula (Lemma~\ref{Lefschetz}), the sum of the indices at all the fixed points is zero. And as $(0,0)$ is an extremal point of the rotation set, $f$ must have fixed points (see ~\cite{Franks_ETDS_8}). So it follows that there exists at least one negatively indexed fixed point, denoted $Q_\ast,$ contained in the complement of the union of these maximal open $f$-invariant disks. Thus, $Q_\ast$ is a fixed hyperbolic saddle point, which belongs to $\overline{U},$ and we have finished the proof. \end{proof}
For a fixed hyperbolic saddle point $Q_{*}$ (or a fixed saddle-like point), let an \textit{unstable branch} (respectively, \textit{stable branch}) at $Q_{*}$ be one of the connected components of $ W^u(Q_{*})\backslash \{Q_{*}\}$ (respectively, one of the connected components of $W^s(Q_{*})\backslash \{Q_{*}\}$). Choose a lift $\widetilde{Q} _{*}$ of the hyperbolic saddle point $Q_{*},$ which is fixed by $\widetilde{f }.$ We can then lift the corresponding stable and unstable branches at $ Q_{*} $ to those branches at $\widetilde{Q}_{*}$.
\begin{prop}\label{stable_unstable} It is not possible that some unstable branch $\widetilde{\lambda}_u$ and some stable branch $\widetilde{\lambda}_s$ at the hyperbolic saddle $\widetilde {Q}_\ast$ intersect. \end{prop}
\begin{proof} Suppose by contradiction that a stable branch $\widetilde{\lambda}_s$ at $\widetilde {Q}_\ast$ intersects an unstable branch $\widetilde{\lambda}_u$ at $\widetilde{Q}_\ast$. We can then choose an intersection point $\widetilde w$, such that, the arc along $\widetilde{\lambda}_u$ from $\widetilde{Q}_\ast$ to $\widetilde w$ and the arc along $\widetilde{\lambda}_s$ from $\widetilde {Q}_\ast$ to $\widetilde w$ are disjoint, except at their endpoints. It follows that, the union of these two arcs bounds a topological disk $\widetilde D$. Now we define \begin{equation} \widetilde D_{\text{sat}} =\bigcup_{n\in \Bbb Z} \widetilde f^n(\widetilde D). \end{equation} Note that $\widetilde D_{\text{sat}}$ is an open and connected
$\widetilde f$-invariant subset of the plane.
If there exists some integer vector $(a,b)\in \Bbb Z^2\backslash \{(0,0)\}$, such that \begin{equation} \widetilde D_{\text{sat}} \bigcap \big(\widetilde D_{\text{sat}}+(a,b) \big) \neq \emptyset, \end{equation} then either $\widetilde{\lambda}_u$ intersects $\widetilde{\lambda}_s+(a,b)$ topologically transversely, or $\widetilde{\lambda}_u$ intersects $\widetilde{\lambda}_s -(a,b)$ topologically transversely (see definition~\ref{topological_transverse}). In both cases, it follows from Lemma~\ref{create_topological_horseshoe} that there exists a periodic orbit $p_0$ whose rotation vector $\rho(\widetilde f,p_0)$ is non-zero and rational, which is a contradiction with assumption (\ref{rotation_set_assumption}).
Thus, we are left with the case when $\widetilde D_{\text{sat}}$ does not intersect any of its non-trivial integer translations. As before, we consider the filled open set $\text{Fill}(\widetilde D_{\text{sat}}),$ which is given by the union of $\widetilde D_{\text{sat}}$ and all the bounded connected components of its complement.
It is not hard to see that $\text{Fill}(\widetilde D_{\text{sat}})$ is an open topological disk which does not intersect any of its non-zero integer translations. Thus, we can consider its projection $D_{\text{Fill}}:= \pi (\text{Fill}(\widetilde D_{\text{sat}}))$, which is an $f$-invariant open disk. By Lemma~\ref{Bounded Invariant Disks}, $\text{Fill}(\widetilde D_{\text{sat}})$ has bounded diameter. Since $f\in \mathcal G^r$, there are in particular no saddle connections, so by Lemma~\ref{primeends} the prime ends rotation number $\rho_{\text{PE}}(\widetilde f, \text{Fill}(\widetilde D_{\text{sat}}) )$ is irrational, and hence
the boundary $\partial \text{Fill}(\widetilde D_{\text{sat}})$ does not contain any periodic point. This implies that \begin{equation} \widetilde {Q}_\ast \in \text{Fill}(\widetilde D_{\text{sat}}). \end{equation} This is a contradiction because $Q_\ast$ does not belong to a fixed open disk, see Lemma~\ref{Saddle_Essential}. So $\widetilde \lambda_u$ does not intersect any stable branch at $\widetilde {Q}_\ast$. \end{proof}
\begin{Remark}\label{no_homoclinic_intersection_saddle_like}
The same conclusion is also true for the unstable and the stable branch
at an index $0$ saddle-like fixed point. The proof follows the same lines as above, with the difference that, when we apply Lemma~\ref{create_topological_horseshoe} in the arguments, we actually need the statement for saddle-like fixed points, as was explained in Remark~\ref{saddle_like_points_horseshoe}. Note also that the conditions stated in Lemma~\ref{primeends} are such that in both cases it can be applied. \end{Remark}
\begin{lemma} \label{stable_unbounded} Each stable or unstable branch at $\widetilde{Q} _{*} $ is unbounded in $\Bbb R^2$. \end{lemma}
\begin{proof} For definiteness, fix any unstable branch $\widetilde{\lambda}_u$ at $\widetilde{Q}_\ast$, and assume by contradiction that it is bounded.
The first claim is that the closure $\text{cl} (\widetilde{\lambda}_u)$ must intersect all the other branches at $\widetilde{Q}_\ast$. To see the claim, assume by contradiction that $\text{cl} ( \widetilde{\lambda}_u)$ does not intersect some branch $\widetilde \lambda$. Then there exists some connected component $\widetilde U$ of the complement of $\text{cl}(\widetilde {\lambda}_u)$ containing $\widetilde \lambda$. Since $\widetilde \lambda$ is $\widetilde f$-invariant, so is $\widetilde U$.
Note that $\widetilde{Q}_\ast \in \partial \widetilde U$, and it is in fact accessible through the branch $\widetilde \lambda$, from the interior of $\widetilde U$. Thus, the prime ends rotation number $\rho_{\text{PE}}(\widetilde f,\widetilde U)$ must be equal to $0$. And so, Lemma \ref{primeends} implies the existence of saddle-connections, something that contradicts item (2) of Definition~\ref{Gr_definition}.
The second claim is that if $\widetilde \lambda$ is any other branch at $\widetilde{Q}_\ast$, then $\text{cl}(\widetilde{\lambda}_u)\supset \widetilde \lambda$. To prove this claim, note first that if $\widetilde \lambda$ is another unstable branch, then $\widetilde \lambda$ does not intersect $\widetilde{\lambda}_u$. And if $\widetilde \lambda$ is a stable branch, then by Proposition~\ref{stable_unstable} we again obtain that $\widetilde \lambda$ does not intersect $\widetilde{\lambda}_u$. The following argument is a variation of one due to Fernando Oliveira in the area-preserving case (see Lemma 2 of~\cite{Oliveira}). We include it here for completeness.
Assume by contradiction that \begin{equation}\label{not_subset} \text{cl}(\widetilde{\lambda}_u) \not\supset \widetilde \lambda. \end{equation} Since $\text{cl}(\widetilde{\lambda}_u)$ is a connected $\widetilde f$-invariant compact subset, there is a compact simple arc $\gamma$ contained in $\widetilde \lambda$, such that $\text{cl}(\widetilde {\lambda}_u)\bigcap \gamma$ consists of exactly the two endpoints of $\gamma$. Then
there are two possibilities:
\begin{enumerate} \item for all non-zero integer vectors $(m,n)$, $\gamma\bigcap \big (\text{cl}(\widetilde {\lambda}_u)+(m,n) \big)=\emptyset$. \item for some $(m_0,n_0) \in \Bbb Z^2\backslash \{(0,0)\}$, $\gamma\bigcap \big( \text{cl} (\widetilde {\lambda}_u)+(m_0,n_0) \big) \neq \emptyset$. \end{enumerate}
In case (1), we can find a bounded connected component of the complement of $\text{cl}(\widetilde{\lambda}_u)\cup \gamma$, whose boundary contains $\gamma$. We denote it as $\widetilde O$. Then look at the set $O=\pi(\widetilde O)$. It is not hard to see that, $O$ contains a wandering domain for $f$, which is a contradiction.
In case (2), as $\gamma$ is contained in a branch at $\widetilde{Q}_\ast,$ $\text{cl} (\widetilde {\lambda}_u)$ intersects $\text{cl} (\widetilde {\lambda}_u)+(m_0,n_0)$. So we define
\begin{equation} \widetilde L: = \bigcup_{k\in \Bbb Z} \text{cl}(\widetilde {\lambda}_u ) + k(m_0,n_0). \end{equation} Then $\widetilde L$ is closed, connected, $\widetilde L + (m_0,n_0) =\widetilde L$ and it is bounded in the direction perpendicular to $(m_0,n_0).$ Moreover, $\widetilde f(\widetilde L)=\widetilde L.$
But this shows that $\rho(\widetilde f)$ must be contained in a line of rational slope (parallel to the vector $(m_0,n_0)$), which is a contradiction with assumption (\ref{rotation_set_assumption}). This shows our second claim.
Note that we could have started with any stable branch $\widetilde{\lambda}_s$ as well. So the two claims above show that, under the assumptions (\ref{Gr_condition}) and (\ref{rotation_set_assumption}), if one branch at the hyperbolic fixed point $\widetilde {Q}_\ast$ is bounded, then all four branches are bounded. Moreover, in this case, they all have the same closure. Then the final arguments follow exactly from Oliveira in \cite{Oliveira}, page 582, namely, we get an intersection between a stable and an unstable branch. As this homoclinic intersection contradicts Proposition~\ref{stable_unstable}, all four branches at $\widetilde{Q}_{*}$ are unbounded.
To conclude, we present a sketch of Oliveira's argument used above. Let us denote the four branches at $\widetilde{Q}_{*}$ by $\widetilde{\lambda }_{1,u},\widetilde{\lambda }_{2,u},\widetilde{\lambda }_{1,s}$ and $\widetilde{\lambda }_{2,s}$. We are assuming that $\text{cl}(\widetilde{\lambda }_{i,j})$ is bounded and equal to the same continuum for all $i\in \{1,2\}$ and $j\in \{u,s\}.$ This implies that there exist two branches, one stable and one unstable, and a local quadrant $Quad$ at $\widetilde{Q}_{*}$ adjacent to at least one of them, such that both branches accumulate on $\widetilde{Q}_{*}$ through $Quad.$ It is not hard to show that a picture similar to one of the possibilities in Figure~\ref{oliveira1} must happen. More precisely, there always exists a Jordan curve separating the local part of $\widetilde{\lambda }_{1,s}$ from the hatched area of $Quad$.
\begin{figure}
\caption{The contractible case}
\label{oliveira1}
\end{figure}
And it is easy to see that the stable branch cannot accumulate on $\widetilde{Q}_{*}$ through $Quad$ without intersecting the unstable branch: the only way it can enter $Quad$ is through the hatched area. And it has to intersect
the unstable branch in order to reach that area. \end{proof}
\begin{Remark}\label{unbounded_for_saddle_like} Similarly to the previous proposition, the same conclusion holds for any index $0$ saddle-like fixed point. The proof follows the same lines as the proof above. \end{Remark}
\begin{prop}\label{same_closure} The projections of the four branches are pairwise disjoint and they have the same closure,
denoted as follows. \begin{equation} K = \text{cl}(\pi(\widetilde \lambda_{1,s})) = \text{cl}(\pi(\widetilde \lambda_{2,s}))
= \text{cl}(\pi(\widetilde \lambda_{1,u})) = \text{cl}(\pi(\widetilde \lambda_{2,u})). \end{equation} Moreover, each connected component of the complement of $K$ is a periodic open disk. \end{prop}
\begin{proof} Let us consider one branch, for instance, $\widetilde \lambda_{1,u}$, which is unbounded in $\Bbb R^2$ by Lemma~\ref{stable_unbounded}. Then, the closure of its projection,
$\text{cl}(\pi(\widetilde \lambda_{1,u}))$, is an essential subset of $\Bbb T^2$. If $\text{cl}(\pi(\widetilde \lambda_{1,u}))$ is not fully essential, then some connected component $U$ of its complement is itself essential. Since $f$ is non-wandering, it follows that $U$ is periodic. But then, the rotation set $\rho(\widetilde f)$ has to be contained in some affine line with rational slope, which is a contradiction with assumption~(\ref{rotation_set_assumption}). So, $\text{cl}(\pi(\widetilde \lambda_{1,u}))$ is fully essential. By item (1) of Definition~\ref{Gr_definition} and assumption (\ref{Gr_condition}), the fixed point set of $f^n$ is finite, for all $n\geq 1$. So, Lemma~\ref{Bounded Invariant Disks} implies that every connected component of the lift of the complement of $\text{cl}(\pi(\widetilde {\lambda}_{1,u}))$ is a bounded $\widetilde{f}$-periodic disk. Thus, if $\widetilde \lambda$ is any branch (possibly the same one), then, since $\widetilde \lambda$ is unbounded, $\pi(\widetilde \lambda)$ intersects $\text{cl}( \pi (\widetilde \lambda_{1,u}))$. If $\pi(\widetilde \lambda)$ also intersects $\text{cl}( \pi (\widetilde \lambda_{1,u}))^c$, then, by an argument very similar to the one used in the proof of Lemma~\ref{stable_unbounded} to treat possibility (1), we conclude that there exists a wandering domain inside an open periodic disk in the torus, a contradiction. Thus $\pi(\widetilde \lambda)\subset \text{cl}(\pi (\widetilde \lambda_{1,u}))$. Since we have chosen the two branches arbitrarily, the proof is complete. \end{proof}
By Proposition~\ref{same_closure}, for the fixed saddle point $Q_{*}$ in the torus, each of its four branches accumulates on all the other three branches, as well as on itself. Now we need to recall the final arguments of the proof of Theorem 2 in ~\cite{Oliveira}. More precisely, from page 591 to page 594 of~\cite{Oliveira}, the starting conditions are that all branches are unbounded in $\Bbb R^2$, and the closure of every branch in $\Bbb T^2$ accumulates on all the four branches. Under these conditions, following exactly the arguments in that paper, we get that \begin{equation} \pi (\widetilde{\lambda }_{1,u}\bigcup \widetilde{\lambda }_{2,u})\bigcap \pi (\widetilde{\lambda }_{1,s}\bigcup \widetilde{\lambda }_{2,s})\neq \emptyset \end{equation} with a topologically transverse intersection.
So, either \begin{equation} (\widetilde{\lambda }_{1,u}\bigcup \widetilde{\lambda }_{2,u})\bigcap ( \widetilde{\lambda }_{1,s}\bigcup \widetilde{\lambda }_{2,s})\neq \emptyset , \end{equation} or \begin{equation} (\widetilde{\lambda }_{1,u}\bigcup \widetilde{\lambda }_{2,u})\bigcap \big( (\widetilde{\lambda }_{1,s}\bigcup \widetilde{\lambda }_{2,s})+(m,n)\big)\neq \emptyset ,\text{ for some }(m,n)\in \Bbb Z^2\backslash \{(0,0)\}, \end{equation} in both cases with topologically transverse intersections.
The first is a contradiction with Proposition~\ref{stable_unstable}, and for the second case, we can use a similar argument as in the proof of Proposition~\ref{stable_unstable}, to create a non-contractible periodic orbit. This is a contradiction with the assumption on the shape of $\rho ( \widetilde{f})$, i.e., (\ref{rotation_set_assumption}).
To conclude, as we did in the proof of Lemma \ref{stable_unbounded}, we present a sketch of the argument contained in the aforementioned pages of \cite{Oliveira}.
First, note that for any local quadrant $Quad$ at $Q_{*}$, both adjacent branches accumulate on $Q_{*}$ through it. Let $Quad$ be contained in the first quadrant and let $\lambda _{1,u}$ and $\lambda _{1,s}$ be the branches adjacent to it. Follow $\lambda _{1,u}$ starting at $Q_{*}$ until the first time it reaches $Quad$, at a point $z\in \partial Quad$. The sub-arc of $\lambda _{1,u}$ whose endpoints are $Q_{*}$ and $z$, united with a segment from $z$ to $Q_{*}$, forms a loop which is either contractible or not. If the loop is contractible, then it bounds a disk $D$ that separates the hatched area in the left case of Figure \ref{oliveira1} from a local part of $\lambda _{1,s}$. As the only way for $\lambda _{1,s}$ to enter $Quad$ is through the hatched area, there must be an intersection between $\lambda _{1,s}$ and $\lambda _{1,u}$.
So we are left to consider the case when, for any of the four branches, whenever it returns to some adjacent local quadrant $Quad_i$ $(i=1,2,3,4),$ it forms a non-contractible loop ($Quad_1$ is in the first quadrant and so on, rotating counterclockwise). In this case, the situation in the universal cover is as in Figure \ref{oliveira2}.
\begin{figure}
\caption{The curvilinear rectangle Rect}
\label{oliveira2}
\end{figure}
There, we consider $\widetilde{\lambda }_{1,s}$ and $\widetilde{\lambda }_{1,u}$ starting at $\widetilde{Q}_{*}$ until the first point each of them has in some connected component of $\pi ^{-1}(\partial Quad_1)$ and then back to some translate of $\widetilde{Q}_{*}$ through a segment: we get a ``web'' on the plane, whose building blocks are all the integer translates of the curvilinear rectangle, denoted $Rect$ in Figure \ref{oliveira2}.
We know that $\widetilde{\lambda }_{2,s}$ and $\widetilde{\lambda }_{2,u}$ are both unbounded, so they have to leave $Rect$. If they do not intersect $\widetilde{\lambda }_{1,s}$ and $\widetilde{\lambda }_{1,u}$, the only possibilities are the following: $\widetilde{\lambda }_{2,u}$ leaves $Rect$ through the connected components of $\pi ^{-1}(\partial Quad_1)$ that contain $\widetilde{Q}_{*}+r(1,u)$ or $\widetilde{Q}_{*}+r(1,u)+r(1,s)$, and $\widetilde{\lambda }_{2,s}$ leaves $Rect$ through the connected components of $\pi ^{-1}(\partial Quad_1)$ that contain $\widetilde{Q}_{*}+r(1,s)$ or $\widetilde{Q}_{*}+r(1,u)+r(1,s)$. It is easy to see that, unless both $\widetilde{\lambda }_{2,s}$ and $\widetilde{\lambda }_{2,u}$ leave $Rect$ through the connected component of $\pi ^{-1}(\partial Quad_1)$ that contains $\widetilde{Q}_{*}+r(1,u)+r(1,s)$, the diagram in Figure \ref{oliveira2} implies that there must be an intersection between $\widetilde{\lambda }_{2,s}$ and $\widetilde{\lambda }_{2,u}$ and we are done.
So, assume this is the case. Now we are in the situation described in Figure \ref{oliveira3}.
\begin{figure}
\caption{The curvilinear rectangle Rect'}
\label{oliveira3}
\end{figure}
Again, as $\widetilde{\lambda }_{2,s}$ and $\widetilde{\lambda }_{2,u}$ are both unbounded, they have to leave $Rect^{\prime }.$ If they do not intersect $\widetilde{\lambda }_{1,s}$ and $\widetilde{\lambda }_{1,u},$ from the position of the exits and entrances in $Rect^{\prime },$ there must be an intersection between $\widetilde{\lambda }_{2,s}$ and $\widetilde{\lambda } _{2,u}$ (we are using the fact that unstable branches leave $Rect^{\prime }$ through one of the exits and stable branches leave $Rect^{\prime }$ through one of the entrances).
A final remark is that Figures \ref{oliveira1}, \ref{oliveira2} and \ref{oliveira3} were taken from \cite{Oliveira}: we just adapted them to our notation.
$\Box $
\section{A Broader Class of (Non-generic) Diffeomorphisms} In the proof of Theorem~\ref{instability}, we keep the fixed points away from the support of the perturbation. Thus, the rotation set after the perturbation still contains the point $(0,0)$. On the other hand, it seems possible that $(0,0)$ will be ``mode locked'' in the following sense. Possibly, for all sufficiently small perturbations, $(0,0)$ is not contained in the interior of the perturbed rotation set. This intuition comes from the phenomenon called rational mode locking. The case when the rotation set has non-empty interior was treated in~\cite{Addas_Calvez}. One of the theorems proved there states that rational mode locking happens under some conditions that are satisfied for generic one-parameter families.
This section has two objectives. First, we prove several results describing the dynamics of diffeomorphisms $f\in \mathcal K^r$, which together imply Theorem \ref{carac_map}. And then, using the previous results and some delicate topological arguments, we show the existence of many Brouwer lines in the universal cover, which are lifts of essential loops in the torus for all possible homotopy classes. At the end of the section we explain how Theorem~\ref{generic_family_not_perturbable} in the introduction can be deduced from the existence of Brouwer lines.
We start by describing the dynamics in $\mathcal K^r.$
In this whole section, $f \in \mathcal K^r$ and $\widetilde{f}$ is a lift of $f$ whose rotation set is the segment from $(0,0)$ to a totally irrational point $(\alpha,\beta)$.
\begin{prop}\label{corollary_about_periodic_non_fixed_point}
Every periodic point $p$ is indeed a fixed point, with topological index $0$, and the local dynamics around $p$ can be described explicitly as in Lemma~\ref{section_two_lemma}. In particular, $p$ admits exactly one stable and one unstable branch. \end{prop}
\begin{proof} By the assumption on the shape of the rotation set $\rho(\widetilde f)$, all the periodic orbits of $f$ must be contractible, that is, they lift to periodic orbits of $\widetilde{f}$. Suppose by contradiction that there exists a periodic point which is not fixed, or there exists a fixed point which does not have topological index $0$. In the first case, by Lemma~\ref{Brouwer}, there exists some fixed point with positive topological index. On the other hand, it is easy to see from the definition of $\mathcal K^r$ that the indices at fixed points can only assume one of the following values: $-1,0$ or $1$. By Lemma~\ref{Lefschetz}, as in both possibilities above there exist fixed points with non-zero index, there must always exist a fixed point which has topological index $-1.$ And from the definition of $\mathcal K^r$ this point is a hyperbolic saddle.
By the same arguments used to prove Lemma~\ref{Saddle_Essential}, there exists a fixed hyperbolic saddle contained in $\overline{U},$ where $U$ is defined in expression~(\ref{defineu}). Now the proof goes exactly as in Theorem~\ref{generic_not_this_set},
and as in that proof, we get a contradiction with the shape of the rotation set $\rho(\widetilde f)$. So every fixed point has topological index $0$ and there are no other periodic points. Since $f\in \mathcal K^r$, the local dynamics around a fixed point is given by Lemma~\ref{section_two_lemma}. \end{proof}
The next lemma is Corollary~\ref{coro_bounded}. Note that it depends on Theorem~\ref{case_of_0}. \begin{lemma}[Corollary~\ref{coro_bounded} restated]\label{Generic_Bounded_in_(-alpha,-beta)} The lift $\widetilde f$ has bounded deviation along the direction $-(\alpha, \beta)$. Equivalently, there exists $M>0$, such that for any $\widetilde x$ and $n\geq 1$, \begin{equation}\label{Bounded_in_-v} \text{pr}_{-(\alpha,\beta)} (\widetilde f^n(\widetilde x)-\widetilde x) \leq M. \end{equation} \end{lemma}
\begin{proof}
By Proposition~\ref{AC}, for any $\widetilde g \in \widetilde{\text{Homeo}}_0(\Bbb T^2)$ which is a sufficiently small perturbation of $\widetilde f$,
it is not possible that $\rho(\widetilde g)$ contains
$(0,0)$ in its interior.
Hence, by the contrapositive of Theorem~\ref{case_of_0}, $\widetilde f$ must have bounded deviation along $-(\alpha,\beta)$.
\end{proof}
\begin{lemma}\label{density_unboundedness}
For any $\widetilde{f}$-fixed point $\widetilde p$,
its stable and unstable branches are both unbounded. Their projections to the torus do not intersect. Moreover, the projection of each branch is dense in the torus.
\end{lemma}
\begin{proof}
A first observation is that there is no periodic open disk. If such a disk existed, then from our hypotheses, its prime ends
rotation number would be irrational. So $f$ would have periodic points with
positive index (see the end of the proof of Lemma~\ref{Saddle_Essential}), something that is
not allowed by Proposition~\ref{corollary_about_periodic_non_fixed_point}.
Fix some fixed point $p$ in the torus. Consider any $\widetilde p$ in the plane that
lifts $p$.
The proofs of Proposition~\ref{stable_unstable} and Lemma~\ref{stable_unbounded} imply that both
the stable and the unstable branches at $\widetilde p$ are unbounded (see
Remarks~\ref{no_homoclinic_intersection_saddle_like} and~\ref{unbounded_for_saddle_like}).
Now we show that $W^s(p)$ is dense in $\Bbb T^2$
(a similar argument works for $W^u(p)$).
Since the lift $W^s(\widetilde p)$ is unbounded, the closure
$\overline{W^s(p)}$
must be an essential subset of $\Bbb T^2$.
Assume its complement
$\Bbb T^2\backslash \overline{W^s(p)}$
is non-empty.
If it contains an essential component,
then as we have already done in many previous arguments,
this implies that the rotation set is
contained in a straight line with rational slope, a contradiction.
Thus, $\Bbb T^2\backslash \overline{W^s(p)}$ is inessential.
So each connected component is a periodic open disk. As there are none,
$W^s(p)$ is dense in $\Bbb T^2.$
Now, if the stable and unstable branches at $p$ intersect, then
for some $\widetilde p$ lift of $p$, either its stable and unstable
branches intersect,
or the unstable branch at $\widetilde p$ intersects (maybe in a tangency)
the stable branch at
$\widetilde p+(m,l)$ for some non-zero integer vector $(m,l)$.
In this second case, there exists a Jordan curve in the plane, which is
given by the union of two arcs: one contained in the unstable branch at $\widetilde p$ and the other contained in the stable branch at $\widetilde p+(m,l)$. As this Jordan curve bounds a disk and $W^s(p)$ is dense in $\Bbb T^2$, we get that the stable branch at some integer translate $\widetilde p+(m',l')$ has a topologically transverse intersection with the unstable branch at $\widetilde p$. If $(m',l')$ is non-zero, Lemma~\ref{create_topological_horseshoe} and Remark~\ref{saddle_like_points_horseshoe} give a contradiction.
So we are left to consider the case when the stable and unstable branches at $\widetilde p$ intersect. As we did before, there exists an open disk
$\widetilde D$ in the plane
whose boundary is a Jordan curve containing $\widetilde p$,
consisting of two compact arcs.
One of the arcs is contained in the unstable branch at $\widetilde p$ and the other is contained in the stable branch. Now, either $\widetilde{f}^n(\widetilde D)$ intersects some non-zero integer translate of $\widetilde D$ for some integer $n>0$, something that is not allowed, again by Lemma~\ref{create_topological_horseshoe} and Remark~\ref{saddle_like_points_horseshoe}, or it does not. In this second possibility, if we consider $\widetilde D_{\text{sat}} =\bigcup_{n\in \Bbb Z} \widetilde f^n(\widetilde D)$, then $\text{Fill}(\widetilde D_{\text{sat}})$ is open and disjoint from all its non-zero translates. Therefore, when projected to the torus, it is an $f$-invariant bounded open disk (see Lemma~\ref{Bounded Invariant Disks}). As such disks do not exist, there are
no homoclinic intersections in the torus.
\end{proof}
The next lemma shows that $f$ also does not admit heteroclinic intersections.
\begin{lemma}\label{no_heteroclinic_points}
For any two fixed points
$p_1$ and $p_2$,
we have
\begin{equation}
W^u(p_1)\bigcap W^s(p_2)=\emptyset.
\end{equation}
\end{lemma}
\begin{proof} Suppose, by contradiction, that $f$ admits some fixed points $p_1$ and $p_2$ with \begin{equation} W^u(p_1) \bigcap W^s(p_2) \neq \emptyset. \end{equation} We consider some lifts $W^u(\widetilde{p_1})$ and $W^s(\widetilde{p_2})$ of these branches, such that their intersection is non-empty. Then, since there are no saddle-like connections, we can find some Jordan curve which is the union of one sub-arc of
$W^u(\widetilde{p_1})$
and one sub-arc of $W^s(\widetilde{p_2})$.
This
Jordan curve bounds a topological disk,
denoted $\widetilde U$.
The projection
$U=\pi(\widetilde U)$ is a proper
open subset of $\Bbb T^2$.
Since both $W^u(p_1)$ and $W^s(p_2)$ are dense in $\Bbb T^2$, each of them must
intersect $U$.
So, there are homoclinic intersections, which do not exist by
the previous Lemma. This contradiction ends the proof.
\end{proof}
The goal now is to show that both invariant branches at a fixed point tend to infinity.
\begin{lemma}\label{unstable_goes_to_infinity}
Let $\widetilde p$ be a fixed point (for $\widetilde{f}$).
Then both its stable and unstable branches
intersect every compact set
in a closed subset.
More precisely,
for the unstable branch $W^u(\widetilde p)\backslash\{\widetilde p\}$
for example,
if $\widetilde \lambda \subset W^u(\widetilde p)\backslash\{\widetilde p\}$ denotes the closure of a fundamental domain,
then for any compact set $K$,
the set $\{n\geq 1 \big | \widetilde f^n(\widetilde \lambda)\bigcap K \neq \emptyset\}$ is finite.
\end{lemma}
\begin{proof}
Consider the unstable branch $W^u(\widetilde p)\backslash\{\widetilde p\}$
and choose a fundamental domain contained in it, whose closure we denote by $\widetilde \lambda$.
Suppose by contradiction that there exists some compact set $K\subset \Bbb R^2$, an integer sequence $n_i \to +\infty$, and a sequence $\widetilde q_i \in \widetilde \lambda$, such that \begin{equation} \widetilde f^{n_i}(\widetilde q_i)\in K. \end{equation}
By extracting a subsequence if necessary,
we can assume
$\widetilde q \in \widetilde \lambda$ is the limit point of the sequence
$\{\widetilde q_i\}_{i\geq 1}$.
Considering the $\omega$-limit set of $\widetilde q$, denoted
$\omega(\widetilde q)$, there are three possibilities:
Either $\omega(\widetilde q)$ is empty, a singleton, or it has more than
one point.
If it contains more than one point, then some point $\widetilde w$ in it
is not fixed, because each fixed point is isolated.
Then $\widetilde w$ is contained in some disk $U$,
such that, \begin{align}
\widetilde f(U)\bigcap U & = \emptyset, \\ \widetilde f^k(U)\bigcap U & \neq \emptyset, \text{ for some } k\geq 2. \end{align} So Lemma~\ref{Brouwer} implies that $\widetilde f$ admits some fixed point with positive topological index, a
contradiction with Proposition~\ref{corollary_about_periodic_non_fixed_point}.
If $\omega (\widetilde q)$ is a singleton, say,
equal to $\{\widetilde r\}$, then $\widetilde r$ is necessarily a fixed point.
So $\widetilde q$ belongs to the stable branch
at the point $\widetilde r$, that is, there is a heteroclinic point,
a contradiction with Lemma~\ref{no_heteroclinic_points}.
So, $\omega(\widetilde q)$ is empty.
In particular,
this means that the sequence $\{\widetilde f^n(\widetilde q)\}_{n\geq 1}$ converges
to infinity as
$n$ tends to $+\infty$.
By Theorem~\ref{Fabio_Bounded} and Lemma~\ref{Generic_Bounded_in_(-alpha,-beta)},
$\widetilde f$ has bounded deviation along the three
directions $-(\alpha, \beta), (-\beta,\alpha)$ and $(\beta,-\alpha)$. So the sequence $\widetilde f^n(\widetilde q)$ converges to infinity
along the direction $(\alpha,\beta)$.
In particular, for some large $n_0$, \begin{equation} \inf_{\widetilde z\in K} \big( \text{pr}_{(\alpha,\beta)} ( \widetilde f^{n_0}(\widetilde q)- \widetilde z) \big)>2M, \end{equation} where $M>0$ comes from estimate (\ref{Bounded_in_-v}).
Then there exists some small disk $B$ containing $\widetilde q$, such that, for every point $b\in B$, the above estimate also holds true. Now, we can choose a sufficiently large $i$, with $\widetilde q_i\in B$, and $n_i > n_0$. Thus, \begin{align}
& \text{pr}_{(\alpha,\beta)} \big( \widetilde f^{n_0} (\widetilde q_i) - \widetilde f^{n_i}(\widetilde q_i)\big) \\ \geq & \inf_{\widetilde z\in K} \big( \text{pr}_{(\alpha,\beta)} ( \widetilde f^{n_0}(\widetilde q_i)- \widetilde z) \big) \\ \geq & 2M. \end{align} As this is a contradiction with Lemma \ref{Generic_Bounded_in_(-alpha,-beta)}, the proof is completed.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{carac_map}]\renewcommand{\qedsymbol}{}
The proof now follows easily from Proposition \ref{corollary_about_periodic_non_fixed_point}, Lemmas \ref{density_unboundedness}, \ref{no_heteroclinic_points}, \ref{unstable_goes_to_infinity} and Theorem \ref{Fabio_Bounded}.
\end{proof}
\begin{theorem}\label{Thm_Generic_Family} Let $\widetilde f$ denote some lift of some $f\in \mathcal K^r$, and suppose $\rho(\widetilde f)$ is the line segment from $(0,0)$ to $(\alpha,\beta)$. Then, for any coprime integer pair $(a,b)\neq (0,0)$, there exists a torus loop $\ell= \ell_{(a,b)}$,
which can be lifted to an $\widetilde f$-Brouwer line $\widetilde \ell$, such that $\widetilde \ell+(a,b)=\widetilde \ell$. \end{theorem}
\begin{proof}
Up to a change of coordinates and/or considering $f^{-1}$ if necessary, we reduce to the case when $(a,b)=(0,1)$ and $\alpha >0.$
\begin{prop}\label{good_curve}
There exists an oriented properly
embedded curve $\widetilde \gamma \subset \Bbb R^2$,
with the following properties.
\begin{enumerate}
\item $\widetilde \gamma+(0,1)=\widetilde \gamma$,
and $\widetilde \gamma$ is oriented in the direction $(0,1)$.
\item $\widetilde \gamma$ does not contain any $\widetilde f$-fixed points.
\item Let
$\mathcal R(\widetilde \gamma)$ denote the unbounded complementary domain to the right of
$\widetilde \gamma$. For any $\widetilde f$-fixed point
contained in $\mathcal R(\widetilde \gamma)$,
its unstable branch does not intersect $\widetilde \gamma$.
\item Analogously, let
$\mathcal L(\widetilde \gamma)$
denote the unbounded complementary domain to the left
of
$\widetilde \gamma$.
For any $\widetilde f$-fixed point
contained in $\mathcal L(\widetilde \gamma)$,
its stable branch does not intersect $\widetilde \gamma$.
\end{enumerate}
\end{prop}
\begin{proof}
Start with a vertical line $\ell$, oriented upwards, which does not contain any $\widetilde f$-fixed point. The complement, $\ell^c,$ consists of two unbounded connected components. We denote by $\mathcal R(\ell)$ (respectively, $\mathcal L(\ell)$) the right component (respectively, the left component). Let $O_-$ (respectively, $O_+$) denote the union of the stable branches (respectively, unstable branches) of all the $\widetilde f$-fixed points belonging to $\mathcal L(\ell)$ (respectively, $\mathcal R(\ell)$). We claim that both $O_-$ and $O_+$
are closed sets.
The arguments are similar, so it suffices to
prove the claim for
$O_-$.
Let $\{z_i\}_{i\geq 1}$ be a sequence of points in $O_-$ converging to some point $z$. Choose a small closed disk $B(z,1)$ containing $z$. By Theorem~\ref{Fabio_Bounded}, Lemmas 6.2 and~\ref{unstable_goes_to_infinity}, there are only finitely many $\widetilde f$-fixed points in $\mathcal L(\ell)$ whose stable branches intersect $B(z,1)$. Moreover, the intersection of one such stable branch with $B(z,1)$ is a closed set. So $O_-\bigcap B(z,1)$ is closed. Therefore, $z$ belongs to $O_-$, which implies that $O_-$ is closed.
It is clear that $(O_-)^c$ has a connected component which is unbounded to the right. More precisely, this component contains some translated domain $\mathcal R(\ell)+(M',0)$, where $M'$ is a positive constant obtained from the constant $M$ in (\ref{Bounded_in_-v}) by defining $M'=M\cos \theta$, where $\theta$ is the angle between the horizontal line and the vector $(\alpha,\beta)$.
Suppose by contradiction that $(O_-)^c$ is not connected. Then there exists a connected component $B$, which is contained in $\mathcal L(\ell)+(M',0)$. Observe that $B$ is open and its boundary $\partial B$ is contained in $O_-$, and recall that the unstable branch of every $f$-fixed point is dense in $\Bbb T^2$. It follows that some unstable branch of some $\widetilde f$-fixed point intersects $B$. Since the branch is unbounded to the right, it must intersect the boundary of $B$, which is a contradiction, because stable and unstable branches do not intersect. So, $(O_-)^c$ is connected. The same holds true for $(O_+)^c$.
Now consider the one point compactification of the plane, identified with $\Bbb S^2$. The two closed sets $O_-$ and $O_+$ lift to $\widehat{O_-}$ and $\widehat {O_+}$, respectively, which are connected closed subsets,
because every stable and unstable branch lifts to some closed set containing the point $\infty\in S^2$. Clearly, $\widehat{O_-} \bigcap \widehat {O_+} =\{\infty\}$. By Lemma~\ref{Newman}, the complement of $O_- \bigcup O_+$ is an open connected subset of $\Bbb R^2$. Note that, if a point $\widetilde z\in (O_- \bigcup O_+)^c$, then $\widetilde z+(0,k)\in (O_- \bigcup O_+)^c$ for any $k\in \Bbb Z$, because of the relations $O_-=O_-+(0,1)$ and $O_+=(O_+)+(0,1)$.
So, we can choose an arc $\delta$ connecting $\widetilde z$ and $\widetilde z+(0,1)$ such that \begin{align} \delta \bigcap (O_- \bigcup O_+) & =\emptyset \text{ and } \\ \delta \bigcap (\delta+(0,1)) & \text{ contains exactly one point}. \end{align} Therefore, the union \begin{equation} \widetilde \gamma:= \bigcup_{i\in \Bbb Z}( \delta+(0,i)) \end{equation} is a properly embedded real line, which satisfies all four properties.
\end{proof}
\begin{lemma}\label{good_neighbourhood} Let $\widetilde \gamma$ be obtained from Proposition~\ref{good_curve}.
For any fixed point $\widetilde q\in\mathcal R(\widetilde \gamma)$,
there exists a small closed neighbourhood
$\widetilde K$ containing $\widetilde q$, such that
$\{\widetilde f^n(\widetilde K)\}_{n\geq 0} \subset \mathcal R(\widetilde \gamma)$.
In particular, the forward iterates of $\widetilde K$ do not intersect
$\widetilde \gamma$.
\end{lemma}
\begin{proof}
Fix some fundamental domain $\widetilde{\lambda }$ of $W^u(\widetilde{q})$ very close to $\widetilde{q},$ whose endpoints are $\widetilde{y}$ and $ \widetilde{f}(\widetilde{y}).$ As $W^u(\widetilde{q})$ is unbounded to the right, there exists $N>0$ such that $\text{dist}(\widetilde{f}^N(\widetilde{\lambda }),\widetilde{\gamma })>2M+C_{\widetilde f},$ where $M$ is the constant obtained in (\ref{Bounded_in_-v}), and $C_{\widetilde{f}}$ is given by:
\begin{equation}\label{C_tildef_definition}
C_{\widetilde f}=\max_{\widetilde z \in \Bbb R^2} \|\widetilde f(\widetilde z) -\widetilde z\|. \end{equation}
\begin{figure}
\caption{The Neighbourhoods $K$ and $V$}
\label{figlemma68}
\end{figure}
Note that $\bigcup_{n=0}^\infty \widetilde f^n(\widetilde \lambda)$ does not intersect $\widetilde \gamma$. We can choose a small open neighbourhood $V$ of $\widetilde{\lambda }$, such that $\bigcup_{n=0}^N \widetilde{f}^n(V)$ is sufficiently close to $\bigcup_{n=0}^N \widetilde{f}^n(\widetilde{\lambda })$, so that it does not intersect $ \widetilde{\gamma }$. Observing Lemma~\ref{Generic_Bounded_in_(-alpha,-beta)}, we can also ensure that $\widetilde{f}^n(V)$ does not intersect $\widetilde{\gamma },$ for all $n>N$. Finally, choose a small neighbourhood $K$ of $\widetilde q$, such that, for every point in $K$, either it belongs to the stable branch of $\widetilde q$, or it has a forward iterate belonging to $V$
(See Figure~\ref{figlemma68} for the choices of these neighbourhoods).
In fact,
$\bigcup_{n=0}^\infty \widetilde f^n(K)\subset K\bigcup (\bigcup_{n=0}^\infty \widetilde f^n(V))$.
Therefore, all non-negative iterates of $K$ cannot leave
$\mathcal R(\widetilde{\gamma }).$ \end{proof}
\begin{lemma}\label{N_times_becomes_free}
Let $\widetilde \gamma$ be the curve obtained in
Proposition~\ref{good_curve}.
Then there exists
a positive integer $N$ such that,
\begin{equation}\label{gamma_is_free}
\widetilde f^N(\widetilde \gamma)\bigcap \widetilde \gamma =\emptyset.
\end{equation}
\end{lemma}
\begin{proof}
Suppose by contradiction that
there exists some sequence
of points $\widetilde z_{(n)} \in \widetilde \gamma$
such that $\widetilde f^n(\widetilde z_{(n)})\in \widetilde \gamma$.
Noticing item (1) of Proposition~\ref{good_curve},
we can choose all the points $\widetilde z_{(n)}$ in a compact fundamental domain of
$\widetilde{\gamma}$, denoted $K.$ In particular,
they have an
accumulation point
$\widetilde z_\ast$.
Up to extracting
a subsequence, simply
assume $\widetilde z_{(n)} \to \widetilde z_\ast$.
Moreover, from Theorem~\ref{Fabio_Bounded}, there exists a constant $M^*>0$ such that, for all integers $n>0,$ $\widetilde{f}^n(K) \cap \widetilde{\gamma}$ is contained in the $M^*$-neighbourhood of $K$ in $\widetilde{\gamma}$.
By Theorem~\ref{Fabio_Bounded} and
Lemma~\ref{Generic_Bounded_in_(-alpha,-beta)},
the forward orbit $\{\widetilde f^n(\widetilde z_\ast)\}_{n\geq 0}$
is bounded in three
directions $(-\alpha,-\beta), (-\beta,\alpha), (\beta,-\alpha)$.
Then, either $\{\widetilde f^n(\widetilde z_\ast)\}_{n\geq 0}$
is unbounded in the direction
$(\alpha,\beta)$, or it is bounded.
We seek contradictions in both cases.
If $\{\widetilde f^n(\widetilde z_\ast)\}_{n\geq 0}$ is bounded,
then the omega limit set $\omega(\widetilde z_\ast)$
must be a single fixed point, otherwise, by the arguments
used in the proof of Lemma~\ref{unstable_goes_to_infinity},
one finds a positive index fixed point, which does not exist by
Proposition~\ref{corollary_about_periodic_non_fixed_point}.
So $\widetilde z_\ast$
belongs to some stable branch $W^s(\widetilde q)$,
for some fixed point $\widetilde q \in \mathcal R(\widetilde \gamma)$.
By Lemma~\ref{good_neighbourhood},
there exists a compact neighbourhood $\widetilde K$ of $\widetilde q$,
whose forward iterates do not intersect $\widetilde \gamma$. And
there exists a positive integer $m_0$, such that
$\bigcup_{k=0}^{m_0} \widetilde f^{-k}(\widetilde K)$
contains some neighbourhood $N$
of
$\widetilde z_\ast$. This is a contradiction, because for sufficiently
large integers $m > m_0$,
$\widetilde z_{(m)}\in N$
and $\widetilde f^m(\widetilde z_{(m)})\in \widetilde \gamma$.
The other case is when the orbit of $\widetilde z_\ast$ is unbounded.
Then, for sufficiently large $k_0$,
\begin{equation}
\inf_{\widetilde w\in K} \text{pr}_{(\alpha,\beta)}
(\widetilde f^{k_0}(\widetilde z_\ast)- \widetilde w) > 10(M+M^*+1).
\end{equation}
Then, when $m\geq k_0$ is sufficiently large,
\begin{equation}
\text{pr}_{(\alpha,\beta)}
\big(\widetilde f^{k_0}(\widetilde z_{(m)})-\widetilde z_{(m)} \big) >10(M+M^*+1).
\end{equation}
Noticing Lemma~\ref{Generic_Bounded_in_(-alpha,-beta)},
this provides a contradiction,
since $\widetilde f^m(\widetilde z_{(m)})$ lies in the $M^*$-neighbourhood of $K$ in $\widetilde \gamma$.
The proof is completed now.
\end{proof}
For the oriented curve $\widetilde \gamma$ from
Proposition~\ref{good_curve} above,
$\mathcal R(\widetilde \gamma)$ denotes
the unbounded connected component of $(\widetilde \gamma)^c$
in the direction of $(\alpha,\beta)$. In
Lemma~\ref{N_times_becomes_free}, we have obtained the integer $N$ such that
$\widetilde f^N(\widetilde \gamma) \subset \mathcal R(\widetilde \gamma)$.
The following is a standard argument. Consider the finite union of
curves,
\begin{equation}
Q:= \bigcup_{j=0}^{N-1} \widetilde f^j(\widetilde \gamma).
\end{equation}
Clearly, the complement of $Q$ has a component which is unbounded in the
direction of $(\alpha,\beta)$. If
$\widetilde \ell$ is the boundary of this component, then
$\widetilde f(\widetilde \ell) \cap \widetilde \ell=\emptyset.$ This $\widetilde \ell$ is clearly the lift of a vertical loop in the torus. This completes the proof of Theorem~\ref{Thm_Generic_Family}.
\end{proof}
We close this section by restating and proving the remaining part of Theorem~\ref{generic_family_not_perturbable}.
\begin{theorem}[Remaining part of Theorem~\ref{generic_family_not_perturbable}] Let $\widetilde f$ denote some lift of $f\in \mathcal K^r$, and suppose $\rho(\widetilde f)$ is the line segment from $(0,0)$ to $(\alpha,\beta)$. Let $\gamma$ be any straight line passing through $(0,0)$, which does not contain $\rho(\widetilde f)$. Then there exists $\varepsilon_0>0$ such that for any $\widetilde g\in \widetilde{ \text{Homeo}}_0(\Bbb T^2)$,
which is $C^0$-$\varepsilon_0$-close to $\widetilde f$, the rotation set $\rho(\widetilde g)$ does not intersect the connected component of $\gamma^c$ which does not intersect $\rho(\widetilde f)$. \end{theorem}
\begin{proof} Choose two reduced integer vectors, $(a,b)$ and $(a',b')$, with the following properties. \begin{enumerate} \item the two rays from $(0,0)$ in the directions $(a,b)$ and $(a',b')$ define a closed cone $C$ which contains the vector $(\alpha,\beta)$ in its interior. \item the interior of $C$ is contained in one of the connected components of $\gamma^c$. \end{enumerate} By Theorem~\ref{Thm_Generic_Family}, there are two $\widetilde f$-Brouwer lines $\widetilde \ell_1$ and $\widetilde \ell_2$, such that, $\widetilde \ell_1+(a,b)=\widetilde \ell_1$, and $\widetilde \ell_2+(a',b')=\widetilde \ell_2$. Since both $\widetilde \ell_1$ and $\widetilde \ell_2$ are lifts of simple closed curves in $\Bbb T^2$, there exists $\varepsilon_0$, such that, for any $\widetilde g\in \widetilde{\text{Homeo}}_0(\Bbb T^2)$, with $\text{dist}_{C^0}(\widetilde g,\widetilde f)<\varepsilon_0$, those two lines $\widetilde \ell_1$ and $\widetilde \ell_2$ are still Brouwer lines for $\widetilde g$. This implies that $\rho(\widetilde g)\subset C$. In particular, $\rho(\widetilde g)\backslash \{(0,0)\}$ is contained in the connected component of $\gamma^c$ which contains $\rho(\widetilde f)\backslash \{(0,0)\}$. \end{proof}
\section{Unbounded Deviations} In this section, we show the following theorem.
\begin{theorem}[Theorem~\ref{generic_bounded_unbounded} restated]\label{unbounded_final} Suppose $\widetilde f$ is a lift of some $f\in \mathcal K^r$, and $\rho(\widetilde f)$ is a segment from $(0,0)$ to a totally irrational point $(\alpha,\beta)$. Assume further that $f$ preserves a
foliation on $\Bbb T^2$. Then $\widetilde f$ has unbounded deviation along the direction $(\alpha,\beta)$. \end{theorem}
\begin{proof} Let us assume by contradiction that
there exists $M>0$, such that
\begin{equation}\label{Bounded_by_S} \sup_{\widetilde x \in \Bbb R^2, n\geq 1} \text{pr}_{(\alpha,\beta)} \big ( \widetilde f^n (\widetilde x) -\widetilde x -n(\alpha,\beta) \big) \leq M.
\end{equation} Recalling Definition~\ref{Mab_Sab}, $\mathcal S_{(\alpha,\beta)}$ is the closure of the union of the support of all the $f$-invariant ergodic probability measures whose average rotation vector for $\widetilde f$ is $(\alpha,\beta)$. Then, Lemma~\ref{geq_-M} shows that
for any lift $\widetilde x\in \Bbb R^2$ of
some $x\in \mathcal S_{(\alpha,\beta)}$, and any $n\geq 1$,
\begin{equation}\label{Bounded_Below_again}
\text{pr}_{(\alpha,\beta)} \big(\widetilde f^n(\widetilde x) -\widetilde x -n(\alpha,\beta) \big) \geq -M.
\end{equation}
Clearly, for any fixed point $p$, there exists a small disk
$B$ containing
$p$, such that for any point in $B$, its $f$-iterates will
remain close to $p$ for a long time, both
in the future and in the past.
Expression (\ref{Bounded_Below_again})
implies immediately that,
when $B$ is sufficiently small,
the whole orbit of an arbitrary point in $\mathcal S_{(\alpha,\beta)}$ does not intersect
$B$.
Choose a fixed point $p$. Since $f$ preserves the foliation $\mathcal F$, the leaf $\mathcal F(p)$ containing $p$ must be the union of $W^s(p)$ and $W^u(p)$. Choose a local leaf $L\subset \mathcal F(p)$, which connects some point $y\in W^s(p)$ to a point $y'\in W^u(p)$. Let $V$ be an open neighbourhood of $L$ such that for any local leaf in $V,$ its forward and backward iterates under $f$ also intersect $V$. Choose two small arcs $\gamma$ and $\gamma'$, both contained in $V$, transverse to the local foliation restricted to $V$, such that the arc $\gamma$ connects $y$ to a point $x$ and $\gamma'$ connects $y'$ to a point $x'$. Moreover, $f(\gamma)$ and $f^{-1}(\gamma')$ are also both contained in $V$, and $x$ and $x'$ bound a local leaf $\theta^+$.
It is also convenient to choose $\theta^+$ so that it belongs either to $W^u(p)$ or $W^s(p)$. This is possible because both branches are dense in $\Bbb T^2$, see Lemma \ref{density_unboundedness}.
Denote the closed region bounded by $\gamma,\theta^+,\gamma',L$ as $K$ (See Figure~\ref{fig7a}). Note that $K$ can be chosen arbitrarily close to $L$.
\begin{figure}
\caption{Local Foliation around $p$.}
\label{fig7a}
\end{figure}
The claim is that some local leaf in $K$, contained in $W^s(p)$ or $W^u(p)$ and different from $L$, must contain a fundamental domain of $W^s(p)$ or $W^u(p)$, that is, some sub-arc connecting a point and its image. Suppose by contradiction that this is not true.
Then one of the following cases must happen (see Figure~\ref{fig7a}). \begin{enumerate} \item $f(\theta^+)$ intersects $\theta^+$. \item $f(\theta^+)$ is above $\theta^+$. \item $f(\theta^+)$ is below $\theta^+$. \end{enumerate}
If case (1) happens, since $\theta^+$ belongs to $W^s(p)$ or $W^u(p)$, then it contains a fundamental domain of $W^s(p)$ or $W^u(p)$ and we are done.
Up to considering the backward dynamics and exchanging the roles of stable and unstable branch, we can simply assume $f(\theta^+)$ is below $\theta^+$. Then, $f(\gamma)$ is contained in $K$, provided the region is chosen sufficiently close to $L$.
Since $W^s(p)$ is dense, we can follow it until the first time it enters the region $K$. Denote by $z \in W^s(p)$ the first entering point $(z \in \gamma')$. The local leaf containing $z$, denoted $T$, intersects $\gamma$ at a point $q$. If $f^{-1}(z) \in T$, then we find the fundamental domain in $T$. If $f^{-1}(z) \notin T$, then $f(q)\in K$, and this contradicts the fact that $z$ is the first returning point to $K$ along $W^s(p)$.
So, there is a fundamental domain of $W^s(p)$ contained in some local leaf in $K$, other than $L$. Now we pick a lift $\widetilde p$ of $p$, and consider the lifted leaf $\widetilde{\mathcal F}(\widetilde p)$ containing $W^s(\widetilde p)$ and $W^u(\widetilde p)$. By previous paragraphs, there is some integer $(a,b)$ such that,
the curves $\widetilde {\mathcal F}(\widetilde p)$ and $\widetilde {\mathcal F}(\widetilde p)+(a,b)$ bound an infinite strip $\widetilde H$, whose union with these two curves covers $\Bbb T^2$. Moreover, we can find a small fundamental domain for $\widetilde f$ restricted to $\widetilde H$, namely $\widetilde K'\subset \widetilde K$, such that for any point $\widetilde z \in \widetilde H$ whose orbit is positively and negatively unbounded, its orbit must intersect $\widetilde K'$ (See Figure~\ref{fig7b}).
\begin{figure}
\caption{A Fundamental Domain that All Unbounded Trajectories cross.}
\label{fig7b}
\end{figure}
On the other hand, we can choose $K \subset B$, where the disk $B$ was obtained at the beginning of the proof. Therefore, $K'=\pi(\widetilde K') \subset K\subset B$ intersects the orbit of any chosen point in $\mathcal S_{(\alpha,\beta)}$ (one whose orbit is unbounded both in the future and past). And this is a contradiction with the fact that $\mathcal S_{(\alpha,\beta)}$ avoids $B$. \end{proof}
\end{document}
\begin{document}
\title{\LARGE \bf
Adaptive Learning to Speed-Up Control of Prosthetic Hands: a Few Things Everybody Should Know
} \thispagestyle{empty} \pagestyle{empty}
\begin{abstract} A number of studies have proposed to use domain adaptation to reduce the training effort needed to control an upper-limb prosthesis by exploiting pre-trained models from prior subjects. These studies generally reported impressive reductions in the required number of training samples to achieve a certain level of accuracy for intact subjects. We further investigate two popular methods in this field to verify whether this result equally applies to amputees. Our findings show instead that this improvement can largely be attributed to a suboptimal hyperparameter configuration. When hyperparameters are appropriately tuned, the standard approach that does not exploit prior information performs on par with the more complicated transfer learning algorithms. Additionally, earlier studies erroneously assumed that the number of training samples relates proportionally to the efforts required from the subject. However, a repetition of a movement is the atomic unit for subjects and the total number of repetitions should therefore be used as a reliable measure of training effort. Even when correcting for this mistake, we do not find any performance increase due to the use of prior models. \end{abstract}
\section{Introduction}\label{sec:introduction}
A majority of upper-limb amputees are interested in prostheses controlled via \ac{SEMG}, but they perceive the difficult control as a great concern~\cite{atkins96}. Machine learning has opened a new path to tackle this problem by allowing the prosthesis to adapt to the myoelectric signals of a specific user. Although these methods have been applied with success in an academic setting (e.g., \cite{castellini09} and references therein), they require a long and painful training procedure to learn models with satisfactory performance.
Several studies have proposed to reduce the amount of required training data by leveraging over previous models from different subjects~\cite{orabona09, tommasi11,patricia14}. The underlying idea is that a model for a new target user can be bootstrapped from a set of prior source models. Though the idea is appealing and initial studies have shown remarkable improvements, there is no conclusive evidence that these strategies lead to tangible benefits in the real world. An obvious limitation in the earlier studies is that they only considered intact subjects. This is relevant since myoelectric signals are user-dependent and this holds in particular for amputees, as the amputation and subsequent muscular use have a considerable impact on the quality on the myoelectric signals~\cite{farina02}.
Other limitations relate to the technical and conceptual execution of the experimental validation. First, the hyperparameters of the algorithms were not optimized for the method at hand, but rather chosen based on how well they performed on average when applied on the data of other subjects. Individual hyperparameter optimization for the methods and number of training samples is crucial for the successful application of machine learning algorithms and omission of this procedure may skew results. For instance, this tuning procedure may give an unfair disadvantage to the baseline that does not use prior information from pre-trained users.
On the conceptual level, the previous studies used the number of training samples (i.e., windows of the myoelectric signals) as a measure of the required training effort. A consequence of this strategy is that not all available training samples were used to classify the movements, since subjects cannot produce individual samples. Instead, a repetition of a movement consisting of multiple windows is the atomic unit for subjects. This artificial reduction of training information is likely to be disadvantageous for the baseline that only relies on target training data. In our evaluation, we instead use the number of movement repetitions as a realistic measure of the training effort. Furthermore, we consider all possible data, subjects and combinations of training repetitions, thereby removing the possible effects of random selection from the evaluation.
In this paper, we provide more insight into the benefits of domain adaptation for prosthetic control by augmenting the experiments by \citet{patricia14} with amputated subjects while also addressing other limitations. This results in three experimental settings, namely (1) the original experiments according to the setup common in literature~\cite{orabona09,tommasi11,patricia14} extended with amputated subjects, (2) the same setup with hyperparameter optimization and finally (3) a realistic setup where we also address the conceptual issues. In each setting, we perform three experiments, where intact and amputated subjects make up the groups of target and source subjects.
This paper is structured as follows. In \autoref{sec:relatedwork} we present the related work on domain adaptation in the context of myoelectric prosthetics. The algorithms that will be considered in our experiments will then be explained in detail in \autoref{sec:algorithms}. We continue with our experimental setup in \autoref{sec:setup}, after which we will present the results in \autoref{sec:experiments}. Finally, we conclude the paper in \autoref{sec:conclusions}.
\section{Related Work}\label{sec:relatedwork}
One of the first attempts to classify myoelectric signals of three volunteers was by \citet{graupe75}. In the following years, studies on prosthetic control led to many advances in the analysis and understanding of \ac{SEMG}. \citet{castellini09} noted that myoelectric signals differ significantly from person to person and that models trained for different subjects are therefore not automatically reusable. However, they showed that a pre-trained model could be used to classify samples from similar subjects.
Several studies continued in this direction with different strategies to build more robust models that take advantage of past information from source subjects or, in the context of repeatability, from the target itself. \citet{matsubara11} proposed to separate myoelectric data in user-dependent and motion-dependent components, and to reuse models by quickly learning just the user-dependent component for new subjects. \citet{sensingero09} presented different methods based on an appropriate concatenation of target and source data. Others still approached the problem by searching for a mapping to project data from different subjects into a common domain~\cite{chattopadhyay11,khushaba14}; a similar strategy was also used to reduce the recalibration time for a target that attempts to use the prosthesis on different days~\cite{liu16towards}.
Other studies proposed to leverage over prior models from already trained source subjects without requiring direct access to their data~\cite{orabona09,tommasi11,patricia14,liu16}. They tested different types of so-called \ac{HTL} algorithms showing a gain in performance with respect to non-adaptive baselines. We are particularly interested in the findings of \citet{tommasi11,patricia14}, who worked with a significant number of classes and intact subjects from the public \ac{NINAPRO} database~\cite{atzori14}. They report that the number of training samples required to obtain a given level of performance can be reduced by an order of magnitude as compared to learning from scratch.
\section{Algorithms} \label{sec:algorithms}
We first describe the mathematical background by means of a base learning algorithm in \autoref{sec:algorithms:background}, then we proceed with the domain adaptation methods included in our evaluations in \autoref{sec:algorithms:adaptive}.
\subsection{Background}\label{sec:algorithms:background} Let us define a training dataset $D = \left\{ \bm{x}_{i}, y_i \right\}_{i=1}^N$ of $N$ input samples $\bm{x}_i \in \mathcal{X} \subseteq \mathbb{R}^d$ and corresponding labels $y_i \in \mathcal{Y} = \left\{1, \ldots, G\right\}$. In the context of myoelectric classification, the inputs are the myoelectric signals and the labels are the movements chosen from a set of $G$ possible classes.
The goal of a classification algorithm is to find a function $h(\bm{x})$ that, for any future input vector $\bm{x}$, can determine the corresponding output $y$. Among the algorithms that construct such a model, \acp{SVM} are some of the most popular.
The base of the domain adaptation algorithms described later on is the \ac{LSSVM}~\cite{suykens02}, a variant of \ac{SVM} with a squared loss and equality constraint. It writes the output hypothesis as $h(\bm{x}) = \langle\bm{w}, \phi(\bm{x}) \rangle + b $, where $\bm{w}$ and $b$ are the parameters of the separating hyperplane between positive and negative samples. The optimal solution is thus given by \begin {equation} \label{eq:ls-svm} \begin{aligned} &\underset{\bm{w},b}{\text{min}} \left\{ \dfrac{1}{2} \Vert \bm{w} \Vert^{2} + \dfrac{C}{2} \sum\limits_{i=1}^N \xi_{i}^{2} \right\} \\ &\text{s.t.}\quad y_{i} = \langle\bm{w},\phi(\bm{x}_{i}) \rangle+b+\xi_{i}, \quad \forall \textit{i} \in \{ 1,...,N \}\enspace,\\ \end{aligned}\end {equation} where $C$ is a regularization parameter and $\bm{\xi}$ denotes the prediction errors. We approach our multi-class classification problem via a one-vs-all scheme to discriminate each class from all others. To obtain a better solution we mapped the input vectors $\bm{x}_i$ into a higher dimensional feature space using $\phi(\bm{x}_i)$. Usually, this mapping $\phi(\cdot)$ is unknown and we work directly with the kernel function $K(\bm{x}\textprime, \bm{x})~=~\langle \phi(\bm{x}\textprime), \phi(\bm{x}) \rangle$~\cite{suykens02}. In the following, we use a \ac{RBF} kernel \begin {equation} \label{eq:GausKer} K(\bm{x}\textprime, \bm{x}) = e^{- \gamma \parallel \bm{x}\textprime - \bm{x} \parallel^{2}}\enspace \text{with}\enspace \gamma > 0\enspace. \end {equation}
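To make the solution concrete, the optimality conditions of (\ref{eq:ls-svm}) reduce to a single linear system in the dual variables. The following Python sketch is purely illustrative (the helper names are ours and the data are random placeholders); it solves this system for one binary one-vs-all subproblem with the \ac{RBF} kernel of (\ref{eq:GausKer}).
\begin{verbatim}
import numpy as np

def rbf_kernel(A, B, gamma):
    # K(x', x) = exp(-gamma * ||x' - x||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C, gamma):
    # LS-SVM optimality conditions: [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
    N = len(y)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, gamma) + np.eye(N) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b and dual coefficients alpha

def lssvm_decision(X_train, b, alpha, X_test, gamma):
    # h(x) = sum_i alpha_i K(x, x_i) + b
    return rbf_kernel(X_test, X_train, gamma) @ alpha + b

# one-vs-all usage on random placeholder data: class 0 against the rest
X = np.random.randn(60, 8)
labels = np.random.randint(0, 3, size=60)
y = np.where(labels == 0, 1.0, -1.0)
b, alpha = lssvm_fit(X, y, C=10.0, gamma=0.1)
scores = lssvm_decision(X, b, alpha, X, gamma=0.1)
\end{verbatim}
A multi-class decision is obtained by solving one such subproblem per class and assigning the label with the largest decision value.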
\subsection{Adaptive Learning}\label{sec:algorithms:adaptive} Domain adaptation algorithms construct a classification model for a new target using past experience from the sources. More specifically, let us assume that we have $K$ different sources, where each source is a classification model for the same set of movements. The used \ac{HTL} algorithms can then be described as follows.
\subsubsection{\acl{MULTIKT}}\label{sec:algorithms:multikt} This method aims to find a new separating hyperplane $\bm{w}$ that is close to a linear combination of the pre-trained source hypotheses $\hat{\bm{w}}^{k}$~\cite{tommasi11,tommasi14}. We solve the optimization problem \begin{equation} \label{eq:OptimMultiAdaMulti} \begin{aligned} &\underset{\bm{w},b}{\text{min}}\> \left\{ \dfrac{1}{2} \Big\Vert \bm{w}-\sum\limits_{k=1}^K \beta^{k} \hat{\bm{w}}^{k} \Big\Vert^{2}+\dfrac{C}{2} \sum\limits_{i=1}^N \xi_{i}^{2} \right\} \\ &\text{s.t.}\quad y_{i} = \langle\bm{w}, \phi(\bm{x}_{i})\rangle + b + \xi_{i}\enspace. \end{aligned} \end{equation} The vector $\bm{\beta} = \left[\beta_{1},...,\beta_{K}\right]^{T}$ with $\beta_k \geq 0$ and $\Big\Vert \bm{\beta} \Big\Vert^{2}~\leq~1$ represents the contribution of each source in the target problem and is obtained by optimizing a convex upper bound of the leave-one-out misclassification loss~\cite{tommasi14}. A more general case consists of different weights for different classes of the same source, such that $\beta_{k,g}$ is the weight associated to class $g$ of source $k$. In this work we used this latter version of the algorithm.
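For fixed weights $\bm{\beta}$, the biased regularization in (\ref{eq:OptimMultiAdaMulti}) can be absorbed by fitting a standard \ac{LSSVM} to the residuals between the labels and the weighted source predictions, and adding the source term back at prediction time. The sketch below is illustrative only: it reuses the hypothetical \texttt{lssvm\_fit} and \texttt{lssvm\_decision} helpers from the previous sketch and treats $\bm{\beta}$ as given, omitting its optimization via the leave-one-out bound of \cite{tommasi14}.
\begin{verbatim}
import numpy as np

def multikt_fit(X, y, source_scores, beta, C, gamma):
    # source_scores: (N, K) raw decision values of the K source hypotheses on X
    # beta: (K,) non-negative weights with ||beta|| <= 1, assumed already chosen
    prior = source_scores @ beta
    b, alpha = lssvm_fit(X, y - prior, C, gamma)   # LS-SVM on the residuals
    return b, alpha

def multikt_decision(X_train, b, alpha, X_test, source_scores_test, beta, gamma):
    # transferred prior term plus the learned target correction
    return source_scores_test @ beta + \
        lssvm_decision(X_train, b, alpha, X_test, gamma)
\end{verbatim}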
\subsubsection{\acl{MKAL}}\label{sec:algorithms:mkal} This algorithm combines source and target information via a linear combination of kernels~\cite{orabona10, orabona12}. Let us define \begin{align} \label{eq:WBar_PhiBar} \bar{\bm{w}} & = [\bm{w}^{0},\bm{w}^{1},...,\bm{w}^{K}] \quad \text{and} \\ \bar{\phi}(\bm{x},y) & = [\phi^{0}(\bm{x},y),\phi^{1}(\bm{x},y),...,\phi^{K}(\bm{x},y)]\enspace, \end{align} respectively as the concatenation of the target and source hyperplanes and the mapping functions into the corresponding feature spaces. These are both composed of $(K + 1)$ elements: the first refers to the target and the remaining ones to the sources. The optimization problem becomes \begin{equation} \label{eq:MinProb_MKAL} \begin{aligned} &\underset{\bar{\bm{w}}}{\text{min}} \left\{ \dfrac{\lambda}{2} \parallel \bar{\bm{w}} \parallel^{2}_{2,p} + \dfrac{1}{N} \sum\limits_{i=1}^N \xi_{i} \right\}\\ &\text{s.t.}\quad \langle \bar{\bm{w}} , (\bar{\phi} (\bm{x}_{i},y_{i}) - \bar{\phi} (\bm{x}_{i},y) ) \rangle \geq 1 - \xi_{i},\, \forall \textit{i} \ y \neq y_{i}\enspace. \end{aligned} \end{equation} The element $p$ regulates the level of sparsity in the solution $\bar{\bm{w}}$ and can vary in the range $(1, 2]$. The solution is obtained via stochastic gradient descent during $T$ epochs over the shuffled training samples.
\section{Experimental Setup}\label{sec:setup}
The experimental evaluation is subdivided in three settings. The first one is modeled after related literature for this kind of experiments with \ac{SEMG} data~\cite{orabona09,tommasi11,patricia14}. The second is identical but adds hyperparameter optimization for the target models. The third and final one is a novel framework in which we fixed the shortcomings of the previous settings to make the experiments as realistic as possible. We will refer to the settings as original, optimized and realistic. In the following, we first explain the used data and classifiers, and subsequently elaborate on the experimental settings.
\subsection{Data}\label{sec:setup:data} The data used in our work are from the \ac{NINAPRO} database\footnote{\url{http://ninapro.hevs.ch/}}~\cite{atzori14}, the largest publicly available database for prosthetic movement classification with 40 intact subjects and 11 amputees. Each subject executed 40 movements 6 times, such that each repetition was alternated with a rest posture. While performing the movements, twelve electrodes acquired \ac{SEMG} data from the arm of the subject. The standardized data were used according to the control scheme by \citet{englehart03}, where we extracted features from a sliding window of 200\,ms and an increment of 10\,ms. The resulting set of windows was subsequently split into train and test sets for the classifier; data from repetitions $\left\{ 1, 3, 4, 6 \right\}$ were dedicated to training while data from repetitions 2 and 5 were used as test. To reduce the computational requirements, we subsampled the training data by a factor of 10 at regular intervals.
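As an illustration of this windowing scheme, the following sketch extracts \ac{MAV}, \ac{VAR} and \ac{WL} features from a multi-channel \ac{SEMG} recording; the sampling rate, array shapes and random data are placeholders and do not reproduce the exact \ac{NINAPRO} preprocessing.
\begin{verbatim}
import numpy as np

def window_features(emg, fs, win_ms=200, inc_ms=10):
    # emg: (num_samples, num_channels) sEMG array; fs: sampling rate in Hz
    win, inc = int(win_ms * fs / 1000), int(inc_ms * fs / 1000)
    feats = []
    for start in range(0, emg.shape[0] - win + 1, inc):
        w = emg[start:start + win]
        mav = np.mean(np.abs(w), axis=0)                  # mean absolute value
        var = np.var(w, axis=0)                           # variance
        wl = np.sum(np.abs(np.diff(w, axis=0)), axis=0)   # waveform length
        feats.append(np.concatenate([mav, var, wl]))
    return np.array(feats)

# hypothetical usage: 10 s of 12-channel data sampled at 2 kHz
features = window_features(np.random.randn(20000, 12), fs=2000)
\end{verbatim}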
\subsection{Classifiers}\label{sec:setup:classifiers}
The algorithms used to build the classification models were the two mentioned \ac{HTL} algorithms together with two baselines: \begin{itemize} \item the \emph{\ac{NOTRANSFER}}, which uses an \ac{LSSVM} with \ac{RBF} kernel trained only on the target data. This corresponds to learning without the help of prior knowledge. \item the \emph{\ac{PRIOR}}, which learns an \ac{LSSVM} with linear kernel on top of the raw predictions of the source models. This measures the relevance of the source hypotheses by using them as feature extractors for the target data. \item \emph{\ac{MULTIKT}}, as explained in \autoref{sec:algorithms:multikt}, which learns a model on the target data that is close to a weighted combination of the source hypotheses. \item \emph{\ac{MKAL}}, as explained in \autoref{sec:algorithms:mkal}, which linearly combines an \ac{RBF} kernel on the target data with the source predictions. Parameters \textit{p} and \textit{T} were set to 1.04 and 300.
\end{itemize} The classification models for the sources were based on a non-linear \ac{SVM} or \ac{LSSVM} with \ac{RBF} kernel.
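The \ac{PRIOR} baseline can be sketched as follows: the raw decision values of the pre-trained source models are stacked as features for the target data, and a linear least-squares classifier is learned on top of them. The data below are random placeholders, and scikit-learn's \texttt{RidgeClassifier} is used merely as a convenient least-squares stand-in for the linear-kernel \ac{LSSVM}.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(0)
G, K = 5, 3                                   # movements, source subjects

# hypothetical pre-trained source models (one multi-class SVM per source)
sources = []
for _ in range(K):
    Xs = rng.standard_normal((200, 12))
    ys = rng.integers(0, G, size=200)
    sources.append(SVC(kernel="rbf", C=10.0, gamma=0.1).fit(Xs, ys))

# target training data
Xt = rng.standard_normal((80, 12))
yt = rng.integers(0, G, size=80)

# PRIOR: stack the source decision values and learn a linear model on top
F = np.hstack([s.decision_function(Xt) for s in sources])   # (80, K*G)
prior_clf = RidgeClassifier(alpha=1.0).fit(F, yt)
\end{verbatim}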
\subsection{Settings}\label{sec:setup:settings}
For each of the settings, we ran three experiments with distinct groups of target and source subjects: \begin{itemize} \item Intact-Intact: intact target subjects exploit prior knowledge of other intact sources; \item Amputees-Amputees: amputated target subjects exploit previous experience of other amputees; \item Amputees-Intact: amputated target subjects exploit prior knowledge of intact subjects. \end{itemize} In the first and second experiments, each subject took the role of target just once, while the remaining subjects were used as sources. In the third, all of the amputees were once the target and the set of intact subjects was used only as sources.
\subsubsection{Original Setting}\label{sec:setup:settings:original}
The purpose of the original and optimized settings is to investigate the isolated impact of hyperparameter optimization on the performance of the target classifiers. We therefore replicated, as closely as possible, the experiments from \citet{patricia14} with 9 amputees\footnote{We omitted two amputees from the database that had only 10 electrodes due to insufficient space on their stump.} and a random subset of 20 intact subjects from the \ac{NINAPRO} database. For these subjects we considered 17 movements plus the rest posture, appropriately subsampled to balance it with the other movements. The \ac{SEMG} representation used in this setting was the average of \ac{MAV}, \ac{VAR} and \ac{WL} features~\cite{kuzborskij12}, so as to reduce the dependency on one specific type of representation. The details of this and the subsequent settings are presented schematically in \autoref{tab:setup:settings}.
\begin{table*}
\caption{Experimental settings.}
\centering
\begin{tabular} {l p{2.0cm} c l l p{3.0cm} p{3.0cm}}
\textbf{Setting} & \textbf{Movements} & \# \textbf{Int./Amp.} & \textbf{Features} & \textbf{Source model} & \textbf{Source hyperparameters} & \textbf{Target hyperparameters} \\
\midrule
\textbf{Original} & 9 wrist, 8 finger, subsampled rest & 20/9 & avg. \acs{MAV}/\acs{VAR}/\acs{WL} & \acs{SVM} & balanced accuracy on other subjects & same as source \\
\addlinespace
\textbf{Optimized} & 9 wrist, 8 finger, subsampled rest & 20/9 & avg. \acs{MAV}/\acs{VAR}/\acs{WL} & \acs{SVM} & balanced accuracy on other subjects & balanced accuracy of 5-fold \acs{CV} over shuffled training samples of target\\
\addlinespace
\textbf{Realistic} & 9 wrist, 8 finger, 23 grasp & 40/8 & \acs{MDWT} & \acs{LSSVM} & accuracy of k-fold \acs{CV} over repetitions of source & accuracy of k-fold \acs{CV} over train repetitions of target \\
\end{tabular}
\label{tab:setup:settings} \end{table*}
The source models were created by training an \ac{SVM} with \ac{RBF} kernel using all training repetitions of the respective subject. For the target models we trained the classifiers on an increasing number of random samples, from 120 to 2160 in steps of 120, from the training repetitions. The hyperparameters for both the source and target models were chosen from $C, \gamma \in \lbrace 0.01,0.1,1,10,100,1000 \rbrace$ and kept constant regardless of the number of training samples. For each parameter configuration, we evaluated the average balanced classification accuracy of each source subject when tested on the target subjects. For the target subject and its source models, we then chose the configuration that maximizes this average, making sure to exclude the data from the target subject. The motivation for this procedure is that biased regularization in \ac{MULTIKT} requires the source and target models to ``live'' in the same space.
\subsubsection{Optimized Setting}\label{sec:setup:settings:hyper}
Strictly speaking, the above assumption only requires that all the sources have the same \ac{RBF} bandwidth $\gamma$ as the related target. Moreover, \ac{MULTIKT} can also be interpreted as predicting the difference between the source predictions and the true labels~\cite{kuzborskij16}. In this alternative interpretation, there is no need for source and target models to use the same kernel. We therefore tuned the hyperparameters in the optimized setting for each individual target and for each training set size based on 5-fold \ac{CV} on the target training set. Note, however, that we still use the original method to determine the parameters for the source models.
\subsubsection{Realistic Setting}\label{sec:setup:settings:realistic}
In the final setting we extended the hyperparameter optimization procedure and attempted to address all issues to make the experiments as realistic as possible. First, we considered all available movements and subjects in the \ac{NINAPRO} database\footnote{Data for the first amputated subject was omitted, since the acquisition was interrupted prematurely.}. As \ac{SEMG} representation we used \ac{MDWT} features, which have previously shown excellent performance in related work on this database~\cite{gijsberts14}.
The main conceptual innovation with respect to the previous settings is that we trained target models on an increasing number of repetitions. The motivation is that the effort of the subject during data acquisition is given by the required number of repetitions of each movements, so we analyze the accuracy as a function of this atomic unit. Given the set of training repetitions $\left\{ 1, 3, 4, 6 \right\}$, we considered all possible subsets of length between 1 and 4 repetitions. For all these cases, we optimized the target model using $k$-fold \ac{CV}, where each fold corresponded to samples belonging to one repetition. In the exceptional case of only a single training repetition, we instead used 5-fold \ac{CV} over the samples. The parameter grid was extended to $C \in \lbrace 2^{-6}, 2^{-4}, \ldots, 2^{12}, 2^{14} \rbrace$ and $\gamma \in \lbrace 2^{-20}, 2^{-18}, \ldots, 2^{-2}, 2^{0} \rbrace$. Models for the source subjects on the other hand were trained using all repetitions and the hyperparameters were optimized specifically for the individual subject using 6-fold \ac{CV}, where the folds again corresponded to the repetitions. The source models were built with \ac{LSSVM} instead of \ac{SVM} to be more coherent with the other classifiers, which are all derived from \ac{LSSVM}. Due to the much larger number of samples in this realistic setting\footnote{Each repetition consists of approximately 35000 samples.} we further subsampled the data used for hyperparameter optimization by a factor of 4. For the same reason we decided to omit \ac{MKAL} from the analysis.
\section{Experiments} \label{sec:experiments}
In this section we first investigate the isolated impact of hyperparameter optimization when applied to the original setting. Then we verify whether the findings also apply to the realistic setting described in \autoref{sec:setup:settings:realistic}. An in-depth discussion follows on the explanations of the results.
\subsection{Results}\label{sec:experiments:hyper}
In \autoref{fig:experiments:original} we report the balanced classification accuracy as a function of the number of training samples averaged over all target subjects. The dotted lines indicate the results obtained in the original experimental framework usually employed in literature (see \autoref{sec:setup:settings:original}). As in the related studies, \ac{MKAL} and \ac{MULTIKT} outperform the baselines \ac{NOTRANSFER} and \ac{PRIOR} by a significant margin for all training set sizes. This has led to the claim that the adaptive algorithms can achieve similar performance as \ac{NOTRANSFER} using an order of magnitude less training samples. Since this improvement is observed whether the target and source subjects are intact or amputated, it is also assumed that amputees can equally exploit prior information from intact as well as other amputated subjects.
When looking at the solid lines in \autoref{fig:experiments:original}, which show results with hyperparameter optimization, we observe that the discrepancies between the algorithms disappear. In other words, the \ac{NOTRANSFER} baseline performs just as well as or even slightly better than the adaptive algorithms. Furthermore, with hyperparameter optimization all methods now outperform the results in the original setting. The only exception to this observation is \acs{PRIORLIN}, which has lower accuracy in the Amputee-Intact experiment. Contrary to the earlier statements, this demonstrates that prior models from intact subjects are not as useful as those from other amputees. Together with the observation that \ac{MKAL} and \ac{MULTIKT} perform nearly identically to \ac{NOTRANSFER}, this also allows us to conclude that rather than transferring from prior models, the \ac{HTL} algorithms rely almost exclusively on target data.
\begin{figure*}
\caption{Balanced classification accuracy for \ac{MULTIKT}, \ac{MKAL}, \ac{NOTRANSFER} and \ac{PRIORLIN} in the original and hyperparameter optimized settings.}
\label{fig:experiments:hyper:intactintact}
\label{fig:experiments:hyper:amputeeamputee}
\label{fig:experiments:hyper:amputeeintact}
\label{fig:experiments:original}
\end{figure*}
\autoref{fig:experiments:realistic} shows the standard classification accuracy for the realistic setting described in \autoref{sec:setup:settings:realistic} averaged over the target subjects and all possible combinations of a given number of training repetitions. Also in this setting the hyperparameters were tuned appropriately and the differences among the methods are again negligible. In addition, we observe significantly lower accuracy among amputees compared to intact subjects, confirming the deterioration of the myoelectric signals due to amputation and subsequent lack of muscular use.
\begin{figure*}
\caption{Standard classification accuracy for \ac{MULTIKT}, \ac{NOTRANSFER} and \ac{PRIORLIN} in the realistic setting.}
\label{fig:experiments:realistic:intactintact}
\label{fig:experiments:realistic:amputeeamputee}
\label{fig:experiments:realistic:amputeeintact}
\label{fig:experiments:realistic}
\end{figure*}
\subsection{Discussion}\label{sec:experiments:discussion}
The results clearly show that the improvements usually attributed to prior knowledge can instead be explained by suboptimal hyperparameter optimization. With properly tuned hyperparameters, the \ac{NOTRANSFER} baseline that completely ignores source information performs as well as the more complicated domain adaptation methods.
There are multiple explanations for the observed differences in performance in the original setting. First, the hyperparameters were chosen based on the performance of an \ac{SVM} when transferring from the source subjects to the target subjects. This gives a disadvantage to \ac{NOTRANSFER}, which does not exploit prior knowledge to train the classifier. Furthermore, this parameter setting is also problematic since all methods are based on \ac{LSSVM}, which uses a different loss function than \ac{SVM}. As can be seen in the objective function in \autoref{eq:ls-svm}, the regularization parameter $C$ is multiplied with the absolute magnitude of all training losses, so an optimal setting for \ac{SVM} does not necessarily work well for \ac{LSSVM}.
A further problem is that the value of $C$ was determined using the total set of training samples. The same value was subsequently used when training on much smaller subsets, leading to a different tradeoff between minimizing training errors and regularizing the solution. This affects all methods except \ac{MKAL}, for which the specific implementation multiplied the given value of $C$ with the number of training samples. \ac{MKAL} therefore effectively used a much larger value of $C$ (i.e., much less regularization), explaining why it outperformed the other methods.
A similar, though slightly more complicated, argument holds for \ac{MULTIKT}. Recall the formulation of biased regularization in \autoref{eq:OptimMultiAdaMulti}; the linear combination of source hypotheses allows to reduce the effect of the regularization term by moving the bias in the direction of the optimal solution. In other words, for the same value of $C$ this allows to concentrate more on minimizing the training errors on the target data.
\section{Conclusions} \label{sec:conclusions}
In this paper, we have tested two popular domain adaptation algorithms that were proposed to reduce the training time needed to control a prosthesis. We found that the improvements in earlier studies can in fact be attributed to suboptimal hyperparameter optimization, which penalized in particular the \ac{NOTRANSFER} reference method. When the hyperparameters are appropriately tuned on the training data of the target subject, the previously reported differences vanish.
This result also holds when correcting for other technical and conceptual mistakes in the original experimental framework. The accuracy of the classification methods in our updated setting was evaluated with respect to the number of repetitions of each movement, which represents the real effort for the user during the training phase, and for all subjects in the \ac{NINAPRO} database. Also in this case, we do not observe any differences between the \ac{HTL} algorithms and the \ac{NOTRANSFER} baseline.
Intuitively, it should be possible to improve performance on a specific task by using prior information from related tasks. Our findings show, however, that in the context of prosthetic control \ac{MULTIKT} and \ac{MKAL}, which transfer just source hypotheses rather than source data, do not lead to improved performance. In future work, we will therefore continue to investigate how to successfully leverage over prior information to reduce the training effort for an amputee. Among the directions we consider are unsupervised domain adaptation via distribution alignment~\cite{pan11,fernando13} and subject invariant data representations using deep learning methods.
\end{document}
\begin{document}
\title[\small{Microscopic approach to field dissipation in the Jaynes-Cummings model}]{Microscopic approach to field dissipation in the Jaynes-Cummings model}
\author{C. A. Gonz\'alez-Guti\'errez} \address{Instituto de Ciencias F\'isicas,
Universidad Nacional Aut\'onoma de M\'exico, Avenida Universidad s/n, 62210 Cuernavaca, Morelos, M\'exico} \ead{[email protected], [email protected]}
\author{D. Sol\'is-Valles} \address{Tecnologico de Monterrey, Escuela de Ingenier\'ia y Ciencias, Ave. Eugenio Garza Sada 2501, Monterrey, N.L., M\'exico, 64849.} \ead{[email protected]}
\author{B. M. Rodr\'iguez-Lara} \address{Tecnologico de Monterrey, Escuela de Ingenier\'ia y Ciencias, Ave. Eugenio Garza Sada 2501, Monterrey, N.L., M\'exico, 64849. \\ Instituto Nacional de Astrof\'isica, \'Optica y Electr\'onica, Calle Luis Enrique Erro No. 1, Sta. Ma. Tonantzintla, Pue. CP 72840, M\'exico} \ead{[email protected], [email protected]}
\begin{abstract} We use the microscopic derivation of the Jaynes-Cummings model master equation under field losses to study the dynamics of initial field states beyond the single-excitation manifold. Using entropy measures, such as the qubit-field purity and the von Neumann entropy of the field, we show that field-qubit detuning, as well as finite temperature, modify the effective decay rate in the model for initial Fock states. For initial semi-classical states of the field, we show that the microscopic approach, in phase space, provides an evolution to thermal equilibrium that is smoother than the one provided by the standard phenomenological approach. \end{abstract}
\pacs{42.50.Ar, 03.67.Lx, 42.50.Pq, 85.25.-j}
\maketitle
\section{Introduction}
The Jaynes-Cummings (JC) model \cite{Jaynes1963p89}, describing the interaction of a single bosonic mode with a two-level system, plays a key role in our understanding of interaction between radiation and matter. It is of central importance for the description of quantum effects, for example, the existence of Rabi oscillations for Fock field states \cite{Jaynes1963p89} and the collapse and revival of the atomic inversion in the presence of coherent fields \cite{Eberly1980p1323}, and constitutes a basic building block for the implementation of quantum gates \cite{Nielsen2000}. The model has been implemented in a variety of experimental platforms \cite{Raimond2001p565,Leibfried2003p281,Xiang2013p623}, where the unavoidable effect of the environment over closed-system dynamics is observed as a deterioration, or even complete suppression, of the expected quantum phenomena \cite{Rempe1987p353,Cirac1994p1202,Meekhof1996p1796,Brune1996p1800}. Thus, an adequate description of loss-mechanisms in different physical scenarios became essential to compare with experimental results, and lead to the proposal and study of different decoherence and dissipation models in the literature \cite{Puri1986p3610,Eiselt1989p351,Eiselt1991p346,Barnett2007p2033,Chiorescu2004p159,Wallraff2004p162,Wilczewski2009p033836,Wilczewski2009p013802}.
Here, we are interested in the microscopic approach to field dissipation in the standard Jaynes-Cummings model. The microscopic approach has demonstrated fundamental dynamical differences with the usual phenomenological approach for the single excitation manifold of the Jaynes-Cummings model at zero \cite{Scala2007p013811} and finite \cite{Scala2007p14527} temperature. Both the microscopic and phenomenological models of dissipation make use of the Born-Markov approximation, which considers a memory-less environment that couples weakly to the system. They differ in the fact that the microscopic approach uses the dressed state basis that diagonalizes the JC model in order to derive the effective master equation, while the phenomenological approach uses the microscopic master equation derived for just a dissipative field mode.
In the following, we review the microscopic derivation of the master equation for field dissipation in the Jaynes-Cummings model, and provide analytical expressions for the state evolution of the system that agree with previous results. Then, we compare the dynamics under this master equation and the standard phenomenological approach beyond the single excitation manifold at zero and finite temperature with a flat environment. In particular, we study the time evolution for initial number and coherent field states through standard observables, like atomic inversion, mean photon number, entropy-related measures, such as purity and von Neumann entropy, and phase space quantities, like quadratures of the field and Husimi Q-function. Finally, we provide our conclusions and a perspective for the description of dissipation for radiation-matter interaction in strong-coupling regimes \cite{Boite2016p033827}.
\section{Microscopic approach for the open JC model}
We follow the formal microscopic derivation of the Markovian master equation for the JC model \cite{Scala2007p013811}, and start from the standard JC Hamiltonian \cite{Jaynes1963p89}, \begin{equation}\label{JCModel} \hat H_{JC}=\frac{\omega_{0}}{2} \hat\sigma_{z} + \omega \hat{a}^{\dagger} \hat{a} +g\left(\hat a\hat \sigma_{+}+\hat a^{\dagger}\hat \sigma_{-}\right), \end{equation} describing a two-level system, a qubit, with transition frequency $\omega_{0}$ and modeled by the standard atomic inversion operator, $\hat{\sigma}_{z}$, and lowering (raising), $\hat{\sigma}_{-}$ ($\hat{\sigma}_{+}$), operators, interacting with a boson field with frequency $\omega$, described by the annihilation (creation) operator $\hat{a}$ ($\hat{a}^\dagger$); the strength of the interaction is provided by the coupling parameter $g$. The JC model assumes near resonance, $\omega \sim \omega_{0}$, and weak coupling, $g \ll \omega, \omega_{0}$. It relates to experimental realizations in cavity-QED \cite{Raimond2001p565}, trapped-ion-QED \cite{Leibfried2003p281}, circuit-QED and more \cite{Xiang2013p623}, Fig \ref{fig:Fig1}.
\begin{figure}
\caption{Schematics for two experimental realizations of the Jaynes-Cummings model, (a) cavity-QED, and (b) ion-trap-QED.}
\label{fig:Fig1}
\end{figure}
The JC model is a typical example of an integrable system: it conserves the total number of excitations $\hat{N}= \hat{a}^{\dagger} \hat{a} + \left( 1 + \hat{\sigma}_{z} \right) /2$. It has a ground state provided by the boson vacuum and the qubit ground state, \begin{eqnarray} \vert \epsilon_{0} \rangle = \vert 0, g \rangle, \end{eqnarray} with zero total excitation number, $\langle \hat{N} \rangle = 0 $, and the rest of the eigenstates are given by the dressed state basis \cite{Gerry2005}, \begin{eqnarray} \vert \epsilon_{n,+} \rangle &=& c_{n} \vert n,e \rangle + s_{n} \vert n+1,g \rangle,\nonumber \\ \vert \epsilon_{n,-} \rangle &=&-s_{n} \vert n,e \rangle + c_{n} \vert n+1,g \rangle, \label{dressed} \end{eqnarray} which define subspaces with mean total excitation $\langle \hat{N} \rangle = n + 1 $, for integer index $n=0,1,2,\ldots$. The normalization coefficients are given by $c_{n} = \cos \left( \theta_{n} /2 \right)$, $s_{n} = \sin \left( \theta_{n} /2 \right)$, and the rotation angle $\theta_{n} = \arctan\left( 2 g \sqrt{n+1} / \Delta \right)$, where the detuning is defined by $\Delta = \omega_{0} - \omega$. The energy spectrum, \begin{eqnarray} \epsilon_{0} &=& - \frac{\omega_{0}}{2}, \nonumber \\ \epsilon_{n,\pm} &=& \left( n + \frac{1}{2} \right) \omega \pm \frac{\Omega_{n}}{2} , \end{eqnarray} is given in terms of the Rabi frequency, $\Omega_{n} = \sqrt{ \Delta^{2} + 4 g^2 \left( n+1 \right)}$.
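The manifold-dependent quantities above are straightforward to evaluate numerically. The following Python sketch (our own illustration, not part of the derivation) returns $c_n$, $s_n$, $\Omega_n$ and $\epsilon_{n,\pm}$ for given coupling, detuning and field frequency; it uses \texttt{arctan2} so that the resonant case $\Delta=0$ is handled without division by zero.
\begin{verbatim}
# Illustrative sketch: dressed-state parameters of the JC model
# for coupling g, detuning delta = omega_0 - omega, and manifold index n.
import numpy as np

def dressed_parameters(n, g, delta, omega):
    """Return (c_n, s_n, Omega_n, eps_plus, eps_minus) for the n-th manifold."""
    rabi = np.sqrt(delta**2 + 4.0 * g**2 * (n + 1))        # Rabi frequency Omega_n
    theta = np.arctan2(2.0 * g * np.sqrt(n + 1), delta)    # rotation angle theta_n
    c_n, s_n = np.cos(theta / 2.0), np.sin(theta / 2.0)
    eps_plus = (n + 0.5) * omega + rabi / 2.0
    eps_minus = (n + 0.5) * omega - rabi / 2.0
    return c_n, s_n, rabi, eps_plus, eps_minus

# Example: first manifold on resonance
print(dressed_parameters(0, g=1.0, delta=0.0, omega=100.0))
\end{verbatim}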
Now, we follow the standard formalism for open quantum systems \cite{Breuer2002}. In other words, we model the environment as a collection of non-interacting bosons, $\hat H_{B} = \sum_{k} \omega_k^{\mathstrut } \hat{b}_{k}^{\dagger} \hat{b}_k^{\mathstrut }$, that bilinearly couple to the field via the interaction Hamiltonian, $\hat H_{I} = \hat{X}\hat X_{B} $ with $\hat{X}= \hat{a}^{\dagger} + \hat{a}$ and $\hat{X}_{B}= \sum_{k} g_{k} \left(\hat{b}_{k}^{\dagger} + \hat{b}_{k} \right)$. Then, we use the eigenmode decomposition, $\hat{X}(\nu) = \sum_{\epsilon^{\prime} - \epsilon = \nu} \hat{\Pi}(\epsilon) \hat{X}~ \hat{\Pi}(\epsilon^{\prime})$, in terms of the projection operator $\hat{\Pi}(\epsilon)$ onto the dressed subspace with effective frequency $\epsilon$ and frequency difference $\nu= \epsilon^{\prime} - \epsilon$, \begin{eqnarray} \hat{X}(\nu) = \sum_{\epsilon^{\prime} - \epsilon = \nu} \langle \epsilon \vert \hat{X} \vert \epsilon^{\prime} \rangle ~ \vert \epsilon \rangle \langle \epsilon^{\prime} \vert. \end{eqnarray} This provides us with the explicit form of the boson field operator $\hat{X}$, in terms of the Bohr eigenfrequencies of the central system, such that the jump operators for the JC ladder become, \begin{eqnarray}\label{jump::operators} \hat{X}(\epsilon_{0,\pm}-\epsilon_{0}) &=& s_{0} \vert \epsilon_0 \rangle \langle \epsilon_{0,+} \vert + c_{0} \vert \epsilon_0 \rangle \langle \epsilon_{0,-} \vert, \nonumber \\ \hat{X}(\epsilon_{n^{\prime},+}-\epsilon_{n,+}) &=& \delta_{n,n^{\prime}-1} \left[c_{n} c_{n+1}\sqrt{n+1} + s_{n}s_{n+1}\sqrt{n+2}\right] \vert \epsilon_{n,+} \rangle \langle \epsilon_{n+1,+} \vert, \nonumber \\ \hat{X}(\epsilon_{n^{\prime},-}-\epsilon_{n,-}) &=& \delta_{n,n^{\prime}-1} \left[s_{n}s_{n+1}\sqrt{n+1}+ c_{n}c_{n+1}\sqrt{n+2}\right] \vert \epsilon_{n,-}\rangle \langle\epsilon_{n+1,-}\vert, \nonumber \\ \hat{X}(\epsilon_{n^{\prime},\pm}-\epsilon_{n,\mp}) &=& \delta_{n,n^{\prime}-1} \left[s_{n}c_{n+1}\sqrt{n+2}-c_{n}s_{n+1}\sqrt{n+1}\right] \vert \epsilon_{n,\pm}\rangle \langle\epsilon_{n+1,\mp} \vert. \nonumber \\ \end{eqnarray}
Writing down the von Neumann equation for the total density operator in the interaction picture with the reference free Hamiltonian $\hat H_{0}=\hat H_{JC}+\hat H_{B}$, using the Born-Markov and rotating wave approximations (RWA), and taking the average over the degrees of freedom of the environment through the partial trace operation, we can obtain the following master equation in the Schrödinger picture, \begin{eqnarray}\label{general::master::eq} \dot{\rho}(t)&=&-{i}[\hat H_{JC},\rho(t)]+\sum_{\nu>0}\gamma(\nu) \left[ \hat{X}(\nu)\rho(t)\hat{X}^{\dagger}(\nu)-\frac{1}{2}\{\hat{X}^{\dagger}(\nu) \hat{X}(\nu),\rho(t)\}\right] \nonumber\\ && + \sum_{\nu>0}\gamma(-\nu) \left[\hat{X}^{\dagger}(\nu)\rho(t)\hat{X}(\nu)-\frac{1}{2}\{\hat{X}(\nu) \hat{X}^{\dagger}(\nu),\rho(t)\}\right]. \end{eqnarray} Note that the RWA is valid only for couplings larger than the decay rate, $2g\gg\gamma$. The effective frequency-dependent decay rates are given by the Fourier transform, \begin{eqnarray} \gamma(\nu)&=&\int _{0}^{\infty} ds ~ e^{i \nu s} ~ \mathrm{Tr}_{B} \left[ \hat{X}_{B}^{\dagger}(s) \hat{X}_{B}(0) \right], \nonumber \\ &=&\left\{ \begin{array}{ll} \vert g(\nu) \vert^2 D(\nu) \left[ 1 + \bar{n}\left( \nu \right)\right] , & \nu > 0 \\ \vert g(\vert \nu \vert ) \vert^2 D(\vert \nu \vert) \bar{n}\left( \vert \nu \vert \right) , & \nu < 0, \end{array} \right. \end{eqnarray} with the continuum coupling distribution, $g(\nu)$, and the density of modes, $D(\nu)$, providing the environment spectral density, $\vert g(\nu) \vert^2 D(\nu)$; for example, a flat environment has a constant spectral density equal to the common decay rate, $\vert g(\nu) \vert^2 D(\nu) = \gamma$. Finally, the average number of thermal bosons in the environment is defined by $\bar{n}(\nu) = 1 / \left( e^{\nu / k_{B} T} - 1 \right)$, with Boltzmann constant $k_{B}$ and finite temperature $T$.
In order to provide an explicit working form, we consider the microscopic master equation for the JC model interacting with a flat thermal bath at finite temperature, \begin{eqnarray}\label{eq:DSME} \dot{\rho}(t)&=& - i [\hat H_{JC},\rho(t)] + \gamma_{1} s^{2}_{0} \hat{\mathcal{D}} (\vert \epsilon_{0} \rangle \langle \epsilon_{0,+}\vert ) +\gamma_{2}c^{2}_{0}\hat{\mathcal{D}} (\vert\epsilon_{0} \rangle \langle \epsilon_{0,-}\vert ) \nonumber \\ && + \sum_{n=0}^{\infty}\gamma_{3}a_{n}^2 \hat{\mathcal{D}} (\vert\epsilon_{n,+} \rangle \langle \epsilon_{n+1,+}\vert ) + \sum_{n=0}^{\infty}\gamma_{4}b_{n}^2 \hat{\mathcal{D}} (\vert\epsilon_{n,-} \rangle \langle \epsilon_{n+1,-}\vert ) \nonumber \\ &&+ \sum_{n=0}^{\infty}\gamma_{5}d_{n}^2 \hat{\mathcal{D}} (\vert\epsilon_{n,-} \rangle \langle \epsilon_{n+1,+}\vert ) + \sum_{n=0}^{\infty}\gamma_{6}d_{n}^2 \hat{\mathcal{D}} (\vert\epsilon_{n,+} \rangle \langle \epsilon_{n+1,-}\vert ) \nonumber \\ && + \tilde{\gamma}_{1}s^{2}_{0}\hat{\mathcal{D}} (\vert\epsilon_{0,+} \rangle \langle \epsilon_{0}\vert ) +\tilde{\gamma}_{2}c^{2}_{0}\hat{\mathcal{D}} (\vert\epsilon_{0,-} \rangle \langle \epsilon_{0}\vert )\nonumber\\ &&+ \sum_{n=0}^{\infty}\tilde{\gamma}_{3}a_{n}^2 \hat{\mathcal{D}} (\vert \epsilon_{n+1,+} \rangle \langle \epsilon_{n,+}\vert )+ \sum_{n=0}^{\infty}\tilde{\gamma}_{4}b_{n}^2 \hat{\mathcal{D}} (\vert \epsilon_{n+1,-} \rangle \langle \epsilon_{n,-}\vert ) \nonumber \\ && + \sum_{n=0}^{\infty}\tilde{\gamma}_{5}d_{n}^2 \hat{\mathcal{D}} (\vert \epsilon_{n+1,+} \rangle \langle \epsilon_{n,-}\vert ) + \sum_{n=0}^{\infty}\tilde{\gamma}_{6}d_{n}^2 \hat{\mathcal{D}} (\vert \epsilon_{n+1,-} \rangle \langle \epsilon_{n,+}\vert ), \end{eqnarray} where we have used the standard notation for dissipators $\hat{\mathcal{D}}(\hat{O}) = \hat{O} \rho \hat{O}^{\dagger}- \{ \hat{O}^{\dagger} \hat{O},\rho\} / 2$. The auxiliary coefficients are defined trough the relations, \begin{eqnarray} a_n=c_{n}c_{n+1}\sqrt{n+1} ~+~ s_{n}s_{n+1}\sqrt{n+2}, \nonumber \\ b_n=s_{n}s_{n+1}\sqrt{n+1} ~+~ c_{n}c_{n+1}\sqrt{n+2}, \nonumber \\ d_n=s_{n}c_{n+1}\sqrt{n+2} ~-~ c_{n}s_{n+1}\sqrt{n+1}, \end{eqnarray} and the explicit frequency-dependent decay rates are given by, \begin{eqnarray} \begin{array}{lcllcl} \gamma_1 &=& \left[1+\bar{n}(\frac{\omega_0+\omega+\Omega_0}{2})\right] \gamma, &\tilde{\gamma}_1 &=& \bar{n}(\frac{\omega_0+\omega+\Omega_0}{2}) \gamma, \\ \gamma_2&=& \left[1+\bar{n}(\frac{\omega_0+\omega-\Omega_0}{2})\right] \gamma, & \tilde{\gamma}_2 &=& \bar{n}(\frac{\omega_0+\omega-\Omega_0}{2}) \gamma, \\ \gamma_3 &=& \left[1+\bar{n}(\omega+\frac{\Omega_{n+1}-\Omega_{n}}{2})\right] \gamma, & \tilde{\gamma}_3 &=& \bar{n}(\omega+\frac{\Omega_{n+1}-\Omega_{n}}{2}) \gamma,\\ \gamma_4 &=& \left[1+\bar{n}(\omega-\frac{\Omega_{n+1}-\Omega_{n}}{2})\right] \gamma, &\tilde{\gamma}_4 &=& \bar{n}(\omega-\frac{\Omega_{n+1}-\Omega_{n}}{2}) \gamma, \\ \gamma_5 &=& \left[1+\bar{n}(\omega+\frac{\Omega_{n+1}+\Omega_{n}}{2})\right] \gamma , &\tilde{\gamma}_5 &=& \bar{n}(\omega+\frac{\Omega_{n+1}+\Omega_{n}}{2}) \gamma, \\ \gamma_6 &=& \left[1+\bar{n}(\omega-\frac{\Omega_{n+1}+\Omega_{n}}{2})\right] \gamma, &\tilde{\gamma}_6 &=& \bar{n}(\omega-\frac{\Omega_{n+1}+\Omega_{n}}{2}) \gamma, \end{array} \end{eqnarray} where we have used the Kubo-Martin-Schwinger condition to relate the downward and upward transition rates, $\tilde\gamma(\nu)=\exp\left(-\nu/k_{B} T\right)\gamma(\nu)$. Figure \ref{fig:Fig2} shows these decay channels in the dressed state ladder of the JC model. 
Note that the resonant model, $\Delta=0$, reduces to the one obtained in Ref.~\cite{Scala2007p013811} in the zero-$T$ limit, $T\rightarrow 0$, $\bar{n}(\nu)\rightarrow 0$, and $\gamma_{i}\rightarrow\gamma$, $\tilde\gamma_{i}\rightarrow 0$.
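For reference, the frequency-dependent rates entering Eq.~(\ref{eq:DSME}) are simple functions of the thermal occupation number. The short Python sketch below (our own illustration, assuming a flat spectral density equal to $\gamma$, as in the text) evaluates $\gamma_{1,\ldots,6}$ and $\tilde{\gamma}_{1,\ldots,6}$ for a given manifold index and checks the Kubo-Martin-Schwinger relation numerically.
\begin{verbatim}
# Illustrative sketch: frequency-dependent rates for a flat thermal environment.
import numpy as np

def nbar(nu, kT):
    """Bose-Einstein occupation at transition frequency nu (hbar = k_B = 1)."""
    return 1.0 / np.expm1(nu / kT)

def rates(n, g, delta, omega, omega0, gamma, kT):
    """Downward (gamma_i) and upward (tilde_gamma_i) rates for manifold index n."""
    Om = lambda m: np.sqrt(delta**2 + 4.0 * g**2 * (m + 1))
    nus = np.array([(omega0 + omega + Om(0)) / 2.0,        # gamma_1
                    (omega0 + omega - Om(0)) / 2.0,        # gamma_2
                    omega + (Om(n + 1) - Om(n)) / 2.0,     # gamma_3
                    omega - (Om(n + 1) - Om(n)) / 2.0,     # gamma_4
                    omega + (Om(n + 1) + Om(n)) / 2.0,     # gamma_5
                    omega - (Om(n + 1) + Om(n)) / 2.0])    # gamma_6
    down = gamma * (1.0 + nbar(nus, kT))
    up = gamma * nbar(nus, kT)
    assert np.allclose(up, np.exp(-nus / kT) * down)       # KMS condition
    return down, up
\end{verbatim}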
In the following, we compare the microscopic approach with the standard phenomenological approach for field dissipation with a flat environment, which is commonly described in the literature by the following master equation \cite{Haroche1992, CohenTannoudji1998}, \begin{eqnarray}\label{eq:phME} \dot\rho_{\rm{ph}}(t)&=&-{ i} [\hat H_{JC},\rho_{\rm{ph}}(t)]+ \gamma \left[\bar{n}(\omega)+1 \right] \left[ \hat{a} \rho_{\rm{ph}}(t) \hat{a}^{\dagger} - \frac{1}{2} \{\hat{a}^{\dagger} \hat{a}, \rho_{\rm{ph}}(t)\} \right] \nonumber \\ &&+ \gamma \bar{n}(\omega) \left[ \hat{a}^{\dagger}\rho_{\rm{ph}}(t) \hat{a} - \frac{1}{2} \{\hat{a} \hat{a}^{\dagger},\rho_{\rm{ph}}(t)\} \right], \end{eqnarray} and is valid for a broader range of parameters provided that the decay rate is small compared to the free field and qubit frequencies, $\omega,\omega_0\gg\gamma$.
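As a concrete, hedged illustration of how the phenomenological Eq.~(\ref{eq:phME}) can be integrated numerically (this is our own sketch using the open-source QuTiP package, not code from the references), field loss enters simply through the two standard collapse operators $\sqrt{\gamma(\bar{n}+1)}\,\hat{a}$ and $\sqrt{\gamma\bar{n}}\,\hat{a}^{\dagger}$; the truncation and parameter values are assumptions for the example.
\begin{verbatim}
# Illustrative sketch (QuTiP): phenomenological master equation, Eq. (eq:phME).
import numpy as np
import qutip as qt

Nf = 15                                        # field Fock-space truncation
g, w, w0, gamma, nbar = 1.0, 100.0, 100.0, 0.2, 0.0

a = qt.tensor(qt.destroy(Nf), qt.qeye(2))      # field mode (first subsystem)
e, gnd = qt.basis(2, 0), qt.basis(2, 1)        # qubit |e>, |g>
sm = qt.tensor(qt.qeye(Nf), gnd * e.dag())     # sigma_-
sz = qt.tensor(qt.qeye(Nf), e * e.dag() - gnd * gnd.dag())

H = 0.5 * w0 * sz + w * a.dag() * a + g * (a * sm.dag() + a.dag() * sm)
c_ops = [np.sqrt(gamma * (nbar + 1)) * a, np.sqrt(gamma * nbar) * a.dag()]

psi0 = qt.tensor(qt.basis(Nf, 0), e)           # initial state |0, e>
tlist = np.linspace(0.0, 50.0, 1000)
result = qt.mesolve(H, psi0, tlist, c_ops, e_ops=[sz, a.dag() * a])
\end{verbatim}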
\begin{figure}
\caption{A schematic example of some allowed transitions in the dressed energy ladder, decay and excitation channels, due to a finite temperature environment.}
\label{fig:Fig2}
\end{figure}
\section{Single-excitation manifold at zero temperature} One of the basic signatures in the dynamics of the JC model is the so-called Rabi oscillations, showing the periodic exchange of excitations between the qubit and the field mode. In the absence of losses, this is an indefinitely reversible process of coherent evolution. In real cavity-QED experiments \cite{Brune1996p1800}, the cavity is in fact open and subject to decoherence, making Rabi oscillations decay and eventually disappear with the inevitable escape of the photon to the environment.
Single-excitation dynamics at zero temperature under the microscopic approach \cite{Scala2007p013811} are described by the following simplified form of Eq. (\ref{eq:DSME}), \begin{eqnarray} \dot{\rho}(t)&=& - i [\hat H_{JC},\rho(t)] + \gamma\left[ s^{2}_{0} \hat{\mathcal{D}} (\vert \epsilon_{0} \rangle \langle \epsilon_{0,+}\vert ) + c^{2}_{0}\hat{\mathcal{D}} (\vert\epsilon_{0} \rangle \langle \epsilon_{0,-}\vert )\right], \end{eqnarray} which immediately shows jump operators describing transitions from states $\vert\epsilon_{0,\pm}\rangle$ to the ground state $\vert\epsilon_0\rangle$. In fact, in this microscopic description with dressed states, decay of the two bare states $\vert 0,e\rangle$ and $\vert 1,g\rangle$ is allowed, in contrast to the phenomenological description where the bare state $\vert 1,g\rangle$ provides the only decay channel to the ground state. Furthermore, it is possible to construct an analytic solution for the case of pure initial states in the single-excitation manifold, $\vert \psi(0)\rangle= \alpha \vert 0,e\rangle + \beta \vert 1,g\rangle$ with $\beta = \sqrt{1- \vert \alpha \vert^2}$, using the damping basis technique \cite{Briegel1993p3311} \begin{eqnarray} \rho(t)&=& \left\{ 1 - \left[ \vert \alpha \vert^{2} +c_{0}^2- 2 c_{0} s_{0} \Re\left( \alpha \beta^{\ast} \right) \right] e^{-\gamma s_{0}^2 t} \right. \nonumber \\ &&\left. - \left[ \vert \alpha \vert^{2} + s_{0}^2 + 2 c_{0} s_{0} \Re\left(\alpha\beta^{\ast}\right)\right] e^{-\gamma c_{0}^2 t} \right\} \vert\epsilon_{0} \rangle \langle \epsilon_{0}\vert \nonumber\\ && + \left[ \vert\alpha\vert^{2} + c_{0}^2 - 2 c_{0} s_{0} \Re\left(\alpha \beta^{\ast}\right)\right]e^{-\gamma s_{0}^2 t} \vert\epsilon_{0,-} \rangle \langle \epsilon_{0,-}\vert \nonumber \\ && + \left[ \vert \alpha \vert^{2} + s_{0}^2 + 2 c_{0} s_{0} \Re\left(\alpha\beta^{\ast}\right) \right] e^{-\gamma c_{0}^2 t} \vert\epsilon_{0,+} \rangle \langle \epsilon_{0,+}\vert \nonumber \\ &&+ e^{-\gamma t/2} \left\{ \left[ \left( 1 - 2 \vert \alpha \vert^{2} \right) c_{0} s_{0} + \alpha^{*} \beta c_{0}^{2} -\alpha \beta^{*} s_{0}^{2} \right] e^{i\Omega_{0}t} \vert\epsilon_{0,-} \rangle \langle \epsilon_{0,+}\vert \right. \nonumber \\ && \left. + \left[ \left( 1 - 2 \vert \alpha \vert^{2} \right) c_{0} s_{0} + \alpha \beta^{*} c_{0}^{2} -\alpha^{*} \beta s_{0}^{2} \right] e^{-i\Omega_{0}t} \vert\epsilon_{0,+} \rangle \langle \epsilon_{0,-}\vert \right\}. \end{eqnarray} Off-resonant interaction makes one of the two decay channels dominant, and gives the possibility to control the decay to the ground state; for example, as we increase the detuning of the qubit-field interaction, the coherent exchange of the excitation is maintained for longer times or, equivalently, the lifetime of the photon inside the cavity increases, as we can see in Fig. \ref{fig:Fig3}. This interesting asymmetry could be useful for increasing the number of operations through simple quantum gates using cavity-QED implementations.
Meanwhile, the phenomenological description in the single-excitation manifold, \begin{eqnarray} \dot\rho_{\rm{ph}}(t)&=&- i [\hat H_{JC},\rho_{\rm{ph}}(t)]+ \gamma \hat{\mathcal{D}} (\vert 0,g \rangle \langle 1,g \vert ), \end{eqnarray} shows the direct decay of the state $\vert 1,g \rangle$ to the ground state. For this master equation, it is also possible to find an exact solution for the same initial state as before,
\begin{eqnarray} \rho_{\rm{ph}}(t)&=& \left[ 1 - \vert a(t) \vert^{2} - \vert b(t) \vert^{2} \right] \vert \epsilon_{0} \rangle \langle \epsilon_{0} \vert + \vert\Psi(t)\rangle\langle\Psi(t)\vert , \end{eqnarray} where the time evolution of the single-excitation state $\vert \Psi(t)\rangle$ is given by \begin{eqnarray} \vert\Psi(t)\rangle=\left[ c_{0} a(t) + s_{0} b(t) \right] \vert \epsilon_{0,+} \rangle + \left[c_{0} b(t) - s_{0} a(t) \right] \vert \epsilon_{0,-} \rangle, \end{eqnarray} with the time-dependent functions, \begin{eqnarray} a(t)&=&\left[ \alpha \cosh \frac{ \Omega t}{2} +\frac{\alpha\left(\gamma-2i\Delta\right)-4i\beta g}{2 \Omega}\sinh \frac{ \Omega t}{2} \right] e^{-\frac{1}{4}\left(\gamma+6i\Delta\right)t }, \nonumber \\ b(t) &=& \left[\beta\cosh\frac{ \Omega t}{2} -\frac{\beta\left(\gamma-2i\Delta\right)+4i\alpha g}{2 \Omega}\sinh\frac{ \Omega t}{2}\right]e^{-\frac{1}{4}\left(\gamma+6i\Delta\right)t }, \end{eqnarray} where the auxiliary frequency $\Omega=\sqrt{\gamma^2-16g^2-4\Delta\left(\Delta+i\gamma\right)}/2$ is complex. The only difference between the two treatments is the presence of a high-frequency modulation, at short propagation times, in the decay to the ground state dynamics under the phenomenological description, insets of Fig. \ref{fig:Fig3}.
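The analytic expressions above are easy to evaluate. The following short Python sketch (again our own illustration) computes $a(t)$, $b(t)$ and the ground-state probability $1-|a(t)|^{2}-|b(t)|^{2}$ of the phenomenological solution, with the parameter values of the figures assumed for definiteness.
\begin{verbatim}
# Illustrative sketch: ground-state probability from the analytic
# phenomenological solution above.
import numpy as np

def ground_state_probability(t, alpha, g=1.0, gamma=0.2, delta=0.0):
    beta = np.sqrt(1.0 - abs(alpha)**2)
    Om = np.sqrt(gamma**2 - 16.0 * g**2
                 - 4.0 * delta * (delta + 1j * gamma) + 0j) / 2.0
    damp = np.exp(-0.25 * (gamma + 6j * delta) * t)
    ch, sh = np.cosh(Om * t / 2.0), np.sinh(Om * t / 2.0)
    a = (alpha * ch
         + (alpha * (gamma - 2j * delta) - 4j * beta * g) / (2.0 * Om) * sh) * damp
    b = (beta * ch
         - (beta * (gamma - 2j * delta) + 4j * alpha * g) / (2.0 * Om) * sh) * damp
    return 1.0 - np.abs(a)**2 - np.abs(b)**2

t = np.linspace(0.0, 50.0, 500)
p_ground = ground_state_probability(t, alpha=1.0)    # initial state |0, e>
\end{verbatim}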
\begin{figure}
\caption{Probability of finding the system in the ground state for initial states (a) $\vert \psi(0) \rangle = \vert 0,e \rangle$ and (b) $\vert \psi(0) \rangle = \vert 1,g \rangle$ under the dynamics provided by the microscopic description of dissipation at zero-$T$ and different detunings. Insets: phenomenological description. Simulation parameters: $\{\gamma,\omega_0\}=\{~0.2,~100\}g$.}
\label{fig:Fig3}
\end{figure}
Figure \ref{fig:Fig3} shows the probability to find the system in the ground state for a near-resonance system, $\omega_{0} \sim \omega \gg g$, for different detunings between the qubit and field frequencies for initial states in the single-excitation manifold. An initial qubit in the excited state, $\vert \psi(0) \rangle = \vert 0,e \rangle$, produces slower effective decay to the ground state with larger absolute values of the detuning, Fig. \ref{fig:Fig3}(a), while an initial qubit in the ground state with one photon in the field, $\vert \psi(0) \rangle = \vert 1,g \rangle$, produces larger effective decay rates to the ground state with larger absolute values of the detuning, Fig. \ref{fig:Fig3}(b). The same process is observed in the phenomenological approach with the addition of a higher frequency oscillation, insets in Fig. \ref{fig:Fig3}. The damped Rabi oscillations in the atomic inversion dynamics, $\langle \hat{\sigma}_{z} \rangle = \mathrm{Tr}\{\hat{\sigma}_{z}\rho_{q}(t)\}$, can be seen in Figure \ref{fig:Fig4}. \begin{figure}
\caption{Evolution of the population inversion for initial states (a) $\vert \psi(0) \rangle = \vert 0,e \rangle$ and (b) $\vert \psi(0) \rangle = \vert 1,g \rangle$ under the dynamics provided by the microscopic description of dissipation at zero-$T$ and different detunings. Simulation parameters: $\{\gamma,\omega_0\}=\{~0.2,~100\}g$.}
\label{fig:Fig4}
\end{figure} Regardless of the effective decay rate to the ground state, the resonant system will approach a pure state faster, as shown by the qubit-field purity, $ P = \mathrm{Tr}~ \rho^2(t)$ in Fig. \ref{fig:Fig5}, and the von Neumann entropy of the field, $S = -\mathrm{Tr}~ \rho_{f}(t) \ln \rho_{f}(t)$ in Fig. \ref{fig:Fig6}. Purity minima appear at longer scaled times for larger absolute values of the detuning for the initial state $\vert \psi(0) \rangle = \vert 0,e \rangle$, Fig. \ref{fig:Fig5}(a), and the opposite happens for the initial state $\vert \psi(0) \rangle = \vert 1,g \rangle$, Fig. \ref{fig:Fig5}(b). On-resonance, $\Delta = 0$, and for an initial state $\vert \psi(0) \rangle = \vert 0,e \rangle $, it is possible to find a simple expression for the purity, $ P(t)=1-2\left[1-e^{-\gamma t/2}\right]e^{-\gamma t /2}$, that reaches its minimum at the time $t = 2 \ln 2/\gamma$.
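The location of this minimum follows directly from the expression above: writing $x = e^{-\gamma t/2}$,
$$ P = 1 - 2x(1-x), \qquad \frac{dP}{dx} = 4x - 2 = 0 \;\Rightarrow\; x = \frac{1}{2}, \qquad e^{-\gamma t/2} = \frac{1}{2} \;\Rightarrow\; t = \frac{2\ln 2}{\gamma}, $$
with minimum value $P_{\mathrm{min}} = 1/2$.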
\begin{figure}
\caption{Evolution of the qubit-field purity for initial states (a) $\vert \psi(0) \rangle = \vert 0,e \rangle$ and (b) $\vert \psi(0) \rangle = \vert 1,g \rangle$ under the dynamics provided by the microscopic description of dissipation at zero-$T$ and different detunings. Simulation parameters: $\{\gamma,\omega_0\}=\{~0.2,~100\}g$.}
\label{fig:Fig5}
\end{figure} In the case of the closed JC model, the von Neumann entropy of the field provides a measure of the entanglement between the qubit and field \cite{Gerry2005}. However, this is not true if the system is open. Here, it is not possible to express the qubit-field state as a state vector with an appropriate Schmidt decomposition at all times; hence the respective qubit and field von Neumann entropies are not expected to be equivalent, giving rise to different behaviors. Actually, the information flow from the qubit-field system to the environment must be reflected in the entropies of the subsystems. In Figure \ref{fig:Fig6}, we show the von Neumann entropy of the field as a function of time for the two initial bare states in the single-excitation manifold, showing the usual dynamics of entanglement and disentanglement of the qubit-field system but with the effects of damping, or decoherence, due to the interaction with the environment. It provides a highly oscillatory picture, which depends on the detuning, of how the field reduced density matrix departs from, and asymptotically returns to, a quantum pure state. \begin{figure}
\caption{Evolution of von Neumann entropy of the field for initial states (a) $\vert \psi(0) \rangle = \vert 0,e \rangle$ and (b) $\vert \psi(0) \rangle = \vert 1,g \rangle$ under the dynamics provided by the microscopic description of dissipation at zero-$T$ and different detunings. Simulation parameters: $\{\gamma,\omega_0\}=\{~0.2,~100\}g$.}
\label{fig:Fig6}
\end{figure}
In the single-excitation limit, we can think of the field as an effective qubit, and calculate the two-qubit concurrence for the field-matter state. The concurrence, an entanglement measure based on the concept of entanglement of formation \cite{Wootters1998p2245,Wootters2001p27}, is defined as $C = \max\left\{0, 2\max_i\{\lambda_i\}-\sum_{i}\lambda_i\right\}$, where the set $\{\lambda_{i}\}$ are the square roots of the eigenvalues of the operator $R=\rho\left(\sigma_y\otimes\sigma_y\right)\rho^{*}\left(\sigma_y\otimes\sigma_y\right)$ in decreasing order. The concurrence is zero for separable states, and takes its maximum value of one for maximally entangled states. Its behaviour under the microscopic approach shows the evolution from the separable initial state to an almost maximally entangled state in the time for half a Rabi oscillation in the single-excitation manifold, Fig. \ref{fig:Fig7}. Obviously, this will be affected by the decoherence induced by the environment. The induced effective decay rates show that an initial state $\vert \psi(0) \rangle = \vert 0,e \rangle$ produces a more entangled state for small evolution times and higher detuning, Fig. \ref{fig:Fig7}(a). An initial pure separable state of the form $\vert \psi(0) \rangle = \vert 1,g \rangle$ produces and maintains higher concurrence values for lower detuning, Fig. \ref{fig:Fig7}(b). This is in agreement with the information provided by the von Neumann entropy and our previous discussion on the effect of the detuning on the effective decay rates. \begin{figure}
\caption{Evolution of the qubit and single-excitation field concurrence for initial states (a) $\vert \psi(0) \rangle = \vert 0,e \rangle$ and (b) $\vert \psi(0) \rangle = \vert 1,g \rangle$ under the dynamics provided by the microscopic description of dissipation at zero-$T$ and different detunings. Simulation parameters: $\{\gamma,\omega_0\}=\{~0.2,~100\}g$.}
\label{fig:Fig7}
\end{figure}
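The concurrence used above can be computed with a few lines of code. The sketch below (our own illustration of the standard Wootters construction, not taken from the references) obtains the $\lambda_i$ as the square roots of the eigenvalues of $R$ and evaluates $C$ for an arbitrary two-qubit density matrix.
\begin{verbatim}
# Illustrative sketch: two-qubit concurrence from the Wootters construction.
import numpy as np

SY = np.array([[0.0, -1j], [1j, 0.0]])
YY = np.kron(SY, SY)                               # sigma_y (x) sigma_y

def concurrence(rho):
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Check: the Bell state (|01> - |10>)/sqrt(2) gives C = 1.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
print(concurrence(np.outer(psi, psi.conj())))
\end{verbatim}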
\section{Beyond the single-excitation manifold at finite temperature}
As we go beyond the single-excitation manifold, starting with an initial state with more than one total excitation, the oscillations in the ground state probability, $P_{0,g}$, provided by dynamics in the phenomenological description, insets in Fig. \ref{fig:Fig3}, have larger frequencies and, eventually, make the phenomenological description indistinguishable to the naked eye from the microscopic one using the variables presented above. Here, we will show that it is possible to use phase space dynamics to notice the differences between the two approaches. Unfortunately, it becomes cumbersome and impractical to address the dynamics analytically beyond the single-excitation manifold at zero-$T$, and we must resort to numerical simulations in order to create intuition for these systems. In the following, we numerically solve the microscopic master equation, Eq. (\ref{eq:DSME}), by two methods, brute-force iterative Runge--Kutta integration and direct diagonalization of the Liouvillian \cite{NavarreteBenlloch2015}; in both cases the dimension of the master equation is truncated once a desired convergence is reached.
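For concreteness, a possible implementation of such a simulation is sketched below (our own illustration in Python with the open-source QuTiP package, not the actual code used here); it assembles the zero-$T$ dressed-state collapse operators of Eq.~(\ref{jump::operators}) on a truncated ladder, with truncation and parameter values chosen as assumptions for the example, and feeds them to a standard Lindblad solver.
\begin{verbatim}
# Illustrative sketch (QuTiP): zero-T microscopic master equation with
# dressed-state jump operators on a truncated JC ladder.
import numpy as np
import qutip as qt

Nf = 12                                       # Fock truncation (assumed sufficient)
g, w, w0, gamma = 1.0, 100.0, 100.0, 0.2
delta = w0 - w

e, gnd = qt.basis(2, 0), qt.basis(2, 1)       # qubit |e>, |g>
H = 0.5 * w0 * qt.tensor(qt.qeye(Nf), e * e.dag() - gnd * gnd.dag()) \
    + w * qt.tensor(qt.num(Nf), qt.qeye(2)) \
    + g * (qt.tensor(qt.destroy(Nf), e * gnd.dag())
           + qt.tensor(qt.create(Nf), gnd * e.dag()))

def cs(n):                                    # c_n, s_n of the n-th manifold
    th = np.arctan2(2.0 * g * np.sqrt(n + 1), delta)
    return np.cos(th / 2.0), np.sin(th / 2.0)

def dressed(n, sign):                         # |eps_{n,+/-}> in the joint space
    c, s = cs(n)
    ne = qt.tensor(qt.basis(Nf, n), e)
    n1g = qt.tensor(qt.basis(Nf, n + 1), gnd)
    return c * ne + s * n1g if sign > 0 else -s * ne + c * n1g

eps0 = qt.tensor(qt.basis(Nf, 0), gnd)        # ground state |0, g>
c0, s0 = cs(0)
c_ops = [np.sqrt(gamma) * s0 * eps0 * dressed(0, +1).dag(),
         np.sqrt(gamma) * c0 * eps0 * dressed(0, -1).dag()]
for n in range(Nf - 2):                       # transitions between adjacent manifolds
    cn, sn = cs(n)
    cn1, sn1 = cs(n + 1)
    a_n = cn * cn1 * np.sqrt(n + 1) + sn * sn1 * np.sqrt(n + 2)
    b_n = sn * sn1 * np.sqrt(n + 1) + cn * cn1 * np.sqrt(n + 2)
    d_n = sn * cn1 * np.sqrt(n + 2) - cn * sn1 * np.sqrt(n + 1)
    c_ops += [np.sqrt(gamma) * a_n * dressed(n, +1) * dressed(n + 1, +1).dag(),
              np.sqrt(gamma) * b_n * dressed(n, -1) * dressed(n + 1, -1).dag(),
              np.sqrt(gamma) * d_n * dressed(n, -1) * dressed(n + 1, +1).dag(),
              np.sqrt(gamma) * d_n * dressed(n, +1) * dressed(n + 1, -1).dag()]

psi0 = qt.tensor(qt.basis(Nf, 4), e)          # initial Fock state |4, e>
tlist = np.linspace(0.0, 50.0, 500)
sz = qt.tensor(qt.qeye(Nf), e * e.dag() - gnd * gnd.dag())
result = qt.mesolve(H, psi0, tlist, c_ops, e_ops=[sz])
\end{verbatim}
A finite-$T$ version adds, for each channel, the corresponding upward operator weighted by the rates $\tilde{\gamma}_{i}$ of Eq.~(\ref{eq:DSME}).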
\subsection{Fock states}
At zero-$T$, an initial state in the $\langle N\rangle$-excitation manifold, $\vert \psi(0) \rangle = \vert n,e \rangle$ or $\vert \psi(0) \rangle = \vert n+1,g \rangle$, should present similar dissipation dynamics to those described above: the effective decay rate for initial excited and ground state dynamics will differ and be related to the detuning between the qubit and field frequencies. We can see this in the time evolution of the atomic inversion for an initial state $\vert \psi(0) \rangle = \vert 4,e \rangle$, Fig. \ref{fig:Fig8}(a), and $\vert \psi(0) \rangle = \vert 5,g \rangle$, Fig. \ref{fig:Fig8}(b), but becomes more evident in the qubit-field purity, Fig. \ref{fig:Fig8}(c) and Fig. \ref{fig:Fig8}(d), and von Neumann entropy of the field, Fig. \ref{fig:Fig8}(e) and Fig. \ref{fig:Fig8}(f). The dynamics provided by the phenomenological approach still have a higher modulating frequency, but it becomes so high that the differences are indistinguishable without further analysis.
\begin{figure}
\caption{Time evolution of the atomic inversion (first row), qubit-field purity (second row), and von Neumann entropy of the field (third row), for initial states $\vert \psi(0) \rangle = \vert 4,e \rangle$ (left column) and $\vert \psi(0) \rangle = \vert 5,g \rangle$ (right column), under dynamics provided by the microscopic approach to dissipation at zero-$T$ for different detuning between the qubit and field frequencies. Simulation parameters: $\{\gamma,\omega_0\}=\{~0.2,~100\}g$}
\label{fig:Fig8}
\end{figure}
At finite-$T$, the dynamics are equivalent to those at zero-$T$ with a slight increase of the effective decay rate due to temperature effects and, obviously, the final state of the radiation-matter system, in the asymptotic limit, will reach the thermal equilibrium steady state of the open system. Figures \ref{fig:Fig9}(a) and \ref{fig:Fig9}(b) show the time evolution of the atomic inversion, Fig. \ref{fig:Fig9}(c) and \ref{fig:Fig9}(d) that of the qubit-field purity, and Fig. \ref{fig:Fig9}(e) and \ref{fig:Fig9}(f) the time evolution of the von Neumann entropy of the field for initial states $\vert \psi(0) \rangle = \vert 4,e \rangle$ and $\vert \psi(0) \rangle = \vert 5,g \rangle$, in that order for each case, under JC dynamics interacting with a low-$T$ thermal environment with an average thermal photon number $\bar{n}=0.1$, a value relevant to cavity-QED experiments \cite{Wilczewski2009p033836,Wilczewski2009p013802}.
\begin{figure}\label{fig:Fig9}
\end{figure}
\subsection{Coherent states}
In order to study more complex dynamics, let us consider initial states involving coherent states of the field, $\vert\alpha\rangle= \exp(-|\alpha|^{2}/2) \sum_{n=0}^{\infty} \alpha^{n}/\sqrt{n!} \vert n\rangle$. These are the most classical quantum states in which a field mode can be prepared, thus the name semi-classical states of the field. For the sake of simplicity, we start from a pure and separable initial state, $\vert \psi(0) \rangle = \vert \alpha, g \rangle$, that shows collapse and revival of the atomic inversion at the approximate scaled revival time $g t_{r} \sim 2\pi \sqrt{\vert \alpha \vert^2}$ for the closed system. Figure \ref{fig:Fig10} shows the atomic inversion and mean photon number evolution under the microscopic and phenomenological approaches to dissipation for a single revival time. Cavity losses slightly affect the initial collapse of the atomic inversion, but heavily suppress the revival, Fig.~\ref{fig:Fig10}(a), in agreement with previous results employing the phenomenological approach. It is possible to observe differences between the two approaches at short times, but the dynamics seem to become identical as the system evolves. Note that care must be taken to use simulation parameters that satisfy the restrictions mentioned above for each model.
\begin{figure}\label{fig:Fig10}
\end{figure}
Obviously, the effects of detuning at each and every manifold with constant total excitation number described above will survive. For example, an initial state composed of a coherent field and the two-level system in the ground state, $\vert \psi(0) \rangle = \vert \alpha, g \rangle$, will have a lower effective decay rate for larger detuning, Fig. \ref{fig:Fig11}(a), as expected. Figure \ref{fig:Fig11}(b) shows the atomic inversion evolution, on resonance for different decay rates, where we can observe that the collapse dynamics, for times shorter than half the revival time, are barely modified while the revival dynamics are strongly suppressed for increasing decay rate.
\begin{figure}
\caption{Time evolution of the atomic inversion for an initial state composed of a coherent field with $\alpha = \sqrt{5}$ and the atom in the ground state, $\vert \psi(0) \rangle = \vert \alpha, g \rangle$. (a) Variable detuning with fixed decay rate $\gamma$, simulation parameters: $\{\gamma,\omega_0\}=\{0.005,~100\}g$, (b) variable decay rate on-resonance, simulation parameters: $\omega_0= \omega = 100g$. }
\label{fig:Fig11}
\end{figure}
A substantial deviation between the two approaches is easier to detect using the time evolution of the field quadratures, $\hat{q} = \left( \hat{a} + \hat{a}^{\dagger} \right)/2$ and $\hat{p} = \left( \hat{a} -\hat{a}^{\dagger}\right)/2i $, whose mean values for a coherent state are equivalent to the real and imaginary part of the analogue classical complex field amplitude. Interestingly enough, the microscopic approach to dissipation provides us with an intuitively expected, spiral decay evolution of the field quadratures, Fig. \ref{fig:Fig12}(a), similar to the one obtained by the phenomenological approach for just a dissipative cavity. The time evolution for the field quadratures under the phenomenological approach shows the differences and high frequency modulation in the form of deviations from the spiral decay of the free dissipative field, Fig. \ref{fig:Fig12}(b). Furthermore, the effect of finite-$T$, an increased decay rate, is more evident in the microscopic approach, Fig. \ref{fig:Fig12}(c), than in the phenomenological approach, Fig.\ref{fig:Fig12}(d), both in short- and moderate-time scales.
\begin{figure}\label{fig:Fig12}
\end{figure}
The variances of the field quadratures, $\langle \Delta \hat{x}^2 \rangle = \langle \hat{x}^2 \rangle - \langle \hat{x} \rangle^2$ with $\hat{x}\in\{\hat{q},\hat{p}\}$, for the microscopic, Fig. \ref{fig:Fig13}(a) and Fig. \ref{fig:Fig13}(c), and the phenomenological approaches, Fig. \ref{fig:Fig13}(b) and Fig. \ref{fig:Fig13}(d), show an even greater difference in the open-system dynamics provided by the two approaches. Under the microscopic approach to dissipation, the initial coherent state of the field stops minimizing the uncertainty relation for the field quadratures in a shorter time than under phenomenological open dynamics. Furthermore, open microscopic dynamics predict lower fluctuations in the variances of the field quadratures, leading to a smoother transition to the steady state, the coherent vacuum state at zero-$T$, than the one predicted by phenomenological open dynamics.
\begin{figure}
\caption{Time evolution of the field quadratures variances for an initial state $\vert \psi(0) \rangle = \vert \alpha, g \rangle$ with $\alpha = \sqrt{5}$ under the microscopic (left column) and phenomenological approaches (right column) to dissipation at zero-$T$, simulation parameters: $\{\gamma,\omega_0\}=\{0.1,~100\}g$.}
\label{fig:Fig13}
\end{figure}
These differences in the mean values of the quadratures and their variances can also be observed in phase space through quasi-probability distributions, like the Husimi Q-function, $Q(\alpha)= \langle \alpha \vert \hat{\rho} \vert \alpha \rangle / \pi$, shown in Fig. \ref{fig:Fig14}. The dynamics of the Q-function under a microscopic description of dissipation starts from a well-defined Gaussian phase-space distribution corresponding to a coherent state that, smoothly and quickly, becomes a donut-shaped distribution whose radius starts diminishing until it takes the Gaussian form of the coherent vacuum, Fig.~\ref{fig:Fig14}(a)-(d). Meanwhile, the evolution of the Q-function under the phenomenological description follows a more complicated dynamics that might look like a decaying spiral to the coherent vacuum. As a visualization aid, the reader can find animations for both processes in the links provided below \footnote{ Husimi Q-function time evolution under dynamics provided by the microscopic \url{http://www.hambrientosvagabundos.org/mpg/Micro.mp4} and the phenomenological \url{http://www.hambrientosvagabundos.org/mpg/Pheno.mp4} approaches.}.
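Numerically, such Q-function snapshots only require the reduced field state; a minimal sketch (ours, assuming the field mode is the first subsystem in the tensor product and using QuTiP's built-in \texttt{qfunc}) reads:
\begin{verbatim}
# Illustrative sketch (QuTiP): Husimi Q-function of the reduced field state.
import numpy as np
import qutip as qt

Nf = 20
rho = qt.tensor(qt.coherent_dm(Nf, np.sqrt(5)), qt.fock_dm(2, 1))  # placeholder state
rho_field = rho.ptrace(0)                   # trace out the qubit
xvec = np.linspace(-5.0, 5.0, 200)
Q = qt.qfunc(rho_field, xvec, xvec)         # Q(alpha) = <alpha| rho_f |alpha> / pi
\end{verbatim}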
\begin{figure}
\caption{Snapshots of Husimi Q-function for different evolution times of an initial state $\vert \psi(0) \rangle = \vert \alpha, g \rangle$ with $\alpha = \sqrt{5}$ under the microscopic (top row) and phenomenological approaches (bottom row) to dissipation at zero-$T$ at scaled times (a),(e) $gt= 0$, (b),(f) $gt= 2g t_{r} / 3$, (c),(g) $gt= 4g t_{r}/3 $, and (d),(h) $gt= 2g t_{r}$ with simulation parameters: $\{\gamma,\omega_0\}=\{0.1,~100\}g$.}
\label{fig:Fig14}
\end{figure}
We conducted an analysis for initial squeezed coherent states of the field, but the dynamics are similar to those for coherent states: the mean values of the quadratures follow a spiral decay to the coherent vacuum and, at short times, the variances of the quadratures equalize and follow a behaviour equivalent to that of coherent states.
\section{Conclusions}
We have derived the microscopic master equation for the Jaynes-Cummings model under field dissipation at finite temperature and off-resonance. We revisited evolution in the well-known zero-$T$ single-excitation manifold, where the difference in the dynamics under the microscopic and phenomenological approaches appears as a high-frequency modulation of the ground state probability in the phenomenological approach, constructed an analytic closed form for the state evolution, and showed the effect of detuning between the qubit and field frequencies on the effective decay rates; for initial states with an excited qubit, a larger detuning produces a lower decay rate, and the opposite holds for initial states with the qubit in the ground state. This follows from the structure of the decay channels and delivers a consequent ordering of the qubit-field purity minima. Interestingly enough, these minima are not observed in the von Neumann entropy for the field or the concurrence of the joint qubit-field state, which show a high-frequency modulation.
We numerically confirmed these behaviours beyond the single-excitation manifold at finite temperatures for initial Fock states of the field, where the dynamics start in a well-defined excitation manifold, and studied dissipation for initial coherent states, where the dynamics start in an extended superposition of excitation manifolds. For initial coherent states of the field, dynamics under the microscopic approach provide a faster suppression of the collapse and revivals of the population inversion than the phenomenological approach, but the real difference is observed in phase space, where the microscopic approach provides a smooth spiral decay trajectory of the field quadratures, while the phenomenological approach produces more convoluted dynamics with highly oscillating variances in the quadratures.
In summary, while a phenomenological treatment makes it simpler to create a building block approach to open systems that does not differ much at short times from the predictions of a formal treatment, a microscopic treatment of dissipation produces smoother dynamics that are closer to what semi-classical intuition might signal. This seems to suggest that it becomes imperative to follow formal approaches to dissipation in order to describe multipartite interaction models.
\ack{
Carlos A. Gonz\'alez-Guti\'errez is grateful to CONACYT for financial support under doctoral fellowship No. 385108 and acknowledges the assistance of Ing. Israel Rebolledo in elaborating the ion-trap picture shown in Fig. \ref{fig:Fig1}.
B. M. Rodr\'iguez-Lara thanks Alexander Moroz for fruitful discussion, acknowledges funding through CONACYT CB-2015-01-255230 grant, and thanks the Photonics and Mathematical Optics Group for hosting him. }
\section*{References}
\end{document} |
\begin{document}
\parskip=4pt \parindent=18pt \baselineskip=22pt \setcounter{page}{1} \centerline{\Large\bf A Class of Special Matrices and Quantum Entanglement}
\begin{center} Shao-Ming Fei$^{\dag\ddag}$~~~ and ~~~Xianqing Li-Jost$^\ddag$
\begin{minipage}{6.2in} $^\dag$Department of Mathematics, Capital Normal University, Beijing 100037, P.R. China\\ $^\dag$Institut f{\"u}r Angewandte Mathematik, Universit{\"a}t Bonn, 53115 Bonn, Germany\\ E-mail: [email protected]\\ $^\ddag$Max-Planck-Institute for Mathematics in the Sciences, 04103 Leipzig, Germany\\ E-mail: [email protected]
\end{minipage} \end{center} \vskip 1 true cm \parindent=18pt \parskip=6pt \begin{center} \begin{minipage}{5in}
\centerline{\large Abstract}
We present a kind of construction for a class of special matrices with at most two different eigenvalues, in terms of some interesting multipliers which are very useful in calculating eigenvalue polynomials of these matrices. This class of matrices defines a special kind of quantum states --- $d$-computable states. The entanglement of formation for a large class of quantum mixed states is explicitly presented.
\end{minipage} \end{center}
Keywords: Entanglement of formation, Generalized concurrence, $d$-computable states
PACS: 03.65.Bz; 89.70.+c
\section{Introduction}
Quantum entangled states are playing an important role in quantum communication, information processing and quantum computing \cite{DiVincenzo}, especially in the investigation of quantum teleportation \cite{teleport,teleport1}, dense coding \cite{dense}, decoherence in quantum computers and the evaluation of quantum cryptographic schemes \cite{crypto}. To quantify entanglement, a number of entanglement measures such as the entanglement of formation and distillation \cite{Bennett96a,BBPS,Vedral}, negativity \cite{Peres96a,Zyczkowski98a}, relative entropy \cite{Vedral,sw} have been proposed for bipartite states \cite{crypto,BBPS} [11-13]. Most of these measures of entanglement involve extremizations which are difficult to handle analytically. For instance the entanglement of formation \cite{Bennett96a} is intended to quantify the amount of quantum communication required to create a given state. The entanglement of formation for a pair of qubits can be expressed as a monotonically increasing function of the ``concurrence'', which can be taken as a measure of entanglement in its own right \cite{HillWootters}. From the expression of this concurrence, the entanglement of formation for mixed states of a pair of qubits is calculated \cite{HillWootters}. Although entanglement of formation is defined for arbitrary dimension, so far no explicit analytic formulae for entanglement of formation have been found for systems larger than a pair of qubits, except for some special symmetric states \cite{th}.
For a multipartite quantum system, the degree of entanglement will neither increase nor decrease under local unitary transformations on a quantum subsystem. Therefore the measure of entanglement must be an invariant of local unitary transformations. Entanglement has been studied in view of this kind of invariant, and a generalized formula of concurrence for high-dimensional bipartite and multipartite systems has been derived from the relations among these invariants \cite{note}. The generalized concurrence can be used to deduce necessary and sufficient separability conditions for some high dimensional mixed states \cite{qsep}. However, in general the generalized concurrence is not a suitable measure for $N$-dimensional bipartite quantum pure states, except for $N=2$. Therefore it does not help in calculating the entanglement of formation for bipartite mixed states.
Nevertheless in \cite{fjlw} it has been shown that for some class of quantum states with $N>2$, the corresponding entanglement of formation is a monotonically increasing function of a generalized concurrence, and the entanglement of formation can be also calculated analytically. Let ${\cal H}$ be an $N$-dimensional complex Hilbert space with orthonormal basis $e_i$, $i=1,...,N$. A general bipartite pure state on ${\cal H}\otimes {\cal H}$ is of the form, \begin{equation}\label{psi} \vert\psi>=\sum_{i,j=1}^N a_{ij}e_i\otimes e_j,~~~~~~a_{ij}\in\Cb \end{equation} with normalization $\displaystyle\sum_{i,j=1}^N a_{ij}a_{ij}^\ast=1$. The entanglement of formation $E$ is defined as the entropy of either of the two sub-Hilbert spaces \cite{BBPS}, \begin{equation} \label{epsiq}
E(|\psi \rangle) = - {\mbox{Tr}\,} (\rho_1 \log_2 \rho_1) = - {\mbox{Tr}\,} (\rho_2 \log_2 \rho_2)\,, \end{equation} where $\rho_1$ (resp. $\rho_2$) is the partial trace of $
|\psi\rangle\langle\psi|$ over the first (resp. second) Hilbert space of ${\cal H}\otimes{\cal H}$. Let $A$ denote the matrix with entries given by $a_{ij}$ in (\ref{psi}). $\rho_1$ can be expressed as $\rho_1=AA^\dag$.
The quantum mixed states are described by density matrices $\rho$ on ${\cal H}\otimes{\cal H}$, with pure-state decompositions, i.e., all ensembles of states $|\psi_i \rangle$ of the form (\ref{psi}) with probabilities $p_i\geq 0$,
$\rho = \sum_{i=1}^l p_i |\psi_i \rangle \langle\psi_i|$, $\sum_{i=1}^l p_i =1$ for some $l\in{I\!\! N}$. The entanglement of formation for the mixed state $\rho$ is defined as the average entanglement of the pure states of the decomposition, minimized over all decompositions of
$\rho$, $E(\rho) = \mbox{min}\, \sum_{i=1}^l p_i E(|\psi_i \rangle)$.
For $N=2$ equation (\ref{epsiq}) can be written as $
E(|\psi \rangle)|_{N=2} =h((1+\sqrt{1-C^2})/2), $ where $h(x) = -x\log_2 x - (1-x)\log_2 (1-x)$. $C$ is called concurrence,
$C(|\psi \rangle) =2|a_{11}a_{22}-a_{12}a_{21}\vert$ \cite{HillWootters}. $E$ is a monotonically increasing function of $C$ and therefore $C$ can be also taken as a kind of measure of entanglement. Calculating $E(\rho)$ is then reduced to the calculation of the corresponding minimum of
$C(\rho) = \mbox{min}\, \sum_{i=1}^M p_i C(|\psi_i \rangle)$, which simplifies the problem, as $C(|\psi_i \rangle)$ has a much simpler expression than $E(|\psi_i \rangle)$.
For $N\geq 3$, there is no such concurrence $C$ in general. The concurrences discussed in \cite{note} can be only used to judge whether a pure state is separable (or maximally entangled) or not \cite{qsep}. The entanglement of formation is no longer a monotonically increasing function of these concurrences.
Nevertheless, for a special class of quantum states such that $AA^\dag$ has only two non-zero eigenvalues, a kind of generalized concurrence has been found to simplify the calculation of the corresponding entanglement of formation \cite{fjlw}. Let $\lambda_1$ (resp. $\lambda_2$) be the two non-zero eigenvalues of $AA^\dag$ with degeneracy $n$ (resp. $m$), $n+m\leq N$, and $D$ the maximal non-zero diagonal determinant, $D=\lambda_1^n\lambda_2^m$. In this case the entanglement of formation of $|\psi \rangle$ is given by
$E(|\psi \rangle)=-n \lambda_1 \log_2 \lambda_1 -m \lambda_2 \log_2 \lambda_2$. It is straightforward to show that $E(|\psi\rangle)$ is a monotonically increasing function of $D$ and hence $D$ is a kind of measure of entanglement in this case. In particular for the case $n=m>1$, we have \begin{equation}\label{hehe}
E(|\psi \rangle)=n \left(-x\log_2 x - (\frac{1}{n}-x)\log_2 (\frac{1}{n}-x)\right), \end{equation} where $$ x = \frac{1}{2}\left(\frac{1}{n}+\sqrt{\frac{1}{n^2}(1-d^2)}\right) $$ and \begin{equation}\label{GC} d\equiv 2nD^{\frac{1}{2n}}=2n\sqrt{\lambda_1\lambda_2}. \end{equation} $d$ is defined to be the generalized concurrence in this case. Instead of calculating $E(\rho)$ directly, one may calculate the minimum decomposition of $D(\rho)$ or $d(\rho)$ to simplify the calculations. In \cite{fjlw} a class of pure states (\ref{psi}) with the matrix $A$ given by \begin{equation}\label{a} A=\left( \begin{array}{cccc} 0&b&a_1&b_1\\ -b&0&c_1&d_1\\ a_1&c_1&0&-e\\ b_1&d_1&e&0 \end{array} \right), \end{equation} $a_1,b_1,c_1,d_1,b,e\in\Cb$, is considered. The matrix $AA^\dag$ has two eigenvalues with degeneracy two, i.e., $n=m=2$ and
$\vert AA^\dag\vert=|b_1c_1-a_1d_1+be|^4$. The generalized concurrence $d$ is given by
$d=4|b_1c_1-a_1d_1+be|$. Let $p$ be a $16\times 16$ matrix with only non-zero entries $p_{1,16}=p_{2,15}=-p_{3,14}=p_{4,10}=p_{5,12}=p_{6,11} =p_{7,13}=-p_{8,8}=-p_{9,9}=p_{10,4}=p_{11,6}=p_{12,5} =p_{13,7}=-p_{14,3}=p_{15,2}=p_{16,1}=1$. $d$ can be further written as \begin{equation}\label{dp}
d=|\langle \psi |p\psi^* \rangle |. \end{equation} Let $\Psi$ denote the set of pure states (\ref{psi}) with $A$ given as the form of (\ref{a}). Consider all mixed states with density matrix $\rho$ such that its decompositions are of the form \begin{equation}\label{rho1}
\rho = \sum_{i=1}^M p_i |\psi_i \rangle \langle \psi_i|,~~~~\sum_{i=1}^M p_i =1,~~~~|\psi_i\rangle\in\Psi.
\end{equation} All other kinds of decompositions, say a decomposition into $|\psi_i^\prime\rangle$, can be obtained from unitary linear combinations of $|\psi_i\rangle$ \cite{HillWootters,fjlw}. As linear combinations of
$|\psi_i\rangle$ do not change the form of the corresponding matrices (\ref{a}), once $\rho$ has a decomposition with all $|\psi_i\rangle\in\Psi$, all other decompositions, including the minimum decomposition of the entanglement of formation, also satisfy that $|\psi_i^\prime\rangle\in\Psi$. Then the minimum decomposition of the generalized concurrence is \cite{fjlw} \begin{equation} \label{drho} d(\rho)=\Lambda_1 - \sum_{i=2}^{16}\Lambda_i, \end{equation} where $\Lambda_i$, in decreasing order, are the square roots of the eigenvalues of the Hermitian matrix $R \equiv \sqrt{\sqrt{\rho}p{\rho^\ast}p\sqrt{\rho}}$, or, alternatively, the square roots of the eigenvalues of the non-Hermitian matrix $\rho p{\rho^\ast}p$.
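As a quick numerical sanity check of the structure above (our own illustration, not part of the original derivation), one can draw random complex entries and verify that $AA^\dag$ has two doubly degenerate eigenvalues and that the generalized concurrence (\ref{GC}) coincides with $4|b_1c_1-a_1d_1+be|$; normalization is irrelevant here because both sides of the identity scale in the same way.
\begin{verbatim}
# Illustrative sketch: numerical check of the 4x4 structure of Eq. (a).
import numpy as np

rng = np.random.default_rng(0)
a1, b1, c1, d1, b, e = rng.normal(size=6) + 1j * rng.normal(size=6)

A = np.array([[0,   b,  a1,  b1],
              [-b,  0,  c1,  d1],
              [a1, c1,   0,  -e],
              [b1, d1,   e,   0]])

lam = np.sort(np.linalg.eigvalsh(A @ A.conj().T))      # eigenvalues of A A^dagger
print(np.isclose(lam[0], lam[1]), np.isclose(lam[2], lam[3]))   # two degenerate pairs
print(np.isclose(4.0 * np.sqrt(lam[0] * lam[2]),
                 4.0 * abs(b1 * c1 - a1 * d1 + b * e)))         # d from Eq. (GC)
\end{verbatim}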
\section{Entanglement of formation for a class of high dimensional quantum states}
An important fact in obtaining the formula (\ref{drho}) is that the generalized concurrence $d$ is a quadratic form of the entries of the matrix $A$, so that $d$ can be expressed in the form of (\ref{dp}) in terms of a suitable matrix $p$. Generalizing to the $N$-dimensional case, we call a pure state (\ref{psi}) \underline{$d$-computable} if $A$ satisfies the following relations: \begin{equation}\label{dcomputable} \begin{array}{l} \vert AA^\dag\vert = ([ A ][ A ]^\ast)^{N/2},\\[3mm]
\vert AA^\dag - \lambda Id_N\vert=(\lambda^2 - \| A \| \lambda + [ A ][ A ]^\ast)^{N/2}, \end{array}
\end{equation} where $[A]$ and $\|A \|$ are quadratic forms of the $a_{ij}$, and $Id_N$ is the $N\times N$ identity matrix. We denote by ${\cal A}$ the set of matrices satisfying (\ref{dcomputable}), which implies that for $A\in{\cal A}$, $AA^\dag$ has at most two different eigenvalues, each with multiplicity $N/2$, and $d$ is a quadratic form of the entries of the matrix $A$.
In the following we give a kind of construction of high-dimensional $d$-computable states. For all $N^2\times N^2$ density matrices with decompositions in terms of these $N$-dimensional $d$-computable pure states, their entanglement of formation can be calculated with a formula similar to (\ref{drho}) (see (\ref{d2k1})).
We first present a kind of construction for a class of $N$-dimensional, $N = 2^k$, $2\leq k\in{I\!\! N}$, $d$-computable states. Set $$ A_2= \left( \begin{array}{cc} a&-c \\[3mm] c&d\\[3mm] \end{array} \right), $$ where $a,c,d \in \Cb$. For any $b_1,c_1 \in \Cb$, a $4\times 4$ matrix $A_4\in{\cal A}$ can be constructed in the following way, \begin{equation}\label{ha4} A_4= \left( \begin{array}{cc} B_2&A_2\\[3mm] -A_2^t&C_2^t\\[3mm] \end{array} \right) = \left( \begin{array}{cccc} 0&b_1&a&-c\\[3mm] -b_1&0&c&d\\[3mm] -a&-c&0&-c_1\\[3mm] c&-d&c_1&0 \end{array} \right), \end{equation} where $$ B_2 = b_1J_2, ~~~~ C_2 = c_1J_2, ~~~ J_2= \left( \begin{array}{cc} 0&1 \\[3mm] -1&0\\[3mm] \end{array} \right). $$ $A_4$ satisfies the relations in (\ref{dcomputable}): $$ \begin{array}{l}
\left| A_4 A^\dag_4 \right|=[(b_1c_1+a d + c^2)(b_1c_1+a d + c^2)^\ast]^2= ([ A_4 ][ A_4 ]^\ast)^2,\\[3mm]
\left| A_4 A^\dag_4 - \lambda Id_4 \right| = (\lambda^2 - (b_1b_1^\ast+c_1c_1^\ast+aa^\ast+2cc^\ast+dd^\ast)\lambda\\[3mm] ~~~~~~~~~~~~~~~~~~~~~~+ (b_1c_1+ a d + c^2)(b_1c_1+ a d + c^2)^\ast)^2\\[3mm]
~~~~~~~~~~~~~~~~~~~= (\lambda^2 - \| A_4 \|\lambda + [ A_4 ][ A_4 ]^\ast)^2, \end{array} $$ where \begin{equation} [ A_4 ]=(b_1c_1+a d + c^2),~~~
\| A_4 \|=b_1b_1^\ast+c_1c_1^\ast+aa^\ast+2cc^\ast+dd^\ast. \end{equation}
$A_8\in{\cal A}$ can be obtained from $A_4$, \begin{equation}\label{a8} A_8= \left( \begin{array}{cc} B_4&A_4\\[3mm] -A_4^t&C_4^t\\[3mm] \end{array} \right), \end{equation} where \begin{equation}\label{i4} B_4 = b_2J_4, ~~~~C_4 = c_2J_4, ~~~~ J_4= \left( \begin{array}{cccc} 0&0&0&1\\[3mm] 0&0&1&0\\[3mm] 0&-1&0&0\\[3mm] -1&0&0&0 \end{array} \right),~~~~ b_2,~c_2 \in \Cb. \end{equation}
For general construction of high dimensional matrices $A_{2^{k+1}}\in{\cal A}$, $2 \leq k\in{I\!\! N}$, we have \begin{equation}\label{a2k} A_{2^{k+1}}= \left( \begin{array}{cc} B_{2^{k}}&A_{2^{k}}\\[3mm] (-1)^{\frac{k(k+1)}{2}}A_{2^{k}}^t&C_{2^{k}}^t \end{array} \right) \equiv\left( \begin{array}{cc} b_{k}J_{2^{k}}&A_{2^{k}}\\[3mm] (-1)^{\frac{k(k+1)}{2}}A_{2^{k}}^t&c_{k}J_{2^{k}}^t \end{array} \right), \end{equation}
\begin{equation} \label{i2k} J_{2^{k+1}}= \left( \begin{array}{cc} 0&J_{2^{k}}\\[3mm] (-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}^t&0\\[3mm] \end{array} \right), \end{equation} where $b_k, c_k \in \Cb$, $B_{2^{k}}=b_{k}J_{2^{k}}$, $C_{2^{k}}=c_{k}J_{2^{k}}$. We call $J_{2^{k+1}}$ multipliers. Before proving that $A_{2^{k+1}}\in{\cal A}$, we first give the following lemma.
{\sf Lemma 1}. $A_{2^{k+1}}$ and $J_{2^{k+1}}$ satisfy the following relations: \begin{equation}\label{ii2k} \begin{array}{l} J_{2^{k+1}}^tJ_{2^{k+1}}=J_{2^{k+1}}J_{2^{k+1}}^t=Id_{2^{k+1}},\\[3mm] J_{2^{k+1}}^tJ_{2^{k+1}}^t=J_{2^{k+1}}J_{2^{k+1}} =(-1)^{\frac{(k+1)(k+2)}{2}}Id_{2^{k+1}}, \end{array} \end{equation} \begin{equation}\label{ai2k} \begin{array}{ll} J_{2^{k+1}}^\dag =J_{2^{k+1}}^t ,~~~& J_{2^{k+1}}^t=(-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k+1}},\\[3mm] A_{2^{k+1}}^t=(-1)^{\frac{k(k+1)}{2}}A_{2^{k+1}},~~~& A_{2^{k+1}}^\dag=(-1)^{\frac{k(k+1)}{2}}A^*_{2^{k+1}}. \end{array} \end{equation}
{\sf Proof}. One easily checks that relations in (\ref{ii2k}) hold for $k=1$. Suppose (\ref{ii2k}) hold for general $k$. We have $$ \begin{array}{rcl} J_{2^{k+1}}^tJ_{2^{k+1}}&=& \left( \begin{array}{cc} 0&(-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}^t\\[3mm] J_{2^{k}}&0\\[3mm] \end{array} \right) \left( \begin{array}{cc} 0&J_{2^{k}}\\[3mm] (-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}^t&0\\[3mm] \end{array} \right)\\[9mm] &=&\left( \begin{array}{cc} (-1)^{(k+1)(k+2)}J_{2^{k}}J_{2^{k}}^t&0\\[3mm] 0&J_{2^{k}}^tJ_{2^{k}}\\[3mm] \end{array} \right) =Id_{2^{k+1}} \end{array} $$ and $$ \begin{array}{rcl} J_{2^{k+1}}^tJ_{2^{k+1}}^t &=& \left( \begin{array}{cc} 0&(-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}^t\\[3mm] J_{2^{k}}&0\\[3mm] \end{array} \right) \left( \begin{array}{cc} 0&(-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}^t\\[3mm] J_{2^{k}}&0\\[3mm] \end{array} \right)\\[9mm] &=&\left( \begin{array}{cc} (-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}J_{2^{k}}^t&0\\[3mm] 0&(-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}^tJ_{2^{k}}\\[3mm] \end{array} \right) =(-1)^{\frac{(k+1)(k+2)}{2}}Id_{2^{k+1}}. \end{array} $$ Therefore the relations for $J_{2^{k+1}}^tJ_{2^{k+1}}$ and $J_{2^{k+1}}^tJ_{2^{k+1}}^t$ are valid also for $k+1$. The cases for $J_{2^{k+1}}J_{2^{k+1}}^t$ and $J_{2^{k+1}}J_{2^{k+1}}$ can be similarly treated.
The formula $J_{2^{k+1}}^t=(-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k+1}}$ in (\ref{ai2k}) is easily deduced from (\ref{ii2k}) and the fact $J_{2^{k+1}}^\dag = J_{2^{k+1}}^t$.
The last two formulae in (\ref{ai2k}) are easily verified for $k=1$. If they hold for general $k$, then we have, $$ A_{2^{k+1}}^t = \left( \begin{array}{cc} B_{2^{k}}^t&(-1)^{\frac{k(k+1)}{2}}A_{2^{k}}\\[3mm] A_{2^{k}}^t&C_{2^{k}} \end{array} \right)
=\left( \begin{array}{cc} (-1)^{\frac{k(k+1)}{2}}B_{2^{k}}&(-1)^{\frac{k(k+1)}{2}}A_{2^{k}}\\[3mm] A_{2^{k}}^t&(-1)^{\frac{k(k+1)}{2}}C_{2^{k}}^t \end{array} \right) =(-1)^{\frac{k(k+1)}{2}}A_{2^{k+1}}, $$ i.e., it holds also for $k+1$. The last equality in (\ref{ai2k}) is obtained from the conjugate of the formula above.
$\rule{2mm}{2mm}$
{\sf Lemma 2.} The following relations can be verified straightforwardly from Lemma 1, \begin{equation}\begin{array}{l}\label{BC} B_{2^{k}}^t=(-1)^{\frac{k(k+1)}{2}}B_{2^{k}},~~~ C_{2^{k}}^t=(-1)^{\frac{k(k+1)}{2}}C_{2^{k}},\\[3mm] B_{2^{k}}^\dag =(-1)^{\frac{k(k+1)}{2}}B^*_{2^{k}},~~~ C_{2^{k}}^\dag =(-1)^{\frac{k(k+1)}{2}}C^*_{2^{k}}. \end{array} \end{equation} \begin{equation}\label{BCI} B_{2^{k+1}}^{-1}=\frac{1}{b_k^2}B_{2^{k+1}}^t =\frac{1}{b_kb_k^*}B^\dag_{2^{k+1}},~~~ C_{2^{k+1}}^{-1}=\frac{1}{c_k^2}C_{2^{k+1}}^t =\frac{1}{c_kc_k^*}C^\dag_{2^{k+1}}. \end{equation} \begin{equation}\label{BBCC} \begin{array}{l}
B_{2^{k+1}}^tB_{2^{k+1}}=B_{2^{k+1}}B_{2^{k+1}}^t =b_k^2 Id_{2^{k+1}},~~ C_{2^{k+1}}^tC_{2^{k+1}}=C_{2^{k+1}}C_{2^{k+1}}^t =c_k^2 Id_{2^{k+1}},\\[3mm] B_{2^{k+1}}^\dag B_{2^{k+1}}=B_{2^{k+1}}B^\dag_{2^{k+1}} =b_k b_k^* Id_{2^{k+1}},~~~ C_{2^{k+1}}^\dag C_{2^{k+1}}=C_{2^{k+1}}C_{2^{k+1}}^\dag =c_k c_k^* Id_{2^{k+1}}. \end{array} \end{equation}
For any $A_{2^{k+1}}\in \cal A$, $k\geq 2$, we define \begin{equation} \begin{array}{rcl}
||A_{2^{k+1}}||&:=&b_kb_k^\ast+c_kc_k^\ast+||A_{2^k}||,\\[3mm] [A_{2^{k+1}}]&:=&(-1)^{k(k+1)/2}b_kc_k-[A_{2^k}]. \end{array} \end{equation}
{\sf Lemma 3}. For any $k\geq 2$, we have, \begin{equation}\label{lemma3} \begin{array}{rcl} (A_{2^{k+1}}J_{2^{k+1}})(J_{2^{k+1}}A_{2^{k+1}})^t&=& (A_{2^{k+1}}J_{2^{k+1}})^t(J_{2^{k+1}}A_{2^{k+1}})\\[3mm] &=&((-1)^{\frac{k(k+1)}{2}}b_kc_k-[A_{2^{k}}])Id_{2^{k+1}} =[A_{2^{k+1}}] Id_{2^{k+1}},\\[4mm] (A_{2^{k+1}}^\ast J_{2^{k+1}})(J_{2^{k+1}}A_{2^{k+1}}^\ast)^t&=& (A_{2^{k+1}}^\ast J_{2^{k+1}})^t(J_{2^{k+1}}A_{2^{k+1}}^\ast) =[A_{2^{k+1}}]^\ast Id_{2^{k+1}}. \end{array} \end{equation}
{\sf Proof}. One can verify that Lemma 3 holds for $k=2$. Suppose it is valid for $k$, we have $$ \begin{array}{l} (A_{2^{k+1}}J_{2^{k+1}})(J_{2^{k+1}}A_{2^{k+1}})^t\\[3mm] =\left( \begin{array}{cc} (-1)^{\frac{(k+1)(k+2)}{2}}A_{2^{k}}J_{2^{k}}^t&B_{2^{k}}J_{2^{k}}\\[3mm] (-1)^{\frac{(k+1)(k+2)}{2}}C_{2^{k}}^tJ_{2^{k}}^t& (-1)^{\frac{k(k+1)}{2}}A_{2^{k}}^tJ_{2^{k}} \end{array} \right) \left( \begin{array}{cc} (-1)^{\frac{k(k+1)}{2}}J_{2^{k}}A_{2^{k}}^t&J_{2^{k}}C_{2^{k}}^t\\[3mm] (-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}^tB_{2^{k}}& (-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}^tA_{2^{k}} \end{array} \right)^t\\[9mm] =\left( \begin{array}{cc} e_{11}& e_{12}\\[3mm] e_{21}& e_{22} \end{array} \right), \end{array} $$ where $$ \begin{array}{rcl} e_{11}&=&(-1)^{\frac{(k+1)(k+2)+k(k+1)}{2}}A_{2^{k}}J_{2^{k}}^tA_{2^{k}}J_{2^{k}}^t +(-1)^{\frac{k(k+1)}{2}}b_kc_k Id_{2^{k}}\\[3mm] &=&(-1)^{\frac{(k+1)(k+2)+k(k-1)}{2}}(A_{2^{k}}J_{2^{k}}^t)(J_{2^{k}}A_{2^{k}}^t)^t +(-1)^{\frac{k(k+1)}{2}}b_kc_k Id_{2^{k}}\\[3mm] &=&((-1)^{\frac{k(k+1)}{2}}b_kc_k-[A_{2^{k}}])Id_{2^{k}},\\[3mm] e_{12}&=&b_k A_{2^{k}}J_{2^{k}}^t+ (-1)^{\frac{(k+1)(k+2)+k(k+1)}{2}}b_kA_{2^{k}}^tJ_{2^{k}}\\[3mm] &=& b_kA_{2^{k}}J_{2^{k}}^t(1+(-1)^{\frac{(k+1)(k+2)+k(k-1)}{2}})=0,\\[3mm] e_{21}&=&(-1)^{\frac{(k+1)(k+2)}{2}}c_k A_{2^{k}}J_{2^{k}}^t +(-1)^{\frac{k(k+1)}{2}}c_k A_{2^{k}}^tJ_{2^{k}}=0,\\[3mm] e_{22}&=& (-1)^{\frac{k(k+1)}{2}}b_k c_kId_{2^{k}}+ (-1)^{\frac{(k+1)(k+2)+k(k+1)}{2}}A_{2^{k}}^tJ_{2^{k}}A_{2^{k}}^tJ_{2^{k}}\\[3mm] &=&(-1)^{\frac{k(k+1)}{2}}b_k c_k Id_{2^{k}}+ (-1)^{\frac{(k+1)(k+2)+k(k-1)}{2}}(A_{2^{k}}J_{2^{k}})(J_{2^{k}}A_{2^{k}})^t\\[3mm] &=&((-1)^{\frac{k(k+1)}{2}}b_kc_k-[A_{2^{k}}])Id_{2^{k}}. \end{array} $$ Hence $$ (A_{2^{k+1}}J_{2^{k+1}})(J_{2^{k+1}}A_{2^{k+1}})^t =((-1)^{\frac{k(k+1)}{2}}b_kc_k-[A_{2^{k}}])Id_{2^{k+1}}=[A_{2^{k+1}}])Id_{2^{k+1}}. $$ Similar calculations apply to $(A_{2^{k+1}}J_{2^{k+1}})^t(J_{2^{k+1}}A_{2^{k+1}})$. Therefore the Lemma holds for $k+1$. The last equation can be deduced from the first one.
{\sf Theorem 2}. $A_{2^{k+1}}$ satisfies the following relation: \begin{equation} \label{thm2}
|A_{2^{k+1}}A_{2^{k+1}}^\dag|=([A_{2^{k+1}}][A_{2^{k+1}}]^*)^{2^k} =[((-1)^{\frac{k(k+1)}{2}}b_kc_k-[A_{2^{k}}]) ((-1)^{\frac{k(k+1)}{2}}b^*_kc^*_k-[A_{2^{k}}]^*)]^{2^k}. \end{equation}
{\sf Proof}. By using Lemma 1-3, we have $$ \begin{array}{rcl}
|A_{2^{k+1}}|
&=&\left| \left( \begin{array}{cc} B_{2^{k}}& A_{2^{k}}\\[3mm] (-1)^{\frac{k(k+1)}{2}}A_{2^{k}}^t&C_{2^{k}}^t \end{array} \right)
\right|\\[9mm]
&=&\left| \left( \begin{array}{cc} Id_{2^{k}}&-A_{2^{k}}(C_{2^{k}}^t)^{-1}\\[3mm] 0&Id_{2^{k}} \end{array} \right) \left( \begin{array}{cc} B_{2^{k}}& A_{2^{k}}\\[3mm] (-1)^{\frac{k(k+1)}{2}}A_{2^{k}}^t&C_{2^{k}}^t \end{array} \right)
\right|\\[9mm]
&=&\left| \left( \begin{array}{cc} B_{2^{k}}-(-1)^{\frac{k(k+1)}{2}}A_{2^{k}}(C_{2^{k}}^t)^{-1}A_{2^{k}}^t& 0\\[3mm] (-1)^{\frac{k(k+1)}{2}}A_{2^{k}}^t&C_{2^{k}}^t \end{array} \right)
\right|\\[9mm]
&=&|b_kc_k J_{2^{k}}J_{2^{k}}^t-(-1)^{\frac{k(k+1)}{2}}\frac{1}{c_k^2}
A_{2^{k}}C_{2^{k}}A_{2^{k}}^tC_{2^{k}}^t|\\[4mm]
&=&|b_kc_k Id_{2^{k}}-(-1)^{\frac{k(k+1)}{2}}
(A_{2^{k}}J_{2^{k}})(J_{2^{k}}A_{2^{k}})^t|\\[4mm]
&=&|(-1)^{\frac{k(k+1)}{2}}b_kc_k Id_{2^{k}}-
[A_{2^{k}}]Id_{2^{k}}| =((-1)^{\frac{k(k+1)}{2}}b_kc_k-[A_{2^{k}}])^{2^k}. \end{array} $$ Therefore $$
|A_{2^{k+1}}A_{2^{k+1}}^\dag|=|A_{2^{k+1}}|\,|A_{2^{k+1}}^\dag| =|A_{2^{k+1}}|\,|A_{2^{k+1}}|^{\ast} =([A_{2^{k+1}}][A_{2^{k+1}}]^\ast)^{2^k}. $$
$\rule{2mm}{2mm}$
{\sf Lemma 4}. $A_{2^{k+1}}$ and $J_{2^{k+1}}$ satisfy the following relations: $$ \begin{array}{l} (A_{2^{k+1}}J_{2^{k+1}})(J_{2^{k+1}}A_{2^{k+1}})^\dag +(J_{2^{k+1}}A_{2^{k+1}}^\ast)(J_{2^{k+1}}A_{2^{k+1}})^t\\[3mm] ~~~~~~~~=A_{2^{k+1}}A_{2^{k+1}}^\dag+J_{2^{k+1}}A_{2^{k+1}}^\ast A_{2^{k+1}}^tJ_{2^{k+1}}^t
=||A_{2^{k+1}}||Id_{2^{k+1}},\\[3mm] (A_{2^{k+1}}J_{2^{k+1}})^t(A_{2^{k+1}}^\ast J_{2^{k+1}}) +(J_{2^{k+1}}A_{2^{k+1}})^\dag(J_{2^{k+1}}A_{2^{k+1}})\\[3mm] ~~~~~~~~=A_{2^{k+1}}^\dag A_{2^{k+1}}+J_{2^{k+1}}^tA_{2^{k+1}}^tA_{2^{k+1}}^\ast J_{2^{k+1}}
=||A_{2^{k+1}}||Id_{2^{k+1}}. \end{array} $$
{\sf Proof}. It can be verified that the first formula holds for $k=2$, if it holds for $k$, we have $$ \begin{array}{l} (A_{2^{k+1}}J_{2^{k+1}})(A_{2^{k+1}}J_{2^{k+1}})^\dag +(J_{2^{k+1}}A^*_{2^{k+1}})(J_{2^{k+1}}A_{2^{k+1}})^t\\[3mm] =\left( \begin{array}{cc} (-1)^{\frac{(k+1)(k+2)}{2}}A_{2^{k}}J_{2^{k}}^t&B_{2^{k}}J_{2^{k}}\\[3mm] (-1)^{\frac{(k+1)(k+2)}{2}}C_{2^{k}}^tJ_{2^{k}}^t& (-1)^{\frac{k(k+1)}{2}}A_{2^{k}}^tJ_{2^{k}} \end{array} \right) \left( \begin{array}{cc} (-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}A_{2^{k}}^\dag& (-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}C^*_{2^{k}}\\[3mm] J_{2^{k}}^tB_{2^{k}}^\dag& (-1)^{\frac{k(k+1)}{2}}J^t_{2^{k}}A_{2^{k}}^* \end{array} \right)\\[9mm] +\left( \begin{array}{cc} (-1)^{\frac{k(k+1)}{2}}J_{2^{k}}A_{2^{k}}^\dag&J_{2^{k}}C_{2^{k}}^\dag\\[3mm] (-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}^tB_{2^{k}}^*& (-1)^{\frac{(k+1)(k+2)}{2}}J_{2^{k}}^tA_{2^{k}}^* \end{array} \right) \left( \begin{array}{cc} (-1)^{\frac{k(k+1)}{2}}A_{2^{k}}J_{2^{k}}^t& (-1)^{\frac{(k+1)(k+2)}{2}}B_{2^{k}}^tJ_{2^{k}}\\[3mm] C_{2^{k}}J_{2^{k}}^t& (-1)^{\frac{(k+1)(k+2)}{2}}A_{2^{k}}^tJ_{2^{k}} \end{array} \right)\\[9mm] =\left( \begin{array}{cc} f_{11}& f_{12}\\[3mm] f_{21}& f_{22} \end{array} \right), \end{array} $$ where, by using Lemma 1 and 2, $$ \begin{array}{rcl} f_{11}=f_{22}&=& A_{2^{k}}A_{2^{k}}^\dag+J_{2^{k}}A_{2^{k}}^\dag A_{2^{k}}J_{2^{k}}^t+BB^\dag +J_{2^{k}}C^\dag CJ_{2^{k}}^t \\[3mm] &=& A_{2^{k}}A_{2^{k}}^\dag+J_{2^{k}}(-1)^{\frac{k(k+1)}{2}}A_{2^{k}}^* (-1)^{\frac{k(k+1)}{2}}A^t_{2^{k}}J_{2^{k}}^t+(b_kb_k^*+c_kc_k^*)Id_{2^{k}}\\[3mm]
&=&(||A_{2^{k}}||+b_kb_k^*+c_kc_k^*)Id_{2^{k}}=||A_{2^{k+1}}||Id_{2^{k}}, \end{array} $$ $$ \begin{array}{rcl} f_{12}&=&A_{2^{k}}C^*_{2^{k}}+(-1)^{\frac{k(k+1)}{2}}B_{2^{k}}A^*_{2^{k}} +(-1)^{\frac{k(k+1)+(k+1)(k+2)}{2}}b_kJ_{2^{k}}A_{2^{k}}^\dag +(-1)^{\frac{(k+1)(k+2)}{2}}c^*_kA_{2^{k}}^tJ_{2^{k}}\\[3mm] &=&(-1)^{\frac{k(k+1)}{2}}(B_{2^{k}}A^*_{2^{k}} +(-1)^{\frac{k(k-1)+(k+1)(k+2)}{2}}B_{2^{k}}A^*_{2^{k}})\\[3mm] &&+A_{2^{k}}C^*_{2^{k}} +(-1)^{\frac{k(k-1)+(k+1)(k+2)}{2}}A_{2^{k}}C^*_{2^{k}}=0, \end{array} $$ $$ \begin{array}{rcl} f_{21}&=&C_{2^{k}}^tA^\dag_{2^{k}}+(-1)^{\frac{k(k+1)}{2}}A_{2^{k}}^tB_{2^{k}}^\dag +(-1)^{\frac{k(k+1)+(k+1)(k+2)}{2}}b^*_kA_{2^{k}}J_{2^{k}}^t +(-1)^{\frac{(k+1)(k+2)}{2}}c_kJ_{2^{k}}^tA^*_{2^{k}}\\[3mm] &=&(-1)^{\frac{k(k+1)}{2}}(b_kA_{2^{k}}^tJ_{2^{k}}^t +(-1)^{\frac{k(k-1)+(k+1)(k+2)}{2}}b_kA_{2^{k}}^tJ_{2^{k}}^t)\\[3mm] &&+c^*_k J_{2^{k}}^tA_{2^{k}}^t +(-1)^{\frac{k(k-1)+(k+1)(k+2)}{2}}c_k^* J_{2^{k}}^tA_{2^{k}}^t =0. \end{array} $$ Hence the first formula holds also for $k+1$. The second formula can be verified similarly.
$\rule{2mm}{2mm}$
{\sf Lemma 5}. Matrices $B_{2^k}$, $A_{2^k}$ and $C_{2^k}$ satisfy the following relations: \begin{equation}\label{fa2k} ((-1)^{k(k+1)\over 2} B_{2^k}A_{2^k}^\ast+A_{2^k}C_{2^k}^\ast) ((-1)^{k(k+1)\over 2}A_{2^k}^\ast B_{2^k}+C_{2^k}^\ast A_{2^k})^t = F(A_{2^{k+1}})Id_{2^k}, \end{equation} where \begin{equation}\label{F} F(A_{2^{k+1}}) =c_k^{\ast 2}[A_{2^k}]+b_k^2[A_{2^k}]^{\ast}
+(-1)^{k(k+1)\over 2} b_kc_k^{\ast} \| A_{2^k} \|. \end{equation}
{\sf Proof}. By using Lemma 3 and 4, we have $$ \begin{array}{l} ((-1)^{k(k+1)\over 2} B_{2^k}A_{2^k}^\ast+A_{2^k}C_{2^k}^\ast) ((-1)^{k(k+1)\over 2}A_{2^k}^\ast B_{2^k}+C_{2^k}^\ast A_{2^k})^t\\[3mm] =b_k^2(J_{2^k}A_{2^k}^{\ast})(A_{2^k}^{\ast}J_{2^k})^t +c_k^{\ast 2}(A_{2^k}J_{2^k})(J_{2^k}A_{2^k})^t\\[3mm] ~~~+(-1)^{k(k+1)\over 2}b_k c_k^{\ast}[(A_{2^k}^{\ast}J_{2^k})(A_{2^k}^{\ast}J_{2^k})^t +(J_{2^k}A_{2^k}^{\ast})(J_{2^k}A_{2^k})^t]\\[3mm] =(c_k^{\ast 2}[A_{2^k}]+b_k^2[A_{2^k}]^{\ast}
+(-1)^{k(k+1)\over 2} b_kc_k^{\ast} \| A_{2^k} \|) Id_{2^k} =F(A_{2^{k+1}})Id_{2^k}. \end{array} $$
{\sf Lemma 6}. $A_{2^k}$ and $J_{2^k}$ satisfy the following relation: \begin{equation}\label{fa2kp}
\| A_{2^k} \| J_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t = [ A_{2^k} ][ A_{2^k} ]^\ast Id_{2^k} + J_{2^k}A_{2^k}^\ast A_{2^k}^tA_{2^k}^\ast A_{2^k}^tJ_{2^k}^t. \end{equation}
{\sf Proof}. From (\ref{fa2k}) we have the following relation: $$ \begin{array}{l} F(A_{2^{k+1}})J_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t\\[4mm] =((-1)^{k(k+1)\over 2}B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast )(-1)^{k(k+1)\over 2} b_k (A_{2^k}^\ast J_{2^k})^tJ_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t\\[4mm]
\quad+ ((-1)^{k(k+1)\over 2} B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast )c_k^\ast
(J_{2^k}A_{2^k})^tJ_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t\\[4mm] =(-1)^{k(k+1)\over 2} b_k((-1)^{k(k+1)\over 2} B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast )
[(A_{2^k}^\ast J_{2^k})^t(J_{2^k}A_{2^k}^\ast) ] A_{2^k}^tJ_{2^k}^t\\[4mm]
\quad+ c_k^\ast [(-1)^{k(k+1)\over 2} B_{2^k}A_{2^k}^\ast
(J_{2^k}A_{2^k})^tJ_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t
+A_{2^k}C_{2^k}^\ast (J_{2^k}A_{2^k})^tJ_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t ]\\[4mm] =(-1)^{k(k+1)\over 2} b_k((-1)^{k(k+1)\over 2} b_kJ_{2^k}A_{2^k}^\ast [ A_{2^k} ]^\ast A_{2^k}^tJ_{2^k}^t + c_k^\ast A_{2^k}J_{2^k} [A_{2^k}]^\ast A_{2^k}^tJ_{2^k}^t )\\[4mm]
\quad + c_k^\ast [ (-1)^{k(k+1)\over 2} b_kJ_{2^k}A_{2^k}^\ast
A_{2^k}^tA_{2^k}^\ast A_{2^k}^tJ_{2^k}^t
+c_k^\ast [A_{2^k}] J_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t ]\\[4mm] =b_k^2[ A_{2^k} ]^\ast J_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t + (-1)^{k(k+1)\over 2} b_kc_k^\ast [ A_{2^k} ][ A_{2^k} ]^\ast Id_{2^k}\\[4mm]
\quad +(-1)^{k(k+1)\over 2} c_k^\ast
b_kJ_{2^k}A_{2^k}^\ast A_{2^k}^tA_{2^k}^\ast A_{2^k}^tJ_{2^k}^t +c_k^{\ast 2} [A_{2^k}] J_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t\\[4mm] =(b_k^2[ A_{2^k} ]^\ast +c_k^{\ast 2}[ A_{2^k} ])J_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t + (-1)^{k(k+1)\over 2} b_kc_k^\ast ([ A_{2^k} ][ A_{2^k} ]^\ast Id_{2^k}\\[4mm] \quad +J_{2^k}A_{2^k}^\ast A_{2^k}^tA_{2^k}^\ast A_{2^k}^tJ_{2^k}^t). \end{array} $$
Using (\ref{F}) we have $$
\| A_{2^k} \| J_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t = [ A_{2^k} ][ A_{2^k} ]^\ast Id_{2^k} + J_{2^k}A_{2^k}^\ast A_{2^k}^tA_{2^k}^\ast A_{2^k}^tJ_{2^k}^t. $$
$\rule{2mm}{2mm}$
{\sf Theorem 3}. The characteristic polynomials of $A_{2^{k+1}}A_{2^{k+1}}^\dag$ and $A_{2^{k+1}}^\dag A_{2^{k+1}}$ satisfy the following relations: \begin{equation} \label{thm4}
\begin{array}{l} |A_{2^{k+1}}A_{2^{k+1}}^\dag-\lambda Id_{2^{k+1}} |
=(\lambda^2-||A_{2^{k+1}}||\lambda+[A_{2^{k+1}}][A_{2^{k+1}}]^*)^{2^k},\\[3mm]
|A_{2^{k+1}}^\dag A_{2^{k+1}}-\lambda Id_{2^{k+1}} |
=(\lambda^2-||A_{2^{k+1}}||\lambda+[A_{2^{k+1}}][A_{2^{k+1}}]^*)^{2^k}. \end{array} \end{equation}
{\sf Proof}. Let $$ \Lambda_k=-[(c_kc_k^\ast-\lambda) Id_{2^k} + A_{2^k}^tA_{2^k}^\ast] [(-1)^{k(k+1)\over 2} B_{2^k}A_{2^k}^\ast+A_{2^k}C_{2^k}^\ast]^{-1}. $$ Then we have
$$ \begin{array}{l}
\left|A_{2^{k+1}}A_{2^{k+1}}^\dag - \lambda Id_{2^{k+1}} \right|
= \left| \left( \begin{array}{cc} (-1)^{k(k+1)\over 2} B_{2^k}A_{2^k}^\ast+A_{2^k}C_{2^k}^\ast&(b_kb_k^\ast-\lambda)Id_{2^k} + A_{2^k}A_{2^k}^\dag \\[3mm] (c_kc_k^\ast-\lambda) Id_{2^k} + A_{2^k}^tA_{2^k}^\ast&(-1)^{k(k+1)\over 2} A_{2^k}^tB_{2^k}^\dag +C_{2^k}^tA_{2^k}^\dag \\[3mm] \end{array} \right)
\right|\\[9mm]
=\left| \left( \begin{array}{cc} Id_{2^k}&0\\[3mm] \Lambda_k&Id_{2^k}\\[3mm] \end{array}\right) \left( \begin{array}{cc} (-1)^{k(k+1)\over 2} B_{2^k}A_{2^k}^\ast+A_{2^k}C_{2^k}^\ast&(b_kb_k^\ast-\lambda)Id_{2^k} + A_{2^k}A_{2^k}^\dag \\[3mm] (c_kc_k^\ast-\lambda) Id_{2^k} + A_{2^k}^tA_{2^k}^\ast&(-1)^{k(k+1)\over 2} A_{2^k}^tB_{2^k}^\dag +C_{2^k}^tA_{2^k}^\dag \\[3mm] \end{array}
\right)\right|\\[9mm]
=\left| \left(\begin{array}{cc} (-1)^{k(k+1)\over 2} B_{2^k}A_{2^k}^\ast+A_{2^k}C_{2^k}^\ast&(b_kb_k^\ast-\lambda)Id_{2^k} + A_{2^k}A_{2^k}^\dag \\[3mm] 0&-\Lambda_k[(b_kb_k^\ast-\lambda)Id_{2^k} + A_{2^k}A_{2^k}^\dag]
+(-1)^{k(k+1)\over 2} A_{2^k}^tB_{2^k}^\dag+C_{2^k}^tA^\dag_{2^k} \end{array} \right)
\right|\\[9mm]
=\left|I+II \right|, \end{array} $$ where $$ \begin{array}{rcl} I&=&((-1)^{k(k+1)\over 2} B_{2^k}A_{2^k}^\ast+A_{2^k}C_{2^k}^\ast) ((-1)^{k(k+1)\over 2} B_{2^k}^\ast A_{2^k}+A_{2^k}^\ast C_{2^k})^t\\[3mm] &=&(-1)^{k(k+1)\over 2}b_kc_k[A_{2^k}]^\ast Id_{2^k} +(-1)^{k(k+1)\over 2}b_k^\ast c_k^\ast [A_{2^k}]Id_{2^k} +b_kb_k^\ast J_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t +c_kc_k^\ast A_{2^k}A_{2^k}^\dag \end{array} $$ and, by using Lemma 5, $$ \begin{array}{rcl} II&=&-((-1)^{k(k+1)\over 2} B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast ) \Lambda_k[(b_kb_k^\ast -\lambda)Id_{2^k} + A_{2^k}A_{2^k}^\dag]\\[3mm] &=&[(c_kc_k^\ast -\lambda)((-1)^{k(k+1)\over 2}B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast) +((-1)^{k(k+1)\over 2}B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast )A_{2^k}^tA_{2^k}^\ast]\\[3mm] &&[(b_kb_k^\ast -\lambda)((-1)^{k(k+1)\over 2}B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast)^{-1} +((-1)^{k(k+1)\over 2}B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast)^{-1}A_{2^k}A_{2^k}^\dag]\\[3mm] &=&(b_kb_k^\ast -\lambda)(c_kc_k^\ast -\lambda)Id_{2^k}+(b_kb_k^\ast -\lambda) ((-1)^{k(k+1)\over 2}B_{2^k}A_{2^k}^\ast \\[3mm] &&+A_{2^k}C_{2^k}^\ast )A_{2^k}^tA_{2^k}^\ast ((-1)^{k(k+1)\over 2}B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast )^{-1} +(c_kc_k^\ast -\lambda)A_{2^k}A_{2^k}^\dag\\[3mm] &&+((-1)^{k(k+1)\over 2}B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast)A_{2^k}^tA_{2^k}^\ast ((-1)^{k(k+1)\over 2}B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast)^{-1}A_{2^k}A_{2^k}^\dag\\[3mm] &=&(b_kb_k^\ast -\lambda)(c_kc_k^\ast -\lambda)Id_{2^k}+ (c_kc_k^\ast -\lambda)A_{2^k}A_{2^k}^\dag +\frac{b_kb_k^\ast -\lambda}{F(A_{2^{k+1}})}III +\frac{1}{F(A_{2^{k+1}})}III A_{2^k}A_{2^k}^\dag, \end{array} $$ where $$ \begin{array}{rcl} III&=&((-1)^{k(k+1)\over 2}B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast )A_{2^k}^tA_{2^k}^\ast ((-1)^{k(k+1)\over 2}A_{2^k}^\ast B_{2^k}+C_{2^k}^\ast A_{2^k})^t\\[3mm] &=&((-1)^{k(k+1)\over 2}B_{2^k}A_{2^k}^\ast +A_{2^k}C_{2^k}^\ast ) A_{2^k}^tJ_{2^k}^tJ_{2^k}A_{2^k}^\ast ((-1)^{k(k+1)\over 2}A_{2^k}^\ast B_{2^k}+C_{2^k}^\ast A_{2^k})^t\\[3mm] &=&[(-1)^{k(k+1)\over 2}b_k (J_{2^k}A_{2^k}^\ast )(J_{2^k}A_{2^k})^t +c_k^\ast (A_{2^k}J_{2^k})(J_{2^k}A_{2^k})^t]\\[3mm] &&[(-1)^{k(k+1)\over 2}b_k (J_{2^k}A_{2^k}^\ast )(A_{2^k}^\ast J_{2^k})^t +c_k^\ast (J_{2^k}A_{2^k}^\ast )(J_{2^k}A_{2^k})^t]\\[3mm] &=&[(-1)^{k(k+1)\over 2}b_k (J_{2^k}A_{2^k}^\ast )(J_{2^k}A_{2^k})^t +c_k^\ast [A_{2^k}]Id_{2^k}]\cdot\\[3mm] &&~~[(-1)^{k(k+1)\over 2}b_k [A_{2^k}]^\ast Id_{2^k} +c_k^\ast (J_{2^k}A_{2^k}^\ast )(J_{2^k}A_{2^k})^t]\\[3mm] &=&(b_k^2[A_{2^k}]^\ast+c_k^{\ast 2}[A_{2^k}])J_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t +(-1)^{k(k+1)\over 2}b_kc_k^\ast J_{2^k}A_{2^k}^\ast A_{2^k}^tA_{2^k}^\ast A_{2^k}^tJ_{2^k}^t\\[3mm] &&+(-1)^{k(k+1)\over 2}b_kc_k^\ast [A_{2^k}][A_{2^k}]^\ast Id_{2^k}.\\[3mm] \end{array} $$ From Lemma 6, we get $$ \begin{array}{rcl} III&=&(b_k^2[A_{2^k}]^\ast+c_k^{\ast 2}[A_{2^k}])J_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t
+(-1)^{k(k+1)\over 2}b_kc_k^\ast ||A_{2^k}||J_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t\\[3mm] &=&F(A_{2^{k+1}})J_{2^k}A_{2^k}^\ast A_{2^k}^tJ_{2^k}^t. \end{array} $$
From Lemma 3 we also have $$ \begin{array}{rcl} III A_{2^k}A_{2^k}^\dag&=&III A_{2^k}J_{2^k} J_{2^k}^tA_{2^k}^\dag\\[3mm] &=&F(A_{2^{k+1}})J_{2^k}A_{2^k}^\ast(J_{2^k}A_{2^k})^t(A_{2^k}J_{2^k}) (A_{2^k}^\ast J_{2^k})^t =F(A_{2^{k+1}})[A_{2^k}][A_{2^k}]^\ast Id_{2^k}. \end{array} $$
Therefore, $$ \begin{array}{l}
\left|A_{2^{k+1}}A_{2^{k+1}}^\dag - \lambda Id_{2^{k+1}} \right|
=|I+II|\\[3mm]
=|-\lambda^2 Id_{2^k}+\lambda(b_kb_k^\ast+c_kc_k^\ast+||A_{2^k}||)Id_{2^k} -(b_kb_k^\ast c_kc_k^\ast-(-1)^{k(k+1)\over 2}b_k^\ast c_k^\ast[A_{2^k}]\\[3mm]
~~~-(-1)^{k(k+1)\over 2}b_kc_k[A_{2^k}]^\ast+[A_{2^k}][A_{2^k}]^\ast)Id_{2^k}|\\[3mm]
=(\lambda^2-||A_{2^{k+1}}||\lambda+[A_{2^{k+1}}][A_{2^{k+1}}]^\ast)^{2^k}, \end{array} $$ where the first formula in Lemma 4 is used. The second formula in Theorem 3 follows from the fact that $A_{2^{k+1}}A_{2^{k+1}}^\dag$ and $A_{2^{k+1}}^\dag A_{2^{k+1}}$ have the same characteristic polynomial.
$\rule{2mm}{2mm}$
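In particular, since the right-hand side of (\ref{thm4}) is the $2^k$-th power of a single quadratic factor, $A_{2^{k+1}}A_{2^{k+1}}^\dag$ has at most two distinct eigenvalues, each with multiplicity $2^k$, namely $$ \lambda_{\pm}=\frac{1}{2}\left(||A_{2^{k+1}}||\pm\sqrt{||A_{2^{k+1}}||^2-4\,[A_{2^{k+1}}][A_{2^{k+1}}]^\ast}\right). $$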
From Theorems 2 and 3, the states given by (\ref{a2k}) are $d$-computable. In terms of (\ref{GC}), the generalized concurrence for these states is given by $$ d_{2^{k+1}}=2^{k+1}\vert[A_{2^{k+1}}]\vert=2^{k+1}\vert b_kc_k+ b_{k-1}c_{k-1}+...+b_1c_1+ad+c^2\vert. $$
Let $p_{2^{k+1}}$ be a symmetric anti-diagonal $2^{2k+2}\times 2^{2k+2}$ matrix with all the anti-diagonal elements $1$ except for those at rows $2^{k+1}-1 + s(2^{k+2}-2)$, $2^{k+1} + s(2^{k+2}-2)$, $2^{k+2}-1 + s(2^{k+2}-2)$, $2^{k+2} + s(2^{k+2}-2)$, $s=0,...,2^{k+1}-1$, which are $-1$. $d_{2^{k+1}}$ can then be written as \begin{equation}\label{dkp}
d_{2^{k+1}}=|\langle \psi_{2^{k+1}} |p_{2^{k+1}}\psi_{2^{k+1}}^{*} \rangle |, \end{equation} where \begin{equation}\label{psi2k1} \vert\psi_{2^{k+1}}\rangle=\sum_{i,j=1}^{2^{k+1}} (A_{2^{k+1}})_{ij}\,e_i\otimes e_j. \end{equation}
Let $\Phi$ denote the set of pure states with the form (\ref{psi2k1}). For mixed states with density matrices such that their decompositions are of the form \begin{equation}\label{rho12k1}
\rho_{2^{2k+2}} = \sum_{i=1}^M p_i |\psi_i \rangle \langle \psi_i|,~~~~\sum_{i=1}^M p_i =1,~~~~|\psi_i\rangle\in\Phi, \end{equation} their entanglement of formations, by using a similar calculation in obtaining formula (\ref{drho}) \cite{fjlw}, are then given by $E(d_{2^{k+1}}(\rho_{2^{2k+2}}))$, where \begin{equation}\label{d2k1} d_{2^{k+1}}(\rho_{2^{2k+2}})=\Omega_1 - \sum_{i=2}^{2^{2k+2}}\Omega_i,
\end{equation} and $\Omega_i$, in decreasing order, are the square roots of the eigenvalues of the matrix $\rho_{2^{2k+2}} p_{2^{k+1}}{\rho_{2^{2k+2}}^\ast}p_{2^{k+1}}$. Here again, due to the form of the matrix $A_{2^{k+1}}$ constructed in (\ref{a2k}), once $\rho$ has one decomposition with all $|\psi_i\rangle\in\Phi$, the vectors $|\psi_i^\prime\rangle$ of every other decomposition
also satisfy $|\psi_i^\prime\rangle\in\Phi$. Therefore, from the high dimensional $d$-computable states $A_{2^{k+1}}$, $2\leq k\leq N$, the entanglement of formation for a class of density matrices whose decompositions lie in these $d$-computable quantum states can be obtained analytically.
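As an aside, the quantity (\ref{d2k1}) is also easy to evaluate numerically. The following sketch is our own illustration and not part of the original derivation; the arrays \texttt{rho} and \texttt{p}, standing for $\rho_{2^{2k+2}}$ and $p_{2^{k+1}}$, are assumed to be supplied by the user:
\begin{verbatim}
import numpy as np

def d_of_rho(rho, p):
    # Omega_i are the square roots of the eigenvalues of rho * p * conj(rho) * p,
    # taken in decreasing order; d(rho) = Omega_1 - sum_{i>=2} Omega_i.
    m = rho @ p @ rho.conj() @ p
    omegas = np.sort(np.sqrt(np.clip(np.linalg.eigvals(m).real, 0, None)))[::-1]
    return omegas[0] - omegas[1:].sum()
\end{verbatim}
The entanglement of formation is then obtained by inserting the returned value into $E(\cdot)$.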
\section{Remarks and conclusions}
Besides the $d$-computable states constructed above, from (\ref{ha4}) we can also construct another class of high dimensional $d$-computable states given by $2^{k+1}\times 2^{k+1}$ matrices $A_{2^{k+1}}$, $2\leq k\in{I\!\! N}$, \begin{equation}\label{newa} A_{2^{k+1}}= \left( \begin{array}{cc} B_k&A_k\\[3mm] -A_k^t&C_k\\[3mm] \end{array} \right) \equiv\left( \begin{array}{cc} b_{k}I_{2^{k}}&A_{2^{k}}\\[3mm] -A_{2^{k}}^t&c_{k}I_{2^{k}} \end{array} \right), \end{equation} where $b_k,~c_k\in\Cb$, $I_4=J_4$, \begin{equation} I_{2^{k+1}}= \left( \begin{array}{cc} 0&I_{2^{k}}\\[3mm] -I_{2^{k}}&0\\[3mm] \end{array} \right) \end{equation} for $k+2~\mathrm{mod}~4 =0$, \begin{equation} I_{2^{k+1}}= \left( \begin{array}{cc} 0&I_{2^{k}}\\[3mm] I_{2^{k}}&0 \end{array} \right) \end{equation} for $k+2~\mathrm{mod}~4 =1$, \begin{equation} I_{2^{k+1}}= \left( \begin{array}{cccc} 0&0&0&I_{2^{k-1}}\\[3mm] 0&0&-I_{2^{k-1}}&0\\[3mm] 0&I_{2^{k-1}}&0&0\\[3mm] -I_{2^{k-1}}&0&0&0 \end{array} \right) \end{equation} for $k+2~\mathrm{mod}~4 =2$, and \begin{equation} I_{2^{k+1}}= \left( \begin{array}{cccccccc} 0&0&0&0&0&0&0&I_{2^{k-2}}\\[3mm] 0&0&0&0&0&0&-I_{2^{k-2}}&0\\[3mm] 0&0&0&0&0&-I_{2^{k-2}}&0&0\\[3mm] 0&0&0&0&I_{2^{k-2}}&0&0&0\\[3mm] 0&0&0&-I_{2^{k-2}}&0&0&0&0\\[3mm] 0&0&I_{2^{k-2}}&0&0&0&0&0\\[3mm] 0&I_{2^{k-2}}&0&0&0&0&0&0\\[3mm] -I_{2^{k-2}}&0&0&0&0&0&0&0 \end{array} \right) \end{equation} for $k+2~\mathrm{mod}~4 =3$.
One can prove that the matrices in (\ref{newa}) also give rise to $d$-computable states: $$
|A_{2^{k+1}}A_{2^{k+1}}^\dag| =[(c^2+ad-\sum_{i=1}^{k}b_ic_i)(c^2+ad-\sum_{i=1}^{k}b_ic_i)^*]^{2^k}, $$ $$ \begin{array}{rcl}
|A_{2^{k+1}}A_{2^{k+1}}^\dag-\lambda Id_{2^{k+1}}| &=&\displaystyle[\lambda^2-(aa^\ast+2cc^\ast+dd^\ast +\sum_{i=1}^{k}b_ib_i^\ast+\sum_{i=1}^{k}c_ic_i^\ast)\lambda\\[4mm] &&\displaystyle+(c^2+ad -\sum_{i=1}^{k}b_ic_i)(c^2+ad-\sum_{i=1}^{k}b_ic_i)^*]^{2^k}. \end{array} $$ The entanglement of formation for a density matrix with decompositions in these states is also given by a formula of the form (\ref{d2k1}).
In addition, the results obtained above may be used to solve systems of linear equations, e.g., in the analysis of a data bank, described by $A{\bf x}={\bf y}$, where $A$ is a $2^{k}\times 2^{k}$ matrix, $k\in{I\!\! N}$, and ${\bf x}$, ${\bf y}$ are $2^{k}$-dimensional column vectors. When the dimension $2^{k}$ is large, standard methods such as Gaussian elimination for solving $A{\bf x}={\bf y}$ may not be efficient. By our Lemma 3, if the matrix $A$ is of one of the following forms: $A_{2^k}$, $B_{2^k}A_{2^k}$, $A_{2^k}^t$ or $A_{2^k}^tB_{2^k}^t$, the solution ${\bf x}$ can be obtained by simple matrix multiplications. For example, $A_{2^k}{\bf x}={\bf y}$ is solved by $$ {\bf x}=\frac{1}{[A_{2^k}]}(A_{2^k}J_{2^k})^t J_{2^k}{\bf y}. $$ The solution to $B_{2^k}A_{2^k}{\bf x}={\bf y}$ is given by $$ {\bf x}=\frac{1}{b_k[A_{2^k}]}(A_{2^k}J_{2^k})^t J_{2^k}{\bf y}. $$
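A schematic implementation of this solution procedure might look as follows (our own sketch; the arrays \texttt{A} and \texttt{J} and the scalar \texttt{bracket\_A} $=[A_{2^k}]$ are assumed to come from the construction above, and the function name is ours):
\begin{verbatim}
import numpy as np

def solve_via_lemma3(A, J, bracket_A, y):
    # x = (1/[A]) (A J)^t J y; only matrix-vector products are needed,
    # since (A J)(J A)^t = [A] Id by Lemma 3.
    return J.T @ (A.T @ (J @ y)) / bracket_A
\end{verbatim}
The variant for $B_{2^k}A_{2^k}{\bf x}={\bf y}$ only divides additionally by $b_k$.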
We have presented a construction of a class of special matrices with at most two different eigenvalues. This class of matrices defines a special kind of $d$-computable states. The entanglement of formation for these $d$-computable states is a monotonically increasing function of the generalized concurrence. From this generalized concurrence the entanglement of formation for a large class of density matrices whose decompositions lie in these $d$-computable quantum states is obtained analytically. Besides its relation to quantum entanglement, the construction of $d$-computable states has its own mathematical interest.
\end{document} |
\begin{document}
\title{Engineering Framework for Optimizing Superconducting Qubit Designs}
\author{\mbox{Fei Yan}} \thanks{[email protected]}
\altaffiliation{Current address: Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{Youngkyu Sung} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \affiliation{Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{Philip Krantz} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{Archana Kamal} \altaffiliation{Current address: Department of Physics and Applied Physics, University of Massachusetts Lowell, Lowell, MA 01854, USA} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{David K. Kim} \affiliation{MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02421, USA} \author{Jonilyn L. Yoder} \affiliation{MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02421, USA} \author{Terry P. Orlando} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{Simon Gustavsson} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{\mbox{William D. Oliver}} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \affiliation{Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \affiliation{MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02421, USA} \affiliation{Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA}
\begin{abstract}
Superconducting quantum technologies require qubit systems whose properties meet several often conflicting requirements, such as long coherence times and high anharmonicity.
Here, we provide an engineering framework based on a generalized superconducting qubit model in the flux regime, which abstracts multiple
circuit design parameters and thereby supports design optimization across multiple qubit properties.
We experimentally investigate a special parameter regime which has
both high anharmonicity ($\sim\!1$\,GHz) and
long quantum coherence times ($T_1\!=\!40\!-\!80\,\mathrm{\mu s}$ and $T_\mathrm{2Echo}\!=\!2T_1$).
\end{abstract}
\maketitle
Since the first direct observation of quantum coherence in a superconducting qubit more than 20 years ago \cite{Nakamura-Nature-1999}, many variants have been designed and studied \cite{krantz2019quantum},
such as the Cooper-pair box (CPB) \cite{Nakamura-Nature-1999}, the persistent-current flux qubit (PCFQ) \cite{Orlando-PRB-1999,Mooij-Science-1999}, the transmon \cite{Koch-PRA-2007,Paik-PRL-2011}, the fluxonium \cite{Manucharyan-Science-2009}, and the capacitively-shunted flux qubit (CSFQ) \cite{Steffen-PRL-2010,Yan-NComms-2016}.
These superconducting qubit designs were usually categorized according to the ratio between the effective charging energy $E_\mathrm{C}$ and Josephson energy $E_\mathrm{J}$, into the charge ($E_\mathrm{J} \leq E_\mathrm{C}$) or flux ($E_\mathrm{J} \gg E_\mathrm{C}$) regime \cite{Clarke-Nature-2008}.
The CPB (Fig.~\ref{fig:circuit_main}), a representative in the charge regime, provides large anharmonicity that facilitates fast gate operations. However, strong background charge noise limits its coherence time \cite{Nakamura-Nature-1999}, and the dispersion from quasiparticle tunneling causes severe frequency instability \cite{schreier2008suppressing}.
Likewise, qubits in the flux regime, including the transmon, PCFQ, CSFQ, fluxonium and the rf-SQUID qubit, have been studied extensively as potential elements for gate-based quantum computing \cite{Kelly-Nature-2015,Ofek-Nature-2016,corcoles2015demonstration,Riste-NComms-2015}, quantum annealing \cite{Johnson-Nature-2011,Barends-Nature-2016}, quantum simulations \cite{Barends-NComms-2015} and many other applications, largely due to the flexibility in engineering their Hamiltonians and due to their relative insensitivity to charge noise.
In this work, we provide an engineering framework based on a generalized flux qubit (GFQ) model which accommodates most (if not all) contemporary qubit variants.
The framework facilitates an understanding of how key qubit properties are related to circuit parameters.
The increased complexity, i.e., the use of both a shunt capacitor and an array of Josephson junctions, enables better control over coherence, anharmonicity and qubit frequency.
As an example of implementing this framework, we experimentally demonstrate a special parameter regime, the ``quarton'' regime, named after its quartic potential profile.
In comparison with other state-of-the-art designs, the quarton can simultaneously maintain a desirable qubit frequency ($3\!-\!4$\,GHz), large anharmonicity ($\sim\!1$\,GHz), and high coherence ($T_1\!=\!40\!-\!80\,\mathrm{\mu s}$, $T_\mathrm{2Echo}\!=\!2T_1$).
Such a configurable energy level structure is advantageous with respect to the problem of frequency crowding in highly connected qubit systems.
We show experimentally that quarton qubits with as few as 8 and 16 array junctions and with a much smaller shunt capacitor allow for a compact design, promising better scalability and reproducibility.
\begin{figure}\label{fig:circuit_main}
\end{figure}
In the GFQ circuit with $N$ array junctions (Fig.~\ref{fig:circuit_main}), each junction is associated with a gauge-invariant (branch) phase $\varphi_i$. These phases satisfy the fluxoid quantization condition, $\sum_{k=1}^{N+1}\!\varphi_k \,+\, \varphi_\mathrm{e} = 2\pi z$ ($z\in\mathbb{Z}$), where $\varphi_\mathrm{e} = 2\pi \Phi_\mathrm{e}/\Phi_0$. $\Phi_\mathrm{e}$ is the external magnetic flux threading the qubit loop and $\Phi_0=h/2\mathrm{e}$ is the superconducting flux quantum.
Although the full Hamiltonian is $N$-dimensional, the symmetry among the array junctions allows the full Hamiltonian to be approximated by a one-dimensional Hamiltonian; and this dimension coincides with a lower-energy mode that describes the qubit \cite{Ferguson-PRX-2013}.
This one-dimensional Hamiltonian is given by
\begin{align} \label{eq:H_plus} \mathcal{H} &= - 4 E_\mathrm{C} \, {\partial_{\phi}}^2 + E_\mathrm{J} \Big( - \gamma N \cos(\phi/N) - \cos(\phi + \varphi_\mathrm{e}) \Big) \;, \end{align}
where $\phi=\varphi_1+\varphi_2+...+\varphi_N$ is the phase variable of the qubit mode, and $\gamma$ is the size ratio between an array junction and the smaller principal junction.
The effective charging energy is $E_\mathrm{C} = \mathrm{e}^2/2C_\Sigma$, where $C_\Sigma = C_\mathrm{sh} + C_\mathrm{J} + \gamma C_\mathrm{J}/N + C_\mathrm{g}$ is the total capacitance across the principal junction.
$C_\mathrm{g}$ is the correction from stray capacitances from superconducting islands to ground, which may become significant for large $N$ \cite{Ferguson-PRX-2013,Viola-PRB-2015}.
The principal junction has a Josephson energy $E_\mathrm{J} = I_\mathrm{c} \Phi_0/2\pi$.
In the $E_\mathrm{J}\!\gg\!E_\mathrm{C}$ limit, $\phi$ is well-defined and has small quantum fluctuations.
Such a multi-junction qubit (total junction number $\geqslant3$) achieves the best coherence when biased at $\varphi_\mathrm{e}=\pi$, where the qubit frequency is (at least) first-order insensitive to flux fluctuations. At this working point, we may expand the potential part of Eq.~(\ref{eq:H_plus}) to fourth order,
\begin{align} \label{eq:H_plus_4th} \mathcal{H} &= - 4 E_\mathrm{C} \, {\partial_{\phi}}^2 + E_\mathrm{J} \Big( \frac{\gamma/N-1}{2} \phi^2 + \frac{1}{24} \phi^4 \Big) \;, \end{align}
where we have assumed $N^3\!\gg\!\gamma$.
Depending on the value of $\gamma/N$, the problem can be categorized into one of the three regimes illustrated in Fig.~\ref{fig:circuit_main}(b).
The fluxon regime (\textbf{\romannumeral 1}), $1\!<\!\gamma\!<\!N$, was first demonstrated in the traditional PCFQ with $N=2$ \cite{VanDerWal-Science-2000}, where the potential assumes a double-well profile, providing strong anharmonicity.
The fluxonium extends the case to $N\!\approx\!100$ \cite{Pop-Nature-2014}.
The energy eigenstates can be treated as hybridized states via quantum-mechanical tunneling between neighboring wells.
The plasmon regime (\textbf{\romannumeral 2}), $\gamma\!>\!N$, was explored in the CSFQ, where the potential assumes a single-well profile. A leading quadratic term and a minor quartic term lead to weak anharmonicity, though the CSFQ is still more anharmonic than the transmon due to partial cancellation of the quadratic term \cite{Yan-NComms-2016}.
The quarton regime (\textbf{\romannumeral 3}), $\gamma \approx N$, approximates the problem to a particle in a quartic potential.
As we will show later, the quarton design has desirable features in its energy level configuration.
We notice that a similar design with $\gamma\approx N=3$ was used in a parametric amplifier, but it operates at non-degenerate bias to exploit the cubic potential term and, to the contrary, eliminate the quartic one \cite{frattini20173}.
\begin{figure}\label{fig:parameter_tradespace}
\end{figure}
To find qubit designs with predetermined desirable properties, we may expand the parameter space beyond $E_\mathrm{J}/E_\mathrm{C}$, to include $I_\mathrm{c}$, $C_\Sigma$, $N$ and $\gamma/N$ as the four independent design parameters.
Our engineering framework provides an abstraction that captures the underlying physics to develop a set of rules or guidelines by which one can understand the parameter-property tradespace, as illustrated in Fig.~\ref{fig:parameter_tradespace}(a).
Two of the circuit parameters, $I_\mathrm{c}$ and $C_\Sigma$, have been studied extensively.
In general, a lower $I_\mathrm{c}$ and a higher $C_\Sigma$ are preferred for reducing sensitivity to flux and charge noise respectively.
The energy level structure is also generally sensitive to their values, depending on the specific case.
In the following, we focus on the discussion of the other two quantities, $\gamma/N$ and $N$.
First, we consider $\gamma/N$ as an independent variable instead of $\gamma$, because the Hamiltonian in Eq.~(\ref{eq:H_plus_4th}) is parameterized by $E_\mathrm{J}$, $E_\mathrm{C}$, and $\gamma/N$.
We find that a smaller $\gamma/N$ leads to a smaller qubit frequency and a larger anharmonicity, except for certain cases such as very small $N$ or $\gamma\approx1$.
An example is shown in Fig.~\ref{fig:parameter_tradespace}(b).
An intuitive explanation is as follows.
With a symmetric potential profile, the wavefunctions of the ground state $\ket{0}$ and the second-excited state $\ket{2}$ have even parity while the excited state $\ket{1}$ has odd parity.
Reducing $\gamma/N$ will raise the potential energy around $\phi = 0$, pushing up $\ket{0}$ and $\ket{2}$ due to their non-zero amplitudes at $\phi = 0$. In contrast, the odd-parity $\ket{1}$ state is unaffected, leading to a smaller $\omega_{01}$ and a greater $\omega_{12}$.
At the critical value $\gamma/N=1$, the quadratic term in Eq.~(\ref{eq:H_plus_4th}) is canceled, resulting in a quartic potential.
We can find the solutions numerically with $E_n = \lambda_n ( \frac{2}{3} E_\mathrm{J} {E_\mathrm{C}}^2 )^{1/3}$. For the lowest three levels, we find $\lambda_0=1.0604$, $\lambda_1=3.7997$ and $\lambda_2=7.4557$.
Note that $\lambda_2-\lambda_1\approx \frac{4}{3}(\lambda_1-\lambda_0)$, suggesting that the anharmonicity of the quarton qubit is about 1/3 of its qubit frequency.
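These numbers, as well as the trend with $\gamma/N$ discussed above, can be reproduced by a direct one-dimensional diagonalization of Eq.~(\ref{eq:H_plus_4th}). The following finite-difference sketch is our own illustration (it is not the code behind Fig.~\ref{fig:parameter_tradespace}); the values $E_\mathrm{C}=0.3$\,GHz and $E_\mathrm{J}=15$\,GHz are assumed purely for illustration and are not tied to a specific device:
\begin{verbatim}
import numpy as np

E_C, E_J = 0.3, 15.0                      # GHz (h = 1)
phi = np.linspace(-4.5, 4.5, 1201)
d = phi[1] - phi[0]
kin = 4 * E_C * (2 * np.eye(phi.size)
                 - np.eye(phi.size, k=1) - np.eye(phi.size, k=-1)) / d**2

def levels(r):                            # r = gamma/N
    V = E_J * ((r - 1) / 2 * phi**2 + phi**4 / 24)
    return np.linalg.eigvalsh(kin + np.diag(V))[:3]

for r in (0.9, 1.0, 1.1):
    E0, E1, E2 = levels(r)
    print(r, E1 - E0, (E2 - E1) - (E1 - E0))  # qubit frequency, anharmonicity

print(levels(1.0) / (2 / 3 * E_J * E_C**2) ** (1 / 3))
# last line ~ [1.0604, 3.7997, 7.4557]
\end{verbatim}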
This interesting finding is useful in practice, as it is common to operate qubits in the frequency range of 3-6\,GHz, in part for better qubit initialization ($\omega_{01}\!\gg\!k_\mathrm{B}T/\hbar$), and in part for compatibility with high-performance microwave control electronics, although, exceptions exist using non-adiabatic control \cite{campbell2020universal,zhang2020universal}.
One third of the qubit frequency gives 1-2\,GHz anharmonicity, sufficient for suppressing leakage to non-computational states and alleviating the frequency-crowding problem, so that higher single- and two-qubit gate fidelities are achievable.
Second, we find that the charge dispersion that causes qubit frequency instability and dephasing can be efficiently suppressed by increasing $N$.
The size of the charge dispersion $\Delta\epsilon$ can be estimated from the tight-binding hopping amplitude between neighboring lattice sites in the potential landscape, which becomes exponentially small with respect to the height of the inter-lattice barrier.
In the example of a single junction, it was shown that $\Delta\epsilon \propto \exp(-\sqrt{8 E_\mathrm{J}/E_\mathrm{C}})$ \cite{Koch-PRA-2007}.
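As a quick illustration of this single-junction result (our own sketch, not taken from Ref.~\cite{Koch-PRA-2007}; $E_\mathrm{C}=0.3$\,GHz is an arbitrary illustrative value), the charge dispersion of the $0\!-\!1$ transition can be obtained by diagonalizing the standard charge-basis Hamiltonian at the two extreme gate charges:
\begin{verbatim}
import numpy as np

def f01(EJ, EC, ng, ncut=30):
    # H = 4 E_C (n - n_g)^2 - (E_J/2) sum_n (|n><n+1| + h.c.)
    n = np.arange(-ncut, ncut + 1)
    H = (np.diag(4 * EC * (n - ng) ** 2)
         - EJ / 2 * (np.eye(n.size, k=1) + np.eye(n.size, k=-1)))
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

EC = 0.3
for ratio in (10, 30, 50):                # E_J / E_C
    EJ = ratio * EC
    print(ratio, abs(f01(EJ, EC, 0.0) - f01(EJ, EC, 0.5)))
\end{verbatim}
The printed dispersion falls roughly as $\exp(-\sqrt{8E_\mathrm{J}/E_\mathrm{C}})$, as quoted above.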
It is more complicated for flux qubits due to the multi-dimensionality of the Hamiltonian.
In the quarton, a simulation of the full $N$-dimensional Hamiltonian shows that the suppression is even more efficient, because the barrier height scales with $N^2$ [Fig.~\ref{fig:parameter_tradespace}(c)].
Typically, $N\geqslant6$ is sufficient to suppress charge dispersion down to the kilohertz level.
This implies that it is not necessary to use $\sim\!\!100$ junctions to suppress the charge noise for better coherence~\cite{Manucharyan-Science-2009,Pop-Nature-2014}.
A more compact and easier-to-fabricate design can be done with the quarton.
There are other advantages of using a junction array.
For example, $T_1$ relaxation due to quasiparticle tunneling across an array junction may be improved by increasing $N$, since the corresponding matrix element $\bra{0} \sin (\frac{\varphi_i}{2}) \ket{1} \approx \bra{0} \sin (\frac{\phi}{2N}) \ket{1}$ scales as $1/N$ (it vanishes at $\varphi_\mathrm{e}=\pi$ for tunneling across the principal junction) \cite{Catelani-PRB-2011}.
However, it is also important to limit the array size, since more junctions lead to more decohering channels from the parasitic capacitance to ground and the Aharonov-Casher effect, as well as keeping the qubit loop area small to avoid excess flux noise.
In general, a trade-off has to be found.
Such an example is discussed within the context of optimizing coherence time given only a few noise sources \cite{mizel2019rightsizing}.
The enhanced controllability over qubit properties in the GFQ model allows more flexibility in qubit design.
For example, the transmon requires a large shunt capacitance to suppress charge dispersion.
This unavoidably lowers $E_\mathrm{C}$ and anharmonicity, as well as increases the qubit footprint.
With the introduction of the junction array in the GFQ, one gains more freedom in configuring the qubit frequency and anharmonicity while, at the same time, the junction array helps suppress charge dispersion.
\begin{figure}\label{fig:modality_compare}
\end{figure}
To demonstrate the concept, we implemented the quarton design with $N=8,16$.
As a practical matter, we have found it best to decide first on the target qubit frequency and its anharmonicity.
Since $\mathcal{A}/\omega_{01}$ mostly depends on $\gamma/N$, one may fix $\gamma/N$ before optimizing other parameters, simplifying the design process.
The circuit layout, fabrication process, and measurement setup are similar to those presented in our previous work \cite{Yan-NComms-2016}.
The aluminum metallization layer is patterned with square-shaped pads for the shunt capacitor and a half-wave-length transmission-line resonator for readout.
A slightly larger qubit loop, about $10\times20\,\mathrm{\mu m}^2$, is used here for housing the array junctions.
The junctions are made in the standard Dolan-bridge style.
We tested multiple samples with varying parameters.
The typical design parameters are $I_\mathrm{c}=15-40$\,nA, $C_\Sigma=20-30$\,fF, and $\gamma/N=0.85-1.1$.
Results are shown in Table I and Fig.~\ref{fig:modality_compare}.
Most samples have qubit frequencies
spread within 2-4\,GHz and anharmonicities above 800\,MHz, including some ideal cases like sample H ($\omega_{01}=3.4$\,GHz, $\mathcal{A}=1.9$\,GHz).
Fig.~\ref{fig:modality_compare}(b) shows that $\mathcal{A}/\omega_{01}$ ratios of these samples spread around the 1/3 line.
In comparison to transmon-type qubits and CSFQs which have much lower anharmonicities (200-300\,MHz and 500\,MHz respectively) and to fluxonium qubits whose qubit frequencies are consistently below 1\,GHz, the quarton qubits demonstrate a practically useful parameter regime where
trade-offs between qubit frequencies and anharmonicities can be made.
However, by comparing the predicted and experimentally inferred values of $\gamma/N$, we find that variation during fabrication and subsequent aging may cause significant fluctuations in actual values.
Qubit frequencies in many samples undershoot our target values ($\geqslant3$\,GHz), possibly due to junction-aging effects that reduce $I_\mathrm{c}$.
Studying and improving reproducibility will be a main objective in the future.
The quarton qubits show comparable $T_1$ times with respect to 2D transmon-type qubits and CSFQs [Fig.~\ref{fig:modality_compare}(c)].
We believe surface participation is the common key factor affecting coherence.
Fluxonium devices generally have longer $T_1$ times, in some cases on the order of 1\,ms \cite{Pop-Nature-2014}.
The enhancement is due to the suppressed dipole matrix element in the deep fluxon regime, and due to weaker low-frequency noise from Ohmic or super-Ohmic dielectric loss, i.e., the power spectral densities $S(\omega)\propto\omega^d$ ($d\geqslant1$) \cite{Nguyen_2019}.
The quarton qubits also show long spin-echo times $T_\mathrm{2Echo}$.
The highlighted samples in Table I approach the $T_\mathrm{2Echo}\!=\!2T_1$ limit, indicating a low residual thermal photon population in the readout cavity due to our optimized measurement setup \cite{yan2018distinguishing}.
To conclude, our GFQ framework facilitates the understanding of how key qubit properties are related to circuit parameters.
In particular, we find the effectiveness of $\gamma/N$ in tuning the ratio between anharmonicity and qubit frequency and the effectiveness in suppressing charge dispersion by increasing $N$.
We experimentally demonstrate how to take advantage of these findings by testing the quarton design, which simultaneously achieves a desirable qubit frequency, large anharmonicity, and high coherence while maintaining a compact design.
The configurable energy level structure alleviates the problem of frequency crowding, promising better two-qubit gate performance from schemes such as parametric gates \cite{mckay2016universal,caldwell2018parametrically}.
Future improvement in reproducibility can transform such designs into powerful building blocks in quantum information processing.
\begin{table*}[t]
\begin{tabular}{ccccccccccc}
\hline
Device & $N$ & $I_\mathrm{c}$ [nA] & $C_\mathrm{sh}$ [fF] & $\gamma/N$ & $\omega_{01}$ [GHz] & $\mathcal{A}$ [GHz] & $\mathcal{A}/\omega_{01}$ & $T_1$ [$\mu$s] & $T_\mathrm{2Echo}$ [$\mu$s] & Local Bias \\
\hline
\rowcolor{green!25}
A & 8 & 21 & 20 & 0.92 & 3.6 & 1.0 & 0.28 & $43.1\pm7.5$ & 70-100 & No \\
\hline
B & 8 & 21 & 30 & 0.95 & 2.8 & 0.8 & 0.29 & 23 & - & No \\
\hline
\rowcolor{green!25} C & 8 & 21 & 30 & 0.92 & 2.6 & 1.0 & 0.38 & $82.9\pm7.9$ & 100-125 & No \\
\hline
D & 8 & 18 & 30 & 0.92 & 2.4 & 0.9 & 0.38 & 20 & - & No \\
\hline
E & 8 & 18 & 30 & 0.93 & 2.0 & 1.0 & 0.50 & 50 & 12 & Yes \\
\hline
F & 8 & 40 & 20 & 0.93 & 3.7 & 1.6 & 0.43 & - & - & Yes \\
\hline
G & 8 & 40 & 20 & 0.98 & 4.7 & 1.2 & 0.26 & 10 & 7 & No \\
\hline
H & 8 & 40 & 20 & 1.0 & 3.4 & 1.9 & 0.56 & 23 & 15 & Yes \\
\hline
\rowcolor{green!25} I & 16 & 14 & 20 & 0.84 & 3.0 & 0.9 & 0.30 & $50.6\pm9.2$ & 110-140 & No \\
\hline
J & 16 & 14 & 20 & 1.09 & 3.8 & 0.6 & 0.16 & 30 & - & No \\
\hline
K & 16 & 27 & 20 & 0.88 & 2.6 & 1.0 & 0.38 & 30 & 20 & Yes \\
\hline
\rowcolor{blue!25} CSFQ & 2 & 60 & 50 & 1.2 & 4.7 & 0.5 & 0.11 & 35-55 & 70-90 & No \\ \hline \end{tabular} \vspace*{1mm} \caption[] {
Design parameters ($N$, $I_\mathrm{c}$, $C_\mathrm{sh}$, $\gamma/N$) and measurement results ($T_1$ and spin-echo dephasing time $T_\mathrm{2Echo}$) of quarton qubits.
The $\gamma/N$ ratios are designed to be in the vicinity of 1 with slight variations.
We note that the measured frequencies mostly have better agreement with a 30-40\% lower $I_\mathrm{c}$ than the designed value, possibly due to junction aging.
Highlighted are samples with repeated measurements of coherence times (typically 50-100 repetitions over $\sim$10 hours).
Results of the other samples lack statistical confidence and are presented for reference only.
The CSFQ results from Ref.~\cite{Yan-NComms-2016} are also listed for comparison.
Note that all quartons have much smaller shunt capacitance than CSFQ.
The reduced footprint promises better scalability. }\label{table1} \end{table*}
\end{document} |
\begin{document}
\date{\empty} \title{Corners and fundamental corners for the groups $\mr{Spin}(n,1)$} \author{Domagoj Kova\v cevi\' c and Hrvoje Kraljevi\' c, University of Zagreb \thanks{The authors were supported by the QuantiXLie Centre of Excellence, a project cofinanced by the Croatian Government and European Union through the European Regional Development Fund - the Competitiveness and Cohesion Operational Programme (Grant KK.01.1.1.01.0004).} \thanks{2010 Mathematics Subject Classification: Primary 20G05, Secondary 16S30}} \maketitle
\noindent\underline{Abstract.} We study corners and fundamental corners of the irreducible representations of the groups $G=\mr{Spin}(n,1)$ that are not elementary, i.e. that are equivalent to subquotients of reducible nonunitary principal series representations. For even $n$ we obtain results analogous to the results in [10] for the groups $\mr{SU}(n,1).$ In particular, we again get a bijection between the nonelementary part ${\hat G}^0$ of the unitary dual $\hat G$ and the unitary dual $\hat K.$ In the case of odd $n$ we get a bijection between ${\hat G}^0$ and a proper subset of $\hat K.$
\section{Introduction}
\indent{\bf 1. Elementary representations.} Let $G$ be a connected semisimple Lie group with finite center, $\mathfrak{g}_0$ its Lie algebra, $K$ its maximal compact subgroup, and $\mathfrak{g}_0=\mathfrak{k}_0\oplus\mathfrak{p}_0$ the corresponding Cartan decomposition of $\mathfrak{g}_0.$ Let\linebreak $P=MAN$ be a minimal parabolic subgroup of $G;$ here the Lie algebra $\mathfrak{a}_0$ of the subgroup $A$ is a Cartan subspace of $\mathfrak{p}_0,$ i.e. a Lie subalgebra of $\mathfrak{g}_0$ which is maximal among those contained in $\mathfrak{p}_0,$ $M=K\cap P$ is the centralizer of $\mathfrak{a}_0$ in $K,$ its Lie algebra $\mathfrak{m}_0$ is the centralizer of $\mathfrak{a}_0$ in $\mathfrak{k}_0,$ $N=\exp(\mathfrak{n}_0),$ where $\mathfrak{n}_0$ is the sum of root subspaces $\mathfrak{g}_0^{\alpha}$ with respect to some choice $\Delta^+(\mathfrak{g}_0,\mathfrak{a}_0)$ of positive restricted roots of the pair $(\mathfrak{g}_0,\mathfrak{a}_0).$ Denote by $\Delta_P$ the modular function of the group $P.$ Then $\Delta_P(m)=1$ for every $m\in M,$ $\Delta_P(n)=1$ for every $n\in N$ and for $H\in\mathfrak{a}_0$ we have $$
\Delta_P(\exp\,H)=\mathrm{e}^{\mr{Tr}(\mr{ad}\,H)|\mathfrak{n}_0}=\mathrm{e}^{2\delta(H)},\quad\delta=\frac{1}{2}\sum_{\alpha\in\Delta^+(\mathfrak{g}_0,\mathfrak{a}_0)}(\dim\,\mathfrak{g}_0^{\alpha})\alpha. $$ Thus, $$ \Delta_P(man)=\mathrm{e}^{2\delta(\log\,a)},\quad m\in M,\,a\in A,\,n\in N, $$
where $\log:A\longrightarrow\mathfrak{a}_0$ is the inverse map of the bijection $\exp|\mathfrak{a}_0:\mathfrak{a}_0\longrightarrow A.$ Let $\sigma$ be an irreducible unitary representation of the compact group $M$ on a finitedimensional unitary space ${\cal H}_{\sigma}.$ Let $\mathfrak{a}$ be the complexification of $\mathfrak{a}_0.$ For $\nu\in\mathfrak{a}^*$ let $a\mapsto a^{\nu}$ be the onedimensional representation of the Abelian group $A$ defined by $$ a^{\nu}=\mathrm{e}^{\nu(\log\,a)},\quad a\in A. $$ Define the representation $\sigma\otimes\nu$ of the group $P=MAN$ on the space ${\cal H}_{\sigma}$ by $$ (\sigma\otimes\nu)(man)=a^{\nu}\sigma(m),\quad m\in M,\,a\in A,\,n\in N. $$ Let $\pi^{\sigma,\nu}$ be the representation of $G$ induced by the representation $\sigma\otimes\nu.$ The space of the representation $\pi^{\sigma,\nu}$ is the Hilbert space ${\cal H}^{\sigma,\nu}$ of all (classes of) Haar$-$measurable functions $f:G\longrightarrow{\cal H}_{\sigma}$ such that $$ f(px)=\sqrt{\Delta_P(p)}(\sigma\otimes\nu)(p)f(x)\quad\forall p\in P,\,\,\forall x\in G, $$ and such that $$
\int_K\|f(k)\|_{{\cal H}_{\sigma}}^2\mathrm{d}\mu(k)<+\infty, $$
where $\mu$ is the normalized Haar measure on $K$ and $\|\,\cdot\,\|_{{\cal H}_{\sigma}}$ is the norm on the unitary space ${\cal H}_{\sigma}.$ The representation $\pi^{\sigma,\nu}$ is given by the right action of $G:$ $$ \left[\pi^{\sigma,\nu}(x)f\right](y)=f(yx),\quad f\in{\cal H}^{\sigma,\nu},\,x,y\in G. $$ The representations $\pi^{\sigma,\nu},$ $\sigma\in\hat{M},$ $\nu\in\mathfrak{a}^*,$ are called {\bf elementary representations} of $G.$\\ \indent Since $\Delta_P(man)=a^{2\delta},$ the condition $f(px)=\sqrt{\Delta_P(p)}(\sigma\otimes\nu)(p)f(x)$ can be written as $$ f(manx)=a^{\nu+\delta}\sigma(m)f(x),\quad m\in M,\,a\in A,\,n\in N,\,x\in G. $$ \indent From classical results of Harish$-$Chandra we know that all elementary representations are admissible and of finite length and that every completely irreducible admissible representation of $G$ on a Banach space is infinitesimally equivalent to an irreducible subquotient of an elementary representation. Infinitesimal equivalence of completely irreducible admissible representations means algebraic equivalence of the corresponding $(\mathfrak{g},K)-$modules. We will denote by $\wideparen{G}$ the set of all infinitesimal equivalence classes of completely irreducible admissible representations of $G$ on Banach spaces. $\wideparen{G}^e$ will denote the set of infinitesimal equivalence classes of irreducible elementary representations and $\wideparen{G}^0=\wideparen{G}\setminus\wideparen{G}^e$ the set of infinitesimal equivalence classes of irreducible subquotients of reducible elementary representations. It is also due to Harish$-$Chandra that every irreducible unitary representation is admissible and that infinitesimal equivalence between such representations is equivalent to their unitary equivalence. Thus the unitary dual $\hat{G}$ of $G$ can be regarded as a subset of $\wideparen{G}.$ We denote $\hat{G}^e=\hat{G}\cap\wideparen{G}^e$ and $\hat{G}^0=\hat{G}\cap\wideparen{G}^0=\hat{G}\setminus\hat{G}^e.$\\
\indent{\bf 2. Infinitesimal characters.} For a finitedimensional complex Lie algebra $\mathfrak{g}$ we denote by ${\cal U}(\mathfrak{g})$ the universal enveloping algebra of $\mathfrak{g}$ and by $\mathfrak{Z}(\mathfrak{g})$ the center of ${\cal U}(\mathfrak{g}).$ Any unital homomorphism $\chi:\mathfrak{Z}(\mathfrak{g})\longrightarrow\mathbb{C}$ is called {\bf infinitesimal character} of $\mathfrak{g}.$ We denote by $\hat{\mathfrak{Z}}(\mathfrak{g})$ the set of all infinitesimal characters of $\mathfrak{g}.$ If $\pi$ is a representation of $\mathfrak{g}$ on a vector space $V$ we say that $\chi\in\hat{\mathfrak{Z}}(\mathfrak{g})$ is the infinitesimal character of the representation $\pi$ (or of the corresponding ${\cal U}(\mathfrak{g})-$module $V)$ if $$ \pi(z)v=\chi(z)v\qquad\forall z\in\mathfrak{Z}(\mathfrak{g}),\,\,\forall v\in V. $$ Let now $\mathfrak{g}$ be semisimple and let $\mathfrak{h}$ be its Cartan subalgebra. Denote by $\Delta=\Delta(\mathfrak{g},\mathfrak{h})\subseteq\mathfrak{h}^*$ the root system of the pair $(\mathfrak{g},\mathfrak{h}),$ by $W=W(\mathfrak{g},\mathfrak{h})$ its Weyl group, by $\Delta^+$ a choice of positive roots in $\Delta,$ by $\mathfrak{g}^{\alpha}$ the root subspace of $\mathfrak{g}$ for a root $\alpha\in\Delta,$ and $$ \mathfrak{n}=\sum_{\alpha\in\Delta^+}\dotplus\,\mathfrak{g}^{\alpha}\qquad\mathrm{and}\qquad\overline{\mathfrak{n}}=\sum_{\alpha\in\Delta^+}\dotplus\,\mathfrak{g}^{-\alpha}. $$ Then we have direct sum decomposition $$ {\cal U}(\mathfrak{g})={\cal U}(\mathfrak{h})\,\dotplus\,(\mathfrak{n}\,{\cal U}(\mathfrak{g})+{\cal U}(\mathfrak{g})\,\overline{\mathfrak{n}}). $$
Denote by $\eta:{\cal U}(\mathfrak{g})\longrightarrow{\cal U}(\mathfrak{h})$ the corresponding projection. By a result of Harish$-$Chandra the restriction $\eta|\mathfrak{Z}(\mathfrak{g})$ is an injective homomorphism of $\mathfrak{Z}(\mathfrak{g})$ into the algebra ${\cal U}(\mathfrak{h}).$ Since the Lie algebra $\mathfrak{h}$ is Abelian, the algebra ${\cal U}(\mathfrak{h})$ identifies with the symmetric algebra ${\cal S}(\mathfrak{h})$ over $\mathfrak{h},$ thus with the polynomial algebra ${\cal P}(\mathfrak{h}^*)$ over the dual space $\mathfrak{h}^*$ of $\mathfrak{h}.$ Therefore $\eta|\mathfrak{Z}(\mathfrak{g})$ is a monomorphism of $\mathfrak{Z}(\mathfrak{g})$ into ${\cal P}(\mathfrak{h}^*).$ This monomorphism depends on the choice of $\Delta^+.$ This dependence is compensated by the automorphism $\gamma=\gamma_{\Delta^+}$ of the algebra ${\cal U}(\mathfrak{h})={\cal P}(\mathfrak{h}^*)$ defined by $$ (\gamma(u))(\lambda)=u(\lambda-\rho),\,\,\lambda\in\mathfrak{h}^*,\,\,u\in{\cal U}(\mathfrak{h})={\cal P}(\mathfrak{h}^*),\,\,\mathrm{where}\,\,\rho=\rho_{\Delta^+}=\frac{1}{2}\sum_{\alpha\in\Delta^+}\alpha. $$
Now the restriction $\omega=(\gamma\circ\eta)|\mathfrak{Z}(\mathfrak{g})$ is independent of the choice of $\Delta^+$ and is a unital isomorphism of the algebra $\mathfrak{Z}(\mathfrak{g})$ onto the algebra ${\cal P}(\mathfrak{h}^*)^W$ of polynomial functions on $\mathfrak{h}^*$ invariant under the Weyl group $W=W(\mathfrak{g},\mathfrak{h})$ of the root system $\Delta=\Delta(\mathfrak{g},\mathfrak{h}).$ $\omega$ is called the {\bf Harish$-$Chandra isomorphism.} By evaluation at the points of $\mathfrak{h}^*$ one obtains all infinitesimal characters: for $\lambda\in\mathfrak{h}^*$ we define the infinitesimal character $\chi_{\lambda}\in\hat{\mathfrak{Z}}(\mathfrak{g})$ by $$ \chi_{\lambda}(z)=(\omega(z))(\lambda)=(\eta(z))(\lambda-\rho),\qquad z\in\mathfrak{Z}(\mathfrak{g}). $$ Then $\lambda\mapsto\chi_{\lambda}$ is a surjection of $\mathfrak{h}^*$ onto $\hat{\mathfrak{Z}}(\mathfrak{g})$ and for $\lambda,\mu\in\mathfrak{h}^*$ we have $\chi_{\lambda}=\chi_{\mu}$ if and only if $\mu=w\lambda$ for some $w\in W.$\\ \indent A choice of an ordered basis $(H_1,\ldots,H_{\ell})$ of $\mathfrak{h}$ identifies the dual space $\mathfrak{h}^*$ with $\mathbb{C}^{\ell}:$ $\lambda\in\mathfrak{h}^*$ identifies with the $\ell-$tuple $(\lambda(H_1),\ldots,\lambda(H_{\ell}))\in\mathbb{C}^{\ell}.$ Now, if $\mathfrak{h}^{\prime}$ is another Cartan subalgebra of $\mathfrak{g},$ then there exists an inner automorphism $\varphi$ of $\mathfrak{g}$ such that $\mathfrak{h}^{\prime}=\varphi(\mathfrak{h}).$ $\varphi$ carries $(H_1,\ldots,H_{\ell})$ to a basis $(H_1^{\prime},\ldots,H_{\ell}^{\prime})$ of $\mathfrak{h}^{\prime}$ which we use for the identification of ${\mathfrak{h}^{\prime}}^*$ with $\mathbb{C}^{\ell}.$ If an $\ell-$tuple $(c_1,\ldots,c_{\ell})\in\mathbb{C}^{\ell}$ corresponds to $\lambda\in\mathfrak{h}^*$ and to $\lambda^{\prime}\in{\mathfrak{h}^{\prime}}^*$ then the corresponding infinitesimal characters are the same: $\chi_{\lambda}=\chi_{\lambda^{\prime}}.$\\
\indent We return now to the notations of {\bf 1.} If $\mathfrak{l}_0$ is any real Lie algebra (or its subspace) we will denote by $\mathfrak{l}$ its complexification. It is well known that an elementary representation has infinitesimal character. We are going to write down the formula for the infinitesimal character of the elementary representation $\pi^{\sigma,\nu},$ $\sigma\in\hat{M},$ $\nu\in\mathfrak{a}^*.$ Let $\mathfrak{d}_0$ be a Cartan subalgebra of the reductive Lie subalgebra $\mathfrak{m}_0.$ Denote by $\Delta_{\mathfrak{m}}=\Delta(\mathfrak{m},\mathfrak{d})\subseteq\mathfrak{d}^*$ the root system of the pair $(\mathfrak{m},\mathfrak{d}).$ Choose a subset $\Delta_{\mathfrak{m}}^+$ of positive roots in $\Delta_{\mathfrak{m}}$ and set $$ \delta_{\mathfrak{m}}=\rho_{\Delta_{\mathfrak{m}}^+}=\frac{1}{2}\sum_{\alpha\in\Delta_{\mathfrak{m}}^+}\alpha. $$ Denote by $\lambda_{\sigma}\in\mathfrak{d}^*$ the highest weight of the representation $\sigma$ with respect to $\Delta_{\mathfrak{m}}^+.$ Now, $\mathfrak{h}_0=\mathfrak{d}_0\dotplus\mathfrak{a}_0$ is a Cartan subalgebra of $\mathfrak{g}_0$ and its complexification $\mathfrak{h}=\mathfrak{d}\dotplus\mathfrak{a}$ is a Cartan subalgebra of $\mathfrak{g}.$ Then the infinitesimal character of the elementary representation $\pi^{\sigma,\nu}$ is $\chi_{\Lambda(\sigma,\nu)},$ where $\Lambda(\sigma,\nu)\in\mathfrak{h}^*$ is given by $$
\Lambda(\sigma,\nu)|\mathfrak{d}=\lambda_{\sigma}+\delta_{\mathfrak{m}}\qquad\mathrm{and}\qquad\Lambda(\sigma,\nu)|\mathfrak{a}=\nu. $$
\indent{\bf 3. Corners and fundamental corners} Suppose now that the rank of $\mathfrak{g}$ is equal to the rank of $\mathfrak{k}.$ Choose a Cartan subalgebra $\mathfrak{t}_0$ of $\mathfrak{k}_0.$ It is then also Cartan subalgebra of $\mathfrak{g}_0$ and the complexification $\mathfrak{t}$ is Cartan subalgebra of the complexifications $\mathfrak{k}$ and $\mathfrak{g}.$ Let $\Delta_K=\Delta(\mathfrak{k},\mathfrak{t})\subseteq\Delta=\Delta(\mathfrak{g},\mathfrak{t})$ be the root systems of the pairs $(\mathfrak{k},\mathfrak{t})$ and $(\mathfrak{g},\mathfrak{t})$ and $W_K=W(\mathfrak{k},\mathfrak{t})\subseteq W=W(\mathfrak{g},\mathfrak{t})$ the corresponding Weyl groups. Choose positive roots $\Delta_K^+$ in $\Delta_K$ and let $C$ be the corresponding $W_K-$Weyl chamber in $\mathfrak{t}_{\mathbb{R}}^*=i\mathfrak{t}_0^*.$ Denote by ${\cal D}$ the set of all $W-$Weyl chambers in $i\mathfrak{t}_0^*$ contained in $C.$ For $D\in{\cal D}$ we denote by $\Delta^D$ the corresponding positive roots in $\Delta$ and let $\Delta_P^D$ be the noncompact roots in $\Delta^D,$ i.e. $\Delta_P^D=\Delta^D\setminus\Delta_K^+.$ Set $$ \rho_K=\frac{1}{2}\sum_{\alpha\in\Delta_K^+}\alpha\qquad\mathrm{and}\qquad\rho_P^D=\frac{1}{2}\sum_{\alpha\in\Delta_P^D}\alpha. $$
\indent Recall some definitions from [10]. For a representation $\pi$ of $G$ and for $q\in\hat{K}$ we denote by $(\pi:q)$ the multiplicity of $q$ in $\pi|K.$ The $K-${\bf spectrum} $\Gamma(\pi)$ of a representation $\pi$ of $G$ is defined by $$ \Gamma(\pi)=\{q\in\hat{K};\,\,(\pi:q)>0\}. $$ We identify $q\in\hat{K}$ with its maximal weight in $i\mathfrak{t}_0^*$ with respect to $\Delta_K^+.$ For $q\in\Gamma(\pi)$ and for $D\in{\cal D}$ we say: \begin{enumerate} \item[$(i)$] $q$ is a $D-${\bf corner} for $\pi$ if $q-\alpha\not\in\Gamma(\pi)$ $\forall\alpha\in\Delta_P^D;$ \item[$(ii)$] $q$ is a $D-${\bf fundamental corner} for $\pi$ if it is a $D-$corner for $\pi$ and $\chi_{q+\rho_K-\rho_P^D}$ is the infinitesimal character of $\pi;$ \item[$(iii)$] $q$ is {\bf fundamental corner} for $\pi$ if it is a $D-$fundamental corner for $\pi$ for some $D\in{\cal D}.$ \end{enumerate} \indent In [10] for the case of the groups $G=SU(n,1)$ and $K=U(n)$ the following results were proved: \begin{enumerate} \item[{\bf 1.}] Elementary representation $\pi^{\sigma,\nu}$ is reducible if and only if there exist $q\in\Gamma(\pi^{\sigma,\nu})$ and $D\in{\cal D}$ such that $\chi_{q+\rho_K-\rho_P^D}$ is the infinitesimal character of $\pi^{\sigma,\nu},$ i.e. if and only if $\Lambda(\sigma,\nu)=w(q+\rho_K-\rho_P^D)$ for some $w\in W.$ \item[{\bf 2.}] Every $\pi\in\wideparen{G}^0$ has either one or two fundamental corners. \item[{\bf 3.}] $\hat{G}^0=\{\pi\in\wideparen{G}^0;\,\,\pi\,\,\mathrm{has}\,\,\mathrm{exactly}\,\,\mathrm{one}\,\,\mathrm{fundamental}\,\,\mathrm{corner}\}.$ \item[{\bf 4.}] For $\pi\in\hat{G}^0$ denote by $q(\pi)$ the unique fundamental corner of $\pi.$ Then $\pi\mapsto q(\pi)$ is a bijection of $\hat{G}^0$ onto $\hat{K}.$ \end{enumerate} \indent In this paper we investigate the analogous notions and results for the groups $\mr{Spin}(n,1).$
\section{The groups $\mr{Spin}(n,1)$}
\indent In the rest of the paper $G=\mr{Spin}(n,1),$ $n\geq3,$ is the connected and simply connected real Lie group with simple real Lie algebra $$ \mathfrak{g}_0=\mathfrak{s}\mathfrak{o}(n,1)=\{A\in\mathfrak{g}\mathfrak{l}(n+1,\mathbb{R});\,\,A^t=-\Gamma A\Gamma\},\qquad\Gamma=\left[\begin{array}{cc}I_n&0\\0&-1\end{array}\right], $$ i.e. $$ \mathfrak{g}_0=\left\{\left[\begin{array}{cc}B&a\\alpha^t&0\end{array}\right];\,\,B\in\mathfrak{s}\mathfrak{o}(n),\,\,a\in M_{n,1}(\mathbb{R})\right\}. $$ Here and in the rest of the paper we use the usual notation: \begin{enumerate} \item[$\bullet$] For $n,m\in\mathbb{N}$ $M_{m,n}(K)$ is the vector space of $m\times n$ matrices over a field $K.$ \item[$\bullet$] $\mathfrak{g}\mathfrak{l}(n,K)$ is $M_{n,n}(K),$ considered as a Lie algebra with commutator $[A,B]=AB-BA.$ \item[$\bullet$] $\mr{GL}(n,K)$ is the group of invertible matrices in $M_{n,n}(K).$ \item[$\bullet$] $A^t$ is the transpose of a matrix $A.$ \item[$\bullet$] $\mathfrak{s}\mathfrak{o}(n,K)=\{B\in\mathfrak{g}\mathfrak{l}(n,K);\,\,B^t=-B\}.$ \item[$\bullet$] $\mathfrak{s}\mathfrak{o}(n)=\mathfrak{s}\mathfrak{o}(n,\mathbb{R}).$ \item[$\bullet$] $\mr{SO}(n)=\{A\in\mr{GL}(n,\mathbb{R});\,\,A^{-1}=A^t,\,\,\det\,A=1\}.$ \end{enumerate}
\indent For the group $G=\mr{Spin}(n,1)$ we choose Cartan decomposition $\mathfrak{g}_0=\mathfrak{k}_0\oplus\mathfrak{p}_0$ as follows $$ \mathfrak{k}_0=\left\{\left[\begin{array}{cc}B&0\\0&0\end{array}\right];\,\,B\in\mathfrak{s}\mathfrak{o}(n)\right\},\qquad\mathfrak{p}_0=\left\{\left[\begin{array}{cc}0&a\\alpha^t&0\end{array}\right];\,\,a\in M_{n,1}(\mathbb{R})\right\}. $$ The complexifications are: $$ \mathfrak{g}=\mathfrak{s}\mathfrak{o}(n,1,\mathbb{C})=\{A\in\mathfrak{g}\mathfrak{l}(n+1,\mathbb{C});\,\,A^t=-\Gamma A\Gamma\}, $$ i.e. $$ \mathfrak{g}=\left\{\left[\begin{array}{cc}B&a\\alpha^t&0\end{array}\right];\,\,B\in\mathfrak{s}\mathfrak{o}(n,\mathbb{C}),\,\,a\in M_{n,1}(\mathbb{C})\right\}, $$ $$ \mathfrak{k}=\left\{\left[\begin{array}{cc}B&0\\0&0\end{array}\right];\,\,B\in\mathfrak{s}\mathfrak{o}(n,\mathbb{C})\right\},\qquad\mathfrak{p}=\left\{\left[\begin{array}{cc}0&a\\alpha^t&0\end{array}\right];\,\,a\in M_{n,1}(\mathbb{C})\right\}. $$ $\mr{Spin}(n,1)$ is double cover of the identity component $\mr{SO}_0(n,1)$ of the Lie group $$ \mr{SO}(n,1)=\{A\in\mr{GL}(n+1,\mathbb{R});\,\,A^{-1}=\Gamma A^t\Gamma,\,\,\det\,A=1\}. $$ The analytic subgroup $K\subset G$ whose Lie algebra is $\mathfrak{k}_0$ is a maximal compact subgroup of $G$ isomorphic with the double cover $\mr{Spin}(n)$ of the group $\mr{SO}(n).$\\ \indent Now we choose Cartan subalgebras. $E_{p,q}$ will denote the $(n+1)\times(n+1)$ matrix with $(p,q)-$entry equal $1$ and all the other entries $0.$ Set $$ I_{p,q}=E_{p,q}-E_{q,p},\qu1\leq p,q\leq n,\quad p\not=q, $$ and $$ B_p=E_{p,n+1}+E_{n+1,p},\qu1\leq p\leq n. $$ Then $\{I_{p,q};\,\,1\leq q<p\leq n\}$ is a basis of the real Lie algebra $\mathfrak{k}_0$ and of its complexification $\mathfrak{k}$ and $\{B_p;\,\,1\leq p\leq n\}$ is a basis of the real subspace $\mathfrak{p}_0$ and of its complexification $\mathfrak{p}.$ Now $\mathfrak{t}_0=\mr{span}_{\mathbb{R}}\left\{I_{2p,2p-1};\,\,1\leq p\leq\frac{n}{2}\right\}$ is a Cartan subalgebra of $\mathfrak{k}_0$ and its complexification $\mathfrak{t}=\mr{span}_{\mathbb{C}}\left\{I_{2p,2p-1};\,\,1\leq p\leq\frac{n}{2}\right\}$ is a Cartan subalgebra of $\mathfrak{k}.$\\ \indent We consider now separately two cases: $n$ even and $n$ odd.
\begin{center} {\bf $n$ even, $n=2k$} \end{center}
\indent In this case $\mathfrak{t}_0$ is also a Cartan subalgebra of $\mathfrak{g}_0$ and $\mathfrak{t}$ is a Cartan subalgebra of $\mathfrak{g}.$ Set $$ H_p=-iI_{2p,2p-1},\qqu1\leq p\leq k. $$ Dual space $\mathfrak{t}^*$ identifies with $\mathbb{C}^k$ as follows: $$ \mathfrak{t}^*\ni\lambda=(\lambda(H_1),\ldots,\lambda(H_k))\in\mathbb{C}^k. $$ Let $\{\alpha_1,\ldots,\alpha_k\}$ be the canonical basis of $\mathbb{C}^k=\mathfrak{t}^*.$ The root system of the pair $(\mathfrak{g},\mathfrak{t})$ is $$ \Delta=\Delta(\mathfrak{g},\mathfrak{t})=\{\pm\alpha_p\pm\alpha_q;\,\,1\leq p,q\leq k,\,\,p\not=q\}\cup\{\pm\alpha_p;\,\,1\leq p\leq k\}. $$ The Weyl group $W$ of $\Delta$ consists of all permutations of the coordinates combined with multiplying some coordinates with $-1:$ $$ W=\mathbb{Z}_2^k\rtimes S_k=\{(\varepsilon,\sigma);\,\,\varepsilon\in\mathbb{Z}_2^k,\,\,\sigma\in S_k\}, $$ where $\mathbb{Z}_2$ is the multiplicative group $\{1,-1\}$ and $S_k$ is the group of permutations of $\{1,\ldots,k\}.$ $(\varepsilon,\sigma)\in W$ acts on $\mathfrak{t}^*=\mathbb{C}^k$ as follows: $$ (\varepsilon,\sigma)(\lambda_1,\lambda_2,\ldots,\lambda_k)=(\varepsilon_1\lambda_{\sigma(1)},\varepsilon_2\lambda_{\sigma(2)},\ldots,\varepsilon_k\lambda_{\sigma(k)}). $$ \indent The root system $\Delta_K$ of the pair $(\mathfrak{k},\mathfrak{t})$ is $\{\pm\alpha_p\pm\alpha_q;\,\,p\not=q\}.$ We choose positive roots in $\Delta_K:$ $$ \Delta_K^+=\{\alpha_p\pm\alpha_q;\,\,1\leq p<q\leq k\}. $$ The corresponding Weyl chamber in $\mathbb{R}^k=i\mathfrak{t}_0^*$ is $$
C=\{\lambda\in\mathbb{R}^k;\,\,(\lambda|\gamma_j)>0,\,1\leq j\leq k\}=\{\lambda\in\mathbb{R}^k;\,\lambda_1>\cdots>\lambda_{k-1}>|\lambda_k|>0\}, $$ and its closure is $$
\overline{C}=\{\lambda\in\mathbb{R}^k;\,(\lambda|\gamma_j)\geq0,\,1\leq j\leq k\}=\{\lambda\in\mathbb{R}^k;\,\lambda_1\geq\cdots\geq\lambda_{k-1}\geq|\lambda_k|\}. $$ The Weyl group $W_K$ of the root system $\Delta_K$ is the subgroup of $W$ consisting of all $(\varepsilon,\sigma)$ with even number of $\varepsilon_j=-1:$ $$ W_K=\{(\varepsilon,\sigma)\in W;\,\,\varepsilon_1\varepsilon_2\cdots\varepsilon_k=1\}\simeq\mathbb{Z}_2^{k-1}\rtimes S_k. $$ We parametrize now the equivalence classes of irreducible finitedimensional representations of the Lie algebra $\mathfrak{k}$ (i.e. the unitary dual $\hat{K}$ of the group $K=\mr{Spin}(2k)\,)$ by identifying them with the corresponding highest weights. Thus $$ \begin{array}{c}
\hat{K}=\left\{(m_1,\ldots,m_k)\in\mathbb{Z}^k\cup\left(\frac{1}{2}+\mathbb{Z}\right)^k;\,\,m_1\geq m_2\geq\cdots\geq m_{k-1}\geq|m_k|\right\}. \end{array} $$
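\indent For instance, in the lowest even case $n=4,$ $k=2,$ the above data read $$ \Delta=\{\pm\alpha_1\pm\alpha_2,\pm\alpha_1,\pm\alpha_2\},\qquad\Delta_K^+=\{\alpha_1-\alpha_2,\alpha_1+\alpha_2\},\qquad\overline{C}=\{\lambda\in\mathbb{R}^2;\,\,\lambda_1\geq|\lambda_2|\}, $$ $$ \hat{K}=\left\{(m_1,m_2)\in\mathbb{Z}^2\cup\left(\frac{1}{2}+\mathbb{Z}\right)^2;\,\,m_1\geq|m_2|\right\}, $$ and $W_K\simeq\mathbb{Z}_2\rtimes S_2$ is of order $4.$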
\begin{center} {\bf $n$ odd, $n=2k+1$} \end{center}
\indent Now $\mathfrak{t}_0$ is not a Cartan subalgebra of $\mathfrak{g}_0.$ Set $$ H=B_n=B_{2k+1}=E_{2k+1,2k+2}+E_{2k+2,2k+1},\quad\mathfrak{a}_0=\mathbb{R} H,\quad\mathfrak{h}_0=\mathfrak{t}_0\dotplus\mathfrak{a}_0. $$ Then $\mathfrak{h}_0$ is a Cartan subalgebra of $\mathfrak{g}_0$ and all the other Cartan subalgebras of $\mathfrak{g}_0$ are $\mr{Int}(\mathfrak{g}_0)-$conjugated with $\mathfrak{h}_0.$ The ordered basis $(H_1,\ldots,H_k,H)$ of the complexification $\mathfrak{h}$ of $\mathfrak{h}_0$ is used for the identification of $\mathfrak{h}^*$ with $\mathbb{C}^{k+1}:$ $$ \mathfrak{h}^*\ni\lambda=(\lambda(H_1),\ldots,\lambda(H_k),\lambda(H))\in\mathbb{C}^{k+1}. $$ $\mathfrak{t}^*$ identifies with $\mathbb{C}^k$ through the ordered basis $(H_1,\ldots,H_k)$ of $\mathfrak{t}$ and $\mathfrak{a}^*$ identifies with $\mathbb{C}$ through $H:$ $$ \mathfrak{t}^*\ni\mu=(\mu(H_1),\ldots,\mu(H_k))\in\mathbb{C}^k,\qquad\mathfrak{a}^*\ni\nu=\nu(H)\in\mathbb{C}. $$ Furthermore, $\mathfrak{t}^*$ and $\mathfrak{a}^*$ are identified with subspaces of $\mathfrak{h}^*$ as follows: $$
\mathfrak{t}^*=\{\lambda\in\mathfrak{h}^*;\,\,\lambda|\mathfrak{a}=0\}=\{\lambda\in\mathbb{C}^{k+1};\,\,\lambda_{k+1}=0\}, $$ $$
\mathfrak{a}^*=\{\lambda\in\mathfrak{h}^*;\,\,\lambda|\mathfrak{t}=0\}=\{(0,\ldots,0,\nu);\,\,\nu\in\mathbb{C}\}. $$ \indent Let $\{\alpha_1,\ldots,\alpha_{k+1}\}$ be the canonical basis of $\mathbb{C}^{k+1}.$ The root system\linebreak $\Delta=\Delta(\mathfrak{g},\mathfrak{h})$ of the pair $(\mathfrak{g},\mathfrak{h})$ is $$ \Delta=\{\pm\alpha_p\pm\alpha_q;\,\,1\leq p,q\leq k+1,\,\,p\not=q\}. $$ The Weyl group $W=W(\mathfrak{g},\mathfrak{h})$ consists of all permutations of coordinates combined with multiplying even number of coordinates with $-1:$ $$ W=\mathbb{Z}_2^k\rtimes S_{k+1}=\{(\varepsilon,\sigma);\,\,\varepsilon\in\mathbb{Z}_2^{k+1},\,\,\varepsilon_1\cdots\varepsilon_{k+1}=1,\,\,\sigma\in S_{k+1}\}. $$ The root system $\Delta_K=\Delta(\mathfrak{k},\mathfrak{t})$ of the pair $(\mathfrak{k},\mathfrak{t})$ is $$ \Delta_K=\{\pm\alpha_p\pm\alpha_q;\,\,1\leq p,q\leq k,\,\,p\not=q\}\cup\{\pm\alpha_p;\,\,1\leq p\leq k\}. $$ Choose positive roots in $\Delta_K$ as follows: $$ \Delta_K^+=\{\alpha_p\pm\alpha_q;\,\,1\leq p<q\leq k\}\cup\{\alpha_p;\,\,1\leq p\leq k\}. $$ The corresponding Weyl chamber in $\mathbb{R}^k=i\mathfrak{t}_0^*$ is $$
C=\{\lambda\in\mathbb{R}^k;\,(\lambda|\gamma_j)>0,\,1\leq j\leq k\}=\{\lambda\in\mathbb{R}^k;\,\lambda_1>\cdots>\lambda_k>0\} $$ and its closure is $$
\overline{C}=\{\lambda\in\mathbb{R}^k;\,(\lambda|\gamma_j)\geq0,\,1\leq j\leq k\}=\{\lambda\in\mathbb{R}^k;\,\lambda_1\geq\cdots\geq\lambda_k\geq0\}. $$ The dual $\hat{K}$ is again identified with the set of highest weights of irreducible representations. Thus: $$ \begin{array}{c} \hat{K}=\left\{q=(m_1,\ldots,m_k)\in\mathbb{Z}_+^k\cup\left(\frac{1}{2}+\mathbb{Z}_+\right)^k;\,\,m_1\geq m_2\geq\cdots\geq m_k\right\}. \end{array} $$
\begin{center} {\bf Elementary representations of the groups $\mr{Spin}(n,1)$} \end{center}
\indent Regardless of the parity of $n$ we put $$ H=B_n=E_{n,n+1}+E_{n+1,n},\qquad\mathfrak{a}_0=\mathbb{R} H. $$ Then $\mathfrak{a}_0$ is maximal among Abelian subalgebras of $\mathfrak{g}_0$ contained in $\mathfrak{p}_0.$ As we already said, if $n$ is odd, $n=2k+1,$ then $\mathfrak{h}_0=\mathfrak{t}_0\dotplus\mathfrak{a}_0$ is a Cartan subalgebra of $\mathfrak{g}_0$ and all the other Cartan subalgebras are $\mr{Int}(\mathfrak{g}_0)-$conjugated to $\mathfrak{h}_0.$ If $n$ is even, $n=2k,$ set $$ \mathfrak{h}_0=\mr{span}_{\mathbb{R}}\{iH_1,\ldots,iH_{k-1},H\}. $$ It is a Cartan subalgebra of $\mathfrak{g}_0.$ In this case $\mathfrak{g}_0$ has two $\mr{Int}(\mathfrak{g}_0)-$conjugacy classes of Cartan subalgebras; $\mathfrak{h}_0$ and $\mathfrak{t}_0$ are their representatives. Their complexifications $\mathfrak{h}$ and $\mathfrak{t}$ are $\mr{Int}(\mathfrak{g})-$conjugated. Explicitly, the matrix $$ A=\left[\begin{array}{ccc}\frac{1}{\sqrt{2}}P_k&\frac{1}{\sqrt{2}}P_k&-ie_k\\-\frac{1}{\sqrt{2}}Q_k&\frac{1}{\sqrt{2}}I_k&0_k\\-\frac{i}{\sqrt{2}}e_k^t&\frac{i}{\sqrt{2}}e_k^t&0\end{array}\right]\in\mr{SO}(2k,1,\mathbb{C}), $$ where $P_k=I_k-E_{k,k}=\mathrm{diag}(1,\ldots,1,0),$ $Q_k=I_k-2E_{k,k}=\mathrm{diag}(1,\ldots,1,-1),$ $e_k\in M_{k,1}(\mathbb{C})$ is given by $e_k^t=[0\cdots0\,1]$ and $0_k$ is the zero matrix in $M_{k,1}(\mathbb{C}),$ has the properties $$ AH_jA^{-1}=H_j,\qu1\leq j\leq k-1,\quad\mathrm{and}\quad AH_kA^{-1}=H; $$ thus, $A\mathfrak{t} A^{-1}=\mathfrak{h}.$ As we mentioned before, this means that the parameters from $\mathbb{C}^k=\mathfrak{h}^*=\mathfrak{t}^*$ of the infinitesimal characters obtained through the two Harish$-$Chandra isomorphisms $\mathfrak{Z}(\mathfrak{g})\longrightarrow{\cal P}(\mathfrak{h}^*)^W$ and $\mathfrak{Z}(\mathfrak{g})\longrightarrow{\cal P}(\mathfrak{t}^*)^W$ coincide if the identifications of $\mathfrak{h}^*$ and $\mathfrak{t}^*$ with $\mathbb{C}^k$ are done through the two ordered bases $(H_1,\ldots,H_{k-1},H)$ of $\mathfrak{h}$ and $(H_1,\ldots,H_{k-1},H_k)$ of $\mathfrak{t}.$\\
\indent For both cases, $n$ even and $n$ odd, $\mathfrak{m}_0$ (the centralizer of $\mathfrak{a}_0$ in $\mathfrak{k}_0)$ is the subalgebra of all matrices in $\mathfrak{g}_0$ with the last two rows and columns $0.$ The subgroup $M$ is isomorphic to $\mr{Spin}(n-1).$ A Cartan subalgebra of $\mathfrak{m}_0$ is $$ \mathfrak{d}_0=\mathfrak{t}_0\cap\mathfrak{m}_0=\mr{span}_{\mathbb{R}}\{iH_1,\ldots,iH_{k-1}\},\qquad k=\left[\frac{n}{2}\right]. $$ The elements of $\hat{M}$ are identified with their highest weights. For $n$ even, $n=2k,$ we have $$ \begin{array}{c} \hat{M}=\left\{(n_1,\ldots,n_{k-1})\in\mathbb{Z}_+^{k-1}\cup\left(\frac{1}{2}+\mathbb{Z}_+\right)^{k-1};\,\,n_1\geq n_2\geq\cdots\geq n_{k-1}\geq0\right\} \end{array} $$ and for $n$ odd, $n=2k+1,$ we have $$ \begin{array}{c}
\hat{M}=\left\{(n_1,\ldots,n_k)\in\mathbb{Z}^k\cup\left(\frac{1}{2}+\mathbb{Z}\right)^k;\,\,n_1\geq n_2\geq\cdots\geq n_{k-1}\geq|n_k|\right\}. \end{array} $$ The branching rules for the restriction of representations of $K$ to the subgroup $M$ are the following:\\ \indent If $n$ is even, $n=2k,$ we have $$
(m_1,\ldots,m_k)|M=\bigoplus_{(n_1,\ldots,n_{k-1})\prec(m_1,\ldots,m_k)}(n_1,\ldots,n_{k-1}); $$ here the symbol $(n_1,\ldots,n_{k-1})\prec(m_1,\ldots,m_k)$ means that either all $m_i$ and $n_j$ are in $\mathbb{Z}$ or all of them are in $\frac{1}{2}+\mathbb{Z}$ and $$
m_1\geq n_1\geq m_2\geq n_2\geq\cdots\geq m_{k-1}\geq n_{k-1}\geq|m_k|. $$ \indent If $n$ is odd, $n=2k+1,$ we have $$
(m_1,\ldots,m_k)|M=\bigoplus_{(n_1,\ldots,n_k)\prec(m_1,\ldots,m_k)}(n_1,\ldots,n_k); $$ now the symbol $(n_1,\ldots,n_k)\prec(m_1,\ldots,m_k)$ means again that either all $m_i$ and $n_j$ are in $\mathbb{Z}$ or all of them are in $\frac{1}{2}+\mathbb{Z}$ and now that $$
m_1\geq n_1\geq m_2\geq n_2\geq\cdots\geq m_{k-1}\geq n_{k-1}\geq m_k\geq|n_k|. $$
\indent The restriction $\pi^{\sigma,\nu}|K$ is the representation of $K$ induced by the representation $\sigma$ of the subgroup $M,$ thus it does not depend on $\nu.$ By Frobenius Reciprocity Theorem the multiplicity of $q\in\hat{K}$ in $\pi^{\sigma,\nu}|K$ is equal to the multiplicity of $\sigma$ in $q|M.$ Thus $$
\pi^{\sigma,\nu}|K=\bigoplus_{\sigma\prec(m_1,\ldots,m_k)}(m_1,\ldots,m_k). $$ Hence, the multiplicity of every $q=(m_1,\ldots,m_k)\in\hat{K}$ in the elementary representation $\pi^{\sigma,\nu}$ is either $1$ or $0$ and the $K-$spectrum $\Gamma(\pi^{\sigma,\nu})$ consists of all $q=(m_1,\ldots,m_k)\in\hat{K}\cap(n_1+\mathbb{Z})^k$ such that $$
\begin{array}{ll}m_1\geq n_1\geq m_2\geq n_2\geq\cdots\geq m_{k-1}\geq n_{k-1}\geq|m_k|&\,\,\mathrm{if}\,\,n=2k\\ &\\
m_1\geq n_1\geq m_2\geq n_2\geq\cdots\geq m_{k-1}\geq n_{k-1}\geq m_k\geq|n_k|&\,\,\mathrm{if}\,\,n=2k+1. \end{array} $$
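\indent As an illustration, take $n=4,$ $k=2,$ so that $M\simeq\mr{Spin}(3)$ and $\sigma=(n_1)\in\hat{M}.$ Then the above description reads $$ \Gamma(\pi^{\sigma,\nu})=\{(m_1,m_2)\in\hat{K}\cap(n_1+\mathbb{Z})^2;\,\,m_1\geq n_1\geq|m_2|\}; $$ e.g. for $n_1=1$ the $K-$spectrum consists of all $(m_1,m_2)$ with $m_1\geq1$ and $m_2\in\{-1,0,1\},$ each occurring with multiplicity one.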
\section{Representations of $\mr{Spin}(2k,1)$}
\indent In this section we first write down in our notation the known results on elementary representations and its irreducible subquotients for the groups $\mr{Spin}(2k,1)$ (see [1], [2], [3], [7], [8], [9], [11], [12]). For $\sigma=(n_1,\ldots,n_{k-1})$ in $\hat{M}\subseteq\mathbb{R}^{k-1}=i\mathfrak{d}_0^*$ and for $\nu\in\mathbb{C}=\mathfrak{a}^*$ the elementary representation $\pi^{\sigma,\nu}$ is irreducible if and only if either $\nu\not\in\frac{1}{2}+n_1+\mathbb{Z}$ or $$ \begin{array}{c} \nu\in\left\{\pm\left(n_{k-1}+\frac{1}{2}\right),\pm\left(n_{k-2}+\frac{3}{2}\right),\ldots,\pm\left(n_2+k-\frac{5}{2}\right),\pm\left(n_1+k-\frac{3}{2}\right)\right\}. \end{array} $$ If $\pi^{\sigma,\nu}$ is reducible it has either two or three irreducible subquotients. If it has two, we will denote them by $\tau^{\sigma,\nu}$ and $\omega^{\sigma,\nu};$ an exception is the case of nonintegral $n_j$ and $\nu=0,$ when we denote them by $\omega^{\sigma,0,\pm}.$ If $\pi^{\sigma,\nu}$ has three irreducible subquotients, we will denote them by $\tau^{\sigma,\nu}$ and $\omega^{\sigma,\nu,\pm}.$ Their $K-$spectra are as follows: \begin{enumerate} \item[$(a1)$] If $n_j\in\mathbb{Z}_+$ and $\nu\in\left\{\pm\frac{1}{2},\pm\frac{3}{2},\ldots,\pm\left(n_{k-1}-\frac{1}{2}\right)\right\}$ (this is possible only if $n_{k-1}\geq1)$ the representation $\pi^{\sigma,\nu}$ has three irreducible subquotients $\tau^{\sigma,\nu}$ and $\omega^{\sigma,\nu,\pm}.$ Their $K-$spectra consist of all $q=(m_1,\ldots,m_k)$ in $\hat{K}\cap\mathbb{Z}^k$ such that: $$ \begin{array}{ll}
\Gamma(\tau^{\sigma,\nu}):&\,\,m_1\geq n_1\geq\cdots\geq m_{k-1}\geq n_{k-1},\,\,|m_k|\leq|\nu|-\frac{1}{2};\\
\Gamma(\omega^{\sigma,\nu,+}):&\,\,m_1\geq n_1\geq\cdots\geq m_{k-1}\geq n_{k-1}\geq m_k\geq|\nu|+\frac{1}{2};\\
\Gamma(\omega^{\sigma,\nu,-}):&\,\,m_1\geq n_1\geq\cdots\geq m_{k-1}\geq n_{k-1},\,\,\,-|\nu|-\frac{1}{2}\geq m_k\geq -n_{k-1}. \end{array} $$ \item[$(a2)$] If $n_j\in\left(\frac{1}{2}+\mathbb{Z}_+\right)$ and $\nu\in\left\{\pm1,\ldots,\pm\left(n_{k-1}-\frac{1}{2}\right)\right\}$ (this is possible only if $n_{k-1}\geq\frac{3}{2})$ the representation $\pi^{\sigma,\nu}$ has three irreducible subquotients $\tau^{\sigma,\nu}$ and $\omega^{\sigma,\nu,\pm}.$ Their $K-$spectra consist of all $q=(m_1,\ldots,m_k)$ in $\hat{K}\cap\left(\frac{1}{2}+\mathbb{Z}\right)^k$ such that: $$ \begin{array}{ll}
\Gamma(\tau^{\sigma,\nu}):&\,\,m_1\geq n_1\geq\cdots\geq m_{k-1}\geq n_{k-1},\,\,|m_k|\leq|\nu|-\frac{1}{2};\\
\Gamma(\omega^{\sigma,\nu,+}):&\,\,m_1\geq n_1\geq\cdots\geq m_{k-1}\geq n_{k-1}\geq m_k\geq|\nu|+\frac{1}{2};\\
\Gamma(\omega^{\sigma,\nu,-}):&\,\,m_1\geq n_1\geq\cdots\geq m_{k-1}\geq n_{k-1},\,\,\,-|\nu|-\frac{1}{2}\geq m_k\geq -n_{k-1}. \end{array} $$ \item[$(a3)$] If $n_j\in\left(\frac{1}{2}+\mathbb{Z}_+\right)$ and if $\nu=0$ the representation has two irreducible subquotients $\omega^{\sigma,0,\pm};$ they are both subrepresentations since $\pi^{\sigma,0}$ is unitary. Their $K-$spectra consist of all $q=(m_1,\ldots,m_k)$ in $\hat{K}\cap\left(\frac{1}{2}+\mathbb{Z}\right)^k$ such that: $$ \begin{array}{ll} \Gamma(\omega^{\sigma,0,+}):&\,\,m_1\geq n_1\geq\cdots\geq m_{k-1}\geq n_{k-1}\geq m_k\geq\frac{1}{2};\\ \Gamma(\omega^{\sigma,0,-}):&\,\,m_1\geq n_1\geq\cdots\geq m_{k-1}\geq n_{k-1},\,\,\,-\frac{1}{2}\geq m_k\geq -n_{k-1}. \end{array} $$ \item[$(bj)$] If $n_{j-1}>n_j$ for some $j\in\{2,\ldots,k-1\}$ and if $$ \begin{array}{c} \nu\in\left\{\pm\left(n_j+k-j+\frac{1}{2}\right),\pm\left(n_j+k-j+\frac{3}{2}\right),\ldots,\pm\left(n_{j-1}+k-j-\frac{1}{2}\right)\right\}, \end{array} $$ then $\pi^{\sigma,\nu}$ has two irreducible subquotients $\tau^{\sigma,\nu}$ and $\omega^{\sigma,\nu}.$ Their $K-$spectra consist of all $q=(m_1,\ldots,m_k)\in\hat{K}\cap(n_1+\mathbb{Z})^k$ such that: $$ \begin{array}{ll} \Gamma(\tau^{\sigma,\nu}):&\quad m_1\geq n_1\geq\cdots\geq m_{j-1}\geq n_{j-1},\\
&\quad|\nu|-k+j-\frac{1}{2}\geq m_j\geq n_j\geq\cdots\geq n_{k-1}\geq|m_k|;\\
\Gamma(\omega^{\sigma,\nu}):&\quad m_1\geq n_1\geq\cdots\geq n_{j-1}\geq m_j\geq|\nu|-k+j+\frac{1}{2},\\
&\quad n_j\geq m_{j+1}\geq\cdots\geq n_{k-1}\geq|m_k|. \end{array} $$ \item[$(c)$] If $$ \begin{array}{c} \nu\in\left\{\pm\left(n_1+k-\frac{1}{2}\right),\pm\left(n_1+k+\frac{1}{2}\right),\pm\left(n_1+k+\frac{3}{2}\right),\ldots\right\}, \end{array} $$ then $\pi^{\sigma,\nu}$ has two irreducible subquotients: finitedimensional representation $\tau^{\sigma,\nu}$ and infinitedimensional $\omega^{\sigma,\nu}.$ Their $K-$spectra consist of all $q=(m_1,\ldots,m_k)\in\hat{K}\cap\left(n_1+\mathbb{Z}\right)^k$ such that: $$ \begin{array}{ll}
\Gamma(\tau^{\sigma,\nu}):&\,\,|\nu|-k+\frac{1}{2}\geq m_1\geq n_1\geq\cdots\geq m_{k-1}\geq n_{k-1}\geq|m_k|;\\
\Gamma(\omega^{\sigma,\nu}):&\,\,m_1\geq|\nu|-k+\frac{3}{2},\,\,n_1\geq m_2\geq\cdots\geq m_{k-1}\geq n_{k-1}\geq|m_k|. \end{array} $$ \end{enumerate}
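\indent The simplest instance of the case $(c)$ is the trivial $\sigma=(0,\ldots,0)$ with $\nu=k-\frac{1}{2}:$ then $$ \Gamma(\tau^{\sigma,\nu})=\{(0,\ldots,0)\}\qquad\mathrm{and}\qquad\Gamma(\omega^{\sigma,\nu})=\{(m_1,0,\ldots,0);\,\,m_1\geq1\}, $$ so $\tau^{\sigma,\nu}$ is the trivial onedimensional representation.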
\indent An irreducible elementary representation $\pi^{\sigma,\nu}$ is unitary if and only if either $\nu\in i\mathbb{R}$ (so called {\bf unitary principal series}) or $\nu\in\langle-\nu(\sigma),\nu(\sigma)\rangle,$ where $$ \nu(\sigma)=\min\,\{\nu\geq0;\,\,\pi^{\sigma,\nu}\,\,\mathrm{is}\,\,\mathrm{reducible}\} $$ (so called {\bf complementary series}). Notice that for nonintegral $n_j$'s $\pi^{\sigma,0}$ is reducible, thus $\nu(\sigma)=0$ and the complementary series is empty. In the case of integral $n_j$'s we have the following possibilities: \begin{enumerate} \item[$(a)$] If $n_{k-1}\geq1,$ then $\nu(\sigma)=\frac{1}{2}.$ The reducible elementary representation $\pi^{\sigma,\frac{1}{2}}$ is of the type $(a1).$ \item[$(b)$] If $n_{k-1}=0$ and $n_1\geq1,$ let $j\in\{2,\ldots,k-1\}$ be such that\linebreak $n_{k-1}=\cdots=n_j=0<n_{j-1}.$ Then $\nu(\sigma)=k-j+\frac{1}{2}.$ The reducible elementary representation $\pi^{\sigma,k-j+\frac{1}{2}}$ is of the type $(bj).$ \item[$(c)$] If $\sigma$ is trivial, i.e. $n_1=\cdots=n_{k-1}=0,$ then $\nu(\sigma)=k-\frac{1}{2}.$ The reducible elementary representation $\pi^{\sigma,k-\frac{1}{2}}$ is of the type $(c).$ \end{enumerate}
\indent Among irreducible subquotients of reducible elementary representations the unitary ones are $\omega^{\sigma,\nu,\pm},$ $\tau^{\sigma,\nu(\sigma)}$ and $\omega^{\sigma,\nu(\sigma)}.$\\
\indent Now we write down the infinitesimal characters. The dual space $\mathfrak{h}^*$ of the Cartan subalgebra $\mathfrak{h}=\mathfrak{d}\dotplus\mathfrak{a}$ is identified with $\mathbb{C}^k$ through the basis $(H_1,\ldots,H_{k-1},H).$ The infinitesimal character of the elementary representation $\pi^{\sigma,\nu}$ is $\chi_{\Lambda(\sigma,\nu)},$ where $\Lambda(\sigma,\nu)\in\mathfrak{h}^*$ is given by $$ \begin{array}{c}
\Lambda(\sigma,\nu)|\mathfrak{d}=\lambda_{\sigma}+\delta_{\mathfrak{m}}\qquad\mathrm{and}\qquad\Lambda(\sigma,\nu)|\mathfrak{a}=\nu, \end{array} $$ where $\lambda_{\sigma}\in\mathfrak{d}^*$ is the highest weight of the representation $\sigma$ and $\delta_{\mathfrak{m}}$ is the halfsum of positive roots of the pair $(\mathfrak{m},\mathfrak{d}).$ Using the earlier described identifications of $\mathfrak{d}^*=\mathbb{C}^{k-1}$ and $\mathfrak{a}^*=\mathbb{C}$ with subspaces of $\mathfrak{h}^*=\mathbb{C}^k$ we have\linebreak $\lambda_{\sigma}=(n_1,\ldots,n_{k-1},0),$ $\delta_{\mathfrak{m}}=\left(k-\frac{3}{2},k-\frac{5}{2},\ldots,\frac{1}{2},0\right),$ $\nu=(0,\ldots,0,\nu),$ hence $$ \begin{array}{c} \Lambda(\sigma,\nu)=\left(n_1+k-\frac{3}{2},n_2+k-\frac{5}{2},\ldots,n_{k-1}+\frac{1}{2},\nu\right). \end{array} $$ As we pointed out, if $\mathfrak{t}^*$ is identified with $\mathbb{C}^k$ through the basis $(H_1,\ldots,H_k)$ of $\mathfrak{t},$ the same parameters determine this infinitesimal character with respect to the Harish$-$Chandra isomorphism $\mathfrak{Z}(\mathfrak{g})\longrightarrow{\cal P}(\mathfrak{t}^*)^W.$\\ \indent The $W_K-$chamber in $\mathbb{R}^k=i\mathfrak{t}_0^*$ corresponding to the chosen positive roots $\Delta_K^+$ is $$
C=\{\lambda\in\mathbb{R}^k;\,\,\lambda_1>\lambda_2>\ldots>\lambda_{k-1}>|\lambda_k|>0\}. $$ The set ${\cal D}$ of $W-$chambers contained in $C$ consists of two elements: $$ D_1=\{\lambda\in\mathbb{R}^k;\,\,\lambda_1>\lambda_2>\cdots>\lambda_{k-1}>\lambda_k>0\} $$ and $$ D_2=\{\lambda\in\mathbb{R}^k;\,\,\lambda_1>\lambda_2>\cdots>\lambda_{k-1}>-\lambda_k>0\}. $$ The closure $\overline{D}_1$ is a fundamental domain for the action of $W$ on $\mathbb{R}^k,$ i.e. each $W-$orbit in $\mathbb{R}^k$ intersects with $\overline{D}_1$ in one point. We saw that the reducibility criteria imply that $\Lambda(\sigma,\nu)\in\mathbb{R}^k$ whenever $\pi^{\sigma,\nu}$ is reducible. We denote by $\lambda(\sigma,\nu)$ the unique point in the intersection of $W\Lambda(\sigma,\nu)$ with $\overline{D}_1.$ In the following theorem without loss of generality we can suppose that $\nu\geq0,$ since $\pi^{\sigma,\nu}$ and $\pi^{\sigma,-\nu}$ have the same irreducible subquotients. \begin{tm}\begin{enumerate}\item[$(i)$] $\pi^{\sigma,\nu}$ is reducible if and only if its infinitesimal character is $\chi_{\lambda}$ for some $\lambda\in\Lambda,$ where $$ \begin{array}{c} \Lambda=\left\{\lambda\in\mathbb{Z}_+^k\cup\left(\frac{1}{2}+\mathbb{Z}_+\right)^k;\,\,\lambda_1>\lambda_2>\cdots>\lambda_{k-1}>\lambda_k\geq0\right\}. \end{array} $$ We write $\Lambda$ as the disjoint union $\Lambda^*\cup\Lambda^0,$ where $$ \begin{array}{c} \Lambda^*=\left\{\lambda\in\mathbb{Z}_+^k\cup\left(\frac{1}{2}+\mathbb{Z}_+\right)^k;\,\,\lambda_1>\lambda_2>\cdots>\lambda_{k-1}>\lambda_k>0\right\}, \end{array} $$ $$ \begin{array}{c} \Lambda^0=\left\{\lambda\in\mathbb{Z}_+^k\cup\left(\frac{1}{2}+\mathbb{Z}_+\right)^k;\,\,\lambda_1>\lambda_2>\cdots>\lambda_{k-1}>0,\,\,\lambda_k=0\right\}. \end{array} $$ \item[$(ii)$] For $\lambda\in\Lambda^*$ there exist $k$ ordered pairs $(\sigma,\nu),$ $\sigma=(n_1,\ldots,n_{k-1})\in\hat{M},$ $\nu\geq0,$ such that $\chi_{\lambda}$ is the infinitesimal character of $\pi^{\sigma,\nu}.$ These ordered pairs are: \begin{enumerate} \item[$(a)$] $\nu=\lambda_k,$ $n_1=\lambda_1-k+\frac{3}{2},$ $n_2=\lambda_2-k+\frac{5}{2},$ $\ldots$ $n_{k-1}=\lambda_{k-1}-\frac{1}{2}.$ \item[$(bj)$] $\nu=\lambda_j,$ $n_1=\lambda_1-k+\frac{3}{2},$ $\ldots$ $n_{j-1}=\lambda_{j-1}-k+j-\frac{1}{2},$\linebreak $n_j=\lambda_{j+1}-k+j+\frac{1}{2},$ $\ldots$ $n_{k-1}=\lambda_k-\frac{1}{2},$ $2\leq j\leq k-1.$ \item[$(c)$] $\nu=\lambda_1,$ $n_1=\lambda_2-k+\frac{3}{2},$ $\ldots$ $n_s=\lambda_{s+1}-k+s+\frac{1}{2},$ $\ldots$ $n_{k-1}=\lambda_k-\frac{1}{2}.$ \end{enumerate} \item[$(iii)$] For $\lambda\in\Lambda^0,$ the ordered pair $(\sigma,\nu),$ $\sigma=(n_1,\ldots,n_{k-1})\in\hat{M},$ $\nu\in\mathbb{R},$ such that $\chi_{\lambda}$ is the infinitesimal character of $\pi^{\sigma,\nu},$ is unique: $$ \begin{array}{c} n_1=\lambda_1-k+\frac{3}{2},\,\,n_2=\lambda_2-k+\frac{5}{2},\,\ldots\,,\,\,n_{k-1}=\lambda_{k-1}-\frac{1}{2},\quad\nu=\lambda_k=0. \end{array} $$ \end{enumerate} \end{tm}
{\bf Proof:} $(i)$ We already know that for a reducible elementary representation $\pi^{\sigma,\nu}$ one has $\Lambda(\sigma,\nu)\in\mathbb{Z}^k\cup\left(\frac{1}{2}+\mathbb{Z}\right)^k.$ As the Weyl group $W=W(\mathfrak{g},\mathfrak{t})$ consists of all permutations of coordinates combined with multiplying some of the coordinates with $-1,$ we conclude that the infinitesimal character of $\pi^{\sigma,\nu}$ is $\chi_{\lambda}$ for some $\lambda\in\Lambda.$ The sufficiency will follow from the proofs of $(ii)$ and $(iii).$\\ \indent $(ii)$ Let $\lambda\in\Lambda^*$ and suppose that $\chi_{\lambda}$ is the infinitesimal character of $\pi^{\sigma,\nu}.$ This means that $\Lambda(\sigma,\nu)$ and $\lambda$ are $W-$conjugated. Now, since $\lambda_j>0,$ $\forall j\in\{1,\ldots,k\},$ $n_s+k-s-\frac{1}{2}>0,$ $\forall s\in\{1,\ldots,k-1\}$ and $\nu\geq0,$ we conclude that necessarily $\nu=\lambda_j$ for some $j\in\{1,\ldots,k\}.$ We inspect now each of these $k$ possibilities.\\ \indent $(a)$ $\nu=\lambda_k.$ Then necessarily $$ \begin{array}{c} n_1=\lambda_1-k+\frac{3}{2},\,\,\ldots\,\,,\,\,n_{k-1}=\lambda_{k-1}-\frac{1}{2}. \end{array} $$ We check now that the $(k-1)-$tuple $(n_1,\ldots,n_{k-1})$ so defined is indeed in $\hat{M}.$ For $1\leq j\leq k-2$ we have $$ \begin{array}{c} n_j-n_{j+1}=\left(\lambda_j-k+\frac{2j+1}{2}\right)-\left(\lambda_{j+1}-k+\frac{2j+3}{2}\right)=\lambda_j-\lambda_{j+1}-1\in\mathbb{Z}_+. \end{array} $$ Further, if $\lambda\in\mathbb{N}^k,$ then $\lambda_{k-1}\geq2,$ thus $n_{k-1}\geq\frac{3}{2},$ and if $\lambda\in\left(\frac{1}{2}+\mathbb{Z}_+\right)^k,$ then $\lambda_{k-1}\geq\frac{3}{2},$ thus $n_{k-1}\geq1.$ In particular, $\sigma=(n_1,\ldots,n_{k-1})\in\hat{M}.$ Finally, we see that $\nu=\lambda_k\leq\lambda_{k-1}-1=n_{k-1}-\frac{1}{2},$ so we conclude that the elementary representation $\pi^{\sigma,\nu}$ is reducible.\\ \indent $(bj)$ $\nu=\lambda_j$ for some $j\in\{2,\ldots,k-1\}.$ Then necessarily $n_1=\lambda_1-k+\frac{3}{2},$ $\ldots,$ $n_{j-1}=\lambda_{j-1}-k+\frac{2j-1}{2},$ $n_j=\lambda_{j+1}-k+\frac{2j+1}{2},$ $\ldots,$ $n_{k-1}=\lambda_k-\frac{1}{2}.$ We check now that the $(k-1)-$tuple $(n_1,\ldots,n_{k-1})$ so defined is indeed in $\hat{M}.$ For $1\leq s\leq j-2$ we have $$ \begin{array}{c} n_s-n_{s+1}=\left(\lambda_s-k+\frac{2s+1}{2}\right)-\left(\lambda_{s+1}-k+\frac{2s+3}{2}\right)=\lambda_s-\lambda_{s+1}-1\in\mathbb{Z}_+. \end{array} $$ Further, $$ \begin{array}{c} n_{j-1}-n_j=\left(\lambda_{j-1}-k+\frac{2j-1}{2}\right)-\left(\lambda_{j+1}-k+\frac{2j+1}{2}\right)=\lambda_{j-1}-\lambda_{j+1}-1\in\mathbb{N}. \end{array} $$ For $j\leq s\leq k-2$ we have $$ \begin{array}{c} n_s-n_{s+1}=\left(\lambda_{s+1}-k+\frac{2s+1}{2}\right)-\left(\lambda_{s+2}-k+\frac{2s+3}{2}\right)=\lambda_{s+1}-\lambda_{s+2}-1\in\mathbb{Z}_+. \end{array} $$ Finally, $n_{k-1}=\lambda_k-\frac{1}{2}\in\frac{1}{2}\mathbb{Z}_+.$ Thus, $\sigma=(n_1,\ldots,n_{k-1})\in\hat{M}.$\\ \indent We check now that the elementary representation $\pi^{\sigma,\nu}$ is reducible. We have $$ \begin{array}{c} 1\leq\lambda_{j-1}-\lambda_j=n_{j-1}+k-j+\frac{1}{2}-\nu\quad\Leftrightarrow\quad\nu\leq n_{j-1}+k-j-\frac{1}{2} \end{array} $$ and $$ \begin{array}{c} 1\leq\lambda_j-\lambda_{j+1}=\nu-n_j-k+j+\frac{1}{2}\quad\Leftrightarrow\quad\nu\geq n_j+k-j+\frac{1}{2}.
\end{array} $$ Thus, $$ \begin{array}{c} \nu\in\left\{n_j+k-j+\frac{1}{2},\ldots,n_{j-1}+k-j-\frac{1}{2}\right\} \end{array} $$ and we conclude that $\pi^{\sigma,\nu}$ is reducible.\\ \indent $(c)$ $\nu=\lambda_1.$ Then necessarily $$ \begin{array}{c} n_1=\lambda_2-k+\frac{3}{2},\ldots,n_s=\lambda_{s+1}-k+\frac{2s+1}{2},\ldots,n_{k-1}=\lambda_k-\frac{1}{2}. \end{array} $$ As before we see that for $1\leq s\leq k-2$ we have $$ n_s-n_{s+1}=\lambda_{s+1}-\lambda_{s+2}-1\in\mathbb{Z}_+. $$ Further, $n_{k-1}=\lambda_k-\frac{1}{2}\in\frac{1}{2}\mathbb{Z}_+.$ Thus, $\sigma=(n_1,\ldots,n_{k-1})\in\hat{M}.$ Finally, $$ \begin{array}{c} 1\leq\lambda_1-\lambda_2=\nu-\left(n_1+k-\frac{3}{2}\right)\quad\Leftrightarrow\quad\nu\geq n_1+k-\frac{1}{2}, \end{array} $$ i.e. $$ \begin{array}{c} \nu\in\left\{n_1+k-\frac{1}{2},n_1+k+\frac{1}{2},n_1+k+\frac{3}{2},\ldots\right\}. \end{array} $$ Thus, the elementary representation $\pi^{\sigma,\nu}$ is reducible.\\
\indent $(iii)$ Let $\lambda\in\Lambda^0$ and suppose that the elementary representation $\pi^{\sigma,\nu},$ $\sigma=(n_1,\ldots,n_{k-1})\in\hat{M},$ $\nu\in\mathbb{R},$ has infinitesimal character $\chi_{\lambda}.$ As in the proof of $(ii)$ we conclude that necessarily $|\nu|=\lambda_j$ for some $j\in\{1,\ldots,k\}.$ The assumption $j<k$ would imply $n_{k-1}+\frac{1}{2}=\lambda_k=0$ and this is impossible since $n_{k-1}\geq0.$ Thus, we conclude that $j=k,$ i.e. $\nu=\lambda_k=0.$ It follows that $$ \begin{array}{c} n_1=\lambda_1-k+\frac{3}{2},\,\,n_2=\lambda_2-k+\frac{5}{2},\,\,\ldots\,\,,\,\,n_{k-1}=\lambda_{k-1}-\frac{1}{2}. \end{array} $$ As before we conclude that so defined $\sigma=(n_1,\ldots,n_{k-1})$ is in $\hat{M}.$ Finally, $n_j$ are nonintegral and $\nu=0,$ therefore $\pi^{\sigma,0}$ is reducible.\\
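\indent For instance, for $k=2$ and $\lambda=\left(\frac{5}{2},\frac{1}{2}\right)\in\Lambda^*$ the two ordered pairs provided by $(ii)$ are $\nu=\frac{1}{2},$ $\sigma=(2)$ (case $(a)$) and $\nu=\frac{5}{2},$ $\sigma=(0)$ (case $(c)$); indeed, $\Lambda\left((2),\frac{1}{2}\right)=\left(\frac{5}{2},\frac{1}{2}\right)=\lambda$ and $\Lambda\left((0),\frac{5}{2}\right)=\left(\frac{1}{2},\frac{5}{2}\right)$ is $W-$conjugated with $\lambda,$ and both elementary representations are reducible, the first being of the type $(a1)$ and the second of the type $(c).$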
\indent Fix now $\lambda\in\Lambda^*.$ By $(ii)$ in Theorem 1. there exist $k$ pairs $(\sigma,\nu)\in\hat{M}\times\frac{1}{2}\mathbb{N}$ such that $\chi_{\lambda}$ is the infinitesimal character of $\pi^{\sigma,\nu}.$ Denote them by $(\sigma_j,\nu_j),$ $1\leq j\leq k,$ where $\nu_j=\lambda_j$ and $$ \begin{array}{ll} (c)&\sigma_1=\left(\lambda_2-k+\frac{3}{2},\ldots,\lambda_{s+1}-k+\frac{2s+1}{2},\ldots,\lambda_k-\frac{1}{2}\right),\\ &\\ (bj)&\sigma_j=\left(\lambda_1-k+\frac{3}{2},\ldots,\lambda_{j-1}-k+\frac{2j-1}{2},\lambda_{j+1}-k+\frac{2j+1}{2},\ldots,\lambda_k-\frac{1}{2}\right),\\ &\qqu2\leq j\leq k-1,\\ &\\ (a)&\sigma_k=\left(\lambda_1-k+\frac{3}{2},\ldots,\lambda_s-k+\frac{2s+1}{2},\ldots,\lambda_{k-1}-\frac{1}{2}\right). \end{array} $$ \indent There are altogether $k+2$ mutually infinitesimally inequivalent irreducible subquotients of the reducible elementary representations $\pi^{\sigma_1,\nu_1},\ldots,\pi^{\sigma_k,\lambda_k}$ which we denote by $\tau_1^{\lambda},\ldots,\tau_k^{\lambda},\omega_+^{\lambda},\omega_-^{\lambda}:$ $$ \begin{array}{l} \tau_1^{\lambda}=\tau^{\sigma_1,\nu_1},\\ \\ \tau_2^{\lambda}=\omega^{\sigma_1,\nu_1}\cong\tau^{\sigma_2,\nu_2},\\ \\ \vdots\\ \\ \tau_j^{\lambda}=\omega^{\sigma_{j-1},\nu_{j-1}}\cong\tau^{\sigma_j,\nu_j},\\ \\ \vdots\\ \\ \tau_k^{\lambda}=\omega^{\sigma_{k-1},\nu_{k-1}}\cong\tau^{\sigma_k,\nu_k},\\ \\ \omega_+^{\lambda}=\omega^{\sigma_k,\nu_k,+},\\ \\ \omega_-^{\lambda}=\omega^{\sigma_k,\nu_k,-}. \end{array} $$ \indent The $K-$spectra of these irreducible representations consist of all\linebreak $q=(m_1,\ldots,m_k)\in\hat{K}\cap\left(\lambda_1+\frac{1}{2}+\mathbb{Z}\right)^k$ that satisfy: $$ \begin{array}{ll}
\Gamma(\tau_1^{\lambda}):&\lambda_1-k+\frac{1}{2}\geq m_1\geq\lambda_2-k+\frac{3}{2}\geq\cdots\geq m_{k-1}\geq\lambda_k-\frac{1}{2}\geq|m_k|,\\ &\,\,\vdots\\ \Gamma(\tau_j^{\lambda}):&m_1\geq\lambda_1-k+\frac{3}{2}\geq m_2\geq\cdots\geq m_{j-1}\geq\lambda_{j-1}-k+j-\frac{1}{2},\\
&\lambda_j-k+j-\frac{1}{2}\geq m_j\geq\cdots\geq m_{k-1}\geq\lambda_k-\frac{1}{2}\geq|m_k|,\\ &\,\,\vdots\\ \Gamma(\tau_k^{\lambda}):&m_1\geq\lambda_1-k+\frac{3}{2}\geq m_2\geq\lambda_2-k+\frac{5}{2}\geq\cdots\geq m_{k-1}\geq\lambda_{k-1}-\frac{1}{2},\\
&\lambda_k-\frac{1}{2}\geq|m_k|,\\ &\\ \Gamma(\omega_+^{\lambda}):&m_1\geq\lambda_1-k+\frac{3}{2}\geq\cdots\geq m_{k-1}\geq\lambda_{k-1}-\frac{1}{2}\geq m_k\geq\lambda_k+\frac{1}{2},\\ &\\ \Gamma(\omega_-^{\lambda}):&m_1\geq\lambda_1-k+\frac{3}{2}\geq\cdots\geq m_{k-1}\geq\lambda_{k-1}-\frac{1}{2}\geq-m_k\geq\lambda_k+\frac{1}{2}. \end{array} $$ \indent It is obvious that each of these representations $\pi$ has one $D_1-$corner, we denote it by $q_1(\pi),$ and one $D_2-$corner, we denote it by $q_2(\pi).$ The list is: $$ \begin{array}{l} q_1(\tau_1^{\lambda})=\left(\lambda_2-k+\frac{3}{2},\ldots,\lambda_{k-1}-\frac{3}{2},\lambda_k-\frac{1}{2},-\lambda_k+\frac{1}{2}\right),\\ q_2(\tau_1^{\lambda})=\left(\lambda_2-k+\frac{3}{2},\ldots,\lambda_{k-1}-\frac{3}{2},\lambda_k-\frac{1}{2},\lambda_k-\frac{1}{2}\right),\\ q_1(\tau_j^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\ldots,\lambda_{j-1}-k+j-\frac{1}{2},\lambda_{j+1}-k+j+\frac{1}{2},\ldots,\lambda_k-\frac{1}{2},-\lambda_k+\frac{1}{2}\right),\\ q_2(\tau_j^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\ldots,\lambda_{j-1}-k+j-\frac{1}{2},\lambda_{j+1}-k+j+\frac{1}{2},\ldots,\lambda_k-\frac{1}{2},\lambda_k-\frac{1}{2}\right),\\ q_1(\tau_k^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},-\lambda_k+\frac{1}{2}\right),\\ q_2(\tau_k^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},\lambda_k-\frac{1}{2}\right),\\ q_1(\omega_+^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},\lambda_k+\frac{1}{2}\right),\\ q_2(\omega_+^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},\lambda_{k-1}-\frac{1}{2}\right),\\ q_1(\omega_-^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},-\lambda_{k-1}+\frac{1}{2}\right),\\ q_2(\omega_-^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},-\lambda_k-\frac{1}{2}\right). \end{array} $$ We inspect now which of these corners are fundamental. 
Since $$ \begin{array}{l} \rho_K-\rho_P^{D_1}=\left(k-\frac{3}{2},k-\frac{5}{2},\ldots,\frac{1}{2},-\frac{1}{2}\right),\\ \\ \rho_K-\rho_P^{D_2}=\left(k-\frac{3}{2},k-\frac{5}{2},\ldots,\frac{1}{2},\frac{1}{2}\right), \end{array} $$ we have $$ \begin{array}{ll} q_1(\tau_1^{\lambda})+\rho_K-\rho_P^{D_1}=(\lambda_2,\ldots,\lambda_k,-\lambda_k),&\mathrm{not\,\,fundamental},\\ q_2(\tau_1^{\lambda})+\rho_K-\rho_P^{D_2}=(\lambda_2,\ldots,\lambda_k,\lambda_k),&\mathrm{not\,\,fundamental},\\ q_1(\tau_j^{\lambda})+\rho_K-\rho_P^{D_1}=(\lambda_1,\ldots,\lambda_{j-1},\lambda_{j+1},\ldots,\lambda_k,-\lambda_k),&\mathrm{not\,\,fundamental},\\ q_2(\tau_j^{\lambda})+\rho_K-\rho_P^{D_2}=(\lambda_1,\ldots,\lambda_{j-1},\lambda_{j+1},\ldots,\lambda_k,\lambda_k),&\mathrm{not\,\,fundamental},\\ q_1(\tau_k^{\lambda})+\rho_K-\rho_P^{D_1}=(\lambda_1,\ldots,\lambda_{k-1},-\lambda_k),&\mathrm{fundamental},\\ q_2(\tau_k^{\lambda})+\rho_K-\rho_P^{D_2}=(\lambda_1,\ldots,\lambda_{k-1},\lambda_k),&\mathrm{fundamental},\\ q_1(\omega_+^{\lambda})+\rho_K-\rho_P^{D_1}=(\lambda_1,\ldots,\lambda_{k-1},\lambda_k),&\mathrm{fundamental},\\ q_2(\omega_+^{\lambda})+\rho_K-\rho_P^{D_2}=(\lambda_1,\ldots,\lambda_{k-1},\lambda_{k-1}),&\mathrm{not\,\,fundamental},\\ q_1(\omega_-^{\lambda})+\rho_K-\rho_P^{D_1}=(\lambda_1,\ldots,\lambda_{k-1},-\lambda_{k-1}),&\mathrm{not\,\,fundamental},\\ q_2(\omega_-^{\lambda})+\rho_K-\rho_P^{D_2}=(\lambda_1,\ldots,\lambda_{k-1},-\lambda_k),&\mathrm{fundamental}. \end{array} $$ \indent Notice that the finite dimensional $\tau_1^{\lambda}$ is not unitary and $q_1(\tau_1^{\lambda})\not=q_2(\tau_1^{\lambda})$ unless it is the trivial $1-$dimensional representation $(\lambda=\left(k-\frac{1}{2},k-\frac{3}{2},\ldots,\frac{1}{2}\right))$ when $q_1(\tau_1^{\lambda})=q_2(\tau_1^{\lambda})=(0,\ldots,0).$ Next, $\tau_j^{\lambda}$ for $2\leq j\leq k$ is not unitary and $q_1(\tau_j^{\lambda})\not=q_2(\tau_j^{\lambda}).$ Finally, $\omega_+^{\lambda}$ and $\omega_-^{\lambda}$ are unitary (these are the discrete series representations) and each of them has one fundamental corner; the other corner is not fundamental.\\
\indent We consider now the case $\lambda\in\Lambda^0,$ so $\lambda_k=0.$ Then the unique pair $(\sigma,\nu)\in\hat{M}\times\mathbb{R},$ such that $\chi_{\lambda}$ is the infinitesimal character of the elementary representation $\pi^{\sigma,\nu},$ is $$ \begin{array}{ll} \sigma=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2}\right),&\nu=0. \end{array} $$ The elementary representation $\pi^{\sigma,0}$ is unitary and it is the direct sum of two unitary irreducible representations $\omega_+^{\lambda}$ and $\omega_-^{\lambda}.$ Their $K-$spectra consist of all $q=(m_1,\ldots,m_k)\in\hat{K}\cap\left(\frac{1}{2}+\mathbb{Z}\right)^k$ that satisfy $$ \begin{array}{ll} \Gamma(\omega_+^{\lambda}):&m_1\geq\lambda_1-k+\frac{3}{2}\geq\cdots\geq m_{k-1}\geq\lambda_{k-1}-\frac{1}{2}\geq m_k\geq\frac{1}{2},\\ &\\ \Gamma(\omega_-^{\lambda}):&m_1\geq\lambda_1-k+\frac{3}{2}\geq\cdots\geq m_{k-1}\geq\lambda_{k-1}-\frac{1}{2}\geq-m_k\geq\frac{1}{2}. \end{array} $$ Again each of these representations has one $D_1-$corner and one $D_2-$corner: $$ \begin{array}{l} q_1(\omega_+^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},\frac{1}{2}\right),\\ q_2(\omega_+^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},\lambda_{k-1}-\frac{1}{2}\right),\\ q_1(\omega_-^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},-\lambda_{k-1}+\frac{1}{2}\right),\\ q_2(\omega_-^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},-\frac{1}{2}\right). \end{array} $$ Two of them are fundamental: $$ \begin{array}{ll} q_1(\omega_+^{\lambda})+\rho_K-\rho_P^{D_1}=(\lambda_1,\ldots,\lambda_{k-1},0),&\mathrm{fundamental},\\ q_2(\omega_+^{\lambda})+\rho_K-\rho_P^{D_2}=(\lambda_1,\ldots,\lambda_{k-1},\lambda_{k-1}),&\mathrm{not\,\,fundamental},\\ q_1(\omega_-^{\lambda})+\rho_K-\rho_P^{D_1}=(\lambda_1,\ldots,\lambda_{k-1},-\lambda_{k-1}),&\mathrm{not\,\,fundamental},\\ q_2(\omega_-^{\lambda})+\rho_K-\rho_P^{D_2}=(\lambda_1,\ldots,\lambda_{k-1},0),&\mathrm{fundamental}. \end{array} $$ Thus, we see that again each of these unitary representations has one fundamental corner and the other corner is not fundamental.\\
\indent To summarize, we see that a $\pi\in\wideparen{G}^0$ with exactly one fundamental corner is unitary; we denote its fundamental corner by $q(\pi).$ For all the other $\pi\in\hat{G}^0$ one has $q_1(\pi)=q_2(\pi)$ and we denote by $q(\pi)$ this unique corner of $\pi.$ \begin{tm} $\pi\mapsto q(\pi)$ is a bijection of $\hat{G}^0$ onto $\hat{K}.$ \end{tm}
{\bf Proof:} We have $$ \hat{G}^0=\bigcup_{j=1}^{k}\{\tau_j^{\lambda};\,\,\lambda\in\Lambda_j^*\}\cup\{\omega_+^{\lambda};\,\,\lambda\in\Lambda\}\cup\{\omega_-^{\lambda};\,\,\lambda\in\Lambda\}, $$ where $\Lambda_1^*=\left\{\left(k-\frac{1}{2},k-\frac{3}{2},\ldots,\frac{1}{2}\right)\right\}$ and for $2\leq j\leq k$ $$ \begin{array}{c} \Lambda_j^*=\left\{\lambda\in\Lambda^*;\,\,\lambda_{j-1}>k-j+\frac{3}{2}\,\,\mathrm{and}\,\,\lambda_s=k-s+\frac{1}{2}\,\,\mathrm{for}\,\,j\leq s\leq k\right\}. \end{array} $$ \indent Let $q=(m_1,\ldots,m_k)\in\hat{K}.$ We have three possibilities:\\ \indent$(1)$ $m_k=0.$ Then $q\in\mathbb{Z}_+^k$ and $m_1\geq m_2\geq\cdots\geq m_{k-1}\geq0.$ Let $$ j=\min\,\{s;\,\,1\leq s\leq k,\,\,m_s=0\}. $$ Set $$ \begin{array}{ll} \lambda_s=m_s+k-s+\frac{1}{2},&\qu1\leq s\leq j-1,\\ &\\ \lambda_s=k-s+\frac{1}{2},&\quad j\leq s\leq k. \end{array} $$ Then for $1\leq s\leq j-2$ we have $\lambda_s-\lambda_{s+1}=m_s-m_{s+1}+1\geq1,$ next $\lambda_{j-1}-\lambda_j=m_{j-1}+1\geq2,$ further, for $j\leq s\leq k-1$ we have $\lambda_s-\lambda_{s+1}=1,$ and finally $\lambda_k=\frac{1}{2}.$ Thus, we see that $\lambda\in\Lambda^*.$ If $j=1$ we see that $\lambda$ is the unique element of $\Lambda_1^*.$ If $j\geq2$ we have $m_{j-1}>0$ and so $$ \begin{array}{c} \lambda_{j-1}=m_{j-1}+k-j+1+\frac{1}{2}>k-j+\frac{3}{2} \end{array} $$ i.e. $\lambda\in\Lambda_j^*.$ From the definition of $\lambda$ we see that $q=q(\tau_j^{\lambda}).$\\ \indent$(2)$ $m_k>0.$ Set now $$ \begin{array}{c} \lambda_j=m_j+k-j-\frac{1}{2}. \end{array} $$ Then $\lambda_j-\lambda_{j+1}=m_j-m_{j+1}+1\geq1$ for $1\leq j\leq k-1$ and $\lambda_k=m_k-\frac{1}{2}\geq0.$ Thus, $\lambda\in\Lambda$ and one sees that $q=q(\omega_+^{\lambda}).$\\ \indent$(3)$ $m_k<0.$ Set now $$ \begin{array}{c} \lambda_j=m_j+k-j-\frac{1}{2},\,\,1\leq j\leq k-1,\quad\lambda_k=-m_k-\frac{1}{2}. \end{array} $$
Then $\lambda_j-\lambda_{j+1}=m_j-m_{j+1}+1\geq1$ for $1\leq j\leq k-2,$ further\linebreak $\lambda_{k-1}-\lambda_k=m_{k-1}+m_k+1=m_{k-1}-|m_k|+1\geq1,$ and finally $\lambda_k=|m_k|-\frac{1}{2}\geq0.$ Thus, $\lambda\in\Lambda$ and one sees that $q=q(\omega_-^{\lambda}).$\\ \indent We have proved that $\pi\mapsto q(\pi)$ is a surjection of $\hat{G}^0$ onto $\hat{K}.$ From the proof we see that this map is injective too.\\
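\indent For example, for $k=2$ and $q=(2,1)\in\hat{K},$ the case $(2)$ of the proof gives $\lambda=\left(\frac{5}{2},\frac{1}{2}\right),$ and indeed $q(\omega_+^{\lambda})=\left(\lambda_1-\frac{1}{2},\lambda_2+\frac{1}{2}\right)=(2,1),$ i.e. $q$ is the fundamental corner of the discrete series representation $\omega_+^{\lambda}.$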
\indent Consider now minimal $K-$types in the sense of Vogan: we say that $q\in\hat{K}$ is a {\bf minimal} $K-${\bf type} of the representation $\pi$ if $q\in\Gamma(\pi)$ and $$
\|q+2\rho_K\|=\min\,\{\|q^{\prime}+2\rho_K\|;\,\,q^{\prime}\in\Gamma(\pi)\}. $$ For $q\in\hat{K}$ we have $$
\|q+2\rho_K\|^2=(m_1+2k-2)^2+(m_2+2k-4)^2+\cdots+(m_{k-1}+2)^2+m_k^2 $$ and so we find:\\ \indent If $\lambda\in\Lambda\cap\left(\frac{1}{2}+\mathbb{Z}\right)^k,$ i.e. $\lambda\in\Lambda^*$ and $\Gamma(\tau_j^{\lambda})\subseteq\mathbb{Z}^k,$ the representation $\tau_j^{\lambda}$ has one minimal $K-$type which we denote by $q^V(\tau_j^{\lambda}):$ $$ \begin{array}{l} q^V(\tau_1^{\lambda})=\left(\lambda_2-k+\frac{3}{2},\lambda_3-k+\frac{5}{2},\ldots,\lambda_k-\frac{1}{2},0\right),\\ \\ q^V(\tau_j^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\ldots,\lambda_{j-1}-k+j-\frac{1}{2},\lambda_{j+1}-k+j+\frac{1}{2},\ldots,\lambda_k-\frac{1}{2},0\right),\\ \qquad\qqu2\leq j\leq k-1,\\ \\ q^V(\tau_k^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},0\right). \end{array} $$
\indent If $\lambda\in\Lambda\cap\mathbb{Z}^k,$ i.e. $\Gamma(\tau_j^{\lambda})\subseteq\left(\frac{1}{2}+\mathbb{Z}\right)^k,$ the representation $\tau_j^{\lambda}$ has two minimal $K-$types $q_1^V(\tau_j^{\lambda})$ and $q_2^V(\tau_j^{\lambda}):$ $$ \begin{array}{l} q_1^V(\tau_1^{\lambda})=\left(\lambda_2-k+\frac{3}{2},\lambda_3-k+\frac{5}{2},\ldots,\lambda_k-\frac{1}{2},\frac{1}{2}\right),\\ \\ q_2^V(\tau_1^{\lambda})=\left(\lambda_2-k+\frac{3}{2},\lambda_3-k+\frac{5}{2},\ldots,\lambda_k-\frac{1}{2},-\frac{1}{2}\right),\\ \\ q_1^V(\tau_j^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\ldots,\lambda_{j-1}-k+j-\frac{1}{2},\lambda_{j+1}-k+j+\frac{1}{2},\ldots,\lambda_k-\frac{1}{2},\frac{1}{2}\right),\\ \qquad\qqu2\leq j\leq k-1,\\ \\ q_2^V(\tau_j^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\ldots,\lambda_{j-1}-k+j-\frac{1}{2},\lambda_{j+1}-k+j+\frac{1}{2},\ldots,\lambda_k-\frac{1}{2},-\frac{1}{2}\right),\\ \qquad\qqu2\leq j\leq k-1,\\ \\ q_1^V(\tau_k^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},\frac{1}{2}\right),\\ \\ q_2^V(\tau_k^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},-\frac{1}{2}\right). \end{array} $$ \indent Finally, for every $\lambda\in\Lambda$ the representation $\omega_{\pm}^{\lambda}$ has one minimal $K-$type $q^V(\omega_{\pm}^{\lambda}):$ $$ \begin{array}{l} q^V(\omega_+^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},\lambda_k+\frac{1}{2}\right),\\ \\ q^V(\omega_-^{\lambda})=\left(\lambda_1-k+\frac{3}{2},\lambda_2-k+\frac{5}{2},\ldots,\lambda_{k-1}-\frac{1}{2},-\lambda_k-\frac{1}{2}\right). \end{array} $$ \indent So we see that if $\pi\in\wideparen{G}^0$ has two minimal $K-$types it is not unitary. Further, every $\pi\in\hat{G}^0$ has one minimal $K-$type $q^V(\pi)$ and it coincides with $q(\pi).$ But there exist nonunitary representations in $\wideparen{G}^0$ that have only one mi\-ni\-mal $K-$type: all $\tau_j^{\lambda}$ for $\lambda\in\Lambda\cap\left(\frac{1}{2}+\mathbb{Z}_+\right)^k$ that are not subquotients of the ends of complementary series have this property. In other words, unitarity of a representation $\pi\in\wideparen{G}^0$ is not characterized by having a unique minimal $K-$type.
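\indent For instance, for $k=2$ and $\lambda=\left(\frac{5}{2},\frac{3}{2}\right)$ we have $\Gamma(\omega_+^{\lambda})=\{(m_1,2);\,\,m_1\geq2\}$ and $2\rho_K=(2,0),$ so $\|q+2\rho_K\|^2=(m_1+2)^2+4$ is minimal for $m_1=2;$ hence $q^V(\omega_+^{\lambda})=(2,2),$ in agreement with $q(\omega_+^{\lambda})=\left(\lambda_1-\frac{1}{2},\lambda_2+\frac{1}{2}\right)=(2,2).$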
\section{Representations of $\mr{Spin}(2k+1,1)$}
\indent Now $M=\mr{Spin}(2k).$ Cartan subalgebra $\mathfrak{t}_0$ of $\mathfrak{k}_0$ (resp. $\mathfrak{t}$ of $\mathfrak{k})$ is also Cartan subalgebra of $\mathfrak{m}_0$ (resp. $\mathfrak{m}).$ The root systems are: $$ \Delta_K=\Delta(\mathfrak{k},\mathfrak{t})=\{\pm\alpha_p\pm\alpha_q;\,\,1\leq p,q\leq k,\,\,p\not=q\}\cup\{\pm\alpha_p;\,\,1\leq p\leq k\} $$ and $$ \Delta_M=\Delta(\mathfrak{m},\mathfrak{t})=\{\pm\alpha_p\pm\alpha_q;\,\,1\leq p,q\leq k,\,\,p\not=q\}. $$ We choose positive roots: $$ \Delta_K^+=\{\alpha_p\pm\alpha_q;\,\,1\leq p<q\leq k\}\cup\{\alpha_p;\,\,1\leq p\leq k\}, $$ $$ \Delta_M^+=\{\alpha_p\pm\alpha_q;\,\,1\leq p<q\leq k\}. $$ The corresponding Weyl chambers in $\mathbb{R}^k=i\mathfrak{t}_0^*$ are $$ C_K=\{\lambda\in\mathbb{R}^k;\,\,\lambda_1>\lambda_2>\cdots>\lambda_{k-1}>\lambda_k>0\} $$ with the closure $$ \overline{C}_K=\{\lambda\in\mathbb{R}^k;\,\,\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_{k-1}\geq\lambda_k\geq0\} $$ and $$
C_M=\{\lambda\in\mathbb{R}^k;\,\,\lambda_1>\lambda_2>\cdots>\lambda_{k-1}>|\lambda_k|>0\} $$ with the closure $$
\overline{C}_M=\{\lambda\in\mathbb{R}^k;\,\,\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_{k-1}\geq|\lambda_k|\}. $$ The halfsums of positive roots are $$ \begin{array}{ccc} \rho_K=\left(k-\frac{1}{2},k-\frac{3}{2},\ldots,\frac{3}{2},\frac{1}{2}\right)&\mathrm{and}&\delta_{\mathfrak{m}}=(k-1,k-2,\ldots,1,0). \end{array} $$ Now $$ \begin{array}{c} \hat{K}=\left\{(m_1,\ldots,m_k)\in\mathbb{Z}_+^k\cup\left(\frac{1}{2}+\mathbb{Z}_+\right)^k;\,\,m_1\geq m_2\geq\cdots\geq m_{k-1}\geq m_k\geq0\right\}\\ \\
\hat{M}=\left\{(n_1,\ldots,n_k)\in\mathbb{Z}^k\cup\left(\frac{1}{2}+\mathbb{Z}\right)^k;\,\,n_1\geq n_2\geq\cdots\geq n_{k-1}\geq|n_k|\right\}. \end{array} $$ The branching rule is $$
(m_1,\ldots,m_k)|M=\bigoplus_{(n_1,\ldots,n_k)\prec(m_1,\ldots,m_k)}(n_1,\ldots,n_k) $$ where $(n_1,\ldots,n_k)\prec(m_1,\ldots,m_k)$ means that $(m_1,\ldots,m_k)\in(n_1+\mathbb{Z})^k$ and $$
m_1\geq n_1\geq m_2\geq n_2\geq\cdots\geq m_{k-1}\geq n_{k-1}\geq m_k\geq|n_k|. $$ So by the Frobenius Reciprocity Theorem for $\sigma=(n_1,\ldots,n_k)\in\hat{M}$ and $\nu\in\mathbb{C}=\mathfrak{a}^*$ we have $$
\pi^{\sigma,\nu}|K=\bigoplus_{(n_1,\ldots,n_k)\prec(m_1,\ldots,m_k)}(m_1,\ldots,m_k). $$ We identify the dual $\mathfrak{h}^*$ with $\mathbb{C}^{k+1}$ so that $\lambda\in\mathfrak{h}^*$ is identified with the\linebreak $(k+1)-$tuple $(\lambda(H_1),\ldots,\lambda(H_k),\lambda(H))$ and $\mathfrak{t}^*=\mathbb{C}^k$ is identified with the subspace of $\mathfrak{h}^*=\mathbb{C}^{k+1}$ of all $(k+1)-$tuples with $0$ at the end. The infinitesimal character of the elementary representation $\pi^{\sigma,\nu}$ is equal $\chi_{\Lambda(\sigma,\nu)},$ where $\Lambda(\sigma,\nu)\in\mathfrak{h}^*$ is defined by $$
\Lambda(\sigma,\nu)|\mathfrak{t}=\lambda_{\sigma}+\delta_{\mathfrak{m}}\quad\mathrm{and}\quad\Lambda(\sigma,\nu)|\mathfrak{a}=\nu. $$ Here $\lambda_{\sigma}$ is the highest weight of $\sigma$ with respect to $\Delta_M^+.$ Thus $$ \Lambda(\sigma,\nu)=(n_1+k-1,n_2+k-2,\ldots,n_{k-1}+1,n_k,\nu). $$ For $\sigma=(n_1,\ldots,n_k)\in\hat{M}\cap\mathbb{Z}^k$ and $\nu\in\mathbb{C}$ the elementary representation $\pi^{\sigma,\nu}$ is irreducible if and only if either $\nu\not\in\mathbb{Z}$ or $$
\nu\in\{0,\pm1,\ldots,\pm|n_k|,\pm(n_{k-1}+1),\pm(n_{k-2}+2),\ldots,\pm(n_1+k-1)\}. $$ For $\sigma\in\hat{M}\cap\left(\frac{1}{2}+\mathbb{Z}\right)^k$ and $\nu\in\mathbb{C}$ the representation $\pi^{\sigma,\nu}$ is irreducible if and only if either $\nu\not\in\left(\frac{1}{2}+\mathbb{Z}\right)$ or $$ \begin{array}{c}
\nu\in\left\{\pm\frac{1}{2},\ldots,\pm|n_k|,\pm(n_{k-1}+1),\pm(n_{k-2}+2),\ldots,\pm(n_1+k-1)\right\}. \end{array} $$ If the elementary representation $\pi^{\sigma,\nu}$ is reducible, it always has two irreducible subquotients which will be denoted by $\tau^{\sigma,\nu}$ and $\omega^{\sigma,\nu}.$ The $K-$spectra of these representations consist of all $q=(m_1,\ldots,m_k)\in\hat{K}\cap(n_1+\mathbb{Z})^k$ that satisfy: $$ \begin{array}{l}
\bullet\,\mathrm{If}\,\,n_{k-1}>|n_k|\,\,\mathrm{and}\,\,\nu\in\{\pm(|n_k|+1),\pm(|n_k|+2),\ldots,\pm n_{k-1}\}:\\
\Gamma(\tau^{\sigma,\nu}):\,\,m_1\geq n_1\geq\cdots\geq m_{k-1}\geq n_{k-1}\,\,\mathrm{and}\,\,|\nu|-1\geq m_k\geq|n_k|,\\
\Gamma(\omega^{\sigma,\nu}):\,\,m_1\geq n_1\geq\cdots\geq m_{k-1}\geq n_{k-1}\geq m_k\geq|\nu|.\\ \bullet\,\mathrm{If}\,\,n_{j-1}>n_j\,\,\mathrm{for\,\,some}\,\,j\in\{2,\ldots,k-1\}\,\,\mathrm{and}\\ \nu\in\{\pm(n_j+k-j+1),\pm(n_j+k-j+2),\ldots,\pm(n_{j-1}+k-j)\}:\\ \Gamma(\tau^{\sigma,\nu}):\,\,m_1\geq n_1\geq\cdots\geq m_{j-1}\geq n_{j-1}\,\,\mathrm{and}\\
\qquad\qquad|\nu|-k+j-1\geq m_j\geq n_j\geq\cdots\geq m_k\geq|n_k|,\\
\Gamma(\omega^{\sigma,\nu}):\,\,m_1\geq n_1\geq\cdots\geq m_{j-1}\geq n_{j-1}\geq m_j\geq|\nu|-k+j\,\,\mathrm{and}\\
\qquad\qquad n_j\geq m_{j+1}\geq\cdots\geq m_k\geq|n_k|.\\ \bullet\,\mathrm{If}\,\,\nu\in\{\pm(n_1+k),\pm(n_1+k+1),\ldots\}:\\
\Gamma(\tau^{\sigma,\nu}):\,\,|\nu|-k\geq m_1\geq n_1\geq\cdots\geq m_k\geq|n_k|,\\
\Gamma(\omega^{\sigma,\nu}):\,\,m_1\geq|\nu|-k+1\,\,\mathrm{and}\,\,n_1\geq m_2\geq n_2\geq\cdots\geq m_k\geq|n_k|. \end{array} $$ \indent Similarly to the preceeding case of even $n=2k$ we now write down the infinitesimal characters of reducible elementary representations $\pi^{\sigma,\nu}$ (and so of its irreducible subquotients $\tau^{\sigma,\nu}$ and $\omega^{\sigma,\nu}$ too). We know that the infinitesimal character of $\pi^{\sigma,\nu}$ is $\chi_{\Lambda(\sigma,\nu)},$ where $$ \Lambda(\sigma,\nu)=(n_1+k-1,n_2+k-2,\ldots,n_{k-1}+1,n_k,\nu). $$ Since $\nu\in\frac{1}{2}\mathbb{Z}\subset\mathbb{R}=\mathfrak{a}_0^*$ we have $\Lambda(\sigma,\nu)\in i\mathfrak{t}_0^*\oplus\mathfrak{a}_0^*=\mathbb{R}^{k+1}.$\\ \indent The root system of the pair $(\mathfrak{g},\mathfrak{h})$ is $$ \Delta=\{\pm\alpha_p\pm\alpha_q;\,\,1\leq p,q\leq k+1,\,\,p\not=q\}. $$ We choose positive roots: $$ \Delta^+=\{\alpha_p\pm\alpha_q;\,\,1\leq p<q\leq k+1\}. $$ The corresponding Weyl chamber in $\mathbb{R}^{k+1}$ is $$
D=\{\lambda\in\mathbb{R}^{k+1};\,\,\lambda_1>\lambda_2>\cdots>\lambda_k>|\lambda_{k+1}|>0\} $$ with the closure $$
\overline{D}=\{\lambda\in\mathbb{R}^{k+1};\,\,\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_k\geq|\lambda_{k+1}|\}. $$ The Weyl group $W$ of $\Delta$ consists of all permutations of coordinates in $\mathbb{C}^{k+1}=\mathfrak{h}^*$ combined with multiplying an even number of coordinates with $-1.$ By Harish$-$Chandra's theorem $\chi_{\lambda}=\chi_{\lambda^{\prime}}$ if and only if $\lambda,\lambda^{\prime}\in\mathfrak{h}^*$ are in the same $W-$orbit. As $\overline{D}$ is a fundamental domain for the action of $W$ on $\mathbb{R}^{k+1}=i\mathfrak{t}_0^*\oplus\mathfrak{a}_0^*,$ there exists a unique $\lambda(\sigma,\nu)\in\overline{D}$ such that $\chi_{\Lambda(\sigma,\nu)}=\chi_{\lambda(\sigma,\nu)}.$ We now write down $\lambda(\sigma,\nu)$ for all reducible elementary representations $\pi^{\sigma,\nu}.$ In the following for $\sigma=(n_1,\ldots,n_k)\in\hat{M}$ we write $-\sigma$ for its contragredient class in $\hat{M}:$ $-\sigma=(n_1,\ldots,n_{k-1},-n_k).$ Without loss of ge\-ne\-ra\-li\-ty we can suppose that $\nu\geq0$ because $\pi^{\sigma,\nu}$ and $\pi^{-\sigma,-\nu}$ have equivalent irreducible subquotients and because $\Lambda(\sigma,\nu)$ is $W-$conjugated with $\Lambda(-\sigma,-\nu):$ multiplying the last two coordinates by $-1.$\\
\indent $\bullet$ If $n_{k-1}>|n_k|$ and $\nu\in\{|n_k|+1,|n_k|+2,\ldots,n_{k-1}\}$ we have $n_{k-1}>\nu>|n_k|$ and so $$ \lambda(\sigma,\nu)=(n_1+k-1,n_2+k-2,\ldots,n_{k-1}+1,\nu,n_k). $$ \indent $\bullet$ If $2\leq j\leq k-1,$ $n_{j-1}>n_j$ and $\nu\in\{n_j+k-j+1,\ldots,n_{j-1}+k-j\}$ we have $n_{j-1}+k-j+1>\nu>n_j+k-j$ and so $$ \lambda(\sigma,\nu)=(n_1+k-1,\ldots,n_{j-1}+k-j+1,\,\nu\,,n_j+k-j,\ldots,n_{k-1}+1,n_k). $$ \indent $\bullet$ If $\nu\in\{n_1+k,n_1+k+1,\ldots\}$ we have $\nu>n_1+k-1$ and so $$ \lambda(\sigma,\nu)=(\nu,n_1+k-1,\ldots,n_{k-1}+1,n_k). $$ \indent Similarly to the preceeding case of even $n=2k,$ we see that now every reducible elementary representation has infinitesimal character $\chi_{\lambda}$ with $\lambda\in\Lambda,$\linebreak where $$ \begin{array}{c}
\Lambda=\left\{\lambda\in\mathbb{Z}^{k+1}\cup\left(\frac{1}{2}+\mathbb{Z}\right)^{k+1};\,\,\lambda_1>\lambda_2>\cdots>\lambda_k>|\lambda_{k+1}|\right\}. \end{array} $$ We again write $\Lambda$ as the disjoint union $\Lambda=\Lambda^*\cup\Lambda^0,$ where $$ \begin{array}{c}
\Lambda^*=\left\{\lambda\in\mathbb{Z}^{k+1}\cup\left(\frac{1}{2}+\mathbb{Z}\right)^{k+1};\,\,\lambda_1>\lambda_2>\cdots>\lambda_k>|\lambda_{k+1}|>0\right\},\\ \\ \Lambda^0=\{\lambda\in\mathbb{Z}_+^{k+1};\,\,\lambda_1>\lambda_2>\cdots>\lambda_k>0,\,\,\lambda_{k+1}=0\}. \end{array} $$
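\indent To illustrate these formulas, take $n=5,$ $k=2,$ $\sigma=(1,0)\in\hat{M}$ and $\nu=3=n_1+k.$ Then $\pi^{\sigma,\nu}$ is reducible, $\Lambda(\sigma,\nu)=(2,0,3),$ and the third of the above formulas gives $\lambda(\sigma,\nu)=(3,2,0)\in\Lambda^0.$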
\begin{tm} $(i)$ For every $\lambda\in\Lambda^*$ there exist $k+1$ ordered pairs $(\sigma,\nu),$ $\sigma=(n_1,\ldots,n_k)\in\hat{M},$ $\nu\geq0,$ such that $\chi_{\lambda}$ is the infinitesimal character of $\pi^{\sigma,\nu}.$ These are $(\sigma_j,\nu_j),$ where $\nu_j=\lambda_j$ for $1\leq j\leq k,$ $\nu_{k+1}=|\lambda_{k+1}|$ and $$ \begin{array}{l} \sigma_1=(\lambda_2-k+1,\lambda_3-k+2,\ldots,\lambda_k-1,\lambda_{k+1}),\\ \sigma_j=(\lambda_1-k+1,\ldots,\lambda_{j-1}-k+j-1,\lambda_{j+1}-k+j,\ldots,\lambda_k-1,\lambda_{k+1}),\\ \qqu2\leq j\leq k-1,\\ \sigma_k=(\lambda_1-k+1,\lambda_2-k+2,\ldots,\lambda_{k-1}-1,\lambda_{k+1}),\\ \sigma_{k+1}=\left\{\begin{array}{ll} \!\!(\lambda_1-k+1,\lambda_2-k+2,\ldots,\lambda_{k-1}-1,\lambda_k)&\,\,\mathrm{if}\,\,\lambda_{k+1}>0\\ \!\!(\lambda_1-k+1,\lambda_2-k+2,\ldots,\lambda_{k-1}-1,-\lambda_k)&\,\,\mathrm{if}\,\,\lambda_{k+1}<0 \end{array} \right. \end{array} $$ $\pi^{\sigma_j,\nu_j},$ $1\leq j\leq k,$ are reducible, while $\pi^{\sigma_{k+1},\nu_{k+1}}$ is irreducible.\\ \indent $(ii)$ For $\lambda\in\Lambda^0$ there exist $k+2$ ordered pairs $(\sigma,\nu),$ $\sigma=(n_1,\ldots,n_k)\in\hat{M},$ $\nu\geq0,$ such that $\chi_{\lambda}$ is the infinitesimal character of $\pi^{\sigma,\nu}.$ These are the $(\sigma_j,\nu_j),$ where $\nu_j=\lambda_j$ for $1\leq j\leq k,$ $\nu_{k+1}=\nu_{k+2}=0$ and $$ \begin{array}{l} \sigma_1=(\lambda_2-k+1,\lambda_3-k+2,\ldots,\lambda_k-1,0),\\ \sigma_j=(\lambda_1-k+1,\ldots,\lambda_{j-1}-k+j-1,\lambda_{j+1}-k+j,\ldots,\lambda_k-1,0),\\ \qqu2\leq j\leq k-1,\\ \sigma_k=(\lambda_1-k+1,\lambda_2-k+2,\ldots,\lambda_{k-1}-1,0),\\ \sigma_{k+1}=(\lambda_1-k+1,\lambda_2-k+2,\ldots,\lambda_{k-1}-1,\lambda_k),\\ \sigma_{k+2}=(\lambda_1-k+1,\lambda_2-k+2,\ldots,\lambda_{k-1}-1,-\lambda_k). \end{array} $$ $\pi^{\sigma_j,\nu_j},$ $1\leq j\leq k,$ are reducible, while $\pi^{\sigma_{k+1},\nu_{k+1}}$ and $\pi^{\sigma_{k+2},\nu_{k+2}}$ are irreducible. \end{tm}
{\bf Proof:} $(i)$ Fix $\lambda\in\Lambda^*$ and let $(\sigma,\nu),$ $\sigma=(n_1,\ldots,n_k)\in\hat{M},$ $\nu\geq0,$ be such that $\chi_{\lambda}$ is the infinitesimal character of $\pi^{\sigma,\nu}.$ Then $\Lambda(\sigma,\nu)$ and $\lambda$ are in the same $W-$orbit. Since $\nu\geq0$ we have necessarily $\nu=\lambda_j$ for some $j\leq k$ or $\nu=|\lambda_{k+1}|.$\\ \indent Suppose $\nu=\lambda_j$ for some $j\leq k.$ Since $W$ acts as permutations of coordinates combined with multiplying evene number of coordinates by $-1,$ the inequalities $$
n_1+k-1>n_2+k-2>\cdots>n_{k-1}+1>|n_k| $$ and $$
\lambda_1>\lambda_2>\cdots>\lambda_{j-1}>\lambda_{j+1}>\cdots>\lambda_k>|\lambda_{k+1}|>0 $$ imply $$ n_1+k-1=\lambda_1,\ldots,n_{j-1}+k-j+1=\lambda_{j-1}, $$ $$ n_j+k-j=\lambda_{j+1},\ldots,n_{k-1}+1=\lambda_k,\,\,n_k=\lambda_{k+1}. $$ The following possibilities follow: $$ \begin{array}{ll} \nu_1=\lambda_1,&\!\!\sigma_1=(\lambda_2-k+1,\lambda_3-k+2,\ldots,\lambda_k-1,\lambda_{k+1}),\\ &\\ \nu_j=\lambda_j,&\!\!\sigma_j=(\lambda_1-k+1,\ldots,\lambda_{j-1}-k+j-1,\lambda_{j+1}-k+j,\ldots,\lambda_k-1,\lambda_{k+1}),\\ &\qqu2\leq j\leq k-1,\\ &\\ \nu_k=\lambda_k,&\!\!\sigma_k=(\lambda_1-k+1,\ldots,\lambda_{k-1}-1,\lambda_{k+1}). \end{array} $$ One easily checks that so defined $\sigma_1,\ldots,\sigma_k$ are really in $\hat{M}$ and that $\pi^{\sigma_j,\nu_j}$ are reducible.\\ \indent Suppose now that $\lambda_{k+1}>0$ and $\nu=\lambda_{k+1}.$ Then it follows that necessarily $$ n_1+k-1=\lambda_1,n_2+k-2=\lambda_2,\ldots,n_{k-1}+1=\lambda_{k-1},n_k=\lambda_k, $$ i.e. $$ n_1=\lambda_1-k+1,n_2=\lambda_2-k+2,\ldots,n_{k-1}=\lambda_{k-1}-1,n_k=\lambda_k. $$ On the other hand, if $\lambda_{k+1}<0,$ hence $\nu=-\lambda_{k+1},$ we see that in $W-$action which $\Lambda(\sigma,\nu)$ transforms into $\lambda$ there should be one more change of sign and so necessarily $n_k=-\lambda_k.$ Thus we have $$ n_1=\lambda_1-k+1,n_2=\lambda_2-k+2,\ldots,n_{k-1}=\lambda_{k-1}-1,n_k=-\lambda_k. $$ One checks that so defined $$ \sigma_{k+1}=(n_1,\ldots,n_k)=(\lambda_1-k+1,\ldots,\lambda_{k-1}-1,\pm\lambda_k) $$
is really in $\hat{M}.$ Further, we have $|n_k|-\nu_{k+1}=\lambda_k-|\lambda_{k+1}|\in\mathbb{N}.$ Thus, either $\nu_{k+1}\in\{0,1,\ldots,|n_k|-1\}$ or $\nu_{k+1}\in\{\frac{1}{2},\frac{3}{2},\ldots,|n_k|-1\}.$ Therefore, the elementary representation $\pi^{\sigma_{k+1},\nu_{k+1}}$ is irreducible.\\
\indent $(ii)$ Let $\lambda\in\Lambda^0$ and let $(\sigma,\nu),$ $\sigma=(n_1,\ldots,n_k)\in\hat{M},$ $\nu\geq0,$ be such that $\chi_{\lambda}$ is the infinitesimal character of $\pi^{\sigma,\nu}.$ As in the proof of $(i)$ we find that necessarily $\nu=\lambda_j$ for some $j.$ The rest of the proof for $j\leq k$ is completely the same as in $(i).$ So we are left with the case $\nu=\lambda_{k+1}=0.$ As in $(i)$ because of the inequalities $n_1+k-1>n_2+k-2>\cdots>n_{k-1}+1>|n_k|$ and $\lambda_1>\lambda_2>\cdots>\lambda_k>0$ we get two possibilities for $\sigma,$ $\sigma=\sigma_{k+1}$ and $\sigma_{k+2}$ from the statement $(ii).$ Finally, as in the proof of $(i)$ we check that $\sigma_{k+1},\sigma_{k+2}\in\hat{M}$ and that the representations $\pi^{\sigma_{k+1},0}$ and $\pi^{\sigma_{k+2},0}$ are irreducible.\\
\indent We note that in fact the representations $\pi^{\sigma_{k+1},0}$ and $\pi^{\sigma_{k+2},0}$ are equivalent, but this is unimportant for studying and parametrizing $\wideparen{G}^0$ and $\hat{G}^0.$\\
\indent Fix $\lambda\in\Lambda.$ By Theorem 3. there exist $k$ ordered pairs $(\sigma,\nu),$ $\sigma\in\hat{M},$ $\nu\geq0,$ with reducible $\pi^{\sigma,\nu}$ having $\chi_{\lambda}$ as the infinitesimal character. There are $k+1$ mutually inequivalent irreducible subquotients of these elementary representations; we denote them $\tau_1^{\lambda},\ldots,\tau_k^{\lambda},\omega^{\lambda}:$ $$ \begin{array}{l} \tau_1^{\lambda}=\tau^{\sigma_1,\nu_1},\\ \\ \tau_2^{\lambda}=\omega^{\sigma_1,\nu_1}\cong\tau^{\sigma_2,\nu_2},\\ \\ \vdots\\ \\ \tau_j^{\lambda}=\omega^{\sigma_{j-1},\nu_{j-1}}\cong\tau^{\sigma_j,\nu_j},\\ \\ \vdots\\ \\ \tau_k^{\lambda}=\omega^{\sigma_{k-1},\nu_{k-1}}\cong\tau^{\sigma_k,\nu_k},\\ \\ \omega^{\lambda}=\omega^{\sigma_k,\nu_k}. \end{array} $$ Their $K-$spectra consist of all $q=(m_1,\ldots,m_k)\in\hat{K}\cap(n_1+\mathbb{Z})^k$ satisfying: $$ \begin{array}{ll}
\Gamma(\tau_1^{\lambda}):&\lambda_1-k\geq m_1\geq\lambda_2-k+1\geq m_2\geq\cdots\geq\lambda_k-1\geq m_k\geq|\lambda_{k+1}|.\\ &\\ \Gamma(\tau_j^{\lambda}):&m_1\geq\lambda_1-k+1\geq\cdots\geq m_{j-1}\geq\lambda_{j-1}-k+j-1\,\,\mathrm{and}\\
&\lambda_j-k+j-1\geq m_j\geq\cdots\geq\lambda_k-1\geq m_k\geq|\lambda_{k+1}|\,\,\mathrm{for}\,\,2\leq j\leq k.\\ &\\ \Gamma(\omega^{\lambda}):&m_1\geq\lambda_1-k+1\geq\cdots\geq m_{k-1}\geq\lambda_{k-1}-1\geq m_k\geq\lambda_k. \end{array} $$
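\indent For example, for $k=2$ and $\lambda=(3,2,0)\in\Lambda^0$ one has $\sigma_1=(1,0),$ $\nu_1=3,$ $\sigma_2=(2,0),$ $\nu_2=2,$ and the above description gives $$ \Gamma(\tau_1^{\lambda})=\{(1,0),(1,1)\},\quad\Gamma(\tau_2^{\lambda})=\{(m_1,m_2);\,\,m_1\geq2,\,\,1\geq m_2\geq0\},\quad\Gamma(\omega^{\lambda})=\{(m_1,2);\,\,m_1\geq2\}. $$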
\indent The definitions of corners and fundamental corners do not make sense when $\mr{rank}\,\mathfrak{k}<\mr{rank}\,\mathfrak{g}.$ Consider Vogan's minimal $K-$types. Note that $$
\|q+2\rho_K\|^2=(m_1+2k-1)^2+(m_2+2k-3)^2+\cdots+(m_k+1)^2, $$ so every $\pi\in\wideparen{G}^0$ has a unique minimal $K-$type that will be denoted by $q^V(\pi):$ this is the element $(m_1,\ldots,m_k)\in\Gamma(\pi)$ whose every coordinate $m_j$ is the smallest possible. \begin{tm} The map $\pi\mapsto q^V(\pi)$ is a surjection of $\wideparen{G}^0$ onto $\hat{K}.$ More precisely, for $q=(m_1,\ldots,m_k)\in\hat{K}:$\\ \indent $(a)$ There exist infinitely many $\lambda$'s in $\Lambda$ such that $q^V(\tau_1^{\lambda})=q.$\\ \indent $(b)$ Let $j\in\{2,\ldots,k\}.$ The number of mutually different $\lambda$'s in $\Lambda$ such that $q^V(\tau_j^{\lambda})=q$ is equal to: $$ \begin{array}{cl} 0&\quad\mathrm{if}\,\,\,m_{j-1}=m_j,\\ m_{j-1}-m_j&\quad\mathrm{if}\,\,\,m_{j-1}>m_j\,\,\,\mathrm{and}\,\,\,m_k=0,\\ 2(m_{j-1}-m_j)&\quad\mathrm{if}\,\,\,m_{j-1}>m_j\,\,\,\mathrm{and}\,\,\,m_k>0. \end{array} $$ \indent $(c)$ The number of $\lambda$'s in $\Lambda$ such that $q^V(\omega^{\lambda})=q$ is equal to: $$ \begin{array}{cl} 0&\quad\mathrm{if}\,\,\,m_k=0\,\,\,\mathrm{or}\,\,\,m_k=\frac{1}{2},\\ 1&\quad\mathrm{if}\,\,\,m_k=1,\\ 2\left[m_k-\frac{1}{2}\right]&\quad\mathrm{if}\,\,\,m_k>1. \end{array} $$ Here we use the usual notation for $p\in\mathbb{R}:$ $[p]=\max\,\{j\in\mathbb{Z};\,\,j\leq p\}.$ \end{tm}
{\bf Proof:} $(a)$ These are all $\lambda\in\Lambda$ such that $$ \lambda_1\in(m_1+k+\mathbb{Z}_+),\,\,\,\lambda_j=m_{j-1}+k-j+1\,\,\,2\leq j\leq k,\,\,\,\lambda_{k+1}=\pm m_k. $$ \indent $(b)$ These are all $\lambda\in\Lambda$ such that $$ \begin{array}{ll} \lambda_s=m_s+k-s,&\,\,1\leq s\leq j-1,\\ \lambda_{j-1}>\lambda_j>\lambda_{j+1},&\\ \lambda_s=m_{s-1}+k-s+1,&\,\,j+1\leq s\leq k,\\ \lambda_{k+1}=\pm m_k.& \end{array} $$ \indent$(c)$ These are all $\lambda\in\Lambda$ such that $$ \begin{array}{l}
\lambda_s=m_s+k-s,\,\,\,1\leq s\leq k,\,\,\,|\lambda_{k+1}|<m_k. \end{array} $$ The number of such $\lambda$'s is $0$ if $m_k=0$ or $m_k=\frac{1}{2},$ exactly $1$ if $m_k=1$ $(\lambda_{k+1}=0),$ and twice the number of natural numbers $<m_k$ if $m_k\geq\frac{3}{2}.$\\
\indent We now parametrize $\hat{G}^0.$ A class in $\wideparen{G}^0$ is unitary if and only if it is an irreducible subquotient of an end of complementary series. For $\sigma\in\hat{M}$ the complementary series is nonempty if and only if $\sigma$ is selfcontragredient, i.e. equivalent to its contragredient. The contragredient representation of\linebreak $\sigma=(n_1,\ldots,n_{k-1},n_k)$ is $-\sigma=(n_1,\ldots,n_{k-1},-n_k).$ Thus, $\sigma$ is selfcontragredient if and only if $n_k=0.$ In this case we set $$ \nu(\sigma)=\min\,\{\nu\geq0;\,\,\pi^{\sigma,\nu}\,\,\mathrm{is}\,\,\mathrm{reducible}\}. $$ From the necessary and sufficient conditions for reducibility of elementary representations we find that for $\sigma=(n_1,\ldots,n_{k-1},0)\in\hat{M}:$\\ \indent $\bullet$ If $n_1=\cdots=n_{k-1}=0,$ i.e. if $\sigma=\sigma_0=(0,\ldots,0)$ is the trivial one-dimensional representation of $M,$ then $$ \nu(\sigma_0)=k. $$ In this case $$ \Gamma(\tau^{\sigma_0,k})=\{(0,\ldots,0)\}\quad\mathrm{and}\quad\Gamma(\omega^{\sigma_0,k})=\{(s,0,\ldots,0);\,\,s\in\mathbb{N}\} $$ and so $q^V(\tau^{\sigma_0,k})=(0,\ldots,0)$ and $q^V(\omega^{\sigma_0,k})=(1,0,\ldots,0).$\\ \indent $\bullet$ If $n_1>0,$ let $j\in\{2,\ldots,k\}$ be the smallest index such that $n_{j-1}>0.$ Then $$ \nu(\sigma)=k-j+1. $$ The $K-$spectra of irreducible subquotients of $\pi^{\sigma,k-j+1}$ consist of all $(m_1,\ldots,m_k)$ in $\hat{K}\cap\mathbb{Z}_+^k$ such that $$ \begin{array}{ll} \Gamma(\tau^{\sigma,k-j+1}):&\,\,m_1\geq n_1\geq\cdots\geq m_{j-1}\geq n_{j-1}\quad\mathrm{and}\quad m_s=0\,\,\forall s\geq j,\\ &\\ \Gamma(\omega^{\sigma,k-j+1}):&\,\,m_1\geq n_1\geq\cdots\geq m_{j-1}\geq n_{j-1}\geq m_j\geq1\quad\mathrm{and}\quad m_s=0\,\,\forall s>j. \end{array} $$ So we have $$ q^V(\tau^{\sigma,k-j+1})=(n_1,\ldots,n_{j-1},0,\ldots,0),\,\,\,q^V(\omega^{\sigma,k-j+1})=(n_1,\ldots,n_{j-1},1,0,\ldots,0). $$ Thus \begin{tm} The map $\pi\mapsto q^V(\pi)$ is a bijection of $\hat{G}^0$ onto $$ \hat{K}_0=\{q=(m_1,\ldots,m_k)\in\hat{K};\,\,m_k=0\}. $$ \end{tm}
\end{document}
\begin{document}
\title{Multistage Entanglement Swapping} \author{Alexander M. Goebel$^1$} \author{Claudia Wagenknecht$^1$} \author{Qiang Zhang$^{2}$} \author{Yu-Ao Chen$^{1,2}$} \author{Kai Chen$^{2}$}\email{[email protected]} \author{J\"{o}rg Schmiedmayer$^{3}$} \author{Jian-Wei Pan$^{1,2}$}
\affiliation{$^1$Physikalisches Institut, Ruprecht-Karls-Universit\"{a}t Heidelberg, Philosophenweg 12, 69120 Heidelberg, Germany\\ $^2$Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China\\ $^3$Atominstitut der \"{o}sterreichischen Universit\"{a}ten, TU-Wien, A-1020 Vienna, Austria} \pacs{03.67.Bg, 03.67.Mn, 42.50.Dv, 42.50.Xa}
\begin{abstract} We report an experimental demonstration of entanglement swapping over two quantum stages. By successful realizations of two cascaded photonic entanglement swapping processes, entanglement is generated and distributed between two photons that originate from independent sources and do not share any common past. In the experiment we use three pairs of polarization entangled photons and conduct two Bell-state measurements (BSMs), one between the first and second pair, and one between the second and third pair. This results in projecting the two remaining outgoing photons, from pairs 1 and 3, onto an entangled state, as characterized by an entanglement witness. The experiment represents an important step towards a full quantum repeater, where multiple entanglement swapping is a key ingredient. \end{abstract}
\date{May 30, 2008} \maketitle
Entanglement swapping is arguably one of the most important ingredients for quantum repeaters and quantum relays, which lie at the heart of quantum communication \cite{Zukowski1993,qrepeater,qrelay,QOreview}. For photonic quantum communication, the distance is largely limited by decoherence from coupling to the environment and an increasing loss of photons in a quantum channel. This leads to an exponential decay in the fidelity of quantum information. This drawback can eventually be overcome by subdividing larger distances into smaller sections over which entanglement or quantum states can be distributed. The sections are then bridged by entanglement swapping processes \cite{qrepeater,qrelay}. The swapping procedure therefore constitutes one of the key elements for a quantum relay \cite{qrelay}, and for a full quantum repeater \cite{qrepeater} if combined with quantum purification \cite{BDSW1996,purificationexperiments} and quantum memory \cite{memory}. As a result, quantum communication becomes feasible despite realistic noise and imperfections. At the same time, the overhead in resources and communication time increases only polynomially with the distance \cite{qrepeater,qrelay,QOreview}.
Experimentally, photonic entanglement swapping has so far been successfully achieved for discrete variables \cite{discreteswapping,Gisin2007} and for continuous variables \cite{CVswapping}, in both cases via a single-stage process. However, only after successful multiple swapping will we be able to have a fully functional quantum repeater. There are additional advantages to utilizing a multiple swapping process. For a quantum relay with many segments, it is equivalent to significantly lowering the dark-count rate, which is a substantial factor limiting the transmission distance of successful quantum communication \cite{qrelay}. For quantum information carriers possessing mass, multiple swapping processes can speed up the distribution of entanglement by a factor that is proportional to the number of segments used \cite{multiparticleswapping}. Moreover, multistage entanglement swapping can improve the protection of quantum states against noise from amplitude errors \cite{multiparticleswapping}.
We report in this letter an experimental demonstration of multiple entanglement swapping over two stages. This is achieved by utilizing three synchronous, spatially independent pairs of polarization entangled photons, and performing BSMs among the three segments between the two communication parties. Two successful BSMs yield a final maximally entangled pair distributed between the two parties. To quantitatively evaluate the performance, we have characterized the quality of the output state with an entanglement witness, which confirms genuine entanglement generation. Our experiment implements an entanglement distribution over two distant stations which are initially independent of each other and have never physically interacted in the past. This proof-of-principle demonstration constitutes an important step towards robust long-distance quantum relays, quantum repeaters and related quantum protocols based on multiple entanglement swapping.
The principle for multistage entanglement swapping is sketched in Fig. \ref{sketchofprinciple}. Consider three independent stations, each simultaneously emitting a pair of Einstein-Podolsky-Rosen (EPR) maximally entangled photons. In our experiments, we generate these states through the process of spontaneous parametric down-conversion \cite{SPDCII}. By post-selecting events with only one photon in each output arm, we obtain polarization entangled photons in the state \begin{equation}
|\Psi\rangle_{123456} =
|\Psi^{-}\rangle_{12}\times|\Psi^{-}\rangle_{34}\times|\Psi^{-}\rangle_{56}, \label{groundstate} \end{equation}
where $|\Psi^{-}\rangle_{ij}$ is one of the four maximally entangled Bell states, which form a complete orthonormal basis for the joint state of two entangled photons \begin{eqnarray*}
|\Psi^{\pm}\rangle_{ij} & = &
\tfrac{1}{\sqrt{2}}(|H\rangle_{i}|V\rangle_{j}\pm|V\rangle_{i}|H\rangle_{j}) \nonumber\\
|\Phi^{\pm}\rangle_{ij} & = &
\tfrac{1}{\sqrt{2}}(|H\rangle_{i}|H\rangle_{j}\pm|V\rangle_{i}|V\rangle_{j}). \label{Bellstates} \end{eqnarray*}
Here $|H\rangle$ ($|V\rangle$) denotes the state of a horizontally (vertically) polarized photon. Note that photon pairs 1-2, 3-4 and 5-6 are entangled in an antisymmetric polarization state. The states of the three pairs are factorizable from each other, namely, there is no entanglement among photons from different pairs. \begin{figure}
\caption{Principle of multistage entanglement swapping: three EPR sources produce pairs of entangled photons 1-2, 3-4 and 5-6. Photon 2 from the initial pair and photon 3 from the first ancillary pair are subjected to a joint BSM, and so are photon 4 from the first ancillary and photon 5 from the second ancillary pair. The two BSMs project outgoing photons 1 and 6 onto an entangled state. Thus the entanglement of the initial pair is swapped to an entanglement between photons 1 and 6.}
\label{sketchofprinciple}
\end{figure}
As a first step we perform a joint BSM on photons 2 and 3, that is, photons 2 and 3 are projected onto one of the four Bell states. This measurement also projects photons 1 and 4 onto a Bell state, in a form depending on the result of the BSM of photons 2 and 3. Close inspection shows that for the initial state given in Eq.~(\ref{groundstate}), the emerging state of photons 1 and 4 is identical to the one that photons 2 and 3 collapse into. This is a consequence of the fact that the state of Eq.~(\ref{groundstate}) can be rewritten as \begin{eqnarray}
|\Psi\rangle_{123456} & = &\tfrac{1}{2}\,
[|\Psi^{+}\rangle_{14}|\Psi^{+}\rangle_{23}
-|\Psi^{-}\rangle_{14}|\Psi^{-}\rangle_{23}\nonumber\\
&&-|\Phi^{+}\rangle_{14}|\Phi^{+}\rangle_{23}
+|\Phi^{-}\rangle_{14}|\Phi^{-}\rangle_{23}]\nonumber\\
&&\times|\Psi^{-}\rangle_{56} \label{totalstate} \end{eqnarray} In all cases photons 1 and 4 emerge entangled despite the fact that they never interacted with one another in the past. The joint measurement of photons 2 and 3 determines the type of entanglement between photons 1 and 4.
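As a quick sanity check of this rewriting (an illustration added here, not part of the original analysis; the helper names are ad hoc), the identity underlying Eq.~(\ref{totalstate}) for photons 1--4 can be verified numerically with a few lines of NumPy:
\begin{verbatim}
import numpy as np

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

def pair(a, b):
    # two-photon product state |a>|b> as a 2x2 tensor
    return np.einsum('i,j->ij', a, b)

# the four polarization Bell states
psi_p = (pair(H, V) + pair(V, H)) / np.sqrt(2)
psi_m = (pair(H, V) - pair(V, H)) / np.sqrt(2)
phi_p = (pair(H, H) + pair(V, V)) / np.sqrt(2)
phi_m = (pair(H, H) - pair(V, V)) / np.sqrt(2)

# |Psi^-_{12}> |Psi^-_{34}>, tensor axes ordered as photons 1,2,3,4
lhs = np.einsum('ab,cd->abcd', psi_m, psi_m)

def on_14_and_23(b14, b23):
    # product state with b14 on photons (1,4) and b23 on photons (2,3)
    return np.einsum('ad,bc->abcd', b14, b23)

# Bell-basis expansion of the same state in terms of pairs (1,4) and (2,3)
rhs = 0.5 * (on_14_and_23(psi_p, psi_p) - on_14_and_23(psi_m, psi_m)
             - on_14_and_23(phi_p, phi_p) + on_14_and_23(phi_m, phi_m))

print(np.allclose(lhs, rhs))  # True
\end{verbatim}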
Without loss of generality, we assume in the first step that photons 2 and 3 have collapsed into the state
$|\Phi^{+}\rangle_{23}$ as a result of the first BSM. The remaining four-photon state is then of the form \begin{eqnarray}
|\Psi\rangle_{1456} & = &\tfrac{1}{2}\,
[|\Psi^{+}\rangle_{16}|\Phi^{-}\rangle_{45}
+|\Psi^{-}\rangle_{16}|\Phi^{+}\rangle_{45}\nonumber\\
&&-|\Phi^{+}\rangle_{16}|\Psi^{-}\rangle_{45}
-|\Phi^{-}\rangle_{16}|\Psi^{+}\rangle_{45}] \label{state1456} \end{eqnarray}
In a similar manner we perform a second BSM on photons 4 and 5. Again a detection of the state
$|\Phi^{+}\rangle_{45}$ results in projecting the remaining photons 1 and 6 onto the Bell state \begin{equation}
|\Psi^-\rangle_{16}=\tfrac{1}{\sqrt{2}}(|H\rangle_{1}|V\rangle_{6}-
|V\rangle_{1}|H\rangle_{6}) \label{finalstate16} \end{equation}
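The full two-stage protocol can be checked in the same spirit (again only an illustrative sketch with ad hoc names, not the analysis code of the experiment): building the six-photon state of Eq.~(\ref{groundstate}) and projecting photons 2,3 and then 4,5 onto $|\Phi^{+}\rangle$ leaves photons 1 and 6 in $|\Psi^{-}\rangle$, with probability $1/16$ for this particular pair of outcomes.
\begin{verbatim}
import numpy as np

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
pair = lambda a, b: np.einsum('i,j->ij', a, b)

psi_m = (pair(H, V) - pair(V, H)) / np.sqrt(2)   # |Psi^->
phi_p = (pair(H, H) + pair(V, V)) / np.sqrt(2)   # |Phi^+>

# initial state |Psi^-_{12}>|Psi^-_{34}>|Psi^-_{56}>, axes = photons 1..6
state = np.einsum('ab,cd,ef->abcdef', psi_m, psi_m, psi_m)

# first BSM: project photons 2 and 3 onto |Phi^+>
step1 = np.einsum('bc,abcdef->adef', phi_p.conj(), state)
# second BSM: project photons 4 and 5 onto |Phi^+>
step2 = np.einsum('de,adef->af', phi_p.conj(), step1)

prob = np.vdot(step2, step2).real      # probability of both |Phi^+> outcomes
out = step2 / np.sqrt(prob)            # conditional state of photons 1 and 6
overlap = abs(np.einsum('af,af->', psi_m.conj(), out))

print(prob)     # 0.0625 = 1/16
print(overlap)  # 1.0: photons 1 and 6 are left in |Psi^->
\end{verbatim}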
\begin{figure}\label{setupswap}
\end{figure} A schematic diagram of our setup for multistage entanglement swapping is illustrated in Fig.~\ref{setupswap}. We use a pulsed high-intensity ultraviolet (UV) laser with a central wavelength of 390nm, a pulse duration of around 180 fs and a repetition rate of 76 MHz. The beam successively passes through two $\beta$-Barium-Borate (BBO) crystals, and is reflected to pass again through the second BBO to generate three polarization entangled photon pairs via type-II parametric down conversion \cite{SPDCII}.
Due to the high average power of 1W UV-light and improvements in collection efficiency and stability of the photon sources \cite{Zhang}, we are able to observe up to $10^5$ photon pairs per second from each source. With this brightness of the entangled photon sources we could obtain around 4.5 six-photon events per minute in our setup.
For the joint BSM of photons 2 and 3 (photons 4 and 5), we choose to analyze the case of detecting the projection onto a
$|\Phi^{+}\rangle$ state. Using a polarizing beam splitter (PBS) allows the projection of photons 2 and 3 (4 and 5) onto the state
$|\Phi^{+}\rangle$ upon detecting a $|+\rangle|+\rangle$ or
$|-\rangle|-\rangle$ coincidence at detectors D2 and D3 (D4 and D5)
(with $|\pm\rangle=(|H\rangle\pm|V\rangle)/\sqrt{2}$). In our experiment only the $|+\rangle|+\rangle$ coincidences were registered, which reduces the overall success probability to 1/64. This could be improved by installing a half wave plate (HWP) at $22.5^{\circ}$, which corresponds to a polarization rotation of $45^{\circ}$, and a PBS after each output arm of PBS23 (PBS45). This configuration would also allow one to detect the state
$|\Phi^{-}\rangle$, which results in a $|+\rangle|-\rangle$ or
$|-\rangle|+\rangle$ coincidence \cite{Pan1998}. Thus, an overall success probability of 1/4 could be achieved in the ideal case.
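For the reader's convenience we spell out the bookkeeping behind these factors (an added remark, assuming that the four Bell states occur with equal probability $1/4$ in each BSM): registering only the $|+\rangle|+\rangle$ half of the $|\Phi^{+}\rangle$ events succeeds with probability $\tfrac{1}{4}\cdot\tfrac{1}{2}=\tfrac{1}{8}$ per BSM, while detecting both $|\Phi^{+}\rangle$ and $|\Phi^{-}\rangle$ completely succeeds with probability $\tfrac{1}{4}+\tfrac{1}{4}=\tfrac{1}{2}$ per BSM, so that
\begin{equation*}
\Big(\tfrac{1}{4}\cdot\tfrac{1}{2}\Big)^{2}=\tfrac{1}{64}
\qquad\text{and}\qquad
\Big(\tfrac{1}{4}+\tfrac{1}{4}\Big)^{2}=\tfrac{1}{4}
\end{equation*}
for the two cascaded BSMs.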
As shown in Eqs.~(\ref{totalstate}), (\ref{state1456}) and (\ref{finalstate16}), the projection measurements onto $|\Phi^{+}_{23}\rangle$ and
$|\Phi^{+}_{45}\rangle$ leave photons 1 and 6 in the maximally entangled state $|\Psi^{-}_{16}\rangle$. In contrast to quantum state tomography, the measurement of witness operators does not provide a complete reconstruction of the quantum state; it does, however, allow one to check for entanglement with a minimal number of local measurements. To verify that the two photons are really in an entangled state, and thus that the swapping operation is successful, the expectation value of the corresponding witness operator \cite{Guehne2002,Barbieri2003} is expected to take a negative value. In our case, the applied witness operator $W$ is the most efficient one, since it involves only the minimal number of local measurements \cite{Guehne2002}. It can be measured locally by choosing correlated measurement settings that involve only the simultaneous detection of linear, diagonal and circular polarizations for both photons. We have performed local measurements on the outgoing state of photons 1 and 6 in the three complementary bases: linear (H/V), diagonal (+/-) and circular (R/L) (with
$|L\rangle=(|H\rangle+i|V\rangle)/\sqrt{2}$ and
$|R\rangle=(|H\rangle-i|V\rangle)/\sqrt{2}$).
The entanglement witness is given by \begin{eqnarray}
W& = & \tfrac{1}{2}\ (|HH\rangle\langle HH|
+|VV\rangle\langle VV| + |++\rangle\langle ++| \nonumber\\
&& + |--\rangle\langle --| - |RL\rangle\langle RL|
-|LR\rangle\langle LR|). \label{witnessform} \end{eqnarray} \begin{figure}
\caption{Experimental expectation values for every correlation function of the entanglement witness for the swapped state. The results are derived by twofold coincidence measurements along three complementary common bases: a) $|H\rangle|V\rangle$; b)
$|+\rangle|-\rangle$; and c) $|R\rangle|L\rangle$, conditioned on a fourfold coincidence event in $|++++\rangle$ for detectors D2-D3-D4-D5 which ensures two successful Bell state measurements.}
\label{witness}
\end{figure} In the experiment, we perform measurements for each correlation function of the witness. The expectation values are shown in Fig.~\ref{witness}. The experimental integration time for each local measurement was about 60 hours, and we recorded about 180 events of the desired two-qubit coincidences. Every expectation value of a correlation function is obtained by making a von Neumann measurement along a specific basis and computing the probability over all possible events. For example, for an HH correlation
$\text{Tr} (\rho |HH\rangle\langle HH|)$, we perform measurements along the H/V basis. Its value is then given by the number of HH coincidence counts divided by the sum of all coincidence counts of HH, HV, VH and VV. We proceed likewise for the other correlation settings. The witness can then be evaluated directly, giving $\text{Tr}(\rho W)=-0.16 \pm 0.03$. The negativity of the measured witness clearly implies that entanglement has indeed been swapped. The imperfection of our data is due to the non-ideal quality of the entangled states generated from the high-power UV beam, as well as the partial distinguishability of independent photons at PBS23 and PBS45, which leads to imperfect interference and a degradation of the output entanglement quality \cite{degrading}. Moreover, double-pair emission by a single source causes noise on the order of 10 spurious six-fold coincidences in 60 hours, which was not subtracted in calculating the expectation value of the witness operator.
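For orientation (an added numerical aside, not part of the data analysis), the witness of Eq.~(\ref{witnessform}) evaluates to $\mathrm{Tr}(\rho W)=-0.5$ on a perfect $|\Psi^{-}\rangle$ and to $+0.25$ on the completely mixed state; the sketch below, with ad hoc names, also shows how admixing white noise interpolates between these values.
\begin{verbatim}
import numpy as np

H = np.array([1.0, 0.0]); V = np.array([0.0, 1.0])
plus, minus = (H + V) / np.sqrt(2), (H - V) / np.sqrt(2)
R, L = (H - 1j * V) / np.sqrt(2), (H + 1j * V) / np.sqrt(2)

proj = lambda a, b: np.outer(np.kron(a, b), np.kron(a, b).conj())  # |ab><ab|

# the entanglement witness W defined above
W = 0.5 * (proj(H, H) + proj(V, V) + proj(plus, plus) + proj(minus, minus)
           - proj(R, L) - proj(L, R))

psi_m = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)
rho_ideal = np.outer(psi_m, psi_m.conj())
print(np.trace(rho_ideal @ W).real)    # -0.5 for a perfect |Psi^->

p = 0.5                                # fraction of |Psi^-> in a white-noise mixture
rho_noisy = p * rho_ideal + (1 - p) * np.eye(4) / 4
print(np.trace(rho_noisy @ W).real)    # 0.25 - 0.75*p, i.e. -0.125 here
\end{verbatim}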
To ensure that there is no entanglement between photons 1 and 6 before either of the entanglement swapping processes, we have performed a complete quantum state tomography. The experimental expectation values for various bases are illustrated in Fig.~\ref{tomography}. Concurrence \cite{concurrence} is a monotonic function of entanglement, ranging from 0 for a separable state to 1 for a maximally entangled state. In terms of concurrence, we can thus quantify the degree of entanglement through a density matrix $\rho_{init}$ for the initial combined state, reconstructed from the data shown in Fig.~\ref{tomography}. The concurrence $C_{init}$ derived from $\rho_{init}$ is $C_{init}=\max(0,-0.39 \pm 0.01)=0$. As expected, the concurrence is indeed 0; therefore photons 1 and 6 did not reveal any entanglement whatsoever before the swapping. Ideally, for a completely mixed state the expectation values for all local measurements should be 0, except for the unity operator, which should be 1. The contributions of the measurement settings other than the unity operator are mainly due to noise caused by scattered light of the UV beam at the BBO crystal. For convenience of comparison, we also performed the same witness measurement of Eq.~(\ref{witnessform}), which gives $\langle W \rangle=0.28\pm 0.01$, safely above the bound $\langle W \rangle < 0$ needed to reveal entanglement. However, after the two-stage entanglement swapping, entanglement arises, as unambiguously confirmed by the negative expectation value of the witness, $\langle W \rangle =-0.16 \pm 0.03$, discussed above. \begin{figure}
\caption{Complete quantum state tomography on photons 1 and 6 before entanglement swapping. Label X corresponds to the measurement setting $\sigma_x$, while Y and Z are for $\sigma_y$ and $\sigma_z$, respectively. The result shows that the photons did not reveal any entanglement whatsoever before the swapping operation.}
\label{tomography}
\end{figure}
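Since concurrence is only quoted above, we add a brief sketch of how it can be computed from a reconstructed two-qubit density matrix via Wootters' formula (an illustration with ad hoc names, not the code used for Fig.~\ref{tomography}):
\begin{verbatim}
import numpy as np

def concurrence(rho):
    # Wootters' concurrence of a two-qubit density matrix rho
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

psi_m = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # |Psi^-> in the H/V basis
print(concurrence(np.outer(psi_m, psi_m.conj())))     # 1.0 (maximally entangled)
print(concurrence(np.eye(4) / 4))                     # 0.0 (completely mixed)
\end{verbatim}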
In conclusion, we have for the first time provided a proof-of-principle demonstration of a two-stage entanglement swapping using photonic qubits. The feasibility and effectiveness of this process have been verified by a successful distribution of genuine entanglement after two simultaneous, independent swapping processes. This result opens the possibility of immediate near-future applications in various practical quantum information processing tasks. If combined with narrow-band entanglement sources, the implementation of quantum relays (without quantum memory) and quantum repeaters (with quantum memory) would come within current reach \cite{qrepeater,memory,Gisin2007}, as would quantum state transfer and quantum cryptography networks operating more efficiently and over much larger distances of around hundreds of kilometers \cite{qrelay}. Our demonstration also allows for the possibility of utilizing multi-party, multistage entanglement swapping to achieve global quantum communication networks, though with significant challenges ahead \cite{multiparticleswapping}.
This work was supported by the Marie Curie Excellence Grant from the EU, the Alexander von Humboldt Foundation, the National Fundamental Research Program of China under Grant No.2006CB921900, the CAS, and the NNSFC. K.C acknowledges support of the Bairen program of CAS. C.W. was additionally supported by the Schlieben-Lange Program of the ESF.
\end{document}
\begin{document}
\begin{abstract} We obtain results on the condensation principle called local club condensation. We prove that in extender models an equivalence holds between the failure of local club condensation and the existence of subcompact cardinals. This gives a characterization of $\square_{\kappa}$ in terms of local club condensation in extender models. Assuming $\axiomfont{GCH}$, given an interval of ordinals $I$ we verify that, iterating the forcing defined by Holy-Welch-Wu, we can preserve $\axiomfont{GCH}$, cardinals and cofinalities and obtain a model where local club condensation holds for every ordinal in $I$ modulo those ordinals whose cardinality is a singular cardinal.
We prove that if $\kappa$ is a regular cardinal in an interval $I$, the above iteration provides enough condensation for the combinatorial principle $\Dl_{S}^{*}(\Pi^{1}_{2})$, and in particular $\diamondsuit(S)$, to hold for any stationary $S \subseteq \kappa$. \end{abstract} \date{\today}
\title{On Local Club Condensation}
\section{Introduction}
\emph{Local club condensation} is a condensation principle that abstracts some of the condensation properties of $L$, G\"odel's constructible hierarchy. Local club condensation was first defined in \cite{FHl} and is part of the outer model program, which searches for forcing models that have $L$-like features.
\begin{convention}
The class of ordinals is denoted by $\ord$.
The transitive closure of a set $X$ is denoted by $\trcl(X)$,
and the Mostowski collapse of a structure $\mathfrak B$ is denoted by $\clps(\mathfrak B)$.
\end{convention}
In order to define condensation principles we define filtrations, which are an abstraction of the stratification $\langle L_{\alpha} \mathrel{|}\allowbreak \alpha < \ord \rangle $
of $L$.
\begin{defn}
Given ordinals $\alpha < \beta$ we say that $\langle M_{\xi} \mathrel{|}\allowbreak \alpha < \xi < \beta \rangle $ is a \emph{filtration} iff
\begin{enumerate}
\item for all $\xi \in (\alpha,\beta)$, $M_{\xi}$ is transitive, $\xi \subseteq M_{\xi}$,
\item for all $\xi \in (\alpha,\beta)$, $M_{\xi} \cap \ord = \xi $,
\item for all $\xi \in (\alpha,\beta)$, $ |M_{\xi}| \leq \max \{\aleph_0,|\xi|\}$,
\item if $\xi < \zeta$, then $M_{\xi} \subseteq M_{\zeta}$,
\item if $\xi$ is a limit ordinal, then $M_{\xi}=\bigcup_{\alpha < \xi} M_{\alpha}$.
\end{enumerate}
\end{defn}
\begin{convention}\label{Union}
Given a filtration $\langle M_{\xi} \mathrel{|}\allowbreak \xi < \beta \rangle$, if $\beta$ is a limit ordinal we let $M_{\beta}:=\bigcup_{\gamma < \beta} M_{\gamma}$.
\end{convention}
The following is an abstract formulation of the Condensation lemma that holds for the constructible hierarchy $\langle L_{\alpha} \mathrel{|}\allowbreak \alpha \in \ord \rangle $:
\begin{defn} Suppose that $\kappa$ and $\lambda$ are regular cardinals and that $ \vec{M} = \langle M_{\alpha} \mathrel{|}\allowbreak \kappa < \alpha < \lambda \rangle $ is a filtration. We say that $\vec{M}$ satisfies \emph{strong condensation} iff for every $\alpha \in (\kappa,\lambda)$ and every $ (X,\in) \prec_{\Sigma_{1}} (M_{\alpha},\in)$ there exists $\bar{\alpha}$ such that $\text{clps}(X,\in) = (M_{\bar{\alpha}},\in)$.
\end{defn}
While strong condensation is not consistent with the existence of large cardinals (see \cite{FHl} and \cite{schvlck}), local club condensation, which we define below, is consistent with any large cardinal; see \cite[Theorem~1]{FHl}.
\begin{defn}[Holy,Welch,Wu,Friedman \cite{HWW},\cite{FHl}] \label{LCCupto}
Let $ \kappa $ be a cardinal of uncountable cofinality.
We say that $\vec{M}=\langle M_\beta \mathrel{|}\allowbreak \beta < \kappa \rangle $ is a witness to the fact that \emph{local club condensation holds in $(\eta,\zeta)$},
and denote this by $\langle H_{\kappa},{\in}, \vec M\rangle \models \axiomfont{LCC}(\eta,\zeta)$,
iff all of the following hold true:
\begin{enumerate}
\item $\eta < \zeta \leq \kappa+1$;
\item $\vec M$ is a \emph{ filtration} such that $M_{\kappa}= H_\kappa$ \footnote{See Convention \ref{Union}},
\item For every ordinal $\alpha$ in the interval $(\eta,\zeta)$ and every sequence $\mathcal{F} = \langle (F_{n},k_{n}) \mathrel{|}\allowbreak n \in \omega \rangle$ such that, for all $n \in \omega$, $k_{n} \in \omega$ and $F_{n} \subseteq (M_{\alpha})^{k_{n}}$, there is a sequence
$\vec{\mathfrak{B}} = \langle \mathcal{B}_{\beta} \mathrel{|}\allowbreak \beta < |\alpha| \rangle $ having the following properties:
\begin{enumerate}
\item for all $\beta<|\alpha|$, $\mathcal{B}_{\beta}$ is of the form $\langle B_{\beta},{\in}, \vec{M} \mathbin\upharpoonright (B_{\beta} \cap\ord), (F_n\cap(B_\beta)^{k_n})_{n\in\omega} \rangle$;
\item for all $\beta<|\alpha|$, $\mathcal{B}_{\beta} \prec \langle M_{\alpha},{\in}, \vec{M}\mathbin\upharpoonright \alpha, (F_n)_{n\in\omega} \rangle$;\footnote{Note that the case $ \alpha= \kappa $ uses Convention~\ref{Union}.}
\item for all $\beta<|\alpha|$, $\beta\subseteq B_\beta$ and $|B_{\beta}| < |\alpha|$;
\item for all $\beta < |\alpha|$, there exists $\bar{\beta}<\kappa$ such that
$$\clps(\langle B_{\beta},{\in}, \langle B_{\delta} \mathrel{|}\allowbreak \delta \in B_{\beta}\cap\ord \rangle \rangle) = \langle M_{\bar{\beta}},{\in}, \langle M_{\delta} \mathrel{|}\allowbreak \delta \in \bar{\beta} \rangle \rangle;$$
\item $\langle B_\beta\mathrel{|}\allowbreak\beta<|\alpha|\rangle$ is $\subseteq$-increasing, continuous and converging to $M_\alpha$.
\end{enumerate}
\end{enumerate}
For $\vec{\mathfrak{B}}$ as in Clause~(3) above we say that
\emph{$\vec{\mathfrak{B}}$ witnesses $\axiomfont{LCC}(\eta,\zeta)$ at $\alpha$ with respect to $\mathcal{F}$}.
We write $\axiomfont{LCC}(\eta,\zeta]$ for $\axiomfont{LCC}(\eta,\zeta+1)$. \end{defn}
In Section 2 we present our results regarding Local Club Condensation in extender models.
An \emph{extender model} is an inner model of the form $L[E]$; it is a generalization of $L$ that can accommodate large cardinals. An inner model of the form $L[E]$ is the smallest transitive proper class that is a model of $\axiomfont{ZF}$ and is closed under the operator $x \mapsto x \cap E$, where $E:\ord \rightarrow V$ and each $E_{\alpha} = \emptyset $ or $E_{\alpha}$ is a partial extender. $L[E]$ models can be stratified using the $L$-hierarchy and the $J$-hierarchy, for example:
\begin{itemize}
\item $J_{0}^{E}=\emptyset$,
\item $J_{\alpha+1}^{E}= rud_{E}(J_{\alpha}^{E}\cup\{J_{\alpha}^{E}\})$,
\item $J_{\gamma}^{E}=\bigcup_{\beta < \gamma}J_{\beta}^{E}$ if $\gamma$ is a limit ordinal.
\end{itemize}
and finally $$L[E] = \bigcup_{\alpha \in \ord}J_{\alpha}^{E}.$$
In \cite[Theorem 8]{FHl} it is shown that Local Club Condensation holds in various extender models; we extend \cite[Theorem~8]{FHl} to an optimal result for extender models that are weakly iterable (see Definition \ref{weaklyit}). We characterize local club condensation in extender models in terms of subcompact cardinals\footnote{A subcompact cardinal is a large cardinal that is located in the consistency strength hierarchy below a supercompact cardinal and above a superstrong cardinal. See the definition in \cite{SquareinK}}.
\begin{thma} \label{NoSubcompact}
If $L[E]$ is an extender model that is weakly iterable, then given an infinite cardinal $\kappa$ the following are equivalent:
\begin{itemize}
\item[(a)] $\langle L_{\kappa^{+}}[E],{\in},\langle L_\beta[E]\mathrel{|}\allowbreak\beta\in\kappa^{+} \rangle\rangle\models\axiomfont{LCC}(\kappa^{+},\kappa^{++}]$.
\item[(b)] $L[E] \models ( \kappa ~ \text{is not a subcompact cardinal})$.
\end{itemize}
In addition, for every limit cardinal $\kappa$ with $\cf(\kappa)>\omega$ we have \begin{center}$\langle L_{\kappa^{+}}[E],{\in},\langle L_\beta[E]\mathrel{|}\allowbreak\beta\in\kappa^{+} \rangle\rangle\models\axiomfont{LCC}(\kappa,\kappa^{+}].$\end{center}
\end{thma}
We warn the reader that it is not known how to construct an extender model that is weakly iterable and has a subcompact cardinal, but this is part of the aim of the inner model theory program and it is desirable to know what holds in such models.
Corollary~A provides an equivalence between $\square_{\kappa}$ and a condensation principle that holds in the interval $(\kappa^{+},\kappa^{++})$; Corollary~A is immediate from Theorem~A and the main result in \cite{MR2081183}:
\begin{cora} If $L[E]$ is an extender model with Jensen's $\lambda$-indexing that is weakly iterable, then given $\kappa$, an $L[E]$ cardinal, the following are equivalent: \begin{itemize}
\item[(a)] $L[E]\models \square_{\kappa}$
\item[(b)] $\langle L_{\kappa}[E], \in, \langle L_{\beta}[E] \mathrel{|}\allowbreak \beta < \kappa^{+} \rangle \rangle \models \axiomfont{LCC}(\kappa^{+},\kappa^{++})$ \end{itemize} \end{cora}
We verify that a subcompact cardinal is an even more severe impediment for $\axiomfont{LCC}$ to hold:
\begin{thmb} Suppose $L[E]$ is an extender model with Jensen's $\lambda$-indexing such that every countable elementary submodel of $L[E]
$ is $(\omega_{1}+1,\omega_{1})$-iterable. In $L[E]$, if an ordinal $\kappa$ is a subcompact cardinal, then there is no $\vec{M}$ such that $\langle M_{\kappa^{++}}, \in , \vec{M} \rangle \models \axiomfont{LCC}(\kappa^{+},\kappa^{++})$ and $M_{\kappa^{++}}=H_{\kappa^{++}}^{L[E]}$ and $M_{\kappa^{+}}=H_{\kappa^{+}}^{L[E]}$. \end{thmb}
In Section 3 we show how to force local club condensation on a given interval of ordinals $I$ modulo ordinals with singular cardinality (Theorem~C). A model where local club condensation holds on arbitrary intervals $I$, including ordinals with singular cardinality, was already obtained in \cite{FHl} via class forcing. Although we do not obtain as much condensation as in \cite{FHl}, building on \cite{HWW} we define a set forcing $\mathbb{P}$ which is simpler than the forcing in \cite{FHl} and which forces enough condensation for several applications; see Section 4.
\begin{thmc}
If $\axiomfont{GCH}$ holds and $\kappa$ is a regular cardinal and $\alpha$ is an ordinal, then there is a set forcing $\mathbb{P}$ which is $<\kappa$-directed closed and $\kappa^{+\alpha+1}$-cc, $\axiomfont{GCH}$ preserving such that in $V^{\mathbb{P}}$ there is a filtration $\langle M_{\alpha} \mathrel{|}\allowbreak \alpha < \kappa^{+\alpha+1} \rangle $ such that for every regular cardinal $ \theta \in [\kappa,\kappa^{+\alpha+1}]$ we have $H_{\theta}=M_{\theta}$ and $\langle M_{\alpha}\mathrel{|}\allowbreak \alpha < \kappa^{+\alpha}\rangle \models \axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\alpha}) $. \end{thmc}
In Section 4 we show that the iteration of the forcing from \cite{HWW} implies $\Dl^{*}_{S}(\Pi^{1}_{2})$, a combinatorial principle defined in \cite{FMR} which is a variation of Devlin's $\diamondsuit^{\sharp}_{\kappa}$ (see \cite{Devlin}).
\begin{cord} Let $\kappa$ be an uncountable regular cardinal and let $\mu$ be a cardinal such that $\mu^{+} \leq \kappa$. If $\axiomfont{GCH}$ holds, then there is a set forcing $\mathbb{P}$ which is $<\mu^{+}$-directed closed and $\kappa^{+}$-cc, $\axiomfont{GCH}$ preserving, such that in $V^{\mathbb{P}}$ we have $\Dl^{*}_{S}(\Pi^{1}_{2})$, and in particular $\diamondsuit(S)$, for any stationary $S \subseteq \kappa$. \end{cord} \section{Local club condensation in extender models}
The main result in this section is Theorem~A, which extends \cite[Theorem~8]{FHl} and gives a characterization of local club condensation in terms of subcompact cardinals.
For the standard notation of inner model theory and fine structure, such as \textit{premouse, projectum, standard parameter}, etc., we refer the reader to \cite{MR1876087}.
\begin{defn}
Given a premouse $\mathcal{M}$, a parameter $p \in (\mathcal{M} \cap \ord)^{<\omega}$, $\xi \in \mathcal{M} \cap \ord$ and $\langle \varphi_{i} \mathrel{|}\allowbreak i \in \omega \rangle $ a primitive recursive enumeration of all $\Sigma_{1}$ formulas in the premice language, we define $$T_{p}^{\mathcal{M}}(\xi)=\{ (a,i) \in (\xi^{<\omega}\times \omega) \mathrel{|}\allowbreak \mathcal{M}\models \varphi_{i}(a,p)\}.$$ \end{defn}
\begin{fact}\label{Phi} Given $\langle \varphi_{n} \mathrel{|}\allowbreak n \in \omega \rangle $ a primitive recursive enumeration of all $ \Sigma_1 $ formulas in the premice language, there exists a $\Sigma_{1}$-formula $\Phi(w,x,y)$ in the premice language such that for any premouse $\mathcal{M}$ the following hold:
\begin{itemize}
\item If $n \in \omega$ is such that $\varphi_{n}(x) = \exists y \phi_n(x,y)$ and $\phi_n$ is $\Sigma_0$, then for every $x \in \mathcal{M}$ there exists
$ y_0 \in \mathcal{M}$ such that $(\mathcal{M}\models \phi_n(x,y_0))$ iff there exists $y_1 \in \mathcal{M}$ such that $ (\mathcal{M} \models \Phi(n,x,y_1))$
\item For every $n \in \omega$ and for every $x \in \mathcal{M}$, if there are $y_{0},y_{1} \in \mathcal{M}$ such that $ \mathcal{M} \models \Phi(n,x,y_0) \wedge \Phi(n,x,y_1) $, then $y_0 = y_1$.
\end{itemize} \end{fact} \begin{defn}
Let $\mathcal{M}$ be a premouse. We denote by $h_1^{\mathcal{M}}$ the partial function from $\omega \times \mathcal{M} $ into $\mathcal{M}$ defined by the formula $\Phi$ from Fact \ref{Phi}. Given $ X \subseteq \mathcal{M}$ and $ p \in \mathcal{M}$ we denote by $h_{1}^{\mathcal{M}}[X,p]$ the set $h_{1}^{\mathcal{M}}[(X\times\{p\})^{<\omega}]$. \end{defn} \begin{fact} \label{Fact1}
\begin{enumerate}
\item Suppose $L[E]$ is an extender model. If $\gamma$ is an ordinal such that $E_{\gamma}\neq\emptyset$, then there exists $g \in J_{\gamma+1}^{E}$ such that $g:\lambda(E_{\gamma})\rightarrow \gamma$ is onto.
\item $\mathcal{P}(J_{\gamma}^{E})\cap J_{\gamma+1}^{E} = \Sigma_{\omega}(J_{\gamma}^{E}) $
\end{enumerate} \end{fact}
\begin{lemma} \label{Collapse}
Suppose $L[E]$ is an extender model and $\gamma$ is such that $E_{\gamma} \neq \emptyset$ and $L_{\gamma}[E]=J_{\gamma}^{E}$. Then there exists $g \in L_{\gamma+1}[E]$ such that $g:\lambda(E_{\gamma})\rightarrow \gamma$ and $g$ is onto. \end{lemma} \begin{proof} This follows from Fact \ref{Fact1} and the fact that $\Sigma_{\omega}(J_{\gamma}^{E}) = L_{\gamma+1}[E]$.
\end{proof}
\begin{remark}
Notice that in particular for any premouse $\mathcal{M}$, if $\gamma \in \mathcal{M} \cap \ord $ and $\mathcal{M}\models ``\gamma \text{ is a cardinal}"$ it follows from Fact \ref{Fact1} that $ E_{\gamma} = \emptyset$, as otherwise $$J_{\gamma+1}^{E} \models ``\gamma \text{ is not a cardinal},"$$ and hence $$\mathcal{M}\models ``\gamma \text{ is not a cardinal}."$$ \end{remark}
\begin{defn}\label{weaklyit}
We say that an extender model $L[E]$ is \emph{weakly iterable} iff for every $\alpha \in \ord$, whenever there exists an elementary embedding $\pi:\bar{\mathcal{M}} \rightarrow (J_{\alpha}^{E},\in,E|\alpha,E_{\alpha})$, the premouse $\bar{\mathcal{M}}$ is $(\omega_{1}+1,\omega_{1})$-iterable.\footnote{See Definition 9.1.10 in \cite{MR1876087} for the definition of $(\omega_{1}+1,\omega_{1})$-iterable. } \end{defn}
\begin{lemma} \label{Condensation} Let $L[E]$ be an extender model that is weakly iterable and let $\kappa$ be a cardinal in $L[E]$.
Suppose $i:\mathcal{N} \rightarrow \mathcal{M}$ is the inverse of the Mostowski collapse of $h_{1}^{\mathcal{M}}[\gamma \cup \{p_{1}^{\mathcal{M}}\}]$, $\rho_{1}(\mathcal{N})=\gamma$, $\crit(i)=\gamma$, $\gamma < \kappa$, $\mathcal{M} = \langle J_{\alpha}^{E}, \in, E\mathbin\upharpoonright \alpha, E_{\alpha} \rangle$ for some $\alpha \in (\kappa^{+},\kappa^{++})$. Then $\mathcal{N} \triangleleft \mathcal{M}$ if and only if $E_{\gamma}=\emptyset$.
\end{lemma}
\begin{proof} The proof is a special case of the condensation lemma. Suppose that $E_{\gamma}= \emptyset$; we will verify that $\mathcal{N} \triangleleft \mathcal{M}$.
Let $H \prec_{\Sigma_{\omega}} V_{\Omega}$ for some $\Omega$ large enough, where $i \in H$ and $H$ is countable. Let $ \pi:\bar{H} \rightarrow V_{\Omega}$ be the inverse of the Mostowski collapse of $H$, and let $\pi(\bar{\mathcal{N}}) = \mathcal{N}$, $\pi(\bar{\mathcal{M}})=\mathcal{M}$ and $\pi(\bar{i})=i$.
Let $e$ be an enumeration of $\bar{\mathcal{M}}$ and let $ \Sigma$ be an $e$-minimal $(\omega_{1},\omega_{1}+1)$-strategy for $\bar{\mathcal{M}}$ \footnote{The existence of an $e$-minimal iteration strategy follows from the hypothesis that $L[E]$ is weakly iterable and the Neeman-Steel lemma, see \cite[Theorem~9.2.11]{MR1876087}}. Since $\bar{i}$ embeds $\langle \bar{\mathcal{M}},\bar{\mathcal{N}},\bar{\gamma} \rangle$ into $\bar{\mathcal{M}}$, it follows from 9.2.12 in \cite{MR1876087} that we can compare $\langle \bar{\mathcal{M}},\bar{\mathcal{N}},\bar{\gamma}\rangle$ and $\bar{\mathcal{M}}$, and we have the following:
\begin{itemize}
\item $\bar{\mathcal{M}}$ wins the comparison,
\item the last model on the phalanx side is above $\bar{\mathcal{N}}$,
\item there is no drop on the branch of the phalanx side.
\end{itemize}
From the fact that $h_{1}^{\mathcal{N}}(\gamma \cup \bar{p}) = \mathcal{N}$ it follows that $h_{1}^{\bar{\mathcal{N}}}(\bar{\gamma}\cup q)=\bar{\mathcal{N}}$, where $\pi(q)=\bar{p}$. This implies that $\bar{\mathcal{N}}$ cannot move in the comparison, as otherwise it would drop, and we already know that it is the $\bar{\mathcal{M}}$ side which wins the comparison. Let $\mathcal{T}$ be the iteration tree on $\bar{\mathcal{M}}$ and $\mathcal{U}$ the iteration tree on the phalanx $\langle \bar{\mathcal{M}},\bar{\mathcal{N}},\bar{\gamma} \rangle$.
\begin{claim}
$\bar{\mathcal{N}} \neq \mathcal{M}^{\mathcal{T}}_{\infty}$
\end{claim}
\begin{proof} We already know that $\bar{\mathcal{N}} \triangleleft \mathcal{M}^{\mathcal{T}}_{\infty}$. If $\bar{\mathcal{M}}$ does not move, then $\bar{\mathcal{N}} \neq \mathcal{M}^{\mathcal{T}}_{\infty}=\bar{\mathcal{M}}$, since they have different cardinalities. Suppose $\mathcal{T}$ is non-trivial and $\mathcal{M}^{\mathcal{T}}_{\infty} = \bar{\mathcal{N}}$, and let $b^{\mathcal{T}}$ be the main branch in $\mathcal{T}$. Let $\eta $ be the last drop in $b^{\mathcal{T}}$. In order for $\mathcal{M}^{\mathcal{T}}_{\infty}$ to be 1-sound we need $\crit(E_{\eta}^{\mathcal{T}}) < \rho_{1}(\mathcal{M}_{\eta}^{\mathcal{T}})$, and since $\lambda(E_{0}^{\mathcal{T}}) > \gamma$ we have $\lambda(E_{\eta}^{\mathcal{T}}) > \gamma$. This implies that $\rho_{1}(\mathcal{M}_{\infty}^{\mathcal{T}}) \geq \rho_{1}(\mathcal{M}_{\eta+1}^{\mathcal{T}}) > \pi_{\eta^{*},\eta+1}^{\mathcal{T}}(\kappa_{\eta}) \geq \gamma = \rho_{1}(\mathcal{N})$, which is a contradiction since we are assuming that $\mathcal{M}^{\mathcal{T}}_{\infty} = \bar{\mathcal{N}}$.
\end{proof}
Since $\bar{\mathcal{N}} $ is a proper initial segment of $\mathcal{M}^{\mathcal{T}}_{\infty}$, it follows that $\bar{\mathcal{M}}$ does not move. For a contradiction, suppose $\bar{\mathcal{M}}$ moves; then the index $\lambda(E_{0}^{\mathcal{U}}) $ of the first extender used on the $\bar{\mathcal{M}}$ side is greater than $\gamma$, since $\bar{\mathcal{N}}\mathbin\upharpoonright \gamma = \bar{\mathcal{M}}\mathbin\upharpoonright \gamma$ and, by our hypothesis, $E_{\gamma}= \emptyset$. Moreover the cardinal in $\mathcal{M}^{\mathcal{U}}_{lh(\mathcal{T})-1}$, the last model in the iteration on the $\bar{\mathcal{M}}$ side of the comparison. We have the following: \begin{itemize}
\item $\bar{\mathcal{N}}$ is a proper initial segment of $\mathcal{M}^{\mathcal{T}}_{\infty}$,
\item $\lambda(E_{0}^{\mathcal{U}}) \leq (\bar{\mathcal{N}}\cap \ord)$,
\item $h_{1}^{\bar{\mathcal{N}}}(\bar{\gamma}\cup q)=\bar{\mathcal{N}}$,
\item $h_{1}^{\bar{\mathcal{N}}} \mathbin\upharpoonright (\bar{\gamma}\cup \{q\}) \in \mathcal{M}^{\mathcal{T}}_{\infty}$,
\end{itemize} then there exists a surjection from $\bar{\gamma}$ onto the index of $E_{0}^{\mathcal{T}}$ in $\mathcal{M}^{\mathcal{T}}_{\infty}$ which is a contradiction.
Thus we must have $\bar{\mathcal{N}} \triangleleft \bar{\mathcal{M}}$ and by elementarity of $\pi$ we have $\mathcal{N} \triangleleft \mathcal{M}$.
\end{proof}
\begin{lemma} \label{Club}
Let $L[E]$ be an extender model that is weakly iterable. In $L[E]$, let $\kappa$ be a cardinal which is not a subcompact cardinal. Let $ \beta \in (\kappa^{+},\kappa^{++})$ and $\mathcal{M}= (J_{\beta}^{E},\in,E\mathbin\upharpoonright \beta, E_{\beta})$ and suppose that $\rho_{1}(\mathcal{M})= \kappa^{+}$. Then there is a club $C \subseteq \kappa^{+}$ such that for all $\gamma \in C $, if $\mathcal{N} = \text{clps}(h_{1}^{\mathcal{M}}(\gamma \cup \{p_{1}^{\mathcal{M}}\}))$ then $\rho_{1}(\mathcal{N})= \gamma$. \end{lemma} \begin{proof}
Let $g$ be a function with domain $\kappa^{+}$ such that for each $\xi < \kappa^{+}$ we have that $g(\xi) = h_{1}^{\mathcal{M}}(\xi \cup \{p_{1}^{\mathcal{M}}\}) \cap \ord^{<\omega}$.
Let $f: \kappa^{+} \rightarrow \kappa^{+}$ be the function where, given $ \gamma < \kappa^{+} $, $f(\gamma) $ is the least ordinal such that for every $r \in g(\gamma)$ we have $ T_{r}^{\mathcal{M}}(\gamma) \in J_{f(\gamma)}^{E}$. Notice that $T_{r}^{\mathcal{M}}(\gamma) \subseteq \bigcup_{n \in \omega} \mathcal{P}([\gamma]^{n})$, hence it can be coded as a subset of $\gamma$ and therefore, by acceptability, it follows that $f(\gamma) < \kappa^{+}$.
Let $ C$ be a club subset of the club of closure points of $f$ and such that $\gamma \in C $ implies $\gamma = h_{1}^{\mathcal{M}}(\gamma \cup \{p_{1}^{\mathcal{M}}\}) \cap \kappa^{+}$. We will verify that $C$ is the club we sought.
Let $ \gamma \in C $. Let $\pi:\mathcal{N}\rightarrow J_{\beta}^{E}$ be the inverse of the Mostowski collapse of $h_{1}^{\mathcal{M}}(\gamma \cup \{p_{1}^{\mathcal{M}}\})$. Then for each $ \xi < \gamma $ we have $T_{r}^{\mathcal{N}}(\xi)=T_{\pi(r)}^{\mathcal{M}}(\xi) \in J_{\gamma}^{E} = \mathcal{N}\mathbin\upharpoonright \gamma$. Therefore $\rho_{1}(\mathcal{N}) \geq \gamma$.
Notice that by a standard diagonal argument $a = \{ \xi \in \gamma \mathrel{|}\allowbreak \xi \not\in h^{\mathcal{N}_{\gamma}}_{1}(\xi,p_{1})\} \not\in \mathcal{N}_{\gamma}$ since $\mathcal{N}_{\gamma}=h_{1}[\gamma \cup \{p_{1}\}]$, and thus $\rho_{1}(\mathcal{N}_{\gamma})\leq \gamma$.
\end{proof}
\begin{lemma}\label{NoSubcompact} Let $L[E]$ be an extender model that is weakly iterable. Given $\kappa \in \ord$ if $\kappa$ is a successor cardinal in $L[E]$ then the following are equivalent:
\begin{itemize}
\item[(a)] $\langle L_{\kappa^{+}}[E],{\in},\langle L_\beta[E]\mathrel{|}\allowbreak\beta\in\kappa^{+} \rangle\rangle\models\axiomfont{LCC}(\kappa^{+},\kappa^{++}]$.
\item[(b)] $L[E] \models ( \kappa ~ \text{is not a subcompact cardinal})$.
\end{itemize}
and if $\kappa$ is a limit cardinal of uncountable cofinality, then $${\langle L_{\kappa^{+}}[E],{\in},\langle L_\beta[E]\mathrel{|}\allowbreak\beta\in\kappa^{+} \rangle\rangle\models\axiomfont{LCC}(\kappa,\kappa^{+}]}.$$
\end{lemma} \begin{proof} Let $\alpha \in (\kappa^{+},\kappa^{++})$. Let $\beta \geq \alpha$ such that $\beta \in (\kappa^{+},\kappa^{++})$ and $\rho_{1}((J_{\beta}^{E},\in,E\mathbin\upharpoonright\beta,E_{\beta}) ) = \kappa^{+}$.
Let $\mathcal{M} = (J_{\beta}^{E},\in,E\mathbin\upharpoonright\beta,E_{\beta})$, \[D= \{ \gamma <\kappa^{+} \mathrel{|}\allowbreak h_{1}(\gamma \cup \{p_{1}\})\cap \kappa^{+}=\gamma \}\] and for each $\gamma \in D$ let $\mathcal{N}_{\gamma} = \clps(h_{1}(\gamma \cup \{p_{1}\}))$. By Lemma \ref{Club} $D$ contains a club $ F \subseteq D$ such that $\gamma \in F$ implies that there are $\pi_{\gamma}:N_{\gamma} \rightarrow J_{\beta}^{E}$ where $\pi_{\gamma}$ is $\Sigma^{(1)}_{1}$, $\pi_{\gamma}\mathbin\upharpoonright \gamma = id \mathbin\upharpoonright \gamma$, $\pi_{\gamma}(\gamma) = \kappa^{+}$, $\rho_{1}(N_{\gamma}) = \gamma$. We can also assume that $\gamma \in F $ implies $L_{\gamma}[E]=J_{\gamma}^{E}$.
We verify first the implication $\neg(b) \Rightarrow \neg(a)$.
Suppose that $\langle B_{\gamma} \mathrel{|}\allowbreak \gamma < |\alpha| \rangle $ is a continuous chain of elementary submodels of $\mathcal{N} = \langle J_{\alpha}^{E},\in,E|\alpha,E_{\alpha} \rangle$ such that for all $\gamma < |\alpha|$ we have $|B_{\gamma}| < |\alpha|$ and $\bigcup_{\gamma < |\alpha|} B_{\gamma} = \mathcal{N}$. We will verify that for stationarily many $\gamma$'s the collapse $\clps(B_{\gamma}) $ is not of the form $J_{\zeta}^{E}$ for any $\zeta$.
As $|\alpha|=\kappa^{+}$ is a regular cardinal, it follows that for club many $\gamma$'s we have $B_{\gamma} = h_{1}^{\mathcal{M}}(\gamma\cup\{p_{1}\}) \cap \mathcal{N} = \pi^{-1}(L_{\alpha}[E])$.
From $\neg(b)$, by the Schimmerling-Zeman characterization of $\square_{\kappa}$ (see \cite[Theorem~0.1]{SquareinK}), we can assume that for stationarily many $\gamma \in F$ we have $E_{\gamma}^{\mathcal{M}} \neq \emptyset$. Notice that $\mathcal{N}_{\gamma} \models ``\gamma \text{ is a cardinal}"$ and therefore $E^{\mathcal{N}_{\gamma}}_{\gamma}=\emptyset$ by Lemma \ref{Collapse}. On the other hand, from Lemma \ref{Collapse} we have $L_{\gamma+1}[E] \models ``\gamma \text{ is not a cardinal}"$. Since $L_{\gamma+1}[E] \subseteq J_{\gamma+1}^{E}$, it follows that $\mathcal{N}_{\gamma} = \clps(B_{\gamma})$ is different from $J_{\zeta}^{E}$ for every $\zeta > \gamma$. Therefore $\axiomfont{LCC}(\kappa^{+},\kappa^{++})$ does not hold.
Next we verify $(b) \Rightarrow (a)$. Suppose $\mathfrak{N} = \langle L_{\alpha}[E],\in, E | \alpha, E_{\alpha}, (\mathcal{F}_{n} \mathrel{|}\allowbreak n \in \omega ) \rangle $. We can assume without loss of generality that $\beta$ is large enough so that $\mathfrak{N} \in \mathcal{M}$.
We verify that $\langle L_{\tau}[E] \mathrel{|}\allowbreak \tau < \kappa^{++} \rangle $ witnesses $\axiomfont{LCC}(\kappa^{+},\kappa^{++}] $ at $\alpha$. Let $ \vec{\mathcal{R}} := \{ h_{1}^{\mathcal{M}}[\gamma \cup \{p_{1}^{\mathcal{M}} \}\cup \{u^{\mathcal{M}}_{1}\}] \mathrel{|}\allowbreak \gamma < \kappa^{+} \}$, where for any given $X \subseteq \mathcal{M}$, $h_{1}^{\mathcal{M}}[X]$ denotes the $\Sigma_{1}$-Skolem hull of $X$ in $\mathcal{M}$ and $ p_{1}^{\mathcal{M}}$ is the first standard parameter. It follows that
$$C= \{ \gamma < \kappa^{+} \mathrel{|}\allowbreak \crit(\clps(h_{1}^{\mathcal{M}}[\gamma \cup \{p_{1}^{\mathcal{M}}, u^{\mathcal{M}}_{1}\}])) = \gamma\} $$ is a club
and by lemma \ref{Club}
$$D=\{\gamma < \kappa^{+} \mathrel{|}\allowbreak \rho_{1}(\mathcal{N}_{\gamma})=\gamma\} $$ is also a club. From Theorem 1 in \cite{MR1860606} and $(b)$ it follows that there is a club $F \subseteq \{ \gamma < \kappa^{+} \mathrel{|}\allowbreak E_{\gamma}=\emptyset \} $. By Lemma \ref{Condensation}, for every $\gamma \in D \cap F$ we have $\mathcal{N}_{\gamma}\triangleleft \mathcal{M}$. We have $L_{\alpha}[E] \triangleleft \mathcal{M}$, therefore $\clps(L_{\alpha}[E]\cap h_{1}^{\mathcal{M}}(\gamma \cup \{p_{1}\})) \triangleleft \mathcal{N}_{\gamma}$, hence $ \clps(L_{\alpha}[E]\cap h_{1}^{\mathcal{M}}(\gamma \cup \{p_{1}\}))=\clps(B_{\gamma}) \triangleleft \mathcal{M}$, which verifies the implication $(b) \Rightarrow (a)$.
Now suppose $\kappa$ is a limit cardinal. The same argument used for the implication $(b) \Rightarrow (a)$ applies, with the difference that we do not use Theorem 1 of \cite{MR1860606}; instead we use that the cardinals below $\kappa$ form a club and that for every cardinal $\mu< \kappa$ we have $E_{\mu}=\emptyset$. \end{proof}
\begin{defn}
Given two predicates $A$ and $E$ we say that $A$ is equivalent to $E$ iff $J_{\alpha}^{A}=J_{\alpha}^{E}$ for all $\alpha < \ord$. \end{defn}
\begin{cor}\label{IncompatiblePredicates} If $A \subseteq \ord$ is such that \begin{itemize}
\item $L[A] \models (\kappa$ is a subcompact cardinal $),$ and
\item $\langle L_{\kappa^{++}}[A],\in, \langle L_{\beta}[A] \mathrel{|}\allowbreak \beta < \kappa^{++} \rangle \rangle \models \axiomfont{LCC}(\kappa^{+},\kappa^{++}]$,
\end{itemize} then there is no extender sequence $E$ such that $L[E]$ is weakly iterable and $E$ is equivalent to $A$. \end{cor}
\begin{remark} In \cite{FHl}, from the hypothesis that there is a subcompact cardinal $\kappa$ in $V$, a predicate $A \subseteq \ord$ satisfying the hypotheses of Corollary \ref{IncompatiblePredicates} is obtained in a class generic extension. \end{remark}
\begin{cor}
\label{NoWitness} Suppose that $L[E]$ is an extender model with Jensen's $\lambda$-indexing and for every ordinal $\alpha$ the premouse $\mathcal{J}_{\alpha}^{E}$ is weakly iterable. If $\kappa$ is an ordinal such that $$L[E] \models \kappa \text{ is a subcompact cardinal,}$$ then there is no $\vec{M}=\langle M_{\alpha} \mathrel{|}\allowbreak \alpha < \kappa^{++} \rangle $ with $M_{\kappa^{+}}=H_{\kappa^{+}} $, $M_{\kappa^{++}}=H_{\kappa^{++}} $ and $\langle H_{\kappa^{++}},\in,\vec{M}\rangle \models \axiomfont{LCC}(\kappa^{+},\kappa^{++})$. \end{cor}
\begin{defn} We say that a nice filtration $\vec{M}$ for $H_{\kappa^{+}}$ strongly fails to condense iff there is a stationary set $S \subseteq \kappa^{+}$ such that for any $\beta \in S $ and any continuous chain $\vec{B}$ of elementary submodels of $M_{\beta}$ there are stationarily many points $\alpha$ where $B_{\alpha}$ does not condense. \end{defn} \begin{lemma} If $\vec{M}$ is a filtration for $H_{\kappa^{+}}$ with $M_{\kappa} = H_{\kappa}$ that strongly fails to condense, then there is no filtration $\vec{N}$ of $H_{\kappa^{+}}$ with $N_{\kappa}= H_{\kappa}$ that witnesses $\axiomfont{LCC}(\kappa,\kappa^{+})$. \end{lemma}
\begin{proof} Let $\vec{N}$ be a filtration of $H_{\kappa^{+}}$ with $N_{\kappa}=H_{\kappa}$ and $N_{\kappa^{+}} = H_{\kappa^{+}}$. Then there is a club $D \subseteq \kappa^{+} $ where $ N_{\beta} = M_{\beta}$ for every $\beta \in D$. Let $ \beta\in S \cap D $, and let $\vec{\mathfrak{B}}=\langle \mathfrak{B}_{\tau} \mathrel{|}\allowbreak \tau < |\beta|=\kappa \rangle $ be any chain of elementary submodels of $M_{\beta}=N_{\beta}$. \end{proof}
\begin{cor}
Suppose $V$ is an extender model which is weakly iterable. If there exists $\kappa$ such that $L[E] \models ``\kappa \text{ is a subcompact cardinal}"$, then there is no sequence $\vec{M} = \langle M_{\alpha} \mathrel{|}\allowbreak \alpha < \kappa^{+}\rangle$ in $L[E]$ such that $\langle M_{\kappa^{+}}, \in, \vec{M} \rangle \models \axiomfont{LCC}(\kappa^{+},\kappa^{++})$. \end{cor} \section{Forcing Local Club Condensation}
In \cite{FHl} it is shown, via class forcing, how to obtain a model of local club condensation for all ordinals above $\omega_{1}$. Later a simpler forcing was presented in \cite{HWW}, which forces condensation on an interval of the form $(\kappa,\kappa^{+})$ where $\kappa$ is a regular cardinal. In this section we show that by iterating the forcing from \cite{HWW} we obtain a set forcing $\mathbb{P}$ which forces local club condensation on all ordinals of an interval $(\kappa,\kappa^{+\alpha})$ modulo ordinals with singular cardinality. We will denote by $\axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\alpha})$ (see Definition \ref{DefLCCReg}) the property that local club condensation holds for all ordinals in the interval $(\kappa,\kappa^{+\alpha})$ modulo those whose cardinality is a singular cardinal.
Iterating the forcing from \cite{HWW} gives us a set forcing which is relatively simpler than the class forcing from \cite{FHl}, and $\axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\alpha})$ is enough condensation for the applications where $\axiomfont{LCC}(\kappa,\kappa^{+\alpha})$ was used before; see Section 4.
\begin{defn} \label{Slow}
Let $\kappa$ be a regular cardinal and $\alpha$ an ordinal such that $\kappa^{+\alpha}$ is a regular cardinal. We say that $\Psi(\vec{M},\kappa,\kappa^{+\alpha})$ holds iff $\vec{M}=\langle M_{\gamma} \mathrel{|}\allowbreak \gamma < \kappa^{+\alpha} \rangle$ is a filtration and for every regular cardinal $\theta \in (\kappa,\kappa^{+\alpha})$ we have $M_{\theta} = H_{\theta}$.\end{defn}
\begin{defn} \label{DefLCCReg}
Let $\kappa$ be a regular cardinal and $\alpha$ an ordinal. We say that $\axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\alpha})$ holds iff there is a filtration $\vec{M}=\langle M_{\gamma} \mathrel{|}\allowbreak \gamma < \kappa^{+\alpha} \rangle$ such that $\langle M_{\kappa^{+\alpha}},{\in}, \langle M_{\gamma} \mathrel{|}\allowbreak \gamma < \kappa^{+\alpha} \rangle \rangle \models \axiomfont{LCC}(\beta) $ for all $\beta \in (\kappa,\kappa^{+\alpha})$ with $|\beta|\in\reg$. \end{defn}
The main result of this section is the following:
\begin{thmc}\label{ThmHWW} Suppose $V$ models $\axiomfont{ZFC} + \axiomfont{GCH}$ and $\kappa$ is a regular cardinal and $\beta$ is an ordinal. Then there exists a set-sized forcing $\mathbb{P}$ which is cardinal preserving, cofinality preserving, $\axiomfont{GCH}$ preserving and forces the existence of a filtration $\vec{M}$ such that $\Psi(\vec{M},\kappa,\kappa^{+\beta})$ holds and $\langle M_{\kappa^{+\beta}},\in,\vec{M}\rangle \models \axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\beta})$. \end{thmc}
We start by recalling the forcing from \cite{HWW}, which we will iterate to obtain our model. We present the definitions of the forcing from \cite{HWW} to keep the paper self-contained; we will work mainly with abstract properties of the forcing from Theorem \ref{HWW} below.
\begin{convention}\begin{itemize}\item \label{IS} If $ a, b $ are sets of ordinals we write $ a\triangleleft b$ iff $ \sup(a) \cap b = a $.
\item If $ \mathbb{P} = \langle \langle \mathbb{P}_{\alpha} \mathrel{|}\allowbreak \alpha < \beta \rangle, \langle \mathbb{\dot{Q}}_{\alpha} \mathrel{|}\allowbreak \alpha + 1 < \beta \rangle \rangle $ is a forcing iteration, given $\zeta < \beta$ we denote by $\mathbb{\dot{R}}_{\zeta,\beta}$ a $\mathbb{P}_{\zeta}$-name such that $ \mathbb{P} = \mathbb{P}_{\zeta}*\mathbb{\dot{R}}_{\zeta,\beta}$ (for the existence of such a name $\dot{\mathbb{R}}_{\zeta,\beta}$ see, for example, \cite[Section 5]{MR823775}).
\end{itemize}
\end{convention}
\begin{defn} Let $\kappa$ be a regular cardinal. Suppose $\kappa \leq \alpha < \kappa^{+}$, a \emph{condition at $\alpha$} is a pair $(f_{\alpha},c_{\alpha})$ which is either trivial, i.e. $(f_{\alpha},c_{\alpha}) = (\emptyset,\emptyset)$, or there is $\gamma_{\alpha} < \kappa$ such that \begin{enumerate}
\item $c_{\alpha}:\gamma_{\alpha} \rightarrow 2$ is such that $C_{\alpha}((f_{\alpha},c_{\alpha})):=\{ \delta < \gamma_{\alpha} \mathrel{|}\allowbreak c_{\alpha}(\delta)=1 \} = c_{\alpha}^{-1}\{1\}$.
\item $f_{\alpha}:\max(C_{\alpha}) \rightarrow \alpha$ is an injection and
\item $f_{\alpha}[\max(C_{\alpha})] \subseteq \max(C_{\alpha})$ \end{enumerate} \end{defn}
\begin{defn} Let $\kappa$ be a regular cardinal. We define a function $A$ with domain $[\kappa,\kappa^{+})$ such that for every $\alpha \in [\kappa,\kappa^{+})$, $A(\alpha)$ is an $\mathbb{H}_{\alpha}$-name for either $0$ or $1$. We fix a wellorder $\mathcal{W}$ of $H_{\kappa^{+}}$ of order-type $\kappa^{+}$. Let $\beta \in [\kappa,\kappa^{+})$ and assume that $A \mathbin\upharpoonright \beta$ and $\mathbb{H}_{\beta}$ have been defined.
Let $A(\beta)$ be the canonical $\mathbb{H}_{\beta}$-name for either $0$ or $1$ such that for any $\mathbb{H}_{\beta}$-generic $G_{\beta}$, $A(\beta) = 1$ iff $ \beta = {\prec} \gamma, {\prec} \delta, \varepsilon {\succ} {\succ}$ \footnote{${\prec} \gamma, {\prec} \delta, \varepsilon {\succ} {\succ}$ denotes the G\"odel pairing.}, $\dot{x}$ is the $\gamma^{\text{th}}$ (in the sense of $\mathcal{W}$) $\mathbb{H}_{\delta}$-nice name for a subset of $\kappa$, $\varepsilon < \kappa$ and $\varepsilon \in \dot{x}^{G_{\beta}}$ \footnote{As $\delta < \beta$, we identify $\dot{x}$ with an $\mathbb{H}_{\beta}$-name using the induction hypothesis that $\mathbb{H}_{\delta} \prec \mathbb{H}_{\beta}$.}
Now suppose that $ A\mathbin\upharpoonright \beta $ is defined; we proceed to define $\mathbb{H}_{\beta}$. Suppose $p$ is a $\beta$-sequence such that $|\supp(p)| := | \{\tau < \beta \mathrel{|}\allowbreak p(\tau) \neq 1_{\mathbb{H}_{\tau}}\}| < \kappa$. Then $p \in \mathbb{H}_{\beta}$ iff $p\mathbin\upharpoonright \zeta \in \mathbb{H}_{\zeta}$ for every $\zeta < \beta$ and, in case $\beta = \alpha +1$ for some $\alpha$, moreover the following hold: \begin{itemize}
\item $p(\alpha)=(f_{\alpha},c_{\alpha})$ is a condition at $\alpha$,
\item if $C_{\alpha}\neq \emptyset$, then $p\mathbin\upharpoonright \alpha$ decides $A(\alpha)=a_{\alpha}$,
\item for all $\delta \in C_{\alpha}$, $p(\otp(f_\alpha[\delta]))=a_\alpha$,
\item $\gamma^{p} := \supp(p)\cap \kappa = \gamma_{\zeta} = \dom(c_{\zeta})$ for every $\zeta \in C\text{-}\supp(p)$, where $C\text{-}\supp(p):=\{\gamma < \beta \mathrel{|}\allowbreak C(p(\gamma))\neq \emptyset \}$,
\item there exists $\delta^{p}$ such that $\max(C_{\zeta})=\delta^{p}$ for all $\zeta \in C\text{-}\supp(p)$,
\item for all $\beta_0 <\beta_1 $ both in $C\text{-}\supp(p)$,
$$f_{\beta_{0}}[\delta^{p}] \triangleleft f_{\beta_1}[\delta^{p}]$$
\footnote{See Convention \ref{IS}.} and
$$f_{\beta_1}[\delta^{p}] \setminus \beta_0 \neq \emptyset.$$
\end{itemize}
For $p$ and $q$ in $\mathbb{H}_{\beta}$ we let $q\leq p $ iff $q \mathbin\upharpoonright \kappa \leq p \mathbin\upharpoonright \kappa$ and $q(\zeta) \leq p(\zeta)$ for every $\zeta \in [\kappa,\beta)$. \end{defn}
We will work with a forcing that is equivalent to $\mathbb{H}_{\kappa,\kappa^{+}}$, the limit of the forcings $\mathbb{H}_{\beta}$ defined above, and is a subset of $ H_{\kappa^{+}}$.
\begin{defn}\label{Forcingdefn}
If $\kappa$ is a regular cardinal and $\pi:\mathbb{H}_{\kappa,\kappa^{+}} \rightarrow H_{\kappa^{+}}$ is such that $\pi(p) = p \mathbin\upharpoonright \supp(p)$, then we define $\mathbb{P}_{\kappa,\kappa^{+}}:= \rng(\pi)$ and given $s,t \in \mathbb{P}_{\kappa,\kappa^{+}}$ we let $s \leq_{\mathbb{P}_{\kappa,\kappa^{+}}} t $ iff $\pi^{-1}(s) \leq_{\mathbb{H}_{\kappa,\kappa^{+}}} \pi^{-1}(t)$. \end{defn}
Next we describe how we will iterate the forcing from Definition \ref{Forcingdefn}.
\begin{defn} Let $\alpha$ be an ordinal and $\kappa$ a regular cardinal. We define $\mathbb{P}_{\kappa,\kappa^{+\alpha}} $ as the iteration $ \langle \langle \mathbb{P}_{\kappa,\kappa^{+\tau}} \mathrel{|}\allowbreak \tau \leq \alpha \rangle , \langle \dot{\mathbb{Q}}_{\tau} \mathrel{|}\allowbreak \tau < \alpha \rangle \rangle $ as follows: \begin{enumerate}
\item If $\tau = \beta+1$ for some $\beta$ and $\kappa^{+\beta}$ is a regular cardinal: if there exists $\dot{\mathbb{Q}}_{\beta}\subseteq H_{\kappa^{+\beta+1}}$ such that $\mathbb{P}_{\kappa,\kappa^{+\beta}} \Vdash \dot{\mathbb{Q}}_{\beta}=\mathbb{P}_{\kappa^{+\beta},\kappa^{+\beta+1}}$, we let $\mathbb{P}_{\kappa,\kappa^{+\beta+1}} = \mathbb{P}_{\kappa,\kappa^{+\beta}}*\dot{\mathbb{Q}}_{\beta}$; otherwise we stop the iteration.
\item If $\tau = \beta+1$ for some $\beta < \tau$ and $\kappa^{+\beta}$ is a singular cardinal, we let $\dot{\mathbb{Q}}_{\beta} = \check{1}$.
\item If $\tau$ is a limit ordinal and $\kappa^{+\tau}$ is a regular cardinal, then $\mathbb{P}_{\kappa,\kappa^{+\tau}}$ is the direct limit of $\langle \mathbb{P}_{\kappa,\kappa^{+\theta}} , \dot{\mathbb{Q}}_{\theta}
\mathrel{|}\allowbreak \theta < \tau \rangle $.
\item If $\tau$ is a limit ordinal and $\kappa^{+\tau}$ is singular, then $\mathbb{P}_{\kappa,\kappa^{+\tau}}$ is the inverse limit of $\langle \mathbb{P}_{\kappa,\kappa^{+\theta}} , \dot{\mathbb{Q}}_{\theta}
\mathrel{|}\allowbreak \theta < \tau \rangle $.
\end{enumerate}
\end{defn}
\begin{remark}
Given an ordinal $\alpha$ and a regular cardinal $\kappa$, the forcing $\mathbb{P}_{\kappa,\kappa^{+\alpha}}$ is obtained by iterating $\mathbb{P}_{\kappa^{+\beta},\kappa^{+\beta+1}} $ for each ordinal $\beta < \alpha$ such that $\kappa^{+\beta}$ is a regular cardinal. At limit stages $\beta$ we take inverse limits when $\kappa^{+\beta}$ is singular and direct limits when $\kappa^{+\beta}$ is regular, i.e., by Remark \ref{Inacc} below, when $\beta$ is a weakly inaccessible cardinal. \end{remark}
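For a concrete illustration of the preceding remark (not needed later): take $\kappa=\aleph_{1}$. At the limit stage $\tau=\omega$ the cardinal $\kappa^{+\omega}=\aleph_{1+\omega}=\aleph_{\omega}$ has cofinality $\omega$ and is singular, so $\mathbb{P}_{\kappa,\kappa^{+\omega}}$ is an inverse limit, whereas at a limit stage $\tau=\lambda$ with $\lambda>\kappa$ weakly inaccessible we have $\kappa^{+\lambda}=\sup_{\gamma<\lambda}\kappa^{+\gamma}=\lambda$ regular, so $\mathbb{P}_{\kappa,\kappa^{+\lambda}}$ is a direct limit.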
\begin{remark} \label{SeqBijections}
Let $\kappa$ be a regular cardinal and let $G$ be $\mathbb{P}_{\kappa,\kappa^{+}}$-generic. For $\alpha \in [\kappa,\kappa^{+})$, consider $$f_{\alpha}:= \bigcup \{f \mathrel{|}\allowbreak \exists p \exists c ( \alpha \in \dom(p)\wedge p \in G \wedge p(\alpha)= (f,c) ) \}.$$ By a standard density argument we have that $f_{\alpha}$ is a bijection from $\kappa$ onto $\alpha$. It also holds that $\alpha \in A $ iff $\{\gamma < \kappa \mathrel{|}\allowbreak \otp(f_{\alpha}[\gamma]) \in A\}$ contains a club and $\alpha \not\in A$ iff $\{\gamma < \kappa \mathrel{|}\allowbreak \otp(f_{\alpha}[\gamma]) \not\in A\}$ contains a club. \end{remark}
\begin{thm}[\cite{HWW}] \label{HWW} Suppose $\axiomfont{GCH}$ holds and $\kappa$ is a regular cardinal. Then $\mathbb{P}_{\kappa,\kappa^{+}}$ is a ${<}\kappa$-directed closed, $\kappa^{+}$-cc forcing such that $|\mathbb{P}_{\kappa,\kappa^{+}}|=\kappa^{+}$ and for all $G$, $\mathbb{P}_{\kappa,\kappa^{+}}$-generic, the following holds in $V[G]$:
\begin{itemize}
\item There is $\vec{M} =\langle M_{\alpha} \mathrel{|}\allowbreak \alpha \leq \kappa^{+} \rangle $ which witnesses $\axiomfont{LCC}(\kappa,\kappa^{+})$,
\item $M_{\kappa}= H_{\kappa}$,
\item $M_{\kappa^{+}}=H_{\kappa^{+}}$,
\item There exists $A \subseteq \kappa^{+} $ such that for all $ \beta < \kappa^{+}$ we have $ M_{\beta}=L_{\beta}[A]$.
\end{itemize} \end{thm}
We will need the following facts:
\begin{fact} \label{Baum} \cite[Theorem 2.7]{MR823775} Let $\mathbb{P}_{\alpha}$ be the limit of the iteration $\langle \mathbb{P}_\beta, \dot{\mathbb{Q}}_{\beta} \mathrel{|}\allowbreak \beta < \alpha \rangle$. Suppose that $\kappa$ is a regular cardinal and for all $\beta < \alpha$, $$\Vdash_{\mathbb{P}_{\beta}} \dot{\mathbb{Q}}_{\beta} \text{ is } \kappa\text{-directed closed}.$$
Suppose also that all limits are inverse or direct and that if $\beta \leq \alpha$, $\beta $ is a limit ordinal and $\cf(\beta)<\kappa$, then $\mathbb{P}_{\beta}$ is the inverse limit of $\langle \mathbb{P}_{\gamma} \mathrel{|}\allowbreak \gamma < \beta \rangle$. Then $\mathbb{P}_{\alpha}$ is $\kappa$-directed closed. \end{fact}
\begin{fact} \label{cc} Suppose $\cf(\kappa)>\omega$. If $\mathbb{P}$ is a $\kappa$-cc forcing and $\mathbb{P} \Vdash ``\dot{\mathbb{Q}} \text{ is } \kappa\text{-cc}"$, then $\mathbb{P}*\dot{\mathbb{Q}}$ is $\kappa$-cc. \end{fact}
\begin{fact} \label{Htheta} Let $\mathbb{P}$ be a partial order and $\theta$ a regular cardinal. Suppose $\mathbb{P}$ is ${<}\theta$-closed and preserves cardinals. If $G$ is $\mathbb{P}$-generic, then $H_{\theta}^{V} = H_{\theta}^{V[G]}$. \end{fact}
\begin{proof} Let $G$ be $\mathbb{P}$-generic and $w \in H_{\theta}^{V[G]}$. Let $\delta = | \text{trcl}\{w\}|$ and suppose $f: \delta \rightarrow \text{trcl}\{w\}$ is a bijection. Let $ R \subseteq \delta \times \delta $ be such that $ (x,y) \in R $ if and only if $f(x) \in f(y)$. Then the Mostowski collapse of $(\delta,R)$ is equal to $\text{trcl}\{w\}$, from which $w$ can be recovered. Since $\mathbb{P}$ is ${<} \theta$-closed, it follows that $(\delta,R) \in V$ and hence $ w \in V$. Since $\mathbb{P}$ preserves cardinals it follows that $|\text{trcl}\{w\}|^{V}=\delta<\theta$ and hence $w \in H_{\theta}^{V}$.
\end{proof}
\begin{fact}
\label{ccHmu} Let $\mu$ be a cardinal. Suppose $\mathbb{P}$ is a forcing that is $\mu^{+}$-cc and $\mathbb{P}\subseteq H_{\mu^{+}}$. If $G$ is $\mathbb{P}$-generic, then $H_{\mu^{+}}^{V[G]}=H_{\mu^{+}}[G]$. \end{fact} \begin{proof} We proceed by $\in$-induction on the elements of $H_{\mu^{+}}^{V[G]}$. Notice that it suffices to prove the result for subsets of $\mu^{+}$, since every $ x \in H_{\mu^{+}}^{V[G]}$ is the Mostowski collapse of $(\gamma,R)$ for some $\gamma < \mu^{+}$ and $R \subseteq \gamma\times \gamma$. Let $x = \sigma[G] \in H_{\mu^{+}}^{V[G]}$ be such that $x \subseteq \mu^{+}$. As $\mathbb{P}$ is $\mu^{+}$-cc, there is an ordinal $\gamma < \mu^{+}$ such that $ x\subseteq \gamma $ and $1_{\mathbb{P}}\Vdash \sigma \subseteq \gamma$. Let $\theta := \bigcup\{ A_{\xi}\times\{\check{\xi}\} \mathrel{|}\allowbreak \xi < \gamma \}$, where for each $\xi < \gamma$ the set $A_{\xi} \subseteq \mathbb{P}$ is chosen such that:
\begin{enumerate}
\item $A_{\xi}$ is an antichain,
\item $q \in A_{\xi}$ implies $ q \Vdash \check{\xi} \in \sigma $,
\item $A_{\xi}$ is maximal with respect to the above two properties.
\end{enumerate}
Since $\mathbb{P}\subseteq H_{\mu^{+}}$ and, by the $\mu^{+}$-cc, each $A_{\xi}$ has size at most $\mu$, it follows that $\theta \in H_{\mu^{+}}$; moreover $\theta[G]=\sigma[G]$.
\end{proof}
\begin{remark}
If $\mathbb{P}$ and $\mu$ satisfy the hypothesis from Fact \ref{ccHmu} and $\sigma$ is a $\mathbb{P}$-name such that $1_{\mathbb{P}} \Vdash \sigma \subseteq H_{\mu^{+}}$, then using Fact \ref{ccHmu} we can find a $\mathbb{P}$-name $\pi \subseteq H_{\mu^{+}}$ such that $1_{\mathbb{P}} \Vdash \sigma = \pi$. \end{remark}
\begin{remark}\label{Inacc} Given a regular cardinal $\kappa$ and a limit ordinal $\beta$, we have $\cf(\kappa^{+\beta}) = \cf(\beta)$. Therefore if $\beta < \kappa^{+\beta}$ it follows that $\kappa^{+\beta}$ is singular. On the other hand if $\kappa^{+\beta} = \cf(\kappa^{+\beta}) = \cf(\beta) \leq \beta$, then $\beta$ is a weakly inaccessible cardinal, i.e. a cardinal that is a limit cardinal and regular. Thus $\kappa^{+\beta}$ is regular iff $\beta$ is a weakly inaccessible cardinal. \end{remark}
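As a quick illustration of this dichotomy (a computation from the definitions, not needed later): for $\kappa=\aleph_{1}$ and $\beta=\omega$ we have $\cf(\kappa^{+\omega})=\cf(\aleph_{1+\omega})=\cf(\aleph_{\omega})=\omega$, so $\kappa^{+\omega}$ is singular; on the other hand, if $\beta>\kappa$ is weakly inaccessible, then $\kappa^{+\beta}=\sup_{\gamma<\beta}\kappa^{+\gamma}=\beta$ is regular.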
\begin{convention} Let $\mathbb{P}$ be a set forcing and $\varphi(\sigma_0,\cdots,\sigma_n)$ a formula in the forcing language. We write $\mathbb{P} \Vdash \varphi(\sigma_0,\cdots, \sigma_n)$ if for all $p \in \mathbb{P}$ we have $p \Vdash \varphi(\sigma_0,\cdots,\sigma_n)$.
\end{convention}
\begin{lemma} Suppose $\axiomfont{GCH} $ holds. Let $\kappa$ be a regular cardinal and $\beta$ an ordinal. Then $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ preserves $\axiomfont{GCH}$, cardinals and cofinalities and if $\kappa^{+\beta}$ is a regular cardinal then there exists $\dot{\mathbb{Q}}_{\beta} \subseteq H_{\kappa^{+\beta+1}}$ a $\mathbb{P}_{\kappa,\kappa^{+\beta}}$-name such that $\mathbb{P}_{\kappa,\kappa^{+\beta}}\Vdash \mathbb{P}_{\kappa^{+\beta},\kappa^{+\beta+1}}=\dot{\mathbb{Q}}_{\beta}$.
\end{lemma} \begin{proof} We prove the lemma by induction. Besides the statement of the lemma we carry the following additional induction hypothesis:
\begin{enumerate}[$(1)_\beta$]
\item $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ preserves cardinals and cofinalities,
\item If $\kappa^{+\beta}$ is a regular cardinal and not the successor of a singular cardinal, then $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ is $\kappa^{+\beta}\text{-cc}$ and there exists $\dot{\mathbb{Q}}_{\beta} \subseteq H_{\kappa^{+\beta+1}}$ such that $\mathbb{P}_{\kappa,\kappa^{+\beta}} \Vdash (\mathbb{P}_{\kappa^{+\beta},\kappa^{+\beta+1}} = \dot{\mathbb{Q}}_{\beta}) $.
\item If $\kappa^{+\beta}$ is a successor of a singular cardinal, then $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ is $\kappa^{+\beta+1}$-cc,
\item If $\kappa^{+\beta}$ is a singular cardinal, then $|\mathbb{P}_{\kappa,\kappa^{+\beta}}| \leq \kappa^{+\beta+1}$
\end{enumerate}
For $\beta = 1 $ the lemma follows from Theorem \ref{HWW}. Suppose that $(1)_{\theta}$ to $(4)_{\theta}$ hold and that the lemma holds for all $\theta < \beta$. We will verify that $(1)_{\beta}$ to $(4)_{\beta}$ hold and that the lemma holds for $\beta$.
$\blacktriangleright $ Suppose $\beta = \theta + 1 $ for some ordinal $\theta$ such that $\kappa^{+\theta}$ is regular. From $(2)_{\theta}$ in our induction hypothesis, $\mathbb{P}_{\kappa,\kappa^{+\theta}}$ is $\kappa^{+\theta}$-cc, hence by Fact \ref{ccHmu}, for any $G$, $\mathbb{P}_{\kappa,\kappa^{+\theta}}$-generic, we have $H_{\kappa^{+\theta+1}}[G]= H_{\kappa^{+\theta+1}}^{V[G]}$. Thus there exists $\dot{\mathbb{Q}}_{\theta} \subseteq H_{\kappa^{+\theta+1}}$ such that $\mathbb{P}_{\kappa,\kappa^{+\theta}} \Vdash ``\mathbb{P}_{\kappa^{+\theta},\kappa^{+\theta+1}} = \dot{\mathbb{Q}}_{\theta}"$, and hence $\mathbb{P}_{\kappa,\kappa^{+\beta}}=\mathbb{P}_{\kappa,\kappa^{+\theta}}*\dot{\mathbb{Q}}_{\theta}$ is defined.
From our induction hypothesis $\mathbb{P}_{\kappa,\kappa^{+\theta}}$ preserves $\axiomfont{GCH}$, cardinals and cofinalities and from $(2)_{\theta}$ we have that $\mathbb{P}_{\kappa,\kappa^{+\theta}}$ is $\kappa^{+\theta}$-cc. We also have that $$\mathbb{P}_{\kappa,\kappa^{+\theta}} \Vdash ``\mathbb{P}_{\kappa^{+\theta},\kappa^{+\theta+1}} \text{ preserves } \axiomfont{GCH}, \text{ cardinals, cofinalities and it is } \kappa^{+\theta+1}\text{-cc}".$$
Altogether, this implies that $\mathbb{P}_{\kappa,\kappa^{+\beta}}$, which is $\mathbb{P}_{\kappa,\kappa^{+\theta}}*\dot{\mathbb{Q}}_{\theta}$, preserves $\axiomfont{GCH}$, cardinals and cofinalities. By Fact \ref{cc} we have that $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ is $\kappa^{+\beta}$-cc. Arguing as above, Fact \ref{ccHmu} applied to $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ yields $\dot{\mathbb{Q}}_{\beta} \subseteq H_{\kappa^{+\beta+1}}$ such that $\mathbb{P}_{\kappa,\kappa^{+\beta}} \Vdash ``\mathbb{P}_{\kappa^{+\beta},\kappa^{+\beta+1}} = \dot{\mathbb{Q}}_{\beta}"$, which gives $(2)_{\beta}$.
$\blacktriangleright $ Suppose $\kappa^{+\theta}$ is singular and $\beta = \theta + 1 $. By our induction hypothesis $(4)_{\theta}$ we have $|\mathbb{P}_{\kappa,\kappa^{+\theta}}| \leq \kappa^{+\theta+1}$. As $ \dot{\mathbb{Q}}_{\theta}$ is the trivial forcing, it follows that $|\mathbb{P}_{\kappa,\kappa^{+\beta}}|\leq \kappa^{+\beta}$ and $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ is $\kappa^{+\beta +1}$-cc. Therefore, by Fact \ref{ccHmu}, if $G$ is $\mathbb{P}_{\kappa,\kappa^{+\beta}}$-generic, then $H_{\kappa^{+\beta+1}}[G] = H_{\kappa^{+\beta+1}}^{V[G]}$, hence we can find $\dot{\mathbb{Q}}_{\beta} \subseteq H_{\kappa^{+\beta+1}}$ as sought. Moreover, since $\dot{\mathbb{Q}}_{\theta}$ is trivial, $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ preserves $\axiomfont{GCH}$, cardinals and cofinalities by the induction hypothesis.
$\blacktriangleright $ Suppose that $\beta$ is a limit ordinal and $\kappa^{+\beta}$ is a singular cardinal.
From our induction hypothesis we have that for every $\zeta < \beta $ the forcing $\mathbb{P}_{\kappa,\kappa^{+\zeta}}$ preserves cardinals, and by Fact \ref{Baum} $$\mathbb{P}_{\kappa,\kappa^{+\zeta}} \Vdash ``\dot{\mathbb{R}}_{\kappa^{+\zeta},\kappa^{+\beta}} \text{ is } {<}\kappa^{+\zeta}\text{-closed}."$$ Therefore all cardinals below $\kappa^{+\beta}$ are preserved, and thus $\kappa^{+\beta}$ remains a cardinal in $V[G_{\beta}]$.
As $ \cf(\kappa^{+\beta})^{+} < \kappa^{+\beta}$, we can fix $\tau < \beta$ such that $\kappa^{+\tau} \geq \cf(\kappa^{+\beta})$. From our induction hypothesis we have that $\mathbb{P}_{\kappa,\kappa^{+\tau+1}}$ preserves cardinals. From Fact \ref{Baum} we have that $\mathbb{P}_{\kappa,\kappa^{+\tau +1}}$ forces $\dot{\mathbb{R}}_{\kappa^{+\tau+1},\kappa^{+\beta}}$ to be ${<}\cf(\kappa^{+\beta})^{+}$-closed. Therefore $\cf(\kappa^{+\beta})^{V[G_{\beta}]}=(\cf(\kappa^{+\beta}))^{V[G_\tau]}= (\cf(\kappa^{+\beta}))^V$, $\left((\kappa^{+\beta})^{\cf(\kappa^{+\beta})}\right)^{V[G_{\tau}]} = \left((\kappa^{+\beta})^{\cf(\kappa^{+\beta})}\right)^{V[G_{\beta}]} $ and $(\kappa^{+\beta+1})^{V[G_\tau]} = (\kappa^{+\beta+1})^{V[G_{\beta}]}$. We have verified above that
\begin{itemize}
\item $\mathbb{P}_{\kappa,\kappa^{+\beta}}\Vdash (\kappa^{+\beta})^{V} \text{ is a cardinal}$
\item $\mathbb{P}_{\kappa,\kappa^{+\beta}} \Vdash (\cf(\kappa^{+\beta}))=\cf^{V}(\kappa^{+\beta})$
\item $\mathbb{P}_{\kappa,\kappa^{+\beta}}\Vdash 2^{\kappa^{+\beta}}=\kappa^{+\beta+1}$
\end{itemize}
It is also clear from the above that $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ preserves $\axiomfont{GCH}$, cardinals and cofinalities below $\kappa^{+\beta}$.
From our induction hypothesis $(2)_{\theta}$ it follows that for each $\theta < \beta$ we have $|\dot{\mathbb{Q}}_{\theta}|\leq \kappa^{+\theta+1}$, then using $\axiomfont{GCH}$ it follows that $|\mathbb{P}_{\kappa,\kappa^{+\beta}}| \leq \kappa^{+\beta+1}$ and hence $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ is $\kappa^{+\beta+2}$-cc.
Thus $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ preserves $\axiomfont{GCH}$, cardinals and cofinalities above $\kappa^{+\beta+2}$.
$\blacktriangleright$ If $\kappa^{+\beta}$ is a limit cardinal and regular, then $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ is the direct limit of $\langle \mathbb{P}_{\kappa,\kappa^{+\tau}},\dot{\mathbb{Q}}_{\tau} \mathrel{|}\allowbreak \tau<\beta \rangle$. From our induction hypothesis $(2)_{\theta}$ for $\theta < \beta$, we have $|\dot{\mathbb{Q}}_{\tau}| \leq \kappa^{+\tau+1}$ for every $\tau<\beta$. Therefore $|\mathbb{P}_{\kappa,\kappa^{+\beta}}| \leq \kappa^{+\beta}$ and hence $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ is $\kappa^{+\beta+1}$-cc and preserves $\axiomfont{GCH}$, cardinals and cofinalities at cardinals greater than or equal to $\kappa^{+\beta}$. From our induction hypothesis we have that cofinalities, cardinals and $\axiomfont{GCH}$ are preserved below $\kappa^{+\beta}$. Hence $\mathbb{P}_{\kappa,\kappa^{+\beta}}$ preserves cofinalities, cardinals and $\axiomfont{GCH}$.
\end{proof}
\begin{comment}
\begin{subclaim} If $G_{0}$ is $\mathbb{P}_{\kappa}$-generic and $G_{1}$ is $\mathbb{Q}^{V[G_{0}]}_{\kappa^{+}}$-generic, then $H_{\lambda^{+}}^{V[G_{0}][G_{1}]}=H_{\lambda^{+}}^{V[G_{0}]}$.
\end{subclaim}
\begin{proof} We have that $H^{V[G_{0}][G_{1}]}_{\lambda^{+}} = L_{\lambda^{+}}[B] $. Since $\mathbb{Q}_{\lambda^{+}}$ is ${<}\lambda^{+}$-closed, it follows that $(B^{<\lambda^{+}} )^{V[G_{0}][G_{1}]} = (B^{<\lambda^{+}})^{V[G_{0}]}$ and hence for every $\gamma < \lambda^{+}$ we have $B\mathbin\upharpoonright \gamma \in N_{\lambda^{+}} = L_{\lambda^{+}}[A]=H^{V[G_{0}]}_{\lambda^{+}}$. This implies that $L_{\lambda^{+}}[B] \subseteq L_{\lambda^{+}}[A] $, which verifies the subclaim.
\end{proof}
From $H_{\lambda^{+}}^{V[G_{0}][G_{1}]} = H_{\lambda^{+}}^{V[G_{0}]}$, it follows that there is a club $C$ in $\lambda^{+}$ such that $\vec{N} \mathbin\upharpoonright C = \vec{M} \mathbin\upharpoonright C$.
It is clear that if $\alpha \in (\lambda,\lambda^{+}]$ then $\vec{N}^{\frown}\vec{M} \mathbin\upharpoonright_{\lambda^{++} \setminus \lambda^{+}}$ witnesses $\axiomfont{LCC}(\lambda,\lambda^{++}]$ at $\alpha$. We are left with verifying it for $\alpha \in (\lambda^{+},\lambda^{++}]$. Consider $\vec{\mathfrak{B}}=\langle \mathfrak{B}_{\tau} \mathrel{|}\allowbreak \tau < |\alpha| = \lambda^{+} \rangle$ a continuous chain obtained from the fact that $\vec{M}$ witnesses $\axiomfont{LCC}(\lambda^{+},\lambda^{++}]$ at $\alpha$.
For each $\tau \in \vec{B} $ there is $\beta(\tau) \in \lambda^{+}$ such that $ \clps(B_{\tau}) = M_{\beta(\tau)}$ and $\{\beta(\tau) \mathrel{|}\allowbreak \tau < \lambda^{+}\}$ forms a club in $\lambda^{+}$. Then $D = C \cap \{\beta(\tau) \mathrel{|}\allowbreak \tau < \lambda^{+}\}$ is a club and $\vec{B}\mathbin\upharpoonright \{ \tau \mathrel{|}\allowbreak \beta(\tau) \in D \}$ is the sought chain. \end{comment}
Lemma \ref{AbstractLimit}, below, will be used in a context where $W_{\tau}= V[G_{\tau}]$ and $G_{\tau}$ is $\mathbb{P}_{\kappa,\kappa^{+\tau}}$-generic.
\begin{lemma}\label{AbstractLimit} Let $\langle W_{\tau} \mathrel{|}\allowbreak \tau \leq \beta \rangle$ be a sequence of transitive proper classes that model $\axiomfont{ZFC}$ and suppose that $\tau_0 < \tau_1 <\beta $ implies $W_{\tau_0} \subseteq W_{\tau_{1}}$ and $\card^{W_{\tau_{0}}}=\card^{W_{\tau_{1}}}$. Suppose further that the following hold:
\begin{enumerate}
\item for each $\tau < \beta$ the following holds in $W_{\tau}$: there exists $A_{\tau} \subseteq \kappa^{+\tau} $ such that $ \vec{M}^{\tau} = \langle L_{\zeta}[A_{\tau}] \mathrel{|}\allowbreak \zeta < \kappa^{+\tau + 1} \rangle$ witnesses $\axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\tau})$,
\item For $\tau_0 < \tau_1 < \beta $ we have $H_{\tau_{0}^{+}}^{W_{\tau_0}}=L_{\tau_{0}^{+}}[A_{\tau_0}] = L_{\tau_{0}^{+}}[A_{\tau_1}]=H_{\tau_{0}^{+}}^{W_{\tau_1}}$,
\item for every $\tau < \beta$ we have $H_{\tau}^{W_{\tau}} = H_{\tau}^{W_{\beta}}$ and
\item $$\mathbb{A}:=\bigcup\{ A_\tau \mathbin\upharpoonright (\kappa^{+\tau},\kappa^{+\tau+1}) \mathrel{|}\allowbreak \reg(\kappa^{+\tau}) \wedge \tau < \beta \} \cup \bigcup \{A_\tau \mathbin\upharpoonright (\kappa^{+\tau},\kappa^{+\tau+2}) \mathrel{|}\allowbreak \text{Sing}(\kappa^{+\tau}) \wedge \tau < \beta\}$$ is an element of $ W_{\beta}$. \end{enumerate}
Then $\vec{M} = \langle L_{\zeta}[\mathbb{A}] \mathrel{|}\allowbreak \zeta < \kappa^{+\beta} \rangle $ witnesses $\axiomfont{LCC}_{\reg}(\kappa, \kappa^{+\beta})$ in $W_{\beta}$. \end{lemma}
\begin{proof} We work in $W_{\beta}$. Let $\alpha \in (\kappa,\kappa^{+\beta})$ such that $|\alpha|$ is a regular cardinal. Let $ \mathbb{S}= \langle L_{\alpha}[\mathbb{A}], \in, (\mathcal{F}_{n})_{n\in\omega} \rangle \in H_{|\alpha|^{+}}$.
We will find $\vec{B}$ that witnesses $\axiomfont{LCC}$ at $\alpha$ for $\mathbb{S}$. There is $\vec{B_{0}} \in W_{\tau}$ where $\kappa^{+\tau} = |\alpha|$, which witnesses $\axiomfont{LCC}$ at $\alpha$ in $W_{\tau}$ with respect to $(\mathcal{F}_{n})_{n \in \omega}$. Since $L_{\tau}[A_{\tau}]=H_{\tau}^{W_{\tau}} = H_{\tau}^{W_{\tau^{+}}}=L_{\tau}[A_{\tau^{+}}]$, it follows that there is a club $C \subseteq \kappa^{+\tau}$ such that $ \iota \in C $ implies $L_{\iota}[A_{\tau}] = L_{\iota}[\mathbb{A}]$. Thus $\vec{B}= \vec{B_0} \mathbin\upharpoonright C $ will witness $\axiomfont{LCC}$ at $\alpha$ with respect to $(\mathcal{F}_n)_{n \in \omega}$ in $W_{\beta}$.
\end{proof}
\begin{lemma}\label{Succ} Let $\kappa$ be a regular cardinal and $\beta$ an ordinal. Suppose that there exists $\vec{M} = \langle L_{\alpha}[A] \mathrel{|}\allowbreak \kappa \leq \alpha < \kappa^{+\beta} \rangle$ which witnesses $\axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\beta})$ and $\Psi(\vec{M},\kappa,\kappa^{+\beta})$ holds. If $\kappa^{+\beta}$ is a regular cardinal, then $\mathbb{P}_{\kappa^{+\beta},\kappa^{+\beta+1}} \Vdash \axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\beta+1})$ and if $ \kappa^{+\beta}$ is a singular cardinal then $\mathbb{P}_{\kappa^{+\beta+1},\kappa^{+\beta+2}} \Vdash \axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\beta+2}) \wedge \Psi(\vec{M},\kappa,\kappa^{+\beta}) $. \end{lemma} \begin{proof} We split the proof into two cases depending on whether $\kappa^{+\beta}$ is regular or not.
$\blacktriangleright$ Suppose $\kappa^{+\beta}$ is a regular cardinal. Since $\mathbb{P}_{\kappa^{+\beta},\kappa^{+\beta+1}} $ is ${<} \kappa^{+\beta}$-closed, it follows by Fact \ref{Htheta} that for $G$, $\mathbb{P}_{\kappa^{+\beta},\kappa^{+\beta+1}}$-generic, we have $(H_{\kappa^{+\beta}})^{V[G]} = (H_{\kappa^{+\beta}})^{V}$. Working in $V[G]$, let $B \subseteq (\kappa^{+\beta},\kappa^{+\beta+1})$ be such that $\langle L_{\alpha}[B] \mathrel{|}\allowbreak \alpha < \kappa^{+\beta+1} \rangle $ witnesses $\axiomfont{LCC}_{\reg}(\kappa^{+\beta},\kappa^{+\beta+1})$.
We then let $\vec{N}= \langle L_{\alpha}[C] \mathrel{|}\allowbreak \alpha < \kappa^{+\beta+1} \rangle $, where $C = (A \cap \kappa^{+\beta}) \cup (B \setminus \kappa^{+\beta})$; this filtration witnesses $\axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\beta+1})$ in $V[G]$.
$\blacktriangleright$ Suppose $\kappa^{+\beta}$ is a singular cardinal. Let $G$ be $\mathbb{P}_{\kappa^{+\beta+1},\kappa^{+\beta+2}}$-generic over $V$. From Fact \ref{Htheta} it follows that for every cardinal $\theta < \kappa^{+\beta+1}$ we have $H_{\theta}^{V} = H_{\theta}^{V[G]}$.
Let $ B \subseteq \kappa^{+\beta+2}$ be such that $\vec{N}= \langle L_{\gamma}[B] \mathrel{|}\allowbreak \gamma < \kappa^{+\beta+2} \rangle$ witnesses $\axiomfont{LCC}(\kappa^{+\beta+1},\kappa^{+\beta+2}) $ in $V[G]$. Let $ C:= A \cup (B \setminus \kappa^{+\beta})$. Then $\vec{W}:= \langle L_{\alpha}[C] \mathrel{|}\allowbreak \alpha < \kappa^{+\beta+2} \rangle $ witnesses $\axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\beta+2})$.
\end{proof}
\begin{thmc} \label{succSing} If $\axiomfont{GCH}$ holds, $\kappa$ is a regular cardinal and $\beta$ is an ordinal, then there is a set forcing $\mathbb{P}$ which is ${<}\kappa$-directed closed, $\kappa^{+\beta+1}$-cc and $\axiomfont{GCH}$ preserving, such that in $V^{\mathbb{P}}$ there is a filtration $\vec{M}=\langle M_{\alpha} \mathrel{|}\allowbreak \alpha < \kappa^{+\beta} \rangle $ such that $\Psi(\vec{M},\kappa,\kappa^{+\beta})$ holds and $\langle M_{\kappa^{+\beta}},\in,\vec{M}\rangle \models \axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\beta}) $. \end{thmc} \begin{proof} We prove by induction that the following hold:
\begin{enumerate}
\item for each $\tau < \beta$ there exists $A_{\tau} \subseteq \kappa^{+\tau}$ in $V[G_{\tau}]$ such that, in $V[G_{\tau}]$, the filtration $ \vec{M}^{\tau} = \langle L_{\zeta}[A_{\tau}] \mathrel{|}\allowbreak \zeta < \kappa^{+\tau + 1} \rangle$ witnesses $\axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\tau})$ and $\Psi(\vec{M}^{\tau},\kappa,\kappa^{+\tau})$ holds,
\item For $\tau_0 < \tau_1 < \beta $ we have $H_{\tau_{0}^{+}}^{V[G_{\tau_0}]}=L_{\tau_{0}^{+}}[A_{\tau_0}] = L_{\tau_{0}^{+}}[A_{\tau_1}]=H_{\tau_{0}^{+}}^{V[G_{\tau_1}]}$,
\item for every $\tau < \beta$ we have $H_{\tau}^{V[G_{\tau}]} = H_{\tau}^{V[G_{\beta}]}$ and
\item $$\mathbb{A}:=\bigcup\{ A_\tau \mathbin\upharpoonright (\kappa^{+\tau},\kappa^{+\tau+1}) \mathrel{|}\allowbreak \reg(\kappa^{+\tau}) \wedge \tau < \beta \} \cup \bigcup \{A_\tau \mathbin\upharpoonright (\kappa^{+\tau},\kappa^{+\tau+2}) \mathrel{|}\allowbreak \text{Sing}(\kappa^{+\tau}) \wedge \tau < \beta\}$$ is an element of
$ V[G_{\beta}]$.
\end{enumerate}
If $\beta=1$ the statement follows from Theorem \ref{HWW}. If $\beta = \theta+1$, from our induction hypothesis and Lemma \ref{Succ} it follows that $\mathbb{P}_{\kappa,\kappa^{+\beta}} \Vdash \axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\beta})$.
If $\beta$ is a limit ordinal, all we need to verify in order to apply Lemma \ref{AbstractLimit} is that \begin{equation}\label{eq1} \begin{gathered} H_{\kappa^{+\tau+1}}^{V[G_{\tau}]} = H_{\kappa^{+\tau+1}}^{V[G_{\beta}]} \end{gathered} \end{equation} for every $\tau < \beta$. Since for each $\tau < \beta$ we have that $\mathbb{P}_{\kappa,\kappa^{+\tau}} \Vdash ``\dot{\mathbb{R}}_{\kappa^{+\tau},\kappa^{+\beta}} \text{ is } {<}\kappa^{+\tau}\text{-closed}"$ and $\mathbb{P}_{\kappa,\kappa^{+\tau}}$ preserves cardinals and cofinalities, \eqref{eq1} follows from Fact \ref{Htheta}.
\begin{comment}Suppose that $\kappa^{\beta}$ is a limit cardinal. Let $B = \bigcup_{\theta < \beta} A_{\theta} \mathbin\upharpoonright (\kappa^{\theta},\kappa^{+\theta + 1})$, where $A_{\theta} \subseteq \kappa^{+\theta+1}$ is the predicate witnessing $\axiomfont{LCC}(\kappa^{+\theta},\kappa^{\theta+1})$. We will prove that $\vec{M} = \langle L_{\alpha}[A] \mathrel{|}\allowbreak \alpha < \theta \rangle \models \axiomfont{LCC}_{\reg}(\kappa, \kappa^{+\theta})$.
We have that $\mathbb{P}_{\kappa,\kappa^{\zeta}} \Vdash ``\dot{\mathbb{R}}_{\kappa^{\zeta}} \text{ is } <\kappa^{+\zeta}\text{-closed}"$ and $\mathbb{P}_{\kappa,\kappa^{\zeta}}$ preserve cardinals and cofinalities. Therefore for any $\beta < \theta $ such that $\kappa^{+\beta}$ is regular we have $(H_{\kappa^{+\beta}})^{V^{\mathbb{P}_{\kappa,\kappa^{+\theta}}}} = (H_{\kappa^{+\theta}})^{V^{\mathbb{P}_{\theta}}}$
$(H_{\kappa^{+\beta+1}})^{V^{\mathbb{P}_{\kappa,\kappa^{+\beta}}}} = (H_{\kappa^{+\beta+1}})^{V^{\mathbb{P}_{\beta+1}}}.$
\end{comment}
\end{proof}
\section{Applications}
In this section we show that the iteration of the forcing from \cite{HWW} can replace some uses of the main forcing in \cite{FHl}.
\begin{comment} \begin{lemma}\label{Delta1} Let $\kappa$ be a regular cardinal and $\alpha\in \ord$. Suppose $\axiomfont{GCH}$ holds in $V$. Let $\mu $ be a regular cardinal such that $\mu^{+}\leq \kappa$. Then $\mathbb{P}_{\mu,\kappa^{+}}$ forces that there exists $\vec{M}= \langle M_{\alpha} \mathrel{|}\allowbreak \alpha < \kappa^{+} \rangle$, a filtration, such that \begin{enumerate}
\item $H_{\kappa}=M_{\kappa}$, $H_{\kappa^{+}}=M_{\kappa^{+}}$
\item there is $ A \subseteq \kappa^{+}$ such that for all $\alpha < \kappa^{+}$ we have $ M_{\alpha}= L_{\alpha}[A]$
\item $\langle M,\in,\vec{M} \rangle \models \axiomfont{LCC}(\kappa,\kappa^{+})$
there is a well order of $H_{\kappa^{+}}$ that is $\Delta_{1}$ on a parameter $a \subseteq \kappa$
\end{enumerate} \end{lemma} \begin{proof} Let $\vec{M}$ be given by Theorem~C. We know by Theorem~C that (1) and (2) hold for $\vec{M}$. Let us verify that (3) also holds. Let $ \langle f_{\beta} \mathrel{|}\allowbreak \kappa < \beta < \kappa^{+} \rangle $ be the sequence of bijections obtained by forcing with $\mathbb{P}_{\kappa,\kappa^{+}}$. Then we have that $ \beta \in A$ iff $\{\gamma < \alpha \mathrel{|}\allowbreak \otp(f_{\beta}[\delta]) \in A \}$ contains a club and $\beta \not\in A $ iff $ \{ \gamma < \alpha \mathrel{|}\allowbreak \otp(f_{\beta}[\gamma]) \not\in A \} $ contains a club.
\end{proof} \end{comment}
\begin{defn} Let $\mu, A, \vec{f}$ be sets. We say that $\Xi(A,\mu,\vec{f})$ holds iff $\mu$ is a regular cardinal, $A $ is a function such that $A:\mu^{+}\rightarrow 2 $ and $\vec{f}$ is a sequence of bijections $\langle f_{\beta} \mathrel{|}\allowbreak \mu \leq \beta < \mu^{+} \rangle $ such that for each $\beta \in [\mu,\mu^{+})$, $f_{\beta}:\mu \rightarrow \beta$, and the following hold:
\begin{itemize}
\item $H_{\mu^{+}} = L_{\mu^{+}}[A]$,
\item for every $\xi \in \mu^{+}\setminus\mu$, $ (\xi,1)\in A \leftrightarrow \exists C ( C \text{ is a club } \wedge C \subseteq \{\gamma < \mu \mathrel{|}\allowbreak \otp(f_\xi[\gamma])\in A \})$,
\item for every $\xi \in \mu^{+}\setminus\mu$, $ (\xi,0) \in A \leftrightarrow \exists C ( C \text{ is a club } \wedge C \subseteq \{\gamma < \mu \mathrel{|}\allowbreak \otp(f_\xi[\gamma])\not\in A \})$.
\end{itemize}
\end{defn} \begin{lemma}\label{complexity} Let $\mu$ be a regular cardinal, $A $ a function $A:\mu^{+}\rightarrow 2 $ and $\vec{f}=\langle f_{\beta} \mathrel{|}\allowbreak \mu \leq \beta < \mu^{+} \rangle $ a sequence of bijections such that $f_{\beta}:\mu \rightarrow \beta$ for each $\beta \in [\mu,\mu^{+})$. Suppose $\Xi(A,\mu,\vec{f})$ holds. Given $\zeta \in \mu^{+}\setminus \mu$, the following are equivalent:
\begin{enumerate}
\item $\zeta \in A \setminus \mu $,
\item $\exists f \exists C ( f:\mu \rightarrow \zeta \wedge f \text{ is a bijection } \wedge C \text{ is a club } \wedge C \subseteq \{\gamma < \mu \mathrel{|}\allowbreak \otp(f[\gamma])\in A \})$,
\item $ \forall f \exists C ( (f:\mu \rightarrow \zeta \wedge f \text{ is a bijection }) \rightarrow (C \text{ is a club } \wedge C \subseteq \{\gamma < \mu \mathrel{|}\allowbreak \otp(f[\gamma])\in A \}))$,
\item $\forall f \forall C ( (f:\mu \rightarrow \zeta \wedge f \text{ is a bijection } \wedge C \text{ is a club }) \rightarrow C \not\subseteq \{\gamma < \mu \mathrel{|}\allowbreak \otp(f[\gamma])\not\in A \})$.
\end{enumerate} Moreover, $\xi \in A \setminus \mu$ is $\Delta_1(\{A \mathbin\upharpoonright \mu,\xi\})$ over $H_{\mu^{+}}$. \end{lemma} \begin{proof} Let $\zeta \in \mu^{+}\setminus \mu$. As $\Xi(A,\mu,\vec{f})$ holds, it follows that
$\zeta \in A \setminus \mu \leftrightarrow \exists C ( C \text{ is a club } \wedge C \subseteq \{\gamma < \mu \mathrel{|}\allowbreak \otp(f_\zeta[\gamma])\in A \})$.
Let $f$ be a bijection from $\mu$ onto $\zeta$. From the regularity of $\mu$ it follows that $\{\gamma < \mu \mathrel{|}\allowbreak f[\gamma]=f_{\zeta}[\gamma]\}$ contains a club, hence there exists a club $C$ such that $C \subseteq \{\gamma < \mu \mathrel{|}\allowbreak \otp(f_\zeta[\gamma])\in A \}$ iff there exists a club $D$ such that $D \subseteq \{\gamma < \mu \mathrel{|}\allowbreak \otp(f[\gamma])\in A \}$.
Thus (1), (2) and (3) are equivalent.
Let us verify that (4) is equivalent to (1). Since $\Xi(A,\mu,\vec{f})$ holds, it follows that $ \zeta \not\in A $ iff there exists a club $C$ such that $ C \subseteq \{ \gamma < \mu \mathrel{|}\allowbreak \otp(f_{\zeta}[\gamma]) \not\in A \}$. Let $f$ be a bijection $f:\mu \rightarrow \zeta$. As before, from the regularity of $\mu$ it follows that there exists a club $C$ such that $ C \subseteq \{ \gamma < \mu \mathrel{|}\allowbreak \otp(f_{\zeta}[\gamma]) \not\in A \}$ iff there exists a club $D$ such that $ D \subseteq \{ \gamma < \mu \mathrel{|}\allowbreak \otp(f[\gamma]) \not\in A \}$. Negating both sides, it follows that (1) is equivalent to (4).
The moreover part follows from the equivalence between (1), (2) and (4), and the fact that $ C \subseteq \{\gamma < \mu \mathrel{|}\allowbreak \otp(f[\gamma])\not\in A \}$ is equivalent to $ \forall h \forall \gamma \forall \beta ( (\gamma \in C \wedge h:\beta \rightarrow f[\gamma] \wedge h \text{ is an isomorphism}) \rightarrow (\beta,0) \in A)$.
\end{proof} Our next result, Theorem~D, is an adaptation of \cite[Theorem~39]{FHl}.
\begin{thmd} Suppose that $\theta$ is an ordinal, $\kappa$ is a regular cardinal and $\kappa^{+\theta}$ is a regular cardinal. Then $\mathbb{P}_{\kappa,\kappa^{+\theta+1}}$ forces that $\axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\theta+1}) $ holds and that there exists a well order of $H_{\kappa^{+\theta+1}}$ that is $\Delta_1$ definable over $H_{\kappa^{+\theta+1}}$ in a parameter $a \subseteq \kappa^{+\theta}$. \end{thmd} \begin{proof} We have that $\mathbb{P}_{\kappa,\kappa^{+\theta+1}}$ forces that there exists $\vec{M}= \langle M_{\alpha} \mathrel{|}\allowbreak \alpha < \kappa^{+\theta+1} \rangle$, a filtration, such that \begin{enumerate}
\item $H_{\kappa^{+\beta}}=M_{\kappa^{+\beta}}$ for every $\beta \leq \theta+1$,
\item there exists $ A \subseteq \kappa^{+\theta+1}$ such that for all $\alpha < \kappa^{+\theta+1}$ we have $ M_{\alpha}= L_{\alpha}[A]$
\item $\langle M,\in,\vec{M} \rangle \models \axiomfont{LCC}_{\reg}(\kappa,\kappa^{+\theta+1})$
\end{enumerate}
Let $ \langle f_{\beta} \mathrel{|}\allowbreak \kappa^{+\theta} \leq \beta < \kappa^{+\theta+1} \rangle $ be the sequence of bijections obtained by forcing with $\mathbb{P}_{\kappa^{+\theta},\kappa^{+\theta+1}}$, see Remark \ref{SeqBijections}. Then we have that $ \beta \in A$ iff $\{\gamma < \kappa^{+\theta} \mathrel{|}\allowbreak \otp(f_{\beta}[\gamma]) \in A \}$ contains a club and $\beta \not\in A $ iff $ \{ \gamma < \kappa^{+\theta} \mathrel{|}\allowbreak \otp(f_{\beta}[\gamma]) \not\in A \} $ contains a club, for every $\beta \in [\kappa^{+\theta},\kappa^{+\theta+1})$.
Therefore by Lemma \ref{complexity} we can define $A \cap \kappa^{+\theta+1}$ in $H_{\kappa^{+\theta+1}}$ using $A \cap \kappa^{+\theta}$ with a $\Delta_1$ formula. The concatenation of the definition of $A$ with the $\Delta_1$ well order of $L_{\kappa^{+\theta+1}}[A]$ gives the $\Delta_1$ well order we sought.
\end{proof}
\begin{cord} Suppose that $\theta$ is an ordinal, $\kappa$ is a regular cardinal and $\kappa^{+\theta}$ is a regular cardinal. Then $\mathbb{P}_{\kappa,\kappa^{+\theta+1}}$ forces that for every stationary $S \subseteq \kappa$ we have $\Dl^{*}_{S}(\Pi^{1}_{2})$ and in particular $\diamondsuit(S)$.
\end{cord} \begin{proof} Follows from Theorem~D and \cite[Theorem~2.24]{FMR}.
\end{proof}
\section{Acknowledgments}
The author is grateful to Assaf Rinot and Miguel Moreno for several discussions on local club condensation. The author thanks Liuzhen Wu and Peter Holy for discussions on how to force local club condensation, and Farmer Schlutzenberg and Martin Zeman for discussions on condensation properties of extender models.
\end{document} |
\begin{document}
\title[An IVP with a time-measurable pseudo-differential operator in a weighted $L_p$-space] {A regularity theory for an initial value problem with a time-measurable pseudo-differential operator in a weighted $L_p$-space }
\author[J.-H. Choi]{Jae-Hwan Choi} \address[J.-H. Choi]{Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology, 291, Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea} \email{[email protected]}
\author[I. Kim]{Ildoo Kim} \address[I. Kim]{Department of Mathematics, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea} \email{[email protected]}
\author[J.B. Lee]{Jin Bong Lee} \address[J.B. Lee]{Research Institute of Mathematics, Seoul National University, Seoul 08826, Republic of Korea} \email{[email protected]}
\thanks{The second author has been supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No.2020R1A2C1A01003959). The third author has been supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No.2021R1C1C2008252)}
\subjclass[2020]{35B30, 35S05, 35B65, 47G30}
\keywords{Initial value problem, (time-measurable) Pseudo-differential operator, Muckenhoupt's weight, Variable smoothness}
\maketitle \begin{abstract} We study initial value problems with (time-measurable) pseudo-differential operators in weighted $L_p$-spaces. Initial data are given in generalized Besov spaces and regularity assumptions on the symbols of our pseudo-differential operators depend on the dimension of the space variable and on the weights. These regularity assumptions are characterized in terms of certain properties of the weights. However, no regularity condition is imposed with respect to the time variable. We show the uniqueness, existence, and maximal regularity estimates of a solution $u$ in weighted Sobolev spaces with variable smoothness. We emphasize that the weight given in our estimates with respect to the time variable is beyond the scope of Muckenhoupt's class. \end{abstract}
\mysection{Introduction}
Pseudo-differential operators naturally arise in theories of partial differential equations when one considers equations with fractional smoothness depending on variables. In this paper, we study the following homogeneous initial value problem: \begin{equation}\label{eqn:model eqn}
\begin{cases}
\partial_tu(t,x)=\psi(t,-i\nabla)u(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=u_0(x),\quad &x\in\bR^d,
\end{cases} \end{equation} where $\psi(t,-i\nabla)$ is a (time-measurable) pseudo-differential operator whose (complex-valued) symbol is $\psi(t, \xi)$, \textit{i.e.} \begin{align*} \psi(t,-i\nabla)u(t,x)= \cF^{-1}\left[ \psi(t,\xi) \cF[u(t,\cdot)](\xi) \right] (x). \end{align*} Here $\cF$ and $\cF^{-1}$ denote the $d$-dimensional Fourier transform and Fourier inverse transform on $\bR^d$, respectively, \textit{i.e.} $$
\mathcal{F}[f](\xi) := \frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} e^{-i \xi \cdot x} f(x) dx, \,\,\,\,
\mathcal{F}^{-1}[f](x) := \frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} e^{ix\cdot \xi} f(\xi) d\xi. $$ Our goal is to establish a well-posedness result and a maximal regularity theory for equation \eqref{eqn:model eqn} in weighted $L_p$-spaces with appropriate assumptions on symbols $\psi(t,\xi)$ depending on weights. To elaborate on these conditions on $\psi(t,\xi)$, we introduce two definitions, the so-called ellipticity condition and regular upper bound condition. We say that a symbol $\psi(t,\xi)$ or the operator $\psi(t,-i\nabla)$ satisfies {\bf an ellipticity condition} (with ($\gamma$, $\kappa$))
if there exist $\gamma\in(0,\infty)$ and $\kappa\in(0,1]$ such that \begin{align}\label{condi:ellipticity}
\frR [-\psi(t,\xi)]\geq \kappa |\xi|^{\gamma},\quad \forall (t,\xi)\in (0,\infty)\times\bR^d, \end{align} where $\frR[z]$ denotes the real part of the complex number $z$. On the other hand, for $n \in \bN$, we say that a symbol $\psi(t,\xi)$ or the operator $\psi(t,-i\nabla)$ has {\bf an $n$-times regular upper bound} (with ($\gamma$, $M$)) if there exist positive constants $\gamma$ and $M$ such that \begin{align}\label{condi:reg ubound}
|D^{\alpha}_{\xi}\psi(t,\xi)|\leq M|\xi|^{\gamma-|\alpha|},\quad \forall (t,\xi)\in (0,\infty) \times(\bR^{d}\setminus\{0\}), \end{align}
for any ($d$-dimensional) multi-index $\alpha$ with $|\alpha| \leq n$. The positive number $\gamma $ is called the order of the operator $\psi(t,-i\nabla)$ and we fix $\gamma \in (0,\infty)$ throughout the whole paper. We want to find a minimal $n$ in \eqref{condi:reg ubound} and a maximal $r$ depending on weights $\mu(dt)$ and $w(dx)$ so that \begin{align*}
\int_0^T \|u(t,\cdot)\|^q_{H_p^r(\bR^d, w )}\mu(dt)
\leq N \int_{\bR^d} |u_0(x)|^p w(dx), \end{align*} where $N$ is independent of $u_0$ and $H_p^r(\bR^d, w\,dx)$ denotes the classical weighted Bessel potential space (weighted fractional Sobolev space) whose norm is given by \begin{align*}
\| f \|_{H_p^r(\bR^d,w\,dx)}
:= \| (1 - \Delta)^{r/2} f \|_{L_p(\bR^d,w\,dx)}
:= \left(\int_{\bR^d} |(1 - \Delta)^{r/2} f(x)|^pw(dx)\right)^{1/p} \end{align*} and \begin{align*}
(1 - \Delta)^{s/2} f(x)= \cF^{-1}\left[ (1+|\xi|^2)^{s/2} \cF[f](\xi) \right] (x). \end{align*}
In particular, if $ w(dx) = |x|^{b}dx$ with $ b \in \left(-d,d(p-1)\right)$ and $\mu(dt) = t^adt$ with $a \in (-1,\infty)$, then for $p=q \in [2,\infty)$, $n= \left\lfloor \frac{b+d}{p} \right\rfloor+2$, and $r= \frac{\gamma (a+1)}{p}$, we have \begin{align*}
\int_0^T \left(\int_{\bR^d} | (1-\Delta)^{r/2}u(t,\cdot)|^p |x|^b dx \right)^{q/p} t^a dt
\leq N \int_{\bR^d} |u_0(x)|^p |x|^b dx, \end{align*} which can be easily obtained from our main result; here $\left\lfloor \frac{b+d}{p} \right\rfloor$ denotes the greatest integer less than or equal to $\frac{b+d}{p}$. We consider $w(dx)= w(x)dx$ with a function $w$ in Muckenhoupt's $A_p$-class and $\mu(dt)$ characterized based on the Laplace transform of $\mu$. To introduce these results with general weights, however, we need to mention that the classical weighted fractional Sobolev space is not suited to contain a solution $u$ and initial data $u_0$ in an optimal way. Thus we need generalized weighted Besov spaces and Sobolev spaces with variable smoothness to restrict solutions and data to an optimal class depending on given weights. These generalizations are possible due to the Littlewood-Paley theory. Before introducing these general weighted spaces associated with Littlewood-Paley projections, we first recall the most important weight class, the so-called Muckenhoupt class, in $L_p$-spaces. \begin{defn} For $p\in(1,\infty)$, let $A_p(\bR^d)$ be the class of all nonnegative and locally integrable functions $w$ satisfying \begin{align}
\notag [w]_{A_p(\bR^d)} &:=\sup_{x_0\in\bR^d,r>0}\left(-\hspace{-0.40cm}\int_{B_r(x_0)}w(x)dx\right)\left(-\hspace{-0.40cm}\int_{B_r(x_0)}w(x)^{-1/(p-1)}dx\right)^{p-1}\\
\label{def ap}
&:=\sup_{x_0\in\bR^d,r>0}\left(\frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}w(x)dx\right)\left(\frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}w(x)^{-1/(p-1)}dx\right)^{p-1} <\infty, \end{align}
where $|B_r(x_0)|$ denotes the Lebesgue measure of $B_r(x_0) := \{ x \in \bR^d : |x-x_0| < r\}$. The class $A_{\infty}(\bR^d)$ could be defined as the union of $A_p(\bR^d)$ for all $p\in(1,\infty)$, \textit{i.e.}
$$
A_\infty(\bR^d)=\bigcup_{p\in(1,\infty)}A_p(\bR^d).
$$ \end{defn} We can relate a constant to each $w \in A_p(\bR^d)$ to characterize sufficient smoothness of our symbols. For each $w \in A_p(\bR^d)$, we define \begin{align}\label{2021-01-19-01} R_{p,d}^{w} := \sup \left\{ p_0 \in (1,2] : w \in A_{p/p_0}(\bR^d) \right\} \end{align} and say that $R_{p,d}^{w}$ is \textbf{the regularity constant of the weight $w\in A_p(\bR^d)$}. Due to the reverse H\"older property of Muckenhoupt's class, $R_{p,d}^{w}$ is well-defined, \textit{i.e.} $R_{p,d}^{w} \in (1,2]$. We give a regularity assumption on a symbol $\psi(t,\xi)$ on the basis of the regularity constant of the given weight $w\in A_p(\bR^d)$. More precisely, we assume that $\psi$ has a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$. Next we introduce generalized weighted Besov spaces and Sobolev spaces mentioned above. \begin{defn}\label{def:bessel besov}
We choose a function $\Psi$ in the Schwartz class $\mathcal{S}(\mathbb{R}^d)$ whose Fourier transform $\mathcal{F}[\Psi]$ is nonnegative, supported in the annulus $\{\xi \in \bR^d : \frac{1}{2}\leq |\xi| \leq 2\}$, and $\sum_{j\in\mathbb{Z}} \mathcal{F}[\Psi](2^{-j}\xi) = 1$ for all $\xi \not=0$.
Then we define the Littlewood-Paley projection operators $\Delta_j$ and $S_0$ as $\mathcal{F}[\Delta_j f](\xi) = \mathcal{F}[\Psi](2^{-j}\xi) \mathcal{F}[f](\xi)$ and $S_0f = \sum_{j\leq 0} \Delta_j f$, respectively. Using the notations $\Delta_j$ and $S_0$, we introduce weighted Bessel potential and Besov spaces. Let $p\in(1,\infty)$, $q\in(0,\infty)$, and $w\in A_p(\bR^d)$. For sequences $\boldsymbol{r}: \bN \to \bR$ and $\tilde{\boldsymbol{r}}: \bZ \to \bR$, we can define the following Besov and Sobolev spaces with variable smoothness.
\begin{enumerate}[(i)]
\item
((Inhomogeneous) Weighted Bessel potential space)
We denote by $H_p^{\boldsymbol{r}}(\bR^d,w\,dx)$ the space of all $f \in \cS'(\bR^d)$ satisfying
$$
\|f\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}:=\|S_0f\|_{L_p(\bR^d,w\,dx)}+\left\|\left(\sum_{j=1}^{\infty}|2^{\boldsymbol{r}(j)}\Delta_jf|^{2}\right)^{1/2}\right\|_{L_p(\bR^d,w\,dx)}<\infty,
$$ where $\cS'(\bR^d)$ denotes the tempered distributions on $\cS(\bR^d)$.
\item
((Inhomogeneous) Weighted Besov space)
$B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)$ denotes the space of all $f \in \cS'(\bR^d)$ satisfying
$$
\|f\|_{B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)}:=\|S_0f\|_{L_p(\bR^d,w\,dx)}+\left(\sum_{j=1}^{\infty}2^{q\boldsymbol{r}(j)}\|\Delta_jf\|_{L_p(\bR^d,w\,dx)}^q\right)^{1/q}<\infty.
$$
\item
((Homogeneous) Weighted Bessel potential space) We use $\dot{H}_{p,q}^{\tilde{\boldsymbol{r}}}(\bR^d,w\,dx)$ to denote the space of all $f \in \cS'(\bR^d)$ satisfying
$$
\|f\|_{\dot{H}_{p,q}^{\tilde{\boldsymbol{r}}}(\bR^d,w\,dx)}:=\left\|\left(\sum_{j\in\bZ}|2^{\tilde{\boldsymbol{r}}(j)}\Delta_jf|^{2}\right)^{1/2}\right\|_{L_p(\bR^d,w\,dx)}<\infty.
$$
\item
((Homogeneous) Weighted Besov space)
$\dot{B}_{p,q}^{\tilde{\boldsymbol{r}}}(\bR^d,w\,dx)$ denotes the space of all $f \in \cS'(\bR^d)$ satisfying
$$
\|f\|_{\dot{B}_{p,q}^{\tilde{\boldsymbol{r}}}(\bR^d,w\,dx)}:=\left(\sum_{j\in\bZ}2^{q\tilde{\boldsymbol{r}}(j)}\|\Delta_jf\|_{L_p(\bR^d,w\,dx)}^q\right)^{1/q}<\infty.
$$
\end{enumerate}
\end{defn}
\begin{rem} Note that if $\boldsymbol{r}(j) = sj$ for $s\in\bR$, then the space $H_p^{\boldsymbol{r}}(\bR^d, w\,dx)$ is equivalent to the classical weighted Bessel potential space $H_p^s(\bR^d, w\,dx)$ whose norm is given by $$
\| f \|_{H_p^s(\bR^d,w\,dx)} := \| (1 - \Delta)^{s/2} f \|_{L_p(\bR^d,w\,dx)} $$ (see Corollary \ref{classical sobo}). \end{rem} Finally, we are ready to mention our main result, which is Theorem \ref{22.12.27.16.53}.
We first give a rough version of the main result rather than presenting the rigorous conditions and inequalities, which might not be helpful for understanding the whole picture. Assume that the Laplace transform of a measure $\mu$ is controlled in a $\gamma$-scaling with a parameter $a \in (0,\infty)$ as follows: $$ \cL_{\mu}(2^{\gamma j}) := \int_0^\infty \exp\left( - 2^{\gamma j} t \right) \mu(dt) \leq N_{\cL_\mu} \cdot 2^{\gamma j a} 2^{-\boldsymbol{\mu}(j)},\quad \forall j\in\bZ. $$ Then for any sequence $\boldsymbol{r}:\bN \to \bR$, we can find a unique solution $u$ to \eqref{eqn:model eqn} so that
\begin{align*}
\int_{0}^T\left\|u(t,\cdot)\right\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}^q
t^a\mu\left(dt\right)
\leq N\|u_0\|^q_{B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}.
\end{align*} In particular, our class of measures $\mu(dt)$ includes $w(t)dt$ for any $w \in A_\infty(\bR)$, which may come as a surprise since the weight $w$ can be chosen independently of the exponent $q$; this is believed to be impossible for inhomogeneous problems. Indeed, Theorem~\ref{22.12.27.16.53} contains precise statements on weights and regularity of symbols.
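As a quick sanity check of this scaling condition, consider the simplest case where $\mu$ is Lebesgue measure on $(0,\infty)$ (a computation using only the definitions above): then
$$
\cL_{\mu}(2^{\gamma j})=\int_0^{\infty}e^{-2^{\gamma j}t}\,dt=2^{-\gamma j},
$$
so the displayed control holds with $N_{\cL_{\mu}}=1$ and $\boldsymbol{\mu}(j)=\gamma j(a+1)$. Taking $\boldsymbol{r}(j)=\gamma j$, the smoothness sequence of the initial data space above becomes $\boldsymbol{r}(j)-\frac{\boldsymbol{\mu}(j)}{q}=\left(\gamma-\frac{\gamma(a+1)}{q}\right)j$, i.e.\ a Besov space of classical smoothness $\gamma-\frac{\gamma(a+1)}{q}$; in particular, a stronger degeneracy of the time weight $t^{a}$ at $t=0$ permits rougher initial data.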
For previous works related to our results, we start with the related zero-initial inhomogeneous problems, \textit{i.e.} \begin{equation}
\label{20230126 01}
\begin{cases}
\partial_tu(t,x)=\psi(t,-i\nabla)u(t,x) + f(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=0,\quad &x\in\bR^d.
\end{cases} \end{equation} There is a vast literature handling these equations in $L_p$-spaces. We first mention results handling time-measurable pseudo-differential operators. We refer readers to \cite{kim2015parabolic,kim2016lplq,kim2016lp,kim2018lp} treating data and solutions in various $L_p$-spaces without weights. For a weighted theory, see the recent result of the first and second authors \cite{Choi_Kim2022}. Moreover, if the order $\gamma$ of the operator $\psi(t,-i\nabla)$ is less than or equal to $2$, then these operators become generators of stochastic processes, so-called additive processes, which are a generalization of L\'evy processes in the time direction.
Particularly for $\gamma<2$, these operators can be represented in a form of non-local operators. We refer to \cite{dong2021sobolev,gyongy2021lp,kang2021lp,kim2019lp,kim2012lp,kim2021lq,kim2021sobolev,mikulevivcius1992,mikulevivcius2014,mikulevivcius2017p,mikulevivcius2019cauchy,zhang2013maximal,zhang2013lp} for $L_p$-theories with generators of stochastic processes and non-local operators. We also introduce papers \cite{neerven2012maximal,neerven2020,portal2019stochastic} handling general operators with smooth symbols in $L_p$-spaces based on a $H^\infty$-calculus approach.
On the other hand, there are not many results treating non-zero initial value problems with general operators such as \eqref{eqn:model eqn} in $L_p$-spaces. We found \cite{Choi_Kim2023,Dong_Kim2021,Dong_Liu2022,Gallarati_Veraar2017} as recent results to non-zero initial value problems with general operators. In \cite{Choi_Kim2023}, the first and second authors considered initial value problems with generators of general additive processes.
In \cite{Gallarati_Veraar2017}, Gallarati and Veraar studied an $L_p$-maximal regularity theory of evolution equations with general operators by means of $H^\infty$-calculus. In particular, in \cite[Section 4.4]{Gallarati_Veraar2017}, they obtained solvability of an equation of the type of \eqref{eqn:model eqn} in $L_p((0,T),v\, dt; X_1)\cap W_{p}^{1}((0,T),v\, dt; X_0)$. Here $X_0$ and $X_1$ are Banach spaces which have a finite cotype and the initial data space $X_{v,p}$ is given by a trace of a certain interpolated space determined by $X_0$ and $X_1$, \textit{i.e.} $$
X_{v,p}:=\{x\in X_0:\|x\|_{X_{v,p}}<\infty\},\quad \|x\|_{X_{v,p}}:=\inf\{\|u\|_{L_p((0,T),v\, dt; X_1)\cap W_{p}^{1}((0,T),v\, dt; X_0)}:u(0)=x\}. $$ It is not easy to identify $X_{v,p}$ as a certain interpolation space in general. However, this abstract space can be characterized by a real interpolation space if $v(t)$ is given by a power-type weight $v(t)=t^\gamma$. Indeed, due to \cite[Theorem 1.8.2]{triebel1978interpolation}, $$ X_{v,p}=(X_0,X_1)_{1-\frac{(1+\gamma)}{p},p},\quad \gamma\in(-1,p-1), $$ where $(X_0,X_1)_{\theta,p}$ stands for the real interpolation space between $X_0$ and $X_1$. We refer to \cite{Hemmel_Lindemulder2022,Kohne2010,Kohne2014,Lindemulder2020,Meyries2012}, which provide such an identification for initial data spaces with the power type weights in various situations such as smooth domains, quasi-linear equations, and systems of equations.
For another generalization, time-fractional non-zero initial value problems were considered in \cite{Dong_Kim2021,Dong_Liu2022}. The authors in \cite{Dong_Kim2021,Dong_Liu2022} considered the following second-order problems with general coefficients: \begin{equation}\label{eqn:time frac}
\begin{cases}
\partial_t^\alpha u(t,x) = a^{ij}(t,x)D_{x_i x_j}u(t,x) + f(t,x), \quad &(t,x) \in (0,T)\times \mathbb{R}^d,\\
u(0, x) = u_0(x), \quad &x\in \bR^d,
\end{cases} \end{equation} In \cite{Dong_Kim2021}, a weighted Slobodeckij space is used as initial data spaces to obtain the solvability of \eqref{eqn:time frac} in $L_q((0,T),t^\gamma\, dt; W_p^2(\bR^d,w\,dx))$ for $\alpha \in (0,1]$. In \cite{Dong_Liu2022}, the range of $\alpha$ is extended to $(0,2)$.
Next we want to mention the methods we used to obtain the result. To compare the difficulties of the initial value problem with those of inhomogeneous problems such as \eqref{20230126 01}, we recall that, at least formally, a solution $u$ to \eqref{20230126 01} is given by \begin{align}\label{inhom soln}
u(t,x) = \int_0^t \int_{\mathbb{R}^d} K(t,s, x-y) f(s,y) dyds, \end{align} where \begin{align*} K(t,s, x-y) = \cF^{-1}\left[ \exp\left( \int_s^t \psi(r,\xi)dr \right) \right](x-y). \end{align*} Since the operator $f \mapsto \int_0^t \int_{\mathbb{R}^d} 1_{s<t} (-\Delta)_x^{\gamma/2}K(t,s, x-y) f(s,y) dyds$ becomes a singular integral operator on $L_p(\bR^{d+1})$, we apply well-known weighted $L_p$-boundedness of singular integral operators with Muckenhoupt's $A_p$-class to obtain the maximal regularity of a solution $u$ in weighted $L_p$-spaces. However, a solution $u$ to \eqref{eqn:model eqn} is given by \begin{align*}
u(t,x) = \int_{\mathbb{R}^d} K(t,0, x-y) u_0(y) dy \end{align*} and the integral operator $u_0 \mapsto \int_{\mathbb{R}^d} K(t,0, x-y) u_0(y) dy$ becomes an extension operator from $L_p(\bR^d)$ to $L_p(\bR^{d+1})$. Thus we cannot apply the weighted $L_p (\bR^{d+1})$-boundedness of singular integral operators, which plays a very important role for inhomogeneous problems. Moreover, for the inhomogeneous case, an $L_q(L_p)$-extension from $L_p$-estimates is easily obtained due to Rubio de Francia's extrapolation theorem. However, our weight class with respect to the time variable is larger than Muckenhoupt's $A_p(\bR)$-class, and this makes it impossible to adopt Rubio de Francia's powerful theory for $L_q(L_p)$-extensions. To overcome all these difficulties, we use the Littlewood-Paley theory and the Laplace transform. Roughly speaking, we estimate each Littlewood-Paley projection of a solution with the help of the Hardy-Littlewood maximal function and the Fefferman-Stein sharp function, depending on optimal scales given by the Laplace transform of the weighted measure with respect to the time variable.
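To indicate heuristically why the Laplace transform appears (this sketch is only meant as motivation and uses nothing beyond the ellipticity condition \eqref{condi:ellipticity}): since
$$
\left|\exp\left(\int_0^t\psi(r,\xi)\,dr\right)\right|=\exp\left(\int_0^t\frR[\psi(r,\xi)]\,dr\right)\leq e^{-\kappa t|\xi|^{\gamma}},
$$
one expects, on the $j$-th Littlewood-Paley annulus $|\xi|\approx 2^{j}$, a decay of the form $\|\Delta_j u(t,\cdot)\|_{L_p(\bR^d,w\,dx)}\lesssim e^{-c\,2^{\gamma j}t}\|\Delta_j u_0\|_{L_p(\bR^d,w\,dx)}$ for some constant $c=c(\kappa)>0$. Integrating the $q$-th power of this bound against $\mu(dt)$ produces, up to constants, the factor $\cL_{\mu}(cq\,2^{\gamma j})$, which explains why the Laplace transform of $\mu$ governs the smoothness of the admissible initial data.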
It is also remarkable that the generality of the weight $\mu$ makes our result on initial value problems more valuable, since such a weight $\mu$ cannot be covered in inhomogeneous problems. Commonly, the generality of the free data $f$ in \eqref{20230126 01} leads people to treat a non-zero initial value problem by means of a simple model operator. In more detail, consider a solution $u_1$ to \begin{equation*}
\begin{cases}
\partial_tu_1(t,x)= \Delta^{\gamma/2}u_1(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=u_0(x),\quad &x\in\bR^d,
\end{cases} \end{equation*} and then by taking $f=\psi(t,-i\nabla)u_1(t,x)-\Delta^{\gamma/2}u_1(t,x)$ in $\eqref{20230126 01}$, also consider $u_2$ such that \begin{equation*}
\begin{cases}
\partial_tu_2(t,x)=\psi(t,-i\nabla)u_2(t,x) +\psi(t,-i\nabla)u_1(t,x)-\Delta^{\gamma/2}u_1(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=0,\quad &x\in\bR^d.
\end{cases} \end{equation*} Then obviously due to the linearity of the equations, the function $u:=u_1+u_2$ becomes a solution to \begin{equation*}
\begin{cases}
\partial_tu(t,x)=\psi(t,-i\nabla)u(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=u_0(x),\quad &x\in\bR^d.
\end{cases} \end{equation*} Then a weighted estimate of a solution $u$ to \eqref{eqn:model eqn} could be recovered from weighted estimates of both $u_1$ and $u_2$. However, this scheme cannot be fully carried out since the weight class for $u_2$ is smaller than the one for $u_1$, as mentioned above, which makes our direct estimate for the initial value problem \eqref{eqn:model eqn} more novel. Apart from this novelty, even for the model operator $\Delta^{\gamma/2}$, our weighted estimate is new thanks to the generality of the measure $\mu$.
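As a side remark (a simple verification, not needed for the arguments below), the symbol $\psi(t,\xi)=-|\xi|^{\gamma}$ of the model operator $\Delta^{\gamma/2}$ (with the convention $\Delta^{\gamma/2}:=-(-\Delta)^{\gamma/2}$) satisfies both standing assumptions: $\frR[-\psi(t,\xi)]=|\xi|^{\gamma}$, so \eqref{condi:ellipticity} holds with $\kappa=1$, and since $\xi\mapsto|\xi|^{\gamma}$ is smooth on $\bR^{d}\setminus\{0\}$ and positively homogeneous of degree $\gamma$, each derivative $D^{\alpha}_{\xi}|\xi|^{\gamma}$ is homogeneous of degree $\gamma-|\alpha|$ and bounded on the unit sphere, so \eqref{condi:reg ubound} holds for every $n\in\bN$ with a constant $M=M(d,\gamma,n)$.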
We finish the introduction by presenting special cases of the main result for an easy application. \begin{thm}
\label{22.02.03.13.35}
Let $T \in (0,\infty)$, $p\in(1,\infty)$, $q\in(0,\infty)$, $a \in (0,\infty)$, $w\in A_p(\bR^d)$, and $\mu$ be a nonnegative Borel measure on $[0,\infty)$ whose Laplace transform is well-defined, \textit{i.e.} \begin{align*} \cL_{\mu}(\lambda):=\int_0^{\infty}e^{-\lambda t}\mu(dt)<\infty,\quad \forall\lambda\in(0,\infty). \end{align*} Assume that for any $k \in (1,\infty)$, \begin{align}
\label{upper mu scaling} N_k:=\sup \frac{\mu(kI)}{\mu(I)} < \infty, \end{align} where the supremum is taken over all open intervals $I \subset (0,\infty)$. Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$ $($see \eqref{condi:ellipticity}, \eqref{condi:reg ubound}, and \eqref{2021-01-19-01}$)$. Then for any $u_0\in B_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}_a}{q}}(\bR^d,w\,dx)$, there exists a unique solution $u\in L_q((0,T),t^a\mu;H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx))$ to \eqref{eqn:model eqn} so that \begin{align}
\label{int a priori 1}
\left(\int_{0}^T \|u(t,\cdot)\|_{H^{\boldsymbol{\gamma}}_p(\bR^d,w\,dx)}^q t^a\mu(dt)\right)^{1/q}
\leq N\left(1+\mu_{a,T}^{1/q}\right)\|u_0\|_{B_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}_a}{q}}(\bR^d,w\,dx)}, \end{align} and \begin{align}
\label{int a priori 2}
\left(\int_{0}^T\left\||\psi(t,-i\nabla)u(t,\cdot)| +|\Delta^{\gamma/2}u(t,\cdot)|\right\|_{L_p(\bR^d,w\,dx)}^qt^a\mu(dt)\right)^{1/q}\leq N\|u_0\|_{\dot{B}_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}_a}{q}}(\bR^d,w\,dx)}, \end{align} where $[w]_{A_p(\bR^d)}\leq K$, $N=N(a,d,\gamma,\kappa,N_{16^{\gamma}\kappa^{-1}/(1\wedge q)},K,M,p,q)$, $\boldsymbol{\gamma}(j)=\gamma j$, $\mu_{a,T}=\int_0^T t^a\mu\left(dt\right)$, and $$ \boldsymbol{\mu}_a(j):=\gamma ja-\log_2(\cL_{\mu}(2^{j\gamma})),\quad \forall j\in\bZ. $$
\end{thm} As the most important case, we state a second-order case with a polynomial weight in the Sobolev space. To the best of our knowledge, even this second-order case is new since $ a\in (-1,\infty)$ below. \begin{thm}
\label{second-order case}
Let $T \in (0,\infty)$, $p\in (1,\infty)$, $a \in (-1,\infty)$, and $w\in A_p(\bR^d)$.
Assume that $a^{ij}(t)$ is a measurable function on $(0,T)$ for all $i,j \in \{1,\ldots,d\}$ and there exist positive constants $\kappa$ and $M$ such that \begin{align}
\label{ellip coefficient}
\kappa |\xi|^2 \leq a^{ij}(t) \xi^i \xi^j \leq M |\xi|^2 \qquad \forall (t,\xi) \in (0,T) \times \bR^d. \end{align} Then for any $u_0\in L_p(\bR^d,w\,dx)$, there exists a unique solution $u\in L_{p \vee 2}\left((0,T),t^adt;H^{2(a+1)/(p\vee 2)}_p(\bR^d,w\,dx)\right)$ to \begin{align}
\notag
&\partial_tu(t,x)=a^{ij}(t)u_{x^ix^j}(t,x),\quad (t,x)\in(0,T)\times\bR^d,\\
\label{second eqn}
&u(0,x)=u_0(x),\quad x\in\bR^d, \end{align} where $p \vee 2:= \max\{p,2\}$. Moreover, $u$ satisfies the following estimate: \begin{align}
\label{second a priori}
\int_{0}^T \|u(t,\cdot)\|_{ H^{2(a+1)/(p \vee 2)}_p(\bR^d,w\,dx)}^{ (p \vee 2)} t^a dt
\leq N \int_{\bR^d} |u_0(x)|^p w(x)dx, \end{align} where $[w]_{A_p(\bR^d)}\leq K$ and $N=N(a,d,\kappa,M,K,T,p)$. \end{thm}
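To illustrate the estimate, we record a minimal special case, obtained by simply specializing \eqref{second a priori} and included only for orientation: taking $a=0$, $w\equiv1$, and $p\in[2,\infty)$, we have $2(a+1)/(p\vee2)=2/p$, and \eqref{second a priori} reads
\begin{align*}
\int_{0}^T \|u(t,\cdot)\|_{H^{2/p}_p(\bR^d)}^{p}\,dt \leq N \|u_0\|_{L_p(\bR^d)}^p.
\end{align*}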
In Section \ref{22.12.27.14.47}, we will derive Theorem \ref{22.02.03.13.35} and Theorem \ref{second-order case} from Theorem \ref{22.12.27.16.53}, in which the case of a more generic Borel measure $\mu$ is considered, and provide useful examples and corresponding inhomogeneous results in Section \ref{22.12.28.13.34}. The proof of the main result (Theorem \ref{22.12.27.16.53}) is given in Section \ref{pf main thm} and a proof of a key estimate to obtain the main theorem is given in Section \ref{sec:prop}. Finally, in the appendix, we present weighted multiplier and Littlewood-Paley theories used to prove our main theorem.
\subsection*{Notations} \begin{itemize}
\item
For $p \in [1,\infty)$, a normed space $F$, and a measure space $(X,\mathcal{M},\mu)$, we denote by $L_{p}(X,\cM,\mu;F)$ the space of all $\mathcal{M}^{\mu}$-measurable functions $u : X \to F$ with the norm
\[
\left\Vert u\right\Vert _{L_{p}(X,\cM,\mu;F)}:=\left(\int_{X}\left\Vert u(x)\right\Vert _{F}^{p}\mu(dx)\right)^{1/p}<\infty
\]
where $\mathcal{M}^{\mu}$ denotes the completion of $\cM$ with respect to the measure $\mu$. We also denote by $L_{\infty}(X,\cM,\mu;F)$ the space of all $\mathcal{M}^{\mu}$-measurable functions $u : X \to F$ with the norm
$$
\|u\|_{L_{\infty}(X,\cM,\mu;F)}:=\inf\left\{r\geq0 : \mu(\{x\in X:\|u(x)\|_F\geq r\})=0\right\}<\infty.
$$
If there is no confusion for the given measure and $\sigma$-algebra, we usually omit them.
\item
For $\cO\subseteq \bR^d$, we denote by $\cB(\cO)$ the set of all Borel sets contained in $\cO$.
\item
For $\cO\subset \fR^d$ and a normed space $F$, we denote by $C(\cO;F)$ the space of all $F$-valued continuous functions $u : \cO \to F$ with the norm
\[
|u|_{C}:=\sup_{x\in O}|u(x)|_F<\infty.
\]
\item We write $a \lesssim b$ if there is a positive constant $N$ such that $ a\leq N b$. We use $a \approx b$ if $a \lesssim b$ and $b \lesssim a$. If we write $N=N(a,b,\cdots)$, this means that the constant $N$ depends only on $a,b,\cdots$. A generic constant $N$ may change from one occurrence to another, even within a line. The dependence of a generic constant $N$ is usually specified in each statement of theorems, propositions, lemmas, and corollaries.
\item
For $a,b\in \bR$,
$$
a \wedge b := \min\{a,b\},\quad a \vee b := \max\{a,b\},\quad \lfloor a \rfloor:=\max\{n\in\bZ: n\leq a\}.
$$
\item
For $r>0$,
$$
B_r(x):=\{y\in\bR^d:|x-y|< r\},\quad
\overline{B_r(x)}:=\{y\in\bR^d:|x-y|\leq r\}.
$$
\item Let $\mu$ be a nonnegative Borel measure on $(0,\infty)$ and $c$ be a positive constant. We use the notation $\mu(c\,dt)$ to denote the scaled measure defined by $$ \int_{0}^{\infty}f(t)\mu(c\,dt):=\int_{0}^{\infty}f(t/c)\mu(dt). $$ \end{itemize}
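For instance, if $\mu$ has a density $m$, \textit{i.e.} $\mu(dt)=m(t)\,dt$, then a change of variables gives $$ \int_{0}^{\infty}f(t)\mu(c\,dt)=\int_{0}^{\infty}f(t/c)m(t)\,dt=\int_{0}^{\infty}f(s)\,c\,m(cs)\,ds, $$ so that $\mu(c\,dt)=c\,m(ct)\,dt$; we record this elementary identity only for the reader's convenience.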
\mysection{Main results and proofs of Theorems \ref{22.02.03.13.35} and \ref{second-order case}} \label{22.12.27.14.47}
\subsection{Main results}
To state the main result, we need the following definitions concerning sequences and functions.
\begin{defn}
\label{doubling sequence} We say that a sequence $\boldsymbol{r} : \bZ \to \bR$ has {\bf a controlled difference} if $$
\|\boldsymbol{r}\|_d:=\sup_{j\in\bZ}|\boldsymbol{r}(j+1)-\boldsymbol{r}(j)|<\infty. $$ \end{defn}
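For instance, the linear sequence $\boldsymbol{\gamma}(j):=\gamma j$ used throughout this paper has a controlled difference with $\|\boldsymbol{\gamma}\|_d=\gamma$; more generally, any sequence of the form $\boldsymbol{r}(j)=cj+\boldsymbol{b}(j)$ with $c\in\bR$ and $\boldsymbol{b}$ bounded has a controlled difference. We mention these simple examples only for illustration.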
\begin{defn}
\label{weight} Let $ a \in \bR$. We say that a function $\phi : (0,\infty) \to \bR$ is {\bf controlled by a sequence $\boldsymbol{\mu} : \bZ \to \bR$ in a $\gamma$-dyadic way with a parameter $a$} if there exists a positive constant $N_\phi$ such that $$ \phi(2^{\gamma j}) \leq N_\phi 2^{j\gamma a}2^{-\boldsymbol{\mu}(j)},\quad \forall j\in\bZ. $$ \end{defn} Since we handle a general weight, the meaning of a solution to \eqref{eqn:model eqn} cannot be formulated directly, even in a weak sense. However, it is still possible to approximate a solution $u$ by nice functions, each of which satisfies the equation in a strong sense. Thus, to give the exact meaning of a solution to \eqref{eqn:model eqn}, we need to consider a space of smooth functions approximating a solution. Due to the lack of regularity of the symbol $\psi$, the classical Schwartz class does not fit this purpose, and this leads us to consider a larger class consisting of locally integrable smooth functions. We use the notation $C_{p}^{1,\infty}([0,T]\times\bR^d)$ to denote this space. Here is a rigorous mathematical definition.
\begin{enumerate}[(i)]
\item The space $C_p^{\infty}([0,T]\times\bR^d)$ denotes the set of all $\cB([0,T]\times\bR^d)$-measurable functions $f$ on $[0,T]\times\bR^d$ such that for any multi-index $\alpha$ with respect to the space variable $x$, \begin{equation*} D^{\alpha}_{x}f\in L_{\infty}([0,T];L_2(\bR^d)\cap L_p(\bR^d)). \end{equation*} \item The space $C_p^{1,\infty}([0,T]\times\bR^d)$ denotes the set of all $f\in C_p^{\infty}([0,T]\times\bR^d)$ such that $\partial_tf\in C_p^{\infty}([0,T]\times\bR^d)$ and for any multi-index $\alpha$ with respect to the space variable $x$, \begin{equation*} D^{\alpha}_{x}f\in C([0,T]\times \bR^d), \end{equation*} where $\partial_tf(t,\cdot)$ denotes the right derivative and the left derivative at $t=0$ and $t=T$, respectively. \end{enumerate}
Our main result is the following sufficient condition for the existence and uniqueness of a solution in weighted $L_p$-spaces with variable smoothness. \begin{thm}
\label{22.12.27.16.53}
Let $T \in (0,\infty)$, $p\in(1,\infty)$, $q\in(0,\infty)$, $a \in (0,\infty)$, $w\in A_p(\bR^d)$, $\mu$ be a nonnegative Borel measure on $(0,\infty)$, and
$\boldsymbol{r}:\bZ\to(-\infty,\infty)$, $\boldsymbol{\mu}:\bZ\to(-\infty,\infty)$ be sequences having a controlled difference.
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$ $($see \eqref{condi:ellipticity}, \eqref{condi:reg ubound}, and \eqref{2021-01-19-01}$)$.
Additionally, assume that the Laplace transform of $\mu$ is controlled by a sequence $\boldsymbol{\mu}$ in a $\gamma$-dyadic way with parameter $a$, \textit{i.e.} \begin{align}
\label{20230210 01} \cL_{\mu}(2^{\gamma j}) := \int_0^\infty \exp\left( - 2^{\gamma j} t \right) \mu(dt) \leq N_{\cL_\mu} \cdot 2^{j\gamma a}2^{-\boldsymbol{\mu}(j)},\quad \forall j\in\bZ. \end{align} Then for any $u_0\in B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)$, there exists a unique solution $$ u\in L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right);H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right) $$ to \eqref{eqn:model eqn}.
Moreover, if $u \in C_{p}^{1,\infty}([0,T]\times\bR^d) $ and $u_0 \in C_c^\infty(\bR^d)$, then the following \textit{a priori} estimates hold:
\begin{align}
\label{main a priori est 0}
\int_{0}^T\left\|u(t,\cdot)\right\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}^q
t^a\mu\left(\frac{(1\wedge q) \kappa }{16^\gamma}dt\right)
\leq N(1+\mu_{a,T,\kappa,\gamma,q})\|u_0\|^q_{B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align}
and
\begin{align}
\notag
&\int_{0}^T\left\||\psi(t,-i\nabla)u(t,\cdot)|+|\Delta^{\gamma/2}u(t,\cdot)|\right\|_{H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\\
\label{main a priori est}
&\leq N\|u_0\|^q_{\dot B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align}
where $[w]_{A_p(\bR^d)}\leq K$, $N=N(a,\|\boldsymbol{r}\|_d, \|\boldsymbol{\mu}\|_d, N_{\cL_\mu},d,\gamma,\kappa,K,M,p,q)$, and
\begin{align*} \mu_{a,T,\kappa,\gamma,q}=\int_0^T t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right). \end{align*} \end{thm} The proof of this main theorem will be given in Section \ref{pf main thm}.
\begin{rem} Let $\mu$ be a nonnegative Borel measure on $(0,\infty)$ and assume that \begin{align}
\label{20230210 03} \inf_{j \in \bZ}\frac{\cL_{\mu}(2^{\gamma (j+1)})}{\cL_{\mu}(2^{\gamma j})} >0, \end{align} where \begin{align*} \cL_{\mu}(2^{\gamma j}) := \int_0^\infty \exp\left( - 2^{\gamma j} t \right) \mu(dt). \end{align*} Then there is an easy way to obtain the sequence $\boldsymbol{\mu}$ describing the optimal regularity improvement in Theorem \ref{22.12.27.16.53}. Indeed, since \begin{align*} \cL_{\mu}(2^{\gamma j}) =2^{\log_2\left( \cL_{\mu}(2^{\gamma j}) \right)} =2^{j\gamma a} 2^{-j\gamma a + \log_2\left( \cL_{\mu}(2^{\gamma j}) \right)}, \end{align*} we can take \begin{align*} \boldsymbol{\mu}(j) = j\gamma a -\log_2\left( \cL_{\mu}(2^{\gamma j}) \right) \end{align*} so that \eqref{20230210 01} holds with $N_{\cL_\mu}=1$. In addition, it is obvious that $\boldsymbol{\mu}$ has a controlled difference due to \eqref{20230210 03} and the fact that $\cL_{\mu}(2^{\gamma j})$ is non-increasing as $j$ increases. \end{rem}
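As a simple worked illustration of this choice (recorded only for the reader's convenience), consider the power-type measure $\mu(dt)=t^{b}\,dt$ with $b>-1$. Then
\begin{align*}
\cL_{\mu}(2^{\gamma j})=\int_0^\infty e^{-2^{\gamma j}t}\,t^{b}\,dt=\Gamma(b+1)\,2^{-j\gamma(b+1)},
\end{align*}
so \eqref{20230210 03} holds and the above recipe gives
\begin{align*}
\boldsymbol{\mu}(j)=j\gamma a-\log_2\left(\Gamma(b+1)\,2^{-j\gamma(b+1)}\right)=j\gamma(a+b+1)-\log_2\Gamma(b+1),
\end{align*}
which is consistent, up to an additive constant, with Example \ref{conc example} below.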
\begin{rem} The most interesting example of $\mu$ in Theorem \ref{22.12.27.16.53} is $\mu(dt) = w(t)dt$ with $w \in A_\infty(\bR)$. In the next section, we give details showing that the class of measures $\mu$ in Theorem \ref{22.12.27.16.53} includes $\mu(dt) = w(t)dt$ for all $w \in A_\infty(\bR)$. However, we emphasize that our measure $\mu(dt)$ does not need to have a density. In other words, the class of our measures $\mu$ is larger than Muckenhoupt's class. Indeed, for any $t_0 \in (0,\infty)$, consider the Dirac delta measure centered at $t_0$, \textit{i.e.} $\mu(dt) = \delta_{t_0}(t)dt$. Then \eqref{20230210 03} holds since \begin{align*} \inf_{j \in \bZ}\frac{\cL_{\mu}(2^{\gamma (j+1)})}{\cL_{\mu}(2^{\gamma j})} =2^{-\gamma t_0}. \end{align*} Therefore, taking \begin{align*} \boldsymbol{\mu}(j) = j\gamma a -\log_2\left(\exp\left( - 2^{\gamma j} t_0 \right) \right), \end{align*} we obtain a regularity improvement of a solution $u$ of order $\frac{\boldsymbol{\mu}}{q}$ with respect to the space variable $x$ from Theorem \ref{22.12.27.16.53}. \end{rem}
We finally present the mathematical meaning of our solution. \begin{defn}[Solution]
\label{def sol} Assume that all parameters are given as in Theorem \ref{22.12.27.16.53}. We say that $u \in L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa }{16^\gamma}dt\right);H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right)$ is a solution to \eqref{eqn:model eqn} if there exists a sequence $u_n\in C_p^{1,\infty}([0,T]\times\bR^d)$ such that $u_n(0,\cdot)\in C_c^{\infty}(\bR^d)$ and \begin{equation}
\label{202301014 01} \begin{gathered} \partial_tu_n(t,x)=\psi(t,-i\nabla)u_n(t,x),\quad \forall (t,x)\in(0,T)\times\bR^d,\\ u_n(0,\cdot)\to u_0 \quad\text{in}\quad B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx),\\ u_n\to u\quad \text{in}\quad L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa }{16^\gamma}dt\right);H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right) \end{gathered} \end{equation} as $n\to\infty$. Due to this definition, we have \eqref{main a priori est 0} for any solution $$ u\in L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right);H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right) $$ to \eqref{eqn:model eqn} with the corresponding $u_0 \in B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)$. \end{defn} \begin{rem} Since our equation is linear, by using \textit{a priori} estimate we may consider that our solution is a strong solution in $L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma} dt\right);H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right)$. Indeed, due to \eqref{main a priori est}, for all $n, m \in \bN$, we have
\begin{align*}
&\int_{0}^T\left\|\partial_t(u_n - u_m)(t,\cdot)\right\|_{H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\\
&=\int_{0}^T\left\|\psi(t,-i\nabla)(u_n - u_m)(t,\cdot)\right\|_{H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\\
&\leq N\|u_n(0,\cdot) - u_m(0,\cdot)\|^q_{B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}.
\end{align*} Thus defining $\partial_tu$ and $\psi(t,-i\nabla)u$ as the limits of $\partial_tu_n$ and $\psi(t,-i\nabla)u_n$ in $$ L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right);H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)\right), $$ respectively, we understand that our solution $u$ is a strong solution. However, the limits $\partial_tu$ and $\psi(t,-i\nabla)u$ are not well-defined in general, which means that they could be dependent on the choice of a sequence $u_n$ approximating to $u$. To make the limits $\partial_tu$ and $\psi(t,-i\nabla)u$ uniquely determined, we need the condition $\boldsymbol{r}-\boldsymbol{\gamma} \geq 0$ and another condition on a measure $\mu$, which will be specified in the next remark. \end{rem}
\begin{rem} To show that our solution becomes the classical weak solution, we need an extra condition on a measure $\mu$. For simplicity, we may assume that the scaling constant $\frac{(1\wedge q) \kappa}{16^\gamma}$ is 1. Additionally, assume that the measure $\mu(dt)$ has a density $\mu(t)$ and $t^{-a} (1/\mu)(t)$ is locally in $L_{q/(q-1)}$, \textit{i.e.} $\mu(dt)=\mu(t)dt$ and $\int_0^{T_0} t^{-\frac{aq}{q-1}} (\mu(t))^{-\frac{q}{q-1}} dt < \infty$ for all $T_0 \in (0,T)$. Let $u$ be a solution to \eqref{eqn:model eqn}. Then there exists a $u_n\in C_p^{1,\infty}([0,T]\times\bR^d)$ so that \eqref{202301014 01} holds. Then for any $n \in \bN$ and $\varphi \in C_c^\infty ( (0,T) \times \bR^d)$, we have \begin{align}
\label{20230114 02} -\int_0^T\int_{\bR^d} u_n(t,x) \partial_t\varphi(t,x) dt dx = \int_0^T \int_{\bR^d} u_n(t,x) \overline{\psi}(t,-i\nabla) \varphi(t,x) dt dx, \end{align} where \begin{align*} \overline{\psi}(t,-i\nabla)u(t,x)= \cF^{-1}\left[ \overline{\psi(t,\xi)} \cF[u(t,\cdot)](\xi) \right] (x) \end{align*} and $\overline{\psi(t,\xi)}$ denotes the complex conjugate of $\psi(t,\xi)$. Moreover, applying H\"older's inequality, we have \begin{align*}
&\left|\int_0^T\int_{\bR^d} u_n(t,x) \partial_t\varphi(t,x) dt dx -\int_0^T\int_{\bR^d} u(t,x) \partial_t\varphi(t,x) dt dx \right| \\
&\leq N\| u_n -u\|_{L_q((0,T),t^a \mu;L_p(\bR^d,w\,dx))} \left(\int_0^{T_0} t^{-\frac{aq}{q-1}} (\mu(t))^{-\frac{q}{q-1}} dt\right)^{\frac{q-1}{q}}, \end{align*} where $T_0$ is a positive number so that $\partial_t \varphi(t,x) = 0$ for all $t \in [T_0,T)$. Thus \begin{align*} \lim_{n \to \infty}\int_0^T\int_{\bR^d} u_n(t,x) \partial_t\varphi(t,x) dt dx =\int_0^T\int_{\bR^d} u(t,x) \partial_t\varphi(t,x) dt dx. \end{align*} Similarly, \begin{align*} \lim_{n\to \infty}\int_0^T \int_{\bR^d} u_n(t,x) \overline{\psi}(t,-i\nabla) \varphi(t,x) dt dx =\int_0^T \int_{\bR^d} u(t,x) \overline{\psi}(t,-i\nabla) \varphi(t,x) dt dx. \end{align*} Finally, taking the limit in \eqref{20230114 02}, we show that our solution becomes a classical weak solution, \textit{i.e.} \begin{align}
\label{20230114 03} -\int_0^T\int_{\bR^d} u(t,x) \partial_t\varphi(t,x) dt dx = \int_0^T \int_{\bR^d} u(t,x) \overline{\psi}(t,-i\nabla) \varphi(t,x) dt dx \end{align} for all $\varphi \in C_c^\infty ( (0,T) \times \bR^d)$. \end{rem}
\begin{rem} We want to emphasize that \eqref{main a priori est 0} and \eqref{main a priori est} can be distinguished. In other words, there could be two different measures $\mu_1$ and $\mu_2$ so that $\boldsymbol{\mu_1}(j) = \boldsymbol{\mu_2}(j)$ for all $j \in \bN$ but there exists a $k \in \bZ$ such that $\boldsymbol{\mu_1}(k) \neq \boldsymbol{\mu_2}(k)$. A concrete example taking $\mu_1(dt) = t^{b_1}dt$ and $\mu_2(dt) = \left(t^{b_1} + t^{b_2}\right)dt$ will be given in Example \ref{conc example}. \end{rem}
We finish the section by proving Theorem \ref{22.02.03.13.35} and Theorem \ref{second-order case}. \begin{proof}[Proof of Theorem \ref{22.02.03.13.35}] It is easy to check that \eqref{upper mu scaling} implies that \begin{equation} \label{22.10.03.15.43}
\cL_{\mu}(\lambda)\leq N_k\cL_{\mu}(k\lambda),\quad \forall \lambda\in(0,\infty). \end{equation} Recall that a sequence $\boldsymbol{\mu}_a$ is defined by $$ \boldsymbol{\mu}_a(j):=\gamma ja-\log_2(\cL_{\mu}(2^{j\gamma})). $$ We first show that $\boldsymbol{\mu}_a$ has a controlled difference. By the definition of the Laplace transform, $\cL_\mu(\lambda)$ is non-increasing with respect to $\lambda$. Moreover, by \eqref{22.10.03.15.43}, $$ 1\leq \frac{\cL_{\mu}(2^{\gamma j})}{\cL_{\mu}(2^{\gamma(j+1)})}\leq N_{2^{\gamma}}. $$ Thus $$
\sup_{j\in\bZ}|\boldsymbol{\mu}_a(j+1)-\boldsymbol{\mu}_a(j)|\leq \gamma a+\log_2(N_{2^{\gamma}})<\infty. $$ Next we prove that the Laplace transform of $\mu$ is controlled by a sequence $\boldsymbol{\mu}_a$ in a $\gamma$-dyadic way with a parameter $a$, \textit{i.e.}
$$ \cL_{\mu}(2^{\gamma j}) := \int_0^\infty \exp\left( - 2^{\gamma j} t \right) \mu(dt) \leq N_{\cL_\mu} \cdot 2^{j\gamma a}2^{-\boldsymbol{\mu}_a(j)},\quad \forall j\in\bZ. $$ However, it is a direct consequence of the definition of $\boldsymbol{\mu}_a$ with $N_{\cL_\mu}=1$. Therefore, we obtain the existence and uniqueness result of Theorem \ref{22.02.03.13.35} and the following estimates from Theorem \ref{22.12.27.16.53} with $\boldsymbol{r}=\boldsymbol{\gamma}$: \begin{align*}
\int_{0}^T\left\|u(t,\cdot)\right\|_{H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx)}^q
t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)
\leq N(1+\mu_{a,T,\kappa,\gamma,q})\|u_0\|^q_{B_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align*}
and
\begin{align*}
\int_{0}^T\left\||\psi(t,-i\nabla)u(t,\cdot)|+|\Delta^{\gamma/2}u(t,\cdot)|\right\|_{L_p(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\leq N\|u_0\|^q_{B_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}.
\end{align*} Note that all of the above estimates hold even for the measure $\mu(c\,dt)$ with $c>0$ since we have \begin{align*} N_k:=\sup \frac{\mu(ckI)}{\mu(cI)} = \sup \frac{\mu(kI)}{\mu(I)}, \end{align*} where the supremum is taken over all open intervals $I \subset (0,\infty)$. Therefore, considering the measure $$ \mu\left( \frac{16^\gamma}{(1\wedge q) \kappa}dt\right) $$ instead of $\mu(dt)$, we finally have \eqref{int a priori 1} and \eqref{int a priori 2}. \end{proof} \begin{proof}[Proof of Theorem \ref{second-order case}] For $f \in \cS(\bR^d)$, it follows that \begin{align*} a^{ij}(t)f_{x^ix^j} =-\cF^{-1}\left[ a^{ij}(t) \xi^i \xi^j \cF[f] (\xi) \right]. \end{align*} Then due to \eqref{ellip coefficient}, the symbol $\psi(t,\xi):=-a^{ij}(t) \xi^i \xi^j$ satisfies an ellipticity condition with $(2,\kappa)$ and has an $n$-times regular upper bound with $(2,M)$ for any $n \in \bN$ by using the trivial extension $a^{ij}(t)=a^{ij}(T)$ for all $ t \geq T$. Take $a_0 \in (0,1)$ so that $a-a_0 > -1$, and choose the exponent $q$ depending on the range of $p$ as follows: \begin{align*} \begin{cases} &q=2 \quad \text{if $p \in (1,2)$} \\ &q=p \quad \text{if $p \in [2,\infty)$}. \end{cases} \end{align*} Put $\boldsymbol{r}(j) = 2j(a+1)/q$ for all $j \in \bZ$, and $\mu(dt)=t^{a-a_0} dt$. By a simple change of variable, \begin{align*}
\cL_{\mu}(2^{2 j})
\leq 2^{-2 j(a-a_0 +1)}\cL_{\mu}(1)
= 2^{ 2j a_0} 2^{-2 j (a +1)}\cL_{\mu}(1). \end{align*} Set $$ \boldsymbol{\mu}(j) = 2 j (a+1) \quad \forall j \in \bZ. $$ Then the Laplace transform of $\mu$ is controlled by a sequence $\boldsymbol{\mu}$ in a $2$-dyadic way with parameter $a_0$. Therefore by Theorem \ref{22.12.27.16.53},
for any $u_0\in B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)$, there exists a unique solution $$ u\in L_q\left((0,T),t^{a_0} \mu\left(\frac{(1\wedge q) \kappa}{16^2} dt\right);H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right) $$ to \eqref{second eqn} and $u$ satisfies \begin{align}
\notag
&\int_{0}^T\left\|u(t,\cdot)\right\|_{H_p^{2(a+1)/q}(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^2}dt\right)\\
&=\left(\frac{(1\wedge q) \kappa}{16^2}\right)^{a_0-a} \int_{0}^T\left\|u(t,\cdot)\right\|_{H_p^{2(a+1)/q}(\bR^d,w\,dx)}^q
t^{a_0} \mu\left(\frac{(1\wedge q) \kappa}{16^2}dt\right)\notag\\
\label{2023012320}
&\leq N(1+\mu_{a,T,\kappa,\gamma,q})\|u_0\|^q_{B_{p,q}^{2(a+1)/q-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}.
\end{align} We apply a well-known embedding theorem in Besov space (cf. \cite[Theorem 6.4.4]{BL1976}). Then \begin{align}
\label{2023012321} \begin{cases}
&\|u_0\|^p_{B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}
\leq \|u_0\|^p_{H_{p}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}
=\|u_0\|^p_{L_p(\bR^d,w\,dx)} \quad \text{if $p \in (1,2)$ and $q=2$} \\
&\|u_0\|^p_{B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}
\leq \|u_0\|^p_{H_{p}^{\boldsymbol{r}-\frac{\boldsymbol{\mu}}{p}}(\bR^d,w\,dx)}
=\|u_0\|^p_{L_p(\bR^d,w\,dx)} \quad \text{if $p=q \in [2,\infty)$}. \end{cases} \end{align} Finally, putting \eqref{2023012321} in \eqref{2023012320}, we have \eqref{second a priori}. \end{proof}
\mysection{Examples and inhomogeneous problems} \label{22.12.28.13.34}
We proved that the regularity improvement is given by $\boldsymbol{\mu}/q$ in Theorem \ref{22.12.27.16.53}. Here $\boldsymbol{\mu}$ satisfies
$$ \cL_{\mu}(2^{\gamma j}) := \int_0^\infty \exp\left( - 2^{\gamma j} t \right) \mu(dt) \leq N_{\cL_\mu} \cdot 2^{j\gamma a}2^{-\boldsymbol{\mu}(j)},\quad \forall j\in\bZ. $$ It does not seem easy to compute the exact value of $\boldsymbol{\mu}$ in general, since this quantity is related to the Laplace transform of the measure $\mu$. In this section, we present a concrete situation in which the regularity improvement can be read off directly from the measures of open intervals starting at the origin. Briefly speaking, this is possible if the measures of all open intervals starting at the origin satisfy a weak scaling property. Here is the precise statement. Suppose that for any $k\in(1,\infty)$, there exist positive constants $b_k$ and $B_k$ such that \begin{equation} \label{22.04.22.16.39}
0<b_k\leq \frac{\mu((0,\theta))}{\mu((0,k\theta))}\leq B_k<1,\quad \forall \theta\in(0,\infty). \end{equation} Choose \begin{align}
\label{20230128 100} a_0 \in \left[0, -\log_2B_2 \right) \end{align} and denote $$ \mu^{-a_0}(dt) = t^{-a_0} \mu(dt). $$ We claim that \begin{equation} \label{22.10.31.16.02} \cL_{\mu^{-a_0}}(\lambda) \approx \lambda^{a_0} \mu((0,1/\lambda)) \quad \forall \lambda \in (0,\infty). \end{equation} Indeed, by \eqref{22.04.22.16.39} and the ratio test, \begin{align}
\notag \int_{1}^{\infty}e^{-t}\mu^{-a_0}(dt) =\int_{1}^{\infty}e^{-t}t^{-a_0}\mu(dt) \leq \int_{1}^{\infty}e^{-t}\mu(dt) &=\sum_{n=1}^{\infty}\int_{2^{n-1}}^{2^n}e^{- t}\mu(dt)\\
\notag &\leq \sum_{n=1}^{\infty}e^{-2^{n-1}}\mu((0,2^n)) \\
\label{2023012401} &\leq \mu((0,1))\sum_{n=0}^{\infty}e^{- 2^{n-1}}b^{-n}_2<\infty. \end{align} Using \eqref{22.04.22.16.39} again, we have \begin{align}
\notag
\int_{0}^{1}e^{-t}\mu^{-a_0}(dt)
=\int_0^1e^{- t}t^{-a_0}\mu(dt)
&\leq \sum_{n=0}^{\infty}\int_{2^{-n-1}}^{2^{-n}}t^{-a_0}\mu(dt)\\
\label{22.04.23.19.49}
&\leq \sum_{n=0}^{\infty}2^{(n+1)a_0}\mu((0,2^{-n}))\leq \mu((0,1))\sum_{n=0}^{\infty}2^{(n+1)a_0} B^n_2. \end{align} Due to the choice of $a_0$ in \eqref{20230128 100}, it is obvious that $2^{a_0}B_2 < 1$ and the last term in \eqref{22.04.23.19.49} converges. On the other hand, since $a_0\geq0$, we have the trivial lower bound $\cL_{\mu^{-a_0}}(1)\geq \int_0^1 e^{-t}t^{-a_0}\mu(dt)\geq e^{-1}\mu((0,1))$. Therefore, combining \eqref{2023012401} and \eqref{22.04.23.19.49}, we obtain \begin{align}
\label{20230124 100} \cL_{\mu^{-a_0}}(1) \approx \mu((0,1)). \end{align} Next we use the scaling property. For $\lambda \in (0,\infty)$, define the measures as $$ \mu_{1/\lambda} (dt) := \mu \left(\frac{1}{\lambda} \,dt\right) $$ and $$ \mu^{-a_0}_{1/\lambda} (dt) := \mu^{-a_0} \left(\frac{1}{\lambda}\, dt\right). $$ Then for any $k \in (1,\infty)$, \begin{equation*}
0<b_k\leq \frac{\mu_{1/\lambda}((0, \theta))}{\mu_{1/\lambda}((0,k \theta))}\leq B_k<1,\quad \forall \theta\in(0,\infty). \end{equation*} Thus applying \eqref{20230124 100} with $\mu_{1/\lambda}$ instead of $\mu(dt)$, we prove the claim that $$ \cL_{\mu^{-a_0}}(\lambda)=\lambda^{a_0} \cL_{\mu^{-a_0}_{1/\lambda}}(1) \approx \lambda^{a_0} \mu_{1/\lambda}((0,1))= \lambda^{a_0} \mu((0,1/\lambda)). $$ Therefore, for all $C_\mu \in (0,\infty)$, $a \in [0,\infty)$, and $j \in \bZ$, \begin{align*} \cL_{\mu^{-a_0}}(2^{j \gamma}) \approx C_\mu 2^{j\gamma a_0}\mu((0,2^{-j\gamma})) = 2^{j \gamma a} \cdot 2^{-\left( j \gamma (a-a_0)-\log_2(C_\mu\mu((0,2^{-j\gamma}))) \right)} \end{align*} In other words, the Laplace transform of $\mu^{-a_0}$ with $a_0 \in \left[0, -\log_2B_2 \right)$ is controlled by a sequence \begin{align}
\label{20230124 05} \boldsymbol{\mu}(j)=\left( j \gamma (a-a_0)-\log_2(C_\mu\mu((0,2^{-j\gamma}))) \right) \end{align}
in a $\gamma$-dyadic way with a parameter $a$. In particular, the Laplace transform of $\mu^{-a}$ is controlled by a sequence \begin{align*} \boldsymbol{\mu}(j)=\left(-\log_2(C_\mu\mu((0,2^{-j\gamma}))) \right) \end{align*}
in a $\gamma$-dyadic way with a parameter $a$ if $a \in \left[0, -\log_2B_2 \right)$.
\begin{example}
\label{conc example} One of the most important examples satisfying \eqref{22.04.22.16.39} is the measure induced by a function in $A_\infty(\bR)$. We show that the measure $\mu(dt):=w'(t)dt$ with $w' \in A_{\infty}(\bR)$ satisfies the assumptions of Theorem \ref{22.12.27.16.53}. We emphasize that $w'$ does not have to be in $A_{q}(\bR)$ for any $q \in (1,\infty)$; such generality cannot be expected for inhomogeneous problems.
Let $w'\in A_{\infty}(\bR)$ and $\mu(dt):=w'(t)dt$. Then there exists $\nu\in(1,\infty)$ such that $w'\in A_{\nu}(\bR)$. By \cite[Proposition 7.1.5 and Lemma 7.2.1]{grafakos2014classical}, $$ b_k:=\frac{1}{k^{\nu}[w']_{A_{\nu}(\bR)}}\leq \frac{\mu((0,\theta))}{\mu((0,k\theta))}\leq \left(1-\frac{(1-k^{-1})^{\nu}}{[w']_{A_{\nu}(\bR)}}\right)=:B_k,\quad \forall k\in(1,\infty)~\text{and}~ \theta \in (0,\infty). $$ Thus $\mu$ also satisfies \begin{align*} \frac{ \mu(k I)}{\mu(I)} \leq k^{\nu}[w']_{A_\nu} \quad \forall k \in (1,\infty), \end{align*} which implies $$ N_k := \sup \frac{ \mu(k I)}{\mu(I)} \leq k^{\nu}[w']_{A_\nu} <\infty \quad \forall k \in (1,\infty), $$ where the sup is taken over all the open intervals $I$ in $(0,\infty)$.
Moreover, it is well-known that $|t|^{b_1}$ is in $A_\infty(\bR)$ if $ b_1 >-1$ (\textit{e.g.}, \cite[Example 7.1.7]{grafakos2014classical}). Then by a simple calculation, $\mu_1((0,2^{-j\gamma})) := \int_0^{2^{-j \gamma}} t^{b_1} dt \approx 2^{-j \gamma (b_1+1)}$. Thus due to \eqref{20230124 05}, choosing an appropriate scaling constant $C_{\mu_1}$, we have \begin{align*} \boldsymbol{\mu_1}(j)= j \gamma (a+b_1+1) \quad \forall j \in \bZ. \end{align*} Next for $-1<b_1 < b_2 <\infty$, we consider \begin{align*} \mu_2((0,2^{-j\gamma})) := \int_0^{2^{-j \gamma}} \left(t^{b_1} +t^{b_2}\right)dt \approx \left( 2^{-j \gamma (b_1+1)} + 2^{-j \gamma (b_2+1)} \right). \end{align*} Therefore, it follows that \begin{align*} \mu_2((0,2^{-j\gamma})) \lesssim \begin{cases} &2^{-j \gamma (b_1+1)} \quad \text{if}~ j \in \bN \\ &2^{-j \gamma (b_2+1)} \quad \text{if $j$ is a non-positive integer} \end{cases} \end{align*} and the optimal regularity improvement becomes \begin{align*} \boldsymbol{\mu_2}(j)\lesssim \begin{cases} &j \gamma (a+b_1+1) \quad \text{if}~ j \in \bN \\ &j \gamma (a+b_2+1) \quad \text{if $j$ is a non-positive integer}. \end{cases} \end{align*} \end{example} Finally, restricting a weight $w'$ to $A_q(\bR)$ and combining \cite[Theorem 2.14]{Choi_Kim2022}, we can handle the following inhomogeneous problems with non-zero initial conditions. \begin{corollary}
Let $T \in (0,\infty)$, $p\in(1,\infty)$, $q\in(1,\infty)$, $w'\in A_{q}(\bR)$, $w\in A_p(\bR^d)$, $C_{w'} \in (0,\infty)$, and
$\boldsymbol{r}:\bZ\to(-\infty,\infty)$ be a sequence having a controlled difference.
Suppose that $[w]_{A_{p}(\bR^d)}\leq K$ and $[w']_{A_{q}(\bR)}\leq K_0$. Additionally, assume that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor \vee \left\lfloor \frac{d}{R_{q,1}^{w'}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$. Then for any
$$
u_0\in B_{p,q}^{\boldsymbol{r}-\boldsymbol{w}'/q}(\bR^d,w\,dx)~\text{and}~ f\in L_q((0,T),w'\,dt;H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)),
$$
there exists a unique solution
$$ u\in L_q\left((0,T),w'\,dt;H_p^{\boldsymbol{r}}(\bR^d,w\,dx)\right) $$ to the equation
\begin{equation}
\label{inhomo problem}
\begin{cases}
\partial_tu(t,x)=\psi(t,-i\nabla)u(t,x)+f(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=u_0(x),\quad &x\in\bR^d,
\end{cases} \end{equation} where $\boldsymbol{\gamma}(j) = \gamma j$ and \begin{align*} \boldsymbol{w'}(j)=-\log_2\left(C_{w'} \int_0^{2^{-j\gamma}} w'(t)dt \right). \end{align*}
Moreover, $u$ satisfies
\begin{align}
\label{20230124 08}
\int_{0}^T\left\|u(t,\cdot)\right\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}^qw'(t)dt
\leq N(1+T)^q\left(\|u_0\|^q_{B_{p,q}^{\boldsymbol{r}-\boldsymbol{w}'/q}(\bR^d,w\,dx)}+\int_0^T\|f(t,\cdot)\|^q_{H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)} w'(t)dt\right)
\end{align}
where $N=N(\|\boldsymbol{r}\|_d,C_{w'},d,\gamma,\kappa,K,K_0,M,p,q)$. \end{corollary} \begin{proof} Since the uniqueness comes from the homogeneous case (Theorem \ref{22.12.27.16.53}), we only prove the existence of a solution $u$ satisfying \eqref{20230124 08}. Moreover, due to Proposition \ref{22.05.03.11.34}, we may assume that $\boldsymbol{r}=\boldsymbol{\gamma}$. We set $$ \mu(dt)= w'(t) dt. $$ Let $a_0 \in \left(0, -\log_2B_2 \right)$ and recall $$ \mu^{-a_0}(dt)= t^{-a_0}w'\left( t\right)dt $$ where \begin{align*} B_2=\left(1-\frac{(1-2^{-1})^{q}}{[w']_{A_{q}(\bR)}}\right). \end{align*} Then due to \eqref{20230124 05}, the Laplace transform of $\mu^{-a_0}$ is controlled by a sequence \begin{align*} \boldsymbol{w'}(j):=\boldsymbol{\mu^{-a_0}}(j) =-\log_2(C_{w'}\mu((0,2^{-j\gamma}))) \end{align*}
in a $\gamma$-dyadic way with parameter $a_0$. Besides, it is well-known (\textit{cf}. \cite[Proposition 7.1.5]{grafakos2014classical}) that \begin{align*} w'(t)dt \leq [w']_{A_q(\bR)}\left(\frac{16^\gamma}{(1\wedge q) \kappa}\right)^{q-1} w'\left(\frac{(1\wedge q) \kappa}{16^\gamma}t\right)dt. \end{align*} Thus by Theorem \ref{22.12.27.16.53}, there exists a solution $u^1$ to the equation
\begin{equation*}
\begin{cases}
\partial_tu^1(t,x)=\psi(t,-i\nabla)u^1(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u^1(0,x)=u_0(x),\quad &x\in\bR^d
\end{cases} \end{equation*} such that
\begin{align}
\notag
\int_{0}^T\left\|u^1(t,\cdot)\right\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}^q
w'(t)dt
&\lesssim \int_{0}^T\left\|u^1(t,\cdot)\right\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}^q
t^{a_0}\mu^{-a_0}\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right) \\
\label{20230124 10}
&\leq N(1+\mu^{-a_0}_{a_0,T,\kappa,\gamma,q})\|u_0\|^q_{B_{p,q}^{\boldsymbol{r}-\frac{\boldsymbol{w'}}{q}}(\bR^d,w\,dx)},
\end{align} where
\begin{align}
\notag \mu^{-a_0}_{a_0,T,\kappa,\gamma,q} =\int_0^T t^{a_0}\mu^{-a_0}\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)&=\left(\frac{(1\wedge q) \kappa}{16^\gamma}\right)^{1-a_0} \int_0^T w'\left(\frac{(1\wedge q) \kappa}{16^\gamma}t\right)dt \\
\notag &= \left(\frac{(1\wedge q) \kappa}{16^\gamma}\right)^{-a_0}\int_0^{\frac{(1\wedge q) \kappa}{16^\gamma}T} w'\left(t\right)dt \\
\label{20230124 10-2} &= \left(\frac{(1\wedge q) \kappa}{16^\gamma}\right)^{-a_0} \mu\left(\left(0,\frac{(1\wedge q) \kappa}{16^\gamma}T\right)\right). \end{align} Moreover, due to some properties of the $A_p$-class again,
\begin{align}
\label{20230124 11}
\mu\left(\left(0,\frac{(1\wedge q) \kappa}{16^\gamma}T\right)\right) \leq \mu(\lambda(0,1))\leq \lambda^{q}[w']_{A_q(\bR)}\mu((0,1)),\quad \lambda:=1+T.
\end{align} On the other hand, by \cite[Theorem 2.14]{Choi_Kim2022}, there exists a solution $u^2$ to
\begin{equation*}
\begin{cases}
\partial_tu^2(t,x)=\psi(t,-i\nabla)u^2(t,x)+f(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u^2(0,x)=0,\quad &x\in\bR^d
\end{cases} \end{equation*} such that
\begin{align}
\label{20230124 12}
\int_{0}^T\left\|u^2(t,\cdot)\right\|_{H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx)}^q w'(t)dt
\leq N(1+T)^q\int_0^T\|f(t,\cdot)\|^q_{H_p^{\boldsymbol{r}-\boldsymbol{\gamma}}(\bR^d,w\,dx)} w'(t)dt.
\end{align} Due to the linearity, $u:=u^1+u^2$ becomes a solution to \eqref{inhomo problem}. Combining all \eqref{20230124 10}, \eqref{20230124 10-2}, \eqref{20230124 11}, and \eqref{20230124 12}, we finally have \eqref{20230124 08}.
The corollary is proved. \end{proof}
\mysection{Proof of Theorem \ref{22.12.27.16.53}}
\label{pf main thm}
We define kernels related to the symbol $\psi(t,\xi)$ first. For $(t,s,x) \in \bR \times \bR \times \bR^d$ and $\varepsilon \in [0,1]$, we set \begin{align*}
p(t,s,x):=1_{0 \leq s < t} \cdot \frac{1}{(2\pi)^{d/2}}\int_{\bR^d} \exp\left(\int_{s}^t\psi(r,\xi)dr\right)e^{ix\cdot\xi}d\xi, \end{align*} \begin{align*} P_{\varepsilon}(t,s,x) :=\Delta^{\frac{\varepsilon\gamma}{2}}p(t,s,x) :=-(-\Delta)^{\frac{\varepsilon\gamma}{2}}p(t,s,x)
:=1_{0 \leq s <t} \cdot \frac{1}{(2\pi)^{d/2}}\int_{\bR^d} |\xi|^{\varepsilon \gamma}\exp\left(\int_{s}^t\psi(r,\xi)dr\right)e^{ix\cdot\xi}d\xi, \end{align*} and \begin{align*}
\psi(t,-i\nabla)p(t,s,x):=(\psi(t,-i\nabla)p)(t,s,x)
:=1_{0 \leq s < t} \cdot \frac{1}{(2\pi)^{d/2}}\int_{\bR^d} \psi(t,\xi)\exp\left(\int_{s}^t\psi(r,\xi)dr\right)e^{ix\cdot\xi}d\xi. \end{align*} For these kernels, we introduce integral operators as follows: \begin{align*} \cT_{t,s} f(x) := \int_{\bR^d} p(t,s,x-y)f(y)dy,\quad \cT_{t,s}^{\varepsilon} f(x) :=\Delta^{\frac{\varepsilon\gamma}{2}}\cT_{t,s} f(x) :=\int_{\bR^d} P_{\varepsilon}(t,s,x-y)f(y)dy, \end{align*} and \begin{align*}
\psi(t,-i\nabla)\cT_{t,s} f(x) := \int_{\bR^d} \psi(t,-i\nabla) p(t,s,x-y)f(y)dy. \end{align*} These operators are closely related to solutions of our initial value problems. Formally, it is easy to check that \begin{align*} \partial_t\cT_{t,0} f(x) = \psi(t,-i\nabla)\cT_{t,0} f(x) \end{align*} and \begin{align*} \lim_{t \to 0} \cT_{t,0} f(x) = f(x). \end{align*} Thus if a symbol $\psi$ and an initial data $u_0$ are nice enough, for instance $\psi$ satisfies the ellipticity condition and $u_0\in C_c^{\infty}(\bR^d)$, then the function $u(t,x):=\cT_{t,0}u_0(x)$ becomes a classical strong solution to the Cauchy problem \begin{equation*}
\begin{cases}
\partial_tu(t,x)=\psi(t,-i\nabla)u(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=u_0(x),\quad &x\in\bR^d.
\end{cases} \end{equation*}
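A minimal way to see this (stated at a formal level, and up to the normalization of the Fourier transform) is to note that, on the Fourier side, $\cT_{t,0}$ acts as multiplication by the symbol $m(t,\xi):=\exp\left(\int_0^t\psi(r,\xi)dr\right)$, and
\begin{align*}
\partial_t m(t,\xi)=\psi(t,\xi)\,m(t,\xi),\qquad m(0,\xi)=1,
\end{align*}
which is exactly the Fourier transform of the Cauchy problem above; we include this only as an informal consistency check.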
Therefore, roughly speaking, it is sufficient to show boundedness of $\cT_{t,0}^\varepsilon u_0$ in appropriate spaces for our \textit{a priori} estimates. More precisely, due to the definitions of the Besov and Sobolev spaces, we have to estimate their Littlewood-Paley projections. We recall that $\Psi$ is a function in the Schwartz class $\mathcal{S}(\mathbb{R}^d)$ whose Fourier transform $\mathcal{F}[\Psi]$ is nonnegative, supported in an annulus $\{\frac{1}{2}\leq |\xi| \leq 2\}$, and $\sum_{j\in\mathbb{Z}} \mathcal{F}[\Psi](2^{-j}\xi) = 1$ for $\xi \not=0$.
Then we define the Littlewood-Paley projection operators $\Delta_j$ and $S_0$ as $\mathcal{F}[\Delta_j f](\xi) = \mathcal{F}[\Psi](2^{-j}\xi) \mathcal{F}[f](\xi)$, $S_0f = \sum_{j\leq 0} \Delta_j f$, respectively. We denote \begin{align}
\label{def T ep j} \cT_{t,s}^{\varepsilon,j} f(x):=\int_{\bR^d}\Delta_jP_{\varepsilon}(t,s,x-y)f(y)dy~ \text{and}~\cT_{t,s}^{\varepsilon,\leq0}f(x):=\int_{\bR^d}S_0P_{\varepsilon}(t,s,x-y)f(y)dy, \end{align} where \begin{align*} \Delta_jP_{\varepsilon}(t,s,x-y) :=(\Delta_jP_{\varepsilon})(t,s,x-y) :=\Delta_j\left[P_{\varepsilon}(t,s,\cdot)\right](x-y). \end{align*} Similarly, \begin{align*} \psi(t,-i\nabla)\cT_{t,s}^{j} f(x):=\int_{\bR^d}\Delta_j \psi(t,-i\nabla) p(t,s,x-y)f(y)dy \end{align*} and \begin{align*} \psi(t,-i\nabla)\cT_{t,s}^{\leq 0}f(x):=\int_{\bR^d}S_0\psi(t,-i\nabla)p(t,s,x-y)f(y)dy. \end{align*}
Next we recall Hardy-Littlewood's maximal function and Fefferman-Stein's sharp (maximal) function. For a locally integrable function $f$ on $\bR^d$, we define \begin{align*} \mathbb{M} f(x)
&:=\sup_{x \in B_r(x_0)} -\hspace{-0.40cm}\int_{B_r(x_0)}|f(y)|dy := \sup_{x \in B_r(x_0)} \frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}|f(y)|dy \end{align*} and \begin{align*} f^\sharp(x) :=\mathbb{M}^\sharp f(x)
&:=\sup_{x \in B_r(x_0)} -\hspace{-0.40cm}\int_{B_r(x_0)}-\hspace{-0.40cm}\int_{B_r(x_0)}|f(y_0)-f(y_1)|dy_0dy_1 \\
&:= \sup_{x \in B_r(x_0)} \frac{1}{|B_r(x_0)|^2}\int_{B_r(x_0)}\int_{B_r(x_0)}|f(y_0)-f(y_1)|dy_0dy_1, \end{align*} where the supremum is taken over all balls $B_r(x_0)$ containing $x$ with $r \in (0,\infty)$ and $x_0 \in \bR^d$. Moreover, for a function $f(t,x)$ defined on $(0,\infty) \times \bR^d$, we use the notation $\mathbb{M}_x^{\sharp}\big( f(t,x) \big)$ or $\mathbb{M}_x^{\sharp}\big( f(t,\cdot) \big)(x)$ to denote the sharp function with respect to the variable $x$ after fixing $t$. We recall a weighted version of the Hardy-Littlewood Theorem and Fefferman-Stein Theorem which play an important role in our main estimate. \begin{prop}
\label{lem:FS ineq} Let $p \in (1,\infty)$ and $w \in A_p(\bR^d)$. Assume that $[w]_{A_p(\bR^d)}\leq K$ for a positive constant $K$. Then there exists a positive constant $N=N(d,K,p)$ such that for any $f\in L_p(\bR^d)$,
\begin{align*}
\big\| \mathbb{M} f \big\|_{L_p(\bR^d, w\,dx)}
\leq
N\big\|f \big\|_{L_p(\bR^d, w\,dx)}
\end{align*}
and
\begin{align*}
\big\|f \big\|_{L_p(\bR^d, w\,dx)}
\leq
N \big\| \mathbb{M}_x^{\sharp} f \big\|_{L_p(\bR^d, w\,dx)}.
\end{align*} \end{prop} This weighted version of the Hardy-Littlewood Theorem and the Fefferman-Stein Theorem is very well-known. For instance, see \cite[Theorems 2.2 and 2.3]{Dong_Kim2018}.
\begin{prop}\label{prop:maximal esti}
Let $p \in (1,\infty)$, $\varepsilon \in [0,1]$, $w \in A_p(\bR^d)$, and $p_0\in(1,2]$ be a constant so that $p_0 \leq R_{p,d}^w$ and
$$
\left\lfloor\frac{d}{p_0}\right\rfloor = \left\lfloor\frac{d}{R_{p,d}^w}\right\rfloor.
$$
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
Then there exist positive constants $N=N(d,\delta,\varepsilon,\gamma,\kappa,M)$, $N'=N'(d,p_0,\varepsilon,\gamma,\kappa,M)$, $N''=N''(d,p_0, \delta,\gamma,\kappa,M)$, and $N'''=N'''(d,p_0,\gamma,\kappa,M)$ such that for all $t\in (0,\infty)$, $f\in \cS(\bR^{d})$, and $j \in \bZ$,
\begin{equation*}
\begin{gathered}
\bM^{\sharp}_x\left(\cT_{t,0}^{\varepsilon,j}f\right)(x)
\leq N2^{j\varepsilon\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0},\quad \forall\delta\in(0,1),\\
\bM^{\sharp}_x\left(\cT_{t,0}^{\varepsilon,\leq 0}f\right)(x)\leq N'\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0},\\
\bM^{\sharp}_x\left(\psi(t,-i\nabla)\cT_{t,0}^{j}f\right)(x)
\leq N''2^{j\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0},\quad \forall\delta\in(0,1),\\
\bM^{\sharp}_x\left(\psi(t,-i\nabla)\cT_{t,0}^{\leq0}f\right)(x)\leq N'''\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0}.
\end{gathered}
\end{equation*}
\end{prop}
The proof of Proposition \ref{prop:maximal esti} is given in Section \ref{sec:prop}.
\begin{corollary}
\label{main ingra}
Let $p \in (1,\infty)$ and $w \in A_p(\bR^d)$.
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
Then there exist positive constants $N$, $N'$, $N''$, and $N'''$ such that for all $t\in (0,\infty)$, $f\in \cS(\bR^{d})$, and $j \in \bZ$, \begin{align*}
\big\|\cT_{t,0}^{\varepsilon,j}f\big\|_{L_p(\bR^d, w\,dx)} \leq N2^{j\varepsilon\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}} \big\|f \big\|_{L_p(\bR^d, w\,dx)} \quad \forall \delta \in (0,1), \end{align*} \begin{align*}
\big\|\cT_{t,0}^{\varepsilon,\leq 0}f\big\|_{L_p(\bR^d, w\,dx)} \leq N' \big\|f \big\|_{L_p(\bR^d, w\,dx)}, \end{align*} \begin{align*}
\big\|\psi(t,-i\nabla)\cT_{t,0}^{j}f \big\|_{L_p(\bR^d, w\,dx)} \leq N''2^{j\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}} \big\|f \big\|_{L_p(\bR^d, w\,dx)} \quad \forall \delta \in (0,1), \end{align*} \begin{align*}
\big\|\psi(t,-i\nabla)\cT_{t,0}^{\leq0}f \big\|_{L_p(\bR^d, w\,dx)} \leq N''' \big\|f \big\|_{L_p(\bR^d, w\,dx)}, \end{align*} where the dependence of constants $N$, $N'$, $N''$, and $N'''$ is given similarly to those in Proposition \ref{prop:maximal esti} with additional dependence to $p$ and an upper bound of the $A_p$ semi-norm of $w$. \end{corollary} \begin{proof} First, we claim that there exists a $p_0 \in (1,2]$ such that $p_0 \leq R_{p,d}^w \wedge p$, $w \in A_{p/p_0}(\bR^d)$, and $\left\lfloor\frac{d}{p_0}\right\rfloor = \left\lfloor\frac{d}{R_{p,d}^w}\right\rfloor$. Recall that \begin{align*} R_{p,d}^{w} := \sup \left\{ p_0 \in (1,2] : w \in A_{p/p_0}(\bR^d) \right\}. \end{align*} It is well-known that $ R_{p,d}^{w} >1$ and $$ w \in A_{p/p_0}(\bR^d) \quad \forall p_0 \in \left(1,R_{p,d}^{w} \right) $$ due to the reverse H\"older inequality (\textit{e.g.} see \cite[Remark 2.2]{Choi_Kim2022}). Note that $\left\lfloor\frac{d}{p}\right\rfloor$ is left-continuous and piecewise-constant with respect to $p$. Therefore, there exists a $p_0 \in \left(1,R_{p,d}^{w} \right)$ such that $\left\lfloor\frac{d}{p_0}\right\rfloor = \left\lfloor\frac{d}{R_{p,d}^w}\right\rfloor$ and $w \in A_{p/p_0}(\bR^d)$. It only remains to show that $p_0 \in \left(1,R_{p,d}^{w} \right)$ above is less that or equal to $p$. If $p \geq 2$, then it is obvious since $ p_0< 2$. Thus we only consider the case $p \in (1,2)$. Recall that $A_p(\bR^d)$ is defined only for $p>1$ ($A_1(\bR^d)$-class is not introduced in this paper (see \eqref{def ap}). Thus any $p_0 \in (1,2)$ such that $w \in A_{p/p_0}(\bR^d)$ is obviously less than to $p$.
Next we apply the weighted version of Hardy-Littlewood Theorem and Fefferman-Stein Theorem. Let $f \in \cS(\bR^d)$ and choose a $p_0$ in the claim, i.e. $p_0 \in \left(1, R_{p,d}^w \wedge p\right)$, $w \in A_{p/p_0}(\bR^d)$, and $\left\lfloor\frac{d}{p_0}\right\rfloor = \left\lfloor\frac{d}{R_{p,d}^w}\right\rfloor$. Then by Proposition \ref{prop:maximal esti}, we have
\begin{equation*}
\begin{gathered}
\bM^{\sharp}_x\left(\cT_{t,0}^{\varepsilon,j}f\right)(x)
\leq N2^{j\varepsilon\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0},\quad \forall\delta\in(0,1),\\
\bM^{\sharp}_x\left(\cT_{t,0}^{\varepsilon,\leq 0}f\right)(x)\leq N'\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0},\\
\bM^{\sharp}_x\left(\psi(t,-i\nabla)\cT_{t,0}^{j}f\right)(x)
\leq N''2^{j\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0},\quad \forall\delta\in(0,1),\\
\bM^{\sharp}_x\left(\psi(t,-i\nabla)\cT_{t,0}^{\leq0}f\right)(x)\leq N'''\left(\bM\left(|f|^{p_0}\right)(x)\right)^{1/p_0}.
\end{gathered}
\end{equation*} Moreover, recalling $\frac{p}{p_0} \in (1,\infty)$ and applying Proposition \ref{lem:FS ineq}, we obtain \begin{align*}
\big\|\cT_{t,0}^{\varepsilon,j}f\big\|_{L_p(\bR^d, w\,dx)} \leq N2^{j\varepsilon\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}} \big\|f \big\|_{L_p(\bR^d, w\,dx)} \quad \forall \delta \in (0,1), \end{align*} \begin{align*}
\big\|\cT_{t,0}^{\varepsilon,\leq 0}f\big\|_{L_p(\bR^d, w\,dx)} \leq N' \big\|f \big\|_{L_p(\bR^d, w\,dx)}, \end{align*} \begin{align*}
\big\|\psi(t,-i\nabla)\cT_{t,0}^{j}f \big\|_{L_p(\bR^d, w\,dx)} \leq N''2^{j\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}} \big\|f \big\|_{L_p(\bR^d, w\,dx)} \quad \forall \delta \in (0,1), \end{align*} \begin{align*}
\big\|\psi(t,-i\nabla)\cT_{t,0}^{\leq0}f \big\|_{L_p(\bR^d, w\,dx)} \leq N''' \big\|f \big\|_{L_p(\bR^d, w\,dx)}. \end{align*} Since $p_0 \in (1,2)$, we have $e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\leq e^{-\kappa t 2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}$, and the corollary is proved. \end{proof}
Based on Corollary \ref{main ingra}, we prove the following main estimate which yields Theorem \ref{22.12.27.16.53}. \begin{thm}\label{thm:ep gamma esti}
Let $p\in(1,\infty)$, $q\in(0,\infty)$, $\varepsilon \in [0,1]$, $a \in (0,\infty)$, $w\in A_p(\bR^d)$, $\mu$ be a nonnegative Borel measure on $(0,\infty)$, and $\boldsymbol{\mu}:\bZ\to(-\infty,\infty)$ be a sequence having a controlled difference. Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{R_{p,d}^{w}}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$. Additionally, assume that the Laplace transform of $\mu$ is controlled by a sequence $\boldsymbol{\mu}$ in a $\gamma$-dyadic way with parameter $a$, \textit{i.e.} \begin{align}
\label{laplace cond} \cL_{\mu}(2^{\gamma j}) := \int_0^\infty \exp\left( - 2^{\gamma j} t \right) \mu(dt) \leq N_{\cL_\mu} \cdot 2^{j\gamma a}2^{-\boldsymbol{\mu}(j)},\quad \forall j\in\bZ. \end{align} Then there exists a positive constant $N$ such that for any $u_0\in C_c^{\infty}(\bR^d)$,
\begin{align}
\label{20230118 01}
\int_{0}^T\left\|\Delta^{\frac{\varepsilon\gamma}{2}}\cT_{t,0} u_0 \right\|_{L_p(\bR^d,w\,dx)}^q
t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)
\leq N(1+ \mu_{a,T,\kappa,\gamma,q})\|u_0\|^q_{B_{p,q}^{\varepsilon\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align}
\begin{align}
\label{20230118 02}
\int_{0}^T\left\|\Delta^{\frac{\varepsilon\gamma}{2}} \cT_{t,0} u_0\right\|_{L_p(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\leq N\|u_0\|^q_{\dot{B}_{p,q}^{\varepsilon \boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align}
\begin{align}
\label{20230118 03}
\int_{0}^T\left\|\psi(t,-i\nabla)\cT_{t,0} u_0 \right\|_{L_p(\bR^d,w\,dx)}^q
t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)
\leq N'(1+\mu_{a,T,\kappa,\gamma,q})\|u_0\|^q_{B_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align}
and
\begin{align}
\label{20230118 04}
\int_{0}^T\left\|\psi(t,-i\nabla)\cT_{t,0} u_0\right\|_{L_p(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\leq N'\|u_0\|^q_{\dot{B}_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)},
\end{align}
where $[w]_{A_p(\bR^d)}\leq K$, $N=N(a, N_{\cL_\mu},d,\varepsilon, \gamma,\kappa,K,M,p,q)$, $N'=N'(a, N_{\cL_\mu},d, \gamma,\kappa,K,M,p,q,R_{p,d}^w)$, $\varepsilon \boldsymbol{\gamma}(j)=\varepsilon \gamma j$ for all $j \in \bZ$, and \begin{align*} \mu_{a,T,\kappa,\gamma,q}=\int_0^T t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right). \end{align*} \end{thm}
\begin{proof} Due to the upper bounds of $L_p$-norms in Corollary \ref{main ingra}, the proofs of \eqref{20230118 03} and \eqref{20230118 04} are very similar to those of \eqref{20230118 01} and \eqref{20230118 02} when $\varepsilon=1$. Thus we only prove \eqref{20230118 01} and \eqref{20230118 02}. We make use of the Littlewood-Paley operators $\Delta_j$. By using the almost orthogonality of the Littlewood-Paley operators, we have (at least in a distribution sense)
\begin{align}
\label{22.02.15.16.48}
&\Delta^{\frac{\varepsilon\gamma}{2}} \cT_{t,0} u_0
=\sum_{j\in\bZ}\Delta_j(-\Delta)^{\varepsilon\gamma/2}\cT_{t,0}u_0=\sum_{j\in\bZ}\sum_{i \in \bZ}\Delta_j(-\Delta)^{\varepsilon\gamma/2}\cT_{t,0}\Delta_i u_0
=\sum_{j\in\bZ}\sum_{i=-1}^1\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\\
&\label{22.02.15.16.49}
=\cT_{t,0}^{\varepsilon,\leq0}(S_0u_0)+\cT_{t,0}^{\varepsilon,1}(\Delta_0u_0)+\sum_{j=1}^{\infty}\sum_{i=-1}^1\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0),
\end{align}
where $\cT_{t,0}^{\varepsilon,\leq0}$ and $\cT_{t,0}^{\varepsilon, j}$ are defined in \eqref{def T ep j}.
Thus by Minkowski's inequality,
\begin{align}
&\int_0^T \|(-\Delta)^{\varepsilon\gamma/2}\cT_{t,0} u_0\|_{L_p(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\nonumber\\
&\leq \int_0^T \bigg(\sum_{j\in\bZ}\sum_{i=-1}^1\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)}\bigg)^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right),\label{ineq:22.02.22.12.37} \end{align} and \begin{align}
&\int_0^T \|(-\Delta)^{\varepsilon\gamma/2}\cT_{t,0} u_0\|_{L_p(\bR^d,w\,dx)}^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\nonumber\\
&\leq \int_0^T \bigg( \big\|\cT_{t,0}^{\varepsilon,\leq0}(S_0u_0)+\cT_{t,0}^{\varepsilon,1}(\Delta_0u_0)\big\|_{L_p(\bR^d, w\,dx)}+\sum_{j=1}^{\infty}\sum_{i=-1}^1 \big\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\big\|_{L_p(\bR^d, w\,dx)} \bigg)^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right).
\label{ineq:22.02.22.12.38}
\end{align} Moreover, by Corollary \ref{main ingra}, we have
\begin{align}
\sum_{i=-1}^1 \|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)} &\leq N2^{j\varepsilon\gamma}e^{-\kappa t 2^{j\gamma}\times\frac{(1-\delta)}{4^{\gamma}}}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)},\label{claim:hom} \end{align} and \begin{align}
\|\cT_{t,0}^{\varepsilon,\leq 0}(S_0u_0) \|_{L_p(\bR^d, w\,dx)} &\leq N\| S_0 u_0\|_{L_p(\bR^d, w\,dx)}.
\label{claim:inhom}
\end{align}
Due to \eqref{ineq:22.02.22.12.38} and \eqref{claim:inhom}, to show \eqref{20230118 01}, it is sufficient to show that
\begin{align}
\label{20230120 01}
\int_0^T \bigg(\sum_{j\in\bN}\sum_{i=-1}^1\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)}\bigg)^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)
\leq
N \|u_0\|_{B_{p,q}^{\varepsilon\boldsymbol{\gamma}-\boldsymbol{\mu}/q}(\bR^d,w\,dx)}^q.
\end{align} Similarly, to show \eqref{20230118 02}, it is sufficient to show
\begin{align}
\int_0^T \bigg(\sum_{j\in\bZ}\sum_{i=-1}^1\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)}\bigg)^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)
\leq
N \|u_0\|_{\dot{B}_{p,q}^{\varepsilon\boldsymbol{\gamma}-\boldsymbol{\mu}/q}(\bR^d,w\,dx)}^q. \label{ineq:hom}
\end{align}
Since the proofs of \eqref{20230120 01} and \eqref{ineq:hom} are very similar, we focus only on the more difficult case \eqref{ineq:hom}.
To verify \eqref{ineq:hom}, we apply \eqref{claim:hom} to \eqref{ineq:22.02.22.12.37} with $\delta= (1-2^{-\gamma})$ and obtain
\begin{align}
\notag
&\int_0^T \bigg(\sum_{j\in\bZ}\sum_{i=-1}^1\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)}\bigg)^q t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right) \\
\label{20230120 11}
&\leq N\int_0^T\left(\sum_{j\in\bZ}2^{j\varepsilon\gamma}e^{-\kappa t2^{j\gamma} 2^{-3\gamma}}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}\right)^qt^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right).
\end{align} We estimate it depending on the range of $q$. First, if $q\in(0,1]$, then we simply have
\begin{align*}
&\int_0^T\left(\sum_{j\in\bZ}2^{j\varepsilon\gamma}e^{-\kappa t2^{j\gamma}2^{-3\gamma}}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}\right)^qt^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right) \\
&\lesssim \sum_{j\in\bZ}2^{qj\varepsilon\gamma}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
\int_0^\infty \exp\left(-2^{j\gamma}2^\gamma t\right) t^a \mu(dt) \\
&\lesssim \sum_{j\in\bZ}2^{qj\varepsilon\gamma - j\gamma a}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
\cL_\mu(2^{j \gamma}),
\end{align*} where the simple inequality $t^ae^{-2^{j\gamma}2^{\gamma}t} \leq N(a,\gamma)2^{-j\gamma a}e^{-2^{j\gamma}t}$ is used in the last part of the computation above. Finally, applying \eqref{laplace cond}, we have \eqref{ineq:hom}.
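For the reader's convenience, we verify the simple inequality used above: since $\sup_{s>0}s^{a}e^{-cs}=(a/(ce))^{a}$ for any $c>0$, we have
\begin{align*}
t^{a}e^{-2^{j\gamma}2^{\gamma}t} = t^{a}e^{-(2^{\gamma}-1)2^{j\gamma}t}\,e^{-2^{j\gamma}t} \leq \left(\frac{a}{e(2^{\gamma}-1)}\right)^{a}2^{-j\gamma a}\,e^{-2^{j\gamma}t}.
\end{align*}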
Next we consider the case $q\in(1,\infty)$. Divide $\bZ$ into two parts as follows:
$$
\bZ=\{j\in\bZ:2^{j\gamma}t\leq1\}\cup\{j\in\bZ:2^{j\gamma}t>1\}=:\cI_1(t)\cup\cI_2(t).
$$
Thus, we have
\begin{align}
&\int_0^T \bigg(\sum_{j\in\bZ}\sum_{i=-1}^1\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)}\bigg)^qt^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\nonumber\\
&\leq N\int_0^T\left(\sum_{j\in\bZ}2^{j\varepsilon\gamma}e^{-\kappa t2^{j\gamma}2^{-3\gamma}}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}\right)^qt^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)\nonumber\\
&\leq N\int_0^T\left(\sum_{j\in\bZ}2^{j\varepsilon\gamma}e^{- t2^{j\gamma}2^\gamma}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}\right)^qt^a\mu\left(dt\right)\nonumber\\
&\leq N\int_0^T\left(\sum_{j\in\cI_1(t)}\cdots\right)^qt^a\mu(dt) +N \int_0^T\left(\sum_{j\in\cI_2(t)}\cdots\right)^qt^a\mu\left(dt\right)
=: N(I_1 + I_2).\label{cI1 cI2}
\end{align}
Let $b\in (0,a]$ be a constant whose exact value will be chosen later.
Then we insert the factor $2^{j\gamma b/q} 2^{-j \gamma b/q}$ into the summation over $\cI_1(t)$ and make use of H\"older's inequality to obtain
\begin{align}
I_1=
&\int_0^T\left(\sum_{j\in\cI_1(t)}2^{j\varepsilon\gamma}e^{- t2^{j\gamma}2^\gamma}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}\right)^q t^a \mu(dt)\nonumber\\
&\leq \int_0^T\left(\sum_{j\in\cI_1(t)}2^{\frac{j\gamma b}{q-1}}\right)^{q-1}\left(\sum_{j\in\cI_1(t)}2^{jq\varepsilon\gamma-j\gamma b}e^{-q t2^{j\gamma} 2^\gamma}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q\right)t^a\mu(dt),\label{cI1}
\end{align}
and similarly,
\begin{align}
I_2=
&\int_0^T\left(\sum_{j\in\cI_2(t)}2^{j\varepsilon\gamma} e^{- t2^{j\gamma}2^\gamma} \|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}\right)^q t^a\mu(dt)\nonumber\\
&\leq \int_0^T\left(\sum_{j\in\cI_2(t)}2^{\frac{j\gamma b}{q-1}}e^{- t2^{j\gamma}2^\gamma}\right)^{q-1}\left(\sum_{j\in\cI_2(t)}2^{jq\varepsilon\gamma-j\gamma b}e^{-t2^{j\gamma}2^\gamma}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q\right) t^a \mu(dt).\label{cI2}
\end{align} We estimate \eqref{cI1} first. For each $t>0$, we can choose $j_1(t)\in\bZ$ such that $2^{j_1(t)\gamma}t\leq 1$ and $2^{(j_1(t)+1)\gamma}t>1$. Roughly speaking, $j_1(t) \approx -\frac{1}{\gamma}\log_2t$.
Thus, we have
\begin{align}
\sum_{j\in\cI_1(t)}2^{\frac{j\gamma b}{q-1}}=\frac{2^{\frac{j_1(t)\gamma b}{q-1}}}{1-2^{-\frac{\gamma b}{q-1}}}\leq N(b,\gamma ,q)t^{-\frac{b}{q-1}}.\label{ineq:22.02.22.14.06}
\end{align} Putting \eqref{ineq:22.02.22.14.06} in \eqref{cI1}, we obtain
\begin{align}
I_1
&\leq
N
\int_0^T
t^{-b}
\sum_{j\in\cI_1(t)}
2^{jq\varepsilon\gamma-j\gamma b}
e^{-q t2^{j\gamma}2^\gamma}
\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
t^a \mu(dt)\nonumber\\
&\leq
N
\sum_{j\in\bZ}
2^{jq\varepsilon\gamma-j\gamma b} \|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
\int_0^T e^{-qt 2^{j\gamma} 2^\gamma} t^{a-b}\mu(dt)\nonumber\\
&\leq
N
\sum_{j\in\bZ}
2^{jq\varepsilon\gamma-j\gamma b} \|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
2^{-j\gamma(a-b)}\int_0^\infty e^{-t 2^{j\gamma}} \mu(dt)\nonumber\\
&\leq
N
\sum_{j\in\bZ} 2^{jq\varepsilon\gamma-j\gamma a}\cL_{\mu}(2^{j\gamma}) \|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q,\nonumber
\end{align} where $N=N(b,\gamma,\kappa)$. Therefore, by \eqref{laplace cond}, \begin{align} \label{ineq:cI1 final}
I_1\leq N(b,N_{\cL_\mu},\gamma,\kappa)\sum_{j\in\bZ}2^{q\varepsilon\boldsymbol{\gamma}(j)-\boldsymbol{\mu}(j)}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q. \end{align}
For $I_2$, we choose a sufficiently small $b\in (0,a]$ satisfying
$$
a\geq\frac{b}{(q-1) 2^\gamma}.
$$
Then we can check that
$$
f(\lambda):=\lambda^{\frac{b}{q-1}}e^{-2^\gamma\lambda}
$$
is a decreasing function on $(1,\infty)$.
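For the reader's convenience, we record the routine computation behind this monotonicity claim:
\begin{align*}
f'(\lambda)=\lambda^{\frac{b}{q-1}-1}e^{-2^{\gamma}\lambda}\left(\frac{b}{q-1}-2^{\gamma}\lambda\right)\leq 0
\quad\text{whenever}\quad \lambda\geq\frac{b}{(q-1)2^{\gamma}},
\end{align*}
so $f$ is non-increasing on $(1,\infty)$ as soon as $b$ is small enough that $\frac{b}{(q-1)2^{\gamma}}\leq1$.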
Using $f(\lambda)$ and taking $\lambda = 2^{j\gamma}t$, we have
\begin{align}
\label{20230120 30}
\sum_{j\in\cI_2(t)}2^{\frac{j\gamma b}{q-1}}e^{-t2^{j\gamma}2^\gamma}&=t^{-\frac{b}{q-1}}\sum_{j\in\cI_2(t)}(2^{j\gamma}t)^{\frac{b}{q-1}}e^{-t2^{j\gamma}2^\gamma}\leq Nt^{-\frac{b}{q-1}}\int_{-\frac{1}{\gamma}\log_{2}t}^{\infty}(2^{\lambda\gamma}t)^{\frac{b}{q-1}}e^{- t2^{\lambda\gamma}2^\gamma}d\lambda. \end{align} Moreover, applying the simple change of variables $t 2^{\lambda \gamma} \to \lambda$, the above term is less than or equal to \begin{align}
Nt^{-\frac{b}{q-1}}\int_1^{\infty} \frac{f(\lambda)}{\lambda}d\lambda=N(q,b,\kappa,\gamma)t^{-\frac{b}{q-1}}.\label{ineq:22.02.22.14.35}
\end{align}
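Explicitly, the substitution replaces $t2^{\lambda\gamma}$ by a new variable, which we call $u$ here (an auxiliary name used only for this remark): $du=(\gamma\ln 2)\,u\,d\lambda$ and $u=1$ when $\lambda=-\frac{1}{\gamma}\log_{2}t$, so that
\begin{align*}
\int_{-\frac{1}{\gamma}\log_{2}t}^{\infty}(2^{\lambda\gamma}t)^{\frac{b}{q-1}}e^{- t2^{\lambda\gamma}2^\gamma}d\lambda
=\frac{1}{\gamma\ln 2}\int_{1}^{\infty}\frac{f(u)}{u}\,du,
\end{align*}
which is a finite constant depending only on $b$, $q$, and $\gamma$.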
Putting \eqref{20230120 30} and \eqref{ineq:22.02.22.14.35} in \eqref{cI2}, we have
\begin{align}
I_2
&\leq N
\int_0^{\infty}
\sum_{j\in\cI_2(t)}
2^{jq\varepsilon\gamma-j\gamma b}
e^{- t2^{j\gamma}2^\gamma}
\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
t^{a-b}\mu(dt)\nonumber\\
&\leq
N
\sum_{j\in\bZ}
2^{jq\varepsilon\gamma-j\gamma a}
\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q
\cL_{\mu}(2^{j\gamma}),\nonumber
\end{align} where $N=N(b,\delta,\gamma,\kappa)$. Therefore, by \eqref{laplace cond} again, \begin{align} \label{ineq:cI2 final}
I_2\leq N(b,N_{\cL_\mu},\gamma,\kappa)\sum_{j\in\bZ}2^{q\varepsilon\boldsymbol{\gamma}(j)-\boldsymbol{\mu}(j)}\|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q. \end{align} Finally, combining \eqref{cI1 cI2}, \eqref{ineq:cI1 final}, and \eqref{ineq:cI2 final}, we obtain
\begin{align*}
\int_0^T \bigg(\sum_{j\in\bZ}\sum_{i=-1}^1\|\cT_{t,0}^{\varepsilon,j+i}(\Delta_{j}u_0)\|_{L_p(\bR^d,w\,dx)}\bigg)^q t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right)
&\leq N\sum_{j\in\bZ} 2^{q\varepsilon\boldsymbol{\gamma}(j)-\boldsymbol{\mu}(j)} \|\Delta_ju_0\|_{L_p(\bR^d,w\,dx)}^q\\
&= N\|u_0\|_{\dot{B}_{p,q}^{\varepsilon\boldsymbol{\gamma}-\boldsymbol{\mu}/q}(\bR^d,w\,dx)}^q,
\end{align*}
which proves \eqref{ineq:hom}, since $b$ can be chosen depending only on $a$, $\gamma$, and $q$. \end{proof}
\begin{proof}[Proof of Theorem \ref{22.12.27.16.53}] Due to Proposition \ref{22.05.03.11.34}, without loss of generality, we assume that $\boldsymbol{r}(j)=\boldsymbol{\gamma}(j)=j\gamma$. First, we prove \textit{a priori} estimates \eqref{main a priori est 0} and \eqref{main a priori est}. By \cite[Theorem 2.1.5]{choi_thesis}, for any $u_0 \in C_c^\infty(\bR^d)$,
there is a unique classical solution $u \in C_p^{1,\infty}([0,T] \times \bR^d)$ to the Cauchy problem
\begin{equation*}
\begin{cases}
\partial_tu(t,x)=\psi(t,-i\nabla)u(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u(0,x)=u_0(x),\quad & x\in\bR^d.
\end{cases}
\end{equation*} and the solution $u$ is given by
$$
u(t,x):=\int_{\bR^d}p(t,0,x-y)u_0(y)dy=\cT_{t,0}u_0(x).
$$ Thus due to Theorem \ref{thm:ep gamma esti}, we have \eqref{main a priori est 0} and \eqref{main a priori est}.
Next we prove the existence of a solution. By Proposition \ref{22.04.24.20.57}-($ii$), for $u_0\in B_{p,q}^{\boldsymbol{\gamma}-\boldsymbol{\mu}/q}(\bR^d,w\,dx)$, there exists a sequence $\{u_0^n\}_{n=1}^{\infty}\subseteq C_c^{\infty}(\bR^d)$ such that $u_0^n\to u_0$ in $B_{p,q}^{\boldsymbol{\gamma}-\boldsymbol{\mu}/q}(\bR^d,w\,dx)$.
By \cite[Theorem 2.1.5]{choi_thesis} again,
$$
u_n(t,x):=\int_{\bR^d}p(t,0,x-y)u_0^n(y)dy=\cT_{t,0}u_0^n(x)
\in C_p^{1,\infty}([0,T]\times\bR^d)
$$
becomes a unique classical solution to the Cauchy problem
\begin{equation*}
\begin{cases}
\partial_tu_n(t,x)=\psi(t,-i\nabla)u_n(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
u_n(0,x)=u_0^n(x),\quad & x\in\bR^d.
\end{cases}
\end{equation*} Moreover, due to the linearity of the equation, applying Theorem \ref{thm:ep gamma esti} again, for all $n,m \in \bN$, we have \begin{align*}
&\int_{0}^T\left(\left\|u_n-u_m \right\|_{H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx)}^q +\left\|\psi(t,-i\nabla)(u_n - u_m) \right\|_{L_p(\bR^d,w\,dx)}^q \right)
t^a\mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right) \\
&\leq N'(1+\mu_{a,T,\kappa,\gamma,q})\|u_0^n-u_0^m\|^q_{B_{p,q}^{\boldsymbol{\gamma}-\frac{\boldsymbol{\mu}}{q}}(\bR^d,w\,dx)}. \end{align*} In particular, $u_n$ becomes a Cauchy sequence in $$ L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right);H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx)\right). $$ Since the above space is a quasi-Banach space, the sequence $u_n$ has a limit $u$ in this space. Therefore, by Definition \ref{def sol}, this $u$ is a solution.
Finally, we prove the uniqueness of the solution. Assume that there exist two solutions $u$ and $v$. Then by Definition \ref{def sol}, there exist
$u_n,v_n\in C_p^{1,\infty}([0,T]\times\bR^d)$ such that $u_n(0,\cdot),v_n(0,\cdot)\in C_c^{\infty}(\bR^d)$, \begin{equation*} \begin{gathered} \partial_tu_n(t,x)=\psi(t,-i\nabla)u_n(t,x),\quad \partial_tv_n(t,x)=\psi(t,-i\nabla)v_n(t,x)\quad \forall (t,x)\in(0,T)\times\bR^d,\\ u_n(0,\cdot), v_n(0,\cdot)\to u_0 \quad\text{in}\quad B_{p,q}^{\boldsymbol{\gamma}-\boldsymbol{\mu}/q}(\bR^d,w\,dx), \end{gathered} \end{equation*} and $u_n\to u$, $v_n\to v$ in $$ L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma}dt\right);H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx)\right) $$ as $n\to\infty$. Then $w_n:=u_n-v_n$ satisfies
\begin{equation*}
\begin{cases}
\partial_tw_n(t,x)=\psi(t,-i\nabla)w_n(t,x),\quad &(t,x)\in(0,T)\times\bR^d,\\
w_n(0,x)=u_n(0,x)-v_n(0,x),\quad & x\in\bR^d.
\end{cases}
\end{equation*}
Due to \cite[Theorem 2.1.5]{choi_thesis} and Theorem \ref{thm:ep gamma esti}, we conclude that $w_n\to0$ in
$$ L_q\left((0,T),t^a \mu\left(\frac{(1\wedge q) \kappa}{16^\gamma} dt\right);H_p^{\boldsymbol{\gamma}}(\bR^d,w\,dx)\right) $$ as $n\to\infty$. Since the limit is unique, $$ 0=\lim_{n \to \infty} w_n = \lim_{n \to \infty}u_n - \lim_{n \to \infty}v_n = u -v. $$ The theorem is proved. \end{proof}
\mysection{Proof of Proposition \ref{prop:maximal esti}}\label{sec:prop}
Recall \begin{align}
\label{def fund} p(t,s,x):=1_{0 < s< t} \cdot \frac{1}{(2\pi)^{d/2}}\int_{\bR^d} \exp\left(\int_{s}^t\psi(r,\xi)dr\right)e^{ix\cdot\xi}d\xi \end{align} and \begin{align}
\label{def kernel}
P_{\varepsilon}(t,s,x):=(-\Delta)^{\varepsilon\gamma/2}p(t,s,x),\quad \varepsilon\in[0,1]. \end{align} We fix $\varepsilon \in [0,1]$ throughout this section. The proof of Proposition \ref{prop:maximal esti} is twofold. In the first subsection, we obtain quantitative estimates for the kernel $ P_{\varepsilon}$. In the special case $\varepsilon =0$, this gives an estimate for the fundamental solution $P_{0}(t,s,x):=p(t,s,x)$ to \eqref{eqn:model eqn}; that is, \begin{equation*}
\begin{cases}
\partial_tp(t,s,x)=\psi(t,-i\nabla)p(t,s,x),\quad &(t,x)\in(s,\infty)\times\bR^d,\\
\lim_{t \downarrow s}p(t,s,x)=\delta_0(x),\quad &x\in\bR^d.
\end{cases} \end{equation*} Here, $\delta_0$ is the Dirac measure centered at the origin. We show some quantitative estimates in the first subsection and then, using these estimates, prove an important lemma which controls the mean oscillations of the operators $\cT_{t,0}^{\varepsilon, j}$, $\cT_{t,0}^{\varepsilon, \leq 0}$, $\psi(t,-i\nabla)\cT_{t,0}^{ j}$, and $\psi(t,-i\nabla) \cT_{t,0}^{\leq 0}$ in the second subsection. Finally, we prove Proposition \ref{prop:maximal esti} based on the mean oscillation estimates.
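As a simple illustration (a hypothetical example, not used in the sequel, assuming that the time-independent symbol $\psi(t,\xi)=-|\xi|^{2}$, \textit{i.e.} $\gamma=2$, is admissible), a direct Gaussian integral with the Fourier normalization of \eqref{def fund} gives
\begin{align*}
p(t,s,x)=\frac{1_{0<s<t}}{(2\pi)^{d/2}}\int_{\bR^d}e^{-(t-s)|\xi|^{2}}e^{ix\cdot\xi}\,d\xi
=\frac{1_{0<s<t}}{\bigl(2(t-s)\bigr)^{d/2}}\exp\Bigl(-\frac{|x|^{2}}{4(t-s)}\Bigr),
\end{align*}
\textit{i.e.} the classical heat kernel up to the chosen normalization of the Fourier transform.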
\subsection{Estimates on fundamental solutions} Recall \begin{align*} \psi(t,-i \nabla)p(t,s,x):=1_{0 < s< t} \cdot \frac{1}{(2\pi)^{d/2}}\int_{\bR^d} \psi(t,\xi)\exp\left(\int_{s}^t\psi(r,\xi)dr\right)e^{ix\cdot\xi}d\xi. \end{align*} Here is our main kernel estimate. \begin{thm}\label{22.02.15.11.27}
Let $k$ be an integer such that $k>\lfloor d/2\rfloor$. Assume that $\psi(t,\xi)$ satisfies an ellipticity condition with $(\gamma,\kappa)$ and has a $k$-times regular upper bound with $(\gamma,M)$.
\begin{enumerate}[(i)]
\item
Let $p\in[2,\infty]$, $(n,m,|\alpha|)\in[0,k]\times\{0,1\}\times \{0,1,2\}$, and $\delta\in(0,1)$. Then there exists a positive constant $N=N(|\alpha|,d,\delta,\varepsilon,\gamma,\kappa,M,m,n)$ such that for all $t>s>0$,
\begin{equation}
\label{22.01.27.13.46}
\left\||\cdot|^{n}\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-n+d/p'\right)},
\end{equation}
where $p'$ is the H\"older conjugate of $p$, \textit{i.e.} $1/p+1/p'=1$ $(p'=1$ if $p=\infty)$.
\item
Let $p\in[1,2]$, $(m,|\alpha|)\in\{0,1\}\times \{0,1,2\}$, and $\delta\in(0,1)$.
Then there exists a positive constant $N=N(|\alpha|,d,\delta,\varepsilon,\gamma,\kappa,M,m)$ such that for all $t>s>0$,
\begin{equation}\label{22.01.27.13.58}
\left\|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|+d/p'\right)}.
\end{equation}
\item
Let $p\in[2,\infty]$, $(n,m,|\alpha|)\in[0,k]\times\{0,1\}\times \{0,1,2\}$, and $\delta\in(0,1)$. Then there exists a positive constant $N=N(|\alpha|,d,\delta,\varepsilon,\gamma,\kappa,M,m,n)$ such that for all $t>s>0$,
\begin{equation}
\label{2023012301}
\left\||\cdot|^{n}\partial_t^mD_x^{\alpha}\Delta_j \psi(t,-i\nabla) p(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-n+d/p'\right)}.
\end{equation}
\item
Let $p\in[1,2]$, $(m,|\alpha|)\in\{0,1\}\times \{0,1,2\}$, and $\delta\in(0,1)$.
Then there exists a positive constant $N=N(|\alpha|,d,\delta,\varepsilon,\gamma,\kappa,M,m)$ such that for all $t>s>0$,
\begin{equation}
\label{2023012302}
\left\|\partial_t^mD_x^{\alpha}\Delta_j\psi(t,-i\nabla)p(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|+d/p'\right)}.
\end{equation}
\end{enumerate} \end{thm}
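For orientation, we note that specializing \eqref{22.01.27.13.58} to $m=|\alpha|=0$, $\varepsilon=0$, and $p=1$ (so that $d/p'=0$) yields the frequency-localized $L_1$-bound
\begin{align*}
\left\|\Delta_jp(t,s,\cdot)\right\|_{L_1(\bR^d)}\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}},
\end{align*}
which is the type of bound used later, via Minkowski's inequality, to control the operators $\cT_{t,0}^{\varepsilon,j}$.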
\begin{proof} The proofs of \eqref{2023012301} and \eqref{2023012302} are very similar to those of \eqref{22.01.27.13.46} and \eqref{22.01.27.13.58} with $\varepsilon =1$ due to \eqref{condi:reg ubound}, \textit{i.e.} $$
|\psi(t,\xi)| \lesssim |\xi|^\gamma. $$ Thus we only focus on proving \eqref{22.01.27.13.46} and \eqref{22.01.27.13.58}.
The proofs highly rely on the following lemma whose proof is given in the last part of this subsection.
\begin{lem}\label{lem:kernel esti}
Let $k\in \bN$, $\alpha$ be a ($d$-dimensional) multi-index and $m\in\{0,1\}$.
Assume that $\psi(t,\xi)$ satisfies an ellipticity condition with $(\gamma,\kappa)$ and has a $k$-times regular upper bound with $(\gamma,M)$.
Then for all $n\in\{0,1,\cdots,k\}$ and $\delta\in(0,1)$, there exists a positive constant $N=N(|\alpha|,d,\delta,\varepsilon,\gamma,\kappa,M,m,n)$ such that for all $t>s>0$ and $j\in\bZ$,
\begin{equation*}
\begin{gathered}
\sup_{x\in\bR^d}|x|^{n} |\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|\leq Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j((m+\varepsilon)\gamma+|\alpha|-n+d)}\\
\left(\int_{\bR^d}|x|^{2n}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^2dx\right)^{1/2}\leq Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-n+d/2\right)}.
\end{gathered}
\end{equation*}
\end{lem}
We temporarily assume that Lemma \ref{lem:kernel esti} holds to complete the proof of Theorem \ref{22.02.15.11.27}.
We prove \eqref{22.01.27.13.46} first. We divide the proof into two cases: integer case and non-integer case, \textit{i.e.} $n=0,1,2,\cdots,k$ and $n\in[0,k]\setminus\{0,1,2,\cdots,k\}$.
\begin{enumerate}
\item[Case 1.]
Assume $n$ is an integer, \textit{i.e.} $n=0,1,2,\cdots,k$.
If $p=2$ or $p=\infty$, then \eqref{22.01.27.13.46} directly holds due to Lemma \ref{lem:kernel esti}.
For $p\in(2,\infty)$, we apply Lemma \ref{lem:kernel esti} again and obtain
\begin{align*}
&\int_{\bR^d}|x|^{pn}|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,x)|^{p}dx\\
&=\int_{\bR^d}|x|^{2n}|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,x)|^{2}|x|^{(p-2)n}|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,x)|^{p-2}dx\\
&\leq N\left(e^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j((m+\varepsilon)\gamma+|\alpha|-n+d)}\right)^{p-2}\int_{\bR^d}|x|^{2n}|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,x)|^{2}dx\\
&\leq N\left(e^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j((m+\varepsilon)\gamma+|\alpha|-n+d)}\right)^{p-2}\times\left(e^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-n+d/2\right)}\right)^{2}\\
&=N\left(e^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-n+d/p'\right)}\right)^{p}.
\end{align*}
\item[Case 2.]
Assume $n$ is a non-integer, \textit{i.e.} $n\in[0,k]\setminus\{0,1,2,\cdots,k\}$.
Observe that for any $q\in[1,\infty)$,
\begin{equation}\label{22.01.27.13.59}
\begin{aligned}
&|x|^{qn}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^q\\
&=\left(|x|^{q\lfloor n\rfloor}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^q\right)^{1-(n-\lfloor n\rfloor)}\times\left(|x|^{q(\lfloor n\rfloor+1)}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^q\right)^{n-\lfloor n\rfloor}.
\end{aligned}
\end{equation}
In this case we combine \eqref{22.01.27.13.59} with the result of Case 1; that is, after applying \eqref{22.01.27.13.59} we use \eqref{22.01.27.13.46} with the integer exponents $\lfloor n \rfloor$ and $\lfloor n \rfloor+1$.
Using \eqref{22.01.27.13.59} with $q=1$ and the result of Case 1, \eqref{22.01.27.13.46} holds if $p=\infty$.
For the other $p$, \textit{i.e.} $p \in [2,\infty)$, we use \eqref{22.01.27.13.59} with $q=p$ and apply H\"older's inequality with the exponents $\frac{1}{n-\lfloor n \rfloor}$ and $\frac{1}{1-(n-\lfloor n \rfloor)}$. Then, by the result of Case 1, we have
\begin{align*}
&\left(\int_{\bR^d}|x|^{pn}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^pdx\right)^{1/p}\\
&\leq \left(\int_{\bR^d}|x|^{p(\lfloor n\rfloor+1)}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^pdx\right)^{(n-\lfloor n\rfloor)/p}\times\left(\int_{\bR^d}|x|^{p\lfloor n\rfloor}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^pdx\right)^{(1-(n-\lfloor n\rfloor))/p}\\
&\leq Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-n+d/p'\right)}.
\end{align*}
\end{enumerate}
Next we prove \eqref{22.01.27.13.58}. The case $p=2$ holds due to \eqref{22.01.27.13.46} with $n=0$.
Moreover, we claim that it is sufficient to show that \eqref{22.01.27.13.58} holds for $p=1$.
Indeed, for $p \in (1,2)$ there exists a $\lambda \in (0,1)$ such that $p=\lambda+2(1-\lambda)$ and
\begin{equation}\label{ineq:22 02 28 13 27}
\begin{aligned}
|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^p=|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^{\lambda}\times|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^{2(1-\lambda)}.
\end{aligned}
\end{equation} Applying H\"older's inequality with \eqref{ineq:22 02 28 13 27} and $\frac{1}{\lambda}$, we obtain \eqref{22.01.27.13.58}.
Thus we focus on showing \eqref{22.01.27.13.58} with $p=1$.
Let $j \in \bZ$. We consider $|x| \leq 2^{-j}$ and $|x|>2^{-j}$, separately.
\begin{align*}
\int_{\bR^d}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|dx&=\int_{|x|\leq 2^{-j}}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|dx + \int_{|x|> 2^{-j}}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|dx.
\end{align*}
For $|x| \leq 2^{-j}$ we make use of $(i)$ with $p=\infty$ and $n=0$. Then
\begin{align*}
\int_{|x|\leq 2^{-j}}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|dx&\leq Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j((m+\varepsilon)\gamma+|\alpha|)}.
\end{align*}
For $|x|>2^{-j}$ we put $d_2:=\lfloor d/2\rfloor+1$ and note that $d_2\leq k$.
Then by H\"older's inequality and $(i)$ with $p=2$,
\begin{align*}
\int_{|x|> 2^{-j}}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|dx&\leq \left(\int_{|x|> 2^{-j}}|x|^{-2d_2}dx\right)^{1/2} \left(\int_{|x|> 2^{-j}}|x|^{2d_2}|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|^2dx\right)^{1/2}\\
&\leq N2^{j(d_2-d/2)}e^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-d_2+d/2\right)}\\
&= Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|\right)}.
\end{align*}
The theorem is proved. \end{proof}
\begin{corollary}\label{22.02.15.14.36}
Let $k$ be an integer such that $k>\lfloor d/2\rfloor$. Assume that $\psi(t,\xi)$ satisfies an ellipticity condition with $(\gamma,\kappa)$ and has a $k$-times regular upper bound with $(\gamma,M)$. Then, for all $p\in[1,\infty]$ and $(m,|\alpha|)\in\{0,1\}\times\{0,1,2\}$,
there exist positive constants $N$ and $N'$ such that for all $t>s>0$
\begin{align}
\label{20230130 01}
\left\|\partial_t^mD_x^{\alpha}S_0P_{\varepsilon}(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq N,
\end{align} and
$$
\left\|\partial_t^mD_x^{\alpha}S_0 \psi(t,-i\nabla)p(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq N',
$$
where $N=N(|\alpha|,d,\varepsilon,\gamma,\kappa,M,m,p)$ and $N'=N'(|\alpha|,d,\gamma,\kappa,M,m,p)$. \end{corollary}
\begin{proof} Due to similarity, we only prove \eqref{20230130 01}.
Since $S_0:=\sum_{j\leq 0}\Delta_j$, we make use of Minkowski's inequality and Theorem \ref{22.02.15.11.27} to obtain
\begin{align*}
\left\|\partial_t^mD_x^{\alpha}S_0P_{\varepsilon}(t,s,\cdot)\right\|_{L_p(\bR^d)}&\leq \sum_{j\leq 0}\left\|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,\cdot)\right\|_{L_p(\bR^d)}\leq N\sum_{j\leq 0}2^{j((m+\varepsilon)\gamma+|\alpha|+d/p')}.
\end{align*}
Note that the summation is finite whenever $\theta:=(m+\varepsilon)\gamma+|\alpha|+d/p'>0$, since in that case $\sum_{j\leq 0}2^{j\theta}=(1-2^{-\theta})^{-1}<\infty$.
The corollary is proved. \end{proof}
Now we prove Lemma \ref{lem:kernel esti}. \begin{proof}[Proof of Lemma \ref{lem:kernel esti}]
We apply some elementary properties of the Fourier inverse transform to obtain an upper bound of $|x|^n|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|$.
Indeed, recalling \eqref{def fund} and \eqref{def kernel}, we have
\begin{equation}\label{ineq:22 02 28 14 29}
\begin{aligned}
|x|^n|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|&\leq N(d,n)\sum_{i=1}^d|x^i|^n|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|\\
&\leq N(d,n)\sum_{|\beta|=n}\int_{\bR^d}\left|D^{\beta}_{\xi}\left(\xi^{\alpha}|\xi|^{\varepsilon\gamma}\cF[\Psi](2^{-j}\xi)\partial_t^m\exp\left(\int_s^t\psi(r,\xi)dr\right)\right)\right|d\xi.
\end{aligned}
\end{equation}
For the integrand, we make use of Leibniz's product rule. Then
\begin{equation}\label{ineq:22 02 28 14 30}
\begin{aligned}
&D_{\xi}^{\beta}\left(\xi^{\alpha}|\xi|^{\varepsilon\gamma}\cF[\Psi](2^{-j}\xi)\partial_t^m\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)\\
&=D_{\xi}^{\beta}\left(\psi(t,\xi)^m\xi^{\alpha}|\xi|^{\varepsilon\gamma}\cF[\Psi](2^{-j}\xi)\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)\\
&=\sum_{\beta=\beta_0+\beta_1}c_{\beta_0,\beta_1}D^{\beta_0}_{\xi}\left(\psi(t,\xi)^m\xi^{\alpha}|\xi|^{\varepsilon\gamma}\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)2^{-j|\beta_1|}D^{\beta_1}_{\xi}\cF[\Psi](2^{-j}\xi).
\end{aligned}
\end{equation}
To estimate $D^{\beta_0}_{\xi}\left(\psi(t,\xi)^m\xi^{\alpha}|\xi|^{\varepsilon\gamma}\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)$, we borrow the lemma in \cite{Choi_Kim2022} and introduce it below.
\begin{lem}[\cite{Choi_Kim2022} Lemma 4.1]\label{lem:multiplier esti}
Let $n\in\bN$ and assume that $\psi(t, \xi)$ satisfies an ellipticity condition with $(\gamma, \kappa)$ and has an $n$-times regular upper bound with $(\gamma, M)$.
Then there exists a positive constant $N = N(M,n)$ such that for all $t>s>0$ and $\xi\in\bR^d \setminus \{0\}$,
\begin{align*}
\bigg| D_\xi^n\biggl(\exp\bigg(\int_s^t \psi(r, \xi)dr\bigg)\biggr) \bigg|
\leq N |\xi|^{-n} \exp(-\kappa(t-s)|\xi|^\gamma) \sum_{k=1}^n |t-s|^k |\xi|^{k\gamma}.
\end{align*}
\end{lem} We now continue the proof of Lemma \ref{lem:kernel esti}. By Lemma \ref{lem:multiplier esti}, it follows that
\begin{equation}\label{ineq:22 02 28 15 13}
\begin{aligned}
&\bigg| D^{\beta_0}_{\xi}\left(\psi(t,\xi)^m\xi^{\alpha}|\xi|^{\varepsilon\gamma}\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)\bigg| \\
&\leq \sum_{\gamma_1 + \gamma_2 + \gamma_3 = \beta_0} c_{\gamma_1,\gamma_2,\gamma_3}
\big|D_\xi^{\gamma_1} \big(\psi(t,\xi)^m\big) \big|
\times \big|D_\xi^{\gamma_2} \big(\xi^\alpha |\xi|^{\varepsilon\gamma}\big) \big|
\times \bigg|D_\xi^{\gamma_3} \bigg(\exp\biggl(\int_s^t \psi(r,\xi)dr\biggr)\bigg) \bigg|\\
&\leq N(|\beta_0|,M) |\xi|^{m\gamma + \varepsilon\gamma +|\alpha| - |\beta_0|} \exp(-\kappa(t-s)|\xi|^{\gamma}) \sum_{k=1}^n(t-s)^k |\xi|^{k\gamma}.
\end{aligned}
\end{equation}
By \eqref{ineq:22 02 28 14 30} and \eqref{ineq:22 02 28 15 13},
\begin{equation}\label{21.08.31.13.49}
\begin{aligned}
&\left|D_{\xi}^{\beta}\left(\xi^{\alpha}|\xi|^{\varepsilon\gamma}\cF[\Psi](2^{-j}\xi)\partial_t^m\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)\right|\\
&\leq Ne^{-\kappa(t-s)|\xi|^{\gamma}}\left(\sum_{k=1}^{n}(t-s)^k|\xi|^{k\gamma}\right)\times\left(\sum_{\beta=\beta_0+\beta_1}|\xi|^{(m+\varepsilon)\gamma+|\alpha|-|\beta_0|}2^{-j|\beta_1|}|D^{\beta_1}_{\xi}\cF[\Psi](2^{-j}\xi)|\right)\\
&\leq N|\xi|^{(m+\varepsilon)\gamma+|\alpha|-|\beta|}e^{-\kappa(t-s)|\xi|^{\gamma}}\left(\sum_{k=1}^{n}(t-s)^k|\xi|^{k\gamma}\right)1_{2^{j-1}\leq|\xi|\leq2^{j+1}}\\
&\leq N|\xi|^{(m+\varepsilon)\gamma+|\alpha|-|\beta|}e^{-\kappa(1-\delta)(t-s)|\xi|^{\gamma}}1_{2^{j-1}\leq|\xi|\leq2^{j+1}}=N|\xi|^{(m+\varepsilon)\gamma+|\alpha|-n}e^{-\kappa(1-\delta)(t-s)|\xi|^{\gamma}}1_{2^{j-1}\leq|\xi|\leq2^{j+1}},
\end{aligned}
\end{equation}
where $N=N(\delta,M,n)$. Moreover, one can check that
\begin{align}
\notag
&\int_{2^{j-1}\leq |\xi|\leq 2^{j+1}}|\xi|^{(m+\varepsilon)\gamma+|\alpha|-n}e^{-\kappa(1-\delta)(t-s)|\xi|^{\gamma}}d\xi\\
\notag
&=N(d)\int_{2^{j-1}}^{2^{j+1}}l^{(m+\varepsilon)\gamma+|\alpha|-n+d-1}e^{-\kappa(1-\delta)(t-s)l^{\gamma}}dl\\
\notag
&\leq Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}\int_{2^{j-1}}^{2^{j+1}}l^{(m+\varepsilon)\gamma+|\alpha|-n+d-1}dl\\
\label{ineq:22 02 28 14 31}
&=N(|\alpha|,d,\varepsilon,m,n)e^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j((m+\varepsilon)\gamma+|\alpha|-n+d)}.
\end{align} All together with \eqref{ineq:22 02 28 14 29}, \eqref{21.08.31.13.49}, and \eqref{ineq:22 02 28 14 31}, we have
\begin{align*}
|x|^n|\partial_t^mD^{\alpha}_x\Delta_jP_{\varepsilon}(t,s,x)|
\leq
Ne^{-\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j((m+\varepsilon)\gamma+|\alpha|-n+d)}.
\end{align*}
Similarly, by \eqref{21.08.31.13.49} and Plancherel's theorem,
\begin{align*}
&\int_{\bR^d}|x|^{2n}|\partial_t^mD_x^{\alpha} \Delta_jP_{\varepsilon}(t,s,x)|^2dx\leq N\sum_{i=1}^d\int_{\bR^d}|x^i|^{2n}|\psi(t,-i\nabla)^mD_x^{\alpha}\Delta_j P_{\varepsilon}(t,s,x)|^2dx\\
&\leq N\sum_{|\beta|=n}\int_{\bR^d}\left|D^{\beta}_{\xi}\left(\psi(t,\xi)^m\xi^{\alpha}|\xi|^{\varepsilon\gamma}\cF[\Psi](2^{-j}\xi)\exp\left(\int_{s}^t\psi(r,\xi)dr\right)\right)\right|^2d\xi\\
&\leq N\int_{2^{j-1}\leq|\xi|\leq2^{j+1}}|\xi|^{2(m+\varepsilon)\gamma+2|\alpha|-2n}e^{-2\kappa(1-\delta)(t-s)|\xi|^{\gamma}}d\xi\\
&\leq Ne^{-2\kappa(t-s)2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j(2(m+\varepsilon)\gamma+2|\alpha|-2n+d)},
\end{align*}
where $N=N(|\alpha|,d,\delta,\varepsilon,m,n)$.
The lemma is proved. \end{proof}
\subsection{Proof of Proposition \ref{prop:maximal esti}}
Recall $$ \cT_{t,0}^{\varepsilon,j} f(x):=\int_{\bR^d}\Delta_jP_{\varepsilon}(t,0,x-y)f(y)dy,\quad \cT_{t,0}^{\varepsilon,\leq0}f(x):=\int_{\bR^d}S_0P_{\varepsilon}(t,0,x-y)f(y)dy, $$ and $$ \psi(t,-i \nabla)\cT_{t,0}^{j} f(x):=\int_{\bR^d}\Delta_j\psi(t,-i\nabla)p(t,0,x-y)f(y)dy,\quad \psi(t,-i \nabla)\cT_{t,0}^{\leq0}f(x):=\int_{\bR^d}S_0\psi(t,-i\nabla)p(t,0,x-y)f(y)dy. $$ In this subsection, we first estimate the mean oscillations of $\cT_{t,0}^{\varepsilon, j}f$ and $\cT_{t,0}^{\varepsilon,\leq0}f$. \begin{lem}\label{20.12.21.16.26}
Let $t>0$, $j\in\bZ$, $\delta \in (0,1)$, $p_0\in(1,2]$, $b>0$, and $f\in \cS(\bR^d)$. Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{p_0}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
\begin{enumerate}[(i)]
\item
Then for any $x\in B_{2^{-j}b}(0)$,
\begin{equation*}
\begin{gathered}
-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}|\cT_{t,0}^{\varepsilon,j}f(y_0)-\cT_{t,0}^{\varepsilon,j} f(y_1)|^{p_0}dy_0dy_1\leq N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}} \bM\left(|f|^{p_0}\right)(x),\\
-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}| \psi(t,-i\nabla)\cT_{t,0}^{j}f(y_0)-\psi(t,-i\nabla)\cT_{t,0}^{j} f(y_1)|^{p_0}dy_0dy_1
\leq N'2^{j\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}} \bM\left(|f|^{p_0}\right)(x),
\end{gathered}
\end{equation*}
where $N=N(d, \delta,\varepsilon,\gamma,\kappa,M,m,p_0)$ and $N'=N'(d,\delta, \gamma,\kappa,M,m,p_0)$.
\item
Then for any $x\in B_{b}(0)$,
\begin{equation*}
\begin{gathered}
-\hspace{-0.40cm}\int_{B_{b}(0)}-\hspace{-0.40cm}\int_{B_{b}(0)}|\cT_{t,0}^{\varepsilon,\leq0} f(y_0)-\cT_{t,0}^{\varepsilon,\leq0} f(y_1)|^{p_0}dy_0dy_1\leq N\bM\left(|f|^{p_0}\right)(x),\\
-\hspace{-0.40cm}\int_{B_{b}(0)}-\hspace{-0.40cm}\int_{B_{b}(0)}|\psi(t,-i\nabla)\cT_{t,0}^{\leq0} f(y_0)-\psi(t,-i\nabla)\cT_{t,0}^{\leq0} f(y_1)|^{p_0}dy_0dy_1\leq N'\bM\left(|f|^{p_0}\right)(x),
\end{gathered}
\end{equation*}
where $N=N(d,\varepsilon,\gamma,\kappa,M,m,p_0)$ and $N'=N'(d,\gamma,\kappa,M,m,p_0)$
\end{enumerate} \end{lem} Assuming Lemma \ref{20.12.21.16.26} for the moment, Proposition \ref{prop:maximal esti} follows. \begin{proof}[Proof of Proposition \ref{prop:maximal esti}]
Due to similarity, we only prove it for $\cT_{t,0}^{\varepsilon,j}$. Let $b>0$, $t>0$, and $x\in B_{2^{-j}b}(0)$. By Lemma \ref{20.12.21.16.26},
\begin{align*}
&-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}|\cT_{t,0}^{\varepsilon,j} f(y_0)-\cT_{t,0}^{\varepsilon,j} f(
y_1)|^{p_0}dy_0dy_1 \leq N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x).
\end{align*}
For $x_0\in\bR^{d}$, denote
$$
\tau_{x_0}f(t,x):=f(t,x_0+x).
$$
Since $\cT_{t,0}^{\varepsilon,j}$ and $\tau_{x_0}$ commute,
\begin{align*}
&-\hspace{-0.40cm}\int_{B_{2^{-j}b}(x_0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(x_0)}|\cT_{t,0}^{\varepsilon,j} f(y_0)-\cT_{t,0}^{\varepsilon,j} f(y_1)|^{p_0}dy_0dy_1\\
&=-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}|\cT_{t,0}^{\varepsilon,j} (\tau_{x_0}f)(t,y_0)-\cT_{t,0}^{\varepsilon,j} (\tau_{x_0}f)(t,y_1)|^{p_0}dy_0dy_1\\
&\leq N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|\tau_{x_0}f|^{p_0}\right)(x)=N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x_0+x).
\end{align*}
Therefore, by Jensen's inequality, for $x\in B_{2^{-j}b}(0)$ and $x_0\in\bR^{d}$
\begin{align*}
&\left(-\hspace{-0.40cm}\int_{B_{2^{-j}b}(x_0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(x_0)}|\cT_{t,0}^{\varepsilon,j}f(y_0)-\cT_{t,0}^{\varepsilon,j} f(y_1)|dy_0dy_1\right)^{p_0}\\
&\leq -\hspace{-0.40cm}\int_{B_{2^{-j}b}(x_0)}-\hspace{-0.40cm}\int_{B_{2^{-j}b}(x_0)}|\cT_{t,0}^{\varepsilon,j}f(y_0)-\cT_{t,0}^{\varepsilon,j} f(y_1)|^{p_0}dy_0dy_1\\
&\leq N2^{j\varepsilon\gamma
p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x_0+x).
\end{align*}
Taking the supremum on both sides over all balls $B_{2^{-j}b}$ containing $x_0+x$, we obtain the desired result. The proposition is proved. \end{proof} Therefore, it suffices to prove Lemma \ref{20.12.21.16.26} to complete the proof of Proposition \ref{prop:maximal esti}. In doing so, we begin with two lemmas which reduce our computational effort. For the reader's convenience, we also present the following scheme, which explains the relations among Lemmas \ref{20.12.17.20.21}, \ref{22.02.15.16.27}, \ref{22.01.28.16.57}, \ref{22.01.28.17.18}, \ref{20.12.21.16.26}, and Proposition \ref{prop:maximal esti}. \begin{equation*}
\begin{rcases}
\text{Lemma \ref{22.01.28.16.57}} \to &\text{Lemma \ref{22.01.28.17.18}} \\
&\text{Lemma \ref{20.12.17.20.21}} \\
&\text{Lemma \ref{22.02.15.16.27}}
\end{rcases}
\to \text{Lemma \ref{20.12.21.16.26}} \to \text{Proposition \ref{prop:maximal esti}}, \end{equation*} where $A\to B$ means that $A$ is used in the proof of $B$. Note that Lemmas \ref{20.12.17.20.21}, \ref{22.02.15.16.27}, and \ref{22.01.28.16.57} are simple consequences of Theorem \ref{22.02.15.11.27} and Corollary \ref{22.02.15.14.36}.
\begin{lem}\label{20.12.17.20.21}
Let $p_0\in(1,2]$ and $f\in \cS(\bR^d)$.
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{p_0}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
Then for all $t>s>0$, $\delta\in(0,1)$, and $(m,|\alpha|)\in\{0,1\}\times\{0,1,2\}$, there exists a positive constant $N=N(|\alpha|,d,\delta,\varepsilon,\gamma,\kappa,M,m,p_0)$ such that
for all $a_0,a_1\in(0,\infty)$,
\begin{equation*}
\begin{aligned}
&\int_{a_1}^{\infty}\left(\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dz\right)^{1/p_0}\left(\lambda^d \int_{\bS^{d-1}}|\partial_t^mD_x^{\alpha}\Delta_j P_{\varepsilon}(t,s,\lambda \omega)|^{p_0'}\sigma(d \omega) \right)^{1/p_0'}d\lambda\\
&\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-d(p_0)-1+d/p_0\right)}\left(\int_{a_1}^{\infty}\lambda^{-p_0d(p_0)-1}\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)^{1/p_0},
\end{aligned}
\end{equation*}
where $\bS^{d-1}$ denotes the $(d-1)$-dimensional unit sphere, $\sigma(d\omega)$ denotes the surface measure on $\bS^{d-1}$,
$$
d(p_0) :=\left\lfloor\frac{d}{p_0}\right\rfloor+1,
$$
and $p_0'$ is the H\"older conjugate of $p_0$, \textit{i.e.} $1/p_0+1/p_0'=1$. \end{lem} \begin{proof}
For notational convenience, we define
\begin{equation*}
\begin{gathered}
\mu:=d(p_0)+\frac{1}{p_0}>\frac{d+1}{p_0}
\end{gathered}
\end{equation*}
and
$$
K_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda):=\int_{\bS^{d-1}}|\partial_t^mD_x^{\alpha} \Delta_jP_{\varepsilon}(t,s,\lambda w)|^{p_0'}\sigma(dw).
$$
By H\"older's inequality and Theorem \ref{22.02.15.11.27},
\begin{align*}
&\int_{a_1}^{\infty}\left(\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dz\right)^{1/p_0}(\lambda^dK_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda))^{1/p_0'}d\lambda\\
&\leq\left(\int_{a_1}^{\infty}\lambda^{-p_0\mu}\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)^{1/p_0}\left(\int_{a_1}^{\infty}\lambda^{d+p_0'\mu}K_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda)d\lambda \right)^{1/p_0'}\\
&\leq\left(\int_{a_1}^{\infty}\lambda^{-p_0\mu}\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)^{1/p_0}\left(\int_{\bR^d}|z|^{1+p_0'\mu}|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,z)|^{p_0'} dz\right)^{1/p_0'}\\
&\leq Ne^{-\kappa|t-s|2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\left((m+\varepsilon)\gamma+|\alpha|-d(p_0)-1+d/p_0\right)}\left(\int_{a_1}^{\infty}\lambda^{-p_0d(p_0)-1}\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)^{1/p_0}.
\end{align*}
The lemma is proved. \end{proof}
\begin{lem}\label{22.02.15.16.27}
Let $p_0\in(1,2]$ and $f\in \cS(\bR^d)$.
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{p_0}\right\rfloor+2\right)$-times regular upper bound with $(\gamma,M)$.
Then for all $t>s>0$ and $(m,|\alpha|)\in\{0,1\}\times\{0,1,2\}$ satisfying
$$
(m+\varepsilon)\gamma+|\alpha|+d/p_0-\lfloor d/p_0\rfloor>0,
$$
there exists a positive constant $N=N(|\alpha|,d,\varepsilon,\gamma,k,\kappa,M,m,p_0)$ such that
for all $a_0,a_1\in(0,\infty)$,
\begin{equation*}
\begin{aligned}
&\int_{a_1}^{\infty}\left(\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dz\right)^{1/p_0}\left(\lambda^d \int_{\bS^{d-1}}|\partial_t^mD_x^{\alpha}S_0 P_{\varepsilon}(t,s,\lambda \omega)|^{p_0'}\sigma(d \omega) \right)^{1/p_0'}d\lambda\\
&\leq N\left(\int_{a_1}^{\infty}\lambda^{-p_0d(p_0)-1}\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)^{1/p_0},
\end{aligned}
\end{equation*}
where
$$
d(p_0) :=\left\lfloor\frac{d}{p_0}\right\rfloor+1
$$
and $p_0'$ is the H\"older conjugate of $p_0$, \textit{i.e.} $1/p_0+1/p_0'=1$. \end{lem} \begin{proof}
First, we choose a $c=c(|\alpha|,d,\varepsilon,\gamma,m,p_0)>1$ so that
$$
(m+\varepsilon)\gamma+|\alpha|+d/p_0-\lfloor d/p_0\rfloor>\log_2(c)>0.
$$
By H\"older's inequality,
\begin{equation}\label{22.02.15.16.06}
\begin{aligned}
&\int_{\bS^{d-1}}|\partial_t^mD_x^{\alpha} S_0P_{\varepsilon}(t,s,\lambda w)|^{p_0'}\sigma(dw)\\
&\leq \left(\sum_{j\leq 0}c^{j/(p_0-1)}\right)^{p_0-1}\int_{\bS^{d-1}}\sum_{j\leq 0}c^{-j}|\partial_t^mD_x^{\alpha} \Delta_jP_{\varepsilon}(t,s,\lambda w)|^{p_0'}\sigma(dw)\\
&\leq N(c,p_0)\sum_{j\leq0}c^{-j}K_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda).
\end{aligned}
\end{equation} Putting $$ \mu = d(p_0) +\frac{1}{p_0} $$ and using \eqref{22.02.15.16.06} and H\"older's inequality, we have
\begin{align*}
&\int_{a_1}^{\infty}\left(\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dz\right)^{1/p_0}\left(\lambda^d \int_{\bS^{d-1}}|\partial_t^mD_x^{\alpha}S_0 P_{\varepsilon}(t,s,\lambda \omega)|^{p_0'}\sigma(d \omega) \right)^{1/p_0'}d\lambda\\
\leq& \int_{a_1}^{\infty}\left(\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dz\right)^{1/p_0}\left(\lambda^d\sum_{j\leq0}c^{-j}K_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda) \right)^{1/p_0'}d\lambda\\
\leq &\left(\int_{a_1}^{\infty}\lambda^{-p_0\mu}\int_{B_{a_0+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)^{1/p_0}\left(\sum_{j\leq0}c^{-j}\int_{a_1}^{\infty}\lambda^{d+p_0'\mu}K_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda)d\lambda \right)^{1/p_0'}.
\end{align*}
By Theorem \ref{22.02.15.11.27},
\begin{align*}
\sum_{j\leq0}c^{-j}\int_{a_1}^{\infty}\lambda^{d+p_0'\mu}K_{\varepsilon,m,\alpha,j}^{p_0'}(t,s,\lambda)d\lambda&\leq N\sum_{j\leq0}c^{-j}\int_{\bR^d}|z|^{1+p_0'\mu}|\partial_t^mD_x^{\alpha}\Delta_jP_{\varepsilon}(t,s,z)|^{p_0'} dz\\
&\leq N \sum_{j\leq0}c^{-j}2^{j\left((m+\varepsilon)\gamma+|\alpha|-d(p_0)-1+d/p_0\right)}\\
&=N\sum_{j\leq0}2^{j\left((m+\varepsilon)\gamma+|\alpha|+d/p_0-\lfloor d/p_0\rfloor-\log_2(c)\right)}=N.
\end{align*}
The lemma is proved. \end{proof}
Making use of Lemmas \ref{20.12.17.20.21}, \ref{22.02.15.16.27}, we want to estimate mean oscillations of $\cT_{t,0}^{\varepsilon,j} f$ and $\cT_{t,0}^{\varepsilon,\leq0}f$. To do so, we first calculate $L_p$-norms of $\cT_{t,0}^{\varepsilon,j}f$ and $\cT_{t,0}^{\varepsilon,\leq0}f$ with respect to the space variable $x$.
\begin{lem}\label{22.01.28.16.57}
Suppose that $\psi$ is a symbol satisfying an ellipticity condition with $(\gamma,\kappa)$ and having a $\left( \left\lfloor \frac{d}{2}\right\rfloor+1\right)$-times regular upper bound with $(\gamma,M)$.
Then for all $t>0$, $p\in[1,\infty]$, and $\delta \in (0,1)$, there exist positive constants $N=N(d,\varepsilon,\gamma,\kappa,M)$, $N(\delta)=N(d,\delta, \varepsilon,\gamma,\kappa,M)$, $N'(\delta)=N'(d,\delta,\gamma,\kappa,M)$, and $N'=N'(d,\gamma,\kappa,M)$ such that for all $f \in \cS(\bR^{d})$ and $ t>0$,
\begin{equation*}
\begin{gathered}
\|\cT_{t,0}^{\varepsilon,j}f\|_{L_p(\bR^{d})}\leq N(\delta)e^{-\kappa t 2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\varepsilon\gamma}\|f\|_{L_p(\bR^{d})}, \quad \|\cT_{t,0}^{\varepsilon,\leq0}f\|_{L_p(\bR^{d})}\leq N\|f\|_{L_p(\bR^{d})},\\
\| \psi(t, -i\nabla)\cT_{t,0}^{j}f\|_{L_p(\bR^d)}\leq N'(\delta)e^{-\kappa t 2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\gamma}\|f\|_{L_p(\bR^{d})},\quad
\|\psi(t, -i\nabla) \cT_{t,0}^{\leq0}f\|_{L_p(\bR^d)}\leq N'\|f\|_{L_p(\bR^{d})}.
\end{gathered}
\end{equation*} \end{lem} \begin{proof} Let $ t > 0$. By Minkowski's inequality and Theorem \ref{22.02.15.11.27},
\begin{equation*}
\begin{gathered}
\|\cT_{t,0}^{\varepsilon,j}f\|_{L_p(\bR^{d})}\leq \|\Delta_{j}P_{\varepsilon}(t,0,\cdot)\|_{L_1(\bR^d)}\|f\|_{L_p(\bR^d)}\leq Ne^{-\kappa t2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\varepsilon\gamma}\|f\|_{L_p(\bR^d)},\\
\|\psi(t, -i\nabla)\cT_{t,0}^{j}f\|_{L_p(\bR^{d})}\leq \|\psi(t, -i\nabla) \Delta_{j} p(t,0,\cdot)\|_{L_1(\bR^d)}\|f\|_{L_p(\bR^d)}\leq Ne^{-\kappa t2^{j\gamma}\times\frac{(1-\delta)}{2^{\gamma}}}2^{j\gamma}\|f\|_{L_p(\bR^d)}.
\end{gathered}
\end{equation*}
Similarly, using Minkowski's inequality and Corollary \ref{22.02.15.14.36}, we also obtain the other estimates. The lemma is proved. \end{proof}
\begin{lem}\label{22.01.28.17.18}
Let $t>0$ and $b>0$.
\begin{enumerate}[(i)]
\item
Assume that $f\in \cS(\bR^{d})$ has a support in $B_{3\times2^{-j}b}(0)$.
Then for any $x\in B_{2^{-j}b}(0)$,
\begin{equation*}
\begin{gathered}
-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}|\cT_{t,0}^{\varepsilon,j} f(y)|^{p_0}dy\leq N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}} \bM\left(|f|^{p_0}\right)(x),\\
-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}|\psi(t,-i\nabla)\cT_{t,0}^{j} f(y)|^{p_0}dy\leq N'2^{j\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}} \bM\left(|f|^{p_0}\right)(x),
\end{gathered}
\end{equation*}
where $N=N(d,\delta,\varepsilon,\gamma,\kappa,M)$ and $N'=N'(d,\delta,\gamma,\kappa,M)$.
\item
Assume that $f\in \cS(\bR^{d})$ has a support in $B_{3b}(0)$.
Then for any $x\in B_{b}(0)$,
\begin{equation*}
\begin{gathered}
-\hspace{-0.40cm}\int_{B_{b}(0)}|\cT_{t,0}^{\varepsilon,\leq0} f(y)|^{p_0}dy\leq N\bM\left(|f|^{p_0}\right)(x),\\
-\hspace{-0.40cm}\int_{B_{b}(0)}|\psi(t,-i\nabla)\cT_{t,0}^{\leq 0} f(y)|^{p_0}dy\leq N\bM\left(|f|^{p_0}\right)(x),
\end{gathered}
\end{equation*}
where $N=N(d,\varepsilon,\gamma,\kappa,M)$.
\end{enumerate} \end{lem} \begin{proof}
By Lemma \ref{22.01.28.16.57},
\begin{align*}
-\hspace{-0.40cm}\int_{B_{2^{-j}b}(0)}|\cT_{t,0}^{\varepsilon,j} f(y)|^{p_0}dy&\leq N\frac{2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}}{|B_{2^{-j}b}(0)|}\int_{\bR^d}|f(y)|^{p_0}dy\\
&= N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\left(-\hspace{-0.40cm}\int_{B_{3\times 2^{-j}b}(0)}|f(y)|^{p_0}dy\right)\\
&\leq N2^{j\varepsilon \gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x).
\end{align*}
Similarly, using Lemma \ref{22.01.28.16.57}, we can easily obtain the other results. The lemma is proved. \end{proof}
With the help of Lemmas \ref{20.12.17.20.21}, \ref{22.02.15.16.27}, and \ref{22.01.28.17.18}, we can prove Lemma \ref{20.12.21.16.26}. \begin{proof}[Proof of Lemma \ref{20.12.21.16.26}]
First, we prove $(i)$. Due to similarity, we only prove it for $\cT_{t,0}^{\varepsilon,j}$. Choose $\zeta\in C^{\infty}(\bR^d)$ satisfying
\begin{itemize}
\item $\zeta(y)\in[0,1]$ for all $y\in\bR^d$
\item $\zeta(y)=1$ for all $y\in B_{2\times 2^{-j}b}(0)$
\item $\zeta(y)=0$ for all $y\in \bR^d\setminus B_{5\times2^{-j}b/2}(0)$.
\end{itemize}
Note that $\cT_{t,0}^{\varepsilon,j} f=\cT_{t,0}^{\varepsilon,j} (f\zeta)+\cT_{t,0}^{\varepsilon,j} (f(1-\zeta))$ and $\cT_{t,0}^{\varepsilon,j}(f\zeta)$ can be estimated by Lemma \ref{22.01.28.17.18}.
Thus it suffices to estimate $\cT_{t,0}^{\varepsilon,j} (f(1-\zeta))$ and we may assume that $f(y)=0$ if $|y|<2\times 2^{-j}b$. Hence if $y\in B_{2^{-j}b}(0)$ and $|z|<2^{-j}b$, then $|y-z|\leq 2\times 2^{-j}b$ and $f(y-z)=0$. By \cite[Lemma 6.6]{Choi_Kim2022} and H\"older's inequality,
\begin{align}
&\left|\int_{\bR^d}(\Delta_{j}P_{\varepsilon}(t,0,y_0-z)-\Delta_{j}P_{\varepsilon}(t,0,y_1-z))f(z)dz\right|\nonumber\\
&\leq N|y_0-y_1| \int_{2^{-j}b}^{\infty}(\lambda^dK_{\varepsilon,0,\alpha,j}^{p_0'}(t,0,\lambda))^{1/p_0'}\left(\int_{B_{2\times 2^{-j}b+\lambda}(x)}|f(z)|^{p_0}dz\right)^{1/p_0}d\lambda,\label{22.10.24.11.37}
\end{align}
where $N=N(d,p_0)$, $|\alpha|=2$, and
$$
K_{\varepsilon,0,\alpha,j}^{p_0'}(t,0,\lambda):=\int_{\bS^{d-1}}|D_x^{\alpha}\Delta_j P_{\varepsilon}(t,0,\lambda w)|^{p_0'}\sigma(dw).
$$
If $b\leq1$, then by Lemma \ref{20.12.17.20.21} and \eqref{22.10.24.11.37},
\begin{align*}
&|\cT_{t,0}^{\varepsilon,j}f(y_0)-\cT_{t,0}^{\varepsilon,j}f(y_1)|^{p_0}\\
&\leq N|y_0-y_1|^{p_0}\left(\int_{2^{-j}b}^{\infty}\lambda^{-p_0d(p_0)-1}\int_{B_{2\times 2^{-j}b+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}2^{jp_0\left(\varepsilon\gamma-d(p_0)+1+d/p_0\right)}\\
&\leq N2^{j(\varepsilon\gamma p_0-p_0d(p_0)+d)}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}b^{p_0}\int_{2^{-j}b}^{\infty}\lambda^{-p_0d(p_0)-1}\left(\int_{B_{2\times 2^{-j}b+\lambda}(x)}|f(z)|^{p_0}dz\right)d\lambda\\
&\leq N2^{j(\varepsilon\gamma p_0-p_0d(p_0)+d)}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x)b^{p_0}\int_{2^{-j}b}^{\infty}\lambda^{-p_0d(p_0)+d-1}d\lambda\\
&\leq N2^{j\varepsilon\gamma p_0}b^{-p_0d(p_0)+d+p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x)\\
&\leq N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x).
\end{align*}
If $b>1$ and $y\in B_{2^{-j}b}(0)$, then by \cite[Lemma 6.6]{Choi_Kim2022} with $|\alpha|=1$ and Lemma \ref{20.12.17.20.21},
\begin{align*}
&|\cT_{t,0}^{\varepsilon,j}f(y)|^{p_0}\\
&\leq N\left(\int_{2^{-j}b}^{\infty}\lambda^{-p_0d(p_0)-1}\int_{B_{2\times 2^{-j}b+\lambda}(x)}|f(z)|^{p_0}dzd\lambda \right)e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}2^{jp_0\left(\varepsilon\gamma-d(p_0)+d/p_0\right)}\\
&\leq N2^{j(\varepsilon\gamma p_0-p_0d(p_0)+d)}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\int_{2^{-j}b}^{\infty}\lambda^{-p_0d(p_0)-1}\left(\int_{B_{2\times 2^{-j}b+\lambda}(x)}|f(z)|^{p_0}dz\right)d\lambda\\
&\leq N2^{j(\varepsilon\gamma p_0-p_0d(p_0)+d)}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x)\int_{2^{-j}b}^{\infty}\lambda^{-p_0d(p_0)+d-1}d\lambda\\
&\leq N2^{j\varepsilon\gamma p_0}b^{-p_0d(p_0)+d}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x)\\
&\leq N2^{j\varepsilon\gamma p_0}e^{-\kappa t 2^{j\gamma}\times\frac{p_0(1-\delta)}{2^{\gamma}}}\bM\left(|f|^{p_0}\right)(x).
\end{align*}
For $(ii)$, note that if $|\alpha|\geq1$, then
$$
|\alpha|+d/p_0-\lfloor d/p_0\rfloor\geq |\alpha|>0.
$$
Thus applying the similar arguments in $(i)$ with Lemma \ref{22.02.15.16.27} instead of Lemma \ref{20.12.17.20.21}, we also have $(ii)$.
The lemma is proved. \end{proof}
\appendix
\mysection{Weighted multiplier and Littlewood-Paley theorem}
\begin{prop}[Weighted Mikhlin multiplier theorem]
\label{21.02.24.16.49} Let $p\in(1,\infty)$, $w\in A_p(\bR^{d})$, and $f \in \cS(\bR^{d})$. Suppose that \begin{equation}
\label{22.05.12.13.14}
\sup_{R>0}\left(R^{2|\alpha|-d}\int_{R<|\xi|<2R}|D^{\alpha}_{\xi}\pi(\xi)|^{2}d\xi\right)^{1/2}\leq N^*,\quad\forall |\alpha|\leq d. \end{equation} Then there exists a constant $N=N(d,p,K,N^*)$ such that $$
\|\bT_{\pi}f\|_{L_p(\bR^d,w\,dx)}\leq N\|f\|_{L_p(\bR^d,w\,dx)}, $$ where $[w]_{A_p(\bR^d)}\leq K$ and $$ \bT_{\pi}f(x):=\cF^{-1}[\pi\cF[f]](x). $$ \end{prop} \begin{proof} By \cite[Theorem 6.2.7]{grafakos2014classical}, the operator $\bT_{\pi}:L_q(\bR^d)\to L_{q}(\bR^d)$ is bounded for all $q\in(1,\infty)$. It is well-known that there exists $r>1$ such that $w\in A_{p/r}(\bR^d)$. Applying \cite[Corollaries 6.10, 6.11, and Remark 6.14]{fackler2020weighted}, we have \begin{align*}
\|\bT_{\pi} f\|_{L_p(\bR^d,w\,dx)}&\leq N\|f\|_{L_p(\bR^d,w\,dx)}, \end{align*} where $N=N(d,p,K,N^*)$. The proposition is proved. \end{proof}
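As a standard observation (recorded here only for convenience), the classical pointwise Mikhlin condition $|D^{\alpha}_{\xi}\pi(\xi)|\leq N_0|\xi|^{-|\alpha|}$ for all $|\alpha|\leq d$, where $N_0$ is an auxiliary constant introduced only for this remark, implies \eqref{22.05.12.13.14}: for every $R>0$,
\begin{align*}
R^{2|\alpha|-d}\int_{R<|\xi|<2R}|D^{\alpha}_{\xi}\pi(\xi)|^{2}d\xi
\leq N_0^{2}\,R^{2|\alpha|-d}\int_{R<|\xi|<2R}|\xi|^{-2|\alpha|}d\xi
\leq N(d)\,N_0^{2},
\end{align*}
so that one may take $N^{*}=\sqrt{N(d)}\,N_0$ in Proposition \ref{21.02.24.16.49}.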
\begin{prop}[Weighted Littlewood-Paley theorem] \label{prop:WLP}
Let $p\in (1, \infty)$ and $w\in A_p(\bR^d)$. Then we have
\begin{align}
\label{20230128 01}
\|Sf\|_{L_p(\bR^d,w\,dx)} &\leq C(d,p) [w]_{A_p(\bR^d)}^{\max(\frac{1}{2}, \frac{1}{p-1})} \|f\|_{L_p(\bR^d,w\,dx)} \end{align} and \begin{align}
\label{20230128 03}
\|f\|_{L_p(\bR^d,w\,dx)} &\leq C(d,p) [w]_{A_p(\bR^d)}^{\frac{\max(1/2, 1/(p'-1))}{p-1}} \|Sf \|_{L_p(\bR^d,w\,dx)},
\end{align}
where \begin{align}
\label{20230128 02}
Sf(x):=\left(\sum_{j\in\bZ}|\Delta_jf(x)|^2\right)^{1/2}. \end{align} \end{prop}
\begin{proof} This result has already been proved in the literature \cite{kurtz1980little,rychkov2001weights}. These works, however, do not reveal the growth of the implicit constants with respect to the $A_p(\bR^d)$-seminorm. The optimal implicit constants stated above can be obtained from recent general theories.
The first inequality is proved in \cite{Ler2011weighted,Wil2007square} for various types of Littlewood-Paley operators.
We show that $Sf$ is one of such Littlewood-Paley operators.
The second inequality follows from the first inequality and the duality of $L_p(\bR^d,w\,dx)$, which will be shown in the last part of the proof.
For $\alpha \in (0,1]$, let $\mathcal{C}_\alpha$ be a family of functions $\phi:\bR^d \to \bR$ supported in $\{x\in\bR^d : |x|\leq 1\}$ such that
\begin{align*}
\int_{\bR^d} \phi(x) dx=0,\quad |\phi(x) - \phi(x')| \leq |x-x'|^\alpha,\quad \forall x, x' \in \bR^d.
\end{align*}
Then we define a maximal operator over the family $\mathcal{C}_\alpha$ as
\begin{align}\label{ineq:22 05 06 17 32}
A_\alpha(f)(x,t) := \sup_{\phi \in \mathcal{C}_\alpha} |\phi_t \ast f(x)|,\quad \phi_t(y) = t^{-d}\phi(t^{-1}y).
\end{align}
Using \eqref{ineq:22 05 06 17 32}, we construct intrinsic square functions as follows:
\begin{align*}
G_\alpha(f)(x) &:= \biggl(\int_{\Gamma(x)}\big|A_\alpha(f)(y,t)\big|^2 \frac{dydt}{t^{d+1}}\biggr)^{1/2},\\
g_\alpha(f)(x) &:= \biggl(\int_0^\infty\big|A_\alpha(f)(x,t)\big|^2\frac{dt}{t}\biggr)^{1/2},\\
\sigma_\alpha(f)(x) &:= \biggl(\sum_{j\in\bZ} \big| A_\alpha(f)(x, 2^j)\big|^2\biggr)^{1/2},
\end{align*}
where $\Gamma(x)$ denotes the conic area $\{(y,t) \in \bR^d\times\bR_+ : |x-y|\leq t\}$.
In \cite{Wil2007square}, Wilson showed pointwise equivalences among $G_\alpha$, $g_\alpha$ and $\sigma_\alpha$, \textit{i.e.}
\begin{align}\label{ineq:22 05 06 18 12}
G_\alpha(f)(x) \approx g_\alpha(f)(x) \approx \sigma_\alpha(f)(x),
\end{align} where the implicit constants depend only on $\alpha$ and $d$.
Moreover, Lerner \cite[Theorem 1.1]{Ler2011weighted} proved
\begin{align}\label{ineq:WLP}
\|G_\alpha\|_{L_p(\bR^d,w\,dx)\to L_p(\bR^d,w\,dx)} \leq C(\alpha, d, p) [w]_{A_p(\bR^d)}^{\max(\frac{1}{2}, \frac{1}{p-1})}.
\end{align}
It should be remarked that \cite[Theorem 1.1]{Ler2011weighted} covers a broad class of operators of Littlewood-Paley type. However, this result does not give \eqref{20230128 01} directly since the Littlewood-Paley operator considered in this paper is not of integral form (see \eqref{20230128 02}). Our Littlewood-Paley operator is given as the summation of $\Delta_j f = \Psi_j \ast f$ over $\bZ$, and the symbol of $\Delta_j$ is supported in $\{\xi\in\bR^d:2^{j-1}\leq|\xi| \leq2^{j+1} \}$.
Note that $\Psi_j$ cannot be compactly supported, by the uncertainty principle, while the elements of $\mathcal{C}_\alpha$ are supported in $\{x\in\bR^d:|x|\leq 1\}$.
To fill this gap, we need a new family $\mathcal{C}_{\alpha,\varepsilon}$ introduced in \cite{Wil2007square}.
For $\alpha \in (0,1]$ and $\varepsilon>0$, let $\mathcal{C}_{\alpha, \varepsilon}$ be a family of functions $\phi:\bR^d \to \bR$ satisfying
\begin{equation*}
\begin{gathered}
\int_{\bR^d} \phi(x) dx=0,\quad |\phi(x)| \leq (1+|x|)^{-d-\varepsilon},\\
|\phi(x) - \phi(x')| \leq |x-x'|^\alpha \bigl((1+|x|)^{-d-\varepsilon} + (1+|x'|)^{-d-\varepsilon}\bigr),\quad \forall x, x' \in \bR^d.
\end{gathered}
\end{equation*}
Then we can define $\widetilde{A}_{\alpha, \varepsilon}$ on the basis of all functions in the above class as in \eqref{ineq:22 05 06 17 32}, \textit{i.e.}
\begin{align}
\widetilde{A}_{\alpha,\varepsilon}(f)(x,t) := \sup_{\phi \in \mathcal{C}_{\alpha,\varepsilon}} |\phi_t \ast f(x)|,\quad \phi_t(y) = t^{-d}\phi(t^{-1}y).
\end{align}
We can also define $\widetilde{G}_{\alpha, \varepsilon}, \widetilde{g}_{\alpha, \varepsilon}$ and $\widetilde{\sigma}_{\alpha, \varepsilon}$ similarly to $G_\alpha$, $g_\alpha$ and $\sigma_\alpha$. Likewise, they become equivalent, \textit{i.e.}
\begin{align}\label{ineq:22 05 06 18 13}
\widetilde{G}_{\alpha,\varepsilon}(f)(x) \approx \widetilde{g}_{\alpha,\varepsilon}(f)(x) \approx \widetilde{\sigma}_{\alpha,\varepsilon}(f)(x).
\end{align}
Here, the implicit constants depend only on $\alpha$, $\varepsilon$, and $d$. Since every mean-zero Schwartz function belongs to $\mathcal{C}_{1,1}$ up to a constant multiple, so does $\Psi$.
Thus we have
\begin{align*}
|\Delta_jf(x)| \leq \widetilde{A}_{1,1}(f)(x, 2^j),
\end{align*}
which yields
\begin{align}\label{ineq:22 05 06 18 27}
Sf(x) \leq C(d)\widetilde{\sigma}_{1,1}(f)(x)\approx C(d) \widetilde{G}_{1,1}(f)(x).
\end{align} It is also known in \cite[Theorem 2]{Wil2007square}, for all $\alpha \in (0,1]$, $\varepsilon>0$, and $0<\alpha' \leq \alpha$, there is a constant $C = C(\alpha, \alpha', \varepsilon, d)$ such that
\begin{align}\label{ineq:22 05 06 18 28}
\widetilde{G}_{\alpha, \varepsilon}(f)(x) \leq C G_{\alpha'}(f)(x).
\end{align} Thus finally by \eqref{ineq:WLP}, \eqref{ineq:22 05 06 18 13}, \eqref{ineq:22 05 06 18 27} and \eqref{ineq:22 05 06 18 28}, we have
\begin{equation}
\label{22.05.12.16.08}
\begin{aligned}
\big\|Sf \big\|_{L_p(\bR^d,w\,dx)}
&\leq C(d) \big\|\widetilde{G}_{1,1}(f) \big\|_{L_p(\bR^d,w\,dx)}\\
&\leq C(d) \big\|G_{1}(f) \big\|_{L_p(\bR^d,w\,dx)} \leq C(d, p) [w]_{A_p(\bR^d)}^{\max(\frac{1}{2}, \frac{1}{p-1})} \| f\|_{L_p(\bR^d,w\,dx)}.
\end{aligned}
\end{equation}
For the converse inequality \eqref{20230128 03}, we recall that the topological dual space of $L_p(\bR^d,w\,dx)$ is $L_{p'}(\bR^d,\bar{w}\,dx)$ where
$$
\frac{1}{p}+\frac{1}{p'}=1,\quad \bar{w}(x):=(w(x))^{-\frac{1}{p-1}}.
$$
For more details, see \cite[Theorem A.1]{Choi_Kim2022}. Then by the almost orthogonality of $\Delta_j$, the Cauchy--Schwarz inequality, H\"older's inequality, and \eqref{22.05.12.16.08}, for all $f\in L_p(\bR^d,w\,dx)$ and $g\in C_c^{\infty}(\bR^d)$,
\begin{equation}
\label{22.05.12.16.15}
\begin{aligned}
\int_{\bR^d}f(x)g(x)dx&=\int_{\bR^d}\sum_{j\in\bZ}\Delta_jf(x)(\Delta_{j-1}+\Delta_j+\Delta_{j+1})g(x)dx\\
&\leq 3\int_{\bR^d}Sf(x)Sg(x)dx\leq 3\|Sf\|_{L_p(\bR^d,w\,dx)}\|Sg\|_{L_{p'}(\bR^d,\bar{w}\,dx)}\\
&\leq N(d,p)[\bar{w}]_{A_{p'}(\bR^d)}^{\max(\frac{1}{2},\frac{1}{p'-1})}\|Sf\|_{L_p(\bR^d,w\,dx)}\|g\|_{L_{p'}(\bR^d,\bar{w}\,dx)}.
\end{aligned}
\end{equation} Combining \eqref{22.05.12.16.08} and \eqref{22.05.12.16.15}, we have $$
\|f\|_{L_p(\bR^d,w\,dx)}=\sup_{\|g\|_{L_{p'}(\bR^d,\bar{w}\,dx)}\leq 1}\left|\int_{\bR^d}f(x)g(x)dx\right|\leq N(d,p)[w]_{A_{p}(\bR^d)}^{\frac{\max(1/2,1/(p'-1)
)}{p-1}}\|Sf\|_{L_p(\bR^d,w\,dx)}. $$ The proposition is proved. \end{proof}
\mysection{Properties of function spaces} \begin{lem} \label{wbound} Let $p\in(1,\infty)$ and $w\in A_p(\bR^d)$. Suppose that $$ [w]_{A_p(\bR^d)}\leq K. $$ Then there exists a positive constant $N=N(d,p,K)$ such that $$
\|S_0f\|_{L_p(\bR^d,w\,dx)}+\sup_{j\in\bZ}\|\Delta_jf\|_{L_p(\bR^d,w\,dx)}\leq N\|f\|_{L_p(\bR^d,w\,dx)},\quad \forall f\in L_p(\bR^d,w\,dx). $$ \end{lem} \begin{proof} By the definition of $S_0$, there exists a $\Phi\in\cS(\bR^d)$ such that $$ S_0f(x)=\Phi\ast f(x)=\int_{B_1(0)}f(x-y)\Phi(y)dy+\sum_{k=1}^{\infty}\int_{B_{2^k}(0)\setminus\overline{B_{2^{k-1}}(0)}}f(x-y)\Phi(y)dy. $$ Since $\Phi\in\cS(\bR^d)$, there exists $N=N(d,\Phi)$ such that $$
|\Phi(y)|+|y|^{d+1}|\Phi(y)|\leq N,\quad \forall y\in\bR^d. $$
Indeed, on $B_{2^k}(0)\setminus\overline{B_{2^{k-1}}(0)}$ we have $|\Phi(y)|\leq N2^{-(k-1)(d+1)}$, so the $k$-th term above is bounded by $N(d,\Phi)2^{-k}\bM f(x)$; summing over $k\in\bN$ and adding the term over $B_1(0)$ gives $| S_0 f (x) | \leq N(d,\Phi)\bM f(x)$. This leads to the first part of the inequality, \textit{i.e.} $$
\|S_0f\|_{L_p(\bR^d,w\,dx)}\leq N\|f\|_{L_p(\bR^d,w\,dx)},\quad \forall f\in L_p(\bR^d,w\,dx) $$ due to the weighted Hardy--Littlewood maximal function theorem. Next recall $$
\Delta_jf=f\ast\Psi_j=f \ast 2^{jd}\Psi(2^{j}\cdot),
$$ and we show that $\Delta_j$ is a Calder\'on-Zygmund operator to obtain
\begin{equation}
\label{22.04.11.14.12}
\sup_{j\in\bZ}\|\Delta_jf\|_{L_p(\bR^d,w\,dx)}\leq N\|f\|_{L_p(\bR^d,w\,dx)}.
\end{equation} In other words, we have to show that $f \to \Delta_j f$ is bounded on $L_2(\bR^d)$ and $\Psi_j(x-y)$ is a standard kernel. It is obvious that $$
\sup_{\xi \in \bR^d} |\cF[\Psi_j](\xi)|=\sup_{\xi \in \bR^d}|\cF[\Psi](2^{-j} \xi)| = \sup_{\xi \in \bR^d}|\cF[\Psi](\xi)|, $$ which implies that $f \to \Delta_jf$ becomes a bounded operator on $L_2(\bR^d)$ due to the Plancherel theorem. More precisely, we have \begin{align}
\label{20230128 30}
\|\Delta_j f\|_{L_2(\bR^d)} \leq \frac{1}{ (2\pi)^{d/2} } \sup_{\xi \in \bR^d}|\cF[\Psi](\xi)| \|f\|_{L_2(\bR^d)} \end{align}
where $\Psi_j=2^{jd}\Psi(2^{j}\cdot)$ and $\Psi\in\cS(\bR^d)$. Next we show that $\Psi_j(x-y)$ is a standard kernel.
Observe that there exists a $N=N(d,\Psi)$ such that
\begin{align}
\label{20230128 10}
|x|^d|\Psi_j(x)|+|x|^{d+1}|\nabla\Psi_j(x)|=|2^jx|^d|\Psi(2^{j}x)|+|2^jx|^{d+1}|\nabla\Psi(2^jx)|\leq N,\quad \forall x\in\bR^d.
\end{align}
Moreover, by the fundamental theorem of calculus,
$$
|\Psi_j(x)-\Psi_j(y)|\leq |x-y|\int_0^1|\nabla \Psi_j(\theta x+(1-\theta)y)|d\theta.
$$
If $2|x-y|\leq\max(|x|,|y|)$, then for $\theta\in[0,1]$,
$$
|\theta x+(1-\theta)y|\geq \max(|x|,|y|)-|x-y|\geq \frac{\max(|x|,|y|)}{2}\geq \frac{|x|+|y|}{4}.
$$
Hence,
\begin{align}
\label{20230128 11}
|\Psi_j(x)-\Psi_j(y)| \leq |x-y|\int_0^1|\nabla \Psi_j(\theta x+(1-\theta)y)|d\theta\leq \frac{N4^{d+1}|x-y|}{(|x|+|y|)^{d+1}}.
\end{align} Due to \eqref{20230128 10} and \eqref{20230128 11}, $\Psi_j(x-y)$ is a standard kernel with constants $1$ and $N$. In particular, $N$ can be chosen uniformly in $j \in \bZ$. In total, the $L_2$-bounds in \eqref{20230128 30} hold uniformly in $j \in \bZ$, and $\Psi_j(x-y)$ is a standard kernel with the same parameters for all $j \in \bZ$. Therefore, applying \cite[Theorem 7.4.6]{grafakos2014classical}, we have \eqref{22.04.11.14.12}. The lemma is proved. \end{proof} \begin{rem}\label{rem RdF vv}
An operator $T:L_p(\bR^d)\to L_p(\bR^d)$ is linearizable if there exist a Banach space $B$ and a $B$-valued linear operator $U:L_p(\bR^d)\to L_p(\bR^d;B)$ such that
$$
|Tf(x)| = \| Uf(x)\|_B,\quad f\in L_p(\bR^d).
$$
Hence any linear operator is linearizable.
Let $\{T_j\}_{j\in\bZ}$ be a sequence of linearizable operators and $K \in (0,\infty)$.
Assume that there exist a $r \in (1,\infty)$ and $C(K) \in (0,\infty)$ such that
\begin{equation}
\label{22.09.13.16.42}
\sup_{j\in\bZ}\int_{\bR^d} | T_j f(x)|^r w(x) dx \leq C(K) \int_{\bR^d} |f(x)|^r w(x)dx
\end{equation} for all $[w]_{A_r} \leq K$ and $f\in L_r(\bR^d,w\,dx)$.
Then for all $1<p,q<\infty$, there exist $K' \in (0,\infty)$ and $C$ such that
\begin{align}
\label{20230128 40}
\Big\| \Big( \sum_{j\in\bZ} |T_j f_j|^p\Big)^{1/p} \Big\|_{L_q(\bR^d,w'\,dx)} \leq C\Big\| \Big( \sum_{j\in\bZ} |f_j|^p\Big)^{1/p} \Big\|_{L_q(\bR^d,w'\,dx)}
\end{align} for all $[w']_{A_q(\bR^d)}\leq K'$ and $f_j \in L_q(\bR^d,w'\,dx)$. The above statement can be found, for instance, in \cite[p. 521, Remarks 6.5]{RdF1985weighted} with a modification to dependence of $A_p$-norms. In particular, since $\Delta_j$ is linear and satisfies the weighted inequality for any $r>1$ due to Lemma \ref{wbound}, we have \eqref{20230128 40} with $T_j=\Delta_j$ and it will be used in the proof of the next proposition. \end{rem}
For $\boldsymbol{r}:\bZ\to(0,\infty)$, we denote $$ \pi_{\boldsymbol{r}}:=\sum_{j\in\bZ}2^{\boldsymbol{r}(j)}\Delta_j~\text{and}~ \pi^{\boldsymbol{r}}:=S_0+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j. $$
\begin{prop}
\label{22.05.03.11.34} Let $p\in(1,\infty)$, $q\in(0,\infty)$, and $w\in A_p(\bR^d)$. Suppose that two sequences $\boldsymbol{r},\boldsymbol{r}':\bZ\to(-\infty,\infty)$ satisfy \begin{align}
\label{uniform r assumption}
\sup_{j\in\bZ}\left(|\boldsymbol{r}'(j+1)-\boldsymbol{r}'(j)|+|\boldsymbol{r}(j+1)-\boldsymbol{r}(j)|\right)=:C_0<\infty. \end{align} Then for any $f\in \cS(\bR^d)$,
\begin{equation*}
\begin{gathered}
\|\pi^{\boldsymbol{r}}f\|_{H_p^{\boldsymbol{r}'}(\bR^d,w\,dx)}\approx \|f\|_{H_p^{\boldsymbol{r}+\boldsymbol{r}'}(\bR^d,w\,dx)},\quad \|\pi^{\boldsymbol{r}}f\|_{B_{p,q}^{\boldsymbol{r}'}(\bR^d,w\,dx)}\approx\|f\|_{B_{p,q}^{\boldsymbol{r}+\boldsymbol{r}'}(\bR^d,w\,dx)},\\
\|\pi_{\boldsymbol{r}}f\|_{\dot{H}_p^{\boldsymbol{r}'}(\bR^d,w\,dx)}\approx \|f\|_{\dot{H}_p^{\boldsymbol{r}+\boldsymbol{r}'}(\bR^d,w\,dx)},\quad
\|\pi_{\boldsymbol{r}}f\|_{\dot{B}_{p,q}^{\boldsymbol{r}'}(\bR^d,w\,dx)}\approx\|f\|_{\dot{B}_{p,q}^{\boldsymbol{r}+\boldsymbol{r}'}(\bR^d,w\,dx)},
\end{gathered}
\end{equation*} where the equivalences depend only on $p$, $q$, $d$, $C_0$ and $[w]_{A_p}(\bR^d)$. In particular, we have
$$
\|\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)}\approx \|f\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}.
$$ \end{prop} \begin{proof} Due to the existence of $S_0$, the proof of the inhomogeneous case become more difficult. Even the case $\boldsymbol{r}'\neq\boldsymbol{0}$ is quite similar to the case $\boldsymbol{r}'=\boldsymbol{0}$, where $\boldsymbol{0}(j)=0$ for all $j\in\bZ$. Thus we only prove the inhomogeneous case with the assumption $\boldsymbol{r}'=\boldsymbol{0}$.
First, by the definition of $\pi^{\boldsymbol{r}}$ and the triangle inequality, it is obvious that
\begin{align}
\label{20230128 50}
\|\pi^{\boldsymbol{r}} f\|_{L_p(\bR^d,w\,dx)}
\leq \| S_0 f\|_{L_p(\bR^d,w\,dx)} + \Big\| \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big\|_{L_p(\bR^d,w\,dx)}.
\end{align}
For the converse inequality of \eqref{20230128 50}, we use Proposition \ref{21.02.24.16.49}. One can easily check that
$$
\Pi_0(\xi):=\frac{\cF[\Phi](\xi)}{\cF[\Phi](\xi)+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\cF[\Psi](2^{-j}\xi)}
$$
is infinitely differentiable, $\Pi_0(\xi)=1$ if $|\xi|\leq 1$ and $\Pi_0(\xi)=0$ if $|\xi|\geq2$, where
$$
\cF[\Phi](\xi) = \sum_{j=-\infty}^{0}\cF[\Psi](2^{-j}\xi).
$$
This certainly implies that $\Pi_0$ satisfies \eqref{22.05.12.13.14}. Hence by Proposition \ref{21.02.24.16.49},
$$
\|S_0f\|_{L_p(\bR^d,w\,dx)}=\|\bT_{\Pi_0}\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)}\leq N\|\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)},
$$ where $$ \bT_{\Pi_0}f(x):=\cF^{-1}[\Pi_0\cF[f]](x). $$
This also yields
$$
\Big\| \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big\|_{L_p(\bR^d,w\,dx)}=\|\pi^{\boldsymbol{r}}f-S_0f\|_{L_p(\bR^d,w\,dx)}\leq N\|\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)}
$$
and thus \begin{align}
\label{20230128 51}
\|\pi^{\boldsymbol{r}} f\|_{L_p(\bR^d,w\,dx)}
\approx \| S_0 f\|_{L_p(\bR^d,w\,dx)} + \Big\| \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big\|_{L_p(\bR^d,w\,dx)}. \end{align} Moreover, Proposition \ref{prop:WLP} implies
\begin{align}
\label{20230128 52}
\Big\| S\Big( \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big) \Big\|_{L_p(\bR^d,w\,dx)}
\approx
\Big\| \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big\|_{L_p(\bR^d,w\,dx)},
\end{align}
where the implicit constant depends only on $p$, $d$ and $[w]_{A_p(\bR^d)}$.
Therefore by \eqref{20230128 51} and \eqref{20230128 52},
\begin{align*}
\|\pi^{\boldsymbol{r}} f\|_{L_p(\bR^d,w\,dx)} \approx \| S_0 f\|_{L_p(\bR^d,w\,dx)} + \Big\| S\Big( \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big) \Big\|_{L_p(\bR^d,w\,dx)}.
\end{align*} Next we claim
\begin{equation}\label{ineq 220718 1747}
\begin{aligned}
\Big\| S\Big( \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big) \Big\|_{L_p(\bR^d,w\,dx)}
\leq N\Big\| \Big( \sum_{j=1}^{\infty} | 2^{\boldsymbol{r}(j)}\Delta_j f |^2 \Big)^{1/2} \Big\|_{L_p(\bR^d,w\,dx)}.
\end{aligned}
\end{equation} Put $T_j:=\Delta_j$ for all $j\in\bZ$ and
\begin{equation*}
\begin{gathered} f_0:=2^{\boldsymbol{r}(1)}\Delta_1f,\quad f_1:=2^{\boldsymbol{r}(1)}\Delta_1f+2^{\boldsymbol{r}(2)}\Delta_2f\\ f_j:=(2^{\boldsymbol{r}(j-1)}\Delta_{j-1}+2^{\boldsymbol{r}(j)}\Delta_{j}+2^{\boldsymbol{r}(j+1)}\Delta_{j+1})f\quad \text{if } j\geq2,\quad f_j:=0 \quad \text{if } j<0.
\end{gathered}
\end{equation*} Considering the almost orthogonal property of $\Delta_j$ and Remark \ref{rem RdF vv}, we have
$$
\Big\| S\Big( \sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j f \Big) \Big\|_{L_p(\bR^d,w\,dx)}
=\Big\| \Big( \sum_{j\in\bZ}|T_jf_j|^2 \Big)^{1/2} \Big\|_{L_p(\bR^d,w\,dx)}\leq N\Big\| \Big( \sum_{j=1}^{\infty} | 2^{\boldsymbol{r}(j)}\Delta_j f |^2 \Big)^{1/2} \Big\|_{L_p(\bR^d,w\,dx)}
$$
For the converse of \eqref{ineq 220718 1747}, we make use of the Khintchine inequality (\textit{e.g.} see \cite{Haagerup1981}),
\begin{align}\label{ineq 220718 1734}
\Big( \sum_{j=1}^{\infty} \big| 2^{\boldsymbol{r}(j)}\Delta_j f \big|^2 \Big)^{1/2}
\approx \Big( \mathbb{E} \Big|\sum_{j=1}^{\infty} X_j 2^{\boldsymbol{r}(j)}\Delta_j f \Big|^p \Big)^{1/p},\quad p \in (0, \infty)
\end{align}
where the implicit constants depend only on $p$ and $\{X_j\}_{j\in\bN}$ is a sequence of independent and identically distributed random variables with the Rademacher distribution. Let $\Pi_1$ be a Fourier multiplier defined by
$$
\Pi_1(\xi) :=
\frac{ \sum_{j=1}^\infty X_j 2^{\boldsymbol{r}(j)}\cF[\Psi](2^{-j}\xi)}{\cF[\Phi](\xi)+\sum_{j=1}^{\infty} 2^{\boldsymbol{r}(j)}\cF[\Psi](2^{-j}\xi)}.
$$ Recall the notation $$ \bT_{\Pi_1}f(x):=\cF^{-1}[\Pi_1\cF[f]](x) $$ and \begin{align*} \pi^{\boldsymbol{r}}:=S_0+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j. \end{align*} Then the right-hand side of \eqref{ineq 220718 1734} equals to
\begin{align*}
\left( \bE \left| \bT_{\Pi_1}(\pi^{\boldsymbol{r}}f) \right|^p \right)^{1/p}.
\end{align*} It is easy to check that $\Pi_1$ satisfies \eqref{22.05.12.13.14}, that is, $\Pi_1$ is a weighted Mikhlin multiplier.
Thus it follows that
\begin{equation}\label{ineq 220718 1744}
\begin{aligned}
\Big\| \Big( \mathbb{E} \Big|\sum_{j=1}^{\infty} X_j 2^{\boldsymbol{r}(j)}\Delta_j f \Big|^p \Big)^{1/p} \Big\|_{L_p(\bR^d,w\,dx)}
&\leq N\Big( \mathbb{E} \Big\|\sum_{j=1}^{\infty} X_j 2^{\boldsymbol{r}(j)}\Delta_j f \Big\|_{L_p(\bR^d,w\,dx)}^p \Big)^{1/p}\\
&= N\Big( \mathbb{E} \Big\| \bT_{\Pi_1}(\pi^{\boldsymbol{r}}f) \Big\|_{L_p(\bR^d,w\,dx)}^p \Big)^{1/p}
\\
&\leq N [w]_{A_p(\bR^d)}^{\max(1,(p-1)^{-1})}\|\pi^{\boldsymbol{r}} f\|_{L_p(\bR^d,w\,dx)}.
\end{aligned}
\end{equation}
By \eqref{ineq 220718 1747}, \eqref{ineq 220718 1734} and \eqref{ineq 220718 1744}, we conclude that
$$
\|\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)}\approx \|f\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)}.
$$
Similarly, we compute $\|\pi^{\boldsymbol{r}}f\|_{B_{p,q}^{\boldsymbol{0}}(\bR^d,w\,dx)}$. Recall the definition first:
\begin{align*}
\|\pi^{\boldsymbol{r}}f\|_{B_{p,q}^{\boldsymbol{0}}(\bR^d,w\,dx)}
&= \| S_0 \pi^{\boldsymbol{r}}f \|_{L_p(\bR^d,w\,dx)} + \Big( \sum_{j=1}^{\infty} \Big\| \Delta_j \pi^{\boldsymbol{r}}f \Big\|_{L_p(\bR^d,w\,dx)}^q \Big)^{1/q}
\end{align*} and
\begin{align*}
\|f\|_{B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)}
&= \| S_0 f \|_{L_p(\bR^d,w\,dx)} + \Big( \sum_{j=1}^{\infty} \Big\| 2^{\boldsymbol{r}(j)}\Delta_j f \Big\|_{L_p(\bR^d,w\,dx)}^q \Big)^{1/q}.
\end{align*} Due to the almost orthogonal property again, it is obvious that
\begin{equation*}
\begin{gathered}
S_0\pi^{\boldsymbol{r}}f=S_0(S_0+2^{\boldsymbol{r}(1)}\Delta_1)f, \\
\Delta_1\pi^{\boldsymbol{r}}f=\Delta_1(S_0+2^{\boldsymbol{r}(1)}\Delta_1+2^{\boldsymbol{r}(2)}\Delta_2)f, \\
\Delta_j \pi^{\boldsymbol{r}}f = \Delta_{j}(2^{\boldsymbol{r}(j-1)} \Delta_{j-1} +2^{\boldsymbol{r}(j)} \Delta_{j} +2^{\boldsymbol{r}(j+1)} \Delta_{j+1}) f,\quad j\geq 2.
\end{gathered}
\end{equation*} Moreover, setting
\begin{equation*}
\begin{gathered} \Pi_2:=\frac{\cF[\Phi]+\cF[\Psi](2^{-1}\cdot)}{\cF[\Phi]+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\cF[\Psi](2^{-j}\cdot)},\\
\Pi_3:=\frac{2^{\boldsymbol{r}(1)}(\cF[\Phi]+\cF[\Psi](2^{-1}\cdot)+\cF[\Psi](2^{-2}\cdot))}{\cF[\Phi]+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\cF[\Psi](2^{-j}\cdot)},\\
\Pi_{j+2}:=\frac{2^{\boldsymbol{r}(j)}(\cF[\Psi](2^{-j+1}\cdot)+\cF[\Psi](2^{-j}\cdot)+\cF[\Psi](2^{-j-1}\cdot))}{\cF[\Phi]+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\cF[\Psi](2^{-j}\cdot)},\quad j\geq 2,
\end{gathered}
\end{equation*} we have
\begin{equation*}
\begin{gathered}
S_0f=\bT_{\Pi_2}S_0\pi^{\boldsymbol{r}}f,
\quad 2^{\boldsymbol{r}(1)}\Delta_1f=\bT_{\Pi_3}\Delta_1\pi^{\boldsymbol{r}}f,\\
2^{\boldsymbol{r}(j)}\Delta_jf=\bT_{\Pi_{j+2}}\Delta_j \pi^{\boldsymbol{r}}f, \quad j\geq 2.
\end{gathered}
\end{equation*} Therefore, by Lemma \ref{wbound}, and Proposition \ref{21.02.24.16.49}, we obtain
$$
\|\pi^{\boldsymbol{r}}f\|_{B_{p,q}^{\boldsymbol{0}}(\bR^d,w\,dx)}\approx\|f\|_{B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)}
$$ since all $L_p$-bounds can be chosen uniformly for all $j$ due to the assumption on $\boldsymbol{r}$ \eqref{uniform r assumption}.
The proposition is proved. \end{proof}
\begin{corollary}
\label{classical sobo}
For $s\in\bR$, put $\boldsymbol{r}(j) = sj$. Then the space $H_p^{\boldsymbol{r}}(\bR^d, w\,dx)$ is equivalent to $H_p^s(\bR^d, w\,dx)$ whose norm is given by $$
\| f \|_{H_p^s(\bR^d,w\,dx)} := \| (1 - \Delta)^{s/2} f \|_{L_p(\bR^d,w\,dx)}. $$ \end{corollary} \begin{proof} Recall the notation $$ \pi^{\boldsymbol{r}}:=S_0+\sum_{j=1}^{\infty}2^{\boldsymbol{r}(j)}\Delta_j. $$
By Proposition~\ref{22.05.03.11.34}, it follows that $\|f\|_{H_p^{\boldsymbol{r}}(\bR^d,w\,dx)} \approx \|\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)} $. Thus it is sufficient to show \begin{align*}
\|\pi^{\boldsymbol{r}}f\|_{L_p(\bR^d,w\,dx)} \approx \| (1 - \Delta)^{s/2} f \|_{L_p(\bR^d,w\,dx)}. \end{align*} Let $\Psi$ be the Schwartz function appearing in the definition of $\Delta_j$. Define
$$
\cF[\Phi](\xi) = \sum_{j=-\infty}^{0}\cF[\Psi](2^{-j}\xi).
$$ and $$
\Pi(\xi) := \frac{\cF[\Phi](\xi)+\sum_{j=1}^\infty 2^{sj} \cF[\Psi](2^{-j}\xi)}{(1+|\xi|^2)^{s/2}}. $$ Then $\Pi$ is a weighted Mikhlin multiplier in Proposition~\ref{21.02.24.16.49}. Thus we have $$
\left\|\sum_{j=1}^\infty 2^{\boldsymbol{r}(j)} \Delta_j f\right\|_{L_p(\bR^d,w\,dx)} = \Big\| \bT_{\Pi} \Bigl( (1-\Delta)^{s/2} f \Bigr)\Big\|_{L_p(\bR^d,w\,dx)} \leq N\| (1-\Delta)^{s/2} f\|_{L_p(\bR^d,w\,dx)}. $$ For the converse direction, it suffices to observe that $$
\widetilde{\Pi}(\xi) = \frac{(1+|\xi|^2)^{s/2} }{\cF[\Phi](\xi)+\sum_{j=1}^\infty 2^{sj}\cF[\Psi](2^{-j}\xi)} $$ becomes a weighted Mikhlin multiplier as well. The corollary is proved. \end{proof}
\begin{defn} Let $q\in(0,\infty)$, $\boldsymbol{r}:\bZ\to(-\infty,\infty)$, and $B$ be a Banach lattice. \begin{enumerate}[(i)]
\item We denote by $B(\ell_2^{\boldsymbol{r}})$ the set of all $B$-valued sequences $x=(x_0,x_1,\cdots)$ such that $$
\|x\|_{B(\ell_2^{\boldsymbol{r}})}:=\|x_0\|_B+\left\|\Big(\sum_{j=1}^{\infty}2^{2\boldsymbol{r}(j)}|x_j|^2\Big)^{1/2}\right\|_B<\infty,\quad |x|: =x\vee(-x). $$
\item We denote by $\ell_q^{\boldsymbol{r}}(B)$ the set of all $B$-valued sequences $x=(x_0,x_1,\cdots)$ such that $$
\|x\|_{\ell_q^{\boldsymbol{r}}(B)}:=\|x_0\|_B+\Big(\sum_{j=1}^{\infty}2^{q\boldsymbol{r}(j)}\|x_j\|_{B}^q\Big)^{1/q}<\infty. $$ In particular, we use the simpler notation $\ell_q^{\boldsymbol{r}}=\ell_q^{\boldsymbol{r}}(\bR)$. \end{enumerate}
\end{defn} \begin{defn} Let $X$ and $Y$ be quasi-Banach spaces. We say that $X$ is a retract of $Y$ if there exist linear transformation $\frI:X\to Y$ and $\frP:Y\to X$ such that $\frP\frI$ is an identity operator in $X$. \end{defn}
\begin{lem} \label{22.04.19.16.55} Let $p\in(1,\infty)$, $q\in(0,\infty)$, and $w\in A_p(\bR^d)$. Suppose that \begin{equation} \label{locsim}
\sup_{j\in\bZ}|\boldsymbol{r}(j+1)-\boldsymbol{r}(j)|=:C_0<\infty. \end{equation} \begin{enumerate}[(i)]
\item Then $H_p^{\boldsymbol{r}}(\bR^d,w\,dx)$ is a retract of $L_p(\bR^d,w\,dx)(\ell_2^{\boldsymbol{r}})$.
\item Then $B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)$ is a retract of $\ell_q^{\boldsymbol{r}}(L_p(\bR^d,w\,dx))$. \end{enumerate}
\end{lem} \begin{proof} We suggest a unified method to prove both (i) and (ii) simultaneously. Let $$ (X,Y)=\left(H_p^{\boldsymbol{r}}(\bR^d,w\,dx),L_p(\bR^d,w\,dx)(\ell_2^{\boldsymbol{r}})\right) $$ or $$ (X,Y)=\left(B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx),\ell_q^{\boldsymbol{r}}(L_p(\bR^d,w\,dx))\right). $$ Consider two mappings: \begin{equation*} f \mapsto \frI(f):=(S_0f,\Delta_1f,\Delta_2f,\cdots) \end{equation*} and \begin{equation*} \boldsymbol{f}=(f_0,f_1,\cdots)\mapsto \frP(\boldsymbol{f}):=S_0(f_0+f_1)+\sum_{j=1}^{\infty}\sum_{l=-1}^1\Delta_{j}f_{j+l}. \end{equation*} By using the fact that $f= S_0(f) + \sum_{j=1}^\infty \Delta_j f$, it is each to check that $\frP\frI$ is an identity operator in $X$. Moreover, due to the definitions of the function spaces, \begin{align}
\label{J norm equivalence}
\|f\|_{X}=\|\frI(f)\|_{Y}. \end{align} In other words, the target space of the mapping $f \mapsto \frI(f)$ is $Y$ and this implies that $\frI$ is a linear transformation from $X$ to $Y$ since the linearity of the mapping is obvious.
Now, it only remains to show that $\frP$ is a linear transformation from $Y$ to $X$. Since the linearity is trivial as before, it is sufficient to show that the target space of the mapping $\boldsymbol{f}=(f_0,f_1,\cdots)\mapsto \frP(\boldsymbol{f})$ is $X$. By the almost orthogonality of the Littlewood-Paley projections, \begin{equation} \label{22.09.13.16.17}
\begin{gathered}
S_0\frP(\boldsymbol{f})=S_0f_0+S_0f_1+S_0\Delta_1f_{2},\\
\Delta_1\frP(\boldsymbol{f})=\Delta_1(S_0+\Delta_1)f_0+\Delta_1f_1+\Delta_1(\Delta_1+\Delta_2)f_2+\Delta_1\Delta_2f_3,
\end{gathered} \end{equation} and \begin{equation}
\begin{aligned}
\Delta_i\frP(\boldsymbol{f})&=\Delta_i\Delta_{i-1}f_{i-2}+\Delta_i(\Delta_{i-1}+\Delta_i)f_{i-1}+\Delta_if_i+\Delta_i(\Delta_i+\Delta_{i+1})f_{i+1}+\Delta_i\Delta_{i+1}f_{i+2}\\
&=:T_i^1f_{i-2}+T_i^2f_{i-1}+T_i^3f_i+T_i^4f_{i+1}+T_i^5f_{i+2},\quad i\geq2.
\end{aligned} \end{equation} The remaining part of the proof becomes slightly different depending on $Y=L_p(\bR^d,w\,dx)(\ell_2^{\boldsymbol{r}})$ or $Y=\ell_q^{\boldsymbol{r}}(L_p(\bR^d,w\,dx))$. \begin{enumerate}[(i)] \item Due to \eqref{22.09.13.16.17}, $$
\sum_{i=2}^{\infty}2^{2\boldsymbol{r}(i)}|\Delta_i\frP(\boldsymbol{f})|^2\leq N\sum_{k=1}^{5}\cI_k, $$ where $$
\cI_k=\sum_{i=2}^{\infty}|T_i^k(2^{\boldsymbol{r}(i)}f_{i+k-3})|^2,\quad k=1,2,3,4,5. $$ By Lemma \ref{wbound}, the sequence of linear operators $\{T_j^k\}_{j=2}^{\infty}$ satisfies \eqref{22.09.13.16.42} for all $k=1,2,3,4,5$. Using Remark \ref{rem RdF vv} and \eqref{locsim}, we have $$
\Big\|\Big(\sum_{i=2}^{\infty}2^{2\boldsymbol{r}(i)}|\Delta_i\frP(\boldsymbol{f})|^2\Big)^{1/2}\Big\|_{L_p(\bR^d,w\,dx)}\leq N\|\boldsymbol{f}\|_{L_p(\bR^d,w\,dx)(\ell_2^{\boldsymbol{r}})}. $$ This implies that $\frP(\boldsymbol{f})\in H_p^{\boldsymbol{r}}(\bR^d,w\,dx)$.
\item By Lemma \ref{wbound}, \begin{align}
\label{2023012880}
\|S_0\frP(\boldsymbol{f})\|_{L_p(\bR^d,w\,dx)}\leq N \sum_{j=0}^{2}\|f_{j}\|_{L_p(\bR^d,w\,dx)} \end{align} and \begin{align}
\label{2023012881}
\|\Delta_i\frP(\boldsymbol{f})\|_{L_p(\bR^d,w\,dx)}\leq N \sum_{j=-2}^{2}\|f_{i+j}\|_{L_p(\bR^d,w\,dx)},\quad \forall i\in\bN, \end{align} where $N$ is independent of $i$ and $f_i=0$ if $i<0$. Moreover, due to \eqref{locsim}, \begin{equation} \label{22.04.11.15.09}
2^{\boldsymbol{r}(i)}\|\Delta_i\frP(\boldsymbol{f})\|_{L_p(\bR^d,w\,dx)}\leq N\sum_{j=-2}^{2}2^{\boldsymbol{r}(i+j)}\|f_{i+j}\|_{L_p(\bR^d,w\,dx)} \end{equation} and the constant $N$ is independent of $i$. Finally, recalling the definition of $B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)$ and combining all \eqref{2023012880}, \eqref{2023012881}, and \eqref{22.04.11.15.09}, we have \begin{equation*}
\begin{aligned}
\|\frP(\boldsymbol{f})\|_{B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)} \leq N\|\boldsymbol{f}\|_{\ell_q^{\boldsymbol{r}}(L_p(\bR^d,w\,dx))},\quad q\in(0,\infty),
\end{aligned} \end{equation*} where $N$ is independent of $\boldsymbol{f}$. Therefore, $\frP$ is a linear transformation from $Y$ to $X$. \end{enumerate} The lemma is proved. \end{proof}
\begin{prop} \label{22.04.24.20.57}
Let $p\in(1,\infty)$, $q\in(0,\infty)$, $w\in A_p(\bR^d)$, and $\boldsymbol{r}:\bZ\to(-\infty,\infty)$ be a sequence satisfying
$$
\sup_{j\in\bZ}|\boldsymbol{r}(j+1)-\boldsymbol{r}(j)|<\infty.
$$
\begin{enumerate}[(i)]
\item The space $X$ is a quasi-Banach space
\item The closure of $C_c^{\infty}(\bR^d)$ under the quasi-norm $\|\cdot\|_{X}$ is $X$,
\end{enumerate} where the space $X$ indicates $H_p^{\boldsymbol{r}}(\bR^d,w\,dx)$ or $B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx)$. \end{prop} \begin{proof} We put $$ (X,Y)=\left(H_p^{\boldsymbol{r}}(\bR^d,w\,dx),L_p(\bR^d,w\,dx)(\ell_2^{\boldsymbol{r}})\right) $$ or $$ (X,Y)=\left(B_{p,q}^{\boldsymbol{r}}(\bR^d,w\,dx),\ell_q^{\boldsymbol{r}}(L_p(\bR^d,w\,dx))\right) $$ as in the previous lemma. \begin{enumerate}[(i)] \item All properties of a quasi-Banach space is obvious except completeness.
Thus we only prove the completeness. Let $\{f_n\}_{n=1}^{\infty}$ be a Cauchy sequence in $X$.
It is obvious that $Y$ is a Banach space due to the completeness of $L_p$-spaces with general measures.
Therefore, using the equivalence of the norms in \eqref{J norm equivalence}, one can find a $\boldsymbol{f}\in Y$ such that
\begin{align*}
\frI(f_n)=(S_0f_n,\Delta_1f_n,\cdots)\to\boldsymbol{f}\quad \text{ in } Y.
\end{align*}
By Lemma \ref{22.04.19.16.55}, $\frP(\boldsymbol{f})\in X$ and
$$
\|f_n-\frP(\boldsymbol{f})\|_{X}\leq N\|\frI(f_n)-\boldsymbol{f}\|_{Y}\to0
$$
as $n\to\infty$. Therefore, $X$ is complete. \item With the help of Proposition \ref{22.05.03.11.34}, it is sufficient to consider the case that the order is $\boldsymbol{0}$. Then $\cS(\bR^d)$ is dense in $X$ due to \cite[Theorem 2.4]{qui1982weighted}. Finally, based on the typical truncation and diagonal arguments, the result is obtained. \end{enumerate} The proposition is proved. \end{proof}
\end{document} |
\begin{document}
\begin{center}
{\Large Some remarks about Fibonacci elements in an arbitrary algebra} \begin{equation*} \end{equation*}
Cristina FLAUT and Vitalii SHPAKIVSKYI
\begin{equation*} \end{equation*} \end{center}
\textbf{Abstract. }{\small In this paper, we prove some relations between Fibonacci elements in an arbitrary algebra. Moreover, we define imaginary Fibonacci quaternions and imaginary Fibonacci octonions and we prove that always three arbitrary imaginary Fibonacci quaternions are linear independents and the mixed product of three arbitrary imaginary Fibonacci octonions is zero.} \begin{equation*} \end{equation*}
Keywords: Fibonacci quaternions, Fibonacci octonions, Fibonacci elements.
2000 AMS Subject Classification: 11B83, 11B99.
\begin{equation*} \end{equation*}
\textbf{1. Introduction} \begin{equation*} \end{equation*}
Fibonacci elements over some special algebras were intensively studied in the last time in various papers, as for example: [Akk; ], [Fl, Sa; 15], [Fl, Sh; 13(1)], [Fl, Sh; 13(2)], [Ha; ],[Ha1; ],[Ho; 61],[Ho; 63],[Ke; ]. All these papers studied properties of Fibonacci quaternions, Fibonacci octonions in Quaternion or Octonion algebras or in generalized Quaternion or Octonion algebras or studied dual vectors or dual Fibonacci quaternions ( see [Gu;], [Nu; ]).
In this paper, we will prove that some of the obtained identities can be obtained over an arbitrary algebras. We introduce the notions of imaginary Fibonacci quaternions and imaginary Fibonacci octonions and we prove, using the structure of the quaternion algebras \ and octonion algebras, that always arbitrary three of such elements are linear dependents. For other details, properties and applications regarding quaternion algebras \ and octonion algebras, the reader is referred, for example, to [Sc; 54], [Sc; 66], [Fl, St; 09], [Sa, Fl, Ci; 09].
\textbf{2. Fibonacci elements in an arbitrary algebra} \begin{equation*} \end{equation*}
Let $A$ be a unitary algebra over $K$ ($K=\mathbb{R},\mathbb{C}$) with a basis $\{e_{0}=1,e_{1},e_{2},...,e_{n}\}.$ Let $\{f_{n}\}_{n\in \mathbb{N}}$ be the Fibonacci sequence \begin{equation*} f_{n}=f_{n-1}+f_{n-2},n\geq 2,f_{0}=0,f_{1}=1. \end{equation*} In algebra $A,$ we define the Fibonacci element as follows: \begin{equation*} F_{m}=\overset{n}{\underset{k=0}{\sum }}f_{m+k}e_{k}. \end{equation*}
\textbf{Proposition 2.1.} \textit{With the above notations, the following relations hold:}
\textit{1)} $F_{m+2}=F_{m+1}+F_{m};$
\textit{2)} $\overset{p}{\underset{i=0}{\sum }}F_{i}=F_{p+2}-F_{i}.$
\textbf{Proof.} 1) $F_{m+1}+F_{m}=\overset{n}{\underset{k=0}{\sum }} f_{m+k+1}e_{k}+\overset{n}{\underset{k=0}{\sum }}f_{m+k}e_{k}=\overset{n}{ \underset{k=0}{\sum }}(f_{m+k+1}+f_{m+k})e_{k}=\overset{n}{\underset{k=0}{ \sum }}$ $f_{m+k+2}e_{k}=F_{m+2}.$
2) $\overset{p}{\underset{i=0}{\sum }}F_{i}=F_{1}+F_{2}+...+F_{p}=$\newline $=\overset{n}{\underset{k=0}{\sum }}f_{k+1}e_{k}+\overset{n}{\underset{k=0}{ \sum }}f_{k+2}e_{k}+...+\overset{n}{\underset{k=0}{\sum }}f_{k+p}e_{k}=$ \newline $=e_{0}\left( f_{1}+...+f_{p}\right) +e_{1}\left( f_{2}+...+f_{p+1}\right) +$ \newline $+e_{2}\left( f_{3}+...+f_{p+2}\right) +...+e_{n}\left( f_{k+n}+...+f_{p+n}\right) =$\newline $=$ $e_{0}\left( f_{p+2}-1\right) +e_{1}\left( f_{p+3}-1-f_{1}\right) +e_{2}\left( f_{p+4}-1-f_{1}-f_{2}\right) +$\newline $+e_{3}\left( f_{p+5}-1-f_{1}-f_{2}-f_{3}\right) +...+e_{n}\left( f_{p+n+2}-1-f_{1}-f_{2}-...-f_{n}\right) =$\newline $=F_{p+2}-F_{2}.$
We used the identity $\overset{p}{\underset{i=1}{\sum }}f_{i}=f_{p+2}-1$ (for usual Fibonacci numbers) and $1+f_{1}+f_{2}+...+f_{n}=f_{n+2}.\Box
$
\textbf{Remark 2.2}. The equalities 1, 2 from the above proposition generalize the corresponding formulae from [Ke; ] [Ha; ] [Nu; ] [Ha1; ].
\textbf{Proposition 2.3.} \textit{We have the following formula (Binet's fomula):} \begin{equation*} F_{m}=\frac{\alpha ^{\ast }\alpha ^{m}-\beta ^{\ast }\beta ^{m}}{\alpha -\beta }, \end{equation*} \textit{where} $\alpha =\frac{1+\sqrt{5}}{2},\beta =\frac{1-\sqrt{5}}{2} ,\alpha ^{\ast }=\underset{k=0}{\overset{n}{\sum }}\alpha ^{k}e_{k},~\beta ^{\ast }=\underset{k=0}{\overset{n}{\sum }}\beta ^{k}e_{k}.$
\textbf{Proof.} \ Using the formula for the real quaternions, $f_{m}=\frac{ \alpha ^{m}-\beta ^{m}}{\alpha -\beta },$ we obtain\newline $F_{m}=\overset{n}{\underset{k=0}{\sum }}f_{m+k}e_{k}=\frac{\alpha ^{m}-\beta ^{m}}{\alpha -\beta }e_{0}+\frac{\alpha ^{m+1}-\beta ^{m+1}}{ \alpha -\beta }e_{1}+\frac{\alpha ^{m+2}-\beta ^{m+2}}{\alpha -\beta } e_{2}+...+$\newline $+\frac{\alpha ^{m+n}-\beta ^{m+n}}{\alpha -\beta }e_{n}=\frac{a^{m}}{\alpha -\beta }\left( e_{0}+\alpha e_{1}+\alpha ^{2}e_{2}+...+\alpha ^{n}e_{n}\right) +$\newline $+\frac{\beta ^{m}}{\alpha -\beta }\left( e_{0}+\beta e_{1}+\beta ^{2}e_{2}+...+\beta ^{n}e_{n}\right) =\frac{\alpha ^{\ast }\alpha ^{m}-\beta ^{\ast }\beta ^{m}}{\alpha -\beta }.
\Box
$
\textbf{Remark 2.4.} The above result generalizes the Binet formulae from the papers [Gu;] [Akk; ] [Ke; ] [Ha; ] [Nu; ] [Ha1; ].
\textbf{Theorem 2.5.} \textit{The generating function for the Fibonacci number over an algebra is of the form} \begin{equation*} G\left( t\right) =\frac{F_{0}+\left( F_{1}-F_{0}\right) t}{1-t-t^{2}}. \end{equation*}
\textbf{Proof.} We consider the generating function of the form \begin{equation*} G\left( t\right) =\overset{\infty }{\underset{m=0}{\sum }}F_{m}t^{m}. \end{equation*} We consider the product\newline $G\left( t\right) \left( 1-t-t^{2}\right) =\overset{\infty }{\underset{m=0}{ \sum }}F_{m}t^{m}=$ $\overset{\infty }{\underset{m=0}{\sum }}F_{m}t^{m}- \overset{\infty }{\underset{m=0}{\sum }}F_{m}t^{m+1}-\overset{\infty }{ \underset{m=0}{\sum }}F_{m}t^{m+2}=$\newline $=F_{0}+F_{1}t+F_{2}t^{2}+F_{3}t^{3}+...-F_{0}t-F_{1}t^{2}-F_{2}t^{3}-...-$ \newline $-F_{0}t^{2}-F_{1}t^{3}-F_{2}t^{4}-...=F_{0}+\left( F_{1}-F_{0}\right) t.\Box
$
\textbf{Remark 2.6.} The above Theorem generalizes results from the papers [Gu;], [Akk; ], \ [Ke; ], [Ha; ],[Nu; ].
\begin{equation*} \end{equation*}
\textbf{The Cassini identity} \begin{equation*} \end{equation*}
First, we obtain the following identity.
\textbf{Proposition 2.7.}
\begin{equation} F_{-m}=\left( -1\right) ^{m+1}f_{m}F_{1}+\left( -1\right) ^{m}f_{m+1}F_{0}. \tag{2.1.} \end{equation}
\textbf{Proof.} We use induction. \ For $m=1,$ we obtain $ F_{-1}=f_{1}F_{1}-f_{2}F_{0},$ which is true. Now, we assume that it is true for an arbitrary integer $k$ \begin{equation*} F_{-k}=\left( -1\right) ^{k+1}f_{k}F_{1}+\left( -1\right) ^{k}f_{k+1}F_{0} \end{equation*} For $k+1,$ we obtain\newline $F_{-(k+1)}=\left( -1\right) ^{k+2}f_{k+1}F_{1}+\left( -1\right) ^{k+1}f_{k+2}F_{0}=$\newline $=\left( -1\right) ^{k}f_{k}F_{1}+\left( -1\right) ^{k}f_{k-1}F_{1}+\left( -1\right) ^{k-1}f_{k+1}F_{0}+$\newline $+\left( -1\right) ^{k-1}f_{k}F_{0}=F_{-\left( n-1\right) }-F_{-n}.$ Therefore, this statement is true.$\Box
$
\textbf{Theorem 2.8.} (Cassini's identity) \textit{With the above notations, we have the following formula}
\begin{equation*} F_{m-1}F_{m+1}-F_{m}^{2}=\left( -1\right) ^{m}(F_{-1}F_{1}-F_{0}^{2}). \end{equation*}
\textbf{Proof.}
We consider\newline $ F_{m-1}=f_{m-1}e_{0}+f_{m}e_{1}+f_{m+1}e_{2}+f_{m+2}e_{3}+...+f_{m+n-1}e_{n}, $\newline $ F_{m+1}=f_{m+1}e_{0}+f_{m+2}e_{1}+f_{m+3}e_{2}+f_{m+4}e_{3}+...+f_{m+n+1}e_{n}, $\newline $F_{m}=f_{m}e_{0}+f_{m+1}e_{1}+f_{m+2}e_{2}+f_{m+2}e_{3}+...+f_{m+n}e_{n}.$
We compute\newline $F_{m-1}F_{m+1}=$\newline $=\left[ f_{m-1}f_{m+1}e_{0}^{2}+f_{m-1}f_{m+2}e_{0}e_{1}+f_{m-1}f_{m+3}e_{0}e_{2}+f_{m-1}f_{m+4}e_{0}e_{3}...+f_{m-1}f_{m+n+1}e_{0}e_{n} \right] +$\newline $ +[f_{m}f_{m+1}e_{1}e_{0}+f_{m}f_{m+2}e_{1}^{2}+f_{m}f_{m+3}e_{1}e_{2}+f_{m}f_{m+4}e_{1}e_{3}...+f_{m}f_{m+n+1}e_{1}e_{n}]+ $\newline $ +[f_{m+1}^{2}e_{2}e_{0}+f_{m+1}f_{m+2}e_{2}e_{1}+f_{m+1}f_{m+3}e_{2}^{2}+f_{m+1}f_{m+4}e_{1}e_{3}...+f_{m+1}f_{m+n+1}e_{2}e_{n}]+ $\newline $ +[f_{m+2}f_{m+1}e_{3}e_{0}+f_{m+2}^{2}e_{3}e_{1}+f_{m+2}f_{m+3}e_{3}e_{2}+f_{m+2}f_{m+4}e_{3}^{2}...+f_{m+2}f_{m+n+1}e_{3}e_{n}]+...+ $\newline $ +[f_{m+n-1}f_{m+1}e_{n}e_{0}+f_{m+n-1}f_{m+2}e_{n}e_{1}+f_{m+n-1}f_{m+3}e_{n}e_{2}+f_{m+n-1}f_{m+4}e_{n}e_{3}...+f_{m+n-1}f_{m+n+1}e_{n}^{2}]. $
Now, we compute \newline $F_{m}^{2}=\left[ f_{m}^{2}e_{0}^{2}+f_{m}f_{m+1}e_{0}e_{1}+f_{m}f_{m+2}e_{0}e_{2}+f_{m}f_{m+3}e_{0}e_{3}+...+f_{m}f_{m+n}e_{0}e_{n} \right] +$\newline $+$ $\left[ f_{m+1}f_{m}e_{1}e_{0}+f_{m+1}^{2}e_{1}^{2}+f_{m+1}f_{m+2}e_{1}e_{2}+f_{m+1}f_{m+3}e_{1}e_{3}+...+f_{m+1}f_{m+n}e_{1}e_{n} \right] +$\newline $+\left[ f_{m+2}f_{m}e_{2}e_{0}+f_{m+2}f_{m+1}e_{2}e_{1}+f_{m+2}^{2}e_{2}^{2}+f_{m+2}f_{m+3}e_{2}e_{3}+...+f_{m+2}f_{m+n}e_{2}e_{n} \right] +$\newline $+\left[ f_{m+2}f_{m}e_{2}e_{0}+f_{m+2}f_{m+1}e_{2}e_{1}+f_{m+2}^{2}e_{2}^{2}+f_{m+2}f_{m+3}e_{2}e_{3}+...+f_{m+2}f_{m+n}e_{2}e_{n} \right] +$\newline $+\left[ f_{m+3}f_{m}e_{3}e_{0}+f_{m+3}f_{m+1}e_{3}e_{1}+f_{m+3}f_{m+2}e_{3}e_{2}+f_{m+3}^{2}e_{3}^{2}+...+f_{m+3}f_{m+n}e_{3}e_{n} \right] +...+$\newline $+\left[ f_{m+n}f_{m}e_{n}e_{0}+f_{m+n}f_{m+1}e_{n}e_{1}+f_{m+n}f_{m+2}e_{n}e_{2}+f_{m+n}f_{m+3}e_{n}e_{3}+...+f_{m+n}^{2}e_{n}^{2} \right] .$
Consider the difference\newline $F_{m-1}F_{m+1}-F_{m}^{2}=$\newline $=e_{0}\left[ e_{0}\left( f_{m-1}f_{m+1}-f_{m}^{2}\right) +e_{1}\left( f_{m-1}f_{m+2}-f_{m}f_{m+1}\right) +...+e_{n}\left( f_{m-1}f_{m+n+1}-f_{m}f_{m+n}\right) \right] +$\newline $+e_{1}\left[ e_{0}\left( f_{m}f_{m+1}-f_{m+1}f_{m}\right) +e_{1}\left( f_{m}f_{m+2}-f_{m+1}^{2}\right) +...+e_{n}\left( f_{m}f_{m+n+1}-f_{m+1}f_{m+n}\right) \right] +$\newline $+e_{2}\left[ e_{0}\left( f_{m+1}^{2}-f_{m+2}f_{m}\right) +e_{1}\left( f_{m+1}f_{m+2}-f_{m+2}f_{m+1}\right) +...+e_{n}\left( f_{m+1}f_{m+n+1}-f_{m+2}f_{m+n}\right) \right] +$\newline $+e_{3}\left[ e_{0}\left( f_{m+2}f_{m+1}-f_{m+3}f_{m}\right) +e_{1}\left( f_{m+2}^{2}-f_{m+3}f_{m+1}\right) +...+e_{n}\left( f_{m+2}f_{m+n+1}-f_{m+3}f_{m+n}\right) \right] +...+$\newline $+e_{n}\left[ e_{0}\left( f_{m+n-1}f_{m+1}-f_{m+n}f_{m}\right) +e_{1}\left( f_{m+n-1}f_{m+2}-f_{m+n}f_{m+1}\right) +...+e_{n}\left( f_{m+n-1}f_{m+n+1}-f_{m+n}^{2}\right) \right] .$
Using the formula $f_{i}f_{j}-f_{i+k}f_{j-k}=\left( -1\right) ^{j-k}f_{i+k-j}f_{k}$ (see Koshy, p. 87, formula 2) and the identities $ f_{1}=1,f_{-m}=\left( -1\right) ^{m+1}f_{m}$ (see Koshy, p. 84), we obtain \newline $F_{m-1}F_{m+1}-F_{m}^{2}=e_{0}\left( -1\right) ^{m+1}\left[ e_{0}f_{1}+e_{1}f_{2}+e_{2}f_{3}+...+e_{n}f_{n+1}\right] +$\newline $+e_{1}\left( -1\right) ^{m+1}\left[ e_{0}f_{0}+e_{1}f_{1}+e_{2}f_{2}+...+e_{n}f_{n}\right] +$\newline $+e_{2}\left( -1\right) ^{m}\left[ e_{0}f_{-1}+e_{1}f_{0}+e_{2}f_{1}+...+e_{n}f_{n-1}\right] +$\newline $+e_{3}\left( -1\right) ^{m}\left[ e_{0}f_{-2}+e_{1}f_{-1}+e_{2}f_{0}+...+e_{n}f_{n-2}\right] +...+$\newline $+\left( -1\right) ^{m\ast n}e_{n}\left[ e_{0}f_{-n+1}+e_{1}f_{-n+2}+e_{2}f_{-n+3}+...+e_{n}f_{1}\right] =$\newline $=\left( -1\right) ^{m}\left( e_{0}F_{1}-e_{1}F_{0}+e_{2}F_{-1}-e_{3}F_{-2}+...+\left( -1\right) ^{n}e_{n}F_{-n+1}\right) .$
Using Proposition 2.7, we have\newline $F_{m-1}F_{m+1}-F_{m}^{2}=\left( -1\right) ^{m}[e_{0}F_{1}-e_{1}F_{0}+e_{2}\left( F_{1}-F_{0}\right) -$\newline $-e_{3}\left( 2F_{0}-F_{1}\right) +e_{4}\left( 2F_{1}-3F_{0}\right) -e_{5}\left( -3F_{1}+5F_{0}\right) +...+$\newline $+e_{n}\left( -1\right) ^{n}\left( \left( -1\right) ^{n}f_{n-1}F_{1}+\left( -1\right) ^{n-1}f_{n}F_{0}\right) ]=$\newline $=\left( -1\right) ^{m}[(e_{0}f_{-1}+e_{1}f_{0}+e_{2}f_{1}+...+e_{n}f_{n-1})F_{1}-$\newline $-\left( f_{0}e_{0}+f_{1}e_{1}+f_{2}e_{2}+...+f_{n}e_{n}\right) F_{0}]=$ \newline $=\left( -1\right) ^{m}\left[ F_{-1}F_{1}-F_{0}^{2}\right] .$ The theorem is now proved.
\textbf{Remark 2.9. }
i) Similarly, we can prove an analogue of Cassini's formula: \begin{equation*} F_{m+1}F_{m-1}-F_{m}^{2}=\left( -1\right) ^{m}\left[ F_{1}F_{-1}-F_{0}^{2} \right] . \end{equation*}
ii) Theorem 2.8 generalizes Cassini's formula for all real algebras.
iii) If the algebra $A$ is algebra of the real numbers $\mathbb{R},$ in this case, we have $F_{m}=f_{m}.$ From the above theorem, it \ results that
\begin{equation*} f_{m+1}f_{m-1}-f_{m}^{2}=\left( -1\right) ^{m}\left[ f_{1}f_{-1}-f_{0}^{2} \right] =\left( -1\right) ^{m}, \end{equation*} which it is the classical Cassini's identity. \begin{equation*} \end{equation*}
\textbf{3. Imaginary Fibonacci quaternions and imaginary Fibonacci octonions } \begin{equation*} \end{equation*}
In the following, we will consider a field $K$ with $charK\neq 2,3,$ $V$ a finite dimensional vector space and $A$ a finite dimensional unitary algebra over a field $\ K$, associative or nonassociative.
Let $\mathbb{H}\left( \alpha ,\beta \right) $ be the generalized real\ quaternion algebra, the algebra of the elements of the form $a=a_{1}\cdot 1+a_{2}\mathbf{i}+a_{3}\mathit{j}+a_{4}\mathbf{k},$ where $a_{i}\in \mathbb{R },\mathbf{i}^{2}=-\alpha ,\mathbf{j}^{2}=-\beta ,$ $\mathbf{k}=\mathbf{ij}=- \mathbf{ji}.$ We denote by $\mathbf{t}\left( a\right) $ and $\mathbf{n} \left( a\right) $ the trace and the norm of a real quaternion $a.$ The norm of a generalized quaternion has the following expression $\mathbf{n}\left( a\right) =a_{1}^{2}+\alpha a_{2}^{2}+\beta a_{3}^{2}+\alpha \beta a_{4}^{2}$ and $\mathbf{t}\left( a\right) =2a_{1}.$ It is known that for $a\in $ $ \mathbb{H}\left( \alpha ,\beta \right) ,$ we have $a^{2}-\mathbf{t}\left( a\right) a+\mathbf{n}\left( a\right) =0.$ The quaternion algebra $\mathbb{H} \left( \alpha ,\beta \right) $ is a \textit{division algebra} if for all $ a\in \mathbb{H}\left( \alpha ,\beta \right) ,$ $a\neq 0$ we have $\mathbf{n} \left( a\right) \neq 0,$ otherwise $\mathbb{H}\left( \alpha ,\beta \right) $ is called a \textit{split algebra}.
Let $\mathbb{O}(\alpha ,\beta ,\gamma )$ be a generalized octonion algebra over $\mathbb{R},$ with basis $\{1,e_{1},...,e_{7}\},$ the algebra of the elements of the form $\ a=a_{0}+a_{1}e_{1}+a_{2}e_{2}+a_{3}e_{3}+a_{4}e_{4}+a_{5}e_{5}+a_{6}e_{6}+a_{7}e_{7}\, $and the multiplication given in the following table:
\begin{center} {\footnotesize $
\begin{tabular}{c||c|c|c|c|c|c|c|c|} $\cdot $ & $1$ & $\,\,\,e_{1}$ & $\,\,\,\,\,e_{2}$ & $\,\,\,\,e_{3}$ & $ \,\,\,\,e_{4}$ & $\,\,\,\,\,\,e_{5}$ & $\,\,\,\,\,\,e_{6}$ & $ \,\,\,\,\,\,\,e_{7}$ \\ \hline\hline $\,1$ & $1$ & $\,\,\,e_{1}$ & $\,\,\,\,e_{2}$ & $\,\,\,\,e_{3}$ & $ \,\,\,\,e_{4}$ & $\,\,\,\,\,\,e_{5}$ & $\,\,\,\,\,e_{6}$ & $ \,\,\,\,\,\,\,e_{7}$ \\ \hline $\,e_{1}$ & $\,\,e_{1}$ & $-\alpha $ & $\,\,\,\,e_{3}$ & $-\alpha e_{2}$ & $ \,\,\,\,e_{5}$ & $-\alpha e_{4}$ & $-\,\,e_{7}$ & $\,\,\,\alpha e_{6}$ \\ \hline $\,e_{2}$ & $\,e_{2}$ & $-e_{3}$ & $-\,\beta $ & $\,\,\beta e_{1}$ & $ \,\,\,\,e_{6}$ & $\,\,\,\,\,e_{7}$ & $-\beta e_{4}$ & $-\beta e_{5}$ \\ \hline $e_{3}$ & $e_{3}$ & $\alpha e_{2}$ & $-\beta e_{1}$ & $-\alpha \beta $ & $ \,\,\,\,e_{7}$ & $-\alpha e_{6}$ & $\,\,\,\beta e_{5}$ & $-\alpha \beta e_{4} $ \\ \hline $e_{4}$ & $e_{4}$ & $-e_{5}$ & $-\,e_{6}$ & $-\,\,e_{7}$ & $-\,\gamma $ & $ \,\,\,\gamma e_{1}$ & $\,\,\gamma e_{2}$ & $\,\,\,\,\,\gamma e_{3}$ \\ \hline $\,e_{5}$ & $\,e_{5}$ & $\alpha e_{4}$ & $-\,e_{7}$ & $\,\alpha e_{6}$ & $ -\gamma e_{1}$ & $-\,\alpha \gamma $ & $-\gamma e_{3}$ & $\,\alpha \gamma e_{2}$ \\ \hline $\,\,e_{6}$ & $\,\,e_{6}$ & $\,\,\,\,e_{7}$ & $\,\,\beta e_{4}$ & $-\,\beta e_{5}$ & $-\gamma e_{2}$ & $\,\,\,\gamma e_{3}$ & $-\beta \gamma $ & $-\beta \gamma e_{1}$ \\ \hline $\,\,e_{7}$ & $\,\,e_{7}$ & $-\alpha e_{6}$ & $\,\beta e_{5}$ & $\alpha \beta e_{4}$ & $-\gamma e_{3}$ & $-\alpha \gamma e_{2}$ & $\beta \gamma e_{1} $ & $-\alpha \beta \gamma $ \\ \hline \end{tabular} \
$ }
Table 1 \end{center}
The algebra $\mathbb{O}(\alpha ,\beta ,\gamma )$ is non-commutative and non-associative.
If $a\in \mathbb{O}(\alpha ,\beta ,\gamma ),$ $ a=a_{0}+a_{1}e_{1}+a_{2}e_{2}+a_{3}e_{3}+a_{4}e_{4}+a_{5}e_{5}+a_{6}e_{6}+a_{7}e_{7} $ then $\bar{a} =a_{0}-a_{1}e_{1}-a_{2}e_{2}-a_{3}e_{3}-a_{4}e_{4}-a_{5}e_{5}-a_{6}e_{6}-a_{7}e_{7} $ is called the \textit{conjugate} of the element $a.$ The scalars $\mathbf{t }\left( a\right) =a+\overline{a}\in \mathbb{R}$ and \begin{equation} \,\mathbf{n}\left( a\right) =a\overline{a}=a_{0}^{2}+\alpha a_{1}^{2}+\beta a_{2}^{2}+\alpha \beta a_{3}^{2}+\gamma a_{4}^{2}+\alpha \gamma a_{5}^{2}+\beta \gamma a_{6}^{2}+\alpha \beta \gamma a_{7}^{2}\in \mathbb{R}, \tag{3.1.} \end{equation} are called the \textit{trace}, respectively, the \textit{norm} of the element $a\in $ $A.$ \thinspace It\thinspace \thinspace \thinspace follows\thinspace \thinspace \thinspace that$\,$\newline \thinspace \thinspace $a^{2}-\mathbf{t}\left( a\right) a+\mathbf{n}\left( a\right) =0,\forall a\in A.$The octonion algebra $\mathbb{O}\left( \alpha ,\beta ,\gamma \right) $ is a \textit{division algebra} if for all $a\in \mathbb{O}\left( \alpha ,\beta ,\gamma \right) ,$ $a\neq 0$ we have $\mathbf{ n}\left( a\right) \neq 0,$ otherwise $\mathbb{O}\left( \alpha ,\beta ,\gamma \right) $ is called a \textit{split algebra}.
Let \ $V$ be a real vector space of dimension $n$ and $<,>$ be the inner product. The\textit{\ cross product} on $V$ is a continuos map \begin{equation*} X:V^{s}\rightarrow V,s\in \{1,2,...,n\} \end{equation*} with the following properties:
1) $<X\left( x_{1},...x_{s}\right) ,x_{i}>=0,i\in \{1,2,...,s\};$
2) $<X\left( x_{1},...x_{s}\right) ,X\left( x_{1},...x_{s}\right) >=\det \left( <x_{i},x_{j}>\right) .($see [Br; ]$)$
In [Ro; 96], was proved that if $d=\dim _{\mathbb{R}}V,$ therefore $d\in \{0,1,3,7\}.$(see [Ro; 96], Proposition 3)
The values $0,1,3$ and $7$ for dimensions are obtained from Hurwitz's theorem, since the real Hurwitz division algebras $\mathcal{H}$ exist only for dimensions $1,2,4$ and $8$ dimensions. In this situations, the cross product is obtained from the product of the normed division algebra, restricting it to imaginary subspace of the algebra $\mathcal{H},$ which can be of \ dimension $0,1,3$ or $7$.(see [Ja; 74]) It is known that the real Hurwitz division algebras are only: the real numbers, the complex numbers, the quaternions and the octonions.
In $\mathbb{R}^{3}$ with the canonical basis $\{i_{1},i_{2},i_{3}\},$ the cross product of two linearly independent vectors $ x=x_{1}i_{1}+x_{2}i_{2}+x_{3}i_{3}$ and $y=y_{1}i_{1}+y_{2}i_{2}+y_{3}i_{3}$ is a vector, denoted by $x\times y$ and can be expressed computing the following formal determinant \begin{equation} x\times y=\left\vert \begin{array}{ccc} i_{1} & i_{2} & i_{3} \\ x_{1} & x_{2} & x_{3} \\ y_{1} & y_{2} & y_{3} \end{array} \right\vert . \tag{3.2.} \end{equation}
The cross product can also be described using the quaternions and the basis $ \{i_{1},i_{2},i_{3}\}$ as a standard basis for $\mathbb{R}^{3}.$ If a vector $x\in \mathbb{R}^{3}$ has the form $x=x_{1}i_{1}+x_{2}i_{2}+x_{3}i_{3}$ and is represented as the quaternion $x=x_{1}\mathbf{i}+x_{2}\mathbf{j}+x_{3} \mathbf{k}$, therefore the cross product of two vectors has the form $ x\times y=xy+<x,y>,$ where $<x,y>=x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3}$ is the inner product.
A cross product for 7-dimensional vectors can be obtained in the same way by using the octonions instead of the quaternions. If $x=\underset{i=0}{\overset {7}{\sum }}x_{i}e_{i}$ and $y=\underset{i=0}{\overset{7}{\sum }}y_{i}e_{i}$ are two imaginary octonions, therefore \begin{eqnarray*} x\times y &=&(x_{2}y_{4}-x_{4}y_{2}+x_{3}y_{7}-x_{7}y_{3}+x_{5}y_{6}-x_{6}y_{5}) \,e_{1}+ \\ &&+(x_{3}y_{5}-x_{5}y_{3}+x_{4}y_{1}-x_{1}y_{4}+x_{6}y_{7}-x_{7}y_{6}) \,e_{2}+ \\ &&+(x_{4}y_{6}-x_{6}y_{4}+x_{5}y_{2}-x_{2}y_{5}+x_{7}y_{1}-x_{1}y_{7}) \,e_{3}+ \\ &&+(x_{5}y_{7}-x_{7}y_{5}+x_{6}y_{3}-x_{3}y_{6}+x_{1}y_{2}-x_{2}y_{1}) \,e_{4}+ \\ &&+(x_{6}y_{1}-x_{1}y_{6}+x_{7}y_{4}-x_{4}y_{7}+x_{2}y_{3}-x_{3}y_{2}) \,e_{5}+ \\ &&+(x_{7}y_{2}-x_{2}y_{7}+x_{1}y_{5}-x_{5}y_{1}+x_{3}y_{4}-x_{4}y_{3}) \,e_{6}+ \\ &&+(x_{1}y_{3}-x_{3}y_{1}+x_{2}y_{6}-x_{6}y_{2}+x_{4}y_{5}-x_{5}y_{4}) \,e_{7}, \end{eqnarray*} \begin{equation} \tag{3.3.} \end{equation} see [Si; 02].
Let $\mathbb{H}$ be the real division quaternion algebra and $\mathbb{H} _{0}=\{x\in \mathbb{H}$ $\mid $ $\mathbf{t}\left( x\right) =0\}.$ An element $F_{n}\in \mathbb{H}_{0}$ is called an\textit{\ imaginary Fibonacci quaternion element} if it is of the form $F_{n}=f_{n}\mathbf{i}+f_{n+1} \mathbf{j}+f_{n+2}\mathbf{k,}$ where $\left( f_{n}\right) _{n\in \mathbb{N}}$ is the Fibonacci numbers sequence$.$ Let $F_{k},F_{m},F_{n}$ be three imaginary Fibonacci quaternions. Therefore, we have the following result.
In the proof of the following results, we will use some relations between Fibonacci numbers, namely:
\textit{D'Ocagne's identity} \begin{equation} f_{m}f_{n+1}-f_{n}f_{m+1}=\left( -1\right) ^{n}f_{m-n} \tag{3.4.} \end{equation} see relation (33) from [Wo], and
\textit{Johnson's identity} \begin{equation} f_{a}f_{b}-f_{c}f_{d}=\left( -1\right) ^{r}\left( f_{a-r}f_{b-r}-f_{c-r}f_{d-r}\right) , \tag{3.5.} \end{equation} for arbitrary integers $a,b,c,d,$ and $r$ with $a+b=c+d,$ see relation (36) from [Wo].
\textbf{Proposition 3.1.} \textit{With the above notations, for three arbitrary Fibonacci imaginary quaternions, } \textit{we have} \begin{equation*} <F_{k}\times F_{m},F_{n}>=0. \end{equation*} \textit{Therefore, the vectors} $F_{k},F_{m},F_{n}$ \textit{are linear dependents}.
The above result is similar with the result for dual Fibonacci vectors obtained in [Gu;], Theorem 11.
Let $\mathbb{O}$ be the real division octonion algebra and $\mathbb{O} _{0}=\{x\in \mathbb{H}$ $\mid $ $\mathbf{t}\left( x\right) =0\}.$ An element $F_{n}\in \mathbb{O}_{0}$ is called an \textit{imaginary Fibonacci octonion element} if it is of the form $ F_{n}=f_{n}e_{1}+f_{n+1}e_{1}+f_{n+2}e_{1}+f_{n+3}e_{1}+f_{n+4}e_{1}+f_{n+5}e_{1}+f_{n+6}e_{1} \mathbf{,}$ where $\left( f_{n}\right) _{n\in \mathbb{N}}$ is the Fibonacci numbers sequence$.$ Let $F_{k},F_{m},F_{n}$ be three imaginary Fibonacci octonions.
\
\textbf{Proposition 3.2.} \textit{With the above notations, for three arbitrary Fibonacci imaginary octonions, we have} \begin{equation*} <F_{k}\times F_{m},F_{n}>=0. \end{equation*} \qquad
\textbf{Proof.} Using formulae ($3.3$), $\left( 3.4\right) $ and $\left( 3.5\right) $, we will compute
$F_{k}\times F_{m}.$ \newline The coefficient of \ $e_{1}$ is\newline $f_{m+2}f_{k+4}-f_{k+2}f_{m+4}+f_{m+3}f_{k+7}-f_{k+3}f_{m+7}+f_{m+5}f_{k+6}-$ $f_{k+5}f_{m+6}=$\newline $=f_{m}f_{k+2}-f_{k}f_{m+2}-f_{m}f_{k+4}+f_{k}f_{m+4}-f_{m}f_{k+1}+$ $ f_{k}f_{m+1}=$\newline $=f_{m}\left( f_{k+2}-f_{k+4}-f_{k+1}\right) +f_{k}\left( -f_{m+2}+f_{m+4}+f_{m+1}\right) =$\newline $=f_{m}\left( f_{k}-f_{k+4}\right) +f_{k}\left( f_{m+4}-f_{m}\right) =$ \newline $=-f_{m}\left( 3f_{k+1}+f_{k}\right) +f_{k}\left( 3f_{m+1}+f_{m}\right) =$ \newline $=-3\left( f_{m}f_{k+1}-f_{k}f_{m+1}\right) =-3\left( -1\right) ^{k}f_{m-k}.$ \newline The coefficient of $e_{2}$ is\newline $f_{m+3}f_{k+5}-f_{k+3}f_{m+5}+f_{m+4}f_{k+1}-f_{k+4}f_{m+1}+f_{m+6}f_{k+7}-$ $f_{k+6}f_{m+7}=$\newline $=$ $-f_{m}f_{k+2}+f_{k}f_{m+2}-f_{m+3}f_{k}+f_{k+3}f_{m}+f_{m}f_{k+1}-$ $ f_{k}f_{m+1}=$\newline $=f_{m}\left( -f_{k+2}+f_{k+3}+f_{k+1}\right) +f_{k}\left( f_{m+2}-f_{m+3}-f_{m+1}\right) =$\newline $=2\left( f_{m}f_{k+1}-f_{k}f_{m+1}\right) =2\left( -1\right) ^{k}f_{m-k}.$ \newline The coefficient of $e_{3}$ is \newline $f_{m+4}f_{k+6}-f_{m+3}f_{k+5}+f_{m+5}f_{k+2}-f_{m+2}f_{k+5}+f_{m+7}f_{k+1}-$ $f_{k+7}f_{m+1}=$\newline $ =f_{m}f_{k+2}-f_{m+2}f_{k}+f_{m+3}f_{k}-f_{m}f_{k+3}-f_{m+6}f_{k}+f_{m}f_{k+6}= $\newline $=f_{m}\left( f_{k+2}-f_{k+3}+f_{k+6}\right) +f_{k}\left( -f_{m+2}+f_{m+3}-f_{m+6}\right) =$\newline $=7\left( f_{m}f_{k+1}-f_{k}f_{m+1}\right) =7\left( -1\right) ^{k}f_{m-k}.$ \newline The coefficient of $e_{4}$ is\newline $f_{m+5}f_{k+7}-f_{k+5}f_{m+7}+f_{m+6}f_{k+3}-f_{k+6}f_{m+3}+f_{m+1}f_{k+2}-$ $f_{m+2}f_{k+1}=$\newline $ =-f_{m}f_{k+2}+f_{k}f_{m+2}-f_{m+3}f_{k}+f_{k+3}f_{m}-f_{m}f_{k+1}+f_{k}f_{m+1}= $\newline $=f_{m}\left( -f_{k+2}+f_{k+3}-f_{k+1}\right) =0.$\newline The coefficient of $e_{5}$ is\newline $f_{m+6}f_{k+1}-f_{k+6}f_{m+1}+f_{m+7}f_{k+4}-f_{k+7}f_{m+4}+f_{m+2}f_{k+3}-$ $f_{k+2}f_{m+3}=$\newline $ =-f_{m+5}f_{k}+f_{k+5}f_{m}+f_{m+3}f_{k}-f_{k+3}f_{m}+f_{m}f_{k+1}-f_{k}f_{m+1}= $\newline $=f_{m}\left( f_{k+5}-f_{k+3}+f_{k+1}\right) +f_{k}\left( -f_{m+5}+f_{m+3}-f_{m+1}\right) =$\newline $=4\left( f_{m}f_{k+1}-f_{k}f_{m+1}\right) =4\left( -1\right) ^{k}f_{m-k}.$ \newline The coefficient of $e_{6}$ is\newline $f_{m+7}f_{k+2}-f_{k+7}f_{m+2}+f_{m+1}f_{k+5}-f_{k+1}f_{m+5}+f_{m+3}f_{k+4}-$ $f_{k+3}f_{m+4}=$\newline $=f_{m+5}f_{k}-f_{k+5}f_{m}-f_{m}f_{k+4}+f_{k}f_{m+4}-f_{m}f_{k+1}+$ $ f_{k}f_{m+1}=$\newline $=f_{m}\left( -f_{k+5}-f_{k+4}-f_{k-1}\right) +f_{k}\left( f_{k+5}+f_{k+4}+f_{k-1}\right) =$\newline $=-9\left( f_{m}f_{k+1}-f_{k}f_{m+1}\right) =-9\left( -1\right) ^{k}f_{m-k}.$ \newline The coefficient of $e_{7}$ is\newline $f_{m+1}f_{k+3}-f_{k+1}f_{m+3}+f_{m+2}f_{k+6}-f_{k+2}f_{m+6}+f_{m+4}f_{k+5}-$ $f_{k+4}f_{m+5}=$\newline $=f_{m}\left( -f_{k+2}+f_{k+4}+f_{k+1}\right) +f_{k}\left( f_{m+2}-f_{m+4}-f_{m+1}\right) =$\newline $=3\left( f_{m}f_{k+1}-f_{k}f_{m+1}\right) =3\left( -1\right) ^{k}f_{m-k}.$ \newline We obtain that\newline $F_{k}\times F_{m}=\left( -1\right) ^{k}f_{m-k}\left( -3e_{1}+2e_{2}+7e_{3}+4e_{5}-9e_{6}+3e_{7}\right) .$ \newline Therefore\newline $<F_{k}\times F_{m},F_{n}>=\left( -1\right) ^{k}f_{m-k}\left( -3f_{n+1}+2f_{n+2}+7f_{n+3}+4f_{n+5}-9f_{n+6}+3f_{n+7}\right) =$\newline $=-2f_{n+2}+2f_{n+1}+2f_{n}=0.$
\begin{equation*} \end{equation*}
\textbf{References}
\begin{equation*} \end{equation*} \newline \newline [Akk; ] I Akkus, O Kecilioglu, \textit{Split Fibonacci and Lucas Octonions}, accepted in Adv. Appl. Clifford Algebras.\newline [Br; ] R. Brown, A. Gray (1967), \textit{Vector cross products}, Commentarii Mathematici Helvetici \qquad 42 (1)(1967), 222--236.\newline [Fl, Sa; 15] C. Flaut, D. Savin, \textit{Quaternion Algebras and Generalized Fibonacci-Lucas Quaternions}, accepted in Adv. Appl. Clifford Algebras \newline [Fl, Sh; 13(1)] C. Flaut, V. Shpakivskyi,\textit{\ Real matrix representations for the complex quaternions}, Adv. Appl. Clifford Algebras, 23(3)(2013), 657-671.\newline [Fl, Sh; 13(2)] Cristina Flaut and Vitalii Shpakivskyi, \textit{On Generalized Fibonacci Quaternions and Fibonacci-Narayana Quaternions}, Adv. Appl. Clifford Algebras, 23(3)(2013), 673-688.\newline [Fl, St; 09] C. Flaut, M. \c{S}tef\~{a}nescu, \textit{Some equations over generalized quaternion and octonion division algebras}, Bull. Math. Soc. Sci. Math. Roumanie, 52(100), no. 4 (2009), 427-439.\newline [Gu;] I. A. Guren, S.K. Nurkan, \textit{A new approach to Fibonacci, Lucas numbers and dual vectors}, accepted in Adv. Appl. Clifford Algebras.\newline [Ha; ] S. Halici, On Fibonacci Quaternions, Adv. in Appl. Clifford Algebras, 22(2)(2012), 321-327.\newline [Ha1; ] S Halici, \textit{On dual Fibonaci quaternions}, Selcuk J. Appl Math, accepted.\newline [Ho; 61] A. F. Horadam, \textit{A Generalized Fibonacci Sequence}, Amer. Math. Monthly, 68(1961), 455-459.\newline [Ho; 63] A. F. Horadam, \textit{Complex Fibonacci Numbers and Fibonacci Quaternions}, Amer. Math. Monthly, 70(1963), 289-291.\newline [Ja; 74] Nathan Jacobson (2009). \textit{Basic algebra I}, Freeman 1974 2nd ed., 1974, p. 417--427.\newline [Ke; ] O Kecilioglu, I Akkus, \textit{The Fibonacci Octonions,} accepted in Adv. Appl. Clifford Algebras.\newline [Ro; 96] M. Rost, \textit{On the dimension of a composition algebra}, Doc. Math. J.,1(1996), 209-214.\newline [Nu; ] S.K. Nurkan, I.A. Guren, \textit{Dual Fibonacci quaternions}, accepted in Adv. Appl. Clifford Algebras.\newline [Sa, Fl, Ci; 09] D. Savin, C. Flaut, C. Ciobanu, \textit{Some properties of the symbol algebras, Carpathian Journal of Mathematics}, 25(2)(2009), p. 239-245.\newline [Si; 02] Z. K. Silagadze, \textit{Multi-dimensional vector product}, arxiv \newline [Sc; 66] R. D. Schafer, \textit{An Introduction to Nonassociative Algebras}, Academic Press, New-York, 1966.\newline [Sc; 54] R. D. Schafer, \textit{On the algebras formed by the Cayley-Dickson process}, Amer. J. Math. 76 (1954), 435-446.\newline [Sm; 04] W.D.Smith, \textit{Quaternions, octonions, and now, 16-ons, and 2n-ons; New kinds of numbers,\newline }[Sw; 73] M. N. S. Swamy, \textit{On generalized Fibonacci Quaternions}, The Fibonacci Quaterly, 11(5)(1973), 547-549.\newline [Wo] http://mathworld.wolfram.com/FibonacciNumber.html
\begin{equation*} \end{equation*}
Cristina FLAUT
Faculty of Mathematics and Computer Science,
Ovidius University,
Bd. Mamaia 124, 900527, CONSTANTA,
ROMANIA
http://cristinaflaut.wikispaces.com/
http://www.univ-ovidius.ro/math/
e-mail:
[email protected]
cristina [email protected]
Vitalii SHPAKIVSKYI
Department of Complex Analysis and Potential Theory
Institute of Mathematics of the National Academy of Sciences of Ukraine,
3, Tereshchenkivs'ka st.
01601 Kiev-4
UKRAINE
http://www.imath.kiev.ua/
e-mail: [email protected]
\begin{equation*} \end{equation*}
\end{document} |
\begin{document}
\title[Operator BMO spaces] {Embeddings between operator-valued dyadic BMO spaces} \author{Oscar Blasco} \address{Department of Mathematics, Universitat de Valencia, Burjassot 46100 (Valencia)
Spain} \email{[email protected]} \author{Sandra Pott} \address{Department of Mathematics, University of
Glasgow, Glasgow G12 8QW, UK} \email{[email protected]} \keywords{Operator BMO, Carleson measures, paraproducts} \thanks{{\it 2000 Mathematical Subjects Classifications.}
Primary 42B30, 42B35, Secondary 47B35 \\ The first author gratefully acknowledges support by the LMS and Proyectos MTM 2005-08350 and PR2006-0086. The second author gratefully acknowledges support by EPSRC}
\begin{abstract} We investigate a scale of dyadic operator-valued BMO spaces, corresponding to the different yet equivalent characterizations of dyadic BMO in the scalar case. In the language of operator spaces, we investigate different operator space structures on the scalar dyadic BMO space which arise naturally from the different characterisations of scalar BMO. We also give sharp dimensional growth estimates for the sweep of functions and its bilinear extension in some of those different dyadic BMO spaces. \end{abstract} \maketitle \section{Introduction} Let
$\mathcal{D}$ denote the collection of dyadic subintervals of the unit circle $\mathbb{T}$, and let $(h_I)_{I \in \mathcal{D}}$, where $h_I = \frac{1}{|I|^{1/2}} ( \chi_{I^+} - \chi_{I^-})$, be the Haar basis of $L^2(\mathbb{T})$.
For $I \in \mathcal{D}$ and $\phi\in L^2(\mathbb{T})$, let $\phi_I$ denote the formal Haar coefficient
$\int_I \phi(t) h_I dt$, and $m_I \phi = \frac{1}{|I|} \int_I \phi(t) dt$ denote the average of $\phi$ over $I$. We write $P_I(\phi)=\sum_{J\subseteq I} \phi_Jh_J$.
We say that $\phi\in L^2(\mathbb{T})$ belongs to dyadic BMO, written
$\phi\in {\rm BMO^d}(\mathbb{T})$, if \begin{equation}\label{bmo1}
\sup_{I \in \mathcal{D}}( \frac{1}{|I|} \int_I | \phi(t) - m_I \phi |^2 dt)^{1/2}
< \infty.\end{equation}
Using the identity $P_I(\phi)= (\phi- m_I\phi)\chi_I$, this can also be written as \begin{equation}\label{bmo2}
\sup_{I \in \mathcal{D}} \frac{1}{|I|^{1/2}} \|P_I( \phi) \|_{L^2}
< \infty, \end{equation} or \begin{equation} \label{bmo3}
\sup_{I \in \mathcal{D}} \frac{1}{|I|} \sum_{J \in \mathcal{D}, J \subseteq I}
| \phi_J |^2
< \infty. \end{equation} Due to John-Nirenberg's lemma, we have, for $0< p < \infty$, that $\phi\in {\rm BMO^d}(\mathbb{T})$ if and only if \begin{equation}\label{bmo}
\sup_{I \in \mathcal{D}}( \frac{1}{|I|} \int_I | \phi(t) - m_I \phi |^p dt)^{1/p}= \sup_{I \in \mathcal{D}} \frac{1}{|I|^{1/p}} \|P_I(
\phi)\|_{L^p} < \infty.\end{equation}
It is well-known that the space ${\rm BMO^d}(\mathbb{T})$ has the following equivalent formulation in terms of boundedness of dyadic paraproducts: The map \begin{equation} \label{bmo4}
\pi_\phi: L^2(\mathbb{T}) \rightarrow L^2(\mathbb{T}), \quad f = \sum_{I \in \mathcal{D}}
f_I h_I\mapsto \sum_{I \in \mathcal{D}} \phi_I (m_I f) h_I \end{equation} defines a bounded linear operator on $L^2(\mathbb{T})$, if and only if $\phi\in {\rm BMO^d}(\mathbb{T})$.
For real-valued functions, we can also replace the boundedness of the dyadic paraproduct $\pi_\phi$ by the boundedness of its adjoint operator \begin{equation} \label{adjpara}
\Delta_\phi: L^2(\mathbb{T}) \rightarrow L^2(\mathbb{T}), \quad f = \sum_{I \in \mathcal{D}}
f_I h_I\mapsto
\sum_{I \in \mathcal{D}} \phi_I f_I
\frac{\chi_I}{|I|}. \end{equation} Another equivalent formulation comes from the duality \begin{equation} \label{h1dual} {\rm BMO^d}(\mathbb{T})=(H_d^1(\mathbb{T}))^*, \end{equation}
where the dyadic Hardy space $H_d^1(\mathbb{T})$ consists of those functions $\phi\in L^1(\mathbb{T})$ for which the dyadic square function $\mathcal{S} \phi = (\sum_{I\in
\mathcal{D}}|\phi_I|^2\frac{\chi_I}{|I|})^{1/2}$ is also in $L^1(\mathbb{T})$. Let us recall that
$H_d^1(\mathbb{T})$ can also be described in terms of dyadic atoms. That is, $H_d^1(\mathbb{T})$ consists of functions $\phi= \sum_{k \in \mathbb{N}} \lambda_k a_k,
\lambda_k \in \mathbb{C}$,
$\sum_{k \in \mathbb{N}} | \lambda_k| < \infty $, where the $a_k$
are dyadic atoms, i.e. $\operatorname{supp} (a_k)\subset I_k$ for some $I_k\in \mathcal{D}$, $\int_{I_k} a_k(t)dt=0$, and
$\|a_k\|_\infty\le\frac{1}{|I_k|}.$ The reader is referred to \cite{M} or to \cite{G} for standard results about $H^1_d$ and $\operatorname{{\mathrm{BMO^d}}}$.
Let $$S_\phi= (\mathcal{S}\phi)^2=\sum_{I\in
\mathcal{D}}|\phi_I|^2\frac{\chi_I}{|I|}$$ denote the sweep of the function $\phi$. Using John-Nirenberg's lemma, one easily verifies the well-known fact that \begin{equation} \label{sweep} \phi\in {\rm BMO^d}(\mathbb{T}) \hbox{ if and only if } S_\phi \in {\rm BMO^d}(\mathbb{T}). \end{equation} The reader is referred to \cite{blasco4} for a proof of (\ref{sweep}) independent of John-Nirenberg's lemma.
The aim of this paper is twofold. Firstly, it is to investigate the spaces of operator-valued BMO functions corresponding to characterizations (\ref{bmo1})-(\ref{h1dual}). In the operator-valued case, these characterizations are in general no longer equivalent. In the language of operator spaces, we investigate the different operator space structures on the scalar space $\operatorname{{\mathrm{BMO^d}}}$ which arise naturally from the different yet equivalent characterisations of $\operatorname{{\mathrm{BMO^d}}}$. The reader is referred to \cite{BlascoArg,BP,new,psm} for some recent results on dyadic BMO and Besov spaces connected to the ones in this paper. The second aim is to give sharp dimensional estimate for the operator sweep and its bilinear extension, of which more will be said below, in these operator $\operatorname{{\mathrm{BMO}}}^d$ norms.
We require some further notation for the operator-valued case. Let $\mathcal{H}$ be a separable, finite or infinite-dimensional Hilbert space. Let $\operatorname{{\mathcal{F}_{00}}}$ denote the subspace of $\mathcal{L}(\mathcal{H})$-valued functions on $\mathbb{T}$ with finite formal Haar expansion.
Given $e,f\in \mathcal{H} $ and $B \in L^2(\mathbb{T},\mathcal{L}(\mathcal{H}))$ we denote by $B_e$ the function in $L^2(\mathbb{T},\mathcal{H})$ defined by $B_e(t)= B(t)(e)$ and by $B_{e,f}$ the function in $L^2(\mathbb{T})$ defined by $B_{e,f}(t)= \langle B(t)(e),f\rangle$. As in the scalar case,
let $B_I$ denote the formal Haar coefficients
$\int_I B(t) h_I dt$, and $m_I B = \frac{1}{|I|} \int_I B(t) dt$ denote the average of $B$ over $I$ for any $I \in \mathcal{D}$. Observe that for $B_I$ and $m_IB$ to be well-defined operators, we shall be assuming that the $\mathcal{L}(\mathcal{H})$- valued function $B$ is $weak^*$-integrable. That means, using the duality $\mathcal{L}(\mathcal{H})=(\mathcal{H}\hat\otimes \mathcal{H})^*$, that $\langle B(\cdot)(e),f\rangle\in L^1(\mathbb{T})$ for $e,f\in \mathcal{H} $ and for any measurable set $A$, there exist $B_A\in \mathcal{L}(\mathcal{H})$ such that $\langle B_A(e),f\rangle=\langle\int_A B(t)(e) dt, f\rangle $ for $e,f\in \mathcal{H} $.
We denote by $\operatorname{{\mathrm{BMO^d}}}(\mathbb{T},\mathcal{H})$ the space of Bochner integrable $\mathcal{H}$-valued functions $b: \mathbb{T} \rightarrow \mathcal{H}$ such that \begin{equation}
\|b\|_{\operatorname{{\mathrm{BMO^d}}}}=\sup_{I \in \mathcal{D}} (\frac{1}{|I|} \int_I \| b(t) - m_I b\|^2 dt)^{1/2}<\infty \end{equation} and by $\rm wBMO^d(\mathbb{T},\mathcal{H})$ the space of Pettis integrable $\mathcal{H}$-valued functions $b: \mathbb{T} \rightarrow \mathcal{H}$ such that \begin{equation}
\|b\|_{\rm wBMO^d}=\sup_{I \in \mathcal{D}, e \in \mathcal{H}, \|e\|=1} (\frac{1}{|I|} \int_I |\langle b(t) - m_I b, e \rangle|^2 dt)^{1/2}<\infty. \end{equation}
In the operator-valued case we define the following notions corresponding to the previous formulations: We denote by $\operatorname{{\mathrm{BMO_{norm}^d}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$ the space of Bochner integrable $\mathcal{L}(\mathcal{H})$-valued functions $B$ such that \begin{equation} \label{bmond}
\|B\|_{\operatorname{{\mathrm{BMO_{norm}^d}}}}=\sup_{I \in \mathcal{D}} (\frac{1}{|I|} \int_I \| B(t) - m_I B
\|^2 dt)^{1/2}<\infty, \end{equation} by ${\rm SBMO^d}(\mathbb{T},\mathcal{L}(\mathcal{H}))$ the space of $\mathcal{L}(\mathcal{H})$-valued functions $B$ such that $B_e\in {\rm BMO^d}(\mathbb{T},\mathcal{H})$ for all $e\in\mathcal{H}$ and \begin{equation}\label{sbmo}
\|B\|_{{\rm SBMO^d}}= \sup_{I \in \mathcal{D},e \in \mathcal{H}, \|e \|=1}
(\frac{1}{|I|} \int_I \| (B(t) - m_I B)e \|^2
dt)^{1/2}< \infty, \end{equation} and, finally, by ${\rm WBMO^d}(\mathbb{T},\mathcal{L}(\mathcal{H}))$ the space of $weak^*$-integrable $\mathcal{L}(\mathcal{H})$-valued functions $B$ such that $B_{e,f}\in {\rm BMO^d}$ for all $e,f\in \mathcal{H}$ and \begin{multline}\label{wbmo}
\|B\|_{{\rm WBMO^d}}=\sup_{I \in \mathcal{D}, \|e \|=\|f \|=1}
(\frac{1}{|I|} \int_I | \langle(B(t) - m_I B)e,f\rangle
|^2 dt)^{1/2} <\infty, \end{multline} or, equivalently, such that $$
\|B\|_{{\rm WBMO^d}}= \sup_{e \in \mathcal{H}, \|e \|=1} \|B_e\|_{\rm wBMO^d(\mathbb{T},\mathcal{H})} =\sup_{A \in S_1, \| A \|_1 \le 1 } \| \langle B, A \rangle \|_{\operatorname{{\mathrm{BMO^d}}}(\mathbb{T})} < \infty. $$ Here, $S_1$ denotes the ideal of trace class operators in $\mathcal{L}(\mathcal{H})$, and $\langle B, A \rangle$ stands for the scalar-valued function given by $\langle B, A \rangle (t) = \operatorname{trace}( B(t) A^*)$.
The space $ \operatorname{{\mathrm{BMO^d_{Carl}}}} (\mathbb{T},\mathcal{L}(\mathcal{H}))$ is the space of $weak^*$-integrable operator-valued functions for which \begin{equation} \label{def:bmocd}
\|B\|_{\operatorname{{\mathrm{BMO^d_{Carl}}}}}=\sup_{I \in \mathcal{D}} (\frac{1}{|I|}
\sum_{J \in \mathcal{D}, J \subseteq I} \| B_J \|^2 )^{1/2} < \infty. \end{equation}
We would like to point out that while $B$ belongs to one of the spaces $ \operatorname{{\mathrm{BMO_{norm}^d}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$, ${\rm WBMO^d}(\mathbb{T},\mathcal{L}(\mathcal{H}))$ or $\operatorname{{\mathrm{BMO^d_{Carl}}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$ if and only if $B^*$ does, this is not the case for the space $\mathrm{SBMO}^d(\mathbb{T},\mathcal{L}(\mathcal{H}))$. This leads to the following notion
(see \cite{gptv2, petermichl, pxu}): We say that
$B\in\operatorname{{\mathrm{BMO_{so}^d}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$, \label{bmosd} if
$B$ and $B^*$ belong to $ {\rm SBMO^d}(\mathbb{T},\mathcal{L}(\mathcal{H}))$. We define \begin{equation} \label{def:bmoso}\|B\|_{\operatorname{{\mathrm{BMO_{so}^d}}}}=
\|B\|_{{\rm SBMO^d}}+\|B^*\|_{{\rm SBMO^d}}.\end{equation}
We now define another operator-valued BMO space, using the notion of Haar multipliers.
A sequence $(\Phi_I)_{I \in \mathcal{D}}$, $\Phi_I\in L^2(I,\mathcal{L}(\mathcal{H}))$ for all $I\in \mathcal{D}$, is said to be an \emph{operator-valued Haar multiplier} (see \cite{per, BP}), if there exists $C>0$ such that
$$\|\sum_{I\in \mathcal{D}}\Phi_I(f_I)h_I\|_{L^2(\mathbb{T},\mathcal{H})}\le C (\sum_{I\in
\mathcal{D}}\|f_I\|^2)^{1/2} \text{ for all } (f_I)_{I \in \mathcal{D}} \in l^2(\mathcal{D},\mathcal{H}).$$
We write $\|(\Phi_I)\|_{mult}$ for the norm of the corresponding operator on $L^2(\mathbb{T},\mathcal{H})$.
Letting again as in the scalar-valued case $P_I B =\sum_{J\subseteq I} h_JB_J$, we denote the space of those $weak^*$-integrable $\mathcal{L}(\mathcal{H})$-valued functions for which $(P_IB)_{I\in\mathcal{D}}$ defines a bounded operator-valued Haar multiplier on $L^2(\mathbb{T}, \mathcal{H})$ by
$ \operatorname{{\mathrm{BMO_{mult}}}} (\mathbb{T},\mathcal{L}(\mathcal{H}))$ and write
\begin{equation}\label{bmol}\|B\|_{\operatorname{{\mathrm{BMO_{mult}}}}}=
\|(P_IB)_{I\in\mathcal{D}}\|_{mult}. \end{equation} We shall use the notation $\Lambda_B(f)=\sum_{I \in \mathcal{D}} (P_I B) (f_I) h_I$.
Let us mention that there is a further BMO space, defined in terms of paraproducts, which is very much connected with $\operatorname{{\mathrm{BMO_{mult}}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$ and was studied in detail in \cite{new}. Operator-valued paraproducts are of particular interest, because they can be seen as dyadic versions of vector Hankel operators or of vector Carleson embeddings, which are important in the real and complex analysis of matrix valued functions and its applications in the theory of infinite-dimensional linear systems (see e.g.~\cite{jp}, \cite{jpp1}).
Let $B \in \operatorname{{\mathcal{F}_{00}}}$. We define the dyadic operator-valued paraproduct with symbol $B$, $$
\pi_B: L^2(\mathbb{T}, \mathcal{H}) \rightarrow L^2(\mathbb{T}, \mathcal{H}), \quad f = \sum_{I \in \mathcal{D}}
f_I h_I\mapsto
\sum_{I \in \mathcal{D}} B_I (m_I f) h_I, $$ and $$
\Delta_B: L^2(\mathbb{T}, \mathcal{H}) \rightarrow L^2(\mathbb{T}, \mathcal{H}), \quad f = \sum_{I \in \mathcal{D}}
f_I h_I\mapsto
\sum_{I \in \mathcal{D}} B_I (f_I)
\frac{\chi_I}{|I|}. $$ It is easily seen that $(\pi_B)^* = \Delta_{B^*}$.
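For the reader's convenience, here is the short computation behind the last identity (a sketch, using the orthonormality of the Haar system and the fact that $(B^*)_I=(B_I)^*$): for $f,g\in L^2(\mathbb{T},\mathcal{H})$,
$$
\langle \pi_B f, g\rangle
=\sum_{I\in\mathcal{D}} \langle B_I(m_I f), g_I\rangle
=\sum_{I\in\mathcal{D}} \int_{\mathbb{T}} \Big\langle f(t),\, \frac{\chi_I(t)}{|I|}\, B_I^*(g_I)\Big\rangle\, dt
=\langle f, \Delta_{B^*} g\rangle .
$$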
We denote by $ \operatorname{{\mathrm{BMO_{para}}}} (\mathbb{T},\mathcal{L}(\mathcal{H}))$ the space of $weak^*$-integrable operator-valued functions for which
$\|\pi_{B}\| < \infty$
and write \begin{equation}
\label{bmop}\|B\|_{\operatorname{{\mathrm{BMO_{para}}}}}= \|\pi_B\|. \end{equation}
We refer the reader to \cite{BlascoArg, new} and \cite{mei1, mei}
for results on this space. It is elementary to see that \begin{equation} \label{mult}
\Lambda_B( f )=
\sum_{I \in \mathcal{D}} B_I (m_I f) h_I
+ \sum_{I \in \mathcal{D}} B_I (f_I) \frac{\chi_I}{|I|} = \pi_B f + \Delta_B f. \end{equation} Hence $\Lambda_B=\pi_B+\Delta_B$ and $(\Lambda_B)^*=\Lambda_{B^*}$. This shows that
$\|B\|_{\operatorname{{\mathrm{BMO_{mult}}}}}=\|B^*\|_{\operatorname{{\mathrm{BMO_{mult}}}}}$.
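For completeness, here is a sketch of the computation behind (\ref{mult}): for $f=\sum_{I\in\mathcal{D}} f_I h_I$ and $t\in\mathbb{T}$,
\begin{multline*}
\Lambda_B f(t)=\sum_{I\in\mathcal{D}}\sum_{J\subseteq I} h_J(t)\,h_I(t)\,B_J(f_I)
=\sum_{I\in\mathcal{D}} \frac{\chi_I(t)}{|I|}\, B_I(f_I)
+\sum_{J\in\mathcal{D}} h_J(t)\,B_J\Big(\sum_{I\supsetneq J} h_I(t)\, f_I\Big)\\
=\Delta_B f(t)+\sum_{J\in\mathcal{D}} h_J(t)\,B_J(m_J f)=\Delta_B f(t)+\pi_B f(t),
\end{multline*}
where we used that for $I\supsetneq J$ the function $h_I$ is constant on $J$, so that $\sum_{I\supsetneq J} h_I(t) f_I=m_J f$ whenever $h_J(t)\neq 0$.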
Let us finally denote by ${\rm BMO_{spara}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$ the space of symbols $B$ such that $\pi_{B}$ and $\pi_{B^*}$ are bounded operators, and define \begin{equation}
\label{bmosp}\|B\|_{\rm BMO_{spara}}= \|\pi_B\|+\|\pi_{B^*}\|. \end{equation} Since $\Delta_B=\pi^*_{B^*}$, one concludes that ${\rm BMO_{spara}}(\mathbb{T},\mathcal{L}(\mathcal{H}))\subseteq \operatorname{{\mathrm{BMO_{mult}}}}
(\mathbb{T},\mathcal{L}(\mathcal{H}))$.
We write $\approx$ for equivalence of norms up to a constant (independent of the dimension of the Hilbert space $\mathcal{H}$, if this appears), and similarly $\lesssim, \gtrsim$ for the corresponding one-sided estimates up to a constant.
Recall that for a given Banach space $(X, \| \cdot \|)$, a family of norms $( M_n(X), \| \cdot\|_n)$ on the spaces $M_n(X)$ of
$X$-valued $n \times n$ matrices defines an \emph{operator space structure} on $X$, if $\|\cdot\|_1 \approx \|\cdot\|$, \begin{enumerate}
\item[(M1)] $\| A \oplus B \|_{n +m} = \max \{ \|A\|_n, \|B\|_m \}$ for $A \in M_n(X)$,
$B \in M_m(X)$
\item[(M2)] $ \|\alpha A \beta \|_{m} \le \|\alpha \|_{M_{n,m}(\mathbb{C})}
\| A \|_n \|\beta\|_{M_{m,n}(\mathbb{C})} $ for all $A \in M_n(X)$
and all scalar matrices $\alpha \in M_{n,m}(\mathbb{C})$, $\beta \in M_{m,n}(\mathbb{C})$. \end{enumerate} (see e.~g.~\cite{effros}). One verifies easily that all the $\operatorname{{\mathrm{BMO^d}}}$-norms on $\mathcal{L}(\mathcal{H})$-valued functions defined above, except $\operatorname{{\mathrm{BMO_{norm}^d}}}$ and $\operatorname{{\mathrm{BMO^d_{Carl}}}}$, define operator space structures on $\operatorname{{\mathrm{BMO^d}}}(\mathbb{T})$ when taken for $n$-dimensional $\mathcal{H}$, $n \in \mathbb{N}$.
The aim of the paper is to show the following strict inclusions for infinite-dimensional $\mathcal{H}$: \begin{multline} \label{eq:inclchain}
\operatorname{{\mathrm{BMO_{norm}^d}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))
\subsetneq \operatorname{{\mathrm{BMO_{mult}}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))\subsetneq\\ \subsetneq {\operatorname{{\mathrm{BMO_{so}^d}}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))
\subsetneq {\rm WBMO^d}(\mathbb{T},\mathcal{L}(\mathcal{H})) \end{multline} and \begin{equation} \label{carles} \operatorname{{\mathrm{BMO^d_{Carl}}}}(\mathbb{T},\mathcal{L}(\mathcal{H})) \subsetneq {\rm BMO_{spara}}(\mathbb{T},\mathcal{L}(\mathcal{H}))\subsetneq \operatorname{{\mathrm{BMO_{mult}}}}(\mathbb{T},\mathcal{L}(\mathcal{H})).\end{equation}
This means that the corresponding inclusions of operator spaces over $\operatorname{{\mathrm{BMO^d}}}(\mathbb{T})$, where they apply, are completely bounded, but not completely isomorphic (for the notation, see again e.~g.~\cite{effros}). We will also consider the preduals for some of the spaces shown. Finally, we will give sharp estimates for the dimensional growth of the sweep and its bilinear extension on $\operatorname{{\mathrm{BMO_{para}}}}$, $\operatorname{{\mathrm{BMO_{mult}}}}$ and $\operatorname{{\mathrm{BMO_{norm}^d}}}$, completing results in \cite{new} and \cite{mei}.
The paper is organized as follows. In Section 2, we prove the chains of strict inclusions (\ref{eq:inclchain}) and (\ref{carles}). Actually the only nontrivial inclusion to be shown is $\operatorname{{\mathrm{BMO_{norm}^d}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))
\subset \operatorname{{\mathrm{BMO_{mult}}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$. For this purpose, we introduce a new Hardy space $H^1_\Lambda$ adapted to the problem, and then the result
can be shown from an estimate on the dual side. The remaining inclusions are immediate consequences of the definition, and only the counterexamples showing that none of the spaces are equal need to be found.
The reader is referred to \cite{mei1} for more on the theory of operator-valued Hardy spaces.
Section 3 deals with dimensional growth properties of the \emph{operator sweep} and its bilinear extension. We define the operator sweep for $B \in \operatorname{{\mathcal{F}_{00}}}$, $$
S_B = \sum_{I \in \mathcal{D}} \frac{\chi_I}{|I|} B_I^* B_I, $$ and its bilinear extension $$
\Delta[U^*,V]= \sum_{I \in \mathcal{D}} \frac{\chi_I}{|I|} U_I^* V_I \qquad (U,V \in \operatorname{{\mathcal{F}_{00}}}). $$
These maps are of interest for several reasons. They are closely connected with the paraproduct and certain bilinear paraproducts, they provide a tool to understand the dimensional growth in the John-Nirenberg lemma, and they are useful to understand products of paraproducts and products of certain other operators (see \cite{new}, \cite{psm}).
Considering (\ref{sweep}) in the operator valued case, it was shown in \cite{new} that
\begin{equation}\label{normsweep} \|S_B\|_{\rm BMO^d_{mult}}+\|B\|^2_{\rm SBMO^d}\approx \|B\|^2_{\rm BMO^d_{para}}. \end{equation} Here, we prove the bilinear analogue
\begin{equation}\label{normbisweep} \|\Delta[U^*,V]\|_{\rm BMO^d_{mult}}+\sup_{I \in \mathcal{D}} \frac{1}{|I|} \|\sum_{J \subset I} U_J^* V_J\|
\approx \|\pi_U^* \pi_V\|. \end{equation} It was also shown in \cite{new} that \begin{equation}\label{estsweep}
\|S_B\|_{\rm SBMO^d}\le C \log(n+1) \|B\|^2_{\rm SBMO^d} \end{equation} for $\dim(\mathcal{H})=n$, where $C$ is a constant independent of $n$, and that this estimate is sharp.
We extend this by proving sharp estimates of $\|S_B\|$ and $\|\Delta[U^*,V]\|$ in terms of $\|B\|, \|U\|, \|V\|$ with respect to the norms in ${\rm SBMO}^d$, $\operatorname{{\mathrm{BMO_{para}}}}$, $\operatorname{{\mathrm{BMO_{mult}}}}$ and $\operatorname{{\mathrm{BMO_{norm}^d}}}$.
\section{Strict inclusions}
Let us start by stating the following characterizations of $\mathrm{SBMO^d}$ to be used later on. Some of the equivalences can be found in \cite{gptv2}; we give the proof for the convenience of the reader.
\begin{prop}\label{carbmoso} Let $B\in{\rm SBMO^d}(\mathbb{T},\mathcal{L}(\mathcal{H}))$. Then
\begin{eqnarray*}\|B\|^2_{{\rm SBMO^d}} &=& \sup_{e \in \mathcal{H},
\|e
\|=1}
\|B_e\|^2_{\operatorname{{\mathrm{BMO^d}}}(\mathbb{T},\mathcal{H}) }\\
&=&\displaystyle\sup_{I\in \mathcal{D},\|e\|=1}\frac{1}{|I|}\|P_I(
B_e)\|^2_{L^2(\mathcal{H})} \\
&=&\displaystyle\sup_{I\in \mathcal{D}}\frac{1}{|I|}\|\sum_{J\subseteq I}
B_J^* B_J\| \\ &=&\displaystyle
\sup_{I \in \mathcal{D}}\left\|
\frac{1}{|I|} \int_I (B(t) - m_I B)^* (B(t) - m_I B) dt
\right\|\\
&=&\sup_{I\in \mathcal{D}} \|m_I(B^*B)-m_I(B^*)m_I(B)\|. \end{eqnarray*} \end{prop} \proof The first two equalities are obvious from the definition. Now observe that
$$\|\sum_{J\subseteq I} B_J^*
B_J\|=\sup_{\|e\|=1,
\|f\|=1}\sum_{J\subseteq I} \langle B_J(e), B_J(f)\rangle=\sup_{\|e\|=1}\sum_{J\subseteq I} \| B_J(e)
\|^2=\|P_I(B_e)\|^2_{L^2(\mathcal{H})}.$$
The other equalities follow from \begin{eqnarray*}
\|m_I(B^*B)-m_I(B^*)m_I(B)\| &=& \left\|\frac{1}{|I|} \int_I
(B(t)-m_I B)^* (B(t) - m_IB) dt \right\| \\
&=& \sup_{e \in \mathcal{H}, \|e\|=1 }\frac{1}{|I|} \int_I \langle (B(t)-m_I B)^* (B(t) - m_IB)e,e \rangle dt \\ &=&
\sup_{e \in \mathcal{H}, \|e\|=1 }\frac{1}{|I|} \int_I \| (B(t) - m_I B) e\|^2 dt. \end{eqnarray*} \qed
\begin{lemm} Let $B=\sum_{k=1}^N B_k r_k$ where $ r_k=\sum_{|I|=2^{-k}}|I|^{1/2} h_I$ denote the Rademacher functions. Then \begin{equation}\label{n1}
\|B\|_{\operatorname{{\mathrm{SBMO^d}}}}=
\sup_{\|e\|=1}(\sum_{k=1}^N \|B_k e\|^2)^{1/2} \end{equation} \begin{equation}\label{n2}
\|B\|_{\operatorname{{\mathrm{BMO_{so}}}}}=
\sup_{\|e\|=1}(\sum_{k=1}^N \|B_k e\|^2)^{1/2}+\sup_{\|e\|=1}(\sum_{k=1}^N\|B^*_k e\|^2)^{1/2} \end{equation} \begin{equation}\label{n3}
\|B\|_{\mathrm{WBMO}^d}= \sup_{\|f\|=\|e\|=1}
(\sum_{k=1}^N |\langle B_k e,f\rangle|^2 )^{1/2}. \end{equation} \end{lemm} \proof This follows from standard Littlewood-Paley theory. \qed
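To indicate the computation behind (\ref{n1}): since $B_J=|J|^{1/2}B_k$ for $|J|=2^{-k}$, $1\le k\le N$, and $B_J=0$ otherwise, Proposition \ref{carbmoso} gives, for $|I|=2^{-m}$ with $m\ge 1$,
$$
\frac{1}{|I|}\sum_{J\subseteq I}\|B_J e\|^2
=2^{m}\sum_{k= m}^{N}\;2^{k-m}\,2^{-k}\,\|B_k e\|^2
=\sum_{k=m}^{N}\|B_k e\|^2 ,
$$
and taking the supremum over $I$ (attained for $|I|=2^{-1}$) and over unit vectors $e$ yields (\ref{n1}); (\ref{n2}) and (\ref{n3}) follow in the same way.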
For $x,y\in \mathcal{H}$ we denote by $x\otimes y$ the rank 1 operator in $\mathcal{L}(\mathcal{H})$ given by $(x\otimes y)(h)=\langle h,y\rangle x$. Clearly $(x\otimes y)^*=(y\otimes x)$.
\begin{prop} \label{firstinc} Let $\dim \mathcal{H} = \infty$. Then $$\operatorname{{\mathrm{BMO_{mult}}}} \subsetneq\operatorname{{\mathrm{BMO_{so}^d}}}(\mathbb{T}, \mathcal{L}(\mathcal{H})) \subsetneq \operatorname{{\mathrm{SBMO^d}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))\subsetneq {\rm WBMO^d}(\mathbb{T},\mathcal{L}(\mathcal{H})).$$ \end{prop} \proof Note that if $(\Phi_I)_{I\in\mathcal{D}}$ is a Haar multiplier then \begin{equation}\label{debilmult}
\sup_{I\in \mathcal{D}, \|e\|=1} |I|^{-1/2}
\|\Phi_I(e)\|_{L^2(\mathbb{T},\mathcal{H})}\le \|(\Phi_I)\|_{mult}.
\end{equation}
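Indeed, applying the defining inequality of a Haar multiplier to the sequence $(f_J)_{J\in\mathcal{D}}$ given by $f_I=e$ and $f_J=0$ for $J\neq I$ yields
$$
|I|^{-1/2}\,\|\Phi_I(e)\|_{L^2(\mathbb{T},\mathcal{H})}
=\Big\|\sum_{J\in \mathcal{D}}\Phi_J(f_J)h_J\Big\|_{L^2(\mathbb{T},\mathcal{H})}
\le \|(\Phi_I)\|_{mult}\,\|e\| ,
$$
which is (\ref{debilmult}).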
The first inclusion thus follows from (\ref{debilmult}) and Proposition \ref{carbmoso}. The other inclusions are immediate. Let us see that they are strict. It was shown in \cite{gptv2} that $\operatorname{{\mathrm{BMO_{mult}}}}(\mathbb{T},\mathcal{L}(\mathcal{H})) \neq \operatorname{{\mathrm{BMO_{so}^d}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$.
Let $(e_k)$ be an orthonormal basis of $\mathcal{H}$ and
$h\in \mathcal{H}$ with $\|h\|=1$. Then by (\ref{n1}), $B=\sum_{k=1}^\infty h\otimes e_k \ r_k\in \operatorname{{\mathrm{SBMO^d}}} $ and $B^*=\sum_{k=1}^\infty e_k\otimes h \ r_k\notin\operatorname{{\mathrm{SBMO^d}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$. Thus $B\in\operatorname{{\mathrm{SBMO^d}}}(\mathbb{T}, \mathcal{L}(\mathcal{H})) \setminus \operatorname{{\mathrm{BMO_{so}^d}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$. Similarly, by (\ref{n1}) and (\ref{n3}), $B^*\in
{\rm WBMO^d}(\mathbb{T},\mathcal{L}(\mathcal{H}))\setminus \operatorname{{\mathrm{SBMO^d}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$. \qed
Note that \begin{equation}\label{form} \Lambda_B f= B f -\sum_{I\in \mathcal{D}}(m_IB)(f_I) h_I, \end{equation} which allows us to conclude immediately that
$L^\infty(\mathbb{T},\mathcal{L}(\mathcal{H}))\subseteq \operatorname{{\mathrm{BMO_{mult}}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$.
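A sketch of the latter: if $B\in L^\infty(\mathbb{T},\mathcal{L}(\mathcal{H}))$, then $\|m_I B\|\le\|B\|_\infty$ for every $I$, so (\ref{form}) and the orthonormality of the Haar system give
$$
\|\Lambda_B f\|_{L^2(\mathbb{T},\mathcal{H})}
\le \|Bf\|_{L^2(\mathbb{T},\mathcal{H})}+\Big(\sum_{I\in\mathcal{D}}\|(m_I B)(f_I)\|^2\Big)^{1/2}
\le 2\,\|B\|_\infty\,\|f\|_{L^2(\mathbb{T},\mathcal{H})} .
$$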
Our next objective is to see that $\operatorname{{\mathrm{BMO_{norm}^d}}}(\mathbb{T},\mathcal{L}(\mathcal{H})) \subsetneq \operatorname{{\mathrm{BMO_{mult}}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$. For that,
we need again some more notation.
Let $S_1$ denote the ideal of trace class operators on $\mathcal{H}$ and recall that $S_1=\mathcal{H}\hat\otimes\mathcal{H}$ and $(S_1)^*=\mathcal{L}(\mathcal{H})$ with the pairing $\langle U,(e \otimes d)\rangle= \langle U(e), d\rangle.$
It is easy to see that the space $\operatorname{{\mathrm{BMO_{mult}}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$ can be embedded isometrically into the dual of a certain $H^1$ space of $S_1$ valued functions: \begin{defi} Let $f,g\in L^2(\mathbb{T},\mathcal{H})$. Define$$
f \circledast g = \sum_{I \in \mathcal{D}} h_I (f_I \otimes m_I g + m_I f \otimes g_I). $$ Let ${H^1_\Lambda(\T,S_1)}$ be the space of functions $ f=\sum_{k=1}^\infty \lambda_k f_k \circledast g_k$ such that $f_k,g_k\in L^2(\mathbb{T},\mathcal{H})$,
$\|f_k\|_{2}=\|g_k\|_2=1$ for all $k \in \mathbb{N}$, and $\sum_{k=1}^\infty
|\lambda_k| <\infty.$
We endow the space with the norm given by the infimum of $\sum_{k=1}^\infty
|\lambda_k|$ for all possible decompositions.
\end{defi}
With this notation, $B \in \operatorname{{\mathrm{BMO_{mult}}}}$ acts on $f \circledast g$ by $$ \langle B, f \circledast g \rangle = \int_\mathbb{T} \langle B(t), (f \circledast g)(t) \rangle dt = \langle \Lambda_B f,g \rangle. $$
By definition of ${H^1_\Lambda(\T,S_1)}$, $\| B\|_{({H^1_\Lambda(\T,S_1)})^*} = \|\Lambda_B\|$.
We will now define a further $H^1$ space of $S_1$-valued functions. For $F \in L^1(\mathbb{T}, S_1)$, define the dyadic Hardy-Littlewood maximal function $F^*$ of $F$ in the usual way, $$
F^*(t) = \sup_{I \in \mathcal{D}, t \in I} \frac{1}{|I|} \int_I \| F(s)\|_{S_1} ds. $$ Then let ${H^1_{\mathrm{max, d}}(\T,S_1)}$ be given by functions $ F \in L^1(\mathbb{T},S_1)$ such that $ F^* \in L^1(\mathbb{T}) $. By a result of Bourgain (\cite{bourgain}, Th.12), $\operatorname{{\mathrm{BMO_{norm}^d}}}$ embeds continuously into $({H^1_{\mathrm{max, d}}(\T,S_1)})^*$ (see also \cite{blasco1,blasco3}).
\begin{lemm} \label{dual}
${H^1_\Lambda(\T,S_1)} \subseteq {H^1_{\mathrm{max, d}}(\T,S_1)}.$ \end{lemm} \proof It is sufficient to show that there is a constant $C >0$ such that for all $f,g \in L^2(\mathbb{T},\mathcal{H})$, $f \circledast g \in {H^1_{\mathrm{max, d}}(\T,S_1)}$, and
$\| f \circledast g \|_{{H^1_{\mathrm{max, d}}(\T,S_1)}} \le C \|f\|_2 \|g\|_2$. One verifies that $$
f \circledast g = \sum_{I \in \mathcal{D}} h_I (f_I \otimes m_I g + m_I f
\otimes g_I)= f \otimes g - \sum_{I \in \mathcal{D}}\frac{\chi_I}{|I|} f_I \otimes g_I. $$ Towards the estimate of the maximal function, let $E_k$ denote the expectation with respect to the $\sigma$-algebra generated by dyadic intervals of length $2^{-k}$, $$
E_k F = \sum_{I \in \mathcal{D}, |I| > 2^{-k}} h_I F_I, $$ for each $k \in \mathbb{N}$. Then we have \begin{equation}
E_k( f \circledast g) = (E_k f) \circledast (E_k g), \end{equation} as $$
\sum_{I \in \mathcal{D}, |I| > 2^{-k}} h_I (f_I \otimes m_I g + m_I f \otimes g_I)
= \sum_{I \in \mathcal{D}} h_I ((E_k f)_I \otimes m_I (E_k g) +
m_I (E_k f) \otimes (E_k g)_I). $$ Thus \begin{multline*}
(f \circledast g)^*(t) = \sup_{k \in \mathbb{N}} \|E_k(f \circledast g)(t)\|_{S_1}
\le \sup_{k \in \mathbb{N}} \|(E_k f)(t)\| \|(E_k g)(t)\| +\sum_{I \in \mathcal{D}}
\frac{\chi_I(t)}{|I|} \|f_I\| \|g_I\|\\
\le \|f^*(t)\| \|g^*(t)\| +\sum_{I \in \mathcal{D}} \frac{\chi_I(t)}{|I|}
\|f_I\| \|g_I\|, \end{multline*} and $$
\|(f \circledast g)^*\|_1 \le \|f^*\|_2 \|g^*\|_2 + \|f\|_2 \|g\|_2 \le C
\|f\|_2 \|g\|_2 $$ by the Cauchy-Schwarz inequality and boundedness of the dyadic Hardy-Littlewood maximal function on $L^2(\mathbb{T}, \mathcal{H})$. \qed
\noindent In particular, ${H^1_\Lambda(\T,S_1)} \subseteq L^1(\mathbb{T},S_1)$.
We can now prove our inclusion result: \begin{satz}\label{maininclu}
$\operatorname{{\mathrm{BMO_{norm}^d}}}(\mathbb{T},\mathcal{L}(\mathcal{H})) \subsetneq \operatorname{{\mathrm{BMO_{mult}}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$. \end{satz}
\proof The inclusion follows by Lemma \ref{dual}, duality and Bourgain's result.
To see that the spaces do not coincide, use the fact that $\operatorname{{\mathrm{BMO^d}}}(\ell_\infty)\subsetneq \ell_\infty(\operatorname{{\mathrm{BMO^d}}})$ to find for each $N \in \mathbb{N}$ functions $b_k \in \operatorname{{\mathrm{BMO}}}$, $k=1,...,N$, such that
$\sup_{1\le k\le N}\|b_k\|_{\operatorname{{\mathrm{BMO^d}}}}\le 1$, but
$\|(b_k)_{k=1,\dots,N}\|_{\operatorname{{\mathrm{BMO}}}^d(\mathbb{T},l^\infty_N)}\ge c_N$, $c_N {\rightarrow} \infty$ as $N \to \infty$.
Let $(e_k)_{k \in \mathbb{N}}$ be an orthonormal basis of $\mathcal{H}$, and consider the operator-valued function $B(t)=\sum_{k=1}^{N} b_k(t)e_k\otimes e_k\in L^2(\mathbb{T},\mathcal{L}(\ell_2))$. Clearly $B_I= \sum_{k=1}^N (b_k)_I e_k \otimes e_k$, and for each $\mathbb{C}^N$-valued function $f= \sum_{k=1}^N f_k e_k$, $f_1, \dots, f_N \in L^2(\mathbb{T})$, we have $$\Lambda_B(f)=\sum_{k=1}^N \Lambda_{b_k}(f_k)e_k . $$
Choosing the $f_k$ such that $\|f\|_2^2=\sum_{k=1}^N
\|f_k\|^2_{L^2(\mathbb{T})}=1$, we find that $$
\|\Lambda_B(f)\|^2_{L^2(\mathbb{T},\ell_2)}=\sum_{k=1}^N
\|\Lambda_{b_k}(f_k)\|^2_{L^2(\mathbb{T})}\le C \sum_{k=1}^N
\|{b_k}\|^2_{\operatorname{{\mathrm{BMO^d}}}}\|f_k\|^2_{L^2(\mathbb{T})}\le C, $$ where $C$ is a constant independent of $N$. Therefore, $\Lambda_B$ is bounded.
But since
$\|B\|_{\operatorname{{\mathrm{BMO_{norm}^d}}}}=\|(b_k)_{k=1,\dots,N}\|_{{\operatorname{{\mathrm{BMO^d}}}}(\mathbb{T},l^\infty_N)}\ge c_N$, it follows that $\operatorname{{\mathrm{BMO_{mult}}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$ is not continuously embedded in $\operatorname{{\mathrm{BMO_{norm}^d}}}(\mathbb{T}, \mathcal{L}(\mathcal{H}))$. By the open mapping theorem, the two spaces therefore cannot be equal. \qed
The next proposition shows that the space $\operatorname{{\mathrm{BMO^d_{Carl}}}}$ belongs to a different scale than $\operatorname{{\mathrm{BMO_{norm}^d}}}$ and $\operatorname{{\mathrm{BMO_{mult}}}}$. \begin{prop} $L^\infty(\mathbb{T},\mathcal{L}(\mathcal{H})) \nsubseteq \operatorname{{\mathrm{BMO^d_{Carl}}}}(\mathbb{T},\mathcal{L}(\mathcal{H})).$ \label{subs:inf-carl} \end{prop} \proof This follows from the result $L^\infty(\mathbb{T}, \mathcal{L}(\mathcal{H}))\nsubseteq \operatorname{{\mathrm{BMO_{para}}}} $ in \cite{mei} (see Lemma \ref{mei} below) and the next proposition. We give a simple direct argument. Choose an orthonormal basis of $\mathcal{H}$ indexed by the elements of $\mathcal{D}$, say $(e_I)_{I \in \mathcal{D}}$, and let $\Phi_I = e_I \otimes e_I$, $\Phi_I h = \langle h, e_I \rangle e_I$. Let $\lambda_I =
|I|^{1/2}$ for $I \in \mathcal{D}$, and define $B = \sum_{I \in \mathcal{D}} h_I
\lambda_I \Phi_I$. Then $\sum_{I \in \mathcal{D}} \|B_I\|^2 = \sum_{I \in
\mathcal{D}} |I| = \infty$, so in particular $B \notin \operatorname{{\mathrm{BMO^d_{Carl}}}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$. But the operator function $B$ is diagonal with uniformly bounded diagonal entry functions $\phi_I(t)
=\langle B(t) e_I, e_I \rangle = |I|^{1/2} h_I(t)$, so $B \in L^\infty(\mathcal{L}(\mathcal{H}))$.\qed
\begin{prop} \label{subs:carl-para} $$\operatorname{{\mathrm{BMO^d_{Carl}}}}(\mathbb{T},\mathcal{L}(\mathcal{H})) \subsetneq {\rm BMO_{spara}}(\mathbb{T},\mathcal{L}(\mathcal{H}))\subsetneq {\rm BMO_{mult}}(\mathbb{T},\mathcal{L}(\mathcal{H})).$$
\end{prop} \proof The inclusion $\operatorname{{\mathrm{BMO^d_{Carl}}}} \subseteq {\rm BMO_{spara}}$ is easy,
since (\ref{def:bmocd}) implies that for $B \in \operatorname{{\mathrm{BMO^d_{Carl}}}}$, the $\operatorname{{\mathrm{BMO^d_{Carl}}}}$ norm equals the norm of the scalar $\operatorname{{\mathrm{BMO^d}}}$ function given by $|B|:=\sum_{I \in \mathcal{D}} h_I \|B_I\|$. For $f \in L^2(\mathcal{H})$, let $|f|$ denote the function given by $|f|(t) = \|f(t)\|$. Thus $$
\|\pi_B f \|_2^2 = \sum_{I \in \mathcal{D}} \|B_I m_I f\|^2 \le
\sum_{I \in \mathcal{D}} (\|B_I\| m_I |f|)^2 = \| \pi_{|B|}\, |f|\, \|_2^2. $$ The boundedness of $\pi_{B^*}$ follows analogously.
To show that $\operatorname{{\mathrm{BMO^d_{Carl}}}} \neq {\rm BMO_{spara}}$, we can use the diagonal operator function $B$ constructed in Proposition \ref{subs:inf-carl}. There, it is shown that $B \notin \operatorname{{\mathrm{BMO^d_{Carl}}}}$, and that the diagonal entry functions $\phi_I = \langle Be_I, e_I \rangle$ are uniformly bounded. Since the paraproduct of each scalar-valued $L^\infty$ function is bounded, we see that $\pi_B = \bigoplus_{I \in \mathcal{D}} \pi_{\phi_I}$ is bounded. Similarly, $\pi_{B^*}$ is bounded. Thus $B \in {\rm BMO_{spara}}$. It is clear from (\ref{mult}) that ${\rm BMO_{spara}}(\mathbb{T},\mathcal{L}(\mathcal{H}))\subseteq {\rm BMO_{mult}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$.
Using that $L^\infty(\mathbb{T},\mathcal{L}(\mathcal{H}))\nsubseteq {\rm BMO_{spara}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$ (see \cite{mei}), one concludes that ${\rm BMO_{spara}}(\mathbb{T},\mathcal{L}(\mathcal{H}))\neq {\rm BMO_{mult}}(\mathbb{T},\mathcal{L}(\mathcal{H}))$. \qed
\section{Sharp dimensional growth of the sweep} We begin with the following lower estimate of the $\operatorname{{\mathrm{BMO_{para}}}}$ norm in terms of the $L^\infty$ norm of certain $\mathrm{Mat}(\C, n \times n)$-valued functions from \cite{mei}. \begin{lemm}\label{mei} (see \cite{mei}, Thm 1.1.) There exists an absolute constant $c >0$ such that for each $n\in \mathbb{N}$, there exists a measurable
function $F:\mathbb{T} \rightarrow \mathrm{Mat}(\C, n \times n)$ with $\|F\|_\infty \le 1$ and
$\|\pi_F\| \ge c \log(n+1)$. \end{lemm}
Here are our dimensional estimates of the sweep. \begin{satz} There exists an absolute constant $C >0$ such that for each $n \in \mathbb{N}$ and each measurable function $B: \mathbb{T} \rightarrow \mathrm{Mat}(\C, n \times n)$, \begin{equation} \label{eq:sharppara}
\| S_B \|_{\operatorname{{\mathrm{BMO_{para}}}}} \le C \log(n+1) \| B\|^2_{\operatorname{{\mathrm{BMO_{para}}}}}, \end{equation} \begin{equation} \label{eq:sharpmult}
\| S_B \|_{\operatorname{{\mathrm{BMO_{mult}}}}} \le C (\log(n+1))^2 \| B\|^2_{\operatorname{{\mathrm{BMO_{mult}}}}}, \end{equation} \begin{equation} \label{eq:sharpnorm}
\| S_B \|_{\operatorname{{\mathrm{BMO_{norm}^d}}}} \le C (\log(n+1))^2 \| B\|^2_{\operatorname{{\mathrm{BMO_{norm}^d}}}}, \end{equation} and the dimensional estimates are sharp. \end{satz} \proof Let $B: \mathbb{T} \rightarrow \mathrm{Mat}(\C, n \times n)$ be measurable. Since
$\|S_B\|_* = \lim_{k \to \infty} \|S_{E_k B} \|_*$ in all of the above BMO norms (because we are in the finite-dimensional situation), it suffices to consider the case $B \in \mathcal{F}_{00}$.
We start by proving (\ref{eq:sharppara}). Since \begin{equation} \label{eq:paraest}
\|\pi_B\| \le C'
\log(n+1) \|B\|_{\operatorname{{\mathrm{BMO_{so}^d}}}} \end{equation}
for some absolute constant $C' >0$ (see \cite{ntv}, \cite{katz}) and \begin{equation} \label{eq:strongmult}
\|B\|_{\operatorname{{\mathrm{BMO_{so}^d}}}} \le \|B\|_{\operatorname{{\mathrm{BMO_{mult}}}}}, \end{equation} we have
\begin{equation*} \|S_B \|_{\operatorname{{\mathrm{BMO_{para}}}}} \le C'
\log(n+1) \|S_B\|_{\operatorname{{\mathrm{BMO_{mult}}}}} \le C \log(n+1) \|B\|^2_{\operatorname{{\mathrm{BMO_{para}}}}} \end{equation*} by (\ref{normsweep}).
For the sharpness of the estimate, take $F$ as in Lemma \ref{mei}. Again, approximating by $E_k F$, we can assume that $F \in \mathcal{F}_{00}$. Since each function in $L^\infty(\mathbb{T}, \mathrm{Mat}(\C, n \times n))$ is a linear combination of four nonnegative matrix-valued functions, the $L^\infty$-norms of which are controlled by the norm of the original function, we can (by replacing $c$ with a smaller constant) assume that $F$ is a nonnegative matrix-valued function. Each such nonnegative matrix-valued function $F$ can be written as $F = S_B$ with $B \in
\mathcal{F}_{00}$, for example by choosing $B = \sum_{I \in \mathcal{D}, |I|=
{2^{-k}}} h_I B_I$, where $B_I = |I|^{1/2} (F^I)^{1/2}$, $F =
\sum_{I \in \mathcal{D}, |I|= {2^{-k}}} \chi_I F^I$. It follows that \begin{multline*}
\|S_B \|_{\operatorname{{\mathrm{BMO_{para}}}}}\ge c \log(n+1) \|S_B\|_\infty \\
\ge c/2 \log(n+1)( \|S_B\|_{\operatorname{{\mathrm{BMO_{mult}}}}} + \|B\|^2_{\operatorname{{\mathrm{BMO_{so}^d}}}})
\gtrsim \log(n+1) \|B\|^2_{\operatorname{{\mathrm{BMO_{para}}}}} \end{multline*} again by (\ref{normsweep}). Here, we use the estimate
$\|B\|^2_{\operatorname{{\mathrm{BMO_{so}^d}}}} \le \|S_B\|_\infty$, which can easily be obtained by $$
\|P_I B e\|_2^2 = \|S_{P_I Be}\|_1 \le |I| \|S_{P_I Be}\|_\infty \le |I| \|S_{P_I B}\|_\infty
\le |I| \|S_{ B}\|_\infty \text{ for } e \in \mathcal{H}, \|e\|=1. $$ This proves that (\ref{eq:sharppara}) is sharp.
Let us now show (\ref{eq:sharpmult}). Note that by (\ref{normsweep}) and (\ref{eq:paraest}), for $B \in \mathcal{F}_{00}$, $$
\|S_B \|_{\operatorname{{\mathrm{BMO_{mult}}}}} \lesssim \|B\|^2_{\operatorname{{\mathrm{BMO_{para}}}}} \le {C}'^2 \log(n+1)^2 \|B\|^2_{\operatorname{{\mathrm{BMO_{mult}}}}}. $$
For sharpness, choose $B \in \mathcal{F}_{00}$, $\|B \|_{\infty} \le 1$,
$\|\pi_B\|\ge c \log(n+1)$ as above, to obtain \begin{multline*}
\|S_B \|_{\operatorname{{\mathrm{BMO_{mult}}}}} + \|B\|^2_{\operatorname{{\mathrm{BMO_{so}^d}}}} \gtrsim \|B\|^2_{\operatorname{{\mathrm{BMO_{para}}}}} \\
\ge c^2 \log(n+1)^2 \|B\|_\infty^2
\ge c^2 \log(n+1)^2 \|B\|_{\operatorname{{\mathrm{BMO_{mult}}}}}^2 \end{multline*} and thus $$
\|S_B \|_{\operatorname{{\mathrm{BMO_{mult}}}}} \gtrsim \log(n+1)^2 \|B\|_{\operatorname{{\mathrm{BMO_{mult}}}}}^2, $$
as $\|B\|_{\operatorname{{\mathrm{BMO_{so}^d}}}} \le \|B\|_{\operatorname{{\mathrm{BMO_{mult}}}}}$.
Finally, let us show (\ref{eq:sharpnorm}). Again, we can restrict ourselves to the case $B \in \operatorname{{\mathcal{F}_{00}}}$ by an approximation argument. We use the fact that the UMD constant of $\mathrm{Mat}(\C, n \times n)$ is equivalent to $\log(n+1)$ (see for instance \cite{Pi1}) and the representation $$
S_B(t) = \int_{\Sigma} (T_\sigma B)^*(t) (T_\sigma B)(t) d \sigma \qquad(B \in \operatorname{{\mathcal{F}_{00}}}) $$ (see \cite{new}, \cite{gptv2}), where $T_\sigma$ denotes the dyadic martingale transform $B \mapsto T_\sigma B = \sum_{I \in \mathcal{D}} \sigma_I h_I B_I$, $\sigma= (\sigma_I)_{I \in \mathcal{D}} \in\{-1,1\}^\mathcal{D}$, and $d \sigma$ the natural product probability measure on $\Sigma =\{-1,1\}^\mathcal{D}$ assigning measure $2^{-n}$ to cylinder sets of length $n$, to prove that \begin{multline*}
\|P_I S_B\|_{L^1(\mathbb{T}, \mathrm{Mat}(\C, n \times n))} = \|P_I S_{P_I B}\|_{L^1(\mathbb{T}, \mathrm{Mat}(\C, n \times n))}
\le 2 \|S_{P_I B}\|_{L^1(\mathbb{T}, \mathrm{Mat}(\C, n \times n))} \\
\lesssim (\log(n+1))^2 \|P_I B\|_{L^2(\mathbb{T}, \mathrm{Mat}(\C, n \times n))}^2 \le (\log(n+1))^2
|I| \|B\|^2_{\operatorname{{\mathrm{BMO_{norm}^d}}}}, \end{multline*} which gives the desired inequality.
To prove sharpness, choose $B \in \mathcal{F}_{00}$, $\|B \|_{\infty} \le 1$, $\|\pi_B\|\ge c \log(n+1)$ and note that by Theorem \ref{maininclu}, \begin{multline*}
\|S_B\|_{\operatorname{{\mathrm{BMO_{norm}^d}}}} + \|B \|^2_{\operatorname{{\mathrm{BMO_{so}^d}}}} \gtrsim \|S_B\|_{\operatorname{{\mathrm{BMO_{mult}}}}} + \|B \|^2_{\operatorname{{\mathrm{BMO_{so}^d}}}} \\
\gtrsim \|B\|^2_{\operatorname{{\mathrm{BMO_{para}}}}} \ge c^2 \log(n+1)^2 \|B\|^2_\infty \ge c^2 \log(n+1)^2 \|B\|^2_{\operatorname{{\mathrm{BMO_{norm}^d}}}}. \end{multline*}
Since $\|B \|_{\operatorname{{\mathrm{BMO_{so}^d}}}} \le \|B\|_{\operatorname{{\mathrm{BMO_{norm}^d}}}}$, this implies $$
\|S_B\|_{\operatorname{{\mathrm{BMO_{norm}^d}}}} \gtrsim \log(n+1)^2 \|B\|^2_{\operatorname{{\mathrm{BMO_{norm}^d}}}}. $$ \qed
We now consider the bilinear extension of the sweep. By \cite{psm}, \cite{new} or \cite{BlascoArg} \begin{equation} \label{eq:bidentity} \pi_{U}^* \pi_{V} = \Lambda_{\Delta[U^*,V]} + D_{U^*,V} \qquad(U,V \in \operatorname{{\mathcal{F}_{00}}}), \end{equation} where $D_{U^*,V}$ is given by $D_{U^*,V} h_I e = h_I
\frac{1}{|I|}\sum_{J \subset I} U^*_J V_J e$ for $I \in \mathcal{D}$, $e \in \mathcal{H}$. \begin{prop} \label{prop:bisweep} $$
\|\pi_{U}^* \pi_{V}\| \approx \|\Delta[U^*,V]\|_{\operatorname{{\mathrm{BMO_{mult}}}}} + \sup_{I \in \mathcal{D}}\frac{1}{|I|}\|\sum_{J \subset I} U^*_J V_J \| \qquad (U,V \in \operatorname{{\mathcal{F}_{00}}}). $$ \end{prop}
\proof Obviously $\|D_{U^*,V}\| = \sup_{I \in
\mathcal{D}}\frac{1}{|I|}\|\sum_{J \subset I} U^*_J V_J \|$. Thus by (\ref{eq:bidentity}), $$
\|\pi_{U}^* \pi_{V}\| \le \|\Delta[U^*,V]\|_{\operatorname{{\mathrm{BMO_{mult}}}}} + \sup_{I \in
\mathcal{D}}\frac{1}{|I|}\|\sum_{J \subset I} U^*_J V_J \|. $$ For the reverse estimate, it suffices to observe that $D_{U^*,V}$ is the block diagonal of the operator $\pi_{U}^* \pi_{V}$ with respect to the orthogonal subspaces $h_I \mathcal{H}$, $I \in \mathcal{D}$
and therefore $\|D_{U^*,V}\| \le \|\pi_{U}^* \pi_{V} \|$. \qed
\noindent Here are the dimensional estimates of the bilinear map $\Delta$. \begin{cor} There exists an absolute constant $C >0$ such that for each $n \in \mathbb{N}$ and each pair of measurable functions $U,V: \mathbb{T} \rightarrow \mathrm{Mat}(\C, n \times n)$, \begin{equation} \label{eq:bistrong}
\| \Delta[U^*,V] \|_{\operatorname{{\mathrm{SBMO^d}}}} \le C \log(n+1) \|U\|_{\operatorname{{\mathrm{SBMO^d}}}}\|V\|_{\operatorname{{\mathrm{SBMO^d}}}}, \end{equation} \begin{equation} \label{eq:bipara}
\| \Delta[U^*,V] \|_{\operatorname{{\mathrm{BMO_{para}}}}} \le C \log(n+1) \|U\|_{\operatorname{{\mathrm{BMO_{para}}}}}\|V\|_{\operatorname{{\mathrm{BMO_{para}}}}}, \end{equation} \begin{equation} \label{eq:bimult}
\| \Delta[U^*,V] \|_{\operatorname{{\mathrm{BMO_{mult}}}}} \le C (\log(n+1))^2 \| U\|_{\operatorname{{\mathrm{BMO_{mult}}}}}\| V\|_{\operatorname{{\mathrm{BMO_{mult}}}}}, \end{equation} \begin{equation} \label{eq:binorm}
\| \Delta[U^*,V] \|_{\operatorname{{\mathrm{BMO_{norm}^d}}}} \le C (\log(n+1))^2 \|U\|_{\operatorname{{\mathrm{BMO_{norm}^d}}}}\|V\|_{\operatorname{{\mathrm{BMO_{norm}^d}}}}, \end{equation} and the dimensional estimates are sharp. \end{cor} \proof Only the upper bounds need to be shown.
For (\ref{eq:bistrong}), use Proposition \ref{carbmoso} to write $\|B\|_{\operatorname{{\mathrm{SBMO^d}}}}= \sup_{I\in \mathcal{D}, \|e\|=1} \|\Lambda_B(h_Ie)\|$
and (\ref{eq:bidentity}) to estimate
$$ \| \Delta[U^*,V] \|_{\operatorname{{\mathrm{SBMO^d}}}}\le \sup_{I\in \mathcal{D},\|e\|=1} \| \pi_U^* \pi_V h_I e\|
+ \sup_{I\in \mathcal{D},\|e\|=1}\|D_{U^*,V}(h_Ie)\|.$$ Now observe that for $e \in \mathcal{H}$, $I \in \mathcal{D}$, one has
$$\| \pi_U^* \pi_V h_I e\|
\le \|U\|_{\operatorname{{\mathrm{BMO_{para}}}}} \|V\|_{\operatorname{{\mathrm{SBMO^d}}}} \|e\| \le C' \log(n+1) \|U\|_{\operatorname{{\mathrm{SBMO^d}}}}
\|V\|_{\operatorname{{\mathrm{SBMO^d}}}}\|e\| $$ by (\ref{eq:paraest}). Since $D_{U^*,V}
h_Ie = \frac{1}{|I|} \sum_{J\subset I} U_J^* V_Je \, h_I$, one obtains \begin{multline*}
\|D_{U^*,V}(h_Ie)\|= \sup_{f \in \mathcal{H}, \|f\|=1} |\langle D_{U^*,V}(h_Ie), h_I f \rangle| \\
= \sup_{f \in \mathcal{H}, \|f\|=1}
\frac{1}{|I|} |\sum_{J\subset I}\langle V_Je, U_Jf\rangle |
\le \|V_e\|_{\operatorname{{\mathrm{BMO^d}}}(\mathbb{T},\mathcal{H})}\|U\|_{\operatorname{{\mathrm{SBMO^d}}}}, \end{multline*} and the proof of (\ref{eq:bistrong}) is complete.
Using first (\ref{eq:paraest}) and (\ref{eq:strongmult}) and then Proposition \ref{prop:bisweep}, we obtain (\ref{eq:bipara}). In a similar way, using first Proposition \ref{prop:bisweep} and then (\ref{eq:paraest}), (\ref{eq:strongmult}) yields (\ref{eq:bimult}).
Finally, for (\ref{eq:binorm}) observe first that for any $U,V \in \operatorname{{\mathcal{F}_{00}}}$, $e, f \in \mathcal{H}$, $t \in \mathbb{T}$, \begin{eqnarray*}
&& |\langle \Delta[U^*,V](t)e, f \rangle|\\
&=& |\sum_{I \in \mathcal{D}} \left \langle \frac{\chi_I(t)}{|I|^{1/2}} V_I e, \frac{\chi_I(t)}{|I|^{1/2}}
U_I f \right \rangle| \\
& \le& \left(\sum_{I \in \mathcal{D}} \|\frac{\chi_I(t)}{|I|^{1/2}}V_I e\|^2\right)^{1/2}
\left(\sum_{I \in \mathcal{D}} \|\frac{\chi_I(t)}{|I|^{1/2}}U_I f\|^2\right)^{1/2} \\
&=& \langle S_U (t) e,e \rangle^{1/2} \langle S_V (t) f,f \rangle^{1/2}
\le\|S_U(t)\|^{1/2} \|S_V(t)\|^{1/2} \end{eqnarray*} and therefore \begin{equation} \label{eq:pointest}
\|\Delta[U^*,V](t)\| \le \|S_U(t)\|^{1/2} \|S_V(t)\|^{1/2} \quad (t \in \mathbb{T}). \end{equation} Now consider the $\operatorname{{\mathrm{BMO_{norm}^d}}}$ norm of $\Delta[U^*,V]$. For $I \in \mathcal{D}$, \begin{eqnarray*}
&& \|P_I \Delta[U^*,V]\|_{L^1(\mathbb{T}, \mathrm{Mat}(\C, n \times n))}\\
& =& \|P_I \Delta[P_I U^*,P_I V]\|_{L^1(\mathbb{T}, \mathrm{Mat}(\C, n \times n))} \\
&\le& 2\|\Delta[P_I U^*,P_I V]\|_{L^1(\mathbb{T}, \mathrm{Mat}(\C, n \times n))} \\
& \le& 2 \| \|S_{P_I U}(\cdot)\|^{1/2} \|S_{P_I V}(\cdot)\|^{1/2}\|_{L^1(\mathbb{T})} \\
&\le& 2 \|S_{P_I U}\|^{1/2}_{L^1(\mathbb{T},\mathrm{Mat}(\C, n \times n))}\|S_{P_I V}\|^{1/2}_{L^1(\mathbb{T},\mathrm{Mat}(\C, n \times n))}\\
&\le & 2 (\log(n+1))^2 \|P_I U\|_{L^2(\mathbb{T},\mathrm{Mat}(\C, n \times n))} \|P_I V\|_{L^2(\mathbb{T},\mathrm{Mat}(\C, n \times n))} \\
& \le & 2 (\log(n+1))^2 |I| \|U\|_{\operatorname{{\mathrm{BMO_{norm}^d}}}} \|V\|_{\operatorname{{\mathrm{BMO_{norm}^d}}}}, \end{eqnarray*} where we obtain the third inequality from (\ref{eq:pointest}) and the fourth inequality from the proof of (\ref{eq:sharpnorm}). This finishes the proof of (\ref{eq:binorm}). \qed
\end{document} |
\begin{document}
\newcommand{\ket}[1]{|#1\rangle}
\newcommand{\bra}[1]{\langle #1|}
\newcommand{\bracket}[2]{\langle #1|#2\rangle}
\newcommand{\ketbra}[1]{|#1\rangle\langle #1|} \newcommand{\average}[1]{\langle #1\rangle} \newtheorem{theorem}{Theorem}
\title{Spherical Code Key Distribution Protocols for Qubits} \author{Joseph M. Renes} \affiliation{Department of Physics and Astronomy, University of New Mexico,\\ Albuquerque, New Mexico 87131--1156, USA\\ \texttt{[email protected]}}
\begin{abstract} Recently spherical codes were introduced as potentially more capable ensembles for quantum key distribution. Here we develop specific key creation protocols for the two qubit-based spherical codes, the trine and tetrahedron, and analyze them in the context of a suitably-tailored intercept/resend attack, both in standard form, and a ``gentler'' version whose back-action on the quantum state is weaker. When compared to the standard unbiased basis protocols, BB84 and six-state, two distinct advantages are found. First, they offer improved tolerance of eavesdropping, the trine besting its counterpart BB84 and the tetrahedron the six-state protocol. Second, the key error rate may be computed from the sift rate of the protocol itself, removing the need to sacrifice key bits for this purpose. This simplifies the protocol and improves the overall key rate. \end{abstract}
\pacs{03.67.Dd, 03.67.Hk, 03.67.-a}
\maketitle
Heretofore quantum key distribution protocols have often been constructed using sets of unbiased bases, enabling key bit creation whenever the two parties Alice and Bob happen to send and measure the quantum system in the same basis. Alice randomly selects a basis and a state within that basis to send to Bob, who randomly chooses a basis in which to measure and decodes the bit according to their pre-established scheme. Should Bob choose the same basis as Alice, his outcome is perfectly correlated with hers. Each of the parties publicly announces the bases used, and for each instance they agree, they establish one letter of the key. The use of more than one basis prevents any would-be eavesdropper Eve from simply reading the encoded bit without Alice and Bob noticing. In two dimensions two sets of mutually unbiased bases exist, forming the BB84~\cite{bb84} and six-state protocols~\cite{bruss98}.
Equiangular spherical codes can be used to construct a new scheme for key distribution~\cite{renes03}. Two such codes exist in two dimensions. In the Bloch-sphere representation we may picture these ensembles as three equally-spaced coplanar states forming a trine or four equally-spaced states forming a tetrahedron. Both Alice and Bob replace their use of unbiased bases with equiangular spherical codes; by arranging Bob's code to be the dual of Alice's, key creation becomes a process of elimination, as previously considered by Phoenix, \emph{et al.}~\cite{pbc00}. Each of Bob's measurement outcomes is orthogonal to one of Alice's signals, and thus each outcome excludes one signal.
Alice may then attempt to furnish the remaining information by announcing a certain number of signals that were not sent, a process known as sifting. By symmetry, Bob can also send the sifting information to Alice, in the form of outcomes not obtained; this convention will be followed here. The shared (anti-) correlation between signal and outcome allows them to remain one step ahead of an eavesdropper Eve, ensuring that unless she tampers with the quantum signal, she knows nothing of the created key.
Should Eve tamper with the signal, the disturbance can be recognized by Alice and Bob in the statistics of their results. With this they can determine what she knows about their key, and they may either proceed to shorten their key string so as to remove Eve's information of it, or else discard it entirely and begin anew. Unlike bases-based protocols, however, here Alice and Bob can determine the disturbance from the sifting rate directly, obviating the need to explicitly compare (and waste) portions of the key for this purpose.
The overarching questions in evaluating a key distribution protocol are whether or not it is unconditionally secure, and if so, what the maximum tolerable error rate is. If, by granting Eve the ability to do anything consistent with the laws of physics, Alice and Bob can still share a key, the protocol is said to be unconditionally secure. This state of affairs persists up to the maximum tolerable error rate, at which point Alice and Bob must abandon their key creation efforts. Establishing unconditional security is complicated and delicate, so here we restrict attention to more limited attacks, examining the intercept/resend attack and a ``gentler'' variant. In these settings we find that the spherical codes are more tolerant of noise than their basic counterparts.
First, however, we must consider the protocols for the two spherical codes in detail.
Unlike the case of unbiased bases, in which Alice's choice of signal or Bob's outcome determines the key letter, for the trine and tetrahedron it is only the relation between Alice's signal and Bob's outcome that determines the bit. In the trine protocol Alice's choice of signal narrows Bob's possible outcomes to the two lying 60 degrees on either side. Each is equally likely, and they publicly agree beforehand that the one clockwise from Alice's signal corresponds to \texttt{1} and the other \texttt{0}. Alice hopes to determine which is the case when Bob announces one outcome that he \emph{did not} receive. For any given outcome, he chooses randomly between the other two and publicly announces it. Half the time he announces that he did not receive the outcome which Alice already knew to be impossible. This tells Alice nothing new, and she announces that the protocol failed. In the other half of cases, Alice learns Bob's outcome and announces success.
Upon hearing his message was a success, Bob can determine the signal Alice sent. For any outcome Bob receives, he immediately knows one signal Alice could not have sent, and the message that his announcement was successful indicates to him that she also did not send the signal orthogonal to his message. Had she sent that signal, she would have announced failure; thus Bob learns the identity of Alice's signal. Each knowing the relative position of signal and outcome, they can each generate the same requisite bit. This round of communication is the analog of sifting in the protocols utilizing unbiased bases: a follow-up classical communication referencing the quantum signals with which Alice and Bob establish the key.
Mathematically, we might consider the protocol as follows. Alice sends signal $j$, and Bob necessarily obtains $k=j\!+\!1$ or $k=j\!+\!2$. He announces that he did not receive $l\!\neq\! k$. If $l\!=\!j$, Alice announces failure. Otherwise each party knows the identity of $j,k$, and $l$, and they compute the key bit as $ (1\!-\!\epsilon_{jkl})/2$. Fig.~1 shows the case that they agree on a \texttt{1}.
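To make the bookkeeping above concrete, the following minimal sketch simulates noiseless rounds of the trine protocol. It is an illustration only: the signals and outcomes are indexed $0,1,2$ rather than $1,2,3$, and the function names and the orientation convention encoded by the Levi-Civita symbol are our own choices, not part of the protocol specification.
\begin{verbatim}
import random

def levi_civita(j, k, l):
    # Levi-Civita symbol of (j, k, l) as a permutation of (0, 1, 2)
    return {(0,1,2): 1, (1,2,0): 1, (2,0,1): 1,
            (0,2,1): -1, (2,1,0): -1, (1,0,2): -1}.get((j, k, l), 0)

def trine_round():
    # One noiseless round; returns the shared bit, or None on sift failure.
    j = random.randrange(3)                        # Alice's signal
    k = random.choice([(j + 1) % 3, (j + 2) % 3])  # Bob's outcome (never j)
    l = random.choice([m for m in range(3) if m != k])  # announced "not obtained"
    if l == j:      # Alice learns nothing new and announces failure
        return None
    return (1 - levi_civita(j, k, l)) // 2         # both parties compute this bit

rounds = [trine_round() for _ in range(100000)]
sifted = [b for b in rounds if b is not None]
print("sift rate ~", len(sifted) / len(rounds))    # close to 1/2
print("bit bias  ~", sum(sifted) / len(sifted))    # close to 1/2 (bits are uniform)
\end{verbatim}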
\begin{figure}
\caption{Bloch-sphere representation of the trine-based protocol by which Alice and Bob create a secret key bit, shown here creating a \texttt{1}. Alice's three possible signal states are shown in black and Bob's measurement outcomes in dotted lines; antipodal points are orthogonal. Without loss of generality we may assume that Alice sends the state $j=1$. The antipodal point is the impossible outcome for Bob; here he obtains the outcome $k=3$. Of the two outcomes he did not get, he picks one at random and announces this to Alice. Here he announces the outcome $l=2$, and Alice infers the value of $k$. Had Bob announced the other outcome, the protocol would fail, as this tells Alice nothing she does not already know. Here she announces that she is satisfied with Bob's message, and Bob infers the value of $j$, since Alice's signal could not have been $l$. Now they compute the bit $(1\!-\!\epsilon_{jkl})/2=1$. The announcement only reveals $l$, so the bit is completely secret.}
\label{fig:label}
\end{figure}
Though Eve may listen to the messages on the classical channel, she still has no knowledge of the bit value, for all she knows is one outcome Bob did not receive and the corresponding antipodal state not sent by Alice. Of the two remaining equally-likely alternatives, one corresponds to a \texttt{0} and the other a \texttt{1}. Hence the protocol establishes one fully secret bit half the time, analogous to the BB84 protocol.
The strategy for the tetrahedron is entirely similar, except that Bob must now reveal two outcomes not obtained. As depicted in Fig.~2, Alice uses four tetrahedral states in the Bloch-sphere picture, and as before Bob uses the dual of Alice's tetrahedron for measurement. Alice sends signal $j$ and Bob receives $k\!\neq\! j$. He then randomly chooses two outcomes $l$ and $m$ he did not obtain and announces them. One-third of the time this is successful, in that $l\!\neq\! j$ and $m\!\neq\! j$. This allows Alice to infer $k$, and her message of satisfaction allows Bob to infer $j$, just as for the trine. They then each compute the bit $(1+\epsilon_{jklm})/2$.
Again they stay one step ahead of Eve as she listens to the messages, as she can only narrow Alice's signal down to two possibilities. Given the order of Bob's messages, one of these corresponds to \texttt{0} and the other to \texttt{1}, so Eve is ignorant of the bit's identity. Using the tetrahedron allows Alice and Bob to establish one fully secret bit one third of the time, analogous to the six-state protocol.
\begin{figure}
\caption{Unfolded view of the Bloch-sphere tetrahedron states. Vertices of triangles correspond to Bob's outcomes, their centers Alice's signals; all three vertices of the large triangle represent the same point antipodal to its center. Suppose Alice sends signal $j$; Bob necessarily receives $k\neq j$. Here we suppose $j=1$ and $k=2$. Bob then announces two outcomes not obtained, here shown as $l=3$ and $m=4$. Had either message equaled $j$, which happens 2/3 of the time, Alice announces failure. Otherwise, as here, she accepts. Thus Alice determines $k$, and Bob finds out $j$. They compute the bit $(1+\epsilon_{jklm})/2=1$. The announcement reveals only $l$ and $m$, so the bit is secret.}
\label{fig:tetra}
\end{figure}
In the two protocols, the dual arrangement of signals and measurements allows Alice and Bob to proceed by elimination to establish a putative key. To ensure security of the protocols, however, the arrangement must also disallow Eve from reading the signal without Alice and Bob noticing. Analyzing the intercept/resend attack provides evidence of how well the protocols based on spherical codes measure up to this task.
If Eve tampers with the signals in order to learn their identity, the inevitable disturbance allows Alice and Bob to infer how much Eve knows about the raw key. They can then proceed to use error correction and privacy amplification procedures to distill a shorter key which, with high probability, is identical for Alice and Bob and which Eve has low probability of knowing anything about. Instead of delving into the details of error correction and privacy amplification, we may use a lower bound on the optimal rate of the distilled key, i.e., its length as a fraction of the raw key~\cite{ck78}. This provides a reasonable guess as to what may be achieved in practice and is known to be achievable using one-way communication. Given $N\!\rightarrow\!\infty$ samples from a tripartite distribution $p(a,b,e)$, Alice and Bob can construct a protocol to distill with high probability a length $RN$ string about which Eve has asymptotically zero information, where \begin{equation} \label{eq:ratebound} R= I(A\!:\!B)-\min\{I(A\!:\!E),I(B\!:\!E)\}. \end{equation} Here the tripartite distribution refers to Alice's and Bob's bit values $a$ and $b$, and Eve's best guess $e$ from the eavesdropping. The quantity $I(A\!:\!B)$ is the mutual information between two parties, quantifying how much knowledge of one's outcome implies about the other's.
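For concreteness, the rate bound of Eq.~(\ref{eq:ratebound}) can be evaluated numerically from any candidate tripartite distribution. The short sketch below (the array layout and function names are illustrative and not taken from any standard library) computes the pairwise mutual informations and $R$ from a three-dimensional array $p[a,b,e]$.
\begin{verbatim}
import numpy as np

def mutual_information(p_xy):
    # I(X:Y) in bits for a joint distribution stored as a 2-D array
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x * p_y)[mask])))

def rate_bound(p_abe):
    # R = I(A:B) - min(I(A:E), I(B:E)) for a 3-D array p[a, b, e]
    p_abe = np.asarray(p_abe, dtype=float)
    I_ab = mutual_information(p_abe.sum(axis=2))
    I_ae = mutual_information(p_abe.sum(axis=1))
    I_be = mutual_information(p_abe.sum(axis=0))
    return I_ab - min(I_ae, I_be)
\end{verbatim}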
Here we assume that Eve simply intercepts a fraction $q$ of the signals, measures them, and sends a new state on to Bob. The first task is to determine $R$ as a function of $q$ and then to relate $q$ to the statistics compiled in the course of the protocol. As it happens, Eve's best attack in the intercept/resend context is to use \emph{both} Alice's and Bob's trines for measurement, half the time pretending to be Alice and the other half Bob. This holds for the tetrahedron as well and is due to the minimum in Eq.~(\ref{eq:ratebound}), which gives the equation a symmetry between Alice and Bob with respect to Eve. Choosing only one of the trines (or tetrahedra) to measure breaks this symmetry, leading the minimum to pick the smaller information quantity and yield a consequently larger key.
By mixing the two strategies, Eve restores the symmetry and increases the minimum knowledge she has about either party's bit string. Phoenix, \emph{et al}.~\cite{pbc00} note that the scheme in which Eve pretends to be Bob maximizes her mutual information with Alice; however, as the analysis stops there and does not proceed to consider either Eve's information about the key bits or any secret key rate bounds, it is insufficient as a cryptographic analysis.
To determine the mutual information quantities as functions of $q$, it suffices to consider first the case in which Eve intercepts every signal and uses Alice's ensemble for measurement. With these quantities in hand, we can mix Eve's two measurement strategies appropriately and then include her probability of interception. We begin with the trine. Given a signal state from Alice, there are two cases to consider. Either Eve measures and gets the same state, which happens with probability 2/3, or she obtains one of the other two results, with probability 1/6 for each. Whatever her outcome, she passes the corresponding state along to Bob and guesses that it was the state sent by Alice, \emph{unless the subsequent exchange of classical messages eliminates this possibility}, at which point she reserves judgement about the key bit.
Suppose Eve's outcome corresponds to Alice's signal, and thus no disturbance is caused. Naturally, Alice and Bob go on to establish a bit half the time, a bit known to Eve. On the other hand, should her outcome not coincide with Alice's signal, there are two further possibilities. Half the time Bob obtains a result consistent with Alice's signal, i.e., not the orthogonal state, and a further half the time the sifting succeeds. However, the required sifting messages will eliminate Eve's outcome as Alice's signal, thus forcing Eve to abandon her guess.
In the remaining case, Bob's result is orthogonal to Alice's signal, which guarantees successful sifting, but also different bit values for Alice and Bob. Half of Eve's guesses are excluded while the remainder agree with Bob's.
Putting all this together, one obtains that the protocol fails with probability 5/12. All three agree one-third of the time, and Alice's bit is different from that shared by Bob and Eve one-twelfth of the time. In the remaining one-sixth of events, Eve does not field a guess, as the messages exchanged by Alice and Bob contradict her measurement results; better to abstain than to introduce a purely random guess. In this subset of events, Alice and Bob agree a further half the time.
Of the key bits created, Bob and Alice agree with probability 5/7, while Eve and Alice agree with probability 4/7. Eve only fields a guess with probability 5/7, and always agrees with Bob when she does. These numbers are obtained by considering the raw probabilities of agreement and renormalizing by $12/7$. Should Eve instead measure the signals using Bob's trine ensemble, her agreement probabilities with Alice and Bob are swapped. Mixing the two strategies then yields Eve a no-guess probability of 2/7, an agreement probability with either party of 9/14, and an error probability of 1/14.
To interpolate between the endpoints of no interception and full interception, note that to condition on the cases of successful bit creation, the probability of bit agreement must be renormalized by the probability of sifting success. This probability depends linearly on $q$: $p_{\rm sift}=(6+q)/12$. All probabilities must therefore contain $6+q$ in the denominator, whence we may derive pairwise probabilities that the parties' bit values agree: $p_{ab}=(6-q)/(6+q)$ and $p_{ae}=9q/\bigl(2(6+q)\bigr)$, respectively.
Eve's probability of not guessing at all is $2(3-2q)/(6+q)$. Determining the relevant mutual informations from these expressions is straightforward; for expressions involving Eve, simply treat the ``no-guess'' as another outcome which has no correlation at all with the other party.
By determining the probability of error in Alice's and Bob's bit strings as a function of $q$, we may compare to other protocols. For the trine, errors occur in the key string with probability $2q/(6+q)$. Using the calculated agreement probabilities
in the rate bound, one obtains that $R=0$ corresponds to a maximum tolerable bit error rate of 20.4\%. This compares favorably with the BB84 protocol's maximum tolerable bit error rate of 17.1\% under the same attack~\cite{ekerthuttner94}. In terms of \emph{channel} error rate these figures double, if we consider the quantum channel to be a depolarizing channel instead of arising from Eve's interference. If Bob receives the maximally-mixed state instead of Alice's signal, the probability of error given successful sifting is 1/2. Hence a fully depolarizing channel leads to a bit error rate of 50\% for either protocol.
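The quoted threshold can be reproduced from the expressions above. In the sketch below (our own bookkeeping, not taken from~\cite{ekerthuttner94}), the no-guess event is treated, as prescribed above, as a third outcome uncorrelated with the other party's bit, so only the pairwise probabilities listed earlier enter; solving $R(q)=0$ numerically then returns a maximum tolerable key error rate close to 20.4\%.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def h2(p):      # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p*np.log2(p) - (1 - p)*np.log2(1 - p)

def H(dist):    # Shannon entropy of a discrete distribution
    d = np.asarray(dist, dtype=float)
    d = d[d > 0]
    return float(-(d * np.log2(d)).sum())

def trine_rate(q):
    # R(q) for the trine under intercept/resend with interception fraction q
    I_ab = 1 - h2(2*q/(6 + q))                 # key error rate 2q/(6+q)
    p_same = 9*q/(2*(6 + q))                   # Eve guesses and agrees
    p_diff = q/(2*(6 + q))                     # Eve guesses and disagrees
    p_none = 2*(3 - 2*q)/(6 + q)               # Eve abstains
    I_ae = H([(p_same + p_diff)/2]*2 + [p_none]) - H([p_same, p_diff, p_none])
    return I_ab - I_ae                         # I(A:E) = I(B:E) by symmetry

q_star = brentq(trine_rate, 0.01, 1.0)
print("tolerable key error rate ~", round(2*q_star/(6 + q_star), 3))   # ~0.204
\end{verbatim}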
Analysis of the tetrahedron protocol proceeds similarly by examining the various cases. In this case, when $q=1$ the failure rate of the protocol drops to 5/9, while Alice and Bob agree with probability 5/8, Eve has probability 7/16 of knowing Alice's or Bob's bit value, and she reserves judgement half the time. As the successful sifting rate of the protocol goes like $(3+q)/9$, we may determine the form of the probabilities using the same method to be $p_{ab}=p_{eb}=(6-q)/\bigl(2(3+q)\bigr)$ and $p_{ae}=7q/\bigl(4(3+q)\bigr)$,
while the error rate in the key string is $3q/\bigl(2(3+q)\bigr)$ and Eve's probability of not guessing is $(3\!-\!q)/(3\!+\!q)$. Using these probabilities in the rate bound yields a maximum tolerable error rate of 26.7\%. As before, this compares favorably to the maximum tolerable error rate of 22.7\% for the six-state protocol.
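Reusing the helper functions \texttt{h2}, \texttt{H} and \texttt{brentq} from the previous sketch, substituting the tetrahedron expressions quoted above reproduces this figure under the same treatment of the no-guess outcome.
\begin{verbatim}
def tetra_rate(q):
    # Same computation with the tetrahedron probabilities
    I_ab = 1 - h2(3*q/(2*(3 + q)))             # key error rate 3q/(2(3+q))
    p_same = 7*q/(4*(3 + q))                   # Eve guesses and agrees
    p_diff = q/(4*(3 + q))                     # Eve guesses and disagrees
    p_none = (3 - q)/(3 + q)                   # Eve abstains
    I_ae = H([(p_same + p_diff)/2]*2 + [p_none]) - H([p_same, p_diff, p_none])
    return I_ab - I_ae

q_star = brentq(tetra_rate, 0.01, 1.0)
print("tolerable key error rate ~", round(3*q_star/(2*(3 + q_star)), 3))   # ~0.267
\end{verbatim}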
Eve's attack could be gentler, however. In the version already considered, her POVM consists of subnormalized projectors onto the code states in addition to an element proportional to the identity operator, corresponding to the case in which Eve opts not to intercept the signal. A similar POVM can be created by distributing a piece of the identity operator among all the other elements.
The crucial difference lies in the state Eve sends on to Bob after her measurement. Using the square root of each POVM element in the formula for the post-measurement state,
the resulting measurement yields Eve more information for the same amount of disturbance. Note that in the context of the BB84 protocol, this attack was determined to be optimal when Eve does not wait to hear in which basis the signal was prepared~\cite{luetkenhaus96}.
Enlisting the aid of Mathematica to carry out the bookkeeping yields the following results. Since the attack is stronger, the maximum tolerable error decreases; in particular the trine can create secret keys up to a 16.6\% bit error rate, as opposed to 15.3\% for its cousin BB84. The tetrahedron remains the most robust, sustaining key creation up to a maximum error rate of 22.6\%, as compared to 21.0\% for the six-state protocol.
Framing the key rate in terms of the error rate is solely for ease of comparison, as it is not necessary for Alice and Bob to sacrifice key bits in order to obtain an estimate of $q$ when using spherical codes, in contrast to the situation for the unbiased bases. For spherical codes, the sifting rate of the protocol itself determines $q$; as the channel becomes noisier and Bob's outcome less correlated with Alice's signal, the sifting rate increases. Of course, not all of this increase provides useful key: most of it leads to errors. But Eve cannot substitute signals solely for the purpose of modifying the sift rate, as her signals will be uncorrelated with Alice's and will therefore also lead to an increase in the sift rate. Hence she is precluded from masking her interceptions, and Alice and Bob can determine $q$ from the sifting rate itself.
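To make the last point explicit (using only the expressions derived earlier): for the trine, $p_{\rm sift}=(6+q)/12$ gives $q=12\,p_{\rm sift}-6$, so the key error rate is
$$\frac{2q}{6+q}\;=\;2-\frac{1}{p_{\rm sift}},$$
while for the tetrahedron, $p_{\rm sift}=(3+q)/9$ gives $q=9\,p_{\rm sift}-3$ and a key error rate of $3q/\bigl(2(3+q)\bigr)=3/2-1/(2p_{\rm sift})$. In both cases the error rate is a monotonically increasing function of the observed sift rate alone.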
Finally, a word on the feasibility of implementing such protocols. Generation of trine or tetrahedral codewords as polarization states of (near) single-photon sources is not difficult. The generalized measurements accompanying the ensembles can be performed by using polarizing beam splitters and wave plates to map polarization states into different propagation modes and proceeding from there with linear optical elements to produce the appropriate interference.
Such measurements have indeed been performed with rms errors in observed statistical distributions of a few percent~\cite{ckcbrs01}. The physical implementation needn't be identical to the logical construction of the protocol, however. For instance, three states constructed from two pairs of singlets together with ordinary photodetectors can implement the trine protocol~\cite{bglps04}.
Two advantages of using spherical codes have been established herein. First and foremost is the strong possibility of improved eavesdropping resistance. Subsequent analyses, either of stronger attacks such as an asymmetric cloning machine~\cite{cerf00} or of the use of error-correcting codes to beat back noise~\cite{shorpreskill00}, are required to demonstrate this fact in the setting of unconditional security, though the intercept/resend attacks are indicative of the trend~\cite{fggnp97}. Beyond security is the ability to estimate the error rate directly from the sift rate itself, obviating any need to sacrifice raw key bits.
The author acknowledges helpful input from D.~Bru\ss, C.~M.~Caves, J.~Eisert, D.~Gottesman, and N.~L\"utkenhaus. This work was supported in part by Office of Naval Research Grant No.~N00014-00-1-0578.
\end{document} |
\begin{document}
\title{Constant depth fault-tolerant Clifford circuits for multi-qubit large block codes}
\author{Yi-Cong Zheng}
\email{ [email protected]} \affiliation{Tencent Quantum Lab, Tencent, Shenzhen, Guangdong, China, 518057} \affiliation{Centre for Quantum Technologies, National University of Singapore, Singapore 117543} \affiliation {Yale-NUS College, Singapore 138527}
\author{Ching-Yi Lai}
\affiliation{Institute of Communications Engineering, National Chiao Tung University, Hsinchu 30010, Taiwan} \author{Todd A. Brun}
\affiliation{Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California 90089, USA\\} \author{Leong-Chuan Kwek} \affiliation{Centre for Quantum Technologies, National University of Singapore, Singapore 117543} \affiliation{MajuLab, CNRS-UNS-NUS-NTU International Joint Research Unit, UMI 3654, Singapore} \affiliation{ Institute of Advanced Studies, Nanyang Technological University, Singapore 639673} \affiliation{National Institute of Education, Nanyang Technological University, Singapore 637616 } \date{\today}
\begin{abstract} Fault-tolerant quantum computation (FTQC) schemes using large block codes that encode $k>1$ qubits in $n$ physical qubits can potentially reduce the resource overhead to a great extent because of their high encoding rate. However, the fault-tolerant (FT) logical operations for the encoded qubits are difficult to find and implement, and usually require not only a very large resource overhead but also a long \emph{in-situ} computation time. In this paper, we focus on Calderbank-Shor-Steane $\llbracket n,k,d\rrbracket$ (CSS) codes and their logical FT Clifford circuits. We show that an arbitrary logical Clifford circuit can be implemented fault-tolerantly in $O(1)$ steps \emph{in-situ} via either the Knill or the Steane syndrome measurement circuit, with the required qualified ancilla states prepared efficiently. In particular, for those codes satisfying $k/n\sim \Theta(1)$, the resource scaling for implementing Clifford circuits at the logical level can be the same as at the physical level up to a constant factor, which is independent of the code distance $d$. With a suitable pipeline to produce ancilla states, our scheme requires only a modest resource cost in physical qubits, physical gates, and computation time for very large scale FTQC. \end{abstract}
\maketitle
\section{Introduction} Quantum error-correcting codes (QECCs)~\cite{Shor:1995:R2493,Steane:1996:793,Calderbank:1996:1098,Gaitan:2008:CRC,QECbook:2013} and the theory of fault-tolerant quantum computation (FTQC)~\cite{Shor:1996:56,Aharonov:1997:176, Gottesman:9705052,Kitaev:2003:2, DivencenzoFTPhysRevLett.77.3260, KnillFTNature,Aharonov:2006:050504,QECbook:2013} have shown that large-scale quantum computation is possible if the noise is not strongly correlated between qubits and its rate is below certain threshold~\cite{Aharonov:1997:176,KnillFTNature, Terhal:2005:012336,Aharonov:2006:050504,Aliferis:2006:97,cross2007comparative, Aliferis:2008:181}.
Large QECCs with high encoding rates typically encode many logical qubits with high distance. FTQC architectures based on these codes may potentially outperform smaller codes and topological codes, like surface codes~\cite{Kitaev:2003:2, Folwer2012PhysRevA.86.032324} and color codes ~\cite{Bombin:2006:180501}, in terms of the overall resource required and the error correction ability~\cite{steane1999efficient_Nature, steane2005fault, brun2015teleportation, Steane:2003:042322, gottesman2013overhead}. However, for an $\llbracket n,k,d\rrbracket$ code with $k,d\gg 1$, it may be extremely difficult (or even impossible) to find all required fault-tolerant (FT) logical gates. For Calderbank-Shor-Steane (CSS) codes~\cite{Calderbank:1996:1098,Steane:1996:793}, one way to resolve this challenge is to implement logical circuits indirectly through Knill or Steane syndrome extraction circuits~\cite{KnillFTNature, steane1997active} with additional blocks of encoded ancilla qubits prepared in specific states~\cite{steane1997active, Gottesman:1999:390,Zhou:2000:052316, brun2015teleportation}.
Unfortunately, the distillation processes for each encoded ancilla state are complicated, and different ancilla states are usually required for each logical gate. As an example, a Clifford circuit on $k$ qubits requires $O(k^2/\log k)$ Clifford gates~\cite{markov2008optimal, aaronson2004improved} with circuit depth $O(k)$; if an $\llbracket n,k, d \rrbracket$ CSS code is used, it requires $O(k^2/\log k)$ logical Clifford gates~\cite{aaronson2004improved}, and in general, $O(k^2/\log k)$ different ancilla states need to be prepared, and the same number of Knill/Steane syndrome extraction steps are required.
A natural question arises: can one implement logical circuits on those multi-qubit large block codes ($k\gg 1$) in a quicker and more efficient way? In this paper, we show that for Clifford circuits, the answer is positive for CSS codes: one can implement an arbitrary logical Clifford circuit fault-tolerantly using $O(1)$ qualified encoded ancilla states and a constant number of Knill/Steane syndrome measurement steps. Thus the depth of a logical Clifford circuit can be reduced to $O(1)$ \emph{in-situ}. Furthermore, we show that with the distillation protocol proposed in~\cite{Ancilla_distillation_1,zheng2017efficient}, these ancilla states can be distilled \emph{off-line} in ancilla factories with yield rate close to $O(1)$ asymptotically, if the physical error rate is sufficiently low. In particular, for those families of large block codes with $k/n\sim \Theta(1)$, the number of physical qubits and physical gates required for an arbitrary logical Clifford circuit can scale as $O(k)$ and $O(k^2/\log k)$, respectively, on average. These results suggest that the resource cost of Clifford circuits at the logical level can scale the same as at the physical level, if the distillation circuits and large block quantum codes are carefully chosen. With a proper pipeline structure of ancilla factories working in parallel, we are also convinced that the scaling of the required resources, including the overall number of qubits, physical gates, and computation time, can be very modest for large scale FTQC.
The structure of the paper is as follows. We review preliminaries and set up notation in Sec.~\ref{sec:prim}. In Sec.~\ref{sec:constant_depth}, we propose our scheme to implement FT logical Clifford circuits via a constant number of Knill or Steane syndrome measurements. The resource overhead of the scheme is carefully analyzed. In Sec.~\ref{sec:discussion}, we compare our scheme to some other closely related FTQC schemes in terms of resource overhead and real-time computational circuit depth.
\section{Preliminaries and notation}\label{sec:prim} \subsection{Stabilizer formalism and CSS codes}
Let $\mathcal{P}_n=\mathcal{P}_1^{\otimes n}$ denote the $n$-fold Pauli group, where \begin{equation*} \mathcal{P}_1=\{\pm I, \pm i I, \pm X, \pm i X, \pm Y, \pm i Y, \pm Z, \pm i Z\}, \end{equation*} and $I={\footnotesize \left(
\begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array}
\right)}$, $X={\footnotesize \left(
\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right)} $, $Z={\footnotesize \left(
\begin{array}{cc}
1 & 0 \\
0 & -1 \\
\end{array}
\right)}$, and $Y=iXZ$ are the Pauli matrices.
Let $X_j$, $Y_j$, and $Z_j$ act as single-qubit Pauli matrices on the $j$th qubit and trivially elsewhere. We also introduce the notation $X^{\mathbf a}$, for ${\mathbf a}=a_1\cdots a_n\in \mathbb{Z}_2^n$, to denote the operator $\otimes_{j=1}^n X^{a_j}$ and let $\text{supp}({\mathbf a})=\{j:a_j=1\}$. For ${\mathbf a}, {\mathbf b}\in \mathbb{Z}_2^n$, define $\mathcal{I}_{\bf ab}=\text{supp}({\bf a})\bigcap\text{supp}({\bf b})$ and let $\tau_{\bf ab}=\left|\mathcal{I}_{\bf ab}\right|$ be the size of $\mathcal{I}_{{\bf a}{\bf b}}$. An $n$-fold Pauli operator can be expressed as \begin{equation}\label{eq:general_error} i^l\cdot \bigotimes_{j=1}^n X^{a_j}Z^{b_j}=i^l X^{\bf a}Z^{\bf b}, \quad {\bf a},{\bf b}\in \mathbb{Z}^n_2, \ l\in\{0,1,2,3\}. \end{equation}
Then $({\bf a}\,|\,{\bf b})$ is called the \emph{binary representation} of the Pauli operator $i^lX^{\bf a}Z^{\bf b}$ up to an overall phase $i^l$. In particular, $\pm i^{\tau_{\bf ab}} X^{\bf a}Z^{\bf b}$, which is Hermitian, has eigenvalues $\pm 1$. From now on we use the binary representation and neglect the overall phase for simplicity when there is no ambiguity. We define the weight $\text{wt}(E)$ of a Pauli operator $E$ as the number of terms in its tensor product which are not equal to the identity.
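For readers who prefer something executable, the following small Python helper (ours; the string-to-vector map is the standard one implied by Eq.~(\ref{eq:general_error}), and the function names are our own) converts a Pauli string to its binary representation and weight:
\begin{verbatim}
# Binary representation (a|b) of an n-fold Pauli operator, ignoring the
# overall phase, together with its weight wt(E).
# Convention: X ~ (1,0), Z ~ (0,1), Y = iXZ ~ (1,1), I ~ (0,0).
def binary_rep(pauli):
    table = {'I': (0, 0), 'X': (1, 0), 'Z': (0, 1), 'Y': (1, 1)}
    a = [table[p][0] for p in pauli]
    b = [table[p][1] for p in pauli]
    return a, b

def weight(pauli):
    return sum(p != 'I' for p in pauli)

a, b = binary_rep('XZIY')
assert (a, b) == ([1, 0, 0, 1], [0, 1, 0, 1])
assert weight('XZIY') == 3
# tau_ab = size of the intersection of supp(a) and supp(b); here the Y position
tau_ab = sum(x & y for x, y in zip(a, b))
assert tau_ab == 1
\end{verbatim}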
Suppose $\mathcal{S}$ is an Abelian subgroup of $\mathcal{P}_n$ with a set of $n-k$ independent and commuting generators $\{S_1=i^{\tau_{{\bf a}_1{\bf b}_1}} X^{{\bf a}_1}Z^{{\bf b}_1},\dots, S_{n-k}=i^{\tau_{{\bf a}_{n-k}{\bf b}_{n-k}}} X^{{\bf a}_{n-k}}Z^{{\bf b}_{n-k}}\}$, and $\mathcal{S}$ does not include $-I^{\otimes n}$. An $\llbracket n,k\rrbracket$ quantum stabilizer code $C(\mathcal{S})$ is defined as the $2^{k}$-dimensional subspace of the $n$-qubit state space ($\mathbb{C}^{2^n}$) fixed by $\mathcal{S}$, which is the joint $+1$ eigenspace of $S_1, \dots, S_{n-k}$. Then for a codeword $\ket{\psi}\in C(\mathcal{S})$, $$S\ket{\psi}=\ket{\psi}$$ for all $S\in \mathcal{S}$. We also define $N(\mathcal{S})$ to be the normalizer of the stabilizer group. Thus any non-trivial logical Pauli operator on codewords belongs to $N(\mathcal{S})\backslash\mathcal{S}$ and let $X_{j,L}$, $Y_{j,L}$ and $Z_{j,L}$ be logical Pauli operators acting on the $j$th logical qubit. The distance $d$ of the code is defined as
$$d=\min_{L\in N(\mathcal{S})\backslash \mathcal{S}} \text{wt}(L).$$ Suppose $\mathcal{S}'\subseteq \mathcal{P}_n$ is another Abelian subgroup containing $\mathcal{S}$ with $k=0$; then $C(\mathcal{S}')$ has only one state $|\psi\rangle$ up to a global phase. This state is called a \emph{stabilizer codeword} of $\mathcal{S}$, whose binary representation is $$ \psi = \left(
\begin{array}{c|c}
{\bf a}_1 & {\bf b}_1 \\
\vdots & \vdots \\
{\bf a}_{n} & {\bf b}_{n}\\
\end{array}
\right). $$
If a Pauli error $E$ corrupts $\ket{\psi}$, some eigenvalues of $S_1,\dots, S_{n-k}$ may be flipped, if they are measured on $E|\psi\rangle$. Consequently, we gain information about the error by measuring the stabilizer generators $S_1,\dots, S_{n-k}$, and the corresponding measurement outcomes (in bits) are called the \emph{error syndrome} of $E$. A quantum decoder has to choose a good recovery operation based on the measured error syndromes.
CSS codes are an important class of stabilizer codes for FTQC. Their generators are tensor products of the identity and either $X$ or $Z$ operators (but not both)~\cite{Calderbank:1996:1098, Steane:1996:793}. More formally, consider two classical codes, $\mathcal{C}_Z$ and $\mathcal{C}_X$ with parameters $[n, k_Z, d_Z]$ and $[n, k_X, d_X]$, respectively, such that $\mathcal{C}_X^\perp \subset \mathcal{C}_Z$. The corresponding parity-check matrices are $\textsf{H}_Z$ ($(n-k_Z)\times n$) and $\textsf{H}_X$ ($(n-k_X)\times n$) with full rank. One can form an $\llbracket n, k=k_X+k_Z-n, d \rrbracket$ CSS code, where $d\geq\min\{d_Z,d_X\}$. In general, a logical basis state can be represented as: $$
|u\rangle_L=\sum_{x\in \mathcal{C}_{X}^\perp}|x+u D\rangle, $$ where $u\in \mathbb{Z}_2^k$ and $D$ is a $k\times n$ binary matrix, whose rows are the coset leaders of $\mathcal{C}_Z/\mathcal{C}_X^\perp$. The stabilizer generators of a CSS code in binary representation are: $$ \left(
\begin{array}{c|c}
\textsf{H}_Z & {\bf 0} \\
{\bf 0} & \textsf{H}_X \\
\end{array} \right), $$ where $\textsf{H}_X(\textsf{H}_Z)$ is made of $Z(X)$ type Pauli operators. For the special case that $\mathcal{C}_{X}=\mathcal{C}_{Z}$, we call such a code self-dual CSS code.
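As a concrete illustration (our own example, not part of the original text): taking $\mathcal{C}_X=\mathcal{C}_Z$ to be the classical $[7,4,3]$ Hamming code yields the self-dual $\llbracket 7,1,3\rrbracket$ Steane code. The containment $\mathcal{C}_X^\perp\subset\mathcal{C}_Z$ then reduces to $\textsf{H}\textsf{H}^t=0 \pmod 2$ for the Hamming parity-check matrix, which is easy to verify:
\begin{verbatim}
import numpy as np

# Parity-check matrix of the [7,4,3] Hamming code.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# C_X = C_Z requires C_X^perp inside C_Z, i.e. the rows of H (a basis of
# C^perp) must themselves be codewords of C:  H H^t = 0 (mod 2).
assert not np.any(H @ H.T % 2)

n, k_X, k_Z = 7, 4, 4
k = k_X + k_Z - n        # number of encoded qubits
assert k == 1            # the [[7,1,3]] Steane code
\end{verbatim}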
\subsection{Clifford circuits}\label{sec:stabilizer_circuit} Clifford circuits are composed solely of Hadamard ({\rm H}), Phase ({\rm P}), and controlled-NOT ({\rm CNOT}) gates, defined as \begin{equation*} \text{H} = \frac{1}{\sqrt{2}}\left(
\begin{array}{cc}
1 & 1 \\
1 & -1 \\
\end{array}
\right),\ \text{P} = \left(
\begin{array}{cc}
1 & 0 \\
0 & i \\
\end{array}
\right),\ {\small \text{CNOT} = \left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
\end{array}
\right).} \end{equation*} The $n$-qubit Clifford circuits form a finite group, which, up to overall phases, is isomorphic to the binary symplectic matrix group defined in~\cite{aaronson2004improved}: \begin{mydef}[Symplectic group]\label{def:symplectic_group} The group of $2n\times 2n$ symplectic matrices over $\mathbb{Z}_2$ is defined in: \begin{equation*} {\rm Sp}(2n, \mathbb{Z}_2)\equiv \{M\in {\rm GL}(2n, \mathbb{Z}_2): MJ_nM^t=J_n\} \end{equation*} under matrix multiplication. Here $ J_n=\left(
\begin{array}{c|c}
{\bf 0} & I_n \\
I_n & {\bf 0} \\
\end{array}
\right). $ \end{mydef} In general, $M\in {\rm Sp}(2n, \mathbb{Z}_2)$ has the form $$ M=\left(
\begin{array}{c|c}
Q & R \\
\hline
\rule[0.4ex]{0pt}{8pt}
S & T \\
\end{array} \right), \label{eq:symplectic_matrix} $$ where $Q$, $R$, $S$ and $T$ are $n\times n$ square matrices satisfying the following conditions: $$ QR^t=RQ^t, \quad ST^t = TS^t, \quad Q^tT + R^t S = I_n. $$
In other words, the rows of $(Q \ | \ R)$ are symplectic partners of the rows of $(S \ | \ T)$. Thus, an $n$-qubit Clifford circuit can be represented by a $2n\times 2n$ binary matrix with respect to the basis of the binary representation of Pauli operators in (\ref{eq:general_error}). Then $UX^{{\bf a}}Z^{{\bf b}}U^\dag$ is represented by $({\bf a},{\bf b})M_U$, where $M_U$ is the binary symplectic matrix corresponding to $U$. For example, the idle circuit (no quantum gates) is represented by $I_{2n}$, the $2n\times 2n$ identity matrix. The representation of consecutive Clifford circuits $M_1,\dots, M_j$ is their binary matrix product $$ M=M_1\cdots M_j. $$
We emphasize here that the symplectic matrix $M$ acts on the binary representation of a Pauli operator from the right. The binary representations of Pauli operators and Clifford unitaries omit the overall phases of full operators. If needed, such overall phases can always be compensated by a single layer of gates consisting solely of $Z$ and $X$ gates~\footnote{Such extra layer has depth $O(1)$. Throughout the paper, Pauli gates are assumed to be free and can be directly applied to qubits. This is also true in FTQC using stabilizer codes, where logical Pauli operators are easy to realize. } on some subsets of qubits~\cite{aaronson2004improved,maslov2017Bruhat}.
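To make this bookkeeping concrete, here is a small self-contained sketch (ours) in Python: it writes down the binary symplectic matrices of H, P and $\text{C}(1,2)$ (they can be obtained by applying the gate rules enumerated below to the identity circuit), checks $MJ_nM^t=J_n$, and exercises the right action and composition:
\begin{verbatim}
import numpy as np

def J(n):
    I, O = np.eye(n, dtype=int), np.zeros((n, n), dtype=int)
    return np.block([[O, I], [I, O]])

def is_symplectic(M):
    n = M.shape[0] // 2
    return np.array_equal(M @ J(n) @ M.T % 2, J(n))

# Binary symplectic matrices acting from the right on row vectors (a|b).
H = np.array([[0, 1],
              [1, 0]])
P = np.array([[1, 1],
              [0, 1]])
CNOT = np.array([[1, 1, 0, 0],   # X1 -> X1 X2
                 [0, 1, 0, 0],   # X2 -> X2
                 [0, 0, 1, 0],   # Z1 -> Z1
                 [0, 0, 1, 1]])  # Z2 -> Z1 Z2

assert all(is_symplectic(M) for M in (H, P, CNOT))
assert np.array_equal(np.array([1, 0]) @ H % 2, [0, 1])  # H: X -> Z
assert np.array_equal(np.array([1, 0]) @ P % 2, [1, 1])  # P: X -> Y (up to phase)
assert is_symplectic(H @ P % 2)  # composition is the matrix product mod 2
\end{verbatim}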
Let $\text{C}(j,l)$ denote a CNOT gate with control qubit $j$ and target qubit $l$. The actions of appending a Hadamard, Phase, or CNOT gate to a Clifford circuit $M$ can be described as follows: \begin{enumerate}
\item A Hadamard gate on qubit $j$ exchanges columns $j$ and $n+j$ of $M$.
\item A Phase gate on qubit $j$ adds column $j$ to column $n+j$ (mod 2) of $M$.
\item $\text{C}(j,l)$ adds column $j$ to column $l$ (mod 2) of $M$ and adds column $n+l$ to column $n+j$ (mod 2) of $M$. \end{enumerate} \section{Constant depth FT Clifford circuit}\label{sec:constant_depth}
\subsection{FT syndrome measurement}\label{sec:ft_syndrome}
The goal of an error correction protocol in FTQC is to find the most likely errors during computation, based on the extracted syndromes. However, the circuits to perform a syndrome measurement may introduce additional errors into the system or yield incorrect syndromes with high probability. Therefore, the error correction may fail if not treated properly.
In this section, we briefly review two major protocols used in this paper --- Knill and Steane syndrome measurements~\cite{KnillFTNature, steane1997active}. Each scheme has its own advantages in different computation scenarios~\cite{chamberland2018deep}, such as a better threshold or a better ability to handle particular types of noise, and both can be used to construct arbitrary FT logical Clifford circuits.
\subsubsection{Knill syndrome measurement}
For an arbitrary $\llbracket n, k,d \rrbracket$ stabilizer code, one can use the logical teleportation circuit in Fig.~\ref{fig:Knill} to extract the error syndrome~\cite{steane1997active}, as proposed by Knill~\cite{KnillFTNature}. Here, two blocks of ancilla qubits are maximally entangled in the logical Bell state $|\Phi_L^+\rangle^{\otimes k}=\left(\frac{1}{\sqrt{2}}\left(|0_L\rangle\otimes |0_L\rangle+|1_L\rangle\otimes |1_L\rangle\right)\right)^{\otimes k}$. The upper block of ancilla qubits is encoded in the same code that protects the data state, while the lower one can be protected by an arbitrary stabilizer code encoding $k$ logical qubits. In this paper, we restrict ourselves to the same $\llbracket n, k,d \rrbracket$ CSS code for all blocks.
\begin{figure}
\caption{ The quantum circuit for Knill syndrome measurement and gate teleportation for an $\llbracket n, k,d \rrbracket$ stabilizer code. For a logical Clifford circuit $U_L$, if $|\Psi_L^{U_L}\rangle$ is prepared before logical Bell measurement, $U_L|\psi\rangle$ can be obtained up to some Pauli correction of $X_L^{\bf a}Z_L^{\bf b}$ on the output block, depending on the logical Bell measurement results. }
\label{fig:Knill}
\end{figure}
The logical Bell measurement in the dashed box teleports the encoded state to the lower ancilla block up to a logical Pauli correction (depending on the Bell measurement outcomes), and simultaneously obtains the error syndrome on the input data block. Both the logical Bell measurement outcomes and the syndromes are calculated from the bitwise measurement results. The circuit is intrinsically fault-tolerant because it consists solely of transversal CNOT gates and bitwise measurements.
One particular virtue of the teleportation syndrome measurement circuit is that it can also provide a straightforward way to produce any logical circuit $U_L$ (on the teleported state) of the Clifford hierarchy $C_k$ (up to a $C_{k-1}$ correction depending on the logical measurement outcomes) via the very same syndrome measurement circuit~\cite{Gottesman:1999:390}, if one can construct the ancilla state \begin{equation}
\left|\Psi_L^{U_L}\right\rangle=(I \otimes U_L)\left|\Phi^+ _L\right\rangle^{\otimes k}.
\end{equation} This construction is very useful when implementing logical circuits for large block codes. In this paper, we focus on $U \in C_2$, the Clifford group. In this case, all the $|\Psi_L^{U_L}\rangle$ are stabilizer states that can be prepared by Clifford circuits.
\subsubsection{Steane syndrome measurement}
\begin{figure}\label{fig:Steane}
\end{figure}
Now we consider an $\llbracket n, k,d \rrbracket$ CSS code for convenience in the later discussion. For CSS codes, Steane suggested a syndrome measurement circuit as shown in Fig.~\ref{fig:Steane}~\cite{steane1997active}. Here, two logical ancilla blocks encoded in the same code that protects the data state are used. Two transversal CNOT gates propagate $Z$ and $X$ errors from the data block to the ancilla blocks, and the corresponding error syndromes are calculated from the bitwise measurement outcomes. If the two ancilla blocks are prepared in the tensor product state $|0_L\rangle^{\otimes k}\otimes |+_L\rangle^{\otimes k}$, the circuit extracts the error syndromes without disturbing the encoded quantum information. Like the Knill syndrome measurement, the circuit is intrinsically fault-tolerant.
Moreover, one can simultaneously measure an arbitrary Hermitian logical Pauli operator of the form $i^{\tau_{\bf ab}}X^{\bf a}_L Z^{\bf b}_L$ while extracting syndromes, if $|\Omega^{{\bf ab}}_L\rangle$ is prepared in \begin{equation}
|\Omega^{\bf ab}_{L}\rangle =\frac{1}{\sqrt{2}}\left(I+i^{\tau_{\bf ab}}X^{\bf a}_L\otimes Z^{\bf b}_L\right)|0_L\rangle^{\otimes k}\otimes |+_L\rangle^{\otimes k}. \end{equation}
It is easy to verify the functionality of the circuit: starting with the joint state $|\psi\rangle|\Omega^{{\bf ab}}_{L}\rangle$, after the two transversal CNOTs the state becomes {\small $$
\frac{1}{\sqrt{2}}\left(|\psi\rangle |0_L\rangle^{\otimes k}|+_L\rangle^{\otimes k} + i^{\tau_{\bf ab}} X^{\bf a}_LZ^{\bf b}_L |\psi\rangle X^{\bf a}_L |0_L\rangle^{\otimes k} Z^{\bf b}_L|+_L\rangle^{\otimes k}\right). $$ } Let the measurement outcomes of the $j$th logical qubit in the upper and lower blocks be $v^{x}_j$ and $v^{z}_j\in \{0,1\}$, respectively. Then the joint output state is: \begin{widetext} \small \begin{equation} \begin{split}
&\frac{1}{\sqrt{2}}|\psi\rangle\bigotimes_{j=1}^k\left(\frac{I+(-1)^{v_j^x}X_L}{2}|0_L\rangle\right)
\bigotimes_{j=1}^{k}\left(\frac{I+(-1)^{v_j^z}Z_L}{2}|+_L\rangle\right)+\frac{1}{\sqrt{2}}i^{\tau_{\bf ab}}X_L^{\bf a}Z_L^{\bf b}|\psi\rangle\bigotimes_{j=1}^{k}\left(\frac{I+(-1)^{v_j^x}X_L}{2}X_L^{a_j}|0_L\rangle\right)
\bigotimes_{j=1}^{k}\left(\frac{I+(-1)^{v_j^z}Z_L}{2}Z_L^{b_j}|+_L\rangle\right)\\
=&\frac{1}{\sqrt{2}}\left(I+\prod_{l\in \text{supp}({\bf a})}(-1)^{v_l^x}\prod_{l\in \text{supp}({\bf b})}(-1)^{v_l^z}i^{\tau_{\bf ab}}X^a_LZ^b_L\right)|\psi\rangle\bigotimes_{j=1}^{k}\left(\frac{I+(-1)^{v_j^x}X_L}{2}|0_L\rangle\right)
\bigotimes_{j=1}^{k}\left(\frac{I+(-1)^{v_j^z}Z_L}{2}|+_L\rangle\right),\\ \end{split} \end{equation} \end{widetext}
which is the state after the measurement of $i^{\tau_{\bf ab}}X_L^{\bf a}Z_L^{\bf b}$ on $|\psi\rangle$ with measurement outcome $$ \prod_{l\in \text{supp}({\bf a})}(-1)^{v_l^x}\prod_{l\in \text{supp}({\bf b})}(-1)^{v_l^z}. $$ This circuit also allows measuring several commuting logical Pauli operators simultaneously. Here, we restrict ourselves to a commuting set of $m \leq k$ logical Pauli operators and suppose that the set to be measured is
$\{X_L^{{\bf e}_1}Z_L^{{\bf f}_1},\dots, X_L^{{\bf e}_m}Z_L^{{\bf f}_m} \}$. These operators can be simultaneously measured by replacing $|\Omega^{\bf ab}_L\rangle$ with: \begin{equation}\label{eq:state_multi_measurement}
|\Omega^{{\bf EF}}_L\rangle=\frac{1}{\sqrt{2^m}}\prod_{j=1}^{m}
\left(I+i^{\tau_{{\bf e}_j{\bf f}_j}}X_L^{{\bf e}_j}\otimes Z_L^{{\bf f}_j}\right)|0_L\rangle^{\otimes k}\otimes |+_L\rangle^{\otimes k}.
\end{equation} Note that $|\Omega^{{\bf EF}}_L\rangle$ is also a stabilizer state. Like logical circuit teleportation, one can also effectively construct any logical Clifford circuit via such Pauli measurements~\cite{Gottesman:9705052,brun2015teleportation}.
\subsection{Single-shot FT logical circuit teleportation and Pauli measurement} Ideally, if the ancilla qubits are clean and measurements are perfect, one can extract the error syndrome of the data block with logical circuit teleportation or Pauli measurements in a single round of Knill/Steane syndrome measurement.
In practice, ancillas may contain different types of errors after preparation, while the measurement outcomes can also be noisy. One needs to make sure that high weight errors do not propagate from ancilla qubits to data blocks. At the same time, reliable values of syndromes and logical operators must be established from measurement outcomes. For error correction, one can repeat the syndrome measurements several rounds to establish reliable syndromes of the data state via majority vote~\cite{Shor:1996:56, Steane:2003:042322}. However, for the purpose of logical circuit teleportation or Pauli measurements, one needs reliable values of logical operators right after the first round of measurement for subsequent correction. Otherwise, it will cause a logical error on the data state. Thus, a single-shot FT logical circuit teleportation or Pauli measurement protocol is required.
Actually, we will see this is possible if the blocks of ancilla qubits for Knill/Steane syndrome measurements do not contain any \emph{correlated} errors. Here, we define an uncorrelated error as follows~\cite{steane2002fast}: \begin{mydef}\label{def:uncorrelatation} For an $\llbracket n,k,d \rrbracket$ code correcting any Pauli error on $t=\lfloor \frac{d-1}{2} \rfloor$ qubits, we say that an error $E$ on the code block is spatially \emph{uncorrelated} if the probability of $E$ is $$ \mathrm{Pr}(E)\sim O(p^s): \begin{cases} &\text{for some }s\geq\text{wt}(E),\ \text{if wt}(E)\leq t; \\ &\text{for some }s \geq t,\ \text{if wt}(E)> t, \end{cases} $$ where the coefficients behind $O$ are not unreasonably large. \end{mydef} Otherwise, $E$ is said to be correlated. Uncorrelated errors satisfying this definition have a distribution similar to a binomial distribution; thus, the errors on the code block can be regarded as independent. We say that an ancilla is \emph{qualified} if it is free of correlated errors.
\begin{figure}
\caption{ The effective error model of the Knill (part (a)) and Steane (part(b)) syndrome measurement circuits. While $E_i$ and $E_f$ are in general correlated \emph{in time}, they are \emph{spatially} uncorrelated, if the ancilla states are qualified. }
\label{fig:equivalent_error}
\end{figure} It is obvious that no correlated error will be propagated back to the data block if the ancilla blocks are qualified. Moreover, we have: \begin{mylemma}[Effective error model]\label{lemma:effective_error} During imperfect logical state teleportation via Knill syndrome extraction, or logical Pauli measurements via Steane syndrome measurement, if errors in the same block (data or ancilla) are spatially uncorrelated according to Def.~\ref{def:uncorrelatation}, then the errors are equivalent to spatially \emph{uncorrelated} effective errors acting only on the data code block before and after the process, as shown in Fig.~\ref{fig:equivalent_error}. \end{mylemma}
It has already been shown in Ref.~\cite{brun2015teleportation} that this statement is true for Steane syndrome measurement. The basic idea is that failures occurring in any location of the circuit can be commuted forward or backward to the data code block, allowing the ancillas to be treated as clean and the measurements as perfect. Thus we can leave $E_f$ to the next round of syndrome measurements and analyze as if only $E_i$ (and $E_f$ from the previous round) have occurred, followed by perfect syndrome measurements. The same argument is also applicable to Knill syndrome measurements and hence one has: \begin{mytheorem}\label{thm:single-shot} The Knill/Steane syndrome measurement circuit can implement fault-tolerant logical circuit teleportation/Pauli measurements in a single round. \end{mytheorem} After a single-shot Knill/Steane syndrome measurement and correction, the final data state is: \begin{equation}\label{eq:final_state}
|\psi_f\rangle \propto E_f \cdot R \cdot O_L \cdot L_C \cdot E_{\text{comb}}|\psi_i\rangle. \end{equation} Here, $E_{\text{comb}}$ includes both $E_i$ in current stage and $E_f$ from previous stage; $O_L$ is the logical operation (either a teleported logical circuit, or logical Pauli measurements); $L_C$ is the Pauli correction based on the outcomes of logical measurements; $R$ is the recovery operation based on the measured syndromes, $O_L$ and $L_C$.
\subsection{Constant depth Clifford circuit via FT circuit teleportation}\label{sec:constant_teleport} For a CSS code with $k$ logical data qubits, a logical Clifford circuit requires $O(k^2/\log k)$ logical Clifford gates~\cite{aaronson2004improved, markov2008optimal}. If we implement these gates one by one in a fault-tolerant manner, it will require $O(k^2/\log k)$ qualified ancilla states and $O(k^2/\log k)$ rounds of the single-shot Knill/Steane syndrome measurement circuit. In this and the next subsection, we show that $O(1)$ qualified ancilla states and $O(1)$ steps of Knill/Steane syndrome measurements are sufficient for an arbitrary logical Clifford circuit, up to a permutation of qubits.
It is well known that any Clifford circuit has an equivalent circuit comprising 11 stages, each using only one type of gate: -H-C-P-C-P-C-H-P-C-P-C-~\cite{aaronson2004improved}. That can be further reduced to a 9-stage -C-P-C-P-H-P-C-P-C-~\cite{maslov2017Bruhat}. More specifically, one has: \begin{mytheorem}[Bruhat decomposition~\cite{maslov2017Bruhat}]\label{thm:bruhat} Any symplectic matrix $M$ of dimension $2k\times 2k$ can be decomposed as \begin{equation}\label{eq:used_bruhat} \begin{split} M=&M^{(1)}_{C} M^{(1)}_{P}M^{(2)}_{C}M^{(2)}_{P} M^{(1)}_{H} \cdot\\ &M^{(3)}_{P} \left(\pi M^{(3)}_{C}\pi^{-1}\right)M^{(4)}_{P} \left(\pi M^{(4)}_{C}\pi^{-1}\right)\pi. \end{split} \end{equation} Here, $M^{(j)}_{C} $ are -C- stage matrices containing only CNOT gates $\text{C}(q,r)$ such that $q < r$; $M^{(j)}_{P} $ and $M^{(j)}_{H} $ are -P- and -H- stage matrices; $\pi$ is a permutation matrix. \end{mytheorem}
In a -P- stage, since $\text{P}^4=I_2$, effectively there are three types of single-qubit gates: P, P$^2=Z$ and $\text{P}^3=\text{P}^\dag=\text{P}Z$. Note that we will postpone all the $Z$ gates to the final stage, and thus the -P- layer consists of at most $k$ individual Phase gates. The symplectic matrix of a -P- stage on a set of $m$ qubits is in general of the form: \begin{equation} M_P=\left(
\begin{array}{c|c}
I_k & \Lambda_k^m \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & I_k\\
\end{array} \right), \end{equation} where $\Lambda_k^m$ is a $k\times k$ diagonal matrix with $m$ 1s.
Similar to the -P- stage, since $\text{H}^2=I_2$, an -H- stage contains at most $k$ individual H gates. The symplectic matrix of an -H- stage on an arbitrary set of $m$ qubits can be written as
\begin{equation} M_H=\left(
\begin{array}{c|c}
I_k + \Lambda_k^m & \Lambda_k^m \\[2pt]
\hline
\rule[1.0ex]{0pt}{8pt}
\Lambda_k^m & I_k+\Lambda_k^m \\
\end{array} \right). \end{equation}
The corresponding symplectic matrix of a -C- stage can be written as: \begin{equation} M_C=\left(
\begin{array}{c|c}
U & {\bf 0}_k\\[2pt]
\hline
\rule[1.0ex]{0pt}{8pt}
{\bf 0}_k & \left( U^{t}\right)^{-1}\\
\end{array} \right), \end{equation} where $U$ is an invertible $k\times k$ upper triangular matrix.
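A quick numerical check (ours, for illustration only) that the three stage matrices above indeed satisfy the symplectic condition of Definition~\ref{def:symplectic_group}, for a random $\Lambda_k^m$ and a random invertible upper triangular $U$ over $\mathbb{Z}_2$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(7)
k = 6

def J(k):
    I, O = np.eye(k, dtype=int), np.zeros((k, k), dtype=int)
    return np.block([[O, I], [I, O]])

def inv_gf2(A):
    # Gauss-Jordan inversion of an invertible binary matrix over GF(2).
    n = A.shape[0]
    M = np.concatenate([A % 2, np.eye(n, dtype=int)], axis=1)
    for c in range(n):
        piv = c + int(np.argmax(M[c:, c]))
        assert M[piv, c] == 1
        M[[c, piv]] = M[[piv, c]]
        for r in range(n):
            if r != c and M[r, c]:
                M[r] ^= M[c]
    return M[:, n:]

I = np.eye(k, dtype=int)
O = np.zeros((k, k), dtype=int)
Lam = np.diag(rng.integers(0, 2, size=k))              # Lambda_k^m
U = np.triu(rng.integers(0, 2, size=(k, k)), 1) + I    # invertible upper triangular

M_P = np.block([[I, Lam], [O, I]])
M_H = np.block([[(I + Lam) % 2, Lam], [Lam, (I + Lam) % 2]])
M_C = np.block([[U, O], [O, inv_gf2(U.T)]])

for M in (M_P, M_H, M_C):
    assert np.array_equal(M @ J(k) @ M.T % 2, J(k))
\end{verbatim}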
Clearly, if one can implement each stage in $O(1)$ steps fault-tolerantly, an arbitrary logical Clifford circuit can be implemented in $O(1)$ steps. For Knill syndrome measurements, it is straightforward---one can prepare the ancilla for the circuit in each stage directly as: \begin{equation} \begin{split}
\left|\Psi_{L}^{U_P}\right\rangle&=I\otimes U_P(|0_L\rangle\otimes|0_L\rangle+|1_L\rangle\otimes |1_L\rangle)^{\otimes k},\\
\left|\Psi_{L}^{U_H}\right\rangle&=I\otimes U_H(|0_L\rangle\otimes|0_L\rangle+|1_L\rangle\otimes |1_L\rangle)^{\otimes k},\\
\left|\Psi_{L}^{U_C}\right\rangle&=I\otimes U_C(|0_L\rangle\otimes|0_L\rangle+|1_L\rangle\otimes|1_L\rangle)^{\otimes k} \end{split} \end{equation} where $U_P$, $U_H$ and $U_C$ are the corresponding unitaries for the -P-, -H- and -C- stages, respectively. Obviously, these are all CSS states up to local Clifford operations, whose binary representations at the logical level are: \begin{equation} \Psi_L^{U_P}=\left(
\begin{array}{cc|cc}
I_k & I_k & {\bf 0} & \Lambda_k^m \\[2pt]
{\bf 0} & {\bf 0} & I_k & I_k\\
\end{array} \right), \end{equation}
\begin{equation} \Psi_L^{U_H}=\left(
\begin{array}{cc|cc}
I_k & I_k+\Lambda_k^m & {\bf 0} & \Lambda_k^m \\[2pt] {\bf 0} & \Lambda_k^m & I_k & I_k+\Lambda_k^m \\
\end{array}
\right) \end{equation} assuming -P- or -H- is applied to a set of $m$ qubits, and \begin{equation} \Psi_L^{U_C}=\left(
\begin{array}{cc|cc}
I_k & U & {\bf 0} & {\bf 0} \\
{\bf 0} & {\bf 0} & I_k & \left(U^t\right)^{-1} \\
\end{array} \right). \end{equation} If these states are all qualified for all the stages, then by Theorems~\ref{thm:single-shot} and~\ref{thm:bruhat}, one can implement an arbitrary logical Clifford circuit in 9 rounds of single-shot Knill syndrome measurements. Later, we will show that all three types of ancilla states can be prepared fault-tolerantly and efficiently.
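As a consistency check (our own sketch, not part of the paper), the logical binary representations above are exactly the Bell-state stabilizers pushed through the corresponding stage circuit acting on the lower half of the logical qubits; the snippet below verifies this for $\Psi_L^{U_P}$ and $\Psi_L^{U_H}$ (the $\Psi_L^{U_C}$ case is analogous, using $M_C$):
\begin{verbatim}
import numpy as np

k = 4
I, O = np.eye(k, dtype=int), np.zeros((k, k), dtype=int)
Lam = np.diag([1, 0, 1, 1])   # an example Lambda_k^m with m = 3

# Stabilizers of the k logical Bell pairs on 2k logical qubits,
# written as rows (a_upper a_lower | b_upper b_lower).
bell = np.block([[I, I, O, O],    # X_j X_{j+k}
                 [O, O, I, I]])   # Z_j Z_{j+k}

def apply_to_lower_block(stab, M):
    # Right-multiply the (a_lower | b_lower) columns of every row by M (mod 2).
    out = stab.copy()
    cols = list(range(k, 2 * k)) + list(range(3 * k, 4 * k))
    out[:, cols] = stab[:, cols] @ M % 2
    return out

M_P = np.block([[I, Lam], [O, I]])
M_H = np.block([[(I + Lam) % 2, Lam], [Lam, (I + Lam) % 2]])

Psi_UP = np.block([[I, I, O, Lam], [O, O, I, I]])
Psi_UH = np.block([[I, (I + Lam) % 2, O, Lam], [O, Lam, I, (I + Lam) % 2]])

assert np.array_equal(apply_to_lower_block(bell, M_P), Psi_UP)
assert np.array_equal(apply_to_lower_block(bell, M_H), Psi_UH)
\end{verbatim}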
\subsection{Constant depth Clifford circuit via FT Pauli measurement}\label{sec:constant_pauli_measurement} Unlike Knill syndrome measurement, it is not so obvious how to implement the logical Clifford group using Steane syndrome measurement. In this section, we provide a constructive proof showing that by introducing $k$ extra auxiliary logical qubits (labeled as $\text{A}_1,\dots, \text{A}_k$), each stage of a logical Clifford circuit on $k$ data logical qubits ($\text{Q}_1,\dots, \text{Q}_k$) can be implemented via a constant number of Pauli operator measurements, up to a permutation of qubits. We choose an $\llbracket n, 2k, d\rrbracket$ CSS code and put the logical qubits in the following order: $\{\text{A}_1,\dots, \text{A}_k, \text{Q}_1,\dots, \text{Q}_k\}$.
\subsubsection{-P- stage}
Consider a pair of qubits $\{\text{A}_j,\text{Q}_j\}$ with $\text{A}_j$ in the $|0_L\rangle$ state. Measure operators $X_{{\text{A}_j},L}Y_{\text{Q}_{j},L}$ and then $Z_{\text{Q}_j,L}$. After swapping $\text{A}_j$ and $\text{Q}_j$, the overall effect is a Phase gate on $\text{Q}_j$ up to a Pauli correction depending on the measurement outcomes. The swap does not need to be done physically. Instead, one can just keep a record of it in software.
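As a sanity check (our own, not part of the paper), a brute-force two-qubit statevector simulation of this gadget confirms that, for every pair of measurement outcomes, the state left on $\text{A}_j$ (which the bookkeeping swap relabels as $\text{Q}_j$) equals $\text{P}|\psi\rangle$ up to a Pauli byproduct:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
P = np.diag([1.0, 1.0j])

def measure(state, op, outcome):
    # Project onto the +1/-1 eigenspace of `op` and renormalize.
    out = (np.eye(4) + outcome * op) @ state / 2
    return out / np.linalg.norm(out)

def equal_up_to_pauli_and_phase(v, w):
    return any(abs(abs(np.vdot(sigma @ w, v)) - 1) < 1e-9 for sigma in (I2, X, Y, Z))

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

init = np.kron(np.array([1, 0], dtype=complex), psi)   # qubit order (A, Q), A = |0>

for s in (+1, -1):            # outcome of X_A Y_Q
    for t in (+1, -1):        # outcome of Z_Q
        state = measure(init, np.kron(X, Y), s)
        state = measure(state, np.kron(I2, Z), t)
        # After the Z_Q measurement, Q is |0> (t=+1) or |1> (t=-1); the
        # teleported state lives on A (the "swap" is only a relabeling).
        out = state.reshape(2, 2)[:, 0 if t == +1 else 1]
        assert equal_up_to_pauli_and_phase(out, P @ psi)
\end{verbatim}
The analogous -H- gadget of the following subsubsection can be checked the same way, replacing $X\otimes Y$ and $I\otimes Z$ by $X\otimes Z$ and $I\otimes X$ and comparing against $\text{H}|\psi\rangle$.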
For $m$ Phase gates on a logical qubit set $\mathscr{M}$, since $\{X_{{\text{A}_j},L}Y_{\text{Q}_j,L}|\ j\in\mathscr{M}\}$ and $\{Z_{\text{Q}_j,L} |\ j\in\mathscr{M}\}$ are commuting operator sets, it requires only two steps of Pauli measurements by preparing two ancilla states with $4k$ logical qubits: {\small \begin{equation}\label{eq:phase_ancilla} \begin{split}
&\left|\Omega^{P_1}_L\right\rangle\\ =&\frac{1}{\sqrt{2^m}}\prod_{j\in \mathscr{M}}
\left(I+i\left(X_{j,L}X_{j+k,L}\right)\otimes Z_{j+k,L}\right)|0_L\rangle^{\otimes 2k}\otimes |+_L\rangle^{\otimes 2k} \end{split} \end{equation} } and \begin{equation}
\left|\Omega^{P_2}_L\right\rangle=\frac{1}{\sqrt{2^m}}|0_L\rangle^{\otimes 2k}\otimes \left(\prod_{j\in \mathscr{M}}\left(I+ Z_{j+k,L}\right)|+_L\rangle^{\otimes 2k}\right), \end{equation} whose binary representations at the logical level are \begin{equation} \Omega^{P_1}_L=\left(
\begin{array}{cccc|cccc}
{\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} & I_k & \Lambda_k^m & {\bf 0} & {\bf 0} \\[2pt]
\Lambda_k^m &\Lambda_k^m &{\bf 0} &{\bf 0} &{\bf 0} & I_k &{\bf 0} & {\Lambda}_k^m \\[2pt]
{\bf 0} & {\bf 0} & I_k & {\bf 0} & {\bf 0}& {\bf 0} & {\bf 0}_k & {\bf 0}\\[2pt]
{\bf 0}& {\bf 0} & {\bf 0} & I_k & {\bf 0} & \Lambda_k^m & {\bf 0}& {\bf 0}_k\\
\end{array} \right), \end{equation} and \begin{equation} \Omega^{P_2}_L=\left(
\begin{array}{ccc|ccc} {\bf 0}_{2k} & {\bf 0} & {\bf 0} & I_{2k} & {\bf 0} & {\bf 0} \\[2pt] {\bf 0} & I_k & {\bf 0 } & {\bf 0} & {\bf 0}_k& {\bf 0} \\[2pt] {\bf 0}& {\bf 0} & I_k+\Lambda_k^m & {\bf 0}& {\bf 0} & \Lambda_{k}^m \\ \end{array} \right),
\end{equation} respectively. Note that $|\Omega_L^{P_2}\rangle$ is a CSS state. $|\Omega_L^{P_1}\rangle$ is the joint $+1$ eigenstate of {\small $$
\{Z_{j,L} Z_{j+k,L}\otimes I_{2n}, Z_{j+k,L}\otimes X_{j+k,L},X_{j,L}Y_{j+k,L}\otimes Z_{j+k,L}\ |\ j\in \mathscr{M}\}, $$ }
which is also a CSS state up to Phase gates on logical qubits $\{j+k| \ j\in \mathscr{M}\}$ of the upper block, and Hadamard gates on the logical qubits $\{j+k| \ j\in \mathscr{M}\}$ of the lower block.
\subsubsection{-H- stage}
As in the -P- stage, we first consider a single H gate on a data qubit. For a pair of qubits $\{\text{A}_j,\text{Q}_j\}$ with $\text{A}_j$ in the $|0_L\rangle$ state, measure $X_{\text{A}_j,L}Z_{\text{Q}_j,L}$ and then $X_{\text{Q}_j,L}$. After swapping $\text{A}_j$ and $\text{Q}_j$, the overall effect is a Hadamard gate on $\text{Q}_j$, with $\text{A}_j$ left in the $|+_L\rangle$ state, up to a Pauli correction depending on the measurement outcomes.
For $m$ Hadamard gates on a set $\mathscr{M}$ of logical qubits, since $\{X_{\text{A}_j,L}Z_{\text{Q}_j,L}\ | \ j\in \mathscr{M} \}$ and $\{X_{\text{Q}_j,L}\ | \ j\in \mathscr{M}\}$ are both commuting sets, an -H- stage needs just two steps of Pauli measurements, using two ancilla states of $4k$ logical qubits each. The required ancilla states are \begin{equation}\label{eq:had_ancilla}
\left|\Omega^{H_1}_L\right\rangle=\frac{1}{\sqrt{2^m}}\prod_{j\in \mathscr{M}}
\left(I+X_{j,L}\otimes Z_{j+k,L}\right)|0_L\rangle^{\otimes 2k}\otimes |+_L\rangle^{\otimes 2k}, \end{equation} and \begin{equation}
\left|\Omega^{H_2}_L\right\rangle=\frac{1}{\sqrt{2^m}}\left(\prod_{j\in \mathscr{M}}
\left(I+ X_{j+k,L}\right)|0_L\rangle^{\otimes 2k}\right)\otimes |+_L\rangle^{\otimes 2k}, \end{equation} whose binary representations at the logical level are: \begin{equation} \Omega_L^{H_1}=\left(
\begin{array}{cccc|cccc}
\Lambda_k^m & {\bf 0} & {\bf 0} & {\bf 0} & I_k+\Lambda_k^m & {\bf 0} & {\bf 0} & \Lambda_k^m\\[2pt]
{\bf 0} & {\bf 0}_k & {\bf 0} & {\bf 0} & {\bf 0} & I_k & {\bf 0} & {\bf 0} \\[2pt]
{\bf 0} & {\bf 0} & I_k & {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0}_k & {\bf 0} \\[2pt]
{\bf 0} & {\bf 0} & {\bf 0} & I_k & \Lambda_k^m & {\bf 0} & {\bf 0} & {\bf 0} \\
\end{array} \right) \end{equation} and \begin{equation} \Omega^{H_2}_L=\left(
\begin{array}{ccc|ccc} {\bf 0}_k & {\bf 0 } & {\bf 0} & I_k& {\bf 0} & {\bf 0} \\[2pt] {\bf 0} & \Lambda_k^m & {\bf 0} & {\bf 0} & I_{k}+\Lambda_k^m& {\bf 0}\\[2pt] {\bf 0}& {\bf 0}&I_{2k}& {\bf 0}& {\bf 0} & {\bf 0}_{2k} \end{array} \right),
\end{equation} respectively. Note that $\left|\Omega^{H_2}_L\right\rangle$ is a CSS state. $\left|\Omega_L^{H_1}\right\rangle$ is the joint $+1$ eigenstate of $\{X_{j,L}\otimes Z_{j+k,L}, Z_{j,L}\otimes X_{j+k,L}\ |\ j\in \mathscr{M}\}$, and thus, it is a CSS state up to Hadamard gates.
\subsubsection{-C- stage} We first introduce the generalized stabilizer formalism that is helpful later. Consider a $2^k$ dimensional subspace $C(\mathcal{G})$ of the $N$ logical qubit Hilbert space, where $\mathcal{G}$ has $N-k$ stabilizer generators. We focus on the effects of Clifford circuits on the $k$ logical qubits stabilized by $\mathcal{G}$. Consider a set of matrices $\textsf{C}_{\mathcal{G}}$ of the form: \begin{equation}\label{eq:generalized_circuit matrix} \left(
\begin{array}{c|c}
Q' & R' \\
\hline
\rule[0.4ex]{0pt}{8pt}
S' & T' \\
\hline
\rule[0.4ex]{0pt}{8pt}
A & B
\end{array} \right).
\end{equation} Here, $(A|B)$ corresponds to the stabilizer generators of $\mathcal{G}$; $(Q'|R')$ and $(S'|T')$ are $k\times 2N$ binary matrices orthogonal to $(A|B)$ with respect to the symplectic inner product, and which are symplectic partners of each other. They can be regarded as ``encoded operators" on $C(\mathcal{G})$.
We define the following equivalence relation $\mathscr{R}$ in $\textsf{C}_\mathcal{G}$: two matrices \begin{equation*} {C}_1=\left(
\begin{array}{c|c}
Q'_1& R'_1 \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
S'_1 & T'_1 \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
A_1 & B_1
\end{array} \right) \quad \text{and} \quad {C}_2=\left(
\begin{array}{c|c}
Q'_2& R'_2 \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
S'_2 & T'_2 \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
A_2 & B_2
\end{array} \right),
\end{equation*} are equivalent if (a)~$(A_1|B_1)$ and $(A_2|B_2)$ generate the same stabilizer group $\mathcal{G}$; and (b)~{\tiny $\left(
\begin{array}{c|c}
Q'_1 & R'_1 \\[2pt]
\hline
\rule[0.4ex]{0pt}{5pt}
S'_1 & T'_1 \\
\end{array} \right)$} differs from {\tiny $\left(
\begin{array}{c|c}
Q'_2 & R'_2 \\[2pt]
\hline
\rule[0.4ex]{0pt}{5pt}
S'_2 & T'_2 \\
\end{array} \right)$} by multiplication of elements in $\mathcal{G}$. Thus, there is a one-to-one correspondence between $\textsf{C}_{\mathcal{G}}/\mathscr{R}$ and ${\rm Sp}(2k, \mathbb{Z}_2)$.
Therefore, $\textsf{C}_{\mathcal{G}}/\mathscr{R}$ captures the behavior of stabilizer circuits on $C(\mathcal{G})$. The circuit representation of Eq.~(\ref{eq:generalized_circuit matrix}) is called the \emph{generalized stabilizer form} (GSF) of $\mathcal{G}$.
The following lemma will also be used in the circuit construction: \begin{mylemma}\label{lemma:stabilizer_transform} Let $L_1$ be an $n\times n$ lower triangular matrix with the diagonal elements being zeros. Suppose \begin{equation*} L=(I_n \ L_1). \end{equation*} Then there exists a full-rank matrix $L'=(L_2 \ L_3)$, where $L_2$ and $L_3$ are two $n\times n$ lower triangular matrices, such that the rows of $L'$ are linear combinations of rows of $L$ and \begin{equation}\label{eq:orthogonal}
L'
\left(
\begin{array}{c}
I_n \\
I_n \\
\end{array}
\right) = L_2+L_3= I_n.
\end{equation} \end{mylemma} \begin{proof} See Appendix.~\ref{sec:proof_lemma} for details. \end{proof}
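The appendix proof is not reproduced here, but one explicit construction consistent with the lemma is easy to check numerically (our own sketch; the particular choice $L'=(I_n+L_1)^{-1}L$ is an assumption made for illustration): $W=(I_n+L_1)^{-1}$ is lower triangular over $\mathbb{Z}_2$, so $L_2=W$ and $L_3=WL_1$ are lower triangular and $L_2+L_3=W(I_n+L_1)=I_n$.
\begin{verbatim}
import numpy as np

def inv_gf2(A):
    # Gauss-Jordan inversion of an invertible binary matrix over GF(2).
    n = A.shape[0]
    M = np.concatenate([A % 2, np.eye(n, dtype=int)], axis=1)
    for c in range(n):
        piv = c + int(np.argmax(M[c:, c]))
        assert M[piv, c] == 1
        M[[c, piv]] = M[[piv, c]]
        for r in range(n):
            if r != c and M[r, c]:
                M[r] ^= M[c]
    return M[:, n:]

rng = np.random.default_rng(3)
n = 6
L1 = np.tril(rng.integers(0, 2, size=(n, n)), -1)  # strictly lower triangular
I = np.eye(n, dtype=int)

W = inv_gf2(I + L1)        # rows of L' = W (I_n  L1) are combinations of rows of L
L2, L3 = W, W @ L1 % 2

assert np.array_equal((L2 + L3) % 2, I)
assert np.array_equal(L2, np.tril(L2)) and np.array_equal(L3, np.tril(L3))
\end{verbatim}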
We now construct the sequence of Pauli measurements which can generate any -C- stage on logical qubits $\text{Q}_1,\dots , \text{Q}_k$ of an $\llbracket n,N=2k,d\rrbracket$ CSS code. We start with the GSF of an arbitrary -C- circuit with auxiliary logical qubits $\text{A}_1,\dots,\text{A}_k$ in $|+_L\rangle^{\otimes k}$: \begin{equation} \left(
\begin{array}{cc|cc}
{\bf 0}_k & U & {\bf 0}_k & {\bf 0}_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & {\bf 0}_k & {\bf 0}_k & (U^t)^{-1} \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
I_k & {\bf 0}_k & {\bf 0}_k & {\bf 0}_k \\
\end{array} \right), \end{equation} and reduce it to the idle circuit by a series of row operations. Performing this set of operations in reverse will effectively implement the target CNOT circuit.
As mentioned before, $U$ is an invertible upper triangular matrix. The GSF is then equivalent to \begin{equation}\label{eq:first_measure_before} \left(
\begin{array}{cc|cc}
U+I_k & U & {\bf 0}_k & {\bf 0}_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & {\bf 0}_k & {\bf 0}_k & (U^t)^{-1} \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
I_k & {\bf 0}_k & {\bf 0}_k & {\bf 0}_k \\
\end{array} \right)
\end{equation} since all the nonzero row vectors of $\left(U+I_k \ \ {\bf 0}_k \ | \ {\bf 0}_k \ \ {\bf 0}_k\right)$ can be generated by $\left(I_k \ \ {\bf 0}_k \ | \ {\bf 0}_k \ \ {\bf 0}_k\right)$, and we add these vectors to the first block of rows.
Since $U$ is an invertible upper triangular matrix, its diagonal elements are all 1, so the diagonal elements of $U+I_k$ are all zeros. Observe that $\left({\bf 0}_k \ {\bf 0}_k \ | \ {I}_k \ \ (U^t)^{-1}+I_k\right)$ commutes with the logical operators and is a symplectic partner of the stabilizer generators, since \begin{equation*} \left( {I}_k \ \ \ \ (U^t)^{-1}+I_k\right) \left(U+I_k \ \ \ \ U\right)^t = {\bf 0}_k,
\end{equation*} and \begin{equation*}
\left(I_k \ {\bf 0}_k \ |{\bf 0}_k \ \ {\bf 0}_k\right)\left(\ {I}_k \ \ (U^t)^{-1}+I_k \ | \ {\bf 0}_k \ {\bf 0}_k \right)^t=I_{k}.
\end{equation*} One can measure $k$ commuting logical Pauli operators $\left({\bf 0}_k \ {\bf 0}_k \ | \ {I}_k \ \ (U^t)^{-1}+I_k\right)$ simultaneously. The GSF will then be transformed into \begin{equation}\label{eq:first_measure_after} \left(
\begin{array}{cc|cc}
U+I_k & U & {\bf 0}_k & {\bf 0}_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & {\bf 0}_k & {\bf 0}_k & (U^t)^{-1} \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & {\bf 0}_k & {I}_k & (U^t)^{-1}+I_k \\[2pt]
\end{array} \right). \end{equation}
(Meanwhile, we can perform the Pauli measurements $\left(I_k \ \ {\bf 0}_k \ | \ {\bf 0}_k \ \ {\bf 0}_k\right)$ to reverse the process (from Eq.~(\ref{eq:first_measure_after}) to Eq.~(\ref{eq:first_measure_before})).)
Now, adding the third row of Eq.~(\ref{eq:first_measure_after}) to the second row, one can obtain an equivalent GSF \begin{equation} \left(
\begin{array}{cc|cc}
U+I_k & U & {\bf 0}_k & {\bf 0}_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & {\bf 0}_k & I_k & I_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & {\bf 0}_k & {I}_k & (U^t)^{-1}+I_k \\
\end{array} \right). \end{equation} Let $L=\left( {I}_k \ \ \ L_1\right)$, where $L_1=(U^t)^{-1}+I_k$ is a lower triangular matrix with all the diagonal elements being~0. By Lemma~\ref{lemma:stabilizer_transform}, the GSF can be equivalently transformed into \begin{equation}\label{eq:second_measure_before} \left(
\begin{array}{cc|cc}
U+I_k & U & {\bf 0}_k & {\bf 0}_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & {\bf 0}_k & I_k & I_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & {\bf 0}_k & L_2 & L_3 \\
\end{array} \right),
\end{equation} where $(L_2 \ \ L_3) (I_k \ \ I_k)^t=I_k$. One can measure a set of $k$ Pauli operators $(I_k \ \ I_k | \ {\bf 0}_k \ \ {\bf 0}_k)$ simultaneously and transform the GSF into \begin{equation}\label{eq:second_measure_after} \left(
\begin{array}{cc|cc}
U+I_k & U & {\bf 0}_k & {\bf 0}_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & {\bf 0}_k & I_k & I_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
I_k & I_k & {\bf 0}_k & {\bf 0}_k \\
\end{array} \right).
\end{equation} Meanwhile, measuring $\left({\bf 0}_k \ \ {\bf 0}_k \ | \ L_2 \ \ L_3\right)$ will transfer the GSF of Eq.~(\ref{eq:second_measure_after}) into Eq.~(\ref{eq:second_measure_before}). Note that the measurement of $\left({\bf 0}_k \ \ {\bf 0}_k \ | \ L_2 \ \ L_3\right)$ is equivalent to measuring $\left({\bf 0}_k \ \ {\bf 0}_k \ | \ {I}_k \ \ (U^t)^{-1}+I_k \right)$.
Now, since the stabilizer generators in Eq.~(\ref{eq:second_measure_after}) are of the form $\left(I_k \ \ I_k \ | \ {\bf 0}_k \ \ {\bf 0}_k\right)$, one can add $\left(U+I_k \ \ U+I_k \ | \ {\bf 0}_k \ \ {\bf 0}_k\right)$ to the first row of Eq.~(\ref{eq:second_measure_after}), which equivalently reduces the GSF to: \begin{equation}\label{eq:third_measure_before} \left(
\begin{array}{cc|cc}
{\bf 0}_k & I_k & {\bf 0}_k & {\bf 0}_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & {\bf 0}_k & I_k & I_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
I_k & I_k & {\bf 0}_k & {\bf 0}_k \\
\end{array} \right). \end{equation}
The final step is to eliminate the left-most $I_k$ in the second row of Eq.~(\ref{eq:third_measure_before}). This can be done by measuring $\left({\bf 0}_k \ \ {\bf 0}_k \ | \ I_k \ \ {\bf 0}_k\right)$ and adding the third row to the second. This will then transform the GSF into: \begin{equation}\label{eq:third_measure_after} \left(
\begin{array}{cc|cc}
{\bf 0}_k & I_k & {\bf 0}_k & {\bf 0}_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & {\bf 0}_k & {\bf 0}_k & I_k \\[2pt]
\hline
\rule[0.4ex]{0pt}{8pt}
{\bf 0}_k & {\bf 0}_k & I_k & {\bf 0}_k \\
\end{array} \right),
\end{equation} Meanwhile, one can measure the set of $k$ logical Pauli operators $\left(I_k \ \ I_k \ | \ {\bf 0}_k \ \ {\bf 0}_k\right)$ to transform Eq.~(\ref{eq:third_measure_after}) to Eq.~(\ref{eq:third_measure_before}).
To reverse the whole procedure above and start from Eq.~(\ref{eq:third_measure_after}), we initially set $\text{A}_1,\dots, \text{A}_k$ to $|0_L\rangle^{\otimes k}$ and perform the following three sets of Pauli measurements: \begin{equation}\label{eq:three_measurements} \begin{split}
&1.~\left(I_k \ \ I_k \ | \ {\bf 0}_k \ \ {\bf 0}_k\right),\\
&2.~\left({\bf 0}_k \ \ {\bf 0}_k \ | \ {I}_k \ \ (U^t)^{-1}+I_k \right),\\
&3.~\left(I_k \ \ {\bf 0}_k \ | \ {\bf 0}_k \ \ {\bf 0}_k\right). \end{split} \end{equation} The measurements require three $4k$ logical qubits CSS ancilla states, which are \begin{equation}\label{eq:Cnot_ancilla_1}
\left|\Omega_L^{C_1}\right\rangle=\frac{1}{\sqrt{2^k}}\left(\prod_{j=1}^k \left(I+X_{j,L} X_{j+k,L}\right)|0_L\rangle^{\otimes 2k}\right)\otimes |+_L\rangle^{\otimes 2k}, \end{equation} \begin{equation}\label{eq:Cnot_ancilla_2}
\left|\Omega_L^{C_2}\right\rangle=\frac{1}{\sqrt{2^k}} |0_L\rangle^{\otimes 2k}\otimes\left(\prod_{j=1}^k \left(I+Z^{{\bf u}_j}_L\right)|+_L\rangle^{\otimes 2k}\right), \end{equation} and \begin{equation}
\left|\Omega_L^{C_3}\right\rangle = \left(|+_L\rangle^{\otimes k}|0_L\rangle^{\otimes k}\right)\otimes |+_L\rangle^{\otimes 2k}. \end{equation} Here, ${\bf u}_j$ is the $j$th row of $\left({I}_k \quad (U^t)^{-1}+I_k \right)$. The binary representations of these states are: \begin{equation} \Omega^{C_1}_L= \left(
\begin{array}{ccc|ccc}
I_k & I_k & {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0}\\[2pt]
{\bf 0} & {\bf 0} & {\bf 0} & I_k & I_k& {\bf 0} \\[2pt]
{\bf 0}& {\bf 0}& I_{2k} & {\bf 0} & {\bf 0}& {\bf 0}_{2k}\\
\end{array}
\right), \end{equation}
\begin{equation} \Omega^{C_2}_L= \left(
\begin{array}{ccc|ccc}
{\bf 0}_{2k} & {\bf 0} & {\bf 0} & {\bf 0} & I_k & (U^t)^{-1}+I_k \\
{\bf 0}& \left[\left(U^t\right)^{-1}\right]^{t}+I_k & I_k & {\bf 0} & {\bf 0} & {\bf 0} \\
{\bf 0}& {\bf 0} &{\bf 0}& I_{2k}& {\bf 0}& {\bf 0}
\end{array}
\right), \end{equation} and \begin{equation} \Omega^{C_3}_L=\left(
\begin{array}{ccc|ccc} I_k & {\bf 0} & {\bf 0 } & {\bf 0}_k& {\bf 0} & {\bf 0} \\[2pt] {\bf 0} & {\bf 0}_k & {\bf 0} & {\bf 0} & I_k & {\bf 0} \\[2pt] {\bf 0}& {\bf 0} & I_{2k} & {\bf 0} & {\bf 0 }& {\bf 0}_{2k}\\ \end{array} \right). \end{equation}
The upper block of $\left|\Omega_L^{C_1}\right\rangle$ is simply a $k$-fold tensor product of logical Bell states. $\left|\Omega_L^{C_2}\right\rangle$ is the key resource state in our procedure to reduce the depth of -C- stage computation. All the ancillas here are CSS states. The net effect is the desired \text{-C-} stage acting on $\text{Q}_{1}, \dots, \text{Q}_k$, and the auxiliary qubits $\text{A}_{1}, \dots, \text{A}_k$ are reset to $|+_L\rangle^{\otimes k}$ (up to logical $Z$ corrections). One can transform $\text{A}_{1}, \dots, \text{A}_k$ back into $|0_L\rangle^{\otimes k}$ or just keep them and start with $|+_L\rangle^{\otimes k}$ for the next stage. The procedure for a -C- stage with $\text{A}_{1}, \dots, \text{A}_k$ initially in the $|+_L\rangle^{\otimes k}$ state is similar. In conclusion, one has the following theorem: \begin{mytheorem} For an $\llbracket n, 2k, d \rrbracket$ CSS code, any logical Clifford circuit on $k$ logical qubits can be realized fault-tolerantly by 22 rounds of single-shot Steane syndrome measurement. \end{mytheorem}
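For bookkeeping, the count of 22 simply adds up the measurement rounds per stage of the 9-stage decomposition of Theorem~\ref{thm:bruhat}; a one-line check (ours):
\begin{verbatim}
stages = list("CPCPHPCPC")           # Bruhat decomposition -C-P-C-P-H-P-C-P-C-
rounds = {"C": 3, "P": 2, "H": 2}    # Pauli-measurement rounds per stage (above)
assert sum(rounds[s] for s in stages) == 22
\end{verbatim}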
\subsection{FT preparation of qualified ancilla states } The ancilla states listed in the last subsection can be prepared fault-tolerantly by using Shor syndrome measurement to measure all stabilizer generators and logical Pauli operators, which will be discussed in Sec.~\ref{sec:related_FTQC}. In this subsection, we generalize the FT state preparation protocol in \cite{Ancilla_distillation_1, zheng2017efficient} to show that all the logical stabilizer ancilla states required in the previous subsection can be prepared fault-tolerantly by distillation with almost constant overhead in terms of the number of qubits.
Since all logical ancilla states we considered are stabilizer states, once the eigenvalues of all their stabilizers are known, one can remove the errors completely. The basic idea of distillation is shown in Fig.~\ref{fig:ft_distillation}---many copies of the logical ancilla states are prepared at the physical level via the same $U_{\text{prep}}$ and then encoded by $U_{\text{enc}}$ into the same large block code. Both $U_{\text{prep}}$ and $U_{\text{enc}}$ are noisy in practice. The encoded copies are sent into a distillation circuit (which is also noisy) and certain blocks are measured bitwise. The eigenvalues of all the stabilizers in group $\mathcal{S}$ and the logical Pauli operators of the output blocks can be estimated if the distillation circuit is constructed based on the parity-check matrix of some classical error-correcting code---the flipped eigenvalues caused by errors during state preparation and distillation can be treated as classical noise and thus be decoded. Since correlated errors remaining on a block can cause its estimated eigenvalues to be incompatible with each other, each output block needs a further compatibility check. Postselection (rejecting the blocks whose estimated eigenvalues are incompatible) is then done to remove the blocks likely containing correlated errors. Error correction is then applied based on the estimated eigenvalues of stabilizers to the remaining blocks. \begin{figure}\label{fig:ft_distillation}
\end{figure}
The distillation circuit can be synthesized according to the parity-check matrix of an $[n_c,k_c,d_c]$ classical code, which has the form $\textsf{H}=(I_{n_c-k_c}| \ \textsf{A}_c)$. Consider a group of $n_c$ ancilla blocks. Choose the first $r_c=n_c-k_c$ ancilla blocks to hold the classical parity checks, and do transversal CNOTs from the remaining $k_c$ ancillas onto each of the parity-check ancillas according to the pattern of 1s in the rows of $\textsf{A}_c$: if $[\textsf{A}_c]_{i,j}=1$, we apply a transversal CNOT from the $(r_c+j)$th ancilla to the $i$th ancilla block. Then measure all the qubits on each of the first $r_c$ ancilla blocks in the $Z$ basis, which destroys the states of those blocks and extracts information to estimate the eigenvalues of all $Z$-type stabilizers and logical operators of the remaining $k_c$ blocks. In the low error regime, after filtering out the blocks with incompatible estimated eigenvalues of stabilizers, the output blocks will contain no correlated $X$ errors after subsequent error correction if $d_c$ is larger than the distance of the underlying CSS code~\cite{zheng2017efficient}. Correlated $Z$ errors can be removed in a similar manner.
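The CNOT pattern is fixed entirely by $\textsf{A}_c$; a small helper (ours, with zero-based block indices and a function name of our own choosing) spells out the schedule for the $[3,1,3]$ repetition code used in Fig.~\ref{fig:ft_distillation_CSS}:
\begin{verbatim}
import numpy as np

def distillation_schedule(H):
    # H = (I_{r_c} | A_c).  Returns the transversal CNOTs (source, target) and
    # the blocks measured bitwise in the Z basis (blocks 0 .. r_c-1).
    r_c, n_c = H.shape
    A_c = H[:, r_c:]
    cnots = [(r_c + j, i) for i in range(r_c) for j in range(n_c - r_c)
             if A_c[i, j] == 1]
    return cnots, list(range(r_c))

# [3,1,3] repetition code: H = (I_2 | A_c) with A_c = (1, 1)^t.
H = np.array([[1, 0, 1],
              [0, 1, 1]])
cnots, measured = distillation_schedule(H)
assert cnots == [(2, 0), (2, 1)]  # block 2 is CNOT-ed transversally onto blocks 0, 1
assert measured == [0, 1]         # blocks 0 and 1 are destroyed by Z measurement
\end{verbatim}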
\begin{figure}
\caption{ Two-stage FT ancilla state distillation circuit for $|\Psi_L^{U_C}\rangle$. The distillation circuit is based on the parity-check matrix of the $[3,1,3]$ code for both stages. Actually this circuit can distill any logical CSS states fault-tolerantly. }
\label{fig:ft_distillation_CSS}
\end{figure}
One can concatenate two stages of distillation (with the output blocks of the first stage randomly shuffled) to remove both correlated $X$ and $Z$ errors. Fig.~\ref{fig:ft_distillation_CSS} shows the overall circuit to distill $\left|\Psi_L^{U_C}\right\rangle$ using the $[3,1,3]$ code for both stages. For a two-stage distillation protocol based on two classical $[n_{c_1},k_{c_1},d_{c_1}]$ and $[n_{c_2},k_{c_2},d_{c_2}]$ codes, the numbers of input and output blocks are $n_{c_1}n_{c_2}$ and $Y(p)\cdot n_{c_1}n_{c_2}$, respectively. Here, $Y(p)$ is the yield rate defined as \begin{equation} Y(p)= \frac{k_{c_1}k_{c_2}(1-R_1(p))(1-R_2(p))}{n_{c_1}n_{c_2}}, \end{equation} where $R_i(p)$ is the block rejection rate for postselection in the $i$th stage of distillation for a gate/measurement error rate $p$. Asymptotically, the rejection rate for each round of distillation is $O(p^2)$, because at least two failures are needed to cause a rejection of the output blocks. Thus, it is likely that $R_1(p)$ and $R_2(p)$ are negligible in the small $p$ regime. On the other hand, good capacity-achieving classical codes exist such that $\frac{k_{c_1}k_{c_2}}{n_{c_1}n_{c_2}}$ can be kept independent of the distance of the underlying CSS code while ensuring $d_c>d$, so that the distillation circuits are still able to output qualified ancilla states. Hence, $Y(p)$ can achieve almost $\Theta(1)$ for sufficiently low $p$.
As shown in the previous subsections, one needs several ancilla states, which can be grouped into three types:
---\emph{Type I}--- These are CSS states including $\left|\Psi_L^{U_C}\right\rangle$, $\left|\Omega_L^{P_2}\right\rangle$, $\left|\Omega_L^{H_2}\right\rangle$, $\left|\Omega_L^{C_1}\right\rangle$, $\left|\Omega_L^{C_2}\right\rangle$, and $\left|\Omega_L^{C_3}\right\rangle$. They are stabilized by logical operators which are tensor products of either $X$ or $Z$. They can be prepared and distilled by the circuit in Fig.~\ref{fig:ft_distillation_CSS}: eigenvalues of the $Z$ ($X$) stabilizer generators and $Z$ ($X$) logical operators are checked at the first (second) round to remove correlated $X$ ($Z$) errors.
---\emph{Type II}--- These are CSS states up to logical Hadamard gates applied to some logical qubits, including $\left|\Psi_L^{U_H}\right\rangle$ and $\left|\Omega_L^{H_1}\right\rangle$. In this case, we distill the upper block to remove correlated $X$ errors and the lower block to remove correlated $Z$ errors in the first round and reverse the order in the second round, as shown in Fig.~\ref{fig:ft_distillation_H}.
\begin{figure}
\caption{ Two-stage FT ancilla state distillation circuit for $\left|\Psi_L^{U_H}\right\rangle$ based on the parity-check matrix of the $[3,1,3]$ code. This circuit can be used to prepare any logical CSS state up to logical Hadamard gates fault-tolerantly. }
\label{fig:ft_distillation_H}
\end{figure}
---\emph{Type III}--- These are Type I or Type II states up to logical Phase gates applied to some logical qubits, including $\left|\Psi_L^{U_P}\right\rangle$ and $\left|\Omega_L^{P_1}\right\rangle$. We will confine our attention to doubly-even, self-dual CSS codes and require the weight of each logical $X$ operator $X_{j,L}$ to be odd.
For $\left|\Psi_L^{U_P}\right\rangle$, one could first prepare a qualified CSS state of the form \begin{equation} \Psi_L^{U_P'}=\left(
\begin{array}{cc|cc}
I_k & \Lambda_k^m & {\bf 0} & {\bf 0} \\[2pt]
{\bf 0} & {\bf 0} & \Lambda_k^m & I_k\\
\end{array} \right) \end{equation} via the distillation circuit in Fig.~\ref{fig:ft_distillation_CSS}. Then, Phase gates are applied bitwise to the lower ancilla block, which will transform the state to \begin{equation} \Psi_L^{U_P''}=\left(
\begin{array}{cc|cc}
I_k & \Lambda_k^m & {\bf 0} & \Lambda_k^m \\[2pt]
{\bf 0} & {\bf 0} & \Lambda_k^m & I_k\\
\end{array} \right).
\end{equation} This is because the bitwise Phase gates will preserve the stabilizer group while implementing logical Phase gates on all the logical qubits. Then one can apply logical CNOTs (assisted by some CSS states) from the upper block to the lower block on the remaining $k-m$ logical qubits to obtain a qualified $\left|\Psi_L^{U_P}\right\rangle$.
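The effect of the bitwise Phase gates on the stabilizer tableau can be checked directly in the binary-symplectic picture, where (up to phases) $S$ on a qubit adds that qubit's $X$-column into its $Z$-column. The sketch below is our own illustration; it assumes $\Lambda_k^m$ denotes the diagonal matrix with $1$ in its first $m$ diagonal entries, and the values of $k$ and $m$ are arbitrary placeholders.
\begin{verbatim}
# Sketch (not from the paper): transversal S on the lower ancilla block
# maps the tableau of Psi_L^{U_P'} to that of Psi_L^{U_P''}.
# Columns are ordered (X block 1, X block 2 | Z block 1, Z block 2).
import numpy as np

k, m = 4, 2                                  # placeholder parameters
I   = np.eye(k, dtype=int)
Lam = np.diag([1]*m + [0]*(k - m))           # assumed form of Lambda_k^m
O   = np.zeros((k, k), dtype=int)

tab = np.block([[I, Lam, O,   O],
                [O, O,   Lam, I]])           # tableau of Psi_L^{U_P'}

# S on each qubit of block 2: Z-part of block 2 += X-part of block 2 (mod 2).
tab[:, 3*k:4*k] = (tab[:, 3*k:4*k] + tab[:, k:2*k]) % 2

expected = np.block([[I, Lam, O,   Lam],
                     [O, O,   Lam, I  ]])    # tableau of Psi_L^{U_P''}
assert np.array_equal(tab, expected)
print("bitwise Phase gates map the U_P' tableau to the U_P'' tableau")
\end{verbatim}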
Similarly, for $\left|\Omega_L^{P_1}\right\rangle$, one can prepare a qualified Type II state of the form \begin{equation} \Omega^{P'_1}_L=\left(
\begin{array}{cccc|cccc}
I_k & {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} \\
{\bf 0} &\Lambda_k^m &{\bf 0} &{\bf 0} &{\bf 0} & I_k+\Lambda_k^m &{\bf 0} & {\Lambda}_k^m \\
{\bf 0} & {\bf 0} & I_k & {\bf 0} & {\bf 0}& {\bf 0} & {\bf 0}_k & {\bf 0}\\
{\bf 0}& {\bf 0} & {\bf 0} & I_k & {\bf 0} & \Lambda_k^m & {\bf 0}& {\bf 0}_k\\
\end{array} \right)
\end{equation} by the circuit in Fig.~\ref{fig:ft_distillation_H}. After that, bitwise Phase gates are applied to the upper block to transform $\left|\Omega^{P'_1}_L\right\rangle$ to \begin{equation} \Omega^{P''_1}_L=\left(
\begin{array}{cccc|cccc}
I_k& {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} \\[2pt]
{\bf 0}&\Lambda_k^m &{\bf 0} &{\bf 0} & {\bf 0} & I_k & {\bf 0}& {\Lambda}_k^m \\[2pt]
{\bf 0} & {\bf 0} & I_k & {\bf 0} & {\bf 0}& {\bf 0} & {\bf 0}_k & {\bf 0}\\[2pt]
{\bf 0}& {\bf 0} & {\bf 0} & I_k & {\bf 0} & \Lambda_k^m & {\bf 0}& {\bf 0}_k\\
\end{array} \right).
\end{equation} We can then measure the operators $({\bf 0} \ \ {\bf 0} \ | \ I_k \ \Lambda_k^m )$ (assisted by some CSS states) on the upper block and obtain \begin{equation} \Omega^{P_1}_L=\left(
\begin{array}{cccc|cccc}
{\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} & I_k & \Lambda_k^m & {\bf 0} & {\bf 0} \\[2pt]
\Lambda_k^m &\Lambda_k^m &{\bf 0} &{\bf 0} & {\bf 0} & I_k & {\bf 0} & {\Lambda}_k^m \\[2pt]
{\bf 0} & {\bf 0} & I_k & {\bf 0} & {\bf 0}& {\bf 0} & {\bf 0}_k & {\bf 0}\\[2pt]
{\bf 0}& {\bf 0} & {\bf 0} & I_k & {\bf 0} & \Lambda_k^m & {\bf 0}& {\bf 0}_k\\
\end{array} \right) \end{equation} up to logical Pauli corrections.
\subsection{Resource overhead} In this subsection, we estimate the average numbers of qubits and physical gates needed to implement a logical Clifford circuit.
For both Knill and Steane syndrome measurements, one only needs a constant number of rounds of circuit teleportations or Pauli measurements. Asymptotically, the fault-tolerant ancilla state distillation protocol dominates the resource cost. We can recycle the ancilla qubits after use to further reduce the overhead; however, this does not change the asymptotic scaling of the resource cost.
As discussed in the previous section, the overall number of qubits required for a two-stage state distillation protocol based on the parity-check matrices of two classical codes is $$N_q = c n n_{c_1} n_{c_2},$$ where $c$ is some constant. The numbers of gates for each $U_{\text{prep}}$ and $U_{\text{enc}}$ in Fig.~\ref{fig:ft_distillation} are $O(k^2/\log k)$ and $O(n^2/\log n)$, respectively. Therefore, the total number of gates for noisy logical stabilizer state preparation at the physical level is $$ N_{\text{enc}}= \frac{c n^2 n_{c_1}n_{c_2} }{\log n} $$ with depth $O(n)$.
The number of gates for the two-round distillation circuit depends on $\textsf{A}_{c}$, which has $O(k_c^2)$ 1s. Hence, the number of gates required for $U_{\text{dist}}$ for two stages is $$ N_{\text{dist}} = c_1 k_{c_1}^2n n_{c_2} + c_2 k_{c_2}^2 n, $$ with depth $O\left(\max\{k_{c_1},k_{c_2}\}\right)$, where $c_1,c_2$ are constants. Therefore, the overall depth of ancilla state preparation is $$ O(\max\{n,k_{c_1}, k_{c_2}\}). $$
Note that there are $Y(p) n_{c_1}n_{c_2}$ output ancilla blocks per round of distillation and hence the same number of identical circuits can be implemented. On average, one needs $$ \bar{N}_q=\frac{cn_{c_1}n_{c_2}n}{Y(p)n_{c_1}n_{c_2}}\sim O(n) $$ qubits for a logical Clifford circuit. On the other hand, the number of physical gates required for raw state preparation is $$ \bar{N}_{\text{enc}}=\frac{c n^2 n_{c_1}n_{c_2} }{Y(p)n_{c_1}n_{c_2}\log n}\sim O(n^2/\log n). $$ If two good classical capacity-achieving codes are used (e.g., low-density parity-check (LDPC) codes~\cite{MacKay:2003:CambridgeUniversityPress}) with $k_{c_i}/n_{c_i}=\Theta(1)$, $i=1,2$, and if we restrict ourselves to $k_{c_1}\lesssim n/\log n$ and $k_{c_2}/n_{c_1}=\Theta(1)$, the average number of physical gates required for distillation will be $$ \bar{N}_{\text{dist}}=\frac{c_1 k_{c_1}^2 n n_{c_2} + c_2 k_{c_2}^2 n}{Y(p)n_{c_1}n_{c_2}}\sim O(n^2/\log n). $$ To sum up, an arbitrary logical Clifford circuit requires $\bar{N}_{\text{gate}} = O(n^2/\log n)$ physical gates on average. If one considers the family of large CSS block codes (e.g. quantum LDPC codes) with $k/n\sim \Theta(1)$, only $O(k)$ physical qubits and $O(k^2/\log k)$ physical gates are needed on average to implement any logical Clifford circuit, with off-line circuit depth $O\left(\max\{k,k_{c_1}, k_{c_2}\}\right)$ for ancilla preparation. These results suggest that the numbers of qubits and gates required for logical Clifford circuits have the same scaling as at the physical level, when the physical error rate is sufficiently low and good classical LDPC codes are used.
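As a rough illustration of this bookkeeping, the sketch below packages the average-resource expressions above into a single function; all constants and the example parameters are placeholders rather than values from the analysis.
\begin{verbatim}
# Hypothetical sketch of the average resource estimates per logical
# Clifford circuit; c, c1, c2 and the example parameters are placeholders.
import math

def average_resources(n, n_c1, k_c1, n_c2, k_c2, Y, c=1.0, c1=1.0, c2=1.0):
    outputs = Y * n_c1 * n_c2                       # usable output blocks
    N_q     = c * n * n_c1 * n_c2                   # qubits in the protocol
    N_enc   = c * n**2 * n_c1 * n_c2 / math.log(n)  # raw-state preparation gates
    N_dist  = c1 * k_c1**2 * n * n_c2 + c2 * k_c2**2 * n  # two-stage distillation gates
    return {"qubits per circuit":     N_q / outputs,
            "prep gates per circuit": N_enc / outputs,
            "dist gates per circuit": N_dist / outputs}

print(average_resources(n=1000, n_c1=30, k_c1=15, n_c2=30, k_c2=15, Y=0.2))
\end{verbatim}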
\begin{remark} One can achieve very high efficiency of resource utilization with our scheme in the following scenario: a small batch of Clifford circuits is repeatedly applied in an algorithm for a certain period of time. This is because our distillation process has high throughput --- it can produce a large number of \emph{identical} ancilla states (and thus generate the same number of identical Clifford circuits), and each state preparation is efficient. Several questions arise naturally here: is there any useful quantum algorithm whose quantum circuits have such structure? In other words, is there an ansatz to adapt our FTQC architecture to design a circuit for some particular algorithm? Is there a good computation architecture to efficiently generate the large number of identical ancilla states? These questions are all open and need further investigation. One promising candidate here is the Hamiltonian simulation algorithm for quantum simulations~\cite{Lloyd:1996:1073,aspuru2005simulated, wecker2014gate,Hastings:2015_simulation,Poulin:2015qic_simulation,garnet2020quantum}. In that case, the target Hamiltonian changes slowly during the computation, so over a certain period of time the Trotter decomposition can be regarded as identical. Another candidate is optimization-type algorithms like the Quantum Approximate Optimization Algorithm (QAOA)~\cite{farhi2014quantumqaoa}, which need to apply Hadamard gates rapidly. On the other hand, to generate a large number of ancilla states, Single Instruction Multiple Data (SIMD) style architectures~\cite{heckey2015compiler, risque2016characterization}, which apply the same quantum gates to multiple qubits in the same region simultaneously, may be particularly useful. \end{remark}
\subsection{Summary} In conclusion, we provided two methods for implementing logical Clifford circuits fault-tolerantly via a constant number of steps of Knill or Steane syndrome measurement circuits \emph{in-situ}. Each method requires certain types of logical stabilizer states as ancillas. We showed that all the ancilla states listed can be prepared fault-tolerantly through two-stage distillation. Our method completely transfers the complexity of Clifford circuits at the logical level to the complexity of the state preparation $U_{\text{prep}}$ at the physical level, which can be done offline. Surprisingly, if one chooses large block codes with encoding rate $k/n\sim \Theta(1)$, the overall numbers of qubits and physical gates required for a Clifford circuit on an $\llbracket n,k,d\rrbracket$ CSS code are around $O(k)$ and $O(k^2/\log k)$, respectively, which are independent of the distance of the underlying CSS code. This is the same scaling as for a perfect Clifford circuit acting on $k$ physical qubits.
\section{Discussion}\label{sec:discussion} In this section, we compare the method proposed in this paper with some other related fault-tolerant protocols in the literature, including one-way quantum computation. Then we estimate the numbers of physical qubits and gates required for each scheme. The results are summarized in Table~\ref{table_resoure}. Since different FTQC schemes have different working regimes and performance, these results only provide a rough insight into the resource scaling. We also discuss potential improvements to ancilla state preparation to further reduce the overhead.
\begin{table*}[tp!] {\footnotesize
\begin{tabular}{c|c|c|c|c|c}
\hline
\hline
Method/Average resource cost &\, \#. physical qubits \, &\, \#. physical operations \,& \, \#. ancilla states \, & \, \emph{in-situ} depth & off-line depth \rule{0pt}{2.6ex}\,\\[2pt]
\hline
\rule{0pt}{2.6ex}
Standard circuit model & $O(k)$ & $O(k^2/\log k)$ & N/A & $O(k)$ & N/A\\[2pt]
\emph{Circuit model FTQC (this paper)}\, & $O(k)$ & $O(k^2/\log k)$ & $O(1)$ & $O(1)$ & $O(\max\{k,k_{c_1},k_{c_2}\})$ \\[2pt]
Circuit model FTQC (as in Ref.~\cite{Gottesman:1999:390, Zhou:2000:052316}) & $O(\max \{k, wd \})$ & $O(\max\{k w d, k^2/\log k\})$ & $O(kd)$ & $O(kd)$ & $O(kd)$ \\[2pt]
Circuit model FTQC (as in Ref.~\cite{steane1999efficient_Nature,steane2005fault,brun2015teleportation}) & $O(k)$ & $O\left(k^4/(\log k)^2\right)$ & $O(k^2/\log k)$ & $O(k^2/\log k)$ & $O(\max\{k,k_{c_1},k_{c_2}\})$ \\[2pt]
one-way QC (as in Ref.~\cite{Raussendorf:2003:022312}) & $O(k^3/\log k)$ & $O(k^3/\log k)$ & N/A & $O(1)$ & N/A \\[2pt]
FT one-way QC (as in Ref.~\cite{Raussendorf:2006:2242,Raussendorf:2007:199}) & $O\left(k^3 d^3 / \log k \right)$ & $O\left(k^3 d^3 / \log k \right)$ & N/A & $O(1)$& N/A\\[2pt]
Surface code (as in Ref~\cite{Folwer2012PhysRevA.86.032324}) & $O(kd^2)$ & $O(k^3d^3/\log k)$ & N/A & $O(k^2d/\log k)$ & N/A\\[3pt]
\hline
\hline \end{tabular} } \caption{\label{table_resoure} Resources required for a Clifford circuit on $k$ logical qubits at the physical and logical levels for an $\llbracket n,k, d \rrbracket$ CSS code. The physical operation count includes all state preparations, gates, and measurements, including ancilla preparation and error correction. We assume $k/n\sim \Theta(1)$ for the large block codes, and $w$ is the maximum weight of the stabilizers. Sufficient parallelization is also assumed to minimize the depth.} \end{table*}
\subsection{Related FTQC protocols}\label{sec:related_FTQC}
In this paper, we implement FT logical circuit teleportation via the single-shot Knill syndrome measurement protocol and FT ancilla distillation. It is worthwhile to compare this with the original teleportation-based FTQC in Ref.~\cite{Gottesman:1999:390,Zhou:2000:052316}. Rather than Knill syndrome measurement, Shor syndrome measurement~\cite{Shor:1996:56} is used for error correction and ancilla state preparation. Our construction of logical Clifford circuits through a constant number of steps of teleportation is also possible in that scenario, where the ancilla state preparation again dominates the resource cost.
In Ref.~\cite{Gottesman:1999:390,Zhou:2000:052316}, $O(1)$ logical ancilla states of size $O(n)$ are required. To prepare qualified logical ancilla states, one applies Shor syndrome measurement to measure $n$ stabilizers, including stabilizer generators and logical operators. Each measurement needs one cat state. Each cat state consists of $O(w)$ qubits, which takes $O(w)$ CNOTs to prepare, where $w$ is the maximum weight of the stabilizers. A verification is also required after the raw preparation of each cat state, which also takes $O(w)$ CNOTs and rejects the state with probability around $O(p)$. Transversal CNOTs from the verified cat state to the code block are then applied to extract the eigenvalues of the stabilizers, which takes $O(w)$ CNOTs. To establish reliable eigenvalues for the stabilizers of an $\llbracket n,k,d\rrbracket$ code, $O(d)$ rounds of Shor syndrome measurements and a majority vote are required for each stabilizer. Thus the overall number of physical gates required for state preparation is $O(\max\{nwd, n^2/\log n\})$ with depth $O(nd)$.
For large block codes with $k/n\sim \Theta(1)$, the number of physical gates required for ancilla state preparation is $O(\max\{kwd, k^2/\log k\})$ with depth $O(kd)$. Meanwhile, $O(1)$ logical ancilla states are required, which needs $O(k)$ physical ancilla qubits altogether. It also takes $O(kd)$ rounds of serial Shor syndrome measurements (since the stabilizers are in general highly overlapping) to do error correction on the data block, and each round consumes a verified cat state. Hence, the depth for a logical Clifford circuit is $O(kd)$ and the same number of verified cat states is needed. Assuming the qubits supporting cat states are recycled after they are measured, one needs $O(wd)$ ancilla qubits for cat states throughout the process after parallelization. The total number of ancilla qubits is thus $O(\max\{k, wd\})$. This way of implementing logical Clifford circuits needs more physical gates when $w$ is large and takes a much longer computation time for large $k$ and $d$.
Our scheme also greatly simplifies the block-code-based FTQC using Steane syndrome measurement in Ref.~\cite{steane1999efficient_Nature,steane2005fault,brun2015teleportation}. There, logical Clifford gates are implemented one by one, and thus $O(k^2/\log k)$ different ancilla states of size $2n$ qubits are required and $O(k^2/\log k)$ rounds of Steane syndrome measurements are needed. With the same ancilla distillation protocol and ancilla recycling, one needs $O(k^4/(\log k)^2)$ gates and $O(k)$ qubits per Clifford circuit on average for finite-rate codes with $k/n\sim \Theta(1)$.
\subsection{One-way quantum computing} For one-way QC, one initially prepares a cluster state consisting of a large number of qubits. Quantum information is then loaded onto the cluster and processed through single-qubit measurements on the cluster state substrate. It can be shown that all quantum circuits can be mapped to the form of one-way QC. In general, one-qubit measurements are performed in a certain temporal order and in a spatial pattern of adaptive measurement bases based on previous measurement outcomes.
Interestingly, no measurement bases have to be adjusted for the qubits supporting Clifford circuits (i.e., those on which the operator $X$, $Y$, or $Z$ is measured). Thus, for any given quantum circuit, all of its Clifford gates can be realized \emph{simultaneously} in the first round of single-qubit measurements, regardless of their space-time locations in the circuit~\cite{Raussendorf:2003:022312}, if a sufficiently large cluster state is provided to support the whole computation circuit. Specifically, for a cluster state in 2D, it requires $O(k^3/\log k)$ supporting cluster qubits and single-qubit measurements for an instantaneous Clifford circuit without error correction.
However, it is difficult to control errors if a cluster state large enough to support the entire quantum computation is used, since the computation might reach certain qubits only after a long time, so that these qubits would already suffer significant errors. This scheme is not fault-tolerant. By contrast, if the computation is split, then the size of sub-circuits may be adjusted so that each of them can be performed within some constant time. The measured qubits are then recycled to entangle with the unmeasured qubits to form a new cluster for the next computation step. In this way, each cluster qubit is exposed to constant decoherence time before being measured and the error rate is bounded. In this case, FT one-way QC is possible~\cite{Raussendorf:2003:022312}. Note that it is still possible to perform a Clifford circuit during the computation in one time step, if $O(k^3/\log k)$ qubits are provided at the same time, but it is no longer possible to finish all the Clifford gates in the computation in a single time step.
To complete the discussion, here we consider FT one-way QC on a 3D lattice as in Ref.~\cite{Raussendorf:2006:2242, Raussendorf:2007:199}. The Clifford circuits are performed through single-qubit measurements in the $Z$ basis to create topologically entangled defects in the 3D lattice. The remaining qubits are measured in the $X$ basis to provide syndrome information for 3D topological error correction~\cite{raussendorf2005long}. For $k$ encoded qubits in \cite{Raussendorf:2007:199} with distance-$d$ boundary surface codes, the number of cluster qubits needed in a single 2D slice is $O(kd^2)$, and $O(k^2/\log k)$ slices are required for an arbitrary Clifford circuit in the worst case. Thus, it requires a cluster state comprising $O(k^3d^3/\log k)$ qubits and the same number of single-qubit measurements. As a comparison, the variants of FT one-way QC in 2D based on the surface code~\cite{Folwer2012PhysRevA.86.032324} need $O(kd^2)$ physical qubits for encoding, and each logical CNOT gate takes $O(d)$ time steps.
In conclusion, even though one-way QC can in principle implement the Clifford gates of a circuit in a single time step, it requires many more physical qubits, whether it is implemented in a fault-tolerant manner or not. It is worth noting that the FT one-way QC and its 2D variants require only local operations, which is a great practical advantage, since the codes considered in our scheme are in general highly non-local. However, our results suggest the potential for huge resource reduction for FTQC if non-local operations are allowed.
\subsection{More efficient ancilla state preparation} The distillation protocol mentioned in this paper is basically the same as the one in Ref.~\cite{zheng2017efficient}. The main difference is that the ancilla states distilled here can generate a whole circuit rather than a single gate on the data code block. This causes extra complexity in the $U_{\text{prep}}$ stage in Fig.~\ref{fig:ft_distillation}, up to $O(k^2 /\log k)$ gates with an extra depth of $O(k)$. Meanwhile, the overall number of gates and the depth for the whole distillation protocol ($U_{\text{prep}}$, $U_{\text{enc}}$ and $U_{\text{dist}}$ combined) also scale as $O(k^2/\log k)$ and $O(k)$, respectively. Note that the extra depth of ancilla preparation is negligible if the states are produced in a pipelined manner. Thus, in the worst case, it will cause a constant-factor decrease of distillation quality. But in practice, $U_{\text{prep}}$ may only take a small portion of the whole protocol. On the other hand, since we generate a circuit rather than a single gate at a time, the quality requirement of the output ancilla states to support FTQC is relaxed, and hence so is the error rate requirement for each operation. The overall effect of the extra preparation complexity on distillation needs further exploration.
The two-stage distillation protocol gives an $O(1)$ yield rate on average. However, the number of input ancilla blocks required simultaneously is $O(n_{c_1}n_{c_2})$, which can be huge in practice. Consequently, the distillation circuit is still relatively complicated and errors can occur in many positions. As a result, the typical acceptable error rate (or threshold for distillation) is less than $10^{-4}$~\cite{zheng2017efficient}, which is challenging for current technologies like superconducting qubits~\cite{supremacy}. Meanwhile, in many cases one does not need as many as $O(k_{c_1}k_{c_2})$ identical ancilla states, i.e., that many copies of the same Clifford circuit.
There are two ways to further simplify the distillation process and reduce the overall number of qubits. First, instead of two-stage distillation, one can filter out one type of correlated errors at the beginning by single-round stabilizer measurements with the Steane Latin rectangle method~\cite{steane2002fast}, which takes advantage of the fact that only a small set of correlated errors needs to be removed according to the degeneracy of the quantum code. Then we remove the other type of correlated errors through distillation. The depth of such preparation is also $O(\max\{k,k_c\})$. Here, only $O(n_{c})$ input code blocks are required simultaneously, and the procedure generates $O(k_{c})$ identical output states. It not only reduces the complexity of the preparation circuit but also gives more flexibility. The second method is to take advantage of the symmetry of the underlying CSS codes: the qubits of different code blocks are permuted in different ways after raw preparation (though finding such permutations may be challenging), so that the correlation of errors between code blocks can be suppressed~\cite{paetznick_ben2011fault}. Consequently, it may require fewer input code blocks for distillation. In principle, these two methods can also be combined, and their combined effect needs further investigation.
\acknowledgments The funding support from the National Research Foundation \& Ministry of Education, Singapore, is acknowledged. This work is also supported by the National Research Foundation of Singapore and Yale-NUS College (through grant number IG14-LR001 and a startup grant). CYL was supported by the Ministry of Science and Technology, Taiwan under Grant MOST108-2636-E-009-004. TAB was supported by NSF Grants No. CCF-1421078 and No. MPS-1719778, and by an IBM Einstein Fellowship at the Institute for Advanced Study. \appendix
\section{Proof of Lemma~\ref{lemma:stabilizer_transform}}\label{sec:proof_lemma} \begin{proof} Let $l'_j$ denote the $j$th row vector of $L'$ and $c_p$ be the $p$th column vector of $(I_n \ I_n)^t$. Equation~(\ref{eq:orthogonal}) is equivalent to \begin{equation} l'_j c_p = \delta_{jp}, \quad \quad 1\leq j, p\leq n, \label{eq:orthogonal2} \end{equation} where $\delta$ is the Kronecker delta function.
Let $l_j$ denote the $j$th row vector of $L$. Obviously, $l_1=(1,0,\dots, 0)$, satisfying $l_1 c_p = \delta_{1p}$. Let $l'_1=l_1$.
It is easy to see that $l_j c_p = 0$ for $p>j$, since $L_1$ is a lower triangular matrix. With all the diagonal elements of $L_1$ being~0, one has \begin{equation} l_jc_j=1.\label{eq:lc}
\end{equation} Define the set $\mathscr{I}_j=\{p\ |\ l_{j}c_p =1, p < j \}$.
For $j=2,\dots,n$, let \begin{equation} l_{j}'=l_{j} + \sum_{p\in \mathscr{I}_j} l_p'. \label{eq:lp} \end{equation} We also define a matrix $L'^{(j)}$ that contains the rows $l_1',\dots,l_j'$: \begin{equation*} L'^{(j)}=\left(
\begin{array}{c}
l_1' \\[2pt]
\vdots \\[2pt]
l_j' \\
\end{array}
\right). \end{equation*} Since $L_1$ is lower triangular, and the summation of $l_p$ in Eq.~(\ref{eq:lp}) only counts the terms with $p<j$, $L'^{(j)}$ can be written as \begin{equation*} L'^{(j)}=\left(L_2^{(j)}\ L_3^{(j)}\right), \end{equation*} where $L_2^{(j)}$ and $L_3^{(j)}$ are also lower triangular matrices. Eventually, we have $L_2=L_2^{(n)}$ and $L_3=L_3^{(n)}$.
It remains to prove Eq.~(\ref{eq:orthogonal2}).
We prove this by induction. For $j = 2$, if $l_2 c_1 = 1$, one has $l_2'=l_2 + l_1$. Thus $l_2' c_1 = 0$ and $l_2'c_2=1$, since $l_1c_1=1$ and $l_1c_2=0$. Also, $l_2' c_p=0$ for $p>2$ since $L_2^{(2)}$ and $L_3^{(2)}$ are lower triangular matrices. So $l'_2 c_p = \delta_{2p}$ holds for $1\leq p \leq n$.
Now assume $l'_{1}c_p=\delta_{1p}$, $\dots,$ $l'_{j}c_p=\delta_{jp}$ hold. Then \begin{equation*} l_{j+1}'c_q=l_{j+1}c_q + \sum_{p\in \mathscr{I}_{j+1} } l_p'c_q. \end{equation*} Consider $q < j+1$ first. If $l_{j+1}c_q=1$, then $q\in \mathscr{I}_{j+1}$ and \begin{equation*} \sum_{p\in \mathscr{I}_{j+1} } l_p'c_q = \sum_{p\in \mathscr{I}_{j+1}} \delta_{pq}=1. \end{equation*} Then $l_{j+1}'c_q=0$. If $l_{j+1}c_q=0$, then $q\notin \mathscr{I}_{j+1}$ and $\sum_{p\in \mathscr{I}_{j+1}} l_p'c_q = 0$. Again, $l_{j+1}'c_q=0$. When $q=j+1$, $l'_{j+1}c_{j+1} = l_{j+1}c_{j+1} = 1$ by Eq.~(\ref{eq:lc}).
For $q>j+1$, since $L^{(j+1)}_2$ and $L^{(j+1)}_3$ are both lower triangular, $l_{j+1}'c_q = 0$. Thus, $l'_jc_p=\delta_{jp}$ holds for $1\leq j,p\leq n$.
\end{proof}
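A direct numerical check of this construction is straightforward. The sketch below is our own illustration (not part of the argument); it generates a random $L=(L_1\ L_0)$ over GF(2) satisfying the triangularity and diagonal conditions used in the proof and verifies Eq.~(\ref{eq:orthogonal2}) for the resulting $L'$.
\begin{verbatim}
# Hypothetical sketch verifying the recursive construction of L' over GF(2).
# Assumptions: L = (L_1 L_0) is n x 2n, L_1 is lower triangular with zero
# diagonal, and l_j c_j = 1, where c_p is the p-th column of (I_n I_n)^t.
import numpy as np

rng = np.random.default_rng(0)
n = 8

L1 = np.tril(rng.integers(0, 2, (n, n)), k=-1)          # zero diagonal
T  = np.tril(rng.integers(0, 2, (n, n)), k=-1) + np.eye(n, dtype=int)
L0 = (L1 + T) % 2                          # so L1 + L0 = T, unit lower triangular
L  = np.hstack([L1, L0])
C  = np.vstack([np.eye(n, dtype=int), np.eye(n, dtype=int)])   # columns c_p

Lp = np.zeros_like(L)                                    # rows l'_j
Lp[0] = L[0]
for j in range(1, n):
    row = L[j].copy()
    for p in range(j):
        if (L[j] @ C[:, p]) % 2 == 1:                    # p belongs to I_j
            row = (row + Lp[p]) % 2
    Lp[j] = row

assert np.array_equal((Lp @ C) % 2, np.eye(n, dtype=int))
print("L' (I_n I_n)^t = I_n over GF(2): construction verified")
\end{verbatim}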
\end{document}
\begin{document}
\title[Addendum BMO]{Addendum to \textquotedblleft BMO: OSCILLATIONS, SELF IMPROVEMENT, GAGLIARDO COORDINATE SPACES AND REVERSE HARDY INEQUALITIES"} \author{Mario Milman} \email{[email protected]} \urladdr{https://sites.google.com/site/mariomilman}
\begin{abstract} We provide a precise statement and a self-contained proof of a Sobolev inequality (cf. \cite[page 236 and page 237]{A}) stated in the original paper. Higher order and fractional inequalities are treated as well.
\end{abstract} \maketitle
\section{Introduction}
One of the purposes of the original paper (cf. \cite{A}) was to highlight some connections between interpolation theory and inequalities connected with the theory of $BMO$ and Sobolev spaces. This resulted in a somewhat lengthy paper and, as a consequence, many known results were only stated, and the reader was referred to the relevant literature for proofs. It has become clear, however, that a complete account of some of the results could be useful. In this expository addendum we update and correct one paragraph of the original text by providing a precise statement and proof of a Sobolev inequality which was stated in the original paper (cf. \cite[(13) page 236, and line 10, page 237]{A}). Included as well are the corresponding results for higher order and fractional inequalities.
All the results discussed in this note are known\footnote{In presenting the results yet again we have followed in part advice from Rota \cite{82}.}: the only novelty is perhaps in the unified presentation.
We shall follow the notation and the ordering of references of the original paper \cite{A} to which we shall also refer for background, priorities, historical comments, etc. Newly referenced papers will be labeled with letters.
\section{The Hardy-Littlewood-Sobolev-O'Neil program}
We let \begin{equation} \left\Vert f\right\Vert _{L(p,q)}=\left\{ \begin{array} [c]{cc} \left\{ \int_{0}^{\infty}\left( f^{\ast}(t)t^{1/p}\right) ^{q}\frac{dt} {t}\right\} ^{1/q} & 1\leq p<\infty,1\leq q\leq\infty\\ \left\Vert f\right\Vert _{L(\infty,q)} & p=\infty,1\leq q\leq\infty, \end{array} \right. \label{berta} \end{equation} where \begin{equation} \left\Vert f\right\Vert _{L(\infty,q)}:=\left\{ \int_{0}^{\infty}(f^{\ast \ast}(t)-f^{\ast}(t))^{q}\frac{dt}{t}\right\} ^{1/q}. \label{berta1} \end{equation} In particular we note that in this notation \[ L(\infty,\infty)=\{f:\sup_{t>0}\{f^{\ast\ast}(t)-f^{\ast}(t)\}<\infty\}, \] \[ L(1,1)=L^{1}. \] Moreover, if $f$ has bounded support \[ \left\Vert f\right\Vert _{L(\infty,1)}=\left\Vert f\right\Vert _{L^{\infty}}. \] In \cite[(13) page 236]{A} we stated that \textquotedblleft it was shown in \cite{7} that \begin{equation} \left\Vert f\right\Vert _{L(\bar{p},q)}\leq c_{n}\left\Vert \nabla f\right\Vert _{L(p,q)},1\leq p\leq n,\frac{1}{\bar{p}}=\frac{1}{p}-\frac{1} {n},\text{ }1\leq q\leq\infty,f\in C_{0}^{\infty}(R^{n})." \label{sobolev} \end{equation} However, to correctly state what was actually shown in \cite{7}, the indices in the displayed formula need to be restricted when $p=1$. The precise statement reads as follows:
\begin{theorem} \label{teo1}Let $n>1.$ Let $1\leq p\leq n,$ $1\leq q\leq\infty,$ and define $\frac{1}{\bar{p}}=\frac{1}{p}-\frac{1}{n}.$ Then, if $(p,q)\in(1,n]\times \lbrack1,\infty]$ \textbf{or if} $p=q=1,$ we have \begin{equation} \left\Vert f\right\Vert _{L(\bar{p},q)}\leq c_{n}(p,q)\left\Vert \left\vert \nabla f\right\vert \right\Vert _{L(p,q)},\text{ }f\in C_{0}^{\infty}(R^{n}). \label{sobol1} \end{equation}
\end{theorem}
\begin{remark} If $n=1,$ then $p=q=1,$ and (\ref{sobol1}) is an easy consequence of the fundamental theorem of Calculus. \end{remark}
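To make the Remark self-contained, here is the standard one-line sketch (our addition): for $n=1$ we have $\bar{p}=\infty$ and $q=1$, and for $f\in C_{0}^{\infty}(R)$ the fundamental theorem of Calculus gives
\[
2\left\vert f(x)\right\vert =\left\vert \int_{-\infty}^{x}f^{\prime}(s)ds-\int_{x}^{\infty}f^{\prime}(s)ds\right\vert \leq\left\Vert f^{\prime}\right\Vert _{L^{1}},
\]
so that, since $f$ has bounded support, $\left\Vert f\right\Vert _{L(\bar{p},q)}=\left\Vert f\right\Vert _{L(\infty,1)}=\left\Vert f\right\Vert _{L^{\infty}}\leq\frac{1}{2}\left\Vert f^{\prime}\right\Vert _{L^{1}}=\frac{1}{2}\left\Vert f^{\prime}\right\Vert _{L(1,1)}.$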
The corresponding higher order result (cf. \cite[line 10, page 237]{A}) reads as follows,
\begin{theorem} \label{teo2}Let $k\in N$, $k\leq n,$ $1\leq p\leq\frac{n}{k},$ $1\leq q\leq\infty.$ Define $\frac{1}{\bar{p}}=\frac{1}{p}-\frac{k}{n}.$ Then, (i) if $k<n,$ and $(p,q)\in(1,\frac{n}{k}]\times\lbrack1,\infty],$ or $(p,q)\in \{1\}\times\{1\},$ or (ii) if $n=k,$ and $p=q=1,$ we have \[ \left\Vert f\right\Vert _{L(\bar{p},q)}\leq c_{n,k}(p,q)\left\Vert \left\vert D^{k}f\right\vert \right\Vert _{L(p,q)},\text{ }f\in C_{0}^{\infty}(R^{n}), \] where $\left\vert D^{k}f\right\vert $ is the length of the vector whose components are all the partial derivatives of order $k.$ \end{theorem}
\begin{remark} Observe that when $p=\frac{n}{k},$ $p>1$, and $q=1,$ we have \[ \left\Vert f\right\Vert _{L^{\infty}}=\left\Vert f\right\Vert _{L(\infty ,1)}\preceq\left\Vert \left\vert D^{k}f\right\vert \right\Vert _{L(\frac{n} {k},1)},\text{ }f\in C_{0}^{\infty}(R^{n}). \] We also obtain an $L^{\infty}$ estimate when $p=\frac{n}{k}=1,$ and $q=1,$ \begin{equation} \left\Vert f\right\Vert _{L^{\infty}}=\left\Vert f\right\Vert _{L(\infty ,1)}\preceq\left\Vert D^{n}f\right\Vert _{L^{1}},\text{ }f\in C_{0}^{\infty }(R^{n}). \label{deacuerdo} \end{equation}
\end{remark}
In the particular case when we are working with $L^{p}$ spaces, i.e. $p=q$, \textbf{there is no need to separate the cases }$p=1$\textbf{ and }$p>1,$ and Theorems \ref{teo1} and \ref{teo2} give us what we could call the \textquotedblleft completion" of the Hardy-Littlewood-Sobolev-O'Neil program, namely
\begin{corollary} Let $1\leq k\leq n,1\leq p\leq\frac{n}{k},\frac{1}{\bar{p}}=\frac{1}{p} -\frac{k}{n}.$ Then \begin{equation} \left\Vert f\right\Vert _{L(\bar{p},p)}\leq c_{n}(p)\left\Vert D^{k} f\right\Vert _{L(p,p)},\text{ }f\in C_{0}^{\infty}(R^{n}). \label{sobsup} \end{equation}
\end{corollary}
\begin{proof} \textbf{(of Theorem \ref{teo1})}. \textbf{The case }$1<p\leq n.$ We start with the inequality (cf. \cite[(58) page 263]{A}), \begin{equation} f^{\ast\ast}(t)-f^{\ast}(t)\leq c_{n}t^{1/n}(\nabla f)^{\ast\ast}(t), \label{sob2} \end{equation} which yields \begin{equation} \left( f^{\ast\ast}(t)-f^{\ast}(t)\right) t^{1/p}t^{-1/n}\leq c_{n} t^{1/p}(\nabla f)^{\ast\ast}(t). \label{sobol2} \end{equation} If $q<\infty,$ we integrate (\ref{sobol2}) and find \begin{align*} \left\{ \int_{0}^{\infty}[\left( f^{\ast\ast}(t)-f^{\ast}(t)\right) t^{1/\bar{p}}]^{q}\frac{dt}{t}\right\} ^{1/q} & \leq c_{n}\left\{ \int _{0}^{\infty}[t^{1/p}(\nabla f)^{\ast\ast}(t)]^{q}\frac{dt}{t}\right\} ^{1/q}\\ & \leq C_{n}(p,q)\left\Vert \nabla f\right\Vert _{L(p,q)}, \end{align*} where in the last step we used Hardy's inequality (cf. \cite[Appendix 4, page 272]{St}). To identify the left hand side we consider two cases. If $p=n,$ then $\bar{p}=\infty$ and the desired result follows directly from the definitions (cf. (\ref{berta})). If $p<n,$ then we can write \begin{equation} f^{\ast\ast}(t)=\int_{t}^{\infty}\left( f^{\ast\ast}(s)-f^{\ast}(s)\right) \frac{ds}{s}, \label{from} \end{equation} and use Hardy's inequality (cf. \cite[Appendix 4, page 272]{St}) to get \[ \left\Vert f\right\Vert _{L(\bar{p},q)}\leq\left\{ \int_{0}^{\infty} [f^{\ast\ast}(t)t^{1/\bar{p}}]^{q}\frac{dt}{t}\right\} ^{1/q}\preceq\left\{ \int_{0}^{\infty}[\left( f^{\ast\ast}(t)-f^{\ast}(t)\right) t^{1/\bar{p} }]^{q}\frac{dt}{t}\right\} ^{1/q}. \] The case $q=\infty$ is easier. Indeed, if $p=n,$ the desired result follows taking a sup in (\ref{sobol2})$,$ while if $p<n,$ from (\ref{from}) we find \begin{align*} f^{\ast\ast}(t) & \leq\int_{t}^{\infty}\left( f^{\ast\ast}(s)-f^{\ast }(s)\right) s^{1/\bar{p}}s^{-1/\bar{p}}\frac{ds}{s}\\ & \preceq t^{-1/\bar{p}}\sup_{s}\left( f^{\ast\ast}(s)-f^{\ast}(s)\right) s^{1/\bar{p}}. \end{align*} Consequently \[ \left\Vert f\right\Vert _{L(\bar{p},\infty)}\preceq\sup_{s}\left( f^{\ast \ast}(s)-f^{\ast}(s)\right) s^{1/\bar{p}}. \] Therefore, combining the estimates we have obtained for the right and left hand sides, we obtain \[ \left\Vert f\right\Vert _{L(\bar{p},q)}\preceq\left\Vert \nabla f\right\Vert _{L(p,q)},\ 1<p\leq n,\ 1\leq q\leq\infty. \]
Finally, we consider the case when $p=q=1.$ In this case we have $\frac{1}{\bar{p}}=1-\frac{1}{n}.$ At this point recall the inequality (cf. \cite[page 264]{A}) \begin{align} \int_{0}^{t}\left( f^{\ast\ast}(s)-f^{\ast}(s)\right) s^{1/\bar{p}}\frac {ds}{s} & =\int_{0}^{t}\left( f^{\ast\ast}(s)-f^{\ast}(s)\right) s^{1-1/n}\frac{ds}{s}\\ & \preceq\int_{0}^{t}(\nabla f)^{\ast}(s)ds. \label{dav1} \end{align} Let $t\rightarrow\infty$ to find \[ \int_{0}^{\infty}\left( f^{\ast\ast}(s)-f^{\ast}(s)\right) s^{1/\bar{p} }\frac{ds}{s}\preceq c_{n}\int_{0}^{\infty}(\nabla f)^{\ast}(s)ds=c_{n} \left\Vert \nabla f\right\Vert _{L^{1}}=c_{n}\left\Vert \nabla f\right\Vert _{L(1,1)}. \] We conclude the proof by remarking that, as we have seen before, \[ \left\Vert f\right\Vert _{L(\bar{p},1)}\preceq\int_{0}^{\infty}\left( f^{\ast\ast}(s)-f^{\ast}(s)\right) s^{1/\bar{p}}\frac{ds}{s}. \]
\end{proof}
\section{Higher Order}
We will only deal in detail with the case $k=2$ (i.e. the case of second order derivatives), since the general case follows by induction, \textit{mutatis mutandis}.
\begin{proof} (i) Suppose first that $n>2.$ Let $\bar{p}_{1}$ and $\bar{p}_{2}$ be defined by $\frac{1}{\bar{p}_{1}}=\frac{1}{p}-\frac{1}{n}$ and $\frac{1}{\bar{p}_{2}}=\frac{1}{\bar{p}_{1}}-\frac{1}{n}=\frac{1}{p}-\frac{2}{n}=\frac{1}{\bar{p}}.$ The first step of the iteration is to observe (cf. \cite{75}) the elementary fact: \[ \left\vert \nabla(\nabla f)\right\vert \leq\left\vert D^{2}(f)\right\vert . \] Therefore, by (\ref{sob2}) we have \begin{align*} (\nabla f)^{\ast\ast}(t)-(\nabla f)^{\ast}(t) & \preceq t^{1/n} [\nabla(\nabla f)]^{\ast\ast}(t)\\ & \preceq t^{1/n}\left\vert D^{2}(f)\right\vert ^{\ast\ast}(t). \end{align*} Consequently, we find \begin{equation} \left( (\nabla f)^{\ast\ast}(t)-(\nabla f)^{\ast}(t)\right) t^{1/\bar{p} _{1}}\preceq t^{\frac{1}{p}}\left\vert D^{2}(f)\right\vert ^{\ast\ast}(t). \label{sob4} \end{equation} Suppose that $1<p\leq\frac{n}{2},$ and let $1\leq q<\infty.$ Then, from (\ref{sob4}) and a familiar argument, we get \begin{align*} \left\Vert \nabla f\right\Vert _{L(\bar{p}_{1},q)} & \preceq\left\{ \int_{0}^{\infty}[\left( (\nabla f)^{\ast\ast}(t)-(\nabla f)^{\ast }(t)\right) t^{1/\bar{p}_{1}}]^{q}\frac{dt}{t}\right\} ^{1/q}\\ & \preceq\left\{ \int_{0}^{\infty}[t^{1/p}\left\vert D^{2}(f)\right\vert ^{\ast\ast}(t)]^{q}\frac{dt}{t}\right\} ^{1/q}. \end{align*} Thus, \[ \left\Vert \nabla f\right\Vert _{L(\bar{p}_{1},q)}\preceq\left\Vert \left\vert D^{2}(f)\right\vert \right\Vert _{L(p,q)}. \] Now, combining the previous inequality with the already established first order case (cf. Theorem \ref{teo1}) we find, \begin{align*} \left\Vert f\right\Vert _{L(\bar{p}_{2},q)} & \preceq\left\Vert \nabla f\right\Vert _{L(\bar{p}_{1},q)}\\ & \preceq\left\Vert \left\vert D^{2}(f)\right\vert \right\Vert _{L(p,q)}. \end{align*} Likewise we can treat the case when $q=\infty.$ The analysis also works in the case $p=1=q.$ In this case we replace (\ref{sob4}) with (\ref{dav1}): \[ \int_{0}^{t}\left( (\nabla f)^{\ast\ast}(s)-(\nabla f)^{\ast}(s)\right) s^{1-1/n}\frac{ds}{s}\preceq\int_{0}^{t}(D^{2}f)^{\ast}(s)ds, \] which yields \[ \int_{0}^{\infty}\left( (\nabla f)^{\ast\ast}(s)-(\nabla f)^{\ast}(s)\right) s^{1-1/n}\frac{ds}{s}\preceq\int_{0}^{\infty}(D^{2}f)^{\ast}(s)ds. \] Therefore \[ \left\Vert \nabla f\right\Vert _{L(\bar{p}_{1},1)}\preceq\int_{0}^{\infty }(D^{2}f)^{\ast}(s)ds. \] At this point recall that the first order case gives us \[ \left\Vert f\right\Vert _{L(\bar{p}_{2},1)}\preceq\left\Vert \nabla f\right\Vert _{L(\bar{p}_{1},1)}. \] Thus, \[ \left\Vert f\right\Vert _{L(\bar{p}_{2},1)}\preceq\left\Vert D^{2}f\right\Vert _{L^{1}}. \]
Finally consider the case when $n=2=$ $k,$ this means that $p=\frac{2}{2}=1,$ and we let $q=1.$ Then, from \[ \int_{0}^{t}\left( (Df)^{\ast\ast}(s)-\left( Df\right) ^{\ast}(s)\right) s^{1-1/2}\frac{ds}{s}\preceq\int_{0}^{t}(D^{2}f)^{\ast}(s)ds \] we once again derive \[ \left\Vert \nabla f\right\Vert _{L(2,1)}\preceq\left\Vert D^{2}f\right\Vert _{L^{1}}. \] Moreover, since \[ \left( f^{\ast\ast}(t)-f^{\ast}(t)\right) \preceq t^{1/2}\left( \nabla f\right) ^{\ast\ast}(t) \] integrating we get \[ \left\Vert f\right\Vert _{L(\infty,1)}\preceq\left\Vert \nabla f\right\Vert _{L(2,1)}, \] consequently, we see that, \[ \left\Vert f\right\Vert _{L^{\infty}}=\left\Vert f\right\Vert _{L(\infty ,1)}\preceq\left\Vert D^{2}f\right\Vert _{L^{1}}. \]
\end{proof}
\begin{example} In the case $n>2,p=\frac{n}{2},$ $q=1,$ we have \[ \left\Vert f\right\Vert _{L(\infty,1)}\preceq\left\Vert \nabla f\right\Vert _{L(n,1)}\preceq\left\Vert D^{2}f\right\Vert _{L(\frac{n}{2},1)}, \] in other words \begin{equation} \left\Vert f\right\Vert _{L^{\infty}}\preceq\left\Vert D^{2}f\right\Vert _{L(\frac{n}{2},1)}. \label{steine} \end{equation}
\end{example}
\begin{remark} Sobolev inequalities involving only the Laplacian are usually referred to as \emph{reduced Sobolev inequalities} and there is a large literature devoted to them. For example, in the context of the previous Example, since $n/2>1$ it is possible to replace $D^{2}$ by the Laplacian in (\ref{steine}) (cf. the discussion in \cite[Chapter V]{St}). The correct \emph{reduced} analog of (\ref{steine}) when $n=2$ involves a stronger condition on the Laplacian, as was recently shown by Steinerberger \cite{Stef}, who, in particular, shows that for a domain $\Omega\subset R^{2}$ of finite measure, and $f\in C^{2}(\Omega)\cap C(\bar{\Omega}),$ there exists an absolute constant $c>0$ such that \[ \max_{x\in\Omega}\left\vert f(x)\right\vert \leq\max_{x\in\partial\Omega }\left\vert f(x)\right\vert +c\max_{x\in\Omega}\int_{\Omega}\max\{1,\log \frac{\left\vert \Omega\right\vert }{\left\vert x-y\right\vert ^{2} }\}\left\vert \Delta f(y)\right\vert dy. \] In particular, when $f$ is zero at the boundary, Steinerberger's result gives \begin{equation} \max_{x\in\Omega}\left\vert f(x)\right\vert \leq c\max_{x\in\Omega} \int_{\Omega}\max\{1,\log\frac{\left\vert \Omega\right\vert }{\left\vert x-y\right\vert ^{2}}\}\left\vert \Delta f(y)\right\vert dy. \label{steine2} \end{equation} By private correspondence Steinerberger showed the author that (\ref{steine2}) implies an inequality of the form \begin{equation} \left\Vert f\right\Vert _{L^{\infty}(\Omega)}\preceq\left\Vert \Delta f\right\Vert _{L^{1}(\Omega)}+\left\Vert \Delta f\right\Vert _{L(LogL)(\Omega )}. \label{steine3} \end{equation}
Let us informally put forward here that one can develop an approach to Steinerberger's result (\ref{steine3}) using the symmetrization techniques of this paper, if one uses a variant of symmetrization inequalities for the Laplacian, originally obtained by Maz'ya-Talenti, that was recorded in \cite[Theorem 13 (ii), page 178]{Mm}. We hope to give a detailed discussion elsewhere. \end{remark}
\section{The Fractional Case}
In this section we remark that a good deal of the analysis can also be adapted to the fractional case (cf. \cite{59}). Let us go through the details. Let $X(R^{n})$ be a rearrangement invariant space, and let $\phi_{X}(t)=\left\Vert \chi_{(0,t)}\right\Vert _{X}$ be its fundamental function. Let $w_{X}$ be the modulus of continuity associated with $X:$ \[ w_{X}(t,f)=\sup_{\left\vert h\right\vert \leq t}\left\Vert f(\circ +h)-f(\circ)\right\Vert _{X}. \] Our basic inequality will be (cf. \cite{50} and \cite{59}) \begin{equation} f^{\ast\ast}(t)-f^{\ast}(t)\leq c_{n}\frac{w_{X}(t^{1/n},f)}{\phi_{X}(t)},f\in C_{0}^{\infty}(R^{n}). \label{nueva} \end{equation} Let $\alpha\in(0,1),1\leq p\leq\frac{n}{\alpha},$ $1\leq q\leq\infty.$ Let (with the usual modification if $q=\infty)$ \[ \left\Vert f\right\Vert _{\mathring{B}_{p}^{\alpha,q}}=\left\{ \int _{0}^{\infty}[t^{-\alpha}w_{L^{p}}(t,f)]^{q}\frac{dt}{t}\right\} ^{1/q}. \]
\begin{theorem} Suppose that $\alpha\in(0,1),1\leq p\leq\frac{n}{\alpha},\frac{1}{\bar{p} }=\frac{1}{p}-\frac{\alpha}{n}.$ Then, we have \[ \left\Vert f\right\Vert _{L(\bar{p},q)}\preceq\left\Vert f\right\Vert _{\mathring{B}_{p}^{\alpha,q}},f\in C_{0}^{\infty}(R^{n}). \]
\end{theorem}
\begin{proof} Consider first the case $q<\infty.$ Let $X=L^{p},$ then $\phi_{X}(t)=t^{1/p},$ consequently (\ref{nueva}) becomes \[ f^{\ast\ast}(t)-f^{\ast}(t)\leq c_{n}\frac{w_{L^{p}}(t^{1/n},f)}{t^{1/p}},f\in C_{0}^{\infty}(R^{n}), \] which yields \begin{align*} \left\{ \int_{0}^{\infty}[(f^{\ast\ast}(t)-f^{\ast}(t))t^{\frac{1}{\bar{p}} }]^{q}\frac{dt}{t}\right\} ^{1/q} & =\left\{ \int_{0}^{\infty}[(f^{\ast \ast}(t)-f^{\ast}(t))t^{-\alpha/n}t^{1/p}]^{q}\frac{dt}{t}\right\} ^{1/q}\\ & \leq c_{n}\left\{ \int_{0}^{\infty}[t^{-\alpha/n}w_{L^{p}}(t^{1/n} ,f)]^{q}\frac{dt}{t}\right\} ^{1/q}\\ & \simeq\left\{ \int_{0}^{\infty}[t^{-\alpha}w_{L^{p}}(t,f)]^{q}\frac{dt} {t}\right\} ^{1/q}\\ & \simeq\left\Vert f\right\Vert _{\mathring{B}_{p}^{\alpha,q}}. \end{align*} It follows readily that \[ \left\Vert f\right\Vert _{L(\bar{p},q)}\preceq\left\Vert f\right\Vert _{\mathring{B}_{p}^{\alpha,q}},\text{ }f\in C_{0}^{\infty}(R^{n}). \] For the case $q=\infty$ we simply go back to \begin{equation} (f^{\ast\ast}(t)-f^{\ast}(t))t^{\frac{1}{\bar{p}}}\leq c_{n}t^{-\alpha/n} w_{L^{p}}(t^{1/n},f),\label{antigua} \end{equation} and take a sup over all $t>0$. \end{proof}
\begin{example} Note that when $p=\frac{n}{\alpha},$ then $\frac{1}{\bar{p}}=0,$ consequently if $1\leq q\leq\infty$, we have that for $f\in C_{0}^{\infty}(R^{n}),$ \begin{align} \left\Vert f\right\Vert _{L(\infty,q)} & =\left\{ \int_{0}^{\infty }[(f^{\ast\ast}(t)-f^{\ast}(t))]^{q}\frac{dt}{t}\right\} ^{1/q} \label{nueva2}\\ & \leq c_{n}\left\Vert f\right\Vert _{\mathring{B}_{\frac{n}{\alpha}} ^{\alpha,q}}.\nonumber \end{align} In particular, if $q=1,$ \[ \left\Vert f\right\Vert _{L^{\infty}}=\left\Vert f\right\Vert _{L(\infty ,1)}\leq c_{n}\left\Vert f\right\Vert _{\mathring{B}_{\frac{n}{\alpha} }^{\alpha,1}},f\in C_{0}^{\infty}(R^{n}). \]
\end{example}
The corresponding result for Besov spaces anchored on Lorentz spaces follows the same analysis. Let $1\leq p<\infty,1\leq r\leq\infty,1\leq q\leq \infty,0<\alpha<1.$ We let (with the usual modification if $q=\infty$) \[ \left\Vert f\right\Vert _{\mathring{B}_{L(p,r)}^{\alpha,q}}=\left\{ \int _{0}^{\infty}[t^{-\alpha}w_{L(p,r)}(t,f)]^{q}\frac{dt}{t}\right\} ^{1/q}. \] Note that since \[ \phi_{L(p,r)}(t)\sim t^{1/p},1\leq p<\infty,1\leq r\leq\infty, \] our basic inequality now takes the form \begin{equation} f^{\ast\ast}(t)-f^{\ast}(t)\leq c_{n}\frac{w_{L(p,r)}(t^{1/n},f)}{t^{1/p} },f\in C_{0}^{\infty}(R^{n}),1\leq p<\infty,1\leq r\leq\infty. \label{denueva} \end{equation} Then, \textit{mutatis mutandis}, we have
\begin{theorem} Suppose that $\alpha\in(0,1),1\leq p\leq\frac{n}{\alpha},\frac{1}{\bar{p} }=\frac{1}{p}-\frac{\alpha}{n}.$ Then, if $p>1,1\leq r\leq\infty,$ or $p=r=1,$ we have \[ \left\Vert f\right\Vert _{L(\bar{p},q)}\preceq\left\Vert f\right\Vert _{\mathring{B}_{L(p,r)}^{\alpha,q}},f\in C_{0}^{\infty}(R^{n}). \]
\end{theorem}
\section{More Examples and Remarks}
\subsection{On the role of the $L(\infty,q)$ spaces}
In the range $1<p<n,$ (\ref{sobol1}) and (\ref{sobsup}) yield the classical Sobolev inequalities. Suppose that $p=n.$ Then $\frac{1}{\bar{p}}=0,$ and (\ref{sobol1}) becomes \begin{equation} \left\Vert f\right\Vert _{L(\infty,q)}\preceq\left\Vert \nabla f\right\Vert _{L(n,q)},1\leq q\leq\infty. \label{hbr} \end{equation} When dealing with domains $\Omega$ with $\left\vert \Omega\right\vert <\infty,$ from (\ref{sob2}) we get, $1\leq q\leq\infty,$ \begin{equation} \left\{ \int_{0}^{\left\vert \Omega\right\vert }\left( f^{\ast\ast }(s)-f^{\ast}(s)\right) ^{q}\frac{ds}{s}\right\} ^{1/q}\preceq\left\Vert \nabla f\right\Vert _{L(n,q)},\text{ }f\in C_{0}^{\infty}(\Omega). \label{hbr1} \end{equation} To compare this result with classical results it will be convenient to normalize the \emph{norm} as follows \[ \left\Vert f\right\Vert _{L(\infty,q)(\Omega)}=\left\{ \int_{0}^{\left\vert \Omega\right\vert }\left( f^{\ast\ast}(s)-f^{\ast}(s)\right) ^{q}\frac {ds}{s}\right\} ^{1/q}+\frac{1}{\left\vert \Omega\right\vert }\int_{\Omega }\left\vert f(x)\right\vert dx. \]
\begin{remark} Note that this does not change the nature of (\ref{hbr1}) since if $f$ has compact support on $\Omega,$ then if we let $t\rightarrow\left\vert \Omega\right\vert $ in \[ f^{\ast\ast}(t)-f^{\ast}(t)\leq c_{n}t^{1/n}(\nabla f)^{\ast\ast}(t) \] we find that \begin{align*} \frac{1}{\left\vert \Omega\right\vert }\int_{\Omega}\left\vert f(x)\right\vert dx & =f^{\ast\ast}(\left\vert \Omega\right\vert )\leq\left\vert \Omega\right\vert ^{1/n-1}\left\Vert \nabla f\right\Vert _{L^{1}(\Omega)}\\ & \leq\left\Vert \nabla f\right\Vert _{L(n,q)}. \end{align*}
\end{remark}
Let us consider the case $q=n.$ It was shown in \cite[page 1227]{7} (the so-called Hansson-Brezis-Wainger-Maz'ya embedding) that \begin{align*} \left\{ \int_{0}^{\left\vert \Omega\right\vert }\left( \frac{f^{\ast\ast }(s)}{1+\log\frac{\left\vert \Omega\right\vert }{s}}\right) ^{n}\frac{ds} {s}\right\} ^{1/n} & \preceq\left\Vert f\right\Vert _{L(\infty,n)(\Omega )}\\ & \preceq\left\Vert \nabla f\right\Vert _{L(n,q)}+\left\Vert f\right\Vert _{L^{1}(\Omega)}. \end{align*} Therefore, (\ref{hbr}) implies an improvement on the Hansson-Brezis-Wainger-Maz'ya embedding. The connection with $BMO$ appears when $q=\infty,$ for then we have \[ \left\Vert f\right\Vert _{L(\infty,\infty)}\preceq\left\Vert \nabla f\right\Vert _{L(n,\infty)},f\in C_{0}^{\infty}(R^{n}). \]
Consider now the case $p=n$, $q=1$. Then (\ref{sobol1}) gives \begin{equation} \left\Vert f\right\Vert _{L(\infty,1)}\preceq\left\Vert \nabla f\right\Vert _{L(n,1)},f\in C_{0}^{\infty}(R^{n}), \label{comparada} \end{equation} which ought to be compared with the following (cf. \cite{St1}) \begin{equation} \left\Vert f\right\Vert _{L^{\infty}}\preceq\left\Vert \nabla f\right\Vert _{L(n,1)},f\in C_{0}^{\infty}(R^{n}). \label{comparada1} \end{equation} Indeed, let us show that (\ref{comparada}) gives (\ref{comparada1}). From \[ \frac{d}{dt}(tf^{\ast\ast}(t))=\frac{d}{dt}(\int_{0}^{t}f^{\ast} (s)ds)=f^{\ast}(t), \] it follows (by the product rule of Calculus) that \[ \frac{d}{dt}(f^{\ast\ast}(t))=-\left( \frac{f^{\ast\ast}(t)-f^{\ast}(t)} {t}\right) . \] Therefore, if $f$ has compact support, \begin{align*} \left\Vert f\right\Vert _{L(\infty,1)} & =\lim_{t\rightarrow\infty}\int _{0}^{t}\left( f^{\ast\ast}(s)-f^{\ast}(s)\right) \frac{ds}{s} =\lim_{t\rightarrow\infty}\left( f^{\ast\ast}(0)-f^{\ast\ast}(t)\right) \\ & =\left\Vert f\right\Vert _{L^{\infty}}-\lim_{t\rightarrow\infty}\frac{1} {t}\left\Vert f\right\Vert _{L^{1}}\\ & =\left\Vert f\right\Vert _{L^{\infty}}. \end{align*}
\subsection{The Gagliardo-Nirenberg Inequality and Weak type vs Strong Type}
It is well known that Sobolev inequalities have remarkable self-improving properties. In this section we wish to discuss the connections of these self-improving effects with symmetrization. The study is important when trying to extend Sobolev inequalities to more general contexts.
We consider three forms of the Gagliardo-Nirenberg inequality. The strong form of the Gagliardo-Nirenberg inequality is \begin{equation} \left\Vert f\right\Vert _{L(n^{\prime},1)}\preceq\left\Vert \nabla f\right\Vert _{L^{1}},f\in C_{0}^{\infty}(R^{n}), \label{via} \end{equation} which implies the classical version of the Gagliardo-Nirenberg inequality
\begin{equation} \left\Vert f\right\Vert _{L^{n^{\prime}}}\preceq\left\Vert \nabla f\right\Vert _{L^{1}},f\in C_{0}^{\infty}(R^{n}). \label{via2} \end{equation} which in turn implies the weaker version of the Gagliardo-Nirenberg inequality \begin{equation} \left\Vert f\right\Vert _{L(n^{\prime},\infty)}\preceq\left\Vert \nabla f\right\Vert _{L^{1}},f\in C_{0}^{\infty}(R^{n}). \label{via1} \end{equation}
Let us now show that (\ref{via1}) implies (\ref{via}). In \cite[(55) page 261]{A} we showed that (\ref{via1}) implies the symmetrization inequality \begin{equation} f^{\ast\ast}(t)-f^{\ast}(t)\preceq t^{1/n}(\nabla f)^{\ast\ast}(t). \label{v2} \end{equation} Conversely, (\ref{v2}) can be rewritten as \begin{equation} (f^{\ast\ast}(t)-f^{\ast}(t))t^{1-1/n}\preceq\int_{0}^{t}(\nabla f)^{\ast }(s)ds. \label{v3} \end{equation} Consequently, taking a sup over all $t>0$ we see that (\ref{v2}) in turn implies (\ref{via1}). Moreover, let us show that (\ref{v2}) implies the isoperimetric inequality (here we ignore the issue of constants to simplify the considerations). To see this suppose that $E$ is a bounded set with smooth border and let $f_{n}$ be \ a sequence of smooth functions with compact support such that $f_{n}\rightarrow\chi_{E}$ in $L^{1},$ with \[ \left\Vert \nabla f_{n}\right\Vert _{L^{1}}\rightarrow Per(E) \] where $Per(E)$ is the perimeter of $E.$ Selecting $t>\left\vert E\right\vert ,$ we see that $(f_{n}^{\ast\ast}(t)-f_{n}^{\ast}(t))\rightarrow\frac{1} {t}\left\vert E\right\vert ,$ therefore from (\ref{v3}) we find \[ \frac{1}{t}\left\vert E\right\vert t^{1-1/n}\preceq Per(E) \] therefore letting $t\rightarrow\left\vert E\right\vert ,$ gives \[ \left\vert E\right\vert ^{1-1/n}\preceq Per(E). \]
This concludes our proof that (\ref{via1}) is equivalent to (\ref{via}) since it is a well known consequence of the co-area formula that the isoperimetric inequality is equivalent to (\ref{via}) (cf. \cite{67}). At the level of symmetrization inequalities we have shown in \cite[page 263]{A} that (\ref{via}) implies the symmetrization inequality \begin{equation} \int_{0}^{t}(f^{\ast\ast}(s)-f^{\ast}(s))s^{1-1/n}\frac{ds}{s}\preceq\int _{0}^{t}(\nabla f)^{\ast}(s)ds. \label{v4} \end{equation} Moreover, conversely, taking a sup over all $t>0$ in (\ref{v4})$,$ shows that (\ref{v4}) implies (\ref{via}).
A direct proof of the fact that (\ref{v4}) implies (\ref{v2}) is straightforward. Indeed, starting with \[ \int_{t/2}^{t}\left( f^{\ast\ast}(s)-f^{\ast}(s)\right) s^{1-1/n}\frac {ds}{s}\preceq\int_{0}^{t}(\nabla f)^{\ast}(s)ds, \] and using the fact that $\left( f^{\ast\ast}(t)-f^{\ast}(t\right) )t=\int_{f^{\ast}(t)}^{\infty}\lambda_{f}(s)ds$ increases, we see that \[ \left( f^{\ast\ast}(t/2)-f^{\ast}(t/2)\right) t^{1-1/n}\preceq\int_{0} ^{t}(\nabla f)^{\ast}(s)ds, \] and (\ref{v2}) follows readily. The proof that we give now, showing that (\ref{v2}) implies (\ref{v4}) is indirect. First, as we have seen (\ref{v2}) is equivalent to the validity of (\ref{via1}) which in turn implies the following inequality\footnote{Note that by P\'{o}lya-Szeg\"{o}, $f^{\ast}$ is absolutely continuous} due to Maz'ya-Talenti (cf. \cite{65}), \begin{equation} t^{1-1/n}[-f^{\ast}(t)]^{\prime}\preceq\frac{d}{dt}(\int_{\{\left\vert f(x)\right\vert >f^{\ast}(t)\}}\left\vert \nabla f(x)\right\vert dx).\label{v5} \end{equation} To proceed further we need a new expression for $f^{\ast\ast}(t)-f^{\ast}(t),$ which we derive integrating by parts: \begin{align} f^{\ast\ast}(t)-f^{\ast}(t) & =\frac{1}{t}\int_{0}^{t}[f^{\ast}(s)-f^{\ast }(t)]ds\nonumber\\ & =\frac{1}{t}\left. (s[f^{\ast}(s)-f^{\ast}(t)])\right\vert _{s=0} ^{s=t}+\frac{1}{t}\int_{0}^{t}s[-f^{\ast}(s)]^{\prime}ds\nonumber\\ & =\frac{1}{t}\int_{0}^{t}s[-f^{\ast}(s)]^{\prime}ds.\label{numer} \end{align} Therefore, \begin{align*} \int_{0}^{t}\left( f^{\ast\ast}(s)-f^{\ast}(s)\right) s^{-1/n}ds & =\int_{0}^{t}\frac{1}{s}\int_{0}^{s}u[-f^{\ast}(u)]^{\prime}dus^{-1/n}ds\\ & =-n\int_{0}^{t}\left( \int_{0}^{s}u[-f^{\ast}(u)]^{\prime}du\right) ds^{-1/n}\\ & =\left. -n\left( \int_{0}^{s}u[-f^{\ast}(u)]^{\prime}du\right) s^{-1/n}\right\vert _{s=0}^{s=t}+n\int_{0}^{t}s[-f^{\ast}(s)]^{\prime} s^{-1/n}ds. \end{align*} We claim that we can discard the integrated term since its contribution makes the right hand side smaller. To see this note that, since (\ref{v2}) holds, (\ref{numer}) implies \[ \left( \int_{0}^{s}u[-f^{\ast}(u)]^{\prime}du\right) s^{-1/n}=\left( f^{\ast\ast}(s)-f^{\ast}(s)\right) s^{1-1/n}\preceq\int_{0}^{s}(\nabla f)^{\ast}(u)du, \] which in turn implies that $\left( \int_{0}^{s}u[-f^{\ast}(u)]^{\prime }du\right) s^{-1/n}\rightarrow0$ when $s\rightarrow0.$ Consequently, we can continue our estimates to obtain, \begin{align*} \int_{0}^{t}\left( f^{\ast\ast}(s)-f^{\ast}(s)\right) s^{-1/n}ds & \preceq n\int_{0}^{t}s[-f^{\ast}(s)]^{\prime}s^{-1/n}ds\\ & \preceq\int_{0}^{t}[-f^{\ast}(s)]^{\prime}s^{1-1/n}ds\\ & \preceq\int_{0}^{t}\frac{d}{dt}(\int_{\{\left\vert f(x)\right\vert >f^{\ast}(s)\}}\left\vert \nabla f(x)\right\vert dx)ds\text{ \ \ (by(\ref{v5} ))}\\ & \leq\int_{\{\left\vert f(x)\right\vert >f^{\ast}(t)\}}\left\vert \nabla f(x)\right\vert dx\\ & \leq\int_{0}^{t}\left( \nabla f\right) ^{\ast}(s)ds. \end{align*}
Underlying these equivalences between weak and strong inequalities is the Maz'ya truncation principle (cf. \cite{34}) which, informally, shows that, contrary to what happens for most other inequalities in analysis, in the case of Sobolev inequalities: weak implies strong!
In \cite{A} we showed the connection of the truncation method to a certain form of extrapolation of inequalities initiated by Burkholder and Gundy. The import of these considerations is that the symmetrization inequalities hold in a very general context and allow for some unification of Sobolev inequalities. For example, the preceding analysis and the corresponding symmetrization inequalities can be extended to gradients defined on metric measure spaces using a variety of methods. One method, often favored by probabilists, goes via defining the gradient by suitable limits; in this case, under suitable assumptions, we can use isoperimetry to reformulate the symmetrization inequalities and embeddings (cf. \cite{63}, \cite{64}, and the references therein). In the context of metric probability spaces with concave isoperimetric profile $I,$ the basic inequality takes the form \begin{equation} f^{\ast\ast}(t)-f^{\ast}(t)\leq\frac{t}{I(t)}\left\vert \nabla f\right\vert ^{\ast\ast}(t).\label{v6} \end{equation} For example, if we consider $R^{n}$ with Gaussian measure, the isoperimetric profile satisfies \[ I(t)\sim t(\log\frac{1}{t})^{1/2},\text{ }t\text{ near zero.} \] Thus in the Gaussian case (\ref{v6}) yields logarithmic Sobolev inequalities (cf. \cite{Mm}, \cite{63}, \cite{64}, for more on this story). A somewhat different approach, which however yields similar symmetrization inequalities, is obtained if we define the gradient indirectly via Poincar\'{e} inequalities and then derive the symmetrization inequalities using maximal inequalities. The analysis here depends on a large body of classical research on maximal functions and Poincar\'{e} inequalities (for the symmetrization inequalities that result we refer to \cite{47}, and Kalis' 2007 PhD thesis at FAU).
\end{document}
\begin{document}
\title{``Disproof of Bell's Theorem" : more critics.}
\author{ Philippe Grangier}
\affiliation{Laboratoire Charles Fabry de l'Institut d'Optique, \\CNRS, Univ Paris-Sud, CD128, 91127 Palaiseau, France}
\begin{abstract} In a series of recent papers \cite{p1,p2,p3} Joy Christian claims to have ``disproved Bell's theorem". Though his work is certainly intellectually stimulating, we argue below that his main claim is unwarranted. \end{abstract}
\maketitle
In a series of recent papers, Joy Christian introduced a model which violates Bell's inequalities, and which agrees with quantum mechanics, though at first sight it does look ``local and realistic". For details the reader is referred to \cite{p1,p2,p3}, and here we will assume that the content of these papers is correct, and use the same notation. The essential features of the proposed model are thus:
\begin{itemize}
\item the ``measurement results" denoted as $A_a(\mu) = \mu.a$ and $A_b(\mu) = \mu.b$ are algebraic quantities (bivectors) which do not commute, and which depend on the hidden variables ($\mu$, which is a trivector), the analysers directions ($a$ or $b$, which are unit vectors), and a sign $\pm 1$ which tells the outcome of the measurement, given $\mu$, and either $a$ or $b$.
\item when averaged over $\mu$, the correlation functions deduced from these algebraic ``measurement results" are real numbers, which exactly reproduce quantum mechanical predictions, and thus violate Bell's inequalities.
\end{itemize}
However, what is still lacking in that model is a way to extract the ``sign", i.e. the result of each individual (dichotomic) measurement, from the algebraic quantities $\mu.a$ or $\mu.b$. As written in \cite{p3}, we still need ``a yet to be discovered physical theory, which, when measured, should reproduce the binary outcomes $\pm 1$".
But whatever this theory is, it will give a real-valued function equal to $\pm 1$ for each particle and each measurement device, i.e. for each $\mu$, $a$ and $b$, and thus it will have to obey Bell's theorem... This brings us back to step 0: nothing is wrong with Bell's theorem.
In other words, though it seems that the only ``available" information in the algebraic quantities $\mu.a$ and $\mu.b$ is their signs, averaging over non-commuting algebraic quantities is certainly not equivalent to averaging over commuting real functions \cite{reply1}. So the model proposed by Joy Christian lacks an essential ingredient: it does not provide a physical way to extract the ``measurement results" from the algebraic quantities which are introduced. And whatever method is used, what has to be averaged according to Bell's reasoning are the real-valued functions (i.e. the measurement results), {\bf not} the algebraic quantities \cite{reply2}. Then Bell's theorem still tells us that the whole construction must be either non-local and/or non-realistic \cite{note1}, or in conflict with quantum mechanics.
More generally, Bell's theorem cannot be ``disproved", in the sense that its conclusions follow from its premises in a mathematically correct way. On the other hand, one may argue that these premises are unduly restrictive, but this is discussing, not disproving \cite{pg}. Here the conclusion is that extracting the ``sign" of a bivector is a non-trivial operation: this is an interesting mathematical remark, but not a challenge for Bell's theorem.
In order to finally conclude this discussion, it may be enlightening to consider the following ``toy model"~:
\begin{itemize}
\item The ``hidden variable" is a random variable $\epsilon$, with values $\pm 1$ with equal probabilities. If $\epsilon =1$, the particle goes along the Stern-Gerlach axis $a$, and if $\epsilon =-1$, it goes opposite to it. For correlated particles 1 and 2 with $\epsilon_1 = - \epsilon_2 = \epsilon$ (singlet state), this model obeys Bell's inequalities, and gives $S_{Bell}=2$.
\item But now let us change the nature of the ``measurement result", and consider that it is the vector $\epsilon_1 a$ for particle 1, and $\epsilon_2 b$ for particle 2. In addition, let us define the ``correlation function" for these two ``observables" \cite{reply2} by the scalar product $(\epsilon_1 a).(\epsilon_2 b)$. Since $\epsilon_1 \epsilon_2 = -1$ in all cases, the averaged correlation function is $-a.b$, as in quantum mechanics, and thus violates Bell's inequalities (an explicit CHSH computation is given just after this list). According to the terminology of \cite{p1,p2,p3}, this is a ``local realistic model disproving Bell's inequalities".
\end{itemize}
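For concreteness, here is the CHSH computation for this toy model; the four coplanar settings below are only the standard illustrative choice and are not taken from \cite{p1,p2,p3}. Writing $E(x,y)$ for the average of $(\epsilon_1 x).(\epsilon_2 y)$, one has $E(x,y)=-x.y=-\cos\theta_{xy}$, and with settings at angles $a=0$, $a'=\pi/2$, $b=\pi/4$, $b'=3\pi/4$,
\[
S_{CHSH}=\left\vert E(a,b)-E(a,b')+E(a',b)+E(a',b')\right\vert
=\left\vert -\tfrac{\sqrt{2}}{2}-\tfrac{\sqrt{2}}{2}-\tfrac{\sqrt{2}}{2}-\tfrac{\sqrt{2}}{2}\right\vert =2\sqrt{2}>2,
\]
which is exactly the quantum mechanical value, although each ``measurement result" $\epsilon_i x$ is built from a single classical bit $\epsilon_i$.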
The crucial point in this toy model is the following: by computing correlation functions with just the same rules as in \cite{p1,p2,p3}, the ``ordinary vectors" do just the same job as the ``sophisticated bivectors": they violate Bell's inequalities. Therefore, for achieving this goal, the ``sophisticated bivectors" are completely useless.
So once again: the basic problem in \cite{p1,p2,p3} is that the mathematical objects which are used to calculate the ``correlation functions" are simply not the right ones. According to Bell's formalism, which is based on usual classical statistics, these objects must be the measurement results $\pm 1$. But as soon as these objects are changed, and new rules are introduced for computing new ``correlation functions", one can easily violate Bell's inequalities, no matter whether the objects are ``ordinary vectors" or ``sophisticated bivectors".
Obviously, anybody may define new ways of calculating correlation functions if he so wishes, but this is not ``disproving Bell's theorem", because it moves too far away from Bell's hypotheses. What is at stake is rather an alternative formulation of Quantum Mechanics, and then the questions move to completely different grounds: is this new formulation correct? Is it useful? Can it handle more complicated situations such as multiparticle entanglement? etc... Answering such questions is certainly more interesting than stirring up useless polemics about a nonexistent and irrelevant ``disproving".
\vskip 2mm \centerline{-o-o-o-} \vskip 2mm Useful exchanges of ideas with Valerio Scarani and Gregor Weihs are acknowledged.
\begin{thebibliography}{1}
\bibitem{p1} ``Disproof of Bell's Theorem by Clifford Algebra Valued Local Variables", Joy Christian, arXiv:quant-ph/0703179
\bibitem{p2} ``Disproof of Bell's Theorem: Reply to Critics", Joy Christian, arXiv:quant-ph/0703244
\bibitem{p3} ``Disproof of Bell's Theorem: Further Consolidations", Joy Christian, arXiv:quant-ph/0707.1333
\bibitem{note1} Actually, in opposition to e.g. Bohm's model which is considered to be ``realistic but non-local", the present model can be said to be ``local but non-realistic", since one cannot associate simultaneous ``elements of reality" to the two non-commuting measurements $a$ and $a'$ on one side (or $b$ and $b'$ on the other side). See e.g. in~\cite{p3}: ``It is crucial to note here that a given bivector $\mu . n$ cannot be spinning either ``up'' or ``down'', or in any other way, about any other direction but $n$ (...) Thus, our observables $A_a(\mu)$ and $B_b(\mu)$ represent quite faithfully what is actually observed in a Stern-Gerlach type experiment." This is consistent with calculating the correlation functions algebraically (like in quantum mechanics), and not from real-valued functions (which would lead to Bell's theorem).
\bibitem{pg} An alternative view about \cite{p1,p2,p3} may be based on the ``contextual objectivity" point of view~\cite{pgp}~: by considering that the ``measurement result" is the bivector $\mu.a$ rather than simply $\pm 1$, one makes the ``context" (i.e. the value of $a$) an intrinsic part of the measurement result, which certainly makes sense according to \cite{pgp}. Then the algebraic ``hidden variable" $\mu$ can be seen as a way to carry the ``holistic" character of the entangled state, in a different way from the usual quantum formalism (usually $a$ and $b$ are seen as ``measurement parameters" rather than ``measurement results"). It is unclear whether or not such an approach might be interesting as an alternative formulation of quantum mechanics, but again it does not ``disprove" Bell's theorem, and it contradicts local realism just as much as quantum mechanics does. \bibitem{pgp} ``Contextual objectivity and the quantum formalism", Philippe Grangier, Int. J. Quant. Inf. {\bf 3:1}, 17 (2005); see also arXiv:quant-ph/0407025.
\bibitem{reply1} Besides the fact that \cite{p2} seems to ignore what a collegial tone is, it constantly misinterprets while accusing others of misinterpretation. What is meant in this sentence is the obvious fact that $$ \int (\mu.a) (\mu.b) d\rho(\mu) \neq \int ``sign"(\mu.a) ``sign"(\mu.b) d\rho(\mu), $$ where $``sign"$ is any function which gives the measurement result $\pm 1$, knowing $\mu.a$ and $\mu.b$: clearly the rhs has to obey Bell's theorem, while the lhs does not.
\bibitem{reply2} It is actually quite revealing that \cite{p2} keeps on using the wording ``observable" rather than ``measurement result". This ``quantum" vocabulary clearly misses the central issue in Bell's theorem, which is correlating clicks between detectors (corresponding to binary measurement results), and not correlating bivectors (which cannot be given any ``local realistic meaning"). More precisely, knowing the ``sign" of $(\mu.a)$ says nothing about the ``sign" of $(\mu.a')$, for the same given $\mu$, while in Bell's formalism $E(\lambda, a)$ and $E(\lambda, a')$ are two values of the same function, taken for the same $\lambda$ and two different measurement angles. So the proposed model \cite{p1} cannot be a local realistic model; it could at best be an alternative formulation of quantum mechanics \cite{note1}, like Bohm's theory is.
\end{thebibliography}
\end{document} |
\begin{document}
\begin{center} {\large \bf Uniformly $S$-Noetherian rings}
Wei Qi$^{a}$,\ Hwankoo Kim$^{b}$,\ Fanggui Wang$^{c}$,\ Mingzhao Chen$^{d}$,\ Wei Zhao$^{e}$
{\footnotesize a.\ School of Mathematics and Statistics, Shandong University of Technology, Zibo 255049, China\\ b.\ Division of Computer and Information Engineering, Hoseo University, Asan 31499, Republic of Korea\\ c.\ School of Mathematical Sciences, Sichuan Normal University, Chengdu 610068, China\\ d.\ College of Mathematics and Information Science, Leshan Normal University, Leshan 614000, China\\ e.\ School of Mathematics, ABa Teachers University, Wenchuan 623002, China
} \end{center}
\centerline { \bf Abstract}
\leftskip10truemm \rightskip10truemm \noindent
Let $R$ be a ring and $S$ a multiplicative subset of $R$. Then $R$ is called a uniformly $S$-Noetherian ($u$-$S$-Noetherian for abbreviation) ring provided there exists an element $s\in S$ such that for any ideal $I$ of $R$, $sI \subseteq K$ for some finitely generated sub-ideal $K$ of $I$. We give the Eakin-Nagata-Formanek Theorem for $u$-$S$-Noetherian rings. Besides, the $u$-$S$-Noetherian properties on several ring constructions are given. The notion of $u$-$S$-injective modules is also introduced and studied. Finally, we obtain the Cartan-Eilenberg-Bass Theorem for uniformly $S$-Noetherian rings. \vbox to 0.3cm{}\\ {\it Key Words:} $u$-$S$-Noetherian rings, $S$-Noetherian rings, $u$-$S$-injective modules, ring constructions.\\ {\it 2010 Mathematics Subject Classification:} 13E05, 13A15.
\leftskip0truemm \rightskip0truemm
\section{Introduction} Throughout this article, $R$ is always a commutative ring with identity. For a subset $U$ of an $R$-module $M$, we denote by $\langle U\rangle$ the submodule of $M$ generated by $U$. A subset $S$ of $R$ is called a multiplicative subset of $R$ if $1\in S$ and $s_1s_2\in S$ for any $s_1\in S$, $s_2\in S$. Recall from Anderson and Dumitrescu \cite{ad02} that a ring $R$ is called an \emph{$S$-Noetherian ring} if for any ideal $I$ of $R$, there is a finitely generated sub-ideal $K$ of $I$ such that $sI\subseteq K$ for some $s\in S$. Cohen's Theorem, the Eakin-Nagata Theorem and the Hilbert Basis Theorem for $S$-Noetherian rings are given in \cite{ad02}. Many algebraists have paid considerable attention to the notion of $S$-Noetherian rings, especially to the $S$-Noetherian properties of ring constructions. In 2007, Liu \cite{l07} characterized the rings $R$ for which the generalized power series ring $[[R^{M,\leq}]]$ is an $S$-Noetherian ring under some additional conditions. In 2014, Lim and Oh \cite{lO14} obtained some $S$-Noetherian properties of amalgamated algebras along an ideal. The following year, they \cite{lO15} studied $S$-Noetherian properties of the composite semigroup rings and the composite generalized series rings. In 2016, Ahmed and Sana \cite{as16} gave an $S$-version of the Eakin-Nagata-Formanek Theorem for $S$-Noetherian rings in the case where $S$ is finite. Very recently, Kim, Mahdou, and Zahir \cite{kmz21} established a necessary and sufficient condition for a bi-amalgamation to inherit the $S$-Noetherian property. Some generalizations of $S$-Noetherian rings can be found in \cite{bh18,kkl14}.
However, in the definition of $S$-Noetherian rings, the choice of $s\in S$ such that $sI\subseteq K\subseteq I$ with $K$ finitely generated is dependent on the ideal $I$. This dependence sets many obstacles to the further study of $S$-Noetherian rings. The main motivation of this article is to introduce and study a ``uniform'' version of $S$-Noetherian rings. In fact, we say a ring $R$ is \emph{uniformly $S$-Noetherian} ($u$-$S$-Noetherian for abbreviation) provided there exists an element $s\in S$ such that for any ideal $I$ of $R$, $sI \subseteq K$ for some finitely generated sub-ideal $K$ of $I$. Trivially, Noetherian rings are $u$-$S$-Noetherian, and $u$-$S$-Noetherian rings are $S$-Noetherian. Some counterexamples are given in Example \ref{exam-not-ut} and Example \ref{exam-not-ut-1}. We also consider the notion of $u$-$S$-Noetherian modules (see Definition \ref{us-no-module}), and then obtain the Eakin-Nagata-Formanek Theorem for $u$-$S$-Noetherian modules (see Theorem \ref{u-s-noe-char}) which generalizes some part of the result in \cite[Corollary 2.1]{as16}. The $S$-extension property of $S$-Noetherian modules is given in Proposition \ref{s-u-noe-s-exact}. In Section $3$, we mainly consider the $u$-$S$-Noetherian properties on some ring constructions, including trivial extensions, pullbacks and amalgamated algebras along an ideal (see Proposition \ref{trivial extension-usn}, Proposition \ref{pullback-usn} and Proposition \ref{amag-usn}). In Section $4$, we first introduce the notion of $u$-$S$-injective modules $E$ for which ${\rm Hom}_R(-,E)$ preserves $u$-$S$-exact sequences (see Definition \ref{u-S-tor-ext}), and then characterize it by $u$-$S$-torsion properties of the ``Ext'' functor in Theorem \ref{s-inj-ext}. The Baer's Criterion for $u$-$S$-injective modules is given in Proposition \ref{s-inj-baer}. Finally, we obtain the Cartan-Eilenberg-Bass Theorem for uniformly $S$-Noetherian rings as follows (see Theorem \ref{s-injective-ext}): \begin{theo}\label{s-injective-ext} Let $R$ be a ring, $S$ a multiplicative subset of $R$ consisting of non-zero-divisors. Then the following assertions are equivalent: \begin{enumerate} \item $R$ is $u$-$S$-Noetherian; \item any direct sum of injective modules is $u$-$S$-injective; \item any direct union of injective modules is $u$-$S$-injective. \end{enumerate} \end{theo}
\section{$u$-$S$-Noetherian rings and $u$-$S$-Noetherian modules} Let $R$ be a ring and $S$ a multiplicative subset of $R$. Recall from \cite{ad02} that $R$ is called an $S$-Noetherian ring (resp., $S$-PIR) provided that for any ideal $I$ of $R$ there exists an element $s\in S$ and a finitely (resp., principally) generated sub-ideal $K$ of $I$ such that $sI\subseteq K$. Note that the choice of $s$ is decided by the ideal $I$. Now we introduce some ``uniform'' versions of $S$-Noetherian rings and $S$-PIRs.
\begin{definition} Let $R$ be a ring and $S$ a multiplicative subset of $R$. \begin{enumerate} \item $R$ is called a $u$-$S$-Noetherian ring provided there exists an element $s\in S$ such that for any ideal $I$ of $R$, $sI \subseteq K$ for some finitely generated sub-ideal $K$ of $I$. \item $R$ is called a $u$-$S$-Principal ideal ring $($$u$-$S$-PIR for short$)$ provided there exists an element $s\in S$ such that for any ideal $I$ of $R$, $sI \subseteq (a)$ for some element $a\in I$. \end{enumerate} \end{definition}
If the element $s$ can be chosen to be the identity in the definition of $u$-$S$-Noetherian rings, then $u$-$S$-Noetherian rings are exactly Noetherian rings. Thus, every Noetherian ring is $u$-$S$-Noetherian. However, the converse does not hold generally. \begin{example}\label{exam-not-ut}
Let $R=\prod\limits_{i=1}^{\infty}\mathbb{Z}_2$ be the countably infinite direct product of copies of $\mathbb{Z}_2$. Then $R$ is not Noetherian. Let $e_i$ be the element in $R$ with $i$-th component $1$ and all other components $0$. Denote $S=\{1,e_i|i\geq 1\}$. Then $R$ is $u$-$S$-Noetherian. Indeed, let $I$ be an ideal of $R$. If every element of $I$ has first component equal to $0$, then $e_1I=0$. Otherwise $e_1I=e_1R\subseteq I$. In either case $e_1I$ is contained in a principally generated sub-ideal of $I$. Consequently $R$ is a $u$-$S$-PIR, and so is $u$-$S$-Noetherian. \end{example}
Let $R$ be a ring, $M$ an $R$-module and $S$ a multiplicative subset of $R$. For any $s\in S$, consider the multiplicative subset $S_s=\{1,s,s^2,....\}$ of $R$ contained in $S$. We denote by $M_s$ the localization of $M$ at $S_s$. Certainly, $M_s\cong M\otimes_RR_s$.
\begin{lemma} \label{s-loc-u-noe}
Let $R$ be a ring and $S$ a multiplicative subset of $R$. If $R$ is a $u$-$S$-Noetherian ring $($resp., $u$-$S$-PIR$)$, then there exists an element $s\in S$ such that $R_{s}$ is a Noetherian ring $($resp., PIR$)$. \end{lemma} \begin{proof} Since $R$ is $u$-$S$-Noetherian, there exists an element $s\in S$ such that for any ideal $I$ of $R$ there is a finitely (resp., principally) generated sub-ideal $K$ of $I$ with $sI\subseteq K$. Let $J$ be an ideal of $R_{s}$. Then there exists an ideal $I'$ of $R$ such that $J= I'_s$, and hence $sI' \subseteq K'$ for some finitely (resp., principally) generated sub-ideal $K'$ of $I'$. So $J= I'_s= K'_s$ is a finitely (resp., principally) generated ideal of $R_{s}$. Consequently, $R_{s}$ is a Noetherian ring (resp., PIR). \end{proof}
\begin{proposition} \label{s-loc-u-noe-fini} Let $R$ be a ring and $S$ a multiplicative subset of $R$ consisting of finitely many elements. Then $R$ is a $u$-$S$-Noetherian ring $($resp., $u$-$S$-PIR$)$ if and only if $R$ is an $S$-Noetherian ring $($resp., $S$-PIR$)$. \end{proposition} \begin{proof} Suppose $R$ is a $u$-$S$-Noetherian ring $($resp., $u$-$S$-PIR$)$. Then trivially $R$ is an $S$-Noetherian ring (resp., $S$-PIR). Conversely, write $S=\{s_1,...,s_n\}$ and set $s=s_1\cdots s_n$. Suppose $R$ is an $S$-Noetherian ring (resp., $S$-PIR). Then for any ideal $I$ of $R$, there is a finitely (resp., principally) generated sub-ideal $J$ of $I$ such that $s_II\subseteq J$ for some $s_I\in S$. Then $sI\subseteq s_II\subseteq J$. So $R$ is a $u$-$S$-Noetherian ring $($resp., $u$-$S$-PIR$)$. \end{proof} The following example shows that $S$-Noetherian rings are not $u$-$S$-Noetherian in general. \begin{example}\label{exam-not-ut-1} Let $R=k[x_1,x_2,....]$ be the polynomial ring in countably many variables over a field $k$. Set $S=R-\{0\}$. Then $R$ is an $S$-Noetherian ring. However, $R$ is not $u$-$S$-Noetherian. \end{example} \begin{proof} Certainly, $R$ is an $S$-Noetherian ring. Indeed, let $I$ be a non-zero ideal of $R$. Suppose $0\not=s\in I$. Then $sI\subseteq sR\subseteq I$. Thus $I$ is $S$-principally generated. So $R$ is an $S$-PIR and thus an $S$-Noetherian ring.
We claim that $R$ is not $u$-$S$-Noetherian. Assume on the contrary that $R$ is $u$-$S$-Noetherian. Then $R_{s}$ is a Noetherian ring for some $s\in S$ by Lemma \ref{s-loc-u-noe}. Let $n$ be the minimal number such that $x_m$ does not divide any monomial of $s$ for any $m\geq n$. Then $R_{s}\cong T[x_n,x_{n+1},....]$ where $T=k[x_1,x_2,....,x_{n-1}]_s$. Obviously, $R_{s}\cong T[x_n,x_{n+1},....]$ is not Noetherian since the ideal generated by $\{x_n,x_{n+1},....\}$ is not a finitely generated ideal of $T[x_n,x_{n+1},....]$. So $R$ is not $u$-$S$-Noetherian. \end{proof}
Recall from \cite{ad02} that an $R$-module $M$ is called an $S$-Noetherian module if every submodule of $M$ is $S$-finite, that is, for any submodule $N$ of $M$, there is an element $s\in S$ and a finitely generated $R$-module $F$ such that $sN\subseteq F\subseteq N$. Note that the choice of $s$ is decided by the submodule $N$. The rest of this section mainly studies a ``uniform'' version of $S$-Noetherian modules. Let $\{M_j\}_{j\in \Gamma}$ be a family of $R$-modules and $N_j$ a submodule of $M_j$ generated by $\{m_{i,j}\}_{i\in \Lambda_j}\subseteq M_j$ for each $j\in \Gamma$. Recall from \cite{z21} that a family of $R$-modules $\{M_j\}_{j\in \Gamma}$ is \emph{$u$-$S$-generated} (with respect to $s$) by $\{\{m_{i,j}\}_{i\in \Lambda_j}\}_{j\in \Gamma}$ provided that there exists an element $s\in S$ such that $sM_j\subseteq N_j$ for each $j\in \Gamma$, where $N_j=\langle \{m_{i,j}\}_{i\in \Lambda_j}\rangle$. We say a family of $R$-modules $\{M_j\}_{j\in \Gamma}$ is \emph{$u$-$S$-finite} (with respect to $s$) if the set $\{m_{i,j}\}_{i\in \Lambda_j}$ can be chosen as a finite set for each $j\in \Gamma$.
\begin{definition}\label{us-no-module} Let $R$ be a ring and $S$ a multiplicative subset of $R$. An $R$-module $M$ is called a $u$-$S$-Noetherian $R$-module provided the set of all submodules of $M$ is $u$-$S$-finite. \end{definition}
Let $R$ be a ring and $S$ a multiplicative subset of $R$. Recall from \cite{z21} that an $R$-module $T$ is called a \emph{$u$-$S$-torsion module} provided that there exists an element $s\in S$ such that $sT=0$. Obviously, $u$-$S$-torsion modules are $u$-$S$-Noetherian. A ring $R$ is $u$-$S$-Noetherian if and only if it is $u$-$S$-Noetherian as an $R$-module. It is well known that an $R$-module $M$ is Noetherian if and only if $M$ satisfies the ascending chain condition on submodules, if and only if $M$ satisfies the maximal condition (see \cite{n93}). In 2016, Ahmed et al. \cite{as16} obtained an $S$-version of this result provided $S$ is a finite set and called it the $S$-version of the Eakin-Nagata-Formanek Theorem. Next we will give a uniform $S$-version of the Eakin-Nagata-Formanek Theorem for an arbitrary multiplicative subset $S$ of $R$.
First, we recall from \cite[Definition 2.1]{as16} some modified notions of $S$-stationary ascending chains of $R$-modules and $S$-maximal elements of a family of $R$-modules. Let $R$ be a ring, $S$ a multiplicative subset of $R$ and $M$ an $R$-module. Denote by $M^{\bullet}$ an ascending chain $M_1\subseteq M_2\subseteq ... $ of submodules of $M$. An ascending chain $M^{\bullet}$ is called \emph{stationary with respect to $s$} if there exists $k\geq 1$ such that $sM_n\subseteq M_k$ for any $n\geq k$. Let $\{M_i\}_{i\in \Lambda}$ be a family of submodules of $M$. We say an $R$-module $M_0\in \{M_i\}_{i\in \Lambda}$ is \emph{maximal with respect to $s$} provided that if $M_0\subseteq M_i$ for some $M_i\in \{M_i\}_{i\in \Lambda}$, then $sM_i\subseteq M_0$.
\begin{theorem} \label{u-s-noe-char} {\bf (Eakin-Nagata-Formanek Theorem for uniformly $S$-Noetherian rings)} Let $R$ be a ring and $S$ a multiplicative subset of $R$. Let $M$ be an $R$-module. Then the following assertions are equivalent: \begin{enumerate} \item $M$ is $u$-$S$-Noetherian; \item there exists an element $s\in S$ such that any ascending chain of submodules of $M$ is stationary with respect to $s$;
\item there exists an element $s\in S$ such that any non-empty set of submodules of $M$ has a maximal element with respect to $s$. \end{enumerate} \end{theorem} \begin{proof} $(1)\Rightarrow (2):$ Let $M_1\subseteq M_2\subseteq ... $ be an ascending chain of submodules of $M$. Set $M_0=\bigcup\limits_{i=1}^{\infty}M_i$. Then there exist an element $s\in S$ and a finitely generated submodule $N_i$ of $M_i$ such that $sM_i\subseteq N_i$ for each $i\geq 0$. Since $N_0$ is finitely generated, there exists $k\geq 1$ such that $N_0\subseteq M_k$. Thus $sM_0\subseteq M_k$. So $sM_n\subseteq M_k$ for any $n\geq k$.
$(2)\Rightarrow (3):$ Let $s\in S$ be the element given in $(2)$ and let $\Gamma$ be a nonempty set of submodules of $M$. Assume on the contrary that $\Gamma$ has no maximal element with respect to $s$. Take any $M_1\in \Gamma$. Since $M_1$ is not a maximal element with respect to $s$, there is $M_2\in \Gamma$ with $M_1\subseteq M_2$ such that $s M_2\not\subseteq M_1$. Since $M_2$ is not a maximal element with respect to $s$, there is $M_3\in \Gamma$ with $M_2\subseteq M_3$ such that $s M_3\not\subseteq M_2$. Continuing in this way, we get an ascending chain $M_1\subseteq M_2\subseteq ...\subseteq M_n\subseteq M_{n+1}\subseteq ...$ such that $sM_{n+1}\not\subseteq M_n$ for any $n\geq 1$. Obviously, this ascending chain is not stationary with respect to $s$, contradicting $(2)$.
$(3)\Rightarrow (1):$ Let $s\in S$ be the element given in $(3)$ and let $N$ be a submodule of $M$. Set $\Gamma=\{A\subseteq N| $ there is a finitely generated submodule $F_A$ of $A$ satisfying $sA\subseteq F_A\}$. Since $0\in \Gamma$, $\Gamma$ is nonempty. Thus $\Gamma$ has a maximal element $A$ with respect to $s$. For any $x\in N$, $F_1=F_A+Rx$ is a finitely generated submodule of $A_1=A+Rx$ with $sA_1\subseteq F_1$, so $A_1\in \Gamma$; since $A\subseteq A_1$, the maximality of $A$ with respect to $s$ gives $sA_1\subseteq A$, and in particular $sx\in A$. Hence $sN\subseteq A$, and therefore $s^2N\subseteq sA\subseteq F_A$ with $F_A$ finitely generated. Since $s^2$ does not depend on $N$, $M$ is $u$-$S$-Noetherian. \end{proof}
\begin{corollary} \label{u-s-noe-ring-char} Let $R$ be a ring and $S$ a multiplicative subset of $R$. Then the following assertions are equivalent: \begin{enumerate} \item $R$ is $u$-$S$-Noetherian; \item there exists an element $s\in S$ such that any ascending chain of ideals of $R$ is stationary with respect to $s$;
\item there exists an element $s\in S$ such that any nonempty set of ideals of $R$ has a maximal element with respect to $s$. \end{enumerate} \end{corollary}
We can rediscover the following result by Proposition \ref{s-loc-u-noe-fini}. \begin{corollary} \cite[Corollary 2.1]{as16}\label{u-s-noe-char-s} Let $R$ be a ring and $S$ a multiplicative subset of $R$ consisting of finitely many elements. Then the following assertions are equivalent: \begin{enumerate} \item $R$ is an $S$-Noetherian ring; \item every increasing sequence of ideals of $R$ is $S$-stationary; \item every nonempty set of ideals of $R$ has an $S$-maximal element. \end{enumerate} \end{corollary}
Recall from \cite{z21} that an $R$-sequence $M\xrightarrow{f} N\xrightarrow{g} L$ is called \emph{$u$-$S$-exact} provided that there is an element $s\in S$ such that $s{\rm Ker}(g)\subseteq {\rm Im}(f)$ and $s{\rm Im}(f)\subseteq {\rm Ker}(g)$. An $R$-homomorphism $f:M\rightarrow N$ is a \emph{$u$-$S$-monomorphism} $($resp., \emph{$u$-$S$-epimorphism}, \emph{$u$-$S$-isomorphism}$)$ provided $0\rightarrow M\xrightarrow{f} N$ $($resp., $M\xrightarrow{f} N\rightarrow 0$, $0\rightarrow M\xrightarrow{f} N\rightarrow 0$ $)$ is $u$-$S$-exact. It is easy to verify that an $R$-homomorphism $f:M\rightarrow N$ is a $u$-$S$-monomorphism $($resp., $u$-$S$-epimorphism$)$ if and only if ${\rm Ker}(f)$ $($resp., ${\rm Coker}(f))$ is a $u$-$S$-torsion module.
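For instance (a simple illustration, not taken from \cite{z21}), for any $s\in S$ and any $R$-module $M$, consider multiplication by $s$:
\[
M\xrightarrow{\ \cdot s\ } M, \qquad {\rm Ker}(\cdot s)=\{x\in M \mid sx=0\},\qquad {\rm Coker}(\cdot s)=M/sM.
\]
Since $s(M/sM)=0$, the map $\cdot s$ is always a $u$-$S$-epimorphism; if $s$ acts as a non-zero-divisor on $M$, then ${\rm Ker}(\cdot s)=0$ and $\cdot s$ is a $u$-$S$-isomorphism, even though it is an isomorphism only when $sM=M$.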
\begin{lemma}\label{s-exct-tor}\cite[Proposition 2.8]{z21} Let $R$ be a ring and $S$ a multiplicative subset of $R$. Let $0\rightarrow A\xrightarrow{f} B\xrightarrow{g} C\rightarrow 0$ be a $u$-$S$-exact sequence. Then $B$ is $u$-$S$-torsion if and only if $A$ and $C$ are $u$-$S$-torsion. \end{lemma}
\begin{lemma}\label{s-exct-diag} Let $R$ be a ring and $S$ a multiplicative subset of $R$. Let $$\xymatrix@R=20pt@C=25pt{
0 \ar[r]^{}&A_1\ar@{^{(}->}[d]^{i_A}\ar[r]& B_1 \ar[r]^{\pi_1}\ar@{^{(}->}[d]^{i_B}&C_1\ar[r] \ar@{^{(}->}[d]^{i_C} &0\\ 0 \ar[r]^{}&A_2\ar[r]&B_2 \ar[r]^{\pi_2}&C_2\ar[r] &0\\}$$ be a commutative diagram with exact rows, where $i_A, i_B$ and $i_C$ are embedding maps. Suppose $s_A A_2\subseteq A_1$ and $s_C C_2\subseteq C_1$ for some $s_A\in S, s_C\in S$. Then $s_As_CB_2\subseteq B_1$. \end{lemma} \begin{proof} Let $x\in B_2$. Then $\pi_2(x)\in C_2$. Thus $s_C\pi_2(x)=\pi_2(s_Cx)\in C_1$. So we have $\pi_1(y)=\pi_2(y)=\pi_2(s_Cx)$ for some $y\in B_1$. Thus $s_Cx-y=a_2$ for some $a_2\in A_2$. It follows that $s_As_Cx=s_Ay+s_Aa_2\in B_1$. Consequently, $s_As_CB_2\subseteq B_1$. \end{proof}
\begin{lemma} \label{s-u-noe-exact} Let $R$ be a ring and $S$ a multiplicative subset of $R$. Let $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ be an exact sequence. Then $B$ is $u$-$S$-Noetherian if and only if $A$ and $C$ are $u$-$S$-Noetherian. \end{lemma} \begin{proof} It is easy to verify that if $B$ is $u$-$S$-Noetherian, so are $A$ and $C$. Suppose $A$ and $C$ are $u$-$S$-Noetherian. Let $\{B_i\}_{i\in \Lambda}$ be the set of all submodules of $B$. Then there exists an element $s_1\in S$ such that $s_1(A\cap B_i)\subseteq K_i\subseteq A\cap B_i$ for some finitely generated $R$-module $K_i$ and any $i\in \Lambda$, since $A$ is $u$-$S$-Noetherian. There also exists an element $s_2\in S$ such that $s_2(B_i+A)/A\subseteq L_i\subseteq (B_i+A)/A$ for some finitely generated $R$-module $L_i$ and any $i\in \Lambda$, since $C$ is $u$-$S$-Noetherian. Let $N_i$ be the finitely generated submodule of $B_i$ generated by the finite generators of $K_i$ and finite pre-images of generators of $L_i$. Consider the following natural commutative diagram with exact rows: $$\xymatrix@R=20pt@C=25pt{
0 \ar[r]^{}&K_i\ar@{^{(}->}[d]\ar[r]&N_i \ar[r]\ar@{^{(}->}[d]&L_i\ar[r] \ar@{^{(}->}[d] &0\\ 0 \ar[r]^{}&A\cap B_i\ar[r]&B_i \ar[r]&(B_i+A)/A \ar[r] &0.\\}$$ Set $s=s_1s_2\in S$. We have $sB_i\subseteq N_i\subseteq B_i$ by Lemma \ref{s-exct-diag}. So $B$ is $u$-$S$-Noetherian. \end{proof}
\begin{proposition} \label{s-u-noe-s-exact} Let $R$ be a ring and $S$ a multiplicative subset of $R$. Let $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ be a $u$-$S$-exact sequence. Then $B$ is $u$-$S$-Noetherian if and only if $A$ and $C$ are $u$-$S$-Noetherian \end{proposition} \begin{proof} Let $0\rightarrow A\xrightarrow{f} B\xrightarrow{g} C\rightarrow 0$ be a $u$-$S$-exact sequence. Then there exists an element $s\in S$ such that $ s{\rm Ker}(g)\subseteq {\rm Im}(f)$ and $ s{\rm Im}(f)\subseteq {\rm Ker}(g)$. Note that ${\rm Im}(f)/s{\rm Ker}(g)$ and ${\rm Ker}(g)/s{\rm Im}(f)$ are $u$-$S$-torsion. If ${\rm Im}(f)$ is $u$-$S$-Noetherian, then the submodule $s{\rm Im}(f)$ of ${\rm Im}(f)$ is $u$-$S$-Noetherian. Thus ${\rm Ker}(g)$ is $u$-$S$-Noetherian by Lemma \ref{s-u-noe-exact}. Similarly, if ${\rm Ker}(g)$ is $u$-$S$-Noetherian, then ${\rm Im}(f)$ is $u$-$S$-Noetherian. Consider the following three exact sequences: $0\rightarrow{\rm Ker}(g) \rightarrow B\rightarrow {\rm Im}(g)\rightarrow 0,\quad 0\rightarrow{\rm Im}(g) \rightarrow C\rightarrow {\rm Coker}(g)\rightarrow 0,$ and $0\rightarrow{\rm Ker}(f) \rightarrow A\rightarrow {\rm Im}(f)\rightarrow 0$ with ${\rm Ker}(f)$ and ${\rm Coker}(g)$ $u$-$S$-torsion. It is easy to verify that $B$ is $u$-$S$-Noetherian if and only if $A$ and $C$ are $u$-$S$-Noetherian by Lemma \ref{s-u-noe-exact}. \end{proof}
\begin{corollary} \label{s-u-noe-u-iso} Let $R$ be a ring, $S$ a multiplicative subset of $R$ and $ M\xrightarrow{f} N$ a $u$-$S$-isomorphism. If one of $M$ and $N$ is $u$-$S$-Noetherian, so is the other. \end{corollary} \begin{proof} It follows from Proposition \ref{s-u-noe-s-exact} since $0\rightarrow M\xrightarrow{f} N\rightarrow 0\rightarrow 0$ is a $u$-$S$-exact sequence. \end{proof}
Let $\frak p$ be a prime ideal of $R$. We say an $R$-module $M$ is \emph{$u$-$\frak p$-Noetherian} provided that $M$ is $u$-$(R\setminus\frak p)$-Noetherian. The next result gives a local characterization of Noetherian modules. \begin{proposition}\label{s-noe-m-loc-char} Let $R$ be a ring and $M$ an $R$-module. Then the following statements are equivalent:
\begin{enumerate} \item $M$ is Noetherian; \item $M$ is $u$-$\frak p$-Noetherian for any $\frak p\in {\rm Spec}(R)$; \item $M$ is $u$-$\frak m$-Noetherian for any $\frak m\in {\rm Max}(R)$.
\end{enumerate} \end{proposition} \begin{proof} $(1)\Rightarrow (2)\Rightarrow (3):$ Trivial.
$(3)\Rightarrow (1):$ Let $N$ be a submodule of $M$. Then for each $\frak m\in {\rm Max}(R)$, there exists an element $s^{\frak m}\in R\setminus\frak m$ and a finitely generated submodule $F^{\frak m}$ of $N$ such that $s^{\frak m}N\subseteq F^{\frak m}$. Since the ideal generated by $\{s^{\frak m} \mid \frak m \in {\rm Max}(R)\}$ is $R$, there exist finitely many elements $s^{\frak m_1},...,s^{\frak m_n}$ such that $\langle s^{\frak m_1},...,s^{\frak m_n}\rangle=R$. Hence $N=\langle s^{\frak m_1},...,s^{\frak m_n}\rangle N\subseteq F^{\frak m_1}+...+F^{\frak m_n}\subseteq N$. So $N=F^{\frak m_1}+...+F^{\frak m_n}$. It follows that $N$ is finitely generated, and thus $M$ is Noetherian. \end{proof}
\begin{corollary}\label{s-noe-ring-loc-char} Let $R$ be a ring. Then the following statements are equivalent:
\begin{enumerate} \item $R$ is a Noetherian ring; \item $R$ is a $u$-$\frak p$-Noetherian ring for any $\frak p\in {\rm Spec}(R)$; \item $R$ is a $u$-$\frak m$-Noetherian ring for any $\frak m\in {\rm Max}(R)$.
\end{enumerate} \end{corollary}
\section{$u$-$S$-Noetherian properties on some ring constructions}
In this section, we mainly consider the $u$-$S$-Noetherian properties on trivial extensions, pullbacks and amalgamated algebras along an ideal. For more on these ring constructions, one can refer to \cite{DW09,lO14}.
Let $R$ be a commutative ring and $M$ be an $R$-module. Then the \emph{trivial extension} of $R$ by $M$ denoted by $R(+)M$ is equal to $R\bigoplus M$ as $R$-modules with coordinate-wise addition and multiplication $(r_1,m_1)(r_2,m_2)=(r_1r_2,r_1m_2+r_2m_1)$. It is easy to verify that $R(+)M$ is a commutative ring with identity $(1,0)$. Let $S$ be a multiplicative subset of $R$. Then it is easy to verify that $S(+)M=\{(s,m)|s\in S, m\in M\}$ is a multiplicative subset of $R(+)M$. Now, we give a $u$-$S$-Noetherian property on the trivial extension.
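Before doing so, we record a quick illustration of the multiplication rule (a standard computation, not specific to this paper): in $\mathbb{Z}(+)\mathbb{Z}$ one has
\[
(2,3)(5,1)=(2\cdot 5,\ 2\cdot 1+5\cdot 3)=(10,17),
\]
and in general $(0,m_1)(0,m_2)=(0,0)$, so $0(+)M$ is an ideal of $R(+)M$ with zero multiplication and $(R(+)M)/(0(+)M)\cong R$; this underlies the exact sequence appearing in the proof below.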
\begin{proposition}\label{trivial extension-usn} Let $R$ be a commutative ring, $S$ a multiplicative subset of $R$ and $M$ an $R$-module. Then $R(+)M$ is a $u$-$S(+)M$-Noetherian ring if and only if $R$ is a $u$-$S$-Noetherian ring and $M$ is a $u$-$S$-Noetherian $R$-module. \end{proposition} \begin{proof} Note that we have an exact sequence of $R(+)M$-modules: $$0\rightarrow 0(+)M\xrightarrow{i} R(+)M\xrightarrow{\pi} R\rightarrow 0.$$ Suppose $R(+)M$ is a $u$-$S(+)M$-Noetherian ring. Let $\{I_i\}_{i\in \Lambda}$ be the set of all ideals of $R$. Then $\{I_i(+)M\}_{i\in \Lambda}$ is a set of ideals of $R(+)M$. So there is an element $(s,m)\in S(+)M$ and finitely generated sub-ideals $O_i$ of $I_i(+)M$ such that $(s,m)I_i(+)M\subseteq O_i$. Thus $sI_i\subseteq \pi(O_i)\subseteq I_i$. Suppose $O_i$ is generated by $\{(r_{1,i},m_{1,i}),...,(r_{n,i},m_{n,i})\}$. Then it is easy to verify that $\pi(O_i)$ is generated by $\{r_{1,i},...,r_{n,i}\}$. So $R$ is a $u$-$S$-Noetherian ring. Let $\{M_i\}_{i\in \Gamma}$ be the set of all submodules of $M$. Then $\{0(+)M_i\}_{i\in \Gamma}$ is a set of ideals of $R(+)M$. Thus there is an element $(s',m')\in S(+)M$ and finitely generated sub-ideals $O'_i$ of $0(+)M_i$ such that $(s',m')0(+)M_i\subseteq O'_i$. So $s'M_i\subseteq N_i\subseteq M_i$ where $0(+)N_i=O'_i$. Suppose that $O'_i$ is generated by $\{(r'_{1,i},m'_{1,i}),...,(r'_{n,i},m'_{n,i})\}$. Then it is easy to verify that $N_i$ is generated by $\{m'_{1,i},...,m'_{n,i}\}$. Thus $M$ is a $u$-$S$-Noetherian $R$-module.
Suppose $R$ is a $u$-$S$-Noetherian ring and $M$ is a $u$-$S$-Noetherian $R$-module. Let $O^{\bullet}: O_1\subseteq O_2\subseteq ...$ be an ascending chain of ideals of $R(+)M$. Then there is an ascending chain of ideals of $R$: $\pi(O^{\bullet}): \pi(O_1)\subseteq \pi(O_2)\subseteq ...$. Thus there is an element $s\in S$ which is independent of $O^{\bullet}$ satisfying that there exists $k\in \mathbb{Z}^{+}$ such that $s\pi(O_n)\subseteq \pi(O_k)$ for any $n\geq k$. Similarly, $O^{\bullet}\cap 0(+)M: O_1\cap 0(+)M\subseteq O_2\cap 0(+)M\subseteq ...$ is an ascending chain of sub-ideals of $0(+)M$, which correspond to submodules of $M$. So there is an element $s'\in S$ satisfying that there exists $k'\in \mathbb{Z}^{+}$ such that $s'O_n\cap 0(+)M\subseteq O_{k'}\cap 0(+)M$ for any $n\geq k'$. Let $l=\max(k,k')$ and $n\geq l$. Consider the following natural commutative diagram with exact rows: $$\xymatrix@R=20pt@C=25pt{
0 \ar[r]^{}&O_l\cap 0(+)M \ar@{^{(}->}[d]\ar[r]&O_l \ar[r]\ar@{^{(}->}[d]&\pi(O_l)\ar[r] \ar@{^{(}->}[d] &0\\ 0 \ar[r]^{}&O_n\cap 0(+)M \ar[r]&O_n \ar[r]&\pi(O_n) \ar[r] &0.\\}$$ Set $t=ss'$. Then we have $tO_n\subseteq O_l$ for any $n\geq l$ by Lemma \ref{s-exct-diag}. So $R(+)M$ is a $u$-$S(+)M$-Noetherian ring by Theorem \ref{u-s-noe-char}. \end{proof}
Let $\alpha: A\rightarrow C$ and $\beta: B\rightarrow C$ be ring homomorphisms. Then the subring $$D:= \alpha \times_C \beta:= \{(a, b)\in A\times B | \alpha(a) =\beta(b)\}$$ of $A\times B$ is called the \emph{pullback} of $\alpha$ and $\beta$. Let $D$ be a pullback of $\alpha$ and $\beta$. Then there is a pullback diagram in the category of commutative rings: $$\xymatrix@R=20pt@C=25pt{ D\ar[d]^{p_B}\ar[r]^{p_A}& A \ar[d]^{\alpha}\\ B\ar[r]^{\beta}&C. \\ }$$
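For a concrete instance (included only as an illustration), take $A=B=\mathbb{Z}$, $C=\mathbb{Z}/n\mathbb{Z}$ and let $\alpha=\beta$ be the canonical projections; then
\[
\alpha \times_C \beta=\{(a,b)\in \mathbb{Z}\times\mathbb{Z} \mid a\equiv b \ ({\rm mod}\ n)\},
\]
and $p_A$, $p_B$ are the two coordinate projections.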
If $S$ is a multiplicative subset of $D$, then it is easy to verify that $p_A(S):=\{p_A(s)\in A|s\in S\}$ is a multiplicative subset of $A$. Now, we give a $u$-$S$-Noetherian property on the pullback diagram.
\begin{proposition}\label{pullback-usn} Let $\alpha: A\rightarrow C$ be a ring homomorphism and $\beta: B\rightarrow C$ a surjective ring homomorphism. Let $D$ be the pullback of $\alpha$ and $\beta$. If $S$ is a multiplicative subset of $D$, then the following assertions are equivalent:
\begin{enumerate} \item $D$ is a $u$-$S$-Noetherian ring; \item $A$ is a $u$-$p_A(S)$-Noetherian ring and ${\rm Ker}(\beta)$ is a $u$-$S$-Noetherian $D$-module.
\end{enumerate} \end{proposition} \begin{proof} Let $D$ be the pullback of $\alpha$ and $\beta$. Since $\beta$ is a surjective ring homomorphism, so is $p_A$. Then there is a short exact sequence of $D$-modules: $$0\rightarrow {\rm Ker}(\beta)\rightarrow D\rightarrow A\rightarrow 0.$$ By Proposition \ref{s-u-noe-s-exact}, $D$ is a $u$-$S$-Noetherian $D$-module if and only if ${\rm Ker}(\beta)$ and $A$ are $u$-$S$-Noetherian $D$-modules. Since $p_A$ is surjective, the $D$-submodules of $A$ are exactly the ideals of the ring $A$. Thus $A$ is a $u$-$S$-Noetherian $D$-module if and only if $A$ is a $u$-$p_A(S)$-Noetherian ring. \end{proof}
Let $f:A\rightarrow B$ be a ring homomorphism and $J$ an ideal of $B$. Following \cite{df09}, the \emph{amalgamation} of $A$ with $B$ along $J$ with respect to $f$, denoted by $A\bowtie^fJ$, is defined as $$A\bowtie^fJ=\{(a,f(a)+j)|a\in A,j\in J\},$$ which is a subring of $A \times B$. By \cite[Proposition 4.2]{df09}, $A\bowtie^fJ$ is the pullback $\widehat{f}\times_{B/J}\pi$,
where $\pi:B\rightarrow B/J$ is the natural epimorphism and $\widehat{f}=\pi\circ f$: $$\xymatrix@R=20pt@C=25pt{ A\bowtie^fJ\ar[d]^{p_B}\ar[r]_{p_A}& A\ar[d]^{\widehat{f}}\\ B\ar[r]^{\pi}&B/J. \\ }$$
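For orientation (a standard special case, not needed in the sequel), if $B=A$ and $f={\rm id}_A$, then
\[
A\bowtie^{{\rm id}_A}J=\{(a,a+j)\mid a\in A,\ j\in J\}
\]
is the amalgamated duplication of $A$ along the ideal $J$, while for $J=0$ the map $a\mapsto (a,f(a))$ gives $A\bowtie^f0\cong A$.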
For a multiplicative subset $S$ of $A$, set $S^\prime:= \{(s, f (s)) | s \in S\},$ and $f(S):=\{f(s)\in B|s\in S\}$. Then it is easy to verify that $S^\prime$ and $f(S)$ are multiplicative subsets of $A\bowtie^fJ$ and $B$ respectively.
\begin{lemma} \label{s-u-noe-epi} Let $\alpha:R\rightarrow R^\prime$ be a surjective ring homomorphism and $S$ a multiplicative subset of $R$. If $R$ is a $u$-$S$-Noetherian ring, then $R^\prime$ is a $u$-$\alpha(S)$-Noetherian ring. \end{lemma} \begin{proof} Since $R$ is $u$-$S$-Noetherian, there is an element $s\in S$ such that for any ideal $J$ of $R$, there exists a finitely generated sub-ideal $F_J$ of $J$ satisfying $sJ\subseteq F_J$. Let $I$ be an ideal of $R^\prime$. Since $\alpha:R\rightarrow R^\prime$ is a surjective ring homomorphism, there exists an ideal $\alpha^{-1}(I)$ of $R$ such that $\alpha(\alpha^{-1}(I))=I$. Thus there exists a finitely generated sub-ideal $F_{\alpha^{-1}(I)}$ of $\alpha^{-1}(I)$ satisfying $s\alpha^{-1}(I)\subseteq F_{\alpha^{-1}(I)}$. So $\alpha(F_{\alpha^{-1}(I)})$ is a finitely generated sub-ideal of $I$ satisfying $\alpha(s)I\subseteq \alpha(F_{\alpha^{-1}(I)})$. \end{proof}
\begin{proposition}\label{amag-usn} Let $f :A\rightarrow B$ be a ring homomorphism, $J$ an ideal of $B$ and $S$ a multiplicative subset of $A$. Set $S^\prime= \{(s, f (s)) | s \in S\}$ and $f(S)=\{f(s)\in B|s\in S\}$. Then the following statements are equivalent:
\begin{enumerate} \item $A\bowtie^fJ$ is a $u$-$S^\prime$-Noetherian ring; \item $A$ is a $u$-$S$-Noetherian ring and $J$ is a $u$-$S^\prime$-Noetherian $A\bowtie^fJ$-module $($with the $A\bowtie^fJ$-module structure naturally induced by $p_B$, where $p_B : A\bowtie^fJ\rightarrow B$ defined by $(a,f(a)+j)\rightarrow f(a)+j)$; \item $A$ is a $u$-$S$-Noetherian ring and $f(A)+J$ is a $u$-$f(S)$-Noetherian ring. \end{enumerate} \end{proposition} \begin{proof} $(1)\Leftrightarrow(2)$ Follows from Proposition \ref{pullback-usn}.
$(1)\Rightarrow(3)$: By Proposition \ref{pullback-usn}, $A$ is a $u$-$S$-Noetherian ring. By \cite[Proposition 5.1]{df09}, there is a short exact sequence $0\rightarrow f^{-1}(J)\times \{0\}\rightarrow A\bowtie^fJ \rightarrow f(A)+J\rightarrow 0$ of $A\bowtie^fJ$-modules. Note that any $A\bowtie^fJ$-submodule of $f(A)+J$ is exactly an ideal of $f(A)+J$. Since $p_B(S^\prime)=f(S),$ we conclude that $f(A)+J$ is a $u$-$f(S)$-Noetherian ring by Proposition \ref{s-u-noe-s-exact}.
$(3)\Rightarrow(2)$: Let $f(s)$ be an element in $f(S)$ such that for any ideal $I$ of $f(A)+J$ there is a finitely generated sub-ideal $K$ of $I$ with $f(s)I\subseteq K$. Now let $J_0$ be an $A\bowtie^fJ$-submodule of $J$; then $J_0$ is an ideal of $f(A)+J$, since every $A\bowtie^fJ$-submodule of $J$ is an ideal of $f(A)+J$. Since $f(A)+J$ is $u$-$f(S)$-Noetherian with respect to $f(s)$, there exist $j_1,...,j_k\in J_0$ such that $f(s)J_0\subseteq \langle j_1,...,j_k\rangle (f(A)+J)\subseteq J_0$. Hence we obtain
$$(s,f(s))J_0\subseteq (A\bowtie^fJ) j_1+...+(A\bowtie^fJ) j_k\subseteq J_0.$$ Thus $J$ is $u$-$S^\prime$-Noetherian with respect to $(s,f(s))$. \end{proof}
\section{Cartan-Eilenberg-Bass Theorem for uniformly $S$-Noetherian rings} It is well known that an $R$-module $E$ is \emph{injective} provided that the induced sequence $0\rightarrow {\rm Hom}_R(C,E)\rightarrow {\rm Hom}_R(B,E)\rightarrow {\rm Hom}_R(A,E)\rightarrow 0$ is exact for any exact sequence $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$. The well-known Cartan-Eilenberg-Bass Theorem says that a ring $R$ is Noetherian if and only if any direct sum of injective modules is injective (see \cite[Theorem 3.1.17]{ej11}). In order to obtain the Cartan-Eilenberg-Bass Theorem for uniformly $S$-Noetherian rings, we first introduce the $S$-analogue of injective modules.
\begin{definition} Let $R$ be a ring and $S$ a multiplicative subset of $R$. An $R$-module $E$ is called $u$-$S$-injective $($abbreviates uniformly $S$-injective$)$ provided that the induced sequence $$0\rightarrow {\rm Hom}_R(C,E)\rightarrow {\rm Hom}_R(B,E)\rightarrow {\rm Hom}_R(A,E)\rightarrow 0$$ is $u$-$S$-exact for any $u$-$S$-exact sequence $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$. \end{definition}
\begin{lemma}\label{u-S-tor-ext} Let $R$ be a ring and $S$ a multiplicative subset of $R$. If $T$ is a $u$-$S$-torsion module, then ${\rm Ext}_R^{n}(T,M)$ and ${\rm Ext}_R^{n}(M,T)$ are $u$-$S$-torsion for any $R$-module $M$ and any $n\geq 0$. \end{lemma} \begin{proof} We only prove that ${\rm Ext}_R^{n}(T,M)$ is $u$-$S$-torsion, since the case of ${\rm Ext}_R^{n}(M,T)$ is similar. Let $T$ be a $u$-$S$-torsion module with $sT=0$. If $n=0$, then for any $f\in {\rm Hom}_R(T,M)$, we have $sf(t)=f(st)=0$ for any $t\in T$. Thus $sf=0$ and so $s{\rm Hom}_R(T,M)=0$. Let $0\rightarrow M\rightarrow E\rightarrow \Omega^{-1} (M)\rightarrow0$ be a short exact sequence with $E$ injective and $\Omega^{-1} (M)$ the first cosyzygy of $M$. Then ${\rm Ext}_R^{1}(T,M)$ is a quotient of ${\rm Hom}_R(T,\Omega^{-1} (M))$ which is $u$-$S$-torsion. Thus ${\rm Ext}_R^{1}(T,M)$ is $u$-$S$-torsion. For $n\geq 2$, we have an isomorphism ${\rm Ext}_R^{n}(T,M)\cong {\rm Ext}_R^{1}(T,\Omega^{-(n-1)}(M))$ where $\Omega^{-(n-1)}(M)$ is the $(n-1)$-th cosyzygy of $M$. Since ${\rm Ext}_R^{1}(T,\Omega^{-(n-1)}(M))$ is $u$-$S$-torsion by induction, ${\rm Ext}_R^{n}(T,M)$ is $u$-$S$-torsion. \end{proof}
\begin{theorem}\label{s-inj-ext} Let $R$ be a ring, $S$ a multiplicative subset of $R$ and $E$ an $R$-module. Then the following assertions are equivalent: \begin{enumerate} \item $E$ is $u$-$S$-injective;
\item for any short exact sequence $0\rightarrow A\xrightarrow{f} B\xrightarrow{g} C\rightarrow 0$, the induced sequence $0\rightarrow {\rm Hom}_R(C,E)\xrightarrow{g^\ast} {\rm Hom}_R(B,E)\xrightarrow{f^\ast} {\rm Hom}_R(A,E)\rightarrow 0$ is $u$-$S$-exact;
\item ${\rm Ext}_R^1(M,E)$ is $u$-$S$-torsion for any $R$-module $M$;
\item ${\rm Ext}_R^n(M,E)$ is $u$-$S$-torsion for any $R$-module $M$ and $n\geq 1$.
\end{enumerate} \end{theorem} \begin{proof} $(1)\Rightarrow(2)$ and $(4)\Rightarrow(3)$: Trivial.
$(2)\Rightarrow(3)$: Let $0\rightarrow L\rightarrow P\rightarrow M\rightarrow 0$ be a short exact sequence with $P$ projective. Then there exists a long exact sequence $0\rightarrow {\rm Hom}_R(M,E)\rightarrow {\rm Hom}_R(P,E)\rightarrow {\rm Hom}_R(L,E)\rightarrow {\rm Ext}_R^1(M,E) \rightarrow 0$. Thus ${\rm Ext}_R^1(M,E)$ is $u$-$S$-torsion by $(2)$.
$(3)\Rightarrow (2)$: Let $0\rightarrow A\xrightarrow{f} B\xrightarrow{g} C\rightarrow 0$ be a short exact sequence. Then we have a long exact sequence $0\rightarrow {\rm Hom}_R(C,E)\xrightarrow{g^\ast} {\rm Hom}_R(B,E)\xrightarrow{f^\ast} {\rm Hom}_R(A,E)\xrightarrow{\delta} {\rm Ext}_R^1(C,E) \rightarrow 0$. By $(3)$, ${\rm Ext}_R^1(C,E)$ is $u$-$S$-torsion, and so $0\rightarrow {\rm Hom}_R(C,E)\xrightarrow{g^\ast} {\rm Hom}_R(B,E)\xrightarrow{f^\ast} {\rm Hom}_R(A,E)\rightarrow 0$ is $u$-$S$-exact.
$(3)\Rightarrow(4)$: Let $M$ be an $R$-module. Denote by $\Omega^{n-1}(M)$ the $(n-1)$-th syzygy of $M$. Then ${\rm Ext}_R^n(M,E)\cong {\rm Ext}_R^1(\Omega^{n-1}(M),E)$ is $u$-$S$-torsion by $(3)$.
$(2)\Rightarrow(1)$: Let $E$ be an $R$-module satisfying $(2)$. Suppose $0\rightarrow A\xrightarrow{f} B\xrightarrow{g} C\rightarrow 0$ is a $u$-$S$-exact sequence. Then there is an exact sequence $B\xrightarrow{g} C\rightarrow T\rightarrow 0 $ where $T={\rm Coker}(g)$ is $u$-$S$-torsion. Then we have an exact sequence $$0\rightarrow {\rm Hom}_R(T,E)\rightarrow {\rm Hom}_R(C,E)\rightarrow {\rm Hom}_R(B,E).$$ By Lemma \ref{u-S-tor-ext}, ${\rm Hom}_R(T,E)$ is $u$-$S$-torsion. So $0\rightarrow {\rm Hom}_R(C,E)\xrightarrow{g^\ast} {\rm Hom}_R(B,E)\xrightarrow{f^\ast} {\rm Hom}_R(A,E)\rightarrow 0$ is $u$-$S$-exact at ${\rm Hom}_R(C,E)$.
There are also two short exact sequences: \begin{center} $0\rightarrow {\rm Ker}(f)\xrightarrow{i_{A}} A\xrightarrow{\pi_{{\rm Im}(f)}} {\rm Im}(f)\rightarrow 0$ and $0\rightarrow {\rm Im}(f)\xrightarrow{i_B} B\rightarrow {\rm Coker}(f)\rightarrow 0,$ \end{center} where ${\rm Ker}(f)$ is $u$-$S$-torsion. Consider the induced exact sequences $$0\rightarrow {\rm Hom}_R({\rm Im}(f),E)\xrightarrow{\pi_{{\rm Im}(f)}^\ast} {\rm Hom}_R(A,E)\xrightarrow{i_{A}^\ast} {\rm Hom}_R({\rm Ker}(f),E)$$ and $$0\rightarrow {\rm Hom}_R({\rm Coker}(f),E)\rightarrow {\rm Hom}_R(B,E)\xrightarrow{i_{B}^\ast} {\rm Hom}_R({\rm Im}(f),E).$$ Then ${\rm Im}(i_{A}^\ast)$ and ${\rm Coker}(i_{B}^\ast)$ are all $u$-$S$-torsion. We have the following pushout diagram:
$$\xymatrix@R=20pt@C=25pt{ & 0\ar[d]&0\ar[d]&&\\
& {\rm Im}(i_{B}^\ast)\ar[d]\ar@{=}[r]^{} &{\rm Im}(i_{B}^\ast)\ar[d]& & \\
0 \ar[r]^{}& {\rm Hom}_R({\rm Im}(f),E)\ar[d]\ar[r]& {\rm Hom}_R(A,E) \ar[r]\ar[d]&{\rm Im}(i_{A}^\ast)\ar[r] \ar@{=}[d] &0\\ 0 \ar[r]^{}&{\rm Coker}(i_{B}^\ast)\ar[d]\ar[r]&Y \ar[r]\ar[d]&{\rm Im}(i_{A}^\ast)\ar[r] &0\\
& 0 &0 & &\\}$$ Since ${\rm Im}(i_{A}^\ast)$ and ${\rm Coker}(i_{B}^\ast)$ are all $u$-$S$-torsion, $Y$ is also $u$-$S$-torsion by Lemma \ref{s-exct-tor}. Thus the natural composition $f^\ast: {\rm Hom}_R(B,E)\rightarrow {\rm Im}(i_{B}^\ast)\rightarrow {\rm Hom}_R(A,E)$ is a $u$-$S$-epimorphism. So $0\rightarrow {\rm Hom}_R(C,E)\xrightarrow{g^\ast} {\rm Hom}_R(B,E)\xrightarrow{f^\ast} {\rm Hom}_R(A,E)\rightarrow 0$ is $u$-$S$-exact at ${\rm Hom}_R(A,E)$.
Since the sequence $0\rightarrow A\xrightarrow{f} B\xrightarrow{g} C\rightarrow 0$ is $u$-$S$-exact at $B$ and $C$, there exists $s\in S$ such that $s{\rm Ker}(g)\subseteq {\rm Im}(f)$, $s{\rm Im}(f)\subseteq {\rm Ker}(g)$ and $s{\rm Coker}(g)=0$. We claim that $s^2{\rm Im} (g^\ast)\subseteq {\rm Ker}(f^\ast)$ and $s^2{\rm Ker}(f^\ast)\subseteq {\rm Im} (g^\ast)$. Indeed, consider the following diagram: $$\xymatrix@R=20pt@C=25pt{
& & E& &\\ 0 \ar[r]^{}&A\ar[r]^{f}&B \ar[u]^{h}\ar[r]^{g}&C\ar[r] &0\\}$$ Suppose $h\in {\rm Im} (g^\ast)$. Then there exists $u\in {\rm Hom}_R(C,E)$ such that $h=u\circ g$. Thus for any $a\in A$, $sh\circ f (a)=su\circ g\circ f(a)=u\circ g\circ sf(a)=0$ since $s{\rm Im}(f)\subseteq {\rm Ker}(g)$. So $sh\circ f=0$ and then $s{\rm Im} (g^\ast)\subseteq {\rm Ker}(f^\ast)$. Thus $s^2{\rm Im} (g^\ast)\subseteq {\rm Ker}(f^\ast)$. Now, suppose $h\in {\rm Ker}(f^\ast)$. Then $h\circ f=0$. Thus ${\rm Ker}(h)\supseteq {\rm Im}(f)\supseteq s{\rm Ker}(g)$. So $sh\circ i_{{\rm Ker}(g)}=0$ where $i_{{\rm Ker}(g)}:{\rm Ker}(g)\hookrightarrow B$ is the natural embedding map. There is a well-defined $R$-homomorphism $v:{\rm Im}(g)\rightarrow E$ such that $v\circ \pi_B=sh$, where $\pi_B$ is the natural epimorphism $B\twoheadrightarrow {\rm Im}(g)$. Consider the exact sequence ${\rm Hom}_R({\rm Coker}(g),E)\rightarrow {\rm Hom}_R(C,E)\rightarrow {\rm Hom}_R({\rm Im}(g),E)\rightarrow {\rm Ext}_R^1({\rm Coker}(g),E)$ induced by $0\rightarrow {\rm Im}(g)\rightarrow C\rightarrow {\rm Coker}(g)\rightarrow 0$. Since $s{\rm Hom}_R({\rm Coker}(g),E)=s {\rm Ext}_R^1({\rm Coker}(g),E)=0$, $s{\rm Hom}_R({\rm Im}(g),E)\subseteq i_{{\rm Im}(g)}^\ast({\rm Hom}_R(C,E))$. Thus there is a homomorphism $u:C\rightarrow E$ such that $s^2h=u\circ g$. Then we have $s^2{\rm Ker}(f^\ast)\subseteq {\rm Im} (g^\ast)$. So $0\rightarrow {\rm Hom}_R(C,E)\xrightarrow{g^\ast} {\rm Hom}_R(B,E)\xrightarrow{f^\ast} {\rm Hom}_R(A,E)\rightarrow 0$ is $u$-$S$-exact at ${\rm Hom}_R(B,E)$. \end{proof}
It follows from Theorem \ref{s-inj-ext} that $u$-$S$-torsion modules and injective modules are $u$-$S$-injective. \begin{corollary}\label{inj-ust-s-inj} Let $R$ be a ring and $S$ a multiplicative subset of $R$. Suppose $E$ is a $u$-$S$-torsion $R$-module or an injective $R$-module. Then $E$ is $u$-$S$-injective. \end{corollary}
The following example shows that the condition ``${\rm Ext}_R^1(M,F)$ is $u$-$S$-torsion for any $R$-module $M$'' in Theorem \ref{s-inj-ext} cannot be replaced by ``${\rm Ext}_R^1(R/I,F)$ is $u$-$S$-torsion for any ideal $I$ of $R$''. \begin{example}\label{uf not-extsion}
Let $R=\mathbb{Z}$ be the ring of integers, $p$ a prime in $\mathbb{Z}$ and $S=\{p^n|n\geq 0\}$. Let $J_p$ be the additive group of all $p$-adic integers $($see \cite{FS15} for example$)$. Then ${\rm Ext}_{R}^1(R/I,J_p)$ is $u$-$S$-torsion for any ideal $I$ of $R$. However, $J_p$ is not $u$-$S$-injective. \end{example} \begin{proof} Let $\langle n\rangle$ be an ideal of $\mathbb{Z}$. Suppose $n=p^km$ with $(p,m)=1$. Then ${\rm Ext}_{\mathbb{Z}}^1(\mathbb{Z}/\langle n\rangle, J_p)\cong J_p/nJ_p\cong \mathbb{Z}/\langle p^k\rangle$ by \cite[Exercise 1.3(10)]{FS15}. So ${\rm Ext}_{\mathbb{Z}}^1(\mathbb{Z}/\langle n\rangle, J_p)$ is $u$-$S$-torsion for any ideal $\langle n\rangle$ of $\mathbb{Z}$. However, $J_p$ is not $u$-$S$-injective. Indeed, let $\mathbb{Z}(p^{\infty})$ be the quasi-cyclic group (see \cite{FS15} for example). Then $\mathbb{Z}(p^{\infty})$ is a divisible group and $J_p\cong{\rm Hom}_{\mathbb{Z}}(\mathbb{Z}(p^{\infty}),\mathbb{Z}(p^{\infty}))$. So \begin{align*}
&{\rm Ext}_{\mathbb{Z}}^1(\mathbb{Z}(p^{\infty}), M) \\
\cong &{\rm Ext}_{\mathbb{Z}}^1(\mathbb{Z}(p^{\infty}), {\rm Hom}_{\mathbb{Z}}(\mathbb{Z}(p^{\infty}),\mathbb{Z}(p^{\infty}))) \\
\cong&{\rm Hom}_{\mathbb{Z}}({\rm Tor}_1^{\mathbb{Z}}(\mathbb{Z}(p^{\infty}),\mathbb{Z}(p^{\infty})),\mathbb{Z}(p^{\infty}))\\
\cong&{\rm Hom}_{\mathbb{Z}}(\mathbb{Z}(p^{\infty}),\mathbb{Z}(p^{\infty}))\cong J_p. \end{align*} Note that for any $p^k\in S$, we have $p^kJ_p\not=0$. So $J_p$ is not $u$-$S$-injective. \end{proof}
\begin{remark}\label{uf not-dprod} It is well known that any direct product of injective modules is injective. However, the direct product of $u$-$S$-injective modules need not be $u$-$S$-injective. Indeed, let $R$ and $S$ be as in Example \ref{uf not-extsion}. Let $\mathbb{Z}/\langle p^k\rangle$ be the cyclic group of order $p^k$ ($k\geq 1$). Then each $\mathbb{Z}/\langle p^k\rangle$ is $u$-$S$-torsion, and thus is $u$-$S$-injective. Let $\mathbb{Q}$ be the additive group of rational numbers. Then, by \cite[Chapter 9 Theorem 6.2]{FS15}, we have ${\rm Ext}_{\mathbb{Z}}^1(\mathbb{Q}/\mathbb{Z},\prod\limits_{k=1}^\infty \mathbb{Z}/\langle p^k\rangle)\cong \prod\limits_{k=1}^\infty {\rm Ext}_{\mathbb{Z}}^1(\mathbb{Q}/\mathbb{Z}, \mathbb{Z}/\langle p^k\rangle)\cong \prod\limits_{k=1}^\infty \mathbb{Z}/\langle p^k\rangle$ since each $\mathbb{Z}/\langle p^k\rangle$ is a reduced cotorsion group. It is easy to verify that $\prod\limits_{k=1}^\infty \mathbb{Z}/\langle p^k\rangle$ is not $u$-$S$-torsion. So $\prod\limits_{k=1}^\infty \mathbb{Z}/\langle p^k\rangle$ is not $u$-$S$-injective. \end{remark}
\begin{proposition}\label{s-inj-prop} Let $R$ be a ring and $S$ a multiplicative subset of $R$. Then the following assertions hold. \begin{enumerate} \item Any finite direct sum of $u$-$S$-injective modules is $u$-$S$-injective. \item Let $0\rightarrow A\xrightarrow{f} B\xrightarrow{g} C\rightarrow 0$ be a $u$-$S$-exact sequence. If $A$ and $C$ are $u$-$S$-injective modules, so is $B$. \item Let $A\rightarrow B$ be a $u$-$S$-isomorphism. If one of $A$ and $B$ is $u$-$S$-injective, so is the other. \item Let $0\rightarrow A\xrightarrow{f} B\xrightarrow{g} C\rightarrow 0$ be a $u$-$S$-exact sequence. If $A$ and $B$ are $u$-$S$-injective, then $C$ is $u$-$S$-injective. \end{enumerate} \end{proposition} \begin{proof} $(1)$ Suppose $E_1,...,E_n$ are $u$-$S$-injective modules. Let $M$ be an $R$-module. Then there exists $s_i\in S$ such that $s_i{\rm Ext}_R^1(M,E_i)=0$ for each $i=1,...,n$. Set $s=s_1...s_n$. Then $s{\rm Ext}_R^1(M,\bigoplus\limits_{i=1}^n E_i)\cong \bigoplus\limits_{i=1}^ns{\rm Ext}_R^1(M, E_i)=0$. Thus $\bigoplus\limits_{i=1}^n E_i$ is $u$-$S$-injective.
$(2)$ Suppose $A$ and $C$ are $u$-$S$-injective modules and $0\rightarrow A\xrightarrow{f} B\xrightarrow{g} C\rightarrow 0$ is a $u$-$S$-exact sequence. Then there are three short exact sequences: $0\rightarrow {\rm Ker}(f)\rightarrow A\rightarrow {\rm Im}(f)\rightarrow 0$, $0\rightarrow {\rm Ker}(g)\rightarrow B\rightarrow {\rm Im}(g)\rightarrow 0$ and $0\rightarrow {\rm Im}(g)\rightarrow C\rightarrow {\rm Coker}(g)\rightarrow 0$. Then ${\rm Ker}(f)$ and ${\rm Coker}(g)$ are all $u$-$S$-torsion and $s{\rm Ker}(g)\subseteq {\rm Im}(f)$ and $s{\rm Im}(f)\subseteq {\rm Ker}(g)$ for some $s\in S$. Let $M$ be an $R$-module. Then $$ {\rm Ext}_R^1(M,A)\rightarrow {\rm Ext}_R^1(M,{\rm Im}(f))\rightarrow {\rm Ext}_R^2(M,{\rm Ker}(f))$$ is exact. Since ${\rm Ker}(f)$ is $u$-$S$-torsion and $A$ is $u$-$S$-injective, ${\rm Ext}_R^1(M,{\rm Im}(f))$ is $u$-$S$-torsion. Note $${\rm Hom}_R(M,{\rm Coker}(g))\rightarrow {\rm Ext}_R^1(M,{\rm Im}(g))\rightarrow {\rm Ext}_R^1(M,C)$$ is exact. Since ${\rm Coker}(g)$ is $u$-$S$-torsion, ${\rm Hom}_R(M,{\rm Coker}(g))$ is $u$-$S$-torsion by Lemma \ref{u-S-tor-ext}. Thus ${\rm Ext}_R^1(M,{\rm Im}(g))$ is $u$-$S$-torsion as ${\rm Ext}_R^1(M,C)$ is $u$-$S$-torsion. We also note that $${\rm Ext}_R^1(M,{\rm Ker}(g))\rightarrow {\rm Ext}_R^1(M,B) \rightarrow {\rm Ext}_R^1(M,{\rm Im}(g))$$ is exact. Thus to verify that ${\rm Ext}_R^1(M,B)$ is $u$-$S$-torsion, we just need to show ${\rm Ext}_R^1(M,{\rm Ker}(g))$ is $u$-$S$-torsion. Denote $N= {\rm Ker}(g)+{\rm Im}(f)$. Consider the following two exact sequences \begin{center} $0\rightarrow {\rm Ker}(g)\rightarrow N\rightarrow N/{\rm Ker}(g)\rightarrow 0$ and $0\rightarrow {\rm Im}(f)\rightarrow N\rightarrow N/{\rm Im}(f)\rightarrow 0.$ \end{center} Then it is easy to verify $N/{\rm Ker}(g)$ and $N/{\rm Im}(f)$ are all $u$-$S$-torsion. Consider the following induced two exact sequences $${\rm Hom}_R(M,N/{\rm Im}(f))\rightarrow {\rm Ext}_R^1(M,{\rm Ker}(g)) \rightarrow {\rm Ext}_R^1(M,N) \rightarrow {\rm Ext}_R^1(M, N/{\rm Im}(f)),$$ $${\rm Hom}_R(M,N/{\rm Ker}(g)) \rightarrow {\rm Ext}_R^1(M,{\rm Im}(f)) \rightarrow {\rm Ext}_R^1(M,N) \rightarrow {\rm Ext}_R^1(M, N/{\rm Ker}(g)).$$ Thus ${\rm Ext}_R^1(M,{\rm Ker}(g))$ is $u$-$S$-torsion if and only if ${\rm Ext}_R^1(M,{\rm Im}(f))$ is $u$-$S$-torsion. Consequently, $B$ is $u$-$S$-injective since ${\rm Ext}_R^1(M,{\rm Im}(f))$ is $u$-$S$-torsion.
$(3)$ Considering the $u$-$S$-exact sequences $0\rightarrow A\rightarrow B\rightarrow 0\rightarrow 0$ and $0 \rightarrow 0\rightarrow A\rightarrow B\rightarrow 0$, we have $A$ is $u$-$S$-injective if and only if $B$ is $u$-$S$-injective by $(2)$.
(4) Suppose $0\rightarrow A\xrightarrow{f} B\xrightarrow{g} C\rightarrow 0$ is a $u$-$S$-exact sequence. Then, as in the proof of $(2)$, there are three short exact sequences: $0\rightarrow {\rm Ker}(f)\rightarrow A\rightarrow {\rm Im}(f)\rightarrow 0$, $0\rightarrow {\rm Ker}(g)\rightarrow B\rightarrow {\rm Im}(g)\rightarrow 0$ and $0\rightarrow {\rm Im}(g)\rightarrow C\rightarrow {\rm Coker}(g)\rightarrow 0$. Then ${\rm Ker}(f)$ and ${\rm Coker}(g)$ are both $u$-$S$-torsion and $s{\rm Ker}(g)\subseteq {\rm Im}(f)$ and $s{\rm Im}(f)\subseteq {\rm Ker}(g)$ for some $s\in S$. Let $M$ be an $R$-module. Note that $${\rm Hom}_R(M,{\rm Coker}(g))\rightarrow{\rm Ext}_R^1(M,{\rm Im}(g))\rightarrow{\rm Ext}_R^1(M,C)\rightarrow {\rm Ext}_R^1(M,{\rm Coker}(g)) $$ is exact. Since ${\rm Coker}(g)$ is $u$-$S$-torsion, we have that ${\rm Hom}_R(M,{\rm Coker}(g))$ and ${\rm Ext}_R^1(M,{\rm Coker}(g))$ are $u$-$S$-torsion by Lemma \ref{u-S-tor-ext}. We just need to verify ${\rm Ext}_R^1(M,{\rm Im}(g))$ is $u$-$S$-torsion. Note that $${\rm Ext}_R^1(M,B)\rightarrow {\rm Ext}_R^1(M,{\rm Im}(g)) \rightarrow {\rm Ext}_R^2(M,{\rm Ker}(g))$$ is exact. Since ${\rm Ext}_R^1(M,B)$ is $u$-$S$-torsion, we just need to verify that ${\rm Ext}_R^2(M,{\rm Ker}(g))$ is $u$-$S$-torsion. By the proof of $(2)$, we just need to show that ${\rm Ext}_R^2(M,{\rm Im}(f))$ is $u$-$S$-torsion. Note that $$ {\rm Ext}_R^2(M,A)\rightarrow {\rm Ext}_R^2(M,{\rm Im}(f))\rightarrow {\rm Ext}_R^3(M,{\rm Ker}(f))$$ is exact. Since ${\rm Ext}_R^2(M,A)$ and ${\rm Ext}_R^3(M,{\rm Ker}(f))$ are $u$-$S$-torsion, we have that ${\rm Ext}_R^2(M,{\rm Im}(f))$ is $u$-$S$-torsion. So $C$ is $u$-$S$-injective. \end{proof}
Let $\frak p$ be a prime ideal of $R$. For brevity, we say an $R$-module $E$ is \emph{$u$-$\frak p$-injective} provided that $E$ is $u$-$(R\setminus\frak p)$-injective. The next result gives a local characterization of injective modules. \begin{proposition}\label{s-injective-loc-char} Let $R$ be a ring and $E$ an $R$-module. Then the following statements are equivalent:
\begin{enumerate} \item $E$ is injective; \item $E$ is $u$-$\frak p$-injective for any $\frak p\in {\rm Spec}(R)$; \item $E$ is $u$-$\frak m$-injective for any $\frak m\in {\rm Max}(R)$.
\end{enumerate} \end{proposition} \begin{proof} $(1)\Rightarrow (2):$ This follows from Theorem \ref{s-inj-prop}.
$(2)\Rightarrow (3):$ Trivial.
$(3)\Rightarrow (1):$ Let $M$ be an $R$-module. For any $\frak m\in {\rm Max}(R)$, ${\rm Ext}_R^1(M,E)$ is $u$-$(R\setminus\frak m)$-torsion, so there exists $s_{\frak m}\in R\setminus\frak m$ such that $s_{\frak m}{\rm Ext}_R^1(M,E)=0$. The ideal generated by all the $s_{\frak m}$ is contained in no maximal ideal and hence equals $R$, so ${\rm Ext}_R^1(M,E)=0$. Therefore $E$ is injective. \end{proof}
We say an $R$-module $M$ is \emph{$S$-divisible} if $M=sM$ for any $s\in S$. The well-known Baer's Criterion states that an $R$-module $E$ is injective if and only if ${\rm Ext}_R^1(R/I,E)=0$ for any ideal $I$ of $R$. The next result gives a uniform $S$-version of Baer's Criterion. \begin{proposition}\label{s-inj-baer}{\bf (Baer's Criterion for $u$-$S$-injective modules)} Let $R$ be a ring, $S$ a multiplicative subset of $R$ and $E$ an $R$-module. If $E$ is a $u$-$S$-injective module, then there exists an element $s\in S$ such that $s{\rm Ext}_R^1(R/I,E)=0$ for any ideal $I$ of $R$. Moreover, if $E=sE$, then the converse also holds. \end{proposition} \begin{proof}
If $E$ is a $u$-$S$-injective module, then ${\rm Ext}_R^1(\bigoplus\limits_{I\unlhd R}R/I,E)$ is $u$-$S$-torsion by Theorem \ref{s-inj-ext}. Thus there is an element $s\in S$ such that $s{\rm Ext}_R^1(\bigoplus\limits_{I\unlhd R}R/I,E)=s\prod\limits_{I\unlhd R}{\rm Ext}_R^1(R/I,E)=0$. So $s{\rm Ext}_R^1(R/I,E)=0$ for any ideal $I$ of $R$.
For the converse, suppose $E=sE$, where $s\in S$ is such that $s{\rm Ext}_R^1(R/I,E)=0$ for any ideal $I$ of $R$. Let $B$ be an $R$-module, $A$ a submodule of $B$ and $f:A\rightarrow E$ an $R$-homomorphism. Set \begin{center}
$\Gamma=\{(C,d)\mid C$ is a submodule of $B$ containing $A$, $d:C\rightarrow E$ is a homomorphism and $d|_A=sf\}.$ \end{center}
Since $(A,sf)\in \Gamma$, $\Gamma$ is nonempty. Set $(C_1,d_1)\leq (C_2,d_2)$ if and only if $C_1\subseteq C_2$ and $d_2|_{C_1}=d_1$. Then $\leq$ is a partial order on $\Gamma$. For any chain $\{(C_j,d_j)\}$ in $\Gamma$, let $C_0=\bigcup\limits_{j}C_j$ and $d_0(c)=d_j(c)$ if $c\in C_j$. Then $(C_0,d_0)$ is an upper bound of the chain $\{(C_j,d_j)\}$ in $\Gamma$. By Zorn's Lemma, there is a maximal element $(C,d)$ in $\Gamma$.
We claim that $C=B$. On the contrary, let $x\in B\setminus C$. Denote $I=\{r\in R\mid rx\in C\}$. Then $I$ is an ideal of $R$. Since $E=sE$, there exists a homomorphism $h:I\rightarrow E$ satisfying $sh(r)=d(rx)$ for any $r\in I$. Since $s{\rm Ext}_R^1(R/I,E)=0$, there is an $R$-homomorphism $g: R\rightarrow E$ such that $g(r)=sh(r)=d(rx)$ for any $r\in I$. Let $C_1=C+Rx$ and define $d_1(c+rx)=d(c)+g(r)$ for $c\in C$ and $r\in R$. If $c+rx=0$, then $r\in I$ and thus $d(c)+g(r)=d(c)+sh(r)=d(c)+d(rx)=d(c+rx)=0$. Hence $d_1$ is a well-defined homomorphism such that $d_1|_A=sf$. So $(C_1,d_1)\in \Gamma$. However, $(C_1,d_1)> (C,d)$, which contradicts the maximality of $(C,d)$. Hence $C=B$, that is, $sf$ extends to a homomorphism $B\rightarrow E$. Applying this to a short exact sequence $0\rightarrow K\rightarrow P\rightarrow M\rightarrow 0$ with $P$ projective shows that $s{\rm Ext}_R^1(M,E)=0$ for any $R$-module $M$, so $E$ is $u$-$S$-injective. \end{proof}
Now, we give the main result of this section.
\begin{theorem}\label{s-injective-ext} {\bf (Cartan-Eilenberg-Bass Theorem for uniformly $S$-Noetherian rings)} Let $R$ be a ring, $S$ a regular multiplicative subset of $R$. Then the following assertions are equivalent: \begin{enumerate} \item $R$ is $u$-$S$-Noetherian; \item any direct sum of injective modules is $u$-$S$-injective; \item any direct union of injective modules is $u$-$S$-injective. \end{enumerate} \end{theorem}
\begin{proof} $(1)\Rightarrow(3):$ Let $\{E_i,f_{i,j}\}_{i<j\in \Lambda}$ be a direct system of injective modules, where each $f_{i,j}$ is the embedding map. Let $\lim\limits_{\longrightarrow}{E_i}$ be its direct limit. Let $s$ be an element in $S$ such that for any ideal $I$ of $R$ there exists a finitely generated sub-ideal $K$ of $I$ such that $sI\subseteq K$. Considering the short exact sequence $0\rightarrow I/K\rightarrow R/K\rightarrow R/I\rightarrow 0$, we have the following long exact sequence: $${\rm Hom}_R(I/K,\lim\limits_{\longrightarrow}{E_i})\rightarrow {\rm Ext}_R^1(R/I,\lim\limits_{\longrightarrow}{E_i})\rightarrow {\rm Ext}_R^1(R/K,\lim\limits_{\longrightarrow}{E_i})\rightarrow {\rm Ext}_R^1(I/K,\lim\limits_{\longrightarrow}{E_i}).$$ Since $R/K$ is finitely presented, we have ${\rm Ext}_R^1(R/K,\lim\limits_{\longrightarrow}{E_i})\cong \lim\limits_{\longrightarrow}{\rm Ext}_R^1(R/K,{E_i})=0$ by the Five Lemma and \cite[Theorem 24.10]{w}. By the proof of Lemma \ref{u-S-tor-ext}, one can show that $s{\rm Hom}_R(I/K,\lim\limits_{\longrightarrow}{E_i})=0$. Thus $s{\rm Ext}_R^1(R/I,\lim\limits_{\longrightarrow}{E_i})=0$ for any ideal $I$ of $R$. Since $S$ is composed of non-zero-divisors, each $E_i$ is $S$-divisible by the proof of \cite[Theorem 2.4.5]{fk16}. Thus $\lim\limits_{\longrightarrow}{E_i}$ is also $S$-divisible. So $\lim\limits_{\longrightarrow}{E_i}$ is $u$-$S$-injective by Proposition \ref{s-inj-baer}.
$(3)\Rightarrow(2):$ Trivial.
$(2)\Rightarrow(1):$ Assume $R$ is not a $u$-$S$-Noetherian ring. By Theorem \ref{u-s-noe-char}, for any $s\in S$, there exists a strictly ascending chain $I_1\subset I_2\subset ...$ of ideals of $R$ such that for any $k\geq 1$ there is $n\geq k$ satisfying $sI_n\not\subseteq I_k$. Set $I=\bigcup\limits_{i=1}^{\infty}I_i$. Then $I$ is an ideal of $R$ and $I/I_i\not=0$ for any $i\geq 1$. Denote by $E(I/I_i)$ the injective envelope of $I/I_i$. Let $f_i$ be the natural composition $I\twoheadrightarrow I/I_i\rightarrowtail E(I/I_i)$. Since $sI_n\not\subseteq I_i$ for any $i\geq 1$ and some $n\geq i$, we have $sf_i\not=0$ for any $i\geq 1$. We define $f:I\rightarrow \bigoplus_{i=1}^{\infty}E(I/I_i)$ by $f(a)=(f_i(a))$. Note that for each $a\in I$, we have $a\in I_i$ for some $i\geq 1$. So $f$ is a well-defined $R$-homomorphism. Let $\pi_i:\bigoplus_{i=1}^{\infty}E(I/I_i)\twoheadrightarrow E(I/I_i)$ be the projection. The embedding map $i: I\rightarrow R$ induces an exact sequence $${\rm Hom}_R(R,\bigoplus_{i=1}^{\infty}E(I/I_i))\xrightarrow{i^{\ast}} {\rm Hom}_R(I,\bigoplus_{i=1}^{\infty}E(I/I_i))\xrightarrow{\delta} {\rm Ext}_R^1(R/I,\bigoplus_{i=1}^{\infty}E(I/I_i))\rightarrow 0.$$ Since $\bigoplus_{i=1}^{\infty}E(I/I_i)$ is $u$-$S$-injective, there is an $s\in S$ such that $$s{\rm Ext}_R^1(R/I,\bigoplus_{i=1}^{\infty}E(I/I_i))=0.$$ Thus there exists a homomorphism $g:R\rightarrow \bigoplus_{i=1}^{\infty}E(I/I_i)$ such that $sf=i^{\ast}(g)$. For any $a\in I$ we have $s\pi_if(a)=\pi_ii^{\ast}(g)(a)=\pi_ig(a)=a\pi_ig(1)$, and since $g(1)$ has only finitely many nonzero components, $\pi_ig(1)=0$ for all sufficiently large $i$. So for such $i$, $sf_i=s\pi_if:I\rightarrow E(I/I_i)$ is a zero homomorphism, which is a contradiction. Hence $R$ is $u$-$S$-Noetherian. \end{proof}
\end{document} |
\begin{document}
\title{\textbf{\huge{An efficient methodology to estimate the parameters of a two-dimensional chirp signal model}}}
\begin{abstract} Two-dimensional (2-D) chirp models have received considerable attention in statistical signal processing, particularly in image processing (to model gray-scale and texture images), magnetic resonance imaging, optical imaging, etc. In this paper we address the problem of estimation of the unknown parameters of a 2-D chirp model under the assumption that the errors are independently and identically distributed (i.i.d.). The key attribute of the proposed estimation procedure is that it is computationally more efficient than the least squares estimation method. Moreover, the proposed estimators are observed to have the same asymptotic properties as the least squares estimators, thus providing computational effectiveness without any compromise on the efficiency of the estimators. We extend the propounded estimation method to provide a sequential procedure to estimate the unknown parameters of a 2-D chirp model with multiple components, and under the assumption of i.i.d.\ errors we study the large sample properties of these sequential estimators. Simulation studies and a synthetic data analysis show that the proposed estimators perform satisfactorily.
\end{abstract} \section{Introduction}\label{sec:Intro} A two-dimensional (2-D) chirp model has the following mathematical expression: \begin{equation}\begin{split}\label{multiple_comp_model} y(m,n) = \sum_{k=1}^{p} \{A_k^0 \cos(\alpha_k^0 m + \beta_k^0 m^2 + \gamma_k^0 n + \delta_k^0 n^2) + B_k^0 \sin(\alpha_k^0 m + \beta_k^0 m^2 + \gamma_k^0 n + \delta_k^0 n^2)\} + X(m,n);\\ m = 1, \ldots, M; n = 1, \ldots, N. \end{split}\end{equation} Here, $y(m,n)$ is the observed signal data, and the parameters $A_k^0$s, $B_k^0$s are the amplitudes, $\alpha_k^0$s, $\gamma_k^0$s are the frequencies and $\beta_k^0$s, $\delta_k^0$s are the frequency rates. The random component $X(m,n)$ accounts for the noise component of the observed signal. In this paper, we assume that $X(m,n)$ is an independently and identically distributed (i.i.d.) random field. \\ \par
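To fix ideas, the following is a minimal simulation sketch of model \eqref{multiple_comp_model} with i.i.d.\ Gaussian errors, written in Python with NumPy; the function name and the illustrative call are our own choices (the parameter values in the call are those of Case I in Section \ref{sec:Simulation_studies}) and are not part of any standard package.
\begin{verbatim}
import numpy as np

def simulate_2d_chirp(M, N, A, B, alpha, beta, gamma, delta, sigma, seed=0):
    """Data from the p-component 2-D chirp model with i.i.d. N(0, sigma^2)
    errors; A, B, alpha, beta, gamma, delta are length-p sequences."""
    rng = np.random.default_rng(seed)
    m = np.arange(1, M + 1)[:, None]     # m = 1, ..., M (row index)
    n = np.arange(1, N + 1)[None, :]     # n = 1, ..., N (column index)
    y = np.zeros((M, N))
    for k in range(len(A)):
        phase = alpha[k]*m + beta[k]*m**2 + gamma[k]*n + delta[k]*n**2
        y += A[k]*np.cos(phase) + B[k]*np.sin(phase)
    return y + sigma*rng.standard_normal((M, N))

# One-component example (the parameter values of Case I of the simulations).
y = simulate_2d_chirp(50, 50, A=[2.0], B=[3.0], alpha=[1.5], beta=[0.5],
                      gamma=[2.5], delta=[0.75], sigma=0.5)
\end{verbatim}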
It can be seen that the model admits a decomposition into two parts: the deterministic component and the random component. The deterministic component represents a gray-scale texture, and the random component makes the model realistic for practical applications. For illustration, we simulate data with a fixed set of model parameters. Figure \ref{fig:true_signal} shows the gray-scale texture corresponding to the simulated data without the noise component and Figure \ref{fig:noisy_signal} shows the contaminated texture image corresponding to the simulated data with the noise component. This clearly suggests that 2-D chirp signal models can be used effectively in modelling and analysing black and white texture images. \begin{figure}
\caption{Original texture.}
\label{fig:true_signal}
\caption{Noisy texture.}
\label{fig:noisy_signal}
\end{figure} Apart from the applications in image analysis, these signals are commonly observed in mobile telecommunications, surveillance systems, radars, sonars, etc. For more details on the applications, one may see the works of Francos and Friedlander \cite{1998}, \cite{1999}, Simeunovi\'c and Djurovi\'c \cite{2016}, Zhang et al.\ \cite{2008} and the references cited therein. \\ \par
Parameter estimation of a 2-D chirp signal is an important statistical signal processing problem. Recently, Zhang et al.\ \cite{2008}, Lahiri et al.\ \cite{2013_2} and Grover et al.\ \cite{2018_2} proposed some estimation methods of note. For instance, Zhang et al.\ \cite{2008} proposed an algorithm based on the product cubic phase function for the estimation of the frequency rates of 2-D chirp signals under low signal to noise ratio and the assumption of stationary errors. They conducted simulations to verify the performance of the proposed estimation algorithm; however, the theoretical properties of the proposed estimators were not studied. Lahiri et al.\ \cite{2015} suggested the least squares estimation method. They observed that the least squares estimators (LSEs) of the unknown parameters of this model are strongly consistent and asymptotically normally distributed under the assumption of stationary additive errors. The rates of convergence were observed to be $M^{-1/2}N^{-1/2}$ for the amplitude estimates, $M^{-3/2}N^{-1/2}$ and $M^{-1/2}N^{-3/2}$ for the frequency estimates, and $M^{-5/2}N^{-1/2}$ and $M^{-1/2}N^{-5/2}$ for the frequency rate estimates. Grover et al.\ \cite{2018_2} proposed the approximate least squares estimators (ALSEs), obtained by maximising a periodogram-type function, and under the same stationary error assumptions observed that the ALSEs are strongly consistent and asymptotically equivalent to the LSEs.\\ \par
A chirp signal is a particular case of the polynomial phase signal when the phase is a quadratic polynomial. Although work on parameter estimation of the aforementioned 2-D chirp model is rather limited, several authors have considered the more generalised version of this model$-$the 2-D polynomial phase signal model. For references, see Djurovi\'c et al.\ \cite{2010}, Djurovi\'c \cite{2017_2}, Francos and Friedlander \cite{1998,1999}, Friedlander and Francos \cite{1996}, Lahiri and Kundu \cite{2017}, Simeunovi\'c et al.\ \cite{2014}, Simeunovi$\acute{c}$ and Djurovi$\acute{c}$ \cite{2016} and Djurovi\'c and Simeunovi\'c \cite{2018_3}. \\ \par
In this paper, we address the problem of parameter estimation of a one-component 2-D chirp model as well as the more general multiple-component 2-D chirp model. We put forward two methods for this purpose. The key characteristic of the proposed estimation method is that it reduces the foregoing 2-D chirp model to two 1-D chirp models. Thus, instead of fitting a 2-D chirp model, we are required to fit two 1-D chirp models to the given data matrix. For the fitting, we use a simple modification of the least squares estimation method. The proposed algorithm is numerically more efficient than the usual least squares estimation method proposed by Lahiri et al. \cite{2015}. For instance, for a one-component 2-D chirp model, to estimate the parameters using these algorithms, we need to solve two 2-D optimisation problems as opposed to a 4-D optimisation problem in the case of finding the LSEs. This also curtails the number of grid points required to find the initial values of the non-linear parameters, as the 4-D grid search required for the computation of the usual LSEs or ALSEs reduces to two 2-D grid searches. Therefore, instead of searching along a grid mesh consisting of $M^3N^3$ points, we need to search among only $M^3 + N^3$ points, which is much more feasible to execute computationally. In essence, the contributions of this paper are three-fold: \begin{enumerate} \item [1.] We put forward a computationally efficient algorithm for the estimation of the unknown parameters of 2-D chirp signal models as a practical alternative to the usual least squares estimation method. \item [2.] We examine the asymptotic properties of the proposed estimators under the assumption of i.i.d.\ errors and observe that the proposed estimators are strongly consistent and asymptotically normally distributed. In fact, they are observed to be asymptotically equivalent to the corresponding LSEs. When the errors are assumed to be Gaussian, the asymptotic variance-covariance matrix of the proposed estimators coincides with the asymptotic Cram\'er-Rao lower bound. \item [3.] We conduct simulation experiments and analyse a synthetic texture (see Figure \ref{fig:noisy_signal}) to assess the effectiveness of the proposed estimators. \end{enumerate}
The rest of the paper is organised as follows. In the next section, we provide some preliminary results required to study the asymptotic properties of the proposed estimators. In Section \ref{sec:One_component_model}, we consider a one-component 2-D chirp model and state the model assumptions, some notations and present the proposed algorithms along with the asymptotic properties of the proposed estimators. In Section \ref{sec:Multiple_component_model}, we extend the algorithm and develop a sequential procedure to estimate the parameters of a multiple-component 2-D chirp model. We also study the asymptotic properties of the proposed sequential estimators in this section. We perform numerical experiments for different model parameters in Section \ref{sec:Simulation_studies} and analyse a synthetic data for illustration in Section \ref{sec:data_analysis}. Finally, we conclude the paper in Section \ref{sec:Conclusion} and we provide the proofs of all the theoretical claims in the appendices.
\section{Preliminary Results}\label{sec:preliminary _results} In this section, we provide the asymptotic results obtained for the usual LSEs of the unknown parameters of a 1-D chirp model by Lahiri et al. \cite{2015}. These results are later exploited to prove the asymptotic normality of the proposed estimators. \subsection{One-component 1-D Chirp Model}\label{sec:one_comp_1D} Consider a 1-D chirp model with the following mathematical expression: \begin{equation}\label{one_comp_1D_model} y(t) = A^0 \cos(\alpha^0 t + \beta^0 t^2) + B^0 \sin(\alpha^0 t + \beta^0 t^2) + X(t). \end{equation} Here $y(t)$ is the observed data at time points $t = 1, \ldots, n$, $A^0$, $B^0$ are the amplitudes and $\alpha^0$ is the frequency and $\beta^0$ is the frequency rate parameter. $\{X(t)\}_{t=1}^{n}$ is the sequence of error random variables. \\
\noindent The LSEs of $\alpha^0$ and $\beta^0$ can be obtained by minimising the following reduced error sum of squares: \begin{equation*} R_{n}(\alpha, \beta) = Q_n(\hat{A}, \hat{B}, \alpha, \beta) = \textit{\textbf{Y}}^{\top}(\textbf{I} - \textbf{P}_{\textbf{Z}_n}(\alpha, \beta))\textit{\textbf{Y}} \end{equation*} where $$Q_n(A, B, \alpha, \beta) = (\textit{\textbf{Y}} - \textbf{Z}_n(\alpha, \beta)\boldsymbol{\phi})^{\top}(\textit{\textbf{Y}} - \textbf{Z}_n(\alpha, \beta)\boldsymbol{\phi})$$ is the error sum of squares, $\textbf{P}_{\textbf{Z}_n}(\alpha, \beta) = \textbf{Z}_n(\alpha, \beta)(\textbf{Z}_n(\alpha, \beta)^{\top}\textbf{Z}_n(\alpha, \beta))^{-1}\textbf{Z}_n(\alpha, \beta)^{\top}$ is the projection matrix on the column space of the matrix $\textbf{Z}_n(\alpha, \beta)$, \begin{equation}\label{Z_definition} \textbf{Z}_n(\alpha, \beta) = \begin{bmatrix} \cos(\alpha + \beta) & \sin(\alpha + \beta) \\ \vdots & \vdots \\ \cos(n\alpha + n^2\beta) & \sin(n\alpha + n^2\beta) \\ \end{bmatrix}, \end{equation} $\textit{\textbf{Y}} = \begin{bmatrix} y(1) \ldots y(n) \end{bmatrix}^{\top}$ is the observed data vector and $\boldsymbol{\phi} = \begin{bmatrix} A & B \end{bmatrix}^{\top}$ is the vector of linear parameters. \\ \par \noindent The following assumptions are made on the error component and the parameters of model \eqref{one_comp_model}: \begin{assumptionP}\label{assump:P1} $X(t)$ is a sequence of i.i.d.\ random variables with mean zero, variance $\sigma^2$ and finite fourth order moment. \end{assumptionP}
\begin{assumptionP}\label{assump:P2} $(A^0, B^0, \alpha^0, \beta^0)$ is an interior point of the parameter space $\boldsymbol{\Theta} = (-K, K) \times (-K, K) \times (0, \pi) \times (0, \pi),$ where $K$ is a positive real number and ${A^0}^2 + {B^0}^2 > 0.$ \end{assumptionP}
\begin{theoremP}\label{theorem:preliminary_result_1} Let us denote $\textit{\textbf{R}}'_{n}(\alpha, \beta)$ as the first derivative vector and $\textit{\textbf{R}}''_{n}(\alpha, \beta)$ as the second derivative matrix of the function $R_{n}(\alpha, \beta)$. Then, under the assumptions \ref{assump:P1} and \ref{assump:P2}, we have: \begin{equation}\begin{split}\label{prelim_LSE_first_derivative_one_comp} -\textit{\textbf{R}}'_{n}(\alpha^0, \beta^0)\boldsymbol{\Delta} \rightarrow \boldsymbol{\mathcal{N}}_2(\textbf{0}, 2\sigma^2 \boldsymbol{\Sigma}^{-1}), \end{split}\end{equation} \begin{equation}\begin{split}\label{prelim_LSE_second_derivative_one_comp} \boldsymbol{\Delta}\textit{\textbf{R}}''_{n}(\alpha^0, \beta^0)\boldsymbol{\Delta} \rightarrow \boldsymbol{\Sigma}^{-1}. \end{split}\end{equation} Here, $\boldsymbol{\Delta} = \textnormal{diag}(\frac{1}{n\sqrt{n}}, \frac{1}{n^2\sqrt{n}})$, \begin{equation}\label{Sigma_definition} \boldsymbol{\Sigma} = \frac{2}{{A^0}^2 + {B^0}^2} \begin{bmatrix}
96 & -90 \\
-90 & 90 \\ \end{bmatrix} \textnormal{ and } \end{equation}
\begin{equation}\label{Sigma_inverse_definition}
\boldsymbol{\Sigma}^{-1} = \begin{bmatrix}
\frac{{A^0}^2 + {B^0}^2}{12} & \frac{{A^0}^2 + {B^0}^2}{12} \\
\frac{{A^0}^2 + {B^0}^2}{12} & \frac{4({A^0}^2 + {B^0}^2)}{45} \\ \end{bmatrix}. \end{equation} The notation $\boldsymbol{\mathcal{N}}_2(\boldsymbol{\mu}, \boldsymbol{\mathcal{V}})$ means bivariate normally distributed with mean vector $\boldsymbol{\mu}_{2 \times 1}$ and variance-covariance matrix $\boldsymbol{\mathcal{V}}_{2 \times 2}$. \end{theoremP} \begin{proof} This proof follows from Theorem 2 of Lahiri et al. \cite{2015}. \\ \end{proof}
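\noindent For readers who wish to compute the criterion $R_{n}(\alpha, \beta)$ numerically, the sketch below (Python/NumPy; the helper names are ours) evaluates the reduced error sum of squares for a given data vector by regressing on the two columns of $\textbf{Z}_n(\alpha, \beta)$ rather than forming the projection matrix explicitly. Evaluating it on a grid of $(\alpha, \beta)$ values and refining around the minimiser is one straightforward, if crude, way of locating the LSEs of the non-linear parameters; the choice of optimiser is left to the practitioner.
\begin{verbatim}
import numpy as np

def Z_matrix(n, alpha, beta):
    # The n x 2 matrix with columns cos(alpha*t + beta*t^2), sin(alpha*t + beta*t^2).
    t = np.arange(1, n + 1)
    phase = alpha*t + beta*t**2
    return np.column_stack([np.cos(phase), np.sin(phase)])

def R_n(y, alpha, beta):
    # Reduced error sum of squares Y'(I - P_Z)Y for a 1-D chirp data vector y.
    Z = Z_matrix(len(y), alpha, beta)
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)   # least squares fit of (A, B)
    resid = y - Z @ coef
    return resid @ resid
\end{verbatim}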
\subsection{Multiple-component 1-D Chirp Model}\label{sec:multiple_comp_1D} Now we consider a 1-D chirp model with multiple components, mathematically expressed as follows: \begin{equation*} y(t) = \sum_{k=1}^{p} \{A_k^0 \cos(\alpha_k^0 t + \beta_k^0 t^2) + B_k^0 \sin(\alpha_k^0 t + \beta_k^0 t^2)\} + X(t); \ t = 1, \ldots, n. \end{equation*} Here, $A_k^0$s, $B_k^0$s are the amplitudes, $\alpha_k^0$s are the frequencies and $\beta_k^0$ are the frequency rates, the parameters that characterise the observed signal $y(t)$ and $X(t)$ is the random noise component. \\
Lahiri et al.\ \cite{2015} suggested a sequential procedure to estimate the unknown parameters of the above model. We discuss in brief, the proposed sequential procedure and then state some of the asymptotic results they established, germane to our work. \begin{description} \item \namedlabel{itm:step1} {\textbf{Step 1:}}\ The first step of the sequential method is to estimate the non-linear parameters of the first component of the model, $\alpha_1^0$ and $\beta_1^0$, say $\hat{\alpha}_1$ and $\hat{\beta}_1$ by minimising the following reduced error sum of squares: \begin{equation*} R_{1,n}(\alpha, \beta) = \textit{\textbf{Y}}^{\top}(\textbf{I} - \textbf{P}_{\textbf{Z}_n}(\alpha, \beta))\textit{\textbf{Y}} \end{equation*} with respect to $\alpha$ and $\beta$ simultaneously. \item \namedlabel{itm:step2} {\textbf{Step 2:}}\ Then the first component linear parameter estimates, $\hat{A}_1$ and $\hat{B}_1$ are obtained using the separable linear regression of Richards \cite{1961} as follows: \begin{equation*} \begin{bmatrix} \hat{A}_1 \\ \hat{B}_1 \end{bmatrix} = [\textbf{Z}_n(\hat{\alpha}_1, \hat{\beta}_1)^{\top}\textbf{Z}_n(\hat{\alpha}_1, \hat{\beta}_1)]^{-1}\textbf{Z}_n(\hat{\alpha}_1, \hat{\beta}_1)^{\top}\textit{\textbf{Y}}. \end{equation*} \item \namedlabel{itm:step3} {\textbf{Step 3:}}\ Once we have the estimates of the first component parameters, we take out its effect from the original signal and obtain a new data vector as follows: \begin{equation*} \textit{\textbf{Y}}_1 = \textit{\textbf{Y}} - \textbf{Z}_n(\hat{\alpha}_1, \hat{\beta}_1)\begin{bmatrix} \hat{A}_1 \\ \hat{B}_1 \end{bmatrix}. \end{equation*} \item \namedlabel{itm:step4} {\textbf{Step 4:}}\ Then the estimates of the second component parameters are obtained by using the new data vector and following the same procedure and the process is repeated $p$ times. \end{description} \noindent Under the Assumption \ref{assump:P1} on the error random variables and the following assumption on the parameters: \begin{assumptionP}\label{assump:P3} $(A_k^0, B_k^0, \alpha_k^0, \beta_k^0)$ is an interior point of $\boldsymbol{\Theta}$, for all $k = 1, \ldots, p$ and the frequencies and the frequency rates are such that $(\alpha_i^0, \beta_i^0) \neq (\alpha_j^0, \beta_j^0)$ $\forall i \neq j$. \end{assumptionP} \begin{assumptionP}\label{assump:P4} $A_k^0$s and $B_k^0$s satisfy the following relationship: \begin{equation*} K^2 > {A_{1}^{0}}^2 + {B_{1}^{0}}^2 > {A_{2}^{0}}^2 + {B_{2}^{0}}^2 > \ldots > {A_{p}^{0}}^2 + {B_{p}^{0}}^2 > 0, \end{equation*} \end{assumptionP} \noindent we have the following results. \begin{theoremP}\label{theorem:preliminary_result_3} Let us denote $\textit{\textbf{R}}'_{k,n}(\alpha, \beta)$ as the first derivative vector and $\textit{\textbf{R}}''_{k,n}(\alpha, \beta)$ as the second derivative matrix of the function $R_{k,n}(\alpha, \beta)$, $k = 1, \ldots, p$. 
Then, under the assumptions \ref{assump:P1}, \ref{assump:P3} and \ref{assump:P4}: \begin{equation}\begin{split}\label{prelim_LSE_first_derivative_multiple_comp_1} -\frac{1}{\sqrt{n}}\textit{\textbf{R}}'_{k,n}(\alpha_k^0, \beta_k^0)\boldsymbol{\Delta} \rightarrow 0, \end{split}\end{equation} \begin{equation}\begin{split}\label{prelim_LSE_first_derivative_multiple_comp_2} -\textit{\textbf{R}}'_{k,n}(\alpha_k^0, \beta_k^0)\boldsymbol{\Delta} \rightarrow \boldsymbol{\mathcal{N}}_2(\textbf{0}, 2\sigma^2 \boldsymbol{\Sigma}_k^{-1}), \end{split}\end{equation} \begin{equation}\begin{split}\label{prelim_LSE_second_derivative_multiple_comp} \boldsymbol{\Delta}\textit{\textbf{R}}''_{k,n}(\alpha_k^0, \beta_k^0)\boldsymbol{\Delta} \rightarrow \boldsymbol{\Sigma}_k^{-1}. \end{split}\end{equation} Here, $\boldsymbol{\Delta}$ is as defined in Theorem \ref{theorem:preliminary_result_1}, \begin{equation}\label{Sigma_k_definition} \boldsymbol{\Sigma}_k = \frac{2}{{A_k^0}^2 + {B_k^0}^2} \begin{bmatrix}
96 & -90 \\
-90 & 90 \end{bmatrix} \textnormal{ and } \end{equation} \begin{equation}\label{Sigma_k_inverse_definition}
\boldsymbol{\Sigma}_k^{-1} = \begin{bmatrix}
\frac{{A_k^0}^2 + {B_k^0}^2}{12} & \frac{{A_k^0}^2 + {B_k^0}^2}{12} \\
\frac{{A_k^0}^2 + {B_k^0}^2}{12} & \frac{4({A_k^0}^2 + {B_k^0}^2)}{45} \end{bmatrix}. \end{equation} \end{theoremP} \begin{proof} The proof of \eqref{prelim_LSE_first_derivative_multiple_comp_1} follows along the same lines as proof of Lemma 4 of Lahiri et al. \cite{2015} and that of \eqref{prelim_LSE_first_derivative_multiple_comp_2} and \eqref{prelim_LSE_second_derivative_multiple_comp} follows from Theorem 2 of Lahiri et al. \cite{2015}. Note that Lahiri et al. \cite{2015} showed that the sequential LSEs have the same asymptotic distribution as the usual LSEs based on a famous number theory conjecture (see the reference). \\ \end{proof}
\section{One-Component 2-D Chirp Model}\label{sec:One_component_model} In this section, we provide the methodology to obtain the proposed estimators for the parameters of a one-component 2-D chirp model, mathematically expressed as follows: \begin{equation}\begin{split}\label{one_comp_model} y(m,n) =A^0 \cos(\alpha^0 m + \beta^0 m^2 + \gamma^0 n + \delta^0 n^2) + B^0 \sin(\alpha^0 m + \beta^0 m^2 + \gamma^0 n + \delta^0 n^2) + X(m,n);\\ m = 1, \ldots, M; n = 1, \ldots, N. \end{split}\end{equation} Here $y(m,n)$ is the observed data vector and the parameters $A^0$, $B^0$ are the amplitudes, $\alpha^0$, $\gamma^0$ are the frequencies and $\beta^0$, $\delta^0$ are the frequency rates of the signal model. As mentioned in the introduction, $X(m,n)$ accounts for the noise present in the signal.\\
\noindent We will use the following notation: $\boldsymbol{\theta} = (A, B, \alpha, \beta, \gamma, \delta)$ is the parameter vector, \\ $\boldsymbol{\theta}^0 = (A^0, B^0, \alpha^0, \beta^0, \gamma^0, \delta^0)$ is the true parameter vector and $\boldsymbol{\Theta}_1 = (-K,K) \times (-K, K) \times (0,\pi) \times (0,\pi)\times (0,\pi) \times (0,\pi)$ is the parameter space.
\subsection{Proposed Methodology}\label{sec:Estimation_one_comp_LSEs_method} Let us consider the above-stated 2-D chirp signal model with one-component. Suppose we fix $n = n_0$, then \eqref{one_comp_model} can be rewritten as follows: \begin{equation}\begin{split}\label{model_n=n0_one_comp} y(m,n_0) & = A^0 \cos(\alpha^0 m + \beta^0 m^2 + \gamma^0 n_0 + \delta^0 n_0^2) + B^0 \sin(\alpha^0 m + \beta^0 m^2 + \gamma^0 n_0 + \delta^0 n_0^2) + X(m,n_0)\\ & = A^0(n_0) \cos(\alpha^0 m + \beta^0 m^2) + B^0(n_0) \sin(\alpha^0 m + \beta^0 m^2) + X(m,n_0);\ \ m = 1, \cdots, M, \end{split}\end{equation} which represents a 1-D chirp model with $A^0(n_0)$, $B^0(n_0)$ as the amplitudes, $\alpha^0$ as the frequency parameter and $\beta^0$ as the frequency rate parameter. Here, \begin{equation*}\begin{split} & A^0(n_0) = \ \ A^0 \cos(\gamma^0 n_0 + \delta^0 n_0^2) + B^0 \sin(\gamma^0 n_0 + \delta^0 n_0^2), \textmd{ and} \\ & B^0(n_0) = -A^0 \sin(\gamma^0 n_0 + \delta^0 n_0^2) + B^0 \cos(\gamma^0 n_0 + \delta^0 n_0^2). \end{split}\end{equation*} Thus for each fixed $n_0$ $\in$ $\{1, \ldots, N\}$, we have a 1-D chirp model with the same frequency and frequency rate parameters, though different amplitudes. This 1-D model corresponds to a column of the 2-D data matrix.\\
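The algebra behind this column-wise reduction is elementary but easy to get wrong in an implementation, so the following short numerical check (Python/NumPy, with arbitrary illustrative values of our own choosing) verifies the relations defining $A^0(n_0)$ and $B^0(n_0)$ above.
\begin{verbatim}
import numpy as np

# Check: with phi = gamma*n0 + delta*n0^2,
#   A cos(theta + phi) + B sin(theta + phi)
#     = (A cos(phi) + B sin(phi)) cos(theta) + (-A sin(phi) + B cos(phi)) sin(theta).
A, B, alpha, beta, gamma, delta, n0 = 2.0, 3.0, 1.5, 0.5, 2.5, 0.75, 7
m = np.arange(1, 26)
theta = alpha*m + beta*m**2
phi = gamma*n0 + delta*n0**2
lhs = A*np.cos(theta + phi) + B*np.sin(theta + phi)
A_n0 = A*np.cos(phi) + B*np.sin(phi)
B_n0 = -A*np.sin(phi) + B*np.cos(phi)
rhs = A_n0*np.cos(theta) + B_n0*np.sin(theta)
assert np.allclose(lhs, rhs)
\end{verbatim}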
Our aim is to estimate the non-linear parameters $\alpha^0$ and $\beta^0$ from the columns of the data matrix and one of the most reasonable estimators for this purpose are the least squares estimators. Therefore, the estimators of $\alpha^0$ and $\beta^0$ can be obtained by minimising the following function: \begin{equation*}\begin{split} R_M(\alpha, \beta, n_0) &
= {\textit{\textbf{Y}}^{\top}_{n_0}}(\textbf{I} - \textbf{P}_{\textbf{Z}_M}(\alpha, \beta))\textit{\textbf{Y}}_{n_0} \end{split}\end{equation*} for each $n_0$. Here, $\textit{\textbf{Y}}_{n_0} = \begin{bmatrix} y[1,n_0] & \ldots & y[M,n_0] \end{bmatrix}^{\top}$ is the $n_0$\textit{th} column of the original data matrix, $\textbf{P}_{\textbf{Z}_M}(\alpha, \beta) = \textbf{Z}_M(\alpha, \beta)(\textbf{Z}_M(\alpha, \beta)^{\top}\textbf{Z}_M(\alpha, \beta))^{-1}\textbf{Z}_M(\alpha, \beta)^{\top}$ is the projection matrix on the column space of the matrix $\textbf{Z}_M(\alpha, \beta)$ and the matrix $\textbf{Z}_M(\alpha, \beta)$ can be obtained by replacing $n$ by $M$ in \eqref{Z_definition}. This process involves minimising $N$ 2-D functions corresponding to the $N$ columns of the matrix. Thus, for computational efficiency, we propose to minimise the following function instead: \begin{equation}\begin{split}\label{reduced_ess_alpha_beta} R^{(1)}_{MN}(\alpha, \beta) & = \sum_{n_0 = 1}^{N}R_M(\alpha, \beta, n_0) = \sum_{n_0 = 1}^{N} \textit{\textbf{Y}}^{\top}_{n_0}(\textbf{I} - \textbf{P}_{\textbf{Z}_M}(\alpha, \beta))\textit{\textbf{Y}}_{n_0} \end{split}\end{equation} with respect to $\alpha$ and $\beta$ simultaneously and obtain $\hat{\alpha}$ and $\hat{\beta}$, which reduces the estimation process to solving only one 2-D optimisation problem. Note that since the errors are assumed to be i.i.d., replacing these $N$ functions by their sum is justifiable.
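A possible implementation of the pooled criterion $R^{(1)}_{MN}(\alpha, \beta)$ in \eqref{reduced_ess_alpha_beta} is sketched below (Python/NumPy; the function name is ours). All $N$ columns are regressed on the two columns of $\textbf{Z}_M(\alpha, \beta)$ in a single least squares call, so one evaluation of the pooled criterion costs little more than one evaluation of $R_M(\alpha, \beta, n_0)$. The analogous row criterion defined next can be evaluated by applying the same function to the transposed data matrix.
\begin{verbatim}
import numpy as np

def R1_MN(Y, alpha, beta):
    # Pooled column criterion: sum over n0 of Y_{n0}'(I - P_{Z_M}) Y_{n0}.
    M = Y.shape[0]
    t = np.arange(1, M + 1)
    phase = alpha*t + beta*t**2
    Z = np.column_stack([np.cos(phase), np.sin(phase)])   # M x 2
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)          # 2 x N coefficients
    resid = Y - Z @ coef                                  # column-wise residuals
    return np.sum(resid**2)
\end{verbatim}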
Similarly, we can obtain the estimates, $\hat{\gamma}$ and $\hat{\delta}$, of $\gamma^0$ and $\delta^0$, by minimising the following criterion function:
\begin{equation}\begin{split}\label{reduced_ess_gamma_delta} R^{(2)}_{MN}(\gamma, \delta) & = \sum_{m_0 = 1}^{M}R_N(\gamma, \delta, m_0) = \sum_{m_0 = 1}^{M} \textit{\textbf{Y}}^{\top}_{m_0}(\textbf{I} - \textbf{P}_{\textbf{Z}_N}(\gamma, \delta))\textit{\textbf{Y}}_{m_0} \end{split}\end{equation} with respect to $\gamma$ and $\delta$ simultaneously. The data vector $\textit{\textbf{Y}}_{m_0} = \begin{bmatrix} y[m_0,1] & \ldots & y[m_0,N] \end{bmatrix}^{\top}$, is the $m_0$\textit{th} row of the data matrix, $m_0 = 1, \ldots, M$, $\textbf{P}_{\textbf{Z}_N}(\gamma, \delta)$ is the projection matrix on the column space of the matrix $\textbf{Z}_N(\gamma, \delta)$ and the matrix $\textbf{Z}_{N}(\gamma, \delta)$ can be obtained by replacing $n$ by $N$ and $\alpha$ and $\beta$ by $\gamma$ and $\delta$ respectively in the matrix $\textbf{Z}_n(\alpha, \beta)$, defined in \eqref{Z_definition}.\\
Once we have the estimates of the non-linear parameters, we estimate the linear parameters by the usual least squares regression technique as proposed by Lahiri et al. \cite{2015}: \begin{equation*} \begin{bmatrix} \hat{A} \\ \hat{B} \end{bmatrix} = [\textit{\textbf{W}}(\hat{\alpha}, \hat{\beta}, \hat{\gamma}, \hat{\delta})^{T} \textit{\textbf{W}}(\hat{\alpha}, \hat{\beta}, \hat{\gamma}, \hat{\delta})]^{-1}\textit{\textbf{W}}(\hat{\alpha}, \hat{\beta}, \hat{\gamma}, \hat{\delta})^{T}\textit{\textbf{Y}}. \end{equation*} Here, $\textit{\textbf{Y}}_{M N \times 1} = \left[\begin{array}{ccccccc}y(1, 1) & \ldots & y(M, 1) & \ldots & y(1, N) & \ldots & y(M, N)\end{array}\right]^{T}$ is the observed data vector, and \begin{equation}\label{W_matrix} \textit{\textbf{W}}(\alpha, \beta, \gamma, \delta)_{M N \times 2} = \left[\begin{array}{cc}\cos(\alpha + \beta + \gamma + \delta) & \sin(\alpha + \beta + \gamma + \delta) \\ \cos(2\alpha + 4\beta + \gamma + \delta) & \sin(2\alpha + 4\beta + \gamma + \delta) \\ \vdots & \vdots \\ \cos(M \alpha + M^2 \beta + \gamma + \delta) & \sin(M \alpha + M^2 \beta + \gamma + \delta) \\ \vdots & \vdots \\ \cos(\alpha + \beta + N \gamma + N^2 \delta) & \sin(\alpha + \beta + N \gamma + N^2 \delta) \\ \cos(2\alpha + 4\beta + N \gamma + N^2 \delta) & \sin(2\alpha + 4\beta + N \gamma + N^2 \delta)\\\vdots & \vdots \\ \cos(M \alpha + M^2 \beta + N \gamma + N^2 \delta) & \sin(M \alpha + M^2 \beta + N \gamma + N^2 \delta)\end{array}\right]. \end{equation}
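Given $(\hat{\alpha}, \hat{\beta}, \hat{\gamma}, \hat{\delta})$, the linear parameters thus follow from a single ordinary least squares fit on the $MN\times 2$ matrix in \eqref{W_matrix}. The sketch below (Python/NumPy; names ours) builds $\textit{\textbf{W}}$ and the column-stacked data vector in the same ordering as above.
\begin{verbatim}
import numpy as np

def linear_estimates(Y, alpha, beta, gamma, delta):
    # Least squares estimates of (A, B) given the non-linear estimates,
    # using the MN x 2 regressor matrix W(alpha, beta, gamma, delta).
    M, N = Y.shape
    m = np.arange(1, M + 1)[:, None]
    n = np.arange(1, N + 1)[None, :]
    phase = (alpha*m + beta*m**2 + gamma*n + delta*n**2).ravel(order="F")
    W = np.column_stack([np.cos(phase), np.sin(phase)])
    y = Y.ravel(order="F")   # stacks columns: y(1,1),...,y(M,1),...,y(M,N)
    AB, *_ = np.linalg.lstsq(W, y, rcond=None)
    return AB                # [A_hat, B_hat]
\end{verbatim}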
\noindent We make the following assumptions on the error component and the model parameters before we examine the asymptotic properties of the proposed estimators: \begin{assumption}\label{assump:1} $X(m,n)$ is a double array sequence of i.i.d.\ random variables with mean zero, variance $\sigma^2$ and finite fourth order moment. \end{assumption}
\begin{assumption}\label{assump:2} The true parameter vector $\boldsymbol{\theta}^0$ is an interior point of the parametric space $\boldsymbol{\Theta}_1$, and ${A^0}^2 + {B^0}^2 > 0$. \end{assumption}
\subsection{Consistency}\label{sec:Estimation_one_comp_LSEs_consistency} The results obtained on the consistency of the proposed estimators are presented in the following theorems: \begin{theorem}\label{theorem:consistency_alpha_beta_one_comp_LSE} Under assumptions \ref{assump:1} and \ref{assump:2}, $\hat{\alpha}$ and $\hat{\beta}$ are strongly consistent estimators of $\alpha^0$ and $\beta^0$ respectively, that is, \begin{equation*}\begin{split} \hat{\alpha} \xrightarrow{a.s.} \alpha^0 \textmd{ as } M \rightarrow \infty.\\ \hat{\beta} \xrightarrow{a.s.} \beta^0 \textmd{ as } M \rightarrow \infty. \end{split}\end{equation*} \end{theorem} \begin{proof} See \nameref{appendix:A}.\\ \end{proof} \begin{theorem}\label{theorem:consistency_gamma_delta_one_comp_LSE} Under assumptions \ref{assump:1} and \ref{assump:2}, $\hat{\gamma}$ and $\hat{\delta}$ are strongly consistent estimators of $\gamma^0$ and $\delta^0$ respectively, that is, \begin{equation*}\begin{split} \hat{\gamma} \xrightarrow{a.s.} \gamma^0 \textmd{ as } N \rightarrow \infty.\\ \hat{\delta} \xrightarrow{a.s.} \delta^0 \textmd{ as } N \rightarrow \infty. \end{split}\end{equation*} \end{theorem} \begin{proof} This proof follows along the same lines as the proof of Theorem \ref{theorem:consistency_alpha_beta_one_comp_LSE}.\\ \end{proof} \subsection{Asymptotic distribution.}\label{sec:Estimation_one_comp_LSEs_distribution} The following theorems provide the asymptotic distributions of the proposed estimators: \begin{theorem}\label{theorem:asymp_dist_one_comp_LSE_alpha_beta} If the assumptions, \ref{assump:1} and \ref{assump:2} are satisfied, then \begin{equation*}\begin{split} \begin{bmatrix} (\hat{\alpha} - \alpha^0) & , & (\hat{\beta} - \beta^0) \end{bmatrix}\textit{\textbf{D}}_1^{-1} \xrightarrow{d} \boldsymbol{\mathcal{N}}_2(\textbf{0}, 2\sigma^2 \boldsymbol{\Sigma}) \textmd{ as } M \rightarrow \infty.\\ \end{split}\end{equation*} Here, $ \mathbf{D}_1 = \textnormal{diag}(M^{\frac{-3}{2}}N^{\frac{-1}{2}}, M^{\frac{-5}{2}}N^{\frac{-1}{2}})$ and $\boldsymbol{\Sigma}$ is as defined in \eqref{Sigma_definition}. \end{theorem}
\begin{proof} See \nameref{appendix:A}.\\ \end{proof}
\begin{theorem}\label{theorem:asymp_dist_one_comp_LSE_gamma_delta} If the assumptions, \ref{assump:1} and \ref{assump:2} are satisfied, then \begin{equation*}\begin{split} \begin{bmatrix} (\hat{\gamma} - \gamma^0) & , & (\hat{\delta} - \delta^0) \end{bmatrix}\textit{\textbf{D}}_2^{-1} \xrightarrow{d} \boldsymbol{\mathcal{N}}_2(\textbf{0}, 2\sigma^2 \boldsymbol{\Sigma}) \textmd{ as } N \rightarrow \infty.\\ \end{split}\end{equation*} Here, $ \mathbf{D}_2 = \textnormal{diag}( M^{\frac{-1}{2}}N^{\frac{-3}{2}}, M^{\frac{-1}{2}}N^{\frac{-5}{2}})$ and $\boldsymbol{\Sigma}$ is as defined in \eqref{Sigma_definition}. \end{theorem} \begin{proof} This proof follows along the same lines as the proof of Theorem \ref{theorem:asymp_dist_one_comp_LSE_alpha_beta}.\\ \end{proof}
\noindent The asymptotic distributions of $(\hat{\alpha}, \hat{\beta})$ and $(\hat{\gamma}, \hat{\delta})$ are observed to be the same as those of the corresponding LSEs. Thus, we get the same efficiency as that of the LSEs without going through the exhaustive process of actually computing the LSEs.
\section{Multiple-Component 2-D Chirp Model}\label{sec:Multiple_component_model} In this section, we consider the multiple-component 2-D chirp model with $p$ components, with the mathematical expression of the model as given in \eqref{multiple_comp_model}. Although estimation of $p$ is an important problem, in this paper we deal with the estimation of the other important parameters characterising the observed signal, the amplitudes, the frequencies and the frequency rates, assuming $p$ to be known. We propose a sequential procedure to estimate these parameters. The main idea supporting the proposed sequential procedure is the same as that behind the ones proposed by Prasad et al. \cite{2008_2} for a sinusoidal model and Lahiri et al. \cite{2015} and Grover et al. \cite{2018} for a chirp model: the orthogonality of different regressor vectors. Along with the computational efficiency, the sequential method provides estimators with the same rates of convergence as the LSEs.\\
\subsection{Proposed Sequential Algorithm}\label{sec:Estimation_multiple_comp_LSEs_method} \noindent The following algorithm is a simple extension of the method proposed to obtain the estimators for a one-component 2-D model in Section \ref{sec:Estimation_one_comp_LSEs_method}:
\begin{description} \item \namedlabel{itm:step1} {\textbf{Step 1:}}\ Compute $\hat{\alpha}_1$ and $\hat{\beta}_1$ by minimising the following function: \begin{equation*} R^{(1)}_{1, MN}(\alpha, \beta) = \sum_{n_0 = 1}^{N} \textit{\textbf{Y}}^{\top}_{n_0}(\textbf{I} - \textbf{P}_{\textbf{Z}_M}(\alpha, \beta))\textit{\textbf{Y}}_{n_0} \end{equation*} with respect to $\alpha$ and $\beta$ simultaneously. \item \namedlabel{itm:step2} {\textbf{Step 2:}}\ Compute $\hat{\gamma}_1$ and $\hat{\delta}_1$ by minimising the function: \begin{equation*} R^{(2)}_{1,MN}(\gamma, \delta) = \sum_{m_0 = 1}^{M} \textit{\textbf{Y}}^{\top}_{m_0}(\textbf{I} - \textbf{P}_{\textbf{Z}_N}(\gamma, \delta))\textit{\textbf{Y}}_{m_0} \end{equation*} with respect to $\gamma$ and $\delta$ simultaneously. \item \namedlabel{itm:step3} {\textbf{Step 3:}}\ Once the nonlinear parameters of the first component of the model are estimated, estimate the linear parameters $A_1^0$ and $B_1^0$ by the usual least squares estimation technique: \begin{equation*} \begin{bmatrix} \hat{A}_1 \\ \hat{B}_1 \end{bmatrix} = [\textit{\textbf{W}}(\hat{\alpha}_1, \hat{\beta}_1, \hat{\gamma}_1, \hat{\delta}_1)^{T} \textit{\textbf{W}}(\hat{\alpha}_1, \hat{\beta}_1, \hat{\gamma}_1, \hat{\delta}_1)]^{-1}\textit{\textbf{W}}(\hat{\alpha}_1, \hat{\beta}_1, \hat{\gamma}_1, \hat{\delta}_1)^{T}\textit{\textbf{Y}}. \end{equation*} Here, $\textit{\textbf{Y}}_{M N \times 1} = \left[\begin{array}{ccccccc}y(1, 1) & \ldots & y(M, 1) & \ldots & y(1, N) & \ldots & y(M, N)\end{array}\right]^{T}$ is the observed data vector, and the matrix $\textit{\textbf{W}}(\hat{\alpha}_1, \hat{\beta}_1, \hat{\gamma}_1, \hat{\delta}_1)$ can be obtained by replacing $\alpha$, $\beta$, $\gamma$ and $\delta$ by $\hat{\alpha}_1$, $\hat{\beta}_1$, $\hat{\gamma}_1$ and $\hat{\delta}_1$ respectively in \eqref{W_matrix}. \item \namedlabel{itm:step4} {\textbf{Step 4:}}\ Eliminate the effect of the first component from the original data and construct new data as follows: \begin{equation}\begin{split}\label{second_stage_data} y_1(m,n) = y(m,n) - \hat{A}_1 \cos(\hat{\alpha}_1 m + \hat{\beta}_1 m^2 + \hat{\gamma}_1 n + \hat{\delta}_1 n^2) - \hat{B}_1 \sin(\hat{\alpha}_1 m + \hat{\beta}_1 m^2 + \hat{\gamma}_1 n + \hat{\delta}_1 n^2); \\ m = 1, \ldots, M;\ n = 1, \ldots, N. \end{split}\end{equation}
\item \namedlabel{itm:step5} {\textbf{Step 5:}}\ Using the new data, estimate the parameters of the second component by following the same procedure. \item \namedlabel{itm:step6} {\textbf{Step 6:}}\ Continue this process until all the parameters are estimated. \\ \end{description}
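\noindent For completeness, the sketch below (Python/NumPy; all helper names and the coarse grid are our own simplifications, not prescriptions of the paper) strings Steps 1--6 together for a $p$-component fit. In practice the grid over $(0,\pi)\times(0,\pi)$ must be much finer, so that the 2-D search for $(\alpha, \beta)$ involves roughly $M^3$ points as discussed in the Introduction, and should be followed by a local refinement; the coarse grid used here is only meant to show the flow of the algorithm.
\begin{verbatim}
import numpy as np

def chirp_matrix(length, a, b):
    # Columns cos(a*t + b*t^2) and sin(a*t + b*t^2), t = 1, ..., length.
    t = np.arange(1, length + 1)
    ph = a*t + b*t**2
    return np.column_stack([np.cos(ph), np.sin(ph)])

def pooled_criterion(Y, a, b):
    # Sum over the columns of Y of the reduced error sum of squares.
    Z = chirp_matrix(Y.shape[0], a, b)
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return np.sum((Y - Z @ coef)**2)

def argmin_on_grid(Y, grid_a, grid_b):
    vals = [(pooled_criterion(Y, a, b), a, b) for a in grid_a for b in grid_b]
    _, a, b = min(vals)
    return a, b

def sequential_fit(Y, p, n_grid=60):
    # Coarse-grid illustration of the sequential procedure (Steps 1-6).
    M, N = Y.shape
    grid = np.linspace(1e-3, np.pi - 1e-3, n_grid)
    m = np.arange(1, M + 1)[:, None]
    n = np.arange(1, N + 1)[None, :]
    estimates, resid = [], Y.astype(float)
    for _ in range(p):                                  # Steps 5 and 6: repeat
        a, b = argmin_on_grid(resid, grid, grid)        # Step 1: alpha, beta
        g, d = argmin_on_grid(resid.T, grid, grid)      # Step 2: gamma, delta
        ph = a*m + b*m**2 + g*n + d*n**2
        W = np.column_stack([np.cos(ph).ravel(order="F"),
                             np.sin(ph).ravel(order="F")])
        AB, *_ = np.linalg.lstsq(W, resid.ravel(order="F"), rcond=None)  # Step 3
        A_hat, B_hat = AB
        resid = resid - (A_hat*np.cos(ph) + B_hat*np.sin(ph))            # Step 4
        estimates.append((A_hat, B_hat, a, b, g, d))
    return estimates
\end{verbatim}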
\noindent In the following subsections, we examine the asymptotic properties of the proposed estimators under the assumptions \ref{assump:1}, \ref{assump:P4} and the following assumption on the parameters:
\begin{assumption}\label{assump:3} $\boldsymbol{\theta}_{k}^{0}$ is an interior point of $\boldsymbol{\Theta}_1$, for all $k = 1, \ldots, p$ and the frequencies $\alpha_{k}^0$s, $\gamma_{k}^0$s and the frequency rates $\beta_{k}^0$s, $\delta_{k}^0$s are such that $(\alpha_i^0, \beta_i^0) \neq (\alpha_j^0, \beta_j^0)$ and $(\gamma_i^0, \delta_i^0)$ $\neq$ $(\gamma_j^0, \delta_j^0)$ $\forall i \neq j$. \end{assumption}
\subsection{Consistency.}\label{sec:Estimation_multiple_comp_LSEs_consistency} Through the following theorems, we establish the consistency of the proposed sequential estimators; the last of them also describes the behaviour of the amplitude estimators when the number of fitted components differs from the true number of components $p$.
\begin{theorem}\label{theorem:consistency_alphak_betak_multiple_comp_LSE} If assumptions \ref{assump:1}, \ref{assump:3} and \ref{assump:P4} are satisfied, then the following results hold true for $ 1 \leqslant k \leqslant p$: \begin{equation*}\begin{split} & \hat{\alpha}_k \xrightarrow{a.s.} \alpha_k^0 \textmd{ as } M \rightarrow \infty, \\ & \hat{\beta}_k \xrightarrow{a.s.} \beta_k^0 \textmd{ as } M \rightarrow \infty. \end{split}\end{equation*} \end{theorem} \begin{proof} See \nameref{appendix:B}.\\ \end{proof} \begin{theorem}\label{theorem:consistency_gammak_deltak_multiple_comp_LSE} If assumptions \ref{assump:1}, \ref{assump:3} and \ref{assump:P4} are satisfied, then the following results hold true for $ 1 \leqslant k \leqslant p$: \begin{equation*}\begin{split} & \hat{\gamma}_k \xrightarrow{a.s.} \gamma_k^0 \textmd{ as } N \rightarrow \infty, \\ & \hat{\delta}_k \xrightarrow{a.s.} \delta_k^0 \textmd{ as } N \rightarrow \infty. \end{split}\end{equation*} \end{theorem} \begin{proof} This proof can be obtained along the same lines as proof of \textnormal{Theorem \ref{theorem:consistency_alphak_betak_multiple_comp_LSE}}. \\ \end{proof}
\begin{theorem}\label{theorem:limit_A_p+1_B_p+1} If the assumptions \ref{assump:1}, \ref{assump:3} and \ref{assump:P4} are satisfied, and if $\hat{A_k}$, $\hat{B_k}$, $\hat{\alpha_k}$, $\hat{\beta_k}$, $\hat{\gamma_k}$ and $\hat{\delta_k}$ are the estimators obtained at the $k$-th step, then \\ for $k \leqslant p$, \begin{equation*}\begin{split} & \hat{A_k} \xrightarrow{ a.s } A_k^0 \textmd{ as } \textnormal{min}\{M, N\} \rightarrow \infty \\ & \hat{B_k} \xrightarrow{ a.s } B_k^0 \textmd{ as } \textnormal{min}\{M, N\} \rightarrow \infty, \end{split}\end{equation*} and for $k > p$, \begin{equation*}\begin{split} & \hat{A_k} \xrightarrow{ a.s } 0 \textmd{ as } \textnormal{min}\{M, N\} \rightarrow \infty \\ & \hat{B_k} \xrightarrow{ a.s } 0 \textmd{ as } \textnormal{min}\{M, N\} \rightarrow \infty. \end{split}\end{equation*} \end{theorem} \begin{proof} This proof follows from the proof of Theorem 2.4.4 of Lahiri \cite{2015}.\\ \end{proof}
\noindent Note that we do not know the number of components in practice. The problem of estimation of $p$ is an important problem, though we have not considered it here. From the above theorem, it is clear that if the number of components of the fitted model is less than or equal to the true number of components $p$, then the amplitude estimators converge to their true values almost surely; if it is more than $p$, then the amplitude estimators up to the $p$-th step converge to the true values and, past that, they converge to zero almost surely. Thus, this result can be used as a criterion to estimate the number $p$. However, this might not work in low signal to noise ratio scenarios. \subsection{Asymptotic distribution.}\label{sec:Estimation_multiple_comp_LSEs_distribution} \begin{theorem}\label{theorem:asymp_dist_multiple_comp_LSE_alphak_betak} If assumptions \ref{assump:1}, \ref{assump:3} and \ref{assump:P4} are satisfied, then for $1 \leqslant k \leqslant p:$ \begin{equation*}\begin{split} & \begin{bmatrix} (\hat{\alpha}_k - \alpha_k^0) & , & (\hat{\beta}_k - \beta_k^0) \end{bmatrix}\textit{\textbf{D}}_1^{-1} \xrightarrow{d} \boldsymbol{\mathcal{N}}_2(\textbf{0}, 2\sigma^2 \boldsymbol{\Sigma}_{k}) \textmd{ as } M \rightarrow \infty.\\ \end{split}\end{equation*} Here $\textit{\textbf{D}}_1$ is as defined in Theorem \ref{theorem:asymp_dist_one_comp_LSE_alpha_beta} and $\boldsymbol{\Sigma}_{k}$ is as defined in \eqref{Sigma_k_definition}. \end{theorem} \begin{proof} See \nameref{appendix:B}. \\ \end{proof}
\begin{theorem}\label{theorem:asymp_dist_multiple_comp_LSE_gammak_deltak} If the assumptions, \ref{assump:1}, \ref{assump:3} and \ref{assump:P4} are satisfied, then \begin{equation*}\begin{split} \begin{bmatrix} (\hat{\gamma}_k - \gamma_k^0) & , & (\hat{\delta}_k - \delta_k^0) \end{bmatrix}\textit{\textbf{D}}_2^{-1} \xrightarrow{d} \boldsymbol{\mathcal{N}}_2(\textbf{0}, 2\sigma^2 \boldsymbol{\Sigma}_k) \textmd{ as } N \rightarrow \infty.\\ \end{split}\end{equation*} Here $\textit{\textbf{D}}_2$ is as defined in Theorem \ref{theorem:asymp_dist_one_comp_LSE_gamma_delta} and $\boldsymbol{\Sigma}_{k}$ is as defined in \eqref{Sigma_k_definition}. \end{theorem}
\begin{proof} This proof follows along the same lines as the proof of Theorem \ref{theorem:asymp_dist_multiple_comp_LSE_alphak_betak}.\\ \end{proof}
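\noindent Before turning to the numerical experiments, we record how the asymptotic variances reported in the tables of the next section follow from Theorem \ref{theorem:asymp_dist_one_comp_LSE_alpha_beta}: for the one-component model, ${\rm Var}(\hat{\alpha})\approx 2\sigma^2\,\boldsymbol{\Sigma}_{11}/(M^3N)$ and ${\rm Var}(\hat{\beta})\approx 2\sigma^2\,\boldsymbol{\Sigma}_{22}/(M^5N)$, where $\boldsymbol{\Sigma}_{11}$ and $\boldsymbol{\Sigma}_{22}$ are the diagonal entries of $\boldsymbol{\Sigma}$ in \eqref{Sigma_definition}. The worked check below (Python/NumPy; ours) reproduces the Avar entries for $\alpha$ and $\beta$ in Table \ref{table:1} for Case I (defined below) with $\sigma = 0.1$ and $M = N = 25$; by Theorem \ref{theorem:asymp_dist_one_comp_LSE_gamma_delta}, the entries for $\gamma$ and $\delta$ are obtained by interchanging the roles of $M$ and $N$, which gives identical values when $M = N$.
\begin{verbatim}
import numpy as np

# Var(alpha_hat) ~ 2*sigma^2 * Sigma[0,0] / (M^3 * N),
# Var(beta_hat)  ~ 2*sigma^2 * Sigma[1,1] / (M^5 * N),
# with Sigma = (2/(A^2 + B^2)) * [[96, -90], [-90, 90]].
A, B, sigma, M, N = 2.0, 3.0, 0.1, 25, 25
Sigma = (2.0/(A**2 + B**2)) * np.array([[96.0, -90.0], [-90.0, 90.0]])
avar_alpha = 2*sigma**2 * Sigma[0, 0] / (M**3 * N)
avar_beta  = 2*sigma**2 * Sigma[1, 1] / (M**5 * N)
print(avar_alpha, avar_beta)   # approximately 7.56e-07 and 1.13e-09
\end{verbatim}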
\section{Numerical Experiments and Simulated Data Analysis}\label{sec:Simulations} \subsection{Numerical Experiments}\label{sec:Simulation_studies} We perform simulations to examine the performance of the proposed estimators. We consider the following two cases: \begin{description} \item \namedlabel{itm:case1} {\textbf{Case I:}}\ When the data are generated from a one-component model \eqref{one_comp_model}, with the following set of parameters: \\ $A^0 = 2$, $B^0 = 3$, $\alpha^0 = 1.5$, $\beta^0 = 0.5$, $\gamma^0 = 2.5$ and $\delta^0 = 0.75$. \label{case_1_simulations} \item \namedlabel{itm:case2} {\textbf{Case II:}}\ When the data are generated from a two-component model \eqref{multiple_comp_model}, with the following set of parameters:\\
$A_1^0 = 5$, $B_1^0 = 4$, $\alpha_1^0 = 2.1$, $\beta_1^0 = 0.1$, $\gamma_1^0 = 1.25$ and $\delta_1^0 = 0.25$, $A_2^0 = 3$, $B_2^0 = 2$, $\alpha_2^0 = 1.5$, $\beta_2^0 = 0.5$, $\gamma_2^0 = 1.75$ and $\delta_2^0 = 0.75$. \label{case_2_simulations} \end{description} The noise used in the simulations is generated from a Gaussian distribution with mean 0 and variance $\sigma^2$. Different values of the error variance $\sigma^2$ and of the sample sizes $M$ and $N$ are considered. We estimate the parameters using the proposed estimation technique as well as the least squares estimation technique for \textbf{Case I}; for \textbf{Case II}, the proposed sequential technique and the sequential least squares technique proposed by Lahiri \cite{2015} are employed for comparison. For each case, the procedure is replicated 1000 times and the average values of the estimates, the average biases and the mean square errors (MSEs) are reported. The comparison of the MSEs with the theoretical asymptotic variances (Avar) exhibits the efficacy of the proposed estimation method. \subsubsection{One-component simulation results}\label{sec:one_comp_simulations} In Tables \ref{table:1}-\ref{table:4}, the results obtained through simulations for \textbf{Case I} are presented. It is observed that as $M$ and $N$ increase, the average estimates get closer to the true values, the average biases decrease and the MSEs decrease as well, thus verifying the consistency of the proposed estimates. Also, the biases and the MSEs of both types of estimates increase as the error variance increases. The MSEs of the proposed estimators are of the same order as those of the LSEs and thus are well-matched with the corresponding asymptotic variances. \begin{table}[] \centering
\resizebox{0.75\textwidth}{!}{\begin{tabular}{cc|cccc|cccc} \hline
\multicolumn{2}{c|}{Parameters} & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ \\
\multicolumn{2}{c|}{True values} & 1.5 & 0.5 & 2.5 & 0.75 & 1.5 & 0.5 & 2.5 & 0.75 \\ \hline
$\sigma$ & & \multicolumn{4}{c|}{Proposed estimators} & \multicolumn{4}{c}{Usual LSEs} \\ \hline 0.10 & Avg & 1.5000 & 0.5000 & 2.5000 & 0.7500 & 1.5000 & 0.5000 & 2.5000 & 0.7500 \\
& Bias & 3.26e-05 & -1.23e-06 & 2.37e-05 & -1.26e-06 & 2.86e-05 & -1.07e-06 & 2.13e-05 & -1.17e-06 \\
& MSE & 9.01e-07 & 1.21e-09 & 8.34e-07 & 1.14e-09 & 8.75e-07 & 1.19e-09 & 8.02e-07 & 1.10e-09 \\
& Avar & 7.56e-07 & 1.13e-09 & 7.56e-07 & 1.13e-09 & 7.56e-07 & 1.13e-09 & 7.56e-07 & 1.13e-09 \\ \hline 0.50 & Avg & 1.4997 & 0.5000 & 2.4999 & 0.7500 & 1.4998 & 0.5000 & 2.5000 & 0.7500 \\
& Bias & -2.78e-04 & 1.05e-05 & -6.85e-05 & 3.20e-06 & -2.01e-04 & 7.90e-06 & -7.23e-06 & 8.39e-07 \\
& MSE & 2.37e-05 & 3.18e-08 & 2.17e-05 & 3.11e-08 & 2.17e-05 & 2.97e-08 & 2.07e-05 & 2.95e-08 \\
& Avar & 1.89e-05 & 2.84e-08 & 1.89e-05 & 2.84e-08 & 1.89e-05 & 2.84e-08 & 1.89e-05 & 2.84e-08 \\ \hline 1.00 & Avg & 1.5004 & 0.5000 & 2.4998 & 0.7500 & 1.5004 & 0.5000 & 2.4998 & 0.7500 \\
& Bias & 4.11e-04 & -1.77e-05 & -2.24e-04 & 8.00e-06 & 3.63e-04 & -1.60e-05 & -2.16e-04 & 7.57e-06 \\
& MSE & 9.54e-05 & 1.22e-07 & 8.92e-05 & 1.25e-07 & 8.92e-05 & 1.17e-07 & 8.48e-05 & 1.18e-07 \\
& Avar & 7.56e-05 & 1.13e-07 & 7.56e-05 & 1.13e-07 & 7.56e-05 & 1.13e-07 & 7.56e-05 & 1.13e-07 \\ \hline \end{tabular}} \caption{Estimates of the parameters of model \eqref{one_comp_model} when M = N = 25} \label{table:1} \end{table}
\begin{table}[] \centering
\resizebox{0.75\textwidth}{!}{\begin{tabular}{cc|cccc|cccc} \hline
\multicolumn{2}{c|}{Parameters} & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ \\
\multicolumn{2}{c|}{True values} & 1.5 & 0.5 & 2.5 & 0.75 & 1.5 & 0.5 & 2.5 & 0.75 \\ \hline
$\sigma$ & & \multicolumn{4}{c|}{Proposed estimators} & \multicolumn{4}{c}{Usual LSEs} \\ \hline 0.10 & Avg & 1.5000 & 0.5000 & 2.5000 & 0.7500 & 1.5000 & 0.5000 & 2.5000 & 0.7500 \\
& Bias & 5.53e-06 & -1.25e-07 & 2.33e-06 & -4.65e-08 & 2.51e-06 & -7.96e-08 & -1.89e-06 & 2.57e-08 \\
& MSE & 4.88e-08 & 1.83e-11 & 5.09e-08 & 1.88e-11 & 4.14e-08 & 1.54e-11 & 4.65e-08 & 1.72e-11 \\
& Avar & 4.73e-08 & 1.77e-11 & 4.73e-08 & 1.77e-11 & 4.73e-08 & 1.77e-11 & 4.73e-08 & 1.77e-11 \\ \hline 0.50 & Avg & 1.5000 & 0.5000 & 2.5000 & 0.7500 & 1.5000 & 0.5000 & 2.5000 & 0.7500 \\
& Bias & -3.57e-05 & 4.91e-07 & -4.47e-05 & 7.83e-07 & 2.56e-06 & -2.61e-07 & -4.19e-05 & 6.84e-07 \\
& MSE & 1.35e-06 & 4.93e-10 & 1.31e-06 & 4.78e-10 & 1.16e-06 & 4.18e-10 & 1.18e-06 & 4.34e-10 \\
& Avar & 1.18e-06 & 4.43e-10 & 1.18e-06 & 4.43e-10 & 1.18e-06 & 4.43e-10 & 1.18e-06 & 4.43e-10 \\ \hline 1.00 & Avg & 1.5000 & 0.5000 & 2.5000 & 0.7500 & 1.5000 & 0.5000 & 2.5000 & 0.7500 \\
& Bias & 2.11e-05 & -2.41e-07 & -2.42e-05 & 2.35e-07 & 5.55e-06 & -1.92e-09 & -2.37e-05 & 2.45e-07 \\
& MSE & 5.36e-06 & 1.92e-09 & 5.03e-06 & 1.77e-09 & 4.38e-06 & 1.56e-09 & 4.53e-06 & 1.60e-09 \\
& Avar & 4.73e-06 & 1.77e-09 & 4.73e-06 & 1.77e-09 & 4.73e-06 & 1.77e-09 & 4.73e-06 & 1.77e-09 \\ \hline \end{tabular}} \caption{Estimates of the parameters of model \eqref{one_comp_model} when M = N = 50} \label{table:2} \end{table}
\begin{table}[] \centering
\resizebox{0.75\textwidth}{!}{\begin{tabular}{cc|cccc|cccc} \hline
\multicolumn{2}{c|}{Parameters} & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ \\
\multicolumn{2}{c|}{True values} & 1.5 & 0.5 & 2.5 & 0.75 & 1.5 & 0.5 & 2.5 & 0.75 \\ \hline
$\sigma$ & & \multicolumn{4}{c|}{Proposed estimators} & \multicolumn{4}{c}{Usual LSEs} \\ \hline 0.10 & Avg & 1.5000 & 0.5000 & 2.5000 & 0.7500 & 1.5000 & 0.5000 & 2.5000 & 0.7500 \\
& Bias & -5.61e-06 & 5.15e-08 & 9.53e-07 & 7.95e-10 & -5.73e-06 & 5.26e-08 & -2.45e-07 & 1.47e-08 \\
& MSE & 9.77e-09 & 1.60e-12 & 1.02e-08 & 1.65e-12 & 9.48e-09 & 1.56e-12 & 9.10e-09 & 1.48e-12 \\
& Avar & 9.34e-09 & 1.56e-12 & 9.34e-09 & 1.56e-12 & 9.34e-09 & 1.56e-12 & 9.34e-09 & 1.56e-12 \\ \hline 0.50 & Avg & 1.5000 & 0.5000 & 2.5000 & 0.7500 & 1.5000 & 0.5000 & 2.5000 & 0.7500 \\
& Bias & -3.55e-05 & 3.80e-07 & 1.73e-06 & -1.05e-07 & -3.45e-05 & 3.63e-07 & 4.93e-06 & -1.46e-07 \\
& MSE & 2.45e-07 & 4.00e-11 & 2.39e-07 & 3.86e-11 & 2.01e-07 & 3.29e-11 & 1.79e-07 & 2.96e-11 \\
& Avar & 2.33e-07 & 3.89e-11 & 2.33e-07 & 3.89e-11 & 2.33e-07 & 3.89e-11 & 2.33e-07 & 3.89e-11 \\ \hline 1.00 & Avg & 1.5000 & 0.5000 & 2.5000 & 0.7500 & 1.5000 & 0.5000 & 2.5000 & 0.7500 \\
& Bias & -4.93e-06 & 7.23e-08 & 4.93e-05 & -6.60e-07 & -1.67e-05 & 2.28e-07 & 2.78e-05 & -3.89e-07 \\
& MSE & 1.01e-06 & 1.67e-10 & 1.06e-06 & 1.74e-10 & 8.29e-07 & 1.39e-10 & 7.77e-07 & 1.31e-10 \\
& Avar & 9.34e-07 & 1.56e-10 & 9.34e-07 & 1.56e-10 & 9.34e-07 & 1.56e-10 & 9.34e-07 & 1.56e-10 \\ \hline \end{tabular}} \caption{Estimates of the parameters of model \eqref{one_comp_model} when M = N = 75} \label{table:3} \end{table} \begin{table}[] \centering
\resizebox{0.75\textwidth}{!}{\begin{tabular}{cc|cccc|cccc} \hline
\multicolumn{2}{c|}{Parameters} & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ \\
\multicolumn{2}{c|}{True values} & 1.5 & 0.5 & 2.5 & 0.75 & 1.5 & 0.5 & 2.5 & 0.75 \\ \hline
$\sigma$ & & \multicolumn{4}{c|}{Proposed estimators} & \multicolumn{4}{c}{Usual LSEs} \\ \hline 0.10 & Avg & 1.5000 & 0.5000 & 2.5000 & 0.7500 & 1.5000 & 0.5000 & 2.5000 & 0.7500 \\
& Bias & -5.76e-07 & -5.65e-10 & 6.02e-07 & 2.59e-09 & -6.66e-07 & -1.33e-10 & 4.29e-07 & 3.89e-09 \\
& MSE & 3.23e-09 & 2.92e-13 & 3.00e-09 & 2.85e-13 & 2.47e-09 & 2.28e-13 & 2.87e-09 & 2.74e-13 \\
& Avar & 2.95e-09 & 2.77e-13 & 2.95e-09 & 2.77e-13 & 2.95e-09 & 2.77e-13 & 2.95e-09 & 2.77e-13 \\ \hline 0.50 & Avg & 1.5000 & 0.5000 & 2.5000 & 0.7500 & 1.5000 & 0.5000 & 2.5000 & 0.7500 \\
& Bias & -5.41e-06 & 5.31e-08 & 1.12e-05 & -1.10e-07 & -1.07e-06 & 1.56e-08 & 1.38e-05 & -1.34e-07 \\
& MSE & 8.11e-08 & 7.28e-12 & 7.52e-08 & 6.83e-12 & 5.41e-08 & 5.03e-12 & 5.54e-08 & 5.18e-12 \\
& Avar & 7.38e-08 & 6.92e-12 & 7.38e-08 & 6.92e-12 & 7.38e-08 & 6.92e-12 & 7.38e-08 & 6.92e-12 \\ \hline 1.00 & Avg & 1.5000 & 0.5000 & 2.5000 & 0.7500 & 1.5000 & 0.5000 & 2.5000 & 0.7500 \\
& Bias & -1.98e-05 & 1.63e-07 & 1.43e-05 & -8.73e-08 & -8.54e-06 & 6.29e-08 & 1.12e-05 & -5.96e-08 \\
& MSE & 2.83e-07 & 2.56e-11 & 2.96e-07 & 2.75e-11 & 1.91e-07 & 1.77e-11 & 2.07e-07 & 1.97e-11 \\
& Avar & 2.95e-07 & 2.77e-11 & 2.95e-07 & 2.77e-11 & 2.95e-07 & 2.77e-11 & 2.95e-07 & 2.77e-11 \\ \hline \end{tabular}} \caption{Estimates of the parameters of model \eqref{one_comp_model} when M = N = 100} \label{table:4} \end{table}
\subsubsection{Two-component simulation results}\label{sec:two_comp_simulations} We present the simulation results for \textbf{Case II} in Tables \ref{table:5}-\ref{table:8}. From these tables, it is evident that the average estimates are quite close to the true values. The results also verify the consistency of the proposed sequential estimators. It is also observed that the MSEs of the parameter estimates of the first component are mostly of the same order as the corresponding theoretical variances, while those of the second component have exactly the same order as the corresponding asymptotic variances. \begin{table}[] \centering
\resizebox{\textwidth}{!}{\begin{tabular}{c|c|c|cccc|cccc} \hline
$\sigma$ & & & \multicolumn{4}{c|}{Proposed sequential estimates} & \multicolumn{4}{c}{Sequential LSEs} \\ \hline 0.10 & First Component & Parameters & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ \\
& & True values & 2.1 & 0.1 & 1.25 & 0.25 & 2.1 & 0.1 & 1.25 & 0.25 \\ \hline
& & Average & 2.1016 & 0.0998 & 1.2614 & 0.2500 & 2.1031 & 0.0998 & 1.2565 & 0.2500 \\
& & Bias & 1.63e-03 & -1.81e-04 & 1.14e-02 & -4.71e-05 & 3.05e-03 & -1.76e-04 & 6.46e-03 & 3.94e-05 \\
& & MSE & 2.92e-06 & 3.30e-08 & 1.31e-04 & 2.60e-09 & 9.59e-06 & 3.12e-08 & 4.20e-05 & 1.91e-09 \\
& & AVar & 2.40e-07 & 3.60e-10 & 2.40e-07 & 3.60e-10 & 2.40e-07 & 3.60e-10 & 2.40e-07 & 3.60e-10 \\ \hline
& Second Component & Parameters & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ \\ \hline
& & True values & 1.5 & 0.5 & 1.75 & 0.75 & 1.5 & 0.5 & 1.75 & 0.75 \\ \hline
& & Average & 1.5018 & 0.5000 & 1.7520 & 0.7499 & 1.5017 & 0.5000 & 1.7510 & 0.7500 \\
& & Bias & 1.83e-03 & -1.92e-05 & 1.98e-03 & -6.41e-05 & 1.68e-03 & -2.19e-05 & 1.03e-03 & -2.95e-05 \\
& & MSE & 4.19e-06 & 1.54e-09 & 4.96e-06 & 5.53e-09 & 3.61e-06 & 1.59e-09 & 1.97e-06 & 2.12e-09 \\
& & AVar & 7.56e-07 & 1.13e-09 & 7.56e-07 & 1.13e-09 & 7.56e-07 & 1.13e-09 & 7.56e-07 & 1.13e-09 \\ \hline 0.50 & First Component & Parameters & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ \\
& & True values & 2.1 & 0.1 & 1.25 & 0.25 & 2.1 & 0.1 & 1.25 & 0.25 \\ \hline
& & Average & 2.1017 & 0.0998 & 1.2613 & 0.2500 & 2.1031 & 0.0998 & 1.2563 & 0.2500 \\
& & Bias & 1.71e-03 & -1.83e-04 & 1.13e-02 & -4.38e-05 & 3.13e-03 & -1.78e-04 & 6.32e-03 & 4.34e-05 \\
& & MSE & 8.92e-06 & 4.14e-08 & 1.35e-04 & 1.18e-08 & 1.60e-05 & 3.98e-08 & 4.61e-05 & 1.07e-08 \\
& & AVar & 5.99e-06 & 8.99e-09 & 5.99e-06 & 8.99e-09 & 5.99e-06 & 8.99e-09 & 5.99e-06 & 8.99e-09 \\ \hline
& Second Component & Parameters & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ \\
& & True values & 1.5 & 0.5 & 1.75 & 0.75 & 1.5 & 0.5 & 1.75 & 0.75 \\ \hline
& & Average & 1.5017 & 0.5000 & 1.7522 & 0.7499 & 1.5015 & 0.5000 & 1.7512 & 0.7500 \\
& & Bias & 1.69e-03 & -1.05e-05 & 2.16e-03 & -6.94e-05 & 1.51e-03 & -1.25e-05 & 1.17e-03 & -3.35e-05 \\
& & MSE & 2.54e-05 & 3.37e-08 & 3.15e-05 & 4.22e-08 & 2.35e-05 & 3.18e-08 & 2.40e-05 & 3.31e-08 \\
& & AVar & 1.89e-05 & 2.84e-08 & 1.89e-05 & 2.84e-08 & 1.89e-05 & 2.84e-08 & 1.89e-05 & 2.84e-08 \\ \hline 1.00 & First Component & Parameters & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ \\
& & True values & 2.1 & 0.1 & 1.25 & 0.25 & 2.1 & 0.1 & 1.25 & 0.25 \\ \hline
& & Average & 2.1015 & 0.0998 & 1.2616 & 0.2499 & 2.1029 & 0.0998 & 1.2567 & 0.2500 \\
& & Bias & 1.54e-03 & -1.79e-04 & 1.16e-02 & -5.63e-05 & 2.93e-03 & -1.72e-04 & 6.67e-03 & 3.07e-05 \\
& & MSE & 2.98e-05 & 6.77e-08 & 1.65e-04 & 4.44e-08 & 3.72e-05 & 6.70e-08 & 6.97e-05 & 3.66e-08 \\
& & AVar & 2.40e-05 & 3.60e-08 & 2.40e-05 & 3.60e-08 & 2.40e-05 & 3.60e-08 & 2.40e-05 & 3.60e-08 \\ \hline
& Second Component & Parameters & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ \\
& & True values & 1.5 & 0.5 & 1.75 & 0.75 & 1.5 & 0.5 & 1.75 & 0.75 \\ \hline
& & Average & 1.5018 & 0.5000 & 1.7516 & 0.7499 & 1.5018 & 0.5000 & 1.7507 & 0.7500 \\
& & Bias & 1.79e-03 & -1.57e-05 & 1.64e-03 & -5.47e-05 & 1.75e-03 & -2.25e-05 & 7.18e-04 & -2.07e-05 \\
& & MSE & 9.84e-05 & 1.41e-07 & 1.20e-04 & 1.66e-07 & 9.40e-05 & 1.35e-07 & 1.01e-04 & 1.40e-07 \\
& & AVar & 7.56e-05 & 1.13e-07 & 7.56e-05 & 1.13e-07 & 7.56e-05 & 1.13e-07 & 7.56e-05 & 1.13e-07 \\ \hline \end{tabular}} \caption{Estimates of the parameters of model \eqref{multiple_comp_model} when M = N = 25} \label{table:5} \end{table}
\begin{table}[] \centering
\resizebox{\textwidth}{!}{\begin{tabular}{c|c|c|cccc|cccc} \hline
$\sigma$ & & & \multicolumn{4}{c|}{Proposed sequential estimates} & \multicolumn{4}{c}{Sequential LSEs} \\ \hline 0.10 & First Component & Parameters & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ \\
& & True values & 2.1 & 0.1 & 1.25 & 0.25 & 2.1 & 0.1 & 1.25 & 0.25 \\ \hline
& & Average & 2.1006 & 0.1000 & 1.2567 & 0.2499 & 2.1011 & 0.1000 & 1.2572 & 0.2499 \\
& & Bias & 6.07e-04 & -8.61e-06 & 6.70e-03 & -1.11e-04 & 1.07e-03 & -1.11e-05 & 7.20e-03 & -1.22e-04 \\
& & MSE & 3.85e-07 & 8.02e-11 & 4.49e-05 & 1.23e-08 & 1.15e-06 & 1.29e-10 & 5.18e-05 & 1.49e-08 \\
& & AVar & 1.50e-08 & 5.62e-12 & 1.50e-08 & 5.62e-12 & 1.50e-08 & 5.62e-12 & 1.50e-08 & 5.62e-12 \\ \hline
& Second Component & Parameters & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ \\
& & True values & 1.5 & 0.5 & 1.75 & 0.75 & 1.5 & 0.5 & 1.75 & 0.75 \\ \hline
& & Average & 1.5006 & 0.5000 & 1.7506 & 0.7500 & 1.5008 & 0.5000 & 1.7507 & 0.7500 \\
& & Bias & 6.09e-04 & -1.05e-05 & 5.56e-04 & -9.98e-06 & 7.51e-04 & -1.36e-05 & 6.83e-04 & -1.20e-05 \\
& & MSE & 4.23e-07 & 1.30e-10 & 3.60e-07 & 1.18e-10 & 6.13e-07 & 2.03e-10 & 5.17e-07 & 1.61e-10 \\
& & AVar & 4.73e-08 & 1.77e-11 & 4.73e-08 & 1.77e-11 & 4.73e-08 & 1.77e-11 & 4.73e-08 & 1.77e-11 \\ \hline 0.50 & First Component & Parameters & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ \\
& & True values & 2.1 & 0.1 & 1.25 & 0.25 & 2.1 & 0.1 & 1.25 & 0.25 \\ \hline
& & Average & 2.1006 & 0.1000 & 1.2567 & 0.2499 & 2.1011 & 0.1000 & 1.2572 & 0.2499 \\
& & Bias & 6.27e-04 & -9.05e-06 & 6.72e-03 & -1.11e-04 & 1.09e-03 & -1.15e-05 & 7.22e-03 & -1.22e-04\\
& & MSE & 8.09e-07 & 2.36e-10 & 4.56e-05 & 1.25e-08 & 1.58e-06 & 2.78e-10 & 5.25e-05 & 1.51e-08 \\
& & AVar & 3.75e-07 & 1.40e-10 & 3.75e-07 & 1.40e-10 & 3.75e-07 & 1.40e-10 & 3.75e-07 & 1.40e-10 \\ \hline
& Second Component & Parameters & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ \\
& & True values & 1.5 & 0.5 & 1.75 & 0.75 & 1.5 & 0.5 & 1.75 & 0.75 \\ \hline
& & Average & 1.5006 & 0.5000 & 1.7505 & 0.7500 & 1.5008 & 0.5000 & 1.7506 & 0.7500 \\
& & Bias & 6.09e-04 & -1.07e-05 & 5.04e-04 & -8.84e-06 & 7.52e-04 & -1.38e-05 & 6.26e-04 & -1.07e-05\\
& & MSE & 1.68e-06 & 5.94e-10 & 1.50e-06 & 5.49e-10 & 1.79e-06 & 6.29e-10 & 1.62e-06 & 5.72e-10 \\
& & AVar & 1.18e-06 & 4.43e-10 & 1.18e-06 & 4.43e-10 & 1.18e-06 & 4.43e-10 & 1.18e-06 & 4.43e-10 \\ \hline 1.00 & First Component & Parameters & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ \\
& & True values & 2.1 & 0.1 & 1.25 & 0.25 & 2.1 & 0.1 & 1.25 & 0.25 \\ \hline
& & Average & 2.1006 & 0.1000 & 1.2567 & 0.2499 & 2.1011 & 0.1000 & 1.2572 & 0.2499 \\
& & Bias & 5.98e-04 & -8.36e-06 & 6.70e-03 & -1.11e-04 & 1.06e-03 & -1.09e-05 & 7.20e-03 & -1.22e-04\\
& & MSE & 1.92e-06 & 6.44e-10 & 4.66e-05 & 1.29e-08 & 2.64e-06 & 6.72e-10 & 5.34e-05 & 1.55e-08 \\
& & AVar & 1.50e-06 & 5.62e-10 & 1.50e-06 & 5.62e-10 & 1.50e-06 & 5.62e-10 & 1.50e-06 & 5.62e-10 \\ \hline
& Second Component & Parameters & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ \\
& & True values & 1.5 & 0.5 & 1.75 & 0.75 & 1.5 & 0.5 & 1.75 & 0.75 \\ \hline
& & Average & 1.5006 & 0.5000 & 1.7507 & 0.7500 & 1.5008 & 0.5000 & 1.7508 & 0.7500 \\
& & Bias & 6.49e-04 & -1.10e-05 & 6.55e-04 & -1.15e-05 & 7.70e-04 & -1.36e-05 & 7.92e-04 & -1.38e-05 \\
& & MSE & 5.75e-06 & 2.09e-09 & 4.50e-06 & 1.65e-09 & 5.66e-06 & 2.03e-09 & 4.68e-06 & 1.68e-09 \\
& & AVar & 4.73e-06 & 1.77e-09 & 4.73e-06 & 1.77e-09 & 4.73e-06 & 1.77e-09 & 4.73e-06 & 1.77e-09 \\ \hline \end{tabular}} \caption{Estimates of the parameters of model \eqref{multiple_comp_model} when M = N = 50} \label{table:6} \end{table}
\begin{table}[] \centering
\resizebox{\textwidth}{!}{\begin{tabular}{c|c|c|cccc|cccc} \hline
$\sigma$ & & & \multicolumn{4}{c|}{Proposed sequential estimates} & \multicolumn{4}{c}{Sequential LSEs} \\ \hline 0.10 & First Component & Parameters & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ \\
& & True values & 2.1 & 0.1 & 1.25 & 0.25 & 2.1 & 0.1 & 1.25 & 0.25 \\ \hline
& & Average & 2.1000 & 0.1000 & 1.2528 & 0.2500 & 2.1000 & 0.1000 & 1.2528 & 0.2500 \\
& & Bias & -6.14e-06 & -8.37e-07 & 2.81e-03 & -3.31e-05 & -2.85e-05 & 9.79e-08 & 2.80e-03 & -3.27e-05\\
& & MSE & 3.06e-09 & 1.19e-12 & 7.90e-06 & 1.10e-09 & 5.96e-09 & 8.46e-13 & 7.84e-06 & 1.07e-09 \\
& & AVar & 2.96e-09 & 4.93e-13 & 2.96e-09 & 4.93e-13 & 2.96e-09 & 4.93e-13 & 2.96e-09 & 4.93e-13 \\ \hline
& Second Component & Parameters & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ \\
& & True values & 1.5 & 0.5 & 1.75 & 0.75 & 1.5 & 0.5 & 1.75 & 0.75 \\ \hline
& & Average & 1.5001 & 0.5000 & 1.7500 & 0.7500 & 1.5001 & 0.5000 & 1.7500 & 0.7500 \\
& & Bias & 5.82e-05 & -5.89e-07 & 5.55e-06 & 1.89e-07 & 5.64e-05 & -5.67e-07 & 2.25e-05 & -2.69e-08\\
& & MSE & 1.26e-08 & 1.85e-12 & 9.86e-09 & 1.64e-12 & 1.23e-08 & 1.82e-12 & 1.02e-08 & 1.57e-12 \\
& & AVar & 9.34e-09 & 1.56e-12 & 9.34e-09 & 1.56e-12 & 9.34e-09 & 1.56e-12 & 9.34e-09 & 1.56e-12 \\ \hline 0.50 & First Component & Parameters & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ \\
& & True values & 2.1 & 0.1 & 1.25 & 0.25 & 2.1 & 0.1 & 1.25 & 0.25 \\ \hline
& & Average & 2.1000 & 0.1000 & 1.2528 & 0.2500 & 2.1000 & 0.1000 & 1.2528 & 0.2500 \\
& & Bias & 1.36e-05 & -1.06e-06 & 2.82e-03 & -3.33e-05 & -1.21e-05 & -9.17e-08 & 2.82e-03 & -3.29e-05\\
& & MSE & 7.02e-08 & 1.24e-11 & 8.05e-06 & 1.12e-09 & 7.03e-08 & 1.14e-11 & 8.01e-06 & 1.09e-09 \\
& & AVar & 7.40e-08 & 1.23e-11 & 7.40e-08 & 1.23e-11 & 7.40e-08 & 1.23e-11 & 7.40e-08 & 1.23e-11 \\\hline
& Second Component & Parameters & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ \\
& & True values & 1.5 & 0.5 & 1.75 & 0.75 & 1.5 & 0.5 & 1.75 & 0.75 \\ \hline
& & Average & 1.5000 & 0.5000 & 1.7500 & 0.7500 & 1.5000 & 0.5000 & 1.7500 & 0.7500 \\
& & Bias & 4.54e-05 & -3.85e-07 & 7.93e-06 & 1.36e-07 & 3.58e-05 & -2.65e-07 & 2.58e-05 & -9.44e-08\\
& & MSE & 2.50e-07 & 4.08e-11 & 2.49e-07 & 4.03e-11 & 2.21e-07 & 3.65e-11 & 2.42e-07 & 3.91e-11 \\
& & AVar & 2.33e-07 & 3.89e-11 & 2.33e-07 & 3.89e-11 & 2.33e-07 & 3.89e-11 & 2.33e-07 & 3.89e-11 \\ \hline 1.00 & First Component & Parameters & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ \\
& & True values & 2.1 & 0.1 & 1.25 & 0.25 & 2.1 & 0.1 & 1.25 & 0.25 \\ \hline
& & Average & 2.1000 & 0.1000 & 1.2528 & 0.2500 & 2.1000 & 0.1000 & 1.2528 & 0.2500 \\
& & Bias & 1.22e-05 & -1.10e-06 & 2.82e-03 & -3.32e-05 & -9.57e-06 & -1.62e-07 & 2.81e-03 & -3.27e-05\\
& & MSE & 3.09e-07 & 5.09e-11 & 8.28e-06 & 1.15e-09 & 3.07e-07 & 4.96e-11 & 8.18e-06 & 1.12e-09 \\
& & AVar & 2.96e-07 & 4.93e-11 & 2.96e-07 & 4.93e-11 & 2.96e-07 & 4.93e-11 & 2.96e-07 & 4.93e-11 \\\hline
& Second Component & Parameters & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ \\
& & True values & 1.5 & 0.5 & 1.75 & 0.75 & 1.5 & 0.5 & 1.75 & 0.75 \\ \hline
& & Average & 1.5000 & 0.5000 & 1.7501 & 0.7500 & 1.5000 & 0.5000 & 1.7501 & 0.7500 \\
& & Bias & 3.16e-05 & -2.66e-07 & 5.78e-05 & -6.26e-07 & 2.85e-06 & 7.51e-08 & 6.49e-05 & -7.12e-07\\
& & MSE & 9.43e-07 & 1.52e-10 & 9.58e-07 & 1.55e-10 & 8.36e-07 & 1.36e-10 & 9.41e-07 & 1.52e-10 \\
& & AVar & 9.34e-07 & 1.56e-10 & 9.34e-07 & 1.56e-10 & 9.34e-07 & 1.56e-10 & 9.34e-07 & 1.56e-10 \\ \hline \end{tabular}} \caption{Estimates of the parameters of model \eqref{multiple_comp_model} when M = N = 75} \label{table:7} \end{table}
\begin{table}[] \centering
\resizebox{\textwidth}{!}{\begin{tabular}{c|c|c|cccc|cccc} \hline
$\sigma$ & & & \multicolumn{4}{c|}{Proposed sequential estimates} & \multicolumn{4}{c}{Sequential LSEs} \\ \hline 0.10 & First Component & Parameters & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ \\
& & True values & 2.1 & 0.1 & 1.25 & 0.25 & 2.1 & 0.1 & 1.25 & 0.25 \\ \hline
& & Average & 2.0992 & 0.1000 & 1.2507 & 0.2500 & 2.0995 & 0.1000 & 1.2504 & 0.2500 \\
& & Bias & -7.92e-04 & 5.60e-06 & 7.40e-04 & -6.20e-06 & -4.67e-04 & 2.68e-06 & 4.46e-04 & -3.57e-06\\
& & MSE & 6.32e-07 & 3.19e-11 & 5.49e-07 & 3.85e-11 & 2.19e-07 & 7.27e-12 & 2.00e-07 & 1.28e-11 \\
& & AVar & 9.37e-10 & 8.78e-14 & 9.37e-10 & 8.78e-14 & 9.37e-10 & 8.78e-14 & 9.37e-10 & 8.78e-14 \\ \hline
& Second Component & Parameters & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ \\
& & True values & 1.5 & 0.5 & 1.75 & 0.75 & 1.5 & 0.5 & 1.75 & 0.75 \\ \hline
& & Average & 1.5000 & 0.5000 & 1.7500 & 0.7500 & 1.5000 & 0.5000 & 1.7500 & 0.7500 \\
& & Bias & 1.94e-05 & -1.60e-07 & -8.18e-06 & 1.64e-08 & 1.46e-05 & -1.19e-07 & -3.32e-06 & -2.62e-08\\
& & MSE & 3.17e-09 & 2.88e-13 & 3.25e-09 & 2.91e-13 & 2.99e-09 & 2.73e-13 & 3.15e-09 & 2.89e-13 \\
& & AVar & 2.95e-09 & 2.77e-13 & 2.95e-09 & 2.77e-13 & 2.95e-09 & 2.77e-13 & 2.95e-09 & 2.77e-13 \\ \hline 0.50 & First Component & Parameters & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ \\
& & True values & 2.1 & 0.1 & 1.25 & 0.25 & 2.1 & 0.1 & 1.25 & 0.25 \\ \hline
& & Average & 2.0992 & 0.1000 & 1.2507 & 0.2500 & 2.0995 & 0.1000 & 1.2504 & 0.2500 \\
& & Bias & -7.93e-04 & 5.61e-06 & 7.35e-04 & -6.15e-06 & -4.61e-04 & 2.63e-06 & 4.41e-04 & -3.52e-06 \\
& & MSE & 6.55e-07 & 3.38e-11 & 5.65e-07 & 4.01e-11 & 2.37e-07 & 9.07e-12 & 2.19e-07 & 1.47e-11 \\
& & AVar & 2.34e-08 & 2.20e-12 & 2.34e-08 & 2.20e-12 & 2.34e-08 & 2.20e-12 & 2.34e-08 & 2.20e-12 \\ \hline
& Second Component & Parameters & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ \\
& & True values & 1.5 & 0.5 & 1.75 & 0.75 & 1.5 & 0.5 & 1.75 & 0.75 \\ \hline
& & Average & 1.5000 & 0.5000 & 1.7500 & 0.7500 & 1.5000 & 0.5000 & 1.7500 & 0.7500 \\
& & Bias & 1.84e-05 & -1.28e-07 & -1.52e-05 & 1.01e-07 & 1.36e-05 & -8.67e-08 & -1.06e-05 & 6.10e-08 \\
& & MSE & 8.03e-08 & 7.24e-12 & 7.48e-08 & 6.90e-12 & 7.96e-08 & 7.16e-12 & 7.42e-08 & 6.86e-12 \\
& & AVar & 7.38e-08 & 6.92e-12 & 7.38e-08 & 6.92e-12 & 7.38e-08 & 6.92e-12 & 7.38e-08 & 6.92e-12 \\ \hline 1.00 & First Component & Parameters & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ & $\alpha_1$ & $\beta_1$ & $\gamma_1$ & $\delta_1$ \\
& & True values & 2.1 & 0.1 & 1.25 & 0.25 & 2.1 & 0.1 & 1.25 & 0.25 \\ \hline
& & Average & 2.0992 & 0.1000 & 1.2507 & 0.2500 & 2.0995 & 0.1000 & 1.2505 & 0.2500 \\
& & Bias & -7.86e-04 & 5.52e-06 & 7.46e-04 & -6.24e-06 & -4.53e-04 & 2.53e-06 & 4.52e-04 & -3.61e-06 \\
& & MSE & 7.16e-07 & 3.95e-11 & 6.55e-07 & 4.79e-11 & 2.99e-07 & 1.49e-11 & 3.00e-07 & 2.17e-11 \\
& & AVar & 9.37e-08 & 8.78e-12 & 9.37e-08 & 8.78e-12 & 9.37e-08 & 8.78e-12 & 9.37e-08 & 8.78e-12 \\ \hline
& Second Component & Parameters & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ & $\alpha_2$ & $\beta_2$ & $\gamma_2$ & $\delta_2$ \\
& & True values & 1.5 & 0.5 & 1.75 & 0.75 & 1.5 & 0.5 & 1.75 & 0.75 \\ \hline
& & Average & 1.5000 & 0.5000 & 1.7500 & 0.7500 & 1.5000 & 0.5000 & 1.7500 & 0.7500 \\
& & Bias & -1.78e-05 & 1.53e-07 & 1.71e-05 & -2.75e-07 & -2.13e-05 & 1.78e-07 & 1.87e-05 & -2.92e-07 \\
& & MSE & 3.05e-07 & 2.86e-11 & 3.02e-07 & 2.78e-11 & 3.03e-07 & 2.83e-11 & 2.98e-07 & 2.76e-11 \\
& & AVar & 2.95e-07 & 2.77e-11 & 2.95e-07 & 2.77e-11 & 2.95e-07 & 2.77e-11 & 2.95e-07 & 2.77e-11 \\ \hline \end{tabular}} \caption{Estimates of the parameters of model \eqref{multiple_comp_model} when M = N = 100} \label{table:8} \end{table}
\subsection{Simulated Data Analysis}\label{sec:data_analysis} We analyse synthetic texture data using model \eqref{multiple_comp_model} to demonstrate how the proposed parameter estimation methods work. The synthetic data are generated using the following model structure and parameters: \begin{equation} y(m,n) = \sum_{k=1}^{5}\{A_k^0 \cos(\alpha_k^0 m + \beta_k^0 m^2 + \gamma_k^0 n + \delta_k^0 n^2) + B_k^0 \sin(\alpha_k^0 m + \beta_k^0 m^2 + \gamma_k^0 n + \delta_k^0 n^2)\} + X(m,n); \ m = 1, \ldots, 250; \ n = 1, \ldots, 250. \end{equation} \begin{comment}\begin{equation}\begin{split}\label{synthetic_data} y(m,n) = 6 \cos(2.75 m + 0.05 m^2 + 2.5 n + 0.075 n^2) + 6 \sin(2.75 m + 0.05 m^2 + 2.5 n + 0.075 n^2)\\ + 2 \cos(1.75 m + 0.01 m^2 + 1.5 n + 0.025 n^2) + 2 \sin(1.75 m + 0.01 m^2 + 1.5 n + 0.025 n^2)\\ + 1 \cos(1.5 m + 0.15 m^2 + 2 n + 0.25 n^2) + 1 \sin(1.5 m + 0.15 m^2 + 2 n + 0.25 n^2)\\ + 0.5 \cos(1.75 m + 0.75 m^2 + 2.75 n + 0.275 n^2) + 0.5 \sin(1.75 m + 0.75 m^2 + 2.75 n + 0.275 n^2)\\ + 0.1 \cos(1.95 m + 0.95 m^2 + 2.95 n + 0.295 n^2) + 0.1 \sin(1.95 m + 0.95 m^2 + 2.95 n + 0.295 n^2)\\
+ X(m,n); \ m = 1, \ldots, 250; n = 1, \ldots, 250. \end{split}\end{equation} \end{comment}
\noindent The true parameter values are provided in Table \ref{table9}. The errors $X(m,n)$ are i.i.d.\ random variables with mean 0 and variance 100. Figure \ref{fig:true_signal} represents the original texture without any contamination and Figure \ref{fig:noisy_signal} represents the noisy texture. Our purpose is to extract the original gray-scale texture from the contaminated one.
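\noindent For concreteness, the following short sketch illustrates one way to simulate such a texture. It is written in Python (the paper itself contains no code listings); the parameter arrays mirror the true values in Table \ref{table9}, and Gaussian noise with variance 100 is one admissible choice for the i.i.d.\ errors. It is an illustration only, not part of the estimation methodology.
\begin{verbatim}
import numpy as np

# Illustrative sketch: simulate the 250 x 250 synthetic texture described
# above.  The parameter values are those of Table 9; the i.i.d. errors are
# taken here to be Gaussian with variance 100 (standard deviation 10).
M = N = 250
A     = np.array([6.0, 2.0, 1.0, 0.5, 0.1])
B     = np.array([6.0, 2.0, 1.0, 0.5, 0.1])
alpha = np.array([2.75, 1.75, 1.50, 1.75, 1.95])
beta  = np.array([0.05, 0.01, 0.15, 0.75, 0.95])
gamma = np.array([2.50, 1.50, 2.00, 2.75, 2.95])
delta = np.array([0.075, 0.025, 0.25, 0.275, 0.295])

m = np.arange(1, M + 1)[:, None]   # m runs down the rows
n = np.arange(1, N + 1)[None, :]   # n runs across the columns

y = np.zeros((M, N))
for k in range(5):
    phase = alpha[k] * m + beta[k] * m**2 + gamma[k] * n + delta[k] * n**2
    y += A[k] * np.cos(phase) + B[k] * np.sin(phase)

rng = np.random.default_rng(0)
y += rng.normal(scale=10.0, size=(M, N))   # contaminated texture
\end{verbatim}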
\begin{table}[]
\centering
\caption{True parameters values of the synthetic data.}
\resizebox{0.9\textwidth}{!}{\begin{tabular}{cc|cc|cc|cc|cc|cc}
\hline $A_1^0$ & 6 & $B_1^0$ & 6 & $\alpha_1^0$ & 2.75 & $\beta_1^0$ & 0.05 & $\gamma_1^0$ & 2.5 & $\delta_1^0$ & 0.075 \\ $A_2^0$ &2 & $B_2^0$ & 2 & $\alpha_2^0$ & 1.75 & $\beta_2^0$ & 0.01 & $\gamma_2^0$ & 1.5 & $\delta_2^0$ & 0.025 \\
$A_3^0$ &1 & $B_3^0$ & 1 & $\alpha_3^0$ & 1.5 & $\beta_3^0$ & 0.15 & $\gamma_3^0$ & 2 & $\delta_3^0$ & 0.25 \\
$A_4^0$ &0.5 & $B_4^0$ & 0.5 & $\alpha_4^0$ & 1.75 & $\beta_4^0$ & 0.75 & $\gamma_4^0$ & 2.75 & $\delta_4^0$ & 0.275 \\
$A_5^0$ &0.1 & $B_5^0$ & 0.1 & $\alpha_5^0$ & 1.95 & $\beta_5^0$ & 0.95 & $\gamma_5^0$ & 2.95 & $\delta_5^0$ & 0.295 \\ \hline \end{tabular}} \label{table9} \end{table}
\noindent We model the data using the proposed sequential procedure, and the parameter estimates obtained using the sequential estimators are presented in Table \ref{table10}. From these estimates, it can be inferred that the first four components are estimated satisfactorily, while the last component is hardly detected. This is expected, as the amplitudes corresponding to the last component are very small. The estimated texture is plotted in Figure \ref{fig:estimated_signal_LSE}, and it is evident that it matches the original texture very closely.
\begin{table}[]
\centering
\caption{Estimates obtained using the sequential procedure for the synthetic data.}
\resizebox{\textwidth}{!}{\begin{tabular}{cc|cc|cc|cc|cc|cc}
\hline $\hat{A}_1$ & 5.8773 & $\hat{B}_1$ & 6.1384 & $\hat{\alpha}_1$ & 2.7499 & $\hat{\beta}_1$ & 0.0500 & $\hat{\gamma}_1$ & 2.5004 & $\hat{\delta}_1$ & 0.0749 \\ $\hat{A}_2$ & 2.2789 & $\hat{B}_2$ & 1.7718 & $\hat{\alpha}_2$ & 1.7492 & $\hat{\beta}_2$ & 0.0100 & $\hat{\gamma}_2$ & 1.4988 & $\hat{\delta}_2$ & 0.0250 \\ $\hat{A}_3$ & 1.0856 & $\hat{B}_3$ & 0.9090 & $\hat{\alpha}_3$ & 1.4999 & $\hat{\beta}_3$ & 0.1499 & $\hat{\gamma}_3$ & 1.9979 & $\hat{\delta}_3$ & 0.2500 \\ $\hat{A}_4$ & 0.4828 & $\hat{B}_4$ & 0.5418 & $\hat{\alpha}_4$ & 1.7482 & $\hat{\beta}_4$ & 0.7500 & $\hat{\gamma}_4$ & 2.7547 & $\hat{\delta}_4$ & 0.2749 \\ $\hat{A}_5$ & 0.0251 & $\hat{B}_5$ & -0.0106 & $\hat{\alpha}_5$ & 2.2254 & $\hat{\beta}_5$ & 1.1450 & $\hat{\gamma}_5$ & 3.2173 & $\hat{\delta}_5$ & 0.5945 \\
\hline \end{tabular}} \label{table10} \end{table} \begin{figure}
\caption{Estimated texture for the synthetic data.}
\label{fig:estimated_signal_LSE}
\end{figure}
\section{Concluding Remarks}\label{sec:Conclusion} In this paper, we have considered the estimation of the unknown parameters of a 2-D chirp model under the assumption of i.i.d.\ additive errors. The main idea is to reduce the computational complexity involved in finding the LSEs of these parameters. The proposed estimators reduce the computational burden substantially and are observed to be strongly consistent and asymptotically equivalent to the LSEs. For a 2-D chirp model with $p$ components, we have proposed a sequential procedure which reduces the estimation problem to solving $p$ 2-D optimisation problems, and the resulting sequential estimators are again strongly consistent and asymptotically equivalent to the usual LSEs.\\
An alternative method to estimate the non-linear parameters of a one-component 2-D chirp model is to maximize the following periodogram-type functions: \begin{equation}\begin{split}\label{periodogram-type_function_alpha_beta} I^{(1)}_{MN}(\alpha, \beta) & = \frac{2}{MN}\sum\limits_{{n_0} = 1}^{N}\textit{\textbf{Y}}^{\top}_{n_0}\textbf{Z}_M(\alpha, \beta)\textbf{Z}_M(\alpha, \beta)^{\top}\textit{\textbf{Y}}_{n_0} \end{split}\end{equation} \begin{equation}\begin{split}\label{periodogram-type_function_gamma_delta} I^{(2)}_{MN}(\gamma, \delta) & = \frac{2}{MN}\sum\limits_{{m_0} = 1}^{M}\textit{\textbf{Y}}^{\top}_{m_0}\textbf{Z}_N(\gamma, \delta)\textbf{Z}_N(\gamma, \delta)^{\top}\textit{\textbf{Y}}_{m_0} \end{split}\end{equation} with respect to $\alpha$, $\beta$ and $\gamma$, $\delta$ respectively. These periodogram-type functions are constructed in the same way as the reduced error sum of squares functions defined in \eqref{reduced_ess_alpha_beta} and \eqref{reduced_ess_gamma_delta}, with the same underlying idea: the 2-D chirp model is reduced to a number of 1-D chirp models that share the frequency and frequency rate parameters of the original 2-D model but have different amplitudes. \\
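\noindent As a rough illustration (in the same Python style as the earlier sketch, and again not part of the methodology), the function below evaluates $I^{(1)}_{MN}(\alpha, \beta)$ directly; it assumes that $\textbf{Z}_M(\alpha, \beta)$ is the $M\times 2$ matrix with columns $\cos(\alpha m + \beta m^2)$ and $\sin(\alpha m + \beta m^2)$, $m = 1, \ldots, M$, and that the $n_0$-th column of the data matrix is $\textit{\textbf{Y}}_{n_0}$. Note that, unlike the reduced error sum of squares, no matrix inversion is required.
\begin{verbatim}
import numpy as np

def periodogram_type_I1(y, alpha, beta):
    # y is the M x N data matrix with y[m-1, n-1] = y(m, n).
    M, N = y.shape
    m = np.arange(1, M + 1)
    phase = alpha * m + beta * m**2
    Z = np.column_stack([np.cos(phase), np.sin(phase)])  # M x 2 matrix Z_M
    proj = Z.T @ y          # 2 x N; column n0 equals Z_M^T Y_{n0}
    return 2.0 / (M * N) * np.sum(proj**2)

# Estimates of (alpha, beta) would be obtained by maximising this function,
# for instance over a fine grid followed by a local optimiser; (gamma, delta)
# are handled analogously through I^(2)_MN.
\end{verbatim}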
Using some number theoretic results established by Lahiri \cite{2015}, it is easy to show that the following relationship exists between the reduced error sum of squares and the periodogram-type functions:
\begin{equation}\label{R_I_relationship} \frac{1}{N}R^{(1)}_{MN}(\alpha, \beta) = \frac{1}{N}\sum_{{n_0} = 1}^{N} \textit{\textbf{Y}}^{\top}_{n_0}\textit{\textbf{Y}}_{n_0} - I^{(1)}_{MN}(\alpha, \beta) + o(1) \end{equation} Here, a function $f$ is $o(1)$ if $f \to 0$ almost surely as $M \to \infty$. A similar relation holds between $R^{(2)}_{MN}(\gamma, \delta)$ and $I^{(2)}_{MN}(\gamma, \delta)$. Thus, replacing the functions $R^{(1)}_{MN}(\alpha, \beta)$ and $R^{(2)}_{MN}(\gamma, \delta)$ by $I^{(1)}_{MN}(\alpha, \beta)$ and $I^{(2)}_{MN}(\gamma, \delta)$, respectively, is plausible, as the effect on the estimators is inconsequential. Moreover, this replacement simplifies the estimation process to a great extent, since the evaluation of the periodogram-type functions does not involve matrix inversion. \\
This relationship is analogous to the one first established by Walker \cite{1971} for the sinusoidal model and later extended by Grover et al. \cite{2018, 2018_2} to 1-D and 2-D chirp models. The estimators obtained by maximising a periodogram function \cite{1971} or a periodogram-type function \cite{2018, 2018_2} are called the approximate least squares estimators (ALSEs). In fact, Grover et al. \cite{2018, 2018_2} showed that the ALSEs are strongly consistent and asymptotically equivalent to the corresponding LSEs. Furthermore, they showed that the ALSEs have two distinctive and noteworthy aspects$-$ \textit{(a)} their consistency is obtained under slightly less restrictive assumptions on the linear parameters than those required for the LSEs, and \textit{(b)} their computation is faster than that of the LSEs due to the absence of matrix inversion. Therefore, it will be interesting to investigate the behaviour of the estimators obtained by maximising functions \eqref{periodogram-type_function_alpha_beta} and \eqref{periodogram-type_function_gamma_delta} and to assess their computational performance relative to the estimators we propose in this paper. \\
The numerical experiments$-$the simulations and the data analysis$-$show that the proposed estimation technique provides results as accurate as the least squares estimation method, with the additional advantage of being computationally more efficient. To summarise, the proposed estimators appear to be the method of choice: their performance is satisfactory and, both numerically and analytically, they are as efficient as the LSEs.
\section*{Appendix A}\label{appendix:A} \begin{comment} To prove Theorem \ref{consistency_alpha_beta_one_comp_LSE}, we need the following lemma: \begin{lemma}\label{lemma_consistency_LSEs_alpha_beta}
Consider the set $S_c$ = $\{\boldsymbol{\theta} = (A(1), \ldots A(N), B(1), \ldots, B(N), \alpha, \beta) : |\boldsymbol{\theta} - \boldsymbol{\theta}^0| \geqslant c\}$. If for any c $>$ 0, \\ \begin{equation}\label{condition_for_consistency_LSE_alpha_beta} \liminf \inf\limits_{\boldsymbol{\theta} \in S_c} \frac{1}{M N}Q_{MN}(\boldsymbol{\theta}) - Q_{MN}(\boldsymbol{\theta}^0) > 0 \ a.s. \end{equation} then, $\hat{\boldsymbol{\theta}}$ $\rightarrow$ $\boldsymbol{\theta}^0$ almost surely as $M \rightarrow \infty$. \end{lemma} \noindent \begin{proof} This proof follows along the same lines as that of Lemma 1 of Wu \cite{1981}.\\ \end{proof} \justify \textit{Proof of Theorem \ref{consistency_alpha_beta_one_comp_LSE}:} Let us consider the following: \begin{flequation*} & \liminf \inf\limits_{\boldsymbol{\theta} \in S_c} \frac{1}{M N}(Q_{MN}(\boldsymbol{\theta}) - Q_{MN}(\boldsymbol{\theta}^0))& \\ & = \liminf \inf\limits_{\boldsymbol{\theta} \in S_c} \frac{1}{M N}\bigg[\sum_{n_0 = 1}^{N}\bigg\{\sum_{m=1}^{M}\bigg(y(m,n) - A(n_0)\cos(\alpha m + \beta m^2) - B(n_0)\sin(\alpha m + \beta m^2)\bigg)^2\bigg\} & \\ & - \sum_{n_0 = 1}^{N}\bigg\{\sum_{m=1}^{M}\bigg(y(m,n) - A^0(n_0)\cos(\alpha^0 m + \beta^0 m^2) - B^0(n_0)\sin(\alpha^0 m + \beta^0 m^2)\bigg)^2\bigg\}\bigg]& \\ & = \liminf \inf\limits_{\boldsymbol{\theta} \in S_c} \frac{1}{M N}\bigg[\sum_{n_0 = 1}^{N} Q_M(\boldsymbol{\theta}(n_0)) - \sum_{n_0 = 1}^{N} Q_M(\boldsymbol{\theta}^0(n_0))\bigg] &\\
&\geqslant \frac{1}{N}\sum_{n_0 = 1}^{N} \liminf \inf\limits_{\boldsymbol{\theta} \in S_c} \frac{1}{M}\bigg[Q_M(\boldsymbol{\theta}(n_0)) - Q_M(\boldsymbol{\theta}^0(n_0))\bigg] > 0 \end{flequation*} This follows from the proof of Theorem 1 of Kundu and Nandi \cite{2008}. Thus, using Lemma \ref{lemma_consistency_LSEs_alpha_beta}, $\hat{\alpha} \xrightarrow{a.s.} \alpha^0$ and $\hat{\beta} \xrightarrow{a.s.} \beta^0$. \\ \qed
\justify \textit{Proof of Theorem \ref{asymp_dist_one_comp_LSE}:} Let us first determine the asymptotic distribution of $\hat{\alpha}$ and $\hat{\beta}.$ For this, we expand the sum of the error sums of squares $\textbf{Q}^{\prime}_{MN}(\hat{\boldsymbol{\theta}})$ defined in \eqref{ess_alpha_beta}, around the point $\boldsymbol{\theta}^0$ using Taylor series expansion as follows: \begin{equation*} \textbf{Q}^{\prime}_{MN}(\hat{\boldsymbol{\theta}}) - \textbf{Q}^{\prime}_{MN}(\boldsymbol{\theta}^0) = (\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^0)\textbf{Q}^{\prime\prime}_{MN}(\bar{\boldsymbol{\theta}}), \end{equation*} where $\bar{\boldsymbol{\theta}}$ is a point between $\hat{\boldsymbol{\theta}}$ and $\boldsymbol{\theta}^0.$ Since $\hat{\boldsymbol{\theta}}$ minimizes $Q_{MN}(\boldsymbol{\theta})$, therefore $\textbf{Q}^{\prime}_{MN}(\hat{\boldsymbol{\theta}}) = 0$, and we have: \begin{equation}\label{taylor_series_Q'MN_theta} (\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^0) = - \textbf{Q}^{\prime}_{MN}(\boldsymbol{\theta}^0) [\textbf{Q}^{\prime\prime}_{MN}(\bar{\boldsymbol{\theta}})]^{-1}. \end{equation} Let us define a diagonal matrix $\textit{\textbf{D}}$, as $\textit{\textbf{D}} = diag(\underbrace{\frac{1}{M^{1/2}}, \ldots \frac{1}{M^{1/2}}}_{2N\ times}, \frac{1}{M^{3/2}}, \frac{1}{M^{5/2}})$. Multiplying both sides of \eqref{taylor_series_Q'MN_theta} by the matrix $\textit{\textbf{D}}$, we have: \begin{equation}\label{taylor_series_Q'MN_theta_D} (\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^0)\textit{\textbf{D}}^{-1} & = - \textbf{Q}^{\prime}_{MN}(\boldsymbol{\theta}^0)\textit{\textbf{D}} [\textit{\textbf{D}} \textbf{Q}^{\prime\prime}_{MN}(\bar{\boldsymbol{\theta}})\textit{\textbf{D}}]^{-1}. \end{equation} We first establish the distribution of the vector $- \textbf{Q}^{\prime}_{MN}(\boldsymbol{\theta}^0)\textit{\textbf{D}}$: \begin{multline}\label{Q'MN_theta_D} - \textbf{Q}^{\prime}_{MN}(\boldsymbol{\theta}^0)\textit{\textbf{D}} = \bigg(\begin{matrix} \frac{1}{M^{1/2}} \frac{\partial Q_{MN}(\theta^0)}{\partial A(1)} & \ldots & \frac{1}{M^{1/2}} \frac{\partial Q_{MN}(\theta^0)}{\partial A(N)}, \frac{1}{M^{1/2}} \frac{\partial Q_{MN}(\theta^0)}{\partial B(1)} & \ldots & \frac{1}{M^{1/2}} \frac{\partial Q_{MN}(\theta^0)}{\partial B(N)} \end{matrix} \\ \begin{matrix} \frac{1}{M^{3/2}} \frac{\partial Q_{MN}(\theta^0)}{\partial \alpha} & \frac{1}{M^{5/2}} \frac{\partial Q_{MN}(\theta^0)}{\partial \beta} \end{matrix}\bigg) \end{multline} Note that, we defined $Q_{MN}(\boldsymbol{\theta})$ as: \begin{equation*} Q_{MN}(\boldsymbol{\theta}) = \sum_{{n_0} = 1}^{N}Q_{M}(\boldsymbol{\theta}(n_0)), \end{equation*} where $Q_{M}(\boldsymbol{\theta}(n_0))$ is the error sum of squares that is minimised with respect to $\boldsymbol{\theta}(n_0) = (A(n_0), B(n_0), \alpha, \beta)$ to obtain the LSEs of the parameters of the 1-D chirp model \eqref{model_n=n0_one_comp}. Also, we have the following results from Lahiri et al.\ \cite{2015}, \begin{equation}\begin{split}\label{one_comp_results} - &\textbf{Q}^{\prime}_{M}(\boldsymbol{\theta}^0(n_0))\boldsymbol{\Delta} \xrightarrow{d} \boldsymbol{\mathcal{N}}_4(\textbf{0}, 2\sigma^2\boldsymbol{\Sigma}^{-1}(\boldsymbol{\theta}^0(n_0))), \textmd{ and} \\ & \boldsymbol{\Delta} \textbf{Q}^{\prime\prime}_{M}(\bar{\boldsymbol{\theta}}(n_0))\boldsymbol{\Delta} \rightarrow \boldsymbol{\Delta} \textbf{Q}^{\prime\prime}_{M}(\boldsymbol{\theta}^0(n_0))\boldsymbol{\Delta} \rightarrow \boldsymbol{\Sigma}^{-1}(\boldsymbol{\theta}^0(n_0)). 
\end{split}\end{equation} Here, \begin{equation}\label{variance_covariance_one_comp} \boldsymbol{\Sigma}(\boldsymbol{\theta}^0(n_0)) = \frac{2}{{A^0}^2(n_0) + {B^0}^2(n_0)}\left[\begin{array}{cccc}\frac{1}{2}\bigg({A^0}^2(n_0) + 9{B^0}^2(n_0)\bigg) & -4A^0(n_0)B^0(n_0) & -18B^0(n_0) & 15B^0(n_0) \\-4A^0(n_0)B^0(n_0) & \frac{1}{2}\bigg({9A^0}^2(n_0) +{B^0}^2(n_0)\bigg) & 18A^0(n_0) & -15A^0(n_0) \\ -18B^0(n_0) & 18A^0(n_0) & 96 & -90 \\ 15B^0(n_0) & -15A^0(n_0) & -90 & 90 \end{array}\right] \end{equation} Also, note that \begin{equation}\label{variance_covariance^-1_one_comp} \boldsymbol{\Sigma}^{-1}(\boldsymbol{\theta}^0(n_0)) = \left[\begin{array}{cccc}1 & 0 & \frac{B^0(n_0)}{2} & \frac{B^0(n_0)}{3} \\0 & 1 & -\frac{A^0(n_0)}{2} & -\frac{A^0(n_0)}{3} \\ \frac{B^0(n_0)}{2} & -\frac{A^0(n_0)}{2} & \frac{{A^0(n_0)}^2 + {B^0(n_0)}^2}{3} & \frac{{A^0(n_0)}^2 + {B^0(n_0)}^2}{4} \\ \frac{B^0(n_0)}{3} & -\frac{A^0(n_0)}{3} & \frac{{A^0(n_0)}^2 + {B^0(n_0)}^2}{4} & \frac{{A^0(n_0)}^2 + {B^0(n_0)}^2}{5} \end{array}\right] \end{equation} \noindent Using the results \eqref{one_comp_results}, we compute the elements of the vector \eqref{Q'MN_theta_D} discretely and obtain: \begin{equation}\label{Q'MN_theta_D_ditribution} - \textbf{Q}^{\prime}_{MN}(\boldsymbol{\theta}^0)\textit{\textbf{D}} \xrightarrow{d} \boldsymbol{\mathcal{N}}_{2N + 2}(\textbf{0}, 2\sigma^2 \bm{\mathcal{V}}^{-1}), \end{equation} where $\bm{\mathcal{V}}^{-1} = \left[\begin{array}{cccccccc} 1 & \ldots & 0 & 0 & \ldots & 0 & \frac{B^0(1)}{2} & \frac{B^0(1)}{3}\\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & \ldots & 1 & 0 & \ldots & 0 & \frac{B^0(N)}{2} & \frac{B^0(N)}{3}\\ 0 & \ldots & 0 & 1 & \ldots & 0 &-\frac{A^0(1)}{2} & -\frac{A^0(1)}{3}\\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & \ldots & 0 & 0 & \ldots & 1 &-\frac{A^0(N)}{2} & -\frac{A^0(N)}{3}\\ \frac{B^0(1)}{2} & \ldots & \frac{B^0(N)}{2} & -\frac{A^0(1)}{2} & \ldots & -\frac{A^0(N)}{2} & \frac{N({A^0}^2 + {B^0}^2)}{3} & \frac{N({A^0}^2 + {B^0}^2)}{4} \\ \frac{B^0(1)}{3} & \ldots & \frac{B^0(N)}{3} & -\frac{A^0(1)}{3} & \ldots & -\frac{A^0(N)}{3} & \frac{N({A^0}^2 + {B^0}^2)}{4} & \frac{N({A^0}^2 + {B^0}^2)}{5}
\end{array}\right].$\\ Similarly, if we compute each of the elements of the second derivative matrix using the above results, it can be easily seen that: \begin{equation}\label{DQ''MN_theta_D_limit} \textit{\textbf{D}}\textbf{Q}^{\prime\prime}_{MN}(\boldsymbol{\theta}^0)\textit{\textbf{D}} \rightarrow \bm{\mathcal{V}}^{-1}. \end{equation} On combining, \eqref{Q'MN_theta_D_ditribution} and \eqref{DQ''MN_theta_D_limit} and using the results in \eqref{taylor_series_Q'MN_theta_D}, we have: $$(\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^0)\textit{\textbf{D}}^{-1} \xrightarrow{d} \boldsymbol{\mathcal{N}}_{2N + 2}(\textbf{0}, 2\sigma^2 \bm{\mathcal{V}}). $$ From the above equation we can work out the joint distribution of $\hat{\alpha}$ and $\hat{\beta}$ and obtain the desired result. Analogously, the joint distribution of $\hat{\gamma}$ and $\hat{\delta}$ can be obtained. \\ \qed \end{comment} \noindent Henceforth, we will denote $\boldsymbol{\theta}(n_0) = (A(n_0), B(n_0), \alpha, \beta)$ as the parameter vector and $\boldsymbol{\theta}^0(n_0) = (A^0(n_0), B^0(n_0), \alpha^0, \beta^0)$ as the true parameter vector of the 1-D chirp model \eqref{model_n=n0_one_comp}. \\ To prove Theorem \ref{theorem:consistency_alpha_beta_one_comp_LSE}, we need the following lemma: \begin{lemma}\label{lemma_consistency_LSEs_alpha_beta}
Consider the set $S_c$ = $\{(\alpha, \beta) : |\alpha - \alpha^0| \geqslant c \textmd{ or } |\beta - \beta^0| \geqslant c\}$. If for any c $>$ 0, \\ \begin{equation}\label{condition_for_consistency_LSE_alpha_beta} \liminf \inf\limits_{(\alpha, \beta) \in S_c} \frac{1}{M N}\bigg[R^{(1)}_{MN}(\alpha, \beta) - R^{(1)}_{MN}(\alpha^0, \beta^0)\bigg] > 0 \ a.s. \end{equation} then, $\hat{\alpha}$ $\rightarrow$ $\alpha^0$ and $\hat{\beta}$ $\rightarrow$ $\beta^0$ almost surely as $M \rightarrow \infty$. \end{lemma} \noindent \begin{proof} This proof follows along the same lines as that of Lemma 1 of Wu \cite{1981}.\\ \end{proof}
\noindent \textbf{Proof of Theorem \ref{theorem:consistency_alpha_beta_one_comp_LSE}:} Let us consider the following: \begin{flalign*} & \liminf \inf\limits_{(\alpha, \beta) \in S_c} \frac{1}{M N}\bigg[R^{(1)}_{MN}(\alpha, \beta) - R^{(1)}_{MN}(\alpha^0, \beta^0)\bigg]& \\ & = \liminf \inf\limits_{(\alpha, \beta) \in S_c} \frac{1}{M N}\bigg[\sum_{{n_0} = 1}^{N}R_{M}(\alpha, \beta, n_0) - \sum_{{n_0} = 1}^{N} R_{M}(\alpha^0, \beta^0, n_0)\bigg]& \\ & = \liminf \inf\limits_{(\alpha, \beta) \in S_c} \frac{1}{M N}\bigg[\sum_{{n_0} = 1}^{N}Q_{M}(\hat{A}(n_0), \hat{B}(n_0), \alpha, \beta) - \sum_{{n_0} = 1}^{N} Q_{M}(\hat{A}(n_0), \hat{B}(n_0), \alpha^0, \beta^0)\bigg] & \\ & \geqslant \liminf \inf\limits_{(\alpha, \beta) \in S_c} \frac{1}{M N}\bigg[\sum_{{n_0} = 1}^{N}Q_{M}(\hat{A}(n_0), \hat{B}(n_0), \alpha, \beta) - \sum_{{n_0} = 1}^{N} Q_{M}(A^0(n_0) , B^0(n_0), \alpha^0, \beta^0)\bigg] & \\ & \geqslant \liminf \inf\limits_{ \boldsymbol{\theta}(n_0) \in M_c^{n_0}} \frac{1}{M N}\bigg[\sum_{{n_0} = 1}^{N}Q_{M}(A(n_0), B(n_0), \alpha, \beta) - \sum_{{n_0} = 1}^{N} Q_{M}(A^0(n_0) , B^0(n_0), \alpha^0, \beta^0)\bigg] & \\ &\geqslant \frac{1}{N}\sum_{n_0 = 1}^{N} \liminf \inf\limits_{\boldsymbol{\theta}(n_0) \in M_c^{n_0}} \frac{1}{M}\bigg[Q_M(\boldsymbol{\theta}(n_0)) - Q_M(\boldsymbol{\theta}^0(n_0))\bigg] > 0. & \end{flalign*}
This follows from the proof of Theorem 1 of Kundu and Nandi \cite{2008}. Here, $Q_{M}(A(n_0), B(n_0), \alpha, \beta) = \textit{\textbf{Y}}^{\top}_{n_0}(\textit{\textbf{I}} - \textbf{Z}_M(\alpha, \beta)(\textbf{Z}_M(\alpha, \beta)^{\top}\textbf{Z}_M(\alpha, \beta))^{-1}\textbf{Z}_M(\alpha, \beta)^{\top})\textit{\textbf{Y}}_{n_0}$. Also note that the set $M_c^{n_0}$ = $\{\boldsymbol{\theta}(n_0) : |A(n_0) - A^0(n_0)| \geqslant c \textmd{ or } |B(n_0) - B^0(n_0)| \geqslant c \textmd{ or } |\alpha - \alpha^0| \geqslant c \textmd{ or } |\beta - \beta^0| \geqslant c\}$ which implies $S_c \subset M_c^{n_0}$, for all $n_0 \in \{1, \ldots, N\}$. Thus, using Lemma \ref{lemma_consistency_LSEs_alpha_beta}, $\hat{\alpha} \xrightarrow{a.s.} \alpha^0$ and $\hat{\beta} \xrightarrow{a.s.} \beta^0$. \\ \qed
\noindent \textbf{Proof of Theorem \ref{theorem:asymp_dist_one_comp_LSE_alpha_beta}:} Let us denote $\boldsymbol{\xi} = (\alpha, \beta)$ and $\hat{\boldsymbol{\xi}} = (\hat{\alpha}, \hat{\beta})$, the estimator of $\boldsymbol{\xi}^0 = (\alpha^0, \beta^0)$ obtained by minimising the function $R_{MN}^{(1)}(\boldsymbol{\xi}) = R_{MN}^{(1)}(\alpha, \beta)$ defined in \eqref{reduced_ess_alpha_beta}. \\ \noindent Using multivariate Taylor series, we expand the $1 \times 2$ first derivative vector $\textit{\textbf{R}}_{MN}^{(1)'}(\hat{\boldsymbol{\xi}})$ of the function $R_{MN}^{(1)}(\boldsymbol{\xi})$, around the point $\boldsymbol{\xi}^0$ as follows: \begin{equation*} \textit{\textbf{R}}_{MN}^{(1)'}(\hat{\boldsymbol{\xi}}) - \textit{\textbf{R}}_{MN}^{(1)'}(\xi^0) = (\hat{\boldsymbol{\xi}} - \boldsymbol{\xi}^0)\textit{\textbf{R}}_{MN}^{(1)''}(\bar{\boldsymbol{\xi}}), \end{equation*} where $\bar{\boldsymbol{\xi}}$ is a point between $\hat{\boldsymbol{\xi}}$ and $\boldsymbol{\xi}^0$ and $\textit{\textbf{R}}_{MN}^{(1)''}(\bar{\boldsymbol{\xi}})$ is the $2 \times 2$ second derivative matrix of the function $R_{MN}^{(1)}(\boldsymbol{\xi})$ at the point $\bar{\boldsymbol{\xi}}$. Since $\hat{\boldsymbol{\xi}}$ minimises the function $R_{MN}^{(1)}(\boldsymbol{\xi})$, $\textit{\textbf{R}}_{MN}^{(1)'}(\hat{\boldsymbol{\xi}}) = 0$. Thus, we have \begin{equation*} (\hat{\boldsymbol{\xi}} - \boldsymbol{\xi}^0) = - \textit{\textbf{R}}_{MN}^{(1)'}(\boldsymbol{\xi}^0)[\textit{\textbf{R}}_{MN}^{(1)''}(\bar{\boldsymbol{\xi}})]^{-1}. \end{equation*} Multiplying both sides by the diagonal matrix $\textit{\textbf{D}}_1^{-1} = \textnormal{diag}(M^{\frac{-3}{2}}N^{\frac{-1}{2}}, M^{\frac{-5}{2}}N^{\frac{-1}{2}})$, we get: \begin{equation}\label{taylor_series_R_MN'} (\hat{\boldsymbol{\xi}} - \boldsymbol{\xi}^0)\textit{\textbf{D}}_1^{-1} = - \textit{\textbf{R}}_{MN}^{(1)'}(\boldsymbol{\xi}^0)\textit{\textbf{D}}_1[\textit{\textbf{D}}_1R_{MN}^{(1)''}(\bar{\boldsymbol{\xi}})\textit{\textbf{D}}_1]^{-1}. \end{equation} Consider the vector, $$\textit{\textbf{R}}_{MN}^{(1)'}(\boldsymbol{\xi}^0)\textit{\textbf{D}}_1 = \begin{bmatrix} \frac{1}{M^{3/2}N^{1/2}}\frac{\partial R_{MN}^{(1)}(\boldsymbol{\xi}^0)}{\partial \alpha} & \frac{1}{M^{3/2}N^{1/2}}\frac{\partial R_{MN}^{(1)}(\boldsymbol{\xi}^0)}{\partial \beta} \end{bmatrix}.$$ On computing the elements of this vector and using preliminary result \eqref{prelim_LSE_first_derivative_one_comp} (see Section \ref{sec:one_comp_1D}) and the definition of the function: $$R_{MN}^{(1)}(\alpha, \beta) = \sum\limits_{{n_0} = 1}^{N}R_M(\alpha, \beta, n_0)$$ we obtain the following result: \begin{equation}\label{R_MN'_distribution} -\textit{\textbf{R}}_{MN}^{(1)'}(\boldsymbol{\xi}^0)\textit{\textbf{D}}_1 \xrightarrow{d} \boldsymbol{\mathcal{N}}_2(\textbf{0}, 2\sigma^2\boldsymbol{\Sigma}) \textmd{ as } M \rightarrow \infty. \end{equation} Since $\hat{\boldsymbol{\xi}} \xrightarrow{a.s.} \boldsymbol{\xi}^0$, and as each element of the matrix $\textit{\textbf{R}}_{MN}^{(1)''}(\boldsymbol{\xi})$ is a continuous function of $\boldsymbol{\xi}$, we have \begin{equation*} \lim_{M \rightarrow \infty}\textit{\textbf{D}}_1\textbf{\textit{R}}_{MN}^{(1)''}(\bar{\boldsymbol{\xi}})\textit{\textbf{D}}_1 = \lim_{M \rightarrow \infty}\textit{\textbf{D}}_1\textit{\textbf{R}}_{MN}^{(1)''}(\boldsymbol{\xi}^0)\textit{\textbf{D}}_1. 
\end{equation*} Now using preliminary result \eqref{prelim_LSE_second_derivative_one_comp} (see Section \ref{sec:one_comp_1D}), it can be seen that: \begin{equation}\label{R_MN''_limit} \lim_{M \rightarrow \infty}\textit{\textbf{D}}_1\textbf{\textit{R}}_{MN}^{(1)''}(\boldsymbol{\xi}^0)\textit{\textbf{D}}_1 \rightarrow \boldsymbol{\Sigma}^{-1}. \end{equation} On combining \eqref{taylor_series_R_MN'}, \eqref{R_MN'_distribution} and \eqref{R_MN''_limit}, we have the desired result.
\section*{Appendix B}\label{appendix:B} To prove Theorem \ref{theorem:consistency_alphak_betak_multiple_comp_LSE}, we need the following lemmas: \begin{lemma}\label{lemma_consistency_LSEs_alpha1_beta1}
Consider the set $S_c^{1}$ = $\{(\alpha, \beta) : |\alpha - \alpha_1^0| \geqslant c \textmd{ or } |\beta - \beta_1^0| \geqslant c\}$. If for any c $>$ 0, \\ \begin{equation}\label{condition_for_consistency_LSE_alpha1_beta1} \liminf \inf\limits_{(\alpha, \beta) \in S_c^{1}} \frac{1}{M N}[R^{(1)}_{1,MN}(\alpha, \beta) - R^{(1)}_{1,MN}(\alpha_1^0, \beta_1^0)] > 0 \ a.s. \end{equation} then, $\hat{\alpha}_1$ $\rightarrow$ $\alpha_1^0$ and $\hat{\beta}_1$ $\rightarrow$ $\beta_1^0$ almost surely as $M \rightarrow \infty$. \end{lemma} \noindent \begin{proof} This proof follows along the same lines as the proof of Lemma \ref{lemma_consistency_LSEs_alpha_beta}. \\ \end{proof}
\begin{lemma}\label{lemma:convergence_LSE_alpha1_beta1} If assumptions \ref{assump:1}, \ref{assump:3} and \ref{assump:P4} are satisfied then: \begin{equation*}\begin{split} & M(\hat{\alpha}_1 - \alpha_1^0) \xrightarrow{a.s.} 0,\\ & M^2(\hat{\beta}_1 - \beta_1^0) \xrightarrow{a.s.} 0. \end{split}\end{equation*} \end{lemma} \noindent \begin{proof}
Let us denote $\textit{\textbf{R}}_{1, MN}^{(1)'}(\boldsymbol{\xi})$ as the $1 \times 2$ first derivative vector and $\textit{\textbf{R}}_{1, MN}^{(1)''}(\boldsymbol{\xi})$ as the $2 \times 2$ second derivative matrix of the function $R_{1, MN}^{(1)}(\boldsymbol{\xi})$. Using multivariate Taylor series expansion, we expand the function $\textit{\textbf{R}}_{1, MN}^{(1)'}(\hat{\boldsymbol{\xi}}_1)$ around the point $\boldsymbol{\xi}_1^0$ as follows: \begin{equation*} \textit{\textbf{R}}_{1, MN}^{(1)'}(\hat{\boldsymbol{\xi}}_1) - \textit{\textbf{R}}_{1, MN}^{(1)'}(\boldsymbol{\xi}_1^0) = (\hat{\boldsymbol{\xi}}_1 - \boldsymbol{\xi}_1^0)\textit{\textbf{R}}_{1, MN}^{(1)''}(\bar{\boldsymbol{\xi}}_1) \end{equation*} where $\bar{\boldsymbol{\xi}}_1$ is a point between $\hat{\boldsymbol{\xi}}_1$ and $\boldsymbol{\xi}_1^0$. Note that $\textit{\textbf{R}}_{1, MN}^{(1)'}(\hat{\boldsymbol{\xi}}_1) = 0$. Thus, we have: \begin{equation}\label{taylor_series_R_1_MN'_original} (\hat{\boldsymbol{\xi}}_1 - \boldsymbol{\xi}_1^0) = - \textit{\textbf{R}}_{1, MN}^{(1)'}(\boldsymbol{\xi}_1^0)[\textit{\textbf{R}}_{1, MN}^{(1)''}(\bar{\boldsymbol{\xi}}_1)]^{-1}. \end{equation} Multiplying both sides by $\frac{1}{\sqrt{MN}}\textit{\textbf{D}}_1^{-1}$, we get: \begin{equation}\label{taylor_series_R_1_MN'} (\hat{\boldsymbol{\xi}}_1 - \boldsymbol{\xi}_1^0)(\sqrt{MN}\textit{\textbf{D}}_1)^{-1} = - \frac{1}{\sqrt{MN}}\textit{\textbf{R}}_{1, MN}^{(1)'}(\boldsymbol{\xi}_1^0)\textit{\textbf{D}}_1[\textit{\textbf{D}}_1\textit{\textbf{R}}_{1, MN}^{(1)''}(\bar{\boldsymbol{\xi}}_1)\textit{\textbf{D}}_1]^{-1}. \end{equation} Since each of the elements of the matrix $\textit{\textbf{R}}_{1, MN}^{(1)''}(\boldsymbol{\xi})$ is a continuous function of $\boldsymbol{\xi},$ $$ \lim_{M \rightarrow \infty} \textit{\textbf{D}}_1\textit{\textbf{R}}_{1, MN}^{(1)''}(\bar{\boldsymbol{\xi}}_1)\textit{\textbf{D}}_1 = \lim_{M \rightarrow \infty} \textit{\textbf{D}}_1\textit{\textbf{R}}_{1, MN}^{(1)''}(\boldsymbol{\xi}_1^0)\textit{\textbf{D}}_1. $$ By definition, \begin{equation}\label{definition_R_1_MN} R_{1, MN}^{(1)}(\boldsymbol{\xi}) = \sum_{{n_0} = 1}^{N} R_{1,M}(\boldsymbol{\xi}, n_0). \end{equation} Using this and the preliminary result \eqref{prelim_LSE_first_derivative_multiple_comp_1} and \eqref{prelim_LSE_second_derivative_multiple_comp} (see Section \ref{sec:multiple_comp_1D}), it can be seen that: \begin{align} - \frac{1}{\sqrt{MN}}\textit{\textbf{R}}_{1, MN}^{(1)'}(\boldsymbol{\xi}_1^0)\textit{\textbf{D}}_1 \xrightarrow{a.s.} 0 \textmd{ as } M \rightarrow \infty. \label{limit_R_1_MN'}\\ \textit{\textbf{D}}_1\textit{\textbf{R}}_{1, MN}^{(1)''}(\boldsymbol{\xi}_1^0)\textit{\textbf{D}}_1 \xrightarrow{a.s.} \boldsymbol{\Sigma}_1^{-1} \textmd{ as } M \rightarrow \infty.\label{limit_R_1_MN''} \end{align} On combining \eqref{taylor_series_R_1_MN'}, \eqref{limit_R_1_MN'} and \eqref{limit_R_1_MN''}, we have the desired result.\\ \end{proof}
\noindent \textbf{Proof of Theorem \ref{theorem:consistency_alphak_betak_multiple_comp_LSE}:} Consider the left hand side of \eqref{condition_for_consistency_LSE_alpha1_beta1}, that is, \begin{flalign*} & \liminf \inf\limits_{(\alpha, \beta) \in S_c^{1}} \frac{1}{M N}\bigg[R^{(1)}_{1,MN}(\alpha, \beta) - R^{(1)}_{1,MN}(\alpha_1^0, \beta_1^0)\bigg]& \\ & = \liminf \inf\limits_{(\alpha, \beta) \in S_c^{1}} \frac{1}{M N}\bigg[\sum_{{n_0} = 1}^{N}Q_{1,M}(\hat{A}_1(n_0), \hat{B}_1(n_0), \alpha, \beta) - \sum_{{n_0} = 1}^{N} Q_{1,M}(\hat{A}_1(n_0), \hat{B}_1(n_0), \alpha_1^0, \beta_1^0)\bigg] & \\ & \geqslant \liminf \inf\limits_{(\alpha, \beta) \in S_c^{1}} \frac{1}{M N}\bigg[\sum_{{n_0} = 1}^{N}Q_{1,M}(\hat{A}_1(n_0), \hat{B}_1(n_0), \alpha, \beta) - \sum_{{n_0} = 1}^{N} Q_{1,M}(A_1^0(n_0) , B_1^0(n_0), \alpha_1^0, \beta_1^0)\bigg] & \\ & \geqslant \liminf \inf\limits_{ \boldsymbol{\theta}_1(n_0) \in M_c^{1,n_0}} \frac{1}{M N}\bigg[\sum_{{n_0} = 1}^{N}Q_{1,M}(A_1(n_0), B_1(n_0), \alpha, \beta) - \sum_{{n_0} = 1}^{N} Q_{1,M}(A_1^0(n_0) , B_1^0(n_0), \alpha_1^0, \beta_1^0)\bigg] & \\ &\geqslant \frac{1}{N}\sum_{n_0 = 1}^{N} \liminf \inf\limits_{\boldsymbol{\theta}_1(n_0) \in M_c^{1,n_0}} \frac{1}{M}\bigg[Q_{1,M}(\boldsymbol{\theta}_1(n_0)) - Q_{1,M}(\boldsymbol{\theta}_1^0(n_0))\bigg] > 0. & \end{flalign*} Here, $Q_{1,M}(A(n_0), B(n_0), \alpha, \beta) = \textit{\textbf{Y}}^{\top}_{n_0}(\textit{\textbf{I}} - \textbf{Z}_M(\alpha, \beta)(\textbf{Z}_M(\alpha, \beta)^{\top}\textbf{Z}_M(\alpha, \beta))^{-1}\textbf{Z}_M(\alpha, \beta)^{\top})\textit{\textbf{Y}}_{n_0}$ and $M_c^{1,n_0}$ can be obtained by replacing $\alpha^0$ and $\beta^0$ by $\alpha_1^0$ and $\beta_1^0$ respectively, in the set $M_c^{n_0}$ defined in Lemma \ref{lemma_consistency_LSEs_alpha_beta}. The last step follows from the proof of Theorem 2.4.1 of Lahiri \cite{2015}. Thus, using Lemma \ref{lemma_consistency_LSEs_alpha1_beta1}, $\hat{\alpha}_1 \xrightarrow{a.s.} \alpha_1^0$ and $\hat{\beta}_1 \xrightarrow{a.s.} \beta_1^0$ as $M \rightarrow\infty$. \\ Following similar arguments, one can obtain the consistency of $\hat{\gamma}_1$ and $\hat{\delta}_1$ as $N \rightarrow \infty$. Also, \begin{equation*}\begin{split} & N(\hat{\gamma}_1 - \gamma_1^0) \xrightarrow{a.s.} 0,\\ & N^2(\hat{\delta}_1 - \delta_1^0) \xrightarrow{a.s.} 0. \end{split}\end{equation*} The proof of the above equations follows along the same lines as the proof of Lemma \ref{lemma:convergence_LSE_alpha1_beta1}. From Theorem \ref{theorem:limit_A_p+1_B_p+1}, it follows that as min$\{M, N\} \rightarrow \infty$: \begin{equation*}\begin{split} & (\hat{A}_1 - A_1^0) \xrightarrow{a.s.} 0,\\ & (\hat{B}_1 - B_1^0) \xrightarrow{a.s.} 0. \end{split}\end{equation*} Thus, we have the following relationship between the first component of model \eqref{multiple_comp_model} and its estimate: \begin{equation}\begin{split}\label{relationship_first_comp_true_estimate} & \hat{A}_1 \cos(\hat{\alpha}_1 m + \hat{\beta}_1 m^2 + \hat{\gamma}_1 n + \hat{\delta}_1 n^2) + \hat{B}_1 \sin(\hat{\alpha}_1 m + \hat{\beta}_1 m^2 + \hat{\gamma}_1 n + \hat{\delta}_1 n^2) = \\ & \qquad \qquad \qquad \qquad \qquad \qquad A_1^0 \cos(\alpha_1^0 m + \beta_1^0 m^2 + \gamma_1^0 n + \delta_1^0 n^2) + B_1^0 \sin(\alpha_1^0 m + \beta_1^0 m^2 + \gamma_1^0 n + \delta_1^0 n^2) + o(1). \end{split}\end{equation} Here a function $g$ is $o(1)$, if $g \to 0$ almost surely as min$\{M, N\} \to \infty$.\\
\noindent Using \eqref{relationship_first_comp_true_estimate} and following the same arguments as above for the consistency of $\hat{\alpha}_1$, $\hat{\beta}_1$, $\hat{\gamma}_1$ and $\hat{\delta}_1$, we can show that, $\hat{\alpha}_2$, $\hat{\beta}_2$, $\hat{\gamma}_2$ and $\hat{\delta}_2$ are strongly consistent estimators of $\alpha_2^0$, $\beta_2^0$, $\gamma_2^0$ and $\delta_2^0$ respectively. And the same can be extended for $k \leqslant p.$ Hence, the result.\\ \qed
\noindent \textbf{Proof of Theorem \ref{theorem:limit_A_p+1_B_p+1}:} We will consider the following two cases that will cover both the scenarios$-$ underestimation as well as overestimation of the number of components: \begin{itemize} \item \underline{Case 1:} When $k = 1$: \begin{equation}\begin{split}\label{linear_estimates_1} \begin{bmatrix} \hat{A}_1 \\ \hat{B}_1 \end{bmatrix} = [\textit{\textbf{W}}(\hat{\alpha}_1, \hat{\beta}_1, \hat{\gamma}_1, \hat{\delta}_1)^{\top}\textit{\textbf{W}}(\hat{\alpha}_1, \hat{\beta}_1, \hat{\gamma}_1, \hat{\delta}_1)]^{-1}\textit{\textbf{W}}(\hat{\alpha}_1, \hat{\beta}_1, \hat{\gamma}_1, \hat{\delta}_1)^{\top} \textit{\textbf{Y}} \end{split}\end{equation} Using Lemma 1 of Lahiri et al. \cite{2015}, it can be seen that: \begin{equation*} \frac{1}{MN}[\textit{\textbf{W}}(\hat{\alpha}_1, \hat{\beta}_1, \hat{\gamma}_1, \hat{\delta}_1)^{\top}\textit{\textbf{W}}(\hat{\alpha}_1, \hat{\beta}_1, \hat{\gamma}_1, \hat{\delta}_1)] \rightarrow \frac{1}{2}\textit{\textbf{I}}_{2 \times 2} \textmd{ as } \min\{M, N\} \rightarrow \infty. \end{equation*} Substituting this result in \eqref{linear_estimates_1}, we get: \begin{equation*}\begin{split} \begin{bmatrix} \hat{A}_1 \\ \hat{B}_1 \end{bmatrix} & = \frac{2}{MN}\textit{\textbf{W}}(\hat{\alpha}_1, \hat{\beta}_1, \hat{\gamma}_1, \hat{\delta}_1)^{\top} \textit{\textbf{Y}} + o(1)\\ & = \begin{bmatrix} \frac{2}{MN}\sum\limits_{n=1}^{N}\sum\limits_{m=1}^{M}y(m,n)\cos(\hat{\alpha}_1 m + \hat{\beta}_1 m^2 + \hat{\gamma}_1 n + \hat{\delta}_1 n^2) + o(1) \\ \frac{2}{MN}\sum\limits_{n=1}^{N}\sum\limits_{m=1}^{M}y(m,n)\sin(\hat{\alpha}_1 m + \hat{\beta}_1 m^2 + \hat{\gamma}_1 n + \hat{\delta}_1 n^2) + o(1) \end{bmatrix}. \end{split}\end{equation*} Now consider the estimate $\hat{A}_1$. Using multivariate Taylor series, we expand the function $\cos(\hat{\alpha}_1 m + \hat{\beta}_1 m^2 + \hat{\gamma}_1 n + \hat{\delta}_1 n^2)$ around the point $(\alpha_1^0, \beta_1^0, \gamma_1^0, \delta_1^0)$ and we obtain: \begin{equation*}\begin{split} \hat{A}_1 & = \frac{2}{MN}y(m,n)\bigg\{\cos(\alpha_1^0 m + \beta_1^0 m^2 + \gamma_1^0 n + \delta_1^0 n^2) - m (\hat{\alpha}_1 - \alpha_1^0)\sin(\alpha_1^0 m + \beta_1^0 m^2 + \gamma_1^0 n + \delta_1^0 n^2) \\ & - m^2(\hat{\beta}_1 - \beta_1^0)\sin(\alpha_1^0 m + \beta_1^0 m^2 + \gamma_1^0 n + \delta_1^0 n^2) - n (\hat{\gamma}_1 - \gamma_1^0)\sin(\alpha_1^0 m + \beta_1^0 m^2 + \gamma_1^0 n + \delta_1^0 n^2) \\ & - n^2(\hat{\delta}_1 - \delta_1^0)\sin(\alpha_1^0 m + \beta_1^0 m^2 + \gamma_1^0 n + \delta_1^0 n^2) \bigg\} \\ & \rightarrow 2 \times \frac{A_1^0}{2} = A_1^0 \textmd{ almost surely as } \min\{M, N\} \rightarrow \infty, \end{split}\end{equation*} using \eqref{multiple_comp_model} and Lemma 1 and Lemma 2 of Lahiri et al. \cite{2015}. Similarly, it can be shown that $\hat{B}_1 \rightarrow B_1^0$ almost surely as $\min\{M, N\} \rightarrow \infty$. \\ \\ For the second component linear parameter estimates, consider: \begin{equation*}\begin{split} \begin{bmatrix} \hat{A}_2 \\ \hat{B}_2 \end{bmatrix} = \begin{bmatrix} \frac{2}{MN}\sum\limits_{n=1}^{N}\sum\limits_{m=1}^{M}y_1(m,n)\cos(\hat{\alpha}_2 m + \hat{\beta}_2 m^2 + \hat{\gamma}_2 n + \hat{\delta}_2 n^2) + o(1)\\ \frac{2}{MN}\sum\limits_{n=1}^{N}\sum\limits_{m=1}^{M}y_1(m,n)\sin(\hat{\alpha}_2 m + \hat{\beta}_2 m^2 + \hat{\gamma}_2 n + \hat{\delta}_2 n^2) + o(1) \end{bmatrix}. 
\end{split}\end{equation*} Here, $y_1(m,n)$ is the data obtained at the second stage after eliminating the effect of the first component from the original data as defined in \eqref{second_stage_data}. Using the relationship \eqref{relationship_first_comp_true_estimate} and following the same procedure as for the consistency of $\hat{A}_1$, it can be seen that: \begin{equation} \hat{A}_2 \xrightarrow{a.s.} A_2^0 \quad \textmd{and} \quad \hat{B}_2 \xrightarrow{a.s.} B_2^0 \textmd{ as } \min\{M, N\} \rightarrow \infty. \end{equation} It is evident that the result can be easily extended for any $2 \leqslant k \leqslant p$. \item \underline{Case 2:} When $k = p+1$: \begin{equation}\begin{split}\label{A_p+1_B_p+1_estimates} \begin{bmatrix} \hat{A}_{p+1} \\ \hat{B}_{p+1} \end{bmatrix} = \begin{bmatrix} \frac{2}{MN}\sum\limits_{n=1}^{N}\sum\limits_{m=1}^{M}y_p(m,n)\cos(\hat{\alpha}_{p+1} m + \hat{\beta}_{p+1} m^2 + \hat{\gamma}_{p+1} n + \hat{\delta}_{p+1} n^2) + o(1)\\ \frac{2}{MN}\sum\limits_{n=1}^{N}\sum\limits_{m=1}^{M}y_p(m,n)\sin(\hat{\alpha}_{p+1} m + \hat{\beta}_{p+1} m^2 + \hat{\gamma}_{p+1} n + \hat{\delta}_{p+1} n^2) + o(1) \end{bmatrix}, \end{split}\end{equation} where \begin{equation*}\begin{split} y_p(m,n) & = y(m,n) - \sum\limits_{j=1}^{p}\bigg\{\hat{A}_j \cos(\hat{\alpha}_j m + \hat{\beta}_j m^2 + \hat{\gamma}_j n + \hat{\delta}_j n^2) + \hat{B}_j \sin(\hat{\alpha}_j m + \hat{\beta}_j m^2 + \hat{\gamma}_j n + \hat{\delta}_j n^2) \bigg\} \\ & = X(m,n) + o(1), \textmd{ using \eqref{relationship_first_comp_true_estimate} and case 1 results.} \end{split}\end{equation*} From here, it is not difficult to see that \eqref{A_p+1_B_p+1_estimates} implies the following result: \begin{equation*} \hat{A}_{p+1} \xrightarrow{a.s.} 0 \quad \textmd{and} \quad \hat{B}_{p+1} \xrightarrow{a.s.} 0. \end{equation*} This is obtained using Lemma 2 of Lahiri et al. \cite{2015}. It is apparent that the result holds true for any $k > p.$ \end{itemize}\qed \\
\noindent \textbf{Proof of Theorem \ref{theorem:asymp_dist_multiple_comp_LSE_alphak_betak}:} Consider \eqref{taylor_series_R_1_MN'_original} and multiply both sides of the equation with the diagonal matrix, $\textit{\textbf{D}}_1^{-1}$: \begin{equation}\label{taylor_series_R_1_MN'_D)} (\hat{\boldsymbol{\xi}}_1 - \boldsymbol{\xi}_1^0)\textit{\textbf{D}}_1^{-1} = - \textit{\textbf{R}}_{1, MN}^{(1)'}(\boldsymbol{\xi}_1^0)\textit{\textbf{D}}_1[\textit{\textbf{D}}_1\textit{\textbf{R}}_{1, MN}^{(1)''}(\bar{\boldsymbol{\xi}}_1)\textit{\textbf{D}}_1]^{-1}. \end{equation} Computing the elements of the first derivative vector $- \textit{\textbf{R}}_{1, MN}^{(1)'}(\boldsymbol{\xi}_1^0)\textit{\textbf{D}}_1$ and using definition \eqref{definition_R_1_MN} and the preliminary result \eqref{prelim_LSE_first_derivative_multiple_comp_2} (Section \ref{sec:multiple_comp_1D}), we obtain the following result: \begin{equation} - \textit{\textbf{R}}_{1, MN}^{(1)'}(\boldsymbol{\xi}_1^0)\textit{\textbf{D}}_1 \xrightarrow{d} \boldsymbol{\mathcal{N}}_2(\textbf{0}, 2\sigma^2 \boldsymbol{\Sigma}_1^{-1}) \textmd{ as } M \rightarrow \infty. \label{distribution_R_1_MN'} \end{equation} On combining \eqref{taylor_series_R_1_MN'_D)}, \eqref{distribution_R_1_MN'} and \eqref{limit_R_1_MN''}, we have: \begin{equation*} (\hat{\boldsymbol{\xi}}_1 - \boldsymbol{\xi}_1^0)\textit{\textbf{D}}_1^{-1} \xrightarrow{d} \boldsymbol{\mathcal{N}}_2(\textbf{0}, 2\sigma^2 \boldsymbol{\Sigma}_1) \end{equation*} This result can be extended for $k = 2$ using the relation \eqref{relationship_first_comp_true_estimate} and following the same argument as above. Similarly, we can continue to extend the result for any $k \leqslant p$.
\end{document}
\begin{document}
\begin{center}
\title{Constants for Artin-like problems in Kummer and division fields}
\author{Amir Akbary} \address{Department of Mathematics and Computer Science, University of Lethbridge, Lethbridge, Alberta T1K 3M4, Canada} \email{[email protected]}
\author{Milad Fakhari} \address{Department of Mathematics and Computer Science, University of Lethbridge, Lethbridge, Alberta T1K 3M4, Canada} \email{[email protected]}
\subjclass[2020]{11N37, 11A07} \keywords{Generalized Artin problem, character sums, Titchmarsh divisor problems in the family of number fields}
\date{\today}
\begin{abstract} We apply the character sums method of Lenstra, Moree, and Stevenhagen to explicitly compute the constants in the Titchmarsh divisor problem for Kummer fields and division fields of Serre curves. We derive our results as special cases of a general result on the product expressions for the sums in the form $$\sum_{n=1}^{\infty}\frac{g(n)}{\#G(n)}$$ in which $g(n)$ is a multiplicative arithmetic function and $\{G(n)\}$ is a certain family of Galois groups. Our results extend the application of the character sums method to the evaluation of constants, such as the Titchmarsh divisor constants, that are not density constants. \end{abstract} \maketitle \end{center}
\section{Introduction}\label{sec:intro}
Throughout this paper, let $a$ be a non-zero integer that is not $\pm1$. Let $h$ be the largest integer for which $a$ is a perfect $h$-th power. In 1927, Emil Artin proposed a conjecture for the density of primes $q$ for which a given integer $a$ is a primitive root modulo $q$. More precisely, Artin conjectured that the density is
\begin{equation} \label{ArtinConstant}
A_a=\displaystyle\prod_{p \: \primen}\left(1-\frac{1}{\#G(p)}\right)=\displaystyle\prod_{\substack{p \: \primen \\ p\mid h}}\left(1-\frac{1}{p-1}\right)\displaystyle\prod_{\substack{p \: \primen \\ p\nmid h}}\left(1-\frac{1}{p(p-1)}\right), \end{equation} where $G(p)$ is the Galois group of $\mathbb{Q}(\zeta_p,a^{1/p})$ over $\mathbb{Q}$. Here $\zeta_p$ is a primitive $p$-th root of unity. Note that $G(p)$ depends on $a$, but we suppress the dependence on $a$ in our notation for simplicity. Also, observe that $A_a=0$ if $a$ is a perfect square as $G(2)=\{1\}$ for such $a$.
In 1957, computer calculations of the density for various values of $a$ by D. H. Lehmer and E. Lehmer revealed discrepancies with the conjectured value $A_a$. The reason for these inconsistencies is the dependence among the splitting conditions in the \emph{Kummer fields} $\mathbb{Q}(\zeta_p,a^{1/p})$.
To deal with these dependencies, Artin suggested an \textit{entanglement correction factor} that appears when $a_{sf}\equiv1~({\rm mod}~4)$, where $a_{sf}$, the square-free part of $a$, is the unique square-free integer such that $a=a_{sf}b^2$ for some integer $b$ (see preface to Artin's collected works \cite{artin}). More precisely,
the corrected conjectured density $\delta_a$ is \begin{equation} \label{artin-corrected}
\delta_a=
\begin{cases}
A_a &\quad\quad\text{if }a_{sf}\not\equiv1~({\rm mod}~{4}),\\
E_a\cdot A_a&\quad\quad\text{if }a_{sf}\equiv1~({\rm mod}~{4}),
\end{cases} \end{equation} where \begin{equation} \label{Hooley}
E_a=1-\mu(|a_{sf}|)\displaystyle\prod_{\substack{p \mid h\\ p\mid a_{sf}} }\frac{1}{p-2}\displaystyle\prod_{\substack{p \nmid h\\ p\mid a_{sf}} }\frac{1}{p^2-p-1}. \end{equation} Here, $\mu(.)$ is the M\"obius function. Hooley proved the modified conjecture \cite{Hooley} in 1967 under the assumption of the \emph{Generalized Riemann Hypothesis} (GRH) for the \emph{Kummer fields} $K_n=\mathbb{Q}(\zeta_n, a^{1/n})$ for square-free values of $n$. For any $n$, let $G(n)$ be the Galois group of $K_n/\mathbb{Q}$. Hooley proved, under the GRH, that the primitive root density is \begin{equation} \label{hooley-sum}
\sum_{n=1}^{\infty}\frac{\mu(n)}{\#G(n)}, \end{equation} and then showed that the above sum equals the corrected conjectured density $\delta_a$ in \eqref{artin-corrected}.
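For example, for $a=5$ we have $h=1$ and $a_{sf}=5\equiv1~({\rm mod}~4)$, so \eqref{Hooley} gives $E_5=1-\mu(5)\cdot\frac{1}{5^2-5-1}=1+\frac{1}{19}=\frac{20}{19}$, and the corrected density in \eqref{artin-corrected} is $\delta_5=\frac{20}{19}\,A_5$.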
In \cite{lenstra-stevenhagen-moree}, Lenstra, Moree, and Stevenhagen introduced their character sums method in finding product expressions for densities in Artin-like problems. Their method directly studies the primes that do not split completely in a Kummer family attached to $a$, without considering the summation expressions such as \eqref{hooley-sum} for the constants. In \cite[Theorem 4.2]{lenstra-stevenhagen-moree}, they express the correction factor \eqref{Hooley}, when $a$ is non-square and the discriminant $d$ of $K_2=\mathbb{Q}(a^{1/2})$ is odd, as \begin{equation} \label{artin-lenstra-product} E_a=1+\displaystyle\prod_{p \mid 2d }\frac{-1}{\#G(p)-1}.
\end{equation} The authors of \cite{lenstra-stevenhagen-moree} achieve this by constructing a quadratic character $\chi=\prod_{p} \chi_p$ of a certain profinite group $A=\prod_{p} A_p$ such that $\ker \chi=\Gal(K_\infty/\mathbb{Q})$, where $K_{\infty}=\bigcup_{n\geq1}K_n$ (see Section 2 for details). They derive \eqref{artin-corrected} as a special case of the following general theorem (\cite[Theorem 3.3]{lenstra-stevenhagen-moree}) in the context of profinite groups.
\begin{theorem}[{Lenstra-Moree-Stevenhagen}] \label{LMS} Let $A=\prod_{p} A_p$, with Haar measure $\nu=\prod_{p} \nu_p$, and the quadratic character $\chi=\prod_{p} \chi_p$ be as above. Then for $G=\ker \chi$ and $S=\prod_p S_p$, a product of $\nu_p$-measurable subsets $S_p\subset A_p$ with $\nu_p(S_p)>0$, we have $$\delta(S)=\frac{\nu(G\bigcap S)}{\nu(G)} =\left(1+\prod_p \frac{1}{\nu_p(S_p)} \int_{S_p} \chi_p d\nu_p \right)\cdot \frac{\nu(S)}{\nu(A)}.$$ \end{theorem}
The above theorem shows that if $\frac{\nu(G\bigcap S)}{\nu(G)}\neq \frac{\nu(S)}{\nu(A)}$, then the density of $S$ in $A$ can be corrected to give the density of the elements of $S$ in $G$. Moreover, the correction factor can be written explicitly in terms of the average of local characters $\chi_p$ over $S_p$.
Our goals in this paper are two-fold. In one direction, in Theorem \ref{main1} and Corollary \ref{cor-after-main1}, we will show how the character sums method of \cite{lenstra-stevenhagen-moree} can be adapted to deal directly with general sums similar to \eqref{hooley-sum}. This is an approach different from the one given in Theorem \ref{LMS}, in which a density given as a product, i.e., $\nu(S)/\nu(A)$, is corrected to another density, i.e., $\nu(G\cap S)/\nu(G)$, which is not explicitly given as an infinite sum. In another direction, we describe how the method of \cite{lenstra-stevenhagen-moree} can be adapted to derive product expressions for general sums similar to \eqref{hooley-sum} in which $\mu(n)$ is replaced by a multiplicative arithmetic function that may be supported on non-square-free integers (all the examples given in \cite{lenstra-stevenhagen-moree} deal with arithmetic functions supported on square-free integers). Such arithmetic sums appear naturally in many Artin-like problems. In addition, some of them, such as Titchmarsh divisor problems for families of number fields, are not problems about natural densities. In this direction, our Theorem \ref{product-kummer-family} provides a product formula for the constant appearing in the Generalized Artin Problem for multiplicative functions $f$ (see Problem \ref{generalized}) in full generality.
We continue with our general setup. Let $a=\pm a_0^e$, where
$e$ is the largest positive integer such that $\lvert a\rvert$ is a perfect $e$-th power,
and $\sgn(a_0)=\sgn(a)$. In our arguments, the integer $a$ is fixed, so we suppress the dependency on $a$ in most of our notations. We fix a solution of the equation $x^2-a_0=0$ and denote it by $a_0^{1/2}$. The quadratic field $K=\mathbb{Q}(a_0^{1/2})$, the so-called \emph{entanglement field}, plays an important role in our arguments. We denote the discriminant of $K$ by $D$. Observe that for an integer $a (\neq 0, \pm 1)$, we have three different cases based on the parity of the exponent $e$ and the sign of $a$: (i) \emph{Odd exponent case}, in which $e$ is odd; (ii) \emph{Square case}, in which $e$ is even and $a>0$; (iii) \emph{Twisted case}, in which $e$ is even and $a<0$. We refer to cases (i) and (ii) as \emph{untwisted} cases. Note that for odd exponent case $K=K_2$, for square case $K_2=\mathbb{Q}$ and $K\neq K_2$, and for twisted case $K_2=\mathbb{Q}(i)$ and $K\neq K_2$.
For a Kummer family $\{K_n\}$, the Galois elements in $G(n)=\Gal(K_n/\mathbb{Q})$ are determined by their actions on the $n$-th roots of $a$ and the $n$-th roots of unity. Thus, any Galois automorphism can be realized as a group automorphism of the multiplicative group $$R_n=\{\alpha\in\overline{\mathbb{Q}}^{\times};\;\alpha^n\in\langle a\rangle\},$$ the group of \emph{$n$-radicals} of $a$. This yields the injective homomorphisms \begin{equation} \label{embed}
r_n:G(n)\to A(n):=\Aut_{\mathbb{Q}^{\times}\cap R_n}(R_n), \end{equation} where $A(n)$ is the group of automorphisms of $R_n$ fixing elements of $\mathbb{Q}^\times$. For $n=\prod_{p^k\Vert n} p^k$ we have $A(n)\cong \prod_{p^k\Vert n} A(p^k)$. Let $\nu_p(e)$ denote the multiplicity of $p$ in $e$. Let $\varPhi(n)$ be the Euler totient function.
For odd $p$, $$\#A(p^k)= p^{k-\min\{k, \nu_p(e)\}}\varPhi(p^k)$$ and for $p=2$, $$\#A(2^k)=\begin{cases} 2^{k-\min\{k, s-1\}}\varPhi(2^k)&\text{if}~e~\text{is~odd~or}~a>0,\\ 2^{k-\min\{k, s-1\}}\varPhi(2^{k+1})&\text{if}~e~\text{is~even~and}~a<0, \end{cases}$$ where \begin{equation} \label{s-def} s=\begin{cases} \nu_2(e)+1&\text{if}~e~\text{is~odd~or}~ a>0,\\ \nu_2(e)+2& \text{if}~e~\text{is}~\text{even~and}~a<0. \end{cases} \end{equation} (see Proposition \ref{new-prop} for a proof.)
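For instance, for $a=8=2^3$ we have $e=3$ and $s=1$, so $\#A(2^k)=2^{k}\varPhi(2^k)$ for every $k$, while $\#A(3^k)=3^{k-\min\{k,1\}}\varPhi(3^k)$; in particular, $\#A(3)=2$, which matches $[\mathbb{Q}(\zeta_3,8^{1/3}):\mathbb{Q}]=[\mathbb{Q}(\zeta_3):\mathbb{Q}]=2$ since $8^{1/3}=2$.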
The following theorem, related to the family of Kummer fields $K_n$, gives us the product expressions of a large family of summations involving the orders of the Galois groups of $K_n/\mathbb{Q}$. \begin{theorem} \label{product-kummer-family} Let $a=\pm a_0^e$, where $a_0$ and $e$ are defined as above, $K=\mathbb{Q}(a_0^{1/2})$, and let $D$ be the discriminant of $K$.
Let $g$ be a multiplicative arithmetic function such that \begin{equation*}
\sum_{n=1}^{\infty}\frac{\lvert g(n)\rvert}{\#G(n)}<\infty, \end{equation*} where $\{G(n)\}$ is the family of Galois groups of the Kummer family $\{\mathbb{Q}(\zeta_n, a^{1/n}) \}$. Let
$A(n)$ be as defined above.
Then, \begin{equation} \label{product-kummer}
\sum_{n=1}^{\infty}\frac{g(n)}{\#G(n)}=\prod_p\sum_{k\geq 0}\frac{g(p^k)}{\#A(p^k)}+
\prod_{p}\sum_{k\geq\ell(p)}\frac{g(p^k)}{\#A(p^k)},
\end{equation} where
$$\ell(p)= \left\{\begin{array}{ll} 0&\text{if~} p~ \text{is~ odd~and~}p\nmid D,\\1&\text{if~} p~ \text{is~ odd~and~}p\mid D,\\
s&\text{if}~ p=2~ \text{and}~ D ~\text{is odd},\\ \max\{2,s\}&\text{if}~ p=2~ \text{and}~ 4\Vert D,\\
2&\text{if}~ p=2,~8\Vert D,~\text{and}~(\nu_2(e)=1~\text{and}~a<0),\\ \max\{3,s\}&\text{if}~ p=2,~ 8\Vert D,~\text{and}~(\nu_2(e)\neq 1~\text{or}~a>0).\end{array} \right. $$ \end{theorem}
\begin{Remark} (i) In the summation \eqref{hooley-sum} appearing in Artin's primitive root conjecture, we have $g(n)=\mu(n)$. In this case, formula \eqref{product-kummer} for $g(n)=\mu(n)$ provides a unified way of expressing the constant in Artin's primitive root conjecture. Note that if $e$ is even and $a>0$, i.e., $a$ is a perfect square, we have $$\sum_{k\geq 0}\frac{\mu(2^k)}{\#A(2^k)}= 0~~{\rm ~~ and}~\sum_{k\geq\ell(2)}\frac{\mu(2^k)}{\#A(2^k)}= 0.$$ (The first sum is zero since $\#A(2)=1$ and the second sum is zero since $\ell(2)\geq 2$.) Hence, \eqref{product-kummer} vanishes. Also, if $e$ is even and $a<0$, then $\ell(2)\geq 2$. Thus, \eqref{product-kummer} reduces to \eqref{ArtinConstant}. If $e$ is odd and $D$ is even, then again $\ell(2)\geq 2$ and \eqref{product-kummer} reduces to \eqref{ArtinConstant}. The only remaining case is when $e$ is odd and $D$ is odd (equivalently $e$ odd and $a_{sf}\equiv 1$ (mod $4$)), where \eqref{product-kummer} reduces to $E_a \cdot A_a$ given in \eqref{artin-corrected}.
{(ii) As a consequence of Theorem 1.1, we can derive necessary and sufficient conditions for the vanishing of \begin{equation} \label{constant} \sum_{n=1}^{\infty}\frac{g(n)}{\#G(n)}. \end{equation} More precisely, \eqref{constant} vanishes if and only if one of the following holds:
\noindent (a) For a prime $p\nmid 2D$, we have $\displaystyle{\sum_{k\geq 0}\frac{g(p^k)}{\#A(p^k)}=0}$.
\noindent (b) We have $$\prod_{p\mid 2D} \sum_{k\geq 0}\frac{g(p^k)}{\#A(p^k)}+ \prod_{p\mid 2D} \sum_{k\geq \ell(p)}\frac{g(p^k)}{\#A(p^k)}=0.$$
In the case of Artin's conjecture, (a) is never satisfied and (b) holds if and only if $a$ is a perfect square. }
{(iii) If $\#G(n)$ were a multiplicative function, then the left-hand side of \eqref{product-kummer} would equal the {product} $\prod_p\sum_{k\geq 0}\frac{g(p^k)}{\#G(p^k)}$. However, this is not the case for the Kummer family, and thus the sum in \eqref{product-kummer} may differ from the above naive product. If the sum and the product are not equal, a complex number $E_{a, g}$ is called a \emph{correction factor} if \begin{equation*}
\sum_{n=1}^{\infty}\frac{g(n)}{\#G(n)}=E_{a, g} \prod_p\sum_{k\geq 0}\frac{g(p^k)}{\#G(p^k)}. \end{equation*} The expression \eqref{product-kummer} provides precise information on the correction factor $E_{a, g}$. In fact, if \\$\sum_{k\geq0}\frac{g(p^k)}{\#G(p^k)}\neq0$ for all primes $p\mid 2D$, we have \begin{equation*}
\sum_{n=1}^{\infty}\frac{g(n)}{\#G(n)}=\left(\frac{\displaystyle{\prod_{p\mid 2D}\sum_{k\geq 0}\frac{g(p^k)}{\#A(p^k)}+ \prod_{p\mid 2D}\sum_{k\geq\ell(p)}\frac{g(p^k)}{\#A(p^k)}}}{\displaystyle{\prod_{p\mid 2D}\sum_{k\geq 0} \frac{g(p^k)}{\#G(p^k)}}}\right)\prod_p \sum_{k\geq 0}\frac{g(p^k)}{\#G(p^k)}. \end{equation*}
Also, if $\sum_{k\geq0}\frac{g(p^k)}{\#G(p^k)}=0$ for some prime $p\mid 2D$, and $\sum_{n\geq1}\frac{g(n)}{\#G(n)}\neq 0$, then the {product} $\prod_p\sum_{k\geq 0}\frac{g(p^k)}{\#G(p^k)}$ cannot be corrected.
It should be noted that for $K=\mathbb{Q}(\sqrt{\pm 2})$ the above correction factor is slightly different from the one given in Theorem \ref{LMS} for the density problems since in these cases $\#G(2^k)\neq \#A(2^k)$ for some positive integers $k$.}
(iv) For integer $a (\neq 0, \pm 1)$, let $n_a=\prod_{p\mid 2D} p^{\ell(p)}$, where $D$ and $\ell(p)$ are as in Theorem \ref{product-kummer-family}. Then, by taking $g(n)=1/n^z$, for ${\mathfrak{Re}}(z)>0$, in Theorem \ref{product-kummer-family} and comparing the coefficients of $1/n^z$ on both sides of \eqref{product-kummer}, we get $$ [\mathbb{Q}(\zeta_n, a^{1/n}):\mathbb{Q}]=\begin{cases} \#A(n)&\text{if } n_a\nmid n,\\ \frac{1}{2}\#A(n)&\text{if } n_a\mid n. \end{cases} $$
\end{Remark}
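As an illustration of Remark (iv), take $a=2$: here $a_0=2$, $e=1$, $K=\mathbb{Q}(\sqrt{2})$, and $D=8$, so $s=1$ and $\ell(2)=\max\{3,s\}=3$, while $2$ is the only prime dividing $2D$. Hence $n_2=8$, and since $\#A(n)=n\varPhi(n)$ in this case, we recover the classical fact that $[\mathbb{Q}(\zeta_n,2^{1/n}):\mathbb{Q}]$ equals $n\varPhi(n)$ when $8\nmid n$ and $\frac{1}{2}n\varPhi(n)$ when $8\mid n$.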
\par
The formula \eqref{product-kummer} can be used to study the constants in many Artin-like problems. We next apply this formula in the computation of the average value of a specific arithmetic function attached to a Kummer family. More precisely, for $\{K_n:=\mathbb{Q}(\zeta_n,a^{1/n})\}_{n\geq 1}$, we define \begin{equation*}
\tau_{a}(p)=\#\left\{n\in\mathbb{N} ;\; p\text{ splits completely in }K_n/\mathbb{Q}\right\}. \end{equation*} The Titchmarsh divisor problem attached to a Kummer family concerns the behaviour of $\sum_{p\leq x}\tau_a(p)$ as $x\to\infty$ (see \cite{AG} for the motivation behind this problem and its relation with the classical Titchmarsh divisor problem on the average value of the number of divisors of shifted primes). Under the assumption of the GRH for the Dedekind zeta function of $K_n/\mathbb{Q}$ for $n\geq1$, Felix and Murty \cite[Theorem 1.6]{felix-murty} proved that \begin{equation} \label{felix-murty-tdp} \sum_{p\leq x}\tau_a(p)\sim \left(\sum_{n\geq1}\frac{1}{[K_n:\mathbb{Q}]}\right)\cdot \li(x), \end{equation} as $x\to\infty$, where $\li(x)=\int_2^x \frac{1}{\log t}dt$. They do not provide an Euler product expression for the constant appearing in the main term of \eqref{felix-murty-tdp}. As a direct consequence of Theorem \ref{product-kummer-family} with $g(n)=1$, we readily find an explicit product formula for the constant appearing in \eqref{felix-murty-tdp}.
\begin{proposition} \label{TDPK-formula} Let $a=\pm a_{0}^e$ with $e=\prod_pp^{\nu_p(e)}$, and let $D$ be the discriminant of $K=\mathbb{Q}(a_0^{1/2})$. Then, if $e$ is odd or $a>0$, \begin{equation}
\label{kummer-tdp-lastproduct}
\begin{split}
\sum_{n\geq1}\frac{1}{[K_n:\mathbb{Q}]}=& \left(1+\frac{c_0}{3\cdot2^{\nu_2(e)}-2}\prod_{p\mid2D}\frac{p^{\nu_p(e)+2}+p^{\nu_p(e)+1}-p^2}{p^{\nu_p(e)+3}+p^{\nu_p(e)}-p^2}\right)\\
&~~~~~~~~~~~\times\prod_p\left(1+\frac{p^{\nu_p(e)+2}+p^{\nu_p(e)+1}-p^2}{p^{\nu_p(e)}(p-1)(p^2-1)}\right),
\end{split} \end{equation} where \begin{equation*}
c_0=
\begin{cases}
1/4 & \text{ if } 4\Vert D\;\text{and}\;\nu_2(e)=0,\text{ or if } 8\Vert D ~\text{and}~ \nu_2(e)=1, \\
1/16 & \text{ if } 8\Vert D\;\text{and}\; \nu_2(e)=0,\\
1 & \text{ otherwise.}
\end{cases} \end{equation*}
If $e$ is even and $a<0$ (i.e., the twisted case), then
\begin{equation}
\label{kummer-tdp-lastproduct-2}
\begin{split}
\sum_{n\geq1}\frac{1}{[K_n:\mathbb{Q}]}=& \left(1+\frac{c_0}{3\cdot2^{\nu_2(e)+2}-2}\prod_{\substack{p\mid D\\ p\neq 2}}\frac{p^{\nu_p(e)+2}+p^{\nu_p(e)+1}-p^2}{p^{\nu_p(e)+3}+p^{\nu_p(e)}-p^2}\right)\\
&~~~~~~~~~~~\times \left(1+\frac{2^{\nu_2(e)+2}-2^{\nu_2(e)}-1}{3\cdot 2^{\nu_2(e)}}\right)\prod_{\substack{p\\p\neq 2}}\left(1+\frac{p^{\nu_p(e)+2}+p^{\nu_p(e)+1}-p^2}{p^{\nu_p(e)}(p-1)(p^2-1)}\right),
\end{split} \end{equation} where
\begin{equation*}
c_0=
\begin{cases}
4 & \text{ if } 8\Vert D\;\text{and}\; \nu_2(e)=1,\\
1 & \text{ otherwise.}
\end{cases} \end{equation*}
\end{proposition}
Let $c_{a}$ denote the constant given in \eqref{kummer-tdp-lastproduct} and \eqref{kummer-tdp-lastproduct-2}.
It is evident that $c_a=q_a\cdot u$, where $q_a$ is a rational number depending on $a$, and $u$ is the \emph{universal constant} \begin{equation} \label{universal} \sum_{n=1}^{\infty} \frac{1}{n \Phi(n)}=\prod_{p}\left( 1+\frac{p}{(p-1)(p^2-1)} \right)= 2.203856\cdots, \end{equation} where $\Phi(n)$ is the Euler totient function. Observe that if $\nu_p(e)=0$ for all $p$, the expressions for naive products (products over all primes $p$) given in \eqref{kummer-tdp-lastproduct} and \eqref{kummer-tdp-lastproduct-2} reduce to \eqref{universal}. This is in accordance with \cite[Theorem 1.4]{AF} in which \eqref{universal} appears as the average constant while varying $a$. Thus, on average over $a$, a smooth version of \eqref{kummer-tdp-lastproduct} and \eqref{kummer-tdp-lastproduct-2}, i.e., the universal constant, appears.
The product expressions of Proposition \ref{TDPK-formula} provide a convenient way of computing the numerical value of $c_{a}$ for a given value of $a$. We record a sample of such values in the following table.
\par
\begin{tabular}{c|cccccccccccc} $a$&-13&-10&-8&-5&-3&-2&2&3&5&8&10&13\\ \hline $c_{a}$ &2.205&2.206&2.972&2.214&2.343&2.258&2.258&2.238&2.247&2.972&2.206&2.209 \end{tabular}
\par
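The following short Python sketch (not part of the proofs above) is one way to reproduce such numerical values for positive square-free $a$, where $e=1$ and $\nu_p(e)=0$ for every prime $p$; the function name, the use of \texttt{sympy}, and the truncation bound for the universal Euler product are our own illustrative choices.
\begin{verbatim}
# Hedged numerical sketch: evaluates the constant c_a of the proposition
# above for a POSITIVE SQUARE-FREE integer a > 1 (so a_0 = a and e = 1).
from sympy import primerange, factorint

def c_a_squarefree(a, prime_bound=10**5):
    assert a > 1 and all(m == 1 for m in factorint(a).values())
    D = a if a % 4 == 1 else 4 * a        # discriminant of Q(sqrt(a))
    # universal factor u = prod_p (1 + p/((p-1)(p^2-1))), truncated
    u = 1.0
    for p in primerange(2, prime_bound):
        u *= 1.0 + p / ((p - 1) * (p * p - 1))
    # c_0 in the case nu_2(e) = 0
    c0 = 1.0 / 16 if D % 8 == 0 else (1.0 / 4 if D % 4 == 0 else 1.0)
    # correction factor 1 + c_0 * prod_{p | 2D} p/(p^3 - p^2 + 1)
    corr = c0
    for p in {2} | set(factorint(D).keys()):
        corr *= p / (p ** 3 - p ** 2 + 1)
    return (1.0 + corr) * u

print([round(c_a_squarefree(a), 3) for a in (2, 3, 5, 10, 13)])
# compare with the corresponding entries of the table above
\end{verbatim}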
The classical Artin conjecture and the Titchmarsh divisor problem for a Kummer family are instances of a more general problem that we now describe. For an integer $a (\neq 0, \pm 1)$ and a prime $p\nmid a$, the \emph{residual index} of $a$ mod $p$, denoted by $i_a(p)$, is the index of the subgroup $\langle a \rangle$ in the multiplicative group $(\mathbb{Z}/p\mathbb{Z})^\times$. There is a vast amount of literature on the study of asymptotics of functions of $i_a(p)$ as $p$ varies over primes. In \cite[p. 377]{Papa}, the following problem is proposed.
\begin{problem}[Generalized Artin Problem] \label{generalized} For certain integers $a$ and arithmetic functions $f(n)$, establish the asymptotic formula $$\sum_{p\leq x} f(i_a(p)) \sim c_{f, a} \li(x),$$ as $x\rightarrow \infty$, where \begin{equation} \label{series} c_{f, a}:= \sum_{n\geq 1} \frac{g(n)}{[K_n:\mathbb{Q}]}. \end{equation} Here $g(n)=\sum_{d\mid n} \mu(d) f(n/d)$ is the M\"{o}bius inverse of $f(n)$, where $\mu(n)$ is the M\"{o}bius function. \end{problem} Note that taking $f(n)$ to be the characteristic function of the set $S=\{1\}$, so that $g(n)=\mu(n)$, in Problem \ref{generalized} gives the Artin conjecture, while $f(n)=d(n)$ (the divisor function) and $g(n)=1$ give the Titchmarsh divisor problem for a Kummer family; the latter holds since $\tau_a(p)=d(i_a(p))$ (see \cite[Lemma 2.1]{felix-murty} for details). Also, a conjecture of Laxton from 1969 \cite{L} predicts that for $f(n)=1/n$, the generalized Artin problem determines the density of primes in the sequence given by the recurrence $w_{n+2}=(a+1) w_{n+1}-aw_n$, where $a>1$ is a fixed integer. Another instance of Problem \ref{generalized} appears in a conjecture of Bach, Lukes, Shallit, and Williams \cite{BLSW} in which the constant $c_{f, 2}$ for $f(n)=\log{n}$ appears in the main term of the asymptotic formula for $\log{P_2(x)}$, where $P_2(x)$ is the smallest \emph{$x$-pseudopower} of the base $2$.
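For example, when $f=d$, the identity $d=\mathbf{1}*\mathbf{1}$ (the Dirichlet convolution of the constant function $\mathbf{1}$ with itself) gives $g(n)=\sum_{m\mid n}\mu(m)\,d(n/m)=1$ for every $n$, so the constant \eqref{series} is exactly the sum $\sum_{n\geq1}1/[K_n:\mathbb{Q}]$ appearing in \eqref{felix-murty-tdp}.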
A notable result on the Generalized Artin Problem, due to Felix and Murty \cite[Theorem 1.7]{felix-murty}, establishes, under the assumption of GRH, the asymptotic \begin{equation} \label{FM} \sum_{p\leq x} f(i_a(p))= c_{f, a} \li(x)+O_a\left( \frac{x}{(\log{x})^{2-\epsilon-\alpha}} \right), \end{equation} for $\epsilon>0$. Here $f(n)$ is an arithmetic function whose M\"obius inverse $g(n)$ satisfies
$$|g(n)| \ll d_k(n)^r (\log{n})^{\alpha}, $$ with $k, r \in \mathbb{N}$ and $0\leq \alpha <1$ all fixed, where $d_k(n)$ denotes the number of representations of $n$ as a product of $k$ positive integers. Observe that the identity \eqref{product-kummer} in Theorem \ref{product-kummer-family} conveniently furnishes a product formula in full generality for the constant $c_{f, a}$ in \eqref{FM} when $f$ (equivalently $g$) is a multiplicative function. This product formula is valuable for studying the vanishing criteria for $c_{f, a}$ and its numerical evaluation for different $f$.
We now comment on the proof of Theorem \ref{product-kummer-family}. Observe that corresponding to the Kummer family $\{K_n\}$, we can consider the inverse systems $((G(n))_{n\in \mathbb{N}}, (i_{n_1,n_2})_{n_1\mid n_2})$ and $((A(n))_{n\in \mathbb{N}},(j_{n_1,n_2})_{n_1\mid n_2})$ ordered by divisibility relation on $\mathbb{N}$, where $G(n)$ and $A(n)$ are as defined before and $i_{n_1,n_2}$ and $j_{n_1, n_2}$, for $n_1\mid n_2$, are restriction maps. By taking the inverse limits on both sides of \eqref{embed}
we have the injective continuous homomorphism $$r:G=\varprojlim G(n)\to A=\varprojlim A(n)$$ of profinite groups, where $G=\Gal(K_{\infty}/\mathbb{Q})$ and $A=\Aut_{\mathbb{Q}^{\times}\cap R_{\infty}}(R_{\infty})$ with $K_{\infty}=\bigcup_{n\geq1}K_n$ and $R_{\infty}=\bigcup_{n\geq 1}R_n$. As profinite groups, both $G$ and $A$ are endowed with compact topologies and thus can be equipped by Haar measures. We will show that Theorem \ref{product-kummer-family} is a corollary of the following theorem attached to a general setting of profinite groups $G$ and $A$. \begin{theorem} \label{main1} Let $((G(n))_{n\in \mathbb{N}}, (i_{n_1,n_2})_{n_1\mid n_2})$ and $((A(n))_{n\in \mathbb{N}},(j_{n_1,n_2})_{n_1\mid n_2})$ be inverse systems of finite groups ordered by divisibility relation on $\mathbb{N}$. Moreover, for $n\geq 1$, assume that there are injective maps $r_n:G(n)\to A(n)$ compatible by $i_{n_1,n_2}$ and $j_{n_1,n_2}$, i.e., for $n_1\mid n_2$, the diagram \begin{equation*}
\begin{tikzcd}
G(n_2)\arrow[r,"r_{n_2}"]\arrow[d,"i_{n_1,n_2}"] & A(n_2)\arrow[d,"j_{n_1,n_2}"]\\
G(n_1)\arrow[r,"r_{n_1}"] & A(n_1)
\end{tikzcd} \end{equation*} commutes. Let $r: G=\varprojlim G(n)\to A=\varprojlim A(n)$ be the resulting injective continuous homomorphism of profinite groups.
Let $\mu_m$ be the multiplicative group of $m$-th roots of unity for a fixed $m$. Suppose there exists an exact sequence \begin{equation} \label{ev1}
1\to G\stackrel{r}{\longrightarrow} A\stackrel{\chi}{\longrightarrow} \mu_m\to 1, \end{equation} where $\chi$ is a continuous homomorphism.
Let $g$ be an arithmetic function such that \begin{equation*}
\sum_{n\geq1}\frac{\lvert g(n)\rvert}{\#G(n)}<\infty. \end{equation*} Consider the natural projections
$\pi_{A,n}:A\to A(n)$ and let
\begin{equation} \label{g-tilde}
\Tilde{g}=\sum_{n\geq1}g(n)1_{\ker\pi_{A,n}}, \end{equation} where $1_{\ker\pi_{A, n}}$ denotes the characteristic function of $\ker\pi_{A, n}$.
Let $\nu_A$ be the normalized Haar measure attached to $A$. Then, $\Tilde{g}\in L^1(\nu_A)$ (the space of $\nu_A$-integrable functions) and
\begin{equation*}
\sum_{n\geq1}\frac{g(n)}{\#G(n)}=\sum_{i=0}^{m-1}\int_A \Tilde{g}\chi^id\nu_A. \end{equation*} \end{theorem}
Observe that Theorem \ref{main1} is quite general and can be applied in the evaluation of sums of the form $\sum_{n\geq 1} g(n)/\#G(n)$ for any family $\{G(n)\}$ of finite groups satisfying the assumptions of the theorem.
The Kummer family is an instance of such families. Another example is the family of division fields attached to a \emph{Serre elliptic curve} $E$ (see Section \ref{Serre section} for the definition). Following \cite[Section 8]{lenstra-stevenhagen-moree}, in Section \ref{Serre section}, we show that the family of division fields $\{\mathbb{Q}(E[n])\}$ attached to a Serre curve $E$ satisfies the conditions of Theorem \ref{main1} and so as a consequence of it the following holds. \begin{proposition} \label{prop-Serre} Let $\mathbb{Q}(E[n])$ denote the $n$-division field of a Serre elliptic curve defined over $\mathbb{Q}$ by a Weierstrass equation $y^2=x^3+ax+b$. Let $\Delta$ be the discriminant of the cubic equation $x^3+ax+b=0$ and let $D$ be the discriminant of the quadratic field $K=\mathbb{Q}({\Delta}^{1/2})$. Let $g(n)$ be a multiplicative arithmetic function such that \begin{equation*}
\sum_{n=1}^{\infty}\frac{\lvert g(n)\rvert}{[\mathbb{Q}(E[n]) : \mathbb{Q}]}<\infty. \end{equation*}
Then, \begin{equation} \label{serre-sum}
\sum_{n=1}^{\infty}\frac{g(n)}{[\mathbb{Q}(E[n]) : \mathbb{Q}]} =\prod_p\sum_{k\geq 0}\frac{g(p^k)}{p^{4k-3}(p^2-1)(p-1)}+
\prod_{p}\sum_{k\geq\ell(p)}\frac{g(p^k)}{p^{4k-3}(p^2-1)(p-1)},
\end{equation}
where
$$\ell(p)=\left\{\begin{array}{ll} 0&\text{if~} p~ \text{is~ odd~and~}p\nmid D,\\ 1&\text{if~} p~ \text{is~ odd~and~}p\mid D,\\ 1&\text{if}~ p=2~ \text{and}~ D ~\text{is odd},\\ 2&\text{if}~ p=2~ \text{and}~ 4\Vert D,\\ 3&\text{if}~ p=2~ \text{and}~ 8\Vert D.\\ \end{array} \right. $$
\end{proposition}
Observe that, for $g(n)=1$, the above proposition gives the product expression for the constant in the Titchmarsh divisor problem for the family of division fields attached to a Serre curve $E$. We note that the product expressions for this constant and for two other constants corresponding to different $g(n)$'s for such families are given in \cite[Theorem 5]{cojocaru:tdp} by determining the value of $[\mathbb{Q}(E[n]):\mathbb{Q}]$ for a Serre curve $E$ (see \cite[Proposition 17 (iv)]{cojocaru:tdp}) and employing \cite[Lemma 3.12]{kowalski}. It is worth mentioning that a similar approach to finding the expression \eqref{kummer-tdp-lastproduct}, using the exact formulas for $[K_n: \mathbb{Q}]$ as given in \cite[Proposition 4.1]{wagstaff}, would result in tedious case-by-case computations that do not appear to be straightforward. Especially when $a<0$, this approach seems to be intractable. The method of \cite{lenstra-stevenhagen-moree} as described above provides an elegant approach to establishing identities similar to \eqref{kummer-tdp-lastproduct} and \eqref{kummer-tdp-lastproduct-2}.
The structure of the paper is as follows. We describe our adaptation of the character sums method of \cite{lenstra-stevenhagen-moree} in Sections 2 and 3 and prove Proposition \ref{character} that plays a crucial role in our explicit computation of the constants in the Kummer case. Section 4 is dedicated to a proof of Theorem \ref{main1}. The proofs of Theorem \ref{product-kummer-family} and its consequence, Proposition \ref{TDPK-formula}, are given respectively in Sections 5 and 6. Finally, a brief discussion on Serre curves and the proof of Proposition \ref{prop-Serre} are provided in Section 7.
\begin{notation}
The following notations are used throughout the paper. The letter $p$ denotes a prime number, $k$ denotes a non-negative integer, the letter $n$ denotes a positive integer, the multiplicity of the prime $p$ in the prime factorization of $n$ is denoted by $\nu_p(n)$, the cardinality of a finite set $S$ is denoted by $\#S$, $1_S$ is the characteristic function of a set $S$, $\overline{\mathbb{Q}}$ is an algebraic closure of $\mathbb{Q}$, $\zeta_n$ denotes a primitive root of unity, and $\Phi(n)$ is the Euler totient function. In Sections \ref{section:character}, \ref{Sec-3}, \ref{S1}, and \ref{Section 5}, $a=\pm a_0^e$ is a non-zero integer other than $\pm 1$, the collection $\{K_n=\mathbb{Q}(\zeta_n, a^{1/n}) \}_{n\in \mathbb{N}}$ is the family of Kummer fields and $K=\mathbb{Q}(a_0^{1/2})$ is the entanglement field attached to this family, $D$ is the discriminant of $K$, the Galois group of $K_n$ over $\mathbb{Q}$ is denoted by $G(n)$, the inverse limit of the directed family $\{G(n)\}$ is denoted by $G$, $\mu_\infty$ denotes the group of all roots of unity, and $\mathbb{Q}_{ab}=\mathbb{Q}(\mu_\infty)$ is the maximal abelian extension of $\mathbb{Q}$. The group of $n$-radicals of the integer $a=\pm a_0^e$ is denoted by $R_n$ and $R_{\infty}=\bigcup_{n\geq 1}R_n$. The group of automorphisms of $R_n$ (respectively $R_\infty$) that fix $\mathbb{Q}^\times$ is denoted by $A(n)$ (respectively $A$). The inverse limit of the system $\{A(p^k)\}_{k\geq 1}$ is denoted by $A_p$. The map $\pi_{A, n}$ (respectively $\pi_{G, n}$ and $\varphi_{p^k}$) is the projection map from $A$ (respectively $G$ and $A_p$) to $A(n)$ (respectively $G(n)$ and $A(p^k)$). The profinite completion of $\mathbb{Z}$ is denoted by $\widehat{\mathbb{Z}}$ and $\mathbb{Z}_p$ is the ring of $p$-adic integers. The normalized Haar measures on $G$, $A$, and $A_p$ are denoted respectively by $\nu_G$, $\nu_A$, and $\nu_{A_p}$. The space of $\nu$-integrable functions is denoted by $L^1(\nu)$. In Section \ref{Section 3}, $G(n)$, $A(n)$, $A(p^k)$, $G$, $A$, $A_p$, $\pi_{A, n}$, $\varphi_{p^k}$, $\nu_G$, $\nu_A$, and $\nu_{A_p}$ are used in the general setting of profinite groups. Finally, in Section \ref{Serre section}, $E[n]$ denotes the group of $n$-division points of an elliptic curve $E$ defined over $\mathbb{Q}$ given by a Weierstrass equation with discriminant $\Delta$, and $K=\mathbb{Q}(\Delta^{1/2})$ of discriminant $D$ is the entanglement field attached to the family of division fields of a Serre elliptic curve. \end{notation}
\section{The associated character to a Kummer family}\label{section:character}
Recall that for an integer $a (\neq0,\pm1)$, we set $a=\pm a_0^e$, where $\sgn(a)=\sgn(a_0)$ and $e$ is the largest positive integer for which $\lvert a\rvert$ is a perfect $e$-th power. We fix a solution of the equation $x^2-a_0=0$, denote it by $a_0^{1/2}$, and set $K=\mathbb{Q}(a_0^{1/2})$.
We next
define a quadratic character which describes the entanglements inside a given Kummer family $\{K_n\}$. Let $\mu_{\infty}=\bigcup_{n\geq1}\mu_n(\overline{\mathbb{Q}})$ be the group of all roots of unity in $\overline{\mathbb{Q}}$. Then, $\mu_{\infty}$ is contained in $K_{\infty}=\bigcup_{n\geq1}K_n$. In addition, the infinite extension $K_{\infty}/\mathbb{Q}$ is the compositum of $\mathbb{Q}(a_0^{\mathbb{Q}})$ and $\mathbb{Q}_{ab}$ (the maximal abelian extension of $\mathbb{Q}$), where \begin{equation} \label{intersection} \mathbb{Q}(a_0^{\mathbb{Q}})\cap\mathbb{Q}_{ab}=\mathbb{Q}(a_0^{1/2}) \end{equation}
(see \cite{lenstra-stevenhagen-moree}*{Lemma 2.5}). Note that $a_0^{\mathbb{Q}}=\{a_0^b;\;b\in\mathbb{Q}\}$. In \cite{lenstra-stevenhagen-moree}*{Page 494} it is proved that \begin{equation} \label{semidirect}
A=\Aut_{\mathbb{Q}^{\times}\cap R_{\infty}}(R_{\infty})\cong\Hom(a_0^{\mathbb{Q}}/a_0^{\mathbb{Z}},\mu_{\infty})\rtimes \Aut(\mu_{\infty}), \end{equation} where
$a_0^{\mathbb{Z}}=\{a_0^b;\;b\in\mathbb{Z}\}$,
and for $(\phi_1, \sigma_1), (\phi_2, \sigma_2)\in A$ we have $$(\phi_1, \sigma_1)(\phi_2, \sigma_2)= (\phi_1\cdot (\sigma_1 \circ \phi_2), \sigma_1\circ \sigma_2).$$
Note that $G=\Gal(K_{\infty}/\mathbb{Q})$ can be embedded in $A$. Thus, if $(\phi,\sigma)\in \Hom(a_0^{\mathbb{Q}}/a_0^{\mathbb{Z}},\mu_{\infty})\rtimes \Aut(\mu_{\infty})\cong A$ is an element of $G$, then, by \eqref{intersection}, the action of $\phi$ and $\sigma$ on $\mathbb{Q}(a_0^{1/2})$ must be the same. One can show that $(\phi,\sigma)\in A$ is in $G$ if and only if $\phi$ and $\sigma$ act in a compatible way on $a_0^{1/2}$, i.e., \begin{equation} \label{compatibility}
\phi(a_0^{1/2})=\frac{\sigma(a_0^{1/2})}{a_0^{1/2}}\in \mu_2 \end{equation} (see \cite{lenstra-stevenhagen-moree}*{Page 494}). (For simplicity, we used $\phi(a_0^{1/2})$ instead of $\phi(a_0^{1/2}a_0^{\mathbb{Z}})$.) We elaborate on \eqref{compatibility} by considering two distinct quadratic characters $\psi_K$ and $\chi_D$ on $A$ which are related to the entanglement field $K=\mathbb{Q}(a_0^{1/2})$ of discriminant $D$. The quadratic character $\psi_K:A\to\mu_2$ corresponds to the action of $\phi$-component of $(\phi,\sigma)\in A$ on $a_0^{1/2}$, i.e., \begin{equation*}
\psi_K(\phi,\sigma)=\phi(a_0^{1/2})\in\mu_2. \end{equation*} This is a \emph{non-cyclotomic character}, i.e., $\psi_K$ does not factor via the natural map $A\to\Aut(\mu_{\infty})$ (see \cite{lenstra-stevenhagen-moree}*{Page 495}). The other quadratic character, \begin{equation*}
\chi_D:A\to\Aut(\mu_{\infty})\cong\widehat{\mathbb{Z}}^{\times}\to\mu_2, \end{equation*} corresponds to the action of the cyclotomic component $\Aut(\mu_{\infty})$ of $A$ on $K=\mathbb{Q}(a_0^{1/2})$ of discriminant $D$, i.e., \begin{equation*}
\chi_D(\phi,\sigma)=\frac{\sigma(a_0^{1/2})}{a_0^{1/2}}\in \mu_2. \end{equation*} Hence, by \cite{cox}*{Proposition 5.16 and Corollary 5.17}, $\chi_D$ is the lift of the Kronecker symbol $\left(\frac{ D}{.}\right)$ to $\Aut(\mu_{\infty})\cong\widehat{\mathbb{Z}}^{\times}$.
The characters $\chi_D$ and $\psi_K$ are not the same on $A$ since one is cyclotomic, and the other is not. Moreover, by \eqref{compatibility}, an element $x\in A$ is in $G$ if and only if $\psi_K(x)=\chi_D(x)$. Thus, the image of the homomorphism $G\to A$ is the kernel of the non-trivial quadratic character $\chi=\psi_K\cdot\chi_D: A\to\mu_2$. In other words, the sequence \begin{equation*}
1\longrightarrow G\stackrel{r}{\longrightarrow} A\xrightarrow{\chi=\psi_K\cdot\chi_D}\mu_2\longrightarrow 1 \end{equation*} is an exact sequence (see \cite{lenstra-stevenhagen-moree}*{Theorem 2.9} for more details).
Let $A(p^k)=\Aut_{\mathbb{Q}^\times \cap R_{p^k} }(R_{p^k})$ and $A_p=\varprojlim A(p^k)$. Since an element of $A$ can be determined by its action on prime power radicals, then $A\cong \prod_{p} A_p$ (see \cite[formula (2.10), p. 495]{lenstra-stevenhagen-moree} and \cite[p. 20]{MS}). The character $\chi_D$ is the lift of the Kronecker symbol $\left(\frac{D}{.}\right)$ to $A$ via the maps \begin{equation*}
A\cong\left(\prod_pA_p\right)\stackrel{\proj}{\longrightarrow}\Aut(\mu_{\infty})\left(\cong\prod_p\mathbb{Z}^{\times}_p\right)\stackrel{\text{proj}}{\longrightarrow}(\mathbb{Z}/|D|\mathbb{Z})^{\times}, \end{equation*} where the first projection comes via \eqref{semidirect}. Since $D$ is a fundamental discriminant, $\chi_D=\prod_{p\mid D}\chi_{D,p}$, where $\chi_{D,p}$ is the lift of the Legendre symbol modulo $p$ to $A_p$ for odd $p$, and $\chi_{D,2}$ is the lift of one of the Dirichlet characters mod $8$ to $A_2$ (see \cite{davenport}*{Chapter 5}). More precisely, if $D$ is odd, then $\chi_{D,2}=1$; if $4~\Vert~D$, then $\chi_{D,2}$ is the lift to $A_2$ of $\left(\frac{-4}{.}\right)$, the unique Dirichlet character mod $8$ of conductor $4$; and if $8~\Vert~D$, then $\chi_{D,2}$ is the lift to $A_2$ of $\left(\frac{\pm8}{.}\right)$, one of the two Dirichlet characters mod $8$ of conductor $8$. For the case $8~\Vert~D$, write $|D|=2^3\prod_{i=1}^{r}p_i$ with distinct odd primes $p_i$; if $D>0$ and the number of $1\leq i\leq r$ with $p_i\equiv3\;(\modd\;4)$ is even, or $D<0$ and the number of $1\leq i\leq r$ with $p_i\equiv3\;(\modd\;4)$ is odd, then $\chi_{D,2}$ is the lift to $A_2$ of $\left(\frac{8}{.}\right)$. Otherwise, $\chi_{D,2}$ is the lift to $A_2$ of $\left(\frac{-8}{.}\right)$.
Next, we show that $\chi$ can be written as a product of local characters $\chi_p: A_p\to\mu_2$. Note that $\psi_K$ factors via $A_2$. Let $\psi_{K,2}:A_2\to\mu_2$ be the corresponding homomorphism obtained from factorization of $\psi_K$ via $A_2$. For odd primes $p\nmid D$, set $\chi_p=1$. Let $\chi_p=\chi_{D,p}$ for odd primes $p\mid D$ and for prime $2$ let $\chi_2=\chi_{D,2}\cdot\psi_{K,2}$. Therefore, by the above construction of $\chi$, we have the decomposition $\chi=\prod_p\chi_p$.
\section{The local characters $\chi_p$}
\label{Sec-3}
In this section, we find the smallest value of $k$, as a function of $p$ and $a$, for which the local character $\chi_p$ factors via $A(p^k)$. In other words, we will determine the value of $k$ for which $\chi_p$ is trivial on $\ker\varphi_{p^{k}}$ and nontrivial on $\ker\varphi_{p^{k-1}}$, where $\varphi_{p^i}$ is the projection map from $A_p$ to $A(p^i)$. These values are recorded in the statements of Theorem \ref{product-kummer-family} and Proposition \ref{character}. We start
by giving a concrete description of the groups $A(2^k)$, for positive integers $k$, as subgroups of the group of matrices $\begin{psmallmatrix}1 & 0\\b & d\end{psmallmatrix}$, where $b\in \mathbb{Z}/2^k\mathbb{Z}$ and $d\in \left(\mathbb{Z}/2^k\mathbb{Z}\right)^\times$. We achieve this by choosing a certain compatible system of generators for the groups $R_{2^k}$, where $k\geq 1$.
\begin{proposition} \label{new-prop} (i) Let $\varPhi(n)$ be the Euler totient function and $s$ be as defined in \eqref{s-def}. For odd $p$, $$\#A(p^k)= p^{k-\min\{k, \nu_p(e)\}}\varPhi(p^k)$$ and for $p=2$, $$\#A(2^k)=\begin{cases} 2^{k-\min\{k, s-1\}}\varPhi(2^k)&\text{if}~e~\text{is~odd~or}~a>0,\\ 2^{k-\min\{k, s-1\}}\varPhi(2^{k+1})&\text{if}~e~\text{is~even~and}~a<0. \end{cases}$$ (ii) If $e$ is even and $a<0$, we have \begin{equation*} A(2^k)\cong \left\{\left(\begin{array}{cc} 1&0\\b&d \end{array} \right);~ b\in \mathbb{Z}/2^k\mathbb{Z}, d\in \left(\mathbb{Z}/2^k\mathbb{Z}\right)^\times, {\rm and}~2b+1\equiv d~({\rm mod}~2^{\min\{k,s-1\}}) \right\}. \end{equation*} (iii) If $e$ is odd or $a>0$, we have \begin{equation*} A(2^k)\cong \left\{\left(\begin{array}{cc} 1&0\\b&d \end{array} \right);~ b\in \mathbb{Z}/2^k\mathbb{Z}, d\in \left(\mathbb{Z}/2^k\mathbb{Z}\right)^\times, {\rm and}~b+1\equiv d~({\rm mod}~2^{\min\{k,s-1\}}) \right\}. \end{equation*} \end{proposition} \begin{proof} (i) For odd primes $p$, it is known that $A(p^k)\cong G(p^k)={\rm Gal}(K_{p^k}/\mathbb{Q})$ (see \cite[Remarks 2.12. (b), p. 496]{lenstra-stevenhagen-moree}). Also if $K\neq \mathbb{Q}(\sqrt{\pm 2})$, then $A(2^k)\cong G(2^k)$. Since the size of $A(2^k)$ is independent of $K$, we get the formulas for the size of $A(p^k)$ from the ones for $G(p^k)$ as given in \cite[Proposition 4.1]{wagstaff}.
(ii) Let $a=-a_0^e$ as before and $e=2^{\nu_2(e)}e_1$, where $\nu_2(e)\geq 1$ and $e_1$ is odd. We denote a primitive $m$-th root of unity by $\zeta_m$. Recall that $R_{2^k}$ is the group of $2^k$- radicals. We have $$R_{2^k} =\langle\zeta_{2^{k+1}} \left(a_0^{e_1}\right)^{1/2^{k-\nu_2(e)}}, \zeta_{2^k} \rangle=\langle \beta, \zeta_{2^k} \rangle.$$ An automorphism $\tau\in A(2^k)$ is determined by its action on these generators of $R_{2^k}$, i.e., $\beta$ and $\zeta_{2^k}$. We have $\tau(\beta)=\beta \zeta_{2^{k}}^{b(\tau)}$ and $\tau(\zeta_{2^k})=\zeta_{2^k}^{d(\tau)}$, where $b(\tau)\in \mathbb{Z}/2^k\mathbb{Z}$ and $d(\tau)\in \left(\mathbb{Z}/2^k\mathbb{Z}\right)^{\times}$. We consider two cases.
Case 1: $k\geq s-1=\nu_2(e)+1$. We have $$a_0^{e_1} \tau(\zeta_{2^{k+1}}^{2^{k-\nu_2(e)}})=\tau(\beta^{2^{k-\nu_2(e)}})=(\beta \zeta_{2^{k}}^{b(\tau)})^{2^{k-\nu_2(e)}}=a_0^{e_1} \zeta_{2^{k+1}}^{2^{k-\nu_2(e)}} \zeta_{2^{k}}^{b(\tau)2^{k-\nu_2(e)}}. $$ From here we get $$\zeta_{2^k}^{d(\tau)2^{k-\nu_2(e)-1}}=\zeta_{2^k}^{2^{k-\nu_2(e)-1}+b(\tau) 2^{k-\nu_2(e)}}.$$ This implies $2b(\tau)+1\equiv d(\tau)~({\rm mod}~2^{s-1})$.
Case 2: $k< s-1=\nu_2(e)+1$. We have $$(a_0^{e_1})^{2/2^{k-\nu_2(e)}} \tau(\zeta_{2^{k+1}}^{2})=\tau(\beta^{2})=(\beta \zeta_{2^{k}}^{b(\tau)})^{2}=(a_0^{e_1})^{2/2^{k-\nu_2(e)}} \zeta_{2^{k+1}}^{2} \zeta_{2^{k}}^{2b(\tau)}. $$ From here we get $$\zeta_{2^k}^{d(\tau)}=\zeta_{2^k}^{1+2b(\tau) }.$$ This implies $2b(\tau)+1\equiv d(\tau)~({\rm mod}~2^{k})$.
So any $\tau\in A(2^k)$ corresponds to a matrix $\begin{psmallmatrix}1 & 0\\b(\tau) & d(\tau)\end{psmallmatrix}$ in the affine group of matrices given in part (ii) of the proposition. Since, in each case, the number of such matrices is equal to the cardinality of $A(2^k)$ given in part (i), then the claimed isomorphism in part (ii) holds.
(iii) The proof is analogous to the proof of (ii) by considering $R_{2^k} =\langle \zeta_{2^k}\left(a_0^{e_1}\right)^{1/2^{k-\nu_2(e)}}, \zeta_{2^k} \rangle,$ where $(e_1, 2)=1$.
\end{proof}
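For instance, if $a=-4$, then $a_0=-2$, $e=2$, and $s=3$, so part (ii) identifies $A(2^2)$ with the four matrices $\begin{psmallmatrix}1 & 0\\b & d\end{psmallmatrix}$ with $(b,d)\in\{(0,1),(2,1),(1,3),(3,3)\}$, in agreement with $\#A(2^2)=2^{2-\min\{2,2\}}\varPhi(2^3)=4$ from part (i).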
The following proposition indicates the significance of the integer $s$ defined in \eqref{s-def}.
\begin{proposition} \label{factoring} The number $s$ defined in \eqref{s-def} is the smallest integer $k$ for which $\psi_{K, 2}$ factors via $A(2^k)$.
\end{proposition} \begin{proof} For integers $k\geq 0$, let $\varphi_{p^k}:A_p\to A(p^k)$ be the projection map. It is enough to show that $\psi_{K, 2}$ is non-trivial on $\ker \varphi_{2^{s-1}}$ and is trivial on $\ker \varphi_{2^{s}}$.
We write the proof for the twisted case, where $s=\nu_2(e)+2$. The proof for the untwisted case is similar.
Assume that $a=-(a_0^{e_1})^{2^{\nu_2(e)}}$ as in part (ii) of Proposition \ref{new-prop}. Let $\alpha\in A_2$ be such that $$\tau_2=\varphi_{2^{\nu_2(e)+2}}(\alpha)= \left(\begin{array}{cc} 1&0\\0&1+2^{\nu_2(e)+1} \end{array}\right)\in A(2^{\nu_2(e)+2}).$$
Observe that $\alpha\in \ker \varphi_{2^{\nu_2(e)+1}}$ and since $R_{2^{\nu_2(e)+2}}=\langle\zeta_{2^{\nu_2(e)+3}} \left(a_0^{e_1}\right)^{1/4}, \zeta_{2^{\nu_2(e)+2}} \rangle$, we have \begin{equation} \label{T1} \tau_2(\zeta_{2^{\nu_2(e)+3}} \left(a_0^{e_1}\right)^{1/4})=\zeta_{2^{\nu_2(e)+3}} \left(a_0^{e_1}\right)^{1/4}. \end{equation} Raising both sides of \eqref{T1} to power $2$ and observing that
$\zeta_{2^{\nu_2(e)+2}}$ and $ \left(a_0^{e_1}\right)^{1/2}\in R_{2^{\nu_2(e)+2}}$, we get \begin{equation} \label{T2} \tau_2(\zeta_{2^{\nu_2(e)+2}}) \tau_2( \left(a_0^{e_1}\right)^{1/2})=\zeta_{2^{\nu_2(e)+2}} \left(a_0^{e_1}\right)^{1/2}. \end{equation} Now since $\tau_2(\zeta_{2^{\nu_2(e)+2}})= \zeta_{2^{\nu_2(e)+2}}^{1+2^{\nu_2(e)+1}}$, the equation \eqref{T2} implies that $$\tau_2( a_0^{e_1/2})=- a_0^{e_1/2}.$$ We have $e_1=2m+1$ for some integer $m$. Hence, $$\tau_2( a_0^{1/2}a_0^m)=- a_0^{1/2}a_0^m.$$ Thus, for this particular $\alpha\in \ker \varphi_{2^{\nu_2(e)+1}}$, we have $\psi_{K, 2}(\alpha)=-1$. Hence, $\psi_{K, 2}$ is non-trivial on $\ker \varphi_{2^{\nu_2(e)+1}}$.
Next, let $\alpha\in A_2$ be such that $\alpha\in \ker \varphi_{2^{\nu_2(e)+2}}$. Hence, \begin{equation*} \label{T3} \tau_3=\varphi_{2^{\nu_2(e)+3}}(\alpha)= \left(\begin{array}{cc} 1&0\\ b&d \end{array}\right)\in A(2^{\nu_2(e)+3}) \end{equation*} and \begin{equation} \label{T4}
\left(\begin{array}{cc} 1&0\\ b&d \end{array}\right) \equiv \left(\begin{array}{cc} 1&0\\ 0&1 \end{array}\right)~({\rm mod}~2^{\nu_2(e)+2} ). \end{equation} Hence, $b=2^{\nu_2(e)+2}b_1$ for some integer $b_1$. We have \begin{equation*}
\tau_3(\zeta_{2^{\nu_2(e)+4}} \left(a_0^{e_1}\right)^{1/8})=\zeta_{2^{\nu_2(e)+4}} \left(a_0^{e_1}\right)^{1/8} \zeta_{2^{\nu_2(e)+3}}^{2^{\nu_2(e)+2}b_1}. \end{equation*} Squaring both sides of this identity yields \begin{equation*}
\tau_3(\zeta_{2^{\nu_2(e)+3}}) \tau_3(\left(a_0^{e_1}\right)^{1/4})=\zeta_{2^{\nu_2(e)+3}} \left(a_0^{e_1}\right)^{1/4}. \end{equation*} This implies \begin{equation} \label{T5} \tau_3(\left(a_0^{e_1}\right)^{1/4})=\frac{\zeta_{2^{\nu_2(e)+3}}}{\zeta_{2^{\nu_2(e)+3}}^d} \left(a_0^{e_1}\right)^{1/4}. \end{equation} Now observe, from \eqref{T4}, that
\begin{equation} \label{T6} d=1+2^{\nu_2(e)+2}d_1 \end{equation} for some integer $d_1$. Raising both sides of \eqref{T5} to power 2 and employing \eqref{T6} yield $$\tau_3(\left(a_0^{e_1}\right)^{1/2})= \left(a_0^{e_1}\right)^{1/2}. $$ Hence, $$\tau_3( a_0^{1/2}a_0^m)= a_0^{1/2}a_0^m,$$ where $e_1=2m+1$. Thus, $\psi_{K, 2}$ is trivial on $\ker \varphi_{2^{\nu_2(e)+2}}$.
\end{proof}
The following proposition is essential in proving Theorem \ref{product-kummer-family}.
\begin{proposition} \label{character} Let $\ell(p)$ be the smallest integer $k$ for which $\chi_p$ factors via $A(p^{k})$. Then $$\ell(p)=\left\{\begin{array}{ll} 0&\text{if~} p~ \text{is~ odd~and~}p\nmid D,\\1&\text{if~} p~ \text{is~ odd~and~}p\mid D,\\ s&\text{if}~ p=2~ \text{and}~ D ~\text{is odd},\\ \max\{2,s\}&\text{if}~ p=2~ \text{and}~ 4\Vert D,\\
2&\text{if}~ p=2,~8\Vert D,~\text{and}~(\nu_2(e)=1~\text{and}~a<0),\\ \max\{3,s\}&\text{if}~ p=2,~ 8\Vert D,~\text{and}~(\nu_2(e)\neq 1~\text{or}~a>0).\end{array} \right. $$
\end{proposition}
\begin{proof} If $p\nmid 2D$, by the definition of $\chi_p$, we have that $\chi_p$ is constantly equal to $1$. Thus, the assertion holds.
If $p$ is an odd prime dividing $D$, then $\chi_p$ is the Legendre symbol mod $p$, so the result follows.
If $p=2$ and $D$ is odd, then $\chi_2=\psi_{K, 2}$. Thus, the result follows from Proposition \ref{factoring}.
If $p=2$ and $4\Vert D$, then $\chi_2=\psi_{K, 2} \chi_{D, 2}$, where $\chi_{D, 2}$ is the Dirichlet character mod $8$ of conductor $4$. We are looking for a positive integer $k$ such that $\psi_{K,2} (\alpha)\neq \chi_{D, 2}(\alpha)$ for an element $\alpha\in \ker\varphi_{2^{k-1}}$, and $\psi_{K,2} (\alpha)= \chi_{D, 2}(\alpha)$ for all $\alpha\in \ker\varphi_{2^k}$. Note that $2$ is the smallest value of $k$ for which $\chi_{D, 2}$ factors via $A(2^k)$, and, by Proposition \ref{factoring}, $s$ is the smallest value of $k$ for which $\psi_{K, 2}$ factors via $A(2^k)$. Thus, $\chi_{D, 2}$ is trivial on $\ker\varphi_{2^k}$ for $k\geq 2$ and is nontrivial on $\ker\varphi_{2^{k}}$ for $0\leq k\leq 1$. Also, $\psi_{K, 2}$ is trivial on $\ker\varphi_{2^k}$ for $k\geq s$ and is nontrivial on $\ker\varphi_{2^{k}}$ for $0\leq k\leq s-1$. Using these facts and a case-by-case analysis in terms of the values of $\nu_2(e)$, for the untwisted and twisted cases, we can see that the claimed assertion holds in this case. More precisely, if $\nu_2(e)=0$, then $\chi_2$ factors via $A(2^2)$. Otherwise, $\chi_2$ factors via $A(2^s)$. The only case that needs special attention is when $a$ is an exact perfect square (i.e., $a>0$ and $\nu_2(e)=1$). In this case, $\max\{2, s\}=2$ and both $\psi_{K, 2}$ and $\chi_{D, 2}$ are trivial on $\ker\varphi_{2^2}$, hence $\chi_2$ is trivial on $\ker\varphi_{2^2}$. Let $\alpha\in \ker\varphi_{2}$ be such that $\varphi_{2^2}(\alpha)= \begin{psmallmatrix}1 & 0\\0 & 3\end{psmallmatrix}\in A(2^2)$. Note that $0+1\equiv 3$ (mod $2$), so by Proposition \ref{new-prop}(iii) such $\alpha$ exists. We have $\chi_2(\alpha)=\psi_{K, 2}(\alpha)\chi_{D, 2}(\alpha)=(1)(-1)=-1$. Thus, $\chi_2$ is non-trivial on $\ker\varphi_{2}$. Hence, $\chi_2$ factors via $A(2^2)=A(2^s)=A(2^{\max\{2, s\}})$.
If $p=2$ and $8\Vert D$, then, as in the previous case, a case-by-case analysis in terms of the values of $\nu_2(e)$, for the untwisted and twisted cases, verifies the result. (Note that in this case $3$ is the smallest value of $k$ for which $\chi_{D, 2}$ factors via $A(2^k)$.) More precisely, if $\nu_2(e)=0$ or $1$, and $a$ is not the negative of a perfect square, then $\chi_2$ factors via $A(2^3)$. Also, if $\nu_2(e)=1$ and $a<0$, then $\chi_2$ factors via $A(2^2)$. Otherwise, $\chi_2$ factors via $A(2^s)$. Two cases need special attention.
Case 1: The number $a$ is negative of an exact perfect square (i.e., $a<0$ and $\nu_2(e)=1$). In this case, $\max\{3, s\}=3$ and both $\psi_{K, 2}$ and $\chi_{D, 2}$ are trivial on $\ker\varphi_{2^3}$, hence $\chi_2$ is trivial on $\ker\varphi_{2^3}$. Thus, $\chi_2$ acts through $A(2^3)$. Let $\alpha\in \ker\varphi_{2^2}$. Then $\varphi_{2^3}(\alpha)= \begin{psmallmatrix}1 & 0\\b & d\end{psmallmatrix}\in A(2^3)$ such that $\begin{psmallmatrix}1 & 0\\b & d\end{psmallmatrix}\equiv \begin{psmallmatrix}1 & 0\\0 & 1\end{psmallmatrix}$ (mod $2^2$). Hence,
$$ \left(\begin{array}{cc}1 & 0\\b & d\end{array}\right )\in \left\{ \left(\begin{array}{cc}1 & 0\\0 & 1\end{array}\right ), \left(\begin{array}{cc}1 & 0\\0 & 5\end{array}\right ), \left(\begin{array}{cc}1 & 0\\4 & 1\end{array}\right ), \left(\begin{array}{cc}1 & 0\\4 & 5\end{array}\right ) \right\}\subset A(2^3).$$ Since for each $\alpha$ corresponding to the above matrices we have $\chi_2(\alpha)=\psi_{K, 2}(\alpha)\chi_{D, 2}(\alpha)=1$, we conclude that $\chi_2$ is trivial on $\ker \varphi_{2^2}$. Now let $\alpha\in \ker\varphi_{2}$ be such that $\varphi_{2^2}(\alpha)= \begin{psmallmatrix}1 & 0\\6 & 1\end{psmallmatrix}\in A(2^2)$. Note that $(2)(6)+1\equiv 1$ (mod $2^2$), so by Proposition \ref{new-prop}(ii) such $\alpha$ exists. We have $\chi_2(\alpha)=\psi_{K, 2}(\alpha)\chi_{D, 2}(\alpha)=(-1)(1)=-1$. Thus, $\chi_2$ is non-trivial on $\ker\varphi_{2}$. Hence, $\chi_2$ factors via $A(2^2)$ as claimed.
Case 2: The number $a$ is an exact perfect fourth power (i.e., $a>0$ and $\nu_2(e)=2$). In this case, $\max\{3, s\}=3$ and both $\psi_{K, 2}$ and $\chi_{D, 2}$ are trivial on $\ker\varphi_{2^3}$, hence $\chi_2$ is trivial on $\ker\varphi_{2^3}$. Let $\alpha\in \ker\varphi_{2^2}$ be such that $\varphi_{2^3}(\alpha)= \begin{psmallmatrix}1 & 0\\0 & 9\end{psmallmatrix}\in A(2^3)$. Note that $0+1\equiv 9$ (mod $2^2$), so by Proposition \ref{new-prop}(iii) such $\alpha$ exists. We have $\chi_2(\alpha)=\psi_{K, 2}(\alpha)\chi_{D, 2}(\alpha)=(1)(-1)=-1$. Thus, $\chi_2$ is non-trivial on $\ker\varphi_{2^2}$. Hence, $\chi_2$ factors via $A(2^3)=A(2^s)=A(2^{\max\{3, s\}})$. \end{proof}
\section{Proof of Theorem \ref{main1}} \label{sec-main1}
\begin{proof}[Proof of Theorem \ref{main1}] \label{Section 3} Let $\nu_G$ be the normalized Haar measure on the profinite group $G$, and $\nu_A$ be the normalized Haar measure on the profinite group $A$. We start by writing the summation \begin{equation*}
\sum_{n\geq1}\frac{g(n)}{\#G(n)} \end{equation*} in terms of measures of certain measurable subgroups of $G$. For this purpose, let $\pi_{G,n}: G\to G(n)$ be the projection map for each $n\geq1$. Then, $G/\ker\pi_{G,n}\cong G(n)$ and $[G:\ker\pi_{G,n}]=\#G(n)$. Hence, since $\ker\pi_{G, n}$ is a closed subgroup of $G$, we have $\nu_G(\ker\pi_{G,n})=1/\#G(n)$.
Thus, \begin{equation} \label{ev2}
\sum_{n\geq1}\frac{g(n)}{\#G(n)}=\sum_{n\geq1}g(n)\nu_G(\ker\pi_{G,n}). \end{equation}
Observe that the number of cosets of the set $A/r(\ker\pi_{G,n})$ divided by the number of cosets of the group $G/\ker\pi_{G,n}\cong r(G)/r(\ker\pi_{G,n})$ is equal to $\#\left( A/r(G)\right)$. Hence, we have \begin{equation} \label{ev3}
\nu_G(\ker\pi_{G,n})=\frac{\nu_A(r(\ker\pi_{G,n}))}{\nu_A(r(G))}. \end{equation} Now, since \begin{equation} \label{star}
1\to G\stackrel{r}{\longrightarrow} A\stackrel{\chi}{\longrightarrow} \mu_m\to 1 \end{equation} is an exact sequence, by (\ref{ev3}), we have \begin{equation} \label{ev4}
\begin{split}
\sum_{n\geq1}g(n)\nu_G(\ker\pi_{G,n}) & =\sum_{n\geq1}g(n)\frac{\nu_A(r(\ker\pi_{G,n}))}{\nu_A(\ker\chi)}\\
& =\frac{1}{\nu_A(\ker\chi)}\sum_{n\geq1}g(n)\nu_A(r(\ker\pi_{G,n})).
\end{split} \end{equation}
Next, we show that $r(\ker\pi_{G,n})=\ker(\pi_{A,n})\cap\ker\chi$, where $\pi_{A,n}:A\to A(n)$ is the projection map for each $n\geq1$. To prove this claim, we note that the diagram \begin{equation} \label{commutative-G-to-A-main1}
\begin{tikzcd}
G\arrow[d,"r"]\arrow[r,"\pi_{G,n}"]&G(n)\arrow[d,"r_n"]\\
A\arrow[r,"\pi_{A,n}"]&A(n)
\end{tikzcd} \end{equation} commutes. For a group $H$, let $e_H$ denote its identity element. Note that if $\sigma\in\ker\pi_{G,n}$, then $r_n(\pi_{G,n}(\sigma))=r_n(e_{G(n)})=e_{A(n)}$. Hence, by the commutative diagram \eqref{commutative-G-to-A-main1}, we have $r(\sigma)\in\ker(\pi_{A,n})$. Moreover, by the exact sequence \eqref{star}, we have $r(\sigma)\in r(G)=\ker\chi$. Therefore, \begin{equation} \label{ev5-1} r(\ker\pi_{G,n})\subset \ker(\pi_{A,n})\cap\ker\chi. \end{equation} On the other hand, if $\alpha\in\ker(\pi_{A,n})\cap\ker\chi\subset\ker\chi=r(G)$, then there exists a $\sigma\in G$ such that $r(\sigma)=\alpha$. Moreover, $r(\sigma)\in\ker(\pi_{A,n})$ means $\pi_{A,n}(r(\sigma))=e_{A(n)}$. Hence, $r_n(\pi_{G,n}(\sigma))=e_{A(n)}$ as \eqref{commutative-G-to-A-main1} is commutative. Thus, $\sigma\in\ker\pi_{G,n}$, since $r_n$ is injective. This shows that \begin{equation} \label{ev5-2} \ker(\pi_{A,n})\cap\ker\chi\subset r(\ker\pi_{G,n}). \end{equation} Therefore, from \eqref{ev5-1} and \eqref{ev5-2}, we have \begin{equation} \label{ev5}
r(\ker\pi_{G,n})=\ker(\pi_{A,n})\cap\ker\chi. \end{equation}
From (\ref{ev5}), we have \begin{equation} \label{ev6}
\begin{split}
\sum_{n\geq1}g(n)\nu_A(r(\ker\pi_{G,n}))
& =\sum_{n\geq1}g(n)\nu_A(\ker\pi_{A,n}\cap\ker\chi)\\
& =\sum_{n\geq1}g(n)\int_A1_{\ker\pi_{A,n}\cap\ker\chi}d\nu_A\\
& =\int_A\left(\sum_{n\geq1}g(n)1_{\ker\pi_{A,n}}\right)1_{\ker\chi}d\nu_A.
\end{split} \end{equation} To justify the interchange of the summation and the integral in the last equality, observe that \begin{equation*} \left\lvert\sum_{n=1}^mg(n)1_{\ker\pi_{A,n}\cap\ker\chi}\right\rvert\leq\sum_{n\geq1}\lvert g(n)\rvert1_{\ker\pi_{A,n}\cap\ker\chi}. \end{equation*} Since by the assumption, $\sum_{n\geq1}\lvert g(n)\rvert/\#G(n)$ converges, then, by \cite[Theorem 1.27]{rudin}, \\$\sum_{n\geq1}\lvert g(n)\rvert1_{\ker\pi_{A,n}\cap\ker\chi}$ is integrable. Thus, by Lebesgue's dominated convergence theorem (see \cite[ Theorem 1.34]{rudin}), the interchange of the summation and the integral in \eqref{ev6} is justified. Also, since $\#A(n)\geq \#G(n)$, then $\sum_{n\geq1}\lvert g(n)\rvert/\#A(n)<\infty$. Hence, by \cite[Theorem 1.38]{rudin}, $$\Tilde{g}=\sum_{n\geq1}g(n)1_{\ker\pi_{A,n}}\in L^1(\nu_A).$$ Now from (\ref{ev2}), (\ref{ev4}), and (\ref{ev6}), we have \begin{equation}
\label{ev7}
\sum_{n\geq1}\frac{g(n)}{\#G(n)}
=\frac{\int_A \Tilde{g}1_{\ker\chi}d\nu_A}{\int_A1_{\ker\chi}d\nu_A}. \end{equation}
Note that the character $\chi:A\to\mu_m$ in (\ref{star}) induces the character $\chi':A/r(G)\stackrel{\sim}{\longrightarrow}\mu_m$ by $\chi'(\bar{\alpha})=\chi(\alpha)$, where $\alpha\in A$ and $\bar{\alpha}$ is the coset associated to $\alpha$ in $A/r(G)$. More precisely, $\chi$ is the lift of $\chi'$ to $A$. Since $\chi'$ sends a generator of $A/r(G)$ to a generator of $\mu_m$, then $\chi'$ is a generator of the group of characters of $A/r(G)$ denoted by $\widehat{A/r(G)}$. Thus, for $\bar{\alpha}\in A/r(G)$, by \cite{course-in-arithmetic}*{Chapter \textrm{VI}, Proposition 4}, we have \begin{equation*}
\sum_{\epsilon\in\widehat{A/r(G)}}\epsilon(\bar{\alpha})=\sum_{i=0}^{m-1}(\chi')^i(\bar{\alpha})=
\begin{cases}
m &\quad \text{if }\;\bar{\alpha}=1,\\
0 &\quad \text{if }\;\bar{\alpha}\neq 1.
\end{cases} \end{equation*} Therefore, since $\bar{\alpha}=1$ means $\alpha\in\ker\chi$, we have \begin{equation*}
\sum_{i=0}^{m-1}\chi^i(\alpha)=
\begin{cases}
m &\quad \text{if }\;\alpha\in\ker\chi,\\
0 &\quad \text{if }\;\alpha\notin\ker\chi.
\end{cases} \end{equation*} This implies $\sum_{i=0}^{m-1}\chi^i(\alpha)=m\cdot1_{\ker\chi}(\alpha)$. Thus, \begin{equation} \label{ev8}
\frac{\int_A\Tilde{g}1_{\ker\chi}d\nu_A}{\int_A1_{\ker\chi}d\nu_A}=\frac{\int_A\Tilde{g}\sum_{i=0}^{m-1}\chi^id\nu_A}{m\int_A 1_{\ker\chi}d\nu_A}. \end{equation} Furthermore, by \eqref{star}, we have $[A:\ker\chi]=[A:r(G)]=m$. Hence, $\nu_A(\ker\chi)=1/m$. Thus, the desired result follows from \eqref{ev7} and \eqref{ev8}. \end{proof}
The following corollary considers a special case of Theorem \ref{main1}.
\begin{corollary} \label{cor-after-main1} In Theorem \ref{main1}, suppose that $g$ is a multiplicative arithmetic function. In addition, assume that $A\cong\prod_pA_p$, where $A_p=\varprojlim A(p^i)$, and $\chi=\prod_p\chi_p$, where $\chi_p:A_p\to\mu_m$ is a character of $A_p$. Then \begin{equation} \label{tilde-g-p} \Tilde{g}_p=\sum_{k\geq0}g(p^k)1_{\ker\varphi_{p^k}}\in L^1(\nu_{A_p}). \end{equation}
Moreover, \begin{equation} \label{new-cor-12}
\sum_{n\geq1}\frac{g(n)}{\#G(n)}=\sum_{i=0}^{m-1}\prod_p\int_{A_p}\Tilde{g}_p\chi^i_pd\nu_{A_p}, \end{equation} where $\nu_{A_p}$ is the normalized Haar measure on $A_p$. \end{corollary} \begin{proof} By Theorem \ref{main1}, we have \begin{equation} \label{new-cor-1}
\sum_{n\geq1}\frac{g(n)}{\#G(n)}=\sum_{i=0}^{m-1}\int_A\Tilde{g}\chi^id\nu_A. \end{equation} Since $g(n)$ is multiplicative, $A\cong\prod_pA_p$, $\nu_A=\prod_p\nu_{A_p}$, $\chi=\prod_p\chi_p$, and $\Tilde{g}=\prod_p\Tilde{g}_p$, equation \eqref{new-cor-1} yields \eqref{new-cor-12}. Note that, since $\#A(p^k)\geq \#G(p^k)$, we have $\sum_{k\geq 0}\lvert g(p^k)\rvert/\#A(p^k)<\infty$. Hence, by \cite[Theorem 1.38]{rudin}, $\Tilde{g}_p\in L^1(\nu_{A_p}).$ Thus, the integrals in \eqref{new-cor-12} are finite. \end{proof}
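In particular, for $m=2$, identity \eqref{new-cor-12} takes the form $$\sum_{n\geq1}\frac{g(n)}{\#G(n)}=\prod_p\int_{A_p}\Tilde{g}_p\,d\nu_{A_p}+\prod_p\int_{A_p}\Tilde{g}_p\chi_p\,d\nu_{A_p},$$ which is the form used in the proof of Theorem \ref{product-kummer-family} in the next section.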
\section{Proof of Theorem \ref{product-kummer-family}} \label{S1}
\begin{proof}[Proof of Theorem \ref{product-kummer-family}] We employ Corollary \ref{cor-after-main1} and compute $\int_{A_p}\Tilde{g}_pd\nu_{A_p}$ and $\int_{A_p}\Tilde{g}_p\chi_pd\nu_{A_p}$ for primes $p$. Since $\ker\varphi_{p^k}$ is a closed subgroup of $A_p$, we have $\nu_{A_p}(\ker\varphi_{p^k})=1/[A_{p}:\ker\varphi_{p^k}]=1/\#A(p^k)$. Observe that \begin{equation} \label{sum-in-last-form-1}
\begin{split}
\int_{A_p} \Tilde{g}_pd\nu_{A_p}
& =\int_{A_p}\sum_{k\geq0} g(p^k)1_{\ker\varphi_{p^k}}d\nu_{A_p}\\
& =\sum_{k\geq0}g(p^k)\nu_{A_p}(\ker\varphi_{p^k})\\
& =\sum_{k\geq0}\frac{g(p^k)}{\#A(p^k)}.
\end{split} \end{equation}
In addition, by Proposition \ref{character}, \begin{equation} \label{sum-in-last-form-3}
\begin{split}
\int_{A_p}\Tilde{g}_p\chi_pd\nu_{A_p}
& =\int_{A_p}\left(1_{A_p}\chi_p+g(p)1_{\ker\varphi_p}\chi_p+\dots+g(p^k)1_{\ker\varphi_{p^k}}\chi_p+\dots\right)d\nu_{A_p}\\
& =0+\sum_{k\geq \ell(p)}g(p^k)\nu_{A_p}(\ker\varphi_{p^k})\\
& =\sum_{k\geq \ell(p)}\frac{g(p^k)}{\#A(p^k)}.
\end{split} \end{equation}
Thus, by Corollary \ref{cor-after-main1} with $m=2$, \eqref{sum-in-last-form-1}, and \eqref{sum-in-last-form-3}, we get \eqref{product-kummer}. \end{proof} \section{Proof of Proposition \ref{TDPK-formula}} \label{Section 5}
\begin{proof}[Proof of Proposition \ref{TDPK-formula}]
For integer $k\geq 1$ and odd prime $p$, let \begin{equation} \label{k'}
k'=
\begin{cases}
0 &\text{if }k\leq \nu_p({e}),\\
k-\nu_p(e)&\text{if }k>\nu_p(e),\\
\end{cases} \end{equation} and for $k\geq 1$ and $p=2$, let \begin{equation} \label{k-prime}
k'=
\begin{cases}
0 &\text{if }k\leq \nu_2({e})~\text{and}~ (a>0~ \text{or}~ e~\text{is~odd}),\\
1 &\text{if }k\leq \nu_2({e})~\text{and}~(a<0~ \text{and}~ e~\text{is~even}),\\
k-\nu_2(e)&\text{if }k>\nu_2(e).
\end{cases} \end{equation} Then, from Proposition \ref{new-prop} (i), we have \begin{equation} \label{Apk} \#A(p^k)=\begin{cases}
p^{k+k'-1}(p-1)&\text{if }k\geq 1,\\ 1&\text{if }k=0. \end{cases} \end{equation}
Now by employing \eqref{Apk} in \eqref{product-kummer} we get \begin{equation}
\label{kummer-sum}
\sum_{n=1}^{\infty}\frac{g(n)}{\#G(n)}=\prod_p\sum_{k\geq 0}\frac{g(p^k)}{p^{k+k'-1}(p-1)}+\prod_{p}\sum_{k\geq\ell(p)}\frac{g(p^k)}{p^{k+k'-1}(p-1)}.
\end{equation}
We set $g=1$ in \eqref{kummer-sum} to get the product expression for the constant in the conjectured asymptotic formula in the Titchmarsh Divisor Problem for a given Kummer family. Therefore, by \eqref{kummer-sum}, \begin{equation} \label{expression}
\sum_{n\geq1}\frac{1}{[K_n:\mathbb{Q}]}=\left(1+\prod_{p\mid 2D}\frac{C_p}{1+B_p}\right)\prod_p\left(1+B_p\right), \end{equation} for the following values for $B_p$ and $C_p$.
If $p$ is odd, we have \begin{equation} \label{bp}
B_p= \sum_{k\geq1}\frac{1}{p^{k+k'-1}(p-1)}=\frac{p^{\nu_p(e)+2}+p^{\nu_p(e)+1}-p^2}{p^{\nu_p(e)}(p-1)(p^2-1)}, \end{equation}
and $C_p=B_p$, where $k^\prime$ is given by \eqref{k'}.
For $p=2$, we have the following cases for $B_2$ and $C_2$ with $k^\prime$ as given by \eqref{k-prime}.
\noindent {\it Case (i).} Let $e$ be odd or $a>0$. Hence, $s=\nu_2(e)+1$. Then $B_2$ is the same as \eqref{bp} with $p=2$. Now, if $D$ is odd; or $4 \Vert D$ and $s\geq2$; or $8 \Vert D$ and $s\geq3$, then \begin{equation*}
C_2= \sum_{k\geq \ell(2)}\frac{1}{2^{k+k'-1}}=\frac{2}{2^{\nu_2(e)}(2^2-1)}. \end{equation*} Otherwise, \begin{equation*}
C_2=\sum_{k\geq \ell(2)}\frac{1}{2^{k+k'-1}}=\frac{2^{\nu_2(e)+1}}{2^{\beta}(2^2-1)}, \end{equation*} where $\beta=2$ if $4 \Vert D$ and $s=1$; and $\beta=4$ if $8 \Vert D$ and $s\in \{1, 2\}$.
\noindent {\it Case (ii).} Let $e$ be even and $a<0$. Then \begin{equation*}
B_2= \sum_{k\geq 1}\frac{1}{2^{k+k'-1}}=\frac{2^{\nu_2(e)+2}-2^{\nu_2(e)}-1}{2^{\nu_2(e)}(2^2-1)}. \end{equation*} If $8\Vert D$ and $\nu_2(e)= 1$, we have $\ell(2)=2$. Hence, $$C_2= \sum_{k\geq \ell(2)}\frac{1}{2^{k+k'-1}}= \frac{1}{2^2-1}.$$ Otherwise, we have $\ell(2)=s=\nu_2(e)+2$ and thus \begin{equation*}
C_2= \sum_{k\geq \ell(2)}\frac{1}{2^{k+k'-1}}=\frac{1}{2^{\nu_2(e)+1}(2^2-1)}. \end{equation*}
By applying the above expressions in \eqref{expression} and simplifying case by case, we get \eqref{kummer-tdp-lastproduct}. \end{proof}
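As a sanity check on the closed forms used above (purely illustrative and not part of the proof; the helper names are ad hoc), one can compare truncations of the defining series for $B_p$ (odd $p$, as in \eqref{bp}) and for $B_2$ in Case (ii) with the stated values, for instance in Python:
\begin{verbatim}
# Check (bp): B_p = sum_{k>=1} 1/(p^(k+k'-1)(p-1)), k' = max(0, k - v), v = v_p(e).
def B_p_series(p, v, terms=200):
    return sum(1.0 / (p ** (k + max(0, k - v) - 1) * (p - 1))
               for k in range(1, terms + 1))

def B_p_closed(p, v):
    return (p**(v + 2) + p**(v + 1) - p**2) / (p**v * (p - 1) * (p**2 - 1))

for p in (3, 5, 7):
    for v in (0, 1, 2):
        assert abs(B_p_series(p, v) - B_p_closed(p, v)) < 1e-12

# Check B_2 in Case (ii) (e even, a < 0): k' = 1 for k <= v_2(e), else k - v_2(e).
def B_2_series(v, terms=200):
    return sum(1.0 / 2 ** (k + (1 if k <= v else k - v) - 1)
               for k in range(1, terms + 1))

for v in (1, 2, 3, 4):
    assert abs(B_2_series(v) - (2**(v + 2) - 2**v - 1) / (2**v * 3)) < 1e-12
\end{verbatim}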
\section{Serre Curves} \label{Serre section} Let $E$ be an elliptic curve defined over $\mathbb{Q}$ given by a Weierstrass equation \begin{equation*}
y^2=x^3+ax+b, \end{equation*} where $a,b\in\mathbb{Q}$. Let $\mathbb{Q}(E[n])$ be the $n$-division field of $E$. By taking the inverse limit of the natural injective maps \begin{equation*}
r_n:\Gal(\mathbb{Q}(E[n])/\mathbb{Q})\to\Aut(E[n])\cong \GL_2(\mathbb{Z}/n\mathbb{Z}), \end{equation*} over all $n\geq1$, we have an injective profinite homomorphism \begin{equation*}
r\colon \Gal(\mathbb{Q}(E[\infty])/\mathbb{Q})\to\Aut(E[\infty])\cong\GL_2(\widehat{\mathbb{Z}}). \end{equation*} Let $\Delta$ be the discriminant of the cubic equation $x^3+ax+b=0$. Set $K=\mathbb{Q}({\Delta}^{1/2})$ and let $D$ be the discriminant of $K$. In anticipation of applying Theorem \ref{main1}, let $\det$ be the determinant map $\det:\GL_2(\widehat{\mathbb{Z}})\to\widehat{\mathbb{Z}}^{\times}$ and \begin{equation*}
\chi_D:\GL_2(\widehat{\mathbb{Z}})\stackrel{\det}{\longrightarrow}\widehat{\mathbb{Z}}^{\times}\stackrel{\left(\frac{ D}{.}\right)}{\longrightarrow}\mu_2 \end{equation*} be the composition of $\det$ with the lift to $\widehat{\mathbb{Z}}^{\times}$ of the Kronecker symbol attached to $D$. We note that $\GL_2(\mathbb{Z}/2\mathbb{Z})\cong S_3$, where $S_3$ is the symmetric group on three letters. Let \begin{equation*}
\psi:\GL_2(\widehat{\mathbb{Z}})\to \GL_2(\mathbb{Z}/2\mathbb{Z})\cong S_3\stackrel{\sgn}{\longrightarrow}\mu_2 \end{equation*} be the composition of the projection map from $\GL_2(\widehat{\mathbb{Z}})$ to $\GL_2(\mathbb{Z}/2\mathbb{Z})$ with the signature character on $S_3$. Let $G=\Gal(\mathbb{Q}(E[\infty])/\mathbb{Q})$. For $\eta \in G$ we can show that the image of $r(\eta)$ under $\psi$ is the same as ${\chi_D(r(\eta))=\eta({\Delta}^{1/2})}/{{\Delta}^{1/2}}$. We now set $\chi=\chi_D\cdot\psi$.
The above construction of the character $\chi$ is described by J.-P. Serre in \cite{serre}. In addition, in \cite{serre}, Serre shows that
$[\GL_2(\widehat{\mathbb{Z}}):r(G)]\geq2$ always holds. We call $E$ a \emph{Serre curve} if $[\GL_2(\widehat{\mathbb{Z}}):r(G)]=2$. This is equivalent to saying that $r(G)=\ker\chi$. Thus, letting $A=\GL_2(\widehat{\mathbb{Z}})$, for a Serre curve $E$, the sequence \begin{equation} \label{exactec} 1 \longrightarrow G \stackrel{r}{\longrightarrow} A \stackrel{\chi}{\longrightarrow} \mu_{2} \longrightarrow 1 \end{equation} is an exact sequence. In addition, for a Serre curve, $K=\mathbb{Q}(\Delta^{1/2})$ is a quadratic field.
The quadratic character $\chi:\GL_2(\hat{\mathbb{Z}})(\cong\prod_p\GL_2(\mathbb{Z}_p))\to\mu_2$ can be written as a product of local characters $\chi_p:\GL_2(\mathbb{Z}_p)\to\mu_2$. Observe that since $\psi$ factors via $\GL_2(\mathbb{Z}/2\mathbb{Z})$, it also factors via $\GL_2(\mathbb{Z}_2)$. Let $\psi_{2}:\GL_2(\mathbb{Z}_2)\to\mu_2$ be the corresponding homomorphism obtained from the factorization of $\psi$ via $\GL_2(\mathbb{Z}_2)$. For prime $p\nmid 2D$, let $\chi_p$ be constantly equal to $1$. For odd prime $p\mid D$, let $\chi_p=\chi_{D,p}$ be the lift of the Legendre symbol mod $p$ to $\mathbb{Z}_p^{\times}$,
i.e., \begin{equation*}
\chi_p:\GL_2(\mathbb{Z}_p)\stackrel{\det}{\longrightarrow}\mathbb{Z}_p^{\times}\longrightarrow\mu_2 \end{equation*} where the last map is the composition of the projection map to $\mathbb{Z}/p\mathbb{Z}$ and the Legendre symbol mod $p$. For prime $2$, let $\chi_2=\chi_{D,2}\cdot\psi_{2}$, where $\chi_{D,2}$, similar to the Kummer case, is the lift of one of the Dirichlet characters mod $8$ to $\mathbb{Z}^{\times}_2$ (if $D$ is odd, then $\chi_{D, 2}$ is trivial). Therefore, by the above construction of $\chi$, we have the decomposition $\chi=\prod_p\chi_p$.
Let $A_p=\GL_2(\mathbb{Z}_p)$ and $A(p^k)=\GL_2(\mathbb{Z}/p^k\mathbb{Z})$. The following is an analogue of Proposition \ref{character} for Serre curves. \begin{proposition} \label{character-serre} For a Serre curve $E$, keep the above notation. Let $\ell(p)$ be the smallest integer $k$ for which $\chi_p$ factors via $A(p^k)$. Then $$\ell(p)=\left\{\begin{array}{ll} 0&\text{if~} p~ \text{is~ odd~and~}p\nmid D,\\ 1&\text{if~} p~ \text{is~ odd~and~}p\mid D,\\ 1&\text{if}~ p=2~ \text{and}~ D ~\text{is odd},\\ 2&\text{if}~ p=2~ \text{and}~ 4\Vert D,\\ 3&\text{if}~ p=2~ \text{and}~ 8\Vert D.\\ \end{array} \right. $$
\end{proposition}
\begin{proof} If $p\nmid 2D$, then $\chi_p$ is constantly equal to $1$. Hence, it factors via $A(1)$. If $p$ is odd and $p\mid D$, then $\chi_p$ is the Legendre symbol mod $p$, and so it factors via $A(p)$, and since it is non-trivial, it does not factor via $A(1)$. The result for $p=2$ follows from the construction of $\chi_2$ described above, noting that the smallest integer $k$ for which $\psi_2$ factors via $A(2^k)$ is $k=1$, that for $4\Vert D$ the smallest $k$ for which $\chi_2$ factors via $A(2^k)$ is $k=2$, and that for $8\Vert D$ the smallest such $k$ is $k=3$. \end{proof}
We are now ready to prove our last remaining assertion.
\begin{proof}[Proof of Proposition \ref{prop-Serre}]
Following steps similar to the proof of Theorem \ref{product-kummer-family} and by employing Corollary \ref{cor-after-main1} with $m=2$, Proposition \ref{character-serre}, and \begin{equation*}
\label{|GL|}
\left|\GL_2(\mathbb{Z}/n\mathbb{Z})\right|=\prod_{p^e\:\Vert\:n}p^{4e-3}(p^2-1)(p-1) \end{equation*} (see \cite[page 231]{K}) we have the stated product expression. \end{proof}
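As an aside, the displayed order formula is easy to confirm by brute force for small moduli; the following Python sketch (illustrative only, with ad hoc helper names) counts invertible $2\times2$ matrices mod $n$ directly and compares the count with the product formula.
\begin{verbatim}
# Brute-force check of |GL_2(Z/nZ)| = prod_{p^e || n} p^(4e-3)(p^2-1)(p-1).
from math import gcd
from itertools import product

def gl2_order_bruteforce(n):
    return sum(1 for a, b, c, d in product(range(n), repeat=4)
               if gcd((a * d - b * c) % n, n) == 1)

def gl2_order_formula(n):
    result, m, p = 1, n, 2
    while m > 1:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            result *= p ** (4 * e - 3) * (p ** 2 - 1) * (p - 1)
        p += 1
    return result

for n in range(2, 9):
    assert gl2_order_bruteforce(n) == gl2_order_formula(n)
\end{verbatim}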
\par \noindent{\bf Acknowledgements.} The authors thank the reviewers for their valuable comments and suggestions. The authors thank David Basil and Solaleh Bolvardizadeh for their help in computing the explicit constants $c_a$ given in the table after Proposition \ref{TDPK-formula}.
\begin{rezabib} \bib{AG}{article}{
author={Akbary, Amir},
author={Ghioca, Dragos},
title={A geometric variant of Titchmarsh divisor problem},
journal={Int. J. Number Theory},
volume={8},
date={2012},
number={1},
pages={53--69},
issn={1793-0421},
review={\MR{2887882}},
doi={10.1142/S1793042112500030}, }
\bib{AF}{article}{
author={Akbary, Amir},
author={Felix, Adam Tyler},
title={On the average value of a function of the residual index},
journal={Springer Proc. Math. Stat.},
volume={251},
date={2018},
pages={19--37},
review={\MR{3880381}},
}
\bib{artin}{book}{
author={Artin, Emil},
title={The collected papers of Emil Artin},
note={Edited by Serge Lang and John T. Tate},
publisher={Addison-Wesley Publishing Co., Inc., Reading, Mass.-London},
date={1965},
pages={xvi+560 pp. (2 plates)},
review={\MR{0176888}}, }
\bib{BLSW}{article}{
author={Bach, Eric},
author={Lukes, Richard},
author={Shallit, Jeffrey},
author={Williams, H. C.},
title={Results and estimates on pseudopowers},
journal={Math. Comp.},
volume={65},
date={1996},
number={216},
pages={1737--1747},
issn={0025-5718},
review={\MR{1355005}}, }
\bib{cojocaru:tdp}{article}{
author={Bell, Renee},
author={Blakestad, Clifford},
author={Cojocaru, Alina Carmen},
author={Cowan, Alexander},
author={Jones, Nathan},
author={Matei, Vlad},
author={Smith, Geoffrey},
author={Vogt, Isabel},
title={Constants in Titchmarsh divisor problems for elliptic curves},
journal={Res. Number Theory},
volume={6},
date={2020},
number={1},
pages={Paper No. 1, 24},
issn={2522-0160},
review={\MR{4041152}},
doi={10.1007/s40993-019-0175-9}, }
\bib{cox}{book}{
author={Cox, David A.},
title={Primes of the form $x^2 + ny^2$},
series={Pure and Applied Mathematics (Hoboken)},
edition={2},
note={Fermat, class field theory, and complex multiplication},
publisher={John Wiley \& Sons, Inc., Hoboken, NJ},
date={2013},
pages={xviii+356},
isbn={978-1-118-39018-4},
review={\MR{3236783}},
doi={10.1002/9781118400722}, }
\bib{davenport}{book}{
author={Davenport, Harold},
title={Multiplicative number theory},
series={Graduate Texts in Mathematics},
volume={74},
edition={3},
note={Revised and with a preface by Hugh L. Montgomery},
publisher={Springer-Verlag, New York},
date={2000},
pages={xiv+177},
isbn={0-387-95097-4},
review={\MR{1790423}}, }
\bib{felix-murty}{article}{
author={Felix, Adam Tyler},
author={Murty, M. Ram},
title={A problem of Fomenko's related to Artin's conjecture},
journal={Int. J. Number Theory},
volume={8},
date={2012},
number={7},
pages={1687--1723},
issn={1793-0421},
review={\MR{2968946}},
doi={10.1142/S1793042112500984}, }
\bib{Hooley}{article}{
author={Hooley, Christopher},
title={On Artin's conjecture},
journal={J. Reine Angew. Math.},
volume={225},
date={1967},
pages={209--220},
issn={0075-4102},
review={\MR{207630}},
doi={10.1515/crll.1967.225.209}, }
\bib{K}{book}{
author={Koblitz, Neal},
title={Introduction to elliptic curves and modular forms},
series={Graduate Texts in Mathematics},
volume={97},
edition={2},
publisher={Springer-Verlag, New York},
date={1993},
pages={x+248},
isbn={0-387-97966-2},
review={\MR{1216136}},
doi={10.1007/978-1-4612-0909-6}, }
\bib{kowalski}{article}{
author={Kowalski, E.},
title={Analytic problems for elliptic curves},
journal={J. Ramanujan Math. Soc.},
volume={21},
date={2006},
number={1},
pages={19--114},
issn={0970-1249},
review={\MR{2226355}}, }
\bib{lang}{book}{
author={Lang, Serge},
title={Algebra},
series={Graduate Texts in Mathematics},
volume={211},
edition={3},
publisher={Springer-Verlag, New York},
date={2002},
pages={xvi+914},
isbn={0-387-95385-X},
review={\MR{1878556}},
doi={10.1007/978-1-4613-0041-0}, }
\bib{L}{article}{
author={Laxton, R. R.},
title={On groups of linear recurrences. I},
journal={Duke Math. J.},
volume={36},
date={1969},
pages={721--736},
issn={0012-7094},
review={\MR{0258781}}, }
\bib{lenstra-stevenhagen-moree}{article}{
author={Lenstra, H. W., Jr.},
author={Moree, P.},
author={Stevenhagen, P.},
title={Character sums for primitive root densities},
journal={Math. Proc. Cambridge Philos. Soc.},
volume={157},
date={2014},
number={3},
pages={489--511},
issn={0305-0041},
review={\MR{3286520}},
doi={10.1017/S0305004114000450}, }
\bib{MS}{article}{
author={Moree, P.},
author={Stevenhagen, P.},
title={Computing higher rank primitive root densities},
journal={Acta Arith.},
volume={163},
date={2014},
number={1},
pages={15--32},
issn={0065-1036},
review={\MR{3194054}},
doi={10.4064/aa163-1-2}, }
\bib{Papa}{article}{
author={Pappalardi, F.},
title={On Hooley's theorem with weights},
note={Number theory, II (Rome, 1995)},
journal={Rend. Sem. Mat. Univ. Politec. Torino},
volume={53},
date={1995},
number={4},
pages={375--388},
issn={0373-1243},
review={\MR{1452393}}, }
\bib{rudin}{book}{
author={Rudin, Walter},
title={Real and complex analysis},
edition={3},
publisher={McGraw-Hill Book Co., New York},
date={1987},
pages={xiv+416},
isbn={0-07-054234-1},
review={\MR{924157}}, }
\bib{course-in-arithmetic}{book}{
author={Serre, J.-P.},
title={A course in arithmetic},
series={Graduate Texts in Mathematics, No. 7},
note={Translated from the French},
publisher={Springer-Verlag, New York-Heidelberg},
date={1973},
pages={viii+115},
review={\MR{0344216}}, }
\bib{serre}{article}{
author={Serre, Jean-Pierre},
title={Propri\'{e}t\'{e}s galoisiennes des points d'ordre fini des courbes
elliptiques},
language={French},
journal={Invent. Math.},
volume={15},
date={1972},
number={4},
pages={259--331},
issn={0020-9910},
review={\MR{387283}},
doi={10.1007/BF01405086}, }
\bib{wagstaff}{article}{
author={Wagstaff, Samuel S., Jr.},
title={Pseudoprimes and a generalization of Artin's conjecture},
journal={Acta Arith.},
volume={41},
date={1982},
number={2},
pages={141--150},
issn={0065-1036},
review={\MR{674829}},
doi={10.4064/aa-41-2-141-150}, }
\end{rezabib}
\end{document} |
\begin{document}
\title{Mean value formulas on sublattices and flags of the random lattice}
\begin{abstract} We present extensions of the Siegel integral formula (\cite{Sie}), which counts the vectors of the random lattice, to the context of counting its sublattices and flags. Perhaps surprisingly, it turns out that many quantities of interest diverge to infinity. \end{abstract}
\section{Introduction}
We start by recalling the celebrated Siegel integral formula (\cite{Sie}), one of the cornerstones of geometry of numbers. Let $X_n = \mathrm{SL}(n, \mathbb{Z}) \backslash \mathrm{SL}(n, \mathbb{R})$ be the space of lattices of determinant $1$, and equip $X_n$ with the measure $\mu_n$ (defined up to a constant, which is to be determined by Theorem \ref{thm:siegel_int} below) that is inherited from the Haar measure of $\mathrm{SL}(n, \mathbb{R})$; in particular, $\mu_n$ is invariant under the right $\mathrm{SL}(n, \mathbb{R})$-action on $X_n$. In this setting, Siegel proved the following theorem.
\begin{theorem}[Siegel \cite{Sie}] \label{thm:siegel_int} $\mu_n(X_n) < \infty$, so upon normalizing we may suppose $\mu_n(X_n) = 1$. Also, for $f: \mathbb{R}^n \rightarrow \mathbb{R}$ a compactly supported and bounded Borel measurable function, we have \begin{equation*} \int_{X_n} \sum_{x \in L \atop x \neq 0} f(x) d\mu_n(L) = \int_{\mathbb{R}^n} f(x) dx. \end{equation*} \end{theorem}
There are many other useful variations of Theorem \ref{thm:siegel_int}. For instance, Rogers proved the following, known as the Rogers integral formula.
\begin{theorem}[Rogers \cite{Rogers}] \label{thm:rogers_int} Let $1 \leq k \leq n-1$. Then for $f: (\mathbb{R}^n)^k \rightarrow \mathbb{R}$ a compactly supported and bounded Borel measurable function, we have \begin{equation*} \int_{X_n} \sum_{x_1, \ldots, x_k \in L \atop \mathrm{rk}\,\langle x_1, \ldots, x_k \rangle = k} f(x_1, \ldots, x_k) d\mu_{n}(L) = \int_{\mathbb{R}^n} \ldots \int_{\mathbb{R}^n} f(x_1, \ldots, x_k) dx_1 \ldots dx_k. \end{equation*} \end{theorem}
Rogers also proved explicit formulas of this kind for the cases $\mathrm{rk}\,\langle x_1, \ldots, x_k \rangle = l$ for any $1 \leq l < k$ --- see (\cite{Rogers}) for the precise statement. These results, collectively known as the Rogers integral formula, are the essential tool for the study of random lattice vectors, since they make it possible to study the higher moments of the lattice vector-counting function $\sum_{x\in L \backslash \{0\}} f(x)$. See Schmidt (\cite{Sch3}) for yet more variants of the Siegel integral formula, and Kim (\cite{Kim}), Shapira and Weiss (\cite{SW}), and S\"odergren and Str\"ombergsson (\cite{SS}) for some of the recent applications of these mean value theorems.
The motivation of the present paper is to explore the extensions of these results from the counting of lattice vectors to that of rank $d < n$ sublattices and of flags. In other words, we ask whether they too demonstrate any interesting statistical behavior, like the lattice vectors do.
On the practical side, we hope that such extensions will find applications in lattice-based cryptography. One of the basic implications of Theorem \ref{thm:siegel_int}, that a ball of volume $V$ contains $V$ nonzero lattice vectors on average, is already a fundamental tool for predicting and fine-tuning the decryption process. More recent results, such as the Poissonian behavior of short lattice vectors (\cite{Sod}), have also found applications in cryptanalysis (\cite{BSW}). It is therefore natural to expect that analogous results on sublattices would be useful too, since those are what the BKZ algorithm (short for Block Korkine-Zolotarev, originally proposed by Schnorr and Euchner \cite{SE94}), the standard lattice reduction algorithm used in the cryptanalysis of lattice-based systems, operates on.
Our first result is the following generalization of Theorem \ref{thm:rogers_int}. For a lattice $L \in X_n$ and $1 \leq d < n$, write $\mathrm{Gr}(L, d)$ for the set of primitive rank $d$ sublattices of $L$. For an element $A \in \mathrm{Gr}(L,d)$, define $\det A$ as follows: choose any basis $\{v_1, \ldots, v_d\}$ of $A$, and we set $\det A = \|v_1 \wedge \ldots \wedge v_d \|$, where $\| \cdot \|$ here is the standard Euclidean norm in $\wedge^d \mathbb{R}^n$. This definition is independent of the basis choice. Throughout this paper, for $A \in \mathrm{Gr}(L, d)$ and $H \geq 0$ we write \begin{equation*} f_{H}(A) = \begin{cases} 1 & \mbox{if $\det A \leq H$} \\ 0 & \mbox{otherwise.} \end{cases} \end{equation*} We also define \begin{equation*} a(n,d) = \frac{1}{n}\binom{n}{d}\prod_{i=1}^{d}\frac{V(n-i+1)\zeta(i)}{V(i)\zeta(n-i+1)}, \end{equation*} where $V(i) = \pi^{i/2}/\Gamma(1+i/2)$ is the volume of the unit ball in $\mathbb{R}^i$, and $\zeta(i)$ is the Riemann zeta function evaluated at the positive integer $i$, except that we pretend $\zeta(1) = 1$ for notational convenience. Then we prove the following, the first main result of the present paper.
\begin{theorem} \label{thm:main} Suppose $1 \leq k \leq n-1$, $1 \leq d_1, \ldots, d_k \leq n-1$ with $d_1 + \ldots + d_k \leq n-1$. Then \begin{equation*} \int_{X_n} \underset{A_1, \ldots, A_k\ \mathrm{independent}}{\sum_{A_1 \in \mathrm{Gr}(L, d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(L, d_k)}} f_{H_1}(A_1) \ldots f_{H_k}(A_k) d\mu_n(L) = \prod_{i=1}^k a(n,d_i)H_i^n. \end{equation*} \end{theorem}
\begin{remark} (i) We point out that the $k=1$ case of this theorem is proved by Thunder (\cite{Thu3}) in the more general context of number fields.
(ii) If one wants a formula that counts non-primitive sublattices as well, by a standard M\"obius inversion argument (see \cite{Sch}, or Section 7.1 of \cite{Kim2}) one can show that we could simply replace all the $a(n,d)$ by \begin{equation*} c(n,d) = a(n,d)\prod_{i=1}^d \zeta(n-i+1). \end{equation*} The same applies to our other results introduced below. \end{remark}
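For concreteness, the constants $a(n,d)$ and $c(n,d)$ are easy to evaluate numerically straight from the definitions; the following Python sketch is purely illustrative (the crude series truncation used for $\zeta(s)$, and the convention $\zeta(1)=1$, are as above).
\begin{verbatim}
# Numerical evaluation of a(n,d) and c(n,d) from their definitions.
from math import pi, gamma, comb

def V(i):                       # volume of the unit ball in R^i
    return pi ** (i / 2) / gamma(1 + i / 2)

def zeta(s, terms=100000):
    return 1.0 if s == 1 else sum(m ** (-s) for m in range(1, terms))

def a(n, d):
    prod = 1.0
    for i in range(1, d + 1):
        prod *= V(n - i + 1) * zeta(i) / (V(i) * zeta(n - i + 1))
    return comb(n, d) * prod / n

def c(n, d):
    result = a(n, d)
    for i in range(1, d + 1):
        result *= zeta(n - i + 1)
    return result

print(a(2, 1), 3 / pi)          # both roughly 0.9549
print(a(3, 1), a(3, 2))         # numerically these agree
print(c(3, 1), c(3, 2))
\end{verbatim}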
It is natural to ask what happens if the sublattices $A_1, \ldots, A_k$ are not independent. In fact, if $A_i \cap A_j \neq A_i, A_j, 0$ for some $i, j$, the corresponding integral diverges, as can be seen from the next statement that we prove. \begin{theorem}\label{thm:overlap} Let $1 \leq r < d_1, d_2$, so that $d_1 + d_2 < n + r$. Then \begin{equation*} \int_{X_n} \sum_{A \in \mathrm{Gr}(L,d_1)} \sum_{B \in \mathrm{Gr}(L,d_2) \atop \mathrm{rk}\, A \cap B = r} f_{H_1}(A) f_{H_2}(B) d\mu_n(L) = \infty. \end{equation*} \end{theorem} Because \begin{equation} \label{eq:l2} \int_{X_n} \left(\sum_{A \in \mathrm{Gr}(L, d)} f_{H}(A) \right)^2 d\mu_n(L) = \sum_{r=0}^d \int_{X_n} \sum_{A \in \mathrm{Gr}(L,d)} \sum_{B \in \mathrm{Gr}(L,d) \atop \mathrm{rk}\, A \cap B = r} f_{H}(A) f_{H}(B) d\mu_n(L), \end{equation} Theorem \ref{thm:overlap} has the following consequence. \begin{corollary} The $L^2$-norm of $\sum_{A \in \mathrm{Gr}(L, d)} f_{H}(A)$ diverges. \end{corollary}
Thus, unfortunately, it would be difficult to study $\mathrm{Gr}(L,d)$ via the standard methods. At least we learn that the statistics of rank $d$ sublattices is radically different from that of lattice vectors. In particular, the various Poissonian properties enjoyed by random lattice vectors (see e.g. \cite{Kim}) fail spectacularly for rank $d$ sublattices. It may be possible to tweak the left-hand side of \eqref{eq:l2}, for example by restricting to counting certain pairs of elements of $\mathrm{Gr}(L,d)$, to extract some useful information, but the author has been unable to do so.
At the other extreme is the case in which $A_1 \subseteq \ldots \subseteq A_k$, and we would be counting \emph{flags}. For $L \in X_n$ and $d_0 = 0 < d_1 < \ldots < d_k <d_{k+1} = n$, a flag of type $\mathfrak{d} = (d_1, \ldots, d_k)$ (rational with respect to $L$) is a sequence of sublattices \begin{equation*} A_1 \subseteq A_2 \subseteq \ldots \subseteq A_k \subseteq L \end{equation*} such that $\dim A_i = d_i$. Define \begin{equation*} a(n, \mathfrak{d}) = a(n,d_1)\prod_{i=1}^{k-1} \frac{n-d_{i-1}}{d_{i+1}-d_{i-1}}a(n-d_i, d_{i+1} - d_i). \end{equation*} (This coincides with $a(\alpha)$ in Thunder (\cite{Thu2}).) Then we have the following formula for counting rational flags.
\begin{theorem} \label{thm:flags} The $\mu_n$-average of the number of flags $A_1 \subseteq \ldots \subseteq A_k$ of type $\mathfrak{d} = (d_1, \ldots, d_k)$ such that $\det A_i \leq H_i$ for $i = 1, \ldots, k$ is equal to \begin{equation*} a(n, \mathfrak{d})\prod_{i=1}^k H_i^{d_{i+1} - d_{i-1}}. \end{equation*} \end{theorem}
Theorem \ref{thm:flags} has the following corollary.
\begin{corollary} The $\mu_n$-average number of flags of height $\leq H$ is equal to $\infty$. \end{corollary}
Here, the \emph{height} of a flag $A_1 \subseteq \ldots \subseteq A_k$ is defined as the quantity \begin{equation*} \prod_{i=1}^k (\det A_i)^{d_{i+1}-d_{i-1}}. \end{equation*} (See e.g. \cite{FMT} or \cite{Thu2}.) Therefore, the average number of flags of height $\leq H$ equals \begin{equation*} a(n,\mathfrak{d})\int_{x_i > 0 \atop x_1 \ldots x_k \leq H} dx_1 \ldots dx_k, \end{equation*} but this is $\infty$ for $k \geq 2$. It may be interesting to compare this result with Theorem 5 in Thunder (\cite{Thu2}).
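For instance, already for $k = 2$ the divergence noted above is immediate: \begin{equation*} \int_{x_1, x_2 > 0 \atop x_1 x_2 \leq H} dx_1\, dx_2 = \int_0^\infty \frac{H}{x_1}\, dx_1 = \infty, \end{equation*} since the region $\{x_1 x_2 \leq H,\, x_1, x_2 > 0\}$ contains arbitrarily long strips along both coordinate axes.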
We end the introduction with a few words on the method of proof. It is a product of a reflection on the phenomenon in which the mean of a counting function over $X_n$ coincides with its main term for a fixed individual lattice. For instance, Theorem \ref{thm:siegel_int} implies \begin{equation*}
\int_{X_n} \left| B(V) \cap L \backslash \{0\} \right| d\mu_n(L) = V, \end{equation*}
which is the expected main term of an estimate of $\left| B(V) \cap L \backslash \{0\} \right|$ for a fixed $L$. Among several possible approaches to mean value formulas, we chose the one that transparently shows how this happens. A discrete analogue of Theorem 1 of Rogers (\cite{Rogers}), which bears some semblance to the Hecke equidistribution (Lemma \ref{lemma:eqdist} below), reduces the problem of averaging over $X_n$ to that of counting over a fixed lattice; and the author's recent work (\cite{Kim2}) solves this counting problem. This argument also serves to fix a recently found error in Rogers' proof of the formula named after him (\cite{Rogers}); see Section 2 below for details. For Theorems \ref{thm:overlap} and \ref{thm:flags}, we also need the standard unfolding trick for the Eisenstein series.
If so desired, an appropriate combination of these techniques allows one to handle many other integrals in which the sublattices $A_1, \ldots, A_k$ interact in different ways, e.g. something like \begin{equation*} \int_{X_n} \underset{A_1, A_2\ \mathrm{independent}}{\sum_{A_1 \in \mathrm{Gr}(L, d_1)} \sum_{A_2 \in \mathrm{Gr}(L, d_2)}} \sum_{B \in \mathrm{Gr}(A_1 \oplus A_2, r)} f_{H_1}(A_1)f_{H_2}(A_2)f_{H_3}(B) d\mu_n(L). \end{equation*}
\subsection{Acknowledgments}
This work was supported by NSF grant CNS-2034176. The author thanks the referee for the careful review of the original manuscript.
\section{Some background}
Fix a positive integer $n$. For a prime $p$ and $c \in \{ 1, \ldots, n \}$, let $\mathcal{M}^{(c)}_p$ be the set of $n \times n$ integral matrices $M = (m_{ij})_{1 \leq i, j \leq n}$ such that $m_{ij} = \delta_{ij}$ for $j \neq c$, and $0 \leq m_{ic} \leq p-1$ for $i < c$, $m_{cc} = p$, and $m_{ic} = 0$ for $i > c$. To illustrate, $M$ is a matrix of the form \begin{equation*} \begin{pmatrix} 1 & & & x_1 & & & \\
& \ddots & & \vdots & & & \\
& & 1 & x_{c-1} & & & \\
& & & p & & & \\
& & & & 1 & & \\
& & & & & \ddots & \\
& & & & & & 1 \end{pmatrix}. \end{equation*}
Also let $\mathcal{M}_p = \bigcup_{c=1}^n \mathcal{M}^{(c)}_p$. It is clear that $|\mathcal{M}^{(c)}_p| = p^{c-1}$, so $|\mathcal{M}_p| = 1 + p + \ldots + p^{n-1}$. For the $n \times n$ diagonal matrix \begin{equation*} \alpha_p = \mathrm{diag}(1, \ldots, 1, p) = \begin{pmatrix} 1 & & & \\
& \ddots & & \\
& & 1 & \\
& & & p \end{pmatrix}, \end{equation*} $\mathcal{M}_p$ serves as a set of all distinct representatives of the right cosets of $\Gamma$ in the double coset $\Gamma \alpha_p \Gamma$, where $\Gamma = \mathrm{SL}(n, \mathbb{Z})$ --- see Chapter 3 of Shimura (\cite{Shi}). The Hecke operator on the functions on $X_n$ with respect to $\alpha_p$, which we denote by $T_p$, is defined as \begin{align*}
T_p\rho(L) &= \frac{1}{|\Gamma \backslash \Gamma\alpha_p\Gamma|}\sum_{M \in \Gamma \backslash \Gamma\alpha_p\Gamma} \rho(p^{-1/n}ML) \\
&= \frac{1}{|\mathcal{M}_p|}\sum_{M \in \mathcal{M}_p} \rho(p^{-1/n}ML). \end{align*}
Throughout this paper, we will also write $L$ for any representative of the coset $L \in X_n$ when it is harmless to do so, as we have done above. In other words, we write $L$ for both an element of $\mathrm{SL}(n,\mathbb{R})$ and the lattice spanned by the row vectors of $L$, to keep the notations simple.
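As a small computational illustration of the set $\mathcal{M}_p$ (not needed for any of the arguments; the function names are ad hoc), one can enumerate the representatives $\mathcal{M}^{(c)}_p$ and confirm the count $|\mathcal{M}_p| = 1 + p + \ldots + p^{n-1}$ in Python:
\begin{verbatim}
# Enumerate the representatives in M_p^(c): upper triangular, with p in the
# (c,c) entry, the entries x_1,...,x_{c-1} above it, and 1's on the rest of
# the diagonal; then check |M_p| = 1 + p + ... + p^(n-1).
from itertools import product

def M_p_c(n, p, c):                 # c is 1-based, as in the text
    reps = []
    for xs in product(range(p), repeat=c - 1):
        M = [[int(i == j) for j in range(n)] for i in range(n)]
        M[c - 1][c - 1] = p
        for i, x in enumerate(xs):
            M[i][c - 1] = x
        reps.append(M)
    return reps

def M_p(n, p):
    return [M for c in range(1, n + 1) for M in M_p_c(n, p, c)]

n, p = 4, 5
assert len(M_p(n, p)) == sum(p ** i for i in range(n))
\end{verbatim}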
Lemma \ref{lemma:eqdist} below, which is a discrete analogue of Theorem 1 of Rogers (\cite{Rogers}), is crucial to the present paper. However, the author (thanks to Kevin Schmitt) recently found that Rogers's argument contains an error: on p. 256 of \cite{Rogers}, he claims $\int_{X_n} \rho(\gamma L) d\mu_n = \int_{X_n} \rho(L) d\mu_n$ for any $\gamma \in \mathrm{SL}(n,\mathbb{R})$, saying that it follows ``from the known properties of the fundamental domain,'' but offering no justification otherwise. This equality is in fact not true, which can be seen by taking $n=2$, $\rho$ any function on $X_n$ vanishing in a neighborhood of the cusp, and \begin{equation*} \gamma = \begin{pmatrix} a & 0 \\ 0 & a^{-1} \end{pmatrix} \end{equation*} for large $a > 0$, for example. Fortunately, in our discrete context, the needed relation follows from the basic properties of the Hecke operators.
\begin{lemma} \label{lemma:hecke} Let $\rho(L)$ be an integrable function on $X_n$. Then \begin{equation*} \int_{X_n} T_p\rho(L) d\mu_n(L) = \int_{X_n} \rho(L) d\mu_n(L). \end{equation*} \end{lemma} \begin{proof} The argument is very similar to the proof of Proposition 3.39 in \cite{Shi}. Write $\Gamma = \mathrm{SL}(n,\mathbb{Z})$ as before, and let \begin{equation*} \widetilde{\mathcal{M}}_p^{(c)} = \{E_{cn}M : M \in \mathcal{M}_p^{(c)}\}, \end{equation*} where $E_{cn}$ is the signed permutation matrix obtained by swapping the $c$-th and $n$-th rows of the $n \times n$ identity matrix and then changing the sign of the $n$-th row (so that $\det E_{cn} = 1$). Observe that $\widetilde{\mathcal{M}}_p := \bigcup_{c=1}^n \widetilde{\mathcal{M}}_p^{(c)}$ is also a set of all representatives of the right cosets of $\Gamma$ in $\Gamma\alpha_p\Gamma$. Moreover, $\alpha_p^{-1}M \in \Gamma$ for all $M \in \widetilde{\mathcal{M}}_p$. Therefore \begin{equation*} \Gamma = \Gamma \cap \alpha^{-1}_p \Gamma \alpha_p \Gamma = \bigcup_{M \in \widetilde{\mathcal{M}}_p} (\Gamma \cap \alpha_p^{-1} \Gamma \alpha_p \alpha^{-1}_p M) = \bigcup_{M \in \widetilde{\mathcal{M}}_p} (\Gamma \cap \alpha_p^{-1} \Gamma \alpha_p) \alpha^{-1}_p M. \end{equation*} This shows that the elements $\alpha_p^{-1}M$, $M \in \widetilde{\mathcal{M}}_p$, serve as the coset representatives of $\Gamma \cap \alpha^{-1}_p \Gamma \alpha_p$ in $\Gamma$. Hence we can reinterpret $T_p$ as \begin{equation*}
T_p\rho(L) = \frac{1}{|\mathcal{M}_p|}\sum_{N \in (\Gamma \cap \alpha_p^{-1} \Gamma \alpha_p) \backslash \Gamma} \rho(p^{-1/n}\alpha_p NL). \end{equation*}
For a choice of the fundamental domain $P$ with respect to $\Gamma \cap \alpha_p^{-1} \Gamma \alpha_p$, we have \begin{align*} &\int_{X_n} T_p\rho(L) d\mu_n(L) \\
&= \frac{1}{|\mathcal{M}_p|} \int_{X_n} \sum_{N \in (\Gamma \cap \alpha_p^{-1} \Gamma \alpha_p) \backslash \Gamma} \rho(p^{-1/n}\alpha_p NL) d\mu_n(L) \\
&= \frac{1}{|\mathcal{M}_p|} \int_{P} \rho(p^{-1/n}\alpha_p L) d\mu_n(L) \\
&= \frac{1}{|\mathcal{M}_p|} \int_{p^{-1/n}\alpha_p P} \rho(L) d\mu_n(L). \end{align*}
However, $p^{-1/n}\alpha_p P$ is a fundamental domain with respect to $\alpha_p \Gamma \alpha^{-1}_p \cap \Gamma$. Since $|\mathcal{M}_p| = [\Gamma : \Gamma \cap \alpha_p^{-1} \Gamma \alpha_p] = [\Gamma : \alpha_p \Gamma \alpha^{-1}_p \cap \Gamma]$ (see e.g. Proposition 3.6 of \cite{Shi}), this completes the proof.
\end{proof}
\begin{lemma} \label{lemma:eqdist} Let $\rho(L)$ be a non-negative integrable function on $X_n$. Suppose $\lim_{p \rightarrow \infty} T_p \rho(L)$ exists and has the same (finite) value $I$ for all $L \in X_n$. Then $I = \int_{X_n} \rho d\mu_n$. \end{lemma} \begin{proof}
For any function $F$ and $h \in \mathbb{R}$, write $[F]_h := \min(F, h)$. For any $h > I$, the dominated convergence theorem implies \begin{equation*} \int_{X_n} \left[T_p\rho\right]_h(L) d\mu_n(L) \rightarrow I \end{equation*} as $p \rightarrow \infty$. Also by Lemma \ref{lemma:hecke}, we have \begin{equation*} \int_{X_n} [\rho]_h(L) d\mu_n(L) = \int_{X_n} T_p[\rho]_h(L) d\mu_n(L) \leq \int_{X_n} [T_p\rho]_h(L) d\mu_n(L). \end{equation*} Taking $p \rightarrow \infty$ and then $h \rightarrow \infty$ here, by the monotone convergence theorem we obtain the upper bound \begin{equation*} \int_{X_n} \rho(L) d\mu_n(L) \leq I. \end{equation*}
On the other hand, consider the integral \begin{align*} &\int_{X_n} T_p\rho(L) d\mu_n(L) \\
&= \int_{X_n} \frac{1}{|\mathcal{M}_p|} \sum_{M \in \mathcal{M}_p} \rho(p^{-1/n}M L) d\mu_n(L) \\
&= \frac{1}{|\mathcal{M}_p|} \sum_{M \in \mathcal{M}_p} \int_{X_n} \rho(p^{-1/n}M L) d\mu_n(L). \end{align*} Combined with Fatou's lemma, this implies that \begin{equation*} I = \int_{X_n} \lim_{p \rightarrow \infty} T_p\rho(L) d\mu_n(L) \leq \lim_{p \rightarrow \infty} \int_{X_n} T_p\rho(L) d\mu_n(L) = \int_{X_n} \rho(L) d\mu_n(L), \end{equation*} again by Lemma \ref{lemma:hecke}. This completes the proof.
\end{proof}
\begin{remark} Lemma \ref{lemma:eqdist} may remind one of the Hecke equidistribution (\cite{DM}). Of course, the latter is a much deeper statement, but Lemma \ref{lemma:eqdist} has the advantage that it applies to functions that are neither compactly supported nor bounded, which is the situation we are in. \end{remark}
To confirm the rather strong condition for Lemma \ref{lemma:eqdist}, we need the following theorem recently established by the author (\cite{Kim2}). Here we only state the parts that we need.
\begin{theorem}[Kim \cite{Kim2}] \label{thm:fixed_L} For a (full-rank) lattice $L \subseteq \mathbb{R}^n$, define $P(L, d, H)$ to be the number of primitive rank $d < n$ sublattices of $L$ of determinant less than or equal to $H$. Also let \begin{equation*} b(n, d) = \max\left(\frac{1}{d}, \frac{1}{n-d}\right). \end{equation*} Then \begin{equation} \label{eq:fixed_L} P(L, d, H) = a(n, d)\frac{H^n}{(\det L)^d} + O\left(\sum_{\gamma \in \mathbb{Q} \atop 0 \leq \gamma \leq n - b(n,d)} b_\gamma(L)H^\gamma\right), \end{equation} where the implied constant depends only on $n$ and $d$, the sum on the right is finite, and every $b_\gamma$ is a reciprocal of a product of the successive minima of $L$, so that the right-hand side of \eqref{eq:fixed_L} is invariant under rescaling $L$ to $\alpha L$ and $H$ to $\alpha^d H$.
Furthermore, for a sublattice $S \subseteq L$ of rank $\leq n-d$, define $P_S(L,d,H)$ to be the number of primitive rank $d$ sublattices of $L$ of determinant $\leq H$ that intersect trivially with $S$. Then $P_S(L,d,H)$ also satisfies the estimate \eqref{eq:fixed_L}; in particular, the error term is independent of $S$. The sublattices that do intersect $S$ contribute at most $O(\sum_{\gamma \leq n-1} b_\gamma H^{\gamma})$. \end{theorem}
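To illustrate the shape of this estimate in the simplest nontrivial case (this is only a numerical illustration, not an independent verification), one can take $L = \mathbb{Z}^3$ and $d = 1$: primitive rank $1$ sublattices then correspond to primitive vectors up to sign, and a brute-force count for moderate $H$ is close to $a(3,1)H^3 = \frac{2\pi}{3\zeta(3)}H^3$. A Python sketch:
\begin{verbatim}
# Count primitive vectors of Z^3 of norm <= H, up to sign, and compare
# with a(3,1) H^3 = 2*pi/(3*zeta(3)) H^3.
from math import gcd, pi

H = 20
count = 0
for x in range(-H, H + 1):
    for y in range(-H, H + 1):
        for z in range(-H, H + 1):
            if (0 < x * x + y * y + z * z <= H * H
                    and gcd(gcd(abs(x), abs(y)), abs(z)) == 1):
                count += 1
count //= 2                                    # identify v and -v
zeta3 = sum(m ** -3 for m in range(1, 100000))
print(count, 2 * pi / (3 * zeta3) * H ** 3)    # the two should be close
\end{verbatim}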
For a $d \times n$ matrix $A = (a_{ij})_{d \times n}$ and $c \in \{1, \ldots, n\}$, write $A^{(c)} = (a_{ij})_{d \times (c-1)}$ for the ``first'' $d \times (c-1)$ submatrix of $A$. We also define $\det A = (\det AA^T)^{1/2}$. As with $L$, let us use the same letter $A$ to refer to the rank $d$ lattice generated by the row vectors of $A$. With this convention, the definition of $\det A$ just given for the matrix $A$ is consistent with the definition of $\det A$ for $A \in \mathrm{Gr}(L,d)$ in the introduction. For $A \in \mathrm{Gr}(L,d)$, we also sometimes write $\det_L A$ when the extra clarification might be helpful.
\begin{proposition} \label{prop:mat-det} We continue with the notations of the preceding discussion. In addition, choose a basis $\{v_1, \ldots, v_n\}$ of $L \in X_n$, and also write $L$ for the matrix whose $i$-th row is $v_i$. Let $A$ be an integral $d \times n$ matrix, $c \in \{d+1, \ldots, n\}$, and let $\bar{L}^{(c)}$ be the $(c-1) \times n$ matrix whose $i$-th row vector $\bar{v}_i$ is the projection of $v_i$ onto $\mathrm{span}\{v_c, \ldots, v_n\}^\perp$. Then, provided $\det A^{(c)} \neq 0$, $\det AL \geq \det A^{(c)}\bar{L}^{(c)}$. \end{proposition} \begin{proof} We first present the proof for the case $c = n$. Write $A = (A^{(n)}; a)$, with $a = (a_{1n}, \ldots, a_{dn})^T$. Similarly, write \begin{equation*} L = \begin{pmatrix} 1 & & & \mu_1 \\
& \ddots & & \vdots \\
& & 1 & \mu_{n-1} \\
& & & 1 \end{pmatrix} \cdot \begin{pmatrix}
& & \\ & \bar{L}^{(n)} & \\
& & \\
& v_n & \end{pmatrix} \end{equation*} for some $\mu_1, \ldots, \mu_{n-1} \in \mathbb{R}$. Write $\mu = (\mu_1, \ldots, \mu_{n-1})^T$. Then \begin{equation*} AL = A^{(n)}\bar{L}^{(n)} + (A^{(n)}\mu + a)v_n. \end{equation*} Temporarily write $\mathcal{A} = A^{(n)}\bar{L}^{(n)}$, $\mathcal{B} = (A^{(n)}\mu + a)v_n$. Then \begin{equation*} (AL)(AL)^T = (\mathcal{A} + \mathcal{B})(\mathcal{A}^T + \mathcal{B}^T) = \mathcal{A}\mathcal{A}^T + \mathcal{B}\mathcal{B}^T, \end{equation*} because $\mathcal{A}\mathcal{B}^T = \mathcal{B}\mathcal{A}^T = 0$. Also observe that \begin{equation*}
\mathcal{B}\mathcal{B}^T = \|v_n\|^2(A^{(n)}\mu + a)(A^{(n)}\mu + a)^T. \end{equation*} The matrix-determinant lemma now gives \begin{equation*}
(\det AL)^2 = (\det \mathcal{A})^2(1+ \|v_n\|^2(A^{(n)}\mu + a)^T(\mathcal{A}\mathcal{A}^T)^{-1}(A^{(n)}\mu + a)) \geq (\det \mathcal{A})^2, \end{equation*} which yields the desired conclusion.
To prove the $c=n-1$ case, for example, observe that we can write \begin{equation*} \bar{L}^{(n)} = \begin{pmatrix} 1 & & & \mu'_1 \\
& \ddots & & \vdots \\
& & 1 & \mu'_{n-2} \\
& & & 1 \end{pmatrix} \cdot \begin{pmatrix}
& & \\ & \bar{L}^{(n-1)} & \\
& & \\
& \bar{v}_{n-1} & \end{pmatrix} \end{equation*} for some $\mu'_1, \ldots, \mu'_{n-2} \in \mathbb{R}$, where $\bar{v}_{n-1}$ is the last row of $\bar{L}^{(n)}$. By repeating the argument above, we obtain $\det A^{(n)}\bar{L}^{(n)} \geq \det A^{(n-1)}\bar{L}^{(n-1)}$. The remaining cases follow by further repetitions. \end{proof}
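The inequality in Proposition \ref{prop:mat-det} is also easy to test numerically; the following Python sketch (illustrative only, using \texttt{numpy} on random instances; the helper \texttt{gram\_det} is an ad hoc name) checks the case $c = n$.
\begin{verbatim}
# Numerical check of the case c = n: det(A L) >= det(A^(n) Lbar^(n)), where
# Lbar^(n) is the projection of the first n-1 rows of L onto the orthogonal
# complement of the last row v_n.
import numpy as np

def gram_det(M):                    # det(M M^T)^(1/2) for a d x n matrix M
    return np.sqrt(max(np.linalg.det(M @ M.T), 0.0))

rng = np.random.default_rng(0)
n, d = 5, 2
for _ in range(100):
    L = rng.normal(size=(n, n))
    L /= abs(np.linalg.det(L)) ** (1 / n)      # normalize so |det L| = 1
    A = rng.integers(-3, 4, size=(d, n)).astype(float)
    if np.linalg.matrix_rank(A[:, : n - 1]) < d:
        continue                               # need det A^(n) != 0
    v = L[-1]
    Lbar = L[:-1] - np.outer(L[:-1] @ v, v) / (v @ v)
    assert gram_det(A @ L) >= gram_det(A[:, : n - 1] @ Lbar) - 1e-9
\end{verbatim}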
\section{Proof of Theorem \ref{thm:main}}
Recall that we wish to evaluate the integral \begin{equation} \label{eq:toy2} \int_{X_n} \underset{A_1, \ldots, A_k\ \mathrm{independent}}{\sum_{A_1 \in \mathrm{Gr}(L, d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(L, d_k)}} f_{H_1}(A_1) \ldots f_{H_k}(A_k) d\mu_n(L), \end{equation} where we have $d_1 + \ldots + d_k := d < n$. The plan is to instead estimate the sum \begin{equation} \label{eq:toy2sum}
\frac{1}{|\mathcal{M}_p|} \sum_{M \in \mathcal{M}_p}\underset{A_1, \ldots, A_k\ \mathrm{independent}}{\sum_{A_1 \in \mathrm{Gr}(\mathbb{Z}^n, d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(\mathbb{Z}^n, d_k)}} f_{p^{d_1/n}H_1}(A_1ML) \ldots f_{p^{d_k/n}H_k}(A_kML) \end{equation} for a fixed $L \in X_n$ in the $p$ limit, and then use Lemma \ref{lemma:eqdist} to prove Theorem \ref{thm:main}. More precisely, we rewrite \eqref{eq:toy2sum} as \begin{equation*}
\frac{1}{|\mathcal{M}_p|} \sum_{c=1}^{n} \sum_{M \in \mathcal{M}^{(c)}_p}\underset{A_1, \ldots, A_k\ \mathrm{independent}}{\sum_{A_1 \in \mathrm{Gr}(\mathbb{Z}^n, d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(\mathbb{Z}^n, d_k)}} f_{p^{d_1/n}H_1}(A_1ML) \ldots f_{p^{d_k/n}H_k}(A_kML), \end{equation*} and study the inner sum for each $c$, to show that this approaches $\prod_{i=1}^k a(n,d_i)H_i^n$ as $p \rightarrow \infty$. This is independent of $L$, and hence Lemma \ref{lemma:eqdist} applies.
Throughout this section, we fix a representative of $L$ in $\mathrm{SL}(n,\mathbb{R})$, also denoted by $L$. It will also be helpful for us to identify each $A_l \in \mathrm{Gr}(\mathbb{Z}^n,d_l)$ with a choice of its matrix representative, e.g. its Hermite normal form (HNF), for explicit computations. Let $c \in \{1, \ldots, n\}$, and \begin{equation*} M = \begin{pmatrix} 1 & & & x_1 & & & \\
& \ddots & & \vdots & & & \\
& & 1 & x_{c-1} & & & \\
& & & p & & & \\
& & & & 1 & & \\
& & & & & \ddots & \\
& & & & & & 1 \end{pmatrix} \in \mathcal{M}^{(c)}_p. \end{equation*} If we write $A_l = (a^l_{ij})_{d_l \times n}$ as a matrix, then for each $l$ \begin{equation*} A_lML = \begin{pmatrix} a^l_{11} & a^l_{12} & \dots & \sum_{j=1}^{c-1} a^l_{1j}x_j + pa^l_{1c} & \dots & a^l_{1n} \\
\vdots & \vdots & & \vdots & & \vdots \\ a^l_{d_l1} & a^l_{d_l2} & \dots & \sum_{j=1}^{c-1} a^l_{d_lj}x_j + pa^l_{d_lc} & \dots & a^l_{d_ln} \end{pmatrix} L. \end{equation*} Denote by $A^{(c)}_l = (a^l_{ij})_{d_l \times (c-1)}$ the first $d_l \times (c-1)$ submatrix of $A_l$, and write \begin{equation*} A = (a_{ij})_{d \times n} = \begin{pmatrix} A_1 \\ \vdots \\ A_k \end{pmatrix}, A^{(c)} = (a_{ij})_{d \times (c-1)} = \begin{pmatrix} A^{(c)}_1 \\ \vdots \\ A^{(c)}_k \end{pmatrix}, \end{equation*} which are $d \times n$ and $d \times (c-1)$ matrices, respectively.
\subsection{The main term}
The main term of \eqref{eq:toy2sum} comes from the terms with $c=n$, namely \begin{equation}\label{eq:c=n}
\frac{1}{|\mathcal M_{p}|}\sum_{M \in \mathcal{M}^{(n)}_p}\underset{A_1, \ldots, A_k\ \mathrm{independent}}{\sum_{A_1 \in \mathrm{Gr}(\mathbb{Z}^n, d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(\mathbb{Z}^n, d_k)}} f_{p^{d_1/n}H_1}(A_1ML) \ldots f_{p^{d_k/n}H_k}(A_kML). \end{equation}
Consider the map $A^{(n)}: \mathbb{F}_p^{n-1} \rightarrow \mathbb{F}_p^d$ induced by the matrix $A^{(n)}$ as above. We claim that the contribution to the above sum from the $A_l$'s for which $A^{(n)}$ is not surjective is negligible. There are two cases: \begin{enumerate}[(i)] \item $\det A^{(n)} \geq p$ over $\mathbb{Q}$. Then \begin{equation*} p \leq \det A^{(n)} \leq \prod_{l=1}^k \det A^{(n)}_l, \end{equation*} so there exists an $l$ such that $\det A^{(n)}_l \geq p^{d_l/n + 1/nk}$, and hence $\det A^{(n)}_l\bar{L} \gg_L p^{d_l/n + 1/nk}$, where $\bar{L}$ here is $\bar{L}^{(n)}$ in the statement of Proposition \ref{prop:mat-det}. Proposition \ref{prop:mat-det} implies $\det A_lML \gg_L p^{d_l/n + 1/nk}$. For $p$ sufficiently large, this is greater than $p^{d_l/n}H_l$, and so does not contribute to the sum.
\item $\det A^{(n)} = 0$ over $\mathbb{Q}$. Row-reduce $A$ so that the last row equals $(0, \ldots, 0, C)$ with $C \neq 0$ --- which is possible since $\mathrm{rk}\, A = d$ --- which shows that $\det AM \geq p$. This again implies $\det A_lML \gg_L p^{d_l/n + 1/nk}$ for some $l$. \end{enumerate}
Now if $A^{(n)}$ does induce a surjective map onto $\mathbb{F}_p^d$, then the vectors \begin{equation*} \left( \sum_{j=1}^{n-1} {a_{1j}x_j}, \ldots, \sum_{j=1}^{n-1} {a_{dj}x_j} \right) \end{equation*} become equidistributed mod $p$ as $(x_1, \ldots, x_{n-1})$ ranges over $\{0, 1, \ldots, p-1\}^{n-1}$. Therefore computing \eqref{eq:c=n} is equivalent to computing \begin{equation*}
\frac{p^{n-1}}{p^d|\mathcal M_{p}|}\underset{\mathrm{rk}\,_p A^{(n)} = d}{\sum_{A_1 \in \mathrm{Gr}(\mathbb{Z}^n,d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(\mathbb{Z}^n,d_k)}} f_{p^{d_1/n}H_1}(A_1L) \ldots f_{p^{d_k/n}H_k}(A_kL) \end{equation*} (here $\mathrm{rk}\,_p$ means the rank modulo $p$). This is equal to \begin{equation} \label{eq:ribbit}
\frac{p^{n-1}}{p^d|\mathcal M_{p}|}\underset{\mathrm{rk}\, A^{(n)} = d}{\sum_{A_1 \in \mathrm{Gr}(\mathbb{Z}^n,d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(\mathbb{Z}^n,d_k)}} f_{p^{d_1/n}H_1}(A_1L) \ldots f_{p^{d_k/n}H_k}(A_kL) \end{equation} because $\mathrm{rk}\,_p A^{(n)} = d \Leftrightarrow \mathrm{rk}\, A^{(n)} = d$ and $p \nmid \det A^{(n)}$, and we already showed that if $p \mid \det A^{(n)}$ the corresponding sets of $A_l$'s do not contribute to the sum.
The summation in \eqref{eq:ribbit} requires that $A_k$ is independent of $A_1, \ldots, A_{k-1}$ and $(0, \ldots, 0, 1)$, since otherwise, $A_1 \oplus \ldots \oplus A_k$ contains a nonzero multiple of $(0, \ldots, 0, 1)$, which implies $\mathrm{rk}\, A^{(n)} < d$. Thus, by applying Theorem \ref{thm:fixed_L} with $S = A_1 \oplus \ldots \oplus A_{k-1} \oplus \mathrm{span}_\mathbb{Z}\{(0, \ldots, 0, 1)\}$, we can rewrite \eqref{eq:ribbit} as \begin{align*}
&\frac{p^{n-1}}{p^d|\mathcal M_{p}|}\left(a(n,d_k)H_k^np^{d_k} + o_L(H_k^np^{d_k})\right) \cdot \\ &\underset{\mathrm{rk}\, A^{(n)} = d - d_k}{\sum_{A_1 \in \mathrm{Gr}(\mathbb{Z}^n,d_1)} \ldots \sum_{A_{k-1} \in \mathrm{Gr}(\mathbb{Z}^n,d_{k-1})}} f_{p^{d_1/n}H_1}(A_1L) \ldots f_{p^{d_{k-1}/n}H_{k-1}}(A_{k-1}L), \end{align*} where $A^{(n)}$ here now means \begin{equation*} A^{(n)} = \begin{pmatrix} A^{(n)}_1 \\ \vdots \\ A^{(n)}_{k-1} \end{pmatrix}. \end{equation*}
Repeating the same argument with other $A_i$'s, we find that \eqref{eq:ribbit} equals \begin{equation*}
\frac{p^{n-1}}{|\mathcal M_{p}|}\left(\prod_{i=1}^ka(n,d_i)H^n_i + o_{L,H_1, \ldots, H_k}(1)\right). \end{equation*}
Recalling $|\mathcal M_{p}| = \sum_{i=0}^{n-1} p^i$, and taking $p \rightarrow \infty$, this gives the intended main term $\prod_{i=1}^k a(n,d_i)H^n_i$ for \eqref{eq:toy2sum}.
\subsection{Error terms, part 1}
In the rest of this section, we show that, for $c \in \{1, \ldots, n-1\}$, \begin{equation}\label{eq:c<n}
\frac{1}{|\mathcal M_{p}|}\sum_{M \in \mathcal{M}^{(c)}_p}\underset{A_1, \ldots, A_k\ \mathrm{independent}}{\sum_{A_1 \in \mathrm{Gr}(\mathbb{Z}^n, d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(\mathbb{Z}^n, d_k)}} f_{p^{d_1/n}H_1}(A_1ML) \ldots f_{p^{d_k/n}H_k}(A_kML) \end{equation} vanishes as $p \rightarrow \infty$. This will complete the proof of Theorem \ref{thm:main}.
We first assume $c > d$, and consider the contributions from those $A_1, \ldots, A_k$ such that $\mathrm{rk}\,_p A^{(c)} = d$. By a similar argument to the $c=n$ case, the surjectivity of the linear map $A^{(c)} : \mathbb{F}^{c-1}_p \rightarrow \mathbb{F}^d_p$ implies that their contributions amount to \begin{equation*}
\frac{p^{c-1}}{p^d|\mathcal M_{p}|}\underset{\mathrm{rk}\,_p A^{(c)} = d}{\sum_{A_1 \in \mathrm{Gr}(\mathbb{Z}^n,d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(\mathbb{Z}^n,d_k)}} f_{p^{d_1/n}H_1}(A_1L) \ldots f_{p^{d_k/n}H_k}(A_kL). \end{equation*} We simply drop the rank condition and bound this by \begin{equation*}
\frac{p^{c-1}}{|\mathcal M_{p}|}\left(\prod_{i=1}^k a(n,d_i)H_i^n + o_{L, H_1, \ldots, H_k}(1)\right), \end{equation*} which clearly vanishes as $p \rightarrow \infty$.
Continue with the assumption $c > d$, but this time suppose $\mathrm{rk}\,_p A^{(c)} < d$. If $\det A^{(c)} \geq p$ (over $\mathbb{Q}$), then we can argue exactly as in (i) in Section 3.1 above and show it does not contribute to \eqref{eq:c<n}. The case $\det A^{(c)} = 0$ will be handled below.
\subsection{Error terms, part 2}
We now assume that either $c > d$ and $\det A^{(c)} = 0$, or $c \leq d$, in which case $\det A^{(c)} = 0$ necessarily. Write $\mathrm{rk}\,_p A^{(c)} = c' < \min(c,d)$. We claim that we may assume $\mathrm{rk}\, A^{(c)} = c'$ as well. If not, then $\mathrm{rk}\, A^{(c)} > \mathrm{rk}\,_p A^{(c)}$, and thus the HNF of $A^{(c)}$ has a leading coefficient (also called a pivot) that is a nonzero multiple of $p$. But this implies that $\det A \geq p$ by the Cauchy-Binet formula, and we can again argue as in (i) in Section 3.1 to show that this $A$ contributes zero to \eqref{eq:c<n}.
Suppose in addition that $\mathrm{rk}\, A^{(c+1)} = c' + 1$. Then the HNF of $A$ has a pivot in column $c$, and it follows that the HNF of $AM$ has a pivot in column $c$ that is a multiple of $p$. This implies $\det AM \geq p$, and again we argue as in (i) in Section 3.1.
Summarizing our argument so far, it remains to consider the case in which $\det A^{(c)} = 0$, and $\mathrm{rk}\,_p A^{(c)} = \mathrm{rk}\, A^{(c)} = \mathrm{rk}\, A^{(c+1)} = c' < \min(c,d)$. For integers $1 \leq r_1 < r_2 < \ldots < r_{c'} \leq d$, let $r = (r_1, \ldots, r_{c'})$, and for a matrix $B$ with $d$ rows, denote by $B|_r$ the matrix with $c'$ rows whose $i$-th row is the $r_i$-th row of $B$. For each $r$, let us restrict \eqref{eq:c<n} to those $A$ for which the rows of $A^{(c)}|_r$ are linearly independent. Thus, we are considering the following restriction of \eqref{eq:c<n}:
\begin{equation*}
\frac{1}{|\mathcal M_{p}|}\sum_{M \in \mathcal{M}^{(c)}_p}\underset{\mathrm{rk}\, A = d,\, \mathrm{rk}\, A^{(c)}|_r = \mathrm{rk}\,_p A^{(c)} = \mathrm{rk}\, A^{(c)} = \mathrm{rk}\, A^{(c+1)} = c'}{\sum_{A_1 \in \mathrm{Gr}(\mathbb{Z}^n, d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(\mathbb{Z}^n, d_k)}} f_{p^{d_1/n}H_1}(A_1ML) \ldots f_{p^{d_k/n}H_k}(A_kML). \end{equation*}
For each $A$ appearing in this sum, there exists a rational $d \times c'$ matrix $R$ such that $A^{(c)} = RA^{(c)}|_r$. Due to the rank condition $\mathrm{rk}\, A^{(c)} = \mathrm{rk}\, A^{(c+1)}$, we also must have $A^{(c+1)} = RA^{(c+1)}|_r$. In other words, $R$ and $A^{(c+1)}|_r$ determine $A^{(c+1)}$ under our current assumptions. With this understanding, we can rewrite the above sum as \begin{align*}
&\frac{1}{|\mathcal M_{p}|} \sum_R \sum_{M \in \mathcal{M}^{(c)}_p} \\
&\underset{\mathrm{rk}\, A = d, A^{(c)} = RA^{(c)}|_r, \atop \mathrm{rk}\, A^{(c)}|_r = \mathrm{rk}\,_p A^{(c)} = \mathrm{rk}\, A^{(c)} = \mathrm{rk}\, A^{(c+1)} = c'}{\sum_{A_1 \in \mathrm{Gr}(\mathbb{Z}^n, d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(\mathbb{Z}^n, d_k)}} f_{p^{d_1/n}H_1}(A_1ML) \ldots f_{p^{d_k/n}H_k}(A_kML), \end{align*}
where the sum over $R$ is over all $d \times c'$ matrices such that the inner sum is nontrivial. Similarly to the argument in Section 3.1, for each $A$ appearing in the sum, as $M$ ranges over $\mathcal{M}^{(c)}_p$, the $c$-th column of $A|_rM$ \begin{equation*} \left( \sum_{j=1}^{c-1} a_{r_1j} x_j + pa_{r_1c}, \ldots, \sum_{j=1}^{c-1} a_{r_{c'}j} x_j + pa_{r_{c'}c} \right)^T \end{equation*} becomes equidistributed mod $p$. Also, multiplying by $M$ from the right keeps all the rank conditions invariant. Thus the above sum becomes \begin{align}
\frac{p^{c-1}}{p^{c'}|\mathcal{M}_p|} \sum_R \underset{\mathrm{rk}\, A = d, A^{(c)} = RA^{(c)}|_r, \atop \mathrm{rk}\, A^{(c)}|_r = \mathrm{rk}\,_p A^{(c)} = \mathrm{rk}\, A^{(c)} = \mathrm{rk}\, A^{(c+1)} = c'}{\sum_{A_1 \in \mathrm{Gr}(\mathbb{Z}^n, d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(\mathbb{Z}^n, d_k)}} f_{p^{d_1/n}H_1}(A_1L) \ldots f_{p^{d_k/n}H_k}(A_kL) \notag \\
= \frac{p^{c-1}}{p^{c'}|\mathcal{M}_p|}\underset{\mathrm{rk}\, A = d,\, \mathrm{rk}\, A^{(c)}|_r = \mathrm{rk}\, A^{(c)} = \mathrm{rk}\, A^{(c+1)} = c'
}{\sum_{A_1 \in \mathrm{Gr}(\mathbb{Z}^n,d_1)} \ldots \sum_{A_k \in \mathrm{Gr}(\mathbb{Z}^n,d_k)}} f_{p^{d_1/n}H_1}(A_1L) \ldots f_{p^{d_k/n}H_k}(A_kL). \label{eq:abcde} \end{align}
It remains to estimate this sum \eqref{eq:abcde}. By dropping the rank conditions and applying Theorem \ref{thm:fixed_L}, \eqref{eq:abcde} is at most \begin{equation*}
\frac{p^{c-c'+d-1}}{|\mathcal{M}_p|}\left(\prod_{i=1}^k a(n,d_i)H_i^{n} + o_{L,H_1, \ldots, H_k}(1)\right), \end{equation*} which approaches $0$ as $p \rightarrow \infty$ provided $n-c > d-c'$ (note that $n-c \geq d-c'$ always). If $n-c = d-c'$, note that the HNF of $A$ is of the form \begin{equation*} \begin{pmatrix} P & * \\ 0 & Q \end{pmatrix}, \end{equation*} where $P$ is a $c' \times c$ matrix, and $Q$ is a $(d-c') \times (n-c)$ matrix, where $P$ and $Q$ are themselves HNFs and must be of full rank. Now since $n-c=d-c'$ by assumption, $Q$ is a square matrix, and therefore $Q$ must be an upper triangular matrix with nonzero diagonal entries. This implies that $A_1 \oplus \ldots \oplus A_k$ intersects nontrivially with $\mathrm{span}_\mathbb{Z}\{(0, \ldots, 0, 1)\}$. Therefore, one of the $A_l$'s intersects nontrivially with the lattice $S_l := A_1 \oplus \ldots \oplus \hat{A}_l \oplus \ldots \oplus A_{k} \oplus \mathrm{span}_\mathbb{Z}\{(0, \ldots, 0, 1)\}$ --- where $\hat{A}_l$ here indicates that $A_l$ does \emph{not} appear in the direct sum --- of dimension at most $d-d_l+1$. But Theorem \ref{thm:fixed_L} implies \begin{equation*} \sum_{A_l \in \mathrm{Gr}(\mathbb{Z}^n, d_l) \atop A_l \cap S_l \neq 0} f_{p^{d_l/n}H_l}(A_lL) = o_{L, H_1, \ldots, H_k}(p^{d_l}), \end{equation*} which implies that \eqref{eq:abcde} is bounded by \begin{equation*}
o_{L,H_1, \ldots, H_k}\left(\frac{p^{c-c'+d-1}}{|\mathcal{M}_p|}\right) = o_{L,H_1, \ldots, H_k}(1), \end{equation*} as desired.
\section{The case of partially overlapping sublattices}
In this section we prove Theorem \ref{thm:overlap}. Recall that we are considering the integral \begin{equation} \label{eq:toy4} \int_{X_n} \sum_{A \in \mathrm{Gr}(L,d_1)} \sum_{B \in \mathrm{Gr}(L,d_2) \atop \mathrm{rk}\, A \cap B = r} f_{H_1}(A) f_{H_2}(B) d\mu_n(L) \end{equation} for $1 \leq r < d_1, d_2$, so that $d_1 + d_2 < n + r$.
Let $L \in X_n$. For a primitive sublattice $C \subseteq L$ of rank $r < n$, we identify the quotient lattice $L/C$ with the projection of $L$ onto the $(n-r)$-dimensional subspace of $\mathbb{R}^n$ orthogonal to $C$, and equip it with the metric induced from that of $L$ by this projection. If $A \in \mathrm{Gr}(L,d)$ satisfies $C \subseteq A$, then it is easy to see that $A/C \in \mathrm{Gr}(L/C, d-r)$, and that it satisfies \begin{equation*} {\det}_L(A) = {\det}_L(C) {\det}_{L/C}(A/C). \end{equation*} Using this relation, we rewrite the sum in \eqref{eq:toy4} as \begin{equation*} \sum_{C \in \mathrm{Gr}(L, r)} \sum_{\bar{A} \in \mathrm{Gr}(L/C, d_1-r) \atop { \bar{B} \in \mathrm{Gr}(L/C, d_2-r) \atop \mathrm{indep}}} f_{H_1/\det_L C}(\bar{A}) f_{H_2/\det_L C}(\bar{B}). \end{equation*}
We will interpret this expression as a pseudo-Eisenstein series, i.e. a function on $X_n$ of the form $\sum_{\gamma \in P \cap \Gamma \backslash \Gamma} f(\gamma L)$ where $\Gamma = \mathrm{SL}(n,\mathbb{Z})$ and $P$ is a parabolic subgroup of $\mathrm{SL}(n,\mathbb{R})$, and then use the unfolding trick. Fix a representative of $L \in X_n$ in $\mathrm{SL}(n,\mathbb{R})$, again denoted by $L$. In this context, a choice of $C \in \mathrm{Gr}(L, r)$ corresponds to two choices of $\gamma \in P(n-r,r,\mathbb{Z}) \backslash \mathrm{SL}(n,\mathbb{Z})$, where \begin{equation*} P(a,b,F) = \left\{ \begin{pmatrix} G_a & * \\ & G_b \end{pmatrix} : G_a \in \mathrm{SL}(a, F), G_b \in \mathrm{SL}(b, F) \right\} \end{equation*} for any ring $F$. Here $C$ and $\gamma$ are related by $C = \mbox{(sublattice generated by last $r$ rows of $\gamma L$)}$, and this correspondence is one-to-two due to the two possible orientations for the last $r$ rows of $\gamma L$.
Next, fixing a representative of $\gamma$, we can uniquely decompose $\gamma L$ in the form \begin{equation} \label{eq:levi_decomp} \begin{pmatrix} G_{n-r} & U \\ & G_r \end{pmatrix} \begin{pmatrix} \alpha^{-\frac{1}{n-r}}I_{n-r} & \\ & \alpha^{\frac{1}{r}}I_r \end{pmatrix} K', \end{equation} for some $G_{n-r} \in \mathrm{SL}(n-r,\mathbb{R})$, $G_{r} \in \mathrm{SL}(r,\mathbb{R})$, $U \in \mathrm{Mat}_{(n-r) \times r}(\mathbb{R})$, and $K'$ an element of a fundamental domain of $(\mathrm{SO}(n-r,\mathbb{R}) \times \mathrm{SO}(r,\mathbb{R})) \backslash \mathrm{SO}(n,\mathbb{R})$, which we fix in advance. In this context, a choice of an independent pair $\bar{A} \in \mathrm{Gr}(L/C, d_1-r)$ and $\bar{B} \in \mathrm{Gr}(L/C, d_2-r)$ corresponds to four choices --- again due to the four possibilities for the orientations --- of $\delta \in P'(n-d_1-d_2+r, d_2-r, d_1-r,\mathbb{Z}) \backslash \mathrm{SL}(n-r, \mathbb{Z})$, where \begin{equation*} P'(a,b,c, F) = \left\{ \begin{pmatrix} G_a & * & * \\
& G_b & 0 \\
& & G_c \end{pmatrix} : G_a \in \mathrm{SL}(a, F), G_b \in \mathrm{SL}(b, F), G_c \in \mathrm{SL}(c, F) \right\} \end{equation*} for any ring $F$. Indeed, $\bar{A}$ is the sublattice generated by the last $d_1-r$ rows of \begin{equation}\label{eq:temtem} \delta \cdot G_{n-r} \cdot \alpha^{-\frac{1}{n-r}}I_{n-r} \cdot \left(\mbox{first $n-r$ rows of $K'$} \right); \end{equation} this expression is independent of $U$ because $\bar A$ is orthogonal to $C$, and equivalently also to the last $r$ rows of $K'$. Similarly, $\bar{B}$ is the sublattice generated by the next last $d_2-r$ rows of \eqref{eq:temtem}. But since we are only interested in the determinants of $\bar{A}$ and $\bar{B}$, in what follows we can regard them as the sublattices generated by the corresponding rows of $\delta \cdot \alpha^{-\frac{1}{n-r}}G_{n-r}$.
The considerations so far allow us to rewrite \eqref{eq:toy4} as \begin{equation*} \frac{1}{8}\int_{P(n-r,r,\mathbb{Z}) \backslash \mathrm{SL}(n,\mathbb{R})} \sum_{\delta} f_{H_1/\alpha}(\bar{A}) f_{H_2/\alpha}(\bar{B}) d\mu_n, \end{equation*} where $\delta$ is summed over $P'(n-d_1-d_2+r, d_2-r, d_1-r,\mathbb{Z}) \backslash \mathrm{SL}(n-r, \mathbb{Z})$.
Lemma \ref{lemma:measure} below, which gives a decomposition of $d\mu_n$ compatible with \eqref{eq:levi_decomp}, implies that this equals \begin{equation*} \mathrm{(const)}\int_0^\infty \int_{X_{n-r}} \sum_{\bar{A} \in \mathrm{Gr}(\alpha^{-\frac{1}{n-r}}G_{n-r},d_1-r) \atop {\bar{B} \in \mathrm{Gr}(\alpha^{-\frac{1}{n-r}}G_{n-r},d_2-r) \atop \mathrm{indep}}} f_{H_1/\alpha}(\bar{A}) f_{H_2/\alpha}(\bar{B}) d\mu_{n-r}(G_{n-r}) \cdot \alpha^{n-1} d\alpha, \end{equation*} which, by Theorem \ref{thm:main} (notice that $\det\alpha^{-\frac{1}{n-r}}G_{n-r} = \alpha^{-1}$, so we normalize accordingly), equals \begin{equation*} \mathrm{(const)}\int_0^\infty H_1^{n-r}H_2^{n-r}\alpha^{-n+d_1+d_2-1}d\alpha, \end{equation*} which is divergent, proving Theorem \ref{thm:overlap}. One way to understand this phenomenon is that, if $L \in X_n$ is heavily skewed i.e. its successive minima have a huge gap, there may exist too many possibilities for $C = A \cap B$. Indeed, if we additionally required that $\alpha = \det C \geq 1$, say, then we would have instead obtained \begin{equation*} \mathrm{(const)}\int_1^\infty H_1^{n-r}H_2^{n-r}\alpha^{-n+d_1+d_2-1}d\alpha, \end{equation*} which converges, at least when $d_1+d_2 < n$.
Before we prove the needed lemma, we fix our notations related to $d\mu_n$. Recall the standard fact (see e.g. \cite{KV}) that, with respect to the $NAK$ decomposition of $\mathrm{SL}(n,\mathbb{R})$, we can write $d\mu_n$ as \begin{equation*} d\mu_n = \tau(n)dN \cdot dA \cdot dK, \end{equation*} where $\tau(n)$ is some constant, $dN = \prod_{i < j} dn_{ij}$, $dA = \prod_i \alpha_i^{-i(n-i)} d\alpha_i / \alpha_i$ upon writing $A = \mathrm{diag}(a_1, \ldots, a_n)$ and $\alpha_i = a_i/a_{i+1}$, and $dK$ is the Haar measure on $\mathrm{SO}(n,\mathbb{R})$ so that \begin{equation*} \int_{\mathrm{SO}(n,\mathbb{R})} dK = \prod_{i=2}^n iV(i). \end{equation*} To make $\int_{X_n} d\mu_n = 1$, we set \begin{equation*} \tau(n) = \frac{1}{n}\prod_{i=2}^n \zeta^{-1}(i). \end{equation*}
\begin{lemma}\label{lemma:measure}
With respect to the decomposition \eqref{eq:levi_decomp} of $\mathrm{SL}(n,\mathbb{R})$, we have \begin{equation*} d\mu_n = \frac{n}{r(n-r)} \cdot \frac{\tau(n)}{\tau(r)\tau(n-r)}dU d\mu_r(G_r) d\mu_{n-r}(G_{n-r}) \alpha^{n-1} d\alpha dK', \end{equation*} where $dU = \prod_{1 \leq i \leq n-r \atop 1 \leq j \leq r} du_{ij}$ on writing $U = (u_{ij})_{1 \leq i \leq n-r \atop 1 \leq j \leq r}$, and $dK'$ is the natural measure on $(\mathrm{SO}(n-r,\mathbb{R}) \times \mathrm{SO}(r,\mathbb{R})) \backslash \mathrm{SO}(n,\mathbb{R})$ descended from the measure $dK$ on $\mathrm{SO}(n,\mathbb{R})$.
\end{lemma}
\begin{proof} The only nontrivial part of the proof consists of comparing the diagonal parts, or the ``$A$ parts'', of the measures $d\mu_{n}, d\mu_{n-r}, d\mu_r$. We can decompose \begin{equation*} \begin{pmatrix} a_1 & & & & \\
& & & & \\
& & \ddots & & \\
& & & & \\
& & & & a_n \end{pmatrix} = \begin{pmatrix} b'_1 & & & & & \\
& \ddots & & & & & \\
& & b'_{n-r} & & & \\
& & & b''_1 & & \\
& & & & \ddots & \\
& & & & & b''_r \end{pmatrix} \begin{pmatrix} \alpha^{\frac{-1}{n-r}} & & & & & \\
& \ddots & & & & & \\
& & \alpha^{\frac{-1}{n-r}} & & & \\
& & & \alpha^{\frac{1}{r}} & & \\
& & & & \ddots & \\
& & & & & \alpha^{\frac{1}{r}} \end{pmatrix}, \end{equation*} where there is the relation $\prod a_i = \prod b'_i = \prod b''_i = 1$ among the entries. Write $\alpha_i = a_i/a_{i+1}, \beta'_i = b'_i/b'_{i+1}, \beta''_i = b''_i/b''_{i+1}$. Then the measures on the ``A parts'' of the groups $G_n, G_{n-r}, G_r$ are, respectively, \begin{equation*} dA = \prod_i \alpha_i^{-i(n-i)}\frac{d\alpha_i}{\alpha_i}, \quad dB' = \prod_i \beta_i'^{-i(n-r-i)}\frac{d\beta'_i}{\beta'_i}, \quad dB'' = \prod_i \beta_i''^{-i(r-i)}\frac{d\beta''_i}{\beta''_i}. \end{equation*}
It remains to perform the change of coordinates from the $\alpha_i$-coordinates to the $\beta'_i, \beta''_i, \alpha$-coordinates. We have $\alpha_i = \beta'_i$ for $1 \leq i \leq n-r-1$ and $\alpha_{n-r+i} = \beta''_i$ for $1 \leq i \leq r-1$, and the single nontrivial relation \begin{equation*} \alpha_{n-r} = \frac{b'_{n-r}}{b''_1}\alpha^{-\frac{n}{r(n-r)}}, \end{equation*} which follows from $a_{n-r} = b'_{n-r}\alpha^{-\frac{1}{n-r}}$, $a_{n-r+1} = b''_1\alpha^{\frac{1}{r}}$, and $\frac{1}{n-r}+\frac{1}{r} = \frac{n}{r(n-r)}$.
At this point, we can compute and find that \begin{equation} \label{eq:subsub} dA = dB'dB'' \cdot \prod_{i=1}^{n-r-1} \beta'^{-ri}_i \prod_{i=1}^{r-1} \beta_{i}''^{-(n-r)(r-i)} \cdot \alpha_{n-r}^{-r(n-r)}\frac{d\alpha_{n-r}}{\alpha_{n-r}}. \end{equation}
On the other hand, from the shape of the Jacobian matrix \begin{equation*} \begin{pmatrix} - & \frac{\partial\alpha_i}{\partial\beta'_1} & - \\
& \vdots & \\ - & \frac{\partial\alpha_i}{\partial\alpha} & - \\
& \frac{\partial\alpha_i}{\partial\beta''_1} &\\ - & \vdots & - \end{pmatrix} = \begin{pmatrix} 1 & & & * & & & \\
& \ddots & & \vdots & & & \\
& & 1 & * & & & \\
& & & \frac{\partial\alpha_{n-r}}{\partial\alpha} & & & \\
& & & * & 1 & & \\
& & & \vdots & & \ddots & \\
& & & * & & & 1 \end{pmatrix} \end{equation*} and the fact that \begin{equation*} \frac{\partial\alpha_{n-r}}{\partial\alpha} = -\frac{n}{r(n-r)}\alpha_{n-r}\alpha^{-1}, \end{equation*} we have \begin{equation*} -\frac{d\alpha_{n-r}}{\alpha_{n-r}} = \frac{n}{r(n-r)}\frac{d\alpha}{\alpha} + \mbox{(terms in $d\beta'$ and $d\beta''$)}, \end{equation*} and the $d\beta'$ and $d\beta''$ parts here do not affect the outcome of the computation. Thus we can pretend that we have \begin{align*} &\alpha_{n-r}^{-r(n-r)}\frac{d\alpha_{n-r}}{\alpha_{n-r}} \\ &= \frac{-n}{r(n-r)}\left(\frac{b'_{n-r}}{b''_1}\alpha^{-\frac{n}{r(n-r)}}\right)^{-r(n-r)}\frac{d\alpha}{\alpha} \\ &= \frac{-n}{r(n-r)} \frac{\prod_{i=1}^{n-r-1} \beta'^{ri}_i}{\prod_{i=1}^{r-1} \beta_{i}''^{-(n-r)(r-i)}}\alpha^{n} \frac{d\alpha}{\alpha}. \end{align*}
Substituting this into \eqref{eq:subsub}, we obtain \begin{equation*} dA = \frac{-n}{r(n-r)}dB'dB''\alpha^n\frac{d\alpha}{\alpha}; \end{equation*} by reorienting $\alpha$ (which moves in the opposite direction to $\alpha_{n-r}$) we can eliminate the negative sign, and this completes the proof.
\end{proof}
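As a quick sanity check of the comparison of the diagonal parts (this verification is ours and is not part of the original argument), take $n=2$ and $r=1$. Then $a_1=\alpha^{-1}$, $a_2=\alpha$, and $\alpha_1=a_1/a_2=\alpha^{-2}$, so
\begin{equation*}
dA = \alpha_1^{-1}\,\frac{d\alpha_1}{\alpha_1} = \alpha^{4}\cdot\left(-2\alpha^{-3}\right)d\alpha = -2\,\alpha\,d\alpha,
\end{equation*}
which, after reorienting $\alpha$, agrees with $\frac{n}{r(n-r)}\,dB'\,dB''\,\alpha^{n-1}\,d\alpha = 2\,\alpha\,d\alpha$, since the $B'$ and $B''$ parts are trivial for $\mathrm{SL}(1)$.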
\section{Average number of flags}
The same technique as in the previous section can be applied to compute the $\mu_n$-average number of flags bounded by certain constraints, even though such a formula for a fixed lattice is not known and is probably difficult to find. In this section, we compute the average number of flags such that $\det A_i \leq H_i$ for $i = 1, \ldots, k$, i.e. the quantity \begin{equation*} \int_{X_n} \sum_{A_1 \subseteq \ldots \subseteq A_k \subseteq L \atop \dim A_i = d_i} \prod_{i=1}^k f_{H_i}(A_i) d\mu_n, \end{equation*} or equivalently \begin{equation*} \int_{X_n} \sum_{A_1 \in \mathrm{Gr}(L, d_1)} \sum_{\bar{A}_2 \in \mathrm{Gr}(L/A_1, d_2-d_1)} \ldots \sum_{\bar{A}_k \in \mathrm{Gr}(L/A_{k-1}, d_k-d_{k-1})} f_{H_1}(A_1)f_{H_2/\det A_1}(A_2) \ldots d\mu_n, \end{equation*} thereby proving Theorem \ref{thm:flags}.
First consider the case $k=2$, in which we are computing \begin{equation*} \int_{X_n} \sum_{A_1 \in \mathrm{Gr}(L, d_1)} \sum_{\bar{A}_2 \in \mathrm{Gr}(L/A_1, d_2-d_1)} f_{H_1}(A_1)f_{H_2/\det A_1}(A_2) d\mu_n. \end{equation*} By the same argument as in the previous section, and Lemma \ref{lemma:measure}, this equals \begin{align*} &\frac{n}{d_1(n-d_1)} \cdot \frac{\tau(n)}{\tau(d_1)\tau(n-d_1)} \cdot \frac{1}{2}\mathrm{vol}\left(\frac{\mathrm{SO}(n,\mathbb{R})}{\mathrm{SO}(n-d_1,\mathbb{R}) \times \mathrm{SO}(d_1,\mathbb{R})}\right) \cdot \\ &\int \int_{X_{d_1}}d\mu_{d_1} \int_{X_{n-d_1}} \sum_{\bar{A}_2} f_{H_2/\alpha}(\bar{A}_2) d\mu_{n-d_1} f_{H_1}(\alpha)\alpha^{n-1}d\alpha. \end{align*} Using the fact that $\mathrm{vol}(\mathrm{SO}(n,\mathbb{R})) = \prod_{i=2}^n iV(i)$, one finds that the product of the terms on the first line here equals $na(n,d_1)$. By Theorem \ref{thm:main}, the integral part is equal to \begin{align*} &a(n-d_1, d_2 - d_1)\int_0^{H_1} \alpha^{n-1} \cdot \alpha^{d_2-d_1} \left(\frac{H_2}{\alpha}\right)^{n-d_1} d\alpha \\ &= a(n-d_1, d_2 - d_1)H_2^{n-d_1} \int_0^{H_1} \alpha^{d_2-1}d\alpha \\ &= \frac{a(n-d_1, d_2 - d_1)}{d_2}H_1^{d_2}H_2^{n-d_1}. \end{align*} This proves the $k=2$ case. For general $k$, we proceed by induction: we have \begin{align*} &\frac{n}{d_1(n-d_1)} \cdot \frac{\tau(n)}{\tau(d_1)\tau(n-d_1)} \cdot \frac{1}{2}\mathrm{vol}\left(\frac{\mathrm{SO}(n,\mathbb{R})}{\mathrm{SO}(n-d_1,\mathbb{R}) \times \mathrm{SO}(d_1,\mathbb{R})}\right) \cdot \\ &\int \int_{X_{d_1}}d\mu_{d_1} \int_{X_{n-d_1}} \sum_{\bar{A}_2, \ldots, \bar{A}_k} f_{H_2/\alpha}(\bar{A}_2) \ldots f_{H_k/\det A_{k-1}\alpha}(\bar{A}_k) d\mu_{n-d_1} f_{H_1}(\alpha)\alpha^{n-1}d\alpha. \end{align*} The first line is exactly the same as in the $k=2$ case. As for the integral, writing $\mathfrak{d}' = (d_2-d_1, \ldots, d_k-d_1)$, and using the induction hypothesis, we find that it is equal to \begin{align*} &a(n-d_1, \mathfrak{d}')\int_0^{H_1} \alpha^{n-1} \cdot \alpha^{d_k-d_1} \prod_{i=2}^{k}\left(\frac{H_i}{\alpha}\right)^{d_{i+1}-d_{i-1}} d\alpha \\ &= a(n-d_1, \mathfrak{d}')\prod_{i=2}^{k}{H_i}^{d_{i+1}-d_{i-1}}\int_0^{H_1} \alpha^{d_2-1} d\alpha \\ &= \frac{a(n-d_1, \mathfrak{d}')}{d_2}\prod_{i=1}^{k}{H_i}^{d_{i+1}-d_{i-1}}. \end{align*} Since \begin{equation*} a(n-d_1, \mathfrak{d}') = a(n-d_1, d_2-d_1) \prod_{i=2}^{k-1} \frac{n-d_{i-1}}{d_{i+1}-d_{i-1}}a(n-d_i, d_{i+1} - d_i), \end{equation*} this gives the desired result.
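Note that, with the conventions $d_0=0$ and $d_{k+1}=n$ implicit in the exponents above, the general formula specializes to the $k=2$ case computed first, since
\begin{equation*}
\prod_{i=1}^{2}H_i^{d_{i+1}-d_{i-1}} = H_1^{d_2}H_2^{n-d_1}.
\end{equation*}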
\end{document}
\begin{document}
\title{The existence of partitioned balanced tournament designs}
\begin{abstract} E. R. Lamken proved in \cite{pbtd} that there exists a partitioned balanced tournament design of side $n$, PBTD($n$), for every positive integer $n \ge 5$, except possibly for $n \in \{9,11,15\}$. In this article, we show the existence of a PBTD($n$) for $n \in \{9,11,15\}$.
As a consequence, the existence of PBTD($n$) has been completely determined.
\end{abstract}
\section{Introduction}
A {\it partitioned balanced tournament design of side $n$}, \textrm{PBTD($n$)}, defined on a $2n$-set $V$, is an arrangement of the ${2n \choose 2}$ distinct unordered pairs of the elements of $V$ into an $n\times (2n-1)$ array such that \begin{enumerate} \item every element of $V$ is contained in precisely one cell of each column, \item every element of $V$ is contained in at most two cells of any row, \item each row contains all $2n$ elements of $V$ in the first $n$ columns, and \item each row contains all $2n$ elements of $V$ in the last $n$ columns, \end{enumerate}
\noindent see \cite{hb}.
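For concreteness, the four conditions above can be checked mechanically. The following Python fragment is a small illustrative sketch of such a check; it is not the GAP code referred to later, and the function name \texttt{is\_pbtd} and the representation of cells as two-element \texttt{frozenset}s are choices made only for this illustration.
\begin{verbatim}
from itertools import combinations

def is_pbtd(array, n):
    # array: n rows, each a list of 2n-1 cells; every cell is a
    # frozenset of two elements of V = {0, ..., 2n-1}.
    V = set(range(2 * n))
    cols = 2 * n - 1
    cells = [array[r][c] for r in range(n) for c in range(cols)]
    # all binom(2n,2) unordered pairs occur, each exactly once
    if sorted(tuple(sorted(cell)) for cell in cells) != \
            sorted(combinations(range(2 * n), 2)):
        return False
    # (1) every element lies in precisely one cell of each column
    for c in range(cols):
        if sorted(x for r in range(n) for x in array[r][c]) != sorted(V):
            return False
    # (2) every element lies in at most two cells of any row
    for r in range(n):
        if any(sum(v in cell for cell in array[r]) > 2 for v in V):
            return False
    # (3), (4) each row covers V in the first n and in the last n columns
    for r in range(n):
        if set().union(*array[r][:n]) != V or set().union(*array[r][-n:]) != V:
            return False
    return True
\end{verbatim}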
E. R. Lamken proved the following theorem.
\begin{thm}[\cite{pbtd}]\label{th:Lamken} There exists a PBTD{\normalfont ($n$)} for $n$ a positive integer, $n \ge 5$, except possibly for $n \in \{9,11,15\}$. \end{thm}
Let $V$ be a $2n$-set. A {\it Howell design} $H(s,2n)$ is an $s \times s$ array, $H$, that satisfies the following three conditions: \begin{enumerate} \item every cell of $H$ is empty or contains an unordered pair of elements from $V$, \item each element of $V$ occurs in each row and each column of $H$, and \item each unordered pair of elements from $V$ occurs in at most one cell of $H$, \end{enumerate}
\noindent see \cite{howell}.
For $T$ a PBTD($n$), let $T^L,T^C$ and $T^R$ be the first $(n-1)$ columns, the $n$-th column and the last $(n-1)$ columns of $T$, respectively. Then ($T^L\ T^C$) and ($T^R\ T^C$) are Howell designs $H(n,2n)$. These two designs are called {\it almost disjoint}. Conversely, if there is a pair of almost disjoint Howell designs, then there is a partitioned balanced tournament design.
By computer calculation, we found pairs of almost disjoint Howell designs $H(n,2n)$ for $n \in \{9,11,15\}$; they are displayed in Figures~1, 2 and 3. Hence the following theorem holds. \begin{thm}\label{th:ArayaTokihisa} Partitioned balanced tournament designs of side $n$ exist for $n \in \{9,11,15\}$. \end{thm}
It is not difficult to show that there is no PBTD($n$) for $n \le 4$. Therefore we have the following corollary from Theorems \ref{th:Lamken} and \ref{th:ArayaTokihisa}. \begin{cor}
There exists a PBTD{\normalfont ($n$)} if and only if $n$ is a positive integer, $n \ge 5$. \end{cor}
\begin{figure}
\caption{a pair of almost disjoint Howell designs $H(9,18)$}
\end{figure}
\begin{figure}
\caption{a pair of almost disjoint Howell designs $H(11,22)$}
\end{figure}
\begin{landscape} \begin{figure}
\caption{a pair of almost disjoint Howell designs $H(15,30)$}
\end{figure} \end{landscape}
\section{Observations} Let $V=\{0,1,\dots,2n-1\}$ be a $2n$-set and $T=(T^L\ T^C\ T^R)$ a PBTD($n$). Suppose $A$ is the array obtained from $T$ by permuting the elements of $V$, the rows, the first $n-1$ columns, or the last $n-1$ columns, or by setting $A=(T^R\ T^C\ T^L)$. Then $A$ is also a PBTD($n$). Two PBTD($n$)'s are {\it isomorphic} if one can be obtained from the other by these operations. By permuting the elements of $V$, we may assume that $T^C$ is the transpose of the array $(\{0,1\}\ \{2,3\}\ \dots\ \{2n-2,2n-1\})$.
From Dinitz and Dinitz~\cite{pbtd10}, there exist two PBTD($5$)'s up to isomorphism. For these two PBTD($5$)'s, we find that there exists the permutation $$\sigma=(0,1)(2,3)(4,5)(6,7)(8,9)$$ such that
$$T^L=
\begin{array}{|c|c|c|c|} \hline t_{11} & \sigma(t_{11}) & t_{13} & \sigma(t_{13})\\\hline t_{21} & \sigma(t_{21}) & t_{23} & \sigma(t_{23})\\\hline t_{31} & \sigma(t_{31}) & t_{33} & \sigma(t_{33})\\\hline t_{41} & \sigma(t_{41}) & t_{43} & \sigma(t_{43})\\\hline t_{51} & \sigma(t_{51}) & t_{53} & \sigma(t_{53})\\\hline \end{array} \text{ and } T^R=
\begin{array}{|c|c|c|c|} \hline t_{16} & \sigma(t_{16}) & t_{18} & \sigma(t_{18})\\\hline t_{26} & \sigma(t_{26}) & t_{28} & \sigma(t_{28})\\\hline t_{36} & \sigma(t_{36}) & t_{38} & \sigma(t_{38})\\\hline t_{46} & \sigma(t_{46}) & t_{48} & \sigma(t_{48})\\\hline t_{56} & \sigma(t_{56}) & t_{58} & \sigma(t_{58})\\\hline \end{array} \ .$$ Thus we observe that these two PBTD($5$)'s are determined by some $4$ columns and the permutation $\sigma$.
Seah and Stinson~\cite{pbtd7} obtained two almost disjoint Howell designs $H(7,14)$ by computer calculation for a given $T^L$ constructed by E. R. Lamken. Then for these two PBTD($7$)'s, we find that there exists the permutation $$\sigma=(0,1)(2,3)(4,5)(6,7)(8,9)(10,11)(12,13)$$ such that $$T^L=
\begin{array}{|c|c|c|c|c|c|} \hline t_{11} & \sigma(t_{11}) & t_{13} & \sigma(t_{13}) & t_{15} & \sigma(t_{15})\\\hline t_{21} & \sigma(t_{21}) & t_{23} & \sigma(t_{23}) & t_{25} & \sigma(t_{25})\\\hline t_{31} & \sigma(t_{31}) & t_{33} & \sigma(t_{33}) & t_{35} & \sigma(t_{35})\\\hline t_{41} & \sigma(t_{41}) & t_{43} & \sigma(t_{43}) & t_{45} & \sigma(t_{45})\\\hline t_{51} & \sigma(t_{51}) & t_{53} & \sigma(t_{53}) & t_{55} & \sigma(t_{55})\\\hline t_{61} & \sigma(t_{61}) & t_{63} & \sigma(t_{63}) & t_{65} & \sigma(t_{65})\\\hline t_{71} & \sigma(t_{71}) & t_{73} & \sigma(t_{73}) & t_{75} & \sigma(t_{75})\\\hline \end{array}\ .$$
\noindent Also we find that there exists the permutation $$\tau = (0,2,4)(1,3,5)(8,10,12)(9,11,13)$$ such that
$$T^L=
\begin{array}{|c|c|c|c|c|c|} \hline t_{11} & t_{12} & t_{13} & t_{14} & t_{15} & t_{16}\\\hline \tau(t_{15}) & \tau(t_{16}) & \tau(t_{11}) & \tau(t_{12}) & \tau(t_{13}) & \tau(t_{14})\\\hline \tau^2(t_{13}) & \tau^2(t_{14}) &\tau^2(t_{15}) & \tau^2(t_{16}) & \tau^2(t_{11}) & \tau^2(t_{12}) \\\hline t_{41} & t_{42} & \tau(t_{41}) & \tau(t_{42}) & \tau^2(t_{41}) & \tau^2(t_{42})\\\hline t_{51} & t_{52} & t_{53} & t_{54} & t_{55} & t_{56}\\\hline \tau(t_{55}) & \tau(t_{56}) & \tau(t_{51}) & \tau(t_{52}) & \tau(t_{53}) & \tau(t_{54})\\\hline \tau^2(t_{53}) & \tau^2(t_{54}) &\tau^2(t_{55}) & \tau^2(t_{56}) & \tau^2(t_{51}) & \tau^2(t_{52})\\\hline \end{array}$$ and $$T^R=
\begin{array}{|c|c|c|c|c|c|} \hline t_{1,8} & t_{1,9} & t_{1,10} & t_{1,11} & t_{1,12} & t_{1,13}\\\hline \tau(t_{1,10}) & \tau(t_{1,8}) & \tau(t_{1,9}) & \tau(t_{1,13}) & \tau(t_{1,11}) & \tau(t_{1,12})\\\hline \tau^2(t_{1,9}) & \tau^2(t_{1,10}) & \tau^2(t_{1,8}) & \tau^2(t_{1,12}) & \tau^2(t_{1,13}) & \tau^2(t_{1,11})\\\hline t_{4,8} & \tau(t_{4,8}) & \tau^2(t_{4,8}) & t_{4,11} & \tau(t_{4,11}) & \tau^2(t_{4,11})\\\hline t_{5,8} & t_{5,9} & t_{5,10} & t_{5,11} & t_{5,12} & t_{5,13}\\\hline \tau(t_{5,10}) & \tau(t_{5,8}) & \tau(t_{5,9}) & \tau(t_{5,13}) & \tau(t_{5,11}) & \tau(t_{5,12})\\\hline \tau^2(t_{5,9}) & \tau^2(t_{5,10}) & \tau^2(t_{5,8}) & \tau^2(t_{5,12}) & \tau^2(t_{5,13}) & \tau^2(t_{5,11})\\\hline \end{array}\ .$$ Thus we observe that $T^L$ is determined by some $7$ cells and the permutations $\sigma$ and $\tau$, and $T^R$ is determined by some $14$ cells and the permutation $\tau$.
Based on these observations, we wrote GAP programs \cite{GAP4} to construct partitioned balanced tournament designs.
The designs we found are displayed in Figures 1, 2 and 3.
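The fragment below is a schematic Python illustration of how the $\sigma$-symmetry observed above halves the number of unknown columns: only the odd-numbered columns of $T^L$ need to be searched, and the remaining columns are obtained by applying $\sigma$ cellwise. It is not the actual GAP code; the function name \texttt{expand\_by\_sigma} and the dictionary representation of $\sigma$ are choices made only for this illustration.
\begin{verbatim}
def expand_by_sigma(half_columns, sigma):
    # half_columns: the columns t_{*,1}, t_{*,3}, ... of T^L, each given
    # as a list of cells (frozensets); sigma: the involution on V as a dict.
    full = []
    for col in half_columns:
        full.append(col)  # the searched column t_{*,2j-1}
        # the adjacent column is sigma applied cellwise
        full.append([frozenset(sigma[v] for v in cell) for cell in col])
    return full
\end{verbatim}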
\end{document}
\begin{document}
\title{Coproducts in brane topology} \begin{abstract}
We extend the loop product and the loop coproduct
to the mapping space from the $k$-dimensional sphere, or more generally from any $k$-manifold,
to a $k$-connected space with finite dimensional rational homotopy groups, $k\geq 1$.
The key to extending the loop coproduct is the fact that
the embedding $M\rightarrow M^{S^{k-1}}$ is of ``finite codimension''
in the sense of Gorenstein spaces.
Moreover, we prove the associativity, the commutativity, and the Frobenius compatibility of these operations. \end{abstract}
\section{Introduction} Chas and Sullivan \cite{chas-sullivan} introduced the loop product on the homology $\homol{\loopsp{M}}$ of the free loop space $\loopsp{M}=\map(S^1, M)$ of a manifold. Cohen and Godin \cite{cohen-godin} extended this product to other string operations, including the loop coproduct.
Generalizing these constructions, F\'elix and Thomas \cite{felix-thomas09} defined the loop product and coproduct in the case where $M$ is a Gorenstein space. A Gorenstein space is a generalization of a manifold from the point of view of Poincar\'e duality; examples include the classifying space of a connected Lie group and the Borel construction of a connected oriented closed manifold and a connected Lie group. But these operations tend to be trivial in many cases. Let $\mathbb K$ be a field of characteristic zero. For example, the loop coproduct is trivial for a manifold with Euler characteristic zero \cite[Corollary 3.2]{tamanoi}, the composition of the loop coproduct followed by the loop product is trivial for any manifold \cite[Theorem A]{tamanoi}, and the loop product over $\mathbb K$ is trivial for the classifying space of a connected Lie group \cite[Theorem 14]{felix-thomas09}. No space for which the composition of the loop coproduct followed by the loop product is nontrivial has been found.
On the other hand, Sullivan and Voronov \todo{article([SV05] in \cite{cohen-hess-voronov}) not found} generalized the loop product to the sphere space $\spheresp[k]{M}=\map(S^k, M)$\todo{Voronov's notation} for $k\geq 1$. This product is called the \textit{brane product}. See \cite[Part I, Chapter 5]{cohen-hess-voronov}.
In this article, we will generalize the loop coproduct to sphere spaces, to construct nontrivial and interesting operations. We call this coproduct the \textit{brane coproduct}.
Here, we review briefly the construction of the loop product and the brane product. For simplicity, we assume $M$ is a connected oriented closed manifold of dimension $m$. The loop product is constructed as a mixture of the Pontrjagin product $\homol{\Omega M \times \Omega M} \rightarrow \homol{\Omega M}$ defined by the composition of based loops and the intersection product $\homol{M\times M} \rightarrow \homol[*-m]{M}$. More precisely, we use the following diagram \begin{equation}
\label{equation:loopProdDiagram}
\xymatrix{
\loopsp{M}\times\loopsp{M} \ar_{\mathrm{ev}_1\times\mathrm{ev}_1}[d] & \loopsp{M}\times_M\loopsp{M} \ar_-\mathrm{incl}[l]\ar[d]\ar^-\mathrm{comp}[r] & \loopsp{M}\\
M\times M & M. \ar_-\Delta[l]
} \end{equation} Here, the square is a pullback diagram by the diagonal map $\Delta$ and the evaluation map $\mathrm{ev}_1$ at $1$, identifying $S^1$ with the unit circle $\{z\in \mathbb C\mid \abs{z}=1\}$, and $\mathrm{comp}$ is the map defined by the composition of loops. Since the diagonal map $\Delta\colon M\rightarrow M\times M$ is an embedding of finite codimension, we have the shriek map $\shriekhomol{\Delta}\colon \homol{M\times M}\rightarrow\homol[*-m]{M}$, which is called the intersection product. Using the pullback diagram, we can ``lift'' $\shriekhomol{\Delta}$ to $\shriekhomol{\mathrm{incl}}\colon \homol{\loopsp{M}\times \loopsp{M}}\rightarrow\homol[*-m]{\loopsp{M}\times_M \loopsp{M}}$. Then, we define the loop product to be the composition $\mathrm{comp}_*\circ\shriekhomol{\mathrm{incl}}\colon\homol{\loopsp{M}\times \loopsp{M}}\rightarrow\homol[*-m]{\loopsp{M}}.$
The brane product can be defined in a similar way. Let $k$ be a positive integer. We use the diagram \begin{equation*}
\xymatrix{
\spheresp[k]{M}\times\spheresp[k]{M} \ar[d] & \spheresp[k]{M}\times_M\spheresp[k]{M} \ar_\mathrm{incl}[l]\ar[d]\ar^-\mathrm{comp}[r] & \spheresp[k]{M}\\
M\times M & M. \ar_-\Delta[l]
} \end{equation*} Since the base map of the pullback diagram is the diagonal map $\Delta$, which is the same as that for the loop product, we can use the same method to define the shriek map $\shriekhomol{\mathrm{incl}}\colon \homol{\spheresp[k]{M}\times \spheresp[k]{M}}\rightarrow\homol[*-m]{\spheresp[k]{M}\times_M \spheresp[k]{M}}$. Hence we define the brane product to be the composition $\mathrm{comp}_*\circ\shriekhomol{\mathrm{incl}}\colon\homol{\spheresp[k]{M}\times \spheresp[k]{M}}\rightarrow\homol[*-m]{\spheresp[k]{M}}.$
Next, we review the loop coproduct. Using the diagram \begin{equation}
\label{equation:loopCopDiagram}
\xymatrix{
\loopsp{M} \ar_{\mathrm{ev}_1\times\mathrm{ev}_{-1}}[d] & \loopsp{M}\times_M\loopsp{M} \ar_-\mathrm{comp}[l]\ar[d]\ar^-\mathrm{incl}[r] & \loopsp{M}\times\loopsp{M}\\
M\times M & M, \ar_-\Delta[l]
} \end{equation} we define the loop coproduct to be the composition $\mathrm{incl}_*\circ\shriekhomol{\mathrm{comp}}\colon \homol{\loopsp{M}}\rightarrow\homol[*-m]{\loopsp{M}\times\loopsp{M}}$.
But the brane coproduct cannot be defined in this way. To construct the brane coproduct, we have to use the diagram \begin{equation*}
\xymatrix{
\spheresp[k]{M} \ar_\mathrm{res}[d] & \spheresp[k]{M}\times_M\spheresp[k]{M} \ar_-\mathrm{comp}[l]\ar[d]\ar^-\mathrm{incl}[r] & \spheresp[k]{M}\times\spheresp[k]{M}\\
\spheresp[k-1]{M} & M. \ar_-c[l]
} \end{equation*} Here, $c\colon M\rightarrow\spheresp[k-1]{M}$ is the embedding by constant maps and $\mathrm{res}\colon\spheresp[k]{M}\rightarrow\spheresp[k-1]{M}$ is the restriction map to $S^{k-1}$, which is embedded in $S^k$ as the equator. In the usual sense, the base map $c$ is not an embedding of finite codimension. But using the algebraic method of F\'elix and Thomas \cite{felix-thomas09}, we can regard this map as an embedding of codimension $\bar{m}=\dim\Omega^{k-1} M$, the dimension as a $\mathbb K$-Gorenstein space, which is finite when $\pi_*(M)\otimes\mathbb K$ is of finite dimension. Hence, under this assumption, we have the shriek map $\shriekhomol{c}\colon\homol{\spheresp[k-1]{M}}\rightarrow\homol[*-\bar{m}]{M}$ and the lift $\shriekhomol{\mathrm{comp}}\colon\homol{\spheresp[k]{M}}\rightarrow\homol[*-\bar{m}]{\spheresp[k]{M}\times_M\spheresp[k]{M}}$. This enables us to define the brane coproduct to be the composition $\mathrm{incl}_*\circ\shriekhomol{\mathrm{comp}}\colon\homol{\spheresp[k]{M}}\rightarrow\homol[*-\bar{m}]{\spheresp[k]{M}\times\spheresp[k]{M}}$.
More generally, using connected sums, we define the product and coproduct for mapping spaces from manifolds. Let $S$ and $T$ be manifolds of dimension $k$. Let $M$ be a $k$-connected $\mathbb K$-Gorenstein space of finite type.\todo{$k$-conn?} Denote $m = \dim M$. Then we define the \textit{$(S,T)$-brane product} \begin{equation*}
\mu_{ST}\colon \homol{M^S\times M^T} \rightarrow \homol[*-m]{M^{S\#T}} \end{equation*} using the diagram \begin{equation}
\label{equation:STProdDiagram}
\xymatrix{
M^S\times M^T \ar[d] & M^S\times_MM^T \ar_-\mathrm{incl}[l]\ar[d]\ar^-\mathrm{comp}[r] & M^{S\#T}\\
M\times M & M. \ar_-\Delta[l]
} \end{equation} Assume that $M$ is $k$-connected and $\pi_*(M)\otimes\mathbb K = \bigoplus_n\pi_n(M)\otimes\mathbb K$ is of finite dimension. Then the iterated based loop space $\Omega^{k-1} M$ is a Gorenstein space, and denote $\bar{m} = \dim\Omega^{k-1} M$. Then we define the \textit{$(S,T)$-brane coproduct} \begin{equation*}
\delta_{ST}\colon \homol{M^{S\#T}} \rightarrow \homol[*-\bar{m}]{M^S\times M^T} \end{equation*} using the diagram \begin{equation}
\label{equation:STCopDiagram}
\xymatrix{
M^{S\#T}\ar[d] & M^S\times_MM^T \ar_-\mathrm{comp}[l]\ar[d]\ar^-\mathrm{incl}[r] & M^S\times M^T\\
\spheresp[k-1]{M} & M. \ar_-c[l]
} \end{equation} Note that, if we take $S=T=S^{k}$, then $\mu_{ST}$ and $\delta_{ST}$ are the brane product and coproduct, respectively.
Next, we study some fundamental properties of the brane product and coproduct. For the loop product and coproduct on Gorenstein spaces, Naito \cite{naito13} showed their associativity and the Frobenius compatibility. In this article, we generalize them to the case of the brane product and coproduct. Moreover, we show the commutativity of the brane product and coproduct, which was not known even for the case of the loop product and coproduct on Gorenstein spaces.
\begin{theorem}
\label{theorem:associativeFrobenius}
Let $M$ be a $k$-connected space with $\dim\pi_*(M)\otimes\mathbb K < \infty$.
Then the above product and coproduct satisfy the following properties.
\begin{enumerate}[label={\rm(\arabic*)}]
\item \label{item:assocProd} The product is associative and commutative.
\item \label{item:assocCop} The coproduct is associative and commutative.
\item \label{item:Frob} The product and coproduct satisfy the Frobenius compatibility.
\end{enumerate}
In particular,
if we take $S=T=S^k$,
the shifted homology
$\homolshift{\spheresp[k]{M}} = \homol[*+m]{\spheresp[k]{M}}$
is a non-unital and non-counital Frobenius algebra,
where $m$ is the dimension of $M$ as a Gorenstein space. \end{theorem} \todo{Explain Frobenius algebra} \todo{commutativity,unitality}
Note that $M$ is a Gorenstein space by the assumption $\dim\pi_*(M)\otimes\mathbb K < \infty$ (see \cref{proposition:FinDimImplyGorenstein}). The associativity of the product holds even if we assume that $M$ is a Gorenstein space instead of assuming $\dim\pi_*(M)\otimes\mathbb K < \infty$. But we need the assumption to prove the commutativity of the \textit{product}.
A non-unital and non-counital Frobenius algebra corresponds to a ``positive boundary'' TQFT, in the sense that TQFT operations are defined only when each component of the cobordism surfaces has a \textit{positive} number of incoming and outgoing boundary components \cite{cohen-godin}.
See \cref{section:proofOfAssocAndFrob} for the precise statement and the proof of the associativity, the commutativity and the Frobenius compatibility. It is interesting that the proof of the commutativity of the loop coproduct (i.e., $k=1$) is easier than that of the brane coproduct with $k\geq 2$. In fact, we prove the commutativity of the loop coproduct using the explicit description of the loop coproduct constructed in \cite{wakatsuki16}. On the other hand, we prove the commutativity of the brane coproduct with $k\geq 2$ directly from the definition.
Moreover, we compute an example of the brane product and coproduct. Here, we consider the shifted homology $\homolshift[*]{\spheresp[k]{M}} = \homol[*+m]{\spheresp[k]{M}}$. We also have the shifts of the brane product and coproduct on $\homolshift{\spheresp[k]{M}}$ with the sign determined by the Koszul sign convention. \begin{theorem}
\label{theorem:braneOperationsOfSphere}
\newcommand{S^{2n+1}}{S^{2n+1}}
The shifted homology $\homolshift{\spheresp[2]S^{2n+1}}$, $n\geq 1$, equipped with the brane product $\mu$
is isomorphic to the exterior algebra
$\wedge(y,z)$ with $\deg{y}=-2n-1$ and $\deg{z}=2n-1$.
The brane coproduct $\delta$ is described as follows.
\begin{align*}
\delta(1) &= 1\otimes yz - y\otimes z + z\otimes y + yz\otimes 1\\
\delta(y) &= y\otimes yz + yz\otimes y\\
\delta(z) &= z\otimes yz + yz\otimes z\\
\delta(yz) &= - yz\otimes yz
\end{align*} \end{theorem}
Note that both the brane product and the brane coproduct are non-trivial. Moreover, $(\delta\otimes 1)\circ\delta\neq 0$, in contrast with the case of the loop coproduct, for which the analogous composition is always trivial \cite[Theorem A]{tamanoi}; indeed, the term $1\otimes yz\otimes yz$ occurs (up to sign) in $(\delta\otimes 1)\delta(1)$ only through $\delta(1)\otimes yz$, so it cannot cancel.
On the other hand, the brane coproduct is trivial in some cases.
\begin{theorem}
\label{theorem:coprodTrivForPure}
If the minimal Sullivan model $(\wedge V, d)$ of $M$ is pure
and satisfies $\dim V^{\mathrm{even}}>0$,
then the brane coproduct on $\homol{\spheresp[2]{M}}$ is trivial. \end{theorem}
See \cref{definition:pureSullivanAlgebra} for the definition of a pure Sullivan algebra.
\begin{remark}
\label{remark:generalizedConnectedSum}
\todo{orientation?}
If we fix embeddings of disks $D^k\hookrightarrow S$ and $D^k\hookrightarrow T$
instead of assuming $S$ and $T$ being manifolds,
we can define the product and coproduct using ``connected sums'' defined by these embedded disks.
Moreover, if we have two disjoint embeddings $i,j\colon D^k\hookrightarrow S$ to the same space $S$,
we can define the ``connected sum'' along $i$ and $j$,
and hence we can define the product and coproduct using this.
We call these $(S,i,j)$-brane product and coproduct,
and give definitions in \cref{section:definitionOfSijBraneOperations}. \end{remark}
\todo{outline} \cref{section:constructionFelixThomas} contains brief background material on string topology on Gorenstein spaces. We define the $(S,T)$-brane product and coproduct in \cref{section:definitionOfBraneOperations} and $(S,i,j)$-brane product and coproduct in \cref{section:definitionOfSijBraneOperations}. Here, we defer the proof of \cref{corollary:extSphereSpace} to \cref{section:constructModel}. In \cref{section:computeExample}, we compute examples and prove \cref{theorem:braneOperationsOfSphere} and \cref{theorem:coprodTrivForPure}. \cref{section:proofOfAssocAndFrob} is devoted to the proof of \cref{theorem:associativeFrobenius}, where we defer the determination of some signs to \cref{section:determineSign} and \cref{section:proofOfExtAlgebraic}.
\tableofcontents
\section{Construction by F\'elix and Thomas} \label{section:constructionFelixThomas} In this section, we recall the construction of the loop product and coproduct by F\'elix and Thomas \cite{felix-thomas09}. Since cochain models behave well with respect to fibrations, the duals of the loop product and coproduct are defined first, and the loop product and coproduct are then obtained as their duals. Moreover, we focus on the case where the coefficient field $\mathbb K$ has characteristic zero, so that we can make full use of rational homotopy theory. For the basic definitions and theorems on homological algebra and rational homotopy theory, we refer the reader to \cite{felix-halperin-thomas01}.
\begin{definition}
[{\cite{felix-halperin-thomas88}}]
Let $m\in\mathbb Z$ be an integer.
\begin{enumerate}
\item An augmented dga (differential graded algebra) $(A,d)$ is called a \textit{($\mathbb K$-)Gorenstein algebra of dimension} $m$ if
\begin{equation*}
\dim \mathrm{Ext}_A^l(\mathbb K, A) =
\begin{cases}
1 & \mbox{ (if $l = m$)} \\
0 & \mbox{ (otherwise),}
\end{cases}
\end{equation*}
where the field $\mathbb K$ and the dga $(A,d)$ are $(A,d)$-modules via the augmentation map and the identity map, respectively.
\item A path-connected topological space $M$ is called a \textit{($\mathbb K$-)Gorenstein space of dimension} $m$
if the singular cochain algebra $\cochain{M}$ of $M$ is a Gorenstein algebra of dimension $m$.
\end{enumerate} \end{definition}
Here, $\mathrm{Ext}_A(M, N)$ is defined using a semifree resolution of $(M,d)$ over $(A,d)$, for a dga $(A,d)$ and $(A,d)$-modules $(M,d)$ and $(N,d)$. $\mathrm{Tor}_A(M,N)$ is defined similarly. See \cite[Section 1]{felix-thomas09} for details of semifree resolutions.
An important example of a Gorenstein space is given by the following \lcnamecref{proposition:FinDimImplyGorenstein}.
\begin{proposition}
[{\cite[Proposition 3.4]{felix-halperin-thomas88}}]
\label{proposition:FinDimImplyGorenstein}
A 1-connected topological space $M$ is a $\mathbb K$-Gorenstein space if $\pi_*(M)\otimes\mathbb K$ is finite dimensional.
Similarly, a Sullivan algebra $(\wedge V, d)$ is a Gorenstein algebra if $V$ is finite dimensional. \end{proposition}
Note that this \lcnamecref{proposition:FinDimImplyGorenstein} is stated only for $\mathbb Q$-Gorenstein spaces in \cite{felix-halperin-thomas88}, but the proof applies to any field $\mathbb K$ and to Sullivan algebras.
Let $M$ be a 1-connected $\mathbb K$-Gorenstein space of dimension $m$ whose cohomology $\cohom{M}$ is of finite type. As a preparation to define the loop product and coproduct, F\'elix and Thomas proved the following theorem.
\begin{theorem}
[{\cite[Theorem 12]{felix-thomas09}}]
\label{theorem:ExtDiagonal}
The diagonal map $\Delta\colon M \rightarrow M^2$ makes $\cochain{M}$ into a $\cochain{M^2}$-module.
We have an isomorphism
\[
\mathrm{Ext}_{\cochain{M^2}}^*(\cochain{M}, \cochain{M^2}) \cong \cohom[*-m]{M}.
\]
\end{theorem}
By \cref{theorem:ExtDiagonal}, we have $\mathrm{Ext}_{\cochain{M^2}}^m(\cochain{M}, \cochain{M^2})\cong\cohom[0]{M}\cong\mathbb K$, hence the generator \[ \shriek\Delta \in \mathrm{Ext}_{\cochain{M^2}}^m(\cochain{M}, \cochain{M^2}) \] is well-defined up to the multiplication by a non-zero scalar. We call this element the \textit{shriek map} for $\Delta$.
Using the map $\shriek\Delta$, we can define the duals of the loop product and coproduct. Then, using the diagram \cref{equation:loopProdDiagram}, we define the dual of the loop product to be the composition \begin{equation*}
\shriek\mathrm{incl}\circ\mathrm{comp}^*\colon\cohom{\loopsp{M}}
\xrightarrow{\mathrm{comp}^*}\cohom{\loopsp{M}\times_M\loopsp{M}}
\xrightarrow{\shriek\mathrm{incl}}\cohom[*+m]{\loopsp{M}\times\loopsp{M}}. \end{equation*} Here, the map $\shriek\mathrm{incl}$ is defined by the composition \begin{equation*}
\begin{array}{l}
\cohom{\loopsp{M}\times_M \loopsp{M}}
\xleftarrow[\cong]{\mathrm{EM}} \mathrm{Tor}^*_{\cochain{M^2}}(\cochain{M},\cochain{\loopsp{M}\times \loopsp{M}}) \\
\xrightarrow{\mathrm{Tor}_{\rm id}(\shriek\Delta, {\rm id})}\mathrm{Tor}^{*+m}_{\cochain{M^2}}(\cochain{M^2},\cochain{\loopsp{M}\times \loopsp{M}})
\xrightarrow[\cong]{} \cohom[*+m]{\loopsp{M}\times \loopsp{M}},
\end{array} \end{equation*} where the map $\mathrm{EM}$ is the Eilenberg-Moore map, which is an isomorphism (see \cite[Theorem 7.5]{felix-halperin-thomas01} for details). Similarly, using the diagram \cref{equation:loopCopDiagram}, we define the dual of the loop coproduct to be the composition \begin{equation*}
\shriek\mathrm{comp}\circ\mathrm{incl}^*\colon\cohom{\loopsp{M}\times\loopsp{M}}
\xrightarrow{\mathrm{incl}^*}\cohom{\loopsp{M}\times_M\loopsp{M}}
\xrightarrow{\shriek\mathrm{comp}}\cohom{\loopsp{M}}. \end{equation*} Here, the map $\shriek\mathrm{comp}$ is defined by the composition \begin{equation*}
\begin{array}{l}
\cohom{\loopsp{M}\times_M \loopsp{M}}
\xleftarrow[\cong]{\mathrm{EM}} \mathrm{Tor}^*_{\cochain{M^2}}(\cochain{M}, \cochain{\loopsp{M}})\\
\xrightarrow{\mathrm{Tor}_{\rm id}(\shriek\Delta, {\rm id})} \mathrm{Tor}^{*+m}_{\cochain{M^2}}(\cochain{M^2}, \cochain{\loopsp{M}})
\xrightarrow[\cong]{} \cohom[*+m]{\loopsp{M}}.
\end{array} \end{equation*}
\section{Definition of the $(S,T)$-brane product and coproduct} \label{section:definitionOfBraneOperations} Let $\mathbb K$ be a field of characteristic zero, $S$ and $T$ manifolds of dimension $k$,\todo{connected? connected sum at where} and $M$ a $k$-connected Gorenstein space of finite type.
As in the construction by F\'elix and Thomas, which we reviewed in \cref{section:constructionFelixThomas}, we construct the duals \begin{align*}
\mu^\vee_{ST}\colon& \cohom{M^{S\#T}} \rightarrow \cohom[*+\dim M]{M^S\times M^T}\\
\delta^\vee_{ST}\colon& \cohom{M^S\times M^T} \rightarrow \cohom[*+\dim\Omega^{k-1} M]{M^{S\#T}} \end{align*} of the $(S,T)$-brane product and the $(S,T)$-brane coproduct.
The $(S,T)$-brane product is defined in a way similar to that of F\'elix and Thomas. Using the diagram \cref{equation:STProdDiagram}, we define $\mu^\vee_{ST}$ to be the composition \begin{equation*}
\shriek\mathrm{incl}\circ\mathrm{comp}^*\colon\cohom{M^{S\#T}}
\xrightarrow{\mathrm{comp}^*}\cohom{M^S\times_MM^T}
\xrightarrow{\shriek\mathrm{incl}}\cohom[*+m]{M^S\times M^T}. \end{equation*} Here, the map $\shriek\mathrm{incl}$ is defined by the composition \begin{equation*}
\begin{array}{l}
\cohom{M^S\times_M M^T}
\xleftarrow[\cong]{\mathrm{EM}} \mathrm{Tor}^*_{\cochain{M^2}}(\cochain{M},\cochain{M^S\times M^T}) \\
\xrightarrow{\mathrm{Tor}_{\rm id}(\shriek\Delta, {\rm id})}\mathrm{Tor}^{*+m}_{\cochain{M^2}}(\cochain{M^2},\cochain{M^S\times M^T})
\xrightarrow[\cong]{} \cohom[*+m]{M^S\times M^T}.
\end{array} \end{equation*}
Next, we begin the definition of the $(S,T)$-brane coproduct. But \cref{theorem:ExtDiagonal} cannot be applied in this case, since the base map of the pullback is $c\colon M\rightarrow\spheresp[k-1]{M}$.
Instead of \cref{theorem:ExtDiagonal}, we use the following theorem to define the $(S,T)$-brane coproduct. A graded algebra $A$ is \textit{connected} if $A^0=\mathbb K$ and $A^i=0$ for any $i<0$. A dga $(A,d)$ is \textit{connected} if $A$ is connected.
\newcommand{\bar{m}}{\bar{m}} \begin{theorem}
\label{theorem:extAlgebraic}
\todo{characteristic can be nonzero}
\todo{Gor dim $m$? $\bar{m}$?}
Let $(A\otimes B, d)$ be a dga such that
$A$ and $B$ are connected commutative graded algebras,
$(A,d)$ is a sub dga of finite type,
and $(A\otimes B, d)$ is semifree over $(A,d)$.
Let $\eta\colon(A\otimes B, d) \rightarrow (A,d)$ be a dga homomorphism.
Assume that the following conditions hold.
\begin{enumerate}[label={\rm(\alph{enumi})}]
\item \label{item:assumpResId} The restriction of $\eta$ to $A$ is the identity map of $A$.
\item \label{item:assumpGorenstein} The dga $(B,\bar{d})=\mathbb K\otimes_A(A\otimes B, d)$ is a Gorenstein algebra of dimension $\bar{m}$.
\item \label{item:assump1conn} For any $b\in B$, the element $db-\bar{d}b$ lies in $A^{\geq 2}\otimes B$.
\end{enumerate}
Then we have an isomorphism
\begin{equation*}
\mathrm{Ext}^*_{A\otimes B}(A, A\otimes B) \cong \cohom[*-\bar{m}]{A}.
\end{equation*} \end{theorem}
This can be proved by a method similar to that of \cref{theorem:ExtDiagonal} \cite[Theorem 12]{felix-thomas09}. The proof is given in \cref{section:proofOfExtAlgebraic}.
Applying this to sphere spaces, we obtain the following corollary. \begin{corollary}
\label{corollary:extSphereSpace}
Let $M$ be a $(k-1)$-connected (and 1-connected) space with $\pi_*(M)\otimes\mathbb K$ of finite dimension.
Then we have an isomorphism
\begin{equation*}
\mathrm{Ext}^*_{\cochain{\spheresp[k-1]{M}}}(\cochain{M}, \cochain{\spheresp[k-1]{M}}) \cong \cohom[*-\bar{m}]{M},
\end{equation*}
where $\bar{m}$ is the dimension of $\Omega^{k-1}M$ as a Gorenstein space. \end{corollary}
To prove the corollary, we need to construct models of sphere spaces satisfying the conditions of \cref{theorem:extAlgebraic}. This will be done in \cref{section:constructModel}.
Note that, since $\spheresp[0]M = M\times M$, this is a generalization of \cref{theorem:ExtDiagonal} (in the case that the characteristic of $\mathbb K$ is zero).
Assume that $M$ is a $k$-connected space with $\pi_*(M)\otimes\mathbb K$ of finite dimension. Then we have $\mathrm{Ext}^{\bar{m}}_{\cochain{\spheresp[k-1]{M}}}(\cochain{M}, \cochain{\spheresp[k-1]{M}}) \cong \cohom[0]{M}\cong\mathbb K$, hence the shriek map for $c\colon M\rightarrow\spheresp[k-1]{M}$ is defined to be the generator \begin{equation*}
\shriekc\in
\mathrm{Ext}^{\bar{m}}_{\cochain{\spheresp[k-1]{M}}}(\cochain{M}, \cochain{\spheresp[k-1]{M}}), \end{equation*} which is well-defined up to the multiplication by a non-zero scalar. Using $\shriekc$ with the diagram \cref{equation:STCopDiagram}, we define the dual $\delta^\vee_{ST}$ of the $(S,T)$-brane coproduct to be the composition \begin{equation*}
\shriek\mathrm{comp}\circ\mathrm{incl}^*\colon\cohom{M^S\times M^T}
\xrightarrow{\mathrm{incl}^*}\cohom{M^S\times_MM^T}
\xrightarrow{\shriek\mathrm{comp}}\cohom{M^{S\#T}}. \end{equation*} Here, the map $\shriek\mathrm{comp}$ is defined by the composition \begin{equation*}
\begin{array}{l}
\cohom{M^S\times_M M^T}
\xleftarrow[\cong]{\mathrm{EM}} \mathrm{Tor}^*_{\cochain{\spheresp[k-1]{M}}}(\cochain{M}, \cochain{M^{S\#T}})\\
\xrightarrow{\mathrm{Tor}_{\rm id}(\shriekc, {\rm id})} \mathrm{Tor}^{*+\bar{m}}_{\cochain{\spheresp[k-1]{M}}}(\cochain{\spheresp[k-1]{M}}, \cochain{M^{S\#T}})
\xrightarrow[\cong]{} \cohom[*+\bar{m}]{M^{S\#T}}.
\end{array} \end{equation*} Note that the Eilenberg-Moore isomorphism can be applied since $\spheresp[k-1]{M}$ is 1-connected.
\section{Definition of $(S,i,j)$-brane product and coproduct} \label{section:definitionOfSijBraneOperations} \newcommand{\#}{\#} \newcommand{\bigvee}{\bigvee} \newcommand{Q}{Q} \newcommand{D}{D} \newcommand{\smalld^\circ}{D^\circ} \newcommand{\partial \smalld}{\partial D}
In this section, we give a definition of $(S,i,j)$-brane product and coproduct. Let $S$ be a topological space, and $i$ and $j$ embeddings $D^k\rightarrow S$. Fix a small $k$-disk $D\subset D^k$ and denote its interior by $\smalld^\circ$ and its boundary by $\partial \smalld$. Then we define three spaces $\#(S,i,j)$, $Q(S,i,j)$, and $\bigvee(S,i,j)$ as follows. The space $\#(S,i,j)$ is obtained from $S\setminus(i(\smalld^\circ)\cup j(\smalld^\circ))$ by gluing $i(\partial \smalld)$ and $j(\partial \smalld)$ by an orientation reversing homeomorphism. We obtain $Q(S,i,j)$ by collapsing two disks $i(D)$ and $j(D)$ to two points, respectively. $\bigvee(S,i,j)$ is defined as the quotient space of $Q(S,i,j)$ identifying the two points. Then, since the quotient space $D^k/D$ is homeomorphic to the disk $D^k$, we identify $Q(S,i,j)$ with $S$ itself. By the above definitions, we have the maps $\#(S,i,j)\rightarrow\bigvee(S,i,j)$ and $S=Q(S,i,j)\rightarrow\bigvee(S,i,j)$. For a space $M$, these maps induce the maps $\mathrm{comp}\colon M^{\bigvee(S,i,j)}\rightarrow M^{\#(S,i,j)}$ and $\mathrm{incl}\colon M^{\bigvee(S,i,j)}\rightarrow M^S$. Moreover, we have diagrams \begin{equation*}
\xymatrix{
M^S \ar[d] & M^{\bigvee(S,i,j)} \ar_-\mathrm{incl}[l]\ar[d]\ar^-\mathrm{comp}[r] & M^{\#(S,i,j)}\\
M\times M & M \ar_-\Delta[l]
} \end{equation*} and \begin{equation*}
\xymatrix{
M^{\#(S,i,j)}\ar[d] & M^{\bigvee(S,i,j)} \ar_-\mathrm{comp}[l]\ar[d]\ar^-\mathrm{incl}[r] & M^S\\
\spheresp[k-1]{M} & M \ar_-c[l]
} \end{equation*} in which the squares are pullback diagrams. \todo{remove label?} If $M$ is a $k$-connected space with $\pi_*(M)\otimes\mathbb K$ of finite dimension, we define the $(S,i,j)$-brane product and coproduct by a similar method to \cref{section:definitionOfBraneOperations}, using these diagrams instead of the diagrams \cref{equation:STProdDiagram} and \cref{equation:STCopDiagram}. Note that this generalizes $(S,T)$-brane product and coproduct defined in \cref{section:definitionOfBraneOperations}.
\section{Construction of models and proof of \cref{corollary:extSphereSpace}} \label{section:constructModel}
In this section, we give a proof of \cref{corollary:extSphereSpace}, constructing a Sullivan model of the dga homomorphism $c^*\colon \cochain{\spheresp[k-1]{M}}\rightarrow\cochain{M}$ satisfying the assumptions of \cref{theorem:extAlgebraic}.
\newcommand{s^{(k-1)}}{s^{(k-1)}} \newcommand{s^{(k)}}{s^{(k)}} \newcommand{\diffsphere}[1][k-1]{\bar{d}^{(#1)}} \newcommand{\diffdisk}[1][k]{d^{(#1)}}
First, we construct models algebraically. Let $(\wedge V, d)$ be a Sullivan algebra. For an integer $l\in \mathbb Z$, let $\susp[l]V$ be a graded module defined by $(\susp[l]V)^n=V^{n+l}$ and $\susp[l]v$ denotes the element in $\susp[l]V$ corresponding to the element $v \in V$.
Define two derivations $s^{(k-1)}$ and $\diffsphere$ on the graded algebra $\wedge V\otimes \wedge \susp[k-1]V$ by
\begin{align*}
&s^{(k-1)}(v)=\susp[k-1]v,\quad s^{(k-1)}(\susp[k-1]v)=0, \\
&\diffsphere(v)=dv,\ \mbox{and}\quad\diffsphere(\susp[k-1]v)=(-1)^{k-1}s^{(k-1)} dv. \end{align*} Then it is easy to see that $\diffsphere\circ\diffsphere=0$ and hence $(\wedge V\otimes \wedge\susp[k-1]V, \diffsphere)$ is a dga.
Similarly, define derivations $s^{(k)}$ and $\diffdisk$ on the graded algebra $\wedge V\otimes \wedge\susp[k-1]V \otimes\wedge\susp[k]V$ by
\begin{align*}
&s^{(k)}(v)=\susp[k]v,\quad s^{(k)}(\susp[k-1]v)=s^{(k)}(\susp[k]v)=0,\quad
\diffdisk(v)=dv, \\
&\diffdisk(\susp[k-1]v)=\diffsphere(\susp[k-1]v),\
\mbox{and}\quad\diffdisk(\susp[k]v)=\susp[k-1]v+(-1)^ks^{(k)} dv. \end{align*} Then it is easy to see that $\diffdisk\circ\diffdisk=0$ and hence $(\wedge V\otimes \wedge\susp[k-1]V \otimes\wedge\susp[k]V, \diffdisk)$ is a dga.
Note that the tensor product $(\wedge V, d) \otimes_{\wedge V\otimes\wedge\susp[k-1]V}(\wedge V\otimes\wedge\susp[k-1]V\otimes\wedge\susp[k]V,\diffdisk)$ is canonically isomorphic to $(\wedge V\otimes\wedge\susp[k]V,\diffsphere[k])$, where $(\wedge V, d)$ is a $(\wedge V\otimes\wedge\susp[k-1]V, \diffsphere)$-module by the dga homomorphism $\phi\colon (\wedge V\otimes\wedge\susp[k-1]V, \diffsphere) \rightarrow (\wedge V, d)$ defined by $\phi(v)=v$ and $\phi(\susp[k-1]v)=0$.
It is clear that, if $V^{\leq k-1}=0$, the dga $(\wedge V\otimes\wedge\susp[k-1]V,\diffsphere)$ is a Sullivan algebra and, if $V^{\leq k}=0$, the dga $(\wedge V\otimes \wedge\susp[k-1]V \otimes\wedge\susp[k]V, \diffdisk)$ is a relative Sullivan algebra over $(\wedge V\otimes\wedge\susp[k-1]V,\diffsphere)$.
\newcommand{\tilde{\varepsilon}}{\tilde{\varepsilon}} Define a dga homomorphism \begin{equation*}
\tilde{\varepsilon}\colon
(\wedge V\otimes \wedge\susp[k-1]V \otimes\wedge\susp[k]V, \diffdisk)
\rightarrow (\wedge V, d) \end{equation*} by $\tilde{\varepsilon}(v)=v$ and $\tilde{\varepsilon}(\susp[k-1]v)=\tilde{\varepsilon}(\susp[k]v)=0$. Then the linear part \begin{equation*}
Q(\tilde{\varepsilon})\colon
(V \oplus\susp[k-1]V \oplus\susp[k]V, \diffdisk_0)
\rightarrow (V, d_0) \end{equation*} is a quasi-isomorphism, and hence $\tilde{\varepsilon}$ is a quasi-isomorphism \cite[Proposition 14.13]{felix-halperin-thomas01}.
Using these algebras, we have the following proposition.
\begin{proposition}
\label{proposition:modelOfInclConst}
Let $k\geq 2$ be an integer,
$M$ a $(k-1)$-connected (and 1-connected) Gorenstein space of finite type,
and $(\wedge V, d)$ its Sullivan model such that $V^{\leq k-1}=0$ and $V$ is of finite type.
Then the dga homomorphism
$\phi\colon (\wedge V\otimes\wedge\susp[k-1]V, \diffsphere) \rightarrow (\wedge V, d)$
is a Sullivan representative of the map
$c\colon M\rightarrow\spheresp[k-1]{M}$,
i.e., there is a homotopy commutative diagram
\begin{equation*}
\xymatrix{
(\wedge V\otimes\wedge\susp[k-1]V, \diffsphere) \ar[r]^-\phi \ar[d]^\simeq & (\wedge V, d) \ar[d]^\simeq\\
\cochain{\spheresp[k-1]{M}} \ar[r]^-{c^*} & \cochain{M}
}
\end{equation*}
such that the vertical arrows are quasi-isomorphisms. \end{proposition} \begin{proof}
We prove the proposition by induction on $k\geq 2$.
The case $k=2$ is well-known.
See \cite[Section 15 (c) Example 1]{felix-halperin-thomas01} or \cite[Appendix A]{wakatsuki16} for details.
Assume that the proposition holds for some $k$.
Consider the commutative diagram
\begin{equation*}
\begin{tikzcd}[row sep=2.5em]
M \arrow[rr,"="] \arrow[dr,swap,"c"] \arrow[dd,swap,"="] &&
M \arrow[dd,swap,"=" near start] \arrow[dr,""] \\
& \spheresp[k]{M} \arrow[rr,crossing over,"" near start] &&
M^{D^k} \arrow[dd,"\mathrm{res}"] \\
M \arrow[rr,"=" near end] \arrow[dr,swap,"="] && M \arrow[dr,swap,"c"] \\
& M \arrow[rr,"c"] \arrow[uu,<-,crossing over,"\mathrm{ev}" near end]&& \spheresp[k-1]{M},
\end{tikzcd}
\end{equation*}
where the front and back squares are pullback diagrams.
Since any pullback diagram of a fibration is modeled by a tensor product of Sullivan algebras \cite[Section 15 (c)]{felix-halperin-thomas01},
this proves the proposition.
\end{proof}
\begin{proof}
[Proof of \cref{corollary:extSphereSpace}]
In the case $k=1$, apply \cref{theorem:extAlgebraic} to the product map
$(\wedge V, d)^{\otimes 2}\rightarrow (\wedge V, d)$.
(Note that this case is a result of F\'elix and Thomas \cite{felix-thomas09}.)
In the case $k\geq 2$, using \cref{proposition:modelOfInclConst},
apply \cref{theorem:extAlgebraic} to the map $\phi$. \end{proof}
\section{Computation of examples} \label{section:computeExample} \newcommand{S^{2n+1}}{S^{2n+1}}
\makeatletter \newcommand{\@tensorpower}[1]{\ifx#1\relax\else^{\otimes #1}\fi} \newcommand{\@modelcommand}[3]{
\ifx#2\relax
{#1}\@tensorpower{#3}
\else
#1(#2)\@tensorpower{#3}
\fi } \newcommand{{\mathcal M}_\mathrm{P}}{{\mathcal M}_\mathrm{P}} \newcommand{{\mathcal M}_\mathrm{L}}{{\mathcal M}_\mathrm{L}} \newcommand{\mpath}[1][\relax]{\@modelcommand{{\mathcal M}_\mathrm{P}}{\relax}{#1}} \newcommand{\mpathv}[2][\relax]{\@modelcommand{{\mathcal M}_\mathrm{P}}{\wedge #2}{#1}} \newcommand{\mloop}[1][\relax]{\@modelcommand{{\mathcal M}_\mathrm{L}}{\relax}{#1}} \newcommand{\mloopv}[2][\relax]{\@modelcommand{{\mathcal M}_\mathrm{L}}{\wedge #2}{#1}} \newcommand{\mdisk}[1]{\@modelcommand{{\mathcal M}_{D^{#1}}}{\relax}{\relax}} \newcommand{\msphere}[1]{\@modelcommand{{\mathcal M}_{S^{#1}}}{\relax}{\relax}} \makeatother
In this section, we will compute the brane product and coproduct for some examples, which proves \cref{theorem:braneOperationsOfSphere,theorem:coprodTrivForPure}.
In \cite{naito13}, the duals of the loop product and coproduct are described in terms of Sullivan models using the torsion functor description of \cite{kuribayashi-menichi-naito}. By a similar method, we can describe the brane product and coproduct as follows.
Let $M$ be a $k$-connected $\mathbb K$-Gorenstein space of finite type and $(\wedge V, d)$ its Sullivan model such that $V^{\leq k}=0$ and $V$ is of finite type. Denote $(\wedge V\otimes\wedge\susp[k]{V}, \diffsphere[k])$ by $\msphere{k}$ and $(\wedge V\otimes\wedge\susp[k-1]{V}\otimes\wedge\susp[k]{V}, \diffdisk[k])$ by $\mdisk{k}$ (see \cref{section:constructModel} for the definitions). Define a relative Sullivan algebra $\mpath=(\wedge V\tpow2\otimes\wedge \susp{V}, d)$ over $(\wedge V, d)\tpow2$ by the formula \begin{equation*}
d(\susp v)=1\otimes v - v\otimes 1 - \sum_{i=1}^\infty\frac{(sd)^i}{i!}(v\otimes 1) \end{equation*} inductively (see \cite[Section 15 (c)]{felix-halperin-thomas01} or \cite[Appendix A]{wakatsuki16} for details\todo{wakatsuki16?,inductively?}). Note that $\cohom{\msphere{k}}\cong\cohom{\spheresp[k]{M}}$ and $\cohom{\mdisk{k}}\cong\cohom{M^{D^k}}\cong\cohom{M}$. Then the dual of the brane product on $\cohom{\spheresp[k]{M}}$ is induced by the composition \begin{equation*}
\begin{array}{l}
\msphere{k}
\xrightarrow{\cong} \wedge V \otimes_{\msphere{k-1}} \mdisk{k}
\xleftarrow[\simeq]{\tilde{\varepsilon}\otimes {\rm id}} \mdisk{k} \otimes_{\msphere{k-1}} \mdisk{k}
\xrightarrow{(\phi\otimes{\rm id})\otimes_\phi(\phi\otimes{\rm id})} \msphere{k} \otimes_{\wedge V} \msphere{k}\\
\xrightarrow{\cong} \wedge V \otimes_{\wedge V\tpow2}\msphere{k}\tpow2
\xleftarrow[\simeq]{\bar{\varepsilon}\otimes{\rm id}} \mpath\otimes_{\wedge V\tpow2}\msphere{k}\tpow2
\xrightarrow{\shriek\delta\otimes{\rm id}} \wedge V\tpow2\otimes_{\wedge V\tpow2}\msphere{k}\tpow2
\xrightarrow{\cong} \msphere{k}\tpow2,
\end{array} \end{equation*} where $\shriek\delta$ is a representative of $\shriek\Delta$. See \cref{section:constructModel} for the definitions of the other maps.
Assume that $\pi_*(M)\otimes\mathbb K$ is of finite dimension. Then the dual of the brane coproduct is induced by the composition \begin{equation*}
\begin{array}{l}
\msphere{k}\tpow2
\xrightarrow{\cong} \wedge V\tpow2 \otimes_{\msphere{k-1}\tpow2} \mdisk{k}\tpow2
\xrightarrow{\mu\otimes_{\mu'}\eta} \wedge V \otimes_{\msphere{k-1}} (\mdisk{k}\otimes_{\msphere{k-1}}\mdisk{k})\\
\xleftarrow[\simeq]{\tilde{\varepsilon}\otimes{\rm id}} \mdisk{k}\otimes_{\msphere{k-1}}(\mdisk{k}\otimes_{\msphere{k-1}}\mdisk{k})
\xrightarrow{\shriek\gamma\otimes{\rm id}} \msphere{k-1}\otimes_{\msphere{k-1}}(\mdisk{k}\otimes_{\msphere{k-1}}\mdisk{k})\\
\xrightarrow{\cong} \mdisk{k}\otimes_{\msphere{k-1}}\mdisk{k}
\xrightarrow[\simeq]{\tilde{\varepsilon}\otimes{\rm id}} \wedge V\otimes_{\msphere{k-1}}\mdisk{k}
\xrightarrow{\cong} \msphere{k},
\end{array} \end{equation*} where $\shriek\gamma$ is a representative of $\shriekc$, the maps $\mu$ and $\mu'$ are the product maps, and $\eta$ is the quotient map.
As a preparation of computation, recall the definition of a pure Sullivan algebra.
\begin{definition}
[c.f. {\cite[Section 32]{felix-halperin-thomas01}}]
\label{definition:pureSullivanAlgebra}
A Sullivan algebra $(\wedge V, d)$ with $\dim V < \infty$ is called {\it pure}
if $d(V^{\rm even})=0$ and $d(V^{\rm odd}) \subset \wedge V^{\rm even}$. \end{definition}
For a pure Sullivan algebra, we have explicit constructions of the shriek maps $\shriek\delta$ and $\shriek\gamma$. For $\shriek\delta$, see \cite{naito13}\todo{correct?}. For $\shriek\gamma$, we have the following proposition.
\begin{proposition}
\label{proposition:shriekInclconstForPure}
Let $(\wedge V, d)$ be a pure minimal Sullivan algebra.
Take bases $V^{\mathrm{even}}=\mathbb K\{x_1,\dots x_p\}$ and $V^{\mathrm{odd}}=\mathbb K\{y_1,\dots y_q\}$.
Define a $(\wedge V\otimes\wedge \susp{V}, d)$-linear map
\begin{equation*}
\shriek\gamma\colon
(\wedge V\otimes\wedge\susp{V}\otimes\wedge\susp[2]V, d)
\rightarrow(\wedge V\otimes\wedge\susp{V}, d)
\end{equation*}
by $\shriek\gamma(\susp[2]y_1\cdots\susp[2]y_q)=\susp x_1\cdots\susp x_p$
and $\shriek\gamma(\susp[2]y_{j_1}\cdots\susp[2]y_{j_l})=0$ for $l<q$.
Then $\shriek\gamma$ defines a non-trivial element in
$\mathrm{Ext}_{\wedge V\otimes\wedge\susp V}
(\wedge V,\wedge V\otimes\wedge\susp V)$. \end{proposition}
By a straightforward calculation, $\shriek\gamma$ is a cocycle in
$\mathrm{Hom}_{\wedge V\otimes\wedge \susp{V}}
(\wedge V\otimes\wedge\susp{V}\otimes\wedge\susp[2]V,\wedge V\otimes\wedge\susp{V})$.
In order to prove the non-triviality,
we define an ideal
$I=(x_1,\dots, x_p,y_1,\dots, y_q,\susp y_1,\dots,\susp y_q)\subset\wedge V\otimes\wedge\susp V$.
By the purity and minimality, we have $d(I)\subset I$.
Using this ideal, we have the evaluation map of the form
\begin{align*}
&\mathrm{Ext}_{\wedge V\otimes\wedge\susp V}(\wedge V,\wedge V\otimes\wedge\susp V)
\otimes \mathrm{Tor}_{\wedge V\otimes\wedge\susp V}(\wedge V, \wedge V\otimes\wedge \susp V/I) \\
&\xrightarrow{\mathrm{ev}} \mathrm{Tor}_{\wedge V\otimes\wedge\susp V}(\wedge V\otimes\wedge\susp V, \wedge V\otimes\wedge \susp V/I)
\xrightarrow{\cong} \wedge\susp V^{\mathrm{even}}.
\end{align*}
By this map,
the element $[\shriek\gamma]\otimes[\susp[2]y_1\cdots\susp[2]y_q\otimes 1]$ is mapped to
the element $\susp x_1\cdots \susp x_p$, which is obviously non-trivial.
Hence $[\shriek\gamma]$ is also non-trivial. \end{proof}
Now, we give proofs of \cref{theorem:braneOperationsOfSphere,theorem:coprodTrivForPure}.
\begin{proof}
[Proof of \cref{theorem:braneOperationsOfSphere}]
Using the above descriptions,
we compute the brane product and coproduct for $M=S^{2n+1}$ and $k=2$.
In this case,
we can take $(\wedge V, d) = (\wedge x, 0)$ with $\deg{x}=2n+1$,
and have $\msphere{1}=(\wedge(x,\susp{x}), 0)$
and $\mdisk{2}=(\wedge(x,\susp{x},\susp[2]x),d)$ where $dx=d\susp{x}=0$ and $d\susp[2]x=\susp{x}$.
The computation is straightforward except for the shriek maps $\shriek\delta$ and $\shriek\gamma$.
The map $\shriek\delta$ is the linear map $\mpath\rightarrow(\wedge x, 0)\tpow2$ over $(\wedge x, 0)\tpow2$
determined by $\shriek\delta(1)=1\otimes x - x\otimes 1$ and $\shriek\delta((\susp{x})^l)=0$ for $l\geq 1$.
By \cref{proposition:shriekInclconstForPure}, the map $\shriek\gamma$ is the linear map $\mdisk{k}\rightarrow\msphere{k-1}$ over $\msphere{k-1}$
determined by $\shriek\gamma(\susp[2]x)=1$ and $\shriek\gamma(1)=0$.
Then the dual of the brane product $\mu^\vee$ is a linear map
\begin{equation*}
\mu^\vee\colon \wedge(x,\susp[2]x)\rightarrow\wedge(x,\susp[2]x)\otimes\wedge(x,\susp[2]x)
\end{equation*}
of degree $(2n+1)$
over $\wedge(x)\otimes\wedge(x)$,
which is characterized by
\begin{equation*}
\mu^\vee(1) = 1\otimes x - x\otimes 1,\
\mu^\vee(\susp[2]x) = (1\otimes x - x\otimes 1)(\susp[2]x\otimes 1 + 1\otimes\susp[2]x).
\end{equation*}
Similarly, the dual of the brane coproduct $\delta^\vee$ is a linear map
\begin{equation*}
\delta^\vee\colon \wedge(x,\susp[2]x)\otimes\wedge(x,\susp[2]x)\rightarrow\wedge(x,\susp[2]x)
\end{equation*}
of degree $(1-2n)$
over $\wedge(x)\otimes\wedge(x)$,
which is characterized by
\begin{equation*}
\delta^\vee(1) = 0,\
\delta^\vee(\susp[2]x\otimes 1) = -1,\
\delta^\vee(1\otimes\susp[2]x) = 1,\
\delta^\vee(\susp[2]x\otimes\susp[2]x) = -\susp[2]x.
\end{equation*}
Dualizing these results,
we get the brane product and coproduct on the homology,
which proves \cref{theorem:braneOperationsOfSphere}. \end{proof}
\begin{proof}
[Proof of \cref{theorem:coprodTrivForPure}]
By \cref{proposition:shriekInclconstForPure},
we have that $\operatorname{Im}(\shriek\gamma\otimes{\rm id})$ is contained in the ideal $(\susp x_1,\dots \susp x_p)$,
which is mapped to zero by the map $\tilde\varepsilon\otimes{\rm id}$. \end{proof}
\section{Proof of the associativity, the commutativity, and the Frobenius compatibility}
\newcommand{\tau_{\mathrm{\times}}}{\tau_{\mathrm{\times}}} \newcommand{\tau_{\mathrm{\#}}}{\tau_{\mathrm{\#}}} \label{section:proofOfAssocAndFrob} In this section, we give a precise statement and the proof of \cref{theorem:associativeFrobenius}.
First, we give a precise statement of \cref{theorem:associativeFrobenius}. For simplicity, we omit the statement for $(S,i,j)$-brane product and coproduct, which is almost the same as that for $(S,T)$-brane product and coproduct. Let $M$ be a $k$-connected $\mathbb K$-Gorenstein space of finite type with $\dim\pi_*(M)\otimes\mathbb K < \infty$. Denote $m=\dim M$. Then the precise statement of \cref{item:assocProd} is that the diagrams \begin{equation}
\label{equation:assocProdDiagram}
\xymatrix{
\cohom{M^{S\#T\#U}} \ar[r]^-{\mu^\vee_{S\#T,U}} \ar[d]^{\mu^\vee_{S,T\#U}} &
\cohom{M^{S\#T}\times M^U} \ar[d]^{\mu^\vee_{S,T\amalg U}} \\
\cohom{M^S\times M^{T\#U}} \ar[r]^-{\mu^\vee_{S\amalg T,U}} & \cohom{M^S\times M^T\times M^U}
} \end{equation} and \begin{equation}
\label{equation:commProdDiagram}
\xymatrix{
\cohom{M^{T\#S}} \ar[r]^-{\mu^\vee_{T,S}} \ar[d]^{\tau_{\mathrm{\#}}^*} & \cohom{M^T\times M^S} \ar[d]^{\tau_{\mathrm{\times}}^*}\\
\cohom{M^{S\#T}} \ar[r]^-{\mu^\vee_{S,T}} & \cohom{M^S\times M^T}
} \end{equation} commute up to the sign $(-1)^m$. Here, $\tau_{\mathrm{\times}}$ and $\tau_{\mathrm{\#}}$ are the maps induced by the transposition of $S$ and $T$. Note that the associativity of the product holds even if the assumption $\dim\pi_*(M)\otimes\mathbb K < \infty$ is dropped.
Denote $\bar{m} = \dim\Omega^{k-1} M$. Then \cref{item:assocCop} states that the diagrams \begin{equation}
\label{equation:assocCopDiagram}
\xymatrix{
\cohom{M^S\times M^T\times M^U} \ar[r]^-{\delta^\vee_{S\amalg T, U}} \ar[d]^{\delta^\vee_{S,T\amalg U}} &
\cohom{M^S\times M^{T\#U}} \ar[d]^{\delta^\vee_{S,T\#U}} \\
\cohom{M^{S\#T}\times M^U} \ar[r]^-{\delta^\vee_{S\#T,U}} & \cohom{M^{S\#T\#U}}
} \end{equation} and \begin{equation}
\label{equation:commCopDiagram}
\xymatrix{
\cohom{M^T\times M^S} \ar[r]^-{\delta^\vee_{T,S}} \ar[d]^{\tau_{\mathrm{\times}}^*} & \cohom{M^{T\#S}} \ar[d]^{\tau_{\mathrm{\#}}^*}\\
\cohom{M^S\times M^T} \ar[r]^-{\delta^\vee_{S,T}} & \cohom{M^{S\#T}}
} \end{equation} commute up to the sign $(-1)^{\bar{m}}$. Similarly, \cref{item:Frob} states that the diagram \begin{equation}
\label{equation:FrobDiagram}
\xymatrix{
\cohom{M^S\times M^{T\#U}} \ar[r]^-{\delta^\vee_{S,T\#U}} \ar[d]^{\mu^\vee_{S\#T,U}} &
\cohom{M^{S\#T\#U}} \ar[d]^{\mu^\vee_{S\amalg T,U}} \\
\cohom{M^S\times M^T\times M^U} \ar[r]^-{\delta^\vee_{S,T\amalg U}} & \cohom{M^{S\#T}\times M^U}
} \end{equation} commutes up to the sign $(-1)^{m\bar{m}}$.
\newcommand{\lift}[2]{#1_{#2}}
Before proving \cref{theorem:associativeFrobenius}, we introduce the notation $\lift{g}{\alpha}$ for a shriek map.
\begin{definition}
Consider a pullback diagram
\begin{equation*}
\xymatrix{
X \ar[r]^g \ar[d]^p & Y \ar[d]^q \\
A \ar[r]^f & B
}
\end{equation*}
of spaces, where $q$ is a fibration.
Let $\alpha$ be an element of $\mathrm{Ext}^m_{\cochain{B}}(\cochain{A}, \cochain{B})$.
Assume that the Eilenberg-Moore map
\begin{equation*}
\operatorname{EM}\colon \mathrm{Tor}^*_{\cochain{B}}(\cochain{A}, \cochain{Y})\xrightarrow{\cong}\cohom{X}
\end{equation*}
is an isomorphism (e.g., $B$ is 1-connected and the cohomology of the fiber is of finite type).
Then we define $\lift{g}{\alpha}$ to be the composition
\begin{equation*}
\lift{g}{\alpha}\colon \cohom{X}
\xleftarrow{\cong} \mathrm{Tor}^*_{\cochain{B}}(\cochain{A}, \cochain{Y})
\xrightarrow{\mathrm{Tor}(\alpha, {\rm id})} \mathrm{Tor}^{*+m}_{\cochain{B}}(\cochain{B}, \cochain{Y})
\xrightarrow{\cong} \cohom[*+m]{Y}
\end{equation*} \end{definition}
Using this notation, we can write the shriek map $\shriek\mathrm{incl}$ as $\lift{\mathrm{incl}}{\shriek\Delta}$ for the diagram \cref{equation:STProdDiagram}, and the shriek map $\shriek\mathrm{comp}$ as $\lift{\mathrm{comp}}{\shriekc}$ for the diagram \cref{equation:STCopDiagram}.
Now we have the following two propositions as a preparation of the proof of \cref{theorem:associativeFrobenius}.
\begin{proposition}
\label{proposition:naturalityOfShriek}
Consider a diagram
\begin{equation*}
\begin{tikzcd}[row sep=2.5em]
X \arrow[rr,"g"] \arrow[dr,swap,"\varphi"] \arrow[dd,swap,""] &&
Y \arrow[dd,swap,"q" near start] \arrow[dr,"\psi"] \\
& X' \arrow[rr,crossing over,"g'" near start] &&
Y' \arrow[dd,"q'"] \\
A \arrow[rr,"" near end] \arrow[dr,swap,"a"] && B \arrow[dr,swap,"b"] \\
& A' \arrow[rr,""] \arrow[uu,<-,crossing over,"" near end]&& B',
\end{tikzcd}
\end{equation*}
where $q$ and $q'$ are fibrations
and the front and back squares are pullback diagrams.
Let $\alpha \in \mathrm{Ext}^m_{\cochain{B}}(\cochain{A}, \cochain{B})$
and $\alpha' \in \mathrm{Ext}^m_{\cochain{B'}}(\cochain{A'}, \cochain{B'})$.
Assume that the elements $\alpha$ and $\alpha'$
are mapped to the same element in $\mathrm{Ext}^m_{\cochain{B'}}(\cochain{A'}, \cochain{B})$
by the morphisms induced by $a$ and $b$,
and that the Eilenberg-Moore maps of two pullback diagrams are isomorphisms.
Then the following diagram commutes.
\begin{equation*}
\xymatrix{
\cohom{X'} \ar[r]^-{\lift{g'}{\alpha'}} \ar[d]^{\varphi^*} & \cohom[*+m]{Y'} \ar[d]^{\psi^*}\\
\cohom{X} \ar[r]^{\lift{g}{\alpha}} & \cohom[*+m]{Y}
}
\end{equation*} \end{proposition}
\begin{proposition}
\label{proposition:functorialityOfShriek}
Consider a diagram
\begin{equation*}
\xymatrix{
X \ar[r]^{\tilde{f}} \ar[d]^p & Y \ar[r]^{\tilde{g}} \ar[d]^q & Z \ar[d]^r \\
A \ar[r]^f & B \ar[r]^g & C,
}
\end{equation*}
where the two squares are pullback diagrams.
Let $\alpha$ be an element of $\mathrm{Ext}^m_{\cochain{B}}(\cochain{A}, \cochain{B})$
and $\beta$ an element of $\mathrm{Ext}^n_{\cochain{C}}(\cochain{B}, \cochain{C})$.
Assume that the Eilenberg-Moore maps are isomorphisms for two pullback diagrams.
Then we have
\begin{equation*}
\lift{(\tilde{g}\circ\tilde{f})}{\beta\circ (g_*\alpha)}
= \lift{\tilde{g}}{\beta} \circ\lift{\tilde{f}}{\alpha},
\end{equation*}
where
$g_*\colon \mathrm{Ext}^m_{\cochain{B}}(\cochain{A}, \cochain{B}) \rightarrow \mathrm{Ext}^m_{\cochain{C}}(\cochain{A}, \cochain{B})$
is the morphism induced by the map $g\colon B\rightarrow C$. \end{proposition}
These propositions can be proved by straightforward arguments.
\begin{proof}
[Proof of \cref{theorem:associativeFrobenius}]
First, we give a proof for \cref{item:Frob}.
Note that the associativity in \cref{item:assocProd} and \cref{item:assocCop} can be proved similarly.
Consider the following diagram.
\begin{equation*}
\xymatrix{
\cohom{M^S\times M^{T\#U}} \ar[r]^-{\mathrm{incl}^*} \ar[d]^-{\mathrm{comp}^*} &
\cohom{M^S\times_MM^{T\#U}} \ar[r]^-{\lift\mathrm{comp}{\shriekc}} \ar[d]^-{\mathrm{comp}^*} &
\cohom{M^{S\#T\#U}} \ar[d]^-{\mathrm{comp}^*} \\
\cohom{M^S\times M^T\times_MM^U} \ar[r]^-{\mathrm{incl}^*} \ar[d]^-{\lift\mathrm{incl}{\shriek\Delta}} &
\cohom{M^S\times_MM^T\times_MM^U} \ar[r]^-{\lift\mathrm{comp}{\shriek{(c\times{\rm id})}}} \ar[d]^-{\lift\mathrm{incl}{\shriek{({\rm id}\times\Delta)}}} &
\cohom{M^{S\#T}\times_MM^U} \ar[d]^-{\lift\mathrm{incl}{\shriek{({\rm id}\times\Delta)}}} \\
\cohom{M^S\times M^T\times M^U} \ar[r]^-{\mathrm{incl}^*} &
\cohom{M^S\times_MM^T\times M^U} \ar[r]^-{\lift\mathrm{comp}{\shriek{(c\times{\rm id})}}} &
\cohom{M^{S\#T}\times M^U}
}
\end{equation*}
Note that the outer square of this diagram coincides with the diagram \cref{equation:FrobDiagram}.
The upper left square is commutative by the functoriality of the cohomology and
so are the upper right and lower left squares by \cref{proposition:naturalityOfShriek}.
Next, we consider the lower right square.
Applying \cref{proposition:functorialityOfShriek} to the diagram
\begin{equation*}
\xymatrix{
M^S\times_MM^T\times_MM^U \ar[r]^-{\mathrm{comp}} \ar[d] & M^{S\#T}\times_MM^U \ar[r]^-{\mathrm{incl}} \ar[d] & M^{S\#T}\times M^U \ar[d]\\
M\times M \ar[r]^-{c\times{\rm id}} & \spheresp[k-1]{M}\times M \ar[r]^-{{\rm id}\times\Delta} & \spheresp[k-1]{M}\times M^2,
}
\end{equation*}
we have
\begin{equation*}
\lift\mathrm{incl}{\shriek{({\rm id}\times\Delta)}}
\circ \lift\mathrm{comp}{\shriek{(c\times{\rm id})}}
= \lift{(\mathrm{incl}\circ\mathrm{comp})}
{\shriek{({\rm id}\times\Delta)} \circ (({\rm id}\times\Delta)_*\shriek{(c\times{\rm id})})}.
\end{equation*}
Using appropriate semifree resolutions,
we have a representation
\begin{align*}
\shriek{({\rm id}\times\Delta)} \circ (({\rm id}\times\Delta)_*\shriek{(c\times{\rm id})})
& = [{\rm id}\otimes\shriek\delta] \circ [\shriek\gamma\otimes{\rm id}] \\
& = [(-1)^{m\bar{m}}\shriek\gamma\otimes\shriek\delta]
\end{align*}
as a chain map.
Here,
$[\shriek\delta]=\shriek\Delta\in\mathrm{Ext}_{\cochain{M^2}}^m(\cochain{M}, \cochain{M^2})$
and
$[\shriek\gamma]=\shriekc\in\mathrm{Ext}^{\bar{m}}_{\cochain{\spheresp[k-1]{M}}}(\cochain{M}, \cochain{\spheresp[k-1]{M}})$
are representations as cochains.
Similarly, we compute the other composition to be
\begin{equation*}
\lift\mathrm{comp}{\shriek{(c\times{\rm id})}}
\circ \lift\mathrm{incl}{\shriek{({\rm id}\times\Delta)}}
= \lift{(\mathrm{comp}\circ\mathrm{incl})}
{\shriek{(c\times{\rm id})} \circ ((c\times{\rm id})_*\shriek{({\rm id}\times\Delta)})}
\end{equation*}
with
\begin{equation*}
\shriek{(c\times{\rm id})} \circ ((c\times{\rm id})_*\shriek{({\rm id}\times\Delta)})
= [\shriek\gamma\otimes\shriek\delta].
\end{equation*}
This proves that the lower right square commutes up to the sign $(-1)^{m\bar{m}}$.
Next, we prove the commutativity of the coproduct in \cref{item:assocCop}.
This follows from the commutativity of the diagram
\begin{equation}
\label{equation:proofOfCommutativityDiagram}
\xymatrix{
\cohom{M^T\times M^S} \ar[r]^{\mathrm{incl}^*} \ar[d]^{\tau_{\mathrm{\times}}^*}
& \cohom{M^T\times_MM^S} \ar[r]^-{\shriek{\mathrm{comp}}} \ar[d]^{\tau_{\mathrm{\times}}^*}
& \cohom{M^{T\#S}} \ar[d]^{\tau_{\mathrm{\#}}^*}\\
\cohom{M^S\times M^T} \ar[r]^{\mathrm{incl}^*}
& \cohom{M^S\times_MM^T} \ar[r]^-{\shriek{\mathrm{comp}}}
& \cohom{M^{S\#T}}.
}
\end{equation}
The commutativity of the left square is obvious.
Once \cref{proposition:naturalityOfShriek} can be applied to the diagram
\cref{equation:cubeDiagramForCommutativity},
the commutativity of the right square of \cref{equation:proofOfCommutativityDiagram} follows.
\begin{equation}
\label{equation:cubeDiagramForCommutativity}
\begin{tikzcd}[row sep=2.5em]
M^S\times_MM^T \arrow[rr,"\mathrm{comp}"] \arrow[dr,swap,"\tau_{\mathrm{\times}}"] \arrow[dd,swap,""] &&
M^{S\#T} \arrow[dd,swap,"\mathrm{res}" near start] \arrow[dr,"\tau_{\mathrm{\#}}"] \\
& M^T\times_MM^S \arrow[rr,crossing over,"\mathrm{comp}" near start] &&
M^{T\#S} \arrow[dd,"\mathrm{res}"] \\
M \arrow[rr,"c" near end] \arrow[dr,swap,"{\rm id}"] && \spheresp[k-1]{M} \arrow[dr,swap,"\tau"] \\
& M \arrow[rr,"c"] \arrow[uu,<-,crossing over,"" near end]&& \spheresp[k-1]{M}
\end{tikzcd}
\end{equation}
In order to apply \cref{proposition:naturalityOfShriek},
it suffices to prove the equation
\begin{equation}
\label{equation:commutativityOfInclconstShriek}
\mathrm{Ext}_{\tau^*}({\rm id},\tau^*)(\shriek{c})=(-1)^{\bar{m}}\shriek{c}
\end{equation}
in $\mathrm{Ext}_{\cochain{\spheresp[k-1]{M}}}(\cochain{M},\cochain{\spheresp[k-1]{M}})$.
Since $\mathrm{Ext}^{\bar{m}}_{\cochain{\spheresp[k-1]{M}}}(\cochain{M}, \cochain{\spheresp[k-1]{M}}) \cong \mathbb K$ and
$\mathrm{Ext}_{\tau^*}({\rm id},\tau^*) \circ \mathrm{Ext}_{\tau^*}({\rm id},\tau^*) = {\rm id}$,
we have \cref{equation:commutativityOfInclconstShriek} up to sign.
In \cref{section:proofOfExtAlgebraic},
we will determine the sign to be $(-1)^{\bar{m}}$.
Similarly, in order to prove the commutativity of the product in \cref{item:assocProd},
we need to prove the equation
\begin{equation}
\label{equation:commutativityOfDeltaShriek}
\mathrm{Ext}_{\tau^*}({\rm id},\tau^*)(\shriek{\Delta})=(-1)^m\shriek{\Delta}
\end{equation}
in $\mathrm{Ext}_{\cochain{M^2}}(\cochain{M},\cochain{M^2})$.
As above, we have \cref{equation:commutativityOfDeltaShriek} up to sign.
The sign is determined to be $(-1)^m$ in \cref{section:determineSign}.
The same proofs can be applied for $(S,i,j)$-brane product and coproduct. \end{proof}
\section{Proof of \cref{equation:commutativityOfDeltaShriek}} \label{section:determineSign} In this section, we will prove \cref{equation:commutativityOfDeltaShriek}, determining the sign. Here, we need the explicit description of $\shriek{\Delta}$ in \cite{wakatsuki16}.
Let $M$ be a $1$-connected space with $\dim\pi_*(M)\otimes\mathbb K < \infty$. By \cite[Theorem 1.6]{wakatsuki16}, we have a Sullivan model $(\wedge V, d)$ of $M$ which is semi-pure, i.e., $d(I_V)\subset I_V$, where $I_V$ is the ideal generated by $V^{\mathrm{even}}$. Let $\varepsilon\colon (\wedge V, d)\rightarrow \mathbb K$ be the augmentation map and $\mathrm{pr}\colon (\wedge V, d)\rightarrow(\wedge V/I_V, d)$ the quotient map. Take bases $V^{\mathrm{even}}=\mathbb K\{x_1,\dots x_p\}$ and $V^{\mathrm{odd}}=\mathbb K\{y_1,\dots y_q\}$. Recall the relative Sullivan algebra $\mpath=(\wedge V\tpow2\otimes\wedge \susp{V}, d)$ over $(\wedge V, d)\tpow2$ from \cref{section:computeExample}.
Note that the relative Sullivan algebra $(\wedge V\tpow2\otimes\wedge \susp{V}, d)$ is a relative Sullivan model of the multiplication map $(\wedge V, d)^{\otimes 2}\rightarrow (\wedge V, d)$. Hence, using this as a semifree resolution, we have $\mathrm{Ext}_{\wedge V^{\otimes 2}}(\wedge V, \wedge V^{\otimes 2}) = \cohom{\mathrm{Hom}_{\wedge V^{\otimes 2}}(\wedge V^{\otimes 2}\otimes\wedge\susp{V}, \wedge V^{\otimes 2})}$. By \cite[Corollary 5.5]{wakatsuki16}, we have a cocycle $f \in \mathrm{Hom}_{\wedge V^{\otimes 2}}(\wedge V^{\otimes 2}\otimes\wedge\susp{V}, \wedge V^{\otimes 2})$ satisfying $f(\susp{x_1}\cdots\susp{x_p}) = \prod_{j=1}^{q}(1\otimes y_j - y_j\otimes 1) + u$ for some $u \in (y_1\otimes y_1, \ldots, y_q\otimes y_q)$. Consider the evaluation map \begin{align*}
\mathrm{ev}\colon \mathrm{Ext}_{\wedge V\tpow{2}}(\wedge V, \wedge V\tpow{2})
\otimes \mathrm{Tor}_{\wedge V\tpow{2}}(\wedge V, \wedge V / I_V)
&\rightarrow \mathrm{Tor}_{\wedge V\tpow{2}}(\wedge V\tpow{2}, \wedge V / I_V)\\
&\xrightarrow{\cong} \cohom{\wedge V / I_V}, \end{align*} where $(\wedge V, d)\tpow{2}$, $(\wedge V, d)$, and $(\wedge V / I_V, d)$ are $(\wedge V, d)\tpow{2}$-modules via ${\rm id}$, $\varepsilon\cdot{\rm id}$, and $\mathrm{pr}\circ(\varepsilon\cdot{\rm id})$, respectively. Here, we use $(\wedge V\tpow2\otimes\wedge\susp{V},d)$ as a semifree resolution of $(\wedge V, d)$. Then, we have \begin{equation*}
\mathrm{ev}([f]\otimes[\susp{x_1}\cdots\susp{x_p}]) = [y_1\cdots y_q] \neq 0, \end{equation*} and hence $[f]\neq 0$ in $\mathrm{Ext}_{\wedge V\tpow{2}}(\wedge V, \wedge V\tpow{2})$. \newcommand{\transposeSullivan}{t} Thus, it is enough to calculate $\mathrm{Ext}_{t}({\rm id},t)([f])$ to determine the sign in \cref{equation:commutativityOfDeltaShriek}, where $t\colon (\wedge V, d)\tpow2 \rightarrow (\wedge V, d)\tpow2$ is the dga homomorphism defined by $t(v\otimes1)=1\otimes v$ and $t(1\otimes v)=v\otimes 1$.
\begin{proof}
[Proof of \cref{equation:commutativityOfDeltaShriek}]
By definition, $\mathrm{Ext}_{t}({\rm id},t)$ is induced by the map
\begin{equation*}
\mathrm{Hom}_t(\tilde{\transposeSullivan}, t)\colon
\mathrm{Hom}_{\wedge V\tpow2}(\wedge V\tpow2\otimes\wedge\susp{V}, \wedge V\tpow2)
\rightarrow \mathrm{Hom}_{\wedge V\tpow2}(\wedge V\tpow2\otimes\wedge\susp{V}, \wedge V\tpow2),
\end{equation*}
where $\tilde{\transposeSullivan}$ is the dga automorphism defined by
$\tilde{\transposeSullivan}|_{\wedge V\tpow2}=t$ and
$\tilde{\transposeSullivan}(\susp{v}) = -\susp{v}$.
Since $\tilde{\transposeSullivan}(\susp{x_1}\cdots\susp{x_p})=(-1)^p\susp{x_1}\cdots\susp{x_p}$
and $t(\prod_{j=1}^{q}(1\otimes y_j - y_j\otimes 1))=(-1)^q\prod_{j=1}^{q}(1\otimes y_j - y_j\otimes 1)$,
we have
\begin{equation*}
\mathrm{ev}([\mathrm{Hom}_t(\tilde{\transposeSullivan}, t)(f)]
\otimes[\susp{x_1}\cdots\susp{x_p}])
= \mathrm{ev}([t\circ f\circ\tilde{\transposeSullivan}]
\otimes[\susp{x_1}\cdots\susp{x_p}])
= (-1)^{p+q}[y_1\cdots y_q].
\end{equation*}
Since the parity of $p+q$ is the same as that of the dimension of $(\wedge V, d)$ as a Gorenstein algebra,
the sign in \cref{equation:commutativityOfDeltaShriek} is proved to be $(-1)^{m}$. \end{proof}
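As a simple sanity check of the parity statement above (using nothing beyond the standard minimal Sullivan model of a sphere), take $M=S^{2n}$, whose minimal model is $(\wedge(x,y),d)$ with $\deg x = 2n$, $\deg y = 4n-1$, $dx=0$, and $dy=x^2$. This model is semi-pure with $p=q=1$, so $p+q$ is even, in agreement with the fact that the dimension $m=2n$ of $S^{2n}$ is even; for an odd-dimensional sphere $S^{2n+1}$ one has instead $p=0$, $q=1$, and odd dimension.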
\section{Proof of \cref{equation:commutativityOfInclconstShriek}} \label{section:proofOfExtAlgebraic} In this section, we give the proof of \cref{equation:commutativityOfInclconstShriek}, using the spectral sequence constructed in the proof of \cref{theorem:extAlgebraic}. Although the key idea of the proof of \cref{theorem:extAlgebraic} is the same as that of \cref{theorem:ExtDiagonal} due to F\'elix and Thomas, we give the proof here for the convenience of the reader.
\begin{proof}
[Proof of \cref{theorem:extAlgebraic}]
Take an $(A\otimes B, d)$-semifree resolution $\eta\colon(P,d)\xrightarrow{\simeq}(A,d)$.
Define $(C,d)=(\mathrm{Hom}_{A\otimes B}(P,A\otimes B),d)$.
Then $\mathrm{Ext}_{A\otimes B}(A,A\otimes B) = \cohom{C,d}$.
We fix a non-negative integer $N$, and define a complex
$(C_N,d) = (\mathrm{Hom}_{A\otimes B}(P,(A/A^{>N})\otimes B),d)$.
We will compute the cohomology of $(C_N,d)$.
Define a filtration $\{F^pC_N\}_{p\geq 0}$ on $(C_N,d)$ by
$F^pC_N = \mathrm{Hom}_{A\otimes B}(P,(A/A^{>N})^{\geq p}\otimes B)$.
Then we obtain a spectral sequence $\{E^{p,q}_r\}_{r\geq 0}$
converging to $\cohom{C_N, d}$.
\begin{claim}
\label{lemma:E2termOfSS}
\begin{equation*}
E^{p,q}_2 =
\begin{cases}
\cohom[p]{A/A^{>N}} & \mbox{(if $q=m$)}\\
0 & \mbox{(if $q\neq m$)}
\end{cases}
\end{equation*}
\end{claim}
\begin{proof}
[Proof of \cref{lemma:E2termOfSS}]
We may assume $p\leq N$.
Then we have an isomorphism of complexes
\begin{equation*}
(A^{\geq p}/A^{\geq p+1},0)\otimes (\mathrm{Hom}_B(B\otimes_{A\otimes B}P,B),d)
\xrightarrow{\cong} (E^p_0, d_0),
\end{equation*}
hence
\begin{equation*}
(A^{\geq p}/A^{\geq p+1})\otimes \cohom{\mathrm{Hom}_B(B\otimes_{A\otimes B}P,B),d}
\xrightarrow{\cong} E^p_1.
\end{equation*}
Define
\begin{equation*}
\bar{\eta}\colon (B,\bar{d})\otimes_{A\otimes B}(P,d)
\xrightarrow{1\otimes\eta}(B,\bar{d})\otimes_{A\otimes B}(A,d)
\cong \mathbb K.
\end{equation*}
Note that the last isomorphism follows from the assumption \cref{item:assumpResId}.
Then, since $\eta$ is a quasi-isomorphism, so is $\bar{\eta}$.
Hence we have
\begin{equation*}
\cohom[q]{\mathrm{Hom}_B(B\otimes_{A\otimes B}P,B),d}
\cong \mathrm{Ext}^q_B(\mathbb K, B) \cong
\begin{cases}
\mathbb K & \mbox{(if $q = m$)}\\
0 & \mbox{(if $q\neq m$)}
\end{cases}
\end{equation*}
by the assumption \cref{item:assumpGorenstein}.
Hence we have
\begin{align*}
E^{p,q}_1 &\cong (A^{\geq p}/A^{\geq p+1}) \otimes \cohom[q]{\mathrm{Hom}_B(B\otimes_{A\otimes B}P, B),d}\\
&\cong A^p\otimes\mathrm{Ext}^q_B(\mathbb K, B).
\end{align*}
Moreover, using the assumption \cref{item:assump1conn} and the above isomorphisms,
we can compute the differential $d_1$ and have an isomorphism of complexes
\begin{equation}
\label{equation:computationOfE1}
(E^{*,q}_1,d_1) \cong (A^*,d) \otimes \mathrm{Ext}^q_B(\mathbb K, B).
\end{equation}
This proves \cref{lemma:E2termOfSS}.
\end{proof}
Now we return to the proof of \cref{theorem:extAlgebraic}.
We will recover $\cohom{C}$ from $\cohom{C_N}$ taking a limit.
Since ${\varprojlim}^1_NC_N=0$, we have an exact sequence
\begin{equation*}
0\rightarrow {\varprojlim}^1_N\cohom{C_N}
\rightarrow \cohom{{\varprojlim}_NC_N}
\rightarrow {\varprojlim}_N\cohom{C_N}
\rightarrow 0.
\end{equation*}
By \cref{lemma:E2termOfSS},
the sequence $\{\cohom{C_N}\}_N$ satisfies the (degree-wise) Mittag-Leffler condition,
and hence ${\varprojlim}^1_N\cohom{C_N}=0$.
Thus, we have
\begin{equation*}
\cohom[l]{C}
\cong \cohom[l]{{\varprojlim}_NC_N}
\cong {\varprojlim}_N\cohom[l]{C_N}
\cong \cohom[l-m]{A}.
\end{equation*}
This proves \cref{theorem:extAlgebraic}. \end{proof}
\newcommand{\orirevSullivan}{t} Next, using the above spectral sequence, we determine the sign in \cref{equation:commutativityOfInclconstShriek}.
\begin{proof}
[Proof of \cref{equation:commutativityOfInclconstShriek}]
If $k=1$, \cref{equation:commutativityOfInclconstShriek} is the same as \cref{equation:commutativityOfDeltaShriek},
which was proved in \cref{section:determineSign}.
Hence we assume $k\geq 2$.
As in \cref{section:proofOfAssocAndFrob},
let $M$ be a $k$-connected $\mathbb K$-Gorenstein space of finite type with $\dim\pi_*(M)\otimes\mathbb K < \infty$,
and $(\wedge V, d)$ its minimal Sullivan model.
Using the Sullivan models constructed in \cref{section:constructModel},
we have that the automorphism $\mathrm{Ext}_{\tau^*}({\rm id},\tau^*)$
on $\mathrm{Ext}_{\cochain{\spheresp[k-1]{M}}}(\cochain{M},\cochain{\spheresp[k-1]{M}})$
is induced by the automorphism $\mathrm{Hom}_t({\tilde{\orirevSullivan}}, t)$
on
$\mathrm{Hom}_{\wedge V\otimes\wedge\susp[k-1]{V}}
(\wedge V\otimes\wedge\susp[k-1]{V}\otimes\wedge\susp[k]{V}, \wedge V\otimes\wedge\susp[k-1]{V})$,
where $t$ and ${\tilde{\orirevSullivan}}$ are the dga automorphisms
on $(\wedge V\otimes\wedge\susp[k-1]{V}, d)$
and $(\wedge V\otimes\wedge\susp[k-1]{V}\otimes\wedge\susp[k]{V}, d)$, respectively,
defined by
\begin{align*}
&t(v) = v,\ t(\susp[k-1]{v}) = -\susp[k-1]{v},\\
&{\tilde{\orirevSullivan}}(v) = v,\ {\tilde{\orirevSullivan}}(\susp[k-1]{v}) = -\susp[k-1]{v},\ \mathrm{and}\
{\tilde{\orirevSullivan}}(\susp[k]{v}) = -\susp[k]{v}.
\end{align*}
Now, consider the spectral sequence $\{E^{p,q}_r\}$ in the proof of \cref{theorem:extAlgebraic}
by taking $(A\otimes B, d) = (\wedge V \otimes \wedge \susp[k-1]{V}, d)$
and $(P,d) = (\wedge V\otimes\wedge\susp[k-1]{V}\otimes\wedge\susp[k]{V}, d)$.
Since $k\geq 2$,
$\mathrm{Hom}_t({\tilde{\orirevSullivan}}, t)$ induces
automorphisms on the complexes $C_N$ and $F^pC_N$,
and hence on the spectral sequence $\{E^{p,q}_r\}$.
By the isomorphism \cref{equation:computationOfE1},
we have
\begin{equation*}
E^{p,q}_2\cong \cohom[p]{A}\otimes\mathrm{Ext}^q_{\wedge \susp[k-1]{V}}(\mathbb K, \wedge \susp[k-1]{V}),
\end{equation*}
and that the automorphism induced on $E_2$ is the same as ${\rm id}\otimes\mathrm{Ext}_{\bar{\orirevSullivan}}({\rm id}, {\bar{\orirevSullivan}})$,
where ${\bar{\orirevSullivan}}$ is defined by ${\bar{\orirevSullivan}}(\susp[k-1]{v})=-\susp[k-1]{v}$ for $v\in V$.
Since the differential is zero on $\wedge \susp[k-1]{V}$,
we have an isomorphism
\begin{equation*}
\mathrm{Ext}^*_{\wedge \susp[k-1]{V}}(\mathbb K, \wedge \susp[k-1]{V})
\cong \bigotimes_i\mathrm{Ext}^*_{\wedge \susp[k-1]{v_i}}(\mathbb K, \wedge \susp[k-1]{v_i})
\end{equation*}
where $\{v_1,\ldots,v_l\}$ is a basis of $V$.
Using this isomorphism, we can identify
\begin{equation*}
\mathrm{Ext}_{\bar{\orirevSullivan}}({\rm id}, {\bar{\orirevSullivan}}) = \bigotimes_i\mathrm{Ext}_{{\bar{\orirevSullivan}}_i}({\rm id}, {\bar{\orirevSullivan}}_i),
\end{equation*}
where ${\bar{\orirevSullivan}}_i$ is defined by ${\bar{\orirevSullivan}}_i(\susp[k-1]{v_i})=-\susp[k-1]{v_i}$.
Since $(-1)^{\dim V}=(-1)^{\bar{m}}$,
it suffices to show $\mathrm{Ext}_{{\bar{\orirevSullivan}}_i}({\rm id}, {\bar{\orirevSullivan}}_i)=-1$.
Taking a resolution, we have
\begin{align*}
&\mathrm{Ext}^*_{\wedge \susp[k-1]{v_i}}(\mathbb K, \wedge \susp[k-1]{v_i})
= \cohom{\mathrm{Hom}_{\wedge \susp[k-1]{v_i}}
(\wedge \susp[k-1]{v_i}\otimes\wedge\susp[k]{v_i}, \wedge \susp[k-1]{v_i})}\\
&\mathrm{Ext}_{{\bar{\orirevSullivan}}_i}({\rm id}, {\bar{\orirevSullivan}}_i)=\cohom{\mathrm{Hom}_{{\bar{\orirevSullivan}}_i}({\hat{\orirevSullivan}}_i, {\bar{\orirevSullivan}}_i)},
\end{align*}
where the differential $d$ on $\wedge \susp[k-1]{v_i}\otimes\wedge\susp[k]{v_i}$
is defined by $d(\susp[k-1]{v_i})=0$ and $d(\susp[k]{v_i})=\susp[k-1]{v_i}$,
and the dga homomorphism ${\hat{\orirevSullivan}}_i$ is defined by
${\hat{\orirevSullivan}}_i(\susp[k-1]{v_i})=-\susp[k-1]{v_i}$ and ${\hat{\orirevSullivan}}_i(\susp[k]{v_i})=-\susp[k]{v_i}$.
Using this resolution, we have the generator $[f]$ of
$\cohom{\mathrm{Hom}_{\wedge \susp[k-1]{v_i}}(\wedge \susp[k-1]{v_i}\otimes\wedge\susp[k]{v_i}, \wedge \susp[k-1]{v_i})}\cong\mathbb K$
as follows:
\begin{itemize}
\item If $\deg{\susp[k-1]{v_i}}$ is odd,
define $f$ by $f(1)=\susp[k-1]{v_i}$ and $f((\susp[k]{v_i})^l)=0$ for $l\geq 1$.
\item If $\deg{\susp[k-1]{v_i}}$ is even,
define $f$ by $f(1)=0$ and $f((\susp[k]{v_i}))=1$.
\end{itemize}
In both cases, we have
$\mathrm{Hom}_{{\bar{\orirevSullivan}}_i}({\hat{\orirevSullivan}}_i, {\bar{\orirevSullivan}}_i)(f)
={\bar{\orirevSullivan}}_i\circ f\circ{\hat{\orirevSullivan}}_i
=-f$.
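Explicitly, in the odd case $({\bar{\orirevSullivan}}_i\circ f\circ{\hat{\orirevSullivan}}_i)(1)={\bar{\orirevSullivan}}_i(\susp[k-1]{v_i})=-\susp[k-1]{v_i}=-f(1)$, and in the even case $({\bar{\orirevSullivan}}_i\circ f\circ{\hat{\orirevSullivan}}_i)(\susp[k]{v_i})={\bar{\orirevSullivan}}_i(f(-\susp[k]{v_i}))=-{\bar{\orirevSullivan}}_i(1)=-1=-f(\susp[k]{v_i})$; on the remaining basis elements both sides vanish.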
This proves $\mathrm{Ext}_{{\bar{\orirevSullivan}}_i}({\rm id}, {\bar{\orirevSullivan}}_i)=-1$
and completes the determination of the sign in \cref{equation:commutativityOfInclconstShriek}.
\end{proof}
\section*{Acknowledgment} I would like to express my gratitude to Katsuhiko Kuribayashi and Takahito Naito for productive discussions and valuable suggestions. Furthermore, I would like to thank my supervisor Nariya Kawazumi for the enormous support and comments. This work was supported by JSPS KAKENHI Grant Number 16J06349 and the Program for Leading Graduate School, MEXT, Japan.
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Quantum Decoherence} \author{Maximilian Schlosshauer} \address{Department of Physics, University of Portland,\\5000 North Willamette Boulevard, Portland, OR 97203, USA} \ead{[email protected]}
\begin{abstract} Quantum decoherence plays a pivotal role in the dynamical description of the quantum-to-classical transition and is the main impediment to the realization of devices for quantum information processing. This paper gives an overview of the theory and experimental observation of the decoherence mechanism. We introduce the essential concepts and the mathematical formalism of decoherence, focusing on the picture of the decoherence process as a continuous monitoring of a quantum system by its environment. We review several classes of decoherence models and discuss the description of the decoherence dynamics in terms of master equations. We survey methods for avoiding and mitigating decoherence and give an overview of several experiments that have studied decoherence processes. We also comment on the role decoherence may play in interpretations of quantum mechanics and in addressing foundational questions.\\[.2cm] \emph{Journal reference:} \emph{Phys.\ Rep.\ }{\bf 831}, 1--57 (2019), \href{https://doi.org/10.1016/j.physrep.2019.10.001}{\texttt{doi.org/10.1016/j.physrep.2019.10.001}} \end{abstract}
\begin{keyword} quantum decoherence \sep quantum-to-classical transition \sep quantum measurement \sep quantum master equations \sep quantum information \sep quantum foundations \end{keyword}
\end{frontmatter}
\begin{center} \emph{In memory of H.\,Dieter Zeh (1932--2018)} \end{center}
\tableofcontents
\section{Introduction}
Hilbert space is a vast and seemingly egalitarian place. If $\ket{\psi_1}$ and $\ket{\psi_2}$ represent two possible physical states of a quantum system, then quantum mechanics postulates that an arbitrary superposition $\alpha\ket{\psi_1} + \beta \ket{\psi_2}$ constitutes another possible physical state. The question, then, is why most such states, especially for mesoscopic and macroscopic systems, are found to be very difficult to prepare and observe, often prohibitively so. For example, it turns out to be extremely challenging to prepare a macroscopic quantum system in a spatial superposition of two macroscopically separated, narrow wave packets, with each individual wave packet approximately representing the kind of spatial localization familiar from the classical world of our experience. Even if one succeeded in generating such a superposition and confirming its existence---for example, by measuring fringes arising from interference between the wave-packet components---one would find that it becomes very rapidly unobservable. Thus, we arrive at the dynamical \emph{problem of the quantum-to-classical transition}: Why are certain ``nonclassical'' quantum states so fragile and easily degraded? The question is of immense importance not only from a fundamental point of view, but also because quantum information processing and quantum technologies crucially depend on our ability to generate, maintain, and manipulate such nonclassical superposition states.
The key insight in addressing the problem of the quantum-to-classical transition was first spelled out almost fifty years ago by Zeh \cite{Zeh:1970:yt}, and it gave birth to the theory of \emph{quantum decoherence}, sometimes also called \emph{dynamical decoherence} or \emph{environment-induced decoherence} \cite{Zeh:1970:yt,Zurek:1981:dd,Zurek:1982:tv,Paz:2001:aa,Zurek:2002:ii,Schlosshauer:2003:tv,Bacciagaluppi:2003:yz,Joos:2003:jh,Schlosshauer:2007:un}. The insight is that realistic quantum systems are never completely isolated from their environment, and that when a quantum system interacts with its environment, it will in general become rapidly and strongly entangled with a large number of environmental degrees of freedom. This entanglement dramatically influences what we can locally observe upon measuring the system, even when from a classical point of view the influence of the environment on the system (in terms of dissipation, perturbations, noise, etc.) is negligibly small. In particular, quantum interference effects with respect to certain physical quantities (most notably, ``classical'' quantities such as position) become effectively suppressed, making them prohibitively difficult to observe in most cases of practical interest.
This, in a nutshell, is the process of decoherence \cite{Zeh:1970:yt,Zurek:1981:dd,Zurek:1982:tv,Paz:2001:aa,Zurek:2002:ii,Schlosshauer:2003:tv,Bacciagaluppi:2003:yz,Joos:2003:jh,Schlosshauer:2007:un}. Stated in general and interpretation-neutral terms, decoherence describes how entangling interactions with the environment influence the statistics of future measurements on the system. Formally, decoherence can be viewed as a dynamical filter on the space of quantum states, singling out those states that, for a given system, can be stably prepared and maintained, while effectively excluding most other states, in particular, nonclassical superposition states of the kind epitomized by Schr\"odinger's cat \cite{Schrodinger:1935:gs}. In this way, decoherence lies at the heart of the quantum-to-classical transition. It ensures consistency between quantum and classical predictions for systems observed to behave classically. It provides a quantitative, dynamical account of the boundary between quantum and classical physics. In any concrete experimental situation, decoherence theory specifies the physical requirements, both qualitatively and quantitatively, for pushing the quantum--classical boundary toward the quantum realm. Decoherence is a genuinely quantum-mechanical effect, to be carefully distinguished from classical dissipation and stochastic fluctuations.
One of the most surprising aspects of the decoherence process is its extreme efficiency, especially for mesoscopic and macroscopic quantum systems. Furthermore, due to the many uncontrollable degrees of freedom of the environment, the dynamically created entanglement between system and environment is usually irreversible for all practical purposes; indeed, this effective irreversibility is a hallmark of decoherence. Increasingly realistic models of decoherence processes have been developed, progressing from toy models to complex models tailored to specific experiments (see Sec.~\ref{sec:decmodels}). Advances in experimental techniques have made it possible to observe the gradual action of decoherence in experiments such as cavity QED \cite{Raimond:2001:aa}, matter-wave interferometry \cite{Hornberger:2012:ii}, superconducting systems \cite{Leggett:2002:uy}, and ion traps \cite{Leibfried:2003:om,Haffner:2008:pp} (see Sec.~\ref{sec:exper-observ-decoh}).
The superposition states necessary for quantum information processing are typically also those most susceptible to decoherence. Thus, decoherence is a major barrier to the implementation of devices for quantum information processing such as quantum computers. Qubit systems must be engineered to minimize environmental interactions detrimental to the preparation and longevity of the desired superposition states. At the same time, these systems must remain sufficiently open to allow for their control. Strategies for combatting the adverse effects of decoherence include decoherence avoidance, such as the encoding of information in decoherence-free subspaces (see Sec.~\ref{sec:dfs}), and quantum error correction \cite{Lidar:2013:pp}, which can undo the decoherence-induced degradation of the superposition state (see Sec.~\ref{sec:corr-decoh-induc}). Such strategies will be an integral part of quantum computers. Not only is decoherence relevant to quantum information, but also vice versa. An information-centric view of quantum mechanics proves helpful in conveying the essence of the decoherence process and is also used in recent explorations of the role of the environment as an information channel (see Secs.~\ref{sec:envir-monit-inform} and \ref{sec:prol-inform-quant}).
Decoherence is a technical result concerning the dynamics and measurement statistics of open quantum systems. From this view, decoherence merely addresses a \emph{consistency problem}, by explaining how and when the quantum probability distributions approach the classically expected distributions. Since decoherence follows directly from an application of the quantum formalism to interacting quantum systems, it is not tied to any particular interpretation of quantum mechanics, and it neither supplies such an interpretation nor amounts to a theory that could make predictions beyond those of standard quantum mechanics. However, the bearing decoherence has on the problem of the relation between quantum and classical has been frequently invoked to assess or support various interpretations of quantum mechanics, and the implications of decoherence for the so-called quantum measurement problem have been analyzed extensively (see Sec.~\ref{sec:impl-found-quant}). Indeed, historically decoherence theory arose in the context of Zeh's independent formulation of an Everett-style interpretation \cite{Zeh:1970:yt}; see Ref.~\cite{Camilleri:2009:aq} for an analysis of the connections between the roots of decoherence and matters of interpretation.
It is a curious ``historical accident'' (Joos's term \cite[p.~13]{Joos:1999:po}) that the implications of environmental entanglement were appreciated only relatively late. While one can find---for example, in Heisenberg's writings (see Sec.~\ref{sec:niels-bohrs-views} and Ref.~\cite{Camilleri:2015:oo})---a few early anticipatory remarks about the role of environmental interactions in the quantum-mechanical description of physical systems, it was not until the 1970s that the ubiquity and implications of environmental entanglement were realized by Zeh \cite{Zeh:1970:yt,Kubler:1973:ux}. In the 1980s, the formalism of decoherence was further developed, chiefly by Zurek \cite{Zurek:1981:dd,Zurek:1982:tv}, and the first concrete decoherence models and numerical estimates of decoherence rates were worked out by Joos and Zeh \cite{Joos:1985:iu} and Zurek \cite{Zurek:1986:uz} (see also Refs~\cite{Walls:1985:pp,Walls:1985:lm,Caldeira:1985:tt}). Zurek's 1991 \emph{Physics Today} article \cite{Zurek:1991:vv} was an important factor in introducing a broader audience of physicists to decoherence theory. Such dissemination and maturing of decoherence theory came at a perfect time, as the 1990s also saw the blossoming of quantum information \cite{Feynman:1982:yy,Deutsch:1985:ym,Deutsch:1992:tv,Berthiaume:1992:lk,Berthiaume:1992:lm,Bernstein:1993:yy,Simon:1994:lk,Shor:1994:om,Shor:1997:tt,Grover:1996:rr,Grover:1997:mm}, as well as experimental advances in the creation of superpositions of mesoscopically and macroscopically distinct states \cite{Brune:1996:om,Arndt:1999:rc,Friedman:2000:rr,Wal:2000:om}. The quantum states relevant to quantum information processing and Schr\"odinger-cat-type experiments required the insights of decoherence theory, and conversely the new experiments served as a fertile ground for testing the predictions of decoherence theory. Accordingly, these developments led to a rapid rise in interest and research activity in the field of decoherence. Today, decoherence has become a central topic of modern quantum mechanics and is studied intensely both theoretically and experimentally.
Existing reviews of decoherence include the papers by Zurek \cite{Zurek:2002:ii}, Paz and Zurek \cite{Paz:2001:aa}, and Hornberger \cite{Hornberger:2009:aq}. Two books dedicated to decoherence are presently available: a volume by Joos et al.\ \cite{Joos:2003:jh} (a collection of chapters written by different authors), and a monograph by this author \cite{Schlosshauer:2007:un}, which offers, among other material, a detailed treatment of the topics surveyed in this paper. Textbooks on open quantum systems, such as Ref.~\cite{Breuer:2002:oq}, also contain a substantial amount of material on decoherence, especially in the context of quantum master equations.
This article is organized as follows. Section~\ref{sec:form-basic-conc} introduces the theory, formalism, and fundamental concepts of decoherence. Section~\ref{sec:mastereqs} discusses the description of decoherence dynamics in terms of master equations. Section~\ref{sec:decmodels} reviews several classes of important decoherence models. Section~\ref{sec:decoh-errcorr} describes methods for avoiding and mitigating the influence of decoherence. Section~\ref{sec:exper-observ-decoh} gives an overview of several experiments that have demonstrated the gradual, controlled action of decoherence. Section~\ref{sec:impl-found-quant} comments on the implications of decoherence for foundational issues in quantum mechanics and for the different interpretations of quantum mechanics. Section~\ref{sec:concluding-remarks} offers concluding remarks.
\section{\label{sec:form-basic-conc}Basic formalism and concepts}
\begin{figure}
\caption{Basic idea of the decoherence process, illustrated in the context of a quantum double-slit experiment. \emph{(a)} Particles passing through a double slit create an interference pattern on a distant screen. \emph{(b)} If one monitors which slit each particle passes through, the interference pattern vanishes. \emph{(c)} The monitoring may arise from any measurement-like interaction, such as the scattering of environmental particles. The motional states of the environmental particles will then encode information about the path of the particle through the slits, resulting in the disappearance of the interference pattern. If the environment obtains only partial (rather than complete) which-path information, an interference pattern with reduced visibility obtains.}
\label{fig:bi}
\end{figure}
In the double-slit experiment, we cannot observe an interference pattern if we also measure which slit the particle passes through, that is, if we obtain perfect \emph{which-path information} (Fig.~\ref{fig:bi}). In fact, there is a continuous tradeoff between interference (phase information) and which-path information: the better we can distinguish the two possible paths, the less visible the interference pattern becomes \cite{Wooters:1979:az,Englert:1996:km}. What is more, for a decrease in interference visibility to occur it suffices that there are degrees of freedom \emph{somewhere in the world} that, \emph{if they were measured}, would allow us to make, with a certain degree of confidence, a statement about the path of the particle through the slits. While we cannot say that prior to their measurement, these degrees of freedom have encoded information about a particular, definitive path of the particle---instead, we have merely \emph{correlations} involving both possible paths---no actual measurement is required to bring about the decrease in interference visibility. It is enough that, \emph{in principle}, we could make such a measurement to obtain which-path information.
This is somewhat loose talk, and conceptual caveats lurk. But it captures quite well the essence of what is happening in decoherence, where those ``degrees of freedom somewhere in the world'' are degrees of freedom of the system's environment that interact with the system, leading to the creation of quantum correlations (entanglement) between system and environment. Decoherence can thus be thought of as a process arising from the \emph{continuous monitoring of the system by the environment} \cite{Zurek:1981:dd}; effectively, the environment is performing nondemolition measurements on the system (see Sec.~\ref{sec:envir-monit-inform}). We now give a formal quantum-mechanical account of what we have just tried to convey in words, and then flesh out the consequences and details.
\subsection{Decoherence and interference damping}
Consider again the double-slit experiment and denote the quantum states of the particle (call it $S$, for ``system'') corresponding to passage through slit 1 and 2 by $\ket{s_1}$ and $\ket{s_2}$, respectively. Suppose that the particle interacts with another system $E$---for example, a detector or an environment---such that if the quantum state of the particle before the interaction is $\ket{s_1}$, then the quantum state of $E$ will become $\ket{E_1}$ (and similarly for $\ket{s_2}$), resulting in the final composite states $\ket{s_1}\ket{E_1}$ and $\ket{s_2}\ket{E_2}$, respectively. Owing to the linearity of the Schr\"odinger time evolution, for an initial superposition state $\alpha\ket{s_1}+\beta\ket{s_2}$ the final composite state will be entangled,
\begin{equation} \label{eq:1dlkf} \ket{\Psi} = \alpha \ket{s_1} \ket{E_1} + \beta \ket{s_2} \ket{E_2}. \end{equation}
Consider now the \emph{reduced density matrix} $\op{\rho}_S$ for the system \cite{Landau:1927:uy,Neumann:1932:gq,Furry:1936:pp}, which is obtained by tracing out (i.e., averaging over) the degrees of freedom of the environment in the composite system--environment density matrix $\op{\rho}_{SE}$,
\begin{align}
\label{eq:aa12rm}
\op{\rho}_S &= \text{Tr}_E(\op{\rho}_{SE}). \end{align}
The reduced density matrix exhaustively encodes the statistics of all possible local measurements on the system $S$. That is to say, for any observable that pertains only to the Hilbert space of the system, $\op{O} = \op{O}_S\otimes \op{I}_E$, where $\op{I}_E$ is the identity operator in the Hilbert space of the environment, the reduced density matrix $\op{\rho}_S$ will be sufficient to calculate the expectation value of $\op{O}$. To see this, let $\{ \ket{\psi_k} \}$ and $\{ \ket{\phi_l} \}$ be orthonormal bases of the Hilbert spaces of the system and environment, respectively. Then the expectation value of $\op{O}$ is
\begin{align} \label{eq:rcae} \langle \op{O} \rangle &= \text{Tr} \, (\op{\rho}_{SE}\op{O}) \notag \\ &= \sum_{kl} \bra{\phi_l} \bra{\psi_k} \op{\rho}_{SE} \left(\op{O}_S \otimes \op{I}_E\right) \ket{\psi_k} \ket{\phi_l} \notag \\ &= \sum_{k} \bra{\psi_k} \left( \sum_l \bra{\phi_l}
\op{\rho}_{SE}
\ket{\phi_l} \right) \op{O}_S \ket{\psi_k} \notag \\ &= \sum_{k} \bra{\psi_k} \left( \text{Tr}_E \, \op{\rho}_{SE} \right) \op{O}_S \ket{\psi_k} \notag \\ &= \sum_{k} \bra{\psi_k} \op{\rho}_S\op{O}_S \ket{\psi_k} \notag \\ &= \text{Tr}_S \left( \op{\rho}_S
\op{O}_S \right), \end{align}
showing that indeed only the reduced density matrix, rather than the full composite density matrix $\op{\rho}_{SE}$, is needed to calculate the expectation value. Since in the context of decoherence we are chiefly concerned with the effects of the environment on the measurable properties of the system, the reduced density matrix plays an essential role in decoherence theory for describing the quantum state of a system in the presence of environmental entanglement \cite{Zurek:1981:dd,Zurek:1982:tv,Schlosshauer:2007:un}.
For the composite state vector described by Eq.~\eqref{eq:1dlkf}, the reduced density matrix is \cite{Schlosshauer:2007:un}
\begin{align}
\label{eq:aa12}
\op{\rho}_S &= \text{Tr}_E(\op{\rho}_{SE}) =\text{Tr}_E \ketbra{\Psi}{\Psi} \nonumber \\
&= \abs{\alpha}^2
\ketbra{s_1}{s_1} + \abs{\beta}^2 \ketbra{s_2}{s_2} + \alpha \beta^*\ketbra{s_1}{s_2}
\braket{E_2}{E_1} + \alpha^*\beta\ketbra{s_2}{s_1}
\braket{E_1}{E_2}. \end{align}
Now suppose, for example, that we measure the particle's position by letting the particle impinge on a distant detection screen. Statistically, the resulting particle probability density $P(x)$ will be given by
\begin{align}\label{eq:fdljksjk2} P(x) & = \text{Tr}_S(\op{\rho}_S \ketbra{x}{x}) \notag \\ &= \abs{\alpha}^2 \abs{\psi_1(x)}^2 +\abs{\beta}^2 \abs{\psi_2(x)}^2 + 2 \,\text{Re} \left\{\alpha \beta^* \psi_1(x) \psi_2^*(x)\braket{E_2}{E_1} \right\}, \end{align}
where $\psi_i(x) \equiv \braket{x}{s_i}$. The last term represents the interference contribution. Thus, the visibility of the interference pattern is quantified by the overlap $\braket{E_2}{E_1}$, i.e., by the distinguishability of $\ket{E_1}$ and $\ket{E_2}$. In the limiting case of perfect distinguishability ($\braket{E_2}{E_1} = 0$), no interference pattern will be observable and we obtain the classical prediction. Phase relations have become \emph{locally} (i.e., with respect to $S$) inaccessible, and there is no measurement on $S$ that can reveal coherence between $\ket{s_1}$ and $\ket{s_2}$. The coherence is now between the states $\ket{s_1} \ket{E_1}$ and $\ket{s_2} \ket{E_2}$, requiring an appropriate \emph{global} measurement (acting jointly on $S$ and $E$) for it to be revealed. Conversely, if the interaction between $S$ and $E$ is such that $E$ is completely unable to resolve the path of the particle, then $\ket{E_1}$ and $\ket{E_2}$ are indistinguishable and full coherence is retained at the level of $S$, as is also directly obvious from Eq.~\eqref{eq:1dlkf}.
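For instance (as an illustrative special case of Eq.~\eqref{eq:fdljksjk2}, not an additional assumption of the general formalism), if the two components have equal weights, $\abs{\alpha}=\abs{\beta}=1/\sqrt{2}$, and equal envelopes, $\abs{\psi_1(x)}=\abs{\psi_2(x)}$, in the fringe region, then $P(x)\propto 1+\abs{\braket{E_2}{E_1}}\cos\varphi(x)$, where $\varphi(x)$ denotes the phase of $\alpha\beta^*\psi_1(x)\psi_2^*(x)\braket{E_2}{E_1}$. The fringe visibility $\mathcal{V}=(P_\text{max}-P_\text{min})/(P_\text{max}+P_\text{min})$ is therefore simply
\begin{equation*}
\mathcal{V}=\abs{\braket{E_2}{E_1}},
\end{equation*}
so the overlap of the relative environmental states plays the role of a fringe-contrast factor.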
Here is another way of putting the matter. Looking back at Eq.~\eqref{eq:1dlkf}, we see that $E$ encodes which-way information about $S$ in the same ``relative-state'' sense \cite{Everett:1957:rw} in which EPR correlations \cite{Einstein:1935:dr,Bell:1964:ep,Bell:1966:ph} may be said to encode ``information.'' That is, if $\braket{E_2}{E_1} = 0$ and we were to measure $E$ and found it to be in state $\ket{E_1}$, we could, in EPR's words \cite[p.~777]{Einstein:1935:dr}, ``predict with certainty'' that we will find $S$ in $\ket{s_1}$.\footnote{Of course, this must not be read as saying that $S$ was already in $\ket{s_1}$ (i.e., ``went through slit 1'') prior to the measurement of $E$. Nor does it mean that the result of a subsequent path measurement on $S$ is necessarily determined, by virtue of the measurement on $E$, prior to this $S$-measurement's actually being carried out. After all, as Peres \cite{Peres:1978:aa} has cautioned us, unperformed measurements have no outcomes. So while the picture of $E$ as ``encoding which-path information'' about $S$ is certainly suggestive and helpful, it should be used with an understanding of its conceptual pitfalls.} Whenever such a prediction is possible were we to measure $E$, no interference effects between the components $\ket{s_1}$ and $\ket{s_2}$ can be measured at $S$, even if $E$ is never actually measured. In the intermediary regime where $0 < \abs{\braket{E_2}{E_1}} < 1$, $E$ encodes only \emph{partial} which-way information about $S$, in the sense that a measurement of $E$ could not reliably distinguish between $\ket{E_1}$ and $\ket{E_2}$; instead, sometimes the measurement will result in an outcome compatible with both $\ket{E_1}$ and $\ket{E_2}$. Consequently, an interference experiment carried out on $S$ would find reduced visibility, representing diminished local coherence between the components $\ket{s_1}$ and $\ket{s_2}$. Equation~\eqref{eq:fdljksjk2} shows that the reduction in visibility increases as $\ket{E_1}$ and $\ket{E_2}$ become more distinguishable.
As hinted above, the description developed so far describes the essence of the decoherence process if we identify the particle $S$ more generally with an arbitrary quantum system and the second system $E$ with the environment of $S$. Then an idealized account of the decoherence interaction has the (von Neumann \cite{Neumann:1932:gq}) form
\begin{equation} \label{eq:d1} \biggl( \sum_i c_i \ket{s_i} \biggr) \ket{E_0} \quad \longrightarrow \quad \sum_i c_i \ket{s_i} \ket{E_i(t)}. \end{equation}
Here we have introduced a time parameter $t$, where $t=0$ corresponds to the onset of the environmental interaction, with $\ket{E_i(t=0)} \equiv \ket{E_0}$ for all $i$.\footnote{In cases where the environment does not start out in a pure state, we can always purify it through the introduction of an additional (fictitious) environment. Without loss of generality, we can therefore always take the environment to be in a pure state before its interaction with the system.} At $t<0$ the system and environment are assumed to be uncorrelated (an assumption common to most decoherence models).
A single environmental particle interacting with the system will typically only insufficiently resolve the components $\ket{s_i}$ in the system's superposition state. But because of the large number of such particles (and, hence, degrees of freedom), the overlap between their different joint states $\ket{E_i(t)}$ will rapidly decrease as a result of the buildup of many interaction events. Specifically, in many decoherence models an exponential decay of overlap is found \cite{Zurek:1982:tv,Joos:1985:iu,Paz:1993:ta,Leggett:1987:pm,Mokarzel:2002:za,Hornberger:2003:un,Zurek:2002:ii,Schlosshauer:2007:un,Breuer:2002:oq},
\begin{equation}
\label{eq:jkHjkfhjhPJHyudf615}
\braket{E_i(t)}{E_j(t)} \, \propto \, \ensuremath{e}^{-t/\tau_\text{d}} \qquad \text{for $i \not= j$}. \end{equation}
Here, $\tau_\text{d}$ is the characteristic decoherence timescale, which can be evaluated for particular choices of the parameters in each model (see Sec.~\ref{sec:decmodels}). Because the overlap of the environmental states quantifies the observability of interference effects between the corresponding system states that are correlated with the environmental states, Eq.~\eqref{eq:jkHjkfhjhPJHyudf615} describes an exponentially fast suppression of local interference.
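A heuristic way to see how this exponential behavior arises (a deliberately schematic sketch; the model-specific derivations are discussed in Sec.~\ref{sec:decmodels}) is to note that if the environmental particles scatter independently, the relative environmental states factorize, $\ket{E_i(t)}=\bigotimes_{l=1}^{N(t)}\ket{e_i^{(l)}}$, with $N(t)$ the number of scattering events up to time $t$. If each individual event only weakly discriminates the components, $\abs{\braket{e_i^{(l)}}{e_j^{(l)}}}\approx 1-\epsilon$ with $\epsilon\ll 1$, then
\begin{equation*}
\abs{\braket{E_i(t)}{E_j(t)}}\approx(1-\epsilon)^{N(t)}\approx \ensuremath{e}^{-\epsilon N(t)},
\end{equation*}
so a constant scattering rate $\Gamma=\dot{N}$ yields the exponential law \eqref{eq:jkHjkfhjhPJHyudf615} with $\tau_\text{d}\sim 1/(\epsilon\Gamma)$.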
\subsection{\label{sec:envir-monit-inform}Environmental monitoring and information transfer}
We will now motivate, in a different and more rigorous way, the picture of decoherence as a process of environmental monitoring. First, we express the influence of the environment in a completely general way. We assume that at $t=0$ there are no correlations between system $S$ and environment $E$, $\op{\rho}_{SE}(0) = \op{\rho}_S(0) \otimes \op{\rho}_E(0)$. We write $\op{\rho}_E(0)$ in its diagonal decomposition, $\op{\rho}_E(0) = \sum_i p_i \ketbra{E_i}{E_i}$, where $\sum_i p_i =1$ and the states $\ket{E_i}$ form an orthonormal basis of the Hilbert space of $E$. If $H$ denotes the Hamiltonian (here assumed to be time-independent) of $SE$ and $U(t) = \ensuremath{e}^{-\ensuremath{i} H t/\hbar}$ represents the unitary time evolution operator, then the density matrix of $S$ evolves according to
\begin{align}
\label{eq:1slvjhvkjfkjvsj0}
\op{\rho}_S(t) &= \text{Tr}_E \left\{ U(t) \left[ \op{\rho}_S(0) \otimes \left( \sum_i p_i \ketbra{E_i}{E_i} \right) \right] U^\dagger(t) \right\}\nonumber \\ &= \sum_{ij} p_i \bra{E_j} U(t) \ket{E_i} \op{\rho}_S(0)\bra{E_i} U^\dagger(t) \ket{E_j}. \end{align}
Introducing the \emph{Kraus operators} \cite{Kraus:1971:ii,Kraus:1983:ee} defined by $E_{ij}(t) = \sqrt{p_i} \bra{E_j} U(t) \ket{E_i}$, we obtain
\begin{equation}
\op{\rho}_S(t) = \sum_{ij} E_{ij}(t) \op{\rho}_S(0) E^\dagger_{ij}(t). \end{equation}
It is customary to combine the two indices $i$ and $j$ into a single index and write the Kraus operators as
\begin{equation}
\label{eq:worihfvsjvttrafs2}
W_k(t) \equiv \sqrt{p_{i_k}} \bra{E_{j_k}} U(t) \ket{E_{i_k}}, \end{equation}
such that
\begin{equation} \label{eq:dfjsb1}
\op{\rho}_S(t) = \sum_k W_k(t) \op{\rho}_S(0) W^\dagger_k(t). \end{equation}
Unitarity of the evolution of $SE$ implies that the Kraus operators satisfy the completeness constraint
\begin{equation}
\label{eq:19sirhgvksjvbkjsb} \sum_k W^\dagger_k(t) W_k(t) = I_S, \end{equation}
where $I_S$ is the identity operator in the Hilbert space of $S$.\footnote{Conversely, Eq.~\eqref{eq:19sirhgvksjvbkjsb} may also be used as an indicator of unitarity; if it were not obeyed, then one would need to conclude that $SE$ is evolving nonunitarily due to the presence of an additional environment $E'$.} The Kraus-operator formalism (also called \emph{operator-sum formalism}) represents the effect of the environment as a sequence of (in general nonunitary) transformations of $\op{\rho}_S$ generated by the operators $W_k$ \cite{Breuer:2002:oq,Alicki:2007:uu}. The Kraus operators exhaustively encode information about the initial state of the environment and about the dynamics of the joint $SE$ system, and they play the role of generators of so-called dynamical maps (see Sec.~\ref{sec:mastereqs}).
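As a simple, standard illustration of this formalism (a textbook example, not one of the specific models analyzed later), consider a single qubit decohering in the basis $\{\ket{0},\ket{1}\}$, described by the two Kraus operators
\begin{equation*}
W_0=\sqrt{1-p}\,\op{I}_S,\qquad W_1=\sqrt{p}\,\left(\ketbra{0}{0}-\ketbra{1}{1}\right),\qquad 0\le p\le\tfrac{1}{2}.
\end{equation*}
They satisfy the completeness constraint \eqref{eq:19sirhgvksjvbkjsb}, and a short calculation gives
\begin{equation*}
\op{\rho}_S=\begin{pmatrix}\rho_{00} & \rho_{01}\\ \rho_{10} & \rho_{11}\end{pmatrix}
\;\longmapsto\;
W_0\op{\rho}_S W_0^\dagger+W_1\op{\rho}_S W_1^\dagger
=\begin{pmatrix}\rho_{00} & (1-2p)\,\rho_{01}\\ (1-2p)\,\rho_{10} & \rho_{11}\end{pmatrix},
\end{equation*}
i.e., the populations are untouched while the coherences are suppressed by the factor $1-2p$; identifying $1-2p=\ensuremath{e}^{-t/\tau_\text{d}}$ recovers the exponential interference damping of Eq.~\eqref{eq:jkHjkfhjhPJHyudf615}.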
Following the treatment given by Hornberger in Ref.~\cite{Hornberger:2009:aq}, we will now use Eq.~\eqref{eq:dfjsb1} to formally motivate the view that decoherence corresponds to an indirect measurement of the system by the environment, and that it thus results from a transfer of information from the system to the environment. In such an indirect measurement, we let the system $S$ interact with a probe---here the environment $E$---followed by a projective measurement on $E$. The probe is treated as a quantum system. This procedure aims to yield information about $S$ without performing a projective (and thus destructive) direct measurement on $S$. To model such an indirect measurement, consider again an initial composite density operator $\op{\rho}_{SE}(0) = \op{\rho}_S(0) \otimes \op{\rho}_E(0)$ evolving under the action of $U(t) = \ensuremath{e}^{-\ensuremath{i} H t/\hbar}$, where $H$ is the total Hamiltonian. Consider a projective measurement $M$ on $E$ with eigenvalues $\alpha$ and corresponding projectors $P_\alpha = \ketbra{\alpha}{\alpha}$, with $P_\alpha^2=P_\alpha^\dagger=P_\alpha$. The probability of obtaining outcome $\alpha$ in this measurement when $S$ is described by the density operator $\op{\rho}_S(t)$ is
\begin{equation}
\text{Prob}\left(\alpha \mid \op{\rho}_S(t) \right) = \text{Tr}_E \left( P_\alpha \op{\rho}_E(t) \right) = \text{Tr}_E \left\{ P_\alpha \text{Tr}_S \left[ U(t) \left( \op{\rho}_S(0) \otimes \op{\rho}_E(0) \right) U^\dagger(t)\right] \right\}. \end{equation}
The density matrix of $S$ conditioned on the particular outcome $\alpha$ is
\begin{align} \op{\rho}_S^{(\alpha)}(t) &= \frac{ \text{Tr}_E \left\{ \left[I \otimes P_\alpha \right] \op{\rho}_{SE}(t) \left[I \otimes P_\alpha \right] \right\}}{\text{Prob}\left(\alpha \mid \op{\rho}_S(t)\right)}\nonumber\\ &=\frac{ \text{Tr}_E \left\{ \left[I \otimes P_\alpha \right] U(t) \left[ \op{\rho}_S(0) \otimes \op{\rho}_E(0) \right] U^\dagger(t) \left[I \otimes P_\alpha \right] \right\}}{\text{Prob}\left(\alpha \mid \op{\rho}_S(t)\right)}.\label{eq:hiuvb} \end{align}
Inserting the diagonal decomposition $\op{\rho}_E(0) = \sum_k p_k \ketbra{E_k}{E_k}$ and carrying out the trace gives \cite{Hornberger:2009:aq}
\begin{equation}\label{eq:hiuvbxxxxx} \op{\rho}_S^{(\alpha)}(t) = \sum_k \frac{ M_{\alpha,k}(t) \op{\rho}_S(0) M^\dagger_{\alpha,k}(t)}{\text{Prob}\left(\alpha \mid \op{\rho}_S(t)\right)}, \end{equation}
where we have introduced the measurement operators
\begin{equation} M_{\alpha,k}(t) = \sqrt{p_k} \bra{\alpha} U(t) \ket{E_k}, \end{equation}
which obey the completeness constraint $\sum_{\alpha,k}M_{\alpha,k}^\dagger(t)M_{\alpha,k}(t)=I_S$. Equation~\eqref{eq:hiuvbxxxxx} describes the effect of the indirect measurement on the state of the system. If, however, we do not actually inquire about the result of this measurement, we must assign to the system a density operator that is a sum over all the possible conditional states $\op{\rho}_S^{(\alpha)}(t)$ weighted by their probabilities $\text{Prob}\left(\alpha \mid \op{\rho}_S(t)\right)$,
\begin{equation} \op{\rho}_S(t) = \sum_\alpha \text{Prob}\left(\alpha \mid \op{\rho}_S(t)\right) \op{\rho}_S^{(\alpha)}(t) = \sum_{\alpha,k} M_{\alpha,k}(t) \op{\rho}_S(0) M^\dagger_{\alpha,k}(t). \end{equation}
Note that this expression is formally analogous to the Kraus-operator expression of Eq.~\eqref{eq:dfjsb1}, which described the effect of a general environmental interaction on the state of the system. Recall, further, that the situation we encounter in decoherence is precisely one in which we do not actually read out the environment---or, in the present picture, in which we do not inquire about the result of the indirect measurement. This suggests that decoherence can indeed be understood as an indirect measurement---a monitoring---of the system by its environment.
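As a quick, self-contained numerical illustration of the operator-sum formalism (this sketch is not part of the original treatment; the chosen unitary and states are arbitrary, and all variable names are purely illustrative), the following NumPy script constructs the Kraus operators $W_{ij}(t)=\sqrt{p_i}\bra{E_j}U(t)\ket{E_i}$ for a qubit system coupled to a single-qubit environment and checks that they reproduce the exact reduced dynamics $\op{\rho}_S(t)=\text{Tr}_E[U(t)(\op{\rho}_S(0)\otimes\op{\rho}_E(0))U^\dagger(t)]$, i.e., the operator-sum representation \eqref{eq:dfjsb1}, and satisfy the completeness constraint \eqref{eq:19sirhgvksjvbkjsb}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    # unitary from the QR decomposition of a random complex matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

dS, dE = 2, 2                        # qubit system S, qubit environment E
U = random_unitary(dS * dE)          # arbitrary joint evolution U(t)

psi = np.array([0.6, 0.8j])          # pure initial system state
rhoS0 = np.outer(psi, psi.conj())
p = np.array([0.7, 0.3])             # eigenvalues p_i of rho_E(0)
rhoE0 = np.diag(p)

# exact reduced state: rho_S(t) = Tr_E[ U (rho_S(0) x rho_E(0)) U^dagger ]
rhoSE = U @ np.kron(rhoS0, rhoE0) @ U.conj().T
rhoS_exact = np.trace(rhoSE.reshape(dS, dE, dS, dE), axis1=1, axis2=3)

# Kraus operators W_ij = sqrt(p_i) <E_j| U |E_i>, acting on S only
U4 = U.reshape(dS, dE, dS, dE)       # index order: (s', e', s, e)
kraus = [np.sqrt(p[i]) * U4[:, j, :, i]
         for i in range(dE) for j in range(dE)]

rhoS_kraus = sum(W @ rhoS0 @ W.conj().T for W in kraus)
completeness = sum(W.conj().T @ W for W in kraus)

print(np.allclose(rhoS_kraus, rhoS_exact))    # True: operator-sum form
print(np.allclose(completeness, np.eye(dS)))  # True: sum_k W_k^dag W_k = 1
\end{verbatim}
The same check works for larger environments by increasing the dimension of $E$; only the construction of the joint unitary and the partial trace change in size.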
\subsection{\label{sec:meas}Measures and visualization of decoherence}
Given the reduced density matrix $\op{\rho}_S(t)$ for a system interacting with an environment, there exist several measures for quantifying the amount of decoherence introduced into the system by the environmental interaction. Two commonly used measures are the \emph{purity},
\begin{equation} \label{eq:puri} \varsigma(\op{\rho}_S) = \text{Tr} \op{\rho}_S^2, \end{equation}
and the \emph{von Neumann entropy} \cite{VonNeumann:1926:tv},
\begin{equation} \label{eq:ent} S(\op{\rho}_S) = - \text{Tr}\left( \op{\rho}_S \log_2 \op{\rho}_S \right). \end{equation}
Both are based on the fact that the entanglement with the environment causes an initially pure quantum state of the system to become progressively mixed.
Consider first the purity, $\varsigma(\op{\rho}_S) = \text{Tr} \,\op{\rho}_S^2$. If $S$ is in a pure state, i.e., if its density matrix can be written as a single projector $\op{\rho}_S=\ketbra{\psi}{\psi}$ on a pure state $\ket{\psi}$, then $\op{\rho}_S^2=\op{\rho}_S$ and therefore $\varsigma(\op{\rho}_S) =1$. In the opposite limit of a maximally mixed state of an $N$-dimensional system,
\begin{equation} \label{eq:msd}
\op{\rho}_S = \frac{1}{N} \sum_{i} \ketbra{\psi_i}{\psi_i}, \end{equation}
where the states $\{\ket{\psi_i}\}$ form an orthonormal basis of the system's Hilbert space, the purity attains its lower bound, $\varsigma(\op{\rho}_S) =1/N$.
Similarly, the von Neumann entropy $S(\op{\rho}_S) = - \text{Tr}\left( \op{\rho}_S \log_2 \op{\rho}_S \right)$ is equal to zero for a pure state and increases for nonpure states, up to a value of $\log_2 (N)$ for a maximally mixed state. This can be seen explicitly by writing out the trace in the expression for the von Neumann entropy, which for an arbitrary density matrix $\op{\rho}$ with eigenvalues $\lambda_i$ yields
\begin{equation}\label{eq:ojibef00011}
S(\op{\rho}) = - \text{Tr} \, ( \op{\rho} \log_2 \op{\rho} ) =
- \sum_i \lambda_i \log_2 \lambda_i, \end{equation}
where any eigenvalues $\lambda_i$ that are equal to zero (representing states not contained in the mixture) are by convention excluded from the sum. For a pure state, there will be only a single nonzero eigenvalue $\lambda_i$, which must be equal to 1, and therefore $S(\op{\rho}) = 0$. For a maximally mixed state, $\lambda_i=1/N$ for all $i$, and thus $S(\op{\rho}) = \log_2 (N)$, its largest possible value.
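As a concrete numerical illustration of these two measures (a sketch added here; the dimension and the states are arbitrary), the following Python/NumPy snippet evaluates the purity and the von Neumann entropy for a pure state and for the maximally mixed state of an $N=4$ dimensional system, recovering the limiting values $\varsigma=1$, $S=0$ and $\varsigma=1/N$, $S=\log_2 N$ discussed above.
\begin{verbatim}
import numpy as np

def purity(rho):
    """Tr(rho^2)."""
    return np.real(np.trace(rho @ rho))

def von_neumann_entropy(rho):
    """-Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]     # zero eigenvalues are excluded by convention
    return float(-np.sum(evals * np.log2(evals)))

N = 4
psi = np.zeros(N)
psi[0] = 1.0
rho_pure = np.outer(psi, psi)        # pure state
rho_mixed = np.eye(N) / N            # maximally mixed state

print(purity(rho_pure), von_neumann_entropy(rho_pure))     # 1.0  0.0
print(purity(rho_mixed), von_neumann_entropy(rho_mixed))   # 0.25 2.0
\end{verbatim}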
A conceptual note on mixed density matrices is in order. Such density matrices may arise in two fundamentally different ways. In the first, a state-preparation procedure produces different possible pure states for the system; the mixture then reflects an observer's ignorance of which (pure) state was prepared in a particular run, which connects with the statistical distribution of (pure) states in the limit of many runs of the experiment. In this case, the probabilities associated with the pure states in the mixture can be thought of as classical entities: they represent either subjective ignorance in a situation when a single pure state was actually prepared, or they describe relative frequencies of pure states (for a physical \emph{ensemble} of systems). Such mixtures are also called \emph{proper} \cite{Espagnat:1966:mf,Espagnat:1988:cf,Espagnat:1995:ma,Schlosshauer:2003:tv,Schlosshauer:2007:un}. The second, distinct way in which a mixture may obtain is for a system entangled with an environment. Now, however, the reduced density matrix describing the mixture is ``improper,'' in the sense that no pure state can be ascribed to the system because of the presence of entanglement. The ``mixedness'' of the reduced state---reflecting a loss of information about a particular pure state arising from the environmental information transfer described in Sec.~\ref{sec:envir-monit-inform}---is purely quantum-mechanical in nature. Therefore, mixed reduced density matrices for systems entangled with an environment are not \emph{ignorance-interpretable} \cite{Espagnat:1966:mf,Espagnat:1988:cf,Espagnat:1995:ma,Schlosshauer:2003:tv,Schlosshauer:2007:un}, i.e., they do not describe a situation in which the system is in a pure state but one does not know which.\footnote{The degree to which this distinction between proper and improper ensembles is considered fundamental, or even relevant, depends in some measure on one's interpretation of quantum states. For example, in the interpretation known as QBism \cite{Fuchs:2014:pp}, all quantum states are purely subjective entities that encode an observer's probabilistic expectations associated with his future measurement interactions. On this view, the notion of the system's \emph{being} in a particular quantum state is not applicable to begin with, and the above cautionary note against interpreting decohered reduced density matrices as proper mixtures would consequently appear unnecessary. The use of a mixed reduced density matrix would simply reflect an observer's adjustment of his subjective probabilistic expectations on account of the presence of an environment.}
\begin{figure}
\caption{Visualization of the decoherence dynamics in one dimension, showing the reduced density matrix representing a superposition of two Gaussian wavepackets separated in position space. \emph{(a)} The initial density matrix before the onset of decoherence, exhibiting large off-diagonal terms that represent spatial coherence. \emph{(b)} Decoherence arising from entanglement with an environment diminishes the size of the off-diagonal terms over time. The direct peaks along the diagonal represent the position-space probability density $P(x)=\rho(x,x)$ and are, in the absence of dissipation, not affected by the decoherence process.}
\label{fig:r12}
\end{figure}
To visualize the decoherence of a quantum state, one may display the decay of the off-diagonal elements in the reduced density matrix as a function of time. As an example, Fig.~\ref{fig:r12} shows the reduced density matrix for a particle that moves in one spatial dimension and is described by a superposition of two position-space Gaussian wave packets. The interaction with the environment progressively reduces the size of the off-diagonal terms, while in the absence of dissipation the direct peaks (representing the probability distribution of finding the different possible position values in a measurement) remain unchanged.
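The structure shown in Fig.~\ref{fig:r12} is easy to reproduce numerically. The sketch below (with purely illustrative parameter values) builds the position-space density matrix $\rho(x,x')=\psi(x)\psi^*(x')$ for a superposition of two Gaussian wave packets; the two direct peaks lie on the diagonal near $x=x'=\pm a$, while the two coherence peaks lie off the diagonal near $(x,x')=(\pm a,\mp a)$ and are of comparable magnitude before decoherence sets in.
\begin{verbatim}
import numpy as np

# Superposition of two Gaussian wave packets separated by 2*a (illustrative values).
x = np.linspace(-10, 10, 400)
dx = x[1] - x[0]
a, sigma = 4.0, 1.0
psi = np.exp(-(x - a)**2 / (4*sigma**2)) + np.exp(-(x + a)**2 / (4*sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize

# Position-space density matrix rho(x, x') = psi(x) psi*(x').
rho = np.outer(psi, psi.conj())

# Direct peaks sit on the diagonal near x = x' = +-a; the coherence
# (off-diagonal) peaks sit near (x, x') = (+a, -a) and (-a, +a).
i = np.argmin(np.abs(x - a))
j = np.argmin(np.abs(x + a))
print("direct peak  |rho(+a,+a)| =", abs(rho[i, i]))
print("coherence    |rho(+a,-a)| =", abs(rho[i, j]))
\end{verbatim}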
\begin{figure}
\caption{Wigner representation of a decohering superposition of
two position-space Gaussian wave packets in one spatial dimension. \emph{(a)} Interference is represented by an oscillatory, ridge-like pattern between the direct peaks. \emph{(b)} Decoherence manifests itself as a progressive damping of the oscillatory pattern.}
\label{fig:wig}
\end{figure}
An alternative and commonly used approach to representing the decoherence of a system represented by a continuous degree of freedom (such as position) is the \emph{Wigner function} \cite{Wigner:1932:un,Hillery:1984:tv}. Using the example of a position degree of freedom, the Wigner function representing a position-space density matrix $\rho(x,x') \equiv \bra{x}\op{\rho}\ket{x'}$ is defined by
\begin{equation}
\label{eq:fsoifhwddfs6611a} W(x,p) = \frac{1}{2\pi\hbar} \int_{-\infty}^{+\infty} \ensuremath{\mathrm{d}} y \, \exp\left( \frac{\ensuremath{i} p
y}{\hbar}\right) \rho(x+y/2,x-y/2), \end{equation}
where $p$ is the conjugate momentum variable. The Wigner function is attractive because it resembles a phase-space probability distribution: it is real-valued and normalized, $\int \ensuremath{\mathrm{d}} x \int \ensuremath{\mathrm{d}} p \,W(x,p) = 1$, and position and momentum distributions may be obtained from the marginals $P(x) = \int \ensuremath{\mathrm{d}} p \, W(x,p) $ and $P(p) = \int \ensuremath{\mathrm{d}} x \, W(x,p)$. Of course, owing to the uncertainty principle, no proper quantum phase-space probability distribution is admissible, a fact that is reflected in the observation that the Wigner function (with the notable exception of Gaussians \cite{Hudson:1974:ra}) may be negative in certain regions. In the Wigner representation, interference terms appear as an oscillatory, ridge-like pattern between the direct peaks, as shown in Fig.~\ref{fig:wig} for a superposition of two Gaussian wave packets separated in position space. The wavelength $\lambda$ of the oscillations is inversely proportional to the spatial separation $\Delta x$ of the wave packets, $\lambda =2\pi\hbar/\Delta x$, which implies that the oscillations become more rapid as the superposition becomes more nonclassical (i.e., as $\Delta x$ increases) \cite{Zurek:2002:ii,Schlosshauer:2007:un}. Decoherence then manifests itself as a progressive damping of these oscillations (see also Sec.~\ref{sec:quant-brown-moti} and Fig.~\ref{fig:gaussmov}).
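Equation~\eqref{eq:fsoifhwddfs6611a} can also be evaluated directly on a grid. The following sketch (illustrative values and units with $\hbar=1$; no claim to numerical efficiency) computes the Wigner function of the two-Gaussian superposition by brute-force numerical integration and confirms that the interference ridge between the direct peaks oscillates in sign, so that $W$ takes negative values.
\begin{verbatim}
import numpy as np

hbar = 1.0
a, sigma = 4.0, 1.0
x_grid = np.linspace(-10, 10, 120)
p_grid = np.linspace(-4, 4, 120)
y = np.linspace(-20, 20, 801)            # integration variable
dy = y[1] - y[0]

def psi(x):
    """Superposition of two Gaussian wave packets centered at +a and -a
    (left unnormalized; only the sign structure of W matters here)."""
    return np.exp(-(x - a)**2 / (4*sigma**2)) + np.exp(-(x + a)**2 / (4*sigma**2))

def wigner(x, p):
    """W(x,p) = (2 pi hbar)^(-1) * Int dy exp(i p y / hbar) rho(x+y/2, x-y/2)."""
    integrand = np.exp(1j * p * y / hbar) * psi(x + y/2) * psi(x - y/2)
    return np.real(np.sum(integrand) * dy) / (2 * np.pi * hbar)

W = np.array([[wigner(xv, pv) for xv in x_grid] for pv in p_grid])
print("minimum of W:", W.min())          # negative: oscillating interference ridge
print("maximum of W:", W.max())
\end{verbatim}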
\subsection{\label{sec:envir-induc-supers}Environment-induced superselection}
As we have seen, decoherence results when a quantum system becomes entangled with its environment. How much the system becomes entangled---and thus how strong the effect of decoherence is---depends on how its initial quantum state relates to the Hamiltonian that governs the interaction between system and environment. In particular, as we will elaborate below, the specific structure of a given interaction Hamiltonian implies a set of quantum states that will become least entangled with the environment and are therefore most immune to the decohering influence of the environment. The states that are dynamically chosen through this \emph{stability criterion} \cite{Zurek:1981:dd,Zurek:1982:tv} are commonly referred to as \emph{preferred states} or \emph{pointer states}. In situations where the pointer states form a proper basis of the Hilbert space of the system---as is often the case for low-dimensional systems---one may also speak of a \emph{preferred basis} or \emph{pointer basis}. In this sense, the interaction with the environment imposes a dynamical filter on the state space, selecting those states that can be stably prepared and observed even in the presence of the environmental interaction \cite{Zeh:1970:yt,Kubler:1973:ux,Zurek:1981:dd,Zurek:1982:tv,Walls:1985:lm}. Zurek, who studied the process of state selection via environmental interactions in two influential papers in the 1980s \cite{Zurek:1981:dd,Zurek:1982:tv}, called it \emph{environment-induced superselection}.
To find the pointer states, we decompose the total system--environment Hamiltonian $\op{H}$ into the self-Hamiltonians $\op{H}_S$ and $\op{H}_E$ of the system $S$ and environment $E$ (describing the intrinsic dynamics), and a part $\op{H}_\text{int}$ representing the interaction between system and environment,
\begin{equation} \op{H} = \op{H}_S + \op{H}_E + \op{H}_\text{int}. \end{equation}
In many cases of practical interest, $\op{H}_\text{int}$ dominates the evolution of the system, such that $\op{H} \approx \op{H}_\text{int}$; this situation is referred to as the \emph{quantum-measurement limit} of decoherence. Let us first consider this case and determine the corresponding pointer states. In the spirit of the stability criterion, the idea is to find a set of system states $\{\ket{s_i}\}$ that remain unchanged and do not get entangled with the environment under the evolution generated by $\op{H}_\text{int}$. This condition is met for the eigenstates $\{\ket{s_i}\}$ (with eigenvalues $\{\lambda_i\}$) of the part of the interaction Hamiltonian $\op{H}_\text{int}$ that addresses the Hilbert space of the system---i.e., for the states of the system that are stationary under $\op{H}_\text{int}$ \cite{Zurek:1981:dd}. In this case, a system--environment product state $\ket{s_i}\ket{E_0}$ at $t=0$ (when the interaction with the environment is turned on) will evolve according to
\begin{equation}
\label{eq:gxlknn98ygya24}
\ensuremath{e}^{-\ensuremath{i} \op{H}_\text{int} t/\hbar} \ket{s_i}\ket{E_0}=
\ket{s_i} \, \ensuremath{e}^{-\ensuremath{i} \lambda_i \op{E} t/\hbar} \ket{E_0} \equiv \ket{s_i}\ket{E_i(t)}, \end{equation}
where $\op{E}$ denotes the environment part of the interaction Hamiltonian (written here for the tensor-product form $\op{H}_\text{int} = \op{S} \otimes \op{E}$ discussed below), and where we have assumed that $\op{H}_\text{int}$ is not explicitly time-dependent. Since the state remains a product state for all subsequent times $t>0$, there is no entanglement or decoherence. Note that \emph{superpositions} of pointer states are in general not immune to decoherence, since the environmental states $\ket{E_i(t)}$ tend to become rapidly distinguishable and therefore lead to an entangled system--environment state.
Thus, in the quantum-measurement limit the pointer states $\ket{s_i}$ are obtained by diagonalizing the interaction Hamiltonian in the subspace of the system. We may also define a \emph{pointer observable} $\op{O}_S = \sum_i o_i \ketbra{s_i}{s_i}$ of the system as a linear combination of pointer-state projectors $\op{\Pi}_i=\ketbra{s_i}{s_i}$. Because each $\ket{s_i}$ is an eigenstate of $\op{H}_\text{int}$, it follows that $\op{O}_S$ commutes with $\op{H}_\text{int}$,
\begin{equation}
\label{eq:dhvvsdnbbfvs27}
\bigl[ \op{O}_S, \op{H}_\text{int} \bigr] = 0. \end{equation}
This \emph{commutativity criterion} \cite{Zurek:1981:dd,Zurek:1982:tv} is particularly easy to apply when $\op{H}_\text{int}$ takes the (commonly encountered) tensor-product form $\op{H}_\text{int} = \op{S} \otimes \op{E}$, in which case the pointer observables will be those observables that commute with the system part $\op{S}$ of the interaction Hamiltonian.
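The commutativity criterion is straightforward to verify for the tensor-product form $\op{H}_\text{int}=\op{S}\otimes\op{E}$. In the following minimal sketch (all operators and weights chosen arbitrarily for illustration), the pointer observable is built from projectors onto the eigenstates of $\op{S}=\op{\sigma}_z$ and is found to commute with $\op{H}_\text{int}$.
\begin{verbatim}
import numpy as np

sigma_z = np.diag([1.0, -1.0])                  # system part S of H_int
E = np.array([[0.3, 0.5], [0.5, -0.2]])         # arbitrary Hermitian environment operator
H_int = np.kron(sigma_z, E)

# Pointer observable: a linear combination of projectors onto eigenstates of S
# (the weights are arbitrary).
O_S = 2.0 * np.diag([1.0, 0.0]) + 7.0 * np.diag([0.0, 1.0])
O_S_full = np.kron(O_S, np.eye(2))

commutator = O_S_full @ H_int - H_int @ O_S_full
print(np.allclose(commutator, 0.0))             # True: commutativity criterion satisfied
\end{verbatim}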
If the operator $\op{S}$ appearing in $\op{H}_\text{int}$ is Hermitian and thus could represent a physical observable, it will describe the quantity monitored by the environment, in the spirit of the discussion in Sec.~\ref{sec:envir-monit-inform}. For example, often position dynamically emerges as the environment-selected quantity because many interaction Hamiltonians describe scattering processes governed by force laws that depend on some power of particle distance. Then the Hamiltonian will commute with the position operator, and the corresponding eigenstates---the pointer states---are approximate eigenstates of position, represented by narrow position-space wave packets. These states are dynamically robust, thus accounting for the fact that position is a preferred, stable quantity in our everyday world. Their superpositions, however, are typically rapidly decohered, especially if they refer to mesoscopically or macroscopically distinct positions. A ubiquitous source of decoherence of such spatial superpositions is the scattering of environmental particles, a process known as \emph{collisional decoherence} (see Sec.~\ref{sec:collisionaldecoherence}). This explains why mesoscopic and macroscopic spatial superpositions tend to be prohibitively difficult to observe for larger systems \cite{Zurek:1981:dd,Zurek:1982:tv,Joos:1985:iu,Zurek:1991:vv,Gallis:1990:un,Diosi:1995:um,Hornberger:2003:un,Hornberger:2006:tb,Hornberger:2008:ii,Busse:2009:aa,Busse:2010:aa}.
But collisional decoherence may also be significant in microscopic systems. For instance, chiral molecules such as sugar occur in two distinct spatial configurations: left-handed and right-handed. When these molecules are immersed into a medium, the scattering of environmental particles resolves these two configurations, and thus the left-handed and right-handed chirality eigenstates dynamically emerge as the preferred states. Energy eigenstates of such molecules, on the other hand, are represented by superpositions of chirality eigenstates and are therefore subject to immediate decoherence. This explains why chiral molecules are found not in energy eigenstates but in chirality eigenstates \cite{Harris:1981:rc,Zeh:1999:qr,Trost:2009:ll,Bahrami:2012:oo}.
To give another example, the fact that we do not observe superpositions of different electric charges can be explained as a consequence of the coupling of a charge to its own Coulomb far-field acting as an environment \cite{Zeh:1970:yt,Giulini:1995:zh,Giulini:2000:ry}. The interaction leads to decoherence of charge superpositions and therefore to the environment-induced superselection of eigenstates of the charge operator. This role of the environment was already spelled out by Zeh \cite{Zeh:1970:yt} in his 1970 paper marking the birth of decoherence theory:
\begin{quote} This interpretation of measurement may also explain certain ``superselection rules'' which state, for example, that superpositions of states with different charge cannot occur. \dots [Such states] cannot be dynamically stable because of the significantly different interaction of their components with their environment, in analogy with the different handedness components of a sugar molecule. \end{quote}
In general, any interaction Hamiltonian $\op{H}_\text{int}$ can be written as a diagonal decomposition of system and environment operators $\op{S}_\alpha$ and $\op{E}_\alpha$, $\op{H}_\text{int} = \sum_\alpha \op{S}_\alpha \otimes \op{E}_\alpha$. For Hermitian operators $\op{S}_\alpha$, such a Hamiltonian represents the simultaneous environmental monitoring of different observables $\op{S}_\alpha$ of the system. Then the pointer states will be simultaneous eigenstates of the operators $\op{S}_\alpha$:
\begin{equation}
\label{eq:OIbvsrhjkbv9}
\op{S}_\alpha \ket{s_i} = \lambda_i^{(\alpha)}\ket{s_i} \qquad
\text{for all $\alpha$ and $i$}. \end{equation}
The \emph{quantum limit of decoherence} \cite{Paz:1999:vv} applies in situations where the self-Hamiltonian of the system dominates over the interaction Hamiltonian. This represents a situation in which the frequencies of the environment are small compared with the frequencies of the system. Then the environment will be able to monitor only quantities that are constants of motion. In the case of nondegeneracy, this will be the energy of the system, leading to the environment-induced superselection of energy eigenstates for the system \cite{Paz:1999:vv}.\footnote{Energy eigenstates are given a special role in textbooks because of their stationarity. Note, however, that for closed systems superpositions of energy eigenstates are equally viable. It is only through the inclusion of an environment and consideration of the resulting decoherence that such superpositions become dynamically suppressed, leading to the emergence of energy eigenstates as the preferred states of the system. Therefore, decoherence can be used to justify the special status commonly attributed to energy eigenstates.}
For more realistic models of decoherence, the stability criterion, Eq.~\eqref{eq:dhvvsdnbbfvs27}, often cannot be fulfilled exactly. Furthermore, in many situations the self-Hamiltonian of the system and the interaction Hamiltonian are of approximately equal strength, which means that neither of the two limiting cases discussed above---the quantum-measurement limit of negligible intrinsic dynamics and the quantum limit of decoherence of a slow environment---is appropriate. To deal with such situations, a more general, operational method known as a \emph{predictability sieve} \cite{Zurek:1993:pu,Zurek:1993:qq,Zurek:1998:re} can be used to identify classes of approximate pointer states. Here one computes the amount of decoherence introduced into the system over time for a large set of initial, pure states of the system evolving under the total system--environment Hamiltonian. Typically, this decoherence is measured as a decrease in purity $\text{Tr} \op{\rho}_S^2$ or an increase in von Neumann entropy $S(\op{\rho}_S) = - \text{Tr}\left( \op{\rho}_S \log_2 \op{\rho}_S \right)$ of the reduced density matrix $\op{\rho}_S$ over time (see Sec.~\ref{sec:meas} for a description of these measures). This allows one to rank the states according to their susceptibility to decoherence, and in this way the states most robust to the environmental interaction---the (approximate) pointer states---can be identified \cite{Zurek:1993:pu,Zurek:1993:qq,Zurek:1998:re,Zurek:2002:ii}. For example, in the model for quantum Brownian motion (see Sec.~\ref{sec:quant-brown-moti}), different measures of decoherence all lead to the selection of minimum-uncertainty wave packets in phase space as the most robust states \cite{Kubler:1973:ux,Zurek:1993:pu,Zurek:2002:ii,Diosi:2000:yn,Joos:2003:jh,Eisert:2003:ib}.
We note that the term ``predictability sieve'' is motivated by the connection between the purity of a state of a system and our knowledge of this state. A pure state encodes perfect knowledge (one assigns precisely one state vector to the system) and therefore maximum ``predictability.'' On the other hand, decoherence caused by entanglement with (and thus information transfer to) the environment renders the reduced density matrix progressively impure, which introduces an additional, purely quantum-mechanical probabilistic element and diminishes the degree of predictability. In this sense, the states most robust to decoherence---the (exact or approximate) pointer states whose purity is least affected by the presence of the environmental interactions---are also the most predictable \cite{Zurek:1993:pu,Zurek:1993:qq,Zurek:1998:re}.
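A toy version of the predictability sieve can be run in a few lines. In the sketch below (a schematic illustration added here, not one of the realistic models cited above), a family of pure qubit states parametrized by a polar angle $\theta$ is subjected to pure dephasing in the $\op{\sigma}_z$ basis, in which the off-diagonal elements decay as $\ensuremath{e}^{-\Gamma t}$; ranking the states by their remaining purity singles out the $\op{\sigma}_z$ eigenstates ($\theta=0,\pi$) as the most robust, i.e., as the (approximate) pointer states.
\begin{verbatim}
import numpy as np

Gamma, t = 1.0, 2.0                    # illustrative dephasing rate and elapsed time
thetas = np.linspace(0.0, np.pi, 9)

def purity_after_dephasing(theta):
    # Initial pure state cos(theta/2)|0> + sin(theta/2)|1>.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    rho = np.array([[c * c, c * s], [c * s, s * s]], dtype=complex)
    # Pure dephasing in the sigma_z basis: only the off-diagonal elements decay.
    rho[0, 1] *= np.exp(-Gamma * t)
    rho[1, 0] *= np.exp(-Gamma * t)
    return np.real(np.trace(rho @ rho))

ranking = sorted(thetas, key=purity_after_dephasing, reverse=True)
print([round(float(th), 2) for th in ranking])
# The sigma_z eigenstates (theta = 0 and pi) retain the highest purity and are
# therefore singled out as the (approximate) pointer states.
\end{verbatim}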
Subspaces of a system's Hilbert space spanned by pointer states that couple to the environment in the same way are known as \emph{decoherence-free subspaces}. Because any state in such a subspace will be immune to decoherence, decoherence-free subspaces are a valuable tool for encoding quantum information in a manner that avoids decoherence. Decoherence-free subspaces will be discussed in Sec.~\ref{sec:dfs}.
\subsection{\label{sec:prol-inform-quant}Proliferation of information and quantum Darwinism}
Decoherence theory focuses on the effect that entanglement with an environment has on a quantum system. As discussed in Sec.~\ref{sec:envir-monit-inform}, decoherence represents a process in which the environment monitors the system and information is transferred from the system to the environment. In this spirit, \emph{quantum Darwinism} \cite{Zurek:2003:pl,Ollivier:2003:za,Ollivier:2004:im,Blume:2004:oo,Blume:2005:oo,Zurek:2009:om,Riedel:2010:un,Riedel:2011:un,Riedel:2012:un,Zurek:2014:xx, Zwolak:2016:zz,Zwolak:2017:mm,Zurek:2018:on,Unden:2018:ia} turns the focus from the system to the environment and considers the information that the environment encodes about the system. Building on the ideas of decoherence and environmental encoding of information, quantum Darwinism broadens the role of the environment to that of a communication and amplification channel. It studies how interactions between the system and its environment lead to the \emph{redundant} storage of selected information about the system in many fragments of the environment. Hence the name ``quantum Darwinism'': certain states of the system are \emph{fitter} than others in the sense that they are able to imprint their information robustly and redundantly across the environment. By measuring some of these environmental fragments, observers can indirectly obtain information about the system without appreciably disturbing the system itself. Indeed, this represents how we typically observe objects. For example, we see an object not by directly interacting with it, but by intercepting scattered photons that encode information about the object's spatial structure \cite{Riedel:2010:un,Riedel:2011:un}.
In this sense, quantum Darwinism provides a dynamical explanation for the robustness of states to observation, especially for macroscopic objects. It has been shown that the observable of the system that can be imprinted most completely and redundantly in many distinct fragments of the environment coincides with the pointer observable selected by the system--environment interaction \cite{Ollivier:2003:za,Ollivier:2004:im,Blume:2004:oo,Blume:2005:oo}; conversely, most other states do not seem to be redundantly storable. Indeed, the redundant proliferation of information regarding pointer states may be as inevitable as decoherence itself \cite{Zwolak:2014:tt}.
Quantum Darwinism has been studied in several concrete models, including spin environments \cite{Blume:2004:oo,Zwolak:2016:zz}, quantum Brownian motion \cite{Blume:2007:oo}, and photon and photon-like environments \cite{Riedel:2010:un,Riedel:2011:un,Zwolak:2014:tt}. The efficiency of the amplification process described by quantum Darwinism can be expressed in terms of the quantum Chernoff information \cite{Zwolak:2014:tt}, and the ability of this information measure to appropriately capture the rich dynamics of amplification has been confirmed in the context of realistic spin models \cite{Zwolak:2016:zz}.
The structure and amount of information that the environment encodes about the system can be quantified using the measure of \emph{mutual information}, either in its classical \cite{Ollivier:2003:za,Ollivier:2004:im} or quantum \cite{Zurek:2002:ii,Blume:2004:oo,Blume:2005:oo} definition. Classical mutual information measures how well one can predict the outcome of a measurement of a given observable of the system $S$ by measuring an observable on a fraction of the environment $E$ \cite{Ollivier:2003:za,Ollivier:2004:im}. Quantum mutual information generalizes this concept and is defined as \cite{Zurek:2002:ii,Blume:2004:oo,Blume:2005:oo}
\begin{equation} \mathcal{I}_{S:E} =S(\op{\rho}_S) + S(\op{\rho}_E) - S(\op{\rho}_{SE}), \end{equation}
where $\op{\rho}_S$ is the density matrix of the system $S$, $\op{\rho}_E$ is the density matrix of the environment $E$, $\op{\rho}_{SE}$ is the density matrix of the composite system $SE$, and $S(\op{\rho})$ is the von Neumann entropy, Eq.~\eqref{eq:ent}, associated with $\op{\rho}$. Quantum mutual information represents the amount of entropy that would be created if all quantum correlations between $S$ and $E$ were destroyed; in other words, it quantifies how strongly system and environment are correlated. Classical and quantum mutual information give similar results \cite{Ollivier:2003:za,Ollivier:2004:im,Zurek:2002:ii,Blume:2004:oo,Blume:2005:oo} because the difference between the two measures, known as the \emph{quantum discord} \cite{Ollivier:2001:az}, vanishes when decoherence is effective enough to select a pointer basis \cite{Ollivier:2001:az}. We note here that the measure of quantum discord has also been applied to an analysis of Bohr's suggestion that the classicality of a measurement outcome is related to its communicability by classical means \cite{Streltsov:2013:oo}.
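For a maximally entangled pure state of two qubits, the quantum mutual information evaluates to two bits, since $S(\op{\rho}_S)=S(\op{\rho}_E)=1$ and $S(\op{\rho}_{SE})=0$. The following short sketch (added here purely for illustration) computes $\mathcal{I}_{S:E}$ directly from the definition.
\begin{verbatim}
import numpy as np

def entropy(rho):
    """Von Neumann entropy -Tr(rho log2 rho) from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Maximally entangled two-qubit state |Psi+> = (|00> + |11>)/sqrt(2).
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_SE = np.outer(psi, psi)

rho4 = rho_SE.reshape(2, 2, 2, 2)
rho_S = rho4.trace(axis1=1, axis2=3)   # partial trace over E
rho_E = rho4.trace(axis1=0, axis2=2)   # partial trace over S

I_SE = entropy(rho_S) + entropy(rho_E) - entropy(rho_SE)
print(I_SE)   # 2.0 bits
\end{verbatim}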
Recently, many of the more subtle details of quantum Darwinism have begun to be investigated. For example, Zwolak and Zurek \cite{Zwolak:2017:mm} have shown that environmental imprints left by quantum systems other than the system of interest do not appreciably affect the redundancy of the environmental information about the system of interest. The influence of factors such as initial correlations, interactions between subenvironments, and non-Markovian dynamics that may hinder the redundant encoding of information in the environment has been studied by several authors \cite{Riedel:2012:un,Galve:2016:oo, Pleasance:2017:oo, Ciampini:2018:ii}. Among such studies, Pleasance and Garraway \cite{Pleasance:2017:oo} used the model of a single qubit interacting with a collection of bosonic environments to investigate environmental encoding of information in the presence of many subenvironments. Ciampini et al.\ \cite{Ciampini:2018:ii} employed photonic cluster states to explore, both theoretically and experimentally, the influence that correlations between parts of the environment have on the redundancy and objectivity of environmental information. It has also been shown \cite{Galve:2016:oo, Pleasance:2017:oo, Ciampini:2018:ii} that non-Markovian dynamics and the resulting memory effects can result in a backflow of information from the environment to the system in a manner that impedes the creation of robust, classical, redundant environmental records. In addition to the aforementioned photonic experiment by Ciampini et al.\ \cite{Ciampini:2018:ii}, experimental studies of the ideas of quantum Darwinism have been reported by Unden et al.\ \cite{Unden:2018:ia}, who used a controlled interaction between a nitrogen vacancy center (the system) and several nuclear spins (the environment).
\subsection{\label{sec:decoh-vers-diss}Decoherence versus dissipation and noise}
Dissipation is always accompanied by decoherence (see, for example, the early studies by Walls and Milburn \cite{Walls:1985:pp} and by Caldeira and Leggett \cite{Caldeira:1985:tt}, who investigated the influence of damping on the coherence of superpositions of macroscopically different states). The converse, however, is not necessarily true. In fact, one of the earliest models of decoherence due to random spin environments \cite{Zurek:1982:tv, Cucchietti:2005:om} clearly demonstrated that the system may rapidly decohere without any loss of energy from the system. When dissipation and decoherence are both present, the loss of coherence is usually many orders of magnitude faster than any relaxation processes induced by dissipation. For example, a classic paper by Zurek \cite{Zurek:1986:uz} gave a ballpark estimate for the ratio of the relaxation timescale $\tau_\text{r}$ to the decoherence timescale $\tau_\text{d}$ for a massive object represented by a coherent superposition of two positions separated by $\Delta x$:
\begin{equation}
\label{eq:daf12}
\frac{\tau_\text{r}}{\tau_\text{d}} \sim \left( \frac{\Delta
x}{\lambda_\text{th}} \right)^2, \end{equation}
where
\begin{equation}
\label{eq:daf12thermal} \lambda_\text{th}=\frac{\hbar}{\sqrt{2mk_BT}} \end{equation}
is the thermal de Broglie wavelength of the object. Applied to a macroscopic object of mass $m = \unit[1]{g}$ at $T=\unit[300]{K}$ with a macroscopic separation $\Delta x =\unit[1]{cm}$, Eq.~\eqref{eq:daf12} gives $\tau_\text{r}/\tau_\text{d} \sim10^{40}$ \cite{Zurek:1986:uz}. Thus, for macroscopic objects described by such nonclassical superposition states, dissipation is typically negligible over the timescale relevant to the decoherence process.
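Plugging in the numbers makes the point vivid. The following sketch (using the SI values quoted above) evaluates the thermal de Broglie wavelength of Eq.~\eqref{eq:daf12thermal} and the ratio of Eq.~\eqref{eq:daf12}, confirming the order of magnitude $10^{40}$.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34        # J s
k_B = 1.380649e-23            # J / K

m, T, dx = 1e-3, 300.0, 1e-2  # 1 g, 300 K, separation of 1 cm (SI units)

lambda_th = hbar / np.sqrt(2 * m * k_B * T)
ratio = (dx / lambda_th)**2

print("thermal de Broglie wavelength:", lambda_th, "m")   # ~ 3.7e-23 m
print("tau_r / tau_d ~", ratio)                           # ~ 7e+40
\end{verbatim}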
Decoherence is a consequence of environmental entanglement, and as such is a purely quantum-mechanical effect. In the literature (especially in the area of quantum information processing), the term ``decoherence'' is often used more broadly to encompass any process, quantum or classical, that detrimentally affects the desired superposition states. An example would be classical noise processes arising, for instance, from experimental fluctuations and imperfections, such as variations in laser intensities in ion-trap experiments \cite{Schneider:1998:yz,Miquel:1997:zz}, bias fluctuations in superconducting qubits \cite{Martinis:2003:bz}, and inhomogeneities in the magnetic fields used in NMR quantum processing \cite{Vandersypen:2004:ra}. When averaged over many different realizations of such noise processes, the density matrix of the system then shows a decay of off-diagonal terms, representing a loss of interference similar to what would result from environmental entanglement. But it is important to realize that for an individual instance of the noise process applied to an individual system, the evolution is completely unitary---there is no ``washing-out'' of phase information, no loss of information from the system, and no creation of entanglement between the system and an environment. Hence the consequences of the noise process could in principle be undone through a local operation acting on the system alone (in fact, this is the basis of the spin-echo method for reversing collective spin dephasing in NMR experiments). Such a local reversal is not possible for decoherence resulting from environmental entanglement; ``undoing'' decoherence to restore an (unknown) pre-decoherence state of the system will require appropriate measurements on the environment to gather information that has leaked from the system \cite{Myatt:2000:yy,Zurek:2002:ii}.\footnote{Incidentally, this is reminiscent of the situation in a quantum eraser experiment \cite{Jaynes:1980:lm,Peres:1980:im,Scully:1982:yb,Scully:1991:yb}, where interference fringes can be extracted from the no-fringes data only once the outcomes of measurements on one particle are correlated with the outcomes of measurements on the other, entangled partner \cite{Englert:1999:aq,Ashby:2016:pp}.}
We note that the loss of phase coherence due to environmental entanglement has sometimes been \emph{simulated} (with the above caveats) by classical fluctuations introduced through the addition of time-dependent perturbations to the self-Hamiltonian of the system; see, for example, Refs.~\cite{Schneider:1998:yz,Schneider:1999:tt,Turchette:2000:aa,Myatt:2000:yy} and Sec.~\ref{sec:trapped} for applications of this approach to decoherence in ion traps.
\section{\label{sec:mastereqs}Master equations}
To calculate the time-evolved reduced density matrix of a decohering system, the route we have discussed so far consists of determining the time evolution $\ket{\psi}\ket{E_0} \longrightarrow \ket{\Psi_{SE}(t)}$ of the joint quantum state of the system and environment, and then obtaining the reduced density matrix of the system by tracing out the degrees of freedom of the environment in the composite density matrix $\op{\rho}_{SE}(t) = \ketbra{\Psi_{SE}(t)}{\Psi_{SE}(t)}= U(t) \op{\rho}_{SE}(0) U^\dagger(t)$, where $U(t) = \ensuremath{e}^{-\ensuremath{i} \op{H} t/\hbar}$ is the time-evolution operator for the composite system $SE$ evolving under the total Hamiltonian $\op{H}$. This is the procedure formally represented by Eq.~\eqref{eq:1slvjhvkjfkjvsj0}:
\begin{equation} \label{eq:dmm2}
\op{\rho}_S(t) = \text{Tr}_E \, \op{\rho}_{SE}(t) \equiv \text{Tr}_{E} \left\{ U(t) \op{\rho}_{SE}(0) U^\dagger(t) \right\}. \end{equation}
Alternatively, we can start from the Liouville--von Neumann equation for the composite density matrix $\op{\rho}_{SE}(t)$,
\begin{equation} \label{eq:dmm2xdfvb}
\frac{\partial}{\partial t} \op{\rho}_{SE}(t) = -\frac{\ensuremath{i}}{\hbar} \left[ \op{H}, \op{\rho}_{SE}(t)\right], \end{equation}
and then take the trace over the environment, which yields a differential equation for the evolution of the reduced density operator,
\begin{equation} \label{eq:dmsdlm2}
\frac{\partial}{\partial t} \op{\rho}_S(t) = -\frac{\ensuremath{i}}{\hbar} \text{Tr}_{E} \left\{ \left[ \op{H}, \op{\rho}_{SE}(t)\right]\right\}. \end{equation}
The evolution equations \eqref{eq:dmm2} and \eqref{eq:dmsdlm2} require calculating the exact dynamics of the system and environment, as the reduced density matrix at some time $t$ (or, equivalently, its differential change) depends on the full system--environment state and its entire past history. Typically, solving such equations presents an intractable problem both analytically and numerically, and therefore the exact description is generally not useful in practice. Furthermore, since we are usually not interested in the dynamics of the environment (unless we explicitly inquire about, say, the storage of information in the environment, as in the program of quantum Darwinism described in Sec.~\ref{sec:prol-inform-quant}), calculating the full composite system--environment state also provides unnecessary detail.
Master equations offer a shortcut. To provide a reduction in computational effort over Eqs.~\eqref{eq:dmm2} and \eqref{eq:dmsdlm2}, such master equations are typically based on certain assumptions and simplifications that lead to an approximate (but in practice sufficiently accurate) description of the decoherence process. The reduced density matrix is calculated directly from an (in general nonunitary) evolution equation that depends only on the reduced, not the global, density matrix. A \emph{generalized master equation} for the reduced density matrix is of the form
\begin{equation} \label{eq:dggamsdlm2}
\frac{\partial}{\partial t} \op{\rho}_S(t) = \mathcal{K} \left[ \op{\rho}_S(t'), t' < t \right ], \end{equation}
where $\mathcal{K}$ is a superoperator that takes the history of the reduced density matrix as input. Of particular interest to the description of decoherence processes are \emph{Markovian master equations},
\begin{equation} \label{eq:dggamsdlm58672}
\frac{\partial}{\partial t} \op{\rho}_S(t) = \mathcal{L}\op{\rho}_S(t). \end{equation}
Such master equations are local in time (the right-hand side of the equation does not depend on the history of the density matrix), and the superoperator $\mathcal{L}$ does not depend on time or the initial preparation.
Markovian master equations are widely used in the description of decoherence dynamics. They enable a relatively easy calculation of the reduced dynamics while still providing, in many cases of practical interest, a good approximation to the exact dynamics and the experimentally observed data. We will now discuss their derivation and underlying assumptions. We start by introducing the concept of dynamical maps (Sec.~\ref{sec:dynamical-maps}), followed by the discussion of the approach to Markovian master equations via the formalism of quantum dynamical semigroups and their generators (Sec.~\ref{sec:semigr-deriv-mark}). Separately, we describe the derivation of Markovian master equations from microscopic considerations (Sec.~\ref{sec:micr-deriv-mark}). We also comment on the formalism of quantum trajectories (Sec.~\ref{sec:quantum-trajectories}) and on the treatment of non-Markovian decoherence (Sec.~\ref{sec:non-mark-decoh}).
\subsection{\label{sec:dynamical-maps}Dynamical maps}
In what follows, we will make the usual assumption of an initially uncorrelated system--environment state, $\op{\rho}_{SE}(0)=\op{\rho}_{S}(0)\otimes \op{\rho}_{E}(0)$. Equation~\eqref{eq:dmm2} defines a state transformation
\begin{equation} \label{eq:dmmetf2} \op{\rho}_{S}(0) \mapsto \op{\rho}_S(t) = \mathcal{V}_t \op{\rho}_{S}(0), \end{equation}
with
\begin{equation}
\label{eq:dmssmetf2} \mathcal{V}_t \op{\rho}_{S}(0) \equiv \text{Tr}_{E} \left\{ U(t) \left[\op{\rho}_{S}(0)\otimes \op{\rho}_{E}(0) \right]U^\dagger(t) \right\}. \end{equation}
The transformation $\mathcal{V}_t$ given in Eqs.~\eqref{eq:dmmetf2} and \eqref{eq:dmssmetf2} is an instance of a \emph{dynamical map} \cite{Breuer:2002:oq,Alicki:2007:uu}. A dynamical map $\mathcal{V}_t : \op{\rho}(0) \mapsto \op{\rho}(t)$ is a transformation that takes an arbitrary initial quantum state $\op{\rho}(0)$ to a final quantum state $\op{\rho}(t)$ at some fixed time $t$ in accordance with the rules of quantum mechanics. Since the map defined in Eq.~\eqref{eq:dmssmetf2} was solely derived from the Schr\"odinger equation and the trace operation, it will automatically obey the correct quantum rules. In Sec.~\ref{sec:envir-monit-inform}, we already showed [see Eq.~\eqref{eq:dfjsb1}] that the right-hand side of Eq.~\eqref{eq:dmssmetf2} can be expressed in terms of Kraus operators $\op{W}_k(t)$ \cite{Kraus:1971:ii,Kraus:1983:ee}, and therefore the dynamical map $\mathcal{V}_t$ defined by Eq.~\eqref{eq:dmssmetf2} can be written as
\begin{equation}
\label{eq:asdmssmetf2} \mathcal{V}_t \op{\rho}_{S}(0) = \sum_k W_k(t) \op{\rho}_S(0) W^\dagger_k(t), \end{equation}
where the $\op{W}_k(t)$ are operators acting on the Hilbert space $\mathcal{H}_S$ of $S$ obeying the completeness constraint~\eqref{eq:19sirhgvksjvbkjsb}, i.e., $\sum_k W^\dagger_k(t) W_k(t) = I_S$. For a Hilbert space $\mathcal{H}_S$ of finite dimension $N$, the number of Kraus operators required to represent a dynamical map is bounded by $N^2$ \cite{Alicki:2007:uu}.\footnote{For a (separable) Hilbert space of infinite dimension, a countable set of Kraus operators is needed.} Since Eq.~\eqref{eq:dmssmetf2} represents the most general way in which the state of the open quantum system $S$ may change, it follows from Eq.~\eqref{eq:asdmssmetf2} that any dynamical map can be completely characterized in terms of a set of Kraus operators $\op{W}_k$ acting on $\mathcal{H}_S$ with $\sum_k W^\dagger_k W_k = I_S$ \cite{Kraus:1983:ee,Lindblad:1976:um,Breuer:2002:oq,Alicki:2007:uu}. Therefore, the Kraus operators play the role of \emph{generators} of dynamical maps.
Alternatively, and equivalently \cite{Kraus:1983:ee}, a dynamical map $\mathcal{V}_t$ may be characterized by requiring it to obey the following three mathematical conditions \cite{Breuer:2002:oq,Alicki:2007:uu}: complete positivity, convex linearity, and trace preservation. Let us describe these conditions in turn.
\begin{enumerate}
\item \emph{Complete positivity.} It is well known that a valid density operator must be positive semidefinite, i.e., its eigenvalues must be nonnegative, because these eigenvalues have the physical interpretation of probabilities. Therefore, a dynamical map $\mathcal{V}_t$ must be \emph{positive} in the sense that it takes positive semidefinite operators to positive semidefinite operators. But this is not sufficient. Instead, we must require \emph{complete positivity} \cite{Kraus:1971:ii,Gorini:1976:tt,Lindblad:1976:um,Breuer:2002:oq,Alicki:2001:aa,Alicki:2007:uu,Benatti:2005:ii}, a much stronger condition. It means that all extensions $\mathcal{V}_t \otimes \text{id}_n$ of $\mathcal{V}_t$ to a composite Hilbert space $\mathcal{H}_\text{ext} = \mathcal{H}_S \otimes \tilde{\mathcal{H}}_n \equiv \mathcal{H}_S \otimes \mathbb{C}^n$ must also be positive for every integer $n$, where $\text{id}_n$ denotes the identity map on the space $\tilde{\mathcal{H}}_n$ that leaves all operators in that space unchanged. That is, we require that $\mathcal{V}_t \otimes \text{id}_n$ maps any density operator $\op{\rho}$ on $\mathcal{H}_\text{ext}$ onto another valid density operator.
The physical motivation behind this requirement is as follows. Imagine an ancillary system $A$ (represented by the Hilbert space $\tilde{\mathcal{H}}_n$), which is assumed to have no intrinsic dynamics (i.e., the self-Hamiltonian is $H_A=0$), and which is placed at a large distance from the system $S$ of interest such that it does not interact with $S$. Then the dynamical map for the composite system $SA$ will be given by $\mathcal{V}_t \otimes \text{id}_n$, which should again be positive for any state of the composite system $SA$; this is the condition of complete positivity. If this condition is not met, then it can be shown (see, for example, Refs.~\cite{Horodecki:1995:oo,Peres:1996:oo,Alicki:2001:aa,Benatti:2005:ii}) that, if $S$ and $A$ start out entangled,\footnote{We can imagine that the quantum correlations between $S$ and $A$ arose from some past interaction prior to the initial time point when the map $\mathcal{V}_t \otimes \text{id}_n$ is applied.} the linear map $\mathcal{V}_t \otimes \text{id}_n$ may give rise to negative probabilities.\footnote{A simple example is the 2D transposition map $\mathcal{T} : \left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right) \mapsto \left(\begin{smallmatrix}a&c\\b&d\end{smallmatrix}\right)$, which is linear and positive. However, when its extension $\mathcal{T}\otimes\text{id}_2$ is applied to the density matrix $\op{\rho}=\ketbra{\Psi^+}{\Psi^+}$ representing the maximally entangled bipartite state $\ket{\Psi^+}=2^{-1/2}\left(\ket{0}\ket{0}+\ket{1}\ket{1}\right)$, the resulting matrix is no longer positive.} Thus, complete positivity ensures that $\mathcal{V}_t$ generates physically consistent dynamics even when the system $S$ initially has correlations with another system.
A related motivation for complete positivity that is especially pertinent to open quantum systems can be given by considering two identical, noninteracting $N$-level systems $S_1$ and $S_2$ immersed in the same environment \cite{Benatti:2002:oo,Benatti:2005:ii}. Suppose that $\mathcal{V}_t$ generates the reduced evolution of $S_1$ so that, to first approximation, $\mathcal{V}_t\otimes \mathcal{V}_t$ generates the evolution of the joint system $S_1S_2$. Then one can show that for entangled states of $S_1S_2$, $\mathcal{V}_t\otimes \mathcal{V}_t$ preserves positivity if and only if $\mathcal{V}_t$ is completely positive \cite{Benatti:2002:oo}. See Ref.~\cite{Benatti:2005:ii} for a detailed discussion of the requirement of complete positivity in the context of open quantum systems and quantum master equations.
\item \emph{Convex linearity.} Consider a convex-linear combination of density operators, $\op{\rho} = \lambda \op{\rho}_1 + (1-\lambda) \op{\rho}_2$ with $0 < \lambda < 1$. This represents an ignorance-interpretable mixture of the two ensembles $\op{\rho}_1$ and $\op{\rho}_2$, with (classical) probability weights $\lambda$ and $1-\lambda$. Thus, we require that it should be possible to represent the time-evolved mixture again as an ignorance-interpretable mixture of the two ensembles evolved \emph{individually} under the action of the dynamical map:
\begin{equation}
\label{eq:cl} \mathcal{V}_t \op{\rho} = \mathcal{V}_t \left\{ \lambda \op{\rho}_1 + (1-\lambda) \op{\rho}_2\right\} = \lambda \mathcal{V}_t\op{\rho}_1 + (1-\lambda) \mathcal{V}_t\op{\rho}_2. \end{equation}
\item \emph{Trace preservation.} We demand that the time-evolved density matrix remains a trace-one operator:
\begin{equation}
\label{eq:t78sccl} \text{Tr} \left\{ \mathcal{V}_t \op{\rho} \right\} =1. \end{equation}
\end{enumerate}
We emphasize again that this characterization of dynamical maps in terms of completely positive, convex-linear, trace-preserving maps is equivalent to the characterization in terms of maps generated by a complete set of Kraus operators [see Eq.~\eqref{eq:asdmssmetf2}] \cite{Kraus:1983:ee}.
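The transposition example mentioned in the footnote to the complete-positivity condition can be verified in a few lines. The sketch below (added here for illustration) applies the extension $\mathcal{T}\otimes\text{id}_2$ of the positive, but not completely positive, transposition map to the maximally entangled state $\ket{\Psi^+}$ and finds a negative eigenvalue, $-1/2$, in the resulting matrix.
\begin{verbatim}
import numpy as np

# Maximally entangled state |Psi+> = (|00> + |11>)/sqrt(2).
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

# Apply the extension T (x) id_2 of the (positive but not completely positive)
# transposition map: transpose only the indices of the first qubit.
rho4 = rho.reshape(2, 2, 2, 2)                       # indices [a, b, a', b']
rho_pt = rho4.transpose(2, 1, 0, 3).reshape(4, 4)    # swap a <-> a'

print(np.linalg.eigvalsh(rho))      # [0, 0, 0, 1]: a valid density matrix
print(np.linalg.eigvalsh(rho_pt))   # contains -0.5: no longer positive semidefinite
\end{verbatim}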
\subsection{Markovian master equations}
In Eq.~\eqref{eq:dggamsdlm58672}, we introduced the notion of a Markovian master equation, given by $\partial_t \op{\rho}_S(t) = \mathcal{L}\op{\rho}_S(t)$ with a time-independent superoperator $\mathcal{L}$. There exist several approaches to deriving the dynamical maps representing Markovian master equations: an axiomatic approach based on the theory of quantum dynamical semigroups (see Sec.~\ref{sec:semigr-deriv-mark}) \cite{Lindblad:1976:um,Gorini:1976:tt,Gorini:1978:uf,Davies:1974:tw,Kossakowski:1972:tf,Alicki:2007:uu}; a microscopic approach proceeding from a consideration of the relevant Hamiltonians and the evolution generated by them (see Sec.~\ref{sec:micr-deriv-mark}); and a monitoring approach (see Refs.~\cite{Hornberger:2006:tc,Hornberger:2008:ii}).
The common key assumption underlying these approaches is known as the \emph{Markov approximation}. Here one considers two timescales: (i) the typical relaxation time $\tau_r$ of the open quantum system, describing the timescale on which the environment affects the evolution of the system; and (ii) the typical coherence time $\tau_c$ of the environment, representing the characteristic timescale for the decay of the correlations between the degrees of freedom of the environment that are generated by the interaction with the system. In the Markov approximation, one assumes that $\tau_r \gg \tau_c$, i.e., the environmental self-correlations are assumed to decay rapidly compared to the timescale on which the open system evolves. Then, on this coarse-grained timescale defined by $\tau_r$, the environment may be considered memoryless, meaning that it does not appreciably retain information about its interaction with the system over times longer than the environmental self-correlation time.
\subsubsection{\label{sec:semigr-deriv-mark}Semigroup approach to Markovian master equations and the Lindblad form}
In terms of a family $\{ \mathcal{V}_t \mid t \ge 0 \}$ of dynamical maps parametrized by $t$, the Markov property can be rigorously stated in terms of the \emph{semigroup condition} \cite{Lindblad:1976:um,Gorini:1976:tt,Gorini:1978:uf,Davies:1974:tw,Kossakowski:1972:tf,Alicki:2007:uu},
\begin{equation} \label{eq:d4488m58672} \mathcal{V}_{t_2}\mathcal{V}_{t_1}=\mathcal{V}_{t_1+t_2}. \end{equation}
If this relation is fulfilled, then $\{ \mathcal{V}_t \mid t \ge 0 \}$ is said to form a \emph{quantum dynamical semigroup} \cite{Alicki:2007:uu}. In this case, it can be shown that (given mild assumptions) there exists a superoperator $\mathcal{L}$ such that \cite{Alicki:2007:uu}
\begin{equation}\label{eq:767n8m58672} \mathcal{V}_t = \exp (\mathcal{L}t), \end{equation}
which implies the quantum Markovian master equation~\eqref{eq:dggamsdlm58672}, $\partial_t \op{\rho}_S(t) = \mathcal{L}\op{\rho}_S(t)$. The superoperator $\mathcal{L}$ is a linear map known as the \emph{generator of the dynamical semigroup}; it is also often referred to as the \emph{Liouville superoperator}.
Gorini, Kossakowski, and Sudarshan \cite{Gorini:1976:tt} first showed that for a finite-dimensional Hilbert space $\mathcal{H}_S$ of the system, the most general form of the generator $\mathcal{L}$ is \cite{Breuer:2002:oq,Alicki:2007:uu}
\begin{equation}
\label{eq:sdfkhwr69} \mathcal{L}\op{\rho}_S = \underbrace{-\frac{\ensuremath{i}}{\hbar} \bigl[
\op{H}'_S, \op{\rho}_S \bigr]}_{\text{unitary part}} + \underbrace{\sum_{\alpha,\beta=1}^{N^2-1} \gamma_{\alpha\beta} \left\{ \op{F}_\alpha \op{\rho}_S \op{F}^\dagger_\beta - \frac{1}{2} \op{F}^\dagger_\beta\op{F}_\alpha \op{\rho}_S - \frac{1}{2} \op{\rho}_S\op{F}^\dagger_\beta\op{F}_\alpha \right\}}_{\text{nonunitary part (``dissipator'')}}. \end{equation}
Here, $N=\dim(\mathcal{H}_S)$, and the $\op{F}_\alpha$ are a set of $N^2$ linear operators forming an orthonormal\footnote{Orthonormality is here defined in terms of the Hilbert--Schmidt scalar product $(\op{F}_\alpha, \op{F}_\beta) = \text{tr}_S (\op{F}_\alpha^\dagger \op{F}_\beta)$.} basis in the Liouville space of linear and bounded operators in $\mathcal{H}_S$, with $\op{F}_{N^2}$ chosen to be proportional to the identity (see Refs.~\cite{Breuer:2002:oq,Alicki:2007:uu} for mathematical details). The coefficients $\gamma_{\alpha\beta}$ define a matrix, and one can show that this matrix is positive, i.e., that all its eigenvalues $\kappa_\mu$ are nonnegative. Conversely, if a master equation can be brought into the form~\eqref{eq:sdfkhwr69} with a positive coefficient matrix, it will represent the generator of a quantum dynamical semigroup and hence ensure complete positivity. Equation~\eqref{eq:sdfkhwr69} is known as the \emph{first standard form} of the generator.
The first term on the right-hand side of Eq.~\eqref{eq:sdfkhwr69} describes the unitary evolution of the system under a Hamiltonian $\op{H}'_S$. This Hamiltonian will, in general, differ from the self-Hamiltonian $\op{H}_S$ of the system because of the presence of the environment, which renormalizes the energy levels of the system. Accordingly, $\op{H}'_S$ is often referred to as the \emph{environment-renormalized} (or \emph{Lamb-shifted}) Hamiltonian (one can show that it commutes with $\op{H}_S$). The second term on the right-hand side of Eq.~\eqref{eq:sdfkhwr69} reflects the nonunitary influence of the environment, which changes the coherence of the system and may also lead to a loss of energy from the system (i.e., dissipation). It is sometimes referred to as the \emph{dissipator} \cite{Breuer:2002:oq} (but note that it may generate decoherence without dissipation, so the terminology is not always apt). In the context of applications to decoherence models, the time-independent coefficients $\gamma_{\alpha\beta}$ encapsulate all relevant information about the physical parameters of the decoherence (and possibly dissipation) processes. If the $\op{F}_\alpha$ are chosen to be dimensionless, then the $\gamma_{\alpha\beta}$ have units of frequency (i.e., inverse time).
Because the coefficient matrix $\gamma_{\alpha\beta}$ is positive, we may diagonalize it and rewrite Eq.~\eqref{eq:sdfkhwr69} as
\begin{equation}\label{eq:lindblad} \mathcal{L}\op{\rho}_S = -\frac{\ensuremath{i}}{\hbar} \bigl[
\op{H}'_S, \op{\rho}_S \bigr] + \sum_{\mu=1}^{N^2-1} \kappa_\mu \left\{ \op{L}_\mu \op{\rho}_S \op{L}^\dagger_\mu - \frac{1}{2} \op{L}^\dagger_\mu\op{L}_\mu \op{\rho}_S - \frac{1}{2} \op{\rho}_S\op{L}^\dagger_\mu\op{L}_\mu \right\}, \end{equation}
where the \emph{Lindblad operators} $\op{L}_\mu$ are linear combinations of the operators $\op{F}_\alpha$. Lindblad \cite{Lindblad:1976:um} (see also Ref.~\cite{Gorini:1978:uf}) showed that Eq.~\eqref{eq:lindblad} is the most general form for a bounded generator in any separable Hilbert space for a countable set of indices $\{\mu\}$.\footnote{The assumption of boundedness does not hold in many physical applications: both the Hamiltonian $\op{H}_S$ of the system and the Lindblad operators $\op{L}_\mu$ will in general be unbounded. It turns out, however, that one can define quantum dynamical semigroups using expressions of the Lindblad form~\eqref{eq:lindblad} even for unbounded Lindblad operators \cite{Davies:1976:uu,Holevo:1996:ll}, and conversely one finds that all known instances of generators of quantum dynamical semigroups are of Lindblad form (or can be readily adapted to it) \cite{Breuer:2002:oq,Alicki:2007:uu}.} Equation~\eqref{eq:lindblad} is known as the \emph{second standard form}, the \emph{diagonal standard form}, or the \emph{Lindblad form} of the generator \cite{Breuer:2002:oq,Alicki:2007:uu}. When the generator~\eqref{eq:lindblad} is inserted into Eq.~\eqref{eq:dggamsdlm58672}, the resulting master equation is referred to as the \emph{Gorini--Kossakowski--Sudarshan--Lindblad master equation}, or \emph{Lindblad master equation} for short. If one chooses the Lindblad operators to be dimensionless, then the quantities $\kappa_\mu$ (i.e., the eigenvalues of the coefficient matrix) have units of inverse time and may be interpreted directly as decoherence rates. We note that a given Lindblad generator $\mathcal{L}$ does not uniquely determine the Lindblad operators or the Hamiltonian $\op{H}'_S$ \cite{Breuer:2002:oq}.
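For readers who want to experiment with the Lindblad form, the following minimal sketch (with an illustrative Hamiltonian, illustrative Lindblad operators, and illustrative rates) implements the right-hand side of Eq.~\eqref{eq:lindblad} for a qubit and checks that the generator is traceless, so that the trace of the density matrix is preserved under the evolution.
\begin{verbatim}
import numpy as np

def lindblad_rhs(rho, H, lindblad_ops, rates, hbar=1.0):
    """Right-hand side of the diagonal (Lindblad) form of the generator."""
    out = (-1j / hbar) * (H @ rho - rho @ H)
    for kappa, L in zip(rates, lindblad_ops):
        LdL = L.conj().T @ L
        out += kappa * (L @ rho @ L.conj().T - 0.5 * LdL @ rho - 0.5 * rho @ LdL)
    return out

# Illustrative qubit example: dephasing plus decay.
sigma_z = np.diag([1.0, -1.0])
sigma_minus = np.array([[0.0, 1.0], [0.0, 0.0]])
H = 0.5 * sigma_z
rho = np.array([[0.6, 0.2 + 0.1j], [0.2 - 0.1j, 0.4]])

drho = lindblad_rhs(rho, H, [sigma_z, sigma_minus], [0.3, 0.5])
print(abs(np.trace(drho)) < 1e-12)   # True: generator is traceless, Tr(rho) is conserved
\end{verbatim}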
In applications to decoherence models, the Lindblad operators $\op{L}_\mu$ are constructed from linear combinations of the system operators $\op{S}_\alpha$ appearing in the diagonal decomposition of the interaction Hamiltonian, $\op{H}_\text{int} = \sum_\alpha \op{S}_\alpha \otimes \op{E}_\alpha$. When the system operators $\op{S}_\alpha$ represent physical observables monitored by the environment, they will be Hermitian and therefore the Lindblad operators, being linear combinations of the $\op{S}_\alpha$, will be Hermitian as well. In this case, the Lindblad generator~\eqref{eq:lindblad} may be further simplified by writing it in double-commutator form, resulting in the master equation
\begin{equation}\label{eq:lindbladc} \frac{\partial}{\partial t} \op{\rho}_S(t) = - \frac{\ensuremath{i}}{\hbar} \left[ \op{H}'_S, \op{\rho}_S(t) \right] - \frac{1}{2} \sum_{\mu=1}^{N^2-1} \kappa_\mu \left[ \op{L}_\mu, \left[ \op{L}_\mu, \op{\rho}_S(t) \right] \right]. \end{equation}
Note that the second, nonunitary term on the right-hand side vanishes if
\begin{equation}
\label{eq:7fskjhytw10}
\left[ \op{L}_\mu, \op{\rho}_S(t) \right] = 0 \qquad \text{for all $\mu,t$}, \end{equation}
in which case $\op{\rho}_S(t)$ will evolve unitarily. Incidentally, this leads to a connection with the concept of pointer states. Recall that the Lindblad operators $\op{L}_\mu$ are linear combinations of the operators $\op{S}_\alpha$ in the diagonal decomposition of the interaction Hamiltonian. Thus Eq.~\eqref{eq:7fskjhytw10} implies (disregarding very specific linear combinations of $\op{S}_\alpha$) that we must also have $\left[ \op{S}_\alpha, \op{\rho}_S(t) \right] = 0$ for all $\alpha, t$. This, however, is nothing but the pointer-state criterion of Eq.~\eqref{eq:OIbvsrhjkbv9}, which says that quantum states that are simultaneous eigenstates of all operators $\op{S}_\alpha$ will not decohere, and therefore will evolve unitarily.
\subsubsection{\label{sec:two-simple-examples}Two simple examples of Lindblad master equations}
A particularly important and simple case is that of a single system observable monitored by the environment, corresponding to $\op{H}_\text{int} = \op{S} \otimes \op{E}$. Let us mention two such basic examples.
\paragraph{Pure decoherence in the spin--boson model} Consider a qubit whose $\op{\sigma}_z$ spin coordinate is coupled to an environment of harmonic oscillators (this is the spin--boson model discussed in Sec.~\ref{sec:spin-boson-models}). In the absence of intrinsic tunneling dynamics, the qubit evolution can be described by a Lindblad master equation with a single Lindblad operator $\op{L}=\op{\sigma}_z$,
\begin{equation} \label{eq:vjp32gbntrkh22max} \frac{\partial}{\partial t} \op{\rho}_\mathcal{S}(t) = -\frac{\ensuremath{i}}{\hbar} \bigl[
\op{H}_\mathcal{S}, \op{\rho}_\mathcal{S}(t) \bigr] - D \left[ \op{\sigma}_z, \left[ \op{\sigma}_z,
\op{\rho}_\mathcal{S}(t) \right]\right]. \end{equation}
This equation may be derived from the relevant Hamiltonians using a microscopic approach; see Eq.~\eqref{eq:vjp32gbntrkh22} and Ref.~\cite{Schlosshauer:2007:un} for details. It describes the environmental monitoring and resulting decoherence in the $\{\ket{0},\ket{1}\}$ eigenbasis of $\op{\sigma}_z$, with $D$ playing the role of a decoherence rate. To see this explicitly, we write the Lindblad double commutator on the right-hand side of Eq.~\eqref{eq:vjp32gbntrkh22max} in matrix form in the $\{\ket{0},\ket{1}\}$ basis,
\begin{equation}
\label{eq:fsigj98gre42}
D \left[ \op{\sigma}_z, \left[ \op{\sigma}_z,
\op{\rho}_\mathcal{S}(t) \right]\right] = 2D \left(
\op{\rho}_\mathcal{S}(t) - \op{\sigma}_z
\op{\rho}_\mathcal{S}(t) \op{\sigma}_z \right) \dot{=} \,\, 4D \left( \begin{array}{cc} 0 & \rho_\mathcal{S}^{(01)}(t) \\
\rho_\mathcal{S}^{(10)}(t) & 0\end{array} \right), \end{equation}
where $\rho_\mathcal{S}^{(ij)}(t)$ denotes the matrix element $\bra{i} \op{\rho}_\mathcal{S}(t) \ket{j}$, $i \in \{0,1\}$. It then follows from Eq.~\eqref{eq:vjp32gbntrkh22max} that the evolution of the off-diagonal matrix elements of the reduced density matrix (expressed in the eigenbasis of $\op{\sigma}_z$) governed by the $D$ term alone is
\begin{align}
\label{eq:44}
\frac{\partial}{\partial t} \rho_\mathcal{S}^{(01)}(t) = -
4D\rho_\mathcal{S}^{(01)}(t), \qquad
\frac{\partial}{\partial t} \rho_\mathcal{S}^{(10)}(t) = - 4D\rho_\mathcal{S}^{(10)}(t). \end{align}
This shows that the off-diagonal elements decay exponentially at the rate $4D$, while the diagonal elements (the occupation probabilities) are not affected. Thus Eq.~\eqref{eq:fsigj98gre42} generates pure decoherence in the $\{\ket{0},\ket{1}\}$ basis without dissipation.
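This behavior is easy to confirm numerically. The sketch below (with illustrative values of $D$ and of the time step) integrates the dephasing term of Eq.~\eqref{eq:vjp32gbntrkh22max} with a simple Euler scheme, starting from the superposition state $\ket{+}$, and compares the off-diagonal element with the analytic decay $\ensuremath{e}^{-4Dt}$.
\begin{verbatim}
import numpy as np

sigma_z = np.diag([1.0, -1.0])
D, dt, steps = 0.25, 1e-4, 20000                 # total time t = 2 (illustrative values)

rho = 0.5 * np.ones((2, 2), dtype=complex)       # initial state |+><+|
rho01_initial = rho[0, 1]

for _ in range(steps):
    inner = sigma_z @ rho - rho @ sigma_z
    double_comm = sigma_z @ inner - inner @ sigma_z
    rho = rho - dt * D * double_comm             # Euler step for the dephasing term only

t = dt * steps
print(abs(rho[0, 1]))                            # ~ 0.5 * exp(-4*D*t)
print(abs(rho01_initial) * np.exp(-4 * D * t))   # analytic decay for comparison
\end{verbatim}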
\paragraph{Spatial decoherence} As another example, consider a free particle in one dimension, subject to environmental monitoring of its position. The simplest way in which we may represent this environmental interaction is in terms of a single Lindblad operator $\op{L} \propto \op{x}$. With $\op{H}'_S = \op{H}_S = \op{p}^2/2m$, Eq.~\eqref{eq:lindbladc} reads
\begin{equation}\label{eq:lifsfdndbladc} \frac{\partial}{\partial t} \op{\rho}_S(t) = -\frac{\ensuremath{i}}{2m\hbar}\left[p^2, \op{\rho}_S(t) \right] - \Lambda \left[ x, \left[ x, \op{\rho}_S(t) \right]\right], \end{equation}
where the coefficient $\Lambda$ has dimensions of $\text{(time)}^{-1} \times \text{(length)}^{-2}$. Writing this equation in the position representation, one obtains
\begin{equation}\label{eq:lifsshvgvvvxayhcgiefdndbladc}
\frac{\partial \rho_S(x,x',t)}{\partial t} = - \frac{\ensuremath{i}}{2m\hbar} \left(\frac{ \partial^2}{\partial x'^2}- \frac{ \partial^2}{\partial x^2} \right) \rho_S(x,x',t) - \Lambda
\left(x-x'\right)^2 \rho_S(x,x',t), \end{equation}
which is the classic equation of motion for spatial decoherence due to environmental scattering first derived in Ref.~\cite{Joos:1985:iu} (see Sec.~\ref{sec:collisionaldecoherence}). The second term on the right-hand side of Eq.~\eqref{eq:lifsshvgvvvxayhcgiefdndbladc} generates exponential decay of spatial coherences (represented by the off-diagonal elements $x\not= x'$) at a rate given by $ \Lambda \left(x-x'\right)^2$,
\begin{equation} \rho_S(x,x',t) =\rho_S(x,x',0) \exp\left[-\Lambda (x-x')^2 t\right], \end{equation}
where we have neglected the intrinsic dynamics. We see that the localization rate depends on the square of the separation $\abs{x-x'}$. We have already encountered this dependence in Eq.~\eqref{eq:daf12}, and we will find it again below in the context of an explicit scattering model [see Eqs.~\eqref{eq:scwer2}
and \eqref{eq:scwer6565}] and the Caldeira--Leggett master equation for quantum Brownian motion [see Eqs.~\eqref{eq:fsdojgdj1} and \eqref{eq:odijsvuhfsw21}]. In fact, if we add a harmonic potential to the system, Eq.~\eqref{eq:lifsfdndbladc} represents the high-temperature limit of the master equation for quantum Brownian motion with the dissipative term neglected, as given in Eq.~\eqref {eq:vfnbcclasclnd9s27} of Sec.~\ref{sec:cald-legg-mast} (see also Sec.~5.2.4 of Ref.~\cite{Schlosshauer:2007:un} for details). When dissipative effects are included, the appropriate Lindblad operator is a linear combination of the position and momentum operators of the system [see Eq.~\eqref{eq:dkvnkl1}].
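As a quick consistency check (a toy sketch with assumed grid parameters, not part of the original text), one can verify on a discretized position grid that the double commutator $-\Lambda\left[x,\left[x,\op{\rho}_S\right]\right]$ indeed has the position-representation matrix elements $-\Lambda(x-x')^2\rho_S(x,x')$ used in Eq.~\eqref{eq:lifsshvgvvvxayhcgiefdndbladc}:
\begin{verbatim}
import numpy as np

# Toy check on a discrete position grid (assumed parameters): the matrix elements
# of -Lambda [x, [x, rho]] equal -Lambda (x - x')^2 rho(x, x').
N = 64
x = np.linspace(-5.0, 5.0, N)
X = np.diag(x)

rng = np.random.default_rng(0)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = A @ A.conj().T
rho /= np.trace(rho).real                 # arbitrary valid density matrix

Lam = 0.7                                 # assumed value of Lambda
lhs = -Lam * (X @ (X @ rho - rho @ X) - (X @ rho - rho @ X) @ X)
rhs = -Lam * (x[:, None] - x[None, :])**2 * rho
print(np.allclose(lhs, rhs))              # prints True
\end{verbatim}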
\subsubsection{\label{sec:micr-deriv-mark}Microscopic derivation of Markovian master equations}
In the previous section, we obtained the most general form of a Markovian (and time-homogeneous) quantum master equation from the formalism of quantum dynamical semigroups and their generators. This approach is mathematically elegant, leads to a very general result, and automatically ensures the complete positivity of the evolution. Still, from a physical point of view, it would also be desirable to directly derive Markovian master equations from the underlying Hamiltonian description of the system and its environment. To do so, one proceeds from the total Hamiltonian, $\op{H}=\op{H}_S+\op{H}_E+\op{H}_\text{int}$, and an initially uncorrelated system--environment state, $\op{\rho}_{SE}(0)=\op{\rho}_{S}(0)\otimes \op{\rho}_{E}(0)$, and then imposes the following two main assumptions.
The first assumption is the \emph{Born approximation}, which takes the coupling between system and environment to be sufficiently weak and the environment to be sufficiently large such that, to second order in the interaction Hamiltonian $\op{H}_\text{int}$, changes of the density operator of the environment may be neglected and the composite system--environment state remains in an approximate product state over time. That is, one assumes that $\op{\rho}_{SE}(t) \approx \op{\rho}_S(t) \otimes \op{\rho}_E$, where $\op{\rho}_E$ is the stationary state of $E$ (i.e., $\bigl[ \op{H}_E, \op{\rho}_E \bigr] = 0$). The second assumption is the Markov approximation mentioned above, i.e., the assumption of a memoryless environment on the coarse-grained relaxation timescale $\tau_r$ defined by the evolution of the open system. Comparisons of the predictions of master equations derived from the Born and Markov approximations with experimental data indicate that these approximations are reasonable in many physical situations (but see Sec.~\ref{sec:non-mark-decoh} for comments on exceptions and non-Markovian models).
We will now give a brief sketch of the derivation (see, e.g., Refs.~\cite{Breuer:2002:oq,Schlosshauer:2007:un} for details). We start from the Liouville--von Neumann equation for the total density operator $\op{\rho}^{(I)}(t)$ in the interaction picture,
\begin{equation}
\label{eq:pexp}
\frac{\partial}{\partial t} \op{\rho}^{(I)}(t) = \frac{1}{\ensuremath{i} \hbar} \bigl[
\op{H}_\text{int}(t), \op{\rho}^{(I)}(t) \bigr], \end{equation}
where $\op{H}_\text{int}(t)$ is the interaction Hamiltonian in the interaction picture. (From here on, operators bearing explicit time arguments shall be understood as interaction-picture operators, while for interaction-picture density operators we use the superscript ``$(I)$'' to distinguish them from time-dependent Schr\"odinger-picture density operators.) We formally integrate Eq.~\eqref{eq:pexp}, insert the resulting expression for $\op{\rho}^{(I)}(t)$ into the right side of Eq.~\eqref{eq:pexp}, and trace over the environment. This gives
\begin{equation} \label{eq:pexps2} \frac{\partial}{\partial t}
\op{\rho}^{(I)}_S(t) = \frac{1}{\ensuremath{i} \hbar}\text{Tr}_E \bigl[ \op{H}_\text{int}(t), \op{\rho}(0) \bigr] + \left(\frac{1}{\ensuremath{i} \hbar}\right)^2 \int_0^t \ensuremath{\mathrm{d}} t'
\, \text{Tr}_E \bigl[ \op{H}_\text{int}(t), \bigl[
\op{H}_\text{int}(t'), \op{\rho}^{(I)}(t') \bigr]
\bigr]. \end{equation}
Without loss of generality, the first commutator can be made to vanish by suitably redefining the free Hamiltonian $\op{H}_0=\op{H}_S+\op{H}_E$ and the interaction Hamiltonian $\op{H}_\text{int}$. Imposing the Born approximation allows us to replace the total density operator $\op{\rho}^{(I)}(t')$ by $\op{\rho}^{(I)}_S(t') \otimes \op{\rho}_E$,
\begin{equation} \label{eq:pexaa2} \frac{\partial}{\partial t} \op{\rho}^{(I)}_S(t) = -\frac{1}{\hbar^2} \int_0^t \ensuremath{\mathrm{d}} t' \, \text{Tr}_E \bigl[ \op{H}_\text{int}(t), \bigl[
\op{H}_\text{int}(t'), \op{\rho}^{(I)}_S(t') \otimes
\op{\rho}_E \bigr] \bigr]. \end{equation}
Now the master equation is expressed entirely in terms of the reduced state of the system and the initial state of the environment. Inserting into Eq.~\eqref{eq:pexaa2} the diagonal decomposition of the interaction Hamiltonian in the interaction picture, $\op{H}_\text{int}(t) =\sum_\alpha \op{S}_\alpha(t) \otimes \op{E}_\alpha(t)$, and writing out the double commutator, one finds
\begin{multline} \label{eq:pexa51} \frac{\partial}{\partial t} \op{\rho}^{(I)}_S(t) = - \frac{1}{\hbar^2}\int_0^t \ensuremath{\mathrm{d}} t' \, \sum_{\alpha\beta} \, \left\{ \mathcal{C}_{\alpha\beta}(t-t') \left[
\op{S}_\alpha(t) \op{S}_\beta(t') \op{\rho}^{(I)}_S(t')
- \op{S}_\beta(t') \op{\rho}^{(I)}_S(t') \op{S}_\alpha(t)
\right] \right. \\ \left. + \, \mathcal{C}_{\beta\alpha}(t'-t) \left[
\op{\rho}^{(I)}_S(t')\op{S}_\beta(t') \op{S}_\alpha(t)
- \op{S}_\alpha(t)\op{\rho}^{(I)}_S(t')\op{S}_\beta(t')
\right] \right\}, \end{multline}
with
\begin{equation}
\label{eq:xbab20} \mathcal{C}_{\alpha\beta}(t-t') = \text{Tr}_E \left\{ \op{E}_\alpha(t-t')
\op{E}_\beta \op{\rho}_E\right\} = \left\langle \op{E}_\alpha(t-t') \op{E}_\beta
\right\rangle_{\op{\rho}_E}. \end{equation}
The quantities $\mathcal{C}_{\alpha\beta}(t-t')$ are referred to as \emph{environment self-correlation functions}. This term is motivated by the following observation. Each operator $\op{E}_\alpha$ (provided it is Hermitian) may be thought of as an observable measured on the environment through its coupling to the system. Equation~\eqref{eq:xbab20} then quantifies how much the result of such a measurement is correlated with the same measurement carried out at a different instant, a time $\tau=t-t'$ apart. Thus, it measures how much information the environment retains over time about its interaction with the system.
Imposing the Markov approximation corresponds to assuming that the environment self-correlation functions $\mathcal{C}_{\alpha\beta}(\tau)$ are sharply peaked around $\tau = 0$ and decay rapidly (on a timescale $\tau_c$) compared to the system relaxation timescale $\tau_r$ that measures the change of $\op{\rho}^{(I)}_S(t)$ due to the interaction with the environment. This leads to two immediate consequences. First, because the change of $\op{\rho}^{(I)}_S(t)$ is negligibly small over the time interval for which the $\mathcal{C}_{\alpha\beta}(\tau)$ have appreciable magnitude, we may replace the retarded-time density operator $\op{\rho}^{(I)}_S(t')$ on the right-hand side of Eq.~\eqref{eq:pexa51} by the current-time density operator $\op{\rho}^{(I)}_S(t)$. The resulting master equation is time-local, but the dependence of the integral limit on $t$ in Eq.~\eqref{eq:pexa51} means that the equation is not yet Markovian. Second, we can replace the lower limit of the integral in Eq.~\eqref{eq:pexa51} by $-\infty$, because the functions $\mathcal{C}_{\alpha\beta}(t-t')$ vanish for $t' \ll t$. After making the substitution $t' \longrightarrow \tau = t-t'$, we arrive at the Markovian master equation
\begin{multline} \label{eq:pexarr} \frac{\partial}{\partial t} \op{\rho}^{(I)}_S(t) = - \frac{1}{\hbar^2}\int_0^\infty \ensuremath{\mathrm{d}} \tau \, \sum_{\alpha\beta} \, \left\{ \mathcal{C}_{\alpha\beta}(\tau) \left[
\op{S}_\alpha(t) \op{S}_\beta(t-\tau) \op{\rho}^{(I)}_S(t)
- \op{S}_\beta(t-\tau) \op{\rho}^{(I)}_S(t) \op{S}_\alpha(t)
\right]\right. \\ \left. + \mathcal{C}_{\beta\alpha}(-\tau) \left[
\op{\rho}^{(I)}_S(t)\op{S}_\beta(t-\tau) \op{S}_\alpha(t)
- \op{S}_\alpha(t)\op{\rho}^{(I)}_S(t)\op{S}_\beta(t-\tau)
\right] \right\}. \end{multline}
We note again that imposing the Markov assumption means that the evolution is considered only on a coarse-grained timescale, since we are not resolving changes on the order of the environment self-correlation timescale $\tau_c$.
Finally, transforming Eq.~\eqref{eq:pexarr} back to the Schr\"odinger picture yields the \emph{Born--Markov master equation} (sometimes also called \emph{Redfield equation} \cite{Redfield:1957:im,Blum:1981:qq}\footnote{In the literature, the term ``Redfield equation'' is occasionally (see, e.g., Refs.~\cite{Breuer:2002:oq,Hornberger:2009:aq}) associated with the time-local but pre-Markovian master equation obtained just prior to Eq.~\eqref{eq:pexarr}, i.e., before the integration limit is extended to infinity. In his original paper \cite{Redfield:1957:im}, Redfield did, however, include the step of extending the integration limit in this way (see his Eq.~2.14).})
\begin{equation} \label{eq:born-markov-master} \frac{\partial}{\partial t} \op{\rho}_S(t) = -\frac{\ensuremath{i}}{\hbar} \left[ \op{H}_S, \op{\rho}_S(t) \right] - \frac{1}{\hbar^2}\sum_{\alpha} \left\{ \left[ \op{S}_\alpha, B_\alpha \op{\rho}_S(t) \right] + \left[ \op{\rho}_S(t) C_\alpha, \op{S}_\alpha \right] \right\}, \end{equation}
where the system operators $B_\alpha$ and $C_\alpha$ are defined as
\begin{subequations} \label{eq:hvdwg643r5gsxkjcvbsvnx20} \begin{align} \label{eq:iuwrgf8} B_\alpha &=\int_0^\infty \ensuremath{\mathrm{d}} \tau \, \sum_{\beta} \mathcal{C}_{\alpha\beta}(\tau) S^{(I)}_\beta(-\tau), \\ \label{eq:112823jmn2} C_\alpha &= \int_0^\infty \ensuremath{\mathrm{d}} \tau \, \sum_{\beta} \mathcal{C}_{\beta\alpha}(-\tau) S^{(I)}_\beta(-\tau). \end{align} \end{subequations}
In many situations of interest, the general form~\eqref{eq:born-markov-master} of the Born--Markov master equation simplifies considerably. For example, if only a single system observable $S$ is monitored by the environment, Eq.~\eqref{eq:born-markov-master} becomes
\begin{equation} \label{eq:born-markov-mastersim} \frac{\partial}{\partial t} \op{\rho}_S(t) = -\frac{\ensuremath{i}}{\hbar} \left[ \op{H}_S, \op{\rho}_S(t) \right] - \frac{1}{\hbar^2}\left\{\left[ \op{S}, B \op{\rho}_S(t) \right] + \left[ \op{\rho}_S(t) C, \op{S} \right]\right\}, \end{equation}
with corresponding simplifications of Eqs.~\eqref{eq:xbab20}, \eqref{eq:iuwrgf8}, and \eqref{eq:112823jmn2}. Moreover, in many cases one finds a rather simple time dependence for the operators $\op{S}_\alpha(\tau)$ and $E_\alpha(\tau)$ appearing in Eqs.~\eqref{eq:xbab20}, \eqref{eq:iuwrgf8}, and \eqref{eq:112823jmn2}, which in turn makes calculating the quantities $B_\alpha$ and $C_\alpha$ relatively easy. Examples of applications of Born--Markov master equations to specific models are given in Sec.~\ref{sec:decmodels}.
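To make the structure of Eqs.~\eqref{eq:born-markov-mastersim} and \eqref{eq:hvdwg643r5gsxkjcvbsvnx20} concrete, here is a minimal sketch (with $\hbar=1$ and an assumed exponential correlation function chosen purely for illustration) for a single monitored observable $\op{S}=\op{\sigma}_z$ that commutes with $\op{H}_S$, so that $S^{(I)}(-\tau)=\op{\sigma}_z$; in this case $B=C$ and the non-unitary part of Eq.~\eqref{eq:born-markov-mastersim} collapses to the pure-dephasing double commutator encountered earlier:
\begin{verbatim}
import numpy as np

# Sketch under stated assumptions (hbar = 1): single monitored observable
# S = sigma_z with [H_S, S] = 0, and an assumed correlation function
#   C(tau) = c0 * exp(-tau / tau_c).
c0, tau_c = 0.3, 0.05                     # assumed amplitude and correlation time
sz = np.diag([1.0, -1.0]).astype(complex)

g = c0 * tau_c                            # integral of C(tau) over [0, infinity)
B = C = g * sz                            # here both operators reduce to g * S

rho = np.array([[0.6, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.4]])       # arbitrary test state
bm = (sz @ (B @ rho) - (B @ rho) @ sz) + ((rho @ C) @ sz - sz @ (rho @ C))
dbl = g * (sz @ (sz @ rho - rho @ sz) - (sz @ rho - rho @ sz) @ sz)
print(np.allclose(bm, dbl))               # prints True: pure dephasing
\end{verbatim}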
While the Born--Markov master equation~\eqref{eq:born-markov-master} is both time-local and Markovian, it does \emph{not} guarantee the complete positivity of the evolution \cite{Alicki:2007:uu,Benatti:2005:ii}. Therefore, it may lead to unphysical states with negative populations and does not, in general, describe a quantum dynamical semigroup (see Ref.~\cite{Benatti:2005:ii} for a detailed discussion). A well-known case of a Born--Markov master equation that violates complete positivity is the Caldeira--Leggett master equation \cite{Caldeira:1983:on} discussed in Sec.~\ref{sec:cald-legg-mast}. In fact, there are situations in which a Born--Markov master equation does not even preserve the positivity of the time-evolved reduced density matrix \cite{Dumke:1979:ia,Benatti:2005:ii} (see, e.g., the model discussed in Example~3.4 of Ref.~\cite{Benatti:2005:ii}).
Complete positivity can be ensured, however, by imposing a third, secular approximation, known as the \emph{rotating-wave approximation}, which was analyzed in detail by Davies \cite{Davies:1974:tw,Davies:1976:oo,Davies:1976:uu,Davies:1978:uu} (see also Refs.~\cite{Dumke:1979:ia,Breuer:2002:oq,Alicki:2007:uu,Hornberger:2009:aq}). Its application requires that the system has a discrete and nondegenerate (or exactly degenerate) spectrum (see Ref.~\cite{Davies:1978:uu} for an analysis of the case of nearly degenerate spectra). It is justified when the relaxation timescale $\tau_r$ of the open quantum system $S$ is much larger than the timescale $\tau_S$ set by the typical energy differences $\hbar(\omega-\omega')$ of the system Hamiltonian $\op{H}_S$, i.e., if $\tau_r \gg \abs{\omega-\omega'}^{-1}$. This condition is fulfilled, for example, in many quantum-optical settings. One then proceeds by first inserting a decomposition of the interaction-picture interaction Hamiltonian in terms of eigenoperators of $\op{H}_S$ into the Markovian master equation~\eqref{eq:pexarr}, which leads to Fourier-type summations of the form $\sum_{\omega \omega'} \ensuremath{e}^{\ensuremath{i} (\omega-\omega')t} f(\omega, \omega')$. Since the exponentials $\ensuremath{e}^{\ensuremath{i} (\omega-\omega')t}$ oscillate rapidly over the relaxation timescale $\tau_r$, they will average out to zero unless $\omega \approx \omega'$. In the rotating-wave approximation, one therefore neglects all terms $\omega \not= \omega'$. One can then show \cite{Davies:1974:tw,Davies:1976:oo,Davies:1976:uu,Breuer:2002:oq,Alicki:2007:uu,Hornberger:2009:aq} that this procedure transforms the Born--Markov master equation into the first standard form \eqref{eq:sdfkhwr69} for the generator of a quantum dynamical semigroup, with, as required, a positive coefficient matrix $\gamma_{\alpha\beta}$. Accordingly, the master equation ensures complete positivity, and can also be brought into the Lindblad form~\eqref{eq:lindblad}.
It is interesting to note that, while complete positivity is of course desirable (and a necessary feature of any exact, physically meaningful evolution), imposing the rotating-wave approximation may in turn obscure other relevant physical features \cite{Dodin:2018:zz}. Such observations serve as a reminder that master equations and their underlying approximations must be judiciously chosen and applied to ensure that they are appropriate to a given physical situation.
\subsection{\label{sec:quantum-trajectories}Quantum trajectories}
In \emph{quantum-jump} and \emph{quantum-trajectory} approaches \cite{Barchielli:1991:fv,Belavkin:1989:an,Belavkin:1989:am,Belavkin:1989:um,Belavkin:1995:tt,Diosi:1988:wx,Diosi:1988:hn,Diosi:1988:bv,Gisin:1984:qs,Gisin:1989:jn,Wiseman:1994:qq,Goan:2001:rz,Plenio:1998:bb}, the evolution of the reduced density matrix is conditioned on the results of a sequence of measurements performed on the environment. In this way, one may consider an individual system evolving stochastically, conditioned on a particular measurement record. This evolution is described by a Lindblad master equation of the form \eqref{eq:lindbladc}, where now the reduced density matrix (denoted by $\op{\rho}^C_S$ below) is conditioned on the records of measurements of the Lindblad operators $\op{L}_\mu$,
\begin{equation} \label{eq:cme} \ensuremath{\mathrm{d}} \op{\rho}^C_S = -\frac{\ensuremath{i}}{\hbar} \left[\op{H}_S, \op{\rho}_S^C \right] \ensuremath{\mathrm{d}} t - \frac{1}{2} \sum_\mu \kappa_\mu \left[\op{L}_\mu, \left[\op{L}_\mu, \op{\rho}_S^C\right] \right] \ensuremath{\mathrm{d}} t + \sum_\mu \sqrt{\kappa_\mu} \, \mathcal{W}[\op{L}_\mu] \op{\rho}_S^C \, \ensuremath{\mathrm{d}} W_\mu, \end{equation}
where $\mathcal{W}[L]\op{\rho} \equiv L \op{\rho} + \op{\rho} L^\dagger - \op{\rho} \, \text{Tr} \left\{ L\op{\rho} + \op{\rho} L^\dagger \right\}$, and the $\ensuremath{\mathrm{d}} W_\mu$ are so-called \emph{Wiener increments}. Equation~\eqref{eq:cme} represents what is known as a \emph{diffusive unraveling} of the Lindblad equation into individual quantum trajectories, which can then be expressed by means of a \emph{stochastic Schr\"odinger equation} \cite{Barchielli:1991:fv,Belavkin:1989:an,Belavkin:1989:am,Belavkin:1989:um,Belavkin:1995:tt,Diosi:1988:wx,Diosi:1988:hn,Diosi:1988:bv,Gisin:1984:qs,Gisin:1989:jn,Wiseman:1994:qq,Goan:2001:rz,Plenio:1998:bb}. Unraveling of a master equation has been used, for example, to characterize the dynamically emerging pointer states in collisional decoherence \cite{Busse:2009:aa,Busse:2010:aa} and quantum Brownian motion \cite{Sorgel:2015:pp}.
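As an illustration of Eq.~\eqref{eq:cme} (a minimal sketch with assumed parameters, $\hbar=1$, a single Hermitian Lindblad operator $\op{L}=\op{\sigma}_z$, and a simple Euler--Maruyama integration), the following code propagates an ensemble of conditioned density matrices and checks that their average reproduces, to within sampling and discretization error, the coherence decay predicted by the unconditioned Lindblad equation:
\begin{verbatim}
import numpy as np

# Sketch with assumed parameters (hbar = 1): diffusive unraveling of qubit
# dephasing, single Lindblad operator L = sigma_z with rate kappa.
rng = np.random.default_rng(1)
sz = np.diag([1.0, -1.0]).astype(complex)
H = 0.5 * sz                              # assumed H_S = (omega0 / 2) sigma_z
kappa, dt, steps, ntraj = 0.5, 2e-3, 1000, 500

def W(L, r):                              # the superoperator W[L] rho
    LrrL = L @ r + r @ L.conj().T
    return LrrL - r * np.trace(LrrL)

avg = np.zeros((2, 2), dtype=complex)
for _ in range(ntraj):
    r = 0.5 * np.ones((2, 2), dtype=complex)     # (|0> + |1>)/sqrt(2)
    for _ in range(steps):                       # Euler-Maruyama step
        dW = rng.normal(0.0, np.sqrt(dt))        # Wiener increment
        drift = -1j * (H @ r - r @ H) - 0.5 * kappa * (
            sz @ (sz @ r - r @ sz) - (sz @ r - r @ sz) @ sz)
        r = r + drift * dt + np.sqrt(kappa) * W(sz, r) * dW
    avg += r / ntraj

t = steps * dt
print(abs(avg[0, 1]), 0.5 * np.exp(-2 * kappa * t))  # ensemble mean vs Lindblad
\end{verbatim}
Individual trajectories fluctuate with the measurement record, while the ensemble average decoheres deterministically.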
\subsection{\label{sec:non-mark-decoh}Non-Markovian decoherence}
While Born--Markov master equations adequately capture the decoherence dynamics of many physically relevant systems, the underlying assumption of weak coupling to an essentially unchanging, memoryless environment is not always fulfilled in practice, and significantly non-Markovian dynamics may arise. An important example of such a breakdown of Markovian decoherence dynamics is encountered in the case of a superconducting qubit that interacts strongly with a low-temperature environment of other two-level systems \cite{Prokofev:2000:zz,Dube:2001:zz}. Another example is an experiment \cite{Groeblacher:2013:im} that has measured strongly non-Ohmic spectral densities for the environment of a quantum nanomechanical system; such densities lead to non-Markovian evolution.
If memory effects in the environment are substantial, then the evolution of the reduced density matrix will depend on the past history of the system and the environment. In general, this may mean that time-local master equations are no longer applicable and one has to instead solve integro-differential equations, which is typically a difficult task (see also the Nakajima--Zwanzig projection-operator technique \cite{Nakajima:1958:im,Zwanzig:1960:om,Zwanzig:1960:mo,Joos:2003:jh}). It turns out, however, that in some cases time-local master equations can still provide a good representation even of non-Markovian processes. Such equations are of the form
\begin{equation} \label{eq:sfihvsfhv7}
\frac{\partial}{\partial t} \op{\rho}_S(t) = \mathcal{K}(t) \op{\rho}_S(t), \end{equation}
where the superoperator $\mathcal{K}(t)$ is now time dependent but evaluated at a single time $t$ only, as is the reduced density matrix. To give an example, a non-Markovian but time-local master equation for quantum Brownian motion (see Sec.~\ref{sec:quant-brown-moti}) can be obtained through a formal modification of the Born--Markov master equation \cite{Paz:2001:aa,Zurek:2002:ii}. In general, non-Markovian, time-local master equations may be obtained using the so-called time-convolutionless projection operator technique \cite{Chaturvedi:1979:pm,Shibata:1980:ma,Royer:1972:um,Royer:2003:za}.
\section{\label{sec:decmodels}Decoherence models}
Many physical systems can be represented either by a qubit (i.e., a spin-$\frac{1}{2}$ particle) if the state space of the system is discrete and effectively two-dimensional, or by a particle described by continuous phase-space coordinates. Similarly, a wide range of environments can be modeled as a collection of quantum harmonic oscillators (``oscillator environments,'' representing a quasicontinuum of delocalized bosonic modes) or qubits (``spin environments,'' representing a collection of localized, discrete modes).
A harmonic-oscillator environment is a very general model at low energies. Many systems interacting with an environment can be effectively described by one or two degrees of freedom of the system linearly coupled to an environment of harmonic oscillators; indeed, it turns out that sufficiently weak interactions with an \emph{arbitrary} environment can be mapped onto a system linearly coupled to a harmonic-oscillator environment \cite{Feynman:1963:jj,Caldeira:1983:gv}.
Spin environments are particularly appropriate models in the low-temperature regime, where decoherence is typically dominated by interactions with localized modes, such as paramagnetic spins, paramagnetic electronic impurities, tunneling charges, defects, and nuclear spins \cite{Dube:2001:zz,Prokofev:2000:zz,Lounasmaa:1974:yb}. Each such localized mode may be described by a finite-dimensional Hilbert space with a finite energy cutoff, allowing one to model these modes as a set of discrete states. Since typically only two such states are relevant, the localized modes can be mapped onto an environment of spin-$\frac{1}{2}$ particles.
In the following, we will discuss four important standard models, namely, collisional decoherence (Sec.~\ref{sec:collisionaldecoherence}), quantum Brownian motion (Sec.~\ref{sec:quant-brown-moti}), the spin--boson model (Sec.~\ref{sec:spin-boson-models}), and the spin--spin model (Sec.~\ref{sec:spin-envir-models}). For details on these and other decoherence models, including derivations of the relevant master equations, see, e.g., Secs.~3 and 5 of Ref.~\cite{Schlosshauer:2007:un}.
\subsection{\label{sec:collisionaldecoherence}Collisional decoherence}
\begin{figure}
\caption{Illustration of environmental scattering. Particles in the environment, such as
photons or gas molecules, are scattered by a central particle. When they carry away which-path information, collisional decoherence results.}
\label{fig:scatmod}
\end{figure}
Collisional decoherence arises from the scattering of environmental particles by a massive free quantum particle, a process by which the scattered environmental particles obtain which-path information about the central particle (see Fig.~\ref{fig:scatmod}; compare also Fig.~\ref{fig:bi}\emph{c}). Models of collisional decoherence were first studied in the classic paper by Joos and Zeh \cite{Joos:1985:iu}. Subsequently, a more rigorous derivation of the master equation was given by Hornberger and Sipe \cite{Hornberger:2003:un}; it remedied a flaw in Joos and Zeh's original derivation that had resulted in decoherence rates that were too large by a factor of $2\pi$ (see also Refs.~\cite{Gallis:1990:un,Diosi:1995:um,Adler:2006:yb}). These treatments consider the case in which the mass $M$ of the central particle is much larger than the masses $m$ of the scattered environmental particles, such that the center-of-mass state of the central particle is not disturbed by the scattering events (i.e., no recoil). This situation is applicable, for example, to the decoherence of a macroscopic object due to scattering of microscopic or mesoscopic particles such as photons or air molecules, a ubiquitous process in nature; it also applies to scenarios such as the controlled decoherence of fullerene molecules due to collisions with a gaseous environment \cite{Hackermuller:2003:uu,Hornberger:2003:tv}. If the masses of the central particle and the environmental particles are similar, then a more general description is required that also includes the dissipative dynamics (see below) \cite{Diosi:1995:um,Hornberger:2006:tb,Hornberger:2006:tc,Hornberger:2008:ii, Vacchini:2009:pp,Busse:2009:aa,Busse:2010:aa,Busse:2010:oo}.
\subsubsection{Master equation}
Assuming $M \gg m$ holds, the time evolution of the reduced density matrix is given by \cite{Joos:1985:iu,Gallis:1990:un,Diosi:1995:um,Hornberger:2003:un,Schlosshauer:2007:un,Hornberger:2009:aq}
\begin{equation} \label{eq:scatq} \frac{\partial \rho_S(\vec{x}, \vec{x}', t)}{\partial t} = - F(\vec{x} - \vec{x}') \rho_S(\vec{x}, \vec{x}', t). \end{equation}
This master equation describes pure spatial decoherence without dissipation. The decoherence factor $F(\vec{x} - \vec{x}')$ represents the characteristic decoherence rate at which coherence between two positions $\vec{x}$ and $\vec{x}'$ becomes locally unobservable. It is given by
\begin{equation} \label{eq:scatf}
F(\vec{x} - \vec{x}') = \int_0^\infty \ensuremath{\mathrm{d}} q \,
\varrho(q) v(q) \int \frac{\ensuremath{\mathrm{d}} \hat{n}\,\ensuremath{\mathrm{d}} \hat{n}'}{4\pi} \left(1- \ensuremath{e}^{\ensuremath{i}
q\left(\vec{\hat{n}} - \vec{\hat{n}}'\right) \cdot \left( \vec{x} - \vec{x'}
\right) /\hbar} \right) \abs{ f(q\vec{\hat{n}}, q\vec{\hat{n}}') }^2, \end{equation}
where $\varrho(q)$ is the number density of incoming environmental particles with magnitude of momentum equal to $q=\abs{\vec{q}}$, $\vec{\hat{n}}$ and $\vec{\hat{n}}'$ are unit vectors (with $\ensuremath{\mathrm{d}} \hat{n}$ and $\ensuremath{\mathrm{d}} \hat{n}'$ representing the associated solid-angle differentials), and $v(q)$ is the speed of particles with momentum $q$. If the environmental particles are massive, we have $v(q) = q/m$, where $m$ is each particle's mass; for massless particles such as photons, $v(q)$ is equal to the speed of light. The quantity $\abs{ f(q\vec{\hat{n}}, q\vec{\hat{n}}')}^2$ is the differential cross-section for the scattering of an environmental particle from initial momentum $\vec{q}=q\vec{\hat{n}}$ to final momentum $\vec{q}'=q\vec{\hat{n}}'$.
To further evaluate the decoherence factor $F(\vec{x} - \vec{x}')$ [Eq.~\eqref{eq:scatf}], we distinguish two important limiting cases. In the \emph{short-wavelength limit}, the typical wavelength of the scattered environmental particles is much shorter than the coherent separation $\Delta x = \abs{\vec{x}-\vec{x}'}$ between the well-localized wave packets in the spatial superposition state of the system. Then a single scattering event will be able to fully resolve this separation and thus carry away complete which-path information, leading to maximum spatial decoherence per scattering event. In this limit, $F(\vec{x} - \vec{x}')$ turns out to be simply equal to the total scattering rate $\Gamma_\text{tot}$ \cite{Schlosshauer:2007:un}. This implies the existence of an upper limit (saturation) for the decoherence rate when increasing the separation $\Delta x$, in contrast with decoherence rates obtained from linear models [compare Eqs.~\eqref{eq:daf12} and \eqref{eq:odijsvuhfsw21}]. If we ignore the comparably slow internal dynamics of the system, Equation~\eqref{eq:scatq} then implies exponential decay of spatial interference terms at a rate given by $\Gamma_\text{tot}$,
\begin{equation}\label{eq:sees} \rho_S(\vec{x},\vec{x}',t) = \rho_S(\vec{x},\vec{x}',0) \ensuremath{e}^{-\Gamma_\text{tot} t}. \end{equation}
Such collisional decoherence in the short-wavelength regime has been observed, for example, for fullerene molecules interacting with an environment of background gas particles \cite{Hackermuller:2003:uu}, and good agreement of the measured decoherence rates with theoretical predictions obtained from Eq.~\eqref{eq:sees} has been found \cite{Hornberger:2003:tv} (see also Sec.~\ref{sec:matt-wave-interf}).
In the opposite \emph{long-wavelength limit}, the environmental wavelengths are much larger than the coherent separation $\Delta x = \abs{\vec{x}-\vec{x}'}$, which implies that an individual scattering event will reveal only incomplete which-path information. For this case, the change of the reduced density matrix imparted by environmental scattering is given by
\begin{equation}\label{eq:scwer1}
\frac{\partial\rho_S(\vec{x},\vec{x}',t)}{\partial t} = - \Lambda
(\vec{x} -\vec{x'})^2 \rho_{S}(\vec{x},\vec{x}',t). \end{equation}
Here, $\Lambda$ is a scattering constant that represents the physical properties of the system--environment interaction and is given by
\begin{equation}\label{eq:scatfls2} \Lambda = \int \ensuremath{\mathrm{d}} q\, \varrho(q) v(q) \frac{q^2}{\hbar^2} \sigma_\text{eff}(q), \end{equation}
where
\begin{equation}\label{eq:sccs}
\sigma_\text{eff}(q) = \frac{2\pi}{3} \int \ensuremath{\mathrm{d}} \cos\Theta \,
\left(1 - \cos \Theta \right) \abs{ f(q, \cos\Theta)}^2 \end{equation}
is the effective cross-section for the scattering interaction, with $\Theta$ denoting the scattering angle (i.e., the angle between incoming and outgoing directions of a scattered environmental particle).
If we again neglect the internal dynamics, then Eq.~\eqref{eq:scwer1} leads to
\begin{equation}\label{eq:scwer2} \rho_S(\vec{x},\vec{x}',t) = \rho_S(\vec{x},\vec{x}',0) \ensuremath{e}^{-\Lambda (\Delta x)^2 t}, \end{equation}
showing that spatial coherences become exponentially suppressed at a rate that depends on the square of the separation $\Delta x$ \cite{Schlosshauer:2007:un}. We see that the quantity $\Lambda (\Delta x)^2$ plays the role of a decoherence rate, and therefore
\begin{equation}\label{eq:scwer6565}
\tau_{\Delta x} = \frac{1}{\Lambda (\Delta x)^2} \end{equation}
is the characteristic spatial decoherence time. The dependence on the coherent separation $\Delta x$ is reasonable: if the environmental wavelengths are much larger than $\Delta x$, a large number of scattering events will need to accumulate before an appreciable amount of which-path information has become encoded in the environment, and this amount will increase, for a constant number of scattering events, as $\Delta x$ becomes larger. Note that if $\Delta x$ is increased beyond the typical wavelength of the environment, the short-wavelength limit needs to be considered instead, for which the decoherence rate is independent of $\Delta x$ and attains its maximum possible value.
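The crossover between these two regimes can be illustrated with a small numerical sketch. Assuming, purely for illustration, an isotropic, momentum-independent differential cross-section and a single environmental momentum $q$, the angular integrals in Eq.~\eqref{eq:scatf} reduce to $F(\Delta x) = \Gamma_\text{tot}\left[1 - \mathrm{sinc}^2(q\Delta x/\hbar)\right]$ with $\mathrm{sinc}(z)=\sin z/z$, which grows quadratically for small separations and saturates at $\Gamma_\text{tot}$ for large ones (the parameter values below are likewise assumed):
\begin{verbatim}
import numpy as np

# Sketch under stated assumptions: isotropic, constant differential cross-section
# and a single environmental momentum q, so that
#   F(dx) = Gamma_tot * (1 - sinc(q dx / hbar)**2).
hbar = 1.0546e-34
q = 1e-27                     # assumed momentum, roughly that of an optical photon
Gamma_tot = 1e10              # assumed total scattering rate in 1/s

def F(dx):
    s = np.sinc(q * dx / hbar / np.pi)     # np.sinc(z) = sin(pi z)/(pi z)
    return Gamma_tot * (1.0 - s**2)

for dx in [1e-10, 1e-8, 1e-6, 1e-4]:       # coherent separations in metres
    print(f"dx = {dx:.0e} m   F = {F(dx):.3e} 1/s")
\end{verbatim}
The output shows the quadratic rise of the decoherence rate at small $\Delta x$ and its saturation at $\Gamma_\text{tot}$ once $\Delta x$ exceeds the environmental wavelength $\sim\hbar/q$.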
\subsubsection{Time evolution and decoherence rates}
To study the effect of the environment on a given initial wave function in one spatial dimension, let us consider the evolution \eqref{eq:scwer1} in the long-wavelength limit and also include the self-Hamiltonian $\op{H}_S=\op{p}^2/2M$ of the central particle. In the position representation, the evolution is then given by the master equation [compare Eq.~\eqref{eq:lifsshvgvvvxayhcgiefdndbladc}]
\begin{equation}\label{eq:scweraass1hallo}
\frac{\partial\rho_{S}(x,x',t)}{\partial t} = -\frac{\ensuremath{i}}{2M\hbar}
\left(\frac{ \partial^2}{\partial x'^2} - \frac{ \partial^2}{\partial
x^2} \right) \rho_{S}(x,x',t) - \Lambda
(x-x')^2 \rho_{S}(x,x',t). \end{equation}
\begin{figure}
\caption{Collisional decoherence of a density matrix representing a Gaussian wave packet, as generated by the master equation \eqref{eq:scweraass1hallo}. Reading from left to right, the spatial coherence length, represented by the width of the Gaussian in the off-diagonal direction $x=-x'$, becomes progressively reduced by the environmental interaction.}
\label{fig:gaev}
\end{figure}
Let us start with an initial Gaussian wave packet centered at $x=0$ and apply the master equation~\eqref{eq:scweraass1hallo}. The resulting time evolution is shown in Fig.~\ref{fig:gaev}. We see that the coherence length (the width of the Gaussian in the off-diagonal direction $x=-x'$, representing spatial coherences) decreases over time, describing the collisional decoherence process. In this way, the density matrix approaches a quasiclassical probability distribution of positions clustered around the diagonal $x=x'$. Note that the width of the ensemble in the diagonal $x=x'$ direction---i.e., the size of the probability distribution $P(x,t) \equiv \rho_{S}(x,x,t)$ for different positions---increases in time. This is due to two influences: the free spreading of the wave packet (which is equally present in the absence of an environment), and an increase in the mean energy of the system due to the scattering interaction, rooted in the no-recoil assumption made in deriving the master equation \eqref{eq:scweraass1hallo}. Figure~\ref{fig:twoevl} shows the evolution generated by Eq.~\eqref{eq:scweraass1hallo} for a superposition of two Gaussian wave packets separated in position space. The off-diagonal peaks, which represent spatial coherence between the wave packets, become gradually suppressed due to the coupling to the environment.
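A bare-bones numerical sketch of this evolution (assumed units $\hbar=M=1$, toy parameter values, and a coarse finite-difference kinetic term, so it is illustrative only) integrates Eq.~\eqref{eq:scweraass1hallo} on a position grid and tracks the shrinking of an off-diagonal coherence:
\begin{verbatim}
import numpy as np

# Sketch with assumed parameters (hbar = M = 1): master equation
#   drho/dt = -i [p^2/2, rho] - Lam [x, [x, rho]]
# on a discrete position grid, starting from a Gaussian wave packet.
N, L = 64, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
X = np.diag(x)

# kinetic energy p^2/2M via a second-order finite-difference Laplacian
lap = (-2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)) / dx**2
H = -0.5 * lap

psi = np.exp(-x**2 / 2.0)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj()).astype(complex)

Lam, dt, steps = 0.5, 2e-3, 500
def rhs(r):
    return -1j * (H @ r - r @ H) - Lam * (
        X @ (X @ r - r @ X) - (X @ r - r @ X) @ X)

coh = lambda r: abs(r[N // 2 + 4, N // 2 - 4])   # an off-diagonal element
print("initial off-diagonal:", coh(rho))
for _ in range(steps):                            # fourth-order Runge-Kutta
    k1 = rhs(rho); k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2); k4 = rhs(rho + dt * k3)
    rho += (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
print("final off-diagonal:  ", coh(rho))
\end{verbatim}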
\begin{figure}
\caption{Progressive decoherence (left to right) of a density matrix describing a spatial superposition of two Gaussian wave packets, as generated by the master equation \eqref{eq:scweraass1hallo}. Spatial coherence, represented by the peaks along the off-diagonal direction $x=-x'$, becomes damped by the environmental interaction.}
\label{fig:twoevl}
\end{figure}
Numerical values of collisional decoherence rates obtained from Eq.~\eqref{eq:scwer6565}, with the physically relevant scattering parameters $\Gamma_\text{tot}$ and $\Lambda$ appropriately evaluated (see Ref.~\cite{Schlosshauer:2007:un} for details), have demonstrated the extreme efficiency of collisions with environmental particles in suppressing spatial interferences. Table~\ref{tab:decrate} lists a few classic order-of-magnitude estimates \cite{Joos:1985:iu,Joos:2003:jh,Schlosshauer:2007:un}. Carefully controlled decoherence experiments have shown excellent agreement between theory and experimental data, for example, for the decoherence of fullerenes due to collisions with background gas molecules in a Talbot--Lau interferometer \cite{Hackermuller:2003:uu,Hornberger:2003:tv,Hornberger:2003:un,Hornberger:2004:bb,Nimmrichter:2011:pr} (see Sec.~\ref{sec:matt-wave-interf}), and for the decoherence of sodium atoms in a Mach--Zehnder interferometer due to the scattering of photons \cite{Kokorowski:2001:ub} and gas molecules \cite{Uys:2005:yb}.
\begin{table} \centering \resizebox{0.5\textwidth}{!}{ \begin{tabular}{lcc}
\hline\noalign{\smallskip}
\small Environment & \,\,Dust grain\,\, & Large molecule \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Cosmic background radiation & $1$ & $10^{24}$ \\
Photons at room temperature & $10^{-18}$ & $10^{6}$ \\
Best laboratory vacuum & $10^{-14}$ & $10^{-2}$\\
Air at normal pressure & $10^{-31}$ & $10^{-19}$\\
\noalign{\smallskip}\hline \end{tabular}} \caption{\label{tab:decrate}Estimates of collisional decoherence timescales (in seconds) obtained from Eq.~\eqref{eq:scwer6565} for spatial coherences over a distance $\Delta x$ chosen to be equal to the size of the object ($\Delta x = \unit[10^{-3}]{cm}$ for a dust grain and $\Delta x = \unit[10^{-6}]{cm}$ for a large molecule), calculated for four different environments. The first two entries represent photon environments, and the last two entries represent an environment of ambient air molecules at room temperature. See Ref.~\cite{Schlosshauer:2007:un} for details on the calculation of the shown values.} \end{table}
\subsubsection{Generalizations and refinements}
Whenever the mass of the central particle becomes comparable to the mass of the environmental particles (as in the case of air molecules scattered by small molecules and free electrons \cite{Tegmark:1993:uz}), the no-recoil assumption does not hold and more general models for collisional decoherence and dissipation have to be considered. An important step in this direction was the master equation given by Di{\'o}si \cite{Diosi:1995:um}, though the derivation was based on a number of approximations that may be considered difficult to justify at the microscopic level \cite{Hornberger:2008:ii}. Later, a general, nonperturbative treatment based on the quantum linear Boltzmann equation was developed by Hornberger and collaborators \cite{Hornberger:2006:tb,Hornberger:2006:tc,Hornberger:2008:ii,Busse:2009:aa,Vacchini:2009:pp,Busse:2010:aa,Busse:2010:oo} (see Ref.~\cite{Vacchini:2009:pp} for a comprehensive review). The resulting framework properly accounts for the dynamical interplay between decoherence (in both position and momentum) and dissipation; previous results are recovered as limiting cases \cite{Hornberger:2006:tb,Hornberger:2006:tc,Hornberger:2008:ii,Vacchini:2009:pp,Busse:2010:oo}. The dynamically selected pointer states are found to be exponentially localized solitonic wave functions that follow the classical equations of motion \cite{Busse:2009:aa,Busse:2010:aa}.
In all aforementioned models of collisional decoherence, the central particle is treated as a point-like particle with no orientational degrees of freedom---i.e., as an isotropic sphere with no rotational motion. Motivated by experiments involving molecular rotors and the observation that near-field interferometry with massive molecules is highly sensitive to molecular rotations \cite{Gring:2010:aa,Stickler:2015:zz}, recently the theoretical treatment of collisional decoherence has been extended to the derivation of Markovian master equations describing the spatio-orientational decoherence of rotating, anisotropic, nonspherical molecules due to scattering interactions with a gaseous environment \cite{Walter:2016:zz,Stickler:2016:yy,Papendell:2017:yy, Stickler:2018:oo,Stickler:2018:uu}. Specific cases considered include molecules with a dipole moment \cite{Walter:2016:zz} and molecules with a high rotation rate, known as superrotors \cite{Stickler:2018:oo}, and good agreement with experimental data has been found \cite{Stickler:2018:oo}. In this way, the development of increasingly refined models of collisional decoherence in response to experimental advances and insights speaks nicely to the interplay between theory and experiment.
\subsection{\label{sec:quant-brown-moti}Quantum Brownian motion}
A classic and extensively studied model of decoherence and dissipation is the one-dimensional motion of a particle weakly coupled to a thermal bath of noninteracting harmonic oscillators, a model known as \emph{quantum Brownian motion} \cite{Kubler:1973:ux,Caldeira:1983:on,Hu:1992:om,Paz:1993:ta,Zurek:1993:pu,Weiss:1999:tv,Zurek:2002:ii,Breuer:2002:oq,Diosi:2000:yn,Joos:2003:jh,Eisert:2003:ib,Schlosshauer:2007:un}. (By ``thermal bath'' we shall mean an environment in thermal equilibrium.) The self-Hamiltonian $\op{H}_E$ of the environment is given by
\begin{equation}
\label{eq:sfsfjaa11}
\op{H}_E = \sum_i \left( \frac{1}{2m_i}p_i^2 +
\frac{1}{2}m_i\omega_i^2q_i^2 \right), \end{equation}
where $m_i$ and $\omega_i$ are the mass and natural frequency of the $i$th oscillator, and $q_i$ and $p_i$ denote the canonical position and momentum operators. The interaction Hamiltonian $\op{H}_\text{int}$ is chosen to be
\begin{equation}\label{eq:sfsfjaaaaaa11} \op{H}_\text{int} = x \otimes \sum_i c_i q_i, \end{equation}
which describes the bilinear coupling of the system's position $x$ to the positions $q_i$ of the environmental oscillators, with $c_i$ denoting the coupling strength between the system and the $i$th environmental oscillator. Note that the interaction Hamiltonian \eqref{eq:sfsfjaaaaaa11} describes a continuous monitoring of the position of the system by the environment.
\subsubsection{Master equation}
Given the Hamiltonians~\eqref{eq:sfsfjaa11} and \eqref{eq:sfsfjaaaaaa11}, one can derive the Born--Markov master equation for quantum Brownian motion. The result is (see, e.g., Refs.~\cite{Breuer:2002:oq,Schlosshauer:2007:un} for a derivation)
\begin{equation} \label{eq:vjp32q22}
\frac{\partial}{\partial t} \op{\rho}_S(t)
= -\frac{\ensuremath{i}}{\hbar} \bigl[ \op{H}_S, \op{\rho}_S(t) \bigr] -
\frac{1}{\hbar} \int_0^\infty \ensuremath{\mathrm{d}} \tau \, \left\{ \nu(\tau) \bigl[ x, \bigl[
x(-\tau), \op{\rho}_S(t) \bigr]\bigr] - \ensuremath{i} \eta(\tau) \bigl[ x,
\bigl\{ x(-\tau), \op{\rho}_S(t)
\bigr\}\bigr] \right\}. \end{equation}
Here, $x(\tau)$ denotes the system's position operator in the interaction picture, $x(\tau) = \ensuremath{e}^{\ensuremath{i} \op{H}_S\tau/\hbar} x\ensuremath{e}^{-\ensuremath{i} \op{H}_S\tau/\hbar}$. The curly brackets $\{\cdot , \cdot \}$ in Eq.~\eqref{eq:vjp32q22} denote the anticommutator $\{ A, B \} \equiv AB + BA$. The functions
\begin{align}
\nu(\tau) &= \int_0^\infty \ensuremath{\mathrm{d}} \omega
\, J(\omega) \coth
\left(\frac{\hbar\omega}{2k_B T}\right) \cos \left(\omega\tau\right), \label{eq:vdjpoo17} \\
\eta(\tau) &= \int_0^\infty \ensuremath{\mathrm{d}} \omega\, J(\omega)
\sin\left(\omega\tau\right), \label{eq:ponol218} \end{align}
are known as the \emph{noise kernel} and \emph{dissipation kernel}, respectively. The function $J(\omega)$, called the \emph{spectral density} of the environment, is given by
\begin{equation} \label{eq:vdfpmdmv16}
J(\omega) = \sum_i \frac{c_i^2}{2m_i\omega_i} \delta(\omega-\omega_i). \end{equation}
Spectral densities encode physical properties of the environment. One frequently replaces the collection of individual environmental oscillators by an (often phenomenologically motivated) continuous spectral-density function $J(\omega)$ of the environmental frequencies $\omega$.
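For a concrete feel for these kernels, the following sketch evaluates Eqs.~\eqref{eq:vdjpoo17} and \eqref{eq:ponol218} by straightforward numerical quadrature for an assumed ohmic spectral density with an exponential cutoff, $J(\omega)=(2M\gamma_0/\pi)\,\omega\,\ensuremath{e}^{-\omega/\Lambda}$ (this particular cutoff form, and all parameter values, are assumptions made only to keep the integrals well behaved; we also set $\hbar=k_B=1$):
\begin{verbatim}
import numpy as np

# Sketch with assumed parameters (hbar = kB = 1): noise and dissipation kernels
# for an assumed ohmic spectral density with exponential cutoff,
#   J(w) = (2 M gamma0 / pi) * w * exp(-w / Lambda).
M, gamma0, Lambda, T = 1.0, 0.1, 10.0, 5.0

w = np.linspace(1e-6, 20 * Lambda, 200001)   # frequency grid
dw = w[1] - w[0]
J = (2 * M * gamma0 / np.pi) * w * np.exp(-w / Lambda)
coth = 1.0 / np.tanh(w / (2 * T))

def nu(tau):                                  # noise kernel
    return np.sum(J * coth * np.cos(w * tau)) * dw

def eta(tau):                                 # dissipation kernel
    return np.sum(J * np.sin(w * tau)) * dw

for tau in [0.0, 0.1, 0.5, 2.0]:
    print(f"tau = {tau:4.1f}   nu = {nu(tau): .4e}   eta = {eta(tau): .4e}")
\end{verbatim}
Both kernels decay on short timescales set by the cutoff and the temperature, which is precisely the behavior invoked in the Markov approximation of Sec.~\ref{sec:micr-deriv-mark}.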
If we focus on the important case of a system represented by a harmonic oscillator with self-Hamiltonian
\begin{equation}
\label{eq:sfsfsdfy7jaa11}
\op{H}_S = \frac{1}{2M}p^2 +
\frac{1}{2}M\Omega^2x^2, \end{equation}
the resulting Born--Markov master equation is (see, e.g., Refs.~\cite{Breuer:2002:oq,Zurek:2002:ii,Schlosshauer:2007:un})
\begin{equation} \label{eq:vfoinbnd9s27}
\frac{\partial}{\partial t} \op{\rho}_S(t)
= -\frac{\ensuremath{i}}{\hbar} \bigl[ \op{H}_S + \frac{1}{2}M
\widetilde{\Omega}^2 x^2, \op{\rho}_S(t) \bigr]
- \frac{\ensuremath{i} \gamma}{\hbar} \bigl[ x, \bigl\{ p,
\op{\rho}_S(t) \bigr\} \bigr]
- D \bigl[ x, \bigl[ x, \op{\rho}_S(t) \bigr] \bigr] - \frac{f}{\hbar} \bigl[ x, \bigl[ p, \op{\rho}_S(t) \bigr] \bigr]. \end{equation}
The coefficients $\widetilde{\Omega}^2$, $\gamma$, $D$, and $f$ are given by\footnote{In the literature, the upper integral limit in the expressions for the coefficients is sometimes considered explicitly time-dependent, rather than being extended to infinity (see, e.g., Refs.~\cite{Paz:2001:aa,Zurek:2002:ii}). This corresponds to not taking the final step in the microscopic derivation of the Born--Markov equation, namely, the replacement of the integral limit by infinity (see Sec.~\ref{sec:micr-deriv-mark}). This results in a partially pre-Markovian master equation that in certain cases (for example, for low-temperature environments \cite{Unruh:1989:rc,Lombardo:2005:ia}) provides a more physically appropriate description than can be given using the Markovian coefficients~\eqref{eq:jcsfr09355378}.}
\begin{subequations}\label{eq:jcsfr09355378} \begin{align}
\widetilde{\Omega}^2 &= - \frac{2}{M} \int_0^\infty \ensuremath{\mathrm{d}} \tau \,
\eta(\tau) \cos\left( \Omega \tau \right), \label{eq:caytcs1} \\
\gamma &= \frac{2}{M\Omega} \int_0^\infty \ensuremath{\mathrm{d}} \tau \,
\eta(\tau) \sin\left( \Omega \tau \right), \label{eq:caytcs2} \\
D &= \frac{1}{\hbar} \int_0^\infty \ensuremath{\mathrm{d}} \tau \,
\nu(\tau) \cos\left( \Omega \tau \right), \label{eq:caytcs3} \\
f &= - \frac{1}{M\Omega} \int_0^\infty \ensuremath{\mathrm{d}} \tau \,
\nu(\tau) \sin\left( \Omega \tau \right). \label{eq:caytcs4} \end{align} \end{subequations}
The first term on the right-hand side of Eq.~\eqref{eq:vfoinbnd9s27} represents the unitary dynamics of a harmonic oscillator whose natural frequency is shifted by $\widetilde{\Omega}$. The second term describes momentum damping (dissipation) at a rate proportional to $\gamma$; it depends on the spectral density $J(\omega)$ of the environment but not on the temperature $T$. The third term has the Lindblad double-commutator form [see Eq.~\eqref{eq:lindbladc}] and describes decoherence of spatial coherences over a distance $\Delta X$ at a rate $D(\Delta X)^2$. Note that $D$ depends on both the spectral density $J(\omega)$ and the temperature $T$ of the environment. The fourth term also represents decoherence, but its influence on the dynamics of the system is usually negligible, especially at higher temperatures. In the long-time limit $\gamma t \gg 1$, the master equation \eqref{eq:vfoinbnd9s27} describes dispersion in position space given by
\begin{equation}
\Delta X^2(t) = \frac{\hbar^2D}{2M^2 \gamma^2} t. \end{equation}
We thus see that the ensemble width $\Delta X(t)$ in position space asymptotically scales as $\Delta X(t) \propto \sqrt{t}$. This is the same scaling behavior as in classical Brownian motion, thus motivating the term ``quantum Brownian motion.''
We note that it is possible to derive the \emph{exact}, non-Markovian master equation for quantum Brownian motion \cite{Hu:1992:om} (see also Refs.~\cite{Caldeira:1983:on,Caldeira:1985:tt,Haake:1932:tt,Grabert:1988:bf,Unruh:1989:rc} for preliminary results). Remarkably, this equation also turns out to be time-local. The exact master equation takes the same form as the Born--Markov equation~\eqref{eq:vfoinbnd9s27} presented above, but with the coefficients $\widetilde{\Omega}^2$, $\gamma$, $D$, and $f$ replaced by substantially more complicated, time-dependent functions, which in turn are expressed in terms of integrals over the noise and dissipation kernels \eqref{eq:vdjpoo17} and \eqref{eq:ponol218}. We refer the reader to Ref.~\cite{Hu:1992:om} for details.
\subsubsection{Time evolution}
\begin{figure}
\caption{Evolution of superpositions of Gaussian wave packets in quantum Brownian motion as studied in Ref.~\cite{Paz:1993:ta}, visualized in the Wigner representation \cite{Wigner:1932:un,Hillery:1984:tv}. Time increases from top to bottom. In the left column, the initial wave packets are separated in position; in the right column, the separation is in momentum. Interference between the two wave packets is represented by the oscillatory pattern between the two direct peaks.}
\label{fig:gaussmov}
\end{figure}
Figure~\ref{fig:gaussmov} shows the time evolution of position-space and momentum-space superpositions of two Gaussian wave packets (represented in the Wigner picture \cite{Wigner:1932:un,Hillery:1984:tv}; see Sec.~\ref{sec:meas}), as described by Eq.~\eqref{eq:vfoinbnd9s27} and studied by Paz, Habib, and Zurek in Ref.~\cite{Paz:1993:ta}. The oscillations between the direct peaks represent interference between the two wave packets; as time goes on, these oscillations become progressively suppressed due to the interaction with the environment. The superposition of \emph{spatially} separated wave packets is decohered much more rapidly than the superposition of two distinct \emph{momentum} states.
This can be explained in terms of the structure of the environmental monitoring. The interaction Hamiltonian couples only the position coordinate of the system to the environment, but not the momentum coordinate. Thus the environment monitors only position, and therefore one would expect that decoherence should occur only in position, not momentum. However, the intrinsic dynamics of the system are such that a superposition of two momenta will evolve into a superposition of positions, which in turn is sensitive to environmental monitoring. Thus, superpositions of momenta will also be decohered, albeit on a timescale associated with the intrinsic dynamics, which is much longer than the timescale for the decoherence interaction described by the interaction Hamiltonian.
This interplay of environmental monitoring and intrinsic dynamics leads to the emergence of pointer states that are minimum-uncertainty Gaussians (coherent states) well-localized in both position and momentum, thus approximating classical points in phase space \cite{Kubler:1973:ux,Paz:1993:ta,Zurek:1993:pu,Zurek:2002:ii,Diosi:2000:yn,Joos:2003:jh,Eisert:2003:ib,Sorgel:2015:pp}. Using a Poissonian unraveling of the master equation into individual quantum trajectories, the motion of these Gaussians may be represented by a stochastic differential equation that describes momentum damping as well as diffusion in position and momentum \cite{Sorgel:2015:pp}.
\begin{figure}
\caption{Ohmic spectral density $J(\omega)$ with a high-frequency cutoff $\Lambda$ as defined by Eq.~\eqref{eq:pojsvsddsjldfv1}. The frequency axis is in units of $\Lambda$.}
\label{fig:spectraldensity}
\end{figure}
\subsubsection{\label{sec:cald-legg-mast}The Caldeira--Leggett master equation}
Let us now consider the frequently used case of an ohmic spectral density $J(\omega) \propto \omega$ with a high-frequency cutoff $\Lambda$,
\begin{equation}
\label{eq:pojsvsddsjldfv1}
J(\omega) = \frac{2M\gamma_0}{\pi} \omega
\frac{\Lambda^2}{\Lambda^2 + \omega^2}, \end{equation}
where $\gamma_0$ is the effective coupling strength between system and environment. This function is shown in Fig.~\ref{fig:spectraldensity}. We evaluate the coefficients $\widetilde{\Omega}^2$, $\gamma$, $D$, and $f$ in Eq.~\eqref{eq:jcsfr09355378} for this spectral density in the limit of a high-temperature environment ($k_B T \gg \hbar\Omega$ and $k_B T \gg \hbar\Lambda$), and in the limit of the cutoff $\Lambda$ of environmental frequencies being much higher than the characteristic frequency $\Omega$ of the system. Then the master equation for quantum Brownian motion becomes (see, e.g., Refs.~\cite{Breuer:2002:oq,Schlosshauer:2007:un} for a derivation)
\begin{equation} \label{eq:vfnbcclclnd6t48efg89s27}
\frac{\partial}{\partial t} \op{\rho}_S(t)
= -\frac{\ensuremath{i}}{\hbar} \bigl[ \op{H}'_S, \op{\rho}_S(t) \bigr] - \frac{\ensuremath{i} \gamma_0}{\hbar} \bigl[ x, \bigl\{ p,
\op{\rho}_S(t) \bigr\} \bigr]
- \frac{2 M\gamma_0 k_B T}{\hbar^2} \bigl[ x, \bigl[ x, \op{\rho}_S(t) \bigr]\bigr] + \frac{2\gamma_0k_B T}{\hbar^2\Lambda} \bigl[ x, \bigl[ p, \op{\rho}_S(t) \bigr]\bigr], \end{equation}
where
\begin{equation}
\op{H}'_S = \op{H}_S + \frac{1}{2}M
\widetilde{\Omega}^2 x^2 = \frac{1}{2M}p^2 +
\frac{1}{2}M\left[ \Omega^2 - 2\gamma_0 \Lambda \right]x^2 \end{equation}
is the frequency-shifted Hamiltonian $\op{H}'_S$ of the system. The last term on the right-hand side of Eq.~\eqref{eq:vfnbcclclnd6t48efg89s27} is negligible in the limit $\Lambda \gg \Omega$ considered here and can therefore be omitted. Thus, we arrive at
\begin{equation} \label{eq:vfnbcclclnd9s27}
\frac{\partial}{\partial t} \op{\rho}_S(t)
= -\frac{\ensuremath{i}}{\hbar} \bigl[ \op{H}'_S, \op{\rho}_S(t) \bigr]
- \frac{\ensuremath{i} \gamma_0}{\hbar} \bigl[ x, \bigl\{ p,
\op{\rho}_S(t) \bigr\} \bigr]
- \frac{2 M\gamma_0 k_B T}{\hbar^2} \bigl[ x, \bigl[ x, \op{\rho}_S(t) \bigr]\bigr], \end{equation}
which is known as the \emph{Caldeira--Leggett master equation} \cite{Caldeira:1983:on} (here applied to the case of a system represented by a harmonic oscillator). It has been used extensively to model decoherence and dissipation processes \cite{Gallis:1990:un,Gallis:1992:im,Anglin:1997:za}. It has been shown to sometimes provide an adequate representation even when the underlying assumptions are not strictly fulfilled, for example, in quantum-optical settings for which we often have $k_B T \lesssim \hbar\Lambda$ \cite{Walls:1985:lm}. Comparisons of the predictions from the Caldeira--Leggett master equation with those of more complicated, non-Markovian models often exhibit surprisingly good agreement \cite{Paz:1993:ta}.
In the position representation, the final term on the right-hand side of Eq.~\eqref{eq:vfnbcclclnd9s27} may be expressed as
\begin{equation}
\label{eq:fsdojgdj1}
- \gamma_0 \left( \frac{x-x'}{\lambda_\text{th}} \right)^2 \rho_S(x,x',t), \end{equation}
where $\lambda_\text{th}$ is the thermal de Broglie wavelength defined in Eq.~\eqref{eq:daf12thermal}. This term describes spatial localization with a decoherence rate $\tau_{\abs{x-x'}}^{-1}$ given by \cite{Zurek:1986:uz}
\begin{equation} \label{eq:odijsvuhfsw21}
\tau_{\abs{x-x'}}^{-1} = \gamma_0 \left(
\frac{x-x'}{\lambda_\text{th}} \right)^2. \end{equation}
This is Eq.~\eqref{eq:daf12}, and as discussed there, given that $\lambda_\text{th}$ is extremely small for macroscopic and even mesoscopic objects, it follows that, typically, superpositions of macroscopically separated center-of-mass positions will be decohered on a timescale that is many orders of magnitude shorter than the dissipation (relaxation) timescale $\gamma^{-1}_0$. Therefore, over timescales on the order of the decoherence time, it is often safe to drop the second, dissipative term on the right-hand side of Eq.~\eqref{eq:vfnbcclclnd9s27}, yielding a master equation that describes pure decoherence,
\begin{equation} \label{eq:vfnbcclasclnd9s27}
\frac{\partial}{\partial t} \op{\rho}_S(t) = -\frac{\ensuremath{i}}{\hbar} \bigl[
\op{H}'_S, \op{\rho}_S(t) \bigr]
- \frac{2 M\gamma_0 k_B T}{\hbar^2} \bigl[ x, \bigl[ x, \op{\rho}_S(t) \bigr] \bigr], \end{equation}
which is of the Lindblad double-commutator form~\eqref{eq:lindbladc}. Incidentally, this result provides a microscopic motivation for the simple Lindblad master equation~\eqref{eq:lifsfdndbladc} for spatial decoherence we had written down previously.
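To put this separation of timescales into numbers, here is a small sketch (with assumed, order-of-magnitude parameter values and the convention $\lambda_\text{th}=\hbar/\sqrt{2Mk_BT}$) that evaluates the ratio $\tau_{\abs{x-x'}}/\gamma_0^{-1} = (\lambda_\text{th}/\Delta x)^2$ following from Eq.~\eqref{eq:odijsvuhfsw21}:
\begin{verbatim}
import numpy as np

# Order-of-magnitude sketch with assumed parameters: ratio of decoherence to
# relaxation timescales, tau_dec / tau_relax = (lambda_th / dx)^2.
hbar, kB = 1.054571817e-34, 1.380649e-23
M, T, dx = 1e-15, 300.0, 1e-6      # assumed: picogram-scale mass, 300 K, 1 micron

lambda_th = hbar / np.sqrt(2 * M * kB * T)   # thermal de Broglie wavelength
print(f"lambda_th            = {lambda_th:.2e} m")
print(f"tau_dec / tau_relax  = {(lambda_th / dx)**2:.2e}")
\end{verbatim}
With these numbers the ratio comes out around $10^{-21}$, illustrating the statement above that decoherence outpaces dissipation by many orders of magnitude.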
Note that the decoherence rate given by Eq.~\eqref{eq:odijsvuhfsw21} grows without bounds as the separation $x-x'$ is increased, which is an obviously unphysical behavior. Just as in the case of collisional decoherence, we expect that the decoherence rate will saturate; in the present case, this should happen when the separation $x-x'$ grows to approach the maximum coherence length of the oscillator environment. The absence of such a saturation point indicates the limitations of the Caldeira--Leggett model. In fact, by considering the general model of a massive particle coupled to a massless scalar field, one can show \cite{Unruh:1989:rc,Anglin:1997:za,Paz:2001:aa} that not only the Caldeira--Leggett model but also the quantum Brownian master equation \eqref{eq:vfoinbnd9s27} is based on an implicit long-wavelength assumption (compare the discussion in Sec.~\ref{sec:collisionaldecoherence}). The absence of a saturation point for the decoherence rate given by Eq.~\eqref{eq:odijsvuhfsw21} is therefore nothing but an artifact of the underlying model, showing that one needs to be careful when extrapolating models to different parameter regimes \cite{Gallis:1990:un,Gallis:1992:im,Anglin:1997:za}.
While we presented the Caldeira--Leggett master equation in the context of quantum Brownian motion, it is in fact far more general. It may be applied to arbitrary system potentials, and instead of a system described by position and momentum coordinates, we may use a spin-$\frac{1}{2}$ particle represented by the Pauli spin operators $\op{\sigma}_x$ and $\op{\sigma}_z$, giving rise to models of the spin--boson kind (see Sec.~\ref{sec:spin-boson-models}).
As it stands, the Caldeira--Leggett master equation~\eqref{eq:vfnbcclclnd9s27} cannot be expressed in Lindblad form and therefore cannot guarantee complete positivity \cite{Breuer:2002:oq}. However, it can be brought into Lindblad form through a minimal modification. This amounts to adding a term $-\gamma_0(8Mk_B T)^{-1} [p,[p, \op{\rho}_S(t)]]$, which is small in the relevant high-temperature limit \cite{Breuer:2002:oq}. The resulting Lindblad master equation then has the single Lindblad operator
\begin{equation}\label{eq:dkvnkl1} \op{L} = \sqrt{\frac{4Mk_B T}{\hbar^2}}\op{x} + \ensuremath{i} \sqrt{\frac{1}{4Mk_B T}} \,\op{p}. \end{equation}
\subsection{\label{sec:spin-boson-models}Spin--boson models}
In the spin--boson model (see, e.g., Refs.~\cite{Leggett:1987:pm,Breuer:2002:oq,Schlosshauer:2007:un} for reviews), a qubit interacts with an environment of harmonic oscillators. This model has been of strong interest in investigations of decoherence; it has been used, for example, in the first studies of qubit decoherence in the early years of quantum information \cite{Unruh:1995:uy,Palma:1996:yy}. Spin--boson models are exceptionally versatile because many quantum systems can be represented by a two-level system, and because harmonic-oscillator environments are, as mentioned before, of great generality \cite{Feynman:1963:jj,Caldeira:1983:gv}.
Let us first consider a simplified spin--boson model where the self-Hamiltonian of the system is taken to be
\begin{equation} \op{H}_S = \frac{1}{2} \hbar\omega_0 \sigma_z, \end{equation}
with eigenstates $\ket{0}$ and $\ket{1}$. In contrast with the more general case discussed below, this Hamiltonian does not include a tunneling term $\frac{1}{2}\hbar \Delta_0 \sigma_x$, and thus $\op{H}_S$ does not generate any nontrivial intrinsic dynamics. The self-Hamiltonian for the environment of harmonic oscillators is the same as in Eq.~\eqref{eq:sfsfjaa11},
\begin{equation}
\op{H}_E = \sum_i \left( \frac{1}{2m_i}p_i^2 +
\frac{1}{2}m_i\omega_i^2q_i^2 \right), \end{equation}
and we choose the bilinear interaction Hamiltonian
\begin{equation} \op{H}_\text{int} = \sigma_z \otimes \sum_i c_i q_i. \end{equation}
Using the bosonic creation and annihilation (i.e., raising and lowering) operators $a_i^\dagger$ and $a_i$, we may recast the total Hamiltonian as
\begin{equation}\label{eq:h-ssb}
H = \frac{1}{2}\hbar \omega_0 \sigma_z
+ \sum_i \hbar\omega_i a_i^\dagger a_i + \sigma_z \otimes \sum_i
\left( g_ia_i^\dagger + g_i^* a_i \right), \end{equation}
with $[a_i, a_j^\dagger]=\delta_{ij}$ (for simplicity, we have dropped the vacuum-energy term $\sum_i \frac{\hbar\omega_i}{2}$).
Note that since the total Hamiltonian $H=H_S+H_E+H_\text{int}$ commutes with $\sigma_z$, no transitions between the $\sigma_z$ eigenstates $\ket{0}$ and $\ket{1}$ can be induced by $H$. Because there is no energy exchange between the system and the environment, the model describes decoherence without dissipation. Such a model is a good representation of decoherence processes that occur on a timescale that is much shorter than the timescale for dissipation, as is often the case in physical applications. The resulting evolution can be solved exactly (see, e.g., Refs.~\cite{Schlosshauer:2007:un,Hornberger:2009:aq} for details). For an ohmic spectral density with a high-frequency cutoff, it is found that superpositions of the form $\alpha\ket{0}+\beta\ket{1}$ are exponentially decohered on a timescale set by the thermal correlation time $\tau_B=2\hbar (k_B T)^{-1}$ of the environment.
Inclusion of a tunneling term $\frac{1}{2} \hbar\Delta_0 \sigma_x$ yields the general spin--boson model defined by the Hamiltonian
\begin{equation}\label{eq:h-sbhdcskgsf} H = \frac{1}{2} \hbar\omega_0 \sigma_z + \frac{1}{2} \hbar\Delta_0 \sigma_x + \sum_i \left( \frac{1}{2m_i} p_i^2 + \frac{1}{2} m_i \omega_i^2 q_i^2
\right) + \sigma_z \otimes \sum_i
c_i q_i. \end{equation}
One typically considers the unbiased case $\omega_0=0$, corresponding to a symmetric double-well potential (but see Sec.~VII of Ref.~\cite{Leggett:1987:pm} for a treatment of the biased case). The non-Markovian dynamics of this model have been studied in great detail in Refs.~\cite{Leggett:1987:pm,Weiss:1999:tv}. The particular dynamics strongly depend on the various parameters of the model, such as the temperature of the environment, the form of the spectral density (subohmic, ohmic, or supraohmic), and the system--environment coupling strength. For each parameter regime, a characteristic dynamical behavior emerges: localization, exponential or incoherent relaxation, exponential decay, and strongly or weakly damped coherent oscillations \cite{Leggett:1987:pm}.
In the weak-coupling limit, one can derive the Born--Markov master equation for the Hamiltonian~\eqref{eq:h-sbhdcskgsf} (where we shall again assume $\omega_0=0$ for simplicity). Because of the formal similarity between the spin--boson Hamiltonian and the Hamiltonian for quantum Brownian motion, the derivation proceeds in much the same way as in that case. The result is (see, e.g., Refs.~\cite{Paz:2001:aa,Schlosshauer:2007:un} for details)
\begin{equation} \label{eq:vjp32gbntrkh22} \frac{\partial}{\partial t} \op{\rho}_S(t) = -\frac{\ensuremath{i}}{\hbar} \left(
\op{H}'_S \op{\rho}_S(t) - {\op{\rho}}_S(t)
H'^\dagger_S \right) - D \left[
\sigma_z, \left[ \sigma_z, \op{\rho}_S(t)
\right]\right] - \zeta \sigma_z \op{\rho}_S(t)\sigma_y - \zeta^* \sigma_y \op{\rho}_S(t)\sigma_z, \end{equation}
where
\begin{subequations} \begin{align}
\op{H}'_S &= \hbar\left( \frac{1}{2}
\Delta_0 + \zeta^* \right) \op{\sigma}_x, \\
\zeta^* &= \int_0^\infty \ensuremath{\mathrm{d}} \tau \, \left[\nu(\tau) - \ensuremath{i} \eta(\tau)\right] \sin\left( \Delta_0 \tau \right), \\
D &= \int_0^\infty \ensuremath{\mathrm{d}} \tau \,
\nu(\tau) \cos\left( \Delta_0 \tau \right), \end{align} \end{subequations}
with the noise and the dissipation kernels $\nu(\tau)$ and $\eta(\tau)$ taking the same form as in quantum Brownian motion [see Eqs.~\eqref{eq:vdjpoo17} and \eqref{eq:ponol218}]. The first term on the right-hand side of Eq.~\eqref{eq:vjp32gbntrkh22} represents the evolution under the environment-renormalized (and in general non-Hermitian) Hamiltonian $\op{H}'_S$. The second term is in the Lindblad double-commutator form \eqref{eq:lindbladc} and generates decoherence in the $\sigma_z$ eigenbasis at a rate given by $D$. The last two terms describe the decay of the two-level system. In the absence of tunneling ($\Delta_0=0$ and hence also $\zeta=0$), Eq.~\eqref{eq:vjp32gbntrkh22} reduces to the pure-decoherence Lindblad master equation~\eqref{eq:vjp32gbntrkh22max} discussed in Sec.~\ref{sec:two-simple-examples}.
\subsection{\label{sec:spin-envir-models}Spin-environment models}
A qubit linearly coupled to a collection of other qubits---also known as a \emph{spin--spin model}---is often a good model of a two-level system (for example, a superconducting qubit) that interacts strongly with a low-temperature environment \cite{Prokofev:2000:zz,Dube:2001:zz}. The model of a harmonic oscillator interacting with a spin environment may be relevant to the description of decoherence and dissipation in quantum-nanomechanical systems and micron-scale ion traps \cite{Schlosshauer:2008:os}. For details on the theory of spin-environment models, see Refs.~\cite{Dube:2001:zz,Stamp:1998:im,Prokofev:1995:ab,Prokofev:1993:aa}.
A basic version of a spin--spin model was studied by Zurek in his seminal paper of 1982 \cite{Zurek:1982:tv}. This model neglects the intrinsic dynamics of the system and environment, and the interaction Hamiltonian describes a bilinear coupling between the system and environment spins,
\begin{equation}\label{eq:hse-zurek}
\op{H} = \op{H}_\text{int} = \frac{1}{2} \op{\sigma}_z \otimes
\sum_{i=1}^N g_i \op{\sigma}^{(i)}_z \equiv
\frac{1}{2} \op{\sigma}_z \otimes \op{E}. \end{equation}
This Hamiltonian represents the environmental monitoring of the observable $\sigma_z$ and leads to decoherence in the $\{\ket{0},\ket{1}\}$ eigenbasis of $\op{\sigma}_z$. Specifically, one can show \cite{Zurek:1982:tv,Cucchietti:2005:om} that the decoherence rate increases exponentially with the number $N$ of environmental spins, and that for large $N$ and a broad class of distributions of the coupling coefficients $g_i$, the interference damping follows an approximately Gaussian time dependence $\propto \exp(- \Gamma^2 t^2)$, where the decay constant $\Gamma$ is determined by the initial state of the environment and the distribution of the couplings $g_i$.
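The origin of the Gaussian time dependence can be illustrated directly. For the interaction Hamiltonian~\eqref{eq:hse-zurek}, with the $i$th environment spin initially in the state $\alpha_i\ket{0}+\beta_i\ket{1}$, the off-diagonal element of the system's reduced density matrix is suppressed by the factor $r(t)=\prod_{i=1}^N \left[\cos(g_i t) - \ensuremath{i}\, z_i \sin(g_i t)\right]$, where $\hbar=1$, $z_i=\abs{\alpha_i}^2-\abs{\beta_i}^2$, and the overall sign of the imaginary part depends on which off-diagonal element is considered. The short Python sketch below evaluates $\abs{r(t)}$ for a hypothetical random choice of couplings and initial environment states (neither of which is specified in the text) and compares it with the short-time Gaussian estimate $\exp(-\Gamma^2 t^2)$ with $\Gamma^2 = \frac{1}{2}\sum_i g_i^2(1-z_i^2)$.

\begin{verbatim}
# Decoherence factor for the pure-dephasing spin--spin model of Eq. (hse-zurek), hbar = 1.
# Couplings g_i and initial environment states are hypothetical illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
N = 200                                    # number of environment spins
g = rng.uniform(0.5, 1.5, N)               # couplings g_i
z = np.cos(rng.uniform(0, np.pi, N))       # z_i = |alpha_i|^2 - |beta_i|^2 per spin

t = np.linspace(0.0, 0.4, 9)
r = np.prod(np.cos(np.outer(t, g)) - 1j * z * np.sin(np.outer(t, g)), axis=1)
gaussian = np.exp(-0.5 * np.sum(g**2 * (1 - z**2)) * t**2)   # short-time estimate

for ti, ri, gi in zip(t, np.abs(r), gaussian):
    print(f"t = {ti:.2f}   |r(t)| = {ri:.3e}   Gaussian estimate = {gi:.3e}")
\end{verbatim}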
If a tunneling term is added to the Hamiltonian (representing the intrinsic dynamics of the system), the Hamiltonian becomes
\begin{equation}\label{eq:jhdsgwuygfurwb}
H = \op{H}_S + \op{H}_\text{int} = \frac{1}{2}\hbar
\Delta_0 \sigma_x + \frac{1}{2} \sigma_z \otimes
\sum_{i=1}^N g_i
\sigma_z^{(i)}
\equiv \frac{1}{2}\hbar \Delta_0 \sigma_x + \frac{1}{2}
\sigma_z \otimes E. \end{equation}
This model can be solved exactly \cite{Dobrovitski:2003:az,Cucchietti:2005:om}. The particular preferred (pointer) states selected by the dynamics depend on the relative strengths of the self-Hamiltonian $\op{H}_S$ of the system and the interaction Hamiltonian $\op{H}_\text{int}$. In general, the preferred states are those that are most robust under the action of the \emph{total} Hamiltonian. In the quantum-measurement limit, where the interaction Hamiltonian dominates the evolution [this is the model described by Eq.~\eqref{eq:hse-zurek}], the emerging pointer states are indeed found to be close to the eigenstates of the interaction Hamiltonian (i.e., the eigenstates $\{\ket{0},\ket{1}\}$ of $\op{\sigma}_z$) \cite{Cucchietti:2005:om}, as also predicted by the commutativity criterion, Eq.~\eqref{eq:dhvvsdnbbfvs27}. In the quantum limit of decoherence, where the modes of the environment are slow and the self-Hamiltonian $\op{H}_S$ of the system dominates, the pointer states are found to be close to the eigenstates of $\op{H}_S$, i.e., the eigenstates $\ket{\pm}=\left(\ket{0}\pm\ket{1}\right)/\sqrt{2}$ of $\op{\sigma}_x$ \cite{Cucchietti:2005:om}.
In the weak-coupling limit, spin environments can be mapped onto oscillator environments \cite{Feynman:1963:jj,Caldeira:1993:bz}. Specifically, the reduced dynamics of a system weakly coupled to a spin environment can be described by the system coupled to an \emph{equivalent} oscillator environment with an explicitly temperature-dependent spectral density of the form
\begin{equation} \label{eq:vslkfvfgyiJA2} J_\text{eff}(\omega, T) = J(\omega) \tanh\left(\frac{\hbar\omega}{2k_B T}\right), \end{equation}
where $J(\omega)$ is the original spectral density of the spin environment. See Sec.~5.4.2 of Ref.~\cite{Schlosshauer:2007:un} for details and examples.
Many physical settings in which spin environments are the appropriate model are also those where low temperatures and strong system--environment interactions render the Born--Markov assumptions of weak coupling and negligible memory effects inapplicable. Therefore, one needs to look for non-Markovian solutions, which are typically difficult to calculate. In principle, techniques such as the instanton formalism \cite{Prokofev:2000:zz} allow for analytical calculations of relevant quantities such as spin expectation values. One challenging task is the tracing (averaging) over the degrees of freedom of a strongly coupled environment. Prokof'ev and Stamp \cite{Prokofev:2000:zz} have demonstrated that this task can be accomplished by considering four limiting cases of the general spin-environment model, averaging over the environment in each case, and then combining the four averages to represent the average for the general model; see Ref.~\cite{Prokofev:2000:zz} for details.
\section{\label{sec:decoh-errcorr}Decoherence avoidance and mitigation}
Combatting the detrimental effect of decoherence is of paramount importance whenever nonclassical quantum superposition states need to be generated and maintained, for example, in quantum information processing, quantum computing, and quantum technologies \cite{Dowling:2003:tv}. Accordingly, a number of methods have been developed to prevent quantum states from decohering in the first place (or, at least, to minimize their decoherence), and to undo (correct for) the effects of decoherence. In the terminology of quantum information processing where the effects of decoherence amount to processing errors, the first approach is often known as \emph{error avoidance}, whereas the second approach is referred to as \emph{error correction}. Here, we will discuss decoherence-free subspaces (Sec.~\ref{sec:dfs}) as an instance of an error-avoidance scheme, and also comment on techniques such as reservoir engineering and dynamical decoupling (Sec.~\ref{sec:reserv-engin-quant}). We will then give a brief description of quantum error correction (Sec.~\ref{sec:corr-decoh-induc}).
\subsection{\label{sec:dfs}Decoherence-free subspaces}
Recall that while the pointer states themselves are robust against decoherence, superpositions of such states will generally be rapidly decohered. By contrast, in a \emph{pointer subspace} \cite{Zurek:1982:tv} or \emph{decoherence-free subspace} (DFS) \cite{Palma:1996:yy,Lidar:1998:uu,Zanardi:1997:yy,Zanardi:1997:tv,Zanardi:1998:oo,Lidar:1999:fa,Bacon:2000:yy,Duan:1998:yb,Zanardi:2001:oo,Knill:2000:aa} (see Refs.~\cite{Lidar:2003:aa,Lidar:2014:pp} for reviews), \emph{every possible state in that subspace} will be robust to decoherence. This, obviously, is a much stronger statement than the existence of individual pointer states, and it is therefore not surprising that the existence of a DFS, especially a high-dimensional one for larger systems, is nontrivial and that the conditions required for such a DFS to be present tend to be correspondingly difficult to meet in practice.
\subsubsection{Condition for the existence of a decoherence-free subspace}
For a given interaction Hamiltonian $\op{H}_\text{int} = \sum_\alpha \op{S}_\alpha \otimes \op{E}_\alpha$, a DFS is spanned by a basis $\{\ket{s_i}\}$ of orthonormal pointer states with the added condition that the action of each system operator $\op{S}_\alpha$ in $\op{H}_\text{int}$ must be the same for each of the pointer-basis states $\ket{s_i}$. In other words, the result of the system--environment interaction applied to these pointer states must be trivial in the sense that it must not distinguish between these states. In this way, the existence of a DFS corresponds to the presence of a symmetry in the structure of the system--environment interaction (i.e., a \emph{dynamical symmetry}). Mathematically, this condition is implemented through strengthening the pointer-state condition of Eq.~\eqref{eq:OIbvsrhjkbv9}, by requiring that the pointer states are simultaneous \emph{degenerate} eigenstates of each $\op{S}_\alpha$,
\begin{equation}
\label{eq:OIbvsrhjkbvsfljvh9}
\op{S}_\alpha \ket{s_i} = \lambda^{(\alpha)} \ket{s_i} \qquad
\text{for all $\alpha$ and $i$}. \end{equation}
If this condition is fulfilled, then the evolution generated by the interaction Hamiltonian $\op{H}_\text{int} = \sum_\alpha \op{S}_\alpha \otimes \op{E}_\alpha$ for an arbitrary, initially pure state $\ket{\psi} = \sum_i c_i \ket{s_i}$ in the DFS is
\begin{equation}
\label{eq:OIbvsrhjkbv9zFFHGSVCxc}
\ensuremath{e}^{-\ensuremath{i} \op{H}_\text{int}t/\hbar} \ket{\psi}\ket{E_0} = \ket{\psi}\ensuremath{e}^{-\ensuremath{i}
\left( \sum_\alpha \lambda^{(\alpha)} \op{E}_\alpha \right)t/\hbar}
\ket{E_0} \equiv \ket{\psi} \ket{E_\psi(t)}. \end{equation}
This shows that the system does not get entangled with the environment and therefore remains decoherence-free. Of course, in general the self-Hamiltonian of the system will also contribute, in which case one needs to additionally ensure that this Hamiltonian does not take the state outside the DFS, which would then make it vulnerable to decoherence. The concept of a DFS can be generalized and extended to the formalism of \emph{noiseless subsystems} or \emph{noiseless quantum codes} \cite{Knill:2000:aa,Kempe:2001:oo,Lidar:2003:aa,Choi:2006:tt,Beny:2007:pp,BlumeKohout:2010:pp, Lidar:2014:pp}; see Ref.~\cite{Lidar:2014:pp} for a review.
\subsubsection{Collective versus independent decoherence}
The condition \eqref{eq:OIbvsrhjkbvsfljvh9} for the basis states of a DFS is strong and therefore often difficult to fulfill in practice. To illustrate this point, let us consider a system consisting of $N$ two-level systems (qubits) interacting with an environment of harmonic oscillators (this is the spin--boson model discussed in Sec.~\ref{sec:spin-boson-models}). The interaction Hamiltonian is
\begin{equation}
\label{eq:dadrestinpeacdh11}
\op{H}_\text{int} = \sum_{i=1}^N \sigma_z^{(i)} \otimes \sum_j
\left( g_{ij}a_j^\dagger + g_{ij}^* a_j \right) \equiv
\sum_{i=1}^N \sigma_z^{(i)} \otimes E_i, \end{equation}
where the $g_{ij}$ are coupling coefficients, and $a_j^\dagger$ and $a_j$ are the raising and lowering operators for the harmonic oscillators of the environment.
Recall that the existence of a DFS is related to the presence of a dynamical symmetry in the interaction Hamiltonian. A drastic way of creating such a symmetry is to require that each qubit operator $\op{\sigma}_z^{(i)}$ couples to the environment in exactly the same way. This means that the interaction with the environment is invariant under an exchange of any two qubits, and therefore the environment cannot distinguish between the qubits. This limiting case is commonly referred to as \emph{collective decoherence}, and in the example of the spin--boson interaction \eqref{eq:dadrestinpeacdh11} corresponds to dropping the dependence of the coupling coefficients $g_{ij}$ on the index $i$ labeling the particular qubit. Then the interaction Hamiltonian \eqref{eq:dadrestinpeacdh11} becomes
\begin{equation} \label{eq:dadrestinpeacfxndh22} \op{H}_\text{int} = \left( \sum_{i} \sigma_z^{(i)} \right) \otimes E \equiv \op{S}_z \otimes E. \end{equation}
One sees that this Hamiltonian represents an interaction between the collective spin operator and a single environment operator.
The condition \eqref{eq:OIbvsrhjkbvsfljvh9} for basis states of a DFS then tells us that the DFS will be spanned by states that are simultaneous degenerate eigenstates of $\op{S}_z=\sum_i \op{\sigma}_z^{(i)}$. Consider a computational-basis state $\ket{m_1} \otimes \ket{m_2} \otimes \cdots \otimes \ket{m_N}$, where we let $m_i=0$ represent the eigenvalue $+1$ of $\op{\sigma}_z$, and $m_i=1$ the eigenvalue $-1$. Any such computational-basis state will be an eigenstate of $\op{S}_z$, with integer eigenvalues ranging from $M=-N$ (for the state $\ket{11\cdots 1}$) to $M=+N$ (for the state $\ket{00\cdots 0}$). Since we would like to span a subspace from a set of degenerate eigenstates of $\op{S}_z$, and would like this subspace to be as large as possible so we can make the largest possible number of quantum states immune to decoherence, we need to look for the greatest number of computational-basis states with the same eigenvalue $M$ of $\op{S}_z$. This happens for $M=0$, corresponding to computational-basis states for which half of the qubits are in the state $\ket{0}$ and half in the state $\ket{1}$. For a system of $N$ qubits (with $N$ even), there are $\binom{N}{N/2}$ such computational-basis states. These states will then span a DFS. For example, for four qubits ($N=4$), represented by a Hilbert space of dimension $2^4=16$, we have $\binom{4}{2}=6$ computational-basis states with the same eigenvalue $M=0$, namely, $\ket{0011}, \ket{0101}, \ket{0110}, \ket{1001}, \ket{1010}$, and $\ket{1100}$. These states span a six-dimensional DFS. For large $N$, Stirling's formula for approximating the binomial coefficient gives
\begin{equation}
\log_2 \binom{N}{N/2} \approx N - \frac{1}{2} \log_2 (\pi N/2) \,\,
\xrightarrow{N \gg 1} \,\, N, \end{equation}
which shows that the dimension of the DFS approaches the dimension of the Hilbert space of the system. Thus, in this limiting case of perfectly collective decoherence of a very large qubit system, essentially every state in the system's Hilbert space will be immune to decoherence.
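The counting just described is easy to reproduce numerically. The following short Python sketch (illustrative only) lists the eigenvalue $M$ of $\op{S}_z$ for every computational-basis state of $N$ qubits, counts the degenerate $M=0$ states that span the DFS, and compares the result, together with the Stirling estimate used above, to the binomial coefficient $\binom{N}{N/2}$.

\begin{verbatim}
# DFS counting for collective dephasing: degenerate M = 0 eigenstates of S_z = sum_i sigma_z^(i)
from math import comb, log2, pi

for N in (4, 8, 12):
    # eigenvalue of S_z on a computational-basis state = (# of 0s) - (# of 1s)
    M_values = [N - 2 * bin(s).count("1") for s in range(2 ** N)]
    dim_dfs = M_values.count(0)
    print(f"N = {N:2d}: dim DFS = {dim_dfs}, C(N, N/2) = {comb(N, N // 2)}")

# Stirling estimate log2 C(N, N/2) ~ N - (1/2) log2(pi N / 2) for larger N
for N in (10, 100, 1000):
    print(f"N = {N:4d}: log2 C(N, N/2) = {log2(comb(N, N // 2)):9.3f}   "
          f"N - 0.5*log2(pi*N/2) = {N - 0.5 * log2(pi * N / 2):9.3f}")
\end{verbatim}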
Using the spin--boson example just discussed, we can also see that no DFS exists when no two qubits couple to the environment in the same way, that is, if the interaction Hamiltonian does not exhibit any dynamical symmetry. In this limiting case, known as \emph{independent decoherence}, the couplings $g_{ij}$ appearing in the interaction Hamiltonian~\eqref{eq:dadrestinpeacdh11} will be different for each qubit, and thus the Hamiltonian retains its form \eqref{eq:dadrestinpeacdh11}, $\op{H}_\text{int} = \sum_{i=1}^N \sigma_z^{(i)} \otimes E_i$, with the environment operators $E_i$ differing between any two qubits. The usual DFS condition \eqref{eq:OIbvsrhjkbvsfljvh9} would then require us to find an orthonormal set of simultaneous, degenerate $N$-qubit eigenstates of \emph{each} single-qubit operator $\sigma_z^{(i)}$, $i=1,\hdots,N$. The only computational-basis states that fulfill this condition are $\ket{00\cdots 0}$ and $\ket{11\cdots 1}$, albeit with different eigenvalues, and therefore no DFS can exist \cite{Lidar:1998:uu}.
In practice, it would be challenging to find a system in which each qubit couples to the environment in exactly the same way. Fortunately, one can show that a DFS is robust to small deviations from perfect dynamical symmetry. Specifically, one may investigate the consequences of perturbing a symmetric interaction Hamiltonian, such as the Hamiltonian given by Eq.~\eqref{eq:dadrestinpeacfxndh22}, by adding small additional coupling terms $\widetilde{g}_{ij}$ that break the symmetry by distinguishing between the qubits. Such perturbations will introduce a tunable dependence of the environmental interaction on the qubit index $i$. This poses the question of the sensitivity of a DFS to such perturbations, and how the sensitivity scales with the size of the system. Using the measure of \emph{dynamical fidelity} \cite{Lidar:1998:uu,Bacon:1999:aq}, which quantifies how the evolution of a given initial state differs in the presence of additional system--environment couplings, it has been shown \cite{Lidar:1998:uu,Bacon:1999:aq,Kattemolle:2018:ii} that, to first order in the strength of the symmetry-breaking perturbations, a DFS is robust to such perturbations. The influence of perturbations resulting in noncollective decoherence effects has also been studied experimentally; see the photonic experiment reported in Ref.~\cite{Altepeter:2004:ll} for an example. We note that strategies for quantum error correction \cite{Steane:1996:cd,Shor:1995:rx,Steane:2001:dx,Knill:2002:rx,Nielsen:2000:tt,Lidar:2013:pp} may be used to combat decoherence arising from subspaces that are not perfectly decoherence-free \cite{Lidar:1999:fa}.
\subsubsection{Experimental realizations of decoherence-free subspaces}
Starting with the proof-of-principle demonstration for two-photon states by Kwiat et al.\ \cite{Kwiat:2000:kv}, several experiments have realized DFSs. Among the first, in 2001 Kielpinski et al.\ \cite{Kielpinski:2001:uu} reported the creation of a two-dimensional (i.e., one-bit) DFS using a pair of trapped, interacting $^9$Be$^+$ ions, with the qubit states formed by two hyperfine levels and a decohering environment simulated by fluctuations of the laser intensity, and Viola et al.\ \cite{Viola:2001:ra} described generation of a one-bit DFS using three NMR qubits. Roos et al.\ \cite{Roos:204:pp} created a DFS using two trapped $^{40}$Ca$^+$ ions subject to a dephasing environment and achieved coherence times around \unit[1]{s}. Two-ion DFSs were also reported by H{\"a}ffner et al.\ \cite{Haffner:2005:zz} and Langer et al.\ \cite{Langer:2005:uu}. Coherence times of up to \unit[34]{s} were found, and long lifetimes of up to \unit[20]{s} were observed for the entanglement between ion pairs. DFSs for a photon pair were further investigated experimentally by Altepeter et al.\ \cite{Altepeter:2004:ll}, who also studied the sensitivity of a DFS to perturbations that introduce noncollective couplings to the environment.
Mohseni et al.\ \cite{Mohseni:2003:pp} experimentally demonstrated how the performance of a photonic implementation of the Deutsch--Jozsa quantum algorithm \cite{Deutsch:1989:mm} can be substantially enhanced through the use of a DFS. DFSs have also been experimentally realized in quantum cryptography, for example, in the fault-tolerant quantum key distribution protocol proposed and implemented by Zhang et al.\ \cite{Zhang:2006:zz}. The usefulness of DFSs is not limited to quantum information processing, either. For instance, a DFS has been successfully realized to protect a neutron interferometer from unwanted noise arising from low-frequency mechanical vibrations \cite{Pushin:2011:zz}.
\subsection{\label{sec:reserv-engin-quant}Reservoir engineering and dynamical decoupling}
Even approximate dynamical symmetries will often be absent in multi-qubit systems, and therefore one approach consists of actively creating such symmetries through a strategy known as \emph{environment engineering} or \emph{reservoir engineering}. Dalvit, Dziarmaga, and Zurek \cite{Dalvit:2000:bb} have shown how this idea, in principle, could lead to a DFS that is spanned by superposition states of Bose--Einstein condensates. In the context of ion traps, theoretical \cite{Poyatos:1996:um} and experimental \cite{Myatt:2000:yy,Turchette:2000:aa,Carvalho:2001:ua} studies have investigated how different DFSs for the trapped ion can be created through a laser-induced manipulation of the system--environment coupling.
Beyond DFSs but related in spirit, engineering of the couplings between system and environment has also been used to drive the system into particular quantum superposition states (``quantum state engineering''). Experimental implementations of this approach have been reported, for example, with trapped ions \cite{Barreiro:2011:oo,Lin:2013:pp,Kienzler:2015:oo}, superconducting circuits \cite{Shankar:2013:pp}, and atomic ensembles \cite{Krauter:2011:ll}. Reservoir engineering even opens up the possibility of an unorthodox implementation of universal quantum computation \cite{Verstraete:2009:ii}, in which the interaction with the environment is not an adversary but rather a resource for quantum information processing. It has also been shown \cite{Braun:2002:aa, Benatti:2003:aa,Kim:2002:oo,Jakobczyk:2002:oo} that for two quantum systems that do not interact with each other but are coupled to a common environment, the decoherence and dissipation produced by the environment can sometimes lead to the creation of entanglement between the two systems. This is a noteworthy result, given that the action of decoherence and dissipation is usually considered detrimental to the presence of entanglement between systems \cite{Zyczkowski:2001:ii,Lee:2004:uu,Barreiro:2010:aa}. Such dynamical entanglement generation through environmental interactions was demonstrated, for example, in an exactly solvable model of two qubits interacting with a heat bath of harmonic oscillators \cite{Braun:2002:aa}, as well as arising from Markovian dissipative dynamics \cite{Benatti:2003:aa}; see also Refs.~\cite{Kim:2002:oo,Jakobczyk:2002:oo}.
Coherence of quantum states may be maintained also through a sequence of rapidly applied control pulses (or projective measurements) that average the coupling between system and environment to zero, an approach known as \emph{dynamical decoupling} or \emph{quantum bang-bang control} \cite{Viola:1998:uu,Viola:1999:zp,Zanardi:1999:oo,Viola:2000:pp,Wu:2002:aa,Wu:2002:bb}. In this way, a dynamically decoupled subspace is created, and coherence may be maintained to a large degree as long as the rate of the control pulses is higher than the rate at which entanglement with the environment is being produced. Dynamical decoupling can also be used to enhance the fidelity of quantum gates by several orders of magnitude \cite{Viola:1999:pp,West:2010:oo}.
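As a toy illustration of this pulse-rate condition, the following Python sketch simulates a single qubit that dephases through a $\op{\sigma}_z$-type coupling to a handful of environment spins possessing their own internal dynamics, and interleaves the evolution with ideal, instantaneous $\pi$ pulses about the $x$ axis. All parameter values are arbitrary illustrative choices rather than values taken from any of the works cited above; for typical draws one finds that the qubit coherence, which is degraded under free evolution, remains close to its initial value of $\frac{1}{2}$ once the pulse spacing is made short compared with the system and environment timescales.

\begin{verbatim}
# Toy dynamical-decoupling simulation: qubit dephasing via sigma_z coupling to NE spins,
# interrupted by instantaneous pi_x pulses (hbar = 1; all parameters are placeholders).
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def embed(op, k, n):
    """Embed a single-qubit operator at site k of an n-site register (site 0 = system)."""
    return reduce(np.kron, [op if j == k else I2 for j in range(n)])

rng = np.random.default_rng(3)
NE = 6                                   # number of environment spins (toy size)
n = NE + 1
g = rng.uniform(0.5, 1.5, NE)            # system-environment couplings
h = rng.uniform(0.5, 1.5, NE)            # environment transverse fields (internal dynamics)

# H = sigma_z^(S) (x) sum_i g_i sigma_z^(i)  +  sum_i h_i sigma_x^(i)
H = sum(g[i] * embed(Z, 0, n) @ embed(Z, i + 1, n) for i in range(NE)) \
  + sum(h[i] * embed(X, i + 1, n) for i in range(NE))

vals, vecs = np.linalg.eigh(H)
def U(t):                                # exact propagator exp(-i H t)
    return (vecs * np.exp(-1j * vals * t)) @ vecs.conj().T

plus = np.array([1.0, 1.0]) / np.sqrt(2)
env = reduce(np.kron, [np.array([1.0, 0.0])] * NE)
psi0 = np.kron(plus, env)                # system in |+>, environment in |00...0>
Xsys = embed(X, 0, n)

def coherence(psi):                      # |<0|rho_S|1>| of the reduced density matrix
    A = psi.reshape(2, -1)
    return abs(A[0] @ A[1].conj())       # equals 1/2 for the initial product state

T = 4.0
print("free evolution:  ", coherence(U(T) @ psi0))
for npulse in (4, 16, 64):               # ideal pi_x pulses, equally spaced over [0, T]
    step = Xsys @ U(T / npulse)
    psi = psi0.copy()
    for _ in range(npulse):
        psi = step @ psi
    print(f"{npulse:3d} pulses:      ", coherence(psi))
\end{verbatim}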
\subsection{\label{sec:corr-decoh-induc}Quantum error correction}
Quantum error correction (see, e.g., Refs.~\cite{Steane:2001:dx,Knill:2002:rx,Nielsen:2000:tt,Gaitan:2008:uu,Lidar:2013:pp} for reviews) is the technique of undoing the change of the quantum state of a system induced by decoherence. The idea of quantum error correction, going back to Steane \cite{Steane:1996:cd} and Shor \cite{Shor:1995:rx}, is to couple the system to an ancilla in such a way that the original, pre-decoherence state can be reconstructed.
\subsubsection{Basic concepts}
To see how such state reconstruction is made possible, we observe that any changes to the state of a qubit resulting from its interaction with an environment $E$ can be reduced to the combined action of three different, discrete transformations. Specifically, for a single qubit in an initially pure state $\ket{\psi}$, the evolution of the combined system--environment state may always be written in the form \cite{Steane:2001:dx,Knill:2002:rx,Nielsen:2000:tt,Schlosshauer:2007:un}
\begin{equation} \label{eq:qcerrc} \ket{\psi} \ket{e_r} \, \longrightarrow \, I
\ket{\psi} \ket{e_I} + \sum_{s= x,y,z }
\left( \sigma_s \ket{\psi} \right)
\ket{e_s}, \end{equation}
where $\op{I}$ is the identity operator, the Pauli operators $\sigma_s$ act on the Hilbert space of $S$, $\ket{e_r}$ is the initial state of the environment, and $\ket{e_I}$ and $\{ \ket{e_s} \}$ denote states of the environment that need not be orthogonal or normalized. Equation~\eqref{eq:qcerrc} is simply a consequence of the fact that the Pauli operators, together with the identity, form a complete set of operators in the Hilbert space of the qubit.
The effects of $\sigma_x$ and $\sigma_z$ on the qubit state are referred to as a \emph{bit-flip error} and \emph{phase-flip error}, respectively (since $\op{\sigma}_y= \ensuremath{i}\op{\sigma}_x\op{\sigma}_z$, the operator $\op{\sigma}_y$ represents the simultaneous presence of a bit-flip error and phase-flip error). State changes resulting from environmental entanglement alone (i.e., decoherence) are fully captured by phase-flip errors. To see this explicitly, consider a qubit in an arbitrary state $\ket{\psi}=a\ket{0}+b\ket{1}$ undergoing an entangling interaction with an environment,
\begin{equation} \ket{\psi}\ket{e_r} \,\longrightarrow\, a\ket{0}\ket{e_0}+b\ket{1}\ket{e_1}. \end{equation}
We may rewrite the right-hand side as
\begin{align} a\ket{0}\ket{e_0}+b\ket{1}\ket{e_1} &= \left( a\ket{0}+b\ket{1} \right) \frac{1}{2} \left( \ket{e_0}+\ket{e_1}\right) + \left( a\ket{0}-b\ket{1} \right) \frac{1}{2} \left( \ket{e_0}-\ket{e_1}\right) \notag \\ &= \left(\op{I} \ket{\psi} \right) \frac{1}{2} \left( \ket{e_0}+\ket{e_1}\right) + \left(\op{\sigma}_z \ket{\psi} \right) \frac{1}{2} \left( \ket{e_0}-\ket{e_1}\right). \end{align}
Making the identifications $\ket{e_I} \equiv \frac{1}{2} \left( \ket{e_0}+\ket{e_1}\right)$ and $\ket{e_z} \equiv \frac{1}{2} \left( \ket{e_0}-\ket{e_1}\right)$, we recover Eq.~\eqref{eq:qcerrc} with only the $\op{\sigma}_z$ (phase-flip) term present,
\begin{equation} \label{eq:qcerrcaa} \ket{\psi}
\ket{e_r} \, \longrightarrow \, I
\ket{\psi} \ket{e_I} +
\left( \sigma_z \ket{\psi} \right)
\ket{e_z}. \end{equation}
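The rewriting leading to Eq.~\eqref{eq:qcerrcaa} is a simple identity that can be checked numerically; the short sketch below does so for a random qubit state and random (hypothetical) environment states of arbitrary dimension.

\begin{verbatim}
# Check of the rewriting in Eq. (qcerrcaa): a|0>|e_0> + b|1>|e_1> equals
# (I|psi>)|e_I> + (sigma_z|psi>)|e_z> with |e_I> = (|e_0>+|e_1>)/2, |e_z> = (|e_0>-|e_1>)/2.
import numpy as np

rng = np.random.default_rng(0)
dE = 5                                    # placeholder environment dimension
def rand_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

psi = rand_state(2)                       # qubit state a|0> + b|1>
e0, e1 = rand_state(dE), rand_state(dE)   # environment states after the interaction

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
sz = np.diag([1.0, -1.0])

lhs = psi[0] * np.kron(ket0, e0) + psi[1] * np.kron(ket1, e1)
rhs = np.kron(psi, (e0 + e1) / 2) + np.kron(sz @ psi, (e0 - e1) / 2)
print(np.allclose(lhs, rhs))              # -> True
\end{verbatim}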
For $N$ qubits, this can be shown to generalize to
\begin{equation}\label{eq:qcerrcN}
\ket{\psi} \ket{e_r} \, \longrightarrow\,
\sum_{i}
\left( E_i \ket{\psi} \right) \ket{e_i}, \end{equation}
where the so-called \emph{error operators} $E_i$ represent tensor products of $N$ operators involving identity and $\sigma_z$ operators; the number of $\sigma_z$ operators appearing in a given operator $E_i$ is referred to as the \emph{weight} of the error operator. In many cases of interest, only a limited number $K<N$ of qubits become entangled with the environment (``partial decoherence''), and hence only the $2^K$ different error operators up to weight $K$ will need to be considered. A further dramatic reduction in the number of error operators to be taken into account occurs in the case of independent qubit decoherence, where each qubit couples independently to an environment (see Sec.~\ref{sec:dfs}). In this case, only error operators of weight equal to one will need to be considered (corresponding to independent phase-flip errors), and therefore no more than $N$ such operators in total will be needed to describe the result of the decoherence process.
To bring about the actual correction of the quantum error imparted by decoherence, we start from the post-decoherence state $\sum_{i} \left( E_i \ket{\psi} \right) \ket{e_i}$ of Eq.~\eqref{eq:qcerrcN} and couple an ancilla to the qubit system in such a way that the composite system evolves as
\begin{equation}\label{eq:errfsyn}
\ket{a_0} \left[ \sum_{i} \left( E_i \ket{\psi} \right)
\ket{e_i} \right] \, \longrightarrow \,
\sum_{i} \ket{a_i} \left( E_i \ket{\psi} \right)
\ket{e_i}. \end{equation}
Here $\ket{a_0}$ is the initial state of the ancilla, and $\ket{a_i}$ are ancilla states that we shall assume to be (approximately) orthogonal so that they can be distinguished by a subsequent measurement. This measurement of the ancilla, represented by an observable $\op{O}_A = \sum_i a_i \ketbra{a_i}{a_i}$ with all eigenvalues $a_i$ being distinct, will project the system--ancilla combination onto one of the states $\ket{a_k} \left( E_k \ket{\psi} \right) \ket{e_k}$, with measurement outcome $a_k$. We have therefore isolated a single error operator $E_k$, and knowledge of the outcome $a_k$ provides the necessary information for applying a countertransformation $E_k^{-1}=E_k^\dagger$ to the system. This transformation changes the state from $\ket{a_k} \left( E_k \ket{\psi} \right) \ket{e_k}$ to $\ket{a_k} \ket{\psi} \ket{e_k}$, thereby restoring the original pre-decoherence state of the system. Note that no information about the state of the system is necessary to correct the error---as must be, for otherwise any such information gain would result in an uncontrollable disturbance of the system.
\subsubsection{Challenges}
We have sketched here only the bare essentials of quantum error correction, and in a highly simplified form. In practice, a number of challenges and complications arise. Let us mention just three. First, and perhaps most importantly, it is usually impossible to realize a system--ancilla evolution of the form \eqref{eq:errfsyn} such that all possible error operators are perfectly distinguishable, by measurement, via corresponding orthogonal ancilla states. We have to settle for a limited scope, usually one in which error operators only up to a certain weight can be distinguished, and one in which the error correction only works for a subspace of the qubit's Hilbert space (referred to as the \emph{code space}). In this endeavor, an important strategy for realizing an error-correcting code is the \emph{redundant encoding} of the qubit state in multiple physical qubits through a successive application of \textsc{cnot} gates,
\begin{equation}\label{eq:snba}
\left(a\ket{0} + b\ket{1}\right)\ket{0 0 \cdots
0 } \,\longrightarrow\, a \ket{0 0 0 \cdots
0 } + b \ket{1 1 1 \cdots 1 }. \end{equation}
For example, to correct phase-flip errors of a single qubit without needing to know or restrict the state of the qubit, redundant encoding of the qubit state in three qubits is required; this is the so-called \emph{three-bit code} for phase errors (see, e.g., Sec.~7.4.5 of Ref.~\cite{Schlosshauer:2007:un} for details). Incidentally, the three-bit code was the basis of the first experimental demonstration of quantum error correction, reported in 1998 \cite{Cory:1998:uu}.
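A minimal numerical sketch of the three-bit code for phase errors may help make the above procedure concrete. The encoded states used below, $a\ket{+++}+b\ket{---}$, follow from the redundant encoding of Eq.~\eqref{eq:snba} by applying a Hadamard transformation to each qubit, and the syndrome is read out from the standard stabilizer operators $\op{\sigma}_x\otimes\op{\sigma}_x\otimes I$ and $I\otimes\op{\sigma}_x\otimes\op{\sigma}_x$; the particular random state and error location are placeholders chosen for the test.

\begin{verbatim}
# Three-qubit phase-flip code: encode, inject one sigma_z error, read the syndrome, correct.
import numpy as np
from functools import reduce

I2, X, Z = np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]]), np.diag([1.0, -1.0])
plus, minus = np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)

def kron(*ops):
    return reduce(np.kron, ops)

def op_on(op, k):                         # single-qubit operator acting on qubit k of three
    ops = [I2, I2, I2]
    ops[k] = op
    return kron(*ops)

rng = np.random.default_rng(2)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(a)**2 + abs(b)**2)
a, b = a / norm, b / norm                 # arbitrary (unknown) qubit state a|0> + b|1>

# Encode: a|0> + b|1>  ->  a|+++> + b|--->  (Eq. (snba) followed by a Hadamard on each qubit)
encoded = a * kron(plus, plus, plus) + b * kron(minus, minus, minus)

# Decoherence introduces a phase-flip error (sigma_z) on one randomly chosen qubit
k_err = int(rng.integers(3))
corrupted = op_on(Z, k_err) @ encoded

# Read out the stabilizers X(x)X(x)I and I(x)X(x)X; the corrupted state is an eigenstate of
# both, so the expectation values below are the sharp syndrome outcomes (+1 or -1)
s1 = int(round(np.real(np.vdot(corrupted, op_on(X, 0) @ op_on(X, 1) @ corrupted))))
s2 = int(round(np.real(np.vdot(corrupted, op_on(X, 1) @ op_on(X, 2) @ corrupted))))
k_fix = {(-1, +1): 0, (-1, -1): 1, (+1, -1): 2}[(s1, s2)]   # syndrome -> faulty qubit

recovered = op_on(Z, k_fix) @ corrupted   # apply the countertransformation
print("faulty qubit identified:", k_fix == k_err)
print("state recovered:        ", np.allclose(recovered, encoded))
\end{verbatim}

Note that, as emphasized above, the correction never uses any information about the encoded amplitudes $a$ and $b$; only the syndrome is measured.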
A second challenge in realizing quantum error correction, especially for larger numbers of qubits, is the implementation of the final countertransformations restoring the original quantum state. A third challenge comes from the fact that adding an ancilla increases the effective size of the system, thereby making it potentially more prone to decoherence (recall that the rate of decoherence typically scales exponentially with system size).
Undoubtedly, quantum error correction will be an integral part of any viable quantum computer. It may be combined (or \emph{concatenated} \cite{Lidar:1999:fa}) with decoherence-free subspaces to achieve universal fault-tolerant quantum computation \cite{Bacon:2000:yy,Lidar:2001:oo}.
\section{\label{sec:exper-observ-decoh}Experimental studies of decoherence}
Decoherence happens all around us, and in this sense its consequences are readily observed. But what we would like to be able to do is experimentally study the \emph{gradual} and \emph{controlled} action of decoherence, preferably of superpositions of mesoscopically or macroscopically distinguishable states. Such experiments have many important applications and implications. They demonstrate the possibility of generating nonclassical superposition quantum states for mesoscopic and macroscopic systems, and they show that the quantum--classical boundary can be shifted by changing the relevant experimental parameters. They are useful for assessing the predictions of decoherence models, and for designing quantum devices---for example, those needed for quantum information processing---that are good at evading the detrimental influence of the environment. They can also be used to search for deviations from standard unitary quantum mechanics.
Realizing an experiment capable of measuring the progressive decoherence of a quantum state requires meeting several challenges. The first is the task of preparing a suitable quantum superposition state (see Ref.~\cite{Arndt:2014:oo} for an overview of the state of the art in generating superpositions of mesoscopically and macroscopically distinguishable states). Second, decoherence of this superposition must be sufficiently slow for its gradual action to be observed. Third, one must be able to measure the decoherence introduced over time without imparting a significant amount of additional, unwanted decoherence. Ideally, one would also like to have sufficient control over the environment, so that one can tune the strength and form of its interaction with the system.
In what follows, we shall focus on four experimental areas that have played a key role in experimental studies of decoherence: atom--photon interactions in a cavity (Sec.~\ref{sec:atoms-cavity}), interferometry with mesoscopic molecules (Sec.~\ref{sec:matt-wave-interf}), superconducting systems such as SQUIDs and Cooper-pair boxes (Sec.~\ref{sec:superc-syst}), and ion traps (Sec.~\ref{sec:trapped}). Section~\ref{sec:other} briefly lists a few other experimental areas in which decoherence has been studied. Finally, in Sec.~\ref{sec:exper-tests-quant}, we comment on the use of experimental investigations of decoherence for testing quantum mechanics.
\subsection{\label{sec:atoms-cavity}Photon states in a cavity}
In decoherence experiments of the cavity-QED type \cite{Raimond:2001:aa,Haroche:2006:hh}, an atom interacts with a radiation field in a cavity in such a way that information about the atomic state is imprinted on the state of the field, resulting in an entangled atom--field state. Atom and field are then disentangled through a measurement of the atomic state, resulting in a superposition of two coherent field states whose decoherence is monitored over time. In 1996 Brune et al.\ generated a superposition of radiation fields with classically distinguishable phases involving several photons and observed the controlled decoherence of this state \cite{Brune:1996:om,Raimond:2001:aa,Kaiser:2001:tm} (see Refs.~\cite{Raimond:2001:aa,Haroche:2006:hh} for reviews).
\begin{figure}\label{fig:brune-setup}
\end{figure}
In this experiment (see Fig.~\ref{fig:brune-setup}), a rubidium atom is prepared in a superposition of energy eigenstates $\ket{g}$ (the ``lower'' state) and $\ket{e}$ (the ``upper'' state) corresponding to two circular Rydberg states. This preparation is done by applying a $\pi/2$ pulse in a microwave cavity $R_1$, at a frequency $\nu$ that is very close to the resonant frequency $\nu_{ge}$ of the atomic transition between the two Rydberg levels $g$ and $e$. The atom enters a cavity $C$ made of highly reflecting superconducting mirrors, with a long damping time $T_r$; more recent experiments have realized cavities with damping times in excess of a tenth of a second \cite{Kuhr:2007:aa,Deleglise:2008:oo}. The cavity contains a radiation field that consists of a few photons and is described by a coherent state $\ket{\alpha}$,
\begin{equation} \ket{\alpha} = \ensuremath{e}^{- \abs{\alpha}^2 /2}\sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}}\ket{n}, \end{equation}
where $\alpha$ is a complex number. This state may be visualized as a vector in phase space whose squared length $\abs{\alpha}^2$ is equal to the mean number $\bar{n}$ of photons.
The interaction between the atom and field inside the cavity is tuned such that there is no energy transfer. The atom effectively acts as a transparent dielectric for the field, by imposing a state-dependent dispersive phase shift on the field. If the atom is in the state $\ket{e}$, then the coherent field state $\ket{\alpha}$ experiences a phase shift $\chi$ such that $\ket{\alpha}$ is transformed to $\ket{\ensuremath{e}^{i\chi} \alpha}$; if the atom enters in the state $\ket{g}$ instead, the phase shift is $-\chi$ and $\ket{\alpha}$ is transformed to $\ket{\ensuremath{e}^{-i\chi} \alpha}$. Thus, for an atom prepared in a coherent superposition $\frac{1}{\sqrt{2}}\left( \ket{g} + \ket{e} \right)$ of $\ket{g}$ and $\ket{e}$, the entangling evolution is
\begin{equation}\label{eq:cat-brune}
\frac{1}{\sqrt{2}}\left( \ket{g} + \ket{e} \right) \ket{\alpha} \,\longrightarrow\,
\frac{1}{\sqrt{2}} \left( \ket{g} \ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}} + \ket{e} \ket{\alpha
\ensuremath{e}^{\ensuremath{i} \chi}} \right). \end{equation}
The atom then passes through an additional microwave cavity $R_2$, which applies another $\pi/2$ pulse at the same frequency $\nu$ as cavity $R_1$. The pulse transforms the atomic states according to $\ket{g} \,\longrightarrow\,\frac{1}{\sqrt{2}} \left( \ket{g} - \ket{e} \right)$ and $\ket{e} \,\longrightarrow\, \frac{1}{\sqrt{2}} \left( \ket{g} + \ket{e} \right)$, resulting in the combined atom--field state
\begin{align}\label{eq:cat-brune4} \ket{\Psi_\text{atom+field}} &= \frac{1}{2} \left( \ket{g} \ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}} - \ket{e} \ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}} +
\ket{g} \ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}} + \ket{e} \ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}} \right) \notag \\ &= \frac{1}{2} \left( \ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}} + \ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}} \right) \ket{g} + \frac{1}{2} \left(- \ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}} + \ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}} \right) \ket{e}. \end{align}
Finally, the energy of the atom is measured, collapsing the state \eqref{eq:cat-brune4} onto either one of the energy eigenstates $\ket{g}$ and $\ket{e}$. This measurement destroys the entanglement between atom and photon field, and the field is left in a superposition of the coherent-field states $\ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}$ and $\ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}}$ whose relative phase depends on the outcome of the measurement. If the outcome is the ground state $g$, the field state will be the ``even'' state
\begin{equation}\label{eq:cat-bhvlhvlrune} \ket{+} = \frac{1}{\sqrt{2}} \left( \ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}} + \ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}} \right). \end{equation}
If the outcome is the excited state $e$, the field state will be the ``odd'' state
\begin{equation}\label{eq:cat-baahvlhvlrune} \ket{-} = \frac{1}{\sqrt{2}} \left( \ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}} - \ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}} \right). \end{equation}
(The terminology ``even'' and ``odd'' is motivated by the observation that for a phase shift $\chi$ equal to $\pi/2$, these states contain, respectively, only even and odd photon numbers \cite{Deleglise:2008:oo}.)
To quantify the ``catness'' of the superpositions $\ket{\pm} =\frac{1}{\sqrt{2}} \left(\ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}\pm \ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}} \right)$---i.e., to measure the degree to which the components $\ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}$ and $\ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}}$ represent mesoscopically or macroscopically distinguishable states---we consider the squared magnitude of the overlap between $\ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}$ and $\ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}}$, $\abs{\braket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}}}^2$. For two general coherent states $\ket{\alpha}$ and $\ket{\beta}$, we have
\begin{equation}\label{eq:hsvg} \abs{\braket{\alpha}{\beta}}^2 = \ensuremath{e}^{- \abs{\alpha-\beta}^2}, \end{equation}
and therefore
\begin{equation}\label{eq:hsvg11} \abs{\braket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}}}^2 = \ensuremath{e}^{- 2\abs{\alpha}^2(1-\cos2\chi)} = \ensuremath{e}^{- 4\abs{\alpha}^2\sin^2\chi}. \end{equation}
We may therefore quantify the ``catness'' (or ``size'' of the cat-like state) by introducing a parameter $D^2$ equal to the argument of the exponential in Eq.~\eqref{eq:hsvg11} \cite{Brune:1996:om},
\begin{equation}\label{eq:hsADvg11} D^2 = 4 \bar{n}\sin^2\chi, \end{equation}
where we have used that $\bar{n}=\abs{\alpha}^2$. This also represents the squared distance between the (direct) peaks of the density matrices representing the superpositions $\ket{\pm} =\frac{1}{\sqrt{2}} \left(\ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}\pm \ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}} \right)$ of coherent states \cite{Deleglise:2008:oo}. We see that the overlap depends both on the mean photon number $\bar{n}=\abs{\alpha}^2$ and on the phase difference $\chi$, and decreases exponentially with $\bar{n}$. This makes sense: the ``catness'' increases if the size of the system and the phase difference are increased. For fixed mean photon number $\bar{n}$, Eq.~\eqref{eq:hsvg11} shows that the minimum overlap is obtained for $\chi=\pi/2$ (i.e., when the vectors representing the two coherent states point in opposite directions).
Owing to the properties of the Rydberg atoms, the experiment by Brune et al.\ \cite{Brune:1996:om} achieved relatively large phase shifts of up to $\chi = 0.31\pi$, with mean photon number $\bar{n} \approx 10$. For these values, the overlap $\abs{\braket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}}}= \ensuremath{e}^{- 2\abs{\alpha}^2\sin^2\chi}$ [see Eq.~\eqref{eq:hsvg11}] is less than $3 \times 10^{-5}$. Thus the states $\ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}$ and $\ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}}$ are very nearly orthogonal, and therefore the field effectively acts as a meter that encodes which-state information about the energy eigenstates $\ket{g}$ and $\ket{e}$ in the mesoscopically distinct states $\ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}$ and $\ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}}$. A subsequent experiment has realized mesoscopic superposition states involving $\bar{n}=29$ photons \cite{Auffeves:2003:za}. By coupling a superconducting qubit to a waveguide cavity resonator, a superposition of coherent states involving 111 photons has been generated \cite{Vlastakis:2013:pp} (see also Ref.~\cite{Hermann-Avigliano:2015:tt}).
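These numbers are easy to verify. The following short Python sketch builds the two coherent-state components in a truncated Fock basis, compares their overlap with the closed-form expression of Eq.~\eqref{eq:hsvg11}, and evaluates the ``catness'' parameter $D^2$ of Eq.~\eqref{eq:hsADvg11} for the values $\bar{n}\approx 10$ and $\chi=0.31\pi$ quoted above (the Fock-space truncation is an arbitrary numerical choice).

\begin{verbatim}
# Coherent-state overlap and "catness" for the parameters quoted for Brune et al.
import numpy as np
from math import factorial, sin, exp, pi

def coherent(alpha, nmax=80):            # coherent state |alpha> in a truncated Fock basis
    vec = np.array([alpha**k / np.sqrt(factorial(k)) for k in range(nmax)], dtype=complex)
    return np.exp(-abs(alpha)**2 / 2) * vec

nbar, chi = 10.0, 0.31 * pi              # mean photon number and phase shift from the text
alpha = np.sqrt(nbar)
c1 = coherent(alpha * np.exp(1j * chi))
c2 = coherent(alpha * np.exp(-1j * chi))

print(abs(np.vdot(c1, c2)))              # numerical overlap |<alpha e^{i chi}|alpha e^{-i chi}>|
print(exp(-2 * nbar * sin(chi)**2))      # closed form; both ~ 1e-6, below the quoted 3e-5
print(4 * nbar * sin(chi)**2)            # "catness" parameter D^2 of Eq. (hsADvg11)
\end{verbatim}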
The experiment by Brune et al.\ \cite{Brune:1996:om,Maitre:1997:tv} then measured the progressive decoherence of the field superposition $\ket{\pm}=\frac{1}{\sqrt{2}} \left( \ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}} \pm \ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}} \right)$ [see Eqs.~\eqref{eq:cat-bhvlhvlrune} and \eqref{eq:cat-baahvlhvlrune}] that is left behind in the cavity $C$ after the passage and detection of the atom. This measurement was accomplished by sending a second rubidium atom through the apparatus. One can show \cite{Davidovich:1996:sa, Maitre:1997:tv,Raimond:2001:aa,Kaiser:2001:tm,Schlosshauer:2007:un} that upon detection, this second atom will be found in the same state ($g$ or $e$) as the first atom provided the superposition has not been decohered.\footnote{For such a perfect correlation to obtain, we also need to require that the state components $\ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}$ and $\ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}}$ be orthogonal. This holds to a good approximation even for the modest photon numbers used by Brune et al.; see the discussion following Eq.~\eqref{eq:hsADvg11}.} Thus, the conditional detection probability $P_{ee}$ for finding both the first and second atoms in the state $e$ after passage through the apparatus will be equal to one. If, however, the field state has started to decohere before the second atom has passed through the cavity $C$, $P_{ee}$ will decrease, approaching a value of $\frac{1}{2}$ in the limit of complete decoherence. The longer one waits before sending the second atom through $C$, the more the field state will have decohered. Thus, by adjusting the wait time $\tau$ between sending the first and second atoms through the apparatus and recording $P_{ee}$ as a function of $\tau$, the gradual decoherence of the field state can be measured \cite{Davidovich:1996:sa, Maitre:1997:tv,Raimond:2001:aa}.
\begin{figure}\label{fig:twoatom}
\end{figure}
The first experimental realizations \cite{Brune:1996:om,Maitre:1997:tv} used the observed data for $P_{ee}$ and $P_{eg}$ (the probability of finding the second atom in the state $e$ if the first atom was found in $g$) to define the two-atom correlation function $\eta(\tau)=P_{ee}(\tau)-P_{eg}(\tau)$. In the absence of decoherence, and hence for perfect two-atom correlations, we have $P_{ee}=1$ and $P_{eg}=\frac{1}{2}$, and thus $\eta=\frac{1}{2}$; in the case of complete decoherence, the correlation is lost and we have $P_{ee}=P_{eg}=\frac{1}{2}$, and thus $\eta=0$. Figure~\ref{fig:twoatom} shows the experimental results for two different phase shifts $\chi=0.13\pi$ and $\chi=0.31\pi$. Good agreement is found with theoretical predictions obtained from a simple model \cite{Davidovich:1996:sa,Maitre:1997:tv}. It is clearly seen that, as expected, decoherence becomes more rapid as the phase shift $\chi$ between the coherent-state components $\ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}$ and $\ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}}$---and thus their distinguishability---is increased.
\begin{figure}\label{fig:qeddecoh}
\end{figure}
In a subsequent experiment reported by Del{\'e}glise et al.\ \cite{Deleglise:2008:oo}, the field states inside the cavity were reconstructed at different stages of their gradual decoherence. Thus, the effect of the decoherence process on the quantum state could be visualized explicitly (see Fig.~\ref{fig:qeddecoh}); the authors even generated a movie of the decoherence process (available as supplementary information for Ref.~\cite{Deleglise:2008:oo}). In the experiment, the mean photon number was $\bar{n}=3.5$ and the phase difference was $\chi=0.37\pi$. A simple model of decoherence \cite{Walls:1985:pp,Brune:1992:zz,Haroche:2006:hh} predicts a decoherence timescale on the order of $T_d=2T_r/D^2$, where $T_r$ is the damping time of the optical cavity. In the experiment by Del{\'e}glise et al.\ \cite{Deleglise:2008:oo}, this damping time was $T_r=\unit[0.13]{s}$, leading to a predicted decoherence time of $T_d=\unit[19.5]{ms}$ when adjusted for thermal background, which is in good agreement with the measured value $T_d=\unit[(17 \pm 3)]{ms}$.
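The quoted timescale follows from simple arithmetic, as the few lines below show; the bare estimate comes out slightly above the quoted $T_d=\unit[19.5]{ms}$, which additionally includes the thermal-background adjustment mentioned in the text.

\begin{verbatim}
# Decoherence-time estimate T_d = 2 T_r / D^2 for the values quoted for Deleglise et al.
from math import pi, sin
nbar, chi, Tr = 3.5, 0.37 * pi, 0.13     # mean photon number, phase shift, cavity damping time
D2 = 4 * nbar * sin(chi)**2              # Eq. (hsADvg11); ~ 11.8
print(D2, 2 * Tr / D2)                   # 2*T_r/D^2 ~ 0.022 s before the thermal correction
\end{verbatim}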
\begin{figure}\label{fig:ramsey}
\end{figure}
The experiment by Brune et al.\ \cite{Brune:1996:om} was also used to observe the decoherence of the \emph{atomic} state due to the photon field. The combination of the two microwave cavities $R_1$ and $R_2$ effectively forms a (spatially separated) Ramsey interferometer \cite{Ramsey:1950:pp,Haroche:2006:hh}. One may then measure the detection probability $P_g(\nu)$ of finding an atom, after passage through the apparatus, in the ground state $g$ as a function of the frequency $\nu$ in the cavities $R_1$ and $R_2$ (see Fig.~\ref{fig:ramsey}). One finds that, in the absence of a photon field in cavity $C$, $P_g(\nu)$ displays an oscillatory (fringe) pattern. This pattern arises because either cavity $R_1$ or $R_2$ may trigger a transition from state $g$ to state $e$, and it is impossible to distinguish in which cavity a given transition occurred. The indistinguishability of these two ``paths'' (transition in $R_1$ or in $R_2$) implies interference between the paths, which manifests itself in an interference pattern for the detection probability at the output. However, in the presence of a photon field in cavity $C$, the field will obtain information about the atomic state [compare Eq.~\eqref{eq:cat-brune}]. This which-path information diminishes the visibility of the interference pattern, with the visibility quantified by the overlap between $\ket{\alpha \ensuremath{e}^{\ensuremath{i} \chi}}$ and $\ket{\alpha \ensuremath{e}^{-\ensuremath{i} \chi}}$ as given by Eq.~\eqref{eq:hsvg11}. This is again an instance of decoherence, but it is now the atomic state that decoheres due to the environment provided by the photon field.
\subsection{\label{sec:matt-wave-interf}Matter-wave interferometry}
\begin{figure}
\caption{Examples of molecular clusters used in matter-wave interferometry experiments, drawn to scale (the scale bar represents \unit[10]{\AA}). \emph{(a)} Fullerene C$_{60}$ ($m = \unit[720]{amu}$, 60 atoms). \emph{(b)} Perfluoroalkylated nanosphere PFNS8 ($m = \unit[5672]{amu}$, 356 atoms). \emph{(c)} PFNS10 ($m = \unit[6910]{amu}$, 430 atoms). \emph{(d)} Tetraphenylporphyrin TPP ($m = \unit[614]{amu}$, 78 atoms). \emph{(e)} TPPF84 ($m = \unit[2814]{amu}$, 202 atoms). \emph{(f)} TPPF152 ($m = \unit[5310]{amu}$, 430 atoms). Figure reproduced with permission from Ref.~\cite{Gerlich:2011:aa}. }
\label{fig:molecules}
\end{figure}
In matter-wave interferometry experiments with molecules (see Ref.~\cite{Hornberger:2012:ii} for a review), spatial interference fringes are demonstrated for mesoscopic molecules ranging from C$_{60}$ and C$_{70}$ fullerenes \cite{Arndt:1999:rc} to large molecular clusters (Fig.~\ref{fig:molecules}) \cite{Gerlich:2011:aa,Eibenberger:2013:az}. Because the de~Broglie wavelength of such molecules is on the order of picometers, one cannot use ordinary double-slit interferometry as one would do for photons or particles such as electrons \cite{Jonsson:1961:rz,Jonsson:1974:rz,Tonomura:1989:as}. Instead, the experiments are based on the Talbot effect familiar from classical optics \cite{Mansuripur:2009:zz}, a genuine interference phenomenon in which a plane wave incident on a diffraction grating creates images of the grating at multiples of the Talbot length $L_\lambda = d^2/\lambda$ behind the grating, where $d$ is the slit spacing and $\lambda$ is the wavelength of the incident wave (see Fig.~\ref{fig:tbeff}) \cite{Mansuripur:2009:zz,Hackermuller:2003:uu}.
\begin{figure}
\caption{Schematic illustration of the Talbot effect. When a plane wave of wavelength $\lambda$ is incident from the left on a diffraction grating, a repeating image of the grating is generated at distances equal to multiples of the Talbot length $L_\lambda = d^2/\lambda$ behind the grating, where $d$ is the spacing of the slits in the diffraction grating.}
\label{fig:tbeff}
\end{figure}
In the matter-wave interferometry experiments with C$_{60}$ and C$_{70}$ molecules \cite{Brezger:2002:mu,Hornberger:2003:tv,Hackermueller:2002:wb,Hackermuller:2003:uu,Hackermuller:2004:rd}, one does not, however, work with an incident plane (i.e., coherent) wave as required for the Talbot effect, but rather with an uncollimated, incoherent molecular beam to allow for sufficiently high intensity. To accommodate such beams, the experiments make use of the three-grating setup shown in Fig.~\ref{fig:c70-setup}, realizing a so-called Talbot--Lau interferometer \cite{Brezger:2002:mu,Hornberger:2003:tv,Hackermuller:2003:uu,Hackermueller:2002:wb,Hackermuller:2004:rd}. The first grating is used to produce sufficient transverse coherence of the molecular beam (on the order of 2--3 grating periods) at the location of the second grating, which plays the role of the diffraction grating in the Talbot effect (see again Fig.~\ref{fig:tbeff}) and is placed at the Talbot length $L_\lambda$ behind the first grating. (For C$_{70}$ molecules, the wavelength is a few picometers, and given the experimental slit spacing $d$ of about one micrometer, one finds a macroscopic Talbot length on the order of one meter; the exact Talbot length, and thus grating separation, in the fullerene experiments of Refs.~\cite{Brezger:2002:mu,Hornberger:2003:tv,Hackermueller:2002:wb,Hackermuller:2003:uu,Hackermuller:2004:rd} is $L_\lambda = \unit[38]{cm}$.)
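The orders of magnitude quoted in parentheses above can be reproduced with a few lines of arithmetic. In the sketch below, the molecular mass of C$_{70}$ and the grating period are rounded values, and the beam velocity of $\unit[200]{m/s}$ is an assumed typical value that is not taken from the text.

\begin{verbatim}
# De Broglie wavelength and Talbot length for a C_70 beam (rounded, assumed parameters).
h_planck, amu = 6.626e-34, 1.661e-27     # SI values
m, v, d = 840 * amu, 200.0, 1.0e-6       # C_70 mass; assumed beam velocity; grating period
lam = h_planck / (m * v)                 # de Broglie wavelength
print(lam)                               # ~ 2.4e-12 m, i.e. a few picometers
print(d**2 / lam)                        # Talbot length d^2/lambda ~ 0.4 m
\end{verbatim}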
If there is indeed coherence between the different possible paths through the second grating, then the Talbot image of the diffraction grating will manifest itself as an oscillatory variation of the transverse molecular density at multiples of the Talbot length. To image this density pattern, a third grating, placed at a distance equal to the Talbot length behind the second grating, is scanned across the pattern (in the $x$ direction shown in Fig.~\ref{fig:c70-setup}), thus serving as a detection mask. The molecules that have passed through all three gratings are then ionized by a laser beam and detected. A sinusoidal variation of the number of detected molecules as a function of the position of the third grating indicates the presence of interference fringes. These fringes confirm the delocalization of the spatial wavefunction of the molecule due to the presence of the diffraction grating. Since the grating period is roughly one micrometer, the fringes announce quantum coherence between two spatial locations one micrometer apart.\footnote{Such fringes could, in principle, also be explained in terms of the classical Moir\'e effect, which is a consequence of the blocking of rays by the grating. This effect, however, is independent of the velocity (and thus de Broglie wavelength) of the molecules. Therefore, a variation of the fringe visibility with molecular velocity indicates the presence of quantum interference arising from the Talbot effect. This variation was indeed observed in the fullerene experiment of Ref.~\cite{Brezger:2002:mu}, confirming the quantum nature of the observed interference fringes.}
\begin{figure}
\caption{Schematic illustration of the Talbot--Lau interferometer used in Refs.~\cite{Arndt:1999:rc,Brezger:2002:mu,Hornberger:2003:tv,Hackermueller:2002:wb,Hackermuller:2004:rd} for demonstrating interference patterns for C$_{60}$ and C$_{70}$ fullerene molecules, and for studying their decoherence. Molecules emitted from a source are velocity-selected and pass through the first grating to produce sufficient beam coherence. The second grating is a diffraction grating implementing the Talbot effect (see Fig.~\ref{fig:tbeff}). The third grating acts as a scanning mask for the molecular density pattern subsequently recorded by ionizing and detecting the molecules. Figure adapted with permission from Ref.~\cite{Brezger:2002:mu}.}
\label{fig:c70-setup}
\end{figure}
In an improved version \cite{Gerlich:2007:om} of this original Talbot--Lau setup, the mechanical diffraction grating is replaced by a standing laser light wave, which eliminates perturbations arising from van der Waals interactions between the grating walls and the molecules. An all-optical implementation using optical ionization gratings has also been realized \cite{Haslinger:2013:ii}. Interference fringes observed for C$_{70}$ molecules and the much larger TPPF20 molecules \cite{Eibenberger:2013:az} are shown in Fig.~\ref{fig:pattern}.
\begin{figure}
\caption{Interference fringes observed in molecular interferometry experiments. \emph{(a)} Fringes for C$_{70}$ molecules, as reported by Brezger et al.\ \cite{Brezger:2002:mu}. Figure adapted with permission from Ref.~\cite{Brezger:2002:mu}. \emph{(b)} Fringes for TPPF20 molecules, as observed by Eibenberger et al.\ \cite{Eibenberger:2013:az}. Measured fringe visibilities were $V=\unit[38]{\%}$ for the C$_{70}$ molecules and $V=\unit[33]{\%}$ for the TPPF20 molecules. Solid lines are sinusoidal fits. Figure reproduced with permission from Ref.~\cite{Eibenberger:2013:az}.}
\label{fig:pattern}
\end{figure}
\begin{figure}
\caption{Influence of different sources of decoherence on the visibility of interference fringes for C$_{70}$ fullerenes. \emph{(a)} Decoherence due to scattering of background gas molecules in the interferometer, as demonstrated in the experiments by Hornberger et al.\ \cite{Hornberger:2003:tv} and Hackerm\"uller et al.\ \cite{Hackermuller:2003:uu}. The plot shows the fringe visibility as a function of the pressure of the background gas. Experimental data are represented by circles, and the theoretical prediction is shown as a solid line \cite{Hornberger:2003:un,Hornberger:2004:bb,Hornberger:2005:mo}. Figure adapted with permission from Ref.~\cite{Hackermuller:2003:uu}. \emph{(b)} Reduction in fringe visibility as a result of emission of thermal radiation from heated fullerenes, shown for different
laser heating powers and corresponding mean microcanonical molecular temperatures, as demonstrated in the experiment by Hackerm\"uller et al.\
\cite{Hackermuller:2004:rd}. The theoretical prediction \cite{Hackermuller:2004:rd} (see also Refs.~\cite{Hornberger:2005:mo,Hornberger:2006:tx}) is shown as a solid line. Figure reproduced with permission from Ref.~\cite{Hornberger:2012:ii}.}
\label{fig:c70-vis}
\end{figure}
Two important sources of decoherence ubiquitous in nature were studied in a controlled fashion using fullerene interferometry experiments: collisional decoherence (see Sec.~\ref{sec:collisionaldecoherence}) \cite{Hornberger:2003:tv,Hackermuller:2003:uu} and thermal decoherence \cite{Hackermuller:2004:rd}. To induce controlled collisional decoherence, the experiments by Hornberger et al.\ \cite{Hornberger:2003:tv} and Hackerm\"uller et al.\ \cite{Hackermuller:2003:uu} introduced a background gas of adjustable pressure into the interferometer. Scattering of gas molecules by the fullerenes creates entanglement, and which-path information about the fullerene is carried away by the gas molecules. The higher the gas pressure, the greater the likelihood for a fullerene to collide with a gas particle, and thus the stronger the resulting decoherence effect. Figure~\ref{fig:c70-vis}\emph{a} shows the experimentally observed decrease of fringe visibility as a function of gas pressure, which was found to be in excellent agreement with theoretical models for collisional decoherence \cite{Hornberger:2003:un,Hornberger:2004:bb}. The experimental data were also used to confirm the predictions of more realistic collisional-decoherence models based on the quantum linear Boltzmann equation \cite{Hornberger:2006:tb,Hornberger:2006:tc,Hornberger:2008:ii,Busse:2009:aa,Vacchini:2009:pp,Busse:2010:aa,Busse:2010:oo,Hornberger:2012:ii}. Moreover, as already mentioned in Sec.~\ref{sec:collisionaldecoherence}, Talbot--Lau interferometry is sensitive to molecular rotations \cite{Gring:2010:aa,Stickler:2015:zz}, and this observation has inspired the development of collisional-decoherence models that take into account both spatial and orientational decoherence of anisotropic molecules \cite{Walter:2016:zz,Stickler:2016:yy,Papendell:2017:yy, Stickler:2018:oo,Stickler:2018:uu}; predictions derived from these models are in good agreement with experimental data \cite{Stickler:2018:oo}.
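For orientation, the pressure dependence of the visibility shown in Fig.~\ref{fig:c70-vis}\emph{a} is, to a good approximation, exponential,
\begin{equation*}
V(p) \approx V_0 \, e^{-p/p_0},
\end{equation*}
where $V_0$ denotes the visibility in the absence of the background gas and the characteristic ``decoherence pressure'' $p_0$ is set by the effective scattering cross section and the interferometer geometry. This simple form should be read only as an illustrative summary of the full collisional-decoherence treatment of Refs.~\cite{Hornberger:2003:un,Hornberger:2004:bb}, not as a substitute for it.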
To study decoherence of fullerenes due to emission of thermal radiation, Hackerm\"uller et al.\ \cite{Hackermuller:2004:rd} used laser beams to heat the fullerene molecules to temperatures of up to \unit[3,000]{K}. Because the photons emitted from the heated molecules carry away which-path information, spatial coherence---and thus fringe visibility---is reduced. By changing the molecular temperature, the strength of the resulting thermal decoherence can be adjusted. Figure~\ref{fig:c70-vis}\emph{b} shows the experimentally observed visibility of the interference fringes as a function of the laser heating power, with the corresponding (mean microcanonical) temperature of the molecular beam shown as well. We see that at temperatures below around \unit[1,500]{K} (the source temperature is \unit[900]{K}), thermal decoherence is still relatively weak and the fringe visibility is only mildly affected. Around \unit[2,000]{K}, the decoherence becomes strong enough to visibly reduce the interference fringes, while around \unit[2,500]{K} the visibility has been halved. Above \unit[3,000]{K}, decoherence is complete and fringes are no longer discernible.
The observed dependence of the fringe visibility on molecular temperature is in good agreement with theoretical predictions \cite{Hackermuller:2004:rd}; see also Refs.~\cite{Hornberger:2005:mo,Hornberger:2006:tx} for a detailed theoretical analysis of the experiment. To explain the temperature dependence \cite{Hornberger:2005:mo}, one notes that only above \unit[2,000]{K} is there a non-negligible probability for a heated fullerene to emit a photon with a wavelength comparable to the extent of the spatial delocalization of the fullerene state (as given by the slit spacing $d$) such that appreciable which-path information can be obtained. Thus, this temperature marks the onset of decoherence. The emission of several photons of such wavelength is needed to cause a 50\% decrease in fringe visibility, which requires the higher temperature regime of \unit[2,500]{K}. Around \unit[3,000]{K}, the average number of emitted photons per fullerene becomes large enough to produce full decoherence with no remaining fringe visibility. While for fullerene molecules substantial thermal decoherence obtains only at relatively high temperatures, for much larger molecules it becomes a significant source of decoherence even at room temperature, showing the importance of thermal decoherence for understanding the quantum-to-classical transition on macroscopic scales \cite{Joos:2003:jh,Hornberger:2006:tx,Schlosshauer:2007:un}. For example, observation of interference fringes for molecules on the order of $10^9$ amu (which is orders of magnitude beyond current experiments) would likely necessitate cooling the molecules to their vibrational ground state at temperatures below \unit[77]{K} \cite{Hornberger:2012:ii, Kaltenbaek:2016:pp}.
\subsection{\label{sec:superc-syst}Superconducting systems}
Superconducting qubit systems, such as superconducting quantum interference devices (SQUIDs) and Cooper-pair boxes, play a prominent role in the exploration of coherence and decoherence in macroscopic systems. Their importance to fundamental studies of macroscopic quantum behavior was spelled out early by Leggett \cite{Leggett:1980:yt}, who suggested, in 1980, that such systems may become ideal vehicles for the creation of cat-like superpositions of macroscopically distinct states. Superconducting qubits are also considered promising candidates for the realization of a quantum computer \cite{Devoret:2013:pp}.
Superconductivity is a phenomenon in which pairs of electrons of opposite spin bind into boson-like composites known as Cooper pairs. Each Cooper pair is in a low-energy ground state. Provided the thermal vibrational energy of the crystal lattice of the material is lower than the energy gap between the ground and first excited states of the Cooper pair, interactions with the lattice cannot excite the Cooper pairs. The Cooper pairs can therefore move freely through the lattice, forming a resistance-free, persistent ``supercurrent'' whose collective center-of-mass motion may be described quantum-mechanically by a single, macroscopically extended wave function. When a thin insulating barrier (known as a Josephson junction) is inserted between two pieces of superconducting material, Cooper pairs will tunnel through the barrier, leading to a flow of supercurrent even if no voltage is applied. This \emph{Josephson effect} is a purely quantum-mechanical phenomenon (for reviews, see, e.g., Refs.~\cite{Likharev:1979:ii,Makhlin:2001:oo}).
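For orientation, the tunneling supercurrent through a Josephson junction is governed by the textbook Josephson relations, which connect the current $I$ and the voltage $V$ across the junction to the phase difference $\Delta\phi$ of the superconducting wave functions on either side of the barrier,
\begin{equation*}
I = I_c \sin\Delta\phi, \qquad \frac{d(\Delta\phi)}{dt} = \frac{2e}{\hbar}\, V,
\end{equation*}
where $I_c$ is the critical current of the junction. The first relation shows that a supercurrent flows even at zero voltage, which is the effect described above.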
\begin{figure}
\caption{Schematic illustration of a SQUID. A ring of superconducting material is interrupted by a thin insulating barrier (Josephson junction), which causes a dissipation-free current (``supercurrent'') consisting of Cooper pairs to flow in the loop.}
\label{fig:squidscheme}
\end{figure}
Figure~\ref{fig:squidscheme} schematically shows a SQUID with a single Josephson junction, a setup referred to as an rf-SQUID. The supercurrent creates an intrinsic magnetic flux $\Phi_\text{int}$ threading the loop, and in addition an external magnetic field is applied to provide an adjustable external magnetic flux $\Phi_\text{ext}$. The requirement that the macroscopic wave function be single-valued around the loop translates into a condition for the total trapped flux $\Phi=\Phi_\text{int}+ \Phi_\text{ext}$, which must obey $\Delta \phi_\text{J} + 2\pi \Phi/\Phi_0 = 2\pi k$ for integer $k$, where $\Delta \phi_\text{J}$ is the phase shift introduced by the Josephson junction and $\Phi_0=h/2e$ is the flux quantum. In this way, the total flux $\Phi$ becomes quantized and serves as the single macroscopic variable representing the collective quantum-mechanical evolution of the Cooper pairs.
\begin{figure}
\caption{Double-well potential $U(\Phi)$ governing the evolution of the macroscopic flux variable $\Phi$ in an rf-SQUID. \emph{(a)} Away from the bias point $\Phi_\text{ext} = \Phi_0/2$, the well is tilted. Low-lying energy eigenstates $\psi_k(\Phi) =\braket{\Phi}{k}$ are tightly localized in each well, representing approximate flux eigenstates. They also approximately correspond to a persistent supercurrent flowing in a fixed direction (clockwise or counterclockwise) around the SQUID loop. The potential $U(\Phi)$ is shown in units of $I_c \Phi_0$, where $I_c$ is the critical current of the Josephson junction and $\Phi_0$ is the flux quantum. \emph{(b)} At the bias point $\Phi_\text{ext} = \Phi_0/2$, the double well is symmetric. The two lowest-lying energy eigenstates become delocalized across the wells and consist of coherent superpositions of two macroscopic supercurrents flowing in opposite directions around the loop. These superpositions were observed in several experiments \cite{Friedman:2000:rr,Wal:2000:om,Chiorescu:2003:ta,Ilichev:2003:tv}.}
\label{fig:squidpot0}
\end{figure}
The evolution of $\Phi$ is effectively governed by a tilted double-well potential $U(\Phi)$ in flux space (Fig.~\ref{fig:squidpot0}\emph{a}) \cite{Weiss:1999:tv}, with the amount of tilt determined by the applied external flux, and resonant quantum tunneling between the wells may occur \cite{Silvestrini:1996:ii,Rouse:1998:om}. Away from the bias point $\Phi_\text{ext}=\Phi_0/2$, each well contains low-lying energy eigenstates $\ket{k}$ that are well-localized within each well, corresponding to approximate eigenstates of the flux operator. These localized eigenstates in a given well also approximately correspond to a macroscopic supercurrent flowing in a definite direction (clockwise or counterclockwise) around the loop. The two lowest-lying energy eigenstates are well-separated from higher-energy states, such that the SQUID effectively acts as a two-state system, forming a superconducting flux qubit.
At the bias point $\Phi_\text{ext}=\Phi_0/2$, the double-well potential $U(\Phi)$ becomes symmetric (Fig.~\ref{fig:squidpot0}\emph{b}). The presence of the tunneling barrier leads to a level anticrossing that produces an energy ground state $\ket{0}$ and a first excited state $\ket{1}$ that are delocalized across the two wells. They are given by coherent superpositions of the states $\ket{\circlearrowright}$ and $\ket{\circlearrowleft}$ (which are localized in each well) representing ``classical'' clockwise and counterclockwise supercurrents,
\begin{subequations} \begin{align} \ket{0} &= \frac{1}{\sqrt{2}} \left( \ket{\circlearrowright} + \ket{\circlearrowleft} \right), \label{eq:scqqqq3}\\ \ket{1} &= \frac{1}{\sqrt{2}} \left( - \ket{\circlearrowright} + \ket{\circlearrowleft} \right). \label{eq:scqqqq4} \end{align} \end{subequations}
\begin{figure}
\caption{\emph{(a)} Superconducting flux qubit used in the experiment by
Chiorescu et al.\ \cite{Chiorescu:2003:ta}. The micrometer-sized superconducting loop is interrupted by three Josephson junctions (the use of three junctions, rather than one, enables easier tuning of the SQUID), and the flux in the loop is measured by coupling it to a second SQUID. The black and white arrows indicate the clockwise and counterclockwise directions of the supercurrent. \emph{(b)} Observation of superpositions of macroscopic supercurrents flowing in opposite directions, as evidenced by measurements of Rabi oscillations and reported by Chiorescu et al.\ \cite{Chiorescu:2003:ta}. The plots show the occupation probability $P_\circlearrowright(\tau)$ for the clockwise supercurrent state as a function of the length $\tau$ of the microwave pulse. From top to bottom, the three data sets correspond to decreasing amplitude of the microwave pulse, with the Rabi frequency decreasing as also predicted by theory. Figures adapted with permission from Ref.~\cite{Chiorescu:2003:ta}.}
\label{fig:chio}
\end{figure}
Such superposition states of macroscopic supercurrents flowing in opposite directions were first observed in several experiments in the early 2000s \cite{Friedman:2000:rr,Wal:2000:om,Chiorescu:2003:ta,Ilichev:2003:tv}. Friedman et al.\ \cite{Friedman:2000:rr} confirmed their existence indirectly through spectroscopic measurements of the energy splitting between them and found excellent agreement with theoretical predictions. The supercurrent in the experiment was several microamperes. In experiments by Chiorescu et al.\ \cite{Chiorescu:2003:ta} (see Fig.~\ref{fig:chio}\emph{a}) and Ilichev et al.\ \cite{Ilichev:2003:tv}, the existence of the supercurrent superpositions was confirmed through the observation of Rabi oscillations between the states $\ket{\circlearrowright}$ and $\ket{\circlearrowleft}$ (Fig.~\ref{fig:chio}\emph{b}).
\begin{figure}
\caption{Loss of coherence of a superposition of two supercurrents flowing in opposite directions in a SQUID, as measured in the experiment by Chiorescu et al.\ \cite{Chiorescu:2003:ta} using Ramsey interferometry. Progressive dephasing is observed as the decay of the oscillation amplitude over time. The oscillation represents the probability $P_{\circlearrowright}(\tau)$ of measuring a clockwise supercurrent as a function of the delay time $\tau$. Figure adapted with permission from Ref.~\cite{Chiorescu:2003:ta}.}
\label{fig:chiorescu2}
\end{figure}
The gradual loss of coherence from these supercurrent superpositions was first measured by Chiorescu et al.\ \cite{Chiorescu:2003:ta} using Ramsey interferometry. The SQUID was tuned close to the bias point and initialized in the ground state $\ket{0}$. A $\pi/2$ microwave pulse was applied to transform this state into an equal-weight superposition of $\ket{0}$ and $\ket{1}$. The state was then allowed to evolve freely for a duration $\tau$, followed by the application of a second $\pi/2$ microwave pulse. In the resulting state, occupation probabilities for the supercurrent states $\ket{\circlearrowright}$ and $\ket{\circlearrowleft}$ will exhibit an oscillatory dependence on the delay time $\tau$. This dependence was experimentally observed (see Fig.~\ref{fig:chiorescu2}), with a measured frequency in excellent agreement with theoretical predictions. The characteristic time for the loss of phase coherence was obtained from the decay envelope of the oscillation and found to be around \unit[20]{ns}. In subsequent experiments with superconducting flux qubits, various influences that limit coherence time were studied, including flux noise \cite{Yoshihara:2006:ii,Bialczak:2007:uu} and photon noise \cite{Bertet:2005:un}; in the latter experiment, relatively long coherence times of several microseconds were observed \cite{Bertet:2005:un}.
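Returning to the Ramsey measurement of Fig.~\ref{fig:chiorescu2}, a minimal model makes the structure of the signal explicit. Assuming that the free evolution between the two $\pi/2$ pulses takes place at a detuning $\delta$ between the microwave frequency and the qubit transition frequency, and that the dephasing can be summarized by a simple exponential envelope with time constant $T_2$ (an idealization of the actual noise processes in the experiment), the probability of finding, say, the clockwise supercurrent state after the second pulse is of the form
\begin{equation*}
P_{\circlearrowright}(\tau) \approx \tfrac{1}{2}\left[ 1 + e^{-\tau/T_2} \cos(\delta \tau) \right].
\end{equation*}
The oscillation frequency is thus set by the detuning, while the decay of the envelope yields the coherence time; in this language, the $\unit[20]{ns}$ quoted above plays the role of $T_2$.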
Loss of coherence has also been observed for superconducting charge qubits (Cooper-pair boxes \cite{Bouchiat:1998:ii}) and phase qubits. In a charge qubit, Cooper pairs tunnel through a Josephson junction onto a superconducting island, and the two qubit basis states are formed by states differing in the amount of charge on the island. Superpositions of such charge states were experimentally observed through Rabi oscillations \cite{Nakamura:1999:ub}, and coherent oscillations with a decay time of $\unit[0.5]{\mu s}$ were measured \cite{Vion:2002:oo}. For phase qubits, where the variable of interest is the phase difference between the electrodes of the Josephson junction, superpositions of macroscopically distinct phase states with coherence times of up to several microseconds have been observed \cite{Yu:2002:yb,Martinis:2002:qq}. Since then, several improved designs of superconducting qubits, such as quantronium, transmon, and fluxonium qubits, have further enhanced the coherence time \cite{Devoret:2013:pp}. For example, a 3D transmon has achieved coherence times on the order of $\unit[100]{\mu s}$ \cite{Rigetti:2012:aa,Sears:2012:ee}.
Drawing on experimental data, a number of theoretical studies have investigated and modeled loss-of-coherence processes in superconducting qubits, focusing on sources such as intrinsic quasiparticle tunneling \cite{Catelani:2012:zz}, the coupling to electromagnetic circuitry for SQUID readout \cite{Wal:2003:pp}, two-level defects in the Josephson junction \cite{Martinis:2005:zz}, and fluctuations in the bias current of the Josephson junction \cite{Martinis:2003:bz}. Such investigations have indicated that many sources of an intrinsic loss of coherence in superconducting qubits may be modeled in terms of an environment composed of effective two-level systems \cite{Martinis:2002:qq,Martinis:2005:zz}, i.e., by means of a spin--spin model (compare Sec.~\ref{sec:spin-envir-models}) \cite{Dube:2001:zz,Prokofev:2000:zz}.
It should be noted here that in the aforementioned experimental and theoretical studies of superconducting qubits, the loss of coherence is typically due to ensemble dephasing induced by fluctuations (noise), rather than to an entanglement-based transfer of information to an environment. While phenomenologically the effect on the system's density matrix may be similar for both processes, the physical differences between the two should be kept in mind; see the discussion in Sec.~\ref{sec:decoh-vers-diss}.
\subsection{\label{sec:trapped}Ion traps}
Ion traps are one of the most promising and advanced platforms for the implementation of a quantum computer \cite{Haffner:2008:pp}. The idea of using trapped ions for quantum computation goes back to the groundbreaking papers by Cirac and Zoller \cite{Cirac:1995:tt} and Monroe et al.\ \cite{Monroe:1995:oo}. The dynamics of trapped ions have been studied extensively \cite{Leibfried:2003:om}. Several quantum-computational tasks have been experimentally realized using ion-trap qubits, including demonstrations of the Deutsch--Jozsa algorithm \cite{Deutsch:1989:mm}, quantum teleportation \cite{Barrett:2004:oo,Riebe:2004:qq}, quantum error correction \cite{Chiaverini:2004:aa}, decoherence-free subspaces \cite{Kielpinski:2001:uu,Roos:204:pp,Haffner:2005:zz,Langer:2005:uu}, entanglement between several ions \cite{Haffner:2005:sc}, and entanglement purification \cite{Reichle:2006:ii}; see Ref.~\cite{Haffner:2008:pp} for a comprehensive review. In ion-trap qubits, ions are bound by a time-dependent potential (Paul trap \cite{Paul:1990:oo,Leibfried:2003:om}), and the qubit states are formed by a pair of long-lived internal states of the ions \cite{Leibfried:2003:om,Haffner:2008:pp}. States of individual ions are typically initialized using optical-pumping techniques and are manipulated by laser pulses. Two-qubit operations may be carried out through coupling of collective motional degrees of freedom \cite{Cirac:1995:tt} or other means \cite{Haffner:2008:pp}.
Many studies of the loss of coherence in ion traps have focused on dephasing caused by noise and fluctuations in the physical (and often classical) parameters describing the trapping and control of the ions (see, e.g., Refs.~\cite{Turchette:2000:aa,Turchette:2000:oa,Brouard:2004:in,Grotz:2006:km,Stick:2006:aa,Seidelin:2006:rz,Haffner:2008:pp}). In fact, the influence of an environment is often \emph{simulated} by actively driving fluctuations in parameters such as the trap frequency or by applying external noise \cite{Myatt:2000:yy,Turchette:2000:aa}. Since such loss of coherence is due to classical noise processes (manifesting their effect only in an ensemble average for which phase relations become smeared out) rather than entanglement-induced information transfer, the points discussed in Sec.~\ref{sec:decoh-vers-diss} regarding the distinction between noise-induced and entanglement-induced decoherence should be kept in mind. A frequently used technique for measuring dephasing in ion traps is Ramsey interferometry, in much the same way as we have already discussed for photons in a cavity (Sec.~\ref{sec:atoms-cavity}) and superconducting qubits (Sec.~\ref{sec:superc-syst}).
\begin{figure}
\caption{Loss of coherence of a superposition of two hyperfine levels in a single trapped $^9$Be$^+$ ion, as reported by Langer et al.\ \cite{Langer:2005:uu}. The loss of coherence is quantified by the Ramsey fringe contrast as a function of the wait time $T_R$ between the two $\pi/2$ pulses. The solid line is an exponential fit, from which a coherence time of $\unit[(14.7 \pm 1.6)]{s}$ is obtained. Figure reproduced with permission from Ref.~\cite{Langer:2005:uu}.}
\label{fig:iondec}
\end{figure}
Two different kinds of superposition states in ion traps should be distinguished: superpositions of the internal atomic levels that represent the qubit basis states, and superpositions of the motional states of the ions. Since superpositions of the internal qubit levels are merely microscopic, they do not belong to the class of mesoscopic and macroscopic ``cat-like'' superposition states we have discussed previously in the context of photon fields, matter-wave interferometry, and SQUIDs. For such trapped-ion qubit states, a dominant source of the loss of coherence is fluctuations in the magnetic trapping field (see, e.g., Refs.~\cite{SchmidtKaler:2003:pp,Brouard:2004:in,Grotz:2006:km, Haffner:2008:pp}). A superposition state of the ion will be sensitive to such fluctuations if its components differ in magnetic moment. Because the loss of coherence due to magnetic-field fluctuations has substantially limited achievable coherence times \cite{Haffner:2008:pp}, several ion-trap experiments have used qubit states that have the same magnetic moment and that are therefore insensitive to fluctuations of the magnetic field (see, e.g., Refs.~\cite{Haljan:2005:oo,Langer:2005:uu,Benhelm:2008:oo}). In this way, coherence times exceeding \unit[10]{s} for a superposition of two hyperfine levels in a single $^9$Be$^+$ ion have been achieved \cite{Langer:2005:uu} (see Fig.~\ref{fig:iondec}). Other fluctuations in the experimental parameters that lead to a loss of coherence arise in the context of the control of the ion, for example, in the form of fluctuations in the intensity \cite{Schneider:1998:yz} and duration \cite{Miquel:1997:zz} of the laser beam, off-resonant excitations \cite{Steane:2000:ii}, AC-Stark shifts \cite{Haffner:2003:oo}, and detuning errors \cite{Leibfried:2003:mm}.
Let us now turn to the second kind of superposition states relevant to ion traps, namely, superpositions of motional states. Provided the trap parameters are properly tuned, the motion of a trapped ion is equivalent to that of a quasi-one-dimensional harmonically bound particle, and therefore the motional state of the ion may be described by harmonic-oscillator states \cite{Leibfried:2003:om,Wineland:2013:pp}. From a practical point of view, motional states of trapped ions are important because in certain implementations of ion-trap qubits, two-qubit gate operations are implemented by storing quantum information in motional states \cite{Cirac:1995:tt}. Different motional superposition states have been realized experimentally, including superpositions of coherent states \cite{Monroe:1996:tv,Myatt:2000:yy,Turchette:2000:aa,Wineland:2013:pp} and Fock states (i.e., number eigenstates) \cite{Myatt:2000:yy,Turchette:2000:aa}, and their dephasing has been observed \cite{Myatt:2000:yy,Turchette:2000:aa,SchmidtKaler:2003:pp}. In 1996, Monroe et al.\ \cite{Monroe:1996:tv} reported the observation of an ion for which the internal spin states $\ket{\uparrow}$ and $\ket{\downarrow}$ were quantum-correlated (entangled) with two coherent motional states $\ket{\alpha_\uparrow}$ and $\ket{\alpha_\downarrow}$ representing wave packets oscillating back and forth in the trap potential,
\begin{equation}\label{eq:lidvg2} \ket{\Psi} = \frac{1}{\sqrt{2}} \left( \ket{\uparrow}\ket{\alpha_\uparrow} + \ket{\downarrow} \ket{\alpha_\downarrow} \right). \end{equation}
In the experiment, $\alpha_\downarrow=-\alpha_\uparrow$, so the motions of the wave packets were $180^\circ$ out of phase with each other; the maximum separation between the two wave packets at the turning points was \unit[83]{nm}, with a wave-packet size of \unit[7.1]{nm}.
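These numbers already indicate how distinct the two motional components are. For two coherent states whose centers are separated in position by $\Delta x$ and whose individual wave packets have rms width $\sigma$, standard conventions give an overlap of magnitude $|\braket{\alpha_\uparrow}{\alpha_\downarrow}| = \exp[-(\Delta x)^2/(8\sigma^2)]$; if one (somewhat loosely) identifies the quoted wave-packet size with $\sigma$, then at maximum separation
\begin{equation*}
|\braket{\alpha_\uparrow}{\alpha_\downarrow}| \approx \exp\!\left[-\frac{(\unit[83]{nm})^2}{8\,(\unit[7.1]{nm})^2}\right] \approx e^{-17},
\end{equation*}
so the two components are, for all practical purposes, orthogonal.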
\begin{figure}
\caption{Dephasing of motional superposition states of a trapped ion coupled to an engineered noisy environment, as measured by Turchette et al.\ \cite{Turchette:2000:aa} using Ramsey interferometry.}
\label{fig:iondec2}
\end{figure}
Dephasing of such states was experimentally studied by Turchette et al.\ \cite{Turchette:2000:aa} using Ramsey interferometry (see Fig.~\ref{fig:iondec2}). The authors simulated a dephasing environment by varying the trap frequency during the wait time between the two Ramsey $\pi/2$ pulses, which introduced a relative phase shift between the components in the superposition. The frequency was changed adiabatically to avoid energy transfer to and from the ion. The loss of coherence then appears as the result of an averaging over many different instances of the random noise process. It is therefore to be understood as a consequence of ensemble averaging, rather than entanglement with an environment (see again Sec.~\ref{sec:decoh-vers-diss} for comments on this distinction). Superpositions of Fock states, with number differences up to $\Delta n = 3$, and their dephasing were also observed. In a different experiment, Schmidt-Kaler et al.\ \cite{SchmidtKaler:2003:pp} measured motional center-of-mass-mode coherence times on the order of \unit[100]{ms} for a trapped $^{40}$Ca$^+$ ion described by a superposition of two vibrational states.
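The ensemble-averaged character of the dephasing studied by Turchette et al.\ \cite{Turchette:2000:aa} can be captured in a simple sketch. If, in each run of the experiment, the varying trap frequency imprints a random relative phase $\phi$ between the two components of the superposition, then the interference (off-diagonal) terms of the ensemble-averaged density matrix are multiplied by the factor $\langle e^{i\phi} \rangle$; for zero-mean Gaussian phase noise of variance $\sigma_\phi^2$, this factor is
\begin{equation*}
\langle e^{i\phi} \rangle = e^{-\sigma_\phi^2/2},
\end{equation*}
so the fringe contrast decays as the accumulated phase variance grows, even though each individual run remains perfectly coherent.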
\begin{figure}
\caption{Dynamics of multiparticle entanglement under the influence of an engineered dephasing environment, as studied experimentally by Barreiro et al.\ \cite{Barreiro:2010:aa}. The amount of dephasing, quantified by the parameter $\gamma$, increases from left to right. The plots show the absolute values of the tomographically reconstructed density matrix for four trapped-ion qubits subject to engineered dephasing. \emph{(a)} Without dephasing ($\gamma=0$), the state violates a Bell--CHSH inequality. \emph{(b)} For modest dephasing ($\gamma=0.32$), the entanglement becomes bound. \emph{(c)} For stronger dephasing ($\gamma=0.60$), the entanglement disappears and the state becomes separable. Figure reproduced with permission from Ref.~\cite{Barreiro:2010:aa}. }
\label{fig:entan}
\end{figure}
Ion traps have also been used to experimentally explore the dynamics of entanglement under the influence of an environment. For example, Barreiro et al.\ \cite{Barreiro:2010:aa} reported an experiment in which four entangled trapped-ion qubits were coupled to an engineered, tunable environment. By varying the amount of dephasing (represented by a parameter $\gamma$) introduced into the multiparticle system and then tomographically reconstructing the resulting state, crossovers between different entanglement regimes were observed (see Fig.~\ref{fig:entan}). In the absence of dephasing ($\gamma=0$), the entangled multiparticle state was shown to violate a Bell--CHSH inequality. With even a relatively small amount of dephasing ($\gamma=0.06$), the state no longer violates the inequality. Around $\gamma=0.3$, the state crosses over into bound entanglement \cite{Horodecki:1998:oo}, i.e., it becomes a state that is entangled but not distillable. Around $\gamma=0.6$, the state becomes completely separable, indicating that all entanglement initially present in the multiparticle state has been lost to dephasing.
As already discussed in Sec.~\ref{sec:dfs}, trapped ions have also been used in experimental studies of decoherence-free subspaces \cite{Kielpinski:2001:uu,Roos:204:pp,Haffner:2005:zz,Langer:2005:uu}, reservoir engineering \cite{Poyatos:1996:um,Myatt:2000:yy,Turchette:2000:aa,Carvalho:2001:ua} (of which the aforementioned studies by Turchette et al.\ \cite{Turchette:2000:aa} are an example), and quantum state engineering \cite{Barreiro:2011:oo,Lin:2013:pp,Kienzler:2015:oo}.
\subsection{\label{sec:other}Other experimental areas}
We shall very briefly list a few other existing and prospective areas for the observation of decoherence.
\paragraph{Quantum dots} Decoherence of electron spins in quantum dots \cite{Hanson:2007:pp} has been studied for a number of sources, including electrostatic fluctuations \cite{Kuhlmann:2013:aa,Arnold:2014:oo}, spin environments \cite{Fischer:2009:ii,Kuhlmann:2013:aa,Urbaszek:2013:pp,Delteil:2014:aa}, and phonon environments representing acoustic vibrations of the crystal lattice \cite{Tighineanu:2018:ii}.
\paragraph{Mechanical quantum resonators} Mechanical quantum resonators \cite{Aspelmeyer:2013:aa,Poot:2012:aa,Greenberg:2012:zz,Blencowe:2004:mm}, coupled either to electronic transducers (``quantum electromechanical systems'' \cite{Blencowe:2004:mm,Poot:2012:aa,Greenberg:2012:zz}) or photon fields (``cavity optomechanical systems'' \cite{Aspelmeyer:2013:aa}), are promising candidates for the generation of spatial macro-superpositions \cite{Arndt:2014:oo}. Such resonators may be effectively treated, under the right conditions, as a one-dimensional quantum harmonic oscillator, which represents the fundamental flexural mode. Several potential decoherence mechanisms have been explored. For example, the role of intrinsic tunneling two-level defects (i.e., spin-$\frac{1}{2}$ particles) as a decohering (and dissipative) environment has been studied \cite{Mohanty:2002:mm,Ahn:2003:mt,Blencowe:2004:mm,Blencowe:2005:cc,Zolfagharkhani:2005:tv,Seoanez:2006:yb,Seoanez:2007:um,Remus:2009:im,Schlosshauer:2008:os,Aspelmeyer:2013:aa}, including consideration of decoherence models in which the resonator interacts with a collection of two-level systems that are in turn subject to a decohering bosonic bath \cite{Schlosshauer:2008:os,Remus:2009:im}. While decoherence in quantum resonators has often been investigated in the context of dissipative processes such as heating (see, e.g., Refs.~\cite{Mohanty:2002:mm,Ahn:2003:mt,Zolfagharkhani:2005:tv,Seoanez:2006:yb,Seoanez:2007:um,Aspelmeyer:2013:aa}), pure dephasing has also been observed and analyzed \cite{Fong:2012:aa,Zhang:2014:oo,Miao:2014:ii,Moser:2014:uu,Maillet:2016:zz}. For example, Ref.~\cite{Maillet:2016:zz} experimentally studied dephasing of a resonator due to a simulated phase reservoir, realized by inducing fluctuations in the resonator frequency through applied voltage noise.
\paragraph{Bose--Einstein condensates} Different kinds of superposition states and nonclassical phenomena have been observed in Bose--Einstein condensates, including: interference fringes between independent, overlapping condensates arising from the indistinguishability of bosons \cite{Andrews:1997:um}; interference between single atoms in a coherently split condensate involving either spatial or internal degrees of freedom (see, e.g., Refs.~\cite{Shin:2004:lo,Gross:2010:gg}); and many-particle entanglement \cite{Tura:2014:oo,Schmied:2016:ll,Pezze:2018:uu}. For Bose--Einstein condensates described by a superposition of macroscopically different particle numbers (see, e.g., Refs.~\cite{Cirac:1998:mm,Ruostekoski:1998:mm,Gordon:1999:mh,Dunningham:2001:da,Calsamiglia:2001:tt,Louis:2001:mu,Micheli:2003:jn}), collisional decoherence due to scattering processes between condensate and noncondensate atoms has been studied theoretically \cite{Dalvit:2000:bb}. Decoherence of phonons representing the collective quantum excitations of the condensate atoms in an isolated Bose--Einstein condensate has also been modeled \cite{Howl:2017:aa}. Ref.~\cite{Schrinski:2017:yy} considered the coherent splitting of a Bose--Einstein condensate into two distinct momentum modes traversing a Mach--Zehnder-like interferometer, such that the accumulated phase difference between the two arms of the interferometer leads to an interference signal at the output. The authors investigated the susceptibility of this interference signal to decoherence processes, as well as to hypothetical collapse theories such as continuous spontaneous localization models \cite{Bassi:2003:yb,Adler:2007:um,Bassi:2010:aa}.
\subsection{\label{sec:exper-tests-quant}Prospective tests of quantum mechanics}
Decoherence experiments are also useful for testing the universal validity of quantum mechanics \cite{Leggett:2002:uy,Marshall:2003:om,Bassi:2005:om,Pikovski:2012:aa,Arndt:2014:oo,Wan:2016:oo,Kaltenbaek:2016:pp,Stickler:2018:ii}, most notably with respect to the hypothetical presence of a novel nonunitary mechanism in nature that would break the linearity of the Schr\"odinger time evolution and lead to wave-function collapse. Such mechanisms that modify the linear structure of quantum mechanics are known under the headings of collapse theories, dynamical reduction models, and continuous spontaneous localization models; see Ref.~\cite{Bassi:2003:yb} for a comprehensive review. As long as the observable effect of such nonlinearities is to effectively destroy or prevent quantum interferences, the challenge is to distinguish such effects from those produced by ordinary quantum decoherence \cite{Adler:2007:um,Bassi:2010:aa}. One would need to sufficiently shield the system from decoherence in order to unambiguously isolate the postulated reduction effect. This goal is difficult to achieve, because for the collapse mechanism to become appreciable, the size of the system must be sufficiently large, in which case decoherence will in general be strong as well \cite{Tegmark:1993:uz,Nimmrichter:2013:aa}.
The superpositions realized in current experiments are still not sufficiently macroscopic to rule out collapse theories, although it has been demonstrated \cite{Nimmrichter:2011:pr} that matter-wave interferometry with large molecular clusters (in the mass range between $10^6$ and $\unit[10^8]{amu}$) would be able to test the collapse theories proposed in Refs.~\cite{Adler:2007:um,Bassi:2010:aa}; such experiments may soon become technologically feasible \cite{Hornberger:2012:ii,Arndt:2014:oo}. Other promising avenues for testing quantum mechanics include motional superposition states of micromechanical oscillators \cite{Marshall:2003:om,Pikovski:2012:aa}, interference of free nanoparticles \cite{Romero:2011:aa,Wan:2016:oo} (an approach that offers the prospect of spatial superpositions separated by \unit[100]{nm} for particles on the order of $\unit[10^9]{amu}$ \cite{Wan:2016:oo}), and molecular nanorotors \cite{Stickler:2018:ii}. Ultimately, experiments carried out in space rather than on Earth might be able to push the limit for macroscopic superpositions to objects involving on the order of $10^{10}$ atoms \cite{Kaltenbaek:2016:pp}. In such space-based experiments, low background gas pressures ($\lesssim \unit[10^{-13}]{Pa}$) would minimize collisional decoherence, low temperatures ($\lesssim \unit[20]{K}$) would minimize thermal decoherence, and microgravity ($\lesssim \unit[10^{-9}]{g}$) would minimize decoherence induced by gravitational time dilation \cite{Pikovski:2015:oo}, potentially enabling tests of quantum gravity models \cite{Kaltenbaek:2016:pp}.
\begin{figure}
\caption{Macroscopicity of mechanical superpositions reached in existing (top) and prospective (bottom) experiments, as reported in Refs.~\cite{Nimmrichter:2013:aa, Arndt:2014:oo}. The macroscopicity is quantified by a parameter $\mu$ defined by Nimmrichter and Hornberger \cite{Nimmrichter:2013:aa}. It represents the susceptibility of the superposition states to minimal modifications of quantum mechanics that would induce a dynamical reduction of the density operator to a classical mixture. References to existing experiments are as follows: neutron interference \cite{Zeilinger:1982:oo}; persistent current superpositions in SQUIDs \cite{Friedman:2000:rr} (see Sec.~\ref{sec:superc-syst}); far-field interference of Na atoms \cite{Keith:1988:uu}; far-field interference of C$_{60}$ \cite{Arndt:1999:rc} (see Sec.~\ref{sec:matt-wave-interf}); Mach--Zehnder interference of Cs \cite{Chung:2009:oo}; Talbot--Lau interference of PFNS8 \cite{Gerlich:2011:aa}. References to prospective experiments are as follows: membrane phonons refer to the experiment of Ref.~\cite{Teufel:2011:oo} extended in such a way that more than 1,000 oscillation cycles between the zero-phonon and one-phonon states become observable; the hypothetical giant SQUID refers to a loop of length \unit[20]{mm}, a wire cross-section of $\unit[100]{\mu m^2}$, and a coherence time of \unit[1]{ms}; Talbot--Lau interferometry at $\unit[10^5]{amu}$ \cite{Nimmrichter:2011:pr}; oscillating micromirror \cite{Marshall:2003:om}; nanosphere interference \cite{Romero:2011:aa}; and OTIMA nanoparticle interference refers to an all-optical matter-wave interferometer in the time domain using pulsed ionization gratings \cite{Nimmrichter:2011:pr}. Figure reproduced with permission from Ref.~\cite{Arndt:2014:oo}.}
\label{fig:macro}
\end{figure}
A related issue of interest is the question of how one may best quantify the ``catness''---i.e., the macroscopicity or ``size''---of a given superposition state. Various measures of macroscopicity have been suggested \cite{Leggett:1980:yt,Leggett:2002:uy,Dur:2002:pp,Bjork:2004:pp,Korsbakken:2007:pp,Marquardt:2008:ii,Lee:2011:oo,Frowis:2012:zz,Nimmrichter:2013:aa}. Most are focused on analyzing particular representations of quantum states and rely on quantum-information-theoretic measures such as the quantum Fisher information \cite{Frowis:2012:zz}. Such approaches tend to depend on particular basis choices for the decomposition of the wave function and can make it difficult to compare macroscopicities of states between physically different systems. Nimmrichter and Hornberger \cite{Nimmrichter:2013:aa} have introduced an alternative measure of macroscopicity that quantifies the extent to which a given superposition state would be capable of ruling out small modifications of quantum mechanics. To represent such a modification in the most general and model-independent way, the authors considered a dynamical-semigroup framework in which a dynamical generator obeying certain invariance and symmetry conditions is added to the evolution equation for the density operator of an $N$-particle system such that superpositions of macroscopically distinct states are dynamically transformed into classical mixtures. Specific collapse models, such as continuous spontaneous localization models \cite{Bassi:2003:yb}, may be recovered as special cases within this framework. The effect of the modification can be quantified in terms of the resulting coherence time $\tau$ of the superposition, leading to the definition of a corresponding macroscopicity parameter $\mu$ \cite{Nimmrichter:2013:aa}. Figure~\ref{fig:macro} shows estimates of $\mu$ for several existing and prospective experiments. One sees that, by this measure, the spatial superpositions involved in matter-wave interferometry (see Sec.~\ref{sec:matt-wave-interf}) exhibit some of the largest macroscopicity; the aforementioned proposed experiments on free nanoparticles \cite{Romero:2011:aa,Wan:2016:oo} and micromechanical oscillators \cite{Marshall:2003:om,Pikovski:2012:aa} would also rank high on the macroscopicity scale.
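In rough terms, the macroscopicity of Ref.~\cite{Nimmrichter:2013:aa} may be written as
\begin{equation*}
\mu = \log_{10}\!\left( \frac{\tau_e}{\unit[1]{s}} \right),
\end{equation*}
where $\tau_e$ denotes the coherence-time parameter of the hypothetical classicalizing modification, referenced to a single electron, that the observed (or proposed) superposition experiment would be able to rule out; we refer the reader to Ref.~\cite{Nimmrichter:2013:aa} for the precise definition.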
\section{\label{sec:impl-found-quant}Decoherence and the foundations of quantum mechanics}
Since the early days of quantum mechanics, the interpretation of the quantum formalism and its attendant foundational questions have been the subject of much debate (see, for example, Bacciagaluppi and Valentini's analysis of the 1927 Solvay conference \cite{Bacciagaluppi:2006:yq}). Especially given that decoherence theory was ``discovered'' only relatively recently, it is natural to ask what role decoherence may play in addressing foundational problems and informing the existing interpretations of quantum mechanics. One of the central topics in the foundations of quantum mechanics is known as the quantum measurement problem \cite{Bub:1997:iq,Wigner:1963:yt,Fine:1970:iq,Schlosshauer:2003:tv,Wallace:2008:ii,Schlosshauer:2011:ee}, and in Sec.~\ref{sec:mmtprob} we will discuss whether decoherence has anything of substance to say about it. In Sec.~\ref{sec:interp}, we will then briefly review the role that decoherence plays, or may play, in the various interpretations of quantum mechanics. In Sec.~\ref{sec:niels-bohrs-views}, we will comment on Niels Bohr's views on the primacy of classical concepts and their relationship to the quantum--classical correspondence described by decoherence. For in-depth discussions of the connections between decoherence and the foundations of quantum mechanics, see, for example, Refs.~\cite{Bacciagaluppi:2003:yz,Schlosshauer:2003:tv,Schlosshauer:2006:rw,Schlosshauer:2007:un}. Looking beyond the subject of decoherence, the interviews collected in Ref.~\cite{Schlosshauer:2011:ee} provide an overview of contemporary attitudes toward the interpretation of quantum mechanics.
\subsection{Decoherence and the measurement problem\label{sec:mmtprob}}
Application of the unitary Schr\"odinger evolution to a measuring apparatus interacting with a system prepared in a quantum superposition state cannot dynamically describe the stochastic selection of a particular term in the superposition as the measurement outcome (the ``collapse of the wave function''); rather, system and apparatus end up in an entangled state, with all terms of the original superposition still present and quantum-correlated with different apparatus states. This is the measurement problem: the question of how to reconcile the linear, deterministic evolution described by the Schr\"odinger equation with our observation of the occurrence of random measurement outcomes. Whether one considers the measurement problem a genuine difficulty depends strongly on one's interpretation of quantum states (see Ref.~\cite{Schlosshauer:2011:ee} for a representation of different views on the issue). Generally, the need to supply a dynamical account of the reduction of the superposition to a single outcome in the course of a measurement is much more acute when the quantum state is construed as a real, physical entity, rather than as encapsulating an observer's information or beliefs. (The latter, ``epistemic'' view is most radically, and consistently, realized in the QBist interpretation of quantum mechanics \cite{Fuchs:2014:pp}, in which quantum states represent an observer's beliefs---his probabilistic expectations---about his future experiences resulting from his interactions with the system.)
The measurement problem as just defined cannot be solved by decoherence \cite{Schlosshauer:2003:tv,Schlosshauer:2007:un}. This is so for two reasons. First, the dynamics of decoherence processes are based entirely on the standard, unitary Schr\"odinger evolution. Second, the predictively relevant part of decoherence theory relies on reduced density matrices, which are derived from the requirement that they encode the correct quantum statistics (expectation values) for all measurements pertaining to only a subset of degrees of freedom of a composite system in a multipartite (and in general entangled) state. This derivation \emph{presumes} the existence and validity of the usual measurement axioms of quantum mechanics, in particular, the collapse postulate and Born's rule. In other words, for the kinds of entangled quantum states produced by decoherence-type interactions to be interpreted as describing a situation in which the system becomes ``classical,'' we need to take the existence of measurement outcomes as \emph{a priori} given, or otherwise give an account outside of decoherence of how measurement outcomes are produced, because the property of classicality is ultimately a statement about measurement statistics. Thus decoherence, by itself, cannot address the measurement problem in any substantial way.
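To make the second point concrete, consider the standard schematic illustration. Suppose the interaction has produced a system--environment state of the form $\ket{\Psi} = \sum_n c_n \ket{s_n}\ket{E_n}$. The reduced density matrix of the system is then
\begin{equation*}
\rho_S = \mathrm{Tr}_E \ket{\Psi}\bra{\Psi} = \sum_{n,m} c_n c_m^* \braket{E_m}{E_n}\, \ket{s_n}\bra{s_m},
\end{equation*}
and the familiar statement that the off-diagonal terms are suppressed when the environmental states become approximately orthogonal, $\braket{E_m}{E_n} \approx \delta_{nm}$, acquires its meaning as a statement about (local) measurement statistics only via the trace prescription, which in turn is justified by Born's rule and the collapse postulate.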
Of course, to say that decoherence has no bearing on the measurement problem---or on any of the ``big'' foundational problems in general---is not to suggest that decoherence and its underlying ideas cannot be of relevance in the investigation of fundamental questions. In fact, to give just one example, further explorations of the role of the environment, such as those undertaken in the development of quantum Darwinism (see Sec.~\ref{sec:prol-inform-quant}), have already shed valuable light on deeper issues concerning information transfer, amplification, irreversibility, and communication in the quantum setting \cite{Zurek:2003:pl,Ollivier:2003:za,Ollivier:2004:im,Blume:2004:oo,Blume:2005:oo,Zurek:2009:om,Riedel:2010:un,Riedel:2011:un,Riedel:2012:un,Streltsov:2013:oo,Zurek:2013:xx,Zurek:2018:om,Zurek:2018:on} (see Ref.~\cite{Zurek:2009:om} for an overview of some of the relevant ideas).
If one takes the quantum measurement problem to include the preferred-basis problem \cite{Schlosshauer:2007:un}, and if the preferred-basis problem is understood in the sense defined in Sec.~\ref{sec:envir-induc-supers}, then decoherence solves it, as discussed there. Indeed, the ability of decoherence to dynamically define preferred bases is exploited in certain interpretations of quantum mechanics (see the following Sec.~\ref{sec:interp}).
\subsection{Decoherence in interpretations of quantum mechanics\label{sec:interp}}
The interplay between decoherence and the interpretation of quantum mechanics goes back to the birth of decoherence. As mentioned in the Introduction, decoherence theory itself initially came about essentially as a by-product of Zeh's development of an interpretation in the mold of Everett's many-worlds interpretation \cite{Zeh:1970:yt}. Since then, various interpretations have been assessed and refined in light of the insights and results brought about by the decoherence program. Most notably, decoherence has been used to define certain structural elements in interpretations, as well as to identify internal consistency issues. Below, we shall give just a few examples; the interested reader is pointed to Refs.~\cite{Schlosshauer:2003:tv, Bacciagaluppi:2003:yz} for in-depth discussions of the interplay between decoherence and interpretations.
In Everett-style ``many worlds'' interpretations of quantum mechanics \cite{Everett:1957:rw,Wallace:2010:im}, the quantum state is interpreted realistically and never collapses; our observation of definite measurement outcomes is then explained as a continuous ``branching'' of universes, worlds, observers, and minds in the course of measurement-like interactions. Another version, which arguably includes Everett's own conception, interprets the global entangled state as merely describing relations between states (``relative-state interpretation''); see also Refs.~\cite{Rovelli:1996:rq,Mermin:1998:ii}. The preferred-basis problem is particularly acute in such interpretations, since the particular decomposition of the global quantum state (representing, in principle, the entire universe) defines the ``worlds'' (or ``relations'') and thus their properties at each instant in time; those worlds must also be appropriately connected in time. Here, the pointer states defined by the stability criterion of decoherence (see Sec.~\ref{sec:envir-induc-supers}) provide a ready-made solution, and in this way decoherence theory has played a critical role in defining the branching structure in Everett-style interpretations \cite{Zurek:1998:re,Butterfield:2001:ua,Wallace:2003:iq,Wallace:2003:iz,Wallace:2010:im}. Such an approach does not need to define the worlds \emph{a priori} or by means of an external rule; instead, the worlds are defined dynamically through the standard Schr\"odinger evolution, and since they are dynamically stable, they lead to robust, temporally extended trajectory-like branches. A relative-state interpretation that draws heavily from the insights and structures provided by decoherence theory is the ``existential interpretation'' of Zurek \cite{Zurek:1993:pu,Zurek:1998:re,Zurek:2004:yb}. This interpretation was later extended to include the results of quantum Darwinism \cite{Zurek:2009:om}, as well as a decoherence-inspired account of the origin of Born's rule based on symmetry and invariance properties of entangled system--environment states (``environment-assisted invariance'') \cite{Zurek:2002:ii,Zurek:2003:rv,Zurek:2003:pl,Zurek:2004:yb,Zurek:2009:om,Zurek:2018:on,Schlosshauer:2003:ms,Barnum:2003:yb,Mohrhoff:2004:tv}.
In modal interpretations of quantum mechanics \cite{Clifton:1996:op}, the physical quantity represented by an observable may be assigned a definite value even if the system is not in an eigenstate of that observable. The assignment of such definite values (corresponding to well-defined physical properties) must be in agreement with the prediction of quantum mechanics; in particular, the proper Born probabilities and time evolution must be recovered. Also, at least on macroscopic scales the modally assigned definite values ought to correspond to the definite ``classical'' quantities of our experience, such as well-localized positions. Recognizing the importance of environmental interactions highlighted by decoherence theory, some modal interpretations have derived their value assignments from states obtained from an orthogonal decomposition of the decohered reduced density matrix \cite{Bacciagaluppi:1996:po,Bene:2001:po}. For finite-dimensional state spaces, the resulting states are found to be very close to the states that would be dynamically selected by the stability criterion of decoherence, ensuring proper classicality of the modally assigned properties \cite{Bacciagaluppi:1996:po,Bene:2001:po}. For infinite-dimensional state spaces, however, this agreement often breaks down; for example, for an environmental-scattering model it was shown that the modal properties obtained from the orthogonal decomposition of the decohered reduced density matrix were significantly delocalized while the pointer states indicated tight localization \cite{Bacciagaluppi:2000:yz}. Such inconsistencies can pinpoint limitations and empirical inadequacies of certain types of modal interpretations.
The consistent-histories interpretation of quantum mechanics \cite{Griffiths:1984:tr,Omnes:1994:pz,Griffiths:2002:tr} dispenses with the usual notions of measurement and instead defines time-ordered sequences of events (``histories'') for a closed system and assigns appropriate probabilities to these sequences. As a minimal requirement, such sets of histories must fulfill a consistency condition \cite{Griffiths:1984:tr,Omnes:1994:pz,Griffiths:2002:tr} to ensure the applicability of Boolean logic in the form of the additivity of probabilities. This, however, is not enough, as most consistent histories do not exhibit appropriate quasiclassicality for macroscopic systems \cite{GellMann:1990:uz,GellMann:1991:pp,Zurek:1993:pu,Paz:1993:ww,Albrecht:1993:pq,Dowker:1995:pa,Dowker:1996:ch}. To address this issue, the pointer bases obtained from decoherence have frequently been used to dynamically yield consistent, quasiclassical histories \cite{Zurek:1993:pu,Paz:1993:ww,Albrecht:1992:rz,Albrecht:1993:pq,Twamley:1993:bz}, and the importance of ``records'' (represented by stable system--environment correlations) for the definition of quasiclassical histories has been emphasized repeatedly \cite{Albrecht:1992:rz,Albrecht:1993:pq,Paz:1993:ww,Zurek:1993:pu,Zurek:2002:ii,GellMann:1998:xy}. In particular, the redundant environmental encoding of such records, as described by quantum Darwinism, has been identified as a key mechanism for ensuring consistent, stable, quasiclassical, objective histories \cite{Riedel:2016:oo} (see also Refs.~\cite{Zurek:1993:pu,Paz:1993:ww,Zurek:2002:ii,Zurek:2003:pl}).
\subsection{\label{sec:niels-bohrs-views}Bohr's views on the primacy of classical concepts}
In Niels Bohr's writings on quantum mechanics, the indispensability and primacy of ``classical concepts'' (such as position and momentum) are emphasized repeatedly (see, for example, Refs.~\cite{Bohr:1949:mz,Bohr:1931:ii,Bohr:1935:re,Bohr:1996:mn,Bohr:1987:oo,Bohr:1958:lu}). Indeed, Howard has stated that ``the doctrine of classical concepts turns out to be more fundamental to Bohr's philosophy of physics than are better-known doctrines, like complementarity'' \cite[p.~202]{Howard:1994:lm}. Given that decoherence theory describes a dynamical emergence of classicality, it is not surprising that it has sometimes been suggested that decoherence makes Bohr's insistence on fundamental classical concepts superfluous. For example, Joos has traced the birth of the ideas of decoherence theory to a dissatisfaction with the ``orthodoxy of the Copenhagen school'' \cite[p.~54]{Joos:2006:yy}. He has argued that ``the message of decoherence'' is that ``we do not need to take classical notions as the starting point for physics,'' because ``these emerge through the dynamical process of decoherence from the quantum substrate'' \cite[p.~77]{Joos:2006:yy}. Similarly, Zeh \cite{Zeh:2000:rr} has asserted that
\begin{quote} the Heisenberg--Bohr picture of quantum mechanics can now be claimed dead. Neither classical concepts, nor any uncertainty relations, complementarity, observables, quantum logic, quantum statistics, or quantum jumps have to be introduced on a fundamental level. \end{quote}
Any analysis of such claims is complicated by the variety of meanings of the term ``classical''---referring variously to concepts, dynamical properties, statistics, phenomena, laws, or theories. A closer reading of Bohr's views reveals that his insistence on the primacy of classical concepts is chiefly grounded in epistemological concerns, and that it pertains to his understanding of the functional role of experiments \cite{Camilleri:2015:oo,Schlosshauer:2017:oo}. For Bohr, classical concepts are indispensable because, without them, it would be impossible to acquire empirical knowledge of the world through experiments. Therefore, according to Bohr, any interpretation of quantum mechanics must in the end fall back on the use of classical concepts, rendering circular any attempt to derive such concepts from the quantum formalism. This suggests that we must clearly distinguish between Bohr's \emph{epistemological} thesis of the primacy of classical concepts based on his view of the \emph{functional} role of an experiment, and the \emph{dynamical} problem of the quantum--classical transition. It is the latter that is addressed by decoherence theory, not the former \cite{Camilleri:2015:oo,Schlosshauer:2017:oo}.
While Bohr repeatedly emphasized the epistemological necessity of classical concepts, he offered only a few oblique comments on the problem of why, physically and dynamically, macroscopic systems and measurement apparatuses may be described in classical terms. These comments mostly involved the ``heaviness'' and large size of macroscopic systems, though they frequently also referred to irreversible amplification effects (see, e.g., Refs.~\cite{Bohr:1958:lu,Bohr:1958:mj}); for example, with regard to measurement apparatuses Bohr stated that they ``concern bodies sufficiently heavy to permit the quantum [effects] to be neglected in their description'' \cite[p.~170]{Bohr:1958:lu}.
Heisenberg, in a tantalizing passage, wrote that \cite[pp.~121--2]{Heisenberg:1989:zb}
\begin{quote}
the system which is treated by the methods of quantum mechanics is in fact a part of a much bigger system (eventually the whole world); it is interacting with
this bigger system; and one must add that the microscopic properties of the bigger system are (at least to a large extent) unknown. \dots The interaction with the bigger system with its undefined microscopic properties then introduces a new statistical element into the description \dots\ of the system under
consideration. In the limiting case of the large dimensions this statistical element destroys the effects of the ``interference of probabilities'' in such a manner that the quantum-mechanical scheme really approaches the classical one in the limit. \end{quote}
In a similar vein, elsewhere Heisenberg suggested that ``the interference terms are \dots\ removed by the partly undefined interactions of the measuring apparatus, with the system and with the rest of the world'' \cite[p.~23]{Heisenberg:1955:lm}. Even though one might identify a faint hint of the later ideas of the decoherence program in Heisenberg's pronouncements, there is no mention of the critical ingredient: entanglement.
In the 1950s and 1960s, several of Bohr's disciples, including Weizs\"acker and Rosenfeld, attempted to develop a physical account of the quantum-to-classical transition based on notions of irreversibility, a strategy they saw as serving as a dynamical justification of Bohr's classical concepts \cite{Camilleri:2015:oo}. The thermodynamic theory of Daneri, Loinger, and Prosperi \cite{Daneri:1962:om}, which built on Bohr's hints concerning the role of irreversibility and amplification, is perhaps the best known (though circular \cite{Wigner:1995:jm, Bub:1971:ll}) approach. Yet, none of these early efforts recognized the role of entanglement in a dynamical explanation of quantum statistics turning into classical-looking distributions.
\section{\label{sec:concluding-remarks}Concluding remarks}
Schr\"odinger called entanglement ``\emph{the} characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought'' \cite[p.~555]{Schrodinger:1935:jn}. He used his eponymous cat paradox to argue how entanglement amplified to macroscopic scales demonstrates the apparent irreconcilability of quantum mechanics with our ``classical'' experience of the everyday world. On this view, entanglement was perceived to be a peculiar quantum feature that would have to be tamed in order to bridge the gap between quantum and classical descriptions. So it is perhaps ironic that entanglement turned out to be the key to a dynamical explanation of the emergence of classicality in quantum mechanics.
Without a doubt, future experiments will realize ever-larger Schr\"odinger cat--like states, and we will continue our journey toward the realization of a quantum computer. A key role in all such endeavors will be played by a deep understanding of decoherence and an ongoing development of decoherence models of increasing complexity and detail. It is indeed remarkable how the basic idea of decoherence---that entanglement of a quantum system with an environment has a dramatic influence on what is observable at the level of the system, an idea already spelled out in the very first paper by Zeh \cite{Zeh:1970:yt}---has enriched so thoroughly our theoretical understanding and experimental control of the quantum-to-classical transition.
We shall close with a quote by Zeh himself, who not only was a pioneer of decoherence theory but remained, for the rest of his life, a steadfast, thoughtful advocate of the inseparability of his discovery from the interpretation of quantum mechanics. In 1996, he humbly observed \cite{Zeh:1996:gy} that decoherence is
\begin{quote} a normal consequence of interacting quantum mechanical systems. It can hardly be denied to occur---but it cannot explain anything that could not have been explained before. Remarkable is only its quantitative (realistic) aspect that seems to have been overlooked for long. \end{quote}
\section*{} \addcontentsline{toc}{section}{References}
\end{document} |
\begin{document}
\title{Optimal discretization of hedging strategies\\
with directional views} \begin{abstract} \noindent We consider the hedging error of a derivative due to discrete trading in the presence of a drift in the dynamics of the underlying asset. We suppose that the trader wishes to find rebalancing times for the hedging portfolio which enable him to keep the discretization error small while taking advantage of market trends. Assuming that the portfolio is readjusted at high frequency, we introduce an asymptotic framework in order to derive optimal discretization strategies. More precisely, we formulate the optimization problem in terms of an asymptotic expectation-error criterion. In this setting, the optimal rebalancing times are given by the hitting times of two barriers whose values can be obtained by solving a linear-quadratic optimal control problem. In specific contexts such as in the Black-Scholes model, explicit expressions for the optimal rebalancing times can be derived. \\
\noindent \textbf{Key words:}\ {Discretization of hedging strategies, delta hedging, hitting times, asymptotic optimality, expectation-error criterion, semi-martingales, limit theorems, linear-quadratic optimal control.}
\end{abstract} \section{Introduction}
In order to manage the risks inherent to the derivatives they buy and sell, practitioners use continuous time stochastic models to compute their prices and hedging portfolios. In the simplest cases, notably in that of the so-called delta hedging strategy, the hedging portfolio obtained from the model is a time varying self financed combination of cash and the underlying of the option. We denote the price at time $t$ of the underlying asset by $Y_t$ and assume it to be a one-dimensional semi-martingale. Hence, in such situations, the outputs of the model are the price of the option together with the number of underlying assets to hold in the hedging portfolio at any time $t$, denoted by $X_t$ (the weight in cash is then deduced from the self financing property). Therefore, assuming zero interest rates, the theoretical value of the model based hedging portfolio at the maturity of the option $T$ is given by $$\int_0^T X_tdY_t.$$
\noindent Typically, the process $X_t$ derived from the model is a continuously varying semi-martingale, requiring continuous trading to be implemented in practice. This is of course physically impossible and would anyway be irrelevant because of the costs induced by microstructure effects. Hence practitioners do not use the strategy $X_t$, but rather a discretized version of it. This means that the hedging portfolio is only rebalanced at some discrete times and thus is held constant between two rebalancing times. Let us denote by $(\tau^n_j)_{j\geq 0}$ an increasing sequence of rebalancing times over $[0,T]$ (the meaning of the parameter $n$ will be explained below). With respect to the target portfolio obtained from the model, the hedging error due to discrete trading $Z_T^n$ is therefore given by \begin{equation*} Z^n_T = \sum_{j=0}^{+\infty}X_{\tau^n_j}(Y_{\tau^n_{j+1}\wedge T} - Y_{\tau^n_{j}\wedge T})- \int_0^TX_tdY_t. \end{equation*} Thus, some important questions in practice are: \begin{itemize} \item What is the order of magnitude of $Z^n_T$ in the case of classical discretization strategies? \item For a given criterion, how to optimize the rebalancing times? \end{itemize}
\noindent The most classical rebalancing scheme is that of equidistant trading dates of the form $$\tau_j^n=jT/n,~j=0,\ldots,n,$$ where $n$ represents the total number of trades on the period $[0,T]$. In this setting, the first question has been addressed in detail. There are two popular approaches to quantify the hedging error $Z^n_T$, both of them being asymptotic, assuming the rebalancing frequency $n/T$ tends to infinity (that is $n$ tends to infinity since $T$ is fixed). A first possibility is to use the $L^2$ norm, where one typically looks for asymptotic bounds of the form \begin{equation*} \mathbb{E}[(Z^n_T)^2]\leq cn^{-\theta},\quad n\to \infty. \end{equation*} Many authors have explored various aspects of this problem in this framework of deterministic rebalancing dates. For European call and put options in the Black-Scholes model, it is shown in \cite{bertsimas2000time} and \cite{zhang1999couverture} that the $L^2$ error has a convergence rate $\theta = 1$. For other options, the convergence rate depends on the regularity of the payoff. For example, it is shown in \cite{gobet2001discrete} that for binary options, the convergence rate is $\theta=1/2$. However, in this context, the convergence rate $\theta = 1$ can be achieved by choosing a suitable non-equidistant deterministic rebalancing grid defined by $$ \tau^n_j = T - T\paren{1 - j/n}^{1/\beta}, $$ with $\beta \in (0, 1]$ being the fractional smoothness in the Malliavin sense of the option payoff, see \cite{geiss2002quantitative}. An asymptotic lower bound for the $L^2$ error is given in \cite{fukasawa2011asymptotically, fukasawa2012efficient} for a general class of rebalancing schemes. \\
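\noindent For concreteness, here is a minimal sketch (in Python, with hypothetical parameter values) of the two deterministic grids just described, namely the equidistant grid and the non-equidistant grid adapted to the fractional smoothness $\beta$ of the payoff.
\begin{verbatim}
import numpy as np

T, n, beta = 1.0, 250, 0.5      # hypothetical maturity, number of dates, smoothness

j = np.arange(n + 1)
tau_equidistant = j * T / n                                # tau_j = j T / n
tau_fractional = T - T * (1.0 - j / n) ** (1.0 / beta)     # tau_j = T - T (1 - j/n)^(1/beta)

# With beta < 1, the second grid concentrates rebalancing dates near maturity,
# which is the usual intuition behind this choice for irregular payoffs.
print(tau_fractional[-5:])
\end{verbatim}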
\noindent The second way to assess the hedging error is through the weak convergence of the sequence of the suitably rescaled random variables $Z^n_T$. When $X$ and $Y$ are It$\hat{\text{o}}$ processes, the case of equidistant rebalancing dates has been investigated in this approach in \cite{bertsimas2000time,hayashi2005evaluating,rootzen1980limit}, where the following convergence in law is proved: \begin{equation} \sqrt{n}Z^n_T\xrightarrow{\mathcal{L}}{} \sqrt{\frac{{T}}{2}}\int_0^T\sigma^X_t\sigma^Y_tdB_t, \label{hm.eq} \end{equation} where $\sigma^X$ and $\sigma^Y$ are the volatilities of $X$ and $Y$ and $B$ is a Brownian motion independent of the other quantities. The case where $X$ and $Y$ are processes with jumps is treated in \cite{tankov2009asymptotic}.\\
\noindent This asymptotic approach has also been recently used in the context where the rebalancing times are random stopping times. Some specific hitting times based schemes derived from a microstructure model are investigated in \cite{robert2010microstructural}. In \cite{fukasawa2011discretization}, the author works with quite general sampling schemes based on stopping times. More precisely, for a given parameter $n$ driving the asymptotic, one considers an increasing sequence of stopping times $$0=\tau_0^n\leq\tau_1^n\leq\ldots\leq\tau_j^n\leq\ldots$$ so that almost surely, $\underset{j\rightarrow\infty}{\text{lim}}\tau_j^n=T$ (meaning in fact that the stopping times are all equal to $T$ for large enough $j$)
and $$ \sup_j (\tau^n_{j+1}-\tau^n_j)$$ tends to $0$ in a suitable sense as $n$ goes to infinity. Under some regularity conditions on the (random) rebalancing times, a general limit theorem for the hedging error is obtained in \cite{fukasawa2011discretization}. It is shown that after suitable renormalization (specified in the next sections), the hedging error converges in law to a random variable of the form \begin{equation}\label{eqn: stable limit fukasawa} \frac{1}{3}\int_0^T s_tdY_t + \frac{1}{\sqrt{6}}\int_0^T \big(a_t^2 - \frac{2}{3}s_t^2\big)^{1/2}\sigma^Y_tdB_t. \end{equation} Here $B$ is a Brownian motion independent of all the other quantities and the processes $s$ and $a$ can be interpreted as the asymptotic local conditional skewness and kurtosis of the increments of the process $X$ between two consecutive discretization dates (see next sections for details).\\
\noindent One can remark a crucial difference between the deterministic discretization schemes associated to \eqref{hm.eq} and the random stopping times case leading to \eqref{eqn: stable limit fukasawa}. For deterministic dates, the discretization error asymptotically behaves as a stochastic integral with respect to Brownian motion. Therefore, it is (essentially) centered. In the case of random discretization dates, one may obtain a ``biased'' asymptotic hedging error because of the presence of the term $$\int_0^T s_tdY_t.$$ Hence, if $s$ does not vanish and $Y$ has non zero drift, the asymptotic hedging error is no longer centered.\\
\noindent From a practitioner viewpoint, this is quite an interesting property. Indeed, it shows that in the presence of market trends, the trader may actually be compensated for the extra risk arising from discrete trading, provided that the rebalancing dates are chosen in an appropriate way.
Of course one may say that it is not the option trader's job to try to get a positive expected return with the hedging strategy. However, knowing that there is a hedging error anyhow, it seems reasonable to optimize it to the trader's benefit.\\
\noindent Hence we place ourselves in the asymptotic high frequency regime where $n$ is large and therefore $$\underset{j}{\text{sup}}(\tau_{j+1}^n-\tau_j^n)$$ is small, meaning that the hedging error should be small. In this setting we address the second question raised above, that is finding the optimal times to rebalance the portfolio. To do so, we simply use an asymptotic expectation-error type criterion. More precisely, we wish to maximize the expectation of the hedging error under a constraint on its $L^2$ norm. This is quite in the spirit of \cite{sepp2013you}, where the author aims at finding an optimal hedging frequency to optimize the Sharpe ratio. Remark that in our context, the $L^2$ norm is more meaningful than the variance since the primary goal of the trader is to make the hedging error small. Our asymptotic approach goes as follows. First, we approximate the law of the renormalized hedging error by that in Equation \eqref{eqn: stable limit fukasawa}. Then we find the processes $a_t^*$ and $s_t^*$ which correspond to optimality in terms of our expectation-error criterion for the family of laws given by \eqref{eqn: stable limit fukasawa}. Finally, we show that we can indeed build a discretization rule which leads to the optimal $a_t^*$ and $s_t^*$ in the limiting distribution of the hedging error.\\
\noindent Note that using an asymptotic framework to design optimal discretizations of hedging strategies has been a quite popular approach in recent years. Such a method (although in a slightly different context) is in particular used in \cite{fukasawa2011asymptotically,fukasawa2011discretization,gobet2012almost} in the continuous setting whereas the case with jumps is investigated in \cite{rosenbaum2011asymptotically}. All these works aim at minimizing some form of transaction costs (typically the number of trades) under some constraint on the $L^2$ norm of the hedging error. Here we also put a constraint on the $L^2$ norm of the hedging error. However, instead of minimizing transaction costs, we maximize the expectation of the hedging error. Thus our viewpoint is that of a trader giving himself a lower bound on the quality of his hedge (the $L^2$ norm of the hedging error), but allowing himself to try to take advantage of market drifts provided the constraint is satisfied.\\
\noindent In practice, our work should probably only be considered as a benchmark. Indeed, we somehow make the highly unreasonable assumption that practitioners observe the drift. This is of course not realistic at all since any kind of statistical estimation of the drift is irrelevant in this high frequency setting. However, some practitioners still have views on the market and our work gives them a way to incorporate their beliefs in their hedging strategies.\\
\noindent The paper is organized as follows. In Section \ref{sec : framework}, we investigate the set of admissible discretization rules, that is those leading to a limiting law of the form \eqref{eqn: stable limit fukasawa}. In particular, we extend the examples provided in \cite{fukasawa2011discretization} by showing that the discretization rules based on hitting times of stochastic barriers are admissible. In Section \ref{sec: optimality}, we consider a first criterion for optimizing the trading times: the modified Sharpe ratio. It enables us to carry out very simple computations. However, the relevance of the modified Sharpe ratio being in fact quite arguable, a more suitable approach in which we consider an expectation-error type criterion is investigated in Section \ref{sec: optimality2}. Using tools from linear-quadratic optimal control theory, explicit developments are provided in the Black-Scholes model in Section \ref{sec: bs}. Finally, the longest proofs are relegated to an appendix.
\section{Assumptions and admissible strategies}\label{sec : framework}
In this section we detail our assumptions on the processes $X$ and $Y$ together with the admissibility conditions for the sampling schemes. \subsection{Assumptions on the dynamics and admissibility conditions} Let $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$ be a filtered probability space. We write $Y$ for the underlying asset. Let $T > 0$ stand for the maturity of the derivative to be hedged. We assume that the benchmark hedging strategy deduced from a theoretical model simply consists in holding a certain number of units of the underlying asset, denoted by $X$, and some cash in a self financed way, under zero interest rates. Throughout the paper, we assume both $Y$ and $X$ are It$\hat{\text{o}}$ processes of the form \begin{equation} \label{eq:itorep} dY_t = b^Y_tdt + \sigma^Y_t dW^Y_t,~~dX_t = b^X_tdt + \sigma^X_t dW^X_t \end{equation} on $[0,T]$, where $W^X$ and $W^Y$ are $\mathbb F$-Brownian motions which may be arbitrarily correlated, and
the coefficients of $X$ and $Y$ satisfy the following technical assumptions. \begin{asmp}\label{assump_model} ${}$ \begin{itemize} \item The processes $b^Y$, $b^X$, $\sigma^Y$ and $\sigma^X$ are
adapted and continuous on $[0,T]$ almost surely. \item The volatility process $\sigma^Y$ of $Y$ is positive on $[0,T]$ almost
surely. \item The volatility process $\sigma^X$ of $X$ is positive on $[0,T)$ almost
surely. \item The instantaneous Sharpe ratio $\rho = b^Y/\sigma^Y$ satisfies \begin{equation*} \mathbb{E}\big[\int_0^T\rho_t^2dt\big]<+\infty. \end{equation*} \end{itemize} \end{asmp} \begin{exam}[The Black-Scholes model]\label{example : bs} \upshape The case that $b^Y_t = b Y_t$ and $\sigma^Y_t = \sigma Y_t$ with
constants $b$ and $\sigma > 0$ corresponds to the Black-Scholes
model. The instantaneous Sharpe ratio $\rho = b/\sigma$ is a
constant. To hedge a call option with payoff $(Y_T-K)_+$ and strike $K > 0$, the standard theory suggests to use the so-called Delta hedging strategy: \begin{equation*} X_t = \Phi(d_1(t,Y_t)), \ \ d_1(t,y) = \frac{\log(y/K) + \sigma^2(T-t)/2}{\sigma\sqrt{T-t}}, \end{equation*} where $\Phi$ stands for the distribution function of a standard Gaussian random variable. By It$\hat{\text{o}}$'s formula, we see that $X$ is an
It$\hat{\text{o}}$ process of the form \eqref{eq:itorep} with $W^X = W^Y$ and \begin{equation*} \begin{split} & b^X_t = \phi(d_1(t,Y_t)) \left\{ \frac{\partial d_1}{\partial t}(t,Y_t) + \frac{\sigma^2}{2}\frac{\partial^2 d_1}{\partial y^2} (t,Y_t) Y_t^2 + b \frac{\partial d_1}{\partial
y}(t,Y_t) Y_t\right\}+\frac{\sigma^2}{2}\big(\frac{\partial d_1}{\partial y} (t,Y_t)\big)^2\phi'(d_1(t,Y_t))Y_t^2, \\ & \sigma^X_t = \sigma \phi(d_1(t,Y_t)) \frac{\partial d_1}{\partial y}(t,Y_t) Y_t,
\end{split} \end{equation*} with $\phi$ the density of a standard Gaussian random variable. Almost surely, $Y_T\neq K$ and therefore both $b^X$ and $\sigma^X$ are continuous on $[0,T]$ and $b^X_T =
\sigma^X_T = 0$. Furthermore $\sigma^X$ is positive on
$[0,T)$. Hence Assumption \ref{assump_model} is satisfied.
\end{exam} \noindent As explained in the introduction, in practice, the trader cannot realize the theoretical strategy $X_t$, which typically implies continuous trading. Hence the quantity $$\int_0^T X_sdY_s$$ only represents a benchmark terminal wealth and $X_t$ is a benchmark hedging strategy. Thus, we consider that this strategy is discretized over the stopping times \begin{equation*} 0=\tau^n_0\leq \tau^n_1\leq \cdots\leq \tau^n_j\leq\cdots, \end{equation*} so that for given $n$, almost surely, $\tau_j^n$ attains $T$ for $j$ large enough. Such an array of stopping times is called a discretization rule. Consequently, if we define the discretized process $X^n$ by \begin{equation*} X^n_t = X_{\tau^n_j}, \quad t\in [\tau^n_j, \tau^n_{j+1}), \end{equation*} the hedging error $Z^n_T$ with respect to the benchmark strategy reads \begin{equation*} Z^n_T = \int_0^T(X^n_{s-} - X_s) dY_s. \end{equation*}
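\noindent To make the above concrete, the following minimal sketch (in Python; all parameter values are hypothetical) simulates a Black--Scholes path, computes the delta $X_t=\Phi(d_1(t,Y_t))$ of Example~\ref{example : bs}, discretizes it on equidistant dates, and evaluates the resulting hedging error $Z^n_T$ by approximating the benchmark integral $\int_0^T X_t\,dY_t$ on a fine grid.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Hypothetical Black-Scholes parameters (zero interest rates)
b, sigma, Y0, K, T = 0.05, 0.2, 100.0, 100.0, 1.0
n_fine, n_rebal = 20000, 50          # fine simulation grid, number of rebalancing dates

rng = np.random.default_rng(0)
dt = T / n_fine
t = np.linspace(0.0, T, n_fine + 1)
dW = rng.normal(0.0, np.sqrt(dt), n_fine)
Y = Y0 * np.exp(np.cumsum((b - 0.5 * sigma ** 2) * dt + sigma * dW))
Y = np.concatenate(([Y0], Y))

# Delta X_t = Phi(d_1(t, Y_t)) on [0, T), evaluated on the fine grid
d1 = (np.log(Y[:-1] / K) + 0.5 * sigma ** 2 * (T - t[:-1])) / (sigma * np.sqrt(T - t[:-1]))
X = norm.cdf(d1)

# Benchmark (continuously rebalanced) gain, approximated by a Riemann-Ito sum
dY = np.diff(Y)
benchmark = np.sum(X * dY)

# Discretized strategy: hold X_{tau_j} between equidistant dates tau_j = j T / n_rebal
step = n_fine // n_rebal
X_disc = X[::step].repeat(step)[:n_fine]
Z_n_T = np.sum(X_disc * dY) - benchmark
print("discrete hedging error Z^n_T =", Z_n_T)
\end{verbatim}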
\noindent We now define the admissibility conditions for our discretization rules which we comment in the next subsection.
\begin{cond}[Admissibility conditions]\label{Condition I} A discretization rule $(\tau_j^n)$ is admissible if there exist continuous $\mathbb F$-adapted processes $a$ and $s$ satisfying \begin{equation}\label{asmp: integrability a s} \E{\int_0^T\big(1+(\rho_t)^2\big)(a_t^2+s_t^2)(\sigma^Y_t)^2dt}< \infty, \end{equation} and a positive sequence $\varepsilon_n$ tending to zero such that: \begin{itemize} \item The first two moments of the renormalized hedging error $\varepsilon_n^{-1}Z^n_T$ converge
to those of a random variable of the form \begin{equation}\label{eqn: stable limit} Z^*_{a,s}=\frac{1}{3}\int_0^T s_tdY_t + \frac{1}{\sqrt{6}}\int_0^T\paren{a_t^2 - \frac{2}{3}s_t^2}^{1/2}\sigma^Y_tdB_t, \end{equation} that is, \begin{equation}\label{moment2} \mathbb{E}[\varepsilon_n^{-1}Z^n_T]\rightarrow \mathbb{E}[Z^*_{a,s}], \ \ \mathbb{E}[(\varepsilon_n^{-1}Z^n_T)^2]\rightarrow \mathbb{E}[(Z^*_{a,s})^2], \end{equation} where $B$ is a Brownian motion independent of all the other quantities. \item Almost surely, the processes $a_t$ and $s_t$ satisfy $a_t^2 \geq
s_t^2$, for all $t \in [0,T]$. \end{itemize} \end{cond}
\subsection{Comments on the admissibility conditions} \noindent Equation \ref{asmp: integrability a s} is simply a technical integrability condition. We now give the interpretation of the sequence $\varepsilon_n$. Recall that for fixed $n$, we deal with an increasing sequence of stopping times $(\tau_j^n)$ over $[0,T]$. Typically, $\varepsilon_n^2$ will represent the order of magnitude of the interarrival time $\tau_{j+1}^n-\tau_j^n$. For example, in the case of equidistant trading times with frequency $n/T$, $\varepsilon_n$ can simply be taken equal to $n^{-1/2}$. In the case of the hitting times based scheme consisting in rebalancing the portfolio each time the process $X$ has varied by $\nu_n$, where $\nu_n$ is a deterministic sequence tending to zero, one can choose $\varepsilon_n=\nu_n$ (since the order of magnitude of the time interval between two hitting times is $\nu_n^2$). \\
\noindent The specific form \eqref{eqn: stable limit} may appear rather ad hoc at first sight. However, it is in fact quite natural. Indeed, Proposition~\ref{proposition: stable conv} below, which is proved in Appendix and used to show the main result of the next subsection, indicates that as soon as the quadratic covariations $\varepsilon_n^{-2}\crochet{Z^n}$ and $\varepsilon_n^{-1}\crochet{Z^n, Y}$ have regular limits, the form \eqref{eqn: stable limit} appears for the weak limit of the renormalized hedging error. So the idea for this admissibility condition is that in our asymptotic approach, we want to work in regular cases where the renormalized hedging error can be approximated by a random variable of the form \eqref{eqn: stable limit}. However, our asymptotic optimality criterion will be based on the first two moments of the renormalized hedging error only. Therefore, we just require these first two moments to be asymptotically close to those of a random variable of the form \eqref{eqn: stable limit} (in particular we do not impose the convergence in law of the renormalized hedging error towards $Z^*_{a,s}$, although this is the underlying idea behind this admissibility condition). We now give Proposition \ref{proposition: stable conv}.
\begin{prop}\label{proposition: stable conv} If there exist a sequence $\varepsilon_n \to 0$ and continuous processes $s$ and
$a$ such that \begin{align}
&\varepsilon_n^{-2}\crochet{Z^n}\rightarrow\frac{1}{6}\int_0^{\cdot}a_u^2(\sigma^Y_u)^2du,\label{asmp: limit 2} \\ &\varepsilon_n^{-1}\crochet{Z^n, Y}\rightarrow \frac{1}{3}\int_0^{\cdot}s_u(\sigma^Y_u)^2du,\label{asmp: limit 3} \end{align} uniformly in probability on $[0,T]$, then $\varepsilon_n^{-1}Z^n$ converges in law to \begin{equation}\label{condstabconv} \frac{1}{3}\int_0^\cdot s_tdY_t + \frac{1}{\sqrt{6}}\int_0^\cdot\paren{a_t^2 - \frac{2}{3}s_t^2}^{1/2}\sigma^Y_tdB_t \end{equation} in $C[0,T]$. In particular the convergence in law of $\varepsilon_n^{-1}Z^n_T$ to $Z_{a,s}^\ast$ defined by \eqref{eqn: stable limit}
holds. If in addition, \begin{equation} \label{Condition add} \varepsilon_n^{-4/3}\sup_{j\geq 0}(\tau^n_{j+1}\wedge T_0 - \tau^n_j \wedge T_0)\to 0 \end{equation} in probability, for all $T_0 \in [0,T)$, then almost surely $a_t^2 \geq s^2_t$ for
all $t \in [0,T]$. \end{prop}
\noindent We now consider the processes $a^2_t$ and $s_t$ appearing in the admissibility conditions. We place ourselves in the situation where Proposition \ref{proposition: stable conv} can be applied. In that case, an inspection of the proof of this proposition shows that the inequality $a_t^2 \geq s^2_t$ essentially follows from the elementary fact that $\mathbb{E}[\Delta^2]\mathbb{E}[\Delta^4] \geq \mathbb{E}[\Delta^3]^2$ for a general random variable $\Delta$ (a consequence of the Cauchy-Schwarz inequality applied to $\Delta$ and $\Delta^2$). Indeed, $s_t$ and $a^2_t$ are respectively related to the local third and fourth conditional moments of the increments of $X$. Proposition~\ref{prop: intuition a s} below, which is proved in the Appendix and used to show the main result in the next subsection, somehow illustrates the connections between $a^2_t$ and $s_t$ and the conditional moments. Thus we give it now. Let $\Delta_{j,n} = X_{\tau^n_{j+1}} - X_{\tau^n_j}$ be the increment of $X$ between $\tau^n_j$ and $\tau^n_{j+1}$ and $N^n_t$ be the number of rebalancing times until time $t$: \begin{equation*} N^n_t = \max\accro{j \geq 0 \vert \tau^n_j \leq t }. \end{equation*} The following proposition holds.
\begin{prop}\label{prop: intuition a s} Let $\varepsilon_n$ be a positive sequence tending to $0$ and $s$ and $a$ be continuous processes. Assume the following: \begin{itemize} \item The family of random variables \begin{equation} \label{sup diff}
\varepsilon_n^{-4} \sup_{t \in [0,T]} |X^n_t - X_t|^4 \end{equation} is uniformly integrable.
\item The following uniform convergences in probability on $[0,T_0]$ hold for all $T_0 \in [0,T)$: \begin{equation} \label{asmp: suff} \begin{split} \varepsilon_n^{-1}&\sum_{j=0}^{N^n_\cdot}\kappa_{\tau^n_j}\mathbb{E}\big[ \Delta_{j, n}^3
\big| \mathcal{F}_{\tau^n_j} \big] \to -\int_0^\cdot s_u (\sigma^Y_u)^2{d}u,\\ \varepsilon_n^{-2}&\sum_{j=0}^{N^n_\cdot}\kappa_{\tau^n_j} \mathbb{E}\big[ \Delta_{j, n}^4
\big| \mathcal{F}_{\tau^n_j} \big] \to \int_0^\cdot a_u^2 (\sigma^Y_u)^2{d}u, \end{split} \end{equation} where $\kappa_u=(\sigma^Y_u/\sigma^X_u)^2$. \end{itemize} Then the convergences \eqref{asmp: limit 2}, \eqref{asmp: limit 3} and \eqref{Condition add} hold. \end{prop}
\noindent Proposition~\ref{prop: intuition a s} is useful to obtain the convergences \eqref{asmp: limit 2} and \eqref{asmp: limit 3} for a given discretization rule since it is usually easy to have approximate values of the conditional moments of the increments. We actually apply this approach in the proof of the main result of the next subsection.
\subsection{Examples of admissible discretization rules}
We show in this section that the most common discretization rules are admissible. We start with hitting times based schemes. We have the following result.
\begin{prop}[Hitting times based discretization rule]\label{prop : hitting strats} Let $\varepsilon_n$ be a positive sequence tending to zero and $\underline{l}$ and $\overline{l}$ be two adapted processes which are positive and continuous on $[0,T]$ almost surely with \begin{equation}\label{regbarrier} \E{\int_0^T\big(1+(\rho_t)^2\big)(\overline{l}_t\vee\underline{l}_t)^2(\sigma^Y_t)^2dt} < \infty. \end{equation} The discretization rule based on the hitting times of $\varepsilon_n \underline{l}_{t}$ or $\varepsilon_n \overline{l}_{t}$ by the process $X$: \begin{equation}\label{def: hitting strat} \tau^n_{j+1} = \inf\accro{t>\tau^n_j: X_t \notin (X_{\tau^n_j}- \varepsilon_n \underline{l}_{t},X_{\tau^n_j}+\varepsilon_n \overline{l}_{t} )} \wedge T \end{equation} is admissible. Moreover, we can take \begin{equation}\label{prop: hitting strat} s_t = \underline{l}_t - \overline{l}_t,\quad a_t^2 =(s_t)^2+\underline{l}_t\overline{l}_t \end{equation} and we also have the convergence \eqref{condstabconv} and therefore the convergence in law of $\varepsilon_n^{-1}Z_T^n$ towards $Z^\ast_{a,s}$ defined by \eqref{eqn: stable limit}. \end{prop} \noindent It is interesting to note here that the limit $Z^\ast_{a,s}$ does not depend on the structure of $X$.\\
\noindent This result is particularly important since many traders monitor the values of the increments of their so-called delta (which corresponds to the process $X$) in order to decide when to rebalance their portfolio. Thus they are indeed using hitting times based strategies. This proposition notably extends the weak convergence results in \cite{fukasawa2011discretization} since it shows that not only constant barriers (between $\tau^n_j$ and $\tau^n_{j+1}$) but also time varying stochastic barriers can be considered. This will be very useful in the next sections since our optimal discretization rules will correspond to hitting times of such barriers. Furthermore, in the proofs, the assumption that the time varying barriers satisfy \eqref{regbarrier} will enable us to deduce quite easily some relevant integrability properties for the hedging error (which would be harder to obtain with locally constant barriers).\\
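\noindent As an illustration of the rule \eqref{def: hitting strat}, here is a minimal sketch (in Python; the trajectory of $X$ and the barrier processes below are hypothetical inputs) which scans a fine-grid path of $X$ and records a rebalancing time whenever the increment since the last rebalancing leaves the moving interval $(-\varepsilon_n\underline{l}_t,\varepsilon_n\overline{l}_t)$.
\begin{verbatim}
import numpy as np

def hitting_time_rebalancing(X, l_low, l_up, eps):
    """Indices of the rebalancing times for the rule
    tau_{j+1} = inf{t > tau_j : X_t - X_{tau_j} not in (-eps*l_low_t, eps*l_up_t)} ^ T,
    evaluated on a discrete time grid with barrier paths l_low, l_up."""
    rebal, last = [0], X[0]
    for i in range(1, len(X)):
        incr = X[i] - last
        if incr <= -eps * l_low[i] or incr >= eps * l_up[i]:
            rebal.append(i)
            last = X[i]
    return np.array(rebal)

# Hypothetical example: Brownian-like delta and constant asymmetric barriers
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 10001)
X = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(t[1]), len(t) - 1))))
taus = hitting_time_rebalancing(X, np.full_like(t, 2.0), np.full_like(t, 1.0), eps=0.05)
print("number of rebalancing dates:", len(taus))
\end{verbatim}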
\noindent Now remark that under the condition $a_t^2 > s_t^2$, we can always find some positive processes $\overline{l}_t$ and $\underline{l}_t$ such that \eqref{prop: hitting strat} is satisfied. Indeed, it is easy to see that the real numbers $\overline{l}_t$ and $-\underline{l}_t$ can be taken as the roots of the quadratic equation $x^2 + s_tx +s_t^2 - a_t^2 = 0$. Under the condition $a_t^2 > s_t^2$, this equation admits two nonzero roots with different signs. Therefore, another interesting property of hitting times based schemes is the following.
\begin{lem}\label{lem : hitting strat univ}
For any pair of limiting processes $s$ and $a$ satisfying \eqref{asmp: integrability a s} and $a_t^2>s_t^2$, we can always build a corresponding admissible discretization rule based on hitting times as in \eqref{def: hitting strat}-\eqref{prop: hitting strat}. \end{lem} \noindent Consequently, if one has some processes $a_t$ and $s_t$ as targets, Lemma \ref{lem : hitting strat univ} implies that a strategy giving rise to these processes in the limiting distribution \eqref{eqn: stable limit} can be found. We will work in this framework in Section \ref{sec: optimality2}. Remark that there are infinitely many strategies for which the renormalized hedging error converges in law to some $Z^*_{a,s}$ with the same $a$ and $s$ as limiting processes. The hitting time strategy is an efficient one among them, in the sense that it requires the smallest number of rebalancing dates in an asymptotic sense, see \cite{fukasawa2011discretization} for details.\\
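\noindent For concreteness, solving the quadratic equation mentioned above (under the condition $a_t^2>s_t^2$, which ensures that the square root below exceeds $|s_t|$ so that both barriers are positive) gives the explicit choice
$$\overline{l}_t=\frac{-s_t+\sqrt{4a_t^2-3s_t^2}}{2},\qquad \underline{l}_t=\frac{s_t+\sqrt{4a_t^2-3s_t^2}}{2},$$
for which one checks directly that $\underline{l}_t-\overline{l}_t=s_t$ and $\underline{l}_t\overline{l}_t=a_t^2-s_t^2$, in accordance with \eqref{prop: hitting strat}.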
\noindent Another classical discretization rule is given by equidistant trading times. Here, the integrability property \eqref{moment2} in the admissibility conditions does not hold in full generality. Compared to the hitting times setting, this is because the deviations of the benchmark strategy are not explicitly controlled by the barriers. Nevertheless, the following example describes a reasonable framework under which such discretization rule is admissible.
\begin{prop}[Equidistant sampling discretization rule]\label{eqstrat} Consider the hedging strategy of a European option with payoff $h(Y_T)$ and replace Assumption \ref{assump_model} by the assumption that the underlying $Y_t$ follows a diffusion process of the form \begin{equation*} dY_t = b(t, Y_t)Y_tdt+\sigma(t,Y_t)Y_t dW_t, \end{equation*} with $b$, $\sigma$ and $h$ some deterministic functions satisfying the regularity assumptions of pp.~21--23 in \cite{zhang1999couverture} (allowing in particular for call and put options in the Black-Scholes model). Define the delta hedging portfolio: $$ X_t = \frac{\partial P}{\partial y} (t,Y_t),\text{ with }P(t,y) = \mathbb E_{(t,y)}^{\mathbb{Q}}[h(Y_T)], $$ where $\mathbb{E}^{\mathbb{Q}}$ denotes the expectation operator under the risk neutral probability. Let $\varepsilon_n$ be a positive sequence tending to zero. Then the equidistant trading times discretization rule: \begin{equation*} \tau^n_j =j\varepsilon_n^2, \quad j=0, \ldots, n, \ldots \end{equation*} is admissible (under the original measure). Moreover, we can take \begin{equation*} s_t=0, \quad a_t^2 = 3(\sigma^X_t)^2. \end{equation*} \end{prop}
\noindent The proof of Proposition \ref{eqstrat} follows easily from previous works. We can first obtain the convergence in law towards $Z^*_{a,s}$ using for example the results of \cite{hayashi2005evaluating}. Indeed, up to localization, we can assume that $\sigma^X$, $\sigma^Y$, $b^X$ and $b^Y$ are bounded. Then the integrability conditions in the mentioned reference are obviously satisfied and the convergence follows. For \eqref{moment2}, it suffices to use Theorem 2.4.1 in \cite{zhang1999couverture} where the convergence of the $L^2$ norm of the
normalized error under the original measure is provided.\\
\noindent Finally, note that the discretization rule based on equidistant trading times will not be of interest for us since the associated $s_t$ process vanishes and so the expectation of the limiting variable is zero.
\section{Asymptotic optimality: a preliminary approach}\label{sec: optimality}
Our viewpoint is that the trader's priority is to get a small hedging error. However, once this error is suitably controlled, he may try to take advantage of the directional views he has on the market. Hence, adopting the asymptotic approximation under which the first two moments of the renormalized hedging error are given by those of $Z^{*}_{a,s}$, we aim at maximizing $\mathbb{E}[Z^{*}_{a,s}]$ while keeping $\mathbb{E}[(Z^{*}_{a,s})^2]$ reasonably small. This very problem is treated in Section \ref{sec: optimality2}.\\
\noindent Here, as a first step, we consider the approximation for $\mathbb{E}[(Z^{*}_{a,s})^2]$ given by $\mathbb{E}[(Z^{*,c}_{a,s})^2]$, where $Z^{*,c}_{a,s}$ denotes the sum of the two integrals with respect to the Brownian motions $W^Y$ and $B$ in the definition of $Z^{*}_{a,s}$ in Equation \eqref{eqn: stable limit}, that is \begin{equation*} Z^{*,c}_{a,s}=\frac{1}{3}\int_0^T s_t\sigma_t^YdW^Y_t + \frac{1}{\sqrt{6}}\int_0^T\paren{a_t^2 - \frac{2}{3}s_t^2}^{1/2}\sigma^Y_tdB_t. \end{equation*} To do so, we place ourselves in this section under the additional admissibility condition that the renormalized hedging error weakly converges in the sense of \eqref{condstabconv} and we take $s_t$ and $a_t^2$ as the processes in the limit \eqref{condstabconv} (so $s_t$ and $a_t^2$ are uniquely defined). Replacing $\mathbb{E}[(Z^{*}_{a,s})^2]$ by $\mathbb{E}[(Z^{*,c}_{a,s})^2]$ is technically very convenient but in practice quite arguable since this approximation is meaningful only when the drift is small. However, our aim here is only to have a first rough idea about the form of the optimal discretization rules. Since we wish to get the moment of order one large while that of order two remains controlled, we consider that we want to maximize the so-called modified Sharpe ratio $S$ defined by \begin{equation*} S=S(a,s)=\frac{\mathbb{E}[Z^{*}_{a,s}]}{\sqrt{\mathbb{E}[(Z^{*,c}_{a,s})^2]}}. \end{equation*} This ratio is said to be modified since we use $\mathbb{E}[(Z^{*,c}_{a,s})^2]$ instead of the variance of $Z^*_{a,s}$.\\
\noindent Hence we are looking for strategies which maximize $S$. To do so, we now introduce the notion of nearly efficient (modified) Sharpe ratio. \begin{dfn}[Nearly efficient Sharpe ratio] The value $S^*\in\mathbb{R}$ is said to be a nearly efficient Sharpe ratio if: \begin{itemize} \item For any admissible discretization rule with associated limiting processes $a$ and $s$, the associated modified Sharpe ratio $S(a,s)$ satisfies $$S(a,s)\leq S^*.$$ \item For any $\eta>0$, there exists a discretization rule with associated limiting processes $a$ and $s$ such that $$S(a,s)\geq S^*-\eta.$$ \end{itemize} \end{dfn} \noindent We only consider nearly efficient ratios since our strategies will not enable us to attain exact efficiency (which would correspond to $\eta=0$ in the previous definition). Of course the slight difference between efficient and nearly efficient ratios has no importance in practice.\\
\noindent In our setting, for any limiting variable $Z^{*}_{a,s}$, we have $$S(a,s)=\frac{\mathbb{E}\Big[\frac{1}{3}\int_0^T s_tb^Y_tdt\Big]}{\Big(\mathbb{E}\Big[\frac{1}{9}\int_0^Ts_t^2(\sigma^Y_t)^2dt+\frac{1}{6}\int_0^T\big(a_t^2-\frac{2}{3}s_t^2\big)(\sigma^Y_t)^2dt\Big]\Big)^{1/2}}.$$ Now, the admissibility condition $a_t^2 \geq s_t^2$ implies $$S(a,s)\leq\frac{\sqrt{6}}{3}\frac{\mathbb{E}\Big[\int_0^T s_tb^Y_tdt\Big]}{\Big(\mathbb{E}\Big[\int_0^Ts_t^2(\sigma^Y_t)^2dt\Big]\Big)^{1/2}}$$ and Cauchy-Schwarz inequality gives $$S(a,s)\leq \frac{\sqrt{6}}{3}\Big(\mathbb{E}\Big[\int_0^T\big(\frac{b^Y_t}{\sigma^Y_t}\big)^2dt\Big]\Big)^{1/2}.$$ This provides an upper bound for the modified Sharpe ratio. We now wish to find a discretization rule enabling to (almost) attain this upper bound. To achieve this, our rule must be so that for the associated processes $a_t$ and $s_t$, the inequalities used above ($a_t^2 \geq s_t^2$ and Cauchy-Schwarz) become almost equalities. This means that $a_t$ should be close to $s_t$ and $s_t$ essentially proportional to $b_t^Y/(\sigma^Y_t)^2$. Furthermore, we want the product $s_tb^Y_t$ to be essentially positive in order to get a positive modified Sharpe ratio. If we look for this rule among the hitting times based schemes specified by two processes $(\underline{l}_t,\overline{l}_t)$, Lemma \ref{lem : hitting strat univ} implies that \begin{itemize} \item the difference $\underline{l}_t-\overline{l}_t$ should be essentially proportional to $b_t/(\sigma^Y_t)^2$, \item the product $\underline{l}_t\overline{l}_t$ should be negligible compared to $(\underline{l}_t-\overline{l}_t)^2$, \item the term $(\underline{l}_t-\overline{l}_t)b^Y_t$ should be essentially positive. \end{itemize} From these remarks together with Proposition \ref{prop : hitting strats}, we easily deduce the following theorem.
\begin{theo}\label{theo_unbiased} Suppose that for all $t\leq T$, $b^Y_t\neq 0$. Then the value \begin{equation*} \frac{\sqrt{6}}{3}\Big(\mathbb{E}\Big[\int_0^T\big(\frac{b^Y_t}{\sigma^Y_t}\big)^2dt\Big]\Big)^{1/2} \end{equation*} is a nearly efficient Sharpe ratio. It is approximately attained by the discretization rule defined for $\lambda>0$ by \begin{equation}\label{discrule1} \tau^{n,\lambda}_{j+1}= \inf \Big\{t > \tau^{n,\lambda}_j; X_t - X_{\tau^{n,\lambda}_j} = -\frac{b^Y_{t}}{(\sigma^Y_{t})^2}e^{\lambda}\varepsilon_n\text{ or }\frac{b^Y_{t}}{(\sigma^Y_{t})^2}e^{-\lambda}\varepsilon_n\Big\}, \quad \tau^n_0 = 0. \end{equation} Indeed, \begin{equation*} \lim_{\lambda\to +\infty} S(\lambda) = \frac{\sqrt{6}}{3}\Big(\mathbb{E}\Big[\int_0^T\big(\frac{b^Y_t}{\sigma^Y_t}\big)^2dt\Big]\Big)^{1/2}, \end{equation*} where $S(\lambda)$ denotes the modified Sharpe ratio obtained for the law of the variable $Z^*_{a,s}$ associated to the discretization rule \eqref{discrule1} with parameter $\lambda$.
\end{theo}
\noindent This result provides simple and explicit strategies for optimizing the modified Sharpe ratio. It is also very easy to interpret. Indeed, we see that in order to take advantage of the drift, one needs to consider asymmetric barriers. The limitation is that we do not really control accurately the magnitude of the hedging error at maturity.\\
\noindent The asymptotic setting simply means that we require $\lambda$ to be quite large while $e^{\lambda}\varepsilon_n$ is small. When using such discretization rule in practice, it is reasonable to consider that the trader fixes a maximal value for the asymmetry between the barriers controlled by $\lambda$. This way he can choose the parameter $\lambda$. Then $\varepsilon_n$ is set to match the bound on $\mathbb{E}[(Z^{*,c}_{a,s})^2]$ that the trader does not want to exceed.
\section{Asymptotic expectation-error optimization}\label{sec: optimality2}
In this section, we now consider a natural expectation-error type criterion in order to optimize our discretization rules. To do so, we work in an asymptotic setting where we are looking for discretization rules which are optimal in the expectation-error sense for their associated limiting random variable $Z^*_{a,s}$. Before giving our main result, we introduce some definitions inspired by classical portfolio theory. \begin{dfn}[Non dominated couple]\label{def2} A couple $(m,v)\in(\mathbb{R}^+)^2$ is said to be non dominated if there exists no admissible discretization rule such that its associated limiting random variable $Z^*_{a,s}$ satisfies $$\mathbb{E}[Z^*_{a,s}]\geq m,~~\mathbb{E}[(Z^*_{a,s})^2]<v.$$ The set of non dominated couples is called the non domination domain. \end{dfn} \begin{dfn}[Nearly efficient couple]\label{def3} A couple $(m,v)\in(\mathbb{R}^+)^2$ is said to be nearly efficient if it is non dominated and for any $\eta>0$, there exists an admissible discretization rule such that its associated limiting random variable $Z^*_{a,s}$ satisfies $$\mathbb{E}[Z^*_{a,s}]=m,~~\mathbb{E}[(Z^*_{a,s})^2]\leq v+\eta.$$ It is efficient if we can take $\eta=0$. \end{dfn} \noindent We introduce the set $\mathcal{Z}_T$ of random variables of the form \begin{equation}\label{solprob} Z_{T,s}=\frac{1}{3}\int_0^T s_tdY_t + \frac{1}{3\sqrt{2}}\int_0^T s_t\sigma^Y_tdB_t, \end{equation} where $B$ is a Brownian motion independent of $\mathcal{F}$ and $s_t$ is an adapted continuous process such that \begin{equation}\label{regul} \mathbb{E}\big[\int_0^T\big(1+(\rho_t)^2\big)s_t^2(\sigma^Y_t)^2dt\big]< \infty. \end{equation} We also define the notions of non dominated and efficient couples with respect to $\mathcal{Z}_T$. The definitions are the same as Definition \ref{def2} and Definition \ref{def3} except that we replace ``admissible discretization rule'' by ``process $s$ satisfying \eqref{regul}'' and ``its associated limiting random variable $Z^*_{a,s}$" by $Z_{T,s}$.\\
\noindent We can now state our main result which enables us to compute efficient discretization rules.
\begin{theo}\label{theo : main result} The following results hold: \begin{itemize} \item The non domination domain coincides with the non domination domain with respect to $\mathcal{Z}_T$. \item Let $(m^*,v^*)$ be an efficient couple with respect to $\mathcal{Z}_T$, with associated optimal process $s^*$. Then $(m^*,v^*)$ is a nearly efficient couple. More precisely, let $\delta>0$ and $(\underline{l}^\delta_t, \overline{l}^\delta_t)$ be defined by \begin{equation*} \underline{l}^\delta_t - \overline{l}^\delta_t = s^*_t, \quad (\underline{l}^\delta_t)^2 -\underline{l}^\delta_t\overline{l}^\delta_t + (\overline{l}^\delta_t)^2 = (s^*_t)^2 + \frac{6\delta}{(\sigma^Y_t)^2}, \end{equation*} that is \begin{equation}\label{eqn : opt barriers} \underline{l}^\delta_t = \sqrt{\frac{(s^*_t)^2}{4} + \frac{6\delta}{(\sigma_t^Y)^2}} + \frac{s^*_t}{2}, \quad \overline{l}^\delta_t = \sqrt{\frac{(s^*_t)^2}{4} + \frac{6\delta}{(\sigma_t^Y)^2}} - \frac{s^*_t}{2}. \end{equation} Then the hitting times based discretization rule specified through the barriers $(\underline{l}^\delta_t, \overline{l}^\delta_t)$ satisfies \begin{equation*} \mathbb{E}[Z^*_{a,s}] = m^*,\quad \mathbb{E}[(Z^*_{a,s})^2] = v^* +\delta T. \end{equation*} \end{itemize} \end{theo}
\noindent We have therefore reduced the impulse control problem of finding the optimal rebalancing times to a classical expectation-error optimization with continuous dynamics. The solutions of this problem can be obtained by solving for $\mu>0$ \begin{equation*} \inf_{(s_t)}\big\{-\mathbb{E}[Z_{T,s}]+\mu\mathbb{E}[(Z_{T,s})^2]\big\}, \end{equation*} for which we can apply the theory of linear-quadratic optimal control, see for example \cite{lim2002mean,zhou2000continuous}. As shown in the next section, we can even obtain closed formulas in the case where the underlying has deterministic drift and volatility. Note that again, our barrier strategies enable us to attain only nearly efficient couples. Indeed, reaching efficient couples would lead to the use of degenerate barriers with $\delta=0$. This does not make sense in practice; however, $\delta$ can of course be chosen small.\\
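\noindent As a quick sanity check of the barriers \eqref{eqn : opt barriers}, here is a minimal sketch (in Python; the inputs $s^*_t$, $\sigma^Y_t$ and $\delta$ are hypothetical values) which computes them pointwise and verifies the two defining identities.
\begin{verbatim}
import numpy as np

def optimal_barriers(s_star, sigma_Y, delta):
    # l_low - l_up = s*  and  l_low^2 - l_low*l_up + l_up^2 = s*^2 + 6*delta/sigma_Y^2
    root = np.sqrt(s_star ** 2 / 4.0 + 6.0 * delta / sigma_Y ** 2)
    return root + s_star / 2.0, root - s_star / 2.0   # (l_low, l_up), both positive

s_star, sigma_Y, delta = 0.8, 0.3, 0.01                # hypothetical values
l_low, l_up = optimal_barriers(s_star, sigma_Y, delta)
assert np.isclose(l_low - l_up, s_star)
assert np.isclose(l_low ** 2 - l_low * l_up + l_up ** 2,
                  s_star ** 2 + 6.0 * delta / sigma_Y ** 2)
\end{verbatim}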
\noindent In practice, once he has chosen the target nearly efficient couple he wants to reach, the trader needs to select $\delta$ and $\varepsilon_n$. Two ideas enabling one to avoid microstructure effects seem natural and easy to implement: \begin{itemize} \item Fix a minimal time $t_{min}$ between two rebalancings. After a rebalancing at a random time, say $\tau$, wait $t_{min}$ and then apply the strategy with $\delta=0$ (that is, rebalance immediately if at $t=\tau+t_{min}$, $X_t-X_{\tau}$ is not inside the interval $(-\varepsilon_n \underline{l}^0_t ,\varepsilon_n \overline{l}^{0}_t)$ and wait for the exit time otherwise). The parameter $\varepsilon_n$ can be chosen according to the average number of transactions the trader is willing to make. \item Fix (roughly) a minimal distance for the closest barrier after a rebalancing. Then compute $\delta$ and $\varepsilon_n$ according to the general level of volatility $\sigma_t^Y$ so that they (approximately) lead to this bound and the average number of transactions the trader is willing to make. \end{itemize}
\section{One explicit example: Black-Scholes model with time varying coefficients}\label{sec: bs}
In this section, we explain how our method can be applied in practice through the simple example of the Black-Scholes model with time varying coefficients. So we assume the underlying follows the dynamics $$ dY_t = Y_t(b_t dt + \sigma_t dW_t), $$ where $b_t$ and $\sigma_t$ are continuous deterministic functions. We also assume $b_t$ and $\sigma_t$ do not vanish. Using the theory of linear-quadratic optimal control, we give an explicit solution for the problem of designing optimal rebalancing times in this specific setting.
\subsection{Explicit formulas}
\noindent We aim at finding the efficient couples for the controlled random variables of the form $Z_{T,s}$ as in \eqref{solprob}. Following \cite{zhou2000continuous}, such problem is classically recast as follows: solving for any $\mu>0$ the optimization problem \begin{equation*} \inf_{(s_t, 0\leq t\leq T)} -\mathbb{E}[{Z_{T,s}}] + \mu \mathbb{E}[(Z_{T,s})^2]=\inf_{(s_t, 0\leq t\leq T)} \mu \mathbb{E}\big[\big(Z_{T,s}-\frac{1}{2\mu}\big)^2\big]-\frac{1}{4\mu}. \end{equation*} Let us define the family of processes of the form $$ d\check Z_t = s_tY_t(\tilde b_tdt + \tilde\sigma_t dW_t),~~\check Z_0 =0, $$ with $\tilde b_t=b_t/3$, $\tilde\sigma_t=\sigma_t/3$ and $s_t$ adapted continuous. Using obvious computations, the independence between the process $B$ in Equation \eqref{solprob} and $\mathcal{F}$, and the fact that $s_t$ is $\mathcal{F}$-adapted, we get $\mathbb{E}[\check Z_T]=\mathbb{E}[Z_{T,s}]$ and $$\mu\mathbb{E}\big[\big(\check Z_T-\frac{1}{2\mu}\big)^2\big]=\mu\mathbb{E}\big[\big(Z_{T,s}-\frac{1}{2\mu}\big)^2\big]-\frac{\mu}{18}\mathbb{E}\big[\int_0^T(s_t\sigma_t Y_t)^2dt\big].$$ Hence, we can equivalently solve \begin{equation*} \inf_{(s_t, 0\leq t\leq T)}\mathbb{E}\big[ \mu \tilde Z_T^2+\frac{\mu}{2}\int_0^T(s_t\tilde\sigma_t Y_t)^2dt\big], \end{equation*} with $$ d\tilde Z_t = s_tY_t(\tilde b_tdt + \tilde\sigma_t dW_t),~~\tilde Z_0 =-\frac{1}{2\mu}. $$
\noindent Using the results of \cite{zhou2000continuous}, which are summarized in Theorem \ref{theoxyz} in Appendix \ref{append: LQ}, the optimal control $s_t^*$ and optimally controlled process $\tilde Z^*_t$ satisfy \begin{equation*}\label{s_star} s^*_tY_t = -\frac{1}{\tilde b_t}\frac{\dot{P_t}}{P_t}\tilde Z^*_t, \end{equation*} where $P_t$ is the solution of the (ordinary) differential equation $$\dot{P_t}=\rho_t^2\frac{P_t^2}{P_t+\mu},~~P_T=2\mu,$$ with $\rho_t = b_t/\sigma_t$. The solution of this equation is given by $$P_t=\frac{\mu}{L\Big(\frac{1}{2}\text{exp}\big(\int_t^T\rho_s^2ds+\frac{1}{2}\big)\Big)},$$ where $L$ is the inverse function of $x\mapsto xe^x$ (the Lambert $W$ function). Moreover, the optimal process $\tilde Z^*$ satisfies $$\frac{d\tilde Z^*_t}{\tilde Z^*_t}= -\frac{\dot{P_t}}{P_t} (dt + \frac{1}{\rho_t}dW_t),~\tilde Z^*_0 = -\frac{1}{2\mu}.$$ Therefore, we obtain $$\mathbb{E}[\tilde Z^*_T]= -\frac{1}{2\mu}\frac{P_0}{P_T}.$$ Using Theorem \ref{theoxyz}, we get \begin{equation*} \mathbb{E}\Big[(\tilde Z^*_T)^2 + \frac{1}{2}\int_0^T(s_t^* Y_t\tilde\sigma_t)^2 dt\Big] =\big(\frac{1}{2\mu}\big)^2\frac{P_0}{P_T}. \end{equation*} Consequently, we have that the optimal variable $Z_{T,s^*}$ satisfies $$\mathbb{E}[Z_{T,s^*}]=\frac{1}{2\mu}\big(1-\frac{P_0}{P_T}\big)$$ and $$\mathbb{E}\big[\big(Z_{T,s^*}-\frac{1}{2\mu}\big)^2\big]=\big(\frac{1}{2\mu}\big)^2\frac{P_0}{P_T}.$$ Hence $$\mathbb{E}[(Z_{T,s^*})^2]=\big(\frac{1}{2\mu}\big)^2(1-\frac{P_0}{P_T}).$$ We have thus proved the following proposition. \begin{prop} In the Black-Scholes model with time varying coefficients, the efficient points are the couples of the form $$(m,m^2\frac{P_T}{P_T-P_0}),$$ with $m>0$ (remark that the ratio $\frac{P_T}{P_T-P_0}$ does not depend on $\mu$). Furthermore, the associated process $s_t^*$ enabling one to compute optimal rules according to Theorem \ref{theo : main result} is explicitly given by \begin{equation*} \frac{1}{3}s^*_tY_t = -\frac{1}{b_t}\frac{\dot{P_t}}{P_t}\tilde Z^*_t, \end{equation*} with $$\frac{d\tilde Z^*_t}{\tilde Z^*_t}= -\frac{1}{b_t}\frac{\dot{P_t}}{P_t} \frac{dY_t}{Y_t},~\tilde Z^*_0 = -\frac{1}{2\mu}.$$
\end{prop}
\noindent Note that in practice, $\tilde Z^*$ is not observable. However, it can of course be approximated by a process $\tilde Z^{(*)}$ thanks to historical data, using for example a scheme of the form \begin{equation*} \tilde Z^{(*)}_{t_{i+1}} = \tilde Z^{(*)}_{t_i}\Big(1 -\frac{1}{b_{t_i}}\frac{\dot{P}_{t_i}}{P_{t_i}}\frac{Y_{t_{i+1}}-Y_{t_i}}{Y_{t_i}}\Big), \quad \tilde Z^{(*)}_0 = -\frac{1}{2\mu}, \end{equation*} where the $t_i$ are the observation times of market data.
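\noindent As a final illustration, here is a minimal numerical sketch (in Python, for constant coefficients and hypothetical parameter values) which evaluates $P_t$ through the Lambert function (available as scipy.special.lambertw), computes the efficient ratio $P_T/(P_T-P_0)$, and runs the above recursion for $\tilde Z^{(*)}$ on a simulated price path.
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

# Hypothetical constant Black-Scholes coefficients and Lagrange parameter
b, sigma, T, mu = 0.05, 0.2, 1.0, 10.0
rho2 = (b / sigma) ** 2

def P(t):
    # P_t = mu / L( (1/2) exp( int_t^T rho_s^2 ds + 1/2 ) ), here with constant rho
    return mu / lambertw(0.5 * np.exp(rho2 * (T - t) + 0.5)).real

def P_dot(t):
    # ODE satisfied by P:  P' = rho^2 P^2 / (P + mu)
    return rho2 * P(t) ** 2 / (P(t) + mu)

print("P_T =", P(T), "(should equal 2*mu)")
print("efficient ratio P_T/(P_T - P_0) =", P(T) / (P(T) - P(0.0)))

# Recursion approximating the optimally controlled process Z_tilde on market data
rng = np.random.default_rng(2)
n = 1000
t = np.linspace(0.0, T, n + 1)
dt = T / n
Y = 100.0 * np.exp(np.cumsum((b - 0.5 * sigma ** 2) * dt
                             + sigma * rng.normal(0.0, np.sqrt(dt), n)))
Y = np.concatenate(([100.0], Y))

Z = -1.0 / (2.0 * mu)
for i in range(n):
    Z *= 1.0 - (1.0 / b) * (P_dot(t[i]) / P(t[i])) * (Y[i + 1] - Y[i]) / Y[i]
# The optimal barrier asymmetry then follows from s*_t Y_t = -(3/b) (P'_t/P_t) Z_tilde_t
s_star_times_Y = -(3.0 / b) * (P_dot(t[-1]) / P(t[-1])) * Z
print("s*_T * Y_T =", s_star_times_Y)
\end{verbatim}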
\appendixtitleon \appendixtitletocon
\begin{appendices} \section{Proofs} In the following, $C$ denotes a constant which may vary from line to line. Note that we use several localization procedures in the proofs. We often give them in detail since some of them are slightly unusual, in particular because $\sigma^X$ may vanish at maturity.
\subsection{Proof of Proposition~\ref{proposition: stable conv}}
\noindent We start by proving in a very standard way the stable convergence of $\varepsilon_n^{-1}Z^n$ in $C[0,T]$, which is stronger than the weak convergence. More precisely, we show that for any bounded continuous function $f$ on $C[0,T]$ and bounded random variable $U$ defined on $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$, \begin{equation*} \lim_{n \to \infty} \mathbb{E}[Uf(\varepsilon_n^{-1}Z^n_{\cdot})] = \mathbb{E}[Uf(Z^\ast_{\cdot})], \end{equation*} where $Z^*$ is defined by \begin{equation*}
Z^\ast_t = \frac{1}{3}\int_0^t s_udY_u + \frac{1}{\sqrt{6}}\int_0^t\paren{a_u^2 - \frac{2}{3}s_u^2}^{1/2}\sigma^Y_udB_u, \end{equation*} on an extension of $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$ on which $B$ is a Brownian motion independent of all the other quantities. For $K>0$, we set \begin{equation*}
\alpha^K = \inf\{t > 0; |\rho_t| \geq K\} \wedge T. \end{equation*} Since $\rho$ is continuous on $[0,T]$ almost surely, \begin{equation*}
\lim_{K \to \infty}\mathbb{P} [\alpha^K < T] = 0. \end{equation*} Now remark that \begin{equation*} \begin{split}
&| \mathbb{E}[Uf(\varepsilon_n^{-1}Z^n_{\cdot})] - \mathbb{E}[Uf(Z^\ast_{\cdot})]| \\
&\leq |\mathbb{E}[Uf(\varepsilon_n^{-1}Z^n_{\cdot})] -
\mathbb{E}[Uf(\varepsilon_n^{-1}Z^n_{\cdot \wedge \alpha^K})]|
+ |\mathbb{E}[Uf(\varepsilon_n^{-1}Z^n_{\cdot \wedge \alpha^K})]
- \mathbb{E}[Uf(Z^\ast_{\cdot \wedge \alpha^K})]| \\ & \hspace*{1cm}
+ |\mathbb{E}[Uf(Z^\ast_{\cdot \wedge \alpha^K})]
- \mathbb{E}[Uf(Z^\ast_{\cdot})]|
\\ &\leq 4 \|f\|_\infty \|U\|_\infty\mathbb{P}[\alpha^K < T]
+ |\mathbb{E}[Uf(\varepsilon_n^{-1}Z^n_{\cdot \wedge \alpha^K})]
- \mathbb{E}[Uf(Z^\ast_{\cdot \wedge \alpha^K})]|. \end{split} \end{equation*} Consequently, it suffices to show that for any $K>0$, \begin{equation*}
\lim_{n \to \infty} |\mathbb{E}[Uf(\varepsilon_n^{-1}Z^n_{\cdot \wedge \alpha^K})]
- \mathbb{E}[Uf(Z^\ast_{\cdot \wedge \alpha^K})]| = 0. \end{equation*} Let \begin{equation*}
\mathcal{E} = \exp\big\{ -\int_0^{\alpha^K} \rho_t dW^Y_t - \frac{1}{2}\int_0^{\alpha^K} \rho_t^2 dt \big\}. \end{equation*} Since $\mathbb{E}[\mathcal{E}] = 1$, the measure $\mathbb{Q}$ defined by \begin{equation*}
\frac{d\mathbb{Q}}{d\mathbb{P}} = \mathcal{E} \end{equation*} is a probability measure under which $Z^n_{\cdot \wedge \alpha^K}$ is a local martingale. Under $\mathbb{Q}$, the uniform convergences in probability \eqref{asmp: limit 2} and \eqref{asmp: limit 3} on $[0,T]$ remain true. Therefore by Theorem~IX.7.3 of \cite{jacod2003limit}, we have the stable convergence of $\varepsilon_n^{-1}Z^n_{\cdot \wedge \alpha^K}$ to $Z^\ast_{\cdot \wedge \alpha^K}$ under $\mathbb{Q}$. Note that $\tilde{U} = U/\mathcal{E}$ is a $\mathbb{Q}$-integrable positive random variable and moreover, for all $A > 0$, \begin{equation*} \begin{split}
&|\mathbb{E}[Uf(\varepsilon_n^{-1}Z^n_{\cdot \wedge \alpha^K})]
- \mathbb{E}[Uf(Z^\ast_{\cdot \wedge \alpha^K})]| \\ \leq
&|\mathbb{E}^\mathbb{Q}[\tilde{U}f(\varepsilon_n^{-1}Z^n_{\cdot \wedge \alpha^K})]
- \mathbb{E}^\mathbb{Q}[\tilde{U}f(Z^\ast_{\cdot \wedge \alpha^K})]| \\\leq &
|\mathbb{E}^\mathbb{Q}[(\tilde{U} \wedge A)f(\varepsilon_n^{-1}Z^n_{\cdot \wedge \alpha^K})]
- \mathbb{E}^\mathbb{Q}[(\tilde{U} \wedge A)f(Z^\ast_{\cdot \wedge \alpha^K})]|
+ 2 \|f\|_\infty \mathbb{E}^{\mathbb{Q}}[\tilde{U}\mathbbm{1}_{\{\tilde{U} \geq A\}}]. \end{split} \end{equation*} The second term tends to $0$ uniformly in $n$ as $A \to \infty$. The first term converges to $0$ due to the stable convergence under $\mathbb{Q}$ since $(\tilde{U} \wedge A)$ is a bounded random variable. \\
\noindent Now, we prove $a^2 \geq s^2$ under the additional condition \eqref{Condition add}. Since $a$ and $s$ are continuous, it suffices to show $a_t^2 \geq s_t^2$ for all $t \in [0,T)$. Fix $T_0 < T$ and let \begin{equation} \label{alphahat}
\hat{\alpha}^K = \inf\{u >0; |b^X_u|\vee \sigma^X_u \vee \sigma^Y_u \geq K \text{ or } \sigma^X_u \leq 1/K\} \wedge T_0 \end{equation} for $K > 0$. Since $\sigma^X$ is positive and continuous on $[0,T_0]$, we have \begin{equation} \label{alphahat conv}
\lim_{K\to \infty}\mathbb{P}[\hat{\alpha}^K < T_0] = 0. \end{equation} Therefore, it suffices to show \begin{equation}\label{eq: local a s}
a_{u \wedge \hat{\alpha}^K}^2 \geq s^2_{u \wedge \hat{\alpha}^K} \end{equation} for all $u \geq 0$ and $K > 0$. Fix $K$ and define the probability measure $\hat{\mathbb{Q}}$ by
\begin{equation*}
\frac{d\hat{\mathbb{Q}}}{d \mathbb{P}} = \exp\Big\{- \int_0^{\hat{\alpha}^K} \frac{b^X_u}{\sigma^X_u}d W^X_u - \frac{1}{2}\int_0^{\hat{\alpha}^K}\big(\frac{b^X_u}{\sigma^X_u} \big)^2 d u \Big\}. \end{equation*} Under $\hat{\mathbb{Q}}$, $X_{\cdot \wedge \hat{\alpha}^K}$ is a martingale with bounded quadratic variation. Since $\hat{\mathbb{Q}}$ is equivalent to $\mathbb{P}$, it suffices to show \eqref{eq: local a s} under $\hat{\mathbb{Q}}$. \\
\noindent By \eqref{Condition add}, there exists a subsequence $\{n(k)\}$ such that \begin{equation*}
\hat{\mathbb{Q}} \Big[\varepsilon_{n(k)}^{-4/3}\sup_{ j\geq 0}(\tau^{n(k)}_{j+1} \wedge
T_0 - \tau^{n(k)}_j \wedge T_0) > \frac{1}{k} \Big] < \frac{1}{k}. \end{equation*} Let \begin{equation*} T_k = \inf\big\{u > 0, \varepsilon_{n(k)}^{-4/3}\sup_{ j\geq 0}(\tau^{n(k)}_{j+1} \wedge
u - \tau^{n(k)}_j \wedge
u) > \frac{1}{k}\big\} \wedge \hat{\alpha}^K. \end{equation*} Then \begin{equation*} \lim_{k \to \infty}\hat{\mathbb{Q}}[T_k < \hat{\alpha}^K] = 0 \end{equation*} and so, \begin{equation} \label{loc conv}
\begin{split}
& \varepsilon_{n(k)}^{-1} \crochet{Z^{n(k)},Y}_{t \wedge T_k} \to \frac{1}{3}\int_0^{t \wedge \hat{\alpha}^K} s_u (\sigma^Y_u)^2 du,\\
& \varepsilon_{n(k)}^{-2} \crochet{Z^{n(k)}}_{t \wedge T_k} \to \frac{1}{6}\int_0^{t \wedge \hat{\alpha}^K} a_u^2 (\sigma^Y_u)^2 du,
\end{split} \end{equation} in probability as $k \to \infty$ for all $ t\geq 0$. Let \begin{equation*}
\hat{\tau}^k_j = \tau^{n(k)}_j \wedge T_k \end{equation*} for $j \geq 0$. We now give three technical lemmas. \begin{lem}\label{lem1} Let $\kappa_u=(\sigma^Y_u/\sigma^X_u)^2$. We have \begin{equation*} \frac{1}{3} \varepsilon_{n(k)}^{-1}\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \mathbb{E}^{\hat{\mathbb{Q}}}\Big[ (X_{\hat{\tau}^k_{j+1} \wedge t} - X_{\hat{\tau}^k_j \wedge t}
)^3 \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \Big]
- \varepsilon_{n(k)}^{-1}\crochet{Z^{n(k)},Y}_{t \wedge T_k} \to 0, \end{equation*} in probability as $k\to \infty$ for all $t \geq 0$. \end{lem} \begin{proof} By It$\hat{\text{o}}$'s formula, \begin{equation*} \frac{1}{3} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ (X_{\hat{\tau}^k_{j+1} \wedge t} - X_{\hat{\tau}^k_j \wedge t}
)^3 \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] = \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})d \crochet{X}_u
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big]. \end{equation*} We now show that \begin{equation}\label{lln1} \begin{split} & \varepsilon_{n(k)}^{-1}\sum_{j=0}^{N^{n(k)}_{t\wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})d \crochet{X}_u
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] \\ & -
\varepsilon_{n(k)}^{-1}\sum_{j=0}^{N^{n(k)}_{t\wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})d \crochet{X}_u \to 0 \end{split} \end{equation} and \begin{equation}\label{disc kappa}
\varepsilon_{n(k)}^{-1}\sum_{j=0}^{N^{n(k)}_{t\wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})d \crochet{X}_u - \varepsilon_{n(k)}^{-1}\crochet{Z^{n(k)},Y}_{t \wedge T_k} \to 0, \end{equation} in probability. \\
\noindent By Lenglart inequality for discrete martingales (see e.g., Lemma~A.2 of \cite{fukasawa2011discretization}), a sufficient condition for (\ref{lln1}) is the fact that \begin{equation}\label{lln1qv}
\varepsilon_{n(k)}^{-2}\sum_{j=0}^{N^{n(k)}_{t\wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t}^2 \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \big( \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})d \crochet{X}_u
\big)^2 \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] \to 0, \end{equation} in probability. To get this convergence, first use successively H\"older inequality, It\^o's formula and Burkholder-Davis-Gundy inequality to obtain that \begin{equation*} \begin{split} &\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t}^2 \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \big( \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u-X_{\hat{\tau}^k_j})d\langle X
\rangle_u
\big)^2 \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] \\ & \leq C\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t}^2 \mathbb{E}^{\hat{\mathbb{Q}}}\Big[\big (\langle X \rangle_{\hat{\tau}^k_{j+1} \wedge t} - \langle X
\rangle_{\hat{\tau}^k_j \wedge t}\big)^{3/2} \big( \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u-X_{\hat{\tau}^k_j})^4d\langle X
\rangle_u
\big)^{1/2} \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \Big] \\ & \leq C \Big(\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \big(\langle X \rangle_{\hat{\tau}^k_{j+1} \wedge t} - \langle X
\rangle_{\hat{\tau}^k_j \wedge t}\big)^3
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big]\Big)^{1/2} \Big(\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u-X_{\hat{\tau}^k_j})^4d\langle X
\rangle_u
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big]\Big)^{1/2} \\ & = C \Big(\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \big(\langle X \rangle_{\hat{\tau}^k_{j+1} \wedge t} - \langle X
\rangle_{\hat{\tau}^k_j \wedge t}\big)^3
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big]\Big)^{1/2} \Big(\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ (X_{\hat{\tau}^k_{j+1}\wedge t}-X_{\hat{\tau}^k_j \wedge t})^6
| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big]\Big)^{1/2} \\ & \leq C\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \big(\langle X \rangle_{\hat{\tau}^k_{j+1} \wedge t} - \langle X
\rangle_{\hat{\tau}^k_j \wedge t}\big)^3
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] \\ & \leq C\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \mathbb{E}^{\hat{\mathbb{Q}}}\big[
|\hat{\tau}^k_{j+1} \wedge t -
\hat{\tau}^k_j \wedge t|^3
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big]. \end{split} \end{equation*} Note also that \begin{equation*}
\left\{ j \leq N^{n(k)}_{t \wedge T_k} \right\} = \left\{ \tau^{n(k)}_j \leq t \wedge T_k \right\} \in \mathcal{F}_{\hat{\tau}^k_j \wedge t}. \end{equation*} Then (\ref{lln1qv}) follows since \begin{equation*}
\varepsilon_{n(k)}^{-2}\mathbb{E}^{\hat{\mathbb{Q}}}\Big[ \sum_{j=0}^{\infty}
|\hat{\tau}^k_{j+1} \wedge t - \hat{\tau}^k_j \wedge t|^3 \Big] \leq
\varepsilon_{n(k)}^{-2} \frac{\varepsilon_{n(k)}^{8/3}}{k^2} \mathbb{E}^{\hat{\mathbb{Q}}}\Big[ \sum_{j=0}^{\infty}
|\hat{\tau}^k_{j+1} \wedge t - \hat{\tau}^k_j \wedge t| \Big] \leq \frac{\varepsilon_{n(k)}^{2/3}}{k^2} t \to 0. \end{equation*}
\noindent We now turn to \eqref{disc kappa}. Note that \begin{equation*}
\varepsilon_{n(k)}^{-1}\sum_{j=0}^{N^{n(k)}_{t\wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})d \crochet{X}_u =
\varepsilon_{n(k)}^{-1}\sum_{j=0}^\infty \kappa_{\hat{\tau}^k_j \wedge t} \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})d \crochet{X}_u \end{equation*} and \begin{equation*}
\varepsilon_{n(k)}^{-1}\crochet{Z^{n(k)},Y}_{t \wedge T_k} =
\varepsilon_{n(k)}^{-1}\sum_{j=0}^\infty \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})\kappa_u d \crochet{X}_u. \end{equation*} Therefore, the absolute value of the left hand side of \eqref{disc
kappa} is dominated by \begin{align*} &\varepsilon_{n(k)}^{-1}\sum_{j=0}^\infty
\int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} |X_u -
X_{\hat{\tau}^k_j}|\,|\kappa_u-\kappa_{\hat{\tau}^k_j \wedge t}| d \crochet{X}_u\\ &\leq \Big(\varepsilon_{n(k)}^{-2}\sum_{j=0}^\infty
\int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} |X_u -
X_{\hat{\tau}^k_j}|^2\,|\kappa_u-\kappa_{\hat{\tau}^k_j \wedge t}|^2 d \crochet{X}_u\Big)^{1/2}\Big(\sum_{j=0}^\infty \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} d \crochet{X}_u\Big)^{1/2}\\ &\leq \sup_{u \geq 0}\frac{1}{\kappa_{u \wedge T_k}}
\sup_{u \geq 0, j \geq 0}\big| \kappa_{\hat{\tau}^k_{j+1} \wedge u} - \kappa_{ \hat{\tau}^k_j \wedge u}
\big|
\varepsilon_{n(k)}^{-1}\crochet{Z^{n(k)}}^{1/2}_{t \wedge T_k}\crochet{X}^{1/2}_{t \wedge T_k}, \end{align*} which converges to $0$ due to \eqref{loc conv} and the uniform continuity of $\kappa$. \end{proof}
\begin{lem}\label{lem2} We have \begin{equation*} \frac{1}{6} \varepsilon_{n(k)}^{-2}\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \big(X_{\hat{\tau}^k_{j+1} \wedge t} - X_{\hat{\tau}^k_j \wedge t}
\big)^4 \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big]
- \varepsilon_{n(k)}^{-2}\crochet{Z^{n(k)}}_{t \wedge T_k} \to 0, \end{equation*} in probability as $k\to \infty$, for all $t \geq 0$. \end{lem} \begin{proof} The proof is very similar to the previous one. By It$\hat{\text{o}}$'s formula, \begin{equation*} \frac{1}{6} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \big(X_{\hat{\tau}^k_{j+1} \wedge t} - X_{\hat{\tau}^k_j \wedge t}
\big)^4 \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] = \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})^2 d \crochet{X}_u
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big]. \end{equation*} We now show that \begin{equation}\label{lln2} \begin{split} & \varepsilon_{n(k)}^{-2}\sum_{j=0}^{N^{n(k)}_{t\wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})^2d \crochet{X}_u
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] \\ & -
\varepsilon_{n(k)}^{-2}\sum_{j=0}^{N^{n(k)}_{t\wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})^2d \crochet{X}_u \to 0 \end{split} \end{equation} and \begin{equation}\label{disc kappa2}
\varepsilon_{n(k)}^{-2}\sum_{j=0}^{N^{n(k)}_{t\wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})^2d \crochet{X}_u - \varepsilon_{n(k)}^{-2}\crochet{Z^{n(k)}}_{t \wedge T_k} \to 0, \end{equation} in probability. \\
\noindent By Lenglart inequality for discrete martingales, a sufficient condition for (\ref{lln2}) is \begin{equation}\label{lln2qv}
\varepsilon_{n(k)}^{-4}\sum_{j=0}^{N^{n(k)}_{t\wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t}^2 \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \Big( \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u - X_{\hat{\tau}^k_j})^2d \crochet{X}_u
\Big)^2 \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] \to 0, \end{equation} in probability. To get this convergence, first use successively H\"older inequality, It\^o's formula and Burkholder-Davis-Gundy inequality to obtain that \begin{equation*} \begin{split} &\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t}^2 \mathbb{E}^{\hat{\mathbb{Q}}}\Big[ \big( \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u-X_{\hat{\tau}^k_j})^2d\langle X
\rangle_u
\big)^2 \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \Big] \\ & \leq \sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t}^2 \mathbb{E}^{\hat{\mathbb{Q}}}\Big[ \big(\langle X \rangle_{\hat{\tau}^k_{j+1} \wedge t} - \langle X
\rangle_{\hat{\tau}^k_j \wedge t}\big)^{4/3} \big( \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u-X_{\hat{\tau}^k_j})^6d\langle X
\rangle_u
\big)^{2/3} \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \Big] \\ & \leq C \Big(\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \mathbb{E}^{\hat{\mathbb{Q}}}\Big[ \big(\langle X \rangle_{\hat{\tau}^k_{j+1} \wedge t} - \langle X
\rangle_{\hat{\tau}^k_j \wedge t}\big)^4
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \Big]\Big)^{1/3} \Big(\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \mathbb{E}^{\hat{\mathbb{Q}}}\Big[ \int_{\hat{\tau}^k_j \wedge t}^{\hat{\tau}^k_{j+1} \wedge t} (X_u-X_{\hat{\tau}^k_j})^6d\langle X
\rangle_u
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \Big]\Big)^{2/3} \\ & = C \Big(\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \mathbb{E}^{\hat{\mathbb{Q}}}\Big[ \big(\langle X \rangle_{\hat{\tau}^k_{j+1} \wedge t} - \langle X
\rangle_{\hat{\tau}^k_j \wedge t}\big)^4
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \Big]\Big)^{1/3} \Big(\sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \mathbb{E}^{\hat{\mathbb{Q}}}\Big[ (X_{\hat{\tau}^k_{j+1}\wedge t}-X_{\hat{\tau}^k_j \wedge t})^8
| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \Big]\Big)^{2/3} \\ & \leq C \sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \big(\langle X \rangle_{\hat{\tau}^k_{j+1} \wedge t} - \langle X
\rangle_{\hat{\tau}^k_j \wedge t}\big)^4
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] \\ & \leq C \sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \mathbb{E}^{\hat{\mathbb{Q}}}\big[
|\hat{\tau}^k_{j+1} \wedge t - \hat{\tau}^k_j \wedge t|^4
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big]. \end{split} \end{equation*} Then, observe that \begin{equation*} \varepsilon_{n(k)}^{-4}\mathbb{E}^{\hat{\mathbb{Q}}} \Big[ \sum_{j=0}^\infty
|\hat{\tau}^k_{j+1} \wedge t -
\hat{\tau}^k_j \wedge t|^4\Big] \leq \frac{t}{k^3} \to 0, \end{equation*} which gives (\ref{lln2}). The proof for \eqref{disc kappa2} is obtained in the same way as that for \eqref{disc kappa}. \end{proof} \noindent We finally give the following almost straightforward result, which is easily deduced from simplified versions of the proofs of the previous lemma. \begin{lem}\label{lem_additional} We have \begin{equation*} \sum_{j=0}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \big(X_{\hat{\tau}^k_{j+1} \wedge t} - X_{\hat{\tau}^k_j \wedge t}
\big)^2 \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big]
- \crochet{Y}_{t \wedge T_k} \to 0, \end{equation*} in probability as $k\to \infty$ for all $t \geq 0$. \end{lem}
\noindent We are now ready to complete the proof of Proposition~\ref{proposition: stable conv}. From \eqref{loc conv} and Lemmas~\ref{lem1}, \ref{lem2} and \ref{lem_additional}, we have for all $0\leq v\leq t$ the following convergences in probability as $k\to \infty$: \begin{equation*} \begin{split} & \sum_{j=N^{n(k)}_{v \wedge T_k}+1}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \big(X_{\hat{\tau}^k_{j+1} \wedge t} - X_{\hat{\tau}^k_j \wedge t}
\big)^2 \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] \to \int_{v \wedge \hat{\alpha}^K}^{t \wedge \hat{\alpha}^K} (\sigma^Y_u)^2 du, \\ & \varepsilon_{n(k)}^{-1}\sum_{j=N^{n(k)}_{v \wedge T_k}+1}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \big(X_{\hat{\tau}^k_{j+1} \wedge t} - X_{\hat{\tau}^k_j \wedge t}
\big)^3 \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] \to \int_{v \wedge \hat{\alpha}^K}^{t \wedge \hat{\alpha}^K} s_u (\sigma^Y_u)^2 du, \\ & \varepsilon_{n(k)}^{-2}\sum_{j=N^{n(k)}_{v \wedge T_k}+1}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j \wedge t} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ \big(X_{\hat{\tau}^k_{j+1} \wedge t} - X_{\hat{\tau}^k_j \wedge t}
\big)^4 \big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] \to \int_{v \wedge \hat{\alpha}^K}^{t \wedge \hat{\alpha}^K} a_u^2 (\sigma^Y_u)^2 du.
\end{split} \end{equation*} Since \begin{equation*} \big( \mathbb{E}^{\hat{\mathbb{Q}}}\big[ (X_{\hat{\tau}^k_{j+1}\wedge t}-X_{\hat{\tau}^k_j \wedge t})^3
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big]\big)^2 \leq \mathbb{E}^{\hat{\mathbb{Q}}}\big[ (X_{\hat{\tau}^k_{j+1}\wedge t}-X_{\hat{\tau}^k_j \wedge t})^2
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big] \mathbb{E}^{\hat{\mathbb{Q}}}\big[ (X_{\hat{\tau}^k_{j+1}\wedge t}-X_{\hat{\tau}^k_j \wedge t})^4
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big], \end{equation*} we have \begin{equation*} \begin{split} &\Big(\varepsilon_{n(k)}^{-1}\sum_{j=N^{n(k)}_{v \wedge T_k}+1}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j} \mathbb{E}^{\hat{\mathbb{Q}}}\big[ (X_{\hat{\tau}^k_{j+1}\wedge t}-X_{\hat{\tau}^k_j \wedge t})^3
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \big]\Big)^2 \\ &\leq \varepsilon_{n(k)}^{-2}\sum_{j=N^{n(k)}_{v \wedge T_k}+1}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j} \mathbb{E}^{\hat{\mathbb{Q}}}\left[ (X_{\hat{\tau}^k_{j+1}\wedge t}-X_{\hat{\tau}^k_j \wedge t})^4
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \right] \sum_{j=N^{n(k)}_{v \wedge T_k}+1}^{N^{n(k)}_{t \wedge T_k}} \kappa_{\hat{\tau}^k_j} \mathbb{E}^{\hat{\mathbb{Q}}}\left[ (X_{\hat{\tau}^k_{j+1}\wedge t}-X_{\hat{\tau}^k_j \wedge t})^2
\big| \mathcal{F}_{\hat{\tau}^k_j \wedge t} \right]. \end{split} \end{equation*} This implies that for all $0\leq v\leq t$, \begin{equation*} \big( \int_{v \wedge \hat{\alpha}^K}^{t \wedge \hat{\alpha}^K} s_u(\sigma^Y_u)^2du \big)^2 \leq \int_{v \wedge \hat{\alpha}^K}^{t \wedge \hat{\alpha}^K} a_u^2(\sigma^Y_u)^2du \int_{v \wedge \hat{\alpha}^K}^{t \wedge \hat{\alpha}^K} (\sigma^Y_u)^2 du. \end{equation*} Thus we obtain \eqref{eq: local a s}.
\subsection{Proof of Proposition \ref{prop: intuition a s}} In this proof, using a classical localization procedure together with the Girsanov theorem, we can assume that $b^X=0$ and that $\sigma^X$ and $\sigma^Y$ are bounded on $[0,T]$. We start with two technical lemmas and their proofs.\\
\begin{lem}\label{lemma: conv diff qv} For any $p \in [0,2)$, \begin{equation*}
\varepsilon_n^{-p} \sup_{j \geq 0}(\crochet{X}_{\tau^n_{j+1}} -\crochet{X}_{\tau^n_j}) \to 0, \end{equation*} in probability. \end{lem} \begin{proof} Let $K > 0$ and \begin{equation} \label{loc gamma}
\gamma^n_K = \inf\{t > 0; \varepsilon_n^{-1}|X_t-X^n_t| \geq K \} \wedge T. \end{equation} Using the tightness of the family \eqref{sup diff}, we get \begin{equation} \label{loc gamma unif}
\lim_{K \to \infty} \sup_{n \in \mathbb{N}}\mathbb{P}[ \gamma^n_K < T] = 0. \end{equation} Therefore, it is enough to show that for any $K>0$, \begin{equation*}
\varepsilon_n^{-p} \sup_{j \geq 0}(\crochet{X}_{\tau^n_{j+1} \wedge \gamma^n_K} -\crochet{X}_{\tau^n_j \wedge \gamma^n_K}) \to 0, \end{equation*} in probability. Take an integer $m > 2/(2-p)$. Since \begin{equation*}
\sup_{j \geq 0} (\crochet{X}_{\tau^n_{j+1} \wedge \gamma^n_K} -\crochet{X}_{\tau^n_j \wedge \gamma^n_K})
\leq \Big( \sum_{j=0}^{\infty} (\crochet{X}_{\tau^n_{j+1} \wedge \gamma^n_K} -\crochet{X}_{\tau^n_j \wedge \gamma^n_K})^m \Big)^{1/m}, \end{equation*} the statement of the lemma follows from the fact that \begin{equation*} \begin{split} \mathbb{E} \Big[ \sum_{j=0}^{\infty} (\crochet{X}_{\tau^n_{j+1} \wedge \gamma^n_K} - \crochet{X}_{\tau^n_j \wedge \gamma^n_K})^m \Big] & \leq C \mathbb{E} \Big[ \sum_{j=0}^{\infty}
\big(\sup_{t \geq 0}|X_{\tau^n_{j+1} \wedge \gamma^n_K \wedge t} -
X_{\tau^n_j \wedge \gamma^n_K \wedge t}|\big)^{2m} \Big]
\\ & \leq C \varepsilon_n^{2m-2} \mathbb{E} \Big[ \sum_{j=0}^{\infty}
\big(\sup_{t \geq 0}|X_{\tau^n_{j+1} \wedge \gamma^n_K \wedge t} -
X_{\tau^n_j \wedge \gamma^n_K \wedge t}|\big)^2 \Big]
\\ & \leq C \varepsilon_n^{2m-2} \mathbb{E} \Big[ \sum_{j=0}^{\infty} (\crochet{X}_{\tau^n_{j+1} \wedge \gamma^n_K} - \crochet{X}_{\tau^n_j \wedge \gamma^n_K}) \Big]
\\
& \leq C \varepsilon_n^{2m-2}. \end{split} \end{equation*} Here we have used that $\mathbb{E}[\crochet{X}_T] < \infty$. The result then follows using H\"older's inequality, since $\varepsilon_n^{-p}\,\varepsilon_n^{(2m-2)/m} \to 0$ by the choice of $m$. \end{proof}
\begin{lem} \label{lemma: condition add} For any $p \in [0,2)$ and $T_0 \in [0,T)$, \begin{equation*}
\varepsilon_n^{-p} \sup_{j \geq 0}(\tau^n_{j+1} \wedge T_0 -\tau^n_j \wedge T_0) \to 0, \end{equation*} in probability. In particular, the convergence in probability \eqref{Condition add} holds for all $T_0 \in [0,T)$. \end{lem}
\begin{proof}
Let $T_0 \in [0,T)$, $K > 0$ and \begin{equation}
\hat{\gamma}_K = \inf\{t > 0; \sigma^X_t \leq 1/K\} \wedge T_0. \end{equation} Using the continuity and the positivity of $\sigma^X$ on $[0,T)$, we get \begin{equation*}
\lim_{K \to \infty} \mathbb{P}[ \hat{\gamma}_K < T_0] = 0. \end{equation*} Therefore, it is enough to show that for any $K>0$, \begin{equation*}
\varepsilon_n^{-p} \sup_{j \geq 0} (\tau^n_{j+1} \wedge \hat{\gamma}_K - \tau^n_j \wedge \hat{\gamma}_K) \to 0, \end{equation*} in probability. This follows from Lemma~\ref{lemma: conv diff qv} since \begin{equation*} \sup_{j \geq 0} (\tau^n_{j+1} \wedge \hat{\gamma}_K - \tau^n_j \wedge \hat{\gamma}_K) \leq C \sup_{j \geq 0}(\crochet{X}_{\tau^n_{j+1}} -\crochet{X}_{\tau^n_j}). \end{equation*} \end{proof}
\noindent We now give the end of the proof of Proposition~\ref{prop: intuition a s}. Define $\gamma^n_K$ by \eqref{loc gamma}. It suffices to show that for any $K>0$ \begin{equation*}
\begin{split} & \sup_{t\geq 0}
\Big| \varepsilon_n^{-2}\crochet{Z^n}_{t \wedge \gamma^n_K} - \frac{1}{6}\int_0^{t \wedge \gamma^n_K}a_u^2(\sigma^Y_u)^2du
\Big| \to 0, \\ & \sup_{t\geq 0}
\Big| \varepsilon_n^{-1}\crochet{Z^n, Y}_{t \wedge \gamma^n_K} - \frac{1}{3}\int_0^{{t \wedge \gamma^n_K}}s_u(\sigma^Y_u)^2du
\Big| \to 0,
\end{split} \end{equation*} in probability. Since \begin{equation} \label{unif bound}
\varepsilon_n^{-1} \sup_{t \geq 0} |X_{t\wedge \gamma^n_K}-
X^n_{t\wedge \gamma^n_K}| \leq K, \end{equation} the families $\varepsilon_n^{-2}\crochet{Z^n}_{\cdot \wedge \gamma^n_K}$ and $\varepsilon_n^{-1}\crochet{Z^n,Y}_{\cdot \wedge \gamma^n_K}$ are equicontinuous. So we just need to prove that for any $t \in [0,T)$, \begin{equation*}
\begin{split} & \varepsilon_n^{-2}\crochet{Z^n}_{t \wedge \gamma^n_K} - \frac{1}{6}\int_0^{t \wedge \gamma^n_K}a_u^2(\sigma^Y_u)^2du \to 0, \\ & \varepsilon_n^{-1}\crochet{Z^n, Y}_{t \wedge \gamma^n_K}- \frac{1}{3}\int_0^{{t \wedge \gamma^n_K}}s_u(\sigma^Y_u)^2du \to 0,
\end{split} \end{equation*} in probability. Let \begin{equation*}
\beta^n_M = \inf\{u > 0 ; (1/\sigma^X_u) \geq M \} \wedge t \wedge \gamma^n_K \end{equation*} for $M > 0$. Since $t < T$, Lemma~\ref{lemma: conv diff qv} gives that \begin{equation*}
\lim_{M\to \infty} \sup_{n \in \mathbb{N}}\mathbb{P}[\beta^n_M < t \wedge \gamma^n_K] =0. \end{equation*} Therefore it is enough to show that for any $M>0$, \begin{equation*}
\begin{split} & \varepsilon_n^{-2}\crochet{Z^n}_{\beta^n_M} - \frac{1}{6}\int_0^{\beta^n_M}a_u^2(\sigma^Y_u)^2du \to 0, \\ & \varepsilon_n^{-1}\crochet{Z^n, Y}_{\beta^n_M}- \frac{1}{3}\int_0^{\beta^n_M}s_u(\sigma^Y_u)^2du \to 0,
\end{split} \end{equation*} in probability. From the assumptions of Proposition \ref{prop: intuition a s}, we have \begin{align*} & \varepsilon_n^{-1}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j}\mathbb{E}\big[ \Delta_{j, n}^3
\big| \mathcal{F}_{\tau^n_j} \big] + \int_0^{\beta^n_M}s_u(\sigma^Y_u)^2du \to 0, \\ & \varepsilon_n^{-2}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j} \mathbb{E}\big[ \Delta_{j, n}^4
\big| \mathcal{F}_{\tau^n_j} \big] - \int_0^{\beta^n_M}a_u^2(\sigma^Y_u)^2du \to 0, \end{align*} in probability. Moreover, by It$\hat{\text{o}}$'s formula, \begin{equation*} \begin{split} &\frac{1}{3} \varepsilon_n^{-1}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j}\mathbb{E}\big[ \Delta_{j, n}^3
\big| \mathcal{F}_{\tau^n_j} \big] = \varepsilon_n^{-1}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j}\mathbb{E}\big[ \int_{\tau^n_j}^{\tau^n_{j+1}} (X_u - X^n_{\tau^n_j})d \crochet{X}_u
\big| \mathcal{F}_{\tau^n_j} \big], \\ &\frac{1}{6} \varepsilon_n^{-2}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j}\mathbb{E}\big[ \Delta_{j, n}^4
\big| \mathcal{F}_{\tau^n_j} \big] = \varepsilon_n^{-2}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j}\mathbb{E}\big[ \int_{\tau^n_j}^{\tau^n_{j+1}} (X_u - X^n_{\tau^n_j})^2d \crochet{X}_u
\big| \mathcal{F}_{\tau^n_j} \big]. \end{split} \end{equation*} Now remark that the following convergences in probability hold: \begin{equation}\label{lenglart}
\begin{split}
& \sup_{t\geq 0}\Big| \varepsilon_n^{-1}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j} \int_{\tau^n_j}^{\tau^n_{j+1}} (X_u - X^n_{\tau^n_j})d \crochet{X}_u
- \varepsilon_n^{-1}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j}\mathbb{E}\big[ \int_{\tau^n_j}^{\tau^n_{j+1}} (X_u - X^n_{\tau^n_j})d \crochet{X}_u
\big| \mathcal{F}_{\tau^n_j}
\big] \Big| \to 0, \\
& \sup_{t\geq 0}\Big| \varepsilon_n^{-2}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j} \int_{\tau^n_j}^{\tau^n_{j+1}} (X_u - X^n_{\tau^n_j})^2d \crochet{X}_u
- \varepsilon_n^{-2}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j}\mathbb{E}\big[ \int_{\tau^n_j}^{\tau^n_{j+1}} (X_u - X^n_{\tau^n_j})^2d \crochet{X}_u
\big| \mathcal{F}_{\tau^n_j}
\big] \Big|\to 0.
\end{split} \end{equation} Indeed, as seen in the proofs of Lemmas~\ref{lem1} and \ref{lem2}, the convergences in probability in \eqref{lenglart} are deduced from the following ones: \begin{equation}\label{lindeberg}
\begin{split} & \varepsilon_n^{-2}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j}^2\mathbb{E}\big[ \big( \int_{\tau^n_j}^{\tau^n_{j+1}} (X_u - X^n_{\tau^n_j})d \crochet{X}_u \big)^2
\big| \mathcal{F}_{\tau^n_j} \big] \to 0,\\ & \varepsilon_n^{-4}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j}^2\mathbb{E}\big[ \big( \int_{\tau^n_j}^{\tau^n_{j+1}} (X_u - X^n_{\tau^n_j})^2d \crochet{X}_u \big)^2
\big| \mathcal{F}_{\tau^n_j} \big] \to 0.
\end{split} \end{equation}
Since $Q_n=\varepsilon_n^{-4}\sup_{t \in [0,T]}|X^n_t-X_t|^4$ is uniformly integrable and \begin{equation*}
\sum_{j=0}^{\infty}(\crochet{X}_{\tau^n_{j+1}} - \crochet{X}_{\tau^n_j})^2 \end{equation*} is bounded and converges to $0$ in probability by Lemma~\ref{lemma: conv diff qv}, we have \begin{equation*}
\mathbb{E}\Big[ \sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j}^2 \big( \int_{\tau^n_j}^{\tau^n_{j+1}} \varepsilon_n^{-k}(X_u - X^n_{\tau^n_j})^kd \crochet{X}_u \big)^2 \Big] \leq C \mathbb{E} \Big[ Q_n^{k/2} \sum_{j=0}^{\infty}(\crochet{X}_{\tau^n_{j+1}} - \crochet{X}_{\tau^n_j})^2 \Big] \to 0 \end{equation*} for $k=1,2$, which gives \eqref{lindeberg}.\\
\noindent We also have
\begin{equation}\label{edge}
\begin{split}
& \sup_{t \geq 0}\Big| \varepsilon_n^{-1}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j} \int_{\tau^n_j}^{\tau^n_{j+1}} (X_u - X^n_{\tau^n_j})d \crochet{X}_u -\varepsilon_n^{-1}\sum_{j=0}^{\infty}\kappa_{\tau^n_j}
\int_{\tau^n_j \wedge \beta^n_M}^{\tau^n_{j+1} \wedge \beta^n_M} (X_u - X^n_{\tau^n_j})d \crochet{X}_u \Big| \to 0,\\
& \sup_{t \geq 0}\Big| \varepsilon_n^{-2}\sum_{j=0}^{N^n_{\beta^n_M}}\kappa_{\tau^n_j} \int_{\tau^n_j}^{\tau^n_{j+1}} (X_u - X^n_{\tau^n_j})^2d \crochet{X}_u - \varepsilon_n^{-2}\sum_{j=0}^{\infty}\kappa_{\tau^n_j}
\int_{\tau^n_j \wedge \beta^n_M}^{\tau^n_{j+1}\wedge \beta^n_M} (X_u - X^n_{\tau^n_j})^2d \crochet{X}_u \Big| \to 0,
\end{split} \end{equation} in probability. These two convergences follow using that \begin{equation*}
\sup_{j \geq 0, t \in [0,T]}
\int_{\tau^n_j \wedge t}^{\tau^n_{j+1} \wedge t} \varepsilon_n^{-i}|X_u-X^n_u|^i d \crochet{X}_u \to 0 \end{equation*} in probability for $i=1,2$, which is deduced from \eqref{loc gamma unif} and the fact that \begin{equation*}
\sup_{j \geq 0, t \geq 0}
\int_{\tau^n_j \wedge \gamma^n_K \wedge t}^{\tau^n_{j+1} \wedge \gamma^n_K \wedge t} \varepsilon_n^{-i}|X_u-X^n_u|^i d \crochet{X}_u \leq K^i \sup_{j\geq 0}(\crochet{X}_{\tau^n_{j+1}} - \crochet{X}_{\tau^n_j}) \to 0, \end{equation*} in probability, by Lemma~\ref{lemma: conv diff qv}.\\
\noindent Finally, remark that the uniform continuity of $\kappa$ and \eqref{unif bound} imply
\begin{equation}\label{disc kappa3}
\begin{split}
& \sup_{t \geq 0}\Big| \varepsilon_n^{-1}\sum_{j=0}^{\infty}\kappa_{\tau^n_j} \int_{\tau^n_j \wedge \beta^n_M}^{\tau^n_{j+1} \wedge \beta^n_M} (X_u - X^n_{\tau^n_j})d \crochet{X}_u
+ \varepsilon_n^{-1}\crochet{Z^n,Y}_{\beta^n_M} \Big| \to 0,\\
& \sup_{t \geq 0}\Big| \varepsilon_n^{-2}\sum_{j=0}^{\infty}\kappa_{\tau^n_j} \int_{\tau^n_j \wedge \beta^n_M}^{\tau^n_{j+1}\wedge \beta^n_M} (X_u - X^n_{\tau^n_j})^2d \crochet{X}_u
- \varepsilon_n^{-2}\crochet{Z^n}_{\beta^n_M} \Big| \to 0,
\end{split} \end{equation} in probability. Then Proposition~\ref{prop: intuition a s} is eventually obtained from \eqref{lenglart} together with \eqref{edge} and \eqref{disc kappa3}.
\subsection{Proof of Proposition \ref{prop : hitting strats}}
\subsubsection{Proof of the convergence in law to \eqref{eqn: stable limit}} We start with the stable convergence in law of the renormalized hedging error. Such convergence being stable against localization procedures, we can assume without loss of generality that
$|b^X|$,
$\sigma^X$, $|b^Y|$, $\sigma^Y$, $1/\sigma^Y$, $\overline{l}$, $1/\overline{l}$, $\underline{l}$ and $1/\underline{l}$ are bounded by a constant $K >0$.
Then in particular we have $\varepsilon_n^{-1}\sup_{t \in [0,T]}|X^n_t-X_t| \leq K$.\\
\noindent By Lemma~\ref{lemma: condition add}, we have \eqref{Condition add} for all $T_0 \in [0,T)$. Furthermore $\varepsilon_n^{-1}\crochet{Z^n,Y}$ and $\varepsilon_n^{-2}\crochet{Z^n}$ are equicontinuous. Therefore the uniform convergences in probability \eqref{asmp: limit 2} and \eqref{asmp: limit 3} follow from the corresponding convergences in probability at each $t \in [0,T)$.\\
\noindent Fix $T_0 \in [0,T)$ and define $\hat{\alpha}^K$ by \eqref{alphahat}. Then we have \eqref{alphahat conv} and so, we can assume without loss of generality that $1/\sigma^X \leq K$ in order to show the convergences \eqref{asmp: limit 2} and \eqref{asmp: limit 3} on $[0,T_0]$. Also, thanks to the Girsanov-Maruyama transformation, we can assume $b^X=0$. Define for $\delta >0$ and $t \in [0,T_0]$ \begin{equation*}
w_t(\delta) = \sup \{ |\overline{l}_u-\overline{l}_v| + |\underline{l}_u-\underline{l}_v| ; 0\leq u \leq t, \ 0\leq v \leq t,\ |u-v|\leq \delta \}. \end{equation*} Since $\overline{l}$ and $\underline{l}$ are continuous and bounded, we have \begin{equation*}
\mathbb{E}[w_{T_0}(\delta)] \to 0 \end{equation*} as $\delta \to 0$. Let \begin{equation*}
T^n_K = \inf\{t > 0; w_t(\varepsilon_n) \geq K \mathbb{E}[w_{T_0}(\varepsilon_n)]\}\wedge T_0. \end{equation*} Note that \begin{equation*}
\sup_{n \in \mathbb{N}}\mathbb{P}[T^n_K < T_0] \leq \sup_{n \in \mathbb{N}}\mathbb{P}[
w_{T_0}(\varepsilon_n) \geq K \mathbb{E}[w_{T_0}(\varepsilon_n)]] \leq \frac{1}{K} \to 0, \end{equation*} as $K \to \infty$. On the set $\{T^n_K<T_0\}$, we can replace $\overline{l}$ and $\underline{l}$ by $\overline{l}_{\cdot \wedge T^n_K}$ and $\underline{l}_{\cdot \wedge T^n_K}$ respectively. This means that we can assume without loss of generality that $w_{T_0}(\varepsilon_n) \leq K \mathbb{E}[w_{T_0}(\varepsilon_n)]$. Now in order to apply Proposition~\ref{prop: intuition a s}, it remains to show \eqref{asmp: suff}.
\subsubsection*{Part 1: Technical lemma} We give here a first technical lemma. \begin{lem}\label{lem tech} The sequence $\varepsilon_n^2N^n_{T_0}$ is tight. \end{lem} \begin{proof} Since \begin{equation*}
|X_{\tau^n_{j+1} \wedge T_0}-X_{\tau^n_j \wedge T_0}|^2 \geq \frac{\varepsilon_n^2}{K^2} \quad \text{for } j < N^n_{T_0}, \end{equation*} we have \begin{equation*}
\varepsilon_n^2 N^n_{T_0} \leq K^2 \sum_{j=0}^{N^n_{T_0}}
(X_{\tau^n_{j+1} \wedge T_0}-X_{\tau^n_j \wedge T_0})^2 \to K^2 \crochet{X}_{T_0}, \end{equation*} in probability by Lemma~\ref{lemma: condition add}. \end{proof}
\subsubsection*{Part 2: Approximation lemma} We give here an important
result. Let $\tilde{\tau}^n_{j+1}$ be the exit time of fixed barriers defined by \begin{equation} \tilde{\tau}^n_{j+1} = \inf\big\{t>\tau^n_j: X_t \notin (X_{\tau^n_j}- \varepsilon_n \underline{l}_{\tau^n_j},X_{\tau^n_j}+\varepsilon_n \overline{l}_{\tau^n_j} )\big\} \wedge T_0. \end{equation} We have the following lemma. \begin{lem}\label{key of lem} We have \begin{equation*}
\sum_{j=0}^{N^n_{T_0}}\mathbb{E}\big[(\tilde{\tau}^n_{j+1} -\tau^n_{j+1})| \mathcal{F}_{\tau^n_j}\big]\to 0, \end{equation*} in probability. \end{lem} \begin{proof}
Since the sequence $\varepsilon_n^2 N^n_{T_0}$ is tight, it is enough to show that \begin{equation*}
\frac{1}{\varepsilon_n^2}\sup_{j\leq N^n_{T_0}}\mathbb{E}\big[{\tilde{\tau}^n_{j+1} -\tau^n_{j+1}} \big| \mathcal{F}_{\tau^n_j}\big]\to 0. \end{equation*} We write
$$\frac{1}{\varepsilon_n^2}\sup_{j\leq N^n_{T_0}}\mathbb{E}\big[{\tilde{\tau}^n_{j+1} -\tau^n_{j+1}} \big| \mathcal{F}_{\tau^n_j}\big]=R_1+R_2,$$ with \begin{align*}
R_1&=\frac{1}{\varepsilon_n^2}\sup_{j\leq N^n_{T_0}}\mathbb{E}\big[({\tilde{\tau}^n_{j+1} -\tau^n_{j+1}})\mathbbm{1}_{\{\tilde{\tau}^n_{j+1}\vee \tau^n_{j+1}\geq\tau^n_j + \varepsilon_n\}} \big| \mathcal{F}_{\tau^n_j}\big],\\
R_2&=\frac{1}{\varepsilon_n^2}\sup_{j\leq N^n_{T_0}}\mathbb{E}\big[({\tilde{\tau}^n_{j+1} -\tau^n_{j+1}})\mathbbm{1}_{\{\tilde{\tau}^n_{j+1}\vee \tau^n_{j+1}< \tau^n_j + \varepsilon_n\}} \big| \mathcal{F}_{\tau^n_j}\big]. \end{align*} We first treat $R_1$. We have
$$R_1\leq \frac{T}{\varepsilon_n^2}\sup_{j\leq N^n_{T_0}} \mathbb{P}\big[\tilde \tau^n_{j+1}\vee\tau^n_{j+1}\geq \tau^n_j+ \varepsilon_n\big| \mathcal{F}_{\tau^n_j}\big].$$ Since $\overline{l}$, $\underline{l}$ and $\sigma^X$ are bounded from below by $1/K$, using the Dambis, Dubins-Schwarz theorem we get that there exists some $C>0$ such that \begin{equation*}
\mathbb{P}\big[\tilde\tau^n_{j+1}\vee \tau^n_{j+1}\geq \tau^n_j+\varepsilon_n\big| \mathcal{F}_{\tau^n_j}\big]\leq \mathbb{P}[\rho^n\geq C\varepsilon_n], \end{equation*} with $\rho^n$ the first exit time of $[-\varepsilon_n/K,\varepsilon_n/K]$ by a Brownian motion starting from zero. Using the well-known bound $\mathbb{E}[(\rho^n)^k]\leq C\varepsilon_n^{2k}$ for $k\in\mathbb{N}$, Markov's inequality gives the convergence to zero of $R_1$.\\
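\noindent (For instance, by the scaling property of Brownian motion, $\rho^n$ has the same law as $(\varepsilon_n/K)^2\rho$, where $\rho$ denotes the exit time of $[-1,1]$ by a standard Brownian motion and has finite moments of all orders; hence $\mathbb{E}[(\rho^n)^k]=(\varepsilon_n/K)^{2k}\mathbb{E}[\rho^k]\leq C\varepsilon_n^{2k}$. Taking for example $k=3$, Markov's inequality gives $\mathbb{P}[\rho^n\geq C\varepsilon_n]\leq C'\varepsilon_n^{3}$, so that $R_1\leq C'T\varepsilon_n\to 0$.)\\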
\noindent We now turn to $R_2$. Recall that $w_{T_0}(\varepsilon_n) \leq K \mathbb{E}[w_{T_0}(\varepsilon_n)]=\delta_n \to 0$. Then, we have $$({\tilde{\tau}^n_{j+1} -\tau^n_{j+1}} ) \mathbbm{1}_ {\{\tilde{\tau}^n_{j+1}\vee \tau^n_{j+1}< \tau^n_j + \varepsilon_n\}}\leq \hat{J}^n_{j+1}-\check{J}^n_{j+1},$$ with \begin{align*} \hat{J}^n_{j+1}&=\inf\Big\{t\geq \tau^n_j ; X_{\tau^n_j+t}- X_{\tau^n_j}\notin\big(-\varepsilon_n(\underline{l}_{\tau^n_j}+\delta_n),\varepsilon_n(\overline{l}_{\tau^n_j}+\delta_n)\big)\Big\},\\ \check{J}^n_{j+1}&=\inf\Big\{t\geq \tau^n_j ;X_{\tau^n_j+t}- X_{\tau^n_j}\notin\big(-\varepsilon_n(\underline{l}_{\tau^n_j}-\delta_n),\varepsilon_n(\overline{l}_{\tau^n_j}-\delta_n)\big)\Big\}. \end{align*} Using again the Dambis, Dubins-Schwarz theorem and the various boundedness assumptions, we get
$$\mathbb{E}\Big[\mathbb{E}\big[\hat{J}^n_{j+1} - \check{J}^n_{j+1}\big|\mathcal{F}_{\check{J}^n_{j+1}}\big]\big|\mathcal{F}_{\tau^n_j}\Big]\leq C\varepsilon_n^2 \delta_n.$$ Consequently, $$\mathbb{E}[R_2]\leq C\delta_n,$$ which gives the result. \end{proof} \subsubsection*{Part 3: Proof of \eqref{asmp: suff}} Here we prove \eqref{asmp: suff}, which completes the proof of the convergence in law of $\varepsilon_n^{-1}Z^n_T$ with the help of Proposition~\ref{proposition: stable conv} and Proposition~\ref{prop: intuition a s}. As already seen, by It$\hat{\text{o}}$'s formula, we have \begin{equation*} \begin{split}
& \mathbb{E}[\Delta_{j,n}^4|\mathcal{F}_{\tau^n_j}] = 6 \mathbb{E}\big[ \int_{\tau^n_j}^{\tau^n_{j+1}} (X_t-X_{\tau^n_j})^2d\crochet{X}_t
\big| \mathcal{F}_{\tau^n_j} \big] = A_j, \\
& \mathbb{E}[\Delta_{j,n}^3|\mathcal{F}_{\tau^n_j}] = 3 \mathbb{E}\big[
\int_{\tau^n_j}^{\tau^n_{j+1}} (X_t-X_{\tau^n_j})d\crochet{X}_t \big| \mathcal{F}_{\tau^n_j} \big] = B_j. \end{split} \end{equation*} Therefore, we obtain \begin{equation*} \begin{split} &\varepsilon_n^{-2}\sum_{j=0}^{N^n_t}\kappa_{\tau^n_j}A_j = \varepsilon_n^{-2}\sum_{j=0}^{N^n_t}\kappa_{\tau^n_j}
\mathbb{E}\big[(X_{\tilde{\tau}^n_{j+1}} - X_{\tau^n_j})^4| \mathcal{F}_{\tau^n_j}\big] + R_t
\\ & \varepsilon_n^{-1}\sum_{j=0}^{N^n_t}\kappa_{\tau^n_j}B_j = \varepsilon_n^{-1}\sum_{j=0}^{N^n_t}\kappa_{\tau^n_j}
\mathbb{E}\big[(X_{\tilde{\tau}^n_{j+1}} - X_{\tau^n_j})^3| \mathcal{F}_{\tau^n_j}\big] + R^\prime_t \end{split} \end{equation*} where \begin{equation*} \begin{split} & R_t = 6 \varepsilon_n^{-2}\sum_{j=0}^{N^n_t}\kappa_{\tau^n_j}
\mathbb{E}\Big[\int_{\tilde{\tau}^n_{j+1}}^{\tau^n_{j+1}}(X_u-X^n_u)^2(\sigma^X_u)^2du \big| \mathcal{F}_{\tau^n_j}\Big], \\ & R^\prime_t =3
\varepsilon_n^{-1}\sum_{j=0}^{N^n_t}\kappa_{\tau^n_j} \mathbb{E}\Big[\int_{\tilde{\tau}^n_{j+1}}^{\tau^n_{j+1}}(X_u-X^n_u)(\sigma^X_u)^2du \big| \mathcal{F}_{\tau^n_j}\Big]. \end{split} \end{equation*}
Since $\varepsilon_n^{-1}\sup_t|X_t-X^n_t| \leq K$ and $\sigma^X \leq K$, $R$ and $R^\prime$ converge to $0$ uniformly in probability on $[0,T_0]$ by Lemma~\ref{key of lem}. Using that for $b_1>0$ and $b_2>0$, the probability that a Brownian motion starting from zero hits level $b_1$ before level $-b_2$ is equal to $b_2/(b_2+b_1)$, we get \begin{equation*}
\varepsilon_n^{-2} \frac{\mathbb{E}\big[(X_{\tilde{\tau}^n_{j+1}} - X_{\tau^n_j})^4\big| \mathcal{F}_{\tau^n_j}\big]}{\mathbb{E}\big[(X_{\tilde{\tau}^n_{j+1}} - X_{\tau^n_j})^2\big| \mathcal{F}_{\tau^n_j}\big]}= a^2_{\tau^n_j}, \ \
\varepsilon_n^{-1} \frac{\mathbb{E}\big[(X_{\tilde{\tau}^n_{j+1}} - X_{\tau^n_j})^3\big| \mathcal{F}_{\tau^n_j}\big]}{\mathbb{E}\big[(X_{\tilde{\tau}^n_{j+1}} - X_{\tau^n_j})^2\big| \mathcal{F}_{\tau^n_j}\big]}= -s_{\tau^n_j}, \end{equation*} where \begin{equation*} a^2=\overline{l}^2 + \underline{l}^2- \overline{l}\underline{l}, \ \ s = \underline{l}- \overline{l}. \end{equation*} Then, to complete the proof, it suffices to show that the convergences \begin{equation*}
\begin{split}
& \sum_{j=0}^{N^n_\cdot}\kappa_{\tau^n_j}a^2_{\tau^n_j}\mathbb{E}\big[(X_{\tilde{\tau}^n_{j+1}} - X_{\tau^n_j})^2| \mathcal{F}_{\tau^n_j}\big] \to \int_0^\cdot a_u^2 (\sigma^Y_u)^2du, \\
& \sum_{j=0}^{N^n_\cdot}\kappa_{\tau^n_j}s_{\tau^n_j}\mathbb{E}\big[(X_{\tilde{\tau}^n_{j+1}} - X_{\tau^n_j})^2| \mathcal{F}_{\tau^n_j}\big] \to \int_0^\cdot s_u (\sigma^Y_u)^2du
\end{split} \end{equation*} hold uniformly in probability on $[0,T_0]$. This follows from Lemma~A.4 in \cite{fukasawa2011discretization} together with Lemma~\ref{lemma: conv diff qv}.
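\noindent For the reader's convenience, let us indicate how the two ratios involving $a^2_{\tau^n_j}$ and $-s_{\tau^n_j}$ above follow from the stated hitting probability; the computation is elementary and we neglect here, for simplicity, the truncation at $T_0$ in the definition of $\tilde{\tau}^n_{j+1}$. Write $b_1=\varepsilon_n \overline{l}_{\tau^n_j}$ and $b_2=\varepsilon_n \underline{l}_{\tau^n_j}$. Conditionally on $\mathcal{F}_{\tau^n_j}$, the increment $D=X_{\tilde{\tau}^n_{j+1}} - X_{\tau^n_j}$ equals $b_1$ with probability $b_2/(b_1+b_2)$ and $-b_2$ with probability $b_1/(b_1+b_2)$, so that \begin{equation*} \mathbb{E}\big[D^2\big| \mathcal{F}_{\tau^n_j}\big]=b_1b_2, \quad \mathbb{E}\big[D^3\big| \mathcal{F}_{\tau^n_j}\big]=b_1b_2(b_1-b_2), \quad \mathbb{E}\big[D^4\big| \mathcal{F}_{\tau^n_j}\big]=b_1b_2(b_1^2-b_1b_2+b_2^2). \end{equation*} Dividing the last two identities by the first one and substituting the values of $b_1$ and $b_2$ gives $-\varepsilon_n s_{\tau^n_j}$ and $\varepsilon_n^2 a^2_{\tau^n_j}$ respectively.\\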
\subsubsection{Proof of \eqref{moment2}} Here we prove a moment convergence result, so the localization procedure used above does not apply. We set \begin{align*} A_n &= \varepsilon_n^{-1}\int_0^T(X^n_t - X_t)b^Y_t dt,\\ B_n &= \varepsilon_n^{-1}\int_0^T(X^n_t - X_t)\sigma^Y_t dW^Y_t. \end{align*} We have $$(\varepsilon_n^{-1}{Z}^n_T)^2 = (A_n+ B_n)^2 \leq 2( A_n^2 + B_n^2). $$ Thus it is enough to prove the uniform integrability of $(A_n^2)$ and $ (B_n^2)$ to obtain the result. For $(A_n^2)$, we have $$\underset{n}{\text{sup}}(A_n)^2 \leq \big(\int_0^T (\overline{l}_t\vee\underline{l}_t)\abs{b^Y_t}dt\big)^2 \leq T\int_0^T(\overline{l}_t\vee\underline{l}_t)^2(\rho_t)^2 (\sigma^Y_t)^2dt.$$ The right hand side of the last inequality being integrable, this gives the result for $(A_n)^2$. We now turn to $(B_n^2)$. The sequence $(B_n^2)$ is nonnegative, integrable, and converges in law towards an integrable limit. Thus the uniform integrability is equivalent to the convergence in expectation, see for example \cite{billingsley2009convergence}. Since \begin{equation*} \varepsilon_n^{-2}\crochet{Z^n}_T\to \frac{1}{6}\int_0^Ta_t^2(\sigma^Y_t)^2dt \end{equation*} and $$\varepsilon_n^{-2}\crochet{Z^n}_T\leq \int_0^T(\overline{l}_t\vee \underline{l}_t)^2(\sigma^Y_t)^2dt,$$ we readily obtain \begin{equation*} \mathbb{E}[B_n^2]= \mathbb{E}[\varepsilon_n^{-2}\crochet{Z^n}_T]\to \frac{1}{6}\mathbb{E}[\int_0^T a_t^2(\sigma^Y_t)^2dt], \end{equation*} which concludes the proof.
\subsection{Proof of Theorem \ref{theo : main result}} We start with the first part of Theorem \ref{theo : main result}. Let $(m,v)$ be a non dominated couple. Suppose, for contradiction, that it is a dominated couple with respect to $\mathcal{Z}_T$. This means there exists a process $s_t^*$ such that the associated expectation, say $m'=\mathbb{E}[Z_{T,s^*}]$, is larger than $m$ and the expected error, say $v'=\mathbb{E}[(Z_{T,s^*})^2]$, is strictly smaller than $v$. From Lemma \ref{lem : hitting strat univ}, for any $\eta$ we can find an admissible strategy with limiting variable $Z^*_{s^*+\eta,s^*}$. Clearly, we can find $\eta$ small enough such that
$\mathbb{E}[Z^*_{s^*+\eta,s^*}]=m'$ and $$v'\leq \mathbb{E}[(Z^*_{s^*+\eta,s^*})^2]<v.$$ Consequently, $(m,v)$ is a dominated couple, which is absurd. Conversely, any point which is non dominated with respect to $\mathcal{Z}_T$ is non dominated since $a_t^2\geq s_t^2$.\\
\noindent For the second part, it remains to show that the proposed discretization rules indeed lead to nearly efficient couples. The fact that they are admissible is clear from Proposition \ref{prop : hitting strats}. Recall now that for the suggested rule $$a_t^2=(s_t^*)^2+\frac{6\delta}{(\sigma^Y_t)^2}.$$ This equality gives that the limiting variable $Z^*_{a,s}$ associated to this discretization rule satisfies $$\mathbb{E}[Z^*_{a,s}]= \frac{1}{3}\mathbb{E}\big[\int_0^Ts_t^*dY_t\big]$$ and \begin{align*} \mathbb{E}[(Z^*_{a,s})^2]&=\frac{1}{9}\mathbb{E}\big[\big(\int_0^Ts_t^*dY_t\big)^2\big]+\frac{1}{6}\mathbb{E} \big[\int_0^T\big(a_t^2-\frac{2}{3}(s_t^*)^2\big)(\sigma^Y_t)^2dt\big]\\ &=\frac{1}{9}\mathbb{E}\big[\big(\int_0^Ts_t^*dY_t\big)^2\big]+\frac{1}{18}\mathbb{E} \big[\int_0^T\big((s_t^*)^2(\sigma^Y_t)^2\big)dt\big]+\delta T. \end{align*} The couple $$\Big(\frac{1}{3}\mathbb{E}\big[\int_0^Ts_t^*dY_t\big],\frac{1}{9}\mathbb{E}\big[\big(\int_0^Ts_t^*dY_t\big)^2\big]+\frac{1}{18}\mathbb{E} \big[\int_0^T\big((s_t^*)^2(\sigma^Y_t)^2\big)dt\big]\Big)$$ being non dominated, we obtain the result.
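\noindent As an illustrative aside, the suggested rule is straightforward to implement numerically: given $s^*_t$, $\delta$ and $\sigma^Y_t$, one may set $a_t^2=(s_t^*)^2+6\delta/(\sigma^Y_t)^2$ and choose barriers with $\underline{l}_t-\overline{l}_t=s^*_t$ and $\overline{l}_t^2+\underline{l}_t^2-\overline{l}_t\,\underline{l}_t=a_t^2$, which is an elementary inversion. A minimal Python sketch of this inversion (purely hypothetical; the function and variable names are ours) could read as follows.
\begin{verbatim}
import math

def barriers_from_a_s(a2, s):
    # Solve l_up^2 + l_down^2 - l_up*l_down = a2 and l_down - l_up = s;
    # both barriers are nonnegative as soon as a2 >= s^2.
    disc = 4.0 * a2 - 3.0 * s * s
    assert disc >= 0.0
    l_up = (-s + math.sqrt(disc)) / 2.0
    return l_up, l_up + s

def suggested_rule(s_star, delta, sigma_Y):
    # a^2 = (s*)^2 + 6*delta/sigma_Y^2, then invert for the barriers.
    a2 = s_star ** 2 + 6.0 * delta / sigma_Y ** 2
    return barriers_from_a_s(a2, s_star)

l_up, l_down = suggested_rule(s_star=-0.5, delta=0.01, sigma_Y=0.2)
# Round-trip check of the two defining relations (expected: 1.75 and -0.5).
print(l_up ** 2 + l_down ** 2 - l_up * l_down, l_down - l_up)
\end{verbatim}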
\section{Linear-quadratic optimal control}\label{append: LQ} We give here a summary of useful formulas from \cite{zhou2000continuous}. Consider a controlled system governed by the following linear SDE: \begin{equation}\label{LQ: dynm} \begin{cases} dX_t = (A_tX_t + B_tu_t + f_t)dt + \sum_{j=1}^mD^j_tu_tdW^j_t,\\ X_0 = x \in \mathbb{R}^n, \end{cases} \end{equation} where $x$ is the initial state and $W =(W^1, \cdots, W^m)$ is a $m$-dimensional Brownian motion on a given filtered probability space $(\Omega, \mathcal{F}, \mathbb{P}, \paren{\mathcal{F}_t}_{t\geq 0})$ and $u\in L^2_{\mathcal{F}}([0, T], \mathbb{R}^m)$ is a control. For each control $u$, the associated cost is \begin{equation}\label{LQ: cost} J(u) = \E{\int_0^T\frac{1}{2}\paren{X_t'Q_tX_t + u'_tR_tu_t}dt + \frac{1}{2}X_T'HX_T}. \end{equation} We suppose that all the parameters are deterministic and continuous on $[0, T]$ and $H$ belongs to $S^n_+$ the set of $n\times n$ symmetric positive matrices. We introduce the following matrix Riccati equation
\begin{equation}\label{eqn: riccati}
\begin{cases} \dot{P}_t = -P_tA_t -A_t'P_t -Q_t +P_tB_tK_t^{-1}B_t'P_t,\\ P_T = H,\\ K_t = R_t + \sum_{j=1}^{m}D^{j'}_tP_tD^j_t > 0, \quad \forall t\in [0, T],
\end{cases}
\end{equation} along with an equation \begin{equation}\label{eqn: g} \begin{cases} \dot{g}_t = -A_t'g_t + P_tB_tK_t^{-1}B_t'g_t-P_tf_t,\\ g_T = 0. \end{cases} \end{equation} Then following result is given in \cite{zhou2000continuous}. \begin{theo}\label{theoxyz} If \eqref{eqn: riccati} and \eqref{eqn: g} admit solutions $P\in C([0, T], S^n_+)$ and $g\in C([0, T], \mathbb{R}^n)$ respectively, then the stochastic linear-quadratic control problem \eqref{LQ: dynm}-\eqref{LQ: cost} has an optimal feedback control \begin{equation*} u^*(t, x)=-K_t^{-1}B'_t(P_tX_t+g_t). \end{equation*} Moreover, the optimal cost value is \begin{equation*} J^* = \frac{1}{2}\int_{0}^{T}\paren{2f_t'g_t - g_tB_tK^{-1}_tB_t'g_t}dt + \frac{1}{2}x'P_0x + xg_0. \end{equation*} \end{theo} \end{appendices}
\end{document}
\begin{document}
\begin{abstract} This note presents some properties of the variety of planes $F_2(X)\subset G(3,7)$ of a cubic $5$-fold $X\subset \mathbb P^6$. A cotangent bundle exact sequence is first derived from the remark made in \cite{Iliev-Manivel_cub_hyp_int_syst} that $F_2(X)$ sits as a Lagrangian subvariety of the variety of lines of a cubic $4$-fold, which is a hyperplane section of $X$. Using the sequence, the Gauss map of $F_2(X)$ is then proven to be an embedding. The last section is devoted to the relation between the variety of osculating planes of a cubic $4$-fold and the variety of planes of the associated cyclic cubic $5$-fold. \end{abstract}
\dedicatory{\large Dedicated to Claire Voisin on the occasion of her 60th birthday} \title{Remarks on the geometry of the variety of planes of a cubic fivefold}
\section{Introduction} To understand the topology and the geometry of smooth complex hypersurfaces $X\subset \mathbb P(V^*)\simeq\mathbb P^{n+1}$, various auxiliary manifolds have been introduced in the past century, the intermediate Jacobian $$J^n(X):=(H^{k,k+1}(X)\oplus H^{k-1,k+2}(X)\oplus\cdots\oplus H^{0,n})/H^n(X,\mathbb Z)_{/torsion}$$ when $n=2k+1$ is odd, being, since the seminal work of Clemens-Griffiths (\cite{Cl-Gr}) on the cubic threefold, one of the most widely known.\\ \indent Cubic fivefolds are classically (\cite{griffiths_periods}) known to be the only hypersurfaces of dimension $>3$ for which the intermediate Jacobian, which is in general just a (polarised) complex torus, is a (non-trivial) principally polarised abelian variety.\\ \indent Another interesting series of varieties classically associated to $X$ is given by the varieties $F_m(X)\subset G(m+1,V)$ of $m$-planes contained in $X$.\\ \indent Starting from Collino (\cite{Coll_cub}), some properties of the variety of planes $F_2(X)\subset G(3,V)$ of a cubic $5$-fold $X$ have been studied in connection with the $21$-dimensional intermediate Jacobian $J^5(X)$. In \textit{loc. cit.}, the following is proven
\begin{theoreme}\label{thm_Collino_intro} For a general cubic $X\subset \mathbb P(V^*)\simeq \mathbb P^6$, $F_2(X)$ is a smooth irreducible surface and the Abel-Jacobi map of the family of planes $\Phi_{\mathcal P}:F_2(X)\rightarrow J^5(X)$ is an immersion, i.e., the associated tangent map is injective, and induces an isomorphism of abelian varieties $$\phi_{\mathcal P}:Alb(F_2(X))\xrightarrow{\sim} J^5(X),$$ where $\mathcal P\in {\rm CH}^5(F_2(X)\times X)$ is the universal plane over $F_2(X)$. Equivalently, $q_*p^*:H^3(F_2(X),\mathbb Z)_{/torsion}\rightarrow H^5(X,\mathbb Z)$ is an isomorphism of Hodge structures where the maps are defined by $\begin{small}\xymatrix{\mathcal P\ar[r]^q\ar[d]^p &X\\ F_2(X) &}\end{small}$. \end{theoreme}
In the present note, we investigate some additional properties of $F_2(X)$.\\ \indent In the first section, we establish the following cotangent bundle exact sequence
\begin{theoreme}\label{thm_1} Let $X\subset \mathbb P(V^*)$ be a smooth cubic $5$-fold for which $F_2(X)$ is a smooth irreducible surface. Then the cotangent bundle $\Omega_{F_2(X)}$ fits in the exact sequence
\begin{equation}\label{ex_seq_tgt_bundle_seq} 0\rightarrow \mathcal Q_{3|F_2(X)}^*\rightarrow {\rm Sym}^2\mathcal E_{3|F_2(X)}\rightarrow \Omega_{F_2(X)}\rightarrow 0 \end{equation} where the tautological rank $3$ quotient bundle $\mathcal E_3$ and the bundle $\mathcal Q_3$ appear in the exact sequence \begin{equation}\label{ex_seq_def_taut_3} 0\rightarrow \mathcal Q_3\rightarrow V^*\otimes \mathcal O_{G(3,V)}\rightarrow \mathcal E_3\rightarrow 0 \end{equation}
and the first map (of (\ref{ex_seq_tgt_bundle_seq})) is the contraction with an equation ${\rm eq}_X\in {\rm Sym}^3V^*$ defining $X$, i.e., for any $[P]\in F_2(X)$, $v\mapsto {\rm eq}_X(v,\cdot,\cdot)_{|P}$. \end{theoreme}
Classically associated to the Albanese map $alb_{F_2}:F_2(X)\rightarrow Alb(F_2(X))$ of $F_2(X)$ there is the Gauss map $$\begin{tabular}{llll} $\mathcal G:$ &$alb_{F_2}(F_2(X))$ &$\dashrightarrow$ &$G(2, T_{Alb(F_2(X)),0})$\\ $ $ &$t$ &$\mapsto$ &$T_{alb_{F_2}(F_2(X))-t,0}$ \end{tabular}$$ where $alb_{F_2}(F_2(X))-t$ designates the translation of $alb_{F_2}(F_2(X))\subset {\rm Alb}(F_2(X))$ by $-t\in {\rm Alb}(F_2(X))$. The map $\mathcal G$ is defined on the smooth locus of $alb_{F_2}(F_2(X))$.\\ \indent In the second section of the note, we prove:
\begin{theoreme}\label{thm_gauss_map} The Albanese map is an embedding. In particular, the Gauss map is defined everywhere.\\ \indent Moreover, $\mathcal G$ is an embedding and its composition with the Pl\"ucker embedding $$G(2,T_{Alb(F_2(X)),0})\simeq G(2,H^0(\Omega_{F_2})^*)\subset \mathbb P(\bigwedge^2 H^0(\Omega_{F_2(X)})^*)$$ is obtained by composing the degree $3$ Veronese of the natural embedding $F_2(X)\subset G(3,V)\subset \mathbb P(\bigwedge^3V^*)$ with a linear projection. \end{theoreme}
The last section is concerned with some properties of the variety of osculating planes of a cubic $4$-fold, namely \begin{equation}\label{def_var_of_oscul_planes} F_0(Z):=\{[P]\in G(3,H),\ \exists \ell\subset P\ {\rm line\ s.t.}\ P\cap Z=\ell\ {\rm (set-theoretically)}\} \end{equation} where $Z\subset \mathbb P(H^*)\simeq \mathbb P^5$ is a smooth cubic $4$-fold containing no plane.\\ \indent This variety admits a natural projection to the variety of lines $F_1(Z)$ of $Z$ whose image (under that projection) has been studied for example in \cite{GK_geom_lines}. The interest of the authors there in the variety $F_0(Z)$ stems from its image in $F_1(Z)$ being the fixed locus of the Voisin self-map of $F_1(Z)$ (see \cite{Voisin_map}), a map that plays an important role in the understanding of algebraic cycles on the hyper-K\"ahler $4$-fold $F_1(Z)$ (see for example \cite{shen-vial}).\\ \indent In \cite{GK_geom_lines}, it is proven that for $Z$ general, $F_0(Z)$ is a smooth irreducible surface and some of its invariants are computed.\\ \indent We compute some more invariants of $F_0(Z)$ using its link with the variety of planes $F_2(X_Z)$ of the associated cyclic cubic $5$-fold: to a smooth cubic $4$-fold $Z=\{{\rm eq}_Z=0\}\subset \mathbb P^5$ one can associate the cubic $5$-fold $X_Z=\{X_6^3+{\rm eq}_Z(X_0,\dots,X_5)=0\}$ which (by linear projection) is the degree $3$ cyclic cover of $\mathbb P^5$ ramified over $Z$. We have
\begin{theoreme}\label{thm_sum_up_var_oscul_planes} For $Z$ general, $F_0(Z)$ is a smooth irreducible surface and\\ \indent (1) $F_2(X_Z)$ is a degree $3$ \'etale cover of $F_0(Z)$;\\ \indent (2) $b_1(F_0(Z))=0$; $h^2(\mathcal O_{F_0(Z)})=1070$; $h^1(\Omega_{F_0(Z)})=2207$;\\ \indent (3) ${\rm Im}(F_0(Z)\rightarrow F_1(Z))$ is a (non-normal) Lagrangian surface of $F_1(Z)$. \end{theoreme} \begin{remarque} {\rm As mentioned by the referee and Frank Gounelas, it is proven in \cite{GK_geom_lines} that $[{\rm Im}(F_0(Z)\rightarrow F_1(Z))]=21[F_1(Z\cap H)]$ in ${\rm CH}_2(F_1(Z))$, where $Z\cap H$ is a cubic $3$-fold obtained as a general hyperplane section, which implies that $[{\rm Im}(F_0(Z)\rightarrow F_1(Z))]$ is Lagrangian (see \cite[Lemma 6.4.5]{Huy_cub} for example).} \end{remarque} \textit{}\\
\section{Cotangent bundle exact sequence} Let $X\subset \mathbb P(V^*)\simeq \mathbb P^6$ be a smooth cubic $5$-fold. Its variety of planes $F_2(X)\subset G(3,V)$ is the zero locus of the section of ${\rm Sym}^3\mathcal E_3$ (where $\mathcal E_3$ is defined by (\ref{ex_seq_def_taut_3})) induced by an equation ${\rm eq}_X\in H^0(\mathcal O_{\mathbb P^6}(3))$ of $X$.\\ \indent Let us gather some basic properties of $F_2(X)$ before proving Theorem \ref{thm_1}.\\ \indent It is proven in \cite[Proposition 1.8]{Coll_cub} that $F_2(X)$ is connected for any $X$ so that, by Bertini-type theorems, for $X$ general, $F_2(X)$ is a smooth irreducible surface.\\ \indent As $F_2(X)$ is cut out of $G(3,V)$ by a regular section of the rank $10$ vector bundle ${\rm Sym}^3\mathcal E_3$, the Koszul resolution says that the structure sheaf $\mathcal O_{F_2(X)}$ is quasi-isomorphic to the complex \begin{equation}\label{ex_seq_koszul_resol} 0\rightarrow \wedge^{10}{\rm Sym}^3\mathcal E_3^*\rightarrow\wedge^9{\rm Sym}^3\mathcal E_3^*\rightarrow \cdots\rightarrow {\rm Sym}^3\mathcal E_3^*\rightarrow \mathcal O_{G(3,V)}\rightarrow 0 \end{equation}
where the differentials are given by the section of ${\rm Sym}^3\mathcal E_3$. By the adjunction formula $$K_{F_2(X)}\simeq K_{G(3,V)}\otimes det({\rm Sym}^3\mathcal E_{3|F_2(X)})\simeq \mathcal O_{G(3,V)}(3)_{|F_2(X)}:=\mathcal O_{F_2(X)}(3).$$\\ \indent Theorem \ref{thm_Collino_intro} (see also Theorem \ref{thm_descrip_h_1_and_wedge} below) implies that $h^{1,0}(F_2(X))=h^0(\Omega_{F_2(X)})= h^{2,3}(X)=21$ and we can use software to compute the other Hodge numbers (see also \cite{gammel}). We use the package Schubert2 of Macaulay2:\\ \begin{enumerate} \item The Koszul resolution of $\mathcal O_{F_2(X)}$ gives $\chi(\mathcal O_{F_2(X)})=\sum_{i=0}^{10}(-1)^i\chi(\wedge^i{\rm Sym}^3\mathcal E_3^*)$. We can get the result $\chi(\mathcal O_{F_2(X)})=3213$ using the following code
\begin{verbatim}
loadPackage "Schubert2"
G=flagBundle{4,3}
(Q,E)= bundles G
F=symmetricPower(3,dual(E))
chi(exteriorPower(0,F))-chi(exteriorPower(1,F))+chi(exteriorPower(2,F))
-chi(exteriorPower(3,F))+chi(exteriorPower(4,F))-chi(exteriorPower(5,F))
+chi(exteriorPower(6,F))-chi(exteriorPower(7,F))+chi(exteriorPower(8,F))
-chi(exteriorPower(9,F))+chi(exteriorPower(10,F))
\end{verbatim}
Then we get $h^{0,2}(F_2(X))=\chi(\mathcal O_{F_2(X)})-1+h^{0,1}(F_2(X))=3233$.\\ \item Next, Noether's formula gives $\chi_{top}(F_2(X))=12\chi(\mathcal O_{F_2(X)})-\int_{F_2(X)}c_1(K_{F_2(X)})^2$ and as
$$\begin{small}\begin{aligned}\int_{F_2(X)}c_1(K_{F_2(X)})^2 &=\int_{F_2(X)}c_1(\mathcal O_{G(3,V)}(3)_{|F_2(X)})^2\\ &=\int_{G(3,V)}[F_2(X)]\cdot c_1(\mathcal O_{G(3,V)}(3))^2\\ &= 9\int_{G(3,V)}c_{10}({\rm Sym}^3\mathcal E_3)\cdot c_1(\mathcal O_{G(3,V)}(1))^2\end{aligned}\end{small}$$ the number $\int_{F_2(X)}c_1(K_{F_2(X)})^2=3^2\times 2835=25515$ can be obtained using the code
\begin{verbatim}
loadPackage "Schubert2"
G=flagBundle{4,3}
(Q,E)= bundles G
F=symmetricPower(3,E)
cycle=chern(1,exteriorPower(3,E))*chern(1,exteriorPower(3,E))*chern(10,F)
integral cycle
\end{verbatim}
Then we get $b_2(F_2(X))=\chi_{top}(F_2(X))-2+2b_1(F_2(X))=13041-2+4\times 21=13123$ and $h^{1,1}(F_2(X))=b_2(F_2(X))-2h^{0,2}(F_2(X))=6657$. \end{enumerate}
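\indent Note that the value $\chi(\mathcal O_{F_2(X)})=3213$ is consistent with Theorem \ref{thm_sum_up_var_oscul_planes}: for $Z$ general, item (1) provides a degree $3$ \'etale cover $F_2(X_Z)\rightarrow F_0(Z)$, under which both $K^2$ and $\chi_{top}$ get multiplied by $3$, hence so does $\chi(\mathcal O)$ by Noether's formula; items (1) and (2) thus give $\chi(\mathcal O_{F_2(X_Z)})=3\,\chi(\mathcal O_{F_0(Z)})=3(1-0+1070)=3213$, in accordance with the computation above (which only uses that the variety of planes is a smooth surface cut out by a regular section of ${\rm Sym}^3\mathcal E_3$).\\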
\textit{}\\
\indent Associated with $X$, there is also its variety of lines $F_1(X)\subset G(2,V)$. It is a smooth Fano variety of dimension $6$ which is cut out by a regular section of ${\rm Sym}^3\mathcal E_2$ where the latter is the tautological rank $2$ quotient bundle appearing in an exact sequence $$0\rightarrow \mathcal Q_2\rightarrow V^*\otimes \mathcal O_{G(2,V)}\rightarrow \mathcal E_2\rightarrow 0.$$ Let us examine the relation between the two auxiliary varieties by introducing the flag variety $$\xymatrix{Fl(2,3,V) \ar[d]_t\ar[r]^e &Gr(2,V)\\ Gr(3,V) &}$$ where $t:Fl(2,3,V)\simeq \mathbb P(\wedge^2 \mathcal E_3)\rightarrow Gr(3,V)$ and $e:Fl(2,3,V)\simeq \mathbb P(\mathcal Q_2)\rightarrow Gr(2,V)$. For the tautological quotient line bundles, we have $\mathcal O_{t}(1)\simeq e^*\mathcal O_{Gr(2,V)}(1)$ and $\mathcal O_{e}(1)\simeq t^*\mathcal O_{Gr(3,V)}(1)\otimes e^*\mathcal O_{Gr(2,V)}(-1)$.\\ \indent On $Fl(2,3,V)$, the relation between the two tautological bundles is given by the exact sequence \begin{equation}\label{ex_seq_taut_bundles_2_3} 0\rightarrow e^*\mathcal O_{G(2,V)}(-1)\otimes t^*\mathcal O_{G(3,V)}(1)\rightarrow t^*\mathcal E_3\rightarrow e^*\mathcal E_2\rightarrow 0. \end{equation} \indent We can restrict the flag bundle to get
$$\xymatrix{\mathbb P_{F_2}:=\mathbb P(\wedge^2\mathcal E_{3|F_2(X)})\ar[r]^(.65){e_{F_2}}\ar[d]_{t_{F_2}} &F_1(X)\\ F_2(X) &.}$$
We have the following property. \begin{proposition}\label{prop_immersion_plan_lines} The tangent map $T e_{F_2}$ of $e_{F_2}$ is injective, i.e., $e_{F_2}$ is an immersion. Moreover, the ``normal bundle'' $N_{\mathbb P_{F_2}/F_1(X)}:= e_{F_2}^*T_{F_1(X)}/T_{\mathbb P_{F_2}}$ of $\mathbb P_{F_2}$ admits the following description:
\begin{equation}\label{ex_seq_tgt_bundle_part1}0\rightarrow t_{F_2}^*(\mathcal Q_{3|F_2(X)}^*)\otimes \mathcal O_e(1)\rightarrow t_{F_2}^*{\rm Sym}^2\mathcal E_3\otimes \mathcal O_e(1)\rightarrow N_{\mathbb P_{F_2}/F_1(X)}\rightarrow 0 \end{equation} \end{proposition} \begin{proof} (1) Let us first prove that $e_{F_2}$ is an immersion. Let us recall the natural isomorphism between the two presentations of the tangent space of $Fl(2,3,V)$: looking at $t$, we can write $$T_{Fl(2,3,V), ([\ell],[P])}\simeq Hom(\langle P\rangle,V/\langle P\rangle)\oplus Hom(\langle \ell\rangle,\langle P\rangle/\langle \ell\rangle)$$ and looking at $e$, we have $$T_{Fl(2,3,V), ([\ell],[P])}\simeq Hom(\langle\ell\rangle,V/\langle\ell\rangle)\oplus Hom(\langle P\rangle/\langle\ell\rangle,V/\langle P\rangle)$$ where we denote $\langle K\rangle\subset V$ the linear subspace whose projectivisation is $K\subset \mathbb P(V^*)$. For a given decomposition $\langle P\rangle\simeq \langle\ell\rangle\oplus \langle P\rangle/\langle\ell\rangle$, the isomorphism takes the following form $$\begin{tabular}{ccc} $Hom(\langle P\rangle,V/\langle P\rangle)\oplus Hom(\langle\ell\rangle,\langle P\rangle/\langle\ell\rangle)$ &$\rightarrow$ &$Hom(\langle\ell\rangle,V/\langle\ell\rangle)\oplus Hom(\langle P\rangle/\langle\ell\rangle,V/\langle P\rangle)$\\
$(f,\ g)$ &$\mapsto$ &$(f_{|\langle \ell\rangle}+g,\ f_{|\langle P\rangle/\langle\ell\rangle})$ \end{tabular}$$ Notice that by definition, we have $Im(f)\cap Im(g)=\{0\}$ so that in proving that $T_{([\ell],[P])}e_{F_2}$ is injective we can examine the two components separately.
Now we have the exact sequence
$$0\rightarrow N_{\ell/P}\rightarrow N_{\ell/X}\rightarrow N_{P/X|\ell}\rightarrow 0 $$
from which we get \begin{equation} \label{ex_seq_normal_bdle_coh_lines}
0\rightarrow \underset{\simeq \langle\ell\rangle^*}{H^0(\mathcal O_{\ell}(1))}\rightarrow H^0(N_{\ell/X})\rightarrow H^0(N_{P/X|\ell})\rightarrow 0=H^1(\mathcal O_\ell(1)) \end{equation} and we have $T_{F_1(X),[\ell]}\simeq H^0(N_{\ell/X})$.\\
\indent A linear form on $P$ defining $\ell$ is given by any generator of $(\langle P\rangle/\langle\ell\rangle)^*\subset \langle P\rangle^*$ so that $$T_{\mathbb P(\wedge^2 \mathcal E_{3|F_2(X)}),([\ell],[P])}\simeq \underbrace{T_{F_2(X),[P]}}_{\simeq H^0(N_{P/X})}\oplus \underbrace{\langle P\rangle^*/(\langle P\rangle/\langle\ell\rangle)^*}_{\simeq \langle\ell\rangle^*}.$$ The second summand is readily seen to inject into $T_{F_1(X),\langle\ell\rangle}$ by (\ref{ex_seq_normal_bdle_coh_lines}).
Next, we have the exact sequence
$$0\rightarrow N_{P/X}(-1)\rightarrow N_{P/X}\rightarrow N_{P/X|\ell}\rightarrow 0$$ which gives rise to \begin{equation}\label{ex_seq_normal-1_coh}
0\rightarrow H^0(N_{P/X}(-1))\rightarrow H^0(N_{P/X})\rightarrow H^0(N_{P/X|\ell})\rightarrow H^1(N_{P/X}(-1))\rightarrow H^1(N_{P/X}). \end{equation} To prove that $T_{([\ell],[P])}e_{F_2}$ is injective, it is thus sufficient to prove that $H^0(N_{P/X}(-1))=0$.\\ \indent Consider the exact sequence
\begin{equation}\label{ex_seq_nomal_plane} 0\rightarrow N_{P/X}\rightarrow \underbrace{N_{P/\mathbb P^6}}_{\simeq(V/\langle P\rangle)\otimes \mathcal O_P(1)} \overset{\alpha}{\rightarrow} \underbrace{N_{X/\mathbb P^6|P}}_{\simeq \mathcal O_P(3)}\rightarrow 0. \end{equation} Up to a projective transformation, we can assume $P=\{X_0=\cdots =X_3=0\}$ so that ${\rm eq}_X$ has the following form: \begin{equation}\label{normal_form_0}\begin{small}X_0Q_0 + X_1Q_1 + X_2Q_2 +X_3Q_3 + \sum_{i=4}^6X_iD_i(X_0,X_1,X_2,X_3) + R(X_0,X_1,X_2,X_3) \end{small} \end{equation}
where $R$ is a homogeneous cubic polynomial, $D_i$, $4\leq i\leq 6$ are homogeneous quadratic polynomials in the variables $(X_k)_{k\leq 3}$ and $Q_i$, $0\leq i\leq 3$ are homogeneous quadratic polynomials in $(X_i)_{4\leq i\leq 6}$. With these notations, $X$ is smooth along $P$ if and only if ${\rm Span}((Q_{i|P})_{i=0,\dots,3})$ is base-point free. We recall the following result found in \cite[Proposition 1.2 and Corollary 1.4]{Coll_cub}
\begin{proposition}\label{prop_result_Collino_smoothness} For $X$ smooth along $P$, we have: $F_2(X)$ is smooth at $[P]$ $\iff$ the quadrics $(Q_0,\dots,Q_3)$ are linearly independent $\iff$ the map $H^0(\alpha):H^0(N_{P/\mathbb P^6})\simeq (V/\langle P\rangle)\otimes H^0(\mathcal O_P(1))\rightarrow H^0(N_{X/\mathbb P^6|P})\simeq H^0(\mathcal O_P(3))$, $(L_0,\dots,L_3)\mapsto \sum_iL_iQ_i$, is surjective. \end{proposition}
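\indent The last criterion is easy to test in practice. The following Sage sketch (with a sample, purely illustrative choice of quadrics $Q_0,\dots,Q_3$ in the coordinates $X_4,X_5,X_6$ of $P$) checks that the map $(L_0,\dots,L_3)\mapsto \sum_iL_iQ_i$ has full rank $10=h^0(\mathcal O_P(3))$:
\begin{verbatim}
# sample (hypothetical) quadrics on P = {X_0 = ... = X_3 = 0}
R.<x4,x5,x6> = PolynomialRing(QQ)
Q = [x4^2, x5^2, x6^2, x4*x5 + x5*x6]
lin = [x4, x5, x6]                        # basis of H^0(O_P(1))
cubics = ((x4 + x5 + x6)^3).monomials()   # the 10 cubic monomials on P
M = matrix(QQ, [[(L*q).monomial_coefficient(m) for m in cubics]
                for q in Q for L in lin])
print(M.rank() == 10)   # surjectivity of (L_0,...,L_3) |-> sum_i L_i Q_i
\end{verbatim}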
Now tensoring (\ref{ex_seq_nomal_plane}) by $\mathcal O_P(-1)$, we get the following long exact sequence
\begin{equation}\label{ex_seq_normal_plane-1_coh} 0\rightarrow H^0(N_{P/X}(-1))\rightarrow V/\langle P\rangle\overset{H^0(\alpha(-1))}{\rightarrow} H^0(\mathcal O_P(2))\rightarrow H^1(N_{P/X}(-1))\rightarrow 0=H^1(\mathcal O_P)^{\oplus 4}. \end{equation}
The map $H^0(\alpha(-1))$ is given by the quadrics $(Q_0,\dots,Q_3)$. As $F_2(X)$ is smooth by assumption, the latter are linearly independent thus $H^0(\alpha(-1))$ is injective i.e. $H^0(N_{P/X}(-1))=0$. In particular $H^0(N_{P/X})\subset H^0(N_{P/X|\ell})$ hence, looking at (\ref{ex_seq_nomal_plane}) and (\ref{ex_seq_normal_bdle_coh_lines}), $T_{([\ell],[P])}e_{F_2}$ is injective.\\
\indent (2) We now want to establish the exact sequence (\ref{ex_seq_tgt_bundle_part1}). Pulling back the natural exact sequence of locally free sheaves, we get the commutative diagram:
$$\xymatrix{0 \ar[r] &T_{\mathbb P_{F_2}}\ar[r]\ar[d]^{Te_{F_2}} & T_{Fl(2,3,V)|\mathbb P_{F_2}}\ar[r]\ar[d]^{Te_{|\mathbb P_{F_2}}} &(t^*{\rm Sym}^3\mathcal E_3)_{|\mathbb P_{F_2}} \ar[r]\ar[d]^{\overline{Te_{|\mathbb P_{F_2}}}} &0\\
0 \ar[r] &e_{F_2}^*T_{F_1(X)} \ar[r] &e_{F_2}^*T_{Gr(2,V)|F_1(X)}\ar[r] &e_{F_2}^*{\rm Sym}^3\mathcal E_{2|F_1(X)}\ar[r] &0}$$
which by the snake lemma yields: $$ 0 \rightarrow Ker(Te_{|\mathbb P_{F_2}})\rightarrow Ker(\overline{Te_{|\mathbb P_{F_2}}})\rightarrow coker(Te_{F_2})\rightarrow 0.$$ By definition of the normal bundle, we get $coker(Te_{F_2})\simeq N_{\mathbb P_{F_2}/F_1(X)}$. The restriction of the following exact sequence of locally free sheaves $$\begin{small} 0\rightarrow T_{Fl(2,3,V)/Gr(2,7)}\rightarrow T_{Fl(2,3,V)}\rightarrow e^*T_{Gr(2,V)}\rightarrow 0\end{small}$$ being still exact, we get $ker(Te_{|\mathbb P_{F_2}})\simeq T_{Fl(2,3,V)/Gr(2,V)|\mathbb P_{F_2}}$. The relative tangent bundle appears in the exact sequence: $$\begin{small}0\rightarrow \mathcal O_{Fl(2,3,V)}\rightarrow e^*V/\mathcal E_2^*\otimes \mathcal O_e(1)\rightarrow T_{Fl(2,3,V)/Gr(2,V)}\rightarrow 0.\end{small}$$
The sequence (\ref{ex_seq_taut_bundles_2_3}) also yields $$\begin{small} 0\rightarrow t^*\mathcal O_{Gr(3,V)}(-1)\otimes e^*\mathcal O_{Gr(2,V)}(1)\rightarrow V/\mathcal E_2^*\rightarrow V/\mathcal E_3^*\rightarrow 0.\end{small}$$ from which we get, after twisting this last exact sequence by $\mathcal O_e(1)$, that $T_{Fl(2,3,V)/Gr(2,V)|\mathbb P_{F_2}}\simeq t_{F_2}^*V/\mathcal E_3^*\otimes \mathcal O_e(1)$.\\
\indent Next, taking the symmetric power of (\ref{ex_seq_taut_bundles_2_3}) we get the following exact sequence $$\begin{small} 0\rightarrow e^*\mathcal O_{Gr(2,V)}(-1)\otimes t^*\mathcal O_{Gr(3,V)}(1)\otimes t^*{\rm Sym}^2\mathcal E_3\rightarrow t^*{\rm Sym}^3\mathcal E_3\rightarrow e^*{\rm Sym}^3\mathcal E_2\rightarrow 0\end{small}$$ so that $ker(\overline{Te_{|\mathbb P_{F_2}}})\simeq (e^*\mathcal O_{Gr(2,V)}(-1)\otimes t^*\mathcal O_{Gr(3,V)}(1)\otimes t^*{\rm Sym}^2\mathcal E_3)_{|\mathbb P_{F_2}}$. Putting everything together, we get the desired exact sequence. \end{proof}
For any plane $P_0\subset X$, looking for example at the associated quadric bundle $$\xymatrix{\widetilde{X_{P_0}}\ar[rd]^{\tilde\gamma}\ar@{^{(}->}[r] &\mathbb P(\mathcal E_4)\ar[d]^{\gamma}\\
&B}$$
where $B\simeq \{[\Pi]\in G(4,V),\ P_0\subset \Pi\}\simeq \mathbb P^3$, $\mathcal E_4\simeq \langle P_0\rangle^*\otimes \mathcal O_{\mathbb P^3}\oplus \mathcal O_{\mathbb P^3}(1)$ and $\widetilde{X_{P_0}}\in |\mathcal O_{\gamma}(2)\otimes \gamma^*\mathcal O_{\mathbb P^3}(1)|$, we see that the locus of quadrics of rank $\leq 2$ has codimension (at most) $\binom{4-2+1}{2}=3$ and by the Harris-Tu formula (\cite[Theorem 1 and Theorem 10]{HT_degener}), there are (at least) $2\left|\begin{smallmatrix} c_2(\mathcal E_4\otimes L) &c_3(\mathcal E_4\otimes L)\\ c_0(\mathcal E_4\otimes L) &c_1(\mathcal E_4\otimes L)
\end{smallmatrix}\right|=31$ of them (where $L$ has to be thought of as a formal square root of $\mathcal O_{\mathbb P^3}(1)$).\\ \indent In particular, the locus $\Gamma=\{([\ell],[P])\in \mathbb P_{F_2},\ \exists [P']\neq [P],\ ([\ell],[P'])\in \mathbb P_{F_2}\}$ has codimension $2$ in $\mathbb P_{F_2}$ (above the general plane $[P]\in F_2(X)$ there are finitely many lines that belong to other planes $P'\subset X$).\\
\indent To any hyperplane $H\subset \mathbb P(V^*)$ such that $Y:=X\cap H$ is a smooth cubic $4$-fold containing no plane, we can attach the morphism $j_H:F_2(X)\rightarrow F_1(Y)$ defined by $[P]\mapsto [P\cap H]$.\\
\indent The subvariety $F_1(Y)\subset F_1(X)$ is the zero locus of the regular section of $\mathcal E_{2|F_1(X)}$ induced by the equation of $H\subset \mathbb P(V^*)$. For any such $Y$ (containing no plane), $e_{F_2}^{-1}(F_1(Y))$ is obviously a section $Z_H$ of $\mathbb P_{F_2}\rightarrow F_2(X)$, $[P]\mapsto ([P\cap H],[P])$. The smooth surface $Z_H\simeq F_2(X)$ is thus the zero locus of a regular section of $e_{F_2}^*\mathcal E_{2|F_1(X)}$. By Bertini type theorems, for $H$ general, $Z_H\cap \Gamma$ is $0$-dimensional.\\ \indent As a result, as noticed in \cite[Proposition 7]{Iliev-Manivel_cub_hyp_int_syst} (the published version corrects the preprint, in which it is wrongly claimed that $j_H$ is an embedding, as underlined in \cite{Huy_cub}), $j_H:Z_H\simeq F_2(X)\rightarrow F_1(Y)$ is an isomorphism onto its image outside a $0$-dimensional subset of $F_2(X)$.\\ \indent The following diagram is commutative:
$$\xymatrix{0 \ar[r] & T_{Z_H} \ar[r]\ar[d] &T_{\mathbb P_{F_2}|Z_H}\ar[r]\ar[d] & N_{Z_H/\mathbb P_{F_2}} \ar[r]\ar[d] &0\\
0 \ar[r] & (e_{F_2}^*T_{F_1(Y)})_{|Z_H}\ar[r] & (e_{F_2}^*T_{F_1(X)})_{|Z_H}\ar[r] & (e_{F_2}^*N_{F_1(Y)/F_1(X)})_{|Z_H}\ar[r] &0.}$$
As $Z_H\subset \mathbb P_{F_2}$ is the zero locus of a regular section of $e_{F_2}^*\mathcal E_{2|F_1(X)}$, we have $N_{Z_H/\mathbb P_{F_2}}\simeq (e_{F_2}^*\mathcal E_{2|F_1(X)})_{|Z_H}$ so that the last vertical arrow in the diagram is an isomorphism. As the second vertical arrow is injective by Proposition \ref{prop_immersion_plan_lines}, the first is injective as well. So the snake lemma gives $(e_{F_2}^*T_{F_1(Y)})_{|Z_H}/T_{Z_H}\simeq N_{\mathbb P_{F_2}/F_1(X)|Z_H}$.\\
\indent According to \cite[Proposition 4]{Iliev-Manivel_cub_hyp_int_syst}, ${\rm Im}(j_H)$ is a (non-normal) Lagrangian surface of the hyper-K\"ahler manifold $F_1(Y)$. In particular, outside a codimension $2$ subset of $F_2(X)$, we have $$\Omega_{Z_H}\simeq (e_{F_2}^*T_{F_1(Y)})_{|Z_H}/T_{Z_H}.$$ \indent As both sheaves are locally free, the isomorphism holds globally i.e.
\begin{equation}\label{isom_cotang_normal2}\Omega_{F_2(X)}\simeq N_{\mathbb P_{F_2}/F_1(X)|Z_H}. \end{equation} \indent We can now prove Theorem \ref{thm_1}.
\begin{proof}[Proof of Theorem \ref{thm_1}] Looking at (\ref{isom_cotang_normal2}) and (\ref{ex_seq_tgt_bundle_part1}) we see that we only have to check that $\mathcal O_e(1)_{|Z_H}\simeq \mathcal O_{Z_H}$.\\ \indent For a (general) hyperplane $H\subset \mathbb P(V^*)$, we have a rational map: $\varphi:Gr(3,V)\dashrightarrow Gr(2,\langle H\rangle)$, $P\mapsto P\cap H$ whose indeterminacy locus is $Gr(3,\langle H\rangle)$. The morphism $j_H:F_2(X)\simeq Z_H\rightarrow F_1(Y)$ is the restriction of the map $\varphi$ to $F_2(X)$. To get the result we will show more generally that $\varphi^*\mathcal O_{Gr(2,\langle H\rangle)}(-1)\otimes \mathcal O_{Gr(3,V)}(1)$ restricts to the trivial line bundle on the open set where $\varphi$ is defined i.e. on $Gr(3,V)\backslash Gr(3,\langle H\rangle)$.\\
\indent The subvariety $Gr(3,\langle H\rangle)\subset Gr(3,V)$ is the zero locus of a regular section of $\mathcal E_3$ so that $N_{Gr(3,\langle H\rangle)/Gr(3,V)}\simeq \mathcal E_{3|Gr(3,\langle H\rangle)}$. After blowing-up this locus we get $$\begin{small}\xymatrix{E_\tau \ar@{^{(}->}[r]^j\ar[d] & \widetilde{Gr(3,V)}\ar[d]^\tau \ar[rd]^{\widetilde{\varphi}} &\\
Gr(3,\langle H\rangle)\ar@{^{(}->}[r]^i & Gr(3,V)\ar@{-->}[r]^\varphi &Gr(2,\langle H\rangle)}\end{small}$$
where the exceptional divisor $E_\tau$ is isomorphic to $\mathbb P(\mathcal E_3^*)\simeq \mathbb P(\wedge^2\mathcal E_3\otimes det(\mathcal E_3)^{-1})$. So $E_\tau$ is isomorphic to the flag variety $Fl(2,3,\langle H\rangle)$ and $\widetilde\varphi\circ j$ corresponds to the projection onto the Grassmannian of lines; hence $$\begin{small}\mathcal O_{E_\tau}(1)\simeq j^*\widetilde\varphi^*\mathcal O_{Gr(2,\langle H\rangle)}(1)\otimes \tau_{E_\tau}^*i^*\mathcal O_{Gr(3,V)}(-1)\ {\rm in\ Pic}(E_\tau).\end{small}$$ As the restriction ${\rm Pic}(Gr(3,V))\rightarrow {\rm Pic}(Gr(3,\langle H\rangle))$ is an isomorphism, so is ${\rm Pic}(\widetilde{Gr(3,V)})\rightarrow {\rm Pic}(E_\tau)$ thus $$\begin{small}\mathcal O_{\widetilde{Gr(3,V)}}(-E)\simeq \widetilde\varphi^*\mathcal O_{Gr(2,\langle H\rangle)}(1)\otimes \tau^*\mathcal O_{Gr(3,V)}(-1)\ {\rm in\ Pic}(\widetilde{Gr(3,V)}).\end{small}$$
Now pushing forward by $\tau$ the short exact sequence defining $E$, we get $$\begin{small}\tau_*\widetilde\varphi^*\mathcal O_{Gr(2,\langle H\rangle)}(1)\otimes \mathcal O_{Gr(3,V)}(-1)\simeq \tau_*\mathcal O_{\widetilde{Gr(3,V)}}(-E)\simeq \mathcal I_{Gr(3,\langle H\rangle)/ Gr(3,V)}\end{small}$$ which is indeed trivial on $Gr(3,V)\backslash Gr(3,\langle H\rangle)$. \end{proof} \textit{}\\
\section{Gauss map of $F_2(X)$} Let $X\subset \mathbb P(V^*)\simeq \mathbb P^6$ be a smooth cubic hypersurface such that $F_2(X)$ is a smooth (irreducible) surface. We begin this section with the following
\begin{theoreme}\label{thm_descrip_h_1_and_wedge} The following sequence is exact: \begin{equation}\label{ex_seq_descrip_H_1} 0\rightarrow H^1(\mathcal O_{F_2(X)})\rightarrow {\rm Sym}^2V\otimes det(V)\overset{\varphi_{{\rm eq}_X}\otimes {\rm id}_{det(V)}}{\rightarrow} V^*\otimes det(V)\rightarrow 0 \end{equation} where $\varphi_{{\rm eq}_X}$ is defined by $e_i\cdot e_j\mapsto {\rm eq}_X(e_i,e_j,\cdot)$.\\ \indent Moreover, we have an inclusion $\bigwedge^2H^1(\mathcal O_{F_2(X)})\subset H^2(\mathcal O_{F_2(X)})$, which by Hodge symmetry yields $\bigwedge^2H^0(\Omega_{F_2(X)})\subset H^0(K_{F_2(X)})$. \end{theoreme} \begin{proof} As $\mathcal O_{F_2(X)}$ admits the Koszul resolution (\ref{ex_seq_koszul_resol}), to understand the cohomology groups $H^i(\mathcal O_{F_2(X)})$, we can use the spectral sequence $$E_1^{p,q}=H^q(G(3,V),\wedge^{-p}{\rm Sym}^3\mathcal E_3^*)\Rightarrow H^{p+q}(\mathcal O_{F_2(X)}).$$
\indent As a reminder, we borrow from \cite{jiang_noether_lefschetz} (see also \cite{spandaw}) the following elementary presentation of the Borel-Weil-Bott theorem for a $G(3,W)$ with ${\rm dim}(W)=d$.\\ \indent For any vector space $L$ of dimension $f$ and any decreasing sequence of integers $a=(a_1,\dots,a_f)$ there is an irreducible $GL(L)$-representation (Weyl module) denoted $\Gamma^{(a_1,\dots,a_f)}L$.\\ \indent To two decreasing sequences $a=(a_1,\dots,a_{d-e})$ and $b=(b_1,\dots,b_e)$ we can associate the sequence $$(\phi_1,\dots,\phi_d)=\phi(a,b):=(a_1-1,a_2-2,\dots,a_{d-e}-(d-e),b_1-(d-e+1),\dots,b_e-d).$$ \indent We measure how far $\phi(a,b)$ is from being decreasing by introducing $i(a,b):=\#\{\alpha<\beta,\ \phi_\alpha<\phi_\beta\}$.\\ \indent Finally, let us denote $\phi(a,b)^+=(\phi_1^+,\dots,\phi_d^+)$ a re-ordering of $\phi(a,b)$ to make it non-increasing and set $\psi(a,b):=(\phi_1^++1,\dots,\phi_d^++d)$.\\ \indent The Borel-Weil-Bott theorem states \begin{theoreme}\label{thm_borel_weil_bott} We have:\\ \indent (1) $H^q(G(3,W),\Gamma^a\mathcal Q_3^*\otimes\Gamma^b\mathcal E_3^*)=0$ for $q\neq i(a,b)$.\\ \indent (2) $H^{i(a,b)}(G(3,W),\Gamma^a\mathcal Q_3^*\otimes\Gamma^b\mathcal E_3^*)=\Gamma^{\psi(a,b)}W$.\\ where $\mathcal Q_3$ and $\mathcal E_3$ are defined by (\ref{ex_seq_def_taut_3}) and $\Gamma^{\psi(a,b)}W=0$ if $\psi(a,b)$ is not decreasing. \end{theoreme}
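\indent The combinatorial recipe above is easy to automate; the following small helper (ours, not taken from the cited references) returns the only possibly non-vanishing cohomological degree $i(a,b)$ together with the weight $\psi(a,b)$, or {\tt None} when two entries of $\phi(a,b)$ coincide (in which case all the cohomology vanishes):
\begin{verbatim}
def bott(a, b, d):
    # a, b: non-increasing integer sequences of lengths d-e and e
    e = len(b)
    phi = [a[k] - (k+1) for k in range(d-e)] \
        + [b[k] - (d-e+k+1) for k in range(e)]
    if len(set(phi)) < d:
        return None                       # repeated entry: all H^q vanish
    i = sum(1 for s in range(d) for t in range(s+1, d) if phi[s] < phi[t])
    phi_plus = sorted(phi, reverse=True)   # re-ordered non-increasingly
    return i, [phi_plus[k] + (k+1) for k in range(d)]

# e.g. a=(0,0,0,0), b=(5,1,0) on G(3,V), dim(V)=7 (item (2) of the list below):
print(bott((0,0,0,0), (5,1,0), 7))         # (4, [1, 1, 1, 1, 1, 1, 0])
\end{verbatim}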
Now, we want to apply it to compute the $E_1^{p,q}$ of the spectral sequence. Using Sage with the code
\begin{verbatim}
R = WeylCharacterRing("A2")
V = R(1,0,0)
for k in range(11):
    print(k, V.symmetric_power(3).exterior_power(k))
\end{verbatim}
we get the decompositions into irreducible modules of $\wedge^k{\rm Sym}^3\mathcal E_3^*$. Then by Borel-Weil-Bott we have:
$$\begin{tabular}{lll} $(0)$ &$\oplus_i^{12} H^i(\mathcal O_{G(3,V)})$ &$=\oplus_i H^i(\Gamma^{(0,\dots,0)}\mathcal Q_3^*\otimes\Gamma^{(0,0,0)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^0(\mathcal O_{G(3,V)})=\Gamma^{(0,\dots,0)}V\simeq \mathbb C$\\ $ $ &$ $ &$ $\\
$(1)$ &$\oplus_i^{12} H^i({\rm Sym}^3\mathcal E_3^*)$ &$=\oplus_i^{12} H^i(\Gamma^{(3,0,0)}\mathcal E_3^*)=0$\\ $ $ &$ $ &$ $\\
$(2)$ &$\oplus_iH^i(\wedge^2{\rm Sym}^3\mathcal E_3^*)$ &$=\oplus_i H^i(\Gamma^{(3,3,0)}\mathcal E_3^*\oplus \Gamma^{(5,1,0)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^4(\Gamma^{(5,1,0)}\mathcal E_3^*)=\Gamma^{(1,\dots,1,0)}V\simeq \wedge^6V$\\ $ $ &$ $ &$ $\\
$(3)$ &$\oplus_i H^i(\wedge^3{\rm Sym}^3\mathcal E_3^*)$ &$=\oplus_i H^i(\Gamma^{(3,3,3)}\mathcal E_3^*\oplus\Gamma^{(5,3,1)}\mathcal E_3^*\oplus \Gamma^{(6,3,0)}\mathcal E_3^*\oplus \Gamma^{(7,1,1)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^4(\Gamma^{(7,1,1)}\mathcal E_3^*)=\Gamma^{(3,1,\dots,1)}V\simeq {\rm Sym}^2V\otimes det(V)$\\ $ $ &$ $ &$ $\\
$(4)$ &$\oplus_i H^i(\wedge^4{\rm Sym}^3\mathcal E_3^*)$ &$=\oplus_iH^i(\Gamma^{(6,3,3)}\mathcal E_3^*\oplus \Gamma^{(6,4,2)}\mathcal E_3^*\oplus\Gamma^{(6,6,0)}\mathcal E_3^*\oplus \Gamma^{(7,4,1)}\mathcal E_3^*\oplus\Gamma^{(8,3,1)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^8(\Gamma^{(6,6,0)}\mathcal E_3^*)=\Gamma^{(2,\dots,2,0)}V$\\ $ $ &$ $ &$\simeq {\rm Sym}^2V^*\otimes det(V)^{\otimes 2}$\\ $ $ &$ $ &$ $\\
$(5)$ &$\oplus_iH^i(\wedge^5{\rm Sym}^3\mathcal E_3^*)$ &$\simeq\oplus_iH^i(\Gamma^{(6,6,3)}\mathcal E_3^*\oplus\Gamma^{(7,4,4)}\mathcal E_3^*\oplus\Gamma^{(7,6,2)}\mathcal E_3^*\oplus\Gamma^{(8,4,3)}\mathcal E_3^*\oplus\Gamma^{(8,6,1)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \ \ \ \oplus\Gamma^{(9,4,2)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^8(\Gamma^{(7,6,2)}\mathcal E_3^*\oplus\Gamma^{(8,6,1)}\mathcal E_3^*)$\\ $ $ &$ $ &$=\Gamma^{(3,2,\dots,2)}V\oplus \Gamma^{(4,2\dots,2,1)}V$\\ $ $ &$ $ &$\simeq ({\rm Sym}^2V\otimes V^*)\otimes det(V)^{\otimes 2}$\\ $ $ &$ $ &$ $\\ \end{tabular}$$ $$\begin{tabular}{lll} $(6)$ &$\oplus_iH^i(\wedge^6{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_i H^i(\Gamma^{(7,7,4)}\mathcal E_3^*\oplus\Gamma^{(8,6,4)}\mathcal E_3^*\oplus\Gamma^{(9,6,3)}\mathcal E_3^*\oplus\Gamma^{(9,7,2)}\mathcal E_3^*\oplus\Gamma^{(10,4,4)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^8(\Gamma^{(9,7,2)}\mathcal E_3^*)$\\ $ $ &$ $ &$\simeq \Gamma^{(5,3,2\dots,2)}V\simeq (\wedge^2{\rm Sym}^2V)\otimes det(V)^{\otimes 2}$\\ $ $ &$ $ &$ $\\
$(7)$ &$\oplus_iH^i(\wedge^7{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(7,7,7)}\mathcal E_3^*\oplus\Gamma^{(9,7,5)}\mathcal E_3^*\oplus\Gamma^{(9,9,3)}\mathcal E_3^*\oplus\Gamma^{(10,7,4)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^{12}(\Gamma^{(7,7,7)}\mathcal E_3^*)\simeq \Gamma^{(3,\dots,3)}V\simeq det(V)^{\otimes 3}$\\ $ $ &$ $ &$ $\\
$(8)$ &$\oplus_iH^i(\wedge^8{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(10,7,7)}\mathcal E_3^*\oplus \Gamma^{(10,9,5)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^{12}(\Gamma^{(10,7,7)}\mathcal E_3^*)=\Gamma^{(6,3,\dots,3)}V\simeq {\rm Sym}^3V\otimes det(V)^{\otimes 3}$\\ $ $ &$ $ &$ $\\
$(9)$ &$\oplus_iH^i(\wedge^9{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(10,10,7)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^{12}(\Gamma^{(10,10,7)}\mathcal E_3^*)\simeq \Gamma^{(6,6,3,\dots,3)}V$\\ $ $ &$ $ &$ $\\
$(10)$ &$\oplus_iH^i(\wedge^{10}{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(10,10,10)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^{12}(\Gamma^{(10,10,10)}\mathcal E_3^*)\simeq \Gamma^{(6,6,6,3\dots,3)}V\ \ \ \ \ \ \ \textit{}$ \end{tabular}$$
To understand $H^1(\mathcal O_{F_2(X)})$, we have to examine the $E_\infty^{-i,i+1}$'s, for $i=0,\dots,10$. As $E_1^{-i,i+1}=0$ for any $i\neq 3$, we get $E_\infty^{-i,i+1}=0$ for $i \neq 3$.\\ \indent On the other hand, for $r\geq 2$, $E_r^{-3,4}$ is defined as the (middle) cohomology of $$E_{r-1}^{-(2+r),2+r}\overset{d_{r-1}}{\rightarrow}E_{r-1}^{-3,4}\overset{d_{r-1}}{\rightarrow}E_{r-1}^{-4+r,6-r}.$$ From the above computations, we see that $E_1^{-i,i}=0$ for $i\geq 3$ so that $E_r^{-i,i}=0$ for any $i\geq 3$ and $r\geq 1$.\\ \indent So we get $E_2^{-3,4}=Ker(E_1^{-3,4}\overset{d_1}{\rightarrow}E_1^{-2,4})$.\\ \indent As $E_1^{-1,3}=0$, we have $E_2^{-1,3}=0$ so that $E_3^{-3,4}\simeq E_2^{-3,4}$.\\ \indent As $E_1^{0,2}=0$, we have $E_3^{0,2}=0$ so that $E_4^{-3,4}\simeq E_2^{-3,4}$.\\ \indent As $E_1^{a,b}=0$ for any $a>0$, we get $E_\infty^{-3,4}\simeq E_2^{-3,4}$ i.e. the following sequence is exact: $$0\rightarrow H^1(\mathcal O_{F_2(X)})\rightarrow E_1^{-3,4}\overset{d_1^{-3,4}}{\rightarrow}E_1^{-2,4}.$$
Now, $d_1^{-3,4}$ is given by contracting with the section defined by ${\rm eq}_X$ so that, choosing a basis $(e_0,\dots,e_6)$ of $V$, we have $$\begin{tabular}{llll} $d_1^{-3,4}:$ &${\rm Sym}^2V\otimes det(V)$ &$\rightarrow$ &$\wedge^6V\simeq V^*\otimes det(V)$\\ $ $ &$(e_i\cdot e_j)\otimes (e_0\wedge\cdots\wedge e_6)$ &$\mapsto$ &$\sum_k{\rm eq}_X(e_i,e_j,e_k)\widehat{e_k}={\rm eq}_X(e_i,e_j,\cdot)\otimes (e_0\wedge\cdots\wedge e_6)$ \end{tabular}$$
If this map is not surjective, its image is contained in a hyperplane of $V^*\otimes det(V)$, and we can choose the basis so that this hyperplane is $\{\lambda\in V^*,\ \lambda(e_0)=0\}\otimes det(V)$. Then we get ${\rm eq}_X(e_i,e_j,e_0)=0$ for any $i,j$, which means that the cubic hypersurface $X$ is a cone with vertex $[e_0]$.\\ \indent So for a smooth cubic, $d_1^{-3,4}$ is surjective, hence (\ref{ex_seq_descrip_H_1}).\\
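\indent Concretely, surjectivity of $d_1^{-3,4}$ amounts to the $28$ second partial derivatives $\partial^2{\rm eq}_X/\partial X_i\partial X_j$ spanning the space of linear forms, which fails exactly for cones. A minimal Sage check, with the Fermat cubic taken as a sample smooth cubic:
\begin{verbatim}
R = PolynomialRing(QQ, 'X', 7); X = R.gens()
f = sum(x^3 for x in X)                  # sample smooth cubic (Fermat)
rows = [f.derivative(X[i]).derivative(X[j])
        for i in range(7) for j in range(i, 7)]
M = matrix(QQ, [[r.monomial_coefficient(x) for x in X] for r in rows])
print(M.rank() == 7)                     # True: X is not a cone, d_1 is onto
\end{verbatim}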
\indent Before tackling the case of $H^2(\mathcal O_{F_2(X)})$, we notice that the exterior square of (\ref{ex_seq_descrip_H_1}) gives the following exact sequence: \begin{equation}\label{ex_seq_exterior_square_h_1}\begin{small}\begin{aligned} 0\rightarrow \wedge^2H^1(\mathcal O_{F_2(X)})\rightarrow (\wedge^2{\rm Sym}^2V)\otimes det(V)^{\otimes 2}\overset{\varphi_{{\rm eq}_X}\otimes {\rm id}_{{\rm Sym}^2V\otimes det(V)}}{\longrightarrow}{\rm Sym}^2V\otimes V^*\otimes det(V)^{\otimes 2}\\ \overset{\varphi_{{\rm eq}_X}\otimes {\rm id}_{V^*\otimes det(V)}}{\longrightarrow}{\rm Sym}^2V^*\otimes det(V)^{\otimes 2}\rightarrow 0 \end{aligned}\end{small} \end{equation}
To understand $H^2(\mathcal O_{F_2(X)})$, we have to examine the $E_\infty^{-i,i+2}$'s, for $i=0,\dots, 10$. As $E_1^{-i,i+2}=0$ for $i\neq 2,6,10$, we have $E_\infty^{-i,i+2}=0$ for $i\neq 2,6,10$.\\
\indent \textbf{Analysis of} $E_\infty^{-2,4}$: as $E_1^{-1,4}=0$, $E_2^{-2,4}$ is the cokernel of $d_1^{-3,4}$ which has just been proven to be surjective when $X$ is smooth. So $E_2^{-2,4}=0$; from which we get $E_\infty^{-2,4}=0$.\\
\indent \textbf{Analysis of} $E_\infty^{-6,8}$: the $E_r^{-6,8}$ are the middle cohomology of $$E_{r-1}^{-(5+r),6+r}\overset{d_{r-1}}{\rightarrow}E_{r-1}^{-6,8}\overset{d_{r-1}}{\rightarrow}E_{r-1}^{-7+r,10-r}.$$ From the above computations of the cohomology groups, we see that $E_1^{-(5+r),6+r}=0$ for any $r\geq 2$ so $E_{r-1}^{-(5+r),6+r}=0$ for any $r\geq 2$.\\ \indent So $E_2^{-6,8}={\rm Ker}(E_1^{-6,8}\overset{d_1^{-6,8}}{\rightarrow}E_1^{-5,8})$.\\ \indent We see that $E_1^{-7+r,10-r}=0$ for any $r\geq 3$, so that $E_{r-1}^{-7+r,10-r}=0$ for any $r\geq 3$. As a result we get $E_\infty^{-6,8}=E_2^{-6,8}$.\\
\indent From (\ref{ex_seq_exterior_square_h_1}), we get that $Coker(E_1^{-6,8}\overset{d_1^{-6,8}}{\rightarrow}E_1^{-5,8})\simeq {\rm Sym}^2V^*\otimes det(V)^{\otimes 2}$ and $E_\infty^{-6,8}={\rm Ker}(E_1^{-6,8}\overset{d_1^{-6,8}}{\rightarrow}E_1^{-5,8})\simeq \wedge^2H^1(\mathcal O_{F_2(X)})$.\\
\indent Now, the spectral sequence computes the graded pieces of a filtration $$0=F^1\subset F^0\subset \cdots\subset F^{-10}\subset F^{-11}=H^2(\mathcal O_{F_2(X)})$$ and we have seen ($E_\infty^{-2,4}=0$) that all the graded pieces are trivial but $Gr_{-6}^F\simeq E_{\infty}^{-6,8}$ and (a priori) $Gr_{-10}^F\simeq E_\infty^{-10,12}$. As a result, we get $\wedge^2H^1(\mathcal O_{F_2(X)})\simeq E_\infty^{-6,8}=F^{-6}=\cdots=F^{-9}\subset F^{-10}\subset H^2(\mathcal O_{F_2(X)})$, proving the inclusion. \end{proof}
Moreover, we have the following proposition
\begin{proposition}\label{prop_descrip_h_0_omega} We have $H^0(\mathcal Q_{3|F_2(X)}^*)\simeq H^0(\mathcal Q_3^*)\simeq V$ and $H^0({\rm Sym}^2\mathcal E_{3|F_2(X)})\simeq H^0({\rm Sym}^2\mathcal E_3)\simeq {\rm Sym}^2V^*$ and the following sequence is exact
\begin{equation}\label{ex_seq_H_0_tgt_bundle_ex_seq}0\rightarrow H^0(\mathcal Q_{3|F_2(X)}^*)\rightarrow H^0({\rm Sym}^2\mathcal E_{3|F_2(X)})\rightarrow H^0(\Omega_{F_2(X)})\rightarrow 0 \end{equation} where the first map is given by $v\mapsto {\rm eq}_X(v,\cdot,\cdot)$. \end{proposition}
\begin{proof} To understand $H^0(\mathcal Q_{3|F_2(X)}^*)$, we use again the Koszul resolution (\ref{ex_seq_koszul_resol}) tensored by $\mathcal Q_3^*$. We have the spectral sequence $$E_1^{p,q}=H^q(G(3,V),\mathcal Q_3^*\otimes \wedge^{-p}{\rm Sym}^3\mathcal E_3^*)\Rightarrow H^{p+q}(Q_{3|F_2(X)}^*).$$ We use again the Borel-Weil-Bott theorem \ref{thm_borel_weil_bott} to compute the cohomology groups on $G(3,V)$. The decompositions of the $\wedge^i{\rm Sym}\mathcal E_3^*$'s into irreducible modules have already been obtained in Theorem \ref{thm_descrip_h_1_and_wedge}. So we get: $$\begin{tabular}{lll} $(0)$ &$\oplus_iH^i(\mathcal Q_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(1,0,0,0)}\mathcal Q_3^*)$\\ $ $ &$ $ &$=H^0(\Gamma^{(1,0,0,0)}\mathcal Q_3^*)=V$\\ $ $ &$ $ &$ $\\
$(1)$ &$\oplus_iH^i(\mathcal Q_3^*\otimes{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes\Gamma^{(3,0,0)}\mathcal E_3^*)=0$\\ $ $ &$ $ &$ $\\
$(2)$ &$\oplus_iH^i(\mathcal Q_3^*\otimes\wedge^2{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes(\Gamma^{(3,3,0)}\mathcal E_3^*\oplus\Gamma^{(5,1,0)}\mathcal E_3^*))=0$\\ $ $ &$ $ &$ $\\
$(3)$ &$\oplus_iH^i(\mathcal Q_3^*\otimes\wedge^3{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes(\Gamma^{(3,3,3)}\mathcal E_3^*\oplus\Gamma^{(5,3,1)}\mathcal E_3^*\oplus \Gamma^{(6,3,0)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \oplus\Gamma^{(7,1,1)}\mathcal E_3^*))$\\ $ $ &$ $ &$=H^4(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes\Gamma^{(7,1,1)}\mathcal E_3^*)\simeq \Gamma^{(3,2,1,\dots,1)}V$\\ $ $ &$ $ &$ $\\
$(4)$ &$\oplus_iH^i(\mathcal Q_3^*\otimes\wedge^4{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes(\Gamma^{(6,3,3)}\mathcal E_3^*\oplus\Gamma^{(6,4,2)}\mathcal E_3^*\oplus \Gamma^{(6,6,0)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \ \oplus\Gamma^{(7,4,1)}\mathcal E_3^*\oplus\Gamma^{(8,3,1)}\mathcal E_3^*))$\\ $ $ &$ $ &$=0 $\\ $ $ &$ $ &$ $\\
$(5)$ &$\oplus_iH^i(\mathcal Q_3^*\otimes\wedge^5{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes(\Gamma^{(6,6,3)}\mathcal E_3^*\oplus\Gamma^{(7,4,4)}\mathcal E_3^*\oplus \Gamma^{(7,6,2)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \ \ \oplus\Gamma^{(8,4,3)}\mathcal E_3^*\oplus\Gamma^{(8,6,1)}\mathcal E_3^*\oplus \Gamma^{(9,4,2)}\mathcal E_3^*))$\\ $ $ &$ $ &$=0 $\\ $ $ &$ $ &$ $\\ \end{tabular}$$ $$\begin{tabular}{lll} $(6)$ &$\oplus_iH^i(\mathcal Q_3^*\otimes\wedge^6{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes(\Gamma^{(7,7,4)}\mathcal E_3^*\oplus\Gamma^{(8,6,4)}\mathcal E_3^*\oplus \Gamma^{(9,6,3)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \ \ \oplus\Gamma^{(9,7,2)}\mathcal E_3^*\oplus\Gamma^{(10,4,4)}\mathcal E_3^*))$\\ $ $ &$ $ &$=H^8(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes \Gamma^{(9,7,2)}\mathcal E_3^*)$\\ $ $ &$ $ &$\simeq\Gamma^{(5,3,3,2,\dots,2)}V$\\ $ $ &$ $ &$ $\\
$(7)$ &$\oplus_iH^i(\mathcal Q_3^*\otimes\wedge^7{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes(\Gamma^{(7,7,7)}\mathcal E_3^*\oplus\Gamma^{(9,7,5)}\mathcal E_3^*\oplus \Gamma^{(9,9,3)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \ \ \oplus\Gamma^{(10,7,4)}\mathcal E_3^*))$\\ $ $ &$ $ &$=0$\\ $ $ &$ $ &$ $\\
$(8)$ &$\oplus_iH^i(\mathcal Q_3^*\otimes\wedge^8{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes(\Gamma^{(10,7,7)}\mathcal E_3^*\oplus\Gamma^{(10,9,5)}\mathcal E_3^*))$\\ $ $ &$ $ &$=0$\\ $ $ &$ $ &$ $\\
$(9)$ &$\oplus_iH^i(\mathcal Q_3^*\otimes\wedge^9{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes\Gamma^{(10,10,7)}\mathcal E_3^*)=0$\\ $ $ &$ $ &$ $\\
\end{tabular}$$ $$\begin{tabular}{lll} $(10)$ &$\oplus_i H^i(\mathcal Q_3^*\otimes\wedge^{10}{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes\Gamma^{(10,10,10)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^{12}(\Gamma^{(1,0,0,0)}\mathcal Q_3^*\otimes\Gamma^{(10,10,10)}\mathcal E_3^*)\simeq\Gamma^{(6,6,6,4,3,3,3)}V$ \end{tabular}$$
The graded pieces of the filtration on $H^0(\mathcal Q_{3|F_2(X)}^*)$ are given by $E_\infty^{-i,i}$, $i=0,\dots,10$. From the above calculations, we see that $E_1^{-i,i}=0$ for any $i\geq 1$, thus $E_\infty^{-i,i}=0$ for any $i\geq 1$.\\
\indent On the other hand $E_1^{0,0}=H^0(\mathcal Q_3^*)=V$ and as $E_r^{a,b}=0$ for any $a>0$, we have $E_r^{0,0}={\rm Coker}(E_{r-1}^{-(r-1),r-2}\overset{d_{r-1}}{\rightarrow}E_{r-1}^{0,0})$ for any $r\geq 2$. But the above calculations give $E_1^{-r,r-1}=0$ for $r\geq 1$ so that $E_r^{-r,r-1}=0$ for any $r\geq 1$. Thus $E_\infty^{0,0}=E_1^{0,0}$, proving that $H^0(\mathcal Q_{3|F_2(X)}^*)\simeq H^0(\mathcal Q_3^*)\simeq V$.\\
\indent Now, let us examine $H^0({\rm Sym}^2\mathcal E_{3|F_2(X)})$ by the spectral sequence
$$E_1^{p,q}=H^q({\rm Sym}^2\mathcal E_3\otimes\wedge^{-p}{\rm Sym}^3\mathcal E_3^*)\Rightarrow H^{p+q}({\rm Sym}^2\mathcal E_{3|F_2(X)}).$$ \indent Using Sage with the code
\begin{verbatim}
R = WeylCharacterRing("A2")
V = R(1,0,0)
W = R(0,0,-1)
for k in range(11):
    print(k, W.symmetric_power(2)*V.symmetric_power(3).exterior_power(k))
\end{verbatim}
and the Borel-Weil-Bott theorem \ref{thm_borel_weil_bott}, we get: $$\begin{tabular}{lll} $(0)$ &$\oplus_i H^i({\rm Sym}^2\mathcal E_3)$ &$\simeq \oplus_iH^i(\Gamma^{(0,0,-2)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^0(\Gamma^{(0,0,-2)}\mathcal E_3^*)\simeq \Gamma^{(0,\dots,0,-2)}V\simeq {\rm Sym}^2V^*$\\ $ $ &$ $ &$ $\\
$(1)$ &$\oplus_iH^i({\rm Sym}^2\mathcal E_3\otimes{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(1,0,0)}\mathcal E_3^*\oplus \Gamma^{(2,0,-1)}\mathcal E_3^*\oplus \Gamma^{(3,0,-2)}\mathcal E_3^*)=0$\\ $ $ &$ $ &$ $\\
$(2)$ &$\oplus_iH^i({\rm Sym}^2\mathcal E_3\otimes\wedge^2{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_i H^i((\Gamma^{(3,1,0)}\mathcal E_3^*)^{\oplus 2}\oplus \Gamma^{(3,2,-1)}\mathcal E_3^*\oplus \Gamma^{(3,3,-2)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \oplus\Gamma^{(4,0,0)}\mathcal E_3^*\oplus \Gamma^{(4,1,-1)}\mathcal E_3^*\oplus \Gamma^{(5,1,-2)}\mathcal E_3^*\oplus \Gamma^{(5,0,-1)}\mathcal E_3^*))$\\ $ $ &$ $ &$=H^4(\Gamma^{(5,1,-2)}\mathcal E_3^*\oplus \Gamma^{(5,0,-1)}\mathcal E_3^*)$\\ $ $ &$ $ &$\simeq \Gamma^{(1,\dots,1,-2)}V\oplus \Gamma^{(1,\dots,1,0,-1)}V$\\ $ $ &$ $ &$ $\\
$(3)$ &$\oplus_i H^i({\rm Sym}^2\mathcal E_3\otimes\wedge^3{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(3,3,1)}\mathcal E_3^*)^{\oplus 2}\oplus \Gamma^{(4,2,1)}\mathcal E_3^*\oplus (\Gamma^{(4,3,0)}\mathcal E_3^*)^{\oplus 2}$\\ $ $ &$ $ &$\ \ \ \ \oplus \Gamma^{(5,1,1)}\mathcal E_3^*)^{\oplus 2}\oplus (\Gamma^{(5,2,0)}\mathcal E_3^*)^{\oplus 2}\oplus (\Gamma^{(5,3,-1)}\mathcal E_3^*)^{\oplus 2}$\\ $ $ &$ $ &$\ \ \ \ \oplus \Gamma^{(6,1,0)}\mathcal E_3^*)^{\oplus 2}\oplus \Gamma^{(6,2,-1)}\mathcal E_3^*\oplus \Gamma^{(6,3,-2)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \oplus \Gamma^{(7,1,-1)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^4((\Gamma^{(5,1,1)}\mathcal E_3^*)^{\oplus 2}\oplus(\Gamma^{(6,1,0)}\mathcal E_3^*)^{\oplus 2}\oplus \Gamma^{(7,1,-1)}\mathcal E_3^*)$\\ $ $ &$ $ &$\simeq det(V)^{\oplus 2}\oplus (\Gamma^{(2,1,\dots,1,0)}V)^{\oplus 2}\oplus \Gamma^{(3,1,\dots,1,-1)}V$\\ $ $ &$ $ &$ $\\
\end{tabular}$$ $$\begin{tabular}{lll} $(4)$ &$\oplus_i H^i({\rm Sym}^2\mathcal E_3\otimes\wedge^4{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_iH^i(\Gamma^{(4,3,3)}\mathcal E_3^*\oplus \Gamma^{(4,4,2)}\mathcal E_3^*\oplus (\Gamma^{(5,3,2)}\mathcal E_3^*)^{\oplus 2}$\\ $ $ &$ $ &$\ \ \ \ \oplus (\Gamma^{(5,4,1)}\mathcal E_3^*)^{\oplus 2}\oplus \Gamma^{(6,2,2)}\mathcal E_3^*\oplus (\Gamma^{(6,3,1)}\mathcal E_3^*)^{\oplus 4}$\\ $ $ &$ $ &$\ \ \ \ \oplus (\Gamma^{(6,4,0)}\mathcal E_3^*)^{\oplus 3}\oplus \Gamma^{(6,5,-1)}\mathcal E_3^*\oplus \Gamma^{(6,6,-2)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \oplus (\Gamma^{(7,2,1)}\mathcal E_3^*)^{\oplus 2}\oplus (\Gamma^{(7,3,0)}\mathcal E_3^*)^{\oplus 2}\oplus \Gamma^{(7,4,-1)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \oplus \Gamma^{(8,1,1)}\mathcal E_3^*\oplus \Gamma^{(8,2,0)}\mathcal E_3^*\oplus \Gamma^{(8,3,-1)}\mathcal E_3^*)$\\ $ $ &$ $ &$=\underbrace{H^4(\Gamma^{(8,1,1)}\mathcal E_3^*)}_{\simeq {\rm Sym}^3V\otimes \det(V)}\oplus \underbrace{H^8(\Gamma^{(6,6,-2)}\mathcal E_3^*)}_{\simeq \Gamma^{(2,\dots,2,-2)}V}$\\ $ $ &$ $ &$ $\\
$(5)$ &$\oplus_i H^i({\rm Sym}^2\mathcal E_3\otimes\wedge^5{\rm Sym}^3\mathcal E_3^*)$ &$\simeq\oplus_iH^i(\Gamma^{(5,4,4)}\mathcal E_3^*\oplus (\Gamma^{(6,4,3)}\mathcal E_3^*)^{\oplus 3}\oplus (\Gamma^{(6,5,2)}\mathcal E_3^*)^{\oplus 2}$\\ $ $ &$ $ &$\ \ \ \ \oplus (\Gamma^{(6,6,1)}\mathcal E_3^*)^{\oplus 3}\oplus \Gamma^{(7,3,3)}\mathcal E_3^*\oplus (\Gamma^{(7,4,2)}\mathcal E_3^*)^{\oplus 4}$\\ $ $ &$ $ &$\ \ \ \ \oplus (\Gamma^{(7,5,1)}\mathcal E_3^*)^{\oplus 2}\oplus (\Gamma^{(7,6,0)}\mathcal E_3^*)^{\oplus 2}\oplus (\Gamma^{(8,3,2)}\mathcal E_3^*)^{\oplus 2}$\\ $ $ &$ $ &$\ \ \ \ \oplus (\Gamma^{(8,4,1)}\mathcal E_3^*)^{\oplus 3}\oplus \Gamma^{(8,5,0)}\mathcal E_3^*\oplus \Gamma^{(8,6,-1)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \oplus \Gamma^{(9,2,2)}\mathcal E_3^*\oplus \Gamma^{(9,3,1)}\mathcal E_3^*\oplus \Gamma^{(9,4,0)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^8((\Gamma^{(6,6,1)}\mathcal E_3^*)^{\oplus 3}\oplus (\Gamma^{(7,6,0)}\mathcal E_3^*)^{\oplus 2}\oplus \Gamma^{(8,6,-1)}\mathcal E_3^*)$\\ $ $ &$ $ &$\simeq (\Gamma^{(2,\dots,2,1)}V)^{\oplus 3}\oplus (\Gamma^{(3,2,\dots,2,0)}V)^{\oplus 2}\oplus \Gamma^{(4,2,\dots,2,-1)}V$\\ $ $ &$ $ &$ $ \\
$(6)$ &$\oplus_i H^i({\rm Sym}^2\mathcal E_3\otimes\wedge^6{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_i H^i(\Gamma^{(6,6,4)}\mathcal E_3^*\oplus (\Gamma^{(7,5,4)}\mathcal E_3^*)^{\oplus 2}\oplus (\Gamma^{(7,6,3)}\mathcal E_3^*)^{\oplus 3}$\\ $ $ &$ $ &$\ \ \ \ \oplus (\Gamma^{(7,7,2)}\mathcal E_3^*)^{\oplus 2}\oplus (\Gamma^{(8,4,4)}\mathcal E_3^*)^{\oplus 2}\oplus (\Gamma^{(8,5,3)}\mathcal E_3^*)^{\oplus 2}$\\ $ $ &$ $ &$\ \ \ \ \oplus (\Gamma^{(8,6,2)}\mathcal E_3^*)^{\oplus 3}\oplus \Gamma^{(8,7,1)}\mathcal E_3^*\oplus (\Gamma^{(9,4,3)}\mathcal E_3^*)^{\oplus 2}$\\ $ $ &$ $ &$\ \ \ \ \oplus (\Gamma^{(9,5,2)}\mathcal E_3^*)^{\oplus 2}\oplus (\Gamma^{(9,6,1)}\mathcal E_3^*)^{\oplus 2}\oplus \Gamma^{(9,7,0)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \oplus \Gamma^{(10,4,2)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^8((\Gamma^{(7,7,2)}\mathcal E_3^*)^{\oplus 2}\oplus (\Gamma^{(8,6,2)}\mathcal E_3^*)^{\oplus 3}\oplus \Gamma^{(8,7,1)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \oplus (\Gamma^{(9,6,1)}\mathcal E_3^*)^{\oplus 2}\oplus \Gamma^{(9,7,0)}\mathcal E_3^*)$\\ $ $ &$ $ &$\simeq (\Gamma^{(3,3,2,\dots, 2)}V)^{\oplus 2}\oplus (\Gamma^{(4,2,\dots,2)}V)^{\oplus 3}\oplus \Gamma^{(4,3,2,\dots,2,1)}V$\\ $ $ &$ $ &$\ \ \ \oplus (\Gamma^{(5,2,\dots,2,1)}V)^{\oplus 2}\oplus \Gamma^{(5,3,2\dots,2,0)}V$\\ $ $ &$ $ &$ $\\
$(7)$ &$\oplus_i H^i({\rm Sym}^2\mathcal E_3\otimes\wedge^7{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_i H^i((\Gamma^{(7,7,5)}\mathcal E_3^*)^{\oplus 2}\oplus \Gamma^{(8,6,5)}\mathcal E_3^*\oplus (\Gamma^{(8,7,4)}\mathcal E_3^*)^{\oplus 2}$\\ $ $ &$ $ &$\ \ \ \ \oplus \Gamma^{(9,5,5)}\mathcal E_3^*\oplus (\Gamma^{(9,6,4)}\mathcal E_3^*)^{\oplus 2}\oplus (\Gamma^{(9,7,3)}\mathcal E_3^*)^{\oplus 3}$\\ $ $ &$ $ &$\ \ \ \ \oplus \Gamma^{(9,8,2)}\mathcal E_3^*\oplus \Gamma^{(9,9,1)}\mathcal E_3^*\oplus \Gamma^{(10,5,4)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \oplus \Gamma^{(10,6,3)}\mathcal E_3^*\oplus \Gamma^{(10,7,2)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^8(\Gamma^{(9,8,2)}\mathcal E_3^*\oplus \Gamma^{(9,9,1)}\mathcal E_3^*\oplus \Gamma^{(10,7,2)}\mathcal E_3^*)$\\ $ $ &$ $ &$\simeq \Gamma^{(5,4,2\dots,2)}V\oplus \Gamma^{(5,5,2,\dots,2,1)}V\oplus \Gamma^{(6,3,2,\dots,2)}V$\\ $ $ &$ $ &$ $\\
$(8)$ &$\oplus_i H^i({\rm Sym}^2\mathcal E_3\otimes\wedge^8{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_i H^i(\Gamma^{(8,7,7)}\mathcal E_3^*\oplus \Gamma^{(9,7,6)}\mathcal E_3^*\oplus \Gamma^{(9,8,5)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \oplus \Gamma^{(9,9,4)}\mathcal E_3^*\oplus (\Gamma^{(10,7,5)}\mathcal E_3^*)^{\oplus 2}\oplus \Gamma^{(10,8,4)}\mathcal E_3^*$\\ $ $ &$ $ &$\ \ \ \ \oplus \Gamma^{(10,9,3)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^{12}(\Gamma^{(8,7,7)}\mathcal E_3^*)\simeq \Gamma^{(4,3,\dots,3)}V$\\ $ $ &$ $ &$ $\\
$(9)$ &$\oplus_i H^i({\rm Sym}^2\mathcal E_3\otimes\wedge^9{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_i H^i(\Gamma^{(10,8,7)}\mathcal E_3^*\oplus \Gamma^{(10,9,6)}\mathcal E_3^*\oplus \Gamma^{(10,10,5)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^{12}(\Gamma^{(10,8,7)}\mathcal E_3^*)\simeq \Gamma^{(6,4,3,\dots,3)}V$\\ $ $ &$ $ &$ $\\
$(10)$ &$\oplus_i H^i({\rm Sym}^2\mathcal E_3\otimes\wedge^{10}{\rm Sym}^3\mathcal E_3^*)$ &$\simeq \oplus_i H^i(\Gamma^{(10,10,8)}\mathcal E_3^*)$\\ $ $ &$ $ &$=H^{12}(\Gamma^{(10,10,8)}\mathcal E_3^*)\simeq \Gamma^{(6,6,4,3,\dots,3)}V$ \end{tabular}$$
\indent The graded pieces of the filtration on $H^0({\rm Sym}^2\mathcal E_{3|F_2(X)})$ are given by the $E_\infty^{-i,i}$. We have $E_\infty^{-i,i}=0$ for any $i\neq 0,4$ since $E_1^{-i,i}=0$, for $i\neq 0,4$.\\ \indent As $E_r^{a,b}=0$ for any $a>0$ and $E_r^{-r,r-1}=0$ (because $E_1^{-r,r-1}=0$) for any $r\geq 1$, we have $E_\infty^{0,0}=E_1^{0,0}$.\\
\indent In particular $H^0({\rm Sym}^2\mathcal E_3)\simeq E_\infty^{0,0}\subset H^0({\rm Sym}^2\mathcal E_{3|F_2(X)})$. As $h^0({\rm Sym}^2\mathcal E_3)=dim({\rm Sym}^2V^*)=28$ we have $h^0({\rm Sym}^2\mathcal E_{3|F_2(X)})\geq 28$. By Hodge symmetry $h^0(\Omega_{F_2(X)})=h^1(\mathcal O_{F_2(X)})=21$ (Theorem \ref{thm_descrip_h_1_and_wedge}). So the exactness of the sequence $$\begin{small}0\rightarrow H^0(\mathcal Q_{3|F_2(X)}^*)\rightarrow H^0({\rm Sym}^2\mathcal E_{3|F_2(X)})\rightarrow H^0(\Omega_{F_2(X)})\end{small}$$ implies $H^0({\rm Sym}^2\mathcal E_3)=H^0({\rm Sym}^2\mathcal E_{3|F_2(X)})$ and the surjectivity of the last map. \end{proof}
According to Theorem \ref{thm_descrip_h_1_and_wedge}, $\bigwedge^2H^0(\Omega_{F_2(X)})\subset H^0(K_{F_2(X)})$. As $K_{F_2(X)}\simeq \mathcal O_{G(3,V)}(3)_{|F_2(X)}$, the map $\rho:F_2(X)\dashrightarrow |\bigwedge^2H^0(\Omega_{F_2(X)})|$ is the composition of the degree $3$ Veronese of the natural embedding $F_2(X)\subset G(3,V)$ followed by a linear projection. Moreover, we have the following
\begin{lemme}\label{lem_bpf_wedge_kernel_alb} (1) The canonical bundle $K_{F_2(X)}$ is generated by the sections in $\bigwedge^2H^0(\Omega_{F_2(X)})\subset H^0(K_{F_2(X)})$. In particular, $|\bigwedge^2H^0(\Omega_{F_2(X)})|$ is base-point-free.\\
\indent (2) For any $[P]\in F_2(X)$, the following sequence is exact: $$0\rightarrow \mathcal K_{[P]}\rightarrow H^0(\Omega_{F_2(X)})\overset{ev([P])}{\rightarrow} \Omega_{F_2(X),[P]}\rightarrow 0$$ where $\mathcal K_{[P]}=\{Q\in H^0(\mathcal O_{\mathbb P^6}(2)),\ P\subset \{Q=0\}\}/Span( ({\rm eq}_X(x,\cdot,\cdot))_{x\in \langle P\rangle})$ \end{lemme}
\begin{proof} As $\mathcal E_{3|F_2(X)}$ is globally generated (as a restriction of $\mathcal E_3$, which is globally generated (\ref{ex_seq_def_taut_3})), ${\rm Sym}^2\mathcal E_{3|F_2(X)}$ is also globally generated. The same holds for $\mathcal Q_{3|F_2(X)}^*$ (by (\ref{ex_seq_def_taut_3})). So applying the evaluation to (\ref{ex_seq_H_0_tgt_bundle_ex_seq}), we get the commutative diagram:
$$\xymatrix@-1pc{0\ar[r] &H^0(\mathcal Q_{3|F_2(X)}^*)\otimes \mathcal O_{F_2(X)}\ar[r]\ar[d]^{ev_1} &H^0({\rm Sym}^2\mathcal E_{3|F_2(X)})\otimes \mathcal O_{F_2(X)} \ar[r]\ar[d]^{ev_2} &H^0(\Omega_{F_2(X)})\otimes \mathcal O_{F_2(X)} \ar[r]\ar[d]^{ev_3} &0\\
0\ar[r] &\mathcal Q_{3|F_2(X)}^*\ar[r] &{\rm Sym}^2\mathcal E_{3|F_2(X)}\ar[r] &\Omega_{F_2(X)}\ar[r] &0}$$ where the bottom row is (\ref{ex_seq_tgt_bundle_seq}). As $ev_2$ is surjective, we get that $ev_3$ is also surjective i.e. $\Omega_{F_2(X)}$ is globally generated. Then taking the exterior square of $ev_3$, we get that $\wedge^2ev_3$ is surjective: $$\bigwedge^2H^0(\Omega_{F_2(X)})\otimes \mathcal O_{F_2(X)}\overset{\wedge^2 ev_3}{\twoheadrightarrow}\wedge^2\Omega_{F_2(X)}.$$
\indent Now a base point of $|\bigwedge^2H^0(\Omega_{F_2(X)})|$ would be a point where $\wedge^2 ev_3$ fails to be surjective. So $|\bigwedge^2H^0(\Omega_{F_2(X)})|$ is base point free.\\
\indent (2) As $H^0(\mathcal Q_{3|F_2(X)}^*)\simeq H^0(\mathcal Q_3^*)\simeq V$ by Proposition \ref{prop_descrip_h_0_omega}, (\ref{ex_seq_def_taut_3}) yields ${\rm ker}(ev_1)\simeq \mathcal E_{3|F_2(X)}^*$ so the snake lemma gives the exact sequence. \end{proof}
Now, let us come back to the Gauss map of $F_2(X)$, that we have defined to be: $$\begin{tabular}{llll} $\mathcal G:$ &$alb_{F_2}(F_2(X))$ &$\dashrightarrow$ &$G(2, T_{Alb(F_2(X)),0})$\\ $ $ &$t$ &$\mapsto$ &$T_{alb_{F_2}(F_2(X))-t,0}$ \end{tabular}$$ where $alb_{F_2}(F_2(X))-t$ is the translation of $alb_{F_2}(F_2(X))\subset {\rm Alb}(F_2(X))$ by $-t\in {\rm Alb}(F_2(X))$. It is defined on the smooth locus of $alb_{F_2}(F_2(X))$.\\ \indent According to \cite[Section (III)]{Coll_cub} $T alb_{F_2}$ is injective. So the indeterminacies of $\mathcal G$ are resolved by the pre-composition with $alb_{F_2}$ i.e. $$\begin{tabular}{llll} $F_2(X)$ &$\rightarrow$ &$G(2,T_{Alb(F_2(X)),0})$\\ $t$ &$\mapsto$ &$T_{-alb_{F_2}(t)}{\rm Translate}(-alb_{F_2}(t))(T_t alb_{F_2}(T_{F_2(X),t})).$ \end{tabular}$$
\indent We have the Pl\"ucker embedding $$G(2,T_{Alb(F_2(X)),0})\simeq G(2,H^0(\Omega_{F_2(X)})^*)\subset \mathbb P(\bigwedge^2H^0(\Omega_{F_2(X)})^*)$$ and the commutative diagram: $$\xymatrix{F_2(X)\ar[r]^{alb_{F_2}}\ar[dd]^{\rho} &alb_{F_2}(F_2(X))\ar@{.>}[d]^{\mathcal G}\\
&G(2,H^0(\Omega_{F_2(X)})^*)\ar@{^{(}->}[d]\\
|\bigwedge^2H^0(\Omega_{F_2(X)})|\ar[r]^*[@]{\cong} &\mathbb P(\bigwedge^2H^0(\Omega_{F_2(X)})^*)}$$
\indent The following proposition completes the proof of Theorem \ref{thm_gauss_map}
\begin{proposition}\label{prop_rho_embedding} The morphism $\rho$ is an embedding, which implies that $alb_{F_2}$ is an isomorphism onto its image and $\mathcal G$ is an embedding. \end{proposition}
\begin{proof} Let us denote $J_X$ the Jacobian ideal of $X$ i.e. the ideal of the polynomial ring generated by $(\frac{\partial {\rm eq}_X}{\partial X_i})_{i=0,\dots,6}$ and $J_{X,2}$ its homogeneous part of degree $2$. By Proposition \ref{prop_result_Collino_smoothness}, for any $[P]\in F_2(X)$, $dim(J_{X,2|P})=4$ so that $dim(J_X\cap \{Q\in H^0(\mathcal O_{\mathbb P^6}(2)),\ P\subset \{Q=0\}\})=3$. We have the following: \begin{lemme}\label{lem_jacobian_ideal} (1) For $[P]\in G(3,V)$, the codimension of $L_P^2:=\{Q\in H^0(\mathcal O_{\mathbb P^6}(2)),\ P\subset \{Q=0\}\}$ in $H^0(\mathcal O_{\mathbb P^6}(2))$ is $6$. For $[P]\neq [P']\in G(3,V)$, the codimension of $L_{P,P'}^2:=\{Q\in H^0(\mathcal O_{\mathbb P^6}(2)),\ P,P'\subset \{Q=0\}\}$ inside $L_P^2$ is respectively:\\ \indent \indent (i) $6$ if $P\cap P'=\emptyset$;\\ \indent \indent (ii) $5$ if $P\cap P'=\{pt\}$;\\ \indent \indent (iii) $3$ if $P\cap P'=\{{\rm line}\}$.\\
\indent (2) For $[P]\neq [P']\in F_2(X)$ such that $P\cap P'=\{{\rm line}\}$, $dim(J_X\cap L_{P,P'}^2)\geq 1$ and if $X$ is general, we even have $dim(J_X\cap L_{P,P'}^2)\geq 2$. So that $(J_X\cap L_P^2)+ L_{P,P'}^2\subsetneq L_P^2$ and, for $X$ general, $dim(L_P^2/((J_X\cap L_P^2)+ L_{P,P'}^2))\geq 2$. \end{lemme} \begin{proof} (1) It is a direct calculation.\\ \indent (2) Up to projective transformation, we can assume $P=\{X_0=\cdots=X_3=0\}$, $P'=\{X_0=X_1=X_2=X_4=0\}$. Then ${\rm eq}_X$ is of the form (\ref{normal_form_0}) with the additional conditions: $Q_3(0,X_5,X_6)=0$, $D_5(0,0,0,X_3)=0$, $D_6(0,0,0,X_3)=0$, $R(0,0,0,X_3)=0$.\\
\indent By definition, the quadrics of the Jacobian ideal are the $\frac{\partial {\rm eq}_X}{\partial X_i}$'s and according to Proposition \ref{prop_result_Collino_smoothness}, $(\frac{\partial {\rm eq}_X}{\partial X_i}_{|P})_{i=0,\dots,3}$ are linearly independent so that $J_X\cap L_P^2=Span((\frac{\partial {\rm eq}_X}{\partial X_i})_{i=4,5,6})$. For $i\in\{4,5,6\}$, $$\begin{small}\frac{\partial {\rm eq}_X}{\partial X_i}=X_0\frac{\partial Q_0}{\partial X_i}+X_1\frac{\partial Q_1}{\partial X_i}+X_2\frac{\partial Q_2}{\partial X_i}+X_3\frac{\partial Q_3}{\partial X_i}+D_i\end{small}$$ which, when restricted to $P'$ gives $\begin{small}\frac{\partial {\rm eq}_X}{\partial X_i}_{|P'}=X_3\frac{\partial Q_3}{\partial X_i}(0,X_5,X_6)+D_i(0,0,0,X_3)\end{small}$. But since $Q_3(0,X_5,X_6)=0$, we have $\frac{\partial Q_3}{\partial X_i}(0,X_5,X_6)=0$ for $i=5,6$. So that $\frac{\partial {\rm eq}_X}{\partial X_5}_{|P'}=0=\frac{\partial {\rm eq}_X}{\partial X_6}_{|P'}$ i.e. $\frac{\partial {\rm eq}_X}{\partial X_5}$, $\frac{\partial {\rm eq}_X}{\partial X_6}\in L_{P,P'}^2\cap J_X$. For $X$ general, those two quadratic polynomials are independent.\\ \indent We have ${\rm dim}(J_X\cap L^2_P + L^2_{P,P'})={\rm dim}(J_X\cap L_P^2)+{\rm dim}(L^2_{P,P'})-{\rm dim}(J_X\cap L_{P,P'}^2)$ which, by the first item of the lemma, yields the result. \end{proof}
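\indent The vanishing used above is easy to check symbolically. Here is a small Sage sketch with a sample cubic in the normal form (\ref{normal_form_0}) satisfying the additional conditions of the proof (the particular $Q_i$, $D_i$ and $R$ below are ours and purely illustrative):
\begin{verbatim}
R = PolynomialRing(QQ, 'X', 7); X = R.gens()
Q0, Q1, Q2 = X[4]^2, X[5]^2, X[6]^2         # quadrics in X_4, X_5, X_6
Q3 = X[4]*X[5]                              # Q_3(0, X_5, X_6) = 0
D4, D5, D6 = X[0]^2, X[1]*X[2], X[0]*X[1]   # D_5, D_6 vanish at (0,0,0,X_3)
Rc = X[0]^3                                 # R(0, 0, 0, X_3) = 0
eq = (X[0]*Q0 + X[1]*Q1 + X[2]*Q2 + X[3]*Q3
      + X[4]*D4 + X[5]*D5 + X[6]*D6 + Rc)
toPp = {X[0]: 0, X[1]: 0, X[2]: 0, X[4]: 0}  # restriction to P'
print([eq.derivative(X[i]).subs(toPp) for i in (5, 6)])   # [0, 0]
\end{verbatim}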
According to the Lemma, for $[P]\neq [P']\in F_2(X)$, we can always find a quadric $Q\in H^0(\mathcal O_{\mathbb P^6}(2))$ such that $0\neq \overline Q\in L_P^2/(J_X\cap L_P^2+L_{P,P'}^2)$; in particular $Q_{|P}=0$ but $Q_{|P'}\neq 0$. Pick another $Q'\in H^0(\mathcal O_{\mathbb P^6}(2))\backslash (L_P^2\cup L_{P'}^2)$ (i.e. $Q'_{|P}\neq 0$, $Q'_{|P'}\neq 0$) such that $Q'_{|P'}$ is independent of $Q_{|P'}$ and $Q$ and $Q'$ are independent modulo $J_{X,2}$ (${\rm dim}(H^0(\mathcal O_{\mathbb P^6}(2))/(J_{X,2}\oplus\mathbb C[Q]))=5$).\\
\indent By Proposition \ref{prop_descrip_h_0_omega}, such quadrics give rise to $1$-forms on $F_2(X)$. Then $Q\wedge Q'\in \bigwedge^2H^0(\Omega_{F_2(X)})$ vanishes at $[P]$ but not at $[P']$ i.e. $|\bigwedge^2H^0(\Omega_{F_2(X)})|$ separates points.\\
\indent Now, given a $[P]\in F_2(X)$, we recall that $$T_{[P]}F_2(X)=\{u\in Hom(\langle P\rangle,V/\langle P\rangle),\ {\rm eq}_X(x,x,u(x))=0\ \forall x\in \langle P\rangle\}$$ (the first order of ${\rm eq}_X(x+u(x),x+u(x),x+u(x))=0$, $\forall x\in \langle P\rangle$).\\
\indent Let $Q\in L_P^2$ be such that $0\neq \overline Q\in H^0(\mathcal O_{\mathbb P^6}(2))/J_{X,2}$ and $T_{[P]}F_2(Q)\cap T_{[P]}F_2(X)=\{0\}$. Pick a $0\neq\overline Q'\in H^0(\mathcal O_{\mathbb P^6}(2))/J_{X,2}$, such that $Q'_{|P}\neq 0$ then $Q\wedge Q'\in \bigwedge^2H^0(\Omega_{F_2(X)})$ and $(Q\wedge Q')_{|P}=0$.\\
\indent Moreover, given a $u\in T_{[P]}F_2(X)$, $d_{[P]}Q(u)\wedge Q'_{|P}+Q_{|P}\wedge d_{[P]}Q'(u)=d_{[P]}Q(u)\wedge Q'_{|P}$ where $d_{[P]}Q(u)$ is the quadratic form $x\mapsto {\rm eq}_Q(x,u(x))$ and is non-trivial since $T_{[P]}F_2(Q)\cap T_{[P]}F_2(X)=\{0\}$. Then for $Q$ generic (containing $P$ and such that $T_{[P]}F_2(Q)\cap T_{[P]}F_2(X)=\{0\}$), $d_{[P]}Q(u)$ is linearly independent of $Q'_{|P}$ so that $Q\wedge Q'$ does not vanish to first order along the tangent vector $u$. So $|\bigwedge^2H^0(\Omega_{F_2(X)})|$ separates tangent directions. \end{proof} \textit{}\\
\section{Variety of osculating planes of a cubic $4$-fold} We have previously introduced, for a smooth cubic $4$-fold containing no plane $Z\subset \mathbb P(H^*)\simeq \mathbb P^5$, the variety of osculating planes (\ref{def_var_of_oscul_planes}) $F_0(Z):=\{[P]\in G(3,H),\ \exists \ell\subset P\ {\rm line\ s.t.}\ P\cap Z=\ell\ {\rm (set-theoretically)}\}$.\\ \indent The variety $F_0(Z)$ lives naturally in $Fl(2,3,H)$ i.e. $$F_0(Z)=\{([\ell],[P])\in Fl(2,3,H),\ P\cap Z=\ell\ {\rm (set-theoretically)}\}$$ and from the exact sequence (\ref{ex_seq_taut_bundles_2_3}): $$\begin{small}0\rightarrow e^*\mathcal O_{G(2,H)}(-1)\otimes t^*\mathcal O_{G(3,H)}(1)\rightarrow t^*\mathcal E_3\rightarrow e^*\mathcal E_2\rightarrow 0 \end{small}$$ we see that $e^*\mathcal O_{G(2,H)}(-1)\otimes t^*\mathcal O_{G(3,H)}(1)$ is, for $([\ell],[P])\in Fl(2,3,H)$, the bundle of equations of $\ell\subset P$. As a result $F_0(Z)$ is the zero locus on $Fl(2,3,H)$ of a section of the rank $9$ vector bundle $\mathcal F$ defined by the exact sequence \begin{equation}\label{ex_seq_def_F} \begin{small}0\rightarrow e^*\mathcal O_{G(2,H)}(-3)\otimes t^*\mathcal O_{G(3,H)}(3)\rightarrow t^*{\rm Sym}^3\mathcal E_3\rightarrow \mathcal F\rightarrow 0\end{small} \end{equation}
In particular, (since $\mathcal F$ is globally generated by the sections induced by $H^0(t^*{\rm Sym}^3\mathcal E_3)$) by Bertini type theorems, for $Z$ general, $F_0(Z)$ is a smooth surface with $K_{F_0(Z)}\simeq (t^*\mathcal O_{G(3,H)}(3))_{|F_0(Z)}$.\\ \indent Its link to the surface of planes of a cubic $5$-fold is the following
\begin{proposition}\label{prop_etale_cover_F_2_F_0} Denoting $X_Z=\{X_6^3-{\rm eq}_Z(X_0,\dots,X_5)=0\}$ the cyclic cubic $5$-fold associated to $Z$, the linear projection with center $p_0:=[0:\cdots:0:1]$ induces a degree $3$ \'etale cover $\pi:F_2(X_Z)\rightarrow F_0(Z)$ given by the torsion line bundle $(e^*\mathcal O_{G(2,H)}(-1)\otimes t^*\mathcal O_{G(3,H)}(1))_{|F_0(Z)}$.\\ \indent In particular, when $F_0(Z)$ is smooth, $F_2(X_Z)$ and $F_0(Z)$ are smooth and irreducible. \end{proposition} \begin{proof} (1) The point $p_0$ does not belong to $X_Z$. In particular, any $[P]\in F_2(X_Z)$ is sent by $\pi_{p_0}:\mathbb P(V^*)\dashrightarrow \mathbb P(H^*)$ to a plane in $\mathbb P(H^*)$ where $V=H\oplus \mathbb C\cdot p_0$. The restriction of $\pi_{p_0}$ (also denoted $\pi_{p_0}$) to $X$ is a degree $3$ cyclic cover of $\mathbb P^5$ ramified over $Z$. Let us denote $\tau:[a_0:\cdots:a_6]\mapsto[a_0:\cdots:a_5:\xi a_6]$, with $\xi$ a primitive $3^{rd}$ root of $1$, the cover automorphism.\\ \indent For any $[P]\in F_2(X_Z)$, $\pi_{p_0}:\pi_{p_0}^{-1}(\pi_{p_0}(P))\rightarrow \pi_{p_0}(P)$ is a degree $3$ cyclic cover ramified over the cubic curve $\pi_{p_0}(P)\cap Z$. It contains the three sections $P, \tau(P),\tau^2(P)$ which in turn all contain (set-theoretically) the ramification curve $\pi_{p_0}(P)\cap Z$ so it is a line i.e. $([\{\pi_{p_0}(P)\cap Z\}_{{\rm red}}],[\pi_{p_0}(P)])\in F_0(Z)$.\\
\indent Conversely, for any $([\ell],[P])\in F_0(Z)$, $\pi_{p_0|X_Z}^{-1}(P)\rightarrow P$ is a degree $3$ cyclic cover ramified over $\{\ell\}^3$; so it consists of $3$ surfaces isomorphic each to $P$ i.e. $3$ planes. To make it even more explicit, if $P=\{X_0=X_1=X_2=0\}$ and $\ell=\{X_0=X_1=X_2=X_3=0\}$, then $\pi_{p_0|X_Z}^{-1}(P)$ is defined in $\pi_{p_0}^{-1}(P)\simeq{\rm Span}(P,p_0)\simeq \mathbb P^3$ by $X_6^3-aX_3^3$ for some $a\neq 0$ (since $Z$ contains no plane) and we have $X_6^3-aX_3^3=(X_6-bX_3)(X_6-b'X_3)(X_6-b''X_3)$ where $b,b',b''$ are the distinct roots of $y^3=a$. So $\pi:F_2(X_Z)\rightarrow F_0(Z)$ is \'etale of degree $3$.\\
\indent (2) The equation ${\rm eq}_Z$ defines a section $\sigma_{{\rm eq}_Z}\in H^0(t^*{\rm Sym}^3\mathcal E_3)\simeq H^0({\rm Sym}^3\mathcal E_3)$ and by projection in (\ref{ex_seq_def_F}) a section $\overline{\sigma_{{\rm eq}_Z}}$ of $\mathcal F$ whose zero locus is $F_0(Z)$. Restricting (\ref{ex_seq_def_F}) to $F_0(Z)$ we see that $\sigma_{{\rm eq}_Z}$ induces a section of $(e^*\mathcal O_{G(2,H)}(-3)\otimes t^*\mathcal O_{G(3,H)}(3))_{|F_0(Z)}$ which vanishes nowhere since $Z$ contains no plane. Thus $$(e^*\mathcal O_{G(2,H)}(-3)\otimes t^*\mathcal O_{G(3,H)}(3))_{|F_0(Z)}\simeq \mathcal O_{F_0(Z)}.$$
\indent Now if $(e^*\mathcal O_{G(2,H)}(-1)\otimes t^*\mathcal O_{G(3,H)}(1))_{|F_0(Z)}\simeq \mathcal O_{F_0(Z)}$, since $(e^*\mathcal O_{G(2,H)}(-1)\otimes t^*\mathcal O_{G(3,H)}(1))_{|F_0(Z)}$ is the bundle of equation of $\ell_x\subset P_x$ for any $x=([\ell_x],[P_x])\in F_0(Z)$, for any nowhere vanishing section $s$ of $(e^*\mathcal O_{G(2,H)}(-1)\otimes t^*\mathcal O_{G(3,H)}(1))_{|F_0(Z)}$, we would be able to define $3$ distinct sections of $\pi:F_2(X_Z)\rightarrow F_0(Z)$, namely (symbolically) $[x\mapsto \{X_6-\xi^k s(x)\}_{{\rm Span}(P_x,p_0)}]$, $k=0,1,2$. But according to \cite[Proposition 1.8]{Coll_cub}, $F_2(X)$ is connected for any $X$. Contradiction. So $(e^*\mathcal O_{G(2,H)}(-1)\otimes t^*\mathcal O_{G(3,H)}(1))_{|F_0(Z)}$ is non-trivial $3$-torsion line bundle.\\
\indent Moreover, we readily see that for any $[P]\in F_2(X_Z)$, $X_{6|P}\neq 0$ is an equation of the line $P\cap \mathbb P(H^*)$ i.e. $\pi^*(e^*\mathcal O_{G(2,H)}(-1)\otimes t^*\mathcal O_{G(3,H)}(1))_{|F_0(Z)}$ has a nowhere vanishing section i.e. is trivial.\\
\indent (3) When $F_0(Z)$ is smooth, since $\pi$ is \'etale, $F_2(X_Z)$ is also smooth. As $F_2(X_Z)$ is connected (by \cite[Proposition 1.8]{Coll_cub}), $F_2(X_Z)$ is irreducible and $\pi(F_2(X_Z))=F_0(Z)$ is also irreducible. \end{proof} \begin{remarque}\label{rmk_result_GK} {\rm That $F_0(Z)$ is smooth and irreducible, for $Z$ general, is proven in \cite[Lemma 4.3]{GK_geom_lines} without reference to $F_2(X_Z)$.} \end{remarque} \textit{}\\
\indent In \cite{GK_geom_lines}, the interest for the image $e(F_0(Z))\subset F_1(Z)$ stems from $e(F_0(Z))$ being the fixed locus of a rational self-map of the hyper-K\"ahler $4$-fold $F_1(Z)$ defined by Voisin (\cite{Voisin_map}).
\begin{proposition}\label{prop_image_F_0} For $Z$ general, the tangent map of $e_{F_0}:=e_{|F_0(Z)}:F_0(Z)\rightarrow F_1(Z)$ is injective, $e_{F_0}$ is the normalisation of $e_{F_0}(F_0(Z))$ and is an isomorphism onto its image outside a finite subset of $F_0(Z)$.\\ \indent Moreover $e_{F_0}(F_0(Z))$ is a (non-normal) Lagrangian surface of the hyper-K\"ahler $4$-fold $F_1(Z)$. \end{proposition}
\begin{proof} (1) That $e_{F_0}$ is injective outside a finite number of points follows from a simple dimension count: let us introduce $I:=\{(([\ell],[P]),[Z])\in Fl(2,3,H)\times |\mathcal O_{\mathbb P^5}(3)|,\ \ell\subset Z\ {\rm and}\ Z\cap P=\ell\ {\rm set-theoretically}\}$ and $I_2:=\{(([\ell],[P_1],[P_2]),[Z])\in \mathbb P(\mathcal Q_2)\times_{G(2,H)}\mathbb P(\mathcal Q_2)\backslash \Delta_{\mathbb P(\mathcal Q_2)}\times |\mathcal O_{\mathbb P^5}(3)|,\ \ell\subset Z\ {\rm and}\ Z\cap P_i=\ell,\ i=1,2\ {\rm set-theoretically}\}$. As $Fl(2,3,H)$ and $\mathbb P(\mathcal Q_2)\times_{G(2,H)}\mathbb P(\mathcal Q_2)\backslash \Delta_{\mathbb P(\mathcal Q_2)}$ are homogeneous, the fibers of $p:I\rightarrow Fl(2,3,H)$ (resp. $p_2:I_2\rightarrow \mathbb P(\mathcal Q_2)\times_{G(2,H)}\mathbb P(\mathcal Q_2)\backslash \Delta_{\mathbb P(\mathcal Q_2)}$) are isomorphic to each other and are sub-linear systems of $|\mathcal O_{\mathbb P^5}(3)|$.\\
\indent Notice that, since $F_0(Z)$ is a surface for $Z$ general, we know that $dim(I)=dim(|\mathcal O_{\mathbb P^5}(3)|)+2$.\\ \indent Let us analyse the fiber of $p_2$. To do so, we can assume $\ell=\{X_2=\cdots=X_5=0\}$, $P_1=\{X_3=X_4=X_5=0\}$ and $P_2=\{X_2=X_4=X_5=0\}$. Then the condition $Z\cap P_1=\ell$ implies that ${\rm eq}_Z$ is of the form \begin{equation}\label{normal_form_oscul}\begin{small}{\rm eq}_Z=\alpha X_2^3 + X_3Q_3 + X_4Q_4 +X_5Q_5 + \sum_{i=0}^2X_iD_i(X_3,X_4,X_5) + R(X_3,X_4,X_5)\end{small} \end{equation}
where the $Q_i(X_0,X_1,X_2)$ are quadratic forms in $X_0,X_1,X_2$, $D_i$ are quadratic forms in $X_3,X_4,X_5$ and $R$ is a cubic form in $X_3,X_4,X_5$. Notice that this is the general form of a member of the fiber $p^{-1}([\ell],[P_1])$, in particular $dim(p^{-1}([\ell],[P_1]))=dim(|\mathcal O_{\mathbb P^5}(3)|)+2-dim(Fl(2,3,H))=dim(|\mathcal O_{\mathbb P^5}(3)|)-9$.\\
\indent The additional condition $Z\cap P_2=\ell$ implies that $Q_3(X_0,X_1,0)=0$, $D_0(X_3,0,0)=0$, $D_1(X_3,0,0)=0$, which gives $3+1+1=5$ constraints. So that $dim(p_2^{-1}(([\ell],[P_1],[P_2])))=dim(p^{-1}([\ell],[P_1]))-5=dim(|\mathcal O_{\mathbb P^5}(3)|)-14$, hence $dim(I_2)=dim(p_2^{-1}(([\ell],[P_1],[P_2])))+2\times 3+dim(G(2,H))=dim(|\mathcal O_{\mathbb P^5}(3)|)$. As a result, the general fiber of $I_2\rightarrow |\mathcal O_{\mathbb P^5}(3)|$ is finite: for $[Z]\in |\mathcal O_{\mathbb P^5}(3)|$ general, there are only finitely many lines $\ell\subset Z$ for which there exist at least two planes $P_1,P_2\subset \mathbb P^5$ with $Z\cap P_i=\ell$, $i=1,2$; i.e. there is a finite set $\gamma\subset F_0(Z)$ such that $e_{|F_0}:F_0(Z)\backslash \gamma\rightarrow F_1(Z)$ is a bijection onto its image.\\
\indent (2) Let us give a description of $T_{F_0(Z),([\ell],[P])}$. We recall that the two projective bundle structures on $Fl(2,3,H)$ given by $e:Fl(2,3,H)\simeq \mathbb P(\mathcal Q_2)\rightarrow G(2,H)$ and $t:Fl(2,3,H)\simeq \mathbb P(\wedge^2\mathcal E_3)\rightarrow G(3,H)$ yield the following descriptions of the tangent bundle $$T_{Fl(2,3,H),([\ell],[P])}\simeq Hom(\langle\ell\rangle,H/\langle\ell\rangle)\oplus Hom(\langle P\rangle/\langle\ell\rangle,H/\langle P\rangle)$$ and $$T_{Fl(2,3,H), ([\ell],[P])}\simeq Hom(\langle P\rangle,H/\langle P\rangle)\oplus Hom(\langle \ell\rangle,\langle P\rangle/\langle \ell\rangle).$$ The isomorphism between the two takes the following form $$\begin{tabular}{ccc} $Hom(\langle\ell\rangle,H/\langle\ell\rangle)\oplus Hom(\langle P\rangle/\langle\ell\rangle,H/\langle P\rangle)$ &$\rightarrow$ &$Hom(\langle P\rangle,H/\langle P\rangle)\oplus Hom(\langle \ell\rangle,\langle P\rangle/\langle \ell\rangle)$\\ $(\varphi,\psi)$ &$\mapsto$ &$(\varphi_\perp+\psi,\varphi_\parallel)$ \end{tabular}$$ where $\varphi=(\varphi_\parallel,\varphi_\perp)$ is the decomposition corresponding to the choice of a decomposition $H/\langle\ell\rangle\simeq \langle P\rangle/\langle\ell\rangle\oplus H/\langle P\rangle$ coming from a decomposition $\langle P\rangle\simeq \langle\ell\rangle\oplus \langle P\rangle/\langle\ell\rangle$.\\ \indent Around $([\ell],[P])\in F_0(Z)$, the points of $Fl(2,3,H)$ are of the form $([(id_{\langle\ell\rangle}+\varphi)(\langle\ell\rangle)],[(id_{\langle P\rangle}+\varphi_\perp+\psi)(\langle P\rangle)])$. Let us choose an equation $\lambda\in \langle P\rangle^*$ (a generator of $(\langle P\rangle/\langle\ell\rangle)^*$) of $\ell\subset P$, such that ${\rm eq}_Z(x,x,x)=\lambda(x)^3$ for any $x\in \langle P\rangle$.\\ \indent The first order deformation of this equation to an equation of $(id_{\langle\ell\rangle}+\varphi)(\langle\ell\rangle)\subset (id_{\langle P\rangle}+\varphi_\perp+\psi)(\langle P\rangle)$ is given by $\lambda-\varphi^*(\lambda)$ so that the point associated to $(\varphi,\psi)$ belongs to $F_0(Z)$ if and only if $${\rm eq}_Z(x+\varphi_\perp(x)+\psi(x),x+\varphi_\perp(x)+\psi(x),x+\varphi_\perp(x)+\psi(x))=(1+c(\varphi,\psi))(\lambda(x)-\varphi^*(\lambda)(x))^3\ \forall x\in \langle P\rangle$$ for some term $c(\varphi,\psi)=O(\varphi,\psi)$ constant on $\langle P\rangle$. So at the first order, we get \begin{equation}\label{descript_T_F_0} {\rm eq}_Z(x,x,\varphi_\perp(x)+\psi(x))=-\lambda(x)^2\varphi^*(\lambda)(x)+\frac 1 3 c(\varphi,\psi)\lambda(x)^3\ \forall x\in\langle P\rangle. \end{equation} The differential of the projection $e_{F_0(Z)}:F_0(Z)\rightarrow F_1(Z)$ is simply given by $(\varphi,\psi)\mapsto \varphi$.\\
\indent Let us introduce $$J:=\{(([\ell],[P]),[Z])\in Fl(2,3,H)\times |\mathcal O_{\mathbb P^5}(3)|,\ \ell\subset Z,\ Z\cap P=\ell\ {\rm and}\ T_{([\ell],[P])}e_{|F_0} {\rm is\ not\ injective}\}$$ and analyse the fibers of $p_J:J\rightarrow Fl(2,3,H)$, which are isomorphic to each other by homogeneity of $Fl(2,3,H)$.\\ \indent So we can assume $\ell=\{X_2=\cdots=X_5=0\}$ and $P=\{X_3=\cdots=X_5=0\}$ so that ${\rm eq}_Z$ is of the form (\ref{normal_form_oscul}) with $Q_i=a_iX_0^2+b_iX_1^2+c_iX_2^2+d_iX_0X_1+e_iX_0X_2+f_iX_1X_2$, $i=3,4,5$ for some $a_i,\dots,f_i$. We recall that for $\varphi=\left(\begin{smallmatrix}u_2 &v_2\\ u_3 &v_3\\ u_4 &v_4\\ u_5 &v_5\end{smallmatrix}\right)\in Hom(\langle\ell\rangle,H/\langle\ell\rangle)$ and $\psi=\left(\begin{smallmatrix}w_3\\ w_4\\ w_5\end{smallmatrix}\right)\in Hom(\langle P\rangle/\langle\ell\rangle,H/\langle P\rangle)$ the associated subspaces are $$\ell_{(\varphi,\psi)}=[\lambda,\mu,\lambda u_2+\mu v_2,\dots,\lambda u_5+\mu v_5],\ [\lambda,\mu]\in \mathbb P^1$$ $$P_{(\varphi,\psi)}=[\lambda,\mu,\nu,\lambda u_3+\mu v_3+\nu w_3, \lambda u_4+\mu v_4+\nu w_4,\lambda u_5+\mu v_5+\nu w_5]\ [\lambda,\mu,\nu]\in \mathbb P^2.$$
\indent Now, if $(0,\psi)\in T_{F_0(Z),([\ell],[P])}$, we have at the first order
$$\begin{aligned} {\rm eq}_{Z|P_{(0,\psi)}}&=\alpha\nu^3 +\sum_{i=3}^5\nu w_i(a_i\lambda^2+b_i\mu^2+c_i\nu^2+d_i\lambda\mu+e_i\lambda\nu+f_i\mu\nu)+O((\varphi,\psi)^2)\\ &=(\alpha+c_3w_3+c_4w_4+c_5w_5)\nu^3+[(e_3w_3+e_4w_4+e_5w_5)\lambda+(f_3w_3+f_4w_4+f_5w_5)\mu]\nu^2\\ &\ \ +(a_3w_3+a_4w_4+a_5w_5)\lambda^2\nu +(b_3w_3+b_4w_4+b_5w_5)\mu^2\nu\\ &\ \ +(d_3w_3+d_4w_4+d_5w_5)\lambda\mu\nu+ O((\varphi,\psi)^2)\end{aligned}$$ so that looking at (\ref{descript_T_F_0}), $(0,\psi)\in T_{F_0(Z),([\ell],[P])}$ if and only if $${\rm rank}\left(\begin{smallmatrix}a_3 &a_4 &a_5\\ b_3 &b_4 &b_5\\ d_3 &d_4 &d_5\\ e_3 &e_4 &e_5\\ f_3 &f_4 &f_5\end{smallmatrix}\right)\leq 2$$ which defines a subset of codimension $(3-2)(5-2)=3$.\\
\indent So $J\subset I$ has codimension $3$. As $dim(I)=dim(|\mathcal O_{\mathbb P^5}(3)|)+2$, $J$ does not dominate $|\mathcal O_{\mathbb P^5}(3)|$ i.e. for the general $Z$, $e_{F_0}$ is an immersion.\\
\indent(3) Let us prove that $e_{F_0}(F_0(Z))$ is a Lagrangian surface of $F_1(Z)$. In \cite{Iliev-Manivel_cub_hyp_int_syst}, the following explicit description of the symplectic form $\mathbb C\cdot \Omega=H^{2,0}(F_1(Z))$ is given: let us introduce the following quadratic form on $\wedge^2T_{F_1(Z),[\ell]}$ with values in $Hom((\wedge^2\langle\ell\rangle)^{\otimes 2},\wedge^4(H/\langle\ell\rangle))$
$$\begin{aligned} K(u\wedge v,u'\wedge v')&= u(x)\wedge u'(y)\wedge v(x)\wedge v'(y) - u(y)\wedge u'(y)\wedge v(x)\wedge v'(x)\\ &\ \ +u(y)\wedge u'(x)\wedge v(y)\wedge v'(x) - u(x)\wedge u'(x)\wedge v(y)\wedge v'(y)\end{aligned}$$ where $(x,y)$ is a basis of $\langle\ell\rangle$. Let us also introduce the following skew-symmetric form $$\begin{tabular}{lcll} $\omega:$ &$\wedge^2T_{F_1(Z),[\ell]}$ &$\rightarrow$ &$(\wedge^2\langle\ell\rangle)^{\otimes 3}$\\ $ $ &$u\wedge v$ &$\mapsto$ &${\rm eq}_Z(x,x,u(y)){\rm eq}_Z(y,y,v(x)) -{\rm eq}_Z(x,x,v(y)){\rm eq}_Z(y,y,u(x))$\\ $ $ &$ $ &$ $ &$+2{\rm eq}_Z(x,y,u(y)){\rm eq}_Z(x,x,v(y))-2{\rm eq}_Z(x,x,u(y)){\rm eq}_Z(x,y,v(y))$\\ $ $ &$ $ &$ $ &$+2{\rm eq}_Z(y,y,u(x)){\rm eq}_Z(x,y,v(x))-2{\rm eq}_Z(x,y,u(x)){\rm eq}_Z(y,y,v(x)).$ \end{tabular}$$ According to \cite[Theorem 1]{Iliev-Manivel_cub_hyp_int_syst}, for $u,v\in T_{F_1(Z),[\ell]}$, $$K(u\wedge v,u\wedge v)=\omega(u\wedge v)\Omega_{[\ell]}(u,v).$$
As for a general point $([\ell],[P])\in F_0(Z)$, $\ell\subset Z$ is of the first type i.e. in reference to the above presentation (\ref{normal_form_oscul}) for $\ell=\{X_2=\cdots=X_5=0\}$, $P=\{X_3=X_4=X_5=0\}$, $\left|\begin{smallmatrix}a_3 &b_3 &d_3\\
a_4 &b_4 &d_4\\ a_5 &b_5 &d_5\end{smallmatrix}\right|\neq 0$, it is sufficient to prove the vanishing of $\Omega_{[\ell]}({\rm Im}(T_{([\ell],[P])}e_{F_0}),{\rm Im}(T_{([\ell],[P])}e_{F_0}))$ for such a line. So we can assume $\alpha=1$ and $$\begin{aligned}Q_3&=X_0^2+e_3X_0X_2+f_3X_1X_2+c_3X_2^2\\ Q_4&=X_0X_1+e_4X_0X_2+f_4X_1X_2+c_4X_2^2\\ Q_5&=X_1^2+e_5X_0X_2+f_5X_1X_2+c_5X_2^2.\end{aligned}$$
Then as above, for $\varphi=\left(\begin{smallmatrix}u_2 &v_2\\ u_3 &v_3\\ u_4 &v_4\\ u_5 &v_5\end{smallmatrix}\right)\in Hom(\langle\ell\rangle,H/\langle\ell\rangle)$ and $\psi=\left(\begin{smallmatrix}w_3\\ w_4\\ w_5\end{smallmatrix}\right)\in Hom(\langle P\rangle/\langle\ell\rangle,H/\langle P\rangle)$, we have $$\begin{aligned}{\rm eq}_{Z|P_{(\varphi,\psi)}} &= \nu^3+\sum_{i=3}^5(\lambda u_i+\mu v_i+\nu w_i)Q_i+O((\varphi,\psi)^2)\\ &=(1+c_3w_3+c_4w_4+c_5w_5)\nu^3+(c_3u_3+e_3w_3+c_4u_4+e_4w_4+c_5u_5+e_5w_5)\lambda\nu^2\\ &\ \ + (c_3v_3+f_3w_3+c_4v_4+f_4w_4+c_5v_5+f_5w_5)\mu\nu^2\\ &\ \ + (w_3+e_3u_3+e_4u_4+e_5u_5)\lambda^2\nu +(w_5+f_3v_3+f_4v_4+f_5v_5)\mu^2\nu\\ &\ \ +(w_4+f_3u_3+e_3v_3+f_4u_4+e_4v_4+f_5u_5+e_5v_5)\lambda\mu\nu\\ &\ \ +u_3\lambda^3+v_5\mu^3+(v_4+u_5)\lambda\mu^2 +(v_3+u_4)\lambda^2\mu + O((\varphi,\psi)^2)\end{aligned}$$ so that the description (\ref{descript_T_F_0}) of $T_{F_0(Z),([\ell],[P])}$ yields $$\left\{\begin{aligned} c_3u_3+e_3w_3+c_4u_4+e_4w_4+c_5u_5+e_5w_5 &=-u_2\\ c_3v_3+f_3w_3+c_4v_4+f_4w_4+c_5v_5+f_5w_5 & = - v_2\\ w_3+e_3u_3+e_4u_4+e_5u_5 &= 0\\ w_5+f_3v_3+f_4v_4+f_5v_5 &=0\\ w_4+f_3u_3+e_3v_3+f_4u_4+e_4v_4+f_5u_5+e_5v_5 &=0\\ v_4 =-u_5,\ v_3 =-u_4,\ u_3=0,\ v_5 =0. \end{aligned} \right.$$ The $7$ last equations yield $w_3=-(e_4u_4+e_5u_5)$, $w_4=(e_3-f_4)u_4+(e_4-f_5)u_5$, $w_5=f_3u_4+ f_4u_5$. Thus the first two give a system $\left\{\begin{aligned}\alpha u_4 + \beta u_5=-u_2\\ \delta u_4- \alpha u_5=-v_2\end{aligned}\right.$ where $\alpha=c_4-e_4f_4+e_5f_3$, $\beta=c_5-e_3e_5+e_4^2-e_4f_5+e_5f_4$ and $\delta=e_3f_4-f_4^2-e_4f_3+f_3f_5-c_3$; in particular the determinant $\Delta=-\alpha^2-\beta\delta$ of the $2\times 2$ system is non-zero for a general choice of the $(e_i,f_i,c_i)$ and $\left\{\begin{aligned}u_4&=\frac{1}{\Delta}(\alpha u_2+\beta v_2)\\ u_5&=\frac{1}{\Delta}(\delta u_2-\alpha v_2) \end{aligned}\right.$. So a basis of $T_{F_0(Z),([\ell],[P])}$ is given by ($(u_2=1,v_2=0)$ and $(u_2=0,v_2=1)$) $$\begin{tabular}{lclc} $\varphi_{u_2}:$ &$\epsilon_0$ &$\mapsto$ &$\epsilon_2 +\frac{\alpha}{\Delta}\epsilon_4 +\frac{\delta}{\Delta}\epsilon_5$\\ $ $ &$\epsilon_1$ &$\mapsto$ &$-\frac{\alpha}{\Delta}\epsilon_3-\frac{\delta}{\Delta}\epsilon_4$ \end{tabular}$$ and $$\begin{tabular}{lclc} $\varphi_{v_2}:$ &$\epsilon_0$ &$\mapsto$ &$\frac{\beta}{\Delta}\epsilon_4 -\frac{\alpha}{\Delta}\epsilon_5$\\ $ $ &$\epsilon_1$ &$\mapsto$ &$\epsilon_2-\frac{\beta}{\Delta}\epsilon_3+\frac{\alpha}{\Delta}\epsilon_4$ \end{tabular}$$
where $(\epsilon_0,\dots,\epsilon_5)$ is the (dual) basis associated to the choice of coordinates $X_i$'s. Then we readily compute $$K(\varphi_{u_2}\wedge\varphi_{v_2})=\left|\begin{matrix}1 &0 &0 &1\\ 0 &-\frac{\alpha}{\Delta} &0 &-\frac{\beta}{\Delta}\\ \frac{\alpha}{\Delta} &-\frac{\delta}{\Delta} &\frac{\beta}{\Delta} &\frac{\alpha}{\Delta}\\ \frac{\delta}{\Delta} &0 &-\frac{\alpha}{\Delta} &0\end{matrix}\right|=0$$ and $\omega(\varphi_{u_2}\wedge\varphi_{v_2})=\frac{5}{\Delta}\neq 0$ hence $\Omega_{[\ell]}(\varphi_{u_2},\varphi_{v_2})=0$. \end{proof}
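The vanishing of the determinant above can also be checked symbolically. The following short Python/sympy script is only an illustrative aside (it is not part of the argument): it treats $a=\alpha/\Delta$, $b=\beta/\Delta$ and $d=\delta/\Delta$ as independent symbols and confirms that the $4\times 4$ determinant vanishes identically.
\begin{verbatim}
from sympy import symbols, Matrix, simplify

a, b, d = symbols('a b d')  # stand for alpha/Delta, beta/Delta, delta/Delta

# columns: phi_{u_2}(eps_0), phi_{u_2}(eps_1), phi_{v_2}(eps_0), phi_{v_2}(eps_1),
# expressed in the basis (eps_2, eps_3, eps_4, eps_5)
M = Matrix([[1,  0,  0,  1],
            [0, -a,  0, -b],
            [a, -d,  b,  a],
            [d,  0, -a,  0]])

print(simplify(M.det()))  # prints 0, identically in a, b, d
\end{verbatim}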
\begin{remarque}\label{rmk_result_GK2} {\rm In \cite{GK_geom_lines} it is also proven that $F_0(Z)\rightarrow e(F_0(Z))$ is the normalisation and that $e(F_0(Z))$ has $3780$ non-normal isolated singularities.}
\end{remarque}
Since for $Z$ general $e_{F_0}$ is an immersion, $N_{F_0(Z)/F_1(Z)}:=e_{F_0}^*T_{F_1(Z)}/T_{F_0(Z)}$ is locally free. Moreover, since $e_{F_0}$ is, outside a codimension $2$ subset of $F_0(Z)$, an isomorphism onto its image and the latter is a Lagrangian subvariety of $F_1(Z)$, we get (outside a codimension $2$ subset, thus globally) an isomorphism $$\Omega_{F_0(Z)}\simeq N_{F_0(Z)/F_1(Z)}.$$
\indent Notice that $F_0(Z)$ naturally lives in $\mathbb P(\mathcal Q_{2|F_1(Z)})\subset Fl(2,3,H)$. We have the following
\begin{lemme}\label{lem_normal_bdl_intermediate} The following sequence is exact $$\begin{aligned}0\rightarrow e_{F_1}^*\mathcal O_{F_1(Z)}(-3)\otimes t_{F_1}^*\mathcal O_{G(3,H)}(3)\rightarrow e_{F_1}^*\mathcal O_{F_1}(-1)\otimes t_{F_1}^*({\rm Sym}^2\mathcal E_3\otimes\mathcal O_{G(3,H)}(1))_{|F_1(Z)}\\\rightarrow N_{F_0(Z)/\mathbb P(\mathcal Q_{2|F_1(Z)})}\rightarrow 0\end{aligned}$$ where $e_{F_1}:\mathbb P(\mathcal Q_{2|F_1(Z)})\rightarrow F_1(Z)$ and $t_{F_1}:\mathbb P(\mathcal Q_{2|F_1(Z)})\rightarrow G(3,H)$. \end{lemme} \begin{proof} We have seen that $F_0(Z)\subset Fl(2,3,H)$ is the zero locus of a section of $\mathcal F$ appearing in the sequence (\ref{ex_seq_def_F}). Taking the symmetric power of (\ref{ex_seq_taut_bundles_2_3}), we have the following commutative diagram with exact rows $$\xymatrix@-1pc@M=-0.1em{0\ar[r] &e^*\mathcal O_{G(2,H)}(-3)\otimes t^*\mathcal O_{G(3,H)}(3)\ar[r]\ar[d] &e^*\mathcal O_{G(2,H)}(-3)\otimes t^*\mathcal O_{G(3,H)}(3)\ar[r]\ar[d] &0\ar[r]\ar[d] &0\\ 0\ar[r] &e^*\mathcal O_{G(2,H)}(-1)\otimes t^*({\rm Sym}^2\mathcal E_3\otimes \mathcal O_{G(3,H)}(1))\ar[r] &t^*{\rm Sym}^3\mathcal E_3\ar[r] &e^*{\rm Sym}^2\mathcal E_2\ar[r] &0.}$$
The projection of the section $\sigma_{{\rm eq}_Z}\in H^0(t^*{\rm Sym}^3E_3)$ induced by ${\rm eq}_Z$, to $e^*{\rm Sym}^2\mathcal E_2$ vanishes on $F_1(Z)$ by definition of $F_1(Z)$. So it induces a section of $e_{F_1}^*\mathcal O_{F_1(Z)}(-1)\otimes t_{F_1}^*({\rm Sym}^2\mathcal E_3\otimes \mathcal O_{G(3,H)}(1))\simeq (e^*\mathcal O_{G(2,H)}(-1)\otimes t^*({\rm Sym}^2\mathcal E_3\otimes \mathcal O_{G(3,H)}(1)))_{|\mathbb P(\mathcal Q_{2|F_1(Z)})}$. Now the snake lemma in the above diagram gives the result. \end{proof}
The snake lemma in the following diagram with exact rows
$$\xymatrix@-1pc@M=-0.1em{0\ar[r] &T_{F_0(Z)}\ar[r]\ar[d]^{\cong} &T_{\mathbb P(\mathcal Q_{2|F_1(Z)})|F_0(Z)}\ar[r]\ar[d] &N_{F_0(Z)/\mathbb P(\mathcal Q_{2|F_1(Z)})}\ar[r]\ar[d] &0\\ 0\ar[r] &T_{F_0(Z)}\ar[r] &e_{F_0}^*T_{F_1(Z)}\ar[r] &N_{F_0(Z)/F_1(Z)}\ar[r] &0.}$$ and the description of the relative tangent bundle of $e_{F_1}$ give
\begin{proposition}\label{prop_cotgt_bundle_ex_seq_F_0} The following sequence is exact $$\begin{small}\begin{aligned}0\rightarrow \mathcal O_{F_0}\rightarrow e_{F_0}^*(\mathcal Q_{2|F_1(Z)}\otimes \mathcal O_{F_1(Z)}(-1))\otimes t_{F_0}^*(\mathcal O_{G(3,H)}(1))_{|F_0}\rightarrow N_{F_0(Z)/\mathbb P(\mathcal Q_{2|F_1(Z)})}\rightarrow \Omega_{F_0(Z)}\rightarrow 0\end{aligned}\end{small}$$ \end{proposition}
We finish this section by computing the Hodge numbers of $F_0(Z)$.
\begin{proposition}\label{prop_h_1_F_0} We have $H^1(F_0(Z),\mathbb Z)=0$ for any $Z$ for which $F_0(Z)$ is smooth. \end{proposition}
\begin{proof} For the universal variety of planes $r_{univ}:\mathcal F_2(\mathcal X)\rightarrow |\mathcal O_{\mathbb P^6}(3)|$, $R^3r_{univ,*}\mathbb Q$ is a local system over the open subset $\{[X]\in |\mathcal O_{\mathbb P^6}(3)|,\ F_2(X)\ {\rm is\ smooth}\}$ which, by Proposition \ref{prop_etale_cover_F_2_F_0}, contains an open subset of the locus of cyclic cubic $5$-folds.\\ \indent As a consequence, the Abel-Jacobi isomorphism $q_*p^*:H^3(F_2(X),\mathbb Q)\xrightarrow{\sim} H^5(X,\mathbb Q)$ given by the result of Collino (Theorem \ref{thm_Collino_intro}) for the general $X$, extends to the case of the general cyclic cubic $5$-fold.\\ \indent But, as noticed in the proof of Proposition \ref{prop_etale_cover_F_2_F_0}, for any $[P]\in F_0(Z)$, the associated cycle $q(p^{-1}(\pi^{-1}([P])))$ on $X_Z$ is the complete intersection cycle ${\rm Span}(P,p_0)\cap X_Z$, which belongs to a family of cycles parametrised by a rational variety, namely $\{[\Pi]\in G(4,V),\ p_0\in \Pi\}\simeq G(3,H)$. Now, as an abelian variety contains no rational curve, the Abel-Jacobi map $\Phi:G(3,H)\rightarrow J^5(X_Z)$, $[P]\mapsto [{\rm Span}(P,p_0)\cap X_Z] - [{\rm Span}(P_0,p_0)\cap X_Z]$ ($[P_0]$ being a reference point) is constant. Hence the restriction $\Phi_{(\pi_*,{\rm id}_{X_Z})\mathbb P(\mathcal E_3)}:F_0(Z)\rightarrow J^5(X_Z)$ of $\Phi$ to the sub-family $(\pi_*,{\rm id}_{X_Z})\mathbb P(\mathcal E_3)\subset F_0(Z)\times X_Z$ (of planes $P$ such that ${\rm Span}(P,p_0)\cap X_Z$ consists of $3$ planes) is constant i.e. $q_*p^*\pi^*:H^3(F_0(Z),\mathbb Z)\rightarrow H^5(X_Z,\mathbb Z)$ is trivial.\\ \indent As $\pi$ is \'etale, $\pi^*:H^3(F_0(Z),\mathbb Q)\rightarrow H^3(F_2(X_Z),\mathbb Q)$ is injective so that the trivial map $q_*p^*\pi^*$ is the composition of an injective map followed by an isomorphism; hence $H^3(F_0(Z),\mathbb Q)=0$. By Poincar\'e duality $b_1(F_0(Z))=b_3(F_0(Z))=0$, and since $H^1(F_0(Z),\mathbb Z)$ is torsion-free this gives $H^1(F_0(Z),\mathbb Z)=0$. \end{proof}
We can then compute the rest of the Hodge numbers: \begin{enumerate} \item again using the package Schubert2 of Macaulay2, the Koszul resolution of $\mathcal O_{F_0(Z)}$ by $\wedge^i\mathcal F^*$ (where $\mathcal F$ is defined by (\ref{ex_seq_def_F})) gives $\chi(\mathcal O_{F_0(Z)})=1071$ via the following code: \begin{verbatim}
-- Euler characteristic of O_{F_0(Z)} from the Koszul resolution by wedge^i F^*
loadPackage "Schubert2"
G=flagBundle{3,3}
(Q,E)=bundles G
wE=exteriorPower(2,E)
P=projectiveBundle' wE
p=P.StructureMap
pl=exteriorPower(3,E)
pol=p^*pl**dual(OO_P(1))
F=p^*symmetricPower(3,E)-symmetricPower(3,pol)
-- alternating sum of chi(wedge^i F^*) over the Koszul complex
chi(exteriorPower(0,dual(F)))-chi(exteriorPower(1,dual(F)))
+chi(exteriorPower(2,dual(F)))-chi(exteriorPower(3,dual(F)))
+chi(exteriorPower(4,dual(F)))-chi(exteriorPower(5,dual(F)))
+chi(exteriorPower(6,dual(F)))-chi(exteriorPower(7,dual(F)))
+chi(exteriorPower(8,dual(F)))-chi(exteriorPower(9,dual(F)))
\end{verbatim} so, since $h^0(\mathcal O_{F_0(Z)})=1$ and $h^1(\mathcal O_{F_0(Z)})=0$ (by Proposition \ref{prop_h_1_F_0}), we get $h^2(\mathcal O_{F_0(Z)})=1070$;\\
\item Then as $\pi$ is \'etale of degree $3$ we get $\chi_{top}(F_0(Z))=\frac 1 3 \chi_{top}(F_2(X_Z))=4347$. So $h^{1,1}(F_0(Z))=2207$. \end{enumerate}
\section*{Acknowledgments} I would like to thank Hseuh-Yung Lin for pointing out the article \cite{Iliev-Manivel_cub_hyp_int_syst} to me some years ago. I would also like to thank Pieter Belmans for explaining how to use Sage to decompose the tensor powers of $\mathcal E_3$ into irreducible modules and the anonymous referee for his remarks.\\ \indent Finally I am grateful to the gracious Lord for His care.\\
\indent I was partially supported by an NSF Grant, Simons Investigator Award HMS, Simons Collaboration Award HMS, the HSE University Basic Research Program, and the Ministry of Education and Science of the Republic of Bulgaria through the Scientific Program “Enhancing the Research Capacity in Mathematical Sciences (PIKOM)” No. DO1-67/05.05.2022.
\noindent \begin{tabular}[t]{l} \textit{[email protected]}\\ UMiami Miami, HSE Moscow,\\ Institute of Mathematics and Informatics, Bulgarian Academy of Sciences,\\ Acad. G. Bonchev Str. bl. 8, 1113, Sofia, Bulgaria. \end{tabular}\\
\end{document} |
\begin{document}
\title[Linear operators and the {L}iouville theorem]{The {L}iouville theorem and linear operators satisfying the maximum principle }
\author[N.~Alibaud]{Natha\"el Alibaud} \address[N.~Alibaud]{ENSMM\\ 26 Chemin de l'Epitaphe\\ 25030 Besan\c{c}on cedex\\ Fran\-ce and\\ LMB\\ UMR CNRS 6623\\ Universit\'e de Bourgogne Franche-Comt\'e (UBFC)\\ France} \email{nathael.alibaud\@@{}ens2m.fr} \urladdr{https://lmb.univ-fcomte.fr/Alibaud-Nathael}
\author[F.~del Teso]{F\'elix del Teso} \address[F.~del Teso]{Departamento de An\'alisis Matem\'atico y Matem\'atica Aplicada\\ Universidad Complutense de Madrid (UCM)\\ 28040 Madrid, Spain} \email{fdelteso\@@{}ucm.es} \urladdr{https://sites.google.com/view/felixdelteso}
\author[J. Endal]{J\o rgen Endal} \address[J. Endal]{Department of Mathematical Sciences\\ Norwegian University of Science and Technology (NTNU)\\ N-7491 Trondheim, Norway} \email{jorgen.endal\@@{}ntnu.no} \urladdr{http://folk.ntnu.no/jorgeen}
\author[E.~R.~Jakobsen]{Espen R. Jakobsen} \address[E.~R.~Jakobsen]{Department of Mathematical Sciences\\ Norwegian University of Science and Technology (NTNU)\\ N-7491 Trondheim, Norway} \email{espen.jakobsen\@@{}ntnu.no} \urladdr{http://folk.ntnu.no/erj}
\subjclass[2010]{ {35B10, 35B53, 35J70, 35R09, 60G51, 65R20}}
\keywords{ Nonlocal degenerate elliptic operators, Courr\`ege theorem, L\'evy-Khintchine formula, Liouville theorem, periodic solutions, propagation of maximum, subgroups of $\ensuremath{\mathbb{R}}^d$, Kronecker theorem}
\begin{abstract} A result by Courr\`ege says that linear translation invariant operators satisfy the maximum principle if and only if they are of the form $\ensuremath{\mathcal{L}}=\ensuremath{\mathcal{L}}^{\sigma,b}+\ensuremath{\mathcal{L}}^\mu$ where $$ \ensuremath{\mathcal{L}}^{\sigma,b}[u](x)=\textup{tr}(\sigma \sigma^{\texttt{T}} D^2u(x))+b\cdot
Du(x) $$ and $$
\ensuremath{\mathcal{L}}^\mu[u](x)=\int_{\ensuremath{\mathbb{R}}^d\setminus\{0\}} \big(u(x+z)-u(x)-z\cdot Du(x) \mathbf{1}_{|z| \leq
1}\big) \,\mathrm{d} \mu(z).$$ This class of operators coincides with the infinitesimal generators of L\'evy processes in probability theory. In this paper we give a complete characterization of the operators of this form that satisfy the Liouville theorem: Bounded solutions $u$ of $\ensuremath{\mathcal{L}}[u]=0$ in $\ensuremath{\mathbb{R}}^d$ are constant. The Liouville property is obtained as a consequence of a periodicity result that completely characterizes bounded distributional solutions of $\ensuremath{\mathcal{L}}[u]=0$ in $\ensuremath{\mathbb{R}}^d$. The proofs combine arguments from PDEs and group theory. They are simple and short. \end{abstract}
\maketitle
\section{Introduction and main results} \label{sec:intro} The classical Liouville theorem states that bounded solutions of $\Delta u=0$ in $\ensuremath{\mathbb{R}}^d$ are constant. The Laplace operator $\Delta$ is the most classical example of an operator $\ensuremath{\mathcal{L}}:C^\infty_\textup{c} (\ensuremath{\mathbb{R}}^d) \to C(\ensuremath{\mathbb{R}}^d)$ satisfying the maximum principle in the sense that \begin{equation}\label{mp} \mbox{$\ensuremath{\mathcal{L}} [u](x) \leq 0$ at any global maximum point $x$ of $u$.} \end{equation} In the class of linear translation invariant\footnote{Translation invariance means that
$\ensuremath{\mathcal{L}}[u(\cdot+y)](x)=\ensuremath{\mathcal{L}}[u](x+y)$ for all $x,y$.} operators (which includes $\Delta$), a result by Courr\`ege \cite{Cou64}\footnote{If \eqref{mp} holds at any {\it nonnegative} maximum point, then by definition the {\it positive} maximum principle holds and by \cite{Cou64} there is an extra term $c u(x)$ with $c \leq 0$ in \eqref{def:localOp1}. For the purpose of this paper (Liouville and periodicity), the case $c<0$ is trivial since then $u=0$ is the unique bounded solution of $\mathcal{L}[u]=0$.} says that the maximum principle holds if and only if \begin{equation}\label{eq:GenOp1} \ensuremath{\mathcal{L}}=\ensuremath{\mathcal{L}}^{\sigma,b}+\ensuremath{\mathcal{L}}^\mu, \end{equation} where \begin{align}\label{def:localOp1} \ensuremath{\mathcal{L}}^{\sigma,b}[u](x)&=\textup{tr}(\sigma \sigma^{\texttt{T}} D^2u(x))+b \cdot Du(x),\\ \label{def:levy1}
\ensuremath{\mathcal{L}}^\mu[u](x)&=\int_{\ensuremath{\mathbb{R}}^d\setminus\{0\}} \big(u(x+z)-u(x)-z \cdot Du(x) \mathbf{1}_{|z| \leq 1}\big) \,\mathrm{d} \mu(z), \end{align} and \begin{align} & b\in\ensuremath{\mathbb{R}}^d, \ \ \text{and}
\ \ \text{$\sigma=(\sigma_1,\ldots,\sigma_P)\in\ensuremath{\mathbb{R}}^{d\times P}$ for
$P\in\ensuremath{\mathbb{N}}$, $\sigma_j \in\ensuremath{\mathbb{R}}^d$,}\label{as:sigmab}\tag{$\textup{A}_{\sigma,b}$}\\
&\mu\geq0 \ \text{is a Radon measure on $\ensuremath{\mathbb{R}}^d\setminus\{0\}$, $\int_{\ensuremath{\mathbb{R}}^d\setminus\{0\}} \min\{|z|^2,1\} \,\mathrm{d} \mu(z)<\infty$.}\label{as:mus}\tag{$\textup{A}_{\mu}$} \end{align} These
elliptic operators have a local part $\ensuremath{\mathcal{L}}^{\sigma,b}$ and a nonlocal part $\ensuremath{\mathcal{L}}^\mu$, either of which could be zero.\footnote{The representation
\eqref{eq:GenOp1}--\eqref{def:localOp1}--\eqref{def:levy1} is
unique up to the choice of a cut-off function in
\eqref{def:levy1} and a square root $\sigma$ of
$a=\sigma\sigma^\texttt{T}$. In this paper we always use
$\mathbf{1}_{|z| \leq 1}$ as a cut-off function.}
Another point of view of these operators comes from probability and stochastic processes: Every operator mentioned above is the generator of a L\'evy process, and conversely, every generator of a L\'evy process is of the form given above. L\'evy processes are Markov processes with stationary independent increments and are the prototypical models of noise in science, engineering, and finance. Well-known examples are Brownian motions, Poisson processes, stable processes, and various other types of jump processes.
{\em The main contributions of this paper are the following:
\begin{enumerate}[ \bf 1.] \item We give necessary and sufficient conditions for $\ensuremath{\mathcal{L}}$ to have the Liouville property: Bounded solutions $u$ of $\ensuremath{\mathcal{L}}[u]=0$ in $\ensuremath{\mathbb{R}}^d$ are constant.
\item For general $\ensuremath{\mathcal{L}}$, we show that all bounded solutions of $\ensuremath{\mathcal{L}}[u]=0$ in $\ensuremath{\mathbb{R}}^d$ are periodic and we identify the set of admissible periods. \end{enumerate}}
Let us now state our results. For a set $S\subseteq \ensuremath{\mathbb{R}}^d$, we let $G(S)$ denote the smallest additive subgroup of $\ensuremath{\mathbb{R}}^d$ containing $S$ and define the subspace $V_S\subseteq \overline{G(S)}$ by \begin{equation*} V_S:=\Big\{g \in \overline{G(S)} \ :\ t g \in \overline{G(S)} \mbox{ } \forall t \in \ensuremath{\mathbb{R}} \Big\}. \end{equation*} Then we take $\supp(\mu)$ to be the support of the measure $\mu$ and define \begin{align*}
G_\mu:=\overline{G(\supp(\mu))},\quad V_\mu:=V_{\supp (\mu)},\quad\text{and}\quad c_\mu:=-\int_{\{|z|\leq1\}\setminus V_\mu} z \,\mathrm{d}\mu(z). \end{align*} Here $c_\mu$ is well-defined and uniquely determined by $\mu$, cf. Proposition \ref{def-prop}. We also need the subspace $W_{\sigma,b+c_\mu}:=\textup{span}_{\ensuremath{\mathbb{R}}}\{\sigma_1,\ldots,\sigma_P,b+c_\mu\}$. \begin{theorem}[General Liouville]\label{thm:Liouville} Assume \eqref{as:sigmab} and \eqref{as:mus}. Let $\ensuremath{\mathcal{L}}$ be given by \eqref{eq:GenOp1}--\eqref{def:localOp1}--\eqref{def:levy1}. Then the following statements are equivalent:
\begin{enumerate}[\rm(a)] \item\label{a4} If $u\in L^\infty(\ensuremath{\mathbb{R}}^d)$ satisfies
$\ensuremath{\mathcal{L}}[u]=0$ in $\mathcal{D}'(\ensuremath{\mathbb{R}}^d)$, then $u$ is a.e. a constant.
\item\label{label:thmLiou:c} $\overline{G_\mu+W_{\sigma,b+c_\mu}}=\ensuremath{\mathbb{R}}^d$. \end{enumerate} \end{theorem}
The above Liouville result is a consequence of a periodicity result for bounded solutions of $\ensuremath{\mathcal{L}}[u]=0$ in $\ensuremath{\mathbb{R}}^d$. For a set $S\subseteq \ensuremath{\mathbb{R}}^d$, a function $u\in L^\infty(\ensuremath{\mathbb{R}}^d)$ is a.e. \emph{$S$-periodic} if $u(\cdot+s)=u(\cdot)$ in $\mathcal{D}'(\ensuremath{\mathbb{R}}^d)$ $\forall s\in S$. Our result is the following: \begin{theorem}[General periodicity]\label{thm:PeriodGeneralOp} Assume \eqref{as:sigmab}, \eqref{as:mus}, and $u\in L^\infty(\ensuremath{\mathbb{R}}^d)$. Let $\ensuremath{\mathcal{L}}$ be given by \eqref{eq:GenOp1}--\eqref{def:localOp1}--\eqref{def:levy1}. Then the following statements are equivalent: \begin{enumerate}[\rm(a)] \item \label{a3} $\ensuremath{\mathcal{L}}[u]=0$ in $\mathcal{D}'(\ensuremath{\mathbb{R}}^d)$.
\item\label{b3} $u$ is a.e. $\overline{G_\mu+W_{\sigma,b+c_\mu}}$-periodic. \end{enumerate} \end{theorem} This result characterizes the bounded solutions for all operators $\ensuremath{\mathcal{L}}$ in our class, also those not satisfying the Liouville property. Note that if $\overline{G_\mu+W_{\sigma,b+c_\mu}}=\ensuremath{\mathbb{R}}^d$, then $u$ is constant and the Liouville result follows. Both theorems are proved in Section \ref{sec:periodandliou}.
We give examples in Section \ref{sec:examples}. Examples \ref{ex1} and \ref{ex2} provide an overview of different possibilities, and Examples \ref{ex:finitenumberofpoints} and \ref{ex:kro} are concerned with the case where $\textup{card} \left(\textup{supp} (\mu) \right)<\infty$. The Liouville property holds in the latter case if and only if $\textup{card} \left( \textup{supp} (\mu) \right) \geq d-\textup{dim} \left(W_{\sigma,b+c_\mu} \right)+1$ with additional algebraic conditions in relation with Diophantine approximation. The Kronecker theorem (Theorem \ref{thm:CharKron}) is a key ingredient in this discussion and a slight change in the data may destroy the Liouville property.
The class of operators $\ensuremath{\mathcal{L}}$ given by \eqref{eq:GenOp1}--\eqref{def:localOp1}--\eqref{def:levy1} is large and diverse. In addition to the processes mentioned above,
it includes also discrete random walks, constant coefficient It\^{o}- and L\'evy-It\^{o} processes, and most processes used as driving noise in finance.
Examples of nonlocal operators
are fractional
Laplacians \cite{Lan72}, convolution
operators \cite{Cov08,A-VMaRoT-M10,BrChQu12}, relativistic Schr\"odinger operators \cite{FaWe16}, and the CGMY model in finance \cite{CoTa04}. We mention that discrete finite difference operators can be written in the form \eqref{eq:GenOp1}--\eqref{def:localOp1}--\eqref{def:levy1}, cf. \cite{DTEnJa18b}. For more examples, see Section \ref{sec:examples}.
There is a huge literature on the Liouville theorem. In the local case, we simply refer to the survey \cite{Far07}. In the nonlocal case, the Liouville theorem is more or less understood for fractional Laplacians or variants \cite{Lan72,BoKuNo02,CaSi14, ChDALi15, Fal15}, certain L\'evy operators \cite{BaBaGu00,PrZa04, ScWa12, R-OSe16a,DTEnJa17a}, relativistic Schr\"{o}dinger operators \cite{FaWe16}, or convolution operators \cite{CD60,BrChQu12, BrCo18, BrCoHaVa19}. The techniques vary from Fourier analysis, potential theory, probabilistic methods, to classical PDE arguments.
To prove that solutions of $\ensuremath{\mathcal{L}}[u]=0$ are $G_\mu$-periodic, we rely on propagation of maximum points \cite{CD60,Cov08,Cio12,DTEnJa17b,DTEnJa17a,HuDuWu18,BrCo18, BrCoHaVa19} and a localization technique \`a la \cite{CD60,BeHaRo07,Ros09,BrCoHaVa19}. As far as we know, Choquet and Deny \cite{CD60} were the first to obtain such results. They were concerned with the equation $u \ast \mu-u=0$ for some bounded measure $\mu$. This is a particular case of our equation since
$u \ast \mu-u=\ensuremath{\mathcal{L}}^\mu[u]+\int_{\ensuremath{\mathbb{R}}^d \setminus \{0\}} z \mathbf{1}_{|z| \leq 1} \,\mathrm{d} \mu(z) \cdot Du$.
For general $\mu$, the drift $\int_{\ensuremath{\mathbb{R}}^d \setminus \{0\}} z \mathbf{1}_{|z| \leq 1} \,\mathrm{d} \mu(z) \cdot Du$ may not make sense and the identification of the full drift $b+c_\mu$ relies on a standard decomposition of closed subgroups of $\ensuremath{\mathbb{R}}^d$, see e.g. \cite{Mar03}. The idea is to establish $G_\mu$-periodicity of solutions of $\ensuremath{\mathcal{L}}[u]=0$ as in \cite{CD60}, and then use that $G_\mu=V_\mu \oplus \Lambda$ for the vector space $V_\mu$ previously defined and some discrete group $\Lambda$. This will roughly speaking remove the singularity $z=0 \in V_\mu$ in the computation of $c_\mu$ because $\int_{\ensuremath{\mathbb{R}}^d \setminus \{0\}}\mathbf{1}_{z \in V_\mu}z \mathbf{1}_{|z| \leq 1} \,\mathrm{d} \mu(z) \cdot Du=0$ for any $G_\mu$-periodic function. See Section \ref{sec:periodandliou} for details.
Our approach then combines PDEs and group arguments, extends the results of \cite{CD60} to Courr\`ege/L\'evy operators, yields necessary and sufficient conditions for the Liouville property, and provides short and simple proofs.
\subsubsection*{Outline of the paper} Our main results (Theorems \ref{thm:Liouville} and \ref{thm:PeriodGeneralOp}) were stated in Section \ref{sec:intro}. They are proved in Section \ref{sec:periodandliou} and examples are given in Section \ref{sec:examples}.
\subsubsection*{Notation and preliminaries}
The {\it support} of a measure
$\mu$
is defined as \begin{equation}\label{def-support} {\supp(\mu)} := \left\{z\in \ensuremath{\mathbb{R}}^d \setminus \{0\} \ : \ \mu(B_r(z))>0, \ \forall r>0\right\}, \end{equation}
where $B_r(z)$ is the ball of center $z$ and radius $r$.
To continue, we assume \eqref{as:sigmab}, \eqref{as:mus}, and $\ensuremath{\mathcal{L}}$ is given by \eqref{eq:GenOp1}--\eqref{def:localOp1}--\eqref{def:levy1}.
\begin{definition} For any $u \in L^\infty( \ensuremath{\mathbb{R}}^d )$, $\mathcal{L}[u] \in \mathcal{D}'(\ensuremath{\mathbb{R}}^d)$ is defined by \begin{equation*} \langle \ensuremath{\mathcal{L}}[u],\psi \rangle:=\int_{\ensuremath{\mathbb{R}}^d }u(x)\ensuremath{\mathcal{L}}^*[\psi](x)\,\mathrm{d} x \quad \forall \psi\in C_\textup{c}^\infty( \ensuremath{\mathbb{R}}^d ) \end{equation*} with $\ensuremath{\mathcal{L}}^*:=\ensuremath{\mathcal{L}}^{\sigma,-b}+\ensuremath{\mathcal{L}}^{\mu^*}$ and $\,\mathrm{d} \mu^*(z):= \,\mathrm{d} \mu(-z)$. \end{definition}
The above distribution is well-defined since $\ensuremath{\mathcal{L}}^*:W^{2,1}(\ensuremath{\mathbb{R}}^d) \to L^1(\ensuremath{\mathbb{R}}^d)$ is bounded.
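As a purely illustrative aside (the choices below are ad hoc), one can check the duality numerically in the simplest setting $d=1$, $\sigma=b=0$ and $\mu=\delta_1$, in which case $\ensuremath{\mathcal{L}}[u](x)=u(x+1)-u(x)-u'(x)$ and $\ensuremath{\mathcal{L}}^*[\psi](x)=\psi(x-1)-\psi(x)+\psi'(x)$; a rapidly decaying $\psi$ stands in for a compactly supported test function.
\begin{verbatim}
import numpy as np

# d = 1, sigma = b = 0, mu = delta_1:
#   L[u](x)    = u(x+1) - u(x) - u'(x)
#   L*[psi](x) = psi(x-1) - psi(x) + psi'(x)   (since d mu*(z) = d mu(-z))
u    = lambda x: np.exp(-x**2)                  # smooth bounded function
du   = lambda x: -2*x*np.exp(-x**2)
psi  = lambda x: np.exp(-(x-0.3)**2)            # rapidly decaying test function
dpsi = lambda x: -2*(x-0.3)*np.exp(-(x-0.3)**2)

x = np.linspace(-30.0, 30.0, 200001)
dx = x[1] - x[0]
lhs = np.sum((u(x+1) - u(x) - du(x)) * psi(x)) * dx       # <L[u], psi>
rhs = np.sum(u(x) * (psi(x-1) - psi(x) + dpsi(x))) * dx   # int u L*[psi]
print(abs(lhs - rhs))                                     # close to zero
\end{verbatim}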
\begin{definition} Let $S \subseteq \ensuremath{\mathbb{R}}^d$ and $u \in L^\infty(\ensuremath{\mathbb{R}}^d)$, then $u$ is a.e. \emph{$S$-periodic} if \[ \int_{\ensuremath{\mathbb{R}}^d} \big(u(x+s)-u(x)\big) \psi(x) \,\mathrm{d} x=0 \quad \forall s\in S, \forall \psi \in C^\infty_\textup{c}(\ensuremath{\mathbb{R}}^d). \] \end{definition}
The following technical result will be needed to regularize
distributional solutions of $\ensuremath{\mathcal{L}}[u]=0$ and a.e. periodic
functions. Let the mollifier $\rho_\varepsilon(x):=\frac1{\varepsilon^{d}}\rho(\frac x\varepsilon)$, $\varepsilon>0$, for
some $0\leq\rho\in C_{\textup{c}}^\infty(\ensuremath{\mathbb{R}}^d)$ with $\int_{\ensuremath{\mathbb{R}}^d}\rho =1$.
\begin{lemma}\label{lem:smoothReduction}
Let $u\in L^\infty(\ensuremath{\mathbb{R}}^d)$ and $u_\varepsilon:=\rho_\varepsilon*u$. Then:
\begin{enumerate}[{\rm (a)}] \item\label{a1} $\ensuremath{\mathcal{L}}[u]=0$ in $\mathcal{D}'(\ensuremath{\mathbb{R}}^d)$ if and only if
$\ensuremath{\mathcal{L}}[u_\varepsilon]=0$ in $\ensuremath{\mathbb{R}}^d$ for all $\varepsilon>0$.
\item\label{b1} $u$ is a.e. $S$-periodic if and only if $u_\varepsilon$ is $S$-periodic for all $\varepsilon>0$. \end{enumerate} \end{lemma}
\begin{proof}
The proof of \eqref{a1} is standard since $\ensuremath{\mathcal{L}}[u_\varepsilon]=\ensuremath{\mathcal{L}}[u] \ast \rho _\varepsilon$ in $\mathcal{D}'(\ensuremath{\mathbb{R}}^d)$. Moreover \eqref{b1} follows from \eqref{a1} since for any $s\in S$ we can take $\ensuremath{\mathcal{L}}[\phi](x)= \phi(x+s)-\phi(x)$ by choosing $\sigma,b=0$ and $\mu=\delta_s$ (the Dirac measure at $s$) in \eqref{eq:GenOp1}--\eqref{def:localOp1}--\eqref{def:levy1}. \end{proof}
\section{Proofs}\label{sec:periodandliou}
This section is devoted to the proofs of Theorems \ref{thm:Liouville} and \ref{thm:PeriodGeneralOp}. We first reformulate the classical Liouville theorem for local operators in terms of periodicity, then study the influence of the nonlocal part.
\subsection{ $W_{\sigma,b}$-periodicity for local operators}
Let us recall the Liouville theorem for operators of the form \eqref{def:localOp1}, see e.g. \cite{Nel61,Miy15}. In the result we use the set \begin{equation*} W_{\sigma,b}=\textup{span}_{\ensuremath{\mathbb{R}}}\{\sigma_1,\ldots,\sigma_P,b\}. \end{equation*} Note that $\textup{span}_{\ensuremath{\mathbb{R}}}\{\sigma_1,\ldots,\sigma_P\}$ equals the span of the eigenvectors of $\sigma\sigma^\texttt{T}$ corresponding to nonzero eigenvalues. \begin{theorem}[Liouville for $\ensuremath{\mathcal{L}}^{\sigma,b}$]\label{thm:LiouvilleLocal} Assume \eqref{as:sigmab} and $\ensuremath{\mathcal{L}}^{\sigma,b}$ is given by \eqref{def:localOp1}. Then the following statements are equivalent:
\begin{enumerate}[{\rm (a)}] \item If $u\in L^\infty(\ensuremath{\mathbb{R}}^d)$ solves $\ensuremath{\mathcal{L}}^{\sigma,b}[u]=0$ in
$\mathcal{D}'(\ensuremath{\mathbb{R}}^d)$, then $u$ is a.e. constant in~$\ensuremath{\mathbb{R}}^d$.
\item $W_{\sigma,b}=\ensuremath{\mathbb{R}}^d$. \end{enumerate} \end{theorem} Let us now reformulate and prove this classical result as a consequence of a periodicity result, a type of argument that will be crucial in the nonlocal case. We will consider $C^\infty_{\textup{b}}(\ensuremath{\mathbb{R}}^d)$ solutions, which will be enough later during the proofs of Theorems \ref{thm:Liouville} and \ref{thm:PeriodGeneralOp}, thanks to Lemma \ref{lem:smoothReduction}.
\begin{proposition}[Periodicity for $\ensuremath{\mathcal{L}}^{\sigma,b}$]\label{thm:PeriodLiouvilleLocal} Assume \eqref{as:sigmab}, $\ensuremath{\mathcal{L}}^{\sigma,b}$ is given by \eqref{def:localOp1}, and $u\in C^\infty_{\textup{b}}(\ensuremath{\mathbb{R}}^d)$. Then the following statements are equivalent: \begin{enumerate}[{\rm (a)}]
\item\label{a2} $\ensuremath{\mathcal{L}}^{\sigma,b}[u]=0$ in $\ensuremath{\mathbb{R}}^d$.
\item\label{b2} $u$ is $W_{\sigma,b}$-periodic. \end{enumerate} \end{proposition}
Note that part \eqref{b2} implies that $u$ is constant in the directions defined by the vectors $\sigma_1,\ldots,\sigma_P,b$. If their span then covers all of $\ensuremath{\mathbb{R}}^d$, Theorem \ref{thm:LiouvilleLocal} follows trivially. To prove Proposition \ref{thm:PeriodLiouvilleLocal}, we adapt the ideas of \cite{Miy15} to our setting.
\begin{proof}[Proof of Proposition \ref{thm:PeriodLiouvilleLocal}]\
\noindent\eqref{b2} $\Rightarrow$ \eqref{a2} \ We have $b \cdot Du(x)=\frac{\,\mathrm{d}}{\,\mathrm{d} t} u(x+t b)_{|_{t=0}}=0$ for any $x \in \ensuremath{\mathbb{R}}^d$ since the function $t \mapsto u(x+t b)$ is constant. Similarly $(\sigma_j \cdot D)^2 u(x):=\frac{\,\mathrm{d}^2}{\,\mathrm{d} t^2} u(x+t \sigma_j)_{|_{t=0}}=0$ for any $j=1,\dots,P$. Using then that $\textup{tr}(\sigma\sigma^\texttt{T} D^2u)= \sum_{j=1}^P (\sigma_j \cdot D)^2 u$, we conclude that $\ensuremath{\mathcal{L}}^{\sigma,b}[u]=0$ in $\ensuremath{\mathbb{R}}^d$.
\noindent\eqref{a2} $\Rightarrow$ \eqref{b2} \ Let $v(x,y,t):=u(x+\sigma y-bt)$ for $x \in \ensuremath{\mathbb{R}}^d$, $y\in\ensuremath{\mathbb{R}}^P$, and $t \in \ensuremath{\mathbb{R}}$. Direct computations show that $$ \Delta_y v(x,y,t)=\sum_{j=1}^P(\sigma_j\cdot D)^2u(x+\sigma y-bt)=\textup{tr}\big[\sigma \sigma^{\texttt{T}} D^2u(x+\sigma y-bt)\big] $$
and $ \partial_tv(x,y,t)=-b \cdot Du(x+\sigma y-bt)$. Hence for all $(x,y,t)\in \ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^P \times \ensuremath{\mathbb{R}}$, $$ \Delta_y v(x,y,t)-\partial_tv(x,y,t)=\mathcal{L}^{\sigma,b}[u](x+\sigma y-bt)=0. $$
Since $v(x,\cdot,\cdot)$ is bounded, we conclude by uniqueness of the heat equation that for any $s<t$, \begin{equation}\label{eq:convFormulaHE} v(x,y,t)=\int_{\ensuremath{\mathbb{R}}^P}v(x,z,s)K_P(y-z,t-s)\,\mathrm{d} z, \end{equation} where $K_P$ is the standard heat kernel in $\ensuremath{\mathbb{R}}^P$. But then $$
\|\Delta_yv(x,\cdot,t)\|_\infty\leq \|v(x,\cdot,s)\|_\infty\|\Delta_y K_P(\cdot,t-s)\|_{L^1(\ensuremath{\mathbb{R}}^P)}, $$
and since $\|\Delta_y K_P(\cdot,t-s)\|_{L^1}\to0$ as $s\to-\infty$,
we deduce that $\Delta_yv=0$ for all $x,y,t$.
By the classical Liouville theorem (see e.g. \cite{Nel61}), $v$ is constant in $y$. It is also constant in $t$ by \eqref{eq:convFormulaHE} since $\int_{\ensuremath{\mathbb{R}}^P}K_P(z,t-s)\,\mathrm{d} z=1$. We conclude that $u$ is $W_{\sigma,b}$-periodic since $$ u(x)=v(x,0,0)=v(x,y,t)=u(x+\sigma y-bt) $$
and $W_{\sigma,b}=\{\sigma y-bt:y \in \ensuremath{\mathbb{R}}^P, t \in \ensuremath{\mathbb{R}}\}$. \end{proof}
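The decay of $\|\Delta_y K_P(\cdot,t-s)\|_{L^1(\ensuremath{\mathbb{R}}^P)}$ used above follows from the scaling of the heat kernel: this norm equals $C_P/(t-s)$ for a constant $C_P$ depending only on $P$. The short numerical check below is an illustrative aside only (for $P=1$) and confirms this rate.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# L^1 norm of the Laplacian of the 1-d heat kernel
# K(z,t) = exp(-z^2/(4t)) / sqrt(4 pi t); by scaling it equals C/t.
def d2K(z, t):
    K = np.exp(-z**2 / (4.0*t)) / np.sqrt(4.0*np.pi*t)
    return (z**2 / (4.0*t**2) - 1.0/(2.0*t)) * K

for t in [1.0, 10.0, 100.0, 1000.0]:
    norm, _ = quad(lambda z: abs(d2K(z, t)),
                   -200.0*np.sqrt(t), 200.0*np.sqrt(t), limit=200)
    print(t, norm, norm*t)   # norm*t is (approximately) the same constant for all t
\end{verbatim}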
\subsection{ $G_\mu$-periodicity for general operators}
Proposition \ref{thm:PeriodLiouvilleLocal} might seem artificial in the local case, but not so in the nonlocal case. In fact we will prove our general Liouville result as a consequence of a periodicity result. A key step in this direction is the lemma below.
\begin{lemma}\label{lem:L0ImpusuppmuPer} Assume \eqref{as:sigmab}, \eqref{as:mus}, $\ensuremath{\mathcal{L}}$ is given by \eqref{eq:GenOp1}--\eqref{def:localOp1}--\eqref{def:levy1}, and $u\in C^\infty_{\textup{b}}(\ensuremath{\mathbb{R}}^d)$. If $\ensuremath{\mathcal{L}}[u]=0$ in $\ensuremath{\mathbb{R}}^d$, then $u$ is $\supp(\mu)$-periodic. \end{lemma}
To prove this result, we use propagation of maximum (see e.g. \cite{CD60,Cov08,Cio12}). \begin{lemma}\label{max-prop} If $u\in C^\infty_{\textup{b}}(\ensuremath{\mathbb{R}}^d)$ achieves its global maximum at some $\bar x$ such that $\ensuremath{\mathcal{L}}[u](\bar x)\geq 0$, then $u(\bar x+z)=u(\bar x)$ for any $z \in \textup{supp} (\mu)$.
\end{lemma} \begin{proof}
At $\bar x$, $u=\sup u$, $Du=0$ and $D^2u\leq 0$, and hence
$\ensuremath{\mathcal{L}}^{\sigma,b}[u](\bar x)\leq 0$ and
$$0 \leq\ensuremath{\mathcal{L}}[u](\bar x) \leq \ensuremath{\mathcal{L}}^{\mu}[u](\bar x) = \int_{\ensuremath{\mathbb{R}}^d \setminus \{0\}} \big(u(\bar
x+z) - \sup_{\ensuremath{\mathbb{R}}^d} u\big)\, d\mu(z).$$
Using that $\int_{\ensuremath{\mathbb{R}}^d\setminus\{0\}} f \,\mathrm{d} \mu \geq 0$ and $f \leq 0$ together imply $f=0$ $\mu$-a.e., we deduce that $u(\bar x+z)-\sup_{\ensuremath{\mathbb{R}}^d} u=0$ for $\mu$-a.e. $z$. Since $u$ is continuous, this equality holds for all $z \in \textup{supp} (\mu)$.\footnote{ If not, we would find some $z_0$ and $r_0>0$ such that $f(z):=u(\bar x+z)-\sup u<0$ in $B_{r_0}(z_0)$ whereas $\mu(B_{r_0}(z_0))>0$ by \eqref{def-support}.} \end{proof}
To exploit Lemma \ref{max-prop}, we need to have a maximum point. For this sake, we use a localization technique \`a la \cite{CD60,BeHaRo07,Ros09,BrCoHaVa19}.
\begin{proof}[Proof of Lemma \ref{lem:L0ImpusuppmuPer}]
Fix an arbitrary $ \bar z \in \supp(\mu)$, define \[ v(x):=u(x+ \bar z )-u(x), \] and let us show that $v(x)=0$ for all $x \in \ensuremath{\mathbb{R}}^d$. We first
show that $v \leq 0$. Take $M$ and a sequence $\{x_n\}_{n}$ such that \[ v(x_n)\stackrel{n\to \infty}{\longrightarrow} M:=\sup v, \] and define $$u_n(x):=u(x+x_n)\quad\text{and}\quad v_n(x):=v(x+x_n).$$ Note that $\ensuremath{\mathcal{L}}[v_n] = 0$ in $\ensuremath{\mathbb{R}}^d$. Now since $v \in C_\textup{b}^\infty(\ensuremath{\mathbb{R}}^d)$, the Arzel\`a-Ascoli theorem implies that there exists $v_\infty$ such that $v_n \to v_\infty$ locally uniformly (up to a subsequence). Taking another subsequence if necessary, we can assume that the derivatives up to second order converge and pass to the limit in the equation $\ensuremath{\mathcal{L}}[v_n] = 0$ to deduce that $\ensuremath{\mathcal{L}}[v_\infty] = 0$ in $\ensuremath{\mathbb{R}}^d$. Moreover, $v_\infty$ attains its maximum at $x=0$ since $v_\infty\leq M$ and \[ v_\infty(0)=\lim_{n\to\infty}v_n(0)=\lim_{n\to \infty}v(x_n)=M. \] A similar argument shows that there is a $u_\infty$ such that $u_n \to u_\infty$ as $n \to \infty$ locally uniformly. Taking further subsequences if necessary, we can assume that $u_n$ and $v_n$ converge along the same sequence. Then by construction $$ v_\infty(x)=u_\infty(x+ \bar z )-u_\infty(x). $$ By Lemma \ref{max-prop} and an iteration, we find that $M=v_\infty(m \bar z )=u_\infty((m+1) \bar z )-u_\infty(m \bar z )$ for any $m \in \ensuremath{\mathbb{Z}}$. Then by another iteration, \[ u_\infty((m+1) \bar z )=u_\infty(m \bar z )+M=\ldots = u_\infty(0)+(m+1)M. \] But since $u_\infty$ is bounded, the only choice is $M=0$ and thus $v\leq M=0$. A similar argument shows that $v \geq 0$, and hence, $0=v(x)=u(x+ \bar z )-u(x)$ for any $ \bar z \in \supp(\mu)$ and all $x\in\ensuremath{\mathbb{R}}^d$. \end{proof}
We can give a more general result than Lemma \ref{lem:L0ImpusuppmuPer} if we consider groups.
\begin{definition} \begin{enumerate}[{\rm (a)}] \item A set $G \subseteq \ensuremath{\mathbb{R}}^d$ is an {\it additive subgroup} if $G \neq \emptyset$ and $$ \forall g_1,g_2 \in G, \quad g_1+g_2\in G \quad \text{and}\quad-g_1 \in G. $$ \item The \textit{subgroup generated} by a set $S \subseteq \ensuremath{\mathbb{R}}^d$, denoted $G(S)$, is the smallest additive group containing $S$. \end{enumerate} \end{definition}
Now we return to a key set for our analysis: \begin{equation}\label{def-Gmu} G_\mu=\overline{G(\supp (\mu))}. \end{equation} This set appears naturally because of the elementary result below.
\begin{lemma}\label{suppmuPerGsuppmuPer} Let $S \subseteq \ensuremath{\mathbb{R}}^d$. Then $w\in C(\ensuremath{\mathbb{R}}^d)$ is $S$-periodic if and only if $w$ is $\overline{G(S)}$-periodic. \end{lemma}
\begin{proof}
It suffices to show that $G:=\{g \in \ensuremath{\mathbb{R}}^d:w(\cdot+g)=w(\cdot)\}$ is a closed subgroup of $\ensuremath{\mathbb{R}}^d$. It is obvious that it is closed by continuity of $w$. Moreover, for any $g_1,g_2 \in \ensuremath{\mathbb{R}}^d$ and $x \in \ensuremath{\mathbb{R}}^d$, \begin{equation*} w(x+g_1-g_2)=w(x-g_2)=w(x-g_2+g_2)=w(x).\qedhere \end{equation*} \end{proof}
By Lemmas \ref{lem:L0ImpusuppmuPer} and \ref{suppmuPerGsuppmuPer}, we have proved that:
\begin{proposition}[$G_\mu$-periodicity]\label{lem:L0ImpusuppmuPer-bis} Assume \eqref{as:sigmab}, \eqref{as:mus}, $\ensuremath{\mathcal{L}}$ is given by \eqref{eq:GenOp1}--\eqref{def:localOp1}--\eqref{def:levy1}, and $G_\mu$ by \eqref{def-Gmu}. Then any solution $u\in C^\infty_{\textup{b}}(\ensuremath{\mathbb{R}}^d)$ of $\ensuremath{\mathcal{L}}[u]=0$ in $\ensuremath{\mathbb{R}}^d$ is $G_\mu$-periodic. \end{proposition}
\subsection{The role of $c_\mu$}
Propositions \ref{thm:PeriodLiouvilleLocal} and
\ref{lem:L0ImpusuppmuPer-bis} combined may seem to imply that $\ensuremath{\mathcal{L}}[u]=0$ gives
$(G_\mu+W_{\sigma,b})$-periodicity of $u$, but this is not true in
general. The correct periodicity result depends on a new
drift $b+c_\mu$, where $c_\mu$ is defined in \eqref{cmu}
below. To give this definition, we need to decompose
$G_\mu$ into
a direct sum of a vector subspace and a relative lattice.
\begin{definition} \begin{enumerate}[{\rm (a)}] \item If two subgroups $G,\tilde{G} \subseteq \ensuremath{\mathbb{R}}^d$ satisfy $G \cap \tilde{G}=\{0\}$, their sum is said to be {\em direct} and we write $G+\tilde{G}=G \oplus \tilde{G}$.
\item A \textit{full lattice} is a subgroup $\Lambda \subseteq \ensuremath{\mathbb{R}}^d$ of the form $ \Lambda= \oplus_{n=1}^d a_n \ensuremath{\mathbb{Z}} $ for some basis $\{a_1,\dots,a_d\}$ of $\ensuremath{\mathbb{R}}^d$. A \textit{relative lattice} is a lattice of a vector subspace of $\ensuremath{\mathbb{R}}^d$. \end{enumerate} \end{definition}
\begin{theorem}[Theorem 1.1.2 in \cite{Mar03}]\label{decom:opt}
If $G$ is a closed subgroup of $\ensuremath{\mathbb{R}}^d$, then
$ G=V \oplus \Lambda $ for some vector space $V \subseteq \ensuremath{\mathbb{R}}^d$ and some relative lattice $\Lambda \subseteq \ensuremath{\mathbb{R}}^d$ such that $V \cap \textup{span}_\ensuremath{\mathbb{R}} \Lambda=\{0\}$. \end{theorem}
In this decomposition the space $V$ is unique and can be represented by \eqref{def-V} below.
\begin{lemma}\label{dis} Let $V$ be a vector subspace and $\Lambda$ a relative lattice of $\ensuremath{\mathbb{R}}^d$ such that $V \cap\textup{span}_\ensuremath{\mathbb{R}} \Lambda =\{0\}$. Then for any $\lambda \in \Lambda$, there is an open ball $B$ of $\ensuremath{\mathbb{R}}^d$ containing $\lambda$ such that $B \cap (V \oplus \Lambda)=B \cap (V+\lambda)$. \end{lemma}
\begin{proof} If the lemma does not hold, there exists $v_n+\lambda_n \to \lambda$ as $n \to \infty$ where $v_n \in V$, $\lambda_n \in \Lambda$, $\lambda_n \neq \lambda$. Note that $v_n, \lambda_n, \lambda\in V \oplus \textup{span}_\ensuremath{\mathbb{R}} \Lambda$, and that $$ \lambda=\!\!\underset{\ \, \in V}0+\!\!\underset{\ \, \in \Lambda}{\lambda}. $$ By continuity of the projection from $V \oplus \textup{span}_\ensuremath{\mathbb{R}}\Lambda$ onto $\textup{span}_\ensuremath{\mathbb{R}} \Lambda$, $\lambda_n \to \lambda$ and this contradicts the fact that each point of $\Lambda$ is isolated. \end{proof}
\begin{lemma}\label{pro:def-V} Let $G$, $V$ and $\Lambda$ be as in Theorem \ref{decom:opt}. Then \begin{equation} \label{def-V} V=V_G:=\left\{g \in G \ :\ t g \in G \mbox{ } \forall t \in \ensuremath{\mathbb{R}} \right\}. \end{equation} \end{lemma}
\begin{proof} It is clear that $V \subseteq V_G$. Now given $g \in V_G$, there is $(v,\lambda) \in V \times \Lambda$ such that $g=v+\lambda$. For any $t \in \ensuremath{\mathbb{R}}$, $t g=t v+t \lambda \in G$ and thus $t \lambda \in G$ since $t v \in V \subseteq G$. Let $B$ be an open ball containing $\lambda$ such that $B \cap G=B \cap (V+\lambda)$. Choosing $ t$ such that $t \neq 1$ and $t \lambda \in B$, we infer that $t \lambda={ \tilde{v} }+\lambda$ for some ${
\tilde{v} } \in V$. Hence $\lambda=(t-1)^{-1} { \tilde{v} } \in V$ and this implies that $\lambda=0$. In other words $V_G \subseteq V$, and the proof is complete. \end{proof}
\begin{remark}\label{rem:per} Any $G$-periodic function $w \in C^1(\ensuremath{\mathbb{R}}^d)$ is such that $z \cdot Dw (x)=\lim_{t \to 0} \frac{w(x+t { z })-w(x)}{t}=0$ for any $x \in \ensuremath{\mathbb{R}}^d$ and $z \in V_G$.
\end{remark}
By Theorem \ref{decom:opt} and Lemma \ref{pro:def-V}, we decompose the set $G_\mu$ in \eqref{def-Gmu} into a lattice and the subspace $V_\mu:=V_{G_\mu}$. The new drift can then be defined as \begin{equation}\label{cmu}
c_\mu=-\int_{\{|z| \leq 1\} \setminus V_\mu} z\, \,\mathrm{d} \mu(z). \end{equation}
\begin{proposition}\label{def-prop}
Assume \eqref{as:mus} and $c_\mu$ is given by
\eqref{cmu}. Then $c_\mu \in \ensuremath{\mathbb{R}}^d$ is well-defined
and uniquely determined by $\mu$. \end{proposition}
\begin{proof} Using that $\textup{supp}(\mu) \subset G_\mu=V_\mu\oplus \Lambda$, \begin{equation*} \begin{split}
\int_{\{|z| \leq 1\} \setminus V_\mu} |z| \,\mathrm{d} \mu(z) & = \int_{G_\mu \setminus (V_\mu+0)} |z| \mathbf{1}_{|z| \leq 1} \,\mathrm{d} \mu(z)\\
& \leq \int_{G_\mu \setminus B} |z| \mathbf{1}_{|z| \leq 1} \,\mathrm{d} \mu(z) \end{split} \end{equation*} for some open ball $B$ containing $0$ given by Lemma \ref{dis}. This integral is finite by \eqref{as:mus} which completes the proof. \end{proof}
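For measures with finite support the quantities entering Theorem \ref{thm:Liouville} can be computed directly. The sketch below is only an illustration: it evaluates $c_\mu$ for the $1$--$d$ measure $\mu=\delta_{-1}+2\delta_1$ appearing in Example \ref{ex1} below, for which $G_\mu=\ensuremath{\mathbb{Z}}$ and $V_\mu=\{0\}$.
\begin{verbatim}
import numpy as np

# c_mu = - int_{ {|z| <= 1} \ V_mu } z dmu(z) for mu = delta_{-1} + 2 delta_1 on R.
# Here supp(mu) = {-1, 1} generates the closed group Z, so V_mu = {0} and no atom
# is removed from the domain of integration.
atoms   = np.array([-1.0, 1.0])   # locations of the Dirac masses
weights = np.array([ 1.0, 2.0])   # their weights

cutoff = np.abs(atoms) <= 1.0     # indicator 1_{|z| <= 1}
c_mu = -np.sum(atoms[cutoff] * weights[cutoff])
print(c_mu)                       # -1.0, in particular nonzero
\end{verbatim}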
\begin{proposition}\label{lem:drift} Assume \eqref{as:mus} and $\ensuremath{\mathcal{L}}^\mu$, $G_\mu$, $c_\mu$ are given by \eqref{def:levy1}, \eqref{def-Gmu}, \eqref{cmu}. If $w\in C^\infty_{\textup{b}}(\ensuremath{\mathbb{R}}^d)$ is $G_\mu$-periodic, then
$$\ensuremath{\mathcal{L}}^\mu [w] = c_\mu\cdot Dw \quad \text{in}\quad \ensuremath{\mathbb{R}}^d.$$
\end{proposition}
\begin{proof}
Using that $\int_{\ensuremath{\mathbb{R}}^d\setminus\{0\}} f \,\mathrm{d} \mu=\int_{\textup{supp} (\mu)} f \,\mathrm{d} \mu$, we have \begin{align*}
\ensuremath{\mathcal{L}}^\mu[w](x)
= - \int_{\ensuremath{\mathbb{R}}^d \setminus \{0\}} z\cdot Dw(x)
\mathbf{1}_{|z| \leq 1}\,\mathrm{d}\mu(z) \end{align*} because $w(x+z)-w(x)=0$ for all $x \in \ensuremath{\mathbb{R}}^d$ and $z \in \textup{supp}(\mu) \subset G_\mu$. The result is thus immediate from Remark \ref{rem:per} and Proposition \ref{def-prop}. \end{proof}
\subsection{ Proofs of Theorems \ref{thm:Liouville} and \ref{thm:PeriodGeneralOp}}
We are now in a position to prove our main results. We start with Theorem \ref{thm:PeriodGeneralOp} which characterizes all bounded solutions of $\ensuremath{\mathcal{L}}[u]=0$ in $\ensuremath{\mathbb{R}}^d$ as periodic functions and specifies the set of admissible periods.
\begin{proof}[Proof of Theorem \ref{thm:PeriodGeneralOp}] By Lemma \ref{lem:smoothReduction} we can assume that $u \in C_\textup{b}^\infty(\ensuremath{\mathbb{R}}^d)$.
\noindent\eqref{a3} $\Rightarrow$ \eqref{b3} \ Since $\ensuremath{\mathcal{L}}[u] =0$ in $\ensuremath{\mathbb{R}}^d$, $u$ is $G_\mu$-periodic by Proposition \ref{lem:L0ImpusuppmuPer-bis}. Proposition \ref{lem:drift} then implies that $$ 0=\ensuremath{\mathcal{L}}[u] = \ensuremath{\mathcal{L}}^{\sigma,b}[u]+c_\mu\cdot Du= \ensuremath{\mathcal{L}}^{\sigma,b+c_\mu}[u]\quad \text{in}\quad \ensuremath{\mathbb{R}}^d, $$ which by Proposition \ref{thm:PeriodLiouvilleLocal} shows that $u$ is also $W_{\sigma,b+c_\mu}$-periodic. It is now easy to see that $u$ is $\overline{G_\mu+W_{\sigma,b+c_\mu}}$-periodic.
\noindent\eqref{b3} $\Rightarrow$ \eqref{a3} \ Since $u$ is both $G_\mu$ and $W_{\sigma,b+c_\mu}$-periodic, by first applying Proposition \ref{lem:drift} and then Proposition \ref{thm:PeriodLiouvilleLocal}, $\ensuremath{\mathcal{L}}[ u] =\ensuremath{\mathcal{L}}^{\sigma,b+c_\mu}[u]=0$ in $\ensuremath{\mathbb{R}}^d$. \end{proof}
We now prove Theorem \ref{thm:Liouville} on necessary and sufficient conditions for $\ensuremath{\mathcal{L}}$ to satisfy the Liouville property. We will use the following consequence of Theorem \ref{decom:opt}.
\begin{corollary}\label{pro:group-multid} A subgroup $G$ of $\ensuremath{\mathbb{R}}^d$ is dense if and only if there are no $c \in \ensuremath{\mathbb{R}}^d$ and codimension 1 subspace $H \subset \ensuremath{\mathbb{R}}^d$ such that $G \subseteq H+c \ensuremath{\mathbb{Z}}$. \end{corollary}
\begin{proof} Let us argue by contraposition for both the ``only if'' and ``if'' parts.
\noindent ($\Rightarrow$) \ Assume $G \subseteq H+c \ensuremath{\mathbb{Z}}$ for some codimension 1 space $H$ and $c \in \ensuremath{\mathbb{R}}^d$. If $c \in H$, then $\overline{G} \subseteq \overline{H}=H \neq \ensuremath{\mathbb{R}}^d$. If $c \notin H$, then $ \ensuremath{\mathbb{R}}^d=H \oplus \textup{span}_{\ensuremath{\mathbb{R}}} \{c\} $, and each $x \in \ensuremath{\mathbb{R}}^d$ can be written as $x=x_H+\lambda_x c$ for a unique $(x_H,\lambda_x) \in H \times \ensuremath{\mathbb{R}}$. Hence $H+c \ensuremath{\mathbb{Z}}=\{x:\lambda_x \in \ensuremath{\mathbb{Z}}\}$ is closed by continuity of the projection $x \mapsto \lambda_x$, and $\overline{G} \subseteq H+c \ensuremath{\mathbb{Z}} \neq \ensuremath{\mathbb{R}}^d$.
\noindent ($\Leftarrow$) \ Assume $\overline{G} \neq \ensuremath{\mathbb{R}}^d$. By Theorem \ref{decom:opt}, $\overline{G}=V \oplus \Lambda$ for a subspace $V$ and lattice $\Lambda$ with $V \cap \textup{span}_\ensuremath{\mathbb{R}} \Lambda=\{0\}$. It follows that the dimensions $n$ of $V$ and $m$ of the vector space $\textup{span}_\ensuremath{\mathbb{R}} \Lambda$ satisfy $n<d$ and $n+m \leq d$. If $m=0$, $G \subseteq V \subseteq H$ for some codimension 1 space $H$. If $m \geq 1$, then $\Lambda=\oplus_{i=1}^m a_i \ensuremath{\mathbb{Z}}$ for some basis $\{a_1,\dots,a_m\}$ of $\textup{span}_\ensuremath{\mathbb{R}} \Lambda$. Let $W:=V \oplus \textup{span}_\ensuremath{\mathbb{R}} \{a_i:i\neq m\}$ for $m>1$ and $W:=V$ for $m=1$. Then $W$ is of dimension $n+m-1 \leq d-1$ and contained in some codimension 1 space $H$. Hence $G \subseteq H+c \ensuremath{\mathbb{Z}}$ with $c=a_m$.
\begin{proof}[Proof of Theorem \ref{thm:Liouville}]\
\noindent\eqref{label:thmLiou:c} $\Rightarrow$ \eqref{a4} \ If $u\in L^\infty(\ensuremath{\mathbb{R}}^d)$ satisfies $\ensuremath{\mathcal{L}}[u]=0$ in $\mathcal{D}'(\ensuremath{\mathbb{R}}^d)$, then $u$ is a.e. $\overline{G_\mu+W_{\sigma,b+c_\mu}}$-periodic by Theorem \ref{thm:PeriodGeneralOp}. Hence $u$ is a.e.
constant by \eqref{label:thmLiou:c}.
\noindent\eqref{a4} $\Rightarrow$ \eqref{label:thmLiou:c} \ Assume \eqref{label:thmLiou:c} does not hold and let us construct a nontrivial $\overline{G_\mu+W_{\sigma,b+c_\mu}}$-periodic $L^\infty$-function. By Corollary \ref{pro:group-multid}, \begin{equation}\label{cep2} \overline{G_\mu+W_{\sigma,b+c_\mu}} \subseteq H+ c \ensuremath{\mathbb{Z}}, \end{equation}
for some $c \in \ensuremath{\mathbb{R}}^d$ and codimension 1 subspace $H \subset \ensuremath{\mathbb{R}}^d$. We can assume $c \notin H$ since otherwise \eqref{cep2} would hold if we redefine $c$ to be any element in $H^c$. As before, each $x \in \ensuremath{\mathbb{R}}^d$ can be written as $x=x_H+\lambda_x c$ for a unique pair $(x_H,\lambda_x) \in H \times \ensuremath{\mathbb{R}}$. Now let $ U(x):=\cos (2\pi \lambda_x) $ and note that for any $h \in H$ and ${ n } \in \ensuremath{\mathbb{Z}}$, $$ x+h+{ n }c=\underbrace{(x_H+h)}_{\in H}+\underbrace{(\lambda_x+{ n })}_{\in \ensuremath{\mathbb{R}}} c, $$ so that $$ U(x+h+{ n }c) =\cos (2\pi (\lambda_x+{ n }))=\cos (2\pi \lambda_x)= U(x). $$ This proves that $U$ is $(H+c \ensuremath{\mathbb{Z}})$-periodic and thus also $\overline{G_\mu+W_{\sigma,b+c_\mu}}$-periodic. By Theorem \ref{thm:PeriodGeneralOp}, $\ensuremath{\mathcal{L}}[U]=0$, and we have a nonconstant counterexample to \eqref{a4}. Note indeed that $U \in L^\infty(\ensuremath{\mathbb{R}}^d)$ since it is everywhere bounded by construction and $C^\infty$ (thus measurable) because the projection $x \mapsto \lambda_x$ is linear. We therefore conclude that \eqref{a4} implies \eqref{label:thmLiou:c} by contraposition. \end{proof}
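The counterexample built in this proof is easy to make concrete. As a purely illustrative aside, take $d=2$, $\sigma=b=0$ and $\mu=\delta_{e_1}+\delta_{-e_1}$, where $(e_1,e_2)$ is the canonical basis of $\ensuremath{\mathbb{R}}^2$; then $G_\mu=\ensuremath{\mathbb{Z}} e_1$, $c_\mu=0$, one may choose $H=\textup{span}_{\ensuremath{\mathbb{R}}}\{e_2\}$ and $c=e_1$, and $U(x)=\cos(2\pi x_1)$ is a bounded nonconstant solution of $\ensuremath{\mathcal{L}}[U]=0$:
\begin{verbatim}
import numpy as np

# mu = delta_{e_1} + delta_{-e_1} in R^2: the two drift corrections cancel, so
#   L[U](x) = U(x + e_1) + U(x - e_1) - 2 U(x).
# U(x) = cos(2 pi x_1) is (H + e_1 Z)-periodic with H = span{e_2}, hence L[U] = 0.
U = lambda x1, x2: np.cos(2.0*np.pi*x1)

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-5.0, 5.0, size=(2, 1000))
residual = U(x1 + 1.0, x2) + U(x1 - 1.0, x2) - 2.0*U(x1, x2)
print(np.max(np.abs(residual)))   # zero up to rounding: a nonconstant bounded solution
\end{verbatim}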
\section{Examples}\label{sec:examples}
Let us give examples for which the Liouville property holds or fails. We will use Theorem \ref{thm:Liouville} or the following reformulation:
\begin{corollary}\label{label:thmLiou:d} Under the assumptions of Theorem \ref{thm:Liouville}, $\ensuremath{\mathcal{L}}$ does \textup{not} satisfy the Liouville property if and only if \begin{equation}\label{failb} \supp(\mu)+W_{\sigma,b+c_\mu} \subseteq H + c \ensuremath{\mathbb{Z}}, \end{equation} for some codimension 1 subspace $H$ and vector $c$ of $\ensuremath{\mathbb{R}}^d$.
\end{corollary}
\begin{proof} Just note that $\overline{G(\supp(\mu)+W_{\sigma,b+c_\mu})}=\overline{G_\mu+W_{\sigma,b+c_\mu}}$ and apply Theorem \ref{thm:Liouville} and Corollary \ref{pro:group-multid}. \end{proof}
\begin{example}\label{ex1} \begin{enumerate}[{\rm (a)}] \item For nonlocal operators $\ensuremath{\mathcal{L}}=\ensuremath{\mathcal{L}}^\mu$ with $\mu$ symmetric, \eqref{failb} reduces to \begin{equation}\label{fail} \supp(\mu) \subseteq H+c \mathbb{Z}, \end{equation}
for some $H$ of codimension $1$ and $c$. This fails for fractional Laplacians, relativistic Schr\"odinger operators, convolution operators, or most nonlocal operators appearing in finance whose L\'evy measures contain an open ball in their supports. In particular all these operators have the Liouville property.
\item Even if $\textup{supp} (\mu)$ has an empty interior,
\eqref{fail} may fail and Liouville still hold. This is e.g. the case for the mean value operator \begin{equation}\label{mean-value}
\mathcal{M}[u](x)=\int_{|z|=1} \big(u(x+z)-u(x)\big) \,\mathrm{d} S(z), \end{equation} where $S$ denotes the $(d-1)$-dimensional surface measure.
\item We may have in fact the Liouville property with just a finite number of points in the support of $\mu$, see Example \ref{ex:finitenumberofpoints}.
\item With our definition of the nonlocal operator, if $\ensuremath{\mathcal{L}}=\ensuremath{\mathcal{L}}^\mu$ with general $\mu$, \eqref{failb} reduces to \begin{equation} \label{failc} \supp(\mu) \subseteq H+c \mathbb{Z} \quad \mbox{and} \quad c_\mu \in H, \end{equation} for some $H$ of codimension 1 and $c\in \ensuremath{\mathbb{R}}^d$. We can have \eqref{fail} without \eqref{failc} as e.g. for the 1--$d$ measure $\mu=\delta_{-1}+2\delta_{1}$. Indeed $\supp (\mu) \subset \ensuremath{\mathbb{Z}}$ but $c_\mu=-1 \neq 0$. The associated operator $\ensuremath{\mathcal{L}}^\mu$ then has the Liouville property even though it would not for any symmetric measure with the same support.
\item A general operator $\ensuremath{\mathcal{L}}=\ensuremath{\mathcal{L}}^{\sigma,b}+\ensuremath{\mathcal{L}}^\mu$ may satisfy the Liouville property even though each part $\ensuremath{\mathcal{L}}^{\sigma,b}$ and $\ensuremath{\mathcal{L}}^{\mu}$ does not. A simple 3--$d$ example is given by $\ensuremath{\mathcal{L}}=\partial_{x_1}^2+\partial_{x_2}+(\partial_{x_3}^2)^{\alpha}$, $\alpha \in (0,1)$.
Indeed $\sigma=(1,0,0)^\texttt{T}$, $b=(0,1,0)$, $\,\mathrm{d} \mu(z)=\frac{c(\alpha) \,\mathrm{d} z_3}{|z_3|^{1+2 \alpha}}$ with $c(\alpha)>0$, thus $c_\mu=0$, $W_{\sigma,b}=\ensuremath{\mathbb{R}} \times \ensuremath{\mathbb{R}} \times \{0\}$, and $G_\mu=\{0\} \times \{0\}\times \ensuremath{\mathbb{R}}$, so the result follows from Theorem \ref{thm:Liouville}. \item For other kinds of interactions between the local and nonlocal parts, see Example \ref{ex:kro}. \end{enumerate} \end{example}
\begin{remark} The Liouville property for the nonlocal operator \eqref{mean-value} implies the classical Liouville result for the Laplacian, since $\mathcal{M}[u]=0$ for harmonic functions~$u$. \end{remark}
In the 1--$d$ case, the general form of the operators which do not satisfy the Liouville property is very explicit.
\begin{corollary} Assume $d=1$ and $\ensuremath{\mathcal{L}}:C^\infty_\textup{c} (\ensuremath{\mathbb{R}}) \to C(\ensuremath{\mathbb{R}})$ is a linear translation invariant operator satisfying the maximum principle \eqref{mp}. Then the following statements are equivalent: \begin{enumerate}[{\rm (a)}]
\item\label{1da} There are nonconstant $u\in L^\infty(\ensuremath{\mathbb{R}})$ satisfying $\ensuremath{\mathcal{L}}[u]=0$ in
$\mathcal{D}'(\ensuremath{\mathbb{R}})$.
\item\label{label:thmLiou:cb} There are $g> 0$ and a nonnegative $\{\omega_n\}_{n} \in l^1(\ensuremath{\mathbb{Z}})$ such that \begin{equation*} \ensuremath{\mathcal{L}}[u](x)=\sum_{n \in \mathbb{Z}}(u(x+n g)-u(x)) \omega_n . \end{equation*} \end{enumerate} \end{corollary}
\begin{proof} If \eqref{label:thmLiou:cb} holds, any bounded $g$-periodic
function $u$ satisfies $\ensuremath{\mathcal{L}}[u]=0$ in $\ensuremath{\mathbb{R}}$. Conversely, if \eqref{1da}
holds then $\ensuremath{\mathcal{L}}$ is of the form
\eqref{eq:GenOp1}--\eqref{def:localOp1}--\eqref{def:levy1} by
\cite{Cou64}. By Corollary \ref{label:thmLiou:d}, there is $g \geq0$
such that $\textup{supp} (\mu)+W_{\sigma,b+c_\mu} \subseteq g
\mathbb{Z}$. In particular $\sigma=b+c_\mu=0$ and $\mu$ is a sum of
Dirac measures: $\mu=\sum_{n \in \mathbb{Z}} \omega_n \delta_{n g}$.\footnote{If $g=0$ then $\mu=0$ and the rest of the proof is trivial.} By \eqref{as:mus}, each $\omega_n \geq 0$ and $\sum_{n \in \mathbb{Z}} \omega_n<\infty$. Injecting these facts into \eqref{eq:GenOp1}--\eqref{def:localOp1}--\eqref{def:levy1}, we can easily rewrite $\ensuremath{\mathcal{L}}$ as in \eqref{label:thmLiou:cb}. \end{proof}
\begin{example}\label{ex2} \begin{enumerate}[{\rm (a)}] \item In 1--$d$, the Liouville property holds for any nontrivial operator with nondiscrete L\'evy measure. \item For discrete L\'evy measures, we need $\sigma \neq 0$ or $b \neq
-c_\mu$ or $G_\mu=\ensuremath{\mathbb{R}}$ for Liouville to hold. The condition
$G_\mu=\ensuremath{\mathbb{R}}$ is typically satisfied if $\overline{\textup{supp} (\mu)}^{\ensuremath{\mathbb{R}}}$
has an accumulation point or if $\textup{supp} (\mu)$ contains two points $z_1,z_2$ with
irrational ratio $\frac{z_1}{z_2}$ (see Theorem
\ref{thm:CharKron}). Another example where $G_\mu=\ensuremath{\mathbb{R}}$ is $\textrm{supp} (\mu) =\{\frac{n^2+1}{n}\}_{n \geq 1}$, even though this set has no accumulation point and contains no pair with irrational ratio (a verification is sketched after this example). \end{enumerate} \end{example}
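Let us sketch why $G_\mu=\ensuremath{\mathbb{R}}$ in the last case, $G_\mu$ being the closed subgroup of $\ensuremath{\mathbb{R}}$ generated by $\textup{supp}(\mu)$ as in the examples above. Since $\frac{n^2+1}{n}=n+\frac{1}{n}$, the group generated by $\textup{supp}(\mu)$ contains
\begin{equation*}
2,\qquad 2\cdot\tfrac{5}{2}-2\cdot 2=1, \qquad\text{and hence}\qquad \Big(n+\tfrac1n\Big)-n\cdot 1=\tfrac1n\quad\text{for every }n\geq 1.
\end{equation*}
It is therefore dense in $\ensuremath{\mathbb{R}}$, and since the only closed subgroups of $\ensuremath{\mathbb{R}}$ are $\{0\}$, $a\mathbb{Z}$ ($a>0$) and $\ensuremath{\mathbb{R}}$, we get $G_\mu=\ensuremath{\mathbb{R}}$.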
Let us continue with interesting consequences of the Kronecker theorem on Diophantine approximation (p. 507 in \cite{GoMo16}).
\begin{theorem}[Kronecker theorem]\label{thm:CharKron} Let $c=(c_1,\dots,c_d)\in \ensuremath{\mathbb{R}}^d$. Then $\overline{c \ensuremath{\mathbb{Z}}+\ensuremath{\mathbb{Z}}^d}=\ensuremath{\mathbb{R}}^d$ if and only if $\{1,c_1,\dots,c_d\}$ is linearly independent over $\mathbb{Q}$. \end{theorem}
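For instance, in dimension $d=1$ with $c=\sqrt2$, the set $\{1,\sqrt2\}$ is linearly independent over $\mathbb{Q}$, so Theorem \ref{thm:CharKron} gives
\begin{equation*}
\overline{\sqrt2\,\mathbb{Z}+\mathbb{Z}}=\ensuremath{\mathbb{R}},
\end{equation*}
the classical fact that $\{m\sqrt2+n : m,n\in\mathbb{Z}\}$ is dense in $\ensuremath{\mathbb{R}}$.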
We can use this result to get the Liouville property with just a finite number of points in the support of the L\'evy measure. \begin{example}\label{ex:finitenumberofpoints} \begin{enumerate}[{\rm (a)}] \item Consider the operator \[ \ensuremath{\mathcal{L}}[u](x)=u(x+c) +\sum_{i=1}^d u(x+e_i) - (d+1) u(x) \] for some $c=(c_1,\ldots,c_d)\not=0$ where $\{e_1,\dots,e_d\}$ is the canonical basis. Liouville holds if and only if $\{1,c_1,\ldots,c_d\}$ is linearly independent over $\mathbb{Q}$. Indeed $G_\mu=\overline{c \ensuremath{\mathbb{Z}}+\ensuremath{\mathbb{Z}}^d}$, so the result follows from Theorems \ref{thm:Liouville} and \ref{thm:CharKron}. \item For more general operators $\ensuremath{\mathcal{L}}[u](x)=\sum_{z \in S}
(u(x+z)-u(x))\omega(z)$, with $S$ finite and $\omega(\cdot)>0$, we
may have similar results by applying Theorem \ref{thm:CharKron} (or
variants) and changing coordinates. \end{enumerate} \end{example}
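As a concrete instance of item (a) of Example \ref{ex:finitenumberofpoints} (using the standard fact that $\{1,\sqrt2,\sqrt3\}$ is linearly independent over $\mathbb{Q}$), the $2$--$d$ operator
\begin{equation*}
\ensuremath{\mathcal{L}}[u](x)=u\big(x+(\sqrt2,\sqrt3)\big)+u(x+e_1)+u(x+e_2)-3\,u(x)
\end{equation*}
has the Liouville property, whereas replacing $(\sqrt2,\sqrt3)$ by any nonzero rational vector destroys it.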
Let us end with an illustration of how the local part may interact with such nonlocal operators. We give 2--$d$ examples of the form $$ \ensuremath{\mathcal{L}}[u](x)=\tilde{b}_1u_{x_1}+\tilde{b}_2u_{x_2}+u(x+z_1)+u(x+z_2)-2 u(x) $$ where $\tilde{b}$ represents the full drift $b+c_\mu$. \begin{example}\label{ex:kro} \begin{enumerate}[\rm (a)] \item If $\tilde{b}, z_1, z_2$ are collinear, Liouville does not hold by Theorem~\ref{thm:Liouville}. \item If $z_1$ and $z_2$ are collinear and linearly independent
of $\tilde{b}$ as in \begin{equation*} \ensuremath{\mathcal{L}}[u](x)=u_{x_1}(x)+u(x_1,x_2+\alpha)+u(x_1,x_2+\beta)-2 u(x), \end{equation*} then the Liouville property holds if and only if $\frac{\alpha}{\beta} \notin \mathbb{Q}$.
Indeed, here we have $G_{\mu}=\{0\} \times \overline{\alpha \ensuremath{\mathbb{Z}}+\beta \ensuremath{\mathbb{Z}}}$ and $\textup{span}_\ensuremath{\mathbb{R}} \{b+c_\mu\}=\textup{span}_\ensuremath{\mathbb{R}}\{(1,0)\}=\ensuremath{\mathbb{R}} \times \{0\}$, so we conclude by Theorems \ref{thm:Liouville} and~\ref{thm:CharKron}. \item If $\{z_1,z_2\}$ is a basis of $\ensuremath{\mathbb{R}}^2$ as in \begin{equation*} \begin{split} \hspace{6mm} \ensuremath{\mathcal{L}}[u](x) =&\tilde{b}_1 u_{x_1}(x)+\tilde{b}_2 u_{x_2}(x)+u(x_1+1,x_2)+u(x_1,x_2+1)-2 u(x), \end{split} \end{equation*} then Liouville holds if and only if $\tilde{b}_1\not=0$ and $\frac{\tilde{b}_2}{\tilde{b}_1} \notin \mathbb{Q}$.
Indeed, let us define $G:=G_\mu+W_{\sigma,b+c_\mu}$ where we note that $G_\mu=\ensuremath{\mathbb{Z}}^2$ and $W_{\sigma,b+c_\mu}=\textup{span}_{\ensuremath{\mathbb{R}}}\{(\tilde{b}_1,\tilde{b}_2)\}$. If $\tilde{b}_1=0$ or $\tilde{b}_2=0$, then $\overline{G}\subseteq \ensuremath{\mathbb{Z}} \times \ensuremath{\mathbb{R}}$ or $\ensuremath{\mathbb{R}} \times \ensuremath{\mathbb{Z}}$ which is not $\ensuremath{\mathbb{R}}^2$. Assume now that $\tilde{b}_1,\tilde{b}_2\not=0$ and $\frac{\tilde{b}_2}{\tilde{b}_1} \in \mathbb{Q}$, i.e., $\frac{\tilde{b}_2}{\tilde{b}_1}=\frac{p}{q}$ with $p,q \neq 0$. Then $$G\subseteq T:=\Big(\frac{1}{p},0\Big)\ensuremath{\mathbb{Z}} + \textup{span}_{\ensuremath{\mathbb{R}}}\Big\{\Big(1,\frac{\tilde{b}_2}{\tilde{b}_1}\Big)\Big\}=\Big\{\Big(\frac{k}{p}+r, r\frac{p}{q}\Big): k \in \ensuremath{\mathbb{Z}}, \ r \in \ensuremath{\mathbb{R}}\Big\}$$ since $\textup{span}_{\ensuremath{\mathbb{R}}}\{(\tilde{b}_1,\tilde{b}_2)\}=\textup{span}_{\ensuremath{\mathbb{R}}}\{(1,\frac{\tilde{b}_2}{\tilde{b}_1})\}\subset T$ and $\ensuremath{\mathbb{Z}}^2\subset T$. The last statement follows since for any $(m,n)\in \ensuremath{\mathbb{Z}}^2$, we can take $k=pm-qn\in \ensuremath{\mathbb{Z}}$ and $r=n \frac{q}{p}\in \ensuremath{\mathbb{R}}$. Since $\overline{T}\neq \ensuremath{\mathbb{R}}^2$, Liouville does not hold by Theorem \ref{thm:Liouville} and Corollary \ref{pro:group-multid}.
Conversely, assume $\tilde{b}_1,\tilde{b}_2\not=0$ and $\frac{\tilde{b}_2}{\tilde{b}_1} \notin \mathbb{Q}$. Then $(0,\frac{\tilde{b}_2}{\tilde{b}_1})=(-1,0)+(1,\frac{\tilde{b}_2}{\tilde{b}_1})\in G$ and since $(0,1) \in G$, we get that $\{0\} \times (\ensuremath{\mathbb{Z}}+\frac{\tilde{b}_2}{\tilde{b}_1} \ensuremath{\mathbb{Z}}) \subset G$. By Theorem \ref{thm:CharKron}, $\{0\} \times \ensuremath{\mathbb{R}} \subset \overline{G}$. Arguing similarly with $(\frac{\tilde{b}_1}{\tilde{b}_2},0)$, we find that $\ensuremath{\mathbb{R}} \times \{0\}\subset \overline{G}$. Hence $\overline{G}=\ensuremath{\mathbb{R}}^2$ and Liouville holds by Theorem \ref{thm:Liouville}. \end{enumerate} \end{example}
\let\oldaddcontentsline\addcontentsline \renewcommand{\addcontentsline}[3]{}
\end{document}
\begin{document}
\title{On unimodular tournaments} \author{Wiam Belkouche,
Abderrahim Boussa\"{\i}ri\thanks{Corresponding author: Abderrahim Boussaïri. Email: [email protected]} , Abdelhak Chaïchaâ and Soufiane Lakhlifi } \affil{Laboratoire Topologie, Alg\`ebre, G\'eom\'etrie et Math\'ematiques Discr\`etes, Facult\'e des Sciences A\"in Chock, Hassan II University of Casablanca, Maroc.}
\maketitle
\begin{abstract} A tournament is unimodular if the determinant of its skew-adjacency matrix is $1$. In this paper, we give some properties and constructions of unimodular tournaments. A unimodular tournament $T$ with skew-adjacency matrix $S$ is invertible if $S^{-1}$ is the skew-adjacency matrix of a tournament. A spectral characterization of invertible tournaments is given. Lastly, we show that every $n$-tournament can be embedded in a unimodular tournament by adding at most $n - \lfloor\log_2( n)\rfloor$ vertices. \end{abstract}
\textbf{Keywords:} Unimodular tournament, skew-adjacency matrix, invertible tournament, skew-spectra.
\textbf{MSC Classification:} 05C20, 05C50.
\section{Introduction}
Let $T$ be a tournament with vertex set $\{v_1,\ldots,v_n\}$. The \emph {skew-adjacency matrix} of $T$ is the $n\times n$ zero-diagonal matrix $S =[s_{ij}]_{1\leq i,j\leq n}$ in which $s_{ij}=1$ and $s_{ji}=-1$ if $v_i$
dominates $v_j$. Equivalently, $S=A-A^{t}$ where $A$ is the adjacency matrix of $T$. We define the determinant $\det(T)$ of $T$ as the determinant of $S$. As $S$ is skew symmetric, $\det(T)$ vanishes when $n$ is odd. When $n$ is even, the determinant is the square of the Pfaffian of $S$. Moreover, McCarthy and Benjamin \cite[Proposition~1]{mccarthy96} proved that the determinant of an $n$-tournament has the same parity as $n-1$.
\begin{proposition}\label{mccarthy}
The determinant of a tournament with an even number of vertices is the
square of an odd number. \end{proposition}
Let $T$ be a tournament. The \emph{converse} of $T$, obtained by reversing all its arcs, has the same determinant as $T$. The switching is another operation that preserves the determinant. The \emph{switch} of a tournament on a vertex set $V$ with respect to a subset $X$, is the tournament obtained by reversing all the arcs between $X$ and $V\setminus
X$. It is well-known that if two tournaments are switching equivalent, then their skew-adjacency matrices are $\{\pm 1\}$-diagonally similar \cite{moorhouse95}. Hence, switching equivalent tournaments have the same determinant.
In this paper, we consider the class of tournaments whose skew-adjacency matrices are unimodular, or equivalently, tournaments whose determinants are equal to one. We call such tournaments \emph{unimodular}. By the foregoing, this class is closed under the converse and switching operations. Examples of unimodular tournaments are transitive tournaments with an even number of vertices and their switches. The smallest tournaments that are not unimodular consist of a vertex dominating or dominated by a $3$-cycle. These tournaments are called \emph{diamonds} \cite{gnanvo1992reconstruction}. A tournament contains no diamonds if and only if it is switching equivalent to a transitive tournament \cite{babai2000automorphisms}. Tournaments without diamonds are known as \emph{local orders} \cite{cameron1978orbits}, \emph{locally transitive tournaments} \cite{lachlan1984countable}, or \emph{vortex-free tournaments} \cite{knuth1992axioms}.
The \emph{join} of a tournament $T_1$ to a tournament $T_2$, denoted by $T_1 \rightarrow T_2$, is the tournament obtained from $T_1$ and $T_2$
by adding an arc from each vertex of $T_1$ to all vertices of $T_2$. The join of a tournament $T$ to a tournament with one vertex is denoted by $T^{+}$. Our first main result gives a necessary and sufficient condition on the unimodularity of the join of two tournaments. It follows directly from Theorem \ref{main7}, which will be proved in the next section.
\begin{theorem}\label{main6}
Let $T_1$ and $T_2$ be two tournaments with $p$ and $q$ vertices respectively.
\begin{enumerate}[i)]
\item If $p$ and $q$ are even, then $T_1 \rightarrow T_2$ is unimodular
if and only if $T_1$ and $T_2$ are unimodular.
\item If $p$ and $q$ are odd, then $T_1 \rightarrow T_2$ is unimodular
if and only if $T_1^{+}$ and $T_2^{+}$ are unimodular.
\end{enumerate} \end{theorem}
Let $T$ be a unimodular tournament and let $S$ be its skew-adjacency matrix. The inverse $S^{-1}$ of $S$ is a unimodular skew-symmetric integral matrix, but its off-diagonal entries are not necessarily from $ \{-1, 1\}$. We say that $T$ is invertible if $S^{-1}$ is the skew-adjacency matrix of a tournament. We call this tournament the inverse of $T$ and we denote it by $T^{-1}$. For graphs, the inverse was introduced by considering the adjacency matrix and has been studied extensively \cite{cvetkovic1978self, godsil1985inverses, kirkland2007unimodular, barik2006nonsingular, tifenbach2009directed}.
We give a spectral characterization of invertible tournaments. Moreover, we prove that every $n$-tournament can be embedded in an invertible, and hence unimodular, $2n$-tournament. The following problem arises naturally.
\begin{problem}
For a tournament $T$ on $n$ vertices, what is the smallest number $u^{+ }(T)$ of vertices we must add to $T$ to obtain a unimodular tournament? \end{problem}
We prove that $u^{+}(T)$ cannot exceed $n - \lfloor\log_2(n)\rfloor$. Moreover, if the skew-adjacency matrix $S$ of $T$ is a \emph{skew-conference matrix}, that is, $S^2 = (1-n)I_n$, then $u^{+}(T)$ is at least $n/2$. Hence, $u^{+}(T)$ can be arbitrarily large.
\section{The determinant of the join of tournaments}\label{join}
Let $T_1$ and $T_2$ be two tournaments, and let $\chi_1(x)$ and $\chi_2 (x)$ be the characteristic polynomials of their adjacency matrices. Then, the characteristic polynomial of $T_1\rightarrow T_2$ is $\chi_1(x) \chi_2(x)$. There is no similar result for the skew-adjacency matrix. However, we obtain the following result.
\begin{theorem}\label{main7}
Let $T_1$ and $T_2$ be two tournaments with $p$ and $q$ vertices respectively.
\begin{enumerate}
\item If $p$ and $q$ are even, then $\det(T_1 \rightarrow T_2) = \det(T_1)\cdot \det(T_2)$.
\item If $p$ and $q$ are odd, then $\det(T_1 \rightarrow T_2) = \det(T_1^{+})\cdot \det(T_2^{+})$.
\end{enumerate} \end{theorem}
Let $S_1$ and $S_2$ be the skew-adjacency matrices of $T_1$ and $T_2$ respectively. The skew-adjacency matrix $S$ of $T_1 \rightarrow T_2$ can be written as follows
\begin{equation}
S=
\left(
\begin{array}{c|c}
S_1 & J \\
\hline
-J^t & S_2
\end{array}
\right)\mbox{,} \end{equation} where $J$ is the all ones matrix of order $p\times q$. By Proposition \ref{mccarthy}, if $p$ is even, then $S_1$ is non-singular. The first assertion then follows from the following more general result.
\begin{lemma}\label{lem2}
Let $M$ be a skew-symmetric matrix of the form
\[
M=\begin{pmatrix}
A & B \\
-B^t & D
\end{pmatrix}\mbox{.}
\] If $A$ is non-singular and $\operatorname{rank}(B)=1$, then \[ \det(M)= \det(A)\cdot\det(D)\mbox{.}\] \end{lemma}
\begin{proof}
Using Schur's complement formula, we get \[\det(M)=\det(A)\cdot\det(D+B^{t}A^{-1}B)\mbox{.}\]
As $\operatorname{rank}(B) = 1$, there exist two column vectors $\alpha$ and $\beta$ such that $B = \alpha\beta^t$. Then, we have
$$ B^{t}A^{-1}B = \beta\alpha^t A^{-1} \alpha\beta^t\mbox{.}$$
Since the matrix $A^{-1}$ is skew-symmetric and $\alpha^t A^{-1} \alpha$ is a scalar, $\alpha^t A^{-1} \alpha=0$, and hence \[\det(M)=\det(A)\cdot\det(D)\mbox{.}\qedhere\] \end{proof}
\begin{proof}[Proof of Theorem \ref{main7}]
The first assertion is already proven. For the second assertion, we prove that $\det(T_1\rightarrow T_2) = \det(T_1^{+}\rightarrow T_2^{+})$. For this, consider the tournament $R$ on two vertices. It is easy to see that $T_1^{+}\rightarrow T_2^{+}$ is switching equivalent to $(T_1 \rightarrow T_2) \rightarrow R$. Hence, $\det(T_1^{+}\rightarrow T_2^{+} )=\det((T_1\rightarrow T_2) \rightarrow R)$. By the first assertion, $ \det((T_1\rightarrow T_2) \rightarrow R) = \det(T_1\rightarrow T_2)\cdot \det(R)$. Then, $\det(T_1\rightarrow T_2) = \det(T_1^{+}\rightarrow T_2^{ +})$, because $\det(R)=1$. Furthermore, $\det(T_1^{+}\rightarrow T_2^{ +}) = \det(T_1^{+})\cdot\det(T_2^{+})$ by the first assertion.\qed \end{proof}
As we have seen above, the converse and switching operations preserve unimodularity. Together with the join, these operations generate a subclass $\mathcal{H}$ of unimodular tournaments, defined as follows.
\begin{enumerate}
\item The unique 2-tournament is in $\mathcal{H}$.
\item If $T_1,T_2$ are in $\mathcal{H}$, then the tournament $T_1
\rightarrow T_2$ is in $\mathcal{H}$.
\item The switch of a tournament in $\mathcal{H}$ is also in $\mathcal{H}$. \end{enumerate}
Let $T$ be a tournament with $n \geq 4$ vertices. We say that $T$ is \emph{switching decomposable} if there exist two tournaments $T_1$ and $T _2$, each with at least $2$ vertices, such that $T$ is switching equivalent to $T_1\rightarrow T_2$. Otherwise, we say that $T$ is \emph{switching indecomposable}. Switching decomposability coincides with the \emph{bijoin decomposability} \cite{bankoussou2017matrix, bui2007unifying}.
\begin{example} For an odd integer $n$, consider the well-known \emph{circular tournament} $C_n$ whose vertex set is the additive group $\mathbb{Z}_n=\{0,1,\cdots,n -1\}$ of integers modulo $n$, such that $i$ dominates $j$ if and only if $i-j\in\{1,\cdots,(n-1)/2\}$. The tournament $C_n$ is strongly connected. However, by reversing the arcs between the even and the odd vertices we obtain a transitive tournament. Hence, $C_n$ is switching decomposable for every odd integer $n\geq5$. \end{example}
Clearly, every tournament in $\mathcal{H}$ with more than $2$ vertices is switching decomposable. It is easy to check that all unimodular tournaments with at most $6$ vertices are in $\mathcal{H}$. However, we have found a switching indecomposable unimodular tournament with $8$ vertices. Its skew-adjacency matrix is \[ \begin{pmatrix*}[r] 0 & -1 & -1 & -1 & 1 & -1 & -1 & -1 \\ 1 & 0 & -1 & -1 & -1 & -1 & -1 & -1 \\ 1 & 1 & 0 & -1 & -1 & 1 & -1 & -1 \\ 1 & 1 & 1 & 0 & -1 & -1 & -1 & -1 \\ -1 & 1 & 1 & 1 & 0 & -1 & -1 & -1 \\ 1 & 1 & -1 & 1 & 1 & 0 & 1 & -1 \\ 1 & 1 & 1 & 1 & 1 & -1 & 0 & -1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \end{pmatrix*} \]
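Readers who wish to check such examples by computer (cf.\ the use of SageMath \cite{SageMath} below) can use the following minimal sketch in plain Python/NumPy; it only assumes that the matrix above has been transcribed correctly.
\begin{verbatim}
import numpy as np

# Skew-adjacency matrix of the 8-vertex tournament displayed above.
S = np.array([
    [ 0, -1, -1, -1,  1, -1, -1, -1],
    [ 1,  0, -1, -1, -1, -1, -1, -1],
    [ 1,  1,  0, -1, -1,  1, -1, -1],
    [ 1,  1,  1,  0, -1, -1, -1, -1],
    [-1,  1,  1,  1,  0, -1, -1, -1],
    [ 1,  1, -1,  1,  1,  0,  1, -1],
    [ 1,  1,  1,  1,  1, -1,  0, -1],
    [ 1,  1,  1,  1,  1,  1,  1,  0],
])

# The determinant of an integer matrix is an integer, so rounding is safe.
# The text asserts this tournament is unimodular, i.e. the value should be 1.
print(round(np.linalg.det(S)))
\end{verbatim}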
\begin{problem}\label{question:1}
Find an infinite family of unimodular switching indecomposable tournaments. \end{problem}
\section{Spectral properties of unimodular tournaments}
Let $S$ be an integral skew-symmetric matrix. The nonzero eigenvalues of $S$ are purely imaginary and occur as conjugate pairs $\pm i\lambda_1 , \ldots, \pm i\lambda_k$, where $\lambda_1, \ldots, \lambda_k$ are totally real algebraic integers. Moreover, the norm $N(\lambda_i)$ of $ \lambda_i$ divides the determinant of $S$.
If $S$ is unimodular, then $\prod_{i=1}^{k}\lambda_i=\pm1$ and hence the $\lambda_i$ are all algebraic units. Conversely, suppose that every $\lambda_i$ is an algebraic unit. Then, the constant coefficient in the minimal polynomial of every eigenvalue is $\pm1$. Hence, the determinant of $S$ is $\pm1$. In particular, we have the following result.
\begin{proposition}\label{main10}
A tournament is unimodular if and only if all its eigenvalues are algebraic units. \end{proposition}
Godsil \cite{godsil1982eigenvalues} proved that every algebraic integer $\lambda$ occurs as an eigenvalue of the adjacency matrix of a digraph. Estes \cite{estes1992eigenvalues} proved that if $\lambda$ is a totally real algebraic integer, that is, all its conjugates are real, then it is an eigenvalue of the adjacency matrix of a graph. Recently, Salez \cite{salez2015every} proved that the graph may be chosen to be a tree. For tournaments, we can ask the following question.
\begin{question}
Let $\lambda$ be a totally real algebraic integer with an odd norm. What further conditions on $\lambda$ guarantee that $i\lambda$ is an eigenvalue of a tournament?
\end{question}
A similar question can also be asked about the determinant of tournaments. By Proposition \ref{mccarthy}, the determinant of a tournament with an even number of vertices is the square of an odd number. Conversely,
\begin{question}\label{quest:2}
For every odd number $m$, does there exist a tournament whose determinant is $m^{2}$? \end{question}
By Theorem \ref{main7}, it is enough to consider Question \ref{quest:2} for odd prime numbers.
\section{Invertible tournaments}
Let $T$ be a tournament with skew-adjacency matrix $S$. Let $\phi_{S}(x )=x^{n}+\sigma_{1}x^{n-1}+\sigma_{2}x^{n-2}+\cdots+\sigma_{n-1}x+\sigma_{ n}$ be the characteristic polynomial of $S$. Then, \begin{equation}\label{eq:minors}
\sigma_{k}=(-1)^{k}\sum(\text{all }k\times k\text{ principal minors})\mbox{.} \end{equation}
Since $S$ is the skew-adjacency matrix of a tournament, we have
\begin{enumerate}
\item $\sigma_2 = \binom{n}{2}$.
\item $\sigma_n = \det(S)$.
\item $\sigma_k = 0$ if $k$ is odd. \end{enumerate}
The determinant of a $4$-tournament is $9$ if it is a diamond and $1$ otherwise. It follows that $\sigma_4 = 8\delta_T + \binom{n}{4}$, where $ \delta_T$ is the number of diamonds in $T$. In particular, $T$ has no diamonds if and only if $\sigma_4 = \binom{n}{4}$.
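In more detail, each $4\times4$ principal minor of $S$ is the determinant of a $4$-subtournament, so
\begin{equation*}
\sigma_4=\sum(\text{all }4\times 4\text{ principal minors})=9\,\delta_T+\Big(\binom{n}{4}-\delta_T\Big)=8\,\delta_T+\binom{n}{4}.
\end{equation*}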
If $T$ is unimodular, then the inverse $S^{-1}$ of $S$ is an integral unimodular skew-symmetric matrix. Furthermore, $\phi_{S^{-1}}(x) = x^{n} + \sigma_{n-2}x^{n-2} + \cdots+\sigma_{2}x^{2} + 1$. Hence, a necessary condition for $T$ to be invertible is $\sigma_{n-2} = \binom{n}{2}$. The following proposition shows that this condition is sufficient.
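The displayed form of $\phi_{S^{-1}}$ follows from a one-line computation, using $\det(S^{-1})=1$, that $n$ is even, and the convention $\sigma_0=1$:
\begin{equation*}
\phi_{S^{-1}}(x)=\det(xI_n-S^{-1})=\det(S^{-1})\det(xS-I_n)=x^{n}\det\Big(S-\tfrac{1}{x}I_n\Big)=x^{n}\phi_{S}\Big(\tfrac{1}{x}\Big)=\sum_{k=0}^{n}\sigma_{k}x^{k}.
\end{equation*}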
\begin{proposition}
Let $T$ be a unimodular $n$-tournament, and let $S$ be its skew- adjacency matrix. Then, the off-diagonal entries of $S^{-1}$ are odd. Moreover, the following assertions are equivalent.
\begin{enumerate}[i)]
\item $T$ is invertible.
\item Every $(n-2)$-subtournament of $T$ is unimodular.
\item The coefficient of $x^{2}$ in the characteristic polynomial of $S$ is $\binom{n}{2}$.
\end{enumerate} \end{proposition}
\begin{proof}
Let $[n]=\{1,\ldots,n\}$, and let $I$ be a subset of $[n]$. Denote by $ S[I]$ the submatrix of $S$ whose rows and columns are indexed by $I$. Let $i\neq j\in [n]$, it follows from Jacobi's complementary minors theorem that
\begin{equation}\label{eq:jacobi}
\det(S^{-1}[\{i, j\}]) = \det(S[[n]\setminus{\{i, j\}}])\mbox{.}
\end{equation}
Moreover, as $S^{-1}$ is skew-symmetric, $\det(S^{-1}[\{i, j\}]) = (S_{i,j}^{-1})^2$. Then
\begin{equation}\label{eq:jacobi2}
(S_{i,j}^{-1})^2 = \det(S[[n]\setminus{\{i, j\}}])\mbox{.}
\end{equation}
By Proposition \ref{mccarthy}, $\det(S[[n]\setminus{\{i, j\}}])$ is the square of an odd number. Hence, the $(i, j)$-entry of $S^{-1}$ is odd.
The equivalence $i)\Leftrightarrow ii)$ follows directly from \eqref{eq:jacobi2}. The equivalence $ii)\Leftrightarrow iii)$ follows from \eqref{eq:minors} and Proposition \ref{mccarthy} which implies that the determinant of tournaments with an even number of vertices is at least $1$. \end{proof}
\begin{example}
Let $T$ be an $n$-tournament without diamonds and let $S$ be its skew-adjacency matrix. Every subtournament of $T$ with an even number of vertices is unimodular. Hence, $T$ is invertible and $\phi_S(x) = \phi_{S ^{-1}}(x)=x^{n} + \binom{n}{2}x^{n-2} + \cdots + \binom{n}{n-2}x^{2} + 1$. It follows that $S^{-1}$ is the skew-adjacency matrix of a tournament without diamonds. \end{example}
In the example above, the characteristic polynomial is palindromic, that is, the coefficients of $x^{i}$ and $x^{n-i}$ are equal. We call a tournament \emph{palindromic} if the characteristic polynomial of its skew-adjacency matrix is palindromic. Let $T$ be a tournament and let $S$ be its skew-adjacency matrix. Clearly, if $\phi_S(x)$ is palindromic, then $T$ is unimodular, the inverse of $S$ is the skew-adjacency matrix of a tournament and $\phi_S(x)=\phi_{S^{-1}}({x})$. In what follows, we give a construction of tournaments whose characteristic polynomial is palindromic.
Let $T$ be an $n$-tournament with vertex set $V = \{v_1,\ldots, v_n\}$, and let $S$ be its skew-adjacency matrix. Let $\hat{T}$ be the tournament obtained from $T$ by adding a copy $T^{\prime}$ of $T$ with vertex set $\{v^{\prime}_1,\ldots,v^{\prime}_n\}$, such that $v_{i}$ dominates $v^{\prime}_{i}$, and, for $i\neq j$, $v_{i}$ dominates $v^{\prime}_{j}$ if and only if $v_{i}$ dominates $v_{j}$. The skew-adjacency matrix $\hat{S}$ of $\hat{T}$ can be written as follows.
\[\hat{S} =
\begin{pmatrix}
S & S + I_n \\
S - I_n & S
\end{pmatrix}\mbox{.} \]
The inverse of $\hat{S}$ is $
\begin{pmatrix}
S & -(S + I_n) \\
-(S - I_n) & S
\end{pmatrix} $ then $\hat{T}$ is invertible. Moreover, $\hat{T}$ and $\hat{T}^{-1}$ are switching equivalent. Indeed, \begin{equation}\label{eq:switch} \hat{S}^{-1} = D\hat{S}D\mbox{.} \end{equation} where $D=\begin{pmatrix}
I_n & 0 \\
0 & -I_n
\end{pmatrix} $. It follows that $\phi_{\hat{S}}(x) = \phi_{(\hat{S})^{-1}}(x)$, and hence $\phi_{\hat{S}}(x)$ is palindromic.
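One can verify the claimed inverse by a direct block computation:
\begin{equation*}
\begin{pmatrix}
S & S + I_n \\
S - I_n & S
\end{pmatrix}
\begin{pmatrix}
S & -(S + I_n) \\
-(S - I_n) & S
\end{pmatrix}
=
\begin{pmatrix}
S^{2}-(S^{2}-I_n) & -S(S + I_n)+(S + I_n)S \\
(S - I_n)S-S(S - I_n) & -(S^{2}-I_n)+S^{2}
\end{pmatrix}
=I_{2n}\mbox{,}
\end{equation*}
using that $S$ commutes with $S\pm I_n$. In particular $\det(\hat{S})\cdot\det(\hat{S}^{-1})=1$ with both factors integers, and $\det(\hat{S})$ is the square of the Pfaffian, so $\det(\hat{S})=1$ and $\hat{T}$ is unimodular.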
\begin{remark}\label{eq:eq-minors}
Let $I$ be a nonempty proper subset of $[2n]$. By \eqref{eq:switch}, $\det(\hat{S}[I]) = \det(\hat{S}^{-1}[I])$. Moreover, using Jacobi's complementary minors theorem, $\det(\hat{S}^{-1}[I]) = \det(\hat{S}[[2n] \setminus I])$. It follows that $\det(\hat{S}[I]) = \det(\hat{S}[[2n] \setminus I])$. \end{remark}
\section{Embedding of tournaments in unimodular tournaments}
In the previous section, we proved that every $n$-tournament can be embedded in a unimodular $2n$-tournament. For a tournament $T$ on $n$ vertices, let $u^{+}(T)$ be the smallest number of vertices we must add to $T$ to obtain a unimodular tournament. A dual notion of $u^{+}(T)$ is to consider the minimum number $u^{-}(T)$ of vertices we must remove from $T$ to obtain a unimodular tournament. It follows from Theorem \ref{main6} that if $T_1$ and $T_2$ are two tournaments, then \begin{align}
u^{+}(T_1\rightarrow T_2) \leq u^{+}(T_1) + u^{+}(T_2)\mbox{,} \\
u^{-}(T_1\rightarrow T_2) \leq u^{-}(T_1) + u^{-}(T_2)\mbox{.} \end{align}
It is shown in \cite{erdos1964representation} that every $n$-tournament $T$ contains a transitive subtournament of order at least $ \lfloor\log_2(n)\rfloor+1$. In particular, it contains a unimodular tournament of order at least $\lfloor\log_2(n)\rfloor$. Then, $u^{-}(T) \leq n - \lfloor\log_2(n)\rfloor$.
The following theorem provides a relationship between $u^{+}(T)$ and $u^{-}(T)$. \begin{theorem}\label{eq:relation}
Let $T$ be an $n$-tournament. Then, \[ u^{+}(T) \leq u^{-}(T)\mbox{.} \] In particular, $u^{+}(T) \leq n - \lfloor\log_2(n)\rfloor$. \end{theorem}
\begin{proof}
Let $V = \{v_1,\ldots,v_n\}$ be the vertex set of $T$. Consider the $2n $-tournament $\hat{T}$ obtained from $T$ and a copy $T^{\prime}$ of $T$ as described in the previous section. There exists $I\subset V^{\prime}$
, $|I|=u^{-}(T^{\prime})=u^{-}(T)$, such that $\hat{T}[V^{\prime}\setminus I]$ is unimodular. By Remark \ref{eq:eq-minors}, $\det(\hat{T}[V^{\prime} \setminus I]) = \det(\hat{T}[V\cup I]) = 1$. Moreover, the tournament $ \hat{T}[V\cup I]$ contains $T$, hence $u^{+}(T) \leq u^{-}(T)$.
\begin{remark}
The inequality in Theorem \ref{eq:relation} may be strict. Indeed, let $T$ be the tournament whose skew-adjacency matrix is $S$.
\[S = \begin{pmatrix*}[r] 0 & -1 & -1 & -1 & -1 & -1 & -1 & 1 & -1 \\ 1 & 0 & -1 & -1 & -1 & -1 & -1 & -1 & 1 \\ 1 & 1 & 0 & -1 & -1 & -1 & 1 & -1 & -1 \\ 1 & 1 & 1 & 0 & -1 & -1 & -1 & 1 & -1 \\ 1 & 1 & 1 & 1 & 0 & -1 & -1 & -1 & -1 \\ 1 & 1 & 1 & 1 & 1 & 0 & -1 & -1 & 1 \\ 1 & 1 & -1 & 1 & 1 & 1 & 0 & -1 & -1 \\ -1 & 1 & 1 & -1 & 1 & 1 & 1 & 0 & -1 \\ 1 & -1 & 1 & 1 & 1 & -1 & 1 & 1 & 0 \end{pmatrix*}\mbox{.} \] By adding a vertex dominating $T$ we obtain a unimodular tournament, hence $u^{+}(T) = 1$. The tournament $T$ has no unimodular $(n-1)$-subtournament. Moreover, removing the last three rows of $S$ and their corresponding columns yields the skew-adjacency matrix of a unimodular tournament, hence $u^{-}(T) = 3$. This example was found using SageMath \cite{SageMath}. \end{remark}
In what follows, we give a lower bound on $u^{+}(T)$, using the spectra of the skew-adjacency matrix of $T$.
\begin{theorem}\label{main8}
Let $T$ be a non-unimodular $n$-tournament and let $\nu(T)$ be the maximum multiplicity among the non-unit eigenvalues of its skew-adjacency matrix. Then, \[ \nu(T) \leq u^{+}(T)\mbox{.} \] \end{theorem}
To prove this theorem, we need the following lemma, which is a direct consequence of the Cauchy interlacing theorem.
\begin{lemma}\label{main9}
Let $A$ be a hermitian matrix of order $m$, and let $B$ be a principal submatrix of $A$ of order $n$, with an eigenvalue $\lambda$ of multiplicity $r$. If $m-n < r$, then $\lambda$ is an eigenvalue of $A$. \end{lemma}
\begin{proof}[Proof of Theorem \ref{main8}]
Let $T$ be a non-unimodular $n$-tournament and let $i\lambda$ be a non-unit eigenvalue of its skew-adjacency matrix $S$ with multiplicity $\nu(T)$. Let $T^{\prime}$ be an $m$-tournament containing $T$ such that $m < n + \nu(T)$ and denote by $S^{\prime}$ its skew-adjacency matrix. Clearly, $\lambda$ is an eigenvalue of $iS$ with multiplicity $\nu(T)$. Then, by Lemma \ref{main9}, $\lambda$ is also an eigenvalue of $iS^{ \prime}$. Hence, $S^{\prime}$ has a non-unit eigenvalue. It follows from Proposition $\ref{main10}$ that $T^{\prime}$ is not unimodular. \qed \end{proof}
Tournaments with large $\nu(T)$ can be obtained from skew-conference matrices. Let $T$ be an $n$-tournament and let $S$ be its skew-adjacency matrix. Assume that $S$ is a skew-conference matrix. It follows that the eigenvalues of $S$ are $\pm i\sqrt{n-1}$ each with multiplicity $n/2$. As $i\sqrt{n-1}$ is not an algebraic unit, then $\nu(T) = n/2$. Hence, by Theorem \ref{main8}, $u^{+}(T)\geq n/2$.
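In more detail: $S^{2}=(1-n)I_{n}$ forces every eigenvalue $i\lambda$ of $S$ to satisfy $(i\lambda)^{2}=1-n$, so $\lambda=\pm\sqrt{n-1}$, and since the nonzero eigenvalues of a real skew-symmetric matrix occur in conjugate pairs with equal multiplicities,
\begin{equation*}
\operatorname{mult}\big(i\sqrt{n-1}\big)=\operatorname{mult}\big(-i\sqrt{n-1}\big)=\frac{n}{2}.
\end{equation*}
Moreover, all conjugates of $\sqrt{n-1}$ are $\pm\sqrt{n-1}$, of absolute value greater than $1$ for $n\geq 4$, so $\sqrt{n-1}$ cannot be an algebraic unit.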
It is conjectured that skew-conference matrices exist if and only if $n =2$ or $n$ is divisible by $4$ \cite{wallis1971some}. If this conjecture is true, the Cauchy interlacing theorem behind Lemma \ref{main9}, together with Theorem \ref{main8}, implies that for every integer $n\geq4$ there exists an $n$-tournament $T$ such that $u^{+}(T) \geq \frac{n-3}{2}$: deleting at most three vertices from a skew-conference tournament of the smallest order $m\geq n$ divisible by $4$ leaves an $n$-tournament in which the non-unit eigenvalues $\pm i\sqrt{m-1}$ still have multiplicity at least $\frac{m}{2}-(m-n)\geq\frac{n-3}{2}$. Denote by $u^{+}(n)$ the maximum of $u^{+}(T)$ among $n$-tournaments. By the foregoing, we have the following theorem.
\begin{theorem}
Assuming the existence of skew-conference matrices of every order divisible by $4$, we have
\[ \frac{n-3}{2} \leq u^{+}(n) \leq n - \lfloor\log_2(n)\rfloor\mbox{.} \] \end{theorem}
Examples of tournaments with a skew-conference matrix can be obtained from Paley tournaments. For a prime power $q\equiv3\mod4$, the Paley tournament with $q$ vertices is the tournament whose vertex set is the Galois field $GF(q)$, such that $x$ dominates $y$ if and only if $x-y$ is a nonzero quadratic residue in $GF(q)$. There are many other infinite families of skew-conference matrices, see for example \cite{koukouvinos2008skew}.
\section{Concluding remarks} The main concern of this paper is the determinant of the skew-adjacency matrix of tournaments. A multiplicative formula for the determinant of the join of two tournaments was given. This formula provides a new construction of unimodular tournaments. Another construction is the blow-up operation, in which every vertex of a tournament is replaced by a tournament with two vertices. This construction shows that every $n$-tournament can be embedded in a unimodular $2n$-tournament for which the inverse of the skew-adjacency matrix is also the skew-adjacency matrix of a tournament. The minimum number of vertices that must be added to a tournament to make it unimodular was then considered. We showed that it does not exceed the minimum number of vertices to be removed to obtain a unimodular tournament, and that it is related to the multiplicity of its non-unit eigenvalues.
In addition to the problems presented, many other questions and directions can be considered. \begin{itemize}
\item The construction of the class $\mathcal{H}$, considered in Section \ref{join}, is simple. Nevertheless, this family seems to be rich as it can be proven, by induction, that the blow-up of every tournament is in $\mathcal{H}$. Does $\mathcal{H}$ contain a positive proportion of all unimodular tournaments?
\item Find examples of invertible tournaments that are not palindromic.
\item The problem of finding $u^{+}(T)$ seems extremely hard. We suspect that there is no polynomial time algorithm to solve this problem. Find a non-brute force algorithm to compute $u^{+}(T)$.
\item As we have seen above, skew-conference matrices have non-unit eigenvalues with maximum possible multiplicities. Another property of skew-conference matrices is that they have maximum determinant among zero-diagonal $\{-1, 1\}$-matrices. Do tournaments whose skew-adjacency matrices are skew-conference matrices have maximum $u^{+}(T)$? \end{itemize}
\end{document}
\begin{document}
\begin{frontmatter}
\author{Jian Wang} \author{Yong Wang\corref{cor2}} \ead{[email protected]} \cortext[cor2]{Corresponding author} \address{School of Mathematics and Statistics, Northeast Normal University, Changchun, 130024, P.R.China} \title{On the Geometry of Tangent Bundles \\ with the Rescaled Metric} \begin{abstract} For a Riemannian manifold $M$, we determine some curvature properties of a tangent bundle equipped with the rescaled metric. The main aim of this paper is to give explicit formulae for the rescaled metric on $TM$, and investigate the geodesics on the tangent bundle with respect to the rescaled Sasaki metric. \end{abstract} \begin{keyword} Tangent bundle; Rescaled Sasaki metric; Rescaled Cheeger-Gromoll metric; Geodesics. \end{keyword} \end{frontmatter} \section{Introduction} \label{1} Tangent bundles of differentiable manifolds are of great importance in many areas of mathematics and physics. Geometry of the tangent bundle $TM$ of a Riemannian manifold $(M,g)$ with the metric $\bar{g}$ defined by Sasaki in \cite{Sa} has been studied by many authors. Its construction is based on a natural splitting of the tangent bundle $TTM$ of $TM$ into its vertical and horizontal subbundles by means of the Levi-Civita connection $\nabla$ on $(M,g)$. The Levi-Civita connection $\hat{\nabla}$ of the Sasaki metric on $TM$ and its Riemannian curvature tensor $\hat{R}$ were calculated by Kowalski in \cite{Ko}. With this in hand, the authors derived interesting connections between the geometric properties of $(M,g)$ and $(TM,\hat{g})$ in \cite{Ko} and \cite{MT}. In \cite{MT}, the authors proved that the Sasaki metric on $TM$ is rather rigid in the sense that the scalar curvature of $(TM,\bar{g})$ can only be constant when $(M,g)$ is flat.
Another metric nicely fitted to the tangent bundle is the so-called Cheeger-Gromoll metric introduced in \cite{CG}. This can be used to obtain a natural metric $\tilde{g}$ on the tangent bundle $TM$ of a given Riemannian manifold $(M,g)$. It was expressed more explicitly by Musso and Tricerri in \cite{MT}. In \cite{Se}, Sekizawa calculated the Levi-Civita connection $\tilde{\nabla}$ and the curvature tensor $\tilde{R}$ of the tangent bundle $(TM,\tilde{g})$ equipped with the Cheeger-Gromoll metric. Gudmundsson and Kappos derived correct relations between the geometric properties of $(M,g)$ and $(TM,\tilde{g})$ in \cite{GK1}. In \cite{GK2}, explicit formulae for the Cheeger-Gromoll metric on $TM$ were given. The motivation of this paper is to study the geometry of tangent bundles with the rescaled Sasaki and Cheeger-Gromoll metrics.
This paper is organized as follows: In Section 2, for a Riemannian manifold $(M, g)$, we introduce a natural class of rescaled metrics. In Section 3, we calculate the Levi-Civita connection and the Riemann curvature tensor associated to the rescaled Sasaki metric. In Section 4, we investigate geodesics on the tangent bundle with respect to the rescaled Sasaki metric. The main purpose of Section 5 is to obtain some interesting connections between the geometric properties of the manifold $(M, g)$ and its tangent bundle equipped with the rescaled Cheeger-Gromoll metric.
\section{Natural Metrics} In this section we introduce a natural class of rescaled metrics on the tangent bundle $TM$ of a given Riemannian manifold $(M, g)$. This class contains both the rescaled Sasaki and rescaled Cheeger-Gromoll metrics studied later on.
Throughout this paper we shall assume that $M$ is a smooth $m-$dimensional manifold with maximal atlas
$\mathcal{A}=\{(U_{\alpha},x_{\alpha})|\alpha\in I\}$. For a point $p\in M$, let $T_{p}M$ denote the tangent space of $M$ at $p$. For local coordinates $(U,x)$ on $M$ and $p\in U$ we define $(\frac{\partial}{\partial x_{k}})_{p}\in T_{p}M$ by \begin{equation} (\frac{\partial}{\partial x_{k}})_{p}:f \mapsto \frac{\partial f}{\partial x_{k}}(p)=\partial_{e_{k}}(f\circ x^{-1})(x(p)) \end{equation}
where $\{e_{k}|k=1,\ldots,m\}$ is the standard basis of $\mathbb{R}^{m}$. Then
$\{(\frac{\partial}{\partial x_{k}})_{p}|k=1,\ldots,m\}$ is a basis for $T_{p}M$. The set $TM=\{(p,u)|p\in M,u\in T_{p}M\}$ is called the
tangent bundle of $M$, and the bundle map $\pi:TM\rightarrow M$ is given by $\pi:(p,u)\mapsto p$.
As a direct consequence of Theorem 2.1 in \cite{GK2}, we see that the bundle map $\pi:TM\rightarrow M$ is smooth. For each point $p\in M$
the fiber $\pi^{-1}(p)$ is the tangent space $T_{p}M$ of $M$ at $p$ and hence an $m-$dimensional vector space. For local coordinates
$(U,x)\in\mathcal{A}$ we define $\bar{x}:\pi^{-1}(U)\rightarrow U\times \mathbb{R}^{m}$ by \begin{equation}
\bar{x}:(p,\sum_{k=1}^{m}u_{k}\frac{\partial}{\partial x_{k}}|_{p})\mapsto \big(p,(u_{1},\ldots,u_{m})\big). \end{equation}
The restriction $\bar{x}_{p}=\bar{x}|_{T_{p}M}:T_{p}M\rightarrow \{p\}\times\mathbb{R}^{m}$ to the tangent space $T_{p}M$ is given by \begin{equation}
\bar{x}_{p}:\sum_{k=1}^{m}u_{k}\frac{\partial}{\partial x_{k}}|_{p}\mapsto (u_{1},\ldots,u_{m}) \end{equation} so it is obviously a vector space isomorphism. This implies that $\bar{x}:\pi^{-1}(U)\rightarrow U\times \mathbb{R}^{m}$ is a bundle chart for $TM$. It follows that \begin{equation}
\mathcal{B}=\{\big(\pi^{-1}(U),\bar{x}\big)\,|\,(U,x)\in\mathcal{A}\} \end{equation} is a bundle atlas transforming $(TM,M,\pi)$ into an $m-$dimensional topological vector bundle. Since the manifold $(M,\mathcal{A})$ is smooth, the vector bundle $(TM,M,\pi)$ together with the maximal bundle atlas $\hat{\mathcal{B}}$ induced by $\mathcal{B}$ is a smooth vector bundle.
\begin{defn} Let $(M, g)$ be a Riemannian manifold and let $f\in C^{\infty}(M)$ with $f>0$; in particular, for $f=1$ we write $\bar{g}^{1}=\bar{g}$.
A Riemannian rescaled metric $\bar{g}^{f}$ on the
tangent bundle $TM$ is said to be natural with respect to $g$ on $M$ if \begin{eqnarray}
i) \ \bar{g}^{f}_{(p,u)}(X^{h},Y^{h})&=&f(p) g_{p}(X,Y),\\ ii) \ \bar{g}^{f}_{(p,u)}(X^{h}, Y^{v})&=&0 \end{eqnarray} for all vector fields $X, Y\in C^{\infty}(TM)$ and $(p, u)\in TM.$ \end{defn} A rescaled natural metric $\bar{g}^{f}$ is constructed in such a way that the vertical and horizontal subbundles are orthogonal and the bundle map $\pi: (TM, \bar{g}^{f})\rightarrow (M, f g)$ is Riemannian submersion. The rescaled metric $\bar{g}^{f}$ induces a norm on each tangent space of $TM$ which we denote by $\parallel \cdot \parallel$. \begin{lem}\label{le:22} Let $(M, g)$ be a Riemannian manifold and $TM$ be the tangent bundle of $M$. Let $f>0$ and $f\in C^{\infty}(M)$. If the rescaled Riemannian metric $\bar{g}^{f}$ on $TM$ is natural with respect to $g$ on $M$ then the corresponding Levi-Civita connection $\overline{\nabla}^{f}$ satisfies \begin{eqnarray} i) \ \bar{g}(\overline{\nabla}^{f}_{X^{h}}Y^{h}, Z^{h})&=&\frac{1}{2f}\Big(X(f)g(Y, Z)+Y(f)g(Z, X)-Z(f)g(X, Y)\Big)+g(\nabla_{X}Y,Z),\\ ii) \ \bar{g}(\overline{\nabla}^{f}_{X^{h}}Y^{h}, Z^{v})&=&-\frac{1}{2}\bar{g}\Big((R(X, Y)u)^{v}, Z^{v}\Big) ,\\ iii) \ \bar{g}(\overline{\nabla}^{f}_{X^{h}}Y^{v}, Z^{h})&=&\frac{1}{2f}\bar{g}\Big((R(X, Z)u)^{v}, Y^{v}\Big),\\ iv) \ \bar{g}(\overline{\nabla}^{f}_{X^{h}}Y^{v}, Z^{v})&=&\frac{1}{2}\Big(X^{h}(\bar{g}(Y^{v}, Z^{v}))-\bar{g}(Y^{v}, (\nabla_{X}Z)^{v})
+\bar{g}(Z^{v}, (\nabla_{X}Y)^{v})\Big), \end{eqnarray} \begin{eqnarray} v) \ \bar{g}(\overline{\nabla}^{f}_{X^{v}}Y^{h}, Z^{h})&=&\frac{1}{2f}\bar{g}\Big((R(Y, Z)u)^{v}, X^{v}\Big),\\ vi) \ \bar{g}(\overline{\nabla}^{f}_{X^{v}}Y^{h}, Z^{v})&=&\frac{1}{2}\Big(Y^{h}(\bar{g}(Z^{v}, X^{v}))-\bar{g}(X^{v}, (\nabla_{Y}Z)^{v})
-\bar{g}(Z^{v},(\nabla_{Y}X)^{v})\Big),\\ vii) \ \bar{g}(\overline{\nabla}^{f}_{X^{v}}Y^{v}, Z^{h})&=&\frac{1}{2f}\Big(-Z^{h}(\bar{g}(X^{v}, Y^{v}))+\bar{g}(Y^{v}, (\nabla_{Z}X)^{v})
+\bar{g}(X^{v},(\nabla_{Z}Y)^{v})\Big),\\ viii) \ \bar{g}(\overline{\nabla}^{f}_{X^{v}}Y^{v}, Z^{v})&=&\frac{1}{2}\Big(X^{v}(\bar{g}(Y^{v}, Z^{v}))+Y^{v}(\bar{g}(Z^{v}, X^{v}))
-Z^{v}(\bar{g}(X^{v},Y^{v}))\Big) \end{eqnarray} for all vector fields $X, Y, Z\in C^{\infty}(TM)$ and $(p, u)\in TM.$ \end{lem} \begin{proof} We shall repeatedly make use of the Koszul formula for the Levi-Civita connection $\overline{\nabla}^{f}$ stating that \begin{eqnarray}
2\bar{g}^{f}(\overline{\nabla}^{f}_{X^{i}}Y^{j}, Z^{k})&=&X^{i}(\bar{g}^{f}(Y^{j},Z^{k}))+Y^{j}(\bar{g}^{f}(Z^{k}, X^{i}))
-Z^{k}(\bar{g}^{f}(X^{i},Y^{j})) \nonumber\\
&&-\bar{g}^{f}(X^{i},\ [Y^{j}, Z^{k}])+\bar{g}^{f}(Y^{j},[Z^{k},X^{i}])
+\bar{g}^{f}(Z^{k},\ [X^{i},Y^{j}]) \end{eqnarray} for all vector fields $X, Y, Z\in\mathcal{C}^{\infty}(TM)$ and $i, j, k\in \{h, v\}.$
$i)$ The result is a direct consequence of the following calculations using Definition 2.1 and Proposition 5.1 in \cite{GK2}, \begin{eqnarray}
2\bar{g}^{f}(\overline{\nabla}^{f}_{X^{h}}Y^{h}, Z^{h})&=&X^{h}(\bar{g}^{f}(Y^{h}, Z^{h}))+Y^{h}(\bar{g}^{f}(Z^{h}, X^{h}))
-Z^{h}(\bar{g}^{f}(X^{h}, Y^{h}))\nonumber\\
&&-\bar{g}^{f}(X^{h}, [Y^{h}, Z^{h}])+\bar{g}^{f}(Y^{h}, [Z^{h}, X^{h}])
+\bar{g}^{f}(Z^{h}, [X^{h}, Y^{h}])\nonumber\\
&=&X^{h}(f g(Y,Z)\circ\pi)+Y^{h}(f g(Z, X)\circ\pi)
-Z^{h}(f g(X, Y)\circ\pi)\nonumber\\
&&-\bar{g}^{f}(X^{h}, [Y, Z]^{h})+\bar{g}^{f}(Y^{h}, [Z, X]^{h})
+\bar{g}^{f}(Z^{h},[X, Y]^{h})\nonumber\\
&=& X(f)g(Y, Z)+Y(f)g(Z, X)-Z(f)g(X, Y)+2fg(\nabla_{X}Y, Z). \end{eqnarray}
$ii)$ The statement is obtained as follows. \begin{eqnarray}
2\bar{g}^{f}(\overline{\nabla}^{f}_{X^{h}}Y^{h}, Z^{v})&=&X^{h}(\bar{g}^{f}(Y^{h},Z^{v}))+Y^{h}(\bar{g}^{f}(Z^{v}, X^{h}))
-Z^{v}(\bar{g}^{f}(X^{h}, Y^{h})) \nonumber\\
&&-\bar{g}^{f}(X^{h},[Y^{h}, Z^{v}])+\bar{g}^{f}(Y^{h}, [Z^{v}, X^{h}])
+\bar{g}^{f}(Z^{v}, [X^{h}, Y^{h}]) \nonumber\\
&=&-Z^{v}(f g(X, Y))+\bar{g}^{f}(Z^{v}, [X^{h},Y^{h}]) \nonumber\\
&=&-\bar{g}^{f}(Z^{v},(R(X, Y)u)^{v}) \end{eqnarray}
$iii)$ and $v)$ are analogous to $ii)$.
$iv)$ Again using Definition 2.1 and Proposition 5.1 in \cite{GK2} we yield \begin{eqnarray}
2\bar{g}^{f}(\overline{\nabla}^{f}_{X^{h}}Y^{v}, Z^{v})&=&X^{h}(\bar{g}^{f}(Y^{v}, Z^{v}))+Y^{v}(\bar{g}^{f}(Z^{v}, X^{h}))
-Z^{v}(\bar{g}^{f}(X^{h}, Y^{v})) \nonumber\\
&&-\bar{g}^{f}(X^{h}, [Y^{v}, Z^{v}])+\bar{g}^{f}(Y^{v}, [Z^{v}, X^{h}])
+\bar{g}^{f}(Z^{v}, [X^{h}, Y^{v}]) \nonumber\\
&=&X^{h}(\bar{g}(Y^{v}, Z^{v}))-\bar{g}(Y^{v},(\nabla_{X}Z)^{v})
+\bar{g}(Z^{v}, (\nabla_{X}Y)^{v}) \end{eqnarray}
$vi)$ and $vii)$ are analogous to iv).
$viii)$ The statement is a direct consequence of the fact that the Lie bracket of two vertical vector fields vanishes. \end{proof} \begin{cor}\label{co:23} Let $(M, g)$ be a Riemannian manifold and $\bar{g}^{f}$ be a natural rescaled metric on the tangent bundle $TM$ of $M$.
Then the Levi-Civita connection $\overline{\nabla}^{f}$ satisfies \begin{equation} (\overline{\nabla}^{f}_{X^{h}}Y^{h})_{(p, u)}=(\nabla_{X}Y)^{h}_{(p, u)}-\frac{1}{2}\Big(R(X, Y)u\Big)^{v}+
\frac{1}{2f(p)}\Big(X(f)Y+Y(f)X-g(X, Y)\circ\pi(\texttt{d}(f\circ\pi))^{*}\Big)^{h}_{p} \end{equation} for all vector fields $X, Y\in C^{\infty}(TM)$ and $(p, u)\in TM.$ \end{cor} \begin{proof} By proposition 3.5 in \cite{GK2}, each tangent vector $Z\in T_{(p,u)}TM$ can be decomposed as $Z=Z^{h}_{1}+Z^{v}_{2}$. Using $i)$ and $ii)$ of Lemma 2.2, we have \begin{eqnarray}
\bar{g}(\overline{\nabla}^{f}_{X^{h}}Y^{h}, Z^{h}_{1}+Z^{v}_{2})&=&g(\nabla_{X}Y,Z_{1})+\frac{1}{2f}\Big(X(f)g(Y, Z_{1})+Y(f)g(X, Z_{1})-Z_{1}(f)g(X, Y)\Big)\nonumber\\
&&-\frac{1}{2}\bar{g}\Big((R(X, Y)u)^{v}, Z^{v}_{2}\Big)\nonumber\\
&=&\bar{g}\Big((\nabla_{X}Y)^{h}+\frac{1}{2f}\big(X(f)Y+Y(f)X-g(X, Y)(\texttt{d}f)^{*}\big)^{h}\nonumber\\
&&-\frac{1}{2}\big(R(X, Y)u\big)^{v},\ Z^{h}_{1}+Z^{v}_{2}\Big). \end{eqnarray}
Since $Z=Z^{h}_{1}+Z^{v}_{2}$ was arbitrary and $\bar{g}$ is non-degenerate, the formula follows. \end{proof}
\begin{defn} Let $(M, g)$ be a Riemannian manifold and $F:TM\rightarrow TM$ be a smooth bundle endomorphism of the tangent bundle $TM$. Then we define the vertical and horizontal lifts $F^{v}:TM\rightarrow TTM$, $F^{h}:TM\rightarrow TTM$ of $F$ by \begin{equation} F^{v}(\eta)=\sum_{i=1}^{m}\eta_{i}F(\partial_{i})^{v} \quad and \quad F^{h}(\eta)=\sum_{i=1}^{m}\eta_{i}F(\partial_{i})^{h}, \end{equation} where $\sum_{i=1}^{m}\eta_{i}\partial_{i}\in\pi^{-1}(V)$ is a local representation of $\eta\in C^{\infty}(TM)$. \end{defn} \begin{lem}\label{le:25} Let $(M, g)$ be a Riemannian manifold and the tangent bundle $TM$ be equipped with a rescaled metric $\bar{g}^{f}$ which is natural with respect to $g$ on $M$. If $F:TM\rightarrow TM$ is a smooth bundle endomorphism of the tangent bundle, then \begin{eqnarray} i) \ (\overline{\nabla}^{f}_{X^{v}}F^{v})_{\xi}&=&F(X_{p})^{v}_{\xi}+\sum_{i=1}^{m}u(x_{i})(\overline{\nabla}^{f}_{X^{v}}F(\partial_{i})^{v})_{\xi},\\ ii) \ (\overline{\nabla}^{f}_{X^{v}}F^{h})_{\xi}&=&F(X_{p})^{h}_{\xi}+\sum_{i=1}^{m}u(x_{i})(\overline{\nabla}^{f}_{X^{v}}F(\partial_{i})^{h})_{\xi},\\ iii) \ (\overline{\nabla}^{f}_{X^{h}}F^{v})_{\xi}&=&(\overline{\nabla}^{f}_{X^{h}}F(u)^{v})_{\xi},\\ iv) \ (\overline{\nabla}^{f}_{X^{h}}F^{h})_{\xi}&=&(\overline{\nabla}^{f}_{X^{h}}F(u)^{h})_{\xi}, \end{eqnarray} for any $X\in C^{\infty}(TM)$, $\xi=(p, u)\in TM$ and $\eta=\sum_{i=1}^{m}\eta_{i}\partial_{i}\in\pi^{-1}(V)$. \end{lem} \begin{proof} Let $(x_{1},\cdots,x_{m})$ be local coordinates on $M$ in a neighborhood $V$ of $p$. Then, using the abbreviation $X_{i}$ for $\frac{\partial}{\partial x_{i}}$, we have $X^{v}(\texttt{d}x_{i})=\texttt{d}x_{i}(X)=X(x_{i})$ and $\texttt{d}x_{i}(p,u)=\eta_{i}(p)$ for $i\in\{1,\cdots,m\}$. Hence \begin{eqnarray}
(\overline{\nabla}^{f}_{X^{v}}F^{v})_{\xi}&=&\sum_{i=1}^{m}\overline{\nabla}^{f}_{X^{v}}(\eta_{i}F(\partial_{i})^{v})
=\sum_{i=1}^{m}X^{v}(\texttt{d}x_{i})F(\partial_{i})^{v}
+\eta_{i}\overline{\nabla}^{f}_{X^{v}}F(\partial_{i})^{v} \nonumber\\
&=&\sum_{i=1}^{m}X(x_{i})F(\partial_{i})^{v}
+\eta_{i}\overline{\nabla}^{f}_{X^{v}}F(\partial_{i})^{v}
=F(X_{p})^{v}_{\xi}+\sum_{i=1}^{m}u(x_{i})(\overline{\nabla}^{f}_{X^{v}}F(\partial_{i})^{v})_{\xi}. \end{eqnarray} Similarly we have \begin{eqnarray}
(\overline{\nabla}^{f}_{X^{v}}F^{h})_{\xi}&=&\sum_{i=1}^{m}\overline{\nabla}^{f}_{X^{v}}(\eta_{i}F(\partial_{i})^{h})
=\sum_{i=1}^{m}X^{v}(\texttt{d}x_{i})F(\partial_{i})^{h}
+\eta_{i}\overline{\nabla}^{f}_{X^{v}}F(\partial_{i})^{h} \nonumber\\
&=&\sum_{i=1}^{m}X(x_{i})F(\partial_{i})^{h}
+\eta_{i}\overline{\nabla}^{f}_{X^{v}}F(\partial_{i})^{h}
=F(X_{p})^{h}_{\xi}+\sum_{i=1}^{m}u(x_{i})(\overline{\nabla}^{f}_{X^{v}}F(\partial_{i})^{h})_{\xi}. \end{eqnarray} For the last two equations of the lemma we use a differentiable curve $\gamma: [0, 1]\rightarrow M$ such that $\gamma(0)=p$ and $\gamma'(0)=X_{p}$ to get a differentiable curve $U\circ\gamma: [0, 1]\rightarrow TM$ such that $U\circ\gamma(0)=\xi$ and $(U\circ\gamma)'(0)=X^{h}_{\xi}$. By the definition of $F^{v}$ and $F^{h}$ we get \begin{eqnarray}
F^{v}|_{U\circ\gamma(t)}&=&\sum_{i=1}^{m}\texttt{d}x_{i}F(\partial_{i})^{v}|_{U\circ\gamma(t)}
=\sum_{i=1}^{m}\texttt{d}x_{i}(U\circ\gamma(t))F(\partial_{i})^{v}|_{U\circ\gamma(t)} \nonumber\\
&=&F\Big(\sum_{i=1}^{m}\texttt{d}x_{i}(U\circ\gamma(t))\partial_{i}\Big)^{v}|_{U\circ\gamma(t)}=(F\circ U)^{v}|_{U\circ\gamma(t)}.
Similarly $F^{h}|_{U\circ\gamma}=(F\circ U)^{h}_{U\circ\gamma}$. This proves parts $iii)$ and $iv)$. \end{proof}
\section{The Rescaled Sasaki Metric} This section is devoted to the Sasaki metric $\hat{g}$ on the tangent bundle $TM$ introduced by Sasaki in the famous paper \cite{Sa}. We calculate its Levi-Civita connection $\hat{\nabla}^{f}$, its Riemann curvature tensor and obtain some interesting connections between the geometric properties of the manifold $(M, g)$ and its tangent bundle $(TM, \hat{g}^{f})$ equipped with the rescaled Sasaki metric. \begin{defn} Let $(M, g)$ be a Riemannian manifold. Let $f>0$ and $f\in C^{\infty}(M)$. Then the rescaled Sasaki metric $\hat{g}^{f}$ on the tangent bundle $TM$ of $M$ is given by \begin{eqnarray} i) \ \hat{g}_{(x,u)}^{f}(X^{h}, Y^{h})&=&f(p)g_{p}(X, Y),\\ ii) \ \hat{g}_{(x,u)}^{f}(X^{v}, Y^{h})&=&0 ,\\ iii) \ \hat{g}_{(x,u)}^{f}(X^{v}, Y^{v})&=&g_{p}(X, Y). \end{eqnarray} for all vector fields $X, Y\in C^{\infty}(TM)$. \end{defn} The rescaled Sasaki metric is obviously contained in the class of rescaled $g-$natural metrics. It is constructed in such a manner that inner products are respected not only by lifting vectors horizontally but vertically as well.
\begin{prop}\label{pr: 32} Let $(M, g)$ be a Riemannian manifold and $\hat{\nabla}^{f}$ be Levi-Civita connection of the tangent bundle $(TM, \hat{g}^{f})$ equipped with the rescaled Sasaki metric. Then \begin{eqnarray} i) \ (\hat{\nabla}^{f}_{X^{h}}Y^{h})_{(p,u)}&=&(\nabla_{X}Y)^{h}_{(p,u)}+\frac{1}{2f(p)}
\Big((X(f)Y+Y(f)X)-g(X,Y)\circ\pi(\texttt{d}(f\circ\pi))^{*}\Big)^{h}_{p}\nonumber\\ &&-\frac{1}{2}\Big(R_{p}(X,Y)u\Big)^{v},\\ ii) \ (\hat{\nabla}^{f}_{X^{h}}Y^{v})_{(p,u)}&=&(\nabla_{X}Y)^{v}_{(p,u)}+\frac{1}{2f(p)}\Big(R_{p}(u,Y)X\Big)^{h},\\ iii) \ (\hat{\nabla}^{f}_{X^{v}}Y^{h})_{(p,u)}&=&\frac{1}{2f(p)}\Big(R_{p}(u,X)Y\Big)^{h},\\ iv) \ (\hat{\nabla}^{f}_{X^{v}}Y^{v})_{(p,u)}&=&0 \end{eqnarray} for any $X,Y\in C^{\infty}(TM)$, $\xi=(p, u)\in TM$. \end{prop} \begin{proof}
$i)$ The statement is a direct consequence of Corollary 2.3.
$ii)$ By applying Lemma 2.2 we obtain the following for the horizontal part \begin{eqnarray}
2\hat{g}^{f}(\hat{\nabla}^{f}_{X^{h}}Y^{v}, Z^{h})&=&-\hat{g}^{f}((R(Z,X)u)^{v},Y^{v})=-g(R(u,Y)Z,X)\nonumber\\
&=&g(R(u,Y)X,Z)=\frac{1}{f}\hat{g}^{f}((R(u,Y)X)^{h},Z^{h}), \end{eqnarray} As for the vertical part note that \begin{eqnarray}
2\hat{g}^{f}(\hat{\nabla}^{f}_{X^{h}}Y^{v}, Z^{v})&=&X^{h}(\hat{g}^{f}(Y^{v},Z^{v}))+\hat{g}^{f}(Z^{v},(\nabla_{X}Y)^{v})
-\hat{g}^{f}(Y^{v},(\nabla_{X}Z)^{v}) \nonumber\\
&=&X(g(Y,Z))+g(Z,\nabla_{X}Y)-g(Y,\nabla_{X}Z) \nonumber\\
&=&2\hat{g}^{f}((\nabla_{X}Y)^{v},Z^{v}). \end{eqnarray}
$iii)$ For the horizontal part we get calculations similar to those above \begin{eqnarray}
2\hat{g}(\hat{\nabla}^{f}_{X^{v}}Y^{h}, Z^{h})&=&\frac{1}{f}\hat{g}(X^{v},(R(Y,Z)u)^{v})=\frac{1}{f}g(X,R(Y,Z)u)\nonumber\\
&=&\frac{1}{f}g(R(u,X)Y,Z). \end{eqnarray} The rest follows by \begin{eqnarray}
2\hat{g}(\hat{\nabla}^{f}_{X^{v}}Y^{h}, Z^{v})&=&Y^{h}(\hat{g}(Z^{v},X^{v}))-\hat{g}(Z^{v},(\nabla_{Y}X)^{v})
-\hat{g}(X^{v},(\nabla_{Y}Z)^{v}) \nonumber\\
&=&Y(g(Z,X))-g(Z,\nabla_{Y}X)-g(X,\nabla_{Y}Z)=0. \end{eqnarray}
$iv)$ Using Lemma 2.2 again we yield \begin{eqnarray}
2f\hat{g}(\hat{\nabla}^{f}_{X^{v}}Y^{v}, Z^{h})&=&-Z^{h}(\hat{g}(X^{v},Y^{v}))+\hat{g}(Y^{v},(\nabla_{Z}X)^{v})
+\hat{g}(X^{v},(\nabla_{Z}Y)^{v}) \nonumber\\
&=&-Z(g(X,Y))+g(Y,\nabla_{Z}X)+g(X,\nabla_{Z}Y)=0, \end{eqnarray} and \begin{eqnarray}
2\hat{g}(\hat{\nabla}^{f}_{X^{v}}Y^{v}, Z^{v})&=&X^{v}(\hat{g}(Y^{v},Z^{v}))+Y^{v}(\hat{g}(Z^{v},X^{v}))
-Z^{v}(\hat{g}(X^{v},Y^{v})) \nonumber\\
&=&X^{v}(g(Y,Z))+Y^{v}(g(Z,X))-Z^{v}(g(X,Y))=0. \end{eqnarray} This completes the proof. \end{proof}
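As a consistency check, setting $f\equiv 1$ in Proposition 3.2 recovers the classical formulas for the Levi-Civita connection of the Sasaki metric (cf. \cite{Ko}):
\begin{equation*}
\hat{\nabla}_{X^{h}}Y^{h}=(\nabla_{X}Y)^{h}-\frac{1}{2}\big(R(X,Y)u\big)^{v},\qquad
\hat{\nabla}_{X^{h}}Y^{v}=(\nabla_{X}Y)^{v}+\frac{1}{2}\big(R(u,Y)X\big)^{h},
\end{equation*}
\begin{equation*}
\hat{\nabla}_{X^{v}}Y^{h}=\frac{1}{2}\big(R(u,X)Y\big)^{h},\qquad
\hat{\nabla}_{X^{v}}Y^{v}=0,
\end{equation*}
since the term involving $X(f)$, $Y(f)$ and $(\texttt{d}(f\circ\pi))^{*}$ vanishes for constant $f$.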
We shall now turn our attention to the Riemann Curvature tensor $\hat{R}^{f}$ of the tangent bundle $TM$ equipped with the rescaled Sasaki metric $\hat{g}^{f}$. For this we need the following useful Lemma. \begin{lem}\label{le:33} Let $(M, g)$ be a Riemannian manifold and $\hat{\nabla}^{f}$ be the Levi-Civita connection of the tangent bundle $(TM, \hat{g}^{f})$, equipped with the rescaled Sasaki metric $\hat{g}^{f}$. Let $F:TM\rightarrow TM$ is a smooth bundle endomorphism of the tangent bundle, then \begin{equation} (\hat{\nabla}^{f}_{X^{v}}F^{v})_{\xi}=F(X_{p})^{v}_{\xi}, \end{equation} and \begin{equation} (\hat{\nabla}^{f}_{X^{v}}F^{h})_{\xi}=F(X_{p})^{h}_{\xi}+\frac{1}{2f(p)}\Big(R(u,X)F(u)\Big)^{h}_{\xi} \end{equation} for any $X\in C^{\infty}(TM)$ and $\xi=(p, u)\in TM$. \end{lem} \begin{proof} By applying $i)$ of Lemma 2.5 and $iv)$ of Proposition 3.2 we obtain the following \begin{equation} (\hat{\nabla}^{f}_{X^{v}}F^{v})_{\xi}=F(X_{p})^{v}_{\xi}+\sum_{i=1}^{m}u(x_{i})(\overline{\nabla}^{f}_{X^{v}}F(\partial_{i})^{v})_{\xi}
=F(X_{p})^{v}_{\xi}. \end{equation} By applying $ii)$ of Lemma 2.5 and $iii)$ of Proposition 3.2, we get \begin{equation} (\hat{\nabla}^{f}_{X^{v}}F^{h})_{\xi}=F(X_{p})^{h}_{\xi}+(\overline{\nabla}^{f}_{X^{v}}F(u)^{h})_{\xi}
=F(X_{p})^{h}_{\xi}+\frac{1}{2f(p)}\Big(R(u,X)F(u)\Big)^{h}_{\xi}. \end{equation}
\end{proof} \begin{prop}\label{pr: 34} Let $(M, g)$ be a Riemannian manifold and $\hat{R}^{f}$ be the Riemann curvature tensor of the tangent bundle $(TM, \hat{g}^{f})$ equipped with the rescaled Sasaki metric. Then the following formulae hold \begin{eqnarray} i) \ \hat{R}^{f}_{(p,u)}(X^{v},Y^{v})Z^{v}&=&0,\\ ii)\ \hat{R}^{f}_{(p,u)}(X^{h},Y^{v})Z^{v}&=&\Big(-\frac{1}{2f(p)}R(Y,Z)X-\frac{1}{4f^{2}(p)}R(u,Y)(R(u,Z)X)\Big)^{h}_{p}, \end{eqnarray} \begin{eqnarray} iii)\ \hat{R}^{f}_{(p,u)}(X^{v},Y^{v})Z^{h}&=&\Big(-\frac{1}{2f(p)}R(Y,X)Z-\frac{1}{4f^{2}(p)}R(u,Y)(R(u,X)Z)\Big)^{h}_{p} \nonumber\\
&&+\Big(\frac{1}{2f(p)}R(X,Y)Z+\frac{1}{4f^{2}(p)}R(u,X)(R(u,Y)Z)\Big)^{h}_{p}, \end{eqnarray}
\begin{eqnarray} iv) \ \hat{R}^{f}_{(p,u)}(X^{h},Y^{v})Z^{h}&=&\Big(\nabla_{X}(\frac{1}{2f(p)}R(u,Y)Z)\Big)^{h}_{p}+\frac{1}{4f(p)}R((R(u,Y)Z),X)u\nonumber\\
&&+A_{f}\Big(X,\frac{1}{2f(p)}(R(u,Y)Z)\Big) \nonumber\\
&&-\frac{1}{2f(p)}\Big(R(u,Y)(\nabla_{X}Z+A_{f}(X,Z))\Big)^{h}_{p} \nonumber\\
&&+\frac{1}{2}\Big(R(X,Z)u\Big)^{v}_{p}-\frac{1}{2f(p)}\Big(R(u,\nabla_{X}Y)Z\Big)^{h}_{p}, \end{eqnarray} \begin{eqnarray} v) \ \hat{R}^{f}_{(p,u)}(X^{h},Y^{h})Z^{v}&=&\Big(\nabla_{X}(\frac{1}{2f(p)}R(u,Z)Y)\Big)^{h}_{p}-\Big(\nabla_{Y}
(\frac{1}{2f(p)}R(u,Z)X)\Big)^{h}_{p}\nonumber\\
&& +\frac{1}{4f(p)}R(R(u,Z)Y,X)u-\frac{1}{4f(p)}R(R(u,Z)X,Y)u \nonumber\\
&&+\frac{1}{2f(p)}A_{f}(X,R(u,Z)Y)-\frac{1}{2f(p)}A_{f}(Y,R(u,Z)X) \nonumber\\
&&+\frac{1}{2f(p)}R(u,Z)[Y,X]+\Big(R(X,Y)u\Big)^{v}_{p}\nonumber\\
&&+\frac{1}{2f(p)}\Big(R(u,\nabla_{Y}Z)X\Big)^{h}_{p}-\frac{1}{2f(p)}\Big(R(u,\nabla_{X}Z)Y\Big)^{h}_{p}, \end{eqnarray} \begin{eqnarray} vi) \ \hat{R}^{f}_{(p,u)}(X^{h},Y^{h})Z^{h} &=&\hat{\nabla}^{f}_{X^{h}}\hat{\nabla}^{f}_{Y^{h}}Z^{h}-\hat{\nabla}^{f}_{Y^{h}}\hat{\nabla}^{f}_{X^{h}}Z^{h}
-\hat{\nabla}^{f}_{[X^{h},Y^{h}]}Z^{h}\nonumber\\
&=&\hat{\nabla}^{f}_{X^{h}}(F_{1}^{h})-\hat{\nabla}^{f}_{Y^{h}}\Big((\nabla_{X}Z)^{h}
+A_{f}(X,Z)^{h}+F_{2}^{h}\Big)-\hat{\nabla}^{f}_{(\nabla_{X}Y)^{h}}Z^{h} \nonumber\\
&=&\nabla_{X}\Big(\nabla_{Y}Z+A_{f}(Y,Z)\Big)^{h}+A_{f}\Big(X,\nabla_{Y}Z+A_{f}(Y,Z)\Big)^{h}\nonumber\\
&&-\frac{1}{2}\Big(R(X,\nabla_{Y}Z+A_{f}(Y,Z))u\Big)^{v}
-\nabla_{Y}\Big(\nabla_{X}Z+A_{f}(X,Z)\Big)^{h} \nonumber\\
&&-A_{f}\Big(Y,\nabla_{X}Z+A_{f}(X,Z)\Big)^{h}+\frac{1}{2}\Big(R(Y,\nabla_{X}Z+A_{f}(X,Z))u\Big)^{v} \nonumber\\
&&-\Big(\nabla_{[X,Y]}Z\Big)^{h}-A_{f}([X,Y],Z)^{h}-\frac{1}{2}\Big(R([X,Y],Z)u\Big)^{v}\nonumber\\
&&+\frac{1}{2f}\Big(R(u,R(X,Y)u)Z\Big)^{h}+\frac{1}{2}\Big(\nabla_{Y}(R(X,Z)u)\Big)^{v}\nonumber\\
&&+\frac{1}{4f}\Big(R(u,R(X,Z)u)Y\Big)^{h}-\frac{1}{2}\Big(\nabla_{X}(R(Y,Z)u)\Big)^{v}\nonumber\\
&&-\frac{1}{4f}\Big(R(u,R(Y,Z)u)X\Big)^{h}. \end{eqnarray} for any $X, Y, Z\in T_{p}M$. \end{prop} \begin{proof} $i)$ The result follows immediately from Proposition 3.2.
$ii)$ Let $F: TM\rightarrow TM$ be the bundle endomorphism given by \begin{equation} F: u\mapsto \frac{1}{2f}R(u,Z)X. \end{equation}
Applying Proposition 3.2 and Lemma 3.3 we have \begin{equation} \hat{\nabla}^{f}_{Y^{v}}F^{h}=F(Y)^{h}+\frac{1}{2f}\Big(R(u,Y)F(u)\Big)^{h}. \end{equation} This implies that
\begin{eqnarray} \hat{R}^{f}(X^{h},Y^{v})Z^{v}&=&\hat{\nabla}^{f}_{X^{h}}\hat{\nabla}^{f}_{Y^{v}}Z^{v}-\hat{\nabla}^{f}_{Y^{v}}\hat{\nabla}^{f}_{X^{h}}Z^{v}
-\hat{\nabla}^{f}_{[X^{h},Y^{v}]}Z^{v}\nonumber\\
&=&-\hat{\nabla}^{f}_{Y^{v}}\hat{\nabla}^{f}_{X^{h}}Z^{v}
=-\hat{\nabla}^{f}_{Y^{v}}\Big((\nabla_{X}Z)^{v}+F^{h}\Big)\nonumber\\
&=&-\hat{\nabla}^{f}_{Y^{v}}F^{h}=-F(Y)^{h}-\frac{1}{2f}\Big(R(u,Y)F(u)\Big)^{h} \nonumber\\
&=&\Big(-\frac{1}{2f}R(Y,Z)X-\frac{1}{4f^{2}}R(u,Y)(R(u,Z)X)\Big)^{h}. \end{eqnarray}
$iii)$ Using $ii)$ and $1^{st}$ Bianchi identity we get \begin{equation} \hat{R}^{f}(X^{v},Y^{v})Z^{h}=\hat{R}^{f}(Z^{h},Y^{v})X^{v}-\hat{R}^{f}(Z^{h},X^{v})Y^{v} \end{equation} which gives \begin{eqnarray}
\hat{R}^{f}(X^{v},Y^{v})Z^{h}&=&\Big(-\frac{1}{2f}R(Y,X)Z-\frac{1}{4f^{2}}R(u,Y)(R(u,X)Z)\Big)^{h} \nonumber\\
&&+\Big(\frac{1}{2f}R(X,Y)Z+\frac{1}{4f^{2}}R(u,X)(R(u,Y)Z)\Big)^{h}. \end{eqnarray}
$iv)$ Let $F_{1}, F_{2}: TM\rightarrow TM$ be the bundle endomorphisms given by \begin{equation} F_{1}(u)\mapsto \frac{1}{2f}R(u,Y)Z \quad and \quad F_{2}(u)\mapsto -\frac{1}{2f}R(X,Z)u. \end{equation} Then Proposition 3.2 implies that \begin{eqnarray} \hat{R}^{f}(X^{h},Y^{v})Z^{h}&=&\hat{\nabla}^{f}_{X^{h}}\hat{\nabla}^{f}_{Y^{v}}Z^{h}-\hat{\nabla}^{f}_{Y^{v}}\hat{\nabla}^{f}_{X^{h}}Z^{h}
-\hat{\nabla}^{f}_{[X^{h},Y^{v}]}Z^{h}\nonumber\\
&=&\hat{\nabla}^{f}_{X^{h}}(F_{1}^{h})-\hat{\nabla}^{f}_{Y^{v}}\Big((\nabla_{X}Z)^{h}
+A_{f}(X,Z)^{h}+F_{2}^{v}\Big)-\hat{\nabla}^{f}_{(\nabla_{X}Y)^{v}}Z^{h}\nonumber\\
&=&(\nabla_{X}F_{1}(u))^{h}-\frac{1}{2}\Big(R(X,F_{1}(u))u\Big)^{v}+A_{f}(X,F_{1}(u))^{h}\nonumber\\
&&-\frac{1}{2f}\Big(R(u,Y)(\nabla_{X}Z+A_{f}(X,Z))\Big)^{h}-F_{2}(Y)^{v}
-\frac{1}{2f}\Big(R(u,\nabla_{X}Y)Z\Big)^{h}\nonumber\\
&=&\Big(\nabla_{X}(\frac{1}{2f}R(u,Y)Z)\Big)^{h}+\frac{1}{4f}R(R(u,Y)Z,X)u\nonumber\\
&&+A_{f}\Big(X,\frac{1}{2f}(R(u,Y)Z)\Big)-\frac{1}{2f}\Big(R(u,Y)(\nabla_{X}Z+A_{f}(X,Z))\Big)^{h} \nonumber\\
&&+\frac{1}{2}\Big(R(X,Z)u\Big)^{v}-\frac{1}{2f}\Big(R(u,\nabla_{X}Y)Z\Big)^{h}. \end{eqnarray}
$v)$ Applying part $iv)$ and $1^{st}$ Bianchi identity \begin{equation} \hat{R}^{f}(X^{h},Y^{h})Z^{v}=\hat{R}^{f}(X^{h},Z^{v})Y^{h}-\hat{R}^{f}(Y^{h},Z^{v})X^{h}, \end{equation} we get \begin{eqnarray} \hat{R}^{f}(X^{h},Y^{h})Z^{v}&=&\Big(\nabla_{X}(\frac{1}{2f}R(u,Z)Y)\Big)^{h}+\frac{1}{4f}R(R(u,Z)Y,X)u+A_{f}\Big(X,\frac{1}{2f}(R(u,Z)Y)\Big)\nonumber\\
&&-\frac{1}{2f}\Big(R(u,Z)(\nabla_{X}Y+A_{f}(X,Y))\Big)^{h}+\frac{1}{2}(R(X,Y)u)^{v}
-\frac{1}{2f}\Big(R(u,\nabla_{X}Z)Y\Big)^{h}\nonumber\\
&&-\Big(\nabla_{Y}(\frac{1}{2f}R(u,Z)X)\Big)^{h}-\frac{1}{4f}R(R(u,Z)X,Y)u-A_{f}\Big(Y,\frac{1}{2f}R(u,Z)X\Big)\nonumber\\
&&+\frac{1}{2f}\Big(R(u,Z)(\nabla_{Y}X+A_{f}(Y,X))\Big)^{h}-\frac{1}{2}(R(Y,X)u)^{v}
+\frac{1}{2f}\Big(R(u,\nabla_{Y}Z)X\Big)^{h},\nonumber\\ \end{eqnarray} from which the result follows.
$vi)$ By $i)$ of Proposition 3.2 and direct calculation we get \begin{eqnarray} \hat{R}^{f}(X^{h},Y^{h})Z^{h}&=&\hat{\nabla}^{f}_{X^{h}}\hat{\nabla}^{f}_{Y^{h}}Z^{h}-\hat{\nabla}^{f}_{Y^{h}}\hat{\nabla}^{f}_{X^{h}}Z^{h}
-\hat{\nabla}^{f}_{[X^{h},Y^{h}]}Z^{h}\nonumber\\
&=&\hat{\nabla}^{f}_{X^{h}}(F_{1}^{h})-\hat{\nabla}^{f}_{Y^{h}}\Big((\nabla_{X}Z)^{h}
+A_{f}(X,Z)^{h}+F_{2}^{h}\Big)-\hat{\nabla}^{f}_{(\nabla_{X}Y)^{h}}Z^{h} \nonumber\\
&=&\nabla_{X}\Big(\nabla_{Y}Z+A_{f}(Y,Z)\Big)^{h}+A_{f}\Big(X,\nabla_{Y}Z+A_{f}(Y,Z)\Big)^{h}\nonumber\\
&&-\frac{1}{2}\Big(R(X,\nabla_{Y}Z+A_{f}(Y,Z))u\Big)^{v}
-\nabla_{Y}\Big(\nabla_{X}Z+A_{f}(X,Z)\Big)^{h} \nonumber\\
&&-A_{f}\Big(Y,\nabla_{X}Z+A_{f}(X,Z)\Big)^{h}+\frac{1}{2}\Big(R(Y,\nabla_{X}Z+A_{f}(X,Z))u\Big)^{v} \nonumber\\
&&-\Big(\nabla_{[X,Y]}Z\Big)^{h}-A_{f}([X,Y],Z)^{h}-\frac{1}{2}\Big(R([X,Y],Z)u\Big)^{v}\nonumber\\
&&+\frac{1}{2f}\Big(R(u,R(X,Y)u)Z\Big)^{h}+\frac{1}{2}\Big(\nabla_{Y}(R(X,Z)u)\Big)^{v}\nonumber\\
&&+\frac{1}{4f}\Big(R(u,R(X,Z)u)Y\Big)^{h}-\frac{1}{2}\Big(\nabla_{X}(R(Y,Z)u)\Big)^{v} \nonumber\\
&&-\frac{1}{4f}\Big(R(u,R(Y,Z)u)X\Big)^{h}. \end{eqnarray} \end{proof}
We shall now compare the geometries of the manifold $(M,g)$ and its tangent bundle $TM$ equipped with the rescaled Sasaki metric $\hat{g}^{f}$. \begin{thm}\label{th:35} Let $(M, g)$ be a Riemannian manifold and $TM$ be its tangent bundle with the rescaled Sasaki metric $\hat{g}^{f}$. Then $TM$ is flat if and only if $M$ is flat and $f=C$ (constant). \end{thm} \begin{proof} We apply Proposition 3.4 together with the formula \begin{equation} A_{f}(X,Y)=\frac{1}{2f}\Big(X(f)Y+Y(f)X-g(X,Y)(\texttt{d}f)^{*}\Big). \end{equation} If $f$ is constant, then \begin{equation} X(f)Y+Y(f)X-g(X,Y)(\texttt{d}f)^{*}=0, \end{equation} i.e. $A_{f}=0$, and $R\equiv 0$ implies $\hat{R}^{f}\equiv 0$. Conversely, assume that $\hat{R}^{f}\equiv 0$. Calculating the Riemann curvature tensor for three horizontal vector fields at $(p, 0)$ we obtain \begin{eqnarray} \hat{R}^{f}(X^{h},Y^{h})Z^{h}&=&R(X,Y)Z+\nabla_{X}A_{f}(Y,Z)-\nabla_{Y}A_{f}(X,Z)+A_{f}\Big(X,\nabla_{Y}Z+A_{f}(Y,Z)\Big)\nonumber\\
&&-A_{f}\Big(Y,\nabla_{X}Z+A_{f}(X,Z)\Big)-A_{f}([X,Y],Z)=0, \end{eqnarray} which forces $R=0$ and $f=C$ (constant). \end{proof} \begin{cor}\label{co:36} Let $(M, g)$ be a Riemannian manifold and $TM$ be its tangent bundle with the rescaled Sasaki metric $\hat{g}^{f}$. If $f\neq C$ (constant), then $(TM,\hat{g}^{f})$ is not flat. \end{cor}
For the sectional curvatures of the tangent bundle we have the following. \begin{prop}\label{pr: 37} Let $(M, g)$ be a Riemannian manifold and equip the tangent bundle $(TM, \hat{g}^{f})$ with the rescaled Sasaki metric $\hat{g}^{f}$. Let $(p,u)\in TM$ and $X,Y\in T_{p}M$ be two orthonormal tangent vectors at $p$. Let $\hat{K}^{f}(X^{i},Y^{j})$ denote the sectional curvature of the plane spanned by $X^{i}$ and $Y^{j}$ with $i,j\in \{h,v\}$. Then we have the following \begin{eqnarray} i) \ \hat{K}^{f}_{(p,u)}(X^{v},Y^{v})&=&0,\\
ii)\ \hat{K}^{f}_{(p,u)}(X^{h},Y^{v})&=&\frac{1}{4f^{2}(p)}|R(u,Y)X|^{2},\\
iii) \ \hat{K}^{f}_{(p,u)}(X^{h},Y^{h})&=&\frac{1}{f(p)}K(X,Y)-\frac{3}{4f^{2}(p)}|R(X,Y)u|^{2}+L_{f}(X,Y) , \end{eqnarray} where \begin{eqnarray*} L_{f}(X,Y)&=&\frac{1}{f}\Big(g(\nabla_{X}A_{f}(Y,Y)-\nabla_{Y}A_{f}(X,Y),X)-g(A_{f}(X,\nabla_{Y}Y+A_{f}(Y,Y)),X)\nonumber\\
&&-g(A_{f}(Y,\nabla_{X}Y+A_{f}(X,Y)),X)-g(A_{f}([X,Y],Y),X)\Big) . \end{eqnarray*}
\end{prop} \begin{proof} $i)$ It follows directly from Proposition 3.4 that the sectional curvature for a plane spanned by two vertical vectors vanishes.
$ii)$ Applying part $ii)$ of proposition 3.4 we get \begin{eqnarray}
\hat{K}^{f}(X^{h},Y^{v})&=&\frac{\hat{g}^{f}(\hat{R}^{f}(X^{h},Y^{v})Y^{v},X^{h})}
{\hat{g}^{f}(X^{h},X^{h})\hat{g}^{f}(Y^{v},Y^{v})} \nonumber\\
&=&\frac{1}{f}\hat{g}^{f}\Big(\Big(-\frac{1}{2f}R(Y,Y)X-\frac{1}{4f^{2}}R(u,Y)(R(u,Y)X)\Big)^{h},X^{h}\Big) \nonumber\\
&=&-\frac{1}{4f^{2}}g\big(R(u,Y)(R(u,Y)X),X\big)\nonumber\\
&=&\frac{1}{4f^{2}}g(R(u,Y)X, R(u,Y)X) =\frac{1}{4f^{2}}|R(u,Y)X|^{2}. \end{eqnarray}
$iii)$ It follows immediately from proposition 3.4 that \begin{eqnarray}
\hat{K}^{f}(X^{h},Y^{h})&=&\frac{1}{f^{2}}\hat{g}^{f}(\hat{R}^{f}(X^{h},Y^{h})Y^{h},X^{h}) \nonumber\\
&=&\frac{1}{f}g(R(X,Y)Y,X)+\frac{3}{4f^{2}}g(R(Y,X)u,R(X,Y)u)\nonumber\\
&=&\frac{1}{f}K(X,Y)-\frac{3}{4f^{2}}|R(X,Y)u|^{2}+\frac{1}{f}\Big(g(\nabla_{X}A_{f}(Y,Y)\nonumber\\
&&-\nabla_{Y}A_{f}(X,Y),X)-g(A_{f}(X,\nabla_{Y}Y+A_{f}(Y,Y)),X)\nonumber\\
&&-g(A_{f}(Y,\nabla_{X}Y+A_{f}(X,Y)),X)-g(A_{f}([X,Y],Y),X)\Big). \end{eqnarray} \end{proof}
\begin{thm}\label{th:38} Let $(M, g)$ be a Riemannian manifold and equip the tangent bundle $(TM, \hat{g}^{f})$ with the rescaled Sasaki metric $\hat{g}^{f}$. If the sectional curvature of $(TM,\hat{g}^{f})$ is bounded from above, then $(M,g)$ is flat; if $M$ is compact and the sectional curvature of $(TM,\hat{g}^{f})$ is bounded from below, then $(M,g)$ is flat. \end{thm} \begin{proof} The statement follows directly from Proposition 3.7. \end{proof} \begin{prop}\label{pr: 39} Let $(M, g)$ be a Riemannian manifold and equip the tangent bundle $(TM, \hat{g}^{f})$ with the rescaled Sasaki metric $\hat{g}^{f}$. Let $(p,u)\in TM$ and $X,Y\in T_{p}M$ be two orthonormal tangent vectors at $p$. Let $S$ denote the scalar curvature of $g$ and $\hat{S}^{f}$ denote the scalar curvature of $\hat{g}^{f}$. Then the following equation holds \begin{equation} \hat{S}^{f}=\frac{1}{f}S-\frac{1}{4f^{2}}\sum_{i,j=1}^{m}\mid R(X_{i},X_{j})u\mid^{2}+\sum_{i,j=1}^{m}L_{f}(X_{i},X_{j}) \end{equation} where $\{X_{1},\cdots,X_{m}\}$ is a local orthonormal frame for $TM$. \end{prop} \begin{proof} For a local orthonormal frame $\{\frac{1}{\sqrt{f}}Y_{1},\cdots,\frac{1}{\sqrt{f}}Y_{m}, Y_{m+1},\cdots, Y_{2m}\}$ for $TTM$ with $X^{h}_{i}=Y_{i}$ and $X^{v}_{i}=Y_{m+i}$ we get from Proposition 3.7 \begin{eqnarray}
\hat{S}^{f}&=&\sum_{i,j=1}^{m}\hat{K}^{f}(\frac{1}{\sqrt{f}}X^{h}_{i},\frac{1}{\sqrt{f}}X^{h}_{j})
+2\sum_{i,j=1}^{m}\hat{K}^{f}(\frac{1}{\sqrt{f}}X^{h}_{i},X^{v}_{j})
+\sum_{i,j=1}^{m}\hat{K}^{f}(X^{v}_{i},X^{v}_{j}) \nonumber\\
&=&\sum_{i,j=1}^{m}[\hat{K}^{f}(X^{h}_{i},X^{h}_{j})+2\hat{K}^{f}(X^{h}_{i},X^{v}_{j})+\hat{K}^{f}(X^{v}_{i},X^{v}_{j})] \nonumber\\
&=&\sum_{i,j=1}^{m}[\frac{1}{f}K(X_{i},X_{j})-\frac{3}{4f^{2}}\mid R(X_{i},X_{j})u\mid^{2}+L_{f}(X_{i},X_{j})]
+2\sum_{i,j=1}^{m}\frac{1}{4f^{2}}|R(X_{j},u)X_{i}|^{2}. \end{eqnarray} In order to simplify this last expression we write $u=\sum_{i=1}^{m}u_{i}X_{i}$ and get
\begin{eqnarray}
\sum_{i,j=1}^{m}|R(X_{j},u)X_{i}|^{2}
&=&\sum_{i,j,k,l=1}^{m}u_{k}u_{l}g(R(X_{j},X_{k})X_{i},R(X_{j},X_{l})X_{i})\nonumber\\
&=&\sum_{i,j,k,l,s=1}^{m}u_{k}u_{l}g(R(X_{j},X_{k})X_{i},X_{s})g(R(X_{j},X_{l})X_{i},X_{s})\nonumber\\
&=&\sum_{i,j,k,l,s=1}^{m}u_{k}u_{l}g(R(X_{s},X_{i})X_{k},X_{j})g(R(X_{s},X_{i})X_{l},X_{j})\nonumber\\
&=&\sum_{i,j,k,l=1}^{m}u_{k}u_{l}g(R(X_{j},X_{i})X_{k},R(X_{j},X_{i})X_{l})\nonumber\\
&=&\sum_{i,j=1}^{m}|R(X_{j},X_{i})u|^{2}. \end{eqnarray} This completes the proof. \end{proof} \begin{cor}\label{co:40} Let $(M, g)$ be a Riemannian manifold and $TM$ be its tangent bundle with the rescaled Sasaki metric $\hat{g}^{f}$. Then $(TM,\hat{g}^{f})$ has constant scalar curvature if and only if $(M,g)$ is flat and $\sum_{i,j=1}^{m}L_{f}(X_{i},X_{j})=C(constant)$. \end{cor} \begin{proof} The statement follows directly from Proposition 3.9. \end{proof}
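The curvature symmetry used in the last step of the proof of Proposition 3.9 can be checked numerically on a simple model. The following sketch is an illustration only and not part of the argument; it assumes the Euclidean metric and the constant-curvature tensor $R(X,Y)Z=\kappa\big(g(Y,Z)X-g(X,Z)Y\big)$, and verifies that $\sum_{i,j}|R(X_{j},u)X_{i}|^{2}=\sum_{i,j}|R(X_{j},X_{i})u|^{2}$ for a randomly chosen $u$.
\begin{verbatim}
import numpy as np

m, kappa = 4, 0.7
rng = np.random.default_rng(0)

def R(X, Y, Z):
    # curvature of a constant-curvature space with the Euclidean metric:
    # R(X, Y)Z = kappa * (<Y, Z> X - <X, Z> Y)
    return kappa * (np.dot(Y, Z) * X - np.dot(X, Z) * Y)

E = np.eye(m)                  # orthonormal frame X_1, ..., X_m
u = rng.standard_normal(m)     # an arbitrary tangent vector

lhs = sum(np.dot(R(E[j], u, E[i]), R(E[j], u, E[i]))
          for i in range(m) for j in range(m))
rhs = sum(np.dot(R(E[j], E[i], u), R(E[j], E[i], u))
          for i in range(m) for j in range(m))
print(abs(lhs - rhs) < 1e-12)  # True: the two double sums agree
\end{verbatim}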
\section{Geodesics of The Rescaled Sasaki Metric} Let $M$ be a Riemannian manifold with metric $g$. We denote by $\Im^{p}_{q}(M)$ the set of all tensor fields of type $(p,q)$ on $M$. Manifolds, tensor fields and connections are always assumed to be differentiable and of class $C^{\infty}$. Let $T(M)$ be the tangent bundle of $M$, and $\pi$ the projection $\pi:T(M)\rightarrow M$. Let the manifold $M$ be covered by a system of coordinate neighbourhoods $(U,x^{i})$, where $(x^{i})$, $i=1,\cdots, n$, is a local coordinate system defined in the neighbourhood $U$. Let $y^{i}$ be the Cartesian coordinates in each tangent space $T_{P}(M)$, $P\in M$, with respect to the natural base $\frac{\partial}{\partial x^{i}}$, $P$ being an arbitrary point in $U$ whose coordinates are $x^{i}$. Then we can introduce local coordinates $(x^{i},y^{i})$ in the open set $\pi^{-1}(U)\subset T(M)$. We call them the coordinates induced in $\pi^{-1}(U)$ from $(U,x^{i})$. The projection $\pi$ is represented by $(x^{i},y^{i})\rightarrow (x^{i})$. The indices $i,j,\cdots$ run from $1$ to $2n$.
Let $\hat{C}=(x(\sigma),y(\sigma))$ be a curve on $T(M)$, expressed locally with respect to the induced coordinates in $\pi^{-1}(U)\subset T(M)$, and let $C$ be the curve $x=x(\sigma)$ on $M$. The curve $\hat{C}$ is said to be the lift of the curve $C$ if $y(\sigma)=x'(\sigma)$; it is denoted by $C^{h}=(x(\sigma),x'(\sigma))$. The tangent vector field of $\hat{C}$ is defined by $T=(\frac{dx}{dt},\frac{dy}{dt})=x'^{h}+(\nabla_{x'}y)^{v}$. If the curve $\hat{C}$ is the lift of a geodesic $C$, then $\nabla_{x'}x'=0$ and $y=x'$.
\begin{thm}\label{th:41} Let $C$ be a geodesic on $M$. If $f$ is not constant along the geodesics of $M$, then the curve $C$ cannot be lifted to a geodesic of $\hat{g}^{f}$. \end{thm} \begin{proof} By applying Proposition 3.2 we have \begin{eqnarray}
\hat{\nabla}^{f}_{x'^{h}+(\nabla_{x'}y)^{v}}(x'^{h}+(\nabla_{x'}y)^{v})
&=&(\nabla_{x'}x')^{h}+A_{f}(x',x')^{h}-\frac{1}{2}(R(x',x')u)^{v} \nonumber\\
&&+(\nabla_{x'}\nabla_{x'}y)^{v}+\frac{1}{2f}[R_{p}(u,\nabla_{x'}y)x']^{h}
+\frac{1}{2f}[R_{p}(u,\nabla_{x'}y)x']^{h} \nonumber\\
&=&(\nabla_{x'}x')^{h}+\frac{1}{f}[R_{p}(u,\nabla_{x'}y)x']^{h}+A_{f}(x',x')^{h}+(\nabla_{x'}\nabla_{x'}y)^{v}. \end{eqnarray} Since the curve $C$ is a geodesic on $M$, working with respect to the adapted frame and taking account of
$\hat{\nabla}^{f}_{T}T=0 $, we get \begin{eqnarray}
&& (a) \ \nabla_{x'}x'=-\frac{1}{f(x(t))}[R_{p}(y(t),\nabla_{x'}y(t))x'(t)]^{h}-A_{f}(x',x')^{h} , \nonumber\\
&& (b) \ \nabla_{x'}\nabla_{x'}y=0. \end{eqnarray} Applying part $iv)$ of Proposition 3.4 and \begin{equation} A_{f}(x',x')=\frac{1}{2f}[2x'(f)x'-g(x',x')\texttt{grad} f], \end{equation} if $\langle A_{f}(x',x'), x'\rangle =0$, we get \begin{equation} 2x'(f)g(x',x')-g(x',x')x'(f)=g(x',x')x'(f)=0. \end{equation} Then $x'(f)=0$, $\texttt{grad} f=0$ and $\frac{\texttt{d}f(x(t))}{\texttt{d}t}=0$, so we get $f=c$ (constant) along any geodesic on $M$. \end{proof}
\begin{cor}\label{co:42}
If $(x(t),y(t))$ is a geodesic and $|y(t)|=C$, then $\nabla_{x'}x'=-A_{f}(x',x')$. \end{cor} \begin{proof} By applying parts $(a)$ and $(b)$ of equation $(4.2)$ we have \begin{equation} 0=\nabla_{x'}\langle y, y\rangle =\langle \nabla_{x'}y, y\rangle +\langle y, \nabla_{x'}y\rangle, \end{equation} and \begin{equation} 0=\nabla_{x'}\langle \nabla_{x'}y, y\rangle =\langle \nabla_{x'}\nabla_{x'}y, y\rangle +\langle \nabla_{x'}y, \nabla_{x'}y\rangle. \end{equation} Then we get $\langle \nabla_{x'}y, y\rangle=0$ and $\nabla_{x'}y=0$, from which the result follows. \end{proof}
\begin{thm}\label{th:43} Let $C_{1}$ and $C_{2}$ be two geodesics on $M$ departing from the same arbitrary point, whose initial tangent vectors are not parallel. If the lifts of these two geodesics are geodesics on $T(M)$ with the metric $\hat{g}^{f}$, then $f=c$ (constant). \end{thm} \begin{proof} By applying $(a)$ of equation $(4.2)$ we have \begin{equation} 2x'(f)x'-g(x',x')\texttt{grad}f=2\tilde{x}'(f)\tilde{x}'-g(\tilde{x}',\tilde{x}')\texttt{grad}f. \end{equation} Using $x'(0) \nparallel\tilde{x}'(0)$ we get $\texttt{grad}f(x_{0})=0$, and hence $f=c$ (constant). \end{proof}
A curve $C$ on $M$ is called a submersion geodesic if it is the image under $\pi$ of a geodesic $\hat{C}$ on $TM$, i.e. $C=\pi\circ \hat{C}$ with $\hat{\nabla}^{f}_{T}T=0$. Using this condition we have \begin{thm}\label{th:44} Let $M$ be a flat manifold. If every submersion geodesic is a geodesic on $M$, then $f=c$ (constant). \end{thm}
\section{The Rescaled Cheeger-Gromoll Metric} In \cite{CG}, Cheeger and Gromoll studied complete manifolds of nonnegative curvature and suggested a construction of Riemannian metrics useful
in that context. This can be used to obtain a natural metric $\tilde{g}^{f}$ on the tangent bundle $TM$ of a given Riemannian manifold $(M, g)$.
For a vector field $u\in C^{\infty}(TM)$ we shall by $U$ denote its canonical vertical vector field on $TM$ which in local coordinates
is given by \begin{equation} U=\sum_{i=1}^{m}v_{m+i}(\frac{\partial}{\partial v_{m+i}})_{(p,u)}, \end{equation} where $u=(v_{m+1},\cdots,v_{2m})$. To simplify our notation we define the function $r: TM\rightarrow\mathbb{R}$
by $r(p,u)=|u|=\sqrt{g_{p}(u,u)}$ and $\alpha=1+r^{2}.$ \begin{defn} Let $(M,g)$ be a Riemannian manifold. Let $f>0$ and $f\in C^{\infty}(M)$. Then the rescaled Cheeger-Gromoll metric $\tilde{g}^{f}$ on the tangent bundle $TM$ of $M$ is given by \begin{eqnarray} i) \ \tilde{g}_{(p,u)}^{f}(X^{h}, Y^{h})&=&f(p)g_{p}(X, Y),\\ ii) \ \tilde{g}_{(p,u)}^{f}(X^{v}, Y^{h})&=&0 ,\\ iii) \ \tilde{g}_{(p,u)}^{f}(X^{v}, Y^{v})&=&\frac{1}{1+r^{2}}(g_{p}(X, Y)+g_{p}(X, u)g_{p}(Y, u)) \end{eqnarray} for all vector fields $X, Y\in C^{\infty}(TM)$. \end{defn} It is obvious that the rescaled Cheeger-Gromoll metric $\tilde{g}^{f}$ is contained in the class of rescaled natural metrics introduced earlier.
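To make Definition 5.1 concrete, the sketch below is an illustration only; the names \texttt{G}, \texttt{f\_p} and \texttt{u} are stand-ins for the matrix of $g_{p}$, the value $f(p)$ and the point $u$ in a fixed basis of $T_{p}M$. It assembles the Gram matrix of $\tilde{g}^{f}$ at $(p,u)$ with respect to a frame consisting of horizontal lifts followed by vertical lifts: the horizontal block is $f(p)\,g_{p}$, the mixed block vanishes, and the vertical block is $\frac{1}{1+r^{2}}\big(g_{p}+g_{p}uu^{\top}g_{p}\big)$.
\begin{verbatim}
import numpy as np

def cheeger_gromoll_gram(G, f_p, u):
    """Gram matrix of the rescaled Cheeger-Gromoll metric at (p, u),
    in a frame (horizontal lifts of a basis, then its vertical lifts).
    G   : (m, m) matrix of g_p in that basis
    f_p : value of the rescaling function f at p
    u   : coordinates of the point u in the same basis
    """
    r2 = u @ G @ u                               # r^2 = g_p(u, u)
    alpha = 1.0 + r2
    H = f_p * G                                  # horizontal block: f(p) g_p
    V = (G + G @ np.outer(u, u) @ G) / alpha     # vertical block
    m = G.shape[0]
    Z = np.zeros((m, m))                         # horizontal and vertical parts are orthogonal
    return np.block([[H, Z], [Z, V]])

# small sanity check: for f = 1 and u = 0 one recovers the block metric diag(g, g)
G = np.eye(3)
print(np.allclose(cheeger_gromoll_gram(G, 1.0, np.zeros(3)), np.eye(6)))
\end{verbatim}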
\begin{prop}\label{pr: 42} Let $(M, g)$ be a Riemannian manifold and $\tilde{\nabla}^{f}$ be Levi-Civita connection of the tangent bundle $(TM, \tilde{g}^{f})$ equipped with the rescaled Cheeger-Gromoll metric. Then \begin{eqnarray} i) \ (\tilde{\nabla}^{f}_{X^{h}}Y^{h})_{(p,u)}&=&(\nabla_{X}Y)^{h}_{(p,u)}+\frac{1}{2f(p)}
\Big((X(f)Y+Y(f)X)-g(X,Y)(\texttt{d}f)^{*}\Big)^{h}_{p}\nonumber\\ &&-\frac{1}{2}\Big(R_{p}(X,Y)u\Big)^{v},\\ ii) \ (\tilde{\nabla}^{f}_{X^{h}}Y^{v})_{(p,u)}&=&(\nabla_{X}Y)^{v}_{(p,u)}+\frac{1}{2\alpha f(p)}\Big(R_{p}(u,Y)X\Big)^{h},\\ iii) \ (\tilde{\nabla}^{f}_{X^{v}}Y^{h})_{(p,u)}&=&\frac{1}{2\alpha f(p)}\Big(R_{p}(u,X)Y\Big)^{h},\\ iv) \ (\tilde{\nabla}^{f}_{X^{v}}Y^{v})_{(p,u)}&=&-\frac{1}{\alpha}\Big(\tilde{g}_{(p,u)}^{f}(X^{v}, U)Y^{v}
+\tilde{g}_{(p,u)}^{f}(Y^{v}, U)X^{v}\Big)\nonumber\\ &&+\frac{1+\alpha}{\alpha}\tilde{g}_{(p,u)}^{f}(X^{v},Y^{v})U-\frac{1}{\alpha}\tilde{g}_{(p,u)}^{f}(X^{v}, U)\tilde{g}_{(p,u)}^{f}(Y^{v}, U)U \end{eqnarray} for any $X,Y\in C^{\infty}(TM)$, $\xi=(p, u)\in TM$. \end{prop} \begin{proof}
$i)$ The statement is a direct consequence of Corollary 2.3.
$ii)$ By applying Lemma 2.2 and Definition 4.1 we get \begin{eqnarray}
2\tilde{g}(\tilde{\nabla}^{f}_{X^{h}}Y^{v}, Z^{h})&=&-\frac{1}{f}\tilde{g}(Y^{v},(R(Z,X)u)^{v})\nonumber\\
&=&-\frac{1}{\alpha f}\Big(g(Y,R(Z,X)u)+g(Y,u)g(R(Z,X)u,u)\Big)\nonumber\\
&=&\frac{1}{\alpha f}g\Big(R(u,Y)X,Z\Big). \end{eqnarray} From Definition 3.7 and Lemma 4.1 in \cite{GK2} it follows that \begin{equation} X^{h}(\frac{1}{\alpha})=0 \quad and \quad X^{h}(g(Y,u))\circ\pi=g(\nabla_{X}Y,u)\circ\pi, \end{equation} so \begin{equation} X^{h}(\tilde{g}^{f}(Y^{v},Z^{v}))=\tilde{g}^{f}((\nabla_{X}Y)^{v},Z^{v})+\tilde{g}^{f}(Y^{v},(\nabla_{X}Z)^{v}). \end{equation} This means that \begin{eqnarray}
2\tilde{g}^{f}(\tilde{\nabla}^{f}_{X^{h}}Y^{v}, Z^{v})&=&X^{h}(\tilde{g}^{f}(Y^{v},Z^{v}))+\tilde{g}^{f}(Z^{v},(\nabla_{X}Y)^{v})
-\tilde{g}^{f}(Y^{v},(\nabla_{X}Z)^{v}) \nonumber\\
&=&2\tilde{g}^{f}((\nabla_{X}Y)^{v},Z^{v}). \end{eqnarray}
$iii)$ Calculations similar to those in $ii)$ give \begin{eqnarray}
2\tilde{g}(\tilde{\nabla}^{f}_{X^{v}}Y^{h}, Z^{h})&=&\frac{1}{f}\tilde{g}(X^{v},(R(Y,Z)u)^{v})
=\frac{1}{\alpha f}\tilde{g}((R(u,X)Y)^{h},Z^{h}). \end{eqnarray} The rest follows by \begin{eqnarray}
2\tilde{g}^{f}(\tilde{\nabla}^{f}_{X^{v}}Y^{h}, Z^{v})&=&Y^{h}(\tilde{g}^{f}(Z^{v},X^{v}))-\tilde{g}^{f}(Z^{v},(\nabla_{Y}X)^{v})
-\tilde{g}^{f}(X^{v},(\nabla_{Y}Z)^{v}) \nonumber\\
&=&\tilde{g}^{f}(Z^{v},(\nabla_{Y}X)^{v})+\tilde{g}^{f}(X^{v},(\nabla_{Y}Z)^{v})
-\tilde{g}^{f}(Z^{v},(\nabla_{Y}X)^{v})-\tilde{g}^{f}(X^{v},(\nabla_{Y}Z)^{v})\nonumber\\
&=&0. \end{eqnarray}
$iv)$ Using Lemma 2.2 we get \begin{eqnarray}
2f\tilde{g}^{f}(\tilde{\nabla}^{f}_{X^{v}}Y^{v}, Z^{h})&=&-Z^{h}(\tilde{g}^{f}(X^{v},Y^{v}))+\tilde{g}^{f}(Y^{v},(\nabla_{Z}X)^{v})
+\tilde{g}^{f}(X^{v},(\nabla_{Z}Y)^{v}) \nonumber\\
&=&-\tilde{g}^{f}(Y^{v},(\nabla_{Z}X)^{v})-\tilde{g}^{f}(X^{v},(\nabla_{Z}Y)^{v})
+\tilde{g}^{f}(Y^{v},(\nabla_{Z}X)^{v})+\tilde{g}^{f}(X^{v},(\nabla_{Z}Y)^{v}) \nonumber\\
&=&0. \end{eqnarray} Using $X^{v}(f(r^{2}))=2f'(r^{2})g(X,u)$ and $\alpha=1+r^{2}$ we get \begin{eqnarray}
X^{v}\tilde{g}^{f}(Y^{v}, Z^{v})&=&-\frac{2}{\alpha^{2}}g(X,u)\Big(g(Y,Z)+g(Y,u)g(Z,u)\Big) \nonumber\\
&&+\frac{1}{\alpha}\Big(g(X,Y)g(Z,u)+g(X,Z)g(Y,u)\Big). \end{eqnarray} The definition of the rescaled Cheeger-Gromoll metric implies that \begin{equation} \tilde{g}^{f}(X^{v}, U)=\frac{1}{\alpha}\Big(g(X,u)+g(X,u)g(u,u)\Big)=g(X,u). \end{equation} This leads to the following \begin{eqnarray}
\alpha^{2}\tilde{g}^{f}(\tilde{\nabla}^{f}_{X^{v}}Y^{v}, Z^{v})&=&\frac{\alpha^{2}}{2}\Big(X^{v}(\tilde{g}^{f}(Y^{v},Z^{v}))
+Y^{v}(\tilde{g}^{f}(Z^{v},X^{v}))-Z^{v}(\tilde{g}^{f}(X^{v},Y^{v}))\Big) \nonumber\\
&=&-g(X,u)\Big(g(Y,Z)+g(Y,u)g(Z,u)\Big) \nonumber\\
&&+\frac{\alpha}{2}\Big(g(X,Y)g(Z,u)+g(X,Z)g(Y,u)\Big)\nonumber\\
&&-g(Y,u)\Big(g(Z,X)+g(Z,u)g(X,u)\Big) \nonumber\\
&&+\frac{\alpha}{2}\Big(g(Y,Z)g(X,u)+g(Y,X)g(Z,u)\Big) \nonumber\\
&&+g(Z,u)\Big(g(X,Y)+g(X,u)g(Y,u)\Big) \nonumber\\
&&-\frac{\alpha}{2}\Big(g(Z,X)g(Y,u)+g(Z,Y)g(X,u)\Big) \nonumber\\
&=&g\Big(\big(g(X,Y)-g(X,u)g(Y,u)\big)u+\alpha g(X,Y)u \nonumber\\
&&-g(X,u)Y-g(Y,u)X,Z\Big). \end{eqnarray} By using the definition of the metric we see that this proves the statement. \end{proof} Having determined the Levi-Civita connection we are ready to calculate the Riemann curvature tensor of $TM$. But first we state the following useful Lemma. \begin{lem}\label{le:43} Let $(M, g)$ be a Riemannian manifold and $\tilde{\nabla}^{f}$ be the Levi-Civita connection of the tangent bundle $(TM, \tilde{g}^{f})$, equipped with the rescaled Cheeger-Gromoll metric $\tilde{g}^{f}$. Let $F:TM\rightarrow TM$ be a smooth bundle endomorphism of the tangent bundle. Then \begin{eqnarray}
(\tilde{\nabla}^{f}_{X^{v}}F^{v})_{\xi}&=&F(X)^{v}_{\xi}-\frac{1}{\alpha}\Big(\tilde{g}^{f}(X^{v},U)F^{v}+\tilde{g}^{f}(F^{v},U)X^{v} \nonumber\\
&&-(1+\alpha)\tilde{g}^{f}(F^{v},X^{v})U+\tilde{g}^{f}(X^{v},U)\tilde{g}^{f}(F^{v},U)U\Big)_{\xi} \end{eqnarray} and \begin{equation} (\tilde{\nabla}^{f}_{X^{v}}F^{h})_{\xi}=F(X)^{h}_{\xi}+\frac{1}{2\alpha f(p)}\Big(R(u,X)F(u)\Big)^{h}_{\xi} \end{equation} for any $X\in C^{\infty}(TM)$ and $\xi=(p, u)\in TM$. \end{lem} \begin{proof} The statement is a direct consequence of Lemma 2.5 and Proposition 5.2. \end{proof} \begin{prop}\label{pr: 54} Let $(M, g)$ be a Riemannian manifold and $\tilde{R}^{f}$ be the Riemann curvature tensor of the tangent bundle $(TM, \tilde{g}^{f})$ equipped with the rescaled Cheeger-Gromoll metric. Then the following formulae hold \begin{eqnarray*} i) \ \tilde{R}^{f}(X^{h},Y^{h})Z^{h}&=&\nabla_{X}(\nabla_{Y}Z+A_{f}(Y,Z))^{h}+A_{f}(X,\nabla_{Y}Z+A_{f}(Y,Z))^{h} \nonumber\\
&&-\frac{1}{2}[R(X,\nabla_{Y}Z+A_{f}(Y,Z))u]^{v}-\nabla_{Y}(\nabla_{X}Z+A_{f}(X,Z))^{h}\nonumber\\
&&-A_{f}(Y,\nabla_{X}Z+A_{f}(X,Z))^{h}+\frac{1}{2}\Big(R(Y,\nabla_{X}Z+A_{f}(X,Z))u\Big)^{v}\nonumber\\
&&-(\nabla_{[X,Y]}Z)^{h}-A_{f}([X,Y],Z)^{h}-\frac{1}{2}\Big(R([X,Y],Z)u\Big)^{v}\nonumber\\
&&+\frac{1}{2\alpha f}\Big(R(u,R(X,Y)u)Z\Big)^{h}\nonumber\\ \end{eqnarray*} \begin{eqnarray}
&&+\frac{1}{2}[\nabla_{Y}(R(X,Z)u)]^{v}+\frac{1}{4\alpha f(p)}\Big(R(u,R(X,Z)u)Y\Big)^{h} \nonumber\\
&&-\frac{1}{2}[\nabla_{X}(R(Y,Z)u)]^{v}-\frac{1}{4\alpha f(p)}\Big(R(u,R(Y,Z)u)X\Big)^{h}, \end{eqnarray} \begin{eqnarray} ii) \ \tilde{R}^{f}(X^{h},Y^{h})Z^{v}&=&(R(X,Y)Z)^{v}+\frac{1}{2\alpha }\Big(\nabla_{X}(\frac{1}{f}R(u,Z)Y)
-\nabla_{Y}(\frac{1}{f}R(u,Z)X)\Big)^{h} \nonumber\\
&&-\frac{1}{4\alpha f(p)}\Big(R(X,R(u,Z)Y)u-R(Y,R(u,Z)X)u\Big)^{v} \nonumber\\
&&+\frac{1}{\alpha}\Big(A_{f}(X,\frac{1}{2f}R(u,Z)Y)-A_{f}(Y,\frac{1}{2f}R(u,Z)X)\Big)^{h} \nonumber\\
&&-\frac{1}{\alpha}\tilde{g}^{f}(Z^{v},u)(R(X,Y)u)^{v}+\frac{1+\alpha}{\alpha}\tilde{g}^{f}((R(X,Y)u)^{v},Z^{v})U. \end{eqnarray} \begin{eqnarray} iii) \ \tilde{R}^{f}(X^{h},Y^{v})Z^{h} &=& \frac{1}{2\alpha }\tilde{\nabla}^{f}_{X^{h}}(\frac{1}{f}R(u,Y)Z)^{h}\nonumber\\
&&-\frac{1}{2\alpha f}(R(u,\nabla_{X}Y)Z)^{h}-\frac{1}{2\alpha f}(R(u,Y)\nabla_{X}Z)^{h}
+\frac{1}{2}(R(X,Z)Y)^{v} \nonumber\\
&&-\frac{1}{2\alpha}\tilde{g}^{f}(Y^{v},U)(R(X,Z)u)^{v}-\frac{1}{2\alpha}\tilde{g}^{f}((R(X,Z)u)^{v},U)Y^{v}\nonumber\\
&&+\frac{1+\alpha}{2\alpha}\tilde{g}^{f}((R(X,Z)u)^{v},Y^{v})U
-\frac{1}{2\alpha}\tilde{g}^{f}(Y^{v},U)\tilde{g}^{f}((R(X,Z)u)^{v},U)U \nonumber\\
&& -\frac{1}{2\alpha f}(R(u,Y)A_{f}(X,Z))^{h}, \end{eqnarray} \begin{eqnarray} iv)\ \tilde{R}^{f}(X^{h},Y^{v})Z^{v}&=&-\frac{1}{2\alpha f}\Big(R(Y,Z)X\Big)^{h}
-\frac{1}{4\alpha^{2} f^{2}}\Big(R(u,Y)R(u,Z)X\Big)^{h}\nonumber\\
&&+\frac{1}{2\alpha^{2} f}[g(Y,u)(R(u,Z)X)^{h}-g(Z,u)(R(u,Y)X)^{h}], \end{eqnarray} \begin{eqnarray} v)\ \tilde{R}^{f}_{(p,u)}(X^{v},Y^{v})Z^{h}&=&-\frac{1}{2\alpha f}\Big(R(X,Y)Z\Big)^{h}
-\frac{1}{4\alpha^{2} f^{2}}\Big(R(u,X)R(u,Y)Z\Big)^{h}\nonumber\\
&&+\frac{1}{2\alpha f}\Big(R(Y,X)Z\Big)^{h}
+\frac{1}{4\alpha^{2} f^{2}}\Big(R(u,Y)R(u,X)Z\Big)^{h}, \end{eqnarray} \begin{eqnarray} vi) \ \tilde{R}^{f}_{(p,u)}(X^{v},Y^{v})Z^{v}&=&\frac{1+\alpha+\alpha^{2}}{\alpha^{2}}
(\tilde{g}^{f}(Y^{v},Z^{v})X^{v}-\tilde{g}^{f}(X^{v},Z^{v})Y^{v})+ \nonumber\\
&&+\frac{2+\alpha}{\alpha^{2}}
(\tilde{g}^{f}(X^{v},Z^{v})g(Y,u)U-\tilde{g}^{f}(Y^{v},Z^{v})g(X,u)U)+ \nonumber\\
&&+\frac{2+\alpha}{\alpha^{2}}
(g(X,u)g(Z,u)Y^{v}-g(Y,u)g(Z,u)X^{v}). \end{eqnarray} for any $X, Y, Z\in T_{p}M$. \end{prop} \begin{proof} $i)$ By $i)$ of Proposition 4.2 and direct calculation we get \begin{eqnarray} \tilde{R}^{f}(X^{h},Y^{h})Z^{h}&=&\tilde{\nabla}^{f}_{X^{h}}\tilde{\nabla}^{f}_{Y^{h}}Z^{h}-\tilde{\nabla}^{f}_{Y^{h}}\tilde{\nabla}^{f}_{X^{h}}Z^{h}
-\tilde{\nabla}^{f}_{[X^{h},Y^{h}]}Z^{h}\nonumber\\
&=&\tilde{\nabla}^{f}_{X^{h}}((\nabla_{Y}Z)^{h}+A_{f}(Y,Z)^{h}-\frac{1}{2}(R(Y,Z)u)^{v}) \nonumber\\
&& -\tilde{\nabla}^{f}_{Y^{h}}((\nabla_{X}Z)^{h}+A_{f}(X,Z)^{h}-\frac{1}{2}(R(X,Z)u)^{v})\nonumber\\
&&-\tilde{\nabla}^{f}_{[X,Y]^{h}-(R(X,Y)u)^{v}}Z^{h}\nonumber\\
&=&\nabla_{X}(\nabla_{Y}Z+A_{f}(Y,Z))^{h}+A_{f}(X,\nabla_{Y}Z+A_{f}(Y,Z))^{h} \nonumber\\
&&-\frac{1}{2}\Big(R(X,\nabla_{Y}Z+A_{f}(Y,Z))u\Big)^{v}-\nabla_{Y}(\nabla_{X}Z+A_{f}(X,Z))^{h}\nonumber\\
&&-A_{f}(Y,\nabla_{X}Z+A_{f}(X,Z))^{h}+\frac{1}{2}\Big(R(Y,\nabla_{X}Z+A_{f}(X,Z))u\Big)^{v}\nonumber\\
&&-(\nabla_{[X,Y]}Z)^{h}-A_{f}([X,Y],Z)^{h}-\frac{1}{2}\Big(R([X,Y],Z)u\Big)^{v}\nonumber\\
&&+\frac{1}{2\alpha f}\Big(R(u,R(X,Y)u)Z\Big)^{h}\nonumber\\
&&+\frac{1}{2}\Big(\nabla_{Y}(R(X,Z)u)\Big)^{v}+\frac{1}{4\alpha f}\Big(R(u,R(X,Z)u)Y\Big)^{h} \nonumber\\
&&-\frac{1}{2}\Big(\nabla_{X}(R(Y,Z)u)\Big)^{v}-\frac{1}{4\alpha f}\Big(R(u,R(Y,Z)u)X\Big)^{h}. \end{eqnarray}
$ii)$ Note that the equation $\tilde{g}^{f}_{(p,u)}(X^{v},U)=g_{p}(X,u)$ implies that \begin{equation} \tilde{g}^{f}_{(p,u)}((R(X,Y)u)^{v},U)=g_{p}(R(X,Y)u,u)=0, \end{equation} Hence \begin{eqnarray*} \alpha\tilde{R}^{f}(X^{h},Y^{h})Z^{v}&=&\alpha\tilde{\nabla}^{f}_{X^{h}}\tilde{\nabla}^{f}_{Y^{h}}Z^{v}
-\alpha\tilde{\nabla}^{f}_{Y^{h}}\tilde{\nabla}^{f}_{X^{h}}Z^{v}
-\alpha\tilde{\nabla}^{f}_{[X^{h},Y^{h}]}Z^{v}\nonumber\\
&=&\tilde{\nabla}^{f}_{X^{h}}(\alpha(\nabla_{Y}Z)^{v}+\frac{1}{2f}(R(u,Z)Y)^{h}) \nonumber\\
&& -\tilde{\nabla}^{f}_{Y^{h}}(\alpha(\nabla_{X}Z)^{v}+\frac{1}{2f}(R(u,Z)X)^{h})
-\alpha\tilde{\nabla}^{f}_{[X,Y]^{h}-(R(X,Y)u)^{v}}Z^{v} \nonumber\\ &=&(\nabla_{X}(\frac{1}{2f}R(u,Z)Y))^{h}-\frac{1}{4f}(R(X,R(u,Z)Y)u)^{v}\nonumber\\
&&+\frac{1}{2f}(R(u,\nabla_{Y}Z)X)^{h}\nonumber\\
&& +A_{f}(X,\frac{1}{2f}R(u,Z)Y)^{h}+\alpha(\nabla_{X}\nabla_{Y}Z)^{v} \nonumber\\
&&-(\nabla_{Y}(\frac{1}{2f}R(u,Z)X))^{h}+\frac{1}{4f}(R(Y,R(u,Z)X)u)^{v}\nonumber\\
&&-\frac{1}{2f}(R(u,\nabla_{X}Z)Y)^{h}\nonumber\\
&&-A_{f}(Y,\frac{1}{2f}R(u,Z)X)^{h}-\alpha(\nabla_{Y}\nabla_{X}Z)^{v} \nonumber\\
&&-\frac{1}{2f}(R(u,Z)[X,Y])^{h}-\alpha(\nabla_{[X,Y]}Z)^{v} \nonumber\\ \end{eqnarray*} \begin{eqnarray}
&&-[\tilde{g}^{f}((R(X,Y)u)^{v},U)Z^{v}+\tilde{g}^{f}(Z^{v},U)(R(X,Y)u)^{v}] \nonumber\\
&&+(1+\alpha)\tilde{g}^{f}((R(X,Y)u)^{v},Z^{v})U-\tilde{g}^{f}((R(X,Y)u)^{v},U)\tilde{g}^{f}(Z^{v},U)U \nonumber\\
&=&\alpha(R(X,Y)Z)^{v}+\frac{1}{2f}[\nabla_{X}(R(u,Z)Y)-\nabla_{Y}(R(u,Z)X)]^{h} \nonumber\\
&&-\frac{1}{4f}[R(X,R(u,Z)Y)u-R(Y,R(u,Z)X)u]^{v} \nonumber\\
&&+[A_{f}(X,\frac{1}{2f}R(u,Z)Y)-A_{f}(Y,\frac{1}{2f}R(u,Z)X)]^{h} \nonumber\\
&&-\tilde{g}^{f}(Z^{v},u)(R(X,Y)u)^{v}+(1+\alpha)\tilde{g}^{f}((R(X,Y)u)^{v},Z^{v})U. \end{eqnarray} $iii)$ Calculations similar to those above produce the third formula \begin{eqnarray} \tilde{R}^{f}(X^{h},Y^{v})Z^{h}&=&\tilde{\nabla}^{f}_{X^{h}}\tilde{\nabla}^{f}_{Y^{v}}Z^{h}
-\tilde{\nabla}^{f}_{Y^{v}}\tilde{\nabla}^{f}_{X^{h}}Z^{h}
-\tilde{\nabla}^{f}_{[X^{h},Y^{v}]}Z^{h}\nonumber\\
&=&\frac{1}{2\alpha }\tilde{\nabla}^{f}_{X^{h}}(\frac{1}{f}R(u,Y)Z)^{h}
-\tilde{\nabla}^{f}_{(\nabla_{X}Y)^{v}}Z^{h} \nonumber\\
&& -\tilde{\nabla}^{f}_{Y^{v}}[(\nabla_{X}Z)^{h}-\frac{1}{2}(R(X,Z)u)^{v}+A_{f}(X,Z)^{h}] \nonumber\\
&=& \frac{1}{2\alpha }\tilde{\nabla}^{f}_{X^{h}}(\frac{1}{f}R(u,Y)Z)^{h}\nonumber\\
&&-\frac{1}{2\alpha f}(R(u,\nabla_{X}Y)Z)^{h}-\frac{1}{2\alpha f}(R(u,Y)\nabla_{X}Z)^{h}
+\frac{1}{2}(R(X,Z)Y)^{v} \nonumber\\
&&-\frac{1}{2\alpha}\tilde{g}^{f}(Y^{v},U)(R(X,Z)u)^{v}-\frac{1}{2\alpha}\tilde{g}^{f}((R(X,Z)u)^{v},U)Y^{v}\nonumber\\
&&+\frac{1+\alpha}{2\alpha}\tilde{g}^{f}((R(X,Z)u)^{v},Y^{v})U
-\frac{1}{2\alpha}\tilde{g}^{f}(Y^{v},U)\tilde{g}^{f}((R(X,Z)u)^{v},U)U \nonumber\\
&& -\frac{1}{2\alpha f}(R(u,Y)A_{f}(X,Z))^{h}. \end{eqnarray} $iv)$ Since $X^{v}_{(p,u)}(f(r^{2}))=2f'(r^{2})g_{p}(X,u)$ and $(\tilde{\nabla}^{f}_{X^{h}}U)_{(p,u)}=0$ we get \begin{eqnarray*} 2\alpha\tilde{R}^{f}(X^{h},Y^{v})Z^{v}&=&2\alpha[\tilde{\nabla}^{f}_{X^{h}}\tilde{\nabla}^{f}_{Y^{v}}Z^{v}
-\tilde{\nabla}^{f}_{Y^{v}}\tilde{\nabla}^{f}_{X^{h}}Z^{v}
-\tilde{\nabla}^{f}_{[X^{h},Y^{v}]}Z^{v}]\nonumber\\
&=&-2\tilde{\nabla}^{f}_{X^{h}}[\tilde{g}^{f}(Y^{v},U)Z^{v}
-(1+\alpha)\tilde{g}^{f}(Y^{v},Z^{v})U \nonumber\\
&& +\tilde{g}^{f}(Z^{v},U)Y^{v} +\tilde{g}^{f}(Y^{v},U)\tilde{g}^{f}(Z^{v},U)U ] \nonumber\\
&& -\alpha\tilde{\nabla}^{f}_{Y^{v}}(\frac{1}{\alpha f}R(u,Z)X)^{h}
-2\alpha[\tilde{\nabla}^{f}_{Y^{v}}(\nabla_{X}Z)^{v}+\tilde{\nabla}^{f}_{(\nabla_{X}Y)^{v}}Z^{v}] \nonumber\\
&=&-g(Y,u)[\frac{1}{\alpha f}(R(u,Z)X)^{h}+2(\nabla_{X}Z)^{v}]\nonumber\\
&&-g(Z,u)[\frac{1}{\alpha f}(R(u,Y)X)^{h}+2(\nabla_{X}Y)^{v}]\nonumber\\
&&+\frac{2}{\alpha f}g(Y,u)(R(u,Z)X)^{h}\nonumber\\
&&-\tilde{\nabla}^{f}_{Y^{v}}(\frac{1}{f}R(u,Z)X)^{h}\nonumber\\
&&+2[g(Y,u)(\nabla_{X}Z)^{v}+g(\nabla_{X}Z,u)Y^{v}]\nonumber\\
&&-(1+\alpha)\tilde{g}^{f}(Y^{v},(\nabla_{X}Z)^{v})U+g(Y,u)g(\nabla_{X}Z,u)U\nonumber\\ \end{eqnarray*} \begin{eqnarray}
&&+g(\nabla_{X}Y,u)Z^{v}+g(Z,u)(\nabla_{X}Y)^{v}\nonumber\\
&&-(1+\alpha)\tilde{g}^{f}((\nabla_{X}Y)^{v},Z^{v})U+g(\nabla_{X}Y,u)g(Z,u)U\nonumber\\
&=&-\tilde{\nabla}^{f}_{Y^{v}}(\frac{1}{f}R(u,Z)X)^{h}\nonumber\\
&&+\frac{1}{\alpha f}[g(Y,u)(R(u,Z)X)^{h}-g(Z,u)(R(u,Y)X)^{h}]\nonumber\\
&=&-\frac{1}{f}\Big(R(Y,Z)X\Big)^{h}
-\frac{1}{2\alpha f^{2}}\Big(R(u,Y)R(u,Z)X\Big)^{h}\nonumber\\
&&+\frac{1}{\alpha f}[g(Y,u)(R(u,Z)X)^{h}-g(Z,u)(R(u,Y)X)^{h}] \end{eqnarray} For the last equation we have to show that all the terms not containing the Riemann curvature tensor $R$ vanish. But since \begin{equation} \tilde{g}^{f}(Y^{v},(\nabla_{X}Z)^{v})U=\frac{1}{\alpha}[g(Y,\nabla_{X}Z)+g(Y,u)g(\nabla_{X}Z,u)]U, \end{equation} the rest becomes \begin{equation} -\frac{2}{\alpha}[g(Y,\nabla_{X}Z)+g(Y,u)g(\nabla_{X}Z,u)+g(Z,\nabla_{X}Y)+g(Z,u)g(\nabla_{X}Y,u)]U, \end{equation} which vanishes, because \begin{equation} -\frac{2}{\alpha}X^{h}[\tilde{g}^{f}(Y^{v},Z^{v})+\tilde{g}^{f}(Y^{v},U)\tilde{g}^{f}(Z^{v},U)]U=0. \end{equation} $v)$ First we notice that \begin{eqnarray} \tilde{\nabla}^{f}_{X^{v}}\tilde{\nabla}^{f}_{Y^{v}}Z^{h}&=&\frac{1}{2\alpha}\tilde{\nabla}^{f}_{X^{v}}(\frac{1}{f}R(u,Y)Z)^{h}\nonumber\\
&=&-\frac{1}{2\alpha f}\Big(R(X,Y)Z\Big)^{h}
-\frac{1}{4\alpha^{2} f^{2}}\Big(R(u,X)R(u,Y)Z\Big)^{h}. \end{eqnarray}
By using the fact that $[X^{v},Y^{v}]=0$ we get \begin{eqnarray} \tilde{R}^{f}(X^{v},Y^{v})Z^{h}&=&\tilde{\nabla}^{f}_{X^{v}}\tilde{\nabla}^{f}_{Y^{v}}Z^{h}
-\tilde{\nabla}^{f}_{Y^{v}}\tilde{\nabla}^{f}_{X^{v}}Z^{h} \nonumber\\
&=&\frac{1}{2\alpha}\tilde{\nabla}^{f}_{X^{v}}(\frac{1}{f}R(u,Y)Z)^{h}
-\frac{1}{2\alpha}\tilde{\nabla}^{f}_{Y^{v}}(\frac{1}{f}R(u,X)Z)^{h}\nonumber\\
&=&-\frac{1}{2\alpha f}\Big(R(X,Y)Z\Big)^{h}
-\frac{1}{4\alpha^{2} f^{2}}\Big(R(u,X)R(u,Y)Z\Big)^{h}\nonumber\\
&&+\frac{1}{2\alpha f}\Big(R(Y,X)Z\Big)^{h}
+\frac{1}{4\alpha^{2} f^{2}}\Big(R(u,Y)R(u,X)Z\Big)^{h}. \end{eqnarray} $vi)$ The proof is similar to that of Proposition 8.5 in \cite{GK2}. \end{proof}
In the following let $\tilde{Q}^{f}(V,W)$ denote the square of the area of the parallelogram with sides $V$ and $W$ for $V,W\in C^{\infty}(TTM)$ given by \begin{equation}
\tilde{Q}^{f}(V,W)=\|V\|^{2}\|W\|^{2}-\tilde{g}^{f}(V,W)^{2}. \end{equation}
\begin{lem}\label{le:45} Let $X,Y\in T_{p}M$ be two orthonormal vectors in the tangent space $T_{p}M$ of $M$ at $p$. Then \begin{eqnarray} i) \ \tilde{Q}^{f}(X^{h},Y^{h})&=&f^{2}, \\ ii) \ \tilde{Q}^{f}(X^{h},Y^{v})&=&\frac{f}{\alpha}(1+g(Y,u)^{2}), \\ iii) \ \tilde{Q}^{f}(X^{v},Y^{v})&=&\frac{1}{\alpha^{2}}(1+g(Y,u)^{2}+g(X,u)^{2}). \end{eqnarray} \end{lem} \begin{proof} $i)$ The statement is a direct consequence of the definition of the rescaled Cheeger-Gromoll metric.
$ii)$ This is a direct consequence of \begin{eqnarray}
\tilde{Q}^{f}(X^{h},Y^{v})&=&\tilde{g}^{f}(X^{h},X^{h})\tilde{g}^{f}(Y^{v},Y^{v})-\tilde{g}^{f}(X^{h},Y^{v})^{2}\nonumber\\
&=&\frac{f}{\alpha}(1+g(Y,u)^{2}). \end{eqnarray}
$iii)$ This last part follows from \begin{eqnarray}
\tilde{Q}^{f}(X^{v},Y^{v})&=&\tilde{g}^{f}(X^{v},X^{v})\tilde{g}^{f}(Y^{v},Y^{v})-\tilde{g}^{f}(X^{v},Y^{v})^{2}\nonumber\\
&=&\frac{1}{\alpha}(1+g(X,u)^{2})\frac{1}{\alpha}(1+g(Y,u)^{2})\nonumber\\
&&-[\frac{1}{\alpha}(g(X,Y)+g(X,u)g(Y,u))]^{2} \nonumber\\
&=&\frac{1}{\alpha^{2}}(1+g(Y,u)^{2}+g(X,u)^{2}). \end{eqnarray} \end{proof}
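As a quick sanity check of the previous lemma (again only an illustration, with the Euclidean metric playing the role of $g$ and a numerically orthonormalised pair $X,Y$), one can evaluate both sides of $ii)$ and $iii)$ directly:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, f_p = 5, 2.3
u = rng.standard_normal(m)
alpha = 1.0 + u @ u

# a g-orthonormal pair X, Y (Euclidean g for simplicity)
Q, _ = np.linalg.qr(rng.standard_normal((m, 2)))
X, Y = Q[:, 0], Q[:, 1]

def g_vv(A, B):   # vertical-vertical component of the rescaled CG metric
    return (A @ B + (A @ u) * (B @ u)) / alpha

# ii)  Q(X^h, Y^v) = f/alpha * (1 + g(Y, u)^2); the mixed term vanishes
Q_hv = f_p * g_vv(Y, Y)
print(np.isclose(Q_hv, f_p / alpha * (1 + (Y @ u) ** 2)))

# iii) Q(X^v, Y^v) = (1 + g(Y, u)^2 + g(X, u)^2) / alpha^2
Q_vv = g_vv(X, X) * g_vv(Y, Y) - g_vv(X, Y) ** 2
print(np.isclose(Q_vv, (1 + (Y @ u) ** 2 + (X @ u) ** 2) / alpha ** 2))
\end{verbatim}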
Let $\tilde{G}^{f}$ be the $(2,0)$-tensor on the tangent bundle $TM$ given by \begin{equation} \tilde{G}^{f}(V,W)= \tilde{g}^{f}(\tilde{R}^{f}(V,W)W,V) \end{equation} for $V,W\in C^{\infty}(TTM)$.
\begin{lem}\label{le:46} Let $X,Y\in T_{p}M$ be two orthonormal vectors in the tangent space $T_{p}M$ of $M$ at $p$. Then \begin{eqnarray}
i) \ \tilde{G}^{f}(X^{h},Y^{h})&=&\frac{1}{f} K(X,Y)-\frac{3}{4\alpha^{2}f^{2}}|R(X,Y)u|^{2}+\tilde{L}^{f}(X,Y), \\
ii) \ \tilde{G}^{f}(X^{h},Y^{v})&=&\frac{1}{4\alpha^{2} f^{2}}|R(u,Y)X|^{2}, \\ iii) \ \tilde{G}^{f}(X^{v},Y^{v})&=&\frac{1+\alpha+\alpha^{2}}{\alpha^{2}}\tilde{Q}^{f}(X^{v},Y^{v})
-\frac{2+\alpha}{\alpha^{3}}(g(X,u)^{2}+g(Y,u)^{2}).\\ \end{eqnarray} \end{lem} \begin{proof} $i)$ The statement follows by \begin{eqnarray} \alpha\tilde{G}^{f}(X^{h},Y^{h})&=&\alpha\tilde{g}^{f}(\tilde{R}^{f}(X^{h},Y^{h})Y^{h},X^{h}) \nonumber\\
&=& \tilde{g}^{f}\Big(\nabla_{X}(\nabla_{Y}Y+A_{f}(Y,Y))^{h},X^{h}\Big)\nonumber\\
&&+\tilde{g}^{f}\Big(A_{f}(X,\nabla_{Y}Y+A_{f}(Y,Y))^{h},X^{h}\Big)\nonumber\\
&& -\tilde{g}^{f}\Big(\nabla_{Y}(\nabla_{X}Y+A_{f}(X,Y))^{h},X^{h}\Big)\nonumber\\
&&-\tilde{g}^{f}\Big(A_{f}(Y,\nabla_{X}Y+A_{f}(X,Y))^{h},X^{h}\Big)\nonumber\\
&& -\tilde{g}^{f}\Big((\nabla_{[X,Y]}Y)^{h}+A_{f}([X,Y],Y)^{h},X^{h}\Big)\nonumber\\
&&+\tilde{g}^{f}\Big(\frac{1}{2\alpha f(p)}\Big(R(u,R(X,Y)u)Y\Big)^{h},X^{h}\Big)\nonumber\\
&& +\tilde{g}^{f}\Big(\frac{1}{4\alpha f(p)}\Big(R(u,R(X,Y)u)Y\Big)^{h},X^{h}\Big)\nonumber\\
&&-\tilde{g}^{f}\Big(\frac{1}{4\alpha f(p)}(R(u,R(Y,Y)u)X)^{h},X^{h}\Big)\nonumber\\
&=&\frac{1}{f} K(X,Y)-\frac{3}{4\alpha^{2}}|R(X,Y)u|^{2}+\tilde{L}^{f}(X,Y). \end{eqnarray}
The properties of the Riemann curvature tensor give \begin{equation}
g(R(u,R(X,Y)u)Y,X)=-|R(X,Y)u|^{2}, \end{equation} from which the result follows.
$ii)$ The statement follows by \begin{eqnarray} \alpha^{2}\tilde{G}^{f}(X^{h},Y^{v})&=&\alpha^{2}\tilde{g}^{f}(\tilde{R}^{f}(X^{h},Y^{v})Y^{v},X^{h}) \nonumber\\
&=&-\alpha^{2}\tilde{g}^{f}\Big(-\frac{1}{2\alpha f}\Big(R(Y,Y)X\Big)^{h} ,X^{h}\Big) \nonumber\\
&&-\alpha^{2}\tilde{g}^{f}\Big(-\frac{1}{4\alpha^{2} f^{2}}\Big(R(u,Y)R(u,Y)X\Big)^{h} ,X^{h}\Big) \nonumber\\
&&+\alpha^{2}\tilde{g}^{f}\Big(\frac{1}{2\alpha^{2} f}g(Y,u)(R(u,Y)X)^{h},X^{h}\Big)\nonumber\\
&&-\alpha^{2}\tilde{g}^{f}\Big(\frac{1}{2\alpha^{2} f}g(Y,u)(R(u,Y)X)^{h},X^{h}\Big)\nonumber\\
&=&\frac{1}{4 f^{2}}|R(u,Y)X|^{2}. \end{eqnarray}
$iii)$ In the last case we have \begin{eqnarray} \tilde{G}^{f}(X^{v},Y^{v})&=&\tilde{g}^{f}(\tilde{R}^{f}(X^{v},Y^{v})Y^{v},X^{v}) \nonumber\\
&&+\frac{\alpha+2}{\alpha^{2}}(\tilde{g}^{f}(X^{v},Y^{v})g(Y,u)g(X,u)-\tilde{g}^{f}(Y^{v},Y^{v})g(X,u)^{2}) \nonumber\\
&&+\frac{1+\alpha+\alpha^{2}}{\alpha^{2}}(\tilde{g}^{f}(Y^{v},Y^{v})\tilde{g}^{f}(X^{v},X^{v})
-\tilde{g}^{f}(X^{v},Y^{v})) \nonumber\\
&&+\frac{\alpha+2}{\alpha^{2}}(g(X,u)g(Y,u)\tilde{g}^{f}(X^{v},Y^{v})
-g(Y,u)^{2}\tilde{g}^{f}(X^{v},X^{v}) ) \nonumber\\
&=&\frac{1+\alpha+\alpha^{2}}{\alpha^{2}}\tilde{Q}^{f}(X^{v},Y^{v})
-\frac{2+\alpha}{\alpha^{3}}(g(X,u)^{2}+g(Y,u)^{2}). \end{eqnarray} \end{proof} \begin{prop}\label{pr: 47} Let $(M, g)$ be a Riemannian manifold and $TM$ be its tangent bundle equipped with the rescaled Cheeger-Gromoll metric $\tilde{g}^{f}$. Then the sectional curvature $\tilde{K}^{f}$ of $(TM,\tilde{g}^{f})$ satisfies the following: \begin{eqnarray}
i) \ \tilde{K}^{f}(X^{h},Y^{h})&=&\frac{1}{f^{3}} K(X,Y)-\frac{3}{4\alpha f^{4}}|R(X,Y)u|^{2}
+\frac{1}{f^{2}}\tilde{L}^{f}(X,Y), \\
ii) \ \tilde{K}^{f}(X^{h},Y^{v})&=&\frac{1}{4\alpha f^{3}}\frac{|R(u,Y)X|^{2}}{(1+g(Y,u)^{2})}, \\ iii) \ \tilde{K}^{f}(X^{v},Y^{v})&=&\frac{1-\alpha}{\alpha^{2}}+\frac{2+\alpha}{\alpha}\frac{1}{(1+g(Y,u)^{2}+g(X,u)^{2})}.\\ \end{eqnarray} \end{prop} \begin{proof}
The division of $\tilde{G}^{f}(X^{i},Y^{j})$ by $\tilde{Q}^{f}(X^{i},Y^{j})$ for $i,j\in\{h,v\}$ gives the result. \end{proof} \begin{prop}\label{pr: 48} Let $(M, g)$ be a Riemannian manifold of constant sectional curvature $\kappa$. Let $TM$ be its tangent bundle equipped with the rescaled Cheeger-Gromoll metric $\tilde{g}^{f}$. Then the sectional curvature $\tilde{K}^{f}$ of $(TM,\tilde{g}^{f})$ satisfies the following: \begin{eqnarray} i) \ \tilde{K}^{f}(X^{h},Y^{h})&=&\frac{1}{f^{3}}\kappa -\frac{3\kappa^{2}}{4\alpha f^{4}}(g(u,X)^{2}+g(u,Y)^{2})
+\frac{1}{f^{2}}\tilde{L}^{f}(X,Y), \\ ii) \ \tilde{K}^{f}(X^{h},Y^{v})&=&\frac{1}{4\alpha f^{3}}\frac{\kappa^{2}g(X,u)^{2}}{(1+g(Y,u)^{2})}, \\ iii) \ \tilde{K}^{f}(X^{v},Y^{v})&=&\frac{1-\alpha}{\alpha^{2}}+\frac{2+\alpha}{\alpha}\frac{1}{(1+g(Y,u)^{2}+g(X,u)^{2})}.\\ \end{eqnarray} for any orthonormal vectors $X,Y\in T_{p}M$. \end{prop} \begin{proof}
This is a simple calculation using the special form of the curvature tensor. \end{proof}
For a given point $(p,u)\in TM$ with $u\neq0$, let $\{e_{1},\cdots,e_{m}\}$ be an orthonormal basis for the tangent space $T_{p}M$
of $M$ at $p$ such that $e_{1}=\frac{u}{|u|}$, where $|u|$ is the norm of $u$ with respect to the metric $g$ on $M$. Then for
$i\in\{1,\cdots,m\}$ and $k\in\{2,\cdots,m\}$ define the horizontal and vertical lifts by $t_{i}=e_{i}^{h}$, $t_{m+1}=e_{1}^{v}$
and $t_{m+k}=\sqrt{\alpha}e_{k}^{v}$. Then $\{t_{1},\cdots,t_{2m}\}$ is an orthonormal basis of the tangent space $T_{(p,u)}TM$
with respect to the rescaled Cheeger-Gromoll metric. \begin{lem}\label{le:411} Let $(p,u)$ be a point on $TM$ and $\{t_{1},\cdots,t_{2m}\}$ be an orthonormal basis of the tangent space $T_{(p,u)}TM$ as above. Then the sectional curvature $\tilde{K}^{f}$ satisfies the following equations \begin{eqnarray}
\tilde{K}^{f}(t_{i},t_{j})&=&\frac{1}{f^{3}}K(e_{i},e_{j}) -\frac{3}{4\alpha f^{4}}|R(e_{i},e_{j})u|^{2}
+\frac{1}{f^{2}}\tilde{L}^{f}(e_{i},e_{j}), \\ \tilde{K}^{f}(t_{i},t_{m+1})&=& 0, \\
\tilde{K}^{f}(t_{i},t_{m+k})&=& \frac{1}{4f^{3}}|R(u,e_{k})e_{i}|^{2} \\ \tilde{K}^{f}(t_{m+1},t_{m+k})&=& \frac{3}{\alpha^{2}} \\ \tilde{K}^{f}(t_{m+k},t_{m+l})&=& \frac{\alpha^{2}+\alpha+1}{\alpha^{2}} \\ \end{eqnarray} for $i,j\in\{1,\cdots,m\}$ and $k,l\in\{2,\cdots,m\}$. \end{lem}
\begin{prop}\label{pr: 412} Let $(M, g)$ be a Riemannian manifold with scalar curvature $S$. Let $TM$ be its tangent bundle equipped with the rescaled Cheeger-Gromoll metric $\tilde{g}^{f}$ and $(p,u)$ be a point on $TM$. Then the scalar curvature $\tilde{S}^{f}$ of $(TM,\tilde{g}^{f})$ satisfies the following: \begin{eqnarray}
\tilde{S}^{f}_{(p,u)}&=& S_{p}+\frac{2\alpha-3}{4\alpha f^{4}}\sum_{i,j=1}^{m}|R(e_{i},e_{j})u|^{2}+\frac{1}{f^{2}}\sum_{i,j=1}^{m}\tilde{L}^{f}(e_{i},e_{j}) \nonumber\\
&&+\frac{m-1}{\alpha^{2}}[6+(m-2)(\alpha^{2}+\alpha+1)]. \end{eqnarray} \end{prop} \begin{proof} Let $\{t_{1},\cdots,t_{2m}\}$ be an orthonormal basis of the tangent space $T_{(p,u)}TM$ as above. By the definition of the scalar curvature we know that \begin{eqnarray} \tilde{S}^{f}&=&\sum_{i,j=1,i\neq j}^{2m}\tilde{K}^{f}(t_{i},t_{j})\nonumber\\
&=&2\sum_{i,j=1,i<j}^{m}\tilde{K}^{f}(t_{i},t_{j})+2\sum_{i,j=1}^{m}\tilde{K}^{f}(t_{i},t_{m+j})
+2\sum_{i,j=1,i<j}^{m}\tilde{K}^{f}(t_{m+i},t_{m+j}) \nonumber\\
&=&\sum_{i\neq j}^{m}\tilde{K}^{f}(t_{i},t_{j})+2\sum_{i=1}^{m}\sum_{j=2}^{m}\tilde{K}^{f}(t_{i},t_{m+j})\nonumber\\
&&+2\sum_{j=2}^{m}\frac{3}{\alpha^{2}}
+\sum_{i,j=2,i\neq j}^{m}\frac{\alpha^{2}+\alpha+1}{\alpha^{2}} \nonumber\\
&=& S+\frac{2\alpha-3}{4\alpha f^{4}}\sum_{i,j=1}^{m}|R(e_{i},e_{j})u|^{2} \nonumber\\
&&+\frac{1}{f^{2}}\sum_{i,j=1}^{m}\tilde{L}^{f}(e_{i},e_{j})+\frac{m-1}{\alpha^{2}}[6+(m-2)(\alpha^{2}+\alpha+1)]. \end{eqnarray}
For the fact that \begin{equation}
\sum_{i,j=1}^{m}|R(e_{i},e_{j})u|^{2}=\sum_{i,j=1}^{m}|R(u,e_{j})e_{i}|^{2} \end{equation} see the proof of Proposition 3.9. \end{proof}
\section*{ Acknowledgements} The second author was partially supported by National Science Foundation of China under Grant No.10801027, and Fok Ying Tong Education Foundation under Grant No.121003.
\end{document}
\begin{document}
\title{An Important Corollary for the Fast Solution of Dynamic Maximal Clique Enumeration Problems}
\begin{abstract} In this paper we modify an algorithm for updating a maximal clique enumeration after an edge insertion to provide an algorithm that runs in linear time with respect to the number of cliques containing one of the edge's endpoints, whereas existing algorithms take quadratic time. \end{abstract}
\section{Introduction}
\subsection{Terminology and Notation}
For a graph $G(V,E)$, a \textit{clique} $C$ in $G$ is a subset of vertices that are all adjacent to each other in $G$, i.e., $C$ is a complete subgraph of $G$. A clique $C$ in $G$ is \textit{maximal} if no other clique in $G$ contains $C$. An edge with endpoints $u, v \in V$ is denoted $uv$. For convenience, when we refer to the \textit{neighborhood} of a vertex $u \in V$, we actually mean the closed neighborhood, i.e., the set $\{v : v = u \text{ or } uv \in E \}$, and denote this $N(u)$.
\subsection{Problem Statement}
The solution to a Maximal Clique Enumeration (MCE) Problem for a graph $G$ is a list of all maximal cliques in $G$. In dynamic MCE Problems, our task is to update the list of maximal cliques after inserting or deleting an edge. In the case of inserting an edge, our task is to list all maximal cliques in $G(V, E \cup \{uv\})$ given the list of all maximal cliques in $G(V,E)$.
\subsection{Motivation}
The MCE problem for a static graph has many important applications, such as in solving graph coloring problems: since every pair of vertices in a clique must be assigned different colors, a list of maximal cliques allows us to prune the search tree immensely. Graph coloring, in turn, is used in many scheduling problems. For example, given a list of time intervals during which various flights will need to use a gate, we can construct a graph whose vertices represent flights and whose edges connect vertices whose corresponding flights require a gate at the same time. The chromatic number of this graph is the number of gates required to service all the flights. However, unexpected flight delays induce changes to the graph throughout the day, requiring the solution of another slightly different graph coloring problem and thus another slightly different maximal clique enumeration problem.
The dynamic MCE problem also finds application in computational topology. Given a point cloud $V$ in a metric space and a radius $\epsilon$, we can construct a graph analogous to the Vietoris-Rips Complex $R_{\epsilon}(V)$, where vertices represent points in the point cloud and where two vertices are connected by an edge if the corresponding points in the point cloud lie within distance $\epsilon$ of each other. For $\epsilon = 1$ and $V \subset \mathbb{R}^2$, the graph is a unit disk graph. In computational topology, we can compute the homology for a single graph by enumerating its maximal cliques and using the algorithm proposed by \cite{zomorodian}. Characterizing the topology of the point cloud by the homology of the VR Complex for a single $\epsilon$ is a bad idea, since noise can produce false topological features. What we really want to know is which topological features persist over a wide range of $\epsilon$. Calculating and sorting all pairwise distances of points in $V$ induces a sequence of graphs $\{G_n\}$ where the edges of the $n$th graph are the first $n$ edges of the sorted list of distances. Listing all maximal cliques for each of these graphs is a dynamic MCE Problem, the solution of which is used to construct a \textit{persistence barcode} that illustrates the emergence and disappearance of topological features of the VR Complex as $\epsilon$ varies.
\section{Existing Method}
For completeness, we'll begin by proving the correctness of the existing method for updating the list of maximal cliques after an edge insertion.
\begin{lemma} \label{Symmetry} For a graph $G(V,E)$, if $C$ is a clique in $G(V,E \cup \{uv\})$, then $C \setminus u$ (and, by symmetry, $C \setminus v$) is a clique in $G(V,E)$. \end{lemma}
\begin{proof} $C \setminus u \subset C$, so $C \setminus u$ is a clique in $G(V, E \cup \{uv\})$, i.e., $s,t \in C \setminus u \implies st \in E \cup \{uv\}$. Since $u \not \in C \setminus u$, we must have $st \in E$, i.e., $C$ is a clique in $G(V,E)$. \end {proof}
The following lemma says that if $C$ is a maximal clique in $G(V,E)$ but not in $G(V, E \cup \{uv\})$, then $C \subset N(u) \cup N(v)$.
\begin{lemma} \label{Disappear} For a graph $G(V,E)$ and $C \subset V$, if $u,v \not \in C$, then $C$ is a maximal clique in $G(V,E)$ $\implies$ $C$ is a maximal clique in $G(V,E\cup \{uv\})$. \end{lemma}
\begin{proof} Since $C$ is a clique in $G(V,E)$, $s,t \in C \implies st \in E \implies st \in E \cup \{uv\}$, so $C$ is a clique in $G(V,E \cup \{uv\})$. Let $B \subset V$ be a clique in $G(V, E \cup \{uv\})$ such that $C \subset B$. Since $B$ is a clique in $G(V, E\cup \{uv\})$, $s,t \in B \implies st \in E \cup \{uv\}$. However, $u,v \not \in C \implies u \not \in B$ or $v \not \in B$, since $u,v \in B \implies B \setminus u \supset C$ strictly and $B \setminus u$ is a clique in $G(V,E)$, which contradicts the maximality of $C$ in $G(V,E)$. So $s,t \in B \implies st \in E$, i.e., $B$ is a clique in $G(V,E)$. Since $C$ is maximal in $G(V,E)$, we must have $B = C$. Since $B$ was arbitrary, $C$ is maximal in $G(V, E \cup \{uv\})$. \end{proof}
The following lemma says that if $C$ is a maximal clique in $G(V, E \cup \{uv\})$ but not in $G(V, E)$, then $C \subset N(u) \cap N(v)$.
\begin{lemma} \label{Emerge} For a graph $G(V,E)$ and $C \subset V$, if $u \not \in C$ or $v \not \in C$, then $C$ is a maximal clique in $G(V,E \cup \{uv\})$ $\implies$ $C$ is a maximal clique in $G(V,E)$. \end{lemma}
\begin{proof} Without loss of generality, $u \not \in C$. Since $C$ is a clique in $G(V, E \cup \{uv\})$, $s,t \in C \implies st \in E \cup \{uv\}$. Since $u \not \in C$, we must have $st \in E$, i.e., $C$ is a clique in $G(V,E)$. Let $B \subset V$ be a clique in $G(V,E)$ such that $C \subset B$. Then $s,t \in B \implies st \in E \implies st \in E \cup \{uv\}$, so B is a clique in $G(V, E \cup \{uv\})$. Since $C$ is maximal in $G(V, E \cup \{uv\})$, we must have $B = C$. Since $B$ was arbitrary, $C$ is maximal in $G(V,E)$. \end{proof}
The following is the main theorem that justifies the existing method. It's also the source of the bottleneck in the existing method and the target of our improvement.
\begin{theorem} \label{ExistingMethod} For a graph $G(V,E)$ and $C \subset V$, if $u,v \in C$ and $C$ is a maximal clique in $G(V, E \cup \{uv\})$, then there exist maximal cliques $C_u, C_v \subset V$ in $G(V,E)$ such that $u \in C_u$, $v \in C_v$, and $C$ = $(C_u \cap C_v) \cup \{u,v\}$. \end{theorem}
\begin{proof} $C \setminus v$ and $C \setminus u$ are cliques in $G(V,E)$, and thus each lie within some maximal clique in $G(V,E)$, say $C_u$ and $C_v$, respectively. $u \in C \setminus v \implies u \in C_u$, and $v \in C \setminus u \implies v \in C_v$. Since $C \setminus v \subset C_u$ and $C \setminus u \subset C_v$, we have $(C \setminus v) \cap (C \setminus u) \subset C_u \cap C_v$. So $C = ((C \setminus v) \cap (C \setminus u)) \cup \{u, v\} \subset (C_u \cap C_v) \cup \{u,v\}$. However, this inclusion cannot be strict, since $(C_u \cap C_v) \cup \{u,v\}$ is a clique in $G(V, E \cup \{u,v\})$ and $C$ is maximal. Thus $C = (C_u \cap C_v) \cup \{u,v\}$. \end{proof}
The previous theorems are enough to justify the existing method. Lemma ~\ref{Emerge} tells us that new maximal cliques emerge only in $N(u) \cap N(v)$, and Lemma ~\ref{Disappear} says that cliques lose their maximality only if they lie in $N(u) \cup N(v)$. Together, they say that everything outside $N(u) \cup N(v)$ remains the same. Theorem ~\ref{ExistingMethod} gives us a method for finding the new maximal cliques: for each maximal clique $C_u$ in $G$ containing $u$ and each maximal clique $C_v$ in $G$ containing $v$, generate $(C_u \cap C_v) \cup \{u,v\}$ and mark it as a candidate for maximality. Then we test each candidate for maximality and add the candidate to the enumeration only if it passes the test. Theorem 5.3 in \cite{hendrix} gives a method for testing a candidate's maximality in $O(|N(u) \cap N(v)|^2)$ time. If there are $|\{C_u\}|$ maximal cliques containing $u$ and $|\{C_v\}|$ maximal cliques containing $v$, then generating and testing all candidates takes $O(|\{C_u\}| |\{C_v\}| |N(u) \cap N(v)|^2)$ time.
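To fix ideas, here is a small Python sketch of the existing update (an illustration only; \texttt{adj} maps each vertex to its closed neighborhood, \texttt{cliques\_u} and \texttt{cliques\_v} stand for $\{C_u\}$ and $\{C_v\}$, and \texttt{is\_maximal} is a naive stand-in for the $O(|N(u) \cap N(v)|^2)$ test of \cite{hendrix}).
\begin{verbatim}
def update_existing(cliques, cliques_u, cliques_v, u, v, adj):
    """Existing method: one candidate per pair (C_u, C_v)."""
    def is_maximal(C):
        # naive test: C is maximal iff no outside vertex is adjacent to all of C
        return not any(C <= adj[w] for w in set(adj) - C)

    adj[u].add(v); adj[v].add(u)                  # insert the edge uv
    kept = [C for C in cliques if is_maximal(C)]  # cliques in N(u) or N(v) may die

    new = set()
    for Cu in cliques_u:                          # maximal cliques containing u
        for Cv in cliques_v:                      # maximal cliques containing v
            cand = frozenset((Cu & Cv) | {u, v})
            if is_maximal(cand):
                new.add(cand)
    return kept + [set(C) for C in new]
\end{verbatim}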
\section{Main Result}
The following Corollary justifies our improvement to the existing method.
\begin{corollary} \label{ProposedMethod} For a graph $G(V,E)$ and $C \subset V$, if $u,v \in C$ and $C$ is a maximal clique in $G(V,E \cup \{u,v\})$, then there exist maximal cliques $C_u, C_v \subset V$ in $G(V,E)$ such that $u \in C_u$, $v \in C_v$, and $C = (C_u \cap N(v)) \cup \{u,v\} = (N(u) \cap C_v) \cup \{u,v\}$. \end{corollary}
\begin{proof} Let $C_u, C_v$ be the maximal cliques in $G(V,E)$ guaranteed by Theorem ~\ref{ExistingMethod}. $C = (C_u \cap C_v) \cup \{u,v\} \subset (C_u \cap N(v)) \cup \{u,v\}, (N(u) \cap C_v) \cup \{u,v\}$. Since every subset of a clique is a clique, $(C_u \cap N(v)) \cup \{u,v\}$ and $(N(u) \cap C_v) \cup \{u,v\}$ are cliques in $G(V, E \cup \{u,v\})$. Since $C$ is a maximal clique in $G(V, E \cup \{u,v\})$, the inclusion must actually be equality. \end{proof}
Now that we no longer need to generate pairwise intersections, our list of candidate cliques has size $|\{C_u\}|$ (or $|\{C_v\}|$ if we generate candidates from $\{C_v\}$) and checking the candidates for maximality takes $O(|\{C_u\}| |N(u) \cap N(v)|^2)$ time. This is significant since $|\{C_v\}|$ can grow exponentially with the number of vertices in the graph $|V|$. Note that generating candidates from $\{C_u\}$ and $\{C_v\}$ is redundant, since in the proof $(C_u \cap N(v)) \cup \{u,v\} = (N(u) \cap C_v) \cup \{u,v\}$. This means we now only require either $\{C_u\}$ or $\{C_v\}$. If we have both lists, we could save time by choosing to generate candidates using the shorter list only.
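A sketch of the modified candidate generation (illustrative only; \texttt{nbr[v]} plays the role of the closed neighborhood $N(v)$): only the cliques containing $u$ are scanned, and each is intersected with $N(v)$ before the usual maximality test.
\begin{verbatim}
def candidates_proposed(cliques_u, u, v, nbr):
    """One candidate per C_u, namely (C_u intersect N(v)) union {u, v}."""
    return {frozenset((Cu & nbr[v]) | {u, v}) for Cu in cliques_u}
\end{verbatim}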
\section{Maximal k-Cliques}
A \textit{maximal $k$-clique} $C \subset V$ is a subset of vertices that is either a clique in $G$ of size $k$ or a maximal clique in $G$ of size $< k$. The number of maximal $k$-cliques in a graph grows only polynomially with $|V|$: there are at most $\sum_{i=1}^k \binom{|V|}{i} \in O(|V|^k)$ maximal $k$-cliques in a graph. The task of the maximal $k$-clique enumeration problem is to enumerate all maximal $k$-cliques in a graph, and is much easier than the classic MCE problem. Similarly, the task of the dynamic maximal $k$-clique enumeration problem is to update the maximal $k$-clique enumeration after inserting or deleting an edge. Researchers are sometimes willing to sacrifice information about larger cliques in exchange for better runtimes. Returning to computational topology, surface reconstruction from a point cloud only requires knowledge about 3-cliques. Fortunately, Corollary ~\ref{ProposedMethod} can also be used to accelerate the dynamic maximal $k$-clique enumeration problem in the case of inserting an edge.
Lemma ~\ref{Symmetry}, Lemma ~\ref{Disappear}, and Lemma ~\ref{Emerge} carry over verbatim to maximal $k$-cliques, with identical proofs. However, adjusting Theorem ~\ref{ExistingMethod} for $k$-cliques requires some attention to detail:
\begin{theorem} \label{ExistingKClique}
For a graph $G(V,E)$ and $C \subset V$, if $u,v \in C$ and $C$ is a maximal k-clique in $G(V, E \cup \{uv\})$, then either $|C| < k$ and there exist maximal k-cliques $C_u, C_v \subset V$ in $G(V,E)$ such that $u \in C_u$, $v \in C_v$, and $C$ = $(C_u \cap C_v) \cup \{u,v\}$, or $|C| = k$ and $C \subset (C_u \cap C_v) \cup \{u,v\}$ and $|(C_u \cap C_v) \cup \{u,v\}| \leq k+1$. \end{theorem}
\begin{proof}
$C \setminus v$ and $C \setminus u$ are cliques in $G(V,E)$ of size $\leq k$, and thus each lie within some maximal $k$-clique in $G(V,E)$, say $C_u$ and $C_v$, respectively. $u \in C \setminus v \implies u \in C_u$, and $v \in C \setminus u \implies v \in C_v$. Since $C \setminus v \subset C_u$ and $C \setminus u \subset C_v$, we have $(C \setminus v) \cap (C \setminus u) \subset C_u \cap C_v$. So $C = ((C \setminus v) \cap (C \setminus u)) \cup \{u, v\} \subset (C_u \cap C_v) \cup \{u,v\}$. If $|C| < k$, then $C$ is maximal in $G$ and this inclusion cannot be strict, since $(C_u \cap C_v) \cup \{u,v\}$ is a clique in $G(V, E \cup \{u,v\})$ and $C$ is maximal; in this case, $C = (C_u \cap C_v) \cup \{u,v\}$. Else, $|C| = k$. $|C_u|, |C_v| \leq k$, so $|C_u \cap C_v| \leq k$. If $C_u = C_v$, as may be the case when $uv \in E$, then $(C_u \cap C_v) \cup \{u,v\} = C_u \cap C_v$ so $|(C_u \cap C_v) \cup \{u,v\}| \leq k$. If $C_u \neq C_v$, then $|C_u \cap C_v| \leq k-1$ so $|(C_u \cap C_v) \cup \{u,v\}| \leq |(C_u \cap C_v)| + |\{u,v\}| = k+1$. \end{proof}
So updating the list of maximal $k$-cliques is similar to updating the list of maximal cliques, except that now in the case where $k \leq |(C_u \cap C_v) \cup \{u,v\}|$, we don't have to run any maximality test; every size-$k$ subset of the candidate becomes a maximal $k$-clique.
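A sketch of that candidate-expansion rule (illustrative only; \texttt{is\_maximal} is again a stand-in for the test of \cite{hendrix}, and vertices are assumed sortable):
\begin{verbatim}
from itertools import combinations

def expand_k_candidate(cand, k, is_maximal):
    """Maximal k-cliques contributed by one candidate (C_u cap C_v) cup {u, v}."""
    if len(cand) < k:
        # undersized candidate: it is a maximal k-clique iff it is a maximal clique
        return {frozenset(cand)} if is_maximal(cand) else set()
    # |cand| >= k: every size-k subset is a clique of size exactly k,
    # hence a maximal k-clique by definition -- no maximality test needed
    return {frozenset(S) for S in combinations(sorted(cand), k)}
\end{verbatim}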
Corollary ~\ref{ProposedMethod} lets us generate a linear number of candidates for $k$-cliques, too.
\begin{corollary} \label{ProposedKClique}
For a graph $G(V,E)$ and $C \subset V$, if $u,v \in C$ and $C$ is a maximal $k$-clique in $G(V,E \cup \{u,v\})$, then there exist maximal $k$-cliques $C_u, C_v \subset V$ in $G(V,E)$ such that $u \in C_u$, $v \in C_v$, and $C = (C_u \cap N(v)) \cup \{u,v\} = (N(u) \cap C_v) \cup \{u,v\}$ or $|C| = k$ and $C \subset (C_u \cap N(v)) \cup \{u,v\}, (N(u) \cap C_v) \cup \{u,v\}$. \end{corollary}
\begin{proof}
Let $C_u, C_v$ be the maximal $k$-cliques in $G(V,E)$ guaranteed by Theorem ~\ref{ExistingKClique}. $C \subset (C_u \cap C_v) \cup \{u,v\} \subset (C_u \cap N(v)) \cup \{u,v\}, (N(u) \cap C_v) \cup \{u,v\}$. Since every subset of a clique is a clique, $(C_u \cap N(v)) \cup \{u,v\}$ and $(N(u) \cap C_v) \cup \{u,v\}$ are cliques in $G(V, E \cup \{u,v\})$. If $|C| < k$, then $C$ is a maximal clique in $G(V, E \cup \{u,v\})$, so the inclusion must actually be equality. \end{proof}
We follow the same procedure for testing candidates as before: if $|(C_u \cap N(v)) \cup \{u,v\}| < k$, use the maximality test from \cite{hendrix}; else, every size-$k$ subset becomes a maximal $k$-clique. As in the application of Corollary ~\ref{ProposedMethod} to the traditional MCE setting, we only need either $\{C_u\}$ or $\{C_v\}$ to correctly generate the list of $k$-cliques for the updated graph since $C \subset (C_u \cap N(v)) \cup \{u,v\}$ and $C \subset (N(u) \cap C_v) \cup \{u,v\}$.
\section{Parallelism}
Lemmas ~\ref{Disappear} and ~\ref{Emerge} tell us that the list of maximal cliques doesn't change outside of $N(u) \cup N(v)$. If $E_n \subset E_{n+1} = E_n \cup \{u_1 v_1\} \subset E_{n+2} = E_{n+1} \cup \{u_2 v_2\}$, and $(N(u_2) \cup N(v_2)) \cap \{u_1, v_1 \} = \emptyset$, then inserting $u_1 v_1$ affects neither $\{C_{u_2}\}$ nor $\{C_{v_2}\}$, so the updates from $G(V,E_n)$ to $G(V,E_{n+1})$ and from $G(V,E_{n+1})$ to $G(V,E_{n+2})$ are independent and can run in parallel, even with the existing method. Since we only need either $\{C_u\}$ or $\{C_v\}$ to use the proposed method, we have more opportunities for parallelism: if only $N(u_2) \cap \{u_1, v_1\} = \emptyset$, then inserting $u_1 v_1$ doesn't affect $\{C_{u_2}\}$, so the updates can still run in parallel if we use the proposed method.
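A sketch of the corresponding independence test (illustrative only; \texttt{nbr} gives closed neighborhoods in the graph before either insertion):
\begin{verbatim}
def independent(e1, e2, nbr, proposed_method=True):
    """May the insertions of e1 = (u1, v1) and e2 = (u2, v2) run in parallel?"""
    (u1, v1), (u2, v2) = e1, e2
    if proposed_method:
        # only {C_{u2}} is needed, so it suffices that N(u2) avoids {u1, v1}
        return not ({u1, v1} & nbr[u2])
    # the existing method needs both {C_{u2}} and {C_{v2}}
    return not ({u1, v1} & (nbr[u2] | nbr[v2]))
\end{verbatim}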
\end{document}
\begin{document}
\title{A presentation of relative unitary Steinberg groups}
\begin{abstract} We find an explicit presentation of relative odd unitary Steinberg groups constructed by odd form rings and of relative doubly laced Steinberg groups over commutative rings, i.e. the Steinberg groups associated with the Chevalley group schemes of the types \(\mathsf B_\ell\), \(\mathsf C_\ell\), \(\mathsf F_4\) for \(\ell \geq 3\). \end{abstract}
\section{Introduction}
Relative Steinberg groups \(\stlin(R, I)\) were defined in the stable linear case by F. Keune and J.-L. Loday in \cite{Keune, Loday}. Namely, this group is just \[\frac{\Ker\bigl(p_{2*} \colon \stlin(I \rtimes R) \to \stlin(R)\bigr)}{\bigl[\Ker(p_{1*}), \Ker(p_{2*})\bigr]},\] where \(R\) is a unital associative ring, \(I \trianglelefteq R\), \(p_1 \colon I \rtimes R \to R, a \rtimes p \mapsto a + p\) and \(p_2 \colon I \rtimes R \to R, a \rtimes p \mapsto p\) are ring homomorphisms. Such a group is a crossed module over \(\stlin(R)\) generated by \(x_{ij}(a)\) for \(a \in I\) with the ``obvious'' relations that are satisfied by the generators \(t_{ij}(a)\) of the normal subgroup \(\mathrm E(R, I) \trianglelefteq \mathrm E(R)\). It is classically known that \(\stlin(R, I)\) is generated by \(z_{ij}(a, p) = \up{x_{ji}(p)}{x_{ij}(a)}\) for \(a \in I\) and \(p \in R\) as an abstract group; the same holds for the relative elementary groups.
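For a concrete illustration of why the elements \(z_{ij}(a, p)\) land in the relative elementary subgroup, the following sketch (illustrative only) conjugates the elementary transvection \(x_{12}(a)\) by \(x_{21}(p)\) in \(\glin(3, \mathbb Z[a, p])\) and checks that every entry of the result away from the identity matrix is divisible by \(a\), i.e. lies in the ideal generated by \(a\).
\begin{verbatim}
from sympy import symbols, eye, simplify

a, p = symbols('a p')

def t(i, j, s, n=3):
    """Elementary transvection: identity plus s in position (i, j)."""
    M = eye(n)
    M[i, j] = s
    return M

# z_{12}(a, p) = x_{21}(p) x_{12}(a) x_{21}(p)^{-1}
z = t(1, 0, p) * t(0, 1, a) * t(1, 0, -p)
d = z - eye(3)
# every nonzero entry of z - 1 is divisible by a
print(all(e == 0 or simplify(e / a).is_polynomial(a, p) for e in d))
\end{verbatim}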
Such relative Steinberg groups and their generalizations for unstable linear groups and Chevalley groups are used in, e.g., proving centrality of \(\mathrm K_2\) \cite{CentralityD, CentralityE}, a suitable local-global principle for Steinberg groups \cite{LocalGlobalC, CentralityD, Tulenbaev}, early stability of \(\mathrm K_2\) \cite{Tulenbaev}, and \(\mathbb A^1\)-invariance of \(\mathrm K_2\) \cite{Horrocks, AInvariance}. In \cite[theorem 9]{CentralityE} S. Sinchuk proved that all relations between the generators \(z_\alpha(a, p)\) of \(\stlin(\Phi; R, I)\), where \(\Phi\) is a root system of type \(\mathsf{ADE}\) and \(R\) is commutative, come from various \(\stlin(\Psi; R, I)\) for root subsystems \(\Psi \subseteq \Phi\) of type \(\mathsf A_3\), i.e. \(\stlin(\Phi; R, I)\) is the amalgam of \(\stlin(\Psi; R, I)\) with identified generators \(z_\alpha(a, p)\).
There exist explicit presentations (in the sense of abstract groups) of relative unstable linear and symplectic Steinberg groups in terms of van der Kallen's generators, i.e. analogues of arbitrary transvections in \(\glin(n, R)\) or \(\symp(2n, R)\), see \cite{RelativeC, Tulenbaev} and \cite[proposition 8]{CentralityE}.
In \cite{RelStLin} we determined the relations between the generators \(z_\alpha(a, p)\) in the following two cases: \begin{itemize} \item for relative unstable linear Steinberg groups \(\stlin(n; R, I)\) with \(n \geq 4\), \item for relative simply laced Steinberg groups \(\stlin(\Phi; R, I)\) with \(\Phi\) of rank \(\geq 3\). \end{itemize} It turns out that all the relations between \(z_\alpha(a, p)\) come from \(\stlin(\Psi; R, I)\) for \(\Psi\) of types \(\mathsf A_2\) and \(\mathsf A_1 \times \mathsf A_1\), so Sinchuk's result may be strengthened a bit.
The relations for the simply laced Steinberg groups are easily obtained from the linear case and Sinchuk's result. In the linear case we actually considered Steinberg groups associated with a generalized matrix ring \(T\) instead of \(\mat(n, R)\), i.e. a ring \(T\) with a complete family of \(n\) full orthogonal idempotents. Such generality is convenient for applying ``root elimination'', i.e. replacing the generators of a Steinberg group parameterized by a root system \(\Phi\) by some new generators parameterized by a system of relative roots \(\Phi / \alpha\). Moreover, instead of an ideal \(I \trianglelefteq R\) we considered an arbitrary crossed module \(\delta \colon A \to T\) in the sense of associative rings, since this is necessary for, e.g., applying the method of Steinberg pro-groups \cite{CentralityBC, LinK2}.
In this paper we find relations between the generators \(z_\alpha(a, p)\) for \begin{itemize} \item relative odd unitary Steinberg groups \(\stunit(R, \Delta; S, \Theta)\), where \(\delta \colon (S, \Theta) \to (R, \Delta)\) is a crossed module of odd form rings and \((R, \Delta)\) has a strong orthogonal hyperbolic family of rank at least \(3\) in the sense of \cite{CentralityBC} (this is a unitary analogue of generalized matrix rings and crossed modules of associative rings), \item relative doubly laced Steinberg groups \(\stlin(\Phi; K, \mathfrak a)\) with \(\Phi\) of rank \(\geq 3\), where \(K\) is a unital commutative ring and \(\delta \colon \mathfrak a \to K\) is a crossed module of commutative rings. \end{itemize} The odd unitary case already gives a presentation of relative Steinberg groups associated with classical sufficiently isotropic reductive groups by \cite[theorem 4]{ClassicOFA}, so the second case is non-trivial only for the root system of type \(\mathsf F_4\). Some twisted forms of reductive groups of type $\mathsf A_\ell$ are constructed using non-strong orthogonal hyperbolic families on odd form rings (say, $\glin(\ell + 1, K)$ itself), but such groups may be constructed using generalized matrix rings, so they are considered in \cite{RelStLin}.
Actually, relative elementary subgroups of \(\stlin(\Phi; K)\) for doubly laced \(\Phi\) may be defined not only for ordinary ideals \(\mathfrak a \trianglelefteq K\), but also for E. Abe's \cite{Abe} admissible pairs \((\mathfrak a, \mathfrak b)\), where \(\mathfrak a \trianglelefteq K\), \(2 \mathfrak a + \sum_{a \in \mathfrak a} Ka^2 \leq \mathfrak b \leq \mathfrak a\) is an intermediate additive subgroup, and \(\mathfrak b k^2 \leq \mathfrak b\) for all \(k \in K\) in the case of type \(\mathsf C_\ell\) or \(\mathfrak b \trianglelefteq K\) in the cases of types \(\mathsf B_\ell\) and \(\mathsf F_4\). Such a pair naturally gives a subgroup \(\mathrm E(\Phi; \mathfrak a, \mathfrak b) \leq \mathrm G^{\mathrm{sc}}(\Phi, K)\) generated by \(x_\alpha(a)\) for short roots \(\alpha\) and \(a \in \mathfrak a\) and by \(x_\beta(b)\) for long roots \(\beta\) and \(b \in \mathfrak b\).
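For example, \((\mathfrak a, \mathfrak a)\) is an admissible pair for every ideal \(\mathfrak a \trianglelefteq K\), as is \((\mathfrak a, 2\mathfrak a + \sum_{a \in \mathfrak a} Ka^2)\); these are the largest and the smallest admissible pairs with the given first component.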
In order to study relative Steinberg groups associated with admissible pairs, we consider new families of Steinberg groups \(\stlin(\Phi; K, L)\), where \(\Phi\) is a doubly laced root system and \((K, L)\) is a \textit{pair of type} \(\mathsf B\), \(\mathsf C\), or \(\mathsf F\) respectively (the precise definition is given in section \ref{pairs-type}). Then admissible pairs are just crossed submodules of \((K, K)\) for a commutative unital ring \(K\). We also find relations between the generators \(z_\alpha(a, p)\) of \begin{itemize} \item relative doubly laced Steinberg groups \(\stlin(\Phi; K, L; \mathfrak a, \mathfrak b)\), where \(\Phi\) is a doubly laced root system of rank \(\geq 3\) and \((\mathfrak a, \mathfrak b) \to (K, L)\) is a crossed module of pairs of type \(\mathsf B\), \(\mathsf C\), or \(\mathsf F\) respectively. \end{itemize}
All relations between the generators \(z_\alpha(a, p)\) involve only the roots from root subsystems of rank \(2\), i.e. \(\mathsf A_2\), \(\mathsf{BC}_2\), \(\mathsf A_1 \times \mathsf A_1\), \(\mathsf A_1 \times \mathsf{BC}_1\) in the odd unitary case and \(\mathsf A_2\), \(\mathsf B_2\), \(\mathsf A_1 \times \mathsf A_1\) in the generalized Chevalley case.
\section{Relative unitary Steinberg groups}
We use the group-theoretical notation \(\up gh = g h g^{-1}\) and \([g, h] = ghg^{-1}h^{-1}\). If a group \(G\) acts on a set \(X\), then we usually denote the action by \((g, x) \mapsto \up gx\). If \(X\) is itself a group, then \([g, x] = \up gx x^{-1}\) and \([x, g] = x (\up gx)^{-1}\) are the commutators in \(X \rtimes G\). A \textit{group-theoretical crossed module} is a homomorphism \(\delta \colon N \to G\) of groups such that there is a fixed action of \(G\) on \(N\), \(\delta\) is \(G\)-equivariant, and \(\up nn' = \up{\delta(n)}{n'}\) for \(n, n' \in N\).
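For example, the inclusion \(\delta \colon N \to G\) of a normal subgroup, with \(G\) acting on \(N\) by conjugation, is a group-theoretical crossed module: \(\delta\) is \(G\)-equivariant by construction and \(\up{\delta(n)}{n'} = n n' n^{-1} = \up n{n'}\) for \(n, n' \in N\).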
The group operation in a \(2\)-step nilpotent group \(G\) is usually denoted by \(\dotplus\). If \(X_1\), \ldots, \(X_n\) are subsets of \(G\) containing \(\dot 0\) and \(\prod_i X_i \to G, (x_1, \ldots, x_n) \mapsto x_1 \dotplus \ldots \dotplus x_n\) is a bijection, then we write \(G = \bigoplus_i^\cdot X_i\).
Let \(A\) be an associative unital ring and \(\lambda \in A^*\). A map \(\inv{(-)} \colon A \to A\) is called a \(\lambda\)-\textit{involution} if it is an anti-automorphism, \(\inv{\inv x} = \lambda x \lambda^{-1}\), and \(\inv \lambda = \lambda^{-1}\). For a fixed \(\lambda\)-involution a map \(B \colon M \times M \to A\) for a module \(M_A\) is called a \textit{hermitian form} if it is biadditive, \(B(m, m'a) = B(m, m')a\), and \(B(m', m) = \inv{B(m, m')} \lambda\).
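For example, if \(A\) is commutative, then the identity map is a \(\lambda\)-involution for \(\lambda = \pm 1\), and the corresponding hermitian forms are precisely the symmetric (\(\lambda = 1\)) and the skew-symmetric (\(\lambda = -1\)) bilinear forms; for \(A = \mathbb C\) with complex conjugation and \(\lambda = 1\) one recovers hermitian sesquilinear forms in the usual sense.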
Now let \(A\) be an associative unital ring with a \(\lambda\)-involution and \(M_A\) be a module with a hermitian form \(B\). The \textit{Heisenberg group} of \(B\) is the set \(\Heis(B) = M \times A\) with the group operation \((m, x) \dotplus (m', x') = (m + m', x - B(m, m') + x')\). The multiplicative monoid \(A^\bullet\) acts on \(\Heis(B)\) from the right by \((m, x) \cdot y = (my, \inv y x y)\). An \(A^\bullet\)-invariant subgroup \(\mathcal L \leq \Heis(B)\) is called an \textit{odd form parameter} if \[\{(0, x - \inv x \lambda) \mid x \in A\} \leq \mathcal L \leq \{(m, x) \mid x + B(m, m) + \inv x \lambda = 0\}.\] The corresponding \textit{quadratic form} is the map \(q \colon M \to \Heis(B) / \mathcal L, m \mapsto (m, 0) \dotplus \mathcal L\). Finally, the \textit{unitary group} is \[\unit(M, B, \mathcal L) = \{g \in \Aut(M_A) \mid B(gm, gm') = B(m, m'),\, q(gm) = q(m) \text{ for all } m, m' \in M\}.\]
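One checks directly that both bounds in this definition are themselves odd form parameters. Moreover, for the maximal choice \(\mathcal L = \{(m, x) \mid x + B(m, m) + \inv x \lambda = 0\}\) the condition \(q(gm) = q(m)\) already follows from \(B(gm, gm') = B(m, m')\), so in this case \(\unit(M, B, \mathcal L)\) is just the group of all automorphisms of \(M_A\) preserving \(B\).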
Recall definitions from \cite{CentralityBC, ClassicOFA}. An \textit{odd form ring} is a pair \((R, \Delta)\), where \(R\) is an associative non-unital ring, \(\Delta\) is a group with the group operation \(\dotplus\), the multiplicative semigroup \(R^\bullet\) acts on \(\Delta\) from the right by \((u, a) \mapsto u \cdot a\), and there are maps \(\phi \colon R \to \Delta\), \(\pi \colon \Delta \to R\), \(\rho \colon \Delta \to R\) such that \begin{itemize} \item \(\phi\) is a group homomorphism, \(\phi(\inv aba) = \phi(b) \cdot a\); \item \(\pi\) is a group homomorphism, \(\pi(u \cdot a) = \pi(u) a\); \item \([u, v] = \phi(-\inv{\pi(u)} \pi(v))\); \item \(\rho(u \dotplus v) = \rho(u) - \inv{\pi(u)} \pi(v) + \rho(v)\), \(\inv{\rho(u)} + \inv{\pi(u)} \pi(u) + \rho(u) = 0\), \(\rho(u \cdot a) = \inv a \rho(u) a\); \item \(\pi(\phi(a)) = 0\), \(\rho(\phi(a)) = a - \inv a\); \item \(\phi(a + \inv a) = \phi(\inv aa) = 0\) (in \cite{CentralityBC, ClassicOFA} we used the stronger axiom \(\phi(a) = \dot 0\) for all \(a = \inv a\)); \item \(u \cdot (a + b) = u \cdot a \dotplus \phi(\inv{\,b\,} \rho(u) a) \dotplus u \cdot b\). \end{itemize}
Let \((R, \Delta)\) be an odd form ring. Its \textit{unitary group} is the set \[\unit(R, \Delta) = \{g \in \Delta \mid \pi(g) = \inv{\rho(g)}, \pi(g) \inv{\pi(g)} = \inv{\pi(g)} \pi(g)\}\] with the identity element \(1_{\unit} = \dot 0\), the group operation \(gh = g \cdot \pi(h) \dotplus h \dotplus g\), and the inverse \(g^{-1} = \mathbin{\dot{-}} g \cdot \inv{\pi(g)} \mathbin{\dot{-}} g\). The unitary groups of odd form rings in \cite{CentralityBC, ClassicOFA} are precisely the graphs of \(\pi \colon \unit(R, \Delta) \to R\) as subsets of \(R \times \Delta\).
An odd form ring \((R, \Delta)\) is called \textit{special} if the homomorphism \((\pi, \rho) \colon \Delta \to \Heis(R)\) is injective, where \(\Heis(R) = R \times R\) with the operation \((x, y) \dotplus (z, w) = (x + z, y - \inv xz + w)\). It is called \textit{unital} if \(R\) is unital and \(u \cdot 1 = u\) for \(u \in \Delta\). In other words, \((R, \Delta)\) is a special unital odd form ring if and only if \(R\) is a unital associative ring with \(1\)-involution and \(\Delta\) is an odd form parameter with respect to the \(R\)-module \(R\) and the hermitian form \(R \times R \to R, (x, y) \mapsto \inv xy\), where we identify \(\Delta\) with its image in \(\Heis(R)\).
If \(M_A\) is a module over a unital ring with a \(\lambda\)-involution, \(B\) is a hermitian form on \(M\), and \(\mathcal L\) is an odd form parameter, then there is a special unital odd form ring \((R, \Delta)\) such that \(\unit(M, B, \mathcal L) \cong \unit(R, \Delta)\), see \cite[section 2]{CentralityBC} or \cite[section 3]{ClassicOFA} for details.
We say that an odd form ring \((R, \Delta)\) \textit{acts} on an odd form ring \((S, \Theta)\) if there are multiplication operations \(R \times S \to S\), \(S \times R \to S\), \(\Theta \times R \to \Theta\), \(\Delta \times S \to \Delta\) such that \((S \rtimes R, \Theta \rtimes \Delta)\) is a well-defined odd form ring. There is an equivalent definition in terms of explicit axioms on the operations, see \cite[section 2]{CentralityBC}. For example, each odd form ring naturally acts on itself. Actions of \((R, \Delta)\) on \((S, \Theta)\) are in one-to-one correspondence with isomorphism classes of right split short exact sequences \[(S, \Theta) \to (S \rtimes R, \Theta \rtimes \Delta) \leftrightarrows (R, \Delta),\] since the category of odd form rings and their homomorphisms is algebraically coherent semi-abelian in the sense of \cite{AlgCoh}.
Let us call \(\delta \colon (S, \Theta) \to (R, \Delta)\) a \textit{precrossed module} of odd form rings if \((R, \Delta)\) acts on \((S, \Theta)\) and \(\delta\) is a homomorphism preserving the action of \((R, \Delta)\). Such objects are in one-to-one correspondence with \textit{reflexive graphs} in the category of odd form rings, i.e. tuples \(((R, \Delta), (T, \Xi), p_1, p_2, d)\), where \(p_1, p_2 \colon (T, \Xi) \to (R, \Delta)\) are homomorphisms with a common section \(d\). Namely, \((S, \Theta)\) corresponds to the kernel of \(p_2\) and \(\delta\) is induced by \(p_1\).
A precrossed module \(\delta \colon (S, \Theta) \to (R, \Delta)\) is a \textit{crossed module} of odd form rings if \textit{Peiffer identities} \begin{itemize} \item \(ab = \delta(a) b = a \delta(b)\) for \(a, b \in S\); \item \(u \cdot a = \delta(u) \cdot a = u \cdot \delta(a)\) for \(u \in \Theta\), \(a \in S\) \end{itemize} hold. Equivalently, the corresponding reflexive graph is an \textit{internal category} (and even an \textit{internal groupoid}, necessarily in a unique way), i.e. there is a homomorphism \[m \colon \lim\bigl( (T, \Xi) \xrightarrow{p_2} (R, \Delta) \xleftarrow{p_1} (T, \Xi) \bigr) \to (T, \Xi)\] such that the homomorphisms from any \((I, \Gamma)\) to \((R, \Delta)\) and \((T, \Xi)\) form a set-theoretic category. See \cite{XMod} for details.
The unitary group \(\unit(R, \Delta)\) acts on \((R, \Delta)\) by automorphisms via \[\up g a = \alpha(g)\, a\, \inv{\alpha(g)} \text{ for } a \in R, \enskip \up g u = (g \cdot \pi(u) \dotplus u) \cdot \inv{\alpha(g)} \text{ for } u \in \Delta,\] where \(\alpha(g) = \pi(g) + 1 \in R \rtimes \mathbb Z\). The second formula also gives the conjugacy action of \(\unit(R, \Delta)\) on itself.
If \((R, \Delta)\) acts on \((S, \Theta)\), then \(\unit(R, \Delta)\) acts on \(\unit(S, \Theta)\) in the sense of groups and \(\unit(S \rtimes R, \Theta \rtimes \Delta) = \unit(S, \Theta) \rtimes \unit(R, \Delta)\). For any crossed module \(\delta \colon (S, \Theta) \to (R, \Delta)\) the induced homomorphism \(\delta \colon \unit(S, \Theta) \to \unit(R, \Delta)\) is a crossed module of groups.
Recall from \cite{CentralityBC} that a \textit{hyperbolic pair} in an odd form ring \((R, \Delta)\) is a tuple \(\eta = (e_-, e_+, q_-, q_+)\), where \(e_-\) and \(e_+\) are orthogonal idempotents in \(R\), \(\inv{e_-} = e_+\), \(q_\pm\) are elements of \(\Delta\), \(\pi(q_\pm) = e_\pm\), \(\rho(q_\pm) = 0\), and \(q_\pm = q_\pm \cdot e_\pm\). A sequence \(H = (\eta_1, \ldots, \eta_\ell)\) is called a \textit{strong orthogonal hyperbolic family} of rank \(\ell\) if \(\eta_i = (e_{-i}, e_i, q_{-i}, q_i)\) are hyperbolic pairs, the idempotents \(e_{|i|} = e_i + e_{-i}\) are orthogonal, and \(e_i \in R e_j R\) for all \(i, j \neq 0\).
From now on and until the end of section \ref{sec-pres}, we fix a crossed module \(\delta \colon (S, \Theta) \to (R, \Delta)\) of odd form rings and a strong orthogonal hyperbolic family \(H = (\eta_1, \ldots, \eta_\ell)\) in \((R, \Delta)\). We also use the notation \[S_{ij} = e_i S e_j, \quad
\Theta^0_j = \{u \in \Theta \cdot e_j \mid e_k \pi(u) = 0 \text{ for all } 1 \leq |k| \leq \ell\}\]
for \(1 \leq |i|, |j| \leq \ell\), and similarly for the corresponding subgroups of \(R\) and \(\Delta\). Clearly, \[S_{ij} R_{jk} = S_{ik}, \quad \Theta^0_i \cdot R_{ij} \dotplus \phi(S_{-j, j}) = \Theta^0_j.\]
An \textit{unrelativized Steinberg group} \(\stunit(S, \Theta)\) is the abstract group with the generators \(X_{ij}(a)\), \(X_j(u)\) for \(1 \leq |i|, |j| \leq \ell\), \(i \neq \pm j\), \(a \in S_{ij}\), \(u \in \Theta^0_j\), and the relations \begin{align*} X_{ij}(a) &= X_{-j, -i}(-\inv a); \\ X_{ij}(a)\, X_{ij}(b) &= X_{ij}(a + b); \\ X_j(u)\, X_j(v) &= X_j(u \dotplus v); \\ [X_{ij}(a), X_{kl}(b)] &= 1 \text{ for } j \neq k \neq -i \neq -l \neq j; \\ [X_{ij}(a), X_l(u)] &= 1 \text{ for } i \neq l \neq -j; \\ [X_{-i, j}(a), X_{ji}(b)] &= X_i(\phi(ab)); \\ [X_i(u), X_i(v)] &= X_i(\phi(-\inv{\pi(u)} \pi(v))); \\ [X_i(u), X_j(v)] &= X_{-i, j}(-\inv{\pi(u)} \pi(v)) \text{ for } i \neq \pm j; \\ [X_{ij}(a), X_{jk}(b)] &= X_{ik}(ab) \text{ for } i \neq \pm k; \\ [X_i(u), X_{ij}(a)] &= X_{-i, j}(\rho(u) a)\, X_j(\mathbin{\dot{-}} u \cdot (-a)). \end{align*}
Of course, the group \(\stunit(S, \Theta)\) is functorial on \((S, \Theta)\). In particular, the homomorphism \[\delta \colon \stunit(S, \Theta) \to \stunit(R, \Delta), X_{ij}(a) \mapsto X_{ij}(\delta(a)), X_j(u) \mapsto X_j(\delta(u))\] is well-defined. There is also a canonical homomorphism \begin{align*} \stmap \colon \stunit(S, \Theta) &\to \unit(S, \Theta), \\ X_{ij}(a) &\mapsto T_{ij}(a) = q_i \cdot a \mathbin{\dot{-}} q_{-j} \cdot \inv a \mathbin{\dot{-}} \phi(a), \\ X_j(u) &\mapsto T_j(u) = u \mathbin{\dot{-}} \phi(\rho(u) + \pi(u)) \dotplus q_{-j} \cdot (\rho(u) - \inv{\pi(u)}). \end{align*}
Let \(p_{i*} \colon \stunit(S \rtimes R, \Theta \rtimes \Delta) \to \stunit(R, \Delta)\) be the induced homomorphisms. The \textit{relative Steinberg group} is \[\stunit(R, \Delta; S, \Theta) = \Ker(p_{2*}) / [\Ker(p_{1*}), \Ker(p_{2*})].\] It is easy to see that it is a crossed module over \(\stunit(R, \Delta)\). The \textit{diagonal group} is
\[\diag(R, \Delta) = \{g \in \unit(R, \Delta) \mid g \cdot e_i \inv{\pi(g)} \dotplus q_i \cdot \inv{\pi(g)} \dotplus g \cdot e_i = \dot 0 \text{ for } 1 \leq |i| \leq \ell\},\] it acts on \(\stunit(S, \Theta)\) by \[\up g{T_{ij}(a)} = T_{ij}(\up ga), \quad \up g{T_j(u)} = T_j(\up gu).\] Hence it also acts on the commutative diagram of groups \[\xymatrix@R=30pt@C=90pt@!0{ \stunit(S, \Theta) \ar[r] \ar[dr]_{\delta} & \stunit(R, \Delta; S, \Theta) \ar[r]^{\stmap} \ar[d]_{\delta} & \unit(S, \Theta) \ar[r] \ar[d]_{\delta} & \mathrm{Aut}(S, \Theta) \\ & \stunit(R, \Delta) \ar[r]^{\stmap} & \unit(R, \Delta) \ar[ur] \ar[r] & \mathrm{Aut}(R, \Delta). }\]
\section{Root systems of type \(\mathsf{BC}\)}
Let \[\Phi = \{\pm \mathrm e_i \pm \mathrm e_j \mid 1 \leq i < j \leq \ell\} \cup \{\pm \mathrm e_i \mid 1 \leq i \leq \ell\} \cup \{\pm 2 \mathrm e_i \mid 1 \leq i \leq \ell\} \subseteq \mathbb R^\ell\] be a \textit{root system} of type \(\mathsf{BC}_\ell\). For simplicity let also \(\mathrm e_{-i} = -\mathrm e_i\) for \(1 \leq i \leq \ell\). The \textit{roots} of \(\Phi\) are in one-to-one correspondence with the \textit{root subgroups} of \(\stunit(S, \Theta)\) as follows: \begin{align*} X_{\mathrm e_j - \mathrm e_i}(a) &= X_{ij}(a) \text{ for } a \in S_{ij}, i + j > 0,\\ X_{\mathrm e_i}(u) &= X_i(u) \text{ for } u \in \Theta^0_i,\\ X_{2 \mathrm e_i}(u) &= X_i(u) \text{ for } u \in \phi(S_{-i, i}). \end{align*} The image of \(X_\alpha\) is denoted by \(X_\alpha(S, \Theta)\). The \textit{Chevalley commutator formulas} from the definition of \(\stunit(S, \Theta)\) may be written as \[[X_\alpha(\mu), X_\beta(\nu)] = \prod_{\substack{i \alpha + j \beta \in \Phi\\ i, j > 0}} X_{i\alpha + j\beta}(f_{\alpha \beta i j}(\mu, \nu))\] for all non-antiparallel \(\alpha, \beta \in \Phi\) and some universal expressions \(f_{\alpha \beta i j}\). It is also useful to set \(f_{\alpha \beta 0 1}(\mu, \nu) = \nu\) and \(f_{\alpha \beta 0 2}(\mu, \nu) = \dot 0\) if \(\beta\) is \textit{ultrashort} (i.e. of length \(1\)). In products we assume that the factor with such a root is the last one. The set of ultrashort roots is denoted by \(\Phi_{\mathrm{us}}\).
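For example, for \(\alpha = \mathrm e_j - \mathrm e_i\) and \(\beta = \mathrm e_k - \mathrm e_j\) with pairwise distinct and non-opposite \(i\), \(j\), \(k\), the only root of the form \(i' \alpha + j' \beta\) with \(i', j' > 0\) is \(\alpha + \beta = \mathrm e_k - \mathrm e_i\), so the commutator formula specializes to the defining relation \([X_{ij}(a), X_{jk}(b)] = X_{ik}(ab)\), i.e. \(f_{\alpha \beta 1 1}(a, b) = ab\) and all other \(f_{\alpha \beta i' j'}\) vanish.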
It is easy to see that \(\Ker(p_{2*}) \leq \stunit(S \rtimes R, \Theta \rtimes \Delta)\) is the group with the action of \(\stunit(R, \Delta)\) generated by \(\stunit(S, \Theta)\) with the additional relations \[\up{X_\alpha(\mu)}{X_\beta(\nu)} = \prod_{\substack{i \alpha + j \beta \in \Phi\\ i \geq 0, j > 0}} X_{i\alpha + j\beta}(f_{\alpha \beta i j}(\mu, \nu))\] for non-antiparallel \(\alpha, \beta \in \Phi\), \(\mu \in R \cup \Delta\), \(\nu \in S \cup \Theta\). Hence the relative Steinberg group \(\stunit(R, \Delta; S, \Theta)\) is the crossed module over \(\stunit(R, \Delta)\) generated by \(\delta \colon \stunit(S, \Theta) \to \stunit(R, \Delta)\) with the same additional relations.
The \textit{Weyl group} \(\mathrm W(\mathsf{BC}_\ell) = (\mathbb Z / 2 \mathbb Z)^\ell \rtimes \mathrm S_\ell\) acts on the orthogonal hyperbolic family \(\eta_1, \ldots, \eta_\ell\) by permutations and sign changes (i.e. \((e_{-i}, e_i, q_{-i}, q_i) \mapsto (e_i, e_{-i}, q_i, q_{-i})\)), so the correspondence between roots and root subgroups is \(\mathrm W(\mathsf{BC}_\ell)\)-equivariant. Also, the hyperbolic pairs from \(H\) and the opposite ones are in one-to-one correspondence with the set of ultrashort roots, \(\eta_i\) corresponds to \(\mathrm e_i\), and \(\eta_{-i} = (e_i, e_{-i}, q_i, q_{-i})\) corresponds to \(\mathrm e_{-i}\) for \(1 \leq i \leq \ell\).
Recall that a subset \(\Sigma \subseteq \Phi\) is called \textit{closed} if \(\alpha, \beta \in \Sigma\) and \(\alpha + \beta \in \Phi\) imply \(\alpha + \beta \in \Sigma\). We say that a closed subset \(\Sigma \subseteq \Phi\) is \textit{saturated}, if \(\alpha \in \Sigma\) together with \(\frac 12 \alpha \in \Phi\) imply \(\frac 12 \alpha \in \Sigma\). If \(X \subseteq \Phi\), then \(\langle X \rangle\) is the smallest saturated subset of \(\Phi\) containing \(X\), \(\mathbb R X\) is the linear span of \(X\), and \(\mathbb R_+ X\) is the smallest convex cone containing \(X\). A saturated root subsystem \(\Psi \subseteq \Phi\) is a saturated subset such that \(\Psi = -\Psi\).
A closed subset \(\Sigma \subseteq \Phi\) is called \textit{special} if \(\Sigma \cap -\Sigma = \varnothing\). It is well known that a closed subset of \(\Phi\) is special if and only if it is a subset of some system of positive roots. Hence the smallest saturated subset containing a special subset is also special. A root \(\alpha\) in a saturated special set \(\Sigma\) is called \textit{extreme} if it is indecomposable into a sum of two roots of \(\Sigma\) and, in the case \(\alpha \in \Phi_{\mathrm{us}}\), the root \(2\alpha\) is not a sum of two distinct roots of \(\Sigma\). Every non-empty saturated special set contains an extreme root, and if \(\alpha \in \Sigma\) is extreme, then \(\Sigma \setminus \langle \alpha \rangle\) is also a saturated special set. Notice that if \(\Sigma\) is a saturated special subset and \(u\) is an extreme ray of \(\mathbb R_+ \Sigma\) in the sense of convex geometry, then \(u\) contains an extreme root of \(\Sigma\).
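For example, the saturated special subset \(\Sigma = \langle \mathrm e_1, \mathrm e_2 - \mathrm e_1 \rangle = \{\mathrm e_1, 2\mathrm e_1, \mathrm e_2 - \mathrm e_1, \mathrm e_2, \mathrm e_1 + \mathrm e_2, 2\mathrm e_2\}\) has exactly two extreme roots, namely \(\mathrm e_1\) and \(\mathrm e_2 - \mathrm e_1\): the remaining roots decompose, e.g. \(\mathrm e_2 = \mathrm e_1 + (\mathrm e_2 - \mathrm e_1)\) and \(2\mathrm e_1 = \mathrm e_1 + \mathrm e_1\), while \(\mathrm e_1\) and \(\mathrm e_2 - \mathrm e_1\) do not and \(2\mathrm e_1\) is not a sum of two distinct roots of \(\Sigma\). Accordingly, \(\Sigma \setminus \langle \mathrm e_1 \rangle = \{\mathrm e_2 - \mathrm e_1, \mathrm e_2, \mathrm e_1 + \mathrm e_2, 2\mathrm e_2\}\) is again a saturated special subset.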
If \(\Sigma \subseteq \Phi\) is a special subset, then the multiplication map \[\prod_{\alpha \in \Sigma \setminus 2\Sigma} X_\alpha(S, \Theta) \to \stunit(S, \Theta)\] is injective for any linear order on \(\Sigma \setminus 2\Sigma\), and its image \(\stunit(S, \Theta; \Sigma)\) is a subgroup of \(\stunit(S, \Theta)\) independent of the order. Moreover, the homomorphism \(\stunit(S, \Theta; \Sigma) \to \unit(S, \Theta)\) is injective. This follows from results of \cite[section 4]{CentralityBC}.
Let \(\Psi \subseteq \Phi\) be a saturated root subsystem. Consider the following binary relation on \(\Phi_{\mathrm{us}}\): \(\mathrm e_i \sim_\Psi \mathrm e_j\) if \(\mathrm e_i - \mathrm e_j \in \Psi \cup \{0\}\) and \(\mathrm e_j \notin \Psi\). Actually, this is a partial equivalence relation (i.e. symmetric and transitive); moreover, \(\mathrm e_i \sim_\Psi \mathrm e_j\) if and only if \(\mathrm e_{-i} \sim_\Psi \mathrm e_{-j}\), and \(\mathrm e_i \not \sim_\Psi \mathrm e_{-i}\). Conversely, each partial equivalence relation on \(\Phi_{\mathrm{us}}\) with these properties arises from a unique saturated root subsystem.
The image of \(\Phi \setminus \Psi\) in \(\mathbb R^\ell / \mathbb R \Psi\) is denoted by \(\Phi / \Psi\); in the case \(\Psi = \langle \alpha, -\alpha \rangle\) we write just \(\Phi / \alpha\). We associate with \(\Psi\) a new strong orthogonal hyperbolic family \(H / \Psi\) as follows. If \(E \subseteq \Phi_{\mathrm{us}}\) is an equivalence class with respect to \(\sim_\Psi\), then \(\eta_E\) is the sum of all hyperbolic pairs corresponding to the elements of \(E\), where a sum of two hyperbolic pairs is given by \[(e_-, e_+, q_-, q_+) \oplus (e'_-, e'_+, q'_-, q'_+) = (e_- + e'_-, e_+ + e'_+, q_- \dotplus q'_-, q_+ \dotplus q'_+)\] if \((e_- + e_+) (e'_- + e'_+) = 0\). The family \(H / \Psi\) consists of all \(\eta_E\), where we take only one equivalence class \(E\) from each pair of opposite equivalence classes (so \(H / \Psi\) does not contain opposite hyperbolic pairs). The Steinberg groups constructed by \(H / \Psi\) are denoted by \(\stunit(R, \Delta; \Phi / \Psi)\), \(\stunit(S, \Theta; \Phi / \Psi)\), and \(\stunit(R, \Delta; S, \Theta; \Phi / \Psi)\). In the case \(\Psi = \varnothing\) we obtain the original Steinberg groups. Now it is easy to see that \(\Phi / \Psi\) is a root system of type \(\mathsf{BC}_{\ell - \dim(\mathbb R \Psi)}\) and that it parametrizes the root subgroups of the corresponding Steinberg groups. Note that \(H / \Psi\) is defined only up to the action of \(\mathrm W(\Phi / \Psi)\).
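For example, if \(\Psi = \{\pm(\mathrm e_2 - \mathrm e_1)\}\), then the equivalence classes of \(\sim_\Psi\) are \(\{\mathrm e_1, \mathrm e_2\}\), \(\{\mathrm e_{-1}, \mathrm e_{-2}\}\), and the singletons \(\{\mathrm e_{\pm j}\}\) for \(j \geq 3\), so \(H / \Psi\) consists of the sum \(\eta_1 \oplus \eta_2\) together with \(\eta_3, \ldots, \eta_\ell\), and \(\Phi / \Psi\) is of type \(\mathsf{BC}_{\ell - 1}\); this is the situation formalized by the notation \(e_{i \oplus j}\), \(q_{i \oplus j}\) introduced below.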
Let us denote the map \(\Phi \setminus \Psi \to \Phi / \Psi\) by \(\pi_\Psi\). The preimage of a special subset of \(\Phi / \Psi\) is a special subset of \(\Phi\). There is a canonical group homomorphism \(F_\Psi \colon \stunit(S, \Theta; \Phi / \Psi) \to \stunit(S, \Theta; \Phi)\), it maps every root subgroup \(X_\alpha(S, \Theta)\) to \(\stunit(S, \Theta; \pi_\Psi^{-1}(\{\alpha, 2\alpha\} \cap \Phi / \Psi))\) in such a way that \[\stmap \circ F_\Psi = \stmap \colon \stunit(S, \Theta; \Phi / \Psi) \to \unit(S, \Theta).\] Of course, \(\{\alpha, 2\alpha\} \cap \Phi / \Psi\) coincides with \(\{\alpha, 2 \alpha\}\) for ultrashort \(\alpha\) and with \(\{\alpha\}\) otherwise. The map \(F_\Psi\) induces an isomorphism between \(\stunit(S, \Theta; \pi_\Psi^{-1}(\Sigma))\) and \(\stunit(S, \Theta; \Sigma)\) for any special $\Sigma \subseteq \Phi / \Psi$, so we identify such groups.
There are similarly defined natural homomorphisms \(F_\Psi \colon \stunit(R, \Delta; \Phi / \Psi) \to \stunit(R, \Delta; \Phi)\) and \(F_\Psi \colon \stunit(R, \Delta; S, \Theta; \Phi / \Psi) \to \stunit(R, \Delta; S, \Theta; \Phi)\). By \cite[propositions 1 and 2]{CentralityBC}, \(F_\Psi \colon \stunit(R, \Delta; \Phi / \alpha) \to \stunit(R, \Delta; \Phi)\) is an isomorphism for every root \(\alpha\) if \(\ell \geq 3\) (this also follows from theorem \ref{root-elim} proved below). The diagonal group \(\diag(R, \Delta; \Phi / \Psi)\) constructed by \(H / \Psi\) contains the root elements \(T_\alpha(\mu)\) for all \(\alpha \in \Psi\) and \[F_\Psi\bigl(\up{T_\alpha(\mu)}{g}\bigr) = \up{X_\alpha(\mu)}{F_\Psi(g)} \in \stunit(R, \Delta; S, \Theta; \Phi)\] for \(g \in \stunit(R, \Delta; S, \Theta; \Phi / \Psi)\), \(\alpha \in \Psi\).
Note that there is a one-to-one correspondence between the saturated root subsystems of \(\Phi\) containing a saturated root subsystem \(\Psi\) and the saturated root subsystems of \(\Phi / \Psi\). If \(\Psi \subseteq \Psi' \subseteq \Phi\) are two saturated root subsystems, then \[F_{\Psi} \circ F_{\Psi' / \Psi} = F_{\Psi'} \colon \stunit(S, \Theta; \Phi / \Psi') \to \stunit(S, \Theta; \Phi).\]
Let \(e_{i \oplus j} = e_i + e_j\), \(q_{i \oplus j} = q_i \dotplus q_j\), \(e_{\ominus i} = e_{-i} + e_0 + e_i\). There are new root homomorphisms \begin{align*} X_{i, \pm (l \oplus m)} &\colon S_{i, \pm (l \oplus m)} = S_{i, \pm l} \oplus S_{i, \pm m} \\ &\to \stunit(S, \Theta; \Phi / (\mathrm e_m - \mathrm e_l)) \text{ for } i \notin \{0, \pm l, \pm m\}; \\ X_{\pm (l \oplus m), j} &\colon S_{\pm (l \oplus m), j} = S_{\pm l, j} \oplus S_{\pm m, j} \\ &\to \stunit(S, \Theta; \Phi / (\mathrm e_m - \mathrm e_l)) \text{ for } j \notin \{0, \pm l, \pm m\}; \\ X_{\pm(l \oplus m)} = X^0_{\pm(l \oplus m)} &\colon \Delta^0_{\pm(l \oplus m)} = \Theta^0_{\pm l} \mathbin{\dot{\oplus}} \phi(S_{\mp l, \pm m}) \mathbin{\dot{\oplus}} \Theta^0_{\pm m} \\ &\to \stunit(S, \Theta; \Phi / (\mathrm e_m - \mathrm e_l)); \\ X^{\ominus m}_j &\colon \Theta^{\ominus m}_j = q_{-m} \cdot S_{-m, j} \mathbin{\dot{\oplus}} \Theta^0_j \mathbin{\dot{\oplus}} q_m \cdot S_{mj} \\ &\to \stunit(S, \Theta; \Phi / \mathrm e_m) \text{ for } j \notin \{0, \pm m\}. \end{align*} The remaining root homomorphisms of \(\stunit(S, \Theta; \Phi / \mathrm e_m)\) and \(\stunit(S, \Theta; \Phi / (\mathrm e_m - \mathrm e_l))\) are denoted by the usual \(X_{ij}\) and \(X_j = X^0_j\).
\section{Conjugacy calculus}
Let us say that a group \(G\) has a \textit{conjugacy calculus} with respect to the strong orthogonal hyperbolic family \(H\) if there is a family of maps \[\stunit(R, \Delta; \Sigma) \times \stunit(S, \Theta; \Phi) \to G, (g, h) \mapsto \up g{\{h\}_\Sigma}\] parameterized by a saturated special subset \(\Sigma \subseteq \Phi\) such that \begin{itemize} \item[(Hom)] \(\up g{\{h_1 h_2\}_\Sigma} = \up g{\{h_1\}_\Sigma}\, \up g{\{h_2\}_\Sigma}\); \item[(Sub)] \(\up g{\{h\}_{\Sigma'}} = \up g{\{h\}_\Sigma}\) if \(\Sigma' \subseteq \Sigma\); \item[(Chev)] \(\up{g X_{\alpha}(\mu)}{\{X_{\beta}(\nu)\}_\Sigma} = \up g{\bigl\{\prod_{\substack{i \alpha + j \beta \in \Phi\\ i \geq 0, j > 0}} X_{i \alpha + j \beta}(f_{\alpha \beta i j}(\mu, \nu))\bigr\}_\Sigma}\) if \(\alpha, \beta\) are non-antiparallel and \(\alpha \in \Sigma\); \item[(XMod)] \(\up{X_\alpha(\delta(\mu))\, g}{\{h\}_\Sigma} = \up{\up 1{\{X_\alpha(\mu)\}_\Sigma}}{\bigl(\up g{\{h\}_\Sigma}\bigr)}\) if \(\alpha \in \Sigma\); \item[(Conj)] \(\up{\up{g_1}{\{X_\alpha(\mu)\}_{\Sigma'}}}{\bigl(\up{F_\Psi(g_2)}{\{F_\Psi(h)\}_{\pi_\Psi^{-1}(\Sigma)}}\bigr)} = \up{F_\Psi(\up{\delta(f)}{g_2})}{\{F_\Psi(\up f {h})\}_{\pi_\Psi^{-1}(\Sigma)}}\) if \(\Psi \subseteq \Phi\) is a saturated root subsystem, \(\Sigma \subseteq \Phi / \Psi\) is a saturated special subset, \(\alpha \in \Psi\), \(\Sigma' \subseteq \Psi\), \(f = \up{\stmap(g_1)}{T_\alpha(\mu)} \in \unit(S, \Theta)\). \end{itemize} The axiom (Sub) implies that we may omit the subscript \(\Sigma\) in the maps \((g, h) \mapsto \up g{\{h\}_\Sigma}\).
If \(G\) has a conjugacy calculus with respect to \(H\), then we define the elements \begin{align*} Z_{ij}(a, p) &= \up{X_{ji}(p)}{\{X_{ij}(a)\}}; & X_{ij}(a) &= Z_{ij}(a, 0); \\ Z_j(u, s) &= \up{X_{-j}(s)}{\{X_j(u)\}}; & X_j(u) &= Z_j(u, \dot 0); \\ Z_{i, j \oplus k}(a, p) &= \up{X_{j \oplus k, i}(p)}{\{F_{\mathrm e_k - \mathrm e_j}(X_{i, j \oplus k}(a))\}}; & X_{i, j \oplus k}(a) &= Z_{i, j \oplus k}(a, 0); \\ Z_{i \oplus j, k}(a, p) &= \up{X_{k, i \oplus j}(p)}{\{F_{\mathrm e_j - \mathrm e_i}(X_{i \oplus j, k}(a))\}}; & X_{i \oplus j, k}(a) &= Z_{i \oplus j, k}(a, 0); \\ Z_{i \oplus j}(u, s) &= \up{X_{-(i \oplus j)}(s)}{\{F_{\mathrm e_j - \mathrm e_i}(X_{i \oplus j}(u))\}}; & X_{i \oplus j}(u) &= Z_{i \oplus j}(u, \dot 0);\\ Z^{\ominus i}_j(u, s) &= \up{X^{\ominus i}_{-j}(s)}{\{F_{\mathrm e_i}(X^{\ominus i}_j(u))\}}; & X^{\ominus i}_j(u) &= Z^{\ominus i}_j(u, \dot 0). \end{align*} of \(G\). Since \(\stunit(R, \Delta; S, \Theta)\) and \(\unit(S, \Theta)\) have natural conjugacy calculi with respect to \(H\), we use the notation \(Z_{ij}(a, p)\) and \(Z_j(u, s)\) for the corresponding elements in these groups.
\begin{prop} \label{identities} Suppose that a group \(G\) has a conjugacy calculus with respect to \(H\). Then the following identities hold: \begin{itemize} \item[(Sym)] \(Z_{ij}(a, p) = Z_{-j, -i}(-\inv a, -\inv p)\), \(Z_{i \oplus j, k}(a, p) = Z_{-k, (-i) \oplus (-j)}(-\inv a, -\inv p) = Z_{j \oplus i, k}(a, p)\), \(Z_{i \oplus j}(u, s) = Z_{j \oplus i}(u, s)\), and \(Z_j^{\ominus i}(u, s) = Z_j^{\ominus (-i)}(u, s)\); \item[(Add)] The maps \(Z_{ij}\), \(Z_j\), \(Z_{i \oplus j, k}\), \(Z_{i, j \oplus k}\), \(Z_{i \oplus j}\), \(Z^{\ominus i}_j\) are homomorphisms on the first variables. \item[(Comm)]
\begin{enumerate}
\item \([Z_{ij}(a, p), Z_{kl}(b, q)] = 1\) if \(\pm i, \pm j, \pm k, \pm l\) are distinct,
\item \([Z_{ij}(a, p), Z_l(b, q)] = 1\) if \(\pm i, \pm j, \pm l\) are distinct,
\item \(\up{Z_{ij}(a, p)}{Z_{i \oplus j, k}(b, q)} = Z_{i \oplus j, k}\bigl(\up{Z_{ij}(a, p)} b, \up{Z_{ij}(a, p)} q\bigr)\),
\item \(\up{Z_{ij}(a, p)}{Z_{i \oplus j}(u, s)} = Z_{i \oplus j}\bigl(\up{Z_{ij}(a, p)} u, \up{Z_{ij}(a, p)} s\bigr)\),
\item \(\up{Z_i(u, s)}{Z^{\ominus i}_j(v, t)} = Z^{\ominus i}_j\bigl(\up{Z_i(u, s)} v, \up{Z_i(u, s)} t\bigr)\);
\end{enumerate} \item[(Simp)]
\begin{enumerate}
\item \(Z_{ik}(a, p) = Z_{i \oplus j, k}(a, p)\);
\item \(Z_{ij}(a, p) = Z_{(-i) \oplus j}(\phi(a), \phi(p))\);
\item \(Z_j(u, s) = Z^{\ominus i}_j(u, s)\);
\end{enumerate} \item[(HW)]
\begin{enumerate}
\item \(Z_{j \oplus k, i}\bigl(\up{T_{jk}(r)} a, p + q\bigr) = Z_{k, i \oplus j}\bigl(\up{T_{ij}(p)} a, \up{T_{ij}(p)}{(q + r)}\bigr)\) for \(a \in S_{ki}\), \(p \in R_{ij}\), \(q \in R_{ik}\), \(r \in R_{jk}\),
\item \(Z_{-j \oplus -i}\bigl(\up{T_{ij}(q)}{(u \dotplus \phi(a))}, s \dotplus \phi(p) \dotplus t\bigr) = Z^{\ominus i}_{-j}\bigl(\up{T_i(s)}{(u \dotplus q_i \cdot a)}, \up{T_i(s)}{(q_{-i} \cdot p \dotplus t \dotplus q_i \cdot q)}\bigr)\) for \(u \in \Theta^0_{-j}\), \(a \in S_{i, -j}\), \(s \in \Delta^0_i\), \(p \in R_{-i, j}\), \(t \in \Delta^0_j\), \(q \in R_{ij}\);
\end{enumerate} \item[(Delta)]
\begin{enumerate}
\item \(Z_{ij}(a, \delta(b) + p) = \up{Z_{ji}(b, 0)}{Z_{ij}(a, p)}\),
\item \(Z_j(u, \delta(v) \dotplus s) = \up{Z_{-j}(v, \dot 0)}{Z_j(u, s)}\).
\end{enumerate} \end{itemize} Conversely, suppose that \(G\) is a group with the elements \(Z_{ij}(a, p)\), \(Z_j(u, s)\), \(Z_{i \oplus j, k}(a, p)\), \(Z_{i, j \oplus k}(a, p)\), \(Z_{i \oplus j}(u, s)\), \(Z^{\ominus i}_j(u, s)\) satisfying the identities above. Then \(G\) has a unique conjugacy calculus with respect to \(H\) such that the distinguished elements coincide with the corresponding expressions from the conjugacy calculus. \end{prop} \begin{proof} If \(G\) has a conjugacy calculus, then all the identities may be proved by direct calculations. In particular, (Comm) follows from (Conj), (Delta) follows from (XMod), and the remaining ones follow from (Hom), (Sub), (Chev).
To prove the converse, notice that (Add) implies \begin{align*} Z_{ij}(0, p) &= 1, & Z_{i \oplus j, k}(0, p) &= 1, & Z_{i, j \oplus k}(0, p) &= 1, \\ Z_j(\dot 0, s) &= 1, & Z_{i \oplus j}(\dot 0, s) &= 1, & Z^{\ominus i}_j(\dot 0, s) &= 1. \end{align*} Together with (Sym), (Add), (Simp), and (HW) it follows that \(X_{i \oplus j, k}(a)\), \(X_{i, j \oplus k}(a)\), \(X_{i \oplus j}(u)\), and \(X^{\ominus i}_j(u)\) may be expressed in terms of the root elements \(X_{ij}(a)\) and \(X_j(u)\) in the natural way (these elements are defined as various \(Z_{(-)}(=, \dot 0)\)). Also, (Comm) implies that the root elements satisfy the Chevalley commutator formulas. It is also easy to see that (Sym), (Add), (Simp), and (HW) give some canonical expressions of all the distinguished elements in terms of \(Z_{ij}(a, p)\) and \(Z_j(u, s)\).
We explicitly construct the maps \((g, h) \mapsto \up g{\{h\}_\Sigma}\) by induction on \(|\Sigma|\), the case \(\Sigma = \varnothing\) is trivial. Simultaneously we show that \(\up g{\{h\}_\Sigma}\) evaluates to a distinguished element if \(\Sigma\) is strictly contained in a system of positive roots of a rank \(2\) saturated root subsystem and \(h \in \unit(S, \Theta; -\Sigma)\). Hence from now on we assume that \(\Sigma \subseteq \Phi\) is a saturated special subset and there are unique maps \(\up g{\{h\}_{\Sigma'}}\) for all \(\Sigma'\) with \(|\Sigma'| < |\Sigma|\) satisfying the axioms (Hom), (Sub), and (Chev).
Firstly, we construct \(\up g{\{X_\alpha(\mu)\}_\Sigma}\), where \(\alpha \in \Phi \setminus 2 \Phi\) is a fixed root. If there is an extreme root \(\beta \in \Sigma \setminus 2 \Sigma\) such that \(\beta \neq -\alpha\), then we define \[\up{g X_\beta(\nu)}{\{X_\alpha(\mu)\}_\Sigma} = \up g{\bigl\{\up{X_\beta(\nu)}{X_\alpha(\mu)}\bigr\}_{\Sigma \setminus \langle \beta \rangle}}\] for any \(g \in \stunit(R, \Delta; \Sigma \setminus \langle \beta \rangle)\), where \(\up{X_\beta(\nu)}{X_\alpha(\mu)}\) denotes the element \(\prod_{\substack{i\beta + j\alpha \in \Phi\\ i \geq 0, j > 0}} X_{i \beta + j \alpha}(f_{\beta \alpha i j}(\nu, \mu)) \in \stunit(S, \Theta)\). By (HW), this definition gives the distinguished element for appropriate \(\Sigma\).
Let us check that the definition is correct, i.e. if \(\beta, \gamma \in \Sigma \setminus 2 \Sigma\) are two extreme roots, \(\beta, \gamma, -\alpha\) are distinct, and \(g \in \stunit(R, \Delta; \Sigma \setminus (\langle \beta \rangle \cup \langle \gamma \rangle))\), then \[\up{g X_\beta(\nu)}{\bigl\{\up{X_\gamma(\lambda)}{X_\alpha(\mu)}\bigr\}_{\Sigma \setminus \langle \gamma \rangle}} = \up{g [X_\beta(\nu), X_\gamma(\lambda)]\, X_\gamma(\lambda)}{\bigl\{\up{X_\beta(\nu)}{X_\alpha(\mu)}\bigr\}_{\Sigma \setminus \langle \beta \rangle}}.\] If \(\langle \alpha, \beta, \gamma \rangle\) is special, then this claim easily follows. Else these roots lie in a common saturated root subsystem \(\Phi_0\) of type \(\mathsf A_2\) or \(\mathsf{BC}_2\). We may assume that \(\Sigma = \langle \beta, \gamma \rangle\), otherwise there is an extreme root in \(\Sigma\) but not in \(\Phi_0\). But then this is a simple corollary of (HW).
The above definition cannot be used if \(\Sigma = \langle -\alpha \rangle\). In this case we just define \[\up{X_{ji}(p)}{\{X_{ij}(a)\}_{\langle \mathrm e_j - \mathrm e_i \rangle}} = Z_{ij}(a, p), \quad \up{X_{-i}(s)}{\{X_i(u)\}_{\langle \mathrm e_i \rangle}} = Z_i(u, s).\]
Now let us check that the map \((g, h) \mapsto \up g{\{h\}_\Sigma}\) is well-defined, i.e. factors through the Steinberg relations on the second argument. By construction, it factors through the homomorphism property of the root elements. Let us check that it also factors through the Chevalley commutator formula for \([X_\alpha(\mu), X_\beta(\nu)]\), where \(\alpha, \beta\) are linearly independent roots. If there is an extreme root \(\gamma \in \Sigma\) such that \(\langle \alpha, \beta, \gamma \rangle\) is special, then we may apply the construction of \(\up g{\{h\}_\Sigma}\) via \(\gamma\). Otherwise let \(\Phi_0 \subseteq \Phi\) be the rank \(2\) saturated root subsystem containing \(\alpha\), \(\beta\), \(\Sigma\). If \(\Phi_0\) is of type \(\mathsf A_1 \times \mathsf A_1\) or \(\mathsf A_1 \times \mathsf{BC}_1\), then \(\Sigma \subseteq \langle -\alpha, -\beta \rangle\) and we may apply the corresponding case of (Comm). If \(\Phi_0\) is of type \(\mathsf A_2\) or \(\mathsf{BC}_2\) and \(\langle \alpha, \beta \rangle\) is not its subsystem of positive roots, then we just apply (Add).
Consider the case where \(\Phi_0\) is of type \(\mathsf A_2\), \(\alpha, \beta, \alpha + \beta \in \Phi_0\), and \(\Sigma \subseteq \langle -\alpha, -\beta \rangle\). Without loss of generality, \(\alpha = \mathrm e_j - \mathrm e_i\) and \(\beta = \mathrm e_k - \mathrm e_j\). Then \begin{align*} \up{X_{ji}(p)\, X_{ki}(q)\, X_{kj}(r)}{\{X_{ij}(a)\, X_{jk}(b)\}_{\Sigma}} &= X_{k, i \oplus j}\bigl(\up{T_{ji}(p)}{(qa)}\bigr)\, Z_{ij}(a, p)\, Z_{i \oplus j, k}\bigl(b, \up{T_{ji}(p)}{(q + r)}\bigr) \\ &= Z_{i \oplus j, k}\bigl(\up{T_{ji}(p)\, T_{ij}(a)}{b}, \up{T_{ji}(p)}{(q + r)}\bigr)\, X_{k, i \oplus j}\bigl(\up{T_{ji}(p)}{(qa)}\bigr)\, Z_{ij}(a, p) \\ &= \up{X_{ji}(p)\, X_{ki}(q)\, X_{kj}(r)}{\{X_{jk}(b)\, X_{ik}(ab)\, X_{ij}(a)\}_\Sigma}. \end{align*}
The remaining case is where \(\Phi_0\) is of type \(\mathsf{BC}_2\), \(\alpha, \beta, \alpha + \beta, 2\alpha + \beta \in \Phi_0\), and \(\Sigma \subseteq \langle -\alpha, -\beta \rangle\). Without loss of generality, \(\alpha = \mathrm e_i\) and \(\beta = \mathrm e_j - \mathrm e_i\). We have \begin{align*} \up{X_{-i}(s)\, X_{i, -j}(p)\, X_{-j}(t)\, X_{ji}(q)}{\{X_i(u)\, X_{ij}(a)\}_\Sigma} &= Z_{i \oplus j}(u, s \dotplus \phi(p) \dotplus t)\, Z^{\ominus i}_j\bigl(\up{T_{-i}(s)}{(q_i \cdot a)}, \up{T_{-i}(s)}{(q_i \cdot p \dotplus t \mathbin{\dot{-}} q_{-i} \cdot \inv q)}\bigr)\\ &= X^{\ominus i}_{-j}\bigl(\up{T_{-i}(s)}{[t \dotplus q_i \cdot p, T_i(u)]}\bigr)\, Z_i(u, s)\\ &\cdot Z^{\ominus i}_j\bigl(\up{T_{-i}(s)}{(q_i \cdot a)}, \up{T_{-i}(s)}{(q_i \cdot p \dotplus t \mathbin{\dot{-}} q_{-i} \cdot \inv q)}\bigr) \\ &= Z^{\ominus i}_j\bigl(\up{T_{-i}(s)}{(q_i \cdot a \dotplus u \cdot a \dotplus q_{-i} \cdot \rho(u) a)}, \up{T_{-i}(s)}{(q_i \cdot p \dotplus t \mathbin{\dot{-}} q_{-i} \cdot \inv q)}\bigr) \\ &\cdot Z_{i \oplus j}(u, s \dotplus \phi(p) \dotplus t) \\ &= \up{X_{-i}(s)\, X_{i, -j}(p)\, X_{-j}(t)\, X_{ji}(q)}{\{X_{ij}(a)\, X_j(u \cdot a)\, X_{-i, j}(\rho(u) a)\, X_i(u)\}_\Sigma}. \end{align*}
Clearly, our map \((g, h) \mapsto \up g{\{h\}_\Sigma}\) satisfies the required properties, and it is unique. The axiom (XMod) follows from the Steinberg relations and (Delta) if \(\Sigma\) is one-dimensional; the general case follows from the construction of \(\up g{\{h\}_\Sigma}\).
To prove the axiom (Conj), without loss of generality \(\Sigma' \subseteq \langle -\alpha \rangle\) and \(\Psi = \mathbb R \alpha \cap \Phi\). Applying (Hom) and (Chev) multiple times to the term \(\up{F_\Psi(g_2)}{\{F_\Psi(h)\}_{\pi^{-1}_\Psi(\Sigma)}}\), we reduce to the case where \(\Sigma\) is also one-dimensional. This is precisely (Comm). \end{proof}
From now on let \(\overline{\stunit}(R, \Delta; S, \Theta; \Phi)\) be the universal group with a conjugacy calculus with respect to \(H\). It is the abstract group with the presentation given by proposition \ref{identities}. Clearly, for a saturated root subsystem \(\Psi \subseteq \Phi\) there is a homomorphism \(F_\Psi \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \Psi) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi)\), i.e. every group with a conjugacy calculus with respect to \(H\) also has a canonical conjugacy calculus with respect to \(H / \Psi\). We have a sequence of groups \[\stunit(S, \Theta; \Phi) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi) \to \stunit(R, \Delta; S, \Theta; \Phi)\] with the action of \(\diag(R, \Delta; \Phi)\). Our goal for the next several sections is to prove that the right arrow is an isomorphism.
\section{Lemmas about odd form rings}
The difficult part in the proof that the right arrow is an isomorphism is to construct an action of \(\stunit(R, \Delta; \Phi)\) on \(\overline{\stunit}(R, \Delta; S, \Theta; \Phi)\). In order to do this, we prove that \(F_\alpha \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi)\) is an isomorphism for all roots \(\alpha\), then \(T_\alpha(\mu)\) induce certain automorphisms of \(\overline{\stunit}(R, \Delta; S, \Theta; \Phi)\). In this section we prove surjectivity of \(F_\alpha\) and several preparatory results.
\begin{lemma} \label{ring-pres} If \(i, j, k \neq 0\), then the multiplication map \[S_{ik} \otimes_{e_k R e_k} R_{kj} \to S_{ij}\] is an isomorphism. \end{lemma} \begin{proof} Let \(e_j = \sum_m x_m y_m\) for some \(x_m \in R_{jk}\) and \(y_m \in R_{kj}\); they exist since \(e_j \in R e_k R\). Then a direct calculation shows that \[S_{ij} \to S_{ik} \otimes_{e_k R e_k} R_{kj}, a \mapsto \sum_m a x_m \otimes y_m\] is the inverse to the map from the statement. \end{proof}
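For instance, if there are single elements \(x \in R_{jk}\) and \(y \in R_{kj}\) with \(e_j = xy\) (as for the matrix units \(E_{jk}\), \(E_{kj}\) in a matrix ring), then the inverse map is simply \(a \mapsto ax \otimes y\).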
\begin{lemma} \label{form-pres} For any non-zero indices \(j\), \(k\) consider the group \(F\) with the generators \(u \boxtimes p\) for \(u \in \Theta^0_k\), \(p \in R_{kj}\), and \(\phi(a)\) for \(a \in S_{-j, j}\). The relations are \begin{itemize} \item \(\phi(a + b) = \phi(a) \dotplus \phi(b)\), \(\phi(a) = \phi(-\inv a)\); \item \(u \boxtimes a \dotplus \phi(b) = \phi(b) \dotplus u \boxtimes a\), \([u \boxtimes a, v \boxtimes b]^\cdot = \phi(-\inv a \inv{\pi(u)} \pi(v) b)\); \item \((u \dotplus v) \boxtimes a = u \boxtimes a \dotplus v \boxtimes a\), \(u \boxtimes (a + b) = u \boxtimes a \dotplus \phi(\inv{\,b\,} \rho(u) a) \dotplus u \boxtimes b\); \item \(u \boxtimes ab = (u \cdot a) \boxtimes b\) for \(u \in \Theta^0_k\), \(a \in R_{kk}\), \(b \in R_{kj}\); \item \(\phi(a) \boxtimes b = \phi(\inv{\,b\,} a b)\). \end{itemize} Then the homomorphism \[f \colon F \to \Theta^0_j, u \boxtimes p \mapsto u \cdot p, \phi(a) \mapsto \phi(a)\] is an isomorphism. \end{lemma} \begin{proof} Let \(e_j = \sum_m x_m y_m\) for some \(x_m \in R_{jk}\) and \(y_m \in R_{kj}\). Consider the map \[g \colon \Theta^0_j \to F, u \mapsto \sum_m^\cdot (u \cdot x_m) \boxtimes y_m \dotplus \phi\bigl(\sum_{m < m'} \inv{y_{m'}} \inv{x_{m'}} \rho(u) x_m y_m\bigr),\] it is a section of \(f\). The relations of \(F\) easily imply that \(g\) is a homomorphism. Finally, \begin{align*} g(f(\phi(a))) &= \sum_m^\cdot (\phi(a) \cdot x_m) \boxtimes y_m \dotplus \phi\bigl(\sum_{m < m'} \inv{y_{m'}} \inv{x_{m'}} (a - \inv a) x_m y_m\bigr) = \phi(a); \\ g(f(u \boxtimes a)) &= \sum_m^\cdot (u \cdot a x_m) \boxtimes y_m \dotplus \phi\bigl(\sum_{m < m'} \inv{y_{m'}} \inv{x_{m'}} \inv a \rho(u) a x_m y_m\bigr) = u \boxtimes a. \qedhere \end{align*} \end{proof}
\begin{prop} \label{elim-sur} The homomorphism \[F_\alpha \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi)\] is surjective if \(\alpha\) is a short root and \(\ell \geq 3\) or if \(\alpha\) is an ultrashort root and \(\ell \geq 2\). The homomorphism \[F_{\Psi / \alpha} \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \Psi) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\] is also surjective if \(\alpha \in \Psi \subseteq \Phi\) is a root subsystem of type \(\mathsf A_2\) and \(\ell \geq 3\). \end{prop} \begin{proof} By proposition \ref{identities}, \(\overline{\stunit}(R, \Delta; S, \Theta; \Phi)\) is generated by \(Z_{ij}(a, p)\) and \(Z_j(u, s)\). It suffices to show that they lie in the images of the homomorphisms. This is clear for the generators with the roots not in \(\{\alpha, -\alpha\}\) or \(\Psi\) respectively. For the remaining roots \(\beta\) it suffices to show that \(X_\beta(S, \Theta)\) lie in the images. This easily follows from lemmas \ref{ring-pres}, \ref{form-pres} and the identities \begin{align*} Z_{k, i \oplus j}\bigl(\up{T_{ji}(p)} a, \up{T_{ji}(p)} q\bigr) &= \up{X_{ji}(p)\, X_{ik}(q)}{\{X_{kj}(a)\}} = Z_{ij}(qa, p)\, X_{k, i \oplus j}\bigl(\up{T_{ji}(p)}a\bigr); \\ Z^{\ominus i}_j\bigl(\up{T_{-i}(s)}{(q_{-i} \cdot (-\inv a))}, \up{T_{-i}(s)}{(q_{-i} \cdot p)}\bigr) &= \up{X_{-i}(s)\, X_{-i, -j}(p)}{\{X_{-j, i}(a)\}} = Z_i(\varphi(pa), s)\, X^{\ominus i}_j\bigl(\up{T_{-i}(s)}{(q_{-i} \cdot (-\inv a))}\bigr) \\ Z^{\ominus i}_j\bigl(\up{T_{-i}(s)} u, \up{T_{-i}(s)}{(q_{-i} \cdot p)}\bigr) &= \up{X_{-i}(s)\, X_{-i, -j}(p)}{\{X_j(u)\}} = Z_i(u \cdot p, s)\, X_j\bigl(\up{T_{-i}(s)}{(u \mathbin{\dot{-}} q_{-i} \inv p \inv{\rho(u)})}\bigr). \qedhere \end{align*} \end{proof}
Of course, the proposition also implies that \[F_\alpha \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \Psi) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\] is surjective if \(\ell \geq 3\) and \(\Psi\) is of type \(\mathsf{BC}_2\).
The final technical lemma is needed in the next section.
\begin{lemma}\label{associator} Suppose that \(\ell = 3\). Let \(A\) be an abelian group and \[\{-\}_{ij} \colon R_{1i} \otimes_{\mathbb Z} R_{ij} \otimes_{\mathbb Z} S_{j3} \to A\] be homomorphisms for \(i, j \in \{-2, 2\}\). Suppose also that \begin{itemize} \item[(A1)] \(\{p \otimes qr \otimes a\}_{ik} = \{pq \otimes r \otimes a\}_{jk} + \{p \otimes q \otimes ra\}_{ij}\); \item[(A2)] \(\{p \otimes q \otimes \inv pa\}_{-i, i} = 0\); \item[(A3)] \(\{p \otimes qr \otimes a\}_{ij} = \{\inv q \otimes \inv pr \otimes a\}_{-i, j}\); \item[(A4)] \(\{p \otimes q \otimes ra\}_{ij} = -\{\inv r \otimes \inv q \otimes \inv pa\}_{-j, -i}\). \end{itemize} Then \(\{x\}_{ij} = 0\) for all \(i\), \(j\), \(x\). \end{lemma} \begin{proof}
Let \(R_{|2|, |2|} = \sMat{R_{-2, -2}}{R_{-2, 2}}{R_{2, -2}}{R_{22}}\) and \(S_{|2|, 3} = \sCol{S_{-2, 3}}{S_{23}}\). For convenience we prove the claim for an arbitrary left \(R_{|2|, |2|}\)-module \(S_{|2|, 3}\) instead of a part of a crossed module, where \(S_{\pm 1, 3} = R_{\pm 1, 2} \otimes_{R_{22}} S_{23}\) in (A2) and (A4). From the last two identities we get \begin{align*} \{px \otimes \inv yq \otimes a\}_{ij} &= \{py \otimes \inv xq \otimes a\}_{-i, j}; \tag{A5} \\ \{p \otimes qx \otimes \inv ya\}_{ji} &= \{p \otimes qy \otimes \inv xa\}_{j, -i} \tag{A6} \end{align*} for \(x \in R_{1i}\) and \(y \in R_{1, -i}\). This implies that \[\{p \otimes qxyr \otimes a\}_{ij} = \{p \otimes qyxr \otimes a\}_{ij} \tag{A7}\]
for \(x, y \in R_{\pm 1, \pm 1}\). Let \((I, \Gamma) \trianglelefteq (R, \Delta)\) be the odd form ideal generated by \(xy - yx\) for \(x, y \in R_{11}\). From (A5)--(A7) we obtain that \(\{-\}_{ij}\) factor through \(R / I\) and \(S_{|2|, 3} / (I \cap R_{|2|, |2|}) S_{|2|, 3}\), so we may assume that \(R_{11}\) is commutative. Using (A4)--(A6) it is easy to see that \[\{rx \otimes y e_1 z \otimes w e_1 a\}_{ij} = \{x \otimes yrz \otimes w e_1 a\}_{ij} = \{x \otimes y e_1 z \otimes wra\}_{ij} \tag{A8}\] for \(r \in R_{11}\). By (A5) and (A6), it suffices to prove that \(\{x\}_{22} = 0\).
From (A2), (A4), (A5), and (A6) we get \begin{align*} \{px \otimes qx \otimes a\}_{22} &= 0; \tag{A9} \\ \{p \otimes yq \otimes ya\}_{22} &= 0 \tag{A10} \end{align*} for \(x \in R_{12}\) and \(y \in R_{21}\). Using (A1), (A8), (A9), (A10), and the linearizations of (A9) and (A10), we get \begin{align*} \{x \otimes y (pq)^3 z \otimes wa\}_{22} &= \{xyp \otimes q (pq)^2 z \otimes wa\}_{22} + \{(pq)^2 x \otimes yp \otimes qzwa\}_{22} \\ &= \{xyp \otimes qz' \otimes wpqa\}_{22} + \{x' \otimes ypqp \otimes qzwa\}_{22} \\ &= -\{xyz' \otimes qp \otimes w'a\}_{22} - \{x' \otimes qp \otimes y'zwa\}_{22} = 0 \end{align*} for \(x, p, z \in R_{12}\), \(y, q, w \in R_{21}\), \(a \in S_{13}\), where \(x' = pqx - xqp\), \(y' = ypq - qpy\), \(z' = pqz - zqp\), \(w' = wpq - qpw\). The last equality follows from \(x'q = py' = z'q = pw' = 0\). It remains to notice that the elements \((pq)^3\) generate the unit ideal in \(R_{11}\). \end{proof}
\section{Construction of root subgroups}
In this and the next sections \(\alpha = \mathrm e_m\) or \(\alpha = \mathrm e_m - \mathrm e_l\) for \(m \neq \pm l\) is a fixed root. We also assume that \(\ell \geq 3\). We are going to prove that \[F_\alpha \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi)\] is an isomorphism, i.e. that \(\overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\) has a natural conjugacy calculus with respect to \(H\).
In this section we construct root elements \(\widetilde X_\beta(\mu) \in \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\) for \(\beta \in \Phi\) and prove the Steinberg relations for them. Let \(I = \{m, -m\}\) if \(\alpha = \mathrm e_m\) and \(I = \{m, -m, l, -l\}\) if \(\alpha = \mathrm e_m - \mathrm e_l\). If \(\beta \notin \mathbb R \alpha\), then there is a canonical choice for such elements, i.e. \begin{align*} \widetilde X_{ij}(a) &= X_{ij}(a) \text{ for } i, j \notin I; \\ \widetilde X_{ij}(a) &= \widetilde X_{-j, -i}(-\inv a) = X_{i, \pm (l \oplus m)}(a) \text{ for } \alpha = \mathrm e_m - \mathrm e_l, i \notin I, j \in \{\pm l, \pm m\}; \\ \widetilde X_{ij}(a) &= \widetilde X_{-j, -i}(-\inv a) = X_{\pm (l \oplus m)}(\phi(a)) \text{ for } \alpha = \mathrm e_m - \mathrm e_l, i = \mp l, j = \pm m; \\ \widetilde X_{ij}(a) &= \widetilde X_{-j, -i}(-\inv a) = X_j^{\ominus m}(q_i \cdot a) \text{ for } \alpha = \mathrm e_m, i = \pm m; \\ \widetilde X_j(u) &= X_j(u) \text{ for } \alpha = \mathrm e_m - \mathrm e_l, j \notin I; \\ \widetilde X_j(u) &= X_j^{\ominus m}(u) \text{ for } \alpha = \mathrm e_m, j \notin I; \\ \widetilde X_j(u) &= X_{\pm (l \oplus m)}(u) \text{ for } \alpha = \mathrm e_m - \mathrm e_l, j \in \{\pm l, \pm m\}. \end{align*} These elements satisfy all the Steinberg relations involving only roots from saturated special subsets \(\Sigma \subseteq \Phi\) disjoint with \(\mathbb R \alpha\). Similar elements also may be defined in \(\stunit(R, \Delta; \Phi / \alpha)\) and \(\stunit(S, \Theta; \Phi / \alpha)\). The conjugacy calculus with respect to \(H / \alpha\) gives a way to evaluate the elements \(\up{X_\beta(\mu)}{\{\widetilde X_\gamma(\nu)\}}\) in terms of \(\widetilde X_{i \beta + j \gamma}(\lambda)\) if \(\pm \alpha \notin \langle \beta, \gamma \rangle\) and \(\langle \beta, \gamma \rangle\) is special. Up to symmetry, it remains to construct the element \(\widetilde X_{lm}(a)\) for \(\alpha = \mathrm e_m - \mathrm e_l\) or \(\widetilde X_m(u)\) for \(\alpha = \mathrm e_m\) in \(\overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\), as well as to prove the Steinberg relations involving \(\alpha\) and \(2\alpha\).
Consider the expressions \(\up{\widetilde X_\beta(\mu)}{\{\widetilde X_\gamma(\nu)\}}\), where \(\alpha\) is strictly inside the angle \(\mathbb R_+ \beta + \mathbb R_+ \gamma\). We expand them in terms of \(\widetilde X_{i \beta + j \gamma}(\lambda)\) adding new terms as follows. If \(\alpha = \mathrm e_m - \mathrm e_l\) let \begin{align*} \up{X_{li}(p)}{\{\widetilde X_{im}(a)\}} &= \widetilde X_{lm}^i(p, a)\, \widetilde X_{im}(a); \\ \up{X_{im}(p)}{\{\widetilde X_{li}(a)\}} &= \up i {\widetilde X}_{lm}(-a, p)\, \widetilde X_{li}(a); \\ \up{X_{-l}(s)}{\{\widetilde X_m(u)\}} &= \widetilde X_{lm}^\pi(s, \mathbin{\dot{-}} u)\, \widetilde X_m(u); \\ \up{X_m(s)}{\{\widetilde X_{-l}(u)\}} &= \up \pi{\widetilde X}_{lm}(u, s)\, \widetilde X_{-l}(u); \\ \up{X_{-l}(s)}{\{\widetilde X_{-l, m}(a)\}} &= \widetilde X^{-l}_{lm}(s, a)\, \widetilde X_m(\mathbin{\dot{-}} s \cdot (-a))\, \widetilde X_{-l, m}(a); \\ \up{X_m(s)}{\{\widetilde X_{l, -m}(a)\}} &= \up{-m}{\widetilde X}_{lm}(a, \mathbin{\dot{-}} s)\, \widetilde X_{-l}(\mathbin{\dot{-}} s \cdot \inv a)\, \widetilde X_{l, -m}(a); \\ \up{X_{l, -m}(p)}{\{\widetilde X_m(u)\}} &= \widetilde X_{-l}(u \cdot \inv p)\, \widetilde X^{-m}_{lm}(-p, \mathbin{\dot{-}} u)\, \widetilde X_m(u); \\ \up{X_{-l, m}(p)}{\{\widetilde X_{-l}(u)\}} &= \widetilde X_m(u \cdot (-p))\, \up{-l}{\widetilde X}_{lm}(u, -p)\, \widetilde X_{-l}(u); \\ \widetilde X_\alpha(S, \Theta) &= \bigl\langle \widetilde X^i_{lm}(p, a), \up i{\widetilde X}_{lm}(a, p), \widetilde X^\pi_{lm}(s, u), \up \pi {\widetilde X}_{lm}(u, s), \\ &\quad \widetilde X^{-l}_{lm}(s, a), \up{-m}{\widetilde X}_{lm}(a, s), \widetilde X^{-m}_{lm}(p, u), \up{-l}{\widetilde X}_{lm}(u, p) \bigr\rangle. \end{align*} In the case \(\alpha = \mathrm e_m\) let \begin{align*} \up{X_{-m, i}(p)}{\{\widetilde X_{im}(a)\}} &= \widetilde X_{-m, m}^i(p, a)\, \widetilde X_{im}(a); \\ \up{X_i(s)}{\{\widetilde X_{im}(a)\}} &= \widetilde X_{-i, m}(\rho(s) a)\, \widetilde X^i_m(\mathbin{\dot{-}} s, -a)\, \widetilde X_{im}(a); \\ \up{X_{im}(p)}{\{\widetilde X_i(u)\}} &= \up i {\widetilde X}_m(u, -p)\, \widetilde X_{-i, m}(-\rho(u) p)\, \widetilde X_i(u); \\ \widetilde X_\alpha(S, \Theta) &= \bigl\langle \widetilde X^i_{-m, m}(p, a), \widetilde X^i_m(s, a), \up i{\widetilde X}_m(u, p) \bigr\rangle. \end{align*}
\begin{lemma} \label{elim-diag} If \(g \in \widetilde X_\alpha(S, \Theta)\) and \(\beta \in \Phi / \alpha\), then \[\up g{Z_\beta(\mu, \nu)} = Z_\beta\bigl(\up{\delta(\stmap(g))} \mu, \up{\delta(\stmap(g))} \nu\bigr).\] \end{lemma} \begin{proof} Without loss of generality, \(g\) is a generator of \(\widetilde X_\alpha(S, \Theta)\). Let \(\Psi \subseteq \Phi\) be the saturated irreducible root subsystem of rank \(2\) involved in the definition of \(g\) (of type \(\mathsf A_2\) or \(\mathsf{BC}_2\)). If \(\beta \notin \Psi / \alpha\), then the claim follows from (Conj). Otherwise we apply proposition \ref{elim-sur} to \[F_\beta \colon \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \Psi) \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\] in order to decompose \(Z_\beta(\mu, \nu)\) into the generators with roots not in \(\Psi / \alpha\). \end{proof}
Let \(\mathrm{eval} \colon \widetilde X_\alpha(S, \Theta) \to S_{lm}\) or \(\mathrm{eval} \colon \widetilde X_\alpha(S, \Theta) \to \Theta^0_m\) be such that \(\stmap(g) = T_{lm}(\mathrm{eval}(g))\) or \(\stmap(g) = T_m(\mathrm{eval}(g))\) for \(g \in \widetilde X_\alpha(S, \Theta)\) depending on the choice of \(\alpha\). Lemma \ref{elim-diag} implies that \[[\widetilde X_\beta(\mu), g] = \prod_{\substack{i \beta + j \alpha \in \Phi\\ i, j > 0}} X_{i \beta + j \alpha}(f_{\beta \alpha i j}(\mu, \mathrm{eval}(g)))\] for all \(g \in \widetilde X_\alpha(S, \Theta)\) if \(\beta\) and \(\alpha\) are linearly independent.
We still have to prove the relations between the generators of \(\widetilde X_\alpha(S, \Theta)\). In order to do so, we consider expressions \[\up{\prod_{\beta \in \Sigma} X_\beta(\mu_\beta)}{\{\widetilde X_\gamma(\nu)\}_\Sigma},\] where \(\Sigma \subseteq \Phi\) is a two-dimensional saturated special subset, \(\langle \Sigma, \gamma \rangle\) is special and three-dimensional, \(\alpha\) is strictly inside \(\mathbb R_+ \Sigma + \mathbb R_+ \gamma\). Such an expression may be decomposed into a product of root elements \(\widetilde X_\beta(\nu)\) and the generators of \(\widetilde X_\alpha(S, \Theta)\) in two ways if at the first step we take one of the extreme roots of \(\Sigma\) applying (Chev). We say that the expression \textit{gives} an identity \(h_1 = h_2\) between such products. Similarly, an expression \(\up{X_\beta(\mu)}{\{\widetilde X_\gamma(\nu_1 \dotplus \nu_2)\}_{\langle \beta \rangle}}\) \textit{gives} an identity \(h_1 = h_2\) if \(\alpha\) is strictly inside the angle \(\mathbb R_+ \beta + \mathbb R_+ \gamma\) and at the first step we either apply (Chev), or replace \(\widetilde X_\gamma(\nu_1 \dotplus \nu_2)\) by \(\widetilde X_\gamma(\nu_1)\, \widetilde X_\gamma(\nu_2)\) and then apply (Hom) and (Chev) to the result.
Note also that any generator of \(\widetilde X_\alpha(S, \Theta)\) is trivial if either of its arguments vanishes.
\begin{lemma} \label{ush-new-root} Suppose that \(\alpha = \mathrm e_m\). Then there is unique homomorphism \[\widetilde X_m \colon \Theta^0_m \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\] such that \(g = \widetilde X_m(\mathrm{eval}(g))\) for all \(g \in \widetilde X_\alpha(S, \Theta)\). \end{lemma} \begin{proof} Lemma \ref{elim-diag} implies that the elements \(\widetilde X^i_{-m, m}(p, a)\) lie in the center of \(\widetilde X_\alpha(S, \Theta)\). It is easy to check that \begin{align*} \up{X_{-m, i}(p)}{\{\widetilde X_{im}(a + b)\}} &\text{ gives } \widetilde X^i_{-m, m}(p, a + b) = \widetilde X^i_{-m, m}(p, a)\, \widetilde X^i_{-m, m}(p, b) ; \\ \up{X_{-m, j}(p)\, X_{-m, i}(q)\, X_{ji}(r)}{\{\widetilde X_{im}(a)\}} &\text{ gives } \widetilde X^i_{-m, m}(q + pr, a) = \widetilde X^i_{-m, m}(q, a)\, \widetilde X^j_{-m, m}(p, ra) \text{ for } i \neq \pm j; \\ \up{X_{-m, i}(p)\, X_{jm}(-q)}{\{\widetilde X_{ij}(a)\}} &\text{ gives } \widetilde X^i_{-m, m}(p, aq) = \widetilde X^{-j}_{-m, m}(\inv q, -\inv a \inv p) \text{ for } i \neq \pm j. \end{align*} From the second identity we easily get \(\widetilde X^i_{-m, m}(p + q, a) = \widetilde X^i_{-m, m}(p, a)\, \widetilde X^i_{-m, m}(q, a)\) and \(\widetilde X^i_{-m, m}(pq, a) = \widetilde X^j_{-m, m}(p, qa)\) for all \(i, j \neq \pm m\). Hence by lemma \ref{ring-pres} there is a unique homomorphism \[\widetilde X_{-m, m} \colon S_{-m, m} \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\] such that \(\widetilde X^k_{-m, m}(p, a) = \widetilde X_{-m, m}(pa)\) and \(\widetilde X_{-m, m}(a) = \widetilde X_{-m, m}(-\inv a)\).
It turns out that for \(i \neq \pm j\) \begin{align*} \up{X_{im}(-p)}{\{\widetilde X_i(u \dotplus v)\}} &\text{ gives } \up i{\widetilde X}_m(u \dotplus v, p) = \up i{\widetilde X}_m(u, p)\, \up i{\widetilde X}_m(v, p) ; \\ \up{X_{jm}(-r)\, X_{-j, i}(p)}{\{\widetilde X_{ij}(a)\}} &\text{ gives } \up j{\widetilde X_m}(\phi(pa), r) = \widetilde X_{-m, m}(\inv r par) ; \\ \up{X_{im}(-p)\, X_{jm}(-q)\, X_{ji}(-r)}{\{\widetilde X_j(u)\}} &\text{ gives } \up j{\widetilde X}_m(u, rp + q) = \up i{\widetilde X}_m(u \cdot r, p)\, \widetilde X_{-m, m}(\inv q \rho(u) rp)\, \up j{\widetilde X}_m(u, q). \end{align*} The second identity may be generalized to \(\up i{\widetilde X}_m(\phi(a), p) = \widetilde X_{-m, m}(\inv pap)\). The last identity is equivalent to \(\up j{\widetilde X}_m(u, rp) = \up i{\widetilde X}_m(u \cdot r, p)\) and \(\up j{\widetilde X}_m(u, rp + q) = \up j{\widetilde X}_m(u, rp)\, \widetilde X_{-m, m}(\inv q \rho(u) rp)\, \up j{\widetilde X}_m(u, q)\) for \(i \neq \pm j\). Hence \(\up i{\widetilde X}_m(u, p + q) = \up i{\widetilde X}_m(u, p)\, \widetilde X_{-m, m}(\inv q \rho(u) p)\, \up i{\widetilde X}_m(u, q)\) and \(\up j{\widetilde X}_m(u, pq) = \up i{\widetilde X}_m(u \cdot p, q)\) for all \(i, j \neq \pm m\). Moreover, lemma \ref{elim-diag} implies that \([g, \up i{\widetilde X}_m(u, p)] = \widetilde X_{-m, m}\bigl(\inv p \inv{\pi(u)} \pi(\mathrm{eval}(g))\bigr)\) for all \(g \in \widetilde X_\alpha(S, \Theta)\). Now lemma \ref{form-pres} gives the unique homomorphism \[\widetilde X_m \colon \Theta^0_m \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\] such that \(\widetilde X_{-m, m}(a) = \widetilde X_m(\phi(a))\) and \(\up i{\widetilde X}_m(u, p) = \widetilde X_m(u \cdot p)\).
It remains to prove that \(\widetilde X_m^i(s, a) = \widetilde X_m(s \cdot a)\). This easily follows from \begin{align*} \up{X_i(\mathbin{\dot{-}} s)}{\{\widetilde X_{im}(-a - b)\}} &\text{ gives } \widetilde X^i_m(s, a + b) = \widetilde X^i_m(s, a)\, \widetilde X_{-m, m}(\inv{\,b\,} \rho(s) a)\, \widetilde X^i_m(s, b) ; \\ \up{X_j(\mathbin{\dot{-}} s)\, X_{im}(-p)}{\{\widetilde X_{ji}(-a)\}} &\text{ gives } \widetilde X^j_m(s, ap) = \widetilde X_m(s \cdot ap) \text{ for } i \neq \pm j. \qedhere \end{align*} \end{proof}
\begin{lemma} \label{sh-new-root} Suppose that \(\alpha = \mathrm e_m - \mathrm e_l\). Then there is a unique homomorphism \[\widetilde X_{lm} \colon S_{lm} \to \overline{\stunit}(R, \Delta; S, \Theta; \Phi / \alpha)\] such that \(g = \widetilde X_{lm}(\mathrm{eval}(g))\) for all \(g \in \widetilde X_\alpha(S, \Theta)\). \end{lemma} \begin{proof} First of all, there is a nontrivial element from the Weyl group \(\mathrm W(\Phi)\) stabilizing \(\alpha\); it exchanges \(\pm \mathrm e_l\) with \(\mp \mathrm e_m\). It gives a duality between the generators of \(\widetilde X_\alpha(S, \Theta)\) as follows: \begin{align*} \widetilde X^i_{lm}(p, a) &\leftrightarrow \up{-i}{\widetilde X_{lm}}(\inv a, -\inv p), & \widetilde X^{-l}_{lm}(s, a) &\leftrightarrow \up{-m}{\widetilde X_{lm}}(-\inv a, \mathbin{\dot{-}} s), \\ \widetilde X^\pi_{lm}(s, u) &\leftrightarrow \up \pi{\widetilde X_{lm}}(\mathbin{\dot{-}} u, s), & \widetilde X^{-m}_{lm}(p, u) &\leftrightarrow \up{-l}{\widetilde X_{lm}}(\mathbin{\dot{-}} u, -\inv p). \end{align*} So it suffices to prove only one half of the identities between the generators. Note that \begin{align*} \up{X_{i, -l}(q)\, X_{-l}(s)\, X_{li}(-p)}{\{\widetilde X_m(\mathbin{\dot{-}} u)\}} &\text{ gives } \widetilde X^\pi_{lm}(s \dotplus \phi(pq), u) = \widetilde X^\pi_{lm}(s, u) ; \\ \up{X_{-m, i}(-p)\, X_{-l}(s)}{\{\widetilde X_{im}(a)\}} &\text{ gives } \widetilde X^\pi_{lm}(s, \phi(pa)) = 1; \end{align*} in particular, \(\widetilde X^\pi_{lm}(s, \phi(a)) = \widetilde X^\pi_{lm}(\phi(p), u) = 1\). From this and lemma \ref{elim-diag} we easily obtain that \(\widetilde X^i_{lm}(p, a)\), \(\widetilde X^\pi_{lm}(s, u)\), \(\widetilde X^{-l}_{lm}(s, a)\) lie in the center of \(\widetilde X_\alpha(S, \Theta)\). Next, \begin{align*} \up{X_{li}(p)\, X_{lj}(r)\, X_{ij}(q)}{\{\widetilde X_{jm}(a)\}} &\text{ gives } \widetilde X_{lm}^i(p, qa)\, \widetilde X_{lm}^j(r, a) = \widetilde X_{lm}^j(pq + r, a) \text{ for } i \neq \pm j ; \\ \up{X_{li}(p)}{\{\widetilde X_{im}(a + b)\}} &\text{ gives } \widetilde X^i_{lm}(p, a + b) = \widetilde X^i_{lm}(p, a)\, \widetilde X^i_{lm}(p, b) ; \\ \up{X_{-m, i}(-q)\, X_{li}(r)\, X_{l, -m}(p)}{\{\widetilde X_{im}(a)\}} &\text{ gives } \widetilde X^{-m}_{lm}(-p, \phi(qa))\, \widetilde X^i_{lm}(pq + r, a) = \up{-i}{\widetilde X}_{lm}(p \inv a, \inv q)\, \widetilde X^i_{lm}(r, a). \end{align*} The first identity is equivalent to \begin{align*} \widetilde X_{lm}^i(p, qa) &= \widetilde X_{lm}^j(pq, a) ; \tag{B1} \\ \widetilde X_{lm}^j(pq + r, a) &= \widetilde X_{lm}^j(pq, a)\, \widetilde X_{lm}^j(r, a) \end{align*} for \(i \neq \pm j\) and the third one is equivalent to \begin{align*} \widetilde X^{-m}_{lm}(-p, \phi(qa))\, \widetilde X^i_{lm}(pq, a) &= \up{-i}{\widetilde X}_{lm}(p \inv a, \inv q); \tag{B2} \\ \widetilde X^i_{lm}(pq + r, a) &= \widetilde X^i_{lm}(pq, a)\, \widetilde X^i_{lm}(r, a). \end{align*} It follows that the maps \(\widetilde X^i_{lm}(-, =)\) are biadditive.
Now we have \begin{align*} \up{X_{i, -l}(q)\, X_{li}(-p)}{\{\widetilde X_{-l, m}(a)\}} &\text{ gives } \widetilde X^{-l}_{lm}(\phi(pq), a) = \widetilde X^i_{lm}(p, qa)\, \widetilde X^{-i}_{lm}(-\inv q, \inv pa) ; \tag{B3} \\ \up{X_{-l}(s)\, X_i(t)}{\{\widetilde X_{im}(-a)\}} &\text{ gives } \widetilde X^\pi_{lm}(s, t \cdot a) = \widetilde X^i_{lm}(\inv{\pi(s)} \pi(t), a) . \tag{B4} \end{align*} Using (B4), we easily obtain \begin{align*} \up{X_{li}(p)\, X_{-i}(s)}{\{\widetilde X_{-i, m}(a)\}} &\text{ gives } \widetilde X^i_{lm}(p, \rho(s) a) = \widetilde X^{-i}_{lm}(p \rho(s), a) ; \tag{B5} \\ \up{X_{-l, i}(p)\, X_{-l}(s)}{\{\widetilde X_{im}(a)\}} &\text{ gives } \widetilde X^{-l}_{lm}(s, pa) = \widetilde X^i_{lm}(\rho(s) p, a) ; \tag{B6} \\ \up{X_{i, -l}(-p)\, X_i(t)}{\{\widetilde X_{-l, m}(a)\}} &\text{ gives } \widetilde X^{-l}_{lm}(t \cdot p, a) = \widetilde X^i_{lm}(\inv p \rho(t), pa) . \tag{B7} \end{align*}
We are ready to construct a homomorphism \(\widetilde X_{lm}\) such that \(\widetilde X_{lm}^i(p, a) = \widetilde X_{lm}(pa)\) using lemma \ref{ring-pres}. If \(\ell \geq 4\), then it exists by (B1). Otherwise we may assume that \(l = 1\), \(m = 3\), and apply lemma \ref{associator} to \[\{p \otimes q \otimes a\}_{ij} = \widetilde X_{lm}^j(pq, a)\, \widetilde X_{lm}^i(p, qa)^{-1}.\] Namely, (B3) and (B7) imply (A2); (B3) and (B6) imply (A3); and (B3) implies (A4).
It remains to express the remaining generators via \(\widetilde X_{lm}\). For \(\widetilde X^\pi_{lm}\), \(\widetilde X^{-l}_{lm}\), and \(\widetilde X^{-m}_{lm}\) this follows from (B4), (B6), \begin{align*} \up{X_{-l}(s)}{\{\widetilde X_m(\mathbin{\dot{-}} u \mathbin{\dot{-}} v)\}} &\text{ gives } \widetilde X^\pi_{lm}(s, v \dotplus u) = \widetilde X^\pi_{lm}(s, u)\, \widetilde X^\pi_{lm}(s, v) ; \\ \up{X_{-l}(s)}{\{\widetilde X_{-l, m}(a + b)\}} &\text{ gives } \widetilde X^{-l}_{lm}(s, a + b) = \widetilde X^{-l}_{lm}(s, a)\, \widetilde X^{-l}_{lm}(s, b) ; \\ \up{X_{li}(p)\, X_{l, -m}(-r)\, X_{i, -m}(-q)}{\{\widetilde X_m(\mathbin{\dot{-}} u)\}} &\text{ gives } \widetilde X^{-m}_{lm}(pq + r, u) = \widetilde X^i_{lm}(p, q \rho(u))\, \widetilde X^{-m}_{lm}(r, u). \end{align*} The dual generators may be expressed via \(\widetilde X_{lm}(\mu)\) using (B2). \end{proof}
The elements \(\widetilde X_\alpha(\mu)\) constructed by lemmas \ref{ush-new-root} and \ref{sh-new-root} satisfy all the missing Steinberg relations in \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\). Also, \[\widetilde X_\alpha(\mu)\, g\, \widetilde X_\alpha(\mu)^{-1} = \up{T_\alpha(\mu)}g \tag{*}\] for any \(g\); this is easy to check for \(g = Z_\beta(\lambda, \nu)\) by expressing \(\widetilde X_\alpha(\mu)\) via \(Z_\gamma(\lambda', \nu')\), where \(\gamma\) and \(\beta\) are linearly independent.
\section{Presentation of relative unitary Steinberg groups} \label{sec-pres}
We are ready to construct a conjugacy calculus with respect to \(H\) on \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\). For a special subset \(\Sigma \subseteq \Phi / \alpha\) and \(g \in \stunit(R, \Delta; \Sigma)\) let \begin{align*} \up g{\{\widetilde X_\beta(\mu)\}_{\pi_\alpha^{-1}(\Sigma)}} &= \up g{\{X_{\pi_\alpha(\beta)}(\mu')\}} \text{ for } \beta \notin \mathbb R \alpha; \\ \up g{\{\widetilde X_\beta(\mu)\}_{\pi_\alpha^{-1}(\Sigma)}} &= [g, T_\beta(\mu)]\, \widetilde X_\beta(\mu) \text{ for } \beta \in \mathbb R \alpha \end{align*} be the elements of \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\). Here \(\mu' \in S \cup \Theta\) is the element with the property \(T_{\pi_\alpha(\beta)}(\mu') = T_\beta(\mu) \in \unit(S, \Theta)\); in the second formula \(T_\beta(\mu)\) naturally acts on \(\stunit(S \rtimes R, \Theta \rtimes \Delta; \Sigma)\).
\begin{lemma} \label{conj-comm} Let \(\beta\) and \(\gamma\) be linearly independent roots of \(\Phi\) such that \(\alpha\) lies strictly inside the angle \(\mathbb R_+ \beta + \mathbb R_+ \gamma\). Then \[\up{g X_\beta(\mu)}{\{\widetilde X_\gamma(\nu)\}_{\pi_\alpha^{-1}(\pi_\alpha(\beta))}} = \prod_{\substack{i \beta + j \gamma \in \Phi; \\ i \geq 0, j > 0}} \up g {\{\widetilde X_{i \beta + j \gamma}(f_{\beta \gamma ij}(\mu, \nu))\}_{\pi_\alpha^{-1}(\pi_\alpha(\beta))}}\] for all \(g \in \stunit(R, \Delta; \pi_\alpha(\beta))\). \end{lemma} \begin{proof} We evaluate the expressions \begin{align*} \up{X_{lj}(p)\, X_{l \oplus m, i}(q)\, X_{ji}(r)}{\{\widetilde X_{im}(a)\}} &\text{ for } \alpha = \mathrm e_m - \mathrm e_l; \\ \up{X_{i, -l}(p)\, X_{-(l \oplus m)}(s)\, X_{li}(q)\, X_{mi}(r)\, X_i(t)}{\{\widetilde X_m(u)\}} &\text{ for } \alpha = \mathrm e_m - \mathrm e_l; \\ \up{X_{-m, j}(p)\, X_j(s)\, X_{-i, j}(q)\, X_i^{\ominus m}(t)\, X_{ji}(r)}{\{\widetilde X_{im}(a)\}} &\text{ for } \alpha = \mathrm e_m \end{align*} in two ways as products of an element from \(\stunit(S, \Theta; \Sigma)\) for a special \(\Sigma \subseteq \Phi\) by an element of the type \(\up g{\{\widetilde X_\beta(\mu)\}}_{\pi_\alpha^{-1}(\pi_\alpha(\beta))}\) assuming that the indices and their opposites are distinct and non-zero. During the calculations it is convenient to put the factors \(\widetilde X_{im}(a)\) and \(\widetilde X_m(u)\) inside the curly brackets in the rightmost position. After cancellation we obtain particular cases of the required identity. This easily follows by considering the images in \(\unit(S, \Theta)\) without evaluating the first factors.
The remaining two cases follow from \begin{align*} \up{X_{-(l \oplus m)}(s)}{\{\widetilde X_{im}(pa)\}}\, &\up{X_{-(l \oplus m)}(s \mathbin{\dot{-}} t \cdot (-p) \dotplus \phi(qp))}{\{\widetilde X_{-l, m}(a)\}} \\ &\equiv \up{X_{i, -l}(p)\, X_{-(l \oplus m)}(s \mathbin{\dot{-}} t \cdot (-p) \dotplus \phi(qp))\, X_{li}(q)\, X_i(t)}{\{\widetilde X_{-l, m}(a)\}} \\ &= \up{X_{li}(q + \inv p \inv{\rho(t)})\, X_i(t)\, X_{-(l \oplus m)}(s)\, \widetilde X_{i, -l}(p)}{\{\widetilde X_{-l, m}(a)\}} \\ &\equiv \up{X_{-(l \oplus m)}(s)}{\{\widetilde X_{im}(pa)\, \widetilde X_m(t \cdot pa \mathbin{\dot{-}} \phi(\inv a qpa))\, \widetilde X_{-l, m}(a)\}} \end{align*} up to a factor from \(\stunit(S, \Theta; \langle -\mathrm e_m, \mathrm e_m - \mathrm e_l, \mathrm e_i + \mathrm e_l \rangle)\) on the left for \(\alpha = \mathrm e_m - \mathrm e_l\) and \begin{align*} \up{X^{\ominus m}_i(s \dotplus q_{-m} \cdot qp)}{\{\widetilde X_{-i}(u)\}} &\equiv \up{\widetilde X_{ji}(p)\, X^{\ominus m}_i(s \dotplus q_{-m} \cdot qp)\, \widetilde X_{-m, j}(q)}{\{\widetilde X_{-i}(u)\}} \\ &= \up{\widetilde X_{-m, j}(q)\, X^{\ominus m}_i(s)\, \widetilde X_{ji}(p)}{\{\widetilde X_{-i}(u)\}} \\ &\equiv \up{X^{\ominus m}_i(s)}{\{\widetilde X_{im}(\rho(u) \inv p \inv q)\, \widetilde X_{-i}(u)\}} \end{align*} up to a factor from \(\stunit(S, \Theta; \langle \mathrm e_m, \mathrm e_i - \mathrm e_m, -\mathrm e_i - \mathrm e_j \rangle)\) on the left for \(\alpha = \mathrm e_m\). \end{proof}
\begin{theorem} \label{root-elim} Let \(\delta \colon (S, \Theta) \to (R, \Delta)\) be a crossed module of odd form rings, where \((R, \Delta)\) has a strong orthogonal hyperbolic family of rank \(\ell \geq 4\). Then \[F_\alpha \colon \overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha) \to \overline \stunit(R, \Delta; S, \Theta; \Phi)\] is an isomorphism for any \(\alpha \in \Phi\). \end{theorem} \begin{proof} We find the inverse homomorphism \(G_\alpha\) by providing a conjugacy calculus with respect to \(H\) on \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\). Our construction of the maps \((g, h) \mapsto \up g{\{h\}_\Sigma}\) depends on mutual alignment of \(\Sigma\) and \(\alpha\). If \(\Sigma = \varnothing\), then the required homomorphism \(\stunit(S, \Theta; \Phi) \to \overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\) exists by lemmas \ref{ush-new-root} and \ref{sh-new-root}. Below we use the conjugacy calculus on \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\) and the known properties of \(\widetilde X_\alpha(\mu)\) without explicit references.
Suppose that \(\Sigma\) does not intersect \(\mathbb R \alpha\). We already have the map if \(h\) is a root element. In the subcase of two-dimensional \(\langle \alpha, \Sigma \rangle\) the maps \(\up g{\{-\}_\Sigma}\) are homomorphisms by lemma \ref{conj-comm} and \[\up{\up g{\{\widetilde X_\alpha(\mu)\}_\Sigma}}{\bigl(\up g{\{X_\beta(\nu)\}_\Sigma}\bigr)} = \up{[g, X_\alpha(\delta(\mu))]\, \up{T_\alpha(\delta(\mu))}g}{\bigl\{X_\beta\bigl(\up{T_\alpha(\mu)} \nu\bigr)\bigr\}_\Sigma} = \up g{\bigl\{X_\beta\bigl(\up{T_\alpha(\mu)}\nu\bigr)\bigr\}_\Sigma}\]
for \(\beta \in \Phi / \alpha\) and \(\Sigma = \pi_\alpha^{-1} \langle -\beta \rangle\); this is a corollary of (*). They also satisfy (Chev) by lemma \ref{conj-comm}. In the general case the maps may be constructed by induction on \(|\Sigma|\) as in the proof of proposition \ref{identities} since we may always apply (Chev) to an extreme root of \(\Sigma\). The same induction shows that the maps satisfy (Hom), (Sub), and (Chev). From now on we may assume that \(\alpha \in \Sigma\) since the case \(-\alpha \in \Sigma\) is symmetric.
Let us show that there is a unique \(\up{(-)}{\{=\}_\Sigma}\) by induction on the smallest face \(\Gamma\) of the cone \(\mathbb R_+ \Sigma\) containing \(\alpha\). If \(\Gamma\) is one-dimensional, then \(\alpha\) is an extreme root of \(\Sigma\). We define \(\up{(-)}{\{=\}_\Sigma}\) as \[\up{X_\alpha(\mu) g}{\{h\}_\Sigma} = \up{T_\alpha(\mu)}{\bigl(\up g{\{h\}_{\Sigma \setminus \langle \alpha \rangle}}\bigr)}\] for \(g \in \stunit(R, \Delta; \Sigma \setminus \langle \alpha \rangle)\); it satisfies (Hom), (Sub), and (Chev).
In order to construct \(\up{(-)}{\{\widetilde X_\beta(\lambda)\}_\Sigma}\) for \(\dim \Gamma \geq 2\), take an extreme root \(\gamma \in \Gamma \cap \Sigma\) non-antiparallel to \(\beta\) and let \[\up{g\, X_\gamma(\mu)}{\{\widetilde X_\beta(\lambda)\}_\Sigma} = \prod_{\substack{i \gamma + j \beta \in \Phi\\ i \geq 0, j > 0}} \up g{\{\widetilde X_{i \gamma + j \beta}(f_{\gamma \beta i j}(\mu, \lambda))\}_{\Sigma \setminus \langle \gamma \rangle}}\] for \(g \in \stunit(R, \Delta; \Sigma \setminus \langle \gamma \rangle)\). Clearly, this definition is independent of \(\gamma\) unless \(\Gamma\) is two-dimensional and both \(\alpha\), \(-\beta\) lie in its relative interior. In this case let \(\gamma_1\), \(\gamma_2\) be the extreme roots of \(\Sigma \cap \Gamma\). Take a decomposition \(\widetilde X_\beta(\lambda) = \prod_t \up{X_\delta(\kappa_t)}{\{\widetilde X_{\varepsilon_t}(\lambda_t)\}}\), where \(\langle \delta, \beta, \varepsilon_t \rangle\) is special, two-dimensional, and not contained in \(\mathbb R \Gamma\); \(\delta\) and \(\varepsilon_t\) are on opposite sides of \(\beta\); \(\langle \Sigma, \delta \rangle\) is special with a face \(\Gamma\). Abusing notation a bit, for any \(g \in \stunit(R, \Delta; \Sigma \setminus (\langle \gamma_1 \rangle \cup \langle \gamma_2 \rangle))\) we have \begin{align*} &\up{g\, X_{\gamma_1}(\mu)}{\bigl\{\up{X_{\gamma_2}(\nu)}{\widetilde X_\beta(\lambda)}\bigr\}_{\Sigma \setminus \langle \gamma_2 \rangle}} = \prod_t \up{g\, X_{\gamma_1}(\mu)\, \up{X_{\gamma_2}(\nu)}{X_\delta(\kappa_t)}}{\bigl\{\up{X_{\gamma_2}(\nu)}{\widetilde X_{\varepsilon_t}(\lambda_t)}\bigr\}_{\langle \Sigma, \delta \rangle \setminus \langle \gamma_2 \rangle}} \\ &= \prod_t \up{g\, \up{X_{\gamma_1}(\mu)\, X_{\gamma_2}(\nu)}{X_\delta(\kappa_t)}}{\bigl\{\up{X_{\gamma_1}(\mu)\, X_{\gamma_2}(\nu)}{\widetilde X_{\varepsilon_t}(\lambda_t)}\bigr\}_{\langle \Sigma, \delta \rangle \setminus \langle \gamma_1, \gamma_2 \rangle}} \\ &= \prod_t \up{g\, [X_{\gamma_1}(\mu), X_{\gamma_2}(\nu)]\, X_{\gamma_2}(\nu)\, \up{X_{\gamma_1}(\mu)}{X_\delta(\kappa_t)}}{\bigl\{\up{X_{\gamma_1}(\mu)}{\widetilde X_{\varepsilon_t}(\lambda_t)}\bigr\}_{\langle \Sigma, \delta \rangle \setminus \langle \gamma_1 \rangle}} \\ &= \up{g\, [X_{\gamma_1}(\mu), X_{\gamma_2}(\nu)]\, X_{\gamma_2}(\nu)}{\bigl\{\up{X_{\gamma_1}(\mu)}{\widetilde X_\beta(\lambda)}\bigr\}_{\Sigma \setminus \langle \gamma_1 \rangle}}, \end{align*} so the maps \(\up{(-)}{\{\widetilde X_\beta(\lambda)\}_\Sigma}\) are well-defined. By construction, they satisfy (Sub).
If \(\dim \Gamma \geq 3\), then it is easy to check that the maps \(\up{(-)}{\{=\}_\Sigma}\) satisfy (Hom) and (Chev). In the case \(\dim \Gamma = 2\) the maps \(\up{(-)}{\{=\}_\Sigma}\) also satisfy (Chev) and factor through the Chevalley commutator formula for \([\widetilde X_\beta(\lambda), \widetilde X_\gamma(\mu)]\) if at least one of \(\beta\), \(\gamma\) is not in \(\mathbb R \Gamma\). Otherwise let again \(\widetilde X_\beta(\lambda) = \prod_t \up{X_\delta(\kappa_t)}{\{\widetilde X_{\varepsilon_t}(\lambda_t)\}}\), so \begin{align*} \up{\up g{\{\widetilde X_\gamma(\mu)\}_\Sigma}}{\bigl(\up g{\{\widetilde X_\beta(\lambda)\}_\Sigma}\bigr)} &= \prod_t \up{\up g{\{\widetilde X_\gamma(\mu)\}_\Sigma}}{\bigl(\up{g\, X_\delta(\kappa_t)}{\{\widetilde X_{\varepsilon_t}(\lambda_t)\}_{\langle \Sigma, \delta \rangle}}\bigr)} \\ &= \prod_t \up{g\, X_\delta(\kappa_t)}{\bigl\{\up{X_\delta(\kappa_t)^{-1}\, X_\gamma(\delta(\mu))\, X_\delta(\kappa_t)}{\widetilde X_{\varepsilon_t}(\lambda_t)}\bigr\}_{\langle \Sigma, \delta \rangle}} \\ &= \prod_t \up g{\bigl\{\up{X_\gamma(\delta(\mu))\, X_\delta(\kappa_t)}{\widetilde X_{\varepsilon_t}(\lambda_t)}\bigr\}_{\langle \Sigma, \delta \rangle}} = \up g{\bigl\{\up{X_\gamma(\delta(\mu))}{\widetilde X_\beta(\lambda)}\bigr\}_\Sigma}. \end{align*}
To sum up, we have the maps \(\up{(-)}{\{=\}_\Sigma}\). We check that they satisfy (XMod) in the form \[\up{\widetilde X_\gamma(\mu)}{\bigl(\up g{\{\widetilde X_\beta(\nu)\}_\Sigma}\bigr)} = \up{X_\gamma(\delta(\mu))\, g}{\{\widetilde X_\beta(\nu)\}_\Sigma}\] by induction on \(\Sigma\). If there is an extreme root \(\gamma \neq \delta \in \Sigma\) non-antiparallel to \(\beta\), then we may apply (Chev) and the induction hypothesis. If \(\Sigma = \langle \gamma \rangle\), then the identity follows from the definition and (*). Finally, let \(\Sigma = \langle \gamma, -\beta \rangle\) be two-dimensional. Then \[\up{\widetilde X_\gamma(\mu)}{\bigl(\up g {\{\widetilde X_\beta(\nu)\}_\Sigma}\bigr)} = \up{\widetilde X_\gamma(\mu)\, \up g{\{\widetilde X_\gamma(\mu)\}_\Sigma^{-1}}}{\bigl(\up g{\bigl\{\up{X_\gamma(\delta(\mu))}{\widetilde X_\beta(\nu)}\bigr\}_\Sigma}\bigr)} = \up{X_\gamma(\delta(\mu))\, g}{\{\widetilde X_\beta(\nu)\}_\Sigma}\] for \(g \in \stunit(R, \Delta; \Sigma \setminus \langle \gamma \rangle)\) by (Chev) and the induction hypothesis.
It remains to check (Conj) in the form \[\up{\widetilde Z_\beta(\mu, \nu)}{\bigl(\up{F_\beta(g)}{\{F_\beta(h)\}_{\pi_\beta^{-1}(\Sigma)}}\bigr)} = \up{F_\beta(\up fg)}{\bigl\{F_\beta\bigl(\up fh\bigr)\bigr\}_{\pi_\beta^{-1}(\Sigma)}}\] for \(f = Z_\beta(\mu, \nu) \in \unit(S, \Theta)\). This is clear for \(\beta \in \mathbb R \alpha\), so we may assume that \(\alpha\) and \(\beta\) are linearly independent. Then \[\up{\widetilde Z_\beta(\mu, \nu)}{\bigl(\up{F_{\langle \alpha, \beta \rangle}(g')}{\{F_{\langle \alpha, \beta \rangle}(h')\}}\bigr)} = \up{F_{\langle \alpha, \beta \rangle}(\up f{g'})}{\bigl\{F_{\langle \alpha, \beta \rangle}\bigl(\up f{h'}\bigr)\bigr\}}\] by (Conj) from the conjugacy calculus with respect to \(H / \alpha\), and any \(\up{F_\beta(g)}{\{F_\beta(h)\}}\) may be expressed in terms of the elements \(\up{F_{\langle \alpha, \beta \rangle}(g')}{\{F_{\langle \alpha, \beta \rangle}(h')\}}\) by proposition \ref{elim-sur}.
Now we have group homomorphisms \(F_\alpha\) and \(G_\alpha\). By proposition \ref{elim-sur}, the map \(F_\alpha\) is surjective, also \(G_\alpha \circ F_\alpha\) is the identity by construction. It follows that these maps are mutually inverse. \end{proof}
\begin{theorem} \label{pres-stu} Let \(\delta \colon (S, \Theta) \to (R, \Delta)\) be a crossed module of odd form rings, where \((R, \Delta)\) has a strong orthogonal hyperbolic family of rank \(\ell \geq 3\). Then \(\overline \stunit(R, \Delta; S, \Theta) \to \stunit(R, \Delta; S, \Theta)\) is an isomorphism. In particular, the relative Steinberg group has the explicit presentation from proposition \ref{identities}. \end{theorem} \begin{proof} Notice that \[u \colon \overline \stunit(R, \Delta; S, \Theta) \to \stunit(R, \Delta; S, \Theta)\] is surjective. Indeed, its image contains all the generators and is invariant under the actions of all \(X_\alpha(\mu) \in \stunit(R, \Delta)\) by proposition \ref{elim-sur}.
Let us construct an action of \(\stunit(R, \Delta)\) on \(G = \overline \stunit(R, \Delta; S, \Theta)\). For any \(\alpha \in \Phi\) an element \(X_\alpha(\mu)\) gives the canonical automorphism of \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \alpha)\), so by theorem \ref{root-elim} it gives an automorphism of \(G\). We have to check that these automorphisms satisfy the Steinberg relations. Clearly, \(X_\alpha(\mu \dotplus \nu)\) gives the composition of the automorphisms associated with \(X_\alpha(\mu)\) and \(X_\alpha(\nu)\). If \(\alpha\) and \(\beta\) are linearly independent roots, then the automorphisms induced by the formal products \([X_\alpha(\mu), X_\beta(\nu)]\) and \(\prod_{\substack{i \alpha + j \beta \in \Phi \\ i, j > 0}} X_{i \alpha + j \beta}(f_{\alpha \beta i j}(\mu, \nu))\) coincide on the image of \(\overline \stunit(R, \Delta; S, \Theta; \Phi / \langle \alpha, \beta \rangle)\), so it remains to apply proposition \ref{elim-sur}.
Now it is easy to construct an \(\stunit(R, \Delta)\)-equivariant homomorphism \[v \colon \stunit(R, \Delta; S, \Theta) \to \overline \stunit(R, \Delta; S, \Theta),\, X_\alpha(\mu) \mapsto X_\alpha(\mu).\] We already know that \(u\) is surjective and clearly \(v \circ u\) is the identity, so \(u\) is an isomorphism. \end{proof}
\section{Doubly laced Steinberg groups} \label{pairs-type}
In this and the next section \(\Phi\) is one of the root systems \(\mathsf B_\ell\), \(\mathsf C_\ell\), \(\mathsf F_4\). In order to define relative Steinberg groups of type \(\Phi\) over commutative rings with respect to Abe's admissible pairs, it is useful to consider Steinberg groups of type \(\Phi\) over pairs \((K, L)\), where \(K\) parametrizes the short root elements and \(L\) parametrizes the long root ones.
Let us say that \((K, L)\) is a \textit{pair of type} \(\mathsf B\) if \begin{itemize} \item \(L\) is a unital commutative ring, \(K\) is an \(L\)-module; \item there is a classical quadratic form \(s \colon K \to L\), i.e. \(s(kl) = s(k) l^2\) and the expression \(s(k \mid k') = s(k + k') - s(k) - s(k')\) is \(L\)-bilinear. \end{itemize}
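For instance, any symmetric \(L\)-bilinear form \(b\) on an \(L\)-module \(K\) yields a pair \((K, L)\) of type \(\mathsf B\) with \(s(k) = b(k, k)\): indeed, \(s(kl) = b(kl, kl) = s(k) l^2\), and \(s(k \mid k') = b(k + k', k + k') - b(k, k) - b(k', k') = 2 b(k, k')\) is \(L\)-bilinear. The simplest case is \(K = L\) with \(s(k) = k^2\).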
Next, \((K, L)\) is a \textit{pair of type} \(\mathsf C\) if \begin{itemize} \item \(K\) is a unital commutative ring; \item \(L\) is an abelian group; \item there are additive maps \(d \colon K \to L\) and \(u \colon L \to K\); \item there is a map \(L \times K \to L,\, (l, k) \mapsto l \cdot k\); \item \((l + l') \cdot k = l \cdot k + l' \cdot k\), \(l \cdot (k + k') = l \cdot k + d(kk' u(l)) + l \cdot k'\); \item \(u(d(k)) = 2k\), \(u(l \cdot k) = u(l) k^2\), \(d(u(l)) = 2l\), \(d(k) \cdot k' = d(k{k'}^2)\); \item \(l \cdot 1 = l\), \((l \cdot k) \cdot k' = l \cdot kk'\). \end{itemize}
Finally, \((K, L)\) is a \textit{pair of type} \(\mathsf F\) if \begin{itemize} \item \(K\) and \(L\) are unital commutative rings; \item there is a unital ring homomorphism \(u \colon L \to K\); \item there are maps \(d \colon K \to L\) and \(s \colon K \to L\); \item \(d(k + k') = d(k) + d(k')\), \(d(u(l)) = 2l\), \(u(d(k)) = 2k\), \(d(k u(l)) = d(k) l\); \item \(s(k + k') = s(k) + d(kk') + s(k')\), \(s(kk') = s(k) s(k')\), \(s(u(l)) = l^2\), \(u(s(k)) = k^2\). \end{itemize}
If \((K, L)\) is a pair of type \(\mathsf C\) or \(\mathsf F\), we have an action \(K \times L \to K,\, (k, l) \mapsto kl = k u(l)\) and a biadditive map \(K \times K \to L, (k, k') \mapsto s(k \mid k') = d(kk')\). If \((K, L)\) is a pair of type \(\mathsf B\) or \(\mathsf F\), then there is a map \(L \times K \to L,\, (l, k) \mapsto l \cdot k = l s(k)\). With respect to these additional operations any pair \((K, L)\) of type \(\mathsf F\) is also a pair of both types \(\mathsf B\) and \(\mathsf C\). For any unital commutative ring \(K\) the pair \((K, K)\) with \(u(k) = k\), \(d(k) = 2k\), \(s(k) = k^2\) is of type \(\mathsf F\).
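Indeed, for the pair \((K, K)\) the type \(\mathsf F\) axioms unwind into elementary identities in \(K\): \[s(k + k') = (k + k')^2 = s(k) + d(kk') + s(k'), \qquad s(kk') = (kk')^2 = s(k)\, s(k'), \qquad d(k u(l)) = 2kl = d(k)\, l,\] while \(d(u(l)) = 2l\), \(u(d(k)) = 2k\), \(s(u(l)) = l^2\), and \(u(s(k)) = k^2\) hold by the definitions of \(u\), \(d\), and \(s\).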
The \textit{Steinberg group} of type \(\Phi\) over a pair \((K, L)\) of the corresponding type is the abstract group \(\stlin(\Phi; K, L)\) with the generators \(x_\alpha(k)\) for short \(\alpha \in \Phi\), \(k \in K\); \(x_\beta(l)\) for long \(\beta \in \Phi\), \(l \in L\); and the relations \begin{align*} x_\alpha(p)\, x_\alpha(q) &= x_\alpha(p + q); \\ [x_\alpha(p), x_\beta(q)] &= 1 \text{ if } \alpha + \beta \notin \Phi \cup \{0\}; \\
[x_\alpha(p), x_\beta(q)] &= x_{\alpha + \beta}(N_{\alpha \beta} pq) \text{ if } \alpha + \beta \in \Phi \text{ and } |\alpha| = |\beta| = |\alpha + \beta|; \\
[x_\alpha(p), x_\beta(q)] &= \textstyle x_{\alpha + \beta}\bigl(\frac{N_{\alpha \beta}}2 s(p \mid q)\bigr) \text{ if } \alpha + \beta \in \Phi \text{ and } |\alpha| = |\beta| < |\alpha + \beta|; \\ [x_\alpha(p), x_\beta(q)] &= x_{\alpha + \beta}(N_{\alpha \beta} pq)\, x_{2\alpha + \beta}(N_{\alpha \beta}^{21} q \cdot p) \text{ if } \alpha + \beta, 2\alpha + \beta \in \Phi;\\ [x_\alpha(p), x_\beta(q)] &= x_{\alpha + \beta}(N_{\alpha \beta} qp)\, x_{\alpha + 2\beta}(N_{\alpha \beta}^{12} p \cdot q) \text{ if } \alpha + \beta, \alpha + 2\beta \in \Phi. \end{align*} Here \(N_{\alpha \beta}\), \(N_{\alpha \beta}^{21}\), \(N_{\alpha \beta}^{12}\) are the ordinary integer structure constants. In the case of the pair \((K, K)\) there is a canonical homomorphism \[\stmap \colon \stlin(\Phi, K) = \stlin(\Phi; K, K) \to \group^{\mathrm{sc}}(\Phi, K)\] to the simply connected Chevalley group over \(K\) of type \(\Phi\).
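For the pair \((K, K)\) the operations unwind as \(s(p \mid q) = d(pq) = 2pq\) and \(q \cdot p = q\, s(p) = q p^2\), so, for instance, the third relation becomes \[[x_\alpha(p), x_\beta(q)] = x_{\alpha + \beta}(N_{\alpha \beta}\, pq);\] this underlies the identification \(\stlin(\Phi, K) = \stlin(\Phi; K, K)\).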
In order to apply the results on odd unitary groups, we need a construction of odd form rings from pairs of type \(\mathsf B\) and \(\mathsf C\). If \((K, L)\) is a pair of type \(\mathsf B\) and \(\ell \geq 0\), then we consider the special odd form ring \((R, \Delta) = \ofaorth(2\ell + 1; K, L)\) with \begin{itemize}
\item \(R = (K \otimes_L K) e_{00} \oplus \bigoplus_{1 \leq |i| \leq \ell} (K e_{i0} \oplus K e_{0i}) \oplus \bigoplus_{1 \leq |i|, |j| \leq \ell} L e_{ij}\); \item \(\inv{x e_{ij}} = x e_{-j, -i}\) for \(i \neq 0\) or \(j \neq 0\), \(\inv{(x \otimes y) e_{00}} = (y \otimes x) e_{00}\); \item \((x e_{ij}) (y e_{kl}) = 0\) for \(j \neq k\); \item \((x e_{ij}) (y e_{jk}) = xy e_{ik}\) for \(j \neq 0\) if at least one of \(i\) and \(k\) is non-zero; \item \((x e_{0j}) (y e_{j0}) = (x \otimes y) e_{00}\) for \(j \neq 0\); \item \((x e_{i0}) (y e_{0j}) = s(x \mid y) e_{ij}\) for \(i, j \neq 0\); \item \((x e_{i0}) ((y \otimes z) e_{00}) = s(x \mid y) z e_{i0}\) for \(i \neq 0\); \item \(((x \otimes y) e_{00}) ((z \otimes w) e_{00}) = (x \otimes s(y \mid z) w) e_{00}\); \item \(\Delta\) is the subgroup of \(\Heis(R)\) generated by \(\phi(S)\), \(((x \otimes y) e_{00}, -(s(x) y \otimes y) e_{00})\), \((x e_{0i}, -s(x) e_{-i, i})\), \((x e_{i0}, 0)\), and \((x e_{ij}, 0)\) for \(i, j \neq 0\). \end{itemize} Clearly, \((R, \Delta)\) has a strong orthogonal hyperbolic family of rank \(\ell\) and the corresponding unitary Steinberg group is naturally isomorphic to \(\stlin(\mathsf B_\ell; K, L)\). The Steinberg relations in \(\stlin(\mathsf B_\ell; K, L)\) and \(\stunit(R, \Delta)\) are the same since \(\ofaorth(2\ell + 1, K, K)\) has the unitary group \(\sorth(2\ell + 1, K) \times (\mathbb Z / 2 \mathbb Z)(K)\) by \cite{ClassicOFA} for every unital commutative ring \(K\). Of course, \(\stlin(\mathsf B_\ell; K, L)\) may also be constructed from the module \(L^\ell \oplus K \oplus L^\ell\) with the quadratic form \(q(x_{-\ell}, \ldots, x_{-1}, k, x_1, \ldots, x_\ell) = \sum_{i = 1}^\ell x_{-i} x_i + s(k)\), but the corresponding odd form rings and unitary groups are not functorial in \((K, L)\).
If \((K, L)\) is a pair of type \(\mathsf C\), then let \((R, \Delta) = \ofasymp(2\ell; K, L)\), where \begin{itemize}
\item \(R = \bigoplus_{1 \leq |i|, |j| \leq \ell} K e_{ij}\); \item \(\inv{x e_{ij}} = \varepsilon_i \varepsilon_j x e_{-j, -i}\), \((x e_{ij}) (y e_{kl}) = 0\) for \(j \neq k\), \((x e_{ij}) (y e_{jl}) = xy e_{il}\), where \(\varepsilon_i = 1\) for \(i > 0\) and \(\varepsilon_i = -1\) for \(i < 0\);
\item \(\Delta = \bigoplus^\cdot_{1 \leq |i|, |j| \leq \ell; i + j > 0} \phi(K e_{ij}) \mathbin{\dot{\oplus}} \bigoplus_{1 \leq |i| \leq \ell}^\cdot L v_i \mathbin{\dot{\oplus}} \bigoplus_{1 \leq |i|, |j| \leq \ell}^\cdot q_i \cdot K e_{ij}\); \item \(x v_i \dotplus y v_i = (x + y) v_i\), \(q_i \cdot x e_{ij} \dotplus q_i \cdot y e_{ij} = q_i \cdot (x + y) e_{ij}\); \item \(\phi(x e_{-i, i}) = d(x) v_i\), \(\pi(x v_i) = 0\), \(\rho(x v_i) = u(x) e_{-i, i}\); \item \((x v_i) \cdot (y e_{jk}) = \dot 0\) for \(i \neq j\), \((x v_i) \cdot (y e_{ik}) = \varepsilon_i \varepsilon_k (x \cdot y) v_k\); \item \(\pi(q_i \cdot x e_{ij}) = x e_{ij}\), \(\rho(q_i \cdot x e_{ij}) = 0\); \item \((q_i \cdot x e_{ij}) \cdot (y e_{kl}) = \dot 0\) for \(j \neq k\), \((q_i \cdot x e_{ij}) \cdot (y e_{jk}) = q_i \cdot xy e_{ik}\). \end{itemize} Again, \((R, \Delta)\) has a strong orthogonal hyperbolic family of rank \(\ell\) and its unitary Steinberg group is naturally isomorphic to \(\stlin(\mathsf C_\ell; K, L)\). The Steinberg relations in these two Steinberg groups coincide since \(\ofasymp(2\ell; K, K)\) is the odd form ring constructed from the split symplectic module over a unital commutative ring \(K\), so its unitary group is \(\symp(2\ell, K)\).
We do not construct an analogue of \(\mathrm G(\mathsf F_4, K)\) for pairs of type \(\mathsf F\) and do not prove that the product map \[\prod_{\text{short } \alpha \in \Pi} K \times \prod_{\text{long } \beta \in \Pi} L \to \stlin(\mathsf F_4; K, L), (p_\alpha)_{\alpha \in \Pi} \mapsto \prod_{\alpha \in \Pi} x_\alpha(p_\alpha)\] is injective for a system of positive roots \(\Pi \subseteq \Phi\). Such claims are not required in the proof of our main result.
\section{Relative doubly laced Steinberg groups}
In the simply-laced case \cite{RelStLin} relative Steinberg groups are parameterized by the root system and a \textit{crossed module of commutative rings} \(\delta \colon \mathfrak a \to K\), where \(K\) is a unital commutative ring, \(\mathfrak a\) is a \(K\)-module, \(\delta\) is a homomorphism of \(K\)-modules, and \(a \delta(a') = \delta(a) a'\) for all \(a, a' \in \mathfrak a\). In the doubly-laced case we may construct semi-abelian categories of pairs of all three types by omitting the unitality conditions in the definitions, but in this approach we have to add the condition that the action in the definition of a crossed module is unital.
Instead we say that \((\mathfrak a, \mathfrak b)\) is a \textit{precrossed module} over a pair \((K, L)\) of a given type if there is a reflexive graph \(((K, L), (K', L'), p_1, p_2, d)\) in the category of pairs of a given type, where \((\mathfrak a, \mathfrak b) = \Ker(p_2)\). This may be written as an explicit family of operations between the sets \(\mathfrak a\), \(\mathfrak b\), \(K\), \(L\) satisfying certain axioms; in particular, \(\delta \colon (\mathfrak a, \mathfrak b) \to (K, L)\) is a pair of homomorphisms of abelian groups induced by \(p_1\). A precrossed module \(\delta \colon (\mathfrak a, \mathfrak b) \to (K, L)\) is called a \textit{crossed module} if the corresponding reflexive graph has the structure of an internal category (necessarily unique); this may be described as additional axioms on the operations (an analogue of the Peiffer identities). It is easy to see that crossed submodules of \(\id \colon (K, L) \to (K, L)\) are precisely Abe's admissible pairs.
For a crossed module \(\delta \colon (\mathfrak a, \mathfrak b) \to (K, L)\) of pairs of a given type the \textit{relative Steinberg group} is \[\stlin(\Phi; K, L; \mathfrak a, \mathfrak b) = \Ker(p_{2*}) / [\Ker(p_{1*}), \Ker(p_{2*})],\] where \(p_{i*} \colon \stlin(\Phi; \mathfrak a \rtimes K, \mathfrak b \rtimes L) \to \stlin(\Phi; K, L)\) are the induced homomorphisms. As in the odd unitary case and the simply laced case, this is the crossed module over \(\stlin(\Phi; K, L)\) with the generators \(x_\alpha(a)\) for short \(\alpha \in \Phi\), \(a \in \mathfrak a\) and \(x_\beta(b)\) for long \(\beta \in \Phi\), \(b \in \mathfrak b\) satisfying the Steinberg relations, \(\delta(x_\alpha(a)) = x_\alpha(\delta(a))\) for any root \(\alpha\) and \(a \in \mathfrak a \cup \mathfrak b\), and \[\up{x_\alpha(p)}{x_\beta(a)} = \prod_{\substack{i \alpha + j \beta \in \Phi \\ i \geq 0, j > 0}} x_{i \alpha + j \beta}(f_{\alpha \beta i j}(p, a))\] for \(\alpha \neq -\beta\), \(p \in K \cup L\), \(a \in \mathfrak a \cup \mathfrak b\), and appropriate expressions \(f_{\alpha \beta i j}\).
If \(\delta \colon (\mathfrak a, \mathfrak b) \to (K, L)\) is a crossed module of pairs of type \(\mathsf C\) and \(\ell \geq 0\), then \(\delta \colon \ofasymp(\ell; \mathfrak a, \mathfrak b) \to \ofasymp(\ell; K, L)\) is a crossed module of odd form rings, where \[\ofasymp(\ell; \mathfrak a, \mathfrak b) = \Ker\bigl(p_2 \colon \ofasymp(\ell; \mathfrak a \rtimes K, \mathfrak b \rtimes L) \to \ofasymp(\ell; K, L)\bigr).\] Clearly,
\[\ofasymp(\ell; \mathfrak a, \mathfrak b) = \Bigl( \bigoplus_{1 \leq |i|, |j| \leq \ell} \mathfrak a e_{ij},
\bigoplus^\cdot_{\substack{1 \leq |i|, |j| \leq \ell \\ i + j > 0}} \phi(\mathfrak a e_{ij}) \mathbin{\dot{\oplus}} \bigoplus^\cdot_{1 \leq |i| \leq \ell} \mathfrak b v_i \mathbin{\dot{\oplus}} \bigoplus^\cdot_{1 \leq |i|, |j| \leq \ell} q_i \cdot \mathfrak a e_{ij} \Bigr),\] so we may apply theorem \ref{pres-stu} for Chevalley groups of type \(\mathsf C_\ell\).
For pairs of type \(\mathsf B\) the construction \(\ofaorth(2\ell + 1; -, =)\) does not preserve fiber products, so we have to modify it a bit. Take a crossed module \(\delta \colon (\mathfrak a, \mathfrak b) \to (K, L)\) of pairs of type \(\mathsf B\) and consider the odd form rings \((T, \Xi) = \ofaorth(2\ell + 1; \mathfrak a \rtimes K, \mathfrak b \rtimes L)\), \((R, \Delta) = \ofaorth(2\ell + 1; K, L)\) forming a reflexive graph. Let \((\widetilde T, \widetilde \Xi)\) be the special odd form factor ring of \((T, \Xi)\) by the odd form ideal \((I, \Gamma)\), where \(I \leq T\) is the subgroup generated by \((a \otimes a' - a \otimes \delta(a')) e_{00}\) and \((a \otimes a' - \delta(a) \otimes a') e_{00}\) for \(a, a' \in \mathfrak a\). It is easy to check that \(I\) is an involution invariant ideal of \(T\), so \((\widetilde T, \widetilde \Xi)\) is well-defined, and the homomorphisms \(p_i \colon (T, \Xi) \to (R, \Delta)\) factor through \((\widetilde T, \widetilde \Xi)\). Moreover, the precrossed module \((S, \Theta) = \Ker\bigl(p_2 \colon (\widetilde T, \widetilde \Xi) \to (R, \Delta)\bigr)\) over \((R, \Delta)\) satisfies the Peiffer relations and
\[S = X e_{00} \oplus \bigoplus_{1 \leq |i| \leq \ell} (\mathfrak a e_{i0} \oplus \mathfrak a e_{0i}) \oplus \bigoplus_{1 \leq |i|, |j| \leq \ell} \mathfrak b e_{ij}\] for some group \(X\), so we may also apply theorem \ref{pres-stu} for Chevalley groups of type \(\mathsf B_\ell\).
If \(\alpha\), \(\beta\), \(\alpha - \beta\) are short roots, then \(V_{\alpha \beta}(\mathfrak a, \mathfrak b)\) denotes the abelian group \(\mathfrak a \mathrm e_\alpha \oplus \mathfrak a \mathrm e_\beta\). The groups \(X_{\alpha - \beta}(K)\) and \(X_{\beta - \alpha}(K)\) naturally act on \(V_{\alpha \beta}(\mathfrak a, \mathfrak b)\) in such a way that the homomorphism \[V_{\alpha \beta}(\mathfrak a, \mathfrak b) \to \stlin(\Phi; \mathfrak a, \mathfrak b), x \mathrm e_\alpha \oplus y \mathrm e_\beta \mapsto X_\alpha(x)\, X_\beta(y)\] is equivariant (in the case of \(\mathsf F_4\) we consider \(X_{\pm(\alpha - \beta)}(K)\) as abstract groups, not as their images in the Steinberg group). Similarly, if \(\alpha\), \(\beta\), \(\alpha - \beta\) are long roots, then \(V_{\alpha \beta}(\mathfrak a, \mathfrak b) = \mathfrak b \mathrm e_\alpha \oplus \mathfrak b \mathrm e_\beta\) is a representation of \(X_{\alpha - \beta}(L)\) and \(X_{\beta - \alpha}(L)\). If \(\alpha\) and \(\beta\) are long and \((\alpha + \beta)/2\) is short, then \(V_{\alpha \beta}(\mathfrak a, \mathfrak b) = \mathfrak b \mathrm e_\alpha \oplus \mathfrak a \mathrm e_{(\alpha + \beta)/2} \oplus \mathfrak b \mathrm e_\beta\) is a representation of \(X_{(\alpha - \beta)/2}(K)\) and \(X_{(\beta - \alpha)/2}(K)\). Finally, if \(\alpha\) and \(\beta\) are short and \(\alpha + \beta\) is long, then \(V_{\alpha \beta}(\mathfrak a, \mathfrak b) = \mathfrak a \mathrm e_\alpha \mathbin{\dot{\oplus}} \mathfrak b \mathrm e_{\alpha + \beta} \mathbin{\dot{\oplus}} \mathfrak a \mathrm e_\beta\) is a \(2\)-step nilpotent group with the group operation \[\textstyle (x \mathrm e_\alpha \mathbin{\dot{\oplus}} y \mathrm e_{\alpha + \beta} \mathbin{\dot{\oplus}} z \mathrm e_\beta) \dotplus (x' \mathrm e_\alpha \mathbin{\dot{\oplus}} y' \mathrm e_{\alpha + \beta} \mathbin{\dot{\oplus}} z' \mathrm e_\beta) = (x + x') \mathrm e_\alpha \mathbin{\dot{\oplus}} \bigl(y - \frac{N_{\alpha \beta}}2 s(z \mid x') + y'\bigr) \mathrm e_{\alpha + \beta} \mathbin{\dot{\oplus}} (z + z') \mathrm e_\beta\] and the action of \(X_{\alpha - \beta}(L)\) and \(X_{\beta - \alpha}(L)\) such that the homomorphism \[V_{\alpha \beta}(\mathfrak a, \mathfrak b) \to \stlin(\Phi; \mathfrak a, \mathfrak b), x \mathrm e_\alpha \mathbin{\dot{\oplus}} y \mathrm e_{\alpha + \beta} \mathbin{\dot{\oplus}} z \mathrm e_\beta \mapsto X_\alpha(x)\, X_{\alpha + \beta}(y)\, X_\beta(z)\] is equivariant.
We are ready to construct a presentation of \(G = \stlin(\Phi; K, L; \mathfrak a, \mathfrak b)\). Let \(Z_\alpha(x, p) = \up{X_{-\alpha}(p)}{X_\alpha(x)} \in G\) for \(x \in \mathfrak a\), \(p \in K\) if \(\alpha\) is short, and \(x \in \mathfrak b\), \(p \in L\) if \(\alpha\) is long. If \(V_{\alpha \beta}(K, L)\) is defined, then there are also natural elements \(Z_{\alpha \beta}(u, s) \in G\) for \(u \in V_{\alpha \beta}(\mathfrak a, \mathfrak b)\) and \(s \in V_{-\alpha, -\beta}(K, L)\).
\begin{theorem}\label{pres-stphi} Let \(\Phi\) be one of the root systems \(\mathsf B_\ell\), \(\mathsf C_\ell\), or \(\mathsf F_4\) for \(\ell \geq 3\); \((K, L)\) be a pair of the corresponding type; \((\mathfrak a, \mathfrak b)\) be a crossed module over \((K, L)\). Then \(\stlin(\Phi; K, L; \mathfrak a, \mathfrak b)\) as an abstract group is generated by \(Z_\alpha(x, p)\) and \(Z_{\alpha \beta}(u, s)\) with the relations \begin{itemize} \item[(Sym)]
\(Z_{\alpha \beta}(u, s) = Z_{\beta \alpha}(u, s)\) if we identify \(V_{\alpha \beta}(\mathfrak a, \mathfrak b)\) with \(V_{\beta \alpha}(\mathfrak a, \mathfrak b)\); \item[(Add)]
\begin{enumerate}
\item \(Z_\alpha(x, p)\, Z_\alpha(y, p) = Z_\alpha(x + y, p)\),
\item \(Z_{\alpha \beta}(u, s)\, Z_{\alpha \beta}(v, s) = Z_{\alpha \beta}(u \dotplus v, s)\);
\end{enumerate} \item[(Comm)]
\begin{enumerate}
\item \([Z_\alpha(x, p), Z_\beta(y, q)] = 1\) for \(\alpha \perp \beta\) and \(\alpha + \beta \notin \Phi\),
\item \(\up{Z_\gamma(x, p)}{Z_{\alpha \beta}(u, s)} = Z_{\alpha \beta}\bigl(Z_\gamma(\delta(x), p) u, Z_\gamma(\delta(x), p) s\bigr)\) for \(\gamma \in \mathbb R (\alpha - \beta)\);
\end{enumerate} \item[(Simp)] \(Z_\alpha(x, p) = Z_{\alpha \beta}(x \mathrm e_\alpha, p \mathrm e_{-\alpha})\); \item[(HW)]
\begin{enumerate}
\item \(Z_{\alpha, \alpha + \beta}\bigl(X_{-\beta}(r)\, x \mathrm e_{\alpha + \beta}, u\bigr) = Z_{\alpha + \beta, \beta}\bigl(X_{-\alpha}(p)\, x \mathrm e_{\alpha + \beta}, X_{-\alpha}(p) v\bigr)\), where \(\alpha\), \(\beta\) is a basis of a root subsystem of type \(\mathsf A_2\), \(u = p \mathrm e_{-\alpha} \oplus q \mathrm e_{-\alpha - \beta}\), \(v = q \mathrm e_{-\alpha - \beta} \oplus r \mathrm e_{-\beta}\),
\item \(Z_{\alpha, \alpha + \beta}\bigl(X_{-\beta}(s) u, v\bigr) = Z_{2 \alpha + \beta, \beta}\bigl(X_{-\alpha}(p) u, X_{-\alpha}(p) w\bigr)\), where \(\alpha\), \(\beta\) is a basis of a root subsystem of type \(\mathsf B_2\) and \(\alpha\) is short, \(u = x \mathrm e_{2\alpha + \beta} \mathbin{\dot{\oplus}} y \mathrm e_{\alpha + \beta}\), \(v = p \mathrm e_{-\alpha} \mathbin{\dot{\oplus}} q \mathrm e_{-2 \alpha - \beta} \mathbin{\dot{\oplus}} r \mathrm e_{-\alpha - \beta}\), \(w = q \mathrm e_{-2\alpha - \beta} \oplus r \mathrm e_{-\alpha - \beta} \oplus s \mathrm e_{-\beta}\);
\end{enumerate} \item[(Delta)] \(Z_\alpha(x, \delta(y) + p) = \up{Z_{-\alpha}(y, 0)}{Z_\alpha(x, p)}\). \end{itemize} \end{theorem} \begin{proof} If \(\Phi\) is of type \(\mathsf B_\ell\) or \(\mathsf C_\ell\), then the claim directly follows from theorem \ref{pres-stu}, so we may assume that \(\Phi\) is of type \(\mathsf F_4\). Let \(G\) be the group with the presentation from the statement; it is generated by the elements \(Z_\alpha(a, p)\) satisfying only the relations involving roots from root subsystems of rank \(2\). First of all, we construct a natural action of \(\stlin(\Phi; K, L)\) on \(G\). Notice that any three-dimensional root subsystem \(\Psi \subseteq \Phi\) such that \(\Psi = \mathbb R \Psi \cap \Phi\) is of type \(\mathsf B_3\), \(\mathsf C_3\), or \(\mathsf A_1 \times \mathsf A_2\).
Let \(g = X_\alpha(a)\) be a root element in \(\stlin(\Phi; K, L)\) and \(h = Z_\beta(b, p)\) be a generator of \(G\). If \(\alpha\) and \(\beta\) are linearly independent, then \(\up gh\) may be defined directly as \(h\) itself or an appropriate \(Z_{\gamma_1 \gamma_2}(u, s)\). Otherwise we take a root subsystem \(\alpha \in \Psi \subseteq \Phi\) of rank \(2\) of type \(\mathsf A_2\) or \(\mathsf B_2\), express \(h\) in terms of \(Z_\gamma(c, q)\) for \(\gamma \in \Psi \setminus \mathbb R \alpha\), and apply the above construction for \(\up g{Z_\gamma(c, q)}\). The resulting element of \(G\) is independent of the choices of \(\Psi\) and the decomposition of \(h\) since we already know the theorem in the cases \(\mathsf B_3\) and \(\mathsf C_3\).
We have to check that \(\up g{(-)}\) preserves the relations between the generators. Let \(\Psi\) be the intersection of \(\Phi\) with the span of the roots in a relation, \(\Psi' = (\mathbb R \Psi + \mathbb R \alpha) \cap \Phi\). Consider the possible cases: \begin{itemize} \item If \(\Psi'\) is of rank \(< 3\) or has one of the types \(\mathsf B_3\), \(\mathsf C_3\), then the result follows from the cases \(\mathsf B_3\) and \(\mathsf C_3\) of the theorem. \item If \(\Psi'\) is of type \(\mathsf A_1 \times \mathsf A_2\) and \(\alpha\) lies in the first component (so \(\Psi\) is the second component), then \(g\) acts trivially on the generators with the roots from \(\Psi\) and there is nothing to prove. \item If \(\Psi'\) is of type \(\mathsf A_1 \times \mathsf A_2\), \(\alpha\) lies in the second component, and \(\Psi\) is of type \(\mathsf A_1 \times \mathsf A_1\), then we have to check that \(\bigl[\up g{Z_\beta(b, p)}, \up g{Z_\gamma(c, q)}\bigr] = 1\), where \(\beta\) lies in the first component and \(\gamma\) lies in the second component. But \(\up g{Z_\beta(b, p)} = Z_\beta(b, p)\) commutes with \(\up g{Z_\gamma(c, q)}\) since the latter is a product of various \(Z_{\gamma'}(c', q')\) with \(\gamma'\) from the second component of \(\Psi'\). \end{itemize}
Now let us check that the resulting automorphisms of \(G\) corresponding to root elements satisfy the Steinberg relations when applied to a fixed \(Z_\beta(b, p)\). Let \(\Psi\) be the intersection of \(\Phi\) with the span of the roots in such a relation and \(\Psi' = (\mathbb R \Psi + \mathbb R \beta) \cap \Phi\). There are the following cases: \begin{itemize} \item If \(\Psi'\) is of rank \(< 3\) or has one of the types \(\mathsf B_3\), \(\mathsf C_3\), then the result follows from the cases \(\mathsf B_3\) and \(\mathsf C_3\) of the theorem. \item If \(\Psi'\) is of type \(\mathsf A_1 \times \mathsf A_2\) and \(\beta\) lies in the first component (so \(\Psi\) is the second component), then both sides of the relation trivially act on \(Z_\beta(b, p)\) and there is nothing to prove. \item If \(\Psi'\) is of type \(\mathsf A_1 \times \mathsf A_2\), \(\beta\) lies in the second component, and \(\Psi\) is of type \(\mathsf A_1 \times \mathsf A_1\), then we have to check that \(\up{X_\alpha(a)\, X_\gamma(c)}{Z_\beta(b, p)} = \up{X_\gamma(c)\, X_\alpha(a)}{Z_\beta(b, p)}\), where \(\alpha\) lies in the first component and \(\gamma\) lies in the second component. But both sides coincide with \(\up{X_\gamma(c)}{Z_\beta(b, p)}\), which is a product of various \(Z_{\beta'}(b', p')\) with \(\beta'\) from the second component of \(\Psi'\). \end{itemize}
Now consider the homomorphism \[u \colon G \to \stlin(\Phi; K, L; \mathfrak a, \mathfrak b), Z_\alpha(a, p) \mapsto \up{X_{-\alpha}(p)}{X_\alpha(a)}.\] By construction, it is \(\stlin(\Phi; K, L)\)-equivariant, so it is surjective. The \(\stlin(\Phi; K, L)\)-equivariant homomorphism \[v \colon \stlin(\Phi; K, L; \mathfrak a, \mathfrak b) \to G, X_\alpha(a) \mapsto X_\alpha(a)\] is clearly well-defined. It remains to notice that \(v \circ u\) is the identity. \end{proof}
\end{document} |
\begin{document}
\title[On Non-compact Heegaard Splittings]{On Non-compact Heegaard Splittings}
\author{Scott Taylor}
\address{Mathematics Department, University of California, Santa Barbara}
\email{[email protected]}
\maketitle
\begin{abstract} A Heegaard splitting of an open 3-manifold is the partition of the manifold into two non-compact handlebodies which intersect on their common boundary. This paper proves several non-compact analogues of theorems about compact Heegaard splittings. The main theorem is: if $N$ is a compact, connected, orientable 3-manifold with non-empty boundary, no component of which is $S^2$, and if $M$ is obtained from $N$ by removing the boundary, then any two Heegaard splittings of $M$ are properly ambient isotopic. This is a non-compact analogue of the classifications of splittings of $\text{(closed surface)} \times I$ and $\text{(closed surface)} \times S^1$ by Scharlemann-Thompson and Schultens. Work of Frohman-Meeks and a non-compact analogue of the Casson-Gordon theorem on weakly reducible Heegaard splittings are key tools.
\end{abstract}
\section{Introduction}
Non-compact 3-manifolds vary widely in the degree to which they are similar to compact 3-manifolds. The most tractable are the deleted boundary 3-manifolds, which are obtained by removing boundary components from compact 3-manifolds. At the other end of the spectrum are those which are not connected sums of prime 3-manifolds. Heegaard splittings play an important role in compact 3-manifolds, so it is interesting to ask about the ways in which this structure can be extended to non-compact manifolds. This paper studies Heegaard splittings of an important class of 3-manifolds: eventually end-irreducible 3-manifolds. One of the main results is that eventually end-irreducible 3-manifolds have exhausting sequences which interact nicely with a given Heegaard splitting. This result is applied to classify Heegaard splittings of ``most" deleted boundary 3-manifolds. The only deleted boundary 3-manifolds whose splittings cannot be classified are those obtained from a compact 3-manifold by removing finitely many closed 3-balls. \newline
The study of non-compact Heegaard splittings was initiated by Frohman and Meeks in \cite{FrMe97}. They showed that any two infinite genus Heegaard surfaces in ${\mathbb R}^3$ are properly ambient isotopic and used this result to give a topological classification of complete 1-ended minimal surfaces in ${\mathbb R}^3$. In this paper, Frohman and Meeks' result is extended to non-compact 3-manifolds which are obtained by removing boundary components from a compact 3-manifold. The hope is that a better understanding of the Heegaard splittings of these ``household name" manifolds will aid in understanding how these manifolds compare to their more exotic cousins. As Heegaard splittings lift to covering spaces, there may also be some hope of using these results in the study of compact Heegaard splittings. We will eventually need to distinguish between two types of Heegaard splittings: relative and absolute, but for the initial statement of our results, we make the following preliminary definitions.
\begin{definition} Let $\mc{H}$ be the disjoint union of finitely or countably many 3-balls. A \defn{handlebody} $H$ is formed by attaching finitely or infinitely many 1-handles $D^2 \times I$ to $\mc{H}$ so that each component of $D^2 \times \partial I$ is attached to the boundary of a 3-ball. We allow only finitely many 1-handles to be attached to each component of $\mc{H}$. \end{definition}
Handlebodies are characterized by the existence of a properly embedded collection of pairwise disjoint discs with boundary on $\partial H$ which cut $H$ into 3-balls. We will use the existence of these discs in many of the arguments in this paper.
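Concretely, if $H$ is built as in the definition above, the co-core discs $D^2 \times \{1/2\}$ of the 1-handles form such a collection: they are properly embedded and pairwise disjoint, and cutting $H$ along all of them leaves the 3-balls of $\mc{H}$ with collars attached, which are again 3-balls.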
\begin{definition} A \defn{Heegaard surface} in a 3-manifold $M$ with empty boundary is a properly embedded surface $S \subset M$ such that the closure of the complement of $S$ in $M$ consists of two handlebodies $U$ and $V$ with $S = \partial U = \partial V$. We say that $M = U \cup_S V$ is a \defn{Heegaard splitting} of $M$. \end{definition}
\begin{remark} If $M$ has compact boundary, a Heegaard splitting will be a division of $M$ into two \defn{compressionbodies}. We defer the definition of ``compressionbody" until after we have stated the main theorems of the paper. \end{remark}
As in the theory of compact Heegaard splittings, stabilizations of Heegaard splittings play an important role. The following are the most basic definitions. More details will be given in Section \ref{definitions} and Section \ref{Exh. Seq.}.
\begin{definition} A Heegaard splitting $M = U \cup_S V$ is \defn{stabilized} if there is an embedded 3-ball $B$ in $M$ such that $S \cap B$ is a properly embedded unknotted once-punctured torus. The ball $B$ is a \defn{reducing ball} for $S$. \end{definition}
\begin{definition} A non-compact Heegaard splitting $M = U \cup_S V$ is \defn{end-stabilized} if for every compact set $C \subset M$ and for every non-compact component $W$ of the closure of the complement of $C$ in $M$ there is a reducing ball for $S$ which is contained in $W$. \end{definition}
\subsection{Main Results} We may now give simplified versions of the main results of this paper. Throughout the paper we assume that $M$ is a non-compact orientable 3-manifold with no $S^2$ boundary components. The first result is an elementary extension of a theorem of Frohman and Meeks \cite{FrMe97}. It may be viewed as an analogue of the Reidemeister-Singer theorem for compact manifolds which says that, after some finite number of stabilizations, any two Heegaard splittings with the same partition of the boundary are ambient isotopic. The proof of this extension is contained in the Appendix.
\begin{thmA} Any two end-stabilized Heegaard splittings of $M$ which have the same partition of $\partial M$ are properly ambient isotopic. \end{thmA}
A (compact) Heegaard splitting is called \defn{weakly reducible} if there are disjoint compressing discs for the Heegaard surface which are contained in different compressionbodies. In \cite{CaGo87}, Casson and Gordon prove that if a closed 3-manifold has a Heegaard splitting which is weakly reducible but not reducible then the manifold contains an incompressible surface of positive genus. Since every non-compact Heegaard splitting (of non-zero genus) is weakly reducible, we cannot hope for a direct extension of the Casson and Gordon theorem to non-compact 3-manifolds. The following result can, however, be viewed as a partial extension to the non-compact case. The proof is based on a proof of the Casson-Gordon theorem. Here is a simplified statement of the result.
\begin{definition} A non-compact 3-manifold $M$ is \defn{end-irreducible} (rel $C$) for $C$ a compact subset if there is an exhausting sequence $\{K_i\}$ of nested compact connected submanifolds whose union is $M$ such that $C \subset \operatorname{int}(K_1)$ and the frontier of each $K_i$ is incompressible in $M - C$. The sequence $\{K_i\}$ is called a \defn{frontier-incompressible} (rel $C$) exhausting sequence for $M$. \end{definition}
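For instance, if $F$ is a closed orientable surface of positive genus, then $M = F \times {\mathbb R}_+$ is end-irreducible (rel $F \times \{0\}$): the submanifolds $K_i = F \times [0,i]$ form an exhausting sequence and each frontier $F \times \{i\}$ is incompressible in $M - (F \times \{0\})$, as it is $\pi_1$-injective there.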
\begin{thmB} Suppose that $M$ is end-irreducible (rel $C$) for some compact set $C$ containing $\partial M$. Let $M = U \cup_S V$ be a Heegaard splitting of $M$. Then there is a frontier-incompressible (rel $C$) exhausting sequence where each compact submanifold in the sequence ``inherits" a Heegaard splitting from $S$. \end{thmB}
\begin{remark} The notion of a compact submanifold ``inheriting" a Heegaard splitting from the non-compact 3-manifold $M$ will be made precise by using the idea of a ``relative Heegaard splitting". \end{remark}
As an application of these two theorems we can classify Heegaard splittings of ``most" deleted boundary 3-manifolds. Here is an example:
\begin{thmC} Let $\ob{M}$ be a compact, orientable 3-manifold with non-empty boundary, no component of which is a sphere. Let $M$ be a 3-manifold obtained from $\ob{M}$ by removing one or more boundary components of $\ob{M}$. Then any two Heegaard splittings of $M$ which have the same partition of $\partial M$ are equivalent up to proper ambient isotopy. In particular, every Heegaard splitting of $M$ is of infinite genus and is end-stabilized. \end{thmC}
\subsection{Outline}
Section \ref{examples} contains several examples of non-compact Heegaard splittings and proves that the inclusion of a Heegaard surface into a non-compact 3-manifold induces a homeomorphism of ends. \newline
Sections \ref{slide-moves} and \ref{rel HS} provide preliminary work that is necessary for understanding the theorems which are the detailed versions of Theorems A, B, and C. Section \ref{slide-moves} defines and studies handle-slides of boundary-reducing discs in compressionbodies. Section \ref{rel HS} introduces a certain type of submanifold (due to Frohman-Meeks) which is ``balanced" on a non-compact Heegaard surface. We discuss the type of Heegaard splittings (called ``relative Heegaard splittings") which these submanifolds inherit from the splitting of the manifold. Both balanced submanifolds and relative Heegaard splittings are central in the work of Frohman and Meeks. \newline
Section \ref{Exh. Seq.} proves a more detailed version of Theorem B. The proof is based on the outline of Casson and Gordon's theorem given in \cite{Sc02}. \newline
Theorem C is proved in Section \ref{Deleted Boundary}. The key ideas are applications of Theorem A, Theorem B, and the classification of Heegaard splittings of $\text{(closed surface)} \times I$ by Scharlemann and Thompson \cite{ScTh93}. \newline
The Appendix proves Theorem A. This is a slight extension of Frohman and Meeks' \cite[Theorem 2.1]{FrMe97} to manifolds with more than one end. We also correct a misstatement\footnote{The error occurs in the last sentence of Prop. 2.2. After including the collars of $J_i - L_i$ and $L_i - J_i$ you have arranged for $K_i$ to have a relative Heegaard splitting, but $K_{i+1} - K_i$ may not. For example, the frontier of $K_i \cap H_1$ may not be incompressible in $H_1 \cap \operatorname{cl}(K_{i+1} - K_i)$. This error affects the proof of Proposition 2.3.} in the proof of that theorem. The correction is not difficult, but does require some work.
\subsection{History}\label{history} Scharlemann, in his survey paper \cite{Sc02}, gives an overview of the history of Heegaard splittings of compact 3-manifolds. We rely on that paper for our historical treatment of compact Heegaard splittings. \newline
Very few types of compact 3-manifolds are known to have unique Heegaard splittings of a given genus and partition of the boundary. Waldhausen \cite{Wa68b} proved that Heegaard splittings of a given genus of $S^3$ and of $S^2 \times S^1$ are unique up to ambient isotopy. Scharlemann and Thompson in \cite{ScTh94a} have provided another proof of the classification for $S^3$. Bonahon and Otal \cite{BoOt83} proved that lens spaces have unique splittings. The 3-torus and $T^2 \times I$ also have unique splittings of a given genus and boundary partition. This was proved by Boileau and Otal \cite{BoiOt90}. Scharlemann and Thompson proved in \cite{ScTh93} that any two Heegaard splittings of $\text{(closed surface)} \times I$ with the same genus and partition of the boundary are isotopic. In particular, any Heegaard splitting of $\text{(closed surface)} \times I$ is stabilized if it is of genus more than twice the genus of the surface and both boundary components are contained in a single compressionbody. We will use their classification in Section \ref{Deleted Boundary}. Schultens in \cite{Sch93} classified splittings of $\text{(compact surface)} \times S^1$. She proved that any two splittings of $\text{(closed surface)} \times S^1$ of the same genus are isotopic. The uniqueness of splittings for $\text{(closed surface)} \times I$ and $\text{(closed surface)} \times S^1$ provided the inspiration for Theorem C of this paper. \newline
As previously mentioned, Frohman and Meeks \cite{FrMe97} in their work on Heegaard splittings of ${\mathbb R}^3$ defined the notion of noncompact Heegaard splitting used in this paper. Pitts and Rubinstein \cite{PiRu86} have also considered Heegaard splittings of non-compact 3-manifolds. They, however, consider only deleted boundary 3-manifolds and compact Heegaard surfaces which split the manifold into two ``hollow handlebodies". For Pitts and Rubinstein, a hollow handlebody is simply a compact compressionbody with $\partial_-$ removed. Frohman and Meeks also use the term ``hollow handlebody", but they refer to what we call ``relative compressionbodies". The term ``relative compressionbody" was introduced by Canary and McCullough \cite{CaMc04}. Other authors have picked this term up and it has become standard. This paper uses ``relative compressionbody"; this will, hopefully, avoid confusion with Pitts and Rubinstein's use of ``hollow handlebody". Both Frohman-Meeks and Pitts-Rubinstein use Heegaard surfaces in non-compact 3-manifolds to study minimal surfaces from a topological point of view. The main appearances of non-compact Heegaard splittings and non-compact handlebodies have been in minimal surface theory, for example \cite{Fr90,FrMe97, F92, MR05}.\newline
This paper focuses on ``eventually end-irreducible" 3-manifolds. These manifolds were first studied by Brown and Tucker \cite{BrTu74}. They are an important class of 3-manifolds since some questions about arbitrary non-compact 3-manifolds can be reduced to questions about eventually end-irreducible 3-manifolds \cite{BrTh87b}.
\subsection{Definitions}\label{definitions} \subsubsection*{Notation}
If $X$ is a subcomplex of a complex $Y$, then $\eta(X)$ denotes a closed regular neighborhood of $X$ in $Y$. The term ``submanifold" will be reserved for codimension 0 submanifolds. If $X$ is a submanifold of a manifold $Y$ then $\operatorname{cl}(X)$ indicates the closure of $X$ in $Y$ and $\operatorname{int}(X)$ indicates the interior of $X$ in $Y$. The number of components of a complex $X$ is denoted $|X|$. The spaces $[0,1]$ and $[0,\infty)$ are denoted by $I$ and ${\mathbb R}_+$ respectively. ${\mathbb R}^n$ denotes $n$-dimensional Euclidean space and $S^n$ denotes the sphere of dimension $n$. The closed unit disc in ${\mathbb R}^2$ is denoted by $D^2$. The integers and natural numbers are indicated by $\mathbb Z$ and $\mathbb N$ respectively. All homology groups use $\mathbb Z$ coefficients.
\subsubsection*{3-Manifold Topology} We work in the PL category and use, with a few exceptions, standard terminology from 3-manifold theory (see \cite{He04,Ja80}). All 3-manifolds and surfaces are assumed to be orientable. A map $\rho: X \rightarrow Y$ between complexes is \defn{proper} if the preimage of each compact set is compact. If $X$ is a surface and $Y$ is a 3-manifold, $\rho$ is a \defn{proper embedding} if, in addition to being proper and an embedding, $\rho^{-1}(\partial Y) = \partial X$. To say that a graph is properly embedded in a 3-manifold means that the inclusion map is proper and an embedding. The vagaries of language being what they are, the terminology for surfaces and graphs in 3-manifolds is different, but this should not cause confusion in practice. A homotopy $\rho: X \times I \rightarrow Y$ is \defn{proper} if it is proper as a map. If $X$ is a surface and $Y$ is a 3-manifold we also require that $\rho^{-1}(\partial Y) = \partial(X \times I)$. The homotopy $\rho$ is \defn{ambient} if $X \subset Y$ and there is an extension of $\rho$ to a proper homotopy $\rho: Y \times I \rightarrow Y$. An isotopy $\rho: X \times I \rightarrow Y$ is a homotopy where for each $t \in I$, $\rho(\cdot,t): X \rightarrow Y$ is an embedding. An ambient isotopy $\rho: Y \times I \rightarrow Y$ is required to be a homeomorphism at each time $t \in I$. To say that a homotopy $\rho$ is \defn{fixed} on a set $C$ means that $\rho$ restricted to $C \times \{t\}$ is the identity map for all $t \in I$. \newline
A loop on a surface is \defn{essential} if it is not null-homotopic. If the loop is embedded, this is equivalent to saying that it does not bound a disc on the surface. An embedded 2-sphere in a 3-manifold is \defn{essential} if it does not bound a 3-ball. A \defn{compressing disc} for a surface $F$ in a 3-manifold is an embedded disc $D$ for which $D \cap F = \partial D$ and $\partial D$ is an essential loop on $F$. A surface $F$ properly embedded in $M$ is \defn{incompressible} if there are no compressing discs for $F$ in $M$. By the loop theorem and Dehn's lemma this is equivalent (since, for us, all surfaces and manifolds are orientable) to the inclusion map of $F$ into $M$ inducing an injection of fundamental groups. Note that our definition considers an inessential 2-sphere to be an incompressible surface. This is slightly non-standard, but it makes the statements and proofs of some of the results easier. We will emphasize places where this observation matters. \newline
If $S \subset M$ is a surface embedded in a 3-manifold and if $\Delta$ is the union of pairwise disjoint compressing discs for $S$ then $\sigma(S;\Delta)$ will denote the surface obtained from $S$ by compressing along $\Delta$. If $R \subset S$ is a properly embedded subsurface (i.e. $\operatorname{cl}(R) = R$) with each component of $\partial R$ either contained in or disjoint from $\partial \Delta$ then $\sigma(R;\Delta)$ will denote the surface obtained from $R$ by compressing along those discs of $\Delta$ with boundary in $R$. If $S \subset \partial M$ then the manifold obtained by boundary-reducing $M$ along $\Delta$ is denoted $\sigma(M;\Delta)$. As it will always be clear when we have a surface and when we have a 3-manifold, this should not cause confusion. \newline
A manifold (2 or 3 dimensional) is \defn{open} if it is non-compact and without boundary. It is \defn{closed} if it is compact and without boundary. A 3-manifold is \defn{irreducible} if every embedded 2-sphere bounds an embedded 3-ball. As much as possible, we do not assume irreducibility. A submanifold of a 3-manifold is a \defn{product region} if it is homeomorphic to $F \times I$ where $F$ is a surface. A \defn{fiber} of $F \times I$ is $\{x\} \times I$ where $x \in F$. A set $X \subset F \times I$ is \defn{vertical} if it is the union of fibers.
\subsubsection*{Heegaard Splittings} For background on Heegaard splittings of compact 3-manifolds the reader is referred to the survey article \cite{Sc02}. Since we are interested in splittings of non-compact 3-manifolds, some of our definitions differ from conventions in the compact setting. Let $F$ be either a compact, orientable surface (possibly disconnected) or the empty set. A \defn{compressionbody} $H$ is formed by taking the disjoint union of $F \times I$ and countably (finitely or infinitely) many disjoint 3-balls and then attaching 1-handles. 1-handles are attached to $F \times I$ on the interior of $F \times \{1\}$ and to the boundaries of the 3-balls. Only finitely many 1-handles are to be attached to each 3-ball and only finitely many may be attached to $F \times \{1\}$. We usually require that the result be connected. The surface $F \times \{0\}$ is denoted $\partial_- H$ and the surface $\operatorname{cl}(\partial H - \partial_- H)$ is denoted $\partial_+ H$ and is called the \defn{preferred surface} of $H$. If $F$ is a closed surface then $H$ is an \defn{absolute compressionbody}; if $F$ has non-empty boundary then $H$ is a \defn{relative compressionbody}. If $H = F \times I$ then $H$ is a \defn{trivial compressionbody}. If $F$ is empty, then $H$ is a \defn{handlebody}. We will generally require that $F$ contain no $S^2$ components, as then $H$ is irreducible. At one point, in section \ref{Exh. Seq.} we will need to allow $S^2$ components. This will be explicitly pointed out. A \defn{subcompressionbody} $A$ of $H$ is a submanifold of $H$ whose frontier in $H$ consists of properly embedded discs. (We do not require these discs to be essential. Thus, for example, upper half space, which is a handlebody, has an exhausting sequence consisting of subcompressionbodies.) We denote $\partial A \cap \partial_+ H$ by $\partial_{\partial_+ H} A$. There is a proper strong deformation retraction of a compressionbody $H$ onto $\partial_- H \cup \Sigma$ where $\Sigma$ is a properly embedded graph in $H$ attached at valence one vertices to $\partial_- H$. $\Sigma \cup \partial_- H$ is called the \defn{spine} of the compressionbody. \newline
A compressionbody $H$ is determined by a properly embedded collection of disjoint discs $\Delta$ with $\partial \Delta \subset \partial_+ H$ such that $\sigma(H;\Delta)$ consists of 3-balls and $\partial_- H \times I$. Such a collection of discs is called a \defn{defining set of discs}. A defining set of discs may contain discs which are not compressing discs. An example is the handlebody which is a closed regular neighborhood of the positive $z$-axis in ${\mathbb R}^3$. Since its boundary is homeomorphic to ${\mathbb R}^2$ there are no compressing discs, but there is clearly a defining set of discs. A properly embedded collection of disjoint discs in $H$ with boundary on $\partial_+ H$ will be called a \defn{disc set} for $H$ or for $\partial_+ H$. A disc set $\delta$ is \defn{collaring} if the union of some components of $\sigma(H;\delta)$ is $\partial_- H \times I$.\newline
A \defn{Heegaard splitting} of a 3-manifold $M$ is a decomposition of $M$ into two compressionbodies $U$ and $V$ glued along $\partial_+ U = \partial_+ V = S$. If $U$ and $V$ are absolute compressionbodies the splitting is an \defn{absolute Heegaard splitting}. If $U$ and $V$ are relative compressionbodies the splitting is a \defn{relative Heegaard splitting}. The surface $S$ is called the \defn{Heegaard surface}. We write $M = U \cup_S V$. If the term ``Heegaard splitting" is used without either the adjective ``absolute" or ``relative", we will mean ``absolute Heegaard splitting". Usually, relative Heegaard splittings will be of compact submanifolds of a non-compact 3-manifold. \newline
A Heegaard splitting of a manifold $M = U \cup_S V$ is \defn{reducible} if there is an essential simple closed curve on $S$ which bounds embedded discs in $U$ and $V$. To \defn{stabilize} a Heegaard surface, push the interior of an embedded arc on the surface into one of the compressionbodies and include a regular neighborhood of the arc into the other compressionbody. A Heegaard splitting has been stabilized if there is, in $M$, an embedded 3-ball which intersects the Heegaard surface in a properly embedded, unknotted, once-punctured torus. Such a ball is called a \defn{reducing ball}. If a splitting other than the genus 1 splitting of $S^3$ is stabilized, it is reducible. A Heegaard splitting is \defn{weakly reducible} (see \cite{CaGo87}) if there are disjoint compressing discs for the Heegaard surface, one in each compressionbody, which have boundaries that are essential on the Heegaard surface. Any Heegaard splitting, other than the genus 0 splitting of ${\mathbb R}^3$, of a non-compact 3-manifold must have a properly embedded infinite collection of discs in each compressionbody whose boundaries are essential on the Heegaard surface. Thus, apart from the genus 0 splitting of ${\mathbb R}^3$, every non-compact Heegaard splitting is weakly reducible. A Heegaard splitting $M = U \cup_S V$ is \defn{end-stabilized} if for every compact set $C \subset M$ and every non-compact component $W$ of $\operatorname{cl}(M - C)$ there is a reducing ball for $S$ entirely contained in $W$.
\subsubsection*{Non-Compact 3-Manifolds} An \defn{exhausting sequence} for a non-compact 3-manifold $M$ is a sequence $\{K_i\}$ of compact, connected 3-submanifolds such that $K_i \subset \operatorname{int}(K_{i+1})$ and $M = \cup_i K_i$. A 3-manifold $M$ is \defn{end-irreducible (rel $C$)} for a compact subset $C$ if there is an exhausting sequence for $M$ such that the frontier of each element of the exhausting sequence is incompressible in $M - C$. If $C$ can be taken to be the empty set, then $M$ is simply \defn{end-irreducible}. If $M$ is end-irreducible (rel $C$) for some $C$ then $M$ is \defn{eventually end-irreducible}. If a non-compact 3-manifold is obtained by removing at least one boundary component from a compact 3-manifold then the non-compact 3-manifold is a \defn{deleted boundary 3-manifold}. Deleted boundary 3-manifolds are eventually end-irreducible. All 3-manifolds (excluding compressionbodies) considered in this paper will have compact boundary. When the manifold is end-irreducible (rel $C$) we will assume that $C$ contains $\partial M$.
\section{Examples}\label{examples} Some examples of non-compact Heegaard splittings are in order. When thinking about non-compact Heegaard splittings, keep in mind that an absolute handlebody is the closed regular neighborhood of a properly embedded, locally finite graph in ${\mathbb R}^3$. Frohman and Meeks \cite{FrMe97} give an example of a non-compact 3-manifold whose interior is an open handlebody but the closure of the interior is not a handlebody. Handlebodies have properly embedded discs which cut them into 3-balls. Another observation, which may help the reader's intuition, is that no essential loop in an absolute compressionbody can be homotoped out of every compact set. This is easily proved using the proper deformation retraction of the compressionbody to its spine. This implies, for example, that $\text{(compact surface)} \times {\mathbb R}$ is not a handlebody.\newline
As mentioned in the opening paragraphs of the paper, Heegaard splittings of open manifolds can be obtained from triangulations or by lifting Heegaard splittings from manifolds which are covered by the manifold in question.
\subsection{Heegaard Splittings of ${\mathbb R}^3$}Heegaard splittings of ${\mathbb R}^3$ are easy to construct. Since the upper and lower half spaces are homeomorphic to closed regular neighborhoods of the positive $z$-axis, ${\mathbb R}^3$ has a genus zero Heegaard splitting. Obviously, this splitting can be stabilized any given (finite) number of times. By choosing an infinite, properly embedded collection of arcs in the surface, it can also be stabilized an infinite number of times simultaneously to give an infinite genus Heegaard surface. Frohman and Meeks prove that this is, up to proper ambient isotopy, the only infinite genus Heegaard splitting of ${\mathbb R}^3$.
\subsection{Finite Genus Heegaard Splittings} Let $\ob{M}$ be a compact 3-manifold with Heegaard splitting $\ob{U} \cup_{\ob{S}} \ob{V}$. Let $B$ be an embedded closed 3-ball in $\ob{M}$ which intersects $\ob{S}$ in a properly embedded disc. Let $X = \ob{X} - B$ for $X = M, U, S, V$. Then $M = U \cup_S V$ is a finite genus Heegaard splitting of the deleted boundary manifold $M$. (Infinitely many discs parallel to $\partial B \cap \ob{U}$ ($\partial B \cap \ob{V}$) are in any defining set of discs for $U$ ($V$).) Classifying such Heegaard splittings would be equivalent to classifying all Heegaard splittings of compact manifolds. No such simple classification is to be hoped for, and so our classification of Heegaard splittings for deleted boundary 3-manifolds does not address such examples. Fortunately, this is the only type not covered by our classification.
\subsection{Amalgamating Heegaard Splittings}An easy way to create Heegaard splittings of non-compact manifolds is to amalgamate splittings of compact submanifolds. We describe a way to do this, beginning with a description of amalgamation. See \cite{Sch93} for the definition of amalgamation. Let $N_0$ and $N_1$ be two compact 3-manifolds with absolute Heegaard splittings $N_0 = U_0 \cup_{S_0} V_0$ and $N_1 = U_1 \cup_{S_1} V_1$ and homeomorphic boundary components $F_0 \subset \partial_- V_0$ and $F_1 \subset \partial_- V_1$. Choose a homeomorphism $h:F_1 \rightarrow F_0$. We can form a Heegaard splitting of the \defn{amalgamated manifold} $N = N_0 \cup_{h} N_1$ by \defn{amalgamation} in the following way: \newline
In $V_i$ there are collaring discs $\delta_i$ which cut off a product region $F_i \times I$ contained in $V_i$. Choose labels so that $F_i = F_i \times \{0\}$. Let $P$ denote the product region $(F_1 \times I) \cup (F_0 \times I)$ in $N$. Think of $P$ as homeomorphic to $F_1 \times [0,2]$. Note that it is contained in $V_0 \cup V_1$. Perform an isotopy of $N_1$ so that, in $P$, $\delta_1 \times [0,2]$ is disjoint from $\delta_0$. Let $A_1$ be $\delta_1 \times [0,2]$ in $P$. Let $U = U_0 \cup (V_1 - (F_1 \times I)) \cup A_1$, $V = \operatorname{cl}(N - U)$ and $S = V \cap U$. Then $N = U \cup_S V$ is a Heegaard splitting with genus equal to $\operatorname{genus}(S_0) + \operatorname{genus}(S_1) - \operatorname{genus}(F_1)$. Note that there are disjoint discs $\delta_1 \subset U$ and $\delta_0 \subset V$ which, when we compress $S$ along them, leave us with a surface parallel to $F_0 = F_1$ in $N$. \newline
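As a quick sanity check on the genus formula, here is a hypothetical instance (the particular genera below are illustrative choices, not taken from any construction in this paper):
% illustrative values only
\[
\operatorname{genus}(S) \;=\; \operatorname{genus}(S_0) + \operatorname{genus}(S_1) - \operatorname{genus}(F_1) \;=\; 2 + 3 - 1 \;=\; 4,
\]
corresponding to amalgamating a genus 2 splitting and a genus 3 splitting across a torus boundary component. \newline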
Here is a method of producing an infinite genus Heegaard splitting of a non-compact 3-manifold $M$. Let $\{K_i\}$ be an exhausting sequence for $M$ with the properties that $\partial M \subset K_1$, that no component of $\operatorname{cl}(M - K_i)$ is compact for any $i$, and that for each $i$ and for each component $J$ of $\operatorname{cl}(K_{i+1} - K_i)$ the intersection $J \cap K_i$ is connected. For each $i$, let $L_i = \operatorname{cl}(K_{i+1} - K_i)$ and $F_i = L_i \cap K_i$. $K_{i+1}$ is formed by amalgamating $K_i$ and each component of $L_i$ along a single component of the surface $F_i$. \newline
We now carefully choose absolute Heegaard splittings of $K_1$ and each component of $L_i$ for each $i \geq 1$. Choose a Heegaard splitting $K_1 = U_1 \cup_{S_1} V_1$ of $K_1$ so that every boundary component of $K_1$ is contained in $V_1$. Let $\delta'_1$ be a set of collaring discs for $V_1$. Now for each component of $L_i$ choose a Heegaard splitting so that $L_i = X_i \cup_{T_i} Y_i$. Choose the splitting so that $\partial L_i \subset Y_i$. Since each component of $L_i$ has at least two boundary components neither $X_i$ nor $Y_i$ has a component which is a stabilization of a trivial Heegaard splitting. Inductively, form a Heegaard splitting of $K_n = U_n \cup_{S_n} V_n$ for $n \geq 2$ by amalgamating the Heegaard splittings of $K_{n-1}$ and $L_{n-1}$. Let $V_n$ be the compressionbody which contains $\delta'_1$ and let $U_n$ be the other. \newline
Recall from the definition of amalgamation that if $F_n \subset U_n$ then $U_{n+1} \cap U_n$ can be created by removing 1-handles in $U_n$ which join $F_n$ to $S_n$ and are vertical in the product structure of $U_n$ compressed along a defining set of discs. Denote these 1-handles by $A_n$. The surface $F_n$ is contained in $U_n$ whenever $n$ is even (by our choice of Heegaard splitting for $L_n$). If $n$ is odd then $F_n$ is not in $U_n$, so for odd $n$, let $A_n = \varnothing$. If $n$ is even then $U_n \subset U_{n+1}$. Define $U'_n = \operatorname{cl}(U_n - A_n)$. Since for each $n$, $\partial L_n \subset Y_n$, $U'_n \subset U'_{n+1}$ for all $n$. In particular, when we extend the 1-handles from $Y_n$ into $K_{n-1}$ they do not need to reach into $L_{n-2}$. Let $U = \cup_{\mathbb N} U'_n$. \newline
We desire to show that $U$ is an absolute compressionbody. Since $\partial M \subset V$, $U$ will be an absolute handlebody. To prove this we will produce a properly embedded collection of discs in $U$ which cut $U$ into compact handlebodies. Let $\delta_n$ be a collaring set of discs contained in $L_{n-1}$ for $U_n$ for each even $n$. We may assume that $\delta_n$ is disjoint from $A_n$ and so $\delta_n$ is a properly embedded finite collection of discs in $U$, for each even $n$. Furthermore, since $\delta_n \subset L_{n-1}$ the infinite collection of discs $\delta = \cup \delta_n$ is properly embedded in $M$. The discs $\delta_n$ cut off a compact submanifold $U'_n - (\partial K_{n+2} \times I)$. As $U = \cup U'_n$ every component of $\sigma(U;\delta)$ is compact. Let $H$ be a component of $\sigma(U;\delta)$. Choose an even $n$ large enough so that $H \subset U'_{n-2}$. $H$ is thus a component of $\sigma(U'_n;\delta)$ which is not contained in $\partial U'_n \times I$. As such, it must be a handlebody as $U'_n$ for $n$ even is an absolute compressionbody. Hence $U$ is a handlebody. \newline
Let $V = \operatorname{cl}(M - U)$. The argument to show that $V$ is an absolute compressionbody is similar, except that the disc set $\delta$ will cut $V$ into compact handlebodies and, if $\partial M \neq \varnothing$, a compact absolute compressionbody $H$ with $\partial_- H = \partial M$. Letting $S = U \cap V$, we have shown that $U \cup_S V$ is an absolute Heegaard splitting of $M$. \newline
It is instructive to examine this construction in the case when $M$ is a deleted boundary 3-manifold. Let $M_0$ be a compact, orientable 3-manifold with non-empty boundary component $\partial_1 M_0 = F \neq S^2$. Let $M_0 = U_0 \cup_{S_0} V_0$ be a Heegaard splitting of $M_0$ with $F \subset V_0$. Let $M_i$ for $i \geq 1$ be homeomorphic to $F \times I$ and choose a Heegaard splitting $M_i = U_i \cup_{S_i} V_i$ of $M_i$ which is obtained by tubing together two copies of $F$ in $M_i$. Such a Heegaard splitting has both boundary components, $\partial_0 M_i$ and $\partial_1 M_i$, contained in $V_i$ and has genus which is twice the genus of $F$. (Heegaard splittings of $F \times I$ are classified by Scharlemann and Thompson in \cite{ScTh93}. This classification will be important for our work in Section \ref{Deleted Boundary}.) Build a 3-manifold $M$, homeomorphic to $M_0 - F$ by glueing $\partial_0 M_i$ to $\partial_1 M_{i-1}$ for $i \geq 1$. At stage $n$ of the glueing process we can obtain a Heegaard splitting of the new manifold by amalgamating the splittings of the previously constructed manifold and $M_n$. The new Heegaard splitting will have genus equal to $\operatorname{genus}(S_0) + n\cdot\operatorname{genus}(F)$. This produces an infinite genus splitting of $M$. It is easy to verify that the splitting is end-stabilized. The content of Proposition \ref{no spheres} is that, up to proper ambient isotopy, this is the only Heegaard splitting of $M$.
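To make the growth of the genus explicit, here is the arithmetic at stage $n$ with hypothetical values (the choice $\operatorname{genus}(S_0) = \operatorname{genus}(F) = 2$ is purely illustrative):
% illustrative values only
\[
\operatorname{genus}(S_0) + n\cdot\operatorname{genus}(F) \;=\; 2 + 2n,
\]
which is unbounded in $n$, so the limiting splitting of $M$ has infinite genus.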
\subsection{Infinite Genus Splittings which are not End-stabilized} A consequence of Theorem C is that all infinite genus splittings of one-ended deleted boundary 3-manifolds are end-stabilized. It is then natural to ask:
\begin{question} Are there examples of one-ended, irreducible 3-manifolds which have infinite genus Heegaard splittings that are not stabilized? Are there such examples where the manifold has finitely generated fundamental group? What if we simply require that the splitting not be end-stabilized? \end{question}
In this subsection, we give two examples of splittings which are not end-stabilized. The first example is a non-stabilized splitting of a one-ended, irreducible 3-manifold $M$ with infinitely generated fundamental group. The second example, which is obtained from the first, is a splitting of the Whitehead manifold $W$ which is not end-stabilized, but which is stabilized and cannot be destabilized finitely many times. The key point is that, though there are infinitely many ``inequivalent" reducing balls, they are not properly embedded in $W$. In this sense, this example is similar in spirit to Peter Scott's construction of a simply connected 3-manifold which is not the connected sum of prime 3-manifolds \cite{Sc77}. I do not know of a one-ended, irreducible manifold with finitely generated fundamental group which has an infinite genus non-stabilized splitting.\newline
We begin by constructing the splitting of $M$. Let $W_0$ be the exterior of the Whitehead link in $S^3$. $W_0$ is a compact 3-manifold which contains no essential annuli or essential tori\footnote{This is easy to prove directly, or see \cite{EudUch96}.}. $W_0$ is hyperbolic (Example 3.3.9 of \cite{Thur97}). As the Whitehead link is a 2-bridge link, it has tunnel number one, and therefore $W_0$ has a genus 2 Heegaard splitting which does not separate $\partial W_0$. Let $\partial_0 W_0$ and $\partial_1 W_0$ be the two boundary components of $W_0$. Let $\lambda_j$ and $\mu_j$ be the longitude and meridian of $\partial_j W_0$ (for $j = 0,1$). The choice should be made so that $\lambda_j$ and $\mu_j$ correspond to the longitude and meridian of the corresponding component of the Whitehead link in $S^3$. In particular, $\lambda_0$ and $\lambda_1$ are homologically trivial in $W_0$ and $\mu_0$ and $\mu_1$ included into $W_0$ generate the first homology of $W_0$. Let $f:\partial_0 W_0 \rightarrow \partial_1 W_{0}$ be a homeomorphism which takes $\lambda_0$ to $\mu_1$ and $\mu_0$ to $\lambda_1$. \newline
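For reference, the homology facts just cited are the standard ones for the exterior of a 2-component link whose components have linking number zero, as is the case for the Whitehead link:
\[
H_1(W_0) \cong {\mathbb Z}^2, \qquad [\lambda_0] = [\lambda_1] = 0,
\]
with $H_1(W_0)$ generated by the classes of $\mu_0$ and $\mu_1$; in a link exterior each longitude is homologous to the sum of the meridians of the other components weighted by linking numbers, and the Whitehead link has linking number zero. \newline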
For each $i \in \mathbb N$ let $W_i$ be a copy of $W_0$. Denote the boundary components of $W_i$ by $\partial_0 W_i$ and $\partial_1 W_i$ in such a way that the labelling corresponds to the labelling of the boundary components of $W_0$. Let $S_i$ be a genus 2 Heegaard surface for $W_i$ which does not separate the boundary components. Let $f_i: \partial_0 W_i \rightarrow \partial_1 W_{i-1}$ be the map $f$. Let $M_1 = W_1$ and, inductively, let $M_n = M_{n-1} \cup_{f_{n}} W_{n}$, $\partial_0 M_n = \partial_0 W_1$, and $\partial_1 M_n = \partial_1 W_n$ for $n \geq 2$. Let $M = \bigcup_n M_n$. Let $S'_n$ be the Heegaard surface of $M_n$ and let $S$ be the Heegaard surface of $M$ obtained by amalgamating the surfaces $S_i$, as described previously. \newline
We now show that $S$ is not stabilized. If it were, then some $S'_n$ would be stabilized, as reducing balls are compact. Without loss of generality, we may assume that $n$ is odd, so that $S'_n$ does not separate the boundary components of $M_n$. It will be beneficial to work with a closed 3-manifold: glue a copy of $W_0$ to $M_{n}$ to obtain a closed 3-manifold $M'$. Use the glueing maps $f:\partial_0 W_0 \rightarrow \partial_1 M_{n}$ and $f^{-1}:\partial_1 W_0 \rightarrow \partial_0 M_n$. We may form a Heegaard splitting of $M'$ by amalgamating a genus 2 splitting, which does not separate $\partial W_0$, of $W_0$ to $S'_{n}$ across $\partial M_{n}$ to obtain a Heegaard surface $T$. As neither splitting separates the boundary components of the respective manifolds, this operation gives a well-defined Heegaard splitting $T$ of $M'$, a closed 3-manifold. The genus of $S'_{n}$ is $(2n - (n - 1)) = n + 1$. The splitting given by $T$ is obtained from $S'_n$ by adding a single one-handle to the handlebody in the splitting of $M_n$. Thus, the genus of $T$ is one more than the genus of $S'_n$; that is, the genus of $T$ is $n+2$. By assumption, $S'_n$ is stabilized, and so $T$ is, as well. Thus, $M'$ has an irreducible Heegaard splitting of genus $g \leq n+1$. \newline
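Spelling out the genus count in the preceding paragraph: amalgamating the $n$ genus 2 surfaces $S_1, \hdots, S_n$ across the $n-1$ tori along which the $W_i$ are glued, and then adding the single one-handle that produces $T$, gives
\[
\operatorname{genus}(S'_n) \;=\; 2n - (n-1) \;=\; n+1, \qquad \operatorname{genus}(T) \;=\; (n+1) + 1 \;=\; n+2,
\]
so destabilizing the stabilized splitting $T$ produces the splitting of genus $g \leq n+1$ appealed to below. \newline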
We now appeal to a theorem of Scharlemann and Schultens. A consequence of Theorem 4.7 of \cite{ScSch01} is that if $M'$ (a closed, orientable, irreducible 3-manifold) has a JSJ-decomposition with $q$ non-Seifert fibered submanifolds, then $q \leq g-1$. Let $\Theta$ be the union of the boundary tori of $W_i$ for $i \leq n$. As each $W_i$ contains no essential annuli or tori, $\Theta$ is the union of the canonical tori in the JSJ-decomposition of $M'$. None of the $W_i$ are Seifert fibered, so $q = n + 1$. Therefore, $q = n+1 \leq g - 1 \leq n$, a contradiction. We conclude that $S$ is not stabilized. \newline
We have just shown that the manifold $M$ has an infinite genus Heegaard surface $S$ which is not stabilized. $M$ has infinitely generated fundamental group as the tori $\partial W_i$ for $i \geq 1$ are all incompressible and non-parallel.
\begin{remark} There are many other similar constructions of 3-manifolds with infinitely generated fundamental group which have non-stabilized splittings. By allowing arbitrary glueing maps between boundary tori of the compact pieces, one can use a theorem of Bachman, Schleimer, and Sedgwick \cite{BaScSe05} to show that the amalgamated splittings are not stabilized. We do not pursue this route further in this paper. \end{remark}
We now use the splitting of $M$ to obtain a splitting of the Whitehead manifold $W$. The manifold $M$ has a single boundary component $\partial_0 W_1$. By attaching a solid torus $V$ to $\partial M$ so that the meridian of the solid torus is equal to the meridian $\mu_0$ of $\partial_0 W_1$, we obtain the Whitehead manifold $W \supset M$. As this same process can be achieved by attaching first a 2-handle and then a 3-ball to $\partial M$, the surface $S$ is still a Heegaard surface for $W$. As $S$ is not stabilized in $W - V$, every stabilizing ball of $S$ in $W$ must intersect the compact set $V$. Thus, $S$ is not end-stabilized. $S$ is, however, stabilized. To see this, recall that $S$ is formed by amalgamating the splitting $S'_n$ (for any given $n$) to the splittings $S_i$ for $i \geq n$. Interpreted in $W$, $S'_n$ (for any $n$) is a splitting of a solid torus. The genus of $S'_n$ is $n + 1$ and so, by the classification of splittings of handlebodies, $S'_n$ can be destabilized $n$ times in $W$. This means, then, that $S$ can be destabilized infinitely many times in $W$. The stabilizing balls are not properly embedded in $W$ and so only finitely many destabilizations can occur at once.
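To make the destabilization count explicit (this just expands the arithmetic used above): interpreted in $W$, each $S'_n$ is a splitting of a solid torus of genus $n+1$, so by the classification of splittings of handlebodies it can be destabilized $n$ times, down to the standard splitting of the solid torus of genus
\[
(n+1) - n \;=\; 1.
\]
Since this holds for every $n$, no finite number of destabilizations removes all of the stabilizations of $S$.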
\subsection{Ends of Heegaard Surfaces} The remainder of this section is devoted to showing that the inclusion of a Heegaard surface into $M$ induces a homeomorphism of end spaces. It may serve to give the reader some feel for the properties of non-compact Heegaard splittings. Before stating the results, we recall the definition of the set of ends of a manifold. (See, for example, \cite{BrTu74}.)
\begin{definition} A \defn{ray} in a connected manifold $M$ is a proper map\linebreak[4] $r:{\mathbb R}_+ \rightarrow M$. An \defn{end} of a non-compact manifold $M$ is an equivalence class of rays. Two rays $r,s:{\mathbb R}_+ \rightarrow M$ are equivalent if for every compact set $C \subset M$ there is a number $t_C \in {\mathbb R}_+$ such that the images of $[t_C,\infty)$ under $r$ and under $s$ are in the same component of $M - C$. The set of ends is topologized by declaring that for any compact set $C$ and any non-compact component $A$ of the closure of $M - C$ the set of equivalence classes $\{[r] : \exists t \in {\mathbb R}_+ \text{ with } r([t,\infty)) \subset A\}$ is an open set. These open sets form a basis for the topology on the end space of $M$. The set of ends of $M$ with this topology is 0-dimensional, compact, and Hausdorff \cite{Ra60}. \end{definition}
The proofs of the following lemma and proposition follow suggestions by Martin Scharlemann.
\begin{lemma}\label{graph exterior} Let $\Gamma$ be a locally finite graph properly embedded in an open 3-manifold $M$. Then the inclusion of $M - \operatorname{int}(\eta(\Gamma))$ into $M$ induces a homeomorphism of ends. \end{lemma}
\begin{proof} Let $X = M - \operatorname{int}(\eta(\Gamma))$. Let $r$ and $s$ be two rays determining the same end of $X$. Let $C \subset M$ be a compact set. $X$ is a closed subset of $M$. As such, $C \cap X$ is a compact subset of $X$. Hence, there exists a $t \in [0,\infty)$ such that the images of $[t,\infty)$ under $r$ and $s$ are contained in the same component of $X - C$. This means that the images of $[t,\infty)$ under $r$ and $s$ are contained in the same component of $M - C$. Thus, $r$ and $s$ are rays in $M$ and determine the same end of $M$. Hence, there is a well-defined map on ends induced by the inclusion of $X$ into $M$. \newline
We next prove that the induced map on ends is surjective. Suppose that $[r]$ is an equivalence class of ends of $M$. By general position, there is a representative of this equivalence class which is disjoint from $\Gamma$ and, hence, there is a representative $r$ which is contained in $X$. Under the induced map the equivalence class $[r]$ in the set of ends of $X$ is sent to the equivalence class $[r]$ in the set of ends of $M$. Thus, the induced map on ends is surjective. \newline
Now suppose that $[r]$ and $[s]$ are equivalence classes in the set of ends of $X$ which have the same image in the set of ends of $M$ under the map induced by the inclusion of $X$ into $M$. Let $r$ and $s$ be representatives of these equivalence classes in the set of ends of $X$. Since $r$ and $s$ represent the same equivalence class in the set of ends of $M$, for any compact set $C \subset M$ there is a $t_C \in [0, \infty)$ such that the images of $[t_C,\infty)$ under $r$ and $s$ are contained in the same component of $M - C$. Let $K \subset X$ be a compact set. As $X$ is closed in $M$, $K$ is a compact subset of $M$. The images $r([t_K,\infty))$ and $s([t_K,\infty))$ are contained in the same component of $M - K$. The components of $M - K$ are also the path components of $M - K$, so there is a path $\gamma$ contained in $M - K$ joining $r([t_K,\infty))$ and $s([t_K,\infty))$. By general position, we may homotope $\gamma$ so that its image is contained in $M - (K \cup \eta(\Gamma))$. That is, $\gamma$ is a path in $X - K$ joining $r([t_K,\infty))$ and $s([t_K,\infty))$. Thus, $r([t_K,\infty))$ and $s([t_K,\infty))$ are contained in the same component of $X - K$. Since $K$ was an arbitrary compact subset of $X$, $[r] = [s]$ in the set of ends of $X$ and the induced map on ends is injective. \newline
We now prove that the induced map is bicontinuous. To show continuity, it suffices to show that the preimage of a basis element in the topology of ends of $M$ is open in the ends of $X$. Let $A'$ be a basis element in the topology of the set of ends of $M$. By definition, there is a compact set $C \subset M$ and a non-compact component $A$ of $M - C$ such that for each ray $r$ for which $[r] \in A'$ there is $t_r \in [0,\infty)$ such that $r([t_r,\infty))$ is contained in $A$. By replacing $C$ with $\eta(C)$, we may assume that $C$ and $A$ are submanifolds of $M$. Since $X$ is closed in $M$, $C \cap X$ is compact and so by choosing representatives $r$ for each $[r] \in A'$ such that $r$ is a ray in $X$, we see that $r([t_r,\infty))$ is contained in $A \cap X$. \newline
We claim that $A \cap X$ is connected and non-compact. It is easy to see that $A \cap X$ is path-connected: choose two points $x,y \in A \cap X$. Since $A$ is path-connected, there is a path in $M$ joining them. By general position we may assume that the path is disjoint from $\Gamma$. Thus, there is a path in $A$ disjoint from $\eta(\Gamma)$. Hence, $A \cap X$ is path-connected and therefore connected. $A \cap X$ is also non-compact since $r$ is a proper map and the image of $[t_r,\infty)$ under $r$ is contained in $A \cap X$. The preimage of $A'$ is, therefore, contained in the set $A'' = \{[s]: \exists t \in {\mathbb R}_+ \text{ with } s([t,\infty)) \subset (A \cap X)\}$. Suppose, now, that $s$ is a representative for $[s] \in A''$. Since $A \cap X \subset A$, $s([t,\infty)) \subset A$. Thus, the image of $[s]$ under the inclusion map of ends of $X$ into ends of $M$ is contained in $A'$. Thus, the preimage of $A'$ is $A''$. $A''$ is, by definition, an open set in the topology of the set of ends of $X$. Hence, the induced map on ends is continuous. Since the set of ends of a connected manifold is compact and Hausdorff the induced map also has continuous inverse. Thus, the induced map is a homeomorphism. \end{proof}
\begin{proposition}\label{end homeomorphism} Let $M = U \cup_S V$ be an absolute Heegaard splitting of a non-compact manifold with compact boundary. Then the inclusion of $S$ into $M$ induces a homeomorphism of ends. \end{proposition}
\begin{proof} If $\partial M \neq \varnothing$ we can attach finitely many 2- and 3-handles to $\partial M$ to obtain an open 3-manifold $M'$ containing $M$. An absolute Heegaard splitting for $M$ is also a Heegaard splitting for $M'$, since the 2- and 3-handles were attached to $\partial_-$ of the compressionbodies. Since we attached only finitely many 2- and 3-handles, the inclusion of $M$ into $M'$ induces a homeomorphism of ends. So, without loss of generality, we may assume that $M$ is open. \newline
Choose spines $\Sigma_U$ and $\Sigma_V$ for $U$ and $V$ respectively. Let $\Gamma = \Sigma_U \cup \Sigma_V$. $\Gamma$ is a locally finite graph properly embedded in $M$. Let $X$ be the complement of an open regular neighborhood of $\Gamma$ in $M$. Since $\Sigma_U$ and $\Sigma_V$ are spines of handlebodies giving a Heegaard splitting of $M$, $X$ is homeomorphic to $S \times I$. By Lemma \ref{graph exterior}, the inclusion of $X$ into $M$ induces a homeomorphism of ends. Since $X$ is homeomorphic to $S \times I$ there is a proper deformation retraction of $X$ onto $S \times \{\frac{1}{2}\}$. Thus the inclusion of $S$ into $X$ is a proper homotopy equivalence and so induces a homeomorphism on ends. Therefore, the inclusion of $S$ into $M$ induces a homeomorphism of ends. \end{proof}
\begin{remark} In \cite{FrMe97}, Frohman and Meeks prove by algebraic means that a Heegaard surface in a 1-ended 3-manifold is 1-ended. \end{remark}
\section{Slide-Moves}\label{slide-moves}
\subsection{Handle-slides}\label{handle-slides} Let $H$ be a compressionbody (absolute or relative) with preferred surface $S = \partial_+ H$. Suppose that we are given a disc set $\Delta$ for $H$ (with $\partial \Delta \subset \partial_+ H$). We now describe a process which transforms $\Delta$ into a new disc set $\Delta'$. \newline
Let $\alpha \subset \partial_+ H$ be an oriented arc such that $\alpha \cap \partial \Delta = \partial \alpha$. Suppose that the endpoints of $\alpha$ are on distinct discs of $\Delta$. Let $D_1$ and $D_2$ be the discs of $\Delta$ containing $\partial \alpha$ so that $\alpha$ joins $D_1$ to $D_2$. A regular neighborhood of $D_1 \cup \alpha \cup D_2$ has frontier in $H$ consisting of three discs. Two of these discs are parallel to $D_1$ and $D_2$; the other has arcs in its boundary which are subarcs of $\eta(\alpha)$. Let $D_1 \slide{\alpha} D_2$ denote this disc. Let $\Delta' = (\Delta - D_1) \cup (D_1 \slide{\alpha} D_2)$.
\begin{definition} The disc set $\Delta'$ is obtained from $\Delta$ by a \defn{handle-slide} of $\Delta$ along $\alpha$. If $D_1, D_2$ and $\alpha$ are all disjoint from a closed set $X$ then the handle-slide is said to be done \defn{relative} to $X$ or \defn{(rel $X$)}. \end{definition}
Suppose that $A \subset H$ is a subcompressionbody with the property that $\operatorname{fr} A \subset \Delta$. There is a subcompressionbody $A'$ of $H$ with frontier contained in $\Delta'$ which we say is \defn{obtained from $A$ by a handle-slide}. The definition of $A'$ depends on the location of $D_1$ and $\alpha$: \begin{itemize}
\item If $D_1$ is not in the frontier of $A$ then $A'$ is equal to $A$.
\item If $D_1$ is in the frontier of $A$ and $\alpha$ is contained in $\partial_{S} A$ then we remove the interior of a collar neighborhood of $\alpha \cup D_2$ from $A$. (The neighborhood of $D_2$ should be taken to be just on the side of $D_2$ which $\alpha$ intersects. This way, if $D_2 \subset \operatorname{int} A$, the disc $D_2$ itself is not removed.) If $D_2$ was not in the frontier of $A$, it is now contained in $\operatorname{fr} A'$.
\item If $D_1$ is in the frontier of $A$ and $\alpha$ is not contained in $\partial_{S} A$ then to form $A'$, we add the closure of a regular neighborhood of $\alpha \cup D_2$ to $A$. (Again, the neighborhood of $D_2$ should be taken to be just on the side of $D_2$ which intersects $\alpha$.) \end{itemize}
\begin{remark} The subcompressionbodies $A$ and $A'$ may not be homeomorphic (if, for example, both $D_1$ and $D_2$ are contained in $\operatorname{fr} A$ and $\alpha$ is not in $\partial_S A$). We do have, however, that $\sigma(A;\Delta)$ is homeomorphic to $\sigma(A';\Delta')$. \end{remark}
Likewise, if $R$ is a (topologically) closed subsurface of $\partial_+ H$ with the following three properties: \begin{enumerate} \item[$\cdot$] $\partial D_1$ is either a component of $\partial R$ or disjoint from $\partial R$.
\item[$\cdot$] $\partial D_2$ is either a component of $\partial R$ or disjoint from $\partial R$.
\item[$\cdot$] The interior of $\alpha$ is disjoint from $\partial R$ \end{enumerate} then we can form a new surface $R'$ which is \defn{obtained from $R$ by a handle-slide}. If $\partial D_1 \cap \partial R = \varnothing$ then $R'$ is defined to be $R$. If $\partial D_1 \subset \partial R$ and $\alpha \subset R$ then $R'$ is defined to be $\operatorname{cl}(R - \eta(\alpha \cup \partial D_2))$ where the neighborhood of $\partial D_2$ is a one-sided neighborhood on the side of $D_2$ which $\alpha$ meets. This way if $\partial D_2 \subset \operatorname{int}(R)$ then $\partial D_2 \subset \partial R'$. If $\partial D_1 \subset \partial R$ and $\alpha$ is not contained in $R$ then $R'$ is defined to be $R \cup \eta(\alpha \cup \partial D_2)$. As before, the neighborhood of $\partial D_2$ should be taken to be a one-sided neighborhood on the side of $\partial D_2$ which $\alpha$ meets.\newline
Bonahon developed the use of handle-slides to prove results about compressionbodies. The following proposition and its corollaries are based on his work in \cite{Bo83}. For proofs see Appendix B of that paper. The essence of the proof of Proposition \ref{choosing discs} shows up in Step 6 of the proof of Proposition \ref{good discs exist} of this paper.
\begin{proposition}\label{choosing discs} If $D$ is a boundary-reducing disc for $H$ then there is a collection of defining discs for $H$ which are disjoint from $D$. \end{proposition}
\begin{corollary}\label{boundary reducing gives compressionbody} Boundary-reducing a compressionbody along a finite disc set results in compressionbodies. \end{corollary}
\begin{corollary} \label{nested reducing discs} Given any finite disc set for a compressionbody, there is a defining collection of discs for the compressionbody which contains the given disc set. \end{corollary}
\begin{corollary} A subcompressionbody with compact frontier is a compressionbody. \end{corollary}
The following definition will be useful later. We include it here since Corollary \ref{complementary compressionbodies} follows from Corollary \ref{nested reducing discs}.
\begin{definition} If $A$ and $B$ are relative compressionbodies with $A \subset B$, we say that $A$ is \defn{correctly embedded} in $B$ if $\partial_+ A \subset \partial_+ B$ and if every closed component of $\partial_- A$ is also a component of $\partial_- B$. \end{definition}
Another way of stating the definition is that $A \subset B$ is correctly embedded if each component of $\operatorname{fr} A$ is a component of $\partial_- A$ which has non-empty boundary and is properly embedded in $B$.
\begin{remark} The notion of ``correctly embedded" is similar to Canary and McCullough's ``normally imbedded" in \cite[Section 3.4]{CaMc04}. \end{remark}
\begin{corollary}\label{complementary compressionbodies} Suppose that $A$ is a compact relative compressionbody correctly embedded in a relative compressionbody $B$. Then $\operatorname{cl}(B - A)$ is a relative compressionbody. In particular, if each closed component of $\partial_- B$ is contained in $A$ then $\operatorname{cl}(B - A)$ is a handlebody. \end{corollary}
\begin{proof} Choose a defining set of discs $\Delta_A$ for $A$. Boundary-reduce $B$ along $\Delta_A$ to obtain $B'$. Corollary \ref{boundary reducing gives compressionbody} implies that each component of $B'$ is a (relative) compressionbody. Each component of $B'$ was either contained in $A$ or contains a copy of $\operatorname{fr} A \times I$ with $\operatorname{fr} A \times \{1\}$ a subsurface of $\partial B'$. Subtracting $\operatorname{fr} A \times I$ from those components of $B'$ is simply removing a collar of a subsurface of $\partial B'$ from $B'$ and hence leaves us with a compressionbody. But this is exactly $\operatorname{cl}(B - A)$. If each closed component of $\partial_- B$ is contained in $A$ then, if $C$ is a component of $B'$ which is not contained in $A$, $\partial_- C$ contains no closed components. By our definition of ``compressionbody", $\partial C$ is compact and so $C$ is formed by adding one-handles to $F \times I$ where $F$ is a compact surface, no component of which is boundary-less. Thus, $C$ is obtained by adding one-handles to handlebodies and, so, is a handlebody. We then form a component of $\operatorname{cl}(B - A)$ by removing a neighborhood of $\operatorname{fr} A \cap C$ from $C$. The result is homeomorphic to $C$ and so is a handlebody. \end{proof}
\begin{remark} We may not be able to choose $\operatorname{cl}(\partial_+ B - \partial_+ A)$ to be the preferred surface of $\operatorname{cl}(B - A)$. This can happen, for example, if $B$ is a compact relative compressionbody, each non-closed component of $\partial_- B$ is pushed slightly into $B$, and $A$ is taken to be the closure of the complement of the resulting product regions. \end{remark}
\subsection{Slide-Moves and Isotopies}\label{slide-moves and isotopies}
For the remainder of this section, let $S$ be an absolute Heegaard surface dividing a 3-manifold $M$ with compact boundary into absolute compressionbodies $U$ and $V$. \newline
If we have disc sets $\ob{\Delta}_1$ for $U$ and $\ob{\Delta}_2$ for $V$ which are disjoint from each other we can perform handle-slides on each disc set individually. The remainder of this section studies how these handle-slides affect the surface $S$.
\begin{definition} A \defn{2-sided disc family} $\ob{\Delta}$ for $S$ in $M$ is the union of disc sets $\ob{\Delta}_1$ and $\ob{\Delta}_2$ for $U$ and $V$ with the property that the discs of $\ob{\Delta} = \ob{\Delta}_1 \cup \ob{\Delta}_2$ are pairwise disjoint. \end{definition}
We can expand the notion of a handle-slide to that of a slide-move on the 2-sided disc family $\ob{\Delta} = \ob{\Delta}_1 \cup \ob{\Delta}_2$:
\begin{definition} A \defn{slide-move} of $\ob{\Delta}$ is one of the following operations: \begin{enumerate} \item[(M1)] Perform a handle-slide (rel $\partial \ob{\Delta}_2$) of $\ob{\Delta}_1$. \item[(M2)] Add to $\ob{\Delta}_1$ a boundary-reducing disc for $U$ which is disjoint from $\ob{\Delta}_1 \cup \ob{\Delta}_2$. \item[(M3)] Perform a handle-slide (rel $\partial\ob{\Delta}_1$) of $\ob{\Delta}_2$. \item[(M4)] Add to $\ob{\Delta}_2$ a boundary-reducing disc for $V$ which is disjoint from $\ob{\Delta}_1 \cup \ob{\Delta}_2$. \end{enumerate} \end{definition}
Suppose that $A$ is a subcompressionbody of $U$ or $V$ with $\operatorname{fr} A \subset \ob{\Delta}$. If we perform a slide-move on $\ob{\Delta}$ to obtain a 2-sided disc family $\Delta$ we can obtain from $A$ a subcompressionbody $A'$ with frontier contained in $\Delta$: If slide-move (M2) or (M4) is performed, $A'$ is defined to be equal to $A$. If $A \subset U$ and slide-move (M1) is performed, $A'$ is defined to be the subcompressionbody obtained from $A$ by the handle-slide (see Section \ref{handle-slides}). Similarly, if $A \subset V$ and slide-move (M3) is performed, $A'$ is defined to be the subcompressionbody obtained from $A$ by the handle-slide. If we perform a finite sequence of slide-moves to obtain $\Delta$ from $\ob{\Delta}$ there is a subcompressionbody $A'$ with $\operatorname{fr} A' \subset \Delta$ obtained from $A$ by a finite number of handle-slides. We say that $\Delta$ is \defn{obtained from $\ob{\Delta}$ by slide-moves} and that $A'$ is \defn{obtained from $A$ by slide-moves}. \newline
Suppose that $R$ is a properly embedded subsurface of $S$ with $\partial R \subset \partial \ob{\Delta}$. The boundary components of $R$ may bound discs in either $U$ or $V$ (i.e. discs which are in $\ob{\Delta}_1$ or $\ob{\Delta}_2$). If we perform a finite sequence of slide-moves on $\ob{\Delta}$ to obtain $\Delta$ we may define a subsurface $R'$ of $S$ which is \defn{obtained from $R$ by slide-moves} and has boundary contained in $\partial \Delta$. The definition is basically the same as the definition when a single handle-slide is performed: If slide-moves (M2) or (M4) are performed, $R$ is left unchanged. If (M1) or (M3) is performed, so that a disc $D_1$ is slid over a disc $D_2$ via a path $\alpha$, we can define $R'$ as before (see Section \ref{handle-slides}). The following proposition is an integral part of the proof of Theorem \ref{well-placed}.
\begin{proposition}\label{isotopies of slide-moves} Suppose that $\ob{\Delta}$ is a 2-sided disc family for $S$ and that $\Delta$ is obtained from $\ob{\Delta}$ by slide-moves. Then there is a finite collection of disjoint discs $\mc{D}$ with $\partial \mc{D} \subset \sigma(S;\ob{\Delta})$ and a proper ambient isotopy of $\sigma(S;\Delta)$, fixed outside a compact subset of $M$, with the following properties: \begin{enumerate} \item[(i)] The discs $\mc{D} = D_1 \cup \hdots \cup D_p$ have an ordering such that the disc $D_i$ intersects the surface $\sigma(S;\ob{\Delta})$ compressed along $D_1, \hdots, D_{i-1}$ only in its boundary. (See the remark below.) \item[(ii)] The isotopy takes $\sigma(S;\Delta)$ to $\sigma(S;\ob{\Delta})$ compressed along $\mc{D}$. \item[(iii)] Let $R$ be a topologically closed subsurface of $S$ such that $\partial R \subset \partial \ob{\Delta}$ and let $R'$ be the subsurface of $S$ obtained from $R$ by that sequence of slide-moves. The isotopy takes $\sigma(R';\Delta)$ to the surface obtained from $\sigma(R;\ob{\Delta})$ by compressing along whatever discs of $\mc{D}$ have boundary in $R$. \end{enumerate} \end{proposition}
\begin{remark} The discs $\mc{D}$ may intersect $S$ on their interiors, so part of the conclusion of the theorem is that when we compress $\sigma(S;\ob{\Delta})$ along the discs $D_1, \hdots, D_{i-1}$ we have chosen the regular neighborhoods of $D_1,\hdots, D_{i-1}$ so that $D_i$, although it may intersect $S$, does not intersect $\sigma(S;\ob{\Delta})$ compressed along $D_1, \hdots, D_{i-1}$. We will abuse notation and write $\sigma(S;\ob{\Delta} \cup \mc{D})$ for the surface obtained from $\sigma(S;\ob{\Delta})$ by compressing along the discs $\mc{D}$ in the order given. Similarly, if $R$ is a topologically closed subsurface of $S$ with $\partial R \subset \partial \ob{\Delta}$ we will use $\sigma(R;\ob{\Delta} \cup \mc{D})$ to indicate the surface obtained from $R$ by compressing along the discs of $\ob{\Delta}$ and then $\mc{D}$ in the given order (rather, compressing along those discs which have boundary on $R$). \end{remark}
The proof of Proposition \ref{isotopies of slide-moves} will make use of the following lemma:
\begin{lemma}\label{isotopies of handle slides} If $\ob{\Delta}_i$ is a disc family for $S$ with $\ob{\Delta}_i \subset U$ or $\ob{\Delta}_i \subset V$ and if $\Delta_i$ is obtained from $\ob{\Delta}_i$ by a single handle-slide of the disc $D_1$ over the disc $D_2$ via a path $\alpha$, then there is a proper ambient isotopy of $M$, fixed off a compact set, with the following properties: \begin{enumerate} \item[(a)] the isotopy takes $\sigma(S;\Delta_i)$ to $\sigma(S;\ob{\Delta}_i)$. \item[(b)] if the handle-slide is relative to a closed set $X$ then we can choose the isotopy to be (rel $X$). \item[(c)] if $R$ is a subsurface of $S$ with all of the following properties: \begin{enumerate} \item[$\cdot$]$\partial D_1$ is either a component of $\partial R$ or disjoint from $\partial R$. \item[$\cdot$]$\partial D_2$ is either a component of $\partial R$ or disjoint from $\partial R$. \item[$\cdot$]The interior of $\alpha$ is disjoint from $\partial R$. \end{enumerate} then if $R'$ is the subsurface of $S$ obtained from $R$ by the handle-slide then the isotopy takes $\sigma(R';\Delta_i)$ to $\sigma(R;\ob{\Delta}_i)$. \end{enumerate} \end{lemma}
\begin{proof}[Proof of Lemma \ref{isotopies of handle slides}]
\begin{figure}
\caption{Compressing along $\Delta_i$}
\label{Figure 1}
\end{figure}
Recall that $\Delta_i$ is obtained from $\ob{\Delta}_i$ by removing the disc $D_1$ and replacing it with the disc $D_1 \slide{\alpha} D_2$. Let $S' = \sigma(S;\Delta_i)$. When we compress along the discs $D_2$ and $D_1 \slide{\alpha} D_2$ we end up with a situation as depicted in Figure \ref{Figure 1}. Note that the figure depicts four discs parallel to $D_2$ since a disc parallel to $D_2$ makes up part of $D_1 \slide{\alpha} D_2$ and both $D_1 \slide{\alpha} D_2$ and $D_2$ are in $\Delta_i$. \newline
After compressing along $D_1 \slide{\alpha} D_2$ we see that there is a regular neighborhood $N$ (in the compressionbody containing $\ob{\Delta}_i$) of $\alpha$ homeomorphic to $D^2 \times I$ with $D^2 \times \{0\}$ glued to a copy of $D_1$ and $D^2 \times \{1\}$ glued to a copy of $D_2 \times I$. Take a regular neighborhood in the compressionbody containing $\ob{\Delta}_i$ of $N \cup (D_2 \times I)$ which misses the rest of the surface $S'$. This regular neighborhood is a 3-ball $B$. Choose the regular neighborhoods so that $B \cap S' \subset \partial B$. The intersection of $B$ with $D_1$ is a disc which is a regular neighborhood (in the compressionbody) of the point $\alpha \cap D_1$. Slightly enlarge $B$ in $M$ to a ball $B'$ and perform an ambient isotopy supported on $B'$ which takes $\partial B - D_1$ to $B \cap D_1$. Next use the regular neighborhood of $\alpha$ to isotope back to $S$ the portion of $S'$ which forms part of the boundary of a regular neighborhood of $\alpha$ (the ``trough"). The result is the same as if we had compressed along $\ob{\Delta}_i$. This proves statement (a). The isotopy described is the identity off a neighborhood of $D_1 \cup \alpha \cup D_2$ and so is a proper isotopy. \newline
If the handle-slide was relative to a closed set $X$, then by choosing the neighborhoods of $D_1$, $D_2$, and $\alpha$ to be disjoint from $X$, the isotopy described is relative to $X$. This proves statement (b). \newline
To prove conclusion (c), we examine the possibilities. Suppose that $R$ is a subsurface of $S$ as in the statement and suppose that $R'$ is obtained from $R$ by the handle-slide. Recall that $B$ is the ball which is a regular neighborhood of $\alpha$ and $D_2$. The important observation is that the isotopy takes $\partial B - D_1$ into $D_1$. \begin{itemize}
\item Suppose that $\partial D_1 \subset \partial R$ and that $\alpha \subset R$. In this case, $R'$ equals $R - \partial B$. The isotopy fixes $R' - \eta(\partial D_1 \slide{\alpha} D_2 \cup D_2)$, and so it takes $\sigma(R';\Delta_i)$ into $\sigma(R;\ob{\Delta}_i)$. \newline
\item Suppose that $\partial D_1 \subset \partial R$ and that $\alpha$ is not contained in $R$. Then $\sigma(R';\Delta_i)$ equals $\sigma(R;\ob{\Delta}_i) \cup \partial B$. The isotopy described takes $\partial B - D_1$ into $\sigma(R;\ob{\Delta}_i)$. \newline
\item Suppose that $\partial D_1$ is not contained in $R$. The previous case shows that $\sigma(\operatorname{cl}(S - R');\Delta_i)$ is taken into $\sigma(\operatorname{cl}(S - R);\ob{\Delta}_i)$ and by part (a) we must have that $\sigma(R';\Delta_i)$ is taken into $\sigma(R;\ob{\Delta}_i)$.\newline
\item Suppose that $\partial D_1 \subset \operatorname{int} R$. In this case, $\partial B - D_1$ is contained in $\sigma(R';\Delta_i)$ and $(\partial B - D_1) \cap S$ is contained in $\sigma(R;\ob{\Delta}_i)$. The isotopy clearly satisfies (c). \end{itemize} \end{proof}
We now turn to the proof of Proposition \ref{isotopies of slide-moves}.
\begin{proof}[Proof of Proposition \ref{isotopies of slide-moves}]
Suppose that the 2-sided disc family $\Delta$ is obtained from the 2-sided disc family $\ob{\Delta}$ by a finite sequence $\{\mu_1,\hdots, \mu_{n}\}$ of slide-moves. Each $\mu_i$ is a slide-move of type (M1), (M2), (M3), or (M4). We prove the proposition by induction on the length of the sequence. If the sequence is of length 0 the result is immediate so suppose that $n \geq 1$ and that the proposition is true for all sequences with $n - 1$ elements. \newline
Let $\delta$ be the 2-sided disc family obtained from $\ob{\Delta}$ by the sequence $\nu = \{\mu_1, \hdots, \mu_{n-1}\}$. Using the notation from the statement of the proposition: let $r$ be the subsurface of $S$ obtained from the subsurface $R$ by the sequence $\nu$. \newline
By the induction hypothesis, there is a collection of disjoint discs $\mc{E}$ with boundary on $\sigma(S;\ob{\Delta})$ and there is an ambient isotopy $f$, fixed off a compact set, which takes $\sigma(S;\delta)$ to $\sigma(S;\ob{\Delta} \cup \mc{E})$ and which takes the surface $\sigma(r;\delta)$ into the surface $\sigma(R;\ob{\Delta} \cup \mc{E})$. (Recall that this means $R$ compressed along those discs of $\ob{\Delta} \cup \mc{E}$ with boundary on $R$.) We assume that $f$ also satisfies conclusions (i), (ii), and (iii).\newline
The 2-sided disc family $\Delta$ is obtained from the 2-sided disc family $\delta$ by a single slide-move $\mu_n$ of type (M1), (M2), (M3), or (M4). We divide the proof into the case when $\mu_n$ is of type (M2) or (M4) and the case when the slide-move is of type (M1) or (M3).
\subsubsection*{Case: $\mu_n$ is of type (M2) or (M4)} If $\mu_n$ is a slide-move of type (M2) or (M4), $\Delta$ is obtained from $\delta$ by adding a single disc $D'$ to $\delta$. In this case, $R' = r$. The ambient isotopy $f$ takes the disc $D'$ to a disc $D$ with boundary on $\sigma(S;\ob{\Delta}\cup \mc{E})$. By a further isotopy, if necessary, we may arrange that the disc $D$ has boundary disjoint from the remnants of $\mc{E}$ and so has boundary on $\sigma(S;\ob{\Delta})$ and that $D$ is disjoint from the discs of $\mc{E}$, though it may intersect $S$ in a neighborhood of $\mc{E}$. Let $\mc{D}$ equal $\mc{E} \cup D$. We need to show that we have satisfied the conclusions of the proposition. \newline
To prove (i), recall that the discs $\mc{E}$ are numbered. The disc $D$ should be given the next number. Since $\operatorname{int} D'$ is disjoint from $\sigma(S;\delta)$ and the isotopy is an ambient isotopy the disc $D$ has interior disjoint from $\sigma(S;\ob{\Delta} \cup \mc{E})$. Thus, $D$ intersects $\sigma(S;\ob{\Delta} \cup \mc{E})$ only on $\partial D$. \newline
Conclusion (ii) is clear, since the isotopy $f$ took $\sigma(S;\Delta - D')$ to $\sigma(S;\ob{\Delta} \cup \mc{E})$ and also took $D'$ to $D$, which is a disc with boundary on the surface obtained from $\sigma(S;\ob{\Delta})$ by compressing along $\mc{E}$. \newline
To prove (iii), recall that since $\mu_n$ is the slide-move consisting of adding the disc $D'$ to $\delta$, the surface $R'$ equals the surface $r$. The induction hypothesis says that $f$ takes $\sigma(r;\delta)$ to the surface $\sigma(R;\ob{\Delta} \cup \mc{E})$. Conclusion (ii) shows that the isotopy $f$ takes the surface $\sigma(R';\Delta)$ (recall that $\Delta = \delta \cup D'$) to $\sigma(R;\ob{\Delta} \cup \mc{D})$. \newline
\subsubsection*{Case: $\mu_n$ is of type (M1) or (M3)} If the slide-move $\mu_n$ is of type (M1) or (M3) we have obtained $\Delta$ from $\delta$ by a single handle-slide of $\delta_1$ in $U$ or $\delta_2$ in $V$. Without loss of generality, assume that $\mu_n$ is a slide-move of type (M3), so that $\Delta$ is obtained from $\delta$ by the slide-move $\mu_n$ of $\delta_2$. By Lemma \ref{isotopies of handle slides}, there is an ambient isotopy $g$ of $M$, fixed off a compact set, which satisfies properties (a), (b), and (c). In particular, $g$ takes $\sigma(S;\Delta)$ to $\sigma(S;\delta)$ because it takes $\sigma(S;\Delta_2)$ to $\sigma(S;\delta_2)$ and is performed relative to $\partial \delta_1$ (property (b)). Let $h$ be the ambient isotopy formed by performing $g$ and then performing $f$. Let $\mc{D} = \mc{E}$. We show that $h$ satisfies conclusions (i), (ii), and (iii). \newline
Conclusions (i) and (ii) follow immediately from the induction hypothesis on $f$ and property (a) of Lemma \ref{isotopies of handle slides}. \newline
To prove conclusion (iii), recall that $r$ denotes the surface obtained from $R$ by the sequence of slide-moves $\{\mu_1, \hdots, \mu_{n-1}\}$. The surface $R'$ is obtained from $r$ by the handle-slide $\mu_n$. Property (c) from Lemma \ref{isotopies of handle slides} shows that $g$ takes $\sigma(R';\Delta_2)$ to $\sigma(r;\delta_2)$. The isotopy $g$ is an ambient isotopy which was performed relative to $\partial \delta_1$, so $g$ also takes $\sigma(R';\Delta)$ to $\sigma(r;\delta)$. By induction, the isotopy $f$ takes $\sigma(r;\delta)$ to $\sigma(R;\ob{\Delta}\cup \mc{D})$. And so $h$ satisfies (iii). \end{proof}
\section{Relative Heegaard Splittings}\label{rel HS}
\subsection{The Outer Collar Property}\label{outer collar prop} Recall that a compact relative compressionbody $H$ is formed by taking a compact surface $F$ (possibly $\varnothing$), forming $F \times I$, and attaching 1-handles to the interior of $F \times \{1\}$ and finitely many 3-balls. The surface $F \times \{0\}$ is $\partial_- H$ and the closure of its complement in $\partial H$ is the preferred surface of $H$, denoted $\partial_+ H$. Recall, also, from Section \ref{handle-slides} that a relative compressionbody $A$ is correctly embedded in a compressionbody $B$ if the frontier of $A$ in $B$ consists only of components of $\partial_- A$ which have boundary. Corollary \ref{complementary compressionbodies} states that, in this case, $\operatorname{cl}(B - A)$ is a relative compressionbody with some preferred surface. However, $\operatorname{cl}(B - A)$ may not be correctly embedded in $B$ as we may not be able to choose $\operatorname{cl}(\partial_+ B - \partial_+ A)$ to be the preferred surface of $\operatorname{cl}(B - A)$. \newline
In this section we explore situations in which we can ``come close" to having $\partial_+ \operatorname{cl}(B - A)$ equal $\operatorname{cl}(\partial_+ B - \partial_+ A)$. These situations will arise when we have exhausting sequences of noncompact absolute compressionbodies.
\begin{definition} Suppose that $\{K'_i\}$ is an exhausting sequence for a noncompact absolute compressionbody $U$. If each $K'_i$ is a relative compressionbody correctly embedded in $U$ and each $K'_i$ is correctly embedded in $K'_{i+1}$ then $\{K'_i\}$ is a \defn{correctly embedded exhausting sequence} for $U$. \end{definition}
The following definition is somewhat technical, but will be useful for statements of results in Section \ref{Exh. Seq.}. Recall that a collaring set of discs for a compressionbody $H$ is a set of discs which separates off a copy of $\partial_- H \times I$.
\begin{definition} Suppose that $\{K'_i\}$ is a correctly embedded exhausting sequence for $U$. Suppose that for each $i \geq 2$ there is an embedding of $(\operatorname{fr} K'_i \times I, (\partial \operatorname{fr} K'_i) \times I)$ into $(\operatorname{cl}(K'_i - K'_{i-1}),\partial_+ U \cap \operatorname{cl}(K'_i - K'_{i-1}))$ so that $\operatorname{fr} K'_i = \operatorname{fr} K'_i \times \{0\}$ and so that $\operatorname{fr} K'_i \times \{1\}$ is a subsurface of $\partial_+ U$ except at a finite number of open discs. Then $\{K'_i\}$ is said to have the \defn{outer collar property}. \end{definition}
\begin{remark} The open discs of $\operatorname{fr} K'_i \times \{1\}$ which are not contained in $\partial_+ U$ are the interiors of a set of collaring discs for $K'_i$. \end{remark}
\begin{definition} Suppose that $\{K'_i\}$ is a correctly embedded exhausting sequence for $U$. Suppose that for each $i \geq 2$ there is an embedding of $(\operatorname{fr} K'_{i-1} \times I,(\partial \operatorname{fr} K'_{i-1}) \times I)$ into $(\operatorname{cl}(K'_i - K'_{i-1}),\partial_+ U \cap \operatorname{cl}(K'_i - K'_{i-1}))$ so that $\operatorname{fr} K'_{i-1} = \operatorname{fr} K'_{i-1} \times \{0\}$ and so that $\operatorname{fr} K'_{i-1} \times \{1\}$ is a subsurface of $\partial_+ U$ except at a finite number of open discs. Then $\{K'_i\}$ is said to have the \defn{inner collar property}. \end{definition}
The outer collar property plays an important role in this paper, so it may be helpful to give an example of an exhausting sequence of a handlebody with the outer collar property. Our example is, in fact, an exhausting sequence of a one-ended, infinite genus handlebody which has both the inner and outer collar properties.
\begin{example} For each natural number $i$, let $F_i$ be a compact, connected surface with non-empty boundary. Let $P_i = F_i \times I$. Recall that $P_i$ is a handlebody. For each $i \geq 2$ join $F_i \times \{0\}$ to $F_{i-1} \times \{1\}$ by a one-handle $H_i$. Denote the union of all the product regions and all the one-handles by $H$. See Figure \ref{innerouter} for a schematic depiction of $H$. Let $D_i$ be a disc which is a cocore of the one-handle $H_i$. Let
$$K'_i = P_1 \cup H_2 \cup P_2 \cup \hdots \cup P_{2i - 1} \cup H_{2i} \cup (F_{2i} \times [0,\frac{1}{2}]).$$
The construction makes clear that $\{K'_i\}$ is a correctly embedded exhausting sequence of the handlebody $H$. The frontier of $K'_i$ is $F_{2i} \times \{\frac{1}{2}\}$ which is an incompressible surface in $H$. For $i \geq 2$, compressing $K'_i$ along $D_{2i}$ leaves two components, one of which is $F_{2i} \times [0,\frac{1}{2}] = \operatorname{fr} K'_i \times I$. This component is disjoint from $K'_{i - 1}$. From the construction, it is clear that $\{K'_i\}$ has the outer collar property. For $i \geq 2$, boundary-reducing $\operatorname{cl}(K'_i - K'_{i-1})$ along the disc $D_{2i - 1}$ leaves two components, one of which is $F_{2i - 2} \times [\frac{1}{2},1] = \operatorname{fr} K'_{i-1} \times I$. Again, from the construction it is clear that $\{K'_i\}$ has the inner collar property. \end{example}
\begin{figure}
\caption{The handlebody $H$.}
\label{innerouter}
\end{figure}
In this paper, it is the outer collar property which is most used. The inner collar property makes an appearance in the appendix. Certainly not every correctly embedded exhausting sequence has the outer collar property. If, for example, for some $i$, $\partial_- K'_{i-1}$ were not a disc and bounded a product region with $\partial_- K'_i$, the sequence would not have the outer collar property. If a sequence has both the inner and outer collar properties, we can take $\operatorname{cl}(\partial_+ K'_{i+1} - \partial_+ K'_i)$ to be the preferred surface for the relative compressionbody $\operatorname{cl}(K'_{i+1} - K'_i)$. It is in this sense that a sequence with the outer collar property ``comes close" to having $\operatorname{cl}(\partial_+ K'_{i+1} - \partial_+ K'_i)$ as the preferred surface for the relative compressionbody $\operatorname{cl}(K'_{i+1} - K'_i)$. \newline
It turns out that sequences with the outer collar property are fairly common:
\begin{lemma}\label{outer collar property} Suppose that $\{K'_i\}$ is a correctly embedded exhausting sequence for the absolute compressionbody $U$. Then there is a subsequence with the outer collar property. \end{lemma}
\begin{proof} Let $\{L_i\}$ be an exhausting sequence for $U$ by subcompressionbodies. Recall that the frontier of a subcompressionbody consists of properly embedded discs. Take subsequences of $\{K'_i\}$ and $\{L_i\}$ so that for all $i$, $L_i \subset K'_i \subset L_{i+1}$. Each inclusion should be into the interior of the succeeding submanifold. Fix some $i \in \mathbb N$.\newline
By Lemma \ref{nested reducing discs}, we may choose a defining collection of discs $\Delta$ for $K'_i$ which includes the discs $\operatorname{fr} L_i$. Boundary-reducing $K'_i$ along $\Delta$ leaves us with 3-balls and products $\partial_- K'_i \times I$. Since $K'_{i-1} \subset L_i$ and $\operatorname{fr} L_i$ separates $U$ we have that the remnants of $K'_{i-1}$ are completely contained in the 3-balls. Thus the product regions $\partial_- K'_i \times I$ are contained completely in $\operatorname{cl}(K'_i - K'_{i-1})$. Label $\partial_- K'_{i}$ with $\partial_- K'_i \times \{1\}$. The discs of $\Delta$ which show up on $\partial_- K'_i \times \{0\}$ can be taken to be our collaring set of discs. This collaring set of discs and the product region $\partial_- K'_i \times I$ are contained in $K'_i - K'_{i-1}$ so the sequence $\{K'_i\}$ now has the outer collar property. \end{proof}
\subsection{Relative Heegaard Splittings} Suppose that $M = U \cup_S V$ is an absolute Heegaard splitting of a non-compact 3-manifold whose boundary is compact and contains no $S^2$ components. If $N \subset M$ is a compact submanifold, the surface $S \cap N$ cannot possibly give an absolute Heegaard splitting of $N$ as $S$ is non-compact and $N$ is compact. It can, however, give a relative Heegaard splitting of $N$.\newline
We will eventually look at the relationship between relative Heegaard splittings and absolute Heegaard splittings, but first we show how exhaustions of $M$ by compact submanifolds which inherit relative Heegaard splittings from $S$ give rise to correctly embedded exhausting sequences of $U$.
\begin{definition} A submanifold $N$ contained in $M$ is \defn{adapted} to $S$ if \linebreak[4]$(U \cap N) \cup_{S \cap N} (V \cap N)$ is a relative Heegaard splitting of $N$ and $(U \cap N)$ is correctly embedded in $U$ and $(V \cap N)$ is correctly embedded in $V$. An exhausting sequence $\{K_i\}$ is \defn{adapted} to $S$ if each $K_i$ is adapted to $S$. It is \defn{perfectly adapted} to $S$ if it is adapted to $S$ and, additionally, each $\operatorname{cl}(K_{i+1} - K_i)$ is adapted to $S$. \end{definition}
\begin{remark} The requirement that $(U \cap N)$ and $(V \cap N)$ are correctly embedded in $U$ and $V$ respectively means that $\operatorname{fr} N$ can have no closed components which are contained entirely in $U$ or $V$: such a component would have to be a component of $\partial_- (U \cap N)$ or $\partial_- (V \cap N)$ as it would not be a subsurface of $S$. This, however, would mean that $U \cap N$ or $V \cap N$ is not correctly embedded in $U$ or $V$, respectively. \end{remark}
In constructing an exhausting sequence that is adapted to $S$, the requirement that $U \cap N$ and $V \cap N$ are correctly embedded in $U$ and $V$ is a minor one. To see this, suppose that a compact submanifold $N \subset M$ containing $\partial M$ has the property that $N \cap U$ and $N \cap V$ are relative compressionbodies with preferred surfaces $S \cap N$. It is easy to adjust $N$ so that $U \cap N$ and $V \cap N$ are correctly embedded. If $U \cap N$, say, is not correctly embedded there must be a component $F$ of $\partial_- (U \cap N) - \partial M$ which is a closed surface. Since $U$ is an absolute compressionbody, $\operatorname{H}_2(U,\partial_- U) = 0$. Thus $F$, possibly together with components of $\partial_- U \cap \partial M$, bounds a compact submanifold $L$ of $U$. $L$ cannot be interior to $N$ as $N \cap U$ is a relative compressionbody with non-empty preferred surface and $F \cup \partial_- U$ is contained in $\partial_- (U \cap N)$. Thus $L$ is exterior to $N$. Since $\partial_- U \subset \partial M \subset N$, we have that $\partial L = F$. In fact, $(N \cap U) \cup L$ must still be a relative compressionbody. To see this, note that $F$ must be compressible in $L$ as $F$ is incompressible in $N \cap U$. ($\partial_-$ of a compressionbody is incompressible in the compressionbody). Every closed incompressible surface in $U$ is parallel to $\partial_- U$. Boundary-reducing $L$ is the same as adding a 2-handle to $N \cap U$ along a curve in $F \subset \partial_- (N \cap U)$. Adding a 2-handle to $\partial_-$ of a compressionbody preserves the fact that we have a relative compressionbody (up to the introduction of spherical boundary components). Eventually, our surface is a collection of spheres, which, since $U$ is irreducible, bound balls in $U$. Including these balls into $N$ (with the 2-handles attached) also preserves the fact that we have a relative compressionbody.
\begin{lemma}\label{adapted means correctly embedded} If $\{K_i\}$ is an exhausting sequence of $M$ adapted to $S$ with $\partial M \subset K_1$ then $\{K_i \cap U\}$ and $\{K_i \cap V\}$ are correctly embedded exhausting sequences of $U$ and $V$ respectively. \end{lemma}
\begin{proof} Let $X$ denote either $U$ or $V$. Since $\{K_i\}$ is adapted to $S$, by definition each $K_i \cap X$ is correctly embedded in $X$. Thus, $\partial_+ (K_i \cap X) \subset \partial_+ (K_{i+1} \cap X)$. Furthermore, any closed component of $\partial_- (K_i \cap X)$ is a component of $\partial_- X$ which is contained in $\partial_- (K_{i+1} \cap X)$. Thus, each closed component of $\partial_- (K_i \cap X)$ is a component of $\partial_- (K_{i+1} \cap X)$. Hence, $K_i \cap X$ is correctly embedded in $K_{i+1} \cap X$. \end{proof}
\begin{definition} If $\{K_i\}$ is an exhausting sequence of $M$ adapted to $S$ with $\partial M \subset K_1$ and such that $\{K_i \cap U\}$ has the outer collar property we say that $\{K_i\}$ has \defn{the outer collar property with respect to $U$}. \end{definition}
\begin{corollary}\label{outer collar property 2} If $\{K_i\}$ is an exhausting sequence of $M$ adapted to $S$ with $\partial M \subset K_1$ then there is a subsequence which has the outer collar property with respect to $U$. \end{corollary}
\begin{proof} By Lemma \ref{adapted means correctly embedded}, $\{K_i \cap U\}$ is a correctly embedded exhausting sequence of $U$. By Lemma \ref{outer collar property}, there is an infinite subset $\mc{N}$ of $\mathbb N$ such that $\{K_i \cap U\}_{i \in \mc{N}}$ has the outer collar property. Hence, $\{K_i\}_{i \in \mc{N}}$ has the outer collar property with respect to $U$. \end{proof}
\subsection{Balanced Exhausting Sequences}\label{Balanced Exhausting Sequences} We've shown so far that if $M$ has an exhausting sequence adapted to $S$ we can find one which has the outer collar property. We've not yet addressed the question of the existence of an exhausting sequence adapted to $S$. We do that now. This construction is a variation of the construction given by Frohman and Meeks in \cite{FrMe97}. \newline
Recall that $M = U \cup_S V$ is an absolute Heegaard splitting of a non-compact 3-manifold with compact boundary. Let $A$ and $B$ be compact subcompressionbodies of $U$ and $V$ respectively with the property that $\partial_S A \subset \operatorname{int}(\partial_S B)$. Let $C$ be a regular neighborhood of $A \cup B$.
\begin{definition} A set $C$ constructed in such a manner will be called a \defn{balanced submanifold} of $M$ (with respect to $S$). An exhausting sequence $\{C_i\}$ of $M$ will be called a \defn{balanced exhausting sequence} for $M$ (with respect to $S$) if each $C_i = \eta(A_i \cup B_i)$ is a balanced submanifold and if, for all $i$, $\partial_S B_i \subset \operatorname{int}(\partial_S A_{i+1})$. \end{definition}
The next lemma guarantees that balanced submanifolds are adapted to the Heegaard surface. Consequently, we will say that such a set $C$ is a \defn{balanced submanifold of $M$ (adapted to $S$)}.
\begin{lemma}[{Frohman-Meeks,\cite[Proposition 2.2]{FrMe97}}]\label{balanced are adapted} If $C$ is a balanced submanifold of $M$ with respect to $S$ then $C$ is adapted to $S$. \end{lemma}
\begin{proof} We must show that $(U \cap C) \cup_{S \cap C} (V \cap C)$ is a relative Heegaard splitting of $C$. In other words, we must show that $U \cap C$ and $V \cap C$ are both relative compressionbodies with preferred surface $S \cap C$. \newline
Assume that $C$ is a regular neighborhood of $A \cup B$ where $A$ and $B$ are compact subcompressionbodies of $U$ and $V$ respectively and $\partial_S A \subset \partial_S B$. We have $C \cap V = \eta(B)$ so $C \cap V$ is a relative compressionbody with preferred surface $S \cap C$. To obtain $C \cap U$ we take a regular neighborhood of $\operatorname{cl}(\partial_S B - \partial_S A)$ in $U$ and glue it to $A$. An alternative way of performing the construction is as follows. \newline
Let $D$ be the collection of discs which make up the frontier of $A$. Take a regular neighborhood of $D$ and let $D'$ be the components of the frontier of the neighborhood which are not in $A$. Let $F = \operatorname{cl}(\partial_S B - (\partial_S A \cup \eta(D)))$. Take a regular neighborhood of $F$ in $U - A$. Consider $F$ to be $F \times \{1\}$. Since $D' \subset F$, this regular neighborhood contains $D' \times I$. See Figure \ref{compressionbody}. This revised neighborhood is $\partial_- C \times I$. We may then add one-handles so that one end of each one-handle is on a disc of $D' \times \{1\}$ and the other end is on the corresponding disc of $D$. It is clear that $S \cap C$ is the preferred surface of this compressionbody. \begin{figure}
\caption{Adding a regular neighborhood of $\partial_S B - \partial_S A$ to $A$ gives us a relative compressionbody.}
\label{compressionbody}
\end{figure} \end{proof}
To obtain a balanced exhausting sequence of $M$, start by taking exhausting sequences $\{A_i\}$ and $\{B_i\}$ of $U$ and $V$ by subcompressionbodies. Since each $A_i$ and each $B_i$ is compact, we may take subsequences of $\{A_i\}$ and $\{B_i\}$ so that, for all $i$, $\partial_S A_i \subset \partial_S B_i \subset \partial_S A_{i+1}$. Each of the inclusions should be into the interior of the succeeding surface. \newline
A component of the frontier of a balanced submanifold $C$ can be thought of as being a compact subsurface of $S$ with discs, each contained entirely in $U$ or $V$, glued onto the boundary components. In fact, since each component of the frontier of each balanced submanifold intersects $S$, neither $\partial_-(C \cap U)$ nor $\partial_- (C \cap V)$ has components which are closed surfaces not contained in $\partial M$. Thus, if we have a balanced exhausting sequence $\{C_i\}$ of $M$ adapted to $S$ with $\partial M \subset C_1$, it is adapted to $S$ in the sense of the definition given at the beginning of this section. By Corollary \ref{outer collar property 2}, we can take a subsequence of $\{C_i\}$ so that it has the outer collar property.
\begin{remark} Even though we have a balanced exhausting sequence $\{C_i\}$ of $M$ which is adapted to $S$ and has the outer collar property, there is no reason to suppose that it is perfectly adapted to $S$. The difficulty is in the fact that $\operatorname{cl}(C_{i+1} - C_i) \cap U$ may not be a relative compressionbody with preferred surface $S \cap \operatorname{cl}(C_{i+1} - C_i)$. \end{remark}
Let $\{C_i\}$ be a balanced exhausting sequence for $M$ adapted to $S$. Each $C_i$ is the neighborhood of $A_i \cup B_i$ where $A_i$ and $B_i$ are subcompressionbodies of $U$ and $V$ respectively. As such, the collection of discs $\ob{\Delta} = \cup_i(\operatorname{fr} A_i \cup \operatorname{fr} B_i)$ is a 2-sided disc family for $S$ in $M$. (The notation means the frontier of $A_i$ in $U$ and the frontier of $B_i$ in $V$.) We can perform a finite sequence of slide-moves (Section \ref{slide-moves and isotopies}) on $\ob{\Delta}$ to obtain a new 2-sided disc family $\Delta$. This sequence also gives us, for each $i$, subcompressionbodies $A'_i$ and $B'_i$ obtained from $A_i$ and $B_i$ respectively by slide-moves. The important observation is that since the slide-moves are done relative to $\cup_i(\partial \operatorname{fr} A_i \cup \partial \operatorname{fr} B_i)$ we still have, for each $i$, that $\partial_S A'_i \subset \partial_S B'_i \subset \partial_S A'_{i+1}$. Thus, $C'_i = \eta(A'_i \cup B'_i)$ is a balanced submanifold of $M$ adapted to $S$. And so, $\{C'_i\}$ is a balanced exhausting sequence of $M$ adapted to $S$. These observations provide the key to the proof of Theorem \ref{well-placed}.
\begin{definition} The balanced submanifold $C' = \eta(A' \cup B')$ is obtained from the balanced submanifold $C = \eta(A \cup B)$ \defn{by slide-moves} if there is a finite sequence of slide-moves by which $A'$ is obtained from $A$ and $B'$ is obtained from $B$. \end{definition}
\subsection{Comparing Absolute and Relative Heegaard Splittings} In the remainder of this section, we look at the relationship between absolute and relative Heegaard splittings of a compact manifold. These results will help us to translate facts about absolute Heegaard splittings to relative Heegaard splittings. Let $N$ denote a 3-manifold, compact or non-compact, with non-empty compact boundary. \newline
Suppose that $N = U \cup_S V$ is a relative Heegaard splitting. Let $\mc{B}$ denote the boundary components of $N$ which intersect $S$. Define $\hat{U}$ to be $U$ together with a regular neighborhood of $\mc{B}$. Define $\hat{V}$ to be the closure of the complement of $\hat{U}$ in $N$ and let $\hat{S} = \hat{U} \cap \hat{V}$.
\begin{lemma} $N = \hat{U} \cup_{\hat{S}} \hat{V}$ is an absolute Heegaard splitting of $N$. \end{lemma} \begin{proof} If $\mc{B} = \varnothing$ there is nothing to prove, so assume that $\mc{B}$ is non-empty. $U$ is a relative compressionbody and so is obtained from $F \times I$ by adding 1-handles to $F \times \{1\}$ and countably many 3-balls. $F$ is a compact surface with boundary. Let $B$ be a component of $\mc{B}$ and let $B_U = B \cap U$ and $B_V = B \cap V$. In the process of obtaining $\hat{U}$ we glue $B_V \times I$ to $B_U \times I$ along $\gamma \times I$ where $\gamma = \partial B_V = \partial B_U$. So $\hat{U}$ is $B \times I$ attached by 1-handles to the preferred surface of a compressionbody. Hence, performing this operation for each boundary component of $N$ which intersects $S$, leaves us with $\hat{U}$, an absolute compressionbody. On the other hand, to form $\hat{V}$ we have removed a collar neighborhood of each component of $\partial_- V$ which intersected $\partial_+ V$. Let $\mc{D}$ be a collaring set of discs for $V$. The discs $\mc{D}$ are also discs in $\hat{V}$. Let $\mc{E}$ be the collection of components of $\sigma(V;\mc{D})$ which contain $\partial_- V \cap \mc{B}$. Each of these components is a $\text{(surface with boundary)} \times I$. As such, each component is a handlebody. Removing a collar neighborhood of $\partial_- V \cap \mc{B}$ from these components does not change the homeomorphism type. The space $\hat{V}$ is formed by attaching these handlebodies to the preferred surface of the absolute compressionbody $\sigma(V;\mc{D}) - \mc{E}$ by 1-handles dual to the discs $\mc{D}$. Thus, $\hat{V}$ is an absolute compressionbody. \end{proof}
If we know that $V$ intersects $\partial N$ in discs, the relationship is stronger.
\begin{lemma}[The Marionette Lemma]\label{equivalence of absolute and relative HS} Suppose that $U_S \cup_S V_S$ and $U_T \cup_T V_T$ are two relative Heegaard splittings of a 3-manifold $N$. Suppose also that for each component of $\partial N$ which intersects $S$, $V_S$ and $V_T$ intersect that component in discs. If for each such boundary component of $N$, $V_S$ and $V_T$ intersect that boundary component in the same number of discs, then $S$ and $T$ are properly ambient isotopic if and only if $\hat{S}$ and $\hat{T}$ are properly ambient isotopic. \end{lemma}
We form $\hat{U_S}$ and $\hat{U_T}$ by including a regular neighborhood of $V_S \cap \partial N$ and $V_T \cap \partial N$ into $U_S$ and $U_T$. If we want to undo this operation we can remember the cocores of the discs $V_S \cap \partial N$ and $V_T \cap \partial N$. These give us finite collections of arcs in $\hat{U_S}$ and $\hat{U_T}$ joining $\partial N$ to $\hat{S}$ and $\hat{T}$ respectively. To prove the lemma, we need to understand how these arcs can be isotoped within the compressionbodies $\hat{U_S}$ and $\hat{U_T}$. We will show that if $\hat{S}$ and $\hat{T}$ are isotopic, we can isotope $\hat{S}$ and $\hat{T}$ to coincide and then isotope the arcs to coincide.
\begin{definition} Let $\psi$ be a finite collection of arcs in an absolute compressionbody $H$ with at least one endpoint of each arc on $\partial_+ H$. If $H$ is a 3-ball then $\psi$ is \defn{standard} if it is isotopic to a collection of arcs which lie in $\partial_+ H = \partial H$. If $H = F \times I$ where $F$ is a closed connected surface, then $\psi$ is \defn{standard} if there is an isotopy of $\psi$ so that each spanning arc is vertical in the product structure and each non-spanning arc is contained in $F \times \{1\} = \partial_+ H$. For a generic absolute compressionbody, $\psi$ is \defn{standard} if there is a defining collection of discs $\Delta$ for $H$ which is disjoint from $\psi$ and such that $\psi$ is standard in each component of $\sigma(H;\Delta)$. \end{definition}
We need the following two results which are slightly rephrased from \cite{ScTh93}. We are allowing our compressionbody to be non-compact, but since the number of arcs is finite the results are still true, as we may restrict attention to a compact subcompressionbody.
\begin{lemma}[{Scharlemann-Thompson, \cite[Lemma 6.4]{ScTh93}}]\label{standard collections} If $\sigma$ and $\tau$ are standard collections of arcs in an absolute compressionbody $H$, then for any defining collection of discs $\Delta$ for $H$ there is an isotopy of $\sigma$ and an isotopy of $\tau$ so that $\sigma$ and $\tau$ are standard in $\sigma(H;\Delta)$. \end{lemma}
\begin{lemma}[{Scharlemann-Thompson, \cite[Corollary 6.7]{ScTh93}}]\label{inducing standard collections} Let $\psi$ be a collection of arcs properly embedded in a compressionbody $H$ such that for every subcollection $\psi' \subset \psi$, the complement of $\psi'$ is a compressionbody. Then $\psi$ is standard. \end{lemma}
\begin{proof}[Proof of the Marionette Lemma] If $S$ and $T$ are ambient isotopic, it is clear that $\hat{S}$ and $\hat{T}$ are. So suppose that $\hat{S}$ and $\hat{T}$ are ambient isotopic. \newline
As mentioned earlier, we can recover $S$ and $T$ from $\hat{S}$ and $\hat{T}$ by remembering the cocores of the 2-handles that were added to $U_S$ and $U_T$. Let $\sigma$ be the collection of arcs coming from $V_S \cap \partial N$ and let $\tau$ be the collection of arcs coming from $V_T \cap \partial N$. \newline
Isotope $\hat{S}$ onto $\hat{T}$. Now we have $\hat{U_S} = \hat{U_T}$. This isotopy takes $\sigma$ to some collection of arcs which we continue to call $\sigma$. If we can show that there is an isotopy of $\sigma$ onto $\tau$ which keeps $\hat{S}$ mapped onto $\hat{T}$ for all time, we will be done. The isotopy is allowed to move the endpoints of the arcs, but it must keep them on $\partial N \cup \hat{S}$. \newline
We claim, first, that for each subcollection $\sigma'$ of arcs in $\sigma$ the complement of $\sigma'$ in $\hat{U_S} = \hat{U_T}$ is a compressionbody. Let $\sigma'$ be a subcollection of arcs from $\sigma$. Let $s'$ denote the arcs of $\sigma - \sigma'$. Let $D_{s'}$ be the 2-handles of $\eta(V_S \cap \partial N)$ which have cocores $s'$. Consider the relative compressionbody $U_S$. $U_S$ is formed by taking a surface $F$ with boundary, forming $F \times I$ and adding 1-handles to $F \times \{1\}$. The surface $F$ has one boundary component for each component of $S \cap \partial N$. Let $\gamma$ denote the boundary components of $F \times \{0\}$ which correspond to $s'$. Adding the 2-handles $D_{s'}$ to $U_S$ is achieved by attaching copies of $D^2 \times I$ to $F \times I$ along $\gamma \times I$. It's clear that the result is still a compressionbody. But this is exactly $\operatorname{cl}(\hat{U_S} - \eta(\sigma'))$. Thus, the complement of every subcollection of $\sigma$ in $\hat{U_S}$ is a compressionbody. The same result holds for $\tau$. \newline
By Lemma \ref{inducing standard collections}, both $\sigma$ and $\tau$ are standard. By Lemma \ref{standard collections}, there is a proper isotopy of $\sigma$ and a proper isotopy of $\tau$ so that both $\sigma$ and $\tau$ are disjoint from a defining disc set $\Delta$ for $\hat{U_S} = \hat{U_T}$ and both are standard in $\sigma(\hat{U_S};\Delta)$. Since each arc of $\sigma \cup \tau$ has an endpoint on a component of $\partial N$, we may assume that the isotopy has made each arc of $\sigma$ and each arc of $\tau$ vertical in the product structure of $(\partial N \times I) \cap U_S$. Since for each component of $\partial N$ the arcs of $\sigma$ and $\tau$ with an endpoint on that component are in one-to-one correspondence, there is the required isotopy taking $\sigma$ onto $\tau$. \end{proof}
The following is a version of Haken's Lemma for relative Heegaard splittings. It is, perhaps, well-known. It appears in similar versions as Lemma 5.2 in \cite{BaScSe05} and as a remark following Definition 2.1 in \cite{FrMe97}.
\begin{lemma}[Haken's Lemma]\label{Haken} Suppose that $U \cup_S V$ is a relative Heegaard splitting of $N$ with the property that each component of $V \cap \partial N$ is a disc. Then if $\partial N$ is compressible in $N$ there is a compressing disc for $\partial N$ whose intersection with $S$ is a single simple closed curve. Furthermore, boundary reducing $N$ along this disc leaves us with a relative Heegaard splitting $\operatorname{cl}(U - \eta(D)) \cup_{\operatorname{cl}(S - \eta(D))} \operatorname{cl}(V - \eta(D))$ of the resulting manifold. \end{lemma}
\begin{proof} Let $\hat{U} \cup_{\hat S} \hat{V}$ be the absolute Heegaard splitting for $N$ obtained by including $\eta(V \cap B)$ into $U$ for each component $B \subset \partial N$ which intersects $S$. Since $\partial N$ is compressible, by Casson and Gordon's version of Haken's Lemma \cite{CaGo87}, there is a compressing disc $D$ for $\partial N$ which intersects $\hat{S}$ in a single simple closed curve. \newline
To obtain $U \cup_S V$ from $\hat{U} \cup_{\hat{S}} \hat{V}$ we include into $\hat{V}$ the neighborhood of a certain collection of arcs $\sigma$. The arcs $\sigma$ are the cocores of the 2-handles which we added to $U$ in order to obtain $\hat{U}$. \newline
If $\partial D$ is on a component of $\partial N$ contained in $\hat{V}$, then by Lemma \ref{standard collections} we may isotope $\sigma$ to be disjoint from the disc $D \cap \hat{U}$. Thus, there is a compressing disc for $\partial N$ which intersects $S$ in a single simple closed curve. \newline
If $\partial D$ is on a component of $\partial N$ contained in $\hat{U}$ then $D \cap \hat{U}$ is an annulus. By performing handle-slides, we may obtain a defining collection of discs $\Delta$ for $\hat{U}$ which are disjoint from that annulus. We may assume that the annulus $D \cap \hat{U}$ is vertical in the product structure of the component of $\sigma(\hat{U};\Delta)$ containing it. By Lemma \ref{standard collections}, there is an isotopy of the arcs $\sigma$ so that $\sigma$ is disjoint from $\Delta$ and is vertical in the product structure of the components of $\sigma(\hat{U};\Delta)$ containing it. It is then easy to isotope $\sigma$ to be disjoint from the annulus $D \cap \hat{U}$. Hence, when we remove an open regular neighborhood of $\sigma$ from $\hat{U}$ to obtain $U$ we have the disc $D$ intersecting $S$ in a single simple closed curve. Thus $S$ divides $D$ into a disc and an annulus. \newline
Boundary-reducing $N$ along $D$ leaves us with a 3-manifold $\ob{N} = \sigma(N;D)$. We have boundary-reduced the relative compressionbody ($U$ or $V$) containing the disc part of $D$ along a disc with boundary in the preferred surface. Thus, by Lemma \ref{boundary reducing gives compressionbody} it is still a relative compressionbody. In the other compressionbody $X$ (equal to $V$ or $U$), there is a defining set of discs $\Delta$ disjoint from $D$ and the annulus $D \cap X$ is vertical in the product structure of the component of $\sigma(X;\Delta)$ containing it. That component is homeomorphic to $F \times I$ where $F$ is a compact surface, possibly with boundary. Removing the open neighborhood of a vertical annulus in such a component leaves us with a manifold homeomorphic to $G \times I$ where $G$ is a compact surface obtained from $F$ by removing an open annulus. Thus, $X - \operatorname{int}(\eta(D \cap X))$ is still a relative compressionbody with preferred surface $S - \operatorname{int}(\eta(D))$. This implies that $\ob{N} = \operatorname{cl}(U - \eta(D)) \cup_{\operatorname{cl}(S - \eta(D))} \operatorname{cl}(V - \eta(D))$ is a relative Heegaard splitting. \end{proof}
\section{Heegaard Splittings of Eventually End-Irreducible 3-manifolds}\label{Exh. Seq.}
\subsection{Introduction}\label{Types of Exh. Seq.} Recall that a non-compact 3-manifold $M$ is \defn{end-irreducible rel $C$} for a compact set $C \subset M$ if there is an exhausting sequence $\{K_i\}_{\mathbb N}$ for $M$ such that $C \subset K_1$ and, for all $i$, $\operatorname{fr} K_i$ is incompressible in $M - C$. Inessential spheres count as incompressible surfaces, so, for example, ${\mathbb R}^3$ is end-irreducible rel $\varnothing$. Other examples of eventually end-irreducible 3-manifolds are deleted boundary 3-manifolds. A deleted boundary 3-manifold $M$ contains a compact set $C$ so that $\operatorname{cl}(M - C)$ is homeomorphic to $F \times {\mathbb R}_+$ for some closed surface $F$. \newline
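To illustrate the latter class (a minimal example, not drawn from the original text): for a closed surface $F$, the 3-manifold $F \times [0,1)$ contains the compact set $F \times [0,\frac{1}{2}]$, the closure of whose complement is
$$F \times [\frac{1}{2},1) \cong F \times {\mathbb R}_+,$$
so it is a deleted boundary 3-manifold; moreover, the exhausting sequence $F \times [0,\frac{i+1}{i+2}]$, $i \in \mathbb N$, has frontiers $F \times \{\frac{i+1}{i+2}\}$, each incompressible in the complement of $F \times [0,\frac{1}{2}]$, so this manifold is end-irreducible rel $F \times [0,\frac{1}{2}]$. \newline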
For the remainder of this section, assume that $M$ is an orientable non-compact 3-manifold which is end-irreducible rel $C$ and that $\partial M \subset C$. Let $M = U \cup_S V$ be an absolute Heegaard splitting of $M$. \newline
Since we will be dealing with a variety of exhausting sequences for $M$ we collect the following definitions here:
\begin{definition} Let $\{K_i\}$ be an exhausting sequence for $M$ with $C \subset K_1$. We say that:
\begin{itemize} \item $\{K_i\}$ is \defn{frontier-incompressible rel $C$} if, for each $i$, $\operatorname{fr} K_i$ is incompressible in $M - C$. \newline
\item $\{K_i\}$ is \defn{adapted} to $S$ if, for all $i$, $(U \cap K_i) \cup_{(S \cap K_i)} (V \cap K_i)$ is a relative Heegaard splitting of $K_i$ and if $(X \cap K_i)$ is correctly embedded in $X$ for $X = U,V$. If $\{K_i\}$ is adapted to $S$ there is a subsequence which has the \defn{outer collar property} (Lemma \ref{outer collar property}). \newline
\item $\{K_i\}$ is \defn{perfectly adapted} to $S$ if it is adapted to $S$ and, in addition, each $\operatorname{cl}(K_{i+1} - K_i)$ is adapted to $S$. That is, each $\operatorname{cl}(K_{i+1} - K_i)$ inherits a relative Heegaard splitting with Heegaard surface $S \cap \operatorname{cl}(K_{i+1} - K_i)$. \newline
\item $\{K_i = \eta(A_i \cup B_i)\}$ is a \defn{balanced exhausting sequence} for $M$ (adapted to $S$) if each $K_i$ is a regular neighborhood of $A_i \cup B_i$ where $A_i$ and $B_i$ are subcompressionbodies of $U$ and $V$ respectively with $\partial_S A_i \subset \partial_S B_i \subset \partial_S A_{i+1}$. \newline
\item $\{K_i\}$ is \defn{well-placed on $S$ rel $C$} if it is a frontier-incompressible (rel $C$) exhausting sequence for $M$ which is adapted to $S$ and, in addition, has the following properties:
\begin{enumerate} \item[(WP1)] For each $i$, $V$ intersects each component of $\operatorname{fr} K_i$ in a single disc. \item[(WP2)] For each $i$, $\operatorname{fr} K_i \cap U$ is incompressible in $U$. \item[(WP3)] $\{K_i\}$ has the outer collar property with respect to $U$. \item[(WP4)] For each $i$, no component of $\operatorname{cl}(M - K_i)$ is compact. \end{enumerate} \end{itemize} \end{definition}
The main result of this section is:
\begin{theorem}\label{well-placed} Suppose that $M$ is a non-compact orientable 3-manifold with compact boundary which is end-irreducible (rel $C$) where $C$ is a compact set containing $\partial M$. Suppose also that $U \cup_S V$ is an absolute Heegaard splitting of $M$. Then there is an exhausting sequence of $M$ which is well-placed on $S$ rel $C$. \end{theorem}
The most difficult part of the proof is in showing that there is a frontier-incompressible (rel $C$) exhausting sequence which is adapted to $S$.
\subsection{Balanced Sequences and the Weakly Reducible Theorem} We begin by showing that there is a balanced exhausting sequence of $M$ adapted to $S$ so that the compressing discs for the frontiers of the exhausting elements are in a ``good position" relative to the Heegaard surface.
\begin{proposition}\label{good discs exist} There is a balanced exhausting sequence $\{C_i = \eta(A'_i \cup B'_i)\}$ for $M$ adapted to $S$ and a 2-sided disc family $\Psi$ for $S$ which contains $\cup_i (\operatorname{fr} A'_i \cup \operatorname{fr} B'_i)$ such that, for each $i$, $\sigma(\operatorname{cl}(\partial_S B'_i - \partial_S A'_i);\Psi)$ is incompressible in $M - C$. \end{proposition}
\begin{remark}In \cite{CaGo87} Casson and Gordon prove that if a Heegaard splitting of a compact 3-manifold is weakly reducible then there is a 2-sided disc family for the Heegaard surface such that when the surface is compressed along that family, the result is a collection of incompressible surfaces (possibly inessential spheres)\footnote{This is not how the result is usually stated, but see the proof given in \cite{Sc02}.}. Since the frontiers of balanced submanifolds consist of surfaces which are obtained from the Heegaard surface by compressions along disjoint discs, it is natural to try to harness the power of the Casson and Gordon theorem.\newline
It is unclear, however, if the Casson and Gordon theorem can be extended to non-compact 3-manifolds in a way that is directly useful in this situation. Nonetheless, the proof of our theorem is based on the outline of a proof of Casson and Gordon's theorem given in \cite{Sc02}. We will also need to use Casson and Gordon's version of Haken's Lemma. \end{remark}
The proof is rather long, so we begin with an outline:
\begin{enumerate} \item\label{step 1} Take a balanced exhausting sequence $\{K_i = \eta(A_i \cup B_i)\}$. For each $K_n$ show how to replace $K_{n-2}, K_{n-1},$ and $K_n$ with ``better" balanced submanifolds $K^L_{k} = \eta(A^L_k \cup B^L_k)$ for $k = n-2, n-1, n$. Each of these better balanced submanifolds is still contained in $K_{n+1}$ and still contains $K_{n-3}$. Let $C_n = K^L_n$. The new manifolds will be obtained from the old ones by a finite sequence of slide-moves $L$. The process of obtaining $C_n$ will also leave us with a 2-sided disc family $\Delta$ for $S \cap K_{n+1}$. \newline
\item\label{step 2} Suppose that there is a compressing disc $D$ for $\sigma(\operatorname{cl}(\partial_S B^L_n - \partial_S A^L_n); \Delta)$. \newline
\item \label{step 3}Show that we can assume that $D$ is contained in $K_{n+1} - K^L_{n-2}$. This step is where we use the eventual end-irreducibility of $M$. \newline
\item\label{step 4} Replace $D$ by a compressing disc of $\sigma(S \cap K_{n+1};\Delta)$ which intersects $\sigma(S \cap K_{n+1};\Delta)$ only on $\partial D$. We continue calling the disc $D$.\newline
\item\label{step 5} Use Haken's Lemma to replace $D$ by a disc which intersects a certain Heegaard surface exactly once and is contained in $K_{n+1} - K_{n-3}$. We continue calling the disc $D$. \newline
\item\label{step 6} Follow the arguments of Casson and Gordon's Weakly Reducible theorem to obtain from $C_n$ by slide-moves a balanced submanifold which is even better than $C_n$. This will contradict the construction of $C_n$. \newline
\item\label{step 7} Use this replacement technique on each element of a subsequence of $\{K_i\}$ to obtain the desired $\{C_i\}$. Construct the 2-sided disc family $\Psi$ from the 2-sided disc families $\Delta$ which were created in each replacement operation. \end{enumerate}
\begin{proof}[Proof of Proposition \ref{good discs exist}]
Let $\{K_i = \eta(A_i \cup B_i)\}_{i \geq 0}$ be a balanced exhausting sequence for $M$ adapted to $S$ and let $\{P_i\}$ be a frontier incompressible exhausting sequence (rel $C$). Choose the exhausting sequences so that $C \subset K_0 \subset P_{i-1} \subset K_i \subset P_i$ for all $i \geq 1$. Each of the inclusions should be into the interior of the succeeding submanifold. Figure \ref{frontier and balanced sequences} is a schematic of the exhausting sequences. The frontiers of the submanifolds in $\{P_i\}$ may have a very complicated intersection with the Heegaard surface. The frontier of each submanifold in the balanced exhausting sequence consists of discs and compact surfaces parallel to subsurfaces of $S$.\newline
\begin{figure}
\caption{A schematic of the exhausting sequences}
\label{frontier and balanced sequences}
\end{figure}
We will show that given a $q \in \mathbb N$ and $n \geq q + 3$, $K_n$ can be replaced by a compact submanifold $C_n = \eta(A'_n \cup B'_n)$ with the following properties: \newline
\begin{enumerate} \item $C_n$ is obtained from $K_n$ by slide moves.\newline
\item There is a 2-sided disc family $\Delta$ for $S$ in $K_{n+1}$ containing $\operatorname{fr} A'_n \cup \operatorname{fr} B'_n$ such that $\sigma(\operatorname{cl}(\partial_S B'_n - \partial_S A'_n); \Delta)$ is incompressible in $M - C$. \newline
\item We still have $K_q \subset C_n$ and the discs $\operatorname{fr} A_q \cup \operatorname{fr} B_q$ are contained in $\Delta$. \end{enumerate}
Choose some $n \geq q+3$. \newline
Let $\ob{\Delta}= \bigcup_{q \leq i \leq n+1} (\operatorname{fr} A_i \cup \operatorname{fr} B_i)$. Recall from Section \ref{slide-moves and isotopies} that a slide-move of this 2-sided disc family consists of either adding a compressing disc for $S$ to $\ob{\Delta}$ which is disjoint from all other discs of $\ob{\Delta}$ or performing a 2-handle slide of one disc of $\ob{\Delta}$ over another disc of $\ob{\Delta}$. The arc over which a 2-handle slide is performed must have its interior disjoint from all discs of $\ob{\Delta}$.\newline
Recall from just after Lemma \ref{balanced are adapted} that each slide-move performed on $\ob{\Delta}$ leaves us with new balanced submanifolds obtained from the submanifolds $\{K_i\}_{i \leq n+1}$ by slide-moves. After performing a slide-move, we still have $K_i \subset K_{i+1}$ for all $i$, since all the slides are performed relative to $\ob{\Delta}$. \newline
Let $\mc{L}$ denote the set of all finite sequences of slide-moves of $\ob{\Delta}$ subject to the following restrictions:
\begin{enumerate} \item Every time a disc is added to $\ob{\Delta}$, the disc has boundary lying on $S \cap K_{n+1}$.\newline
\item No disc of $\operatorname{fr} K_{n+1} \cup \operatorname{fr} K_{q}$ is ever slid over another disc. \end{enumerate}
These restrictions mean that performing a sequence of slide-moves in $\mc{L}$ preserves the ordering of submanifolds $K_i$ for $q \leq i \leq n+1$. Furthermore, the manifolds $K_{n+1}$ and $K_q$ are left unchanged.
\subsubsection*{Step \ref{step 1}} Each sequence $L \in \mc{L}$ leaves us with new balanced submanifolds $K^L_i$ for $q < i < n+1$. The submanifolds $K_q$ and $K_{n+1}$ are left unchanged. For ease of notation, let $K^L_q = K_q$ and $K^L_{n+1} = K_{n+1}$. Let $A^L_i$ be the subcompressionbody of $U$ obtained from $A_i$ by the slide-moves $L$ and let $B^L_i$ be the subcompressionbody of $V$ obtained from $B_i$ by the slide-moves $L$ so that $K^L_i = \eta(A^L_i \cup B^L_i)$. \newline
Recall from \cite{CaGo87} that the complexity of a closed, connected surface $F$ is defined to be $1 - \chi(F)$, unless $F$ is a two-sphere, in which case, it is 0. The complexity of a disconnected closed surface is the sum of the complexities of the components. \newline
Performing $L$ on $\ob{\Delta}$ leaves us with a disc family $\ob{\Delta}_L$ which contains the discs $\operatorname{fr} A^L_i \cup \operatorname{fr} B^L_i$ for $q \leq i \leq n+1$. Define the complexity of $\ob{\Delta}_L$ to be the complexity of $\sigma(S \cap K_{n+1};\ob{\Delta}_L)$. Since complexity is invariant under handle-slides (Lemma \ref{isotopies of handle slides}) and cannot increase when the surface is compressed along an additional disc, the complexity of a 2-sided disc family cannot increase under slide-moves.\newline
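As a sanity check on this count (an illustrative computation, not part of the original argument; write $c(F)$ for the complexity of a closed surface $F$ as defined above), note that for a closed orientable surface $F_g$ of genus $g \geq 1$
$$c(F_g) = 1 - \chi(F_g) = 1 - (2 - 2g) = 2g - 1, \qquad c(S^2) = 0.$$
If a slide-move adds a disc to the family, the compressed surface changes by one further compression. Compressing a component $F$ along a disc whose boundary is non-separating on $F$ yields a component $F'$ with $\chi(F') = \chi(F) + 2$, so the complexity drops by $2$ (by $1$ if $F'$ is a sphere). Compressing along a disc whose boundary separates $F$ yields components $F_1$ and $F_2$ with $\chi(F_1) + \chi(F_2) = \chi(F) + 2$, so that when neither piece is a sphere
$$c(F_1) + c(F_2) = \left(1 - \chi(F_1)\right) + \left(1 - \chi(F_2)\right) = c(F) - 1;$$
if a sphere appears its complexity is $0$ rather than $-1$, and one checks directly that the total still does not exceed $c(F)$. In either case the complexity does not increase, while a handle-slide leaves the compressed surface, and hence the complexity, unchanged. \newline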
Choose an $L \in \mc{L}$ so that $\ob{\Delta}_L$ has minimal complexity. Let $\Delta = \ob{\Delta}_{L}$ and $C_n = K^L_n$. Let $\Delta_1$ be those discs of $\Delta$ which lie in $U$ and $\Delta_2$ those discs which lie in $V$. \newline
Recall that if $R \subset S$ is a compact subsurface of $S$ with $\partial R \subset \partial \Delta$, the notation $\sigma(R;\Delta)$ signifies the surface obtained from $R$ by compressing along those discs of $\Delta$ which have boundary on $R$. Let $R_i = \operatorname{cl}(\partial_S B_i - \partial_S A_i)$ and let $R'_i = \operatorname{cl}(\partial_S B^L_i - \partial_S A^L_i)$ for $q \leq i \leq n+1$. Note that $R'_i$ is obtained from $R_i$ by the sequence of slide-moves $L$. We claim that $\sigma(R'_n; \Delta)$ is incompressible in $M - C$. The surface $R'_i$ is a subsurface of $S$ which is parallel in $K^L_i$ to $\operatorname{cl}(\operatorname{fr} K^L_i - (\operatorname{fr} A^L_i \cup \operatorname{fr} B^L_i))$.
\begin{figure}
\caption{A schematic of $\Delta_1$ and $\Delta_2$}
\end{figure}
\subsubsection*{Step \ref{step 2}} Let $S_k = \sigma(S \cap K_{n+1};\Delta_k)$ for $k = 1, 2$. Let $W_1$ be $U \cap K_{n+1}$ with the 2-handles coming from $\Delta_1$ removed and the 2-handles coming from $\Delta_2$ attached. Let $W_2$ be the closure of the complement of $W_1$ in $K_{n+1}$. Let $\ob{S} = \sigma(S \cap K_{n+1}; \Delta)$. We are trying to show that $\sigma(R'_n;\Delta)$ is incompressible in $M - C$. Suppose, to the contrary, that a component $\ob{B}$ of $\sigma(R'_n; \Delta)$ is compressible in $M - C$.
\subsubsection*{Step \ref{step 3}} Our next task is to show that there is a compressing disc for $\ob{B}$ which lies entirely in $K_{n+1} - K^L_{n-2}$. Recall that $\{P_i\}$ is the frontier-incompressible (rel $C$) exhausting sequence for $M$ which is interlaced with $\{K_i\}$. Let $\Sigma = \operatorname{fr} P_{n-1} \cup \operatorname{fr} P_n$. The key technique is an application of Proposition \ref{isotopies of slide-moves} and the incompressibility in $M - C$ of $\Sigma$. \newline
By Proposition \ref{isotopies of slide-moves}, there is a proper ambient isotopy $f$ taking $\sigma(S;\Delta)$ to the surface obtained from $\sigma(S;\ob{\Delta})$ by compressing along a certain collection of discs. In particular, there are disjoint collections of disjoint ordered discs $\mc{E}$ and $\mc{G}$ so that the discs of $\mc{E}$ have boundary on $\sigma(R_n;\ob{\Delta})$ and the discs of $\mc{G}$ have boundary on $\sigma(R_{n-1};\ob{\Delta})$ and the isotopy $f$ takes $\sigma(R'_n;\Delta)$ to $\sigma(R_n;\ob{\Delta} \cup \mc{E})$ and $\sigma(R'_{n-1};\Delta)$ to $\sigma(R_{n-1};\ob{\Delta} \cup \mc{G})$. The notation $\sigma(R_n;\ob{\Delta}\cup \mc{E})$ means the surface obtained from $\sigma(R_n;\ob{\Delta})$ by compressing along the discs of $\mc{E}$ in the order given. Similarly, we write $\sigma(R_{n-1};\ob{\Delta} \cup \mc{G})$ for the surface obtained from $\sigma(R_{n-1};\ob{\Delta})$ by compressing along $\mc{G}$. The surfaces $\sigma(R_n;\ob{\Delta} \cup \mc{E})$ and $\sigma(R_{n-1};\ob{\Delta}\cup \mc{G})$ are disjoint. \newline
The discs $\mc{E}$ have boundary on $\sigma(R_n;\ob{\Delta})\subset (P_n - P_{n-1})$. As $\Sigma$ is incompressible in $M - C$ the intersections of $\mc{E}$ with $\Sigma$ are inessential on $\Sigma$. Similarly, the discs of $\mc{G}$ have boundary on $\sigma(R_{n-1};\ob{\Delta}) \subset P_{n-1}$ and so $\mc{G}$ intersects $\Sigma$ in loops which are inessential on $\Sigma$. The surface $\ob{B}$ is taken by the isotopy $f$ to a surface $\ob{K}$ which is a component of $\sigma(R_n;\ob{\Delta}\cup \mc{E})$. Since $\ob{B}$ is compressible in $M - C$ so is $\ob{K}$. \newline
Since the intersections of $\ob{K}$ with the incompressible (in $M - C$) $\Sigma$ come from the intersections of $\mc{E}$ with $\ob{K}$, the loops $\ob{K} \cap \Sigma$ are inessential on both surfaces. There is, therefore, a surface $K' \subset (P_n - P_{n-1})$ which is obtained from $\ob{K}$ by cutting and pasting along the intersections $\ob{K} \cap \Sigma$. (Start with innermost discs of intersection on $\Sigma$ and replace the corresponding discs of $\ob{K}$ with copies of the discs on $\Sigma$ which have been pushed slightly into $(P_n - P_{n-1})$.) As $\ob{K}$ is compressible in $M - C$, $K'$ is also compressible in $M - C$. Since $\Sigma$ is incompressible in $M - C$ there is a compressing disc $F$ for $K'$ which is contained in $P_n - P_{n-1}$. Our goal is to use $F$ to construct a compressing disc for $\ob{K}$ which is disjoint from $\sigma(R_{n-1};\ob{\Delta} \cup \mc{G})$. \newline
Since $\partial \mc{E}$ consists of inessential loops on $K'$ we may assume that $\partial F \cap \partial \mc{E} = \varnothing$. The disc $F$ may intersect the discs $\mc{E}$. It may also intersect the discs of $\mc{G}$ in simple closed curves. Since each loop of $F \cap \mc{E}$ is inessential on both $F$ and $\mc{E}$ we may, by cutting and pasting $F$ along the intersections, obtain a compressing disc $F'$ for $\ob{K}$. Since both $K'$ and $\mc{E}$ were disjoint from $\sigma(R_{n-1};\ob{\Delta} \cup \mc{G})$, any intersections of the disc $F'$ with the surface $\sigma(R_{n-1};\ob{\Delta} \cup \mc{G})$ occur because $F'$ intersects $\mc{G}$ in simple closed curves. These intersections are inessential on both $F'$ and on $\sigma(R_{n-1};\ob{\Delta}\cup \mc{G})$. We may cut and paste $F'$ along these intersections to produce a compressing disc $E$ for $\ob{K}$ which is disjoint from $\sigma(R_{n-1};\ob{\Delta}\cup \mc{G})$. The disc $E$ may intersect $\Sigma$, but that is not of concern. \newline
Reversing the isotopy $f$ takes $E$ to a compressing disc $D$ for $\ob{B}$. $D$ is contained in $K_{n+1}$. The disc $D$ is disjoint from $\sigma(R'_{n-1};\Delta)$ since $E$ was disjoint from $\sigma(R_{n-1};\ob{\Delta} \cup \mc{G})$. \newline
Recall that we are trying to construct a compressing disc for $\ob{B}$ which is contained in $K_{n+1} - K^L_{n-2}$. Each disc of $\Delta$ which had boundary on $R'_{n-1}$ was disjoint from $R'_{n-2}$ since no disc of $\Delta$ intersects $S$ except at its boundary and the discs of $\Delta$ are pairwise disjoint. Thus $K^L_{n-2}$ is contained inside some component of $\sigma(K^L_{n-1};\Delta)$. But since $D$ is disjoint from $\sigma(R'_{n-1};\Delta)$ which is parallel to $(\operatorname{fr} \sigma(K^L_{n-1};\Delta))$, $D$ can be isotoped so as to not intersect $K^L_{n-2}$. Hence, there is a compressing disc $D$ for $\ob{B}$ which is contained in $K_{n+1} - K^L_{n-2}$.
\subsubsection*{Step \ref{step 4}} The compressing disc $D$ may intersect the surface $\ob{S} \cap (K_{n+1} - K^L_{n-2})$. By revising the disc $D$ we may assume that no loops of $D \cap \ob{S}$ are inessential on $\ob{S}$. Replace $D$ by an innermost disc, which we will continue to call $D$, that intersects $\ob{S}$ only on $\partial D$. By our construction $D$ is now a compressing disc for $\ob{S}$. The boundary of $D$ may no longer be on $\ob{B}$. $D$ lies in either $W_1$ or $W_2$ and is completely contained in $(K_{n+1} - K^L_{n-2})$. Recall that $W_1 = ((U \cap K_{n+1}) - \eta(\Delta_1)) \cup \eta(\Delta_2)$ and that $W_2 = ((V \cap K_{n+1}) - \eta(\Delta_2)) \cup \eta(\Delta_1)$.
\subsubsection*{Step \ref{step 5}}Our goal is to use the disc $D$ to construct a sequence $L' \in \mc{L}$ such that $\ob{\Delta}_{L'}$ has lower complexity than $\Delta = \ob{\Delta}_L$. This will contradict our original choice of $L$. As mentioned in the remark preceding this proof, the strategy is to follow the outline of the proof of Casson and Gordon's Weakly Reducible theorem given in \cite{Sc02}. We will view $S_1$ as a Heegaard surface for $W_1$ or $S_2$ as a Heegaard surface for $W_2$ depending on which side the disc $D$ lies. In the Casson and Gordon theorem the two cases had identical arguments. Here, however, the relationship of $W_1$ and $W_2$ to $K_{n+1} - K_q$ is not symmetric due to the asymmetry in the construction of balanced submanifolds. We will briefly need to consider the two cases separately. We will eventually be able to combine arguments.
\begin{remark} Some care is needed when we consider $S_1$ or $S_2$ as a Heegaard surface, as $\ob{S}$ may contain spheres. This means that the compressionbodies we are considering may not be irreducible. This does not really affect the proofs as the only time we would want to use the irreducibility of a compressionbody is when we isotope (in a compressionbody) one disc past another which shares its boundary. If $\ob{S}$ contains spherical components which get in the way of the isotopy, we may first perform a surgery on the disc we want to isotope so that the two discs with common boundary bound a 3-ball and then perform the isotopy. We will refer to this process as \defn{revising and isotoping} the disc which, if $\ob{S}$ were irreducible, we would have merely isotoped. \end{remark}
Suppose, first, that $D$ lies in $W_1$. By pushing $\ob{S}$ slightly into $W_2$ we can view $S_1$ as a Heegaard surface for the disconnected 3-manifold $W_1$. $S_1$ divides $W_1$ into (disconnected) absolute compressionbodies $U'$ and $V'$. Let $V'$ be the absolute compressionbody containing $\ob{S}$. See Figure \ref{GoodDiscsExist3}. The disc $D$ is a compressing disc for $\partial W_1$. \newline
\begin{figure}
\caption{$S_1$ as a Heegaard surface for $W_1$.}
\label{GoodDiscsExist3}
\end{figure}
We can apply Haken's Lemma to obtain a compressing disc $D'$ for $\ob{S}$ in $W_1$ which intersects $S_1$ in a single loop and satisfies $\partial D' = \partial D$. $W_1 \subset K_{n+1}$ by the definition of $W_1$, so $D'$ does not intersect $\operatorname{fr} K_{n+1}$. The discs of $\operatorname{fr} A^L_{n-2}$ are in $\Delta_1$ and separate $U$. Thus no component of $W_1$ intersects both $\operatorname{fr} K^L_{n-2}$ and $\operatorname{fr} K^L_{n-3}$. Hence, since $\partial D$ is in $W_1 \cap (K_{n+1} - K^L_{n-2})$ the disc $D'$ is in $K_{n+1} - K^L_{n - 3}$. Summarizing: $D'$ is a compressing disc for $\ob{S}$ which intersects $S_1$ in a single loop and is contained in $K_{n+1} - K^L_{n-3}$. \newline
We now turn to the case when $D \subset W_2$. Push $\ob{S}$ slightly into $W_1$ and view $S_2$ as a Heegaard surface for the 3-manifold $W_2$. The disc $D$ is a compressing disc for $\partial W_2$. Let $U'$ and $V'$ be the submanifolds of $W_2$ into which $S_2$ divides $W_2$. $U'$ is the submanifold which has $\ob{S}$ as its boundary. See Figure \ref{GoodDiscsExist4}. \newline
\begin{figure}
\caption{$S_2$ as a Heegaard surface for $W_2$.}
\label{GoodDiscsExist4}
\end{figure}
The discs of $\operatorname{fr} B^L_{n-2}$ are contained in $\Delta_2$ and separate $V$. Thus no component of $W_2$ intersects both $(K_{n+1} - K^L_{n-2})$ and $\operatorname{int} K^L_{n-2}$. The disc $D$ is a compressing disc for $\ob{S} \subset \partial W_2$ which is contained in a component of $W_2$ disjoint from $\operatorname{int} K^L_{n-2}$. Applying Haken's Lemma, we can replace $D$ with a disc $D'$ such that $\partial D' = \partial D$ and $D'$ intersects $S_2$ in a single loop. Since $D$ and $D'$ are in the same component of $W_2$, $D' \cap \operatorname{int} K^L_{n-2} = \varnothing$. Summarizing: The disc $D'$ is a compressing disc for $\ob{S}$ which intersects $S_2$ in a single loop and is contained in $K_{n+1} - K^L_{n-2}$.
\subsubsection*{Step \ref{step 6}}Recall that $2 < q \leq (n-3)$. We may now combine arguments. In the previous step, we showed that there was a compressing disc for $\ob{S}$ which is located in either $W_1 \cap \operatorname{cl}(K_{n+1} - K_q)$ or $W_2 \cap \operatorname{cl}(K_{n+1} - K_q)$ and intersects $S_1$ or $S_2$ (respectively) in a single loop $\gamma$. We will now produce a sequence of slide-moves $l$ such that the sequence of slide-moves $L$ followed by $l$ is in $\mc{L}$ and such that the sequence $L$ followed by $l$ has lower complexity than $L$. This will contradict our choice of $L$. The difficult part of this step is nearly identical to Bonahon's proof of Proposition \ref{choosing discs}. This is Proposition B.1 of \cite{Bo83}. We include the proof here because we need to pay careful attention to the type of slide-moves which are required. \newline
Without loss of generality, suppose that $D$ is a compressing disc for $\ob{S}$ which is located in $W_1 \cap \operatorname{cl}(K_{n+1} - K_q)$ and intersects $S_1$ in a single loop $\gamma$. (We were calling this disc $D'$ in the previous step.) We continue to view $S_1$ as a Heegaard surface for $W_1$. Recall that $V'$ denotes the compressionbody which is the region between $\ob{S}$ and $S_1$ and that $U'$ is the closure of the complement of $V'$ in $W_1$. See Figure \ref{GoodDiscsExist3}. \newline
We may assume that $D$ is disjoint from the discs of $\Delta_1$; it may, however, intersect the discs of $\Delta_2$ (including the frontiers of some $B^L_i$ for $q < i < n+1$). Let $A$ denote the annulus $D \cap V'$ and $D'$ the disc $D \cap U'$. Consider how $A$ intersects $\Delta_2$. \newline
By an innermost disc argument we may assume that the annulus $A$ intersects the discs of $\Delta_2$ entirely in arcs with both endpoints on $\gamma$. Let $a$ be an outermost arc of intersection on $A$. Let $b$ be the arc of $\gamma$ with endpoints $\partial a$ which intersects no disc of $\Delta_2$. Let $G$ be the disc of $\Delta_2$ such that $a \subset G \cap A$. Let $c$ be an arc of $\partial G$ which has endpoints $\partial a$. The arc $c$, of course, may have other intersections with $\gamma$. \newline
Combining the subdiscs of $A$ and $G$ with boundaries $a \cup b$ and $a \cup c$ respectively and pushing off $\Delta_2$ a little, we obtain a compressing disc for $S_1$ in $V'$ which is disjoint from the complete collection of discs $\Delta_2$ for $V'$. Thus $b \cup c$ is a loop bounding a disc $Q$ in $\sigma(S_1;\Delta_2)$. (We are not calling this surface $\ob{S}$ since we have pushed $\ob{S}$ into $W_2$.)\newline
We now adapt Bonahon's proof of Proposition \ref{choosing discs} to show that we can perform 2-handle slides of $G$ over the discs of $\Delta_2$ which have boundary in $Q$ and then revise and isotope $D$ to remove all intersections of $D$ with $G$ (see the remark in Step 5 about the term ``revise and isotope''). When we compress $S_1$ along $\Delta_2$, the remnants of $\Delta_2$ show up as spots, some of which are in the interior of the disc $Q$. Each disc of $\Delta_2$ contributes two spots to $\sigma(S_1;\Delta_2)$. For each spot $F_i$ from $\Delta_2$ which shows up in $Q$, excluding a possible spot coming from $G$, choose oriented arcs $\alpha_i$ contained in $Q$ joining $G$ to the discs of $\Delta_2$ giving rise to those spots. If a disc of $\Delta_2$ produces two spots contained in $Q$ then we have two oriented arcs joining $G$ to that disc. Choose the arcs $\alpha_i$ so that $\alpha_i \cap \Delta_2 = \partial \alpha_i$ and so that the $\{\alpha_i\}$ are pairwise disjoint. The arcs $\alpha_i$ lie on $S_1$ and for each arc $\alpha_i$ we may perform a handle-slide of $G$ over the disc to which it is joined by $\alpha_i$. Continue calling this disc $G$. By performing these slides, we may have increased the number of intersections between $\partial G$ and $\gamma$. These handle-slides convert $Q$ into a new disc as the arc $c$ is changed by the handle-slides. We continue calling the disc $Q$. After these handle-slides $Q$ contains no spots from $\Delta_2$, except perhaps one coming from $G$. Revise and isotope $D$ (rel $b$) so that $\gamma$ has minimal intersection with $\partial G$. \newline
Suppose, now, that the disc $Q$ contains a spot arising from $G$. Let $G_1$ and $G_2$ denote the two spots. Since they both arise from $G$ we have that $|\gamma \cap \partial G_1| = |\gamma \cap \partial G_2|$. Since any arc of $\gamma$ with both endpoints on $\partial G_2$ would bound a disc in $S_1$ and could, therefore, be removed by revising and isotoping $D$, each arc of $\gamma$ with an endpoint on $\partial G_2$ also has an endpoint on $\partial G_1$. However, $\partial b \subset \partial G_1$ and thus $|\gamma \cap \partial G_1| = |\gamma \cap \partial G_2| + 2$. This, however, contradicts the earlier equation and so the spot $G_2$ cannot exist in $Q$. The disc $Q$, therefore, is now a disc in $S_1$ and we can revise and isotope $D$ to remove the intersection $a$ from $D \cap \Delta_2$. Since we have previously removed all other intersections of $c$ with $\partial \Delta_2$, including the intersections introduced earlier, we have decreased $|D \cap \Delta_2|$ by at least one. Hence, by induction, we can remove all intersections of $D$ with $\Delta_2$ by revising and isotoping $D$ (rel $\partial D$) and handle-sliding $\Delta_2$. \newline
This produces a new disc set $\Delta'_2$ which is disjoint from $\Delta_1 \cup \{D'\}$. At the beginning of the process the curve $\gamma$ did not intersect any disc of $\operatorname{fr} K_{n+1} \cup \operatorname{fr} K_q$. The set of discs with boundary in $R$ may contain discs that are associated to discs of $\operatorname{fr} K_{n+1} \cup \operatorname{fr} K_q$, but we were able to choose our sliding arcs so that they intersected the discs of $\operatorname{fr} K_{n+1} \cup \operatorname{fr} K_{q}$ in at most one endpoint. The only slides we performed were of the disc $G$ over other discs, and since $\gamma$ intersected $G$, $G$ was not a disc of $\operatorname{fr} K_{n+1} \cup \operatorname{fr} K_{q}$. Furthermore, since the discs of $\Delta_1$ show up as spots on $S_1$ it is easy to arrange these slides to be relative to $\Delta_1$. Thus, these handle-slides are of the sort allowed in sequences in $\mc{L}$. Let $l$ denote the sequence of these handle-slides followed by the slide-move (M2) where we add the disc $D'$ to $\Delta_1$. The sequence of slide-moves consisting of $L$ followed by $l$ does, therefore, give us a sequence of slide-moves in the collection $\mc{L}$. As $D$ was a compressing disc for $\ob{S}$, this sequence of slide-moves has lower complexity than our original choice $L$. This, however, contradicts our initial choice of $L$ as a sequence for which the complexity of $\sigma(S \cap K_{n+1}; \Delta)$ was minimal. The contradiction arises from our assumption that $\ob{B}$ is compressible; therefore, $\ob{B}$ is incompressible in $M - C$.
\subsubsection*{Step \ref{step 7}} Recall that $\{K_i\}$ is our balanced exhausting sequence adapted to $S$ which is interspersed with a frontier-incompressible (rel $C$) exhausting sequence. Let $q_n = 5n$ for $n \geq 2$. We have shown how to replace $K_{q_n}$ with a balanced submanifold $C_n = \eta(A'_n \cup B'_n)$ which contains $K_{q_n - 3}$. The sequence $\{C_n\}$ is a balanced exhausting sequence adapted to $S$. In the construction of each $C_n$ we also constructed a 2-sided disc family $\Delta$ so that $\sigma(\operatorname{cl}(\partial_S B'_n - \partial_S A'_n);\Delta)$ is incompressible in $M - C$. Let $\Delta_n$ denote those discs of $\Delta$ with boundary on $\operatorname{cl}(\partial_S B'_n - \partial_S A'_n)$. Note that $\Delta_n$ is disjoint from $\Delta_i$ for all $i < n$. Let $\Psi = \cup_n \Delta_n$. $\Psi$ is a 2-sided disc family for $S$ where each disc of $\Psi$ has boundary on the frontier of some $C_n$. When we compress $\cup \operatorname{cl}(\partial_S B'_n - \partial_S A'_n)$ along $\Psi$ we obtain surfaces which are incompressible (rel $C$). \end{proof}
\subsection{The Proof of Theorem \ref{well-placed}} Recall that $M$ is end-irreducible rel $C \supset \partial M$ and that $U \cup_S V$ is an absolute Heegaard splitting for $M$.
\begin{proposition}\label{fr incomp and adapted} $M$ has a frontier-incompressible (rel $C$) exhausting sequence which is adapted to $S$. Furthermore, $V$ intersects the frontier of each element of the exhausting sequence in discs. \end{proposition}
\begin{proof} Let $\{C_i = \eta(A'_i \cup B'_i)\}$ be the balanced exhausting sequence guaranteed by Proposition \ref{good discs exist}. By the construction of balanced submanifolds, $V$ intersects each $\operatorname{fr} C_i$ in discs. \newline
Proposition \ref{good discs exist} guarantees the existence of a 2-sided disc family $\Psi$ for $S$ such that $\cup_i (\operatorname{fr} A'_i \cup \operatorname{fr} B'_i) \subset \Psi$ and each $\sigma(\operatorname{cl}(\partial_S B'_i - \partial_S A'_i); \Psi)$ is incompressible in $M - C$. Let $\Psi_1 = \Psi \cap U$ and $\Psi_2= \Psi \cap V$. We may use the product region between $(\operatorname{fr} C_i - (\operatorname{fr} A_i \cup \operatorname{fr} B_i))$ and $\partial_S B'_i - \partial_S A'_i$ to extend the discs in $\Psi_2$ with boundary on $(\partial_S B'_i - \partial_S A'_i)$ to have boundary on $\operatorname{fr} C_i$. \newline
If we boundary-reduce $C_i$ along $\Psi_2$ and add the 2-handles $\eta(\Psi_1)$ to $\operatorname{fr} C_i$ we end up with a new submanifold $\ob{C}_i$ of $M$. By construction, the discs of $\Psi$ with boundary on $\operatorname{fr} C_i$ are disjoint from $\operatorname{fr} C_{i-1} \cup \operatorname{fr} C_{i+1}$. Hence, $C_i$ is contained in a single component of $\ob{C}_{i+1}$ and so $M = \cup \ob{C}_i$. Let $K_1$ be the component of $\ob{C}_2$ containing $C$ and, for each $n > 1$, let $K_n$ be the component of $\ob{C}_{n+1}$ containing $C_n$. Since $C_n \subset K_n$ the sequence $\{K_n\}$ is an exhausting sequence for $M$. Since the frontier of each $K_i$ is incompressible in $M - C$, the sequence $\{K_i\}$ is frontier-incompressible rel $C$. \newline
When we boundary-reduce $C_i$ along $\Psi_2$ we are boundary-reducing $C_i$ along disjoint discs which each intersect the relative Heegaard surface $S \cap C_i$ in a single simple closed curve. By Haken's Lemma (Lemma \ref{Haken}), the intersection of the resulting submanifold with $S$ is still a relative Heegaard surface. When we add the 2-handles $\Psi_1$ to $\operatorname{fr} C_i$ we are adding 2-handles to $\partial_- (U \cap C_i)$. Hence, the resulting submanifold still has a relative Heegaard splitting coming from its intersection with $S$, apart from the introduction of 2-sphere components to $\partial_-(U \cap C_i)$. If there are any, we may add to $U \cap K_i$ the 3-balls bounded by those 2-spheres in $U$. After we have added these 3-balls, $\{K_i\}$ is a correctly embedded exhausting sequence. Therefore, $\{K_i\}$ is adapted to $S$. Since the sequence is also frontier-incompressible (rel $C$) the proposition is proved. \end{proof}
We now embark on proving that there is a frontier-incompressible (rel $C$) exhausting sequence for $M$ which is adapted to $S$ and has properties (WP1), (WP2), (WP3), and (WP4) in the definition of ``well-placed exhausting sequence''.
\begin{lemma}\label{single disc} Let $\{K_i\}$ be a frontier-incompressible (rel $C$) exhausting sequence for $M$ which is adapted to $S$. Suppose that $V$ intersects $\operatorname{fr} K_i$ in discs for each $i$. Then after taking a subsequence of $\{K_i\}$ and performing a proper ambient isotopy of $\cup_i \operatorname{fr} K_i$ we may arrange that $V$ intersects each component of each $\operatorname{fr} K_i$ in a single disc. Additionally, $\{K_i\}$ has the outer collar property. \end{lemma}
\begin{proof}
Begin by taking a subsequence of $\{K_i\}$ such that $\{K_i\}$ has the outer collar property. Let $K = K_j$ (for $j \geq 2$) be an element of this revised exhausting sequence. Suppose that $B$ is a component of $\operatorname{fr} K$ such that $|V \cap B| \geq 2$. We will describe an ambient isotopy of $\operatorname{fr} K$ which is the identity outside of $\operatorname{cl}(K_{j+1} - K_{j-1})$ and reduces $|B \cap V|$ by one. We may then perform this ambient isotopy on each element of $\{K_{2i}\}$ as needed in order to arrange that $V$ intersects each component of $\operatorname{fr} K_{2i}$ in a single disc. The union of these isotopies is a proper ambient isotopy of $\{K_{2i}\}$. After performing this isotopy, it will be clear that $\{K_{2i}\}$ still has the outer collar property. \newline
Let $B' = U \cap B$. Since $V \cap B$ consists of discs, $B'$ is connected and has at least two boundary components. $B'$ makes up part of the frontier of the relative compressionbody $K \cap U$. $B'$ is a component of $\partial_- (K \cap U)$ since $\{K_i\}$ is a correctly embedded exhausting sequence. Since $\{K_i\}$ has the outer collar property, there is a product region $P = B' \times I$ which is embedded in $\operatorname{cl}((K - K_{j-1}) \cap U)$ such that $B' = B' \times \{0\}$ and $B' \times \{1\}$ is a subsurface of $S \cap K$ except at a finite number of open discs $\delta$. Choose an arc $\alpha \subset B' \times \{1\}$ so that $\alpha \cap \partial (B' \times \{1\}) = \partial \alpha$, $\alpha$ joins different components of $\partial (B' \times \{1\})$, and $\alpha$ is disjoint from the discs $\delta$. Let $D = \alpha \times I \subset P$ so that $\alpha = \alpha \times \{1\}$. $D$ is an embedded disc in $P$ such that $\partial D$ is composed of two arcs, one on $B'$ and one on $S \cap K$. Isotope $B \cap \eta(D)$ across the disc $D$. After this isotopy, the number of components of $B \cap S$ has been reduced by one. \newline
We now inspect the effect of this isotopy on $V \cap K$ and $U \cap K$. In $V \cap K$ we have changed $\partial_- (V \cap K)$ by banding together two discs. Since $V \cap K$ was a relative compressionbody with $\partial_- (V \cap K)$ consisting of discs, we have not changed the homeomorphism type of $V \cap K$; we have changed only the preferred surface. \newline
The effect of the isotopy on $U \cap K$ is to replace $B' \times I$ with $C' \times I$ where $C'$ is the surface obtained from $B'$ by removing a neighborhood of an arc joining two components of $\partial B'$. Clearly, $U \cap K$ is still a relative compressionbody with preferred surface $S \cap K$. Furthermore, the presence of the product region $C' \times I$ shows that the sequence $\{K_i\}$ still has the outer collar property. The isotopy we have described is the identity outside of $K_{j+1} - K_{j-1}$. \end{proof}
\begin{proof}[Proof of Theorem \ref{well-placed}] Take the exhausting sequence $\{K_i\}$ given by Lemma \ref{single disc}. The only properties we have left to achieve are (WP2) and (WP4). We now prove that we have, in fact, already achieved (WP2) and that we can achieve (WP4) without ruining the others. \newline
Suppose that $B$ is some component of $\operatorname{fr} K_i$ such that $B \cap U$ has a compressing disc $D$ which is contained in $U$. Since $K_i \cap U$ is a relative compressionbody and $(B \cap U) \subset \partial_- (K_i \cap U)$, the compressing disc $D$ must be on the outside of $K_i$. The curve $\partial D$ bounds a disc $E \subset B$ since $B$ is incompressible in $M - C$ and $C \subset K_i$. Since $D$ is a compressing disc for $B \cap U$, the disc $E$ is not contained in $B \cap U$. Thus $(V \cap B) \subset E$. Forming $K'_i$ by adding $\eta(D)$ to $K_i$ cuts $B$ into two surfaces: $B'$ which is homeomorphic to $B$ and $B''$ which is a 2-sphere. Note that both $B'$ and $B''$ are components of $\partial K'_i$. The surface $B'$ is contained in $U$ and the sphere $B''$ intersects $V$ in a single disc.\newline
Since $B$ was incompressible in $M - C$ and $B'$ was obtained from $B$ by cutting off a 2-sphere, $B'$ is also incompressible in $M - C$. The surface $B' \subset U$ is closed and incompressible in $U$. Hence, $B'$ is parallel to a component of $\partial_- U \subset \partial M$. This product region has boundary consisting of two components both of which are components of $\partial K'_i$. Thus the product region is actually $K'_i$. But $B''$ is also a component of $\partial K'_i$, so this is a contradiction. Hence, $B \cap U$ is incompressible in $U$. Thus $\{K_i\}$ satisfies (WP2). \newline
Finally, we need to achieve (WP4). Suppose that $\operatorname{cl}(M - K_1)$ has a compact component $L$. There is some $K_n$ so that every compact component of $\operatorname{cl}(M - K_1)$ is contained in $K_n$. By Corollary \ref{complementary compressionbodies}, $U \cap L$ and $V \cap L$ are relative compressionbodies. Since there are no closed components of $\partial_- (U \cap L)$ or $\partial_-(V \cap L)$, both are also handlebodies. Let $Q = L \cap K_1$. $Q \cap U$ is an incompressible surface in $U$ which makes up part of $\partial_- (U \cap K_1)$. Choose a collaring set of discs $\delta$ for $U \cap K_1$. Boundary-reducing $K_1 \cap U$ along $\delta$ leaves us with components homeomorphic to $(Q \cap U) \times I$. Let $L' = (L \cap U) \cup ((Q \cap U) \times I)$. This does not change the homeomorphism type of $L \cap U$, so $L'$ is a handlebody. We may now reassemble $K_1 \cap U$ by attaching 1-handles corresponding to the discs $\delta$. When we do this, we are attaching the handlebody $L'$ to the $\partial_+$ of a relative compressionbody and so the result is a relative compressionbody with preferred surface $S \cap ((K_1 \cap U) \cup L')$. Since $V$ intersects each component of $Q$ in a single disc, $V \cap L$ is a handlebody and so $V \cap (K_1 \cup L)$ is also a relative compressionbody with preferred surface $S \cap (K_1 \cup L)$.\newline
Thus, if we include each compact component of $\operatorname{cl}(M - K_1)$ into $K_1$ to form $K'_1$ we still have a relative Heegaard splitting $K'_1 = (U \cap K'_1) \cup_{S \cap K'_1} (V \cap K'_1)$. Assume that we have defined $K'_j$ for $j \geq 1$. There exists an $n_j$ so that $K'_j \subset K_{n_j}$. Let $K'_{j+1}$ be the union of $K_{n_j}$ and all of the compact components of $\operatorname{cl}(M - K_{n_j})$. By the previous argument, $S$ gives a relative Heegaard splitting of $K'_{j+1}$. In such a way we obtain an exhausting sequence $\{K'_n\}$ for $M$ with property (WP4). It is clear from the construction that $\{K'_n\}$ is, in fact, an exhausting sequence well-placed on $S$. \end{proof}
\begin{remark} Theorem \ref{well-placed} tells us that there is a frontier-incompressible (rel $C$) exhausting sequence $\{K_i\}$ for $M$ such that each $K_i$ inherits a relative Heegaard splitting from $U \cup_S V$. If for some $j \geq 2$ each component of $\operatorname{cl}(K_{j+1} - K_{j})$ intersects $K_j$ in a connected surface, then examining the structure of the absolute Heegaard splitting of $K_{j+1}$ induced by the relative Heegaard splitting coming from $S$ shows that this absolute Heegaard splitting is obtained by amalgamating Heegaard splittings of $K_j$ and each component of $\operatorname{cl}(K_{j+1} - K_j)$. \end{remark}
\section{Heegaard Splittings of Deleted Boundary 3-Manifolds}\label{Deleted Boundary}
\subsection{Introduction} \begin{definition} A 3-manifold $M$ is \defn{almost compact} if there is a compact 3-manifold $\ob{M}$ with non-empty boundary and a non-empty closed set $J \subset \partial \ob{M}$ such that $M$ is homeomorphic to $\ob{M} - J$. If $J$ is the union of components of $\partial \ob{M}$ then $M$ is a \defn{deleted boundary manifold}. \end{definition}
Let $M$ be a deleted boundary manifold obtained from the compact manifold $\ob{M}$ by removing the union $J$ of boundary components. By removing an open collar neighborhood of $J$ from $\ob{M}$ we obtain a compact manifold $C$ which resides in $M$. The closure of $M - C$ is homeomorphic to $J \times {\mathbb R}_+$. Since $J$ is the union of components of $\partial \ob{M}$, $J$ is a closed, possibly disconnected, surface. $M$ is obviously end-irreducible (rel $C$) and $\partial M \subset C$. We will also assume that $\partial M$ contains no spherical components, but, except where noted, $J$ may have spherical components. If $|J| \geq 2$ and at least one component is a sphere, then $M$ has Heegaard splittings which have infinitely many properly embedded reducing balls but are not end-stabilized. The following definitions (which make sense even when $M$ is not a deleted boundary 3-manifold) assist the classification in this case.
\begin{definition} Let $e$ be an end of $M$ represented by non-compact submanifolds $\{W_i\}$ such that $\operatorname{cl}(W_i)$ is non-compact, $W_{i+1} \subset W_{i}$ for all $i$, and $M = \cup (M - W_i)$. A Heegaard splitting $M = U \cup_S V$ is \defn{$e$-stabilized} if for each $i$ there is a reducing ball for $S$ contained in $W_i$. Recall that the splitting is \defn{infinitely-stabilized} if it is $e$-stabilized for some end $e$ and \defn{end-stabilized} if it is $e$-stabilized for every end $e$. \end{definition}
The notion of being $e$-stabilized is a proper ambient isotopy invariant, as the next lemma shows.
\begin{lemma}\label{proper ambient isotopy invariant} Suppose that $S$ and $T$ are Heegaard surfaces for $M$. If there is an end $e$ of $M$ such that $S$ is $e$-stabilized but $T$ is not then $S$ and $T$ are not properly ambient isotopic. \end{lemma}
\begin{proof} This follows directly from the fact that including a Heegaard surface into $M$ induces a homeomorphism on ends (Proposition \ref{end homeomorphism}) and that proper ambient isotopies fix each end of a manifold. \end{proof}
\begin{definition} Suppose that $U_S \cup_S V_S$ and $U_T \cup_T V_T$ are two absolute Heegaard splittings of $M$. Then they are \defn{approximately isotopic} if for any compact set $C$ there are proper ambient isotopies of $S$ and $T$ so that $S \cap C = T \cap C$. \end{definition}
The goal of this section is to completely classify Heegaard splittings of $M$ up to proper ambient isotopy and up to approximate isotopy. In particular, if $J$ contains no spherical components, $M$ has, up to proper ambient isotopy, exactly one Heegaard splitting and that splitting is end-stabilized.\newline
The following three theorems provide key ingredients in the classification.
\begin{theorem}[Reidemeister-Singer]\label{Reidemeister-Singer} After finitely many stabilizations, any two absolute Heegaard splittings of a compact 3-manifold which have the same partition of boundary are ambient isotopic. \end{theorem}
The next is a version of Theorem 2.1 of \cite{FrMe97}. A proof is provided in the Appendix.
\begin{theorem}[Frohman-Meeks]\label{Frohman-Meeks} Any two end-stabilized absolute Heegaard splittings with the same partition of $\partial M$ are properly ambient isotopic. Any two infinitely-stabilized Heegaard splittings with the same partition of $\partial M$ are approximately isotopic. \end{theorem}
The following is the most involved result of this section. Its proof uses Scharlemann and Thompson's classification of splittings of $\text{(closed surface)} \times I$. \newline
Let $W_1, \hdots, W_n$ denote the components of $\operatorname{cl}(M - C)$ and let $X_1, \hdots, X_n$ denote the components of $J$ so that $W_i$ is homeomorphic to $X_i \times {\mathbb R}_+$. Let $e_1, \hdots, e_n$ denote the ends of $M$ corresponding to $W_1, \hdots, W_n$ respectively.
\begin{theorem}\label{end-stabilized} Let $S$ be any Heegaard surface for $M$. If $S \cap W_i$ is of infinite genus then $S$ is $e_i$-stabilized. Furthermore, if $X_i$ is not a sphere then $S \cap W_i$ is of infinite genus and, therefore, $S$ is $e_i$-stabilized. \end{theorem}
The promised classification is contained in the following propositions. The proofs of these propositions use Theorem \ref{end-stabilized} to give information about stabilizations and then appeal to Frohman and Meeks' theorem for the existence of the desired isotopies. \newline
In Section \ref{examples}, it was explained how to obtain finite genus splittings of non-compact 3-manifolds: remove some finite number of closed balls from a compact 3-manifold. All such 3-manifolds are deleted boundary 3-manifolds. One consequence of Theorem \ref{end-stabilized} is that these are the only deleted boundary 3-manifolds with finite genus Heegaard splittings. All others have only infinite genus splittings and we can classify them up to approximate isotopy and up to proper ambient isotopy. \newline
The following propositions provide the classification. Recall that $M = \ob{M} - J$ is a deleted boundary 3-manifold:
\begin{proposition}\label{all spheres} Suppose that $J$ consists of 2-spheres and that $M'$ is obtained from $\ob{M}$ by attaching 3-balls to $J$. Then, up to proper ambient isotopy of $M$, any finite genus Heegaard surface in $M$ is the intersection of a Heegaard surface for $M'$ with $M$. The Heegaard surface in $M'$ intersects each attached 3-ball in a properly embedded disc. If two such splittings of $M'$ are isotopic then the resulting splittings of $M$ are properly ambient isotopic. \end{proposition}
\begin{proposition}\label{approx isotopic} Suppose that $S$ and $T$ are infinite genus Heegaard surfaces for $M$ whose splittings have the same partition of $\partial M$. Then $S$ and $T$ are approximately isotopic. \end{proposition}
\begin{proposition}\label{classification} Suppose that $S$ and $T$ are infinite genus Heegaard surfaces for $M$ with the same partition of $\partial M$. Consider the following condition: \begin{enumerate} \item[(*)] For each $i$, $S \cap W_i$ has infinite genus if and only if $T \cap W_i$ is of infinite genus. \end{enumerate} Then (*) holds if and only if $S$ and $T$ are properly ambient isotopic. \end{proposition}
\begin{proposition}\label{no spheres} If no $X_i$ is a 2-sphere then any two Heegaard splittings of $M$ with the same partition of $\partial M$ are equivalent up to proper ambient isotopy. \end{proposition}
Before we prove the theorem and the classifications, we review a technique developed by Scharlemann and Thompson \cite{ScTh94a} which was inspired by work of Otal. We also need to review the classification of Heegaard splittings of $G \times I$ where $G$ is a closed surface.
\subsection{Edge-Slides of Reduced Spines}\label{edge-slides of reduced spines}
\begin{definition} Suppose that $Q$ is a compact 3-manifold and that $\Sigma$ is a finite graph in $Q$ such that $\Sigma$ intersects $\partial Q$ in valence one vertices. Let $B$ denote the components of $\partial Q$ which intersect $\Sigma$. If $\operatorname{cl}(Q - \eta(B \cup \Sigma))$ is a compressionbody then $\Sigma$ is a \defn{reduced spine}. \end{definition}
Choose an edge $e \subset \Sigma$ and a path $\gamma \subset \partial Q \cup \Sigma$ with $\gamma$ beginning at an endpoint of $e$ but otherwise disjoint from $e$. An edge-slide of $e$ over $\gamma$ replaces $e$ with the union of $e$ and a copy of $\operatorname{int}(\gamma)$ pushed slightly away from $\Sigma \cup B$. See \cite{SaScSch01,ScTh93,ScTh94a} for more detail. Edge slides give isotopies of the surface $S = (B - \operatorname{int}(\eta(\Sigma))) \cup \partial \eta(\Sigma)$. Conversely, an isotopy of a Heegaard surface can be converted into a sequence of edge-slides and isotopies of a reduced spine for one of the compressionbodies. The correspondence between edge-slides of reduced spines and isotopies of the Heegaard surface will be useful for the proof of Theorem \ref{end-stabilized}. The reason that this viewpoint is helpful is that if $Q$ is a compact submanifold of a non-compact manifold and if $(\partial Q - \operatorname{int}(\eta(\Sigma))) \cup \partial \eta(\Sigma)$ is part of a Heegaard surface $S$ for $M$ then the isotopies described by edge-slides in $Q$ of $\Sigma$ are fixed off a regular neighborhood of $Q$ and so describe a proper isotopy of $S$. \newline
To increase the genus of the Heegaard surface obtained from the reduced spine, we may stabilize a reduced spine by choosing an edge $e \subset \Sigma$. The edge $e$ is homeomorphic to $[0,1]$ and, choosing some homeomorphism, let $e'$ denote the subarc $[\frac{1}{4},\frac{3}{4}]$. Introduce new vertices on $e$ at $\frac{1}{4}$ and $\frac{3}{4}$ and push the interior of $e'$ slightly off of $e$ to form a new edge $e''$ with endpoints on $e$ at the vertices $\frac{1}{4}$ and $\frac{3}{4}$. The new edges $e''$ and $e'$ of $\Sigma$ bound a disc $D$ whose interior is disjoint from $\Sigma$. The induced Heegaard splitting is stabilized in the usual sense as the boundary of the disc $D$ intersects a meridian disc of $\eta(\Sigma)$ exactly once. \newline
The final lemma of this section produces a reduced spine for $\text{(surface)} \times I$ with particular properties. The spine gives rise to a relative version of a standard splitting of $\text{(surface)} \times I$.
\begin{lemma}\label{model surface} Let $G$ be a closed surface of positive genus. Let $G'$ and $G''$ be the surfaces $G \times \{\frac{1}{4}\}$ and $G \times \{\frac{3}{4}\}$ in $G \times I$. Let $n$ be a fixed integer greater than or equal to twice the genus of $G$. Let $P_0 = G \times [0,\frac{1}{4}]$. Then there is a connected reduced spine $\Sigma = \Sigma(G,n)$ in $G \times I$ such that $\Sigma$ intersects both boundary components of $G \times I$, $\Sigma$ intersects $P_0$ in a vertical arc, the rank of $H_1(\Sigma)$ is $n$, and $\partial \eta(\Sigma)$ is a relative Heegaard surface for $G \times [\frac{1}{4},1]$. \end{lemma}
\begin{proof} Consider $Q' = (G \times [\frac{7}{16},\frac{9}{16}]) - (\eta(* \times [\frac{7}{16},\frac{9}{16}]))$ where $*$ is a point on $G$. Then $Q'$ is a handlebody of genus twice the genus of $G$. Choose $\text{genus}(G)$ loops $L$ based at a point $b \in \operatorname{int} Q'$ which represent generators of $\pi_1(Q',b)$. Let $a$ be the arc $b \times I$ in $G \times I$ and assume, by general position, that the interior of each loop of $L$ is disjoint from $a$. Since $\partial Q'$ is a Heegaard surface for $G \times I$, $\partial (Q' \cup \eta(a))$ is a relative Heegaard surface for $G \times I$. Thus $\Sigma = a \cup L$ is a reduced spine for $G \times I$. Stabilize $\Sigma$ enough times so that the rank of its first homology is $n$, making sure that the stabilizations take place in $G \times [\frac{1}{4},1]$. The resulting spine satisfies the desired properties. \end{proof}
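For completeness, here is one way to check the claim in the preceding proof that $Q'$ is a handlebody of genus twice the genus of $G$. Removing the vertical solid cylinder $\eta(* \times [\frac{7}{16},\frac{9}{16}])$ from $G \times [\frac{7}{16},\frac{9}{16}]$ leaves $Q' \cong (G - \operatorname{int}(D^2)) \times I$, where $D^2$ is a small disc about $*$ in $G$. For a compact connected surface $F$ with non-empty boundary, $F \times I$ is a handlebody of genus $1 - \chi(F)$, so
\[
\text{genus}(Q') = 1 - \chi\big(G - \operatorname{int}(D^2)\big) = 1 - \big(\chi(G) - 1\big) = 2\,\text{genus}(G).
\]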
\subsection{Heegaard Splittings of $\text{(closed surface)} \mathbf{\times I}$.}
Scharlemann and Thompson classified Heegaard splittings of $G \times I$, where $G$ is a closed connected surface. In Theorem 6.1 of \cite{ScTh93} they give a way of interpreting their classification in terms of edge slides of spines (reduced or non-reduced). The following are the versions of their results which we will need.
\begin{theorem}[Scharlemann-Thompson \cite{ScTh93}]\label{STclass1} Suppose that $\Sigma$ and $\Psi$ are connected reduced spines for $G \times I$ which intersect both boundary components of $G \times I$ and whose first homology groups have the same rank. Then there is a finite sequence of edge-slides and isotopies taking $\Sigma$ to $\Psi$. \end{theorem}
\begin{theorem}[Scharlemann-Thompson \cite{ScTh93}]\label{STclass2} If a Heegaard splitting of $G \times I$ has both boundary components of $G \times I$ contained in the same compressionbody and if the splitting surface has genus greater than twice the genus of $G$ then the splitting is stabilized. \end{theorem}
\subsection{The proofs} Before beginning each proof, the theorem or proposition has been repeated for the convenience of the reader.
\begin{thm6.4} If $S \cap W_i$ is of infinite genus then $S$ is $e_i$-stabilized. Furthermore, if $X_i$ is not a sphere then $S \cap W_i$ is of infinite genus and, therefore, $S$ is $e_i$-stabilized. \end{thm6.4}
\subsubsection*{Proof of Theorem \ref{end-stabilized}}
Since $M$ is end-irreducible (rel $C$) and $\partial M \subset C$, Theorem \ref{well-placed} guarantees that there is an exhausting sequence $\{K_n\}$ which is well-placed on $S$. In particular, $\operatorname{fr} K_n$ is incompressible in $M - C$ and no component of $\operatorname{cl}(M - K_n)$ is compact. Recall that $W_i$ is a component of $\operatorname{cl}(M - C)$ and is homeomorphic to $X_i \times {\mathbb R}_+$ where $X_i$ is a closed connected surface. For each $n$, the surface $\operatorname{fr} K_n \cap W_i$ is an incompressible surface in $W_i$. Furthermore, as $H_2(W_i,\partial W_i) = 0$ and $\operatorname{cl}(M - K_n)$ has no compact components, $\operatorname{fr} K_n \cap W_i$ is connected and is not a 2-sphere which is inessential in $W_i$.
\begin{lemma}\label{h-cobord} For each $i$ and for each $n$ the submanifold $\operatorname{cl}(K_{n+1} - K_n) \cap W_i$ is homeomorphic to $X_i \times I$. \end{lemma} \begin{proof} The proof is well-known, but we include it for completeness. Let $F = \operatorname{fr} K_{n+1} \cap W_i$. $F$ is incompressible in $W_i$. Let $N_n = \operatorname{cl}(K_{n+1} - K_n) \cap W_i$. Suppose first that $X_i = S^2$. In this case, $F$ is also homeomorphic to $S^2$. As $F$ is essential it does not bound a ball in $W_i$. By \cite[Theorem 3.1]{Br66}, $N_n$ is homeomorphic to $S^2 \times I$. \newline
Now suppose that $X_i \neq S^2$. As $W_i$ is irreducible, $F \neq S^2$. The inclusion map of $F$ into $N_n$ induces an injective map on fundamental groups. Since $W_i$ is homeomorphic to $X_i \times {\mathbb R}_+$, each loop in $N_n$ with basepoint on $F$ is homotopic (rel basepoint) to a loop outside of $N_n$. Hence, each loop is homotopic into $F$. Thus the inclusion of $F$ into $N_n$ induces an isomorphism of fundamental groups and, so by the h-cobordism theorem \cite[Theorem 10.2]{He04}, $N_n$ is homeomorphic to $F \times I$. A similar argument shows that the submanifold bounded by $X_i$ and $F$ is homeomorphic to $F \times I$ and so $F$ is homeomorphic to $X_i$. \end{proof}
Fix some $i$. Let $W = \operatorname{cl}(W_i - K_2)$. We will show that there is a subsequence of $\{K_n\}$ and a proper ambient isotopy of $S$ which is fixed off $\operatorname{cl}(W_i - K_1)$ so that either $W \cap \operatorname{cl}(K_{n+1} - K_n)$ is homeomorphic to $S^2 \times I$ and $S \cap W \cap \operatorname{cl}(K_{n+1} - K_n)$ is a genus 0 relative Heegaard surface or $S \cap W \cap \operatorname{cl}(K_{n+1} - K_n)$ is a stabilized relative Heegaard surface of $W \cap \operatorname{cl}(K_{n+1} - K_n)$. \newline
We deal first with the case when $X_i = S^2$. Let $N_n = W \cap \operatorname{cl}(K_{n+1} - K_n)$ for each $n \geq 2$.
\begin{lemma}\label{spheres inherit} If $X_i = S^2$ then $S \cap N_n$ is a relative Heegaard surface for $N_n$. \end{lemma}
\begin{proof} Recall that for each $n$, $\operatorname{fr} K_n \cap W$ is an essential 2-sphere and, by property (WP1) of well-placed exhausting sequences, $V \cap (\operatorname{fr} K_n \cap W)$ is a single disc. This implies that $U \cap (\operatorname{fr} K_n \cap W)$ is a single disc. Thus, for each $n \geq 2$, $U \cap N_n$ is a relative compressionbody with preferred surface $S \cap N_n$. Similarly, for each $n \geq 2$, $V \cap N_n$ is a relative compressionbody with preferred surface $S \cap N_n$. Thus $S \cap N_n$ is a relative Heegaard surface for $N_n$. \end{proof}
By Lemma \ref{h-cobord}, $N_n$ is homeomorphic to $S^2 \times I$. By the classification of Heegaard splittings of $S^2 \times I$, if $S \cap N_n$ has positive genus, there is a reducing ball for $S \cap N_n$ which is contained in $N_n$. If $S \cap W_i$ is of infinite genus, there are infinitely many $n$ so that $S \cap N_n$ is of positive genus, and hence $S$ is $e_i$-stabilized. If $S \cap W_i$ is of finite genus, we can take a subsequence of $\{K_i\}$ so that $S \cap N_n$ has genus 0. This concludes the case when $X_i = S^2$. \newline
Suppose, for the remainder, that $X_i$ is a closed orientable surface of positive genus $g$. We do not begin by supposing that $S \cap W_i$ is of infinite genus but, rather, draw that as our first conclusion. \newline
Recall that since $\{K_n\}$ is well-placed on $S$, $V$ intersects each $\operatorname{fr} K_n \cap W$ in a single disc. Let $N_n = W \cap \operatorname{cl}(K_{n+1} - K_n)$ for each $n \geq 1$. Since $\{K_n\}$ is well-placed on $S$ the sequence $\{K_n\}$ has the outer collar property with respect to $U$. This means that in each $U \cap N_n$ there is a collection of discs $\delta_n$ with boundary on $S \cap N_n$ so that $\sigma(U \cap N_n ; \delta_n)$ has a component which is $(\operatorname{fr} K_{n+1} \cap U \cap N_n) \times I$. The frontier of $K_{n+1} \cap U \cap N_n$ is $(\operatorname{fr} K_{n+1} \cap U \cap N_n) \times \{0\}$. On the other hand, $(\operatorname{fr} K_{n+1} \cap U \cap N_n) \times \{1\}$ is a subsurface of $S$ except at the remnants of $\delta_n$. Since $V \cap N_n \cap \operatorname{fr} K_{n+1}$ is a single disc and since $\operatorname{fr} K_{n+1} \cap N_n$ is homeomorphic to $X_i$, the surface $\operatorname{fr} K_{n+1} \cap N_n \cap U$ is homeomorphic to $X_i$ with a single puncture. As $X_i$ has positive genus $g$, the surface $\sigma(S \cap N_n ; \delta_n)$ has positive genus, and, therefore, $S \cap N_n$ has positive genus for all $n \geq 1$. This implies that $S \cap W$ has infinite genus. \newline
Take a subsequence of $\{K_n\}$ so that the first two terms of the new exhausting sequence are still $K_1$ and $K_2$ but so that the genus of $S \cap \operatorname{cl}(K_{n+1} - K_n) \cap W$ is at least $3g$ for $n \geq 1$. We continue referring to $\operatorname{cl}(K_{n+1} - K_n) \cap W$ as $N_n$. \newline
Fix some $n \geq 2$ and let $N = N_n$. By Lemma \ref{h-cobord}, $N$ is homeomorphic to $X_i \times I$. Let $F_0 = \operatorname{fr} K_n \cap N$ and $F_1 = \operatorname{fr} K_{n+1} \cap N$. $V$ intersects $F_i$ in a single disc $D_i$ for $i \in \{0,1\}$. Since $\{K_i\}$ has the outer collar property with respect to $U$, there is a collection of boundary-reducing discs $\delta_0$ for $U \cap K_n \cap W$ with boundary on $S$ and such that $\sigma(U \cap K_n \cap W;\delta_0)$ contains a component $P^U_0$ with boundary containing $F_0 \cap U$ and which is homeomorphic to $(F_0 \cap U) \times I$. Since $S$ is the preferred surface of $U \cap K_n$, there is a copy of $D^2 \times I$ embedded in $V$ so that $D^2 \times \{0\} = V \cap F_0$ and $\partial D^2 \times I = S \cap P^U_0$. Let $P_0$ be the union of $P^U_0$ and this $D^2 \times I$. Note that $P_0$ is homeomorphic to $F_0 \times I$, has $F_0$ as a boundary component, and has $V$ running through $P_0$ as the neighborhood of an arc which is vertical in the product structure. Let $F'_0 = \partial P_0 - F_0$. \newline
We can perform a similar construction on $K_{n+1}$ to obtain, embedded in $N_n$, a submanifold $P_1$ homeomorphic to $F_1 \times I$, with $\partial P_1 = F_1 \cup F'_1$ and $V \cap P_1$ a neighborhood of a vertical arc. Let $N' = N \cup P_0$ and $N'' = \operatorname{cl}(N' - P_1)$. Note that $N'$ and $N''$ are homeomorphic to $X_i \times I$, since $F_0,F_1,F'_0,$ and $F'_1$ are all homeomorphic to $X_i$. See Figure \ref{spine}.\newline
\begin{figure}
\caption{A schematic representing $N$.}
\label{spine}
\end{figure}
Let $\Sigma_V$ be a spine for $V$ in $M$ which intersects each surface $F'_0,F_0,F'_1,F_1$ exactly once and which is a vertical arc in $P_0$ and $P_1$. Let $\Sigma_S = \Sigma_V \cap N'$. Note that $\Sigma_S$ is a reduced spine for a Heegaard splitting of $N''$. To see this, recall that $U \cap (K_{n+1} - K_n)$ is a handlebody (Corollary \ref{complementary compressionbodies}) and notice that $N'' - \eta(\Sigma_S \cup \partial N')$ is homeomorphic to $U \cap (K_{n+1} - K_n)$. We wish to show that after a proper ambient isotopy of $S$ which is the identity off of $\eta(N')$, $S\cap N$ is a Heegaard surface for $N$. \newline
Choose a connected reduced spine $\Sigma_T$ for a Heegaard splitting of $N''$ such that $\Sigma_T$ intersects $P_0$ in a vertical arc, the rank of $H_1(\Sigma_T)$ is the same as the rank of $H_1(\Sigma_S)$, $\Sigma_T \cap F'_1 \neq \varnothing$, $\Sigma_T \cap F'_0 \neq \varnothing$, and $\partial \eta(\Sigma_T)$ is a hollow Heegaard surface for $N$. Such a spine exists by Lemma \ref{model surface}. We call $\Sigma_T$ the \defn{model spine}. \newline
By the Scharlemann-Thompson classification of Heegaard splittings of $\text{(surface)} \times I$ (Theorem \ref{STclass1}), since $\Sigma_S$ and $\Sigma_T$ are both reduced spines with first homologies of the same rank and since they have the same partition of $\partial N''$, there is a sequence of edge-slides and isotopies which takes $\Sigma_S$ to $\Sigma_T$. It is easy to arrange these slides to be away from $\delta_0 \cup \delta_1$. The sequence of edge slides thus describes an isotopy of the surface $S \cap \eta(N'')$. By the choice of $\Sigma_T$, we have that after the isotopy, $S \cap N$ is a relative Heegaard surface of genus at least $3g$ for $N'$. \newline
The next corollary follows from our work so far; it is a technical result which will be useful for the classifications.
\begin{corollary}\label{induced rel HS} If $X_i$ is a closed surface of positive genus then after a proper ambient isotopy of $S$ which is supported on a neighborhood of $W'_i = \operatorname{cl}(M - K_1) \cap W_i$ we have that $S \cap K_1$ is the same before and after the isotopy and afterwards $S \cap W'_i$ is a relative Heegaard surface for $W'_i$. \end{corollary}
\begin{proof} Perform the isotopy just described so that $N_2 = \operatorname{cl}(K_2 - K_1)$ inherits a relative Heegaard splitting from $S$. This isotopy is fixed off a neighborhood of $W'_i= \operatorname{cl}(M - K_1) \cap W_i$ and $S \cap K_1$ is the same before and after the isotopy. Since $N_2 \subset W'_i$, there are now discs in $U \cap W'_i$ with boundary on $S$ so that boundary reducing $U \cap W'_i$ along those discs leaves a component homeomorphic to $(\operatorname{fr} W'_i \cap U) \times I$. Since $V \cap W'_i$ is a disc we have that $U \cap W'_i$ and $V \cap W'_i$ are relative compressionbodies with preferred surface $S \cap W'_i$. Thus, $S \cap W'_i$ is a relative Heegaard surface for $W'_i$. \end{proof}
We now continue the proof of Theorem \ref{end-stabilized}. For each even $n$, perform this ambient isotopy on $N_n$. By construction, the union of these ambient isotopies is a proper ambient isotopy of $S \cap W_i$. After the isotopy, for each even $n$, $S \cap N_n$ is a relative Heegaard surface of genus at least $3g$ for a space homeomorphic to $X_i \times I$ where the genus of $X_i$ is $g$. By the Scharlemann-Thompson classification of splittings of $\text{(surface)} \times I$ (Theorem \ref{STclass2}), there is a reducing ball for $S$ in each $N_n$ for $n$ even. Hence $S$ is $e_i$-stabilized. This concludes the proof of Theorem \ref{end-stabilized}. \qed
\subsubsection*{Proof of Classification}
\begin{prop6.5} Suppose that $J$ consists of 2-spheres and that $M'$ is obtained from $\ob{M}$ by attaching 3-balls to $J$. Then, up to proper ambient isotopy of $M$, any finite genus Heegaard surface in $M$ is the intersection of a Heegaard surface for $M'$ with $M$. The Heegaard surface in $M'$ intersects each attached 3-ball in a properly embedded disc. If two such splittings of $M'$ are isotopic then the resulting splittings of $M$ are properly ambient isotopic. \end{prop6.5}
\begin{proof}[Proof of Proposition \ref{all spheres}] Suppose that $U_S \cup_S V_S$ and $U_T \cup_T V_T$ are both finite genus Heegaard splittings of $M$. Let $M'$ be the compact 3-manifold obtained from $\ob{M}$ by removing only the interiors of the 3-balls whose removal created $M$. \newline
There is an exhausting sequence $\{K_i\}$ for $M$ which is well-placed on $S$ and an exhausting sequence $\{L_i\}$ which is well-placed on $T$. We may assume that $K_1$ and $L_1$ are homeomorphic to $M'$ and that $S \cap (M - K_1)$ and $T \cap (M - L_1)$ have genus zero. The frontiers of the elements of these exhausting sequences are essential spheres in $S^2 \times {\mathbb R}_+$ so, after taking a subsequence of each, there is a proper ambient isotopy of $M$ which takes $\operatorname{fr} L_i$ to $\operatorname{fr} K_i$ for each $i$ and so that $V_S \cap \operatorname{fr} K_i$ equals $V_T \cap \operatorname{fr} K_i$. \newline
Let $N_n = \operatorname{cl}(K_{n+1} - K_n)$. By Lemma \ref{spheres inherit}, each component of $N_n$ inherits a genus zero relative Heegaard splitting from $S$ and also from $T$. By Waldhausen's classification of splittings of $S^2 \times I$, there is a proper ambient isotopy taking $S \cap N_n$ to $T \cap N_n$ which is fixed on $\operatorname{fr} N_n$. The union of these isotopies over all $n$ is a proper ambient isotopy of $M$ taking $S \cap \operatorname{cl}(M - K_1)$ to $T \cap \operatorname{cl}(M - K_1)$. In particular, we may assume that $S \cap \operatorname{cl}(M - K_1)$ and $T \cap \operatorname{cl}(M - K_1)$ are vertical annuli in $S^2 \times {\mathbb R}_+$. \newline
When we compactify $M$ to $\ob{M}$, $S \cap \operatorname{cl}(M - K_1)$ and $T \cap \operatorname{cl}(M - K_1)$ compactify to compact annuli. $V_S \cap \operatorname{cl}(M - K_1) = V_T \cap \operatorname{cl}(M - K_1)$ compactifies to $V' = D^2 \times I$. Let $V'_S$ and $V'_T$ be the compactified versions of $V_S$ and $V_T$ respectively. Attach the 3-balls to $\ob{M}$ to create $M'$ and let $U'_S = \operatorname{cl}(M' - V'_S)$ and $U'_T = \operatorname{cl}(M' - V'_T)$. It is clear from the construction that $U'_S$, $U'_T$, $V'_S$ and $V'_T$ are absolute compressionbodies and that the splittings $U_S \cup_S V_S$ and $U_T \cup_T V_T$ are obtained from splittings of $M'$ in the correct fashion. \newline
Furthermore, if we remove the open 3-balls from $\ob{M}$ to create $M'$ we can extend the splittings $U'_S \cup_{S} V'_S$ and $U'_T \cup_T V'_T$ of $\ob{M}$ to be relative Heegaard splittings of $M'$. By the Marionette Lemma the relative Heegaard splittings of $M'$ are isotopic if and only if the absolute splittings of $\ob{M}$ are isotopic. If the splittings of $M'$ are isotopic then since $K_1$ is homeomorphic to $M'$, the surfaces $S \cap K_1$ and $T \cap K_1$ are isotopic. Thus, since we already have $S \cap \operatorname{cl}(M - K_1) = T \cap \operatorname{cl}(M - K_1)$ we can arrange by a proper ambient isotopy for $S$ to be equal to $T$. \end{proof}
\begin{prop6.6} Suppose that $S$ and $T$ are infinite genus Heegaard surfaces for $M$ whose splittings have the same partition of $\partial M$. Then $S$ and $T$ are approximately isotopic. \end{prop6.6}
\begin{proof}[Proof of Proposition \ref{approx isotopic}] If $S$ and $T$ have infinite genus then $S \cap W_i$ and $T \cap W_j$ have infinite genus for some $i,j$. Since each $W_k$ is homeomorphic to $X_k \times {\mathbb R}_+$ where $X_k$ is a closed surface, Theorem \ref{end-stabilized} shows that $S$ must be $e_i$-stabilized and $T$ must be $e_j$-stabilized. Theorem \ref{Frohman-Meeks} then shows that $S$ and $T$ are approximately isotopic. \end{proof}
\begin{prop6.7} Suppose that $S$ and $T$ are infinite genus Heegaard surfaces for $M$ with the same partition of $\partial M$. Consider the following condition: \begin{enumerate} \item[(*)] For each $i$, $S \cap W_i$ has infinite genus if and only if $T \cap W_i$ is of infinite genus. \end{enumerate} Then (*) holds if and only if $S$ and $T$ are properly ambient isotopic. \end{prop6.7}
\begin{proof}[Proof of Proposition \ref{classification}] The proof of Lemma \ref{proper ambient isotopy invariant} can be adapted to show that if $S$ and $T$ are properly ambient isotopic then (*) holds. \newline
Suppose, then, that $S$ and $T$ satisfy (*). We desire to show that $S$ and $T$ are properly ambient isotopic. Using Proposition \ref{approx isotopic}, we will be able to enlarge $C$ to a compact set $C'$ such that (after performing proper ambient isotopies of $S$ and $T$) $C'$ has the following properties: \begin{enumerate} \item $\operatorname{cl}(M - C')$ is homeomorphic to $\cup X_i \times {\mathbb R}_+$ \item $S \cap C' = T \cap C'$ \item $V_S \cap \operatorname{fr} C' = V_T \cap \operatorname{fr} C'$ and each of these consists of a single disc on each component of $\operatorname{fr} C'$. \item For each $W'_i = \operatorname{cl}(M - C') \cap W_i$ where $S$ and $T$ are of infinite genus, the surfaces $S \cap W'_i$ and $T \cap W'_i$ are relative Heegaard surfaces for $W'_i$. \item For each $W'_i$ where $S$ and $T$ are not of infinite genus, the surfaces $S \cap W'_i$ and $T \cap W'_i$ are of genus zero. \end{enumerate}
The way to achieve this is to take an exhausting sequence $\{K_i\}$ for $M$ which is well-placed on $S$ such that in each component of $\operatorname{cl}(M - K_1)$ $S$ and $T$ are both of either infinite genus or of genus zero. Then use the fact that $S$ and $T$ are approximately isotopic to isotope them so that $S \cap K_1 = T \cap K_1$. Let $C' = K_1$. If a certain $X_i$ is not a 2-sphere, Corollary \ref{induced rel HS} guarantees that a further proper ambient isotopy of $S$ and $T$ can be performed which is supported on a neighborhood of $W'_i = \operatorname{cl}(M - C') \cap W_i$ so that after the isotopy $S \cap C'$ still equals $T \cap C'$ but we now have property (4) in addition to property (3) for that $W'_i$. In the case when $X_i = S^2$, $S$ and $T$ automatically give relative Heegaard splittings of $W'_i$ as $\partial_-(U \cap W'_i)$ and $\partial_-(V \cap W'_i)$ can be taken to be the discs $U \cap X_i$ and $V \cap X_i$ respectively. \newline
For each $W'_i$ in which $S$ and $T$ are of infinite genus, Theorem \ref{end-stabilized} guarantees $S \cap W'_i$ and $T \cap W'_i$ are infinitely stabilized. Since $W'_i$ is 1-ended, Theorem \ref{Frohman-Meeks} guarantees that the absolute Heegaard splittings of $W'_i$ induced by $S \cap W'_i$ and $T \cap W'_i$ are equivalent by a proper ambient isotopy in $W'_i$. By the Marionette Lemma, $S \cap W'_i$ and $T \cap W'_i$ are properly ambient isotopic within $W'_i$. For each $W'_i$ where $S$ and $T$ are of genus zero, the fact that $S$ and $T$ are properly ambient isotopic in $W'_i$ follows from Proposition \ref{all spheres}. \newline
Since in each component of $\operatorname{cl}(M - C')$ there is a proper ambient isotopy of $S$ and $T$ in that component so that they coincide, and since $S$ and $T$ already coincide in $C'$ there is a proper ambient isotopy of $M$ taking $T$ to $S$. \end{proof}
\begin{prop6.8} If no $X_i$ is a 2-sphere then any two Heegaard splittings of $M$ with the same partition of $\partial M$ are equivalent up to proper ambient isotopy. \end{prop6.8}
\begin{proof}[Proof of Proposition \ref{no spheres}] By Theorem \ref{end-stabilized}, $S$ and $T$ are end-stabilized. Theorem \ref{Frohman-Meeks} then implies that they are properly ambient isotopic. \end{proof}
\appendix \section{Infinitely Stabilized Heegaard Splittings} The goal of this section is to give a detailed proof of the following theorem which is due, essentially, to Frohman and Meeks. Our methods are the same but we elaborate in order to fix the error mentioned previously. We refer the reader to earlier sections for the definitions of the terms used here.
\begin{theorem}\label{Frohman-Meeks 2} Let $M$ be a non-compact orientable 3-manifold with compact boundary not containing any 2-sphere components. Suppose that $M = U_S \cup_S V_S$ and $M = U_T \cup_T V_T$ are two Heegaard splittings of $M$ with the same partition of $\partial M$. If both $S$ and $T$ are infinitely stabilized then they are approximately isotopic. If both $S$ and $T$ are end-stabilized then they are properly ambient isotopic. \end{theorem}
In \cite{FrMe97}, Frohman and Meeks introduce a technique which they call ``stealing handles from infinity". This method provides a proper isotopy of an infinitely stabilized splitting so that for any compact submanifold $K$, $S \cap K$ is stabilized an arbitrary number of times.
\begin{proposition}[{Frohman-Meeks \cite[Proposition 2.1]{FrMe97}}] \label{stealing handles} Suppose that $M = U \cup_S V$ is an infinitely stabilized Heegaard splitting of $M$. Let $C$ be a submanifold of $M$ which is adapted to $S$. Then for any given $n \in \mathbb N$ there is a proper ambient isotopy of $S$ so that $S \cap C$ has been stabilized at least $n$ times. \end{proposition}
\begin{proof}[Sketch of Proof] Since $S$ is infinitely stabilized, we can find $n$ disjoint reducing balls for $S$ in the complement of $C$. We may then use paths in the surface $S$ to isotope these balls along $S$ into $C$. \end{proof}
\begin{definition} An exhausting sequence $\{K_i\}$ is \defn{perfectly adapted} to $S$ if it is adapted to $S$ and, additionally, each $\operatorname{cl}(K_{i+1} - K_i)$ is adapted to $S$. (See Section \ref{Types of Exh. Seq.}.) Note that a subsequence of a perfectly adapted sequence is perfectly adapted. \end{definition}
A useful corollary of Proposition \ref{stealing handles} is:
\begin{corollary}\label{key cor} Suppose that $U_S \cup_S V_S$ and $U_T \cup_T V_T$ are two end-stabilized splittings of $M$ with the same partition of $\partial M$. If there is an exhausting sequence $\{K_i\}$ for $M$ with the following properties: \begin{enumerate} \item[(i)] $\partial M \subset K_1$ \item[(ii)] $V_S \cap \operatorname{fr} K_i$ and $V_T \cap \operatorname{fr} K_i$ consist of discs for all $i$. \item[(iii)]$V_S \cap \operatorname{fr} K_i = V_T \cap \operatorname{fr} K_i$ for all $i$. \item[(iv)] $\{K_i\}$ is perfectly adapted to both $S$ and $T$. \end{enumerate} then $S$ and $T$ are equivalent up to proper ambient isotopy. \end{corollary}
\begin{proof} By the Reidemeister-Singer theorem and the Marionette Lemma, after finitely many stabilizations of $S \cap K_1$ and $T \cap K_1$ there is an ambient isotopy of $K_1$ so that $S \cap K_1 = T \cap K_1$. Since both $S$ and $T$ are end-stabilized, these stabilizations can be achieved by stealing handles from infinity. Thus, we may assume that $S \cap K_1 = T \cap K_1$. By the assumption that $\{K_i\}$ is perfectly adapted to both $S$ and $T$, the intersections of $U_S \cup_S V_S$ and $U_T \cup_T V_T$ with any compact component $L$ of $\operatorname{cl}(M - K_1)$ give relative Heegaard splittings of $L$. By stealing more handles from infinity and passing them through $K_1$ we may stabilize $S \cap L$ and $T \cap L$ enough times so that after performing an ambient isotopy of $L$, $S$ and $T$ coincide in $K_1 \cup L$. We may do this for each compact component of $\operatorname{cl}(M - K_1)$. Since there are only finitely many such components, we have constructed proper ambient isotopies of $S$ and $T$ so that they coincide on $K_1$ and each compact component of $\operatorname{cl}(M - K_1)$. We proceed by induction. \newline
Suppose that we have performed proper ambient isotopies of $M$ so that $S \cap K_{n-1} = T \cap K_{n-1}$ and $S$ and $T$ coincide on each compact component of $\operatorname{cl}(M - K_{n-1})$. We will show that there are proper ambient isotopies of $S$ and $T$ which are fixed on $K_{n-1}$ so that after the isotopies $S$ and $T$ coincide on $K_n$ and each compact component of $\operatorname{cl}(M - K_n)$. This will show that the composition of the isotopies of $S$ converges to a proper ambient isotopy of $S$ and the composition of the isotopies of $T$ converges to a proper ambient isotopy of $T$. Thus, we will have shown that there are proper ambient isotopies of $S$ and $T$ which make them coincide with a third Heegaard surface for $M$. Hence, $S$ and $T$ are properly ambient isotopic. \newline
Let $L$ be a component of $\operatorname{cl}(K_n - K_{n-1})$. By hypothesis, both $S$ and $T$ are relative Heegaard surfaces for $L$. If every non-compact component of $\operatorname{cl}(M - L)$ contains $K_{n-1}$ then $L$ is contained in a compact component of $\operatorname{cl}(M - K_{n-1})$ and so $S \cap L = T \cap L$. \newline
We may, thus, suppose that there is a non-compact component of $\operatorname{cl}(M - L)$ which does not contain $K_{n-1}$. The surfaces $S$ and $T$ are both end-stabilized and so we may steal handles from that non-compact component of $\operatorname{cl}(M - L)$ in order to stabilize $S \cap L$ and $T \cap L$ enough times so that they are ambient isotopic in $L$. Since $S$ and $T$ already coincide on $\operatorname{fr} K_{n-1}$, we may take the ambient isotopy to be the identity on $\operatorname{fr} K_{n-1} \cap L$. Thus, there is a proper ambient isotopy of $S$ and a proper ambient isotopy of $T$, each fixed on $K_{n-1}$, so that after the isotopies $S \cap K_n = T \cap K_n$. \newline
Now suppose that $L'$ is a compact component of $\operatorname{cl}(M - K_n)$. As before, $S$ and $T$ both give relative Heegaard splittings of $L'$. If $S \cap L' \neq T \cap L'$ then $L'$ is not contained in a compact component of $\operatorname{cl}(M - K_{n-1})$. Since $S$ and $T$ are connected surfaces in each component of $\operatorname{cl}(K_n - K_{n-1})$, there are paths in $S$ and $T$ from a non-compact component of $\operatorname{cl}(M - K_n)$ to $L'$ which do not intersect $K_{n-1}$. Thus, we may stabilize $S \cap L'$ and $T \cap L'$ as much as we wish by stealing handles from infinity via paths that do not intersect $K_{n-1}$. Now isotope in $L'$ so that the splittings coincide. We have, therefore, constructed proper ambient isotopies of $S$ and $T$ which are fixed on $K_{n-1}$ such that after performing the isotopies $S \cap K_n$ equals $T \cap K_n$ and $S$ and $T$ also coincide on each compact component of $\operatorname{cl}(M - K_n)$. Thus, $S$ and $T$ are properly ambient isotopic in $M$. \end{proof}
To show that two end-stabilized splittings of $M$ with the same partition of $\partial M$ are properly ambient isotopic, we will show that there is an exhausting sequence for $M$ satisfying the requirements of Corollary \ref{key cor}. The first task is to show that if $S$ and $T$ have perfectly adapted exhausting sequences then there is an exhausting sequence for $M$ which is perfectly adapted to both $S$ and $T$ simultaneously.
\begin{lemma}[{Frohman-Meeks \cite[Proposition 2.3]{FrMe97}}]\label{mutually adapted} Suppose that $K_1$ and $K_2$ are two submanifolds of $M$ such that $K_1, K_2,$ and $\operatorname{cl}(K_2 - K_1)$ are adapted to $S$. Suppose that $L_1$ and $L_2$ are two submanifolds of $M$ such that $L_1, L_2$ and $\operatorname{cl}(L_2 - L_1)$ are adapted to $T$. Assume also that $K_1 \subset L_1 \subset K_2 \subset L_2$ where each inclusion is into the interior of the succeeding submanifold. \newline
Then after stabilizing and isotoping $S$ in $\operatorname{cl}(K_2 - K_1)$ and stabilizing and isotoping $T$ in $\operatorname{cl}(L_2 - L_1)$ there is a submanifold $J_1$ of $M$ adapted to both $S$ and $T$ so that $V_S \cap \operatorname{fr} J_1$ equals $V_T \cap \operatorname{fr} J_1$ and these intersections consist of discs. \end{lemma}
\begin{proof} Push the frontier of $K_2$ slightly into $K_2$ to form a surface $F \subset K_2$. Let $M_1$ be the submanifold bounded by $\operatorname{fr} K_2$ and $F$. ($M_1$ is, of course, homeomorphic to $\operatorname{fr} K_2 \times I$.) Let $M_2$ be the submanifold bounded by $F$ and $\operatorname{fr} K_1$. Let $N_1$ be the submanifold with boundary $\operatorname{fr} L_2 \cup F$ and let $N_2$ be the submanifold with boundary $F \cup \operatorname{fr} L_1$. Let $J_1 = K_1 \cup M_2$. Take Heegaard splittings of $M_1, M_2, N_1$ and $N_2$ with Heegaard surfaces $S_1, S_2, T_1$ and $T_2$ respectively. We should choose these splittings so that all the boundary components of each submanifold are contained in the same compressionbody of the splitting.\newline
We can use the Heegaard surfaces $S_1$ and $S_2$ to form a Heegaard surface $\ob{S}$ for $\operatorname{cl}(K_2 - K_1)$. To do this, note that there are surfaces $S'_1$ and $S'_2$ in $M_1$ and $M_2$ which are subsurfaces of $S_1$ and $S_2$ except at a finite number of open discs which are parallel to $F = M_1 \cap M_2$. The surfaces $S'_1$ and $S'_2$ cobound a product region $S'_2 \times I$. The surface $F$ may be assumed to be $S'_2 \times \{\frac{1}{2}\}$. Take a disc $D \subset S'_2 \cap S_2$ so that in the product region $S'_2 \times I$ the tube $D \times I$ is disjoint from $\operatorname{cl}(S'_1 - S_1)$. The Heegaard surface $\ob{S}$ for $\operatorname{cl}(K_2 - K_1)$ is formed by taking $(S_1 \cup S_2 \cup D \times I) - \operatorname{int} (D \times I)$. We say that $\ob{S}$ is formed by \defn{tubing together} $S_1$ and $S_2$. This process is different from the amalgamation of Heegaard splittings. Similarly, we may form a Heegaard surface $\ob{T}$ for $\operatorname{cl}(L_2 - L_1)$ by tubing together $T_1$ and $T_2$. Since in both constructions the tube intersects $F$ in a single disc, we may arrange that $\ob{S} \cap F = \ob{T} \cap F$ and that these intersections are a single inessential loop on $F$. Finally, using the product region in the compressionbodies containing $\operatorname{fr}(K_2 - K_1)$ we may use vertical tubes to extend $\ob{S}$ to be a relative Heegaard splitting for $\operatorname{cl}(K_2 - K_1)$ which coincides with $S$ on $\operatorname{fr}(K_2 - K_1)$. Similarly, extend $\ob{T}$ to be a relative Heegaard splitting for $\operatorname{cl}(L_2 - L_1)$ which coincides with $T$ on $\operatorname{fr}(L_2 - L_1)$. We call the Heegaard splittings given by $\ob{S}$ and $\ob{T}$ the \defn{model splittings}. \newline
The Reidemeister-Singer theorem and the Marionette Lemma imply that by stabilizing $S$ and $\ob{S}$ enough in $\operatorname{cl}(K_2 - K_1)$ we may perform an ambient isotopy of $\operatorname{cl}(K_2 - K_1)$ which brings $S \cap \operatorname{cl}(K_2 - K_1)$ to $\ob{S}$. Similarly, we may stabilize $T \cap \operatorname{cl}(L_2 - L_1)$ and $\ob{T}$ enough times so that there is an ambient isotopy of $\operatorname{cl}(L_2 - L_1)$ which brings $T \cap \operatorname{cl}(L_2 - L_1)$ to $\ob{T}$. Since $\ob{S}$ and $\ob{T}$ coincide on $\operatorname{fr} J_1 = F$, we have now arranged that $J_1$ is a submanifold adapted to both $S$ and $T$ and that $S \cap \operatorname{fr} J_1 = T \cap \operatorname{fr} J_1$ and these intersections consist of a single inessential loop on each component of $\operatorname{fr} J_1$. \end{proof}
\begin{corollary}\label{mutually adapted exh seq} Suppose that $\{K_i\}$ is an exhausting sequence perfectly adapted to $S$ and that $\{L_i\}$ is an exhausting sequence perfectly adapted to $T$. Assume that, for all $i$, $K_i \subset L_i \subset K_{i+1}$. Then after stabilizing $S$ and $T$ in each component of $\operatorname{cl}(K_{i+1} - K_i)$ and $\operatorname{cl}(L_{i+1} - L_i)$ respectively we may properly isotope $S$ and $T$ so that there is an exhausting sequence $\{J_i\}$ which is perfectly adapted to both $S$ and $T$ and is such that $S \cap \operatorname{fr} J_i = T \cap \operatorname{fr} J_i$ and the intersection consists of a single inessential loop on each component of $\operatorname{fr} J_i$. \end{corollary}
\begin{proof} Construct $J_1$ as in the proposition. Assuming that we have constructed $J_{n-1}$ we will demonstrate how to construct $J_n$. Build $J_n$ as in the proposition, letting $K_{n+1}$, $K_n$, $L_{n+1}$, $L_n$ play the roles of $K_2, K_1, L_2$ and $L_1$. Choose model splittings for each component of $\operatorname{cl}(K_{n+1} - K_{n})$ and $\operatorname{cl}(L_{n+1} - L_n)$ which coincide with the model splittings of $\operatorname{cl}(K_n - K_{n-1})$ and $\operatorname{cl}(L_n - L_{n-1})$ on $\operatorname{fr} K_n$ and $\operatorname{fr} L_n$ respectively. Stabilize the model splittings enough times so that after stabilizing $S \cap \operatorname{cl}(K_{n+1} - K_n)$ and $T \cap \operatorname{cl}(L_{n+1} - L_n)$ we may perform ambient isotopies of $S \cap \operatorname{cl}(K_{n+1} - K_n)$ and $T \cap \operatorname{cl}(L_{n+1} - L_n)$ so that they coincide with the model splittings. These isotopies are supported off $K_{n-1}$ and $L_{n-1}$ respectively. Note that, by the construction of the model splittings, $\operatorname{cl}(J_n - J_{n-1})$ is adapted to both $S$ and $T$ (after performing the isotopies). \newline
We thus obtain an exhausting sequence $\{J_i\}$ for $M$. The final remarks of the previous paragraph show that there are proper ambient isotopies of $S$ and $T$ so that $\{J_i\}$ is perfectly adapted to both Heegaard surfaces. \end{proof}
\begin{remark} So far we have shown that if $S$ and $T$ are end-stabilized splittings and if there are exhausting sequences perfectly adapted to each of them then (after stealing handles from infinity and performing other proper ambient isotopies of $S$ and $T$) there is an exhausting sequence which is perfectly adapted to both of them at the same time and furthermore $S$ and $T$ coincide on the frontiers of the exhausting submanifolds. Corollary \ref{key cor} then shows that $S$ and $T$ are properly ambient isotopic. It thus remains to show that an end-stabilized splitting has a perfectly adapted exhausting sequence. The following lemmas show how we can achieve this; the first of them fixes the misstatement in \cite[Proposition 2.2]{FrMe97} mentioned in the introduction. \end{remark}
\begin{lemma}\label{edge slides again} Let $M = U \cup_S V$ be an absolute Heegaard splitting of the non-compact 3-manifold $M$ and let $\{K_i\}$ be an exhausting sequence for $M$ adapted to $S$. Assume that, for each $i$, $V \cap \operatorname{fr} K_i$ consists of discs and that the sequence $\{K_i\}$ has the outer collar property with respect to $U$. Then after stabilizing $S \cap \operatorname{cl}(K_n - K_{n-1})$, for each $n \geq 3$, a finite number of times, there is a proper ambient isotopy of $S \cap K_n$ with the following properties: \begin{enumerate} \item[(i)] The isotopy is fixed on $K_{n-2} \cup \operatorname{cl}(M - K_n)$. \item[(ii)] $S \cap K_{n-1}$ is the same before and after the isotopy. \item[(iii)] After the isotopy, $S$ is a relative Heegaard surface for $\operatorname{cl}(K_n - K_{n-1})$. \end{enumerate} \end{lemma}
The proof is similar to the proof of Proposition \ref{end-stabilized}. The reader is referred to Section \ref{edge-slides of reduced spines} for the definitions and properties of edge-slides.
\begin{proof} Let $N$ be a component of $\operatorname{cl}(K_n - K_{n-1})$. Let $F_2 = \operatorname{fr} K_n \cap N$ and $F_1 = \operatorname{fr} K_{n-1} \cap N$. Since $\{K_i\}$ has the outer collar property, there are discs $\delta_1 \subset (U \cap K_{n-1})$ with boundary on $S$ so that $\sigma(U \cap K_{n-1};\delta_1)$ contains a product region $P^U_1 = (F_1 \cap U) \times I \subset U \cap \operatorname{cl}(K_{n-1} - K_{n-2})$ with $F_1 \cap U = (F_1 \cap U) \times \{0\}$. Let $(F'_1 \cap U)$ signify $(F_1 \cap U) \times \{1\}$; it is a subsurface of $S$ except at the remnants of the discs $\delta_1$. Similarly, there are discs $\delta_2 \subset U \cap K_{n}$ with boundary on $S$ so that $\sigma(U \cap K_{n};\delta_2)$ contains a product region $P^U_2 = (F_2 \cap U) \times I \subset U \cap \operatorname{cl}(K_{n} - K_{n-1})$ with $(F_2 \cap U) = (F_2 \cap U) \times \{0\}$. Let $(F'_2 \cap U)$ signify $(F_2 \cap U) \times \{1\}$; it is a subsurface of $S$ except at the remnants of the discs $\delta_2$. The boundaries of the surfaces $F'_1 \cap U$ and $F'_2 \cap U$ are simple closed curves on $S$ which bound discs in $V$. Let $F'_1$ and $F'_2$ be the surfaces $F'_1 \cap U$ and $F'_2 \cap U$ together with the discs in $V$ bounded by $\partial (F'_1 \cap U)$ and $\partial (F'_2 \cap U)$. Let $P_1$ and $P_2$ be the product regions bounded by $F'_1 \cup F_1$ and $F'_2 \cup F_2$ respectively. $P^U_1$ and $P^U_2$ are the product regions which are the intersections of $P_1$ with $U$ and $P_2$ with $U$. Let $N' = N \cup P_1$. \newline
Choose a spine for $V$ which intersects each disc of $\delta_1 \cup \delta_2$ exactly once. We may assume that the spine intersects $P_1$ and $P_2$ in vertical arcs. Let $\Sigma$ be the intersection of this spine with $N'$. Corollary \ref{complementary compressionbodies} shows that $U \cap \operatorname{cl}(K_n - K_{n-1})$ and $V \cap \operatorname{cl}(K_n - K_{n-1})$ are compressionbodies. Since there are no closed components of $\partial_- \operatorname{cl}(N' - \eta(\partial N' \cup \Sigma))$, $\operatorname{cl}(N' - \eta(\partial N' \cup \Sigma))$ is a handlebody and so $\Sigma$ is a reduced spine for $N'$. (Recall that $\{K_i\}$ is adapted to $S$ and so $U \cap K_{n-1}$ is correctly embedded in $U \cap K_n$. This is needed to apply Lemma \ref{complementary compressionbodies}.) \newline
We now construct a model splitting of $N'$. Let $X \cup_W Y$ be any relative Heegaard splitting of $N$ with $Y \cap \operatorname{fr} N = V \cap \operatorname{fr} N$. Let $\Sigma'$ be a reduced spine for $Y$. We may assume that $\Sigma' \cap P_2$ consists of vertical arcs. Using the product region $P_1$ we may extend $\Sigma'$ to be a graph in $N'$ whose intersection with $P_1$ consists of vertical arcs. $\Sigma' \cap \operatorname{cl}(N' - P_1)$ is a reduced spine for $\operatorname{cl}(N' - P_2)$. \newline
The Reidemeister-Singer theorem and the Marionette Lemma imply that by stabilizing the Heegaard splittings of $N'' = \operatorname{cl}(N' - P_2)$ induced by $\Sigma \cap N''$ and $\Sigma' \cap N''$ they become isotopic. Perform the necessary stabilizations in such a way that the graphs $\Sigma \cap N''$ and $\Sigma' \cap N''$ still intersect $P_1$ in vertical arcs. Edge-slides of reduced spines are equivalent to isotopies of the Heegaard surfaces, so there is a sequence of edge-slides which takes (the now stabilized) $\Sigma \cap N''$ to $\Sigma' \cap N''$. These edge-slides may involve sliding edges of $\Sigma \cap N''$ over other edges or over the surfaces $F'_1 \cup F'_2$. \newline
These edge-slides define an ambient isotopy of $S \cap N'$ which is fixed off a regular neighborhood of $\operatorname{cl}(N' - P_2)$. In particular, the isotopy is fixed on $K_{n-2} \cup \operatorname{cl}(M - K_n)$. After the isotopy, $S \cap K_{n-1}$ is exactly the same as it was before. Now, however, $S \cap \operatorname{cl}(K_n - K_{n-1})$ is a relative Heegaard surface for $N$ since the model surface was. \end{proof}
\begin{lemma}\label{perfect adaptations exist} Suppose that $M = U \cup_S V$ is an end-stabilized absolute Heegaard splitting of $M$. Then there is an exhausting sequence $\{L_i\}$ which is perfectly adapted to $S$. \end{lemma}
\begin{proof} By Section \ref{Balanced Exhausting Sequences} and Corollary \ref{outer collar property 2}, there is an exhausting sequence $\{K_i\}$ which is adapted to $S$, has the outer collar property, and is such that $V \cap \operatorname{fr} K_i$ consists of discs for all $i$. Recall that, since $S$ is end-stabilized, any time we need to stabilize some $S \cap \operatorname{cl}(K_i - K_j)$ we may do so by a proper ambient isotopy of $S$ in such a way that $K_j$ is fixed throughout the isotopy. This means the isotopies needed to make each $\operatorname{cl}(K_i - K_j)$ of arbitrarily high genus can be achieved by a single proper ambient isotopy of $S$ in $M$. \newline
For each $\operatorname{cl}(K_{3i+1} - K_{3i})$, steal handles from infinity and perform the isotopy of $S \cap \operatorname{cl}(K_{3i + 1} - K_{3i})$ needed in order to make $\operatorname{cl}(K_{3i+1} - K_{3i})$ adapted to $S$. Since each of these isotopies is fixed on $K_{3i - 2}$ their union is a proper ambient isotopy of $S$. Let $L_i = K_{3i}$ for each $i$. We claim that $\{L_i\}$ is perfectly adapted to $S$. \newline
It is, of course, adapted to $S$ as each $K_i$ is adapted to $S$ before and after the isotopy. We need to show that after this isotopy $\operatorname{cl}(K_{3i} - K_{3i -3})$ is adapted to $S$ for $i \geq 2$. To see this, note that since $V$ intersects each $\operatorname{fr} K_{3i}$ in discs, $V \cap \operatorname{cl}(K_{3i} - K_{3i - 3})$ is a relative compressionbody with preferred surface $S \cap \operatorname{cl}(K_{3i} - K_{3i -3})$ for each $i$. To see that $U \cap \operatorname{cl}(K_{3i} - K_{3i - 3})$ is a relative compressionbody with preferred surface $S \cap \operatorname{cl}(K_{3i} - K_{3i -3})$, note first that $\{L_i\}$ has the outer collar property. Furthermore, after the isotopy, there are discs $(\delta_1,\partial \delta_1) \subset (U \cap \operatorname{cl}(K_{3i-2} - K_{3i - 3}),S \cap \operatorname{cl}(K_{3i -2} - K_{3i - 3}))$ which cut off a product region $(U \cap \operatorname{fr} K_{3i - 3}) \times I$ contained in $U \cap (\operatorname{cl}(K_{3i - 2} - K_{3i - 3})) \subset U \cap \operatorname{cl}(K_{3i} - K_{3i - 3})$. Hence $\{L_i\}$ has both the inner and outer collar properties. It is easy to see that $\{L_i\}$ is perfectly adapted to $S$ (cf. Section \ref{outer collar prop}). \newline
\begin{proof}[Proof of Theorem \ref{Frohman-Meeks 2}]
Suppose, first, that $U_S \cup_S V_S$ and $U_T \cup_T V_T$ are two absolute infinitely stabilized Heegaard splittings of $M$ with the same partition of $\partial M$. To show that they are approximately isotopic we will show that given any compact set $C$ there are proper ambient isotopies of $S$ and of $T$ so that after the isotopies, $S$ and $T$ coincide on $C$. By Section \ref{Balanced Exhausting Sequences} and Corollary \ref{outer collar property 2}, there are exhausting sequences $\{K_i\}$ and $\{L_i\}$ adapted to $S$ and $T$ respectively which have the outer collar property and are such that $V_S \cap \operatorname{fr} K_i$ and $V_T \cap \operatorname{fr} L_i$ consist of discs. Take subsequences so that $C \subset K_1 \subset L_1 \subset K_2 \subset L_2$. By Lemma \ref{edge slides again} we may steal handles from infinity for both $S$ and $T$ and then perform further proper ambient isotopies so that $K_1, K_2$ and $\operatorname{cl}(K_2 - K_1)$ are adapted to $S$ and $L_1, L_2,$ and $\operatorname{cl}(L_2 - L_1)$ are adapted to $T$. By Lemma \ref{mutually adapted} we may steal more handles from infinity and perform more ambient isotopies of $S$ and $T$ so that there is a submanifold $J_1$ containing $K_1$ which is adapted to both $S$ and $T$. By stealing more handles from infinity, we may stabilize $S \cap J_1$ and $T \cap J_1$ enough times so that they are ambient (in $J_1$) isotopic (Reidemeister-Singer theorem and the Marionette Lemma). Isotope $S$ and $T$ so that they coincide on $J_1$. They then also coincide on $C$ and so they are approximately isotopic. \newline
Now suppose that $S$ and $T$ are end-stabilized. By Lemma \ref{perfect adaptations exist} there are exhausting sequences $\{K_i\}$ and $\{L_i\}$ perfectly adapted to $S$ and $T$ respectively. By Corollary \ref{mutually adapted exh seq}, we may perform proper ambient isotopies of $S$ and $T$ so that there is an exhausting sequence $\{J_i\}$ perfectly adapted to both of them and is such that, for each $i$, $V_S \cap \operatorname{fr} J_i = V_T \cap \operatorname{fr} J_i$ and the intersections consist of discs. By Corollary \ref{key cor}, $S$ and $T$ are properly ambient isotopic. \end{proof}
\end{document}
\begin{document}
\title{Entanglement robustness and geometry in systems of identical particles}
\begin{abstract} \noindent The robustness properties of bipartite entanglement in systems of $N$ bosons distributed in $M$ different modes are analyzed using a definition of separability based on commuting algebras of observables, a natural choice when dealing with identical particles. Within this framework, expressions for the robustness and generalized robustness of entanglement can be explicitly given for large classes of boson states: their entanglement content turns out in general to be much more stable than that of states of distinguishable particles. Using these results, the geometrical structure of the space of $N$ boson states can be explicitly addressed. \end{abstract}
\section{Introduction}
When dealing with many-body systems made of identical particles, the usual definitions of separability and entanglement appear problematic since the natural particle tensor product structure on which these notions are based is no longer available. This comes from the fact that in such systems the microscopic constituents can not be singly addressed, nor their properties directly measured \cite{Feynman,Sakurai}.
This observation points to the need for generalized notions of separability and entanglement not explicitly referring to the set of system states, or, more generally, to the ``particle'' aspect of first quantization: they should rather be based on the second quantized description of many-body systems, in terms of the algebra of observables and of the behavior of the associated correlation functions. This ``dual'' point of view stems from the fact that in systems of identical particles there is no preferred notion of separability; it is meaningful only in reference to a given choice of (commuting) sets of observables \cite{Zanardi1}-\cite{Viola2}.
This new approach to separability and entanglement has been formalized in \cite{Benatti1}-\cite{Benatti3}: it is valid in all situations, while reducing to the standard one for systems of distinguishable particles;
\footnote{The notion of entanglement in many-body systems has been widely discussed in the recent literature ({\it e.g.}, see \cite{Schliemann}-\cite{Buchleitner}); nevertheless, we stress that only a limited part of those results is relevant for systems of identical particles.}
in particular, it has been successfully applied to the treatment of the behavior of trapped ultracold bosonic gases, giving rise to new, testable predictions in quantum metrology \cite{Benatti1}-\cite{Argentieri}.
As in the case of systems of distinguishable particles,
\footnote{For general reviews on the role of quantum correlations in systems with large number of constituents see \cite{Lewenstein}-\cite{Amico}.}
suitable criteria able to detect non-classical correlations through the implementation of practical tests are needed in order to easily identify entangled bosonic many-body states \cite{Horodecki}-\cite{Modi}. In the case of bipartite entanglement, the operation of partial transposition \cite{Peres}-\cite{Werner} turns out to be again very useful; actually, it has been found that in general this operation gives rise to a much more exhaustive criterion for detecting bipartite entanglement than in the case of distinguishable particles \cite{Argentieri,Benatti3}. As a byproduct, this allows a rather complete classification of the structure of bipartite entangled states in systems composed of $N$ bosons that can occupy $M$ different modes \cite{Benatti3}, becoming completely exhaustive in some relevant special cases, as for $M=2$.
In the following we shall further explore the properties of bipartite entanglement in bosonic systems made of a fixed number of elementary constituents. We shall first study to what extent an entangled bosonic state remains robust against mixing with another state (separable or not): we shall find that, in general, bosonic entanglement is much more robust than that of distinguishable particles. In particular, we shall give an explicit expression for the so-called ``robustness'' \cite{Vidal} and upper bounds for the ``generalized robustness'' \cite{Steiner}. As a byproduct, a characterization of the geometry of the space of bosonic states will also be given; the structure of this space turns out to be much richer than in the case of systems with $N$ distinguishable constituents. One of the most striking results is that the totally mixed separable state, proportional to the unit matrix, no longer lies in the interior of the subspace of separable states: in any of its neighborhoods entangled states can always be found.
\section{Entanglement of multimode boson systems}
As mentioned above, we shall focus on bosonic many-body systems made of $N$ elementary constituents that can occupy $M$ different modes. From the physical point of view, this is a quite general framework, relevant in the study of quite different systems in quantum optics, atom and condensed matter physics. For instance, this theoretical paradigm is of special importance in modelling the behavior of ultracold bosonic gases confined in multi-site optical lattices, that are becoming so relevant in the study of quantum phase transitions and other quantum many-body effects ({\it e.g.}, see \cite{Lewenstein,Bloch}, \cite{Stringari}-\cite{Yukalov} and references therein).
In order to properly describe the $N$ boson system, let us thus introduce creation $a^\dagger_i$ and annihilation operators $a_i$, $i=1, 2,\ldots,M$, for the $M$ different modes that the bosons can occupy, obeying the standard canonical commutation relations, $[a_i,\,a^\dagger_j]=\delta_{ij}$. The total Hilbert space $\cal H$ of the system is then spanned by the many-body Fock states, obtained by applying creation operators to the vacuum:
\begin{equation}
|n_1, n_2,\ldots,n_M\rangle= {1\over \sqrt{n_1!\, n_2!\cdots n_M!}}
(a_1^\dagger)^{n_1}\, (a_2^\dagger)^{n_2}\, \cdots\, (a_M^\dagger)^{n_M}\,|0\rangle\ , \label{2-1} \end{equation}
the integers $n_1, n_2, \ldots, n_M$ representing the occupation numbers of the different modes. Since the number of bosons is fixed, the total number operator $\sum_{i=1}^M a_i^\dagger a_i$ is a conserved quantity and the occupation numbers must satisfy the additional constraint $\sum_{i=1}^M n_i=N$; in other words, all states must contain exactly $N$ particles. As a consequence, the dimension $D$ of the system Hilbert space $\cal H$ is finite; one easily finds: $D={N+M-1\choose N}$.
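As a purely illustrative aside (not part of the analysis in the text), the counting above can be checked numerically: the Python sketch below enumerates the occupation-number tuples appearing in (\ref{2-1}) and verifies the dimension formula $D={N+M-1\choose N}$; the function name and the sample values of $N$ and $M$ are arbitrary choices.
\begin{verbatim}
# Illustrative sketch: enumerate the Fock basis (2-1) of N bosons in M modes
# and check that its size equals binom(N+M-1, N).
from itertools import combinations_with_replacement
from math import comb

def fock_basis(N, M):
    """All occupation-number tuples (n_1,...,n_M) with n_1 + ... + n_M = N."""
    states = []
    for occ in combinations_with_replacement(range(M), N):
        n = [0] * M
        for mode in occ:
            n[mode] += 1
        states.append(tuple(n))
    return states

N, M = 3, 4
basis = fock_basis(N, M)
assert len(basis) == comb(N + M - 1, N)   # D = 20 for N = 3, M = 4
\end{verbatim}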
In addition, the set of polynomials in all creation and annihilation operators, $\{a^\dagger_i,\, a_i\}$, $i=1,2,\ldots, M$, forms an algebra whose norm-closure coincides with the algebra ${\cal B}({\cal H})$ of bounded operators;
\footnote{The algebra ${\cal B}({\cal H})$ is generated by the so-called Weyl operators; all polynomials in the creation and annihilation operators are obtained from them by proper differentiation \cite{Thirring,Strocchi}.}
the observables of the systems are part of this algebra.
When dealing with systems of identical particles, instead of focusing on partitions of the Hilbert space $\cal H$, it seems natural to define the notion of bipartite entanglement by the presence of non-classical correlations among observables belonging to two commuting subalgebras ${\cal A}_1$ and ${\cal A}_2$ of ${\cal B}({\cal H})$ \cite{Benatti1}. Quite in general, one can then introduce the following definition:
\noindent {\bf Definition 1.} {\sl An {\bf algebraic bipartition} of the algebra ${\cal B}({\cal H})$ is any pair $({\cal A}_1, {\cal A}_2)$ of commuting subalgebras of ${\cal B}({\cal H})$, ${\cal A}_1, {\cal A}_2\subset {\cal B}({\cal H})$.}
\noindent More explicitly, a bipartition of the $M$-oscillator algebra ${\cal B}({\cal H})$ can be given by splitting the collection of creation and annihilation operators into two disjoint sets,
$\{a_i^\dagger,\, a_i\,|\, i=1,2,\ldots,m\}$ and
$\{a_j^\dagger,\, a_j\,|\, j=m+1,m+2,\ldots,M\}$; it is thus uniquely determined by the choice of the integer $m$, with $0\leq m \leq M$. All polynomials in the first set (together with their norm-closures) form a subalgebra ${\cal A}_1$, while the remaining set analogously generates a subalgebra ${\cal A}_2$. Since operators pertaining to different modes commute, one sees that any element of the subalgebra ${\cal A}_1$ commutes with each element of ${\cal A}_2$, in short $[{\cal A}_1, {\cal A}_2]=\,0$.
\noindent {\bf Remark 1:} {\sl i)} Note that there is no loss of generality in assuming the modes forming the subalgebras ${\cal A}_1$ (and ${\cal A}_2$) to be contiguous; if in the chosen bipartition this is not the case, one can always re-label the modes in such a way as to achieve this convenient ordering.
\break {\sl ii)} Further, when the two commuting algebras ${\cal A}_1$ and ${\cal A}_2$ are generated only by a subset $M'<M$ of modes, one can simply proceed as if the $N$-boson system contained just the $M'$ modes that are used, since all operators in ${\cal B}({\cal H})$ pertaining to the modes not involved in the bipartition commute with any element of the two subalgebras ${\cal A}_1$ and ${\cal A}_2$, and therefore effectively act as ``spectators''. As a consequence, all the considerations and results discussed below hold also in this situation, provided one replaces the total number of modes $M$ with $M'$, the actual number of modes used in the chosen bipartition.
$\Box$
Having introduced the notion of algebraic bipartition $({\cal A}_1, {\cal A}_2)$ of the system operator algebra ${\cal B}({\cal H})$, one can now define the notion of local observable:
\noindent {\bf Definition 2.} {\sl An element (operator) of ${\cal B}({\cal H})$ is said to be $({\cal A}_1, {\cal A}_2)$-{\bf local}, {\it i.e.} local with respect to a given bipartition $({\cal A}_1, {\cal A}_2)$, if it is the product $A_1 A_2$ of an element $A_1$ of ${\cal A}_1$ and another $A_2$ in ${\cal A}_2$.}
\noindent From this notion of operator locality, a natural definition of state separability and entanglement follows \cite{Benatti1}:
\noindent {\bf Definition 3.} {\sl A state $\rho$ (density matrix) will be called {\bf separable} with respect to the bipartition $({\cal A}_1, {\cal A}_2)$, in short $({\cal A}_1, {\cal A}_2)$-separable, if the expectation ${\rm Tr}\big[\rho\, A_1 A_2\big]$ of any local operator $A_1 A_2$ can be decomposed into a linear convex combination of products of expectations:
\begin{equation} {\rm Tr}\big[\rho\, A_1 A_2\big]=\sum_k\lambda_k\, {\rm Tr}\big[\rho_k^{(1)}\, A_1\big]\, {\rm Tr}\big[\rho_k^{(2)}\, A_2\big]\ ,\qquad \lambda_k\geq0\ ,\quad \sum_k\lambda_k=1\ , \label{2-2} \end{equation}
where $\{\rho_k^{(1)}\}$ and $\{\rho_k^{(2)}\}$ are collections of admissible states for the system; otherwise the state $\rho$ is said to be {\bf entangled} with respect to the bipartition $({\cal A}_1, {\cal A}_2)$.}
\noindent {\bf Remark 2:} {\sl i)} This generalized definition of separability can easily be extended to the case of more than two partitions; for instance, in the case of an $n$-partition, Eq.(\ref{2-2}) above would extend to:
\begin{equation} {\rm Tr}\big[\rho\,A_1 A_2\cdots A_n]=\sum_k\lambda_k\, {\rm Tr}\big[\rho_k^{(1)}A_1\big]\, {\rm Tr}\big[\rho_k^{(2)}A_2\big]\cdots {\rm Tr}\big[\rho_k^{(n)}A_n\big]\, ,\quad \lambda_k\geq0\ ,\,\, \sum_k\lambda_k=1\ . \label{2-3} \end{equation}
\noindent {\sl ii)} When dealing with systems of {\sl distinguishable} particles, one finds that {\sl Definition 3} gives the standard notion of separability \cite{Benatti1}.
\break \noindent {\sl iii)} In this respect, it should be noticed that when dealing with systems of identical particles, there is no {\it a priori} given, natural partition, so that questions about entanglement and separability, non-locality and locality are meaningful only with reference to a specific choice of commuting algebraic sets of observables \cite{Zanardi1}-\cite{Benatti3}; this general observation, often overlooked in the literature, is at the basis of the definitions given in (\ref{2-2}) and (\ref{2-3}).
\break \noindent {\sl iv)} A special situation is represented by pure states \cite{Benatti3}. In fact, when dealing with pure states, instead of general statistical mixtures, and bipartitions that involve the whole algebra ${\cal B}({\cal H})$, the separability condition in (\ref{2-2}) (and similarly that in (\ref{2-3})) simplifies, becoming:
\begin{equation} {\rm Tr}\big[\rho\, A_1 A_2\big]={\rm Tr}\big[\rho^{(1)}\, A_1\big]\,
{\rm Tr}\big[\rho^{(2)}\, A_2\big]\ ,\quad\rho=|\psi\rangle\langle\psi|\ , \label{2-4} \end{equation}
with $\rho^{(1)}$, $\rho^{(2)}$ projections on the restrictions of $|\psi\rangle$ to the first, respectively second, partition; in other terms, separable pure states turn out to be just product states.
$\Box$
Examples of $N$-boson separable pure states are the Fock states. Using the notation and specifications introduced before ({\it cf.} (\ref{2-1})), they can be recast in the form:
\begin{equation}
| k_1, \ldots, k_m\,;\, k_{m+1}, \ldots, k_M\rangle\ ,\quad \sum_{i=1}^m k_i =k\ ,\ \sum_{j=m+1}^M k_j=N-k\ ,\ \ 0\leq k \leq N\ , \label{2-5} \end{equation}
where $k$ indicates the number of bosons in the first partition; by varying it together with the integers $k_i$, these states generate the whole Hilbert space $\cal H$. These basis states can be relabelled in a different, more convenient way as:
\begin{equation}
| k, \sigma; N-k, \sigma'\rangle\ ,\quad \sigma=1,2, \ldots, {k+m-1\choose k}\ ,\ \sigma'=1, 2,\ldots, {N-k+M-m-1\choose N-k}\ , \label{2-6} \end{equation}
where, as before, the integer $k$ represents the number of particles found in the first $m$ modes, while $\sigma$ counts the different ways in which those particles can fill those modes; similarly, $\sigma'$ labels the ways in which the remaining $N-k$ particles can occupy the other $M-m$ modes.
\footnote{Clearly, we need two extra labels $\sigma$ and $\sigma'$ for each value of $k$, so that these labels (as well as the range of values they take) are in general $k$-dependent: in order to keep the notation as simple as possible, in the following these dependences will be tacitly understood.}
In this new labelling, the property of orthonormality of the states in (\ref{2-5}) becomes:
$\langle k, \sigma; N-k, \sigma'| l, \tau; N-l, \tau'\rangle= \delta_{kl}\,\delta_{\sigma\tau}\,\delta_{\sigma'\tau'}$.
For fixed $k$, the basis vectors $\{| k, \sigma; N-k, \sigma'\rangle\}$ span a subspace ${\cal H}_k$ of dimension $D_k\, D_{N-k}$, where for later convenience we have defined ({\it cf.} (\ref{2-6}) above):
\begin{equation} D_k\equiv{k+m-1\choose k}\ ,\qquad
D_{N-k}\equiv{N-k+M-m-1\choose N-k}\ . \label{2-7} \end{equation}
\noindent {\bf Remark 3:} Note that the space ${\cal H}_k$ is naturally isomorphic to the tensor product space $\mathbb{C}^{D_k}\otimes \mathbb{C}^{D_{N-k}}$; through this isomorphism, the states
$| k, \sigma; N-k, \sigma'\rangle$ can then be identified with the corresponding basis states of the form $| k, \sigma\rangle \otimes | N-k, \sigma'\rangle$.
$\Box$
\noindent By summing over all values of $k$, thanks to a known binomial summation formula \cite{Prudnikov}, one recovers the dimension $D$ of the whole Hilbert space $\cal H$ :
\begin{equation} \sum_{k=0}^N D_k\, D_{N-k}=D= {N+M-1\choose N}\ . \label{2-8} \end{equation}
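Again as a simple numerical aside (illustrative only, not a result of the text), the summation formula (\ref{2-8}) can be verified directly for a few arbitrarily chosen values of $N$, $M$ and $m$, as in the following Python sketch.
\begin{verbatim}
# Check of (2-8): sum_k D_k * D_{N-k} = binom(N+M-1, N), with D_k as in (2-7).
from math import comb

def dimension_sum(N, M, m):
    return sum(comb(k + m - 1, k) * comb(N - k + M - m - 1, N - k)
               for k in range(N + 1))

for (N, M, m) in [(3, 4, 2), (5, 3, 1), (4, 6, 3)]:
    assert dimension_sum(N, M, m) == comb(N + M - 1, N)
\end{verbatim}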
Using this notation, a generic mixed state $\rho$ can then be written as:
\begin{equation} \rho=\sum_{k,l=0}^N\ \sum_{\sigma,\sigma',\tau,\tau'}\
\rho_{k \sigma\sigma', l\tau\tau'}\ | k, \sigma; N-k, \sigma'\rangle \langle l, \tau; N-l, \tau' |\ , \quad \sum_{k=0}^N\ \sum_{\sigma,\sigma'}\ \rho_{k \sigma\sigma', k\sigma\sigma'}=1\ . \label{2-9} \end{equation}
In general, to determine whether a given density matrix $\rho$ can be written in separable form is a hard task and one is forced to rely on suitable separability tests. One of the most useful such criteria involves the operation of partial transposition \cite{Peres,Horodecki2}: a state $\rho$ for which the partially transposed density matrix $\tilde\rho$ is no longer positive is surely entangled. This lack of positivity can be witnessed by the so-called negativity \cite{Zyczkowski,Vidal1,Horodecki}:
\begin{equation}
{\cal N}(\rho)=\, {1\over2}\Big(\!||\tilde\rho||_1 - {\rm Tr}[\rho]\Big)\ ,\qquad
||\tilde\rho||_1={\rm Tr}\Big[\sqrt{\tilde\rho^\dagger \tilde\rho}\Big]\ . \label{2-10} \end{equation}
The negativity is nonvanishing only in the presence of a non-positive $\tilde\rho$. Although this criterion is not exhaustive (there are entangled states that remain positive under partial transposition), it turns out to be much more reliable in systems made of identical particles \cite{Argentieri,Benatti3}. Indeed, the operation of partial transposition gives a necessary and sufficient criterion for entanglement detection for very general classes of bosonic states (\ref{2-9}), {\it e.g.} in the presence of only two modes ($M=2$), or, in the generic case of arbitrary $M$, when the $({\cal A}_1,\, {\cal A}_2)$-bipartition is such that the algebra ${\cal A}_1$ is generated by the creation and annihilation operators of just one mode, while the remaining $M-1$ modes generate ${\cal A}_2$.
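To fix ideas, the negativity (\ref{2-10}) can be evaluated numerically once a state is represented on a tensor product $\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}$, as is the case within each block ${\cal H}_k$ through the isomorphism of {\sl Remark 3} below. The following Python sketch is purely illustrative: the test state (a two-level ``Bell-like'' vector) is a hypothetical example, not a state discussed in the text.
\begin{verbatim}
# Negativity (2-10) via partial transposition on C^{d1} (x) C^{d2}.
import numpy as np

def negativity(rho, d1, d2):
    r = rho.reshape(d1, d2, d1, d2)
    r_pt = r.transpose(2, 1, 0, 3).reshape(d1 * d2, d1 * d2)  # transpose 1st factor
    trace_norm = np.abs(np.linalg.eigvalsh(r_pt)).sum()       # r_pt is hermitian
    return 0.5 * (trace_norm - np.trace(rho).real)

psi = np.zeros(4); psi[0] = psi[3] = 1 / np.sqrt(2)           # (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi)
print(negativity(rho, 2, 2))   # 0.5 for this maximally entangled two-level pair
\end{verbatim}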
Even more interestingly, it turns out that entangled $N$-body bosonic states need to be of a definite, specific form \cite{Benatti3}:
{\bf Proposition 1.} {\sl A generic $(m, M-m)$-mode bipartite state (\ref{2-9}) is entangled if and only if it can not be cast in the following block diagonal form
\begin{equation} \rho=\sum_{k=0}^N p_k\ \rho_k\ ,\qquad \sum_{k=0}^N p_k=1\ ,\quad {\rm Tr}[\rho_k]=1\ , \label{2-11} \end{equation}
with
\begin{equation} \rho_k=\sum_{\sigma,\sigma',\tau,\tau'}\
\rho_{k \sigma\sigma', k\tau\tau'}\ | k, \sigma; N-k, \sigma'\rangle \langle k, \tau; N-k, \tau' |\ , \quad \sum_{\sigma,\sigma'}\rho_{k \sigma\sigma', k\sigma\sigma'}=1\ , \label{2-12} \end{equation}
({\it i.e.} at least one of its non-diagonal coefficients $\rho_{k \sigma\sigma', l\tau\tau'}$, $k\neq l$, is nonvanishing), or if it can, at least one of its diagonal blocks $\rho_k$ is non-separable.}
\footnote{For each block $\rho_k$, separability is understood with reference to the isomorphic structure $\mathbb{C}^{D_k}\otimes \mathbb{C}^{D_{N-k}}$ mentioned before (see {\sl Remark 3}).}
\noindent {\sl Proof.} Assume first that the state $\rho$ can not be written in block diagonal form; one can then show \cite{Benatti3} that it is not left positive by the operation of partial transposition and therefore it is entangled. Next, take $\rho$ in block diagonal form as in (\ref{2-11}), (\ref{2-12}) above. If all its blocks $\rho_k$ are separable, then clearly $\rho$ itself is separable. Assume instead that at least one of the diagonal blocks is entangled. Mixing it with the remaining blocks as in (\ref{2-11}) will not spoil its entanglement, since all blocks $\rho_k$ have support on orthogonal spaces; as a consequence, the state $\rho$ is itself non-separable.
$\Box$
Having found the general form of non-separable $N$-boson states, one can next ask how robust their entanglement content is against mixing with other states. This question has been extensively studied for states of distinguishable particles \cite{Vidal,Steiner,Vidal1,Plenio,Horodecki}; in the next section we shall analyze to what extent the results obtained in that case can be extended to systems with a fixed number of bosons.
\section{Robustness of entanglement}
Several measures of entanglement have been introduced in the literature with the aim of characterizing quantum correlations and their usefulness in specific applications \cite{Horodecki}-\cite{Modi}. Starting with the entanglement of formation, most of these measures point to the quantification of the entanglement content of a given state. A different approach to this general problem has been proposed in \cite{Vidal,Steiner}: the idea is to obtain information about the magnitude of non-classical correlations contained in a state $\rho$ by studying how much it can be mixed with other states before becoming separable.
More precisely, let us indicate with $\cal M$ the set of all systems states and by ${\cal S}\subset{\cal M}$ that of separable ones; then, with reference to an arbitrary $(m, M-m)$-mode bipartition, one can introduce the following definition:
\noindent {\bf Definition 4.} {\sl Given a state $\rho$, its {\bf robustness of entanglement} is defined by
\begin{equation}
R(\rho)={\rm inf}\Big\{ t\ |\ t\geq0,\ \exists\, \sigma\in{\cal M}'\subset{\cal M},\ {\rm for\ which} \ \eta\equiv{\rho + t\,\sigma\over(1+t)} \in{\cal S}\Big\}\ , \label{3-1} \end{equation}
{\it i.e.} it is the smallest, non-negative $t$ such that a state $\sigma$ exists for which the (unnormalized) combination $\rho + t\,\sigma$ is separable.}
\noindent Actually, various forms of robustness have been introduced: they all share the definition (\ref{3-1}), but differ in the choice of the subset ${\cal M}'$ from which the mixing state $\sigma$ should be drawn. In particular, one talks of {\it generalized robustness} $R_g$ when $\sigma$ can be any state \cite{Steiner}, while simply of {\it robustness} $R_s$ when $\sigma$ must be separable \cite{Vidal}.
All robustness measures defined in this way satisfy nice properties; more specifically, they are entanglement monotones, {\it i.e.} they do not increase under local operations and classical communication, and they are convex, $R\big(\lambda\, \rho_1+(1-\lambda)\rho_2\big)\leq \lambda\, R(\rho_1) + (1-\lambda)\, R(\rho_2)$; further, $R(\rho)$ in (\ref{3-1}) is vanishing if and only if $\rho$ itself is separable. Although implicitly proven in the case of states of distinguishable particles \cite{Vidal,Steiner}, these properties hold true also in the case of $N$-boson systems. Nevertheless, there are striking differences in the behavior of the robustness of states of identical particles with respect to what is known in systems with distinguishable constituents.
Let us first focus on the robustness $R_s(\rho)$, which measures how strong the entanglement content of a state $\rho$ is when mixed with separable states. One finds:
\noindent {\bf Proposition 2.} {\sl The robustness of entanglement of a generic $(m, M-m)$-mode bipartite state $\rho$ is given by
\begin{equation} R_s(\rho)=\sum_{k=0}^N p_k\, R_s(\rho_k)\ , \label{3-2} \end{equation}
for states that are in block diagonal form as in (\ref{2-11}), (\ref{2-12}), while it is infinitely large otherwise.}
\noindent {\sl Proof.} From the results of {\it Proposition 1}, we know that separable $N$-boson states must be block diagonal. If the state $\rho$ is not in this form, it can never be made block diagonal by mixing it with any separable one; therefore, in this case, the combination $\rho + t\, \sigma$ will never be separable, unless $t$ is infinitely large.
Next, consider the case in which the state $\rho$ is in block diagonal form, {\it i.e.} it can be written as in (\ref{2-11}), (\ref{2-12}). First, if $\rho$ is separable, then clearly $R_s(\rho)=\,0$. For an entangled $\rho$, one can discuss each block $\rho_k$ separately: this is allowed since they have support on orthogonal Hilbert subspaces. Then, let us indicate by $t_k$ the robustness of the block density matrix $\rho_k$; by {\sl Remark 3} and the definition of robustness, the numbers $t_k$ are finite and non-negative, vanishing only when the corresponding state $\rho_k$ is separable \cite{Vidal}. More specifically, for each $k$, there exist separable states $\sigma_k$ and $\eta_k$, such that:
\begin{equation} \rho_k +t_k\, \sigma_k =(1+t_k)\, \eta_k\ . \label{3-3} \end{equation}
Multiplying both sides of this relation by the positive number $p_k$ and then summing over $k$, one gets
\begin{equation} \rho +t\, \sigma =(1+t)\, \eta\ ,\qquad t=\sum_{k=0}^N p_k\, t_k\ , \label{3-4} \end{equation}
where the separable states $\sigma$ and $\eta$ are explicitly given by
\begin{equation} \sigma=\sum_{k=0}^N \bigg({p_k\, t_k\over t}\bigg)\ \sigma_k\ ,\qquad \eta=\sum_{k=0}^N \bigg({1+t_k \over 1+t}\bigg)\, p_k\, \eta_k\ . \label{3-5} \end{equation}
To prove that indeed $t$ given in (\ref{3-4}) is really the robustness of $\rho$, one needs to check that no better decomposition
\begin{equation} \rho +t'\, \sigma' =(1+t')\, \eta'\ , \label{3-6} \end{equation}
with $t'\leq t$, exists. In order to show this, let us proceed {\it ad absurdum} and assume that such a decomposition can indeed be found. Since the states $\sigma'$ and $\eta'$ are separable, they must be block diagonal, {\it i.e.} of the form $\sigma'=\sum_k q_k\, \sigma'_k$ and $\eta'=\sum_k r_k\, \eta'_k$ with $\sum_k q_k=\sum_k r_k=1$ and $\sigma'_k$ and $\eta'_k$ separable density matrices. By the orthogonality of the Hilbert subspaces with fixed $k$, from (\ref{3-6}) one then gets
\begin{equation} p_k\, \rho_k + t'\, q_k\, \sigma'_k = (1+t')\, r_k\, \eta'_k\ , \label{3-7} \end{equation}
and further, by taking its trace, $p_k+ t'\, q_k = (1+t')\, r_k$. In addition, from the previous identity, one sees that the combination
\begin{equation} \rho_k + t'\, {q_k\over p_k}\, \sigma'_k\ , \label{3-8} \end{equation}
is separable. By definition of robustness of the block $\rho_k$ as given in (\ref{3-3}), it then follows that:
\begin{equation} t'\, {q_k\over p_k}\geq t_k\ , \label{3-9} \end{equation}
or equivalently, $ t_k\, p_k \leq t'\, q_k$. By summing over $k$, one then finds $t\leq t'$; this result is compatible with the initial assumption $t\geq t'$ only if $t'$ coincides with $t$. Therefore, the robustness of the block diagonal state $\rho$ is indeed given by the weighted sum of the robustnesses of the blocks.
$\Box$
The problem of finding the robustness $R_s(\rho)$ of a generic $N$-boson state $\rho$ is then reduced to the more manageable task of identifying the robustness of its diagonal blocks, which are finite-dimensional density matrices for which standard techniques and results can be used.
\noindent {\bf Remark 4:} {\sl i)} A remarkable property of the robustness of entanglement of states describing distinguishable particles is that it is equal to the negativity for pure states. In the case of identical particles, this property does not hold anymore as the robustness of entanglement of non-block diagonal pure states is infinitely large. Nevertheless, one can easily show that in general for pure states: ${\cal N}(\rho)\leq R_s(\rho)$.
\break {\sl ii)} The robustness of entanglement of states that, with respect to the given partition, result mixtures of pure block states, $\rho=\sum_k p_k\rho_k$, $\rho_k{}^2=\rho_k$, is equal to their negativity $R_s(\rho)=\sum_k p_k\, {\cal N}(\rho_k)={\cal N}(\rho)$, since now $R_s(\rho_k)={\cal N}(\rho_k)$, as in the standard case (see \cite{Benatti3}).
$\Box$
More difficult appears the task of computing the generalized robustness $R_g(\rho)$ of a generic $N$-boson state: only upper bounds can in general be given. In any case, note that in general $R_g(\rho)\leq R_s(\rho)$ since the optimization procedure of {\sl Definition 4} is performed over a larger subset of states in the case of the generalized robustness.
By fixing as before a $(m, M-m)$-mode bipartition, a first bound on $R_g(\rho)$ can be easily obtained. Let us extract from $\rho$ its diagonal part $\rho_D$, as defined in terms of a Fock basis determined by the given bipartition ({\it cf.} (\ref{2-6})), and call $\rho_{ND}\equiv \rho-\rho_D$ the rest. By definition of separability, $\rho_D$ is surely separable with respect to the chosen bipartition, so that an easy way to get a separable state by mixing $\rho$ with another state $\sigma$ is to subtract from it its non-diagonal part. However, $-\rho_{ND}$ alone is not in general a density matrix since it might have negative eigenvalues. Let us denote by $\lambda$ the modulus of its largest negative eigenvalue; then the quantity $\lambda \mathbbm{1} -\rho_{ND}$, where $\mathbbm{1}$ is the identity matrix, will surely be positive and therefore, once normalized, can play the role of the density matrix $\sigma$ in the separable combination $\rho +t\, \sigma\equiv\rho_D +\lambda\, \mathbbm{1}$. By taking the trace, one finds for the normalization factor $t$ the following expression: $t=\lambda\, D$, where $D$ is as before the dimension of the total Hilbert space. By definition of robustness, it then follows that
\begin{equation} R_g(\rho)\leq \lambda\, D \label{3-10}\ ; \end{equation}
as a consequence, the generalized robustness of a generic $N$-boson state is always finite.
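A minimal numerical sketch of this bound (purely illustrative, with a hypothetical test state) is the following: the diagonal part is removed in the chosen Fock basis, $\lambda$ is read off from the spectrum of $\rho_{ND}$, and the bound (\ref{3-10}) is returned.
\begin{verbatim}
# Sketch of the bound (3-10): R_g(rho) <= lambda * D, with lambda the modulus of
# the most negative eigenvalue of -rho_ND (= largest eigenvalue of rho_ND).
import numpy as np

def robustness_bound(rho):
    D = rho.shape[0]
    rho_ND = rho - np.diag(np.diag(rho))         # off-diagonal part (Fock basis)
    lam = max(np.linalg.eigvalsh(rho_ND).max(), 0.0)
    return lam * D

D = 3                                            # e.g. two modes, N = 2
psi = np.ones(D) / np.sqrt(D)                    # equal-weight pure state
print(robustness_bound(np.outer(psi, psi)))      # gives 2, i.e. N (cf. Example i) below)
\end{verbatim}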
A different bound on $R_g(\rho)$ can be obtained using a refined decomposition for $\rho$,
\begin{equation} \rho=\rho_B + \rho_{NB}\ ,\qquad \rho_B=\sum_{k=0}^N p_k\ \rho_k\ ,\qquad \sum_{k=0}^N p_k=1\ ,\quad {\rm Tr}[\rho_k]=1\ , \label{3-11} \end{equation}
where $\rho_B$ is the block diagonal part of $\rho$, whose blocks $\rho_k$ can be written as in (\ref{2-12}), while $\rho_{NB}\equiv\rho-\rho_B$ is the rest, containing the non block diagonal pieces. One can first ask for the generalized robustness of $\rho_B$, which is a {\it bona fide} state, being a normalized, positive matrix.
\footnote{This is a direct consequence of the positivity of $\rho$, since $\rho_B$ is made of its principal minors.}
Quite in general, one has:
\noindent {\bf Proposition 3.} {\sl The generalized robustness of entanglement of a generic $(m, M-m)$-mode bipartite state $\rho$ given in block diagonal form as in (\ref{2-11}), (\ref{2-12}) is given by
\begin{equation} R_g(\rho)=\sum_{k=0}^N p_k\, R_g(\rho_k)\ . \label{3-12} \end{equation}
}
\noindent {\sl Proof.} It is the same as in {\sl Proposition 2\,}; the only difference is that now the states $\sigma_k$ are in general entangled.
$\Box$
\noindent As a consequence, for a generic state as in (\ref{3-11}) above, due to the presence of the additional term $\rho_{NB}$, one surely has: $R_g(\rho)\geq R_g(\rho_B)=\sum_{k=0}^N p_k\, R_g(\rho_k)$.
To get an upper bound for $R_g(\rho)$, let us consider the form that the (unnormalized) density matrices $\sigma$ must take in order to make the combination $\rho+\sigma$ separable; by \hbox{\sl Definition 4}, the generalized robustness of entanglement coincides with the minimum value of their traces: $R_g(\rho)={\rm inf}\big\{{\rm Tr}[\sigma]\big\}$. Since the combination $\rho+\sigma$ is separable, it must be in block diagonal form; therefore, $\sigma$ must surely contain the contribution $-\rho_{NB}$. Further, it must also take care of the entanglement in the remaining block diagonal term $\rho_B$; in view of the results of {\sl Proposition 3}, this can be obtained (and in an optimal way) by the contribution $\sum_{k=0}^N p_k\, R_g(\rho_k)\, \sigma_k$, where $\sigma_k$ is the optimal density matrix that makes the diagonal block $\rho_k$ separable.
However, while $\sum_{k=0}^N p_k\, R_g(\rho_k)\, \sigma_k$ is a positive matrix, in general $-\rho_{NB}$ is not; therefore $\sigma$ should contain a further contribution, an (unnormalized) positive and separable matrix $\tilde\sigma$ curing the negativity induced by $-\rho_{NB}$. As a consequence, the generic form of the positive matrix $\sigma$ making the combination $\rho+\sigma$ separable is given by:
\begin{equation} \sigma=\sum_{k=0}^N p_k\, R_g(\rho_k)\, \sigma_k -\rho_{NB} + \tilde\sigma\ , \label{3-14} \end{equation}
and the computation of the generalized robustness of $\rho$ is reduced to the determination of the optimal $\tilde\sigma$; indeed:
\begin{equation} R_g(\rho)=\sum_{k=0}^N p_k\, R_g(\rho_k) + {\rm inf}\big\{{\rm Tr}[\tilde\sigma]\big\}\ . \label{3-15} \end{equation}
Upper bounds on $R_g(\rho)$ can then be obtained by estimating the above minimum value, through specific choices of $\tilde\sigma$.
A simple possibility for curing the non-positivity of $-\rho_{NB}$ is to add to it the identity matrix multiplied by the modulus of its largest negative eigenvalue; in general, however, the value of this eigenvalue is difficult to estimate. Another possibility is suggested by the general theory of positive matrices (see Theorems 6.1.1 and 6.1.10 in \cite{Horn}): a sufficient condition for a generic hermitian matrix $M_{ij}$ to be positive is that it be
``diagonally dominant'', {\it i.e.} $M_{ii}\geq \sum_{j\neq i} |M_{ij}|$, $\forall i$.
\footnote{Note that this condition is basis-dependent: nonequivalent conditions are obtained by expressing the matrix $M$ in different bases.}
Then, in a fixed separable basis, by choosing for $\tilde\sigma$ the diagonal matrix whose entries are given by the sums of the moduli of the elements of the corresponding rows of $-\rho_{NB}$, the matrix $\sigma$ in (\ref{3-14}) is positive and makes the combination $\rho+\sigma$ separable. One can then conclude that:
\begin{equation}
R_g(\rho)\leq\sum_{k=0}^N p_k\, R_g(\rho_k) + ||\rho_{NB}||_{\ell_1}\ , \label{3-16} \end{equation}
where, for any matrix $M$, $|| M||_{\ell_1}=\sum_{i,j} |M_{ij}|$ is the so-called $\ell_1$-norm (see \cite{Horn}).
\footnote{The same procedure can also be applied to the previously used decomposition of $\rho$ into its diagonal and off-diagonal parts: $\rho=\rho_D +\rho_{ND}$; the mixing matrix $\sigma$ would now be composed of $-\rho_{ND}$ plus a diagonal matrix whose entries are given by the sums of the moduli of the elements of each row of $\rho_{ND}$. In this case, one easily finds that: $R_g(\rho)\leq ||\rho_{ND}||_{\ell_1}$; although in general $||\rho_{ND}||_{\ell_1} \geq ||\rho_{NB}||_{\ell_1}$, this constitutes a different upper bound for the generalized robustness, independent of that given in (\ref{3-16}).}
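The diagonal-dominance argument can also be checked numerically; the snippet below (an illustrative sketch with randomly generated, hypothetical data) verifies that adding to $-\rho_{ND}$ the diagonal matrix of row-wise moduli produces a positive matrix whose trace equals $||\rho_{ND}||_{\ell_1}$, which is the content of the bound quoted in the footnote.
\begin{verbatim}
# Diagonal-dominance construction behind the l1-norm bound on R_g.
import numpy as np

rng = np.random.default_rng(1)
d = 6
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real                          # a generic density matrix

rho_ND = rho - np.diag(np.diag(rho))               # off-diagonal part
corr = np.diag(np.abs(rho_ND).sum(axis=1))         # row-wise l1 sums on the diagonal
sigma = corr - rho_ND                              # diagonally dominant => positive

assert np.linalg.eigvalsh(sigma).min() > -1e-10
assert np.isclose(np.trace(corr).real, np.abs(rho_ND).sum())   # = ||rho_ND||_l1
\end{verbatim}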
In the presence of just two modes, $M=2$, each forming one of the two partitions, the above considerations further simplify. In this case, the Fock basis in (\ref{2-6}) is given by the set of $N+1$ vectors $\{ |k;N-k\rangle,\ 0\leq k\leq N \}$, without the need of further labels; indeed, each boson can occupy either one of the two modes. Notice that by (\ref{2-4}) this set of Fock vectors constitutes the only basis made of separable pure states \cite{Benatti3}. In this basis, a generic density matrix for the system can then be written as:
\begin{equation}
\rho=\sum_{k,l=0}^N\ \rho_{kl}\ | k; N-k\rangle \langle l; N-l |\ , \quad \sum_{k=0}^N\ \rho_{kk}=1\ . \label{3-17} \end{equation}
By {\sl Proposition 1}, once adapted to this simplified case, it follows that a state as in (\ref{3-17}) is separable if and only if $\rho_{kl}\sim\delta_{kl}$, {\it i.e.} the density matrix $\rho$ is diagonal in the Fock basis. As a consequence, an entangled state can never be made separable by mixing it with a separable state, so that its robustness of entanglement is always infinite.
In the case of the generalized robustness, since there are only diagonal and off-diagonal terms and no blocks, the above discussed upper bounds (\ref{3-10}) and (\ref{3-16}) simplify, becoming:
\begin{equation} i)\ R_g(\rho)\leq \lambda\, (N+1)\ ,\qquad ii)\
R_g(\rho)\leq ||\rho_{ND}||_{\ell_1}\ , \label{3-18} \end{equation}
where, as before, $\rho_{ND}$ is the non-diagonal part of $\rho$ in the Fock basis, and $\lambda$ is, as in (\ref{3-10}), the modulus of the largest negative eigenvalue of $-\rho_{ND}$ ({\it i.e.} the largest positive eigenvalue of $\rho_{ND}$). These bounds can be explicitly evaluated for specific classes of states as shown below.
\noindent {\bf Examples:} {\sl i)} Let us first consider pure states of the form:
\begin{equation}
\rho\equiv|\psi\rangle\langle\psi|\ ,\qquad
|\psi\rangle= {1\over\sqrt{N+1}}\sum_{k=0}^N p_k\, |k;N-k\rangle\ ,\qquad p_k=e^{i\varphi_k}\ . \label{3-19} \end{equation}
The non-diagonal part of the matrix $\rho$ is given by $\rho_{ND}=(\Phi-\mathbbm{1})/(N+1)$
where $\Phi=\sum_{k,l} e^{i(\varphi_k-\varphi_l)}\, |k;N-k\rangle \langle l;N-l|$ is the $(N+1)\times(N+1)$ matrix of phase differences; its eigenvalues are zero and $N+1$, so that the largest negative eigenvalue of $-\rho_{ND}$ is in modulus
$N/(N+1)$. From the first bound in (\ref{3-18}) one therefore gets: $\ R_g(\rho)\leq N$. This is also the result of the second bound, since the norm $||\rho_{ND}||_{\ell_1}$ is also equal to $N$.
\break {\sl ii)} Nevertheless, the two bounds in (\ref{3-18})
do not in general coincide, as can be seen by slightly generalizing the states in (\ref{3-19}) by allowing the coefficients $p_k$ to acquire a non unit norm, $p_k=|p_k| e^{i\varphi_k}$,
$\sum_k |p_k|^2=1$. By choosing the norms and phases of the $p_k$'s to be uniformly distributed, one can easily generate states for which the second bound in (\ref{3-18}) is more stringent.
\break {\sl iii)} Note, however, that the hierarchy of the bounds in (\ref{3-18}) can be reversed, as it happens for instance with the following mixed state:
\begin{equation}
\rho=\frac{1}{N+1}\sum_{k=0}^N |k;N-k\rangle\langle k;N-k|-\frac{1}{N(N+1)}
\sum_{\substack{k,l=0\\k\neq l}}^N |k;N-k\rangle\langle l;N-l|\ . \label{3-20} \end{equation}
Indeed, now one has
$\rho_{ND}=(\mathbbm{1}-E)/N(N+1)$, where $E$ is the matrix with all entries equal to one. One easily checks that $||\rho_{ND}||_{\ell_1}=1$, while the modulus of the largest negative eigenvalue $\lambda$ of $-\rho_{ND}$ is $1/N(N+1)$. Therefore, from the first bound in (\ref{3-18}) one gets $R_g(\rho)\leq 1/N$, which is lower than the second one.
$\Box$
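The two bounds in (\ref{3-18}) can also be reproduced numerically for the states of Examples {\sl i)} and {\sl iii)}; the following Python sketch is illustrative only, with a small, arbitrarily chosen number of bosons and arbitrarily chosen phases.
\begin{verbatim}
# Bounds (3-18) for the two-mode states of Examples i) and iii).
import numpy as np

def two_mode_bounds(rho):
    Np1 = rho.shape[0]                       # N+1 Fock vectors |k; N-k>
    rho_ND = rho - np.diag(np.diag(rho))
    lam = np.linalg.eigvalsh(rho_ND).max()   # = modulus of most negative eigenvalue of -rho_ND
    return lam * Np1, np.abs(rho_ND).sum()   # bound i), bound ii)

N = 5
phases = np.exp(1j * np.linspace(0.0, 3.0, N + 1))        # unit-modulus coefficients
psi = phases / np.sqrt(N + 1)
print(two_mode_bounds(np.outer(psi, psi.conj())))          # (N, N) = (5, 5)

rho3 = -np.ones((N + 1, N + 1)) / (N * (N + 1))            # state (3-20)
np.fill_diagonal(rho3, 1.0 / (N + 1))
print(two_mode_bounds(rho3))                               # (1/N, 1) = (0.2, 1.0)
\end{verbatim}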
\section{On the geometry of $N$-boson states}
As shown by the previous results, the properties of the states of a system of $N$ bosons turn out to be rather different from, and to a certain extent richer than, those of states describing distinguishable particles. This is surely a consequence of the indistinguishability of the system elementary constituents, but also of the presence of the additional constraint that fixes the total number of particles to be $N$. This additional ``rigidity'' allows nevertheless a detailed description of the geometric structure of the set of $N$-boson states.
Let us first consider the set of entangled states. As mentioned before, for systems of identical particles, negativity, as defined in (\ref{2-10}), turns out to be a much more exhaustive entanglement criterion than in systems of distinguishable particles; it then seems appropriate to look for states that maximize it.
\noindent {\bf Proposition 4.} {\sl Given any bipartition $(m, M-m)$, the negativity ${\cal N}(\rho)$ is maximized by pure states that have all the Schmidt coefficients nonvanishing and equal to a normalizing constant.}
\noindent {\sl Proof.} First observe that the negativity is a convex function \cite{Vidal1}, {\it i.e.} it satisfies the inequality
\begin{equation} {\cal N}\Big(\sum_i p_i\,\rho_i\Big)\leq \sum_i p_i\, {\cal N}(\rho_i)\ ,\qquad p_i\geq0\ ,\qquad \sum_i p_i=1\ , \label{4-1} \end{equation}
for any convex combination $\sum_i p_i\,\rho_i$ of density matrices $\rho_i$. Since any density matrix can be written as a convex combination of projectors over pure states, one can limit the search to pure states; in a given $(m, M-m)$-bipartition, they can be expanded in terms of the Fock basis (\ref{2-6}) as:
\begin{equation}
|\psi\rangle=\sum_{k=0}^N |\psi_k\rangle\ ,\qquad
|\psi_k\rangle\equiv\sum_{\sigma=1}^{D_k}\sum_{\sigma'=1}^{D_{N-k}}
\psi_{k\sigma\sigma'}\, |k,\sigma;N-k,\sigma'\rangle\ ,\qquad
\sum_k\sum_{\sigma\sigma'} |\psi_{k\sigma\sigma'}|^2=1\ . \label{4-2} \end{equation}
As observed before, for each $k$, the set of vectors $\{|k,\sigma;N-k,\sigma'\rangle\}$ spans a subspace ${\cal H}_k\subset {\cal H}$ of finite dimension
$D_k\, D_{N-k}$ of the total Hilbert space $\cal H$, and the component $|\psi_k\rangle$ is a vector of this space. Recalling {\sl Remark 3}, one can then write it in Schmidt form,
\begin{equation}
|\psi_k\rangle\equiv\sum_{\alpha=1}^{{\cal D}_k}
\tilde\psi_{k\alpha}\, ||k,\alpha;N-k,\alpha\rangle\rangle\ ; \label{4-3} \end{equation}
in this decomposition, the orthonormal vectors $\{||k,\alpha;N-k,\alpha\rangle\rangle\}$ form the Schmidt basis, with ${\cal D}_k={\rm min}\{D_k, D_{N-k}\}$ ({\it cf.} (\ref{2-6})), while the Schmidt coefficients $\tilde\psi_{k\alpha}$ are non negative real numbers, satisfying
the normalization condition $\sum_k\sum_\alpha (\tilde\psi_{k\alpha})^2=1$. In this new basis, one can easily compute the negativity of the density matrix $|\psi\rangle\langle\psi|$,
\begin{equation}
{\cal N}\big(|\psi\rangle\langle\psi|\big)= {1\over2}\bigg[\bigg(\sum_{k=0}^N\sum_{\alpha=1}^{{\cal D}_k} \tilde\psi_{k\alpha}\bigg)^2-1\bigg]\ . \label{4-4} \end{equation}
Clearly, the negativity increases monotonically with the sum $\sum_k\sum_\alpha \tilde\psi_{k\alpha}$, which therefore needs to be maximized under the constraint $\sum_k\sum_\alpha (\tilde\psi_{k\alpha})^2=1$. By the Cauchy--Schwarz inequality this sum is at most $\sqrt{\cal D}$, with $\cal D$ the total number of Schmidt coefficients, and equality holds precisely when they are all equal; hence the maximum is attained when all coefficients $\tilde\psi_{k\alpha}$ are equal,
\begin{equation} \tilde\psi_{k\alpha}={1\over\sqrt{\cal D}}\ ,\qquad {\cal D}=\sum_{k=0}^N {\cal D}_k\ , \label{4-5} \end{equation}
so that ${\cal N}\big(|\psi\rangle\langle\psi|\big)=({\cal D}-1)/2$. Further, all Schmidt coefficients need to be nonvanishing in order to reach this maximum value of $\cal N$; indeed, for any state
$|\psi'\rangle$ with Schmidt number ${\cal D}'< {\cal D}$, one has:
${\cal N}\big(|\psi'\rangle\langle\psi'|\big)<({\cal D}-1)/2$.
$\Box$
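\noindent The formula (\ref{4-4}) is easy to check numerically. In the following sketch (ours, purely illustrative), the negativity is computed from the trace norm of the partial transpose of a random pure state written in its Schmidt basis and compared with the right-hand side of (\ref{4-4}); the block structure is irrelevant here, since only the Schmidt coefficients enter:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
c = rng.random(5)
c /= np.linalg.norm(c)                  # normalized Schmidt coefficients
d = len(c)

psi = np.diag(c).reshape(d * d)         # |psi> = sum_a c_a |a>|a> as a vector
rho = np.outer(psi, psi)
# partial transposition with respect to the second factor
rho_pt = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)
negativity = 0.5 * (np.abs(np.linalg.eigvalsh(rho_pt)).sum() - 1.0)

print(negativity, 0.5 * (c.sum() ** 2 - 1.0))   # the two values coincide
\end{verbatim}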
\noindent In analogy with the standard case, states for which the negativity reaches the maximum value $({\cal D}-1)/2$ can be called ``maximally entangled states''.
\noindent
{\bf Remark 5:} {\sl i)} Given a fixed bipartition $(m, M-m)$, let us consider such a maximally entangled state $\rho=|\psi\rangle\langle\psi|$. By tracing over the degrees of freedom of the modes pertaining to the second partition, one obtains the reduced density matrix $\rho^{(1)}$ describing the first $m$ modes only:
\begin{equation} \rho^{(1)}={1\over {\cal D}} \sum_{k=0}^N\sum_{\alpha=1}^{{\cal D}_k}
||k,\alpha\rangle\rangle\, \langle\langle k,\alpha||\ ; \label{4-6} \end{equation}
similarly, by tracing over the first partition, one obtains the reduced density matrix $\rho^{(2)}$ describing the remaining $M-m$ modes. In the case of distinguishable particles, either $\rho^{(1)}$ or $\rho^{(2)}$ is proportional to the identity matrix; this is no longer true here: in fact, given a block with $k$ fixed, this happens only when $D_k\leq D_{N-k}$, {\it i.e.} when ${\cal D}_k=D_k$.
\break {\sl ii)} On the other hand, given the expression (\ref{4-6}), one easily computes its von Neumann entropy, obtaining: $S(\rho^{(1)})=-{\rm Tr}\big[\rho^{(1)}{\rm ln}\rho^{(1)}\big]={\rm ln}\,{\cal D}$, as in the standard case.
\break {\sl iii)} Similarly, the purity of $\rho^{(1)}$ is given by: ${\rm Tr}\big[\big(\rho^{(1)}\big)^2\big]=1/{\cal D}$. This result and that of the previous remark can equally be taken as alternative, equivalent definitions of the notion of maximally entangled states.
$\Box$
Let us now come to the analysis of the space $\cal S$ of separable states as determined by a generic bipartition $(m, M-m)$. Among them, the totally mixed state $\rho_{\rm mix}$, proportional to the unit matrix, stands out because of its peculiar properties. In fact, recall that in the case of distinguishable particles, $\rho_{\rm mix}$ always lies in the interior of $\cal S$ \cite{Zyczkowski,Bengtsson}; instead, now one finds:
\noindent
{\bf Proposition 5.} {\sl Given any bipartition $(m, M-m)$ and an associated separable basis made of the Fock vectors $|k, \sigma;N-k, \sigma'\rangle$, the totally mixed state,
\begin{equation} \rho_{\rm mix}={1\over D}\sum_{k=0}^N\ \sum_{\sigma,\sigma'}\
| k, \sigma; N-k, \sigma'\rangle \langle k, \sigma; N-k, \sigma' |\ , \label{4-7} \end{equation}
lies on the border of the space $\cal S$ of separable states.}
\noindent {\sl Proof.} Let us take a state $\rho_{\rm ent}$ which can not be written in block diagonal form as in (\ref{2-11}), (\ref{2-12}); by {\sl Proposition 1}, it is entangled. Then, consider the combination
\begin{equation} \rho_\epsilon={1\over 1+\epsilon}\big(\rho_{\rm mix}+\epsilon\, \rho_{\rm ent}\big)\ , \qquad \epsilon>0\ . \label{4-8} \end{equation}
For any $\epsilon>0$, this combination is never separable, since it is not block diagonal. In other terms, in any vicinity of $\rho_{\rm mix}$ one always finds entangled states, so that $\rho_{\rm mix}$ must lie on the border of $\cal S$.
$\Box$
\noindent {\bf Remark 6:} {\sl i)} Note that similar considerations apply to all separable states: there always exist small perturbations of separable, necessarily block diagonal, states that make them not block diagonal, hence entangled. Instead, in the case of distinguishable particles, almost all separable states remain separable under sufficiently small arbitrary perturbations \cite{Bengtsson,Bandyopadhyay}.
\break
{\sl ii)} Further, using analogous steps, one can show that a Werner-like state, $\rho_W=p\, |\psi\rangle\langle\psi| + (1-p)\, \rho_{\rm mix}$,
$0\leq p\leq 1$, with $|\psi\rangle$ a maximally entangled state, is entangled for any nonvanishing value of the parameter $p$, while for distinguishable $d$-level particles, this happens only when $1/(d+1)<p\leq 1$ \cite{Werner1}.
$\Box$
The result of {\sl Proposition 5} is most strikingly illustrated by considering a system of $N$ bosons that can occupy two modes ($M=2$), each of which forms a partition. In the previous section, we have seen that a generic state $\rho$ for this system can be written in terms of the Fock basis as in (\ref{3-17}); further, it is separable if and only if the density matrix $\rho$ is diagonal in this basis. Instead, $\rho_\epsilon$ given in (\ref{4-8}) develops non-diagonal entries as soon as $\epsilon$ becomes nonvanishing.
As mentioned before, in this case the set of $N+1$ vectors $\{|k;N-k\rangle\}$ constitutes the only basis made of separable pure states \cite{Benatti3}. Therefore, the decomposition of a generic separable state $\rho$ in terms of projections on separable states turns out to be unique. As a consequence, the set $\cal S$ of separable states is a sub-variety of the convex space $\cal M$ of all states that turns out to be a simplex, whose vertices are given precisely by these projections.
The space of states of $N$ bosons confined in two modes is then much more geometrically structured than in the case of systems made of distinguishable particles. Indeed, given a complete set of observables $\{ {\cal O}_i \}$, one can decompose any density matrix $\rho$ as:
\begin{equation} \rho=\sum_i\, \rho_i\ \Bigg( { \sqrt{\rho}\, {\cal O}_i\, \sqrt{\rho} \over {\rm Tr}\big[{\cal O}_i\rho\big] }\Bigg)\ ,\qquad \rho_i\equiv{\rm Tr}\big[{\cal O}_i\rho\big]\ ,\quad {\cal O}_i>0,\quad \sum_i {\cal O}_i={1}; \label{4-9} \end{equation}
this decomposition is over pure states, whenever the operators ${\cal O}_i$ are chosen to be projectors. Therefore, in general, there are infinitely many ways of expressing a density matrix as a convex combination of projectors, even when these projectors are made of separable states. As seen, this conclusion no longer holds for systems of identical particles.
The totally mixed state $\rho_{\rm mix}$ has also another interesting property:
\footnote{For systems of distinguishable particles, the problem of finding so-called ``absolutely separable states'', {\it i.e.} states that are separable in any choice of tensor product structure, has been discussed in \cite{Kus,Bengtsson}.}
\noindent {\bf Proposition 6.} {\sl The totally mixed state is the only state that remains separable for any choice of bipartition.}
\noindent {\sl Proof.} First, let us again consider the simplest case $M=2$. Any Bogolubov transformation maps the set of creation and annihilation operators $a_i^\dagger$ and $a_i$, $i=1,2$ into new ones $b_i^\dag$, $b_i$, $i=1,2$; a simple example is given by
\begin{equation} b_1={a_1+a_2\over\sqrt{2}}\ ,\qquad b_2={a_1-a_2\over\sqrt{2}}\ , \label{4-10} \end{equation}
and their hermitian conjugates. The operators $b_i^\dag$, $b_i$ define a new bipartition $({\cal B}_1, {\cal B}_2)$ of the full algebra $\cal B$ of bounded operators, distinct from the original one $({\cal A}_1, {\cal A}_2)$
generated by $a_i^\dag$, $a_i$. States (and operators as well) that are local in one bipartition might turn out to be non-local in the other. For instance, the Fock states $\{|k,N-k\rangle\}$ turn out to be entangled with respect to the new bipartition defined by the transformation (\ref{4-10}); indeed, one finds:
\begin{equation}
|k,N-k\rangle={1\over 2^{N/2}}{1\over\sqrt{k!(N-k)!}}\sum_{r=0}^k\sum_{s=0}^{N-k}
{k\choose r}{N-k\choose s}(-1)^{N-k-s} \big(b_1^\dag\big)^{r+s}\, \big(b_2^\dag\big)^{N-r-s}\, |0\rangle\ , \label{4-11} \end{equation}
so that $|k,N-k\rangle$ is a combination of $({\cal B}_1, {\cal B}_2)$-separable states.
A similar conclusion applies to the mixed states (\ref{3-17}): in general, any separable state $\rho_{\rm sep}=\sum_k \rho_k\, |k;N-k\rangle\langle k; N-k|$ is mapped by a Bogolubov transformation into a non-diagonal density matrix, and therefore into an entangled state. In fact, one can always find a unitary transformation $U$ that maps any diagonal matrix $\rho$ into a non-diagonal one $U\rho\, U^\dagger$. In particular, when $\rho$ is not degenerate, the transformed matrix $U\rho\, U^\dagger$ remains diagonal only if the operator $U$ is itself a diagonal matrix; in this case, however, the corresponding Bogolubov transformation is trivial and does not define a new partition. The only density matrix that remains invariant under all unitary transformations is the one proportional to the unit matrix, {\it i.e.} the totally mixed one. This conclusion can easily be extended to the multimode case: given any separable state in a given $(m, M-m)$ bipartition, $\rho_{\rm sep}=\sum_{k=0}^N\ \sum_{\sigma,\sigma'}\ \rho_{k\sigma\sigma'}
| k, \sigma; N-k, \sigma'\rangle \langle k, \sigma; N-k, \sigma' |$, one can always construct a Bogolubov transformation that maps it into an entangled one: it is sufficient to apply the above considerations to any couple of modes belonging to separate partitions. Therefore, also in this more general setting, only the state $\rho_{\rm mix}$ in (\ref{4-7}) is left invariant by all Bogolubov transformations.
$\Box$
Thanks to these results, the global geometry of the space of $N$-boson states starts to emerge more clearly. Again, the two-mode case is the easier one to describe. We have seen that by fixing a bipartition one selects the set $\cal S$ of separable states; this is a sub-variety of the convex space $\cal M$ of all states, namely a simplex with $N+1$ vertices given by the projectors onto the Fock states. Changing the bipartition through a Bogolubov transformation produces a new simplex, having only one point in common with the starting one, the state $\rho_{\rm mix}$. The geometry of the space of two-mode $N$-boson states therefore has a sort of star-like topology, with the various simplexes sharing just one point, the totally mixed state.
The case of $M$ modes is more complex. For a fixed bipartition, the space of separable states is a sub-space of the convex space of all states which is no longer strictly a simplex: the decomposition of a generic separable state $\rho$ is no longer unique, since the Fock states in (\ref{2-6}) are no longer the only separable pure states: for each $k$, reshufflings over the indices $\sigma$, $\sigma'$ are still allowed. Nevertheless, also in this case the global state space presents a sort of star-like topology: only one point is shared by all the separable sub-spaces associated with the different bipartitions, the totally mixed state $\rho_{\rm mix}$.
\section{Outlook}
In many-body systems composed of a fixed number of identical particles, the associated Hilbert space no longer exhibits the familiar particle tensor product structure. The usual notions of separability and entanglement based on such a structure are therefore inapplicable: a generalized definition of separability is needed and can be given in terms of commuting algebras of observables instead of focusing on the particle tensor decomposition of states. The selection of these algebras is largely arbitrary, making it apparent that in systems of identical particles entanglement is not an absolute notion: it is strictly tied to specific choices of commuting sets of system observables.
Using these generalized definitions, we have studied bipartite entanglement in systems composed of $N$ identical bosons that can occupy $M$ different modes. More specifically, we have analyzed to what extent entangled states are robust against mixing with another state, either separable or entangled. We have found that, in general, the entanglement contained in bosonic states is much more robust than the one found in systems of distinguishable particles. This result has been obtained by analyzing the so-called {\sl robustness} and {\sl generalized robustness} of entanglement, for which explicit expressions and upper bounds, respectively, have been given. A quite general characterization of the geometry of the space of bosonic states has also been obtained: this space exhibits a star-like structure composed of intersecting subspaces, each of which is determined by a given bipartition through the corresponding subset of separable states. All these separable subspaces share one and only one point, the totally mixed state, hence the star-shaped topology.
As a final remark, notice that all the above results can be generalized to the case of systems where the total number of particles is not fixed, but commutes with all physical observables ({\it i.e.} in the presence of a superselection rule \cite{Bartlett}). In such a situation, a general density matrix $\rho$ can be written as an incoherent mixture of states $\rho_N$ with a fixed number $N$ of particles, each having the general form (\ref{2-9}); explicitly:
\begin{equation} \rho=\sum_N \lambda_N \rho_N \ ,\qquad \lambda_N\geq 0\ ,\qquad \sum_N \lambda_N=1\ . \label{5-1} \end{equation}
The state $\rho$ is thus a convex combination of matrices $\rho_N$ having support on orthogonal spaces. Arguments similar to those used in proving {\sl Proposition 2} (and {\sl Proposition 3}) allow us to conclude that both the robustness and the generalized robustness of entanglement of the state $\rho$ in (\ref{5-1}) are given by the weighted average of the robustness of the components $\rho_N$, {\it i.e.} $R(\rho)=\sum_N \lambda_N\, R(\rho_N)$ in both cases. The problem of computing the robustness of incoherent particle number mixtures (\ref{5-1}) is then reduced to that of determining the robustness of the corresponding components at fixed particle number, for which the considerations and results discussed in the previous sections apply.
\end{document} |
\begin{document}
\preprint{V2}
\title{Long-Distance Atom-Photon Entanglement}
\author{W. Rosenfeld} \affiliation{Fakult\"at f\"ur Physik, Ludwig-Maximilians-Universit\"at M\"unchen, D-80799 M\"unchen, Germany}
\author{F. Hocke} \affiliation{Fakult\"at f\"ur Physik, Ludwig-Maximilians-Universit\"at M\"unchen, D-80799 M\"unchen, Germany}
\author{F. Henkel} \affiliation{Fakult\"at f\"ur Physik, Ludwig-Maximilians-Universit\"at M\"unchen, D-80799 M\"unchen, Germany}
\author{M. Krug} \affiliation{Fakult\"at f\"ur Physik, Ludwig-Maximilians-Universit\"at M\"unchen, D-80799 M\"unchen, Germany}
\author{J. Volz} \affiliation{Fakult\"at f\"ur Physik, Ludwig-Maximilians-Universit\"at M\"unchen, D-80799 M\"unchen, Germany}
\author{M. Weber} \email[corresponding author: ]{[email protected]} \affiliation{Fakult\"at f\"ur Physik, Ludwig-Maximilians-Universit\"at M\"unchen, D-80799 M\"unchen, Germany}
\author{H. Weinfurter} \affiliation{Fakult\"at f\"ur Physik, Ludwig-Maximilians-Universit\"at M\"unchen, D-80799 M\"unchen, Germany} \affiliation{Max-Planck-Institut f\"ur Quantenoptik, 85748 Garching, Germany}
\date{\today}
\begin{abstract} We report the observation of entanglement between a single trapped atom and a single photon at remote locations. The degree of coherence of the entangled atom-photon pair is verified via appropriate local correlation measurements, after communicating the photon via an optical fiber link of 300 m length. In addition we measured the temporal evolution of the atomic density matrix after projecting the atom via a state measurement of the photon onto several well defined spin states. We find that the state of the single atom dephases on a timescale of 150 $\mu$s, which represents an important step toward long-distance quantum networking with individual neutral atoms. \end{abstract}
\pacs{03.65.Ud,03.67.Mn,32.80.Qk,42.50.Xa}
\maketitle
Entanglement between light and matter \cite{Blinov04,Matsukevich05,Volz06,Sherson06} plays an outstanding role in long-distance quantum communication, allowing efficient distribution of quantum information over, in principle, arbitrarily large distances. By interfacing matter-based quantum processors and photonic communication channels, light-matter entanglement is regarded as a fundamental building block for future applications such as the quantum repeater \cite{Briegel98} and quantum networks. In addition, this new kind of entanglement would allow, e.g., quantum teleportation \cite{Bennett93} of quantum states of light onto matter \cite{Sherson06,Chen08} as well as the heralded generation of entanglement between quantum memories \cite{Moehring07} via entanglement swapping \cite{Zukowski93}. Light-matter entanglement is thus not only crucial for long-range quantum communication but also forms the basis for a first loophole-free test of Bell's inequality with a pair of entangled atoms at remote locations \cite{Simon03,Volz06}.
So far, three different approaches entangling light and matter have been pursued. The spontaneous decay in a lambda-type transition of a single trapped atom/ion enables one to entangle the internal degree of freedom of the emitted photon with the spin-state of the atom \cite{Blinov04,Volz06}. Experiments in this direction recently achieved the observation of entanglement between two individually trapped ions \cite{Moehring07}, the remote preparation of an atomic quantum memory \cite{Rosenfeld07}, and the realization of a single-atom single-photon quantum interface based on optical high-Q cavities \cite{Wilk07}. Other approaches are based on entanglement between coherently scattered photons and collective spin-excitations in atomic ensembles \cite{Matsukevich05,Chou05,Chen08}, and entanglement between continuous variables of light and matter \cite{Julsgaard04,Sherson06}.
The relevance of light-matter entanglement for quantum networking arises from the fact that it establishes non-classical correlations between a localized matter-based quantum memory and an optical carrier of quantum information which can easily be sent to a distant location. Together with appropriate quantum communication protocols like quantum teleportation, this allows one to map photonic quantum information (QI) into quantum memories, to buffer QI, and to reconvert it later on into photonic quantum carriers. In this context, decoherence of the photonic quantum channel as well as decoherence of the matter-based quantum memory are important figures of merit, setting a limit on how far quantum information can be distributed and on how long this information can be stored, respectively. Therefore, the ability to generate and preserve light-matter entanglement over large distances \cite{Riedmatten06} opens the possibility for long-distance distribution of quantum information \cite{Simon07}.
In this Letter, we report the first direct observation of entanglement between the internal state of a single trapped $^{87}$Rb atom and the polarization state of a single photon which has passed through 300 m of optical fiber. This is achieved by actively stabilizing both the birefringence of the optical fiber link and the ambient magnetic fields in order to minimize dephasing of the atomic memory qubit, stored in the atomic ground state 5$^2S_{1/2}, F=1, m_F=\pm1$. Detailed coherence measurements of the atomic qubit show that photonic quantum information can be stored for about 150 $\mu$s.
\begin{figure}
\caption{(Color online). Schematic of long-distance atom-photon
entanglement. (a) During the spontaneous decay of a single optically trapped
$^{87}$Rb atom on the transition 5$^2P_{3/2}, F'=0 \rightarrow$ 5$^2S_{1/2},
F=1$ the polarization of the emitted photon gets entangled with the final
spin-state of the atom. (b) The emitted photon is coupled into a single-mode
optical fiber and communicated to a remote location where a polarization
analysis is performed. To overcome thermally and mechanically induced
fluctuations of the fiber birefringence, an active polarization compensation
is used. For this purpose, reference laser pulses are sent through the optical fiber
and the output polarizations are characterized in a reference
polarimeter. With the help of a software algorithm and a dynamic
polarization controller it is ensured that input and output polarizations
are identical. }
\label{Fig1}
\end{figure}
\begin{figure}
\caption{(Color online). Verification of atom-photon entanglement. Probability
of detecting the atomic qubit in (a)
$\left|\downarrow\right\rangle_x:=1/\sqrt{2}(\left|\uparrow\right\rangle_z -
\left|\downarrow\right\rangle_z)$ and (b)
$\left|\downarrow\right\rangle_y:=1/\sqrt{2}(\left|\uparrow\right\rangle_z -
i \left|\downarrow\right\rangle_z)$ conditioned on the detection of the
photon in detector APD$_1$ or APD$_2$, where the photonic qubit is projected
onto the states $1/\sqrt{2}(|\sigma^+\rangle \pm
e^{2i\beta}|\sigma^-\rangle)$. The phase $\beta$ can be set with a rotatable
$\lambda/2$ waveplate in front of a polarizing beam splitter (PBS). For
$\beta=0^{\circ}$ respectively $\beta/2=22.5^{\circ}$ the photon is analyzed
in the complementary measurement bases $\sigma_x$ and
$\sigma_y$. }
\label{Fig2}
\end{figure}
In our experiment, entanglement between the spin of a single optically trapped $^{87}$Rb atom and the polarization of a photon is generated in the spontaneous emission process in a lambda-type transition \cite{Volz06}, resulting in the maximally entangled atom-photon state \begin{equation}
|\Psi\rangle = \frac{1}{\sqrt{2}} (|1,-1\rangle |\sigma^+ \rangle +
|1,+1\rangle |\sigma^- \rangle ). \end{equation}
Here the two circular polarization states $\{\left|\sigma^{+}\right\rangle,
|\sigma^{-}\rangle\}$ of the photon define the photonic qubit, and the angular momentum states $\{\left|F=1,m_F=-1\right\rangle:=$
$\left|\downarrow\right\rangle_z$, $\left|F=1,m_F=+1\right\rangle:=$
$\left|\uparrow\right\rangle_z\}$ the atomic qubit, respectively. While the atom is spatially localized in the optical dipole trap, the emitted photon is coupled into a single mode optical fiber and guided to a remote location where a polarization analysis is performed. To measure and compensate drifts of the fiber birefringence, reference laser pulses with two complementary polarizations $(V,+45^{\circ})$ are sent through the optical fiber (incorporating a fiber-based dynamic polarization controller) and the respective output polarizations are analyzed with a reference polarimeter (see Fig. \ref{Fig1} (b)). Based on the difference between input and output polarization a software algorithm calculates new parameters for the dynamic polarization controller thereby optimizing the alignment. One such step takes 0.7 s, which is currently limited by the switching speed of the opto-mechanical shutters. These steps are repeated iteratively until input and output polarizations are identical within $99.9\%$ \cite{Hocke07}. Once the algorithm has compensated the fiber birefringence, typically after 10 steps, single photons from the atom are sent through the fiber. To verify atom-photon entanglement the internal atomic spin state is measured locally in two complementary measurement bases $\sigma_x$ and $\sigma_y$ with the help of a Stimulated-Raman-Adiabatic-Passage (STIRAP) technique \cite{Vewinger03,Volz06} and correlated with the polarization analysis of the photon. Typically, the bare photon detection efficiency is $1.2 \times 10^{-3}$, including coupling losses into the single mode optical fiber and the limited quantum efficiency of the single photon detectors. Together with transmission losses in the 300 m optical fiber and coupling losses of the dynamic polarization controller this results in the total detection efficiency of $0.6 \times 10^{-3}$. The final event rate of $15$ min$^{-1}$ is mainly caused by frequent reloading of the dipole trap.
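The compensation loop can be illustrated with a highly idealized numerical sketch. In the following Python fragment (our own simplified model, not the experimental control software; the fiber model, controller model and choice of optimizer are assumptions made purely for illustration), the fiber birefringence is represented by an unknown unitary Jones matrix, the dynamic polarization controller by a quarter--half--quarter waveplate sequence with adjustable angles, and the ``software algorithm'' by a generic optimizer that minimizes the residual infidelity of the two reference polarizations:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def waveplate(theta, delta):
    # Jones matrix of a waveplate with retardance delta, rotated by theta
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)]) @ R.T

def controller(angles):
    # quarter-, half-, quarter-wave plate sequence (a universal compensator)
    t1, t2, t3 = angles
    return waveplate(t3, np.pi / 2) @ waveplate(t2, np.pi) @ waveplate(t1, np.pi / 2)

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
fiber = Q                                        # unknown fiber birefringence

refs = [np.array([0.0, 1.0]),                    # V
        np.array([1.0, 1.0]) / np.sqrt(2)]       # +45 degrees

def infidelity(angles):
    out = 0.0
    for e in refs:
        e_out = controller(angles) @ fiber @ e
        out += 1.0 - np.abs(np.vdot(e, e_out)) ** 2
    return out

best = None
for _ in range(5):                               # a few random restarts
    res = minimize(infidelity, rng.uniform(0, np.pi, 3), method="Nelder-Mead")
    if best is None or res.fun < best.fun:
        best = res
print("residual infidelity:", best.fun)          # typically below 1e-3
\end{verbatim}
The iterative character of this toy loop mirrors the experiment, where each measurement-and-update step takes about 0.7 s and roughly ten steps suffice.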
For the analysis of entanglement, we determined the probability of detecting the atomic qubit in $\left|\downarrow\right\rangle_x$ and
$\left|\downarrow\right\rangle_y$ conditioned on the projection of the photon onto the states $1/\sqrt{2}(|\sigma^+\rangle \pm e^{2i\beta}|\sigma^-\rangle)$ (see Fig. \ref{Fig2} (a) and (b)). For $\beta=0^{\circ}$ APD$_1$ and APD$_2$
analyze the photonic qubit in the eigenstates $|H\rangle:=$
$1/\sqrt{2}(|\sigma^+\rangle + |\sigma^-\rangle)$ and $|V\rangle:=$
$1/\sqrt{2}(|\sigma^+\rangle - |\sigma^-\rangle)$ of $\hat{\sigma}_x$, whereas for $\beta=45^{\circ}$ APD$_1$ and APD$_2$ project onto the eigenstates
$\left|+45^{\circ}\right\rangle:=$ $1/\sqrt{2}(|\sigma^+ \rangle - i
|\sigma^-\rangle)$ and $\left|-45^{\circ}\right\rangle:=$
$1/\sqrt{2}(|\sigma^+\rangle + i |\sigma^-\rangle)$ of $\hat{\sigma}_y$. As expected, if a $|V\rangle$-polarized photon is detected, the atom is found with high probability in the corresponding state
$\left|\downarrow\right\rangle_x$, whereas if a $|H\rangle$-polarized photon is registered the atom is with low probability in the state
$\left|\downarrow\right\rangle_x$. Observing similar correlations in the complementary $\sigma_y$ basis of the atom (see Fig. \ref{Fig2}(b)) confirms the entanglement. The measurements in Fig. \ref{Fig2} show that the observed atom-photon pair is in the entangled state $|\Psi\rangle$ (see Eq. 1). To determine the degree of entanglement, sinusoidal functions were fitted to the measured atom-photon correlation data. From the fits we infer a visibility of $V_{\sigma_x}=0.85\pm0.03$ for the analysis of the atomic qubit in $\sigma_x$ and $V_{\sigma_y}=0.75\pm0.03$ for $\sigma_y$, respectively. The limited visibility of the atom-photon correlations is caused mainly by errors in the atomic state detection ($7\%$), accidental photon detection events due to dark counts of the single photon detectors ($3\%$), errors in the preparation of the initial state ($1\%$), polarization drifts in the optical fiber during successive stabilization sequences of the dynamic polarization compensation ($1\%$), and residual shot-to-shot dephasing of the atomic qubit due to fluctuations of the ambient magnetic field. We attribute the significantly reduced visibility in the atomic $\sigma_y$ basis compared to $\sigma_x$ to residual magnetic fields along the $x$ axis, which lead to Larmor precession into the additional Zeeman sublevel $m_F=0$ of the 5$^2S_{1/2}, F=1$ hyperfine ground level \cite{Rosenfeld08a}. To estimate the atom-photon entanglement fidelity $F_{at-ph}$ we assume that errors in the atomic and photonic state detection are isotropic in all three complementary measurement bases (white noise). From this we derive a minimum fidelity $F_{at-ph}$ of $0.85\pm0.02$.
\begin{figure}\label{Fig3}
\end{figure}
For future applications in long-distance quantum communication, absorption losses and depolarization of the photonic qubit in optical fibers are important figures of merit \cite{Huebel07}. However, decoherence effects of the photon are not the only limitation on the distance over which quantum information can be distributed. The main criterion for these applications is the ability to store quantum information in quantum memories with long coherence times. So far, for quantum memories based on Zeeman qubits of neutral atoms, experimental coherence times of several tens of $\mu$s have been demonstrated
\cite{Riedmatten06,Chen08}. In principle, clock-state quantum memories can have much longer coherence times \cite{Kuhr03}, yet, manipulation of the corresponding light-matter entanglement is far less practical. In our case, fluctuating magnetic fields play an important role as they lead to decoherence/dephasing of the atomic Zeeman qubit $|1,\pm 1\rangle$ and consequently to decoherence/dephasing of the entangled atom-photon state.
In order to carefully distinguish between photonic and atomic decoherence we characterized the coherence properties of the atomic quantum memory by measuring the precession of the atomic spin in a magnetic field. This is achieved via quantum state tomography of the atomic ground level 5$^2S_{1/2}, F=1$, reconstruction of the respective density matrix
$\rho=r|\chi\rangle\langle\chi| +(1-r)\hat{1}/3$ \cite{Rosenfeld08a}, and determination of the corresponding purity parameter $r$. In contrast to the fidelity $F=\left\langle \Phi|\rho|\Phi\right\rangle$ which is the overlap between the measured density matrix $\rho$ and a pure target state
$\left|\Phi\right\rangle$, the purity parameter, here $r=\sqrt{\frac{1}{2}\left(3\,{\rm tr}(\rho^2) -1\right)}$, is related to the coherent fraction of the density matrix with respect to the \textit{closest} pure state $|\chi\rangle$ (which is in general unknown). Therefore $r$ is ideally suited to quantify decoherence effects of our atomic quantum memory.
In our spin-precession experiments, the 300 m optical fiber of the first experiment is replaced by a 5 m one, leading to a negligible time delay of $25$ ns between the preparation of the entangled atom-photon pair and the initialization of the atomic spin-state via a projective polarization measurement of the photon. The magnetic bias field is controlled via additional Helmholtz coils and an active feedback loop with an accuracy of
$|B|<2$ mG. After the atomic spin has freely evolved for defined time periods in the magnetic field, tomography of the final atomic spin state was performed by measuring populations of the atomic eigenstates of the Pauli spin-operators $\hat{\sigma}_x$, $\hat{\sigma}_y$, and $\hat{\sigma}_z$ \cite{Rosenfeld07}. In the case where a small magnetic guiding field of $5.5$
mG is applied along the quantization axis $z$, we observe the expected Larmor precession of a spin-1/2 atom (see Fig. \ref{Fig3}), with a 1/e dephasing time of 150 $\mu$s. In the general case where the magnetic guiding field is not along the $z$ axis, or in the case where no guiding field is applied, the atom can precess out of the qubit subspace $\{|1,-1\rangle, |1,+1\rangle\}$ into the
$|F=1,m_F=0\rangle$ Zeeman state of the $5^2S_{1/2}, F=1$ hyperfine ground level. Thus, for complete characterization of decoherence effects it is necessary to reconstruct the $3\times3$ spin-1 density matrix $\rho$. This is possible with certain constraints. Coherences between the states
$\left|1,\pm1\right\rangle$ and $\left|1,0\right\rangle$ can not be measured with the present atomic state detection technique, as the applied STIRAP pulses analyze only the $\{\left|1,-1\right\rangle, \left|1,+1\right\rangle\}$
qubit subspace in a complete way \cite{Volz06,Rosenfeld07}. However, the population in the $|1,0\rangle$ state can be inferred as the population missing in the $\{|1,-1\rangle, |1,+1\rangle\}$ subspace. To reconstruct the density matrix $\rho$ of the spin-1 state, we apply a worst-case assumption that there is no coherence between the $|1,0\rangle$ state and the others, and set the corresponding components to $0$. The resulting purity parameter $r$ thus is a conservative lower bound on the effective coherence of the atomic state.
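The corresponding conservative reconstruction is simple enough to state explicitly. The following sketch (our own illustration with hypothetical variable names, not the actual analysis code) builds the spin-1 density matrix from the measured populations and the qubit coherence, sets the unmeasured coherences involving $|1,0\rangle$ to zero, and evaluates the purity parameter $r=\sqrt{\tfrac12(3\,{\rm tr}\rho^2-1)}$:
\begin{verbatim}
import numpy as np

def purity_parameter(populations, qubit_coherence):
    # populations = (p_minus, p_zero, p_plus); qubit_coherence = <1,-1|rho|1,+1>
    p_m, p_0, p_p = populations
    rho = np.array([[p_m, 0.0, qubit_coherence],
                    [0.0, p_0, 0.0],
                    [np.conj(qubit_coherence), 0.0, p_p]], dtype=complex)
    return np.sqrt(max(0.0, 0.5 * (3.0 * np.trace(rho @ rho).real - 1.0)))

# ideal superposition (|1,-1> + |1,+1>)/sqrt(2) gives r = 1,
# the fully mixed spin-1 state gives r = 0
print(purity_parameter((0.5, 0.0, 0.5), 0.5))      # -> 1.0
print(purity_parameter((1/3, 1/3, 1/3), 0.0))      # -> 0.0
\end{verbatim}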
\begin{figure}\label{Fig4}
\end{figure}
In a second measurement run no guiding field is applied (corresponding to the situation of a magnetic zero field with an accuracy of $\pm 2$ mG) and dephasing of the atomic superposition states
$1/\sqrt{2}(|1,-1\rangle\pm|1,+1\rangle)$ is analyzed by reconstruction of the minimal purity parameter $r$. Here we find transversal $1/e$ dephasing times of $T_2^* = 75\ldots 150~\mu$s, see Fig. \ref{Fig4}(a). For the states $|1,\pm 1\rangle$, see Fig. \ref{Fig4}(b), the longitudinal dephasing times are estimated by extrapolation to be $> 0.5$ ms. The faster dephasing of superposition states shows that the fluctuations and shot-to-shot noise of the effective magnetic field are mainly along the quantization axis $z$. This effect is due to a small fraction of circularly polarized dipole-trap light (below 1 $\%$), which, in combination with a finite atomic temperature of 150 $\mu$K, leads to a position-dependent differential light-shift \cite{Rosenfeld08a}.
In this Letter, we successfully demonstrated the generation and verification of entanglement between a single trapped neutral atom and a single photon separated by 300 m of optical fiber. Our implementation includes an active stabilization of ambient magnetic fields with an accuracy of $|B|<2$ mG, resulting in a dephasing time of the atomic memory level $5^2S_{1/2}, F=1$ of $\simeq 150~\mu$s. Longer coherence times could be reached with higher accuracy of the polarization of the dipole trap light, lower temperature of the trapped atom, and better stability of the magnetic field. Nevertheless, together with the implemented stable optical fiber link, already the current setup should allow the entanglement of two optically trapped $^{87}$Rb atoms at locations separated by several hundred meters, ready for future applications in long-distance quantum networking with neutral atoms and a loophole-free test of Bell's inequality \cite{Volz06}.
\end{document} |
\begin{document}
\author{M.A.\,Lifshits\footnote{Saint Petersburg State University, 199034, Saint Petersburg, University Emb., 7/9. {\tt [email protected]}}, S.E.\,Nikitin\footnote{Saint Petersburg State University, 199034, Saint Petersburg, University Emb., 7/9. {\tt [email protected]}}
}
\title{Energy saving approximation of Wiener process \\ under unilateral constraints\footnote{The work was supported by RSF grant 21-11-00047.}}
\date{\today}
\maketitle
\begin{abstract}
We consider the energy saving approximation of a Wiener process under unilateral constraints.
We show that, almost surely, on large time intervals the minimal energy necessary for the approximation logarithmically depends on the interval's length. We also construct an adaptive approximation strategy that is optimal in a class of diffusion strategies and also provides the logarithmic order of energy consumption.
\end{abstract}
\section{Problem setting and main result} Let $AC[0,T]$ denote the space of absolutely continuous functions on the interval $[0,T]$. For $h\in AC[0,T]$ we call the quantity \[
|h|_T^2 := \int_0^T h^\prime(t)^2\, dt \] the kinetic energy of $h$. In the works \cite{BL,KL2,IKL,KL1,LS,LSiu,Scher} the approximation of a random process sample path by a function $h$ of smallest energy was considered under various constraints on the closeness between $h$ and the sample path. In particular, in \cite{LS} the energy saving approximation was studied for a Wiener process $W$ under {\it bilateral} uniform constraints.
For $T>0,r>0$ let us define the set of admissible approximations as \[
M^\pm_{T, r} := \left\{h \in AC[0, T] \, \big| \, \forall t \in [0, T] :
W(t) -r \le h(t) \le W(t)+ r ; h(0) = 0 \right\} \] and let \[
I^\pm_W(T,r) := \inf \left\{|h|_T^2 \: \big| \: h \in M^\pm_{T,r} \right\}. \] It was proved in \cite{LS} that for every fixed $r>0$ it is true that \[
\frac{I^\pm_W(T,r)}{T}
\stackrel{\text{a.s.}}{\longrightarrow} {\mathcal C}^2\, r^{-2},
\qquad \textrm{as } T\to \infty, \] where ${\mathcal C}\approx 0.63$ is some absolute constant (the exact value of ${\mathcal C}$ is unknown), i.e. the optimal approximation energy grows linearly in time.
In this work, we are interested in the behavior of a similar quantity under {\it unilateral} constraints, i.e. the set of admissible approximations is \[
M_{T, r} := \left\{h \in AC[0, T] \, \big| \, \forall t \in [0, T] :
h(t) \ge W(t) -r ; h(0) = 0 \right\}, \] and we are now interested in the behavior of \[
I_W(T,r) := \inf \left\{|h|_T^2 \: \big| \: h \in M_{T,r} \right\}. \] It is technically more convenient to translate the initial value of the approximating function to the point $r$, so that this function runs above the trajectory of the approximated process $W$. Let \[
M_{T, r}^\prime := \left\{h \in AC[0, T] \, \big| \, \forall t \in [0, T] :
h(t) \ge W(t); h(0) = r \right\}. \] Since the sets of functions $M_{T, r}$ and $M_{T, r}^\prime$ differ by a constant shift, it is easy to see that \[
I_W(T,r) = \inf \left\{|h|_T^2 \:\big | \: h \in M_{T,r}^\prime \right\}. \]
Our main result asserts that, when $T$ grows, the quantity $I_W(T,r)$ grows merely logarithmically.
\begin{thm} \label{t:main} For every fixed $r>0$ it is true that \begin{equation} \label{asymp_f}
\frac{I_W(T,r)}{\log T} \stackrel{\text{a.s.}}{\longrightarrow} \frac12,
\qquad \textrm{as } T\to \infty. \end{equation}
\end{thm}
In Section \ref{s:2} we establish a connection between the unilateral energy saving approximation of an arbitrary continuous function and its minimal concave majorant. In Section \ref{s:3} we establish the necessary properties of the minimal concave majorant of a Wiener process using the results of Groeneboom \cite{concMaj}. Section \ref{s:4} contains the proof of Theorem~\ref{t:main}.
In Sections \ref{s:5}--\ref{s:6} we consider a class of adaptive Markovian (diffusion) approximation strategies based on the current and past values of $W$. We prove that in this class the optimal strategy is defined by the formula \[
h'(t) = \frac{1}{h(t)-W(t)} \,. \] For this strategy the energy consumption also has the logarithmic order but is two times larger than that for the optimal non-adaptive strategy using the information about the whole trajectory of $W$. Namely, \[
\frac{ |h|_T^2} {\log T} \stackrel{\text{a.s.}}{\longrightarrow} 1,
\qquad \textrm{as } T\to \infty. \]
\section{Concave majorants as efficient approximations} \label{s:2}
It turns out that the optimal energy saving approximation under unilateral constraints may be described in terms of the minimal concave majorant (MCM) of the approximated function. Let $w:[0,T]\to {\mathbb R}$ be a continuous function. Then the corresponding MCM $\overline{w}$ is the minimal concave function satisfying the conditions \[
\overline{w} (t) \ge w(t), \qquad 0\le t\le T. \]
\begin{prop} \label{maj_form}
Let $r>w(0)$. Then the problem $|h|_T^2\to \min$ under the constraints $h(0)=r$ and \[
h(t) \ge w(t), \qquad 0\le t\le T, \] has a unique solution $\chi_*$ of the following form.
(a) If $r\ge \max_{0\le t\le T} w(t)$, then $\chi_*(t)\equiv r$.
(b) If $r<\max_{0\le t\le T} w(t)$, then $\chi_*$ is defined differently on three intervals. On the initial interval, $\chi_*$ is an affine function whose graph contains the point $(0,r)$ and is a tangent to the graph of $\overline{w}$. Then $\chi_*$ coincides with $\overline{w}$ until the first moment when the maximum of $w$ is attained. Finally, after that moment, $\chi_*$ is a constant. \end{prop}
The optimal energy saving majorant $\chi_*$ is shown in Figure~\ref{fig:eef}.
\begin{figure*}
\caption{The optimal energy saving majorant. }
\label{fig:eef}
\end{figure*}
\begin{proof}[Proof of the proposition] The problem's solution exists since for every $M>0$ the set of functions \[
\{ h\in AC[0,T]: h(0)=r, h\ge w, |h|_T\le M\} \] is compact in the space of continuous functions equipped with the topology of uniform convergence and the functional
$|\cdot|_T^2$ is lower semi-continuous in this topology.
The uniqueness of the solution follows from the fact that the set of functions satisfying the problem's assumptions is convex and the functional $|\cdot|_T^2$ is strictly convex on this set.
Let us describe the solution.
Since case (a) is trivial, we consider case (b).
Let $\chi(\cdot)$ be the solution of our problem. We show first that $\chi$ is a convex non-decreasing function. Consider the function \[
\chi_1(t):= r +\int_0^t g(s) ds, \qquad 0\le t \le T, \] where $g(\cdot)$ is the non-increasing monotone rearrangement of the function $\max\{\chi'(\cdot),0\}$. Then $\chi_1$ is a concave non-decreasing function, $\chi_{1}(0)=r$ and $\chi_{1}(t) \ge \chi(t) \ge w(t)$ for all $t\in[0,T]$. Therefore, $\chi_{1}$ satisfies the problem's constraints. On the other hand, \[
|\chi_{1}|_T^2=\int_0^T g(s)^2 ds = \int_0^T \max\{\chi'(s),0\}^2 ds
\le |\chi|_T^2. \] Due to the problem solution uniqueness we obtain $\chi_1=\chi$. This equality proves that $\chi$ is concave and non-decreasing.
Since $\chi_*$ is the smallest concave non-decreasing function satisfying the problem's constraints, we have \[
\chi(t)\ge \chi_*(t), \qquad 0\le t\le T. \]
Furthermore, let us prove that $\chi(T)=\chi_*(T)=\max_{0\le t\le T} w(t)$. Indeed, in case (b) the function \[
\chi_2(t):= \min\{ \chi(t), \chi_*(T) \}, \qquad 0\le t \le T, \]
satisfies both problem constraints and $|\chi_{2}|_T^2 \le |\chi|_T^2$; by the uniqueness of the solution, we obtain $\chi=\chi_2$. In particular, $\chi(T)=\chi_*(T)$.
Finally, assume that the strict inequality $\chi(t_0)>\chi_*(t_0)$ holds for some $t_0\in [0,T]$. Then, since the function $\chi_*$ is concave and non-decreasing, there exists a non-decreasing affine function $\ell(\cdot)$ such that \[
\chi_*(t)\le \ell(t), \qquad 0\le t \le T, \] but $\ell(t_0)< \chi(t_0)$. However, at the endpoints of the interval $[0,T]$ the opposite inequality is true, because \[
\chi(0) = r =\chi_*(0) \le \ell(0), \qquad \chi(T) =\chi_*(T) \le \ell(T). \] Therefore, there exists a non-degenerate interval $[t_1,t_2]\subset[0,T]$ such that $t_0\in[t_1,t_2]$, $\ell(t_1)=\chi(t_1)$, $\ell(t_2)=\chi(t_2)$.
Since $\ell'(\cdot)$ is a constant, while $\chi'(\cdot)$ is not a constant on $[t_1,t_2]$, it follows from H\"older inequality that \begin{eqnarray*}
&& (t_2-t_1) \int_{t_1}^{t_2} \chi'(t)^2 dt > \left(\int_{t_1}^{t_2} \chi'(t)dt\right)^2
= (\chi(t_2)-\chi(t_1))^2 \\
&=& (\ell(t_2)-\ell(t_1))^2 = \left(\int_{t_1}^{t_2} \ell'(t)dt\right)^2
= (t_2-t_1) \int_{t_1}^{t_2} \ell'(t)^2 dt. \end{eqnarray*} We obtain \[
\int_{t_1}^{t_2} \chi'(t)^2 dt > \int_{t_1}^{t_2} \ell'(t)^2 dt. \] It follows that the function \[
\chi_3(t):= \min\{ \chi(t), \ell(t) \}, \qquad 0\le t \le T, \]
satisfies the problem's constraints and $|\chi_{3}|_T^2 < |\chi|_T^2$ but this is impossible by the definition of $\chi$. Therefore, the assumption $\chi(t_0)>\chi_*(t_0)$ brought us to a contradiction. \end{proof}
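Proposition~\ref{maj_form} also suggests a simple way to compute the optimal majorant numerically. The following Python sketch (ours, for illustration only; all function names are introduced here) builds the minimal concave majorant of a discretized path as the upper convex hull of the sampled points, attaches the tangent segment issued from $(0,r)$ and the final constant piece, and evaluates the energy of the resulting $\chi_*$:
\begin{verbatim}
import numpy as np

def concave_majorant(t, w):
    # upper convex hull of the points (t_i, w_i), evaluated on the same grid
    hull = [0]
    for i in range(1, len(t)):
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            # pop i2 while the slopes are non-decreasing (convex corner)
            if (w[i2] - w[i1]) * (t[i] - t[i2]) <= (w[i] - w[i2]) * (t[i2] - t[i1]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(t, t[hull], w[hull])

def optimal_majorant(t, w, r):
    # chi_*: tangent line from (0, r), then the concave majorant, then a constant
    wbar = concave_majorant(t, w)
    if r >= w.max():
        return np.full_like(w, r)
    slopes = (wbar[1:] - r) / t[1:]
    k = 1 + np.argmax(slopes)                 # tangent point index
    chi = np.where(t <= t[k], r + slopes[k - 1] * t, wbar)
    chi[t >= t[np.argmax(w)]] = w.max()       # constant after the maximum of w
    return chi

rng = np.random.default_rng(1)
T, n, r = 1000.0, 100000, 1.0
t = np.linspace(0.0, T, n)
dw = rng.normal(scale=np.sqrt(t[1]), size=n - 1)
w = np.concatenate(([0.0], np.cumsum(dw)))    # discretized Wiener path
chi = optimal_majorant(t, w, r)
energy = np.sum(np.diff(chi) ** 2 / np.diff(t))
print(energy, 0.5 * np.log(T))                # predicted growth is (log T)/2
\end{verbatim}
Since the convergence in Theorem~\ref{t:main} is only logarithmic, the agreement for a single path at moderate $T$ is necessarily rough.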
\section{Minimal concave majorant of a Wiener process} \label{s:3} We recall some notation and results from the article \cite{concMaj} that will be used in the sequel. Denote \[
\tau(a) := \sup \left\{t>0 \, \big| \, W(t)-t/a=\sup_{u>0}\left(W(u)-u/a\right)\right\}. \] The function $a\mapsto \tau(a)$ is non-decreasing.
According to \cite[Corollary~2.1]{concMaj} for every $a>0$ the random variable $\frac{\tau(a)}{a^2}$ has the distribution density \[
q(t) = 2 \, {\mathbb E\,}\left(\frac{X}{\sqrt{t}} - 1\right )_+, \qquad t>0, \] where $ x_+ := x \ed{x>0} $, $X$ is a standard normal random variable.
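Explicitly, carrying out the expectation, one finds
\[
 q(t) = 2\left( \frac{\varphi(\sqrt{t}\,)}{\sqrt{t}} - \big(1-\Phi(\sqrt{t}\,)\big)\right), \qquad t>0,
\]
where $\varphi$ and $\Phi$ denote the standard normal density and distribution function; in particular, $q$ has Gaussian-type tails, which is what drives the estimates of Lemma~\ref{tau_est} below.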
Next, let $\overline{W}$ be the global MCM for the Wiener process $W(t),t\ge 0$. Define a random process $L$ as \[
L(a,b) := \int_{\tau(a)}^{\tau(b)} \overline{W}'(t)^2 dt. \]
Our study is essentially based on the following result due to Groeneboom.
\begin{lem} \cite[Theorem~3.1]{concMaj}.
For every $a_0>0$ the process $X(t) := L(e^{a_0},e^{a_0+t}), t\ge 0$,
is a pure jump process with independent stationary increments and ${\mathbb E\,} X(t)=t$. \end{lem}
Moreover, there is an explicit description of the L\'evy measure of $X$ in \cite{concMaj} but we do not need it here. We are only interested in Kolmogorov's strong law of large numbers for $X$, which asserts that \[
\frac{X(t)}{t} \stackrel{\text{a.s.}}{\longrightarrow} 1,
\qquad \textrm{as } t\to\infty. \]
Using the definition of $X$, letting $a_0=0$, and making the variable change $V=e^t$, we may reformulate this result as \begin{equation} \label{slln}
\frac{L(1,V)}{\log V} \stackrel{\text{a.s.}}{\longrightarrow} 1,
\qquad \textrm{as } V\to\infty. \end{equation}
\begin{lem} \label{tau_est}
For every $ \delta \in \left(0, \frac 12\right)$ with probability $1$ for all sufficiently large $T$ it is true that
\begin{equation}
\tau\left(T^{\frac 12 + \delta}\right) > T > \tau\left(T^{\frac 12 - \delta}\right).
\end{equation} \end{lem}
\begin{proof} The lower bound is based upon the inequalities \begin{eqnarray*}
{\mathbb P}\left( \tau\left(T^{\frac 12 - \delta}\right) \ge \frac T2 \right)
&=&
{\mathbb P}\left( \frac{\tau \left( T^{\frac 12 - \delta}\right)}{T^{1-2\delta}} \ge \frac{T^{2\delta}}2 \right) \\
&=& \int_{T^{2\delta}/2}^\infty q(t)\, dt \\
&=& \int_{T^{2\delta}/2}^\infty 2\,{\mathbb E\,}\left(\frac{X}{\sqrt{t}} - 1\right )_+ dt \\
&=& C_1 \int_{T^{2\delta}/2}^\infty \int_{\sqrt t}^\infty
\left(\frac x {\sqrt t} - 1\right )e^{-x^2/2} dx dt \\
&\le& C_2 \int_{T^{2\delta}/2}^\infty e^{-t/2} t^{-1/2} dt
\le C_2 \, e^{-T^{2\delta}/4}, \end{eqnarray*} where $C_1,C_2$ are some positive absolute constants.
Let $ T_n := n $, then the events $ D_n := \left\{\tau\left(T_n^{\frac 12 - \delta}\right) \ge \frac {T_n}2 \right \} $ satisfy \[
\sum\limits_{n=1}^\infty {\mathbb P}(D_n) < \infty. \] Therefore, by Borel--Cantelli lemma, with probability $1$ for all sufficiently large $n$ the event $D_n$ does not hold, i.e. with probability $1$ for all sufficiently large $n$ we have \begin{equation} \label{upest}
\tau\left(T_n^{\frac 12 - \delta}\right) < \frac {T_n}2. \end{equation} Let $n\ge 2$ be such that \eqref{upest} holds for $T_n$ and let $T \in [T_{n-1}, T_{n}]$. Since $\tau(\cdot)$ is non-decreasing, we have \[
\tau\left(T^{\frac 12 - \delta}\right) \le \tau\left(T_{n}^{\frac 12 - \delta}\right)
< \frac{T_n}2 = \frac{n}2 \le n-1 = T_{n-1} \le T. \] This provides us with a required lower bound for sufficiently large $T$.
In the same way, for the upper bound we have \begin{eqnarray*}
{\mathbb P}\left( \tau\left(T^{\frac 12 + \delta}\right) \le 2T \right)
&=&
\int_0^{2T^{-2\delta}} 2\, {\mathbb E\,}\left(\frac{X}{\sqrt{t}} - 1\right )_+ dt \\
&\le& C_3 \int^{2T^{-2\delta}}_0 e^{-t/2} t^{-1/2} dt
\le C_4 T^{-\delta}, \end{eqnarray*} where $C_3,C_4$ are some positive absolute constants.
Consider the sequence $ T'_n := 2^n$. For the events $ D'_n := \left\{\tau\left((T'_n)^{\frac 12 + \delta}\right) \le 2 T'_n \right \}$ we have \[
\sum\limits_{n=1}^\infty {\mathbb P}(D'_n) < \infty. \] Therefore, by the Borel--Cantelli lemma, with probability $1$ for all sufficiently large $n$ the event $D'_n$ does not hold, i.e. with probability $1$ for all sufficiently large $n$ it is true that \begin{equation} \label{lowest}
\tau\left((T'_n)^{\frac 12 + \delta}\right) > 2\,T'_n . \end{equation}
Let \eqref{lowest} be satisfied for some $T'_n$ and let $ T \in [T'_{n}, T'_{n+1}] $. Since $\tau(\cdot)$ is non-decreasing, we have \[
\tau\left(T^{\frac 12 + \delta}\right)
\ge \tau\left((T'_n)^{\frac 12 + \delta}\right)
> 2 \, T'_n = T'_{n+1} \ge T. \] This provides us with the required upper bound for all sufficiently large $T$.
\end{proof}
The next theorem describes the asymptotic behavior of the energy of the minimal concave majorant for a Wiener process. For some $r>0$ let $\overline{W}^{(r)}$ denote the MCM of a Wiener process $W$ on the whole half-line $[0,\infty)$, starting from the height $r$. Then the majorant $\overline{W}^{(r)}$ is an affine function on some initial interval $[0,\theta(r)]$, its graph contains the point $(0,r)$ and is tangent to the graph of $\overline{W}$, while on $[\theta(r), \infty)$ the functions $\overline{W}^{(r)}$ and $\overline{W}$ coincide.
\begin{thm} \label{asymp_f_th}
Let $\overline{W}^{(r)}$ be the global minimal concave majorant of $W$ starting from a height $r$.
Then, for every fixed $r$ it is true that
\begin{equation}
\frac{|\overline{W}^{(r)}|_T^2}{\log T} \stackrel{\text{a.s.}}{\longrightarrow} \frac 12,
\qquad \textrm{as } T\to\infty.
\end{equation} \end{thm}
\begin{proof} Compare the quantities \[
|\overline{W}^{(r)}|_T^2 = (\overline{W}^{(r)})'(0)^2 \theta(r) + \int_{\theta(r)}^T \overline{W}'(t)^2\, dt,
\qquad T\ge \theta(r), \] and \[
L(1,T^{1/2\pm\delta}) = \int_{\tau(1)}^{\tau(T^{1/2\pm\delta})} \overline{W}'(t)^2\, dt. \] They differ by a term (independent of $T$) corresponding to the initial segment of $\overline{W}^{(r)}$ and by the lower and upper integration limits; notice that the lower integration limits do not depend on $T$ in both cases.
By using Lemma~\ref{tau_est} for comparing the upper integration limits, we obtain \begin{eqnarray*}
\liminf\limits_{T \to \infty} \frac{|\overline{W}^{(r)}|_{T }^2}{\log {T }}
&\ge& \liminf\limits_{T \to \infty}
\frac {L \left(1, T^{1/2 - \delta}\right)}{\log {T }}; \\
\limsup\limits_{T \to \infty} \frac{|\overline{W}^{(r)}|_{T }^2}{\log {T }}
&\le& \limsup\limits_{T \to \infty}
\frac {L \left(1, T^{1/2 + \delta}\right)}{\log {T }}. \end{eqnarray*}
Taking into account the law of large numbers \eqref{slln} we have \[
\frac12 - \delta \le
\liminf\limits_{T \to \infty} \frac{|\overline{W}^{(r)}|_{T }^2}{\log {T }}
\le
\limsup\limits_{T \to \infty} \frac{|\overline{W}^{(r)}|_{T }^2}{\log {T }}
\le \frac12 +\delta. \] Letting $\delta\searrow 0$ yields the required result.
\end{proof}
\section{Proof of Theorem~\ref{t:main}} \label{s:4} \quad
{\bf Upper bound.} The restriction of the global MCM starting from the height $r$ onto the interval $[0,T]$ belongs to the set of admissible functions: $\overline{W}^{(r)}\in M_{T, r}^\prime$. We derive from Theorem~\ref{asymp_f_th} that \[
\limsup\limits_{T \to \infty} \frac{I_W(T,r)}{\log {T }}
\le
\limsup\limits_{T \to \infty} \frac{|\overline{W}^{(r)}|_{T }^2}{\log {T }}
\le \frac12 \qquad \textrm{a.s.} \]
{\bf Lower bound.} For $r>0,T>0$ let $\overline{W}^{(r,T)}$ denote the {\it local} MCM of the Wiener process $W(t), t\in [0,T]$, starting from the height $r$. Let $\chi$ be the unique solution of the problem we are interested in,
$|h|_T^2\to \min, h\in M'_{T,r}$. Recall that its structure is described in Proposition~\ref{maj_form}. Since for large $T$ it is true that $\max_{0\le s\le T} W(s)>r$, for such $T$ the assumption of case (b) of that proposition is verified. In particular, it follows that \[
\chi(t)=\overline{W}^{(r,T)}(t), \qquad 0 \le t \le t_{\max}, \] where \[
t_{\max}=t_{\max}(T) := \min\{t: W(t)= \max_{0\le s\le T} W(s)\}. \] Notice that the function $\tau(\cdot)$ cannot take values in the interval $(t_{\max},T)$: indeed, for every $t\in(t_{\max},T)$ and every $a>0$ one has $W(t)-t/a < W(t_{\max})-t_{\max}/a$. Therefore, if for some $a$ it is true that $\tau(a)<T$, then it is also true that $\tau(a)\le t_{\max}$. In this case we have \[
\chi(t)=\overline{W}^{(r,T)}(t)=\overline{W}^{(r)}(t), \qquad 0 \le t \le \tau(a). \] It follows that \[
I_W(T,r)=|\chi|_T^2 \ge \int_0^{\tau(a)} \chi'(t)^2 \, dt
=|\overline{W}^{(r)}|_{\tau(a)}^2. \] Let us fix $\delta\in (0,1/2)$. Let $a=a(T):= T^{1/2-\delta}$. Then by Lemma~\ref{tau_est} we have \[
T^{\frac{1-2\delta}{1+2\delta}} < \tau(a) < T \] a.s. for all sufficiently large $T$. Furthermore, it follows from Theorem~\ref{asymp_f_th} that, as $T\to \infty$, \[
|\overline{W}^{(r)}|_{\tau(a)}^2 \ge \frac{\log\tau(a)}{2} \, (1+o(1))
\ge \frac{1-2\delta}{2(1+2\delta)} \, \log T \, (1+o(1))
\qquad \textrm{a.s.} \]
By combining these estimates, we obtain
\[
I_W(T,r) \ge \frac{1-2\delta}{2(1+2\delta)} \, \log T \, (1+o(1))
\qquad \textrm{a.s.} \] Finally, by letting $\delta\searrow 0$, we arrive at \[
I_W(T,r) \ge \frac{1}{2} \, \log T \, (1+o(1))
\qquad \textrm{a.s.,} \] as required.
\section{Adaptive Markovian approximation} \label{s:5}
In practice, it is often necessary to arrange an approximation (a pursuit) in real time (adaptively), when the trajectory of the approximated process is known not on the entire time interval but only up to the current time instant. In view of the Markov property of the Wiener process, a reasonable strategy is to define the speed of the pursuit $h$ as a function of the current positions of the processes $h$ and $W$, without taking past trajectories into account, i.e. let \begin{equation} \label{adapt_eq1}
h'(t):= b(h(t),W(t),t). \end{equation} On the qualitative level, the function $b(x,w,t)$ must tend to infinity as $x-w\searrow 0$: when the approximating process approaches the dangerous boundary, it accelerates its movement trying to escape from the dangerous position. One has to optimize the function $b$ so as to reach the smallest average energy consumption. It is possible to reach the same logarithmic (in time) order of energy consumption as in the case of non-adaptive approximation, but with a somewhat larger coefficient. The difference of the coefficients represents the price we pay for not knowing the future of the process we try to approximate.
It is interesting to compare \eqref{adapt_eq1} with the form of the optimal adaptive strategy in the case of bilateral constraints \cite{LS} where \begin{equation} \label{adapt_eq2}
h'(t) = b(h(t)-W(t)). \end{equation} The latter strategy is simpler because the speed is governed only by the distance between the approximated and the approximating processes and does not depend on time.
Let us make a time and space change \begin{eqnarray*}
U(\tau) &:=& e^{-\tau/2} W(e^\tau), \\
z(\tau) &:=& e^{-\tau/2} h(e^\tau). \end{eqnarray*} Recall that $U(\cdot)$ is an Ornstein--Uhlenbeck process and therefore satisfies the equation \begin{equation} \label{OU_eq}
dU = -\frac {U \, d\tau}{2} + d \widetilde{W}, \end{equation} where $\widetilde{W}$ is a Wiener process. We have the following expression for the derivative of $z$ \begin{equation} \label{zprime_eq}
z^\prime(\tau) = -\frac 12 z(\tau) + e^{\tau/2} h^\prime(e^\tau), \end{equation} which yields \begin{equation} \label{hprime_eq}
h^\prime(e^\tau) = e^{-\tau/2} \left(z^\prime(\tau) +\frac{z(\tau)}{2}\right). \end{equation} Let us consider the distance between the approximated and approximating processes \begin{eqnarray} \label{def_Z_eq}
Z(\tau) &:=& z(\tau) - U(\tau). \end{eqnarray}
We will study time-homogeneous diffusion strategies \begin{equation} \label{diffusion_eq}
dZ = b(Z) d \tau - d \widetilde{W}. \end{equation} From equations \eqref{OU_eq} and \eqref{diffusion_eq} it follows that this is equivalent to \[
z'(\tau) + \frac{U(\tau)}{2} = b(Z(\tau)), \] which also implies \begin{equation} \label{zprime2_eq}
z'(\tau) + \frac{z(\tau)}{2} = b(Z(\tau)) + \frac{Z(\tau)}{2}. \end{equation}
Before proceeding to optimization, let us see how the diffusion strategies act in the initial framework. By \eqref{hprime_eq} and \eqref{zprime2_eq} we have \begin{eqnarray}
h'(e^\tau) &=& e^{-\tau/2} \left(b(Z(\tau))+ \frac{Z(\tau)}{2}\right)
:= e^{-\tau/2}\ \widetilde{b}(Z(\tau)) \\
&=& e^{-\tau/2} \ \widetilde{b}\left( e^{-\tau/2} \left(h(e^\tau)-W(e^\tau)\right) \right), \end{eqnarray} where $ \widetilde{b}(x):= b(x)+ \tfrac{x}{2}$. In other words, the form of the strategy is \begin{equation}\label{hstrategy_eq}
h'(t) = \frac{1}{\sqrt{t}}\ \widetilde{b}\left( \frac{1}{\sqrt{t}}
\left(h(t)-W(t)\right)\right). \end{equation} We see that this class of strategies is space-homogeneous but, in general, not time-homogeneous.
Now we proceed to the optimization of the shift coefficient $b(\cdot)$ determining the pursuit strategy. Let us use some basic facts about one-dimensional time-homogeneous diffusion, cf. \cite[Ch.IV.11]{Bor} and \cite[Ch.2]{BS}. Let \begin{eqnarray} \label{B}
B(x) &:=& 2 \int^x b(u) du, \\ \label{p0}
p_0(x) &:=& e^{B(x)}. \end{eqnarray} Assume that condition \begin{equation}\label{noexit}
\int_0 \frac{dx}{p_0(x)} = \infty \end{equation} is satisfied. Then, in Feller's classification, the point $0$ is an entrance boundary and not an exit boundary for the diffusion \eqref{diffusion_eq}. This means that the diffusion $Z$ remains forever in $[0,\infty)$. Moreover, the function \begin{equation} \label{p}
p(x) := Q^{-1} p_0(x), \end{equation} where $Q = \int_0^\infty p_0(x) dx$, is the density of the unique stationary distribution for $Z$. For the energy, by using \eqref{hprime_eq} and \eqref{zprime2_eq}, we obtain (a.s., as $T\to\infty$) \begin{eqnarray*}
\int_1^T h'(t)^2 dt &=& \int_0^{\log T} h'(e^\tau)^2 e^\tau d \tau
= \int_0^{\log T} \left( z^\prime(\tau) +\frac{z(\tau)}{2} \right)^2 d\tau \\
&=& \int_{0}^{\log T} \left( b(Z(\tau)) + \frac {Z(\tau)}2\right) ^2 d\tau \\
&\sim& {\log T} \int_0^{\infty} \left( b(x) + \frac x2\right) ^2 p(x) dx \\
&=& {\log T} \int_0^{\infty} \left( \left( \frac {\log p}{2}\right)^\prime(x) + \frac x2\right) ^2 p(x) dx \\
&=& {\log T} \int_0^{\infty} \left( \frac {p^\prime(x)^2}{4p(x)}
+ \frac{x p^\prime(x)}{2} + \frac{x^2 p(x)}{4} \right) dx \\
&=& {\log T} \left( -\frac 12 + \int_0^{\infty} \left( \frac {p^\prime(x)^2}{4p(x)}
+ \frac{x^2 p(x)}{4} \right) dx \right) \\
&=:& -\frac {\log T}2 + \frac {\log T}{4} J(p). \end{eqnarray*} Taking into account condition \eqref{noexit}, it remains to solve the variational problem
\begin{equation*}
\min \left\lbrace J(p) \Big| \int_0^\infty p(x) dx = 1, p(0)=0 \right\rbrace \end{equation*} over the class of densities concentrated on $[0, \infty)$. After the variable change \[
y(x) := p(x)^{1/2}, \] the variational problem transforms into \[
\min\left\{ \int_0^\infty\left( 4y'(x)^2+x^2 y(x)^2 \right) dx \ \Big| \int_0^\infty y(x)^2 dx = 1, y(0)=0 \right\}. \] We show in the next section that this minimum equals $6$; it is attained at the function \[
y(x) = (2/\pi)^{1/4}\, x\, \exp(-x^2/4). \] It follows that the asymptotic energy behavior for the optimal strategy is \[
\int_1^T h'(t)^2 dt \sim \log T, \quad T\to\infty, \] i.e. the optimal choice of the shift in the adaptive setting leads to two times larger energy consumption than for the optimal strategy in the non-adaptive setting.
In order to find the optimal shift, write \[
p(x)=y(x)^2= (2/\pi)^{1/2}\, x^2\, \exp(-x^2/2) \] and we find from \eqref{B} -- \eqref{p} \[
b(x)= \frac12\, (\ln p)'(x) = \frac 1x - \frac{x}{2}. \] Note that the density $p$ indeed satisfies the necessary condition \eqref{noexit}.
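To verify this explicitly (a short check): for this shift one has $B(x)=2\int^x\left(\tfrac1u-\tfrac u2\right)du = 2\ln x - \tfrac{x^2}{2}$ up to an additive constant, hence $p_0(x)= x^2 e^{-x^2/2}$ and \[
\int_0 \frac{dx}{p_0(x)} = \int_0 x^{-2} e^{x^2/2}\, dx = \infty, \] since the integrand behaves like $x^{-2}$ near the origin.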
Returning to the initial problem, we obtain the shift $\widetilde{b}(x)=\tfrac{1}{x}$, thus the strategy \eqref{hstrategy_eq} takes the form \[
h'(t) = \frac{1}{h(t)-W(t)} \,. \] Curiously, the optimal diffusion strategy is not only space-homogeneous but also time-homogeneous, unlike arbitrary strategies of this class.
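As an illustration (not part of the argument), the optimal pursuit can be simulated with a crude Euler scheme. The horizon, step size and initial gap below are arbitrary choices, and a careful simulation would need a finer or adaptive step when the gap $h-W$ becomes small; the accumulated energy $\int_1^T h'(t)^2\,dt$ should then be comparable to $\log T$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

T, dt = 2_000.0, 5e-3          # horizon and Euler step (arbitrary choices)
n = int(T / dt)

w, h, energy = 0.0, 1.0, 0.0   # W(1)=0, initial gap h(1)-W(1)=1 (arbitrary)
for _ in range(n):
    speed = 1.0 / (h - w)      # optimal adaptive shift h'(t) = 1/(h(t)-W(t))
    energy += speed**2 * dt
    h += speed * dt
    w += np.sqrt(dt) * rng.standard_normal()
    if h <= w:                 # crude guard against discretisation overshoot
        h = w + 1e-6

print(energy, np.log(T))       # energy should be of the order of log T
\end{verbatim}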
\section{Solution of the variational problem} \label{s:6}
\subsection{Quantum harmonic oscillator} Consider the Sturm--Liouville problem on the eigenvalues of a differential operator \[
\begin{cases}
-4y''(x)+x^2 y(x)=\gamma \, y(x), & x\ge 0, \\
y(0)=0.
\end{cases} \] It represents a special case of the quantum harmonic oscillator equation, extensively studied by physicists, see \cite[\S 23]{LaLi}. Its solution is well known. Usually one considers this equation on the entire real line. When restricting to $[0,\infty)$, one should take into account the boundary condition $y(0)=0$; hence one keeps the restrictions to $[0,\infty)$ of the {\it odd} solutions on ${\mathbb R}$ and multiplies them by $\sqrt{2}$ in order to preserve the normalization. We arrive at an orthonormal basis of $L_2[0,\infty)$ that consists of the functions $\psi_k$, $k\in 2{\mathbb N}-1$, given by \[
\psi_k(x) = (2^k k!)^{-1/2} (2/\pi)^{1/4} H_k(x/\sqrt{2}) \exp(-x^2/4), \] where $H_k(x)= (-1)^k e^{x^2}\tfrac{d^k}{dx^k}(e^{-x^2})$ are Hermite polynomials; these functions satisfy \[
-4\psi_k''(x)+x^2 \psi_k(x)=\gamma_k \psi_k(x), \] where $\gamma_k=2(2k+1)$.
In particular, the minimal eigenvalue is $\gamma_1=6$, $H_1(x)=2x$, while the corresponding eigenfunction is $\psi_1(x) = (2/\pi)^{1/4}\, x\, \exp(-x^2/4)$.
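As a quick check (a direct computation), with $\psi_1(x)=(2/\pi)^{1/4}\, x\, e^{-x^2/4}$ one finds \[
\psi_1''(x) = (2/\pi)^{1/4}\left(\frac{x^3}{4}-\frac{3x}{2}\right)e^{-x^2/4},
\qquad -4\psi_1''(x)+x^2\psi_1(x) = 6\,\psi_1(x), \] in agreement with $\gamma_1=6$.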
\subsection{Minimization} Consider the quadratic form \[
G(y,z) := \int_0^\infty\left( 4y'(x)z'(x)+x^2 y(x)z(x) \right) dx. \] For twice differentiable functions satisfying the additional assumption $y(0)=z(0)=0$, integration by parts yields \[
G(y,z) = \int_0^\infty\left( -4y''(x)+x^2 y(x)\right) z(x) dx. \] In particular, \[
G(\psi_k,\psi_l) = \int_0^\infty \gamma_k \psi_k(x)\psi_l(x) dx =
\begin{cases}
\gamma_k,& k=l,\\
0,& k\not= l,
\end{cases} \] since $(\psi_k)$ is an orthonormal base.
If $y=\sum_{k\in 2{\mathbb N}-1} c_k \psi_k$, then \[
\int_0^\infty\left( 4y'(x)^2+x^2 y(x)^2 \right) dx
=G(y,y) = \sum_{k\in 2{\mathbb N}-1} c_k^2\gamma_k
\ge \sum_{k\in 2{\mathbb N}-1} c_k^2\gamma_1 = \gamma_1 \int_0^\infty y(x)^2 dx, \] and for $y=\psi_1$ the equality is attained in this chain. Therefore, \[
\min\left\{ \int_0^\infty\left( 4y'(x)^2+x^2 y(x)^2 \right) dx \ \Big| \int_0^\infty y(x)^2 dx = 1, \ y(0)=0 \right\} =\gamma_1 = 6. \]
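The value $6$ and the normalisation of the minimiser can also be checked numerically; the following sketch (truncating the integrals at $x=20$, an arbitrary but safe cut-off since the integrands decay like $e^{-x^2/2}$, and using a plain Riemann sum) is only an illustration.
\begin{verbatim}
import numpy as np

x = np.linspace(0.0, 20.0, 200001)
dx = x[1] - x[0]
c = (2.0 / np.pi) ** 0.25
y  = c * x * np.exp(-x**2 / 4)                  # candidate minimiser psi_1
dy = c * (1.0 - x**2 / 2) * np.exp(-x**2 / 4)   # its derivative

print(np.sum(y**2) * dx)                        # ~ 1 (normalisation)
print(np.sum(4 * dy**2 + x**2 * y**2) * dx)     # ~ 6 = gamma_1
\end{verbatim}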
The authors are grateful to A.I.\,Nazarov for useful advice.
\end{document} |
\begin{document}
\title{Toy observer in unitary evolution: Histories, consciousness, and state reduction}
\author{Lutz Polley}
\date{\small Institute of Physics, Oldenburg University, 26111 Oldenburg, FRG}
\maketitle
\begin{abstract} For a toy version of a quantum system with a conscious observer, it is demonstrated that the many-worlds problem is solved by retreating into the conscious subspace of an entire observer history. In every step of a discretised time, the observer tries to ``see'' records of his past and present in a coherent temporal sequence, by scanning through a temporal fine-graining cascade. The extreme most likely occurs at the end of some branch, thus determining observer's world line. The relevant neurons, each with two dimensions, are power-law distributed in number, so order statistics implies that conscious dimension is located almost entirely in the extremal branch. \end{abstract}
\section{Introduction}
Among the various approaches to an interpretation of quantum theory, one is to regard the superposition principle, state vectors, and the Schr\"odinger equation as universally valid, and to seek a solution to the ensuing many-worlds problem \cite{Everett1973}. An old \cite{vNeumann1932} but still relevant \cite{Donald1999,Tegmark2014} conjecture is that the solution should involve the physical functioning of an observer's consciousness. The approach taken here falls in this category.
As to the merits of the superposition principle, most concepts of quantum theory, as presented in textbooks, rely on the formalism of state vectors. Even thermal systems, traditionally regarded as mixtures, can be treated as pure state vectors \cite{Tasaki1998}. The basic law of propagation of a particle takes an almost self-evident form \cite{Feynman1965,Baym1978,Zee1991,Polley2001} when positions are restricted to a spatial lattice, and superpositions are regarded as a logical possibility. For electromagnetic fields to be incorporated, only the complex phases already present in the hopping amplitudes need to be varied \cite{Wilson1974}. In comparison, Newton's laws are phenomenological.
A fundamental problem of a linear evolution equation emerges in application to a quantum system interacting with an observer. With a system in a superposition of properties $1,\ldots,B$, and an observer trying to determine the ``actual'' property, the Schr\"odinger equation implies a transition of the form \begin{equation} \label{EqualAmplitudeSplitting}
\left(\sum_{n=1}^B |n\rangle \right)|\mathrm{ready}\rangle \longrightarrow
\sum_{n=1}^B \Big(|n\rangle|\mathrm{observed}\,n\rangle\Big) \end{equation}
While experience suggests an observer should find himself in one of the states $|\mathrm{observed}\,n\rangle$ after the measurement, the superposition state produced by the Schr\"odinger equation does not indicate which of the possible results has ``actually'' been obtained. The observer rather seems to have split into $B$ branches of himself. To date, no consensus exists as to whether a superposition of observer states like (\ref{EqualAmplitudeSplitting}) describes something physically real. The problem appears less dramatic when the state vector is converted into a density matrix, as in the theory of decoherence, but the ambiguity about the result of the measurement persists \cite{Schlosshauer2007}. In Everett's many-worlds interpretation \cite{Everett1973} the superposition \emph{is} observer's real wavefunction. His inability to realise more than one of his branches is inferred from the (undisputed) impossibility of branches interacting with each other. While certain activities like talking in branch 1 about events in branch 2 can be ruled out in this way, a mere simultaneous awareness of branches is not covered by the argument \cite{Penrose1997}. An attempt at resolving this problem was made by this author in \cite{Polley2012}; in the present paper, that approach is simplified and generalised. The hypothesis is that an observer's awareness is extremal in one branch, in such a way that the sum of remaining branches can be regarded as a negligible contribution.
In the Copenhagen interpretation, the process of measurement is not described by a linear equation for amplitudes in a superposition, but is described by state reduction following Born's rule. Accordingly, superpositions evolve in a stochastic way. Superposition
(\ref{EqualAmplitudeSplitting}) would end up as $|n\rangle|\mathrm{observed}\,n\rangle$ with probability $1/B$ for each case. The disturbing point here is that probabilities occur at a fundamental level---no natural law supposedly exists that would determine, in principle, the outcome of a single quantum measurement. By contrast, an agreeable role for stochastics would be that of an approximation to deterministic but uncontrollably complicated dynamics. In the model of this paper, there will be a constant law of evolution (representing determinism) given by a unitary operator which is a peculiar kind of random matrix (assumed to approximate some complicated dynamics). Pseudo-random evolution results if the operator is applied repeatedly to an initial state of an appropriate class.
Since the early days of quantum mechanics the idea has been pondered that state reduction may involve an observer's consciousness \cite{vNeumann1932}. While the observer remains conscious with every result of the measurement, there may be variations in the degree of consciousness. Of all the details around us that could in principle catch our attention, only a tiny fraction actually does so. If entire histories are considered, that fraction multiplies with every instant of time, so there is lots of room for outstanding extremes.
A formal description of the physical functioning of a \emph{real} observer, including both Everettian branching and the influence of an irreducible stochastic process in the generation of the branches, has been given by Donald \cite{Donald1999}. For a model of state reduction, it may be an unnecessary complication to consider consciousness as versatile as that of a human being.
Yet, as emphasised in \cite{Donald1999}, a lesson from real brain dynamics is that consciousness cannot be described statically, by assigning labels to mental states as in equation (\ref{EqualAmplitudeSplitting}), but that neural activity is required, like the switching between firing and resting states of a neuron. It makes a great difference for model building! Neurons thus come as subsystems with two dimensions at least, and fluctuations in the number of neurons appear vastly enhanced in the dimensions of Hilbert spaces involved.
The guiding idea of the present paper is that Everett's many worlds are not all equivalent for an observer, but that his consciousness is \emph{physically} contained almost exclusively in one world. The technical basis of this approch is a theorem of order statistics \cite{Embrechts1997,David1981}, relating to statistical ensembles with power-law distribution. The largest draw, in that case, exceeds the second-largest by a quantity $\Delta$ of the order of the ensemble size taken to some power. On an Everettian world tree, the ensemble size is huge, and concentrated near the end of the branches. Thus the dominance of the extremal draw is particularly pronounced, and it occurs near the end of a branch, thus singling it out. If the draws are for numbers of neurons, the exceedance $\Delta$ exponentiates because the dimensions of subsystems multiply. In fact, to see whether and how this statistical mechanism might be relevant for state reduction was the main guide for the constructions below.
The model scenario is as follows. At an equidistant sequence of times, a quantum system is observed, generating $B$ branches of itself, of a number of records, and of a corresponding number of observer's neurons. The main interest is in the statistics of dimensions; therefore, system states, records, and neurons are only distinguished by an index and are not specified any further. The law of evolution, for a step of time, is given by a unitary operator. A specific form of initial state must be assumed for the scenario to unfold. This ``objective'' part of the model dynamics is constructed in section \ref{secRS}.
As to an observer's consciousness, it should be memory-based \cite{Edelman1989}; ``the history of a brain’s functioning is an essential part of its nature as an object on which a mind supervenes'' \cite{Donald1997}. In the simplification of the model, memories and the history of the brain's functioning are identified with the records. The supervening mind is represented by neuronal activity drawing on memories, i.e., records. The implementation of this drawing, by a (pseudo-)random cascade of neuronal activity, serves two purposes: generating a draw from a power-law distribution, and composing in one draw a conscious history---by ``recalling'' what happened at a certain time, then what happened before and after, and before and after that again, and so on. This ``subjective'' part of the model dynamics is constructed in section \ref{secObserversQuest}.
At several points in the construction of the evolution operator, random draws are made. As mentioned already, these are supposed to approximate some complex deterministic law of evolution, and they are made ``once and for all times''. The evolution operator is the same at all times.
Some conclusions are given in section \ref{Conclusions}, and a technically convenient restriction of the dynamics is defined in appendix \ref{secAvoidingLoops}.
\section{Records and objective dynamics\label{secRS}}
\subsection{A paradigm: Griffiths' models of histories}
An elaborate version of the Copenhagen interpretation is the formalism of consistent histories \cite{Griffiths2002}. The notion of measurement on a quantum system is modified to that of a property (selected from an available spectrum) which the system has, irrespective of whether an observer is in place. A system has such properties at times $t_0,t_1,\ldots, t_n$, while no property should be imagined for intermediate times. By a generalised Born's rule, probabilities are defined for sequences of the available properties (histories of the system) to occur. There are constraints, involving the time evolution operator, to be imposed on the sort of properties that can form a history. In order to demonstrate the constraints, Griffiths devises a number of toy models by immediately constructing operators of time-evolution, rather than Hamiltonians. For this purpose, the tensor-product structure of the so-called history Hilbert space turns out to be quite convenient. Moreover, models for simultaneously running processes in a time interval can be simply constructed as products of unitaries.
Although the intention in \cite{Griffiths2002} is to keep observers out of the theory, it certainly is a reasonable approximation for an observer model as well to consider awareness only at times $t_0,t_1,\ldots, t_n$ while leaving unspecified observer's state in between. In the model to be constructed, the times will be equidistant. The operator of evolution from an instant to the next will be the product $U_\mathrm{wit\,2}U_\mathrm{con}U_\mathrm{wit\,1}U_\mathrm{orb}U_\mathrm{age}$, their roles being to increase the ages of records, move the system along its (branching) orbit, create first half of witnessing records, run awareness cascade, and create second half of witnessing records. Each of these is a product of a large but finite number of unitary factors.
The tensor product structure of the space of states, which in Griffiths' formalism emerges from the notion of history Hilbert space, is a convenient element of model building independently of that notion, because it facilitates the construction of operators that manifestly commute. Below, the tensor product will describe a reservoir of potential records and their neuronal counterparts, each of which being treated as a subsystem. By construction of the evolution operator, every branch will consist of its own collection of subsystems. This makes the numbers of subsystems very large, raising the question of whether a ``multiverse'' in the classical sense is tacitly assumed for the model. It should therefore be noted that the subsystems can be mathematically identified with degrees of freedom of a conventional Hilbert space. For example, any unitary space of dimension $2^n$ can be rewritten as a tensor product by expressing the index of basis vectors in binary form and identifying $$
|i_1,\ldots,i_n\rangle \equiv |i_1\rangle \cdots |i_n\rangle \qquad i_k=0,1 $$ For the counting of dimensions, both versions are equivalent. In order to show that one branch nearly exhausts all dynamical dimension of awareness, what matters is the quantity of records and of neuronal response. The only quality of relevance is the date at which a record is created. For the model it therefore suffices to distinguish records by an unspecified index, and to make their age the only stored content.
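The identification can be made concrete with a small numerical sketch (illustrative only): the computational basis vector with index $i$ of a $2^n$-dimensional space equals the Kronecker product of single-bit basis vectors given by the binary digits of $i$.
\begin{verbatim}
import numpy as np

def basis(dim, i):
    e = np.zeros(dim)
    e[i] = 1.0
    return e

n, i = 3, 5                        # the vector |5> in an 8-dimensional space
bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]   # binary digits of i

prod = basis(2, bits[0])
for b in bits[1:]:
    prod = np.kron(prod, basis(2, b))

print(bits, np.allclose(prod, basis(2**n, i)))      # [1, 0, 1] True
\end{verbatim}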
\subsection{Structure of state vectors; dating of records\label{secAgeing}}
The subsystems of the model are ``objective'' records and ``subjective'' bits of mental processing. For the records, two classes are assumed. Those in class ${\cal O}$ (orbits) induce further records in the course of evolution, giving rise to branching world lines. The branches emanating from a record $k\in{\cal O}$ are collected in a set ${\cal B}(k)$ of $B$ elements. They are envisioned as predetermined and (due to complex dynamics of real macroscopic systems) pseudorandom. They are defined here\footnote{A more technical specification, simplifying evaluation, is given in appendix \ref{secAvoidingLoops}.} by random draw, once and for all time: \begin{equation} \label{DefB(k)}
{\cal B}(k) = \{j(k,1),\ldots,j(k,B)\} \mbox{ where } j(k,s) = \mbox{random draw from }
{\cal O}\backslash\{k\} \end{equation} The other class of objective records consists of mere witnesses, holding redundant information about records in ${\cal O}$: \begin{equation} \label{DefW(k)}
k\in{\cal O}\mbox{ is witnessed by all records } l \in {\cal W}(k) \end{equation} All ${\cal W}(k)$ are assumed to have the same macroscopic number $W$ of elements.
Observer's neurons are associated with witnessing records, not immediately with orbital records. The model neurons are distinguished by an index and are not specified any further. However, it could make sense to address the same anatomical neuron at different times by different indices. The multiple degrees of freedom would then be provided by the metabolic environment.
Denoting by ${\cal R}_k$ and ${\cal N}_k$ the state-vector space of a record and an observer's ``neuron'', respectively, the model Hilbert space is $$
{\cal H} = \bigotimes_{k\in{\cal O}}\left({\cal R}_k \otimes
\bigotimes_{l\in{\cal W}(k)} \left( {\cal R}_l\otimes {\cal N}_l \right) \right) $$ The information to be stored in a recording subsystem is whether anything is recorded at all, and if so, since how many steps of evolution. The basis states of a record of the orbital kind thus are \begin{equation} \label{DefRecordBasisOrbit}
|\mathrm{blank}\rangle, ~ |\mathrm{age}~m\rangle \quad m\in{\bf Z}
~~~~ \mbox{ for each index in } {\cal O} \end{equation} Observer's mental processing of a record is modelled by transitions between the firing and resting state of a ``neuron''. The basis states of a witnessing record and its mental counterpart are \begin{equation} \label{DefRecordBasisWitness}
\left\{ \begin{array}{l} |\mathrm{blank}\rangle \\
|\mathrm{age}~m\rangle \quad m\in{\bf Z} \end{array}\right\} \otimes
\left\{ \begin{array}{c} |\mathrm{rest}\rangle \\ |\mathrm{fire}\rangle \end{array}\right\}
~~ \mbox{for each index in } \bigcup_{k\in{\cal O}}{\cal W}(k) \end{equation} The ages of records could be limited to an observer's lifetime, as it was done in \cite{Polley2012}, but the infinite dimension implicit in (\ref{DefRecordBasisOrbit}) and (\ref{DefRecordBasisWitness}) is harmless with finite products of unitary operators, and avoids an unnecessary degree of subjectivity in the model.
For all recording subsystems, an ageing operation is defined: \begin{equation} \label{DefUage}
U_\mathrm{age} = \prod_{k\in{\cal O}} U_\mathrm{age}(k)
\prod_{i\in{\cal W}(k)}U_\mathrm{age}(i) \end{equation} where for record number $r$ $$
\begin{array}{l}
U_\mathrm{age}(r)|\mathrm{blank}\rangle_r = |\mathrm{blank}\rangle_r \\
U_\mathrm{age}(r)|m\rangle_r = |m+1\rangle_r
\end{array} $$
\subsection{Preferred initial state\label{secInitialState}}
A real observer's identity derives from a single DNA molecule. For a model of an observer's history, this is taken here as justification for considering exclusively the evolution from an initial state in which one orbital record $k_0$ and its witnesses ${\cal W}(k_0)$ are in the zero-age state while all other records are in their blank states. The choice of a zero-age record determines observer's entire history. It is the only ``seed'' for all ensuing pseudo-random processes of evolution.
\subsection{Orbital branching\label{secOrbit}}
The idea is that observer's history branches at every step of evolution, like in a quantum measurement. A new branch is described by an index of a new record, and is not specified any further. Imagining a new quantity being measured at every step would seem to be consistent with this scenario.
Deutsch \cite{Deutsch1999} showed, for systems with sufficiently many degrees of freedom, that Born's rule for superpositions with coefficients more general than in equation (\ref{EqualAmplitudeSplitting}) can be reduced to the equal-amplitude case, provided that the unitarity of any physical transformation is taken for granted. In the model to be constructed, evolution will be unitary. Therefore, invoking Deutsch's argument, only branching into equal-amplitude superpositions will be considered.
Under the condition that record $k$ is older than zero, and that all records to which the orbit possibly continues are blank, the orbit does continue as a superposition of zero-age states of the records of address set ${\cal B}(k)$. Else, the identity operation is carried out. The corresponding evolution operator, specific to point $k$ on an orbit, is defined using the following basis of the \emph{partial tensor product} relating to the records of the set ${\cal B}(k)$. \begin{equation} \label{PartialBasis}
\begin{array}{l}
|\Psi_0\rangle = \prod_{l\in{\cal B}(k)} |\mathrm{~blank}\rangle_l \\[3mm]
|\Psi_l\rangle = \Big(|0\rangle\langle\mathrm{blank}|\Big)_l|\Psi_0\rangle
\qquad l\in{\cal B}(k)
\end{array} \end{equation} That is, one basis vector has all records of ${\cal B}(k)$ in the blank
state, while the remaining have one record promoted to the zero-age state. The subspace orthogonal to $|\Psi_0\rangle$, \ldots, $|\Psi_B\rangle$ is spanned by product vectors with more than one record in a zero-age state or with records in higher-age states.
The idea of equal-amplitude branching from point $k$ is that $|\Psi_0\rangle$ should evolve
into a superposition of $|\Psi_1\rangle$ to $|\Psi_B\rangle$; in the basis (\ref{PartialBasis}), \begin{equation} \label{SplittingShorthand}
\left(\begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array} \right) \longrightarrow
\frac1{\sqrt B} \left(\begin{array}{c} 0 \\ 1 \\ \vdots \\ 1 \end{array} \right) \end{equation} A convenient way of completing this to define a unitary operator is to use a Fourier basis in $B$ dimensions, $$
F_m = \frac1{\sqrt B}\left(\begin{array}{c} \alpha_m^0 \\ \alpha_m^1 \\ \vdots \\
\alpha_m^{B-1} \end{array} \right) \qquad \alpha_m = \exp\frac{2\pi i m}{B}
\qquad m = 0,\ldots B-1 $$ Relating to the basis (\ref{PartialBasis}), and using the $F_m$ as $B$-dimensional column vectors, a branching operation can be defined using the $(B+1)\times(B+1)$ matrix $$
S = \left(\begin{array}{ccccc} 0 & 1 & 0 & \cdots & 0 \\
F_0 & 0 & F_1 & \cdots & F_{B-1} \end{array} \right) $$ whose columns form an orthonormal set. Conditioning on $\mathrm{age} = 1$ of record $k$, the factor of orbital evolution triggered by this record is then given by \begin{equation} \label{DefUorb(k)}
U_\mathrm{orb}(k) = 1 + \left( \sum_{n,n'=0}^B |\Psi_n\rangle
(S_{nn'}-\delta_{nn'}) \langle\Psi_{n'}|\right)_{{\cal B}(k)}
|\mathrm{age}~1\rangle_k \langle \mathrm{age}~1|_k \end{equation} The bracket reduces to zero, in particular, when branching from $k$ has occurred previously in the evolution, so that, by subsequent ageing of non-blank records, any zero-age components of records have been promoted to higher-age components; cf.\ (\ref{PartialBasis}). In this way, repeated branching from the same point as well as loops of evolution are avoided. Technically, however, an additional means of avoiding loops (appendix \ref{secAvoidingLoops}) facilitates the evaluation of evolution. The global operator of orbital evolution is \begin{equation} \label{DefUorb}
U_\mathrm{orb} = \prod_{k\in{\cal O}} U_\mathrm{orb}(k) \end{equation}
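A small numerical sketch (with an arbitrary choice of $B$) of the matrix $S$ illustrates that its columns are orthonormal and that it maps $|\Psi_0\rangle$ to the equal-amplitude superposition of $|\Psi_1\rangle,\ldots,|\Psi_B\rangle$, as in (\ref{SplittingShorthand}).
\begin{verbatim}
import numpy as np

B = 4                                      # number of branches (arbitrary)
# F[p, m] = alpha_m^p / sqrt(B), i.e. column m is the Fourier vector F_m
F = np.exp(2j * np.pi * np.outer(np.arange(B), np.arange(B)) / B) / np.sqrt(B)

S = np.zeros((B + 1, B + 1), dtype=complex)
S[0, 1] = 1.0                              # first row (0, 1, 0, ..., 0)
S[1:, 0] = F[:, 0]                         # column 0 carries F_0
S[1:, 2:] = F[:, 1:]                       # columns 2..B carry F_1..F_{B-1}

print(np.allclose(S.conj().T @ S, np.eye(B + 1)))    # unitarity of S
print(np.round(S @ np.eye(B + 1)[:, 0], 3))          # (0, 1/sqrt(B), ..., 1/sqrt(B))
\end{verbatim}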
\subsection{Witnessing}
The basis states of the witnessing records are defined in (\ref{DefRecordBasisWitness}). In order to represent (\ref{DefW(k)}) by an evolution operator, orbital records of zero age are assumed to induce a change of witnessing records from blank to zero-age. The relevant part of the operation is, in self-explaining notation, \begin{equation} \label{UwitSimpl}
\left( \prod_{l\in{\cal W}(k)} \Big(|0\rangle \langle\mathrm{blank}|\Big)_l\right)
\Big( |0\rangle \langle 0|\Big)_k \end{equation} For the sake of unitarity, however, this needs to be complemented by further operations, although these will never become effective in the evolution of an initial state as defined in section \ref{secInitialState} and as age-promoted by the operators of section \ref{secAgeing}. A complemented version of the operator above would be $$
1 + \left( -1 + \prod_{l\in{\cal W}(k)} \left[|0\rangle \langle\mathrm{blank}|
+ |\mathrm{blank}\rangle \langle 0| + \sum_{m\neq 0}|m\rangle\langle m|\right]_l
\right) \Big( |0\rangle \langle 0|\Big)_k $$ However, witnessing records just created can be read and processed by an observer within the same step of evolution. It will be essential for the functioning of a stochastic mechanism below (section \ref{secObserversQuest}) that half of the witnessing records are created before the observer might immediately address them, while the other half is created thereafter. For this purpose, let the addresses of witnessing records be split into subsets of equal size, $$
{\cal W}(k) = {\cal W}_1(k) \cup {\cal W}_2(k) ~~~~~~
{\cal W}_1(k) \cap {\cal W}_2(k) = \emptyset $$ The two corresponding witness-generating evolution operators are \begin{equation} \label{DefUwit}
U_\mathrm{wit\,1} = \prod_{k\in{\cal O}} U_\mathrm{wit\,1}(k) ~~~~~~~~~~~~~~~~~~
U_\mathrm{wit\,2} = \prod_{k\in{\cal O}} U_\mathrm{wit\,2}(k) \end{equation} where \vspace*{-4mm} $$
~~~~~~~~ \begin{array}{l}
U_\mathrm{wit\,1,2}(k) ~ = ~ 1 ~ + \\[1mm] \displaystyle
\left( -1 + \!\!\! \prod_{l\in{\cal W}_{1,2}(k)} \left[|0\rangle \langle\mathrm{blank}|
+ |\mathrm{blank}\rangle \langle 0| + \sum_{m \neq 0}|m\rangle\langle m|\right]_l
\right) \Big( |0\rangle \langle 0|\Big)_k
\end{array} $$
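Unitarity of the complemented operator can be checked on a toy example; the sketch below (an illustration only) uses two witnesses and truncates the age register of every subsystem to the three states blank, $0$, $1$, so that the bracketed factor becomes a simple swap of blank and zero-age states.
\begin{verbatim}
import numpy as np

d = 3   # truncated basis per subsystem, ordered: blank, age 0, age 1
swap = np.eye(d)
swap[[0, 1], [0, 1]] = 0
swap[0, 1] = swap[1, 0] = 1
# swap = |0><blank| + |blank><0| + sum_{m != 0} |m><m|  (here only m = 1)

P0 = np.zeros((d, d))
P0[1, 1] = 1.0                  # projector |0><0| on the orbital record k

# factor ordering: witness 1  x  witness 2  x  record k
I = np.eye(d**3)
U = I + np.kron(np.kron(swap, swap) - np.eye(d**2), P0)

print(np.allclose(U.T @ U, I))  # True: the complemented operator is unitary
\end{verbatim}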
\section{Observer's mental programme\label{secObserversQuest}}
Consider an observer who is constantly trying to assemble his records into a coherent temporal sequence. One way of doing this would be to see what happened at the middle $a/2$ of his life at age $a$, by seeking an appropriate record; then what happened a quarter before and after, at $a/4$ and $3a/4$, then at multiples of $a/8$, and so forth. This defines a cascade of increasing temporal resolution. For ``coherence'', connection by a logical AND is required. It would fit in with the spirit (not with the technical detail) of Tononi's concept of consciousness as Integrated Information \cite{Tononi2008}: ``Phenomenologically, every experience is an integrated whole, one that means what it means by virtue of being one, \ldots''. Moreover, memory becomes a fundamental constituent of consciousness \cite{Edelman1989} in this way.
\subsection{Generating the power-law statistics\label{secGenPowStat}}
The idea for generating power-law statistics, within one step of evolution, is as follows. While the AND condition is satisfied, the cascade of records addressed grows like $2^l$ where $l$ is the generation number. By the scanning procedure to be constructed, records of non-zero ages will be found with probability 1 whenever addressed, but records of zero age (being created within the same step and representing the ``present'') will only be found with probability 1/2. The entire cascade is stopped when the quest for a coherent picture of past and present fails, for which the probability is 1/2 in every generation. This is a standard mechanism for generating power law statistics \cite{SimkinRoychowdhury2006}, here with exponent $-1$ for the cumulative distribution function since a number greater than $n=2^{l+1}-1$ (sum of generations) is obtained with probability $\frac12(n+1)^{-1}$.
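A Monte-Carlo sketch of this stopping mechanism (with bookkeeping conventions that may differ slightly from the counting in the text) reproduces the $-1$ tail exponent:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def cascade_size():
    # generation l addresses 2^l records; each generation survives with
    # probability 1/2 (the zero-age witness is found only half of the time)
    n, l = 0, 1
    while rng.random() < 0.5:
        n += 2 ** l
        l += 1
    return n

sizes = np.array([cascade_size() for _ in range(200_000)])
for l in range(1, 8):
    n = 2 ** (l + 1) - 2                          # attainable values of the size
    print(n, (sizes > n).mean(), 1.0 / (n + 2))   # empirical tail vs ~ 1/n
\end{verbatim}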
\subsection{Recursive construction of awareness cascade}
The unitary operator to be constructed in this section will go through all possible cascades of records of ages $a/2$, multiples of $a/4$, of $a/8$ and so on, looking for a randomly chosen witnessing record $i$ in every ${\cal W}(k)$ of the cascade. It should be noted again that all random draws are made once and for all times. Parameter $a$ will be an eigenvalue of observer's age operator, defined in section \ref{secObserversAge}.
We begin by constructing the $l$th generation of the cascade. Consider a set $g$ of addresses given by pairs $(k,i)$ with \begin{equation} \label{ikjDef}
\left. \begin{array}{rcl}
k(j) & = & \mbox{label of an orbital record} \\
& & \mbox{restricted by } k(j) < k(j') \mbox{ for } j < j'
\\[3mm]
i(j) & = & \mbox{random draw from ${\cal W}(k(j))$}
\end{array} \right\} \mbox{ for }j = 1,\ldots,2^l \end{equation} The ordering of the $k$ is for technical convenience; permutations are taken into account when assigning ages, as below. Denote the collection of all possible sets of the above form by $$
{\cal G}(l) = \big\{ \mbox{all possible $g$ of the form (\ref{ikjDef})} \big\} $$ The random draws are understood to be \emph{independent for different} $g$. The elementary projection on which the scanning operation is based is \begin{equation} \label{PaikDefinition}
\Big( |m\rangle \langle m|\Big)_i = \mbox{projection on age $m$ of record $i$} \end{equation} Below, fractional ages are converted to integers by the ceiling function $\lceil~\rceil$. To enable the scanning of all combinations of ages and records, let us define \begin{equation} \label{alphaDef}
\alpha = \mbox{sequence consisting of ages }\lceil 2^{-l}(j-1)a \rceil,
j = 1,\ldots,2^l, \mbox{ reordered} \end{equation} This distinguishes permutations of different ages, but not of equal ages. Denote the collection of all sequences of the form (\ref{alphaDef}) by $$
{\cal A}(l) = \big\{ \mbox{all possible $\alpha$ of the form (\ref{alphaDef})} \big\} $$ The projection operator testing whether the records given by $g$ have ages as given by $\alpha$ is \begin{equation} \label{DefP(g,a)}
P(g,\alpha) =
\left[\prod_{j=1}^{2^l} \Big( |\alpha(j)\rangle \langle \alpha(j)|\Big)_{i(j)}\right]
\left[\prod_{i \notin g} \Big( |\mbox{blank}\rangle \langle \mbox{blank}|\Big)_i\right] \end{equation} where $i(j)$ in the first bracket denotes elements of $g$. These projectors are mutually orthogonal, \begin{equation} \label{OrthogonalP(g,a)}
P(g,\alpha) P(g',\alpha') = 0 ~~ \mbox{ if } g\neq g' \mbox{ or } \alpha\neq \alpha' \end{equation} To show this, consider $g \neq g'$. Let $i$ be an index in $g$ but not in $g'$. Then in
$P(g,\alpha)$ we have a projector $( |m\rangle \langle m|)_i$ while in $P(g',\alpha')$ we have
$( |\mbox{blank}\rangle \langle \mbox{blank}|)_i$ instead. The product of these two is zero already. Secondly, consider the case of $g=g'$ and $\alpha\neq\alpha'$. Let $j_0$ be an index for which $\alpha(j_0)\neq \alpha'(j_0)$. Now the projectors are orthogonal because they project on different ages for record $i(j_0)$.
The idea for the modelling of observer's neuronal reaction is as follows. In the subspace where the test by $P(g,\alpha)$ is positive (all elementary projections give 1) the observer notices it by some neural activity, and the next generation of the scanning process takes place. In the subspace where the test is negative (some elementary projection gives 0) nothing happens; evolution reduces to an identity operation. The neural activity is modelled by 2-dimensional rotations $\sigma_i$ in the counterparts ${\cal N}_i$ of the records. In the subspace where $g$ tests positive, the collective rotations are, with obvious assignment to the tensorial factors, $$
\sigma(g) = \prod_{i \in g} \sigma_i $$ The awareness cascade, running within a step of evolution, is conveniently constructed recursively, downward from higher to lower resolutions of time. This is enabled by the fact that after many generations the finite contents of the address sets ${\cal W}(k)$ will be exhausted. So there is a maximum $L$ for the generation number $l$, determined by the other parameters of the model. The counting of the generations will be upward here as usual, beginning with $l=1$ at age $a/2$. The recursion is initialised by \begin{equation} \label{UawaInitial}
U_\mathrm{awa}(L+1,a) = 1 \end{equation} and proceeds by \begin{equation} \label{UawaRecursion}
U_\mathrm{awa}(l,a) = 1 + \sum_{g\in{\cal G}(l)} \sum_{\alpha\in{\cal A}(l)}
\Big( -1 + U_\mathrm{awa}(l+1,a) \sigma(g) \Big) P(g,\alpha) \end{equation} The awareness operator for the completed cascade is $ U_\mathrm{awa}(1,a)$. Defining it by recursion is only a convenient way of representing the algebraic structure. In application to a state vector, projections of the various generations automatically occur in the natural order, $l=1,\ldots,L$.
In order to show that (\ref{UawaRecursion}) indeed defines a unitary operator, first note that $P(g,\alpha)$ only consists of projections on the ages of witnessing records, so that basis states of the form (\ref{DefRecordBasisWitness}) are eigenstates of the projectors. All age projectors commute among themselves, and commute with the $\sigma$ operations because these do not act on records. It follows, starting from (\ref{UawaInitial}) and going through (\ref{UawaRecursion}), that the projectors commute with the $U_\mathrm{awa}$ of all generations. Using (\ref{OrthogonalP(g,a)}), unitarity in the form $U_\mathrm{awa}(l,a)^\dag U_\mathrm{awa}(l,a) = 1$ can then be shown by straightforward algebra.
\subsection{Observer's age and conscious history\label{secObserversAge}}
Parameter $a$ of the preceding section is identified here as an eigenvalue of observer's age operator. It suffices to assign an age to any tensor product of basis vectors as defined in (\ref{DefRecordBasisWitness}). Assigning 0 to the ``blank'' state here, any basis state of record $i$ has an age value $a_i$. The age operator is defined by $$
A \prod_i |a_i\rangle = ( \max a_i) \prod_i |a_i\rangle $$ Observer's lifetime can be taken into account by restricting neuronal activity to ages $ a \leq T$, by including an operator factor $\Theta(T-A)$. Thus, the final expression for the evolution operator of observer's consciousness is \begin{equation} \label{DefUconsc}
U_\mathrm{con} = 1 + \Big(-1 + U_\mathrm{awa}(1,A)\Big)\Theta(T-A) \end{equation} The complete evolution operator is a product of the factors constructed above. In a new step of evolution, ages of all records are increased by one unit. Next, orbital records develop. The creation of witnessing records and their processing by the observer (operations that do not commute) are assumed to be intertwined in such a way that unitarity is manifestly preserved. The full evolution operator of the model is \begin{equation} \label{DefUmodel}
U = U_\mathrm{wit\,2} ~ U_\mathrm{con} ~ U_\mathrm{wit\,1} ~ U_\mathrm{orb} ~
U_\mathrm{age} \end{equation}
\subsection{Verifying the Scenario\label{ScenarioRecovered}}
\subsubsection{Structure of branches}
We start out from a product state as specified in section \ref{secInitialState} and repeatedly apply the evolution operator (\ref{DefUmodel}). Clearly, since product states form a basis, we can always write the resulting states as superpositions of products; however, we wish to show that only a superposition of special products emerges, which will be regarded as the ``branches'' of the wave function. After $a$ applications of the evolution operator $U$, the properties of those product states are as follows. \begin{enumerate} \item In each branch there is precisely one orbital record of age $0$. \item For each of the ages $0,\ldots,a$, there is one set ${\cal W}(k)$ of
witnessing records in the corresponding eigenstates of age; all other witnesses
are blank. \item Neuronal states and record states factorise (do not entangle). \end{enumerate} The initial state has these properties with $a=0$ by definition. Let us assume then that $a$ applications of $U$ have produced a superposition of product states with properties 1--3. When $U$ is applied once more, it suffices by linearity to consider the action on any of the product states. The first action is to increase by 1 the ages of all non-blank records. There is now for each of the ages $1,\ldots,a+1$ precisely one set ${\cal W}(k)$ of witnessing records in the corresponding eigenstates of age, while all other witnesses are blank; witnesses of age $0$ are missing so far. Also, a single orbital record of age 1 is generated from the previous one of age 0; let its address be $k_1$. Now applying $U_\mathrm{orb}$, as defined in (\ref{DefUorb}), only the factor with $k=k_1$ can have an effect since the projection on age 1 gives zero for all other $k$. Nontrivial action of $U_\mathrm{orb}(k_1)$ requires all records in ${\cal B}(k_1)$ to be blank, which would not be true if the system had been on any of those points before. Invoking the loop-avoiding specification of ${\cal B}(k_1)$, as given in appendix \ref{secAvoidingLoops}, we can regard this condition as satisfied within observer's lifetime. The action of $U_\mathrm{orb}(k_1)$ then consists in creating a new superposition of products, with a single zero-age orbital record in each of them, as expressed in (\ref{SplittingShorthand}). Property 1 holds in each of these products. For the remaining operations of $U$ it suffices by linearity again to apply them only on the product states just created by $U_\mathrm{orb}(k_1)$. Let $k_0$ be the single zero-age orbital record in one of them. Then, of all witness-generating factors of (\ref{DefUwit}), only $U_\mathrm{wit\,1}(k_0)$ and $U_\mathrm{wit\,2}(k_0)$ act nontrivially, due to the conditioning on zero-age. The records of the sets ${\cal W}_1(k_0)$ and ${\cal W}_2(k_0)$ are in blank states before this action, because orbital point $k_0$ was not visited before, so the simplified expression (\ref{UwitSimpl}) applies, and the records of ${\cal W}_1(k_0)$ are transformed from blank to zero-age states. Thus, $U_\mathrm{wit\,1}(k_0)$ generates the first half of zero-age witnessing records that were missing so far from the full range of ages. Next comes the action of $U_\mathrm{con}$, defined in (\ref{DefUconsc}). It is the only factor of evolution which could affect property 3. It consists in making certain neuronal factors rotate if certain projections on the ages of records are nonzero, and no action else. All records are in definite ages or blank, so the projections of $U_\mathrm{con}$ preserve the product form of the state vector. The neuronal rotations preserve the product form by construction. Hence, property 3 continues to hold. Finally, the second half of zero-age witnessing records is generated by $U_\mathrm{wit\,2}(k_0)$, so property 2 holds as well after $a+1$ applications of the evolution operator.
\subsubsection{Awareness cascades}
To evaluate the awareness cascades encoded in $U_\mathrm{con}$, defined in (\ref{DefUconsc}), assume that observer's age is $a<T$ so that $U_\mathrm{awa}(1,a)$ applies. The projection operators $P(g,\alpha)$ of (\ref{UawaRecursion}), using (\ref{ikjDef}) and (\ref{alphaDef}) for $l=1$, test for randomly chosen records in ${\cal W}(k(0))$ and ${\cal W}(k(1))$, with ages $0$ and $\lceil a/2\rceil$ or the permutation of these, while the pair of $k(0)$ and $k(1)$ is ordered. By property 2 above, the product state (or branch) being considered has records of one set ${\cal W}(k_0)$ at age $0$ and of one set ${\cal W}(k_1)$ at age $\lceil a/2\rceil$. Hence, the only successful projection $P(g,\alpha)$ can be for $g=(k_0,k_1)$ and $\alpha=(0,\lceil a/2\rceil)$ or for $g=(k_1,k_0)$ and $\alpha=(\lceil a/2\rceil,0)$, depending on which of the addresses $k_0$ or $k_1$ is smaller. Only one term, at most, contributes to the sum over $g$ and $\alpha$ in (\ref{UawaRecursion}). The test for $k_1$ with age $\lceil a/2\rceil$ will be positive, since all witnesses of ages $1$ to $a$ have been created during the preceding steps of evolution. However, witnesses of age $0$ are created half before the action of $U_\mathrm{con}$ and half thereafter. If the randomly chosen record from ${\cal W}(k_0)$ is contained in the first half, ${\cal W}_1(k_0)$, it is created by $U_\mathrm{wit\,1}$ before the action of $U_\mathrm{con}$, so the test will result in $P=1$ in equation (\ref{UawaRecursion}); if it is created by $U_\mathrm{wit\,2}$ instead, the test will result in $P=0$. In the latter case, $U_\mathrm{awa}(1,a)$ reduces to the identity operation. In the case of $P=1$, the neurons associated with the witnesses for $k_0$ or $k_1$ become active through the $\sigma$ factor, and the second generation of the cascade comes into action through $U_\mathrm{awa}(2,a)$. The argument repeats: As a consequence of property 2, at most one combination $(g,\alpha)$ contributes to the sum (\ref{UawaRecursion}) for $U_\mathrm{awa}(2,a)$, namely that combination in which the records collected in $g$ are tested for the ages they actually have on the branch considered. The zero-age orbital record $k_0$ and its witnesses are the same for all generations of the cascade, but for each $l$ a new randomly chosen witness is tested. Projection $P(g,\alpha)$ reduces to $1$ if that witness happens to be created by $U_\mathrm{wit\,1}$, while it reduces to $0$ if it is created by $U_\mathrm{wit\,2}$. The probability for the cascade to continue is $1/2$ in every generation. Witnesses of higher age always test positive, as they have been created in the preceding steps of evolution.
\subsubsection{Statistics of dimensions of conscious subspaces\label{DimensionStatistics}}
If the observer lives to age $T$, the number of orbital points on his world-tree is $$
N = \frac{B^{T+1}-1}{B-1} $$ This is also the number of statistically independent awareness cascades, as we now show. By definition (\ref{ikjDef}), a new series of random draws is made for every sequence $g$, a selection of orbital addresses. This definition does not a priori relate to a specific time, but its occurrence in the projector $P(g,\alpha)$ of the evolution operator, defined in (\ref{DefP(g,a)}), combines it with a sequence of observer's ages. We intend to show the following: If projections with the same sequence, $g_1=g_2$, give positive results for two points on observer's world-tree, those points must be equal.
Let $a_1$ and $a_2$ be observer's ages at the two points, and let $k_1$ and $k_2$ be the orbital points of zero age, the ``present'' points. By (\ref{DefP(g,a)}) and (\ref{alphaDef}), the present point is always contained in $g$, so $k_1\in g_1$ in particular. Since $g_1=g_2$, this implies $k_1\in g_2$. Since $P(g_2,\alpha_2)$ is assumed to test positive, $k_1$ must be an orbital record on the branch leading to $k_2$, so it either coincides with $k_2$ or is a record of age greater than zero. In the latter case, it must have been the zero-age record at an earlier age of the observer. Hence, $a_1<a_2$, unless $k_1=k_2$. Exchanging 1 and 2 in the argument, we find $a_1>a_2$ unless $k_1=k_2$. This implies $k_1=k_2$ and $a_1=a_2$.
For the number of neurons activated in a cascade (section \ref{secGenPowStat}) the probability distribution is a power law characterised by exponent $-1$. Hence, by a theorem of order statistics \cite{Embrechts1997}, the largest number of neurons activated exceeds the second-largest by a quantity of order $N$. More precisely, using notation of \cite{Embrechts1997} corollary 4.2.13, given an ensemble of size $N$ of random draws with the power-law distribution, we have for the difference between the largest draw $X_{1,N}$ and the second-largest $X_{2,N}$ \begin{equation} \label{FrechetSeparation}
X_{1,N}-X_{2,N} = N \, Y \qquad \mbox{$Y$ = random variable independent of $N$} \end{equation} The distribution of $Y$ is non-singular. The dimension of the active neuronal subspace in the extremal branch is $2^{X_{1,N}}$. The total dimension of active neuronal subspaces in all other branches is bounded by $2^{X_{2,N}}N$. For the latter to be larger than the former, the probability is $$
P(2^{NY}<N) = P(Y<N^{-1}\log_2N) = \mbox{negligible} $$ By a comfortable margin, an observer can expect to find his world-line well-defined, providing it is indeed the \emph{dimension} of awareness that matters, rather than the number of neurons.
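The order-statistics effect behind (\ref{FrechetSeparation}) can be illustrated numerically (a sketch with an arbitrary ensemble size): for draws with tail $P(X>x)=1/x$, the gap between the largest and the second-largest value scales with the ensemble size $N$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

N, trials = 10_000, 2_000
gaps = np.empty(trials)
for t in range(trials):
    x = 1.0 / rng.random(N)        # Pareto draws: P(X > x) = 1/x for x >= 1
    x.sort()
    gaps[t] = x[-1] - x[-2]        # X_{1,N} - X_{2,N}

# the rescaled gap has a non-degenerate distribution of order one
print(np.median(gaps) / N, np.quantile(gaps / N, [0.25, 0.75]))
\end{verbatim}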
Two arguments in favour of the dimension are at hand. The simplest is Fermi's Golden Rule, although it relies on state reduction and so goes beyond the framework of the model; any transition probabilities into observer's subspace of awareness would be proportional to the dimension of the subspace. The other argument uses a change of basis in the union of conscious subspaces of \emph{all}
branches. Let $|1\rangle,\ldots,|N\rangle$ be a basis for the extremal branch, and
$|N+1\rangle,\ldots,|N+M\rangle$ a basis for the remaining branches. We know that $M$ is tiny
in comparison to $N$. Now consider instead a Fourier basis, which consists of superpositions of all $|1\rangle,\ldots,|N+M\rangle$ with equal amplitudes but different phases. In each of the new basis vectors, properties of the non-extremal branches only occur in a tiny component. The situation is now similar to that of an electron bound to a proton on earth. It resides by $10^{-10^{18}}$ of its wavefunction behind the moon, but we still regard it as an electron on earth.
\subsection{Analysing states in a time-local basis\label{secLocalBasis}}
In order to analyse the properties of a state vector, it must be represented in a particular basis, such as the eigenbasis of an observable. For some applications, like representing dynamics in the Heisenberg picture, the basis may conveniently be chosen time-dependent.
As to the state vectors of observer's neurons, we have so far used a global basis which applies to all branches, and in which the evolution operator is constant. In this way, observer's entire experience emerges in a single step of evolution near the end of his lifetime. Observer's mental reactions thus appear highly non-local. However, when analysed in a time-dependent basis adapted to the evolution in one particular branch, observer's reactions occur simultaneously with the creation of the records, while the operator of evolution appears to change in a random way after each observation. This is trivial mathematically, but not physically. Consider a section of evolution of a neuronal subsystem, $$
\left(\begin{array}{c} x_1 \\ x_2 \end{array}\right) \stackrel{1}{\longrightarrow}
\left(\begin{array}{c} x_1 \\ x_2 \end{array}\right) \stackrel{\sigma}{\longrightarrow}
\left(\begin{array}{c} y_1 \\ y_2 \end{array}\right) $$ where entries relate to some initially chosen basis, and where $\sigma$ is a unitary $2\times 2$ matrix. Only in the second step something appears to happen here. Changing the basis for the second state vector such that $$
\left(\begin{array}{c} x_1 \\ x_2 \end{array}\right)_\mathrm{old~basis}
= \sigma^{-1}\left(\begin{array}{c} y_1 \\ y_2 \end{array}\right)_\mathrm{new~basis} $$ the section of evolution takes the form $$
\left(\begin{array}{c} x_1 \\ x_2 \end{array}\right) \stackrel{\sigma}{\longrightarrow}
\left(\begin{array}{c} y_1 \\ y_2 \end{array}\right) \stackrel{1}{\longrightarrow}
\left(\begin{array}{c} y_1 \\ y_2 \end{array}\right) $$ The step of evolution where change appears can be shifted to any position in the sequence. Due to the tensor-product structure of the model's branches, the argument applies separately to all neurons involved. It is thus \emph{possible} to choose a basis in which observer's neuronal reactions appear local, but it is a different choice in each branch, and the reason for the choice cannot be found in the state of records at the given time. In this sense, the choice appears to be intrinsically random.
\section{Conclusions\label{Conclusions}}
It has been demonstrated, for a toy model of a quantum system with a conscious observer, that a unitary evolution operator, repeatedly applied to an appropriate initial state, can accomplish two things: gather information about an observer's world-tree, and perform a random draw on the world-tree so as to single out a world-line of extreme awareness. A theorem, known from order statistics, about the dominance of the extreme in a power-law ensemble plays a central role.
What the model \emph{avoids} doing is giving up fundamental linearity and introducing fundamental stochastics. The framework is state vectors and unitary evolution under a constant law. Yet certain vectors evolve pseudo-stochastically.
The role of time and causality in the model is precarious, inevitably so under the working hypothesis that a world-line should be determined by an extremal draw on a world-tree. As was shown in section \ref{secLocalBasis}, the model reproduces the usual scenario of alternating Schr\"odinger-type and Born-type evolution when represented in an appropriate basis. Observer's mental reactions then appear at the same instant as the generation of records, but the choice of the basis appears indeterminate at that instant. The model resolves that indeterminacy by omitting any erosion of ``witnessing records'', keeping them readable throughout observer's lifetime. Can this be true for more realistic ``witnessing records'', or is this the point where attempts at realistic modifications of the model must fail?
Non-universality of time could be an argument in favour of the optimistic alternative. From Special Relativity Theory, time is known to be observer-dependent, but only with negligible effects if observers move at low speed. On this basis, time is treated as universal in nonrelativistic quantum mechanics (likewise in the preceding sections of this paper). But quantum mechanics provides its own path to special relativity, in the sense that it enables pre-relativistic derivations of the Dirac equation \cite{Zee1991,Polley2001}; it might also provide its own version of observer-dependent time. The existence of two modes of evolution, Schr\"odinger and Born, might be an indication of it.
Having recovered the stochastic appearance of measurements in section \ref{secLocalBasis} by referring to a specific basis, we may have some freedom in reinterpreting the evolution \emph{operator} as something more general. Since logics is always part of a natural law, and conceptually more general, could the role of the operator be to generate a logical structure of which time evolution is only a representative in a particular basis? In elementary cases like those described by a Dirac equation, the law of motion is close to mere logics of nearest neighbours \cite{Polley2001}, so ``space-time'' might indeed reduce to ``space-logics''. For quantum systems with great complexity, like an elementary system coupled to a conscious observer, logical implications might depend on many conditions, and could be halted as long as some conditions were not met by the state vector.
In different ways, ``halted'' evolutions are also considered in other scenarios of quantum measurement. Stapp \cite{Stapp1993} proposed an interaction between mind and matter based on the quantum Zeno effect; it would keep observer's attention focussed on one outcome in a measurement, but a side effect would be the halting of processes in matter under observation. With consistent histories \cite{Griffiths2002}, there is a copy of Hilbert space assigned to each of the times $t_1,\ldots,t_n$ of measurement. In the present model, an analogue of such ``history Hilbert spaces'' may be seen in the subspaces defined by the written states of records at a time $t_k$.
The modelling of consciousness by a coherent, logically conjunctive neuronal activity appears to be in the \emph{spirit} of Integrated Information \cite{Tononi2008}. As a \emph{measure} of consciousness, however, section \ref{DimensionStatistics} of the present paper suggests to take the total dimension of the subspace of neuronal activity, which is very different from the entropy-based measure suggested in \cite{Tononi2008}. By taking logarithms of dimensions, an elementary relation like that of one subspace covering the union of many other subspaces becomes almost invisible.
If the model scenario could indeed be extended to more realistic systems and observers, it would suggest an easier intuitive look on state vectors, and on the persistent problem of ``the Now'' \cite{Mermin2013}. Intuitively, states of superposition have always been associated with potentialities for a quantum system, but the need for an actuality seemed to make it an insufficient characterisation. The model scenario suggests to identify actuality with that potentiality which involves an extremal degree of awareness. It is generated in one step of logical evolution, so an observer's impression of his entire experience as one shifting moment would seem less surprising.
\begin{appendix}
\section{Suspended orbital return\label{secAvoidingLoops}}
Presumably, the probability for an orbital point to be visited twice under the dynamics of section \ref{secOrbit} is negligible, but in order to enable exact statements on evolution, any returns of the system should be rigorously excluded for the time span of interest.
Loops of evolution on a branching orbit of points in ${\cal O}$ cannot be avoided entirely if ${\cal O}$ is a finite set, but they can be avoided within an observer's lifetime. To keep branches apart for $T$ splittings, assume ${\cal O}$ to be decomposable into $B^T$ subsets of the form ${\cal J}[s]$, mutually disjoint and big enough to serve as an ensemble for a random draw, with $s$ a register of the form \begin{equation} \label{Register}
s = [s_1,s_2,\ldots,s_T] ~~ \mbox{ where }~~ s_j \in \{1,2,\ldots,B \} \end{equation} Then, starting from $k\in{\cal J}[s_1,s_2,\ldots,s_T]$, define the jumping addresses $j(k,s)$ for branches $s = 1,\ldots,B$ by \begin{equation} \label{JumpingRegister}
j(k,s) = \mbox{random draw from }{\cal J}[(s_2\,\mathrm{mod}\,B) + 1,s_3,\ldots,s_T,s] \end{equation} The cyclic permutation in the first entry serves to avoid jumping within one subset. Entry $s$ will remain in the register for $T$ subsequent splittings. Thereafter, the corresponding information is lost, allowing for inevitable loops to close. The addresses generated in (\ref{JumpingRegister}) are a loop-avoiding specification of the elements of ${\cal B}(k)$, previously defined in (\ref{DefB(k)}).
\end{appendix}
\end{document} |
\begin{document}
\title{Federated learning with incremental clustering for heterogeneous data}
\author{Fabiola ESPINOZA CASTELLON \and Aurélien MAYOUE \and Jacques-Henri SUBLEMONTIER \and Cédric GOUY-PAILLER}
\authorrunning{F. ESPINOZA CASTELLON et al.}
\institute{Institut LIST, CEA, Université Paris-Saclay, F-91120, Palaiseau, France \email{[email protected]}} \maketitle \begin{abstract} Federated learning enables different parties to collaboratively build a global model under the orchestration of a server while keeping the training data on clients' devices. However, performance is affected when clients have heterogeneous data. To cope with this problem, we assume that despite data heterogeneity, there are groups of clients who have similar data distributions that can be clustered. In previous approaches, in order to cluster clients the server requires clients to send their parameters simultaneously. However, this can be problematic in a context where there is a significant number of participants that may have limited availability. To prevent such a bottleneck, we propose FLIC (Federated Learning with Incremental Clustering), in which the server exploits the updates sent by clients during federated training instead of asking them to send their parameters simultaneously. Hence no additional communications between the server and the clients are necessary other than what classical federated learning requires. We empirically demonstrate for various non-IID cases that our approach successfully splits clients into groups following the same data distributions. We also identify the limitations of FLIC by studying its capability to partition clients at the early stages of the federated learning process efficiently. We further address attacks on models as a form of data heterogeneity and empirically show that FLIC is a robust defense against poisoning attacks even when the proportion of malicious clients is higher than 50\%.
\keywords{Federated learning, clustering, non-IID data, poisoning attacks} \end{abstract}
\section{Introduction} \label{sec:intro} Federated Learning (FL) is a new distributed machine learning paradigm that enables multiple clients to build a common model under the orchestration of a central server. This paradigm functions while keeping the training data on clients' devices. McMahan et al. \cite{mcmahan2017communicationefficient}
introduced FL in order to preserve privacy and reduce the overhead communication costs due to data collection. Contrary to traditional server-side approaches which aggregate data on a central server for training, FL distributes learning tasks among clients and aggregates only locally-computed updates to build a single global model. Therefore, the global objective function $f$ to be minimized is formulated as a weighted sum of the local objective functions $f_k$:
\begin{equation}
\min_{w}{f(w)}=\min_{w}{\sum_{k=1}^{K}{\frac{n_{k}}{\sum_{q=1}^K{n_q}}}f_k(w)} \label{eq:weights} \end{equation}
\noindent where each of the $K$ clients has $n_k$ samples and $\sum_{q=1}^K{n_q}$ is the total number of data points belonging to all clients.
The local objective function $f_k$ measures the empirical risk over client-$k$'s local dataset. Its $n_k$ samples are drawn from a distribution $\mathcal{P}_k$:
\begin{equation} f_k(w)=\mathbb{E}_{(x,y)\sim \mathcal{P}_k}[l(w;x,y)] \label{eq:loss} \end{equation} where the local loss function $l(w;x,y)$ measures the error of the model $w$ in predicting a true label $y$ given an input $x$.
FL was formalized by the algorithm FedAvg \cite{mcmahan2017communicationefficient}. In FedAvg, the server randomly initializes a global model $w_0$, typically a deep neural network. At round $t$, the server selects a subset $C_t$ of $C \cdot K \leq K$ clients that take part in training and sends them the current global model $w_{t-1}$. Each participant $k$ runs several epochs of minibatch stochastic gradient descent to minimize its local loss function. Afterwards, each client sends back to the server its update $\delta_{t}^{k}$, which is the difference between $w_{t-1}$ and the optimized local parameters $w_{t}^{k}$. Finally, the server averages the received updates to obtain the global model $w_t=w_{t-1}-\sum_{k \in C_t}{\lambda_k \delta_{t}^{k}}$ where $\lambda_k = \frac{n_k}{\sum_{q \in C_t} n_q}$ is the weight associated with client $k$, thereby concluding a round of collaborative learning. The aggregation rule thus gives more weight in the weighted sum to clients having a higher number of examples. The FL process consists of multiple successive rounds.
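As an illustration only, the following Python/NumPy sketch implements one FedAvg round on a toy least-squares objective; the helper names are ours and the model is a plain parameter vector rather than the deep networks considered in this paper.
\begin{verbatim}
# Minimal, illustrative sketch of one FedAvg round (toy least-squares model).
import numpy as np

def client_update(w_global, data, epochs=5, lr=0.01, batch_size=10):
    """Local mini-batch SGD; returns (delta, n_k) with delta = w_global - w_local."""
    X, y = data
    w = w_global.copy()
    for _ in range(epochs):
        idx = np.random.permutation(len(y))
        for start in range(0, len(y), batch_size):
            b = idx[start:start + batch_size]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w_global - w, len(y)

def fedavg_round(w_global, sampled_client_data):
    """Aggregate the received updates with weights n_k / sum_q n_q."""
    deltas, sizes = zip(*(client_update(w_global, d) for d in sampled_client_data))
    lambdas = np.array(sizes) / sum(sizes)
    return w_global - sum(l * d for l, d in zip(lambdas, deltas))
\end{verbatim}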
Throughout the learning process, the independent and identically distributed (IID) sampling of training data is a key point for training accurate models. It ensures that the stochastic gradient is an unbiased estimate of the full gradient. However, in FL scenarios where clients generate personal data from different locations and environments, it is unrealistic to assume that clients' local data is IID, i.e., each client's local data is uniformly sampled from the entire training dataset composed of the union of all local datasets. In non-IID scenarios, the global performance of FedAvg is severely degraded \cite{zhao2018federated} because the heterogeneity of data distribution across clients results in weight divergence during the collaborative training. At round $t$, the difference between the data distributions of two clients $i$ and $j$ causes the locally trained weights $w_t^i$ and $w_t^j$ to diverge and the convergence rate, precision and fairness of the federated model to degrade in comparison with homogeneous data. Figure \ref{fig:divergence} illustrates this phenomenon, simulating here a simple case of $1$-D linear regression in a FL context with two clients. In this toy problem, each client performs $10$ local epochs with $50$ samples, and the server executes $10$ global rounds. In the IID case, the clients collaborate to infer the same parameter equal to $45$. In the non-IID case, the parameters to be inferred by the first and second clients are $20$ and $70$ respectively. For the IID case, both clients' weights follow the same direction and converge to the \textit{same} optimum, whereas for the non-IID case, clients' weights point in different directions and make the global model converge to a parameter different from either of their own optima, namely the midpoint of the two parameters.
When non-IID is mentioned in the FL setting, it typically means that for two clients $i$ and $j$, $\mathcal{P}_i \neq \mathcal{P}_j$. Based on \cite{hsieh2020noniid,kairouz2021advances}, and knowing that for client $i$, $\mathcal{P}_i(x,y) = \mathcal{P}_i(y| x)\mathcal{P}_i(x) = \mathcal{P}_i(x| y) \mathcal{P}_i(y)$, different cases of non-IID data can be distinguished. Concept shift cases occur when conditional distributions vary across clients: \begin{itemize}
\item \textit{Concept shift on features:} marginal label distributions are shared $\mathcal{P}_i(y) = \mathcal{P}_j(y)$ but conditional distributions vary across clients $\mathcal{P}_i(x|y) \neq \mathcal{P}_j(x|y)$. This can arise in handwriting because some people might write \textquote{7} with or without a bar, so features might differ for the same label (digit). We can also refer to this case as \textquote{different features, same labels}.
\item \textit{Concept shift on labels:} marginal feature distributions are shared $\mathcal{P}_i(x) = \mathcal{P}_j(x)$ but label distributions conditioned on features vary across clients $\mathcal{P}_i(y|x) \neq \mathcal{P}_j(y|x)$. This can occur in sentiment analysis: for the same features, people can have different preferences (labels). This case can be referred to as \textquote{same features, different labels}. It is also illustrated by the non-IID case in Figure~\ref{fig:divergence} because, for the same inputs, i.e., features, the linear models will produce different outputs, i.e., labels, because their parameters are different (in this example the parameters are $20$ and $70$). \end{itemize}
This paper focuses on concept shift cases that can be addressed by clustering, deliberately omitting cases where marginal distributions vary across clients, because, on the one hand, machine learning is inherently robust to feature distribution skew ($\mathcal{P}_i(x) \neq \mathcal{P}_j(x)$ when $\mathcal{P}(y|x)$ is shared). Typically, one of the advantages of a convolutional neural network is its robustness to feature variations through convolution and pooling layers. On the other hand, clustering data with label distribution skew ($\mathcal{P}_i(y) \neq \mathcal{P}_j(y)$ when $\mathcal{P}(x|y)$ is shared) would group clients who only have a certain number of labels. Methods inspired by the incremental learning literature (FedProx \cite{Li2018}, SCAFFOLD \cite{Karimireddy2020} and SCAFFNEW \cite{mishchenko22}) are more suitable to address this latter case.
\begin{figure}
\caption{Linear regression example: evolution of clients' weights during federated training for IID and non-IID cases. We notice that non-IID clients' weights diverge, whereas IID clients converge to the same optimal weight.}
\label{fig:divergence}
\end{figure}
Another setting that can degrade the performance of a federated model is when it is attacked by malicious clients that try to poison the model during training-time \cite{mhamdi2018hidden}. Under the strong assumption that a malicious client $k$ has full knowledge of the aggregation rule used by the server and of the updates of others, it can make the aggregation result equal to an arbitrary value $U$ at any round $t$ by submitting the following update: \begin{equation} \delta_k^t = \frac{1}{\lambda_k} U - \sum_{i \in C_t, i\ne k}\frac{\lambda_i}{\lambda_k} \delta_i^t \label{eq:poison} \end{equation}
FedAvg, and more specifically the mean aggregation rule, is inherently vulnerable to these attacks, as shown by (\ref{eq:poison}). However, in practice clients do not have full knowledge of the system. That is why standard model poisoning attacks often consist of sending an update containing random weights, null weights, or, more efficiently, the opposite of the true weights. The impact of the attack can also be strengthened when several clients collude with each other. Such attacks can be considered as a form of data heterogeneity because the poisoned updates differ from the other updates, as they try to hinder the convergence of the global model.
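For illustration, the following sketch (hypothetical variable names) makes the omniscient update of (\ref{eq:poison}) concrete, together with the simpler attack that just sends the opposite of the true update.
\begin{verbatim}
# Illustrative sketch of model poisoning updates (names are ours).
def omniscient_poison(U, other_deltas, other_lambdas, lambda_k):
    """Eq. (poison): delta_k such that the aggregated update sum_i lambda_i*delta_i
    equals the arbitrary value U (assumes full knowledge of the other updates)."""
    others = sum(l * d for l, d in zip(other_lambdas, other_deltas))
    return (U - others) / lambda_k

def opposite_update(delta_k):
    """Simpler attack without knowledge of the system: send the opposite update."""
    return -delta_k
\end{verbatim}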
\section{Related work}\label{sec:work} Improving FL models while dealing with non-IID data is an active field of research. Most of the approaches in literature try to personalize the global FL model to improve performances of individual clients. In works based on transfer learning \cite{chen2021fedhealth,yu2020salvaging} and meta-learning \cite{jiang2019improving}, the global model is trained using FedAvg and afterwards each client fine-tunes the shared model using its local data. In multi-task learning, the clients' models are trained simultaneously by exploiting commonalities and differences across the learning tasks. MOCHA \cite{smith2018federated} uses the correlation matrix among tasks as a regularization term while FedEM \cite{marfoq21} considers that the data distribution of each client is a mixture of unknown but shared underlying distributions and uses the Expectation-Maximization algorithm for training.
\begin{figure}
\caption{An overview of Incremental Clustering. (a) The server initializes the model. (b) Clients perform local training and send their parameters to the server, who keeps them in memory. (c) It then computes the similarity between each pair of parameters it has access to and fills in the adjacency matrix. The parameters received at that round are then averaged and sent back to clients. For example, at round 1, clients 1 and 3 are sampled, so we can compute $S_1^{1,3}=S_1^{3,1}$. At round 2, clients 2 and 4 are sampled, which means we can compute $S_2^{2,4}=S_2^{4,2}$ but also $S_2^{1,2}, S_2^{2,3}, S_2^{1,4}$ and $S_2^{3,4}$. At round 3, since client 2 was already sampled, we replace its parameters kept in memory with the most recent ones. The similarities linked to client 2 are thus recomputed, and those of client 6 are computed as well.}
\label{fig:ic}
\end{figure}
FedAvg makes it possible to collaboratively learn a single model, while personalized approaches provide one model per client. We consider that clustering-based methods can be a relevant compromise between collaboration and personalization. Several works \cite{Ghosh,sattler2019clustered,Briggs} have already considered that it is possible to find a cluster structure in order to gather clients with similar data distributions and perform classical FedAvg training per cluster. Thus, there are as many models as clusters.
In \cite{Ghosh}, the number of clusters is estimated \textit{a priori} and each client is assigned to one of them before performing local training. Once the cluster of each client is identified, the server averages the parameters of each cluster separately. Determining each client's cluster requires high communication costs and may be unsuitable for large deep learning models.
Our work is closer to that of \cite{sattler2019clustered,Briggs}, which cluster clients based on their model parameters after FedAvg training. However, these approaches perform a communication round $T$ involving \textit{all} clients to build the clusters. In a cross-device setting where the number of clients is considerable \cite{hard2019federated}, this step is impractical in terms of clients' availability and communication costs. To prevent such a bottleneck and to adapt to real-world applications, our method takes advantage of the updates received during FedAvg rounds and builds an adjacency matrix incrementally as clients are sampled for training.
Concerning model attacks, existing methods try to prevent the influence of malicious clients by replacing the averaging step on the server side with robust estimates of the mean, such as the coordinate-wise median \cite{yin2021byzantinerobust} or Krum, an aggregation rule based on a score using couples of closest vectors \cite{blanchard2017machine}. However, these approaches remain robust to model poisoning attacks only as long as the proportion of adversaries that participate in each round of learning is strictly below 50$\%$ \cite{hu2021challenges}. In Section \ref{sec:Exp}, we will compare our cluster-based approach to the median aggregation rule method \cite{yin2021byzantinerobust}, an efficient defense which we will refer to as the \textit{median defense}.
\section{Incremental clustering} \label{algo}
Similarly to prior works, we tackle the issue of heterogeneous data by extending FedAvg and adding a clustering step to separate clients into groups. We next train them independently to reach homogeneous data performance. However, contrary to \cite{sattler2019clustered,Briggs}, we avoid performing the burdensome round mentioned in Section \ref{sec:work} by taking advantage of the local updates we already have access to at each round. Specifically, at round $t$ of FedAvg, when $|C_t|$ clients finish local training, each client $k$ sends to the server its update $\delta_t^k=w_{t-1}-w_t^k$. This is a good indicator of how clients' weights differ from the global model. As seen in Algorithm \ref{alg:IC} (l.\ref{line:s}) and Figure \ref{fig:ic}, these values are stored by the server in a matrix $M$ in order to fill in an adjacency matrix $S_t$ afterwards. This matrix contains the similarities between clients: $S_t=\left(s(\delta^i, \delta^j)\right)_{1 \leq i,j \leq K}$ where $s(\delta^i, \delta^j)$ is the similarity measure between the updates of clients $i$ and $j$. During the next round $t+1$, the server stores the new updates of clients belonging to $C_{t+1}$. In order to compute as many similarities between clients as possible, the server calculates $s$ between recent and previously stored updates kept in memory. To this end, it keeps the most recent update if a client has already been sampled and does not forget previously stored updates. If a client has never been sampled, the values of its corresponding row and column in $S_t$ will be equal to zero. Furthermore, if a similarity between two clients has already been computed, we keep the most recent one (see Figure \ref{fig:heatmaps}).
Once FedAvg stabilizes, the second part of Algorithm \ref{alg:IC} (l.\ref{line:louvain}) begins: we cluster clients by creating a graph from $S_t$ and applying the Louvain method \cite{Blondel_2008}. Once we have detected the different communities, which we call clusters, we resume FedAvg separately within each cluster. Within a cluster, we expect clients to have the same data distribution, so that performance should reach that of identically distributed data.
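A minimal sketch of this incremental construction is given below; it is illustrative only, assumes a NetworkX release that provides \texttt{louvain\_communities}, and uses the cosine-based similarity later defined in Eq.~(\ref{eq:sim}).
\begin{verbatim}
# Illustrative sketch of the incremental adjacency matrix and clustering step.
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

K = 100
S = np.zeros((K, K))   # adjacency (similarity) matrix S_t
M = {}                 # most recent update kept in memory for each sampled client

def similarity(d_i, d_j):
    # s = 1 + cos, positive and upper bounded by 2 (Eq. eq:sim)
    return 1.0 + float(d_i @ d_j) / (np.linalg.norm(d_i) * np.linalg.norm(d_j))

def incremental_update(sampled_updates):
    """Store the new updates and refresh all similarities involving sampled clients."""
    M.update(sampled_updates)          # keep only the most recent update per client
    for i in sampled_updates:
        for j in M:
            if i != j:
                S[i, j] = S[j, i] = similarity(M[i], M[j])

def cluster_clients():
    """Build a weighted graph from S and detect communities with the Louvain method."""
    G = nx.from_numpy_array(S)         # never-sampled clients have no edges
    return louvain_communities(G, weight="weight")
\end{verbatim}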
It should be noted that our incremental method adds two biases in comparison with approaches requiring all clients to take part in the same round: \begin{itemize}
\item The coefficients of $S_t$ are not all calculated at the same round because the matrix is filled in over the rounds. Thus, at round $t$, the computed similarities are added to $S_t$, which also contains similarities computed at previous rounds $\tau, \tau<t$.
\item $s$ is often computed for updates of different rounds. For instance, if client $i$ was sampled at round $t$ and client $j$ at round $\tau<t$, then $S_{t}^{i,j}$ will be equal to $s(\delta_{t}^i, \delta_\tau^j)$. \end{itemize}
\begin{figure}
\caption{Evolution of the similarity matrix through rounds for an example with four clusters and for parameters $K=100$ and $C=0.1$ throughout rounds $0$, $10$ and $49$. At round $0$ the matrix contains $10 \times 10$ non-zero values because only $10$ clients have been sampled. As the server samples more clients, the matrix fills up. Calculated similarities change during training if clients are re-sampled. At round $49$, one of the clients has never been sampled. Thus, the graph resulting from the similarity matrix of round $49$ will not contain that client in its nodes, and so the client will not be assigned to a cluster.}
\label{fig:heatmaps}
\end{figure}
In previous methods \cite{sattler2019clustered,Briggs}, similarities are computed within the same round. Typically, the similarity between clients $i$ and $j$ would be $s(\delta_{t}^i, \delta_t^j)$. We notice that the gap between our approach and the benchmark lies in the difference between the updates of client $j$ computed at different times. To clarify this, let us consider for simplicity that clients perform a single local epoch ($E=1$) of SGD with a learning rate $\alpha$. Common similarity measures (Euclidean, Manhattan, Minkowski,...) between two points are associated with distances defined as the $p$-norm of the difference between these points for a certain $p$.
Previous methods \cite{sattler2019clustered,Briggs} would compute distances between updates of two distinct clients calculated at the same round $t$ as follows: \begin{equation}
\|\delta_t^i -\delta_t^j\|_p =\|(w_{t-1}-w_t^i)-(w_{t-1}-w_t^j)\|_p =\|w_t^j-w_t^i\|_p \label{eq:them}\\ \end{equation}
However, our method computes distances of updates obtained at different rounds, for instance $t$ and $\tau<t$. Thus: \begin{equation} \begin{split}
\|\delta_{t}^i - \delta_\tau^j \|_p &= \|(w_{t-1}-w_t^i)-(w_{\tau-1}-w_\tau^j)\|_p \\
&= \|(w_{t-1}-w_{\tau-1})+(w_\tau^j-w_t^i)\|_p \label{eq:ours}\\
&= \|(w_{t-1}-w_{\tau-1})+(w_t^j-w_t^i)+(w_\tau^j-w_t^j)\|_p \\ \end{split} \end{equation}
To upper bound the difference between the norms of the benchmark (\ref{eq:them}) and ours (\ref{eq:ours}), we use the triangle inequality : \begin{equation} \begin{split}
\|\delta_{t}^i - \delta_\tau^j \|_p - \|\delta_t^i -\delta_t^j\|_p &\leq \|(w_{t-1}-w_{\tau-1})+(w_\tau^j-w_t^j)\|_p \\
&\leq \underbrace{\|(w_{t-1}-w_{\tau-1})\|_p}_{\text{(a)}} + \underbrace{\|(w_\tau^j-w_t^j)\|_p}_{\text{(b)}} \label{eq:diff}\\ \end{split} \end{equation}
As we will discuss in Section \ref{sec:Exp}, the difference between our method and previous ones mainly stems from the sampling of clients during the training process. To get a better sense of term (\ref{eq:diff}.a), we can notice by induction that: $$w_{t-1}=w_{t-2} - \sum_{k \in C_{t-1}} \lambda_k \delta_{t-1}^k =\ldots=w_{\tau-1} - \sum_{t'=\tau}^{t-1} \sum_{k \in C_{t'}} \frac{n_k}{\sum_{q \in C_{t'}} n_q} \delta_{t'}^k$$
Thus, if client $j$ was sampled at a round much earlier than client $i$, term (\ref{eq:diff}.a) can be large for two reasons. Firstly, within the same round $t'$, the heterogeneity of data causes divergence, as mentioned in Section \ref{sec:intro}. Moreover, the more rounds take place between times $\tau$ and $t$, the more terms are added. We can note in Figure \ref{fig:divergence} that the difference between global weights becomes more significant if the rounds are distant. However, if FedAvg reaches convergence, $w_{t-1}$ and $w_{\tau-1}$ will likely be similar, making term (\ref{eq:diff}.a) negligible. We can again notice in Figure \ref{fig:divergence} that at a stabilized state (presumably after round $8$ for this toy example), global weights will be comparable and thus their difference small.
Term (\ref{eq:diff}.b) also depends on when client $j$ was sampled. Although training is done on the same local data for client $j$, $w_\tau^j$ and $w_t^j$ can be significantly different if their starting points are distant.
Despite these biases, we use the updates received at each round to perform clustering because we consider that they contain relevant information about the clients' direction in the optimization process.
Given that we consider a cross-device context with a high number of clients, it is possible that during federated training not all clients will be sampled. If a client is never sampled, no value will be present in its corresponding row and column in $S_t$, and thus it will not be clustered. Following \cite{Ghosh}, to assign such a client to a group, we evaluate each cluster's model on its test data. We notice that the highest accuracy is obtained by the model of the cluster the client actually belongs to.
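A sketch of this assignment step (our own helper names) is the following.
\begin{verbatim}
# Illustrative sketch: assign a never-sampled client to the cluster whose model
# performs best on the client's own test data, following the idea of [Ghosh et al.].
import numpy as np

def assign_cluster(cluster_models, evaluate, client_test_data):
    """evaluate(model, data) -> accuracy; returns the index of the best cluster."""
    accuracies = [evaluate(m, client_test_data) for m in cluster_models]
    return int(np.argmax(accuracies))
\end{verbatim}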
Note that $S_t$ does not influence the FedAvg computation and that no additional communications between the server and the clients are performed beyond what classical FL requires. The central server performs the supplementary calculations due to the adjacency matrix. We consider that in a decentralized setting where the server's computational capacity is significantly higher than the clients', this extra work is not critical.
As we will see in Section \ref{sec:Exp}, our method can tackle the statistical challenge inherent to FL and find the correct clustering structure for different cases of non-IID data. It also avoids the step in which all clients send their updates to the server, which is very demanding in terms of communication costs.
Furthermore, by addressing the security challenge, our objective is to separate the updates of malicious and loyal clients and then to build a global model from a cluster containing only loyal clients. We show in Section \ref{sec:Exp} that our approach is a robust defense even if adversaries are in the majority. \begin{algorithm}[!t] \caption{FL through incremental clustering. $T$ rounds of FedAvg are performed before clustering and $T_f$ rounds after. $S_t$ is the adjacency matrix at round $t$ and matrix $M$ stores clients' updates.}\label{alg:IC} \begin{algorithmic}[1] \Procedure{FLIncrementalClustering}{$K$} \State initialize $w_0$ \For{$t$ in $t=1,...,T$} \State $C_t \gets$ random subset of the $K$ clients
\For{each client $k$ in $C_t$} \State $\delta^k_{t}, n_k$ = \small \textsc{ClientUpdate($w_{t-1}, k, E, B, \alpha$)} \State $M_k = \delta^k_{t}$ \Comment{Server stores update} \label{line:s} \EndFor
\State $w_{t}= w_{t-1} - \sum_{k \in C_t} \frac{n_k}{\sum_{q \in C_t}{n_q}}\delta^k_{t}$
\For{$i, j$ in $K$} \State $S_t^{i,j}=s(M_i, M_j)$ \Comment{Update $S_t$ matrix} \EndFor \EndFor
\State $P \gets \small \textsc{LouvainMethod}(S_T)$ \label{line:louvain}
\For{cluster $c$ in $P$} \label{line:louvain2} \State initialize server $c$ with weights $w_{c,T}=w_T$
\For{$t$ in $t=T+1,..., T+T_f$} \State $C_{c,t} \gets$ random subset of clients in cluster $c$
\For{each client $k$ in $C_{c,t}$} \State $\delta^k_{t}, n_k$ = \small \textsc{ClientUpdate}($w_{c,t-1}, k, E, B, \alpha$) \EndFor
\State $w_{c,t}= w_{c,t-1} - \sum_{k \in C_{c,t}} \frac{n_k}{\sum_{q \in C_{c,t}}{n_q}}\delta^k_{t}$ \EndFor \EndFor \label{line:louvain3} \EndProcedure
\Procedure{ClientUpdate}{$w, k, E, B, \alpha$} \State initialize $w_k= w$ \For{$e$ in $e=1,..., E$} \State Divide $n_k$ samples into batches of size $B$ : set $\mathcal{B}$
\For{b in $\mathcal{B}$} \State $w_k = w_k - \alpha \nabla l(w_k, b)$ \EndFor \EndFor \State $\delta_k = w-w_k$ \State \Return $(\delta_k, n_k)$ \Comment{\small Send update and number of samples} \EndProcedure \end{algorithmic} \end{algorithm} \section{Experiments and discussion} \label{sec:Exp}
\subsection{Dataset and model} \label{Datasets}
We propose to use the Louvain community detection method \cite{Blondel_2008} for clustering. Contrary to \cite{Ghosh,sattler2019clustered,Briggs}, we want to avoid having a cluster-dependent parameter because we assume that we do not know \textit{a priori} the cluster structure or the number of clusters we can expect. Louvain is a greedy algorithm that does not require such a parameter. The basic Louvain algorithm considers graphs with positive weights. To this end, we define the following similarity, which is positive, upper bounded by $2$, and based on the cosine similarity as in \cite{sattler2019clustered}; it gives a good sense of the \textit{direction} gradients take during training:
\begin{equation} \label{eq:sim} s(\delta^i,\delta^j)=1+\cos(\delta^i,\delta^j) \end{equation} Our approach is however agnostic to the clustering method and could be applied with other clustering algorithms.
We carry out two types of experiments. The first ones simulate non-IID cases by artificially forming groups of clients with similar properties. These cases are \textit{verifiable} in the sense that after we apply our method, we can verify if clients following the same data distributions are effectively grouped together. We will refer to this as the \textit{correct} clustering. Experiments are done with the MNIST dataset \cite{lecun2010mnist} dedicated to identifying handwritten digits from pixel data. The dataset is partitioned into 100 clients, each having 600 training samples and 100 test samples. We simulate a user experience context with heterogeneous data by forming groups of clients following different data distributions. The non-IID case \textquote{same label, different features} is simulated by partitioning the data into four groups. Within each group, the images are rotated by 90 degrees.
We will refer to this experiment as \textit{image rotation}. Similarly, for the non-IID case \textquote{same features, different labels}, we partition the clients into 5 groups and within each group two digit labels are swapped. This will be referred to as \textit{label swap}. For instance, a client of the first group will have 0 and 1 images labeled with 1 and 0 respectively. A client of the second group will have the correct labels for 0 and 1 images but will have labels 3 and 2 for 2 and 3 images.
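As an illustration, the following sketch (our own helper names; one plausible reading where group $g$ rotates images by $g \times 90$ degrees and swaps labels $2g$ and $2g+1$) shows how such partitions can be simulated from a list of \texttt{(image, label)} pairs.
\begin{verbatim}
# Illustrative sketch of the two simulated non-IID partitions (assumed reading:
# group g rotates by g*90 degrees; group g swaps labels 2g and 2g+1).
import numpy as np

def label_swap(dataset, group):
    """Label swap case: e.g. group 0 relabels 0 <-> 1, group 1 relabels 2 <-> 3."""
    a, b = 2 * group, 2 * group + 1
    swap = {a: b, b: a}
    return [(x, swap.get(y, y)) for x, y in dataset]

def image_rotation(dataset, group):
    """Image rotation case: every image of group g is rotated by g * 90 degrees."""
    return [(np.rot90(x, k=group), y) for x, y in dataset]
\end{verbatim}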
The second type of experiments concerns the application of our method as a defense against model poisoning. We continue to use the MNIST dataset \cite{lecun2010mnist} split into $100$ clients, of which a certain number will be attackers. An attacker is a malicious client that sends $-\delta_t^k$, \textit{i.e.} the opposite of its true update, in order to cause the global model to diverge. This attack will be referred to as the \textit{minus grad attack}. The more clients are attackers, the more the global model will be perturbed. We set the strength of the attack by varying the number of adversaries and we run experiments for $30$, $40$, $50$ and $60$ malicious clients out of the $100$ total clients. We implement an existing defense that replaces the typical mean aggregation by a median aggregation \cite{yin2021byzantinerobust} in order to compare its results to those of FLIC as a defense. As in the previous experiments, we can also speak of a \textit{correct} clustering if all malicious clients are separated from loyal clients. It should be noted that our method is agnostic to the aggregation rule. In our experiments we use the weighted mean, but a median aggregation could be used in order to combine both defenses.
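For reference, the coordinate-wise median aggregation of \cite{yin2021byzantinerobust} used as a baseline can be sketched as follows (illustrative; updates are flattened parameter vectors).
\begin{verbatim}
# Illustrative sketch of the median defense: replace FedAvg's weighted mean of
# the received updates by a coordinate-wise median.
import numpy as np

def median_aggregate(w_prev, deltas):
    return w_prev - np.median(np.stack(deltas), axis=0)
\end{verbatim}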
Following \cite{mcmahan2017communicationefficient}, we build a convolutional neural network composed of two convolutional layers followed by a fully connected layer with a ReLU activation and a final softmax output layer. This architecture is used by both the server and the clients. We perform SGD with a learning rate of $\alpha=0.01$. Unless specified otherwise, the number of local epochs $E$ and the size of mini-batches $B$ are 5 and 10 respectively.
We used GPUs with the following specifications: Intel Skylake AVX512 support, 192 GB of RAM and at least 48 threads. The wall-clock time of computation for parameters $E=5$, $B=10$, $C=0.1$ and $200$ rounds was on average one hour.
\subsection{Results on the statistical challenge} In this section, we conduct two sets of experiments, each repeated 20 times to reduce randomness. During our first set of experiments, we assess the performance of our method when clients have heterogeneous data. Firstly, 10\% of the clients are randomly sampled at each round. Then, at round 200 when FedAvg reaches convergence, we cluster clients using the adjacency matrix built during the 200 rounds.
After clustering, each group performs 5 rounds of FedAvg.
\begin{figure}
\caption{(a) FedAvg accuracy for IID, label swap and image rotation cases before and after clustering. (b)-(c) Focus on clusters' accuracies after training for 5 rounds for label swap and image rotation cases respectively. The displayed results are obtained by evaluating the servers' models on test images of clients managed by them. We display the mean result of all experiments with $0.95$ confidence interval.}
\label{fig:res1}
\end{figure}
\begin{table}[t!] \centering
\begin{tabular}{c|c|c} \hline
\multicolumn{1}{c|}{}& Pre-clustering & Post-clustering \\ \hline IID data & \multicolumn{2}{c}{$0.99\pm 0.01$} \\ \hline Label swap & $0.75\pm 0.075$ & $0.99 (\times1.32)\pm 0.011$ \\ \hline Image rotation & $0.93\pm 0.051$ & $0.98(\times1.05)\pm 0.013$ \\ \hline \end{tabular} \caption{Accuracy evolution before and after clustering. We display the mean of clients' accuracies $\pm$ the standard deviation and in parenthesis the relative increase from before clustering.} \label{tab:prepost} \end{table}
For both non-IID cases, all clients were correctly clustered as in \cite{Briggs} but without the necessity of sampling all clients at round $200$. Figure \ref{fig:res1} shows that after clustering, the mean accuracy of the models increases by $24$ and $5$ percentage points for the label swap case and the image rotation case respectively. We reach the same performance as in the IID case, which shows our method is well adapted to tackle the problem of heterogeneous data. Finally, Table \ref{tab:prepost} indicates that the variability between clients' results has decreased because the standard deviation after clustering decreases. This contributes to improving the fairness of the federated models.
\begin{figure}
\caption{Mean FedAvg accuracy with $0.95$ confidence interval for parameters $E=3$ and $B=50$ over $100$ rounds (blue curve and outer right axis). Number of clusters (violet curve and left axis) and cluster purity (green curve and inner right axis) if clustering is done with the similarity matrix of the round indicated on the abscissa.}
\label{fig:conv}
\end{figure}
For our second set of experiments (Figure \ref{fig:conv}), we evaluate the capability of our method to correctly cluster clients with heterogeneous data at early rounds, i.e., before convergence.
To this end, we cluster clients at every round, without performing the per-cluster training of Algorithm \ref{alg:IC} (l.\ref{line:louvain2}-\ref{line:louvain3}). We focus only on the label swap case and change parameters $E$ and $B$ to $3$ and $50$ in order to slow down the convergence rate and to better analyze the behavior of Algorithm \ref{alg:IC}.
In Figure \ref{fig:conv}, we plot in green the cluster purity. We define this quantity as the percentage of clients grouped with others who follow the same data distributions. If it is equal to $1$, it means that FLIC formed groups of clients who all follow the same data distribution.
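A sketch of this purity computation (our own names) is given below.
\begin{verbatim}
# Illustrative sketch: fraction of clients whose whole cluster shares their data
# distribution ('distribution_of' maps a client id to its ground-truth group).
def cluster_purity(clusters, distribution_of):
    total = sum(len(c) for c in clusters)
    pure = sum(len(c) for c in clusters
               if len({distribution_of[k] for k in c}) == 1)
    return pure / total
\end{verbatim}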
We notice in Figure \ref{fig:conv} that at the beginning of the training, the cluster purity is equal to $1$ and the number of clusters represented by the violet curve starts at $10$.
At the beginning of the training, the graph is small because not many clients have yet been sampled. For instance at round $1$, since $C=0.1$, the graph contains only $10$ nodes with similar edge weights. The Louvain method does not find clear communities among the nodes and forms $10$ communities, each containing a single client, which explains the obtained purity.
As new clients are sampled and trained, more communities are found, but they sometimes contain clients of different data distributions, which makes the cluster purity decrease from its optimal value $1$. Before round $34$, the lowest round for which a client $k$ was sampled is on average equal to $1$, i.e., $\delta_1^k$ is used for the update of the adjacency matrix. Some clients have thus performed few epochs of local training and their updates are not clearly distinguishable. After round $34$, cluster purity reaches $1$, so training per cluster can enhance performance since the clients of each group follow the same data distribution. Yet, these partitions are not optimal as not \textit{all} clients with the same distributions are grouped together. As shown in Figure \ref{fig:conv} by the violet curve, starting from round $49$, five pure groups are found, making the clustering correct. At this point, the mean lowest round at which a client was sampled is equal to $6$. Most clients thus have updates that are representative of their local objective functions.
We can thus leverage the information carried by the clients' updates during federated training even if they are computed at an early stage of their local optimization.
In practice, the updates sent by clients at early rounds could be ignored for the clustering.
\begin{figure}
\caption{Minus grad attack by $30$, $40$, $50$ and $60$ attackers out of $100$ total clients, defended by the median defense and FLIC. The displayed results are obtained by evaluating the servers' models on test images of the clients managed by them. For the FLIC defense, clusters containing malicious clients are evaluated on clean images, which explains the drop in accuracy for them. We display the mean result of all experiments with $0.95$ confidence interval.}
\label{fig:attacks}
\end{figure} \subsection{Results on the security challenge}
\begin{table}[!t] \begin{center} \resizebox{\columnwidth}{!}{
\begin{tabular}{ c || c| c| c| c}
\hline
& 30 attackers & 40 attackers & 50 attackers & 60 attackers\\
\hline
\hline
FedAvg without attack & \multicolumn{4}{c}{$0.99\pm 0.01$}\\
\hline
\hline
FedAvg with attack & $0.94\pm0.003$ & $0.91\pm0.005$ & $0.14\pm0.14$ & $0.1\pm0.00$ \\
\hline
\hline
Median defense & $0.98\pm0.002$ & $0.57\pm0.423$ & $0.10\pm0.004$ &$0.1\pm0.00$ \\
\hline
\hline
FLIC - loyal clusters & $0.95\pm0.012$ & $0.94\pm0.004$ & $0.91\pm0.006$ & $0.97\pm0.004$\\ \hline Mean number of loyal clusters & $3.5$ & $1.09$ & $1$ & $1.1$\\ \hline Mean number of clients in loyal clusters & $20$ & $55$ & $50$ & $40$\\ \hline \end{tabular} } \end{center} \caption{Effect of the minus grad attack on IID data and performance of both the median and FLIC defenses according to the number of total malicious clients. For every method we display the best accuracy of the last 50 rounds $\pm$ the corresponding standard deviation. Results on the FLIC defense concern only clusters of loyal clients. The displayed results are the accuracy after clustering, the mean number of loyal clusters and the mean number of clients inside a loyal cluster.} \label{tab:att_pur} \end{table}
We again run $20$ experiments and randomly sample $10\%$ of the clients at each round, with local parameters $E=1$ and $B=50$. We implement the attack mentioned in Subsection \ref{sec:Exp}.A, which is referred to as the \textit{minus grad attack}. We evaluate our method FLIC as a defense and compare our results with the coordinate-wise median aggregation rule defense \cite{yin2021byzantinerobust}.
During FLIC training, we expect that after clustering, malicious clients are separated from loyal ones. The results in Figure \ref{fig:attacks} and Table \ref{tab:att_pur} are evaluated on clients' test data. A malicious client will thus have poor performance if it is evaluated on its own corrupted model. If FLIC is a robust defense, its plot should split into two curves at round $T$, one representing loyal clients reaching adequate performance, and the other representing malicious clients dropping to a poor accuracy.
All simulations perform $300$ total rounds for the minus grad attack and the median defense. For the experiments on FLIC, the simulations with $30$, $40$ and $50$ attackers run $T=200$ rounds before clustering and $T_f=100$ rounds after clustering. We decided to perform more rounds after clustering than in the previous experiments in order to see if the new model, cleaned of malicious clients, can reach the performance of models that have not been attacked.
As we can see in Figure \ref{fig:attacks} and Table \ref{tab:att_pur}, when the number of malicious clients is not high, for instance $30$, the median defense is robust and slightly outperforms FLIC. However, when the number of attackers is equal to $40$, the median defense struggles to learn the task at the beginning of training and then fails after approximately $100$ rounds. As mentioned, the median defense is robust only if the majority of clients are loyal \cite{hu2021challenges}. At a given round, since $40\%$ of the clients are attackers, it is possible that out of the $10$ sampled clients, more than $5$ will be attackers, which makes the median defense fail. The variability of the median defense for $40$ attackers in Figure \ref{fig:attacks} is thus due to the changing number of attackers at each round and for every simulation. On the contrary, as FLIC correctly separates clients at round $T=200$, loyal clients reach good performance again.
When the proportion of attackers is $50\%$ or higher, the effects of the minus grad attack become predominant and the median defense collapses. FLIC nevertheless manages to correctly separate clients. For $50$ attackers, the new model after clustering reaches good performance in less than $100$ rounds.
For the experiment with $60$ attackers, we performed fewer rounds before clustering ($T=50$) because we noticed that if done later (for instance at round $T=200$ as in the other experiments), even though FLIC successfully separated malicious clients from the rest, loyal clients restarted training with a model that was too degraded by the attack and could not learn the objective task in the remaining rounds. If clustering is performed at round $T=50$, the new model of loyal clients has a lower convergence rate, but the starting point of loyal clients' training is not as distant from their objective as before ($T=200$) and they can still rebuild an efficient model.
The cluster purity defined in Subsection \ref{sec:Exp}.B now reflects the capability of FLIC to correctly separate malicious from loyal clients. If a malicious client is grouped with loyal clients, it decreases the method's purity. In all of our experiments, FLIC clusters clients correctly (purity equal to $1$).
In Table \ref{tab:att_pur} we display the mean number of loyal clusters and the mean number of clients in loyal clusters. Ideally, \textit{all} loyal clients should be grouped in one single cluster, in order to enhance the collaborative characteristic of FL. With $30$ attackers, loyal clients are generally not grouped all together but rather in small clusters of $20$ clients. Under this attack, the model can still learn the training task, and thus clients can more easily learn their own local objective before sending their updates to the server. Their updates are thus different from the attackers' ones, but not sufficiently similar to one another to be grouped in a single cluster. For the rest of the experiments, loyal clients are generally grouped together because the attacked model is distant not only from the global objective but also from all local objectives. Clients thus have more difficulty reaching their local objectives, and their updates resemble one another more because they begin the new training from the same distant starting point.
\section{Conclusion and future work} In this work, we apply clustering techniques to FL under heterogeneous data. During FedAvg, we exploit the available information sent by the sampled clients at each round to compute similarities between clients incrementally, which enables us to cluster clients without requiring \textit{all} of them to send their parameters in the same round. Our method is especially advantageous in a cross-device context where the number of clients is large and the communication costs between them and the server are very high. We empirically show on a variety of non-IID settings that the obtained groups reach the performance of IID data. We also obtain partitions that effectively group clients following similar data distributions, provided most clients have performed enough rounds of local optimization. We also address attacks on models as a form of data heterogeneity and apply our method as a defense technique that separates malicious clients from the rest of the clients. We show that our method is a robust defense even when the malicious clients are in the majority, whereas existing methods fail to protect models in this case. Ongoing work consists of providing convergence proofs for our method. Other relevant future work is to check the adaptability of our method in differential privacy contexts where noise is added to parameters to reinforce clients' confidentiality \cite{mcmahan2017learning}.
\end{document} |
\begin{document}
\title{Interlaced Greedy Algorithm for Maximization of Submodular Functions in Nearly Linear Time}
\author{Alan Kuhnle \\ Department of Computer Science \\ Florida State University \\ \texttt{[email protected]}}
\maketitle \begin{abstract} A deterministic approximation algorithm is presented for the maximization of non-monotone submodular functions over a ground set of size $n$ subject to cardinality constraint $k$; the algorithm is based upon the idea of interlacing two greedy procedures. The algorithm uses interlaced, thresholded greedy procedures to obtain tight ratio $1/4 - \epsi$ in $O \left( \frac{n}{\epsi} \log \left( \frac{k}{\epsi} \right) \right)$ queries of the objective function, which improves upon both the ratio and the quadratic time complexity of the previously fastest deterministic algorithm for this problem. The algorithm is validated in the context of two applications of non-monotone submodular maximization, on which it outperforms the fastest deterministic and randomized algorithms in prior literature. \end{abstract} \section{Introduction} A nonnegative function $f$ defined on subsets of a ground set $U$ of size $n$ is \emph{submodular} iff for all $A,B \subseteq U$, $x \in U \setminus B$, such that $A \subseteq B$, it holds that $ f\left(B \cup x \right) - f(B) \le f\left(A \cup x \right) - f(A).$ Intuitively, the property of submodularity captures diminishing returns. Because of a rich variety of applications, the maximization of a nonnegative submodular function with respect to a cardinality constraint (MCC) has a long history of study \citep{Nemhauser1978}. Applications of MCC include viral marketing \citep{Kempe2003}, network monitoring \citep{Leskovec2007}, video summarization \citep{Mirzasoleiman2018}, and MAP Inference for Determinantal Point Processes \citep{Gillenwater2012}, among many others. In recent times, the amount of data generated by many applications has been increasing exponentially; therefore, linear or sublinear-time algorithms are needed.
If a submodular function $f$ is monotone\footnote{The function $f$ is monotone if for all $A \subseteq B$, $f(A) \le f(B)$.}, greedy approaches for MCC have proven effective and nearly optimal, both in terms of query complexity and approximation factor: subject to a cardinality constraint $k$, a simple greedy algorithm gives a $(1 - 1/e)$ approximation ratio in $O(kn)$ queries \citep{Nemhauser1978}, where $n$ is the size of the instance. Furthermore, this ratio is optimal under the value oracle model \citep{Nemhauser1978a}. \citet{Badanidiyuru2014} sped up the greedy algorithm to require $O\left( \frac{n}{\epsi} \log \frac{n}{\epsi} \right)$ queries while sacrificing only a small $\epsi > 0$ in the approximation ratio, while \citet{Mirzasoleiman2014} developed a randomized $(1 - 1/e - \epsi)$ approximation in $O(n / \epsi)$ queries.
When $f$ is non-monotone, the situation is very different; no subquadratic deterministic algorithm has yet been developed. Although a linear-time, randomized $(1/e - \epsi)$-approximation has been developed by \citet{Buchbinder2015a}, which requires $O\left( \frac{n}{\epsi^2} \log \frac{1}{\epsi} \right)$ queries, the performance guarantee of this algorithm holds only in expectation. A derandomized version of the algorithm with ratio $1/e$ has been developed by \citet{Buchbinder2018} but has time complexity $O(k^3 n )$. Therefore, in this work, an emphasis is placed upon the development of nearly linear-time, deterministic approximation algorithms.
\subsection*{Contributions} The deterministic approximation algorithm InterlaceGreedy (Alg. \ref{alg:tandem}) is provided for maximization of a submodular function subject to a cardinality constraint (MCC). InterlaceGreedy achieves ratio $1/4$ in $O(kn)$ queries to the objective function. A faster version of the algorithm is formulated in FastInterlaceGreedy (Alg. \ref{alg:fast-tandem}), which achieves ratio $(1/4 - \epsi)$ in $O\left( \frac{n}{\epsi} \log \frac{k}{\epsi} \right)$ queries.
In Table \ref{table:cc}, the relationship is shown to the fastest deterministic and randomized algorithms for MCC in prior literature.
Both algorithms operate by interlacing two greedy procedures together in a novel manner; that is, the two greedy procedures alternately select elements into disjoint sets and are disallowed from selection of the same element. This technique is demonstrated first with the interlacing of two standard greedy procedures in InterlaceGreedy, before interlacing thresholded greedy procedures developed by \citet{Badanidiyuru2014} for monotone submodular functions to obtain the algorithm FastInterlaceGreedy.
The algorithms are validated in the context of cardinality-constrained maximum cut and social network monitoring, which are both instances of MCC. In this evaluation, FastInterlaceGreedy is more than an order of magnitude faster than the fastest deterministic algorithm \citep{Gupta2010} and is both faster and obtains better solution quality than the fastest randomized algorithm \citep{Buchbinder2015a}. The source code for all implementations is available at \url{https://gitlab.com/kuhnle/non-monotone-max-cardinality}. \begin{table} \caption{Fastest algorithms for cardinality constraint} \centering \label{table:cc}
\begin{tabular}{ |c|c|c|c| }
\hline
\textbf{Algorithm} & \textbf{Ratio} & \textbf{Time complexity} & \textbf{Deterministic?} \\ \hline
FastInterlaceGreedy (Alg. \ref{alg:fast-tandem}) & $1/4 - \epsi$ & $O\left( \frac{n}{\epsi} \log \frac{k}{\epsi} \right)$ & Yes \\ \hline
\citet{Gupta2010} & $1/6 - \epsi$ & $O \left(nk + \frac{n}{\epsi}\right)$ & Yes \\ \hline
\citet{Buchbinder2015a} & $1/e - \epsi$ & $O\left( \frac{n}{\epsi^2} \log \frac{1}{\epsi} \right)$ & No \\
\hline \end{tabular}
\end{table}
\paragraph{Organization} The rest of this paper is organized as follows. Related work and preliminaries on submodular optimization are discussed in the rest of this section.
In Section \ref{sec:interlace}, InterlaceGreedy and FastInterlaceGreedy are presented and analyzed. Experimental validation is provided in Section \ref{sec:exp}.
\subsection*{Related Work} \label{sec:rw} The literature on submodular optimization comprises many works. In this section, a short review of relevant techniques is given for MCC; that is, maximization of non-monotone, submodular functions over a ground set of size $n$ with cardinality constraint $k$. For further information on other types of submodular optimization, interested readers are directed to the survey of \citet{Buchbinder2018a} and references therein.
A deterministic local search algorithm was developed by \citet{Lee2010a}, which achieves ratio $1/4 - \epsi$ in $O(n^4 \log n)$ queries. This algorithm runs two approximate local search procedures in succession. By contrast, the algorithm FastInterlaceGreedy employs interlacing of greedy procedures to obtain the same ratio in $O\left( \frac{n}{\epsi} \log \frac{k}{\epsi} \right)$ queries. In addition, a randomized local search algorithm was formulated by \citet{Vondrak2013}, which achieves ratio $\approx 0.309$ in expectation.
\citet{Gupta2010} developed a deterministic, iterated greedy approach, wherein two greedy procedures are run in succession and an algorithm for unconstrained submodular maximization is employed. This approach requires $O(nk)$ queries and has ratio $1/(4 + \alpha)$, where $\alpha$ is the inverse ratio of the employed subroutine for unconstrained, non-monotone submodular maximization; under the value query model, the smallest possible value for $\alpha$ is 2, as shown by \citet{Feige2011a}, so this ratio is at most $1/6$. The iterated greedy approach of \citet{Gupta2010} first runs one standard greedy algorithm to completion, then starts a second standard greedy procedure; this differs from the interlacing procedure which runs two greedy procedures concurrently and alternates between the selection of elements. The algorithm of \citet{Gupta2010} is experimentally compared to FastInterlaceGreedy in Section \ref{sec:exp}. The iterated greedy approach of \citet{Gupta2010} was extended and analyzed under more general constraints by a series of works: \citet{Mirzasoleiman2016,Feldman2017,Mirzasoleiman2018}.
An elegant randomized greedy algorithm of \citet{Buchbinder2014} achieves expected ratio $1/e$ in $O(kn)$ queries for MCC; this algorithm was derandomized by \citet{Buchbinder2018}, but the derandomized version requires $\oh{k^3n}$ queries. The randomized version was sped up in \citet{Buchbinder2015a} to achieve expected ratio $1/e - \epsi$ and require $O\left( \frac{n}{\epsi^2} \log \frac{1}{\epsi} \right)$ queries. Although this algorithm has better time complexity than FastInterlaceGreedy, the ratio of $1/e - \epsi$ holds only in expectation, which is much weaker than a deterministic approximation ratio. The algorithm of \citet{Buchbinder2015a} is experimentally evaluated in Section \ref{sec:exp}.
Recently, an improvement in the adaptive complexity of MCC was made by \citet{Balkanskia}. Their algorithm, BLITS, requires $O \left( \log^2 n \right)$ adaptive rounds of queries to the objective, where the queries within each round are independent of one another and thus can be parallelized easily. Previously the best adaptivity was the trivial $O(n)$. However, each round requires $\Omega( OPT^2 )$ samples to approximate expectations, which for the applications evaluated in Section \ref{sec:exp} is $\Omega( n^4 )$. For this reason, BLITS is evaluated as a heuristic in comparison with the proposed algorithms in Section \ref{sec:exp}. Further improvements in adaptive complexity have been made by \citet{Fahrbach2018a} and \citet{Ene2019}.
Streaming algorithms for MCC make only one or a few passes through the ground set. Streaming algorithms for MCC include those of \citet{Chekuri2015,Feldman2018,Mirzasoleiman2018}. A streaming algorithm with low adaptive complexity has recently been developed by \citet{Kazemi2019}. In the following, the algorithms are allowed to make an arbitrary number of passes through the data.
Currently, the best approximation ratio of any algorithm for MCC is $0.385$ of \citet{Buchbinder2016}. Their algorithm also works under a more general constraint than cardinality constraint; namely, a matroid constraint. This algorithm is the latest in a series of works (e.g. \citep{Naor2011,Ene2016a}) using the multilinear extension of a submodular function, which is expensive to evaluate.
\subsection*{Preliminaries} \label{sec:prelim} Given $n \in \mathbb N$, the notation $[n]$ is used for the set $\{0,1,\ldots,n-1\}$. In this work, functions $f$ with domain all subsets of a finite set are considered; hence, without loss of generality, the domain of the function $f$ is taken to be $2^{[n]}$, which is all subsets of $[n]$. An equivalent characterization of submodularity is that for each $A, B \subseteq [n]$, $f( A \cup B ) + f( A \cap B ) \le f(A) + f(B)$. For brevity, the notation $f_x( A )$ is used to denote the marginal gain $f(A \cup \{ x \}) - f(A)$ of adding element $x$ to set $A$.
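As a concrete example of such a value oracle, the following Python sketch (names are ours) implements the non-monotone submodular cut function used in the experiments of Section \ref{sec:exp}, together with the marginal gain $f_x(A)$.
\begin{verbatim}
# Illustrative value oracle: the cut function is nonnegative, submodular and
# non-monotone; marginal_gain computes f_x(A) = f(A + x) - f(A).
def cut_oracle(edges):
    def f(A):
        return sum(1 for u, v in edges if (u in A) != (v in A))
    return f

def marginal_gain(f, A, x):
    return f(A | {x}) - f(A)

# tiny usage example on a 4-cycle
f = cut_oracle([(0, 1), (1, 2), (2, 3), (3, 0)])
print(f({0}), marginal_gain(f, {0}, 2))   # prints: 2 2
\end{verbatim}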
In the following, the problem studied is to maximize a submodular function under a cardinality constraint (MCC), which is formally defined as follows. Let $f:2^{[n]} \to \reals$ be submodular; let $k \in [n]$. Then the problem is to determine
$$ \argmax_{A \subseteq [n] : |A| \le k } f(A). $$
An instance of MCC is the pair $(f,k)$; however, rather than an explicit description of $f$, the function $f$ is accessed by a value oracle; the value oracle may be queried on any set $A \subseteq [n]$ to yield $f(A)$. The efficiency or runtime of an algorithm is measured by the number of queries made to the oracle for $f$.
Finally, without loss of generality, instances of MCC considered in the following satisfy $n \ge 4k$. If this condition does not hold, the function may be extended to $[m]$, for some $m \ge 4k$, by adding dummy elements to the domain which do not change the function value. That is, the function $g:2^{[m]} \to \reals$ is defined as $g( A ) = f( A \cap [n] )$; it may be easily checked that $g$ remains submodular, and any possible solution to the MCC instance $(g,k)$ maps\footnote{The mapping is to discard all elements not in $[n]$.} to a solution of $(f,k)$ of the same value. Hence, the ratio of any solution to $(g,k)$ to the optimal is the same as the ratio of the mapped solution to the optimal on $(f,k)$. \section{Approximation Algorithms} \label{sec:interlace} In this section, the approximation algorithms based upon interlacing greedy procedures are presented. In Section \ref{sec:slowInterlace}, the technique is demonstrated with standard greedy procedures in algorithm InterlaceGreedy. In Section \ref{sec:fig}, the nearly linear-time algorithm FastInterlaceGreedy is introduced. \subsection{The InterlaceGreedy Algorithm} \label{sec:slowInterlace} In this section, the InterlaceGreedy algorithm (InterlaceGreedy\xspace, Alg. \ref{alg:tandem}) is introduced. InterlaceGreedy\xspace takes as input an instance of MCC and outputs a set $C$.
\begin{algorithm}
\caption{InterlaceGreedy\xspace$(f,k)$: The InterlaceGreedy Algorithm}
\label{alg:tandem}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} $f : 2^{[n]} \to \reals$,
$k \in [n]$
\STATE {\bfseries Output:} $C \subseteq [n]$, such that $|C| \le k$.
\STATE $A_0 \gets B_0 \gets \emptyset$
\FOR{$i \gets 0$ to $k - 1$}
\STATE $a_i \gets \argmax_{x \in [n] \setminus (A_i \cup B_i)} f_x( A_{i} )$
\STATE $A_{i+1} \gets A_i + a_i$
\STATE $b_i \gets \argmax_{x \in [n] \setminus (A_{i+1} \cup B_i)} f_x( B_{i} )$
\STATE $B_{i+1} \gets B_i + b_i$
\ENDFOR
\STATE $D_1 \gets E_1 \gets \{ a_0 \}$
\FOR{$i \gets 1$ to $k - 1$}
\STATE $d_i \gets \argmax_{x \in [n] \setminus (D_i \cup E_i)} f_x( D_{i} )$
\STATE $D_{i+1} \gets D_i + d_i$
\STATE $e_i \gets \argmax_{x \in [n] \setminus (D_{i+1} \cup E_i)} f_x( E_{i} )$
\STATE $E_{i+1} \gets E_i + e_i$
\ENDFOR
\STATE \textbf{return} $C \gets \argmax \{f(A_i), f(B_i), f(D_i), f(E_i) : i \in [k + 1] \}$ \end{algorithmic} \end{algorithm} InterlaceGreedy\xspace operates by interlacing two standard greedy procedures. This interlacing is accomplished by maintaining two disjoint sets $A$ and $B$, which are initially empty. For $k$ iterations, the element $a \not \in B$ with the highest marginal gain with respect to $A$ is added to $A$, followed by an analogous greedy selection for $B$; that is, the element $b \not \in A$ with the highest marginal gain with respect to $B$ is added to $B$. After the first set of interlaced greedy procedures complete, a modified version is repeated with sets $D,E$, which are initialized to the maximum-value singleton $\{ a_0 \}$. Finally, the algorithm returns the set with the maximum $f$-value of any query the algorithm has made to $f$.
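For illustration only, the following Python transcription of Alg.~\ref{alg:tandem} (the ground set is \texttt{range(n)} and $f$ is a set-function oracle such as the cut function sketched in the preliminaries) returns the best prefix encountered, matching the final argmax of the pseudocode; it assumes $n \ge 4k$ so that candidates never run out.
\begin{verbatim}
# Illustrative transcription of InterlaceGreedy (Alg. 1); O(kn) oracle queries
# up to constant factors, not an optimized implementation.
def interlace_greedy(f, n, k):
    best_set, best_val = set(), f(set())

    def record(S):
        nonlocal best_set, best_val
        v = f(S)
        if v > best_val:
            best_set, best_val = set(S), v

    def greedy_pick(base, excluded):
        f_base = f(base)
        cands = [x for x in range(n) if x not in excluded]
        return max(cands, key=lambda x: f(base | {x}) - f_base)

    A, B = set(), set()
    a0 = None
    for i in range(k):
        a = greedy_pick(A, A | B); A.add(a); record(A)
        if i == 0:
            a0 = a
        b = greedy_pick(B, A | B); B.add(b); record(B)

    D, E = {a0}, {a0}
    record(D)
    for _ in range(1, k):
        D.add(greedy_pick(D, D | E)); record(D)
        E.add(greedy_pick(E, D | E)); record(E)
    return best_set
\end{verbatim}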
If $f$ is submodular, InterlaceGreedy has an approximation ratio of $1/4$ and query complexity $O( kn )$; the deterministic algorithm of \citet{Gupta2010} has the same time complexity to achieve ratio $1/6$. The full proof of Theorem \ref{thm:tg}
is provided in Appendix \ref{apx:proof-thm-tg}. \begin{theorem} \label{thm:tg}
Let $f: 2^{[n]} \to \reals$ be submodular, let $k \in [ n ]$,
let $O = \argmax_{|S| \le k} f(S)$, and let $C = $ InterlaceGreedy\xspace$(f,k)$.
Then
\[ f(C) \ge f(O) / 4, \]
and InterlaceGreedy\xspace makes $O( kn )$ queries to $f$. \end{theorem} \begin{proof}[Proof sketch]
The argument of
\citet{Fisher1978} shows that
the greedy algorithm is a $(1/2)$-approximation
for monotone submodular maximization with respect to a matroid
constraint. This argument also applies
to non-monotone, submodular functions, but it shows only that
$f(S) \ge \frac{1}{2} f(O \cup S)$, where $S$ is returned by
the greedy algorithm. Since $f$ is non-monotone, it is possible
that $f( O \cup S ) < f(O)$, so this bound alone does not yield a constant-factor guarantee.
The main idea of the InterlaceGreedy algorithm
is to exploit the fact that
if $S$ and $T$ are disjoint,
\begin{equation} \label{eq:sm_disjoint}
f( O \cup S ) + f( O \cup T) \ge f( O ) + f( O \cup S \cup T ) \ge f(O), \end{equation}
which
is a consequence of the submodularity of $f$. Therefore,
by interlacing two greedy procedures, two disjoint
sets $A$,$B$ are obtained, which can be shown to almost satisfy
$f(A) \ge \frac{1}{2}f( O \cup A )$ and
$f(B) \ge \frac{1}{2}f( O \cup B )$,
after which the result follows from (\ref{eq:sm_disjoint}).
There is a technicality wherein the element $a_0$ must be handled separately; the second round of interlacing addresses this issue.
\end{proof} \subsection{The FastInterlaceGreedy Algorithm} \label{sec:fig} In this section, a faster interlaced greedy algorithm (FastInterlaceGreedy (\texttt{FIG}\xspace), Alg. \ref{alg:fast-tandem}) is formulated, which requires $O(n \log k)$ queries. As input, an instance $(f,k)$ of MCC is taken, as well as a parameter $\delta > 0$.
\begin{algorithm}
\caption{\texttt{FIG}\xspace$(f,k, \delta)$: The FastInterlaceGreedy Algorithm}
\label{alg:fast-tandem}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} $f : 2^{[n]} \to \reals$,
$k \in [ n ]$
\STATE {\bfseries Output:} $C \subseteq [n]$, such that $|C| \le k$.
\STATE $A_0 \gets B_0 \gets \emptyset$
\STATE $M \gets \tau_A \gets \tau_B \gets \max_{x \in [n]} f(x)$
\STATE $i \gets -1$, $a_{-1} \gets 0$, $b_{-1} \gets 0$
\WHILE{$\tau_A \ge \delta M/ k$ or $\tau_B \ge \delta M/ k$}
\STATE $(a_{i+1}, \tau_A) \gets \texttt{ADD}\xspace (A,B,a_i,\tau_A)$
\STATE $(b_{i+1}, \tau_B) \gets \texttt{ADD}\xspace (B,A,b_i,\tau_B)$
\STATE $i \gets i + 1$
\ENDWHILE
\STATE $D_1 \gets E_1 \gets \{ a_0 \}$, $\tau_D \gets \tau_E \gets M$
\STATE $i \gets 0$, $d_{0} \gets 0$, $e_{0} \gets 0$
\WHILE{$\tau_D \ge \delta M/ k$ or $\tau_E \ge \delta M/ k$}
\STATE $(d_{i+1}, \tau_D) \gets \texttt{ADD}\xspace (D,E,d_i,\tau_D)$
\STATE $(e_{i+1}, \tau_E) \gets \texttt{ADD}\xspace (E,D,e_i,\tau_E)$
\STATE $i \gets i + 1$
\ENDWHILE
\STATE \textbf{return} $C \gets \argmax \{f(A), f(B), f(D), f(E) \}$ \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{\texttt{ADD}\xspace$(S,T,j,\tau)$: The \texttt{ADD}\xspace subroutine}
\label{alg:add}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Two sets $S,T \subseteq [n]$, element $j \in [n]$, $\tau \in \reals$
\STATE {\bfseries Output:} $(i, \tau)$, such that $i \in [n]$, $\tau \in \reals$
\IF{$|S| = k$}
\STATE \textbf{return} $( 0, (1 - \delta)\tau )$
\ENDIF
\WHILE{$\tau \ge \delta M/ k$}
\FOR{$(x \gets j; x < n; x \gets x + 1)$}
\IF{$x \not \in T$}
\IF{$f_x(S) \ge \tau$}
\STATE $S \gets S \cup \{ x \}$
\STATE \textbf{return} $( x, \tau )$
\ENDIF
\ENDIF
\ENDFOR
\STATE $\tau \gets (1 - \delta) \tau$
\STATE $j \gets 0$
\ENDWHILE
\STATE \textbf{return} $(0 , \tau )$ \end{algorithmic} \end{algorithm} The algorithm \texttt{FIG}\xspace works as follows. As in InterlaceGreedy, there is a repeated interlacing of two greedy procedures. However, to ensure a faster query complexity, these greedy procedures are thresholded: a separate threshold $\tau$ is maintained for each of the greedy procedures. The interlacing is accomplished by alternating calls to the \texttt{ADD}\xspace subroutine (Alg. \ref{alg:add}), which adds a single element and is described below. When all of the thresholds fall below the value $\delta M / k$, the maximum of the greedy solutions is returned; here, $\delta > 0$ is the input parameter, $M$ is the maximum value of a singleton, and $k \le n$ is the cardinality constraint.
The \texttt{ADD}\xspace subroutine is responsible for adding a single element above the input threshold and decreasing the threshold. It takes as input four parameters: two sets $S,T$, element $j$, and threshold $\tau$; furthermore, \texttt{ADD}\xspace is given access to the oracle $f$, the budget $k$, the value $M$, and the parameter $\delta$ of \texttt{FIG}\xspace. As an overview, \texttt{ADD}\xspace adds the first\footnote{The first element $x \ge j$ in the natural ordering on $[n] = \{0, \ldots, n - 1\}$.} element $x \ge j$, such that $x \not \in T$ and such that the marginal gain $f_x(S)$ is at least $\tau$. If no such element $x \ge j$ exists, the threshold is decreased by a factor of $(1 - \delta)$ and the process is repeated (with $j$ set to $0$). When such an element $x$ is found, the element $x$ is added to $S$, and the new threshold value and position $x$ are returned. Finally, \texttt{ADD}\xspace ensures that the size of $S$ does not exceed $k$.
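The following Python sketch mirrors the \texttt{ADD}\xspace subroutine of Alg.~\ref{alg:add} (again only an illustration, with \texttt{f} a set-valued oracle and \texttt{M}, \texttt{k}, \texttt{delta} the values held by \texttt{FIG}\xspace).
\begin{verbatim}
def add(f, S, T, j, tau, n, k, M, delta):
    """Sketch of ADD: insert into S the first element x >= j with x not
    in T whose marginal gain meets the threshold tau; otherwise lower
    tau by a factor (1 - delta) and rescan from the beginning."""
    if len(S) == k:
        return 0, (1 - delta) * tau
    while tau >= delta * M / k:
        for x in range(j, n):
            if x not in T and f(S | {x}) - f(S) >= tau:
                S.add(x)
                return x, tau
        tau *= (1 - delta)
        j = 0
    return 0, tau
\end{verbatim}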
Next, the approximation ratio of \texttt{FIG}\xspace is proven. \begin{theorem} \label{thm:ftg}
Let $f: 2^{[n]} \to \reals$ be submodular, let $k \in [n]$,
and let $\epsi > 0$.
Let $O = \argmax_{|S| \le k} f(S)$.
Choose $\delta$ such that $(1 - 6 \delta)/4 > 1/4 - \epsi$,
and let $C = $ \texttt{FIG}\xspace$(f,k,\delta )$.
Then
\[ f(C) \ge ( 1 - 6\delta)f(O) / 4 \ge \left(1/4 - \epsi \right) f(O). \] \end{theorem}
\begin{proof}\let\qed\relax Let $A,B,C,D,E,M$ have their values at termination of $\texttt{FIG}\xspace (f,k,\delta )$. Let $A = \{ a_0, \ldots, a_{|A| - 1} \}$ be ordered by addition of elements by \texttt{FIG}\xspace into $A$. The proof requires the following four inequalities: \begin{align}
f(O \cup A) &\le (2 + 2\delta )f(A) + \delta M, \label{ineq:fast-A} \\
f( (O \setminus \{ a_0 \}) \cup B ) &\le (2 + 2\delta )f(B) + \delta M, \label{ineq:fast-B} \\
f( O \cup D ) &\le (2 + 2\delta )f(D) + \delta M, \label{ineq:fast-D}\\
f( O \cup E ) &\le (2 + 2\delta )f(E) + \delta M. \label{ineq:fast-E} \end{align} Once these inequalities have been established, Inequalities \ref{ineq:fast-A}, \ref{ineq:fast-B}, submodularity of $f$, and $A \cap B = \emptyset$ imply \begin{equation}
f( O \setminus \{a_0\} ) \le 2(1 + \delta)(f(A) + f(B)) + 2 \delta M. \end{equation} Similarly, from Inequalities \ref{ineq:fast-D}, \ref{ineq:fast-E}, submodularity of $f$, and $D \cap E = \{ a_0 \}$, it holds that \begin{equation}
f( O \cup \{ a_0 \} ) \le 2(1 + \delta)(f(D) + f(E)) + 2 \delta M. \end{equation} Hence, from the fact that either $a_0 \in O$ or $a_0 \not \in O$ and the definition of $C$, it holds that \begin{equation*}
f(O) \le 4(1+\delta )f(C) + 2 \delta M. \end{equation*} Since $f(C) \le f(O)$ and $M \le f(O)$, the theorem is proved.
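In more detail, since $M \le f(O)$ and $0 < \delta < 1/2$,
\begin{align*}
f(O) \le 4(1+\delta) f(C) + 2 \delta f(O)
\quad \Longrightarrow \quad
f(C) \ge \frac{1 - 2\delta}{4(1 + \delta)}\, f(O) \ge \frac{1 - 6\delta}{4}\, f(O),
\end{align*}
where the last inequality holds because $(1 - 2\delta) - (1 - 6\delta)(1 + \delta) = 3\delta + 6\delta^2 \ge 0$.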
The proofs of Inequalities \ref{ineq:fast-A}--\ref{ineq:fast-E} are similar. The proof of Inequality \ref{ineq:fast-B} is given here, while the proofs of the others are provided in Appendix \ref{apx:proof-thm-ftg}. \begin{proof}[Proof of Inequality \ref{ineq:fast-B}]
Let $A = \{a_0, \ldots, a_{|A| - 1} \}$ be ordered as specified by \texttt{FIG}\xspace. Likewise, let
$B = \{b_0, \ldots, b_{|B| - 1} \}$ be ordered as specified by \texttt{FIG}\xspace. \begin{lemma} \label{lemma:fast-order-B}
$O \setminus ( B \cup \{a_0\} ) = \{o_0,\ldots,o_{l-1} \}$ can be ordered such that
\begin{equation} \label{ineq:fast-marg-B}
f_{o_i}(B_i) \le (1 + 2 \delta) f_{b_i}(B_i),
\end{equation}
for any $i \in [|B|]$. \end{lemma} \begin{proof}
For each $i \in [|B|]$, define $\tau_{B_i}$ to be the value of $\tau$ when $b_i$
was added into $B$ by the \texttt{ADD}\xspace subroutine.
Order $o \in (O \setminus (B \cup \{ a_0 \} )) \cap A = \{o_0, \ldots, o_{\ell - 1} \} $ by
the order in which these elements were added into $A$. Order the remaining
elements of $O \setminus (B \cup \{ a_0 \})$ arbitrarily. Then, when
$b_i$ was chosen by \texttt{ADD}\xspace, it holds that $o_i \not \in A_{i + 1}$, since
$A_1 = \{ a_0 \}$ and $a_0 \not \in O \setminus (B \cup \{ a_0 \})$.
Also, it holds that $o_i \not \in B_i$ since $B_i \subseteq B$;
hence $o_i$ was not added into some (possibly non-proper) subset $B'_i$ of $B_i$
at the previous threshold value $\frac{\tau_{B_i}}{ (1 - \delta)}$.
By submodularity,
$f_{o_i}(B_i) \le f_{o_i}(B'_i) < \frac{\tau_{B_i}}{(1 - \delta)}$.
Since $f_{b_i}(B_i) \ge \tau_{B_i}$ and $\delta < 1/2$,
inequality (\ref{ineq:fast-marg-B}) follows. \end{proof}
Order $\hat{O} = O\setminus (B\cup \{a_0\}) = \{o_0, \ldots, o_{l-1} \}$ as
defined in the proof of Lemma \ref{lemma:fast-order-B},
and let $\hat{O}_i = \{o_0, \ldots, o_{i-1} \}$, if $i \ge 1$, and
let $\hat{O}_0 = \emptyset$.
Then
\begin{align*}
f( \hat{O} \cup B ) - f( B ) &= \sum_{i=0}^{l-1} f_{o_i} ( \hat{O}_i \cup B )\\
&= \sum_{i=0}^{|B| - 1} f_{o_i} ( \hat{O}_i \cup B ) + \sum_{i=|B|}^{l - 1} f_{o_i} ( \hat{O}_i \cup B ) \\
&\le \sum_{i=0}^{|B| - 1} f_{o_i} ( B_i ) + \sum_{i=|B|}^{l - 1} f_{o_i} (B ) \\
&\le \sum_{i=0}^{|B| - 1} (1 + 2\delta ) f_{b_i} ( B_i ) + \sum_{i=|B|}^{l - 1} f_{o_i} ( B ) \\
&\le ( 1 + 2 \delta ) f(B) + \delta M,
\end{align*}
where any empty sum is defined to be 0; the first inequality follows by submodularity, the second follows from Lemma \ref{lemma:fast-order-B}, and the third follows from the definition of $B$,
and the facts that, for any $i$ such that $|B| \le i < l$, $\max_{x \in [n] \setminus A_{|B|+1}} f_x(B) < \delta M/ k $, $l - |B| \le k$, and $o_i \not \in A_{|B| + 1}$. \end{proof}\end{proof} \begin{theorem} \label{thm:ftg-speed}
Let $f: 2^{[n]} \to \reals$ be submodular, let $k \in [n ]$,
and let $\delta > 0$.
Then the number of queries to $f$ by $\texttt{FIG}\xspace(f,k,\delta)$ is
at most $O\left( \frac{n}{\delta} \log \frac{k}{\delta} \right)$. \end{theorem} \begin{proof}
Recall $[n] = \{0, 1, \ldots, n - 1\}$. Let $S \in \{ A,B,D,E \}$, and
$S = \{s_0, \ldots, s_{|S| - 1} \}$ in the order in which elements were
added to $S$.
When \texttt{ADD}\xspace is called by \texttt{FIG}\xspace to add an element $s_i \in [n]$ to $S$,
if the value of $\tau$ is the same as the value when $s_{i-1}$ was added to $S$, then
$s_i > s_{i-1}$. Finally, once \texttt{ADD}\xspace queries the marginal gain of adding $(n - 1)$,
the threshold is revised downward by a factor of $(1 - \delta)$.
Therefore, there are at most $O(n)$ queries of $f$ at each distinct value
of $\tau_A$, $\tau_B$, $\tau_D$, $\tau_E$. Each threshold starts at $M$, is decreased by a factor of $(1 - \delta)$ after each full pass over $[n]$, and is not used once it falls below $\delta M / k$; hence at
most $\log_{1/(1-\delta)} ( k / \delta ) + 1 = O(\frac{1}{\delta} \log \frac{k}{\delta})$ values are assumed by each of these
thresholds, the theorem follows. \end{proof} \section{Tight Examples} In this section, examples are provided showing that InterlaceGreedy or FastInterlaceGreedy may achieve performance ratio at most $1/4 + \epsi$ on specific instances, for each $\epsi > 0$. These examples show that the analysis in the preceding sections is tight.
Let $\epsi > 0$ and choose $k$ such that $1/k < \epsi$. Let $O$ and $D$ be disjoint sets, each containing $k$ distinct elements, and let $U = O \dot{\cup} \{a,b\} \dot{\cup} D$. A submodular function $f$ will be defined on subsets of $U$ as follows.
Let $C \subseteq U$. \begin{itemize}
\item If both $a \in C$ and $b \in C$, then $f(C) = 0$.
\item If $a \in C$ xor $b \in C$, then $f(C) = \frac{|C \cap O|}{2k} + \frac{1}{k}$.
\item If $a \not \in C$ and $b \not \in C$, then $f(C) = \frac{|C \cap O|}{k}$. \end{itemize}
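For readers who wish to experiment with this construction, a minimal Python sketch follows (the integer encoding of $U$ is only for illustration).
\begin{verbatim}
def tight_example(k):
    """Ground set U = O + {a, b} + D encoded as integers:
    O = {0..k-1}, a = k, b = k+1, D = {k+2 .. 2k+1}."""
    O = set(range(k))
    a, b = k, k + 1

    def f(C):
        has_a, has_b = a in C, b in C
        if has_a and has_b:
            return 0.0
        if has_a or has_b:
            return len(C & O) / (2 * k) + 1.0 / k
        return len(C & O) / k

    return f, O, a, b

f, O, a, b = tight_example(20)
assert abs(f(O) - 1.0) < 1e-12   # the optimum has value 1
\end{verbatim}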
The following proposition is proved in Appendix \ref{apx:tight}. \begin{proposition}\label{prop:sm}
The function $f$ is submodular. \end{proposition}
Next, observe that for any $o \in O$, $f_{a}( \emptyset ) = f_b( \emptyset) = f_o( \emptyset ) = 1/k$. Hence InterlaceGreedy or FastInterlaceGreedy may choose $a_0 = a$ and $b_0 = b$; after this choice, the only way to increase $f$ is by choosing elements of $O$. Hence $a_i, b_i$ will be chosen in $O$ until elements of $O$ are exhausted, which results in $k/2$ elements of $O$ added to each of $A$ and $B$. Thereafter, elements of $D$ will be chosen, which do not affect the function value. This yields $$f(A) = f(B) \le 1/k + 1/4.$$ Next, $D_1 = E_1 = \{ a \}$, and a similar situation arises, in which $k/2$ elements of $O$ are added to $D,E$, yielding $f(D) = f(E) = f(A)$. Hence InterlaceGreedy or FastInterlaceGreedy may return $A$, while $f( O ) = 1$. So $\frac{f(A)}{f(O)} \le 1/k + 1/4 \le 1/4 + \epsi$. \section{Experimental Evaluation} \label{sec:exp} In this section, performance of FastInterlaceGreedy (\texttt{FIG}\xspace) is compared with that of state-of-the-art algorithms on two applications of submodular maximization: cardinality-constrained maximum cut and network monitoring. \subsection{Setup} \paragraph{Algorithms} The following algorithms are compared. Source code for the evaluated implementations of all algorithms is available at \url{https://gitlab.com/kuhnle/non-monotone-max-cardinality}. \begin{itemize} \item \textbf{ FastInterlaceGreedy (Alg. \ref{alg:fast-tandem})}: \texttt{FIG}\xspace is implemented as specified
in the pseudocode, with the following addition: a stealing procedure
is employed at the end, which uses submodularity to quickly steal\footnote{Details of the stealing procedure are given in Appendix \ref{apx:steal}.}
elements from $A,B,D,E$ into $C$ in $O(k)$ queries. This does
not impact the performance guarantee, as the value of $C$ can only
increase. The parameter $\delta$ is set to $0.1$, yielding
an approximation ratio of $0.1$.
\item \textbf{\citet{Gupta2010}}: The algorithm of \citet{Gupta2010} for cardinality
constraint; as the subroutine for the unconstrained maximization subproblems,
the deterministic, linear-time $1/3$-approximation algorithm of
\citet{Naor2012} is employed. This yields an overall approximation ratio
of $1/7$ for the implementation used herein. This algorithm is the fastest
deterministic approximation algorithm in prior literature.
\item \textbf{FastRandomGreedy (FRG)}: The $O \left( \frac{n}{\epsi^2} \ln \frac{1}{\epsi} \right)$
randomized algorithm of \citet{Buchbinder2015a} (Alg. 4 of that paper), with
expected ratio $1/e - \epsi$; the parameter $\epsi$ was set to 0.3, yielding
expected ratio of $\approx 0.07$ as evaluated herein.
This algorithm is the fastest
randomized approximation algorithm in prior literature.
\item \textbf{BLITS}: The $O \left( \log^2 n \right)$-adaptive algorithm
recently introduced in \citet{Balkanskia}; the algorithm is employed
as a heuristic without performance ratio, with the same parameter choices as in \citet{Balkanskia}.
In particular, $\epsi = 0.3$
and 30 samples are used to approximate the expectations. Also, a bound on OPT
is guessed in logarithmically many iterations as described in \citet{Balkanskia}
and references therein. \end{itemize} Results for randomized algorithms are the mean of 10 trials, and the standard deviation is represented in plots by a shaded region. \paragraph{Applications} Many applications with non-monotone, submodular objective functions exist. In this section, two applications are chosen to demonstrate the performance of the evaluated algorithms. \begin{itemize}
\item Cardinality-Constrained Maximum Cut: The archetype of a submodular, non-monotone
function
is the maximum cut objective: given graph $G = (V,E)$, $S \subseteq V$, $f(S)$ is
defined to be the number of edges crossing from $S$ to $V \setminus S$. The cardinality constrained version of this problem is considered in the evaluation.
\item Social Network Monitoring: Given an online social
network, suppose it is desired to choose $k$ users to monitor,
such that the maximum amount of content is propagated
through these users.
Suppose the amount of content propagated between two users $u,v$ is encoded as
weight $w(u,v)$. Then
$ f(S) = \sum_{u \in S, v \not \in S} w(u,v).$ \end{itemize} \subsection{Results} In this section, results are presented for the algorithms on the two applications. In overview: in terms of objective value, \texttt{FIG}\xspace and \citet{Gupta2010} were about the same and outperformed BLITS and FRG. Meanwhile, \texttt{FIG}\xspace was the fastest algorithm by the metric of queries to the objective and was faster than \citet{Gupta2010} by at least an order of magnitude. \paragraph{Cardinality Constrained MaxCut} \begin{figure}
\caption{\textbf{(a)--(d)}: Objective value and runtime for cardinality-constrained maxcut on random graphs. \textbf{(e)--(f)}: Objective value and runtime for cardinality-constrained maxcut on ca-AstroPh with simulated amounts of content between users. In all plots, the $x$-axis shows the budget $k$. }
\label{er:obj}
\label{er:que}
\label{ba:obj}
\label{ba:que}
\end{figure}
For these experiments, two random graph models were employed: an Erdős-Rényi (ER) random graph with $1,000$ nodes and edge probability $p = 1/2$, and a Barabási–Albert (BA) graph with $n=10,000$ and $m=m_0=100$.
On the ER graph, results are shown in Figs. \ref{er:obj} and \ref{er:que}; the results on the BA graph are shown in Figs. \ref{ba:obj} and \ref{ba:que}. In terms of cut value, the algorithm of \citet{Gupta2010} performed the best, although the value produced by \texttt{FIG}\xspace was nearly the same. On the ER graph, the next best was FRG followed by BLITS; whereas on the BA graph, BLITS outperformed FRG in cut value. In terms of efficiency of queries, \texttt{FIG}\xspace used the smallest number on every evaluated instance, although the number did increase logarithmically with budget. The number of queries used by FRG was higher, but after a certain budget remained constant. The next most efficient was \citet{Gupta2010} followed by BLITS. \paragraph{Social Network Monitoring} For the social network monitoring application, the collaboration network ca-AstroPh from the SNAP dataset collection was used, with $n = 18,772$ users and $198,110$ edges. Edge weights, which represent the amount of content shared between users, were generated uniformly randomly in $[1,10]$. The results were similar qualitatively to those for the unweighted MaxCut problem presented previously. \texttt{FIG}\xspace is the most efficient in terms of number of queries, and \texttt{FIG}\xspace is only outperformed in solution quality by \citet{Gupta2010}, which required more than an order of magnitude more queries.
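For reference, both objectives can be evaluated with the same oracle template; the sketch below is illustrative only (the adjacency representation is an assumption of this sketch, not of the evaluated implementations linked above).
\begin{verbatim}
def weighted_cut_oracle(adj):
    """adj: dict mapping each node to a dict {neighbor: weight}, stored
    symmetrically. Returns f with f(S) = total weight of edges leaving S;
    unweighted maxcut is the special case of unit weights."""
    def f(S):
        return sum(w for u in S for v, w in adj[u].items() if v not in S)
    return f
\end{verbatim}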
\paragraph{Effect of Stealing Procedure} \begin{figure}
\caption{Effect of stealing procedure on solution quality of \texttt{FIG}\xspace.}
\label{fig:abl-er}
\label{fig:abl-ba}
\label{fig:ablation}
\end{figure} In Fig. \ref{fig:ablation} above, the effect of removing the stealing
procedure is shown on the random graph instances.
Let $C_{FIG}$ be the solution returned by FIG, and
$C_{FIG*}$ be the solution returned by FIG with the
stealing procedure removed. Fig. \ref{fig:abl-er}
shows that on the ER instance, the stealing procedure
adds at most $1.5\%$ to the solution value; however,
on the BA instance, Fig. \ref{fig:abl-ba} shows that the
stealing procedure contributes up to $45\%$ increase
in solution value, although this effect degrades with larger $k$.
This behavior may be explained by the interlaced greedy process
being forced to leave good elements out of its solution, which
are then recovered during the stealing procedure.
\appendix \section{Proof of Theorem \ref{thm:tg}} \label{apx:proof-thm-tg} \begin{proof}[Proof of Theorem \ref{thm:tg}]
\begin{lemma} \label{lemm:minus}
$$ 4f(C) \ge f\left(O \setminus \{ a_0 \} \right). $$
\end{lemma}
\begin{proof}
Let $A = \argmax_{i \in [k + 1]} f( A_i )$.
Let $\hat{O} = O \setminus A_k = \{o_0, \ldots, o_{l-1}\}$ be ordered such that
for each $i \in [l]$, $o_i \not \in B_i$; this ordering is
possible since $B_0 = \emptyset$ and $l \le k$. Also,
for each $i \in [l]$, let $\hat{O}_i = \{o_0,\ldots,o_{i-1}\}$, and let $\hat{O}_0 = \emptyset$.
Then
\begin{align*}
f( O \cup A_k ) - f( A_k ) &= \sum_{i=0}^{l-1} f_{o_i} ( \hat{O}_i \cup A_k )\\
&\le \sum_{i=0}^{l-1} f_{o_i} (A_i) \\
&\le \sum_{i=0}^{l-1} f_{a_i} (A_i) = f( A_l ), \\
\end{align*}
where the first inequality follows from submodularity, the
second inequality follows from the greedy choice $a_i = \argmax_{x \in [n] \setminus (A_i \cup B_i)} f_x( A_{i} )$ and the fact
that $o_i \not \in B_i$. Hence
\begin{equation} \label{ineq:A}
f(O \cup A_k) \le f(A_l) + f(A_k) \le 2f(A).
\end{equation}
Let
$B = \argmax_{i \in [k + 1]} f( B_i )$.
Let $\hat{O} = O \setminus \left( \{ a_0 \} \cup B_k \right) = \{o_0, \ldots, o_{l-1}\}$ be ordered such that
for each $i \in [l]$, $o_i \not \in A_{i + 1}$; this ordering is
possible since $A_1 = \{ a_0 \}$, $a_0 \not \in \hat{O}$, and $l \le k$. Also,
for each $i \in [l]$, let $\hat{O}_i = \{o_0,\ldots,o_{i-1}\}$,
and let $\hat{O}_0 = \emptyset$.
Then
\begin{align*}
f( ( O \setminus \{ a_0 \} ) \cup B_k ) - f( B_k ) &= \sum_{i=0}^{l-1} f_{o_i} ( \hat{O}_i \cup B_k )\\
&\le \sum_{i=0}^{l-1} f_{o_i} (B_i) \\
&\le \sum_{i=0}^{l-1} f_{b_i} (B_i) = f( B_l ), \\
\end{align*}
where the first inequality follows from submodularity, the
second inequality follows from the greedy choice $b_i = \argmax_{x \in [n] \setminus (A_{i + 1} \cup B_i)} f_x( B_{i} )$ and the fact
that $o_i \not \in A_{i + 1}$. Hence
\begin{equation} \label{ineq:B}
f( ( O \setminus \{ a_0 \} )\cup B_k) \le f(B_l) + f(B_k) \le 2f(B).
\end{equation}
By inequalities (\ref{ineq:A}), (\ref{ineq:B}), the fact that $A_k \cap B_k = \emptyset$, and submodularity, it holds that
\begin{equation*}
f( O \setminus \{ a_0 \} ) \le f( O \cup A_k ) + f( ( O \setminus \{ a_0 \} ) \cup B_k ) \le 2 ( f(A) + f(B) ) \le 4 f(C).
\end{equation*}
\end{proof}
\begin{lemma} \label{lemm:plus}
$$ 4f(C) \ge f\left(O \cup \{ a_0 \} \right). $$
\end{lemma}
\begin{proof}
Let $D = \argmax_{i \in [k + 1]} f( D_i )$.
Let $\hat{O} = O \setminus D_k = \{o_0, \ldots, o_{l-1}\}$ be ordered such that
for each $i \in [l]$, $o_i \not \in E_i$; this ordering is
possible since $E_0 = \emptyset$ and $l \le k$. Also,
for each $i \in [l]$, let $\hat{O}_i = \{o_0,\ldots,o_{i-1}\}$,
and let $\hat{O}_0 = \emptyset$.
Then
\begin{align*}
f( O \cup D_k ) - f( D_k ) &= \sum_{i=0}^{l-1} f_{o_i} ( \hat{O}_i \cup D_k )\\
&\le \sum_{i=0}^{l-1} f_{o_i} (D_i) \\
&\le \sum_{i=0}^{l-1} f_{d_i} (D_i) = f( D_l ), \\
\end{align*}
where the first inequality follows from submodularity, the
second inequality follows from the greedy choice $d_i = \argmax_{x \in [n] \setminus (D_i \cup E_i)} f_x( D_{i} )$ and the fact
that $o_i \not \in E_i$. Hence
\begin{equation} \label{ineq:D}
f(O \cup D_k) \le f(D_l) + f(D_k) \le 2f(D).
\end{equation}
Let
$E = \argmax_{i \in [k + 1]} f( E_i )$.
Let $\hat{O} = O \setminus E_k = \{o_0, \ldots, o_{l-1}\}$ be ordered such that
for each $i \in [l]$, $o_i \not \in D_{i + 1}$; this ordering is
possible since $D_1 = \{ a_0 \}$, $a_0 \not \in \hat{O}$ (since $a_0 \in E_k$),
and $l \le k$. Also,
for each $i \in [l]$, let $\hat{O}_i = \{o_0,\ldots,o_{i-1}\}$,
and let $\hat{O}_0 = \emptyset$.
Then
\begin{align*}
f( O \cup E_k ) - f( E_k ) &= \sum_{i=0}^{l-1} f_{o_i} ( \hat{O}_i \cup E_k )\\
&\le \sum_{i=0}^{l-1} f_{o_i} (E_i) \\
&\le \sum_{i=0}^{l-1} f_{e_i} (E_i) = f( E_l ), \\
\end{align*}
where the first inequality follows from submodularity, the
second inequality follows from the greedy choices
$e_0 = \argmax_{x\in [n]} f(x)$, and if $i > 0$, $e_i = \argmax_{x \in [n] \setminus (D_{i + 1} \cup E_i)} f_x( E_{i} )$ and the fact
that $o_i \not \in D_{i + 1}$. Hence
\begin{equation} \label{ineq:E}
f( O \cup E_k) \le f(E_l) + f(E_k) \le 2f(E).
\end{equation}
By inequalities (\ref{ineq:D}), (\ref{ineq:E}), the fact that $D_k \cap E_k = \{ a_0 \}$, and submodularity, it holds that
\begin{equation*}
f( O \cup \{ a_0 \} ) \le f( O \cup D_k ) + f( O \cup E_k ) \le 2 ( f(D) + f(E) ) \le 4 f(C).
\end{equation*}
\end{proof}
The proof of the theorem follows from Lemmas \ref{lemm:minus}, \ref{lemm:plus}, and the fact that one of the statements $a_0 \in O$ or $a_0 \not \in O$ must hold;
hence, either $O \cup \{ a_0 \} = O$ or $O \setminus \{ a_0 \} = O$. \end{proof} \section{Proofs for Theorem \ref{thm:ftg}} \label{apx:proof-thm-ftg} \begin{proof}[Proof of Inequality \ref{ineq:fast-A}]
Let $A = \{a_0, \ldots, a_{|A| - 1} \}$ be ordered as specified
by \texttt{FIG}\xspace. Likewise, let
$B = \{b_0, \ldots, b_{|B| - 1} \}$ be ordered as specified
by \texttt{FIG}\xspace. \begin{lemma} \label{lemma:fast-order-A}
$O \setminus A = \{o_0,\ldots,o_{l-1} \}$ can be ordered such that
\begin{equation} \label{ineq:fast-marg-A}
f_{o_i}(A_i) \le (1 + 2 \delta) f_{a_i}(A_i),
\end{equation}
if $i \in [|A|]$. \end{lemma} \begin{proof}
For each $i \in [|A|]$, define $\tau_{A_i}$ to be the value of $\tau$ when $a_i$ was added into $A$ by the \texttt{ADD}\xspace subroutine. Order $o \in (O \setminus A) \cap B = \{o_0, \ldots, o_{\ell - 1} \} $ by
the order in which these elements were added into $B$. Order the remaining
elements of $O \setminus A$ arbitrarily. Then, when
$a_i$ was chosen by \texttt{ADD}\xspace, it holds that $o_i \not \in B_i$.
Also, it holds that $o_i \not \in A_i$;
hence $o_i$ was not added into some (possibly non-proper) subset $A'_i$ of $A_i$
at the previous threshold value $\frac{\tau_{A_i}}{ (1 - \delta)}$.
Hence $f_{o_i}(A_i) \le f_{o_i}(A'_i) < \frac{\tau_{A_i}}{(1 - \delta)}$,
since $o_i \not \in B_i$.
Since $f_{a_i}(A_i) \ge \tau_{A_i}$ and $\delta < 1/2$,
inequality (\ref{ineq:fast-marg-A}) follows. \end{proof}
Order $\hat{O} = O\setminus A = \{o_0, \ldots, o_{l-1} \}$ as
indicated in the proof of Lemma \ref{lemma:fast-order-A},
and let $\hat{O}_i = \{o_0, \ldots, o_{i-1} \}$, if $i \ge 1$,
$\hat{O}_0 = \emptyset$.
Then
\begin{align*}
f( O \cup A ) - f( A ) &= \sum_{i=0}^{l-1} f_{o_i} ( \hat{O}_i \cup A )\\
&= \sum_{i=0}^{|A| - 1} f_{o_i} ( \hat{O}_i \cup A ) + \sum_{i=|A|}^{l - 1} f_{o_i} ( \hat{O}_i \cup A ) \\
&\le \sum_{i=0}^{|A| - 1} f_{o_i} ( A_i ) + \sum_{i=|A|}^{l - 1} f_{o_i} ( A ) \\
&\le \sum_{i=0}^{|A| - 1} (1 + 2\delta ) f_{a_i} ( A_i ) + \sum_{i=|A|}^{l - 1} f_{o_i} ( A ) \\
&\le ( 1 + 2 \delta ) f(A) + \delta M,
\end{align*}
where any empty sum is defined to be 0; the first inequality follows by submodularity, the second follows from Lemma \ref{lemma:fast-order-A}, and the third follows from the definition of $A$, and the facts that $\max_{x \in [n] \setminus B_{|A|}} f_x(A) < \delta M/ k$ and $l - |A| \le k$. \end{proof} \begin{proof}[Proof of Inequality \ref{ineq:fast-D}]
As in the proof of Inequality \ref{ineq:fast-A}, it suffices
to establish the following lemma. \end{proof}
\begin{lemma}
$O \setminus D = \{o_0,\ldots,o_{l-1} \}$ can be ordered such that
\begin{equation} \label{ineq:fast-marg-D}
f_{o_i}(D_i) \le (1 + 2 \delta) f_{d_i}(D_i),
\end{equation}
for $i \in [|D|]$. \end{lemma} \begin{proof}
For each $i \in [|D|]$, define $\tau_{D_i}$ to be the value of $\tau$ when $d_i$ was added into $D$ by the \texttt{ADD}\xspace subroutine. Order $o \in (O \setminus D ) \cap E = \{o_0, \ldots, o_{\ell - 1} \} $ by
the order in which these elements were added into $E$. Order the remaining
elements of $O \setminus D$ arbitrarily. Then, when
$d_i$ was chosen by \texttt{ADD}\xspace, it holds that $o_i \not \in E_{i}$.
Also, it holds that $o_i \not \in D_i$;
hence $o_i$ was not added into some (possibly non-proper) subset $D'_i$ of $D_i$
at the previous threshold value $\frac{\tau_{D_i}}{ (1 - \delta)}$.
Hence $f_{o_i}(D_i) \le f_{o_i}(D'_i) < \frac{\tau_{D_i}}{(1 - \delta)}$,
since $o_i \not \in E_{i}$.
Since $f_{d_i}(D_i) \ge \tau_{D_i}$ and $\delta < 1/2$,
inequality (\ref{ineq:fast-marg-D}) follows. \end{proof} \begin{proof}[Proof of Inequality \ref{ineq:fast-E}]
As in the proof of Inequality \ref{ineq:fast-A}, it suffices
to establish the following lemma.
\begin{lemma}
$O \setminus E = \{o_0,\ldots,o_{l-1} \}$ can be ordered such that
\begin{equation} \label{ineq:fast-marg-E}
f_{o_i}(E_i) \le (1 + 2 \delta) f_{e_i}(E_i),
\end{equation}
for $i \in [|E|]$. \end{lemma} \begin{proof}
For each $i \in [|E|]$, define $\tau_{E_i}$ to be the value of $\tau$ when $e_i$ was added into $E$ by the \texttt{ADD}\xspace subroutine. Order $o \in (O \setminus E ) \cap D = \{o_0, \ldots, o_{\ell - 1} \} $ by
the order in which these elements were added into $D$. Order the remaining
elements of $O \setminus E$ arbitrarily. Then, when
$e_i$ was chosen by \texttt{ADD}\xspace, it holds that $o_i \not \in D_{i + 1}$, since
$D_1 = \{ a_0 \}$ and $a_0 = d_0 \not \in O \setminus E$.
Also, it holds that $o_i \not \in E_i$;
hence $o_i$ was not added into some (possibly non-proper) subset $E'_i$ of $E_i$
at the previous threshold value $\frac{\tau_{E_i}}{ (1 - \delta)}$.
Hence $f_{o_i}(E_i) \le f_{o_i}(E'_i) < \frac{\tau_{E_i}}{(1 - \delta)}$,
since $o_i \not \in D_{i+1}$.
Since $f_{e_i}(E_i) \ge \tau_{E_i}$ and $\delta < 1/2$,
inequality (\ref{ineq:fast-marg-E}) follows. \end{proof} \end{proof} \section{Stealing Procedure for FastInterlaceGreedy} \label{apx:steal} In this section, an $O(k)$ procedure is described, which may improve the quality of the solution found by FastInterlaceGreedy (a similar procedure could also be employed for InterlaceGreedy).
Let $A,B,C,D,E$ have their values at the termination of FastInterlaceGreedy. Then calculate the sets $G = \{ B_c = f(C) - f(C \setminus \{ c \}) : c \in C \}$ and $H = \{ A_x = f( C \cup \{ x \} ) - f(C) : x \in A \cup B \cup D \cup E \}$. Then sort $G = (B_{c_1}, \ldots, B_{c_k})$ in non-decreasing order and sort $H = (A_{x_1}, \ldots, A_{x_l})$ in non-increasing order. Computing and sorting these sets requires $O(k \log k)$ time (and only $O(k)$ queries to $f$).
Finally, iterate through the elements of $G$ in the sorted order, and if $B_{c_i} < A_{x_i}$ then $C$ is assigned $C \setminus \{ c_i \} \cup \{ x_i \}$ if this assignment increases the value $f( C )$.
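A Python sketch of the procedure follows (continuing the oracle conventions of the earlier sketches; variable names are illustrative).
\begin{verbatim}
def steal(f, C, others):
    """Sketch of the stealing step: pair the elements of C that contribute
    least with the outside elements that would gain most, and perform a
    swap only if it increases f(C)."""
    pool = set().union(*others) - C
    base = f(C)
    losses = sorted((base - f(C - {c}), c) for c in C)                   # non-decreasing
    gains = sorted(((f(C | {x}) - base, x) for x in pool), reverse=True)  # non-increasing
    for (loss, c), (gain, x) in zip(losses, gains):
        if loss < gain:
            swapped = (C - {c}) | {x}
            if f(swapped) > f(C):
                C = swapped
    return C
\end{verbatim}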
\section{Proof for Tight Examples} \label{apx:tight} \begin{proof}[Proof of Prop. \ref{prop:sm}]
Submodularity will be verified by checking the inequality
\begin{equation} \label{submod:apx}
f(S) + f(T) \ge f(S \cup T) + f (S \cap T)
\end{equation}
for all $S, T \subseteq U$.
\begin{itemize} \item \textbf{case $a \in S \cap T$, $b \not \in T \cup S$.}
Then Ineq. (\ref{submod:apx}) becomes
$$ \frac{ |S \cap O| }{ 2k } + \frac{ |T \cap O| }{ 2k } + \frac{2}{k} \ge \frac{|S \cap T \cap O |}{ 2k } + \frac{ |(S \cup T) \cap O| }{ 2k } + \frac{2}{k}, $$
which holds.
\item \textbf{case $a \in S \setminus T$, $b \in T \setminus S$.}
Then Ineq. (\ref{submod:apx}) becomes
$$ \frac{ |S \cap O| }{ 2k } + \frac{ |T \cap O| }{ 2k } + \frac{2}{k} \ge \frac{|S \cap T \cap O |}{ k }, $$
which holds.
\item \textbf{case $a \in S \setminus T$, $b \in S \setminus T$.} Then Ineq. (\ref{submod:apx}) becomes
$$ \frac{ |T \cap O| }{k} \ge \frac{|S \cap T \cap O |}{ k }, $$
which holds.
\item \textbf{case $a \in S \setminus T$, $b \in S \cap T$.} Then Ineq. (\ref{submod:apx}) becomes
$$ \frac{ |T \cap O| }{ 2k } + \frac{1}{k} \ge \frac{|S \cap T \cap O |}{ 2k } + \frac{1}{k}, $$
which holds.
\item \textbf{case $a \in S \cap T$, $b \in S \cap T$.} Then Ineq. (\ref{submod:apx}) becomes
$$ 0 \ge 0, $$
which holds.
\item \textbf{case $a \not \in S \cup T$, $b \not \in S \cup T$.} Then Ineq. (\ref{submod:apx}) becomes
$$ | S \cap O | + |T \cap O| \ge | (S \cup T) \cap O | + | (S \cap T) \cap O |,$$
which holds.
\item \textbf{case $a \in S \setminus T$, $b \not \in S \cup T$.} Then Ineq. (\ref{submod:apx}) becomes
$$ \frac{| S \cap O |}{2k} + \frac{1}{k} + \frac{|T \cap O|}{k} \ge \frac{| (S \cup T) \cap O |}{2k} + \frac{1}{k} + \frac{| (S \cap T) \cap O |}{k},$$
which holds.
\end{itemize}
The remaining cases follow symmetrically. \end{proof}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
In this paper, we propose \pcpnet, a deep-learning based approach for estimating local 3D shape properties in point clouds. In contrast to the majority of prior techniques that concentrate on global or mid-level attributes, e.g., for shape classification or semantic labeling, we suggest a patch-based learning method, in which a series of local patches at multiple scales around each point is encoded in a structured manner. Our approach is especially well-adapted for estimating local shape properties such as normals (both unoriented and oriented) and curvature from raw point clouds in the presence of strong noise and multi-scale features. Our main contributions include both a novel multi-scale variant of the recently proposed PointNet architecture with emphasis on local shape information, and a series of novel applications in which we demonstrate how learning from training data arising from well-structured triangle meshes, and applying the trained model to noisy point clouds can produce superior results compared to specialized state-of-the-art techniques. Finally, we demonstrate the utility of our approach in the context of shape reconstruction, by showing how it can be used to extract normal orientation information from point clouds.
\begin{CCSXML} <ccs2012> <concept> <concept_id>10010147.10010371.10010396.10010400</concept_id> <concept_desc>Computing methodologies~Point-based models</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010371.10010396.10010402</concept_id> <concept_desc>Computing methodologies~Shape analysis</concept_desc> <concept_significance>300</concept_significance> </concept> <concept> <concept_id>10010520.10010521.10010542.10010294</concept_id> <concept_desc>Computer systems organization~Neural networks</concept_desc> <concept_significance>300</concept_significance> </concept> </ccs2012> \end{CCSXML}
\ccsdesc[500]{Computing methodologies~Point-based models} \ccsdesc[300]{Computing methodologies~Shape analysis} \ccsdesc[300]{Computer systems organization~Neural networks}
\printccsdesc \end{abstract}
\input{introduction.tex} \input{related_work.tex} \input{overview.tex} \section{Algorithm} \label{sec:algorithm}
Our goal in this work is to reconstruct local surface properties from a point cloud that approximately samples an unknown surface. In real-world settings, these point clouds typically originate from scans or stereo reconstructions and suffer from a wide range of deteriorating conditions, such as noise and varying sampling densities.
Traditional geometric approaches for recovering surface properties usually perform reasonably well with the correct parameter settings, but finding these settings is a data-dependent and often difficult task. The success of deep-learning methods, on the other hand, is in part due to the fact that they can be made robust to a wide range of conditions with a single hyper-parameter setting, seemingly a natural fit to this problem. The current lack of deep learning solutions may be due to the difficulty of applying deep networks to unstructured input like point clouds. Simply inputting points as a list would make the subsequent computations dependent on the input ordering.
A possible solution to this problem was recently proposed under the name of PointNet by Qi et al.~\cite{pointnet17}, who propose combining input points with a symmetric operation to achieve order-independence. However, PointNet is applied globally to the entire point cloud, and while the authors do demonstrate estimation of local surface properties as well, these results suffer from the global nature of the method.
PointNet computes two types of features in a point cloud: a single global feature vector for the entire point cloud and a set of local feature vectors for each point. The local feature vectors are based on the position of a single point only, and do not include any neighborhood information. This reliance on either fully local or fully global feature vectors makes it hard to estimate properties that depend on local neighborhood information.
Instead, we propose computing local feature vectors that describe the local neighborhood around a point. These features are better suited to estimate local surface properties.
In this section, we provide an alternative analysis of the PointNet architecture and show a variation of the method that can be applied to local patches instead of globally on the entire point cloud to get a strong performance increase for local surface property estimation, outperforming the current state-of-the-art.
\subsection{Pre-processing} Given a point cloud $\mathbb{P} = \{p_1, \dots, p_N\}$, a local patch $\mathbb{P}^r_i$ is centered at point $p_i$ and contains all points within distance $r$ from the center. Our targets for this patch are local surface properties such as the normal $n_i$ and the principal curvature values $\kappa_i^1$ and $\kappa_i^2$ at the center point $p_i$. To remove unnecessary degrees of freedom from the input space, we translate the patch to the origin and normalize its radius by multiplying with $1/r$. Since the curvature values depend on scale, we transform output curvatures to the original scale of the point cloud by multiplying with $r$. Our network takes a fixed number of points as input. Patches that have too few points are padded with zeros (the patch center), and we pick a random subset for patches with too many points.
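A minimal NumPy sketch of this pre-processing step is given below (the brute-force neighborhood query and the fixed patch size are placeholders; any spatial index may be substituted).
\begin{verbatim}
import numpy as np

def make_patch(points, i, r, n_patch=500, rng=np.random):
    """Extract the patch around points[i], translate it to the origin,
    scale by 1/r, and pad with zeros (the patch center) or subsample
    so that exactly n_patch points are returned."""
    d = np.linalg.norm(points - points[i], axis=1)
    patch = (points[d <= r] - points[i]) / r
    if len(patch) > n_patch:
        patch = patch[rng.choice(len(patch), n_patch, replace=False)]
    elif len(patch) < n_patch:
        pad = np.zeros((n_patch - len(patch), 3))
        patch = np.vstack([patch, pad])
    return patch
\end{verbatim}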
\begin{figure*}
\caption{\footnotesize Comparison of our approach for oriented normal estimation with the baseline jet-fitting and PCA coupled with MST-based normal orientation propagation in a post-processing step. We show the RMS angle error, the relative error compared to our performance, as well as relative fraction of flipped normals in other methods across different levels of noise. Note that the errors in oriented normal estimation are highly correlated with the number of orientation flips, suggesting the post-processing step as the main source of error.}
\label{fig:comp_oriented}
\end{figure*}
\subsection{Architecture} Our single-scale network follows the PointNet architecture, with two changes: we constrain the first spatial transformer to the domain of rotations and we exchange the $\max$ symmetric operation with a sum. An overview of the architecture is shown in Figure~\ref{fig:arch}. We will now provide intuition for this choice of architecture.
\mypara{Quaternion spatial transformer} The first step of the architecture transforms the input points to a canonical pose. This transformation is optimized for during training, using a spatial transformer network~\cite{jadberg:2015:st}. We constrain this transformation to rotations by outputting a quaternion instead of a $3 \times 3$ matrix. This is done for two reasons: First, unlike in semantic classification, our outputs are geometric properties that are sensitive to transformations of the patch. This caused unstable convergence behavior in early experiments where scaling would take on extreme values. Rotation-only constraints stabilized convergence. Second, we have to apply the inverse of this transformation to the final surface properties and computing the inverse of a rotation is trivial.
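For reference, constraining the transformer to rotations amounts to letting it output four numbers that are normalized and converted to a rotation matrix; a framework-independent NumPy sketch of this conversion is shown below (the network wiring itself is omitted).
\begin{verbatim}
import numpy as np

def quaternion_to_rotation(q):
    """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix;
    normalizing first keeps the output a proper rotation."""
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
\end{verbatim}
The inverse transformation needed to map predicted quantities back to the input frame is then simply the transpose of this matrix.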
\mypara{Point functions and symmetric operation} One important property of the network is that it should be invariant to the input point ordering. Qi et al.~\cite{pointnet17} show that this can be achieved by applying a set of functions $\{h_1, \dots, h_k\}$ with shared parameters to each point \emph{separately} and then combine the resulting values for each point using a symmetric operation: \begin{equation*}
H_l(\mathbb{P}_i^r) = \sum_{p_j \in \mathbb{P}_i^r} h_l(p_j). \end{equation*} $H_l(\mathbb{P}_i^r)$ is then a feature of the patch and $h_l$ are called \emph{point functions}; they are scalar-valued functions, defined in the local coordinate frame of the patch. The functions $H_l$ can intuitively be understood as density estimators for a region given by $h_l$. Their response is stronger the more points are in the non-zero region of $h_l$. Compared to using the maximum as the symmetric operation, as proposed by Qi et al., our sum did not yield a significant performance difference; however, we decided to use the sum to give our point functions a simple conceptual interpretation as density estimators. The point functions $h_l$ are computed as: \begin{equation*}
h_l(p_j) = (\mathrm{FNN}_2 \circ \mathrm{STN}_2)\left(g_1(p_j), \dots, g_{64}(p_j)\right), \end{equation*} where $\mathrm{FNN}_2$ is a three-layer fully-connected network and $\mathrm{STN}_2$ is the second spatial transformer. The functions $g$ can be understood as a less complex set of point functions, since they are at a shallower depth in the network. They are computed analogously to $h$.
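The resulting order-invariance can be checked with a tiny NumPy sketch, in which a random linear map followed by a ReLU stands in for the learned point functions (illustrative only).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 64))          # stand-in for learned point functions

def patch_feature(patch):
    """H_l(P) = sum_j h_l(p_j): the same point function is applied to
    every point and the results are aggregated with a symmetric sum."""
    return np.maximum(patch @ W, 0).sum(axis=0)

patch = rng.normal(size=(100, 3))
shuffled = patch[rng.permutation(len(patch))]
assert np.allclose(patch_feature(patch), patch_feature(shuffled))
\end{verbatim}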
\mypara{Second spatial transformer} The second spatial transformer operates on the feature vector $\mathbf{g}_j = \left(g_1(p_j), \dots, g_{64}(p_j)\right)$, giving a $64 \times 64$ transformation matrix. Some insight can be gained by interpreting the transformation as a fully connected layer with weights that are computed from the feature vectors $\mathbf{g}$ of \emph{all} points in the patch. This introduces global information into the point functions, increasing the performance of the network.
\mypara{Output regression} In a trained model, the patch feature vector $\mathbf{H}_i = \left(H_1(\mathbb{P}_i^r), \dots, H_k(\mathbb{P}_i^r)\right)$ provides a rich description of the patch. Various surface properties can be regressed from this feature vector. We use a three-layer fully connected network to perform this regression.
\subsection{Multi-scale} We will show in the results that the architecture presented above is very robust to changes in noise strength and sample density. For additional robustness, we experimented with a multi-scale version of the architecture. Instead of using a single patch as input, we input three patches $\mathbb{P}_i^{r_1}$, $\mathbb{P}_i^{r_2}$ and $\mathbb{P}_i^{r_3}$ with different radii. Due to the scale normalization of our patches, these are scaled to the same size, but contain differently sized regions of the point cloud. This allows all point functions to focus on the same region. We also triple the number of point functions, but apply each function to all three patches. The sum, however, is computed over each patch separately. This results in a nine-fold increase in patch features $H$, which are then used to regress the output properties. Figure~\ref{fig:arch} illustrates a simple version of this architecture with two patches.
\input{results.tex} \section{Conclusion, Limitations, and Future Work} \label{sec:conclusion}
We presented a unified method for estimating oriented normals and principal curvature values in noisy point clouds. Our approach is based on a modification of a recently proposed PointNet architecture, in which we place special emphasis on extracting local properties of a patch around a given central point. We train the network on point clouds arising from triangle meshes corrupted by various levels of noise and show through extensive evaluation that our approach achieves state-of-the-art results across a wide variety of challenging scenarios. In particular, our method allows replacing the difficult and error-prone manual tuning of parameters present in the majority of existing techniques with data-driven training. Moreover, we show improvement with respect to other recently proposed learning-based methods.
While producing promising results in a variety of challenging scenarios, our method can still fail in some settings, such as in the presence of large flat areas, in which patch-based information is not enough to determine the normal orientation. For example, our oriented normal estimation procedure can produce inconsistent results in the centers of the faces of a large cube. A more in-depth analysis and a better-adapted multi-resolution scheme might be necessary to alleviate such issues.
In the future, we also plan to extend our technique to estimate other differential quantities such as principal curvature directions or even the full first and second fundamental forms, as well as other mid-level features such as the Shape Diameter Function from a noisy incomplete point cloud. Finally, it would also be interesting to study the relation of our approach to graph-based neural networks~\cite{ defferrard2016convolutional,henaff2015deep} on graphs built from local neighborhoods of the point cloud.
\end{document}
\begin{document}
\title{Generating Gottesman-Kitaev-Preskill qubit using cross-Kerr interaction\\between squeezed light and Fock states in optics} \author{Kosuke Fukui} \affiliation{ Department of Applied Physics, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan} \author{Mamoru Endo} \affiliation{ Department of Applied Physics, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan} \author{Warit Asavanant} \affiliation{ Department of Applied Physics, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan} \author{Atsushi Sakaguchi} \affiliation{ Optical Quantum Computing Research Team, RIKEN Center for Quantum Computing, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan} \author{Jun-ichi Yoshikawa} \affiliation{ Department of Applied Physics, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan} \author{Akira Furusawa} \affiliation{ Department of Applied Physics, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan} \affiliation{ Optical Quantum Computing Research Team, RIKEN Center for Quantum Computing, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan} \begin{abstract} The Gottesman-Kitaev-Preskill (GKP) qubit is a promising ingredient for fault-tolerant quantum computation (FTQC) in optical continuous variables due to its advantages of noise tolerance and scalability. However, one of the main problems in the preparation of the optical GKP qubit is the difficulty in obtaining the required nonlinearity. The cross-Kerr interaction is one of the promising candidates for this nonlinearity, but there is no existing scheme that uses it to generate an optical GKP qubit suitable for FTQC. In this work, we propose a generation method of the GKP qubit by using a cross-Kerr interaction between a squeezed light and a superposition of Fock states. We numerically show that the GKP qubit with a squeezing level of 10 dB can be generated with mean fidelities of 99.99 and 99.9~\%, at success probabilities of 2.7 and 4.8~\%, respectively. Therefore, our method has the potential to generate the optical GKP qubit with the quality required for FTQC, once sufficient technologies for the preparation of ancillary Fock states and a cross-Kerr interaction become available. \end{abstract}
\maketitle
\section{Introduction} Quantum information processing with continuous variables (CVs)~\cite{lloyd1999quantum,bartlett2002efficient} has been receiving attention for a few decades. In a CV system, the bosonic code for encoding quantum information in CVs is essential to remove errors during quantum information processing. A variety of bosonic codes have been developed, e.g., the cat code~\cite{chuang1997bosonic,cochrane1999macroscopically,albert2019pair,niset2008experimentally} and the binomial code~\cite{michael2016new}. Among bosonic codes, the Gottesman-Kitaev-Preskill (GKP) qubit~\cite{gottesman2001encoding} is a promising way to encode a qubit in CVs, where a qubit is encoded in the continuous Hilbert space of a harmonic oscillator's position and momentum variables. In particular, the GKP qubit has the advantages of error tolerance and scalability towards large-scale quantum computation (QC) with CVs: (1) The GKP qubit is designed to protect against small displacement noise, and can achieve the hashing bound of the additive Gaussian noise with a suitable quantum error correcting code~\cite{fukui2017analog,fukui2018high}. Furthermore, it protects against other types of noise, including noise from finite squeezing during measurement-based QC~\cite{menicucci2014fault} and photon loss~\cite{albert2018performance}. (2) The GKP qubit inherits the advantage of squeezed vacuum states in optical implementations; they can be entangled by beam-splitter coupling alone~\cite{note1}. This feature allows us to entangle the GKP qubit with a large-scale CV cluster state by using beam-splitter coupling. Recently, a large-scale, two-dimensional CV cluster state composed of squeezed vacuum states has been realized experimentally in an optical setup~\cite{asavanant2019generation,larsen2019deterministic}. In addition, the single- and two-mode gates on a large-scale cluster state have also been demonstrated~\cite{Asavanant2021,larsen2021deterministic}. Given these advantages, the GKP qubit is an indispensable resource for fault-tolerant quantum computation (FTQC) with CVs~\cite{menicucci2014fault,baragiola2019all,pantaleoni2020modular, walshe2020continuous,pantaleoni2021subsystem,grimsmo2021quantum,fukui2018tracking,vuillot2019quantum,noh2020fault,noh2020encoding,noh2021low,fukui2021efficient,seshadreesan2021coherent}. Furthermore, the GKP qubit is a promising element for a variety of quantum information processing tasks, such as long-distance quantum communication~\cite{fukui2021all,rozpkedek2021quantum}.
Experimentally, the GKP qubit has been generated recently in an ion trap~\cite{fluhmann2019encoding} and in superconducting circuit quantum electrodynamics~\cite{campagne2020quantum}, where the squeezing level of the generated GKP qubits is close to 10 dB, the required squeezing level for FTQC with CVs~\cite{fukui2018high,fukui2019high,noh2020fault,yamasaki2020polylog, bourassa2021blueprint,tzitrin2021fault}. On the other hand, a large-scale CV cluster state has so far not been demonstrated in such physical setups. Thus, generation of the GKP qubit may not directly translate into large-scale QC except in an optical setup, since the scalability advantage of the GKP qubit in CVs has not been exploited within the framework of measurement-based QC with CVs~\cite{note4}. In an optical setup, various generation methods of the GKP state are being developed~\cite{pirandola2004constructing, pirandola2006continuous, pirandola2006generating, motes2017encoding,eaton2019non,su2019conversion, arrazola2019machine,tzitrin2020progress,lin2020encoding,hastrup2021generation,fukui2021efficient2}. Unfortunately, however, optical generation of the GKP qubit has remained unsuccessful due to the difficulties in obtaining the required nonlinearity.
The Kerr-type effect---a nonlinear effect where the refractive index of a material changes when an electric field is applied---has been widely studied as one of the methods to obtain a nonlinearity in an optical setup. For classical information processing, self- and cross-Kerr effects have been widely studied~\cite{dudley2009ten,dudley2009modulation,kupchak2019terahertz,endo2021coherent}, where self- and cross-Kerr effects change the refractive index depending on the incoming field itself and on another field, respectively. In this work, we focus on the use of a cross-Kerr effect for the GKP qubit generation. The cross-Kerr effect has been studied extensively for quantum information processing~\cite{jeong2004generation,van2006hybrid,jin2007generating,glancy2008methods, lin2009quantum,feizpour2011amplifying,venkataraman2013phase,schmid2017verifying, yanagimoto2020engineering}, such as quantum computation~\cite{nemoto2004nearly,munro2005weak,spiller2006quantum,jeong2006quantum, shapiro2007continuous}; there is also a proposal to apply the cross-Kerr effect to generate the optical cat state, a typical non-Gaussian state for quantum information processing~\cite{gerry1999generation,jeong2005using,glancy2008methods,he2009scheme}.
A scheme for the GKP qubit generation with the cross-Kerr interaction has been proposed in Ref.~\cite{pirandola2004constructing}. In this scheme, two coherent states interact with each other via the cross-Kerr interaction, and the GKP qubit is generated after the measurement of one of the coherent states. One merit of this scheme is that the effective interaction could be large by using a large amplitude of the coherent state, even if the cross-Kerr interaction per photon is small. On the other hand, it is an open problem whether the state generated by Ref.~\cite{pirandola2004constructing} is appropriate for FTQC with the GKP qubit or not, due to two features: the first is that the generated state has different codewords in the position and momentum quadratures. Since the two-qubit gate on the GKP qubits couples the quadratures with each other, it is more suitable to use the same codewords in the position and momentum quadratures. This feature requires an additional squeezing operation to match the codewords between both quadratures, where the squeezing operation in an optical setup~\cite{yoshikawa2007demonstration, miyata2014experimental} introduces noise derived from finite squeezing. The second is that the probability of misidentifying the bit value of the generated state is $\sim 1\%$. The threshold of the squeezing level for FTQC with the GKP qubit has been known to be around 10 dB at most~\cite{fukui2018high,fukui2019high,yamasaki2020polylog}, which corresponds to a probability of misidentifying the bit value of $\sim 0.01\%$. It is not known whether the probability of misidentifying the bit value of the state in Ref.~\cite{pirandola2004constructing} can be improved, and hence it is currently unclear whether the generated state can be used for FTQC with the GKP qubit~\cite{note6}.
In this paper, we propose a method for generating the GKP qubit via a cross-Kerr interaction between a squeezed light and the ancilla state of a superposition of Fock states, instead of the interaction between two coherent states in the conventional method. However, when replacing the coherent state with a squeezed light, there is another difficulty due to the rotation of the squeezed light in phase space, as described in Sec.~III. Our scheme circumvents this problem by using positive and negative amplitudes for the cross-Kerr interaction, allowing us to apply the squeezed light to the preparation of the GKP qubit via a cross-Kerr interaction. In addition, the squeezing level of the generated GKP qubit is equal to the initial squeezing level of the input state. In the numerical calculation, we show that our method can prepare the GKP qubit with a high squeezing level and a high fidelity. Thus, our method has the potential to enable FTQC with the GKP qubit if we can prepare the input squeezed vacuum state with sufficient squeezing and the ancilla state of a superposition of Fock states.
The rest of the paper is organized as follows. In Sec.~\ref{Sec2}, we review the GKP qubit, the cross-Kerr effect, and the conventional scheme of the GKP qubit generation by using a cross-Kerr interaction. In Sec.~\ref{Sec3}, we describe our scheme that uses a cross-Kerr interaction between a squeezed light and a superposition of Fock states. In Sec.~\ref{Sec4}, we numerically investigate our scheme, showing numerical calculations of the fidelity between the generated state and the target GKP qubit, and the success probability of our scheme. Sec.~\ref{Sec5} is devoted to discussion and conclusion.
\section{Background}\label{Sec2} In this section, we review the GKP qubit, a simple example of applying a cross-Kerr interaction to quantum information processing, and the conventional scheme to generate the GKP qubit by using a cross-Kerr interaction.
\subsection{GKP qubit} In this article, we work in units where $\hbar=1$ and the vacuum variances are $\langle\hat{q}^2\rangle_\text{vac}=\langle\hat{p}^2\rangle_\text{vac}=1/2$ for the position quadrature $\hat{q}$ and the momentum quadrature $\hat{p}$. The logical 0 and 1 states of the GKP qubit $\ket {\widetilde{0}}$ and $\ket {\widetilde{1}}$ are composed of a series of Gaussian peaks of width $\Delta$ contained in a larger Gaussian envelope of width 1/$\kappa$, and the peaks of the logical 0 (1) state are separated from each other by $2\sqrt{\pi}$. In the position basis, the logical states $\ket {\widetilde{0}}$ and $\ket {\widetilde{1}}$ are given by \begin{eqnarray} \ket {\widetilde{0}} &\propto & \sum_{t=- \infty}^{\infty} \int e^{-\frac{(2t\sqrt{\pi})^2}{2(1/\kappa^2)}}{e}^{-\frac{(q-2t\sqrt{\pi})^2}{2\Delta^2}}\ket{q} dq, \\ \ket {\widetilde{1}} &\propto & \sum_{t=- \infty}^{\infty} \int e^{-\frac{[(2t+1)\sqrt{\pi}]^2}{2(1/\kappa^2)}} {e}^{-\frac{(q-(2t+1)\sqrt{\pi})^2}{2\Delta^2}}\ket{q} dq . \label{gkp} \end{eqnarray}
Although these states become the perfect GKP qubits with Dirac-comb wavefunctions in the case of infinite squeezing ($\Delta \rightarrow 0$, $\kappa \rightarrow 0$)~\cite{gottesman2001encoding}, they are not orthogonal in the finite-squeezing regime, and there is a non-zero probability of misidentifying $\ket {\widetilde{0}}$ as $\ket {\widetilde{1}}$, and vice versa. We choose $\kappa$ and $\Delta$ so that the variance of each peak in the position and momentum observables is equal to $\sigma^{2}$, i.e., $\Delta^{2} = \kappa^{2} = 2\sigma^{2}$. The squeezing level $s$ is given by $s=-10{\rm log}_{10}2\sigma^2.$
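As a simple numerical illustration of these finite-squeezing codewords (a sketch only, independent of the generation scheme proposed below; the grid and the truncation of the sum are illustrative), the wavefunctions can be evaluated on a position grid as follows.
\begin{verbatim}
import numpy as np

def gkp_codeword(q, logical=0, squeezing_db=10.0, t_max=20):
    """Finite-squeezing GKP logical-0 or logical-1 wavefunction on a
    position grid, with Delta^2 = kappa^2 = 2 sigma^2 and
    s = -10 log10(2 sigma^2)."""
    two_sigma2 = 10 ** (-squeezing_db / 10.0)
    delta2 = kappa2 = two_sigma2
    psi = np.zeros_like(q)
    for t in range(-t_max, t_max + 1):
        mu = (2 * t + logical) * np.sqrt(np.pi)      # peak positions
        psi += np.exp(-mu**2 * kappa2 / 2) * np.exp(-(q - mu)**2 / (2 * delta2))
    return psi / np.sqrt(np.trapz(psi**2, q))        # normalize

q = np.linspace(-10, 10, 4001)
overlap = np.trapz(gkp_codeword(q, 0) * gkp_codeword(q, 1), q)
# the overlap is nonzero for any finite squeezing level
\end{verbatim}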
\subsection{Cross-Kerr interaction for the controlled-NOT gate} As an example of using a cross-Kerr interaction for quantum information processing, we describe the controlled-NOT gate between two photons~\cite{nemoto2004nearly,munro2005weak}. We consider a cross-Kerr interaction between two modes a and b. The Hamiltonian for a cross-Kerr interaction, $\hat{H}_{\rm CK}$, is described by \begin{equation} \hat{H}_{\rm CK}=\hbar \chi \hat{a}^{\dag}\hat{a}\hat{b}^{\dag}\hat{b}, \end{equation} where $\chi$ is the strength of the cross-Kerr nonlinearity, and $\hat{a}(\hat{b})$ and $\hat{a}^\dag(\hat{b}^\dag)$ are the annihilation and creation operators for mode a (b), respectively. The annihilation operators are given by $\hat{a}(\hat{b}) = (\hat{q}_{\rm a(b)}+i\hat{p}_{\rm a(b)})/\sqrt{2}$. We then consider the interaction between a superposition of Fock states, $\ket{\phi}_{\rm a}=(\ket{0}_{\rm a}+\ket{1}_{\rm a})/\sqrt{2}$, and a coherent state, $\ket{\alpha}_{\rm b}$, where $\alpha$, $\ket{0}$, and $\ket{1}$ are the amplitude of the coherent state, a vacuum state, and a single photon, respectively. After the interaction, the two states are entangled as \begin{eqnarray} {e}^{-i\frac{\hat{H}_{\rm CK}}{\hbar}t}\ket{\phi}_{\rm a}\ket{\alpha}_{\rm b}&=&{e}^{-i\frac{\hat{H}_{\rm CK}}{\hbar}t}(\ket{0}_{\rm a}+\ket{1}_{\rm a})\ket{\alpha}_{\rm b} /\sqrt{2}\nonumber \\ &=& (\ket{0}_{\rm a} \ket{\alpha}_{\rm b}+\ket{1}_{\rm a} \ket{\alpha {e}^{i\theta}}_{\rm b}) /\sqrt{2}, \end{eqnarray} where $\theta=\chi t$ with the interaction time $t$. In addition to the above interaction between modes a and b, the coherent state in mode b and an additional photonic qubit in mode c, $\ket{\phi}_{\rm c}=(\ket{0}_{\rm c}+\ket{1}_{\rm c})/\sqrt{2}$, interact with each other by the cross-Kerr interaction with the strength of the nonlinearity $-\chi$. After the interaction, the states are described as \begin{eqnarray} e^{-i\frac{\hat{H}_{\rm CK}}{\hbar}t}&\ket{\phi}_{\rm c}& (\ket{0}_{\rm a} \ket{\alpha}_{\rm b}+\ket{1}_{\rm a} \ket{\alpha {e}^{i\theta}}_{\rm b})/{\sqrt{2}}\nonumber \\ = \{(&\ket{0}_{\rm a}&\ket{0}_{\rm c}+ \ket{1}_{\rm a}\ket{1}_{\rm c} )\ket{\alpha}_{\rm b}\nonumber \\ +&\ket{1}_{\rm a}&\ket{0}_{\rm c} \ket{\alpha {e}^{i\theta}}_{\rm b}+\ket{0}_{\rm a}\ket{1}_{\rm c} \ket{\alpha {e}^{-i\theta}}_{\rm b}\}/2. \end{eqnarray} To disentangle the coherent state from the photons, a homodyne measurement is performed on the coherent state. If the measurement outcome $x$ is larger than $\alpha(1+\cos\theta)/2$, i.e., one identifies the coherent state as $\ket{\alpha}_{\rm b}$, the state is approximated by $(\ket{0}_{\rm a}\ket{0}_{\rm c} +\ket{1}_{\rm a}\ket{1}_{\rm c} )/\sqrt{2}.$ On the other hand, if $x$ is smaller than $\alpha(1+\cos\theta)/2$, the state is approximated by $({e}^{i \beta} \ket{0}_{\rm a}\ket{0}_{\rm c} +{e}^{-i \beta}\ket{1}_{\rm a}\ket{1}_{\rm c} )/\sqrt{2},$ since the two states $\ket{\alpha {e}^{\pm i\theta}}$ are not distinguished, where $\beta=2\alpha \sin \theta\, (x-\alpha \cos\theta)$. Since the phase shift $\beta$ can be eliminated via a classical feed-forward operation~\cite{nemoto2004nearly,munro2005weak}, the states in modes a and c are entangled as \begin{equation} (\ket{0}_{\rm a}\ket{0}_{\rm c} +\ket{1}_{\rm a}\ket{1}_{\rm c} )/\sqrt{2}. \end{equation} The probability of misidentifying $\ket{\alpha}$ as $\ket{ \alpha {e}^{\pm i\theta}}$ is calculated to be $\sim 10^{-4}$ when $\alpha \theta \sim \pi$ and $\alpha\sim 3\times 10^{5}$.
In Refs.~\cite{nemoto2004nearly,munro2005weak}, a weak nonlinearity is assumed to be $\theta \sim 10^{-5}$, which is achievable in the experiments~\cite{feizpour2015observation}.
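The key property exploited above is that the cross-Kerr unitary imprints a photon-number-dependent phase-space rotation on the coherent mode, $e^{-i\theta\hat{n}}\ket{\alpha}=\ket{\alpha e^{-i\theta}}$ (the sign of the acquired phase depends on the sign convention of the interaction). The following minimal numpy sketch, our own illustration with an arbitrarily chosen truncation and a rotation angle far larger than the weak nonlinearity quoted above, checks this relation in a truncated Fock basis.
\begin{verbatim}
import numpy as np
from math import factorial

def coherent(alpha, dim):
    # Fock-basis coefficients of |alpha>, truncated to `dim` levels
    n = np.arange(dim)
    return np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt([factorial(k) for k in n])

dim, alpha, theta = 40, 2.0, 0.3
psi = coherent(alpha, dim)
rotated = np.exp(-1j * theta * np.arange(dim)) * psi     # e^{-i theta n} |alpha>
target = coherent(alpha * np.exp(-1j * theta), dim)      # |alpha e^{-i theta}>
print(abs(np.vdot(target, rotated))**2)                  # ~1 up to truncation error
\end{verbatim}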
\subsection{Conventional method for the GKP qubit generation} In Ref.~\cite{pirandola2004constructing}, the cross-Kerr interaction between two optical modes is used to generate the GKP qubit, where the states in modes a and b are both prepared in coherent states, $\ket{\alpha}$ and $\ket{\beta}$ $( \alpha,~\beta \in \mathbb{R})$ with $\beta \gg \alpha$, respectively. Figure \ref{fig1} shows the schematic diagram of this conventional method, whose output state, $\ket {\widetilde{1}}_{\rm con}$, targets the logical 1 state of the GKP qubit, $\ket {\widetilde{1}}$. The modes a and b interact with each other through a nonlinear medium, and mode a is measured by using the homodyne detector in the $q$ quadrature. The two states after the interaction become \begin{equation} e^{-i\frac{\hat{H}_{\rm CK}}{\hbar}t}\ket{\alpha}_{\rm a}\ket{\beta}_{\rm b}={e}^{-\alpha^2/2}\sum_{n=0}^{\infty}\frac{\alpha^n}{\sqrt{n!}}\ket{n}_{\rm a}\ket{\beta {e}^{-in\theta}}_{\rm b}, \label{conv} \end{equation} where $\ket{n}_{\rm a}$ represents the Fock state with photon number $n$ in mode a. Assuming that the cross-Kerr phase shift per photon, $\theta=\chi t$, is small, $\ket{\beta {e}^{-in\theta}}_{\rm b}$ is approximated as $\ket{\beta -i n\beta\theta}_{\rm b}$ for $\chi t \alpha^2\ll 1$. To focus on the $n$-dependent effect, we drop the common displacement $\beta$ and redefine $\ket{\beta -i n\beta\theta}_{\rm b}$ as $\ket{-i n\beta\theta}_{\rm b}$. Then the right hand side of Eq.~(\ref{conv}) can be rewritten as \begin{equation} {e}^{-\alpha^2/2}\sum_{n=0}^{\infty}\frac{\alpha^n}{\sqrt{n!}}\ket{n}_{\rm a}\ket{ -in\beta\theta}_{\rm b}, \label{conv2} \end{equation} which means that a large $\beta$ provides a large displacement in the quadrature even if $\theta$ is small.
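The linearization used above is a standard first-order expansion, noted here for completeness:
\[
\beta {e}^{-in\theta} = \beta\cos(n\theta) - i\,\beta\sin(n\theta) \approx \beta - i n\beta\theta
\]
for $n\theta \ll 1$; since the photon numbers contributing appreciably to Eq.~(\ref{conv}) satisfy $n \lesssim \alpha^2$, the condition $\chi t \alpha^2 \ll 1$ guarantees $n\theta \ll 1$ for those terms.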
After the homodyne measurement of mode a, the generated state is described by the position and momentum wave functions, $\phi_q(q; \tau, \alpha, x)$ and $\phi_p(p; \tau, \alpha, x)$, as \begin{eqnarray} \phi_q(q; \tau, \alpha, x)\propto \sum_{n=0}^{\infty}\eta_n(\alpha, x) {\rm exp}(-q^2/2+i\pi n \tau q), \label{proq}\\ \phi_p(p; \tau, \alpha, x)\propto \sum_{n=0}^{\infty}\eta_n(\alpha, x) {\rm exp}[-(p-\pi n \tau)^2/2]\label{prop}, \end{eqnarray} where $x$ is the outcome of the homodyne measurement, $\tau$ is the effective interaction parameter $\tau=-\sqrt{2}\beta \chi t/\pi$, and $\eta_n(\alpha, x)$ is defined as $\rho_n(\alpha,x){e}^{(\alpha^2+x^2)/2}$ with $\rho_n(\alpha,x)=\alpha^n H_n(x)/(2^{n/2}n!)$ and the Hermite polynomials $H_n(x)$. In Ref.~\cite{pirandola2004constructing}, $\tau=2$ is assumed, and thus $\beta$ needs to be $\sim 4\times 10^{5}$ for $\chi t = \theta \sim10^{-5}$.
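As a rough numerical illustration of Eq.~(\ref{prop}) (our own sketch; the truncation of the photon-number sum, the chosen parameter values, and the neglected normalization are all arbitrary), the momentum profile of the conditionally generated state can be evaluated as follows.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import eval_hermite

def phi_p(p, tau=2.0, alpha=2.0, x=0.0, n_max=30):
    # unnormalized momentum wavefunction of Eq. (prop), truncated at n_max photons
    psi = np.zeros_like(p)
    for n in range(n_max + 1):
        eta = alpha**n * eval_hermite(n, x) / (2**(n / 2) * factorial(n))
        psi += eta * np.exp(-(p - np.pi * n * tau)**2 / 2)
    return psi

p = np.linspace(-5.0, 60.0, 4001)
profile = phi_p(p)
# for x = 0 only even n contribute, so the Gaussian peaks sit at p = pi*n*tau with
# spacing 2*pi*tau (= 4*pi for tau = 2), illustrating the codeword mismatch discussed below
\end{verbatim}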
\begin{figure}\label{fig1}
\end{figure}
There are two features of the state generated by this conventional method that may make it inappropriate for implementing FTQC with the GKP qubit: The first one is the mismatch of the codewords between the $q$ and $p$ quadratures. The second one is the probability of misidentifying the bit value for FTQC with the GKP qubits. For the first problem, as we see from Eqs.~(\ref{proq}) and (\ref{prop}), the intervals between codewords for the conventional method are different from those for the GKP qubit, $\sqrt{\pi}$. For example, the interval between codewords in the $q$ and $p$ quadratures for $\tau=\alpha=2$ becomes $1/4$ and $4{\pi}$, respectively~\cite{pirandola2004constructing}. In general, this mismatch of the codewords makes it difficult to implement the two-qubit gate, e.g., the controlled-NOT gate based on beam-splitter coupling, where the $q$ and $p$ quadratures of the two modes interact with each other. Thus, to implement the two-qubit gate, an additional squeezing operation to match the codewords will be required, and the squeezing operation in an optical setup introduces noise derived from finite squeezing~\cite{yoshikawa2007demonstration, miyata2014experimental}. The second problem is the probability of misidentifying the bit value. The probability of misidentifying the bit value of the state generated by Ref.~\cite{pirandola2004constructing} is at least 1\%, which corresponds to that of the GKP qubit with a squeezing level of~$\sim$6.2 dB. On the other hand, the threshold of the squeezing level is at least around 10 dB~\cite{fukui2018high,fukui2019high}, which corresponds to a misidentification probability of~$\sim$0.01\%. Since the error probability for the state generated by Ref.~\cite{pirandola2004constructing} is larger than the threshold value for FTQC with the GKP qubits, the state of Ref.~\cite{pirandola2004constructing} may not be sufficient to implement FTQC. For the above two reasons, it has been unclear whether the state generated by Ref.~\cite{pirandola2004constructing} could be used for FTQC with the GKP qubit.
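For reference, the two error probabilities quoted above are consistent with the standard Gaussian-tail estimate for a GKP peak of variance $\sigma^{2}$, namely the probability that the peak deviates by more than half the codeword spacing $\sqrt{\pi}$. The sketch below is our own estimate and not necessarily the exact expression used in Refs.~\cite{pirandola2004constructing,fukui2018high}.
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def p_error(s_db):
    # peak standard deviation from the squeezing level: s = -10 log10(2 sigma^2)
    sigma = np.sqrt(10 ** (-s_db / 10) / 2)
    # probability that a Gaussian peak deviates by more than sqrt(pi)/2
    return erfc(np.sqrt(np.pi) / (2 * np.sqrt(2) * sigma))

for s in (6.2, 10.0):
    print(s, p_error(s))   # ~1e-2 at 6.2 dB, ~1e-4 at 10 dB
\end{verbatim}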
\begin{figure*}\label{fig2}
\end{figure*}
\section{Proposed method}\label{Sec3} In this work, we overcome the two problems of the conventional GKP qubit generation: the mismatch of the codeword intervals between the two quadratures and the error probability of the generated GKP qubit. To solve these problems, we use a squeezed light and, as an ancilla for encoding the GKP code, a superposition of Fock states with appropriate coefficient weights. It may seem natural to simply replace the coherent light in the conventional method with a squeezed light. However, this is difficult because the components of the squeezed light acquire different rotation phases in phase space. The key to evading this problem is to use cross-Kerr interactions of opposite signs, as we will see later.
We here consider the generation of the GKP qubit with a finite number of peaks; the target GKP qubits, $\ket {\widetilde{0}}_m$ and $\ket {\widetilde{1}}_m$, are described as \begin{eqnarray} \ket {\widetilde{0}}_m &\propto& \sum_{t=-m}^{m} \int e^{-\frac{(2t\sqrt{\pi})^2}{2(1/\kappa^2)}}{e}^{-\frac{(s-2t\sqrt{\pi})^2}{2\Delta^2}}\ket{s}_q ds, \label{gkpfinite0} \\ \ket {\widetilde{1}}_m &\propto & \sum_{t=- m}^{m} \int e^{-\frac{[(2t+1)\sqrt{\pi}]^2}{2(1/\kappa^2)}} {e}^{-\frac{(s-(2t+1)\sqrt{\pi})^2}{2\Delta^2}}\ket{s}_q ds, \label{gkpfinite1} \end{eqnarray} where $2m+1$ is the number of peaks, assuming $m > 0$ in this work~\cite{note5}.
In the following, we target the preparation of $\ket {\widetilde{0}}_m$. Fig.~\ref{fig2} shows a schematic description of the proposed scheme, consisting of a squeezed light, a superposition of Fock states, two Kerr media, and linear optics. Our method consists of five steps, corresponding to Fig.~\ref{fig2}(a)(i)-(v), respectively. In Fig.~\ref{fig2}(b), we do not use the Wigner representation to describe our method, since we aim to give an intuitive picture of how the position of each squeezed light is translated. In the first step, the squeezed light and a superposition of Fock states are prepared, as described in Fig.~\ref{fig2}(a)(i). The superposition of Fock states, referred to as the ancilla Fock state, is defined by \begin{equation} \ket{\phi}=N\sum_{t=0}^{2m}c_{t}\ket{2t}, \label{ancilla} \end{equation} where $N$ is a normalization factor. To obtain the envelope distribution of the GKP qubit as described in Eqs.~(\ref{gkpfinite0}) and (\ref{gkpfinite1}), we set the coefficients $c_{t}$ in Eq.~(\ref{ancilla}) to \begin{equation} c_{t}={e}^{-2\pi\kappa^2 (t-m)^2} \frac{\sqrt{2^{2t} (2 t)!}}{H_t(0)} , \label{coe} \end{equation} where ${H_t(x)}$ is the Hermite polynomial; the factor ${H_t(0)}$ is included so that the envelope of the output state becomes that of the approximated GKP qubit when the homodyne measurement outcome is $x=0$~\cite{note2}.
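As an illustration of how these coefficients reproduce the Gaussian envelope, the sketch below gives our own reading of Eq.~(\ref{coe}): the Hermite factor attached to $\ket{2t}$ is interpreted as $H_{2t}(0)$, i.e., the value appearing in the position wavefunction $\braket{x=0|2t}\propto H_{2t}(0)/\sqrt{2^{2t}(2t)!}$, so that $c_{t}\braket{x=0|2t}$ is proportional to the envelope weight ${e}^{-2\pi\kappa^{2}(t-m)^{2}}$. This indexing is our assumption and may differ from the authors' shorthand.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import eval_hermite

def ancilla_coefficients(m, kappa):
    # our reading of Eq. (coe): the Hermite factor tied to |2t> is H_{2t}(0)
    c = np.array([np.exp(-2 * np.pi * kappa**2 * (t - m)**2)
                  * np.sqrt(2**(2 * t) * factorial(2 * t)) / eval_hermite(2 * t, 0.0)
                  for t in range(2 * m + 1)])
    return c / np.linalg.norm(c)        # absorbs the normalization factor N

kappa = np.sqrt(10**(-10 / 10))         # Delta = kappa = sqrt(2 sigma^2) at 10 dB
print(ancilla_coefficients(2, kappa))
\end{verbatim}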
In the second step, the squeezed light and the ancilla Fock state interact with each other through a Kerr medium with nonlinearity strength $\chi$, as described in Fig.~\ref{fig2}(b)(ii) for the specific case $m = 2$. In this step, the squeezed light and the ancilla Fock state become entangled, where the phase of the squeezed light in phase space is rotated by the angle $\theta_t=t\theta$ associated with the Fock basis state $\ket{2t}$ in Eq.~(\ref{ancilla}). After the interaction, the squeezed light and the ancilla Fock state evolve (up to normalization) as \begin{equation} {e}^{-i\frac{\hat{H}_{\rm CK}}{\hbar}t}\ket{\psi(\theta_0)}\ket{\phi}
\propto \sum_{t=0}^{2m}c_{t}\ket{\psi(\theta_t)}\ket{2t}, \end{equation}
where $\ket{\psi(\theta_t)}$ denotes the squeezed light with a phase parameter $\theta_t=t\theta$. We consider an ancilla Fock state composed of Fock states with even photon numbers. This is because the probability amplitude of an odd-photon-number Fock state at the position $x=0$, i.e., $\braket{x=0|2t+1}$, is zero, so odd-photon-number components would not contribute when the measurement outcome $x=0$ is postselected. We mention that the amount of the phase rotation becomes $n$ times larger when we replace $\ket{2t}$ with $\ket{2nt}$ as the Fock basis in Eq.~(\ref{ancilla}).
In step 3, the displacement operation $D_{1}$ displaces the squeezed light by $\beta+i\gamma$ in phase space $( \gamma \in \mathbb{R})$, as described in Fig.~\ref{fig2}(b)(iii) for the case $m$~=~2. In this step, we set $\gamma=2m\sqrt{\pi}$ so that the distance between adjacent squeezed lights in the $q$ quadrature becomes $2\sqrt{\pi}$, which corresponds to the codeword spacing for $\ket{\widetilde{0}}$, as shown in Fig.~\ref{fig2}(c) for the case $m = 2$. We note that $\gamma$ is set to $\gamma=m\sqrt{\pi}$ for the generation of $\ket{\widetilde{+}}=(\ket{\widetilde{0}}+\ket{\widetilde{1}})/\sqrt{2}$. The displacement error in the $p$ quadrature, $\delta$, is defined by \begin{equation} \delta=\beta(1-{\cos}\theta)=\beta\left\{1-\sqrt{1-({\gamma}/{m\beta})^2}\right\}, \label{delta} \end{equation} which causes a phase error on the generated GKP qubit for $m>1$. Thus, $\beta$ should be sufficiently large compared to $\gamma$ to reduce the phase error. Later we will see that the displacement error $\delta$ can be ignored in our scheme.
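For a rough sense of scale (our own expansion, using the relation $\sin\theta=\gamma/(m\beta)$ implied by Eq.~(\ref{delta})), the error for $\gamma \ll m\beta$ behaves as
\[
\delta = \beta\left\{1-\sqrt{1-\left(\frac{\gamma}{m\beta}\right)^{2}}\right\} \approx \frac{\gamma^{2}}{2m^{2}\beta},
\]
so that $\delta$ decreases inversely with $\beta$; this scaling is used in Sec.~\ref{Sec4} to estimate the required $\beta$.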
In step 4, the squeezed light and the ancilla Fock state interact with each other through the second Kerr medium with nonlinearity strength $-\chi$, as shown in Fig.~\ref{fig2}(b)(iv). In this step, the rotation $t\theta$ acquired by the $t$-th Fock-state component $\ket{2t}$ in the second step is cancelled by the opposite rotation, $-t\theta$, obtained with the inverse interaction strength, $-\chi$. Then, all phases of the superposed squeezed lights become $\theta_t=0$ after this second interaction with the ancilla Fock state. We note that the positions of the squeezed lights in the $q$ quadrature then correspond to those of the target GKP qubit, up to a displacement in the $p$ quadrature.
In step 5, the ancilla Fock state is measured by the homodyne detector in the $q$ quadrature, and the squeezed light is disentangled from the ancilla Fock state. The output state after the measurement of the ancilla Fock state depends on the measurement outcome $x$.
Although the output state is closest to the target state $\ket {\widetilde{0}}_m $ in Eq.~(\ref{gkpfinite0}) when $x=0$, the probability of obtaining exactly $x=0$ is vanishingly small. Thus, we implement a conditional measurement using an upper bound on the measurement outcome, $v_{\rm up}$, where the generation succeeds when $|x|\leq v_{\rm up}$ and $v_{\rm up}$ is chosen so that the fidelity between the generated state and the target GKP qubit remains sufficient for FTQC.
In this step, when the homodyne measurement succeeds, we implement the displacement operation, $D_{2}$, on the squeezed light by $-\beta- m\delta/2$ in the $p$ quadrature, as shown in Fig.~\ref{fig2}(b)(v) for $m=2$. Otherwise (when $|x|> v_{\rm up}$), the GKP qubit generation fails. For a measurement outcome $x$, the generated state can be described by \begin{equation} \ket{\widetilde{0}}_{m,x,\delta} \propto \sum_{t=-m}^{m} \int c_{t+m} H_{t+m}(x) {e}^{-\frac{(s-2t\sqrt{\pi})^2}{2\Delta^2}} {e}^{-i\delta_t^m s}\ket{s}_q ds, \label{gkpfinite3} \end{equation}
where the phase errors are $\delta_t^m= (m/2-|t|)\delta$. The phase errors for the specific case $m=2$ are $(\delta_{-2}^2,\delta_{-1}^2,\delta_{0}^2,\delta_{1}^2,\delta_{2}^2)=(-\delta,0,\delta,0,-\delta)$, as shown in Fig.~\ref{fig2}(d). We note that the coefficients of the generated state correspond to the envelope distribution of the GKP qubit as described in Eq.~(\ref{coe}) with $x=0$, and the generated state coincides with $\ket {\widetilde{0}}_m $ in Eq.~(\ref{gkpfinite0}) for $x=0$ and $\delta=0$. Thus, the proposed method generates the GKP qubit well when appropriate parameters $m$, $\delta$, and $v_{\rm up}$ are selected.
\begin{figure}
\caption{Fidelity of the generated GKP qubit as a function of the measurement outcome $x$ for several squeezing levels, 7, 8, 9, 10, and 11 dB, where $x$ is obtained by the homodyne measurement of a superposition of Fock states in the $q$ quadrature. The displacement error in the $p$ quadrature, $\delta$, is assumed to be zero.}
\label{fig3}
\end{figure}
\section{Numerical calculations}\label{Sec4} We consider the generation of the GKP qubit with the squeezing levels 7, 8, 9, 10, and 11 dB, where around 10 dB is sufficient to implement FTQC~\cite{fukui2018high,fukui2019high}. In Fig.~\ref{fig3}, the fidelity of the generated GKP qubit with respect to the target state is plotted as a function of the homodyne measurement outcome $x$ in the $q$ quadrature, assuming $m=2$ and $\delta=0$. The fidelity, $F(x)$, is obtained by \begin{equation}
F(x)=|\braket{\widetilde{0}_{m}|\widetilde{0}_{m,x,\delta}}|^2. \end{equation}
Numerical results show that our method with $m=2$ can generate the approximated GKP qubit with high fidelity, e.g., $F(x)\geq 0.99$ for $|x|\leq 0.2$. In addition, our method tolerates a wide range of homodyne measurement outcomes, since the fidelity is stable for $|x|\leq 0.1$. This tolerance works to the advantage of the success probability of our scheme.
We note that the fidelity decreases when the squeezing level becomes larger than 10 dB. This is because the number of peaks for $m=2$ is not enough to approximate the envelope of the GKP qubit $\ket{\widetilde{0}}$, where the weight of the peaks at amplitude $\pm 6\sqrt{\pi}$ cannot be ignored at such a large squeezing level. To improve the fidelity for the generation of the GKP qubit with a squeezing level larger than 10 dB, we may set the parameter for the number of peaks to $m=3$, i.e., the number of peaks of the generated state is 6. This is because the GKP qubit with a higher squeezing level has a larger number of peaks. For $m=3$, the number of coefficients of the ancilla Fock state is set to 6 in Eq.~(\ref{ancilla}). We found that the fidelities of the generated state with 10, 11, and 12 dB can be increased to 99.998, 99.986, and 99.938~\%, respectively, by using the ancilla state with $m=3$. We note that the interaction strength $\chi$ does not change with $m$.
We next examine the effect of the displacement error in the $p$ quadrature, $\delta$. In Fig.~\ref{fig4}, the fidelity of the generated GKP qubit with the homodyne measurement result $x=0$ is plotted as a function of the error $\delta$. Numerical results show that the effect of $\delta$ can be ignored in the range $\delta \leq 0.02$, where the fidelities decrease by only $\sim0.01$. In particular, the condition on $\beta$ is obtained from $\beta(1-\cos\theta) \leq 0.02$ using Eq.~(\ref{delta}). In the case of $\gamma=4\sqrt{\pi}$, $\beta$ needs to be larger than $\sim314$ for $\delta \leq 0.02$. This requirement on $\beta$ is much less severe than the $\sim 4\times 10^{5}$ required to obtain a sufficient amount of cross-Kerr interaction in the conventional method described in Sec.~\ref{Sec2}. Thus, we ignore the displacement error $\delta$ in our scheme.
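As a quick consistency check (our own sketch, using the small-angle estimate $\delta\approx\gamma^{2}/(2m^{2}\beta)$ noted after Eq.~(\ref{delta})), the minimum $\beta$ for $\delta\leq 0.02$ with $\gamma=4\sqrt{\pi}$ and $m=2$ can be evaluated as follows.
\begin{verbatim}
import numpy as np

gamma, m, delta_max = 4 * np.sqrt(np.pi), 2, 0.02

# small-angle estimate: delta ~ gamma^2 / (2 m^2 beta)  =>  beta >= gamma^2 / (2 m^2 delta_max)
beta_min = gamma**2 / (2 * m**2 * delta_max)
print(beta_min)          # ~314

# exact expression from Eq. (delta) evaluated at beta_min, as a sanity check
delta_exact = beta_min * (1 - np.sqrt(1 - (gamma / (m * beta_min))**2))
print(delta_exact)       # ~0.02
\end{verbatim}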
\begin{figure}
\caption{Fidelity of the generated GKP qubit with the homodyne measurement result $x=0$ as a function of the displacement error in the $p$ quadrature, $\delta$, for several squeezing levels, 7, 8, 9, 10, and 11 dB.}
\label{fig4}
\end{figure}
We finally calculate a mean fidelity as a function of the success probability. Although the fidelity of the generated state reaches its maximum at $x=0$, as shown in Fig.~\ref{fig3}, the success probability of obtaining exactly $x=0$ is almost zero.
Thus, we need to determine an adequate success probability that still achieves the required fidelity. Here we introduce an upper bound $v_{\rm up}$ so that the GKP qubit generation succeeds when $|x|\leq v_{\rm up}$. Then, the success probability with $v_{\rm up}$, $P_{\rm suc}(v_{\rm up})$, is calculated by \begin{equation} P_{\rm suc}(v_{\rm up})=\frac{1}{P_{\rm all}}\int_{-v_{\rm up}}^{v_{\rm up}}p(x)dx , \end{equation} where $p(x)$ and $P_{\rm all}$ are defined as \begin{eqnarray}
&p&(x)=\sum_{t=0}^{2m+1}|c_{2t} H_{2t}(x)|^2,\\ &P&_{\rm all}=\int_{-\infty}^{\infty}p(x) dx, \end{eqnarray} respectively. In the calculation of the success probability of our scheme, we assume that the ancilla Fock state is prepared deterministically, since we focus on the potential of the cross-Kerr interaction. We then define the mean fidelity, $\overline{F}(v_{\rm up})$, in order to characterize the generated GKP qubit. $\overline{F}(v_{\rm up})$ is calculated by \begin{equation} \overline{F}(v_{\rm up})= \int_{-v_{\rm up}}^{v_{\rm up}}F'(x)dx, \end{equation} where $F'(x)$ is the differential of $F(x)$.
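Numerically, once the outcome density $p(x)$ has been sampled on a grid (by whichever convention is adopted for the ancilla wavefunction), the acceptance probability $P_{\rm suc}(v_{\rm up})$ can be evaluated directly. A minimal sketch of our own, with the grid and density left as inputs:
\begin{verbatim}
import numpy as np

def success_probability(x, p, v_up):
    # x: grid of homodyne outcomes, p: sampled (unnormalized) outcome density p(x)
    inside = np.abs(x) <= v_up
    return np.trapz(p[inside], x[inside]) / np.trapz(p, x)

# e.g. success_probability(x_grid, p_values, 0.15) for the acceptance window used below
\end{verbatim}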
In Fig.~\ref{fig5}, the mean fidelities between the generated state and the GKP qubit are plotted as a function of the success probability for the squeezing levels 7, 8, 9, 10, and 11 dB~\cite{note3}. In the numerical results for the ancilla Fock state with $m=2$, the mean fidelities are stable for success probabilities $P_{\rm suc}(v_{\rm up})<0.06$ with $v_{\rm up}\sim0.15$. Numerical results show that our method with $m=2$ can generate the GKP qubit with a feasible success probability, where the generated GKP qubit reaches a fidelity larger than 99.9\%.
To perform FTQC with the GKP qubit, the fidelity for the GKP qubit with a squeezing level larger than 10 dB should be close to $\sim$1. To improve the fidelity, we consider a larger number of peaks by setting $m=3$. In Fig.~\ref{fig6}, we calculate the mean fidelities by using the ancilla Fock state with $m=3$. Mean fidelities of more than 99.9\% can be achieved for a success probability $P_{\rm suc}(v_{\rm up})=0.05$ with $v_{\rm up}\sim0.16$. The fidelities for $P_{\rm suc}(v_{\rm up} \to 0) \sim 0$ correspond to 99.998, 99.986, and 99.938~\% for the target GKP qubit with 10, 11, and 12 dB, respectively.
\begin{figure}
\caption{Success probability and mean fidelity for the generated GKP states with the ancilla Fock state with $m$=2 for several squeezing levels, 7, 8, 9, 10, and 11 dB, assuming that the ancilla Fock state is prepared deterministically.}
\label{fig5}
\end{figure}
\begin{figure}
\caption{Success probability and mean fidelity for the generated GKP states with the ancilla Fock state with $m$=3 for several squeezing levels, 10, 11, and 12 dB. The fidelities for $P_{\rm suc}(v_{\rm up} \to 0) \sim 0$ correspond to 99.998, 99.986, and 99.938~\% for the target GKP qubit with 10, 11, and 12 dB, respectively. }
\label{fig6}
\end{figure}
\section{Discussion and conclusion}\label{Sec5} In this paper, we have developed a method to generate the GKP qubit using the cross-Kerr effect between a squeezed light and a superposition of Fock states. Our scheme overcomes the problems of the conventional method for FTQC with the GKP qubits. We numerically show that our method has the potential to generate the GKP qubit with a high squeezing level and a high fidelity. Thus, our method could provide a potential route, via the cross-Kerr interaction, to realize FTQC with the GKP qubit.
We mention the experimental realization of the proposed method. Regarding the preparation of a squeezed light, an achievable squeezing level of 15 dB has been reported~\cite{vahlbruch2016detection}, which is more than the squeezing level required for FTQC~\cite{fukui2018high,fukui2019high,noh2020fault,larsen2021fault,bourassa2021blueprint,tzitrin2021fault}. Regarding the cross-Kerr interaction, a small strength of the cross-Kerr interaction is enough to implement our scheme, e.g., $\theta \sim 10^{-5}$, which is achievable in experiments~\cite{feizpour2015observation}. In addition, the inverted sign of the cross-Kerr interaction could be realized by using a measurement-induced interaction or a rubidium-based cross-Kerr medium~\cite{feizpour2015observation}. Regarding the preparation of the ancilla Fock state, superpositions of Fock states demonstrated in an optical setup have so far been limited to photon numbers up to three~\cite{yukawa2013generating}. Fortunately, there are promising schemes to generate arbitrary coefficients of a superposition of Fock states in Refs.~\cite{su2019conversion,tzitrin2020progress}, and the photon-number-resolving detector, which plays a key role in Refs.~\cite{su2019conversion,tzitrin2020progress}, has been demonstrated experimentally~\cite{lita2008counting,fukuda2011titanium,endo2021quantum}. Additionally, in a realistic experimental setup, there are effects derived from photon loss during the preparation of the ancilla Fock state, imperfections of the cross-Kerr interaction, and the inaccuracy of the homodyne measurement. These effects may degrade the fidelity of the generated GKP qubit. We will investigate the effect of photon loss in future work, since it is beyond the scope of this paper. Nevertheless, our method will provide an efficient way to generate the GKP qubit required for FTQC, once appropriate technologies for the preparation of the ancilla Fock state and the cross-Kerr interaction become available in an experimentally feasible way.
We also mention the possibility of obtaining the cross-Kerr interaction by using decomposition techniques~\cite{sefi2011decompose,sefi2013measurement,kalajdzievski2019exact} and multi-mode nonlinear coupling~\cite{sefi2019deterministic}. With the decomposition technique, the quantum gate for an $n$th-order Hamiltonian with $n>3$ can be approximated by sequential gates generated by Hamiltonians of order $n'<n$. Thus, our proposed scheme could be implemented in an optical setup by replacing the cross-Kerr interaction with cubic phase gates and Gaussian operations. In an optical setup, the cubic phase gate can be implemented by the measurement-induced nonlinear interaction~\cite{gottesman2001encoding,filip2005measurement,sefi2013measurement,sabapathy2018states}, where the cubic phase state used as an ancilla in the nonlinear interaction has been demonstrated experimentally~\cite{yukawa2013generating}. For the multi-mode nonlinear coupling, Ref.~\cite{sefi2019deterministic} introduced the multi-mode gate for modes 1 and 2 composed of only $\hat{q}$ operators, such as $e^{i \hat{q}_1^n \hat{q}_2^{n'}}$ with $n+n'>3$. Although the cross-Kerr gate is composed of $\hat{q}$ and $\hat{p}$ operators, described as $e^{i( \hat{q}_1^2+\hat{p}_1^2)(\hat{q}_2^{2}+\hat{p}_2^{2})}$, we may obtain the cross-Kerr interaction by applying the multi-mode nonlinear coupling technique to the cross-Kerr gate. Lastly, it will be worth comparing our scheme with the breeding protocol based on phase estimation~\cite{terhal2016encoding,weigand2018generating}, which is a promising way to generate the GKP qubit. The difficulty of our scheme is the requirement of an ancilla superposition of Fock states and a Kerr medium, while the difficulty of the breeding protocol is the requirement of an ancilla squeezed cat state in an optical setup.
In terms of the fidelity and the success probability, our scheme could be comparable to the breeding protocol in an optical setup~\cite{weigand2018generating} when the ancilla Fock state for our method and the squeezed cat state for the breeding protocol can be prepared deterministically.
\acknowledgments This work was partly supported by JST [Moonshot R$\&$D][Grant No. JPMJMS2064], JST [Moonshot R$\&$D][Grant No. JPMJMS2061], JSPS KAKENHI [Grant No. 20K15187], UTokyo Foundation, and donations from Nichia Corporation. M.E. acknowledges support from the Research Foundation for Opto-Science and Technology.
\end{document} |